Exploration server on music in Saarland

Note: this site is under development!
Note: this site was generated automatically from raw corpora.
The information has therefore not been validated.

Internal identifier: 0000399 ( Pmc/Corpus ); previous: 0000398; next: 0000400



The document in XML format
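A record like the one below can be queried with Python's standard library. This is a minimal sketch only: the inline sample and the `nlm` namespace URI are assumptions for illustration (the record itself never declares the `nlm:` prefix, so it would need a declaration before parsing).

```python
import xml.etree.ElementTree as ET

# Hypothetical miniature of the record below; the nlm namespace URI is an
# assumption, since the original leaves the prefix undeclared.
SAMPLE = """<record xmlns:nlm="http://www.ncbi.nlm.nih.gov/nlm">
<titleStmt>
<author><name sortKey="Riegel, Monika">Monika Riegel</name>
<affiliation><nlm:aff id="Aff1">Nencki Institute</nlm:aff></affiliation></author>
<author><name sortKey="Marchewka, Artur">Artur Marchewka</name>
<affiliation><nlm:aff id="Aff1">Nencki Institute</nlm:aff></affiliation></author>
</titleStmt>
</record>"""

def author_names(xml_text):
    """Return the display names of all <author>/<name> elements."""
    root = ET.fromstring(xml_text)
    return [name.text for name in root.iter("name")]

print(author_names(SAMPLE))  # ['Monika Riegel', 'Artur Marchewka']
```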

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">Characterization of the Nencki Affective Picture System by discrete emotional categories (NAPS BE)</title>
<author>
<name sortKey="Riegel, Monika" sort="Riegel, Monika" uniqKey="Riegel M" first="Monika" last="Riegel">Monika Riegel</name>
<affiliation>
<nlm:aff id="Aff1">Laboratory of Brain Imaging, Neurobiology Centre, Nencki Institute of Experimental Biology, 3, Pasteur Street, 02-093 Warsaw, Poland</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Zurawski, Lukasz" sort="Zurawski, Lukasz" uniqKey="Zurawski L" first="Łukasz" last="Żurawski">Łukasz Żurawski</name>
<affiliation>
<nlm:aff id="Aff2">Laboratory of Psychophysiology, Department of Neurophysiology, Nencki Institute of Experimental Biology, Warsaw, Poland</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Wierzba, Malgorzata" sort="Wierzba, Malgorzata" uniqKey="Wierzba M" first="Małgorzata" last="Wierzba">Małgorzata Wierzba</name>
<affiliation>
<nlm:aff id="Aff1">Laboratory of Brain Imaging, Neurobiology Centre, Nencki Institute of Experimental Biology, 3, Pasteur Street, 02-093 Warsaw, Poland</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Moslehi, Abnoss" sort="Moslehi, Abnoss" uniqKey="Moslehi A" first="Abnoss" last="Moslehi">Abnoss Moslehi</name>
<affiliation>
<nlm:aff id="Aff1">Laboratory of Brain Imaging, Neurobiology Centre, Nencki Institute of Experimental Biology, 3, Pasteur Street, 02-093 Warsaw, Poland</nlm:aff>
</affiliation>
<affiliation>
<nlm:aff id="Aff4">Faculty of Psychology, University of Warsaw, Warsaw, Poland</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Klocek, Lukasz" sort="Klocek, Lukasz" uniqKey="Klocek L" first="Łukasz" last="Klocek">Łukasz Klocek</name>
<affiliation>
<nlm:aff id="Aff1">Laboratory of Brain Imaging, Neurobiology Centre, Nencki Institute of Experimental Biology, 3, Pasteur Street, 02-093 Warsaw, Poland</nlm:aff>
</affiliation>
<affiliation>
<nlm:aff id="Aff4">Faculty of Psychology, University of Warsaw, Warsaw, Poland</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Horvat, Marko" sort="Horvat, Marko" uniqKey="Horvat M" first="Marko" last="Horvat">Marko Horvat</name>
<affiliation>
<nlm:aff id="Aff5">Department of Computer Science and Information Technology, University of Applied Sciences, Zagreb, Croatia</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Grabowska, Anna" sort="Grabowska, Anna" uniqKey="Grabowska A" first="Anna" last="Grabowska">Anna Grabowska</name>
<affiliation>
<nlm:aff id="Aff2">Laboratory of Psychophysiology, Department of Neurophysiology, Nencki Institute of Experimental Biology, Warsaw, Poland</nlm:aff>
</affiliation>
<affiliation>
<nlm:aff id="Aff3">University of Social Sciences and Humanities, Warsaw, Poland</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Michalowski, Jaroslaw" sort="Michalowski, Jaroslaw" uniqKey="Michalowski J" first="Jarosław" last="Michałowski">Jarosław Michałowski</name>
<affiliation>
<nlm:aff id="Aff4">Faculty of Psychology, University of Warsaw, Warsaw, Poland</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Jednorog, Katarzyna" sort="Jednorog, Katarzyna" uniqKey="Jednorog K" first="Katarzyna" last="Jednoróg">Katarzyna Jednoróg</name>
<affiliation>
<nlm:aff id="Aff2">Laboratory of Psychophysiology, Department of Neurophysiology, Nencki Institute of Experimental Biology, Warsaw, Poland</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Marchewka, Artur" sort="Marchewka, Artur" uniqKey="Marchewka A" first="Artur" last="Marchewka">Artur Marchewka</name>
<affiliation>
<nlm:aff id="Aff1">Laboratory of Brain Imaging, Neurobiology Centre, Nencki Institute of Experimental Biology, 3, Pasteur Street, 02-093 Warsaw, Poland</nlm:aff>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PMC</idno>
<idno type="pmid">26205422</idno>
<idno type="pmc">4891391</idno>
<idno type="url">http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4891391</idno>
<idno type="RBID">PMC:4891391</idno>
<idno type="doi">10.3758/s13428-015-0620-1</idno>
<date when="2015">2015</date>
<idno type="wicri:Area/Pmc/Corpus">000039</idno>
<idno type="wicri:explorRef" wicri:stream="Pmc" wicri:step="Corpus" wicri:corpus="PMC">000039</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a" type="main">Characterization of the Nencki Affective Picture System by discrete emotional categories (NAPS BE)</title>
<author>
<name sortKey="Riegel, Monika" sort="Riegel, Monika" uniqKey="Riegel M" first="Monika" last="Riegel">Monika Riegel</name>
<affiliation>
<nlm:aff id="Aff1">Laboratory of Brain Imaging, Neurobiology Centre, Nencki Institute of Experimental Biology, 3, Pasteur Street, 02-093 Warsaw, Poland</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Zurawski, Lukasz" sort="Zurawski, Lukasz" uniqKey="Zurawski L" first="Łukasz" last="Żurawski">Łukasz Żurawski</name>
<affiliation>
<nlm:aff id="Aff2">Laboratory of Psychophysiology, Department of Neurophysiology, Nencki Institute of Experimental Biology, Warsaw, Poland</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Wierzba, Malgorzata" sort="Wierzba, Malgorzata" uniqKey="Wierzba M" first="Małgorzata" last="Wierzba">Małgorzata Wierzba</name>
<affiliation>
<nlm:aff id="Aff1">Laboratory of Brain Imaging, Neurobiology Centre, Nencki Institute of Experimental Biology, 3, Pasteur Street, 02-093 Warsaw, Poland</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Moslehi, Abnoss" sort="Moslehi, Abnoss" uniqKey="Moslehi A" first="Abnoss" last="Moslehi">Abnoss Moslehi</name>
<affiliation>
<nlm:aff id="Aff1">Laboratory of Brain Imaging, Neurobiology Centre, Nencki Institute of Experimental Biology, 3, Pasteur Street, 02-093 Warsaw, Poland</nlm:aff>
</affiliation>
<affiliation>
<nlm:aff id="Aff4">Faculty of Psychology, University of Warsaw, Warsaw, Poland</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Klocek, Lukasz" sort="Klocek, Lukasz" uniqKey="Klocek L" first="Łukasz" last="Klocek">Łukasz Klocek</name>
<affiliation>
<nlm:aff id="Aff1">Laboratory of Brain Imaging, Neurobiology Centre, Nencki Institute of Experimental Biology, 3, Pasteur Street, 02-093 Warsaw, Poland</nlm:aff>
</affiliation>
<affiliation>
<nlm:aff id="Aff4">Faculty of Psychology, University of Warsaw, Warsaw, Poland</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Horvat, Marko" sort="Horvat, Marko" uniqKey="Horvat M" first="Marko" last="Horvat">Marko Horvat</name>
<affiliation>
<nlm:aff id="Aff5">Department of Computer Science and Information Technology, University of Applied Sciences, Zagreb, Croatia</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Grabowska, Anna" sort="Grabowska, Anna" uniqKey="Grabowska A" first="Anna" last="Grabowska">Anna Grabowska</name>
<affiliation>
<nlm:aff id="Aff2">Laboratory of Psychophysiology, Department of Neurophysiology, Nencki Institute of Experimental Biology, Warsaw, Poland</nlm:aff>
</affiliation>
<affiliation>
<nlm:aff id="Aff3">University of Social Sciences and Humanities, Warsaw, Poland</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Michalowski, Jaroslaw" sort="Michalowski, Jaroslaw" uniqKey="Michalowski J" first="Jarosław" last="Michałowski">Jarosław Michałowski</name>
<affiliation>
<nlm:aff id="Aff4">Faculty of Psychology, University of Warsaw, Warsaw, Poland</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Jednorog, Katarzyna" sort="Jednorog, Katarzyna" uniqKey="Jednorog K" first="Katarzyna" last="Jednoróg">Katarzyna Jednoróg</name>
<affiliation>
<nlm:aff id="Aff2">Laboratory of Psychophysiology, Department of Neurophysiology, Nencki Institute of Experimental Biology, Warsaw, Poland</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Marchewka, Artur" sort="Marchewka, Artur" uniqKey="Marchewka A" first="Artur" last="Marchewka">Artur Marchewka</name>
<affiliation>
<nlm:aff id="Aff1">Laboratory of Brain Imaging, Neurobiology Centre, Nencki Institute of Experimental Biology, 3, Pasteur Street, 02-093 Warsaw, Poland</nlm:aff>
</affiliation>
</author>
</analytic>
<series>
<title level="j">Behavior Research Methods</title>
<idno type="ISSN">1554-351X</idno>
<idno type="eISSN">1554-3528</idno>
<imprint>
<date when="2015">2015</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass></textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">
<p>The Nencki Affective Picture System (NAPS; Marchewka, Żurawski, Jednoróg, & Grabowska,
<italic>Behavior Research Methods</italic>
, 2014) is a standardized set of 1,356 realistic, high-quality photographs divided into five categories (people, faces, animals, objects, and landscapes). NAPS has been primarily standardized along the affective dimensions of valence, arousal, and approach–avoidance, yet the characteristics of discrete emotions expressed by the images have not been investigated thus far. The aim of the present study was to collect normative ratings according to categorical models of emotions. A subset of 510 images from the original NAPS set was selected in order to proportionally cover the whole dimensional affective space. Among these, using three available classification methods, we identified images eliciting distinguishable discrete emotions. We introduce the basic-emotion normative ratings for the Nencki Affective Picture System (NAPS BE), which will allow researchers to control and manipulate stimulus properties specifically for their experimental questions of interest. The NAPS BE system is freely accessible to the scientific community for noncommercial use as supplementary materials to this article.</p>
<sec>
<title>Electronic supplementary material</title>
<p>The online version of this article (doi:10.3758/s13428-015-0620-1) contains supplementary material, which is available to authorized users.</p>
</sec>
</div>
</front>
<back>
<div1 type="bibliography">
<listBibl>
<biblStruct>
<analytic>
<author>
<name sortKey="Abramson, Ly" uniqKey="Abramson L">LY Abramson</name>
</author>
<author>
<name sortKey="Metalsky, Gi" uniqKey="Metalsky G">GI Metalsky</name>
</author>
<author>
<name sortKey="Alloy, Lb" uniqKey="Alloy L">LB Alloy</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Barrett, Lf" uniqKey="Barrett L">LF Barrett</name>
</author>
<author>
<name sortKey="Bliss Moreau, E" uniqKey="Bliss Moreau E">E Bliss-Moreau</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bayer, M" uniqKey="Bayer M">M Bayer</name>
</author>
<author>
<name sortKey="Sommer, W" uniqKey="Sommer W">W Sommer</name>
</author>
<author>
<name sortKey="Schacht, A" uniqKey="Schacht A">A Schacht</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bradley, Mm" uniqKey="Bradley M">MM Bradley</name>
</author>
<author>
<name sortKey="Codispoti, M" uniqKey="Codispoti M">M Codispoti</name>
</author>
<author>
<name sortKey="Cuthbert, Bn" uniqKey="Cuthbert B">BN Cuthbert</name>
</author>
<author>
<name sortKey="Lang, Pj" uniqKey="Lang P">PJ Lang</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bradley, Mm" uniqKey="Bradley M">MM Bradley</name>
</author>
<author>
<name sortKey="Codispoti, M" uniqKey="Codispoti M">M Codispoti</name>
</author>
<author>
<name sortKey="Sabatinelli, D" uniqKey="Sabatinelli D">D Sabatinelli</name>
</author>
<author>
<name sortKey="Lang, Pj" uniqKey="Lang P">PJ Lang</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bradley, Mm" uniqKey="Bradley M">MM Bradley</name>
</author>
<author>
<name sortKey="Lang, Pj" uniqKey="Lang P">PJ Lang</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bradley, Mm" uniqKey="Bradley M">MM Bradley</name>
</author>
<author>
<name sortKey="Lang, Pj" uniqKey="Lang P">PJ Lang</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Briesemeister, Bb" uniqKey="Briesemeister B">BB Briesemeister</name>
</author>
<author>
<name sortKey="Kuchinke, L" uniqKey="Kuchinke L">L Kuchinke</name>
</author>
<author>
<name sortKey="Jacobs, Am" uniqKey="Jacobs A">AM Jacobs</name>
</author>
<author>
<name sortKey="Braun, M" uniqKey="Braun M">M Braun</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Briesemeister, Bb" uniqKey="Briesemeister B">BB Briesemeister</name>
</author>
<author>
<name sortKey="Kuchinke, L" uniqKey="Kuchinke L">L Kuchinke</name>
</author>
<author>
<name sortKey="Jacobs, Am" uniqKey="Jacobs A">AM Jacobs</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Briesemeister, Bb" uniqKey="Briesemeister B">BB Briesemeister</name>
</author>
<author>
<name sortKey="Kuchinke, L" uniqKey="Kuchinke L">L Kuchinke</name>
</author>
<author>
<name sortKey="Jacobs, Am" uniqKey="Jacobs A">AM Jacobs</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Briesemeister, Bb" uniqKey="Briesemeister B">BB Briesemeister</name>
</author>
<author>
<name sortKey="Kuchinke, L" uniqKey="Kuchinke L">L Kuchinke</name>
</author>
<author>
<name sortKey="Jacobs, Am" uniqKey="Jacobs A">AM Jacobs</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Caldwell Harris, Cl" uniqKey="Caldwell Harris C">CL Caldwell-Harris</name>
</author>
<author>
<name sortKey="Aycicegi Dinn, A" uniqKey="Aycicegi Dinn A">A Ayçiçeǧi-Dinn</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Chapman, Ha" uniqKey="Chapman H">HA Chapman</name>
</author>
<author>
<name sortKey="Johannes, K" uniqKey="Johannes K">K Johannes</name>
</author>
<author>
<name sortKey="Poppenk, Jl" uniqKey="Poppenk J">JL Poppenk</name>
</author>
<author>
<name sortKey="Moscovitch, M" uniqKey="Moscovitch M">M Moscovitch</name>
</author>
<author>
<name sortKey="Anderson, Ak" uniqKey="Anderson A">AK Anderson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Colden, A" uniqKey="Colden A">A Colden</name>
</author>
<author>
<name sortKey="Bruder, M" uniqKey="Bruder M">M Bruder</name>
</author>
<author>
<name sortKey="Manstead, Asr" uniqKey="Manstead A">ASR Manstead</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Costa, T" uniqKey="Costa T">T Costa</name>
</author>
<author>
<name sortKey="Cauda, F" uniqKey="Cauda F">F Cauda</name>
</author>
<author>
<name sortKey="Crini, M" uniqKey="Crini M">M Crini</name>
</author>
<author>
<name sortKey="Tatu, M K" uniqKey="Tatu M">M-K Tatu</name>
</author>
<author>
<name sortKey="Celeghin, A" uniqKey="Celeghin A">A Celeghin</name>
</author>
<author>
<name sortKey="De Gelder, B" uniqKey="De Gelder B">B de Gelder</name>
</author>
<author>
<name sortKey="Tamietto, M" uniqKey="Tamietto M">M Tamietto</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Croucher, Cj" uniqKey="Croucher C">CJ Croucher</name>
</author>
<author>
<name sortKey="Calder, Aj" uniqKey="Calder A">AJ Calder</name>
</author>
<author>
<name sortKey="Ramponi, C" uniqKey="Ramponi C">C Ramponi</name>
</author>
<author>
<name sortKey="Barnard, Pj" uniqKey="Barnard P">PJ Barnard</name>
</author>
<author>
<name sortKey="Murphy, Fc" uniqKey="Murphy F">FC Murphy</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Darwin, C" uniqKey="Darwin C">C Darwin</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Delaveau, P" uniqKey="Delaveau P">P Delaveau</name>
</author>
<author>
<name sortKey="Jabourian, M" uniqKey="Jabourian M">M Jabourian</name>
</author>
<author>
<name sortKey="Lemogne, C" uniqKey="Lemogne C">C Lemogne</name>
</author>
<author>
<name sortKey="Guionnet, S" uniqKey="Guionnet S">S Guionnet</name>
</author>
<author>
<name sortKey="Bergouignan, L" uniqKey="Bergouignan L">L Bergouignan</name>
</author>
<author>
<name sortKey="Fossati, P" uniqKey="Fossati P">P Fossati</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Eerola, T" uniqKey="Eerola T">T Eerola</name>
</author>
<author>
<name sortKey="Vuoskoski, Jk" uniqKey="Vuoskoski J">JK Vuoskoski</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ekman, P" uniqKey="Ekman P">P Ekman</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ekman, P" uniqKey="Ekman P">P Ekman</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ekman, P" uniqKey="Ekman P">P Ekman</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ferre, P" uniqKey="Ferre P">P Ferré</name>
</author>
<author>
<name sortKey="Guasch, M" uniqKey="Guasch M">M Guasch</name>
</author>
<author>
<name sortKey="Moldovan, C" uniqKey="Moldovan C">C Moldovan</name>
</author>
<author>
<name sortKey="Sanchez Casas, R" uniqKey="Sanchez Casas R">R Sánchez-Casas</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Flom, R" uniqKey="Flom R">R Flom</name>
</author>
<author>
<name sortKey="Janis, Rb" uniqKey="Janis R">RB Janis</name>
</author>
<author>
<name sortKey="Garcia, Dj" uniqKey="Garcia D">DJ Garcia</name>
</author>
<author>
<name sortKey="Kirwan, Cb" uniqKey="Kirwan C">CB Kirwan</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Fontaine, Jrj" uniqKey="Fontaine J">JRJ Fontaine</name>
</author>
<author>
<name sortKey="Scherer, Kr" uniqKey="Scherer K">KR Scherer</name>
</author>
<author>
<name sortKey="Roesch, Eb" uniqKey="Roesch E">EB Roesch</name>
</author>
<author>
<name sortKey="Ellsworth, Pc" uniqKey="Ellsworth P">PC Ellsworth</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gerrards Hesse, A" uniqKey="Gerrards Hesse A">A Gerrards-Hesse</name>
</author>
<author>
<name sortKey="Spies, K" uniqKey="Spies K">K Spies</name>
</author>
<author>
<name sortKey="Hesse, Fw" uniqKey="Hesse F">FW Hesse</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hamann, S" uniqKey="Hamann S">S Hamann</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hinojosa, Ja" uniqKey="Hinojosa J">JA Hinojosa</name>
</author>
<author>
<name sortKey="Martinez Garcia, N" uniqKey="Martinez Garcia N">N Martínez-García</name>
</author>
<author>
<name sortKey="Villalba Garcia, C" uniqKey="Villalba Garcia C">C Villalba-García</name>
</author>
<author>
<name sortKey="Fernandez Folgueiras, U" uniqKey="Fernandez Folgueiras U">U Fernández-Folgueiras</name>
</author>
<author>
<name sortKey="Sanchez Carmona, A" uniqKey="Sanchez Carmona A">A Sánchez-Carmona</name>
</author>
<author>
<name sortKey="Pozo, Ma" uniqKey="Pozo M">MA Pozo</name>
</author>
<author>
<name sortKey="Montoro, Pr" uniqKey="Montoro P">PR Montoro</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hu, S" uniqKey="Hu S">S Hu</name>
</author>
<author>
<name sortKey="Wan, H" uniqKey="Wan H">H Wan</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Imbir, Kk" uniqKey="Imbir K">KK Imbir</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Izard, Ce" uniqKey="Izard C">CE Izard</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Javela, Jj" uniqKey="Javela J">JJ Javela</name>
</author>
<author>
<name sortKey="Mercadillo, Re" uniqKey="Mercadillo R">RE Mercadillo</name>
</author>
<author>
<name sortKey="Martin Ramirez, J" uniqKey="Martin Ramirez J">J Martín Ramírez</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kassam, Ks" uniqKey="Kassam K">KS Kassam</name>
</author>
<author>
<name sortKey="Markey, Ar" uniqKey="Markey A">AR Markey</name>
</author>
<author>
<name sortKey="Cherkassky, Vl" uniqKey="Cherkassky V">VL Cherkassky</name>
</author>
<author>
<name sortKey="Loewenstein, G" uniqKey="Loewenstein G">G Loewenstein</name>
</author>
<author>
<name sortKey="Just, Ma" uniqKey="Just M">MA Just</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kissler, J" uniqKey="Kissler J">J Kissler</name>
</author>
<author>
<name sortKey="Herbert, C" uniqKey="Herbert C">C Herbert</name>
</author>
<author>
<name sortKey="Peyk, P" uniqKey="Peyk P">P Peyk</name>
</author>
<author>
<name sortKey="Junghofer, M" uniqKey="Junghofer M">M Junghofer</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lang, Pj" uniqKey="Lang P">PJ Lang</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lang, Pj" uniqKey="Lang P">PJ Lang</name>
</author>
<author>
<name sortKey="Bradley, Mm" uniqKey="Bradley M">MM Bradley</name>
</author>
<author>
<name sortKey="Cuthbert, Bn" uniqKey="Cuthbert B">BN Cuthbert</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lindquist, Ka" uniqKey="Lindquist K">KA Lindquist</name>
</author>
<author>
<name sortKey="Wager, Td" uniqKey="Wager T">TD Wager</name>
</author>
<author>
<name sortKey="Kober, H" uniqKey="Kober H">H Kober</name>
</author>
<author>
<name sortKey="Bliss Moreau, E" uniqKey="Bliss Moreau E">E Bliss-Moreau</name>
</author>
<author>
<name sortKey="Barrett, Lf" uniqKey="Barrett L">LF Barrett</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Majid, A" uniqKey="Majid A">A Majid</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Marchewka, A" uniqKey="Marchewka A">A Marchewka</name>
</author>
<author>
<name sortKey="Zurawski, L" uniqKey="Zurawski L">Ł Żurawski</name>
</author>
<author>
<name sortKey="Jednorog, K" uniqKey="Jednorog K">K Jednoróg</name>
</author>
<author>
<name sortKey="Grabowska, A" uniqKey="Grabowska A">A Grabowska</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mauss, Ib" uniqKey="Mauss I">IB Mauss</name>
</author>
<author>
<name sortKey="Robinson, Md" uniqKey="Robinson M">MD Robinson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mikels, Ja" uniqKey="Mikels J">JA Mikels</name>
</author>
<author>
<name sortKey="Fredrickson, Bl" uniqKey="Fredrickson B">BL Fredrickson</name>
</author>
<author>
<name sortKey="Larkin, Gr" uniqKey="Larkin G">GR Larkin</name>
</author>
<author>
<name sortKey="Lindberg, Cm" uniqKey="Lindberg C">CM Lindberg</name>
</author>
<author>
<name sortKey="Maglio, Sj" uniqKey="Maglio S">SJ Maglio</name>
</author>
<author>
<name sortKey="Reuter Lorenz, Pa" uniqKey="Reuter Lorenz P">PA Reuter-Lorenz</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Monnier, C" uniqKey="Monnier C">C Monnier</name>
</author>
<author>
<name sortKey="Syssau, A" uniqKey="Syssau A">A Syssau</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Montefinese, M" uniqKey="Montefinese M">M Montefinese</name>
</author>
<author>
<name sortKey="Ambrosini, E" uniqKey="Ambrosini E">E Ambrosini</name>
</author>
<author>
<name sortKey="Fairfield, B" uniqKey="Fairfield B">B Fairfield</name>
</author>
<author>
<name sortKey="Mammarella, N" uniqKey="Mammarella N">N Mammarella</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Myers, Rh" uniqKey="Myers R">RH Myers</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ortony, A" uniqKey="Ortony A">A Ortony</name>
</author>
<author>
<name sortKey="Turner, Tj" uniqKey="Turner T">TJ Turner</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Osgood, Ce" uniqKey="Osgood C">CE Osgood</name>
</author>
<author>
<name sortKey="Suci, Gj" uniqKey="Suci G">GJ Suci</name>
</author>
<author>
<name sortKey="Tannenbaum, Ph" uniqKey="Tannenbaum P">PH Tannenbaum</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Panksepp, J" uniqKey="Panksepp J">J Panksepp</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Reisenzein, R" uniqKey="Reisenzein R">R Reisenzein</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Remmington, Na" uniqKey="Remmington N">NA Remmington</name>
</author>
<author>
<name sortKey="Fabrigar, Lr" uniqKey="Fabrigar L">LR Fabrigar</name>
</author>
<author>
<name sortKey="Visser, Ps" uniqKey="Visser P">PS Visser</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ric, F" uniqKey="Ric F">F Ric</name>
</author>
<author>
<name sortKey="Alexopoulos, T" uniqKey="Alexopoulos T">T Alexopoulos</name>
</author>
<author>
<name sortKey="Muller, D" uniqKey="Muller D">D Muller</name>
</author>
<author>
<name sortKey="Aube, B" uniqKey="Aube B">B Aubé</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Russell, Ja" uniqKey="Russell J">JA Russell</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Scherer, Kr" uniqKey="Scherer K">KR Scherer</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Schienle, A" uniqKey="Schienle A">A Schienle</name>
</author>
<author>
<name sortKey="Wabnegger, A" uniqKey="Wabnegger A">A Wabnegger</name>
</author>
<author>
<name sortKey="Schoengassner, F" uniqKey="Schoengassner F">F Schoengassner</name>
</author>
<author>
<name sortKey="Scharmuller, W" uniqKey="Scharmuller W">W Scharmüller</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Silva, C" uniqKey="Silva C">C Silva</name>
</author>
<author>
<name sortKey="Montant, M" uniqKey="Montant M">M Montant</name>
</author>
<author>
<name sortKey="Ponz, A" uniqKey="Ponz A">A Ponz</name>
</author>
<author>
<name sortKey="Ziegler, Jc" uniqKey="Ziegler J">JC Ziegler</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Smith, Ca" uniqKey="Smith C">CA Smith</name>
</author>
<author>
<name sortKey="Lazarus, Rs" uniqKey="Lazarus R">RS Lazarus</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Stanley, Dj" uniqKey="Stanley D">DJ Stanley</name>
</author>
<author>
<name sortKey="Meyer, Jp" uniqKey="Meyer J">JP Meyer</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Stevenson, Ra" uniqKey="Stevenson R">RA Stevenson</name>
</author>
<author>
<name sortKey="James, Tw" uniqKey="James T">TW James</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Stevenson, Ra" uniqKey="Stevenson R">RA Stevenson</name>
</author>
<author>
<name sortKey="Mikels, Ja" uniqKey="Mikels J">JA Mikels</name>
</author>
<author>
<name sortKey="James, Tw" uniqKey="James T">TW James</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Tettamanti, M" uniqKey="Tettamanti M">M Tettamanti</name>
</author>
<author>
<name sortKey="Rognoni, E" uniqKey="Rognoni E">E Rognoni</name>
</author>
<author>
<name sortKey="Cafiero, R" uniqKey="Cafiero R">R Cafiero</name>
</author>
<author>
<name sortKey="Costa, T" uniqKey="Costa T">T Costa</name>
</author>
<author>
<name sortKey="Galati, D" uniqKey="Galati D">D Galati</name>
</author>
<author>
<name sortKey="Perani, D" uniqKey="Perani D">D Perani</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Van Hooff, Jc" uniqKey="Van Hooff J">JC Van Hooff</name>
</author>
<author>
<name sortKey="Van Buuringen, M" uniqKey="Van Buuringen M">M van Buuringen</name>
</author>
<author>
<name sortKey="El Mrabet, I" uniqKey="El Mrabet I">I El M’rabet</name>
</author>
<author>
<name sortKey="De Gier, M" uniqKey="De Gier M">M de Gier</name>
</author>
<author>
<name sortKey="Van Zalingen, L" uniqKey="Van Zalingen L">L van Zalingen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Viinikainen, M" uniqKey="Viinikainen M">M Viinikainen</name>
</author>
<author>
<name sortKey="Jaaskelainen, Ip" uniqKey="Jaaskelainen I">IP Jääskeläinen</name>
</author>
<author>
<name sortKey="Alexandrov, Y" uniqKey="Alexandrov Y">Y Alexandrov</name>
</author>
<author>
<name sortKey="Balk, Mh" uniqKey="Balk M">MH Balk</name>
</author>
<author>
<name sortKey="Autti, T" uniqKey="Autti T">T Autti</name>
</author>
<author>
<name sortKey="Sams, M" uniqKey="Sams M">M Sams</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Vytal, K" uniqKey="Vytal K">K Vytal</name>
</author>
<author>
<name sortKey="Hamann, S" uniqKey="Hamann S">S Hamann</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Wilson Mendenhall, Cd" uniqKey="Wilson Mendenhall C">CD Wilson-Mendenhall</name>
</author>
<author>
<name sortKey="Barrett, Lf" uniqKey="Barrett L">LF Barrett</name>
</author>
<author>
<name sortKey="Barsalou, Lw" uniqKey="Barsalou L">LW Barsalou</name>
</author>
</analytic>
</biblStruct>
</listBibl>
</div1>
</back>
</TEI>
<pmc article-type="research-article">
<pmc-dir>properties open_access</pmc-dir>
<front>
<journal-meta>
<journal-id journal-id-type="nlm-ta">Behav Res Methods</journal-id>
<journal-id journal-id-type="iso-abbrev">Behav Res Methods</journal-id>
<journal-title-group>
<journal-title>Behavior Research Methods</journal-title>
</journal-title-group>
<issn pub-type="ppub">1554-351X</issn>
<issn pub-type="epub">1554-3528</issn>
<publisher>
<publisher-name>Springer US</publisher-name>
<publisher-loc>New York</publisher-loc>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="pmid">26205422</article-id>
<article-id pub-id-type="pmc">4891391</article-id>
<article-id pub-id-type="publisher-id">620</article-id>
<article-id pub-id-type="doi">10.3758/s13428-015-0620-1</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Article</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>Characterization of the Nencki Affective Picture System by discrete emotional categories (NAPS BE)</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" corresp="yes">
<name>
<surname>Riegel</surname>
<given-names>Monika</given-names>
</name>
<address>
<email>m.riegel@nencki.gov.pl</email>
<email>a.marchewka@nencki.gov.pl</email>
</address>
<xref ref-type="aff" rid="Aff1"></xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Żurawski</surname>
<given-names>Łukasz</given-names>
</name>
<xref ref-type="aff" rid="Aff2"></xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Wierzba</surname>
<given-names>Małgorzata</given-names>
</name>
<xref ref-type="aff" rid="Aff1"></xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Moslehi</surname>
<given-names>Abnoss</given-names>
</name>
<xref ref-type="aff" rid="Aff1"></xref>
<xref ref-type="aff" rid="Aff4"></xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Klocek</surname>
<given-names>Łukasz</given-names>
</name>
<xref ref-type="aff" rid="Aff1"></xref>
<xref ref-type="aff" rid="Aff4"></xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Horvat</surname>
<given-names>Marko</given-names>
</name>
<xref ref-type="aff" rid="Aff5"></xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Grabowska</surname>
<given-names>Anna</given-names>
</name>
<xref ref-type="aff" rid="Aff2"></xref>
<xref ref-type="aff" rid="Aff3"></xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Michałowski</surname>
<given-names>Jarosław</given-names>
</name>
<xref ref-type="aff" rid="Aff4"></xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Jednoróg</surname>
<given-names>Katarzyna</given-names>
</name>
<xref ref-type="aff" rid="Aff2"></xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Marchewka</surname>
<given-names>Artur</given-names>
</name>
<xref ref-type="aff" rid="Aff1"></xref>
</contrib>
<aff id="Aff1">
<label></label>
Laboratory of Brain Imaging, Neurobiology Centre, Nencki Institute of Experimental Biology, 3, Pasteur Street, 02-093 Warsaw, Poland</aff>
<aff id="Aff2">
<label></label>
Laboratory of Psychophysiology, Department of Neurophysiology, Nencki Institute of Experimental Biology, Warsaw, Poland</aff>
<aff id="Aff3">
<label></label>
University of Social Sciences and Humanities, Warsaw, Poland</aff>
<aff id="Aff4">
<label></label>
Faculty of Psychology, University of Warsaw, Warsaw, Poland</aff>
<aff id="Aff5">
<label></label>
Department of Computer Science and Information Technology, University of Applied Sciences, Zagreb, Croatia</aff>
</contrib-group>
<pub-date pub-type="epub">
<day>24</day>
<month>7</month>
<year>2015</year>
</pub-date>
<pub-date pub-type="pmc-release">
<day>24</day>
<month>7</month>
<year>2015</year>
</pub-date>
<pub-date pub-type="ppub">
<year>2016</year>
</pub-date>
<volume>48</volume>
<fpage>600</fpage>
<lpage>612</lpage>
<permissions>
<copyright-statement>© The Author(s) 2015</copyright-statement>
<license license-type="OpenAccess">
<license-p>
<bold>Open Access</bold>
This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.</license-p>
</license>
</permissions>
<abstract id="Abs1">
<p>The Nencki Affective Picture System (NAPS; Marchewka, Żurawski, Jednoróg, & Grabowska,
<italic>Behavior Research Methods</italic>
, 2014) is a standardized set of 1,356 realistic, high-quality photographs divided into five categories (people, faces, animals, objects, and landscapes). NAPS has been primarily standardized along the affective dimensions of valence, arousal, and approach–avoidance, yet the characteristics of discrete emotions expressed by the images have not been investigated thus far. The aim of the present study was to collect normative ratings according to categorical models of emotions. A subset of 510 images from the original NAPS set was selected in order to proportionally cover the whole dimensional affective space. Among these, using three available classification methods, we identified images eliciting distinguishable discrete emotions. We introduce the basic-emotion normative ratings for the Nencki Affective Picture System (NAPS BE), which will allow researchers to control and manipulate stimulus properties specifically for their experimental questions of interest. The NAPS BE system is freely accessible to the scientific community for noncommercial use as supplementary materials to this article.</p>
<sec>
<title>Electronic supplementary material</title>
<p>The online version of this article (doi:10.3758/s13428-015-0620-1) contains supplementary material, which is available to authorized users.</p>
</sec>
</abstract>
<kwd-group xml:lang="en">
<title>Keywords</title>
<kwd>Basic emotion</kwd>
<kwd>Visual stimuli</kwd>
<kwd>Affective ratings</kwd>
<kwd>Valence</kwd>
<kwd>Arousal</kwd>
<kwd>Happiness</kwd>
<kwd>Fear</kwd>
<kwd>Sadness</kwd>
<kwd>Surprise</kwd>
<kwd>Anger</kwd>
<kwd>Disgust</kwd>
<kwd>Nencki Affective Picture System</kwd>
</kwd-group>
<custom-meta-group>
<custom-meta>
<meta-name>issue-copyright-statement</meta-name>
<meta-value>© Psychonomic Society, Inc. 2016</meta-value>
</custom-meta>
</custom-meta-group>
</article-meta>
</front>
<body>
<p>Given that there is no single gold-standard method for the measurement of emotion, researchers are often faced with a need to select appropriate and controlled stimuli for inducing specific emotional states (Gerrards-Hesse, Spies, & Hesse,
<xref ref-type="bibr" rid="CR28">1994</xref>
; Mauss & Robinson,
<xref ref-type="bibr" rid="CR42">2009</xref>
; Scherer,
<xref ref-type="bibr" rid="CR57">2005</xref>
). The Nencki Affective Picture System (NAPS; Marchewka, Żurawski, Jednoróg, & Grabowska, 2014) is a set of 1,356 photographs divided into five content categories (people, faces, animals, objects, and landscapes). All of the photographs have been standardized on the basis of dimensional theories of emotions, according to which several fundamental dimensions can characterize each affective experience. In the case of the NAPS, these dimensions are valence (ranging from
<italic>highly negative</italic>
to
<italic>highly positive</italic>
), arousal (ranging from
<italic>relaxed</italic>
/
<italic>unaroused</italic>
to
<italic>excited</italic>
/
<italic>aroused</italic>
), and approach–avoidance (ranging from a
<italic>tendency to avoid</italic>
to a
<italic>tendency to approach</italic>
a stimulus) (Osgood, Suci, & Tannenbaum,
<xref ref-type="bibr" rid="CR50">1957</xref>
; Russell,
<xref ref-type="bibr" rid="CR55">2003</xref>
). Although the identity and number of dimensions have been debated (Fontaine, Scherer, Roesch, & Ellsworth,
<xref ref-type="bibr" rid="CR26">2007</xref>
; Stanley & Meyer,
<xref ref-type="bibr" rid="CR61">2009</xref>
), this approach has been successfully used in many studies and has provided much insight into affective experience (Bayer, Sommer, & Schacht,
<xref ref-type="bibr" rid="CR3">2010</xref>
; Briesemeister, Kuchinke, & Jacobs,
<xref ref-type="bibr" rid="CR11">2014</xref>
; Colibazzi et al.,
<xref ref-type="bibr" rid="CR15">2010</xref>
; Kassam, Markey, Cherkassky, Loewenstein, & Just,
<xref ref-type="bibr" rid="CR35">2013</xref>
; Viinikainen et al.,
<xref ref-type="bibr" rid="CR66">2010</xref>
).</p>
&lt;p&gt;Human emotions can alternatively be conceptualized in terms of discrete emotional states (Darwin,
<xref ref-type="bibr" rid="CR18">1872</xref>
; Ekman,
<xref ref-type="bibr" rid="CR21">1992</xref>
; Panksepp,
<xref ref-type="bibr" rid="CR51">1992</xref>
), and each emotion has unique experiential, physiological, and behavioral correlates. As was stated by Ekman (
<xref ref-type="bibr" rid="CR21">1992</xref>
) in his theory of basic emotions, “a number of separate emotions . . . differ one from another in important ways” (p. 170). In line with this theoretical framework, researchers argue that one- or two-dimensional representations fail to capture important aspects of the emotional experience and do not reflect critical differences between certain emotions (Remmington, Fabrigar, & Visser,
<xref ref-type="bibr" rid="CR53">2000</xref>
). Instead, at least five different discrete emotion categories are proposed to reflect facial or vocal expression, namely: happiness, sadness, anger, fear, and disgust. By using the term “basic emotions,” Ekman (
<xref ref-type="bibr" rid="CR21">1992</xref>
) wanted to indicate that “evolution played an important role in shaping both the unique and the common features which these emotions display as well as their current function” (p. 170). They are supposed to originate from biological markers, regardless of any cultural differences (Ekman,
<xref ref-type="bibr" rid="CR22">1993</xref>
). This categorical model of emotions has also provided numerous empirical insights (Briesemeister, Kuchinke, & Jacobs,
<xref ref-type="bibr" rid="CR9">2011a</xref>
; Mikels et al.,
<xref ref-type="bibr" rid="CR43">2005</xref>
; Silva, Montant, Ponz, & Ziegler,
<xref ref-type="bibr" rid="CR59">2012</xref>
; Stevenson, Mikels, & James,
<xref ref-type="bibr" rid="CR63">2007</xref>
; Tettamanti et al.,
<xref ref-type="bibr" rid="CR64">2012</xref>
; Vytal & Hamann,
<xref ref-type="bibr" rid="CR67">2010</xref>
).</p>
<p>A longstanding dispute concerning whether emotions are better conceptualized in terms of discrete categories or underlying dimensions has gained new insight from different methods in the domain of neuroimaging (see Briesemeister, Kuchinke, Jacobs, & Braun,
<xref ref-type="bibr" rid="CR8">2015</xref>
; Fusar-Poli et al.,
<xref ref-type="bibr" rid="CR27">2009</xref>
; Kassam et al.,
<xref ref-type="bibr" rid="CR35">2013</xref>
; Lindquist, Wager, Kober, Bliss-Moreau, & Barrett,
<xref ref-type="bibr" rid="CR39">2012</xref>
). Although some studies have identified consistent neural correlates associated with basic emotions and affective dimensions, these studies have ruled out simple one-to-one mappings between emotions and brain regions, pointing to the need for more complex, network-based representations of emotions (Hamann,
<xref ref-type="bibr" rid="CR29">2012</xref>
; Saarimaki et al.,
<xref ref-type="bibr" rid="CR56">2015</xref>
). Given that discrete-emotion and dimensional theories overlap substantially in their explanatory value (Reisenzein,
<xref ref-type="bibr" rid="CR52">1994</xref>
), further experimental investigations are needed using combined approaches (Briesemeister, Kuchinke, & Jacobs,
<xref ref-type="bibr" rid="CR11">2014</xref>
; Briesemeister et al.,
<xref ref-type="bibr" rid="CR8">2015</xref>
; Eerola & Vuoskoski,
<xref ref-type="bibr" rid="CR20">2011</xref>
; Hinojosa et al.,
<xref ref-type="bibr" rid="CR30">2015</xref>
). Therefore, providing appropriate pictorial stimuli that combine both perspectives would be of great use.&lt;/p&gt;
<p>To meet this need, many of the existing datasets of standardized stimuli in various modalities that were originally assessed in line with the dimensional approach have now received complementary ratings on the expressed emotion categories (Briesemeister, Kuchinke, & Jacobs,
<xref ref-type="bibr" rid="CR10">2011b</xref>
; Mikels et al.,
<xref ref-type="bibr" rid="CR43">2005</xref>
; Stevenson et al.,
<xref ref-type="bibr" rid="CR63">2007</xref>
; Stevenson & James,
<xref ref-type="bibr" rid="CR62">2008</xref>
). These complementary ratings have made it possible to investigate various topics in affective neuroscience, such as the temporal and spatial neural dynamics in the perception of basic emotions from complex scenes (Costa et al.,
<xref ref-type="bibr" rid="CR16">2014</xref>
), or the neural correlates of different attentional strategies during affective picture processing (Schienle, Wabnegger, Schoengassner, & Scharmüller,
<xref ref-type="bibr" rid="CR58">2014</xref>
). With such stimuli, different theories of emotion processing and their applicability to affective processing studies have also been examined (Briesemeister et al.,
<xref ref-type="bibr" rid="CR8">2015</xref>
). Moreover, basic emotion ratings have enabled researchers to select pictures in order to study the neural correlates of affective experience and therapeutic effects in different clinical populations (Delaveau et al.,
<xref ref-type="bibr" rid="CR19">2011</xref>
). In this way, a combination of the dimensional approach, useful to describe a number of broad features of emotion, and the categorical approach, focused on capturing discrete emotional responses, is supplying researchers with a more complete view of affect.</p>
&lt;p&gt;The aim of the present study was to provide researchers with reliable discrete-emotion norms for a subset of images selected from the Nencki Affective Picture System, characterized both by the intensities of basic emotions (happiness, anger, fear, disgust, sadness, and surprise) and by the affective dimensions of valence and arousal. Additionally, we analyze the obtained ratings to address the relationship between affective dimensions and basic emotions, as well as the relations between the affective variables and the content categories. This subset is hereafter referred to as NAPS BE.&lt;/p&gt;
<sec id="Sec1" sec-type="materials|methods">
<title>Method</title>
<sec id="Sec2" sec-type="materials|methods">
<title>Materials</title>
<p>A subset of 510 images was selected from the NAPS database in order to proportionally cover the dimensional affective space across the content categories of animals, faces, landscapes, objects, and people. The selection was driven by reports showing that in the International Affective Picture System (IAPS; Bradley & Lang,
<xref ref-type="bibr" rid="CR7">2007</xref>
), the distribution of stimuli across the valence and arousal dimensions is related to human versus inanimate picture content (Colden, Bruder, & Manstead,
<xref ref-type="bibr" rid="CR14">2008</xref>
). Specifically, pictures depicting humans were over-represented in the high arousal–positive and high arousal–negative areas of affective space, as compared to inanimate objects, which were especially frequent in the low arousal–neutral valence area. In order to avoid a similar pattern in our dataset, and to provide a variety of stimulus content for the basic-emotion classification, we chose and counterbalanced pictures from each content category covering the whole affective space. We also aimed to limit the number of neutral stimuli in each subset. In this way, we obtained the following numbers of images per category: 98 animals, 161 faces, 49 landscapes, 102 objects, and 100 people. The landscape category was the least numerous, since these pictures were predominantly unarousing and of neutral valence. The NAPS BE images, which proportionally cover the dimensional affective space of valence and arousal across the content categories of animals, faces, landscapes, objects, and people, are depicted in Fig. 
<xref rid="MOESM1" ref-type="media">S1</xref>
(supplementary materials).</p>
</sec>
<sec id="Sec3">
<title>Participants</title>
<p>A total of 124 healthy volunteers (67 females, 57 males; mean age = 22.95 years,
<italic>SD</italic>
= 3.76, range = 19 to 37) with no history of neurological illness or treatment with psychoactive drugs took part in the study. The participants were mainly Erasmus (European student exchange programme) students from various European countries, recruited at the University of Warsaw and the University of Zagreb. All of them were proficient speakers of English, and the procedure was conducted in English in order to obtain more universal norms. All of the participants received a financial reward of 30 PLN (approximately EUR 7).&lt;/p&gt;
</sec>
<sec id="Sec4">
<title>Procedure</title>
&lt;p&gt;Participants were first asked to fill in the informed consent form and to read the instructions displayed on the computer screen (see the Appendix); they then familiarized themselves with the task in a short training session with example stimuli. All of the participants were informed that if they felt any discomfort due to the content of the pictures, they should report it immediately so that the experimental session could be stopped. The instructions, rating scales, and communication with the participants were all in English. During the experiment, participants individually rated images through a platform available on a local server, seated at an average distance of 60 cm from the computer screen.&lt;/p&gt;
<p>Each participant was exposed to a series of 170 images chosen pseudorandomly from all of the categories and presented consecutively under the following constraints: No more than two pictures from each affective valence category (positive, neutral, and negative) and no more than three pictures from each content category appeared consecutively. In order to avoid serial position (primacy and recency) effects, each subset of 170 pictures was divided into three parts; these parts were positioned in three possible ways and were counterbalanced across the participants.</p>
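The sequencing constraints above (no more than two consecutive pictures of the same valence class, no more than three of the same content category) can be sketched as a rejection-sampling shuffle. This is a hypothetical reconstruction for illustration only; the authors' actual randomization code is not described in the article.

```python
import random

def is_valid_order(items, max_run_valence=2, max_run_content=3):
    """Check the sequencing constraints: items is a list of
    (valence_class, content_category) pairs."""
    def longest_run(seq):
        best = run = 1
        for prev, cur in zip(seq, seq[1:]):
            run = run + 1 if cur == prev else 1
            best = max(best, run)
        return best
    valences = [v for v, _ in items]
    contents = [c for _, c in items]
    return (longest_run(valences) <= max_run_valence
            and longest_run(contents) <= max_run_content)

def constrained_shuffle(items, seed=0, max_tries=10000):
    """Rejection sampling: reshuffle until the constraints hold."""
    rng = random.Random(seed)
    items = list(items)
    for _ in range(max_tries):
        rng.shuffle(items)
        if is_valid_order(items):
            return items
    raise RuntimeError("no valid order found")
```

With 170 pictures drawn from three valence classes and five content categories, valid orders are plentiful, so simple rejection sampling terminates quickly.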
<p>Single images were presented in a full-screen view for 2 s. Each presentation was followed by an exposure of the rating scales (for the assessment of the basic emotions and affective dimensions) on a new screen with a smaller picture presented in the upper part of the screen. The task of the participants was to evaluate each picture on the eight scales described below. Completing the task with no time constraints took approximately 45–60 min. The local Research Ethics Committee in Warsaw approved the experimental protocol of the study.</p>
</sec>
<sec id="Sec5">
<title>Rating scales</title>
<p>Analogously to some previously used procedures (Briesemeister et al.,
<xref ref-type="bibr" rid="CR10">2011b</xref>
; Mikels et al.,
<xref ref-type="bibr" rid="CR43">2005</xref>
; Stevenson et al.,
<xref ref-type="bibr" rid="CR63">2007</xref>
), participants were asked to use six independent 7-point Likert scales to indicate the intensity of the feelings of happiness, anger, fear, sadness, disgust, and surprise (with 1 indicating
<italic>not at all</italic>
and 7 indicating
<italic>very much</italic>
) elicited by each presented image, as is presented in Fig. 
<xref rid="Fig1" ref-type="fig">1</xref>
. This procedure allowed the participants to indicate multiple labels for a given image. Although surprise has been considered by some researchers to be a neutral cognitive state (Ortony & Turner,
<xref ref-type="bibr" rid="CR49">1990</xref>
) rather than an emotion, and therefore does not appear in certain classifications of the basic emotions (Ekman,
<xref ref-type="bibr" rid="CR23">1999</xref>
; Izard,
<xref ref-type="bibr" rid="CR33">2009</xref>
), it was also included in the ratings.
<fig id="Fig1">
<label>Fig. 1</label>
<caption>
<p>Example screen of the assessment platform for a single image, along with the discrete and dimensional scales</p>
</caption>
<graphic xlink:href="13428_2015_620_Fig1_HTML" id="MO1"></graphic>
</fig>
</p>
<p>Additionally, the pictures were rated on two affective dimensions using the Self-Assessment Manikin (Lang,
<xref ref-type="bibr" rid="CR37">1980</xref>
), as is also presented in Fig. 
<xref rid="Fig1" ref-type="fig">1</xref>
. The scale of emotional valence was used to estimate the extent of the positive or negative reaction evoked by a given picture, ranging from 1 to 9 (1 for
<italic>very negative emotions</italic>
and 9 for
<italic>very positive emotions</italic>
). On the scale of arousal, participants estimated to what extent a particular picture made them feel unaroused or aroused, ranging from 1 to 9 (1 for
<italic>unaroused</italic>
/
<italic>relaxed</italic>
and 9 for
<italic>very much aroused</italic>
—e.g., jittery or excited). Although the ratings of these two affective dimensions were originally included in NAPS, they were previously obtained with the use of continuous bipolar semantic sliding scales (SLIDER) by moving a bar over a horizontal scale. The original NAPS ratings showed a more linear association between the valence and arousal dimensions, as compared to the “boomerang-shaped” relationship found, for instance, in our sample and in the IAPS database (Lang, Bradley, & Cuthbert,
<xref ref-type="bibr" rid="CR38">2008</xref>
).</p>
</sec>
<sec id="Sec6">
<title>Data analysis</title>
&lt;p&gt;The data analysis is arranged into three sections. First, we investigated whether the obtained ratings were consistent across participants, which also indicates the upper limit of the correlations that can be expected with these norms; to this end, we applied split-half reliability estimation. Second, we described the distributions of the norms in order to provide researchers with useful characteristics of the dataset. Although ratings of each basic emotion were collected for every picture (provided in the supplementary materials, Table S2), we were also interested in identifying pictures that express one basic emotion much more strongly than the others. Thus, we used several methods for classifying pictures into particular basic-emotion categories, which we consider useful for more precise experimental manipulations of the NAPS BE stimuli. The last section is devoted to further analyses of the patterns observed in the obtained ratings and addresses questions that researchers may have about the dataset. In order to give a rationale for combining the theoretical frameworks of affective dimensions and basic emotions, rather than choosing only one, we investigated the relationship between these approaches. Our research question was whether the information conveyed by the emotional categories duplicates the emotional information described by the dimensional ratings. To answer this question, regression analyses were performed, using the categorical data for each picture to predict the dimensional data, and vice versa. Finally, we examined whether other stimulus parameters should inform stimulus selection; specifically, we asked whether the mean basic-emotion intensities differed across the content categories of the pictures. We investigated this relation with a multivariate analysis of variance (MANOVA), with Content Category (animals, faces, landscapes, objects, and people) and the pictures’ Valence class (negative, neutral, and positive) as between-subjects factors, and all of the affective ratings as dependent variables. Answering these research questions should encourage future users of NAPS BE to make use of all of the provided norms and variables in their experiments.&lt;/p&gt;
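The regression step (predicting dimensional ratings from categorical ratings, and vice versa) amounts to ordinary least squares with the proportion of explained variance as the outcome of interest. The sketch below is an illustration under that assumption; `r_squared` is a hypothetical helper, not part of the authors' analysis pipeline.

```python
import numpy as np

def r_squared(X, y):
    """Fit y ~ X by ordinary least squares (with an intercept) and
    return R^2, the proportion of variance explained."""
    X = np.column_stack([np.ones(len(X)), np.asarray(X)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - (resid @ resid) / ((y - y.mean()) ** 2).sum()
```

For example, `X` could hold the six per-picture basic-emotion means and `y` the per-picture valence means; swapping predictors and outcome gives the reverse direction.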
</sec>
</sec>
<sec id="Sec7" sec-type="results">
<title>Results</title>
<sec id="Sec8">
<title>Reliability</title>
<p>Since the applicability of the collected affective norms in experimental studies is highly dependent on their reliability, we addressed this issue by applying split-half reliability estimation, following descriptions provided in the literature (Monnier & Syssau,
<xref ref-type="bibr" rid="CR44">2014</xref>
; Montefinese, Ambrosini, Fairfield, & Mammarella,
<xref ref-type="bibr" rid="CR45">2014</xref>
; Moors et al.,
<xref ref-type="bibr" rid="CR46">2013</xref>
). The whole sample was split into halves in order to form two groups with the odd and even experiment entrance ranks. Within each group, the mean ratings of each basic emotion were calculated for each picture. Pairwise Pearson’s correlation coefficients of these means between the two groups were then calculated and adjusted using the Spearman–Brown formula. All correlations were significant (
<italic>p</italic>
< .01). The obtained reliability coefficients were high and comparable to the values obtained in other datasets of standardized stimuli (Bradley & Lang,
<xref ref-type="bibr" rid="CR7">2007</xref>
; Imbir,
<xref ref-type="bibr" rid="CR32">2014</xref>
; Monnier & Syssau,
<xref ref-type="bibr" rid="CR44">2014</xref>
; Moors et al.,
<xref ref-type="bibr" rid="CR46">2013</xref>
); namely,
<italic>r</italic>
= .97 for happiness,
<italic>r</italic>
= .98 for sadness,
<italic>r</italic>
= .93 for fear,
<italic>r</italic>
= .94 for surprise,
<italic>r</italic>
= .95 for anger,
<italic>r</italic>
= .97 for disgust,
<italic>r</italic>
= .93 for arousal, and
<italic>r</italic>
= .98 for valence.</p>
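The split-half procedure described above can be sketched as follows. This is a minimal illustration assuming a raters-by-pictures rating matrix for one scale; the odd/even split mirrors the entrance-rank split in the text.

```python
import numpy as np

def split_half_reliability(ratings):
    """ratings: (n_raters, n_pictures) array for a single scale.
    Split raters into odd and even entrance ranks, correlate the
    per-picture mean ratings of the two halves, and adjust with
    the Spearman-Brown prophecy formula."""
    odd = ratings[0::2].mean(axis=0)
    even = ratings[1::2].mean(axis=0)
    r = np.corrcoef(odd, even)[0, 1]
    return 2 * r / (1 + r)  # Spearman-Brown correction
```

Applying this per scale (happiness, sadness, etc.) yields one corrected coefficient per scale, as reported above.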
</sec>
<sec id="Sec9">
<title>Ratings of the affective variables</title>
<p>For each picture, we obtained from 39 to 44 ratings (
<italic>M</italic>
= 41.33,
<italic>SD</italic>
= 2.06) on each scale from the 124 participants of the study. In order to further explore the present data, we divided the whole set of pictures by their valence classes into negative, neutral, and positive pictures, according to the criteria introduced in previous studies (e.g., Ferré, Guasch, Moldovan, & Sánchez-Casas,
<xref ref-type="bibr" rid="CR24">2012</xref>
; Kissler, Herbert, Peyk, & Junghofer,
<xref ref-type="bibr" rid="CR36">2007</xref>
). These criteria were based on the mean valences for negative, neutral, and positive pictures, which usually took values around 2, 5, and 7, respectively. Therefore, we classified pictures with values of valence ranging from 1 to 4 as
<italic>negative</italic>
(
<italic>M</italic>
= 3.10,
<italic>SD</italic>
= 0.58), pictures with values ranging from 4 to 6 as
<italic>neutral</italic>
(
<italic>M</italic>
= 5.02,
<italic>SD</italic>
= 0.55), and pictures with values ranging from 6 to 9 as
<italic>positive</italic>
(
<italic>M</italic>
= 6.52,
<italic>SD</italic>
= 0.39). These criteria resulted in the following proportions in the present database: 148 negative pictures (28.6 %), 203 neutral pictures (40.8 %), and 159 positive pictures (30.6 %). The following distributions of negative, neutral, and positive pictures were observed in the different content categories: animals (25.5 % negative, 43.9 % neutral, 30.6 % positive), faces (26.1 % negative, 30.4 % neutral, 43.5 % positive), landscapes (12.2 % negative, 38.8 % neutral, 49.0 % positive), objects (19.6 % negative, 69.6 % neutral, 10.8 % positive), and people (55.0 % negative, 21.0 % neutral, 24.0 % positive).&lt;/p&gt;
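The valence-class criteria can be expressed as a small helper. Note that the handling of the exact boundary values 4 and 6 is an assumption here, since the text gives overlapping interval endpoints (1 to 4, 4 to 6, 6 to 9).

```python
def valence_class(mean_valence):
    """Assign a picture to a valence class from its mean rating on
    the 1-9 scale, using the cut-offs described in the text.
    Boundary handling (4 -> neutral, 6 -> neutral) is an assumption."""
    if mean_valence < 4:
        return "negative"
    elif mean_valence <= 6:
        return "neutral"
    else:
        return "positive"
```

The reported class means (3.10, 5.02, and 6.52) each fall inside the corresponding interval.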
<p>The distributions of all of the basic emotions, as collected for each picture and with pictures divided by their valence classes, are depicted in Fig. 
<xref rid="Fig2" ref-type="fig">2</xref>
. We split the full range of the basic emotions (1–7 on the rating scales) into seven bins. For each bin, the number of means falling within the bin range was calculated for each basic emotion separately. Numbers obtained in this way (normalized by dividing them by the number of pictures in a particular valence class) are plotted for each valence class separately in each of the panels of Fig. 
<xref rid="Fig2" ref-type="fig">2</xref>
.
<fig id="Fig2">
<label>Fig. 2</label>
<caption>
<p>Distributions of the ratings of discrete emotion categories (happiness, sadness, fear, surprise, anger, and disgust), together with the medians of the respective distributions (dotted lines), for the negative (left), neutral (middle), and positive (right) pictures in NAPS BE</p>
</caption>
<graphic xlink:href="13428_2015_620_Fig2_HTML" id="MO2"></graphic>
</fig>
</p>
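The binning procedure behind Fig. 2 (seven equal bins over the 1-7 range, counts normalized by the number of pictures in each valence class) can be sketched as below; `normalized_bin_counts` is a hypothetical helper name for illustration.

```python
def normalized_bin_counts(means, n_bins=7, lo=1.0, hi=7.0):
    """Split the rating range [lo, hi] into n_bins equal bins, count
    how many per-picture means fall into each bin, and normalize by
    the number of pictures so that classes of different sizes can
    be compared."""
    width = (hi - lo) / n_bins
    counts = [0] * n_bins
    for m in means:
        idx = min(int((m - lo) / width), n_bins - 1)  # clamp hi into top bin
        counts[idx] += 1
    return [c / len(means) for c in counts]
```

Running this once per basic emotion and per valence class reproduces the kind of normalized distributions plotted in the figure panels.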
&lt;p&gt;The distributions of all the basic-emotion intensity ratings among negative pictures are skewed, with a strong bias toward the low range of the scale. Only 31 % and 23 % of the pictures were rated above the middle value of the rating scales (=4) for sadness and disgust, respectively. All of the other basic-emotion intensities were almost always rated lower. This low-intensity bias, which reflects a relative lack of pictures eliciting high-intensity basic emotions, is strongest for happiness and surprise and weakest for sadness and disgust. Among the neutral pictures, all of the basic-emotion intensities were rated low, with the highest median value being for happiness (Mdn = 2.15). In the positive picture group, the distribution of happiness covers the middle of the rating scale (Mdn = 4.15).&lt;/p&gt;
</sec>
<sec id="Sec10">
<title>Basic-emotion classification</title>
&lt;p&gt;The analysis above shows that the majority of images do not express just one discrete emotion, but rather are associated with several different emotional states. From a practical point of view, it might therefore be important to select stimuli representing one particular emotion much more strongly than any other. Such images are particularly useful for studies in which emotional category is an important factor (Briesemeister et al.,
<xref ref-type="bibr" rid="CR8">2015</xref>
; Chapman, Johannes, Poppenk, Moscovitch, & Anderson,
<xref ref-type="bibr" rid="CR13">2012</xref>
; Costa et al.,
<xref ref-type="bibr" rid="CR16">2014</xref>
; Croucher, Calder, Ramponi, Barnard, & Murphy,
<xref ref-type="bibr" rid="CR17">2011</xref>
; Flom, Janis, Garcia, & Kirwan,
<xref ref-type="bibr" rid="CR25">2014</xref>
; Schienle et al.,
<xref ref-type="bibr" rid="CR58">2014</xref>
; van Hooff, van Buuringen, El M’rabet, de Gier, & van Zalingen,
<xref ref-type="bibr" rid="CR65">2014</xref>
). Importantly, several methods of stimulus classification according to the basic emotion categories available in the literature (Briesemeister et al.,
<xref ref-type="bibr" rid="CR10">2011b</xref>
; Mikels et al.,
<xref ref-type="bibr" rid="CR43">2005</xref>
) can be employed, depending on the specific interest of the researcher. One of the most popular is based on the overlapping of confidence intervals (CIs; Mikels et al.,
<xref ref-type="bibr" rid="CR43">2005</xref>
). Using this method, the 85 % CI was constructed around the mean intensity of each basic emotion for a given picture, and a category membership was determined according to the overlap of the CIs. A single emotion category was ascribed to a given picture if the mean of one emotion was higher than the means of all of the other emotions, and if the CI for that emotion did not overlap with the CIs for the other five emotional categories. An image was classified as
<italic>blended</italic>
if two or three means were higher than the rest and if the CIs of those means overlapped only with each other. Finally, if the CIs of more than three means overlapped, such an image was classified as
<italic>undifferentiated</italic>
(Mikels et al.,
<xref ref-type="bibr" rid="CR43">2005</xref>
).</p>
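A simplified sketch of this CI-overlap rule is given below. The critical value z = 1.440 for an approximate 85 % normal CI is an assumption (the original procedure may use t-based intervals), and the blended rule is approximated by counting only overlaps with the top-rated emotion, whereas Mikels et al. (2005) additionally require that the top means overlap only with each other.

```python
import statistics

def classify_by_ci(ratings_by_emotion, z=1.440):
    """Classify one picture from its per-emotion rating lists using a
    simplified CI-overlap rule. Returns ('single', emotion),
    ('blended', [emotions]), or ('undifferentiated', None)."""
    cis = {}
    for emo, vals in ratings_by_emotion.items():
        m = statistics.mean(vals)
        sem = statistics.stdev(vals) / len(vals) ** 0.5
        cis[emo] = (m, m - z * sem, m + z * sem)  # (mean, lower, upper)
    ranked = sorted(cis, key=lambda e: cis[e][0], reverse=True)
    top = ranked[0]
    # emotions whose CI overlaps the top emotion's CI
    overlapping = [e for e in ranked
                   if e != top and cis[e][2] >= cis[top][1]]
    if not overlapping:
        return ("single", top)
    if len(overlapping) <= 2:
        return ("blended", [top] + overlapping)
    return ("undifferentiated", None)
```

A picture dominated by one emotion (e.g., happiness rated far above the rest) comes out as single; a picture rated flat across all six scales comes out as undifferentiated.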
<p>The aforementioned procedure was used to find images that elicited one discrete emotion more than the others. As a result, 510 images used in the study were divided into six categories: happiness (
<italic>n</italic>
= 240), anger (
<italic>n</italic>
= 2), sadness (
<italic>n</italic>
= 62), fear (
<italic>n</italic>
= 11), disgust (
<italic>n</italic>
= 51), and surprise (
<italic>n</italic>
= 2), giving a total number of 368 pictures that were matched to specific basic emotions. The other pictures were classified as blended, including two (
<italic>n</italic>
= 21) or three (
<italic>n</italic>
= 22) emotions, or were classified as undifferentiated, eliciting similar amounts of four, five, or six emotions (
<italic>n</italic>
= 20, 25, and 54 pictures, respectively). Some example images from the animals category are presented in Fig. 
<xref rid="Fig3" ref-type="fig">3</xref>
.
<fig id="Fig3">
<label>Fig. 3</label>
<caption>
<p>A sample of standardized images classified as representing each basic emotion within the content category of animals</p>
</caption>
<graphic xlink:href="13428_2015_620_Fig3_HTML" id="MO3"></graphic>
</fig>
</p>
<p>We computed a series of one-way analyses of variance solely on the pictures classified with the CI method (Mikels et al.,
<xref ref-type="bibr" rid="CR43">2005</xref>
) as eliciting single basic emotions. For each basic emotion, we compared its intensity ratings in the pictures classified with that emotion against its ratings in the pictures classified with each of the other basic emotions. The effect of the basic-emotion classification was significant in each case: for happiness,
<italic>F</italic>
(5, 362) = 200.43,
<italic>p</italic>
< .001; sadness,
<italic>F</italic>
(5, 362) = 449.92,
<italic>p</italic>
< .001; fear,
<italic>F</italic>
(5, 362) = 147.10,
<italic>p</italic>
< .001; surprise,
<italic>F</italic>
(5, 362) = 44.19,
<italic>p</italic>
< .001; anger,
<italic>F</italic>
(5, 362) = 138.02,
<italic>p</italic>
< .001; and disgust,
<italic>F</italic>
(5, 362) = 350.14,
<italic>p</italic>
< .001.</p>
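The F statistics above come from standard one-way ANOVAs; a minimal sketch of the computation (not the authors' actual code) is:

```python
def one_way_anova_F(groups):
    """F statistic for a one-way ANOVA: between-groups mean square
    over within-groups mean square. groups is a list of lists of
    intensity ratings, one list per classified emotion category."""
    means = [sum(g) / len(g) for g in groups]
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (m - grand) ** 2
                     for g, m in zip(groups, means))
    ss_within = sum((x - m) ** 2
                    for g, m in zip(groups, means) for x in g)
    df_b, df_w = len(groups) - 1, n - len(groups)
    return ss_between / df_b / (ss_within / df_w), (df_b, df_w)
```

With six emotion categories and 368 classified pictures, the degrees of freedom are (5, 362), matching the F tests reported above.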
<p>The frequencies of each basic emotion among the pictures classified as single, blended, and undifferentiated basic emotions are presented in Fig. 
<xref rid="Fig4" ref-type="fig">4</xref>
. Note that the three panels of this figure cannot be compared in terms of picture totals: in the left panel the number of pictures equals the sum of the bars, whereas in the middle and right panels the same image can contribute to several bars. Each bar should therefore be read only as indicating how often a particular emotion was represented as single, blended, or undifferentiated.
<fig id="Fig4">
<label>Fig. 4</label>
<caption>
<p>Numbers of pictures expressing each discrete emotional category, classified on the basis of confidence intervals as expressing pure, blended, and undifferentiated emotions</p>
</caption>
<graphic xlink:href="13428_2015_620_Fig4_HTML" id="MO4"></graphic>
</fig>
</p>
<p>In order to provide researchers with an overview of the groups of pictures distinguished with the CI classification method, descriptive statistics for the basic emotions and affective dimensions are presented in Table 
<xref rid="Tab1" ref-type="table">1</xref>
.
<table-wrap id="Tab1">
<label>Table 1</label>
<caption>
<p>Descriptive statistics of all of the pictures classified by single basic emotions: Happiness, sadness, fear, surprise, anger, and disgust (
<italic>N</italic>
= 368)</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th></th>
<th>
<italic>M</italic>
</th>
<th>
<italic>SD</italic>
</th>
<th>Min</th>
<th>Max</th>
</tr>
</thead>
<tbody>
<tr>
<td>Happiness</td>
<td char="." align="char">3.71</td>
<td char="." align="char">0.86</td>
<td char="." align="char">1.63</td>
<td char="." align="char">5.56</td>
</tr>
<tr>
<td>Sadness</td>
<td char="." align="char">4.04</td>
<td char="." align="char">0.77</td>
<td char="." align="char">2.28</td>
<td char="." align="char">5.49</td>
</tr>
<tr>
<td>Fear</td>
<td char="." align="char">3.30</td>
<td char="." align="char">0.33</td>
<td char="." align="char">2.71</td>
<td char="." align="char">3.70</td>
</tr>
<tr>
<td>Surprise</td>
<td char="." align="char">2.16</td>
<td char="." align="char">0.54</td>
<td char="." align="char">1.78</td>
<td char="." align="char">2.54</td>
</tr>
<tr>
<td>Anger</td>
<td char="." align="char">4.36</td>
<td char="." align="char">0.05</td>
<td char="." align="char">4.32</td>
<td char="." align="char">4.39</td>
</tr>
<tr>
<td>Disgust</td>
<td char="." align="char">3.93</td>
<td char="." align="char">0.79</td>
<td char="." align="char">1.82</td>
<td char="." align="char">5.71</td>
</tr>
<tr>
<td>Arousal</td>
<td char="." align="char">3.10</td>
<td char="." align="char">0.90</td>
<td char="." align="char">1.49</td>
<td char="." align="char">6.38</td>
</tr>
<tr>
<td>Valence</td>
<td char="." align="char">5.22</td>
<td char="." align="char">1.46</td>
<td char="." align="char">1.84</td>
<td char="." align="char">7.82</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<p>
<italic>N</italic>
, number of pictures;
<italic>M</italic>
, mean;
<italic>SD</italic>
, standard deviation; Min, minimal rating; Max, maximal rating</p>
</table-wrap-foot>
</table-wrap>
</p>
<p>As was mentioned in previous studies (e.g., Mikels et al.,
<xref ref-type="bibr" rid="CR43">2005</xref>
), alternative methods could be used to investigate the data. For instance, the CI method classifies an image as expressing a single discrete emotion whenever the rating of that emotion is significantly higher than the ratings of the others, even if its absolute intensity is lower than the ratings of other images eliciting blended or undifferentiated emotions. We therefore also applied a conservative classification method (Briesemeister et al.,
<xref ref-type="bibr" rid="CR10">2011b</xref>
), according to which pictures were assigned to a specific discrete emotion category if the mean rating in one discrete emotion was more than one standard deviation higher than the ratings for other discrete emotions. Finally, the most liberal classification criterion was applied (Briesemeister et al.,
<xref ref-type="bibr" rid="CR10">2011b</xref>
), according to which each picture was labeled with whichever discrete emotion received the highest mean rating. The results of all three classification methods are presented in Table 
<xref rid="Tab2" ref-type="table">2</xref>
.
<table-wrap id="Tab2">
<label>Table 2</label>
<caption>
<p>Pictures in NAPS BE representing basic emotions, as classified with confidence intervals, according to the conservative and the liberal method</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th></th>
<th></th>
<th>Hap</th>
<th>Sad</th>
<th>Fea</th>
<th>Sur</th>
<th>Ang</th>
<th>Dis</th>
<th>Total</th>
</tr>
</thead>
<tbody>
<tr>
<td>CIs</td>
<td>Single</td>
<td>240</td>
<td>62</td>
<td>11</td>
<td>2</td>
<td>2</td>
<td>51</td>
<td>368</td>
</tr>
<tr>
<td></td>
<td>Blended</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td>43</td>
</tr>
<tr>
<td></td>
<td>Undifferentiated</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td>99</td>
</tr>
<tr>
<td>Conservative</td>
<td></td>
<td>153</td>
<td>17</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>6</td>
<td>176</td>
</tr>
<tr>
<td>Liberal</td>
<td></td>
<td>273</td>
<td>195</td>
<td>21</td>
<td>14</td>
<td>0</td>
<td>6</td>
<td>509</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<p>Hap, happiness; Sad, sadness; Fea, fear; Sur, surprise; Ang, anger; Dis, disgust</p>
</table-wrap-foot>
</table-wrap>
</p>
<p>Since all three classification methods are based on the mean ratings, the resulting classifications of our data did not differ substantially: no picture was assigned different basic emotions by different methods, and the methods differed only in the number of pictures they classified as expressing specific basic emotions. Table S2 includes the results of each classification method for each single picture.</p>
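The three classification rules can be sketched in a few lines of code. This is an illustrative reimplementation, not the authors' analysis scripts: the CI half-width formula, the rater count, and the use of the top emotion's SD in the conservative rule are simplifying assumptions, and the ratings are invented.

```python
import math

EMOTIONS = ["happiness", "sadness", "fear", "surprise", "anger", "disgust"]

def classify_ci(means, sds, n_raters, z=1.96):
    # CI rule (after Mikels et al., 2005): every emotion whose 95% CI
    # overlaps the CI of the top-rated emotion counts as elicited;
    # 1 emotion -> single, 2-3 -> blended, 4+ -> undifferentiated.
    half = [z * sd / math.sqrt(n_raters) for sd in sds]
    top = max(range(6), key=lambda i: means[i])
    k = sum(1 for i in range(6)
            if means[i] + half[i] >= means[top] - half[top])
    if k == 1:
        return "single:" + EMOTIONS[top]
    return "blended" if k <= 3 else "undifferentiated"

def classify_conservative(means, sds):
    # Conservative rule: the top mean must exceed every other mean by
    # more than one standard deviation (here: the top emotion's SD).
    top = max(range(6), key=lambda i: means[i])
    if all(means[top] - means[i] > sds[top] for i in range(6) if i != top):
        return EMOTIONS[top]
    return None  # picture left unclassified

def classify_liberal(means):
    # Liberal rule: whichever emotion has the highest mean rating wins.
    return EMOTIONS[max(range(6), key=lambda i: means[i])]

# One hypothetical picture rated by 55 participants:
means = [1.2, 4.8, 2.1, 1.9, 1.4, 2.0]
sds = [0.5, 0.9, 1.0, 0.8, 0.6, 1.1]
print(classify_ci(means, sds, 55))        # single:sadness
print(classify_conservative(means, sds))  # sadness
print(classify_liberal(means))            # sadness
```

Because all three rules pick the same top-rated emotion, they can only disagree on whether a picture is classified at all, which matches the pattern reported in Table 2.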
</sec>
<sec id="Sec11">
<title>Relationship between basic emotions and affective dimensions</title>
<p>An exploration of the relationships between the basic emotions and affective dimensions showed that these variables were highly intercorrelated, as is demonstrated in Table 
<xref rid="Tab3" ref-type="table">3</xref>
.
<table-wrap id="Tab3">
<label>Table 3</label>
<caption>
<p>Correlations between the ratings obtained for all affective variables</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th></th>
<th>Hap</th>
<th>Sad</th>
<th>Fea</th>
<th>Sur</th>
<th>Ang</th>
<th>Dis</th>
<th>Aro</th>
<th>Val</th>
</tr>
</thead>
<tbody>
<tr>
<td>Happiness</td>
<td>1.00
<sup>*</sup>
</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>Sadness</td>
<td>–.67
<sup>*</sup>
</td>
<td>1.00
<sup>*</sup>
</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>Fear</td>
<td>–.60
<sup>*</sup>
</td>
<td>.67
<sup>*</sup>
</td>
<td>1.00
<sup>*</sup>
</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>Surprise</td>
<td>–.43
<sup>*</sup>
</td>
<td>.56
<sup>*</sup>
</td>
<td>.76
<sup>*</sup>
</td>
<td>1.00
<sup>*</sup>
</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>Anger</td>
<td>–.62
<sup>*</sup>
</td>
<td>.82
<sup>*</sup>
</td>
<td>.66
<sup>*</sup>
</td>
<td>.58
<sup>*</sup>
</td>
<td>1.00
<sup>*</sup>
</td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>Disgust</td>
<td>–.62
<sup>*</sup>
</td>
<td>.52
<sup>*</sup>
</td>
<td>.63
<sup>*</sup>
</td>
<td>.71
<sup>*</sup>
</td>
<td>.63
<sup>*</sup>
</td>
<td>1.00
<sup>*</sup>
</td>
<td></td>
<td></td>
</tr>
<tr>
<td>Arousal</td>
<td>–.25
<sup>*</sup>
</td>
<td>.64
<sup>*</sup>
</td>
<td>.79
<sup>*</sup>
</td>
<td>.74
<sup>*</sup>
</td>
<td>.65
<sup>*</sup>
</td>
<td>.61
<sup>*</sup>
</td>
<td>1.00
<sup>*</sup>
</td>
<td></td>
</tr>
<tr>
<td>Valence</td>
<td>.93
<sup>*</sup>
</td>
<td>–.85
<sup>*</sup>
</td>
<td>–.75
<sup>*</sup>
</td>
<td>–.60
<sup>*</sup>
</td>
<td>–.78
<sup>*</sup>
</td>
<td>–.73
<sup>*</sup>
</td>
<td>–.53
<sup>*</sup>
</td>
<td>1.00
<sup>*</sup>
</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<p>
<sup>*</sup>
<italic>p</italic>
< .01</p>
</table-wrap-foot>
</table-wrap>
</p>
<p>Additionally, regression analyses were computed using the discrete emotional category ratings in order to examine the extent to which these variables could predict the ratings of valence and arousal (Bradley & Lang,
<xref ref-type="bibr" rid="CR6">1999</xref>
). We performed four separate analyses using the six emotional category ratings to predict valence and arousal within the three valence classes distinguished in the previous sections (negative, neutral, and positive), in line with analyses reported in literature (Montefinese et al.,
<xref ref-type="bibr" rid="CR45">2014</xref>
; Stevenson et al.,
<xref ref-type="bibr" rid="CR63">2007</xref>
; Stevenson & James,
<xref ref-type="bibr" rid="CR62">2008</xref>
).</p>
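A minimal sketch of this regression step (assuming a numpy environment; the data below are invented toy ratings, not NAPS BE norms): valence is predicted from the six discrete-emotion intensities by ordinary least squares, and the model fit is summarized by R².

```python
import numpy as np

rng = np.random.default_rng(0)
n = 40
X = rng.uniform(1, 7, size=(n, 6))             # happiness..disgust ratings
true_beta = np.array([0.8, -0.5, -0.3, 0.0, -0.2, -0.4])
y = 5 + X @ true_beta + rng.normal(0, 0.1, n)  # simulated valence ratings

Xd = np.column_stack([np.ones(n), X])          # add an intercept column
beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)  # OLS fit
resid = y - Xd @ beta
r2 = 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
print(f"R^2 = {r2:.2f}")
```

Nonsignificant predictors would then be dropped and the model refitted, as described in the next paragraph of the text.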
<p>After removing the insignificant coefficients, we repeated the regressions; all four models turned out to fit the data, and the basic-emotion intensities explained a large percentage of the variance of valence [
<italic>F</italic>
(5, 142) = 172.41,
<italic>p</italic>
< .001,
<italic>R</italic>
<sup>2</sup>
= .86, for negative pictures;
<italic>F</italic>
(5, 197) = 413.99,
<italic>p</italic>
< .001,
<italic>R</italic>
<sup>2</sup>
= .91, for neutral pictures; and
<italic>F</italic>
(3, 155) = 182.33,
<italic>p</italic>
< .001,
<italic>R</italic>
<sup>2</sup>
= .78, for positive pictures] and of arousal [
<italic>F</italic>
(6, 141) = 93.47,
<italic>p</italic>
< .001,
<italic>R</italic>
<sup>2</sup>
= .80, for negative pictures;
<italic>F</italic>
(6, 196) = 157.52,
<italic>p</italic>
< .001,
<italic>R</italic>
<sup>2</sup>
= .83, for neutral pictures; and
<italic>F</italic>
(6, 152) = 59.14,
<italic>p</italic>
< .001,
<italic>R</italic>
<sup>2</sup>
= .70, for positive pictures].</p>
<p>Standardized
<italic>β</italic>
coefficients were calculated for all six emotional categories. As for negative pictures, valence was strongly related to sadness, disgust, happiness, fear, and anger, yet it was not related to surprise. Arousal, in turn, was related to fear, disgust, and sadness, but not to anger and surprise. In the case of neutral pictures, valence was most strongly related to happiness, sadness, disgust, and fear, and additionally to surprise, but not to anger. Arousal was also not related to anger, yet it was related to fear, happiness, sadness, disgust, and surprise. As far as positive pictures were concerned, valence was related to happiness, sadness, and disgust only. Arousal was related to fear, disgust, anger, and sadness (but only fear was significant).</p>
<p>However, partial correlations (representing the unique influence of one predictor relative to the part of the variance of a dependent variable unexplained by the other predictors) revealed that discrete emotions contributed to valence and arousal in different ways (Ric, Alexopoulos, Muller, & Aubé,
<xref ref-type="bibr" rid="CR54">2013</xref>
) (Table 
<xref rid="Tab4" ref-type="table">4</xref>
). The ratings of affective dimensions were predicted particularly well by the level of happiness among positive pictures; by the levels of happiness, sadness, and fear among neutral pictures; and by the levels of sadness, fear, and disgust among negative pictures. The distribution of the ratings of pictures classified as eliciting particular discrete emotions on the basis of the CI criterion is presented in the affective space of valence and arousal in Fig. 
<xref rid="Fig5" ref-type="fig">5</xref>
.
<table-wrap id="Tab4">
<label>Table 4</label>
<caption>
<p>Regressions and partial correlations of discrete emotional category ratings predicting valence and arousal, for negative, neutral, and positive pictures separately</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th></th>
<th colspan="3">Predicting Valence</th>
<th colspan="3">Predicting Arousal</th>
</tr>
<tr>
<th></th>
<th>
<italic>β</italic>
</th>
<th>
<italic>t</italic>
</th>
<th>Partial
<italic>r</italic>
</th>
<th>
<italic>β</italic>
</th>
<th>
<italic>t</italic>
</th>
<th>Partial
<italic>r</italic>
</th>
</tr>
</thead>
<tbody>
<tr>
<td colspan="7">Negative</td>
</tr>
<tr>
<td> Happiness</td>
<td>.20</td>
<td>5.15
<sup>**</sup>
</td>
<td>.40</td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td> Sadness</td>
<td>–.59</td>
<td>–13.09
<sup>**</sup>
</td>
<td>–.74</td>
<td>.34</td>
<td>6.64
<sup>**</sup>
</td>
<td>.49</td>
</tr>
<tr>
<td> Fear</td>
<td>–.26</td>
<td>–7.21
<sup>**</sup>
</td>
<td>–.52</td>
<td>.55</td>
<td>12.97
<sup>**</sup>
</td>
<td>.74</td>
</tr>
<tr>
<td> Surprise</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td> Anger</td>
<td>–.12</td>
<td>–3.03
<sup>**</sup>
</td>
<td>–.25</td>
<td>.09</td>
<td>1.97</td>
<td>.16</td>
</tr>
<tr>
<td> Disgust</td>
<td>–.33</td>
<td>–7.80
<sup>**</sup>
</td>
<td>–.55</td>
<td>.38</td>
<td>8.62
<sup>**</sup>
</td>
<td>.59</td>
</tr>
<tr>
<td colspan="7">Neutral</td>
</tr>
<tr>
<td> Happiness</td>
<td>.57</td>
<td>23.94
<sup>**</sup>
</td>
<td>.86</td>
<td>.32</td>
<td>9.49
<sup>**</sup>
</td>
<td>.56</td>
</tr>
<tr>
<td> Sadness</td>
<td>–.34</td>
<td>–14.36
<sup>**</sup>
</td>
<td>–.72</td>
<td>.19</td>
<td>5.86
<sup>**</sup>
</td>
<td>.39</td>
</tr>
<tr>
<td> Fear</td>
<td>–.21</td>
<td>–7.48
<sup>**</sup>
</td>
<td>–.47</td>
<td>.68</td>
<td>16.94
<sup>**</sup>
</td>
<td>.77</td>
</tr>
<tr>
<td> Surprise</td>
<td>.08</td>
<td>2.55
<sup>*</sup>
</td>
<td>.18</td>
<td>.14</td>
<td>3.22
<sup>**</sup>
</td>
<td>.22</td>
</tr>
<tr>
<td> Anger</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td> Disgust</td>
<td>–.26</td>
<td>–10.21
<sup>**</sup>
</td>
<td>–.59</td>
<td>.20</td>
<td>5.65
<sup>**</sup>
</td>
<td>.37</td>
</tr>
<tr>
<td colspan="7">Positive</td>
</tr>
<tr>
<td> Happiness</td>
<td>.83</td>
<td>21.56
<sup>**</sup>
</td>
<td>.87</td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td> Sadness</td>
<td>–.13</td>
<td>–3.34
<sup>**</sup>
</td>
<td>–.26</td>
<td>–.08</td>
<td>–0.97</td>
<td>–.08</td>
</tr>
<tr>
<td> Fear</td>
<td></td>
<td></td>
<td></td>
<td>.40</td>
<td>5.34
<sup>**</sup>
</td>
<td>.40</td>
</tr>
<tr>
<td> Surprise</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td> Anger</td>
<td></td>
<td></td>
<td></td>
<td>.09</td>
<td>1.22</td>
<td>.10</td>
</tr>
<tr>
<td> Disgust</td>
<td>–.12</td>
<td>–2.96
<sup>**</sup>
</td>
<td>–.23</td>
<td>–.12</td>
<td>–1.52</td>
<td>–.12</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<p>
<sup>*</sup>
<italic>p</italic>
< .01,
<sup>**</sup>
<italic>p</italic>
< .001</p>
</table-wrap-foot>
</table-wrap>
<fig id="Fig5">
<label>Fig. 5</label>
<caption>
<p>Ratings of the pictures classified on the basis of the confidence interval as basic, blended, and undifferentiated emotions in the space of the affective dimensions: valence and arousal</p>
</caption>
<graphic xlink:href="13428_2015_620_Fig5_HTML" id="MO5"></graphic>
</fig>
</p>
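The notion of a partial correlation used in Table 4 can be made concrete with a small residual-based sketch (numpy assumed; the toy data are invented): the partial correlation of one predictor with the outcome is the plain correlation between the two residual series obtained after regressing each of them on the remaining predictors.

```python
import numpy as np

def partial_r(y, X, j):
    # Partial correlation of predictor j with y, controlling for the
    # other predictors: correlate the residuals of y and of X[:, j]
    # after both are regressed on the remaining columns of X.
    n = len(y)
    others = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
    def resid(v):
        beta, *_ = np.linalg.lstsq(others, v, rcond=None)
        return v - others @ beta
    ry, rx = resid(y), resid(X[:, j])
    return (ry @ rx) / np.sqrt((ry @ ry) * (rx @ rx))

rng = np.random.default_rng(2)
x0, x1 = rng.normal(size=200), rng.normal(size=200)
y = 2 * x0 + rng.normal(0, 0.2, 200)       # y driven by x0 only
X = np.column_stack([x0, x1])
print(round(float(partial_r(y, X, 0)), 2))  # close to 1
print(round(float(partial_r(y, X, 1)), 2))  # close to 0
```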
<p>The regressions calculated using the dimensional ratings to predict emotional category ratings were similar to the previous ones, also showing a lack of homogeneity in their relationships (beta weights and a statistical analysis are presented in Table 
<xref rid="MOESM2" ref-type="media">S1</xref>
in the supplementary materials).</p>
</sec>
<sec id="Sec12">
<title>Relations between the affective variables and the content categories</title>
<p>Subsequently, we performed a MANOVA including the five Content Categories (animals, faces, landscapes, objects, and people) and the three classes of Picture Valence (negative, neutral, and positive) as between-object factors, and the ratings of the six basic-emotion intensities as well as of the two affective dimensions as dependent variables. We first tested the dependent variables for multicollinearity. The variance inflation factor (VIF) showed that multicollinearity might be a problem (Myers,
<xref ref-type="bibr" rid="CR47">1990</xref>
) for valence (VIF = 17.56) and happiness (VIF = 11.29). Therefore, we removed valence as a dependent variable from the analysis. Additionally, we conducted collinearity diagnostics to check for interdependence among the independent variables; the obtained tolerance and VIF values were not considered problematic (tolerance > 0.1 and VIF < 10; Myers,
<xref ref-type="bibr" rid="CR47">1990</xref>
).</p>
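The VIF screening described above can be reproduced with a short function. This is an illustrative sketch (numpy assumed; the data are synthetic, not the NAPS BE ratings): each variable is regressed on the remaining ones, and the resulting R² is converted into 1 / (1 - R²).

```python
import numpy as np

def vif(X):
    # Variance inflation factor for each column of X: regress the column
    # on all remaining columns and convert the R^2 into 1 / (1 - R^2).
    n, p = X.shape
    out = []
    for j in range(p):
        y = X[:, j]
        others = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(others, y, rcond=None)
        resid = y - others @ beta
        r2 = 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
        out.append(1.0 / (1.0 - r2))
    return out

rng = np.random.default_rng(1)
a, b = rng.normal(size=100), rng.normal(size=100)
X = np.column_stack([a, b, a + 0.05 * rng.normal(size=100)])  # col 2 ~ col 0
print([round(v, 1) for v in vif(X)])  # the collinear pair shows VIF >> 10
```

A variable with a VIF above 10, such as valence in the analysis above, would be flagged and removed before the MANOVA.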
<p>As for the between-object effects, we found significant main effects of content category [
<italic>F</italic>
(28, 1968) = 7.99,
<italic>p</italic>
< .001,
<italic>η</italic>
<sub>p</sub>
<sup>2</sup>
= .10] and valence class [
<italic>F</italic>
(14, 980) = 121.25,
<italic>p</italic>
< .001,
<italic>η</italic>
<sub>p</sub>
<sup>2</sup>
= .63], as well as a significant effect of the interaction between the two [
<italic>F</italic>
(56, 3465) = 4.73,
<italic>p</italic>
< .001,
<italic>η</italic>
<sub>p</sub>
<sup>2</sup>
= .07]. We interpreted this interaction by analyzing the simple main effects of content category separately for each valence class; this revealed patterns specific to each basic emotion, and the results are depicted in Fig. 
<xref rid="Fig6" ref-type="fig">6</xref>
. There were significant differences in the mean basic-emotion intensities across the valence classes, depending on content category. Specifically, happiness ratings were lower for objects than for landscapes among both neutral and positive pictures. Among negative pictures, sadness ratings were significantly higher for faces and lower for objects than for the other categories. Fear ratings were higher for people (among both negative and neutral pictures) and for animals (among neutral pictures) than for the other categories. Surprise ratings for animals and people were lower among positive pictures than among neutral ones. Among negative pictures, anger was rated significantly higher for landscapes than for the other categories. Finally, among negative and neutral pictures, disgust was rated higher for objects and lower for faces than for the other content categories. All of the significant differences (
<italic>p</italic>
< .05) are marked with an asterisk in Fig. 
<xref rid="Fig6" ref-type="fig">6</xref>
.
<fig id="Fig6">
<label>Fig. 6</label>
<caption>
<p>Mean intensities of all discrete emotion categories, as a function of all semantic categories and all valence classes.
<sup>*</sup>
Significant differences between the mean intensities of particular basic emotions of content categories, marked with relevant colors: animals = blue, faces = red, landscapes = green, objects = purple, and people = orange;
<italic>p</italic>
< .05</p>
</caption>
<graphic xlink:href="13428_2015_620_Fig6_HTML" id="MO6"></graphic>
</fig>
</p>
</sec>
</sec>
<sec id="Sec13" sec-type="discussion">
<title>Discussion</title>
<p>The present study aimed to provide categorical data that allow the NAPS to be used more widely in studies of emotion conducted from a discrete-category perspective, and to provide a means of investigating how the dimensional and categorical approaches to the study of affect relate to each other.</p>
<p>Concerning the relationship between affective dimensions and emotional category ratings, our findings are in line with the previous characterizations of affective stimuli, including written words (Stevenson et al.,
<xref ref-type="bibr" rid="CR63">2007</xref>
), emotional faces (Olszanowski et al.,
<xref ref-type="bibr" rid="CR48">2015</xref>
), and affective sounds (Stevenson & James,
<xref ref-type="bibr" rid="CR62">2008</xref>
). These results showed differences in the predictions based on categories, depending on the predicted dimension, as well as on whether the pictures were positive or negative. The regressions using the dimensional ratings to predict emotional category ratings were similar to the previous regressions, with a comparable lack of homogeneity in the relationships. In other words, emotional categories cannot be extrapolated from the affective dimensions; conversely, dimensional information cannot be extrapolated from the emotional categories. The heterogeneous relationships between each emotional category and the different affective dimensions of the stimuli confirm the importance of using categorical data both independently and as a supplement to dimensional data (Stevenson et al.,
<xref ref-type="bibr" rid="CR63">2007</xref>
). From a practical point of view, using both dimensional and discrete emotion classifications, the researcher could design a more ecologically valid paradigm by utilizing, for instance, negative pictures that were not biased toward any particular discrete emotion, or by using pictures evoking only a particular discrete emotion (Stevenson & James,
<xref ref-type="bibr" rid="CR62">2008</xref>
).</p>
<p>A particularity of the NAPS BE dataset is that sadness is not related exclusively to low arousal. As has been noted in the literature (Javela, Mercadillo, & Martín Ramírez,
<xref ref-type="bibr" rid="CR34">2008</xref>
), defining the elements and specific elicitors of a single emotion becomes difficult once one considers that individuals may experience many negative emotions when confronted with an unpleasant stimulus (Mikels et al.,
<xref ref-type="bibr" rid="CR43">2005</xref>
), such as a visual scene (Bradley, Codispoti, Cuthbert, & Lang,
<xref ref-type="bibr" rid="CR4">2001</xref>
; Bradley, Codispoti, Sabatinelli, & Lang,
<xref ref-type="bibr" rid="CR5">2001</xref>
). Considering that the experiences of anger, fear, and sadness elicit similar electromyographic activity (Hu & Wan,
<xref ref-type="bibr" rid="CR31">2003</xref>
), it may be argued that these emotions are related to similar levels of arousal. For instance, anger and sadness could both be elicited with the occurrence of negative events, such as blaming others and loss (Abramson, Metalsky, & Alloy,
<xref ref-type="bibr" rid="CR1">1989</xref>
; Smith & Lazarus,
<xref ref-type="bibr" rid="CR60">1993</xref>
). Thus, these might be differentiated from each other only by considering how they are appraised, but not by the related arousal. On the other hand, according to the “core affect” theory (Barrett & Bliss-Moreau,
<xref ref-type="bibr" rid="CR2">2009</xref>
; Russell,
<xref ref-type="bibr" rid="CR55">2003</xref>
), the core affective feelings evoked during an emotion depend on the situation; for instance, fear can be pleasant and highly arousing (in a rollercoaster car) or unpleasant and less arousing (detecting bodily signs of an illness) (Wilson-Mendenhall, Barrett, & Barsalou,
<xref ref-type="bibr" rid="CR70">2013</xref>
). Therefore, there might be situations in which sadness is related to high arousal, or in which high-arousing sadness is closely related to other negative, high-arousing emotions.</p>
<p>Importantly, we chose images that were counterbalanced in terms of content categories, which allowed us to explore the relationship between the affective variables and the content categories included in the NAPS. This examination revealed significant main effects of content category and valence class, indicating differences across all of them. Additionally, we found significant interactions of content category and valence class. Such interactions had been reported previously for verbal materials with regard to affective dimensions (Ferré et al.,
<xref ref-type="bibr" rid="CR24">2012</xref>
), yet not for visual material and basic emotions. Further analysis of this interaction in our data showed that interesting patterns were visible, especially among positive pictures for happiness and among negative pictures for sadness, fear, surprise, anger, and disgust. Depending on the basic emotion of interest, there were differences in the ratings of various content categories: For instance, sadness was induced much less by objects, and disgust much less by faces, than were any of the other categories. These interactions show that content categories should be taken into account by researchers attempting to choose appropriate stimuli to induce specific basic emotions.</p>
<p>When compared to the previously offered datasets of affective pictures characterized by discrete emotions, NAPS BE offers larger samples of images expressing single basic emotions, as classified with the CI method (Mikels et al.,
<xref ref-type="bibr" rid="CR43">2005</xref>
). For instance, IAPS contains fewer images expressing disgust and sadness (
<italic>n</italic>
s = 31 and 42, respectively; Bradley & Lang,
<xref ref-type="bibr" rid="CR7">2007</xref>
) than does NAPS BE (
<italic>n</italic>
s = 51 and 62). Another advantage of NAPS BE is that it enables researchers to control for the physical properties of the images (Marchewka, Żurawski, Jednoróg, & Grabowska,
<xref ref-type="bibr" rid="CR41">2014</xref>
). However, the greatest advantage of the presently introduced dataset is that it offers pictorial stimuli characterized from both dimensional and basic-emotion perspectives, which makes it extremely useful for experiments within a combined approach.</p>
<sec id="Sec14">
<title>Limitations and future directions</title>
<p>An important limitation of the present study, similarly to previous ones (Mikels et al.,
<xref ref-type="bibr" rid="CR43">2005</xref>
), is that we were not able to identify representative numbers of stimuli inducing clear basic emotions such as surprise and anger. The small number of images expressing anger in NAPS BE is in line with the previous results (Mikels et al.,
<xref ref-type="bibr" rid="CR43">2005</xref>
) and might be explained by the fact that it is difficult to elicit extreme unpleasantness, high effort, high certainty, and strong human agency with the passive and essentially effortless viewing of static images.</p>
<p>Another possible limitation is that NAPS BE, just like NAPS (Marchewka et al.,
<xref ref-type="bibr" rid="CR41">2014</xref>
), lacks very positive pictures with high arousal (e.g., pictures with erotic content). However, the Erotic Subset for NAPS (NAPS ERO; Wierzba et al.,
<xref ref-type="bibr" rid="CR68">2015</xref>
) has been prepared. Also, the images included in NAPS BE are only moderately inductive of basic emotions. First, this may result from the nature of static images. Second, it has been claimed that pictures evoking basic emotions of higher intensities probably also evoke different emotions (van Hooff et al.,
<xref ref-type="bibr" rid="CR65">2014</xref>
). Therefore, perhaps only mild emotions can be evoked by images classified as expressing single basic emotions. These moderate intensities should be taken into account when investigating the specific effects of basic emotions through the use of NAPS BE.</p>
<p>Normative ratings were collected in a group of participants from various European and non-European countries using the English language, which could potentially have influenced the obtained results (Majid,
<xref ref-type="bibr" rid="CR40">2012</xref>
). For instance, using a nonnative language to evaluate emotions could potentially increase participants’ arousal, due to anxiety (Caldwell-Harris & Ayçiçeǧi-Dinn,
<xref ref-type="bibr" rid="CR12">2009</xref>
). Thus, future investigations of the basic emotions expressed by NAPS BE should exploit cross-linguistic variation to take into account possible principles operating between language and emotion.</p>
<p>Finally, we have not applied all of the possible methods of classifying emotional stimuli; for instance, we did not use a recently published method based on Euclidean distance (Wierzba et al.,
<xref ref-type="bibr" rid="CR69">2015</xref>
).</p>
<p>Currently, we are working on dedicated software called the Nencki Affective Picture System Search Tool, which will allow researchers to choose stimuli according to the normative ratings within a combined theoretical framework.</p>
</sec>
</sec>
<sec sec-type="supplementary-material">
<sec id="Sec16">
<title>Electronic supplementary material</title>
<p>Below is the link to the electronic supplementary material.
<supplementary-material content-type="local-data" id="MOESM1">
<media xlink:href="13428_2015_620_MOESM1_ESM.doc">
<label>Fig. S1</label>
<caption>
<p>(DOC 20 kb)</p>
</caption>
</media>
</supplementary-material>
<supplementary-material content-type="local-data" id="MOESM2">
<media xlink:href="13428_2015_620_MOESM2_ESM.doc">
<label>Table S1</label>
<caption>
<p>(DOC 22 kb)</p>
</caption>
</media>
</supplementary-material>
</p>
</sec>
</sec>
</body>
<back>
<app-group>
<app id="App1">
<sec id="Sec15">
<title>Appendix: Instructions for the picture ratings in English</title>
<p>Thank you for agreeing to participate in this study.</p>
<p>We are interested in people’s responses to pictures representing a wide spectrum of situations. For the next 50 min, you will be looking at color photographs on the computer screen.</p>
<p>Your task will be to evaluate each photograph on six 7-point scales, indicating the degree to which you feel happiness, surprise, sadness, anger, disgust, and fear while viewing the picture.</p>
<p>The far left of each scale represents total absence of the given emotion (1 =
<italic>I do not feel it at all</italic>
), while the far right of the scale represents the highest intensity of the emotion (7 =
<italic>I feel it very strongly</italic>
).</p>
<p>Also, fill in responses for the following two scales describing how you feel while viewing the picture.</p>
<p>For the arousal scale (1 =
<italic>I feel completely unaroused or calm</italic>
, and 9 =
<italic>I feel completely aroused or excited</italic>
).</p>
<p>For the valence scale (1 =
<italic>I feel completely happy or satisfied</italic>
, and 9 =
<italic>I feel completely unhappy or annoyed</italic>
).</p>
<p>There are no right or wrong answers; respond as honestly as you can. Before we start, I’d like you to read and sign the “informed consent” statement.</p>
<p>You may find some of the pictures rather disturbing; should you feel uncomfortable, feel free to quit the experiment at any time.</p>
</sec>
</app>
</app-group>
<ack>
<title>Author note</title>
<p>We are grateful to Paweł Turnau, who constructed a Web-based assessment platform used to collect the affective ratings. We also appreciate the help of Łukasz Okruszek and Maksymilian Bielecki in preparing the statistical analysis of the collected affective ratings. Likewise, we thank Benny Briesemeister, the author of DENN-BAWL (Briesemeister et al.,
<xref ref-type="bibr" rid="CR10">2011b</xref>
), for his helpful comments on the construction of NAPS BE. Finally, we thank the reviewers of this article for their valuable and constructive comments, which helped us improve it considerably. The study was supported by the Polish Ministry of Science and Higher Education, Iuventus Plus Grant No. IP2012 042072. The authors have declared that no competing interests exist.</p>
</ack>
<ref-list id="Bib1">
<title>References</title>
<ref id="CR1">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Abramson</surname>
<given-names>LY</given-names>
</name>
<name>
<surname>Metalsky</surname>
<given-names>GI</given-names>
</name>
<name>
<surname>Alloy</surname>
<given-names>LB</given-names>
</name>
</person-group>
<article-title>Hopelessness depression: A theory-based subtype of depression</article-title>
<source>Psychological Review</source>
<year>1989</year>
<volume>96</volume>
<fpage>358</fpage>
<lpage>372</lpage>
<pub-id pub-id-type="doi">10.1037/0033-295X.96.2.358</pub-id>
</element-citation>
</ref>
<ref id="CR2">
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Barrett</surname>
<given-names>LF</given-names>
</name>
<name>
<surname>Bliss-Moreau</surname>
<given-names>E</given-names>
</name>
</person-group>
<person-group person-group-type="editor">
<name>
<surname>Zanna</surname>
<given-names>MP</given-names>
</name>
</person-group>
<article-title>Affect as a psychological primitive</article-title>
<source>Advances in experimental social psychology</source>
<year>2009</year>
<publisher-loc>London, UK</publisher-loc>
<publisher-name>Academic Press</publisher-name>
<fpage>167</fpage>
<lpage>218</lpage>
<pub-id pub-id-type="doi">10.1016/S0065-2601(08)00404-8</pub-id>
</element-citation>
</ref>
<ref id="CR3">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bayer</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Sommer</surname>
<given-names>W</given-names>
</name>
<name>
<surname>Schacht</surname>
<given-names>A</given-names>
</name>
</person-group>
<article-title>Reading emotional words within sentences: The impact of arousal and valence on event-related potentials</article-title>
<source>International Journal of Psychophysiology</source>
<year>2010</year>
<volume>78</volume>
<fpage>299</fpage>
<lpage>307</lpage>
<pub-id pub-id-type="doi">10.1016/j.ijpsycho.2010.09.004</pub-id>
<pub-id pub-id-type="pmid">20854848</pub-id>
</element-citation>
</ref>
<ref id="CR4">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bradley</surname>
<given-names>MM</given-names>
</name>
<name>
<surname>Codispoti</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Cuthbert</surname>
<given-names>BN</given-names>
</name>
<name>
<surname>Lang</surname>
<given-names>PJ</given-names>
</name>
</person-group>
<article-title>Emotion and motivation I: Defensive and appetitive reactions in picture processing</article-title>
<source>Emotion</source>
<year>2001</year>
<volume>1</volume>
<fpage>276</fpage>
<lpage>298</lpage>
<pub-id pub-id-type="doi">10.1037/1528-3542.1.3.276</pub-id>
<pub-id pub-id-type="pmid">12934687</pub-id>
</element-citation>
</ref>
<ref id="CR5">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bradley</surname>
<given-names>MM</given-names>
</name>
<name>
<surname>Codispoti</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Sabatinelli</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Lang</surname>
<given-names>PJ</given-names>
</name>
</person-group>
<article-title>Emotion and motivation II: Sex differences in picture processing</article-title>
<source>Emotion</source>
<year>2001</year>
<volume>1</volume>
<fpage>300</fpage>
<lpage>319</lpage>
<pub-id pub-id-type="doi">10.1037/1528-3542.1.3.300</pub-id>
<pub-id pub-id-type="pmid">12934688</pub-id>
</element-citation>
</ref>
<ref id="CR6">
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Bradley</surname>
<given-names>MM</given-names>
</name>
<name>
<surname>Lang</surname>
<given-names>PJ</given-names>
</name>
</person-group>
<source>Affective Norms for English Words (ANEW): Stimuli, instruction manual and affective ratings (Technical Report No. C-1)</source>
<year>1999</year>
<publisher-loc>Gainesville, FL</publisher-loc>
<publisher-name>University of Florida, NIMH Center for Research in Psychophysiology</publisher-name>
</element-citation>
</ref>
<ref id="CR7">
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Bradley</surname>
<given-names>MM</given-names>
</name>
<name>
<surname>Lang</surname>
<given-names>PJ</given-names>
</name>
</person-group>
<person-group person-group-type="editor">
<name>
<surname>Coan</surname>
<given-names>JA</given-names>
</name>
<name>
<surname>Allen</surname>
<given-names>JJB</given-names>
</name>
</person-group>
<article-title>The International Affective Picture System (IAPS) in the study of emotion and attention</article-title>
<source>Handbook of emotion elicitation and assessment</source>
<year>2007</year>
<publisher-loc>Oxford, UK</publisher-loc>
<publisher-name>Oxford University Press</publisher-name>
<fpage>29</fpage>
<lpage>46</lpage>
</element-citation>
</ref>
<ref id="CR8">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Briesemeister</surname>
<given-names>BB</given-names>
</name>
<name>
<surname>Kuchinke</surname>
<given-names>L</given-names>
</name>
<name>
<surname>Jacobs</surname>
<given-names>AM</given-names>
</name>
<name>
<surname>Braun</surname>
<given-names>M</given-names>
</name>
</person-group>
<article-title>Emotions in reading: Dissociation of happiness and positivity</article-title>
<source>Cognitive, Affective, &amp; Behavioral Neuroscience</source>
<year>2015</year>
<volume>15</volume>
<fpage>287</fpage>
<lpage>298</lpage>
<pub-id pub-id-type="doi">10.3758/s13415-014-0327-2</pub-id>
</element-citation>
</ref>
<ref id="CR9">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Briesemeister</surname>
<given-names>BB</given-names>
</name>
<name>
<surname>Kuchinke</surname>
<given-names>L</given-names>
</name>
<name>
<surname>Jacobs</surname>
<given-names>AM</given-names>
</name>
</person-group>
<article-title>Discrete emotion effects on lexical decision response times</article-title>
<source>PLoS ONE</source>
<year>2011</year>
<volume>6</volume>
<pub-id pub-id-type="doi">10.1371/journal.pone.0023743</pub-id>
<pub-id pub-id-type="pmid">21887307</pub-id>
</element-citation>
</ref>
<ref id="CR10">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Briesemeister</surname>
<given-names>BB</given-names>
</name>
<name>
<surname>Kuchinke</surname>
<given-names>L</given-names>
</name>
<name>
<surname>Jacobs</surname>
<given-names>AM</given-names>
</name>
</person-group>
<article-title>Discrete emotion norms for nouns: Berlin affective word list (DENN-BAWL)</article-title>
<source>Behavior Research Methods</source>
<year>2011</year>
<volume>43</volume>
<fpage>441</fpage>
<lpage>448</lpage>
<pub-id pub-id-type="doi">10.3758/s13428-011-0059-y</pub-id>
<pub-id pub-id-type="pmid">21416309</pub-id>
</element-citation>
</ref>
<ref id="CR11">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Briesemeister</surname>
<given-names>BB</given-names>
</name>
<name>
<surname>Kuchinke</surname>
<given-names>L</given-names>
</name>
<name>
<surname>Jacobs</surname>
<given-names>AM</given-names>
</name>
</person-group>
<article-title>Emotion word recognition: Discrete information effects first, continuous later?</article-title>
<source>Brain Research</source>
<year>2014</year>
<volume>1564</volume>
<fpage>62</fpage>
<lpage>71</lpage>
<pub-id pub-id-type="doi">10.1016/j.brainres.2014.03.045</pub-id>
<pub-id pub-id-type="pmid">24713350</pub-id>
</element-citation>
</ref>
<ref id="CR12">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Caldwell-Harris</surname>
<given-names>CL</given-names>
</name>
<name>
<surname>Ayçiçeǧi-Dinn</surname>
<given-names>A</given-names>
</name>
</person-group>
<article-title>Emotion and lying in a non-native language</article-title>
<source>International Journal of Psychophysiology</source>
<year>2009</year>
<volume>71</volume>
<fpage>193</fpage>
<lpage>204</lpage>
<pub-id pub-id-type="doi">10.1016/j.ijpsycho.2008.09.006</pub-id>
<pub-id pub-id-type="pmid">18929603</pub-id>
</element-citation>
</ref>
<ref id="CR13">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Chapman</surname>
<given-names>HA</given-names>
</name>
<name>
<surname>Johannes</surname>
<given-names>K</given-names>
</name>
<name>
<surname>Poppenk</surname>
<given-names>JL</given-names>
</name>
<name>
<surname>Moscovitch</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Anderson</surname>
<given-names>AK</given-names>
</name>
</person-group>
<article-title>Evidence for the differential salience of disgust and fear in episodic memory</article-title>
<source>Journal of Experimental Psychology: General</source>
<year>2012</year>
<volume>142</volume>
<fpage>1100</fpage>
<lpage>1112</lpage>
<pub-id pub-id-type="doi">10.1037/a0030503</pub-id>
<pub-id pub-id-type="pmid">23067064</pub-id>
</element-citation>
</ref>
<ref id="CR14">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Colden</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Bruder</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Manstead</surname>
<given-names>ASR</given-names>
</name>
</person-group>
<article-title>Human content in affect-inducing stimuli: A secondary analysis of the international affective picture system</article-title>
<source>Motivation and Emotion</source>
<year>2008</year>
<volume>32</volume>
<fpage>260</fpage>
<lpage>269</lpage>
<pub-id pub-id-type="doi">10.1007/s11031-008-9107-z</pub-id>
</element-citation>
</ref>
<ref id="CR15">
<mixed-citation publication-type="other">Colibazzi, T., Posner, J., Wang, Z., Gorman, D., Gerber, A., Yu, S., . . . Peterson, B. S. (2010). Neural systems subserving valence and arousal during the experience of induced emotions.
<italic>Emotion</italic>
,
<italic>10</italic>
, 377–389. doi:10.1037/a0018484</mixed-citation>
</ref>
<ref id="CR16">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Costa</surname>
<given-names>T</given-names>
</name>
<name>
<surname>Cauda</surname>
<given-names>F</given-names>
</name>
<name>
<surname>Crini</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Tatu</surname>
<given-names>M-K</given-names>
</name>
<name>
<surname>Celeghin</surname>
<given-names>A</given-names>
</name>
<name>
<surname>de Gelder</surname>
<given-names>B</given-names>
</name>
<name>
<surname>Tamietto</surname>
<given-names>M</given-names>
</name>
</person-group>
<article-title>Temporal and spatial neural dynamics in the perception of basic emotions from complex scenes</article-title>
<source>Social Cognitive and Affective Neuroscience</source>
<year>2014</year>
<volume>9</volume>
<fpage>1690</fpage>
<lpage>1703</lpage>
<pub-id pub-id-type="doi">10.1093/scan/nst164</pub-id>
<pub-id pub-id-type="pmid">24214921</pub-id>
</element-citation>
</ref>
<ref id="CR17">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Croucher</surname>
<given-names>CJ</given-names>
</name>
<name>
<surname>Calder</surname>
<given-names>AJ</given-names>
</name>
<name>
<surname>Ramponi</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Barnard</surname>
<given-names>PJ</given-names>
</name>
<name>
<surname>Murphy</surname>
<given-names>FC</given-names>
</name>
</person-group>
<article-title>Disgust enhances the recollection of negative emotional images</article-title>
<source>PLoS ONE</source>
<year>2011</year>
<volume>6</volume>
<pub-id pub-id-type="doi">10.1371/journal.pone.0026571</pub-id>
<pub-id pub-id-type="pmid">22110588</pub-id>
</element-citation>
</ref>
<ref id="CR18">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Darwin</surname>
<given-names>C</given-names>
</name>
</person-group>
<article-title>The expression of the emotions in man and animals</article-title>
<source>American Journal of the Medical Sciences</source>
<year>1872</year>
<volume>232</volume>
<fpage>477</fpage>
<pub-id pub-id-type="doi">10.1097/00000441-195610000-00024</pub-id>
</element-citation>
</ref>
<ref id="CR19">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Delaveau</surname>
<given-names>P</given-names>
</name>
<name>
<surname>Jabourian</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Lemogne</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Guionnet</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Bergouignan</surname>
<given-names>L</given-names>
</name>
<name>
<surname>Fossati</surname>
<given-names>P</given-names>
</name>
</person-group>
<article-title>Brain effects of antidepressants in major depression: A meta-analysis of emotional processing studies</article-title>
<source>Journal of Affective Disorders</source>
<year>2011</year>
<volume>130</volume>
<fpage>66</fpage>
<lpage>74</lpage>
<pub-id pub-id-type="doi">10.1016/j.jad.2010.09.032</pub-id>
<pub-id pub-id-type="pmid">21030092</pub-id>
</element-citation>
</ref>
<ref id="CR20">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Eerola</surname>
<given-names>T</given-names>
</name>
<name>
<surname>Vuoskoski</surname>
<given-names>JK</given-names>
</name>
</person-group>
<article-title>A comparison of the discrete and dimensional models of emotion in music</article-title>
<source>Psychology of Music</source>
<year>2011</year>
<volume>39</volume>
<fpage>18</fpage>
<lpage>49</lpage>
<pub-id pub-id-type="doi">10.1177/0305735610362821</pub-id>
</element-citation>
</ref>
<ref id="CR21">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ekman</surname>
<given-names>P</given-names>
</name>
</person-group>
<article-title>An argument for basic emotions</article-title>
<source>Cognition &amp; Emotion</source>
<year>1992</year>
<volume>6</volume>
<fpage>169</fpage>
<lpage>200</lpage>
<pub-id pub-id-type="doi">10.1080/02699939208411068</pub-id>
</element-citation>
</ref>
<ref id="CR22">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ekman</surname>
<given-names>P</given-names>
</name>
</person-group>
<article-title>Facial expression and emotion</article-title>
<source>American Psychologist</source>
<year>1993</year>
<volume>48</volume>
<fpage>384</fpage>
<lpage>392</lpage>
<pub-id pub-id-type="doi">10.1037/0003-066X.48.4.384</pub-id>
<pub-id pub-id-type="pmid">8512154</pub-id>
</element-citation>
</ref>
<ref id="CR23">
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Ekman</surname>
<given-names>P</given-names>
</name>
</person-group>
<person-group person-group-type="editor">
<name>
<surname>Dalgleish</surname>
<given-names>T</given-names>
</name>
<name>
<surname>Power</surname>
<given-names>MJ</given-names>
</name>
</person-group>
<article-title>Basic emotions</article-title>
<source>Handbook of cognition and emotion</source>
<year>1999</year>
<publisher-loc>New York, NY</publisher-loc>
<publisher-name>Wiley</publisher-name>
<fpage>45</fpage>
<lpage>60</lpage>
</element-citation>
</ref>
<ref id="CR24">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ferré</surname>
<given-names>P</given-names>
</name>
<name>
<surname>Guasch</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Moldovan</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Sánchez-Casas</surname>
<given-names>R</given-names>
</name>
</person-group>
<article-title>Affective norms for 380 Spanish words belonging to three different semantic categories</article-title>
<source>Behavior Research Methods</source>
<year>2012</year>
<volume>44</volume>
<fpage>395</fpage>
<lpage>403</lpage>
<pub-id pub-id-type="doi">10.3758/s13428-011-0165-x</pub-id>
<pub-id pub-id-type="pmid">22042646</pub-id>
</element-citation>
</ref>
<ref id="CR25">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Flom</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Janis</surname>
<given-names>RB</given-names>
</name>
<name>
<surname>Garcia</surname>
<given-names>DJ</given-names>
</name>
<name>
<surname>Kirwan</surname>
<given-names>CB</given-names>
</name>
</person-group>
<article-title>The effects of exposure to dynamic expressions of affect on 5-month-olds’ memory</article-title>
<source>Infant Behavior and Development</source>
<year>2014</year>
<volume>37</volume>
<fpage>752</fpage>
<lpage>759</lpage>
<pub-id pub-id-type="doi">10.1016/j.infbeh.2014.09.006</pub-id>
<pub-id pub-id-type="pmid">25459793</pub-id>
</element-citation>
</ref>
<ref id="CR26">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Fontaine</surname>
<given-names>JRJ</given-names>
</name>
<name>
<surname>Scherer</surname>
<given-names>KR</given-names>
</name>
<name>
<surname>Roesch</surname>
<given-names>EB</given-names>
</name>
<name>
<surname>Ellsworth</surname>
<given-names>PC</given-names>
</name>
</person-group>
<article-title>The world of emotion is not two-dimensional</article-title>
<source>Psychological Science</source>
<year>2007</year>
<volume>18</volume>
<fpage>1050</fpage>
<lpage>1057</lpage>
<pub-id pub-id-type="doi">10.1111/j.1467-9280.2007.02024.x</pub-id>
<pub-id pub-id-type="pmid">18031411</pub-id>
</element-citation>
</ref>
<ref id="CR27">
<mixed-citation publication-type="other">Fusar-Poli, P., Placentino, A., Carletti, F., Landi, P., Allen, P., Surguladze, S., . . . Politi, P. (2009). Functional atlas of emotional faces processing: A voxel-based meta-analysis of 105 functional magnetic resonance imaging studies.
<italic>Journal of Psychiatry &amp; Neuroscience</italic>
,
<italic>34</italic>
, 418–432. Retrieved from
<ext-link ext-link-type="uri" xlink:href="http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2783433/">www.ncbi.nlm.nih.gov/pmc/articles/PMC2783433/</ext-link>
</mixed-citation>
</ref>
<ref id="CR28">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Gerrards-Hesse</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Spies</surname>
<given-names>K</given-names>
</name>
<name>
<surname>Hesse</surname>
<given-names>FW</given-names>
</name>
</person-group>
<article-title>Experimental inductions of emotional states and their effectiveness: A review</article-title>
<source>British Journal of Psychology</source>
<year>1994</year>
<volume>85</volume>
<fpage>55</fpage>
<lpage>78</lpage>
<pub-id pub-id-type="doi">10.1111/j.2044-8295.1994.tb02508.x</pub-id>
</element-citation>
</ref>
<ref id="CR29">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hamann</surname>
<given-names>S</given-names>
</name>
</person-group>
<article-title>Mapping discrete and dimensional emotions onto the brain: Controversies and consensus</article-title>
<source>Trends in Cognitive Sciences</source>
<year>2012</year>
<volume>16</volume>
<fpage>458</fpage>
<lpage>466</lpage>
<pub-id pub-id-type="doi">10.1016/j.tics.2012.07.006</pub-id>
<pub-id pub-id-type="pmid">22890089</pub-id>
</element-citation>
</ref>
<ref id="CR30">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hinojosa</surname>
<given-names>JA</given-names>
</name>
<name>
<surname>Martínez-García</surname>
<given-names>N</given-names>
</name>
<name>
<surname>Villalba-García</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Fernández-Folgueiras</surname>
<given-names>U</given-names>
</name>
<name>
<surname>Sánchez-Carmona</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Pozo</surname>
<given-names>MA</given-names>
</name>
<name>
<surname>Montoro</surname>
<given-names>PR</given-names>
</name>
</person-group>
<article-title>Affective norms of 875 Spanish words for five discrete emotional categories and two emotional dimensions</article-title>
<source>Behavior Research Methods</source>
<year>2015</year>
</element-citation>
</ref>
<ref id="CR31">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hu</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Wan</surname>
<given-names>H</given-names>
</name>
</person-group>
<article-title>Imagined events with specific emotional valence produce specific patterns of facial EMG activity</article-title>
<source>Perceptual and Motor Skills</source>
<year>2003</year>
<volume>97</volume>
<fpage>1091</fpage>
<lpage>1099</lpage>
<pub-id pub-id-type="doi">10.2466/PMS.97.8.1091-1099</pub-id>
<pub-id pub-id-type="pmid">15002852</pub-id>
</element-citation>
</ref>
<ref id="CR32">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Imbir</surname>
<given-names>KK</given-names>
</name>
</person-group>
<article-title>Affective norms for 1,586 Polish words (ANPW): Duality-of-mind approach</article-title>
<source>Behavior Research Methods</source>
<year>2014</year>
</element-citation>
</ref>
<ref id="CR33">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Izard</surname>
<given-names>CE</given-names>
</name>
</person-group>
<article-title>Emotion theory and research: Highlights, unanswered questions, and emerging issues</article-title>
<source>Annual Review of Psychology</source>
<year>2009</year>
<volume>60</volume>
<fpage>1</fpage>
<lpage>25</lpage>
<pub-id pub-id-type="doi">10.1146/annurev.psych.60.110707.163539</pub-id>
<pub-id pub-id-type="pmid">18729725</pub-id>
</element-citation>
</ref>
<ref id="CR34">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Javela</surname>
<given-names>JJ</given-names>
</name>
<name>
<surname>Mercadillo</surname>
<given-names>RE</given-names>
</name>
<name>
<surname>Martín Ramírez</surname>
<given-names>J</given-names>
</name>
</person-group>
<article-title>Anger and associated experiences of sadness, fear, valence, arousal, and dominance evoked by visual scenes</article-title>
<source>Psychological Reports</source>
<year>2008</year>
<volume>103</volume>
<fpage>663</fpage>
<lpage>681</lpage>
<pub-id pub-id-type="pmid">19320198</pub-id>
</element-citation>
</ref>
<ref id="CR35">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kassam</surname>
<given-names>KS</given-names>
</name>
<name>
<surname>Markey</surname>
<given-names>AR</given-names>
</name>
<name>
<surname>Cherkassky</surname>
<given-names>VL</given-names>
</name>
<name>
<surname>Loewenstein</surname>
<given-names>G</given-names>
</name>
<name>
<surname>Just</surname>
<given-names>MA</given-names>
</name>
</person-group>
<article-title>Identifying emotions on the basis of neural activation</article-title>
<source>PLoS ONE</source>
<year>2013</year>
<volume>8</volume>
<pub-id pub-id-type="doi">10.1371/journal.pone.0066032</pub-id>
<pub-id pub-id-type="pmid">23840392</pub-id>
</element-citation>
</ref>
<ref id="CR36">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kissler</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Herbert</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Peyk</surname>
<given-names>P</given-names>
</name>
<name>
<surname>Junghofer</surname>
<given-names>M</given-names>
</name>
</person-group>
<article-title>Buzzwords: Early cortical responses to emotional words during reading</article-title>
<source>Psychological Science</source>
<year>2007</year>
<volume>18</volume>
<fpage>475</fpage>
<lpage>480</lpage>
<pub-id pub-id-type="doi">10.1111/j.1467-9280.2007.01924.x</pub-id>
<pub-id pub-id-type="pmid">17576257</pub-id>
</element-citation>
</ref>
<ref id="CR37">
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Lang</surname>
<given-names>PJ</given-names>
</name>
</person-group>
<person-group person-group-type="editor">
<name>
<surname>Sidowski</surname>
<given-names>JB</given-names>
</name>
<name>
<surname>Johnson</surname>
<given-names>JH</given-names>
</name>
<name>
<surname>Williams</surname>
<given-names>TA</given-names>
</name>
</person-group>
<article-title>Behavioral treatment and bio-behavioral assessment: Computer applications</article-title>
<source>Technology in mental health care delivery systems</source>
<year>1980</year>
<publisher-loc>Norwood, NJ</publisher-loc>
<publisher-name>Ablex</publisher-name>
<fpage>119</fpage>
<lpage>137</lpage>
</element-citation>
</ref>
<ref id="CR38">
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Lang</surname>
<given-names>PJ</given-names>
</name>
<name>
<surname>Bradley</surname>
<given-names>MM</given-names>
</name>
<name>
<surname>Cuthbert</surname>
<given-names>BN</given-names>
</name>
</person-group>
<source>International Affective Picture System (IAPS): Affective ratings of pictures and instruction manual (Technical Report No. A-8)</source>
<year>2008</year>
<publisher-loc>Gainesville, FL</publisher-loc>
<publisher-name>University of Florida, Center for Research in Psychophysiology</publisher-name>
</element-citation>
</ref>
<ref id="CR39">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Lindquist</surname>
<given-names>KA</given-names>
</name>
<name>
<surname>Wager</surname>
<given-names>TD</given-names>
</name>
<name>
<surname>Kober</surname>
<given-names>H</given-names>
</name>
<name>
<surname>Bliss-Moreau</surname>
<given-names>E</given-names>
</name>
<name>
<surname>Barrett</surname>
<given-names>LF</given-names>
</name>
</person-group>
<article-title>The brain basis of emotion: A meta-analytic review</article-title>
<source>Behavioral and Brain Sciences</source>
<year>2012</year>
<volume>35</volume>
<fpage>121</fpage>
<lpage>143</lpage>
<pub-id pub-id-type="doi">10.1017/S0140525X11000446</pub-id>
<pub-id pub-id-type="pmid">22617651</pub-id>
</element-citation>
</ref>
<ref id="CR40">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Majid</surname>
<given-names>A</given-names>
</name>
</person-group>
<article-title>Current emotion research in the language sciences</article-title>
<source>Emotion Review</source>
<year>2012</year>
<volume>4</volume>
<fpage>432</fpage>
<lpage>443</lpage>
<pub-id pub-id-type="doi">10.1177/1754073912445827</pub-id>
</element-citation>
</ref>
<ref id="CR41">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Marchewka</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Żurawski</surname>
<given-names>Ł</given-names>
</name>
<name>
<surname>Jednoróg</surname>
<given-names>K</given-names>
</name>
<name>
<surname>Grabowska</surname>
<given-names>A</given-names>
</name>
</person-group>
<article-title>The Nencki Affective Picture System (NAPS): Introduction to a novel, standardized, wide-range, high-quality, realistic picture database</article-title>
<source>Behavior Research Methods</source>
<year>2014</year>
<volume>46</volume>
<fpage>596</fpage>
<lpage>610</lpage>
<pub-id pub-id-type="doi">10.3758/s13428-013-0379-1</pub-id>
<pub-id pub-id-type="pmid">23996831</pub-id>
</element-citation>
</ref>
<ref id="CR42">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Mauss</surname>
<given-names>IB</given-names>
</name>
<name>
<surname>Robinson</surname>
<given-names>MD</given-names>
</name>
</person-group>
<article-title>Measures of emotion: A review</article-title>
<source>Cognition &amp; Emotion</source>
<year>2009</year>
<volume>23</volume>
<fpage>209</fpage>
<lpage>237</lpage>
<pub-id pub-id-type="doi">10.1080/02699930802204677</pub-id>
<pub-id pub-id-type="pmid">19809584</pub-id>
</element-citation>
</ref>
<ref id="CR43">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Mikels</surname>
<given-names>JA</given-names>
</name>
<name>
<surname>Fredrickson</surname>
<given-names>BL</given-names>
</name>
<name>
<surname>Larkin</surname>
<given-names>GR</given-names>
</name>
<name>
<surname>Lindberg</surname>
<given-names>CM</given-names>
</name>
<name>
<surname>Maglio</surname>
<given-names>SJ</given-names>
</name>
<name>
<surname>Reuter-Lorenz</surname>
<given-names>PA</given-names>
</name>
</person-group>
<article-title>Emotional category data on images from the International Affective Picture System</article-title>
<source>Behavior Research Methods</source>
<year>2005</year>
<volume>37</volume>
<fpage>626</fpage>
<lpage>630</lpage>
<pub-id pub-id-type="doi">10.3758/BF03192732</pub-id>
<pub-id pub-id-type="pmid">16629294</pub-id>
</element-citation>
</ref>
<ref id="CR44">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Monnier</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Syssau</surname>
<given-names>A</given-names>
</name>
</person-group>
<article-title>Affective norms for French words (FAN)</article-title>
<source>Behavior Research Methods</source>
<year>2014</year>
<volume>46</volume>
<fpage>1128</fpage>
<lpage>1137</lpage>
<pub-id pub-id-type="doi">10.3758/s13428-013-0431-1</pub-id>
<pub-id pub-id-type="pmid">24366716</pub-id>
</element-citation>
</ref>
<ref id="CR45">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Montefinese</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Ambrosini</surname>
<given-names>E</given-names>
</name>
<name>
<surname>Fairfield</surname>
<given-names>B</given-names>
</name>
<name>
<surname>Mammarella</surname>
<given-names>N</given-names>
</name>
</person-group>
<article-title>The adaptation of the Affective Norms for English Words (ANEW) for Italian</article-title>
<source>Behavior Research Methods</source>
<year>2014</year>
<volume>46</volume>
<fpage>887</fpage>
<lpage>903</lpage>
<pub-id pub-id-type="doi">10.3758/s13428-013-0405-3</pub-id>
<pub-id pub-id-type="pmid">24150921</pub-id>
</element-citation>
</ref>
<ref id="CR46">
<mixed-citation publication-type="other">Moors, A., De Houwer, J., Hermans, D., Wanmaker, S., van Schie, K., Van Harmelen, A.-L., . . . Brysbaert, M. (2013). Norms of valence, arousal, dominance, and age of acquisition for 4,300 Dutch words.
<italic>Behavior Research Methods</italic>
,
<italic>45</italic>
, 169–177. doi:10.3758/s13428-012-0243-8</mixed-citation>
</ref>
<ref id="CR47">
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Myers</surname>
<given-names>RH</given-names>
</name>
</person-group>
<source>Classical and modern regression with applications</source>
<year>1990</year>
<publisher-loc>Boston, MA</publisher-loc>
<publisher-name>Duxbury Press</publisher-name>
</element-citation>
</ref>
<ref id="CR48">
<mixed-citation publication-type="other">Olszanowski, M., Pochwatko, G., Kuklinski, K., Scibor-Rylski, M., Lewinski, P., &amp; Ohme, R. K. (2015). Warsaw set of emotional facial expression pictures: A validation study of facial display photographs.
<italic>Frontiers in Psychology, 5</italic>
, 1516. doi:10.3389/fpsyg.2014.01516</mixed-citation>
</ref>
<ref id="CR49">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ortony</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Turner</surname>
<given-names>TJ</given-names>
</name>
</person-group>
<article-title>What’s basic about basic emotions?</article-title>
<source>Psychological Review</source>
<year>1990</year>
<volume>97</volume>
<fpage>315</fpage>
<lpage>331</lpage>
<pub-id pub-id-type="doi">10.1037/0033-295X.97.3.315</pub-id>
<pub-id pub-id-type="pmid">1669960</pub-id>
</element-citation>
</ref>
<ref id="CR50">
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Osgood</surname>
<given-names>CE</given-names>
</name>
<name>
<surname>Suci</surname>
<given-names>GJ</given-names>
</name>
<name>
<surname>Tannenbaum</surname>
<given-names>PH</given-names>
</name>
</person-group>
<source>The measurement of meaning</source>
<year>1957</year>
<publisher-loc>Urbana, IL</publisher-loc>
<publisher-name>University of Illinois Press</publisher-name>
</element-citation>
</ref>
<ref id="CR51">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Panksepp</surname>
<given-names>J</given-names>
</name>
</person-group>
<article-title>A critical role for “affective neuroscience” in resolving what is basic about basic emotions</article-title>
<source>Psychological Review</source>
<year>1992</year>
<volume>99</volume>
<fpage>554</fpage>
<lpage>560</lpage>
<pub-id pub-id-type="doi">10.1037/0033-295X.99.3.554</pub-id>
<pub-id pub-id-type="pmid">1502276</pub-id>
</element-citation>
</ref>
<ref id="CR52">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Reisenzein</surname>
<given-names>R</given-names>
</name>
</person-group>
<article-title>Pleasure-arousal theory and the intensity of emotions</article-title>
<source>Journal of Personality and Social Psychology</source>
<year>1994</year>
<volume>67</volume>
<fpage>525</fpage>
<lpage>539</lpage>
<pub-id pub-id-type="doi">10.1037/0022-3514.67.3.525</pub-id>
</element-citation>
</ref>
<ref id="CR53">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Remington</surname>
<given-names>NA</given-names>
</name>
<name>
<surname>Fabrigar</surname>
<given-names>LR</given-names>
</name>
<name>
<surname>Visser</surname>
<given-names>PS</given-names>
</name>
</person-group>
<article-title>Reexamining the circumplex model of affect</article-title>
<source>Journal of Personality and Social Psychology</source>
<year>2000</year>
<volume>79</volume>
<fpage>286</fpage>
<lpage>300</lpage>
<pub-id pub-id-type="doi">10.1037/0022-3514.79.2.286</pub-id>
<pub-id pub-id-type="pmid">10948981</pub-id>
</element-citation>
</ref>
<ref id="CR54">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ric</surname>
<given-names>F</given-names>
</name>
<name>
<surname>Alexopoulos</surname>
<given-names>T</given-names>
</name>
<name>
<surname>Muller</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Aubé</surname>
<given-names>B</given-names>
</name>
</person-group>
<article-title>Emotional norms for 524 French personality trait words</article-title>
<source>Behavior Research Methods</source>
<year>2013</year>
<volume>45</volume>
<fpage>414</fpage>
<lpage>421</lpage>
<pub-id pub-id-type="doi">10.3758/s13428-012-0276-z</pub-id>
<pub-id pub-id-type="pmid">23263927</pub-id>
</element-citation>
</ref>
<ref id="CR55">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Russell</surname>
<given-names>JA</given-names>
</name>
</person-group>
<article-title>Core affect and the psychological construction of emotion</article-title>
<source>Psychological Review</source>
<year>2003</year>
<volume>110</volume>
<fpage>145</fpage>
<lpage>172</lpage>
<pub-id pub-id-type="doi">10.1037/0033-295X.110.1.145</pub-id>
<pub-id pub-id-type="pmid">12529060</pub-id>
</element-citation>
</ref>
<ref id="CR56">
<mixed-citation publication-type="other">Saarimäki, H., Gotsopoulos, A., Jääskeläinen, I. P., Lampinen, J., Vuilleumier, P., Hari, R., . . . Nummenmaa, L. (2015). Discrete neural signatures of basic emotions.
<italic>Cerebral Cortex</italic>
. Advance online publication. doi:10.1093/cercor/bhv086</mixed-citation>
</ref>
<ref id="CR57">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Scherer</surname>
<given-names>KR</given-names>
</name>
</person-group>
<article-title>What are emotions? And how can they be measured?</article-title>
<source>Social Science Information</source>
<year>2005</year>
<volume>44</volume>
<fpage>695</fpage>
<lpage>729</lpage>
<pub-id pub-id-type="doi">10.1177/0539018405058216</pub-id>
</element-citation>
</ref>
<ref id="CR58">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Schienle</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Wabnegger</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Schoengassner</surname>
<given-names>F</given-names>
</name>
<name>
<surname>Scharmüller</surname>
<given-names>W</given-names>
</name>
</person-group>
<article-title>Neuronal correlates of three attentional strategies during affective picture processing: An fMRI study</article-title>
<source>Cognitive, Affective, &amp;amp; Behavioral Neuroscience</source>
<year>2014</year>
<volume>14</volume>
<fpage>1320</fpage>
<lpage>1326</lpage>
<pub-id pub-id-type="doi">10.3758/s13415-014-0274-y</pub-id>
</element-citation>
</ref>
<ref id="CR59">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Silva</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Montant</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Ponz</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Ziegler</surname>
<given-names>JC</given-names>
</name>
</person-group>
<article-title>Emotions in reading: Disgust, empathy and the contextual learning hypothesis</article-title>
<source>Cognition</source>
<year>2012</year>
<volume>125</volume>
<fpage>333</fpage>
<lpage>338</lpage>
<pub-id pub-id-type="doi">10.1016/j.cognition.2012.07.013</pub-id>
<pub-id pub-id-type="pmid">22884243</pub-id>
</element-citation>
</ref>
<ref id="CR60">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Smith</surname>
<given-names>CA</given-names>
</name>
<name>
<surname>Lazarus</surname>
<given-names>RS</given-names>
</name>
</person-group>
<article-title>Appraisal components, core relational themes, and the emotions</article-title>
<source>Cognition &amp;amp; Emotion</source>
<year>1993</year>
<volume>7</volume>
<fpage>233</fpage>
<lpage>269</lpage>
<pub-id pub-id-type="doi">10.1080/02699939308409189</pub-id>
</element-citation>
</ref>
<ref id="CR61">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Stanley</surname>
<given-names>DJ</given-names>
</name>
<name>
<surname>Meyer</surname>
<given-names>JP</given-names>
</name>
</person-group>
<article-title>Two-dimensional affective space: A new approach to orienting the axes</article-title>
<source>Emotion</source>
<year>2009</year>
<volume>9</volume>
<fpage>214</fpage>
<lpage>237</lpage>
<pub-id pub-id-type="doi">10.1037/a0014612</pub-id>
<pub-id pub-id-type="pmid">19348534</pub-id>
</element-citation>
</ref>
<ref id="CR62">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Stevenson</surname>
<given-names>RA</given-names>
</name>
<name>
<surname>James</surname>
<given-names>TW</given-names>
</name>
</person-group>
<article-title>Affective auditory stimuli: Characterization of the International Affective Digitized Sounds (IADS) by discrete emotional categories</article-title>
<source>Behavior Research Methods</source>
<year>2008</year>
<volume>40</volume>
<fpage>315</fpage>
<lpage>321</lpage>
<pub-id pub-id-type="doi">10.3758/BRM.40.1.315</pub-id>
<pub-id pub-id-type="pmid">18411555</pub-id>
</element-citation>
</ref>
<ref id="CR63">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Stevenson</surname>
<given-names>RA</given-names>
</name>
<name>
<surname>Mikels</surname>
<given-names>JA</given-names>
</name>
<name>
<surname>James</surname>
<given-names>TW</given-names>
</name>
</person-group>
<article-title>Characterization of the affective norms for English words by discrete emotional categories</article-title>
<source>Behavior Research Methods</source>
<year>2007</year>
<volume>39</volume>
<fpage>1020</fpage>
<lpage>1024</lpage>
<pub-id pub-id-type="doi">10.3758/BF03192999</pub-id>
<pub-id pub-id-type="pmid">18183921</pub-id>
</element-citation>
</ref>
<ref id="CR64">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Tettamanti</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Rognoni</surname>
<given-names>E</given-names>
</name>
<name>
<surname>Cafiero</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Costa</surname>
<given-names>T</given-names>
</name>
<name>
<surname>Galati</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Perani</surname>
<given-names>D</given-names>
</name>
</person-group>
<article-title>Distinct pathways of neural coupling for different basic emotions</article-title>
<source>NeuroImage</source>
<year>2012</year>
<volume>59</volume>
<fpage>1804</fpage>
<lpage>1817</lpage>
<pub-id pub-id-type="doi">10.1016/j.neuroimage.2011.08.018</pub-id>
<pub-id pub-id-type="pmid">21888979</pub-id>
</element-citation>
</ref>
<ref id="CR65">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Van Hooff</surname>
<given-names>JC</given-names>
</name>
<name>
<surname>van Buuringen</surname>
<given-names>M</given-names>
</name>
<name>
<surname>El M’rabet</surname>
<given-names>I</given-names>
</name>
<name>
<surname>de Gier</surname>
<given-names>M</given-names>
</name>
<name>
<surname>van Zalingen</surname>
<given-names>L</given-names>
</name>
</person-group>
<article-title>Disgust-specific modulation of early attention processes</article-title>
<source>Acta Psychologica</source>
<year>2014</year>
<volume>152</volume>
<fpage>149</fpage>
<lpage>157</lpage>
<pub-id pub-id-type="doi">10.1016/j.actpsy.2014.08.009</pub-id>
<pub-id pub-id-type="pmid">25226546</pub-id>
</element-citation>
</ref>
<ref id="CR66">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Viinikainen</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Jääskeläinen</surname>
<given-names>IP</given-names>
</name>
<name>
<surname>Alexandrov</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Balk</surname>
<given-names>MH</given-names>
</name>
<name>
<surname>Autti</surname>
<given-names>T</given-names>
</name>
<name>
<surname>Sams</surname>
<given-names>M</given-names>
</name>
</person-group>
<article-title>Nonlinear relationship between emotional valence and brain activity: Evidence of separate negative and positive valence dimensions</article-title>
<source>Human Brain Mapping</source>
<year>2010</year>
<volume>31</volume>
<fpage>1030</fpage>
<lpage>1040</lpage>
<pub-id pub-id-type="doi">10.1002/hbm.20915</pub-id>
<pub-id pub-id-type="pmid">19957266</pub-id>
</element-citation>
</ref>
<ref id="CR67">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Vytal</surname>
<given-names>K</given-names>
</name>
<name>
<surname>Hamann</surname>
<given-names>S</given-names>
</name>
</person-group>
<article-title>Neuroimaging support for discrete neural correlates of basic emotions: A voxel-based meta-analysis</article-title>
<source>Journal of Cognitive Neuroscience</source>
<year>2010</year>
<volume>22</volume>
<fpage>2864</fpage>
<lpage>2885</lpage>
<pub-id pub-id-type="doi">10.1162/jocn.2009.21366</pub-id>
<pub-id pub-id-type="pmid">19929758</pub-id>
</element-citation>
</ref>
<ref id="CR68">
<mixed-citation publication-type="other">Wierzba, M., Riegel, M., Pucz, A., Lesniewska, Z., Dragan, W. Ł., Gola, M., Jednoróg, K., &amp;amp; Marchewka, A. (2015a).
<italic>Erotic subset for the Nencki Affective Picture System (NAPS ERO): Cross-sexual comparison study</italic>
. Manuscript under review.</mixed-citation>
</ref>
<ref id="CR69">
<mixed-citation publication-type="other">Wierzba, M., Riegel, M., Wypych, M., Jednoróg, K., Turnau, P., Grabowska, A., &amp;amp; Marchewka, A. (2015b). Basic Emotions in the Nencki Affective Word List (NAWL BE): New Method of Classifying Emotional Stimuli.
<italic>PLoS ONE, 10</italic>
(7), e0132305. doi:10.1371/journal.pone.0132305</mixed-citation>
</ref>
<ref id="CR70">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wilson-Mendenhall</surname>
<given-names>CD</given-names>
</name>
<name>
<surname>Barrett</surname>
<given-names>LF</given-names>
</name>
<name>
<surname>Barsalou</surname>
<given-names>LW</given-names>
</name>
</person-group>
<article-title>Neural evidence that human emotions share core affective properties</article-title>
<source>Psychological Science</source>
<year>2013</year>
<volume>24</volume>
<fpage>947</fpage>
<lpage>956</lpage>
<pub-id pub-id-type="doi">10.1177/0956797612464242</pub-id>
<pub-id pub-id-type="pmid">23603916</pub-id>
</element-citation>
</ref>
</ref-list>
</back>
</pmc>
</record>

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Wicri/Sarre/explor/MusicSarreV3/Data/Pmc/Corpus
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 0000399 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/Pmc/Corpus/biblio.hfd -nk 0000399 | SxmlIndent | more

To link to this page within the Wicri network

{{Explor lien
   |wiki=    Wicri/Sarre
   |area=    MusicSarreV3
   |flux=    Pmc
   |étape=   Corpus
   |type=    RBID
   |clé=     
   |texte=   
}}

Wicri

This area was generated with Dilib version V0.6.33.
Data generation: Sun Jul 15 18:16:09 2018. Site generation: Tue Mar 5 19:21:25 2024