Exploration server on haptic devices


Emotional effects of dynamic textures

Internal identifier: 001B58 (Pmc/Checkpoint); previous: 001B57; next: 001B59

Authors: Alexander Toet; Menno Henselmans; Marcel P. Lucassen; Theo Gevers

Source:

RBID : PMC:3485791

Abstract

This study explores the effects of various spatiotemporal dynamic texture characteristics on human emotions. The emotional experience of auditory (eg, music) and haptic repetitive patterns has been studied extensively. In contrast, the emotional experience of visual dynamic textures is still largely unknown, despite their natural ubiquity and increasing use in digital media. Participants watched a set of dynamic textures, representing either water or various different media, and self-reported their emotional experience. Motion complexity was found to have mildly relaxing and nondominant effects. In contrast, motion change complexity was found to be arousing and dominant. The speed of dynamics had arousing, dominant, and unpleasant effects. The amplitude of dynamics was also regarded as unpleasant. The regularity of the dynamics over the textures' area was found to be uninteresting, nondominant, mildly relaxing, and mildly pleasant. The spatial scale of the dynamics had an unpleasant, arousing, and dominant effect, which was larger for textures with diverse content than for water textures. For water textures, the effects of spatial contrast were arousing, dominant, interesting, and mildly unpleasant. None of these effects were observed for textures of diverse content. The current findings are relevant for the design and synthesis of affective multimedia content and for affective scene indexing and retrieval.


URL:
DOI: 10.1068/i0477
PubMed: 23145257
PubMed Central: 3485791


Affiliations:


Links toward previous steps (curation, corpus...)


Links to Exploration step

PMC:3485791

The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">Emotional effects of dynamic textures</title>
<author>
<name sortKey="Toet, Alexander" sort="Toet, Alexander" uniqKey="Toet A" first="Alexander" last="Toet">Alexander Toet</name>
</author>
<author>
<name sortKey="Henselmans, Menno" sort="Henselmans, Menno" uniqKey="Henselmans M" first="Menno" last="Henselmans">Menno Henselmans</name>
</author>
<author>
<name sortKey="Lucassen, Marcel P" sort="Lucassen, Marcel P" uniqKey="Lucassen M" first="Marcel P" last="Lucassen">Marcel P. Lucassen</name>
</author>
<author>
<name sortKey="Gevers, Theo" sort="Gevers, Theo" uniqKey="Gevers T" first="Theo" last="Gevers">Theo Gevers</name>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PMC</idno>
<idno type="pmid">23145257</idno>
<idno type="pmc">3485791</idno>
<idno type="url">http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3485791</idno>
<idno type="RBID">PMC:3485791</idno>
<idno type="doi">10.1068/i0477</idno>
<date when="2011">2011</date>
<idno type="wicri:Area/Pmc/Corpus">001F95</idno>
<idno type="wicri:Area/Pmc/Curation">001F95</idno>
<idno type="wicri:Area/Pmc/Checkpoint">001B58</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a" type="main">Emotional effects of dynamic textures</title>
<author>
<name sortKey="Toet, Alexander" sort="Toet, Alexander" uniqKey="Toet A" first="Alexander" last="Toet">Alexander Toet</name>
</author>
<author>
<name sortKey="Henselmans, Menno" sort="Henselmans, Menno" uniqKey="Henselmans M" first="Menno" last="Henselmans">Menno Henselmans</name>
</author>
<author>
<name sortKey="Lucassen, Marcel P" sort="Lucassen, Marcel P" uniqKey="Lucassen M" first="Marcel P" last="Lucassen">Marcel P. Lucassen</name>
</author>
<author>
<name sortKey="Gevers, Theo" sort="Gevers, Theo" uniqKey="Gevers T" first="Theo" last="Gevers">Theo Gevers</name>
</author>
</analytic>
<series>
<title level="j">i-Perception</title>
<idno type="eISSN">2041-6695</idno>
<imprint>
<date when="2011">2011</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass></textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">
<p>This study explores the effects of various spatiotemporal dynamic texture characteristics on human emotions. The emotional experience of auditory (eg, music) and haptic repetitive patterns has been studied extensively. In contrast, the emotional experience of visual dynamic textures is still largely unknown, despite their natural ubiquity and increasing use in digital media. Participants watched a set of dynamic textures, representing either water or various different media, and self-reported their emotional experience. Motion complexity was found to have mildly relaxing and nondominant effects. In contrast, motion change complexity was found to be arousing and dominant. The speed of dynamics had arousing, dominant, and unpleasant effects. The amplitude of dynamics was also regarded as unpleasant. The regularity of the dynamics over the textures' area was found to be uninteresting, nondominant, mildly relaxing, and mildly pleasant. The spatial scale of the dynamics had an unpleasant, arousing, and dominant effect, which was larger for textures with diverse content than for water textures. For water textures, the effects of spatial contrast were arousing, dominant, interesting, and mildly unpleasant. None of these effects were observed for textures of diverse content. The current findings are relevant for the design and synthesis of affective multimedia content and for affective scene indexing and retrieval.</p>
</div>
</front>
<back>
<div1 type="bibliography">
<listBibl>
<biblStruct>
<analytic>
<author>
<name sortKey="Adolphs, R" uniqKey="Adolphs R">R Adolphs</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Akaike, H" uniqKey="Akaike H">H Akaike</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Akaike, H" uniqKey="Akaike H">H Akaike</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Amaral, D G" uniqKey="Amaral D">D G Amaral</name>
</author>
<author>
<name sortKey="Behniea, H" uniqKey="Behniea H">H Behniea</name>
</author>
<author>
<name sortKey="Kelly, J L" uniqKey="Kelly J">J L Kelly</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Arbuckle, J L" uniqKey="Arbuckle J">J L Arbuckle</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Arifin, S" uniqKey="Arifin S">S Arifin</name>
</author>
<author>
<name sortKey="Cheung, P Y K" uniqKey="Cheung P">P Y K Cheung</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Barrington, L" uniqKey="Barrington L">L Barrington</name>
</author>
<author>
<name sortKey="Chan, A B" uniqKey="Chan A">A B Chan</name>
</author>
<author>
<name sortKey="Lanckriet, G" uniqKey="Lanckriet G">G Lanckriet</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Baumgartner, T" uniqKey="Baumgartner T">T Baumgartner</name>
</author>
<author>
<name sortKey="Lutz, K" uniqKey="Lutz K">K Lutz</name>
</author>
<author>
<name sortKey="Schmidt, C F" uniqKey="Schmidt C">C F Schmidt</name>
</author>
<author>
<name sortKey="Jancke, L" uniqKey="Jancke L">L Jancke</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bentler, P M" uniqKey="Bentler P">P M Bentler</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bigand, E" uniqKey="Bigand E">E Bigand</name>
</author>
<author>
<name sortKey="Vieillard, S" uniqKey="Vieillard S">S Vieillard</name>
</author>
<author>
<name sortKey="Madurell, F" uniqKey="Madurell F">F Madurell</name>
</author>
<author>
<name sortKey="Marozeau, J" uniqKey="Marozeau J">J Marozeau</name>
</author>
<author>
<name sortKey="Dacquet, A" uniqKey="Dacquet A">A Dacquet</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Billock, V A" uniqKey="Billock V">V A Billock</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Billock, V A" uniqKey="Billock V">V A Billock</name>
</author>
<author>
<name sortKey="Cunningham, D W" uniqKey="Cunningham D">D W Cunningham</name>
</author>
<author>
<name sortKey="Havig, P R" uniqKey="Havig P">P R Havig</name>
</author>
<author>
<name sortKey="Tsou, B H" uniqKey="Tsou B">B H Tsou</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Billock, V A" uniqKey="Billock V">V A Billock</name>
</author>
<author>
<name sortKey="De Guzman, G C" uniqKey="De Guzman G">G C de Guzman</name>
</author>
<author>
<name sortKey="Kelso, J A S" uniqKey="Kelso J">J A S Kelso</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Boltz, M G" uniqKey="Boltz M">M G Boltz</name>
</author>
<author>
<name sortKey="Ebendorf, B" uniqKey="Ebendorf B">B Ebendorf</name>
</author>
<author>
<name sortKey="Field, B" uniqKey="Field B">B Field</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bresin, R" uniqKey="Bresin R">R Bresin</name>
</author>
<author>
<name sortKey="Friberg, A" uniqKey="Friberg A">A Friberg</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Cerf, M" uniqKey="Cerf M">M Cerf</name>
</author>
<author>
<name sortKey="Cleary, D R" uniqKey="Cleary D">D R Cleary</name>
</author>
<author>
<name sortKey="Peters, R J" uniqKey="Peters R">R J Peters</name>
</author>
<author>
<name sortKey="Einh User, W" uniqKey="Einh User W">W Einhäuser</name>
</author>
<author>
<name sortKey="Koch, C" uniqKey="Koch C">C Koch</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Chan, A B" uniqKey="Chan A">A B Chan</name>
</author>
<author>
<name sortKey="Vasconcelos, N" uniqKey="Vasconcelos N">N Vasconcelos</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Chen, Tp" uniqKey="Chen T">TP Chen</name>
</author>
<author>
<name sortKey="Chen, C W" uniqKey="Chen C">C-W Chen</name>
</author>
<author>
<name sortKey="Popp, P" uniqKey="Popp P">P Popp</name>
</author>
<author>
<name sortKey="Coover, B" uniqKey="Coover B">B Coover</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Chuang, Y Y" uniqKey="Chuang Y">Y-Y Chuang</name>
</author>
<author>
<name sortKey="Goldman, D B" uniqKey="Goldman D">D B Goldman</name>
</author>
<author>
<name sortKey="Zheng, K C" uniqKey="Zheng K">K C Zheng</name>
</author>
<author>
<name sortKey="Curless, B" uniqKey="Curless B">B Curless</name>
</author>
<author>
<name sortKey="Salesin, D H" uniqKey="Salesin D">D H Salesin</name>
</author>
<author>
<name sortKey="Szeliski, R" uniqKey="Szeliski R">R Szeliski</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Constantini, R" uniqKey="Constantini R">R Constantini</name>
</author>
<author>
<name sortKey="Sbaiz, L" uniqKey="Sbaiz L">L Sbaiz</name>
</author>
<author>
<name sortKey="Susstrunk, S" uniqKey="Susstrunk S">S Süsstrunk</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Delplanque, S" uniqKey="Delplanque S">S Delplanque</name>
</author>
<author>
<name sortKey="N Diaye, K" uniqKey="N Diaye K">K N'diaye</name>
</author>
<author>
<name sortKey="Scherer, K" uniqKey="Scherer K">K Scherer</name>
</author>
<author>
<name sortKey="Grandjean, D" uniqKey="Grandjean D">D Grandjean</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Dijkstra, K" uniqKey="Dijkstra K">K Dijkstra</name>
</author>
<author>
<name sortKey="Pieterse, M" uniqKey="Pieterse M">M Pieterse</name>
</author>
<author>
<name sortKey="Pruyn, A" uniqKey="Pruyn A">A Pruyn</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Doretto, G" uniqKey="Doretto G">G Doretto</name>
</author>
<author>
<name sortKey="Chiuso, A" uniqKey="Chiuso A">A Chiuso</name>
</author>
<author>
<name sortKey="Wu, Y N" uniqKey="Wu Y">Y N Wu</name>
</author>
<author>
<name sortKey="Soatto, S" uniqKey="Soatto S">S Soatto</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Fernandez, D" uniqKey="Fernandez D">D Fernandez</name>
</author>
<author>
<name sortKey="Wilkins, A J" uniqKey="Wilkins A">A J Wilkins</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Field, D J" uniqKey="Field D">D J Field</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Field, D J" uniqKey="Field D">D J Field</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Forsythe, A" uniqKey="Forsythe A">A Forsythe</name>
</author>
<author>
<name sortKey="Nadal, M" uniqKey="Nadal M">M Nadal</name>
</author>
<author>
<name sortKey="Sheehy, N" uniqKey="Sheehy N">N Sheehy</name>
</author>
<author>
<name sortKey="Cela Conde, Cj" uniqKey="Cela Conde C">CJ Cela-Conde</name>
</author>
<author>
<name sortKey="Sawey, M" uniqKey="Sawey M">M Sawey</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Fujiwara, M" uniqKey="Fujiwara M">M Fujiwara</name>
</author>
<author>
<name sortKey="Aono, S" uniqKey="Aono S">S Aono</name>
</author>
<author>
<name sortKey="Kuwano, S" uniqKey="Kuwano S">S Kuwano</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gomez, P" uniqKey="Gomez P">P Gomez</name>
</author>
<author>
<name sortKey="Danuser, B" uniqKey="Danuser B">B Danuser</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Groissboeck, W" uniqKey="Groissboeck W">W Groissboeck</name>
</author>
<author>
<name sortKey="Lughofer, E" uniqKey="Lughofer E">E Lughofer</name>
</author>
<author>
<name sortKey="Thumfart, S" uniqKey="Thumfart S">S Thumfart</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Guttman, S E" uniqKey="Guttman S">S E Guttman</name>
</author>
<author>
<name sortKey="Gilroy, L A" uniqKey="Gilroy L">L A Gilroy</name>
</author>
<author>
<name sortKey="Blake, R" uniqKey="Blake R">R Blake</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hagerhall, C M" uniqKey="Hagerhall C">C M Hagerhall</name>
</author>
<author>
<name sortKey="Laike, T" uniqKey="Laike T">T Laike</name>
</author>
<author>
<name sortKey="Taylor, R P" uniqKey="Taylor R">R P Taylor</name>
</author>
<author>
<name sortKey="Kuller, M" uniqKey="Kuller M">M Küller</name>
</author>
<author>
<name sortKey="Kuller, R" uniqKey="Kuller R">R Küller</name>
</author>
<author>
<name sortKey="Martin, T P" uniqKey="Martin T">T P Martin</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hanjalic, A" uniqKey="Hanjalic A">A Hanjalic</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hanjalic, A" uniqKey="Hanjalic A">A Hanjalic</name>
</author>
<author>
<name sortKey="Xu, L Q" uniqKey="Xu L">L-Q Xu</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Houtkamp, J M" uniqKey="Houtkamp J">J M Houtkamp</name>
</author>
<author>
<name sortKey="Schuurink, E L" uniqKey="Schuurink E">E L Schuurink</name>
</author>
<author>
<name sortKey="Toet, A" uniqKey="Toet A">A Toet</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Huang, J" uniqKey="Huang J">J Huang</name>
</author>
<author>
<name sortKey="Waldvogel, M" uniqKey="Waldvogel M">M Waldvogel</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Husain, G" uniqKey="Husain G">G Husain</name>
</author>
<author>
<name sortKey="Thompson, W F" uniqKey="Thompson W">W F Thompson</name>
</author>
<author>
<name sortKey="Schellenberg, E G" uniqKey="Schellenberg E">E G Schellenberg</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Juricevic, I" uniqKey="Juricevic I">I Juricevic</name>
</author>
<author>
<name sortKey="Land, L" uniqKey="Land L">L Land</name>
</author>
<author>
<name sortKey="Wilkins, A" uniqKey="Wilkins A">A Wilkins</name>
</author>
<author>
<name sortKey="Webster, M A" uniqKey="Webster M">M A Webster</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Juslin, P N" uniqKey="Juslin P">P N Juslin</name>
</author>
<author>
<name sortKey="V Stfj Ll, D" uniqKey="V Stfj Ll D">D Västfjäll</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kellaris, J J" uniqKey="Kellaris J">J J Kellaris</name>
</author>
<author>
<name sortKey="Kent, R J" uniqKey="Kent R">R J Kent</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kim, E Y" uniqKey="Kim E">E Y Kim</name>
</author>
<author>
<name sortKey="Kim, S J" uniqKey="Kim S">S-J Kim</name>
</author>
<author>
<name sortKey="Jeong, K" uniqKey="Jeong K">K Jeong</name>
</author>
<author>
<name sortKey="Kim, J" uniqKey="Kim J">J Kim</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kline, R B" uniqKey="Kline R">R B Kline</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kuller, R" uniqKey="Kuller R">R Küller</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lai, C H" uniqKey="Lai C">C-H Lai</name>
</author>
<author>
<name sortKey="Wu, J L" uniqKey="Wu J">J-L Wu</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lin, T" uniqKey="Lin T">T Lin</name>
</author>
<author>
<name sortKey="Imamiya, A" uniqKey="Imamiya A">A Imamiya</name>
</author>
<author>
<name sortKey="Hu, W" uniqKey="Hu W">W Hu</name>
</author>
<author>
<name sortKey="Omata, M" uniqKey="Omata M">M Omata</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lucassen, M P" uniqKey="Lucassen M">M P Lucassen</name>
</author>
<author>
<name sortKey="Gevers, T" uniqKey="Gevers T">T Gevers</name>
</author>
<author>
<name sortKey="Gijsenij, A" uniqKey="Gijsenij A">A Gijsenij</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Maccallum, R C" uniqKey="Maccallum R">R C MacCallum</name>
</author>
<author>
<name sortKey="Austin, J T" uniqKey="Austin J">J T Austin</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Machajdik, J" uniqKey="Machajdik J">J Machajdik</name>
</author>
<author>
<name sortKey="Hanbury, A" uniqKey="Hanbury A">A Hanbury</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mao, X" uniqKey="Mao X">X Mao</name>
</author>
<author>
<name sortKey="Chen, B" uniqKey="Chen B">B Chen</name>
</author>
<author>
<name sortKey="Muta, I" uniqKey="Muta I">I Muta</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Masakura, Y" uniqKey="Masakura Y">Y Masakura</name>
</author>
<author>
<name sortKey="Nagai, M" uniqKey="Nagai M">M Nagai</name>
</author>
<author>
<name sortKey="Kumada, T" uniqKey="Kumada T">T Kumada</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mckinney, C H" uniqKey="Mckinney C">C H McKinney</name>
</author>
<author>
<name sortKey="Tims, F C" uniqKey="Tims F">F C Tims</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mehrabian, A" uniqKey="Mehrabian A">A Mehrabian</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Nasar, J L" uniqKey="Nasar J">J L Nasar</name>
</author>
<author>
<name sortKey="Lin, Y H" uniqKey="Lin Y">Y-H Lin</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="O Hare, L" uniqKey="O Hare L">L O'Hare</name>
</author>
<author>
<name sortKey="Hibbard, Pb" uniqKey="Hibbard P">PB Hibbard</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Parraga, Ca" uniqKey="Parraga C">CA Parraga</name>
</author>
<author>
<name sortKey="Troscianko, T" uniqKey="Troscianko T">T Troscianko</name>
</author>
<author>
<name sortKey="Tolhurst, D J" uniqKey="Tolhurst D">D J Tolhurst</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Peteri, R" uniqKey="Peteri R">R Péteri</name>
</author>
<author>
<name sortKey="Chetverikov, D" uniqKey="Chetverikov D">D Chetverikov</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Peteri, R" uniqKey="Peteri R">R Péteri</name>
</author>
<author>
<name sortKey="Fazekas, S" uniqKey="Fazekas S">S Fazekas</name>
</author>
<author>
<name sortKey="Huiskes, M J" uniqKey="Huiskes M">M J Huiskes</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Post, F H" uniqKey="Post F">F H Post</name>
</author>
<author>
<name sortKey="Vrolijk, B" uniqKey="Vrolijk B">B Vrolijk</name>
</author>
<author>
<name sortKey="Hauser, H" uniqKey="Hauser H">H Hauser</name>
</author>
<author>
<name sortKey="Laramee, R S" uniqKey="Laramee R">R S Laramee</name>
</author>
<author>
<name sortKey="Doleisch, H" uniqKey="Doleisch H">H Doleisch</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Recanzone, G H" uniqKey="Recanzone G">G H Recanzone</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Russell, J A" uniqKey="Russell J">J A Russell</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Russell, J A" uniqKey="Russell J">J A Russell</name>
</author>
<author>
<name sortKey="Pratt, G" uniqKey="Pratt G">G Pratt</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Russell, J A" uniqKey="Russell J">J A Russell</name>
</author>
<author>
<name sortKey="Ward, L M" uniqKey="Ward L">L M Ward</name>
</author>
<author>
<name sortKey="Pratt, G" uniqKey="Pratt G">G Pratt</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Salminen, K" uniqKey="Salminen K">K Salminen</name>
</author>
<author>
<name sortKey="Surakka, V" uniqKey="Surakka V">V Surakka</name>
</author>
<author>
<name sortKey="Lylykangas, J" uniqKey="Lylykangas J">J Lylykangas</name>
</author>
<author>
<name sortKey="Raisamo, R" uniqKey="Raisamo R">R Raisamo</name>
</author>
<author>
<name sortKey="Saarinen, R" uniqKey="Saarinen R">R Saarinen</name>
</author>
<author>
<name sortKey="Raisamo, R" uniqKey="Raisamo R">R Raisamo</name>
</author>
<author>
<name sortKey="Rantala, J" uniqKey="Rantala J">J Rantala</name>
</author>
<author>
<name sortKey="Evreinov, G" uniqKey="Evreinov G">G Evreinov</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Schermelleh Engel, K" uniqKey="Schermelleh Engel K">K Schermelleh-Engel</name>
</author>
<author>
<name sortKey="Moosbrugger, H" uniqKey="Moosbrugger H">H Moosbrugger</name>
</author>
<author>
<name sortKey="Muller, H" uniqKey="Muller H">H Müller</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Simmons, D R" uniqKey="Simmons D">D R Simmons</name>
</author>
<author>
<name sortKey="Russell, C L" uniqKey="Russell C">C L Russell</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Smith, J R" uniqKey="Smith J">J R Smith</name>
</author>
<author>
<name sortKey="Lin, C Y" uniqKey="Lin C">C-Y Lin</name>
</author>
<author>
<name sortKey="Naphade, M" uniqKey="Naphade M">M Naphade</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Soleymani, M" uniqKey="Soleymani M">M Soleymani</name>
</author>
<author>
<name sortKey="Chanel, G" uniqKey="Chanel G">G Chanel</name>
</author>
<author>
<name sortKey="Kierkels, J J M" uniqKey="Kierkels J">J J M Kierkels</name>
</author>
<author>
<name sortKey="Pun, T" uniqKey="Pun T">T Pun</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Steiger, J H" uniqKey="Steiger J">J H Steiger</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Suk, H J" uniqKey="Suk H">H-J Suk</name>
</author>
<author>
<name sortKey="Jeong, S H" uniqKey="Jeong S">S-H Jeong</name>
</author>
<author>
<name sortKey="Hang, T H" uniqKey="Hang T">T-H Hang</name>
</author>
<author>
<name sortKey="Kwon, D S" uniqKey="Kwon D">D-S Kwon</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Taylor, R P" uniqKey="Taylor R">R P Taylor</name>
</author>
<author>
<name sortKey="Spehar, B" uniqKey="Spehar B">B Spehar</name>
</author>
<author>
<name sortKey="Wise, J A" uniqKey="Wise J">J A Wise</name>
</author>
<author>
<name sortKey="Clifford, C W" uniqKey="Clifford C">C W Clifford</name>
</author>
<author>
<name sortKey="Newell, B R" uniqKey="Newell B">B R Newell</name>
</author>
<author>
<name sortKey="Hagerhall, C M" uniqKey="Hagerhall C">C M Hagerhall</name>
</author>
<author>
<name sortKey="Purcell, T" uniqKey="Purcell T">T Purcell</name>
</author>
<author>
<name sortKey="Martin, T P" uniqKey="Martin T">T P Martin</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Van Hagen, M" uniqKey="Van Hagen M">M Van Hagen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Wang, K" uniqKey="Wang K">K Wang</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Weiskopf, D" uniqKey="Weiskopf D">D Weiskopf</name>
</author>
<author>
<name sortKey="Erlebacher, G" uniqKey="Erlebacher G">G Erlebacher</name>
</author>
<author>
<name sortKey="Ertl, T" uniqKey="Ertl T">T Ertl</name>
</author>
</analytic>
</biblStruct>
<biblStruct></biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Zhang, Q" uniqKey="Zhang Q">Q Zhang</name>
</author>
<author>
<name sortKey="Wangbo, T" uniqKey="Wangbo T">T Wangbo</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Zhao, G" uniqKey="Zhao G">G Zhao</name>
</author>
<author>
<name sortKey="Pietikainen, M" uniqKey="Pietikainen M">M Pietikainen</name>
</author>
</analytic>
</biblStruct>
</listBibl>
</div1>
</back>
</TEI>
<pmc article-type="research-article">
<pmc-dir>properties open_access</pmc-dir>
<front>
<journal-meta>
<journal-id journal-id-type="nlm-ta">Iperception</journal-id>
<journal-id journal-id-type="iso-abbrev">Iperception</journal-id>
<journal-id journal-id-type="publisher-id">pmed</journal-id>
<journal-title-group>
<journal-title>i-Perception</journal-title>
</journal-title-group>
<issn pub-type="epub">2041-6695</issn>
<publisher>
<publisher-name>Pion</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="pmid">23145257</article-id>
<article-id pub-id-type="pmc">3485791</article-id>
<article-id pub-id-type="doi">10.1068/i0477</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Research Article</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>Emotional effects of dynamic textures</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name>
<surname>Toet</surname>
<given-names>Alexander</given-names>
</name>
<aff>Intelligent Systems Lab Amsterdam (ISLA), University of Amsterdam, Science Park 904, 1098 XH, Amsterdam, The Netherlands; and TNO, Kampweg 5, 3769 DE Soesterberg, The Netherlands; e-mail:
<email>lextoet@gmail.com</email>
;</aff>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Henselmans</surname>
<given-names>Menno</given-names>
</name>
<aff>TNO, Kampweg 5, 3769 DE Soesterberg, The Netherlands; e-mail:
<email>menno.henselmans@gmail.com</email>
;</aff>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Lucassen</surname>
<given-names>Marcel P</given-names>
</name>
<aff>Intelligent Systems Lab Amsterdam (ISLA), University of Amsterdam, Science Park 904, 1098 XH, Amsterdam, The Netherlands; e-mail:
<email>m.p.lucassen@uva.nl</email>
;</aff>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Gevers</surname>
<given-names>Theo</given-names>
</name>
<aff>Intelligent Systems Lab Amsterdam (ISLA), University of Amsterdam, Science Park 904, 1098 XH, Amsterdam, The Netherlands; e-mail:
<email>th.gevers@uva.nl</email>
;</aff>
</contrib>
</contrib-group>
<pub-date pub-type="collection">
<year>2011</year>
</pub-date>
<pub-date pub-type="epub">
<day>24</day>
<month>11</month>
<year>2011</year>
</pub-date>
<volume>2</volume>
<issue>9</issue>
<fpage>969</fpage>
<lpage>991</lpage>
<history>
<date date-type="received">
<day>13</day>
<month>8</month>
<year>2011</year>
</date>
<date date-type="rev-recd">
<day>14</day>
<month>11</month>
<year>2011</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright © 2011 A Toet, M Henselmans, M P Lucassen, T Gevers</copyright-statement>
<copyright-year>2011</copyright-year>
<license license-type="open-access" xlink:href="http://creativecommons.org/licenses/by-nc-nd/3.0/">
<license-p>This open-access article is distributed under a Creative Commons Licence, which permits noncommercial use, distribution, and reproduction, provided the original author(s) and source are credited and no alterations are made.</license-p>
</license>
</permissions>
<abstract>
<p>This study explores the effects of various spatiotemporal dynamic texture characteristics on human emotions. The emotional experience of auditory (eg, music) and haptic repetitive patterns has been studied extensively. In contrast, the emotional experience of visual dynamic textures is still largely unknown, despite their natural ubiquity and increasing use in digital media. Participants watched a set of dynamic textures, representing either water or various different media, and self-reported their emotional experience. Motion complexity was found to have mildly relaxing and nondominant effects. In contrast, motion change complexity was found to be arousing and dominant. The speed of dynamics had arousing, dominant, and unpleasant effects. The amplitude of dynamics was also regarded as unpleasant. The regularity of the dynamics over the textures' area was found to be uninteresting, nondominant, mildly relaxing, and mildly pleasant. The spatial scale of the dynamics had an unpleasant, arousing, and dominant effect, which was larger for textures with diverse content than for water textures. For water textures, the effects of spatial contrast were arousing, dominant, interesting, and mildly unpleasant. None of these effects were observed for textures of diverse content. The current findings are relevant for the design and synthesis of affective multimedia content and for affective scene indexing and retrieval.</p>
</abstract>
<kwd-group>
<kwd>emotion</kwd>
<kwd>dynamic textures</kwd>
<kwd>pleasure</kwd>
<kwd>arousal</kwd>
<kwd>dominance</kwd>
</kwd-group>
</article-meta>
</front>
<body>
<sec id="s1">
<label>1</label>
<title>Introduction</title>
<p>This study explores the effects of visual dynamic textures on human emotional experience. Dynamic textures are spatially repetitive, time-varying visual patterns that repeat or seem to repeat themselves over time. Dynamic textures extend the notion of self-similarity (central to conventional image texture) to the spatiotemporal domain. A dynamic texture may either be continuous or discrete. Discrete textures have clearly discernible parts (eg, a group of marching ants or fluttering leaves), whereas continuous textures represent media that are either continuous (eg, water or a waving flag that covers the entire field of view) or practically indiscernible thereof (eg, a waving field of grass seen from far away). Dynamic textures occur in nature and in videos of waves, moving clouds, smoke, fire, fluttering flags or foliage, traffic scenes, and moving masses of humans seen from a bird's eye view. Recently, dynamic textures have also appeared on large-scale digital billboards and electronic wallpaper (Huang and Waldvogel
<xref ref-type="bibr" rid="R36">2005</xref>
). Dynamic textures are thus ubiquitous, and knowledge of their emotional effects may have important implications for the design and experience of our daily environment.</p>
<p>It has long been recognized that the emotional connotations of static visual textures are highly relevant for the appreciation of, for example, textile (Kim et al
<xref ref-type="bibr" rid="R41">2005</xref>
), wall-paper, and surface coating (Wang
<xref ref-type="bibr" rid="R72">2009</xref>
). As a result, the perceptual and emotional properties of static textures in general (eg, Machajdik and Hanbury
<xref ref-type="bibr" rid="R48">2010</xref>
; Mao et al
<xref ref-type="bibr" rid="R49">2003</xref>
), and the effects of their color distribution in particular (eg, Lucassen et al
<xref ref-type="bibr" rid="R46">2011</xref>
; Simmons and Russell
<xref ref-type="bibr" rid="R65">2008</xref>
), have been well documented. It appears that the human visual system is optimized for the perception of natural images (Field
<xref ref-type="bibr" rid="R25">1987</xref>
,
<xref ref-type="bibr" rid="R26">1994</xref>
; Parraga et al
<xref ref-type="bibr" rid="R55">2000</xref>
), which typically have fractal-like spatiotemporal spectra (Billock
<xref ref-type="bibr" rid="R11">2000</xref>
; Billock et al
<xref ref-type="bibr" rid="R12">2001a</xref>
,
<xref ref-type="bibr" rid="R13">2001b</xref>
). The amplitude spectra of dynamic natural textures closely follow an inverse power law relationship:
<disp-formula>
<mml:math id="M001">
<mml:mrow>
<mml:mi>A</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>f</mml:mi>
<mml:mi>s</mml:mi>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi>f</mml:mi>
<mml:mi>t</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>=</mml:mo>
<mml:mi>c</mml:mi>
<mml:msubsup>
<mml:mi>f</mml:mi>
<mml:mi>s</mml:mi>
<mml:mrow>
<mml:mo>−</mml:mo>
<mml:mi>β</mml:mi>
</mml:mrow>
</mml:msubsup>
<mml:msubsup>
<mml:mi>f</mml:mi>
<mml:mi>t</mml:mi>
<mml:mrow>
<mml:mo>−</mml:mo>
<mml:mi>α</mml:mi>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:math>
</disp-formula>
where
<italic>f</italic>
<sub>
<italic>s</italic>
</sub>
and
<italic>f</italic>
<sub>
<italic>t</italic>
</sub>
are, respectively, the spatial and temporal frequency, with 0.9 ≤ β ≤ 1.2 (
<italic>M</italic>
= 1.08; Billock
<xref ref-type="bibr" rid="R11">2000</xref>
), and 0.61 ≤ α ≤ 1.2 (Billock et al
<xref ref-type="bibr" rid="R13">2001b</xref>
). Spatial amplitude spectra approaching a
<italic>f</italic>
<sup>−1</sup>
distribution are typically considered pleasant, which is also reflected in EEG (Hagerhall et al
<xref ref-type="bibr" rid="R32">2008</xref>
) and skin conductance (Taylor et al
<xref ref-type="bibr" rid="R70">2005</xref>
) response. Deviations from a spatial
<italic>f</italic>
<sup>−1</sup>
distribution are perceived as unpleasant or uncomfortable (Fernandez and Wilkins
<xref ref-type="bibr" rid="R24">2008</xref>
; Juricevic et al
<xref ref-type="bibr" rid="R38">2010</xref>
; O'Hare and Hibbard
<xref ref-type="bibr" rid="R54">2011</xref>
). White noise or 1/
<italic>f</italic>
<sup>0</sup>
is perceived as disorderly, while
<italic>f</italic>
<sup>−2</sup>
is considered monotonous (Mao et al
<xref ref-type="bibr" rid="R49">2003</xref>
). The functional visual brain circuitry that conveys the spatial frequency information is closely related and functionally linked to the circuitry that conveys emotional information (eg, Adolphs
<xref ref-type="bibr" rid="R1">2004</xref>
; Amaral et al
<xref ref-type="bibr" rid="R4">2003</xref>
). Emotional response to images may therefore indeed be modulated by their spatial frequency content (Delplanque et al
<xref ref-type="bibr" rid="R21">2007</xref>
). Research on visual art indicates that complexity correlates positively with interest and has a non-linear relationship with pleasure (Forsythe et al
<xref ref-type="bibr" rid="R27">2011</xref>
). Some of these findings have been used to develop algorithms that generate visual textures with a desired aesthetic perception, such as (non-)elegance and (dis-)like (eg, Groissboeck et al
<xref ref-type="bibr" rid="R30">2010</xref>
).</p>
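<p>To make the inverse power-law relationship concrete, the following Python sketch synthesizes a static greyscale texture whose spatial amplitude spectrum falls off as f^(−β), by imposing the desired amplitude on random phases in the Fourier domain. The function name and the normalisation to [0, 1] are our own illustrative conventions, not part of the cited studies.</p>

```python
import numpy as np

def power_law_texture(size=128, beta=1.0, seed=0):
    """Synthesize a greyscale texture whose spatial amplitude
    spectrum follows f**(-beta), as in natural images."""
    rng = np.random.default_rng(seed)
    fy = np.fft.fftfreq(size)[:, None]          # vertical spatial frequencies
    fx = np.fft.fftfreq(size)[None, :]          # horizontal spatial frequencies
    f = np.hypot(fx, fy)
    f[0, 0] = 1.0                               # avoid division by zero at DC
    amplitude = f ** (-beta)
    phase = rng.uniform(0.0, 2.0 * np.pi, (size, size))
    img = np.real(np.fft.ifft2(amplitude * np.exp(1j * phase)))
    return (img - img.min()) / (img.max() - img.min())   # normalise to [0, 1]

tex = power_law_texture(beta=1.08)   # mean spatial exponent reported by Billock (2000)
```

<p>Extending the same construction with a temporal frequency axis (a 3-D FFT with an additional f_t^(−α) factor) would yield a dynamic texture whose spatiotemporal spectrum follows the equation above.</p>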
<p>The emotional experience of haptic (eg, Salminen et al
<xref ref-type="bibr" rid="R63">2011</xref>
) and auditory (eg, Bigand et al
<xref ref-type="bibr" rid="R10">2005</xref>
; Gomez and Danuser
<xref ref-type="bibr" rid="R29">2007</xref>
) repetitive patterns has also been studied extensively. Music is well known to arouse strong emotional responses in people (Juslin and Västfjäll
<xref ref-type="bibr" rid="R39">2008</xref>
). For instance, it is typically found that tempo is positively related to arousal (Bresin and Friberg
<xref ref-type="bibr" rid="R15">2000</xref>
; Husain et al
<xref ref-type="bibr" rid="R37">2002</xref>
) but mainly for popular music and in a non-linear manner (Kellaris and Kent
<xref ref-type="bibr" rid="R40">1993</xref>
). Tempo is also positively related to happiness and negatively to sadness, while irregularity is negatively appraised and volume is generally positively related to arousal (Bresin and Friberg
<xref ref-type="bibr" rid="R15">2000</xref>
).</p>
<p>In contrast, the emotional experience of visual dynamic textures is still largely unknown. Recently, dynamic textures have found applications in many different areas, such as animation (Chuang et al
<xref ref-type="bibr" rid="R19">2005</xref>
), video classification (eg, Zhao and Pietikainen
<xref ref-type="bibr" rid="R76">2007</xref>
), video retrieval (eg, Péteri and Chetverikov
<xref ref-type="bibr" rid="R56">2006</xref>
; Smith et al
<xref ref-type="bibr" rid="R66">2002</xref>
), and video synthesis (eg, Chan and Vasconcelos
<xref ref-type="bibr" rid="R17">2005</xref>
; Constantini et al
<xref ref-type="bibr" rid="R20">2008</xref>
; Doretto et al
<xref ref-type="bibr" rid="R23">2003</xref>
; Lai and Wu
<xref ref-type="bibr" rid="R44">2007</xref>
; Zhang and Wangbo
<xref ref-type="bibr" rid="R75">2007</xref>
), and in the visualization of time-dependent vector fields (eg, Post et al
<xref ref-type="bibr" rid="R58">2003</xref>
; Weiskopf et al
<xref ref-type="bibr" rid="R73">2003</xref>
). The need to control their emotional effects is therefore increasing.</p>
<p>Visual imagery is one of the mechanisms through which auditory stimuli may induce emotion (Juslin and Västfjäll
<xref ref-type="bibr" rid="R39">2008</xref>
). Musical characteristics such as repetition, melody, rhythm, and tempo are especially effective in stimulating vivid mental imagery (McKinney and Tims
<xref ref-type="bibr" rid="R51">1995</xref>
). Music and visual information reciprocally influence emotion: music enhances the emotional experience of images (Baumgartner et al
<xref ref-type="bibr" rid="R8">2006</xref>
), while visual stimuli in turn effectively modulate the structural and emotional experience of music (Boltz et al
<xref ref-type="bibr" rid="R14">2009</xref>
). Visual imagery and music have a common temporal nature. It has been suggested that rhythm may be the link between the two (Chen et al
<xref ref-type="bibr" rid="R18">2011</xref>
). It has indeed been shown that people “hear” purely visual rhythms (Guttman et al
<xref ref-type="bibr" rid="R31">2005</xref>
), and that auditory signals drive perceived visual temporal rate (Recanzone
<xref ref-type="bibr" rid="R59">2003</xref>
). Thus it seems that the brain attempts to create an emotional and structural congruent unified percept. Although music can be modeled by dynamic textures (Barrington et al
<xref ref-type="bibr" rid="R7">2010</xref>
), it is unknown whether the emotional response to visual dynamic textures resembles the response to music with similar temporal characteristics.</p>
<p>In this study, observer experiments were performed to assess emotional experience as a function of the spatiotemporal characteristics of visual dynamic textures. Participants watched a set of dynamic textures, representing either water or a collection of various media, and self-reported their emotional experience.</p>
<p>Based on knowledge about the emotional experience of auditory stimuli, it is hypothesized that (i) temporal regularity is positively related to pleasure, (ii) both the speed and amplitude of movement are positively related to arousal and negatively to relaxation, and (iii) complexity of motion and changes therein is positively related to interest. The hypothesized (baseline) structural model is graphically represented in
<xref ref-type="fig" rid="F1">Figure 1</xref>
. Note that since this model is by nature exploratory, all spatiotemporal variables predict all emotional response variables.</p>
<fig id="F1" orientation="portrait" position="float">
<label>Figure 1.</label>
<caption>
<p>Hypothesized structural model describing the interrelations between temporal (left) and spatial (right) dynamic texture characteristics and human emotional response (middle). Blue (positive) and red (negative) colors indicate hypothetical path polarity (the sign of the correlation). d1–d5 represent residual disturbance terms.</p>
</caption>
<graphic xlink:href="i-perception-2-969-g0001"></graphic>
</fig>
</sec>
<sec id="s2">
<label>2</label>
<title>General methods</title>
<sec id="s2.1">
<label>2.1</label>
<title>Dynamic textures</title>
<p>Two stimulus sets were created from a total of 56 different dynamic textures that were manually selected from the DynTex database (Péteri et al
<xref ref-type="bibr" rid="R57">2010</xref>
). Both sets had a high diversity of spatiotemporal characteristics. The first set consisted of 30 continuous dynamic textures representing only water (eg, sea, ponds, fountains, waterfalls, rivers;
<xref ref-type="fig" rid="F2">Figure 2</xref>
). This collection will be called the “Water Set”.
<sup>
<xref ref-type="fn" rid="fn1">(1)</xref>
</sup>
The second set consisted of 36 textures representing both continuous and discrete textures of various semantic content (eg, crawling ants, fluttering leaves, rippling water, moving traffic, flickering candles;
<xref ref-type="fig" rid="F3">Figure 3</xref>
). This collection will be called the “Mixed Set”.
<sup>
<xref ref-type="fn" rid="fn2">(2)</xref>
</sup>
Both sets had 9 water textures
<sup>
<xref ref-type="fn" rid="fn3">(3)</xref>
</sup>
in common.</p>
<fig id="F2" orientation="portrait" position="float">
<label>Figure 2.</label>
<caption>
<p>The Water Set, consisting of 30 different dynamic water textures from the DynTex database. See the animated version of this figure here.</p>
</caption>
<graphic xlink:href="i-perception-2-969-g0002"></graphic>
</fig>
<fig id="F3" orientation="portrait" position="float">
<label>Figure 3.</label>
<caption>
<p>The Mixed Set, consisting of 36 different dynamic textures from the DynTex database. See the animated version of this figure here.</p>
</caption>
<graphic xlink:href="i-perception-2-969-g0003"></graphic>
</fig>
<p>A set with elements representing a homogeneous medium from a single category (the Water Set) was used in an attempt to minimize potential confounding affective and cognitive effects of semantic image content. Also, the results obtained with this set may be related to earlier results that have been obtained for static water pictures (Nasar and Lin
<xref ref-type="bibr" rid="R53">2003</xref>
).</p>
<p>In the classification phase of the experiments the textures were presented in their original format (720 × 576 pixels, 25 fps). In the rating phase the textures were displayed at 1/3 of their original size (240 × 192 pixels, 25 fps).</p>
</sec>
<sec id="s2.2">
<label>2.2</label>
<title>Spatio-temporal texture descriptors</title>
<p>The DynTex database also provides 10 structural descriptors for each dynamic texture, together with annotations that are based on a careful analysis of the underlying physical process that is represented (Péteri et al
<xref ref-type="bibr" rid="R57">2010</xref>
).</p>
<p>The following five descriptors were used in the present study to quantify the temporal characteristics of the dynamic textures.
<italic>TrajectoryType</italic>
describes the complexity of motion and varies from still to straight to curving to oscillating to irregular, with values ranging from 1 to 5, respectively.
<italic>AppearanceChange</italic>
similarly describes the complexity of the change in the texture's appearance (eg, colour shifting) and varies from no change to directed to oscillating to irregular, with values 1 to 4, respectively.
<italic>SpeedFrequency</italic>
describes the speed of dynamics and varies from low to medium to high, with values ranging from 1 to 3, respectively.
<italic>Amplitude</italic>
describes the extent of the dynamics and varies from small to medium to large, with values ranging from 1 to 3, respectively.
<italic>TemporalRegularity</italic>
describes the regularity in terms of the former two variables over time and varies from low to medium to high, with values ranging from 1 to 3, respectively. Together,
<italic>SpeedFrequency, TemporalRegularity</italic>
, and
<italic>AppearanceChange</italic>
describe the temporal frequency content of a dynamic texture, while
<italic>TrajectoryType</italic>
and
<italic>Amplitude</italic>
relate to the optic flow in the stimulus pattern.</p>
<p>In addition, the following three descriptors were used to quantify the spatial characteristics of the dynamic textures.
<italic>SpatialRegularity</italic>
describes the amount of spatial variation in the dynamics (ie, the amount of different patterns occurring simultaneously) and varies from low to medium to high, with values 1 to 3, respectively.
<italic>SpatialScale</italic>
describes the scale of spatial variation between the moving parts of the texture and varies from fine to medium to coarse, with values ranging from 1 to 3, respectively.
<italic>SpatialContrast</italic>
analogously describes the extent of spatial variation between the moving parts of the texture and varies from low to medium to high, with values ranging from 1 to 3, respectively. Together,
<italic>Spatial Regularity</italic>
,
<italic>Spatial Scale</italic>
, and
<italic>Spatial Contrast</italic>
describe the spatial frequency content of a dynamic texture.</p>
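<p>For analysis, these ordinal descriptor scales can be encoded numerically. The following Python sketch records the level-to-code mappings exactly as described above; the dictionary layout and the helper function are our own conventions and not part of the DynTex database interface.</p>

```python
# Ordinal scales of the DynTex descriptors used in this study, encoded as
# {level name: numeric code} maps. Codes follow the descriptions in the
# text; the data-structure layout itself is an illustrative convention.
TEMPORAL_DESCRIPTORS = {
    "TrajectoryType":     {"still": 1, "straight": 2, "curving": 3,
                           "oscillating": 4, "irregular": 5},
    "AppearanceChange":   {"no change": 1, "directed": 2,
                           "oscillating": 3, "irregular": 4},
    "SpeedFrequency":     {"low": 1, "medium": 2, "high": 3},
    "Amplitude":          {"small": 1, "medium": 2, "large": 3},
    "TemporalRegularity": {"low": 1, "medium": 2, "high": 3},
}

SPATIAL_DESCRIPTORS = {
    "SpatialRegularity": {"low": 1, "medium": 2, "high": 3},
    "SpatialScale":      {"fine": 1, "medium": 2, "coarse": 3},
    "SpatialContrast":   {"low": 1, "medium": 2, "high": 3},
}

def encode(annotation):
    """Map a textual annotation, e.g. {'SpeedFrequency': 'high'},
    to its numeric descriptor codes."""
    scales = {**TEMPORAL_DESCRIPTORS, **SPATIAL_DESCRIPTORS}
    return {name: scales[name][level] for name, level in annotation.items()}

print(encode({"TrajectoryType": "oscillating", "SpatialScale": "fine"}))
# → {'TrajectoryType': 4, 'SpatialScale': 1}
```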
<p>The temporal
<italic>MainClass</italic>
descriptor from the DynTex database was not used in this study, since it represents a rather subjective, complex construct characterizing the overall motion type. A fourth spatial descriptor called
<italic>Density</italic>
represents the discernibility of the texture's individual parts and varies from sparse to medium to dense to continuous, with values ranging from 1 to 4, respectively. In this study
<italic>Density</italic>
is only used in the analysis of the observer results for the Mixed Set, since this descriptor inherently applies only to textures showing discrete elements and not to continuous media.</p>
</sec>
<sec id="s2.3">
<label>2.3</label>
<title>Apparatus</title>
<p>Dell Precision 490 PC computers were used to present the dynamic textures to the observers and to register their response. The computers were equipped with Dell 19-inch monitors, with a screen resolution of 1280 × 1024 pixels, and a screen refresh rate of 60 Hz. Observers used standard mouse pointers to indicate their response and to move the dynamic textures on the screen.</p>
</sec>
<sec id="s2.4">
<label>2.4</label>
<title>Participants</title>
<p>It has previously been observed that the emotional experience of (static) textures is invariant across age, gender, personality, and social class (Nasar and Lin
<xref ref-type="bibr" rid="R53">2003</xref>
). Convenience sampling was therefore used to select the 107 participants of this study.</p>
<p>The experimental protocol was reviewed and approved by TNO internal review board on experiments with human participants and was in accordance with the Helsinki Declaration of 1975, as revised in 2000 (World Medical Association
<xref ref-type="bibr" rid="R74">2000</xref>
). The participants gave their informed consent prior to testing. The participants received a modest financial compensation for their participation.</p>
</sec>
<sec id="s2.5">
<label>2.5</label>
<title>Analysis</title>
<p>SPSS 19 (
<ext-link ext-link-type="uri" xlink:href="http://www.spss.com">www.spss.com</ext-link>
) was used for the statistical analysis of the data. IBM SPSS AMOS 19 (Arbuckle
<xref ref-type="bibr" rid="R5">2010</xref>
) was used to evaluate the (hypothetical) baseline model shown in
<xref ref-type="fig" rid="F1">Figure 1</xref>
through covariance structure analysis.</p>
<p>Structural equation modeling (SEM) is a very general statistical modeling technique widely used in the behavioral sciences (eg, MacCallum and Austin
<xref ref-type="bibr" rid="R47">2000</xref>
). SEM provides a convenient framework for statistical analysis that includes several traditional multivariate procedures, such as factor analysis, path analysis, regression analysis, discriminant analysis, and canonical correlation, as special cases. Its basic idea differs from the usual statistical approach of modeling individual observations: in multiple regression or ANOVA, the model parameters are estimated by minimizing the sum of squared differences between the predicted and observed dependent variables. SEM instead emphasizes the covariance structure; its parameters are estimated by minimizing the difference between the observed covariances and those implied by a structural or path model. Among the strengths of SEM is the ability to construct latent variables: variables that are not measured directly but are estimated in the model from several measured variables, each of which is predicted to ‘tap into’ the latent variables. This allows the modeler to explicitly capture measurement unreliability in the model, which in theory allows the structural relations between latent variables to be estimated accurately. Structural equation models are usually represented by a set of matrix equations and visualized by graphical path diagrams.</p>
<p>SEM allows both confirmatory and exploratory modeling; it is suited to both theory testing and theory development. Confirmatory modeling usually starts out with a hypothesis that gets represented in a causal model. The concepts used in the model must then be operationalized to allow testing of the relationships between the concepts in the model. The model is tested against the obtained measurement data to determine how well the model fits the data. The causal assumptions embedded in the model often have falsifiable implications, which can be tested against the data. With an initial theory, SEM can be used inductively by specifying a corresponding model and using data to estimate the values of free parameters. The initial hypothesis usually requires adjustment in light of model evidence.</p>
<p>We assessed the overall fit of the hypothesized model to the data using several goodness-of-fit measures, such as the χ
<sup>2</sup>
goodness-of-fit test; the Comparative Fit Index (CFI; Bentler
<xref ref-type="bibr" rid="R9">1990</xref>
); the model residual, measured by the Root Mean Square Error of Approximation (RMSEA; Steiger
<xref ref-type="bibr" rid="R68">1990</xref>
); and relative goodness of fit, measured by the Akaike Information Criterion (AIC; Akaike
<xref ref-type="bibr" rid="R2">1974</xref>
,
<xref ref-type="bibr" rid="R3">1987</xref>
). A significant χ
<sup>2</sup>
statistic may suggest that the hypothesized model does not adequately fit the observed data, whereas a non-significant χ
<sup>2</sup>
suggests model adequacy. However, this index is sensitive to sample size and violations of the assumption of multivariate normality. Therefore, alternative fit indices are generally used (Schermelleh-Engel et al
<xref ref-type="bibr" rid="R64">2003</xref>
). The CFI indexes the relative change in model fit as estimated by the noncentral chi-square of a target model versus the independence model. The RMSEA measures the discrepancy due to approximation and is relatively independent of sample size. The AIC adjusts χ
<sup>2</sup>
for the number of estimated parameters.</p>
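<p>The fit indices above can all be derived from chi-square statistics. The following Python sketch uses the textbook formulas; exact conventions differ slightly between software packages, so treat this as an approximation of what AMOS reports rather than a re-implementation.</p>

```python
from math import sqrt

def rmsea(chi2, df, n):
    """Root Mean Square Error of Approximation (Steiger 1990),
    from the model chi-square, degrees of freedom, and sample size."""
    return sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

def cfi(chi2_target, df_target, chi2_indep, df_indep):
    """Comparative Fit Index (Bentler 1990): relative reduction of
    noncentrality from the independence model to the target model."""
    d_t = max(chi2_target - df_target, 0.0)
    d_i = max(chi2_indep - df_indep, d_t)
    return 1.0 - d_t / d_i if d_i > 0 else 1.0

def aic(chi2, n_free_params):
    """Model AIC (Akaike 1974, 1987): chi-square plus a penalty
    of two per freely estimated parameter."""
    return chi2 + 2 * n_free_params
```

<p>For example, a model with chi-square 100 on 50 degrees of freedom in a sample of 101 observers has an RMSEA of 0.1; values below roughly 0.06 are conventionally taken to indicate close fit.</p>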
</sec>
</sec>
<sec id="s3">
<label>3</label>
<title>Experiment I: Affective dynamic texture descriptors</title>
<p>A preliminary experiment was performed to select the most appropriate adjectives describing the affective properties of dynamic textures. The widely used and well-validated Pleasure-Arousal-Dominance (PAD) emotional state model (eg, Arifin and Cheung
<xref ref-type="bibr" rid="R6">2007</xref>
; Mehrabian
<xref ref-type="bibr" rid="R52">1996</xref>
) was used in this study to encode the affective characteristics of the dynamic textures. This model states that the emotional spectrum can reliably be described along three bipolar dimensions: pleasure-displeasure (ie, affect), relaxed/aroused (ie, intensity), and controlling/controlled (ie, dominance). The participants' emotional response to their viewing of the dynamic textures was measured by self-report through the use of a scoring list of affective adjectives. Self-report has been shown to be a reliable measure of emotional reactions to audio-video clips and is consistent with five peripheral physiological signals: galvanic skin resistance (GSR), electromyograms (EMG), blood pressure, respiration patterns, and skin temperature (Soleymani et al
<xref ref-type="bibr" rid="R67">2008</xref>
).</p>
<sec id="s3.1">
<label>3.1</label>
<title>Methods</title>
<sec id="s3.1.1">
<label>3.1.1</label>
<title>Affective terms.</title>
<p>A list of 58 candidate affective adjectives was compiled from a literature study (eg, Cerf et al
<xref ref-type="bibr" rid="R16">2007</xref>
; Fujiwara et al
<xref ref-type="bibr" rid="R28">2006</xref>
; Küller
<xref ref-type="bibr" rid="R43">1975</xref>
; Masakura et al
<xref ref-type="bibr" rid="R50">2006</xref>
; Russell
<xref ref-type="bibr" rid="R60">1980</xref>
; Russell et al
<xref ref-type="bibr" rid="R62">1981</xref>
; Russell and Pratt
<xref ref-type="bibr" rid="R61">1980</xref>
; see
<xref ref-type="fig" rid="F4">Figure 4</xref>
). These 58 candidate adjectives were first divided into 10 categories. Eight categories corresponded to the ends of the four axes of Russell's circumplex model of affect (Pleasure, Arousal, Interest, Relaxation [Russell
<xref ref-type="bibr" rid="R60">1980</xref>
]; see upper section of
<xref ref-type="fig" rid="F4">Figure 4</xref>
). The remaining two categories corresponded to the ends of the Dominance scale (
<xref ref-type="fig" rid="F4">Figure 4</xref>
lower section). Then, a scoring list was made which listed all 58 candidate terms in a spatial layout that grouped the 52 adjectives in the Pleasure, Arousal, Interest, and Relaxation categories according to a circumplex ordering (
<xref ref-type="fig" rid="F4">Figure 4</xref>
upper section) and listed the 6 adjectives in the Dominance category in a separate section (
<xref ref-type="fig" rid="F4">Figure 4</xref>
lower section). The circumplex ordering was achieved by mapping the eight categories from Russell's model onto the eight outer cells of a 3 × 3 square matrix (see also the left button section of
<xref ref-type="fig" rid="F6">Figure 6</xref>
).</p>
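<p>The 3 × 3 layout can be sketched as follows. The assignment of category poles to cells is our own assumption, inferred from the circumplex geometry described above (Pleasure on the middle row, Arousal on the middle column, Interest and Relaxation on the diagonals); the actual adjective labels used in the experiment are those shown in Figure 4.</p>

```python
# Illustrative 3 x 3 layout with the eight circumplex categories on the
# outer cells (centre cell unused). Cell assignments are assumptions
# inferred from the circumplex geometry described in the text.
CIRCUMPLEX_GRID = [
    ["disturbing", "active",  "stimulating"],   # top row: high Arousal
    ["unpleasant", None,      "pleasant"],      # middle row: Pleasure axis
    ["boring",     "passive", "tranquil"],      # bottom row: low Arousal
]

def category_at(row, col):
    """Label of an outer cell; None marks the unused centre cell."""
    return CIRCUMPLEX_GRID[row][col]
```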
<fig id="F4" orientation="portrait" position="float">
<label>Figure 4.</label>
<caption>
<p>Upper 3 rows: Eight categories of candidate affective adjectives ordered corresponding to Russell's circumplex model of affect (Pleasure – middle row, Arousal – middle column, Interest – lower-left to top-right diagonal, Relaxation - top-left to lower right diagonal [Russell
<xref ref-type="bibr" rid="R60">1980</xref>
]; see also
<xref ref-type="fig" rid="F5">Figure 5</xref>
). Lower row: Two categories corresponding to the Dominance scale (left: non-dominant; right: dominant). The two most frequently selected adjectives in each category are printed in bold.</p>
</caption>
<graphic xlink:href="i-perception-2-969-g0004"></graphic>
</fig>
<fig id="F5" orientation="portrait" position="float">
<label>Figure 5.</label>
<caption>
<p>A two-dimensional representation of the self-report affect circumplex space, with 8 circularly ordered affective states represented by two adjectives each (after Russell
<xref ref-type="bibr" rid="R60">1980</xref>
). The horizontal axis corresponds to
<italic>Pleasure</italic>
, the vertical axis to
<italic>Arousal</italic>
, the right-diagonal to
<italic>Interest</italic>
, and the left-diagonal to
<italic>Relaxation</italic>
.</p>
</caption>
<graphic xlink:href="i-perception-2-969-g0005"></graphic>
</fig>
<fig id="F6" orientation="portrait" position="float">
<label>Figure 6.</label>
<caption>
<p>Screen layout during the affective classification phase of the experiment. The dynamic texture is shown on top. Buttons in the left section correspond to the
<italic>Pleasure</italic>
-
<italic>Arousal</italic>
scale. Buttons in the right section correspond to, respectively,
<italic>Dominance</italic>
(weak, inconspicuous - strong, conspicuous),
<italic>SpatialStructure</italic>
(complex, organized - simple, disorganized), and
<italic>TemporalStructure</italic>
(regular, fluent - irregular, choppy).</p>
</caption>
<graphic xlink:href="i-perception-2-969-g0006"></graphic>
</fig>
</sec>
<sec id="s3.1.2">
<label>3.1.2</label>
<title>Participants.</title>
<p>A total of 24 participants (14 males and 10 females), ranging in age from 18 to 55 years (
<italic>M</italic>
= 36.2,
<italic>SD</italic>
= 15.9) participated in this experiment.</p>
</sec>
<sec id="s3.1.3">
<label>3.1.3</label>
<title>Stimuli.</title>
<p>The stimuli were 36 highly diverse (with respect to spatial and temporal characteristics, and with respect to semantic scene content) dynamic textures from the DynTex database.
<sup>
<xref ref-type="fn" rid="fn4">(4)</xref>
</sup>
</p>
</sec>
<sec id="s3.1.4">
<label>3.1.4</label>
<title>Procedure.</title>
<p>Before starting the experiment the participants read the experimental instructions. These instructions explained that the participants should attribute the most appropriate adjectives from a given list of candidate affective adjectives to each of the presented dynamic textures. The participants were informed about the nature of Russell's circumplex model of affect (Russell
<xref ref-type="bibr" rid="R60">1980</xref>
), and in particular about the fact that affective terms close to each other on the perimeter of the circumplex refer to similar emotions, while terms on opposite sides of the same bipolar axis are mutually exclusive. The participants were asked to select at least one term from the entire set of 8 categories representing the circumplex model of affect. They were allowed to select more than one term, including terms from adjacent categories (in the circumplex model), with a maximum of three terms from a single category. Additionally, they were asked to select at least one term from each of the two dominance categories. It was emphasized that the participants should ignore the semantic content of the video clips and base their judgments solely on the spatiotemporal characteristics of the textures, preferably using their first impression. Each participant privately watched the entire stimulus set and performed the experiment self-paced without any time restrictions. The experiment typically lasted about an hour.</p>
</sec>
</sec>
<sec id="s3.2">
<label>3.2</label>
<title>Results</title>
<p>Four adjectives either had item-total correlations smaller than 0.3 or reduced the Cronbach's alpha inter-item reliability of their category below 0.7; they were therefore deleted for lack of reliability. Finally, the two most frequently scored adjectives in each category were selected (see the bold printed adjectives in
<xref ref-type="fig" rid="F4">Figure 4</xref>
), and categories on opposite ends of the circumplex model were joined. The result was a list of 28 appropriate affective terms. The resulting dimensions were
<italic>Relaxation</italic>
(disturbing, nervous – tranquil, restful),
<italic>Pleasure</italic>
(uncomfortable, unpleasant – beautiful, pleasant),
<italic>Arousal</italic>
(passive, lazy – active, lively),
<italic>Interest</italic>
(boring, monotonous – stimulating, interesting), and
<italic>Dominance</italic>
(weak, inconspicuous – strong, conspicuous). These five dimensions were used to measure emotional response in the rest of this study. The two additional dimensions
<italic>Complexity</italic>
(complex, organized – simple, disorganized) and
<italic>Regularity</italic>
(regular, fluent – irregular, choppy) were adopted to measure the participants' impression of, respectively, the spatial and temporal textures characteristics (see
<xref ref-type="fig" rid="F6">Figure 6</xref>
).</p>
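<p>The reliability screening described above (a corrected item-total correlation of at least 0.3 and a per-category Cronbach's alpha of at least 0.7) can be computed as in the following Python sketch; the score matrix and helper names are hypothetical.</p>

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (observers x items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars / total_var)

def corrected_item_total(items):
    """Correlation of each item with the sum of the remaining items."""
    items = np.asarray(items, dtype=float)
    total = items.sum(axis=1)
    return np.array([np.corrcoef(items[:, j], total - items[:, j])[0, 1]
                     for j in range(items.shape[1])])

scores = [[1, 1, 1], [2, 2, 2], [3, 3, 3], [4, 4, 5]]   # hypothetical ratings
print(round(cronbach_alpha(scores), 3))                 # → 0.986
```

<p>Items failing either criterion would be dropped and alpha recomputed, mirroring the screening applied to the candidate adjectives here.</p>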
</sec>
</sec>
<sec id="s4">
<label>4</label>
<title>Experiment II: Affective rating of dynamic textures</title>
<p>A second experiment was performed to measure the degree to which the affective classifiers determined in Experiment I applied to each of the dynamic textures of the Water Set and the Mixed Set. In addition, the spatiotemporal characteristics of the dynamic textures were investigated through the concepts of spatial complexity and temporal regularity.</p>
<sec id="s4.1">
<label>4.1</label>
<title>Participants</title>
<p>The set of dynamic water textures was judged by 38 participants (22 males and 16 females), whose age ranged from 19 to 64 years (
<italic>M</italic>
= 37.0,
<italic>SD</italic>
= 16.2). The set of mixed dynamic textures was judged by 45 participants (23 males and 22 females), ranging in age from 18 to 64 years (
<italic>M</italic>
= 34.8,
<italic>SD</italic>
= 17.5).</p>
</sec>
<sec id="s4.2">
<label>4.2</label>
<title>Procedure</title>
<p>Before starting the experiment the participants read the experimental instructions. These instructions explained the experimental procedure and the stimulus presentation programme with its response buttons, and showed screenshots from all stages of the experiment. The instructions emphasised that the participants should ignore the semantic content of the dynamic textures and should base their judgments solely on their spatiotemporal characteristics, preferably using their first impression. The participants were also informed about the nature of Russell's circumplex model of affect (Russell
<xref ref-type="bibr" rid="R60">1980</xref>
), and in particular about the fact that affective terms close to each other on the perimeter of the circumplex refer to similar emotions, while terms on opposite sides of the same bipolar axis are mutually exclusive. The participants were asked to select at least one and at most two (adjacent) affective terms from the left response button section, and to select exactly one term in each of the 3 rows of the right response button section. The participants could indicate their selection by placing the cursor successively over the corresponding response buttons on the screen and clicking a mouse button (see Video 1).</p>
<p>An experimental run consisted of two parts. In the first part of the experiment the participants attributed the appropriate affective and spatiotemporal classification terms to each dynamic texture. In the second part, the participants rank ordered the dynamic textures according to the classification terms that had been attributed in the first part.</p>
<p>A typical run went as follows. After the participants had read the instructions, the experimenter selected the appropriate set of dynamic textures (ie, either the water or the mixed set) and started the test programme in the texture classification mode. The test programme then presented the first dynamic texture of the test set in the upper half of the screen, while the lower half of the screen showed two sections with response buttons (
<xref ref-type="fig" rid="F6">Figure 6</xref>
). The buttons in the left section correspond to categories from Russell's circumplex model of affect (
<italic>Pleasure</italic>
and
<italic>Arousal</italic>
; Russell
<xref ref-type="bibr" rid="R60">1980</xref>
). The buttons in the right response section correspond, respectively, to
<italic>Dominance</italic>
(weak, inconspicuous - strong, conspicuous), spatial
<italic>Complexity</italic>
(complex, organized - simple, disorganized), and temporal
<italic>Regularity</italic>
(regular, fluent - irregular, choppy). The participants then classified the texture by pressing the buttons labelled with the affective and structural terms that corresponded most closely to their impression of the texture. A button changed colour when activated. A previous classification could be undone by pressing an activated button a second time. When they were satisfied with their classification, the participants could press a button labelled “Next” to proceed to the next dynamic texture. When all textures in the test set had been labelled the test programme displayed a message that the rating phase would start when the “Next” button was pressed again. In the rating mode, the participants successively rank ordered the dynamic textures with respect to each of the classification terms that they had previously attributed to the textures during the first part of the experiment. Because of this procedure, a different number of dynamic textures (ranging from 0 to the cardinality of the test set) may correspond to (and therefore need to be rated for) a given classification term. This procedure was chosen to restrict the experimental time (rating all textures with respect to all classification terms takes a large amount of time, and may easily lead to observer fatigue), and to keep the participants motivated (rating many textures with respect to terms that obviously don't apply may easily induce boredom).</p>
<p>In the rating mode, all dynamic textures in a given category (ie, textures to which a given classification term had been attributed) were initially shown in the upper part of the screen (
<xref ref-type="fig" rid="F7">Figure 7a</xref>
). A scale bar in the middle of the screen showed the current classification term together with the range of the scale (ie, the degree to which the classification term applies to the dynamic texture: ranging from 0 = “not at all” to 1 = “very much”). A participant could then drag the dynamic textures from the upper part of the screen to the lower part using a mouse (
<xref ref-type="fig" rid="F7">Figure 7b</xref>
). The horizontal position of the midpoint of a dynamic texture patch is adopted as its value on the rating scale. When all dynamic textures in a category had been ordered with respect to their common classifier, the participant could proceed to the next texture category by pressing a button that appeared in the upper part of the screen (
<xref ref-type="fig" rid="F7">Figure 7c</xref>
). Before this button was pressed, the order of the dynamic textures could still be adjusted.</p>
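<p>For illustration, the conversion from the horizontal position of a dragged texture patch to a value on the 0–1 rating scale can be sketched as follows. This is a hypothetical reconstruction in Python; the paper does not describe the test programme's actual implementation, and the parameter names (scale_left, scale_width) are assumptions.</p>

```python
def position_to_rating(patch_center_x, scale_left, scale_width):
    """Map the horizontal midpoint of a dragged texture patch to a 0-1 rating.

    scale_left and scale_width are the pixel position and extent of the
    rating scale bar (hypothetical names; the study's actual code is not
    given in the paper).
    """
    rating = (patch_center_x - scale_left) / scale_width
    # Clamp to the scale range: 0 = "not at all", 1 = "very much".
    return min(1.0, max(0.0, rating))
```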
<fig id="F7" orientation="portrait" position="float">
<label>Figure 7.</label>
<caption>
<p>Screen layout during the affective rating phase of the experiment. This example illustrates the rating of dynamic water textures. (a) Initially, all dynamic textures in a given category are shown in the upper part of the screen. A scale bar in the middle of the screen shows the classification term (in this example: Active, Lively) and the range of the scale (ie, the degree to which the classification term applies to the dynamic textures: ranging from 0 = “not at all” to 1 = “very much”). (b) Participants can drag the dynamic textures from the upper part of the screen to the lower part using a mouse. The horizontal position of the midpoint of a dynamic texture patch is adopted as its value on the rating scale. (c) When all dynamic textures in a category have been ordered with respect to their common classifier, the participant can proceed to the next texture category by pressing a button that appears in the upper part of the screen. Before pressing this button, the order of the dynamic textures can still be adjusted.</p>
</caption>
<graphic xlink:href="i-perception-2-969-g0007"></graphic>
</fig>
<p>The test programme allowed the simultaneous presentation of an arbitrary number of video clips on a regular computer. Hence, all textures could be presented dynamically and simultaneously at their full frame rate. If there were more textures in a category than could be displayed simultaneously in the upper display area, the programme initially showed only the first elements of the category. Each time a texture was moved to the lower part of the screen, a new element from the category appeared in the display area that had become available in the upper part of the screen, until finally all textures in the category were represented on the screen.</p>
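<p>This refill behaviour amounts to a simple queue over the category's textures. The sketch below is a hypothetical reconstruction of that logic, not code from the actual test programme.</p>

```python
from collections import deque

def refill_display(display, pending, capacity):
    """Keep the upper display area filled to capacity: whenever a texture
    is dragged out, the next unrated texture in the category (held in the
    pending queue) takes the freed slot."""
    while len(display) < capacity and pending:
        display.append(pending.popleft())
    return display
```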
<p>The experiments were performed self-paced without any time restrictions. A single run typically lasted between 30 and 45 minutes.</p>
</sec>
<sec id="s4.3">
<label>4.3</label>
<title>Preliminary analyses</title>
<p>The mean rating scores were computed for each texture in both (Water and Mixed) datasets and for all classification terms. The two resulting datasets with scores on the spatiotemporal characteristics (
<xref ref-type="table" rid="T1">Table 1</xref>
) and mean scores for the emotional response variables for each texture (
<xref ref-type="table" rid="T2">Table 2</xref>
) were then used as groups in the model of
<xref ref-type="fig" rid="F1">Figure 1</xref>
, which was trimmed and then analysed for multi-group invariance.</p>
<table-wrap id="T1" orientation="portrait" position="float">
<label>Table 1.</label>
<caption>
<title>Descriptive statistics (Mean and Standard Deviation) of the spatiotemporal characteristics of the Water and Mixed stimulus sets. See text for an explanation of the descriptors.</title>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<td rowspan="2" valign="top" colspan="1">Descriptor</td>
<td colspan="2" rowspan="1">Water textures
<hr></hr>
</td>
<td colspan="2" rowspan="1">Mixed textures
<hr></hr>
</td>
</tr>
<tr>
<td rowspan="1" colspan="1">M</td>
<td rowspan="1" colspan="1">SD</td>
<td rowspan="1" colspan="1">M</td>
<td rowspan="1" colspan="1">SD</td>
</tr>
</thead>
<tbody>
<tr>
<td rowspan="1" colspan="1">Trajectory type</td>
<td rowspan="1" colspan="1">3.733</td>
<td rowspan="1" colspan="1">1.081</td>
<td rowspan="1" colspan="1">3.597</td>
<td rowspan="1" colspan="1">1.189</td>
</tr>
<tr>
<td rowspan="1" colspan="1">Appearance change</td>
<td rowspan="1" colspan="1">3.400</td>
<td rowspan="1" colspan="1">1.020</td>
<td rowspan="1" colspan="1">2.778</td>
<td rowspan="1" colspan="1">1.180</td>
</tr>
<tr>
<td rowspan="1" colspan="1">Speed frequency</td>
<td rowspan="1" colspan="1">2.333</td>
<td rowspan="1" colspan="1">0.547</td>
<td rowspan="1" colspan="1">2.250</td>
<td rowspan="1" colspan="1">0.500</td>
</tr>
<tr>
<td rowspan="1" colspan="1">Amplitude</td>
<td rowspan="1" colspan="1">1.300</td>
<td rowspan="1" colspan="1">0.837</td>
<td rowspan="1" colspan="1">1.500</td>
<td rowspan="1" colspan="1">0.878</td>
</tr>
<tr>
<td rowspan="1" colspan="1">Temporal regularity</td>
<td rowspan="1" colspan="1">1.733</td>
<td rowspan="1" colspan="1">0.450</td>
<td rowspan="1" colspan="1">1.861</td>
<td rowspan="1" colspan="1">0.351</td>
</tr>
<tr>
<td rowspan="1" colspan="1">Spatial regularity</td>
<td rowspan="1" colspan="1">1.667</td>
<td rowspan="1" colspan="1">0.606</td>
<td rowspan="1" colspan="1">1.694</td>
<td rowspan="1" colspan="1">0.467</td>
</tr>
<tr>
<td rowspan="1" colspan="1">Spatial scale</td>
<td rowspan="1" colspan="1">1.767</td>
<td rowspan="1" colspan="1">0.504</td>
<td rowspan="1" colspan="1">1.778</td>
<td rowspan="1" colspan="1">0.540</td>
</tr>
<tr>
<td rowspan="1" colspan="1">Spatial contrast</td>
<td rowspan="1" colspan="1">1.500</td>
<td rowspan="1" colspan="1">0.509</td>
<td rowspan="1" colspan="1">1.722</td>
<td rowspan="1" colspan="1">0.513</td>
</tr>
</tbody>
</table>
</table-wrap>
<table-wrap id="T2" orientation="portrait" position="float">
<label>Table 2.</label>
<caption>
<title>Descriptive statistics (Mean and Standard Deviation) of the emotional (
<italic>Relaxation, Pleasure, Arousal, Interest, Dominance</italic>
), spatial (
<italic>Complexity</italic>
), and temporal (
<italic>Regularity</italic>
) response variables of the Water and Mixed stimulus sets. See text for an explanation of the descriptors.</title>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<td rowspan="2" valign="top" colspan="1">Response variable</td>
<td colspan="2" rowspan="1">Water textures
<hr></hr>
</td>
<td colspan="2" rowspan="1">Mixed textures
<hr></hr>
</td>
</tr>
<tr>
<td rowspan="1" colspan="1">Mean</td>
<td rowspan="1" colspan="1">SD</td>
<td rowspan="1" colspan="1">Mean</td>
<td rowspan="1" colspan="1">SD</td>
</tr>
</thead>
<tbody>
<tr>
<td rowspan="1" colspan="1">Relaxation</td>
<td rowspan="1" colspan="1">0.551</td>
<td rowspan="1" colspan="1">0.242</td>
<td rowspan="1" colspan="1">0.560</td>
<td rowspan="1" colspan="1">0.177</td>
</tr>
<tr>
<td rowspan="1" colspan="1">Pleasure</td>
<td rowspan="1" colspan="1">0.548</td>
<td rowspan="1" colspan="1">0.222</td>
<td rowspan="1" colspan="1">0.536</td>
<td rowspan="1" colspan="1">0.184</td>
</tr>
<tr>
<td rowspan="1" colspan="1">Arousal</td>
<td rowspan="1" colspan="1">0.479</td>
<td rowspan="1" colspan="1">0.249</td>
<td rowspan="1" colspan="1">0.454</td>
<td rowspan="1" colspan="1">0.211</td>
</tr>
<tr>
<td rowspan="1" colspan="1">Interest</td>
<td rowspan="1" colspan="1">0.460</td>
<td rowspan="1" colspan="1">0.209</td>
<td rowspan="1" colspan="1">0.451</td>
<td rowspan="1" colspan="1">0.156</td>
</tr>
<tr>
<td rowspan="1" colspan="1">Dominance</td>
<td rowspan="1" colspan="1">0.480</td>
<td rowspan="1" colspan="1">0.239</td>
<td rowspan="1" colspan="1">0.496</td>
<td rowspan="1" colspan="1">0.154</td>
</tr>
<tr>
<td rowspan="1" colspan="1">Complexity</td>
<td rowspan="1" colspan="1">0.479</td>
<td rowspan="1" colspan="1">0.249</td>
<td rowspan="1" colspan="1">0.448</td>
<td rowspan="1" colspan="1">0.178</td>
</tr>
<tr>
<td rowspan="1" colspan="1">Regularity</td>
<td rowspan="1" colspan="1">0.522</td>
<td rowspan="1" colspan="1">0.233</td>
<td rowspan="1" colspan="1">0.538</td>
<td rowspan="1" colspan="1">0.186</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>Absolute values of the multivariate kurtosis and all the endogenous variables' skew and kurtosis values were less than 3, and none of the endogenous variables had
<italic>p</italic>
-values smaller than .05 for the Kolmogorov-Smirnov and Shapiro-Wilk normality tests. Only rough linearity was found between certain pairs of variables, but in these cases no curvilinear relationships were found either. Box-plots and Mahalanobis distances revealed no significant outliers. No other assumptions were violated. All statistical model assumptions were thus found to hold.</p>
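<p>The skew and kurtosis screening criterion (absolute values below 3) can be illustrated with a short calculation of sample skew and excess kurtosis. This is a generic sketch, not the analysis code used in the study.</p>

```python
import numpy as np

def skew_kurtosis(x):
    """Sample skew and excess kurtosis, as used to screen endogenous
    variables for approximate normality (|value| < 3 taken as acceptable)."""
    x = np.asarray(x, dtype=float)
    z = (x - x.mean()) / x.std()
    skew = np.mean(z ** 3)          # 0 for a symmetric sample
    kurt = np.mean(z ** 4) - 3.0    # excess kurtosis; 0 for a normal sample
    return skew, kurt
```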
<p>The response variables
<italic>Complexity</italic>
and
<italic>Regularity</italic>
are directly related to the spatiotemporal stimulus descriptors from the DynTex database:
<italic>Regularity</italic>
represents an overall impression of
<italic>TrajectoryType, SpatialRegularity, TemporalRegularity</italic>
, and
<italic>AppearanceChange</italic>
, while
<italic>Complexity</italic>
refers to an overall impression of
<italic>SpatialContrast, Amplitude, SpeedFrequency</italic>
, and
<italic>SpatialScale.</italic>
Hence, it is of interest to know to what extent these descriptors indeed explain the participants' impressions of the dynamic textures. A regression analysis was therefore performed with
<italic>Complexity</italic>
as the dependent variable and the spatio-temporal texture descriptors as the independent variables. R
<sup>2</sup>
was 0.576. When the analysis was performed with
<italic>Regularity</italic>
as the dependent variable, R
<sup>2</sup>
was 0.596. Hence, the spatio-temporal descriptors indeed adequately explain the participants' impressions of the textures.</p>
</sec>
<sec id="s4.4">
<label>4.4</label>
<title>Model modification</title>
<p>Firstly, all regression paths in the initial model from
<xref ref-type="fig" rid="F1">Figure 1</xref>
that were not significant for both (water and mixed) data sets were deleted one by one, starting with the paths with the highest
<italic>p</italic>
-values and working down. All regression paths were then one by one constrained to be equal across the data sets, starting with the paths that had the greatest inter-set differences in regression coefficients and
<italic>p</italic>
-values. Constraints were kept only if they did not significantly degrade the model fit (ie, if they resulted in an insignificant increase in χ
<sup>2</sup>
). Since the two datasets result from two different observer populations watching different dynamic texture sets, the resulting path constraints indicate that the corresponding relationships are invariant across populations and texture types, and consequently have high internal and external validity and high replicability. The comparative model fit over the independence model, measured by the Comparative Fit Index (CFI), the model residual, measured by the Root Mean Square Error of Approximation (RMSEA), and the relative goodness of fit, measured by the Akaike Information Criterion (AIC), are all better for the final than for the unconstrained model, and changes in these statistics were generally in line with changes in χ
<sup>2</sup>
. After adding the valid constraints, no
<italic>p</italic>
-values exceeded 0.25, so the model was not further trimmed. The final model is presented in
<xref ref-type="fig" rid="F8">Figure 8</xref>
.</p>
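<p>The retention rule for a single equality constraint (1 degree of freedom) can be sketched as a chi-square difference test, as below. This is an illustrative reconstruction of the decision rule, not output or code from the SEM software used in the study; for df = 1 the chi-square survival function reduces to erfc(&#x221A;(x/2)).</p>

```python
import math

def chisq_diff_test_df1(chi2_constrained, chi2_unconstrained):
    """p-value of the chi-square increase caused by one added equality
    constraint (df = 1). If p > .05 the constraint does not significantly
    degrade model fit and is kept."""
    delta = chi2_constrained - chi2_unconstrained
    if delta <= 0:
        return 1.0  # fit did not degrade at all
    return math.erfc(math.sqrt(delta / 2.0))
```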
<fig id="F8" orientation="portrait" position="float">
<label>Figure 8.</label>
<caption>
<p>The final model. Dashed lines represent relationships that vary per texture set.</p>
</caption>
<graphic xlink:href="i-perception-2-969-g0008"></graphic>
</fig>
</sec>
<sec id="s4.5">
<label>4.5</label>
<title>Final results</title>
<p>
<xref ref-type="table" rid="T3">Table 3</xref>
reports the regression path coefficients of the final model shown in
<xref ref-type="fig" rid="F8">Figure 8</xref>
(the coefficients are not included in this figure to prevent clutter). For the standardised regression coefficients, values below 0.3 can be considered as weak, values between 0.3 and 0.5 as modest, and values larger than 0.5 as strong.</p>
<table-wrap id="T3" orientation="portrait" position="float">
<label>Table 3.</label>
<caption>
<p>Regression coefficients for the paths in the final structural equation model. For estimates that are different for the two (water and mixed) stimulus sets, the values are reported as [estimate water set]/[estimate mixed set].</p>
</caption>
<graphic xlink:href="i-perception-2-969-t0003"></graphic>
</table-wrap>
<p>
<italic>TrajectoryType</italic>
was a modest, positive predictor of
<italic>Relaxation</italic>
for the water textures: water in complex motion was perceived as relaxing. However, this relation may be specific for homogeneous or water textures, since no such relation was observed for dynamic textures with diverse content. There was a trend towards significance for a relation between
<italic>TrajectoryType</italic>
and
<italic>Pleasure</italic>
. This relation was positive for water textures, but negative for textures of diverse content, providing tentative support for the idea that (as in visual art) the relation between pleasure and complexity is non-linear. Linearity tests correspondingly found only very rough linearity.
<italic>TrajectoryType</italic>
had a weak, negative correlation with
<italic>Arousal</italic>
and
<italic>Dominance</italic>
for both stimulus sets, indicating that arousal and dominance both decrease with motion complexity. In sum, motion complexity generally has a slightly relaxing and weakening effect on the perception of a dynamic texture, while its effect on pleasure may be non-linear.</p>
<p>
<italic>AppearanceChange</italic>
correlated positively with
<italic>Arousal</italic>
and
<italic>Dominance</italic>
and negatively with
<italic>Relaxation.</italic>
The effects were all weak for the Water Set and modest for the Mixed Set. All relations were invariant between the two texture sets and thus had high validity. In contrast with the complexity of motion itself, the complexity of changes in the motion of a texture is thus perceived as dominant and arousing.</p>
<p>
<italic>SpeedFrequency</italic>
correlated modestly and positively with
<italic>Dominance</italic>
and
<italic>Arousal</italic>
, and it correlated weakly and negatively with
<italic>Relaxation</italic>
and
<italic>Pleasure</italic>
. All relations were invariant between the two texture sets. As such, the speed of a texture's dynamics has a dominant and arousing effect that is regarded as unpleasant.
<italic>Amplitude</italic>
only predicted
<italic>Pleasure,</italic>
and the correlation was negative and invariant between the two texture sets (the unstandardised estimate was constrained equal across sets; the standardised coefficient was modest for the Water Set and weak for the Mixed Set). That is, the extent of a texture's dynamics has an unpleasant effect.
<italic>TemporalRegularity,</italic>
which describes the regularity of the previous two variables, had a modestly positive correlation with
<italic>Pleasure</italic>
for the Water Set, but no relation for the Mixed Set. The desirability of dynamic regularity in textures may thus be content specific.</p>
<p>
<italic>SpatialRegularity</italic>
correlated weakly positively with
<italic>Relaxation</italic>
. Correspondingly, there was a weak and negative correlation with
<italic>Arousal. SpatialRegularity</italic>
also correlated weakly positively with
<italic>Pleasure.</italic>
Its relation with
<italic>Dominance</italic>
was modestly negative, and the relation with
<italic>Interest</italic>
was weakly negative. All relations were invariant between the two texture sets, except the relation with
<italic>Relaxation,</italic>
and even that relation did not vary appreciably in strength. In conclusion, a texture with more regularity in space, ie with fewer different dynamics occurring simultaneously, elicits somewhat more relaxation and pleasure, less interest, and is perceived as less dominant.</p>
<p>
<italic>SpatialScale</italic>
correlated negatively with
<italic>Relaxation</italic>
and positively with
<italic>Arousal.</italic>
All effects were weak, except the negative correlation with
<italic>Relaxation</italic>
for the Mixed Set, which was modest.
<italic>SpatialScale</italic>
also correlated weakly positively with
<italic>Dominance</italic>
. Its relation with
<italic>Pleasure</italic>
was modestly negative for the Water Set and strongly negative for the Mixed Set. All relations were invariant between the two texture sets. Similar to the speed of a texture's dynamics, the relative surface area of the dynamics has an unpleasant, arousing, and dominant effect, which is greater for the Mixed Set than for the Water Set.</p>
<p>
<italic>SpatialContrast</italic>
had no relations with the emotional response variables for the Mixed Set. For the Water Set, it correlated modestly positively with
<italic>Arousal</italic>
and modestly negatively with
<italic>Relaxation</italic>
.
<italic>SpatialContrast</italic>
furthermore correlated weakly negatively with
<italic>Pleasure</italic>
and modestly positively with both
<italic>Interest</italic>
and
<italic>Dominance.</italic>
In conclusion, the effects of a water texture's degree of spatial variation are arousing, dominant, mildly unpleasant, and interesting. These effects are not seen for textures of diverse content.</p>
<p>
<xref ref-type="table" rid="T4">Table 4</xref>
presents the proportions of explained variance for the dependent variables. In line with the many significant predictors, explained variance was around 50% for all variables except
<italic>Pleasure</italic>
for the water textures and
<italic>Interest</italic>
in general. Proportions around 50% are typically considered a significant amount for indeterminate concepts like emotions (Kline
<xref ref-type="bibr" rid="R42">2010</xref>
, p. 185), who denotes values below 1% as small, values around 10% as typical or medium, and values above 30% as large. These results also indicate that the selected spatio-temporal video characteristics adequately characterise a dynamic texture with regard to its effects on emotional response. Even the values for
<italic>Interest</italic>
are more in the range of “typical” than “small”.
<xref ref-type="table" rid="T5">Table 5</xref>
reports the variances of all the predictors and the residual error terms. All variances were significant. This finding indicates that there was significant variability in the spatio-temporal characteristics of the selected dynamic textures. Note that, in agreement with the high proportions of explained variance, the variances of the residual error terms are small relative to those of the predictors.</p>
<table-wrap id="T4" orientation="portrait" position="float">
<label>Table 4.</label>
<caption>
<title>Proportions of explained variance (R
<sup>2</sup>
) for the dependent variables.</title>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1">Pleasure</td>
<td rowspan="1" colspan="1">Arousal</td>
<td rowspan="1" colspan="1">Dominance</td>
<td rowspan="1" colspan="1">Interest</td>
<td rowspan="1" colspan="1">Relaxation</td>
</tr>
</thead>
<tbody>
<tr>
<td rowspan="1" colspan="1">Water set</td>
<td rowspan="1" colspan="1">0.734</td>
<td rowspan="1" colspan="1">0.554</td>
<td rowspan="1" colspan="1">0.475</td>
<td rowspan="1" colspan="1">0.150</td>
<td rowspan="1" colspan="1">0.560</td>
</tr>
<tr>
<td rowspan="1" colspan="1">Mixed set</td>
<td rowspan="1" colspan="1">0.571</td>
<td rowspan="1" colspan="1">0.492</td>
<td rowspan="1" colspan="1">0.530</td>
<td rowspan="1" colspan="1">0.042</td>
<td rowspan="1" colspan="1">0.496</td>
</tr>
</tbody>
</table>
</table-wrap>
<table-wrap id="T5" orientation="portrait" position="float">
<label>Table 5.</label>
<caption>
<title>Variances of the predictor variables.</title>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<td rowspan="2" valign="top" colspan="1">Predictor</td>
<td colspan="3" rowspan="1">Water set
<hr></hr>
</td>
<td colspan="3" rowspan="1">Mixed set
<hr></hr>
</td>
</tr>
<tr>
<td rowspan="1" colspan="1">σ
<sup>2</sup>
</td>
<td rowspan="1" colspan="1">S.E.</td>
<td rowspan="1" colspan="1">
<italic>p</italic>
</td>
<td rowspan="1" colspan="1">σ
<sup>2</sup>
</td>
<td rowspan="1" colspan="1">S.E.</td>
<td rowspan="1" colspan="1">
<italic>p</italic>
</td>
</tr>
</thead>
<tbody>
<tr>
<td rowspan="1" colspan="1">Trajectory type</td>
<td rowspan="1" colspan="1">1.129</td>
<td rowspan="1" colspan="1">0.296</td>
<td rowspan="1" colspan="1"><0.001</td>
<td rowspan="1" colspan="1">1.372</td>
<td rowspan="1" colspan="1">0.329</td>
<td rowspan="1" colspan="1"><0.001</td>
</tr>
<tr>
<td rowspan="1" colspan="1">Appearance change</td>
<td rowspan="1" colspan="1">1.007</td>
<td rowspan="1" colspan="1">0.264</td>
<td rowspan="1" colspan="1"><0.001</td>
<td rowspan="1" colspan="1">1.353</td>
<td rowspan="1" colspan="1">0.324</td>
<td rowspan="1" colspan="1"><0.001</td>
</tr>
<tr>
<td rowspan="1" colspan="1">Speed frequency</td>
<td rowspan="1" colspan="1">0.289</td>
<td rowspan="1" colspan="1">0.076</td>
<td rowspan="1" colspan="1"><0.001</td>
<td rowspan="1" colspan="1">0.243</td>
<td rowspan="1" colspan="1">0.058</td>
<td rowspan="1" colspan="1"><0.001</td>
</tr>
<tr>
<td rowspan="1" colspan="1">Amplitude</td>
<td rowspan="1" colspan="1">0.677</td>
<td rowspan="1" colspan="1">0.177</td>
<td rowspan="1" colspan="1"><0.001</td>
<td rowspan="1" colspan="1">0.750</td>
<td rowspan="1" colspan="1">0.180</td>
<td rowspan="1" colspan="1"><0.001</td>
</tr>
<tr>
<td rowspan="1" colspan="1">Temporal regularity</td>
<td rowspan="1" colspan="1">0.196</td>
<td rowspan="1" colspan="1">0.051</td>
<td rowspan="1" colspan="1"><0.001</td>
<td rowspan="1" colspan="1">0.120</td>
<td rowspan="1" colspan="1">0.029</td>
<td rowspan="1" colspan="1"><0.001</td>
</tr>
<tr>
<td rowspan="1" colspan="1">Spatial regularity</td>
<td rowspan="1" colspan="1">0.356</td>
<td rowspan="1" colspan="1">0.093</td>
<td rowspan="1" colspan="1"><0.001</td>
<td rowspan="1" colspan="1">0.212</td>
<td rowspan="1" colspan="1">0.051</td>
<td rowspan="1" colspan="1"><0.001</td>
</tr>
<tr>
<td rowspan="1" colspan="1">Spatial scale</td>
<td rowspan="1" colspan="1">0.246</td>
<td rowspan="1" colspan="1">0.064</td>
<td rowspan="1" colspan="1"><0.001</td>
<td rowspan="1" colspan="1">0.284</td>
<td rowspan="1" colspan="1">0.068</td>
<td rowspan="1" colspan="1"><0.001</td>
</tr>
<tr>
<td rowspan="1" colspan="1">Spatial contrast</td>
<td rowspan="1" colspan="1">0.250</td>
<td rowspan="1" colspan="1">0.066</td>
<td rowspan="1" colspan="1"><0.001</td>
<td rowspan="1" colspan="1">0.256</td>
<td rowspan="1" colspan="1">0.061</td>
<td rowspan="1" colspan="1"><0.001</td>
</tr>
<tr>
<td rowspan="1" colspan="1">d1 (relaxation)</td>
<td rowspan="1" colspan="1">0.029</td>
<td rowspan="1" colspan="1">0.008</td>
<td rowspan="1" colspan="1"><0.001</td>
<td rowspan="1" colspan="1">0.016</td>
<td rowspan="1" colspan="1">0.004</td>
<td rowspan="1" colspan="1"><0.001</td>
</tr>
<tr>
<td rowspan="1" colspan="1">d2 (pleasure)</td>
<td rowspan="1" colspan="1">0.019</td>
<td rowspan="1" colspan="1">0.005</td>
<td rowspan="1" colspan="1"><0.001</td>
<td rowspan="1" colspan="1">0.017</td>
<td rowspan="1" colspan="1">0.004</td>
<td rowspan="1" colspan="1"><0.001</td>
</tr>
<tr>
<td rowspan="1" colspan="1">d3 (arousal)</td>
<td rowspan="1" colspan="1">0.029</td>
<td rowspan="1" colspan="1">0.008</td>
<td rowspan="1" colspan="1"><0.001</td>
<td rowspan="1" colspan="1">0.021</td>
<td rowspan="1" colspan="1">0.005</td>
<td rowspan="1" colspan="1"><0.001</td>
</tr>
<tr>
<td rowspan="1" colspan="1">d4 (interest)</td>
<td rowspan="1" colspan="1">0.037</td>
<td rowspan="1" colspan="1">0.010</td>
<td rowspan="1" colspan="1"><0.001</td>
<td rowspan="1" colspan="1">0.021</td>
<td rowspan="1" colspan="1">0.005</td>
<td rowspan="1" colspan="1"><0.001</td>
</tr>
<tr>
<td rowspan="1" colspan="1">d5(dominance)</td>
<td rowspan="1" colspan="1">0.025</td>
<td rowspan="1" colspan="1">0.007</td>
<td rowspan="1" colspan="1"><0.001</td>
<td rowspan="1" colspan="1">0.013</td>
<td rowspan="1" colspan="1">0.003</td>
<td rowspan="1" colspan="1"><0.001</td>
</tr>
</tbody>
</table>
</table-wrap>
</sec>
</sec>
<sec sec-type="discussion" id="s5">
<label>5</label>
<title>Discussion</title>
<p>The relation between various spatiotemporal characteristics and emotional experience was studied for visual dynamic textures. Motion complexity was found to have mildly relaxing and nondominant effects. In contrast, motion change complexity was found to be arousing and dominant. The speed of dynamics had arousing, dominant and unpleasant effects. The amplitude of dynamics was also regarded as unpleasant. The regularity of the dynamics may interact with video content in eliciting emotions. The regularity of the dynamics over the textures' space was found to be uninteresting, nondominant, mildly relaxing, and mildly pleasant. The spatial scale of the dynamics had an unpleasant, arousing, and dominant effect, which was larger for textures with diverse content than for water textures. For water textures, the effects of spatial contrast were arousing, dominant, interesting, and mildly unpleasant, but none of these effects were found for textures of diverse content.</p>
<p>These results only partially agree with the hypotheses, demonstrating that the effects are domain specific and that they cannot easily be derived from the literature on other sensory domains, such as studies of the emotional response to auditory or tactile stimuli. Hypothesis (i)—temporal regularity is positively related to pleasure—was only supported for the set of water textures: no relation was found for the set of mixed textures. Hypothesis (ii)—both the speed and amplitude of movement are positively related to arousal and negatively to relaxation—was supported for the speed of the dynamics, but not for their amplitude. Amplitude did not significantly correlate with either
<italic>Arousal</italic>
or
<italic>Relaxation</italic>
. Finally, hypothesis (iii)—complexity of motion and change therein is positively related to interest—was not even partially supported. Neither motion complexity nor motion change complexity correlated significantly with
<italic>Interest</italic>
.</p>
<p>The present study shows that the speed of a texture's dynamics has a dominant and arousing effect. This finding agrees with the result of an earlier study on static imagery of water textures where it was found that stillness is perceived as relaxing and that (the impression of) movement is perceived as exciting (Nasar and Lin
<xref ref-type="bibr" rid="R53">2003</xref>
). The current finding that complexity correlates positively with pleasure for water textures agrees with the earlier results that composite water textures are preferred over simple ones (Nasar and Lin
<xref ref-type="bibr" rid="R53">2003</xref>
), and that complexity correlates positively with interest in visual art (Forsythe et al
<xref ref-type="bibr" rid="R27">2011</xref>
). The present result that complexity, speed, and amplitude of movement all serve to increase arousal also agrees with similar findings from the tactile domain, where it was found that “Smooth” stimuli elicit a lethargic feeling, and “Prickly” elicits a nervous feeling, while an increase in frequency and amplitude is positively correlated with the intensity of the emotional response (Suk et al
<xref ref-type="bibr" rid="R69">2009</xref>
).</p>
<p>Certain findings are of particular interest.
<italic>AppearanceChange</italic>
correlated positively with
<italic>Arousal</italic>
and
<italic>Dominance</italic>
and negatively with
<italic>Relaxation,</italic>
and these relations were all weak for the water textures but moderate for the textures of diverse content. This discrepancy in strength may have occurred because water textures all have relatively high and invariant motion change complexity (
<italic>M</italic>
<sub>water</sub>
= 3.400,
<italic>SD</italic>
<sub>water</sub>
= 1.020;
<italic>M</italic>
<sub>mixed</sub>
= 2.778,
<italic>SD</italic>
<sub>mixed</sub>
= 1.180), so that participants watching the Water Set may have adapted to its effects while participants watching the Mixed Set had less opportunity to adapt.</p>
<p>The effects of
<italic>SpatialContrast</italic>
also differed greatly between the two texture sets.
<italic>SpatialContrast</italic>
correlated significantly with several emotional response variables for the Water Set, but it was not a significant predictor for any variable in the Mixed Set. It is hypothesised that this distinction occurs because the spatial contrast of the diverse textures exists predominantly between different objects (eg, cars, flora), for which different dynamics are naturally expected. Conversely, the spatial variation of water textures occurs within the same homogeneous water mass and may in consequence be perceived as more unexpected and chaotic. To informally test this hypothesis, a regression analysis was performed with
<italic>SpatialContrast∗Density</italic>
as the predictor of the emotional response variables.
<italic>Density</italic>
is an additional annotation in the DynTex database that describes the discernibility of a texture's individual parts; it varies from sparse to medium to dense to continuous, with values ranging from 1 to 4, respectively. A significant interaction between
<italic>Density</italic>
and
<italic>SpatialContrast</italic>
was found for
<italic>Relaxation</italic>
(
<italic>p</italic>
= 0.046;
<italic>β</italic>
= 0.334) and
<italic>Arousal</italic>
(
<italic>p</italic>
= 0.023;
<italic>β</italic>
= 0.378), providing tentative support for the hypothesis.</p>
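The interaction analysis reported above can be sketched as an ordinary least-squares regression with the product term SpatialContrast × Density as the single predictor. The sketch below uses synthetic stand-in data; the variable names, effect size, and noise level are illustrative assumptions, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 36  # number of textures in a stimulus set

# Hypothetical stand-ins for the DynTex annotations:
# SpatialContrast on [0, 1]; Density coded 1 (sparse) .. 4 (continuous).
spatial_contrast = rng.uniform(0.0, 1.0, n)
density = rng.integers(1, 5, n).astype(float)

# Simulated emotional response driven by the interaction term (assumed slope 0.4).
arousal = 0.4 * spatial_contrast * density + rng.normal(0.0, 0.5, n)

# Regression with an intercept and the interaction predictor SpatialContrast * Density.
X = np.column_stack([np.ones(n), spatial_contrast * density])
beta, *_ = np.linalg.lstsq(X, arousal, rcond=None)
print(beta)  # [intercept, interaction slope]
```

A positive fitted slope on the product term would correspond to the kind of Density × SpatialContrast interaction reported for Arousal; in the actual study, predictors and responses were standardised before comparison.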
<p>
<italic>Interest</italic>
was an outlier in that relatively little of its variance could be explained by the spatiotemporal texture characteristics. It is hypothesised that interest is determined primarily by content, although spatiotemporal texture characteristics evidently play a significant role as well.</p>
<p>The current rating procedure was adopted because earlier experiments showed that, when samples are presented individually one after the other, participants tend to forget their responses to similar samples shown earlier in a trial (Lucassen et al
<xref ref-type="bibr" rid="R46">2011</xref>
). This would increase response variability and reduce intra- and inter-observer correlations. A possible limitation of the present emotional response rating procedure is that it may be prone to bias. In this study, the participants assigned the stimuli to several emotional classes during a first viewing and ranked all stimuli in each class during a second viewing. Hence, habituation (both through repeated viewing of the same stimuli and through simultaneously viewing all stimuli in the same class) may have diminished the emotional response during the second viewing, which would imply that the results underestimate the true effects. A procedure in which the stimuli are rated individually may be less prone to this type of bias. Presenting each texture individually, rather than simultaneously with all previously selected textures, would also eliminate relative-judgment biases: despite their instructions, the participants may merely have rated the textures relative to the other textures presented alongside them, instead of using the absolute scale of 0 to 1. Fortunately, no statistical problems were observed as a result of the chosen methodology, and all indicators of the study's validity were favourable. As most of the relations were invariant across the two stimulus sets (ie, for different textures and participants), the results (though exploratory) had high internal and external validity. The selected spatiotemporal characteristics furthermore explain large amounts of the emotional response variance and thus adequately characterise the dynamic textures. After standardisation, the observed relations were stronger for the mixed textures than for the set of water textures. This suggests that the current findings may even underestimate the emotional effects of dynamic textures in real life.
The current results may also underestimate these effects because of the small angular size of the stimuli (about 18 deg). In real life, dynamic textures may fill a much larger part of the observer's visual field, which may significantly enhance the effects observed here (eg, Lin et al
<xref ref-type="bibr" rid="R45">2007</xref>
).</p>
<p>Given the high prevalence of dynamic textures in nature and their increasing importance in digital media, the current findings may have important practical implications for designers and observers of dynamic scenery. Possible applications are the synthesis of affective multimedia content (eg, backgrounds for games, video clips, or digital wallpaper; eg, Houtkamp et al
<xref ref-type="bibr" rid="R35">2008</xref>
), the design of restorative or healing environments (Dijkstra et al
<xref ref-type="bibr" rid="R22">2006</xref>
), and affective video retrieval (Hanjalic
<xref ref-type="bibr" rid="R33">2006</xref>
, Hanjalic and Xu
<xref ref-type="bibr" rid="R34">2005</xref>
). For instance, virtual environments can be made emotionally more compelling by introducing dominant and arousing dynamic textures, such as large and fast breaking waves, that heighten tension and create dramatic effects (Houtkamp et al
<xref ref-type="bibr" rid="R35">2008</xref>
). Similarly, the restorative value of healing environments may benefit from the introduction of relaxing dynamic textures like slowly undulating water surfaces or waving corn fields (Dijkstra et al
<xref ref-type="bibr" rid="R22">2006</xref>
). Displays in empty train stations may stimulate lonely or bored travellers by showing interesting and complex dynamic patterns, while the same displays may relax hurried and aroused travellers by showing simple, slow-moving patterns when stations are crowded and tension mounts (Van Hagen
<xref ref-type="bibr" rid="R71">2011</xref>
). Most spatiotemporal dynamic texture descriptors from the DynTex database have a direct relation with computer vision algorithms (Péteri et al
<xref ref-type="bibr" rid="R57">2010</xref>
). The relation between these descriptors and human emotional experience therefore enables the automatic indexing and retrieval of affective video content.</p>
<p>Future research could investigate the hypothesised explanations for some of the current findings, such as the non-linear relationship between complexity and pleasure, the idea that spatial contrast evokes emotional responses only when it occurs within a single object, and the idea that interest is predominantly determined by content. Interaction effects between the relationships observed in this study could also be explored. Furthermore, it would be interesting to compare the results for the Water Set with those obtained for other continuous or semi-continuous media (eg, waving textile or grass), to investigate whether the difference between the results for the Water Set and the Mixed Set reflects a semantic component or the fact that not all observer impressions are captured by the current set of descriptors. Finally, it could be tested whether the results of this study can be extrapolated to non-recurrent textures, which would be of significance for the emotional experience of people's entire visual surroundings.</p>
</sec>
</body>
<back>
<ack>
<p>This research was supported by NWO-VICI grant 639.023.705 Color in Computer Vision.</p>
</ack>
<fn-group>
<fn id="fn1">
<label>(1)</label>
<p>The 30 dynamic textures in the Water Set had the following identifiers in the DynTex database: 6ame100, 54ab110, 54pf110, 54pg110, 55fa110, 64adf10, 64adl10, 64cb810, 571b110, 571b310, 644c610, 647b110, 647b210, 647b410, 647b710, 647b810, 647c310, 648e510, 649dd10, 649de10, 649h310, 649i410, 649i810, 649ic10, 6484f10, 6484i10, 6485110, 6485210, 6487510, 6489510.</p>
</fn>
<fn id="fn2">
<label>(2)</label>
<p>The 36 dynamic textures in the Mixed Set had the following identifiers in the DynTex database: 6ame100, 6ammi00, 54ab110, 54ac110, 54pf110, 64aa410, 64ab410, 64ab510, 64ad410, 64ad910, 64adb10, 64adf10, 64adl10, 571b110, 571b310, 571c110, 571d110, 644a910, 645ab10, 645b710, 645c110, 645c220, 645c610, 646c410, 646c510, 648b610, 648dc10, 649ha10, 6481f10, 6482c10, 6484d10, 6486b10, 6482210, 6485110, 6485310, 6489510.</p>
</fn>
<fn id="fn3">
<label>(3)</label>
<p>The following 9 textures were included in both stimulus sets: 6ame100, 54ab110, 54pf110, 64adf10, 64adl10, 571b110, 571b310, 6485110, 6489510.</p>
</fn>
<fn id="fn4">
<label>(4)</label>
<p>The 36 dynamic textures had the following identifiers in the DynTex database: 644a910, 645ab10, 645b710, 645c110, 645c220, 645c610, 646c410, 646c510, 648b610, 648dc10, 649ha10, 6481f10, 6482c10, 6484d10, 6486b10, 6482210, 6485110, 6485310, 6489510, 6ame100, 6ammi00, 54ab110, 54ac110, 54pf110, 64aa410, 64ab410, 64ab510, 64ad410, 64ad910, 64adb10, 64adf10, 64adl10, 571b110, 571b310, 571c110, 571d110.</p>
</fn>
</fn-group>
<bio id="d34e1825">
<p>
<inline-graphic xlink:href="i-perception-2-969-i0001.gif"></inline-graphic>
<bold>Alexander Toet</bold>
received his PhD in physics from the University of Utrecht, Utrecht, The Netherlands in 1987, where he worked on visual spatial localization (hyperacuity) and image processing. He is currently a guest researcher at the Intelligent System Laboratory Amsterdam, Faculty of Science, University of Amsterdam, where he investigates the effects of color on affective image classification, and a senior research scientist at TNO (Soesterberg, The Netherlands), where he investigates multimodal image fusion, image quality, computational models of human visual search and detection, and the quantification of visual target distinctness. He also studies crossmodal perceptual interactions between the visual, auditory, olfactory, and tactile senses, with the aim of deploying these interactions to enhance the affective quality of virtual environments for training and simulation.</p>
<p>
<inline-graphic xlink:href="i-perception-2-969-i0002.gif"></inline-graphic>
<bold>Menno Henselmans</bold>
received his BSc in Social Sciences (with a minor in statistics) magna cum laude from University College Utrecht, The Netherlands in 2011. He is currently a Master's student in Behavioural and Economic Science at the University of Warwick, UK.</p>
<p>
<inline-graphic xlink:href="i-perception-2-969-i0003.gif"></inline-graphic>
<bold>Marcel Lucassen</bold>
received his MSc degree in technical physics from Twente University and his PhD in biophysics (on color constancy) from Utrecht University, The Netherlands. Thereafter he worked with Akzo Nobel Coatings and TNO Human Factors in research and management positions. He is now a freelance color scientist at Lucassen Colour Research and holds a part-time position at the University of Amsterdam. His research interests are in basic and applied color vision.</p>
<p>
<inline-graphic xlink:href="i-perception-2-969-i0004.gif"></inline-graphic>
<bold>Theo Gevers</bold>
is an Associate Professor of Computer Science at the University of Amsterdam, The Netherlands, where he is also teaching director of the MSc of Artificial Intelligence. He is a full professor at the Computer Vision Center of the Universitat Autonoma de Barcelona, Spain. He currently holds a VICI award (for excellent researchers) from the Dutch Organisation for Scientific Research. His main research interests are in the fundamentals of color image processing, image understanding, and computer vision. He has chaired various conferences and is an associate editor for the IEEE Transactions on Image Processing. He also serves on the program committees of a number of conferences, is an invited speaker at major conferences, and lectures in post-doctoral courses at various major conferences (CVPR, ICPR, SPIE, CGIV).</p>
</bio>
<ref-list>
<title>References</title>
<ref id="R1">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Adolphs</surname>
<given-names>R</given-names>
</name>
</person-group>
<year>2004</year>
<article-title>“Emotional vision”</article-title>
<source>Nat. Neurosci.</source>
<volume>7</volume>
<fpage>1167</fpage>
<lpage>1168</lpage>
<pub-id pub-id-type="pmid">15508009</pub-id>
</element-citation>
</ref>
<ref id="R2">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Akaike</surname>
<given-names>H</given-names>
</name>
</person-group>
<year>1974</year>
<article-title>“A new look at the statistical model identification”</article-title>
<source>IEEE Transactions on Automatic Control</source>
<volume>19</volume>
<fpage>716</fpage>
<lpage>723</lpage>
<pub-id pub-id-type="doi">10.1109/TAC.1974.1100705</pub-id>
</element-citation>
</ref>
<ref id="R3">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Akaike</surname>
<given-names>H</given-names>
</name>
</person-group>
<year>1987</year>
<article-title>“Factor analysis and AIC”</article-title>
<source>Psychometrika</source>
<volume>52</volume>
<fpage>317</fpage>
<lpage>332</lpage>
<pub-id pub-id-type="doi">10.1007/BF02294359</pub-id>
</element-citation>
</ref>
<ref id="R4">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Amaral</surname>
<given-names>D G</given-names>
</name>
<name>
<surname>Behniea</surname>
<given-names>H</given-names>
</name>
<name>
<surname>Kelly</surname>
<given-names>J L</given-names>
</name>
</person-group>
<year>2003</year>
<article-title>“Topographic organization of projections from the amygdala to the visual cortex in the macaque monkey”</article-title>
<source>Neuroscience</source>
<volume>118</volume>
<fpage>1099</fpage>
<lpage>1120</lpage>
<pub-id pub-id-type="doi">10.1016/S0306-4522(02)01001-1</pub-id>
<pub-id pub-id-type="pmid">12732254</pub-id>
</element-citation>
</ref>
<ref id="R5">
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Arbuckle</surname>
<given-names>J L</given-names>
</name>
</person-group>
<year>2010</year>
<source>IBM SPSS® Amos™ 19 User's Guide</source>
<publisher-loc>Crawfordville, FL</publisher-loc>
<publisher-name>Amos Development Corporation</publisher-name>
</element-citation>
</ref>
<ref id="R6">
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Arifin</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Cheung</surname>
<given-names>P Y K</given-names>
</name>
</person-group>
<year>2007</year>
<article-title>“A novel video parsing algorithm utilizing the Pleasure-Arousal-Dominance emotional information”</article-title>
<source>IEEE International Conference on Image Processing 2007</source>
<publisher-loc>Washington, DC</publisher-loc>
<publisher-name>IEEE Press</publisher-name>
</element-citation>
</ref>
<ref id="R7">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Barrington</surname>
<given-names>L</given-names>
</name>
<name>
<surname>Chan</surname>
<given-names>A B</given-names>
</name>
<name>
<surname>Lanckriet</surname>
<given-names>G</given-names>
</name>
</person-group>
<year>2010</year>
<article-title>“Modeling music as a dynamic texture”</article-title>
<source>IEEE Transactions on Audio, Speech, and Language Processing</source>
<volume>18</volume>
<fpage>602</fpage>
<lpage>612</lpage>
<pub-id pub-id-type="doi">10.1109/TASL.2009.2036306</pub-id>
</element-citation>
</ref>
<ref id="R8">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Baumgartner</surname>
<given-names>T</given-names>
</name>
<name>
<surname>Lutz</surname>
<given-names>K</given-names>
</name>
<name>
<surname>Schmidt</surname>
<given-names>C F</given-names>
</name>
<name>
<surname>Jancke</surname>
<given-names>L</given-names>
</name>
</person-group>
<year>2006</year>
<article-title>“The emotional power of music: how music enhances the feeling of affective pictures”</article-title>
<source>Brain Research</source>
<volume>1075</volume>
<fpage>151</fpage>
<lpage>164</lpage>
<pub-id pub-id-type="doi">10.1016/j.brainres.2005.12.065</pub-id>
<pub-id pub-id-type="pmid">16458860</pub-id>
</element-citation>
</ref>
<ref id="R9">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bentler</surname>
<given-names>P M</given-names>
</name>
</person-group>
<year>1990</year>
<article-title>“Comparative fit indexes in structural models”</article-title>
<source>Psychological Bulletin</source>
<volume>107</volume>
<fpage>238</fpage>
<lpage>246</lpage>
<pub-id pub-id-type="doi">10.1037/0033-2909.107.2.238</pub-id>
<pub-id pub-id-type="pmid">2320703</pub-id>
</element-citation>
</ref>
<ref id="R10">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bigand</surname>
<given-names>E</given-names>
</name>
<name>
<surname>Vieillard</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Madurell</surname>
<given-names>F</given-names>
</name>
<name>
<surname>Marozeau</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Dacquet</surname>
<given-names>A</given-names>
</name>
</person-group>
<year>2005</year>
<article-title>“Multidimensional scaling of emotional responses to music: The effect of musical expertise and of the duration of the excerpts”</article-title>
<source>Cognition & Emotion</source>
<volume>19</volume>
<fpage>1113</fpage>
<lpage>1139</lpage>
<pub-id pub-id-type="doi">10.1080/02699930500204250</pub-id>
</element-citation>
</ref>
<ref id="R11">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Billock</surname>
<given-names>V A</given-names>
</name>
</person-group>
<year>2000</year>
<article-title>“Neural acclimation to 1/f spatial frequency spectra in natural images transduced by the human visual system”</article-title>
<source>Physica D: Nonlinear Phenomena</source>
<volume>137</volume>
<fpage>379</fpage>
<lpage>391</lpage>
<pub-id pub-id-type="doi">10.1016/S0167-2789(99)00197-9</pub-id>
</element-citation>
</ref>
<ref id="R12">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Billock</surname>
<given-names>V A</given-names>
</name>
<name>
<surname>Cunningham</surname>
<given-names>D W</given-names>
</name>
<name>
<surname>Havig</surname>
<given-names>P R</given-names>
</name>
<name>
<surname>Tsou</surname>
<given-names>B H</given-names>
</name>
</person-group>
<year>2001a</year>
<article-title>“Perception of spatiotemporal random fractals: an extension of colorimetric methods to the study of dynamic texture”</article-title>
<source>Journal of the Optical Society of America A</source>
<volume>18</volume>
<fpage>2404</fpage>
<lpage>2413</lpage>
<pub-id pub-id-type="doi">10.1364/JOSAA.18.002404</pub-id>
</element-citation>
</ref>
<ref id="R13">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Billock</surname>
<given-names>V A</given-names>
</name>
<name>
<surname>de Guzman</surname>
<given-names>G C</given-names>
</name>
<name>
<surname>Kelso</surname>
<given-names>J A S</given-names>
</name>
</person-group>
<year>2001b</year>
<article-title>“Fractal time and 1/f spectra in dynamic images and human vision”</article-title>
<source>Physica D: Nonlinear Phenomena</source>
<volume>148</volume>
<fpage>136</fpage>
<lpage>146</lpage>
<pub-id pub-id-type="doi">10.1016/S0167-2789(00)00174-3</pub-id>
</element-citation>
</ref>
<ref id="R14">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Boltz</surname>
<given-names>M G</given-names>
</name>
<name>
<surname>Ebendorf</surname>
<given-names>B</given-names>
</name>
<name>
<surname>Field</surname>
<given-names>B</given-names>
</name>
</person-group>
<year>2009</year>
<article-title>“Audiovisual interactions: the impact of visual information on music perception and memory”</article-title>
<source>Music Perception</source>
<volume>27</volume>
<fpage>43</fpage>
<lpage>59</lpage>
<pub-id pub-id-type="doi">10.1525/mp.2009.27.1.43</pub-id>
</element-citation>
</ref>
<ref id="R15">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bresin</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Friberg</surname>
<given-names>A</given-names>
</name>
</person-group>
<year>2000</year>
<article-title>“Emotional coloring of computer-controlled music performances”</article-title>
<source>Computer Music Journal</source>
<volume>24</volume>
<fpage>44</fpage>
<lpage>63</lpage>
<pub-id pub-id-type="doi">10.1162/014892600559515</pub-id>
</element-citation>
</ref>
<ref id="R16">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Cerf</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Cleary</surname>
<given-names>D R</given-names>
</name>
<name>
<surname>Peters</surname>
<given-names>R J</given-names>
</name>
<name>
<surname>Einhäuser</surname>
<given-names>W</given-names>
</name>
<name>
<surname>Koch</surname>
<given-names>C</given-names>
</name>
</person-group>
<year>2007</year>
<article-title>“Observers are consistent when rating image conspicuity”</article-title>
<source>Vision Research</source>
<volume>47</volume>
<fpage>3052</fpage>
<lpage>3060</lpage>
<pub-id pub-id-type="doi">10.1016/j.visres.2007.06.025</pub-id>
<pub-id pub-id-type="pmid">17923144</pub-id>
</element-citation>
</ref>
<ref id="R17">
<element-citation publication-type="other">
<person-group person-group-type="author">
<name>
<surname>Chan</surname>
<given-names>A B</given-names>
</name>
<name>
<surname>Vasconcelos</surname>
<given-names>N</given-names>
</name>
</person-group>
<year>2005</year>
<source>“Layered dynamic textures”</source>
<italic>Proceedings of Neural Information Processing Systems 18</italic>
</element-citation>
</ref>
<ref id="R18">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Chen</surname>
<given-names>T P</given-names>
</name>
<name>
<surname>Chen</surname>
<given-names>C-W</given-names>
</name>
<name>
<surname>Popp</surname>
<given-names>P</given-names>
</name>
<name>
<surname>Coover</surname>
<given-names>B</given-names>
</name>
</person-group>
<year>2011</year>
<article-title>“Visual rhythm detection and its applications in interactive multimedia”</article-title>
<source>IEEE Multimedia</source>
<volume>18</volume>
<fpage>88</fpage>
<lpage>95</lpage>
<pub-id pub-id-type="doi">10.1109/MMUL.2011.19</pub-id>
</element-citation>
</ref>
<ref id="R19">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Chuang</surname>
<given-names>Y-Y</given-names>
</name>
<name>
<surname>Goldman</surname>
<given-names>D B</given-names>
</name>
<name>
<surname>Zheng</surname>
<given-names>K C</given-names>
</name>
<name>
<surname>Curless</surname>
<given-names>B</given-names>
</name>
<name>
<surname>Salesin</surname>
<given-names>D H</given-names>
</name>
<name>
<surname>Szeliski</surname>
<given-names>R</given-names>
</name>
</person-group>
<year>2005</year>
<article-title>“Animating pictures with stochastic motion textures”</article-title>
<source>ACM Transactions on Graphics</source>
<volume>24</volume>
<fpage>853</fpage>
<lpage>860</lpage>
<pub-id pub-id-type="doi">10.1145/1073204.1073273</pub-id>
</element-citation>
</ref>
<ref id="R20">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Constantini</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Sbaiz</surname>
<given-names>L</given-names>
</name>
<name>
<surname>Süsstrunk</surname>
<given-names>S</given-names>
</name>
</person-group>
<year>2008</year>
<article-title>“Higher order SVD analysis for dynamic texture synthesis”</article-title>
<source>IEEE Transactions on Image Processing</source>
<volume>17</volume>
<fpage>42</fpage>
<lpage>52</lpage>
<pub-id pub-id-type="doi">10.1109/TIP.2007.910956</pub-id>
<pub-id pub-id-type="pmid">18229803</pub-id>
</element-citation>
</ref>
<ref id="R21">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Delplanque</surname>
<given-names>S</given-names>
</name>
<name>
<surname>N'diaye</surname>
<given-names>K</given-names>
</name>
<name>
<surname>Scherer</surname>
<given-names>K</given-names>
</name>
<name>
<surname>Grandjean</surname>
<given-names>D</given-names>
</name>
</person-group>
<year>2007</year>
<article-title>“Spatial frequencies or emotional effects? A systematic measure of spatial frequencies for IAPS pictures by a discrete wavelet analysis”</article-title>
<source>Journal of Neuroscience Methods</source>
<volume>165</volume>
<fpage>144</fpage>
<lpage>150</lpage>
<pub-id pub-id-type="doi">10.1016/j.jneumeth.2007.05.030</pub-id>
<pub-id pub-id-type="pmid">17629569</pub-id>
</element-citation>
</ref>
<ref id="R22">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Dijkstra</surname>
<given-names>K</given-names>
</name>
<name>
<surname>Pieterse</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Pruyn</surname>
<given-names>A</given-names>
</name>
</person-group>
<year>2006</year>
<article-title>“Physical environmental stimuli that turn healthcare facilities into healing environments through psychologically mediated effects: systematic review”</article-title>
<source>Journal of Advanced Nursing</source>
<volume>56</volume>
<fpage>166</fpage>
<lpage>181</lpage>
<pub-id pub-id-type="doi">10.1111/j.1365-2648.2006.03990.x</pub-id>
<pub-id pub-id-type="pmid">17018065</pub-id>
</element-citation>
</ref>
<ref id="R23">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Doretto</surname>
<given-names>G</given-names>
</name>
<name>
<surname>Chiuso</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Wu</surname>
<given-names>Y N</given-names>
</name>
<name>
<surname>Soatto</surname>
<given-names>S</given-names>
</name>
</person-group>
<year>2003</year>
<article-title>“Dynamic textures”</article-title>
<source>International Journal of Computer Vision</source>
<volume>51</volume>
<fpage>91</fpage>
<lpage>109</lpage>
<pub-id pub-id-type="doi">10.1023/A:1021669406132</pub-id>
</element-citation>
</ref>
<ref id="R24">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Fernandez</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Wilkins</surname>
<given-names>A J</given-names>
</name>
</person-group>
<year>2008</year>
<article-title>“Uncomfortable images in art and nature”</article-title>
<source>Perception</source>
<volume>37</volume>
<fpage>1098</fpage>
<lpage>1113</lpage>
<pub-id pub-id-type="doi">10.1068/p5814</pub-id>
<pub-id pub-id-type="pmid">18773732</pub-id>
</element-citation>
</ref>
<ref id="R25">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Field</surname>
<given-names>D J</given-names>
</name>
</person-group>
<year>1987</year>
<article-title>“Relations between the statistics of natural images and the response properties of cortical cells”</article-title>
<source>Journal of the Optical Society of America A</source>
<volume>4</volume>
<fpage>2379</fpage>
<lpage>2394</lpage>
<pub-id pub-id-type="doi">10.1364/JOSAA.4.002379</pub-id>
</element-citation>
</ref>
<ref id="R26">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Field</surname>
<given-names>D J</given-names>
</name>
</person-group>
<year>1994</year>
<article-title>“What is the goal of sensory coding?”</article-title>
<source>Neural Computation</source>
<volume>6</volume>
<fpage>559</fpage>
<lpage>601</lpage>
<pub-id pub-id-type="doi">10.1162/neco.1994.6.4.559</pub-id>
</element-citation>
</ref>
<ref id="R27">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Forsythe</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Nadal</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Sheehy</surname>
<given-names>N</given-names>
</name>
<name>
<surname>Cela-Conde</surname>
<given-names>CJ</given-names>
</name>
<name>
<surname>Sawey</surname>
<given-names>M</given-names>
</name>
</person-group>
<year>2011</year>
<article-title>“Predicting beauty: fractal dimension and visual complexity in art”</article-title>
<source>British Journal of Psychology</source>
<volume>102</volume>
<fpage>49</fpage>
<lpage>70</lpage>
<pub-id pub-id-type="doi">10.1348/000712610X498958</pub-id>
<pub-id pub-id-type="pmid">21241285</pub-id>
</element-citation>
</ref>
<ref id="R28">
<element-citation publication-type="other">
<person-group person-group-type="author">
<name>
<surname>Fujiwara</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Aono</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Kuwano</surname>
<given-names>S</given-names>
</name>
</person-group>
<year>2006</year>
<source>“Audio-visual interaction in the image evaluation of the environment—an on site investigation”</source>
<comment>Inter-Noise</comment>
</element-citation>
</ref>
<ref id="R29">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Gomez</surname>
<given-names>P</given-names>
</name>
<name>
<surname>Danuser</surname>
<given-names>B</given-names>
</name>
</person-group>
<year>2007</year>
<article-title>“Affective and physiological responses to environmental noises and music”</article-title>
<source>International Journal of Psychophysiology</source>
<volume>53</volume>
<fpage>91</fpage>
<lpage>103</lpage>
<pub-id pub-id-type="doi">10.1016/j.ijpsycho.2004.02.002</pub-id>
<pub-id pub-id-type="pmid">15210287</pub-id>
</element-citation>
</ref>
<ref id="R30">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Groissboeck</surname>
<given-names>W</given-names>
</name>
<name>
<surname>Lughofer</surname>
<given-names>E</given-names>
</name>
<name>
<surname>Thumfart</surname>
<given-names>S</given-names>
</name>
</person-group>
<year>2010</year>
<article-title>“Associating visual textures with human perceptions using genetic algorithms”</article-title>
<source>Information Sciences</source>
<volume>180</volume>
<fpage>2065</fpage>
<lpage>2084</lpage>
<pub-id pub-id-type="doi">10.1016/j.ins.2010.01.035</pub-id>
</element-citation>
</ref>
<ref id="R31">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Guttman</surname>
<given-names>S E</given-names>
</name>
<name>
<surname>Gilroy</surname>
<given-names>L A</given-names>
</name>
<name>
<surname>Blake</surname>
<given-names>R</given-names>
</name>
</person-group>
<year>2005</year>
<article-title>“Hearing what the eyes see: auditory encoding of visual temporal sequences”</article-title>
<source>Psychological Science</source>
<volume>16</volume>
<fpage>228</fpage>
<lpage>235</lpage>
<pub-id pub-id-type="doi">10.1111/j.0956-7976.2005.00808.x</pub-id>
<pub-id pub-id-type="pmid">15733204</pub-id>
</element-citation>
</ref>
<ref id="R32">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hagerhall</surname>
<given-names>C M</given-names>
</name>
<name>
<surname>Laike</surname>
<given-names>T</given-names>
</name>
<name>
<surname>Taylor</surname>
<given-names>R P</given-names>
</name>
<name>
<surname>Küller</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Küller</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Martin</surname>
<given-names>T P</given-names>
</name>
</person-group>
<year>2008</year>
<article-title>“Investigations of human EEG response to viewing fractal patterns”</article-title>
<source>Perception</source>
<volume>37</volume>
<fpage>1488</fpage>
<lpage>1494</lpage>
<pub-id pub-id-type="doi">10.1068/p5918</pub-id>
<pub-id pub-id-type="pmid">19065853</pub-id>
</element-citation>
</ref>
<ref id="R33">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hanjalic</surname>
<given-names>A</given-names>
</name>
</person-group>
<year>2006</year>
<article-title>“Extracting moods from pictures and sounds: towards truly personalized TV”</article-title>
<source>IEEE Signal Processing Magazine</source>
<volume>23</volume>
<fpage>90</fpage>
<lpage>100</lpage>
<pub-id pub-id-type="doi">10.1109/MSP.2006.1621452</pub-id>
</element-citation>
</ref>
<ref id="R34">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hanjalic</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Xu</surname>
<given-names>L-Q</given-names>
</name>
</person-group>
<year>2005</year>
<article-title>“Affective video content representation and modeling”</article-title>
<source>IEEE Transactions on Multimedia</source>
<volume>7</volume>
<fpage>143</fpage>
<lpage>154</lpage>
<pub-id pub-id-type="doi">10.1109/TMM.2004.840618</pub-id>
</element-citation>
</ref>
<ref id="R35">
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Houtkamp</surname>
<given-names>J M</given-names>
</name>
<name>
<surname>Schuurink</surname>
<given-names>E L</given-names>
</name>
<name>
<surname>Toet</surname>
<given-names>A</given-names>
</name>
</person-group>
<year>2008</year>
<article-title>“Thunderstorms in my computer: the effect of visual dynamics and sound in a 3D environment”</article-title>
<source>Proceedings of the International Conference on Visualisation in Built and Rural Environments (BuiltViz'08), Eds M Bannatyne and J Counsell</source>
<publisher-loc>Los Alamitos, CA</publisher-loc>
<publisher-name>IEEE Computer Society</publisher-name>
</element-citation>
</ref>
<ref id="R36">
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Huang</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Waldvogel</surname>
<given-names>M</given-names>
</name>
</person-group>
<year>2005</year>
<article-title>“Interactive wallpaper”</article-title>
<source>SIGGRAPH 2005 Electronic Art and Animation Catalog</source>
<publisher-loc>New York</publisher-loc>
<publisher-name>ACM</publisher-name>
</element-citation>
</ref>
<ref id="R37">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Husain</surname>
<given-names>G</given-names>
</name>
<name>
<surname>Thompson</surname>
<given-names>W F</given-names>
</name>
<name>
<surname>Schellenberg</surname>
<given-names>E G</given-names>
</name>
</person-group>
<year>2002</year>
<article-title>“Effects of musical tempo and mode on arousal, mood, and spatial abilities”</article-title>
<source>Music Perception</source>
<volume>20</volume>
<fpage>151</fpage>
<lpage>171</lpage>
<pub-id pub-id-type="doi">10.1525/mp.2002.20.2.151</pub-id>
</element-citation>
</ref>
<ref id="R38">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Juricevic</surname>
<given-names>I</given-names>
</name>
<name>
<surname>Land</surname>
<given-names>L</given-names>
</name>
<name>
<surname>Wilkins</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Webster</surname>
<given-names>M A</given-names>
</name>
</person-group>
<year>2010</year>
<article-title>“Visual discomfort and natural image statistics”</article-title>
<source>Perception</source>
<volume>39</volume>
<fpage>884</fpage>
<lpage>899</lpage>
<pub-id pub-id-type="doi">10.1068/p6656</pub-id>
<pub-id pub-id-type="pmid">20842966</pub-id>
</element-citation>
</ref>
<ref id="R39">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Juslin</surname>
<given-names>P N</given-names>
</name>
<name>
<surname>Västfjäll</surname>
<given-names>D</given-names>
</name>
</person-group>
<year>2008</year>
<article-title>“Emotional responses to music: the need to consider underlying mechanisms”</article-title>
<source>Behavioral and Brain Sciences</source>
<volume>31</volume>
<fpage>559</fpage>
<lpage>575</lpage>
<pub-id pub-id-type="pmid">18826699</pub-id>
</element-citation>
</ref>
<ref id="R40">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kellaris</surname>
<given-names>J J</given-names>
</name>
<name>
<surname>Kent</surname>
<given-names>R J</given-names>
</name>
</person-group>
<year>1993</year>
<article-title>“An exploratory investigation of responses elicited by music varying in tempo, tonality, and texture”</article-title>
<source>Journal of Consumer Psychology</source>
<volume>2</volume>
<fpage>381</fpage>
<lpage>401</lpage>
<pub-id pub-id-type="doi">10.1016/S1057-7408(08)80068-X</pub-id>
</element-citation>
</ref>
<ref id="R41">
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Kim</surname>
<given-names>E Y</given-names>
</name>
<name>
<surname>Kim</surname>
<given-names>S-J</given-names>
</name>
<name>
<surname>Jeong</surname>
<given-names>K</given-names>
</name>
<name>
<surname>Kim</surname>
<given-names>J</given-names>
</name>
</person-group>
<year>2005</year>
<article-title>“Emotion-based textile indexing using colors and texture”</article-title>
<source>Fuzzy Systems and Knowledge Discovery, Lecture Notes in Computer Science</source>
<publisher-loc>Berlin / Heidelberg</publisher-loc>
<publisher-name>Springer</publisher-name>
</element-citation>
</ref>
<ref id="R42">
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Kline</surname>
<given-names>R B</given-names>
</name>
</person-group>
<year>2010</year>
<source>Principles and practice of structural equation modelling</source>
<publisher-loc>New York</publisher-loc>
<publisher-name>The Guilford Press</publisher-name>
</element-citation>
</ref>
<ref id="R43">
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Küller</surname>
<given-names>R</given-names>
</name>
</person-group>
<year>1975</year>
<source>Semantisk miljöbeskrivning (SMB)</source>
<publisher-loc>Stockholm, Sweden</publisher-loc>
<publisher-name>Psykologiförlaget</publisher-name>
</element-citation>
</ref>
<ref id="R44">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Lai</surname>
<given-names>C-H</given-names>
</name>
<name>
<surname>Wu</surname>
<given-names>J-L</given-names>
</name>
</person-group>
<year>2007</year>
<article-title>“Temporal texture synthesis by patch-based sampling and morphing interpolation”</article-title>
<source>Computer Animation and Virtual Worlds</source>
<volume>18</volume>
<fpage>415</fpage>
<lpage>428</lpage>
<pub-id pub-id-type="doi">10.1002/cav.195</pub-id>
</element-citation>
</ref>
<ref id="R45">
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Lin</surname>
<given-names>T</given-names>
</name>
<name>
<surname>Imamiya</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Hu</surname>
<given-names>W</given-names>
</name>
<name>
<surname>Omata</surname>
<given-names>M</given-names>
</name>
</person-group>
<year>2007</year>
<article-title>“Display characteristics affect users' emotional arousal in 3D games”</article-title>
<source>Universal access in ambient intelligence environments, Lecture Notes in Computer Science</source>
<publisher-loc>Berlin / Heidelberg</publisher-loc>
<publisher-name>Springer</publisher-name>
</element-citation>
</ref>
<ref id="R46">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Lucassen</surname>
<given-names>M P</given-names>
</name>
<name>
<surname>Gevers</surname>
<given-names>T</given-names>
</name>
<name>
<surname>Gijsenij</surname>
<given-names>A</given-names>
</name>
</person-group>
<year>2011</year>
<article-title>“Texture affects color emotion”</article-title>
<source>Color Research & Application</source>
<volume>36</volume>
<fpage>426</fpage>
<lpage>436</lpage>
<pub-id pub-id-type="doi">10.1002/col.20647</pub-id>
</element-citation>
</ref>
<ref id="R47">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>MacCallum</surname>
<given-names>R C</given-names>
</name>
<name>
<surname>Austin</surname>
<given-names>J T</given-names>
</name>
</person-group>
<year>2000</year>
<article-title>“Applications of structural equation modeling in psychological research”</article-title>
<source>Annual Review of Psychology</source>
<volume>51</volume>
<fpage>201</fpage>
<lpage>226</lpage>
<pub-id pub-id-type="doi">10.1146/annurev.psych.51.1.201</pub-id>
</element-citation>
</ref>
<ref id="R48">
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Machajdik</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Hanbury</surname>
<given-names>A</given-names>
</name>
</person-group>
<year>2010</year>
<article-title>“Affective image classification using features inspired by psychology and art theory”</article-title>
<source>Proceedings of the International Conference on Multimedia (MM'10)</source>
<publisher-loc>New York</publisher-loc>
<publisher-name>ACM</publisher-name>
</element-citation>
</ref>
<ref id="R49">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Mao</surname>
<given-names>X</given-names>
</name>
<name>
<surname>Chen</surname>
<given-names>B</given-names>
</name>
<name>
<surname>Muta</surname>
<given-names>I</given-names>
</name>
</person-group>
<year>2003</year>
<article-title>“Affective property of image and fractal dimension”</article-title>
<source>Chaos, Solitons & Fractals</source>
<volume>15</volume>
<fpage>905</fpage>
<lpage>910</lpage>
<pub-id pub-id-type="doi">10.1016/S0960-0779(02)00209-6</pub-id>
</element-citation>
</ref>
<ref id="R50">
<element-citation publication-type="other">
<person-group person-group-type="author">
<name>
<surname>Masakura</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Nagai</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Kumada</surname>
<given-names>T</given-names>
</name>
</person-group>
<year>2006</year>
<source>“Effective visual cue for guiding peoples' attention to important information based on subjective and behavioral measures”</source>
<italic>Proceedings of The First International Workshop on Kansei</italic>
</element-citation>
</ref>
<ref id="R51">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>McKinney</surname>
<given-names>C H</given-names>
</name>
<name>
<surname>Tims</surname>
<given-names>F C</given-names>
</name>
</person-group>
<year>1995</year>
<article-title>“Differential effects of selected classical music on the imagery of high versus low imagers: two studies”</article-title>
<source>Journal of Music Therapy</source>
<volume>32</volume>
<fpage>22</fpage>
<lpage>45</lpage>
</element-citation>
</ref>
<ref id="R52">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Mehrabian</surname>
<given-names>A</given-names>
</name>
</person-group>
<year>1996</year>
<article-title>“Pleasure-arousal-dominance: a general framework for describing and measuring individual differences in temperament”</article-title>
<source>Current Psychology</source>
<volume>14</volume>
<fpage>261</fpage>
<lpage>292</lpage>
<pub-id pub-id-type="doi">10.1007/BF02686918</pub-id>
</element-citation>
</ref>
<ref id="R53">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Nasar</surname>
<given-names>J L</given-names>
</name>
<name>
<surname>Lin</surname>
<given-names>Y-H</given-names>
</name>
</person-group>
<year>2003</year>
<article-title>“Evaluative responses to five kinds of water features”</article-title>
<source>Landscape Research</source>
<volume>28</volume>
<fpage>441</fpage>
<lpage>450</lpage>
<pub-id pub-id-type="doi">10.1080/0142639032000150167</pub-id>
</element-citation>
</ref>
<ref id="R54">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>O'Hare</surname>
<given-names>L</given-names>
</name>
<name>
<surname>Hibbard</surname>
<given-names>P B</given-names>
</name>
</person-group>
<year>2011</year>
<article-title>“Spatial frequency and visual discomfort”</article-title>
<source>Vision Research</source>
<volume>51</volume>
<fpage>1767</fpage>
<lpage>1777</lpage>
<pub-id pub-id-type="doi">10.1016/j.visres.2011.06.002</pub-id>
<pub-id pub-id-type="pmid">21684303</pub-id>
</element-citation>
</ref>
<ref id="R55">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Parraga</surname>
<given-names>C A</given-names>
</name>
<name>
<surname>Troscianko</surname>
<given-names>T</given-names>
</name>
<name>
<surname>Tolhurst</surname>
<given-names>D J</given-names>
</name>
</person-group>
<year>2000</year>
<article-title>“The human visual system is optimised for processing the spatial information in natural visual images”</article-title>
<source>Current Biology</source>
<volume>10</volume>
<fpage>35</fpage>
<lpage>38</lpage>
<pub-id pub-id-type="doi">10.1016/S0960-9822(99)00262-6</pub-id>
<pub-id pub-id-type="pmid">10660301</pub-id>
</element-citation>
</ref>
<ref id="R56">
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Péteri</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Chetverikov</surname>
<given-names>D</given-names>
</name>
</person-group>
<year>2006</year>
<article-title>“Qualitative characterization of dynamic textures for video retrieval”</article-title>
<source>Computer Vision and Graphics. Proceedings of the International Conference on Computer Vision and Graphics (ICCVG 2004), Computational Imaging and Vision 32</source>
<publisher-loc>Berlin / Heidelberg</publisher-loc>
<publisher-name>Springer</publisher-name>
</element-citation>
</ref>
<ref id="R57">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Péteri</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Fazekas</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Huiskes</surname>
<given-names>M J</given-names>
</name>
</person-group>
<year>2010</year>
<article-title>“DynTex: a comprehensive database of dynamic textures”</article-title>
<source>Pattern Recognition Letters</source>
<volume>31</volume>
<fpage>1627</fpage>
<lpage>1632</lpage>
<pub-id pub-id-type="doi">10.1016/j.patrec.2010.05.009</pub-id>
</element-citation>
</ref>
<ref id="R58">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Post</surname>
<given-names>F H</given-names>
</name>
<name>
<surname>Vrolijk</surname>
<given-names>B</given-names>
</name>
<name>
<surname>Hauser</surname>
<given-names>H</given-names>
</name>
<name>
<surname>Laramee</surname>
<given-names>R S</given-names>
</name>
<name>
<surname>Doleisch</surname>
<given-names>H</given-names>
</name>
</person-group>
<year>2003</year>
<article-title>“The state of the art in flow visualisation: feature extraction and tracking”</article-title>
<source>Computer Graphics Forum</source>
<volume>22</volume>
<fpage>775</fpage>
<lpage>792</lpage>
<pub-id pub-id-type="doi">10.1111/j.1467-8659.2003.00723.x</pub-id>
</element-citation>
</ref>
<ref id="R59">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Recanzone</surname>
<given-names>G H</given-names>
</name>
</person-group>
<year>2003</year>
<article-title>“Auditory influences on visual temporal rate perception”</article-title>
<source>Journal of Neurophysiology</source>
<volume>89</volume>
<fpage>1078</fpage>
<lpage>1093</lpage>
<pub-id pub-id-type="doi">10.1152/jn.00706.2002</pub-id>
<pub-id pub-id-type="pmid">12574482</pub-id>
</element-citation>
</ref>
<ref id="R60">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Russell</surname>
<given-names>J A</given-names>
</name>
</person-group>
<year>1980</year>
<article-title>“A circumplex model of affect”</article-title>
<source>Journal of Personality and Social Psychology</source>
<volume>39</volume>
<fpage>1161</fpage>
<lpage>1178</lpage>
<pub-id pub-id-type="doi">10.1037/h0077714</pub-id>
</element-citation>
</ref>
<ref id="R61">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Russell</surname>
<given-names>J A</given-names>
</name>
<name>
<surname>Pratt</surname>
<given-names>G</given-names>
</name>
</person-group>
<year>1980</year>
<article-title>“A description of the affective quality attributed to environments”</article-title>
<source>Journal of Personality and Social Psychology</source>
<volume>38</volume>
<fpage>311</fpage>
<lpage>322</lpage>
<pub-id pub-id-type="doi">10.1037/0022-3514.38.2.311</pub-id>
</element-citation>
</ref>
<ref id="R62">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Russell</surname>
<given-names>J A</given-names>
</name>
<name>
<surname>Ward</surname>
<given-names>L M</given-names>
</name>
<name>
<surname>Pratt</surname>
<given-names>G</given-names>
</name>
</person-group>
<year>1981</year>
<article-title>“Affective quality attributed to environments: a factor analytic study”</article-title>
<source>Environment and Behavior</source>
<volume>13</volume>
<fpage>259</fpage>
<lpage>288</lpage>
<pub-id pub-id-type="doi">10.1177/0013916581133001</pub-id>
</element-citation>
</ref>
<ref id="R63">
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Salminen</surname>
<given-names>K</given-names>
</name>
<name>
<surname>Surakka</surname>
<given-names>V</given-names>
</name>
<name>
<surname>Lylykangas</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Raisamo</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Saarinen</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Raisamo</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Rantala</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Evreinov</surname>
<given-names>G</given-names>
</name>
</person-group>
<year>2011</year>
<article-title>“Emotional and behavioral responses to haptic stimulation”</article-title>
<source>Proceedings of the twenty-sixth annual SIGCHI conference on Human factors in computing systems</source>
<publisher-loc>New York</publisher-loc>
<publisher-name>ACM Press</publisher-name>
</element-citation>
</ref>
<ref id="R64">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Schermelleh-Engel</surname>
<given-names>K</given-names>
</name>
<name>
<surname>Moosbrugger</surname>
<given-names>H</given-names>
</name>
<name>
<surname>Müller</surname>
<given-names>H</given-names>
</name>
</person-group>
<year>2003</year>
<article-title>“Evaluating the fit of structural equation models: tests of significance and descriptive goodness-of-fit measures”</article-title>
<source>Methods of Psychological Research Online</source>
<volume>8</volume>
<fpage>23</fpage>
<lpage>74</lpage>
</element-citation>
</ref>
<ref id="R65">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Simmons</surname>
<given-names>D R</given-names>
</name>
<name>
<surname>Russell</surname>
<given-names>C L</given-names>
</name>
</person-group>
<year>2008</year>
<article-title>“Visual texture affects the perceived unpleasantness of colours”</article-title>
<source>Perception</source>
<volume>37</volume>
<fpage>146</fpage>
<lpage>146</lpage>
</element-citation>
</ref>
<ref id="R66">
<element-citation publication-type="other">
<person-group person-group-type="author">
<name>
<surname>Smith</surname>
<given-names>J R</given-names>
</name>
<name>
<surname>Lin</surname>
<given-names>C-Y</given-names>
</name>
<name>
<surname>Naphade</surname>
<given-names>M</given-names>
</name>
</person-group>
<year>2002</year>
<source>“Video texture indexing using spatio-temporal wavelets”</source>
<italic>Proceedings of the International Conference on Image Processing</italic>
</element-citation>
</ref>
<ref id="R67">
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Soleymani</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Chanel</surname>
<given-names>G</given-names>
</name>
<name>
<surname>Kierkels</surname>
<given-names>J J M</given-names>
</name>
<name>
<surname>Pun</surname>
<given-names>T</given-names>
</name>
</person-group>
<year>2008</year>
<article-title>“Affective ranking of movie scenes using physiological signals and content analysis”</article-title>
<source>Proceedings of the 2nd ACM workshop on Multimedia semantics</source>
<publisher-loc>New York</publisher-loc>
<publisher-name>ACM</publisher-name>
</element-citation>
</ref>
<ref id="R68">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Steiger</surname>
<given-names>J H</given-names>
</name>
</person-group>
<year>1990</year>
<article-title>“Structural model evaluation and modification: An interval estimation approach”</article-title>
<source>Multivariate Behavioral Research</source>
<volume>25</volume>
<fpage>173</fpage>
<lpage>185</lpage>
<pub-id pub-id-type="doi">10.1207/s15327906mbr2502_4</pub-id>
</element-citation>
</ref>
<ref id="R69">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Suk</surname>
<given-names>H-J</given-names>
</name>
<name>
<surname>Jeong</surname>
<given-names>S-H</given-names>
</name>
<name>
<surname>Hang</surname>
<given-names>T-H</given-names>
</name>
<name>
<surname>Kwon</surname>
<given-names>D-S</given-names>
</name>
</person-group>
<year>2009</year>
<article-title>“Tactile sensation as emotion elicitor”</article-title>
<source>Kansei Engineering International</source>
<volume>8</volume>
<fpage>147</fpage>
<lpage>152</lpage>
</element-citation>
</ref>
<ref id="R70">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Taylor</surname>
<given-names>R P</given-names>
</name>
<name>
<surname>Spehar</surname>
<given-names>B</given-names>
</name>
<name>
<surname>Wise</surname>
<given-names>J A</given-names>
</name>
<name>
<surname>Clifford</surname>
<given-names>C W</given-names>
</name>
<name>
<surname>Newell</surname>
<given-names>B R</given-names>
</name>
<name>
<surname>Hagerhall</surname>
<given-names>C M</given-names>
</name>
<name>
<surname>Purcell</surname>
<given-names>T</given-names>
</name>
<name>
<surname>Martin</surname>
<given-names>T P</given-names>
</name>
</person-group>
<year>2005</year>
<article-title>“Perceptual and physiological responses to the visual complexity of fractal patterns”</article-title>
<source>Nonlinear Dynamics, Psychology and Life Sciences</source>
<volume>9</volume>
<fpage>89</fpage>
<lpage>114</lpage>
</element-citation>
</ref>
<ref id="R71">
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Van Hagen</surname>
<given-names>M</given-names>
</name>
</person-group>
<year>2011</year>
<source>Waiting experience at train stations</source>
<publisher-loc>Delft, The Netherlands</publisher-loc>
<publisher-name>Eburon</publisher-name>
</element-citation>
</ref>
<ref id="R72">
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Wang</surname>
<given-names>K</given-names>
</name>
</person-group>
<year>2009</year>
<article-title>“Research of the affective responses to product's texture based on the Kansei evaluation”</article-title>
<source>Proceedings of the 2009 Second International Symposium on Computational Intelligence and Design</source>
<publisher-loc>Los Alamitos, CA</publisher-loc>
<publisher-name>IEEE Press</publisher-name>
</element-citation>
</ref>
<ref id="R73">
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Weiskopf</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Erlebacher</surname>
<given-names>G</given-names>
</name>
<name>
<surname>Ertl</surname>
<given-names>T</given-names>
</name>
</person-group>
<year>2003</year>
<article-title>“A texture-based framework for spacetime-coherent visualization of time-dependent vector fields”</article-title>
<source>Proceedings of the 14th IEEE Visualization Conference 2003 (VIS'03)</source>
<publisher-loc>Washington, DC</publisher-loc>
<publisher-name>IEEE Computer Society</publisher-name>
</element-citation>
</ref>
<ref id="R74">
<element-citation publication-type="journal">
<collab>World Medical Association</collab>
<year>2000</year>
<article-title>“World Medical Association Declaration of Helsinki: ethical principles for medical research involving human subjects”</article-title>
<source>JAMA: The Journal of the American Medical Association</source>
<volume>284</volume>
<fpage>3043</fpage>
<lpage>3045</lpage>
<pub-id pub-id-type="doi">10.1001/jama.284.23.3043</pub-id>
<pub-id pub-id-type="pmid">11122593</pub-id>
</element-citation>
</ref>
<ref id="R75">
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Zhang</surname>
<given-names>Q</given-names>
</name>
<name>
<surname>Wangbo</surname>
<given-names>T</given-names>
</name>
</person-group>
<year>2007</year>
<article-title>“The dynamic textures for water synthesis based on statistical modeling”</article-title>
<source>Technologies for E-Learning and Digital Entertainment, LNCS 4469</source>
<publisher-loc>Berlin/Heidelberg</publisher-loc>
<publisher-name>Springer</publisher-name>
</element-citation>
</ref>
<ref id="R76">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Zhao</surname>
<given-names>G</given-names>
</name>
<name>
<surname>Pietikainen</surname>
<given-names>M</given-names>
</name>
</person-group>
<year>2007</year>
<article-title>“Dynamic texture recognition using local binary patterns with an application to facial expressions”</article-title>
<source>IEEE Transactions on Pattern Analysis and Machine Intelligence</source>
<volume>29</volume>
<fpage>915</fpage>
<lpage>928</lpage>
<pub-id pub-id-type="doi">10.1109/TPAMI.2007.1110</pub-id>
<pub-id pub-id-type="pmid">17431293</pub-id>
</element-citation>
</ref>
</ref-list>
</back>
</pmc>
<affiliations>
<list></list>
<tree>
<noCountry>
<name sortKey="Gevers, Theo" sort="Gevers, Theo" uniqKey="Gevers T" first="Theo" last="Gevers">Theo Gevers</name>
<name sortKey="Henselmans, Menno" sort="Henselmans, Menno" uniqKey="Henselmans M" first="Menno" last="Henselmans">Menno Henselmans</name>
<name sortKey="Lucassen, Marcel P" sort="Lucassen, Marcel P" uniqKey="Lucassen M" first="Marcel P" last="Lucassen">Marcel P. Lucassen</name>
<name sortKey="Toet, Alexander" sort="Toet, Alexander" uniqKey="Toet A" first="Alexander" last="Toet">Alexander Toet</name>
</noCountry>
</tree>
</affiliations>
</record>

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/HapticV1/Data/Pmc/Checkpoint
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 001B58 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/Pmc/Checkpoint/biblio.hfd -nk 001B58 | SxmlIndent | more

To link to this page within the Wicri network

{{Explor lien
   |wiki=    Ticri/CIDE
   |area=    HapticV1
   |flux=    Pmc
   |étape=   Checkpoint
   |type=    RBID
   |clé=     PMC:3485791
   |texte=   Emotional effects of dynamic textures
}}

To generate wiki pages

HfdIndexSelect -h $EXPLOR_AREA/Data/Pmc/Checkpoint/RBID.i   -Sk "pubmed:23145257" \
       | HfdSelect -Kh $EXPLOR_AREA/Data/Pmc/Checkpoint/biblio.hfd   \
       | NlmPubMed2Wicri -a HapticV1 

Wicri

This area was generated with Dilib version V0.6.23.
Data generation: Mon Jun 13 01:09:46 2016. Site generation: Wed Mar 6 09:54:07 2024