Exploration server on the relations between France and Australia

Warning: this site is under development!
Warning: this site was generated automatically from raw corpora.
The information has therefore not been validated.

Internal identifier: 002A17 (Pmc/Corpus); previous: 002A16; next: 002A18



The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">Synchronized Audio-Visual Transients Drive Efficient Visual Search for Motion-in-Depth</title>
<author>
<name sortKey="Zannoli, Marina" sort="Zannoli, Marina" uniqKey="Zannoli M" first="Marina" last="Zannoli">Marina Zannoli</name>
<affiliation>
<nlm:aff id="aff1">
<addr-line>Université Paris Descartes, Sorbonne Paris Cité, Paris, France</addr-line>
</nlm:aff>
</affiliation>
<affiliation>
<nlm:aff id="aff2">
<addr-line>Laboratoire Psychologie de la Perception, CNRS UMR 8158, Paris, France</addr-line>
</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Cass, John" sort="Cass, John" uniqKey="Cass J" first="John" last="Cass">John Cass</name>
<affiliation>
<nlm:aff id="aff3">
<addr-line>School of Psychology, University of Western Sydney, Sydney, New South Wales, Australia</addr-line>
</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Mamassian, Pascal" sort="Mamassian, Pascal" uniqKey="Mamassian P" first="Pascal" last="Mamassian">Pascal Mamassian</name>
<affiliation>
<nlm:aff id="aff1">
<addr-line>Université Paris Descartes, Sorbonne Paris Cité, Paris, France</addr-line>
</nlm:aff>
</affiliation>
<affiliation>
<nlm:aff id="aff2">
<addr-line>Laboratoire Psychologie de la Perception, CNRS UMR 8158, Paris, France</addr-line>
</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Alais, David" sort="Alais, David" uniqKey="Alais D" first="David" last="Alais">David Alais</name>
<affiliation>
<nlm:aff id="aff4">
<addr-line>School of Psychology, University of Sydney, Sydney, New South Wales, Australia</addr-line>
</nlm:aff>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PMC</idno>
<idno type="pmid">22615939</idno>
<idno type="pmc">3355117</idno>
<idno type="url">http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3355117</idno>
<idno type="RBID">PMC:3355117</idno>
<idno type="doi">10.1371/journal.pone.0037190</idno>
<date when="2012">2012</date>
<idno type="wicri:Area/Pmc/Corpus">002A17</idno>
<idno type="wicri:explorRef" wicri:stream="Pmc" wicri:step="Corpus" wicri:corpus="PMC">002A17</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a" type="main">Synchronized Audio-Visual Transients Drive Efficient Visual Search for Motion-in-Depth</title>
<author>
<name sortKey="Zannoli, Marina" sort="Zannoli, Marina" uniqKey="Zannoli M" first="Marina" last="Zannoli">Marina Zannoli</name>
<affiliation>
<nlm:aff id="aff1">
<addr-line>Université Paris Descartes, Sorbonne Paris Cité, Paris, France</addr-line>
</nlm:aff>
</affiliation>
<affiliation>
<nlm:aff id="aff2">
<addr-line>Laboratoire Psychologie de la Perception, CNRS UMR 8158, Paris, France</addr-line>
</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Cass, John" sort="Cass, John" uniqKey="Cass J" first="John" last="Cass">John Cass</name>
<affiliation>
<nlm:aff id="aff3">
<addr-line>School of Psychology, University of Western Sydney, Sydney, New South Wales, Australia</addr-line>
</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Mamassian, Pascal" sort="Mamassian, Pascal" uniqKey="Mamassian P" first="Pascal" last="Mamassian">Pascal Mamassian</name>
<affiliation>
<nlm:aff id="aff1">
<addr-line>Université Paris Descartes, Sorbonne Paris Cité, Paris, France</addr-line>
</nlm:aff>
</affiliation>
<affiliation>
<nlm:aff id="aff2">
<addr-line>Laboratoire Psychologie de la Perception, CNRS UMR 8158, Paris, France</addr-line>
</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Alais, David" sort="Alais, David" uniqKey="Alais D" first="David" last="Alais">David Alais</name>
<affiliation>
<nlm:aff id="aff4">
<addr-line>School of Psychology, University of Sydney, Sydney, New South Wales, Australia</addr-line>
</nlm:aff>
</affiliation>
</author>
</analytic>
<series>
<title level="j">PLoS ONE</title>
<idno type="eISSN">1932-6203</idno>
<imprint>
<date when="2012">2012</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass></textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">
<p>In natural audio-visual environments, a change in depth is usually correlated with a change in loudness. In the present study, we investigated whether correlating changes in disparity and loudness would provide a functional advantage in binding disparity and sound amplitude in a visual search paradigm. To test this hypothesis, we used a method similar to that used by van der Burg et al. to show that non-spatial transient (square-wave) modulations of loudness can drastically improve spatial visual search for a correlated luminance modulation. We used dynamic random-dot stereogram displays to produce pure disparity modulations. Target and distractors were small disparity-defined squares (either 6 or 10 in total). Each square moved back and forth in depth in front of the background plane at different phases. The target’s depth modulation was synchronized with an amplitude-modulated auditory tone. Visual and auditory modulations were always congruent (both sine-wave or square-wave). In a speeded search task, five observers were asked to identify the target as quickly as possible. Results show a significant improvement in visual search times in the square-wave condition compared to the sine condition, suggesting that transient auditory information can efficiently drive visual search in the disparity domain. In a second experiment, participants performed the same task in the absence of sound and showed a clear set-size effect in both modulation conditions. In a third experiment, we correlated the sound with a distractor instead of the target. This produced longer search times, indicating that the correlation is not easily ignored.</p>
</div>
</front>
<back>
<div1 type="bibliography">
<listBibl>
<biblStruct>
<analytic>
<author>
<name sortKey="Neisser, U" uniqKey="Neisser U">U Neisser</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Treisman, Am" uniqKey="Treisman A">AM Treisman</name>
</author>
<author>
<name sortKey="Gelade, G" uniqKey="Gelade G">G Gelade</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Posner, Mi" uniqKey="Posner M">MI Posner</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Nakayama, K" uniqKey="Nakayama K">K Nakayama</name>
</author>
<author>
<name sortKey="Silverman, Gh" uniqKey="Silverman G">GH Silverman</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Harris, Jm" uniqKey="Harris J">JM Harris</name>
</author>
<author>
<name sortKey="Mckee, Sp" uniqKey="Mckee S">SP McKee</name>
</author>
<author>
<name sortKey="Watamaniuk, Sn" uniqKey="Watamaniuk S">SN Watamaniuk</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Andersen, Ts" uniqKey="Andersen T">TS Andersen</name>
</author>
<author>
<name sortKey="Mamassian, P" uniqKey="Mamassian P">P Mamassian</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Noesselt, T" uniqKey="Noesselt T">T Noesselt</name>
</author>
<author>
<name sortKey="Tyll, S" uniqKey="Tyll S">S Tyll</name>
</author>
<author>
<name sortKey="Boehler, Cn" uniqKey="Boehler C">CN Boehler</name>
</author>
<author>
<name sortKey="Budinger, E" uniqKey="Budinger E">E Budinger</name>
</author>
<author>
<name sortKey="Heinze, Hj" uniqKey="Heinze H">HJ Heinze</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lippert, M" uniqKey="Lippert M">M Lippert</name>
</author>
<author>
<name sortKey="Logothetis, Nk" uniqKey="Logothetis N">NK Logothetis</name>
</author>
<author>
<name sortKey="Kayser, C" uniqKey="Kayser C">C Kayser</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Van Der Burg, E" uniqKey="Van Der Burg E">E van der Burg</name>
</author>
<author>
<name sortKey="Olivers, Cnl" uniqKey="Olivers C">CNL Olivers</name>
</author>
<author>
<name sortKey="Bronkhorst, Aw" uniqKey="Bronkhorst A">AW Bronkhorst</name>
</author>
<author>
<name sortKey="Theeuwes, J" uniqKey="Theeuwes J">J Theeuwes</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Van Der Burg, E" uniqKey="Van Der Burg E">E van der Burg</name>
</author>
<author>
<name sortKey="Cass, J" uniqKey="Cass J">J Cass</name>
</author>
<author>
<name sortKey="Olivers, Cnl" uniqKey="Olivers C">CNL Olivers</name>
</author>
<author>
<name sortKey="Theeuwes, J" uniqKey="Theeuwes J">J Theeuwes</name>
</author>
<author>
<name sortKey="Alais, D" uniqKey="Alais D">D Alais</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Van Der Burg, E" uniqKey="Van Der Burg E">E van der Burg</name>
</author>
<author>
<name sortKey="Talsma, D" uniqKey="Talsma D">D Talsma</name>
</author>
<author>
<name sortKey="Olivers, Cnl" uniqKey="Olivers C">CNL Olivers</name>
</author>
<author>
<name sortKey="Hickey, C" uniqKey="Hickey C">C Hickey</name>
</author>
<author>
<name sortKey="Theeuwes, J" uniqKey="Theeuwes J">J Theeuwes</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Harris, Jm" uniqKey="Harris J">JM Harris</name>
</author>
<author>
<name sortKey="Nefs, Ht" uniqKey="Nefs H">HT Nefs</name>
</author>
<author>
<name sortKey="Grafton, Ce" uniqKey="Grafton C">CE Grafton</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Marr, D" uniqKey="Marr D">D Marr</name>
</author>
<author>
<name sortKey="Poggio, T" uniqKey="Poggio T">T Poggio</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Assee, A" uniqKey="Assee A">A Assee</name>
</author>
<author>
<name sortKey="Qian, N" uniqKey="Qian N">N Qian</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Nienborg, H" uniqKey="Nienborg H">H Nienborg</name>
</author>
<author>
<name sortKey="Bridge, H" uniqKey="Bridge H">H Bridge</name>
</author>
<author>
<name sortKey="Parker, Aj" uniqKey="Parker A">AJ Parker</name>
</author>
<author>
<name sortKey="Cumming, Bg" uniqKey="Cumming B">BG Cumming</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Tsirlin, I" uniqKey="Tsirlin I">I Tsirlin</name>
</author>
<author>
<name sortKey="Wilcox, Lm" uniqKey="Wilcox L">LM Wilcox</name>
</author>
<author>
<name sortKey="Allison, Rs" uniqKey="Allison R">RS Allison</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Brooks, Kr" uniqKey="Brooks K">KR Brooks</name>
</author>
<author>
<name sortKey="Stone, Ls" uniqKey="Stone L">LS Stone</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Brainard, Dh" uniqKey="Brainard D">DH Brainard</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Pelli, Dg" uniqKey="Pelli D">DG Pelli</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Zannoli, M" uniqKey="Zannoli M">M Zannoli</name>
</author>
<author>
<name sortKey="Cass, J" uniqKey="Cass J">J Cass</name>
</author>
<author>
<name sortKey="Mamassian, P" uniqKey="Mamassian P">P Mamassian</name>
</author>
<author>
<name sortKey="Alais, D" uniqKey="Alais D">D Alais</name>
</author>
</analytic>
</biblStruct>
</listBibl>
</div1>
</back>
</TEI>
<pmc article-type="research-article">
<pmc-dir>properties open_access</pmc-dir>
<front>
<journal-meta>
<journal-id journal-id-type="nlm-ta">PLoS One</journal-id>
<journal-id journal-id-type="iso-abbrev">PLoS ONE</journal-id>
<journal-id journal-id-type="publisher-id">plos</journal-id>
<journal-id journal-id-type="pmc">plosone</journal-id>
<journal-title-group>
<journal-title>PLoS ONE</journal-title>
</journal-title-group>
<issn pub-type="epub">1932-6203</issn>
<publisher>
<publisher-name>Public Library of Science</publisher-name>
<publisher-loc>San Francisco, USA</publisher-loc>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="pmid">22615939</article-id>
<article-id pub-id-type="pmc">3355117</article-id>
<article-id pub-id-type="publisher-id">PONE-D-12-06315</article-id>
<article-id pub-id-type="doi">10.1371/journal.pone.0037190</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Research Article</subject>
</subj-group>
<subj-group subj-group-type="Discipline-v2">
<subject>Biology</subject>
<subj-group>
<subject>Neuroscience</subject>
<subj-group>
<subject>Sensory Perception</subject>
<subj-group>
<subject>Psychophysics</subject>
</subj-group>
</subj-group>
<subj-group>
<subject>Sensory Systems</subject>
<subj-group>
<subject>Auditory System</subject>
<subject>Visual System</subject>
</subj-group>
</subj-group>
<subj-group>
<subject>Cognitive Neuroscience</subject>
</subj-group>
</subj-group>
</subj-group>
<subj-group subj-group-type="Discipline-v2">
<subject>Medicine</subject>
<subj-group>
<subject>Mental Health</subject>
<subj-group>
<subject>Psychology</subject>
<subj-group>
<subject>Cognitive Psychology</subject>
<subject>Psychophysics</subject>
</subj-group>
</subj-group>
</subj-group>
</subj-group>
<subj-group subj-group-type="Discipline-v2">
<subject>Social and Behavioral Sciences</subject>
<subj-group>
<subject>Psychology</subject>
<subj-group>
<subject>Cognitive Psychology</subject>
<subject>Psychophysics</subject>
<subject>Sensory Perception</subject>
</subj-group>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Synchronized Audio-Visual Transients Drive Efficient Visual Search for Motion-in-Depth</article-title>
<alt-title alt-title-type="running-head">Multisensory Processing</alt-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name>
<surname>Zannoli</surname>
<given-names>Marina</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
<xref ref-type="aff" rid="aff2">
<sup>2</sup>
</xref>
<xref ref-type="corresp" rid="cor1">
<sup>*</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Cass</surname>
<given-names>John</given-names>
</name>
<xref ref-type="aff" rid="aff3">
<sup>3</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Mamassian</surname>
<given-names>Pascal</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
<xref ref-type="aff" rid="aff2">
<sup>2</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Alais</surname>
<given-names>David</given-names>
</name>
<xref ref-type="aff" rid="aff4">
<sup>4</sup>
</xref>
</contrib>
</contrib-group>
<aff id="aff1">
<label>1</label>
<addr-line>Université Paris Descartes, Sorbonne Paris Cité, Paris, France</addr-line>
</aff>
<aff id="aff2">
<label>2</label>
<addr-line>Laboratoire Psychologie de la Perception, CNRS UMR 8158, Paris, France</addr-line>
</aff>
<aff id="aff3">
<label>3</label>
<addr-line>School of Psychology, University of Western Sydney, Sydney, New South Wales, Australia</addr-line>
</aff>
<aff id="aff4">
<label>4</label>
<addr-line>School of Psychology, University of Sydney, Sydney, New South Wales, Australia</addr-line>
</aff>
<contrib-group>
<contrib contrib-type="editor">
<name>
<surname>Martinez</surname>
<given-names>Luis M.</given-names>
</name>
<role>Editor</role>
<xref ref-type="aff" rid="edit1"></xref>
</contrib>
</contrib-group>
<aff id="edit1">CSIC-Univ Miguel Hernandez, Spain</aff>
<author-notes>
<corresp id="cor1">* E-mail:
<email>marinazannoli@gmail.com</email>
</corresp>
<fn fn-type="con">
<p>Conceived and designed the experiments: MZ JC PM DA. Performed the experiments: MZ. Analyzed the data: MZ. Wrote the paper: MZ JC PM DA.</p>
</fn>
</author-notes>
<pub-date pub-type="collection">
<year>2012</year>
</pub-date>
<pub-date pub-type="epub">
<day>17</day>
<month>5</month>
<year>2012</year>
</pub-date>
<volume>7</volume>
<issue>5</issue>
<elocation-id>e37190</elocation-id>
<history>
<date date-type="received">
<day>1</day>
<month>3</month>
<year>2012</year>
</date>
<date date-type="accepted">
<day>18</day>
<month>4</month>
<year>2012</year>
</date>
</history>
<permissions>
<copyright-statement>Zannoli et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.</copyright-statement>
<copyright-year>2012</copyright-year>
</permissions>
<abstract>
<p>In natural audio-visual environments, a change in depth is usually correlated with a change in loudness. In the present study, we investigated whether correlating changes in disparity and loudness would provide a functional advantage in binding disparity and sound amplitude in a visual search paradigm. To test this hypothesis, we used a method similar to that used by van der Burg et al. to show that non-spatial transient (square-wave) modulations of loudness can drastically improve spatial visual search for a correlated luminance modulation. We used dynamic random-dot stereogram displays to produce pure disparity modulations. Target and distractors were small disparity-defined squares (either 6 or 10 in total). Each square moved back and forth in depth in front of the background plane at different phases. The target’s depth modulation was synchronized with an amplitude-modulated auditory tone. Visual and auditory modulations were always congruent (both sine-wave or square-wave). In a speeded search task, five observers were asked to identify the target as quickly as possible. Results show a significant improvement in visual search times in the square-wave condition compared to the sine condition, suggesting that transient auditory information can efficiently drive visual search in the disparity domain. In a second experiment, participants performed the same task in the absence of sound and showed a clear set-size effect in both modulation conditions. In a third experiment, we correlated the sound with a distractor instead of the target. This produced longer search times, indicating that the correlation is not easily ignored.</p>
</abstract>
<counts>
<page-count count="6"></page-count>
</counts>
</article-meta>
</front>
<body>
<sec id="s1">
<title>Introduction</title>
<p>For the last fifty years
<xref ref-type="bibr" rid="pone.0037190-Neisser1">[1]</xref>
, visual search paradigms have proven to be a useful tool to study feature integration
<xref ref-type="bibr" rid="pone.0037190-Treisman1">[2]</xref>
and allocation of attention
<xref ref-type="bibr" rid="pone.0037190-Posner1">[3]</xref>
. A majority of studies using this paradigm have focused on the processing of basic feature dimensions such as luminance, color, orientation or motion, and have shown that searching for a target that is distinguished from the surrounding distractors by having, for example, a different orientation (or color, or luminance, etc.) produces fast, efficient searches. Most visual search studies employ 2D arrays and relatively few have examined visual search in the 3D domain. Of these, an early study by Nakayama & Silverman
<xref ref-type="bibr" rid="pone.0037190-Nakayama1">[4]</xref>
showed that distinguishing targets and distractors by their horizontal binocular disparity (stereopsis) was sufficient to support efficient visual search. Later, Harris, McKee & Watamaniuk
<xref ref-type="bibr" rid="pone.0037190-Harris1">[5]</xref>
found that when binocular disparity was defined by spatiotemporal correlations (i.e., perceptual stereomotion), search performance became far less efficient. That is, stereomotion did not support pop-out. This is an intriguing result because even though static stereopsis and stereomotion are each capable of supporting vivid and clearly discriminable perceptual structure, stereomotion seems to require serial search.</p>
<p>In the present study, we will investigate whether search efficiency for stimuli defined by stereomotion can be improved by a non-spatial auditory cue correlated with the visual target. The ability of auditory signals to improve visual processing is now well known. Several studies have shown that the presentation of a simultaneous sound can improve visual performance for detection
<xref ref-type="bibr" rid="pone.0037190-Andersen1">[6]</xref>
can increase the saliency of visual events
<xref ref-type="bibr" rid="pone.0037190-Noesselt1">[7]</xref>
and can drive visual attention
<xref ref-type="bibr" rid="pone.0037190-Lippert1">[8]</xref>
. More specifically, using the visual search paradigm, van der Burg and colleagues recently conducted a series of studies on the so-called “pip and pop” effect and demonstrated that a synchronized, but spatially nonspecific, sound can drastically improve search efficiency as long as the visual signal is temporally abrupt
<xref ref-type="bibr" rid="pone.0037190-vanderBurg1">[9]</xref>
<xref ref-type="bibr" rid="pone.0037190-vanderBurg3">[11]</xref>
. In the “pip and pop” effect, search times are drastically decreased for visual objects that are synchronized with an auditory beep, even though the sound contains no spatial or identity information concerning the visual target. According to van der Burg and colleagues, the auditory “pip” and the visual target are integrated, creating a salient audiovisual object that draws exogenous attention. To test the effect of an auditory cue on visual search for stereomotion stimuli, we used a method similar to the one introduced by van der Burg et al.
<xref ref-type="bibr" rid="pone.0037190-vanderBurg2">[10]</xref>
.</p>
<p>The study by van der Burg et al.
<xref ref-type="bibr" rid="pone.0037190-vanderBurg2">[10]</xref>
demonstrated that non-spatial modulations of loudness can drastically improve spatial visual search for a correlated
<italic>luminance</italic>
modulation but that it requires
<italic>transient</italic>
visual events (square-wave modulations instead of sine-wave) to elicit efficient search. To enable a comparison with the findings of van der Burg et al.
<xref ref-type="bibr" rid="pone.0037190-vanderBurg2">[10]</xref>
in the luminance domain, we decided to use similar modulation conditions. Our participants were presented with a dynamic random dot stereogram
<xref ref-type="bibr" rid="pone.0037190-Harris2">[12]</xref>
in which 6 or 10 disparity-defined squares arranged on a ring moved back and forth in depth in front of the background plane. Critically, elements in these displays are invisible when viewed monocularly, and require binocular integration across multiple frames. All the elements followed the same spatio-temporal modulation frequency but with different phases. An amplitude-modulated auditory beep was synchronized with one of the elements’ depth modulations. Following the lead of van der Burg et al.
<xref ref-type="bibr" rid="pone.0037190-vanderBurg1">[9]</xref>
,
<xref ref-type="bibr" rid="pone.0037190-vanderBurg2">[10]</xref>
we employed a compound search task in which participants performed a discrimination task on a luminance-defined target. The discrimination task is unrelated to the stereomotion but does require participants to successfully find the sound synchronized visual element first.</p>
<p>Although our study uses similar experimental conditions to van der Burg et al.
<xref ref-type="bibr" rid="pone.0037190-Treisman1">[2]</xref>
, different predictions can be made concerning the modulation conditions. In their study, search for luminance-defined targets was more efficient in the square-wave condition. In our experiment, because binocular matching processes are known to favor smooth over abrupt changes of disparity across space and time
<xref ref-type="bibr" rid="pone.0037190-Marr1">[13]</xref>
<xref ref-type="bibr" rid="pone.0037190-Nienborg1">[15]</xref>
, we predict that the square-modulation condition will not suit stereo processing and will therefore lead to longer response times compared to the sine-modulation condition. In addition, we predict that the presence of the auditory cue will enhance search efficiency in the sine condition and produce smaller set-size effects.</p>
</sec>
<sec sec-type="materials|methods" id="s2">
<title>Materials and Methods</title>
<sec id="s2a">
<title>Experiment 1</title>
<p>In the first experiment, we tested whether correlating changes in disparity and loudness would provide a functional advantage in binding disparity and sound amplitude in a visual search task. For this purpose, we used visual stimuli moving in depth together with an amplitude-modulated tone presented from a fixed location. Participants had to perform a search and a spatial discrimination task on a small 2×2 pixel square defined by luminance. Participants were informed that this luminance target was adjacent to the visual element that was correlated with the accompanying sound changes.</p>
<sec id="s2a1">
<title>Participants</title>
<p>Five observers (two naïve) with normal or corrected-to-normal vision were recruited in the laboratory building. All participants had experience in psychophysical observation and had normal stereo acuity and hearing. They all gave written informed consent before participating in the experiment.</p>
</sec>
<sec id="s2a2">
<title>Stimulus presentation</title>
<p>The stereograms were presented on a 21″ CRT monitor (Sony Multiscan G500, 1024×768 pixels at 85 Hz, for four observers; ViewSonic 2100, 1280×960 pixels at 85 Hz, for one observer) at a simulated distance of 57 cm. To avoid the issues raised by shutter or polarized glasses
<xref ref-type="bibr" rid="pone.0037190-Tsirlin1">[16]</xref>
we used a modified Wheatstone stereoscope. In this type of display, the images shown to the two eyes are completely independent and appear in perfect synchrony. Each eye viewed one horizontal half of the CRT screen. A chin rest was used to stabilize the observer’s head and to control the viewing distance. The display was the only source of light, and the stereoscope was calibrated geometrically to account for each participant’s interocular distance. The auditory stimuli were presented via a single loudspeaker placed above the monitor.</p>
</sec>
<sec id="s2a3">
<title>Stimuli</title>
<p>Stereomotion can be extracted by computing interocular velocity differences and/or by tracking changes of disparity over time
<xref ref-type="bibr" rid="pone.0037190-Harris2">[12]</xref>
,
<xref ref-type="bibr" rid="pone.0037190-Brooks1">[17]</xref>
. In the first case, 2D motion is extracted for each monocular image and then compared between the two eyes’ images to compute the speed and direction of motion. To avoid any 2D motion cues in the monocular components, we used dynamic random-dot stereograms (DRDS). In DRDSs, the stereogram is rebuilt on each new video frame using a new pattern of random noise. Disparity is achieved by adding opposite disparity offsets to a small portion of the left and right images. Stereomotion is then obtained by smoothly changing the value of the disparity offsets from frame to frame. This way, stereomotion in our stimuli was entirely defined by changes of disparity over time. All stimuli were generated using the Psychophysics Toolbox
<xref ref-type="bibr" rid="pone.0037190-Brainard1">[18]</xref>
,
<xref ref-type="bibr" rid="pone.0037190-Pelli1">[19]</xref>
.</p>
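As a minimal illustrative sketch of this DRDS construction (not the authors' Psychtoolbox code; Python with numpy, and the image size, square size, and pixel disparity below are hypothetical placeholders), one left/right frame pair could be generated like this:

```python
import numpy as np

def drds_frame(size=256, square=60, disparity_px=4, rng=None):
    """One left/right frame pair of a dynamic random-dot stereogram.

    A fresh noise field is drawn on every call (the 'dynamic' part), so the
    monocular images carry no 2D motion cue; only the interocular offset of
    the central square region changes from frame to frame.
    """
    rng = rng or np.random.default_rng()
    noise = rng.integers(0, 2, (size, size)).astype(float)
    left, right = noise.copy(), noise.copy()
    y0 = x0 = size // 2 - square // 2
    patch = noise[y0:y0 + square, x0:x0 + square]
    shift = disparity_px // 2
    # Opposite horizontal offsets in the two half-images give crossed
    # disparity, so the square appears in front of the background plane.
    left[y0:y0 + square, x0 - shift:x0 - shift + square] = patch
    right[y0:y0 + square, x0 + shift:x0 + shift + square] = patch
    return left, right
```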
<p>The background consisted of a 3.5×3.5 deg
<sup>2</sup>
square of dynamic random noise (mean luminance 40 cd/m
<sup>2</sup>
; one-pixel resolution; refreshed every frame). Visual elements were 0.8×0.8 deg
<sup>2</sup>
squares defined only by disparity and evenly presented on a virtual ring at 2.5 deg eccentricity. The number of elements was either 6 or 10. A small bright square (2×2 pixels, 80 cd/m
<sup>2</sup>
), too small to capture exogenous attention, was placed either above or below the sound synchronized disparity-defined square to enable a compound search task (see Procedure, below). The background was surrounded by a vergence-stabilization frame consisting of multiple luminance-defined squares (0.20×0.20 deg
<sup>2</sup>
; grey: 40 cd/m
<sup>2</sup>
and white: 80 cd/m
<sup>2</sup>
) presented on a black background (5 cd/m
<sup>2</sup>
), with black nonius lines at the center (see
<xref ref-type="fig" rid="pone-0037190-g001">Figure 1</xref>
).</p>
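A short sketch of the ring layout described above; the pixels-per-degree constant ppd is an assumption, since it depends on display geometry not restated here:

```python
import numpy as np

def ring_positions(n_elements, eccentricity_deg=2.5, ppd=32.0):
    """Pixel offsets from screen center for squares evenly spaced on a ring.

    The paper specifies only the 2.5 deg eccentricity; ppd (pixels per
    degree) is a placeholder that depends on the actual display geometry.
    """
    angles = 2 * np.pi * np.arange(n_elements) / n_elements
    r = eccentricity_deg * ppd
    return np.stack([r * np.cos(angles), r * np.sin(angles)], axis=1)
```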
<fig id="pone-0037190-g001" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0037190.g001</object-id>
<label>Figure 1</label>
<caption>
<title>Perspective view of the stimulus used in all experiments.</title>
<p>Visual elements were disparity-defined squares distributed evenly on a ring at 2.5 deg eccentricity and moved back and forth in depth from zero to +12 arcmin (crossed) disparity. The stimuli were surrounded by a vergence-stabilization frame.</p>
</caption>
<graphic xlink:href="pone.0037190.g001"></graphic>
</fig>
<p>Visual elements moved back and forth in depth from 0 to +12 arcmin following a 0.7 Hz modulation. All elements moved at different phases. The depth modulation of one of the squares was synchronized with the sound’s amplitude modulation. To avoid temporal overlap between the sound synchronized square and the other visual elements, we created an exclusion window of at least 60° around the sound synchronized square’s phase: the other elements’ phases were randomly assigned from ±60°, ±80°, ±100°, ±120°, ±140°, or ±160° relative to it.</p>
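A minimal sketch of this phase-assignment rule, assuming offsets are drawn independently and with replacement (the text does not specify):

```python
import numpy as np

def assign_phases(n_elements, target_phase_deg=0.0, rng=None):
    """Depth-modulation phases with a 60-degree exclusion window.

    Element 0 (the sound-synchronized square) keeps target_phase_deg; each
    distractor gets an offset drawn from +/-{60, 80, 100, 120, 140, 160}
    degrees relative to it, so no distractor modulates within 60 degrees
    of the target. Offsets are drawn with replacement here; the paper does
    not say whether repeats were allowed.
    """
    rng = rng or np.random.default_rng()
    magnitudes = np.array([60, 80, 100, 120, 140, 160])
    offsets = (rng.choice(magnitudes, size=n_elements - 1)
               * rng.choice([-1, 1], size=n_elements - 1))
    return np.concatenate(([target_phase_deg],
                           (target_phase_deg + offsets) % 360))
```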
<p>The auditory stimulus was a 500 Hz sine-wave tone (44.1 kHz sample rate; mono) whose amplitude was modulated (between 0 and 70 dB) at the same frequency as the visual motion-in-depth and synchronized with the square adjacent to the luminance target. The sound was presented over a single loudspeaker placed on top of the CRT screen.</p>
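The amplitude-modulated tone and the congruent disparity trajectory could be synthesized along the following lines; mapping the envelope onto the reported 0 to 70 dB range is omitted (the envelope is simply normalized to 0..1), so this is a sketch rather than the authors' implementation:

```python
import numpy as np

def am_stimulus(duration_s, waveform="square", mod_hz=0.7,
                carrier_hz=500.0, fs=44100, phase_rad=0.0):
    """Amplitude-modulated tone plus the congruent disparity trajectory.

    The envelope is a 0.7 Hz sine or square wave with an arbitrary starting
    phase. The same normalized envelope, scaled to 0..12 arcmin, gives the
    congruent visual depth modulation described in the text.
    """
    t = np.arange(int(duration_s * fs)) / fs
    env = np.sin(2 * np.pi * mod_hz * t + phase_rad)
    if waveform == "square":
        env = np.sign(env)
    env = (env + 1) / 2                      # normalize to 0..1
    audio = env * np.sin(2 * np.pi * carrier_hz * t)
    disparity_arcmin = 12 * env              # 0..+12 arcmin crossed disparity
    return audio, disparity_arcmin
```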
<p>Both visual and auditory modulations were either sine-wave or square-wave and always congruent. A random phase was added to all modulations (see
<xref ref-type="fig" rid="pone-0037190-g002">Figure 2</xref>
). The auditory modulation was synchronized with the depth modulation of the disparity-defined square that was adjacent to the luminance target of the visual search.</p>
<fig id="pone-0037190-g002" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0037190.g002</object-id>
<label>Figure 2</label>
<caption>
<title>Audiovisual modulations.</title>
<p>The depth modulation of the square adjacent to the luminance target is synchronized with an amplitude-modulated 500 Hz tone. Auditory and visual modulations are always congruent (both sine-wave or square-wave). A random phase is added to the AV modulation.</p>
</caption>
<graphic xlink:href="pone.0037190.g002"></graphic>
</fig>
</sec>
<sec id="s2a4">
<title>Procedure</title>
<p>Participants were instructed to respond as fast as they could while maintaining good performance. Each trial started with a presentation of the nonius lines. When correctly fusing the nonius, participants pressed any key to start the stimulus presentation. In a speeded response task, the stimulus stayed on until participants had found the sound synchronized square, made the up/down judgment about the luminance target location, and entered their answer on the keypad (which terminated the display). This up/down task (discriminating the position of the luminance target relative to the sound synchronized square) was orthogonal to the stereomotion search (locating the sound synchronized square), as it did not depend on the motion itself. However, as the luminance target was hardly visible while fixating centrally, the sound synchronized square had to be localized first, before the up/down task could be done. This ensured that participants did perceive the disparity-defined squares.</p>
<p>Each combination of waveform condition (square vs. sine) and set size (6 vs. 10) was repeated 80 times in total. The experiment was divided into ten sessions. Participants did not receive feedback regarding their accuracy, although they were aware that the amplitude modulation of the auditory signal was synchronized with the visual depth modulation of the adjacent square.</p>
</sec>
</sec>
<sec id="s2b">
<title>Experiment 2</title>
<p>To test whether the results obtained in Experiment 1 were due to the presence of a sound, we examined whether visual sine- and square-wave modulations would lead to different set-size effects in the absence of a congruent auditory modulation.</p>
<sec id="s2b1">
<title>Method</title>
<p>The five observers who participated in Experiment 1 (two of whom were naïve) were recruited for Experiment 2. Stimuli were presented using the same setup as in Experiment 1 and were identical to those used in the first experiment. No auditory signal was presented. Visual elements moved in depth following the same modulation patterns as in Experiment 1. Instructions given to participants were identical to those in Experiment 1.</p>
</sec>
</sec>
<sec id="s2c">
<title>Experiment 3</title>
<p>In the third experiment, we investigated whether observers were using a voluntary or automatic binding of audiovisual information. We tested this by measuring whether correlating the sound with a square that is not adjacent to the luminance target would lead to longer response times, using a cost-benefit paradigm similar to the one introduced by Posner
<xref ref-type="bibr" rid="pone.0037190-Posner1">[3]</xref>
. In the cost-benefit paradigm, the subject has to perform a discrimination task on a target presented at different locations. Before the presentation of the target stimulus, a cue is briefly displayed, indicating the location of the target for that trial. Posner demonstrated that presenting a valid cue (indicating the actual target location) led to shorter response times (i.e., a benefit) relative to a neutral, uninformative cue. Conversely, presentation of an invalid cue (indicating a wrong location for the target) led to longer response times (i.e., a search cost).</p>
<p>We implemented a cost-benefit experiment in which the square-wave sound could be presented in synchrony with either the square adjacent to the luminance target or another square. 20% of trials were valid (i.e., the sound was synchronized with the adjacent square) and the remaining 80% were invalid (i.e., the sound was synchronized with one of the other squares). In invalid trials, if observers were automatically binding the auditory and visual information and going directly to the location where they were synchronized, they would be at the wrong location and would not find the small square there for the up/down discrimination task. They would then have to make a serial search around the depth-modulating visual squares until the one with the small square adjacent to it was found. For this reason, there would be a search cost on invalid trials if binding were automatic. Alternatively, if the binding of the sound and stereomotion signals were a voluntary strategy, it would be more strategic to ignore the audiovisual correlation (which would be beneficial in only 20% of trials) and begin each trial immediately with a serial search for the small square. If we observe a search cost in the invalid trials (i.e., a slowing of search times), it would show that audiovisual binding was automatic and difficult to ignore.</p>
<sec id="s2c1">
<title>Method</title>
<p>The five observers who participated in the first two experiments were recruited for the third experiment. Stimuli were presented using the same setup as in the first two experiments. Visual stimuli consisted of nine elements (squares of 0.8×0.8 deg
<sup>2</sup>
) evenly distributed on a ring as in the first two experiments. Auditory stimuli were the same as in Experiment 1. Audiovisual modulations were similar to those in the first experiment (square vs. sine) except that the auditory signal was synchronized with the depth modulation of the square adjacent to the luminance target in only 20% of trials. In the remaining 80%, the sound was synchronized with one of the other eight squares. Instructions given to participants were identical to those in the first two experiments.</p>
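A minimal sketch of the 20%/80% validity sampling described above; the convention that index 0 denotes the square adjacent to the luminance target is ours, not the paper's:

```python
import numpy as np

def draw_cue_validity(n_trials, n_squares=9, rng=None):
    """Pick the sound-synchronized square for each Experiment 3 trial.

    On 20% of trials the sound is yoked to the square adjacent to the
    luminance target (index 0, valid); on the remaining 80% it is yoked to
    one of the other eight squares (indices 1..8), chosen uniformly.
    """
    rng = rng or np.random.default_rng()
    valid = rng.random(n_trials) < 0.20
    synced = np.where(valid, 0, rng.integers(1, n_squares, size=n_trials))
    return valid, synced
```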
</sec>
</sec>
</sec>
<sec id="s3">
<title>Results</title>
<sec id="s3a">
<title>Experiment 1</title>
<p>Participants reported that they first localized the sound synchronized square and then saccaded to it to make the up/down judgment concerning the luminance target.</p>
<p>Overall mean error rate was approximately 5%; error trials were discarded and not analyzed further. A cut-off was applied at two standard deviations from the mean response time for each participant (see
<xref ref-type="fig" rid="pone-0037190-g003">Figure 3a</xref>
and
<xref ref-type="fig" rid="pone-0037190-g004">4</xref>
and
<xref ref-type="supplementary-material" rid="pone.0037190.s001">Table S1</xref>
). A repeated-measures ANOVA was run on the response times with set size (6 vs. 10) and waveform (sine-wave vs. square-wave) as within-subject variables. The ANOVA revealed significant main effects of set size (
<italic>F</italic>
(1, 3) = 25.9,
<italic>P</italic>
<0.01) and waveform (
<italic>F</italic>
(1, 3) = 15.7,
<italic>P</italic>
<0.05) and a significant interaction (set size × waveform) effect (
<italic>F</italic>
(3, 1) = 11.6,
<italic>P</italic>
<0.05).</p>
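A minimal sketch of the trial-exclusion step that precedes this ANOVA; whether the two-standard-deviation cutoff was applied per condition or across all of a participant's trials is not stated, so this version pools all of a participant's trials:

```python
import numpy as np

def clean_rts(rts, correct):
    """Per-participant response-time cleanup described above.

    Error trials are discarded first; remaining response times further than
    two standard deviations from the participant's mean are then excluded.
    """
    rts = np.asarray(rts, dtype=float)[np.asarray(correct, dtype=bool)]
    mu, sd = rts.mean(), rts.std()
    return rts[np.abs(rts - mu) <= 2 * sd]
```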
<fig id="pone-0037190-g003" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0037190.g003</object-id>
<label>Figure 3</label>
<caption>
<title>Results of Experiments 1 & 2.</title>
<p>Mean response times pooled across five participants as a function of set size and waveform for Experiments 1 (a) & 2 (b). The y-axis on the right represents response times in number of cycles (at 0.7 Hz, 1 cycle lasts 1.4 s). The error bars reflect the overall standard errors of individuals’ mean response times. Dashed lines and solid lines code for sine-wave and square-wave modulations respectively.</p>
</caption>
<graphic xlink:href="pone.0037190.g003"></graphic>
</fig>
<sec id="s3a1">
<title>Preliminary discussion</title>
<p>As shown in
<xref ref-type="fig" rid="pone-0037190-g003">Figure 3a</xref>
, the significant main effect of waveform arose because response times were faster overall in the square-wave condition. Interestingly, the set size effect was also reduced in the square-wave condition relative to the sine-wave condition. This indicates, contrary to our expectations, that visual search was faster and more efficient in the square-wave condition.</p>
<p>In their 2010 study, van der Burg et al.
<xref ref-type="bibr" rid="pone.0037190-vanderBurg2">[10]</xref>
interleaved audiovisual trials with silent trials. This allowed them to interpret the set-size effects observed in the audiovisual condition relative to the vision-only trials. During pilot experiments, our participants reported using two distinct conscious strategies depending on whether they were presented with an audiovisual or a visual-only trial. Observers would wait for the sound to start before deciding which strategy to use. On a visual-only trial, they would begin a serial search for the luminance target, whereas on an audiovisual trial they would maintain central fixation and wait for the sound synchronized square to pop out. If observers were using distinct strategies depending on the condition, it seemed unwise to compare data collected in the same experiment for these two sets of stimuli.</p>
</sec>
</sec>
<sec id="s3b">
<title>Experiment 2</title>
<p>If the absence of a set-size effect observed in the square-wave condition of Experiment 1 were due to the auditory information, we would expect no difference between the two modulation conditions in the absence of sound. If, instead, the results of Experiment 2 were comparable to those obtained in Experiment 1, they might reflect a difference in task difficulty between the two modulation conditions. If the square-wave condition is very easy, we might observe a kind of “pop out” effect.</p>
<p>As in Experiment 1, overall mean error rate was approximately 5% and error trials were discarded. A cut-off was applied at two standard deviations from the mean response time for each participant (see
<xref ref-type="fig" rid="pone-0037190-g003">Figure 3b</xref>
and
<xref ref-type="fig" rid="pone-0037190-g004">4</xref>
and
<xref ref-type="supplementary-material" rid="pone.0037190.s002">Table S2</xref>
). A repeated-measures ANOVA was run on the response times with set size (6 vs. 10) and waveform (sine-wave vs. square-wave) as within-subject variables. The ANOVA revealed only a significant main effect of set size (
<italic>F</italic>
(1, 3) = 15.9,
<italic>P</italic>
<0.05), with no effect of the waveform (
<italic>F</italic>
(1, 3) = 2.26,
<italic>P</italic>
 = 0.207) and no significant interaction (set size × waveform) effect (
<italic>F</italic>
(3, 1) = 0.133,
<italic>P</italic>
 = 0.733). The set-size effect is plotted in
<xref ref-type="fig" rid="pone-0037190-g003">Figure 3b</xref>
. The small difference between the sine- and square-wave conditions is not significant.</p>
<fig id="pone-0037190-g004" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0037190.g004</object-id>
<label>Figure 4</label>
<caption>
<title>Individual results of Experiments 1 & 2.</title>
<p>Response time (RT) gains (RT(10) - RT(6)) in the square-wave condition as a function of the response time gains in the sine-wave condition. Along the black line, slopes are equal for both waveforms. Points located in the lower part of the figure indicate response time gains that are smaller in the square-wave condition. Crosses and dots represent individual results in Experiments 1 & 2, respectively.</p>
</caption>
<graphic xlink:href="pone.0037190.g004"></graphic>
</fig>
<sec id="s3b1">
<title>Preliminary discussion</title>
<p>In Experiment 2, we found no significant difference between the two modulation conditions. Both sine- and square-wave conditions led to significant and comparable set-size effects. This confirms that the absence of a set-size effect in the square-wave condition of Experiment 1 can be attributed to the presence of a synchronized transient auditory signal. In addition, participants responded more quickly on the visual search task in Experiment 2 than in Experiment 1. This effect could be explained by participants using distinct conscious strategies for audiovisual and visual-only trials, as suggested in the preliminary discussion of Experiment 1. If so, the facilitation in visual search observed in the square-wave condition of Experiment 1 could be due to a voluntary binding of visual and auditory information. To test this assumption, we used a cost-benefit paradigm in Experiment 3.</p>
</sec>
</sec>
<sec id="s3c">
<title>Experiment 3</title>
<p>As in the first two experiments, overall mean error rate was approximately 5% and error trials were discarded. A cut-off was applied at two standard deviations from the mean response time for each participant (see
<xref ref-type="fig" rid="pone-0037190-g005">Figure 5a and 5b</xref>
and
<xref ref-type="supplementary-material" rid="pone.0037190.s003">Table S3</xref>
). A repeated-measures ANOVA was run on the response times with cue validity (valid vs. invalid) and waveform (sine-wave vs. square-wave) as within-subject variables. The ANOVA revealed a significant effect of cue validity (
<italic>F</italic>
(1, 3) = 15.3,
<italic>P</italic>
<0.05), no effect of the waveform (
<italic>F</italic>
(1, 3) = 2.84,
<italic>P</italic>
 = 0.167) and a significant interaction (cue validity × waveform) effect (
<italic>F</italic>
(3, 1) = 8.47,
<italic>P</italic>
<0.05).</p>
<fig id="pone-0037190-g005" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0037190.g005</object-id>
<label>Figure 5</label>
<caption>
<title>Results of Experiment 3.</title>
<p>(A) Mean response times pooled across five participants as a function of cue validity and waveform. See legend from
<xref ref-type="fig" rid="pone-0037190-g003">Figure 3</xref>
for details. (B) Individual results of Experiment 3. Response time gains (RT(other squares) - RT(adjacent square)) in the square-wave condition as a function of the response time gains in the sine-wave condition. See legend from
<xref ref-type="fig" rid="pone-0037190-g004">Figure 4</xref>
for details.</p>
</caption>
<graphic xlink:href="pone.0037190.g005"></graphic>
</fig>
<sec id="s3c1">
<title>Preliminary discussion</title>
<p>The results of Experiment 3 (
<xref ref-type="fig" rid="pone-0037190-g005">Figure 5a</xref>
) show a clear benefit in the square- compared to the sine-wave condition when the sound was synchronized with the adjacent square, and a cost when the square-wave sound was synchronized with one of the other squares. Even though the sound correlated with the adjacent square in only 20% of the trials, a contingency all observers knew, the results suggest that observers were unable to stop using the audiovisual synchrony. In 80% of trials, this strategy led them to the wrong square and consequently slowed down the visual search process. This cost implies that the audiovisual signals were automatically bound and that the correlation could not be easily ignored.</p>
</sec>
</sec>
</sec>
<sec id="s4">
<title>Discussion</title>
<p>The goal of this series of experiments was to explore the effect of an auditory cue on visual search for stereomotion-defined visual stimuli. In the first two experiments, we showed that an amplitude-modulated auditory beep synchronized with a visual target led to efficient visual search. On the face of it, this result seems to contradict the finding from Harris et al.
<xref ref-type="bibr" rid="pone.0037190-Harris1">[5]</xref>
that stereomotion does not pop out. Moreover, we found a significant improvement in visual search only when the auditory and visual modulations were square and not sine. Our results add to those obtained by van der Burg et al.
<xref ref-type="bibr" rid="pone.0037190-vanderBurg2">[10]</xref>
by showing that pip and pop is neither the exclusive domain of the luminance system, nor is it purely monocularly-driven.</p>
<p>Our predictions were that, contrary to the luminance system, the stereo system would be more efficient at tracking smooth (sine-wave) rather than abrupt (square-wave) changes of disparity over time. Instead, we found that visual search was more efficient for square-wave than for sine-wave modulations of depth. This suggests that the stereo system is better able to keep track of rapid temporal modulations of disparity when guided by an auditory cue.</p>
<p>The third experiment was aimed at investigating whether the results from Experiments 1 and 2 could be attributed to an automatic integration of auditory and visual temporal signals or to a voluntary attention-like effect. The results of this last experiment suggest that even when the sound led to wrong locations and thus impaired visual search, the correlation between the auditory and visual signals could not be easily ignored. This conclusion is consistent with an interpretation in terms of audiovisual integration rather than one of crossmodal attention.</p>
<p>Neural structures differentially responsive to synchronized audiovisual events have been found throughout the human cortex
<xref ref-type="bibr" rid="pone.0037190-Noesselt1">[7]</xref>
. Recently, luminance-driven pip and pop-related increases in event related potentials were observed over lateral occipital areas of cortex
<xref ref-type="bibr" rid="pone.0037190-vanderBurg3">[11]</xref>
. It is conceivable that the compulsory audio-visual integration we observe may be related to audio-visually evoked activity in similar cortical areas.</p>
<p>The results of the experiments described in this article suggest three main conclusions. First, an auditory cue can significantly improve the detection of targets defined exclusively by stereomotion. Second, the stereo system is able to track abrupt changes of disparity over time when they are paired with a synchronized auditory signal. Third, and more generally, our findings support the idea that the pip and pop effect is likely to be mediated at a cortical level, as we have demonstrated it here with stimuli that are exclusively binocularly defined.</p>
</sec>
<sec sec-type="supplementary-material" id="s5">
<title>Supporting Information</title>
<supplementary-material content-type="local-data" id="pone.0037190.s001">
<label>Table S1</label>
<caption>
<p>
<bold>Individual data of Experiment 1.</bold>
Individual response times (s) as a function of set size and waveform for Experiment 1.</p>
<p>(DOCX)</p>
</caption>
<media xlink:href="pone.0037190.s001.docx" mimetype="application" mime-subtype="msword">
<caption>
<p>Click here for additional data file.</p>
</caption>
</media>
</supplementary-material>
<supplementary-material content-type="local-data" id="pone.0037190.s002">
<label>Table S2</label>
<caption>
<p>
<bold>Individual data of Experiment 2.</bold>
Individual response times (s) as a function of set size and waveform for Experiment 2.</p>
<p>(DOCX)</p>
</caption>
<media xlink:href="pone.0037190.s002.docx" mimetype="application" mime-subtype="msword">
<caption>
<p>Click here for additional data file.</p>
</caption>
</media>
</supplementary-material>
<supplementary-material content-type="local-data" id="pone.0037190.s003">
<label>Table S3</label>
<caption>
<p>
<bold>Individual data of Experiment 3.</bold>
Individual response times (s) as a function of cue validity and waveform for Experiment 3.</p>
<p>(DOCX)</p>
</caption>
<media xlink:href="pone.0037190.s003.docx" mimetype="application" mime-subtype="msword">
<caption>
<p>Click here for additional data file.</p>
</caption>
</media>
</supplementary-material>
</sec>
</body>
<back>
<ack>
<p>This work was first presented at the annual conference of the Vision Sciences Society in Naples, Florida, in May 2011
<xref ref-type="bibr" rid="pone.0037190-Zannoli1">[20]</xref>
. We thank Laurie Wilcox for discussions. The experiments were approved by the Ethics Committee of the Université Paris Descartes.</p>
</ack>
<fn-group>
<fn fn-type="conflict">
<p>
<bold>Competing Interests: </bold>
The authors have declared that no competing interests exist.</p>
</fn>
<fn fn-type="financial-disclosure">
<p>
<bold>Funding: </bold>
This work was supported by a grant from the French Ministère de l’Enseignement Supérieur et de la Recherche and by a travel grant from Université Paris Descartes. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.</p>
</fn>
</fn-group>
<ref-list>
<title>References</title>
<ref id="pone.0037190-Neisser1">
<label>1</label>
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Neisser</surname>
<given-names>U</given-names>
</name>
</person-group>
<year>1964</year>
<article-title>Visual search.</article-title>
<publisher-name>Scientific American</publisher-name>
</element-citation>
</ref>
<ref id="pone.0037190-Treisman1">
<label>2</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Treisman</surname>
<given-names>AM</given-names>
</name>
<name>
<surname>Gelade</surname>
<given-names>G</given-names>
</name>
</person-group>
<year>1980</year>
<article-title>A feature-integration theory of attention.</article-title>
<source>Cognitive Psychology</source>
<volume>12</volume>
<fpage>97</fpage>
<lpage>136</lpage>
<pub-id pub-id-type="pmid">7351125</pub-id>
</element-citation>
</ref>
<ref id="pone.0037190-Posner1">
<label>3</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Posner</surname>
<given-names>MI</given-names>
</name>
</person-group>
<year>1980</year>
<article-title>Orienting of attention.</article-title>
<source>Quarterly Journal of Experimental Psychology</source>
<volume>32</volume>
<fpage>3</fpage>
<lpage>25</lpage>
<pub-id pub-id-type="pmid">7367577</pub-id>
</element-citation>
</ref>
<ref id="pone.0037190-Nakayama1">
<label>4</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Nakayama</surname>
<given-names>K</given-names>
</name>
<name>
<surname>Silverman</surname>
<given-names>GH</given-names>
</name>
</person-group>
<year>1986</year>
<article-title>Serial and parallel processing of visual feature conjunctions.</article-title>
<source>Nature</source>
<volume>320</volume>
<fpage>264</fpage>
<lpage>265</lpage>
<pub-id pub-id-type="pmid">3960106</pub-id>
</element-citation>
</ref>
<ref id="pone.0037190-Harris1">
<label>5</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Harris</surname>
<given-names>JM</given-names>
</name>
<name>
<surname>McKee</surname>
<given-names>SP</given-names>
</name>
<name>
<surname>Watamaniuk</surname>
<given-names>SN</given-names>
</name>
</person-group>
<year>1998</year>
<article-title>Visual search for motion-in-depth: stereomotion does not “pop out” from disparity noise.</article-title>
<source>Nature Neuroscience</source>
<volume>1</volume>
<fpage>165</fpage>
<lpage>168</lpage>
</element-citation>
</ref>
<ref id="pone.0037190-Andersen1">
<label>6</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Andersen</surname>
<given-names>TS</given-names>
</name>
<name>
<surname>Mamassian</surname>
<given-names>P</given-names>
</name>
</person-group>
<year>2008</year>
<article-title>Audiovisual integration of stimulus transients.</article-title>
<source>Vision Research</source>
<volume>48</volume>
<fpage>2537</fpage>
<lpage>2544</lpage>
<pub-id pub-id-type="pmid">18801382</pub-id>
</element-citation>
</ref>
<ref id="pone.0037190-Noesselt1">
<label>7</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Noesselt</surname>
<given-names>T</given-names>
</name>
<name>
<surname>Tyll</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Boehler</surname>
<given-names>CN</given-names>
</name>
<name>
<surname>Budinger</surname>
<given-names>E</given-names>
</name>
<name>
<surname>Heinze</surname>
<given-names>HJ</given-names>
</name>
<etal></etal>
</person-group>
<year>2010</year>
<article-title>Sound-Induced Enhancement of Low-Intensity Vision: Multisensory Influences on Human Sensory-Specific Cortices and Thalamic Bodies Relate to Perceptual Enhancement of Visual Detection Sensitivity.</article-title>
<source>Journal of Neuroscience</source>
<volume>30</volume>
<fpage>13609</fpage>
<lpage>13623</lpage>
<pub-id pub-id-type="pmid">20943902</pub-id>
</element-citation>
</ref>
<ref id="pone.0037190-Lippert1">
<label>8</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Lippert</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Logothetis</surname>
<given-names>NK</given-names>
</name>
<name>
<surname>Kayser</surname>
<given-names>C</given-names>
</name>
</person-group>
<year>2007</year>
<article-title>Improvement of visual contrast detection by a simultaneous sound.</article-title>
<source>Brain Research</source>
<volume>1173</volume>
<fpage>102</fpage>
<lpage>109</lpage>
<pub-id pub-id-type="pmid">17765208</pub-id>
</element-citation>
</ref>
<ref id="pone.0037190-vanderBurg1">
<label>9</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>van der Burg</surname>
<given-names>E</given-names>
</name>
<name>
<surname>Olivers</surname>
<given-names>CNL</given-names>
</name>
<name>
<surname>Bronkhorst</surname>
<given-names>AW</given-names>
</name>
<name>
<surname>Theeuwes</surname>
<given-names>J</given-names>
</name>
</person-group>
<year>2008</year>
<article-title>Pip and pop: nonspatial auditory signals improve spatial visual search.</article-title>
<source>Journal of Experimental Psychology: Human Perception and Performance</source>
<volume>34</volume>
<fpage>1053</fpage>
<lpage>1065</lpage>
<pub-id pub-id-type="pmid">18823194</pub-id>
</element-citation>
</ref>
<ref id="pone.0037190-vanderBurg2">
<label>10</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>van der Burg</surname>
<given-names>E</given-names>
</name>
<name>
<surname>Cass</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Olivers</surname>
<given-names>CNL</given-names>
</name>
<name>
<surname>Theeuwes</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Alais</surname>
<given-names>D</given-names>
</name>
</person-group>
<year>2010</year>
<article-title>Efficient visual search from synchronized auditory signals requires transient audiovisual events.</article-title>
<source>PLoS ONE</source>
<volume>5</volume>
<fpage>e10664</fpage>
<pub-id pub-id-type="pmid">20498844</pub-id>
</element-citation>
</ref>
<ref id="pone.0037190-vanderBurg3">
<label>11</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>van der Burg</surname>
<given-names>E</given-names>
</name>
<name>
<surname>Talsma</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Olivers</surname>
<given-names>CNL</given-names>
</name>
<name>
<surname>Hickey</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Theeuwes</surname>
<given-names>J</given-names>
</name>
</person-group>
<year>2011</year>
<article-title>Early multisensory interactions affect the competition among multiple visual objects.</article-title>
<source>NeuroImage</source>
<volume>55</volume>
<fpage>1208</fpage>
<lpage>1218</lpage>
<pub-id pub-id-type="pmid">21195781</pub-id>
</element-citation>
</ref>
<ref id="pone.0037190-Harris2">
<label>12</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Harris</surname>
<given-names>JM</given-names>
</name>
<name>
<surname>Nefs</surname>
<given-names>HT</given-names>
</name>
<name>
<surname>Grafton</surname>
<given-names>CE</given-names>
</name>
</person-group>
<year>2008</year>
<article-title>Binocular vision and motion-in-depth.</article-title>
<source>Spatial Vision</source>
<volume>21</volume>
<fpage>531</fpage>
<lpage>547</lpage>
<pub-id pub-id-type="pmid">19017481</pub-id>
</element-citation>
</ref>
<ref id="pone.0037190-Marr1">
<label>13</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Marr</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Poggio</surname>
<given-names>T</given-names>
</name>
</person-group>
<year>1976</year>
<article-title>Cooperative computation of stereo disparity.</article-title>
<source>Science</source>
<volume>194</volume>
<fpage>283</fpage>
<lpage>287</lpage>
<pub-id pub-id-type="pmid">968482</pub-id>
</element-citation>
</ref>
<ref id="pone.0037190-Assee1">
<label>14</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Assee</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Qian</surname>
<given-names>N</given-names>
</name>
</person-group>
<year>2007</year>
<article-title>Solving da Vinci stereopsis with depth-edge-selective V2 cells.</article-title>
<source>Vision Research</source>
<volume>47</volume>
<fpage>2585</fpage>
<lpage>2602</lpage>
<pub-id pub-id-type="pmid">17698163</pub-id>
</element-citation>
</ref>
<ref id="pone.0037190-Nienborg1">
<label>15</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Nienborg</surname>
<given-names>H</given-names>
</name>
<name>
<surname>Bridge</surname>
<given-names>H</given-names>
</name>
<name>
<surname>Parker</surname>
<given-names>AJ</given-names>
</name>
<name>
<surname>Cumming</surname>
<given-names>BG</given-names>
</name>
</person-group>
<year>2005</year>
<article-title>Neuronal computation of disparity in V1 limits temporal resolution for detecting disparity modulation.</article-title>
<source>Journal of Neuroscience</source>
<volume>25</volume>
<fpage>10207</fpage>
<lpage>10219</lpage>
<pub-id pub-id-type="pmid">16267228</pub-id>
</element-citation>
</ref>
<ref id="pone.0037190-Tsirlin1">
<label>16</label>
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Tsirlin</surname>
<given-names>I</given-names>
</name>
<name>
<surname>Wilcox</surname>
<given-names>LM</given-names>
</name>
<name>
<surname>Allison</surname>
<given-names>RS</given-names>
</name>
</person-group>
<year>2011</year>
<article-title>The effect of crosstalk on the perceived depth from disparity and monocular occlusions.</article-title>
<source>IEEE Transactions on Broadcasting</source>
<fpage>1</fpage>
<lpage>9</lpage>
</element-citation>
</ref>
<ref id="pone.0037190-Brooks1">
<label>17</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Brooks</surname>
<given-names>KR</given-names>
</name>
<name>
<surname>Stone</surname>
<given-names>LS</given-names>
</name>
</person-group>
<year>2006</year>
<article-title>Spatial scale of stereomotion speed processing.</article-title>
<source>Journal of Vision</source>
<volume>6</volume>
<fpage>9</fpage>
<lpage>9</lpage>
</element-citation>
</ref>
<ref id="pone.0037190-Brainard1">
<label>18</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Brainard</surname>
<given-names>DH</given-names>
</name>
</person-group>
<year>1997</year>
<article-title>The Psychophysics Toolbox.</article-title>
<source>Spatial Vision</source>
<volume>10</volume>
<fpage>433</fpage>
<lpage>436</lpage>
<pub-id pub-id-type="pmid">9176952</pub-id>
</element-citation>
</ref>
<ref id="pone.0037190-Pelli1">
<label>19</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Pelli</surname>
<given-names>DG</given-names>
</name>
</person-group>
<year>1997</year>
<article-title>The VideoToolbox software for visual psychophysics: transforming numbers into movies.</article-title>
<source>Spatial Vision</source>
<volume>10</volume>
<fpage>437</fpage>
<lpage>442</lpage>
<pub-id pub-id-type="pmid">9176953</pub-id>
</element-citation>
</ref>
<ref id="pone.0037190-Zannoli1">
<label>20</label>
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Zannoli</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Cass</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Mamassian</surname>
<given-names>P</given-names>
</name>
<name>
<surname>Alais</surname>
<given-names>D</given-names>
</name>
</person-group>
<year>2011</year>
<article-title>Synchronized audio-visual transients drive efficient visual search for motion-in-depth.</article-title>
<source>Journal of Vision</source>
<volume>11</volume>
<issue>11</issue>
<fpage>792</fpage>
</element-citation>
</ref>
</ref-list>
</back>
</pmc>
</record>

To work with this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Wicri/Asie/explor/AustralieFrV1/Data/Pmc/Corpus
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 002A17  | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/Pmc/Corpus/biblio.hfd -nk 002A17  | SxmlIndent | more
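
As a minimal sketch (assuming a standard POSIX shell; the output file name 002A17.xml is purely illustrative), the same indented record can be redirected to a file for offline inspection:

# save the indented XML record to a file (file name is illustrative)
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 002A17 | SxmlIndent > 002A17.xml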

To add a link to this page in the Wicri network

{{Explor lien
   |wiki=    Wicri/Asie
   |area=    AustralieFrV1
   |flux=    Pmc
   |étape=   Corpus
   |type=    RBID
   |clé=     
   |texte=   
}}

Wicri

This area was generated with Dilib version V0.6.33.
Data generation: Tue Dec 5 10:43:12 2017. Site generation: Tue Mar 5 14:07:20 2024