Exploration server on haptic devices

Warning: this site is under development!
Warning: this site was generated automatically from raw corpora.
The information has therefore not been validated.

Evidence for multisensory integration in the elicitation of prior entry by bimodal cues

Internal identifier: 000786 (Pmc/Curation); previous: 000785; next: 000787

Authors: Doug J. K. Barrett [United Kingdom]; Katrin Krumbholz [United Kingdom]

Source:

RBID: PMC:3442165

Abstract

This study reports an experiment investigating the relative effects of intramodal, crossmodal and bimodal cues on visual and auditory temporal order judgements. Pairs of visual or auditory targets, separated by varying stimulus onset asynchronies, were presented to either side of a central fixation (±45°), and participants were asked to identify the target that had occurred first. In some of the trials, one of the targets was preceded by a short, non-predictive visual, auditory or audiovisual cue stimulus. The cue and target stimuli were presented at the exact same locations in space. The point of subjective simultaneity revealed a consistent spatiotemporal bias towards targets at the cued location. For the visual targets, the intramodal cue elicited the largest, and the crossmodal cue the smallest, bias. The bias elicited by the bimodal cue fell between the intramodal and crossmodal cue biases, with significant differences between all cue types. The pattern for the auditory targets was similar apart from a scaling factor and greater variance, so the differences between the cue conditions did not reach significance. These results provide evidence for multisensory integration in exogenous attentional cueing. The magnitude of the bimodal cueing effect was equivalent to the average of the facilitation elicited by the intramodal and crossmodal cues. Under the assumption that the visual and auditory cues were equally informative, this is consistent with the notion that exogenous attention, like perception, integrates multimodal information in an optimal way.
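The "optimal way" referred to at the end of the abstract can be illustrated with the standard maximum-likelihood cue-combination model (cf. Ernst and Banks, cited in the record's bibliography): each unimodal estimate is weighted by its inverse variance, so equally reliable cues receive equal weights and the combined effect reduces to their simple average, as observed for the bimodal cue. The sketch below uses illustrative numbers, not values from the study:

```python
# Minimal sketch of inverse-variance-weighted (maximum-likelihood)
# cue combination. Illustrative numbers only.

def combine(estimates, variances):
    """Combine unimodal estimates, weighting each by its reliability
    (inverse variance). Returns the combined estimate and its variance."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    weights = [w / total for w in weights]
    combined = sum(w * e for w, e in zip(weights, estimates))
    combined_var = 1.0 / total
    return combined, combined_var

# Hypothetical cueing biases (ms) for intramodal and crossmodal cues:
intramodal_bias, crossmodal_bias = 60.0, 20.0

# Equally reliable cues -> weights of 0.5 each -> the simple average,
# matching the bimodal effect reported in the abstract:
bias, var = combine([intramodal_bias, crossmodal_bias], [1.0, 1.0])
assert bias == (intramodal_bias + crossmodal_bias) / 2  # 40.0 ms
```

If one cue were more reliable than the other, the model instead predicts a combined bias pulled towards the more reliable cue's effect rather than the midpoint.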


URL:
DOI: 10.1007/s00221-012-3191-8
PubMed: 22975896
PubMed Central: 3442165

Links to previous steps (curation, corpus...)


Links to Exploration step

PMC:3442165

The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">Evidence for multisensory integration in the elicitation of prior entry by bimodal cues</title>
<author>
<name sortKey="Barrett, Doug J K" sort="Barrett, Doug J K" uniqKey="Barrett D" first="Doug J. K." last="Barrett">Doug J. K. Barrett</name>
<affiliation wicri:level="1">
<nlm:aff id="Aff1">School of Psychology, University of Leicester, Leicester, UK</nlm:aff>
<country xml:lang="fr">Royaume-Uni</country>
<wicri:regionArea>School of Psychology, University of Leicester, Leicester</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Krumbholz, Katrin" sort="Krumbholz, Katrin" uniqKey="Krumbholz K" first="Katrin" last="Krumbholz">Katrin Krumbholz</name>
<affiliation wicri:level="1">
<nlm:aff id="Aff2">MRC Institute of Hearing Research, University Park, Nottingham, UK</nlm:aff>
<country xml:lang="fr">Royaume-Uni</country>
<wicri:regionArea>MRC Institute of Hearing Research, University Park, Nottingham</wicri:regionArea>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PMC</idno>
<idno type="pmid">22975896</idno>
<idno type="pmc">3442165</idno>
<idno type="url">http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3442165</idno>
<idno type="RBID">PMC:3442165</idno>
<idno type="doi">10.1007/s00221-012-3191-8</idno>
<date when="2012">2012</date>
<idno type="wicri:Area/Pmc/Corpus">000786</idno>
<idno type="wicri:Area/Pmc/Curation">000786</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a" type="main">Evidence for multisensory integration in the elicitation of prior entry by bimodal cues</title>
<author>
<name sortKey="Barrett, Doug J K" sort="Barrett, Doug J K" uniqKey="Barrett D" first="Doug J. K." last="Barrett">Doug J. K. Barrett</name>
<affiliation wicri:level="1">
<nlm:aff id="Aff1">School of Psychology, University of Leicester, Leicester, UK</nlm:aff>
<country xml:lang="fr">Royaume-Uni</country>
<wicri:regionArea>School of Psychology, University of Leicester, Leicester</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Krumbholz, Katrin" sort="Krumbholz, Katrin" uniqKey="Krumbholz K" first="Katrin" last="Krumbholz">Katrin Krumbholz</name>
<affiliation wicri:level="1">
<nlm:aff id="Aff2">MRC Institute of Hearing Research, University Park, Nottingham, UK</nlm:aff>
<country xml:lang="fr">Royaume-Uni</country>
<wicri:regionArea>MRC Institute of Hearing Research, University Park, Nottingham</wicri:regionArea>
</affiliation>
</author>
</analytic>
<series>
<title level="j">Experimental Brain Research. Experimentelle Hirnforschung. Experimentation Cerebrale</title>
<idno type="ISSN">0014-4819</idno>
<idno type="eISSN">1432-1106</idno>
<imprint>
<date when="2012">2012</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass></textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">
<p>This study reports an experiment investigating the relative effects of intramodal, crossmodal and bimodal cues on visual and auditory temporal order judgements. Pairs of visual or auditory targets, separated by varying stimulus onset asynchronies, were presented to either side of a central fixation (±45°), and participants were asked to identify the target that had occurred first. In some of the trials, one of the targets was preceded by a short, non-predictive visual, auditory or audiovisual cue stimulus. The cue and target stimuli were presented at the exact same locations in space. The point of subjective simultaneity revealed a consistent spatiotemporal bias towards targets at the cued location. For the visual targets, the intramodal cue elicited the largest, and the crossmodal cue the smallest, bias. The bias elicited by the bimodal cue fell between the intramodal and crossmodal cue biases, with significant differences between all cue types. The pattern for the auditory targets was similar apart from a scaling factor and greater variance, so the differences between the cue conditions did not reach significance. These results provide evidence for multisensory integration in exogenous attentional cueing. The magnitude of the bimodal cueing effect was equivalent to the average of the facilitation elicited by the intramodal and crossmodal cues. Under the assumption that the visual and auditory cues were equally informative, this is consistent with the notion that exogenous attention, like perception, integrates multimodal information in an optimal way.</p>
</div>
</front>
<back>
<div1 type="bibliography">
<listBibl>
<biblStruct>
<analytic>
<author>
<name sortKey="Angelaki, De" uniqKey="Angelaki D">DE Angelaki</name>
</author>
<author>
<name sortKey="Gu, Y" uniqKey="Gu Y">Y Gu</name>
</author>
<author>
<name sortKey="Deangelis, Gc" uniqKey="Deangelis G">GC DeAngelis</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Barrett, Djk" uniqKey="Barrett D">DJK Barrett</name>
</author>
<author>
<name sortKey="Edmondson Jones, Am" uniqKey="Edmondson Jones A">AM Edmondson-Jones</name>
</author>
<author>
<name sortKey="Hall, Da" uniqKey="Hall D">DA Hall</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Battaglia, Pw" uniqKey="Battaglia P">PW Battaglia</name>
</author>
<author>
<name sortKey="Jacobs, Ra" uniqKey="Jacobs R">RA Jacobs</name>
</author>
<author>
<name sortKey="Aslin, Rn" uniqKey="Aslin R">RN Aslin</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bertelson, P" uniqKey="Bertelson P">P Bertelson</name>
</author>
<author>
<name sortKey="Vroomen, J" uniqKey="Vroomen J">J Vroomen</name>
</author>
<author>
<name sortKey="Gelder, B" uniqKey="Gelder B">B Gelder</name>
</author>
<author>
<name sortKey="Driver, J" uniqKey="Driver J">J Driver</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Botta, F" uniqKey="Botta F">F Botta</name>
</author>
<author>
<name sortKey="Santengelo, V" uniqKey="Santengelo V">V Santengelo</name>
</author>
<author>
<name sortKey="Raffone, A" uniqKey="Raffone A">A Raffone</name>
</author>
<author>
<name sortKey="Sanabria, D" uniqKey="Sanabria D">D Sanabria</name>
</author>
<author>
<name sortKey="Lupianez, J" uniqKey="Lupianez J">J Lupianez</name>
</author>
<author>
<name sortKey="Belardinelli, Mo" uniqKey="Belardinelli M">MO Belardinelli</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Brainard, Dh" uniqKey="Brainard D">DH Brainard</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Calvert, Ga" uniqKey="Calvert G">GA Calvert</name>
</author>
<author>
<name sortKey="Thesen, T" uniqKey="Thesen T">T Thesen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Chambers, Cd" uniqKey="Chambers C">CD Chambers</name>
</author>
<author>
<name sortKey="Stokes, Mg" uniqKey="Stokes M">MG Stokes</name>
</author>
<author>
<name sortKey="Mattingley, Jb" uniqKey="Mattingley J">JB Mattingley</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Driver, J" uniqKey="Driver J">J Driver</name>
</author>
<author>
<name sortKey="Spence, C" uniqKey="Spence C">C Spence</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Duncan, J" uniqKey="Duncan J">J Duncan</name>
</author>
<author>
<name sortKey="Martens, S" uniqKey="Martens S">S Martens</name>
</author>
<author>
<name sortKey="Ward, R" uniqKey="Ward R">R Ward</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ernst, Mo" uniqKey="Ernst M">MO Ernst</name>
</author>
<author>
<name sortKey="Banks, Ms" uniqKey="Banks M">MS Banks</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ernst, Mo" uniqKey="Ernst M">MO Ernst</name>
</author>
<author>
<name sortKey="Bulthoff, Hh" uniqKey="Bulthoff H">HH Bulthoff</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Eskes, Ga" uniqKey="Eskes G">GA Eskes</name>
</author>
<author>
<name sortKey="Klein, Rm" uniqKey="Klein R">RM Klein</name>
</author>
<author>
<name sortKey="Dove, Mb" uniqKey="Dove M">MB Dove</name>
</author>
<author>
<name sortKey="Coolican, J" uniqKey="Coolican J">J Coolican</name>
</author>
<author>
<name sortKey="Shore, Di" uniqKey="Shore D">DI Shore</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Farah, Mj" uniqKey="Farah M">MJ Farah</name>
</author>
<author>
<name sortKey="Wong, Ab" uniqKey="Wong A">AB Wong</name>
</author>
<author>
<name sortKey="Monheit, Ma" uniqKey="Monheit M">MA Monheit</name>
</author>
<author>
<name sortKey="Morrow, La" uniqKey="Morrow L">LA Morrow</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Forster, B" uniqKey="Forster B">B Forster</name>
</author>
<author>
<name sortKey="Cavina Pratesi, C" uniqKey="Cavina Pratesi C">C Cavina-Pratesi</name>
</author>
<author>
<name sortKey="Aglioti, Sm" uniqKey="Aglioti S">SM Aglioti</name>
</author>
<author>
<name sortKey="Berlucchi, G" uniqKey="Berlucchi G">G Berlucchi</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Frassinetti, F" uniqKey="Frassinetti F">F Frassinetti</name>
</author>
<author>
<name sortKey="Bolognini, N" uniqKey="Bolognini N">N Bolognini</name>
</author>
<author>
<name sortKey="Ladavas, E" uniqKey="Ladavas E">E Làdavas</name>
</author>
</analytic>
</biblStruct>
<biblStruct></biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gu, Y" uniqKey="Gu Y">Y Gu</name>
</author>
<author>
<name sortKey="Angelaki, De" uniqKey="Angelaki D">DE Angelaki</name>
</author>
<author>
<name sortKey="Deangelis, Gc" uniqKey="Deangelis G">GC DeAngelis</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Harrington, Lk" uniqKey="Harrington L">LK Harrington</name>
</author>
<author>
<name sortKey="Peck, Kp" uniqKey="Peck K">KP Peck</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hill, Ni" uniqKey="Hill N">NI Hill</name>
</author>
<author>
<name sortKey="Darwin, Cj" uniqKey="Darwin C">CJ Darwin</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ho, C" uniqKey="Ho C">C Ho</name>
</author>
<author>
<name sortKey="Santangelo, V" uniqKey="Santangelo V">V Santangelo</name>
</author>
<author>
<name sortKey="Spence, C" uniqKey="Spence C">C Spence</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Holmes, Np" uniqKey="Holmes N">NP Holmes</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Holmes, Np" uniqKey="Holmes N">NP Holmes</name>
</author>
<author>
<name sortKey="Spence, C" uniqKey="Spence C">C Spence</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hukin, Rw" uniqKey="Hukin R">RW Hukin</name>
</author>
<author>
<name sortKey="Darwin, Cj" uniqKey="Darwin C">CJ Darwin</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kanabus, M" uniqKey="Kanabus M">M Kanabus</name>
</author>
<author>
<name sortKey="Szelag, E" uniqKey="Szelag E">E Szelag</name>
</author>
<author>
<name sortKey="Rojek, E" uniqKey="Rojek E">E Rojek</name>
</author>
<author>
<name sortKey="Poppel, E" uniqKey="Poppel E">E Poppel</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kingdom, Aa" uniqKey="Kingdom A">AA Kingdom</name>
</author>
<author>
<name sortKey="Prins, N" uniqKey="Prins N">N Prins</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Koelewijn, T" uniqKey="Koelewijn T">T Koelewijn</name>
</author>
<author>
<name sortKey="Bronkhorst, A" uniqKey="Bronkhorst A">A Bronkhorst</name>
</author>
<author>
<name sortKey="Theeuwes, J" uniqKey="Theeuwes J">J Theeuwes</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ma, Wj" uniqKey="Ma W">WJ Ma</name>
</author>
<author>
<name sortKey="Pouget, A" uniqKey="Pouget A">A Pouget</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mcdonald, Jj" uniqKey="Mcdonald J">JJ McDonald</name>
</author>
<author>
<name sortKey="Teder Salejarvi, Wa" uniqKey="Teder Salejarvi W">WA Teder-Salejarvi</name>
</author>
<author>
<name sortKey="Russo, F" uniqKey="Russo F">F Russo</name>
</author>
<author>
<name sortKey="Hillyard, Sa" uniqKey="Hillyard S">SA Hillyard</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mcdonald, Jj" uniqKey="Mcdonald J">JJ McDonald</name>
</author>
<author>
<name sortKey="Teder S Lej Rvi, Wa" uniqKey="Teder S Lej Rvi W">WA Teder-Sälejärvi</name>
</author>
<author>
<name sortKey="Ward, Lm" uniqKey="Ward L">LM Ward</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mcdonald, Jj" uniqKey="Mcdonald J">JJ McDonald</name>
</author>
<author>
<name sortKey="Ward, Lm" uniqKey="Ward L">LM Ward</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mcfarland, Dj" uniqKey="Mcfarland D">DJ McFarland</name>
</author>
<author>
<name sortKey="Cacace, At" uniqKey="Cacace A">AT Cacace</name>
</author>
<author>
<name sortKey="Setzen, G" uniqKey="Setzen G">G Setzen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Meredith, Ma" uniqKey="Meredith M">MA Meredith</name>
</author>
<author>
<name sortKey="Stein, Be" uniqKey="Stein B">BE Stein</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Meyer, De" uniqKey="Meyer D">DE Meyer</name>
</author>
<author>
<name sortKey="Osman, Am" uniqKey="Osman A">AM Osman</name>
</author>
<author>
<name sortKey="Irwin, De" uniqKey="Irwin D">DE Irwin</name>
</author>
<author>
<name sortKey="Yantis, S" uniqKey="Yantis S">S Yantis</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Miller, J" uniqKey="Miller J">J Miller</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Molholm, S" uniqKey="Molholm S">S Molholm</name>
</author>
<author>
<name sortKey="Ritter, W" uniqKey="Ritter W">W Ritter</name>
</author>
<author>
<name sortKey="Murray, Mm" uniqKey="Murray M">MM Murray</name>
</author>
<author>
<name sortKey="Javitt, Dc" uniqKey="Javitt D">DC Javitt</name>
</author>
<author>
<name sortKey="Schroeder, Ce" uniqKey="Schroeder C">CE Schroeder</name>
</author>
<author>
<name sortKey="Foxe, Jj" uniqKey="Foxe J">JJ Foxe</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Moore, Bcj" uniqKey="Moore B">BCJ Moore</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mondor, Ta" uniqKey="Mondor T">TA Mondor</name>
</author>
<author>
<name sortKey="Amirault, Kj" uniqKey="Amirault K">KJ Amirault</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Morgan, Ml" uniqKey="Morgan M">ML Morgan</name>
</author>
<author>
<name sortKey="Deangelis, Gc" uniqKey="Deangelis G">GC DeAngelis</name>
</author>
<author>
<name sortKey="Angelaki, De" uniqKey="Angelaki D">DE Angelaki</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Muller, Hj" uniqKey="Muller H">HJ Müller</name>
</author>
<author>
<name sortKey="Rabbitt, Pm" uniqKey="Rabbitt P">PM Rabbitt</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mulligan, Rm" uniqKey="Mulligan R">RM Mulligan</name>
</author>
<author>
<name sortKey="Shaw, Ml" uniqKey="Shaw M">ML Shaw</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Roach, Nw" uniqKey="Roach N">NW Roach</name>
</author>
<author>
<name sortKey="Heron, J" uniqKey="Heron J">J Heron</name>
</author>
<author>
<name sortKey="Mcgraw, Pv" uniqKey="Mcgraw P">PV McGraw</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Santangelo, V" uniqKey="Santangelo V">V Santangelo</name>
</author>
<author>
<name sortKey="Ho, C" uniqKey="Ho C">C Ho</name>
</author>
<author>
<name sortKey="Spence, C" uniqKey="Spence C">C Spence</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Santangelo, V" uniqKey="Santangelo V">V Santangelo</name>
</author>
<author>
<name sortKey="Spence, C" uniqKey="Spence C">C Spence</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Santangelo, V" uniqKey="Santangelo V">V Santangelo</name>
</author>
<author>
<name sortKey="Spence, C" uniqKey="Spence C">C Spence</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Santangelo, V" uniqKey="Santangelo V">V Santangelo</name>
</author>
<author>
<name sortKey="Lubbe, Rhj" uniqKey="Lubbe R">RHJ Lubbe</name>
</author>
<author>
<name sortKey="Belardinelli, Mo" uniqKey="Belardinelli M">MO Belardinelli</name>
</author>
<author>
<name sortKey="Postma, A" uniqKey="Postma A">A Postma</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Santangelo, V" uniqKey="Santangelo V">V Santangelo</name>
</author>
<author>
<name sortKey="Lubbe, Rhj" uniqKey="Lubbe R">RHJ Lubbe</name>
</author>
<author>
<name sortKey="Olivetti Belardinelli, M" uniqKey="Olivetti Belardinelli M">M Olivetti Belardinelli</name>
</author>
<author>
<name sortKey="Postma, A" uniqKey="Postma A">A Postma</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Schneider, Ka" uniqKey="Schneider K">KA Schneider</name>
</author>
<author>
<name sortKey="Bevelier, D" uniqKey="Bevelier D">D Bevelier</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Shore, Di" uniqKey="Shore D">DI Shore</name>
</author>
<author>
<name sortKey="Spence, C" uniqKey="Spence C">C Spence</name>
</author>
<author>
<name sortKey="Klein, Rm" uniqKey="Klein R">RM Klein</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Sinnett, S" uniqKey="Sinnett S">S Sinnett</name>
</author>
<author>
<name sortKey="Soto Faraco, S" uniqKey="Soto Faraco S">S Soto-Faraco</name>
</author>
<author>
<name sortKey="Spence, C" uniqKey="Spence C">C Spence</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Spence, C" uniqKey="Spence C">C Spence</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Spence, C" uniqKey="Spence C">C Spence</name>
</author>
<author>
<name sortKey="Parise, C" uniqKey="Parise C">C Parise</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Spence, C" uniqKey="Spence C">C Spence</name>
</author>
<author>
<name sortKey="Driver, J" uniqKey="Driver J">J Driver</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Spence, C" uniqKey="Spence C">C Spence</name>
</author>
<author>
<name sortKey="Driver, J" uniqKey="Driver J">J Driver</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Stein, Be" uniqKey="Stein B">BE Stein</name>
</author>
<author>
<name sortKey="Stanford, Tr" uniqKey="Stanford T">TR Stanford</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Stein, Be" uniqKey="Stein B">BE Stein</name>
</author>
<author>
<name sortKey="Stanford, Tr" uniqKey="Stanford T">TR Stanford</name>
</author>
<author>
<name sortKey="Ramachandran, R" uniqKey="Ramachandran R">R Ramachandran</name>
</author>
<author>
<name sortKey="Perrault, Tj" uniqKey="Perrault T">TJ Perrault</name>
</author>
<author>
<name sortKey="Rowland, Ba" uniqKey="Rowland B">BA Rowland</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Stelmach, Lb" uniqKey="Stelmach L">LB Stelmach</name>
</author>
<author>
<name sortKey="Herdman, Cm" uniqKey="Herdman C">CM Herdman</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Stormer, Vs" uniqKey="Stormer V">VS Störmer</name>
</author>
<author>
<name sortKey="Mcdonald, Jj" uniqKey="Mcdonald J">JJ McDonald</name>
</author>
<author>
<name sortKey="Hillyard, Sa" uniqKey="Hillyard S">SA Hillyard</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Talsma, D" uniqKey="Talsma D">D Talsma</name>
</author>
<author>
<name sortKey="Senkowski, D" uniqKey="Senkowski D">D Senkowski</name>
</author>
<author>
<name sortKey="Soto Faraco, S" uniqKey="Soto Faraco S">S Soto-Faraco</name>
</author>
<author>
<name sortKey="Woldorff, Mg" uniqKey="Woldorff M">MG Woldorff</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Teder S Lej Rvi, W" uniqKey="Teder S Lej Rvi W">W Teder-Sälejärvi</name>
</author>
<author>
<name sortKey="Russo, Fd" uniqKey="Russo F">FD Russo</name>
</author>
<author>
<name sortKey="Mcdonald, J" uniqKey="Mcdonald J">J McDonald</name>
</author>
<author>
<name sortKey="Hillyard, Sa" uniqKey="Hillyard S">SA Hillyard</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Tollin, Dj" uniqKey="Tollin D">DJ Tollin</name>
</author>
</analytic>
</biblStruct>
<biblStruct></biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ward, Lm" uniqKey="Ward L">LM Ward</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Werner, S" uniqKey="Werner S">S Werner</name>
</author>
<author>
<name sortKey="Noppeney, U" uniqKey="Noppeney U">U Noppeney</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Yates, Mj" uniqKey="Yates M">MJ Yates</name>
</author>
<author>
<name sortKey="Nicholls, Er" uniqKey="Nicholls E">ER Nicholls</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Zampini, M" uniqKey="Zampini M">M Zampini</name>
</author>
<author>
<name sortKey="Shore, Di" uniqKey="Shore D">DI Shore</name>
</author>
<author>
<name sortKey="Spence, C" uniqKey="Spence C">C Spence</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Zimmer, U" uniqKey="Zimmer U">U Zimmer</name>
</author>
<author>
<name sortKey="Macaluso, E" uniqKey="Macaluso E">E Macaluso</name>
</author>
</analytic>
</biblStruct>
</listBibl>
</div1>
</back>
</TEI>
<pmc article-type="research-article">
<pmc-dir>properties open_access</pmc-dir>
<front>
<journal-meta>
<journal-id journal-id-type="nlm-ta">Exp Brain Res</journal-id>
<journal-id journal-id-type="iso-abbrev">Exp Brain Res</journal-id>
<journal-title-group>
<journal-title>Experimental Brain Research. Experimentelle Hirnforschung. Experimentation Cerebrale</journal-title>
</journal-title-group>
<issn pub-type="ppub">0014-4819</issn>
<issn pub-type="epub">1432-1106</issn>
<publisher>
<publisher-name>Springer-Verlag</publisher-name>
<publisher-loc>Berlin/Heidelberg</publisher-loc>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="pmid">22975896</article-id>
<article-id pub-id-type="pmc">3442165</article-id>
<article-id pub-id-type="publisher-id">3191</article-id>
<article-id pub-id-type="doi">10.1007/s00221-012-3191-8</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Research Article</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>Evidence for multisensory integration in the elicitation of prior entry by bimodal cues</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" corresp="yes">
<name>
<surname>Barrett</surname>
<given-names>Doug J. K.</given-names>
</name>
<address>
<phone>+44-116-2297178</phone>
<fax>+44-116-2231057</fax>
<email>djkb1@le.ac.uk</email>
</address>
<xref ref-type="aff" rid="Aff1">1</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Krumbholz</surname>
<given-names>Katrin</given-names>
</name>
<xref ref-type="aff" rid="Aff2">2</xref>
</contrib>
<aff id="Aff1">
<label>1</label>
School of Psychology, University of Leicester, Leicester, UK</aff>
<aff id="Aff2">
<label>2</label>
MRC Institute of Hearing Research, University Park, Nottingham, UK</aff>
</contrib-group>
<pub-date pub-type="epub">
<day>28</day>
<month>7</month>
<year>2012</year>
</pub-date>
<pub-date pub-type="pmc-release">
<day>28</day>
<month>7</month>
<year>2012</year>
</pub-date>
<pub-date pub-type="ppub">
<month>10</month>
<year>2012</year>
</pub-date>
<volume>222</volume>
<issue>1-2</issue>
<fpage>11</fpage>
<lpage>20</lpage>
<history>
<date date-type="received">
<day>15</day>
<month>12</month>
<year>2011</year>
</date>
<date date-type="accepted">
<day>8</day>
<month>7</month>
<year>2012</year>
</date>
</history>
<permissions>
<copyright-statement>© The Author(s) 2012</copyright-statement>
</permissions>
<abstract id="Abs1">
<p>This study reports an experiment investigating the relative effects of intramodal, crossmodal and bimodal cues on visual and auditory temporal order judgements. Pairs of visual or auditory targets, separated by varying stimulus onset asynchronies, were presented to either side of a central fixation (±45°), and participants were asked to identify the target that had occurred first. In some of the trials, one of the targets was preceded by a short, non-predictive visual, auditory or audiovisual cue stimulus. The cue and target stimuli were presented at the exact same locations in space. The point of subjective simultaneity revealed a consistent spatiotemporal bias towards targets at the cued location. For the visual targets, the intramodal cue elicited the largest, and the crossmodal cue the smallest, bias. The bias elicited by the bimodal cue fell between the intramodal and crossmodal cue biases, with significant differences between all cue types. The pattern for the auditory targets was similar apart from a scaling factor and greater variance, so the differences between the cue conditions did not reach significance. These results provide evidence for multisensory integration in exogenous attentional cueing. The magnitude of the bimodal cueing effect was equivalent to the average of the facilitation elicited by the intramodal and crossmodal cues. Under the assumption that the visual and auditory cues were equally informative, this is consistent with the notion that exogenous attention, like perception, integrates multimodal information in an optimal way.</p>
</abstract>
<kwd-group xml:lang="en">
<title>Keywords</title>
<kwd>Exogenous attention</kwd>
<kwd>Intramodal</kwd>
<kwd>Crossmodal</kwd>
<kwd>Multisensory integration</kwd>
</kwd-group>
<custom-meta-group>
<custom-meta>
<meta-name>issue-copyright-statement</meta-name>
<meta-value>© Springer-Verlag 2012</meta-value>
</custom-meta>
</custom-meta-group>
</article-meta>
</front>
<body>
<sec id="Sec1">
<title>Introduction</title>
<p>Our experience of the world is derived from multiple sensory systems. The converging input provided by these systems is a powerful resource for differentiating and selecting objects for action or further analysis. However, integrating information across separate sensory systems poses the brain a computationally complex problem. For example, associating the changes in the sound of an approaching car with the expansion of its image on the retina requires the integration of binaural and retinotopic information. Prioritising the car for a behavioural response then requires the selection of this integrated information in the face of competing stimuli (e.g. other vehicles). This prioritisation is usually ascribed to selective attention, which can be “exogenously” evoked by salient perceptual events or directed towards behaviourally relevant objects in a voluntary, or “endogenous”, manner (Müller and Rabbitt
<xref ref-type="bibr" rid="CR39">1989</xref>
). The relationship between multisensory integration and attention, and the extent to which they are based on common mechanisms or rely on shared neural resources, has recently become a focus of interest in cognitive neuroscience (for reviews, see Koelewijn et al.
<xref ref-type="bibr" rid="CR26">2010</xref>
; Talsma et al.
<xref ref-type="bibr" rid="CR59">2010</xref>
). Much of the research to date is based on experiments designed to compare responses to unimodal stimuli (e.g. separately presented auditory and visual stimuli) with the response to the combined multimodal stimulus (audiovisual stimulus). Comparisons of this kind yield an index of the benefits associated with stimulation in more than one modality. Their results have provided evidence that there may be a difference in the size of these benefits for perceptual integration versus attention.</p>
<p>Studies of multisensory perceptual integration have shown that bimodal stimuli often evoke responses that are quantitatively different from those evoked by either of their unimodal components separately. For instance, in simple reaction time (RT) tasks, observers tend to respond to a bimodal stimulus faster than they do to either of the unimodal components alone; this has been referred to as the redundant signals effect (RSE; Forster et al.
<xref ref-type="bibr" rid="CR15">2002</xref>
; Miller
<xref ref-type="bibr" rid="CR34">1986</xref>
). The RSE is likely to be related to findings from neurophysiological and neuroimaging studies that neural activity in the superior colliculus (SC) and other brain areas is often suppressed or enhanced in response to bimodal compared to unimodal stimuli (Angelaki et al.
<xref ref-type="bibr" rid="CR1">2009</xref>
; Calvert and Thesen
<xref ref-type="bibr" rid="CR7">2004</xref>
; Gu et al.
<xref ref-type="bibr" rid="CR17">2008</xref>
; Molholm et al.
<xref ref-type="bibr" rid="CR35">2002</xref>
; Morgan et al.
<xref ref-type="bibr" rid="CR38">2008</xref>
; Stein et al.
<xref ref-type="bibr" rid="CR56">2009</xref>
; Sinnett et al.
<xref ref-type="bibr" rid="CR50">2008</xref>
; Teder-Sälejärvi et al.
<xref ref-type="bibr" rid="CR60">2005</xref>
; Werner and Noppeney
<xref ref-type="bibr" rid="CR63">2010</xref>
). The degree of bimodal enhancement or suppression has been found to depend on the temporal and spatial congruency of the unimodal stimulus components (Frassinetti et al.
<xref ref-type="bibr" rid="CR16">2002</xref>
; Stein and Stanford
<xref ref-type="bibr" rid="CR55">2008</xref>
). Typically, the size of the bimodal response cannot be predicted on the basis of the responses to either of its unimodal components (Meredith and Stein
<xref ref-type="bibr" rid="CR32">1983</xref>
; Stein et al.
<xref ref-type="bibr" rid="CR56">2009</xref>
). This suggests that bimodal perceptual integration is based upon a true combination of unimodal responses, rather than an exclusive decision based on either unimodal response alone (e.g. a “winner-takes-all” mechanism; Mulligan and Shaw
<xref ref-type="bibr" rid="CR40">1980</xref>
).</p>
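The RSE by itself, however, does not demonstrate integration: even if the two unimodal channels were processed independently and the response triggered by whichever channel finished first (a "race"), the mean bimodal RT would still be faster than either unimodal mean, through statistical facilitation alone. A minimal simulation of such a race, with arbitrary latency parameters chosen purely for illustration, shows this:

```python
import random
import statistics

random.seed(1)

def unimodal_rt(mu, sigma):
    """Draw one simulated detection latency in ms (truncated at zero)."""
    return max(0.0, random.gauss(mu, sigma))

N = 20000
rt_a = [unimodal_rt(230, 40) for _ in range(N)]   # auditory alone
rt_v = [unimodal_rt(250, 40) for _ in range(N)]   # visual alone
# Race model: the bimodal response is triggered by whichever unimodal
# channel finishes first -- no true combination of the two signals.
rt_av = [min(a, v) for a, v in zip(rt_a, rt_v)]

mean_a, mean_v, mean_av = map(statistics.mean, (rt_a, rt_v, rt_av))
print(f"auditory {mean_a:.0f} ms, visual {mean_v:.0f} ms, bimodal {mean_av:.0f} ms")
# The bimodal mean is faster than either unimodal mean even though the
# channels are fully independent, which is why a faster mean RT alone
# does not establish multisensory integration.
assert mean_av < min(mean_a, mean_v)
```

For this reason, RT studies typically test Miller's race-model inequality on the full RT distributions, rather than comparing means, before attributing the RSE to true integration.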
<p>In contrast to the studies on perceptual integration, studies of multisensory attention have found little evidence to suggest that the attentional facilitation evoked by bimodal cues is different from that evoked by their unimodal components. Most of these studies measured RTs and response accuracy to cued compared to uncued targets and have found the benefits afforded by bimodal cues to be comparable to those afforded by the most effective unimodal cue alone (Santangelo et al.
<xref ref-type="bibr" rid="CR45">2006</xref>
; Spence and Driver
<xref ref-type="bibr" rid="CR53">1999</xref>
; Ward
<xref ref-type="bibr" rid="CR62">1994</xref>
). One study also measured the neural response to bimodal cues and found bimodal enhancement of the neural response in the absence of any bimodal benefit in attentional facilitation (Santangelo et al.
<xref ref-type="bibr" rid="CR46">2008a</xref>
). This suggests that the absence of benefit for bimodal cues in the previous studies was not due to a failure to induce multisensory perceptual integration. These results have been interpreted as evidence that multisensory perceptual integration and attention are based on different underlying mechanisms (Bertelson et al.
<xref ref-type="bibr" rid="CR4">2000</xref>
; Santangelo et al.
<xref ref-type="bibr" rid="CR45">2006</xref>
; Spence
<xref ref-type="bibr" rid="CR51">2010</xref>
): while multisensory perceptual integration is thought to reflect a true combination of unimodal information, multisensory attention appears more consistent with facilitation being based on a winner-takes-all competition between the unimodal cue components. This competition might take place between separate modality-specific attentional resources (Chambers et al.
<xref ref-type="bibr" rid="CR8">2004</xref>
; Duncan et al.
<xref ref-type="bibr" rid="CR10">1997</xref>
; Mondor and Amirault
<xref ref-type="bibr" rid="CR36">1998</xref>
) or between the unimodal inputs to a supramodal attention mechanism (Farah et al.
<xref ref-type="bibr" rid="CR14">1989</xref>
; McDonald et al.
<xref ref-type="bibr" rid="CR29">2001</xref>
; Zimmer and Macaluso
<xref ref-type="bibr" rid="CR66">2007</xref>
).</p>
<p>There is, however, at least some evidence that is inconsistent with the idea that multisensory perceptual integration and multisensory attention are based on separate mechanisms. For instance, it has been shown that exogenous shifts of attention to cues in one modality can modulate responses to targets in another modality. This indicates that attentional resources are not exclusively unimodal (Driver and Spence
<xref ref-type="bibr" rid="CR9">1998</xref>
; McDonald et al.
<xref ref-type="bibr" rid="CR30">2005</xref>
; Störmer et al.
<xref ref-type="bibr" rid="CR58">2009</xref>
). Moreover, while bimodal cues do not elicit a larger RT benefit than their unimodal components, they have been shown to capture attention more effectively in conditions of high perceptual load (Santangelo et al.
<xref ref-type="bibr" rid="CR47">2008b</xref>
). Thus, the absence of multisensory enhancement in attentional facilitation may reflect a lack of sensitivity in the tasks and criteria used to study multisensory attention. In particular, RTs in the tasks used in the previous studies are determined, at least in part, by post-perceptual factors, such as criterion shifts, working memory and response preparation, some of which may be insensitive to changes in attentional facilitation as a result of multisensory integration (Meyer et al.
<xref ref-type="bibr" rid="CR33">1988</xref>
; Eskes et al.
<xref ref-type="bibr" rid="CR13">2007</xref>
). The inability to find evidence of multisensory integration in exogenous attention may also have been exacerbated by the expectation, in most studies, that multimodal cues will evoke
<italic>enhancements</italic>
in attentional facilitation. While enhanced neural responses characterise perceptual integration in some circumstances, the relationship between the neural correlates of multisensory integration (enhancement or suppression) and its behavioural consequences is not well understood (Holmes and Spence
<xref ref-type="bibr" rid="CR22">2005</xref>
; Holmes
<xref ref-type="bibr" rid="CR21">2007</xref>
). Optimal models of multisensory integration, which consider both the mean and the variability of the response, predict responses to bimodal stimuli to fall between, rather than exceed, the responses to their unimodal components. According to the maximum likelihood estimation (MLE) model, multisensory integration is based upon an average of the unimodal estimates associated with a given object, with each estimate weighted by its respective variance (Ernst and Bulthoff
<xref ref-type="bibr" rid="CR12">2004</xref>
; Ma and Pouget
<xref ref-type="bibr" rid="CR27">2008</xref>
). If multisensory attention operates on similar principles, attentional facilitation by a bimodal cue might also be expected to approximate an average of the facilitation elicited by its unimodal components.</p>
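The MLE rule can be made concrete with a small numerical sketch: each unimodal estimate is weighted by its reliability (the inverse of its variance), so the combined estimate always lies between the unimodal estimates, while the combined variance is lower than either. The numbers below are arbitrary illustrative values, not data from this study:

```python
def mle_combine(x_a, var_a, x_v, var_v):
    """Inverse-variance-weighted (maximum likelihood) combination of
    two unimodal estimates of the same underlying quantity."""
    w_a = (1.0 / var_a) / (1.0 / var_a + 1.0 / var_v)
    w_v = 1.0 - w_a
    x_av = w_a * x_a + w_v * x_v                 # combined estimate
    var_av = 1.0 / (1.0 / var_a + 1.0 / var_v)   # combined variance
    return x_av, var_av

# Hypothetical unimodal "facilitation" values (ms) with equal
# variances, standing in for two equally informative cue components.
x_av, var_av = mle_combine(60.0, 100.0, 25.0, 100.0)
print(x_av, var_av)
# The combined estimate lies between the unimodal values, and the
# combined variance is lower than either unimodal variance.
```

With equal variances the weights are equal, so the combined estimate reduces to the arithmetic mean of the two unimodal estimates, which corresponds to the averaging prediction stated above.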
<p>The aim of the current study was to re-investigate the relationship between multisensory perceptual integration and multisensory attentional facilitation by comparing the facilitation elicited by bimodal and unimodal cues. In contrast to the previous studies, we used a temporal order judgement (TOJ) rather than a RT task to measure attentional facilitation. TOJs measure the perceived order of occurrence of two asynchronous target stimuli. They have been shown to be highly sensitive to manipulations of exogenous spatial attention, in that targets at cued locations are often perceived to have occurred earlier than targets at uncued locations (e.g. Shore et al.
<xref ref-type="bibr" rid="CR49">2001</xref>
; Stelmach and Herdman
<xref ref-type="bibr" rid="CR57">1991</xref>
; Zampini et al.
<xref ref-type="bibr" rid="CR65">2005</xref>
). This bias, known as “prior entry”, has been attributed to an increase in perceptual sensitivity at the cued location (Shore et al.
<xref ref-type="bibr" rid="CR49">2001</xref>
; McDonald et al.
<xref ref-type="bibr" rid="CR30">2005</xref>
). In the current study, the two target stimuli were either visual or auditory, and, in some trials, one of them was preceded by a visual, auditory or audiovisual cue. A recent study by Eskes et al. (
<xref ref-type="bibr" rid="CR13">2007</xref>
) suggests that TOJs produce larger, and more reliable, cueing effects than RT tasks. TOJs might thus be expected to provide a more sensitive measure with which to investigate differences in the amount of facilitation elicited by bimodal and unimodal cues.</p>
</sec>
<sec id="Sec2" sec-type="methods">
<title>Method</title>
<sec id="Sec3">
<title>Participants</title>
<p>A total of 22 participants (8 male, ages ranging from 20 to 43 (mean 26.6) years) took part in this study. All participants were naïve to the purpose of the study and reported normal hearing and normal, or corrected-to-normal, vision. They gave informed written consent and were paid for their participation at an hourly rate. The experimental procedures conformed to the Code of Ethics of the World Medical Association (Declaration of Helsinki) and were approved by the local ethics committee.</p>
</sec>
<sec id="Sec4">
<title>Stimuli and apparatus</title>
<p>In order to make the auditory and visual TOJ tasks as similar as possible, we used target stimuli that differed along a categorical dimension. In addition, we required the auditory targets to be readily localisable, which meant that they had to be spectrally broad. To satisfy these constraints, we used a colour discrimination task for the visual TOJs, and a vowel discrimination task for the auditory TOJs.</p>
<p>The visual targets were two isoluminant (13.6 cd/m
<sup>2</sup>
) squares, one red and the other green, on a dark (1.7 cd/m
<sup>2</sup>
) background. Each square subtended 9° of visual angle. The visual stimuli were projected onto an acoustically transparent sheet, positioned at a viewing distance of 49 cm, using a floor-mounted projector (NEC WT610; London, UK). The image refresh rate was 75 Hz.</p>
<p>The auditory targets were the two vowels /i/ and /o/, generated using a Klatt synthesiser. Among the canonical vowels, /i/ and /o/ are the most widely separated in logarithmic formant space. The glottal pulse rates (GPRs), and thus the pitches, of the two vowels differed by ±2 semitones around 100 Hz. Their first three formants were separated by ±1.25 semitones to simulate a difference in vocal tract length (VTL). These GPR and VTL differences exceed the largest differences at which the vowels would still be judged as having been uttered by the same speaker (Gaudrain et al.
<xref ref-type="bibr" rid="CR101">2009</xref>
). The auditory stimuli were digital-to-analogue converted at 44.1 kHz using an ASIO-compliant sound card (Motu 24 I/O; Cambridge, MA, USA). They were gated on and off with 10-ms cosine-squared ramps to avoid audible clicks and presented at an overall level of approximately 70 dB(A) using two Bose Cube loudspeakers (Kent, UK). The loudspeakers were mounted behind the sheet onto which the visual stimuli were projected. This set-up enabled us to present the auditory and visual stimuli from the same location.</p>
<p>Both the auditory and visual targets were presented at an angle of ±45° from the centre of gaze. In some conditions, one of the two targets was preceded by a visual, auditory or audiovisual cue stimulus. The visual cue was a bright (102.6 cd/m
<sup>2</sup>
) white disc that subtended 9° of visual angle. The auditory cue was a burst of Gaussian noise, presented at an overall level of approximately 75 dB(A). For the audiovisual cue, the auditory and visual cues were presented synchronously and at the same location (±45° like the targets).</p>
<p>Stimulus presentation was controlled using MATLAB (Mathworks, Natick, MA, USA) with the Psychophysics toolbox (Brainard
<xref ref-type="bibr" rid="CR6">1997</xref>
). The experiment was conducted in a quiet, dimly lit room.
<table-wrap id="Tab1">
<label>Table 1</label>
<caption>
<p>Mean JNDs in milliseconds with standard errors (in brackets) for the visual and auditory TOJs by cue condition</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="left"></th>
<th align="left">Baseline</th>
<th align="left">Intramodal</th>
<th align="left">Crossmodal</th>
<th align="left">Bimodal</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left">Visual</td>
<td char="(" align="char">46.01 (4.22)</td>
<td char="(" align="char">61.59 (5.23)</td>
<td char="(" align="char">39.80 (3.33)</td>
<td char="(" align="char">57.60 (4.48)</td>
</tr>
<tr>
<td align="left">Auditory</td>
<td char="(" align="char">102.57 (7.72)</td>
<td char="(" align="char">120.73 (9.04)</td>
<td char="(" align="char">122.09 (11.23)</td>
<td char="(" align="char">156.88 (22.55)</td>
</tr>
</tbody>
</table>
</table-wrap>
</p>
</sec>
<sec id="Sec5">
<title>Procedure</title>
<p>For both target modalities (visual, auditory), TOJs were measured in four cue conditions. In one condition (“baseline”), there was no cue. In the “intramodal” cue condition, the cue was presented in the same modality as the targets (e.g. visual cue for the visual targets), and in the “crossmodal” condition, the cue was presented in the other modality (e.g. visual cue for the auditory targets). In the “bimodal” condition, the targets were preceded by the audiovisual cue.</p>
<p>Each trial began with a central fixation cross presented for 500 ms (see Fig. 
<xref rid="Fig1" ref-type="fig">1</xref>
). In the cued conditions, the cue was then presented to the left or right target location for 100 ms. The first target was presented after a cue-target onset asynchrony (CTOA) of 200 ms. The CTOA was designed to simultaneously minimise both the possibility of sensory interactions between the cue and the targets [e.g. energetic masking for the auditory TOJs (Moore
<xref ref-type="bibr" rid="CR37">2004</xref>
) and “sensory facilitation” for the visual TOJs (Schneider and Bevelier
<xref ref-type="bibr" rid="CR48">2003</xref>
)] and the likelihood that participants would make saccades to the cued location prior to the onset of the first target (Harrington and Peck
<xref ref-type="bibr" rid="CR18">1998</xref>
; Santangelo and Spence
<xref ref-type="bibr" rid="CR44">2009</xref>
). The onsets of the targets were staggered by a stimulus onset asynchrony (SOA) of 27, 53, 107, 160 or 213 ms, and the participant’s task was to identify which target had appeared first (“which-target-first” task). The first-occurring target was presented to the left or right side with equal probability. In the cued conditions, the spatial relationship between the cue and the first-occurring target was non-predictive. The targets were switched off synchronously to ensure that TOJs were based on the targets’ onsets, rather than their offsets. The duration of the longer of the two targets was always 1,000 ms. Participants were asked to judge the identity (colour or vowel identity), rather than the location of the first-occurring target, to avoid any spatial response bias (Shore et al.
<xref ref-type="bibr" rid="CR49">2001</xref>
), and their responses were recorded by the experimenter using a standard keyboard.
<fig id="Fig1">
<label>Fig. 1</label>
<caption>
<p>Schematic representation of one trial in the visual TOJ task. In this example, the first-appearing target is preceded by an intramodal cue. The actual visual targets were isoluminant red and green squares. The target onsets were staggered by an SOA ranging from 27 to 213 ms</p>
</caption>
<graphic xlink:href="221_2012_3191_Fig1_HTML" id="MO1"></graphic>
</fig>
</p>
<p>Previous studies have shown that orthogonal judgements are effective at eliminating first-order response bias in TOJ tasks (Spence and Parise
<xref ref-type="bibr" rid="CR54">2010</xref>
). However, concerns regarding second-order response bias have been raised by some authors (e.g. Schneider and Bevelier
<xref ref-type="bibr" rid="CR48">2003</xref>
). Two different tasks have been suggested to eliminate this second-order response bias: the simultaneity judgement (SJ) task and an alternate TOJ task, where which-target-first and which-target-second responses are averaged (Shore et al.
<xref ref-type="bibr" rid="CR49">2001</xref>
). The SJ task tends to yield much smaller prior-entry effects than the TOJ task, and there is a debate as to whether the two tasks actually measure the same underlying perceptual processes (van Eijk et al.
<xref ref-type="bibr" rid="CR102">2008</xref>
; Yates and Nicholls
<xref ref-type="bibr" rid="CR64">2011</xref>
). In contrast, the alternate TOJ task provides an effective way of eliminating second-order response bias. However, the difference between the which-target-first and which-target-second responses, which is a measure of response bias, has been shown to be small in relation to the prior-entry effect (less than 12 %; Shore et al.
<xref ref-type="bibr" rid="CR49">2001</xref>
; see Spence and Parise, for a review). Furthermore, alternate tasks are likely to introduce confusion at the response stage, as participants switch between which-target-first and which-target-second responses. In order to avoid this confusion in an already difficult task (particularly for the auditory TOJ; see “
<xref rid="Sec7" ref-type="sec">Results</xref>
”), we adopted a simple which-target-first response design.</p>
<p>The different experimental conditions (i.e. combinations of target modality and cue condition) were run in eight separate blocks. Each block contained eight repetitions of each stimulus condition [target side (2) × SOA (5) for the baseline condition; cue side (2) × target side (2) × SOA (5) for the cued conditions]. The presentation of the stimulus conditions was randomised within each block, as was the order of presentation of blocks (i.e. experimental conditions). Participants were told to ignore the cues and asked to maintain their gaze at the central fixation throughout each trial.</p>
</sec>
<sec id="Sec6">
<title>Analysis</title>
<p>Performance in the baseline conditions was checked to ensure that, at the longest SOA (±213 ms), participants could correctly identify the first-appearing target with at least 80 % accuracy. Four participants failed to achieve this criterion and were excluded from further analysis. For the remaining participants, the results for the baseline condition were expressed in terms of the proportion of “left-target-first” responses as a function of the onset time of the left target minus that of the right (referred to as SOA in Fig. 
<xref rid="Fig2" ref-type="fig">2</xref>
). For the cued conditions, the results were expressed in terms of the proportion of “cued-target-first” responses as a function of the onset time difference between the cued and the uncued target. The resulting psychometric functions were fitted with a cumulative Gaussian using the Palamedes toolbox for MATLAB (Kingdom and Prins
<xref ref-type="bibr" rid="CR25">2010</xref>
). The fitting was conducted for each participant separately. Note, however, that the fitted functions shown in Fig. 
<xref rid="Fig2" ref-type="fig">2</xref>
are based on the mean data for all participants. The goodness of fit (GoF) was estimated by bootstrapping each participant’s data 1,999 times using a Monte-Carlo procedure. All participants’ responses fell well within the 95 % confidence interval around the fitted functions, indicating a good match between the fitted and measured functions. The fitted functions were then used to estimate the point of subjective simultaneity (PSS) for each participant and condition. In the baseline condition, the PSS denotes the SOA at which the left and right targets are judged to have occurred first with equal probability. The PSS for the baseline conditions would thus be expected to be close to zero. In the cued conditions, the PSS denotes the SOA at which the cued and uncued targets are judged to have occurred first with equal probability. Under the assumption that the cue facilitates target processing, the PSS for the cued conditions would be expected to be shifted towards positive SOAs (i.e. cued target occurred before uncued target). The magnitude of the shift would be expected to reflect the lead-time required for the uncued target to be perceived as having occurred simultaneously with the cued target. In addition to the PSS, we also estimated the just noticeable difference (JND) in the onsets of the two targets by calculating the difference between SOAs yielding cued-target-first responses with probabilities of 0.75 and 0.5.
<fig id="Fig2">
<label>Fig. 2</label>
<caption>
<p>Observed data and fitted psychometric functions for the visual (
<bold>a</bold>
) and auditory (
<bold>b</bold>
) TOJ tasks. In this illustration, the sigmoid fits are based upon the data averaged across participants. The different cue conditions are represented by different symbols and line styles (see legend in
<bold>b</bold>
). For baseline trials, the ordinate shows the proportion of “left-target-first” responses, and negative SOA values on the abscissa denote targets presented first in the left visual field. For cued trials, the ordinate shows the proportion of “cued-target-first” responses, and negative SOA values denote targets presented first at the cued location, irrespective of the side of presentation</p>
</caption>
<graphic xlink:href="221_2012_3191_Fig2_HTML" id="MO2"></graphic>
</fig>
</p>
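The fitting step can be sketched in a few lines. The actual analysis used the Palamedes toolbox in MATLAB; the following self-contained Python sketch fits a cumulative Gaussian by least-squares grid search to invented response proportions and derives the PSS and JND as defined above:

```python
import math

def cum_gauss(x, mu, sigma):
    """Cumulative Gaussian psychometric function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

# Hypothetical proportions of "cued-target-first" responses.  As in
# Fig. 2, negative SOAs mean the cued target was presented first, so
# the proportion falls as the SOA increases.
soas = [-213, -160, -107, -53, -27, 27, 53, 107, 160, 213]
p_obs = [1.00, 0.99, 0.97, 0.85, 0.75, 0.46, 0.32, 0.11, 0.02, 0.00]

# Least-squares grid search over candidate PSS and slope values.
best = None
for mu in range(-80, 81):          # candidate PSS values (ms)
    for sig in range(20, 201):     # candidate sigma values (ms)
        err = sum((1.0 - cum_gauss(x, mu, sig) - p) ** 2
                  for x, p in zip(soas, p_obs))
        if best is None or err < best[0]:
            best = (err, mu, sig)

_, pss, sigma = best
# For a cumulative Gaussian, the SOA difference between the 0.75 and
# 0.5 response probabilities is the 0.75 normal quantile times sigma.
jnd = 0.6745 * sigma
print(f"PSS = {pss} ms, JND = {jnd:.1f} ms")
# A positive PSS means the uncued target must lead by that amount to
# be perceived as simultaneous with the cued target (prior entry).
```

Palamedes fits by maximum likelihood rather than by least squares over a grid, but the extracted quantities (the mean of the fitted function as the PSS, and a fixed multiple of its standard deviation as the JND) are the same.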
<p>To compare performance across target and cue conditions, the PSS and JND estimates for each participant were entered into separate repeated-measures ANOVAs. The
<italic>p</italic>
values were Greenhouse–Geisser corrected for non-sphericity where appropriate. Post hoc comparisons were corrected for family-wise error using Holm–Bonferroni-adjusted
<italic>t</italic>
tests (two-tailed, α = 0.05).</p>
</sec>
</sec>
<sec id="Sec7" sec-type="results">
<title>Results</title>
<sec id="Sec8">
<title>PSS</title>
<p>The psychometric functions for the visual and auditory TOJs were sigmoidal (Fig. 
<xref rid="Fig2" ref-type="fig">2</xref>
), and the functions for the baseline conditions (no cue) were approximately mirror-symmetric about zero SOA, as expected. In contrast, the functions for the cued conditions were shifted towards positive SOAs, indicating a cue-related bias in the PSS. Figure 
<xref rid="Fig3" ref-type="fig">3</xref>
a shows the mean PSS estimates derived from the individual fitted psychometric functions. It indicates that the magnitude of the bias was larger for the visual than the auditory targets (compare black and grey bars, upper panel). The bias also differed between the cue conditions, particularly for the visual targets: intramodal cues produced the largest, crossmodal cues produced the smallest, and bimodal cues produced an intermediate PSS bias. A repeated-measures ANOVA of the PSS, with factors target modality (visual, auditory) and cue condition (baseline and intramodal, crossmodal or bimodal cue), revealed a significant main effect of cue condition [
<italic>F</italic>
(3,51) = 13.64,
<italic>p</italic>
 < 0.001]. The main effect of target modality was non-significant [
<italic>F</italic>
(1,17) = 2.64,
<italic>p</italic>
 = 0.122], but there was a significant target modality by cue condition interaction [
<italic>F</italic>
(3,51) = 4.05,
<italic>p</italic>
 < 0.012]. Post hoc tests showed that, for the visual TOJ task, all cued conditions elicited significant PSS biases compared to the baseline condition (all
<italic>p</italic>
 < 0.001). Furthermore, the PSS for the intramodal cue condition was significantly larger than those for the crossmodal (
<italic>p</italic>
 < 0.001) and bimodal conditions (
<italic>p</italic>
 = 0.001), and the PSS for the crossmodal condition was significantly smaller than that for the bimodal condition (
<italic>p</italic>
 = 0.04). For the auditory TOJ task, none of the differences between the cue conditions reached significance. This was because the cue-induced PSS biases for the auditory TOJs were considerably smaller than those for the visual TOJs, while the associated errors were larger. This difference also explains the target modality by cue condition interaction; when the PSS was normalised to the value for the intramodal cue condition in each modality (Fig. 
<xref rid="Fig3" ref-type="fig">3</xref>
b), this interaction disappeared [
<italic>F</italic>
(3,51) = 0.06,
<italic>p</italic>
 = 0.980]. This shows that the patterns of PSS bias across cue types for the visual and auditory target modalities were similar apart from a constant scaling factor. For both target modalities, the PSS bias elicited by the bimodal cue closely approximated the average of the biases elicited by the intramodal and crossmodal cues (visual targets: 42.06 vs. 43.99 ms,
<italic>t</italic>
(17) = 0.325,
<italic>p</italic>
 = 0.749; auditory targets: 24.65 vs. 24.73 ms,
<italic>t</italic>
(17) = 0.008,
<italic>p</italic>
 = 0.99; see short dashed lines on upper-most set of bars in Fig. 
<xref rid="Fig3" ref-type="fig">3</xref>
a).
<fig id="Fig3">
<label>Fig. 3</label>
<caption>
<p>
<bold>a</bold>
Shows the average PSS for all cue conditions (
<italic>bsl</italic>
baseline,
<italic>intra</italic>
intramodal,
<italic>cross</italic>
crossmodal,
<italic>bi</italic>
bimodal). The
<italic>short dashed lines</italic>
on the set of bars showing the bimodal PSS (uppermost set) represent the mean of the intramodal and crossmodal PSS.
<bold>b</bold>
Shows the same PSS, but normalised by the PSS for the intramodal cue condition to facilitate more direct comparison between the visual and auditory TOJ tasks.
<italic>Error bars</italic>
denote the standard error of the mean</p>
</caption>
<graphic xlink:href="221_2012_3191_Fig3_HTML" id="MO3"></graphic>
</fig>
</p>
</sec>
<sec id="Sec9">
<title>JND</title>
<p>Figure 
<xref rid="Fig2" ref-type="fig">2</xref>
shows that the psychometric functions for the auditory TOJs were shallower than those for the visual TOJs, indicating that the temporal order of the auditory targets was more difficult to resolve than that of the visual targets. A repeated-measures ANOVA of the mean JND estimates with factors target modality (auditory, visual) and cue condition (baseline and intramodal, crossmodal or bimodal cue) confirmed the significance of this difference (main effect of target modality:
<italic>F</italic>
(1,17) = 44.67,
<italic>p</italic>
 < 0.001). There was also a significant effect of cue condition (
<italic>F</italic>
(3,51) = 5.61,
<italic>p</italic>
 = 0.002). The target modality by cue condition interaction approached, but did not reach, significance (
<italic>F</italic>
(3,51) = 3.02,
<italic>p</italic>
 = 0.075). Post hoc tests showed that the main effect of cue condition was driven primarily by a significantly larger JND in the bimodal compared to the baseline cue conditions (
<italic>p</italic>
 = 0.022; see Table
<xref rid="Tab1" ref-type="table">1</xref>
).</p>
</sec>
</sec>
<sec id="Sec10" sec-type="discussion">
<title>Discussion</title>
<p>The aim of this study was to investigate differences in the amount of attentional facilitation associated with exogenous bimodal, intramodal and crossmodal cues for visual and auditory TOJs. For the visual TOJs, the results revealed reliable facilitation for all cue types (indexed by a spatiotemporal bias towards targets at the cued location). The visual TOJ data also revealed reliable differences in the amount of facilitation elicited by the different cue types, with the intramodal cue eliciting the largest, the crossmodal cue eliciting the smallest, and the bimodal cue eliciting intermediate facilitation. These results provide strong evidence that exogenous attentional facilitation is sensitive to the sensory information conveyed by
<italic>both</italic>
unimodal components of a bimodal cue.</p>
<p>In contrast to the results of the previous studies (Santangelo et al.
<xref ref-type="bibr" rid="CR45">2006</xref>
,
<xref ref-type="bibr" rid="CR46">2008a</xref>
,
<xref ref-type="bibr" rid="CR47">b</xref>
; Spence and Driver
<xref ref-type="bibr" rid="CR52">1997</xref>
,
<xref ref-type="bibr" rid="CR53">1999</xref>
; Ward
<xref ref-type="bibr" rid="CR62">1994</xref>
), which have used RT tasks, the current results revealed a reliable difference in the amount of facilitation elicited by the bimodal compared to the most effective unimodal (i.e. intramodal) cue. This difference has been identified as a key criterion for multisensory integration in single-cell recordings (Stein et al.
<xref ref-type="bibr" rid="CR56">2009</xref>
). In the current study, the facilitation elicited by the bimodal cue was reduced compared to that elicited by the intramodal cue. Some of the previous RT studies have also found a tendency for a reduced bimodal cueing effect, but have not found it to be statistically reliable (e.g. Santangelo et al.
<xref ref-type="bibr" rid="CR46">2008a</xref>
). This may have been because the difference in the amount of facilitation elicited by the intramodal and crossmodal cues was only small, and so any reduction in the bimodal cueing effect may have been missed. In contrast, the difference was relatively large in the current study. This discrepancy between our result and that of the previous studies may, therefore, be due to the TOJ task being a more direct, and thus a more sensitive, measure of attentional modulation than RT tasks (Eskes et al.
<xref ref-type="bibr" rid="CR13">2007</xref>
). The fact that the current study used an orthogonal TOJ task means that the majority of the observed cue-induced facilitation can be attributed to attentional prioritisation or prior entry (Spence and Parise
<xref ref-type="bibr" rid="CR54">2010</xref>
). While it is possible that some proportion of the facilitation was due to second-order response bias (i.e. bias to respond to the cued target), the previous studies suggest that this effect would have been relatively small (around 10 % of the overall prior-entry effect; Shore et al.
<xref ref-type="bibr" rid="CR49">2001</xref>
). Moreover, the reduction in the facilitation elicited by the bimodal compared to the intramodal cue is inconsistent with an explanation of our data based on second-order response bias. This is because response bias would be expected to depend on the cue salience. Thus, given that the combination of the visual and auditory components of the bimodal cue would have been more salient than, or at least as salient as, the intramodal cue, the bimodal cue should have produced at least an equivalent response bias.</p>
<p>A similar argument also applies to the possibility that our results are attributable to eye movements or to sensory interactions between the intramodal cue (or cue component) and the target at the cued location. Eye movements to the cued location would not have been expected to elicit less facilitation for the bimodal than intramodal cue. Likewise, given that the bimodal cue contains the intramodal cue component, sensory interactions at the cued location would also not have been expected to elicit less facilitation for the bimodal than intramodal cue. Furthermore, in the auditory TOJ task, sensory interactions might have been expected to
<italic>reduce</italic>
the amount of facilitation elicited by the intramodal cue (through energetic masking; see Moore
<xref ref-type="bibr" rid="CR37">2004</xref>
), which is inconsistent with our finding that the intramodal cue caused the
<italic>most</italic>
facilitation. These arguments suggest that our results were not influenced by eye movements or sensory interactions. The findings of Santangelo and Spence
<xref ref-type="bibr" rid="CR44">(2009)</xref>
support this interpretation. Using the same CTOA as that used in the current study, they found no evidence of any effect of eye movements or sensory interactions on cue-induced facilitation in a visual TOJ task.</p>
<p>In the current results, auditory TOJs were both less accurate and less susceptible to spatial cueing effects than visual TOJs. In the baseline (no cue) conditions, auditory TOJs yielded an average JND of about 103 ms compared to only 46 ms for the visual TOJs. In contrast, Kanabus et al. (
<xref ref-type="bibr" rid="CR24">2002</xref>
) found comparable JNDs (of approximately 40 ms) in their auditory and visual TOJ tasks. The difference between the auditory JNDs in the current and in Kanabus et al.’s studies may be due to the tasks involving different stimulus, or feature, dimensions; the auditory targets used in Kanabus et al.’s study were tone pips presented at the same location but differing in frequency. In contrast, the auditory targets used in the current study were presented at different locations and differed in phonological (vowel) identity as well as frequency. McFarland et al. (
<xref ref-type="bibr" rid="CR31">1998</xref>
) showed that JNDs for TOJs in a given modality vary depending upon the feature dimension that separates the two targets. Another important determinant of accuracy may be the extent to which the two targets temporally overlap. Kanabus et al. employed tone pips of 15-ms duration, meaning that each target was played in isolation for all but the shortest SOA. In our study, target stimuli overlapped for a variable period that depended upon the SOA on each trial. This may have made differentiating the targets more difficult.</p>
<p>The non-significance of the cueing effects on the auditory TOJs is also consistent with previous findings that the effect of spatial cueing on auditory RT tasks is less robust than on visual RT tasks (Barrett et al.
<xref ref-type="bibr" rid="CR2">2010</xref>
; Mondor and Amirault
<xref ref-type="bibr" rid="CR37">1998</xref>
; McDonald and Ward
<xref ref-type="bibr" rid="CR30">1999</xref>
; Spence
<xref ref-type="bibr" rid="CR51">2010</xref>
). It has been proposed that the difficulty in eliciting spatial cueing effects in hearing might be due to a fundamental difference in the way spatial information is represented in the auditory and visual systems. In the visual system, the mapping of non-spatial features, such as colour or orientation, is superimposed onto the representation of retinotopic space. In contrast, in the auditory system, spatial and non-spatial information is processed separately from an early level onwards (Tollin
<xref ref-type="bibr" rid="CR61">2003</xref>
). This might explain why spatial information has a lesser effect on the segregation and identification of auditory compared to visual objects (Hill and Darwin
<xref ref-type="bibr" rid="CR19">1996</xref>
; Hukin and Darwin
<xref ref-type="bibr" rid="CR23">1995</xref>
). However, despite their non-significance, the PSS for the auditory TOJs revealed a pattern across cue types similar to that for the visual TOJs; normalisation showed that the visual and auditory PSS differed only by a constant scaling factor and in the relative amount of variance. This indicates that the differences in the results for the visual and auditory TOJs were quantitative, rather than qualitative, and suggests that auditory object recognition can be affected by spatial cueing, although to a lesser extent than visual object recognition.</p>
<p>The observed reduction in the amount of facilitation elicited by the bimodal compared to the intramodal cue is clearly inconsistent with a “winner-takes-all” mechanism of exogenous attention: if facilitation were determined by the most effective cue, the magnitude of the PSS bias elicited by intramodal and bimodal cues should have been equivalent (Chambers et al.
<xref ref-type="bibr" rid="CR8">2004</xref>
; Duncan et al.
<xref ref-type="bibr" rid="CR10">1997</xref>
; Mondor and Amirault
<xref ref-type="bibr" rid="CR37">1998</xref>
). The current data also argue against a strictly supramodal mechanism, which would have resulted in equivalent facilitation for intramodal and crossmodal cues (Farah et al.
<xref ref-type="bibr" rid="CR14">1989</xref>
; Koelewijn et al.
<xref ref-type="bibr" rid="CR26">2010</xref>
; Spence and Driver
<xref ref-type="bibr" rid="CR52">1997</xref>
). Instead, the amount of facilitation elicited by the bimodal cue seemed to be influenced by
<italic>both</italic>
the intramodal and crossmodal cue components. One explanation for this pattern of results is that our observers oriented to the intramodal or crossmodal cue component on half of all trials. However, this would imply that the system was switching between the more and the less effective cue component in a random fashion. Such random switching between differentially informative sources of information would be unprecedented in any other sensory or attentional function. Thus, a more likely account of the current results is that the magnitude of the facilitation evoked by the bimodal cue was based upon a true combination of the facilitation elicited by the intramodal and crossmodal cue components. This account is also more easily reconciled with evidence that attentional capture by bimodal cues is more resistant to concurrent task load (Ho et al.
<xref ref-type="bibr" rid="CR20">2009</xref>
; Santangelo and Spence
<xref ref-type="bibr" rid="CR43">2007</xref>
; Santangelo et al.
<xref ref-type="bibr" rid="CR47">2008b</xref>
) and more effective in biasing access to working memory (Botta et al.
<xref ref-type="bibr" rid="CR5">2011</xref>
). These findings, which have been attributed to an increase in the salience of bimodal compared to unimodal cues, cannot be explained by a simple switching account between exclusive, unimodal attentional resources.</p>
<p>The finding that the bimodal cueing effect approximated the average of the intramodal and crossmodal cueing effects suggests that multisensory combination in attentional facilitation may operate on principles similar to those governing multisensory combination in perception. Perceptually, the combination of multimodal information has been shown to involve a weighted averaging of the multimodal stimulus components. According to the MLE model, the weights are determined by the relative precision, or inverse variance, of the representation of each component (Battaglia et al.
<xref ref-type="bibr" rid="CR3">2003</xref>
; Ernst and Banks
<xref ref-type="bibr" rid="CR11">2002</xref>
; Ernst and Bulthoff
<xref ref-type="bibr" rid="CR12">2004</xref>
; Ma and Pouget
<xref ref-type="bibr" rid="CR27">2008</xref>
). When precision differs between the unimodal components, the MLE is biased towards the most precise component. When precision is similar, the MLE reduces to a simple average of the unimodal components (Roach et al.
<xref ref-type="bibr" rid="CR42">2006</xref>
). If exogenous attention uses a similar rule to combine independent intramodal and crossmodal responses to the cue, then the magnitude of facilitation evoked by a bimodal cue would also be expected to fall between that evoked by its separate components. In the current experiment, the auditory and visual cues were both highly salient and, as cues and target always appeared at the same locations, equally informative with respect to the target locations. This suggests that the spatial information conveyed by the auditory and visual cues was similarly precise. The close approximation of the bimodal facilitation to the average of that elicited by the unimodal cues would thus seem to be consistent with an optimal model of cue combination.</p>
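The MLE weighting rule summarised above can be made concrete. The following is an illustrative sketch in our own notation (the symbols are not taken from the article): \(s_V, s_A\) denote the unimodal location estimates conveyed by the visual and auditory cue components, and \(\sigma_V^2, \sigma_A^2\) their variances.

```latex
% MLE (inverse-variance weighted) cue combination -- illustrative sketch;
% s_V, s_A, sigma_V, sigma_A are our own notation, not the article's.
\hat{s}_{VA} = w_V s_V + w_A s_A,
\qquad
w_V = \frac{1/\sigma_V^2}{1/\sigma_V^2 + 1/\sigma_A^2},
\qquad
w_A = \frac{1/\sigma_A^2}{1/\sigma_V^2 + 1/\sigma_A^2}.
% With equal precision (sigma_V = sigma_A), the weights reduce to
% w_V = w_A = 1/2, so the combined estimate is the simple average
% (s_V + s_A)/2 of the unimodal estimates. The combined variance
\sigma_{VA}^2 = \frac{\sigma_V^2 \, \sigma_A^2}{\sigma_V^2 + \sigma_A^2}
\le \min\!\left(\sigma_V^2, \sigma_A^2\right)
% is never larger than either unimodal variance, i.e. the bimodal
% estimate is at least as reliable as the better unimodal estimate.
```

Under the equal-precision assumption argued for in the text, this rule predicts exactly the observed outcome: a bimodal effect equal to the average of the intramodal and crossmodal effects, together with a gain in reliability rather than in magnitude.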
<p>The MLE model predicts that a bimodal stimulus will be represented more precisely, or reliably, than either of its unimodal components alone (Ma and Pouget
<xref ref-type="bibr" rid="CR27">2008</xref>
). This suggests that while the facilitation elicited by the bimodal cue was smaller in magnitude than that elicited by its intramodal component, its trial-to-trial reliability may have been greater. Although this cannot be determined from the current data (the observed JNDs reflect the precision of the TOJs rather than the reliability of the cueing effect), generalising the optimal averaging model of multisensory perceptual integration to multisensory attention provides a parsimonious explanation of the current results. According to this interpretation, exogenous attention is able to select effectively between competing objects by combining mutually informative orienting responses across different sensory systems. As the sensitivity of different sensory systems varies with respect to the spatial and non-spatial information they encode, converging sensory information is likely to provide the most reliable means of prioritising multimodal objects for action or further analysis. This increase in the precision with which bimodal compared to unimodal cues are represented may also explain the previous findings that bimodal cues are more resistant to concurrent task load and more effective in biasing access to working memory (Botta et al.
<xref ref-type="bibr" rid="CR5">2011</xref>
; Santangelo and Spence
<xref ref-type="bibr" rid="CR43">2007</xref>
). Although further studies are required to determine whether separate unimodal orienting responses are combined in a statistically optimal way, our data suggest that perception and attention may integrate multimodal information using similar rules.</p>
</sec>
</body>
<back>
<ack>
<p>This work was supported by a summer bursary from the Nuffield Science Foundation and a small grant from the MRC Institute of Hearing Research. We wish to thank Karima Susi for her enthusiastic recruitment of participants and for conducting most of the experimental sessions.</p>
<sec id="d29e852">
<title>Open Access</title>
<p>This article is distributed under the terms of the Creative Commons Attribution License which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited.</p>
</sec>
</ack>
<ref-list id="Bib1">
<title>References</title>
<ref id="CR1">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Angelaki</surname>
<given-names>DE</given-names>
</name>
<name>
<surname>Gu</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>DeAngelis</surname>
<given-names>GC</given-names>
</name>
</person-group>
<article-title>Multisensory integration: psychophysics, neurophysiology, and computation</article-title>
<source>Curr Opin Neurobiol</source>
<year>2009</year>
<volume>19</volume>
<fpage>452</fpage>
<lpage>458</lpage>
<pub-id pub-id-type="doi">10.1016/j.conb.2009.06.008</pub-id>
<pub-id pub-id-type="pmid">19616425</pub-id>
</mixed-citation>
</ref>
<ref id="CR2">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Barrett</surname>
<given-names>DJK</given-names>
</name>
<name>
<surname>Edmondson-Jones</surname>
<given-names>AM</given-names>
</name>
<name>
<surname>Hall</surname>
<given-names>DA</given-names>
</name>
</person-group>
<article-title>Attention in neglect and extinction: assessing the degree of correspondence between visual and auditory impairments using matched tasks</article-title>
<source>J Clin Exp Neuropsychol</source>
<year>2010</year>
<volume>32</volume>
<fpage>71</fpage>
<lpage>80</lpage>
<pub-id pub-id-type="doi">10.1080/13803390902838058</pub-id>
<pub-id pub-id-type="pmid">19484647</pub-id>
</mixed-citation>
</ref>
<ref id="CR3">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Battaglia</surname>
<given-names>PW</given-names>
</name>
<name>
<surname>Jacobs</surname>
<given-names>RA</given-names>
</name>
<name>
<surname>Aslin</surname>
<given-names>RN</given-names>
</name>
</person-group>
<article-title>Bayesian integration of visual and auditory signals for spatial localization</article-title>
<source>J Opt Soc Am</source>
<year>2003</year>
<volume>20</volume>
<fpage>1391</fpage>
<lpage>1397</lpage>
<pub-id pub-id-type="doi">10.1364/JOSAA.20.001391</pub-id>
</mixed-citation>
</ref>
<ref id="CR4">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bertelson</surname>
<given-names>P</given-names>
</name>
<name>
<surname>Vroomen</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Gelder</surname>
<given-names>B</given-names>
</name>
<name>
<surname>Driver</surname>
<given-names>J</given-names>
</name>
</person-group>
<article-title>The ventriloquist effect does not depend on the direction of deliberate visual attention</article-title>
<source>Atten Percept Psychophys</source>
<year>2000</year>
<volume>62</volume>
<fpage>321</fpage>
<lpage>332</lpage>
<pub-id pub-id-type="doi">10.3758/BF03205552</pub-id>
</mixed-citation>
</ref>
<ref id="CR5">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Botta</surname>
<given-names>F</given-names>
</name>
<name>
<surname>Santangelo</surname>
<given-names>V</given-names>
</name>
<name>
<surname>Raffone</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Sanabria</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Lupianez</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Belardinelli</surname>
<given-names>MO</given-names>
</name>
</person-group>
<article-title>Multisensory integration affects visuo-spatial working memory</article-title>
<source>J Exp Psychol Hum Percept</source>
<year>2011</year>
<volume>37</volume>
<fpage>1099</fpage>
<lpage>1109</lpage>
<pub-id pub-id-type="doi">10.1037/a0023513</pub-id>
<pub-id pub-id-type="pmid">21553989</pub-id>
</mixed-citation>
</ref>
<ref id="CR6">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Brainard</surname>
<given-names>DH</given-names>
</name>
</person-group>
<article-title>The psychophysics toolbox</article-title>
<source>Spat Vis</source>
<year>1997</year>
<volume>10</volume>
<fpage>433</fpage>
<lpage>436</lpage>
<pub-id pub-id-type="doi">10.1163/156856897X00357</pub-id>
<pub-id pub-id-type="pmid">9176952</pub-id>
</mixed-citation>
</ref>
<ref id="CR7">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Calvert</surname>
<given-names>GA</given-names>
</name>
<name>
<surname>Thesen</surname>
<given-names>T</given-names>
</name>
</person-group>
<article-title>Multisensory integration: methodological approaches and emerging principles in the human brain</article-title>
<source>J Physiol Paris</source>
<year>2004</year>
<volume>98</volume>
<fpage>191</fpage>
<lpage>205</lpage>
<pub-id pub-id-type="doi">10.1016/j.jphysparis.2004.03.018</pub-id>
<pub-id pub-id-type="pmid">15477032</pub-id>
</mixed-citation>
</ref>
<ref id="CR8">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Chambers</surname>
<given-names>CD</given-names>
</name>
<name>
<surname>Stokes</surname>
<given-names>MG</given-names>
</name>
<name>
<surname>Mattingley</surname>
<given-names>JB</given-names>
</name>
</person-group>
<article-title>Modality-specific control of strategic spatial attention in parietal cortex</article-title>
<source>Neuron</source>
<year>2004</year>
<volume>44</volume>
<fpage>925</fpage>
<lpage>930</lpage>
<pub-id pub-id-type="doi">10.1016/j.neuron.2004.12.009</pub-id>
<pub-id pub-id-type="pmid">15603736</pub-id>
</mixed-citation>
</ref>
<ref id="CR9">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Driver</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Spence</surname>
<given-names>C</given-names>
</name>
</person-group>
<article-title>Cross-modal links in spatial attention</article-title>
<source>Philos Trans R Soc B</source>
<year>1998</year>
<volume>353</volume>
<fpage>1319</fpage>
<lpage>1331</lpage>
<pub-id pub-id-type="doi">10.1098/rstb.1998.0286</pub-id>
</mixed-citation>
</ref>
<ref id="CR10">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Duncan</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Martens</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Ward</surname>
<given-names>R</given-names>
</name>
</person-group>
<article-title>Restricted attentional capacity within but not between sensory modalities</article-title>
<source>Nature</source>
<year>1997</year>
<volume>387</volume>
<fpage>809</fpage>
<lpage>810</lpage>
<pub-id pub-id-type="doi">10.1038/42947</pub-id>
</mixed-citation>
</ref>
<ref id="CR11">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ernst</surname>
<given-names>MO</given-names>
</name>
<name>
<surname>Banks</surname>
<given-names>MS</given-names>
</name>
</person-group>
<article-title>Humans integrate visual and haptic information in a statistically optimal fashion</article-title>
<source>Nature</source>
<year>2002</year>
<volume>415</volume>
<fpage>429</fpage>
<lpage>433</lpage>
<pub-id pub-id-type="doi">10.1038/415429a</pub-id>
<pub-id pub-id-type="pmid">11807554</pub-id>
</mixed-citation>
</ref>
<ref id="CR12">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ernst</surname>
<given-names>MO</given-names>
</name>
<name>
<surname>Bulthoff</surname>
<given-names>HH</given-names>
</name>
</person-group>
<article-title>Merging the senses into a robust percept</article-title>
<source>Trends Cogn Sci</source>
<year>2004</year>
<volume>8</volume>
<fpage>162</fpage>
<lpage>169</lpage>
<pub-id pub-id-type="doi">10.1016/j.tics.2004.02.002</pub-id>
<pub-id pub-id-type="pmid">15050512</pub-id>
</mixed-citation>
</ref>
<ref id="CR13">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Eskes</surname>
<given-names>GA</given-names>
</name>
<name>
<surname>Klein</surname>
<given-names>RM</given-names>
</name>
<name>
<surname>Dove</surname>
<given-names>MB</given-names>
</name>
<name>
<surname>Coolican</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Shore</surname>
<given-names>DI</given-names>
</name>
</person-group>
<article-title>Comparing temporal order judgments and choice reaction time tasks as indices of exogenous spatial cuing</article-title>
<source>J Neurosci Methods</source>
<year>2007</year>
<volume>166</volume>
<fpage>259</fpage>
<lpage>265</lpage>
<pub-id pub-id-type="doi">10.1016/j.jneumeth.2007.07.006</pub-id>
<pub-id pub-id-type="pmid">17889372</pub-id>
</mixed-citation>
</ref>
<ref id="CR14">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Farah</surname>
<given-names>MJ</given-names>
</name>
<name>
<surname>Wong</surname>
<given-names>AB</given-names>
</name>
<name>
<surname>Monheit</surname>
<given-names>MA</given-names>
</name>
<name>
<surname>Morrow</surname>
<given-names>LA</given-names>
</name>
</person-group>
<article-title>Parietal lobe mechanisms of spatial attention: modality-specific or supramodal?</article-title>
<source>Neuropsychologia</source>
<year>1989</year>
<volume>27</volume>
<fpage>461</fpage>
<lpage>470</lpage>
<pub-id pub-id-type="doi">10.1016/0028-3932(89)90051-1</pub-id>
<pub-id pub-id-type="pmid">2733819</pub-id>
</mixed-citation>
</ref>
<ref id="CR15">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Forster</surname>
<given-names>B</given-names>
</name>
<name>
<surname>Cavina-Pratesi</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Aglioti</surname>
<given-names>SM</given-names>
</name>
<name>
<surname>Berlucchi</surname>
<given-names>G</given-names>
</name>
</person-group>
<article-title>Redundant target effect and intersensory facilitation from visual-tactile interactions in simple reaction time</article-title>
<source>Exp Brain Res</source>
<year>2002</year>
<volume>143</volume>
<fpage>480</fpage>
<lpage>487</lpage>
<pub-id pub-id-type="doi">10.1007/s00221-002-1017-9</pub-id>
<pub-id pub-id-type="pmid">11914794</pub-id>
</mixed-citation>
</ref>
<ref id="CR16">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Frassinetti</surname>
<given-names>F</given-names>
</name>
<name>
<surname>Bolognini</surname>
<given-names>N</given-names>
</name>
<name>
<surname>Làdavas</surname>
<given-names>E</given-names>
</name>
</person-group>
<article-title>Enhancement of visual perception by crossmodal visuo-auditory interaction</article-title>
<source>Exp Brain Res</source>
<year>2002</year>
<volume>147</volume>
<fpage>332</fpage>
<lpage>343</lpage>
<pub-id pub-id-type="doi">10.1007/s00221-002-1262-y</pub-id>
<pub-id pub-id-type="pmid">12428141</pub-id>
</mixed-citation>
</ref>
<ref id="CR101">
<mixed-citation publication-type="other">Gaudrain E, Li S, Ban VS, Patterson RD (2009) The role of glottal pulse rate and vocal tract length in the perception of speaker identity. Proceedings of the 10th Annual Conference of the International Speech Communication Association (Interspeech 2009). ISSN 1990-9772. p148–151</mixed-citation>
</ref>
<ref id="CR17">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Gu</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Angelaki</surname>
<given-names>DE</given-names>
</name>
<name>
<surname>DeAngelis</surname>
<given-names>GC</given-names>
</name>
</person-group>
<article-title>Neural correlates of multi-sensory cue integration in macaque area MSTd</article-title>
<source>Nat Neurosci</source>
<year>2008</year>
<volume>11</volume>
<fpage>1201</fpage>
<lpage>1210</lpage>
<pub-id pub-id-type="doi">10.1038/nn.2191</pub-id>
<pub-id pub-id-type="pmid">18776893</pub-id>
</mixed-citation>
</ref>
<ref id="CR18">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Harrington</surname>
<given-names>LK</given-names>
</name>
<name>
<surname>Peck</surname>
<given-names>KP</given-names>
</name>
</person-group>
<article-title>Spatial disparity affects visual-auditory interactions in human sensorimotor processing</article-title>
<source>Exp Brain Res</source>
<year>1998</year>
<volume>122</volume>
<fpage>247</fpage>
<lpage>252</lpage>
<pub-id pub-id-type="doi">10.1007/s002210050512</pub-id>
<pub-id pub-id-type="pmid">9776523</pub-id>
</mixed-citation>
</ref>
<ref id="CR19">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hill</surname>
<given-names>NI</given-names>
</name>
<name>
<surname>Darwin</surname>
<given-names>CJ</given-names>
</name>
</person-group>
<article-title>Lateralization of a perturbed harmonic: effects of onset asynchrony and mistuning</article-title>
<source>J Acoust Soc Am</source>
<year>1996</year>
<volume>100</volume>
<fpage>2352</fpage>
<lpage>2364</lpage>
<pub-id pub-id-type="doi">10.1121/1.417945</pub-id>
<pub-id pub-id-type="pmid">9480301</pub-id>
</mixed-citation>
</ref>
<ref id="CR20">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ho</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Santangelo</surname>
<given-names>V</given-names>
</name>
<name>
<surname>Spence</surname>
<given-names>C</given-names>
</name>
</person-group>
<article-title>Multisensory warning signals: when spatial correspondence matters</article-title>
<source>Exp Brain Res</source>
<year>2009</year>
<volume>195</volume>
<fpage>261</fpage>
<lpage>272</lpage>
<pub-id pub-id-type="doi">10.1007/s00221-009-1778-5</pub-id>
<pub-id pub-id-type="pmid">19381621</pub-id>
</mixed-citation>
</ref>
<ref id="CR21">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Holmes</surname>
<given-names>NP</given-names>
</name>
</person-group>
<article-title>The law of inverse effectiveness in neurons and behaviour: multisensory integration versus normal variability</article-title>
<source>Neuropsychologia</source>
<year>2007</year>
<volume>45</volume>
<fpage>3340</fpage>
<lpage>3345</lpage>
<pub-id pub-id-type="doi">10.1016/j.neuropsychologia.2007.05.025</pub-id>
<pub-id pub-id-type="pmid">17663007</pub-id>
</mixed-citation>
</ref>
<ref id="CR22">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Holmes</surname>
<given-names>NP</given-names>
</name>
<name>
<surname>Spence</surname>
<given-names>C</given-names>
</name>
</person-group>
<article-title>Multisensory integration: space, time and superadditivity</article-title>
<source>Curr Biol</source>
<year>2005</year>
<volume>15</volume>
<fpage>R762</fpage>
<lpage>R764</lpage>
<pub-id pub-id-type="doi">10.1016/j.cub.2005.08.058</pub-id>
<pub-id pub-id-type="pmid">16169476</pub-id>
</mixed-citation>
</ref>
<ref id="CR23">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hukin</surname>
<given-names>RW</given-names>
</name>
<name>
<surname>Darwin</surname>
<given-names>CJ</given-names>
</name>
</person-group>
<article-title>Effects of contralateral presentation and of interaural time differences in segregating a harmonic from a vowel</article-title>
<source>J Acoust Soc Am</source>
<year>1995</year>
<volume>98</volume>
<fpage>1380</fpage>
<lpage>1387</lpage>
<pub-id pub-id-type="doi">10.1121/1.414348</pub-id>
</mixed-citation>
</ref>
<ref id="CR24">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kanabus</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Szelag</surname>
<given-names>E</given-names>
</name>
<name>
<surname>Rojek</surname>
<given-names>E</given-names>
</name>
<name>
<surname>Poppel</surname>
<given-names>E</given-names>
</name>
</person-group>
<article-title>Temporal order judgement for auditory and visual stimuli</article-title>
<source>Acta Neurobiol Exp</source>
<year>2002</year>
<volume>62</volume>
<fpage>263</fpage>
<lpage>270</lpage>
</mixed-citation>
</ref>
<ref id="CR25">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Kingdom</surname>
<given-names>AA</given-names>
</name>
<name>
<surname>Prins</surname>
<given-names>N</given-names>
</name>
</person-group>
<source>Psychophysics a practical introduction</source>
<year>2010</year>
<publisher-loc>London</publisher-loc>
<publisher-name>Academic Press</publisher-name>
</mixed-citation>
</ref>
<ref id="CR26">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Koelewijn</surname>
<given-names>T</given-names>
</name>
<name>
<surname>Bronkhorst</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Theeuwes</surname>
<given-names>J</given-names>
</name>
</person-group>
<article-title>Attention and the multiple stages of multisensory integration: a review of audiovisual studies</article-title>
<source>Acta Psychol</source>
<year>2010</year>
<volume>134</volume>
<fpage>372</fpage>
<lpage>384</lpage>
<pub-id pub-id-type="doi">10.1016/j.actpsy.2010.03.010</pub-id>
</mixed-citation>
</ref>
<ref id="CR27">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ma</surname>
<given-names>WJ</given-names>
</name>
<name>
<surname>Pouget</surname>
<given-names>A</given-names>
</name>
</person-group>
<article-title>Linking neurons to behavior in multisensory perception: a computational review</article-title>
<source>Brain Res</source>
<year>2008</year>
<volume>1242</volume>
<fpage>4</fpage>
<lpage>12</lpage>
<pub-id pub-id-type="doi">10.1016/j.brainres.2008.04.082</pub-id>
<pub-id pub-id-type="pmid">18602905</pub-id>
</mixed-citation>
</ref>
<ref id="CR28">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>McDonald</surname>
<given-names>JJ</given-names>
</name>
<name>
<surname>Teder-Salejarvi</surname>
<given-names>WA</given-names>
</name>
<name>
<surname>Russo</surname>
<given-names>F</given-names>
</name>
<name>
<surname>Hillyard</surname>
<given-names>SA</given-names>
</name>
</person-group>
<article-title>Neural basis of auditory-induced shifts in visual time-order perception</article-title>
<source>Nat Neurosci</source>
<year>2005</year>
<volume>8</volume>
<fpage>1197</fpage>
<lpage>1202</lpage>
<pub-id pub-id-type="doi">10.1038/nn1512</pub-id>
<pub-id pub-id-type="pmid">16056224</pub-id>
</mixed-citation>
</ref>
<ref id="CR29">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>McDonald</surname>
<given-names>JJ</given-names>
</name>
<name>
<surname>Teder-Sälejärvi</surname>
<given-names>WA</given-names>
</name>
<name>
<surname>Ward</surname>
<given-names>LM</given-names>
</name>
</person-group>
<article-title>Multisensory integration and crossmodal attention effects in the human brain</article-title>
<source>Science</source>
<year>2001</year>
<volume>292</volume>
<fpage>1791</fpage>
<lpage>1792</lpage>
<pub-id pub-id-type="doi">10.1126/science.292.5523.1791a</pub-id>
<pub-id pub-id-type="pmid">11397913</pub-id>
</mixed-citation>
</ref>
<ref id="CR30">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>McDonald</surname>
<given-names>JJ</given-names>
</name>
<name>
<surname>Ward</surname>
<given-names>LM</given-names>
</name>
</person-group>
<article-title>Spatial relevance determines facilitatory and inhibitory effects of auditory covert spatial orienting</article-title>
<source>J Exp Psychol Hum Percept</source>
<year>1999</year>
<volume>25</volume>
<fpage>1234</fpage>
<lpage>1252</lpage>
<pub-id pub-id-type="doi">10.1037/0096-1523.25.5.1234</pub-id>
</mixed-citation>
</ref>
<ref id="CR31">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>McFarland</surname>
<given-names>DJ</given-names>
</name>
<name>
<surname>Cacace</surname>
<given-names>AT</given-names>
</name>
<name>
<surname>Setzen</surname>
<given-names>G</given-names>
</name>
</person-group>
<article-title>Temporal-order discrimination for selected auditory and visual stimulus dimensions</article-title>
<source>J Speech Lang Hear Res</source>
<year>1998</year>
<volume>41</volume>
<fpage>300</fpage>
<lpage>314</lpage>
<pub-id pub-id-type="pmid">9570584</pub-id>
</mixed-citation>
</ref>
<ref id="CR32">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Meredith</surname>
<given-names>MA</given-names>
</name>
<name>
<surname>Stein</surname>
<given-names>BE</given-names>
</name>
</person-group>
<article-title>Interactions among converging sensory inputs in the superior colliculus</article-title>
<source>Science</source>
<year>1983</year>
<volume>221</volume>
<fpage>389</fpage>
<lpage>391</lpage>
<pub-id pub-id-type="doi">10.1126/science.6867718</pub-id>
<pub-id pub-id-type="pmid">6867718</pub-id>
</mixed-citation>
</ref>
<ref id="CR33">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Meyer</surname>
<given-names>DE</given-names>
</name>
<name>
<surname>Osman</surname>
<given-names>AM</given-names>
</name>
<name>
<surname>Irwin</surname>
<given-names>DE</given-names>
</name>
<name>
<surname>Yantis</surname>
<given-names>S</given-names>
</name>
</person-group>
<article-title>Modern mental chronometry</article-title>
<source>Biol Psychol</source>
<year>1988</year>
<volume>26</volume>
<fpage>3</fpage>
<lpage>67</lpage>
<pub-id pub-id-type="doi">10.1016/0301-0511(88)90013-0</pub-id>
<pub-id pub-id-type="pmid">3061480</pub-id>
</mixed-citation>
</ref>
<ref id="CR34">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Miller</surname>
<given-names>J</given-names>
</name>
</person-group>
<article-title>Timecourse of coactivation in bimodal divided attention</article-title>
<source>Percept Psychophys</source>
<year>1986</year>
<volume>40</volume>
<fpage>331</fpage>
<lpage>343</lpage>
<pub-id pub-id-type="doi">10.3758/BF03203025</pub-id>
<pub-id pub-id-type="pmid">3786102</pub-id>
</mixed-citation>
</ref>
<ref id="CR35">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Molholm</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Ritter</surname>
<given-names>W</given-names>
</name>
<name>
<surname>Murray</surname>
<given-names>MM</given-names>
</name>
<name>
<surname>Javitt</surname>
<given-names>DC</given-names>
</name>
<name>
<surname>Schroeder</surname>
<given-names>CE</given-names>
</name>
<name>
<surname>Foxe</surname>
<given-names>JJ</given-names>
</name>
</person-group>
<article-title>Multisensory auditory-visual interactions during early sensory processing in humans: a high-density electrical mapping study</article-title>
<source>Cogn Brain Res</source>
<year>2002</year>
<volume>14</volume>
<fpage>115</fpage>
<lpage>128</lpage>
<pub-id pub-id-type="doi">10.1016/S0926-6410(02)00066-6</pub-id>
</mixed-citation>
</ref>
<ref id="CR36">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Moore</surname>
<given-names>BCJ</given-names>
</name>
</person-group>
<source>An introduction to the psychology of hearing</source>
<year>2004</year>
<edition>5</edition>
<publisher-loc>London</publisher-loc>
<publisher-name>Elsevier Academic Press</publisher-name>
</mixed-citation>
</ref>
<ref id="CR37">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Mondor</surname>
<given-names>TA</given-names>
</name>
<name>
<surname>Amirault</surname>
<given-names>KJ</given-names>
</name>
</person-group>
<article-title>Effect of same- and different-modality spatial cues on auditory and visual target identification</article-title>
<source>J Exp Psychol Hum Percept</source>
<year>1998</year>
<volume>24</volume>
<fpage>745</fpage>
<lpage>755</lpage>
<pub-id pub-id-type="doi">10.1037/0096-1523.24.3.745</pub-id>
<pub-id pub-id-type="pmid">9627413</pub-id>
</mixed-citation>
</ref>
<ref id="CR38">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Morgan</surname>
<given-names>ML</given-names>
</name>
<name>
<surname>DeAngelis</surname>
<given-names>GC</given-names>
</name>
<name>
<surname>Angelaki</surname>
<given-names>DE</given-names>
</name>
</person-group>
<article-title>Multisensory integration in macaque visual cortex depends on cue reliability</article-title>
<source>Neuron</source>
<year>2008</year>
<volume>59</volume>
<fpage>662</fpage>
<lpage>673</lpage>
<pub-id pub-id-type="doi">10.1016/j.neuron.2008.06.024</pub-id>
<pub-id pub-id-type="pmid">18760701</pub-id>
</mixed-citation>
</ref>
<ref id="CR39">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Müller</surname>
<given-names>HJ</given-names>
</name>
<name>
<surname>Rabbitt</surname>
<given-names>PM</given-names>
</name>
</person-group>
<article-title>Reflexive and voluntary orienting of visual attention: time course of activation and resistance to interruption</article-title>
<source>J Exp Psychol Hum Percept</source>
<year>1989</year>
<volume>15</volume>
<fpage>315</fpage>
<lpage>330</lpage>
<pub-id pub-id-type="doi">10.1037/0096-1523.15.2.315</pub-id>
<pub-id pub-id-type="pmid">2525601</pub-id>
</mixed-citation>
</ref>
<ref id="CR40">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Mulligan</surname>
<given-names>RM</given-names>
</name>
<name>
<surname>Shaw</surname>
<given-names>ML</given-names>
</name>
</person-group>
<article-title>Multimodal signal detection: independent decisions vs. integration</article-title>
<source>Percept Psychophys</source>
<year>1980</year>
<volume>28</volume>
<fpage>471</fpage>
<lpage>478</lpage>
<pub-id pub-id-type="doi">10.3758/BF03204892</pub-id>
<pub-id pub-id-type="pmid">7208258</pub-id>
</mixed-citation>
</ref>
<ref id="CR42">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Roach</surname>
<given-names>NW</given-names>
</name>
<name>
<surname>Heron</surname>
<given-names>J</given-names>
</name>
<name>
<surname>McGraw</surname>
<given-names>PV</given-names>
</name>
</person-group>
<article-title>Resolving multisensory conflict: a strategy for balancing the costs and benefits of audio-visual integration</article-title>
<source>Proc R Soc B</source>
<year>2006</year>
<volume>273</volume>
<fpage>2159</fpage>
<lpage>2168</lpage>
</mixed-citation>
</ref>
<ref id="CR43">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Santangelo</surname>
<given-names>V</given-names>
</name>
<name>
<surname>Ho</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Spence</surname>
<given-names>C</given-names>
</name>
</person-group>
<article-title>Capturing spatial attention with multisensory cues</article-title>
<source>Psychon Bull Rev</source>
<year>2008</year>
<volume>15</volume>
<fpage>398</fpage>
<lpage>403</lpage>
<pub-id pub-id-type="doi">10.3758/PBR.15.2.398</pub-id>
<pub-id pub-id-type="pmid">18488658</pub-id>
</mixed-citation>
</ref>
<ref id="CR44">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Santangelo</surname>
<given-names>V</given-names>
</name>
<name>
<surname>Spence</surname>
<given-names>C</given-names>
</name>
</person-group>
<article-title>Multisensory cues capture spatial attention regardless of perceptual load</article-title>
<source>J Exp Psychol Hum Percept</source>
<year>2007</year>
<volume>33</volume>
<fpage>1311</fpage>
<lpage>1321</lpage>
<pub-id pub-id-type="doi">10.1037/0096-1523.33.6.1311</pub-id>
<pub-id pub-id-type="pmid">18085945</pub-id>
</mixed-citation>
</ref>
<ref id="CR45">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Santangelo</surname>
<given-names>V</given-names>
</name>
<name>
<surname>Spence</surname>
<given-names>C</given-names>
</name>
</person-group>
<article-title>Crossmodal exogenous orienting improves the accuracy of temporal order judgements</article-title>
<source>Exp Brain Res</source>
<year>2009</year>
<volume>194</volume>
<fpage>577</fpage>
<lpage>586</lpage>
<pub-id pub-id-type="doi">10.1007/s00221-009-1734-4</pub-id>
<pub-id pub-id-type="pmid">19242685</pub-id>
</mixed-citation>
</ref>
<ref id="CR46">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Santangelo</surname>
<given-names>V</given-names>
</name>
<name>
<surname>Van der Lubbe</surname>
<given-names>RHJ</given-names>
</name>
<name>
<surname>Olivetti Belardinelli</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Postma</surname>
<given-names>A</given-names>
</name>
</person-group>
<article-title>Spatial attention triggered by unimodal, crossmodal, and bimodal exogenous cues: a comparison of reflexive orienting mechanisms</article-title>
<source>Exp Brain Res</source>
<year>2006</year>
<volume>173</volume>
<fpage>40</fpage>
<lpage>48</lpage>
<pub-id pub-id-type="doi">10.1007/s00221-006-0361-6</pub-id>
<pub-id pub-id-type="pmid">16489435</pub-id>
</mixed-citation>
</ref>
<ref id="CR47">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Santangelo</surname>
<given-names>V</given-names>
</name>
<name>
<surname>Van der Lubbe</surname>
<given-names>RHJ</given-names>
</name>
<name>
<surname>Olivetti Belardinelli</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Postma</surname>
<given-names>A</given-names>
</name>
</person-group>
<article-title>Multisensory integration affects ERP components elicited by exogenous cues</article-title>
<source>Exp Brain Res</source>
<year>2008</year>
<volume>185</volume>
<fpage>269</fpage>
<lpage>277</lpage>
<pub-id pub-id-type="doi">10.1007/s00221-007-1151-5</pub-id>
<pub-id pub-id-type="pmid">17909764</pub-id>
</mixed-citation>
</ref>
<ref id="CR48">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Schneider</surname>
<given-names>KA</given-names>
</name>
<name>
<surname>Bavelier</surname>
<given-names>D</given-names>
</name>
</person-group>
<article-title>Components of visual prior entry</article-title>
<source>Cogn Psychol</source>
<year>2003</year>
<volume>47</volume>
<fpage>333</fpage>
<lpage>366</lpage>
<pub-id pub-id-type="doi">10.1016/S0010-0285(03)00035-5</pub-id>
<pub-id pub-id-type="pmid">14642288</pub-id>
</mixed-citation>
</ref>
<ref id="CR49">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Shore</surname>
<given-names>DI</given-names>
</name>
<name>
<surname>Spence</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Klein</surname>
<given-names>RM</given-names>
</name>
</person-group>
<article-title>Visual prior entry</article-title>
<source>Psychol Sci</source>
<year>2001</year>
<volume>12</volume>
<fpage>205</fpage>
<lpage>212</lpage>
<pub-id pub-id-type="doi">10.1111/1467-9280.00337</pub-id>
<pub-id pub-id-type="pmid">11437302</pub-id>
</mixed-citation>
</ref>
<ref id="CR50">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Sinnett</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Soto-Faraco</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Spence</surname>
<given-names>C</given-names>
</name>
</person-group>
<article-title>The co-occurrence of multisensory competition and facilitation</article-title>
<source>Acta Psychol</source>
<year>2008</year>
<volume>128</volume>
<fpage>153</fpage>
<lpage>161</lpage>
<pub-id pub-id-type="doi">10.1016/j.actpsy.2007.12.002</pub-id>
</mixed-citation>
</ref>
<ref id="CR51">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Spence</surname>
<given-names>C</given-names>
</name>
</person-group>
<article-title>Crossmodal spatial attention</article-title>
<source>Ann N Y Acad Sci</source>
<year>2010</year>
<volume>1191</volume>
<fpage>182</fpage>
<lpage>200</lpage>
<pub-id pub-id-type="doi">10.1111/j.1749-6632.2010.05440.x</pub-id>
<pub-id pub-id-type="pmid">20392281</pub-id>
</mixed-citation>
</ref>
<ref id="CR52">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Spence</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Parise</surname>
<given-names>C</given-names>
</name>
</person-group>
<article-title>Prior-entry: a review</article-title>
<source>Conscious Cogn</source>
<year>2010</year>
<volume>19</volume>
<fpage>364</fpage>
<lpage>379</lpage>
<pub-id pub-id-type="doi">10.1016/j.concog.2009.12.001</pub-id>
<pub-id pub-id-type="pmid">20056554</pub-id>
</mixed-citation>
</ref>
<ref id="CR53">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Spence</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Driver</surname>
<given-names>J</given-names>
</name>
</person-group>
<article-title>Audiovisual links in exogenous covert spatial orienting</article-title>
<source>Percept Psychophys</source>
<year>1997</year>
<volume>59</volume>
<fpage>1</fpage>
<lpage>22</lpage>
<pub-id pub-id-type="doi">10.3758/BF03206843</pub-id>
<pub-id pub-id-type="pmid">9038403</pub-id>
</mixed-citation>
</ref>
<ref id="CR54">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Spence</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Driver</surname>
<given-names>J</given-names>
</name>
</person-group>
<person-group person-group-type="editor">
<name>
<surname>Harris</surname>
<given-names>D</given-names>
</name>
</person-group>
<article-title>A new approach to the design of multimodal warning signals</article-title>
<source>Engineering psychology and cognitive ergonomics</source>
<year>1999</year>
<publisher-loc>Aldershot</publisher-loc>
<publisher-name>Ashgate Publishing</publisher-name>
<fpage>455</fpage>
<lpage>461</lpage>
</mixed-citation>
</ref>
<ref id="CR55">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Stein</surname>
<given-names>BE</given-names>
</name>
<name>
<surname>Stanford</surname>
<given-names>TR</given-names>
</name>
</person-group>
<article-title>Multisensory integration: current issues from the perspective of the single neuron</article-title>
<source>Nat Rev Neurosci</source>
<year>2008</year>
<volume>9</volume>
<fpage>255</fpage>
<lpage>266</lpage>
<pub-id pub-id-type="doi">10.1038/nrn2331</pub-id>
<pub-id pub-id-type="pmid">18354398</pub-id>
</mixed-citation>
</ref>
<ref id="CR56">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Stein</surname>
<given-names>BE</given-names>
</name>
<name>
<surname>Stanford</surname>
<given-names>TR</given-names>
</name>
<name>
<surname>Ramachandran</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Perrault</surname>
<given-names>TJ</given-names>
</name>
<name>
<surname>Rowland</surname>
<given-names>BA</given-names>
</name>
</person-group>
<article-title>Challenges in quantifying multisensory integration: alternative criteria, models, and inverse effectiveness</article-title>
<source>Exp Brain Res</source>
<year>2009</year>
<volume>198</volume>
<fpage>113</fpage>
<lpage>126</lpage>
<pub-id pub-id-type="doi">10.1007/s00221-009-1880-8</pub-id>
<pub-id pub-id-type="pmid">19551377</pub-id>
</mixed-citation>
</ref>
<ref id="CR57">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Stelmach</surname>
<given-names>LB</given-names>
</name>
<name>
<surname>Herdman</surname>
<given-names>CM</given-names>
</name>
</person-group>
<article-title>Directed attention and perception of temporal order</article-title>
<source>J Exp Psychol Hum Percept</source>
<year>1991</year>
<volume>17</volume>
<fpage>539</fpage>
<lpage>550</lpage>
<pub-id pub-id-type="doi">10.1037/0096-1523.17.2.539</pub-id>
<pub-id pub-id-type="pmid">1830091</pub-id>
</mixed-citation>
</ref>
<ref id="CR58">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Störmer</surname>
<given-names>VS</given-names>
</name>
<name>
<surname>McDonald</surname>
<given-names>JJ</given-names>
</name>
<name>
<surname>Hillyard</surname>
<given-names>SA</given-names>
</name>
</person-group>
<article-title>Cross-modal cueing of attention alters appearance and early cortical processing of visual stimuli</article-title>
<source>Proc Natl Acad Sci USA</source>
<year>2009</year>
<volume>106</volume>
<fpage>22456</fpage>
<lpage>22461</lpage>
<pub-id pub-id-type="doi">10.1073/pnas.0907573106</pub-id>
<pub-id pub-id-type="pmid">20007778</pub-id>
</mixed-citation>
</ref>
<ref id="CR59">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Talsma</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Senkowski</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Soto-Faraco</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Woldorff</surname>
<given-names>MG</given-names>
</name>
</person-group>
<article-title>The multifaceted interplay between attention and multisensory integration</article-title>
<source>Trends Cogn Sci</source>
<year>2010</year>
<volume>14</volume>
<fpage>400</fpage>
<lpage>410</lpage>
<pub-id pub-id-type="doi">10.1016/j.tics.2010.06.008</pub-id>
<pub-id pub-id-type="pmid">20675182</pub-id>
</mixed-citation>
</ref>
<ref id="CR60">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Teder-Sälejärvi</surname>
<given-names>WA</given-names>
</name>
<name>
<surname>Di Russo</surname>
<given-names>F</given-names>
</name>
<name>
<surname>McDonald</surname>
<given-names>JJ</given-names>
</name>
<name>
<surname>Hillyard</surname>
<given-names>SA</given-names>
</name>
</person-group>
<article-title>Effects of spatial congruity on audio-visual multimodal integration</article-title>
<source>J Cogn Neurosci</source>
<year>2005</year>
<volume>17</volume>
<fpage>1396</fpage>
<lpage>1409</lpage>
<pub-id pub-id-type="doi">10.1162/0898929054985383</pub-id>
<pub-id pub-id-type="pmid">16197693</pub-id>
</mixed-citation>
</ref>
<ref id="CR61">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Tollin</surname>
<given-names>DJ</given-names>
</name>
</person-group>
<article-title>The lateral superior olive: a functional role in sound source localization</article-title>
<source>Neuroscientist</source>
<year>2003</year>
<volume>9</volume>
<fpage>127</fpage>
<lpage>143</lpage>
<pub-id pub-id-type="doi">10.1177/1073858403252228</pub-id>
<pub-id pub-id-type="pmid">12708617</pub-id>
</mixed-citation>
</ref>
<ref id="CR102">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>van Eijk</surname>
<given-names>RLJ</given-names>
</name>
<name>
<surname>Kohlrausch</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Juola</surname>
<given-names>JF</given-names>
</name>
<name>
<surname>van de Par</surname>
<given-names>S</given-names>
</name>
</person-group>
<article-title>Audio-visual synchrony and temporal order judgements: effects of experimental method and stimulus type</article-title>
<source>Percept Psychophys</source>
<year>2008</year>
<volume>70</volume>
<fpage>955</fpage>
<lpage>968</lpage>
</mixed-citation>
</ref>
<ref id="CR62">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ward</surname>
<given-names>LM</given-names>
</name>
</person-group>
<article-title>Supramodal and modality-specific mechanisms for stimulus-driven shifts of auditory and visual attention</article-title>
<source>Can J Exp Psychol</source>
<year>1994</year>
<volume>48</volume>
<fpage>242</fpage>
<lpage>259</lpage>
<pub-id pub-id-type="pmid">8069284</pub-id>
</mixed-citation>
</ref>
<ref id="CR63">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Werner</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Noppeney</surname>
<given-names>U</given-names>
</name>
</person-group>
<article-title>Superadditive responses in superior temporal sulcus predict audiovisual benefits in object categorization</article-title>
<source>Cereb Cortex</source>
<year>2010</year>
<volume>20</volume>
<fpage>1829</fpage>
<lpage>1842</lpage>
<pub-id pub-id-type="doi">10.1093/cercor/bhp248</pub-id>
<pub-id pub-id-type="pmid">19923200</pub-id>
</mixed-citation>
</ref>
<ref id="CR64">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Yates</surname>
<given-names>MJ</given-names>
</name>
<name>
<surname>Nicholls</surname>
<given-names>MER</given-names>
</name>
</person-group>
<article-title>Somatosensory prior entry</article-title>
<source>Atten Percept Psychophys</source>
<year>2009</year>
<volume>71</volume>
<issue>4</issue>
<fpage>847</fpage>
<lpage>859</lpage>
<pub-id pub-id-type="doi">10.3758/APP.71.4.847</pub-id>
<pub-id pub-id-type="pmid">19429963</pub-id>
</mixed-citation>
</ref>
<ref id="CR65">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Zampini</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Shore</surname>
<given-names>DI</given-names>
</name>
<name>
<surname>Spence</surname>
<given-names>C</given-names>
</name>
</person-group>
<article-title>Audiovisual prior entry</article-title>
<source>Neurosci Lett</source>
<year>2005</year>
<volume>381</volume>
<fpage>217</fpage>
<lpage>222</lpage>
<pub-id pub-id-type="doi">10.1016/j.neulet.2005.01.085</pub-id>
<pub-id pub-id-type="pmid">15896473</pub-id>
</mixed-citation>
</ref>
<ref id="CR66">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Zimmer</surname>
<given-names>U</given-names>
</name>
<name>
<surname>Macaluso</surname>
<given-names>E</given-names>
</name>
</person-group>
<article-title>Processing of multisensory spatial congruency can be dissociated from working memory and visuo-spatial attention</article-title>
<source>Eur J Neurosci</source>
<year>2007</year>
<volume>26</volume>
<fpage>1681</fpage>
<lpage>1691</lpage>
<pub-id pub-id-type="doi">10.1111/j.1460-9568.2007.05784.x</pub-id>
<pub-id pub-id-type="pmid">17880400</pub-id>
</mixed-citation>
</ref>
</ref-list>
</back>
</pmc>
</record>

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/HapticV1/Data/Pmc/Curation
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 000786 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/Pmc/Curation/biblio.hfd -nk 000786 | SxmlIndent | more

To add a link to this page in the Wicri network

{{Explor lien
   |wiki=    Ticri/CIDE
   |area=    HapticV1
   |flux=    Pmc
   |étape=   Curation
   |type=    RBID
   |clé=     PMC:3442165
   |texte=   Evidence for multisensory integration in the elicitation of prior entry by bimodal cues
}}

To generate wiki pages

HfdIndexSelect -h $EXPLOR_AREA/Data/Pmc/Curation/RBID.i   -Sk "pubmed:22975896" \
       | HfdSelect -Kh $EXPLOR_AREA/Data/Pmc/Curation/biblio.hfd   \
       | NlmPubMed2Wicri -a HapticV1 

Wicri

This area was generated with Dilib version V0.6.23.
Data generation: Mon Jun 13 01:09:46 2016. Site generation: Wed Mar 6 09:54:07 2024