Exploration server on haptic devices


Audition and vision share spatial attentional resources, yet attentional load does not disrupt audiovisual integration

Internal identifier: 000203 (Pmc/Curation); previous: 000202; next: 000204


Authors: Basil Wahn [Germany]; Peter König [Germany]

Source:
RBID: PMC:4518141

Abstract

Humans continuously receive and integrate information from several sensory modalities. However, attentional resources limit the amount of information that can be processed. It is not yet clear how attentional resources and multisensory processing are interrelated. Specifically, the following questions arise: (1) Are there distinct spatial attentional resources for each sensory modality? and (2) Does attentional load affect multisensory integration? We investigated these questions using a dual task paradigm: participants performed two spatial tasks (a multiple object tracking task and a localization task), either separately (single task condition) or simultaneously (dual task condition). In the multiple object tracking task, participants visually tracked a small subset of several randomly moving objects. In the localization task, participants received either visual, auditory, or redundant visual and auditory location cues. In the dual task condition, we found a substantial decrease in participants' performance relative to the results of the single task condition. Importantly, participants performed equally well in the dual task condition regardless of the location cues' modality. This result suggests that having spatial information coming from different modalities does not facilitate performance, thereby indicating shared spatial attentional resources for the auditory and visual modality. Furthermore, we found that participants integrated redundant multisensory information similarly even when they experienced additional attentional load in the dual task condition. Overall, findings suggest that (1) visual and auditory spatial attentional resources are shared and that (2) audiovisual integration of spatial information occurs in a pre-attentive processing stage.


URL:
DOI: 10.3389/fpsyg.2015.01084
PubMed: 26284008
PubMed Central: 4518141

Links to previous steps (curation, corpus...)


Links to Exploration step

PMC:4518141

The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">Audition and vision share spatial attentional resources, yet attentional load does not disrupt audiovisual integration</title>
<author>
<name sortKey="Wahn, Basil" sort="Wahn, Basil" uniqKey="Wahn B" first="Basil" last="Wahn">Basil Wahn</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<institution>Neurobiopsychology, Institute of Cognitive Science, Universität Osnabrück</institution>
<country>Osnabrück, Germany</country>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Konig, Peter" sort="Konig, Peter" uniqKey="Konig P" first="Peter" last="König">Peter König</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<institution>Neurobiopsychology, Institute of Cognitive Science, Universität Osnabrück</institution>
<country>Osnabrück, Germany</country>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="aff2">
<institution>Department of Neurophysiology and Pathophysiology, Center of Experimental Medicine, University Medical Center Hamburg-Eppendorf</institution>
<country>Hamburg, Germany</country>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PMC</idno>
<idno type="pmid">26284008</idno>
<idno type="pmc">4518141</idno>
<idno type="url">http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4518141</idno>
<idno type="RBID">PMC:4518141</idno>
<idno type="doi">10.3389/fpsyg.2015.01084</idno>
<date when="2015">2015</date>
<idno type="wicri:Area/Pmc/Corpus">000203</idno>
<idno type="wicri:Area/Pmc/Curation">000203</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a" type="main">Audition and vision share spatial attentional resources, yet attentional load does not disrupt audiovisual integration</title>
<author>
<name sortKey="Wahn, Basil" sort="Wahn, Basil" uniqKey="Wahn B" first="Basil" last="Wahn">Basil Wahn</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<institution>Neurobiopsychology, Institute of Cognitive Science, Universität Osnabrück</institution>
<country>Osnabrück, Germany</country>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Konig, Peter" sort="Konig, Peter" uniqKey="Konig P" first="Peter" last="König">Peter König</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<institution>Neurobiopsychology, Institute of Cognitive Science, Universität Osnabrück</institution>
<country>Osnabrück, Germany</country>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="aff2">
<institution>Department of Neurophysiology and Pathophysiology, Center of Experimental Medicine, University Medical Center Hamburg-Eppendorf</institution>
<country>Hamburg, Germany</country>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
</analytic>
<series>
<title level="j">Frontiers in Psychology</title>
<idno type="eISSN">1664-1078</idno>
<imprint>
<date when="2015">2015</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass></textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">
<p>Humans continuously receive and integrate information from several sensory modalities. However, attentional resources limit the amount of information that can be processed. It is not yet clear how attentional resources and multisensory processing are interrelated. Specifically, the following questions arise: (1) Are there distinct spatial attentional resources for each sensory modality? and (2) Does attentional load affect multisensory integration? We investigated these questions using a dual task paradigm: participants performed two spatial tasks (a multiple object tracking task and a localization task), either separately (single task condition) or simultaneously (dual task condition). In the multiple object tracking task, participants visually tracked a small subset of several randomly moving objects. In the localization task, participants received either visual, auditory, or redundant visual and auditory location cues. In the dual task condition, we found a substantial decrease in participants' performance relative to the results of the single task condition. Importantly, participants performed equally well in the dual task condition regardless of the location cues' modality. This result suggests that having spatial information coming from different modalities does not facilitate performance, thereby indicating shared spatial attentional resources for the auditory and visual modality. Furthermore, we found that participants integrated redundant multisensory information similarly even when they experienced additional attentional load in the dual task condition. Overall, findings suggest that (1) visual and auditory spatial attentional resources are shared and that (2) audiovisual integration of spatial information occurs in a pre-attentive processing stage.</p>
</div>
</front>
<back>
<div1 type="bibliography">
<listBibl>
<biblStruct>
<analytic>
<author>
<name sortKey="Ahveninen, J" uniqKey="Ahveninen J">J. Ahveninen</name>
</author>
<author>
<name sortKey="J Skel Inen, I P" uniqKey="J Skel Inen I">I. P. Jääskeläinen</name>
</author>
<author>
<name sortKey="Raij, T" uniqKey="Raij T">T. Raij</name>
</author>
<author>
<name sortKey="Bonmassar, G" uniqKey="Bonmassar G">G. Bonmassar</name>
</author>
<author>
<name sortKey="Devore, S" uniqKey="Devore S">S. Devore</name>
</author>
<author>
<name sortKey="H M L Inen, M" uniqKey="H M L Inen M">M. Hämäläinen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Alais, D" uniqKey="Alais D">D. Alais</name>
</author>
<author>
<name sortKey="Morrone, C" uniqKey="Morrone C">C. Morrone</name>
</author>
<author>
<name sortKey="Burr, D" uniqKey="Burr D">D. Burr</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Alsius, A" uniqKey="Alsius A">A. Alsius</name>
</author>
<author>
<name sortKey="Navarra, J" uniqKey="Navarra J">J. Navarra</name>
</author>
<author>
<name sortKey="Campbell, R" uniqKey="Campbell R">R. Campbell</name>
</author>
<author>
<name sortKey="Soto Faraco, S" uniqKey="Soto Faraco S">S. Soto-Faraco</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Alsius, A" uniqKey="Alsius A">A. Alsius</name>
</author>
<author>
<name sortKey="Navarra, J" uniqKey="Navarra J">J. Navarra</name>
</author>
<author>
<name sortKey="Soto Faraco, S" uniqKey="Soto Faraco S">S. Soto-Faraco</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Alvarez, G A" uniqKey="Alvarez G">G. A. Alvarez</name>
</author>
<author>
<name sortKey="Franconeri, S L" uniqKey="Franconeri S">S. L. Franconeri</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Amedi, A" uniqKey="Amedi A">A. Amedi</name>
</author>
<author>
<name sortKey="Malach, R" uniqKey="Malach R">R. Malach</name>
</author>
<author>
<name sortKey="Hendler, T" uniqKey="Hendler T">T. Hendler</name>
</author>
<author>
<name sortKey="Peled, S" uniqKey="Peled S">S. Peled</name>
</author>
<author>
<name sortKey="Zohary, E" uniqKey="Zohary E">E. Zohary</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Arnell, K M" uniqKey="Arnell K">K. M. Arnell</name>
</author>
<author>
<name sortKey="Jenkins, R" uniqKey="Jenkins R">R. Jenkins</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Arnell, K M" uniqKey="Arnell K">K. M. Arnell</name>
</author>
<author>
<name sortKey="Larson, J M" uniqKey="Larson J">J. M. Larson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Arrighi, R" uniqKey="Arrighi R">R. Arrighi</name>
</author>
<author>
<name sortKey="Lunardi, R" uniqKey="Lunardi R">R. Lunardi</name>
</author>
<author>
<name sortKey="Burr, D" uniqKey="Burr D">D. Burr</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bates, D" uniqKey="Bates D">D. Bates</name>
</author>
<author>
<name sortKey="Maechler, M" uniqKey="Maechler M">M. Maechler</name>
</author>
<author>
<name sortKey="Bolker, B" uniqKey="Bolker B">B. Bolker</name>
</author>
<author>
<name sortKey="Walker, S" uniqKey="Walker S">S. Walker</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bertelson, P" uniqKey="Bertelson P">P. Bertelson</name>
</author>
<author>
<name sortKey="Vroomen, J" uniqKey="Vroomen J">J. Vroomen</name>
</author>
<author>
<name sortKey="De Gelder, B" uniqKey="De Gelder B">B. De Gelder</name>
</author>
<author>
<name sortKey="Driver, J" uniqKey="Driver J">J. Driver</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bonnel, A M" uniqKey="Bonnel A">A.-M. Bonnel</name>
</author>
<author>
<name sortKey="Prinzmetal, W" uniqKey="Prinzmetal W">W. Prinzmetal</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bruin, J" uniqKey="Bruin J">J. Bruin</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Chan, J S" uniqKey="Chan J">J. S. Chan</name>
</author>
<author>
<name sortKey="Newell, F N" uniqKey="Newell F">F. N. Newell</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Chun, M M" uniqKey="Chun M">M. M. Chun</name>
</author>
<author>
<name sortKey="Golomb, J D" uniqKey="Golomb J">J. D. Golomb</name>
</author>
<author>
<name sortKey="Turk Browne, N B" uniqKey="Turk Browne N">N. B. Turk-Browne</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Coren, S" uniqKey="Coren S">S. Coren</name>
</author>
<author>
<name sortKey="Ward, L M" uniqKey="Ward L">L. M. Ward</name>
</author>
<author>
<name sortKey="Enns, J T" uniqKey="Enns J">J. T. Enns</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Duncan, J" uniqKey="Duncan J">J. Duncan</name>
</author>
<author>
<name sortKey="Martens, S" uniqKey="Martens S">S. Martens</name>
</author>
<author>
<name sortKey="Ward, R" uniqKey="Ward R">R. Ward</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ernst, M O" uniqKey="Ernst M">M. O. Ernst</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ernst, M O" uniqKey="Ernst M">M. O. Ernst</name>
</author>
<author>
<name sortKey="Bulthoff, H H" uniqKey="Bulthoff H">H. H. Bülthoff</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Goschl, F" uniqKey="Goschl F">F. Göschl</name>
</author>
<author>
<name sortKey="Engel, A K" uniqKey="Engel A">A. K. Engel</name>
</author>
<author>
<name sortKey="Friese, U" uniqKey="Friese U">U. Friese</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Goschl, F" uniqKey="Goschl F">F. Göschl</name>
</author>
<author>
<name sortKey="Friese, U" uniqKey="Friese U">U. Friese</name>
</author>
<author>
<name sortKey="Daume, J" uniqKey="Daume J">J. Daume</name>
</author>
<author>
<name sortKey="Konig, P" uniqKey="Konig P">P. König</name>
</author>
<author>
<name sortKey="Engel, A K" uniqKey="Engel A">A. K. Engel</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Green, C S" uniqKey="Green C">C. S. Green</name>
</author>
<author>
<name sortKey="Bavelier, D" uniqKey="Bavelier D">D. Bavelier</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hein, G" uniqKey="Hein G">G. Hein</name>
</author>
<author>
<name sortKey="Parr, A" uniqKey="Parr A">A. Parr</name>
</author>
<author>
<name sortKey="Duncan, J" uniqKey="Duncan J">J. Duncan</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hillstrom, A P" uniqKey="Hillstrom A">A. P. Hillstrom</name>
</author>
<author>
<name sortKey="Shapiro, K L" uniqKey="Shapiro K">K. L. Shapiro</name>
</author>
<author>
<name sortKey="Spence, C" uniqKey="Spence C">C. Spence</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="James, W" uniqKey="James W">W. James</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Jolicoeur, P" uniqKey="Jolicoeur P">P. Jolicoeur</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kietzmann, T" uniqKey="Kietzmann T">T. Kietzmann</name>
</author>
<author>
<name sortKey="Wahn, B" uniqKey="Wahn B">B. Wahn</name>
</author>
<author>
<name sortKey="Konig, P" uniqKey="Konig P">P. König</name>
</author>
<author>
<name sortKey="Tong, F" uniqKey="Tong F">F. Tong</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kietzmann, T C" uniqKey="Kietzmann T">T. C. Kietzmann</name>
</author>
<author>
<name sortKey="Swisher, J D" uniqKey="Swisher J">J. D. Swisher</name>
</author>
<author>
<name sortKey="Konig, P" uniqKey="Konig P">P. König</name>
</author>
<author>
<name sortKey="Tong, F" uniqKey="Tong F">F. Tong</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Killian, N J" uniqKey="Killian N">N. J. Killian</name>
</author>
<author>
<name sortKey="Jutras, M J" uniqKey="Jutras M">M. J. Jutras</name>
</author>
<author>
<name sortKey="Buffalo, E A" uniqKey="Buffalo E">E. A. Buffalo</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Koelewijn, T" uniqKey="Koelewijn T">T. Koelewijn</name>
</author>
<author>
<name sortKey="Bronkhorst, A" uniqKey="Bronkhorst A">A. Bronkhorst</name>
</author>
<author>
<name sortKey="Theeuwes, J" uniqKey="Theeuwes J">J. Theeuwes</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Laberge, D" uniqKey="Laberge D">D. LaBerge</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Leifeld, P" uniqKey="Leifeld P">P. Leifeld</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Livingstone, M" uniqKey="Livingstone M">M. Livingstone</name>
</author>
<author>
<name sortKey="Hubel, D" uniqKey="Hubel D">D. Hubel</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Maeder, P P" uniqKey="Maeder P">P. P. Maeder</name>
</author>
<author>
<name sortKey="Meuli, R A" uniqKey="Meuli R">R. A. Meuli</name>
</author>
<author>
<name sortKey="Adriani, M" uniqKey="Adriani M">M. Adriani</name>
</author>
<author>
<name sortKey="Bellmann, A" uniqKey="Bellmann A">A. Bellmann</name>
</author>
<author>
<name sortKey="Fornari, E" uniqKey="Fornari E">E. Fornari</name>
</author>
<author>
<name sortKey="Thiran, J P" uniqKey="Thiran J">J.-P. Thiran</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Marois, R" uniqKey="Marois R">R. Marois</name>
</author>
<author>
<name sortKey="Ivanoff, J" uniqKey="Ivanoff J">J. Ivanoff</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mcgurk, H" uniqKey="Mcgurk H">H. McGurk</name>
</author>
<author>
<name sortKey="Macdonald, J" uniqKey="Macdonald J">J. MacDonald</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mozolic, J L" uniqKey="Mozolic J">J. L. Mozolic</name>
</author>
<author>
<name sortKey="Hugenschmidt, C E" uniqKey="Hugenschmidt C">C. E. Hugenschmidt</name>
</author>
<author>
<name sortKey="Peiffer, A M" uniqKey="Peiffer A">A. M. Peiffer</name>
</author>
<author>
<name sortKey="Laurienti, P J" uniqKey="Laurienti P">P. J. Laurienti</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Navarra, J" uniqKey="Navarra J">J. Navarra</name>
</author>
<author>
<name sortKey="Alsius, A" uniqKey="Alsius A">A. Alsius</name>
</author>
<author>
<name sortKey="Soto Faraco, S" uniqKey="Soto Faraco S">S. Soto-Faraco</name>
</author>
<author>
<name sortKey="Spence, C" uniqKey="Spence C">C. Spence</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Potter, M C" uniqKey="Potter M">M. C. Potter</name>
</author>
<author>
<name sortKey="Chun, M M" uniqKey="Chun M">M. M. Chun</name>
</author>
<author>
<name sortKey="Banks, B S" uniqKey="Banks B">B. S. Banks</name>
</author>
<author>
<name sortKey="Muckenhoupt, M" uniqKey="Muckenhoupt M">M. Muckenhoupt</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Pylyshyn, Z W" uniqKey="Pylyshyn Z">Z. W. Pylyshyn</name>
</author>
<author>
<name sortKey="Storm, R W" uniqKey="Storm R">R. W. Storm</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Reed, C L" uniqKey="Reed C">C. L. Reed</name>
</author>
<author>
<name sortKey="Klatzky, R L" uniqKey="Klatzky R">R. L. Klatzky</name>
</author>
<author>
<name sortKey="Halgren, E" uniqKey="Halgren E">E. Halgren</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Soto Faraco, S" uniqKey="Soto Faraco S">S. Soto-Faraco</name>
</author>
<author>
<name sortKey="Navarra, J" uniqKey="Navarra J">J. Navarra</name>
</author>
<author>
<name sortKey="Alsius, A" uniqKey="Alsius A">A. Alsius</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Soto Faraco, S" uniqKey="Soto Faraco S">S. Soto-Faraco</name>
</author>
<author>
<name sortKey="Spence, C" uniqKey="Spence C">C. Spence</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Soto Faraco, S" uniqKey="Soto Faraco S">S. Soto-Faraco</name>
</author>
<author>
<name sortKey="Spence, C" uniqKey="Spence C">C. Spence</name>
</author>
<author>
<name sortKey="Fairbank, K" uniqKey="Fairbank K">K. Fairbank</name>
</author>
<author>
<name sortKey="Kingstone, A" uniqKey="Kingstone A">A. Kingstone</name>
</author>
<author>
<name sortKey="Hillstrom, A P" uniqKey="Hillstrom A">A. P. Hillstrom</name>
</author>
<author>
<name sortKey="Shapiro, K" uniqKey="Shapiro K">K. Shapiro</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Spence, C" uniqKey="Spence C">C. Spence</name>
</author>
<author>
<name sortKey="Pavani, F" uniqKey="Pavani F">F. Pavani</name>
</author>
<author>
<name sortKey="Driver, J" uniqKey="Driver J">J. Driver</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Talsma, D" uniqKey="Talsma D">D. Talsma</name>
</author>
<author>
<name sortKey="Doty, T J" uniqKey="Doty T">T. J. Doty</name>
</author>
<author>
<name sortKey="Strowd, R" uniqKey="Strowd R">R. Strowd</name>
</author>
<author>
<name sortKey="Woldorff, M G" uniqKey="Woldorff M">M. G. Woldorff</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Talsma, D" uniqKey="Talsma D">D. Talsma</name>
</author>
<author>
<name sortKey="Doty, T J" uniqKey="Doty T">T. J. Doty</name>
</author>
<author>
<name sortKey="Woldorff, M G" uniqKey="Woldorff M">M. G. Woldorff</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Talsma, D" uniqKey="Talsma D">D. Talsma</name>
</author>
<author>
<name sortKey="Woldorff, M G" uniqKey="Woldorff M">M. G. Woldorff</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Tremblay, S" uniqKey="Tremblay S">S. Tremblay</name>
</author>
<author>
<name sortKey="Vachon, F" uniqKey="Vachon F">F. Vachon</name>
</author>
<author>
<name sortKey="Jones, D M" uniqKey="Jones D">D. M. Jones</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Twisk, J W" uniqKey="Twisk J">J. W. Twisk</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Van Der Burg, E" uniqKey="Van Der Burg E">E. van der Burg</name>
</author>
<author>
<name sortKey="Olivers, C N" uniqKey="Olivers C">C. N. Olivers</name>
</author>
<author>
<name sortKey="Bronkhorst, A W" uniqKey="Bronkhorst A">A. W. Bronkhorst</name>
</author>
<author>
<name sortKey="Theeuwes, J" uniqKey="Theeuwes J">J. Theeuwes</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Van Der Burg, E" uniqKey="Van Der Burg E">E. van der Burg</name>
</author>
<author>
<name sortKey="Olivers, C N L" uniqKey="Olivers C">C. N. L. Olivers</name>
</author>
<author>
<name sortKey="Bronkhorst, A W" uniqKey="Bronkhorst A">A. W. Bronkhorst</name>
</author>
<author>
<name sortKey="Koelewijn, T" uniqKey="Koelewijn T">T. Koelewijn</name>
</author>
<author>
<name sortKey="Theeuwes, J" uniqKey="Theeuwes J">J. Theeuwes</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Vroomen, J" uniqKey="Vroomen J">J. Vroomen</name>
</author>
<author>
<name sortKey="Bertelson, P" uniqKey="Bertelson P">P. Bertelson</name>
</author>
<author>
<name sortKey="De Gelder, B" uniqKey="De Gelder B">B. De Gelder</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Wahn, B" uniqKey="Wahn B">B. Wahn</name>
</author>
<author>
<name sortKey="Konig, P" uniqKey="Konig P">P. König</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Walton, M" uniqKey="Walton M">M. Walton</name>
</author>
<author>
<name sortKey="Spence, C" uniqKey="Spence C">C. Spence</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Wickham, H" uniqKey="Wickham H">H. Wickham</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Wilming, N" uniqKey="Wilming N">N. Wilming</name>
</author>
<author>
<name sortKey="Konig, P" uniqKey="Konig P">P. König</name>
</author>
<author>
<name sortKey="Buffalo, E A" uniqKey="Buffalo E">E. A. Buffalo</name>
</author>
</analytic>
</biblStruct>
</listBibl>
</div1>
</back>
</TEI>
<pmc article-type="research-article">
<pmc-dir>properties open_access</pmc-dir>
<front>
<journal-meta>
<journal-id journal-id-type="nlm-ta">Front Psychol</journal-id>
<journal-id journal-id-type="iso-abbrev">Front Psychol</journal-id>
<journal-id journal-id-type="publisher-id">Front. Psychol.</journal-id>
<journal-title-group>
<journal-title>Frontiers in Psychology</journal-title>
</journal-title-group>
<issn pub-type="epub">1664-1078</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="pmid">26284008</article-id>
<article-id pub-id-type="pmc">4518141</article-id>
<article-id pub-id-type="doi">10.3389/fpsyg.2015.01084</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Psychology</subject>
<subj-group>
<subject>Original Research</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Audition and vision share spatial attentional resources, yet attentional load does not disrupt audiovisual integration</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name>
<surname>Wahn</surname>
<given-names>Basil</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
<xref ref-type="author-notes" rid="fn001">
<sup>*</sup>
</xref>
<uri xlink:type="simple" xlink:href="http://loop.frontiersin.org/people/244640/overview"></uri>
</contrib>
<contrib contrib-type="author">
<name>
<surname>König</surname>
<given-names>Peter</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
<xref ref-type="aff" rid="aff2">
<sup>2</sup>
</xref>
<uri xlink:type="simple" xlink:href="http://loop.frontiersin.org/people/269/overview"></uri>
</contrib>
</contrib-group>
<aff id="aff1">
<sup>1</sup>
<institution>Neurobiopsychology, Institute of Cognitive Science, Universität Osnabrück</institution>
<country>Osnabrück, Germany</country>
</aff>
<aff id="aff2">
<sup>2</sup>
<institution>Department of Neurophysiology and Pathophysiology, Center of Experimental Medicine, University Medical Center Hamburg-Eppendorf</institution>
<country>Hamburg, Germany</country>
</aff>
<author-notes>
<fn fn-type="edited-by">
<p>Edited by: Elia Formisano, Maastricht University, Netherlands</p>
</fn>
<fn fn-type="edited-by">
<p>Reviewed by: Jason Chan, University College Cork, Ireland; Antonia Thelen, Vanderbilt University, USA</p>
</fn>
<corresp id="fn001">*Correspondence: Basil Wahn, Neurobiopsychology, Institute of Cognitive Science, Universität Osnabrück, Albrechtstr. 28, 49069 Osnabrück, Germany
<email xlink:type="simple">bwahn@uos.de</email>
</corresp>
<fn fn-type="other" id="fn002">
<p>This article was submitted to Perception Science, a section of the journal Frontiers in Psychology</p>
</fn>
</author-notes>
<pub-date pub-type="epub">
<day>29</day>
<month>7</month>
<year>2015</year>
</pub-date>
<pub-date pub-type="collection">
<year>2015</year>
</pub-date>
<volume>6</volume>
<elocation-id>1084</elocation-id>
<history>
<date date-type="received">
<day>11</day>
<month>6</month>
<year>2015</year>
</date>
<date date-type="accepted">
<day>14</day>
<month>7</month>
<year>2015</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright © 2015 Wahn and König.</copyright-statement>
<copyright-year>2015</copyright-year>
<copyright-holder>Wahn and König</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/">
<license-p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</license-p>
</license>
</permissions>
<abstract>
<p>Humans continuously receive and integrate information from several sensory modalities. However, attentional resources limit the amount of information that can be processed. It is not yet clear how attentional resources and multisensory processing are interrelated. Specifically, the following questions arise: (1) Are there distinct spatial attentional resources for each sensory modality? and (2) Does attentional load affect multisensory integration? We investigated these questions using a dual task paradigm: participants performed two spatial tasks (a multiple object tracking task and a localization task), either separately (single task condition) or simultaneously (dual task condition). In the multiple object tracking task, participants visually tracked a small subset of several randomly moving objects. In the localization task, participants received either visual, auditory, or redundant visual and auditory location cues. In the dual task condition, we found a substantial decrease in participants' performance relative to the results of the single task condition. Importantly, participants performed equally well in the dual task condition regardless of the location cues' modality. This result suggests that having spatial information coming from different modalities does not facilitate performance, thereby indicating shared spatial attentional resources for the auditory and visual modality. Furthermore, we found that participants integrated redundant multisensory information similarly even when they experienced additional attentional load in the dual task condition. Overall, findings suggest that (1) visual and auditory spatial attentional resources are shared and that (2) audiovisual integration of spatial information occurs in a pre-attentive processing stage.</p>
</abstract>
<kwd-group>
<kwd>attentional load</kwd>
<kwd>multisensory integration</kwd>
<kwd>auditory display</kwd>
<kwd>vision</kwd>
<kwd>audition</kwd>
<kwd>attentional resources</kwd>
<kwd>multiple object tracking</kwd>
</kwd-group>
<funding-group>
<award-group>
<funding-source id="cn001">H2020-FETPROACT-2014</funding-source>
<award-id rid="cn001">H2020</award-id>
</award-group>
<award-group>
<funding-source id="cn002">socSMCs</funding-source>
<award-id rid="cn002">641321</award-id>
</award-group>
<award-group>
<funding-source id="cn003">eSMCs</funding-source>
<award-id rid="cn003">FP7-ICT-270212</award-id>
</award-group>
<award-group>
<funding-source id="cn004">MULTISENSE</funding-source>
<award-id rid="cn004">ERC-2010-AdG #269716</award-id>
</award-group>
</funding-group>
<counts>
<fig-count count="4"></fig-count>
<table-count count="2"></table-count>
<equation-count count="0"></equation-count>
<ref-count count="57"></ref-count>
<page-count count="12"></page-count>
<word-count count="9259"></word-count>
</counts>
</article-meta>
</front>
<body>
<sec id="s1">
<title>1. Introduction</title>
<p>From all our senses, we continuously receive far more information than can be effectively processed. Via a process called “attention” (James,
<xref rid="B25" ref-type="bibr">1890</xref>
; Chun et al.,
<xref rid="B15" ref-type="bibr">2011</xref>
), we select information that is relevant for our current situation. In the present study, we investigate the relation between attention and multisensory processes. In particular, we investigate whether attentional processing draws from separate pools of attentional resources for each sensory modality and to what extent attentional resources interact with multisensory integration processes.</p>
<p>Regarding the first question, it has been shown that the amount of information that can be attended at once is limited (Marois and Ivanoff,
<xref rid="B35" ref-type="bibr">2005</xref>
; Alvarez and Franconeri,
<xref rid="B5" ref-type="bibr">2007</xref>
). Attentional limitations were found in hearing (Tremblay et al.,
<xref rid="B49" ref-type="bibr">2005</xref>
), vision (Potter et al.,
<xref rid="B39" ref-type="bibr">1998</xref>
), and haptics (Hillstrom et al.,
<xref rid="B24" ref-type="bibr">2002</xref>
). The question of whether attentional limitations are specific to each sensory modality or whether there is a common pool of attentional resources for all sensory modalities is a matter of ongoing debate (for support for distinct attentional resources see: Duncan et al.,
<xref rid="B17" ref-type="bibr">1997</xref>
; Potter et al.,
<xref rid="B39" ref-type="bibr">1998</xref>
; Soto-Faraco and Spence,
<xref rid="B43" ref-type="bibr">2002</xref>
; Alais et al.,
<xref rid="B2" ref-type="bibr">2006</xref>
; Hein et al.,
<xref rid="B23" ref-type="bibr">2006</xref>
; Talsma et al.,
<xref rid="B46" ref-type="bibr">2006</xref>
; van der Burg et al.,
<xref rid="B52" ref-type="bibr">2007</xref>
; for support for a common pool of resources see: Jolicoeur,
<xref rid="B26" ref-type="bibr">1999</xref>
; Arnell and Larson,
<xref rid="B8" ref-type="bibr">2002</xref>
; Soto-Faraco et al.,
<xref rid="B44" ref-type="bibr">2002</xref>
; Arnell and Jenkins,
<xref rid="B7" ref-type="bibr">2004</xref>
). In particular, if humans have separate attentional resources for each sensory modality, the total amount of information that can be attended to would be larger if the received information were distributed across several sensory modalities rather than received via only one sensory modality.</p>
<p>Earlier studies proposed that whether humans have separate attentional resources or one common pool of resources depends on the type of task (Bonnel and Prinzmetal,
<xref rid="B12" ref-type="bibr">1998</xref>
; Potter et al.,
<xref rid="B39" ref-type="bibr">1998</xref>
; Chan and Newell,
<xref rid="B14" ref-type="bibr">2008</xref>
; Arrighi et al.,
<xref rid="B9" ref-type="bibr">2011</xref>
). In particular, Arrighi et al. (
<xref rid="B9" ref-type="bibr">2011</xref>
) argued that when humans carry out tasks that require attention over longer periods of time (i.e., sustained attention) rather than for a brief time, they employ separate attentional resources from each sensory modality as opposed to a common pool of attentional resources. In their study, participants performed a multiple object tracking (“MOT”) task (Pylyshyn and Storm,
<xref rid="B40" ref-type="bibr">1988</xref>
) while concurrently performing either a visual or auditory discrimination task. In a MOT task, participants visually track a subset of objects (“targets”) among other randomly moving objects (“distractors”) for several seconds—a task requiring visual spatial attention over extended periods of time. When participants performed the MOT task and the visual discrimination task at the same time, Arrighi et al. (
<xref rid="B9" ref-type="bibr">2011</xref>
) found strong within-modality interference. However, when participants performed the MOT task and the auditory discrimination task at the same time, Arrighi et al. (
<xref rid="B9" ref-type="bibr">2011</xref>
) found little cross-modality interference. These findings suggest that there are separate attentional resources for the visual and auditory modalities in tasks that require sustained attention.</p>
<p>Notably, in Arrighi et al. (
<xref rid="B9" ref-type="bibr">2011</xref>
), participants performed a discrimination task and a spatial task at the same time. These findings left open the question of whether humans employ separate attentional resources only when simultaneously performing a spatial task and a discrimination task (both requiring sustained attention), or whether they also do so when performing two spatial tasks that require sustained attention simultaneously. For the case of visual and tactile attentional resources, we addressed this question in a previous study (Wahn and König,
<xref rid="B54" ref-type="bibr">2015</xref>
). Participants performed a MOT task while simultaneously performing a localization (“LOC”) task in which they received either visual, tactile, or redundant visual and tactile location cues. We reasoned that if two spatial tasks performed in separate sensory modalities draw from at least partially separate pools of attentional resources, then interference between these two spatial tasks should be smaller than the interference observed when two spatial tasks are carried out within the same sensory modality. However, findings revealed that, while there was substantial interference between tasks, the amount of interference did not differ between conditions in which tasks were performed in separate sensory modalities (i.e., haptics and vision) in comparison to a purely visual condition. These results indicate shared attentional resources for the visual and haptic modality when two spatial tasks are performed. Taken together, these findings suggest that distinct attentional resources for the sensory modalities are employed during simultaneous performance of a discrimination task and a spatial task, whereas a common pool of attentional resources is used during simultaneous performance of two spatial tasks (for a similar claim that spatial attention acts supramodally, see LaBerge,
<xref rid="B31" ref-type="bibr">1995</xref>
; Chan and Newell,
<xref rid="B14" ref-type="bibr">2008</xref>
)). Moreover, this summary of results from previous studies is supported by the potential overlap of neuronal populations: when two spatial tasks are performed in separate sensory modalities, both recruit neural substrates from a supramodal “where” pathway residing in the parietal lobe (see Maeder et al.,
<xref rid="B34" ref-type="bibr">2001</xref>
; Reed et al.,
<xref rid="B41" ref-type="bibr">2005</xref>
; Ahveninen et al.,
<xref rid="B1" ref-type="bibr">2006</xref>
for neural substrates for the “where” and “what” pathway). In contrast, the recruited neural substrates when performing a discrimination task overlap less with those recruited in a spatial task, which could potentially explain why distinct attentional resources are found when a spatial and discrimination task are performed simultaneously (Arrighi et al.,
<xref rid="B9" ref-type="bibr">2011</xref>
).</p>
<p>An important difference between our previous study and the study by Arrighi et al. (
<xref rid="B9" ref-type="bibr">2011</xref>
) concerns the sensory modalities in which participants carried out the tasks (i.e., vision and audition in their study; vision and haptics in our previous study). Therefore, in order to test our hypothesis that spatial attention acts supramodally, the present study investigates whether humans also employ a common pool of attentional resources when performing a visual spatial task in combination with an auditory spatial task. For this purpose, we modified the experimental paradigm of the previous study and used auditory location cues instead of tactile location cues. Specifically, participants performed a MOT task while simultaneously performing a LOC task in which they received either visual, auditory, or redundant visual and auditory location cues. We hypothesized that if there are separate spatial attentional resources for the auditory and visual modalities, participants' tracking performance in the visual MOT task and localization performance in the LOC task should be better when receiving auditory or redundant visual and auditory location cues than when receiving visual location cues in the LOC task. Conversely, if spatial attentional resources are shared between visual and auditory modalities (i.e., there is only a common pool of attentional resources), no differences in performance are expected.</p>
<p>In addition to investigating the question of separate attentional resources for the sensory modalities, we also investigated the second question: to what extent attentional processes interact with multisensory integration processes. Previous studies showed that the “ventriloquist effect” is not influenced by attentional processes, indicating that multisensory integration occurs prior to attentional processing (Bertelson et al.,
<xref rid="B11" ref-type="bibr">2000</xref>
; Vroomen et al.,
<xref rid="B53" ref-type="bibr">2001</xref>
). Other instances of audiovisual integration, such as the “McGurk effect” (McGurk and MacDonald,
<xref rid="B36" ref-type="bibr">1976</xref>
) were shown to occur pre-attentively (Soto-Faraco et al.,
<xref rid="B42" ref-type="bibr">2004</xref>
); as another example, also see the “pip and pop effect” (van der Burg et al.,
<xref rid="B51" ref-type="bibr">2008</xref>
). However, other studies found that selective attention positively modulated multisensory integration processes if stimuli from both sensory modalities were fully attended (Talsma and Woldorff,
<xref rid="B48" ref-type="bibr">2005</xref>
; Talsma et al.,
<xref rid="B47" ref-type="bibr">2007</xref>
) or attenuated multisensory integration processes if only one sensory channel was attended (Mozolic et al.,
<xref rid="B37" ref-type="bibr">2008</xref>
), arguing against a purely pre-attentive account of multisensory integration. Furthermore, Alsius et al. (
<xref rid="B3" ref-type="bibr">2005</xref>
) found that increasing the attentional load via a secondary visual or auditory task severely affected audiovisual integration, suggesting that attentional processes can negatively affect multisensory integration (see also Alsius et al.,
<xref rid="B4" ref-type="bibr">2007</xref>
for a study in which attention directed to the tactile modality weakens audiovisual integration). Koelewijn et al. (
<xref rid="B30" ref-type="bibr">2010</xref>
) suggested that multisensory integration processes do rely on attentional processes, which would explain why high attentional load interferes with multisensory integration and why selective attention influences it.</p>
<p>However, most of these studies focused on the integration of auditory and visual information during the perception of speech (Navarra et al.,
<xref rid="B38" ref-type="bibr">2010</xref>
); but see also Vroomen et al. (
<xref rid="B53" ref-type="bibr">2001</xref>
) for a study about the integration of emotional auditory and visual information. Thus, it remains unclear whether attentional load disrupts audiovisual integration for non-speech stimuli. In order to address this question, we investigated whether the integration of visual and auditory cues in a LOC task is disrupted by a high attentional load. To this end, we tested whether multisensory cue integration (Ernst and Bülthoff,
<xref rid="B19" ref-type="bibr">2004</xref>
; Ernst,
<xref rid="B18" ref-type="bibr">2006</xref>
) occurs in conditions of high attentional load (i.e., when the MOT task was performed simultaneously with the LOC task), and low attentional load (i.e., when only the LOC was performed). In particular, we tested whether, irrespective of the attentional load, redundant location cues in the visual and auditory modality lead to better and less variable location estimates in comparison to estimates obtained from receiving only unimodal location cues. If high attentional load does disrupt multisensory integration (as shown for instance in Alsius et al.,
<xref rid="B3" ref-type="bibr">2005</xref>
), people should no longer be able to integrate redundant information from the auditory and visual modalities in the condition of high attentional load.</p>
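The benchmark for "better and less variable" bimodal estimates is the standard maximum-likelihood cue-combination model (Ernst and Bülthoff, 2004). As a brief, hedged reminder of the quantitative prediction (notation here is generic and not taken from the article): given unimodal location estimates \hat{s}_V and \hat{s}_A with variances \sigma_V^2 and \sigma_A^2, the optimal combined estimate is

\hat{s}_{VA} = w_V \hat{s}_V + w_A \hat{s}_A, \qquad w_V = \frac{\sigma_A^2}{\sigma_V^2 + \sigma_A^2}, \quad w_A = \frac{\sigma_V^2}{\sigma_V^2 + \sigma_A^2},

with combined variance

\sigma_{VA}^2 = \frac{\sigma_V^2 \sigma_A^2}{\sigma_V^2 + \sigma_A^2} \le \min(\sigma_V^2, \sigma_A^2).

Thus, if integration is intact under load, the bimodal (VIAU) estimates should be at least as precise as the better of the two unimodal estimates.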
</sec>
<sec id="s2">
<title>2. Methods</title>
<sec>
<title>2.1. Methods of data acquisition</title>
<sec>
<title>2.1.1. Participants</title>
<p>We recruited nine students (six female,
<italic>M</italic>
= 25.22 years,
<italic>SD</italic>
= 2.54 years) as participants at the University of Osnabrück. All participants had normal vision and normal hearing. We admitted to the study only students who did not play video games on a regular basis, as action video game experience can lead to considerably higher tracking performance in MOT tasks (Green and Bavelier,
<xref rid="B22" ref-type="bibr">2006</xref>
). The ethics committee of the University of Osnabrück approved the study, and all participants were informed about their rights and signed a written consent form. All participants received a monetary reward or course credits for participation.</p>
</sec>
<sec>
<title>2.1.2. Experimental setup</title>
<p>Participants wore headphones (Sony MDR-1RNC) and sat in a dark room at a distance of 90 cm in front of a computer screen (BenQ XL2420T, resolution 1920 × 1080, 120 Hz), subtending a visual field of 32.87 × 18.49 visual degrees. We recorded eye movements with a remote eyetracking system (Eyelink 1000, monocular pupil tracking, 500 Hz sampling rate). To calibrate eye position, we used a five-point grid and repeated the calibration procedure until the maximum error was below 0.7°.</p>
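As a plausibility check of the reported viewing geometry (an illustration only; the physical screen dimensions are not stated in the article), the usual visual-angle relation

\theta = 2 \arctan\left(\frac{w}{2d}\right)

with d = 90 cm and \theta = 32.87° × 18.49° implies a visible display area of roughly 2 · 90 · tan(16.44°) ≈ 53 cm by 2 · 90 · tan(9.25°) ≈ 29 cm, consistent with a 24-inch 16:9 monitor such as the BenQ XL2420T.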
</sec>
<sec>
<title>2.1.3. Experimental conditions and experimental procedure</title>
<p>In the experiment, participants performed either a MOT task, a LOC task, or both tasks at the same time. In the LOC task, participants had to identify the location toward which a visual cue, an auditory cue, or redundant visual and auditory location cues moved. In particular, in the visual LOC task (“VI”), the participants' task was to indicate the location to which a dot (2.4 visual degrees wide) in the center of the screen moved using the corresponding key on the keyboard's number pad. The dot was a gradient (i.e., with increasing eccentricity, the color gradually changed from black to white—1 pixel change in eccentricity equaled a change in the RGB code by one unit, see Figure
<xref ref-type="fig" rid="F1">1B</xref>
).</p>
<fig id="F1" position="float">
<label>Figure 1</label>
<caption>
<p>
<bold>(A)</bold>
Localization (LOC) task overview. The top row depicts the VI condition (in which visual location cues were received), the middle row the AU condition (in which auditory location cues were received) and the bottom row the VIAU condition (in which redundant visual and auditory location cues were received).
<bold>(B)</bold>
Mapping of number pad (top left) to visual stimuli on the screen (top right) and the auditory spatial cues (bottom). Arrows indicate the objects' current movement direction. This figure was adapted from our previous study (Wahn and König,
<xref rid="B54" ref-type="bibr">2015</xref>
) with permission of Koninklijke Brill NV.</p>
</caption>
<graphic xlink:href="fpsyg-06-01084-g0001"></graphic>
</fig>
<p>During each trial, the dot moved four to five times toward one out of eight possible locations (movement length 0.03 visual degrees), remained there for 600 ms, and then returned to the center of the screen (see top row in Figure
<xref ref-type="fig" rid="F1">1A</xref>
). The apparent motion was created by no longer showing the dot in the center of the screen and displaying it in one of the eight positions. When there was no movement toward a location, the dot remained continuously visible in the center of the screen. The participant was allowed to give her response once she was able to identify the movement direction. The spatial arrangement of the keys on the number pad matched the eight possible locations to which the visual stimulus could move (see Figure
<xref ref-type="fig" rid="F1">1B</xref>
). For instance, if participants saw a movement toward the bottom right, they had to press the “3” on the number pad. The location was chosen randomly out of the eight possible locations, and onsets of these movements were jittered within a time window of 0.6 or 1 s. The minimum time between onsets was 1.5 s. Participants were asked to indicate the location toward which the dot moved but not the central location, toward which the dot always moved back. In each trial, participants performed the LOC task for 11 s, and the trial ended after this period. Within this period, participants were instructed to always fixate on the center of the screen and ignore the motions of objects that moved across the screen.</p>
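The cue timing just described can be summarized in a short sketch. The experiment was programmed in Python with Pygame (as noted at the end of this section), but the function below is an illustrative reconstruction of the stated scheduling rules only (four to five cues per 11-s trial, 600 ms cue duration, jittered onsets, at least 1.5 s between onsets); its names and the exact jitter rule are assumptions, not the authors' code.

import random

def cue_schedule(trial_dur=11.0, min_gap=1.5, cue_dur=0.6, jitter_windows=(0.6, 1.0)):
    """Draw jittered cue onsets for one LOC trial.

    Illustrative reconstruction of the stated timing rules; the original
    jitter procedure is not specified in more detail, so the details here
    are assumptions.
    """
    n_cues = random.choice([4, 5])
    onsets, t = [], 1.0                                   # first onset roughly 1 s in (assumption)
    for _ in range(n_cues):
        t += random.uniform(0.0, random.choice(jitter_windows))   # jitter within a 0.6 s or 1 s window
        onsets.append(t)
        t += min_gap                                      # enforce at least 1.5 s between onsets
    # pair each onset with one of the eight cue directions, dropping cues
    # that would not fit into the 11 s trial
    return [(onset, random.randrange(8)) for onset in onsets if onset + cue_dur <= trial_dur]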
<p>Analogously, in the auditory LOC task (“AU”), an auditory location cue originating from the center of the screen moved to one of eight auditory spatial cues surrounding the central auditory cue. The apparent motion was created by no longer playing the central sound and playing one of the adjacent sounds instead. The participants' task was to indicate the location to which the auditory cue moved using the number pad (see middle row of Figure
<xref ref-type="fig" rid="F1">1A</xref>
). The mapping between the auditory cues and keys on the number pad matched with regards to their spatial arrangement. For instance, the key “9” corresponded to the right auditory cue in the top row (see Figure
<xref ref-type="fig" rid="F1">1B</xref>
). When there was no movement toward a location, the central tone was played, in analogy to always seeing the dot in the center of the screen during the VI condition when it did not move to a location.</p>
<p>As auditory location cues, nine gray noise sounds (stereo sound, sampling frequency 44,100 Hz, 32 bit IEEE float format, SPL: 20.1 dBA) were created using the Wave Arts plugin Panorama 5 and Adobe Audition. Each auditory cue was simulated to be perceived as originating from a different spatial location in front of the participant. The spatial locations were composed of unique combinations of one of three horizontal angles (−90°, 0°, 90°) and one of three vertical angles (−90°, 0°, 90°). Within the Panorama 5 plugin, default options were used (stereo width: 30°, direct gain: 0 db, direct slope: −3 db, mode: headphones). For the head-related transfer function, the generic 'Human' head-related transfer function (filter length: 128 points) was selected. Reflection and reverb options were disabled.</p>
<p>When redundant visual and auditory location cues were received (“VIAU”), the dot and the tone moved to matching locations, and the participants' task was to indicate the location using the number pad (see bottom row of Figure
<xref ref-type="fig" rid="F1">1A</xref>
).</p>
<p>In the MOT task, we instructed participants to track a subset of three randomly chosen objects (“targets”) among eighteen randomly moving objects for a total of 11 s. Before the objects (1.06 visual degrees wide) started to move, targets turned gray for a duration of 2 s and then became indistinguishable from the other objects. Then, objects moved for 11 s. During object motion, objects repelled each other and bounced off borders of the screen. Each object's movement direction and speed [mean speed 2.57 visual degrees per second (minimum 1.71, maximum 3.42)] were randomly chosen, with a probability of 1% in each frame (the experiment was run at a 100 Hz refresh rate). When objects stopped moving, participants were instructed to select the target objects using the mouse (see Figure
<xref ref-type="fig" rid="F2">2</xref>
, top row). After selection of objects was complete, correctly selected objects were marked in green. In what will be referred to as “single task condition,” participants either performed one of the above described modality-specific versions of the LOC task (i.e., VI, AU, or VIAU) or the MOT task; i.e., they only performed a single task at the same time.</p>
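The object motion in the MOT task is a constrained random walk. The fragment below sketches one frame of that motion under the stated parameters (1% per-frame probability of redrawing direction and speed at 100 Hz, speeds between 1.71 and 3.42 visual degrees per second, bouncing at the borders); inter-object repulsion is omitted, and the function is an illustration rather than the authors' implementation.

import math
import random

SPEED_RANGE = (1.71, 3.42)   # visual degrees per second (from the Methods)
REDRAW_P    = 0.01           # 1% chance per frame to redraw direction and speed
FPS         = 100            # the experiment ran at a 100 Hz refresh rate

def update_object(x, y, angle, speed, width, height):
    """Advance one MOT object by one frame (positions in visual degrees)."""
    if random.random() < REDRAW_P:
        angle = random.uniform(0.0, 2.0 * math.pi)
        speed = random.uniform(*SPEED_RANGE)          # mean of this range is ~2.57 deg/s
    x += math.cos(angle) * speed / FPS
    y += math.sin(angle) * speed / FPS
    if not (0.0 <= x <= width):                       # bounce off left/right borders
        angle = math.pi - angle
        x = min(max(x, 0.0), width)
    if not (0.0 <= y <= height):                      # bounce off top/bottom borders
        angle = -angle
        y = min(max(y, 0.0), height)
    return x, y, angle, speed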
<fig id="F2" position="float">
<label>Figure 2</label>
<caption>
<p>
<bold>Multiple object tracking (MOT) task overview</bold>
. Trial logic shown for the MOT task (top row), for performing the MOT task while either receiving the visual location cues (VI+MOT, second row), the auditory location cues (AU+MOT, third row) or the redundant visual and auditory location cues (VIAU+MOT, fourth row) in the localization (LOC) task. Arrows indicate the current movement direction of the objects. This figure was adapted from our previous study (Wahn and König,
<xref rid="B54" ref-type="bibr">2015</xref>
) with permission of Koninklijke Brill NV.</p>
</caption>
<graphic xlink:href="fpsyg-06-01084-g0002"></graphic>
</fig>
<p>In the “dual task condition,” participants performed the MOT task in combination with the LOC task; they received visual (“VI+MOT”), auditory (“AU+MOT”), or redundant visual and auditory (“VIAU+MOT”) location cues. Specifically, participants first saw a screen in which objects did not move, and targets were indicated in gray for 2 s. Then, objects moved for 11 s, and participants were additionally required to perform the LOC task. Using the number pad, participants were instructed to choose the locations indicated either by the dot in the center of the screen (“VI+MOT”), by the auditory cues (“AU+MOT”), or by the dot and the auditory cues (“VIAU+MOT”)—see Figure
<xref ref-type="fig" rid="F2">2</xref>
, the second, third, and fourth rows, respectively. When objects stopped moving, participants were instructed to select the targets, and they no longer had to perform the LOC task.</p>
<p>Note that, in order to keep the perceptual load constant, participants always saw eighteen randomly moving objects in each experimental condition. In addition, while tracking these objects and/or performing the LOC task, participants were instructed to always fixate on the center of the screen.</p>
<p>The experiment was divided into 21 blocks of ten trials each, presented in a pseudorandomized order. Within one block, participants always performed the same condition, which was indicated at the beginning of the block. In conditions in which a localization task was performed, and given that four to five location cues were presented in every trial, each cue direction was indicated approximately seventeen times. Each set of seven blocks included all seven conditions (VI, AU, VIAU, MOT, VI+MOT, AU+MOT, VIAU+MOT). Repetition of a condition in consecutive blocks was avoided. After every seventh block, we offered participants an optional break. The entire experiment took about 2 h. We programmed the experiment and performed data extraction with Python, using the Pygame library.</p>
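The block-order constraints (every set of seven blocks contains all seven conditions; no condition repeats in consecutive blocks) can be satisfied, for example, as sketched below. This is one possible construction, since the article does not describe the pseudorandomization procedure in more detail.

import random

CONDITIONS = ["VI", "AU", "VIAU", "MOT", "VI+MOT", "AU+MOT", "VIAU+MOT"]

def block_order(n_sets=3):
    """Pseudorandomized block order: each set of seven blocks is a permutation
    of all seven conditions, and no condition occurs in two consecutive blocks
    (the only possible repeat is across set borders, so only that is checked).
    """
    order = []
    for _ in range(n_sets):
        perm = CONDITIONS[:]
        random.shuffle(perm)
        while order and perm[0] == order[-1]:   # avoid a repeat across set borders
            random.shuffle(perm)
        order.extend(perm)
    return order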
</sec>
</sec>
<sec>
<title>2.2. Methods of data analysis</title>
<p>We excluded from the analysis trials in which a participant's gaze deviated from the center by more than two visual degrees on average (a total of 2.18% of trials excluded,
<italic>M</italic>
= 2.56 visual degrees,
<italic>SD</italic>
= 0.70). For each dependent variable, we regarded trial values below or above three times the interquartile range (relative to the median) as outliers and removed them per individual for each condition. We averaged all remaining trials for each participant and for each condition.</p>
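In code, this trial-level exclusion and averaging could look roughly as follows. This is a Python/pandas sketch for a hypothetical long-format table; the original extraction used custom scripts, and the column names here ('participant', 'condition', 'gaze_dev', 'score') are assumptions.

import pandas as pd

def preprocess(trials: pd.DataFrame) -> pd.DataFrame:
    """Apply the fixation criterion, the 3 x IQR outlier rule (relative to the
    median, per participant and condition), and average the remaining trials.
    """
    trials = trials[trials["gaze_dev"] <= 2.0]            # mean gaze deviation <= 2 visual degrees

    def drop_outliers(g):
        iqr = g["score"].quantile(0.75) - g["score"].quantile(0.25)
        med = g["score"].median()
        return g[(g["score"] >= med - 3 * iqr) & (g["score"] <= med + 3 * iqr)]

    trials = (trials.groupby(["participant", "condition"], group_keys=False)
                    .apply(drop_outliers))
    # one mean value per participant and condition
    return trials.groupby(["participant", "condition"], as_index=False)["score"].mean()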
<p>In order to test our hypotheses, we computed linear mixed models with predictors always representing planned comparisons between conditions. For this purpose, we used either a “dummy” or a “simple” coding scheme (Bruin,
<xref rid="B13" ref-type="bibr">2014</xref>
). For the estimation of the predictors' coefficients, a maximum likelihood estimation was used, as it leads to better approximations of fixed effects than a restricted maximum likelihood estimation does (Twisk,
<xref rid="B50" ref-type="bibr">2006</xref>
). Significance of estimated fixed effects was evaluated using 95% confidence limits. The fixed effects coefficients displayed in tables are unstandardized. For each linear mixed model, we modeled individual intercepts for each participant in order to account for the dependence between measurements across conditions (Twisk,
<xref rid="B50" ref-type="bibr">2006</xref>
). We checked the assumptions of linearity and homoskedasticity by visual inspection of the fitted values plotted against the residuals. We assessed the assumption of normality by visual inspection of histograms of the residuals and normal Q–Q plots and by performing a Shapiro–Wilk-test (alpha = 0.1) on these residuals. In cases of violations of normality, we bootstrapped 95% confidence limits to evaluate the significance of estimated coefficients.</p>
<p>We used custom R scripts for all analyses. We generated tables using “texreg” (Leifeld,
<xref rid="B32" ref-type="bibr">2013</xref>
), created graphics using “ggplot2” (Wickham,
<xref rid="B56" ref-type="bibr">2009</xref>
), and calculated linear mixed models analyses using “lme4” (Bates et al.,
<xref rid="B10" ref-type="bibr">2014</xref>
).</p>
</sec>
</sec>
<sec id="s3">
<title>3. Results</title>
<sec>
<title>3.1. Do audition and vision share spatial attentional resources?</title>
<p>Figure
<xref ref-type="fig" rid="F3">3A</xref>
shows a descriptive overview of performance in the MOT task (abscissa) and the LOC task (ordinate), respectively. The two tasks interfered with each other irrespective of the sensory modality in which location cues were received. To address the question of whether there are separate attentional resources for the visual and auditory modalities, it does not suffice to look at the performance of each task separately. Therefore, in order to obtain an overall score of interference between tasks, we computed the Euclidean distance in performance between the single task conditions and the dual task conditions for each condition separately (indicated as dashed lines in Figure
<xref ref-type="fig" rid="F3">3A</xref>
). Figure
<xref ref-type="fig" rid="F3">3B</xref>
(left panel) shows the mean Euclidean distance for the VI, AU, and VIAU conditions, respectively. The amount of interference is approximately equal across conditions, suggesting a single pool of attentional resources rather than separate attentional resources for each sensory modality. We tested this observation by comparing the VI condition with the AU and VIAU conditions using a linear mixed model. For this model, we coded the predictor condition (with levels VI, AU, and VIAU) using a simple coding scheme with the VI condition as the reference group. With the simple coding scheme, the model's intercept represents the grand average over all conditions and thereby an overall score of interference between tasks. The coefficients in the model represent the comparisons of the VI condition with the AU and VIAU conditions, respectively. We found a significant intercept that was also large in magnitude (about 24%), indicating that the dual task conditions led to a considerable decrease in performance relative to the single task conditions. However, we did not find any significant differences between conditions (see the first column of Table
<xref ref-type="table" rid="T1">1</xref>
), indicating that there is a common pool of spatial attentional resources for the auditory and visual modalities. Moreover, these results closely match the results of our previous study (Wahn and König,
<xref rid="B54" ref-type="bibr">2015</xref>
), in which we used tactile spatial cues instead of auditory spatial cues (for comparison, see the right panel of Figure
<xref ref-type="fig" rid="F3">3B</xref>
) and found evidence for shared spatial attentional resources for the tactile and visual modalities.</p>
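<p>The interference score can be written compactly; the following minimal Python sketch (an illustration, not the original analysis code) computes it from percentage-correct values in the single and dual task conditions.</p>
<preformat>
import numpy as np

def interference(mot_single, loc_single, mot_dual, loc_dual):
    """Euclidean distance, in percentage points, between the (MOT, LOC)
    performance pair in the single task conditions and the corresponding
    pair in the dual task condition."""
    return np.hypot(mot_single - mot_dual, loc_single - loc_dual)

# Example: dropping from 90% to 70% correct in MOT and from 85% to 75%
# in LOC gives interference(90, 85, 70, 75), roughly 22.4 percentage points.
</preformat>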
<fig id="F3" position="float">
<label>Figure 3</label>
<caption>
<p>
<bold>Results of multiple object tracking (MOT) task and localization (LOC) task</bold>
. <bold>(A)</bold> Percentage correct in the MOT task (abscissa) plotted against percentage correct in the LOC task (ordinate) for the single task conditions of the two tasks (VI | MOT, AU | MOT, and VIAU | MOT) and the dual task conditions (MOT+VI, MOT+AU, and MOT+VIAU). Dotted lines indicate the Euclidean distance between single and dual task conditions and represent an overall measure of interference.
<bold>(B)</bold>
Interference [%] between the MOT and LOC task for each type of location cue (measured as Euclidean distance between single and dual task conditions) for the present study (left) and for the previous study (right), in which tactile instead of auditory spatial cues were received (Wahn and König,
<xref rid="B54" ref-type="bibr">2015</xref>
), using the same interference measure. Error bars in all panels are SEM. Panel
<bold>(B)</bold>
was adapted from our previous study (Wahn and König,
<xref rid="B54" ref-type="bibr">2015</xref>
) with permission of Koninklijke Brill NV.</p>
</caption>
<graphic xlink:href="fpsyg-06-01084-g0003"></graphic>
</fig>
<table-wrap id="T1" position="float">
<label>Table 1</label>
<caption>
<p>
<bold>Linear mixed model results</bold>
.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th rowspan="1" colspan="1"></th>
<th valign="top" align="center" rowspan="1" colspan="1">
<bold>Interference [%]</bold>
</th>
<th valign="top" align="center" rowspan="1" colspan="1">
<bold>Residuals</bold>
</th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1">Intercept</td>
<td valign="top" align="center" rowspan="1" colspan="1">24.05
<xref ref-type="table-fn" rid="TN1">
<sup>*</sup>
</xref>
</td>
<td valign="top" align="center" rowspan="1" colspan="1">0.00</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="center" rowspan="1" colspan="1">[15.27; 32.82]</td>
<td valign="top" align="center" rowspan="1" colspan="1">[−1.41; 1.41]</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1">VI vs. AU</td>
<td valign="top" align="center" rowspan="1" colspan="1">0.60</td>
<td valign="top" align="center" rowspan="1" colspan="1">1.85</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="center" rowspan="1" colspan="1">[−3.74; 4.93]</td>
<td valign="top" align="center" rowspan="1" colspan="1">[−1.61; 5.31]</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1">VI vs. VIAU</td>
<td valign="top" align="center" rowspan="1" colspan="1">1.83</td>
<td valign="top" align="center" rowspan="1" colspan="1">0.67</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="center" rowspan="1" colspan="1">[−2.51; 6.16]</td>
<td valign="top" align="center" rowspan="1" colspan="1">[−2.79; 4.13]</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1">Log likelihood</td>
<td valign="top" align="center" rowspan="1" colspan="1">−92.48</td>
<td valign="top" align="center" rowspan="1" colspan="1">−72.99</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1">Participants</td>
<td valign="top" align="center" rowspan="1" colspan="1">9</td>
<td valign="top" align="center" rowspan="1" colspan="1">9</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<p>
<italic>Dependent variables are “Interference [%]” between LOC and MOT task (first column) and “Interference [%]” controlled for differences in localization performance (second column). Unstandardized coefficient estimates of the differences between conditions are displayed</italic>
.</p>
<fn id="TN1">
<label>*</label>
<p>
<italic>0 outside 95% confidence interval</italic>
.</p>
</fn>
</table-wrap-foot>
</table-wrap>
<p>In addition, in order to control for differences in localization difficulty between the auditory and visual modalities, we computed a linear mixed model in which localization performance in the single task conditions was used as a predictor for the interference measure. We found a trend toward significance for the relation between localization performance and the interference measure (95%-
<italic>CI</italic>
[−0.04; 0.18], log likelihood = −92.02), indicating that the localization performance predicts the interference to some extent. Using the residuals from this model for further analysis, we regressed out all the variance that is explained by the localization performance. With the residuals, we computed the same linear mixed model as used for the interference measure and again found no significant difference between conditions (see the second column of Table
<xref ref-type="table" rid="T1">1</xref>
), suggesting shared spatial attentional resources for the visual and auditory modalities.</p>
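<p>In sketch form, and with assumed column names rather than the original R code, this control analysis amounts to two steps: regress the interference score on single-task localization performance, then re-run the condition comparison on the residuals.</p>
<preformat>
import statsmodels.formula.api as smf

def residualize_interference(df):
    """Regress interference on single-task localization performance in a
    mixed model with per-participant intercepts, then subtract the
    fixed-effect prediction to obtain residualized interference scores."""
    fit = smf.mixedlm("interference ~ loc_single", df,
                      groups=df["participant"]).fit(reml=False)
    df = df.copy()
    df["interference_resid"] = df["interference"] - fit.fittedvalues
    return df

# The condition model from the previous sketch is then refitted with
# "interference_resid" as the dependent variable.
</preformat>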
<p>We also suspected that there could be an asymmetry in the localization performance for the auditory stimuli between the horizontal and vertical dimensions. In an extreme case, participants could have perfectly identified the location of the cue in one dimension while merely guessing it in the other. To rule out this possibility, we tested, separately for the horizontal and vertical dimensions, whether participants identified the given location cues in the auditory LOC task with above-chance performance (chance = 33%), using a one-sample
<italic>t</italic>
-test. We found that participants indeed identified the location cues with a performance above chance for each dimension [vertical:
<italic>t</italic>
<sub>(8)</sub>
= 6.49,
<italic>p</italic>
< 0.001, 95%-
<italic>CI</italic>
[0.45,0.59]; horizontal:
<italic>t</italic>
<sub>(8)</sub>
= 13.75,
<italic>p</italic>
< 0.00001, 95%-
<italic>CI</italic>
[0.75,0.92]], ruling out the possibility that participants guessed the location of the cue in either of the dimensions.</p>
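<p>A minimal Python sketch of this per-dimension check is shown below; it is illustrative only. The 1/3 chance level reflects the three possible cue positions per dimension, and the input is assumed to be one proportion-correct value per participant.</p>
<preformat>
import numpy as np
from scipy import stats

def above_chance(accuracies, chance=1 / 3):
    """One-sample t-test of per-participant accuracies against chance,
    applied separately to the horizontal and the vertical dimension."""
    t, p = stats.ttest_1samp(np.asarray(accuracies, dtype=float),
                             popmean=chance)
    return t, p
</preformat>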
</sec>
<sec>
<title>3.2. Is audiovisual integration disrupted by attentional load?</title>
<p>In order to test whether the multisensory integration of localization cues received from the auditory and visual modalities is disrupted by attentional load, we verified whether the predictions of cue integration are fulfilled in the VIAU condition and in the MOT+VIAU condition. Given that the cues in each sensory modality give redundant information, cue integration predicts a better and also more reliable estimate of a parameter (Ernst and Bülthoff,
<xref rid="B19" ref-type="bibr">2004</xref>
; Ernst,
<xref rid="B18" ref-type="bibr">2006</xref>
). For our paradigm, this means that participants should be more accurate (i.e., commit fewer errors) in estimating the locations in the LOC task in the VIAU condition compared to the AU and VI condition. Furthermore, the standard deviations of participants' location estimates should be lower in the VIAU condition than in the AU and VI conditions, indicating more reliable estimates of the locations in the VIAU condition. Conversely, if multisensory integration is disrupted by attentional load, the predictions of cue integration should not be fulfilled. In particular, participants should not be more accurate and more reliable in estimating the locations in the VIAU condition compared to doing so in the AU and VI conditions.</p>
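<p>For reference, the standard maximum-likelihood account of cue combination (Ernst and Bülthoff, <xref rid="B19" ref-type="bibr">2004</xref>) makes these qualitative predictions quantitative. The formulation below is a textbook sketch rather than a derivation given in the present text:</p>
<disp-formula>
<tex-math><![CDATA[
\hat{S}_{\mathrm{VIAU}} = w_{\mathrm{VI}}\,\hat{S}_{\mathrm{VI}} + w_{\mathrm{AU}}\,\hat{S}_{\mathrm{AU}},\qquad
w_{i} = \frac{1/\sigma_{i}^{2}}{1/\sigma_{\mathrm{VI}}^{2} + 1/\sigma_{\mathrm{AU}}^{2}},\qquad
\sigma_{\mathrm{VIAU}}^{2} = \frac{\sigma_{\mathrm{VI}}^{2}\,\sigma_{\mathrm{AU}}^{2}}{\sigma_{\mathrm{VI}}^{2} + \sigma_{\mathrm{AU}}^{2}} \le \min\!\left(\sigma_{\mathrm{VI}}^{2}, \sigma_{\mathrm{AU}}^{2}\right)
]]></tex-math>
</disp-formula>
<p>Under this account, the bimodal estimate should be at least as reliable as the better unimodal estimate, which is what the comparisons of errors and standard deviations reported below test.</p>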
<p>We first calculated the committed errors as the city block distance (a distance measure also known as the “Manhattan distance”) between the correct location and the selected location (see Figure
<xref ref-type="fig" rid="F4">4A</xref>
for a descriptive overview). We then compared the errors committed in the VIAU condition with those in the AU and VI conditions, and the errors committed in the VIAU+MOT condition with those in the AU+MOT and VI+MOT conditions. For these comparisons, we used a linear mixed model with a dummy coding scheme, with the VIAU and VIAU+MOT conditions as the reference groups, respectively. We found that participants committed fewer errors in the VIAU condition than in the AU or VI condition, and they also committed fewer errors in the VIAU+MOT condition than in the AU+MOT and VI+MOT conditions (see the first and second columns in Table
<xref ref-type="table" rid="T2">2</xref>
). We ran the same model with the standard deviation of the location estimates for each participant as dependent variable and found the same pattern of results: Participants' estimates of the location were less variable in the VIAU condition than in the AU and VI conditions and were also less variable in the VIAU+MOT condition than in the AU+MOT and VI+MOT conditions (see the third and fourth columns in Table
<xref ref-type="table" rid="T2">2</xref>
). Overall, the findings indicate that, irrespective of attentional load, participants integrate the information they receive via the visual and auditory modalities. Moreover, this pattern of results matches the results of our previous study, in which participants received tactile instead of auditory spatial cues. In particular, in the previous study, we found better location estimates when participants received redundant visual and tactile spatial cues than when they received unimodal spatial cues (for comparison, see Figure
<xref ref-type="fig" rid="F4">4B</xref>
), suggesting that visuotactile integration is not disrupted by attentional load.</p>
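<p>The error measure itself is straightforward to compute; a minimal Python sketch is given below, with grid positions represented as (row, column) index pairs as an illustrative assumption.</p>
<preformat>
def city_block_error(cued, selected):
    """City block (Manhattan) distance between the cued and the selected
    position on the response grid, given as (row, column) index pairs."""
    return abs(cued[0] - selected[0]) + abs(cued[1] - selected[1])

# Example: city_block_error((0, 2), (1, 1)) == 2
</preformat>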
<fig id="F4" position="float">
<label>Figure 4</label>
<caption>
<p>
<bold>Results of the localization (LOC) task</bold>
.
<bold>(A)</bold>
Error (in city block distance) in LOC task for each type of location cue [visual (VI), auditory (AU) and redundant auditory and visual location cue (VIAU)], separately for single and dual task conditions (present study).
<bold>(B)</bold>
Error (in city block distance) in LOC task for each type of location cue: visual (VI), tactile (TA) and redundant tactile and visual location cue (VITA), separately for single and dual task conditions from the previous study (Wahn and König,
<xref rid="B54" ref-type="bibr">2015</xref>
). Error bars in all panels are SEM. Panel
<bold>(B)</bold>
was reproduced from our previous study (Wahn and König,
<xref rid="B54" ref-type="bibr">2015</xref>
) with permission of Koninklijke Brill NV.</p>
</caption>
<graphic xlink:href="fpsyg-06-01084-g0004"></graphic>
</fig>
<table-wrap id="T2" position="float">
<label>Table 2</label>
<caption>
<p>
<bold>Linear mixed model results</bold>
.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th rowspan="1" colspan="1"></th>
<th valign="top" align="center" rowspan="1" colspan="1">
<bold>City block Single</bold>
</th>
<th valign="top" align="center" rowspan="1" colspan="1">
<bold>City block Dual</bold>
</th>
<th valign="top" align="center" rowspan="1" colspan="1">
<bold>SD Single</bold>
</th>
<th valign="top" align="center" rowspan="1" colspan="1">
<bold>SD Dual</bold>
</th>
<th valign="top" align="center" rowspan="1" colspan="1">
<bold>RT Single</bold>
</th>
<th valign="top" align="center" rowspan="1" colspan="1">
<bold>RT Dual</bold>
</th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1">Intercept</td>
<td valign="top" align="center" rowspan="1" colspan="1">0.35
<xref ref-type="table-fn" rid="TN2">
<sup>*</sup>
</xref>
</td>
<td valign="top" align="center" rowspan="1" colspan="1">0.52
<xref ref-type="table-fn" rid="TN2">
<sup>*</sup>
</xref>
</td>
<td valign="top" align="center" rowspan="1" colspan="1">0.26
<xref ref-type="table-fn" rid="TN2">
<sup>*</sup>
</xref>
</td>
<td valign="top" align="center" rowspan="1" colspan="1">0.38
<xref ref-type="table-fn" rid="TN2">
<sup>*</sup>
</xref>
</td>
<td valign="top" align="center" rowspan="1" colspan="1">0.81
<xref ref-type="table-fn" rid="TN2">
<sup>*</sup>
</xref>
</td>
<td valign="top" align="center" rowspan="1" colspan="1">0.84
<xref ref-type="table-fn" rid="TN2">
<sup>*</sup>
</xref>
</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="center" rowspan="1" colspan="1">[0.10; 0.59]</td>
<td valign="top" align="center" rowspan="1" colspan="1">[0.24; 0.77]</td>
<td valign="top" align="center" rowspan="1" colspan="1">[0.17; 0.36]</td>
<td valign="top" align="center" rowspan="1" colspan="1">[0.29; 0.47]</td>
<td valign="top" align="center" rowspan="1" colspan="1">[0.70; 0.92]</td>
<td valign="top" align="center" rowspan="1" colspan="1">[0.71; 0.95]</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1">VIAU vs. AU</td>
<td valign="top" align="center" rowspan="1" colspan="1">0.59
<xref ref-type="table-fn" rid="TN2">
<sup>*</sup>
</xref>
</td>
<td valign="top" align="center" rowspan="1" colspan="1">0.54
<xref ref-type="table-fn" rid="TN2">
<sup>*</sup>
</xref>
</td>
<td valign="top" align="center" rowspan="1" colspan="1">0.24
<xref ref-type="table-fn" rid="TN2">
<sup>*</sup>
</xref>
</td>
<td valign="top" align="center" rowspan="1" colspan="1">0.17
<xref ref-type="table-fn" rid="TN2">
<sup>*</sup>
</xref>
</td>
<td valign="top" align="center" rowspan="1" colspan="1">0.04</td>
<td valign="top" align="center" rowspan="1" colspan="1">0.02</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="center" rowspan="1" colspan="1">[0.42; 0.76]</td>
<td valign="top" align="center" rowspan="1" colspan="1">[0.36; 0.75]</td>
<td valign="top" align="center" rowspan="1" colspan="1">[0.11; 0.36]</td>
<td valign="top" align="center" rowspan="1" colspan="1">[0.07; 0.29]</td>
<td valign="top" align="center" rowspan="1" colspan="1">[−0.02; 0.11]</td>
<td valign="top" align="center" rowspan="1" colspan="1">[−0.05; 0.09]</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1">VIAU vs. VI</td>
<td valign="top" align="center" rowspan="1" colspan="1">0.50
<xref ref-type="table-fn" rid="TN2">
<sup>*</sup>
</xref>
</td>
<td valign="top" align="center" rowspan="1" colspan="1">0.39
<xref ref-type="table-fn" rid="TN2">
<sup>*</sup>
</xref>
</td>
<td valign="top" align="center" rowspan="1" colspan="1">0.36
<xref ref-type="table-fn" rid="TN2">
<sup>*</sup>
</xref>
</td>
<td valign="top" align="center" rowspan="1" colspan="1">0.18
<xref ref-type="table-fn" rid="TN2">
<sup>*</sup>
</xref>
</td>
<td valign="top" align="center" rowspan="1" colspan="1">0.07
<xref ref-type="table-fn" rid="TN2">
<sup>*</sup>
</xref>
</td>
<td valign="top" align="center" rowspan="1" colspan="1">0.05</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="center" rowspan="1" colspan="1">[0.33; 0.67]</td>
<td valign="top" align="center" rowspan="1" colspan="1">[0.19; 0.60]</td>
<td valign="top" align="center" rowspan="1" colspan="1">[0.23; 0.47]</td>
<td valign="top" align="center" rowspan="1" colspan="1">[0.07; 0.29]</td>
<td valign="top" align="center" rowspan="1" colspan="1">[0.01; 0.13]</td>
<td valign="top" align="center" rowspan="1" colspan="1">[−0.01; 0.12]</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1">Log Likelihood</td>
<td valign="top" align="center" rowspan="1" colspan="1">−5.94</td>
<td valign="top" align="center" rowspan="1" colspan="1">−8.72</td>
<td valign="top" align="center" rowspan="1" colspan="1">10.34</td>
<td valign="top" align="center" rowspan="1" colspan="1">10.31</td>
<td valign="top" align="center" rowspan="1" colspan="1">15.90</td>
<td valign="top" align="center" rowspan="1" colspan="1">14.62</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1">Participants</td>
<td valign="top" align="center" rowspan="1" colspan="1">9</td>
<td valign="top" align="center" rowspan="1" colspan="1">9</td>
<td valign="top" align="center" rowspan="1" colspan="1">9</td>
<td valign="top" align="center" rowspan="1" colspan="1">9</td>
<td valign="top" align="center" rowspan="1" colspan="1">9</td>
<td valign="top" align="center" rowspan="1" colspan="1">9</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<p>
<italic>Dependent variables are the city block distance (as a measure of localization error; columns one and two for comparisons in the single task and dual task conditions), the standard deviation (“SD”) of the city block distance for each participant (columns three and four), and reaction times (“RT”; columns five and six). Unstandardized coefficient estimates of the differences between conditions are displayed. The intercept represents the mean of values in the VIAU condition, which was tested against zero (with zero representing perfect performance for the city block error measure)</italic>
.</p>
<fn id="TN2">
<label>*</label>
<p>
<italic>0 outside of 95% confidence interval</italic>
.</p>
</fn>
</table-wrap-foot>
</table-wrap>
<p>We also investigated whether the higher accuracy in the VIAU and VIAU+MOT conditions could be due to an accuracy/speed tradeoff. We tested whether the participants in the VIAU condition took longer to respond than in the AU and VI conditions and whether participants in the VIAU+MOT condition took longer to respond than in the AU+MOT and VI+MOT conditions. We did not find any significant differences between conditions (see the fifth and sixth columns in Table
<xref ref-type="table" rid="T2">2</xref>
). Overall, this indicates that the better accuracy performances in the VIAU and VIAU+MOT conditions were not due to an accuracy/speed tradeoff.</p>
</sec>
</sec>
<sec id="s4">
<title>4. Discussion</title>
<sec>
<title>4.1. Do audition and vision share spatial attentional resources?</title>
<p>We investigated whether the auditory and visual modalities share spatial attentional resources in tasks requiring sustained attention or whether there are distinct spatial attentional resources for these sensory modalities. To address this question, participants performed two spatial tasks (a MOT task and a LOC task) either simultaneously (dual task condition) or separately (single task conditions). Both the MOT and LOC tasks required sustained attention. We found a substantial decrease in performance in the dual task conditions relative to the single task conditions. However, the amount of interference was not affected by the sensory modality in which the location cues were provided in the localization task. That is, whether the location cues were provided via the visual, auditory, or combined visual and auditory modality did not affect how well participants performed the dual task. We interpret these results as indicating that spatial attentional resources are shared between the auditory and visual modalities in tasks requiring sustained attention. Moreover, these results are in line with those of a previous study (Wahn and König,
<xref rid="B54" ref-type="bibr">2015</xref>
), in which tactile instead of auditory spatial cues were used, suggesting that visual spatial attentional resources are shared with tactile and auditory spatial attentional resources.</p>
<p>However, we also want to point out that the interference between tasks in the dual task condition caused a larger performance decrease in the MOT task than in the LOC task. Under the assumption that the MOT and LOC tasks draw from a common pool of spatial attentional resources, we would have expected the performance decrease for both tasks to be symmetrical. We suspect that this asymmetric pattern of results can in part be explained by how performance for each of these tasks is computed. In particular, in the MOT task, given that participants tracked three targets, misclassifying one of the targets resulted in a substantial performance decrease of one third. In contrast, in the LOC task, given that four to five location cues were received per trial, misclassifying a location cue resulted in a performance decrease of only one-fourth or one-fifth. Therefore, even if participants make an equal number of mistakes in both tasks due to the interference between tasks, a performance asymmetry would still be found. In a future study, matching the number of tracked targets with the number of received location cues could decrease the asymmetry of the interference.</p>
<p>Alternatively, this asymmetric pattern of results could be explained by an additional interference induced by the LOC task. In particular, we suspect that the executive demand of having to continuously perform key presses in the LOC task could have caused an additional interference in the MOT task that is independent of the sensory modality in which the LOC task was performed. While this alternative explanation cannot be refuted in the present study, we want to point out that finding not only a decrease in performance in the MOT task but also a decrease in performance in the LOC task for each type of location cue suggests that these two tasks indeed draw from a common pool of spatial attentional resources.</p>
<p>In addition, we also want to point out that participants performed better in the LOC task when receiving visual location cues than when receiving auditory location cues. Differences in task difficulty (as indicated by localization performance) could result in different amounts of interference between tasks, independent of the sensory modalities in which the tasks are carried out. We therefore statistically controlled for differences in localization performance to rule out any additional interference between tasks caused by differences in task difficulty. With this procedure, we assumed a linear relationship between task difficulty and the interference between tasks, an assumption supported by the trend toward significance for the relation between localization performance and task interference. After controlling for task difficulty, we still did not find any differences between conditions, suggesting shared attentional resources between the visual and auditory modalities. However, we want to point out that other (non-linear) relationships between task difficulty and interference were not controlled for and could still influence the results. In a future study, the localization performance for the auditory and visual LOC tasks could be more closely matched to circumvent the need to control for task difficulty.</p>
<p>A previous study (Arrighi et al.,
<xref rid="B9" ref-type="bibr">2011</xref>
) has shown that distinct attentional resources are used for tasks requiring sustained attention. In particular, in Arrighi et al. (
<xref rid="B9" ref-type="bibr">2011</xref>
), participants were required to perform a visual spatial task (i.e., a MOT task) in combination with either a visual or auditory discrimination task, and results indicated distinct attentional resources for the visual and auditory sensory modalities. We argue that this effect may be specific to the combination of the type of tasks that were performed (i.e., a spatial task in combination with a discrimination task). In the present study, participants performed two spatial tasks instead of one discrimination task and one spatial task; we found evidence for shared attentional resources for the visual and auditory sensory modalities. Similarly, in our previous study (Wahn and König,
<xref rid="B54" ref-type="bibr">2015</xref>
), we found indications for shared attentional resources for the visual and tactile modalities. Taken together, these findings indicate that distinct attentional resources for the sensory modalities are employed during simultaneous performance of a discrimination task and a spatial task, but that a common pool of attentional resources is used during simultaneous performance of two spatial tasks (for a similar claim that spatial attention acts supramodally, see LaBerge,
<xref rid="B31" ref-type="bibr">1995</xref>
; Chan and Newell,
<xref rid="B14" ref-type="bibr">2008</xref>
).</p>
<p>More speculatively, we reason that our findings may be explained in terms of an overlap of neuronal populations that process both visual and auditory spatial information. Previous studies have found evidence for the existence of a dorsal “where” pathway residing in the parietal lobe, specialized in the processing of visual spatial information (Livingstone and Hubel,
<xref rid="B33" ref-type="bibr">1988</xref>
). For the auditory modality, previous research has also found evidence for the existence of a “where” pathway, specialized in the processing of auditory spatial information and residing in the parietal lobe (Maeder et al.,
<xref rid="B34" ref-type="bibr">2001</xref>
; Ahveninen et al.,
<xref rid="B1" ref-type="bibr">2006</xref>
; and for the tactile modality see Reed et al.,
<xref rid="B41" ref-type="bibr">2005</xref>
). However, recent studies investigating the medial temporal lobe of awake monkeys provide evidence for a spatial coding of the locus of overt and covert visual attention in these regions (Killian et al.,
<xref rid="B29" ref-type="bibr">2012</xref>
; Wilming et al.,
<xref rid="B57" ref-type="bibr">2015</xref>
). Furthermore, there are indications that separate modality-specific spatial processing systems converge at the temporoparietal junction (Coren et al.,
<xref rid="B16" ref-type="bibr">2004</xref>
). Overall, there is reason to believe that the spatial processing of stimuli from the auditory and visual modalities involves partly overlapping neuronal populations and that this could explain our finding that spatial attentional resources for the visual and auditory modalities are shared. Moreover, given that we also found shared spatial attentional resources between the visual and tactile modalities (Wahn and König,
<xref rid="B54" ref-type="bibr">2015</xref>
), this suggests that in a future study, shared spatial attentional resources between the auditory and tactile modalities could be found as well.</p>
<p>In contrast, with respect to the neural correlates for a “what” pathway, specializing in object identification, previous studies indicate separate neural substrates for the visual, auditory and tactile modalities (Reed et al.,
<xref rid="B41" ref-type="bibr">2005</xref>
; Ahveninen et al.,
<xref rid="B1" ref-type="bibr">2006</xref>
; Kietzmann et al.,
<xref rid="B28" ref-type="bibr">2012</xref>
,
<xref rid="B27" ref-type="bibr">2013</xref>
; but also see Amedi et al. (
<xref rid="B6" ref-type="bibr">2001</xref>
) for visuo-haptic object-related activation in the visual “what” pathway). These findings suggest that when two discrimination tasks are performed in separate sensory modalities, neural populations involved in processing should overlap less and evidence for distinct attentional resources could be found (see Alais et al.,
<xref rid="B2" ref-type="bibr">2006</xref>
for distinct attentional resources when two discrimination tasks are performed; but also see Chan and Newell,
<xref rid="B14" ref-type="bibr">2008</xref>
for interference between two discrimination tasks performed in separate sensory modalities).</p>
</sec>
<sec>
<title>4.2. Is audiovisual integration disrupted by attentional load?</title>
<p>In addition to investigating whether spatial attentional resources are shared between the auditory and visual modalities, we also investigated whether attentional load severely interferes with multisensory integration processes. In particular, we tested whether the predictions given by multisensory cue integration were still fulfilled if participants experienced a high attentional load. Participants performed a LOC task in which they received redundant visual and auditory location cues, performing this task either alone or in combination with a MOT task. We found that, irrespective of attentional load, participants integrated the multisensory cues, as their estimates of the locations were more accurate and less variable than in unimodal conditions in which they only received either auditory or visual location cues. In contrast to previous research that has shown that audiovisual integration is susceptible to attentional load during the perception of speech (Alsius et al.,
<xref rid="B3" ref-type="bibr">2005</xref>
), our findings indicate that audiovisual integration is not disrupted by attentional load when non-speech stimuli are received, which supports the view that audiovisual integration is a pre-attentive process. These findings are also in line with our previous study (Wahn and König,
<xref rid="B54" ref-type="bibr">2015</xref>
), in which redundant visual and tactile location cues were received and integrated despite attentional load, suggesting that neither audiovisual nor visuotactile integration of spatial information is disrupted by attentional load.</p>
<p>However, an alternative account of our findings would be that the improved location estimates when receiving multisensory location cues are due to a trial-by-trial strategy: participants could always use the location cue from whichever sensory modality they can interpret more accurately. Given that participants were considerably better in their location estimates when they received redundant visual and auditory location cues than when they received only unimodal cues, such a cue-selection account seems unlikely, but it cannot be fully excluded.</p>
<p>Another alternative explanation of our finding that multisensory integration processes were not disrupted by an additional spatial attentional load (due to simultaneous performance of a MOT task) could be that the spatial attentional resources for the two tasks are not shared. However, we believe that the bidirectional performance decrease in the dual task conditions suggests that the spatial attentional resources required for these two tasks do overlap. Therefore, we infer that the absence of a disruption of multisensory integration in the dual task condition can be explained by an early, pre-attentive account of multisensory integration.</p>
<p>A possible reason that we find no effect of attentional load on audiovisual integration could be that the attentional load manipulation was not strong enough. Therefore, we cannot exclude the possibility that an even higher attentional load could lead to a disruption of audiovisual integration processes. However, given that participants had a performance of about 70 percent in the MOT task, we think that the difficulty of the MOT task was within a reasonable range that allowed for the quantification of interference between tasks without approaching floor effects.</p>
<p>Another possible reason that we find no effect of attentional load, in contrast to other studies (Alsius et al.,
<xref rid="B3" ref-type="bibr">2005</xref>
; Talsma et al.,
<xref rid="B47" ref-type="bibr">2007</xref>
; Mozolic et al.,
<xref rid="B37" ref-type="bibr">2008</xref>
), is the type of stimuli used. While previous studies investigated the effect of attentional load during the perception of linguistic stimuli, the present study investigated the integration of auditory and visual location cues that did not carry any linguistic content. We suggest that the perception of linguistic stimuli could recruit additional top-down circuits that affect the integration of auditory and visual stimuli and that were not recruited in the present study. A future study using neurophysiological methods could contrast linguistic stimuli with purely spatial information during audiovisual integration and identify the brain regions involved under high additional attentional load.</p>
<p>Finally, as an alternative approach for future studies, the question of whether attentional load does affect audiovisual integration could be investigated with a crossmodal congruency task (Spence et al.,
<xref rid="B45" ref-type="bibr">2004</xref>
; Walton and Spence,
<xref rid="B55" ref-type="bibr">2004</xref>
) or a multisensory pattern matching task (Göschl et al.,
<xref rid="B20" ref-type="bibr">2014</xref>
,
<xref rid="B21" ref-type="bibr">2015</xref>
). With such tasks, the susceptibility to distractor stimuli could be investigated as a function of attentional load and more subtle effects of attentional load may then be detected.</p>
</sec>
<sec>
<title>4.3. Conclusion</title>
<p>Our investigation of the relation between attention and multisensory processes indicates that the types of tasks performed in separate sensory modalities determine whether separate attentional resources or one supramodal pool of resources is employed. These findings suggest that the distribution of attentional resources operates at the level of the task, independent of the sensory modalities involved. In addition, our findings indicate that high attentional load does not disrupt the integration of spatial information from several sensory modalities, suggesting an early account of multisensory integration that is independent of attentional resources. Taken together, the findings indicate that when several spatial tasks need to be performed simultaneously in several sensory modalities, multisensory processes seem to operate independently from and prior to attentional processes. Future studies using a combination of EEG and fMRI could further elucidate the time course of multisensory and attentional processes, the brain regions involved, and the extent to which these processes operate independently or overlap.</p>
</sec>
</sec>
<sec id="s5">
<title>Funding</title>
<p>We gratefully acknowledge the support by H2020—H2020-FETPROACT-2014 641321—socSMCs (for BW), FP7-ICT-270212—eSMCs (for PK and BW), and ERC-2010-AdG #269716—MULTISENSE (for PK).</p>
<sec>
<title>Conflict of interest statement</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
</sec>
</body>
<back>
<ack>
<p>As the present study is a close follow-up study to our previous study (Wahn and König,
<xref rid="B54" ref-type="bibr">2015</xref>
), in which tactile spatial cues instead of auditory spatial cues were used, we would like to acknowledge that content common to both studies was reproduced from the previous article with permission of the publishing company Koninklijke Brill NV. The previous study was published in Wahn and König (
<xref rid="B54" ref-type="bibr">2015</xref>
). In addition, we want to thank Supriya Murali and Anette Aumeistere for their help in creating the auditory stimuli and collecting the data.</p>
</ack>
<ref-list>
<title>References</title>
<ref id="B1">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ahveninen</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Jääskeläinen</surname>
<given-names>I. P.</given-names>
</name>
<name>
<surname>Raij</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Bonmassar</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Devore</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Hämäläinen</surname>
<given-names>M.</given-names>
</name>
<etal></etal>
</person-group>
. (
<year>2006</year>
).
<article-title>Task-modulated “what” and “where” pathways in human auditory cortex</article-title>
.
<source>Proc. Natl. Acad. Sci. U.S.A.</source>
<volume>103</volume>
,
<fpage>14608</fpage>
<lpage>14613</lpage>
.
<pub-id pub-id-type="doi">10.1073/pnas.0510480103</pub-id>
<pub-id pub-id-type="pmid">16983092</pub-id>
</mixed-citation>
</ref>
<ref id="B2">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Alais</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Morrone</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Burr</surname>
<given-names>D.</given-names>
</name>
</person-group>
(
<year>2006</year>
).
<article-title>Separate attentional resources for vision and audition</article-title>
.
<source>Proc. R. Soc. B Biol. Sci.</source>
<volume>273</volume>
,
<fpage>1339</fpage>
<lpage>1345</lpage>
.
<pub-id pub-id-type="doi">10.1098/rspb.2005.3420</pub-id>
</mixed-citation>
</ref>
<ref id="B3">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Alsius</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Navarra</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Campbell</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Soto-Faraco</surname>
<given-names>S.</given-names>
</name>
</person-group>
(
<year>2005</year>
).
<article-title>Audiovisual integration of speech falters under high attention demands</article-title>
.
<source>Curr. Biol.</source>
<volume>15</volume>
,
<fpage>839</fpage>
<lpage>843</lpage>
.
<pub-id pub-id-type="doi">10.1016/j.cub.2005.03.046</pub-id>
<pub-id pub-id-type="pmid">15886102</pub-id>
</mixed-citation>
</ref>
<ref id="B4">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Alsius</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Navarra</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Soto-Faraco</surname>
<given-names>S.</given-names>
</name>
</person-group>
(
<year>2007</year>
).
<article-title>Attention to touch weakens audiovisual speech integration</article-title>
.
<source>Exp. Brain Res.</source>
<volume>183</volume>
,
<fpage>399</fpage>
<lpage>404</lpage>
.
<pub-id pub-id-type="doi">10.1007/s00221-007-1110-1</pub-id>
<pub-id pub-id-type="pmid">17899043</pub-id>
</mixed-citation>
</ref>
<ref id="B5">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Alvarez</surname>
<given-names>G. A.</given-names>
</name>
<name>
<surname>Franconeri</surname>
<given-names>S. L.</given-names>
</name>
</person-group>
(
<year>2007</year>
).
<article-title>How many objects can you track?: evidence for a resource-limited attentive tracking mechanism</article-title>
.
<source>J. Vis.</source>
<volume>7</volume>
:
<fpage>14</fpage>
.
<pub-id pub-id-type="doi">10.1167/7.13.14</pub-id>
<pub-id pub-id-type="pmid">17997642</pub-id>
</mixed-citation>
</ref>
<ref id="B6">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Amedi</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Malach</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Hendler</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Peled</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Zohary</surname>
<given-names>E.</given-names>
</name>
</person-group>
(
<year>2001</year>
).
<article-title>Visuo-haptic object-related activation in the ventral visual pathway</article-title>
.
<source>Nat. Neurosci.</source>
<volume>4</volume>
,
<fpage>324</fpage>
<lpage>330</lpage>
.
<pub-id pub-id-type="doi">10.1038/85201</pub-id>
<pub-id pub-id-type="pmid">11224551</pub-id>
</mixed-citation>
</ref>
<ref id="B7">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Arnell</surname>
<given-names>K. M.</given-names>
</name>
<name>
<surname>Jenkins</surname>
<given-names>R.</given-names>
</name>
</person-group>
(
<year>2004</year>
).
<article-title>Revisiting within-modality and cross-modality attentional blinks: effects of target–distractor similarity</article-title>
.
<source>Percept. Psychophys.</source>
<volume>66</volume>
,
<fpage>1147</fpage>
<lpage>1161</lpage>
.
<pub-id pub-id-type="doi">10.3758/BF03196842</pub-id>
<pub-id pub-id-type="pmid">15751472</pub-id>
</mixed-citation>
</ref>
<ref id="B8">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Arnell</surname>
<given-names>K. M.</given-names>
</name>
<name>
<surname>Larson</surname>
<given-names>J. M.</given-names>
</name>
</person-group>
(
<year>2002</year>
).
<article-title>Cross-modality attentional blinks without preparatory task-set switching</article-title>
.
<source>Psychon. Bull. Rev.</source>
<volume>9</volume>
,
<fpage>497</fpage>
<lpage>506</lpage>
.
<pub-id pub-id-type="doi">10.3758/BF03196305</pub-id>
<pub-id pub-id-type="pmid">12412889</pub-id>
</mixed-citation>
</ref>
<ref id="B9">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Arrighi</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Lunardi</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Burr</surname>
<given-names>D.</given-names>
</name>
</person-group>
(
<year>2011</year>
).
<article-title>Vision and audition do not share attentional resources in sustained tasks</article-title>
.
<source>Front. Psychol.</source>
<volume>2</volume>
:
<issue>56</issue>
.
<pub-id pub-id-type="doi">10.3389/fpsyg.2011.00056</pub-id>
<pub-id pub-id-type="pmid">21734893</pub-id>
</mixed-citation>
</ref>
<ref id="B10">
<mixed-citation publication-type="webpage">
<person-group person-group-type="author">
<name>
<surname>Bates</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Maechler</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Bolker</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Walker</surname>
<given-names>S.</given-names>
</name>
</person-group>
(
<year>2014</year>
).
<source>lme4: Linear Mixed-effects Models using Eigen and S4</source>
. Available online at:
<ext-link ext-link-type="uri" xlink:href="http://CRAN.R-project.org/package=lme4">http://CRAN.R-project.org/package=lme4</ext-link>
</mixed-citation>
</ref>
<ref id="B11">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bertelson</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Vroomen</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>De Gelder</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Driver</surname>
<given-names>J.</given-names>
</name>
</person-group>
(
<year>2000</year>
).
<article-title>The ventriloquist effect does not depend on the direction of deliberate visual attention</article-title>
.
<source>Percept. Psychophys.</source>
<volume>62</volume>
,
<fpage>321</fpage>
<lpage>332</lpage>
.
<pub-id pub-id-type="doi">10.3758/BF03205552</pub-id>
<pub-id pub-id-type="pmid">10723211</pub-id>
</mixed-citation>
</ref>
<ref id="B12">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bonnel</surname>
<given-names>A.-M.</given-names>
</name>
<name>
<surname>Prinzmetal</surname>
<given-names>W.</given-names>
</name>
</person-group>
(
<year>1998</year>
).
<article-title>Dividing attention between the color and the shape of objects</article-title>
.
<source>Percept. Psychophys.</source>
<volume>60</volume>
,
<fpage>113</fpage>
<lpage>124</lpage>
.
<pub-id pub-id-type="doi">10.3758/BF03211922</pub-id>
<pub-id pub-id-type="pmid">9503916</pub-id>
</mixed-citation>
</ref>
<ref id="B13">
<mixed-citation publication-type="webpage">
<person-group person-group-type="author">
<name>
<surname>Bruin</surname>
<given-names>J.</given-names>
</name>
</person-group>
(
<year>2014</year>
).
<source>R Library: Contrast Coding Systems for Categorical Variables.</source>
Available online at:
<ext-link ext-link-type="uri" xlink:href="http://www.ats.ucla.edu/stat/r/library/contrast_coding.htm">http://www.ats.ucla.edu/stat/r/library/contrast_coding.htm</ext-link>
(Retrieved November, 2014).</mixed-citation>
</ref>
<ref id="B14">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Chan</surname>
<given-names>J. S.</given-names>
</name>
<name>
<surname>Newell</surname>
<given-names>F. N.</given-names>
</name>
</person-group>
(
<year>2008</year>
).
<article-title>Behavioral evidence for task-dependent “what” vs. “where” processing within and across modalities</article-title>
.
<source>Percept. Psychophys.</source>
<volume>70</volume>
,
<fpage>36</fpage>
<lpage>49</lpage>
.
<pub-id pub-id-type="doi">10.3758/PP.70.1.36</pub-id>
<pub-id pub-id-type="pmid">18306959</pub-id>
</mixed-citation>
</ref>
<ref id="B15">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Chun</surname>
<given-names>M. M.</given-names>
</name>
<name>
<surname>Golomb</surname>
<given-names>J. D.</given-names>
</name>
<name>
<surname>Turk-Browne</surname>
<given-names>N. B.</given-names>
</name>
</person-group>
(
<year>2011</year>
).
<article-title>A taxonomy of external and internal attention</article-title>
.
<source>Annu. Rev. Psychol.</source>
<volume>62</volume>
,
<fpage>73</fpage>
<lpage>101</lpage>
.
<pub-id pub-id-type="doi">10.1146/annurev.psych.093008.100427</pub-id>
<pub-id pub-id-type="pmid">19575619</pub-id>
</mixed-citation>
</ref>
<ref id="B16">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Coren</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Ward</surname>
<given-names>L. M.</given-names>
</name>
<name>
<surname>Enns</surname>
<given-names>J. T.</given-names>
</name>
</person-group>
(
<year>2004</year>
).
<source>Sensation and Perception, 6th Edn.</source>
<publisher-loc>New York, NY</publisher-loc>
:
<publisher-name>Wiley</publisher-name>
.</mixed-citation>
</ref>
<ref id="B17">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Duncan</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Martens</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Ward</surname>
<given-names>R.</given-names>
</name>
</person-group>
(
<year>1997</year>
).
<article-title>Restricted attentional capacity within but not between sensory modalities</article-title>
.
<source>Nature</source>
<volume>397</volume>
,
<fpage>808</fpage>
<lpage>810</lpage>
.
<pub-id pub-id-type="doi">10.1038/42947</pub-id>
<pub-id pub-id-type="pmid">9194561</pub-id>
</mixed-citation>
</ref>
<ref id="B18">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Ernst</surname>
<given-names>M. O.</given-names>
</name>
</person-group>
(
<year>2006</year>
).
<article-title>A bayesian view on multimodal cue integration</article-title>
, in
<source>Human Body Perception From the Inside Out</source>
, Chapter 6, eds
<person-group person-group-type="editor">
<name>
<surname>Knoblich</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Thornton</surname>
<given-names>I. M.</given-names>
</name>
<name>
<surname>Grosjean</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Shiffrar</surname>
<given-names>M.</given-names>
</name>
</person-group>
(
<publisher-loc>New York, NY</publisher-loc>
:
<publisher-name>Oxford University Press</publisher-name>
),
<fpage>105</fpage>
<lpage>131</lpage>
.</mixed-citation>
</ref>
<ref id="B19">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ernst</surname>
<given-names>M. O.</given-names>
</name>
<name>
<surname>Bülthoff</surname>
<given-names>H. H.</given-names>
</name>
</person-group>
(
<year>2004</year>
).
<article-title>Merging the senses into a robust percept</article-title>
.
<source>Trends Cogn. Sci.</source>
<volume>8</volume>
,
<fpage>162</fpage>
<lpage>169</lpage>
.
<pub-id pub-id-type="doi">10.1016/j.tics.2004.02.002</pub-id>
<pub-id pub-id-type="pmid">15050512</pub-id>
</mixed-citation>
</ref>
<ref id="B20">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Göschl</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Engel</surname>
<given-names>A. K.</given-names>
</name>
<name>
<surname>Friese</surname>
<given-names>U.</given-names>
</name>
</person-group>
(
<year>2014</year>
).
<article-title>Attention modulates visual-tactile interaction in spatial pattern matching</article-title>
.
<source>PLoS ONE</source>
<volume>9</volume>
:
<fpage>e106896</fpage>
.
<pub-id pub-id-type="doi">10.1371/journal.pone.0106896</pub-id>
<pub-id pub-id-type="pmid">25203102</pub-id>
</mixed-citation>
</ref>
<ref id="B21">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Göschl</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Friese</surname>
<given-names>U.</given-names>
</name>
<name>
<surname>Daume</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>König</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Engel</surname>
<given-names>A. K.</given-names>
</name>
</person-group>
(
<year>2015</year>
).
<article-title>Oscillatory signatures of crossmodal congruence effects: an EEG investigation employing a visuotactile pattern matching paradigm</article-title>
.
<source>Neuroimage</source>
<volume>116</volume>
,
<fpage>177</fpage>
<lpage>186</lpage>
.
<pub-id pub-id-type="doi">10.1016/j.neuroimage.2015.03.067</pub-id>
<pub-id pub-id-type="pmid">25846580</pub-id>
</mixed-citation>
</ref>
<ref id="B22">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Green</surname>
<given-names>C. S.</given-names>
</name>
<name>
<surname>Bavelier</surname>
<given-names>D.</given-names>
</name>
</person-group>
(
<year>2006</year>
).
<article-title>Enumeration versus multiple object tracking: the case of action video game players</article-title>
.
<source>Cognition</source>
<volume>101</volume>
,
<fpage>217</fpage>
<lpage>245</lpage>
.
<pub-id pub-id-type="doi">10.1016/j.cognition.2005.10.004</pub-id>
<pub-id pub-id-type="pmid">16359652</pub-id>
</mixed-citation>
</ref>
<ref id="B23">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hein</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Parr</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Duncan</surname>
<given-names>J.</given-names>
</name>
</person-group>
(
<year>2006</year>
).
<article-title>Within-modality and cross-modality attentional blinks in a simple discrimination task</article-title>
.
<source>Percept. Psychophys.</source>
<volume>68</volume>
,
<fpage>54</fpage>
<lpage>61</lpage>
.
<pub-id pub-id-type="doi">10.3758/BF03193655</pub-id>
<pub-id pub-id-type="pmid">16617829</pub-id>
</mixed-citation>
</ref>
<ref id="B24">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hillstrom</surname>
<given-names>A. P.</given-names>
</name>
<name>
<surname>Shapiro</surname>
<given-names>K. L.</given-names>
</name>
<name>
<surname>Spence</surname>
<given-names>C.</given-names>
</name>
</person-group>
(
<year>2002</year>
).
<article-title>Attentional limitations in processing sequentially presented vibrotactile targets</article-title>
.
<source>Percept. Psychophys.</source>
<volume>64</volume>
,
<fpage>1068</fpage>
<lpage>1082</lpage>
.
<pub-id pub-id-type="doi">10.3758/BF03194757</pub-id>
<pub-id pub-id-type="pmid">12489662</pub-id>
</mixed-citation>
</ref>
<ref id="B25">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>James</surname>
<given-names>W.</given-names>
</name>
</person-group>
(
<year>1890</year>
).
<source>The Principles of Psychology.</source>
<publisher-loc>Cambridge, MA</publisher-loc>
:
<publisher-name>Harvard University Press</publisher-name>
.</mixed-citation>
</ref>
<ref id="B26">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Jolicoeur</surname>
<given-names>P.</given-names>
</name>
</person-group>
(
<year>1999</year>
).
<article-title>Restricted attentional capacity between sensory modalities</article-title>
.
<source>Psychon. Bull. Rev.</source>
<volume>6</volume>
,
<fpage>87</fpage>
<lpage>92</lpage>
.
<pub-id pub-id-type="doi">10.3758/BF03210813</pub-id>
<pub-id pub-id-type="pmid">12199316</pub-id>
</mixed-citation>
</ref>
<ref id="B27">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kietzmann</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Wahn</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>König</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Tong</surname>
<given-names>F.</given-names>
</name>
</person-group>
(
<year>2013</year>
).
<article-title>Face selective areas in the human ventral stream exhibit a preference for 3/4 views in the fovea and periphery</article-title>
.
<source>Percept. ECVP Abstr.</source>
<volume>42</volume>
,
<fpage>54</fpage>
Available online at:
<ext-link ext-link-type="uri" xlink:href="http://www.perceptionweb.com/abstract.cgi?id=v130568">http://www.perceptionweb.com/abstract.cgi?id=v130568</ext-link>
</mixed-citation>
</ref>
<ref id="B28">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kietzmann</surname>
<given-names>T. C.</given-names>
</name>
<name>
<surname>Swisher</surname>
<given-names>J. D.</given-names>
</name>
<name>
<surname>König</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Tong</surname>
<given-names>F.</given-names>
</name>
</person-group>
(
<year>2012</year>
).
<article-title>Prevalence of selectivity for mirror-symmetric views of faces in the ventral and dorsal visual pathways</article-title>
.
<source>J. Neurosci.</source>
<volume>32</volume>
,
<fpage>11763</fpage>
<lpage>11772</lpage>
.
<pub-id pub-id-type="doi">10.1523/JNEUROSCI.0126-12.2012</pub-id>
<pub-id pub-id-type="pmid">22915118</pub-id>
</mixed-citation>
</ref>
<ref id="B29">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Killian</surname>
<given-names>N. J.</given-names>
</name>
<name>
<surname>Jutras</surname>
<given-names>M. J.</given-names>
</name>
<name>
<surname>Buffalo</surname>
<given-names>E. A.</given-names>
</name>
</person-group>
(
<year>2012</year>
).
<article-title>A map of visual space in the primate entorhinal cortex</article-title>
.
<source>Nature</source>
<volume>491</volume>
,
<fpage>761</fpage>
<lpage>764</lpage>
.
<pub-id pub-id-type="doi">10.1038/nature11587</pub-id>
<pub-id pub-id-type="pmid">23103863</pub-id>
</mixed-citation>
</ref>
<ref id="B30">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Koelewijn</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Bronkhorst</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Theeuwes</surname>
<given-names>J.</given-names>
</name>
</person-group>
(
<year>2010</year>
).
<article-title>Attention and the multiple stages of multisensory integration: a review of audiovisual studies</article-title>
.
<source>Acta Psychol.</source>
<volume>134</volume>
,
<fpage>372</fpage>
<lpage>384</lpage>
.
<pub-id pub-id-type="doi">10.1016/j.actpsy.2010.03.010</pub-id>
</mixed-citation>
</ref>
<ref id="B31">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>LaBerge</surname>
<given-names>D.</given-names>
</name>
</person-group>
(
<year>1995</year>
).
<source>Attentional Processing: The Brain's Art of Mindfulness</source>
,
<volume>Vol. 2</volume>
<publisher-loc>Cambridge, MA</publisher-loc>
:
<publisher-name>Harvard University Press</publisher-name>
<pub-id pub-id-type="doi">10.4159/harvard.9780674183940</pub-id>
</mixed-citation>
</ref>
<ref id="B32">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Leifeld</surname>
<given-names>P.</given-names>
</name>
</person-group>
(
<year>2013</year>
).
<article-title>texreg: conversion of statistical model output in R to L<sup>A</sup>T<sub>E</sub>X and HTML tables</article-title>
.
<source>J. Stat. Softw.</source>
<volume>55</volume>
,
<fpage>1</fpage>
<lpage>24</lpage>
. Available online at:
<ext-link ext-link-type="uri" xlink:href="http://cran.gis-lab.info/web/packages/texreg/vignettes/v55i08.pdf">http://cran.gis-lab.info/web/packages/texreg/vignettes/v55i08.pdf</ext-link>
</mixed-citation>
</ref>
<ref id="B33">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Livingstone</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Hubel</surname>
<given-names>D.</given-names>
</name>
</person-group>
(
<year>1988</year>
).
<article-title>Segregation of form, color, movement, and depth: anatomy, physiology, and perception</article-title>
.
<source>Science</source>
<volume>240</volume>
,
<fpage>740</fpage>
<lpage>749</lpage>
.
<pub-id pub-id-type="doi">10.1126/science.3283936</pub-id>
<pub-id pub-id-type="pmid">3283936</pub-id>
</mixed-citation>
</ref>
<ref id="B34">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Maeder</surname>
<given-names>P. P.</given-names>
</name>
<name>
<surname>Meuli</surname>
<given-names>R. A.</given-names>
</name>
<name>
<surname>Adriani</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Bellmann</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Fornari</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Thiran</surname>
<given-names>J.-P.</given-names>
</name>
<etal></etal>
</person-group>
. (
<year>2001</year>
).
<article-title>Distinct pathways involved in sound recognition and localization: a human fMRI study</article-title>
.
<source>Neuroimage</source>
<volume>14</volume>
,
<fpage>802</fpage>
<lpage>816</lpage>
.
<pub-id pub-id-type="doi">10.1006/nimg.2001.0888</pub-id>
<pub-id pub-id-type="pmid">11554799</pub-id>
</mixed-citation>
</ref>
<ref id="B35">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Marois</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Ivanoff</surname>
<given-names>J.</given-names>
</name>
</person-group>
(
<year>2005</year>
).
<article-title>Capacity limits of information processing in the brain</article-title>
.
<source>Trends Cogn. Sci.</source>
<volume>9</volume>
,
<fpage>296</fpage>
<lpage>305</lpage>
.
<pub-id pub-id-type="doi">10.1016/j.tics.2005.04.010</pub-id>
<pub-id pub-id-type="pmid">15925809</pub-id>
</mixed-citation>
</ref>
<ref id="B36">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>McGurk</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>MacDonald</surname>
<given-names>J.</given-names>
</name>
</person-group>
(
<year>1976</year>
).
<article-title>Hearing lips and seeing voices</article-title>
.
<source>Nature</source>
<volume>264</volume>
,
<fpage>746</fpage>
<lpage>748</lpage>
.
<pub-id pub-id-type="pmid">1012311</pub-id>
</mixed-citation>
</ref>
<ref id="B37">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Mozolic</surname>
<given-names>J. L.</given-names>
</name>
<name>
<surname>Hugenschmidt</surname>
<given-names>C. E.</given-names>
</name>
<name>
<surname>Peiffer</surname>
<given-names>A. M.</given-names>
</name>
<name>
<surname>Laurienti</surname>
<given-names>P. J.</given-names>
</name>
</person-group>
(
<year>2008</year>
).
<article-title>Modality-specific selective attention attenuates multisensory integration</article-title>
.
<source>Exp. Brain Res.</source>
<volume>184</volume>
,
<fpage>39</fpage>
<lpage>52</lpage>
.
<pub-id pub-id-type="doi">10.1007/s00221-007-1080-3</pub-id>
<pub-id pub-id-type="pmid">17684735</pub-id>
</mixed-citation>
</ref>
<ref id="B38">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Navarra</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Alsius</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Soto-Faraco</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Spence</surname>
<given-names>C.</given-names>
</name>
</person-group>
(
<year>2010</year>
).
<article-title>Assessing the role of attention in the audiovisual integration of speech</article-title>
.
<source>Inf. Fusion</source>
<volume>11</volume>
,
<fpage>4</fpage>
<lpage>11</lpage>
.
<pub-id pub-id-type="doi">10.1016/j.inffus.2009.04.001</pub-id>
</mixed-citation>
</ref>
<ref id="B39">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Potter</surname>
<given-names>M. C.</given-names>
</name>
<name>
<surname>Chun</surname>
<given-names>M. M.</given-names>
</name>
<name>
<surname>Banks</surname>
<given-names>B. S.</given-names>
</name>
<name>
<surname>Muckenhoupt</surname>
<given-names>M.</given-names>
</name>
</person-group>
(
<year>1998</year>
).
<article-title>Two attentional deficits in serial target search: the visual attentional blink and an amodal task-switch deficit</article-title>
.
<source>J. Exp. Psychol.</source>
<volume>24</volume>
,
<fpage>979</fpage>
<lpage>992</lpage>
.
<pub-id pub-id-type="doi">10.1037/0278-7393.24.4.979</pub-id>
</mixed-citation>
</ref>
<ref id="B40">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Pylyshyn</surname>
<given-names>Z. W.</given-names>
</name>
<name>
<surname>Storm</surname>
<given-names>R. W.</given-names>
</name>
</person-group>
(
<year>1988</year>
).
<article-title>Tracking multiple independent targets: evidence for a parallel tracking mechanism</article-title>
.
<source>Spatial Vis.</source>
<volume>3</volume>
,
<fpage>179</fpage>
<lpage>197</lpage>
.
<pub-id pub-id-type="doi">10.1163/156856888X00122</pub-id>
</mixed-citation>
</ref>
<ref id="B41">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Reed</surname>
<given-names>C. L.</given-names>
</name>
<name>
<surname>Klatzky</surname>
<given-names>R. L.</given-names>
</name>
<name>
<surname>Halgren</surname>
<given-names>E.</given-names>
</name>
</person-group>
(
<year>2005</year>
).
<article-title>What vs. where in touch: an fMRI study</article-title>
.
<source>Neuroimage</source>
<volume>25</volume>
,
<fpage>718</fpage>
<lpage>726</lpage>
.
<pub-id pub-id-type="doi">10.1016/j.neuroimage.2004.11.044</pub-id>
<pub-id pub-id-type="pmid">15808973</pub-id>
</mixed-citation>
</ref>
<ref id="B42">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Soto-Faraco</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Navarra</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Alsius</surname>
<given-names>A.</given-names>
</name>
</person-group>
(
<year>2004</year>
).
<article-title>Assessing automaticity in audiovisual speech integration: evidence from the speeded classification task</article-title>
.
<source>Cognition</source>
<volume>92</volume>
,
<fpage>B13</fpage>
<lpage>B23</lpage>
.
<pub-id pub-id-type="doi">10.1016/j.cognition.2003.10.005</pub-id>
<pub-id pub-id-type="pmid">15019556</pub-id>
</mixed-citation>
</ref>
<ref id="B43">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Soto-Faraco</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Spence</surname>
<given-names>C.</given-names>
</name>
</person-group>
(
<year>2002</year>
).
<article-title>Modality-specific auditory and visual temporal processing deficits</article-title>
.
<source>Q. J. Exp. Psychol.</source>
<volume>55</volume>
,
<fpage>23</fpage>
<lpage>40</lpage>
.
<pub-id pub-id-type="doi">10.1080/02724980143000136</pub-id>
</mixed-citation>
</ref>
<ref id="B44">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Soto-Faraco</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Spence</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Fairbank</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Kingstone</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Hillstrom</surname>
<given-names>A. P.</given-names>
</name>
<name>
<surname>Shapiro</surname>
<given-names>K.</given-names>
</name>
</person-group>
(
<year>2002</year>
).
<article-title>A crossmodal attentional blink between vision and touch</article-title>
.
<source>Psychon. Bull. Rev.</source>
<volume>9</volume>
,
<fpage>731</fpage>
<lpage>738</lpage>
.
<pub-id pub-id-type="doi">10.3758/BF03196328</pub-id>
<pub-id pub-id-type="pmid">12613676</pub-id>
</mixed-citation>
</ref>
<ref id="B45">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Spence</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Pavani</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Driver</surname>
<given-names>J.</given-names>
</name>
</person-group>
(
<year>2004</year>
).
<article-title>Spatial constraints on visual-tactile cross-modal distractor congruency effects</article-title>
.
<source>Cogn. Affect. Behav. Neurosci.</source>
<volume>4</volume>
,
<fpage>148</fpage>
<lpage>169</lpage>
.
<pub-id pub-id-type="doi">10.3758/CABN.4.2.148</pub-id>
<pub-id pub-id-type="pmid">15460922</pub-id>
</mixed-citation>
</ref>
<ref id="B46">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Talsma</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Doty</surname>
<given-names>T. J.</given-names>
</name>
<name>
<surname>Strowd</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Woldorff</surname>
<given-names>M. G.</given-names>
</name>
</person-group>
(
<year>2006</year>
).
<article-title>Attentional capacity for processing concurrent stimuli is larger across sensory modalities than within a modality</article-title>
.
<source>Psychophysiology</source>
<volume>43</volume>
,
<fpage>541</fpage>
<lpage>549</lpage>
.
<pub-id pub-id-type="doi">10.1111/j.1469-8986.2006.00452.x</pub-id>
<pub-id pub-id-type="pmid">17076810</pub-id>
</mixed-citation>
</ref>
<ref id="B47">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Talsma</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Doty</surname>
<given-names>T. J.</given-names>
</name>
<name>
<surname>Woldorff</surname>
<given-names>M. G.</given-names>
</name>
</person-group>
(
<year>2007</year>
).
<article-title>Selective attention and audiovisual integration: is attending to both modalities a prerequisite for early integration?</article-title>
<source>Cereb. Cortex</source>
<volume>17</volume>
,
<fpage>679</fpage>
<lpage>690</lpage>
.
<pub-id pub-id-type="doi">10.1093/cercor/bhk016</pub-id>
<pub-id pub-id-type="pmid">16707740</pub-id>
</mixed-citation>
</ref>
<ref id="B48">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Talsma</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Woldorff</surname>
<given-names>M. G.</given-names>
</name>
</person-group>
(
<year>2005</year>
).
<article-title>Selective attention and multisensory integration: multiple phases of effects on the evoked brain activity</article-title>
.
<source>J. Cogn. Neurosci.</source>
<volume>17</volume>
,
<fpage>1098</fpage>
<lpage>1114</lpage>
.
<pub-id pub-id-type="doi">10.1162/0898929054475172</pub-id>
<pub-id pub-id-type="pmid">16102239</pub-id>
</mixed-citation>
</ref>
<ref id="B49">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Tremblay</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Vachon</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Jones</surname>
<given-names>D. M.</given-names>
</name>
</person-group>
(
<year>2005</year>
).
<article-title>Attentional and perceptual sources of the auditory attentional blink</article-title>
.
<source>Percept. Psychophys.</source>
<volume>67</volume>
,
<fpage>195</fpage>
<lpage>208</lpage>
.
<pub-id pub-id-type="doi">10.3758/BF03206484</pub-id>
<pub-id pub-id-type="pmid">15971684</pub-id>
</mixed-citation>
</ref>
<ref id="B50">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Twisk</surname>
<given-names>J. W.</given-names>
</name>
</person-group>
(
<year>2006</year>
).
<source>Applied Multilevel Analysis</source>
.
<publisher-loc>Cambridge, UK</publisher-loc>
:
<publisher-name>Cambridge University Press</publisher-name>
<pub-id pub-id-type="doi">10.1017/cbo9780511610806</pub-id>
</mixed-citation>
</ref>
<ref id="B51">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>van der Burg</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Olivers</surname>
<given-names>C. N.</given-names>
</name>
<name>
<surname>Bronkhorst</surname>
<given-names>A. W.</given-names>
</name>
<name>
<surname>Theeuwes</surname>
<given-names>J.</given-names>
</name>
</person-group>
(
<year>2008</year>
).
<article-title>Pip and pop: nonspatial auditory signals improve spatial visual search</article-title>
.
<source>J. Exp. Psychol.</source>
<volume>34</volume>
:
<fpage>1053</fpage>
<pub-id pub-id-type="doi">10.1037/0096-1523.34.5.1053</pub-id>
</mixed-citation>
</ref>
<ref id="B52">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>van der Burg</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Olivers</surname>
<given-names>C. N. L.</given-names>
</name>
<name>
<surname>Bronkhorst</surname>
<given-names>A. W.</given-names>
</name>
<name>
<surname>Koelewijn</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Theeuwes</surname>
<given-names>J.</given-names>
</name>
</person-group>
(
<year>2007</year>
).
<article-title>The absence of an auditory–visual attentional blink is not due to echoic memory</article-title>
.
<source>Percept. Psychophys.</source>
<volume>69</volume>
,
<fpage>1230</fpage>
<lpage>1241</lpage>
.
<pub-id pub-id-type="doi">10.3758/BF03193958</pub-id>
<pub-id pub-id-type="pmid">18038959</pub-id>
</mixed-citation>
</ref>
<ref id="B53">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Vroomen</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Bertelson</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>De Gelder</surname>
<given-names>B.</given-names>
</name>
</person-group>
(
<year>2001</year>
).
<article-title>The ventriloquist effect does not depend on the direction of automatic visual attention</article-title>
.
<source>Percept. Psychophys.</source>
<volume>63</volume>
,
<fpage>651</fpage>
<lpage>659</lpage>
.
<pub-id pub-id-type="doi">10.3758/BF03194427</pub-id>
<pub-id pub-id-type="pmid">11436735</pub-id>
</mixed-citation>
</ref>
<ref id="B54">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wahn</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>König</surname>
<given-names>P.</given-names>
</name>
</person-group>
(
<year>2015</year>
).
<article-title>Vision and haptics share spatial attentional resources and visuotactile integration is not affected by high attentional load</article-title>
.
<source>Multisensory Res.</source>
<volume>28</volume>
,
<fpage>371</fpage>
<lpage>392</lpage>
.
<pub-id pub-id-type="doi">10.1163/22134808-00002482</pub-id>
</mixed-citation>
</ref>
<ref id="B55">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Walton</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Spence</surname>
<given-names>C.</given-names>
</name>
</person-group>
(
<year>2004</year>
).
<article-title>Cross-modal congruency and visual capture in a visual elevation-discrimination task</article-title>
.
<source>Exp. Brain Res.</source>
<volume>154</volume>
,
<fpage>113</fpage>
<lpage>120</lpage>
.
<pub-id pub-id-type="doi">10.1007/s00221-003-1706-z</pub-id>
<pub-id pub-id-type="pmid">14579008</pub-id>
</mixed-citation>
</ref>
<ref id="B56">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Wickham</surname>
<given-names>H.</given-names>
</name>
</person-group>
(
<year>2009</year>
).
<source>ggplot2: Elegant Graphics for Data Analysis</source>
.
<publisher-loc>New York, NY</publisher-loc>
:
<publisher-name>Springer</publisher-name>
<pub-id pub-id-type="doi">10.1007/978-0-387-98141-3</pub-id>
</mixed-citation>
</ref>
<ref id="B57">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Wilming</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>König</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Buffalo</surname>
<given-names>E. A.</given-names>
</name>
</person-group>
(
<year>2015</year>
).
<article-title>Grid cells reflect the locus of attention, even in the absence of movement</article-title>
, in
<source>Cosyne2015 Main Meeting Program</source>
(
<publisher-loc>Salt Lake City, UT</publisher-loc>
),
<fpage>33</fpage>
.</mixed-citation>
</ref>
</ref-list>
</back>
</pmc>
</record>

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/HapticV1/Data/Pmc/Curation
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 000203 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/Pmc/Curation/biblio.hfd -nk 000203 | SxmlIndent | more
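
As a rough alternative for readers without the Dilib toolchain, the minimal Python sketch below (standard library only) lists the bibliographic references of a locally exported copy of this record. The file name 000203.xml is a placeholder, and the sketch assumes the exported XML declares the namespace prefixes it uses (e.g. xlink); it is not part of the Dilib or Wicri tooling.

# Sketch only: inspect a locally exported copy of this record without Dilib.
# "000203.xml" is a placeholder file name, not an output of the commands above.
import xml.etree.ElementTree as ET

def list_references(path="000203.xml"):
    """Print surname(s), year, and title for each <ref> of the record."""
    root = ET.parse(path).getroot()
    for ref in root.iter("ref"):
        cit = ref.find("mixed-citation")
        if cit is None:
            continue
        surnames = [s.text for s in cit.iter("surname") if s.text]
        year = cit.findtext(".//year", default="n.d.")
        title_el = cit.find(".//article-title")
        if title_el is None:
            title_el = cit.find(".//source")
        title = "".join(title_el.itertext()).strip() if title_el is not None else ""
        print(f'{", ".join(surnames)} ({year}): {title}')

if __name__ == "__main__":
    list_references()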

To add a link to this page in the Wicri network

{{Explor lien
   |wiki=    Ticri/CIDE
   |area=    HapticV1
   |flux=    Pmc
   |étape=   Curation
   |type=    RBID
   |clé=     PMC:4518141
   |texte=   Audition and vision share spatial attentional resources, yet attentional load does not disrupt audiovisual integration
}}
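
The {{Explor lien}} template above can also be produced programmatically. The helper below is a minimal sketch and not part of the Wicri tooling; the function name explor_link and its default arguments are illustrative assumptions that simply reproduce the fields shown above.

# Sketch only: reproduce the {{Explor lien}} template shown above.
# Function name and defaults are illustrative, not a Wicri API.
def explor_link(rbid, title, wiki="Ticri/CIDE", area="HapticV1",
                flux="Pmc", etape="Curation"):
    return ("{{Explor lien\n"
            f"   |wiki=    {wiki}\n"
            f"   |area=    {area}\n"
            f"   |flux=    {flux}\n"
            f"   |étape=   {etape}\n"
            "   |type=    RBID\n"
            f"   |clé=     {rbid}\n"
            f"   |texte=   {title}\n"
            "}}")

print(explor_link("PMC:4518141",
                  "Audition and vision share spatial attentional resources, "
                  "yet attentional load does not disrupt audiovisual integration"))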

To generate wiki pages

HfdIndexSelect -h $EXPLOR_AREA/Data/Pmc/Curation/RBID.i   -Sk "pubmed:26284008" \
       | HfdSelect -Kh $EXPLOR_AREA/Data/Pmc/Curation/biblio.hfd   \
       | NlmPubMed2Wicri -a HapticV1 

Wicri

This area was generated with Dilib version V0.6.23.
Data generation: Mon Jun 13 01:09:46 2016. Site generation: Wed Mar 6 09:54:07 2024