Exploration server on haptic devices

Warning: this site is under development!
Warning: this site was generated automatically from raw corpora.
The information has therefore not been validated.

Cross-modal links in spatial attention.

Internal identifier: 002686 (Pmc/Checkpoint); previous: 002685; next: 002687


Authors: J. Driver; C. Spence

Source:

RBID: PMC:1692335

Abstract

A great deal is now known about the effects of spatial attention within individual sensory modalities, especially for vision and audition. However, there has been little previous study of possible cross-modal links in attention. Here, we review recent findings from our own experiments on this topic, which reveal extensive spatial links between the modalities. An irrelevant but salient event presented within touch, audition, or vision, can attract covert spatial attention in the other modalities (with the one exception that visual events do not attract auditory attention when saccades are prevented). By shifting receptors in one modality relative to another, the spatial coordinates of these cross-modal interactions can be examined. For instance, when a hand is placed in a new position, stimulation of it now draws visual attention to a correspondingly different location, although some aspects of attention do not spatially remap in this way. Cross-modal links are also evident in voluntary shifts of attention. When a person strongly expects a target in one modality (e.g. audition) to appear in a particular location, their judgements improve at that location not only for the expected modality but also for other modalities (e.g. vision), even if events in the latter modality are somewhat more likely elsewhere. Finally, some of our experiments suggest that information from different sensory modalities may be integrated preattentively, to produce the multimodal internal spatial representations in which attention can be directed. Such preattentive cross-modal integration can, in some cases, produce helpful illusions that increase the efficiency of selective attention in complex scenes.


URL:
PubMed: 9770225
PubMed Central: 1692335


Affiliations:


Links to previous steps (curation, corpus...)


Links to Exploration step

PMC:1692335

The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">Cross-modal links in spatial attention.</title>
<author>
<name sortKey="Driver, J" sort="Driver, J" uniqKey="Driver J" first="J" last="Driver">J. Driver</name>
</author>
<author>
<name sortKey="Spence, C" sort="Spence, C" uniqKey="Spence C" first="C" last="Spence">C. Spence</name>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PMC</idno>
<idno type="pmid">9770225</idno>
<idno type="pmc">1692335</idno>
<idno type="url">http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1692335</idno>
<idno type="RBID">PMC:1692335</idno>
<date when="1998">1998</date>
<idno type="wicri:Area/Pmc/Corpus">000B76</idno>
<idno type="wicri:Area/Pmc/Curation">000B76</idno>
<idno type="wicri:Area/Pmc/Checkpoint">002686</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a" type="main">Cross-modal links in spatial attention.</title>
<author>
<name sortKey="Driver, J" sort="Driver, J" uniqKey="Driver J" first="J" last="Driver">J. Driver</name>
</author>
<author>
<name sortKey="Spence, C" sort="Spence, C" uniqKey="Spence C" first="C" last="Spence">C. Spence</name>
</author>
</analytic>
<series>
<title level="j">Philosophical Transactions of the Royal Society B: Biological Sciences</title>
<idno type="ISSN">0962-8436</idno>
<idno type="eISSN">1471-2970</idno>
<imprint>
<date when="1998">1998</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass></textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">
<p>A great deal is now known about the effects of spatial attention within individual sensory modalities, especially for vision and audition. However, there has been little previous study of possible cross-modal links in attention. Here, we review recent findings from our own experiments on this topic, which reveal extensive spatial links between the modalities. An irrelevant but salient event presented within touch, audition, or vision, can attract covert spatial attention in the other modalities (with the one exception that visual events do not attract auditory attention when saccades are prevented). By shifting receptors in one modality relative to another, the spatial coordinates of these cross-modal interactions can be examined. For instance, when a hand is placed in a new position, stimulation of it now draws visual attention to a correspondingly different location, although some aspects of attention do not spatially remap in this way. Cross-modal links are also evident in voluntary shifts of attention. When a person strongly expects a target in one modality (e.g. audition) to appear in a particular location, their judgements improve at that location not only for the expected modality but also for other modalities (e.g. vision), even if events in the latter modality are somewhat more likely elsewhere. Finally, some of our experiments suggest that information from different sensory modalities may be integrated preattentively, to produce the multimodal internal spatial representations in which attention can be directed. Such preattentive cross-modal integration can, in some cases, produce helpful illusions that increase the efficiency of selective attention in complex scenes.</p>
</div>
</front>
</TEI>
<pmc article-type="review-article">
<pmc-comment>The publisher of this article does not allow downloading of the full text in XML form.</pmc-comment>
<front>
<journal-meta>
<journal-id journal-id-type="nlm-ta">Philos Trans R Soc Lond B Biol Sci</journal-id>
<journal-title>Philosophical Transactions of the Royal Society B: Biological Sciences</journal-title>
<issn pub-type="ppub">0962-8436</issn>
<issn pub-type="epub">1471-2970</issn>
</journal-meta>
<article-meta>
<article-id pub-id-type="pmid">9770225</article-id>
<article-id pub-id-type="pmc">1692335</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Research Article</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>Cross-modal links in spatial attention.</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name>
<surname>Driver</surname>
<given-names>J</given-names>
</name>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Spence</surname>
<given-names>C</given-names>
</name>
</contrib>
</contrib-group>
<aff>Department of Psychology, University College London, UK. j.driver@ucl.ac.uk</aff>
<pub-date pub-type="ppub">
<day>29</day>
<month>8</month>
<year>1998</year>
</pub-date>
<volume>353</volume>
<issue>1373</issue>
<fpage>1319</fpage>
<lpage>1331</lpage>
<abstract>
<p>A great deal is now known about the effects of spatial attention within individual sensory modalities, especially for vision and audition. However, there has been little previous study of possible cross-modal links in attention. Here, we review recent findings from our own experiments on this topic, which reveal extensive spatial links between the modalities. An irrelevant but salient event presented within touch, audition, or vision, can attract covert spatial attention in the other modalities (with the one exception that visual events do not attract auditory attention when saccades are prevented). By shifting receptors in one modality relative to another, the spatial coordinates of these cross-modal interactions can be examined. For instance, when a hand is placed in a new position, stimulation of it now draws visual attention to a correspondingly different location, although some aspects of attention do not spatially remap in this way. Cross-modal links are also evident in voluntary shifts of attention. When a person strongly expects a target in one modality (e.g. audition) to appear in a particular location, their judgements improve at that location not only for the expected modality but also for other modalities (e.g. vision), even if events in the latter modality are somewhat more likely elsewhere. Finally, some of our experiments suggest that information from different sensory modalities may be integrated preattentively, to produce the multimodal internal spatial representations in which attention can be directed. Such preattentive cross-modal integration can, in some cases, produce helpful illusions that increase the efficiency of selective attention in complex scenes.</p>
</abstract>
</article-meta>
</front>
</pmc>
<affiliations>
<list></list>
<tree>
<noCountry>
<name sortKey="Driver, J" sort="Driver, J" uniqKey="Driver J" first="J" last="Driver">J. Driver</name>
<name sortKey="Spence, C" sort="Spence, C" uniqKey="Spence C" first="C" last="Spence">C. Spence</name>
</noCountry>
</tree>
</affiliations>
</record>
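The record above follows a TEI-style layout: the title and authors sit under `titleStmt`, and the PubMed, PMC, and RBID identifiers are carried by typed `<idno>` elements in `publicationStmt`. As a minimal sketch of how such a record can be consumed programmatically (using Python's standard `xml.etree.ElementTree`, not a Dilib tool; the inline XML is an abbreviated copy of the record on this page):

```python
import xml.etree.ElementTree as ET

# Abbreviated copy of the <record> shown above, keeping only the
# fields this sketch extracts (title, authors, typed identifiers).
record_xml = """<record>
  <TEI>
    <teiHeader>
      <fileDesc>
        <titleStmt>
          <title xml:lang="en">Cross-modal links in spatial attention.</title>
          <author><name first="J" last="Driver">J. Driver</name></author>
          <author><name first="C" last="Spence">C. Spence</name></author>
        </titleStmt>
        <publicationStmt>
          <idno type="pmid">9770225</idno>
          <idno type="pmc">1692335</idno>
          <date when="1998">1998</date>
        </publicationStmt>
      </fileDesc>
    </teiHeader>
  </TEI>
</record>"""

root = ET.fromstring(record_xml)

# ElementPath queries against the TEI structure.
title = root.findtext(".//titleStmt/title")
authors = [name.text for name in root.findall(".//titleStmt/author/name")]
pmid = root.findtext(".//idno[@type='pmid']")
pmc = root.findtext(".//idno[@type='pmc']")

print(title)    # Cross-modal links in spatial attention.
print(authors)  # ['J. Driver', 'C. Spence']
print(pmid, pmc)
```

The same queries work against the full record, since the abbreviated copy preserves the element paths; only the sort/uniq attributes on `<name>` and the duplicated `<analytic>` block are omitted here.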

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/HapticV1/Data/Pmc/Checkpoint
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 002686 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/Pmc/Checkpoint/biblio.hfd -nk 002686 | SxmlIndent | more

To add a link to this page in the Wicri network

{{Explor lien
   |wiki=    Ticri/CIDE
   |area=    HapticV1
   |flux=    Pmc
   |étape=   Checkpoint
   |type=    RBID
   |clé=     PMC:1692335
   |texte=   Cross-modal links in spatial attention.
}}

To generate wiki pages

HfdIndexSelect -h $EXPLOR_AREA/Data/Pmc/Checkpoint/RBID.i   -Sk "pubmed:9770225" \
       | HfdSelect -Kh $EXPLOR_AREA/Data/Pmc/Checkpoint/biblio.hfd   \
       | NlmPubMed2Wicri -a HapticV1 

Wicri

This area was generated with Dilib version V0.6.23.
Data generation: Mon Jun 13 01:09:46 2016. Site generation: Wed Mar 6 09:54:07 2024