Exploration server on the University of Trier

Warning: this site is under development!
Warning: this site was generated automatically from raw corpora.
The information has therefore not been validated.

Multisensory top-down sets: Evidence for contingent crossmodal capture.

Internal identifier: 000241 (PubMed/Corpus); previous: 000240; next: 000242


Authors: Frank Mast; Christian Frings; Charles Spence

Source:

RBID : pubmed:25944449

English descriptors

Abstract

Numerous studies that have investigated visual selective attention have demonstrated that a salient but task-irrelevant stimulus can involuntarily capture a participant's attention. Over the years, a lively debate has erupted concerning the impact of contingent top-down control settings on such stimulus-driven attentional capture. In the research reported here, we investigated whether top-down sets would also affect participants' performance in a multisensory task setting. A nonspatial compatibility task was used, in which the target and the distractor were always presented sequentially from the same spatial location. We manipulated target-distractor similarity by varying the visual and tactile features of the stimuli. Participants always responded to the visual target features (color); the tactile features were incorporated into the participants' top-down set only when the experimental context allowed for the tactile feature to be used in order to discriminate the target from the distractor. Larger compatibility effects after bimodal distractors were observed only when the participants were searching for a bimodal target and when tactile information was useful. Taken together, these results provide the first demonstration of nonspatial contingent crossmodal capture.

DOI: 10.3758/s13414-015-0915-4
PubMed: 25944449

Links to Exploration step

pubmed:25944449

The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">Multisensory top-down sets: Evidence for contingent crossmodal capture.</title>
<author>
<name sortKey="Mast, Frank" sort="Mast, Frank" uniqKey="Mast F" first="Frank" last="Mast">Frank Mast</name>
<affiliation>
<nlm:affiliation>Department of Psychology, University of Trier, Trier, Germany, mastfra@uni-trier.de.</nlm:affiliation>
</affiliation>
</author>
<author>
<name sortKey="Frings, Christian" sort="Frings, Christian" uniqKey="Frings C" first="Christian" last="Frings">Christian Frings</name>
</author>
<author>
<name sortKey="Spence, Charles" sort="Spence, Charles" uniqKey="Spence C" first="Charles" last="Spence">Charles Spence</name>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PubMed</idno>
<date when="2015">2015</date>
<idno type="RBID">pubmed:25944449</idno>
<idno type="pmid">25944449</idno>
<idno type="doi">10.3758/s13414-015-0915-4</idno>
<idno type="wicri:Area/PubMed/Corpus">000241</idno>
<idno type="wicri:explorRef" wicri:stream="PubMed" wicri:step="Corpus" wicri:corpus="PubMed">000241</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en">Multisensory top-down sets: Evidence for contingent crossmodal capture.</title>
<author>
<name sortKey="Mast, Frank" sort="Mast, Frank" uniqKey="Mast F" first="Frank" last="Mast">Frank Mast</name>
<affiliation>
<nlm:affiliation>Department of Psychology, University of Trier, Trier, Germany, mastfra@uni-trier.de.</nlm:affiliation>
</affiliation>
</author>
<author>
<name sortKey="Frings, Christian" sort="Frings, Christian" uniqKey="Frings C" first="Christian" last="Frings">Christian Frings</name>
</author>
<author>
<name sortKey="Spence, Charles" sort="Spence, Charles" uniqKey="Spence C" first="Charles" last="Spence">Charles Spence</name>
</author>
</analytic>
<series>
<title level="j">Attention, perception &amp; psychophysics</title>
<idno type="eISSN">1943-393X</idno>
<imprint>
<date when="2015" type="published">2015</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass>
<keywords scheme="KwdEn" xml:lang="en">
<term>Adult</term>
<term>Attention</term>
<term>Color Perception</term>
<term>Humans</term>
<term>Male</term>
<term>Touch Perception</term>
<term>Young Adult</term>
</keywords>
<keywords scheme="MESH" xml:lang="en">
<term>Adult</term>
<term>Attention</term>
<term>Color Perception</term>
<term>Humans</term>
<term>Male</term>
<term>Touch Perception</term>
<term>Young Adult</term>
</keywords>
</textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">Numerous studies that have investigated visual selective attention have demonstrated that a salient but task-irrelevant stimulus can involuntarily capture a participant's attention. Over the years, a lively debate has erupted concerning the impact of contingent top-down control settings on such stimulus-driven attentional capture. In the research reported here, we investigated whether top-down sets would also affect participants' performance in a multisensory task setting. A nonspatial compatibility task was used, in which the target and the distractor were always presented sequentially from the same spatial location. We manipulated target-distractor similarity by varying the visual and tactile features of the stimuli. Participants always responded to the visual target features (color); the tactile features were incorporated into the participants' top-down set only when the experimental context allowed for the tactile feature to be used in order to discriminate the target from the distractor. Larger compatibility effects after bimodal distractors were observed only when the participants were searching for a bimodal target and when tactile information was useful. Taken together, these results provide the first demonstration of nonspatial contingent crossmodal capture.</div>
</front>
</TEI>
<pubmed>
<MedlineCitation Status="MEDLINE" Owner="NLM">
<PMID Version="1">25944449</PMID>
<DateCreated>
<Year>2015</Year>
<Month>08</Month>
<Day>03</Day>
</DateCreated>
<DateCompleted>
<Year>2016</Year>
<Month>02</Month>
<Day>26</Day>
</DateCompleted>
<DateRevised>
<Year>2015</Year>
<Month>08</Month>
<Day>03</Day>
</DateRevised>
<Article PubModel="Print">
<Journal>
<ISSN IssnType="Electronic">1943-393X</ISSN>
<JournalIssue CitedMedium="Internet">
<Volume>77</Volume>
<Issue>6</Issue>
<PubDate>
<Year>2015</Year>
<Month>Aug</Month>
</PubDate>
</JournalIssue>
<Title>Attention, perception &amp; psychophysics</Title>
<ISOAbbreviation>Atten Percept Psychophys</ISOAbbreviation>
</Journal>
<ArticleTitle>Multisensory top-down sets: Evidence for contingent crossmodal capture.</ArticleTitle>
<Pagination>
<MedlinePgn>1970-85</MedlinePgn>
</Pagination>
<ELocationID EIdType="doi" ValidYN="Y">10.3758/s13414-015-0915-4</ELocationID>
<Abstract>
<AbstractText>Numerous studies that have investigated visual selective attention have demonstrated that a salient but task-irrelevant stimulus can involuntarily capture a participant's attention. Over the years, a lively debate has erupted concerning the impact of contingent top-down control settings on such stimulus-driven attentional capture. In the research reported here, we investigated whether top-down sets would also affect participants' performance in a multisensory task setting. A nonspatial compatibility task was used, in which the target and the distractor were always presented sequentially from the same spatial location. We manipulated target-distractor similarity by varying the visual and tactile features of the stimuli. Participants always responded to the visual target features (color); the tactile features were incorporated into the participants' top-down set only when the experimental context allowed for the tactile feature to be used in order to discriminate the target from the distractor. Larger compatibility effects after bimodal distractors were observed only when the participants were searching for a bimodal target and when tactile information was useful. Taken together, these results provide the first demonstration of nonspatial contingent crossmodal capture.</AbstractText>
</Abstract>
<AuthorList CompleteYN="Y">
<Author ValidYN="Y">
<LastName>Mast</LastName>
<ForeName>Frank</ForeName>
<Initials>F</Initials>
<AffiliationInfo>
<Affiliation>Department of Psychology, University of Trier, Trier, Germany, mastfra@uni-trier.de.</Affiliation>
</AffiliationInfo>
</Author>
<Author ValidYN="Y">
<LastName>Frings</LastName>
<ForeName>Christian</ForeName>
<Initials>C</Initials>
</Author>
<Author ValidYN="Y">
<LastName>Spence</LastName>
<ForeName>Charles</ForeName>
<Initials>C</Initials>
</Author>
</AuthorList>
<Language>eng</Language>
<PublicationTypeList>
<PublicationType UI="D016428">Journal Article</PublicationType>
<PublicationType UI="D013485">Research Support, Non-U.S. Gov't</PublicationType>
</PublicationTypeList>
</Article>
<MedlineJournalInfo>
<Country>United States</Country>
<MedlineTA>Atten Percept Psychophys</MedlineTA>
<NlmUniqueID>101495384</NlmUniqueID>
<ISSNLinking>1943-3921</ISSNLinking>
</MedlineJournalInfo>
<CitationSubset>IM</CitationSubset>
<MeshHeadingList>
<MeshHeading>
<DescriptorName UI="D000328" MajorTopicYN="N">Adult</DescriptorName>
</MeshHeading>
<MeshHeading>
<DescriptorName UI="D001288" MajorTopicYN="Y">Attention</DescriptorName>
</MeshHeading>
<MeshHeading>
<DescriptorName UI="D003118" MajorTopicYN="Y">Color Perception</DescriptorName>
</MeshHeading>
<MeshHeading>
<DescriptorName UI="D006801" MajorTopicYN="N">Humans</DescriptorName>
</MeshHeading>
<MeshHeading>
<DescriptorName UI="D008297" MajorTopicYN="N">Male</DescriptorName>
</MeshHeading>
<MeshHeading>
<DescriptorName UI="D055698" MajorTopicYN="Y">Touch Perception</DescriptorName>
</MeshHeading>
<MeshHeading>
<DescriptorName UI="D055815" MajorTopicYN="N">Young Adult</DescriptorName>
</MeshHeading>
</MeshHeadingList>
</MedlineCitation>
<PubmedData>
<History>
<PubMedPubDate PubStatus="entrez">
<Year>2015</Year>
<Month>5</Month>
<Day>7</Day>
<Hour>6</Hour>
<Minute>0</Minute>
</PubMedPubDate>
<PubMedPubDate PubStatus="pubmed">
<Year>2015</Year>
<Month>5</Month>
<Day>7</Day>
<Hour>6</Hour>
<Minute>0</Minute>
</PubMedPubDate>
<PubMedPubDate PubStatus="medline">
<Year>2016</Year>
<Month>2</Month>
<Day>27</Day>
<Hour>6</Hour>
<Minute>0</Minute>
</PubMedPubDate>
</History>
<PublicationStatus>ppublish</PublicationStatus>
<ArticleIdList>
<ArticleId IdType="pubmed">25944449</ArticleId>
<ArticleId IdType="doi">10.3758/s13414-015-0915-4</ArticleId>
</ArticleIdList>
</PubmedData>
</pubmed>
</record>

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Wicri/Rhénanie/explor/UnivTrevesV1/Data/PubMed/Corpus
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 000241 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/PubMed/Corpus/biblio.hfd -nk 000241 | SxmlIndent | more
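If the Dilib tools (HfdSelect, SxmlIndent) are not installed, the same identifiers can still be pulled out of the record with standard Unix tools once the XML has been saved locally. The sketch below assumes a hypothetical file `record.xml` containing the `<ArticleIdList>` fragment shown above; the `sed` patterns match the `IdType` attributes exactly as they appear in the record.

```shell
# Hypothetical stand-in for HfdSelect: a local copy of the ArticleIdList
# fragment from the record above.
cat > record.xml <<'EOF'
<ArticleIdList>
<ArticleId IdType="pubmed">25944449</ArticleId>
<ArticleId IdType="doi">10.3758/s13414-015-0915-4</ArticleId>
</ArticleIdList>
EOF

# Extract the PMID and DOI by matching each IdType attribute and
# capturing the element's text content.
pmid=$(sed -n 's/.*IdType="pubmed">\([^<]*\)<.*/\1/p' record.xml)
doi=$(sed -n 's/.*IdType="doi">\([^<]*\)<.*/\1/p' record.xml)
echo "PMID: $pmid"
echo "DOI: $doi"
```

For anything beyond one-off lookups, a real XML parser (e.g. `xmllint --xpath`) is preferable to `sed`, since the patterns above depend on each element sitting on its own line.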

To link to this page from the Wicri network

{{Explor lien
   |wiki=    Wicri/Rhénanie
   |area=    UnivTrevesV1
   |flux=    PubMed
   |étape=   Corpus
   |type=    RBID
   |clé=     pubmed:25944449
   |texte=   Multisensory top-down sets: Evidence for contingent crossmodal capture.
}}

To generate wiki pages

HfdIndexSelect -h $EXPLOR_AREA/Data/PubMed/Corpus/RBID.i   -Sk "pubmed:25944449" \
       | HfdSelect -Kh $EXPLOR_AREA/Data/PubMed/Corpus/biblio.hfd   \
       | NlmPubMed2Wicri -a UnivTrevesV1 
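The pipeline above follows a select-then-transform pattern: look the RBID key up in an index to find the record, then feed the record to a converter. As a rough illustration of that first step only (HfdIndexSelect itself is not reproduced here), the sketch below resolves a key to its internal number in a hypothetical flat index file `RBID.idx` whose two columns are the key and the record number.

```shell
# Hypothetical miniature index: "key  internal-number", one pair per line.
cat > RBID.idx <<'EOF'
pubmed:25944449 000241
EOF

# Resolve the key to the internal record number, as HfdIndexSelect
# would before the record is handed to the converter.
key="pubmed:25944449"
nk=$(awk -v k="$key" '$1 == k {print $2}' RBID.idx)
echo "$nk"
```

The real `.i` index files are binary Dilib structures, not flat text; this only mirrors the lookup logic of the pipeline.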

Wicri

This area was generated with Dilib version V0.6.31.
Data generation: Sat Jul 22 16:29:01 2017. Site generation: Wed Feb 28 14:55:37 2024