Exploration server on haptic devices

Warning: this site is under development!
Warning: this site was generated by computational means from raw corpora.
The information is therefore not validated.

Visual-auditory events: cross-modal perceptual priming and recognition memory.

Internal identifier: 000252 (Ncbi/Merge); previous: 000251; next: 000253


Authors: A J Greene [United States]; R D Easton; L S LaShell

Source:

RBID: pubmed:11697874

English descriptors

Abstract

Modality specificity in priming is taken as evidence for independent perceptual systems. However, Easton, Greene, and Srinivas (1997) showed that visual and haptic cross-modal priming is comparable in magnitude to within-modal priming. Where appropriate, perceptual systems might share like information. To test this, we assessed priming and recognition for visual and auditory events, within- and across-modalities. On the visual test, auditory study resulted in no priming. On the auditory priming test, visual study resulted in priming that was only marginally less than within-modal priming. The priming results show that visual study facilitates identification on both visual and auditory tests, but auditory study only facilitates performance on the auditory test. For both recognition tests, within-modal recognition exceeded cross-modal recognition. The results have two novel implications for the understanding of perceptual priming: First, we introduce visual and auditory priming for spatio-temporal events as a new priming paradigm chosen for its ecological validity and potential for information exchange. Second, we propose that the asymmetry of the cross-modal priming observed here may reflect the capacity of these perceptual modalities to provide cross-modal constraints on ambiguity. We argue that visual perception might inform and constrain auditory processing, while auditory perception corresponds to too many potential visual events to usefully inform and constrain visual perception.

DOI: 10.1006/ccog.2001.0502
PubMed: 11697874

Links toward previous steps (curation, corpus...)


Links to Exploration step

pubmed:11697874

The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">Visual-auditory events: cross-modal perceptual priming and recognition memory.</title>
<author>
<name sortKey="Greene, A J" sort="Greene, A J" uniqKey="Greene A" first="A J" last="Greene">A J Greene</name>
<affiliation wicri:level="2">
<nlm:affiliation>Department of Neurosurgery, University of Virginia, Charlottesville, VA 22908, USA. ajg3x@virginia.edu</nlm:affiliation>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea>Department of Neurosurgery, University of Virginia, Charlottesville, VA 22908</wicri:regionArea>
<placeName>
<region type="state">Virginie</region>
</placeName>
</affiliation>
</author>
<author>
<name sortKey="Easton, R D" sort="Easton, R D" uniqKey="Easton R" first="R D" last="Easton">R D Easton</name>
</author>
<author>
<name sortKey="Lashell, L S" sort="Lashell, L S" uniqKey="Lashell L" first="L S" last="Lashell">L S Lashell</name>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PubMed</idno>
<date when="2001">2001</date>
<idno type="RBID">pubmed:11697874</idno>
<idno type="pmid">11697874</idno>
<idno type="doi">10.1006/ccog.2001.0502</idno>
<idno type="wicri:Area/PubMed/Corpus">001D16</idno>
<idno type="wicri:Area/PubMed/Curation">001D16</idno>
<idno type="wicri:Area/PubMed/Checkpoint">001A58</idno>
<idno type="wicri:Area/Ncbi/Merge">000252</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en">Visual-auditory events: cross-modal perceptual priming and recognition memory.</title>
<author>
<name sortKey="Greene, A J" sort="Greene, A J" uniqKey="Greene A" first="A J" last="Greene">A J Greene</name>
<affiliation wicri:level="2">
<nlm:affiliation>Department of Neurosurgery, University of Virginia, Charlottesville, VA 22908, USA. ajg3x@virginia.edu</nlm:affiliation>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea>Department of Neurosurgery, University of Virginia, Charlottesville, VA 22908</wicri:regionArea>
<placeName>
<region type="state">Virginie</region>
</placeName>
</affiliation>
</author>
<author>
<name sortKey="Easton, R D" sort="Easton, R D" uniqKey="Easton R" first="R D" last="Easton">R D Easton</name>
</author>
<author>
<name sortKey="Lashell, L S" sort="Lashell, L S" uniqKey="Lashell L" first="L S" last="Lashell">L S Lashell</name>
</author>
</analytic>
<series>
<title level="j">Consciousness and cognition</title>
<idno type="ISSN">1053-8100</idno>
<imprint>
<date when="2001" type="published">2001</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass>
<keywords scheme="KwdEn" xml:lang="en">
<term>Auditory Perception (physiology)</term>
<term>Brain (physiology)</term>
<term>Humans</term>
<term>Memory (physiology)</term>
<term>Recognition (Psychology) (physiology)</term>
<term>Space Perception (physiology)</term>
<term>Time Perception (physiology)</term>
<term>Visual Perception (physiology)</term>
</keywords>
<keywords scheme="MESH" qualifier="physiology" xml:lang="en">
<term>Auditory Perception</term>
<term>Brain</term>
<term>Memory</term>
<term>Recognition (Psychology)</term>
<term>Space Perception</term>
<term>Time Perception</term>
<term>Visual Perception</term>
</keywords>
<keywords scheme="MESH" xml:lang="en">
<term>Humans</term>
</keywords>
</textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">Modality specificity in priming is taken as evidence for independent perceptual systems. However, Easton, Greene, and Srinivas (1997) showed that visual and haptic cross-modal priming is comparable in magnitude to within-modal priming. Where appropriate, perceptual systems might share like information. To test this, we assessed priming and recognition for visual and auditory events, within- and across- modalities. On the visual test, auditory study resulted in no priming. On the auditory priming test, visual study resulted in priming that was only marginally less than within-modal priming. The priming results show that visual study facilitates identification on both visual and auditory tests, but auditory study only facilitates performance on the auditory test. For both recognition tests, within-modal recognition exceeded cross-modal recognition. The results have two novel implications for the understanding of perceptual priming: First, we introduce visual and auditory priming for spatio-temporal events as a new priming paradigm chosen for its ecological validity and potential for information exchange. Second, we propose that the asymmetry of the cross-modal priming observed here may reflect the capacity of these perceptual modalities to provide cross-modal constraints on ambiguity. We argue that visual perception might inform and constrain auditory processing, while auditory perception corresponds to too many potential visual events to usefully inform and constrain visual perception.</div>
</front>
</TEI>
<pubmed>
<MedlineCitation Owner="NLM" Status="MEDLINE">
<PMID Version="1">11697874</PMID>
<DateCreated>
<Year>2001</Year>
<Month>11</Month>
<Day>07</Day>
</DateCreated>
<DateCompleted>
<Year>2002</Year>
<Month>01</Month>
<Day>11</Day>
</DateCompleted>
<DateRevised>
<Year>2004</Year>
<Month>11</Month>
<Day>17</Day>
</DateRevised>
<Article PubModel="Print">
<Journal>
<ISSN IssnType="Print">1053-8100</ISSN>
<JournalIssue CitedMedium="Print">
<Volume>10</Volume>
<Issue>3</Issue>
<PubDate>
<Year>2001</Year>
<Month>Sep</Month>
</PubDate>
</JournalIssue>
<Title>Consciousness and cognition</Title>
<ISOAbbreviation>Conscious Cogn</ISOAbbreviation>
</Journal>
<ArticleTitle>Visual-auditory events: cross-modal perceptual priming and recognition memory.</ArticleTitle>
<Pagination>
<MedlinePgn>425-35</MedlinePgn>
</Pagination>
<Abstract>
<AbstractText>Modality specificity in priming is taken as evidence for independent perceptual systems. However, Easton, Greene, and Srinivas (1997) showed that visual and haptic cross-modal priming is comparable in magnitude to within-modal priming. Where appropriate, perceptual systems might share like information. To test this, we assessed priming and recognition for visual and auditory events, within- and across- modalities. On the visual test, auditory study resulted in no priming. On the auditory priming test, visual study resulted in priming that was only marginally less than within-modal priming. The priming results show that visual study facilitates identification on both visual and auditory tests, but auditory study only facilitates performance on the auditory test. For both recognition tests, within-modal recognition exceeded cross-modal recognition. The results have two novel implications for the understanding of perceptual priming: First, we introduce visual and auditory priming for spatio-temporal events as a new priming paradigm chosen for its ecological validity and potential for information exchange. Second, we propose that the asymmetry of the cross-modal priming observed here may reflect the capacity of these perceptual modalities to provide cross-modal constraints on ambiguity. We argue that visual perception might inform and constrain auditory processing, while auditory perception corresponds to too many potential visual events to usefully inform and constrain visual perception.</AbstractText>
<CopyrightInformation>Copyright 2001 Academic Press.</CopyrightInformation>
</Abstract>
<AuthorList CompleteYN="Y">
<Author ValidYN="Y">
<LastName>Greene</LastName>
<ForeName>A J</ForeName>
<Initials>AJ</Initials>
<AffiliationInfo>
<Affiliation>Department of Neurosurgery, University of Virginia, Charlottesville, VA 22908, USA. ajg3x@virginia.edu</Affiliation>
</AffiliationInfo>
</Author>
<Author ValidYN="Y">
<LastName>Easton</LastName>
<ForeName>R D</ForeName>
<Initials>RD</Initials>
</Author>
<Author ValidYN="Y">
<LastName>LaShell</LastName>
<ForeName>L S</ForeName>
<Initials>LS</Initials>
</Author>
</AuthorList>
<Language>eng</Language>
<PublicationTypeList>
<PublicationType UI="D016428">Journal Article</PublicationType>
</PublicationTypeList>
</Article>
<MedlineJournalInfo>
<Country>United States</Country>
<MedlineTA>Conscious Cogn</MedlineTA>
<NlmUniqueID>9303140</NlmUniqueID>
<ISSNLinking>1053-8100</ISSNLinking>
</MedlineJournalInfo>
<CitationSubset>IM</CitationSubset>
<MeshHeadingList>
<MeshHeading>
<DescriptorName MajorTopicYN="N" UI="D001307">Auditory Perception</DescriptorName>
<QualifierName MajorTopicYN="Y" UI="Q000502">physiology</QualifierName>
</MeshHeading>
<MeshHeading>
<DescriptorName MajorTopicYN="N" UI="D001921">Brain</DescriptorName>
<QualifierName MajorTopicYN="N" UI="Q000502">physiology</QualifierName>
</MeshHeading>
<MeshHeading>
<DescriptorName MajorTopicYN="N" UI="D006801">Humans</DescriptorName>
</MeshHeading>
<MeshHeading>
<DescriptorName MajorTopicYN="N" UI="D008568">Memory</DescriptorName>
<QualifierName MajorTopicYN="Y" UI="Q000502">physiology</QualifierName>
</MeshHeading>
<MeshHeading>
<DescriptorName MajorTopicYN="N" UI="D021641">Recognition (Psychology)</DescriptorName>
<QualifierName MajorTopicYN="Y" UI="Q000502">physiology</QualifierName>
</MeshHeading>
<MeshHeading>
<DescriptorName MajorTopicYN="N" UI="D013028">Space Perception</DescriptorName>
<QualifierName MajorTopicYN="N" UI="Q000502">physiology</QualifierName>
</MeshHeading>
<MeshHeading>
<DescriptorName MajorTopicYN="N" UI="D013998">Time Perception</DescriptorName>
<QualifierName MajorTopicYN="N" UI="Q000502">physiology</QualifierName>
</MeshHeading>
<MeshHeading>
<DescriptorName MajorTopicYN="N" UI="D014796">Visual Perception</DescriptorName>
<QualifierName MajorTopicYN="Y" UI="Q000502">physiology</QualifierName>
</MeshHeading>
</MeshHeadingList>
</MedlineCitation>
<PubmedData>
<History>
<PubMedPubDate PubStatus="pubmed">
<Year>2001</Year>
<Month>11</Month>
<Day>8</Day>
<Hour>10</Hour>
<Minute>0</Minute>
</PubMedPubDate>
<PubMedPubDate PubStatus="medline">
<Year>2002</Year>
<Month>1</Month>
<Day>12</Day>
<Hour>10</Hour>
<Minute>1</Minute>
</PubMedPubDate>
<PubMedPubDate PubStatus="entrez">
<Year>2001</Year>
<Month>11</Month>
<Day>8</Day>
<Hour>10</Hour>
<Minute>0</Minute>
</PubMedPubDate>
</History>
<PublicationStatus>ppublish</PublicationStatus>
<ArticleIdList>
<ArticleId IdType="pubmed">11697874</ArticleId>
<ArticleId IdType="doi">10.1006/ccog.2001.0502</ArticleId>
<ArticleId IdType="pii">S1053-8100(01)90502-1</ArticleId>
</ArticleIdList>
</PubmedData>
</pubmed>
<affiliations>
<list>
<country>
<li>États-Unis</li>
</country>
<region>
<li>Virginie</li>
</region>
</list>
<tree>
<noCountry>
<name sortKey="Easton, R D" sort="Easton, R D" uniqKey="Easton R" first="R D" last="Easton">R D Easton</name>
<name sortKey="Lashell, L S" sort="Lashell, L S" uniqKey="Lashell L" first="L S" last="Lashell">L S Lashell</name>
</noCountry>
<country name="États-Unis">
<region name="Virginie">
<name sortKey="Greene, A J" sort="Greene, A J" uniqKey="Greene A" first="A J" last="Greene">A J Greene</name>
</region>
</country>
</tree>
</affiliations>
</record>
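To pull the main bibliographic fields out of a record like the one above without the Dilib toolchain, a minimal Python sketch using the standard library's `xml.etree.ElementTree` can work on the PubMed portion of the record. The excerpt embedded below is copied from the `<MedlineCitation>` element shown above (the TEI portion uses `wicri:` and `nlm:` namespace prefixes, which would need explicit namespace handling and is not covered here); `parse_record` is an illustrative helper name, not part of Dilib.

```python
import xml.etree.ElementTree as ET

# Excerpt of the <pubmed> portion of the record above (namespace-free,
# so ElementTree can parse it directly).
RECORD = """
<MedlineCitation Owner="NLM" Status="MEDLINE">
  <PMID Version="1">11697874</PMID>
  <Article PubModel="Print">
    <ArticleTitle>Visual-auditory events: cross-modal perceptual priming and recognition memory.</ArticleTitle>
    <AuthorList CompleteYN="Y">
      <Author ValidYN="Y"><LastName>Greene</LastName><ForeName>A J</ForeName></Author>
      <Author ValidYN="Y"><LastName>Easton</LastName><ForeName>R D</ForeName></Author>
      <Author ValidYN="Y"><LastName>LaShell</LastName><ForeName>L S</ForeName></Author>
    </AuthorList>
  </Article>
</MedlineCitation>
"""

def parse_record(xml_text):
    """Return (pmid, title, authors) extracted from a MedlineCitation element."""
    root = ET.fromstring(xml_text)
    pmid = root.findtext("PMID")
    title = root.findtext("Article/ArticleTitle")
    authors = [
        f'{a.findtext("ForeName")} {a.findtext("LastName")}'
        for a in root.findall("Article/AuthorList/Author")
    ]
    return pmid, title, authors

if __name__ == "__main__":
    pmid, title, authors = parse_record(RECORD)
    print(pmid)                 # 11697874
    print(title)
    print("; ".join(authors))   # A J Greene; R D Easton; L S LaShell
```

The same path expressions (`Article/ArticleTitle`, `Article/AuthorList/Author`) follow the standard PubMed citation structure visible in the record, so the sketch should transfer to other records from this corpus that keep that layout.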

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/HapticV1/Data/Ncbi/Merge
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 000252 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/Ncbi/Merge/biblio.hfd -nk 000252 | SxmlIndent | more

To add a link to this page in the Wicri network

{{Explor lien
   |wiki=    Ticri/CIDE
   |area=    HapticV1
   |flux=    Ncbi
   |étape=   Merge
   |type=    RBID
   |clé=     pubmed:11697874
   |texte=   Visual-auditory events: cross-modal perceptual priming and recognition memory.
}}

To generate wiki pages

HfdIndexSelect -h $EXPLOR_AREA/Data/Ncbi/Merge/RBID.i   -Sk "pubmed:11697874" \
       | HfdSelect -Kh $EXPLOR_AREA/Data/Ncbi/Merge/biblio.hfd   \
       | NlmPubMed2Wicri -a HapticV1 

Wicri

This area was generated with Dilib version V0.6.23.
Data generation: Mon Jun 13 01:09:46 2016. Site generation: Wed Mar 6 09:54:07 2024