Visual-auditory events: Cross-modal perceptual priming and recognition memory
Internal identifier: 001358 (PascalFrancis/Corpus); previous: 001357; next: 001359
Authors: Anthony J. Greene; Randolph D. Easton; Lisa S. R. Lashell
Source:
- Consciousness and cognition [1053-8100]; 2001.
French descriptors
- Pascal (Inist)
English descriptors
- KwdEn:
Abstract
Modality specificity in priming is taken as evidence for independent perceptual systems. However, Easton, Greene, and Srinivas (1997) showed that visual and haptic cross-modal priming is comparable in magnitude to within-modal priming. Where appropriate, perceptual systems might share like information. To test this, we assessed priming and recognition for visual and auditory events, within- and across-modalities. On the visual test, auditory study resulted in no priming. On the auditory priming test, visual study resulted in priming that was only marginally less than within-modal priming. The priming results show that visual study facilitates identification on both visual and auditory tests, but auditory study only facilitates performance on the auditory test. For both recognition tests, within-modal recognition exceeded cross-modal recognition. The results have two novel implications for the understanding of perceptual priming: First, we introduce visual and auditory priming for spatio-temporal events as a new priming paradigm chosen for its ecological validity and potential for information exchange. Second, we propose that the asymmetry of the cross-modal priming observed here may reflect the capacity of these perceptual modalities to provide cross-modal constraints on ambiguity. We argue that visual perception might inform and constrain auditory processing, while auditory perception corresponds to too many potential visual events to usefully inform and constrain visual perception.
Record in standard format (ISO 2709)
See the documentation on the Inist Standard format.
Inist format (server)
NO : | PASCAL 01-0432213 INIST |
ET : | Visual-auditory events: Cross-modal perceptual priming and recognition memory |
AU : | GREENE (Anthony J.); EASTON (Randolph D.); LASHELL (Lisa S. R.) |
AF : | Department of Neurosurgery, University of Virginia, HSC Box 420/Charlottesville, Virginia 22908/Etats-Unis (1 aut.); Department of Psychology, Boston College/Chestnut Hill, Massachusetts 02167/Etats-Unis (2 aut.) |
DT : | Serial publication; Short communication, brief note; Analytic level |
SO : | Consciousness and cognition; ISSN 1053-8100; Etats-Unis; Da. 2001; Vol. 10; No. 3; Pp. 425-435; Bibl. 2 p. |
LA : | English |
EA : | Modality specificity in priming is taken as evidence for independent perceptual systems. However, Easton, Greene, and Srinivas (1997) showed that visual and haptic cross-modal priming is comparable in magnitude to within-modal priming. Where appropriate, perceptual systems might share like information. To test this, we assessed priming and recognition for visual and auditory events, within- and across-modalities. On the visual test, auditory study resulted in no priming. On the auditory priming test, visual study resulted in priming that was only marginally less than within-modal priming. The priming results show that visual study facilitates identification on both visual and auditory tests, but auditory study only facilitates performance on the auditory test. For both recognition tests, within-modal recognition exceeded cross-modal recognition. The results have two novel implications for the understanding of perceptual priming: First, we introduce visual and auditory priming for spatio-temporal events as a new priming paradigm chosen for its ecological validity and potential for information exchange. Second, we propose that the asymmetry of the cross-modal priming observed here may reflect the capacity of these perceptual modalities to provide cross-modal constraints on ambiguity. We argue that visual perception might inform and constrain auditory processing, while auditory perception corresponds to too many potential visual events to usefully inform and constrain visual perception. |
CC : | 002A26F05A |
FD : | Etude expérimentale; Effet amorçage; Modalité stimulus; Vision; Audition; Perception intermodale; Mémoire; Reconnaissance mnémonique; Perception; Cognition; Homme |
ED : | Experimental study; Priming effect; Stimulus modality; Vision; Hearing; Intermodal perception; Memory; Recognition memory; Perception; Cognition; Human |
SD : | Estudio experimental; Efecto priming; Modalidad estímulo; Visión; Audición; Percepción intermodal; Memoria; Reconocimiento mnemónico; Percepción; Cognición; Hombre |
LO : | INIST-22612.354000099504040070 |
ID : | 01-0432213 |
Links to Exploration step
Pascal:01-0432213
The document in XML format
<record><TEI><teiHeader><fileDesc><titleStmt><title xml:lang="en" level="a">Visual-auditory events: Cross-modal perceptual priming and recognition memory</title>
<author><name sortKey="Greene, Anthony J" sort="Greene, Anthony J" uniqKey="Greene A" first="Anthony J." last="Greene">Anthony J. Greene</name>
<affiliation><inist:fA14 i1="01"><s1>Department of Neurosurgery, University of Virginia, HSC Box 420</s1>
<s2>Charlottesville, Virginia 22908</s2>
<s3>USA</s3>
<sZ>1 aut.</sZ>
</inist:fA14>
</affiliation>
</author>
<author><name sortKey="Easton, Randolph D" sort="Easton, Randolph D" uniqKey="Easton R" first="Randolph D." last="Easton">Randolph D. Easton</name>
<affiliation><inist:fA14 i1="02"><s1>Department of Psychology, Boston College</s1>
<s2>Chestnut Hill, Massachusetts 02167</s2>
<s3>USA</s3>
<sZ>2 aut.</sZ>
</inist:fA14>
</affiliation>
</author>
<author><name sortKey="Lashell, Lisa S R" sort="Lashell, Lisa S R" uniqKey="Lashell L" first="Lisa S. R." last="Lashell">Lisa S. R. Lashell</name>
</author>
</titleStmt>
<publicationStmt><idno type="wicri:source">INIST</idno>
<idno type="inist">01-0432213</idno>
<date when="2001">2001</date>
<idno type="stanalyst">PASCAL 01-0432213 INIST</idno>
<idno type="RBID">Pascal:01-0432213</idno>
<idno type="wicri:Area/PascalFrancis/Corpus">001358</idno>
</publicationStmt>
<sourceDesc><biblStruct><analytic><title xml:lang="en" level="a">Visual-auditory events: Cross-modal perceptual priming and recognition memory</title>
<author><name sortKey="Greene, Anthony J" sort="Greene, Anthony J" uniqKey="Greene A" first="Anthony J." last="Greene">Anthony J. Greene</name>
<affiliation><inist:fA14 i1="01"><s1>Department of Neurosurgery, University of Virginia, HSC Box 420</s1>
<s2>Charlottesville, Virginia 22908</s2>
<s3>USA</s3>
<sZ>1 aut.</sZ>
</inist:fA14>
</affiliation>
</author>
<author><name sortKey="Easton, Randolph D" sort="Easton, Randolph D" uniqKey="Easton R" first="Randolph D." last="Easton">Randolph D. Easton</name>
<affiliation><inist:fA14 i1="02"><s1>Department of Psychology, Boston College</s1>
<s2>Chestnut Hill, Massachusetts 02167</s2>
<s3>USA</s3>
<sZ>2 aut.</sZ>
</inist:fA14>
</affiliation>
</author>
<author><name sortKey="Lashell, Lisa S R" sort="Lashell, Lisa S R" uniqKey="Lashell L" first="Lisa S. R." last="Lashell">Lisa S. R. Lashell</name>
</author>
</analytic>
<series><title level="j" type="main">Consciousness and cognition</title>
<title level="j" type="abbreviated">Conscious. cogn.</title>
<idno type="ISSN">1053-8100</idno>
<imprint><date when="2001">2001</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
<seriesStmt><title level="j" type="main">Consciousness and cognition</title>
<title level="j" type="abbreviated">Conscious. cogn.</title>
<idno type="ISSN">1053-8100</idno>
</seriesStmt>
</fileDesc>
<profileDesc><textClass><keywords scheme="KwdEn" xml:lang="en"><term>Cognition</term>
<term>Experimental study</term>
<term>Hearing</term>
<term>Human</term>
<term>Intermodal perception</term>
<term>Memory</term>
<term>Perception</term>
<term>Priming effect</term>
<term>Recognition memory</term>
<term>Stimulus modality</term>
<term>Vision</term>
</keywords>
<keywords scheme="Pascal" xml:lang="fr"><term>Etude expérimentale</term>
<term>Effet amorçage</term>
<term>Modalité stimulus</term>
<term>Vision</term>
<term>Audition</term>
<term>Perception intermodale</term>
<term>Mémoire</term>
<term>Reconnaissance mnémonique</term>
<term>Perception</term>
<term>Cognition</term>
<term>Homme</term>
</keywords>
</textClass>
</profileDesc>
</teiHeader>
<front><div type="abstract" xml:lang="en">Modality specificity in priming is taken as evidence for independent perceptual systems. However, Easton, Greene, and Srinivas (1997) showed that visual and haptic cross-modal priming is comparable in magnitude to within-modal priming. Where appropriate, perceptual systems might share like information. To test this, we assessed priming and recognition for visual and auditory events, within- and across-modalities. On the visual test, auditory study resulted in no priming. On the auditory priming test, visual study resulted in priming that was only marginally less than within-modal priming. The priming results show that visual study facilitates identification on both visual and auditory tests, but auditory study only facilitates performance on the auditory test. For both recognition tests, within-modal recognition exceeded cross-modal recognition. The results have two novel implications for the understanding of perceptual priming: First, we introduce visual and auditory priming for spatio-temporal events as a new priming paradigm chosen for its ecological validity and potential for information exchange. Second, we propose that the asymmetry of the cross-modal priming observed here may reflect the capacity of these perceptual modalities to provide cross-modal constraints on ambiguity. We argue that visual perception might inform and constrain auditory processing, while auditory perception corresponds to too many potential visual events to usefully inform and constrain visual perception.</div>
</front>
</TEI>
<inist><standard h6="B"><pA><fA01 i1="01" i2="1"><s0>1053-8100</s0>
</fA01>
<fA03 i2="1"><s0>Conscious. cogn.</s0>
</fA03>
<fA05><s2>10</s2>
</fA05>
<fA06><s2>3</s2>
</fA06>
<fA08 i1="01" i2="1" l="ENG"><s1>Visual-auditory events: Cross-modal perceptual priming and recognition memory</s1>
</fA08>
<fA11 i1="01" i2="1"><s1>GREENE (Anthony J.)</s1>
</fA11>
<fA11 i1="02" i2="1"><s1>EASTON (Randolph D.)</s1>
</fA11>
<fA11 i1="03" i2="1"><s1>LASHELL (Lisa S. R.)</s1>
</fA11>
<fA14 i1="01"><s1>Department of Neurosurgery, University of Virginia, HSC Box 420</s1>
<s2>Charlottesville, Virginia 22908</s2>
<s3>USA</s3>
<sZ>1 aut.</sZ>
</fA14>
<fA14 i1="02"><s1>Department of Psychology, Boston College</s1>
<s2>Chestnut Hill, Massachusetts 02167</s2>
<s3>USA</s3>
<sZ>2 aut.</sZ>
</fA14>
<fA20><s1>425-435</s1>
</fA20>
<fA21><s1>2001</s1>
</fA21>
<fA23 i1="01"><s0>ENG</s0>
</fA23>
<fA43 i1="01"><s1>INIST</s1>
<s2>22612</s2>
<s5>354000099504040070</s5>
</fA43>
<fA44><s0>0000</s0>
<s1>© 2001 INIST-CNRS. All rights reserved.</s1>
</fA44>
<fA45><s0>2 p.</s0>
</fA45>
<fA47 i1="01" i2="1"><s0>01-0432213</s0>
</fA47>
<fA60><s1>P</s1>
<s3>CC</s3>
</fA60>
<fA61><s0>A</s0>
</fA61>
<fA64 i1="01" i2="1"><s0>Consciousness and cognition</s0>
</fA64>
<fA66 i1="01"><s0>USA</s0>
</fA66>
<fC01 i1="01" l="ENG"><s0>Modality specificity in priming is taken as evidence for independent perceptual systems. However, Easton, Greene, and Srinivas (1997) showed that visual and haptic cross-modal priming is comparable in magnitude to within-modal priming. Where appropriate, perceptual systems might share like information. To test this, we assessed priming and recognition for visual and auditory events, within- and across-modalities. On the visual test, auditory study resulted in no priming. On the auditory priming test, visual study resulted in priming that was only marginally less than within-modal priming. The priming results show that visual study facilitates identification on both visual and auditory tests, but auditory study only facilitates performance on the auditory test. For both recognition tests, within-modal recognition exceeded cross-modal recognition. The results have two novel implications for the understanding of perceptual priming: First, we introduce visual and auditory priming for spatio-temporal events as a new priming paradigm chosen for its ecological validity and potential for information exchange. Second, we propose that the asymmetry of the cross-modal priming observed here may reflect the capacity of these perceptual modalities to provide cross-modal constraints on ambiguity. We argue that visual perception might inform and constrain auditory processing, while auditory perception corresponds to too many potential visual events to usefully inform and constrain visual perception.</s0>
</fC01>
<fC02 i1="01" i2="X"><s0>002A26F05A</s0>
</fC02>
<fC03 i1="01" i2="X" l="FRE"><s0>Etude expérimentale</s0>
<s5>01</s5>
</fC03>
<fC03 i1="01" i2="X" l="ENG"><s0>Experimental study</s0>
<s5>01</s5>
</fC03>
<fC03 i1="01" i2="X" l="SPA"><s0>Estudio experimental</s0>
<s5>01</s5>
</fC03>
<fC03 i1="02" i2="X" l="FRE"><s0>Effet amorçage</s0>
<s5>02</s5>
</fC03>
<fC03 i1="02" i2="X" l="ENG"><s0>Priming effect</s0>
<s5>02</s5>
</fC03>
<fC03 i1="02" i2="X" l="SPA"><s0>Efecto priming</s0>
<s5>02</s5>
</fC03>
<fC03 i1="03" i2="X" l="FRE"><s0>Modalité stimulus</s0>
<s5>03</s5>
</fC03>
<fC03 i1="03" i2="X" l="ENG"><s0>Stimulus modality</s0>
<s5>03</s5>
</fC03>
<fC03 i1="03" i2="X" l="SPA"><s0>Modalidad estímulo</s0>
<s5>03</s5>
</fC03>
<fC03 i1="04" i2="X" l="FRE"><s0>Vision</s0>
<s5>04</s5>
</fC03>
<fC03 i1="04" i2="X" l="ENG"><s0>Vision</s0>
<s5>04</s5>
</fC03>
<fC03 i1="04" i2="X" l="SPA"><s0>Visión</s0>
<s5>04</s5>
</fC03>
<fC03 i1="05" i2="X" l="FRE"><s0>Audition</s0>
<s5>05</s5>
</fC03>
<fC03 i1="05" i2="X" l="ENG"><s0>Hearing</s0>
<s5>05</s5>
</fC03>
<fC03 i1="05" i2="X" l="SPA"><s0>Audición</s0>
<s5>05</s5>
</fC03>
<fC03 i1="06" i2="X" l="FRE"><s0>Perception intermodale</s0>
<s5>06</s5>
</fC03>
<fC03 i1="06" i2="X" l="ENG"><s0>Intermodal perception</s0>
<s5>06</s5>
</fC03>
<fC03 i1="06" i2="X" l="SPA"><s0>Percepción intermodal</s0>
<s5>06</s5>
</fC03>
<fC03 i1="07" i2="X" l="FRE"><s0>Mémoire</s0>
<s5>07</s5>
</fC03>
<fC03 i1="07" i2="X" l="ENG"><s0>Memory</s0>
<s5>07</s5>
</fC03>
<fC03 i1="07" i2="X" l="SPA"><s0>Memoria</s0>
<s5>07</s5>
</fC03>
<fC03 i1="08" i2="X" l="FRE"><s0>Reconnaissance mnémonique</s0>
<s5>10</s5>
</fC03>
<fC03 i1="08" i2="X" l="ENG"><s0>Recognition memory</s0>
<s5>10</s5>
</fC03>
<fC03 i1="08" i2="X" l="SPA"><s0>Reconocimiento mnemónico</s0>
<s5>10</s5>
</fC03>
<fC03 i1="09" i2="X" l="FRE"><s0>Perception</s0>
<s5>17</s5>
</fC03>
<fC03 i1="09" i2="X" l="ENG"><s0>Perception</s0>
<s5>17</s5>
</fC03>
<fC03 i1="09" i2="X" l="SPA"><s0>Percepción</s0>
<s5>17</s5>
</fC03>
<fC03 i1="10" i2="X" l="FRE"><s0>Cognition</s0>
<s5>18</s5>
</fC03>
<fC03 i1="10" i2="X" l="ENG"><s0>Cognition</s0>
<s5>18</s5>
</fC03>
<fC03 i1="10" i2="X" l="SPA"><s0>Cognición</s0>
<s5>18</s5>
</fC03>
<fC03 i1="11" i2="X" l="FRE"><s0>Homme</s0>
<s5>19</s5>
</fC03>
<fC03 i1="11" i2="X" l="ENG"><s0>Human</s0>
<s5>19</s5>
</fC03>
<fC03 i1="11" i2="X" l="SPA"><s0>Hombre</s0>
<s5>19</s5>
</fC03>
<fN21><s1>302</s1>
</fN21>
</pA>
</standard>
<server><NO>PASCAL 01-0432213 INIST</NO>
<ET>Visual-auditory events: Cross-modal perceptual priming and recognition memory</ET>
<AU>GREENE (Anthony J.); EASTON (Randolph D.); LASHELL (Lisa S. R.)</AU>
<AF>Department of Neurosurgery, University of Virginia, HSC Box 420/Charlottesville, Virginia 22908/Etats-Unis (1 aut.); Department of Psychology, Boston College/Chestnut Hill, Massachusetts 02167/Etats-Unis (2 aut.)</AF>
<DT>Publication en série; Courte communication, note brève; Niveau analytique</DT>
<SO>Consciousness and cognition; ISSN 1053-8100; Etats-Unis; Da. 2001; Vol. 10; No. 3; Pp. 425-435; Bibl. 2 p.</SO>
<LA>Anglais</LA>
<EA>Modality specificity in priming is taken as evidence for independent perceptual systems. However, Easton, Greene, and Srinivas (1997) showed that visual and haptic cross-modal priming is comparable in magnitude to within-modal priming. Where appropriate, perceptual systems might share like information. To test this, we assessed priming and recognition for visual and auditory events, within- and across-modalities. On the visual test, auditory study resulted in no priming. On the auditory priming test, visual study resulted in priming that was only marginally less than within-modal priming. The priming results show that visual study facilitates identification on both visual and auditory tests, but auditory study only facilitates performance on the auditory test. For both recognition tests, within-modal recognition exceeded cross-modal recognition. The results have two novel implications for the understanding of perceptual priming: First, we introduce visual and auditory priming for spatio-temporal events as a new priming paradigm chosen for its ecological validity and potential for information exchange. Second, we propose that the asymmetry of the cross-modal priming observed here may reflect the capacity of these perceptual modalities to provide cross-modal constraints on ambiguity. We argue that visual perception might inform and constrain auditory processing, while auditory perception corresponds to too many potential visual events to usefully inform and constrain visual perception.</EA>
<CC>002A26F05A</CC>
<FD>Etude expérimentale; Effet amorçage; Modalité stimulus; Vision; Audition; Perception intermodale; Mémoire; Reconnaissance mnémonique; Perception; Cognition; Homme</FD>
<ED>Experimental study; Priming effect; Stimulus modality; Vision; Hearing; Intermodal perception; Memory; Recognition memory; Perception; Cognition; Human</ED>
<SD>Estudio experimental; Efecto priming; Modalidad estímulo; Visión; Audición; Percepción intermodal; Memoria; Reconocimiento mnemónico; Percepción; Cognición; Hombre</SD>
<LO>INIST-22612.354000099504040070</LO>
<ID>01-0432213</ID>
</server>
</inist>
</record>
To manipulate this document under Unix (Dilib)
EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/HapticV1/Data/PascalFrancis/Corpus
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 001358 | SxmlIndent | more
Or
HfdSelect -h $EXPLOR_AREA/Data/PascalFrancis/Corpus/biblio.hfd -nk 001358 | SxmlIndent | more
To link to this page in the Wicri network
{{Explor lien |wiki= Ticri/CIDE |area= HapticV1 |flux= PascalFrancis |étape= Corpus |type= RBID |clé= Pascal:01-0432213 |texte= Visual-auditory events: Cross-modal perceptual priming and recognition memory }}
This area was generated with Dilib version V0.6.23.