The effect of temporal delay and spatial differences on cross-modal object recognition
Internal identifier: 000570 (PascalFrancis/Curation); previous: 000569; next: 000571
Authors: Andrew T. Woods [Ireland (country)]; Sile O'Modhrain [Ireland (country)]; Fiona N. Newell [Ireland (country)]
Source: Cognitive, affective & behavioral neuroscience (Print) [1530-7026]; 2004.
RBID: Pascal:04-0495950
French descriptors
English descriptors
Abstract
In a series of experiments, we investigated the matching of objects across visual and haptic modalities over different time delays and spatial dimensions. In all of the experiments, we used simple L-shaped figures as stimuli that varied in either the x or the y dimension or in both dimensions. In Experiment 1, we found that cross-modal matching performance decreased as a function of the time delay between the presentation of the objects. We found no difference in performance between the visual-haptic (VH) and haptic-visual (HV) conditions. Cross-modal performance was better when objects differed in both the x and y dimensions rather than in one dimension alone. In Experiment 2, we investigated the relative contribution of each modality to performance across different interstimulus delays. We found no differential effect of delay between the HH and VV conditions, although general performance was better for the VV condition than for the HH condition. Again, responses to xy changes were better than responses to changes in the x or y dimension alone. Finally, in Experiment 3, we examined performance in a matching task with simultaneous and successive presentation conditions. We failed to find any difference between simultaneous and successive presentation conditions. Our findings suggest that the short-term retention of object representations is similar in both the visual and haptic modalities. Moreover, these results suggest that recognition is best within a temporal window that includes simultaneous or rapidly successive presentation of stimuli across the modalities and is also best when objects are more discriminable from each other.
pA
A01 | 01 | 1 | | @0 1530-7026
A03 | | 1 | | @0 Cogn. affect. behav. neurosci. : (Print)
A05 | | | | @2 4
A06 | | | | @2 2
A08 | 01 | 1 | ENG | @1 The effect of temporal delay and spatial differences on cross-modal object recognition
A09 | 01 | 1 | ENG | @1 Multisensory processes
A11 | 01 | 1 | | @1 WOODS (Andrew T.)
A11 | 02 | 1 | | @1 O'MODHRAIN (Sile)
A11 | 03 | 1 | | @1 NEWELL (Fiona N.)
A12 | 01 | 1 | | @1 SHORE (David I.) @9 ed.
A12 | 02 | 1 | | @1 ELLIOTT (Digby) @9 ed.
A12 | 03 | 1 | | @1 MEREDITH (M. Alex) @9 ed.
A14 | 01 | | | @1 University of Dublin, Trinity College @2 Dublin @3 IRL @Z 1 aut. @Z 3 aut.
A14 | 02 | | | @1 Media Lab Europe @2 Dublin @3 IRL @Z 2 aut.
A15 | 01 | | | @1 Department of Psychology, McMaster University @3 CAN @Z 1 aut.
A15 | 02 | | | @1 Department of Kinesiology, McMaster University @3 CAN @Z 2 aut.
A15 | 03 | | | @1 Medical College of Virginia, Virginia Commonwealth University @3 USA @Z 3 aut.
A20 | | | | @1 260-269
A21 | | | | @1 2004
A23 | 01 | | | @0 ENG
A43 | 01 | | | @1 INIST @2 13280A @5 354000113967780150
A44 | | | | @0 0000 @1 © 2004 INIST-CNRS. All rights reserved.
A45 | | | | @0 29 ref.
A47 | 01 | 1 | | @0 04-0495950
A60 | | | | @1 P @2 C
A61 | | | | @0 A
A64 | 01 | 1 | | @0 Cognitive, affective & behavioral neuroscience : (Print)
A66 | 01 | | | @0 USA
A99 | | | | @0 2 notes
C01 | 01 | | ENG | @0 In a series of experiments, we investigated the matching of objects across visual and haptic modalities across different time delays and spatial dimensions. In all of the experiments, we used simple L-shaped figures as stimuli that varied in either the x or the y dimension or in both dimensions. In Experiment 1, we found that cross-modal matching performance decreased as a function of the time delay between the presentation of the objects. We found no difference in performance between the visual-haptic (VH) and haptic-visual (HV) conditions. Cross-modal performance was better when objects differed in both the x and y dimensions rather than in one dimension alone. In Experiment 2, we investigated the relative contribution of each modality to performance across different interstimulus delays. We found no differential effect of delay between the HH and VV conditions, although general performance was better for the VV condition than for the HH condition. Again, responses to xy changes were better than changes in the x or y dimensions alone. Finally, in Experiment 3, we examined performance in a matching task with simultaneous and successive presentation conditions. We failed to find any difference between simultaneous and successive presentation conditions. Our findings suggest that the short-term retention of object representations is similar in both the visual and haptic modalities. Moreover, these results suggest that recognition is best within a temporal window that includes simultaneous or rapidly successive presentation of stimuli across the modalities and is also best when objects are more discriminable from each other.
C02 | 01 | X | | @0 002A26E08
C03 | 01 | X | FRE | @0 Etude expérimentale @5 01
C03 | 01 | X | ENG | @0 Experimental study @5 01
C03 | 01 | X | SPA | @0 Estudio experimental @5 01
C03 | 02 | X | FRE | @0 Vision @5 02
C03 | 02 | X | ENG | @0 Vision @5 02
C03 | 02 | X | SPA | @0 Visión @5 02
C03 | 03 | X | FRE | @0 Sensibilité tactile @5 03
C03 | 03 | X | ENG | @0 Tactile sensitivity @5 03
C03 | 03 | X | SPA | @0 Sensibilidad tactil @5 03
C03 | 04 | X | FRE | @0 Reconnaissance @5 04
C03 | 04 | X | ENG | @0 Recognition @5 04
C03 | 04 | X | SPA | @0 Reconocimiento @5 04
C03 | 05 | X | FRE | @0 Objet @5 05
C03 | 05 | X | ENG | @0 Object @5 05
C03 | 05 | X | SPA | @0 Objeto @5 05
C03 | 06 | X | FRE | @0 Espace @5 06
C03 | 06 | X | ENG | @0 Space @5 06
C03 | 06 | X | SPA | @0 Espacio @5 06
C03 | 07 | X | FRE | @0 Temps @5 07
C03 | 07 | X | ENG | @0 Time @5 07
C03 | 07 | X | SPA | @0 Tiempo @5 07
C03 | 08 | X | FRE | @0 Perception @5 13
C03 | 08 | X | ENG | @0 Perception @5 13
C03 | 08 | X | SPA | @0 Percepción @5 13
C03 | 09 | X | FRE | @0 Cognition @5 17
C03 | 09 | X | ENG | @0 Cognition @5 17
C03 | 09 | X | SPA | @0 Cognición @5 17
C03 | 10 | X | FRE | @0 Homme @5 18
C03 | 10 | X | ENG | @0 Human @5 18
C03 | 10 | X | SPA | @0 Hombre @5 18
N21 | | | | @1 278
N44 | 01 | | | @1 PSI
N82 | | | | @1 PSI

pR
A30 | 01 | 1 | ENG | @1 International Multisensory Research Forum @2 4 @3 Hamilton, Ontario CAN @4 2003-06
Links toward previous steps (curation, corpus...)
- to stream PascalFrancis, to step Corpus: to go to this record in the Curation step: 000F39
Links to Exploration step
Pascal:04-0495950
The document in XML format
<record><TEI><teiHeader><fileDesc><titleStmt><title xml:lang="en" level="a">The effect of temporal delay and spatial differences on cross-modal object recognition</title>
<author><name sortKey="Woods, Andrew T" sort="Woods, Andrew T" uniqKey="Woods A" first="Andrew T." last="Woods">Andrew T. Woods</name>
<affiliation wicri:level="1"><inist:fA14 i1="01"><s1>University of Dublin, Trinity College</s1>
<s2>Dublin</s2>
<s3>IRL</s3>
<sZ>1 aut.</sZ>
<sZ>3 aut.</sZ>
</inist:fA14>
<country>Irlande (pays)</country>
</affiliation>
</author>
<author><name sortKey="O Modhrain, Sile" sort="O Modhrain, Sile" uniqKey="O Modhrain S" first="Sile" last="O'Modhrain">Sile O'Modhrain</name>
<affiliation wicri:level="1"><inist:fA14 i1="02"><s1>Media Lab Europe</s1>
<s2>Dublin</s2>
<s3>IRL</s3>
<sZ>2 aut.</sZ>
</inist:fA14>
<country>Irlande (pays)</country>
</affiliation>
</author>
<author><name sortKey="Newell, Fiona N" sort="Newell, Fiona N" uniqKey="Newell F" first="Fiona N." last="Newell">Fiona N. Newell</name>
<affiliation wicri:level="1"><inist:fA14 i1="01"><s1>University of Dublin, Trinity College</s1>
<s2>Dublin</s2>
<s3>IRL</s3>
<sZ>1 aut.</sZ>
<sZ>3 aut.</sZ>
</inist:fA14>
<country>Irlande (pays)</country>
</affiliation>
</author>
</titleStmt>
<publicationStmt><idno type="wicri:source">INIST</idno>
<idno type="inist">04-0495950</idno>
<date when="2004">2004</date>
<idno type="stanalyst">PASCAL 04-0495950 INIST</idno>
<idno type="RBID">Pascal:04-0495950</idno>
<idno type="wicri:Area/PascalFrancis/Corpus">000F39</idno>
<idno type="wicri:Area/PascalFrancis/Curation">000570</idno>
</publicationStmt>
<sourceDesc><biblStruct><analytic><title xml:lang="en" level="a">The effect of temporal delay and spatial differences on cross-modal object recognition</title>
<author><name sortKey="Woods, Andrew T" sort="Woods, Andrew T" uniqKey="Woods A" first="Andrew T." last="Woods">Andrew T. Woods</name>
<affiliation wicri:level="1"><inist:fA14 i1="01"><s1>University of Dublin, Trinity College</s1>
<s2>Dublin</s2>
<s3>IRL</s3>
<sZ>1 aut.</sZ>
<sZ>3 aut.</sZ>
</inist:fA14>
<country>Irlande (pays)</country>
</affiliation>
</author>
<author><name sortKey="O Modhrain, Sile" sort="O Modhrain, Sile" uniqKey="O Modhrain S" first="Sile" last="O'Modhrain">Sile O'Modhrain</name>
<affiliation wicri:level="1"><inist:fA14 i1="02"><s1>Media Lab Europe</s1>
<s2>Dublin</s2>
<s3>IRL</s3>
<sZ>2 aut.</sZ>
</inist:fA14>
<country>Irlande (pays)</country>
</affiliation>
</author>
<author><name sortKey="Newell, Fiona N" sort="Newell, Fiona N" uniqKey="Newell F" first="Fiona N." last="Newell">Fiona N. Newell</name>
<affiliation wicri:level="1"><inist:fA14 i1="01"><s1>University of Dublin, Trinity College</s1>
<s2>Dublin</s2>
<s3>IRL</s3>
<sZ>1 aut.</sZ>
<sZ>3 aut.</sZ>
</inist:fA14>
<country>Irlande (pays)</country>
</affiliation>
</author>
</analytic>
<series><title level="j" type="main">Cognitive, affective &amp; behavioral neuroscience : (Print)</title>
<title level="j" type="abbreviated">Cogn. affect. behav. neurosci. : (Print)</title>
<idno type="ISSN">1530-7026</idno>
<imprint><date when="2004">2004</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
<seriesStmt><title level="j" type="main">Cognitive, affective &amp; behavioral neuroscience : (Print)</title>
<title level="j" type="abbreviated">Cogn. affect. behav. neurosci. : (Print)</title>
<idno type="ISSN">1530-7026</idno>
</seriesStmt>
</fileDesc>
<profileDesc><textClass><keywords scheme="KwdEn" xml:lang="en"><term>Cognition</term>
<term>Experimental study</term>
<term>Human</term>
<term>Object</term>
<term>Perception</term>
<term>Recognition</term>
<term>Space</term>
<term>Tactile sensitivity</term>
<term>Time</term>
<term>Vision</term>
</keywords>
<keywords scheme="Pascal" xml:lang="fr"><term>Etude expérimentale</term>
<term>Vision</term>
<term>Sensibilité tactile</term>
<term>Reconnaissance</term>
<term>Objet</term>
<term>Espace</term>
<term>Temps</term>
<term>Perception</term>
<term>Cognition</term>
<term>Homme</term>
</keywords>
<keywords scheme="Wicri" type="topic" xml:lang="fr"><term>Homme</term>
</keywords>
</textClass>
</profileDesc>
</teiHeader>
<front><div type="abstract" xml:lang="en">In a series of experiments, we investigated the matching of objects across visual and haptic modalities across different time delays and spatial dimensions. In all of the experiments, we used simple L-shaped figures as stimuli that varied in either the x or the y dimension or in both dimensions. In Experiment 1, we found that cross-modal matching performance decreased as a function of the time delay between the presentation of the objects. We found no difference in performance between the visual-haptic (VH) and haptic-visual (HV) conditions. Cross-modal performance was better when objects differed in both the x and y dimensions rather than in one dimension alone. In Experiment 2, we investigated the relative contribution of each modality to performance across different interstimulus delays. We found no differential effect of delay between the HH and VV conditions, although general performance was better for the VV condition than for the HH condition. Again, responses to xy changes were better than changes in the x or y dimensions alone. Finally, in Experiment 3, we examined performance in a matching task with simultaneous and successive presentation conditions. We failed to find any difference between simultaneous and successive presentation conditions. Our findings suggest that the short-term retention of object representations is similar in both the visual and haptic modalities. Moreover, these results suggest that recognition is best within a temporal window that includes simultaneous or rapidly successive presentation of stimuli across the modalities and is also best when objects are more discriminable from each other.</div>
</front>
</TEI>
<inist><standard h6="B"><pA><fA01 i1="01" i2="1"><s0>1530-7026</s0>
</fA01>
<fA03 i2="1"><s0>Cogn. affect. behav. neurosci. : (Print)</s0>
</fA03>
<fA08 i1="01" i2="1" l="ENG"><s1>The effect of temporal delay and spatial differences on cross-modal object recognition</s1>
</fA08>
<fA09 i1="01" i2="1" l="ENG"><s1>Multisensory processes</s1>
</fA09>
<fA11 i1="01" i2="1"><s1>WOODS (Andrew T.)</s1>
</fA11>
<fA11 i1="02" i2="1"><s1>O'MODHRAIN (Sile)</s1>
</fA11>
<fA11 i1="03" i2="1"><s1>NEWELL (Fiona N.)</s1>
</fA11>
<fA12 i1="01" i2="1"><s1>SHORE (David I.)</s1>
<s9>ed.</s9>
</fA12>
<fA12 i1="02" i2="1"><s1>ELLIOTT (Digby)</s1>
<s9>ed.</s9>
</fA12>
<fA12 i1="03" i2="1"><s1>MEREDITH (M. Alex)</s1>
<s9>ed.</s9>
</fA12>
<fA14 i1="01"><s1>University of Dublin, Trinity College</s1>
<s2>Dublin</s2>
<s3>IRL</s3>
<sZ>1 aut.</sZ>
<sZ>3 aut.</sZ>
</fA14>
<fA14 i1="02"><s1>Media Lab Europe</s1>
<s2>Dublin</s2>
<s3>IRL</s3>
<sZ>2 aut.</sZ>
</fA14>
<fA15 i1="01"><s1>Department of Psychology, McMaster University</s1>
<s3>CAN</s3>
<sZ>1 aut.</sZ>
</fA15>
<fA15 i1="02"><s1>Department of Kinesiology, McMaster University</s1>
<s3>CAN</s3>
<sZ>2 aut.</sZ>
</fA15>
<fA15 i1="03"><s1>Medical College of Virginia, Virginia Commonwealth University</s1>
<s3>USA</s3>
<sZ>3 aut.</sZ>
</fA15>
<fA20><s1>260-269</s1>
</fA20>
<fA21><s1>2004</s1>
</fA21>
<fA23 i1="01"><s0>ENG</s0>
</fA23>
<fA43 i1="01"><s1>INIST</s1>
<s2>13280A</s2>
<s5>354000113967780150</s5>
</fA43>
<fA44><s0>0000</s0>
<s1>© 2004 INIST-CNRS. All rights reserved.</s1>
</fA44>
<fA45><s0>29 ref.</s0>
</fA45>
<fA47 i1="01" i2="1"><s0>04-0495950</s0>
</fA47>
<fA60><s1>P</s1>
<s2>C</s2>
</fA60>
<fA64 i1="01" i2="1"><s0>Cognitive, affective &amp; behavioral neuroscience : (Print)</s0>
</fA64>
<fA66 i1="01"><s0>USA</s0>
</fA66>
<fA99><s0>2 notes</s0>
</fA99>
<fC01 i1="01" l="ENG"><s0>In a series of experiments, we investigated the matching of objects across visual and haptic modalities across different time delays and spatial dimensions. In all of the experiments, we used simple L-shaped figures as stimuli that varied in either the x or the y dimension or in both dimensions. In Experiment 1, we found that cross-modal matching performance decreased as a function of the time delay between the presentation of the objects. We found no difference in performance between the visual-haptic (VH) and haptic-visual (HV) conditions. Cross-modal performance was better when objects differed in both the x and y dimensions rather than in one dimension alone. In Experiment 2, we investigated the relative contribution of each modality to performance across different interstimulus delays. We found no differential effect of delay between the HH and VV conditions, although general performance was better for the VV condition than for the HH condition. Again, responses to xy changes were better than changes in the x or y dimensions alone. Finally, in Experiment 3, we examined performance in a matching task with simultaneous and successive presentation conditions. We failed to find any difference between simultaneous and successive presentation conditions. Our findings suggest that the short-term retention of object representations is similar in both the visual and haptic modalities. Moreover, these results suggest that recognition is best within a temporal window that includes simultaneous or rapidly successive presentation of stimuli across the modalities and is also best when objects are more discriminable from each other.</s0>
</fC01>
<fC02 i1="01" i2="X"><s0>002A26E08</s0>
</fC02>
<fC03 i1="01" i2="X" l="FRE"><s0>Etude expérimentale</s0>
<s5>01</s5>
</fC03>
<fC03 i1="01" i2="X" l="ENG"><s0>Experimental study</s0>
<s5>01</s5>
</fC03>
<fC03 i1="01" i2="X" l="SPA"><s0>Estudio experimental</s0>
<s5>01</s5>
</fC03>
<fC03 i1="02" i2="X" l="FRE"><s0>Vision</s0>
<s5>02</s5>
</fC03>
<fC03 i1="02" i2="X" l="ENG"><s0>Vision</s0>
<s5>02</s5>
</fC03>
<fC03 i1="02" i2="X" l="SPA"><s0>Visión</s0>
<s5>02</s5>
</fC03>
<fC03 i1="03" i2="X" l="FRE"><s0>Sensibilité tactile</s0>
<s5>03</s5>
</fC03>
<fC03 i1="03" i2="X" l="ENG"><s0>Tactile sensitivity</s0>
<s5>03</s5>
</fC03>
<fC03 i1="03" i2="X" l="SPA"><s0>Sensibilidad tactil</s0>
<s5>03</s5>
</fC03>
<fC03 i1="04" i2="X" l="FRE"><s0>Reconnaissance</s0>
<s5>04</s5>
</fC03>
<fC03 i1="04" i2="X" l="ENG"><s0>Recognition</s0>
<s5>04</s5>
</fC03>
<fC03 i1="04" i2="X" l="SPA"><s0>Reconocimiento</s0>
<s5>04</s5>
</fC03>
<fC03 i1="05" i2="X" l="FRE"><s0>Objet</s0>
<s5>05</s5>
</fC03>
<fC03 i1="05" i2="X" l="ENG"><s0>Object</s0>
<s5>05</s5>
</fC03>
<fC03 i1="05" i2="X" l="SPA"><s0>Objeto</s0>
<s5>05</s5>
</fC03>
<fC03 i1="06" i2="X" l="FRE"><s0>Espace</s0>
<s5>06</s5>
</fC03>
<fC03 i1="06" i2="X" l="ENG"><s0>Space</s0>
<s5>06</s5>
</fC03>
<fC03 i1="06" i2="X" l="SPA"><s0>Espacio</s0>
<s5>06</s5>
</fC03>
<fC03 i1="07" i2="X" l="FRE"><s0>Temps</s0>
<s5>07</s5>
</fC03>
<fC03 i1="07" i2="X" l="ENG"><s0>Time</s0>
<s5>07</s5>
</fC03>
<fC03 i1="07" i2="X" l="SPA"><s0>Tiempo</s0>
<s5>07</s5>
</fC03>
<fC03 i1="08" i2="X" l="FRE"><s0>Perception</s0>
<s5>13</s5>
</fC03>
<fC03 i1="08" i2="X" l="ENG"><s0>Perception</s0>
<s5>13</s5>
</fC03>
<fC03 i1="08" i2="X" l="SPA"><s0>Percepción</s0>
<s5>13</s5>
</fC03>
<fC03 i1="09" i2="X" l="FRE"><s0>Cognition</s0>
<s5>17</s5>
</fC03>
<fC03 i1="09" i2="X" l="ENG"><s0>Cognition</s0>
<s5>17</s5>
</fC03>
<fC03 i1="09" i2="X" l="SPA"><s0>Cognición</s0>
<s5>17</s5>
</fC03>
<fC03 i1="10" i2="X" l="FRE"><s0>Homme</s0>
<s5>18</s5>
</fC03>
<fC03 i1="10" i2="X" l="ENG"><s0>Human</s0>
<s5>18</s5>
</fC03>
<fC03 i1="10" i2="X" l="SPA"><s0>Hombre</s0>
<s5>18</s5>
</fC03>
<fN21><s1>278</s1>
</fN21>
<fN44 i1="01"><s1>PSI</s1>
</fN44>
<fN82><s1>PSI</s1>
</fN82>
</pA>
<pR><fA30 i1="01" i2="1" l="ENG"><s1>International Multisensory Research Forum</s1>
<s2>4</s2>
<s3>Hamilton, Ontario CAN</s3>
<s4>2003-06</s4>
</fA30>
</pR>
</standard>
</inist>
</record>
To manipulate this document under Unix (Dilib)
EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/HapticV1/Data/PascalFrancis/Curation
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 000570 | SxmlIndent | more
Or
HfdSelect -h $EXPLOR_AREA/Data/PascalFrancis/Curation/biblio.hfd -nk 000570 | SxmlIndent | more
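Once the record is extracted, its TEI header can be mined programmatically. A minimal sketch in Python's standard library (illustrative only: it parses a trimmed copy of the `titleStmt` shown above, because the full record uses undeclared namespace prefixes such as `wicri:` and `inist:` that a strict XML parser would reject):

```python
import xml.etree.ElementTree as ET

# Trimmed, well-formed fragment of the record's TEI header (attributes with
# undeclared prefixes like wicri:level are omitted for parseability).
TEI_SAMPLE = """<record><TEI><teiHeader><fileDesc><titleStmt>
<title level="a">The effect of temporal delay and spatial differences on cross-modal object recognition</title>
<author><name sortKey="Woods, Andrew T">Andrew T. Woods</name></author>
<author><name sortKey="O Modhrain, Sile">Sile O'Modhrain</name></author>
<author><name sortKey="Newell, Fiona N">Fiona N. Newell</name></author>
</titleStmt></fileDesc></teiHeader></TEI></record>"""

def extract_biblio(xml_text):
    """Return (title, list of author names) from a TEI-style record."""
    root = ET.fromstring(xml_text)
    title = root.findtext(".//titleStmt/title")
    authors = [name.text for name in root.findall(".//titleStmt/author/name")]
    return title, authors

title, authors = extract_biblio(TEI_SAMPLE)
print(title)
print(authors)
```

In practice the fragment would come from the `HfdSelect` output above rather than an inline string; the XPath-style `.//` queries are deliberately loose so they work at any nesting depth within the record.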
To add a link to this page in the Wicri network
{{Explor lien
|wiki= Ticri/CIDE
|area= HapticV1
|flux= PascalFrancis
|étape= Curation
|type= RBID
|clé= Pascal:04-0495950
|texte= The effect of temporal delay and spatial differences on cross-modal object recognition
}}
This area was generated with Dilib version V0.6.23. Data generation: Mon Jun 13 01:09:46 2016. Site generation: Wed Mar 6 09:54:07 2024.