Exploration server on haptic devices

Warning: this site is under development!
Warning: this site is generated automatically from raw corpora.
The information is therefore not validated.

The effect of temporal delay and spatial differences on cross-modal object recognition

Internal identifier: 000B98 (PascalFrancis/Checkpoint); previous: 000B97; next: 000B99


Authors: Andrew T. Woods [Ireland]; Sile O'Modhrain [Ireland]; Fiona N. Newell [Ireland]

Source:

RBID: Pascal:04-0495950

French descriptors

English descriptors

Abstract

In a series of experiments, we investigated the matching of objects across visual and haptic modalities across different time delays and spatial dimensions. In all of the experiments, we used simple L-shaped figures as stimuli that varied in either the x or the y dimension or in both dimensions. In Experiment 1, we found that cross-modal matching performance decreased as a function of the time delay between the presentation of the objects. We found no difference in performance between the visual-haptic (VH) and haptic-visual (HV) conditions. Cross-modal performance was better when objects differed in both the x and y dimensions rather than in one dimension alone. In Experiment 2, we investigated the relative contribution of each modality to performance across different interstimulus delays. We found no differential effect of delay between the HH and VV conditions, although general performance was better for the VV condition than for the HH condition. Again, responses to xy changes were better than changes in the x or y dimensions alone. Finally, in Experiment 3, we examined performance in a matching task with simultaneous and successive presentation conditions. We failed to find any difference between simultaneous and successive presentation conditions. Our findings suggest that the short-term retention of object representations is similar in both the visual and haptic modalities. Moreover, these results suggest that recognition is best within a temporal window that includes simultaneous or rapidly successive presentation of stimuli across the modalities and is also best when objects are more discriminable from each other.


Affiliations:


Links to previous steps (curation, corpus...)


Links to Exploration step

Pascal:04-0495950

The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en" level="a">The effect of temporal delay and spatial differences on cross-modal object recognition</title>
<author>
<name sortKey="Woods, Andrew T" sort="Woods, Andrew T" uniqKey="Woods A" first="Andrew T." last="Woods">Andrew T. Woods</name>
<affiliation wicri:level="1">
<inist:fA14 i1="01">
<s1>University of Dublin, Trinity College</s1>
<s2>Dublin</s2>
<s3>IRL</s3>
<sZ>1 aut.</sZ>
<sZ>3 aut.</sZ>
</inist:fA14>
<country>Irlande (pays)</country>
<wicri:noRegion>Dublin</wicri:noRegion>
</affiliation>
</author>
<author>
<name sortKey="O Modhrain, Sile" sort="O Modhrain, Sile" uniqKey="O Modhrain S" first="Sile" last="O'Modhrain">Sile O'Modhrain</name>
<affiliation wicri:level="1">
<inist:fA14 i1="02">
<s1>Media Lab Europe</s1>
<s2>Dublin</s2>
<s3>IRL</s3>
<sZ>2 aut.</sZ>
</inist:fA14>
<country>Irlande (pays)</country>
<wicri:noRegion>Media Lab Europe</wicri:noRegion>
</affiliation>
</author>
<author>
<name sortKey="Newell, Fiona N" sort="Newell, Fiona N" uniqKey="Newell F" first="Fiona N." last="Newell">Fiona N. Newell</name>
<affiliation wicri:level="1">
<inist:fA14 i1="01">
<s1>University of Dublin, Trinity College</s1>
<s2>Dublin</s2>
<s3>IRL</s3>
<sZ>1 aut.</sZ>
<sZ>3 aut.</sZ>
</inist:fA14>
<country>Irlande (pays)</country>
<wicri:noRegion>Dublin</wicri:noRegion>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">INIST</idno>
<idno type="inist">04-0495950</idno>
<date when="2004">2004</date>
<idno type="stanalyst">PASCAL 04-0495950 INIST</idno>
<idno type="RBID">Pascal:04-0495950</idno>
<idno type="wicri:Area/PascalFrancis/Corpus">000F39</idno>
<idno type="wicri:Area/PascalFrancis/Curation">000570</idno>
<idno type="wicri:Area/PascalFrancis/Checkpoint">000B98</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a">The effect of temporal delay and spatial differences on cross-modal object recognition</title>
<author>
<name sortKey="Woods, Andrew T" sort="Woods, Andrew T" uniqKey="Woods A" first="Andrew T." last="Woods">Andrew T. Woods</name>
<affiliation wicri:level="1">
<inist:fA14 i1="01">
<s1>University of Dublin, Trinity College</s1>
<s2>Dublin</s2>
<s3>IRL</s3>
<sZ>1 aut.</sZ>
<sZ>3 aut.</sZ>
</inist:fA14>
<country>Irlande (pays)</country>
<wicri:noRegion>Dublin</wicri:noRegion>
</affiliation>
</author>
<author>
<name sortKey="O Modhrain, Sile" sort="O Modhrain, Sile" uniqKey="O Modhrain S" first="Sile" last="O'Modhrain">Sile O'Modhrain</name>
<affiliation wicri:level="1">
<inist:fA14 i1="02">
<s1>Media Lab Europe</s1>
<s2>Dublin</s2>
<s3>IRL</s3>
<sZ>2 aut.</sZ>
</inist:fA14>
<country>Irlande (pays)</country>
<wicri:noRegion>Media Lab Europe</wicri:noRegion>
</affiliation>
</author>
<author>
<name sortKey="Newell, Fiona N" sort="Newell, Fiona N" uniqKey="Newell F" first="Fiona N." last="Newell">Fiona N. Newell</name>
<affiliation wicri:level="1">
<inist:fA14 i1="01">
<s1>University of Dublin, Trinity College</s1>
<s2>Dublin</s2>
<s3>IRL</s3>
<sZ>1 aut.</sZ>
<sZ>3 aut.</sZ>
</inist:fA14>
<country>Irlande (pays)</country>
<wicri:noRegion>Dublin</wicri:noRegion>
</affiliation>
</author>
</analytic>
<series>
<title level="j" type="main">Cognitive, affective & behavioral neuroscience : (Print)</title>
<title level="j" type="abbreviated">Cogn. affect. behav. neurosci. : (Print)</title>
<idno type="ISSN">1530-7026</idno>
<imprint>
<date when="2004">2004</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
<seriesStmt>
<title level="j" type="main">Cognitive, affective & behavioral neuroscience : (Print)</title>
<title level="j" type="abbreviated">Cogn. affect. behav. neurosci. : (Print)</title>
<idno type="ISSN">1530-7026</idno>
</seriesStmt>
</fileDesc>
<profileDesc>
<textClass>
<keywords scheme="KwdEn" xml:lang="en">
<term>Cognition</term>
<term>Experimental study</term>
<term>Human</term>
<term>Object</term>
<term>Perception</term>
<term>Recognition</term>
<term>Space</term>
<term>Tactile sensitivity</term>
<term>Time</term>
<term>Vision</term>
</keywords>
<keywords scheme="Pascal" xml:lang="fr">
<term>Etude expérimentale</term>
<term>Vision</term>
<term>Sensibilité tactile</term>
<term>Reconnaissance</term>
<term>Objet</term>
<term>Espace</term>
<term>Temps</term>
<term>Perception</term>
<term>Cognition</term>
<term>Homme</term>
</keywords>
<keywords scheme="Wicri" type="topic" xml:lang="fr">
<term>Homme</term>
</keywords>
</textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">In a series of experiments, we investigated the matching of objects across visual and haptic modalities across different time delays and spatial dimensions. In all of the experiments, we used simple L-shaped figures as stimuli that varied in either the x or the y dimension or in both dimensions. In Experiment 1, we found that cross-modal matching performance decreased as a function of the time delay between the presentation of the objects. We found no difference in performance between the visual-haptic (VH) and haptic-visual (HV) conditions. Cross-modal performance was better when objects differed in both the x and y dimensions rather than in one dimension alone. In Experiment 2, we investigated the relative contribution of each modality to performance across different interstimulus delays. We found no differential effect of delay between the HH and VV conditions, although general performance was better for the VV condition than for the HH condition. Again, responses to xy changes were better than changes in the x or y dimensions alone. Finally, in Experiment 3, we examined performance in a matching task with simultaneous and successive presentation conditions. We failed to find any difference between simultaneous and successive presentation conditions. Our findings suggest that the short-term retention of object representations is similar in both the visual and haptic modalities. Moreover, these results suggest that recognition is best within a temporal window that includes simultaneous or rapidly successive presentation of stimuli across the modalities and is also best when objects are more discriminable from each other.</div>
</front>
</TEI>
<inist>
<standard h6="B">
<pA>
<fA01 i1="01" i2="1">
<s0>1530-7026</s0>
</fA01>
<fA03 i2="1">
<s0>Cogn. affect. behav. neurosci. : (Print)</s0>
</fA03>
<fA05>
<s2>4</s2>
</fA05>
<fA06>
<s2>2</s2>
</fA06>
<fA08 i1="01" i2="1" l="ENG">
<s1>The effect of temporal delay and spatial differences on cross-modal object recognition</s1>
</fA08>
<fA09 i1="01" i2="1" l="ENG">
<s1>Multisensory processes</s1>
</fA09>
<fA11 i1="01" i2="1">
<s1>WOODS (Andrew T.)</s1>
</fA11>
<fA11 i1="02" i2="1">
<s1>O'MODHRAIN (Sile)</s1>
</fA11>
<fA11 i1="03" i2="1">
<s1>NEWELL (Fiona N.)</s1>
</fA11>
<fA12 i1="01" i2="1">
<s1>SHORE (David I.)</s1>
<s9>ed.</s9>
</fA12>
<fA12 i1="02" i2="1">
<s1>ELLIOTT (Digby)</s1>
<s9>ed.</s9>
</fA12>
<fA12 i1="03" i2="1">
<s1>MEREDITH (M. Alex)</s1>
<s9>ed.</s9>
</fA12>
<fA14 i1="01">
<s1>University of Dublin, Trinity College</s1>
<s2>Dublin</s2>
<s3>IRL</s3>
<sZ>1 aut.</sZ>
<sZ>3 aut.</sZ>
</fA14>
<fA14 i1="02">
<s1>Media Lab Europe</s1>
<s2>Dublin</s2>
<s3>IRL</s3>
<sZ>2 aut.</sZ>
</fA14>
<fA15 i1="01">
<s1>Department of Psychology, McMaster University</s1>
<s3>CAN</s3>
<sZ>1 aut.</sZ>
</fA15>
<fA15 i1="02">
<s1>Department of Kinesiology, McMaster University</s1>
<s3>CAN</s3>
<sZ>2 aut.</sZ>
</fA15>
<fA15 i1="03">
<s1>Medical College of Virginia, Virginia Commonwealth University</s1>
<s3>USA</s3>
<sZ>3 aut.</sZ>
</fA15>
<fA20>
<s1>260-269</s1>
</fA20>
<fA21>
<s1>2004</s1>
</fA21>
<fA23 i1="01">
<s0>ENG</s0>
</fA23>
<fA43 i1="01">
<s1>INIST</s1>
<s2>13280A</s2>
<s5>354000113967780150</s5>
</fA43>
<fA44>
<s0>0000</s0>
<s1>© 2004 INIST-CNRS. All rights reserved.</s1>
</fA44>
<fA45>
<s0>29 ref.</s0>
</fA45>
<fA47 i1="01" i2="1">
<s0>04-0495950</s0>
</fA47>
<fA60>
<s1>P</s1>
<s2>C</s2>
</fA60>
<fA61>
<s0>A</s0>
</fA61>
<fA64 i1="01" i2="1">
<s0>Cognitive, affective & behavioral neuroscience : (Print)</s0>
</fA64>
<fA66 i1="01">
<s0>USA</s0>
</fA66>
<fA99>
<s0>2 notes</s0>
</fA99>
<fC01 i1="01" l="ENG">
<s0>In a series of experiments, we investigated the matching of objects across visual and haptic modalities across different time delays and spatial dimensions. In all of the experiments, we used simple L-shaped figures as stimuli that varied in either the x or the y dimension or in both dimensions. In Experiment 1, we found that cross-modal matching performance decreased as a function of the time delay between the presentation of the objects. We found no difference in performance between the visual-haptic (VH) and haptic-visual (HV) conditions. Cross-modal performance was better when objects differed in both the x and y dimensions rather than in one dimension alone. In Experiment 2, we investigated the relative contribution of each modality to performance across different interstimulus delays. We found no differential effect of delay between the HH and VV conditions, although general performance was better for the VV condition than for the HH condition. Again, responses to xy changes were better than changes in the x or y dimensions alone. Finally, in Experiment 3, we examined performance in a matching task with simultaneous and successive presentation conditions. We failed to find any difference between simultaneous and successive presentation conditions. Our findings suggest that the short-term retention of object representations is similar in both the visual and haptic modalities. Moreover, these results suggest that recognition is best within a temporal window that includes simultaneous or rapidly successive presentation of stimuli across the modalities and is also best when objects are more discriminable from each other.</s0>
</fC01>
<fC02 i1="01" i2="X">
<s0>002A26E08</s0>
</fC02>
<fC03 i1="01" i2="X" l="FRE">
<s0>Etude expérimentale</s0>
<s5>01</s5>
</fC03>
<fC03 i1="01" i2="X" l="ENG">
<s0>Experimental study</s0>
<s5>01</s5>
</fC03>
<fC03 i1="01" i2="X" l="SPA">
<s0>Estudio experimental</s0>
<s5>01</s5>
</fC03>
<fC03 i1="02" i2="X" l="FRE">
<s0>Vision</s0>
<s5>02</s5>
</fC03>
<fC03 i1="02" i2="X" l="ENG">
<s0>Vision</s0>
<s5>02</s5>
</fC03>
<fC03 i1="02" i2="X" l="SPA">
<s0>Visión</s0>
<s5>02</s5>
</fC03>
<fC03 i1="03" i2="X" l="FRE">
<s0>Sensibilité tactile</s0>
<s5>03</s5>
</fC03>
<fC03 i1="03" i2="X" l="ENG">
<s0>Tactile sensitivity</s0>
<s5>03</s5>
</fC03>
<fC03 i1="03" i2="X" l="SPA">
<s0>Sensibilidad tactil</s0>
<s5>03</s5>
</fC03>
<fC03 i1="04" i2="X" l="FRE">
<s0>Reconnaissance</s0>
<s5>04</s5>
</fC03>
<fC03 i1="04" i2="X" l="ENG">
<s0>Recognition</s0>
<s5>04</s5>
</fC03>
<fC03 i1="04" i2="X" l="SPA">
<s0>Reconocimiento</s0>
<s5>04</s5>
</fC03>
<fC03 i1="05" i2="X" l="FRE">
<s0>Objet</s0>
<s5>05</s5>
</fC03>
<fC03 i1="05" i2="X" l="ENG">
<s0>Object</s0>
<s5>05</s5>
</fC03>
<fC03 i1="05" i2="X" l="SPA">
<s0>Objeto</s0>
<s5>05</s5>
</fC03>
<fC03 i1="06" i2="X" l="FRE">
<s0>Espace</s0>
<s5>06</s5>
</fC03>
<fC03 i1="06" i2="X" l="ENG">
<s0>Space</s0>
<s5>06</s5>
</fC03>
<fC03 i1="06" i2="X" l="SPA">
<s0>Espacio</s0>
<s5>06</s5>
</fC03>
<fC03 i1="07" i2="X" l="FRE">
<s0>Temps</s0>
<s5>07</s5>
</fC03>
<fC03 i1="07" i2="X" l="ENG">
<s0>Time</s0>
<s5>07</s5>
</fC03>
<fC03 i1="07" i2="X" l="SPA">
<s0>Tiempo</s0>
<s5>07</s5>
</fC03>
<fC03 i1="08" i2="X" l="FRE">
<s0>Perception</s0>
<s5>13</s5>
</fC03>
<fC03 i1="08" i2="X" l="ENG">
<s0>Perception</s0>
<s5>13</s5>
</fC03>
<fC03 i1="08" i2="X" l="SPA">
<s0>Percepción</s0>
<s5>13</s5>
</fC03>
<fC03 i1="09" i2="X" l="FRE">
<s0>Cognition</s0>
<s5>17</s5>
</fC03>
<fC03 i1="09" i2="X" l="ENG">
<s0>Cognition</s0>
<s5>17</s5>
</fC03>
<fC03 i1="09" i2="X" l="SPA">
<s0>Cognición</s0>
<s5>17</s5>
</fC03>
<fC03 i1="10" i2="X" l="FRE">
<s0>Homme</s0>
<s5>18</s5>
</fC03>
<fC03 i1="10" i2="X" l="ENG">
<s0>Human</s0>
<s5>18</s5>
</fC03>
<fC03 i1="10" i2="X" l="SPA">
<s0>Hombre</s0>
<s5>18</s5>
</fC03>
<fN21>
<s1>278</s1>
</fN21>
<fN44 i1="01">
<s1>PSI</s1>
</fN44>
<fN82>
<s1>PSI</s1>
</fN82>
</pA>
<pR>
<fA30 i1="01" i2="1" l="ENG">
<s1>International Multisensory Research Forum</s1>
<s2>4</s2>
<s3>Hamilton, Ontario CAN</s3>
<s4>2003-06</s4>
</fA30>
</pR>
</standard>
</inist>
<affiliations>
<list>
<country>
<li>Irlande (pays)</li>
</country>
</list>
<tree>
<country name="Irlande (pays)">
<noRegion>
<name sortKey="Woods, Andrew T" sort="Woods, Andrew T" uniqKey="Woods A" first="Andrew T." last="Woods">Andrew T. Woods</name>
</noRegion>
<name sortKey="Newell, Fiona N" sort="Newell, Fiona N" uniqKey="Newell F" first="Fiona N." last="Newell">Fiona N. Newell</name>
<name sortKey="O Modhrain, Sile" sort="O Modhrain, Sile" uniqKey="O Modhrain S" first="Sile" last="O'Modhrain">Sile O'Modhrain</name>
</country>
</tree>
</affiliations>
</record>
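
The record above combines a TEI header (title, authors, affiliations, multilingual keywords) with the original INIST fields (fA.., fC.., fN..). As a minimal sketch, assuming the record has been saved to a local file named record.xml (a hypothetical name, not provided by the site), the English keyword block can be pulled out with standard Unix tools:

# record.xml is a hypothetical local copy of the <record> shown above
sed -n '/<keywords scheme="KwdEn"/,/<\/keywords>/p' record.xml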

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/HapticV1/Data/PascalFrancis/Checkpoint
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 000B98 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/PascalFrancis/Checkpoint/biblio.hfd -nk 000B98 | SxmlIndent | more
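
Both commands print the complete indented record. As a minimal sketch, assuming the pipeline emits the XML record reproduced above, the output can be narrowed to a single field with an ordinary grep filter (an illustration, not a Dilib feature), for example the title elements:

# keep only the <title> lines of the indented record
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 000B98 | SxmlIndent | grep '<title'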

To add a link to this page in the Wicri network

{{Explor lien
   |wiki=    Ticri/CIDE
   |area=    HapticV1
   |flux=    PascalFrancis
   |étape=   Checkpoint
   |type=    RBID
   |clé=     Pascal:04-0495950
   |texte=   The effect of temporal delay and spatial differences on cross-modal object recognition
}}

Wicri

This area was generated with Dilib version V0.6.23.
Data generation: Mon Jun 13 01:09:46 2016. Site generation: Wed Mar 6 09:54:07 2024