Exploration server for haptic devices

Please note: this site is still under development!
Please note: this site is generated automatically from raw corpora.
The information it contains has therefore not been validated.

Transfer of object category knowledge across visual and haptic modalities: experimental and computational studies.

Internal identifier: 001736 (Main/Exploration); previous: 001735; next: 001737


Authors: Ilker Yildirim [United States]; Robert A. Jacobs

Source:

RBID: pubmed:23102553


Abstract

We study people's abilities to transfer object category knowledge across visual and haptic domains. If a person learns to categorize objects based on inputs from one sensory modality, can the person categorize these same objects when the objects are perceived through another modality? Can the person categorize novel objects from the same categories when these objects are, again, perceived through another modality? Our work makes three contributions. First, by fabricating Fribbles (3-D, multi-part objects with a categorical structure), we developed visual-haptic stimuli that are highly complex and realistic, and thus more ecologically valid than objects that are typically used in haptic or visual-haptic experiments. Based on these stimuli, we developed the See and Grasp data set, a data set containing both visual and haptic features of the Fribbles, and are making this data set freely available on the world wide web. Second, complementary to previous research such as studies asking if people transfer knowledge of object identity across visual and haptic domains, we conducted an experiment evaluating whether people transfer object category knowledge across these domains. Our data clearly indicate that we do. Third, we developed a computational model that learns multisensory representations of prototypical 3-D shape. Similar to previous work, the model uses shape primitives to represent parts, and spatial relations among primitives to represent multi-part objects. However, it is distinct in its use of a Bayesian inference algorithm allowing it to acquire multisensory representations, and sensory-specific forward models allowing it to predict visual or haptic features from multisensory representations. The model provides an excellent qualitative account of our experimental data, thereby illustrating the potential importance of multisensory representations and sensory-specific forward models to multisensory perception.
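The model described in this abstract (a shared multisensory representation of prototypical 3-D shape, sensory-specific forward models that read visual or haptic features out of it, and Bayesian inference over categories) can be illustrated with a deliberately small toy example. The sketch below is not the authors' model: the linear-Gaussian forward models, the dimensions, the noise level, and the discrete set of category prototypes are all assumptions made only to show how features observed in one modality can drive categorization and then predict features in the other modality.

# Illustrative sketch only: a toy linear-Gaussian stand-in for the kind of model the
# abstract describes (shared multisensory representation + sensory-specific forward
# models + Bayesian inference). It is NOT the authors' model; all names, dimensions
# and distributions here are assumptions made for this example.
import numpy as np

rng = np.random.default_rng(0)

D_LATENT, D_VIS, D_HAP = 4, 6, 5      # latent shape dims, visual/haptic feature dims
N_CATEGORIES = 3

# One latent "prototype" vector per category, abstractly standing in for part
# primitives plus the spatial relations among them.
prototypes = rng.normal(size=(N_CATEGORIES, D_LATENT))

# Sensory-specific forward models: here simply fixed linear maps plus Gaussian noise.
W_vis = rng.normal(size=(D_VIS, D_LATENT))
W_hap = rng.normal(size=(D_HAP, D_LATENT))
NOISE_SD = 0.3

def render_visual(z):
    """Predict visual features from the multisensory representation z."""
    return W_vis @ z

def render_haptic(z):
    """Predict haptic features from the multisensory representation z."""
    return W_hap @ z

def log_likelihood(features, prediction):
    """Gaussian log-likelihood of observed features under a forward-model prediction."""
    return -0.5 * np.sum((features - prediction) ** 2) / NOISE_SD ** 2

def posterior_over_categories(features, render):
    """Bayesian inference: posterior over category prototypes given one modality."""
    log_post = np.array([log_likelihood(features, render(z)) for z in prototypes])
    log_post -= log_post.max()
    post = np.exp(log_post)
    return post / post.sum()

# Simulate cross-modal transfer: an object from category 1 is *felt* (haptic input),
# and we ask which category it belongs to and what it should *look* like.
true_z = prototypes[1] + 0.1 * rng.normal(size=D_LATENT)   # a category-1 exemplar
haptic_obs = render_haptic(true_z) + NOISE_SD * rng.normal(size=D_HAP)

post = posterior_over_categories(haptic_obs, render_haptic)
best = int(np.argmax(post))
predicted_visual = render_visual(prototypes[best])          # cross-modal prediction

print("posterior over categories from haptics alone:", np.round(post, 3))
print("predicted visual features for the inferred category:", np.round(predicted_visual, 2))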

DOI: 10.1016/j.cognition.2012.08.005
PubMed: 23102553


The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">Transfer of object category knowledge across visual and haptic modalities: experimental and computational studies.</title>
<author>
<name sortKey="Yildirim, Ilker" sort="Yildirim, Ilker" uniqKey="Yildirim I" first="Ilker" last="Yildirim">Ilker Yildirim</name>
<affiliation wicri:level="2">
<nlm:affiliation>Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY 14627, United States. iyildirim@bcs.rochester.edu</nlm:affiliation>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea>Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY 14627</wicri:regionArea>
<placeName>
<region type="state">État de New York</region>
</placeName>
</affiliation>
</author>
<author>
<name sortKey="Jacobs, Robert A" sort="Jacobs, Robert A" uniqKey="Jacobs R" first="Robert A" last="Jacobs">Robert A. Jacobs</name>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PubMed</idno>
<date when="2013">2013</date>
<idno type="doi">10.1016/j.cognition.2012.08.005</idno>
<idno type="RBID">pubmed:23102553</idno>
<idno type="pmid">23102553</idno>
<idno type="wicri:Area/PubMed/Corpus">000B12</idno>
<idno type="wicri:Area/PubMed/Curation">000B12</idno>
<idno type="wicri:Area/PubMed/Checkpoint">000754</idno>
<idno type="wicri:Area/Ncbi/Merge">002308</idno>
<idno type="wicri:Area/Ncbi/Curation">002308</idno>
<idno type="wicri:Area/Ncbi/Checkpoint">002308</idno>
<idno type="wicri:Area/Main/Merge">001747</idno>
<idno type="wicri:Area/Main/Curation">001736</idno>
<idno type="wicri:Area/Main/Exploration">001736</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en">Transfer of object category knowledge across visual and haptic modalities: experimental and computational studies.</title>
<author>
<name sortKey="Yildirim, Ilker" sort="Yildirim, Ilker" uniqKey="Yildirim I" first="Ilker" last="Yildirim">Ilker Yildirim</name>
<affiliation wicri:level="2">
<nlm:affiliation>Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY 14627, United States. iyildirim@bcs.rochester.edu</nlm:affiliation>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea>Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY 14627</wicri:regionArea>
<placeName>
<region type="state">État de New York</region>
</placeName>
</affiliation>
</author>
<author>
<name sortKey="Jacobs, Robert A" sort="Jacobs, Robert A" uniqKey="Jacobs R" first="Robert A" last="Jacobs">Robert A. Jacobs</name>
</author>
</analytic>
<series>
<title level="j">Cognition</title>
<idno type="eISSN">1873-7838</idno>
<imprint>
<date when="2013" type="published">2013</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass>
<keywords scheme="KwdEn" xml:lang="en">
<term>Female</term>
<term>Form Perception (physiology)</term>
<term>Humans</term>
<term>Knowledge</term>
<term>Learning (physiology)</term>
<term>Male</term>
<term>Models, Psychological</term>
<term>Physical Stimulation</term>
<term>Recognition (Psychology) (physiology)</term>
<term>Touch (physiology)</term>
<term>Touch Perception (physiology)</term>
<term>Transfer (Psychology) (physiology)</term>
<term>Vision, Ocular (physiology)</term>
<term>Visual Perception (physiology)</term>
<term>Young Adult</term>
</keywords>
<keywords scheme="MESH" qualifier="physiology" xml:lang="en">
<term>Form Perception</term>
<term>Learning</term>
<term>Recognition (Psychology)</term>
<term>Touch</term>
<term>Touch Perception</term>
<term>Transfer (Psychology)</term>
<term>Vision, Ocular</term>
<term>Visual Perception</term>
</keywords>
<keywords scheme="MESH" xml:lang="en">
<term>Female</term>
<term>Humans</term>
<term>Knowledge</term>
<term>Male</term>
<term>Models, Psychological</term>
<term>Physical Stimulation</term>
<term>Young Adult</term>
</keywords>
</textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">We study people's abilities to transfer object category knowledge across visual and haptic domains. If a person learns to categorize objects based on inputs from one sensory modality, can the person categorize these same objects when the objects are perceived through another modality? Can the person categorize novel objects from the same categories when these objects are, again, perceived through another modality? Our work makes three contributions. First, by fabricating Fribbles (3-D, multi-part objects with a categorical structure), we developed visual-haptic stimuli that are highly complex and realistic, and thus more ecologically valid than objects that are typically used in haptic or visual-haptic experiments. Based on these stimuli, we developed the See and Grasp data set, a data set containing both visual and haptic features of the Fribbles, and are making this data set freely available on the world wide web. Second, complementary to previous research such as studies asking if people transfer knowledge of object identity across visual and haptic domains, we conducted an experiment evaluating whether people transfer object category knowledge across these domains. Our data clearly indicate that we do. Third, we developed a computational model that learns multisensory representations of prototypical 3-D shape. Similar to previous work, the model uses shape primitives to represent parts, and spatial relations among primitives to represent multi-part objects. However, it is distinct in its use of a Bayesian inference algorithm allowing it to acquire multisensory representations, and sensory-specific forward models allowing it to predict visual or haptic features from multisensory representations. The model provides an excellent qualitative account of our experimental data, thereby illustrating the potential importance of multisensory representations and sensory-specific forward models to multisensory perception.</div>
</front>
</TEI>
<affiliations>
<list>
<country>
<li>États-Unis</li>
</country>
<region>
<li>État de New York</li>
</region>
</list>
<tree>
<noCountry>
<name sortKey="Jacobs, Robert A" sort="Jacobs, Robert A" uniqKey="Jacobs R" first="Robert A" last="Jacobs">Robert A. Jacobs</name>
</noCountry>
<country name="États-Unis">
<region name="État de New York">
<name sortKey="Yildirim, Ilker" sort="Yildirim, Ilker" uniqKey="Yildirim I" first="Ilker" last="Yildirim">Ilker Yildirim</name>
</region>
</country>
</tree>
</affiliations>
</record>

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/HapticV1/Data/Main/Exploration
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 001736 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/Main/Exploration/biblio.hfd -nk 001736 | SxmlIndent | more
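In these commands, HfdSelect presumably extracts the record whose key (-nk) is 001736 from the biblio.hfd bibliographic store, SxmlIndent pretty-prints the resulting XML, and more pages the output; EXPLOR_STEP and EXPLOR_AREA are assumed to be environment variables pointing at this exploration area's data, as set up by the Dilib/Wicri environment.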

To link to this page within the Wicri network

{{Explor lien
   |wiki=    Ticri/CIDE
   |area=    HapticV1
   |flux=    Main
   |étape=   Exploration
   |type=    RBID
   |clé=     pubmed:23102553
   |texte=   Transfer of object category knowledge across visual and haptic modalities: experimental and computational studies.
}}
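When placed on a page of the Ticri/CIDE wiki, this template presumably renders a link back to this exploration record, using the RBID pubmed:23102553 as the key and the article title as the link text (assumed behaviour of the Explor lien template).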

To generate wiki pages

HfdIndexSelect -h $EXPLOR_AREA/Data/Main/Exploration/RBID.i   -Sk "pubmed:23102553" \
       | HfdSelect -Kh $EXPLOR_AREA/Data/Main/Exploration/biblio.hfd   \
       | NlmPubMed2Wicri -a HapticV1 
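In this pipeline, HfdIndexSelect presumably looks the record up by its RBID key (pubmed:23102553) in the RBID.i index, HfdSelect then retrieves the full record from biblio.hfd, and NlmPubMed2Wicri converts the PubMed/NLM XML into wiki pages for the HapticV1 area; the exact behaviour of these Dilib tools is assumed here from their names and arguments.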

Wicri

This area was generated with Dilib version V0.6.23.
Data generation: Mon Jun 13 01:09:46 2016. Site generation: Wed Mar 6 09:54:07 2024