Exploration server on haptic devices

Warning: this site is under development!
Warning: this site was generated automatically from raw corpora.
The information is therefore not validated.

Multisensory recognition of actively explored objects.

Internal identifier: 001557 (PubMed/Corpus); previous: 001556; next: 001558

Authors: Marc O. Ernst; Fiona N. Newell

Source: Canadian Journal of Experimental Psychology, 61(3), 242-253, 2007.

RBID: pubmed:17974318

English descriptors: Humans; Learning; Recognition (Psychology); Sensation (physiology); Touch (physiology); Visual Perception (physiology)

Abstract

Shape recognition can be achieved through vision or touch, raising the issue of how this information is shared across modalities. Here we provide a short review of previous findings on cross-modal object recognition and we provide new empirical data on multisensory recognition of actively explored objects. It was previously shown that, similar to vision, haptic recognition of objects fixed in space is orientation specific and that cross-modal object recognition performance was relatively efficient when these views of the objects were matched across the sensory modalities (Newell, Ernst, Tjan, & Bülthoff, 2001). For actively explored (i.e., spatially unconstrained) objects, we now found a cost in cross-modal relative to within-modal recognition performance. At first, this may seem to be in contrast to findings by Newell et al. (2001). However, a detailed video analysis of the visual and haptic exploration behaviour during learning and recognition revealed that one view of the objects was predominantly explored relative to all others. Thus, active visual and haptic exploration is not balanced across object views. The cost in recognition performance across modalities for actively explored objects could be attributed to the fact that the predominantly learned object view was not appropriately matched between learning and recognition test in the cross-modal conditions. Thus, it seems that participants naturally adopt an exploration strategy during visual and haptic object learning that involves constraining the orientation of the objects. Although this strategy ensures good within-modal performance, it is not optimal for achieving the best recognition performance across modalities.

PubMed: 17974318

Links to Exploration step

pubmed:17974318
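
The same record can also be retrieved directly from PubMed through the NCBI E-utilities efetch service, independently of the Dilib tooling described below. A minimal sketch, assuming curl and network access:

# Fetch the full PubMed XML for this record (PMID 17974318)
curl -s "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi?db=pubmed&id=17974318&retmode=xml"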

The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">Multisensory recognition of actively explored objects.</title>
<author>
<name sortKey="Ernst, Marc O" sort="Ernst, Marc O" uniqKey="Ernst M" first="Marc O" last="Ernst">Marc O. Ernst</name>
<affiliation>
<nlm:affiliation>Max Planck Institute for Biological Cybernetics, Tübingen, Germany. marc.ernst@tuebingen.mpg.de</nlm:affiliation>
</affiliation>
</author>
<author>
<name sortKey="Newell, Fiona N" sort="Newell, Fiona N" uniqKey="Newell F" first="Fiona N" last="Newell">Fiona N. Newell</name>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PubMed</idno>
<date when="2007">2007</date>
<idno type="RBID">pubmed:17974318</idno>
<idno type="pmid">17974318</idno>
<idno type="wicri:Area/PubMed/Corpus">001557</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en">Multisensory recognition of actively explored objects.</title>
<author>
<name sortKey="Ernst, Marc O" sort="Ernst, Marc O" uniqKey="Ernst M" first="Marc O" last="Ernst">Marc O. Ernst</name>
<affiliation>
<nlm:affiliation>Max Planck Institute for Biological Cybernetics, Tübingen, Germany. marc.ernst@tuebingen.mpg.de</nlm:affiliation>
</affiliation>
</author>
<author>
<name sortKey="Newell, Fiona N" sort="Newell, Fiona N" uniqKey="Newell F" first="Fiona N" last="Newell">Fiona N. Newell</name>
</author>
</analytic>
<series>
<title level="j">Canadian journal of experimental psychology = Revue canadienne de psychologie expérimentale</title>
<idno type="ISSN">1196-1961</idno>
<imprint>
<date when="2007" type="published">2007</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass>
<keywords scheme="KwdEn" xml:lang="en">
<term>Humans</term>
<term>Learning</term>
<term>Recognition (Psychology)</term>
<term>Sensation (physiology)</term>
<term>Touch (physiology)</term>
<term>Visual Perception (physiology)</term>
</keywords>
<keywords scheme="MESH" qualifier="physiology" xml:lang="en">
<term>Sensation</term>
<term>Touch</term>
<term>Visual Perception</term>
</keywords>
<keywords scheme="MESH" xml:lang="en">
<term>Humans</term>
<term>Learning</term>
<term>Recognition (Psychology)</term>
</keywords>
</textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">Shape recognition can be achieved through vision or touch, raising the issue of how this information is shared across modalities. Here we provide a short review of previous findings on cross-modal object recognition and we provide new empirical data on multisensory recognition of actively explored objects. It was previously shown that, similar to vision, haptic recognition of objects fixed in space is orientation specific and that cross-modal object recognition performance was relatively efficient when these views of the objects were matched across the sensory modalities (Newell, Ernst, Tjan, &amp; Bülthoff, 2001). For actively explored (i.e., spatially unconstrained) objects, we now found a cost in cross-modal relative to within-modal recognition performance. At first, this may seem to be in contrast to findings by Newell et al. (2001). However, a detailed video analysis of the visual and haptic exploration behaviour during learning and recognition revealed that one view of the objects was predominantly explored relative to all others. Thus, active visual and haptic exploration is not balanced across object views. The cost in recognition performance across modalities for actively explored objects could be attributed to the fact that the predominantly learned object view was not appropriately matched between learning and recognition test in the cross-modal conditions. Thus, it seems that participants naturally adopt an exploration strategy during visual and haptic object learning that involves constraining the orientation of the objects. Although this strategy ensures good within-modal performance, it is not optimal for achieving the best recognition performance across modalities.</div>
</front>
</TEI>
<pubmed>
<MedlineCitation Owner="NLM" Status="MEDLINE">
<PMID Version="1">17974318</PMID>
<DateCreated>
<Year>2007</Year>
<Month>11</Month>
<Day>02</Day>
</DateCreated>
<DateCompleted>
<Year>2007</Year>
<Month>12</Month>
<Day>07</Day>
</DateCompleted>
<Article PubModel="Print">
<Journal>
<ISSN IssnType="Print">1196-1961</ISSN>
<JournalIssue CitedMedium="Print">
<Volume>61</Volume>
<Issue>3</Issue>
<PubDate>
<Year>2007</Year>
<Month>Sep</Month>
</PubDate>
</JournalIssue>
<Title>Canadian journal of experimental psychology = Revue canadienne de psychologie expérimentale</Title>
<ISOAbbreviation>Can J Exp Psychol</ISOAbbreviation>
</Journal>
<ArticleTitle>Multisensory recognition of actively explored objects.</ArticleTitle>
<Pagination>
<MedlinePgn>242-53</MedlinePgn>
</Pagination>
<Abstract>
<AbstractText>Shape recognition can be achieved through vision or touch, raising the issue of how this information is shared across modalities. Here we provide a short review of previous findings on cross-modal object recognition and we provide new empirical data on multisensory recognition of actively explored objects. It was previously shown that, similar to vision, haptic recognition of objects fixed in space is orientation specific and that cross-modal object recognition performance was relatively efficient when these views of the objects were matched across the sensory modalities (Newell, Ernst, Tjan, &amp; Bülthoff, 2001). For actively explored (i.e., spatially unconstrained) objects, we now found a cost in cross-modal relative to within-modal recognition performance. At first, this may seem to be in contrast to findings by Newell et al. (2001). However, a detailed video analysis of the visual and haptic exploration behaviour during learning and recognition revealed that one view of the objects was predominantly explored relative to all others. Thus, active visual and haptic exploration is not balanced across object views. The cost in recognition performance across modalities for actively explored objects could be attributed to the fact that the predominantly learned object view was not appropriately matched between learning and recognition test in the cross-modal conditions. Thus, it seems that participants naturally adopt an exploration strategy during visual and haptic object learning that involves constraining the orientation of the objects. Although this strategy ensures good within-modal performance, it is not optimal for achieving the best recognition performance across modalities.</AbstractText>
</Abstract>
<AuthorList CompleteYN="Y">
<Author ValidYN="Y">
<LastName>Ernst</LastName>
<ForeName>Marc O</ForeName>
<Initials>MO</Initials>
<AffiliationInfo>
<Affiliation>Max Planck Institute for Biological Cybernetics, Tübingen, Germany. marc.ernst@tuebingen.mpg.de</Affiliation>
</AffiliationInfo>
</Author>
<Author ValidYN="Y">
<LastName>Newell</LastName>
<ForeName>Fiona N</ForeName>
<Initials>FN</Initials>
</Author>
</AuthorList>
<Language>eng</Language>
<PublicationTypeList>
<PublicationType UI="D016428">Journal Article</PublicationType>
<PublicationType UI="D013485">Research Support, Non-U.S. Gov't</PublicationType>
</PublicationTypeList>
</Article>
<MedlineJournalInfo>
<Country>Canada</Country>
<MedlineTA>Can J Exp Psychol</MedlineTA>
<NlmUniqueID>9315513</NlmUniqueID>
<ISSNLinking>1196-1961</ISSNLinking>
</MedlineJournalInfo>
<CitationSubset>IM</CitationSubset>
<MeshHeadingList>
<MeshHeading>
<DescriptorName MajorTopicYN="N" UI="D006801">Humans</DescriptorName>
</MeshHeading>
<MeshHeading>
<DescriptorName MajorTopicYN="N" UI="D007858">Learning</DescriptorName>
</MeshHeading>
<MeshHeading>
<DescriptorName MajorTopicYN="Y" UI="D021641">Recognition (Psychology)</DescriptorName>
</MeshHeading>
<MeshHeading>
<DescriptorName MajorTopicYN="N" UI="D012677">Sensation</DescriptorName>
<QualifierName MajorTopicYN="N" UI="Q000502">physiology</QualifierName>
</MeshHeading>
<MeshHeading>
<DescriptorName MajorTopicYN="N" UI="D014110">Touch</DescriptorName>
<QualifierName MajorTopicYN="N" UI="Q000502">physiology</QualifierName>
</MeshHeading>
<MeshHeading>
<DescriptorName MajorTopicYN="N" UI="D014796">Visual Perception</DescriptorName>
<QualifierName MajorTopicYN="N" UI="Q000502">physiology</QualifierName>
</MeshHeading>
</MeshHeadingList>
</MedlineCitation>
<PubmedData>
<History>
<PubMedPubDate PubStatus="pubmed">
<Year>2007</Year>
<Month>11</Month>
<Day>3</Day>
<Hour>9</Hour>
<Minute>0</Minute>
</PubMedPubDate>
<PubMedPubDate PubStatus="medline">
<Year>2007</Year>
<Month>12</Month>
<Day>8</Day>
<Hour>9</Hour>
<Minute>0</Minute>
</PubMedPubDate>
<PubMedPubDate PubStatus="entrez">
<Year>2007</Year>
<Month>11</Month>
<Day>3</Day>
<Hour>9</Hour>
<Minute>0</Minute>
</PubMedPubDate>
</History>
<PublicationStatus>ppublish</PublicationStatus>
<ArticleIdList>
<ArticleId IdType="pubmed">17974318</ArticleId>
</ArticleIdList>
</PubmedData>
</pubmed>
</record>

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/HapticV1/Data/PubMed/Corpus
# Select record 001557 from the biblio.hfd base, indent the XML, and page through it
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 001557 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/PubMed/Corpus/biblio.hfd -nk 001557 | SxmlIndent | more
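
To extract a single field rather than browse the whole record, the indented output can be filtered with standard Unix tools. An illustrative sketch, applying grep to the commands shown above:

# Print only the article title element from the indented record
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 001557 | SxmlIndent | grep '<ArticleTitle>'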

To add a link to this page in the Wicri network

{{Explor lien
   |wiki=    Ticri/CIDE
   |area=    HapticV1
   |flux=    PubMed
   |étape=   Corpus
   |type=    RBID
   |clé=     pubmed:17974318
   |texte=   Multisensory recognition of actively explored objects.
}}

To generate wiki pages

HfdIndexSelect -h $EXPLOR_AREA/Data/PubMed/Corpus/RBID.i   -Sk "pubmed:17974318" \
       | HfdSelect -Kh $EXPLOR_AREA/Data/PubMed/Corpus/biblio.hfd   \
       | NlmPubMed2Wicri -a HapticV1 
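
This pipeline looks up the record by its RBID in the index, fetches the full record from biblio.hfd, and converts it to wiki markup. For repeated use it can be wrapped in a small shell function; pubmed2wicri below is a hypothetical helper, not a Dilib command:

# Hypothetical wrapper around the pipeline above (not part of Dilib)
pubmed2wicri() {
    HfdIndexSelect -h $EXPLOR_AREA/Data/PubMed/Corpus/RBID.i -Sk "pubmed:$1" \
        | HfdSelect -Kh $EXPLOR_AREA/Data/PubMed/Corpus/biblio.hfd \
        | NlmPubMed2Wicri -a HapticV1
}
pubmed2wicri 17974318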

Wicri

This area was generated with Dilib version V0.6.23.
Data generation: Mon Jun 13 01:09:46 2016. Site generation: Wed Mar 6 09:54:07 2024