Exploration server on haptic devices

Warning: this site is under development!
Warning: this site is generated automatically from raw corpora.
The information is therefore not validated.

Internal identifier: 001A50 (PubMed/Corpus); previous: 001A49; next: 001A51

Visual, haptic and cross-modal recognition of objects and scenes.

Authors: Andrew T. Woods; Fiona N. Newell

Source: Journal of physiology, Paris, 2004 Jan-Jun; 98(1-3): 147-159.

RBID: pubmed:15477029

English descriptors

Abstract

In this article we review current literature on cross-modal recognition and present new findings from our studies on object and scene recognition. Specifically, we address the questions of what is the nature of the representation underlying each sensory system that facilitates convergence across the senses and how perception is modified by the interaction of the senses. In the first set of our experiments, the recognition of unfamiliar objects within and across the visual and haptic modalities was investigated under conditions of changes in orientation (0 degrees or 180 degrees ). An orientation change increased recognition errors within each modality but this effect was reduced across modalities. Our results suggest that cross-modal object representations of objects are mediated by surface-dependent representations. In a second series of experiments, we investigated how spatial information is integrated across modalities and viewpoint using scenes of familiar, 3D objects as stimuli. We found that scene recognition performance was less efficient when there was either a change in modality, or in orientation, between learning and test. Furthermore, haptic learning was selectively disrupted by a verbal interpolation task. Our findings are discussed with reference to separate spatial encoding of visual and haptic scenes. We conclude by discussing a number of constraints under which cross-modal integration is optimal for object recognition. These constraints include the nature of the task, and the amount of spatial and temporal congruency of information across the modalities.

DOI: 10.1016/j.jphysparis.2004.03.006
PubMed: 15477029

Link to the exploration step

pubmed:15477029

The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">Visual, haptic and cross-modal recognition of objects and scenes.</title>
<author>
<name sortKey="Woods, Andrew T" sort="Woods, Andrew T" uniqKey="Woods A" first="Andrew T" last="Woods">Andrew T. Woods</name>
<affiliation>
<nlm:affiliation>Department of Psychology, Trinity College, University of Dublin, Aras an Phairsaigh, Dublin 2, Ireland.</nlm:affiliation>
</affiliation>
</author>
<author>
<name sortKey="Newell, Fiona N" sort="Newell, Fiona N" uniqKey="Newell F" first="Fiona N" last="Newell">Fiona N. Newell</name>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PubMed</idno>
<date when="????">
<PubDate>
<MedlineDate>2004 Jan-Jun</MedlineDate>
</PubDate>
</date>
<idno type="doi">10.1016/j.jphysparis.2004.03.006</idno>
<idno type="RBID">pubmed:15477029</idno>
<idno type="pmid">15477029</idno>
<idno type="wicri:Area/PubMed/Corpus">001A50</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en">Visual, haptic and cross-modal recognition of objects and scenes.</title>
<author>
<name sortKey="Woods, Andrew T" sort="Woods, Andrew T" uniqKey="Woods A" first="Andrew T" last="Woods">Andrew T. Woods</name>
<affiliation>
<nlm:affiliation>Department of Psychology, Trinity College, University of Dublin, Aras an Phairsaigh, Dublin 2, Ireland.</nlm:affiliation>
</affiliation>
</author>
<author>
<name sortKey="Newell, Fiona N" sort="Newell, Fiona N" uniqKey="Newell F" first="Fiona N" last="Newell">Fiona N. Newell</name>
</author>
</analytic>
<series>
<title level="j">Journal of physiology, Paris</title>
<idno type="ISSN">0928-4257</idno>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass>
<keywords scheme="KwdEn" xml:lang="en">
<term>Animals</term>
<term>Humans</term>
<term>Photic Stimulation (methods)</term>
<term>Recognition (Psychology) (physiology)</term>
<term>Visual Perception (physiology)</term>
</keywords>
<keywords scheme="MESH" qualifier="methods" xml:lang="en">
<term>Photic Stimulation</term>
</keywords>
<keywords scheme="MESH" qualifier="physiology" xml:lang="en">
<term>Recognition (Psychology)</term>
<term>Visual Perception</term>
</keywords>
<keywords scheme="MESH" xml:lang="en">
<term>Animals</term>
<term>Humans</term>
</keywords>
</textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">In this article we review current literature on cross-modal recognition and present new findings from our studies on object and scene recognition. Specifically, we address the questions of what is the nature of the representation underlying each sensory system that facilitates convergence across the senses and how perception is modified by the interaction of the senses. In the first set of our experiments, the recognition of unfamiliar objects within and across the visual and haptic modalities was investigated under conditions of changes in orientation (0 degrees or 180 degrees ). An orientation change increased recognition errors within each modality but this effect was reduced across modalities. Our results suggest that cross-modal object representations of objects are mediated by surface-dependent representations. In a second series of experiments, we investigated how spatial information is integrated across modalities and viewpoint using scenes of familiar, 3D objects as stimuli. We found that scene recognition performance was less efficient when there was either a change in modality, or in orientation, between learning and test. Furthermore, haptic learning was selectively disrupted by a verbal interpolation task. Our findings are discussed with reference to separate spatial encoding of visual and haptic scenes. We conclude by discussing a number of constraints under which cross-modal integration is optimal for object recognition. These constraints include the nature of the task, and the amount of spatial and temporal congruency of information across the modalities.</div>
</front>
</TEI>
<pubmed>
<MedlineCitation Owner="NLM" Status="MEDLINE">
<PMID Version="1">15477029</PMID>
<DateCreated>
<Year>2004</Year>
<Month>10</Month>
<Day>12</Day>
</DateCreated>
<DateCompleted>
<Year>2005</Year>
<Month>01</Month>
<Day>14</Day>
</DateCompleted>
<DateRevised>
<Year>2006</Year>
<Month>11</Month>
<Day>15</Day>
</DateRevised>
<Article PubModel="Print">
<Journal>
<ISSN IssnType="Print">0928-4257</ISSN>
<JournalIssue CitedMedium="Print">
<Volume>98</Volume>
<Issue>1-3</Issue>
<PubDate>
<MedlineDate>2004 Jan-Jun</MedlineDate>
</PubDate>
</JournalIssue>
<Title>Journal of physiology, Paris</Title>
<ISOAbbreviation>J. Physiol. Paris</ISOAbbreviation>
</Journal>
<ArticleTitle>Visual, haptic and cross-modal recognition of objects and scenes.</ArticleTitle>
<Pagination>
<MedlinePgn>147-59</MedlinePgn>
</Pagination>
<Abstract>
<AbstractText>In this article we review current literature on cross-modal recognition and present new findings from our studies on object and scene recognition. Specifically, we address the questions of what is the nature of the representation underlying each sensory system that facilitates convergence across the senses and how perception is modified by the interaction of the senses. In the first set of our experiments, the recognition of unfamiliar objects within and across the visual and haptic modalities was investigated under conditions of changes in orientation (0 degrees or 180 degrees ). An orientation change increased recognition errors within each modality but this effect was reduced across modalities. Our results suggest that cross-modal object representations of objects are mediated by surface-dependent representations. In a second series of experiments, we investigated how spatial information is integrated across modalities and viewpoint using scenes of familiar, 3D objects as stimuli. We found that scene recognition performance was less efficient when there was either a change in modality, or in orientation, between learning and test. Furthermore, haptic learning was selectively disrupted by a verbal interpolation task. Our findings are discussed with reference to separate spatial encoding of visual and haptic scenes. We conclude by discussing a number of constraints under which cross-modal integration is optimal for object recognition. These constraints include the nature of the task, and the amount of spatial and temporal congruency of information across the modalities.</AbstractText>
</Abstract>
<AuthorList CompleteYN="Y">
<Author ValidYN="Y">
<LastName>Woods</LastName>
<ForeName>Andrew T</ForeName>
<Initials>AT</Initials>
<AffiliationInfo>
<Affiliation>Department of Psychology, Trinity College, University of Dublin, Aras an Phairsaigh, Dublin 2, Ireland.</Affiliation>
</AffiliationInfo>
</Author>
<Author ValidYN="Y">
<LastName>Newell</LastName>
<ForeName>Fiona N</ForeName>
<Initials>FN</Initials>
</Author>
</AuthorList>
<Language>eng</Language>
<PublicationTypeList>
<PublicationType UI="D016428">Journal Article</PublicationType>
<PublicationType UI="D013485">Research Support, Non-U.S. Gov't</PublicationType>
<PublicationType UI="D016454">Review</PublicationType>
</PublicationTypeList>
</Article>
<MedlineJournalInfo>
<Country>France</Country>
<MedlineTA>J Physiol Paris</MedlineTA>
<NlmUniqueID>9309351</NlmUniqueID>
<ISSNLinking>0928-4257</ISSNLinking>
</MedlineJournalInfo>
<CitationSubset>IM</CitationSubset>
<MeshHeadingList>
<MeshHeading>
<DescriptorName MajorTopicYN="N" UI="D000818">Animals</DescriptorName>
</MeshHeading>
<MeshHeading>
<DescriptorName MajorTopicYN="N" UI="D006801">Humans</DescriptorName>
</MeshHeading>
<MeshHeading>
<DescriptorName MajorTopicYN="N" UI="D010775">Photic Stimulation</DescriptorName>
<QualifierName MajorTopicYN="Y" UI="Q000379">methods</QualifierName>
</MeshHeading>
<MeshHeading>
<DescriptorName MajorTopicYN="N" UI="D021641">Recognition (Psychology)</DescriptorName>
<QualifierName MajorTopicYN="Y" UI="Q000502">physiology</QualifierName>
</MeshHeading>
<MeshHeading>
<DescriptorName MajorTopicYN="N" UI="D014796">Visual Perception</DescriptorName>
<QualifierName MajorTopicYN="Y" UI="Q000502">physiology</QualifierName>
</MeshHeading>
</MeshHeadingList>
<NumberOfReferences>62</NumberOfReferences>
</MedlineCitation>
<PubmedData>
<History>
<PubMedPubDate PubStatus="pubmed">
<Year>2004</Year>
<Month>10</Month>
<Day>13</Day>
<Hour>9</Hour>
<Minute>0</Minute>
</PubMedPubDate>
<PubMedPubDate PubStatus="medline">
<Year>2005</Year>
<Month>1</Month>
<Day>15</Day>
<Hour>9</Hour>
<Minute>0</Minute>
</PubMedPubDate>
<PubMedPubDate PubStatus="entrez">
<Year>2004</Year>
<Month>10</Month>
<Day>13</Day>
<Hour>9</Hour>
<Minute>0</Minute>
</PubMedPubDate>
</History>
<PublicationStatus>ppublish</PublicationStatus>
<ArticleIdList>
<ArticleId IdType="pii">S0928-4257(04)00077-4</ArticleId>
<ArticleId IdType="doi">10.1016/j.jphysparis.2004.03.006</ArticleId>
<ArticleId IdType="pubmed">15477029</ArticleId>
</ArticleIdList>
</PubmedData>
</pubmed>
</record>

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/HapticV1/Data/PubMed/Corpus
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 001A50 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/PubMed/Corpus/biblio.hfd -nk 001A50 | SxmlIndent | more
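
If the Dilib tools are not available, the same fields can be read from the record with standard XML tooling. The following is a minimal sketch using xmllint (libxml2), assuming the <pubmed> element of the record above has been saved locally as record.xml (a hypothetical filename):

# Article title
xmllint --xpath "string(//ArticleTitle)" record.xml
# DOI and PubMed identifier
xmllint --xpath "string(//ArticleId[@IdType='doi'])" record.xml
xmllint --xpath "string(//PMID)" record.xml
# Author surnames
xmllint --xpath "//AuthorList/Author/LastName/text()" record.xml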

To add a link to this page in the Wicri network

{{Explor lien
   |wiki=    Ticri/CIDE
   |area=    HapticV1
   |flux=    PubMed
   |étape=   Corpus
   |type=    RBID
   |clé=     pubmed:15477029
   |texte=   Visual, haptic and cross-modal recognition of objects and scenes.
}}

To generate wiki pages

HfdIndexSelect -h $EXPLOR_AREA/Data/PubMed/Corpus/RBID.i   -Sk "pubmed:15477029" \
       | HfdSelect -Kh $EXPLOR_AREA/Data/PubMed/Corpus/biblio.hfd   \
       | NlmPubMed2Wicri -a HapticV1 

Wicri

This area was generated with Dilib version V0.6.23.
Data generation: Mon Jun 13 01:09:46 2016. Site generation: Wed Mar 6 09:54:07 2024