Exploration server on haptic devices

Warning: this site is under development!
Warning: this site is generated automatically from raw corpora.
The information has therefore not been validated.

Multisensory object representation: insights from studies of vision and touch.

Internal identifier: 000E32 (PubMed/Corpus); previous: 000E31; next: 000E33


Authors: Simon Lacey; K. Sathian

Source:

RBID : pubmed:21741551

English descriptors

Abstract

Behavioral studies show that the unisensory representations underlying within-modal visual and haptic object recognition are strikingly similar in terms of view- and size-sensitivity, and integration of structural and surface properties. However, the basis for these attributes differs in each modality, indicating that while these representations are functionally similar, they are not identical. Imaging studies reveal bisensory, visuo-haptic object selectivity, notably in the lateral occipital complex and the intraparietal sulcus, that suggests a shared representation of objects. Such a multisensory representation could underlie visuo-haptic cross-modal object recognition. In this chapter, we compare visual and haptic within-modal object recognition and trace a progression from functionally similar but separate unisensory representations to a shared multisensory representation underlying cross-modal object recognition as well as view-independence, regardless of modality. We outline, and provide evidence for, a model of multisensory object recognition in which representations are flexibly accessible via top-down or bottom-up processing, the choice of route being influenced by object familiarity and individual preference along the object-spatial continuum of mental imagery.

DOI: 10.1016/B978-0-444-53752-2.00006-0
PubMed: 21741551

Links to Exploration step

pubmed:21741551

The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">Multisensory object representation: insights from studies of vision and touch.</title>
<author>
<name sortKey="Lacey, Simon" sort="Lacey, Simon" uniqKey="Lacey S" first="Simon" last="Lacey">Simon Lacey</name>
<affiliation>
<nlm:affiliation>Department of Neurology, Emory University, Atlanta, Georgia, USA.</nlm:affiliation>
</affiliation>
</author>
<author>
<name sortKey="Sathian, K" sort="Sathian, K" uniqKey="Sathian K" first="K" last="Sathian">K. Sathian</name>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PubMed</idno>
<date when="2011">2011</date>
<idno type="doi">10.1016/B978-0-444-53752-2.00006-0</idno>
<idno type="RBID">pubmed:21741551</idno>
<idno type="pmid">21741551</idno>
<idno type="wicri:Area/PubMed/Corpus">000E32</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en">Multisensory object representation: insights from studies of vision and touch.</title>
<author>
<name sortKey="Lacey, Simon" sort="Lacey, Simon" uniqKey="Lacey S" first="Simon" last="Lacey">Simon Lacey</name>
<affiliation>
<nlm:affiliation>Department of Neurology, Emory University, Atlanta, Georgia, USA.</nlm:affiliation>
</affiliation>
</author>
<author>
<name sortKey="Sathian, K" sort="Sathian, K" uniqKey="Sathian K" first="K" last="Sathian">K. Sathian</name>
</author>
</analytic>
<series>
<title level="j">Progress in brain research</title>
<idno type="eISSN">1875-7855</idno>
<imprint>
<date when="2011" type="published">2011</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass>
<keywords scheme="KwdEn" xml:lang="en">
<term>Cerebral Cortex (physiology)</term>
<term>Form Perception (physiology)</term>
<term>Humans</term>
<term>Recognition (Psychology) (physiology)</term>
<term>Surface Properties</term>
<term>Touch (physiology)</term>
<term>Visual Perception (physiology)</term>
</keywords>
<keywords scheme="MESH" qualifier="physiology" xml:lang="en">
<term>Cerebral Cortex</term>
<term>Form Perception</term>
<term>Recognition (Psychology)</term>
<term>Touch</term>
<term>Visual Perception</term>
</keywords>
<keywords scheme="MESH" xml:lang="en">
<term>Humans</term>
<term>Surface Properties</term>
</keywords>
</textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">Behavioral studies show that the unisensory representations underlying within-modal visual and haptic object recognition are strikingly similar in terms of view- and size-sensitivity, and integration of structural and surface properties. However, the basis for these attributes differs in each modality, indicating that while these representations are functionally similar, they are not identical. Imaging studies reveal bisensory, visuo-haptic object selectivity, notably in the lateral occipital complex and the intraparietal sulcus, that suggests a shared representation of objects. Such a multisensory representation could underlie visuo-haptic cross-modal object recognition. In this chapter, we compare visual and haptic within-modal object recognition and trace a progression from functionally similar but separate unisensory representations to a shared multisensory representation underlying cross-modal object recognition as well as view-independence, regardless of modality. We outline, and provide evidence for, a model of multisensory object recognition in which representations are flexibly accessible via top-down or bottom-up processing, the choice of route being influenced by object familiarity and individual preference along the object-spatial continuum of mental imagery.</div>
</front>
</TEI>
<pubmed>
<MedlineCitation Owner="NLM" Status="MEDLINE">
<PMID Version="1">21741551</PMID>
<DateCreated>
<Year>2011</Year>
<Month>07</Month>
<Day>11</Day>
</DateCreated>
<DateCompleted>
<Year>2011</Year>
<Month>11</Month>
<Day>14</Day>
</DateCompleted>
<Article PubModel="Print">
<Journal>
<ISSN IssnType="Electronic">1875-7855</ISSN>
<JournalIssue CitedMedium="Internet">
<Volume>191</Volume>
<PubDate>
<Year>2011</Year>
</PubDate>
</JournalIssue>
<Title>Progress in brain research</Title>
<ISOAbbreviation>Prog. Brain Res.</ISOAbbreviation>
</Journal>
<ArticleTitle>Multisensory object representation: insights from studies of vision and touch.</ArticleTitle>
<Pagination>
<MedlinePgn>165-76</MedlinePgn>
</Pagination>
<ELocationID EIdType="doi" ValidYN="Y">10.1016/B978-0-444-53752-2.00006-0</ELocationID>
<Abstract>
<AbstractText>Behavioral studies show that the unisensory representations underlying within-modal visual and haptic object recognition are strikingly similar in terms of view- and size-sensitivity, and integration of structural and surface properties. However, the basis for these attributes differs in each modality, indicating that while these representations are functionally similar, they are not identical. Imaging studies reveal bisensory, visuo-haptic object selectivity, notably in the lateral occipital complex and the intraparietal sulcus, that suggests a shared representation of objects. Such a multisensory representation could underlie visuo-haptic cross-modal object recognition. In this chapter, we compare visual and haptic within-modal object recognition and trace a progression from functionally similar but separate unisensory representations to a shared multisensory representation underlying cross-modal object recognition as well as view-independence, regardless of modality. We outline, and provide evidence for, a model of multisensory object recognition in which representations are flexibly accessible via top-down or bottom-up processing, the choice of route being influenced by object familiarity and individual preference along the object-spatial continuum of mental imagery.</AbstractText>
<CopyrightInformation>Copyright © 2011 Elsevier B.V. All rights reserved.</CopyrightInformation>
</Abstract>
<AuthorList CompleteYN="Y">
<Author ValidYN="Y">
<LastName>Lacey</LastName>
<ForeName>Simon</ForeName>
<Initials>S</Initials>
<AffiliationInfo>
<Affiliation>Department of Neurology, Emory University, Atlanta, Georgia, USA.</Affiliation>
</AffiliationInfo>
</Author>
<Author ValidYN="Y">
<LastName>Sathian</LastName>
<ForeName>K</ForeName>
<Initials>K</Initials>
</Author>
</AuthorList>
<Language>eng</Language>
<PublicationTypeList>
<PublicationType UI="D016428">Journal Article</PublicationType>
<PublicationType UI="D052061">Research Support, N.I.H., Extramural</PublicationType>
<PublicationType UI="D013486">Research Support, U.S. Gov't, Non-P.H.S.</PublicationType>
<PublicationType UI="D016454">Review</PublicationType>
</PublicationTypeList>
</Article>
<MedlineJournalInfo>
<Country>Netherlands</Country>
<MedlineTA>Prog Brain Res</MedlineTA>
<NlmUniqueID>0376441</NlmUniqueID>
<ISSNLinking>0079-6123</ISSNLinking>
</MedlineJournalInfo>
<CitationSubset>IM</CitationSubset>
<MeshHeadingList>
<MeshHeading>
<DescriptorName MajorTopicYN="N" UI="D002540">Cerebral Cortex</DescriptorName>
<QualifierName MajorTopicYN="N" UI="Q000502">physiology</QualifierName>
</MeshHeading>
<MeshHeading>
<DescriptorName MajorTopicYN="N" UI="D005556">Form Perception</DescriptorName>
<QualifierName MajorTopicYN="N" UI="Q000502">physiology</QualifierName>
</MeshHeading>
<MeshHeading>
<DescriptorName MajorTopicYN="N" UI="D006801">Humans</DescriptorName>
</MeshHeading>
<MeshHeading>
<DescriptorName MajorTopicYN="N" UI="D021641">Recognition (Psychology)</DescriptorName>
<QualifierName MajorTopicYN="N" UI="Q000502">physiology</QualifierName>
</MeshHeading>
<MeshHeading>
<DescriptorName MajorTopicYN="N" UI="D013499">Surface Properties</DescriptorName>
</MeshHeading>
<MeshHeading>
<DescriptorName MajorTopicYN="N" UI="D014110">Touch</DescriptorName>
<QualifierName MajorTopicYN="Y" UI="Q000502">physiology</QualifierName>
</MeshHeading>
<MeshHeading>
<DescriptorName MajorTopicYN="N" UI="D014796">Visual Perception</DescriptorName>
<QualifierName MajorTopicYN="Y" UI="Q000502">physiology</QualifierName>
</MeshHeading>
</MeshHeadingList>
</MedlineCitation>
<PubmedData>
<History>
<PubMedPubDate PubStatus="entrez">
<Year>2011</Year>
<Month>7</Month>
<Day>12</Day>
<Hour>6</Hour>
<Minute>0</Minute>
</PubMedPubDate>
<PubMedPubDate PubStatus="pubmed">
<Year>2011</Year>
<Month>7</Month>
<Day>12</Day>
<Hour>6</Hour>
<Minute>0</Minute>
</PubMedPubDate>
<PubMedPubDate PubStatus="medline">
<Year>2011</Year>
<Month>11</Month>
<Day>15</Day>
<Hour>6</Hour>
<Minute>0</Minute>
</PubMedPubDate>
</History>
<PublicationStatus>ppublish</PublicationStatus>
<ArticleIdList>
<ArticleId IdType="pii">B978-0-444-53752-2.00006-0</ArticleId>
<ArticleId IdType="doi">10.1016/B978-0-444-53752-2.00006-0</ArticleId>
<ArticleId IdType="pubmed">21741551</ArticleId>
</ArticleIdList>
</PubmedData>
</pubmed>
</record>
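The PubMed portion of a record like the one above can also be read outside the Dilib toolchain. As an illustration (not part of Dilib), here is a minimal Python sketch using the standard library's `xml.etree.ElementTree` to pull the key bibliographic fields from the `MedlineCitation` element; the embedded record is trimmed to the fields actually extracted:

```python
import xml.etree.ElementTree as ET

# Trimmed copy of the MedlineCitation element shown in the record above.
RECORD = """<MedlineCitation Owner="NLM" Status="MEDLINE">
  <PMID Version="1">21741551</PMID>
  <Article PubModel="Print">
    <Journal>
      <Title>Progress in brain research</Title>
    </Journal>
    <ArticleTitle>Multisensory object representation: insights from studies of vision and touch.</ArticleTitle>
    <ELocationID EIdType="doi" ValidYN="Y">10.1016/B978-0-444-53752-2.00006-0</ELocationID>
    <AuthorList CompleteYN="Y">
      <Author ValidYN="Y"><LastName>Lacey</LastName><Initials>S</Initials></Author>
      <Author ValidYN="Y"><LastName>Sathian</LastName><Initials>K</Initials></Author>
    </AuthorList>
  </Article>
</MedlineCitation>"""

def parse_citation(xml_text):
    """Return a dict of the main bibliographic fields of a MedlineCitation."""
    root = ET.fromstring(xml_text)
    return {
        "pmid": root.findtext("PMID"),
        "title": root.findtext("Article/ArticleTitle"),
        "journal": root.findtext("Article/Journal/Title"),
        # ElementTree paths support attribute predicates for picking the DOI.
        "doi": root.findtext("Article/ELocationID[@EIdType='doi']"),
        "authors": [
            f"{a.findtext('LastName')} {a.findtext('Initials')}"
            for a in root.findall("Article/AuthorList/Author")
        ],
    }

if __name__ == "__main__":
    print(parse_citation(RECORD))
```

The same `parse_citation` function works on the full record exported by `HfdSelect`, since the extra elements (`MeshHeadingList`, `History`, etc.) are simply ignored by the paths used here.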

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/HapticV1/Data/PubMed/Corpus
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 000E32 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/PubMed/Corpus/biblio.hfd -nk 000E32 | SxmlIndent | more

To link to this page within the Wicri network

{{Explor lien
   |wiki=    Ticri/CIDE
   |area=    HapticV1
   |flux=    PubMed
   |étape=   Corpus
   |type=    RBID
   |clé=     pubmed:21741551
   |texte=   Multisensory object representation: insights from studies of vision and touch.
}}

To generate wiki pages

HfdIndexSelect -h $EXPLOR_AREA/Data/PubMed/Corpus/RBID.i   -Sk "pubmed:21741551" \
       | HfdSelect -Kh $EXPLOR_AREA/Data/PubMed/Corpus/biblio.hfd   \
       | NlmPubMed2Wicri -a HapticV1 

Wicri

This area was generated with Dilib version V0.6.23.
Data generation: Mon Jun 13 01:09:46 2016. Site generation: Wed Mar 6 09:54:07 2024