Exploration server on haptic devices

Warning: this site is under development!
Warning: this site was generated automatically from raw corpora.
The information has therefore not been validated.

Visual and haptic perceptual spaces show high similarity in humans.

Internal identifier: 001025 (PubMed/Corpus); previous: 001024; next: 001026

Authors: Nina Gaissert; Christian Wallraven; Heinrich H. Bülthoff

Source:

RBID : pubmed:20884497

English descriptors

Abstract

In this study, we show that humans form highly similar perceptual spaces when they explore complex objects from a parametrically defined object space in the visual and haptic domains. For this, a three-dimensional parameter space of well-defined, shell-like objects was generated. Participants either explored two-dimensional pictures or three-dimensional, interactive virtual models of these objects visually, or they explored three-dimensional plastic models haptically. In all cases, the task was to rate the similarity between two objects. Using these similarity ratings and multidimensional scaling (MDS) analyses, the perceptual spaces of the different modalities were then analyzed. Looking at planar configurations within this three-dimensional object space, we found that active visual exploration led to a highly similar perceptual space compared to passive exploration, showing that participants were able to reconstruct the complex parameter space already from two-dimensional pictures alone. Furthermore, we found that visual and haptic perceptual spaces had virtually identical topology compared to that of the physical stimulus space. Surprisingly, the haptic modality even slightly exceeded the visual modality in recovering the topology of the complex object space when the whole three-dimensional space was explored. Our findings point to a close connection between visual and haptic object representations and demonstrate the great degree of fidelity with which haptic shape processing occurs.
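The multidimensional scaling (MDS) analysis mentioned in the abstract can be illustrated with a minimal classical (Torgerson) MDS sketch. The points and dissimilarities below are hypothetical stand-ins for the study's similarity ratings, not the authors' data or code:

```python
import numpy as np

def classical_mds(D, k=2):
    """Embed points from a dissimilarity matrix D into k dimensions
    via classical (Torgerson) MDS: double-center the squared
    dissimilarities, eigendecompose, keep the top-k eigenvectors."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    B = -0.5 * J @ (D ** 2) @ J           # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)
    idx = np.argsort(vals)[::-1][:k]      # largest eigenvalues first
    L = np.sqrt(np.clip(vals[idx], 0, None))
    return vecs[:, idx] * L

# Toy "object space": four points on a 2-D grid standing in for the
# parametrically defined shell objects (hypothetical data).
pts = np.array([[0, 0], [1, 0], [0, 1], [1, 1]], float)
D = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)

X = classical_mds(D, k=2)
D_rec = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
print(np.allclose(D, D_rec, atol=1e-8))  # pairwise distances are recovered
```

For exactly Euclidean dissimilarities, as here, the embedding reproduces the pairwise distances up to rotation and reflection; with real similarity ratings the recovered configuration is only approximate, which is what the topology comparisons in the abstract assess.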

DOI: 10.1167/10.11.2
PubMed: 20884497

Links to Exploration step

pubmed:20884497

The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">Visual and haptic perceptual spaces show high similarity in humans.</title>
<author>
<name sortKey="Gaissert, Nina" sort="Gaissert, Nina" uniqKey="Gaissert N" first="Nina" last="Gaissert">Nina Gaissert</name>
<affiliation>
<nlm:affiliation>Max Planck Institute for Biological Cybernetics, Tübingen, Germany. nina.gaissert@tuebingen.mpg.de</nlm:affiliation>
</affiliation>
</author>
<author>
<name sortKey="Wallraven, Christian" sort="Wallraven, Christian" uniqKey="Wallraven C" first="Christian" last="Wallraven">Christian Wallraven</name>
</author>
<author>
<name sortKey="Bulthoff, Heinrich H" sort="Bulthoff, Heinrich H" uniqKey="Bulthoff H" first="Heinrich H" last="Bülthoff">Heinrich H. Bülthoff</name>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PubMed</idno>
<date when="2010">2010</date>
<idno type="doi">10.1167/10.11.2</idno>
<idno type="RBID">pubmed:20884497</idno>
<idno type="pmid">20884497</idno>
<idno type="wicri:Area/PubMed/Corpus">001025</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en">Visual and haptic perceptual spaces show high similarity in humans.</title>
<author>
<name sortKey="Gaissert, Nina" sort="Gaissert, Nina" uniqKey="Gaissert N" first="Nina" last="Gaissert">Nina Gaissert</name>
<affiliation>
<nlm:affiliation>Max Planck Institute for Biological Cybernetics, Tübingen, Germany. nina.gaissert@tuebingen.mpg.de</nlm:affiliation>
</affiliation>
</author>
<author>
<name sortKey="Wallraven, Christian" sort="Wallraven, Christian" uniqKey="Wallraven C" first="Christian" last="Wallraven">Christian Wallraven</name>
</author>
<author>
<name sortKey="Bulthoff, Heinrich H" sort="Bulthoff, Heinrich H" uniqKey="Bulthoff H" first="Heinrich H" last="Bülthoff">Heinrich H. Bülthoff</name>
</author>
</analytic>
<series>
<title level="j">Journal of vision</title>
<idno type="eISSN">1534-7362</idno>
<imprint>
<date when="2010" type="published">2010</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass>
<keywords scheme="KwdEn" xml:lang="en">
<term>Discrimination (Psychology) (physiology)</term>
<term>Humans</term>
<term>Pattern Recognition, Visual (physiology)</term>
<term>Photic Stimulation</term>
<term>Reaction Time (physiology)</term>
</keywords>
<keywords scheme="MESH" qualifier="physiology" xml:lang="en">
<term>Discrimination (Psychology)</term>
<term>Pattern Recognition, Visual</term>
<term>Reaction Time</term>
</keywords>
<keywords scheme="MESH" xml:lang="en">
<term>Humans</term>
<term>Photic Stimulation</term>
</keywords>
</textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">In this study, we show that humans form highly similar perceptual spaces when they explore complex objects from a parametrically defined object space in the visual and haptic domains. For this, a three-dimensional parameter space of well-defined, shell-like objects was generated. Participants either explored two-dimensional pictures or three-dimensional, interactive virtual models of these objects visually, or they explored three-dimensional plastic models haptically. In all cases, the task was to rate the similarity between two objects. Using these similarity ratings and multidimensional scaling (MDS) analyses, the perceptual spaces of the different modalities were then analyzed. Looking at planar configurations within this three-dimensional object space, we found that active visual exploration led to a highly similar perceptual space compared to passive exploration, showing that participants were able to reconstruct the complex parameter space already from two-dimensional pictures alone. Furthermore, we found that visual and haptic perceptual spaces had virtually identical topology compared to that of the physical stimulus space. Surprisingly, the haptic modality even slightly exceeded the visual modality in recovering the topology of the complex object space when the whole three-dimensional space was explored. Our findings point to a close connection between visual and haptic object representations and demonstrate the great degree of fidelity with which haptic shape processing occurs.</div>
</front>
</TEI>
<pubmed>
<MedlineCitation Owner="NLM" Status="MEDLINE">
<PMID Version="1">20884497</PMID>
<DateCreated>
<Year>2010</Year>
<Month>10</Month>
<Day>04</Day>
</DateCreated>
<DateCompleted>
<Year>2011</Year>
<Month>03</Month>
<Day>17</Day>
</DateCompleted>
<Article PubModel="Electronic">
<Journal>
<ISSN IssnType="Electronic">1534-7362</ISSN>
<JournalIssue CitedMedium="Internet">
<Volume>10</Volume>
<Issue>11</Issue>
<PubDate>
<Year>2010</Year>
</PubDate>
</JournalIssue>
<Title>Journal of vision</Title>
<ISOAbbreviation>J Vis</ISOAbbreviation>
</Journal>
<ArticleTitle>Visual and haptic perceptual spaces show high similarity in humans.</ArticleTitle>
<Pagination>
<MedlinePgn>2</MedlinePgn>
</Pagination>
<ELocationID EIdType="doi" ValidYN="Y">10.1167/10.11.2</ELocationID>
<Abstract>
<AbstractText>In this study, we show that humans form highly similar perceptual spaces when they explore complex objects from a parametrically defined object space in the visual and haptic domains. For this, a three-dimensional parameter space of well-defined, shell-like objects was generated. Participants either explored two-dimensional pictures or three-dimensional, interactive virtual models of these objects visually, or they explored three-dimensional plastic models haptically. In all cases, the task was to rate the similarity between two objects. Using these similarity ratings and multidimensional scaling (MDS) analyses, the perceptual spaces of the different modalities were then analyzed. Looking at planar configurations within this three-dimensional object space, we found that active visual exploration led to a highly similar perceptual space compared to passive exploration, showing that participants were able to reconstruct the complex parameter space already from two-dimensional pictures alone. Furthermore, we found that visual and haptic perceptual spaces had virtually identical topology compared to that of the physical stimulus space. Surprisingly, the haptic modality even slightly exceeded the visual modality in recovering the topology of the complex object space when the whole three-dimensional space was explored. Our findings point to a close connection between visual and haptic object representations and demonstrate the great degree of fidelity with which haptic shape processing occurs.</AbstractText>
</Abstract>
<AuthorList CompleteYN="Y">
<Author ValidYN="Y">
<LastName>Gaissert</LastName>
<ForeName>Nina</ForeName>
<Initials>N</Initials>
<AffiliationInfo>
<Affiliation>Max Planck Institute for Biological Cybernetics, Tübingen, Germany. nina.gaissert@tuebingen.mpg.de</Affiliation>
</AffiliationInfo>
</Author>
<Author ValidYN="Y">
<LastName>Wallraven</LastName>
<ForeName>Christian</ForeName>
<Initials>C</Initials>
</Author>
<Author ValidYN="Y">
<LastName>Bülthoff</LastName>
<ForeName>Heinrich H</ForeName>
<Initials>HH</Initials>
</Author>
</AuthorList>
<Language>eng</Language>
<PublicationTypeList>
<PublicationType UI="D003160">Comparative Study</PublicationType>
<PublicationType UI="D016428">Journal Article</PublicationType>
<PublicationType UI="D013485">Research Support, Non-U.S. Gov't</PublicationType>
</PublicationTypeList>
<ArticleDate DateType="Electronic">
<Year>2010</Year>
<Month>09</Month>
<Day>02</Day>
</ArticleDate>
</Article>
<MedlineJournalInfo>
<Country>United States</Country>
<MedlineTA>J Vis</MedlineTA>
<NlmUniqueID>101147197</NlmUniqueID>
<ISSNLinking>1534-7362</ISSNLinking>
</MedlineJournalInfo>
<CitationSubset>IM</CitationSubset>
<MeshHeadingList>
<MeshHeading>
<DescriptorName MajorTopicYN="N" UI="D004192">Discrimination (Psychology)</DescriptorName>
<QualifierName MajorTopicYN="Y" UI="Q000502">physiology</QualifierName>
</MeshHeading>
<MeshHeading>
<DescriptorName MajorTopicYN="N" UI="D006801">Humans</DescriptorName>
</MeshHeading>
<MeshHeading>
<DescriptorName MajorTopicYN="N" UI="D010364">Pattern Recognition, Visual</DescriptorName>
<QualifierName MajorTopicYN="Y" UI="Q000502">physiology</QualifierName>
</MeshHeading>
<MeshHeading>
<DescriptorName MajorTopicYN="N" UI="D010775">Photic Stimulation</DescriptorName>
</MeshHeading>
<MeshHeading>
<DescriptorName MajorTopicYN="N" UI="D011930">Reaction Time</DescriptorName>
<QualifierName MajorTopicYN="Y" UI="Q000502">physiology</QualifierName>
</MeshHeading>
</MeshHeadingList>
</MedlineCitation>
<PubmedData>
<History>
<PubMedPubDate PubStatus="entrez">
<Year>2010</Year>
<Month>10</Month>
<Day>2</Day>
<Hour>6</Hour>
<Minute>0</Minute>
</PubMedPubDate>
<PubMedPubDate PubStatus="pubmed">
<Year>2010</Year>
<Month>10</Month>
<Day>5</Day>
<Hour>6</Hour>
<Minute>0</Minute>
</PubMedPubDate>
<PubMedPubDate PubStatus="medline">
<Year>2011</Year>
<Month>3</Month>
<Day>18</Day>
<Hour>6</Hour>
<Minute>0</Minute>
</PubMedPubDate>
</History>
<PublicationStatus>epublish</PublicationStatus>
<ArticleIdList>
<ArticleId IdType="pii">10.11.2</ArticleId>
<ArticleId IdType="doi">10.1167/10.11.2</ArticleId>
<ArticleId IdType="pubmed">20884497</ArticleId>
</ArticleIdList>
</PubmedData>
</pubmed>
</record>
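For programmatic use outside Dilib, fields of the record can be extracted with standard XML tooling. A minimal sketch with Python's xml.etree, applied to a small fragment of the record above (the TEI header and its namespaced elements are omitted for brevity):

```python
import xml.etree.ElementTree as ET

# Fragment of the PubMed part of the record above; a real record
# would be read from the exported XML file instead.
record = """<record>
  <pubmed>
    <MedlineCitation Owner="NLM" Status="MEDLINE">
      <PMID Version="1">20884497</PMID>
      <Article PubModel="Electronic">
        <ArticleTitle>Visual and haptic perceptual spaces show high similarity in humans.</ArticleTitle>
        <ELocationID EIdType="doi" ValidYN="Y">10.1167/10.11.2</ELocationID>
      </Article>
    </MedlineCitation>
  </pubmed>
</record>"""

root = ET.fromstring(record)
pmid = root.findtext(".//PMID")
title = root.findtext(".//ArticleTitle")
doi = root.findtext(".//ELocationID[@EIdType='doi']")
print(pmid, doi)
```

ElementTree's limited XPath support is enough for flat lookups like these; full TEI records with namespaced elements (e.g. nlm:affiliation) additionally need a namespace map passed to find/findtext.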

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/HapticV1/Data/PubMed/Corpus
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 001025 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/PubMed/Corpus/biblio.hfd -nk 001025 | SxmlIndent | more

To add a link to this page in the Wicri network

{{Explor lien
   |wiki=    Ticri/CIDE
   |area=    HapticV1
   |flux=    PubMed
   |étape=   Corpus
   |type=    RBID
   |clé=     pubmed:20884497
   |texte=   Visual and haptic perceptual spaces show high similarity in humans.
}}

To generate wiki pages

HfdIndexSelect -h $EXPLOR_AREA/Data/PubMed/Corpus/RBID.i   -Sk "pubmed:20884497" \
       | HfdSelect -Kh $EXPLOR_AREA/Data/PubMed/Corpus/biblio.hfd   \
       | NlmPubMed2Wicri -a HapticV1 

Wicri

This area was generated with Dilib version V0.6.23.
Data generation: Mon Jun 13 01:09:46 2016. Site generation: Wed Mar 6 09:54:07 2024