Experience-dependent visual cue integration based on consistencies between visual and haptic percepts.
Internal identifier: 001D91 (PubMed/Corpus); previous: 001D90; next: 001D92
Authors: J E Atkins; J. Fiser; R A Jacobs
Source:
- Vision research [0042-6989]; 2001.
English descriptors
- KwdEn :
- Cues, Depth Perception (physiology), Humans, Memory (physiology), Motion Perception (physiology), Touch (physiology), User-Computer Interface.
- MESH :
- physiology : Depth Perception, Memory, Motion Perception, Touch.
- Cues, Humans, User-Computer Interface.
Abstract
We study the hypothesis that observers can use haptic percepts as a standard against which the relative reliabilities of visual cues can be judged, and that these reliabilities determine how observers combine depth information provided by these cues. Using a novel visuo-haptic virtual reality environment, subjects viewed and grasped virtual objects. In Experiment 1, subjects were trained under motion relevant conditions, during which haptic and visual motion cues were consistent whereas haptic and visual texture cues were uncorrelated, and texture relevant conditions, during which haptic and texture cues were consistent whereas haptic and motion cues were uncorrelated. Subjects relied more on the motion cue after motion relevant training than after texture relevant training, and more on the texture cue after texture relevant training than after motion relevant training. Experiment 2 studied whether or not subjects could adapt their visual cue combination strategies in a context-dependent manner based on context-dependent consistencies between haptic and visual cues. Subjects successfully learned two cue combination strategies in parallel, and correctly applied each strategy in its appropriate context. Experiment 3, which was similar to Experiment 1 except that it used a more naturalistic experimental task, yielded the same pattern of results as Experiment 1 indicating that the findings do not depend on the precise nature of the experimental task. Overall, the results suggest that observers can involuntarily compare visual and haptic percepts in order to evaluate the relative reliabilities of visual cues, and that these reliabilities determine how cues are combined during three-dimensional visual perception.
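The abstract's framing assumes the standard linear cue-combination model from the cue-integration literature, in which each visual cue is weighted in proportion to its reliability (inverse variance). The sketch below is a hypothetical illustration of that model, not code from the paper; the function name and example values are invented.

```python
# Hypothetical sketch of reliability-weighted linear cue combination
# (standard in the cue-integration literature; not code from the paper).
# Each cue i provides a depth estimate d_i with variance v_i; its
# reliability is r_i = 1/v_i, and weights are normalized reliabilities.

def combine_cues(estimates, variances):
    """Return the reliability-weighted estimate and its variance."""
    if not estimates or len(estimates) != len(variances):
        raise ValueError("need one variance per estimate")
    reliabilities = [1.0 / v for v in variances]
    total = sum(reliabilities)
    weights = [r / total for r in reliabilities]
    combined = sum(w * d for w, d in zip(weights, estimates))
    # Variance of the optimal combined estimate is 1 / (sum of reliabilities).
    return combined, 1.0 / total

# Illustration: a motion cue with lower variance (higher reliability)
# pulls the combined estimate toward its value more than a texture cue.
depth, var = combine_cues([10.0, 14.0], [1.0, 4.0])  # weights 0.8 and 0.2
```

On this view, the training manipulations in Experiments 1 and 2 would shift the weights by changing the observer's estimate of each cue's reliability, with haptics serving as the standard.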
PubMed: 11166048
Links to Exploration step
pubmed:11166048
The document in XML format
<record><TEI><teiHeader><fileDesc><titleStmt><title xml:lang="en">Experience-dependent visual cue integration based on consistencies between visual and haptic percepts.</title>
<author><name sortKey="Atkins, J E" sort="Atkins, J E" uniqKey="Atkins J" first="J E" last="Atkins">J E Atkins</name>
<affiliation><nlm:affiliation>Department of Brain and Cognitive Sciences and the Center for Visual Science, University of Rochester, Rochester, NY 14627, USA.</nlm:affiliation>
</affiliation>
</author>
<author><name sortKey="Fiser, J" sort="Fiser, J" uniqKey="Fiser J" first="J" last="Fiser">J. Fiser</name>
</author>
<author><name sortKey="Jacobs, R A" sort="Jacobs, R A" uniqKey="Jacobs R" first="R A" last="Jacobs">R A Jacobs</name>
</author>
</titleStmt>
<publicationStmt><idno type="wicri:source">PubMed</idno>
<date when="2001">2001</date>
<idno type="RBID">pubmed:11166048</idno>
<idno type="pmid">11166048</idno>
<idno type="wicri:Area/PubMed/Corpus">001D91</idno>
</publicationStmt>
<sourceDesc><biblStruct><analytic><title xml:lang="en">Experience-dependent visual cue integration based on consistencies between visual and haptic percepts.</title>
<author><name sortKey="Atkins, J E" sort="Atkins, J E" uniqKey="Atkins J" first="J E" last="Atkins">J E Atkins</name>
<affiliation><nlm:affiliation>Department of Brain and Cognitive Sciences and the Center for Visual Science, University of Rochester, Rochester, NY 14627, USA.</nlm:affiliation>
</affiliation>
</author>
<author><name sortKey="Fiser, J" sort="Fiser, J" uniqKey="Fiser J" first="J" last="Fiser">J. Fiser</name>
</author>
<author><name sortKey="Jacobs, R A" sort="Jacobs, R A" uniqKey="Jacobs R" first="R A" last="Jacobs">R A Jacobs</name>
</author>
</analytic>
<series><title level="j">Vision research</title>
<idno type="ISSN">0042-6989</idno>
<imprint><date when="2001" type="published">2001</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc><textClass><keywords scheme="KwdEn" xml:lang="en"><term>Cues</term>
<term>Depth Perception (physiology)</term>
<term>Humans</term>
<term>Memory (physiology)</term>
<term>Motion Perception (physiology)</term>
<term>Touch (physiology)</term>
<term>User-Computer Interface</term>
</keywords>
<keywords scheme="MESH" qualifier="physiology" xml:lang="en"><term>Depth Perception</term>
<term>Memory</term>
<term>Motion Perception</term>
<term>Touch</term>
</keywords>
<keywords scheme="MESH" xml:lang="en"><term>Cues</term>
<term>Humans</term>
<term>User-Computer Interface</term>
</keywords>
</textClass>
</profileDesc>
</teiHeader>
<front><div type="abstract" xml:lang="en">We study the hypothesis that observers can use haptic percepts as a standard against which the relative reliabilities of visual cues can be judged, and that these reliabilities determine how observers combine depth information provided by these cues. Using a novel visuo-haptic virtual reality environment, subjects viewed and grasped virtual objects. In Experiment 1, subjects were trained under motion relevant conditions, during which haptic and visual motion cues were consistent whereas haptic and visual texture cues were uncorrelated, and texture relevant conditions, during which haptic and texture cues were consistent whereas haptic and motion cues were uncorrelated. Subjects relied more on the motion cue after motion relevant training than after texture relevant training, and more on the texture cue after texture relevant training than after motion relevant training. Experiment 2 studied whether or not subjects could adapt their visual cue combination strategies in a context-dependent manner based on context-dependent consistencies between haptic and visual cues. Subjects successfully learned two cue combination strategies in parallel, and correctly applied each strategy in its appropriate context. Experiment 3, which was similar to Experiment 1 except that it used a more naturalistic experimental task, yielded the same pattern of results as Experiment 1 indicating that the findings do not depend on the precise nature of the experimental task. Overall, the results suggest that observers can involuntarily compare visual and haptic percepts in order to evaluate the relative reliabilities of visual cues, and that these reliabilities determine how cues are combined during three-dimensional visual perception.</div>
</front>
</TEI>
<pubmed><MedlineCitation Owner="NLM" Status="MEDLINE"><PMID Version="1">11166048</PMID>
<DateCreated><Year>2001</Year>
<Month>02</Month>
<Day>21</Day>
</DateCreated>
<DateCompleted><Year>2001</Year>
<Month>03</Month>
<Day>01</Day>
</DateCompleted>
<DateRevised><Year>2007</Year>
<Month>11</Month>
<Day>14</Day>
</DateRevised>
<Article PubModel="Print"><Journal><ISSN IssnType="Print">0042-6989</ISSN>
<JournalIssue CitedMedium="Print"><Volume>41</Volume>
<Issue>4</Issue>
<PubDate><Year>2001</Year>
<Month>Feb</Month>
</PubDate>
</JournalIssue>
<Title>Vision research</Title>
<ISOAbbreviation>Vision Res.</ISOAbbreviation>
</Journal>
<ArticleTitle>Experience-dependent visual cue integration based on consistencies between visual and haptic percepts.</ArticleTitle>
<Pagination><MedlinePgn>449-61</MedlinePgn>
</Pagination>
<Abstract><AbstractText>We study the hypothesis that observers can use haptic percepts as a standard against which the relative reliabilities of visual cues can be judged, and that these reliabilities determine how observers combine depth information provided by these cues. Using a novel visuo-haptic virtual reality environment, subjects viewed and grasped virtual objects. In Experiment 1, subjects were trained under motion relevant conditions, during which haptic and visual motion cues were consistent whereas haptic and visual texture cues were uncorrelated, and texture relevant conditions, during which haptic and texture cues were consistent whereas haptic and motion cues were uncorrelated. Subjects relied more on the motion cue after motion relevant training than after texture relevant training, and more on the texture cue after texture relevant training than after motion relevant training. Experiment 2 studied whether or not subjects could adapt their visual cue combination strategies in a context-dependent manner based on context-dependent consistencies between haptic and visual cues. Subjects successfully learned two cue combination strategies in parallel, and correctly applied each strategy in its appropriate context. Experiment 3, which was similar to Experiment 1 except that it used a more naturalistic experimental task, yielded the same pattern of results as Experiment 1 indicating that the findings do not depend on the precise nature of the experimental task. Overall, the results suggest that observers can involuntarily compare visual and haptic percepts in order to evaluate the relative reliabilities of visual cues, and that these reliabilities determine how cues are combined during three-dimensional visual perception.</AbstractText>
</Abstract>
<AuthorList CompleteYN="Y"><Author ValidYN="Y"><LastName>Atkins</LastName>
<ForeName>J E</ForeName>
<Initials>JE</Initials>
<AffiliationInfo><Affiliation>Department of Brain and Cognitive Sciences and the Center for Visual Science, University of Rochester, Rochester, NY 14627, USA.</Affiliation>
</AffiliationInfo>
</Author>
<Author ValidYN="Y"><LastName>Fiser</LastName>
<ForeName>J</ForeName>
<Initials>J</Initials>
</Author>
<Author ValidYN="Y"><LastName>Jacobs</LastName>
<ForeName>R A</ForeName>
<Initials>RA</Initials>
</Author>
</AuthorList>
<Language>eng</Language>
<GrantList CompleteYN="Y"><Grant><GrantID>P41-RR09283</GrantID>
<Acronym>RR</Acronym>
<Agency>NCRR NIH HHS</Agency>
<Country>United States</Country>
</Grant>
<Grant><GrantID>R01-EY13149</GrantID>
<Acronym>EY</Acronym>
<Agency>NEI NIH HHS</Agency>
<Country>United States</Country>
</Grant>
<Grant><GrantID>R29-MH54770</GrantID>
<Acronym>MH</Acronym>
<Agency>NIMH NIH HHS</Agency>
<Country>United States</Country>
</Grant>
</GrantList>
<PublicationTypeList><PublicationType UI="D016428">Journal Article</PublicationType>
<PublicationType UI="D013485">Research Support, Non-U.S. Gov't</PublicationType>
<PublicationType UI="D013486">Research Support, U.S. Gov't, Non-P.H.S.</PublicationType>
<PublicationType UI="D013487">Research Support, U.S. Gov't, P.H.S.</PublicationType>
</PublicationTypeList>
</Article>
<MedlineJournalInfo><Country>England</Country>
<MedlineTA>Vision Res</MedlineTA>
<NlmUniqueID>0417402</NlmUniqueID>
<ISSNLinking>0042-6989</ISSNLinking>
</MedlineJournalInfo>
<CitationSubset>IM</CitationSubset>
<MeshHeadingList><MeshHeading><DescriptorName MajorTopicYN="Y" UI="D003463">Cues</DescriptorName>
</MeshHeading>
<MeshHeading><DescriptorName MajorTopicYN="N" UI="D003867">Depth Perception</DescriptorName>
<QualifierName MajorTopicYN="Y" UI="Q000502">physiology</QualifierName>
</MeshHeading>
<MeshHeading><DescriptorName MajorTopicYN="N" UI="D006801">Humans</DescriptorName>
</MeshHeading>
<MeshHeading><DescriptorName MajorTopicYN="N" UI="D008568">Memory</DescriptorName>
<QualifierName MajorTopicYN="Y" UI="Q000502">physiology</QualifierName>
</MeshHeading>
<MeshHeading><DescriptorName MajorTopicYN="N" UI="D009039">Motion Perception</DescriptorName>
<QualifierName MajorTopicYN="Y" UI="Q000502">physiology</QualifierName>
</MeshHeading>
<MeshHeading><DescriptorName MajorTopicYN="N" UI="D014110">Touch</DescriptorName>
<QualifierName MajorTopicYN="Y" UI="Q000502">physiology</QualifierName>
</MeshHeading>
<MeshHeading><DescriptorName MajorTopicYN="N" UI="D014584">User-Computer Interface</DescriptorName>
</MeshHeading>
</MeshHeadingList>
</MedlineCitation>
<PubmedData><History><PubMedPubDate PubStatus="pubmed"><Year>2001</Year>
<Month>2</Month>
<Day>13</Day>
<Hour>11</Hour>
<Minute>0</Minute>
</PubMedPubDate>
<PubMedPubDate PubStatus="medline"><Year>2001</Year>
<Month>3</Month>
<Day>7</Day>
<Hour>10</Hour>
<Minute>1</Minute>
</PubMedPubDate>
<PubMedPubDate PubStatus="entrez"><Year>2001</Year>
<Month>2</Month>
<Day>13</Day>
<Hour>11</Hour>
<Minute>0</Minute>
</PubMedPubDate>
</History>
<PublicationStatus>ppublish</PublicationStatus>
<ArticleIdList><ArticleId IdType="pubmed">11166048</ArticleId>
<ArticleId IdType="pii">S0042-6989(00)00254-6</ArticleId>
</ArticleIdList>
</PubmedData>
</pubmed>
</record>
To manipulate this document under Unix (Dilib)
EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/HapticV1/Data/PubMed/Corpus
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 001D91 | SxmlIndent | more
Or
HfdSelect -h $EXPLOR_AREA/Data/PubMed/Corpus/biblio.hfd -nk 001D91 | SxmlIndent | more
To put a link to this page in the Wicri network
{{Explor lien |wiki= Ticri/CIDE |area= HapticV1 |flux= PubMed |étape= Corpus |type= RBID |clé= pubmed:11166048 |texte= Experience-dependent visual cue integration based on consistencies between visual and haptic percepts. }}
To generate wiki pages
HfdIndexSelect -h $EXPLOR_AREA/Data/PubMed/Corpus/RBID.i -Sk "pubmed:11166048" \
  | HfdSelect -Kh $EXPLOR_AREA/Data/PubMed/Corpus/biblio.hfd \
  | NlmPubMed2Wicri -a HapticV1
This area was generated with Dilib version V0.6.23.