Exploration server on haptic devices

Warning: this site is under development!
Warning: this site was generated automatically from raw corpora.
The information it contains has therefore not been validated.

Experience-dependent visual cue integration based on consistencies between visual and haptic percepts

Internal identifier: 003460 (Istex/Corpus); previous: 003459; next: 003461

Experience-dependent visual cue integration based on consistencies between visual and haptic percepts

Authors: Joseph E. Atkins; József Fiser; Robert A. Jacobs

Source:

RBID: ISTEX:FBD3ADCD07C8A2CAEDCBC42FB3A763F70A7EF75E

English descriptors

Abstract

We study the hypothesis that observers can use haptic percepts as a standard against which the relative reliabilities of visual cues can be judged, and that these reliabilities determine how observers combine depth information provided by these cues. Using a novel visuo-haptic virtual reality environment, subjects viewed and grasped virtual objects. In Experiment 1, subjects were trained under motion relevant conditions, during which haptic and visual motion cues were consistent whereas haptic and visual texture cues were uncorrelated, and texture relevant conditions, during which haptic and texture cues were consistent whereas haptic and motion cues were uncorrelated. Subjects relied more on the motion cue after motion relevant training than after texture relevant training, and more on the texture cue after texture relevant training than after motion relevant training. Experiment 2 studied whether or not subjects could adapt their visual cue combination strategies in a context-dependent manner based on context-dependent consistencies between haptic and visual cues. Subjects successfully learned two cue combination strategies in parallel, and correctly applied each strategy in its appropriate context. Experiment 3, which was similar to Experiment 1 except that it used a more naturalistic experimental task, yielded the same pattern of results as Experiment 1 indicating that the findings do not depend on the precise nature of the experimental task. Overall, the results suggest that observers can involuntarily compare visual and haptic percepts in order to evaluate the relative reliabilities of visual cues, and that these reliabilities determine how cues are combined during three-dimensional visual perception.

Url:
DOI: 10.1016/S0042-6989(00)00254-6

Links to Exploration step

ISTEX:FBD3ADCD07C8A2CAEDCBC42FB3A763F70A7EF75E

The document in XML format

<record>
<TEI wicri:istexFullTextTei="biblStruct">
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">Experience-dependent visual cue integration based on consistencies between visual and haptic percepts</title>
<author>
<name sortKey="Atkins, Joseph E" sort="Atkins, Joseph E" uniqKey="Atkins J" first="Joseph E." last="Atkins">Joseph E. Atkins</name>
<affiliation>
<mods:affiliation>Department of Brain and Cognitive Sciences and the Center for Visual Science, University of Rochester, Rochester, NY 14627, USA</mods:affiliation>
</affiliation>
</author>
<author>
<name sortKey="Fiser, J Zsef" sort="Fiser, J Zsef" uniqKey="Fiser J" first="J Zsef" last="Fiser">J Zsef Fiser</name>
<affiliation>
<mods:affiliation>Department of Brain and Cognitive Sciences and the Center for Visual Science, University of Rochester, Rochester, NY 14627, USA</mods:affiliation>
</affiliation>
</author>
<author>
<name sortKey="Jacobs, Robert A" sort="Jacobs, Robert A" uniqKey="Jacobs R" first="Robert A." last="Jacobs">Robert A. Jacobs</name>
<affiliation>
<mods:affiliation>Department of Brain and Cognitive Sciences and the Center for Visual Science, University of Rochester, Rochester, NY 14627, USA</mods:affiliation>
</affiliation>
<affiliation>
<mods:affiliation>E-mail: robbie@bcs.rochester.edu</mods:affiliation>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">ISTEX</idno>
<idno type="RBID">ISTEX:FBD3ADCD07C8A2CAEDCBC42FB3A763F70A7EF75E</idno>
<date when="2001" year="2001">2001</date>
<idno type="doi">10.1016/S0042-6989(00)00254-6</idno>
<idno type="url">https://api.istex.fr/document/FBD3ADCD07C8A2CAEDCBC42FB3A763F70A7EF75E/fulltext/pdf</idno>
<idno type="wicri:Area/Istex/Corpus">003460</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title level="a" type="main" xml:lang="en">Experience-dependent visual cue integration based on consistencies between visual and haptic percepts</title>
<author>
<name sortKey="Atkins, Joseph E" sort="Atkins, Joseph E" uniqKey="Atkins J" first="Joseph E." last="Atkins">Joseph E. Atkins</name>
<affiliation>
<mods:affiliation>Department of Brain and Cognitive Sciences and the Center for Visual Science, University of Rochester, Rochester, NY 14627, USA</mods:affiliation>
</affiliation>
</author>
<author>
<name sortKey="Fiser, J Zsef" sort="Fiser, J Zsef" uniqKey="Fiser J" first="J Zsef" last="Fiser">J Zsef Fiser</name>
<affiliation>
<mods:affiliation>Department of Brain and Cognitive Sciences and the Center for Visual Science, University of Rochester, Rochester, NY 14627, USA</mods:affiliation>
</affiliation>
</author>
<author>
<name sortKey="Jacobs, Robert A" sort="Jacobs, Robert A" uniqKey="Jacobs R" first="Robert A." last="Jacobs">Robert A. Jacobs</name>
<affiliation>
<mods:affiliation>Department of Brain and Cognitive Sciences and the Center for Visual Science, University of Rochester, Rochester, NY 14627, USA</mods:affiliation>
</affiliation>
<affiliation>
<mods:affiliation>E-mail: robbie@bcs.rochester.edu</mods:affiliation>
</affiliation>
</author>
</analytic>
<monogr></monogr>
<series>
<title level="j">Vision Research</title>
<title level="j" type="abbrev">VR</title>
<idno type="ISSN">0042-6989</idno>
<imprint>
<publisher>ELSEVIER</publisher>
<date type="published" when="2000">2000</date>
<biblScope unit="volume">41</biblScope>
<biblScope unit="issue">4</biblScope>
<biblScope unit="page" from="449">449</biblScope>
<biblScope unit="page" to="461">461</biblScope>
</imprint>
<idno type="ISSN">0042-6989</idno>
</series>
<idno type="istex">FBD3ADCD07C8A2CAEDCBC42FB3A763F70A7EF75E</idno>
<idno type="DOI">10.1016/S0042-6989(00)00254-6</idno>
<idno type="PII">S0042-6989(00)00254-6</idno>
</biblStruct>
</sourceDesc>
<seriesStmt>
<idno type="ISSN">0042-6989</idno>
</seriesStmt>
</fileDesc>
<profileDesc>
<textClass>
<keywords scheme="KwdEn" xml:lang="en">
<term>Haptic percepts</term>
<term>Relative reliability</term>
<term>Visual cue integration</term>
<term>Visual percepts</term>
</keywords>
</textClass>
<langUsage>
<language ident="en">en</language>
</langUsage>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">We study the hypothesis that observers can use haptic percepts as a standard against which the relative reliabilities of visual cues can be judged, and that these reliabilities determine how observers combine depth information provided by these cues. Using a novel visuo-haptic virtual reality environment, subjects viewed and grasped virtual objects. In Experiment 1, subjects were trained under motion relevant conditions, during which haptic and visual motion cues were consistent whereas haptic and visual texture cues were uncorrelated, and texture relevant conditions, during which haptic and texture cues were consistent whereas haptic and motion cues were uncorrelated. Subjects relied more on the motion cue after motion relevant training than after texture relevant training, and more on the texture cue after texture relevant training than after motion relevant training. Experiment 2 studied whether or not subjects could adapt their visual cue combination strategies in a context-dependent manner based on context-dependent consistencies between haptic and visual cues. Subjects successfully learned two cue combination strategies in parallel, and correctly applied each strategy in its appropriate context. Experiment 3, which was similar to Experiment 1 except that it used a more naturalistic experimental task, yielded the same pattern of results as Experiment 1 indicating that the findings do not depend on the precise nature of the experimental task. Overall, the results suggest that observers can involuntarily compare visual and haptic percepts in order to evaluate the relative reliabilities of visual cues, and that these reliabilities determine how cues are combined during three-dimensional visual perception.</div>
</front>
</TEI>
<istex>
<corpusName>elsevier</corpusName>
<author>
<json:item>
<name>Joseph E. Atkins</name>
<affiliations>
<json:string>Department of Brain and Cognitive Sciences and the Center for Visual Science, University of Rochester, Rochester, NY 14627, USA</json:string>
</affiliations>
</json:item>
<json:item>
<name>József Fiser</name>
<affiliations>
<json:string>Department of Brain and Cognitive Sciences and the Center for Visual Science, University of Rochester, Rochester, NY 14627, USA</json:string>
</affiliations>
</json:item>
<json:item>
<name>Robert A. Jacobs</name>
<affiliations>
<json:string>Department of Brain and Cognitive Sciences and the Center for Visual Science, University of Rochester, Rochester, NY 14627, USA</json:string>
<json:string>E-mail: robbie@bcs.rochester.edu</json:string>
</affiliations>
</json:item>
</author>
<subject>
<json:item>
<lang>
<json:string>eng</json:string>
</lang>
<value>Visual cue integration</value>
</json:item>
<json:item>
<lang>
<json:string>eng</json:string>
</lang>
<value>Visual percepts</value>
</json:item>
<json:item>
<lang>
<json:string>eng</json:string>
</lang>
<value>Haptic percepts</value>
</json:item>
<json:item>
<lang>
<json:string>eng</json:string>
</lang>
<value>Relative reliability</value>
</json:item>
</subject>
<language>
<json:string>eng</json:string>
</language>
<abstract>We study the hypothesis that observers can use haptic percepts as a standard against which the relative reliabilities of visual cues can be judged, and that these reliabilities determine how observers combine depth information provided by these cues. Using a novel visuo-haptic virtual reality environment, subjects viewed and grasped virtual objects. In Experiment 1, subjects were trained under motion relevant conditions, during which haptic and visual motion cues were consistent whereas haptic and visual texture cues were uncorrelated, and texture relevant conditions, during which haptic and texture cues were consistent whereas haptic and motion cues were uncorrelated. Subjects relied more on the motion cue after motion relevant training than after texture relevant training, and more on the texture cue after texture relevant training than after motion relevant training. Experiment 2 studied whether or not subjects could adapt their visual cue combination strategies in a context-dependent manner based on context-dependent consistencies between haptic and visual cues. Subjects successfully learned two cue combination strategies in parallel, and correctly applied each strategy in its appropriate context. Experiment 3, which was similar to Experiment 1 except that it used a more naturalistic experimental task, yielded the same pattern of results as Experiment 1 indicating that the findings do not depend on the precise nature of the experimental task. Overall, the results suggest that observers can involuntarily compare visual and haptic percepts in order to evaluate the relative reliabilities of visual cues, and that these reliabilities determine how cues are combined during three-dimensional visual perception.</abstract>
<qualityIndicators>
<score>8</score>
<pdfVersion>1.2</pdfVersion>
<pdfPageSize>552 x 768 pts</pdfPageSize>
<refBibsNative>true</refBibsNative>
<keywordCount>4</keywordCount>
<abstractCharCount>1736</abstractCharCount>
<pdfWordCount>8533</pdfWordCount>
<pdfCharCount>51297</pdfCharCount>
<pdfPageCount>13</pdfPageCount>
<abstractWordCount>250</abstractWordCount>
</qualityIndicators>
<title>Experience-dependent visual cue integration based on consistencies between visual and haptic percepts</title>
<pii>
<json:string>S0042-6989(00)00254-6</json:string>
</pii>
<genre>
<json:string>research-article</json:string>
</genre>
<host>
<volume>41</volume>
<pii>
<json:string>S0042-6989(00)X0148-4</json:string>
</pii>
<pages>
<last>461</last>
<first>449</first>
</pages>
<issn>
<json:string>0042-6989</json:string>
</issn>
<issue>4</issue>
<genre>
<json:string>Journal</json:string>
</genre>
<language>
<json:string>unknown</json:string>
</language>
<title>Vision Research</title>
<publicationDate>2001</publicationDate>
</host>
<categories>
<wos>
<json:string>OPHTHALMOLOGY</json:string>
<json:string>NEUROSCIENCES</json:string>
</wos>
</categories>
<publicationDate>2000</publicationDate>
<copyrightDate>2001</copyrightDate>
<doi>
<json:string>10.1016/S0042-6989(00)00254-6</json:string>
</doi>
<id>FBD3ADCD07C8A2CAEDCBC42FB3A763F70A7EF75E</id>
<score>1</score>
<fulltext>
<json:item>
<original>true</original>
<mimetype>application/pdf</mimetype>
<extension>pdf</extension>
<uri>https://api.istex.fr/document/FBD3ADCD07C8A2CAEDCBC42FB3A763F70A7EF75E/fulltext/pdf</uri>
</json:item>
<json:item>
<original>true</original>
<mimetype>text/plain</mimetype>
<extension>txt</extension>
<uri>https://api.istex.fr/document/FBD3ADCD07C8A2CAEDCBC42FB3A763F70A7EF75E/fulltext/txt</uri>
</json:item>
<json:item>
<original>false</original>
<mimetype>application/zip</mimetype>
<extension>zip</extension>
<uri>https://api.istex.fr/document/FBD3ADCD07C8A2CAEDCBC42FB3A763F70A7EF75E/fulltext/zip</uri>
</json:item>
<istex:fulltextTEI uri="https://api.istex.fr/document/FBD3ADCD07C8A2CAEDCBC42FB3A763F70A7EF75E/fulltext/tei">
<teiHeader>
<fileDesc>
<titleStmt>
<title level="a" type="main" xml:lang="en">Experience-dependent visual cue integration based on consistencies between visual and haptic percepts</title>
</titleStmt>
<publicationStmt>
<authority>ISTEX</authority>
<publisher>ELSEVIER</publisher>
<availability>
<p>ELSEVIER</p>
</availability>
<date>2001</date>
</publicationStmt>
<notesStmt>
<note type="content">Fig. 1: (A) A subject using the visuo-haptic virtual reality experimental apparatus. The subject is grasping a virtual object viewed via displays embedded in the head-mounted goggles. (B) A typical instance of the display that the subjects viewed during the experiment. The motion cue cannot be illustrated, but the texture cue is evident from the foreshortening of the disks at the sides of the cylinder. (C) A schematic representation of the cylinders viewed from the top. The three ellipses represent three of the possible seven cylinder shapes (1=smallest depth; 4=depth equal to width; 7=largest depth).</note>
<note type="content">Fig. 2: The response data of subject JH on visual test trials following texture relevant training (top-left graph) and motion relevant training (bottom-left graph). The logistic model was used to fit surfaces to these two datasets (top-right and bottom-right graphs, respectively).</note>
<note type="content">Fig. 3: The estimated motion coefficient for each subject following motion relevant and texture relevant training based on visual and motor test trials.</note>
<note type="content">Fig. 4: The estimated motion coefficient for each subject in the motion relevant and texture relevant contexts based on visual and motor test trials.</note>
<note type="content">Fig. 5: The estimated motion coefficient for each subject following motion relevant and texture relevant training based on visual and motor test trials.</note>
</notesStmt>
<sourceDesc>
<biblStruct type="inbook">
<analytic>
<title level="a" type="main" xml:lang="en">Experience-dependent visual cue integration based on consistencies between visual and haptic percepts</title>
<author>
<persName>
<forename type="first">Joseph E.</forename>
<surname>Atkins</surname>
</persName>
<affiliation>Department of Brain and Cognitive Sciences and the Center for Visual Science, University of Rochester, Rochester, NY 14627, USA</affiliation>
</author>
<author>
<persName>
<forename type="first">József</forename>
<surname>Fiser</surname>
</persName>
<affiliation>Department of Brain and Cognitive Sciences and the Center for Visual Science, University of Rochester, Rochester, NY 14627, USA</affiliation>
</author>
<author>
<persName>
<forename type="first">Robert A.</forename>
<surname>Jacobs</surname>
</persName>
<email>robbie@bcs.rochester.edu</email>
<note type="correspondence">
<p>Corresponding author. Tel.: +1-716-2750753; fax: +1-716-4429216</p>
</note>
<affiliation>Department of Brain and Cognitive Sciences and the Center for Visual Science, University of Rochester, Rochester, NY 14627, USA</affiliation>
</author>
</analytic>
<monogr>
<title level="j">Vision Research</title>
<title level="j" type="abbrev">VR</title>
<idno type="pISSN">0042-6989</idno>
<idno type="PII">S0042-6989(00)X0148-4</idno>
<imprint>
<publisher>ELSEVIER</publisher>
<date type="published" when="2000"></date>
<biblScope unit="volume">41</biblScope>
<biblScope unit="issue">4</biblScope>
<biblScope unit="page" from="449">449</biblScope>
<biblScope unit="page" to="461">461</biblScope>
</imprint>
</monogr>
<idno type="istex">FBD3ADCD07C8A2CAEDCBC42FB3A763F70A7EF75E</idno>
<idno type="DOI">10.1016/S0042-6989(00)00254-6</idno>
<idno type="PII">S0042-6989(00)00254-6</idno>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<creation>
<date>2001</date>
</creation>
<langUsage>
<language ident="en">en</language>
</langUsage>
<abstract xml:lang="en">
<p>We study the hypothesis that observers can use haptic percepts as a standard against which the relative reliabilities of visual cues can be judged, and that these reliabilities determine how observers combine depth information provided by these cues. Using a novel visuo-haptic virtual reality environment, subjects viewed and grasped virtual objects. In Experiment 1, subjects were trained under motion relevant conditions, during which haptic and visual motion cues were consistent whereas haptic and visual texture cues were uncorrelated, and texture relevant conditions, during which haptic and texture cues were consistent whereas haptic and motion cues were uncorrelated. Subjects relied more on the motion cue after motion relevant training than after texture relevant training, and more on the texture cue after texture relevant training than after motion relevant training. Experiment 2 studied whether or not subjects could adapt their visual cue combination strategies in a context-dependent manner based on context-dependent consistencies between haptic and visual cues. Subjects successfully learned two cue combination strategies in parallel, and correctly applied each strategy in its appropriate context. Experiment 3, which was similar to Experiment 1 except that it used a more naturalistic experimental task, yielded the same pattern of results as Experiment 1 indicating that the findings do not depend on the precise nature of the experimental task. Overall, the results suggest that observers can involuntarily compare visual and haptic percepts in order to evaluate the relative reliabilities of visual cues, and that these reliabilities determine how cues are combined during three-dimensional visual perception.</p>
</abstract>
<textClass xml:lang="en">
<keywords scheme="keyword">
<list>
<head>Keywords</head>
<item>
<term>Visual cue integration</term>
</item>
<item>
<term>Visual percepts</term>
</item>
<item>
<term>Haptic percepts</term>
</item>
<item>
<term>Relative reliability</term>
</item>
</list>
</keywords>
</textClass>
</profileDesc>
<revisionDesc>
<change when="2000-05-05">Received</change>
<change when="2000-09-13">Modified</change>
<change when="2000">Published</change>
</revisionDesc>
</teiHeader>
</istex:fulltextTEI>
</fulltext>
<metadata>
<istex:metadataXml wicri:clean="Elsevier, elements deleted: ce:floats; body; tail">
<istex:xmlDeclaration>version="1.0" encoding="utf-8"</istex:xmlDeclaration>
<istex:docType PUBLIC="-//ES//DTD journal article DTD version 4.5.2//EN//XML" URI="art452.dtd" name="istex:docType">
<istex:entity SYSTEM="gr1" NDATA="IMAGE" name="gr1"></istex:entity>
<istex:entity SYSTEM="gr2" NDATA="IMAGE" name="gr2"></istex:entity>
<istex:entity SYSTEM="gr3" NDATA="IMAGE" name="gr3"></istex:entity>
<istex:entity SYSTEM="gr4" NDATA="IMAGE" name="gr4"></istex:entity>
<istex:entity SYSTEM="gr5" NDATA="IMAGE" name="gr5"></istex:entity>
</istex:docType>
<istex:document>
<converted-article version="4.5.2" docsubtype="fla" xml:lang="en">
<item-info>
<jid>VR</jid>
<aid>2939</aid>
<ce:pii>S0042-6989(00)00254-6</ce:pii>
<ce:doi>10.1016/S0042-6989(00)00254-6</ce:doi>
<ce:copyright type="full-transfer" year="2001">Elsevier Science Ltd</ce:copyright>
</item-info>
<head>
<ce:title>Experience-dependent visual cue integration based on consistencies between visual and haptic percepts</ce:title>
<ce:author-group>
<ce:author>
<ce:given-name>Joseph E.</ce:given-name>
<ce:surname>Atkins</ce:surname>
</ce:author>
<ce:author>
<ce:given-name>József</ce:given-name>
<ce:surname>Fiser</ce:surname>
</ce:author>
<ce:author>
<ce:given-name>Robert A.</ce:given-name>
<ce:surname>Jacobs</ce:surname>
<ce:cross-ref refid="CORR1">*</ce:cross-ref>
<ce:e-address>robbie@bcs.rochester.edu</ce:e-address>
</ce:author>
<ce:affiliation>
<ce:textfn>Department of Brain and Cognitive Sciences and the Center for Visual Science, University of Rochester, Rochester, NY 14627, USA</ce:textfn>
</ce:affiliation>
<ce:correspondence id="CORR1">
<ce:label>*</ce:label>
<ce:text>Corresponding author. Tel.: +1-716-2750753; fax: +1-716-4429216</ce:text>
</ce:correspondence>
</ce:author-group>
<ce:date-received day="5" month="5" year="2000"></ce:date-received>
<ce:date-revised day="13" month="9" year="2000"></ce:date-revised>
<ce:abstract>
<ce:section-title>Abstract</ce:section-title>
<ce:abstract-sec>
<ce:simple-para>We study the hypothesis that observers can use haptic percepts as a standard against which the relative reliabilities of visual cues can be judged, and that these reliabilities determine how observers combine depth information provided by these cues. Using a novel visuo-haptic virtual reality environment, subjects viewed and grasped virtual objects. In Experiment 1, subjects were trained under motion relevant conditions, during which haptic and visual motion cues were consistent whereas haptic and visual texture cues were uncorrelated, and texture relevant conditions, during which haptic and texture cues were consistent whereas haptic and motion cues were uncorrelated. Subjects relied more on the motion cue after motion relevant training than after texture relevant training, and more on the texture cue after texture relevant training than after motion relevant training. Experiment 2 studied whether or not subjects could adapt their visual cue combination strategies in a context-dependent manner based on context-dependent consistencies between haptic and visual cues. Subjects successfully learned two cue combination strategies in parallel, and correctly applied each strategy in its appropriate context. Experiment 3, which was similar to Experiment 1 except that it used a more naturalistic experimental task, yielded the same pattern of results as Experiment 1 indicating that the findings do not depend on the precise nature of the experimental task. Overall, the results suggest that observers can involuntarily compare visual and haptic percepts in order to evaluate the relative reliabilities of visual cues, and that these reliabilities determine how cues are combined during three-dimensional visual perception.</ce:simple-para>
</ce:abstract-sec>
</ce:abstract>
<ce:keywords class="keyword">
<ce:section-title>Keywords</ce:section-title>
<ce:keyword>
<ce:text>Visual cue integration</ce:text>
</ce:keyword>
<ce:keyword>
<ce:text>Visual percepts</ce:text>
</ce:keyword>
<ce:keyword>
<ce:text>Haptic percepts</ce:text>
</ce:keyword>
<ce:keyword>
<ce:text>Relative reliability</ce:text>
</ce:keyword>
</ce:keywords>
</head>
</converted-article>
</istex:document>
</istex:metadataXml>
<mods version="3.6">
<titleInfo lang="en">
<title>Experience-dependent visual cue integration based on consistencies between visual and haptic percepts</title>
</titleInfo>
<titleInfo type="alternative" lang="en" contentType="CDATA">
<title>Experience-dependent visual cue integration based on consistencies between visual and haptic percepts</title>
</titleInfo>
<name type="personal">
<namePart type="given">Joseph E.</namePart>
<namePart type="family">Atkins</namePart>
<affiliation>Department of Brain and Cognitive Sciences and the Center for Visual Science, University of Rochester, Rochester, NY 14627, USA</affiliation>
<role>
<roleTerm type="text">author</roleTerm>
</role>
</name>
<name type="personal">
<namePart type="given">József</namePart>
<namePart type="family">Fiser</namePart>
<affiliation>Department of Brain and Cognitive Sciences and the Center for Visual Science, University of Rochester, Rochester, NY 14627, USA</affiliation>
<role>
<roleTerm type="text">author</roleTerm>
</role>
</name>
<name type="personal">
<namePart type="given">Robert A.</namePart>
<namePart type="family">Jacobs</namePart>
<affiliation>Department of Brain and Cognitive Sciences and the Center for Visual Science, University of Rochester, Rochester, NY 14627, USA</affiliation>
<affiliation>E-mail: robbie@bcs.rochester.edu</affiliation>
<description>Corresponding author. Tel.: +1-716-2750753; fax: +1-716-4429216</description>
<role>
<roleTerm type="text">author</roleTerm>
</role>
</name>
<typeOfResource>text</typeOfResource>
<genre type="research-article" displayLabel="Full-length article"></genre>
<originInfo>
<publisher>ELSEVIER</publisher>
<dateIssued encoding="w3cdtf">2000</dateIssued>
<dateCaptured encoding="w3cdtf">2000-05-05</dateCaptured>
<dateModified encoding="w3cdtf">2000-09-13</dateModified>
<copyrightDate encoding="w3cdtf">2001</copyrightDate>
</originInfo>
<language>
<languageTerm type="code" authority="iso639-2b">eng</languageTerm>
<languageTerm type="code" authority="rfc3066">en</languageTerm>
</language>
<physicalDescription>
<internetMediaType>text/html</internetMediaType>
</physicalDescription>
<abstract lang="en">We study the hypothesis that observers can use haptic percepts as a standard against which the relative reliabilities of visual cues can be judged, and that these reliabilities determine how observers combine depth information provided by these cues. Using a novel visuo-haptic virtual reality environment, subjects viewed and grasped virtual objects. In Experiment 1, subjects were trained under motion relevant conditions, during which haptic and visual motion cues were consistent whereas haptic and visual texture cues were uncorrelated, and texture relevant conditions, during which haptic and texture cues were consistent whereas haptic and motion cues were uncorrelated. Subjects relied more on the motion cue after motion relevant training than after texture relevant training, and more on the texture cue after texture relevant training than after motion relevant training. Experiment 2 studied whether or not subjects could adapt their visual cue combination strategies in a context-dependent manner based on context-dependent consistencies between haptic and visual cues. Subjects successfully learned two cue combination strategies in parallel, and correctly applied each strategy in its appropriate context. Experiment 3, which was similar to Experiment 1 except that it used a more naturalistic experimental task, yielded the same pattern of results as Experiment 1 indicating that the findings do not depend on the precise nature of the experimental task. Overall, the results suggest that observers can involuntarily compare visual and haptic percepts in order to evaluate the relative reliabilities of visual cues, and that these reliabilities determine how cues are combined during three-dimensional visual perception.</abstract>
<note type="content">Fig. 1: (A) A subject using the visuo-haptic virtual reality experimental apparatus. The subject is grasping a virtual object viewed via displays embedded in the head-mounted goggles. (B) A typical instance of the display that the subjects viewed during the experiment. The motion cue cannot be illustrated, but the texture cue is evident from the foreshortening of the disks at the sides of the cylinder. (C) A schematic representation of the cylinders viewed from the top. The three ellipses represent three of the possible seven cylinder shapes (1=smallest depth; 4=depth equal to width; 7=largest depth).</note>
<note type="content">Fig. 2: The response data of subject JH on visual test trials following texture relevant training (top-left graph) and motion relevant training (bottom-left graph). The logistic model was used to fit surfaces to these two datasets (top-right and bottom-right graphs, respectively).</note>
<note type="content">Fig. 3: The estimated motion coefficient for each subject following motion relevant and texture relevant training based on visual and motor test trials.</note>
<note type="content">Fig. 4: The estimated motion coefficient for each subject in the motion relevant and texture relevant contexts based on visual and motor test trials.</note>
<note type="content">Fig. 5: The estimated motion coefficient for each subject following motion relevant and texture relevant training based on visual and motor test trials.</note>
<subject lang="en">
<genre>Keywords</genre>
<topic>Visual cue integration</topic>
<topic>Visual percepts</topic>
<topic>Haptic percepts</topic>
<topic>Relative reliability</topic>
</subject>
<relatedItem type="host">
<titleInfo>
<title>Vision Research</title>
</titleInfo>
<titleInfo type="abbreviated">
<title>VR</title>
</titleInfo>
<genre type="Journal">journal</genre>
<originInfo>
<dateIssued encoding="w3cdtf">200102</dateIssued>
</originInfo>
<identifier type="ISSN">0042-6989</identifier>
<identifier type="PII">S0042-6989(00)X0148-4</identifier>
<part>
<date>200102</date>
<detail type="volume">
<number>41</number>
<caption>vol.</caption>
</detail>
<detail type="issue">
<number>4</number>
<caption>no.</caption>
</detail>
<extent unit="issue pages">
<start>415</start>
<end>540</end>
</extent>
<extent unit="pages">
<start>449</start>
<end>461</end>
</extent>
</part>
</relatedItem>
<identifier type="istex">FBD3ADCD07C8A2CAEDCBC42FB3A763F70A7EF75E</identifier>
<identifier type="DOI">10.1016/S0042-6989(00)00254-6</identifier>
<identifier type="PII">S0042-6989(00)00254-6</identifier>
<accessCondition type="use and reproduction" contentType="">© 2001 Elsevier Science Ltd</accessCondition>
<recordInfo>
<recordContentSource>ELSEVIER</recordContentSource>
<recordOrigin>Elsevier Science Ltd, ©2001</recordOrigin>
</recordInfo>
</mods>
</metadata>
<enrichments>
<istex:catWosTEI uri="https://api.istex.fr/document/FBD3ADCD07C8A2CAEDCBC42FB3A763F70A7EF75E/enrichments/catWos">
<teiHeader>
<profileDesc>
<textClass>
<classCode scheme="WOS">OPHTHALMOLOGY</classCode>
<classCode scheme="WOS">NEUROSCIENCES</classCode>
</textClass>
</profileDesc>
</teiHeader>
</istex:catWosTEI>
</enrichments>
<serie></serie>
</istex>
</record>

To work with this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/HapticV1/Data/Istex/Corpus
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 003460 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/Istex/Corpus/biblio.hfd -nk 003460 | SxmlIndent | more
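The two commands above can also be wrapped in a small script. Below is a minimal sketch, assuming that WICRI_ROOT points to the root of the Wicri installation and that the Dilib tools HfdSelect and SxmlIndent are on the PATH; the output file name record-003460.xml is purely illustrative.

# Minimal sketch: export record 003460 as indented XML.
# Assumes WICRI_ROOT is set and the Dilib tools are installed.
EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/HapticV1/Data/Istex/Corpus
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 003460 | SxmlIndent > record-003460.xml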

To add a link to this page in the Wicri network

{{Explor lien
   |wiki=    Ticri/CIDE
   |area=    HapticV1
   |flux=    Istex
   |étape=   Corpus
   |type=    RBID
   |clé=     ISTEX:FBD3ADCD07C8A2CAEDCBC42FB3A763F70A7EF75E
   |texte=   Experience-dependent visual cue integration based on consistencies between visual and haptic percepts
}}

Wicri

This area was generated with Dilib version V0.6.23.
Data generation: Mon Jun 13 01:09:46 2016. Site generation: Wed Mar 6 09:54:07 2024