Exploration server on haptic devices

Warning: this site is under development!
Warning: this site was generated automatically from raw corpora.
The information it contains has therefore not been validated.

Humans integrate visual and haptic information in a statistically optimal fashion

Internal identifier: 001297 (PascalFrancis/Corpus); previous: 001296; next: 001298


Authors: Marc O. Ernst; Martin S. Banks

Source: Nature (London); ISSN 0028-0836; 2002; Vol. 415; No. 6870; pp. 429-433

RBID : Pascal:02-0244122

French descriptors

English descriptors

Abstract

When a person looks at an object while exploring it with their hand, vision and touch both provide information for estimating the properties of the object. Vision frequently dominates the integrated visual-haptic percept, for example when judging size, shape or position [1-3], but in some circumstances the percept is clearly affected by haptics [4-7]. Here we propose that a general principle, which minimizes variance in the final estimate, determines the degree to which vision or haptics dominates. This principle is realized by using maximum-likelihood estimation [8-15] to combine the inputs. To investigate cue combination quantitatively, we first measured the variances associated with visual and haptic estimation of height. We then used these measurements to construct a maximum-likelihood integrator. This model behaved very similarly to humans in a visual-haptic task. Thus, the nervous system seems to combine visual and haptic information in a fashion that is similar to a maximum-likelihood integrator. Visual dominance occurs when the variance associated with visual estimation is lower than that associated with haptic estimation.
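
The integration rule the abstract refers to amounts to weighting each single-cue estimate by its reliability (reciprocal variance). The following Python sketch only illustrates that principle under the usual assumptions of independent, Gaussian noise on each cue; the function name, variable names, and numerical values are hypothetical and are not taken from the paper.

def mle_integrate(s_v, var_v, s_h, var_h):
    # Weight each cue by its reliability (1/variance), normalized to sum to 1.
    w_v = (1.0 / var_v) / (1.0 / var_v + 1.0 / var_h)
    w_h = 1.0 - w_v
    s_vh = w_v * s_v + w_h * s_h                  # combined visual-haptic estimate
    var_vh = (var_v * var_h) / (var_v + var_h)    # never larger than either single-cue variance
    return s_vh, var_vh

# Hypothetical example: vision is the more reliable cue, so it dominates the combined estimate.
combined, combined_var = mle_integrate(s_v=55.0, var_v=1.0, s_h=50.0, var_h=4.0)
print(combined, combined_var)   # 54.0 0.8

In this sketch, lowering the visual variance pulls the combined estimate toward the visual value, which is the sense in which "visual dominance" follows from the variance-minimizing rule rather than from a fixed preference for vision.
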

Record in standard format (ISO 2709)

See the documentation on the Inist Standard format.

pA  
A01 01  1    @0 0028-0836
A02 01      @0 NATUAS
A03   1    @0 Nature : (Lond.)
A05       @2 415
A06       @2 6870
A08 01  1  ENG  @1 Humans integrate visual and haptic information in a statistically optimal fashion
A11 01  1    @1 ERNST (Marc O.)
A11 02  1    @1 BANKS (Martin S.)
A14 01      @1 Vision Science Program/School of Optometry, University of California @2 Berkeley 94720-2020 @3 USA @Z 1 aut. @Z 2 aut.
A20       @1 429-433
A21       @1 2002
A23 01      @0 ENG
A43 01      @1 INIST @2 142 @5 354000102509760230
A44       @0 0000 @1 © 2002 INIST-CNRS. All rights reserved.
A45       @0 25 ref.
A47 01  1    @0 02-0244122
A60       @1 P @3 LT
A61       @0 A
A64 01  1    @0 Nature : (London)
A66 01      @0 GBR
C01 01    ENG  @0 When a person looks at an object while exploring it with their hand, vision and touch both provide information for estimating the properties of the object. Vision frequently dominates the integrated visual-haptic percept, for example when judging size, shape or position [1-3], but in some circumstances the percept is clearly affected by haptics [4-7]. Here we propose that a general principle, which minimizes variance in the final estimate, determines the degree to which vision or haptics dominates. This principle is realized by using maximum-likelihood estimation [8-15] to combine the inputs. To investigate cue combination quantitatively, we first measured the variances associated with visual and haptic estimation of height. We then used these measurements to construct a maximum-likelihood integrator. This model behaved very similarly to humans in a visual-haptic task. Thus, the nervous system seems to combine visual and haptic information in a fashion that is similar to a maximum-likelihood integrator. Visual dominance occurs when the variance associated with visual estimation is lower than that associated with haptic estimation.
C02 01  X    @0 002A26E08
C03 01  X  FRE  @0 Intégration information @5 01
C03 01  X  ENG  @0 Information integration @5 01
C03 01  X  SPA  @0 Integración información @5 01
C03 02  X  FRE  @0 Perception intermodale @5 02
C03 02  X  ENG  @0 Intermodal perception @5 02
C03 02  X  SPA  @0 Percepción intermodal @5 02
C03 03  X  FRE  @0 Vision @5 03
C03 03  X  ENG  @0 Vision @5 03
C03 03  X  SPA  @0 Visión @5 03
C03 04  X  FRE  @0 Sensibilité tactile @5 04
C03 04  X  ENG  @0 Tactile sensitivity @5 04
C03 04  X  SPA  @0 Sensibilidad tactil @5 04
C03 05  X  FRE  @0 Maximum vraisemblance @5 05
C03 05  X  ENG  @0 Maximum likelihood @5 05
C03 05  X  SPA  @0 Maxima verosimilitud @5 05
C03 06  X  FRE  @0 Modèle statistique @5 07
C03 06  X  ENG  @0 Statistical model @5 07
C03 06  X  SPA  @0 Modelo estadístico @5 07
C03 07  X  FRE  @0 Perception @5 17
C03 07  X  ENG  @0 Perception @5 17
C03 07  X  SPA  @0 Percepción @5 17
C03 08  X  FRE  @0 Cognition @5 18
C03 08  X  ENG  @0 Cognition @5 18
C03 08  X  SPA  @0 Cognición @5 18
C03 09  X  FRE  @0 Homme @5 19
C03 09  X  ENG  @0 Human @5 19
C03 09  X  SPA  @0 Hombre @5 19
N21       @1 147
N82       @1 PSI

Inist format (server)

NO : PASCAL 02-0244122 INIST
ET : Humans integrate visual and haptic information in a statistically optimal fashion
AU : ERNST (Marc O.); BANKS (Martin S.)
AF : Vision Science Program/School of Optometry, University of California/Berkeley 94720-2020/Etats-Unis (1 aut., 2 aut.)
DT : Publication en série; Lettre à l'éditeur; Niveau analytique
SO : Nature : (London); ISSN 0028-0836; Coden NATUAS; Royaume-Uni; Da. 2002; Vol. 415; No. 6870; Pp. 429-433; Bibl. 25 ref.
LA : Anglais
EA : When a person looks at an object while exploring it with their hand, vision and touch both provide information for estimating the properties of the object. Vision frequently dominates the integrated visual-haptic percept, for example when judging size, shape or position [1-3], but in some circumstances the percept is clearly affected by haptics [4-7]. Here we propose that a general principle, which minimizes variance in the final estimate, determines the degree to which vision or haptics dominates. This principle is realized by using maximum-likelihood estimation [8-15] to combine the inputs. To investigate cue combination quantitatively, we first measured the variances associated with visual and haptic estimation of height. We then used these measurements to construct a maximum-likelihood integrator. This model behaved very similarly to humans in a visual-haptic task. Thus, the nervous system seems to combine visual and haptic information in a fashion that is similar to a maximum-likelihood integrator. Visual dominance occurs when the variance associated with visual estimation is lower than that associated with haptic estimation.
CC : 002A26E08
FD : Intégration information; Perception intermodale; Vision; Sensibilité tactile; Maximum vraisemblance; Modèle statistique; Perception; Cognition; Homme
ED : Information integration; Intermodal perception; Vision; Tactile sensitivity; Maximum likelihood; Statistical model; Perception; Cognition; Human
SD : Integración información; Percepción intermodal; Visión; Sensibilidad tactil; Maxima verosimilitud; Modelo estadístico; Percepción; Cognición; Hombre
LO : INIST-142.354000102509760230
ID : 02-0244122

Links to Exploration step

Pascal:02-0244122

The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en" level="a">Humans integrate visual and haptic information in a statistically optimal fashion</title>
<author>
<name sortKey="Ernst, Marc O" sort="Ernst, Marc O" uniqKey="Ernst M" first="Marc O." last="Ernst">Marc O. Ernst</name>
<affiliation>
<inist:fA14 i1="01">
<s1>Vision Science Program/School of Optometry, University of California</s1>
<s2>Berkeley 94720-2020</s2>
<s3>USA</s3>
<sZ>1 aut.</sZ>
<sZ>2 aut.</sZ>
</inist:fA14>
</affiliation>
</author>
<author>
<name sortKey="Banks, Martin S" sort="Banks, Martin S" uniqKey="Banks M" first="Martin S." last="Banks">Martin S. Banks</name>
<affiliation>
<inist:fA14 i1="01">
<s1>Vision Science Program/School of Optometry, University of California</s1>
<s2>Berkeley 94720-2020</s2>
<s3>USA</s3>
<sZ>1 aut.</sZ>
<sZ>2 aut.</sZ>
</inist:fA14>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">INIST</idno>
<idno type="inist">02-0244122</idno>
<date when="2002">2002</date>
<idno type="stanalyst">PASCAL 02-0244122 INIST</idno>
<idno type="RBID">Pascal:02-0244122</idno>
<idno type="wicri:Area/PascalFrancis/Corpus">001297</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a">Humans integrate visual and haptic information in a statistically optimal fashion</title>
<author>
<name sortKey="Ernst, Marc O" sort="Ernst, Marc O" uniqKey="Ernst M" first="Marc O." last="Ernst">Marc O. Ernst</name>
<affiliation>
<inist:fA14 i1="01">
<s1>Vision Science Program/School of Optometry, University of California</s1>
<s2>Berkeley 94720-2020</s2>
<s3>USA</s3>
<sZ>1 aut.</sZ>
<sZ>2 aut.</sZ>
</inist:fA14>
</affiliation>
</author>
<author>
<name sortKey="Banks, Martin S" sort="Banks, Martin S" uniqKey="Banks M" first="Martin S." last="Banks">Martin S. Banks</name>
<affiliation>
<inist:fA14 i1="01">
<s1>Vision Science Program/School of Optometry, University of California</s1>
<s2>Berkeley 94720-2020</s2>
<s3>USA</s3>
<sZ>1 aut.</sZ>
<sZ>2 aut.</sZ>
</inist:fA14>
</affiliation>
</author>
</analytic>
<series>
<title level="j" type="main">Nature : (London)</title>
<title level="j" type="abbreviated">Nature : (Lond.)</title>
<idno type="ISSN">0028-0836</idno>
<imprint>
<date when="2002">2002</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
<seriesStmt>
<title level="j" type="main">Nature : (London)</title>
<title level="j" type="abbreviated">Nature : (Lond.)</title>
<idno type="ISSN">0028-0836</idno>
</seriesStmt>
</fileDesc>
<profileDesc>
<textClass>
<keywords scheme="KwdEn" xml:lang="en">
<term>Cognition</term>
<term>Human</term>
<term>Information integration</term>
<term>Intermodal perception</term>
<term>Maximum likelihood</term>
<term>Perception</term>
<term>Statistical model</term>
<term>Tactile sensitivity</term>
<term>Vision</term>
</keywords>
<keywords scheme="Pascal" xml:lang="fr">
<term>Intégration information</term>
<term>Perception intermodale</term>
<term>Vision</term>
<term>Sensibilité tactile</term>
<term>Maximum vraisemblance</term>
<term>Modèle statistique</term>
<term>Perception</term>
<term>Cognition</term>
<term>Homme</term>
</keywords>
</textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">When a person looks at an object while exploring it with their hand, vision and touch both provide information for estimating the properties of the object. Vision frequently dominates the integrated visual-haptic percept, for example when judging size, shape or position
<sup>1-3</sup>
, but in some circumstances the percept is clearly affected by haptics
<sup>4-7</sup>
. Here we propose that a general principle, which minimizes variance in the final estimate, determines the degree to which vision or haptics dominates. This principle is realized by using maximum-likelihood estimation
<sup>8-15</sup>
to combine the inputs. To investigate cue combination quantitatively, we first measured the variances associated with visual and haptic estimation of height. We then used these measurements to construct a maximum-likelihood integrator. This model behaved very similarly to humans in a visual-haptic task. Thus, the nervous system seems to combine visual and haptic information in a fashion that is similar to a maximum-likelihood integrator. Visual dominance occurs when the variance associated with visual estimation is lower than that associated with haptic estimation.</div>
</front>
</TEI>
<inist>
<standard h6="B">
<pA>
<fA01 i1="01" i2="1">
<s0>0028-0836</s0>
</fA01>
<fA02 i1="01">
<s0>NATUAS</s0>
</fA02>
<fA03 i2="1">
<s0>Nature : (Lond.)</s0>
</fA03>
<fA05>
<s2>415</s2>
</fA05>
<fA06>
<s2>6870</s2>
</fA06>
<fA08 i1="01" i2="1" l="ENG">
<s1>Humans integrate visual and haptic information in a statistically optimal fashion</s1>
</fA08>
<fA11 i1="01" i2="1">
<s1>ERNST (Marc O.)</s1>
</fA11>
<fA11 i1="02" i2="1">
<s1>BANKS (Martin S.)</s1>
</fA11>
<fA14 i1="01">
<s1>Vision Science Program/School of Optometry, University of California</s1>
<s2>Berkeley 94720-2020</s2>
<s3>USA</s3>
<sZ>1 aut.</sZ>
<sZ>2 aut.</sZ>
</fA14>
<fA20>
<s1>429-433</s1>
</fA20>
<fA21>
<s1>2002</s1>
</fA21>
<fA23 i1="01">
<s0>ENG</s0>
</fA23>
<fA43 i1="01">
<s1>INIST</s1>
<s2>142</s2>
<s5>354000102509760230</s5>
</fA43>
<fA44>
<s0>0000</s0>
<s1>© 2002 INIST-CNRS. All rights reserved.</s1>
</fA44>
<fA45>
<s0>25 ref.</s0>
</fA45>
<fA47 i1="01" i2="1">
<s0>02-0244122</s0>
</fA47>
<fA60>
<s1>P</s1>
<s3>LT</s3>
</fA60>
<fA61>
<s0>A</s0>
</fA61>
<fA64 i1="01" i2="1">
<s0>Nature : (London)</s0>
</fA64>
<fA66 i1="01">
<s0>GBR</s0>
</fA66>
<fC01 i1="01" l="ENG">
<s0>When a person looks at an object while exploring it with their hand, vision and touch both provide information for estimating the properties of the object. Vision frequently dominates the integrated visual-haptic percept, for example when judging size, shape or position
<sup>1-3</sup>
, but in some circumstances the percept is clearly affected by haptics
<sup>4-7</sup>
. Here we propose that a general principle, which minimizes variance in the final estimate, determines the degree to which vision or haptics dominates. This principle is realized by using maximum-likelihood estimation
<sup>8-15</sup>
to combine the inputs. To investigate cue combination quantitatively, we first measured the variances associated with visual and haptic estimation of height. We then used these measurements to construct a maximum-likelihood integrator. This model behaved very similarly to humans in a visual-haptic task. Thus, the nervous system seems to combine visual and haptic information in a fashion that is similar to a maximum-likelihood integrator. Visual dominance occurs when the variance associated with visual estimation is lower than that associated with haptic estimation.</s0>
</fC01>
<fC02 i1="01" i2="X">
<s0>002A26E08</s0>
</fC02>
<fC03 i1="01" i2="X" l="FRE">
<s0>Intégration information</s0>
<s5>01</s5>
</fC03>
<fC03 i1="01" i2="X" l="ENG">
<s0>Information integration</s0>
<s5>01</s5>
</fC03>
<fC03 i1="01" i2="X" l="SPA">
<s0>Integración información</s0>
<s5>01</s5>
</fC03>
<fC03 i1="02" i2="X" l="FRE">
<s0>Perception intermodale</s0>
<s5>02</s5>
</fC03>
<fC03 i1="02" i2="X" l="ENG">
<s0>Intermodal perception</s0>
<s5>02</s5>
</fC03>
<fC03 i1="02" i2="X" l="SPA">
<s0>Percepción intermodal</s0>
<s5>02</s5>
</fC03>
<fC03 i1="03" i2="X" l="FRE">
<s0>Vision</s0>
<s5>03</s5>
</fC03>
<fC03 i1="03" i2="X" l="ENG">
<s0>Vision</s0>
<s5>03</s5>
</fC03>
<fC03 i1="03" i2="X" l="SPA">
<s0>Visión</s0>
<s5>03</s5>
</fC03>
<fC03 i1="04" i2="X" l="FRE">
<s0>Sensibilité tactile</s0>
<s5>04</s5>
</fC03>
<fC03 i1="04" i2="X" l="ENG">
<s0>Tactile sensitivity</s0>
<s5>04</s5>
</fC03>
<fC03 i1="04" i2="X" l="SPA">
<s0>Sensibilidad tactil</s0>
<s5>04</s5>
</fC03>
<fC03 i1="05" i2="X" l="FRE">
<s0>Maximum vraisemblance</s0>
<s5>05</s5>
</fC03>
<fC03 i1="05" i2="X" l="ENG">
<s0>Maximum likelihood</s0>
<s5>05</s5>
</fC03>
<fC03 i1="05" i2="X" l="SPA">
<s0>Maxima verosimilitud</s0>
<s5>05</s5>
</fC03>
<fC03 i1="06" i2="X" l="FRE">
<s0>Modèle statistique</s0>
<s5>07</s5>
</fC03>
<fC03 i1="06" i2="X" l="ENG">
<s0>Statistical model</s0>
<s5>07</s5>
</fC03>
<fC03 i1="06" i2="X" l="SPA">
<s0>Modelo estadístico</s0>
<s5>07</s5>
</fC03>
<fC03 i1="07" i2="X" l="FRE">
<s0>Perception</s0>
<s5>17</s5>
</fC03>
<fC03 i1="07" i2="X" l="ENG">
<s0>Perception</s0>
<s5>17</s5>
</fC03>
<fC03 i1="07" i2="X" l="SPA">
<s0>Percepción</s0>
<s5>17</s5>
</fC03>
<fC03 i1="08" i2="X" l="FRE">
<s0>Cognition</s0>
<s5>18</s5>
</fC03>
<fC03 i1="08" i2="X" l="ENG">
<s0>Cognition</s0>
<s5>18</s5>
</fC03>
<fC03 i1="08" i2="X" l="SPA">
<s0>Cognición</s0>
<s5>18</s5>
</fC03>
<fC03 i1="09" i2="X" l="FRE">
<s0>Homme</s0>
<s5>19</s5>
</fC03>
<fC03 i1="09" i2="X" l="ENG">
<s0>Human</s0>
<s5>19</s5>
</fC03>
<fC03 i1="09" i2="X" l="SPA">
<s0>Hombre</s0>
<s5>19</s5>
</fC03>
<fN21>
<s1>147</s1>
</fN21>
<fN82>
<s1>PSI</s1>
</fN82>
</pA>
</standard>
<server>
<NO>PASCAL 02-0244122 INIST</NO>
<ET>Humans integrate visual and haptic information in a statistically optimal fashion</ET>
<AU>ERNST (Marc O.); BANKS (Martin S.)</AU>
<AF>Vision Science Program/School of Optometry, University of California/Berkeley 94720-2020/Etats-Unis (1 aut., 2 aut.)</AF>
<DT>Publication en série; Lettre à l'éditeur; Niveau analytique</DT>
<SO>Nature : (London); ISSN 0028-0836; Coden NATUAS; Royaume-Uni; Da. 2002; Vol. 415; No. 6870; Pp. 429-433; Bibl. 25 ref.</SO>
<LA>Anglais</LA>
<EA>When a person looks at an object while exploring it with their hand, vision and touch both provide information for estimating the properties of the object. Vision frequently dominates the integrated visual-haptic percept, for example when judging size, shape or position
<sup>1-3</sup>
, but in some circumstances the percept is clearly affected by haptics
<sup>4-7</sup>
. Here we propose that a general principle, which minimizes variance in the final estimate, determines the degree to which vision or haptics dominates. This principle is realized by using maximum-likelihood estimation
<sup>8-15</sup>
to combine the inputs. To investigate cue combination quantitatively, we first measured the variances associated with visual and haptic estimation of height. We then used these measurements to construct a maximum-likelihood integrator. This model behaved very similarly to humans in a visual-haptic task. Thus, the nervous system seems to combine visual and haptic information in a fashion that is similar to a maximum-likelihood integrator. Visual dominance occurs when the variance associated with visual estimation is lower than that associated with haptic estimation.</EA>
<CC>002A26E08</CC>
<FD>Intégration information; Perception intermodale; Vision; Sensibilité tactile; Maximum vraisemblance; Modèle statistique; Perception; Cognition; Homme</FD>
<ED>Information integration; Intermodal perception; Vision; Tactile sensitivity; Maximum likelihood; Statistical model; Perception; Cognition; Human</ED>
<SD>Integración información; Percepción intermodal; Visión; Sensibilidad tactil; Maxima verosimilitud; Modelo estadístico; Percepción; Cognición; Hombre</SD>
<LO>INIST-142.354000102509760230</LO>
<ID>02-0244122</ID>
</server>
</inist>
</record>

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/HapticV1/Data/PascalFrancis/Corpus
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 001297 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/PascalFrancis/Corpus/biblio.hfd -nk 001297 | SxmlIndent | more

To link to this page within the Wicri network

{{Explor lien
   |wiki=    Ticri/CIDE
   |area=    HapticV1
   |flux=    PascalFrancis
   |étape=   Corpus
   |type=    RBID
   |clé=     Pascal:02-0244122
   |texte=   Humans integrate visual and haptic information in a statistically optimal fashion
}}

Wicri

This area was generated with Dilib version V0.6.23.
Data generation: Mon Jun 13 01:09:46 2016. Site generation: Wed Mar 6 09:54:07 2024