Exploration server on haptic devices

Internal identifier: 001207 (PascalFrancis/Corpus); previous: 001206; next: 001208

Combining sensory information: Mandatory fusion within, but not between, senses

Authors: J. M. Hillis; M. O. Ernst; M. S. Banks; M. S. Landy

Source:

RBID: Pascal:03-0078816

French descriptors

English descriptors

Abstract

Humans use multiple sources of sensory information to estimate environmental properties. For example, the eyes and hands both provide relevant information about an object's shape. The eyes estimate shape using binocular disparity, perspective projection, etc. The hands supply haptic shape information by means of tactile and proprioceptive cues. Combining information across cues can improve estimation of object properties but may come at a cost: loss of single-cue information. We report that single-cue information is indeed lost when cues from within the same sensory modality (disparity and texture gradients in vision) are combined, but not when different modalities (vision and haptics) are combined.
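
For context only (the following model is not part of the INIST record, but it is the standard framework in the cue-combination literature to which this paper belongs): when two cues give independent estimates of the same property with variances sigma_1^2 and sigma_2^2, the minimum-variance linear combination and its reliability are

\hat{S} = w_1 \hat{S}_1 + w_2 \hat{S}_2, \qquad
w_i = \frac{1/\sigma_i^2}{1/\sigma_1^2 + 1/\sigma_2^2}

\sigma_{12}^2 = \frac{\sigma_1^2 \, \sigma_2^2}{\sigma_1^2 + \sigma_2^2} \le \min(\sigma_1^2, \sigma_2^2)

The benefit of combining is this improved reliability; the cost named in the abstract ("loss of single-cue information") is that, once fusion is mandatory, the individual estimates \hat{S}_1 and \hat{S}_2 are no longer separately available to the observer.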

Record in standard format (ISO 2709)

See the documentation on the Inist Standard format.

pA  
A01 01  1    @0 0036-8075
A02 01      @0 SCIEAS
A03   1    @0 Science : (Wash. D.C.)
A05       @2 298
A06       @2 5598
A08 01  1  ENG  @1 Combining sensory information: Mandatory fusion within, but not between, senses
A11 01  1    @1 HILLIS (J. M.)
A11 02  1    @1 ERNST (M. O.)
A11 03  1    @1 BANKS (M. S.)
A11 04  1    @1 LANDY (M. S.)
A14 01      @1 Vision Science Program, School of Optometry, University of California @2 Berkeley, CA 94720-2020 @3 USA @Z 1 aut. @Z 3 aut.
A14 02      @1 Max-Planck Institute for Biological Cybernetics, Spemannstrasse 38 @2 72076, Tübingen @3 DEU @Z 2 aut.
A14 03      @1 Department of Psychology, University of California @2 Berkeley, CA 94720-1650 @3 USA @Z 3 aut.
A14 04      @1 Department of Psychology and Center for Neural Science, New York University, 6 Washington Place @2 New York, NY 10003 @3 USA @Z 4 aut.
A20       @1 1627-1630
A21       @1 2002
A23 01      @0 ENG
A43 01      @1 INIST @2 6040 @5 354000105572480280
A44       @0 0000 @1 © 2003 INIST-CNRS. All rights reserved.
A47 01  1    @0 03-0078816
A60       @1 P
A61       @0 A
A64 01  1    @0 Science : (Washington, D.C.)
A66 01      @0 USA
A99       @0 1/2 p. ref. et notes
C01 01    ENG  @0 Humans use multiple sources of sensory information to estimate environmental properties. For example, the eyes and hands both provide relevant information about an object's shape. The eyes estimate shape using binocular disparity, perspective projection, etc. The hands supply haptic shape information by means of tactile and proprioceptive cues. Combining information across cues can improve estimation of object properties but may come at a cost: loss of single-cue information. We report that single-cue information is indeed lost when cues from within the same sensory modality (disparity and texture gradients in vision) are combined, but not when different modalities (vision and haptics) are combined.
C02 01  X    @0 002A26E08
C03 01  X  FRE  @0 Perception intermodale @5 01
C03 01  X  ENG  @0 Intermodal perception @5 01
C03 01  X  SPA  @0 Percepción intermodal @5 01
C03 02  X  FRE  @0 Sensibilité tactile @5 02
C03 02  X  ENG  @0 Tactile sensitivity @5 02
C03 02  X  SPA  @0 Sensibilidad tactil @5 02
C03 03  X  FRE  @0 Vision @5 03
C03 03  X  ENG  @0 Vision @5 03
C03 03  X  SPA  @0 Visión @5 03
C03 04  X  FRE  @0 Proprioception @5 04
C03 04  X  ENG  @0 Proprioception @5 04
C03 04  X  SPA  @0 Propiocepción @5 04
C03 05  X  FRE  @0 Forme stimulus @5 05
C03 05  X  ENG  @0 Stimulus shape @5 05
C03 05  X  SPA  @0 Forma estímulo @5 05
C03 06  X  FRE  @0 Disparité @5 06
C03 06  X  ENG  @0 Disparity @5 06
C03 06  X  SPA  @0 Disparidad @5 06
C03 07  X  FRE  @0 Texture @5 07
C03 07  X  ENG  @0 Texture @5 07
C03 07  X  SPA  @0 Textura @5 07
C03 08  X  FRE  @0 Modalité stimulus @5 08
C03 08  X  ENG  @0 Stimulus modality @5 08
C03 08  X  SPA  @0 Modalidad estímulo @5 08
C03 09  X  FRE  @0 Homme @5 54
C03 09  X  ENG  @0 Human @5 54
C03 09  X  SPA  @0 Hombre @5 54
C07 01  X  FRE  @0 Cognition @5 20
C07 01  X  ENG  @0 Cognition @5 20
C07 01  X  SPA  @0 Cognición @5 20
N21       @1 041
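
As an illustration only (not part of the record), the flat Inist Standard lines above can be filtered with ordinary Unix tools. Assuming the record has been saved to a hypothetical file record.txt, the English (ENG) descriptors can be listed by matching the C03 fields and keeping only the @0 subfield:

grep '^C03 .*ENG' record.txt | sed 's/.*@0 //; s/ @5.*//'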

Inist format (server)

NO : PASCAL 03-0078816 INIST
ET : Combining sensory information: Mandatory fusion within, but not between, senses
AU : HILLIS (J. M.); ERNST (M. O.); BANKS (M. S.); LANDY (M. S.)
AF : Vision Science Program, School of Optometry, University of California/Berkeley, CA 94720-2020/Etats-Unis (1 aut., 3 aut.); Max-Planck Institute for Biological Cybernetics, Spemannstrasse 38/72076, Tübingen/Allemagne (2 aut.); Department of Psychology, University of California/Berkeley, CA 94720-1650/Etats-Unis (3 aut.); Department of Psychology and Center for Neural Science, New York University, 6 Washington Place/New York, NY 10003/Etats-Unis (4 aut.)
DT : Publication en série; Niveau analytique
SO : Science : (Washington, D.C.); ISSN 0036-8075; Coden SCIEAS; Etats-Unis; Da. 2002; Vol. 298; No. 5598; Pp. 1627-1630
LA : Anglais
EA : Humans use multiple sources of sensory information to estimate environmental properties. For example, the eyes and hands both provide relevant information about an object's shape. The eyes estimate shape using binocular disparity, perspective projection, etc. The hands supply haptic shape information by means of tactile and proprioceptive cues. Combining information across cues can improve estimation of object properties but may come at a cost: loss of single-cue information. We report that single-cue information is indeed lost when cues from within the same sensory modality (disparity and texture gradients in vision) are combined, but not when different modalities (vision and haptics) are combined.
CC : 002A26E08
FD : Perception intermodale; Sensibilité tactile; Vision; Proprioception; Forme stimulus; Disparité; Texture; Modalité stimulus; Homme
FG : Cognition
ED : Intermodal perception; Tactile sensitivity; Vision; Proprioception; Stimulus shape; Disparity; Texture; Stimulus modality; Human
EG : Cognition
SD : Percepción intermodal; Sensibilidad tactil; Visión; Propiocepción; Forma estímulo; Disparidad; Textura; Modalidad estímulo; Hombre
LO : INIST-6040.354000105572480280
ID : 03-0078816

Links to Exploration step

Pascal:03-0078816

The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en" level="a">Combining sensory information: Mandatory fusion within, but not between, senses</title>
<author>
<name sortKey="Hillis, J M" sort="Hillis, J M" uniqKey="Hillis J" first="J. M." last="Hillis">J. M. Hillis</name>
<affiliation>
<inist:fA14 i1="01">
<s1>Vision Science Program, School of Optometry, University of California</s1>
<s2>Berkeley, CA 94720-2020</s2>
<s3>USA</s3>
<sZ>1 aut.</sZ>
<sZ>3 aut.</sZ>
</inist:fA14>
</affiliation>
</author>
<author>
<name sortKey="Ernst, M O" sort="Ernst, M O" uniqKey="Ernst M" first="M. O." last="Ernst">M. O. Ernst</name>
<affiliation>
<inist:fA14 i1="02">
<s1>Max-Planck Institute for Biological Cybernetics, Spemannstrasse 38</s1>
<s2>72076, Tübingen</s2>
<s3>DEU</s3>
<sZ>2 aut.</sZ>
</inist:fA14>
</affiliation>
</author>
<author>
<name sortKey="Banks, M S" sort="Banks, M S" uniqKey="Banks M" first="M. S." last="Banks">M. S. Banks</name>
<affiliation>
<inist:fA14 i1="01">
<s1>Vision Science Program, School of Optometry, University of California</s1>
<s2>Berkeley, CA 94720-2020</s2>
<s3>USA</s3>
<sZ>1 aut.</sZ>
<sZ>3 aut.</sZ>
</inist:fA14>
</affiliation>
<affiliation>
<inist:fA14 i1="03">
<s1>Department of Psychology, University of California</s1>
<s2>Berkeley, CA 94720-1650</s2>
<s3>USA</s3>
<sZ>3 aut.</sZ>
</inist:fA14>
</affiliation>
</author>
<author>
<name sortKey="Landy, M S" sort="Landy, M S" uniqKey="Landy M" first="M. S." last="Landy">M. S. Landy</name>
<affiliation>
<inist:fA14 i1="04">
<s1>Department of Psychology and Center for Neural Science, New York University, 6 Washington Place</s1>
<s2>New York, NY 10003</s2>
<s3>USA</s3>
<sZ>4 aut.</sZ>
</inist:fA14>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">INIST</idno>
<idno type="inist">03-0078816</idno>
<date when="2002">2002</date>
<idno type="stanalyst">PASCAL 03-0078816 INIST</idno>
<idno type="RBID">Pascal:03-0078816</idno>
<idno type="wicri:Area/PascalFrancis/Corpus">001207</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a">Combining sensory information: Mandatory fusion within, but not between, senses</title>
<author>
<name sortKey="Hillis, J M" sort="Hillis, J M" uniqKey="Hillis J" first="J. M." last="Hillis">J. M. Hillis</name>
<affiliation>
<inist:fA14 i1="01">
<s1>Vision Science Program, School of Optometry, University of California</s1>
<s2>Berkeley, CA 94720-2020</s2>
<s3>USA</s3>
<sZ>1 aut.</sZ>
<sZ>3 aut.</sZ>
</inist:fA14>
</affiliation>
</author>
<author>
<name sortKey="Ernst, M O" sort="Ernst, M O" uniqKey="Ernst M" first="M. O." last="Ernst">M. O. Ernst</name>
<affiliation>
<inist:fA14 i1="02">
<s1>Max-Planck Institute for Biological Cybernetics, Spemannstrasse 38</s1>
<s2>72076, Tübingen</s2>
<s3>DEU</s3>
<sZ>2 aut.</sZ>
</inist:fA14>
</affiliation>
</author>
<author>
<name sortKey="Banks, M S" sort="Banks, M S" uniqKey="Banks M" first="M. S." last="Banks">M. S. Banks</name>
<affiliation>
<inist:fA14 i1="01">
<s1>Vision Science Program, School of Optometry, University of California</s1>
<s2>Berkeley, CA 94720-2020</s2>
<s3>USA</s3>
<sZ>1 aut.</sZ>
<sZ>3 aut.</sZ>
</inist:fA14>
</affiliation>
<affiliation>
<inist:fA14 i1="03">
<s1>Department of Psychology, University of California</s1>
<s2>Berkeley, CA 94720-1650</s2>
<s3>USA</s3>
<sZ>3 aut.</sZ>
</inist:fA14>
</affiliation>
</author>
<author>
<name sortKey="Landy, M S" sort="Landy, M S" uniqKey="Landy M" first="M. S." last="Landy">M. S. Landy</name>
<affiliation>
<inist:fA14 i1="04">
<s1>Department of Psychology and Center for Neural Science, New York University, 6 Washington Place</s1>
<s2>New York, NY 10003</s2>
<s3>USA</s3>
<sZ>4 aut.</sZ>
</inist:fA14>
</affiliation>
</author>
</analytic>
<series>
<title level="j" type="main">Science : (Washington, D.C.)</title>
<title level="j" type="abbreviated">Science : (Wash. D.C.)</title>
<idno type="ISSN">0036-8075</idno>
<imprint>
<date when="2002">2002</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
<seriesStmt>
<title level="j" type="main">Science : (Washington, D.C.)</title>
<title level="j" type="abbreviated">Science : (Wash. D.C.)</title>
<idno type="ISSN">0036-8075</idno>
</seriesStmt>
</fileDesc>
<profileDesc>
<textClass>
<keywords scheme="KwdEn" xml:lang="en">
<term>Disparity</term>
<term>Human</term>
<term>Intermodal perception</term>
<term>Proprioception</term>
<term>Stimulus modality</term>
<term>Stimulus shape</term>
<term>Tactile sensitivity</term>
<term>Texture</term>
<term>Vision</term>
</keywords>
<keywords scheme="Pascal" xml:lang="fr">
<term>Perception intermodale</term>
<term>Sensibilité tactile</term>
<term>Vision</term>
<term>Proprioception</term>
<term>Forme stimulus</term>
<term>Disparité</term>
<term>Texture</term>
<term>Modalité stimulus</term>
<term>Homme</term>
</keywords>
</textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">Humans use multiple sources of sensory information to estimate environmental properties. For example, the eyes and hands both provide relevant information about an object's shape. The eyes estimate shape using binocular disparity, perspective projection, etc. The hands supply haptic shape information by means of tactile and proprioceptive cues. Combining information across cues can improve estimation of object properties but may come at a cost: loss of single-cue information. We report that single-cue information is indeed lost when cues from within the same sensory modality (disparity and texture gradients in vision) are combined, but not when different modalities (vision and haptics) are combined.</div>
</front>
</TEI>
<inist>
<standard h6="B">
<pA>
<fA01 i1="01" i2="1">
<s0>0036-8075</s0>
</fA01>
<fA02 i1="01">
<s0>SCIEAS</s0>
</fA02>
<fA03 i2="1">
<s0>Science : (Wash. D.C.)</s0>
</fA03>
<fA05>
<s2>298</s2>
</fA05>
<fA06>
<s2>5598</s2>
</fA06>
<fA08 i1="01" i2="1" l="ENG">
<s1>Combining sensory information: Mandatory fusion within, but not between, senses</s1>
</fA08>
<fA11 i1="01" i2="1">
<s1>HILLIS (J. M.)</s1>
</fA11>
<fA11 i1="02" i2="1">
<s1>ERNST (M. O.)</s1>
</fA11>
<fA11 i1="03" i2="1">
<s1>BANKS (M. S.)</s1>
</fA11>
<fA11 i1="04" i2="1">
<s1>LANDY (M. S.)</s1>
</fA11>
<fA14 i1="01">
<s1>Vision Science Program, School of Optometry, University of California</s1>
<s2>Berkeley, CA 94720-2020</s2>
<s3>USA</s3>
<sZ>1 aut.</sZ>
<sZ>3 aut.</sZ>
</fA14>
<fA14 i1="02">
<s1>Max-Planck Institute for Biological Cybernetics, Spemannstrasse 38</s1>
<s2>72076, Tübingen</s2>
<s3>DEU</s3>
<sZ>2 aut.</sZ>
</fA14>
<fA14 i1="03">
<s1>Department of Psychology, University of California</s1>
<s2>Berkeley, CA 94720-1650</s2>
<s3>USA</s3>
<sZ>3 aut.</sZ>
</fA14>
<fA14 i1="04">
<s1>Department of Psychology and Center for Neural Science, New York University, 6 Washington Place</s1>
<s2>New York, NY 10003</s2>
<s3>USA</s3>
<sZ>4 aut.</sZ>
</fA14>
<fA20>
<s1>1627-1630</s1>
</fA20>
<fA21>
<s1>2002</s1>
</fA21>
<fA23 i1="01">
<s0>ENG</s0>
</fA23>
<fA43 i1="01">
<s1>INIST</s1>
<s2>6040</s2>
<s5>354000105572480280</s5>
</fA43>
<fA44>
<s0>0000</s0>
<s1>© 2003 INIST-CNRS. All rights reserved.</s1>
</fA44>
<fA47 i1="01" i2="1">
<s0>03-0078816</s0>
</fA47>
<fA60>
<s1>P</s1>
</fA60>
<fA61>
<s0>A</s0>
</fA61>
<fA64 i1="01" i2="1">
<s0>Science : (Washington, D.C.)</s0>
</fA64>
<fA66 i1="01">
<s0>USA</s0>
</fA66>
<fA99>
<s0>1/2 p. ref. et notes</s0>
</fA99>
<fC01 i1="01" l="ENG">
<s0>Humans use multiple sources of sensory information to estimate environmental properties. For example, the eyes and hands both provide relevant information about an object's shape. The eyes estimate shape using binocular disparity, perspective projection, etc. The hands supply haptic shape information by means of tactile and proprioceptive cues. Combining information across cues can improve estimation of object properties but may come at a cost: loss of single-cue information. We report that single-cue information is indeed lost when cues from within the same sensory modality (disparity and texture gradients in vision) are combined, but not when different modalities (vision and haptics) are combined.</s0>
</fC01>
<fC02 i1="01" i2="X">
<s0>002A26E08</s0>
</fC02>
<fC03 i1="01" i2="X" l="FRE">
<s0>Perception intermodale</s0>
<s5>01</s5>
</fC03>
<fC03 i1="01" i2="X" l="ENG">
<s0>Intermodal perception</s0>
<s5>01</s5>
</fC03>
<fC03 i1="01" i2="X" l="SPA">
<s0>Percepción intermodal</s0>
<s5>01</s5>
</fC03>
<fC03 i1="02" i2="X" l="FRE">
<s0>Sensibilité tactile</s0>
<s5>02</s5>
</fC03>
<fC03 i1="02" i2="X" l="ENG">
<s0>Tactile sensitivity</s0>
<s5>02</s5>
</fC03>
<fC03 i1="02" i2="X" l="SPA">
<s0>Sensibilidad tactil</s0>
<s5>02</s5>
</fC03>
<fC03 i1="03" i2="X" l="FRE">
<s0>Vision</s0>
<s5>03</s5>
</fC03>
<fC03 i1="03" i2="X" l="ENG">
<s0>Vision</s0>
<s5>03</s5>
</fC03>
<fC03 i1="03" i2="X" l="SPA">
<s0>Visión</s0>
<s5>03</s5>
</fC03>
<fC03 i1="04" i2="X" l="FRE">
<s0>Proprioception</s0>
<s5>04</s5>
</fC03>
<fC03 i1="04" i2="X" l="ENG">
<s0>Proprioception</s0>
<s5>04</s5>
</fC03>
<fC03 i1="04" i2="X" l="SPA">
<s0>Propiocepción</s0>
<s5>04</s5>
</fC03>
<fC03 i1="05" i2="X" l="FRE">
<s0>Forme stimulus</s0>
<s5>05</s5>
</fC03>
<fC03 i1="05" i2="X" l="ENG">
<s0>Stimulus shape</s0>
<s5>05</s5>
</fC03>
<fC03 i1="05" i2="X" l="SPA">
<s0>Forma estímulo</s0>
<s5>05</s5>
</fC03>
<fC03 i1="06" i2="X" l="FRE">
<s0>Disparité</s0>
<s5>06</s5>
</fC03>
<fC03 i1="06" i2="X" l="ENG">
<s0>Disparity</s0>
<s5>06</s5>
</fC03>
<fC03 i1="06" i2="X" l="SPA">
<s0>Disparidad</s0>
<s5>06</s5>
</fC03>
<fC03 i1="07" i2="X" l="FRE">
<s0>Texture</s0>
<s5>07</s5>
</fC03>
<fC03 i1="07" i2="X" l="ENG">
<s0>Texture</s0>
<s5>07</s5>
</fC03>
<fC03 i1="07" i2="X" l="SPA">
<s0>Textura</s0>
<s5>07</s5>
</fC03>
<fC03 i1="08" i2="X" l="FRE">
<s0>Modalité stimulus</s0>
<s5>08</s5>
</fC03>
<fC03 i1="08" i2="X" l="ENG">
<s0>Stimulus modality</s0>
<s5>08</s5>
</fC03>
<fC03 i1="08" i2="X" l="SPA">
<s0>Modalidad estímulo</s0>
<s5>08</s5>
</fC03>
<fC03 i1="09" i2="X" l="FRE">
<s0>Homme</s0>
<s5>54</s5>
</fC03>
<fC03 i1="09" i2="X" l="ENG">
<s0>Human</s0>
<s5>54</s5>
</fC03>
<fC03 i1="09" i2="X" l="SPA">
<s0>Hombre</s0>
<s5>54</s5>
</fC03>
<fC07 i1="01" i2="X" l="FRE">
<s0>Cognition</s0>
<s5>20</s5>
</fC07>
<fC07 i1="01" i2="X" l="ENG">
<s0>Cognition</s0>
<s5>20</s5>
</fC07>
<fC07 i1="01" i2="X" l="SPA">
<s0>Cognición</s0>
<s5>20</s5>
</fC07>
<fN21>
<s1>041</s1>
</fN21>
</pA>
</standard>
<server>
<NO>PASCAL 03-0078816 INIST</NO>
<ET>Combining sensory information: Mandatory fusion within, but not between, senses</ET>
<AU>HILLIS (J. M.); ERNST (M. O.); BANKS (M. S.); LANDY (M. S.)</AU>
<AF>Vision Science Program, School of Optometry, University of California/Berkeley, CA 94720-2020/Etats-Unis (1 aut., 3 aut.); Max-Planck Institute for Biological Cybernetics, Spemannstrasse 38/72076, Tübingen/Allemagne (2 aut.); Department of Psychology, University of California/Berkeley, CA 94720-1650/Etats-Unis (3 aut.); Department of Psychology and Center for Neural Science, New York University, 6 Washington Place/New York, NY 10003/Etats-Unis (4 aut.)</AF>
<DT>Publication en série; Niveau analytique</DT>
<SO>Science : (Washington, D.C.); ISSN 0036-8075; Coden SCIEAS; Etats-Unis; Da. 2002; Vol. 298; No. 5598; Pp. 1627-1630</SO>
<LA>Anglais</LA>
<EA>Humans use multiple sources of sensory information to estimate environmental properties. For example, the eyes and hands both provide relevant information about an object's shape. The eyes estimate shape using binocular disparity, perspective projection, etc. The hands supply haptic shape information by means of tactile and proprioceptive cues. Combining information across cues can improve estimation of object properties but may come at a cost: loss of single-cue information. We report that single-cue information is indeed lost when cues from within the same sensory modality (disparity and texture gradients in vision) are combined, but not when different modalities (vision and haptics) are combined.</EA>
<CC>002A26E08</CC>
<FD>Perception intermodale; Sensibilité tactile; Vision; Proprioception; Forme stimulus; Disparité; Texture; Modalité stimulus; Homme</FD>
<FG>Cognition</FG>
<ED>Intermodal perception; Tactile sensitivity; Vision; Proprioception; Stimulus shape; Disparity; Texture; Stimulus modality; Human</ED>
<EG>Cognition</EG>
<SD>Percepción intermodal; Sensibilidad tactil; Visión; Propiocepción; Forma estímulo; Disparidad; Textura; Modalidad estímulo; Hombre</SD>
<LO>INIST-6040.354000105572480280</LO>
<ID>03-0078816</ID>
</server>
</inist>
</record>

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/HapticV1/Data/PascalFrancis/Corpus
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 001207 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/PascalFrancis/Corpus/biblio.hfd -nk 001207 | SxmlIndent | more
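
As a follow-up sketch (the HfdSelect | SxmlIndent pipeline is exactly the one shown above; grep and sed are standard Unix tools added here for illustration), the indented XML can be filtered further, for example to keep only the keyword terms:

HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 001207 | SxmlIndent \
  | grep '<term>' | sed 's/<[^>]*>//g; s/^ *//'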

To link to this page in the Wicri network

{{Explor lien
   |wiki=    Ticri/CIDE
   |area=    HapticV1
   |flux=    PascalFrancis
   |étape=   Corpus
   |type=    RBID
   |clé=     Pascal:03-0078816
   |texte=   Combining sensory information: Mandatory fusion within, but not between, senses
}}

Wicri

This area was generated with Dilib version V0.6.23.
Data generation: Mon Jun 13 01:09:46 2016. Site generation: Wed Mar 6 09:54:07 2024