Exploration server on haptic devices

Please note: this site is under development!
Warning: this site is generated automatically from raw corpora.
The information has therefore not been validated.

The eyes grasp, the hands see: Metric category knowledge transfers between vision and touch

Internal identifier: 000112 (PascalFrancis/Corpus); previous: 000111; next: 000113


Authors: Christian Wallraven; Heinrich H. Bülthoff; Steffen Waterkamp; Loes Van Dam; Nina Gaissert

Source:

RBID : Francis:14-0223797

French descriptors

English descriptors

Abstract

Categorization of seen objects is often determined by the shapes of objects. However, shape is not exclusive to the visual modality: The haptic system also is expert at identifying shapes. Hence, an important question for understanding shape processing is whether humans store separate modality-dependent shape representations, or whether information is integrated into one multisensory representation. To answer this question, we created a metric space of computer-generated novel objects varying in shape. These objects were then printed using a 3-D printer, to generate tangible stimuli. In a categorization experiment, participants first explored the objects visually and haptically. We found that both modalities led to highly similar categorization behavior. Next, participants were trained either visually or haptically on shape categories within the metric space. As expected, visual training increased visual performance, and haptic training increased haptic performance. Importantly, however, we found that visual training also improved haptic performance, and vice versa. Two additional experiments showed that the location of the categorical boundary in the metric space also transferred across modalities, as did heightened discriminability of objects adjacent to the boundary. This observed transfer of metric category knowledge across modalities indicates that visual and haptic forms of shape information are integrated into a shared multisensory representation.

Record in standard format (ISO 2709)

See the documentation on the Inist Standard format.
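The rendered field lines below follow a visible pattern: a field tag (e.g. `A11`), optional occurrence/indicator codes, then `@`-prefixed subfields such as `@1 WALLRAVEN (Christian)`. As a rough illustration only (a hypothetical parser of this textual rendering, not of the actual binary ISO 2709 format), such a line can be split like this:

```python
import re

def parse_field(line):
    """Hypothetical parser for the rendered Inist field layout below.

    Splits a line like "A11 01  1    @1 WALLRAVEN (Christian)" into
    (tag, indicator list, {subfield code: value}). This is a sketch of
    the on-page text layout, not of the real ISO 2709 record encoding.
    """
    # Split before each "@"-prefixed subfield marker.
    head, *subparts = re.split(r"\s+(?=@\w)", line.strip())
    m = re.match(r"(\w\d\d)\s*(.*)", head)
    tag, indicators = m.group(1), m.group(2).split()
    subfields = {}
    for part in subparts:
        code, _, value = part.partition(" ")
        # Note: repeated codes (e.g. several @Z) would overwrite each
        # other here; a real parser would collect them in lists.
        subfields[code.lstrip("@")] = value.strip()
    return tag, indicators, subfields

tag, ind, sub = parse_field("A11 01  1    @1 WALLRAVEN (Christian)")
```

Here `parse_field` returns `("A11", ["01", "1"], {"1": "WALLRAVEN (Christian)"})` for the sample author line.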

pA  
A01 01  1    @0 1069-9384
A03   1    @0 Psychon. bull. rev.
A05       @2 21
A06       @2 4
A08 01  1  ENG  @1 The eyes grasp, the hands see: Metric category knowledge transfers between vision and touch
A11 01  1    @1 WALLRAVEN (Christian)
A11 02  1    @1 BÜLTHOFF (Heinrich H.)
A11 03  1    @1 WATERKAMP (Steffen)
A11 04  1    @1 VAN DAM (Loes)
A11 05  1    @1 GAISSERT (Nina)
A14 01      @1 Department of Brain and Cognitive Engineering, Korea University, Anam-Dong 5ga @2 Seongbuk-gu, Seoul 136-713 @3 KOR @Z 1 aut. @Z 2 aut.
A14 02      @1 Max Planck Institute for Biological Cybernetics @2 Tübingen @3 DEU @Z 2 aut. @Z 3 aut. @Z 5 aut.
A14 03      @1 University of Bielefeld @2 Bielefeld @3 DEU @Z 4 aut.
A20       @1 976-985
A21       @1 2014
A23 01      @0 ENG
A43 01      @1 INIST @2 13280C @5 354000504842790100
A44       @0 0000 @1 © 2014 INIST-CNRS. All rights reserved.
A45       @0 1/2 p.
A47 01  1    @0 14-0223797
A60       @1 P @3 CC
A61       @0 A
A64 01  1    @0 Psychonomic bulletin & review
A66 01      @0 USA
C01 01    ENG  @0 Categorization of seen objects is often determined by the shapes of objects. However, shape is not exclusive to the visual modality: The haptic system also is expert at identifying shapes. Hence, an important question for understanding shape processing is whether humans store separate modality-dependent shape representations, or whether information is integrated into one multisensory representation. To answer this question, we created a metric space of computer-generated novel objects varying in shape. These objects were then printed using a 3-D printer, to generate tangible stimuli. In a categorization experiment, participants first explored the objects visually and haptically. We found that both modalities led to highly similar categorization behavior. Next, participants were trained either visually or haptically on shape categories within the metric space. As expected, visual training increased visual performance, and haptic training increased haptic performance. Importantly, however, we found that visual training also improved haptic performance, and vice versa. Two additional experiments showed that the location of the categorical boundary in the metric space also transferred across modalities, as did heightened discriminability of objects adjacent to the boundary. This observed transfer of metric category knowledge across modalities indicates that visual and haptic forms of shape information are integrated into a shared multisensory representation.
C02 01  X    @0 770B04D @1 II
C02 02  X    @0 770B05C @1 II
C02 03  X    @0 770B05E @1 II
C02 04  X    @0 770B05H @1 II
C03 01  X  FRE  @0 Mouvement corporel @5 01
C03 01  X  ENG  @0 Body movement @5 01
C03 01  X  SPA  @0 Movimiento corporal @5 01
C03 02  X  FRE  @0 Préhension @5 02
C03 02  X  ENG  @0 Gripping @5 02
C03 02  X  SPA  @0 Prension @5 02
C03 03  X  FRE  @0 Etude expérimentale @5 03
C03 03  X  ENG  @0 Experimental study @5 03
C03 03  X  SPA  @0 Estudio experimental @5 03
C03 04  X  FRE  @0 Membre supérieur @5 05
C03 04  X  ENG  @0 Upper limb @5 05
C03 04  X  SPA  @0 Miembro superior @5 05
C03 05  X  FRE  @0 Catégorisation @5 07
C03 05  X  ENG  @0 Categorization @5 07
C03 05  X  SPA  @0 Categorización @5 07
C03 06  X  FRE  @0 Transfert des connaissances @5 08
C03 06  X  ENG  @0 Knowledge transfer @5 08
C03 06  X  SPA  @0 Transferencia conocimiento @5 08
C03 07  X  FRE  @0 Vision @5 09
C03 07  X  ENG  @0 Vision @5 09
C03 07  X  SPA  @0 Visión @5 09
C03 08  X  FRE  @0 Sensibilité tactile @5 10
C03 08  X  ENG  @0 Tactile sensitivity @5 10
C03 08  X  SPA  @0 Sensibilidad tactil @5 10
C03 09  X  FRE  @0 Perception intermodale @5 11
C03 09  X  ENG  @0 Intermodal perception @5 11
C03 09  X  SPA  @0 Percepción intermodal @5 11
C03 10  X  FRE  @0 Homme @5 18
C03 10  X  ENG  @0 Human @5 18
C03 10  X  SPA  @0 Hombre @5 18
C07 01  X  FRE  @0 Cognition @5 38
C07 01  X  ENG  @0 Cognition @5 38
C07 01  X  SPA  @0 Cognición @5 38
C07 02  X  FRE  @0 Motricité @5 39
C07 02  X  ENG  @0 Motricity @5 39
C07 02  X  SPA  @0 Motricidad @5 39
N21       @1 272

Inist format (server)

NO : FRANCIS 14-0223797 INIST
ET : The eyes grasp, the hands see: Metric category knowledge transfers between vision and touch
AU : WALLRAVEN (Christian); BÜLTHOFF (Heinrich H.); WATERKAMP (Steffen); VAN DAM (Loes); GAISSERT (Nina)
AF : Department of Brain and Cognitive Engineering, Korea University, Anam-Dong 5ga/Seongbuk-gu, Seoul 136-713/Corée, République de (1 aut., 2 aut.); Max Planck Institute for Biological Cybernetics/Tübingen/Allemagne (2 aut., 3 aut., 5 aut.); University of Bielefeld/Bielefeld/Allemagne (4 aut.)
DT : Publication en série; Courte communication, note brève; Niveau analytique
SO : Psychonomic bulletin & review; ISSN 1069-9384; Etats-Unis; Da. 2014; Vol. 21; No. 4; Pp. 976-985; Bibl. 1/2 p.
LA : Anglais
EA : Categorization of seen objects is often determined by the shapes of objects. However, shape is not exclusive to the visual modality: The haptic system also is expert at identifying shapes. Hence, an important question for understanding shape processing is whether humans store separate modality-dependent shape representations, or whether information is integrated into one multisensory representation. To answer this question, we created a metric space of computer-generated novel objects varying in shape. These objects were then printed using a 3-D printer, to generate tangible stimuli. In a categorization experiment, participants first explored the objects visually and haptically. We found that both modalities led to highly similar categorization behavior. Next, participants were trained either visually or haptically on shape categories within the metric space. As expected, visual training increased visual performance, and haptic training increased haptic performance. Importantly, however, we found that visual training also improved haptic performance, and vice versa. Two additional experiments showed that the location of the categorical boundary in the metric space also transferred across modalities, as did heightened discriminability of objects adjacent to the boundary. This observed transfer of metric category knowledge across modalities indicates that visual and haptic forms of shape information are integrated into a shared multisensory representation.
CC : 770B04D; 770B05C; 770B05E; 770B05H
FD : Mouvement corporel; Préhension; Etude expérimentale; Membre supérieur; Catégorisation; Transfert des connaissances; Vision; Sensibilité tactile; Perception intermodale; Homme
FG : Cognition; Motricité
ED : Body movement; Gripping; Experimental study; Upper limb; Categorization; Knowledge transfer; Vision; Tactile sensitivity; Intermodal perception; Human
EG : Cognition; Motricity
SD : Movimiento corporal; Prension; Estudio experimental; Miembro superior; Categorización; Transferencia conocimiento; Visión; Sensibilidad tactil; Percepción intermodal; Hombre
LO : INIST-13280C.354000504842790100
ID : 14-0223797

Links to Exploration step

Francis:14-0223797

The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en" level="a">The eyes grasp, the hands see: Metric category knowledge transfers between vision and touch</title>
<author>
<name sortKey="Wallraven, Christian" sort="Wallraven, Christian" uniqKey="Wallraven C" first="Christian" last="Wallraven">Christian Wallraven</name>
<affiliation>
<inist:fA14 i1="01">
<s1>Department of Brain and Cognitive Engineering, Korea University, Anam-Dong 5ga</s1>
<s2>Seongbuk-gu, Seoul 136-713</s2>
<s3>KOR</s3>
<sZ>1 aut.</sZ>
<sZ>2 aut.</sZ>
</inist:fA14>
</affiliation>
</author>
<author>
<name sortKey="Bulthoff, Heinrich H" sort="Bulthoff, Heinrich H" uniqKey="Bulthoff H" first="Heinrich H." last="Bülthoff">Heinrich H. Bülthoff</name>
<affiliation>
<inist:fA14 i1="01">
<s1>Department of Brain and Cognitive Engineering, Korea University, Anam-Dong 5ga</s1>
<s2>Seongbuk-gu, Seoul 136-713</s2>
<s3>KOR</s3>
<sZ>1 aut.</sZ>
<sZ>2 aut.</sZ>
</inist:fA14>
</affiliation>
<affiliation>
<inist:fA14 i1="02">
<s1>Max Planck Institute for Biological Cybernetics</s1>
<s2>Tübingen</s2>
<s3>DEU</s3>
<sZ>2 aut.</sZ>
<sZ>3 aut.</sZ>
<sZ>5 aut.</sZ>
</inist:fA14>
</affiliation>
</author>
<author>
<name sortKey="Waterkamp, Steffen" sort="Waterkamp, Steffen" uniqKey="Waterkamp S" first="Steffen" last="Waterkamp">Steffen Waterkamp</name>
<affiliation>
<inist:fA14 i1="02">
<s1>Max Planck Institute for Biological Cybernetics</s1>
<s2>Tübingen</s2>
<s3>DEU</s3>
<sZ>2 aut.</sZ>
<sZ>3 aut.</sZ>
<sZ>5 aut.</sZ>
</inist:fA14>
</affiliation>
</author>
<author>
<name sortKey="Van Dam, Loes" sort="Van Dam, Loes" uniqKey="Van Dam L" first="Loes" last="Van Dam">Loes Van Dam</name>
<affiliation>
<inist:fA14 i1="03">
<s1>University of Bielefeld</s1>
<s2>Bielefeld</s2>
<s3>DEU</s3>
<sZ>4 aut.</sZ>
</inist:fA14>
</affiliation>
</author>
<author>
<name sortKey="Gaissert, Nina" sort="Gaissert, Nina" uniqKey="Gaissert N" first="Nina" last="Gaissert">Nina Gaissert</name>
<affiliation>
<inist:fA14 i1="02">
<s1>Max Planck Institute for Biological Cybernetics</s1>
<s2>Tübingen</s2>
<s3>DEU</s3>
<sZ>2 aut.</sZ>
<sZ>3 aut.</sZ>
<sZ>5 aut.</sZ>
</inist:fA14>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">INIST</idno>
<idno type="inist">14-0223797</idno>
<date when="2014">2014</date>
<idno type="stanalyst">FRANCIS 14-0223797 INIST</idno>
<idno type="RBID">Francis:14-0223797</idno>
<idno type="wicri:Area/PascalFrancis/Corpus">000112</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a">The eyes grasp, the hands see: Metric category knowledge transfers between vision and touch</title>
<author>
<name sortKey="Wallraven, Christian" sort="Wallraven, Christian" uniqKey="Wallraven C" first="Christian" last="Wallraven">Christian Wallraven</name>
<affiliation>
<inist:fA14 i1="01">
<s1>Department of Brain and Cognitive Engineering, Korea University, Anam-Dong 5ga</s1>
<s2>Seongbuk-gu, Seoul 136-713</s2>
<s3>KOR</s3>
<sZ>1 aut.</sZ>
<sZ>2 aut.</sZ>
</inist:fA14>
</affiliation>
</author>
<author>
<name sortKey="Bulthoff, Heinrich H" sort="Bulthoff, Heinrich H" uniqKey="Bulthoff H" first="Heinrich H." last="Bülthoff">Heinrich H. Bülthoff</name>
<affiliation>
<inist:fA14 i1="01">
<s1>Department of Brain and Cognitive Engineering, Korea University, Anam-Dong 5ga</s1>
<s2>Seongbuk-gu, Seoul 136-713</s2>
<s3>KOR</s3>
<sZ>1 aut.</sZ>
<sZ>2 aut.</sZ>
</inist:fA14>
</affiliation>
<affiliation>
<inist:fA14 i1="02">
<s1>Max Planck Institute for Biological Cybernetics</s1>
<s2>Tübingen</s2>
<s3>DEU</s3>
<sZ>2 aut.</sZ>
<sZ>3 aut.</sZ>
<sZ>5 aut.</sZ>
</inist:fA14>
</affiliation>
</author>
<author>
<name sortKey="Waterkamp, Steffen" sort="Waterkamp, Steffen" uniqKey="Waterkamp S" first="Steffen" last="Waterkamp">Steffen Waterkamp</name>
<affiliation>
<inist:fA14 i1="02">
<s1>Max Planck Institute for Biological Cybernetics</s1>
<s2>Tübingen</s2>
<s3>DEU</s3>
<sZ>2 aut.</sZ>
<sZ>3 aut.</sZ>
<sZ>5 aut.</sZ>
</inist:fA14>
</affiliation>
</author>
<author>
<name sortKey="Van Dam, Loes" sort="Van Dam, Loes" uniqKey="Van Dam L" first="Loes" last="Van Dam">Loes Van Dam</name>
<affiliation>
<inist:fA14 i1="03">
<s1>University of Bielefeld</s1>
<s2>Bielefeld</s2>
<s3>DEU</s3>
<sZ>4 aut.</sZ>
</inist:fA14>
</affiliation>
</author>
<author>
<name sortKey="Gaissert, Nina" sort="Gaissert, Nina" uniqKey="Gaissert N" first="Nina" last="Gaissert">Nina Gaissert</name>
<affiliation>
<inist:fA14 i1="02">
<s1>Max Planck Institute for Biological Cybernetics</s1>
<s2>Tübingen</s2>
<s3>DEU</s3>
<sZ>2 aut.</sZ>
<sZ>3 aut.</sZ>
<sZ>5 aut.</sZ>
</inist:fA14>
</affiliation>
</author>
</analytic>
<series>
<title level="j" type="main">Psychonomic bulletin &amp; review</title>
<title level="j" type="abbreviated">Psychon. bull. rev.</title>
<idno type="ISSN">1069-9384</idno>
<imprint>
<date when="2014">2014</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
<seriesStmt>
<title level="j" type="main">Psychonomic bulletin &amp; review</title>
<title level="j" type="abbreviated">Psychon. bull. rev.</title>
<idno type="ISSN">1069-9384</idno>
</seriesStmt>
</fileDesc>
<profileDesc>
<textClass>
<keywords scheme="KwdEn" xml:lang="en">
<term>Body movement</term>
<term>Categorization</term>
<term>Experimental study</term>
<term>Gripping</term>
<term>Human</term>
<term>Intermodal perception</term>
<term>Knowledge transfer</term>
<term>Tactile sensitivity</term>
<term>Upper limb</term>
<term>Vision</term>
</keywords>
<keywords scheme="Pascal" xml:lang="fr">
<term>Mouvement corporel</term>
<term>Préhension</term>
<term>Etude expérimentale</term>
<term>Membre supérieur</term>
<term>Catégorisation</term>
<term>Transfert des connaissances</term>
<term>Vision</term>
<term>Sensibilité tactile</term>
<term>Perception intermodale</term>
<term>Homme</term>
</keywords>
</textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">Categorization of seen objects is often determined by the shapes of objects. However, shape is not exclusive to the visual modality: The haptic system also is expert at identifying shapes. Hence, an important question for understanding shape processing is whether humans store separate modality-dependent shape representations, or whether information is integrated into one multisensory representation. To answer this question, we created a metric space of computer-generated novel objects varying in shape. These objects were then printed using a 3-D printer, to generate tangible stimuli. In a categorization experiment, participants first explored the objects visually and haptically. We found that both modalities led to highly similar categorization behavior. Next, participants were trained either visually or haptically on shape categories within the metric space. As expected, visual training increased visual performance, and haptic training increased haptic performance. Importantly, however, we found that visual training also improved haptic performance, and vice versa. Two additional experiments showed that the location of the categorical boundary in the metric space also transferred across modalities, as did heightened discriminability of objects adjacent to the boundary. This observed transfer of metric category knowledge across modalities indicates that visual and haptic forms of shape information are integrated into a shared multisensory representation.</div>
</front>
</TEI>
<inist>
<standard h6="B">
<pA>
<fA01 i1="01" i2="1">
<s0>1069-9384</s0>
</fA01>
<fA03 i2="1">
<s0>Psychon. bull. rev.</s0>
</fA03>
<fA05>
<s2>21</s2>
</fA05>
<fA06>
<s2>4</s2>
</fA06>
<fA08 i1="01" i2="1" l="ENG">
<s1>The eyes grasp, the hands see: Metric category knowledge transfers between vision and touch</s1>
</fA08>
<fA11 i1="01" i2="1">
<s1>WALLRAVEN (Christian)</s1>
</fA11>
<fA11 i1="02" i2="1">
<s1>BÜLTHOFF (Heinrich H.)</s1>
</fA11>
<fA11 i1="03" i2="1">
<s1>WATERKAMP (Steffen)</s1>
</fA11>
<fA11 i1="04" i2="1">
<s1>VAN DAM (Loes)</s1>
</fA11>
<fA11 i1="05" i2="1">
<s1>GAISSERT (Nina)</s1>
</fA11>
<fA14 i1="01">
<s1>Department of Brain and Cognitive Engineering, Korea University, Anam-Dong 5ga</s1>
<s2>Seongbuk-gu, Seoul 136-713</s2>
<s3>KOR</s3>
<sZ>1 aut.</sZ>
<sZ>2 aut.</sZ>
</fA14>
<fA14 i1="02">
<s1>Max Planck Institute for Biological Cybernetics</s1>
<s2>Tübingen</s2>
<s3>DEU</s3>
<sZ>2 aut.</sZ>
<sZ>3 aut.</sZ>
<sZ>5 aut.</sZ>
</fA14>
<fA14 i1="03">
<s1>University of Bielefeld</s1>
<s2>Bielefeld</s2>
<s3>DEU</s3>
<sZ>4 aut.</sZ>
</fA14>
<fA20>
<s1>976-985</s1>
</fA20>
<fA21>
<s1>2014</s1>
</fA21>
<fA23 i1="01">
<s0>ENG</s0>
</fA23>
<fA43 i1="01">
<s1>INIST</s1>
<s2>13280C</s2>
<s5>354000504842790100</s5>
</fA43>
<fA44>
<s0>0000</s0>
<s1>© 2014 INIST-CNRS. All rights reserved.</s1>
</fA44>
<fA45>
<s0>1/2 p.</s0>
</fA45>
<fA47 i1="01" i2="1">
<s0>14-0223797</s0>
</fA47>
<fA60>
<s1>P</s1>
<s3>CC</s3>
</fA60>
<fA61>
<s0>A</s0>
</fA61>
<fA64 i1="01" i2="1">
<s0>Psychonomic bulletin &amp; review</s0>
</fA64>
<fA66 i1="01">
<s0>USA</s0>
</fA66>
<fC01 i1="01" l="ENG">
<s0>Categorization of seen objects is often determined by the shapes of objects. However, shape is not exclusive to the visual modality: The haptic system also is expert at identifying shapes. Hence, an important question for understanding shape processing is whether humans store separate modality-dependent shape representations, or whether information is integrated into one multisensory representation. To answer this question, we created a metric space of computer-generated novel objects varying in shape. These objects were then printed using a 3-D printer, to generate tangible stimuli. In a categorization experiment, participants first explored the objects visually and haptically. We found that both modalities led to highly similar categorization behavior. Next, participants were trained either visually or haptically on shape categories within the metric space. As expected, visual training increased visual performance, and haptic training increased haptic performance. Importantly, however, we found that visual training also improved haptic performance, and vice versa. Two additional experiments showed that the location of the categorical boundary in the metric space also transferred across modalities, as did heightened discriminability of objects adjacent to the boundary. This observed transfer of metric category knowledge across modalities indicates that visual and haptic forms of shape information are integrated into a shared multisensory representation.</s0>
</fC01>
<fC02 i1="01" i2="X">
<s0>770B04D</s0>
<s1>II</s1>
</fC02>
<fC02 i1="02" i2="X">
<s0>770B05C</s0>
<s1>II</s1>
</fC02>
<fC02 i1="03" i2="X">
<s0>770B05E</s0>
<s1>II</s1>
</fC02>
<fC02 i1="04" i2="X">
<s0>770B05H</s0>
<s1>II</s1>
</fC02>
<fC03 i1="01" i2="X" l="FRE">
<s0>Mouvement corporel</s0>
<s5>01</s5>
</fC03>
<fC03 i1="01" i2="X" l="ENG">
<s0>Body movement</s0>
<s5>01</s5>
</fC03>
<fC03 i1="01" i2="X" l="SPA">
<s0>Movimiento corporal</s0>
<s5>01</s5>
</fC03>
<fC03 i1="02" i2="X" l="FRE">
<s0>Préhension</s0>
<s5>02</s5>
</fC03>
<fC03 i1="02" i2="X" l="ENG">
<s0>Gripping</s0>
<s5>02</s5>
</fC03>
<fC03 i1="02" i2="X" l="SPA">
<s0>Prension</s0>
<s5>02</s5>
</fC03>
<fC03 i1="03" i2="X" l="FRE">
<s0>Etude expérimentale</s0>
<s5>03</s5>
</fC03>
<fC03 i1="03" i2="X" l="ENG">
<s0>Experimental study</s0>
<s5>03</s5>
</fC03>
<fC03 i1="03" i2="X" l="SPA">
<s0>Estudio experimental</s0>
<s5>03</s5>
</fC03>
<fC03 i1="04" i2="X" l="FRE">
<s0>Membre supérieur</s0>
<s5>05</s5>
</fC03>
<fC03 i1="04" i2="X" l="ENG">
<s0>Upper limb</s0>
<s5>05</s5>
</fC03>
<fC03 i1="04" i2="X" l="SPA">
<s0>Miembro superior</s0>
<s5>05</s5>
</fC03>
<fC03 i1="05" i2="X" l="FRE">
<s0>Catégorisation</s0>
<s5>07</s5>
</fC03>
<fC03 i1="05" i2="X" l="ENG">
<s0>Categorization</s0>
<s5>07</s5>
</fC03>
<fC03 i1="05" i2="X" l="SPA">
<s0>Categorización</s0>
<s5>07</s5>
</fC03>
<fC03 i1="06" i2="X" l="FRE">
<s0>Transfert des connaissances</s0>
<s5>08</s5>
</fC03>
<fC03 i1="06" i2="X" l="ENG">
<s0>Knowledge transfer</s0>
<s5>08</s5>
</fC03>
<fC03 i1="06" i2="X" l="SPA">
<s0>Transferencia conocimiento</s0>
<s5>08</s5>
</fC03>
<fC03 i1="07" i2="X" l="FRE">
<s0>Vision</s0>
<s5>09</s5>
</fC03>
<fC03 i1="07" i2="X" l="ENG">
<s0>Vision</s0>
<s5>09</s5>
</fC03>
<fC03 i1="07" i2="X" l="SPA">
<s0>Visión</s0>
<s5>09</s5>
</fC03>
<fC03 i1="08" i2="X" l="FRE">
<s0>Sensibilité tactile</s0>
<s5>10</s5>
</fC03>
<fC03 i1="08" i2="X" l="ENG">
<s0>Tactile sensitivity</s0>
<s5>10</s5>
</fC03>
<fC03 i1="08" i2="X" l="SPA">
<s0>Sensibilidad tactil</s0>
<s5>10</s5>
</fC03>
<fC03 i1="09" i2="X" l="FRE">
<s0>Perception intermodale</s0>
<s5>11</s5>
</fC03>
<fC03 i1="09" i2="X" l="ENG">
<s0>Intermodal perception</s0>
<s5>11</s5>
</fC03>
<fC03 i1="09" i2="X" l="SPA">
<s0>Percepción intermodal</s0>
<s5>11</s5>
</fC03>
<fC03 i1="10" i2="X" l="FRE">
<s0>Homme</s0>
<s5>18</s5>
</fC03>
<fC03 i1="10" i2="X" l="ENG">
<s0>Human</s0>
<s5>18</s5>
</fC03>
<fC03 i1="10" i2="X" l="SPA">
<s0>Hombre</s0>
<s5>18</s5>
</fC03>
<fC07 i1="01" i2="X" l="FRE">
<s0>Cognition</s0>
<s5>38</s5>
</fC07>
<fC07 i1="01" i2="X" l="ENG">
<s0>Cognition</s0>
<s5>38</s5>
</fC07>
<fC07 i1="01" i2="X" l="SPA">
<s0>Cognición</s0>
<s5>38</s5>
</fC07>
<fC07 i1="02" i2="X" l="FRE">
<s0>Motricité</s0>
<s5>39</s5>
</fC07>
<fC07 i1="02" i2="X" l="ENG">
<s0>Motricity</s0>
<s5>39</s5>
</fC07>
<fC07 i1="02" i2="X" l="SPA">
<s0>Motricidad</s0>
<s5>39</s5>
</fC07>
<fN21>
<s1>272</s1>
</fN21>
</pA>
</standard>
<server>
<NO>FRANCIS 14-0223797 INIST</NO>
<ET>The eyes grasp, the hands see: Metric category knowledge transfers between vision and touch</ET>
<AU>WALLRAVEN (Christian); BÜLTHOFF (Heinrich H.); WATERKAMP (Steffen); VAN DAM (Loes); GAISSERT (Nina)</AU>
<AF>Department of Brain and Cognitive Engineering, Korea University, Anam-Dong 5ga/Seongbuk-gu, Seoul 136-713/Corée, République de (1 aut., 2 aut.); Max Planck Institute for Biological Cybernetics/Tübingen/Allemagne (2 aut., 3 aut., 5 aut.); University of Bielefeld/Bielefeld/Allemagne (4 aut.)</AF>
<DT>Publication en série; Courte communication, note brève; Niveau analytique</DT>
<SO>Psychonomic bulletin &amp; review; ISSN 1069-9384; Etats-Unis; Da. 2014; Vol. 21; No. 4; Pp. 976-985; Bibl. 1/2 p.</SO>
<LA>Anglais</LA>
<EA>Categorization of seen objects is often determined by the shapes of objects. However, shape is not exclusive to the visual modality: The haptic system also is expert at identifying shapes. Hence, an important question for understanding shape processing is whether humans store separate modality-dependent shape representations, or whether information is integrated into one multisensory representation. To answer this question, we created a metric space of computer-generated novel objects varying in shape. These objects were then printed using a 3-D printer, to generate tangible stimuli. In a categorization experiment, participants first explored the objects visually and haptically. We found that both modalities led to highly similar categorization behavior. Next, participants were trained either visually or haptically on shape categories within the metric space. As expected, visual training increased visual performance, and haptic training increased haptic performance. Importantly, however, we found that visual training also improved haptic performance, and vice versa. Two additional experiments showed that the location of the categorical boundary in the metric space also transferred across modalities, as did heightened discriminability of objects adjacent to the boundary. This observed transfer of metric category knowledge across modalities indicates that visual and haptic forms of shape information are integrated into a shared multisensory representation.</EA>
<CC>770B04D; 770B05C; 770B05E; 770B05H</CC>
<FD>Mouvement corporel; Préhension; Etude expérimentale; Membre supérieur; Catégorisation; Transfert des connaissances; Vision; Sensibilité tactile; Perception intermodale; Homme</FD>
<FG>Cognition; Motricité</FG>
<ED>Body movement; Gripping; Experimental study; Upper limb; Categorization; Knowledge transfer; Vision; Tactile sensitivity; Intermodal perception; Human</ED>
<EG>Cognition; Motricity</EG>
<SD>Movimiento corporal; Prension; Estudio experimental; Miembro superior; Categorización; Transferencia conocimiento; Visión; Sensibilidad tactil; Percepción intermodal; Hombre</SD>
<LO>INIST-13280C.354000504842790100</LO>
<ID>14-0223797</ID>
</server>
</inist>
</record>

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/HapticV1/Data/PascalFrancis/Corpus
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 000112 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/PascalFrancis/Corpus/biblio.hfd -nk 000112 | SxmlIndent | more
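Outside a Dilib installation, the TEI header embedded in the XML record above can also be read with standard tools. A minimal Python sketch (standard library only, run on a simplified excerpt of the record's `titleStmt`, with namespaces omitted for brevity):

```python
import xml.etree.ElementTree as ET

# Simplified excerpt of the record's TEI titleStmt shown above.
record = """<titleStmt>
  <title xml:lang="en" level="a">The eyes grasp, the hands see: \
Metric category knowledge transfers between vision and touch</title>
  <author>
    <name sortKey="Wallraven, Christian" first="Christian" last="Wallraven">Christian Wallraven</name>
  </author>
  <author>
    <name sortKey="Bulthoff, Heinrich H" first="Heinrich H." last="B\u00fclthoff">Heinrich H. B\u00fclthoff</name>
  </author>
</titleStmt>"""

root = ET.fromstring(record)
# Extract the article title and the authors' last names.
title = root.find("title").text
authors = [name.get("last") for name in root.iter("name")]
print(title)
print(authors)
```

On the full record, the same approach applies after accounting for the `inist:` namespace prefix, which must be declared (or stripped) before the XML will parse.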

To link to this page within the Wicri network

{{Explor lien
   |wiki=    Ticri/CIDE
   |area=    HapticV1
   |flux=    PascalFrancis
   |étape=   Corpus
   |type=    RBID
   |clé=     Francis:14-0223797
   |texte=   The eyes grasp, the hands see: Metric category knowledge transfers between vision and touch
}}

Wicri

This area was generated with Dilib version V0.6.23.
Data generation: Mon Jun 13 01:09:46 2016. Site generation: Wed Mar 6 09:54:07 2024