The eyes grasp, the hands see: Metric category knowledge transfers between vision and touch
Internal identifier: 000112 (PascalFrancis/Corpus); previous: 000111; next: 000113
Authors: Christian Wallraven; Heinrich H. Bülthoff; Steffen Waterkamp; Loes Van Dam; Nina Gaissert
Source:
- Psychonomic bulletin & review [1069-9384]; 2014.
French descriptors
- Pascal (Inist)
English descriptors
- KwdEn
Abstract
Categorization of seen objects is often determined by the shapes of objects. However, shape is not exclusive to the visual modality: The haptic system also is expert at identifying shapes. Hence, an important question for understanding shape processing is whether humans store separate modality-dependent shape representations, or whether information is integrated into one multisensory representation. To answer this question, we created a metric space of computer-generated novel objects varying in shape. These objects were then printed using a 3-D printer, to generate tangible stimuli. In a categorization experiment, participants first explored the objects visually and haptically. We found that both modalities led to highly similar categorization behavior. Next, participants were trained either visually or haptically on shape categories within the metric space. As expected, visual training increased visual performance, and haptic training increased haptic performance. Importantly, however, we found that visual training also improved haptic performance, and vice versa. Two additional experiments showed that the location of the categorical boundary in the metric space also transferred across modalities, as did heightened discriminability of objects adjacent to the boundary. This observed transfer of metric category knowledge across modalities indicates that visual and haptic forms of shape information are integrated into a shared multisensory representation.
Record in standard format (ISO 2709)
See the documentation on the Inist Standard format.
Inist format (server)
NO: FRANCIS 14-0223797 INIST
ET: The eyes grasp, the hands see: Metric category knowledge transfers between vision and touch
AU: WALLRAVEN (Christian); BÜLTHOFF (Heinrich H.); WATERKAMP (Steffen); VAN DAM (Loes); GAISSERT (Nina)
AF: Department of Brain and Cognitive Engineering, Korea University, Anam-Dong 5ga/Seongbuk-gu, Seoul 136-713/Corée, République de (1 aut., 2 aut.); Max Planck Institute for Biological Cybernetics/Tübingen/Allemagne (2 aut., 3 aut., 5 aut.); University of Bielefeld/Bielefeld/Allemagne (4 aut.)
DT: Publication en série; Courte communication, note brève; Niveau analytique
SO: Psychonomic bulletin & review; ISSN 1069-9384; Etats-Unis; Da. 2014; Vol. 21; No. 4; Pp. 976-985; Bibl. 1/2 p.
LA: Anglais
EA: Categorization of seen objects is often determined by the shapes of objects. However, shape is not exclusive to the visual modality: The haptic system also is expert at identifying shapes. Hence, an important question for understanding shape processing is whether humans store separate modality-dependent shape representations, or whether information is integrated into one multisensory representation. To answer this question, we created a metric space of computer-generated novel objects varying in shape. These objects were then printed using a 3-D printer, to generate tangible stimuli. In a categorization experiment, participants first explored the objects visually and haptically. We found that both modalities led to highly similar categorization behavior. Next, participants were trained either visually or haptically on shape categories within the metric space. As expected, visual training increased visual performance, and haptic training increased haptic performance. Importantly, however, we found that visual training also improved haptic performance, and vice versa. Two additional experiments showed that the location of the categorical boundary in the metric space also transferred across modalities, as did heightened discriminability of objects adjacent to the boundary. This observed transfer of metric category knowledge across modalities indicates that visual and haptic forms of shape information are integrated into a shared multisensory representation.
CC: 770B04D; 770B05C; 770B05E; 770B05H
FD: Mouvement corporel; Préhension; Etude expérimentale; Membre supérieur; Catégorisation; Transfert des connaissances; Vision; Sensibilité tactile; Perception intermodale; Homme
FG: Cognition; Motricité
ED: Body movement; Gripping; Experimental study; Upper limb; Categorization; Knowledge transfer; Vision; Tactile sensitivity; Intermodal perception; Human
EG: Cognition; Motricity
SD: Movimiento corporal; Prension; Estudio experimental; Miembro superior; Categorización; Transferencia conocimiento; Visión; Sensibilidad tactil; Percepción intermodal; Hombre
LO: INIST-13280C.354000504842790100
ID: 14-0223797
Links to Exploration step
Francis:14-0223797
The document in XML format
<record><TEI><teiHeader><fileDesc><titleStmt><title xml:lang="en" level="a">The eyes grasp, the hands see: Metric category knowledge transfers between vision and touch</title>
<author><name sortKey="Wallraven, Christian" sort="Wallraven, Christian" uniqKey="Wallraven C" first="Christian" last="Wallraven">Christian Wallraven</name>
<affiliation><inist:fA14 i1="01"><s1>Department of Brain and Cognitive Engineering, Korea University, Anam-Dong 5ga</s1>
<s2>Seongbuk-gu, Seoul 136-713</s2>
<s3>KOR</s3>
<sZ>1 aut.</sZ>
<sZ>2 aut.</sZ>
</inist:fA14>
</affiliation>
</author>
<author><name sortKey="Bulthoff, Heinrich H" sort="Bulthoff, Heinrich H" uniqKey="Bulthoff H" first="Heinrich H." last="Bülthoff">Heinrich H. Bülthoff</name>
<affiliation><inist:fA14 i1="01"><s1>Department of Brain and Cognitive Engineering, Korea University, Anam-Dong 5ga</s1>
<s2>Seongbuk-gu, Seoul 136-713</s2>
<s3>KOR</s3>
<sZ>1 aut.</sZ>
<sZ>2 aut.</sZ>
</inist:fA14>
</affiliation>
<affiliation><inist:fA14 i1="02"><s1>Max Planck Institute for Biological Cybernetics</s1>
<s2>Tübingen</s2>
<s3>DEU</s3>
<sZ>2 aut.</sZ>
<sZ>3 aut.</sZ>
<sZ>5 aut.</sZ>
</inist:fA14>
</affiliation>
</author>
<author><name sortKey="Waterkamp, Steffen" sort="Waterkamp, Steffen" uniqKey="Waterkamp S" first="Steffen" last="Waterkamp">Steffen Waterkamp</name>
<affiliation><inist:fA14 i1="02"><s1>Max Planck Institute for Biological Cybernetics</s1>
<s2>Tübingen</s2>
<s3>DEU</s3>
<sZ>2 aut.</sZ>
<sZ>3 aut.</sZ>
<sZ>5 aut.</sZ>
</inist:fA14>
</affiliation>
</author>
<author><name sortKey="Van Dam, Loes" sort="Van Dam, Loes" uniqKey="Van Dam L" first="Loes" last="Van Dam">Loes Van Dam</name>
<affiliation><inist:fA14 i1="03"><s1>University of Bielefeld</s1>
<s2>Bielefeld</s2>
<s3>DEU</s3>
<sZ>4 aut.</sZ>
</inist:fA14>
</affiliation>
</author>
<author><name sortKey="Gaissert, Nina" sort="Gaissert, Nina" uniqKey="Gaissert N" first="Nina" last="Gaissert">Nina Gaissert</name>
<affiliation><inist:fA14 i1="02"><s1>Max Planck Institute for Biological Cybernetics</s1>
<s2>Tübingen</s2>
<s3>DEU</s3>
<sZ>2 aut.</sZ>
<sZ>3 aut.</sZ>
<sZ>5 aut.</sZ>
</inist:fA14>
</affiliation>
</author>
</titleStmt>
<publicationStmt><idno type="wicri:source">INIST</idno>
<idno type="inist">14-0223797</idno>
<date when="2014">2014</date>
<idno type="stanalyst">FRANCIS 14-0223797 INIST</idno>
<idno type="RBID">Francis:14-0223797</idno>
<idno type="wicri:Area/PascalFrancis/Corpus">000112</idno>
</publicationStmt>
<sourceDesc><biblStruct><analytic><title xml:lang="en" level="a">The eyes grasp, the hands see: Metric category knowledge transfers between vision and touch</title>
<author><name sortKey="Wallraven, Christian" sort="Wallraven, Christian" uniqKey="Wallraven C" first="Christian" last="Wallraven">Christian Wallraven</name>
<affiliation><inist:fA14 i1="01"><s1>Department of Brain and Cognitive Engineering, Korea University, Anam-Dong 5ga</s1>
<s2>Seongbuk-gu, Seoul 136-713</s2>
<s3>KOR</s3>
<sZ>1 aut.</sZ>
<sZ>2 aut.</sZ>
</inist:fA14>
</affiliation>
</author>
<author><name sortKey="Bulthoff, Heinrich H" sort="Bulthoff, Heinrich H" uniqKey="Bulthoff H" first="Heinrich H." last="Bülthoff">Heinrich H. Bülthoff</name>
<affiliation><inist:fA14 i1="01"><s1>Department of Brain and Cognitive Engineering, Korea University, Anam-Dong 5ga</s1>
<s2>Seongbuk-gu, Seoul 136-713</s2>
<s3>KOR</s3>
<sZ>1 aut.</sZ>
<sZ>2 aut.</sZ>
</inist:fA14>
</affiliation>
<affiliation><inist:fA14 i1="02"><s1>Max Planck Institute for Biological Cybernetics</s1>
<s2>Tübingen</s2>
<s3>DEU</s3>
<sZ>2 aut.</sZ>
<sZ>3 aut.</sZ>
<sZ>5 aut.</sZ>
</inist:fA14>
</affiliation>
</author>
<author><name sortKey="Waterkamp, Steffen" sort="Waterkamp, Steffen" uniqKey="Waterkamp S" first="Steffen" last="Waterkamp">Steffen Waterkamp</name>
<affiliation><inist:fA14 i1="02"><s1>Max Planck Institute for Biological Cybernetics</s1>
<s2>Tübingen</s2>
<s3>DEU</s3>
<sZ>2 aut.</sZ>
<sZ>3 aut.</sZ>
<sZ>5 aut.</sZ>
</inist:fA14>
</affiliation>
</author>
<author><name sortKey="Van Dam, Loes" sort="Van Dam, Loes" uniqKey="Van Dam L" first="Loes" last="Van Dam">Loes Van Dam</name>
<affiliation><inist:fA14 i1="03"><s1>University of Bielefeld</s1>
<s2>Bielefeld</s2>
<s3>DEU</s3>
<sZ>4 aut.</sZ>
</inist:fA14>
</affiliation>
</author>
<author><name sortKey="Gaissert, Nina" sort="Gaissert, Nina" uniqKey="Gaissert N" first="Nina" last="Gaissert">Nina Gaissert</name>
<affiliation><inist:fA14 i1="02"><s1>Max Planck Institute for Biological Cybernetics</s1>
<s2>Tübingen</s2>
<s3>DEU</s3>
<sZ>2 aut.</sZ>
<sZ>3 aut.</sZ>
<sZ>5 aut.</sZ>
</inist:fA14>
</affiliation>
</author>
</analytic>
<series><title level="j" type="main">Psychonomic bulletin &amp; review</title>
<title level="j" type="abbreviated">Psychon. bull. rev.</title>
<idno type="ISSN">1069-9384</idno>
<imprint><date when="2014">2014</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
<seriesStmt><title level="j" type="main">Psychonomic bulletin &amp; review</title>
<title level="j" type="abbreviated">Psychon. bull. rev.</title>
<idno type="ISSN">1069-9384</idno>
</seriesStmt>
</fileDesc>
<profileDesc><textClass><keywords scheme="KwdEn" xml:lang="en"><term>Body movement</term>
<term>Categorization</term>
<term>Experimental study</term>
<term>Gripping</term>
<term>Human</term>
<term>Intermodal perception</term>
<term>Knowledge transfer</term>
<term>Tactile sensitivity</term>
<term>Upper limb</term>
<term>Vision</term>
</keywords>
<keywords scheme="Pascal" xml:lang="fr"><term>Mouvement corporel</term>
<term>Préhension</term>
<term>Etude expérimentale</term>
<term>Membre supérieur</term>
<term>Catégorisation</term>
<term>Transfert des connaissances</term>
<term>Vision</term>
<term>Sensibilité tactile</term>
<term>Perception intermodale</term>
<term>Homme</term>
</keywords>
</textClass>
</profileDesc>
</teiHeader>
<front><div type="abstract" xml:lang="en">Categorization of seen objects is often determined by the shapes of objects. However, shape is not exclusive to the visual modality: The haptic system also is expert at identifying shapes. Hence, an important question for understanding shape processing is whether humans store separate modality-dependent shape representations, or whether information is integrated into one multisensory representation. To answer this question, we created a metric space of computer-generated novel objects varying in shape. These objects were then printed using a 3-D printer, to generate tangible stimuli. In a categorization experiment, participants first explored the objects visually and haptically. We found that both modalities led to highly similar categorization behavior. Next, participants were trained either visually or haptically on shape categories within the metric space. As expected, visual training increased visual performance, and haptic training increased haptic performance. Importantly, however, we found that visual training also improved haptic performance, and vice versa. Two additional experiments showed that the location of the categorical boundary in the metric space also transferred across modalities, as did heightened discriminability of objects adjacent to the boundary. This observed transfer of metric category knowledge across modalities indicates that visual and haptic forms of shape information are integrated into a shared multisensory representation.</div>
</front>
</TEI>
<inist><standard h6="B"><pA><fA01 i1="01" i2="1"><s0>1069-9384</s0>
</fA01>
<fA03 i2="1"><s0>Psychon. bull. rev.</s0>
</fA03>
<fA05><s2>21</s2>
</fA05>
<fA06><s2>4</s2>
</fA06>
<fA08 i1="01" i2="1" l="ENG"><s1>The eyes grasp, the hands see: Metric category knowledge transfers between vision and touch</s1>
</fA08>
<fA11 i1="01" i2="1"><s1>WALLRAVEN (Christian)</s1>
</fA11>
<fA11 i1="02" i2="1"><s1>BÜLTHOFF (Heinrich H.)</s1>
</fA11>
<fA11 i1="03" i2="1"><s1>WATERKAMP (Steffen)</s1>
</fA11>
<fA11 i1="04" i2="1"><s1>VAN DAM (Loes)</s1>
</fA11>
<fA11 i1="05" i2="1"><s1>GAISSERT (Nina)</s1>
</fA11>
<fA14 i1="01"><s1>Department of Brain and Cognitive Engineering, Korea University, Anam-Dong 5ga</s1>
<s2>Seongbuk-gu, Seoul 136-713</s2>
<s3>KOR</s3>
<sZ>1 aut.</sZ>
<sZ>2 aut.</sZ>
</fA14>
<fA14 i1="02"><s1>Max Planck Institute for Biological Cybernetics</s1>
<s2>Tübingen</s2>
<s3>DEU</s3>
<sZ>2 aut.</sZ>
<sZ>3 aut.</sZ>
<sZ>5 aut.</sZ>
</fA14>
<fA14 i1="03"><s1>University of Bielefeld</s1>
<s2>Bielefeld</s2>
<s3>DEU</s3>
<sZ>4 aut.</sZ>
</fA14>
<fA20><s1>976-985</s1>
</fA20>
<fA21><s1>2014</s1>
</fA21>
<fA23 i1="01"><s0>ENG</s0>
</fA23>
<fA43 i1="01"><s1>INIST</s1>
<s2>13280C</s2>
<s5>354000504842790100</s5>
</fA43>
<fA44><s0>0000</s0>
<s1>© 2014 INIST-CNRS. All rights reserved.</s1>
</fA44>
<fA45><s0>1/2 p.</s0>
</fA45>
<fA47 i1="01" i2="1"><s0>14-0223797</s0>
</fA47>
<fA60><s1>P</s1>
<s3>CC</s3>
</fA60>
<fA61><s0>A</s0>
</fA61>
<fA64 i1="01" i2="1"><s0>Psychonomic bulletin &amp; review</s0>
</fA64>
<fA66 i1="01"><s0>USA</s0>
</fA66>
<fC01 i1="01" l="ENG"><s0>Categorization of seen objects is often determined by the shapes of objects. However, shape is not exclusive to the visual modality: The haptic system also is expert at identifying shapes. Hence, an important question for understanding shape processing is whether humans store separate modality-dependent shape representations, or whether information is integrated into one multisensory representation. To answer this question, we created a metric space of computer-generated novel objects varying in shape. These objects were then printed using a 3-D printer, to generate tangible stimuli. In a categorization experiment, participants first explored the objects visually and haptically. We found that both modalities led to highly similar categorization behavior. Next, participants were trained either visually or haptically on shape categories within the metric space. As expected, visual training increased visual performance, and haptic training increased haptic performance. Importantly, however, we found that visual training also improved haptic performance, and vice versa. Two additional experiments showed that the location of the categorical boundary in the metric space also transferred across modalities, as did heightened discriminability of objects adjacent to the boundary. This observed transfer of metric category knowledge across modalities indicates that visual and haptic forms of shape information are integrated into a shared multisensory representation.</s0>
</fC01>
<fC02 i1="01" i2="X"><s0>770B04D</s0>
<s1>II</s1>
</fC02>
<fC02 i1="02" i2="X"><s0>770B05C</s0>
<s1>II</s1>
</fC02>
<fC02 i1="03" i2="X"><s0>770B05E</s0>
<s1>II</s1>
</fC02>
<fC02 i1="04" i2="X"><s0>770B05H</s0>
<s1>II</s1>
</fC02>
<fC03 i1="01" i2="X" l="FRE"><s0>Mouvement corporel</s0>
<s5>01</s5>
</fC03>
<fC03 i1="01" i2="X" l="ENG"><s0>Body movement</s0>
<s5>01</s5>
</fC03>
<fC03 i1="01" i2="X" l="SPA"><s0>Movimiento corporal</s0>
<s5>01</s5>
</fC03>
<fC03 i1="02" i2="X" l="FRE"><s0>Préhension</s0>
<s5>02</s5>
</fC03>
<fC03 i1="02" i2="X" l="ENG"><s0>Gripping</s0>
<s5>02</s5>
</fC03>
<fC03 i1="02" i2="X" l="SPA"><s0>Prension</s0>
<s5>02</s5>
</fC03>
<fC03 i1="03" i2="X" l="FRE"><s0>Etude expérimentale</s0>
<s5>03</s5>
</fC03>
<fC03 i1="03" i2="X" l="ENG"><s0>Experimental study</s0>
<s5>03</s5>
</fC03>
<fC03 i1="03" i2="X" l="SPA"><s0>Estudio experimental</s0>
<s5>03</s5>
</fC03>
<fC03 i1="04" i2="X" l="FRE"><s0>Membre supérieur</s0>
<s5>05</s5>
</fC03>
<fC03 i1="04" i2="X" l="ENG"><s0>Upper limb</s0>
<s5>05</s5>
</fC03>
<fC03 i1="04" i2="X" l="SPA"><s0>Miembro superior</s0>
<s5>05</s5>
</fC03>
<fC03 i1="05" i2="X" l="FRE"><s0>Catégorisation</s0>
<s5>07</s5>
</fC03>
<fC03 i1="05" i2="X" l="ENG"><s0>Categorization</s0>
<s5>07</s5>
</fC03>
<fC03 i1="05" i2="X" l="SPA"><s0>Categorización</s0>
<s5>07</s5>
</fC03>
<fC03 i1="06" i2="X" l="FRE"><s0>Transfert des connaissances</s0>
<s5>08</s5>
</fC03>
<fC03 i1="06" i2="X" l="ENG"><s0>Knowledge transfer</s0>
<s5>08</s5>
</fC03>
<fC03 i1="06" i2="X" l="SPA"><s0>Transferencia conocimiento</s0>
<s5>08</s5>
</fC03>
<fC03 i1="07" i2="X" l="FRE"><s0>Vision</s0>
<s5>09</s5>
</fC03>
<fC03 i1="07" i2="X" l="ENG"><s0>Vision</s0>
<s5>09</s5>
</fC03>
<fC03 i1="07" i2="X" l="SPA"><s0>Visión</s0>
<s5>09</s5>
</fC03>
<fC03 i1="08" i2="X" l="FRE"><s0>Sensibilité tactile</s0>
<s5>10</s5>
</fC03>
<fC03 i1="08" i2="X" l="ENG"><s0>Tactile sensitivity</s0>
<s5>10</s5>
</fC03>
<fC03 i1="08" i2="X" l="SPA"><s0>Sensibilidad tactil</s0>
<s5>10</s5>
</fC03>
<fC03 i1="09" i2="X" l="FRE"><s0>Perception intermodale</s0>
<s5>11</s5>
</fC03>
<fC03 i1="09" i2="X" l="ENG"><s0>Intermodal perception</s0>
<s5>11</s5>
</fC03>
<fC03 i1="09" i2="X" l="SPA"><s0>Percepción intermodal</s0>
<s5>11</s5>
</fC03>
<fC03 i1="10" i2="X" l="FRE"><s0>Homme</s0>
<s5>18</s5>
</fC03>
<fC03 i1="10" i2="X" l="ENG"><s0>Human</s0>
<s5>18</s5>
</fC03>
<fC03 i1="10" i2="X" l="SPA"><s0>Hombre</s0>
<s5>18</s5>
</fC03>
<fC07 i1="01" i2="X" l="FRE"><s0>Cognition</s0>
<s5>38</s5>
</fC07>
<fC07 i1="01" i2="X" l="ENG"><s0>Cognition</s0>
<s5>38</s5>
</fC07>
<fC07 i1="01" i2="X" l="SPA"><s0>Cognición</s0>
<s5>38</s5>
</fC07>
<fC07 i1="02" i2="X" l="FRE"><s0>Motricité</s0>
<s5>39</s5>
</fC07>
<fC07 i1="02" i2="X" l="ENG"><s0>Motricity</s0>
<s5>39</s5>
</fC07>
<fC07 i1="02" i2="X" l="SPA"><s0>Motricidad</s0>
<s5>39</s5>
</fC07>
<fN21><s1>272</s1>
</fN21>
</pA>
</standard>
<server><NO>FRANCIS 14-0223797 INIST</NO>
<ET>The eyes grasp, the hands see: Metric category knowledge transfers between vision and touch</ET>
<AU>WALLRAVEN (Christian); BÜLTHOFF (Heinrich H.); WATERKAMP (Steffen); VAN DAM (Loes); GAISSERT (Nina)</AU>
<AF>Department of Brain and Cognitive Engineering, Korea University, Anam-Dong 5ga/Seongbuk-gu, Seoul 136-713/Corée, République de (1 aut., 2 aut.); Max Planck Institute for Biological Cybernetics/Tübingen/Allemagne (2 aut., 3 aut., 5 aut.); University of Bielefeld/Bielefeld/Allemagne (4 aut.)</AF>
<DT>Publication en série; Courte communication, note brève; Niveau analytique</DT>
<SO>Psychonomic bulletin &amp; review; ISSN 1069-9384; Etats-Unis; Da. 2014; Vol. 21; No. 4; Pp. 976-985; Bibl. 1/2 p.</SO>
<LA>Anglais</LA>
<EA>Categorization of seen objects is often determined by the shapes of objects. However, shape is not exclusive to the visual modality: The haptic system also is expert at identifying shapes. Hence, an important question for understanding shape processing is whether humans store separate modality-dependent shape representations, or whether information is integrated into one multisensory representation. To answer this question, we created a metric space of computer-generated novel objects varying in shape. These objects were then printed using a 3-D printer, to generate tangible stimuli. In a categorization experiment, participants first explored the objects visually and haptically. We found that both modalities led to highly similar categorization behavior. Next, participants were trained either visually or haptically on shape categories within the metric space. As expected, visual training increased visual performance, and haptic training increased haptic performance. Importantly, however, we found that visual training also improved haptic performance, and vice versa. Two additional experiments showed that the location of the categorical boundary in the metric space also transferred across modalities, as did heightened discriminability of objects adjacent to the boundary. This observed transfer of metric category knowledge across modalities indicates that visual and haptic forms of shape information are integrated into a shared multisensory representation.</EA>
<CC>770B04D; 770B05C; 770B05E; 770B05H</CC>
<FD>Mouvement corporel; Préhension; Etude expérimentale; Membre supérieur; Catégorisation; Transfert des connaissances; Vision; Sensibilité tactile; Perception intermodale; Homme</FD>
<FG>Cognition; Motricité</FG>
<ED>Body movement; Gripping; Experimental study; Upper limb; Categorization; Knowledge transfer; Vision; Tactile sensitivity; Intermodal perception; Human</ED>
<EG>Cognition; Motricity</EG>
<SD>Movimiento corporal; Prension; Estudio experimental; Miembro superior; Categorización; Transferencia conocimiento; Visión; Sensibilidad tactil; Percepción intermodal; Hombre</SD>
<LO>INIST-13280C.354000504842790100</LO>
<ID>14-0223797</ID>
</server>
</inist>
</record>
To manipulate this document under Unix (Dilib)
EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/HapticV1/Data/PascalFrancis/Corpus
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 000112 | SxmlIndent | more
Or
HfdSelect -h $EXPLOR_AREA/Data/PascalFrancis/Corpus/biblio.hfd -nk 000112 | SxmlIndent | more
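As an alternative to the Dilib toolchain, the exported record can also be read with standard XML tooling. The sketch below is a hypothetical illustration, not part of the Wicri workflow: it parses a reduced, self-contained copy of the TEI header shown above (the full export additionally uses an undeclared `inist:` namespace prefix, which would need to be declared or stripped before parsing) and extracts the article title and the English keyword terms.

```python
import xml.etree.ElementTree as ET

# Reduced, well-formed excerpt of the TEI header from the record above.
snippet = """<record>
  <TEI>
    <teiHeader>
      <fileDesc>
        <titleStmt>
          <title xml:lang="en" level="a">The eyes grasp, the hands see: Metric category knowledge transfers between vision and touch</title>
        </titleStmt>
      </fileDesc>
      <profileDesc>
        <textClass>
          <keywords scheme="KwdEn" xml:lang="en">
            <term>Body movement</term>
            <term>Categorization</term>
          </keywords>
        </textClass>
      </profileDesc>
    </teiHeader>
  </TEI>
</record>"""

root = ET.fromstring(snippet)

# The analytic title of the record.
title = root.findtext(".//title")

# Only the English (KwdEn) keyword scheme, selected by attribute predicate.
terms = [t.text for t in root.findall(".//keywords[@scheme='KwdEn']/term")]

print(title)
print(terms)
```

The same element paths apply to the full `<record>` element once its namespace prefixes are resolved.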
To add a link to this page in the Wicri network
{{Explor lien |wiki= Ticri/CIDE |area= HapticV1 |flux= PascalFrancis |étape= Corpus |type= RBID |clé= Francis:14-0223797 |texte= The eyes grasp, the hands see: Metric category knowledge transfers between vision and touch }}
This area was generated with Dilib version V0.6.23.