Exploration server on haptic devices

Warning: this site is under development!
Warning: this site was generated automatically from raw corpora.
The information has therefore not been validated.

Similarity and categorization: From vision to touch

Internal identifier: 000433 (PascalFrancis/Corpus); previous: 000432; next: 000434


Authors: Nina Gaissert; Heinrich H. Bülthoff; Christian Wallraven

Source:

RBID: Pascal:11-0439339

French descriptors

English descriptors

Abstract

Even though human perceptual development relies on combining multiple modalities, most categorization studies so far have focused on the visual modality. To better understand the mechanisms underlying multisensory categorization, we analyzed visual and haptic perceptual spaces and compared them with human categorization behavior. As stimuli we used a three-dimensional object space of complex, parametrically-defined objects. First, we gathered similarity ratings for all objects and analyzed the perceptual spaces of both modalities using multidimensional scaling analysis. Next, we performed three different categorization tasks which are representative of every-day learning scenarios: in a fully unconstrained task, objects were freely categorized, in a semi-constrained task, exactly three groups had to be created, whereas in a constrained task, participants received three prototype objects and had to assign all other objects accordingly. We found that the haptic modality was on par with the visual modality both in recovering the topology of the physical space and in solving the categorization tasks. We also found that within-category similarity was consistently higher than across-category similarity for all categorization tasks and thus show how perceptual spaces based on similarity can explain visual and haptic object categorization. Our results suggest that both modalities employ similar processes in forming categories of complex objects.

Record in standard format (ISO 2709)

See the documentation on the Inist Standard format.

pA  
A01 01  1    @0 0001-6918
A02 01      @0 APSOAZ
A03   1    @0 Acta psychol.
A05       @2 138
A06       @2 1
A08 01  1  ENG  @1 Similarity and categorization: From vision to touch
A11 01  1    @1 GAISSERT (Nina)
A11 02  1    @1 BÜLTHOFF (Heinrich H.)
A11 03  1    @1 WALLRAVEN (Christian)
A14 01      @1 Max Planck Institute for Biological Cybernetics, Tübingen, Spemannstr. 38 @2 72076 Tübingen @3 DEU @Z 1 aut. @Z 2 aut.
A14 02      @1 Department of Brain and Cognitive Engineering, Korea University, Anam-dong, Seongbuk-gu, Seoul 136-713 @2 Seoul @3 KOR @Z 2 aut. @Z 3 aut.
A20       @1 219-230
A21       @1 2011
A23 01      @0 ENG
A43 01      @1 INIST @2 2174 @5 354000191255340280
A44       @0 0000 @1 © 2011 INIST-CNRS. All rights reserved.
A45       @0 3/4 p.
A47 01  1    @0 11-0439339
A60       @1 P
A61       @0 A
A64 01  1    @0 Acta psychologica
A66 01      @0 GBR
C01 01    ENG  @0 Even though human perceptual development relies on combining multiple modalities, most categorization studies so far have focused on the visual modality. To better understand the mechanisms underlying multisensory categorization, we analyzed visual and haptic perceptual spaces and compared them with human categorization behavior. As stimuli we used a three-dimensional object space of complex, parametrically-defined objects. First, we gathered similarity ratings for all objects and analyzed the perceptual spaces of both modalities using multidimensional scaling analysis. Next, we performed three different categorization tasks which are representative of every-day learning scenarios: in a fully unconstrained task, objects were freely categorized, in a semi-constrained task, exactly three groups had to be created, whereas in a constrained task, participants received three prototype objects and had to assign all other objects accordingly. We found that the haptic modality was on par with the visual modality both in recovering the topology of the physical space and in solving the categorization tasks. We also found that within-category similarity was consistently higher than across-category similarity for all categorization tasks and thus show how perceptual spaces based on similarity can explain visual and haptic object categorization. Our results suggest that both modalities employ similar processes in forming categories of complex objects.
C02 01  X    @0 002A26E03
C02 02  X    @0 002A26E05
C02 03  X    @0 002A26E08
C03 01  X  FRE  @0 Catégorisation @5 01
C03 01  X  ENG  @0 Categorization @5 01
C03 01  X  SPA  @0 Categorización @5 01
C03 02  X  FRE  @0 Vision @5 02
C03 02  X  ENG  @0 Vision @5 02
C03 02  X  SPA  @0 Visión @5 02
C03 03  X  FRE  @0 Sensibilité tactile @5 03
C03 03  X  ENG  @0 Tactile sensitivity @5 03
C03 03  X  SPA  @0 Sensibilidad tactil @5 03
C03 04  X  FRE  @0 Perception intermodale @5 05
C03 04  X  ENG  @0 Intermodal perception @5 05
C03 04  X  SPA  @0 Percepción intermodal @5 05
C03 05  X  FRE  @0 Perception espace @5 06
C03 05  X  ENG  @0 Space perception @5 06
C03 05  X  SPA  @0 Percepción espacio @5 06
C03 06  X  FRE  @0 Etude expérimentale @5 07
C03 06  X  ENG  @0 Experimental study @5 07
C03 06  X  SPA  @0 Estudio experimental @5 07
C03 07  X  FRE  @0 Homme @5 18
C03 07  X  ENG  @0 Human @5 18
C03 07  X  SPA  @0 Hombre @5 18
C07 01  X  FRE  @0 Cognition @5 37
C07 01  X  ENG  @0 Cognition @5 37
C07 01  X  SPA  @0 Cognición @5 37
N21       @1 297
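The field lines above follow the Inist standard layout: a field tag (A01, A11, C03, ...), occurrence/indicator tokens, then @-prefixed subfields. A minimal Python sketch of splitting one such line, assuming the whitespace layout shown on this page:

```python
import re

# Minimal sketch: split one line of the Inist standard-format notice
# into its field tag, indicator tokens, and @-prefixed subfields.
# The whitespace layout is an assumption based on this page's rendering;
# repeated subfields (e.g. @Z) overwrite each other in this simple dict.
def parse_field(line):
    # Split before each "@X" subfield marker; the first piece is the header.
    head, *subs = re.split(r"\s+(?=@[0-9A-Z])", line.strip())
    parts = head.split()
    tag, indicators = parts[0], parts[1:]
    subfields = {s[:2]: s[2:].strip() for s in subs}
    return tag, indicators, subfields

tag, indicators, subfields = parse_field("A11 01  1    @1 GAISSERT (Nina)")
# tag == "A11", indicators == ["01", "1"], subfields == {"@1": "GAISSERT (Nina)"}
```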

Inist format (server)

NO : PASCAL 11-0439339 INIST
ET : Similarity and categorization: From vision to touch
AU : GAISSERT (Nina); BÜLTHOFF (Heinrich H.); WALLRAVEN (Christian)
AF : Max Planck Institute for Biological Cybernetics, Tübingen, Spemannstr. 38/72076 Tübingen/Allemagne (1 aut., 2 aut.); Department of Brain and Cognitive Engineering, Korea University, Anam-dong, Seongbuk-gu, Seoul 136-713/Seoul/Corée, République de (2 aut., 3 aut.)
DT : Publication en série; Niveau analytique
SO : Acta psychologica; ISSN 0001-6918; Coden APSOAZ; Royaume-Uni; Da. 2011; Vol. 138; No. 1; Pp. 219-230; Bibl. 3/4 p.
LA : Anglais
EA : Even though human perceptual development relies on combining multiple modalities, most categorization studies so far have focused on the visual modality. To better understand the mechanisms underlying multisensory categorization, we analyzed visual and haptic perceptual spaces and compared them with human categorization behavior. As stimuli we used a three-dimensional object space of complex, parametrically-defined objects. First, we gathered similarity ratings for all objects and analyzed the perceptual spaces of both modalities using multidimensional scaling analysis. Next, we performed three different categorization tasks which are representative of every-day learning scenarios: in a fully unconstrained task, objects were freely categorized, in a semi-constrained task, exactly three groups had to be created, whereas in a constrained task, participants received three prototype objects and had to assign all other objects accordingly. We found that the haptic modality was on par with the visual modality both in recovering the topology of the physical space and in solving the categorization tasks. We also found that within-category similarity was consistently higher than across-category similarity for all categorization tasks and thus show how perceptual spaces based on similarity can explain visual and haptic object categorization. Our results suggest that both modalities employ similar processes in forming categories of complex objects.
CC : 002A26E03; 002A26E05; 002A26E08
FD : Catégorisation; Vision; Sensibilité tactile; Perception intermodale; Perception espace; Etude expérimentale; Homme
FG : Cognition
ED : Categorization; Vision; Tactile sensitivity; Intermodal perception; Space perception; Experimental study; Human
EG : Cognition
SD : Categorización; Visión; Sensibilidad tactil; Percepción intermodal; Percepción espacio; Estudio experimental; Hombre
LO : INIST-2174.354000191255340280
ID : 11-0439339
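The server format above is a flat list of two-letter tags (NO, ET, AU, ...) followed by " : " and a value. A minimal parsing sketch, assuming that "TAG : value" layout:

```python
# Minimal sketch: parse an Inist "server" record into a dict keyed by tag.
# The "TAG : value" layout is an assumption based on this page's rendering.
def parse_inist_server(text):
    record = {}
    for line in text.splitlines():
        line = line.strip()
        if " : " not in line:
            continue
        # Split on the first " : " only; values may themselves contain colons.
        tag, _, value = line.partition(" : ")
        record[tag.strip()] = value.strip()
    return record

sample = """NO : PASCAL 11-0439339 INIST
ET : Similarity and categorization: From vision to touch
LA : Anglais"""
rec = parse_inist_server(sample)
```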

Links to Exploration step

Pascal:11-0439339

The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en" level="a">Similarity and categorization: From vision to touch</title>
<author>
<name sortKey="Gaissert, Nina" sort="Gaissert, Nina" uniqKey="Gaissert N" first="Nina" last="Gaissert">Nina Gaissert</name>
<affiliation>
<inist:fA14 i1="01">
<s1>Max Planck Institute for Biological Cybernetics, Tübingen, Spemannstr. 38</s1>
<s2>72076 Tübingen</s2>
<s3>DEU</s3>
<sZ>1 aut.</sZ>
<sZ>2 aut.</sZ>
</inist:fA14>
</affiliation>
</author>
<author>
<name sortKey="Bulthoff, Heinrich H" sort="Bulthoff, Heinrich H" uniqKey="Bulthoff H" first="Heinrich H." last="Bülthoff">Heinrich H. Bülthoff</name>
<affiliation>
<inist:fA14 i1="01">
<s1>Max Planck Institute for Biological Cybernetics, Tübingen, Spemannstr. 38</s1>
<s2>72076 Tübingen</s2>
<s3>DEU</s3>
<sZ>1 aut.</sZ>
<sZ>2 aut.</sZ>
</inist:fA14>
</affiliation>
<affiliation>
<inist:fA14 i1="02">
<s1>Department of Brain and Cognitive Engineering, Korea University, Anam-dong, Seongbuk-gu, Seoul 136-713</s1>
<s2>Seoul</s2>
<s3>KOR</s3>
<sZ>2 aut.</sZ>
<sZ>3 aut.</sZ>
</inist:fA14>
</affiliation>
</author>
<author>
<name sortKey="Wallraven, Christian" sort="Wallraven, Christian" uniqKey="Wallraven C" first="Christian" last="Wallraven">Christian Wallraven</name>
<affiliation>
<inist:fA14 i1="02">
<s1>Department of Brain and Cognitive Engineering, Korea University, Anam-dong, Seongbuk-gu, Seoul 136-713</s1>
<s2>Seoul</s2>
<s3>KOR</s3>
<sZ>2 aut.</sZ>
<sZ>3 aut.</sZ>
</inist:fA14>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">INIST</idno>
<idno type="inist">11-0439339</idno>
<date when="2011">2011</date>
<idno type="stanalyst">PASCAL 11-0439339 INIST</idno>
<idno type="RBID">Pascal:11-0439339</idno>
<idno type="wicri:Area/PascalFrancis/Corpus">000433</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a">Similarity and categorization: From vision to touch</title>
<author>
<name sortKey="Gaissert, Nina" sort="Gaissert, Nina" uniqKey="Gaissert N" first="Nina" last="Gaissert">Nina Gaissert</name>
<affiliation>
<inist:fA14 i1="01">
<s1>Max Planck Institute for Biological Cybernetics, Tübingen, Spemannstr. 38</s1>
<s2>72076 Tübingen</s2>
<s3>DEU</s3>
<sZ>1 aut.</sZ>
<sZ>2 aut.</sZ>
</inist:fA14>
</affiliation>
</author>
<author>
<name sortKey="Bulthoff, Heinrich H" sort="Bulthoff, Heinrich H" uniqKey="Bulthoff H" first="Heinrich H." last="Bülthoff">Heinrich H. Bülthoff</name>
<affiliation>
<inist:fA14 i1="01">
<s1>Max Planck Institute for Biological Cybernetics, Tübingen, Spemannstr. 38</s1>
<s2>72076 Tübingen</s2>
<s3>DEU</s3>
<sZ>1 aut.</sZ>
<sZ>2 aut.</sZ>
</inist:fA14>
</affiliation>
<affiliation>
<inist:fA14 i1="02">
<s1>Department of Brain and Cognitive Engineering, Korea University, Anam-dong, Seongbuk-gu, Seoul 136-713</s1>
<s2>Seoul</s2>
<s3>KOR</s3>
<sZ>2 aut.</sZ>
<sZ>3 aut.</sZ>
</inist:fA14>
</affiliation>
</author>
<author>
<name sortKey="Wallraven, Christian" sort="Wallraven, Christian" uniqKey="Wallraven C" first="Christian" last="Wallraven">Christian Wallraven</name>
<affiliation>
<inist:fA14 i1="02">
<s1>Department of Brain and Cognitive Engineering, Korea University, Anam-dong, Seongbuk-gu, Seoul 136-713</s1>
<s2>Seoul</s2>
<s3>KOR</s3>
<sZ>2 aut.</sZ>
<sZ>3 aut.</sZ>
</inist:fA14>
</affiliation>
</author>
</analytic>
<series>
<title level="j" type="main">Acta psychologica</title>
<title level="j" type="abbreviated">Acta psychol.</title>
<idno type="ISSN">0001-6918</idno>
<imprint>
<date when="2011">2011</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
<seriesStmt>
<title level="j" type="main">Acta psychologica</title>
<title level="j" type="abbreviated">Acta psychol.</title>
<idno type="ISSN">0001-6918</idno>
</seriesStmt>
</fileDesc>
<profileDesc>
<textClass>
<keywords scheme="KwdEn" xml:lang="en">
<term>Categorization</term>
<term>Experimental study</term>
<term>Human</term>
<term>Intermodal perception</term>
<term>Space perception</term>
<term>Tactile sensitivity</term>
<term>Vision</term>
</keywords>
<keywords scheme="Pascal" xml:lang="fr">
<term>Catégorisation</term>
<term>Vision</term>
<term>Sensibilité tactile</term>
<term>Perception intermodale</term>
<term>Perception espace</term>
<term>Etude expérimentale</term>
<term>Homme</term>
</keywords>
</textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">Even though human perceptual development relies on combining multiple modalities, most categorization studies so far have focused on the visual modality. To better understand the mechanisms underlying multisensory categorization, we analyzed visual and haptic perceptual spaces and compared them with human categorization behavior. As stimuli we used a three-dimensional object space of complex, parametrically-defined objects. First, we gathered similarity ratings for all objects and analyzed the perceptual spaces of both modalities using multidimensional scaling analysis. Next, we performed three different categorization tasks which are representative of every-day learning scenarios: in a fully unconstrained task, objects were freely categorized, in a semi-constrained task, exactly three groups had to be created, whereas in a constrained task, participants received three prototype objects and had to assign all other objects accordingly. We found that the haptic modality was on par with the visual modality both in recovering the topology of the physical space and in solving the categorization tasks. We also found that within-category similarity was consistently higher than across-category similarity for all categorization tasks and thus show how perceptual spaces based on similarity can explain visual and haptic object categorization. Our results suggest that both modalities employ similar processes in forming categories of complex objects.</div>
</front>
</TEI>
<inist>
<standard h6="B">
<pA>
<fA01 i1="01" i2="1">
<s0>0001-6918</s0>
</fA01>
<fA02 i1="01">
<s0>APSOAZ</s0>
</fA02>
<fA03 i2="1">
<s0>Acta psychol.</s0>
</fA03>
<fA05>
<s2>138</s2>
</fA05>
<fA06>
<s2>1</s2>
</fA06>
<fA08 i1="01" i2="1" l="ENG">
<s1>Similarity and categorization: From vision to touch</s1>
</fA08>
<fA11 i1="01" i2="1">
<s1>GAISSERT (Nina)</s1>
</fA11>
<fA11 i1="02" i2="1">
<s1>BÜLTHOFF (Heinrich H.)</s1>
</fA11>
<fA11 i1="03" i2="1">
<s1>WALLRAVEN (Christian)</s1>
</fA11>
<fA14 i1="01">
<s1>Max Planck Institute for Biological Cybernetics, Tübingen, Spemannstr. 38</s1>
<s2>72076 Tübingen</s2>
<s3>DEU</s3>
<sZ>1 aut.</sZ>
<sZ>2 aut.</sZ>
</fA14>
<fA14 i1="02">
<s1>Department of Brain and Cognitive Engineering, Korea University, Anam-dong, Seongbuk-gu, Seoul 136-713</s1>
<s2>Seoul</s2>
<s3>KOR</s3>
<sZ>2 aut.</sZ>
<sZ>3 aut.</sZ>
</fA14>
<fA20>
<s1>219-230</s1>
</fA20>
<fA21>
<s1>2011</s1>
</fA21>
<fA23 i1="01">
<s0>ENG</s0>
</fA23>
<fA43 i1="01">
<s1>INIST</s1>
<s2>2174</s2>
<s5>354000191255340280</s5>
</fA43>
<fA44>
<s0>0000</s0>
<s1>© 2011 INIST-CNRS. All rights reserved.</s1>
</fA44>
<fA45>
<s0>3/4 p.</s0>
</fA45>
<fA47 i1="01" i2="1">
<s0>11-0439339</s0>
</fA47>
<fA60>
<s1>P</s1>
</fA60>
<fA61>
<s0>A</s0>
</fA61>
<fA64 i1="01" i2="1">
<s0>Acta psychologica</s0>
</fA64>
<fA66 i1="01">
<s0>GBR</s0>
</fA66>
<fC01 i1="01" l="ENG">
<s0>Even though human perceptual development relies on combining multiple modalities, most categorization studies so far have focused on the visual modality. To better understand the mechanisms underlying multisensory categorization, we analyzed visual and haptic perceptual spaces and compared them with human categorization behavior. As stimuli we used a three-dimensional object space of complex, parametrically-defined objects. First, we gathered similarity ratings for all objects and analyzed the perceptual spaces of both modalities using multidimensional scaling analysis. Next, we performed three different categorization tasks which are representative of every-day learning scenarios: in a fully unconstrained task, objects were freely categorized, in a semi-constrained task, exactly three groups had to be created, whereas in a constrained task, participants received three prototype objects and had to assign all other objects accordingly. We found that the haptic modality was on par with the visual modality both in recovering the topology of the physical space and in solving the categorization tasks. We also found that within-category similarity was consistently higher than across-category similarity for all categorization tasks and thus show how perceptual spaces based on similarity can explain visual and haptic object categorization. Our results suggest that both modalities employ similar processes in forming categories of complex objects.</s0>
</fC01>
<fC02 i1="01" i2="X">
<s0>002A26E03</s0>
</fC02>
<fC02 i1="02" i2="X">
<s0>002A26E05</s0>
</fC02>
<fC02 i1="03" i2="X">
<s0>002A26E08</s0>
</fC02>
<fC03 i1="01" i2="X" l="FRE">
<s0>Catégorisation</s0>
<s5>01</s5>
</fC03>
<fC03 i1="01" i2="X" l="ENG">
<s0>Categorization</s0>
<s5>01</s5>
</fC03>
<fC03 i1="01" i2="X" l="SPA">
<s0>Categorización</s0>
<s5>01</s5>
</fC03>
<fC03 i1="02" i2="X" l="FRE">
<s0>Vision</s0>
<s5>02</s5>
</fC03>
<fC03 i1="02" i2="X" l="ENG">
<s0>Vision</s0>
<s5>02</s5>
</fC03>
<fC03 i1="02" i2="X" l="SPA">
<s0>Visión</s0>
<s5>02</s5>
</fC03>
<fC03 i1="03" i2="X" l="FRE">
<s0>Sensibilité tactile</s0>
<s5>03</s5>
</fC03>
<fC03 i1="03" i2="X" l="ENG">
<s0>Tactile sensitivity</s0>
<s5>03</s5>
</fC03>
<fC03 i1="03" i2="X" l="SPA">
<s0>Sensibilidad tactil</s0>
<s5>03</s5>
</fC03>
<fC03 i1="04" i2="X" l="FRE">
<s0>Perception intermodale</s0>
<s5>05</s5>
</fC03>
<fC03 i1="04" i2="X" l="ENG">
<s0>Intermodal perception</s0>
<s5>05</s5>
</fC03>
<fC03 i1="04" i2="X" l="SPA">
<s0>Percepción intermodal</s0>
<s5>05</s5>
</fC03>
<fC03 i1="05" i2="X" l="FRE">
<s0>Perception espace</s0>
<s5>06</s5>
</fC03>
<fC03 i1="05" i2="X" l="ENG">
<s0>Space perception</s0>
<s5>06</s5>
</fC03>
<fC03 i1="05" i2="X" l="SPA">
<s0>Percepción espacio</s0>
<s5>06</s5>
</fC03>
<fC03 i1="06" i2="X" l="FRE">
<s0>Etude expérimentale</s0>
<s5>07</s5>
</fC03>
<fC03 i1="06" i2="X" l="ENG">
<s0>Experimental study</s0>
<s5>07</s5>
</fC03>
<fC03 i1="06" i2="X" l="SPA">
<s0>Estudio experimental</s0>
<s5>07</s5>
</fC03>
<fC03 i1="07" i2="X" l="FRE">
<s0>Homme</s0>
<s5>18</s5>
</fC03>
<fC03 i1="07" i2="X" l="ENG">
<s0>Human</s0>
<s5>18</s5>
</fC03>
<fC03 i1="07" i2="X" l="SPA">
<s0>Hombre</s0>
<s5>18</s5>
</fC03>
<fC07 i1="01" i2="X" l="FRE">
<s0>Cognition</s0>
<s5>37</s5>
</fC07>
<fC07 i1="01" i2="X" l="ENG">
<s0>Cognition</s0>
<s5>37</s5>
</fC07>
<fC07 i1="01" i2="X" l="SPA">
<s0>Cognición</s0>
<s5>37</s5>
</fC07>
<fN21>
<s1>297</s1>
</fN21>
</pA>
</standard>
<server>
<NO>PASCAL 11-0439339 INIST</NO>
<ET>Similarity and categorization: From vision to touch</ET>
<AU>GAISSERT (Nina); BÜLTHOFF (Heinrich H.); WALLRAVEN (Christian)</AU>
<AF>Max Planck Institute for Biological Cybernetics, Tübingen, Spemannstr. 38/72076 Tübingen/Allemagne (1 aut., 2 aut.); Department of Brain and Cognitive Engineering, Korea University, Anam-dong, Seongbuk-gu, Seoul 136-713/Seoul/Corée, République de (2 aut., 3 aut.)</AF>
<DT>Publication en série; Niveau analytique</DT>
<SO>Acta psychologica; ISSN 0001-6918; Coden APSOAZ; Royaume-Uni; Da. 2011; Vol. 138; No. 1; Pp. 219-230; Bibl. 3/4 p.</SO>
<LA>Anglais</LA>
<EA>Even though human perceptual development relies on combining multiple modalities, most categorization studies so far have focused on the visual modality. To better understand the mechanisms underlying multisensory categorization, we analyzed visual and haptic perceptual spaces and compared them with human categorization behavior. As stimuli we used a three-dimensional object space of complex, parametrically-defined objects. First, we gathered similarity ratings for all objects and analyzed the perceptual spaces of both modalities using multidimensional scaling analysis. Next, we performed three different categorization tasks which are representative of every-day learning scenarios: in a fully unconstrained task, objects were freely categorized, in a semi-constrained task, exactly three groups had to be created, whereas in a constrained task, participants received three prototype objects and had to assign all other objects accordingly. We found that the haptic modality was on par with the visual modality both in recovering the topology of the physical space and in solving the categorization tasks. We also found that within-category similarity was consistently higher than across-category similarity for all categorization tasks and thus show how perceptual spaces based on similarity can explain visual and haptic object categorization. Our results suggest that both modalities employ similar processes in forming categories of complex objects.</EA>
<CC>002A26E03; 002A26E05; 002A26E08</CC>
<FD>Catégorisation; Vision; Sensibilité tactile; Perception intermodale; Perception espace; Etude expérimentale; Homme</FD>
<FG>Cognition</FG>
<ED>Categorization; Vision; Tactile sensitivity; Intermodal perception; Space perception; Experimental study; Human</ED>
<EG>Cognition</EG>
<SD>Categorización; Visión; Sensibilidad tactil; Percepción intermodal; Percepción espacio; Estudio experimental; Hombre</SD>
<LO>INIST-2174.354000191255340280</LO>
<ID>11-0439339</ID>
</server>
</inist>
</record>
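The TEI header in the export carries the title and author names. Note that the export uses an `inist:` prefix without declaring its namespace, which a strict XML parser rejects; a minimal extraction sketch on a trimmed copy of the record, binding the prefix to a placeholder URI (an assumption for illustration):

```python
import xml.etree.ElementTree as ET

# Trimmed copy of the record above, for illustration only.
raw = """<record>
<TEI><teiHeader><fileDesc><titleStmt>
<title xml:lang="en" level="a">Similarity and categorization: From vision to touch</title>
<author><name sortKey="Gaissert, Nina">Nina Gaissert</name></author>
<author><name sortKey="Wallraven, Christian">Christian Wallraven</name></author>
</titleStmt></fileDesc></teiHeader></TEI>
</record>"""

# Bind the otherwise-undeclared inist: prefix to a placeholder URI so the
# full export would parse; the URI itself is an assumption.
root = ET.fromstring(raw.replace("<record>", '<record xmlns:inist="urn:inist">', 1))
title = root.findtext(".//title")
authors = [n.text for n in root.findall(".//author/name")]
```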

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/HapticV1/Data/PascalFrancis/Corpus
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 000433 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/PascalFrancis/Corpus/biblio.hfd -nk 000433 | SxmlIndent | more

To link to this page from the Wicri network

{{Explor lien
   |wiki=    Ticri/CIDE
   |area=    HapticV1
   |flux=    PascalFrancis
   |étape=   Corpus
   |type=    RBID
   |clé=     Pascal:11-0439339
   |texte=   Similarity and categorization: From vision to touch
}}

Wicri

This area was generated with Dilib version V0.6.23.
Data generation: Mon Jun 13 01:09:46 2016. Site generation: Wed Mar 6 09:54:07 2024