Similarity and categorization: From vision to touch
Internal identifier: 000433 (PascalFrancis/Corpus); previous: 000432; next: 000434
Authors: Nina Gaissert; Heinrich H. Bülthoff; Christian Wallraven
Source:
- Acta psychologica [0001-6918]; 2011.
French descriptors
- Pascal (Inist): Catégorisation; Vision; Sensibilité tactile; Perception intermodale; Perception espace; Etude expérimentale; Homme
English descriptors
- KwdEn: Categorization; Vision; Tactile sensitivity; Intermodal perception; Space perception; Experimental study; Human
Abstract
Even though human perceptual development relies on combining multiple modalities, most categorization studies so far have focused on the visual modality. To better understand the mechanisms underlying multisensory categorization, we analyzed visual and haptic perceptual spaces and compared them with human categorization behavior. As stimuli we used a three-dimensional object space of complex, parametrically-defined objects. First, we gathered similarity ratings for all objects and analyzed the perceptual spaces of both modalities using multidimensional scaling analysis. Next, we performed three different categorization tasks which are representative of every-day learning scenarios: in a fully unconstrained task, objects were freely categorized, in a semi-constrained task, exactly three groups had to be created, whereas in a constrained task, participants received three prototype objects and had to assign all other objects accordingly. We found that the haptic modality was on par with the visual modality both in recovering the topology of the physical space and in solving the categorization tasks. We also found that within-category similarity was consistently higher than across-category similarity for all categorization tasks and thus show how perceptual spaces based on similarity can explain visual and haptic object categorization. Our results suggest that both modalities employ similar processes in forming categories of complex objects.
Record in standard format (ISO 2709)
See the documentation on the Inist Standard format.
Format Inist (server)

| Field | Value |
|---|---|
| NO | PASCAL 11-0439339 INIST |
| ET | Similarity and categorization: From vision to touch |
| AU | GAISSERT (Nina); BÜLTHOFF (Heinrich H.); WALLRAVEN (Christian) |
| AF | Max Planck Institute for Biological Cybernetics, Tübingen, Spemannstr. 38/72076 Tübingen/Germany (1 aut., 2 aut.); Department of Brain and Cognitive Engineering, Korea University, Anam-dong, Seongbuk-gu, Seoul 136-713/Seoul/Korea, Republic of (2 aut., 3 aut.) |
| DT | Serial publication; Analytic level |
| SO | Acta psychologica; ISSN 0001-6918; Coden APSOAZ; United Kingdom; Da. 2011; Vol. 138; No. 1; Pp. 219-230; Bibl. 3/4 p. |
| LA | English |
| EA | Even though human perceptual development relies on combining multiple modalities, most categorization studies so far have focused on the visual modality. To better understand the mechanisms underlying multisensory categorization, we analyzed visual and haptic perceptual spaces and compared them with human categorization behavior. As stimuli we used a three-dimensional object space of complex, parametrically-defined objects. First, we gathered similarity ratings for all objects and analyzed the perceptual spaces of both modalities using multidimensional scaling analysis. Next, we performed three different categorization tasks which are representative of every-day learning scenarios: in a fully unconstrained task, objects were freely categorized, in a semi-constrained task, exactly three groups had to be created, whereas in a constrained task, participants received three prototype objects and had to assign all other objects accordingly. We found that the haptic modality was on par with the visual modality both in recovering the topology of the physical space and in solving the categorization tasks. We also found that within-category similarity was consistently higher than across-category similarity for all categorization tasks and thus show how perceptual spaces based on similarity can explain visual and haptic object categorization. Our results suggest that both modalities employ similar processes in forming categories of complex objects. |
| CC | 002A26E03; 002A26E05; 002A26E08 |
| FD | Catégorisation; Vision; Sensibilité tactile; Perception intermodale; Perception espace; Etude expérimentale; Homme |
| FG | Cognition |
| ED | Categorization; Vision; Tactile sensitivity; Intermodal perception; Space perception; Experimental study; Human |
| EG | Cognition |
| SD | Categorización; Visión; Sensibilidad tactil; Percepción intermodal; Percepción espacio; Estudio experimental; Hombre |
| LO | INIST-2174.354000191255340280 |
| ID | 11-0439339 |
Links to Exploration step
Pascal:11-0439339
The document in XML format
<record><TEI><teiHeader><fileDesc><titleStmt><title xml:lang="en" level="a">Similarity and categorization: From vision to touch</title>
<author><name sortKey="Gaissert, Nina" sort="Gaissert, Nina" uniqKey="Gaissert N" first="Nina" last="Gaissert">Nina Gaissert</name>
<affiliation><inist:fA14 i1="01"><s1>Max Planck Institute for Biological Cybernetics, Tübingen, Spemannstr. 38</s1>
<s2>72076 Tübingen</s2>
<s3>DEU</s3>
<sZ>1 aut.</sZ>
<sZ>2 aut.</sZ>
</inist:fA14>
</affiliation>
</author>
<author><name sortKey="Bulthoff, Heinrich H" sort="Bulthoff, Heinrich H" uniqKey="Bulthoff H" first="Heinrich H." last="Bülthoff">Heinrich H. Bülthoff</name>
<affiliation><inist:fA14 i1="01"><s1>Max Planck Institute for Biological Cybernetics, Tübingen, Spemannstr. 38</s1>
<s2>72076 Tübingen</s2>
<s3>DEU</s3>
<sZ>1 aut.</sZ>
<sZ>2 aut.</sZ>
</inist:fA14>
</affiliation>
<affiliation><inist:fA14 i1="02"><s1>Department of Brain and Cognitive Engineering, Korea University, Anam-dong, Seongbuk-gu, Seoul 136-713</s1>
<s2>Seoul</s2>
<s3>KOR</s3>
<sZ>2 aut.</sZ>
<sZ>3 aut.</sZ>
</inist:fA14>
</affiliation>
</author>
<author><name sortKey="Wallraven, Christian" sort="Wallraven, Christian" uniqKey="Wallraven C" first="Christian" last="Wallraven">Christian Wallraven</name>
<affiliation><inist:fA14 i1="02"><s1>Department of Brain and Cognitive Engineering, Korea University, Anam-dong, Seongbuk-gu, Seoul 136-713</s1>
<s2>Seoul</s2>
<s3>KOR</s3>
<sZ>2 aut.</sZ>
<sZ>3 aut.</sZ>
</inist:fA14>
</affiliation>
</author>
</titleStmt>
<publicationStmt><idno type="wicri:source">INIST</idno>
<idno type="inist">11-0439339</idno>
<date when="2011">2011</date>
<idno type="stanalyst">PASCAL 11-0439339 INIST</idno>
<idno type="RBID">Pascal:11-0439339</idno>
<idno type="wicri:Area/PascalFrancis/Corpus">000433</idno>
</publicationStmt>
<sourceDesc><biblStruct><analytic><title xml:lang="en" level="a">Similarity and categorization: From vision to touch</title>
<author><name sortKey="Gaissert, Nina" sort="Gaissert, Nina" uniqKey="Gaissert N" first="Nina" last="Gaissert">Nina Gaissert</name>
<affiliation><inist:fA14 i1="01"><s1>Max Planck Institute for Biological Cybernetics, Tübingen, Spemannstr. 38</s1>
<s2>72076 Tübingen</s2>
<s3>DEU</s3>
<sZ>1 aut.</sZ>
<sZ>2 aut.</sZ>
</inist:fA14>
</affiliation>
</author>
<author><name sortKey="Bulthoff, Heinrich H" sort="Bulthoff, Heinrich H" uniqKey="Bulthoff H" first="Heinrich H." last="Bülthoff">Heinrich H. Bülthoff</name>
<affiliation><inist:fA14 i1="01"><s1>Max Planck Institute for Biological Cybernetics, Tübingen, Spemannstr. 38</s1>
<s2>72076 Tübingen</s2>
<s3>DEU</s3>
<sZ>1 aut.</sZ>
<sZ>2 aut.</sZ>
</inist:fA14>
</affiliation>
<affiliation><inist:fA14 i1="02"><s1>Department of Brain and Cognitive Engineering, Korea University, Anam-dong, Seongbuk-gu, Seoul 136-713</s1>
<s2>Seoul</s2>
<s3>KOR</s3>
<sZ>2 aut.</sZ>
<sZ>3 aut.</sZ>
</inist:fA14>
</affiliation>
</author>
<author><name sortKey="Wallraven, Christian" sort="Wallraven, Christian" uniqKey="Wallraven C" first="Christian" last="Wallraven">Christian Wallraven</name>
<affiliation><inist:fA14 i1="02"><s1>Department of Brain and Cognitive Engineering, Korea University, Anam-dong, Seongbuk-gu, Seoul 136-713</s1>
<s2>Seoul</s2>
<s3>KOR</s3>
<sZ>2 aut.</sZ>
<sZ>3 aut.</sZ>
</inist:fA14>
</affiliation>
</author>
</analytic>
<series><title level="j" type="main">Acta psychologica</title>
<title level="j" type="abbreviated">Acta psychol.</title>
<idno type="ISSN">0001-6918</idno>
<imprint><date when="2011">2011</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
<seriesStmt><title level="j" type="main">Acta psychologica</title>
<title level="j" type="abbreviated">Acta psychol.</title>
<idno type="ISSN">0001-6918</idno>
</seriesStmt>
</fileDesc>
<profileDesc><textClass><keywords scheme="KwdEn" xml:lang="en"><term>Categorization</term>
<term>Experimental study</term>
<term>Human</term>
<term>Intermodal perception</term>
<term>Space perception</term>
<term>Tactile sensitivity</term>
<term>Vision</term>
</keywords>
<keywords scheme="Pascal" xml:lang="fr"><term>Catégorisation</term>
<term>Vision</term>
<term>Sensibilité tactile</term>
<term>Perception intermodale</term>
<term>Perception espace</term>
<term>Etude expérimentale</term>
<term>Homme</term>
</keywords>
</textClass>
</profileDesc>
</teiHeader>
<front><div type="abstract" xml:lang="en">Even though human perceptual development relies on combining multiple modalities, most categorization studies so far have focused on the visual modality. To better understand the mechanisms underlying multisensory categorization, we analyzed visual and haptic perceptual spaces and compared them with human categorization behavior. As stimuli we used a three-dimensional object space of complex, parametrically-defined objects. First, we gathered similarity ratings for all objects and analyzed the perceptual spaces of both modalities using multidimensional scaling analysis. Next, we performed three different categorization tasks which are representative of every-day learning scenarios: in a fully unconstrained task, objects were freely categorized, in a semi-constrained task, exactly three groups had to be created, whereas in a constrained task, participants received three prototype objects and had to assign all other objects accordingly. We found that the haptic modality was on par with the visual modality both in recovering the topology of the physical space and in solving the categorization tasks. We also found that within-category similarity was consistently higher than across-category similarity for all categorization tasks and thus show how perceptual spaces based on similarity can explain visual and haptic object categorization. Our results suggest that both modalities employ similar processes in forming categories of complex objects.</div>
</front>
</TEI>
<inist><standard h6="B"><pA><fA01 i1="01" i2="1"><s0>0001-6918</s0>
</fA01>
<fA02 i1="01"><s0>APSOAZ</s0>
</fA02>
<fA03 i2="1"><s0>Acta psychol.</s0>
</fA03>
<fA05><s2>138</s2>
</fA05>
<fA06><s2>1</s2>
</fA06>
<fA08 i1="01" i2="1" l="ENG"><s1>Similarity and categorization: From vision to touch</s1>
</fA08>
<fA11 i1="01" i2="1"><s1>GAISSERT (Nina)</s1>
</fA11>
<fA11 i1="02" i2="1"><s1>BÜLTHOFF (Heinrich H.)</s1>
</fA11>
<fA11 i1="03" i2="1"><s1>WALLRAVEN (Christian)</s1>
</fA11>
<fA14 i1="01"><s1>Max Planck Institute for Biological Cybernetics, Tübingen, Spemannstr. 38</s1>
<s2>72076 Tübingen</s2>
<s3>DEU</s3>
<sZ>1 aut.</sZ>
<sZ>2 aut.</sZ>
</fA14>
<fA14 i1="02"><s1>Department of Brain and Cognitive Engineering, Korea University, Anam-dong, Seongbuk-gu, Seoul 136-713</s1>
<s2>Seoul</s2>
<s3>KOR</s3>
<sZ>2 aut.</sZ>
<sZ>3 aut.</sZ>
</fA14>
<fA20><s1>219-230</s1>
</fA20>
<fA21><s1>2011</s1>
</fA21>
<fA23 i1="01"><s0>ENG</s0>
</fA23>
<fA43 i1="01"><s1>INIST</s1>
<s2>2174</s2>
<s5>354000191255340280</s5>
</fA43>
<fA44><s0>0000</s0>
<s1>© 2011 INIST-CNRS. All rights reserved.</s1>
</fA44>
<fA45><s0>3/4 p.</s0>
</fA45>
<fA47 i1="01" i2="1"><s0>11-0439339</s0>
</fA47>
<fA60><s1>P</s1>
</fA60>
<fA61><s0>A</s0>
</fA61>
<fA64 i1="01" i2="1"><s0>Acta psychologica</s0>
</fA64>
<fA66 i1="01"><s0>GBR</s0>
</fA66>
<fC01 i1="01" l="ENG"><s0>Even though human perceptual development relies on combining multiple modalities, most categorization studies so far have focused on the visual modality. To better understand the mechanisms underlying multisensory categorization, we analyzed visual and haptic perceptual spaces and compared them with human categorization behavior. As stimuli we used a three-dimensional object space of complex, parametrically-defined objects. First, we gathered similarity ratings for all objects and analyzed the perceptual spaces of both modalities using multidimensional scaling analysis. Next, we performed three different categorization tasks which are representative of every-day learning scenarios: in a fully unconstrained task, objects were freely categorized, in a semi-constrained task, exactly three groups had to be created, whereas in a constrained task, participants received three prototype objects and had to assign all other objects accordingly. We found that the haptic modality was on par with the visual modality both in recovering the topology of the physical space and in solving the categorization tasks. We also found that within-category similarity was consistently higher than across-category similarity for all categorization tasks and thus show how perceptual spaces based on similarity can explain visual and haptic object categorization. Our results suggest that both modalities employ similar processes in forming categories of complex objects.</s0>
</fC01>
<fC02 i1="01" i2="X"><s0>002A26E03</s0>
</fC02>
<fC02 i1="02" i2="X"><s0>002A26E05</s0>
</fC02>
<fC02 i1="03" i2="X"><s0>002A26E08</s0>
</fC02>
<fC03 i1="01" i2="X" l="FRE"><s0>Catégorisation</s0>
<s5>01</s5>
</fC03>
<fC03 i1="01" i2="X" l="ENG"><s0>Categorization</s0>
<s5>01</s5>
</fC03>
<fC03 i1="01" i2="X" l="SPA"><s0>Categorización</s0>
<s5>01</s5>
</fC03>
<fC03 i1="02" i2="X" l="FRE"><s0>Vision</s0>
<s5>02</s5>
</fC03>
<fC03 i1="02" i2="X" l="ENG"><s0>Vision</s0>
<s5>02</s5>
</fC03>
<fC03 i1="02" i2="X" l="SPA"><s0>Visión</s0>
<s5>02</s5>
</fC03>
<fC03 i1="03" i2="X" l="FRE"><s0>Sensibilité tactile</s0>
<s5>03</s5>
</fC03>
<fC03 i1="03" i2="X" l="ENG"><s0>Tactile sensitivity</s0>
<s5>03</s5>
</fC03>
<fC03 i1="03" i2="X" l="SPA"><s0>Sensibilidad tactil</s0>
<s5>03</s5>
</fC03>
<fC03 i1="04" i2="X" l="FRE"><s0>Perception intermodale</s0>
<s5>05</s5>
</fC03>
<fC03 i1="04" i2="X" l="ENG"><s0>Intermodal perception</s0>
<s5>05</s5>
</fC03>
<fC03 i1="04" i2="X" l="SPA"><s0>Percepción intermodal</s0>
<s5>05</s5>
</fC03>
<fC03 i1="05" i2="X" l="FRE"><s0>Perception espace</s0>
<s5>06</s5>
</fC03>
<fC03 i1="05" i2="X" l="ENG"><s0>Space perception</s0>
<s5>06</s5>
</fC03>
<fC03 i1="05" i2="X" l="SPA"><s0>Percepción espacio</s0>
<s5>06</s5>
</fC03>
<fC03 i1="06" i2="X" l="FRE"><s0>Etude expérimentale</s0>
<s5>07</s5>
</fC03>
<fC03 i1="06" i2="X" l="ENG"><s0>Experimental study</s0>
<s5>07</s5>
</fC03>
<fC03 i1="06" i2="X" l="SPA"><s0>Estudio experimental</s0>
<s5>07</s5>
</fC03>
<fC03 i1="07" i2="X" l="FRE"><s0>Homme</s0>
<s5>18</s5>
</fC03>
<fC03 i1="07" i2="X" l="ENG"><s0>Human</s0>
<s5>18</s5>
</fC03>
<fC03 i1="07" i2="X" l="SPA"><s0>Hombre</s0>
<s5>18</s5>
</fC03>
<fC07 i1="01" i2="X" l="FRE"><s0>Cognition</s0>
<s5>37</s5>
</fC07>
<fC07 i1="01" i2="X" l="ENG"><s0>Cognition</s0>
<s5>37</s5>
</fC07>
<fC07 i1="01" i2="X" l="SPA"><s0>Cognición</s0>
<s5>37</s5>
</fC07>
<fN21><s1>297</s1>
</fN21>
</pA>
</standard>
<server><NO>PASCAL 11-0439339 INIST</NO>
<ET>Similarity and categorization: From vision to touch</ET>
<AU>GAISSERT (Nina); BÜLTHOFF (Heinrich H.); WALLRAVEN (Christian)</AU>
<AF>Max Planck Institute for Biological Cybernetics, Tübingen, Spemannstr. 38/72076 Tübingen/Allemagne (1 aut., 2 aut.); Department of Brain and Cognitive Engineering, Korea University, Anam-dong, Seongbuk-gu, Seoul 136-713/Seoul/Corée, République de (2 aut., 3 aut.)</AF>
<DT>Publication en série; Niveau analytique</DT>
<SO>Acta psychologica; ISSN 0001-6918; Coden APSOAZ; Royaume-Uni; Da. 2011; Vol. 138; No. 1; Pp. 219-230; Bibl. 3/4 p.</SO>
<LA>Anglais</LA>
<EA>Even though human perceptual development relies on combining multiple modalities, most categorization studies so far have focused on the visual modality. To better understand the mechanisms underlying multisensory categorization, we analyzed visual and haptic perceptual spaces and compared them with human categorization behavior. As stimuli we used a three-dimensional object space of complex, parametrically-defined objects. First, we gathered similarity ratings for all objects and analyzed the perceptual spaces of both modalities using multidimensional scaling analysis. Next, we performed three different categorization tasks which are representative of every-day learning scenarios: in a fully unconstrained task, objects were freely categorized, in a semi-constrained task, exactly three groups had to be created, whereas in a constrained task, participants received three prototype objects and had to assign all other objects accordingly. We found that the haptic modality was on par with the visual modality both in recovering the topology of the physical space and in solving the categorization tasks. We also found that within-category similarity was consistently higher than across-category similarity for all categorization tasks and thus show how perceptual spaces based on similarity can explain visual and haptic object categorization. Our results suggest that both modalities employ similar processes in forming categories of complex objects.</EA>
<CC>002A26E03; 002A26E05; 002A26E08</CC>
<FD>Catégorisation; Vision; Sensibilité tactile; Perception intermodale; Perception espace; Etude expérimentale; Homme</FD>
<FG>Cognition</FG>
<ED>Categorization; Vision; Tactile sensitivity; Intermodal perception; Space perception; Experimental study; Human</ED>
<EG>Cognition</EG>
<SD>Categorización; Visión; Sensibilidad tactil; Percepción intermodal; Percepción espacio; Estudio experimental; Hombre</SD>
<LO>INIST-2174.354000191255340280</LO>
<ID>11-0439339</ID>
</server>
</inist>
</record>
To manipulate this document under Unix (Dilib)
EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/HapticV1/Data/PascalFrancis/Corpus
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 000433 | SxmlIndent | more
Or
HfdSelect -h $EXPLOR_AREA/Data/PascalFrancis/Corpus/biblio.hfd -nk 000433 | SxmlIndent | more
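Once the record has been extracted with one of the commands above (e.g. redirected to a file), individual fields can be pulled out with standard XML tools. A minimal Python sketch, using a small embedded fragment that mirrors the `<keywords scheme="KwdEn">` block of this record; note that the full record uses an undeclared `inist:` prefix, so a strict XML parser only accepts well-formed fragments like this one:

```python
import xml.etree.ElementTree as ET

# Fragment mirroring the KwdEn keywords block of the record above.
# In practice, read this from the output of HfdSelect | SxmlIndent.
fragment = (
    '<keywords scheme="KwdEn" xml:lang="en">'
    '<term>Categorization</term>'
    '<term>Vision</term>'
    '<term>Tactile sensitivity</term>'
    '</keywords>'
)

root = ET.fromstring(fragment)
# Collect the text of every <term> child, preserving document order.
terms = [t.text for t in root.findall("term")]
print("; ".join(terms))  # → Categorization; Vision; Tactile sensitivity
```

The same pattern applies to any other well-formed block of the TEI header, such as the `<title>` or `<author>` elements.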
To put a link to this page in the Wicri network
{{Explor lien |wiki= Ticri/CIDE |area= HapticV1 |flux= PascalFrancis |étape= Corpus |type= RBID |clé= Pascal:11-0439339 |texte= Similarity and categorization: From vision to touch }}
This area was generated with Dilib version V0.6.23.