Exploration server on haptic devices

Warning: this site is under development!
Warning: this site was generated by automated means from raw corpora.
The information is therefore not validated.

Crossmodal information for visual and haptic discrimination

Internal identifier: 004921 (Main/Exploration); previous: 004920; next: 004922


Authors: Flip Phillips [United States]; Eric J. L. Egan [United States]

Source :

RBID : Pascal:09-0392751

French descriptors

English descriptors

Abstract

Both our visual and haptic systems contribute to the perception of the three-dimensional world, especially the proximal perception of objects. The interaction of these systems has been the subject of some debate over the years, ranging from the philosophically posed Molyneux problem to the more pragmatic examination of their psychophysical relationship. To better understand the nature of this interaction we have performed a variety of experiments characterizing the detection, discrimination, and production of 3D shape. A stimulus set of 25 complex, natural-appearing, noisy 3D target objects was statistically specified in the Fourier domain and manufactured using a 3D printer. A series of paired-comparison experiments examined subjects' unimodal (visual-visual and haptic-haptic) and crossmodal (visual-haptic) perceptual abilities. Additionally, subjects sculpted objects using uni- or crossmodal source information. In all experiments, performance in the unimodal conditions was similar across conditions, and unimodal presentation fared better than crossmodal. Also, the spatial frequency of object features affected performance differentially across the range used in this experiment. The sculpted objects were scanned in 3D and the resulting geometry was compared metrically and statistically to the original stimuli. Objects with higher spatial frequency were harder to sculpt when limited to haptic input compared to only visual input. The opposite was found for objects with low spatial frequency. The psychophysical discrimination and comparison experiments yielded similar findings. There is a marked performance difference between the visual and haptic systems, and these differences were systematically distributed along the range of feature details. The existence of non-universal (i.e. modality-specific) representations explains the poor crossmodal performance.
Our current findings suggest that haptic and visual information is either integrated into a multi-modal form, or each is independent and a somewhat efficient translation between them is possible. Vision shows a distinct advantage when dealing with higher-frequency objects, but both modalities are effective when comparing objects that differ by a large amount.


Affiliations:


Links to previous steps (curation, corpus...)


The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en" level="a">Crossmodal information for visual and haptic discrimination</title>
<author>
<name sortKey="Phillips, Flip" sort="Phillips, Flip" uniqKey="Phillips F" first="Flip" last="Phillips">Flip Phillips</name>
<affiliation wicri:level="2">
<inist:fA14 i1="01">
<s1>Skidmore College</s1>
<s2>Saratoga Springs, NY</s2>
<s3>USA</s3>
<sZ>1 aut.</sZ>
</inist:fA14>
<country>États-Unis</country>
<placeName>
<region type="state">État de New York</region>
</placeName>
</affiliation>
</author>
<author>
<name sortKey="Egan, Eric J L" sort="Egan, Eric J L" uniqKey="Egan E" first="Eric J. L." last="Egan">Eric J. L. Egan</name>
<affiliation wicri:level="2">
<inist:fA14 i1="02">
<s1>The Ohio State University</s1>
<s2>Columbus, OH</s2>
<s3>USA</s3>
<sZ>2 aut.</sZ>
</inist:fA14>
<country>États-Unis</country>
<placeName>
<region type="state">Ohio</region>
</placeName>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">INIST</idno>
<idno type="inist">09-0392751</idno>
<date when="2009">2009</date>
<idno type="stanalyst">PASCAL 09-0392751 INIST</idno>
<idno type="RBID">Pascal:09-0392751</idno>
<idno type="wicri:Area/PascalFrancis/Corpus">000751</idno>
<idno type="wicri:Area/PascalFrancis/Curation">000C81</idno>
<idno type="wicri:Area/PascalFrancis/Checkpoint">000614</idno>
<idno type="wicri:doubleKey">0277-786X:2009:Phillips F:crossmodal:information:for</idno>
<idno type="wicri:Area/Main/Merge">004A07</idno>
<idno type="wicri:Area/Main/Curation">004921</idno>
<idno type="wicri:Area/Main/Exploration">004921</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a">Crossmodal information for visual and haptic discrimination</title>
<author>
<name sortKey="Phillips, Flip" sort="Phillips, Flip" uniqKey="Phillips F" first="Flip" last="Phillips">Flip Phillips</name>
<affiliation wicri:level="2">
<inist:fA14 i1="01">
<s1>Skidmore College</s1>
<s2>Saratoga Springs, NY</s2>
<s3>USA</s3>
<sZ>1 aut.</sZ>
</inist:fA14>
<country>États-Unis</country>
<placeName>
<region type="state">État de New York</region>
</placeName>
</affiliation>
</author>
<author>
<name sortKey="Egan, Eric J L" sort="Egan, Eric J L" uniqKey="Egan E" first="Eric J. L." last="Egan">Eric J. L. Egan</name>
<affiliation wicri:level="2">
<inist:fA14 i1="02">
<s1>The Ohio State University</s1>
<s2>Columbus, OH</s2>
<s3>USA</s3>
<sZ>2 aut.</sZ>
</inist:fA14>
<country>États-Unis</country>
<placeName>
<region type="state">Ohio</region>
</placeName>
</affiliation>
</author>
</analytic>
<series>
<title level="j" type="main">Proceedings of SPIE, the International Society for Optical Engineering</title>
<title level="j" type="abbreviated">Proc. SPIE Int. Soc. Opt. Eng.</title>
<idno type="ISSN">0277-786X</idno>
<imprint>
<date when="2009">2009</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
<seriesStmt>
<title level="j" type="main">Proceedings of SPIE, the International Society for Optical Engineering</title>
<title level="j" type="abbreviated">Proc. SPIE Int. Soc. Opt. Eng.</title>
<idno type="ISSN">0277-786X</idno>
</seriesStmt>
</fileDesc>
<profileDesc>
<textClass>
<keywords scheme="KwdEn" xml:lang="en">
<term>Differential method</term>
<term>Discrimination</term>
<term>Distance measurement</term>
<term>Low frequency</term>
<term>Performance evaluation</term>
<term>Printers</term>
<term>Spatial frequency</term>
<term>Statistical method</term>
<term>Three dimensional model</term>
<term>Vision</term>
<term>Volume</term>
</keywords>
<keywords scheme="Pascal" xml:lang="fr">
<term>Vision</term>
<term>Imprimante</term>
<term>Discrimination</term>
<term>Modèle 3 dimensions</term>
<term>Mesure de distance</term>
<term>Volume</term>
<term>Méthode statistique</term>
<term>Evaluation performance</term>
<term>Fréquence spatiale</term>
<term>Méthode différentielle</term>
<term>Basse fréquence</term>
<term>0130C</term>
<term>4266S</term>
</keywords>
<keywords scheme="Wicri" type="topic" xml:lang="fr">
<term>Imprimante</term>
<term>Méthode statistique</term>
</keywords>
</textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">Both our visual and haptic systems contribute to the perception of the three-dimensional world, especially the proximal perception of objects. The interaction of these systems has been the subject of some debate over the years, ranging from the philosophically posed Molyneux problem to the more pragmatic examination of their psychophysical relationship. To better understand the nature of this interaction we have performed a variety of experiments characterizing the detection, discrimination, and production of 3D shape. A stimulus set of 25 complex, natural-appearing, noisy 3D target objects was statistically specified in the Fourier domain and manufactured using a 3D printer. A series of paired-comparison experiments examined subjects' unimodal (visual-visual and haptic-haptic) and crossmodal (visual-haptic) perceptual abilities. Additionally, subjects sculpted objects using uni- or crossmodal source information. In all experiments, performance in the unimodal conditions was similar across conditions, and unimodal presentation fared better than crossmodal. Also, the spatial frequency of object features affected performance differentially across the range used in this experiment. The sculpted objects were scanned in 3D and the resulting geometry was compared metrically and statistically to the original stimuli. Objects with higher spatial frequency were harder to sculpt when limited to haptic input compared to only visual input. The opposite was found for objects with low spatial frequency. The psychophysical discrimination and comparison experiments yielded similar findings. There is a marked performance difference between the visual and haptic systems, and these differences were systematically distributed along the range of feature details. The existence of non-universal (i.e. modality-specific) representations explains the poor crossmodal performance.
Our current findings suggest that haptic and visual information is either integrated into a multi-modal form, or each is independent and a somewhat efficient translation between them is possible. Vision shows a distinct advantage when dealing with higher-frequency objects, but both modalities are effective when comparing objects that differ by a large amount.</div>
</front>
</TEI>
<affiliations>
<list>
<country>
<li>États-Unis</li>
</country>
<region>
<li>Ohio</li>
<li>État de New York</li>
</region>
</list>
<tree>
<country name="États-Unis">
<region name="État de New York">
<name sortKey="Phillips, Flip" sort="Phillips, Flip" uniqKey="Phillips F" first="Flip" last="Phillips">Flip Phillips</name>
</region>
<name sortKey="Egan, Eric J L" sort="Egan, Eric J L" uniqKey="Egan E" first="Eric J. L." last="Egan">Eric J. L. Egan</name>
</country>
</tree>
</affiliations>
</record>

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/HapticV1/Data/Main/Exploration
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 004921 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/Main/Exploration/biblio.hfd -nk 004921 | SxmlIndent | more

To link to this page in the Wicri network

{{Explor lien
   |wiki=    Ticri/CIDE
   |area=    HapticV1
   |flux=    Main
   |étape=   Exploration
   |type=    RBID
   |clé=     Pascal:09-0392751
   |texte=   Crossmodal information for visual and haptic discrimination
}}

Wicri

This area was generated with Dilib version V0.6.23.
Data generation: Mon Jun 13 01:09:46 2016. Site generation: Wed Mar 6 09:54:07 2024