Exploration server on haptic devices

Warning: this site is under development!
Warning: this site is generated automatically from raw corpora.
The information is therefore not validated.

Crossmodal information for visual and haptic discrimination

Internal identifier: 000C81 (PascalFrancis/Curation); previous: 000C80; next: 000C82

Crossmodal information for visual and haptic discrimination

Authors: Flip Phillips [United States]; Eric J. L. Egan [United States]

Source:

RBID: Pascal:09-0392751

French descriptors

English descriptors

Abstract

Both our visual and haptic systems contribute to the perception of the three-dimensional world, especially the proximate perception of objects. The interaction of these systems has been the subject of some debate over the years, ranging from the philosophically posed Molyneux problem to the more pragmatic examination of their psychophysical relationship. To better understand the nature of this interaction we have performed a variety of experiments characterizing the detection, discrimination, and production of 3D shape. A stimulus set of 25 complex, natural-appearing, noisy 3D target objects was statistically specified in the Fourier domain and manufactured using a 3D printer. A series of paired-comparison experiments examined subjects' unimodal (visual-visual and haptic-haptic) and crossmodal (visual-haptic) perceptual abilities. Additionally, subjects sculpted objects using uni- or crossmodal source information. In all experiments, performance in the unimodal conditions was similar across modalities, and unimodal presentation fared better than crossmodal. Also, the spatial frequency of object features affected performance differentially across the range used in this experiment. The sculpted objects were scanned in 3D and the resulting geometry was compared metrically and statistically to the original stimuli. Objects with higher spatial frequency were harder to sculpt when limited to haptic input compared to only visual input; the opposite was found for objects with low spatial frequency. The psychophysical discrimination and comparison experiments yielded similar findings. There is a marked performance difference between the visual and haptic systems, and these differences were systematically distributed along the range of feature details. The existence of non-universal (i.e. modality-specific) representations explains the poor crossmodal performance. Our current findings suggest that haptic and visual information is either integrated into a multi-modal form, or each is independent and somewhat efficient translation is possible. Vision shows a distinct advantage when dealing with higher-frequency objects, but both modalities are effective when comparing objects that differ by a large amount.
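The abstract's stimulus-generation step, specifying noisy shapes statistically in the Fourier domain, can be sketched in a few lines of Python. The sketch below is illustrative only: the 1/f^beta amplitude falloff, the grid size, and the use of a flat heightfield (rather than a closed, printable 3D object) are assumptions, not the authors' actual parameters.

import numpy as np

# Illustrative sketch: a "natural-appearing, noisy" surface whose amplitude
# spectrum is prescribed in the Fourier domain. Lower beta leaves more
# high-spatial-frequency detail; higher beta yields smoother, low-frequency
# forms -- the dimension the experiments manipulate.
def fourier_noise_surface(n=256, beta=1.5, seed=0):
    rng = np.random.default_rng(seed)
    fy = np.fft.fftfreq(n)[:, None]
    fx = np.fft.fftfreq(n)[None, :]
    f = np.hypot(fx, fy)
    f[0, 0] = 1.0                    # avoid division by zero at DC
    amplitude = f ** -beta           # target amplitude spectrum
    phase = rng.uniform(0.0, 2.0 * np.pi, (n, n))
    surface = np.fft.ifft2(amplitude * np.exp(1j * phase)).real
    return (surface - surface.mean()) / surface.std()

bumps = fourier_noise_surface()      # e.g. a radial perturbation map for a sphere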
pA  
A01 01  1    @0 0277-786X
A02 01      @0 PSISDG
A03   1    @0 Proc. SPIE Int. Soc. Opt. Eng.
A05       @2 7240
A08 01  1  ENG  @1 Crossmodal information for visual and haptic discrimination
A09 01  1  ENG  @1 Human vision and electronic imaging XIV : 19-22 January 2009, San Jose, California, United States
A11 01  1    @1 PHILLIPS (Flip)
A11 02  1    @1 EGAN (Eric J. L.)
A12 01  1    @1 ROGOWITZ (Bernice Ellen) @9 ed.
A12 02  1    @1 PAPPAS (Thrasyvoulos N.) @9 ed.
A14 01      @1 Skidmore College @2 Saratoga Springs, NY @3 USA @Z 1 aut.
A14 02      @1 The Ohio State University @2 Columbus, OH @3 USA @Z 2 aut.
A18 01  1    @1 Society of photo-optical instrumentation engineers @3 USA @9 org-cong.
A18 02  1    @1 IS&T--The Society for Imaging Science and Technology @3 USA @9 org-cong.
A20       @2 72400H.1-72400H.15
A21       @1 2009
A23 01      @0 ENG
A25 01      @1 SPIE @2 Bellingham (Wash.)
A25 02      @1 IS&T @2 Springfield (Va.)
A26 01      @0 978-0-8194-7490-2
A26 02      @0 0-8194-7490-8
A43 01      @1 INIST @2 21760 @5 354000172963110140
A44       @0 0000 @1 © 2009 INIST-CNRS. All rights reserved.
A45       @0 24 ref.
A47 01  1    @0 09-0392751
A60       @1 P @2 C
A61       @0 A
A64 01  1    @0 Proceedings of SPIE, the International Society for Optical Engineering
A66 01      @0 USA
C01 01    ENG  @0 Both our visual and haptic systems contribute to the perception of the three-dimensional world, especially the proximate perception of objects. The interaction of these systems has been the subject of some debate over the years, ranging from the philosophically posed Molyneux problem to the more pragmatic examination of their psychophysical relationship. To better understand the nature of this interaction we have performed a variety of experiments characterizing the detection, discrimination, and production of 3D shape. A stimulus set of 25 complex, natural-appearing, noisy 3D target objects was statistically specified in the Fourier domain and manufactured using a 3D printer. A series of paired-comparison experiments examined subjects' unimodal (visual-visual and haptic-haptic) and crossmodal (visual-haptic) perceptual abilities. Additionally, subjects sculpted objects using uni- or crossmodal source information. In all experiments, performance in the unimodal conditions was similar across modalities, and unimodal presentation fared better than crossmodal. Also, the spatial frequency of object features affected performance differentially across the range used in this experiment. The sculpted objects were scanned in 3D and the resulting geometry was compared metrically and statistically to the original stimuli. Objects with higher spatial frequency were harder to sculpt when limited to haptic input compared to only visual input; the opposite was found for objects with low spatial frequency. The psychophysical discrimination and comparison experiments yielded similar findings. There is a marked performance difference between the visual and haptic systems, and these differences were systematically distributed along the range of feature details. The existence of non-universal (i.e. modality-specific) representations explains the poor crossmodal performance. Our current findings suggest that haptic and visual information is either integrated into a multi-modal form, or each is independent and somewhat efficient translation is possible. Vision shows a distinct advantage when dealing with higher-frequency objects, but both modalities are effective when comparing objects that differ by a large amount.
C02 01  3    @0 001B00A30C
C02 02  X    @0 002A25I
C03 01  3  FRE  @0 Vision @5 03
C03 01  3  ENG  @0 Vision @5 03
C03 02  3  FRE  @0 Imprimante @5 11
C03 02  3  ENG  @0 Printers @5 11
C03 03  X  FRE  @0 Discrimination @5 61
C03 03  X  ENG  @0 Discrimination @5 61
C03 03  X  SPA  @0 Discriminación @5 61
C03 04  X  FRE  @0 Modèle 3 dimensions @5 62
C03 04  X  ENG  @0 Three dimensional model @5 62
C03 04  X  SPA  @0 Modelo 3 dimensiones @5 62
C03 05  X  FRE  @0 Mesure de distance @5 63
C03 05  X  ENG  @0 Distance measurement @5 63
C03 05  X  SPA  @0 Medición distancia @5 63
C03 06  3  FRE  @0 Volume @5 64
C03 06  3  ENG  @0 Volume @5 64
C03 07  X  FRE  @0 Méthode statistique @5 65
C03 07  X  ENG  @0 Statistical method @5 65
C03 07  X  SPA  @0 Método estadístico @5 65
C03 08  3  FRE  @0 Evaluation performance @5 66
C03 08  3  ENG  @0 Performance evaluation @5 66
C03 09  3  FRE  @0 Fréquence spatiale @5 67
C03 09  3  ENG  @0 Spatial frequency @5 67
C03 10  X  FRE  @0 Méthode différentielle @5 68
C03 10  X  ENG  @0 Differential method @5 68
C03 10  X  SPA  @0 Método diferencial @5 68
C03 11  X  FRE  @0 Basse fréquence @5 69
C03 11  X  ENG  @0 Low frequency @5 69
C03 11  X  SPA  @0 Baja frecuencia @5 69
C03 12  3  FRE  @0 0130C @4 INC @5 83
C03 13  3  FRE  @0 4266S @4 INC @5 84
N21       @1 285
N44 01      @1 OTO
N82       @1 OTO
pR  
A30 01  1  ENG  @1 Electronic Imaging Science and Technology Symposium @3 San Jose CA USA @4 2008

Links to previous steps (curation, corpus...)


Links to Exploration step

Pascal:09-0392751

The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en" level="a">Crossmodal information for visual and haptic discrimination</title>
<author>
<name sortKey="Phillips, Flip" sort="Phillips, Flip" uniqKey="Phillips F" first="Flip" last="Phillips">Flip Phillips</name>
<affiliation wicri:level="1">
<inist:fA14 i1="01">
<s1>Skidmore College</s1>
<s2>Saratoga Springs, NY</s2>
<s3>USA</s3>
<sZ>1 aut.</sZ>
</inist:fA14>
<country>États-Unis</country>
</affiliation>
</author>
<author>
<name sortKey="Egan, Eric J L" sort="Egan, Eric J L" uniqKey="Egan E" first="Eric J. L." last="Egan">Eric J. L. Egan</name>
<affiliation wicri:level="1">
<inist:fA14 i1="02">
<s1>The Ohio State University</s1>
<s2>Columbus, OH</s2>
<s3>USA</s3>
<sZ>2 aut.</sZ>
</inist:fA14>
<country>États-Unis</country>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">INIST</idno>
<idno type="inist">09-0392751</idno>
<date when="2009">2009</date>
<idno type="stanalyst">PASCAL 09-0392751 INIST</idno>
<idno type="RBID">Pascal:09-0392751</idno>
<idno type="wicri:Area/PascalFrancis/Corpus">000751</idno>
<idno type="wicri:Area/PascalFrancis/Curation">000C81</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a">Crossmodal information for visual and haptic discrimination</title>
<author>
<name sortKey="Phillips, Flip" sort="Phillips, Flip" uniqKey="Phillips F" first="Flip" last="Phillips">Flip Phillips</name>
<affiliation wicri:level="1">
<inist:fA14 i1="01">
<s1>Skidmore College</s1>
<s2>Saratoga Springs, NY</s2>
<s3>USA</s3>
<sZ>1 aut.</sZ>
</inist:fA14>
<country>États-Unis</country>
</affiliation>
</author>
<author>
<name sortKey="Egan, Eric J L" sort="Egan, Eric J L" uniqKey="Egan E" first="Eric J. L." last="Egan">Eric J. L. Egan</name>
<affiliation wicri:level="1">
<inist:fA14 i1="02">
<s1>The Ohio State University</s1>
<s2>Columbus, OH</s2>
<s3>USA</s3>
<sZ>2 aut.</sZ>
</inist:fA14>
<country>États-Unis</country>
</affiliation>
</author>
</analytic>
<series>
<title level="j" type="main">Proceedings of SPIE, the International Society for Optical Engineering</title>
<title level="j" type="abbreviated">Proc. SPIE Int. Soc. Opt. Eng.</title>
<idno type="ISSN">0277-786X</idno>
<imprint>
<date when="2009">2009</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
<seriesStmt>
<title level="j" type="main">Proceedings of SPIE, the International Society for Optical Engineering</title>
<title level="j" type="abbreviated">Proc. SPIE Int. Soc. Opt. Eng.</title>
<idno type="ISSN">0277-786X</idno>
</seriesStmt>
</fileDesc>
<profileDesc>
<textClass>
<keywords scheme="KwdEn" xml:lang="en">
<term>Differential method</term>
<term>Discrimination</term>
<term>Distance measurement</term>
<term>Low frequency</term>
<term>Performance evaluation</term>
<term>Printers</term>
<term>Spatial frequency</term>
<term>Statistical method</term>
<term>Three dimensional model</term>
<term>Vision</term>
<term>Volume</term>
</keywords>
<keywords scheme="Pascal" xml:lang="fr">
<term>Vision</term>
<term>Imprimante</term>
<term>Discrimination</term>
<term>Modèle 3 dimensions</term>
<term>Mesure de distance</term>
<term>Volume</term>
<term>Méthode statistique</term>
<term>Evaluation performance</term>
<term>Fréquence spatiale</term>
<term>Méthode différentielle</term>
<term>Basse fréquence</term>
<term>0130C</term>
<term>4266S</term>
</keywords>
<keywords scheme="Wicri" type="topic" xml:lang="fr">
<term>Imprimante</term>
<term>Méthode statistique</term>
</keywords>
</textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">Both our visual and haptic systems contribute to the perception of the three dimensional world, especially the proximat perception of objects, The interaction of these systems has been the subject of some debate over the years, ranging from the philosophically posed Molyneux problem to the more pragmatic examination of their psychophysical relationship. To better understand the nature of this interaction we have performed a variety of experiments characterizing; the detection. discrimination, and production of 3D shape. A stimulus set of 25 complex. natural appearing, noisy 3D target objects were statistically specified in the Fourier domain and manufactured using a 3D printer. A series of paired-comparison experiments examined subjects' unimodal (visual-visual and haptic-haptic) and crossmodal (visual-haptic) perceptual abilities. Additionally, subjects sculpted objects using uni- or crossmodal source information. In all experiments, the performance in the unirnodal conditions were similar to one another and unimodal presentation fared better than crossmodal. Also, the spatial frequency of object features affected performance differentially across the range used in this experiment. The sculpted objects were scanned in 3D and the resulting geometry was compared metrically and statistically to the original stimuli, Objects with higher spatial frequency were harder to sculpt when limited to haptic input compared to only visual input. The opposite was found for objects with low spatial frequency. The psychophysical discrimination and comparison experiments yielded similar findings. There is a marked performance difference between the visual and haptic systems and these differences were systematically distributed along the range of feature details. The existence of non-universal (i.e. modality-specific) representations explain the poor crossrnodal performance. Our current findings suggest that haptic and visual information is either integrated into a multi-modal form, or each is independent and somewhat efficient translation is possible. Vision shows a distinct, advantage when dealing with higher frequency objects but both modalities are effective when comparing objects that differ by a large amount.</div>
</front>
</TEI>
<inist>
<standard h6="B">
<pA>
<fA01 i1="01" i2="1">
<s0>0277-786X</s0>
</fA01>
<fA02 i1="01">
<s0>PSISDG</s0>
</fA02>
<fA03 i2="1">
<s0>Proc. SPIE Int. Soc. Opt. Eng.</s0>
</fA03>
<fA05>
<s2>7240</s2>
</fA05>
<fA08 i1="01" i2="1" l="ENG">
<s1>Crossmodal information for visual and haptic discrimination</s1>
</fA08>
<fA09 i1="01" i2="1" l="ENG">
<s1>Human vision and electronic imaging XIV : 19-22 January 2009, San Jose, California, United States</s1>
</fA09>
<fA11 i1="01" i2="1">
<s1>PHILLIPS (Flip)</s1>
</fA11>
<fA11 i1="02" i2="1">
<s1>EGAN (Eric J. L.)</s1>
</fA11>
<fA12 i1="01" i2="1">
<s1>ROGOWITZ (Bernice Ellen)</s1>
<s9>ed.</s9>
</fA12>
<fA12 i1="02" i2="1">
<s1>PAPPAS (Thrasyvoulos N.)</s1>
<s9>ed.</s9>
</fA12>
<fA14 i1="01">
<s1>Skidmore College</s1>
<s2>Saratoga Springs, NY</s2>
<s3>USA</s3>
<sZ>1 aut.</sZ>
</fA14>
<fA14 i1="02">
<s1>The Ohio State University</s1>
<s2>Columbus, OH</s2>
<s3>USA</s3>
<sZ>2 aut.</sZ>
</fA14>
<fA18 i1="01" i2="1">
<s1>Society of photo-optical instrumentation engineers</s1>
<s3>USA</s3>
<s9>org-cong.</s9>
</fA18>
<fA18 i1="02" i2="1">
<s1>IS&amp;T--The Society for Imaging Science and Technology</s1>
<s3>USA</s3>
<s9>org-cong.</s9>
</fA18>
<fA20>
<s2>72400H.1-72400H.15</s2>
</fA20>
<fA21>
<s1>2009</s1>
</fA21>
<fA23 i1="01">
<s0>ENG</s0>
</fA23>
<fA25 i1="01">
<s1>SPIE</s1>
<s2>Bellingham (Wash.)</s2>
</fA25>
<fA25 i1="02">
<s1>IS&amp;T</s1>
<s2>Springfield (Va.)</s2>
</fA25>
<fA26 i1="01">
<s0>978-0-8194-7490-2</s0>
</fA26>
<fA26 i1="02">
<s0>0-8194-7490-8</s0>
</fA26>
<fA43 i1="01">
<s1>INIST</s1>
<s2>21760</s2>
<s5>354000172963110140</s5>
</fA43>
<fA44>
<s0>0000</s0>
<s1>© 2009 INIST-CNRS. All rights reserved.</s1>
</fA44>
<fA45>
<s0>24 ref.</s0>
</fA45>
<fA47 i1="01" i2="1">
<s0>09-0392751</s0>
</fA47>
<fA60>
<s1>P</s1>
<s2>C</s2>
</fA60>
<fA61>
<s0>A</s0>
</fA61>
<fA64 i1="01" i2="1">
<s0>Proceedings of SPIE, the International Society for Optical Engineering</s0>
</fA64>
<fA66 i1="01">
<s0>USA</s0>
</fA66>
<fC01 i1="01" l="ENG">
<s0>Both our visual and haptic systems contribute to the perception of the three-dimensional world, especially the proximate perception of objects. The interaction of these systems has been the subject of some debate over the years, ranging from the philosophically posed Molyneux problem to the more pragmatic examination of their psychophysical relationship. To better understand the nature of this interaction we have performed a variety of experiments characterizing the detection, discrimination, and production of 3D shape. A stimulus set of 25 complex, natural-appearing, noisy 3D target objects was statistically specified in the Fourier domain and manufactured using a 3D printer. A series of paired-comparison experiments examined subjects' unimodal (visual-visual and haptic-haptic) and crossmodal (visual-haptic) perceptual abilities. Additionally, subjects sculpted objects using uni- or crossmodal source information. In all experiments, performance in the unimodal conditions was similar across modalities, and unimodal presentation fared better than crossmodal. Also, the spatial frequency of object features affected performance differentially across the range used in this experiment. The sculpted objects were scanned in 3D and the resulting geometry was compared metrically and statistically to the original stimuli. Objects with higher spatial frequency were harder to sculpt when limited to haptic input compared to only visual input; the opposite was found for objects with low spatial frequency. The psychophysical discrimination and comparison experiments yielded similar findings. There is a marked performance difference between the visual and haptic systems, and these differences were systematically distributed along the range of feature details. The existence of non-universal (i.e. modality-specific) representations explains the poor crossmodal performance. Our current findings suggest that haptic and visual information is either integrated into a multi-modal form, or each is independent and somewhat efficient translation is possible. Vision shows a distinct advantage when dealing with higher-frequency objects, but both modalities are effective when comparing objects that differ by a large amount.</s0>
</fC01>
<fC02 i1="01" i2="3">
<s0>001B00A30C</s0>
</fC02>
<fC02 i1="02" i2="X">
<s0>002A25I</s0>
</fC02>
<fC03 i1="01" i2="3" l="FRE">
<s0>Vision</s0>
<s5>03</s5>
</fC03>
<fC03 i1="01" i2="3" l="ENG">
<s0>Vision</s0>
<s5>03</s5>
</fC03>
<fC03 i1="02" i2="3" l="FRE">
<s0>Imprimante</s0>
<s5>11</s5>
</fC03>
<fC03 i1="02" i2="3" l="ENG">
<s0>Printers</s0>
<s5>11</s5>
</fC03>
<fC03 i1="03" i2="X" l="FRE">
<s0>Discrimination</s0>
<s5>61</s5>
</fC03>
<fC03 i1="03" i2="X" l="ENG">
<s0>Discrimination</s0>
<s5>61</s5>
</fC03>
<fC03 i1="03" i2="X" l="SPA">
<s0>Discriminación</s0>
<s5>61</s5>
</fC03>
<fC03 i1="04" i2="X" l="FRE">
<s0>Modèle 3 dimensions</s0>
<s5>62</s5>
</fC03>
<fC03 i1="04" i2="X" l="ENG">
<s0>Three dimensional model</s0>
<s5>62</s5>
</fC03>
<fC03 i1="04" i2="X" l="SPA">
<s0>Modelo 3 dimensiones</s0>
<s5>62</s5>
</fC03>
<fC03 i1="05" i2="X" l="FRE">
<s0>Mesure de distance</s0>
<s5>63</s5>
</fC03>
<fC03 i1="05" i2="X" l="ENG">
<s0>Distance measurement</s0>
<s5>63</s5>
</fC03>
<fC03 i1="05" i2="X" l="SPA">
<s0>Medición distancia</s0>
<s5>63</s5>
</fC03>
<fC03 i1="06" i2="3" l="FRE">
<s0>Volume</s0>
<s5>64</s5>
</fC03>
<fC03 i1="06" i2="3" l="ENG">
<s0>Volume</s0>
<s5>64</s5>
</fC03>
<fC03 i1="07" i2="X" l="FRE">
<s0>Méthode statistique</s0>
<s5>65</s5>
</fC03>
<fC03 i1="07" i2="X" l="ENG">
<s0>Statistical method</s0>
<s5>65</s5>
</fC03>
<fC03 i1="07" i2="X" l="SPA">
<s0>Método estadístico</s0>
<s5>65</s5>
</fC03>
<fC03 i1="08" i2="3" l="FRE">
<s0>Evaluation performance</s0>
<s5>66</s5>
</fC03>
<fC03 i1="08" i2="3" l="ENG">
<s0>Performance evaluation</s0>
<s5>66</s5>
</fC03>
<fC03 i1="09" i2="3" l="FRE">
<s0>Fréquence spatiale</s0>
<s5>67</s5>
</fC03>
<fC03 i1="09" i2="3" l="ENG">
<s0>Spatial frequency</s0>
<s5>67</s5>
</fC03>
<fC03 i1="10" i2="X" l="FRE">
<s0>Méthode différentielle</s0>
<s5>68</s5>
</fC03>
<fC03 i1="10" i2="X" l="ENG">
<s0>Differential method</s0>
<s5>68</s5>
</fC03>
<fC03 i1="10" i2="X" l="SPA">
<s0>Método diferencial</s0>
<s5>68</s5>
</fC03>
<fC03 i1="11" i2="X" l="FRE">
<s0>Basse fréquence</s0>
<s5>69</s5>
</fC03>
<fC03 i1="11" i2="X" l="ENG">
<s0>Low frequency</s0>
<s5>69</s5>
</fC03>
<fC03 i1="11" i2="X" l="SPA">
<s0>Baja frecuencia</s0>
<s5>69</s5>
</fC03>
<fC03 i1="12" i2="3" l="FRE">
<s0>0130C</s0>
<s4>INC</s4>
<s5>83</s5>
</fC03>
<fC03 i1="13" i2="3" l="FRE">
<s0>4266S</s0>
<s4>INC</s4>
<s5>84</s5>
</fC03>
<fN21>
<s1>285</s1>
</fN21>
<fN44 i1="01">
<s1>OTO</s1>
</fN44>
<fN82>
<s1>OTO</s1>
</fN82>
</pA>
<pR>
<fA30 i1="01" i2="1" l="ENG">
<s1>Electronic Imaging Science and Technology Symposium</s1>
<s3>San Jose CA USA</s3>
<s4>2008</s4>
</fA30>
</pR>
</standard>
</inist>
</record>
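
A minimal Python sketch for pulling fields out of the record above, assuming the XML has been saved as record.xml. The inist: and wicri: prefixes are not declared in the source, so the sketch binds them to arbitrary placeholder URIs before parsing; the file name and URIs are assumptions, not part of the record.

import re
import xml.etree.ElementTree as ET

# Read the record and make it strictly parseable: escape any bare '&'
# that is not already an entity, and declare the otherwise-unbound
# namespace prefixes on the root element.
raw = open("record.xml", encoding="utf-8").read()
raw = re.sub(r"&(?![A-Za-z]+;|#)", "&amp;", raw)
raw = raw.replace(
    "<record>",
    '<record xmlns:inist="urn:x-inist" xmlns:wicri="urn:x-wicri">',
    1,
)
root = ET.fromstring(raw)

# Title, authors, and English keywords from the TEI header.
title = root.findtext(".//titleStmt/title")
authors = [a.findtext("name") for a in root.findall(".//titleStmt/author")]
keywords = [t.text for t in root.find(".//keywords[@scheme='KwdEn']")]
print(title, authors, keywords, sep="\n")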

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/HapticV1/Data/PascalFrancis/Curation
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 000C81 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/PascalFrancis/Curation/biblio.hfd -nk 000C81 | SxmlIndent | more
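
The same lookup can be scripted. A small sketch, assuming HfdSelect is on the PATH, $EXPLOR_STEP is set as above, and the record XML is written to stdout (as the pipe to SxmlIndent suggests); it saves the output for the parsing example shown earlier:

import os
import subprocess

# Run the HfdSelect command shown above and capture the record's XML.
step = os.environ["EXPLOR_STEP"]
record_xml = subprocess.run(
    ["HfdSelect", "-h", f"{step}/biblio.hfd", "-nk", "000C81"],
    capture_output=True, text=True, check=True,
).stdout
open("record.xml", "w", encoding="utf-8").write(record_xml)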

To link to this page within the Wicri network

{{Explor lien
   |wiki=    Ticri/CIDE
   |area=    HapticV1
   |flux=    PascalFrancis
   |étape=   Curation
   |type=    RBID
   |clé=     Pascal:09-0392751
   |texte=   Crossmodal information for visual and haptic discrimination
}}

Wicri

This area was generated with Dilib version V0.6.23.
Data generation: Mon Jun 13 01:09:46 2016. Site generation: Wed Mar 6 09:54:07 2024