Crossmodal information for visual and haptic discrimination
Internal identifier: 000751 (PascalFrancis/Corpus)
Previous: 000750; next: 000752
Crossmodal information for visual and haptic discrimination
Authors: Flip Phillips; Eric J. L. Egan
Source: Proceedings of SPIE, the International Society for Optical Engineering [0277-786X]; 2009.
RBID: Pascal:09-0392751
French descriptors - Pascal (Inist):
Vision, Imprimante, Discrimination, Modèle 3 dimensions, Mesure de distance, Volume, Méthode statistique, Evaluation performance, Fréquence spatiale, Méthode différentielle, Basse fréquence, 0130C, 4266S.
English descriptors - KwdEn:
Differential method, Discrimination, Distance measurement, Low frequency, Performance evaluation, Printers, Spatial frequency, Statistical method, Three dimensional model, Vision, Volume.
Abstract
Both our visual and haptic systems contribute to the perception of the three-dimensional world, especially the proximate perception of objects. The interaction of these systems has been the subject of some debate over the years, ranging from the philosophically posed Molyneux problem to the more pragmatic examination of their psychophysical relationship. To better understand the nature of this interaction we have performed a variety of experiments characterizing the detection, discrimination, and production of 3D shape. A stimulus set of 25 complex, natural-appearing, noisy 3D target objects was statistically specified in the Fourier domain and manufactured using a 3D printer. A series of paired-comparison experiments examined subjects' unimodal (visual-visual and haptic-haptic) and crossmodal (visual-haptic) perceptual abilities. Additionally, subjects sculpted objects using uni- or crossmodal source information. In all experiments, performance in the two unimodal conditions was similar, and unimodal presentation fared better than crossmodal. Also, the spatial frequency of object features affected performance differentially across the range used in this experiment. The sculpted objects were scanned in 3D and the resulting geometry was compared metrically and statistically to the original stimuli. Objects with higher spatial frequency were harder to sculpt when limited to haptic input compared to only visual input. The opposite was found for objects with low spatial frequency. The psychophysical discrimination and comparison experiments yielded similar findings. There is a marked performance difference between the visual and haptic systems, and these differences were systematically distributed along the range of feature details. The existence of non-universal (i.e. modality-specific) representations explains the poor crossmodal performance.
Our current findings suggest that either haptic and visual information is integrated into a multi-modal form, or each is independent and a somewhat efficient translation between them is possible. Vision shows a distinct advantage when dealing with higher-frequency objects, but both modalities are effective when comparing objects that differ by a large amount.
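The record does not describe how the authors generated their stimuli. As an illustration only, a shape "statistically specified in the Fourier domain" can be sketched as a closed 2D radial profile whose perturbation has a prescribed 1/f^β amplitude spectrum with random phases; this is a minimal 2D analogue, not the authors' procedure, and the `beta` exponent and 20% perturbation amplitude are assumptions:

```python
import numpy as np

def noisy_contour(n=512, beta=1.5, seed=0):
    """Radial profile r(theta) of a closed contour whose perturbation is
    specified statistically in the Fourier domain: amplitudes fall off as
    f**-beta, phases are random. Illustrative sketch only."""
    rng = np.random.default_rng(seed)
    freqs = np.arange(1, n // 2)               # harmonics 1 .. n/2-1 (DC left at 0)
    amps = freqs.astype(float) ** -beta        # amplitude spectrum ~ 1/f^beta
    phases = rng.uniform(0.0, 2.0 * np.pi, freqs.size)
    spectrum = np.zeros(n, dtype=complex)
    spectrum[freqs] = amps * np.exp(1j * phases)
    # Inverse real FFT of the half-spectrum gives a real, zero-mean perturbation
    perturbation = np.fft.irfft(spectrum[: n // 2 + 1], n=n)
    perturbation /= np.abs(perturbation).max()  # normalise peak deviation to 1
    return 1.0 + 0.2 * perturbation             # base radius 1, max 20% deviation
```

Raising `beta` concentrates energy at low spatial frequencies (smoother, blobbier shapes), while lowering it adds fine surface detail, which is the kind of feature-spatial-frequency manipulation the abstract describes.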
Record in standard format (ISO 2709)
For documentation on the Inist Standard format.
pA |
A01 | 01 | 1 | | @0 0277-786X |
A02 | 01 | | | @0 PSISDG |
A03 | | 1 | | @0 Proc. SPIE Int. Soc. Opt. Eng. |
A05 | | | | @2 7240 |
A08 | 01 | 1 | ENG | @1 Crossmodal information for visual and haptic discrimination |
A09 | 01 | 1 | ENG | @1 Human vision and electronic imaging XIV : 19-22 January 2009, San Jose, California, United States |
A11 | 01 | 1 | | @1 PHILLIPS (Flip) |
A11 | 02 | 1 | | @1 EGAN (Eric J. L.) |
A12 | 01 | 1 | | @1 ROGOWITZ (Bernice Ellen) @9 ed. |
A12 | 02 | 1 | | @1 PAPPAS (Thrasyvoulos N.) @9 ed. |
A14 | 01 | | | @1 Skidmore College @2 Saratoga Springs, NY @3 USA @Z 1 aut. |
A14 | 02 | | | @1 The Ohio State University @2 Columbus, OH @3 USA @Z 2 aut. |
A18 | 01 | 1 | | @1 Society of photo-optical instrumentation engineers @3 USA @9 org-cong. |
A18 | 02 | 1 | | @1 IS&T--The Society for Imaging Science and Technology @3 USA @9 org-cong. |
A20 | | | | @2 72400H.1-72400H.15 |
A21 | | | | @1 2009 |
A23 | 01 | | | @0 ENG |
A25 | 01 | | | @1 SPIE @2 Bellingham (Wash.) |
A25 | 02 | | | @1 IS&T @2 Springfield (Va.) |
A26 | 01 | | | @0 978-0-8194-7490-2 |
A26 | 02 | | | @0 0-8194-7490-8 |
A43 | 01 | | | @1 INIST @2 21760 @5 354000172963110140 |
A44 | | | | @0 0000 @1 © 2009 INIST-CNRS. All rights reserved. |
A45 | | | | @0 24 ref. |
A47 | 01 | 1 | | @0 09-0392751 |
A60 | | | | @1 P @2 C |
A61 | | | | @0 A |
A64 | 01 | 1 | | @0 Proceedings of SPIE, the International Society for Optical Engineering |
A66 | 01 | | | @0 USA |
C01 | 01 | | ENG | @0 Both our visual and haptic systems contribute to the perception of the three-dimensional world, especially the proximate perception of objects. The interaction of these systems has been the subject of some debate over the years, ranging from the philosophically posed Molyneux problem to the more pragmatic examination of their psychophysical relationship. To better understand the nature of this interaction we have performed a variety of experiments characterizing the detection, discrimination, and production of 3D shape. A stimulus set of 25 complex, natural-appearing, noisy 3D target objects was statistically specified in the Fourier domain and manufactured using a 3D printer. A series of paired-comparison experiments examined subjects' unimodal (visual-visual and haptic-haptic) and crossmodal (visual-haptic) perceptual abilities. Additionally, subjects sculpted objects using uni- or crossmodal source information. In all experiments, performance in the two unimodal conditions was similar, and unimodal presentation fared better than crossmodal. Also, the spatial frequency of object features affected performance differentially across the range used in this experiment. The sculpted objects were scanned in 3D and the resulting geometry was compared metrically and statistically to the original stimuli. Objects with higher spatial frequency were harder to sculpt when limited to haptic input compared to only visual input. The opposite was found for objects with low spatial frequency. The psychophysical discrimination and comparison experiments yielded similar findings. There is a marked performance difference between the visual and haptic systems, and these differences were systematically distributed along the range of feature details. The existence of non-universal (i.e. modality-specific) representations explains the poor crossmodal performance.
Our current findings suggest that either haptic and visual information is integrated into a multi-modal form, or each is independent and a somewhat efficient translation between them is possible. Vision shows a distinct advantage when dealing with higher-frequency objects, but both modalities are effective when comparing objects that differ by a large amount. |
C02 | 01 | 3 | | @0 001B00A30C |
C02 | 02 | X | | @0 002A25I |
C03 | 01 | 3 | FRE | @0 Vision @5 03 |
C03 | 01 | 3 | ENG | @0 Vision @5 03 |
C03 | 02 | 3 | FRE | @0 Imprimante @5 11 |
C03 | 02 | 3 | ENG | @0 Printers @5 11 |
C03 | 03 | X | FRE | @0 Discrimination @5 61 |
C03 | 03 | X | ENG | @0 Discrimination @5 61 |
C03 | 03 | X | SPA | @0 Discriminación @5 61 |
C03 | 04 | X | FRE | @0 Modèle 3 dimensions @5 62 |
C03 | 04 | X | ENG | @0 Three dimensional model @5 62 |
C03 | 04 | X | SPA | @0 Modelo 3 dimensiones @5 62 |
C03 | 05 | X | FRE | @0 Mesure de distance @5 63 |
C03 | 05 | X | ENG | @0 Distance measurement @5 63 |
C03 | 05 | X | SPA | @0 Medición distancia @5 63 |
C03 | 06 | 3 | FRE | @0 Volume @5 64 |
C03 | 06 | 3 | ENG | @0 Volume @5 64 |
C03 | 07 | X | FRE | @0 Méthode statistique @5 65 |
C03 | 07 | X | ENG | @0 Statistical method @5 65 |
C03 | 07 | X | SPA | @0 Método estadístico @5 65 |
C03 | 08 | 3 | FRE | @0 Evaluation performance @5 66 |
C03 | 08 | 3 | ENG | @0 Performance evaluation @5 66 |
C03 | 09 | 3 | FRE | @0 Fréquence spatiale @5 67 |
C03 | 09 | 3 | ENG | @0 Spatial frequency @5 67 |
C03 | 10 | X | FRE | @0 Méthode différentielle @5 68 |
C03 | 10 | X | ENG | @0 Differential method @5 68 |
C03 | 10 | X | SPA | @0 Método diferencial @5 68 |
C03 | 11 | X | FRE | @0 Basse fréquence @5 69 |
C03 | 11 | X | ENG | @0 Low frequency @5 69 |
C03 | 11 | X | SPA | @0 Baja frecuencia @5 69 |
C03 | 12 | 3 | FRE | @0 0130C @4 INC @5 83 |
C03 | 13 | 3 | FRE | @0 4266S @4 INC @5 84 |
N21 | | | | @1 285 |
N44 | 01 | | | @1 OTO |
N82 | | | | @1 OTO |
pR |
A30 | 01 | 1 | ENG | @1 Electronic Imaging Science and Technology Symposium @3 San Jose CA USA @4 2008 |
Inist format (server)
NO : | PASCAL 09-0392751 INIST |
ET : | Crossmodal information for visual and haptic discrimination |
AU : | PHILLIPS (Flip); EGAN (Eric J. L.); ROGOWITZ (Bernice Ellen); PAPPAS (Thrasyvoulos N.) |
AF : | Skidmore College/Saratoga Springs, NY/Etats-Unis (1 aut.); The Ohio State University/Columbus, OH/Etats-Unis (2 aut.) |
DT : | Publication en série; Congrès; Niveau analytique |
SO : | Proceedings of SPIE, the International Society for Optical Engineering; ISSN 0277-786X; Coden PSISDG; Etats-Unis; Da. 2009; Vol. 7240; 72400H.1-72400H.15; Bibl. 24 ref. |
LA : | Anglais |
EA : | Both our visual and haptic systems contribute to the perception of the three-dimensional world, especially the proximate perception of objects. The interaction of these systems has been the subject of some debate over the years, ranging from the philosophically posed Molyneux problem to the more pragmatic examination of their psychophysical relationship. To better understand the nature of this interaction we have performed a variety of experiments characterizing the detection, discrimination, and production of 3D shape. A stimulus set of 25 complex, natural-appearing, noisy 3D target objects was statistically specified in the Fourier domain and manufactured using a 3D printer. A series of paired-comparison experiments examined subjects' unimodal (visual-visual and haptic-haptic) and crossmodal (visual-haptic) perceptual abilities. Additionally, subjects sculpted objects using uni- or crossmodal source information. In all experiments, performance in the two unimodal conditions was similar, and unimodal presentation fared better than crossmodal. Also, the spatial frequency of object features affected performance differentially across the range used in this experiment. The sculpted objects were scanned in 3D and the resulting geometry was compared metrically and statistically to the original stimuli. Objects with higher spatial frequency were harder to sculpt when limited to haptic input compared to only visual input. The opposite was found for objects with low spatial frequency. The psychophysical discrimination and comparison experiments yielded similar findings. There is a marked performance difference between the visual and haptic systems, and these differences were systematically distributed along the range of feature details. The existence of non-universal (i.e. modality-specific) representations explains the poor crossmodal performance.
Our current findings suggest that either haptic and visual information is integrated into a multi-modal form, or each is independent and a somewhat efficient translation between them is possible. Vision shows a distinct advantage when dealing with higher-frequency objects, but both modalities are effective when comparing objects that differ by a large amount. |
CC : | 001B00A30C; 002A25I |
FD : | Vision; Imprimante; Discrimination; Modèle 3 dimensions; Mesure de distance; Volume; Méthode statistique; Evaluation performance; Fréquence spatiale; Méthode différentielle; Basse fréquence; 0130C; 4266S |
ED : | Vision; Printers; Discrimination; Three dimensional model; Distance measurement; Volume; Statistical method; Performance evaluation; Spatial frequency; Differential method; Low frequency |
SD : | Discriminación; Modelo 3 dimensiones; Medición distancia; Método estadístico; Método diferencial; Baja frecuencia |
LO : | INIST-21760.354000172963110140 |
ID : | 09-0392751 |
Links to Exploration step
Pascal:09-0392751
The document in XML format
<record><TEI><teiHeader><fileDesc><titleStmt><title xml:lang="en" level="a">Crossmodal information for visual and haptic discrimination</title>
<author><name sortKey="Phillips, Flip" sort="Phillips, Flip" uniqKey="Phillips F" first="Flip" last="Phillips">Flip Phillips</name>
<affiliation><inist:fA14 i1="01"><s1>Skidmore College</s1>
<s2>Saratoga Springs, NY</s2>
<s3>USA</s3>
<sZ>1 aut.</sZ>
</inist:fA14>
</affiliation>
</author>
<author><name sortKey="Egan, Eric J L" sort="Egan, Eric J L" uniqKey="Egan E" first="Eric J. L." last="Egan">Eric J. L. Egan</name>
<affiliation><inist:fA14 i1="02"><s1>The Ohio State University</s1>
<s2>Columbus, OH</s2>
<s3>USA</s3>
<sZ>2 aut.</sZ>
</inist:fA14>
</affiliation>
</author>
</titleStmt>
<publicationStmt><idno type="wicri:source">INIST</idno>
<idno type="inist">09-0392751</idno>
<date when="2009">2009</date>
<idno type="stanalyst">PASCAL 09-0392751 INIST</idno>
<idno type="RBID">Pascal:09-0392751</idno>
<idno type="wicri:Area/PascalFrancis/Corpus">000751</idno>
</publicationStmt>
<sourceDesc><biblStruct><analytic><title xml:lang="en" level="a">Crossmodal information for visual and haptic discrimination</title>
<author><name sortKey="Phillips, Flip" sort="Phillips, Flip" uniqKey="Phillips F" first="Flip" last="Phillips">Flip Phillips</name>
<affiliation><inist:fA14 i1="01"><s1>Skidmore College</s1>
<s2>Saratoga Springs, NY</s2>
<s3>USA</s3>
<sZ>1 aut.</sZ>
</inist:fA14>
</affiliation>
</author>
<author><name sortKey="Egan, Eric J L" sort="Egan, Eric J L" uniqKey="Egan E" first="Eric J. L." last="Egan">Eric J. L. Egan</name>
<affiliation><inist:fA14 i1="02"><s1>The Ohio State University</s1>
<s2>Columbus, OH</s2>
<s3>USA</s3>
<sZ>2 aut.</sZ>
</inist:fA14>
</affiliation>
</author>
</analytic>
<series><title level="j" type="main">Proceedings of SPIE, the International Society for Optical Engineering</title>
<title level="j" type="abbreviated">Proc. SPIE Int. Soc. Opt. Eng.</title>
<idno type="ISSN">0277-786X</idno>
<imprint><date when="2009">2009</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
<seriesStmt><title level="j" type="main">Proceedings of SPIE, the International Society for Optical Engineering</title>
<title level="j" type="abbreviated">Proc. SPIE Int. Soc. Opt. Eng.</title>
<idno type="ISSN">0277-786X</idno>
</seriesStmt>
</fileDesc>
<profileDesc><textClass><keywords scheme="KwdEn" xml:lang="en"><term>Differential method</term>
<term>Discrimination</term>
<term>Distance measurement</term>
<term>Low frequency</term>
<term>Performance evaluation</term>
<term>Printers</term>
<term>Spatial frequency</term>
<term>Statistical method</term>
<term>Three dimensional model</term>
<term>Vision</term>
<term>Volume</term>
</keywords>
<keywords scheme="Pascal" xml:lang="fr"><term>Vision</term>
<term>Imprimante</term>
<term>Discrimination</term>
<term>Modèle 3 dimensions</term>
<term>Mesure de distance</term>
<term>Volume</term>
<term>Méthode statistique</term>
<term>Evaluation performance</term>
<term>Fréquence spatiale</term>
<term>Méthode différentielle</term>
<term>Basse fréquence</term>
<term>0130C</term>
<term>4266S</term>
</keywords>
</textClass>
</profileDesc>
</teiHeader>
<front><div type="abstract" xml:lang="en">Both our visual and haptic systems contribute to the perception of the three-dimensional world, especially the proximate perception of objects. The interaction of these systems has been the subject of some debate over the years, ranging from the philosophically posed Molyneux problem to the more pragmatic examination of their psychophysical relationship. To better understand the nature of this interaction we have performed a variety of experiments characterizing the detection, discrimination, and production of 3D shape. A stimulus set of 25 complex, natural-appearing, noisy 3D target objects was statistically specified in the Fourier domain and manufactured using a 3D printer. A series of paired-comparison experiments examined subjects' unimodal (visual-visual and haptic-haptic) and crossmodal (visual-haptic) perceptual abilities. Additionally, subjects sculpted objects using uni- or crossmodal source information. In all experiments, performance in the two unimodal conditions was similar, and unimodal presentation fared better than crossmodal. Also, the spatial frequency of object features affected performance differentially across the range used in this experiment. The sculpted objects were scanned in 3D and the resulting geometry was compared metrically and statistically to the original stimuli. Objects with higher spatial frequency were harder to sculpt when limited to haptic input compared to only visual input. The opposite was found for objects with low spatial frequency. The psychophysical discrimination and comparison experiments yielded similar findings. There is a marked performance difference between the visual and haptic systems, and these differences were systematically distributed along the range of feature details. The existence of non-universal (i.e. modality-specific) representations explains the poor crossmodal performance.
Our current findings suggest that either haptic and visual information is integrated into a multi-modal form, or each is independent and a somewhat efficient translation between them is possible. Vision shows a distinct advantage when dealing with higher-frequency objects, but both modalities are effective when comparing objects that differ by a large amount.</div>
</front>
</TEI>
<inist><standard h6="B"><pA><fA01 i1="01" i2="1"><s0>0277-786X</s0>
</fA01>
<fA02 i1="01"><s0>PSISDG</s0>
</fA02>
<fA03 i2="1"><s0>Proc. SPIE Int. Soc. Opt. Eng.</s0>
</fA03>
<fA05><s2>7240</s2>
</fA05>
<fA08 i1="01" i2="1" l="ENG"><s1>Crossmodal information for visual and haptic discrimination</s1>
</fA08>
<fA09 i1="01" i2="1" l="ENG"><s1>Human vision and electronic imaging XIV : 19-22 January 2009, San Jose, California, United States</s1>
</fA09>
<fA11 i1="01" i2="1"><s1>PHILLIPS (Flip)</s1>
</fA11>
<fA11 i1="02" i2="1"><s1>EGAN (Eric J. L.)</s1>
</fA11>
<fA12 i1="01" i2="1"><s1>ROGOWITZ (Bernice Ellen)</s1>
<s9>ed.</s9>
</fA12>
<fA12 i1="02" i2="1"><s1>PAPPAS (Thrasyvoulos N.)</s1>
<s9>ed.</s9>
</fA12>
<fA14 i1="01"><s1>Skidmore College</s1>
<s2>Saratoga Springs, NY</s2>
<s3>USA</s3>
<sZ>1 aut.</sZ>
</fA14>
<fA14 i1="02"><s1>The Ohio State University</s1>
<s2>Columbus, OH</s2>
<s3>USA</s3>
<sZ>2 aut.</sZ>
</fA14>
<fA18 i1="01" i2="1"><s1>Society of photo-optical instrumentation engineers</s1>
<s3>USA</s3>
<s9>org-cong.</s9>
</fA18>
<fA18 i1="02" i2="1"><s1>IS&T--The Society for Imaging Science and Technology</s1>
<s3>USA</s3>
<s9>org-cong.</s9>
</fA18>
<fA20><s2>72400H.1-72400H.15</s2>
</fA20>
<fA21><s1>2009</s1>
</fA21>
<fA23 i1="01"><s0>ENG</s0>
</fA23>
<fA25 i1="01"><s1>SPIE</s1>
<s2>Bellingham (Wash.)</s2>
</fA25>
<fA25 i1="02"><s1>IS&T</s1>
<s2>Springfield (Va.)</s2>
</fA25>
<fA26 i1="01"><s0>978-0-8194-7490-2</s0>
</fA26>
<fA26 i1="02"><s0>0-8194-7490-8</s0>
</fA26>
<fA43 i1="01"><s1>INIST</s1>
<s2>21760</s2>
<s5>354000172963110140</s5>
</fA43>
<fA44><s0>0000</s0>
<s1>© 2009 INIST-CNRS. All rights reserved.</s1>
</fA44>
<fA45><s0>24 ref.</s0>
</fA45>
<fA47 i1="01" i2="1"><s0>09-0392751</s0>
</fA47>
<fA60><s1>P</s1>
<s2>C</s2>
</fA60>
<fA64 i1="01" i2="1"><s0>Proceedings of SPIE, the International Society for Optical Engineering</s0>
</fA64>
<fA66 i1="01"><s0>USA</s0>
</fA66>
<fC01 i1="01" l="ENG"><s0>Both our visual and haptic systems contribute to the perception of the three-dimensional world, especially the proximate perception of objects. The interaction of these systems has been the subject of some debate over the years, ranging from the philosophically posed Molyneux problem to the more pragmatic examination of their psychophysical relationship. To better understand the nature of this interaction we have performed a variety of experiments characterizing the detection, discrimination, and production of 3D shape. A stimulus set of 25 complex, natural-appearing, noisy 3D target objects was statistically specified in the Fourier domain and manufactured using a 3D printer. A series of paired-comparison experiments examined subjects' unimodal (visual-visual and haptic-haptic) and crossmodal (visual-haptic) perceptual abilities. Additionally, subjects sculpted objects using uni- or crossmodal source information. In all experiments, performance in the two unimodal conditions was similar, and unimodal presentation fared better than crossmodal. Also, the spatial frequency of object features affected performance differentially across the range used in this experiment. The sculpted objects were scanned in 3D and the resulting geometry was compared metrically and statistically to the original stimuli. Objects with higher spatial frequency were harder to sculpt when limited to haptic input compared to only visual input. The opposite was found for objects with low spatial frequency. The psychophysical discrimination and comparison experiments yielded similar findings. There is a marked performance difference between the visual and haptic systems, and these differences were systematically distributed along the range of feature details. The existence of non-universal (i.e. modality-specific) representations explains the poor crossmodal performance.
Our current findings suggest that either haptic and visual information is integrated into a multi-modal form, or each is independent and a somewhat efficient translation between them is possible. Vision shows a distinct advantage when dealing with higher-frequency objects, but both modalities are effective when comparing objects that differ by a large amount.</s0>
</fC01>
<fC02 i1="01" i2="3"><s0>001B00A30C</s0>
</fC02>
<fC02 i1="02" i2="X"><s0>002A25I</s0>
</fC02>
<fC03 i1="01" i2="3" l="FRE"><s0>Vision</s0>
<s5>03</s5>
</fC03>
<fC03 i1="01" i2="3" l="ENG"><s0>Vision</s0>
<s5>03</s5>
</fC03>
<fC03 i1="02" i2="3" l="FRE"><s0>Imprimante</s0>
<s5>11</s5>
</fC03>
<fC03 i1="02" i2="3" l="ENG"><s0>Printers</s0>
<s5>11</s5>
</fC03>
<fC03 i1="03" i2="X" l="FRE"><s0>Discrimination</s0>
<s5>61</s5>
</fC03>
<fC03 i1="03" i2="X" l="ENG"><s0>Discrimination</s0>
<s5>61</s5>
</fC03>
<fC03 i1="03" i2="X" l="SPA"><s0>Discriminación</s0>
<s5>61</s5>
</fC03>
<fC03 i1="04" i2="X" l="FRE"><s0>Modèle 3 dimensions</s0>
<s5>62</s5>
</fC03>
<fC03 i1="04" i2="X" l="ENG"><s0>Three dimensional model</s0>
<s5>62</s5>
</fC03>
<fC03 i1="04" i2="X" l="SPA"><s0>Modelo 3 dimensiones</s0>
<s5>62</s5>
</fC03>
<fC03 i1="05" i2="X" l="FRE"><s0>Mesure de distance</s0>
<s5>63</s5>
</fC03>
<fC03 i1="05" i2="X" l="ENG"><s0>Distance measurement</s0>
<s5>63</s5>
</fC03>
<fC03 i1="05" i2="X" l="SPA"><s0>Medición distancia</s0>
<s5>63</s5>
</fC03>
<fC03 i1="06" i2="3" l="FRE"><s0>Volume</s0>
<s5>64</s5>
</fC03>
<fC03 i1="06" i2="3" l="ENG"><s0>Volume</s0>
<s5>64</s5>
</fC03>
<fC03 i1="07" i2="X" l="FRE"><s0>Méthode statistique</s0>
<s5>65</s5>
</fC03>
<fC03 i1="07" i2="X" l="ENG"><s0>Statistical method</s0>
<s5>65</s5>
</fC03>
<fC03 i1="07" i2="X" l="SPA"><s0>Método estadístico</s0>
<s5>65</s5>
</fC03>
<fC03 i1="08" i2="3" l="FRE"><s0>Evaluation performance</s0>
<s5>66</s5>
</fC03>
<fC03 i1="08" i2="3" l="ENG"><s0>Performance evaluation</s0>
<s5>66</s5>
</fC03>
<fC03 i1="09" i2="3" l="FRE"><s0>Fréquence spatiale</s0>
<s5>67</s5>
</fC03>
<fC03 i1="09" i2="3" l="ENG"><s0>Spatial frequency</s0>
<s5>67</s5>
</fC03>
<fC03 i1="10" i2="X" l="FRE"><s0>Méthode différentielle</s0>
<s5>68</s5>
</fC03>
<fC03 i1="10" i2="X" l="ENG"><s0>Differential method</s0>
<s5>68</s5>
</fC03>
<fC03 i1="10" i2="X" l="SPA"><s0>Método diferencial</s0>
<s5>68</s5>
</fC03>
<fC03 i1="11" i2="X" l="FRE"><s0>Basse fréquence</s0>
<s5>69</s5>
</fC03>
<fC03 i1="11" i2="X" l="ENG"><s0>Low frequency</s0>
<s5>69</s5>
</fC03>
<fC03 i1="11" i2="X" l="SPA"><s0>Baja frecuencia</s0>
<s5>69</s5>
</fC03>
<fC03 i1="12" i2="3" l="FRE"><s0>0130C</s0>
<s4>INC</s4>
<s5>83</s5>
</fC03>
<fC03 i1="13" i2="3" l="FRE"><s0>4266S</s0>
<s4>INC</s4>
<s5>84</s5>
</fC03>
<fN21><s1>285</s1>
</fN21>
<fN44 i1="01"><s1>OTO</s1>
</fN44>
<fN82><s1>OTO</s1>
</fN82>
</pA>
<pR><fA30 i1="01" i2="1" l="ENG"><s1>Electronic Imaging Science and Technology Symposium</s1>
<s3>San Jose CA USA</s3>
<s4>2008</s4>
</fA30>
</pR>
</standard>
<server><NO>PASCAL 09-0392751 INIST</NO>
<ET>Crossmodal information for visual and haptic discrimination</ET>
<AU>PHILLIPS (Flip); EGAN (Eric J. L.); ROGOWITZ (Bernice Ellen); PAPPAS (Thrasyvoulos N.)</AU>
<AF>Skidmore College/Saratoga Springs, NY/Etats-Unis (1 aut.); The Ohio State University/Columbus, OH/Etats-Unis (2 aut.)</AF>
<DT>Publication en série; Congrès; Niveau analytique</DT>
<SO>Proceedings of SPIE, the International Society for Optical Engineering; ISSN 0277-786X; Coden PSISDG; Etats-Unis; Da. 2009; Vol. 7240; 72400H.1-72400H.15; Bibl. 24 ref.</SO>
<LA>Anglais</LA>
<EA>Both our visual and haptic systems contribute to the perception of the three-dimensional world, especially the proximate perception of objects. The interaction of these systems has been the subject of some debate over the years, ranging from the philosophically posed Molyneux problem to the more pragmatic examination of their psychophysical relationship. To better understand the nature of this interaction we have performed a variety of experiments characterizing the detection, discrimination, and production of 3D shape. A stimulus set of 25 complex, natural-appearing, noisy 3D target objects was statistically specified in the Fourier domain and manufactured using a 3D printer. A series of paired-comparison experiments examined subjects' unimodal (visual-visual and haptic-haptic) and crossmodal (visual-haptic) perceptual abilities. Additionally, subjects sculpted objects using uni- or crossmodal source information. In all experiments, performance in the two unimodal conditions was similar, and unimodal presentation fared better than crossmodal. Also, the spatial frequency of object features affected performance differentially across the range used in this experiment. The sculpted objects were scanned in 3D and the resulting geometry was compared metrically and statistically to the original stimuli. Objects with higher spatial frequency were harder to sculpt when limited to haptic input compared to only visual input. The opposite was found for objects with low spatial frequency. The psychophysical discrimination and comparison experiments yielded similar findings. There is a marked performance difference between the visual and haptic systems, and these differences were systematically distributed along the range of feature details. The existence of non-universal (i.e. modality-specific) representations explains the poor crossmodal performance.
Our current findings suggest that either haptic and visual information is integrated into a multi-modal form, or each is independent and a somewhat efficient translation between them is possible. Vision shows a distinct advantage when dealing with higher-frequency objects, but both modalities are effective when comparing objects that differ by a large amount.</EA>
<CC>001B00A30C; 002A25I</CC>
<FD>Vision; Imprimante; Discrimination; Modèle 3 dimensions; Mesure de distance; Volume; Méthode statistique; Evaluation performance; Fréquence spatiale; Méthode différentielle; Basse fréquence; 0130C; 4266S</FD>
<ED>Vision; Printers; Discrimination; Three dimensional model; Distance measurement; Volume; Statistical method; Performance evaluation; Spatial frequency; Differential method; Low frequency</ED>
<SD>Discriminación; Modelo 3 dimensiones; Medición distancia; Método estadístico; Método diferencial; Baja frecuencia</SD>
<LO>INIST-21760.354000172963110140</LO>
<ID>09-0392751</ID>
</server>
</inist>
</record>
To work with this document under Unix (Dilib)
EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/HapticV1/Data/PascalFrancis/Corpus
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 000751 | SxmlIndent | more
Or
HfdSelect -h $EXPLOR_AREA/Data/PascalFrancis/Corpus/biblio.hfd -nk 000751 | SxmlIndent | more
To link to this page within the Wicri network
{{Explor lien
|wiki= Ticri/CIDE
|area= HapticV1
|flux= PascalFrancis
|étape= Corpus
|type= RBID
|clé= Pascal:09-0392751
|texte= Crossmodal information for visual and haptic discrimination
}}
This area was generated with Dilib version V0.6.23. Data generation: Mon Jun 13 01:09:46 2016. Site generation: Wed Mar 6 09:54:07 2024.