Exploration server on haptic devices

Warning: this site is under development!
Warning: this site was generated automatically from raw corpora.
The information has therefore not been validated.

Multimodal cues for object manipulation in augmented and virtual environments

Internal identifier: 000C60 (PascalFrancis/Checkpoint); previous: 000C59; next: 000C61


Authors: Mihaela A. Zahariev [Canada]

Source: Lecture notes in computer science, ISSN 0302-9743, 2004, pp. 687-691.

RBID: Pascal:04-0412113

French descriptors

English descriptors

Abstract

The purpose of this work is to investigate the role of multimodal, especially auditory, displays on human manipulation in augmented environments. We use information from all our sensory modalities when interacting in natural environments. Despite differences among the senses, we use them in concert to perceive and interact with multimodally specified objects and events. Traditionally, human-computer interaction has focused on graphical displays, thus not taking advantage of the richness of human senses and skills developed through interaction with the physical world [1]. Virtual environments have the potential to integrate all sensory modalities, to present the user with multiple inputs and outputs, and to allow the user to directly acquire and manipulate augmented or virtual objects. With the increasing availability of haptic and auditory displays, it is important to understand the complex relationships amongst different sensory feedback modalities and how they affect performance when interacting with augmented and virtual objects. Background and motivation for this research, questions and hypotheses, and some preliminary results are presented. A plan for future experiments is proposed.


Affiliations:

Canada: Human Motor Systems Laboratory, School of Kinesiology, Simon Fraser University, Burnaby, B.C.

The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en" level="a">Multimodal cues for object manipulation in augmented and virtual environments</title>
<author>
<name sortKey="Zahariev, Mihaela A" sort="Zahariev, Mihaela A" uniqKey="Zahariev M" first="Mihaela A." last="Zahariev">Mihaela A. Zahariev</name>
<affiliation wicri:level="1">
<inist:fA14 i1="01">
<s1>Human Motor Systems Laboratory, School of Kinesiology Simon Fraser University</s1>
<s2>Burnaby, B.C., V5A 1S6</s2>
<s3>CAN</s3>
<sZ>1 aut.</sZ>
</inist:fA14>
<country>Canada</country>
<wicri:noRegion>Burnaby, B.C., V5A 1S6</wicri:noRegion>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">INIST</idno>
<idno type="inist">04-0412113</idno>
<date when="2004">2004</date>
<idno type="stanalyst">PASCAL 04-0412113 INIST</idno>
<idno type="RBID">Pascal:04-0412113</idno>
<idno type="wicri:Area/PascalFrancis/Corpus">000F60</idno>
<idno type="wicri:Area/PascalFrancis/Curation">000549</idno>
<idno type="wicri:Area/PascalFrancis/Checkpoint">000C60</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a">Multimodal cues for object manipulation in augmented and virtual environments</title>
<author>
<name sortKey="Zahariev, Mihaela A" sort="Zahariev, Mihaela A" uniqKey="Zahariev M" first="Mihaela A." last="Zahariev">Mihaela A. Zahariev</name>
<affiliation wicri:level="1">
<inist:fA14 i1="01">
<s1>Human Motor Systems Laboratory, School of Kinesiology Simon Fraser University</s1>
<s2>Burnaby, B.C., V5A 1S6</s2>
<s3>CAN</s3>
<sZ>1 aut.</sZ>
</inist:fA14>
<country>Canada</country>
<wicri:noRegion>Burnaby, B.C., V5A 1S6</wicri:noRegion>
</affiliation>
</author>
</analytic>
<series>
<title level="j" type="main">Lecture notes in computer science</title>
<idno type="ISSN">0302-9743</idno>
<imprint>
<date when="2004">2004</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
<seriesStmt>
<title level="j" type="main">Lecture notes in computer science</title>
<idno type="ISSN">0302-9743</idno>
</seriesStmt>
</fileDesc>
<profileDesc>
<textClass>
<keywords scheme="KwdEn" xml:lang="en">
<term>Availability</term>
<term>Hearing</term>
<term>Human</term>
<term>Information use</term>
<term>Input output</term>
<term>Man machine relation</term>
<term>Motivation</term>
<term>Natural environment</term>
<term>Tactile sensitivity</term>
<term>User interface</term>
<term>Virtual reality</term>
</keywords>
<keywords scheme="Pascal" xml:lang="fr">
<term>Relation homme machine</term>
<term>Réalité virtuelle</term>
<term>Utilisation information</term>
<term>Interface utilisateur</term>
<term>Disponibilité</term>
<term>Audition</term>
<term>Homme</term>
<term>Milieu naturel</term>
<term>Sensibilité tactile</term>
<term>Motivation</term>
<term>Entrée sortie</term>
</keywords>
<keywords scheme="Wicri" type="topic" xml:lang="fr">
<term>Réalité virtuelle</term>
<term>Homme</term>
</keywords>
</textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">The purpose of this work is to investigate the role of multimodal, especially auditory displays on human manipulation in augmented environments. We use information from all our sensory modalities when interacting in natural environments. Despite differences among the senses, we use them in concert to perceive and interact with multimodally specified objects and events. Traditionally, human-computer interaction has focused on graphical displays, thus not taking advantage of the richness of human senses and skills developed though interaction with the physical world [1]. Virtual environments have the potential to integrate all sensory modalities, to present the user with multiple inputs and outputs, and to allow the user to directly acquire and manipulate augmented or virtual objects. With the increasing availability of haptic and auditory displays, it is important to understand the complex relationships amongst different sensory feedback modalities and how they affect performance when interacting with augmented and virtual objects. Background and motivation for this research, questions and hypotheses, and some preliminary results are presented. A plan for future experiments is proposed.</div>
</front>
</TEI>
<inist>
<standard h6="B">
<pA>
<fA01 i1="01" i2="1">
<s0>0302-9743</s0>
</fA01>
<fA05>
<s2>3101</s2>
</fA05>
<fA08 i1="01" i2="1" l="ENG">
<s1>Multimodal cues for object manipulation in augmented and virtual environments</s1>
</fA08>
<fA09 i1="01" i2="1" l="ENG">
<s1>Computer human interaction : Rotorua, 29 June - 2 July 2004</s1>
</fA09>
<fA11 i1="01" i2="1">
<s1>ZAHARIEV (Mihaela A.)</s1>
</fA11>
<fA12 i1="01" i2="1">
<s1>MASOODIAN (Masood)</s1>
<s9>ed.</s9>
</fA12>
<fA12 i1="02" i2="1">
<s1>JONES (Steve)</s1>
<s9>ed.</s9>
</fA12>
<fA12 i1="03" i2="1">
<s1>ROGERS (Bill)</s1>
<s9>ed.</s9>
</fA12>
<fA14 i1="01">
<s1>Human Motor Systems Laboratory, School of Kinesiology Simon Fraser University</s1>
<s2>Burnaby, B.C., V5A 1S6</s2>
<s3>CAN</s3>
<sZ>1 aut.</sZ>
</fA14>
<fA20>
<s1>687-691</s1>
</fA20>
<fA21>
<s1>2004</s1>
</fA21>
<fA23 i1="01">
<s0>ENG</s0>
</fA23>
<fA26 i1="01">
<s0>3-540-22312-6</s0>
</fA26>
<fA43 i1="01">
<s1>INIST</s1>
<s2>16343</s2>
<s5>354000117898990790</s5>
</fA43>
<fA44>
<s0>0000</s0>
<s1>© 2004 INIST-CNRS. All rights reserved.</s1>
</fA44>
<fA45>
<s0>11 ref.</s0>
</fA45>
<fA47 i1="01" i2="1">
<s0>04-0412113</s0>
</fA47>
<fA60>
<s1>P</s1>
<s2>C</s2>
</fA60>
<fA61>
<s0>A</s0>
</fA61>
<fA64 i1="01" i2="1">
<s0>Lecture notes in computer science</s0>
</fA64>
<fA66 i1="01">
<s0>DEU</s0>
</fA66>
<fC01 i1="01" l="ENG">
<s0>The purpose of this work is to investigate the role of multimodal, especially auditory displays on human manipulation in augmented environments. We use information from all our sensory modalities when interacting in natural environments. Despite differences among the senses, we use them in concert to perceive and interact with multimodally specified objects and events. Traditionally, human-computer interaction has focused on graphical displays, thus not taking advantage of the richness of human senses and skills developed though interaction with the physical world [1]. Virtual environments have the potential to integrate all sensory modalities, to present the user with multiple inputs and outputs, and to allow the user to directly acquire and manipulate augmented or virtual objects. With the increasing availability of haptic and auditory displays, it is important to understand the complex relationships amongst different sensory feedback modalities and how they affect performance when interacting with augmented and virtual objects. Background and motivation for this research, questions and hypotheses, and some preliminary results are presented. A plan for future experiments is proposed.</s0>
</fC01>
<fC02 i1="01" i2="X">
<s0>001D02B04</s0>
</fC02>
<fC03 i1="01" i2="X" l="FRE">
<s0>Relation homme machine</s0>
<s5>01</s5>
</fC03>
<fC03 i1="01" i2="X" l="ENG">
<s0>Man machine relation</s0>
<s5>01</s5>
</fC03>
<fC03 i1="01" i2="X" l="SPA">
<s0>Relación hombre máquina</s0>
<s5>01</s5>
</fC03>
<fC03 i1="02" i2="X" l="FRE">
<s0>Réalité virtuelle</s0>
<s5>06</s5>
</fC03>
<fC03 i1="02" i2="X" l="ENG">
<s0>Virtual reality</s0>
<s5>06</s5>
</fC03>
<fC03 i1="02" i2="X" l="SPA">
<s0>Realidad virtual</s0>
<s5>06</s5>
</fC03>
<fC03 i1="03" i2="X" l="FRE">
<s0>Utilisation information</s0>
<s5>07</s5>
</fC03>
<fC03 i1="03" i2="X" l="ENG">
<s0>Information use</s0>
<s5>07</s5>
</fC03>
<fC03 i1="03" i2="X" l="SPA">
<s0>Uso información</s0>
<s5>07</s5>
</fC03>
<fC03 i1="04" i2="X" l="FRE">
<s0>Interface utilisateur</s0>
<s5>08</s5>
</fC03>
<fC03 i1="04" i2="X" l="ENG">
<s0>User interface</s0>
<s5>08</s5>
</fC03>
<fC03 i1="04" i2="X" l="SPA">
<s0>Interfase usuario</s0>
<s5>08</s5>
</fC03>
<fC03 i1="05" i2="X" l="FRE">
<s0>Disponibilité</s0>
<s5>09</s5>
</fC03>
<fC03 i1="05" i2="X" l="ENG">
<s0>Availability</s0>
<s5>09</s5>
</fC03>
<fC03 i1="05" i2="X" l="SPA">
<s0>Disponibilidad</s0>
<s5>09</s5>
</fC03>
<fC03 i1="06" i2="X" l="FRE">
<s0>Audition</s0>
<s5>18</s5>
</fC03>
<fC03 i1="06" i2="X" l="ENG">
<s0>Hearing</s0>
<s5>18</s5>
</fC03>
<fC03 i1="06" i2="X" l="SPA">
<s0>Audición</s0>
<s5>18</s5>
</fC03>
<fC03 i1="07" i2="X" l="FRE">
<s0>Homme</s0>
<s5>19</s5>
</fC03>
<fC03 i1="07" i2="X" l="ENG">
<s0>Human</s0>
<s5>19</s5>
</fC03>
<fC03 i1="07" i2="X" l="SPA">
<s0>Hombre</s0>
<s5>19</s5>
</fC03>
<fC03 i1="08" i2="X" l="FRE">
<s0>Milieu naturel</s0>
<s5>20</s5>
</fC03>
<fC03 i1="08" i2="X" l="ENG">
<s0>Natural environment</s0>
<s5>20</s5>
</fC03>
<fC03 i1="08" i2="X" l="SPA">
<s0>Medio natural</s0>
<s5>20</s5>
</fC03>
<fC03 i1="09" i2="X" l="FRE">
<s0>Sensibilité tactile</s0>
<s5>21</s5>
</fC03>
<fC03 i1="09" i2="X" l="ENG">
<s0>Tactile sensitivity</s0>
<s5>21</s5>
</fC03>
<fC03 i1="09" i2="X" l="SPA">
<s0>Sensibilidad tactil</s0>
<s5>21</s5>
</fC03>
<fC03 i1="10" i2="X" l="FRE">
<s0>Motivation</s0>
<s5>22</s5>
</fC03>
<fC03 i1="10" i2="X" l="ENG">
<s0>Motivation</s0>
<s5>22</s5>
</fC03>
<fC03 i1="10" i2="X" l="SPA">
<s0>Motivación</s0>
<s5>22</s5>
</fC03>
<fC03 i1="11" i2="X" l="FRE">
<s0>Entrée sortie</s0>
<s5>23</s5>
</fC03>
<fC03 i1="11" i2="X" l="ENG">
<s0>Input output</s0>
<s5>23</s5>
</fC03>
<fC03 i1="11" i2="X" l="SPA">
<s0>Entrada salida</s0>
<s5>23</s5>
</fC03>
<fN21>
<s1>236</s1>
</fN21>
<fN44 i1="01">
<s1>OTO</s1>
</fN44>
<fN82>
<s1>OTO</s1>
</fN82>
</pA>
<pR>
<fA30 i1="01" i2="1" l="ENG">
<s1>APCHI 2004 : Asia Pacific conference on computer human interaction</s1>
<s2>6</s2>
<s3>Rotorua NZL</s3>
<s4>2004-06-29</s4>
</fA30>
</pR>
</standard>
</inist>
<affiliations>
<list>
<country>
<li>Canada</li>
</country>
</list>
<tree>
<country name="Canada">
<noRegion>
<name sortKey="Zahariev, Mihaela A" sort="Zahariev, Mihaela A" uniqKey="Zahariev M" first="Mihaela A." last="Zahariev">Mihaela A. Zahariev</name>
</noRegion>
</country>
</tree>
</affiliations>
</record>

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/HapticV1/Data/PascalFrancis/Checkpoint
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 000C60 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/PascalFrancis/Checkpoint/biblio.hfd -nk 000C60 | SxmlIndent | more
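For a quick field-level check, the indented record can be filtered with standard Unix tools; the grep pattern below is only an illustrative sketch using a standard utility, not a Dilib feature:

# Print only the title elements of record 000C60 (grep is standard Unix)
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 000C60 | SxmlIndent | grep '<title'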

To add a link to this page in the Wicri network

{{Explor lien
   |wiki=    Ticri/CIDE
   |area=    HapticV1
   |flux=    PascalFrancis
   |étape=   Checkpoint
   |type=    RBID
   |clé=     Pascal:04-0412113
   |texte=   Multimodal cues for object manipulation in augmented and virtual environments
}}
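When saved on a page of the Ticri/CIDE wiki, this {{Explor lien}} call should render as a link labelled with the article title and pointing back to this record; this assumes the Explor lien template is installed on the target wiki, as is usual in the Wicri network.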

Wicri

This area was generated with Dilib version V0.6.23.
Data generation: Mon Jun 13 01:09:46 2016. Site generation: Wed Mar 6 09:54:07 2024