Exploration server on haptic devices

Please note: this site is under development.
Please note: this site was generated automatically from raw corpora.
The information it contains has therefore not been validated.

Multimodal cues for object manipulation in augmented and virtual environments

Internal identifier: 000F60 (PascalFrancis/Corpus); previous: 000F59; next: 000F61

Author: Mihaela A. Zahariev

Source:

RBID : Pascal:04-0412113

French descriptors

English descriptors

Abstract

The purpose of this work is to investigate the role of multimodal, especially auditory, displays on human manipulation in augmented environments. We use information from all our sensory modalities when interacting in natural environments. Despite differences among the senses, we use them in concert to perceive and interact with multimodally specified objects and events. Traditionally, human-computer interaction has focused on graphical displays, thus not taking advantage of the richness of human senses and skills developed through interaction with the physical world [1]. Virtual environments have the potential to integrate all sensory modalities, to present the user with multiple inputs and outputs, and to allow the user to directly acquire and manipulate augmented or virtual objects. With the increasing availability of haptic and auditory displays, it is important to understand the complex relationships amongst different sensory feedback modalities and how they affect performance when interacting with augmented and virtual objects. Background and motivation for this research, questions and hypotheses, and some preliminary results are presented. A plan for future experiments is proposed.

Record in standard format (ISO 2709)

See the documentation on the Inist Standard format. (A short parsing sketch follows the record below.)

pA  
A01 01  1    @0 0302-9743
A05       @2 3101
A08 01  1  ENG  @1 Multimodal cues for object manipulation in augmented and virtual environments
A09 01  1  ENG  @1 Computer human interaction : Rotorua, 29 June - 2 July 2004
A11 01  1    @1 ZAHARIEV (Mihaela A.)
A12 01  1    @1 MASOODIAN (Masood) @9 ed.
A12 02  1    @1 JONES (Steve) @9 ed.
A12 03  1    @1 ROGERS (Bill) @9 ed.
A14 01      @1 Human Motor Systems Laboratory, School of Kinesiology Simon Fraser University @2 Burnaby, B.C., V5A 1S6 @3 CAN @Z 1 aut.
A20       @1 687-691
A21       @1 2004
A23 01      @0 ENG
A26 01      @0 3-540-22312-6
A43 01      @1 INIST @2 16343 @5 354000117898990790
A44       @0 0000 @1 © 2004 INIST-CNRS. All rights reserved.
A45       @0 11 ref.
A47 01  1    @0 04-0412113
A60       @1 P @2 C
A61       @0 A
A64 01  1    @0 Lecture notes in computer science
A66 01      @0 DEU
C01 01    ENG  @0 The purpose of this work is to investigate the role of multimodal, especially auditory, displays on human manipulation in augmented environments. We use information from all our sensory modalities when interacting in natural environments. Despite differences among the senses, we use them in concert to perceive and interact with multimodally specified objects and events. Traditionally, human-computer interaction has focused on graphical displays, thus not taking advantage of the richness of human senses and skills developed through interaction with the physical world [1]. Virtual environments have the potential to integrate all sensory modalities, to present the user with multiple inputs and outputs, and to allow the user to directly acquire and manipulate augmented or virtual objects. With the increasing availability of haptic and auditory displays, it is important to understand the complex relationships amongst different sensory feedback modalities and how they affect performance when interacting with augmented and virtual objects. Background and motivation for this research, questions and hypotheses, and some preliminary results are presented. A plan for future experiments is proposed.
C02 01  X    @0 001D02B04
C03 01  X  FRE  @0 Relation homme machine @5 01
C03 01  X  ENG  @0 Man machine relation @5 01
C03 01  X  SPA  @0 Relación hombre máquina @5 01
C03 02  X  FRE  @0 Réalité virtuelle @5 06
C03 02  X  ENG  @0 Virtual reality @5 06
C03 02  X  SPA  @0 Realidad virtual @5 06
C03 03  X  FRE  @0 Utilisation information @5 07
C03 03  X  ENG  @0 Information use @5 07
C03 03  X  SPA  @0 Uso información @5 07
C03 04  X  FRE  @0 Interface utilisateur @5 08
C03 04  X  ENG  @0 User interface @5 08
C03 04  X  SPA  @0 Interfase usuario @5 08
C03 05  X  FRE  @0 Disponibilité @5 09
C03 05  X  ENG  @0 Availability @5 09
C03 05  X  SPA  @0 Disponibilidad @5 09
C03 06  X  FRE  @0 Audition @5 18
C03 06  X  ENG  @0 Hearing @5 18
C03 06  X  SPA  @0 Audición @5 18
C03 07  X  FRE  @0 Homme @5 19
C03 07  X  ENG  @0 Human @5 19
C03 07  X  SPA  @0 Hombre @5 19
C03 08  X  FRE  @0 Milieu naturel @5 20
C03 08  X  ENG  @0 Natural environment @5 20
C03 08  X  SPA  @0 Medio natural @5 20
C03 09  X  FRE  @0 Sensibilité tactile @5 21
C03 09  X  ENG  @0 Tactile sensitivity @5 21
C03 09  X  SPA  @0 Sensibilidad tactil @5 21
C03 10  X  FRE  @0 Motivation @5 22
C03 10  X  ENG  @0 Motivation @5 22
C03 10  X  SPA  @0 Motivación @5 22
C03 11  X  FRE  @0 Entrée sortie @5 23
C03 11  X  ENG  @0 Input output @5 23
C03 11  X  SPA  @0 Entrada salida @5 23
N21       @1 236
N44 01      @1 OTO
N82       @1 OTO
pR  
A30 01  1  ENG  @1 APCHI 2004 : Asia Pacific conference on computer human interaction @2 6 @3 Rotorua NZL @4 2004-06-29
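
The tagged lines above follow a regular layout: a field tag (A01, C03, ...), optional indicator tokens and a language code, then subfields, each introduced by "@" plus a one-character code. As a minimal, illustrative sketch, here is a Python parser for lines in this layout; it assumes "@" never occurs inside a subfield value, which holds for this record but is not guaranteed by the format in general.

import re
from collections import defaultdict

# A subfield is "@" plus one alphanumeric code, followed by its value.
SUBFIELD = re.compile(r"\s*@([0-9A-Za-z])\s")

def parse_tagged_line(line):
    # Split "A08 01  1  ENG  @1 Title ..." into a header part and
    # alternating (code, value) pairs.
    head, *rest = SUBFIELD.split(line.strip())
    tokens = head.split()
    tag = tokens[0]        # field tag, e.g. "C03"
    header = tokens[1:]    # indicators and, where present, a language code
    subfields = dict(zip(rest[0::2], rest[1::2]))
    return tag, header, subfields

sample = [
    "A01 01  1    @0 0302-9743",
    "A20       @1 687-691",
    "C03 02  X  ENG  @0 Virtual reality @5 06",
]
fields = defaultdict(list)
for line in sample:
    tag, header, subfields = parse_tagged_line(line)
    fields[tag].append((header, subfields))

print(fields["A01"])             # [(['01', '1'], {'0': '0302-9743'})]
print(fields["C03"][0][1]["0"])  # Virtual reality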

Inist format (server)

(A parsing sketch follows the record below.)

NO : PASCAL 04-0412113 INIST
ET : Multimodal cues for object manipulation in augmented and virtual environments
AU : ZAHARIEV (Mihaela A.); MASOODIAN (Masood); JONES (Steve); ROGERS (Bill)
AF : Human Motor Systems Laboratory, School of Kinesiology Simon Fraser University/Burnaby, B.C., V5A 1S6/Canada (1 aut.)
DT : Publication en série; Congrès; Niveau analytique
SO : Lecture notes in computer science; ISSN 0302-9743; Allemagne; Da. 2004; Vol. 3101; Pp. 687-691; Bibl. 11 ref.
LA : Anglais
EA : The purpose of this work is to investigate the role of multimodal, especially auditory, displays on human manipulation in augmented environments. We use information from all our sensory modalities when interacting in natural environments. Despite differences among the senses, we use them in concert to perceive and interact with multimodally specified objects and events. Traditionally, human-computer interaction has focused on graphical displays, thus not taking advantage of the richness of human senses and skills developed through interaction with the physical world [1]. Virtual environments have the potential to integrate all sensory modalities, to present the user with multiple inputs and outputs, and to allow the user to directly acquire and manipulate augmented or virtual objects. With the increasing availability of haptic and auditory displays, it is important to understand the complex relationships amongst different sensory feedback modalities and how they affect performance when interacting with augmented and virtual objects. Background and motivation for this research, questions and hypotheses, and some preliminary results are presented. A plan for future experiments is proposed.
CC : 001D02B04
FD : Relation homme machine; Réalité virtuelle; Utilisation information; Interface utilisateur; Disponibilité; Audition; Homme; Milieu naturel; Sensibilité tactile; Motivation; Entrée sortie
ED : Man machine relation; Virtual reality; Information use; User interface; Availability; Hearing; Human; Natural environment; Tactile sensitivity; Motivation; Input output
SD : Relación hombre máquina; Realidad virtual; Uso información; Interfase usuario; Disponibilidad; Audición; Hombre; Medio natural; Sensibilidad tactil; Motivación; Entrada salida
LO : INIST-16343.354000117898990790
ID : 04-0412113
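
This server rendition is even simpler to consume: one "KEY : value" line per field, with ";" separating repeated values. A minimal sketch (splitting on ";" is an assumption that holds for this record but would break on values containing a literal semicolon):

server_record = """\
ET : Multimodal cues for object manipulation in augmented and virtual environments
AU : ZAHARIEV (Mihaela A.); MASOODIAN (Masood); JONES (Steve); ROGERS (Bill)
LA : Anglais
ID : 04-0412113"""

fields = {}
for line in server_record.splitlines():
    key, _, value = line.partition(" : ")  # two-letter key, then value
    fields[key] = [v.strip() for v in value.split(";")]

print(fields["AU"])  # one author plus three editors
print(fields["ID"])  # ['04-0412113']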

Links to Exploration step

Pascal:04-0412113

The document in XML format

(An extraction sketch follows the record below.)

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en" level="a">Multimodal cues for object manipulation in augmented and virtual environments</title>
<author>
<name sortKey="Zahariev, Mihaela A" sort="Zahariev, Mihaela A" uniqKey="Zahariev M" first="Mihaela A." last="Zahariev">Mihaela A. Zahariev</name>
<affiliation>
<inist:fA14 i1="01">
<s1>Human Motor Systems Laboratory, School of Kinesiology Simon Fraser University</s1>
<s2>Burnaby, B.C., V5A 1S6</s2>
<s3>CAN</s3>
<sZ>1 aut.</sZ>
</inist:fA14>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">INIST</idno>
<idno type="inist">04-0412113</idno>
<date when="2004">2004</date>
<idno type="stanalyst">PASCAL 04-0412113 INIST</idno>
<idno type="RBID">Pascal:04-0412113</idno>
<idno type="wicri:Area/PascalFrancis/Corpus">000F60</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a">Multimodal cues for object manipulation in augmented and virtual environments</title>
<author>
<name sortKey="Zahariev, Mihaela A" sort="Zahariev, Mihaela A" uniqKey="Zahariev M" first="Mihaela A." last="Zahariev">Mihaela A. Zahariev</name>
<affiliation>
<inist:fA14 i1="01">
<s1>Human Motor Systems Laboratory, School of Kinesiology Simon Fraser University</s1>
<s2>Burnaby, B.C., V5A 1S6</s2>
<s3>CAN</s3>
<sZ>1 aut.</sZ>
</inist:fA14>
</affiliation>
</author>
</analytic>
<series>
<title level="j" type="main">Lecture notes in computer science</title>
<idno type="ISSN">0302-9743</idno>
<imprint>
<date when="2004">2004</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
<seriesStmt>
<title level="j" type="main">Lecture notes in computer science</title>
<idno type="ISSN">0302-9743</idno>
</seriesStmt>
</fileDesc>
<profileDesc>
<textClass>
<keywords scheme="KwdEn" xml:lang="en">
<term>Availability</term>
<term>Hearing</term>
<term>Human</term>
<term>Information use</term>
<term>Input output</term>
<term>Man machine relation</term>
<term>Motivation</term>
<term>Natural environment</term>
<term>Tactile sensitivity</term>
<term>User interface</term>
<term>Virtual reality</term>
</keywords>
<keywords scheme="Pascal" xml:lang="fr">
<term>Relation homme machine</term>
<term>Réalité virtuelle</term>
<term>Utilisation information</term>
<term>Interface utilisateur</term>
<term>Disponibilité</term>
<term>Audition</term>
<term>Homme</term>
<term>Milieu naturel</term>
<term>Sensibilité tactile</term>
<term>Motivation</term>
<term>Entrée sortie</term>
</keywords>
</textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">The purpose of this work is to investigate the role of multimodal, especially auditory displays on human manipulation in augmented environments. We use information from all our sensory modalities when interacting in natural environments. Despite differences among the senses, we use them in concert to perceive and interact with multimodally specified objects and events. Traditionally, human-computer interaction has focused on graphical displays, thus not taking advantage of the richness of human senses and skills developed though interaction with the physical world [1]. Virtual environments have the potential to integrate all sensory modalities, to present the user with multiple inputs and outputs, and to allow the user to directly acquire and manipulate augmented or virtual objects. With the increasing availability of haptic and auditory displays, it is important to understand the complex relationships amongst different sensory feedback modalities and how they affect performance when interacting with augmented and virtual objects. Background and motivation for this research, questions and hypotheses, and some preliminary results are presented. A plan for future experiments is proposed.</div>
</front>
</TEI>
<inist>
<standard h6="B">
<pA>
<fA01 i1="01" i2="1">
<s0>0302-9743</s0>
</fA01>
<fA05>
<s2>3101</s2>
</fA05>
<fA08 i1="01" i2="1" l="ENG">
<s1>Multimodal cues for object manipulation in augmented and virtual environments</s1>
</fA08>
<fA09 i1="01" i2="1" l="ENG">
<s1>Computer human interaction : Rotorua, 29 June - 2 July 2004</s1>
</fA09>
<fA11 i1="01" i2="1">
<s1>ZAHARIEV (Mihaela A.)</s1>
</fA11>
<fA12 i1="01" i2="1">
<s1>MASOODIAN (Masood)</s1>
<s9>ed.</s9>
</fA12>
<fA12 i1="02" i2="1">
<s1>JONES (Steve)</s1>
<s9>ed.</s9>
</fA12>
<fA12 i1="03" i2="1">
<s1>ROGERS (Bill)</s1>
<s9>ed.</s9>
</fA12>
<fA14 i1="01">
<s1>Human Motor Systems Laboratory, School of Kinesiology Simon Fraser University</s1>
<s2>Burnaby, B.C., V5A 1S6</s2>
<s3>CAN</s3>
<sZ>1 aut.</sZ>
</fA14>
<fA20>
<s1>687-691</s1>
</fA20>
<fA21>
<s1>2004</s1>
</fA21>
<fA23 i1="01">
<s0>ENG</s0>
</fA23>
<fA26 i1="01">
<s0>3-540-22312-6</s0>
</fA26>
<fA43 i1="01">
<s1>INIST</s1>
<s2>16343</s2>
<s5>354000117898990790</s5>
</fA43>
<fA44>
<s0>0000</s0>
<s1>© 2004 INIST-CNRS. All rights reserved.</s1>
</fA44>
<fA45>
<s0>11 ref.</s0>
</fA45>
<fA47 i1="01" i2="1">
<s0>04-0412113</s0>
</fA47>
<fA60>
<s1>P</s1>
<s2>C</s2>
</fA60>
<fA61>
<s0>A</s0>
</fA61>
<fA64 i1="01" i2="1">
<s0>Lecture notes in computer science</s0>
</fA64>
<fA66 i1="01">
<s0>DEU</s0>
</fA66>
<fC01 i1="01" l="ENG">
<s0>The purpose of this work is to investigate the role of multimodal, especially auditory, displays on human manipulation in augmented environments. We use information from all our sensory modalities when interacting in natural environments. Despite differences among the senses, we use them in concert to perceive and interact with multimodally specified objects and events. Traditionally, human-computer interaction has focused on graphical displays, thus not taking advantage of the richness of human senses and skills developed through interaction with the physical world [1]. Virtual environments have the potential to integrate all sensory modalities, to present the user with multiple inputs and outputs, and to allow the user to directly acquire and manipulate augmented or virtual objects. With the increasing availability of haptic and auditory displays, it is important to understand the complex relationships amongst different sensory feedback modalities and how they affect performance when interacting with augmented and virtual objects. Background and motivation for this research, questions and hypotheses, and some preliminary results are presented. A plan for future experiments is proposed.</s0>
</fC01>
<fC02 i1="01" i2="X">
<s0>001D02B04</s0>
</fC02>
<fC03 i1="01" i2="X" l="FRE">
<s0>Relation homme machine</s0>
<s5>01</s5>
</fC03>
<fC03 i1="01" i2="X" l="ENG">
<s0>Man machine relation</s0>
<s5>01</s5>
</fC03>
<fC03 i1="01" i2="X" l="SPA">
<s0>Relación hombre máquina</s0>
<s5>01</s5>
</fC03>
<fC03 i1="02" i2="X" l="FRE">
<s0>Réalité virtuelle</s0>
<s5>06</s5>
</fC03>
<fC03 i1="02" i2="X" l="ENG">
<s0>Virtual reality</s0>
<s5>06</s5>
</fC03>
<fC03 i1="02" i2="X" l="SPA">
<s0>Realidad virtual</s0>
<s5>06</s5>
</fC03>
<fC03 i1="03" i2="X" l="FRE">
<s0>Utilisation information</s0>
<s5>07</s5>
</fC03>
<fC03 i1="03" i2="X" l="ENG">
<s0>Information use</s0>
<s5>07</s5>
</fC03>
<fC03 i1="03" i2="X" l="SPA">
<s0>Uso información</s0>
<s5>07</s5>
</fC03>
<fC03 i1="04" i2="X" l="FRE">
<s0>Interface utilisateur</s0>
<s5>08</s5>
</fC03>
<fC03 i1="04" i2="X" l="ENG">
<s0>User interface</s0>
<s5>08</s5>
</fC03>
<fC03 i1="04" i2="X" l="SPA">
<s0>Interfase usuario</s0>
<s5>08</s5>
</fC03>
<fC03 i1="05" i2="X" l="FRE">
<s0>Disponibilité</s0>
<s5>09</s5>
</fC03>
<fC03 i1="05" i2="X" l="ENG">
<s0>Availability</s0>
<s5>09</s5>
</fC03>
<fC03 i1="05" i2="X" l="SPA">
<s0>Disponibilidad</s0>
<s5>09</s5>
</fC03>
<fC03 i1="06" i2="X" l="FRE">
<s0>Audition</s0>
<s5>18</s5>
</fC03>
<fC03 i1="06" i2="X" l="ENG">
<s0>Hearing</s0>
<s5>18</s5>
</fC03>
<fC03 i1="06" i2="X" l="SPA">
<s0>Audición</s0>
<s5>18</s5>
</fC03>
<fC03 i1="07" i2="X" l="FRE">
<s0>Homme</s0>
<s5>19</s5>
</fC03>
<fC03 i1="07" i2="X" l="ENG">
<s0>Human</s0>
<s5>19</s5>
</fC03>
<fC03 i1="07" i2="X" l="SPA">
<s0>Hombre</s0>
<s5>19</s5>
</fC03>
<fC03 i1="08" i2="X" l="FRE">
<s0>Milieu naturel</s0>
<s5>20</s5>
</fC03>
<fC03 i1="08" i2="X" l="ENG">
<s0>Natural environment</s0>
<s5>20</s5>
</fC03>
<fC03 i1="08" i2="X" l="SPA">
<s0>Medio natural</s0>
<s5>20</s5>
</fC03>
<fC03 i1="09" i2="X" l="FRE">
<s0>Sensibilité tactile</s0>
<s5>21</s5>
</fC03>
<fC03 i1="09" i2="X" l="ENG">
<s0>Tactile sensitivity</s0>
<s5>21</s5>
</fC03>
<fC03 i1="09" i2="X" l="SPA">
<s0>Sensibilidad tactil</s0>
<s5>21</s5>
</fC03>
<fC03 i1="10" i2="X" l="FRE">
<s0>Motivation</s0>
<s5>22</s5>
</fC03>
<fC03 i1="10" i2="X" l="ENG">
<s0>Motivation</s0>
<s5>22</s5>
</fC03>
<fC03 i1="10" i2="X" l="SPA">
<s0>Motivación</s0>
<s5>22</s5>
</fC03>
<fC03 i1="11" i2="X" l="FRE">
<s0>Entrée sortie</s0>
<s5>23</s5>
</fC03>
<fC03 i1="11" i2="X" l="ENG">
<s0>Input output</s0>
<s5>23</s5>
</fC03>
<fC03 i1="11" i2="X" l="SPA">
<s0>Entrada salida</s0>
<s5>23</s5>
</fC03>
<fN21>
<s1>236</s1>
</fN21>
<fN44 i1="01">
<s1>OTO</s1>
</fN44>
<fN82>
<s1>OTO</s1>
</fN82>
</pA>
<pR>
<fA30 i1="01" i2="1" l="ENG">
<s1>APCHI 2004 : Asia Pacific conference on computer human interaction</s1>
<s2>6</s2>
<s3>Rotorua NZL</s3>
<s4>2004-06-29</s4>
</fA30>
</pR>
</standard>
<server>
<NO>PASCAL 04-0412113 INIST</NO>
<ET>Multimodal cues for object manipulation in augmented and virtual environments</ET>
<AU>ZAHARIEV (Mihaela A.); MASOODIAN (Masood); JONES (Steve); ROGERS (Bill)</AU>
<AF>Human Motor Systems Laboratory, School of Kinesiology Simon Fraser University/Burnaby, B.C., V5A 1S6/Canada (1 aut.)</AF>
<DT>Publication en série; Congrès; Niveau analytique</DT>
<SO>Lecture notes in computer science; ISSN 0302-9743; Allemagne; Da. 2004; Vol. 3101; Pp. 687-691; Bibl. 11 ref.</SO>
<LA>Anglais</LA>
<EA>The purpose of this work is to investigate the role of multimodal, especially auditory, displays on human manipulation in augmented environments. We use information from all our sensory modalities when interacting in natural environments. Despite differences among the senses, we use them in concert to perceive and interact with multimodally specified objects and events. Traditionally, human-computer interaction has focused on graphical displays, thus not taking advantage of the richness of human senses and skills developed through interaction with the physical world [1]. Virtual environments have the potential to integrate all sensory modalities, to present the user with multiple inputs and outputs, and to allow the user to directly acquire and manipulate augmented or virtual objects. With the increasing availability of haptic and auditory displays, it is important to understand the complex relationships amongst different sensory feedback modalities and how they affect performance when interacting with augmented and virtual objects. Background and motivation for this research, questions and hypotheses, and some preliminary results are presented. A plan for future experiments is proposed.</EA>
<CC>001D02B04</CC>
<FD>Relation homme machine; Réalité virtuelle; Utilisation information; Interface utilisateur; Disponibilité; Audition; Homme; Milieu naturel; Sensibilité tactile; Motivation; Entrée sortie</FD>
<ED>Man machine relation; Virtual reality; Information use; User interface; Availability; Hearing; Human; Natural environment; Tactile sensitivity; Motivation; Input output</ED>
<SD>Relación hombre máquina; Realidad virtual; Uso información; Interfase usuario; Disponibilidad; Audición; Hombre; Medio natural; Sensibilidad tactil; Motivación; Entrada salida</SD>
<LO>INIST-16343.354000117898990790</LO>
<ID>04-0412113</ID>
</server>
</inist>
</record>
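
For programmatic access, the TEI header above exposes the title and keyword sets in predictable places. The following sketch parses a trimmed, well-formed excerpt of that header with Python's standard library; the inist:-prefixed affiliation fields are left out because their namespace declaration does not appear in this extract and would need to be resolved when processing the full document.

import xml.etree.ElementTree as ET

excerpt = """\
<TEI>
  <teiHeader>
    <fileDesc>
      <titleStmt>
        <title xml:lang="en" level="a">Multimodal cues for object manipulation in augmented and virtual environments</title>
      </titleStmt>
    </fileDesc>
    <profileDesc>
      <textClass>
        <keywords scheme="KwdEn" xml:lang="en">
          <term>Virtual reality</term>
          <term>User interface</term>
        </keywords>
      </textClass>
    </profileDesc>
  </teiHeader>
</TEI>"""

root = ET.fromstring(excerpt)
title = root.find(".//titleStmt/title").text
terms = [t.text for t in root.findall(".//keywords[@scheme='KwdEn']/term")]
print(title)  # the English article title
print(terms)  # ['Virtual reality', 'User interface']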

To work with this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/HapticV1/Data/PascalFrancis/Corpus
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 000F60 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/PascalFrancis/Corpus/biblio.hfd -nk 000F60 | SxmlIndent | more

To link to this page from within the Wicri network

{{Explor lien
   |wiki=    Ticri/CIDE
   |area=    HapticV1
   |flux=    PascalFrancis
   |étape=   Corpus
   |type=    RBID
   |clé=     Pascal:04-0412113
   |texte=   Multimodal cues for object manipulation in augmented and virtual environments
}}

Wicri

This area was generated with Dilib version V0.6.23.
Data generation: Mon Jun 13 01:09:46 2016. Site generation: Wed Mar 6 09:54:07 2024