Exploration server on computer science research in Lorraine

Warning: this site is under development!
Warning: this site was generated automatically from raw corpora.
The information has therefore not been validated.

From Perception to Expression : the Deictic Gesture in Man-Machine Task-Oriented Dialogue

Internal identifier: 001C00 (Crin/Curation); previous: 001B99; next: 001C01


Authors: Nadia Bellalem; Laurent Romary

Source:

RBID : CRIN:bellalem96b

English descriptors

Abstract

This paper addresses the interpretation of complex utterances combining speech and designation gestures (usually known as multimodal), as they may be encountered in task-oriented man-machine dialogue systems. More precisely, our aim is to show that it is essential to take into account the different parameters involved in the visual perception of the scene presented to the user. This scene reflects the current state of the application under control and thus involves a set of graphical representations as well as a spatial organization mirroring the internal structure of the objects in the task. Among other consequences, it may be shown that such an organization, as perceived by the user, conditions the specific wording adopted by the user/speaker. Given that our objective is to provide computers with the ability to actually understand the user's utterances, we claim that it is mandatory to build an artificial perceptual representation somehow similar to the one mentally apprehended by the user. To this end, we first describe the three protagonists of the interpretation of deictic gesture: the linguistic message, the task, and visual perception. We then give an overview of some results from the field of psychology, in order to identify concepts that might be transposed to the specific communicative situation of the human-computer couple. From this analysis, we introduce the different elements that have to be considered to model the user's perceptual representation, and we show the influence of this representation on the shape of the gestural trajectory and the structure of the linguistic utterance. Finally, we make some proposals for an artificial perceptual representation such as we feel it should be used in the process of interpreting deictic gesture.

Links to previous steps (corpus, curation...)


Links to Exploration step

CRIN:bellalem96b

The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en" wicri:score="304">From Perception to Expression : the Deictic Gesture in Man-Machine Task-Oriented Dialogue</title>
</titleStmt>
<publicationStmt>
<idno type="RBID">CRIN:bellalem96b</idno>
<date when="1996" year="1996">1996</date>
<idno type="wicri:Area/Crin/Corpus">001C00</idno>
<idno type="wicri:Area/Crin/Curation">001C00</idno>
<idno type="wicri:explorRef" wicri:stream="Crin" wicri:step="Curation">001C00</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en">From Perception to Expression : the Deictic Gesture in Man-Machine Task-Oriented Dialogue</title>
<author>
<name sortKey="Bellalem, Nadia" sort="Bellalem, Nadia" uniqKey="Bellalem N" first="Nadia" last="Bellalem">Nadia Bellalem</name>
</author>
<author>
<name sortKey="Romary, Laurent" sort="Romary, Laurent" uniqKey="Romary L" first="Laurent" last="Romary">Laurent Romary</name>
</author>
</analytic>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass>
<keywords scheme="KwdEn" xml:lang="en">
<term>multimodal dialogue system</term>
<term>natural language</term>
</keywords>
</textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en" wicri:score="4058">The subject dealt with in this paper situates itself in the framework of the interpretation of complex utterances combining speech and designation gesture (usually known as multimodal) as they may be encountered in task-oriented man-machine dialogue systems. More precisely, our aim is to show that it is essential to take into account the different parameters involved in the visual perception of the scene which is presented to the user. As a matter of fact, this scene reflects the current state of the application under control and thus involves a set of graphical representations as well as a spatial organization mirroring the internal structure of the objects in the task. Among other consequences, it may be shown that such an organization - as perceived by the user - conditions the specific wording adopted by the user/speaker. Given that our objective is to provide computers with the ability to actually understand the user's utterances, we claim that it is mandatory to build up an artificial perceptual representation which is somehow similar to that which is mentally apprehended by the user. To this end, we first describe the three protagonists of the interpretation of deictic gesture, that is, the linguistic message, the task and visual perception. We then propose an overview of some results provided by the field of psychology, in order to bring concepts which might be transposed to the specific communicative situation associated with the human-computer couple. From this analysis, we introduce the different elements which have to be considered to model the perceptive representation of the user and we show the influence of this representation upon the shape of the gestural trajectory and the structure of the linguistic utterance. Finally, we make some propositions for an actual artificial perceptive representation such as we feel that it should in the process of interpreting deictic gesture.</div>
</front>
</TEI>
<BibTex type="techreport">
<ref>bellalem96b</ref>
<crinnumber>96-R-076</crinnumber>
<category>15</category>
<equipe>DIALOGUE</equipe>
<author>
<e>Bellalem, Nadia</e>
<e>Romary, Laurent</e>
</author>
<title>From Perception to Expression : the Deictic Gesture in Man-Machine Task-Oriented Dialogue</title>
<institution>Centre de Recherche en Informatique de Nancy</institution>
<year>1996</year>
<type>Rapport interne</type>
<address>Vandoeuvre-lès-Nancy</address>
<keywords>
<e>natural language</e>
<e>multimodal dialogue system</e>
</keywords>
<abstract>The subject dealt with in this paper situates itself in the framework of the interpretation of complex utterances combining speech and designation gesture (usually known as multimodal) as they may be encountered in task-oriented man-machine dialogue systems. More precisely, our aim is to show that it is essential to take into account the different parameters involved in the visual perception of the scene which is presented to the user. As a matter of fact, this scene reflects the current state of the application under control and thus involves a set of graphical representations as well as a spatial organization mirroring the internal structure of the objects in the task. Among other consequences, it may be shown that such an organization - as perceived by the user - conditions the specific wording adopted by the user/speaker. Given that our objective is to provide computers with the ability to actually understand the user's utterances, we claim that it is mandatory to build up an artificial perceptual representation which is somehow similar to that which is mentally apprehended by the user. To this end, we first describe the three protagonists of the interpretation of deictic gesture, that is, the linguistic message, the task and visual perception. We then propose an overview of some results provided by the field of psychology, in order to bring concepts which might be transposed to the specific communicative situation associated with the human-computer couple. From this analysis, we introduce the different elements which have to be considered to model the perceptive representation of the user and we show the influence of this representation upon the shape of the gestural trajectory and the structure of the linguistic utterance. Finally, we make some propositions for an actual artificial perceptive representation such as we feel that it should in the process of interpreting deictic gesture.</abstract>
</BibTex>
</record>

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Wicri/Lorraine/explor/InforLorV4/Data/Crin/Curation
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 001C00 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/Crin/Curation/biblio.hfd -nk 001C00 | SxmlIndent | more
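Outside of Dilib, the record shown above can also be queried with ordinary Unix text tools once it has been saved to a local file. The sketch below uses a reduced copy of the BibTex part of the record; the file name `record.xml` and the `sed` patterns are illustrative and are not part of Dilib:

```shell
# Write a reduced copy of the BibTex part of the record (illustrative only).
cat > record.xml <<'EOF'
<record>
<BibTex type="techreport">
<ref>bellalem96b</ref>
<year>1996</year>
<institution>Centre de Recherche en Informatique de Nancy</institution>
</BibTex>
</record>
EOF

# Extract single-valued fields with sed; this works because the record
# keeps one element per line.
sed -n 's|.*<ref>\(.*\)</ref>.*|\1|p' record.xml     # bellalem96b
sed -n 's|.*<year>\(.*\)</year>.*|\1|p' record.xml   # 1996
```

For anything more structured (namespaced TEI attributes, nested elements), an XML-aware tool such as `xmllint` would be a safer choice than line-oriented `sed`.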

To add a link to this page in the Wicri network

{{Explor lien
   |wiki=    Wicri/Lorraine
   |area=    InforLorV4
   |flux=    Crin
   |étape=   Curation
   |type=    RBID
   |clé=     CRIN:bellalem96b
   |texte=   From Perception to Expression  : the Deictic Gesture in Man-Machine Task-Oriented Dialogue
}}

Wicri

This area was generated with Dilib version V0.6.33.
Data generation: Mon Jun 10 21:56:28 2019. Site generation: Fri Feb 25 15:29:27 2022