Exploration server on computer science research in Lorraine

Warning: this site is under development!
Warning: this site is generated automatically from raw corpora.
The information has therefore not been validated.

Towards Multimodal Content Representation

Internal identifier: 008A44 (Main/Merge); previous: 008A43; next: 008A45


Authors: Harry Bunt; Laurent Romary

Source:

RBID : CRIN:bunt02a

English descriptors

Abstract

Multimodal interfaces, combining the use of speech, graphics, gestures, and facial expressions in input and output, promise to provide new possibilities to deal with information in more effective and efficient ways, supporting for instance:

- the understanding of possibly imprecise, partial, or ambiguous multimodal input;
- the generation of coordinated, cohesive, and coherent multimodal presentations;
- the management of multimodal interaction (e.g., task completion, adapting the interface, error prevention) by representing and exploiting models of the user, the domain, the task, the interactive context, and the media (e.g., text, audio, video).

The present document is intended to support the discussion on multimodal content representation, its possible objectives and basic constraints, and how the definition of a generic representation framework for multimodal content representation may be approached. It takes into account the results of the Dagstuhl workshop, in particular those of the informal working group on multimodal meaning representation that was active during the workshop (see http://www.dfki.de/~wahlster/Dagstuhl_Multi_Modality, Working Group 4).

Links toward previous steps (curation, corpus...)


Links to Exploration step

CRIN:bunt02a

The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en" wicri:score="185">Towards Multimodal Content Representation</title>
</titleStmt>
<publicationStmt>
<idno type="RBID">CRIN:bunt02a</idno>
<date when="2002" year="2002">2002</date>
<idno type="wicri:Area/Crin/Corpus">003274</idno>
<idno type="wicri:Area/Crin/Curation">003274</idno>
<idno type="wicri:explorRef" wicri:stream="Crin" wicri:step="Curation">003274</idno>
<idno type="wicri:Area/Crin/Checkpoint">001276</idno>
<idno type="wicri:explorRef" wicri:stream="Crin" wicri:step="Checkpoint">001276</idno>
<idno type="wicri:Area/Main/Merge">008A44</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en">Towards Multimodal Content Representation</title>
<author>
<name sortKey="Bunt, Harry" sort="Bunt, Harry" uniqKey="Bunt H" first="Harry" last="Bunt">Harry Bunt</name>
</author>
<author>
<name sortKey="Romary, Laurent" sort="Romary, Laurent" uniqKey="Romary L" first="Laurent" last="Romary">Laurent Romary</name>
</author>
</analytic>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass>
<keywords scheme="KwdEn" xml:lang="en">
<term>content representation</term>
<term>multimodality</term>
<term>semantics</term>
<term>xml</term>
</keywords>
</textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en" wicri:score="4205">Multimodal interfaces, combining the use of speech, graphics, gestures, and facial expressions in input and output, promise to provide new possibilities to deal with information in more effective and efficient ways, supporting for instance: - the understanding of possibly imprecise, partial or ambiguous multimodal input; - the generation of coordinated, cohesive, and coherent multimodal presentations; - the management of multimodal interaction (e.g., task completion, adapting the interface, error prevention) by representing and exploiting models of the user, the domain, the task, the interactive context, and the media (e.g. text, audio, video). The present document is intended to support the discussion on multimodal content representation, its possible objectives and basic constraints, and how the definition of a generic representation framework for multimodal content representation may be approached. It takes into account the results of the Dagstuhl workshop, in particular those of the informal working group on multimodal meaning representation that was active during the workshop (see http://www.dfki.de/~wahlster/Dagstuhl_Multi_Modality, Working Group 4).</div>
</front>
</TEI>
</record>
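The record above follows standard TEI conventions (title in `titleStmt`, identifiers in `idno` elements, authors in `biblStruct/analytic`, keywords in `textClass`), so it can be read with any XML library. The sketch below parses a minimal excerpt of the record with Python's standard `xml.etree.ElementTree`; note that the page as shown omits a namespace declaration for the `wicri:` prefix, so the excerpt adds a hypothetical URI for it (a standard parser would otherwise reject the attributes).

```python
import xml.etree.ElementTree as ET

# Minimal excerpt of the Wicri record above. The "wicri:" namespace URI is
# a placeholder added so that a standard XML parser accepts the attributes.
RECORD = """\
<record xmlns:wicri="http://www.wicri.org/ns">
  <TEI>
    <teiHeader>
      <fileDesc>
        <titleStmt>
          <title xml:lang="en" wicri:score="185">Towards Multimodal Content Representation</title>
        </titleStmt>
        <publicationStmt>
          <idno type="RBID">CRIN:bunt02a</idno>
          <date when="2002" year="2002">2002</date>
        </publicationStmt>
        <sourceDesc>
          <biblStruct>
            <analytic>
              <author><name first="Harry" last="Bunt">Harry Bunt</name></author>
              <author><name first="Laurent" last="Romary">Laurent Romary</name></author>
            </analytic>
          </biblStruct>
        </sourceDesc>
      </fileDesc>
      <profileDesc>
        <textClass>
          <keywords scheme="KwdEn">
            <term>content representation</term>
            <term>multimodality</term>
          </keywords>
        </textClass>
      </profileDesc>
    </teiHeader>
  </TEI>
</record>
"""

root = ET.fromstring(RECORD)

# Title and record identifier (RBID).
title = root.findtext(".//titleStmt/title")
rbid = root.findtext(".//idno[@type='RBID']")

# Author names, read from the <name> elements' attributes.
authors = [(n.get("first"), n.get("last")) for n in root.findall(".//author/name")]

# English descriptor terms.
keywords = [t.text for t in root.findall(".//keywords/term")]

print(title)    # Towards Multimodal Content Representation
print(rbid)     # CRIN:bunt02a
print(authors)
print(keywords)
```

The same `findall`/`findtext` XPath patterns apply to the full record, e.g. `.//idno[@type='wicri:Area/Main/Merge']` to retrieve the internal identifier 008A44.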

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Wicri/Lorraine/explor/InforLorV4/Data/Main/Merge
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 008A44 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/Main/Merge/biblio.hfd -nk 008A44 | SxmlIndent | more

To add a link to this page in the Wicri network

{{Explor lien
   |wiki=    Wicri/Lorraine
   |area=    InforLorV4
   |flux=    Main
   |étape=   Merge
   |type=    RBID
   |clé=     CRIN:bunt02a
   |texte=   Towards Multimodal Content Representation
}}

Wicri

This area was generated with Dilib version V0.6.33.
Data generation: Mon Jun 10 21:56:28 2019. Site generation: Fri Feb 25 15:29:27 2022