Interactive Motion Modeling and Parameterization by Direct Demonstration
Internal identifier: 003875 (Main/Exploration); previous: 003874; next: 003876
Authors: Carlo Camporesi; Yazhou Huang; Marcelo Kallmann
Source:
- Lecture Notes in Computer Science [0302-9743]; 2010.
Abstract
While interactive virtual humans are becoming widely used in education, training and therapeutic applications, building animations that are both realistic and parameterized with respect to a given scenario remains a complex and time-consuming task. To improve this situation, we propose a framework based on the direct demonstration and parameterization of motions. The presented approach addresses three important aspects of the problem in an integrated fashion: (1) our framework relies on an interactive real-time motion capture interface that empowers non-skilled animators to model realistic upper-body actions and gestures by direct demonstration; (2) our interface also accounts for the interactive definition of clustered example motions, in order to represent well the variations of interest for a given motion being modeled; and (3) we also present an inverse blending optimization technique which solves the problem of precisely parameterizing a cluster of example motions with respect to arbitrary spatial constraints. The optimization is efficiently solved online, allowing autonomous virtual humans to precisely perform learned actions and gestures with respect to arbitrarily given targets. Our proposed framework has been implemented in an immersive multi-tile stereo visualization system, achieving a powerful and intuitive interface for programming generic parameterized motions by demonstration.
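The inverse blending idea in item (3) can be illustrated with a toy sketch. This is not the paper's actual algorithm, only an assumed simplification: each example motion is reduced to the end-effector position it reaches, and we search for non-negative blend weights whose weighted average of those positions comes as close as possible to a given spatial target.

```python
import math

def blend(examples, weights):
    """Normalized weighted average of example end-effector positions (3D tuples)."""
    s = sum(weights)
    return tuple(sum(w * e[i] for w, e in zip(weights, examples)) / s
                 for i in range(3))

def error(examples, weights, target):
    """Distance between the blended position and the target constraint."""
    return math.dist(blend(examples, weights), target)

def inverse_blend(examples, target, iters=200, step=0.1):
    """Find blend weights that bring the blended position near the target.

    Initialization uses inverse-distance weighting (closer examples weigh
    more), then a simple greedy coordinate search refines the weights.
    """
    w = [1.0 / (1e-9 + math.dist(e, target)) for e in examples]
    best = error(examples, w, target)
    for _ in range(iters):
        improved = False
        for i in range(len(w)):
            for delta in (step, -step):
                cand = w[:]
                cand[i] = max(0.0, cand[i] + delta)
                if sum(cand) == 0:
                    continue  # all-zero weights are undefined
                e = error(examples, cand, target)
                if e < best - 1e-12:
                    w, best = cand, e
                    improved = True
        if not improved:
            step *= 0.5  # shrink the search step once no move helps
    return w, best
```

With three example reach positions forming a triangle and a target inside it, the returned residual error is near zero, mirroring how the paper's on-line optimization lets a virtual human hit arbitrary targets by reweighting demonstrated examples.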
DOI: 10.1007/978-3-642-15892-6_9
Affiliations:
Links toward previous steps (curation, corpus...)
- to stream Istex, to step Corpus: 002841
- to stream Istex, to step Curation: 002841
- to stream Istex, to step Checkpoint: 000672
- to stream Main, to step Merge: 003933
- to stream Main, to step Curation: 003875
The document in XML format
<record><TEI wicri:istexFullTextTei="biblStruct:series"><teiHeader><fileDesc><titleStmt><title xml:lang="en">Interactive Motion Modeling and Parameterization by Direct Demonstration</title>
<author><name sortKey="Camporesi, Carlo" sort="Camporesi, Carlo" uniqKey="Camporesi C" first="Carlo" last="Camporesi">Carlo Camporesi</name>
</author>
<author><name sortKey="Huang, Yazhou" sort="Huang, Yazhou" uniqKey="Huang Y" first="Yazhou" last="Huang">Yazhou Huang</name>
</author>
<author><name sortKey="Kallmann, Marcelo" sort="Kallmann, Marcelo" uniqKey="Kallmann M" first="Marcelo" last="Kallmann">Marcelo Kallmann</name>
</author>
</titleStmt>
<publicationStmt><idno type="wicri:source">ISTEX</idno>
<idno type="RBID">ISTEX:7B58C31EA10746669D9DE53FAE9735C2D4ECBF11</idno>
<date when="2010" year="2010">2010</date>
<idno type="doi">10.1007/978-3-642-15892-6_9</idno>
<idno type="url">https://api.istex.fr/document/7B58C31EA10746669D9DE53FAE9735C2D4ECBF11/fulltext/pdf</idno>
<idno type="wicri:Area/Istex/Corpus">002841</idno>
<idno type="wicri:Area/Istex/Curation">002841</idno>
<idno type="wicri:Area/Istex/Checkpoint">000672</idno>
<idno type="wicri:doubleKey">0302-9743:2010:Camporesi C:interactive:motion:modeling</idno>
<idno type="wicri:Area/Main/Merge">003933</idno>
<idno type="wicri:Area/Main/Curation">003875</idno>
<idno type="wicri:Area/Main/Exploration">003875</idno>
</publicationStmt>
<sourceDesc><biblStruct><analytic><title level="a" type="main" xml:lang="en">Interactive Motion Modeling and Parameterization by Direct Demonstration</title>
<author><name sortKey="Camporesi, Carlo" sort="Camporesi, Carlo" uniqKey="Camporesi C" first="Carlo" last="Camporesi">Carlo Camporesi</name>
<affiliation><wicri:noCountry code="subField">Merced</wicri:noCountry>
</affiliation>
</author>
<author><name sortKey="Huang, Yazhou" sort="Huang, Yazhou" uniqKey="Huang Y" first="Yazhou" last="Huang">Yazhou Huang</name>
<affiliation><wicri:noCountry code="subField">Merced</wicri:noCountry>
</affiliation>
</author>
<author><name sortKey="Kallmann, Marcelo" sort="Kallmann, Marcelo" uniqKey="Kallmann M" first="Marcelo" last="Kallmann">Marcelo Kallmann</name>
<affiliation><wicri:noCountry code="subField">Merced</wicri:noCountry>
</affiliation>
</author>
</analytic>
<monogr></monogr>
<series><title level="s">Lecture Notes in Computer Science</title>
<imprint><date>2010</date>
</imprint>
<idno type="ISSN">0302-9743</idno>
<idno type="eISSN">1611-3349</idno>
</series>
<idno type="istex">7B58C31EA10746669D9DE53FAE9735C2D4ECBF11</idno>
<idno type="DOI">10.1007/978-3-642-15892-6_9</idno>
<idno type="ChapterID">9</idno>
<idno type="ChapterID">Chap9</idno>
</biblStruct>
</sourceDesc>
<seriesStmt><idno type="ISSN">0302-9743</idno>
</seriesStmt>
</fileDesc>
<profileDesc><textClass></textClass>
<langUsage><language ident="en">en</language>
</langUsage>
</profileDesc>
</teiHeader>
<front><div type="abstract" xml:lang="en">While interactive virtual humans are becoming widely used in education, training and therapeutic applications, building animations that are both realistic and parameterized with respect to a given scenario remains a complex and time-consuming task. To improve this situation, we propose a framework based on the direct demonstration and parameterization of motions. The presented approach addresses three important aspects of the problem in an integrated fashion: (1) our framework relies on an interactive real-time motion capture interface that empowers non-skilled animators to model realistic upper-body actions and gestures by direct demonstration; (2) our interface also accounts for the interactive definition of clustered example motions, in order to represent well the variations of interest for a given motion being modeled; and (3) we also present an inverse blending optimization technique which solves the problem of precisely parameterizing a cluster of example motions with respect to arbitrary spatial constraints. The optimization is efficiently solved online, allowing autonomous virtual humans to precisely perform learned actions and gestures with respect to arbitrarily given targets. Our proposed framework has been implemented in an immersive multi-tile stereo visualization system, achieving a powerful and intuitive interface for programming generic parameterized motions by demonstration.</div>
</front>
</TEI>
<affiliations><list></list>
<tree><noCountry><name sortKey="Camporesi, Carlo" sort="Camporesi, Carlo" uniqKey="Camporesi C" first="Carlo" last="Camporesi">Carlo Camporesi</name>
<name sortKey="Huang, Yazhou" sort="Huang, Yazhou" uniqKey="Huang Y" first="Yazhou" last="Huang">Yazhou Huang</name>
<name sortKey="Kallmann, Marcelo" sort="Kallmann, Marcelo" uniqKey="Kallmann M" first="Marcelo" last="Kallmann">Marcelo Kallmann</name>
</noCountry>
</tree>
</affiliations>
</record>
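The TEI record above can be mined programmatically. Below is a minimal sketch using Python's standard library; the embedded sample is a trimmed copy of the record, and the `wicri:` prefix is bound to a placeholder URI (`urn:wicri`, an assumption) because `xml.etree.ElementTree` rejects undeclared namespace prefixes.

```python
import xml.etree.ElementTree as ET

# Trimmed copy of the record above, with the wicri: prefix declared so
# that ElementTree can parse it.
RECORD = """\
<record xmlns:wicri="urn:wicri">
  <TEI wicri:istexFullTextTei="biblStruct:series">
    <teiHeader><fileDesc>
      <titleStmt>
        <title xml:lang="en">Interactive Motion Modeling and Parameterization by Direct Demonstration</title>
        <author><name first="Carlo" last="Camporesi">Carlo Camporesi</name></author>
        <author><name first="Yazhou" last="Huang">Yazhou Huang</name></author>
        <author><name first="Marcelo" last="Kallmann">Marcelo Kallmann</name></author>
      </titleStmt>
      <publicationStmt>
        <idno type="doi">10.1007/978-3-642-15892-6_9</idno>
        <date when="2010" year="2010">2010</date>
      </publicationStmt>
    </fileDesc></teiHeader>
  </TEI>
</record>"""

root = ET.fromstring(RECORD)
title = root.findtext(".//title")                      # first <title> text
authors = [n.get("last") for n in root.iter("name")]   # last names from attrs
doi = root.findtext(".//idno[@type='doi']")            # idno filtered by type
```

The same `findtext`/`iter` calls work against the full record once its `wicri:` prefix is declared (or stripped), which is how the `sortKey`/`uniqKey` attributes used by the Wicri indexing steps could also be extracted.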
To manipulate this document under Unix (Dilib)
EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/HapticV1/Data/Main/Exploration
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 003875 | SxmlIndent | more
Or
HfdSelect -h $EXPLOR_AREA/Data/Main/Exploration/biblio.hfd -nk 003875 | SxmlIndent | more
To add a link to this page in the Wicri network
{{Explor lien |wiki= Ticri/CIDE |area= HapticV1 |flux= Main |étape= Exploration |type= RBID |clé= ISTEX:7B58C31EA10746669D9DE53FAE9735C2D4ECBF11 |texte= Interactive Motion Modeling and Parameterization by Direct Demonstration }}
This area was generated with Dilib version V0.6.23.