Exploration server on haptic devices

Warning: this site is under development!
Warning: this site was generated by automated means from raw corpora.
The information has therefore not been validated.

A New Approach for Human-Robot Interaction Using Human Body Language

Internal identifier: 003304 (Main/Merge); previous: 003303; next: 003305


Authors: Nhan Nguyen-Duc-Thanh [South Korea]; Daniel Stonier [South Korea]; Sungyoung Lee [South Korea]; Dong-Han Kim [South Korea]

Source:

RBID: ISTEX:E5A365AEB3BA2E9B8F92902AA03BCCAFFF3D35DD

Abstract

To make interaction between humans and robots easier and more diverse, this paper constructs a system in which human users employ body language to direct a robot to perform tasks. First, human body data is captured with a 3D camera and skeleton features are extracted. Based on the human posture and the Semaphore system, an international communication method, the robot recognizes a character of the alphabet. Finally, the robot combines the separate characters into an understandable message and executes what the user wants. In simulation, we show results in which the iRobot can move up, move down, turn left, turn right, and so on, based on the received body message.
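The pipeline described in the abstract (3D-camera skeleton, posture classification against the semaphore alphabet, message assembly, command execution) can be sketched as follows. This is only an illustrative sketch: the joint names, the angle quantization, the two placeholder semaphore entries and the command table are assumptions, not values taken from the paper.

# Illustrative sketch of the pipeline in the abstract: skeleton frames from a
# 3D camera -> arm-posture classification -> semaphore letter -> message ->
# robot command. Joint names, angles, letter and command tables are assumptions.
import math
from typing import Dict, List, Optional, Tuple

# Placeholder subset of a flag-semaphore table: (left-arm, right-arm) angles in
# degrees, quantized to 45-degree steps. A real system would cover the alphabet.
SEMAPHORE_LETTERS: Dict[Tuple[int, int], str] = {
    (225, 90): "U",   # hypothetical angle pair for the letter U
    (180, 45): "P",   # hypothetical angle pair for the letter P
    # remaining letters omitted
}

# Words the robot understands once a message has been spelled out.
COMMANDS = {"UP": "move up", "DOWN": "move down",
            "LEFT": "turn left", "RIGHT": "turn right"}

def arm_angle(shoulder: Tuple[float, float], wrist: Tuple[float, float]) -> int:
    """Arm direction in the image plane, snapped to the nearest 45 degrees."""
    dx, dy = wrist[0] - shoulder[0], wrist[1] - shoulder[1]
    angle = math.degrees(math.atan2(dy, dx)) % 360.0
    return int(round(angle / 45.0) * 45) % 360

def posture_to_letter(skeleton: Dict[str, Tuple[float, float]]) -> Optional[str]:
    """Classify one skeleton frame as a semaphore letter, if it matches one."""
    key = (arm_angle(skeleton["left_shoulder"], skeleton["left_wrist"]),
           arm_angle(skeleton["right_shoulder"], skeleton["right_wrist"]))
    return SEMAPHORE_LETTERS.get(key)

def decode_message(frames: List[Dict[str, Tuple[float, float]]]) -> Optional[str]:
    """Spell letters from successive postures and map the word to a command."""
    word = "".join(letter for letter in map(posture_to_letter, frames) if letter)
    return COMMANDS.get(word)

In such a setup, skeleton frames would come from the 3D camera's SDK (one dictionary of joint coordinates per held posture), and decode_message would yield a command string such as "move up" for a robot controller to act on.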

URL:
DOI: 10.1007/978-3-642-24082-9_92

Links toward previous steps (curation, corpus...)


Links to Exploration step

ISTEX:E5A365AEB3BA2E9B8F92902AA03BCCAFFF3D35DD

The document in XML format

<record>
<TEI wicri:istexFullTextTei="biblStruct">
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">A New Approach for Human-Robot Interaction Using Human Body Language</title>
<author>
<name sortKey="Nguyen Duc Thanh, Nhan" sort="Nguyen Duc Thanh, Nhan" uniqKey="Nguyen Duc Thanh N" first="Nhan" last="Nguyen-Duc-Thanh">Nhan Nguyen-Duc-Thanh</name>
</author>
<author>
<name sortKey="Stonier, Daniel" sort="Stonier, Daniel" uniqKey="Stonier D" first="Daniel" last="Stonier">Daniel Stonier</name>
</author>
<author>
<name sortKey="Lee, Sungyoung" sort="Lee, Sungyoung" uniqKey="Lee S" first="Sungyoung" last="Lee">Sungyoung Lee</name>
</author>
<author>
<name sortKey="Kim, Dong Han" sort="Kim, Dong Han" uniqKey="Kim D" first="Dong-Han" last="Kim">Dong-Han Kim</name>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">ISTEX</idno>
<idno type="RBID">ISTEX:E5A365AEB3BA2E9B8F92902AA03BCCAFFF3D35DD</idno>
<date when="2011" year="2011">2011</date>
<idno type="doi">10.1007/978-3-642-24082-9_92</idno>
<idno type="url">https://api.istex.fr/document/E5A365AEB3BA2E9B8F92902AA03BCCAFFF3D35DD/fulltext/pdf</idno>
<idno type="wicri:Area/Istex/Corpus">001789</idno>
<idno type="wicri:Area/Istex/Curation">001789</idno>
<idno type="wicri:Area/Istex/Checkpoint">000637</idno>
<idno type="wicri:doubleKey">0302-9743:2011:Nguyen Duc Thanh N:a:new:approach</idno>
<idno type="wicri:Area/Main/Merge">003304</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title level="a" type="main" xml:lang="en">A New Approach for Human-Robot Interaction Using Human Body Language</title>
<author>
<name sortKey="Nguyen Duc Thanh, Nhan" sort="Nguyen Duc Thanh, Nhan" uniqKey="Nguyen Duc Thanh N" first="Nhan" last="Nguyen-Duc-Thanh">Nhan Nguyen-Duc-Thanh</name>
<affiliation wicri:level="1">
<country xml:lang="fr">Corée du Sud</country>
<wicri:regionArea>Department of Radio and Electronics Engineering, KyungHee University, 446-701, Gyeonggi-do</wicri:regionArea>
<wicri:noRegion>Gyeonggi-do</wicri:noRegion>
</affiliation>
<affiliation>
<wicri:noCountry code="no comma">E-mail: Nguyendtnhan@gmail.com</wicri:noCountry>
</affiliation>
</author>
<author>
<name sortKey="Stonier, Daniel" sort="Stonier, Daniel" uniqKey="Stonier D" first="Daniel" last="Stonier">Daniel Stonier</name>
<affiliation wicri:level="3">
<country xml:lang="fr">Corée du Sud</country>
<wicri:regionArea>Yujin Robot Company, Namsung Plaza 1214, Gasan-Dong, Guemcheon-Gu, Seoul</wicri:regionArea>
<placeName>
<settlement type="city">Séoul</settlement>
</placeName>
</affiliation>
<affiliation>
<wicri:noCountry code="no comma">E-mail: d.stonier@gmail.com</wicri:noCountry>
</affiliation>
</author>
<author>
<name sortKey="Lee, Sungyoung" sort="Lee, Sungyoung" uniqKey="Lee S" first="Sungyoung" last="Lee">Sungyoung Lee</name>
<affiliation wicri:level="1">
<country xml:lang="fr">Corée du Sud</country>
<wicri:regionArea>Department of Computer Engineering, KyungHee University, 446-701, Gyeonggi-do</wicri:regionArea>
<wicri:noRegion>Gyeonggi-do</wicri:noRegion>
</affiliation>
<affiliation wicri:level="1">
<country wicri:rule="url">Corée du Sud</country>
</affiliation>
</author>
<author>
<name sortKey="Kim, Dong Han" sort="Kim, Dong Han" uniqKey="Kim D" first="Dong-Han" last="Kim">Dong-Han Kim</name>
<affiliation wicri:level="1">
<country xml:lang="fr">Corée du Sud</country>
<wicri:regionArea>Department of Radio and Electronics Engineering, KyungHee University, 446-701, Gyeonggi-do</wicri:regionArea>
<wicri:noRegion>Gyeonggi-do</wicri:noRegion>
</affiliation>
<affiliation wicri:level="1">
<country wicri:rule="url">Corée du Sud</country>
</affiliation>
</author>
</analytic>
<monogr></monogr>
<series>
<title level="s">Lecture Notes in Computer Science</title>
<imprint>
<date>2011</date>
</imprint>
<idno type="ISSN">0302-9743</idno>
<idno type="eISSN">1611-3349</idno>
<idno type="ISSN">0302-9743</idno>
</series>
<idno type="istex">E5A365AEB3BA2E9B8F92902AA03BCCAFFF3D35DD</idno>
<idno type="DOI">10.1007/978-3-642-24082-9_92</idno>
<idno type="ChapterID">92</idno>
<idno type="ChapterID">Chap92</idno>
</biblStruct>
</sourceDesc>
<seriesStmt>
<idno type="ISSN">0302-9743</idno>
</seriesStmt>
</fileDesc>
<profileDesc>
<textClass></textClass>
<langUsage>
<language ident="en">en</language>
</langUsage>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">Abstract: In order to make the interaction between human and robot easier and more diversity, in this paper, we construct a system in which human users can use the body language to control robot doing some works. At first, the human body data is collected via 3D camera. And then, we extract the skeleton feature. Based on the human posture and Semaphore system, the international communicative method, robot can get the character in the Alphabet. Finally, robot combines the separate character into the understandable message and executes what user wants to do. In simulation, we show the results in which the iRobot can move up, down, turn left, turn right...based on the received body message.</div>
</front>
</TEI>
</record>

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/HapticV1/Data/Main/Merge
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 003304 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/Main/Merge/biblio.hfd -nk 003304 | SxmlIndent | more
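
For programmatic use, the same extraction can be wrapped in a small script. This is a sketch only: it assumes the Dilib tools HfdSelect and SxmlIndent are on the PATH and that $WICRI_ROOT is set as above; no options beyond those shown on this page are used.

# Sketch: drive the HfdSelect / SxmlIndent pipeline shown above from Python.
# Assumes the Dilib tools are on the PATH and WICRI_ROOT is set.
import os
import subprocess

EXPLOR_STEP = os.path.join(os.environ["WICRI_ROOT"],
                           "Ticri/CIDE/explor/HapticV1/Data/Main/Merge")

def fetch_record(key: str) -> str:
    """Return the indented XML of one record (e.g. key "003304") from biblio.hfd."""
    selected = subprocess.run(
        ["HfdSelect", "-h", os.path.join(EXPLOR_STEP, "biblio.hfd"), "-nk", key],
        check=True, capture_output=True, text=True)
    indented = subprocess.run(
        ["SxmlIndent"], input=selected.stdout,
        check=True, capture_output=True, text=True)
    return indented.stdout

print(fetch_record("003304"))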

To put a link to this page in the Wicri network

{{Explor lien
   |wiki=    Ticri/CIDE
   |area=    HapticV1
   |flux=    Main
   |étape=   Merge
   |type=    RBID
   |clé=     ISTEX:E5A365AEB3BA2E9B8F92902AA03BCCAFFF3D35DD
   |texte=   A New Approach for Human-Robot Interaction Using Human Body Language
}}

Wicri

This area was generated with Dilib version V0.6.23.
Data generation: Mon Jun 13 01:09:46 2016. Site generation: Wed Mar 6 09:54:07 2024