Exploration server on haptic devices

Warning: this site is under development!
Warning: this site is generated automatically from raw corpora.
The information has therefore not been validated.

Control and Guidance of Low-Cost Robots via Gesture Perception for Monitoring Activities in the Home

Internal identifier: 000388 (Pmc/Curation); previous: 000387; next: 000389


Authors: Angel D. Sempere; Arturo Serna-Leon; Pablo Gil; Santiago Puente; Fernando Torres

Source:

RBID : PMC:4721773

Abstract

This paper describes the development of a low-cost mini-robot that is controlled by visual gestures. The prototype allows a person with disabilities to perform visual inspections indoors and in domestic spaces. Such a device could be used as the operator's eyes obviating the need for him to move about. The robot is equipped with a motorised webcam that is also controlled by visual gestures. This camera is used to monitor tasks in the home using the mini-robot while the operator remains quiet and motionless. The prototype was evaluated through several experiments testing the ability to use the mini-robot’s kinematics and communication systems to make it follow certain paths. The mini-robot can be programmed with specific orders and can be tele-operated by means of 3D hand gestures to enable the operator to perform movements and monitor tasks from a distance.


Url:
DOI: 10.3390/s151229853
PubMed: 26690448
PubMed Central: 4721773


The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">Control and Guidance of Low-Cost Robots via Gesture Perception for Monitoring Activities in the Home</title>
<author>
<name sortKey="Sempere, Angel D" sort="Sempere, Angel D" uniqKey="Sempere A" first="Angel D." last="Sempere">Angel D. Sempere</name>
</author>
<author>
<name sortKey="Serna Leon, Arturo" sort="Serna Leon, Arturo" uniqKey="Serna Leon A" first="Arturo" last="Serna-Leon">Arturo Serna-Leon</name>
</author>
<author>
<name sortKey="Gil, Pablo" sort="Gil, Pablo" uniqKey="Gil P" first="Pablo" last="Gil">Pablo Gil</name>
</author>
<author>
<name sortKey="Puente, Santiago" sort="Puente, Santiago" uniqKey="Puente S" first="Santiago" last="Puente">Santiago Puente</name>
</author>
<author>
<name sortKey="Torres, Fernando" sort="Torres, Fernando" uniqKey="Torres F" first="Fernando" last="Torres">Fernando Torres</name>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PMC</idno>
<idno type="pmid">26690448</idno>
<idno type="pmc">4721773</idno>
<idno type="url">http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4721773</idno>
<idno type="RBID">PMC:4721773</idno>
<idno type="doi">10.3390/s151229853</idno>
<date when="2015">2015</date>
<idno type="wicri:Area/Pmc/Corpus">000388</idno>
<idno type="wicri:Area/Pmc/Curation">000388</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a" type="main">Control and Guidance of Low-Cost Robots via Gesture Perception for Monitoring Activities in the Home</title>
<author>
<name sortKey="Sempere, Angel D" sort="Sempere, Angel D" uniqKey="Sempere A" first="Angel D." last="Sempere">Angel D. Sempere</name>
</author>
<author>
<name sortKey="Serna Leon, Arturo" sort="Serna Leon, Arturo" uniqKey="Serna Leon A" first="Arturo" last="Serna-Leon">Arturo Serna-Leon</name>
</author>
<author>
<name sortKey="Gil, Pablo" sort="Gil, Pablo" uniqKey="Gil P" first="Pablo" last="Gil">Pablo Gil</name>
</author>
<author>
<name sortKey="Puente, Santiago" sort="Puente, Santiago" uniqKey="Puente S" first="Santiago" last="Puente">Santiago Puente</name>
</author>
<author>
<name sortKey="Torres, Fernando" sort="Torres, Fernando" uniqKey="Torres F" first="Fernando" last="Torres">Fernando Torres</name>
</author>
</analytic>
<series>
<title level="j">Sensors (Basel, Switzerland)</title>
<idno type="eISSN">1424-8220</idno>
<imprint>
<date when="2015">2015</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass></textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">
<p>This paper describes the development of a low-cost mini-robot that is controlled by visual gestures. The prototype allows a person with disabilities to perform visual inspections indoors and in domestic spaces. Such a device could be used as the operator's eyes obviating the need for him to move about. The robot is equipped with a motorised webcam that is also controlled by visual gestures. This camera is used to monitor tasks in the home using the mini-robot while the operator remains quiet and motionless. The prototype was evaluated through several experiments testing the ability to use the mini-robot’s kinematics and communication systems to make it follow certain paths. The mini-robot can be programmed with specific orders and can be tele-operated by means of 3D hand gestures to enable the operator to perform movements and monitor tasks from a distance.</p>
</div>
</front>
<back>
<div1 type="bibliography">
<listBibl>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gross, H" uniqKey="Gross H">H. Gross</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Xiong, X" uniqKey="Xiong X">X. Xiong</name>
</author>
<author>
<name sortKey="Song, Z" uniqKey="Song Z">Z. Song</name>
</author>
<author>
<name sortKey="Zhang, J" uniqKey="Zhang J">J. Zhang</name>
</author>
</analytic>
</biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Jackson, R D" uniqKey="Jackson R">R.D. Jackson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Moreno Avalos, H A" uniqKey="Moreno Avalos H">H.A. Moreno Avalos</name>
</author>
<author>
<name sortKey="Carrera Calder N, I G" uniqKey="Carrera Calder N I">I.G. Carrera Calderón</name>
</author>
<author>
<name sortKey="Romero Hernandez, S" uniqKey="Romero Hernandez S">S. Romero Hernández</name>
</author>
<author>
<name sortKey="Cruz Morales, V" uniqKey="Cruz Morales V">V. Cruz Morales</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Suarez, J" uniqKey="Suarez J">J. Suarez</name>
</author>
<author>
<name sortKey="Murphy, R R" uniqKey="Murphy R">R.R. Murphy</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Van Den Bergh, M" uniqKey="Van Den Bergh M">M. Van den Bergh</name>
</author>
<author>
<name sortKey="Carton, D" uniqKey="Carton D">D Carton</name>
</author>
<author>
<name sortKey="De Nijs, R" uniqKey="De Nijs R">R. de Nijs</name>
</author>
<author>
<name sortKey="Mitsou, N" uniqKey="Mitsou N">N. Mitsou</name>
</author>
<author>
<name sortKey="Landsiedel, C" uniqKey="Landsiedel C">C. Landsiedel</name>
</author>
<author>
<name sortKey="Kuehnlenz, K" uniqKey="Kuehnlenz K">K. Kuehnlenz</name>
</author>
<author>
<name sortKey="Wollherr, D" uniqKey="Wollherr D">D. Wollherr</name>
</author>
<author>
<name sortKey="Van Gool, L" uniqKey="Van Gool L">L. van Gool</name>
</author>
<author>
<name sortKey="Buss, M" uniqKey="Buss M">M. Buss</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Alonso Mora, J" uniqKey="Alonso Mora J">J. Alonso-Mora</name>
</author>
<author>
<name sortKey="Haegeli Lohaus, S" uniqKey="Haegeli Lohaus S">S. Haegeli Lohaus</name>
</author>
<author>
<name sortKey="Leemann, P" uniqKey="Leemann P">P. Leemann</name>
</author>
<author>
<name sortKey="Siegwart, R" uniqKey="Siegwart R">R. Siegwart</name>
</author>
<author>
<name sortKey="Beardsley, P" uniqKey="Beardsley P">P. Beardsley</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Asad, M" uniqKey="Asad M">M. Asad</name>
</author>
<author>
<name sortKey="Abhayaratne, C" uniqKey="Abhayaratne C">C. Abhayaratne</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Wang, C" uniqKey="Wang C">C. Wang</name>
</author>
<author>
<name sortKey="Liu, Z" uniqKey="Liu Z">Z. Liu</name>
</author>
<author>
<name sortKey="Chan, S C" uniqKey="Chan S">S.-C. Chan</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kopinski, T" uniqKey="Kopinski T">T. Kopinski</name>
</author>
<author>
<name sortKey="Magand, S" uniqKey="Magand S">S. Magand</name>
</author>
<author>
<name sortKey="Gepperth, A" uniqKey="Gepperth A">A. Gepperth</name>
</author>
<author>
<name sortKey="Handmann, U" uniqKey="Handmann U">U. Handmann</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kondori, F A" uniqKey="Kondori F">F.A. Kondori</name>
</author>
<author>
<name sortKey="Yousefit, S" uniqKey="Yousefit S">S. Yousefit</name>
</author>
<author>
<name sortKey="Ostovar, A" uniqKey="Ostovar A">A. Ostovar</name>
</author>
<author>
<name sortKey="Liu, L" uniqKey="Liu L">L. Liu</name>
</author>
<author>
<name sortKey="Li, H" uniqKey="Li H">H. Li</name>
</author>
</analytic>
</biblStruct>
<biblStruct></biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Takano, W" uniqKey="Takano W">W. Takano</name>
</author>
<author>
<name sortKey="Ishikawa, J" uniqKey="Ishikawa J">J. Ishikawa</name>
</author>
<author>
<name sortKey="Nakamura, Y" uniqKey="Nakamura Y">Y. Nakamura</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Rosch, O K" uniqKey="Rosch O">O.K. Rösch</name>
</author>
<author>
<name sortKey="Schilling, K" uniqKey="Schilling K">K. Schilling</name>
</author>
<author>
<name sortKey="Roth, H" uniqKey="Roth H">H. Roth</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Okamura, A M" uniqKey="Okamura A">A.M. Okamura</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Roesener, C" uniqKey="Roesener C">C. Roesener</name>
</author>
<author>
<name sortKey="Perner, A" uniqKey="Perner A">A. Perner</name>
</author>
<author>
<name sortKey="Zerawa, S" uniqKey="Zerawa S">S. Zerawa</name>
</author>
<author>
<name sortKey="Hutter, S" uniqKey="Hutter S">S. Hutter</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Dadgostar, F" uniqKey="Dadgostar F">F. Dadgostar</name>
</author>
<author>
<name sortKey="Sarrafzadeh, A" uniqKey="Sarrafzadeh A">A. Sarrafzadeh</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Tara, R" uniqKey="Tara R">R. Tara</name>
</author>
<author>
<name sortKey="Santosa, P" uniqKey="Santosa P">P. Santosa</name>
</author>
<author>
<name sortKey="Adji, T" uniqKey="Adji T">T. Adji</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Chen, F S" uniqKey="Chen F">F.S. Chen</name>
</author>
<author>
<name sortKey="Fu, C M" uniqKey="Fu C">C.M. Fu</name>
</author>
<author>
<name sortKey="Huang, C L" uniqKey="Huang C">C.L. Huang</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Palacios, J M" uniqKey="Palacios J">J.M. Palacios</name>
</author>
<author>
<name sortKey="Sagues, C" uniqKey="Sagues C">C. Sagüés</name>
</author>
<author>
<name sortKey="Montijano, E" uniqKey="Montijano E">E. Montijano</name>
</author>
<author>
<name sortKey="Llorente, S" uniqKey="Llorente S">S. Llorente</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Liang, H" uniqKey="Liang H">H. Liang</name>
</author>
<author>
<name sortKey="Yuan, J" uniqKey="Yuan J">J. Yuan</name>
</author>
<author>
<name sortKey="Thalmann, D" uniqKey="Thalmann D">D. Thalmann</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gil, P" uniqKey="Gil P">P. Gil</name>
</author>
<author>
<name sortKey="Mateo, C" uniqKey="Mateo C">C. Mateo</name>
</author>
<author>
<name sortKey="Torres, F" uniqKey="Torres F">F. Torres</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Caputo, M" uniqKey="Caputo M">M. Caputo</name>
</author>
<author>
<name sortKey="Denker, K" uniqKey="Denker K">K. Denker</name>
</author>
<author>
<name sortKey="Dums, B" uniqKey="Dums B">B. Dums</name>
</author>
<author>
<name sortKey="Umlauf, G" uniqKey="Umlauf G">G. Umlauf</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Cosgun, K" uniqKey="Cosgun K">K. Cosgun</name>
</author>
<author>
<name sortKey="Bunger, M" uniqKey="Bunger M">M. Bunger</name>
</author>
<author>
<name sortKey="Christensen, H I" uniqKey="Christensen H">H.I. Christensen</name>
</author>
</analytic>
</biblStruct>
<biblStruct></biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Keskin, C" uniqKey="Keskin C">C. Keskin</name>
</author>
<author>
<name sortKey="Kirac, F" uniqKey="Kirac F">F. Kirac</name>
</author>
<author>
<name sortKey="Kara, Y E" uniqKey="Kara Y">Y.E. Kara</name>
</author>
<author>
<name sortKey="Akarun, L" uniqKey="Akarun L">L. Akarun</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ren, Z" uniqKey="Ren Z">Z. Ren</name>
</author>
<author>
<name sortKey="Yuan, J" uniqKey="Yuan J">J. Yuan</name>
</author>
<author>
<name sortKey="Meng, J" uniqKey="Meng J">J. Meng</name>
</author>
<author>
<name sortKey="Zhang, Z" uniqKey="Zhang Z">Z. Zhang</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Yong, W" uniqKey="Yong W">W. Yong</name>
</author>
<author>
<name sortKey="Tianli, Y" uniqKey="Tianli Y">Y. Tianli</name>
</author>
<author>
<name sortKey="Shi, T" uniqKey="Shi T">T. Shi</name>
</author>
<author>
<name sortKey="Zhu, L" uniqKey="Zhu L">L. Zhu</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Malassiotis, S" uniqKey="Malassiotis S">S. Malassiotis</name>
</author>
<author>
<name sortKey="Aifanti, N" uniqKey="Aifanti N">N. Aifanti</name>
</author>
<author>
<name sortKey="Strintzis, M G" uniqKey="Strintzis M">M.G. Strintzis</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ferris, R" uniqKey="Ferris R">R. Ferris</name>
</author>
<author>
<name sortKey="Turk, M" uniqKey="Turk M">M. Turk</name>
</author>
<author>
<name sortKey="Raskar, R" uniqKey="Raskar R">R. Raskar</name>
</author>
<author>
<name sortKey="Tan, K H" uniqKey="Tan K">K.H. Tan</name>
</author>
<author>
<name sortKey="Ohashi, G" uniqKey="Ohashi G">G. Ohashi</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Rusu, R B" uniqKey="Rusu R">R.B. Rusu</name>
</author>
<author>
<name sortKey="Brandski, G" uniqKey="Brandski G">G. Brandski</name>
</author>
<author>
<name sortKey="Thibaux, R" uniqKey="Thibaux R">R. Thibaux</name>
</author>
<author>
<name sortKey="Hsu, J" uniqKey="Hsu J">J. Hsu</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Rusu, R B" uniqKey="Rusu R">R.B. Rusu</name>
</author>
<author>
<name sortKey="Blodow, N" uniqKey="Blodow N">N. Blodow</name>
</author>
<author>
<name sortKey="Beetz, M" uniqKey="Beetz M">M. Beetz</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mateo, C M" uniqKey="Mateo C">C.M. Mateo</name>
</author>
<author>
<name sortKey="Gil, P" uniqKey="Gil P">P. Gil</name>
</author>
<author>
<name sortKey="Torres, F" uniqKey="Torres F">F. Torres</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kuremoto, T" uniqKey="Kuremoto T">T. Kuremoto</name>
</author>
<author>
<name sortKey="Obayashi, M" uniqKey="Obayashi M">M. Obayashi</name>
</author>
<author>
<name sortKey="Kobayashi, K" uniqKey="Kobayashi K">K. Kobayashi</name>
</author>
<author>
<name sortKey="Feng, L B" uniqKey="Feng L">L.-B. Feng</name>
</author>
</analytic>
</biblStruct>
</listBibl>
</div1>
</back>
</TEI>
<pmc article-type="research-article">
<pmc-dir>properties open_access</pmc-dir>
<front>
<journal-meta>
<journal-id journal-id-type="nlm-ta">Sensors (Basel)</journal-id>
<journal-id journal-id-type="iso-abbrev">Sensors (Basel)</journal-id>
<journal-id journal-id-type="publisher-id">sensors</journal-id>
<journal-title-group>
<journal-title>Sensors (Basel, Switzerland)</journal-title>
</journal-title-group>
<issn pub-type="epub">1424-8220</issn>
<publisher>
<publisher-name>MDPI</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="pmid">26690448</article-id>
<article-id pub-id-type="pmc">4721773</article-id>
<article-id pub-id-type="doi">10.3390/s151229853</article-id>
<article-id pub-id-type="publisher-id">sensors-15-29853</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Article</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>Control and Guidance of Low-Cost Robots via Gesture Perception for Monitoring Activities in the Home</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name>
<surname>Sempere</surname>
<given-names>Angel D.</given-names>
</name>
<xref ref-type="author-notes" rid="fn1-sensors-15-29853"></xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Serna-Leon</surname>
<given-names>Arturo</given-names>
</name>
<xref ref-type="author-notes" rid="fn1-sensors-15-29853"></xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Gil</surname>
<given-names>Pablo</given-names>
</name>
<xref ref-type="author-notes" rid="fn1-sensors-15-29853"></xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Puente</surname>
<given-names>Santiago</given-names>
</name>
<xref rid="c1-sensors-15-29853" ref-type="corresp">*</xref>
<xref ref-type="author-notes" rid="fn1-sensors-15-29853"></xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Torres</surname>
<given-names>Fernando</given-names>
</name>
<xref ref-type="author-notes" rid="fn1-sensors-15-29853"></xref>
</contrib>
</contrib-group>
<contrib-group>
<contrib contrib-type="editor">
<name>
<surname>Passaro</surname>
<given-names>Vittorio M. N.</given-names>
</name>
<role>Academic Editor</role>
</contrib>
</contrib-group>
<aff id="af1-sensors-15-29853">Physics, Systems Engineering and Signal Theory Department, University of Alicante, San Vicente del Raspeig, E-03690 Alicante, Spain;
<email>adss1@alu.ua.es</email>
(A.D.S.);
<email>asl37@alu.ua.es</email>
(A.S.-L.);
<email>Pablo.gil@ua.es</email>
(P.G.);
<email>Fernando.torres@ua.es</email>
(F.T.)</aff>
<author-notes>
<corresp id="c1-sensors-15-29853">
<label>*</label>
Correspondence:
<email>Santiago.puente@ua.es</email>
; Tel.: +34-965-90-3400 (ext. 2371); Fax: +34-965-90-3464</corresp>
<fn id="fn1-sensors-15-29853">
<label></label>
<p>These authors contributed equally to this work.</p>
</fn>
</author-notes>
<pub-date pub-type="epub">
<day>11</day>
<month>12</month>
<year>2015</year>
</pub-date>
<pub-date pub-type="collection">
<month>12</month>
<year>2015</year>
</pub-date>
<volume>15</volume>
<issue>12</issue>
<fpage>31268</fpage>
<lpage>31292</lpage>
<history>
<date date-type="received">
<day>18</day>
<month>9</month>
<year>2015</year>
</date>
<date date-type="accepted">
<day>04</day>
<month>12</month>
<year>2015</year>
</date>
</history>
<permissions>
<copyright-statement>© 2015 by the authors; licensee MDPI, Basel, Switzerland.</copyright-statement>
<copyright-year>2015</copyright-year>
<license>
<license-p>
<pmc-comment>CREATIVE COMMONS</pmc-comment>
This article is an open access article distributed under the terms and conditions of the Creative Commons by Attribution (CC-BY) license (
<ext-link ext-link-type="uri" xlink:href="http://creativecommons.org/licenses/by/4.0/">http://creativecommons.org/licenses/by/4.0/</ext-link>
).</license-p>
</license>
</permissions>
<abstract>
<p>This paper describes the development of a low-cost mini-robot that is controlled by visual gestures. The prototype allows a person with disabilities to perform visual inspections indoors and in domestic spaces. Such a device could be used as the operator's eyes obviating the need for him to move about. The robot is equipped with a motorised webcam that is also controlled by visual gestures. This camera is used to monitor tasks in the home using the mini-robot while the operator remains quiet and motionless. The prototype was evaluated through several experiments testing the ability to use the mini-robot’s kinematics and communication systems to make it follow certain paths. The mini-robot can be programmed with specific orders and can be tele-operated by means of 3D hand gestures to enable the operator to perform movements and monitor tasks from a distance.</p>
</abstract>
<kwd-group>
<kwd>robot systems</kwd>
<kwd>human-robot interaction</kwd>
<kwd>3D gesture perception</kwd>
<kwd>3D descriptors</kwd>
</kwd-group>
</article-meta>
</front>
<body>
<sec id="sec1-sensors-15-29853">
<title>1. Introduction</title>
<p>Mobility is one of the most important facets of any person’s autonomous capabilities. However, motor disabilities constitute a major issue in our society. A significant proportion of older people have serious mobility problems. According to recent reports, approximately 20% of people aged 70 years or older and 50% of people aged 85 or over report difficulties in performing basic daily living activities [
<xref rid="B1-sensors-15-29853" ref-type="bibr">1</xref>
]. Mobility problems are common and impede domestic activities. Furthermore, current demographics show that the elderly population (aged over 65) in industrialised countries is continuously increasing [
<xref rid="B2-sensors-15-29853" ref-type="bibr">2</xref>
,
<xref rid="B3-sensors-15-29853" ref-type="bibr">3</xref>
].</p>
<p>The assistance of a machine to perform autonomous tasks would be of great benefit to many people. The lack of human resources available to assist people with mobility problems has naturally led to the creation of systems for achieving autonomous mobility. Future research in this area should strive to make life easier in the home. Today, many basic household chores can be supported through technology, in the form of domestic robots. </p>
<p>Several research projects exist for assisting the elderly through robotic solutions for healthcare and quality of life, including home care robots: Georgia Tech has Cody; CMU has Herb; the Fraunhofer Institute has Care-O-Bot; Yale, USC, and MIT are running an NSF-funded project on Socially Assistive Robotics; and CIR and KAIST in Korea are conducting their own robot projects [
<xref rid="B4-sensors-15-29853" ref-type="bibr">4</xref>
]. The University of Reading has also been working on a project called Hector: Robotic Assistance for the Elderly [
<xref rid="B5-sensors-15-29853" ref-type="bibr">5</xref>
].</p>
<p>However, the vast majority of current robotic assistance solutions present a serious handicap, namely that the cost of domestic robots is still prohibitive for the average family. At the same time, the emergence of single-board computers, such as the Raspberry Pi, and the popularisation of microcontroller evaluation boards, such as Arduino, offer new possibilities for realising capable robots on a significantly lower budget.</p>
<p>Additionally, tele-operation and control of machines such as robotic arms or vehicles with various haptic tools have been on the market for several years in the form of various commercial products [
<xref rid="B6-sensors-15-29853" ref-type="bibr">6</xref>
]. One example of such a product is the ABB FlexPendant [
<xref rid="B7-sensors-15-29853" ref-type="bibr">7</xref>
], a controller that combines touchscreen controls with a joystick and physical buttons, thereby allowing the user to exert direct control over a robotic arm or mixed control using pre-programmed scripts; another example is the Rovio WowWee [
<xref rid="B8-sensors-15-29853" ref-type="bibr">8</xref>
], an RC car that runs a web server and can be controlled using a computer or tablet via the Internet. Moreover, non-haptic systems have also been used commercially, though so far only for leisure products, such as those for computer interface control (Samsung Smart Interaction, an interface for controlling a television through voice commands and gestures) or video game interaction (Sony EyeToy or Microsoft Kinect). Driven by these video game controllers, the popularisation of depth cameras has reduced the cost of manufacturing sensors based on various underlying technologies, a fact which has made these devices suitable options for recognising body parts, thus enabling interaction with computers and replacing traditional haptic interfaces. By focusing the problem of body part recognition on the identification of various hand gestures, we can allow individuals with mobility problems to interact easily with a servant robotic platform by associating commands with these gestures.</p>
<p>Service robots can help physically disabled people to live a more independent life and can also offer sensory support. In the near future, they may well become a common household item adapted for the home. Moreover, they could be connected to emergency services that can provide help and support 24 h a day, seven days a week [
<xref rid="B9-sensors-15-29853" ref-type="bibr">9</xref>
]. A low-cost robot would be a promising solution for individuals who, due to poor memory skills or reduced mobility, are unsafe at home [
<xref rid="B10-sensors-15-29853" ref-type="bibr">10</xref>
,
<xref rid="B11-sensors-15-29853" ref-type="bibr">11</xref>
]. The purpose of our mobile robot is to serve as a prototype of a low-cost service robot for monitoring rooms and allowing individuals to monitor locations inside their homes. Our low-cost mobile robot prototype was designed with two main features: the ability to be remotely controlled by hand gestures captured by an RGBD sensor (Kinect) and the capability of moving autonomously under the control of a Raspberry Pi single-board computer.</p>
<p>In recent years, numerous attempts have been made to resolve the problems of hand gesture recognition using real-time depth sensors [
<xref rid="B12-sensors-15-29853" ref-type="bibr">12</xref>
] and many other efforts have focused on building remote control applications for robots that make use of them [
<xref rid="B13-sensors-15-29853" ref-type="bibr">13</xref>
,
<xref rid="B14-sensors-15-29853" ref-type="bibr">14</xref>
] due to their potential applications in contactless human-computer interaction. Thus, considerable progress has been made in this area and a number of algorithms addressing different aspects of the problem have been proposed. Techniques and methods to improve the pre-processing algorithms and to reduce the quantization error caused by the low resolution of the Kinect [
<xref rid="B15-sensors-15-29853" ref-type="bibr">15</xref>
] for hand recognition include methods based on local shape detection using superpixel and colour segmentation techniques [
<xref rid="B16-sensors-15-29853" ref-type="bibr">16</xref>
], approaches for hand gesture recognition using learning techniques such as PCA and multilayer perceptron on a large database of samples [
<xref rid="B17-sensors-15-29853" ref-type="bibr">17</xref>
] and so forth. This paper proposes a system based on a state machine that extracts accurate 3D hand gestures using a three-dimensional descriptor. The combination of a 3D descriptor such as VFH with the implementation of a state machine improves the results, reducing the recognition error with respect to the ground truth. Our system uses the skeleton information from the Kinect to perform markerless hand extraction, similar to [
<xref rid="B16-sensors-15-29853" ref-type="bibr">16</xref>
]; however, unlike that work, we use a global descriptor instead of the local shape of superpixels in order to retain the overall shape of the gestures to be recognized. In addition, our training phase does not require as much data and time as in [
<xref rid="B17-sensors-15-29853" ref-type="bibr">17</xref>
]. Current methods are generally based on appearance or on models, and they depend on the image features, invariance properties and number of gestures to be recognized [
<xref rid="B12-sensors-15-29853" ref-type="bibr">12</xref>
,
<xref rid="B18-sensors-15-29853" ref-type="bibr">18</xref>
]. Moreover, they can only handle a discrete set of hand gestures if they are to run in real time. To mitigate this limitation and achieve robustness, our system works with a small set of simple hand gestures and combines them into sequences of two or more gestures, each of which can be associated with a different order or command.</p>
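The gesture-sequence idea described above can be illustrated with a minimal sketch. The gesture labels, command names and sequence length below are hypothetical placeholders rather than the authors' actual vocabulary; the sketch only shows how a small state machine can map sequences of simple gestures to commands.

```python
# Illustrative sketch only: a minimal gesture-sequence matcher, not the authors' implementation.
# Gesture labels and command names are hypothetical placeholders.

SEQUENCE_COMMANDS = {
    ("open_hand", "fist"): "STOP",
    ("open_hand", "point"): "GO_FORWARD",
    ("fist", "point"): "TURN_LEFT",
}

class GestureSequencer:
    """Accumulates recognised gestures and emits a command when a known sequence completes."""

    def __init__(self, max_length=2):
        self.buffer = []
        self.max_length = max_length

    def push(self, gesture):
        """Add a recognised gesture; return the associated command or None."""
        self.buffer.append(gesture)
        if len(self.buffer) > self.max_length:
            self.buffer.pop(0)
        command = SEQUENCE_COMMANDS.get(tuple(self.buffer))
        if command is not None:
            self.buffer.clear()   # restart the state machine after a match
        return command

# Example: two consecutive gestures trigger a command.
seq = GestureSequencer()
assert seq.push("open_hand") is None
assert seq.push("fist") == "STOP"
```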
<p>This paper is organised as follows:
<xref ref-type="sec" rid="sec2-sensors-15-29853">Section 2</xref>
describes the proposed method. We begin by specifying the components of the physical platform to be tele-operated and the methodology for commanding its operations. We then describe how the perception system is designed to enable non-haptic tele-operation, using a depth sensor to locate the hands of the operator in space and to perform gesture analysis based on prior training.
<xref ref-type="sec" rid="sec3-sensors-15-29853">Section 3</xref>
presents several experiments that cover various aspects of the functionality and performance of the proposed method. Finally, we report the conclusions of this work.</p>
</sec>
<sec id="sec2-sensors-15-29853">
<title>2. Proposed Method </title>
<p>The motivation for our work is to facilitate the performance of surveillance and verification tasks by persons with mobility issues in indoor living spaces. </p>
<p>This project has two major components: the client computer and the robot. The client device displays the camera signal that is acquired by the robot and detects hand gestures using a Kinect. The Kinect was chosen for two main reasons: firstly, a large field of view is needed so that most of the operator's body can be mapped (in contrast to other sensors such as the Leap Motion); and secondly, it is the cheapest and most extensively used sensor of its kind, which allows us to integrate it easily, reusing code from free and open libraries without platform constraints. The robot, called Charlie [
<xref rid="B19-sensors-15-29853" ref-type="bibr">19</xref>
], is capable of moving and of streaming the camera signal. It also incorporates a servomotor that controls the inclination of the camera and several optical sensors that capture images of the ground for line detection. Through its pre-programmed behaviours, it is possible to activate a script that executes a task, such as going to a specific room, or to carry out direct actions, such as positioning the servo. The chosen robot provides flexibility in the application design; however, the application could be implemented on other platforms, such as the iRobot Create 2, with small changes. The Kinect is placed in the location where the person with reduced mobility spends most of their time, thereby providing that person with full control of the robot at all times. The camera on the robot continuously streams its forward field of vision; an RGB 2D camera is used for this purpose because it offers a useful, good-quality image for a low price. Because it is not focused on the user and because its technology is less robust in several respects (such as background changes, differences in lighting, and the presence of multiple objects on the same plane), given the difficulty of applying descriptors compared with a point cloud, this sensor is not used for item recognition.</p>
<table-wrap id="sensors-15-29853-t001" position="float">
<object-id pub-id-type="pii">sensors-15-29853-t001_Table 1</object-id>
<label>Table 1</label>
<caption>
<p>The components of Charlie and their cost (September 2014).</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="center" valign="middle" style="border-top:solid thin;border-bottom:solid thin" rowspan="1" colspan="1">Component</th>
<th align="center" valign="middle" style="border-top:solid thin;border-bottom:solid thin" rowspan="1" colspan="1">Cost in €</th>
</tr>
</thead>
<tbody>
<tr>
<td align="center" valign="middle" rowspan="1" colspan="1">GoShield-GR (include board, motor driver, wheels)</td>
<td align="center" valign="middle" rowspan="1" colspan="1">114.76</td>
</tr>
<tr>
<td align="center" valign="middle" rowspan="1" colspan="1">Arduino Due</td>
<td align="center" valign="middle" rowspan="1" colspan="1">47.19</td>
</tr>
<tr>
<td align="center" valign="middle" rowspan="1" colspan="1">Raspberry Pi B</td>
<td align="center" valign="middle" rowspan="1" colspan="1">35.99</td>
</tr>
<tr>
<td align="center" valign="middle" rowspan="1" colspan="1">Raspberry camera module</td>
<td align="center" valign="middle" rowspan="1" colspan="1">22.95</td>
</tr>
<tr>
<td align="center" valign="middle" rowspan="1" colspan="1">32 GB class 10 Samsung SD card</td>
<td align="center" valign="middle" rowspan="1" colspan="1">17.48</td>
</tr>
<tr>
<td align="center" valign="middle" rowspan="1" colspan="1">S3003 servomotor</td>
<td align="center" valign="middle" rowspan="1" colspan="1">3.45</td>
</tr>
<tr>
<td align="center" valign="middle" rowspan="1" colspan="1">SR04 distance sensor</td>
<td align="center" valign="middle" rowspan="1" colspan="1">1.11</td>
</tr>
<tr>
<td align="center" valign="middle" rowspan="1" colspan="1">Ralink RT5370 Wi-Fi USB adapter</td>
<td align="center" valign="middle" rowspan="1" colspan="1">6.95</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Cables, enclosures, 4 AA batteries</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">17.00</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">
<bold>TOTAL</bold>
</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">
<bold>266.88</bold>
</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>Another major goal is to make this technology available to people on low incomes by always striving to use cost-effective components. The total cost of the robot prototype is less than €270, with an autonomy of around 1 h (
<xref ref-type="table" rid="sensors-15-29853-t001">Table 1</xref>
). The client system can be installed on any x86-compatible computer with the addition of a Kinect.</p>
<sec id="sec2dot1-sensors-15-29853">
<title>2.1. Robot Design </title>
<p>The robot prototype, called Charlie, was built using readily available and cost-effective components. The main components are a Raspberry Pi, which controls the high-level commands for the robot, and an Arduino Due, which is responsible for handling the commands for the low-level API. Both API levels are described in the next section.</p>
<p>The core system is a Raspberry Pi that runs Raspbian, a Debian-based GNU/Linux distribution. This module manages the command server and streams the camera signal through a motion JPEG server. The command server uses the WebSocket protocol. This protocol allows the client device to control the components of the robot through the API. For communication with the client device, the system has a Wi-Fi adapter and is able to create an
<italic>ad hoc</italic>
network or to connect to an existing network. </p>
<p>The Arduino Due is responsible for controlling the components of the GoShield-GR shield. This microcontroller receives control commands from the Raspberry Pi via USB and outputs electrical signals to the various components of the GoShield-GR shield. Of these, the primary components are 2 DC motors for movement and 21 CNY70 optical sensors that are focused on the ground for line detection. The other components are 14 LEDs and a buzzer, for indication purposes. A schematic diagram of the robot is provided in
<xref ref-type="fig" rid="sensors-15-29853-f001">Figure 1</xref>
.</p>
<fig id="sensors-15-29853-f001" position="float">
<label>Figure 1</label>
<caption>
<p>Robot design schematic.</p>
</caption>
<graphic xlink:href="sensors-15-29853-g001"></graphic>
</fig>
<p>In total, the robot API consists of 21 optical sensors for detecting lines on the ground, 2 DC motors, a servomotor that controls the inclination of the camera, a distance sensor, and 14 LEDs and a buzzer to serve as indicators (
<xref ref-type="table" rid="sensors-15-29853-t002">Table 2</xref>
). </p>
<table-wrap id="sensors-15-29853-t002" position="float">
<object-id pub-id-type="pii">sensors-15-29853-t002_Table 2</object-id>
<label>Table 2</label>
<caption>
<p>Accessible components of the robot.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="center" valign="middle" style="border-top:solid thin;border-bottom:solid thin" rowspan="1" colspan="1">Component</th>
<th align="center" valign="middle" style="border-top:solid thin;border-bottom:solid thin" rowspan="1" colspan="1">Quantity</th>
</tr>
</thead>
<tbody>
<tr>
<td align="center" valign="middle" rowspan="1" colspan="1">75:1 Micro Metal DC/Gearmotor HP</td>
<td align="center" valign="middle" rowspan="1" colspan="1">2</td>
</tr>
<tr>
<td align="center" valign="middle" rowspan="1" colspan="1">CNY70 optical sensors</td>
<td align="center" valign="middle" rowspan="1" colspan="1">21</td>
</tr>
<tr>
<td align="center" valign="middle" rowspan="1" colspan="1">Raspberry camera module 1080 p</td>
<td align="center" valign="middle" rowspan="1" colspan="1">1</td>
</tr>
<tr>
<td align="center" valign="middle" rowspan="1" colspan="1">SR04 distance sensor</td>
<td align="center" valign="middle" rowspan="1" colspan="1">1</td>
</tr>
<tr>
<td align="center" valign="middle" rowspan="1" colspan="1">S3003 servomotor</td>
<td align="center" valign="middle" rowspan="1" colspan="1">1</td>
</tr>
<tr>
<td align="center" valign="middle" rowspan="1" colspan="1">LEDs</td>
<td align="center" valign="middle" rowspan="1" colspan="1">14</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Buzzer</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">1</td>
</tr>
</tbody>
</table>
</table-wrap>
</sec>
<sec id="sec2dot2-sensors-15-29853">
<title>2.2. Robot Commands API </title>
<p>Charlie the robot is capable of executing pre-programmed behaviours and taking direct actions through its API. Examples of its pre-programmed behaviours include moving to a specific room in the house, performing a check of each room, or returning to the user’s room. The direct actions it can take include controlling the inclination of the camera or its own movement (such as going straight, stopping its movement or turning left) and measuring the distance to the closest object.</p>
<p>The API is divided into two levels: direct action commands and scripting commands. The first level handles the direct action commands (cmd). These commands are transmitted using 1 to 3 bytes and are used to control the components of the robot; they are related to the kinematics of the robot, the positioning of the servomotor, and the read-out of the distance sensor or the optical sensors. The first byte of a command indicates the number of the command to be issued. Depending on the command, it may also carry zero, one or two parameters, each of which is 1 byte in length. Most direct action commands are retransmitted without alteration by the Raspberry Pi to the Arduino Due board through USB communication. The remaining commands are implemented by the Raspberry Pi itself, which is responsible for controlling the servomotor and measuring distances.</p>
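As a minimal sketch of this byte format, a direct action command can be packed on the client side as follows; it assumes only what is stated above (first byte = command number, up to two 1-byte parameters), and the example command number and parameter values are hypothetical.

```python
def pack_direct_command(cmd, p1=None, p2=None):
    """Pack a direct action command into 1-3 bytes:
    first byte = command number, followed by up to two 1-byte parameters."""
    payload = [cmd]
    for p in (p1, p2):
        if p is not None:
            if not 0 <= p <= 255:
                raise ValueError("each parameter must fit in one byte")
            payload.append(p)
    return bytes(payload)

# Hypothetical example: command 1 with two parameters (e.g., motor references).
message = pack_direct_command(1, 200, 123)   # -> b'\x01\xc8{' (3 bytes)
```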
<p>The high-level API enables the saving, loading, deletion and execution of the scripts for the pre-programmed tasks. The scripting language is Python, and it is possible to use the full set of instructions and standard libraries of this language plus two additional functions that provide specific functionalities of the robot, one for the execution of commands and the other for receiving data from the hardware. The first of these functions is executeCommand(cmd, [p1], [p2]), which executes direct action commands with the API. If the command returns a value (reads a sensor), then the appropriate function to use is executeReadCommand(cmd). None of the sensor reading commands has parameters. With respect to the syntax, the first byte indicates the type of command. If the command is related to the memory (
<italic>i.e</italic>
., if it saves, loads or deletes a script), then the next byte indicates the ID of the relevant script. Therefore, there are 256 available positions for storing scripts. The remainder of the command bytes contain the Python code. There is no length constraint at this API level. For saving a script, the server receives a command whose first byte contains the value “50” and whose second byte indicates the ID of the script (between 0 and 255); the following bytes contain the Python script itself. Consequently, the server will save the script in a file that is named with the specified ID. A command for executing a script that was previously saved consists of two bytes, where the first byte contains the value 51 and the second byte contains the script ID. The server loads the file that contains the script and executes it using the exec function of Python. </p>
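A hedged sketch of how a client might build the two script-related messages described above (value 50 to save a script under an ID, value 51 to execute it). Only the leading bytes 50 and 51 and the 0-255 ID range come from the text; the example script body and the command numbers used inside it are hypothetical.

```python
def build_save_script(script_id, source_code):
    """First byte 50 = save, second byte = script ID (0-255), remaining bytes = Python source."""
    if not 0 <= script_id <= 255:
        raise ValueError("script ID must be between 0 and 255")
    return bytes([50, script_id]) + source_code.encode("utf-8")

def build_execute_script(script_id):
    """First byte 51 = execute, second byte = ID of a previously saved script."""
    return bytes([51, script_id])

# Hypothetical script using the two robot-specific functions described above
# (the command numbers 1 and 10 are placeholders).
script = "executeCommand(1, 200, 123)\nprint(executeReadCommand(10))\n"
save_msg = build_save_script(7, script)
run_msg = build_execute_script(7)
```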
</sec>
<sec id="sec2dot3-sensors-15-29853">
<title>2.3. Robot Communication </title>
<p>One important issue related to the robot scheme is the implementation of communication among the Android application, the Raspberry Pi and the Arduino board. This communication is performed in two steps: one between the Android application and the Raspberry Pi and the other between the Raspberry Pi and the Arduino board. The first is high-level communication; it allows the transmission of commands and scripts to the robot and the reception of the camera signal by the user of the Android application. This channel uses the IEEE 802.11 standard for Wi-Fi communication to connect through HTTP for the camera signal and uses WebSockets in the command server. The second level of communication is performed between the Raspberry Pi and the Arduino board. This channel is used for low-level communication; it uses a connection through the USB port of the Raspberry Pi to the USB port of the Arduino board, enabling serial communication between them for the transmission of commands and sensor values. The scheme of the communication channels is depicted in
<xref ref-type="fig" rid="sensors-15-29853-f002">Figure 2</xref>
.</p>
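The low-level bridge between the WebSocket command server and the Arduino board could look roughly like the following sketch. It assumes the third-party Python packages websockets and pyserial; the serial device path, baud rate and TCP port are assumptions and are not taken from the paper.

```python
# Sketch of the Raspberry Pi command bridge (assumed settings, not the authors' code).
import asyncio
import serial
import websockets

# USB serial link to the Arduino Due; device path and baud rate are assumptions.
arduino = serial.Serial("/dev/ttyACM0", 9600, timeout=1)

async def handle_client(websocket):
    # Older versions of the websockets library also pass a 'path' argument here.
    async for message in websocket:              # raw command bytes from the client
        if isinstance(message, str):
            message = message.encode("utf-8")
        arduino.write(message)                   # forward the command unchanged over USB serial

async def main():
    async with websockets.serve(handle_client, "0.0.0.0", 8765):
        await asyncio.Future()                   # run the command server forever

if __name__ == "__main__":
    asyncio.run(main())
```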
<fig id="sensors-15-29853-f002" position="float">
<label>Figure 2</label>
<caption>
<p>Communication scheme of the robot.</p>
</caption>
<graphic xlink:href="sensors-15-29853-g002"></graphic>
</fig>
<p>The steps of initialising communication are as follows:
<list list-type="order">
<list-item>
<p>The Raspberry Pi creates an ad hoc Wi-Fi hotspot to connect to the Android user</p>
</list-item>
<list-item>
<p>The Raspberry Pi initialises the camera server</p>
</list-item>
<list-item>
<p>The Raspberry Pi initialises the command server</p>
<list list-type="alpha-lower">
<list-item>
<p>It opens a serial communication channel with the Arduino board</p>
</list-item>
<list-item>
<p>It starts up the WebSocket</p>
</list-item>
</list>
</list-item>
<list-item>
<p>The Android application connects to the Wi-Fi network of the Raspberry Pi</p>
</list-item>
<list-item>
<p>The Android application generates a WebSocket connection</p>
</list-item>
<list-item>
<p>The Raspberry Pi accepts the Android WebSocket connection</p>
</list-item>
<list-item>
<p>The Android application sends a command via WebSockets, e.g., [1, 200, 123] (3 bytes); see the sketch after this list</p>
</list-item>
<list-item>
<p>The Raspberry Pi receives the command</p>
</list-item>
<list-item>
<p>The Raspberry Pi translates the command and sends it to the Arduino board</p>
</list-item>
<list-item>
<p>The Arduino board translates the command to the electrical references of the motors</p>
</list-item>
</list>
</p>
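To make the sequence concrete, the client side of steps 4-7 might look like the following sketch. The real client is an Android application; the server address and port used here are assumptions, and only the 3-byte command [1, 200, 123] comes from the list above.

```python
# Illustrative client-side sketch (the real client is an Android application).
import asyncio
import websockets

async def send_command():
    # Address and port of the Raspberry Pi command server are assumptions.
    async with websockets.connect("ws://192.168.1.1:8765") as websocket:
        await websocket.send(bytes([1, 200, 123]))   # 3-byte direct action command from step 7

asyncio.run(send_command())
```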
</sec>
<sec id="sec2dot4-sensors-15-29853">
<title>2.4. Perception System</title>
<sec id="sec2dot4dot1-sensors-15-29853">
<title>2.4.1. Human-Hand Detection Process</title>
<p>Hand detection has been widely discussed in the literature on perception systems for interaction between humans and electronic devices. New sensor technologies have facilitated the sensing of human body parts for remote control of avatars and robotic devices. 3D cameras such as RGBD or time-of-flight (ToF) cameras enable the extraction of human gestures and movements [
<xref rid="B20-sensors-15-29853" ref-type="bibr">20</xref>
], and it is anticipated that they can be used to control devices from a distance. Master-slave architectures such as haptic devices or infrared remote control have been widely used in previous works to move robots both with feedback [
<xref rid="B21-sensors-15-29853" ref-type="bibr">21</xref>
,
<xref rid="B22-sensors-15-29853" ref-type="bibr">22</xref>
] and without feedback [
<xref rid="B23-sensors-15-29853" ref-type="bibr">23</xref>
]. At present, visual sensors have replaced these systems in many cases because of their cost and versatility. Cameras allow the extraction of the hands of a user using a combination of several image processing techniques, such as skin colour segmentation (based on a combination of static and dynamic thresholds) [
<xref rid="B24-sensors-15-29853" ref-type="bibr">24</xref>
,
<xref rid="B25-sensors-15-29853" ref-type="bibr">25</xref>
], background subtraction using different scenes of a video stream [
<xref rid="B26-sensors-15-29853" ref-type="bibr">26</xref>
] and shape recognition based on morphological information (
<italic>i.e</italic>
., curvature calculations and convexity defects [
<xref rid="B27-sensors-15-29853" ref-type="bibr">27</xref>
]). The depth information provided by RGBD and ToF sensors helps us to solve the two main drawbacks of this approach, namely the dependence on the lighting of the scene and the distinction between background and user, in a simple and robust way. The ability to capture 3D data from a scene, which can be modelled as a point cloud, introduces the possibility of determining hand poses by means of depth clustering analysis. This analysis can be combined with some of the techniques used in RGB analysis, such as the use of morphological constraints [
<xref rid="B28-sensors-15-29853" ref-type="bibr">28</xref>
] or dynamic skin colour segmentation [
<xref rid="B29-sensors-15-29853" ref-type="bibr">29</xref>
]. The positions of the hands can also be obtained through more complex analysis, such as tracking the skeleton of the user [
<xref rid="B30-sensors-15-29853" ref-type="bibr">30</xref>
]. In the latter case, there are several implementations of skeletal tracking available to the general public (OpenNI, Kinect SDK) that are sufficiently reliable for our proposed work. The proposed method involves several processing steps (
<xref ref-type="fig" rid="sensors-15-29853-f003">Figure 3</xref>
):
<list list-type="order">
<list-item>
<p>Detecting the skeleton of the user</p>
</list-item>
<list-item>
<p>Segmenting the area contiguous to the hand (rough segmentation)</p>
</list-item>
<list-item>
<p>Splitting hand points and noise from the extracted area (fine segmentation).</p>
</list-item>
</list>
</p>
<fig id="sensors-15-29853-f003" position="float">
<label>Figure 3</label>
<caption>
<p>Data flow among the different software components of the application.</p>
</caption>
<graphic xlink:href="sensors-15-29853-g003"></graphic>
</fig>
<p>
<italic>Step 1: Using skeletal tracking</italic>
 </p>
<p>As the first step of hand segmentation, a skeleton tracking node provided by the manufacturer of the sensor (NiTE middleware with the OpenNI driver) is used. The tracking is carried out with the IR sensor, without using the RGB information, mainly because IR sensing is less sensitive to lighting changes in the scene. Only interference at the same wavelength could cause detection problems, but this is unlikely in indoor environments and households, which are lit by artificial light whose spectrum is far from the infrared. This has been tested empirically with different people and rooms, which means that the tracking system works properly with people of different skin colours and body shapes. Moreover, the tracking system can be used with people wearing any kind of clothing, provided that the garments are not too voluminous; close-fitting clothes are preferable. In our case, skeletal tracking is applied to the upper body only, the idea being to provide the least light-dependent and least invasive system possible. In addition, skeletal tracking can follow multiple users in the scene at the same time, which makes it easy to extend the capabilities of our system to the control of more than one target with a single sensor or to the collaborative control of the same target. The greatest disadvantages of this mechanism are the need for a starting pose for skeleton recognition and the need for the user to be positioned in front of the camera, with most of his or her body within the sensor's field of view.</p>
<p>In comparison with the official MS Kinect SDK, the NiTE middleware offers greater flexibility for embedding it in a final solution. Various binaries are provided that have been compiled for Windows and GNU/Linux systems, and in later versions compatibility with ARM processors has been added, allowing the system to host applications on mobile devices and microcomputers. The maintainers of the most popular Linux distributions provide a ready-to-install package in the official repositories, and the maintainers of the Robot Operating System (ROS) middleware provide a package that is ready to add to the architecture of a standard ROS solution. Furthermore, in a comparison of the precision of the two skeletal tracking approaches, there are no noticeable differences between them in the normal use case [
<xref rid="B31-sensors-15-29853" ref-type="bibr">31</xref>
].</p>
<p>
<italic>Step 2: Rough segmentation</italic>
 </p>
<p>Once the approximate positions of the two hands are located, the next step is the extraction of the points in their surroundings. The cloud is ordered in a k-d tree, a binary tree data structure used for organising a group of k-dimensional components. In this case, we organise the cloud of points by their spatial coordinates to enable an efficient range search in the neighbourhood of the detected centre of the hand (
<xref ref-type="fig" rid="sensors-15-29853-f004">Figure 4</xref>
).</p>
<fig id="sensors-15-29853-f004" position="float">
<label>Figure 4</label>
<caption>
<p>Representations of the detection of the human skeleton via the segmentation and filtering of a point cloud to detect the hands of a user sitting on a couch.</p>
</caption>
<graphic xlink:href="sensors-15-29853-g004"></graphic>
</fig>
<p>For this step, a radius of interest of 24 cm around the centre is considered. This value is the sum, rounded up, of the average error on the hand position detection of a user in a sitting position (14.2 cm [
<xref rid="B31-sensors-15-29853" ref-type="bibr">31</xref>
]) and half the average length of a male hand among members of the European ethnic group with the largest hand size (19.5 cm) [
<xref rid="B32-sensors-15-29853" ref-type="bibr">32</xref>
].</p>
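This rough segmentation step can be illustrated with SciPy's k-d tree; the prototype itself relies on a point cloud library, so the following is only a sketch of the range search with the 24 cm radius derived above, on a synthetic cloud.

```python
# Sketch of the rough segmentation step using SciPy's k-d tree (illustrative only).
import numpy as np
from scipy.spatial import cKDTree

def segment_hand_region(cloud_xyz, hand_centre, radius=0.24):
    """Return all points within 'radius' metres (24 cm) of the detected hand centre."""
    tree = cKDTree(cloud_xyz)                           # order the cloud by spatial coordinates
    indices = tree.query_ball_point(hand_centre, r=radius)
    return cloud_xyz[indices]

# Example with a synthetic cloud and a hand centre reported by the skeletal tracker.
cloud = np.random.uniform(-1.0, 1.0, size=(10000, 3))
hand = np.array([0.1, 0.0, 0.8])
rough_hand_cloud = segment_hand_region(cloud, hand)
```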
<p>
<italic>Step 3: Fine segmentation</italic>
 </p>
<p>The final processing step removes potential spurious elements that appear in the segmented area because of their proximity to the hands of the user. These elements could be objects in the scene as well as the clothes of the user or other parts of the body (e.g., chest, hair). Because the segmentation is initially centred on the palm of the hand, we assume that the largest continuous element in the extracted cloud will be the palm, the fingers attached to the palm and part of the arm. Then, the analysis is performed by defining a cluster as a set of points that are closer than 3 cm to another point in the cluster. This margin is needed because when the hand is located in a deep region of the scene, the density of points is lower and the segmentation could miss one or more fingers. Several constraints can be formulated to avoid false cluster identification, such as a minimum cluster size (to avoid falsely identifying noise as a signal when the hand is out of the scene) or a maximum cluster size (to avoid identifying one or more additional adjacent elements as part of the hand). Experimentally, we determined that reasonable values for these constraints are 100 points for the minimum size and 30,000 points for the maximum size (
<xref ref-type="fig" rid="sensors-15-29853-f005">Figure 5</xref>
).</p>
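A sketch of this fine segmentation step is given below, using DBSCAN with min_samples=1 as a stand-in for Euclidean cluster extraction (with that setting it simply groups points whose chain-wise distance is below the 3 cm tolerance). The choice of library is an assumption; the tolerance and the 100/30,000-point limits come from the text.

```python
# Sketch of the fine segmentation step: cluster points that are closer than 3 cm
# to one another, then keep the largest cluster within the size limits.
import numpy as np
from sklearn.cluster import DBSCAN

def extract_hand_cluster(points, tolerance=0.03, min_size=100, max_size=30000):
    labels = DBSCAN(eps=tolerance, min_samples=1).fit_predict(points)
    best = None
    for label in np.unique(labels):
        cluster = points[labels == label]
        if min_size <= len(cluster) <= max_size:
            if best is None or len(cluster) > len(best):
                best = cluster        # assume the largest valid cluster is the hand
    return best                       # None if the hand is out of the scene
```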
<fig id="sensors-15-29853-f005" position="float">
<label>Figure 5</label>
<caption>
<p>Visualisations of the original coloured point cloud and the final segmented hands of a user standing up.</p>
</caption>
<graphic xlink:href="sensors-15-29853-g005"></graphic>
</fig>
</sec>
<sec id="sec2dot4dot2-sensors-15-29853">
<title>2.4.2. Gesture Recognition</title>
<p>Various approaches are used in the literature to address the problem of gesture classification. For example, classification can be performed based on pose estimation by imitating the skeletal tracking of the Kinect SDK [
<xref rid="B33-sensors-15-29853" ref-type="bibr">33</xref>
,
<xref rid="B34-sensors-15-29853" ref-type="bibr">34</xref>
] or through a combination of hand feature extraction (location, centroid, fingertips, silhouette) and machine learning (Hidden Markov Models [
<xref rid="B35-sensors-15-29853" ref-type="bibr">35</xref>
], k-Nearest Neighbours [
<xref rid="B36-sensors-15-29853" ref-type="bibr">36</xref>
], shape description [
<xref rid="B37-sensors-15-29853" ref-type="bibr">37</xref>
]), combining different techniques in each step to construct the best system for the target application.</p>
<p>Once the hand is extracted, the process is split into two parts: training and detection. For both sub-processes, a descriptor of the segmentation result is computed. The descriptor used in our prototype is the Viewpoint Feature Histogram (VFH) [
<xref rid="B38-sensors-15-29853" ref-type="bibr">38</xref>
]. The VFH encodes the differences in angle (pitch α, roll φ and yaw θ) between the normal vector of the centroid p
<sub>i</sub>
of the point cloud that represents the hand and every other point p
<sub>j</sub>
of the cloud, expressed in the Darboux frame (
<xref ref-type="fig" rid="sensors-15-29853-f006">Figure 6</xref>
) [
<xref rid="B39-sensors-15-29853" ref-type="bibr">39</xref>
].</p>
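For a single pair of points with normals, the angular components defined by Equations (1)-(6) below can be computed as in this sketch. A full VFH additionally accumulates these angles over all point pairs into a histogram (as provided, for instance, by point cloud libraries), so this is illustrative only; the example coordinates are hypothetical.

```python
# Worked sketch of the angular part of the descriptor for one point pair,
# following Equations (1)-(6) below.
import numpy as np

def darboux_angles(p_i, n_i, p_j, n_j):
    d = p_j - p_i
    dist = np.linalg.norm(d)                                    # d_L2
    u = n_i                                                     # Equation (2)
    v = np.cross(u, d / dist)                                   # Equation (3)
    w = np.cross(u, v)                                          # Equation (4), completes the frame
    alpha = np.arccos(np.clip(np.dot(v, n_j), -1.0, 1.0))       # Equation (5)
    phi = np.arccos(np.clip(np.dot(u, d / dist), -1.0, 1.0))    # Equation (6)
    return alpha, phi

# Hypothetical pair of points (in metres) with unit normals.
p_i, n_i = np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0])
p_j, n_j = np.array([0.02, 0.01, 0.005]), np.array([0.0, 1.0, 0.0])
print(darboux_angles(p_i, n_i, p_j, n_j))
```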
<fig id="sensors-15-29853-f006" position="float">
<label>Figure 6</label>
<caption>
<p>Geometric variation between two points, expressed in the Darboux frame.</p>
</caption>
<graphic xlink:href="sensors-15-29853-g006"></graphic>
</fig>
<p>The reference frame centred on
<inline-formula>
<mml:math id="mm1">
<mml:mrow>
<mml:msub>
<mml:mi>p</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>
is defined by:
<disp-formula id="FD1">
<label>(1)</label>
<mml:math id="mm2">
<mml:mrow>
<mml:mrow>
<mml:mo>{</mml:mo>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>u</mml:mi>
<mml:mo stretchy="false">¯</mml:mo>
</mml:mover>
<mml:mo>,</mml:mo>
<mml:mover accent="true">
<mml:mi>v</mml:mi>
<mml:mo stretchy="false">¯</mml:mo>
</mml:mover>
<mml:mo>,</mml:mo>
<mml:mover accent="true">
<mml:mi>w</mml:mi>
<mml:mo stretchy="false">¯</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mo>}</mml:mo>
</mml:mrow>
<mml:mo>=</mml:mo>
<mml:mrow>
<mml:mo>{</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mover accent="true">
<mml:mi>n</mml:mi>
<mml:mo stretchy="false">¯</mml:mo>
</mml:mover>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mover accent="true">
<mml:mi>t</mml:mi>
<mml:mo stretchy="false">¯</mml:mo>
</mml:mover>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mover accent="true">
<mml:mi>n</mml:mi>
<mml:mo stretchy="false">¯</mml:mo>
</mml:mover>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mo></mml:mo>
<mml:msub>
<mml:mover accent="true">
<mml:mi>t</mml:mi>
<mml:mo stretchy="false">¯</mml:mo>
</mml:mover>
<mml:mi>i</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mo>}</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</disp-formula>
where
<disp-formula id="FD2">
<label>(2)</label>
<mml:math id="mm3">
<mml:mrow>
<mml:mover accent="true">
<mml:mi>u</mml:mi>
<mml:mo stretchy="false">¯</mml:mo>
</mml:mover>
<mml:mo>=</mml:mo>
<mml:msub>
<mml:mover accent="true">
<mml:mi>n</mml:mi>
<mml:mo stretchy="false">¯</mml:mo>
</mml:mover>
<mml:mi>i</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</disp-formula>
<disp-formula id="FD3">
<label>(3)</label>
<mml:math id="mm4">
<mml:mrow>
<mml:mover accent="true">
<mml:mi>v</mml:mi>
<mml:mo stretchy="false">¯</mml:mo>
</mml:mover>
<mml:mo>=</mml:mo>
<mml:msub>
<mml:mover accent="true">
<mml:mi>t</mml:mi>
<mml:mo stretchy="false">¯</mml:mo>
</mml:mover>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mover accent="true">
<mml:mi>u</mml:mi>
<mml:mo stretchy="false">¯</mml:mo>
</mml:mover>
<mml:mo>×</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>p</mml:mi>
<mml:mi>j</mml:mi>
</mml:msub>
<mml:mo>−</mml:mo>
<mml:msub>
<mml:mi>p</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mi>d</mml:mi>
<mml:mrow>
<mml:mi>L</mml:mi>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
</mml:math>
</disp-formula>
<disp-formula id="FD4">
<label>(4)</label>
<mml:math id="mm5">
<mml:mrow>
<mml:mover accent="true">
<mml:mi>w</mml:mi>
<mml:mo stretchy="false">¯</mml:mo>
</mml:mover>
<mml:mo>=</mml:mo>
<mml:mover accent="true">
<mml:mi>u</mml:mi>
<mml:mo stretchy="false">¯</mml:mo>
</mml:mover>
<mml:mo>×</mml:mo>
<mml:mover accent="true">
<mml:mi>v</mml:mi>
<mml:mo stretchy="false">¯</mml:mo>
</mml:mover>
<mml:mo>=</mml:mo>
<mml:msub>
<mml:mover accent="true">
<mml:mi>n</mml:mi>
<mml:mo stretchy="false">¯</mml:mo>
</mml:mover>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mo>×</mml:mo>
<mml:msub>
<mml:mover accent="true">
<mml:mi>t</mml:mi>
<mml:mo stretchy="false">¯</mml:mo>
</mml:mover>
<mml:mi>i</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</disp-formula>
</p>
<p>Then, the geometric variation between the two points can be expressed as the relative difference between the directions of their normal vectors
<inline-formula>
<mml:math id="mm6">
<mml:mrow>
<mml:msub>
<mml:mover accent="true">
<mml:mi>n</mml:mi>
<mml:mo stretchy="false">¯</mml:mo>
</mml:mover>
<mml:mi>i</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>
and
<inline-formula>
<mml:math id="mm7">
<mml:mrow>
<mml:msub>
<mml:mover accent="true">
<mml:mi>n</mml:mi>
<mml:mo stretchy="false">¯</mml:mo>
</mml:mover>
<mml:mi>j</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>
, and it is calculated as follows:
<disp-formula id="FD5">
<label>(5)</label>
<mml:math id="mm8">
<mml:mrow>
<mml:mi>α</mml:mi>
<mml:mo>=</mml:mo>
<mml:mtext>acos </mml:mtext>
<mml:mo stretchy="false">(</mml:mo>
<mml:mover accent="true">
<mml:mi>v</mml:mi>
<mml:mo stretchy="false">¯</mml:mo>
</mml:mover>
<mml:mo>·</mml:mo>
<mml:msub>
<mml:mi>n</mml:mi>
<mml:mi>j</mml:mi>
</mml:msub>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:math>
</disp-formula>
<disp-formula id="FD6">
<label>(6)</label>
<mml:math id="mm9">
<mml:mrow>
<mml:mi>ϕ</mml:mi>
<mml:mo>=</mml:mo>
<mml:mtext>acos</mml:mtext>
<mml:mrow>
<mml:mo stretchy="true">(</mml:mo>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>u</mml:mi>
<mml:mo stretchy="false">¯</mml:mo>
</mml:mover>
<mml:mo>·</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>p</mml:mi>
<mml:mi>j</mml:mi>
</mml:msub>
<mml:mo>−</mml:mo>
<mml:msub>
<mml:mi>p</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mi>d</mml:mi>
<mml:mrow>
<mml:mi>L</mml:mi>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
<mml:mo stretchy="true">)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</disp-formula>
<disp-formula id="FD7">
<label>(7)</label>
<mml:math id="mm10">
<mml:mrow>
<mml:mi>θ</mml:mi>
<mml:mo>=</mml:mo>
<mml:mtext>atan</mml:mtext>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>w</mml:mi>
<mml:mo stretchy="false">¯</mml:mo>
</mml:mover>
<mml:mo>·</mml:mo>
<mml:msub>
<mml:mover accent="true">
<mml:mi>n</mml:mi>
<mml:mo stretchy="false">¯</mml:mo>
</mml:mover>
<mml:mi>j</mml:mi>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:mover accent="true">
<mml:mi>u</mml:mi>
<mml:mo stretchy="false">¯</mml:mo>
</mml:mover>
<mml:mo>·</mml:mo>
<mml:msub>
<mml:mover accent="true">
<mml:mi>n</mml:mi>
<mml:mo stretchy="false">¯</mml:mo>
</mml:mover>
<mml:mi>j</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</disp-formula>
<disp-formula id="FD8">
<label>(8)</label>
<mml:math id="mm11">
<mml:mrow>
<mml:msub>
<mml:mi>d</mml:mi>
<mml:mrow>
<mml:mi>L</mml:mi>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mo>‖</mml:mo>
<mml:msub>
<mml:mi>p</mml:mi>
<mml:mi>j</mml:mi>
</mml:msub>
<mml:mo>−</mml:mo>
<mml:msub>
<mml:mi>p</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:msub>
<mml:mo>‖</mml:mo>
<mml:mrow>
<mml:mi>L</mml:mi>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</disp-formula>
where
<inline-formula>
<mml:math id="mm12">
<mml:mrow>
<mml:msub>
<mml:mi>d</mml:mi>
<mml:mrow>
<mml:mi>L</mml:mi>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>
represents the Euclidean distance between two points in the space.</p>
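<p>For reference, the following short NumPy sketch implements Equations (1)–(8) for a single pair of points; it assumes that unit normals are already available for each point, and the variable names are illustrative rather than taken from the prototype's code.</p>
<preformat>
import numpy as np

def darboux_features(p_i, n_i, p_j, n_j):
    """Angle and distance features of Equations (1)-(8) for a point pair.

    p_i, p_j are 3D points of the hand cloud; n_i, n_j their unit normals.
    Returns the tuple (alpha, phi, theta, d_L2) accumulated by the VFH.
    """
    d = p_j - p_i
    d_l2 = np.linalg.norm(d)                                   # Equation (8)
    u = n_i                                                    # Equation (2)
    v = np.cross(u, d / d_l2)                                  # Equation (3)
    w = np.cross(u, v)                                         # Equation (4)
    alpha = np.arccos(np.clip(np.dot(v, n_j), -1.0, 1.0))      # Equation (5)
    phi = np.arccos(np.clip(np.dot(u, d / d_l2), -1.0, 1.0))   # Equation (6)
    theta = np.arctan2(np.dot(w, n_j), np.dot(u, n_j))         # Equation (7)
    return alpha, phi, theta, d_l2
</preformat>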
<p>This geometric variation is used to determine the geometric shape in a manner similar to that used in other works, such as [
<xref rid="B40-sensors-15-29853" ref-type="bibr">40</xref>
], but in this case, it is applied to hand gestures. Additionally, the VFH incorporates information to encode the direction of the point of view. For this reason, the VFH is suitable for recognising gestures as well as for processes that require identifying the pose of an object (
<xref ref-type="fig" rid="sensors-15-29853-f007">Figure 7</xref>
). The VFH descriptor for each point on the hand (
<xref ref-type="fig" rid="sensors-15-29853-f007">Figure 7</xref>
) is represented as a multi-dimensional histogram that accumulates the number of repetitions of a tuple
<inline-formula>
<mml:math id="mm13">
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>α</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>ϕ</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>θ</mml:mi>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi>d</mml:mi>
<mml:mrow>
<mml:mi>L</mml:mi>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>
(
<xref ref-type="fig" rid="sensors-15-29853-f008">Figure 8</xref>
) where each component is normalized to 100. Furthermore, as in any one-dimensional histogram, it is necessary to split the data into a specific number of divisions. Each of these divisions represents a range of values of each element of the tuple and graphically indicates the number of occurrences belonging to each range of values (
<xref ref-type="fig" rid="sensors-15-29853-f008">Figure 8</xref>
). The descriptor normalizes each bin by the total number of points that represent the hand and also normalizes the shape distribution component [
<xref rid="B41-sensors-15-29853" ref-type="bibr">41</xref>
].</p>
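<p>As an illustration of this accumulation step, the Python sketch below builds a 308-bin histogram under the assumption of the standard VFH layout (45 bins for each of the four shape components plus 128 viewpoint bins, 45 × 4 + 128 = 308); the bin counts and value ranges are assumptions, not necessarily the exact configuration of our prototype.</p>
<preformat>
import numpy as np

def vfh_like_histogram(features, viewpoint_angles, shape_bins=45, viewpoint_bins=128):
    """Accumulate Darboux tuples (alpha, phi, theta, d_L2) into a 308-bin
    VFH-style histogram.  features is an (N, 4) array of tuples from
    darboux_features(); viewpoint_angles is an (N,) array of angles between
    each normal and the viewpoint direction.  Ranges below are simplified.
    """
    ranges = [(0.0, np.pi), (0.0, np.pi), (-np.pi, np.pi),
              (0.0, float(features[:, 3].max()))]
    parts = []
    for column, value_range in enumerate(ranges):
        counts, _ = np.histogram(features[:, column], bins=shape_bins, range=value_range)
        parts.append(100.0 * counts / len(features))      # each component sums to 100
    counts, _ = np.histogram(viewpoint_angles, bins=viewpoint_bins, range=(0.0, np.pi))
    parts.append(100.0 * counts / len(viewpoint_angles))
    return np.concatenate(parts)                           # 4*45 + 128 = 308 bins
</preformat>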
<fig id="sensors-15-29853-f007" position="float">
<label>Figure 7</label>
<caption>
<p>Viewpoint information of the VFH descriptor of a human hand.</p>
</caption>
<graphic xlink:href="sensors-15-29853-g007"></graphic>
</fig>
<fig id="sensors-15-29853-f008" position="float">
<label>Figure 8</label>
<caption>
<p>Description of the zones of the histogram.</p>
</caption>
<graphic xlink:href="sensors-15-29853-g008"></graphic>
</fig>
<p>This approach yields a descriptor that is well suited to our purpose because it is global, rapid to compute, independent of scale and dependent on pose (
<xref ref-type="fig" rid="sensors-15-29853-f009">Figure 9</xref>
and
<xref ref-type="fig" rid="sensors-15-29853-f010">Figure 10</xref>
). This last feature will allow us to decide at the time of training whether two different poses should be considered to be the same gesture or different gestures (
<xref ref-type="fig" rid="sensors-15-29853-f011">Figure 11</xref>
). Once the possible gestures are defined, several descriptors of the different frames of each gesture are stored. The method that is applied to match the current gesture with one of the trained gestures is similar to that used by Rusu
<italic>et al</italic>
. [
<xref rid="B31-sensors-15-29853" ref-type="bibr">31</xref>
]. All the histograms that describe a gesture are regarded as 308-dimensional points (one dimension per bin) and are placed in a point cloud. Afterwards, the incoming gesture is placed in this cloud, and the 10 points (histograms of the trained gestures) that are nearest to it are located. Because these points may be frames of the same gesture or of different gestures, the matching gesture is determined via weighted voting based on the distances between the histograms.</p>
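<p>A minimal sketch of this matching step is given below; it uses a k-d tree over the stored 308-dimensional histograms and inverse-distance weighting for the vote, the latter being an assumption, since the text only specifies that the vote is weighted by the histogram distances.</p>
<preformat>
import numpy as np
from scipy.spatial import cKDTree

def classify_gesture(train_histograms, train_labels, query_histogram, k=10):
    """Nearest-neighbour matching of a 308-bin histogram, in the spirit of the
    weighted voting described above (names and weighting are illustrative).

    train_histograms: (M, 308) array of stored gesture frames.
    train_labels:     list of M gesture names, one per stored frame.
    """
    tree = cKDTree(np.asarray(train_histograms))
    dists, idx = tree.query(np.asarray(query_histogram), k=k)
    votes = {}
    for dist, i in zip(dists, idx):
        weight = 1.0 / (dist + 1e-9)          # closer frames vote more strongly
        votes[train_labels[i]] = votes.get(train_labels[i], 0.0) + weight
    return max(votes, key=votes.get)
</preformat>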
<fig id="sensors-15-29853-f009" position="float">
<label>Figure 9</label>
<caption>
<p>Sets of descriptors computed to recognise several left-hand gestures (I). Descriptor on the left; pose on the right.</p>
</caption>
<graphic xlink:href="sensors-15-29853-g009"></graphic>
</fig>
<fig id="sensors-15-29853-f010" position="float">
<label>Figure 10</label>
<caption>
<p>Sets of descriptors computed to recognise several left-hand gestures (II). Descriptor on the left; pose on the right.</p>
</caption>
<graphic xlink:href="sensors-15-29853-g010a"></graphic>
<graphic xlink:href="sensors-15-29853-g010b"></graphic>
</fig>
<fig id="sensors-15-29853-f011" position="float">
<label>Figure 11</label>
<caption>
<p>Examples of the gesture classification process performed by comparing histograms using the minimum Euclidean distance as a similarity metric. (
<bold>a</bold>
) Gesture 0
<inline-formula>
<mml:math id="mm14">
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mi>e</mml:mi>
<mml:mi>s</mml:mi>
<mml:mi>t</mml:mi>
<mml:mtext> </mml:mtext>
<mml:mn>1</mml:mn>
<mml:mo>=</mml:mo>
<mml:mn>31</mml:mn>
<mml:mo>,</mml:mo>
<mml:mi>t</mml:mi>
<mml:mi>e</mml:mi>
<mml:mi>s</mml:mi>
<mml:mi>t</mml:mi>
<mml:mtext> </mml:mtext>
<mml:mn>2</mml:mn>
<mml:mo>=</mml:mo>
<mml:mn>25</mml:mn>
<mml:mo>,</mml:mo>
<mml:mtext> </mml:mtext>
<mml:mi>t</mml:mi>
<mml:mi>e</mml:mi>
<mml:mi>s</mml:mi>
<mml:mi>t</mml:mi>
<mml:mtext> </mml:mtext>
<mml:mn>3</mml:mn>
<mml:mo>=</mml:mo>
<mml:mn>32</mml:mn>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>
; (
<bold>b</bold>
) Gesture 2
<inline-formula>
<mml:math id="mm15">
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mi>e</mml:mi>
<mml:mi>s</mml:mi>
<mml:mi>t</mml:mi>
<mml:mtext> </mml:mtext>
<mml:mn>1</mml:mn>
<mml:mo>=</mml:mo>
<mml:mn>27</mml:mn>
<mml:mo>,</mml:mo>
<mml:mi>t</mml:mi>
<mml:mi>e</mml:mi>
<mml:mi>s</mml:mi>
<mml:mi>t</mml:mi>
<mml:mtext> </mml:mtext>
<mml:mn>2</mml:mn>
<mml:mo>=</mml:mo>
<mml:mn>41</mml:mn>
<mml:mo>,</mml:mo>
<mml:mi>t</mml:mi>
<mml:mi>e</mml:mi>
<mml:mi>s</mml:mi>
<mml:mi>t</mml:mi>
<mml:mtext> </mml:mtext>
<mml:mn>3</mml:mn>
<mml:mo>=</mml:mo>
<mml:mn>33</mml:mn>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>
.</p>
</caption>
<graphic xlink:href="sensors-15-29853-g011"></graphic>
</fig>
<p>The effectiveness of the entire process (segmentation and recognition) is highly dependent on the number of different gestures to be distinguished and the differences between the gestures due to the variance in the morphological features and poses (
<xref ref-type="fig" rid="sensors-15-29853-f012">Figure 12</xref>
). Pose-invariant recognition can be achieved during training by grouping the same hand shape in different poses under a single gesture label and by increasing the number of frames captured per gesture. In general, for a given number of gestures to be distinguished, a higher number of frames per gesture in the database results in higher precision.</p>
<fig id="sensors-15-29853-f012" position="float">
<label>Figure 12</label>
<caption>
<p>Similarity measures computed during the classification process among three different human-hand gestures: gesture 0, gesture 2 and gesture 5
<inline-formula>
<mml:math id="mm16">
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mi>e</mml:mi>
<mml:mi>s</mml:mi>
<mml:mi>t</mml:mi>
<mml:mtext> </mml:mtext>
<mml:mn>1</mml:mn>
<mml:mo>=</mml:mo>
<mml:mn>81</mml:mn>
<mml:mo>,</mml:mo>
<mml:mi>t</mml:mi>
<mml:mi>e</mml:mi>
<mml:mi>s</mml:mi>
<mml:mi>t</mml:mi>
<mml:mtext> </mml:mtext>
<mml:mn>2</mml:mn>
<mml:mo>=</mml:mo>
<mml:mn>45</mml:mn>
<mml:mo>,</mml:mo>
<mml:mi>t</mml:mi>
<mml:mi>e</mml:mi>
<mml:mi>s</mml:mi>
<mml:mi>t</mml:mi>
<mml:mtext> </mml:mtext>
<mml:mn>3</mml:mn>
<mml:mo>=</mml:mo>
<mml:mn>87</mml:mn>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>
.</p>
</caption>
<graphic xlink:href="sensors-15-29853-g012"></graphic>
</fig>
<p>It is possible to distinguish several differences between histograms of each example (
<xref ref-type="fig" rid="sensors-15-29853-f009">Figure 9</xref>
and
<xref ref-type="fig" rid="sensors-15-29853-f010">Figure 10</xref>
) with regard to the first part of the histogram representing the FPFH components encoded as a set of angle variations, such as pitch (α), roll (ϕ) and yaw (θ) (
<xref ref-type="fig" rid="sensors-15-29853-f008">Figure 8</xref>
). All of these are calculated using the same number of repetitions per value.
<xref ref-type="fig" rid="sensors-15-29853-f009">Figure 9</xref>
shows the representation of zero fingers by means of a closed-hand gesture, and
<xref ref-type="fig" rid="sensors-15-29853-f010">Figure 10</xref>
shows five samples of the most common representations from 1 to 5 that can be represented with the hand gestures by extending fingers. A comparison between the first and fifth samples of
<xref ref-type="fig" rid="sensors-15-29853-f010">Figure 10</xref>
, regarding the first part of the histograms, reveals changes in the dispersion of the angle variations. The first sample has more bins with non-zero values and a wider spread of angular values; that is, it is more dispersed than the fourth or fifth sample. The fifth sample, in contrast, concentrates its angular values around three bins more strongly than the remaining samples in
<xref ref-type="fig" rid="sensors-15-29853-f010">Figure 10</xref>
. To summarise, this concentration occurs at different values in each histogram, producing sharp concentrations against an otherwise entropic background.</p>
<p>Comparison of all the figures shows that the differences in the last part of the histogram, which represents the viewpoint, are mainly caused by changes in the position of the hand and the camera over the course of the training process. Note that they are less related to the gesture morphology (
<italic>i.e</italic>
., to the hand shape being reproduced by the user). These variations can be observed in
<xref ref-type="fig" rid="sensors-15-29853-f012">Figure 12</xref>
, which shows the overlap of three gestures.</p>
</sec>
<sec id="sec2dot4dot3-sensors-15-29853">
<title>2.4.3. Generating Commands via the Combined Interpretation of the Movements and Gestures of Two Human Hands</title>
<p>A scripting module has been built for the tele-control of Charlie using its socket API. Each gesture of each hand commands a state change. With the left hand, we select the orientation of the wheels (palm to the left, pointed left; palm to the right, pointed right; closed hand, pointed straight). With the right hand, we command the rotation of the wheels (open palm, forward; closed hand with the thumb extended, backward; closed hand, brake). The relative spatial position of the hands in the scene is used to control the orientation of the camera on the robot. Separating the hands will cause the angle of the camera to rotate upwards, and bringing the hands closer together will cause it to rotate downwards. In addition to this functionality, the absolute positions of the hands in the scene are considered to make the system more usable for a person with reduced mobility. For a user sitting in a chair, the action of rotating the camera will be performed if the hands are above the neck. The gestures will be interpreted as orientation and rotation commands if the hands are positioned between the neck and the hips. Positions below the hips will be regarded as resting positions.</p>
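<p>The following Python sketch summarises this mapping; the gesture labels, zone tests and returned command names are illustrative assumptions rather than the exact identifiers used in the scripting module.</p>
<preformat>
def interpret_hands(left_gesture, right_gesture, hands_height, neck_height,
                    hip_height, hand_separation, previous_separation):
    """Sketch of the two-hand command mapping described above.
    Heights are measured along the vertical axis of the tracked skeleton.
    """
    # Above the neck: the relative separation of the hands tilts the camera.
    if hands_height > neck_height:
        direction = "up" if hand_separation > previous_separation else "down"
        return {"action": "tilt_camera", "direction": direction}
    # Below the hips: resting position, no command is issued.
    if hip_height > hands_height:
        return {"action": "rest"}
    # Between neck and hips: left hand steers, right hand drives.
    steering = {"palm_left": "left", "palm_right": "right",
                "closed": "straight"}.get(left_gesture, "straight")
    rotation = {"open_palm": "forward", "thumb_out": "backward",
                "closed": "brake"}.get(right_gesture, "brake")
    return {"action": "drive", "steering": steering, "rotation": rotation}
</preformat>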
</sec>
</sec>
</sec>
<sec id="sec3-sensors-15-29853">
<title>3. Experiments and Results</title>
<sec id="sec3dot1-sensors-15-29853">
<title>3.1. Experiment 1: Programming and Controlling the Robot</title>
<p>Using the scripting API, it is possible to create behaviours for the robot. In the experiment described below, the upper row of 12 CNY70 reflective optical sensors with transistor output is used to perform a line-following movement strategy (
<xref ref-type="fig" rid="sensors-15-29853-f013">Figure 13</xref>
).</p>
<fig id="sensors-15-29853-f013" position="float">
<label>Figure 13</label>
<caption>
<p>Our low-cost robot controlled by our human-hand gesture interface.</p>
</caption>
<graphic xlink:href="sensors-15-29853-g013"></graphic>
</fig>
<p>The goal is to implement a simple line-following behaviour by obtaining the position of the first sensor (from the left) that detects the line. This value x, between 0 and 11, is then rescaled to the range 0–255 and applied to the two motors in a complementary manner (the right motor receives 255 minus the speed of the left motor). The applied formula is as follows:
<disp-formula id="FD9">
<label>(9)</label>
<mml:math id="mm17">
<mml:mrow>
<mml:mtable>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:mi>s</mml:mi>
<mml:mi>p</mml:mi>
<mml:mi>e</mml:mi>
<mml:mi>e</mml:mi>
<mml:mi>d</mml:mi>
<mml:mi>L</mml:mi>
<mml:mi>e</mml:mi>
<mml:mi>f</mml:mi>
<mml:mi>t</mml:mi>
<mml:mo>=</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mn>255</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mn>11</mml:mn>
</mml:mrow>
</mml:mfrac>
<mml:mi>x</mml:mi>
</mml:mrow>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:mi>s</mml:mi>
<mml:mi>p</mml:mi>
<mml:mi>e</mml:mi>
<mml:mi>e</mml:mi>
<mml:mi>d</mml:mi>
<mml:mi>R</mml:mi>
<mml:mi>i</mml:mi>
<mml:mi>g</mml:mi>
<mml:mi>h</mml:mi>
<mml:mi>t</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>255</mml:mn>
<mml:mo>−</mml:mo>
<mml:mi>s</mml:mi>
<mml:mi>p</mml:mi>
<mml:mi>e</mml:mi>
<mml:mi>e</mml:mi>
<mml:mi>d</mml:mi>
<mml:mi>L</mml:mi>
<mml:mi>e</mml:mi>
<mml:mi>f</mml:mi>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mrow>
</mml:math>
</disp-formula>
</p>
<p>For example, if a line is detected by the first sensor (from the left), the resulting value of the find function will be 0. Then, the robot will need to perform a strong turn to the left, which requires the speed of the left motor to be 0 and that of the right motor to be 255. These actions are repeated in a loop (read the sensors, calculate the speed of the motors and send the movement command to the Arduino board) as long as the read-out of the sensors continues to indicate a black line; when this is no longer true, the command to stop the motors is executed. For this procedure, three commands of the low-level API are used:
<list list-type="bullet">
<list-item>
<p>executeReadCommand(14): Returns a string of length 12 that contains the read value of each sensor in the row. For example, the string “100000000000” indicates that a line has been detected by the top left sensor.</p>
</list-item>
<list-item>
<p>executeCommand(5,speedLeft,speedRight): Executes the movement corresponding to the specified speed of each motor, from 0 to 255.</p>
</list-item>
<list-item>
<p>executeCommand(0): Stops the movement of the motors.</p>
</list-item>
</list>
</p>
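<p>The robot scripts themselves are written in JavaScript; the Python sketch below only mirrors the control flow of this loop and of Equation (9), with the two low-level API calls passed in as callables so that the sketch is self-contained.</p>
<preformat>
def follow_line(execute_read_command, execute_command):
    """Line-following loop of Equation (9).  The two callables stand in for
    the robot's low-level API (executeReadCommand, executeCommand), which the
    real prototype exposes to its JavaScript scripts.
    """
    while True:
        sensors = execute_read_command(14)          # e.g. "100000000000"
        if "1" not in sensors:
            execute_command(0)                      # line lost: stop the motors
            break
        x = sensors.find("1")                       # first sensor that sees the line
        speed_left = int(round(255.0 / 11.0 * x))   # Equation (9)
        speed_right = 255 - speed_left
        execute_command(5, speed_left, speed_right)
</preformat>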
<p>We can store this script by sending the WebSocket server a message in which the first byte is 50, the second byte is the ID of the script (from 0 to 255), and the remaining bytes contain the code itself. Then, for execution, the command 50 followed by the assigned ID must be sent to the WebSocket server.</p>
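<p>A minimal Python sketch of this message framing is shown below; it only builds the byte sequences described above and omits the WebSocket transport itself as well as any other command codes.</p>
<preformat>
def build_store_message(script_id, script_code):
    """Frame for storing a script on the robot: byte 50, then the script ID
    (0-255), then the script source encoded as bytes."""
    return bytes([50, script_id]) + script_code.encode("utf-8")

def build_execute_message(script_id):
    """Frame for running a stored script, as described above
    (command 50 followed by the assigned script ID)."""
    return bytes([50, script_id])
</preformat>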
</sec>
<sec id="sec3dot2-sensors-15-29853">
<title>3.2. Experiment 2: Results of Human-Hand Detection and Gesture Recognition</title>
<p>Owing to the gesture recognition parameters discussed above, the effectiveness of the recognition process can vary considerably. To illustrate our proposed approach, we considered three and six different gestures per hand. In both experiments, the system was trained using a database of 250 samples per gesture (
<xref ref-type="table" rid="sensors-15-29853-t003">Table 3</xref>
and
<xref ref-type="table" rid="sensors-15-29853-t004">Table 4</xref>
). All gestures were captured by the Kinect sensor in real time from different environments and different users (examples are shown in
<xref ref-type="fig" rid="sensors-15-29853-f004">Figure 4</xref>
,
<xref ref-type="fig" rid="sensors-15-29853-f005">Figure 5</xref>
and
<xref ref-type="fig" rid="sensors-15-29853-f014">Figure 14</xref>
). In
<xref ref-type="table" rid="sensors-15-29853-t003">Table 3</xref>
and
<xref ref-type="table" rid="sensors-15-29853-t004">Table 4</xref>
, the columns represent the gesture models, and the rows represent 600 samples of unknown gestures that were captured by the Kinect for classification.</p>
<table-wrap id="sensors-15-29853-t003" position="float">
<object-id pub-id-type="pii">sensors-15-29853-t003_Table 3</object-id>
<label>Table 3</label>
<caption>
<p>Confusion matrix computed during the recognition process for the identification of three simple gestures of the right hand, with 200 samples per gesture.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th rowspan="2" align="center" valign="middle" style="border-top:solid thin;border-bottom:solid thin" colspan="1"></th>
<th align="center" valign="middle" style="border-top:solid thin;border-bottom:solid thin" rowspan="1" colspan="1">
<inline-graphic xlink:href="sensors-15-29853-i001.jpg"></inline-graphic>
</th>
<th align="center" valign="middle" style="border-top:solid thin;border-bottom:solid thin" rowspan="1" colspan="1">
<inline-graphic xlink:href="sensors-15-29853-i002.jpg"></inline-graphic>
</th>
<th align="center" valign="middle" style="border-top:solid thin;border-bottom:solid thin" rowspan="1" colspan="1">
<inline-graphic xlink:href="sensors-15-29853-i003.jpg"></inline-graphic>
</th>
</tr>
<tr>
<th align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Recognised Gesture 0</th>
<th align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Recognised Gesture 2</th>
<th align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Recognised Gesture 5</th>
</tr>
</thead>
<tbody>
<tr>
<td align="center" valign="middle" rowspan="1" colspan="1">Input Gesture 0</td>
<td align="center" valign="middle" rowspan="1" colspan="1">200</td>
<td align="center" valign="middle" rowspan="1" colspan="1">0</td>
<td align="center" valign="middle" rowspan="1" colspan="1">0</td>
</tr>
<tr>
<td align="center" valign="middle" rowspan="1" colspan="1">Input Gesture 2</td>
<td align="center" valign="middle" rowspan="1" colspan="1">3</td>
<td align="center" valign="middle" rowspan="1" colspan="1">195</td>
<td align="center" valign="middle" rowspan="1" colspan="1">2</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Input Gesture 5</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">15</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">6</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">179</td>
</tr>
</tbody>
</table>
</table-wrap>
<table-wrap id="sensors-15-29853-t004" position="float">
<object-id pub-id-type="pii">sensors-15-29853-t004_Table 4</object-id>
<label>Table 4</label>
<caption>
<p>Confusion matrix computed during the recognition process for the identification of six gestures of the right hand, with 200 samples per gesture.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th rowspan="2" align="center" valign="middle" style="border-top:solid thin;border-bottom:solid thin" colspan="1"></th>
<th align="center" valign="middle" style="border-top:solid thin;border-bottom:solid thin" rowspan="1" colspan="1">
<inline-graphic xlink:href="sensors-15-29853-i004.jpg"></inline-graphic>
</th>
<th align="center" valign="middle" style="border-top:solid thin;border-bottom:solid thin" rowspan="1" colspan="1">
<inline-graphic xlink:href="sensors-15-29853-i005.jpg"></inline-graphic>
</th>
<th align="center" valign="middle" style="border-top:solid thin;border-bottom:solid thin" rowspan="1" colspan="1">
<inline-graphic xlink:href="sensors-15-29853-i006.jpg"></inline-graphic>
</th>
<th align="center" valign="middle" style="border-top:solid thin;border-bottom:solid thin" rowspan="1" colspan="1">
<inline-graphic xlink:href="sensors-15-29853-i007.jpg"></inline-graphic>
</th>
<th align="center" valign="middle" style="border-top:solid thin;border-bottom:solid thin" rowspan="1" colspan="1">
<inline-graphic xlink:href="sensors-15-29853-i008.jpg"></inline-graphic>
</th>
<th align="center" valign="middle" style="border-top:solid thin;border-bottom:solid thin" rowspan="1" colspan="1">
<inline-graphic xlink:href="sensors-15-29853-i009.jpg"></inline-graphic>
</th>
</tr>
<tr>
<th align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Recognised Gesture 0</th>
<th align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Recognised Gesture 1</th>
<th align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Recognised Gesture 2</th>
<th align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Recognised Gesture 3</th>
<th align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Recognised Gesture 4</th>
<th align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Recognised Gesture 5</th>
</tr>
</thead>
<tbody>
<tr>
<td align="center" valign="middle" rowspan="1" colspan="1">Input Gesture 0</td>
<td align="center" valign="middle" rowspan="1" colspan="1">177</td>
<td align="center" valign="middle" rowspan="1" colspan="1">3</td>
<td align="center" valign="middle" rowspan="1" colspan="1">0</td>
<td align="center" valign="middle" rowspan="1" colspan="1">2</td>
<td align="center" valign="middle" rowspan="1" colspan="1">18</td>
<td align="center" valign="middle" rowspan="1" colspan="1">0</td>
</tr>
<tr>
<td align="center" valign="middle" rowspan="1" colspan="1">Input Gesture 1</td>
<td align="center" valign="middle" rowspan="1" colspan="1">4</td>
<td align="center" valign="middle" rowspan="1" colspan="1">177</td>
<td align="center" valign="middle" rowspan="1" colspan="1">10</td>
<td align="center" valign="middle" rowspan="1" colspan="1">7</td>
<td align="center" valign="middle" rowspan="1" colspan="1">2</td>
<td align="center" valign="middle" rowspan="1" colspan="1">0</td>
</tr>
<tr>
<td align="center" valign="middle" rowspan="1" colspan="1">Input Gesture 2</td>
<td align="center" valign="middle" rowspan="1" colspan="1">1</td>
<td align="center" valign="middle" rowspan="1" colspan="1">21</td>
<td align="center" valign="middle" rowspan="1" colspan="1">133</td>
<td align="center" valign="middle" rowspan="1" colspan="1">32</td>
<td align="center" valign="middle" rowspan="1" colspan="1">8</td>
<td align="center" valign="middle" rowspan="1" colspan="1">5</td>
</tr>
<tr>
<td align="center" valign="middle" rowspan="1" colspan="1">Input Gesture 3</td>
<td align="center" valign="middle" rowspan="1" colspan="1">0</td>
<td align="center" valign="middle" rowspan="1" colspan="1">0</td>
<td align="center" valign="middle" rowspan="1" colspan="1">0</td>
<td align="center" valign="middle" rowspan="1" colspan="1">184</td>
<td align="center" valign="middle" rowspan="1" colspan="1">16</td>
<td align="center" valign="middle" rowspan="1" colspan="1">0</td>
</tr>
<tr>
<td align="center" valign="middle" rowspan="1" colspan="1">Input Gesture 4</td>
<td align="center" valign="middle" rowspan="1" colspan="1">0</td>
<td align="center" valign="middle" rowspan="1" colspan="1">0</td>
<td align="center" valign="middle" rowspan="1" colspan="1">0</td>
<td align="center" valign="middle" rowspan="1" colspan="1">8</td>
<td align="center" valign="middle" rowspan="1" colspan="1">176</td>
<td align="center" valign="middle" rowspan="1" colspan="1">16</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Input Gesture 5</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">23</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">0</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">5</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">8</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">36</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">128</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>The hit rate decreases as the number of gestures in the database increases, as shown in
<xref ref-type="table" rid="sensors-15-29853-t004">Table 4</xref>
, which includes three more gestures than
<xref ref-type="table" rid="sensors-15-29853-t003">Table 3</xref>
. The success rate also depends on the morphological differences between gestures. As a result, two similar gestures may be confused with each other, and this becomes more likely as the database grows; that is to say, the probability of correct recognition decreases with the size of the database and the number of previously registered gestures. For this reason, using more than four gestures per hand in our system could easily cause confusion. To attenuate this effect without sacrificing detection robustness, our system works with four gestures per hand and uses both hands simultaneously. Therefore, two small sets of different gestures are registered in our database (eight gestures, four for each hand). This set is used in the experiment shown in
<xref ref-type="sec" rid="sec3dot4-sensors-15-29853">Section 3.4</xref>
. Those gestures represent zero, one, three and five fingers. It is also important to note that our system can work with sequences of up to three concatenated gestures for each hand, so that combinations of gestures, and not only single gestures, can be used to identify action commands. Consequently, the system operates as a State Machine (SM) with multiple actions, in which the input is a sequence of gestures combining both hands and the output is a reliable transition between states. Thus, a difference of a single gesture in the sequence can be associated with a different action.</p>
<p>The choice of gestures is arbitrary and is not tied to the numerical value shown; each gesture is simply associated with actions and commands for the robot. Because the descriptor is not invariant to pose, the same hand shape shown in a different orientation is labelled as a different gesture (
<italic>i.e</italic>
., the same visible fingers pointing up or down are treated as distinct gestures).</p>
<p>An important aspect of evaluating the suitability of our proposed method is the reaction time required to detect a gesture. The following measurements, in milliseconds, were taken under full-stack conditions; in other words, all processes were running at 100% system load. The computer used for this purpose was a Lenovo T520 (Intel Core i5 2520M @ 2.5 GHz, 6 GB of DDR3 RAM, Intel HD3000). The durations were measured over 100 instances of detection (
<xref ref-type="table" rid="sensors-15-29853-t005">Table 5</xref>
and
<xref ref-type="table" rid="sensors-15-29853-t006">Table 6</xref>
).</p>
<table-wrap id="sensors-15-29853-t005" position="float">
<object-id pub-id-type="pii">sensors-15-29853-t005_Table 5</object-id>
<label>Table 5</label>
<caption>
<p>Times required for different steps of the hand segmentation process.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="center" valign="middle" style="border-top:solid thin;border-bottom:solid thin" rowspan="1" colspan="1"></th>
<th align="center" valign="middle" style="border-top:solid thin;border-bottom:solid thin" rowspan="1" colspan="1">Max (ms)</th>
<th align="center" valign="middle" style="border-top:solid thin;border-bottom:solid thin" rowspan="1" colspan="1">Average (ms)</th>
<th align="center" valign="middle" style="border-top:solid thin;border-bottom:solid thin" rowspan="1" colspan="1">Min (ms)</th>
</tr>
</thead>
<tbody>
<tr>
<td align="center" valign="middle" rowspan="1" colspan="1">Rough segmentation</td>
<td align="center" valign="middle" rowspan="1" colspan="1">242</td>
<td align="center" valign="middle" rowspan="1" colspan="1">203</td>
<td align="center" valign="middle" rowspan="1" colspan="1">165</td>
</tr>
<tr>
<td align="center" valign="middle" rowspan="1" colspan="1">Fine segmentation without noise</td>
<td align="center" valign="middle" rowspan="1" colspan="1">101</td>
<td align="center" valign="middle" rowspan="1" colspan="1">62</td>
<td align="center" valign="middle" rowspan="1" colspan="1">37</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Overall process (segmentation, neighbourhood search and data conversions between steps)</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">1222</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">513</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">383</td>
</tr>
</tbody>
</table>
</table-wrap>
<table-wrap id="sensors-15-29853-t006" position="float">
<object-id pub-id-type="pii">sensors-15-29853-t006_Table 6</object-id>
<label>Table 6</label>
<caption>
<p>Times required for different steps of the gesture analysis and classification process.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="center" valign="middle" style="border-top:solid thin;border-bottom:solid thin" rowspan="1" colspan="1"></th>
<th align="center" valign="middle" style="border-top:solid thin;border-bottom:solid thin" rowspan="1" colspan="1">Max (ms)</th>
<th align="center" valign="middle" style="border-top:solid thin;border-bottom:solid thin" rowspan="1" colspan="1">Average (ms)</th>
<th align="center" valign="middle" style="border-top:solid thin;border-bottom:solid thin" rowspan="1" colspan="1">Min (ms)</th>
</tr>
</thead>
<tbody>
<tr>
<td align="center" valign="middle" rowspan="1" colspan="1">VFH analysis</td>
<td align="center" valign="middle" rowspan="1" colspan="1">177</td>
<td align="center" valign="middle" rowspan="1" colspan="1">120</td>
<td align="center" valign="middle" rowspan="1" colspan="1">37</td>
</tr>
<tr>
<td align="center" valign="middle" rowspan="1" colspan="1">k-d tree search with 750 elements</td>
<td align="center" valign="middle" rowspan="1" colspan="1">7</td>
<td align="center" valign="middle" rowspan="1" colspan="1">4.4</td>
<td align="center" valign="middle" rowspan="1" colspan="1">2</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Overall process (analysis, search and data conversions between steps)</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">533</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">196</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">71</td>
</tr>
</tbody>
</table>
</table-wrap>
</sec>
<sec id="sec3dot3-sensors-15-29853">
<title>3.3. Experiment 3: Controlling the Robot with Human-Hand Gestures in a Domestic Environment</title>
<p>Regarding the hand gesture recognition software, an ROS node that monitors the data sent from the skeleton tracking and gesture detection modules was implemented. This node represents a coupling between the full detection chain and the hardware of the robot, transforming the currently detected scene into a command. The position of the hand relative to other elements of the body (e.g., head, shoulder, hip) is used to determine whether the user is resting his or her hands or issuing commands. The same gesture can be associated with different commands depending on the relative height of the hands. Once the pairs of gestures and positions are associated, we add a third element to the tuple, namely, the desired action of the robot. A number that identifies this action is sent via the WebSocket to the robot using its specialized API.</p>
<fig id="sensors-15-29853-f014" position="float">
<label>Figure 14</label>
<caption>
<p>A user at rest, issuing commands with the right hand and issuing commands with both hands.</p>
</caption>
<graphic xlink:href="sensors-15-29853-g014"></graphic>
</fig>
<p>On the robot side, two API commands are used. Scripts must first be pre-programmed in JavaScript; the system then stores the various specified behaviours in memory using the store script command. Once stored, they can be executed using the load and execute commands. Therefore, for this experiment, the robot received these two types of commands via the WebSocket; only the parameter that indicates the ID of each script, which takes values in the range 0–255, is modified among different instances of such commands to indicate the slot in which the desired script is stored.</p>
<p>The error was then measured as the deviation between the central sensor (
<xref ref-type="fig" rid="sensors-15-29853-f001">Figure 1</xref>
) and the read sensor. Because there are 12 sensors (with labels x ranging between 0 and 11, where the central sensors are numbered 5 and 6), the formula for this deviation error is:
<disp-formula id="FD10">
<label>(10)</label>
<mml:math id="mm19">
<mml:mrow>
<mml:mi>ε</mml:mi>
<mml:mo>=</mml:mo>
<mml:mrow>
<mml:mo>|</mml:mo>
<mml:mrow>
<mml:mn>5.5</mml:mn>
<mml:mo>−</mml:mo>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mo>|</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</disp-formula>
where
<italic>x</italic>
is the label of the read sensor; thus, the error represents the position of this sensor in relation to the central sensor.</p>
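<p>A short Python sketch of this error computation, combined with the sensor read-out format of Experiment 1, is given below for reference; returning None when no line is visible is an illustrative choice.</p>
<preformat>
def path_deviation(sensor_string):
    """Deviation error of Equation (10) from the 12-character sensor read-out.
    Returns None when no line is detected."""
    x = sensor_string.find("1")
    if x == -1:
        return None
    return abs(5.5 - x)   # 0.5 when a central sensor (5 or 6) sees the line
</preformat>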
<fig id="sensors-15-29853-f015" position="float">
<label>Figure 15</label>
<caption>
<p>(
<bold>a</bold>
) Deviation of the line detected by the robot’s sensors; (
<bold>b</bold>
) Speed applied to each motor by the closed-loop controller to correct the path.</p>
</caption>
<graphic xlink:href="sensors-15-29853-g015"></graphic>
</fig>
<p>Suppose that a command is sent to the robot to move to a certain location in the environment. It should be noted that the objective of this work is not the localisation of the robot or the mapping of the environment (SLAM) along the navigation paths. The primary goal is to develop a low-cost robot with a motorised camera such that the movements of the robot and camera can both be controlled by a gesture interface based on visual perception for the tasks of supervising and monitoring a household environment. The general movement of the robot is controlled by an open-loop controller based on 3D data that are acquired by an external RGBD sensor, and the path of the robot is controlled by a closed-loop controller based on data from sensors that are mounted on the robot; they are used, for example, to allow the robot to follow a black line on the floor to move from one room to another.
<xref ref-type="fig" rid="sensors-15-29853-f015">Figure 15</xref>
illustrates the behaviour of the low-cost robot following a path between two rooms as commanded by a human user via a hand gesture. First, the gesture is recognised, and the corresponding command is sent to the robot; then, the robot activates and moves to achieve its objective. Afterwards, the movement of the robot is controlled by its sensors such that it maintains a certain distance with respect to the desired path (curved or straight).
<xref ref-type="fig" rid="sensors-15-29853-f015">Figure 15</xref>
a shows the deviation from the desired path. In this case, sensors 5 and 6 (
<xref ref-type="fig" rid="sensors-15-29853-f001">Figure 1</xref>
) exhibit oscillations due to measurement errors and the robot’s velocity. For both straight lines and curves, the trajectories tend to exhibit an oscillation of approximately ε = 0.5. Although there are initial peaks with a deviation of ε = 3.5 (38.5%), these occurrences do not pose a problem for the robot in tracking its path to achieve its target. In general, the algorithm applies more power to the motor that is opposite to the position of the detected line, as shown in
<xref ref-type="fig" rid="sensors-15-29853-f015">Figure 15</xref>
b, to correct the position in relation to the desired path.</p>
</sec>
<sec id="sec3dot4-sensors-15-29853">
<title>3.4. Experiment 4: Behaviour of the System in a Complete Use Case </title>
<p>This experiment describes an instance of the use of the full system (
<xref ref-type="fig" rid="sensors-15-29853-f016">Figure 16</xref>
) in the residence of a dependent person (
<xref ref-type="fig" rid="sensors-15-29853-f017">Figure 17</xref>
). </p>
<p>The proposed system can recognise four gestures (representing zero, one, three and five fingers), and actions can be composed of sequences of one, two or three gestures. The mathematical combination of these items provides up to 84 sequences for the allocation of robot orders or actions; that is, four combinations of one gesture, 16 of two gestures and 64 of three gestures can be generated from this set. However, the implemented system was only tested with the restriction of three instead of four gestures (zero as a closed hand, five as an open palm with separated fingers, and two representing the victory sign). In this case, the number of actions is reduced to 39. In other words, the set of 39 actions is given by the ordered sequences of k not necessarily distinct gestures, where k can take the value one, two or three: three combinations of one gesture, nine of two gestures and 27 of three gestures. Moreover, the choice of gestures was made with the intention of making each robot action natural; for example, to go forward, the gesture is the right hand with the fingers pointing up.</p>
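<p>These counts correspond to the number of ordered sequences of one to three gestures with repetition allowed, as the following small Python check illustrates.</p>
<preformat>
def sequence_count(num_gestures, max_length=3):
    """Number of ordered sequences of 1..max_length gestures, repetition allowed."""
    return sum(num_gestures ** k for k in range(1, max_length + 1))

assert sequence_count(4) == 84   # four gestures per hand
assert sequence_count(3) == 39   # the three gestures actually tested
</preformat>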
<fig id="sensors-15-29853-f016" position="float">
<label>Figure 16</label>
<caption>
<p>Navigation through a finite-state machine by means of triggering gestures.</p>
</caption>
<graphic xlink:href="sensors-15-29853-g016"></graphic>
</fig>
<fig id="sensors-15-29853-f017" position="float">
<label>Figure 17</label>
<caption>
<p>Driven itinerary throughout the house, with the state detected at each point.</p>
</caption>
<graphic xlink:href="sensors-15-29853-g017"></graphic>
</fig>
<p>Furthermore, our system works like an SM in which each combination of gestures defines one action at a time. This means that the robot's actions depend on both the current state and the sequence of gestures performed by the human operator: the same gesture combination can lead to different target states depending on the state the machine is currently in. Therefore, each transition is defined by a list of admissible previous states and a triggering condition given by the gestures.</p>
<p>
<xref ref-type="fig" rid="sensors-15-29853-f016">Figure 16</xref>
shows an example of actions associated with states and gesture sequences. The SM indicates that a sequence of three gestures of the left hand is needed to go from the OFF state to the ON state. Moreover, a sequence of two gestures on the right hand is used to go from the ON state to the Forward-Left state; in this case, the gestures are made with the left hand open (ON to Forward, then Forward to Forward-Left). A closed gesture of the same hand moves the robot from any state to the Halt mode (ON). In the same way, other actions have been associated with the set of gestures, such as the camera tilt, which is controlled by the variation in the height of the left hand while it holds the victory sign; this variation is mapped to the number of degrees through which the camera servomotor must rotate.</p>
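<p>A partial Python sketch of such a transition table is shown below; it lists only the transitions explicitly described in the text, and the unlocking sequence is an assumed placeholder.</p>
<preformat>
# Partial sketch of the finite-state machine of Figure 16.  States and
# triggers are illustrative; the three-gesture unlock sequence is assumed.
TRANSITIONS = {
    ("OFF", ("open", "closed", "open")): "ON",   # assumed unlocking sequence
    ("ON", ("open",)): "Forward",
    ("Forward", ("open",)): "Forward-Left",
}

def step(state, gesture_sequence):
    """Return the next state; a closed hand moves any state to Halt (ON)."""
    if gesture_sequence == ("closed",):
        return "ON"
    return TRANSITIONS.get((state, gesture_sequence), state)
</preformat>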
<p>The user is resting on a couch when he hears an audio notification from a sensor at the main entrance of the home indicating someone’s arrival. The user wishes to determine who is coming in and, for that purpose, performs the following sequence of movements: (1) places his right hand in the allowed detection area (between the hip and the neck); (2) performs the sequence for initiating gesture detection, which is composed of three consecutive gestures; (3) directs the robot (by performing the necessary gestures) to the main entrance; (4) tilts the camera to focus on the face of the newly arrived person; (5) moves the robot out of the way; and (6) performs the sequence for terminating gesture detection (the same sequence as in step 2). </p>
<p>Some gestures consist of more than one pose. No constraint is placed on the time allowed to perform each of the poses of a gesture. The gesture used in step 4 depends on the relative variation in the height of the hand: 2 cm of variation (up or down from the starting height) is translated into 5° of rotation of the axis of the camera. The range of the servomotor that controls the inclination is limited to 135°, and thus the maximal distance that must be moved between hand positions is 54 cm. Each state is associated with a high-level API command for the robot, with the exception of the Halt state. The latter is handled by the machine that is connected to the Kinect (
<xref ref-type="table" rid="sensors-15-29853-t007">Table 7</xref>
) because it is related to the initiation or termination of operation. </p>
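<p>The mapping from hand height to camera tilt can be summarised as in the following Python sketch, which simply applies the 2 cm to 5° ratio and clamps the result to the 135° range of the servomotor (assuming that range spans 0° to 135°).</p>
<preformat>
def hand_height_to_tilt(delta_height_cm, current_angle_deg):
    """Map a change in hand height to a camera tilt angle: 2 cm of vertical
    movement corresponds to 5 degrees of rotation, clamped to the 135-degree
    servo range (so at most 54 cm of hand travel is meaningful)."""
    new_angle = current_angle_deg + 5.0 * (delta_height_cm / 2.0)
    return min(135.0, max(0.0, new_angle))
</preformat>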
<table-wrap id="sensors-15-29853-t007" position="float">
<object-id pub-id-type="pii">sensors-15-29853-t007_Table 7</object-id>
<label>Table 7</label>
<caption>
<p>Charlie API commands associated with different states:</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="left" valign="middle" style="border-top:solid thin;border-bottom:solid thin" rowspan="1" colspan="1"></th>
<th align="center" valign="middle" style="border-top:solid thin;border-bottom:solid thin" rowspan="1" colspan="1">Command</th>
<th align="center" valign="middle" style="border-top:solid thin;border-bottom:solid thin" rowspan="1" colspan="1">Parameter 1</th>
<th align="center" valign="middle" style="border-top:solid thin;border-bottom:solid thin" rowspan="1" colspan="1">Parameter 2</th>
<th align="center" valign="middle" style="border-top:solid thin;border-bottom:solid thin" rowspan="1" colspan="1">Command Description</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" valign="middle" rowspan="1" colspan="1">
<bold>1. OFF</bold>
</td>
<td align="center" valign="middle" rowspan="1" colspan="1">0</td>
<td align="center" valign="middle" rowspan="1" colspan="1"></td>
<td align="center" valign="middle" rowspan="1" colspan="1"></td>
<td align="center" valign="middle" rowspan="1" colspan="1">Stop Motors</td>
</tr>
<tr>
<td align="left" valign="middle" rowspan="1" colspan="1">
<bold>2. ON (Halt)</bold>
</td>
<td colspan="4" align="center" valign="middle" rowspan="1">(managed by the machine connected to the Kinect)</td>
</tr>
<tr>
<td align="left" valign="middle" rowspan="1" colspan="1">
<bold>3. Forward</bold>
</td>
<td align="center" valign="middle" rowspan="1" colspan="1">1</td>
<td align="center" valign="middle" rowspan="1" colspan="1">191</td>
<td align="center" valign="middle" rowspan="1" colspan="1"></td>
<td align="center" valign="middle" rowspan="1" colspan="1">Move forward (both motors at 75% speed)</td>
</tr>
<tr>
<td align="left" valign="middle" rowspan="1" colspan="1">
<bold>4. Forward-Left</bold>
</td>
<td align="center" valign="middle" rowspan="1" colspan="1">5</td>
<td align="center" valign="middle" rowspan="1" colspan="1">63</td>
<td align="center" valign="middle" rowspan="1" colspan="1">191</td>
<td align="center" valign="middle" rowspan="1" colspan="1">Left motor 25% Right motor 75%</td>
</tr>
<tr>
<td align="left" valign="middle" rowspan="1" colspan="1">
<bold>5. Forward-Right</bold>
</td>
<td align="center" valign="middle" rowspan="1" colspan="1">5</td>
<td align="center" valign="middle" rowspan="1" colspan="1">191</td>
<td align="center" valign="middle" rowspan="1" colspan="1">63</td>
<td align="center" valign="middle" rowspan="1" colspan="1">Left motor 75% Right motor 25%</td>
</tr>
<tr>
<td align="left" valign="middle" rowspan="1" colspan="1">
<bold>6. Turn Left</bold>
</td>
<td align="center" valign="middle" rowspan="1" colspan="1">6</td>
<td align="center" valign="middle" rowspan="1" colspan="1"></td>
<td align="center" valign="middle" rowspan="1" colspan="1"></td>
<td align="center" valign="middle" rowspan="1" colspan="1">Edge-rotation to the left</td>
</tr>
<tr>
<td align="left" valign="middle" rowspan="1" colspan="1">
<bold>7. Turn Right</bold>
</td>
<td align="center" valign="middle" rowspan="1" colspan="1">7</td>
<td align="center" valign="middle" rowspan="1" colspan="1"></td>
<td align="center" valign="middle" rowspan="1" colspan="1"></td>
<td align="center" valign="middle" rowspan="1" colspan="1">Edge-rotation to the right</td>
</tr>
<tr>
<td align="left" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">
<bold>8. Camera Tilting</bold>
</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">35</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">x</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1"></td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Sets the inclination of the camera</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>As seen, the robot is not merely capable of reproducing marked trajectories; it can also be programmed to perform actions in a different manner through multi-pose gesture recognition. Via the wireless network of the house, it is possible to access the video stream from the robot's camera using a simple web browser, providing the user with first-person feedback (
<xref ref-type="fig" rid="sensors-15-29853-f018">Figure 18</xref>
). Increasing the complexity of the gesture detection, by including a finite-state machine in between, improves the rejection of faulty detections and makes the output more reliable (
<xref ref-type="table" rid="sensors-15-29853-t008">Table 8</xref>
). To filter out any remaining errors prior to changing the state of the machine, a vote-based evaluation of three incoming gestures is performed, and the gesture with the most identified occurrences is accepted as the intended one.</p>
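<p>This voting filter amounts to a simple majority vote over the last three detections, as in the Python sketch below.</p>
<preformat>
from collections import Counter

def vote_gesture(last_three_detections):
    """Accept the gesture detected most often among three consecutive frames,
    as in the 3-step voting evaluation described above."""
    gesture, _ = Counter(last_three_detections).most_common(1)[0]
    return gesture
</preformat>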
<fig id="sensors-15-29853-f018" position="float">
<label>Figure 18</label>
<caption>
<p>Frames captured by the robot’s camera along the itinerary from the base station to the target.</p>
</caption>
<graphic xlink:href="sensors-15-29853-g018"></graphic>
</fig>
<p>
<xref ref-type="fig" rid="sensors-15-29853-f019">Figure 19</xref>
shows the set of gestures used, which were chosen because they are easy for anyone to imitate (ergonomic) and yield good recognition results. It also shows the rejected gestures, which were discarded for several reasons: they are difficult to imitate, too similar to other gestures (the success rate of the descriptor in the recognition process decreases significantly), or not intuitive;
<italic>i.e</italic>
., it would be difficult for the user to remember the associated action from the shape and position.
<xref ref-type="fig" rid="sensors-15-29853-f020">Figure 20</xref>
shows the full set of gestures that the recognition descriptor is able to identify correctly and that are considered for handling the robot. It is important to note that gestures grouped under the same label cannot be used together in our system because the recognition descriptor does not provide a sufficient difference between their signatures. The figure also illustrates the different combinations chosen for both hands: the criterion was to choose the most robust gesture for the left hand and the most intuitive one for the right hand.</p>
<table-wrap id="sensors-15-29853-t008" position="float">
<object-id pub-id-type="pii">sensors-15-29853-t008_Table 8</object-id>
<label>Table 8</label>
<caption>
<p>Accuracy of the detection of gesture sequences to reach the state needed to determine the path of the robot in Experiment 4 and in artificial tests of the navigation between states (100 trials per action).</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="center" valign="middle" style="border-top:solid thin;border-bottom:solid thin" rowspan="1" colspan="1"></th>
<th align="center" valign="middle" style="border-top:solid thin;border-bottom:solid thin" rowspan="1" colspan="1">Involved States (1)</th>
<th align="center" valign="middle" style="border-top:solid thin;border-bottom:solid thin" rowspan="1" colspan="1">Times Detected (2)</th>
<th align="center" valign="middle" style="border-top:solid thin;border-bottom:solid thin" rowspan="1" colspan="1">Times Missed (3)</th>
<th align="center" valign="middle" style="border-top:solid thin;border-bottom:solid thin" rowspan="1" colspan="1">Times Missed after 3-Step Voting Evaluation (4)</th>
<th align="center" valign="middle" style="border-top:solid thin;border-bottom:solid thin" rowspan="1" colspan="1">Ratio of Correct Detection in Artificial Tests after 3-Step Voting Evaluation (6)</th>
</tr>
</thead>
<tbody>
<tr>
<td align="center" valign="middle" rowspan="1" colspan="1">Unlocking and locking</td>
<td align="center" valign="middle" rowspan="1" colspan="1">1, 2</td>
<td align="center" valign="middle" rowspan="1" colspan="1">2</td>
<td align="center" valign="middle" rowspan="1" colspan="1">1</td>
<td align="center" valign="middle" rowspan="1" colspan="1">0</td>
<td align="center" valign="middle" rowspan="1" colspan="1">100%</td>
</tr>
<tr>
<td align="center" valign="middle" rowspan="1" colspan="1">Moving forward </td>
<td align="center" valign="middle" rowspan="1" colspan="1">2, 3, 4, 5</td>
<td align="center" valign="middle" rowspan="1" colspan="1">32</td>
<td align="center" valign="middle" rowspan="1" colspan="1">6</td>
<td align="center" valign="middle" rowspan="1" colspan="1">1</td>
<td align="center" valign="middle" rowspan="1" colspan="1">92%</td>
</tr>
<tr>
<td align="center" valign="middle" rowspan="1" colspan="1">Moving left </td>
<td align="center" valign="middle" rowspan="1" colspan="1">2, 6</td>
<td align="center" valign="middle" rowspan="1" colspan="1">3</td>
<td align="center" valign="middle" rowspan="1" colspan="1">0</td>
<td align="center" valign="middle" rowspan="1" colspan="1">0</td>
<td align="center" valign="middle" rowspan="1" colspan="1">97%</td>
</tr>
<tr>
<td align="center" valign="middle" rowspan="1" colspan="1">Moving right</td>
<td align="center" valign="middle" rowspan="1" colspan="1">2, 7</td>
<td align="center" valign="middle" rowspan="1" colspan="1">8</td>
<td align="center" valign="middle" rowspan="1" colspan="1">1</td>
<td align="center" valign="middle" rowspan="1" colspan="1">0</td>
<td align="center" valign="middle" rowspan="1" colspan="1">95%</td>
</tr>
<tr>
<td align="center" valign="middle" rowspan="1" colspan="1">Moving forward-right</td>
<td align="center" valign="middle" rowspan="1" colspan="1">3, 5</td>
<td align="center" valign="middle" rowspan="1" colspan="1">14</td>
<td align="center" valign="middle" rowspan="1" colspan="1">3</td>
<td align="center" valign="middle" rowspan="1" colspan="1">0</td>
<td align="center" valign="middle" rowspan="1" colspan="1">95%</td>
</tr>
<tr>
<td align="center" valign="middle" rowspan="1" colspan="1">Camera tilting</td>
<td align="center" valign="middle" rowspan="1" colspan="1">2, 8</td>
<td align="center" valign="middle" rowspan="1" colspan="1">6</td>
<td align="center" valign="middle" rowspan="1" colspan="1">0</td>
<td align="center" valign="middle" rowspan="1" colspan="1">0</td>
<td align="center" valign="middle" rowspan="1" colspan="1">100%</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Halt</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">All to 2</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">13</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">0</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">0</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">98%</td>
</tr>
</tbody>
</table>
</table-wrap>
<fig id="sensors-15-29853-f019" position="float">
<label>Figure 19</label>
<caption>
<p>Gestures considered for robot control in our experiments.</p>
</caption>
<graphic xlink:href="sensors-15-29853-g019"></graphic>
</fig>
<fig id="sensors-15-29853-f020" position="float">
<label>Figure 20</label>
<caption>
<p>Full set of gestures that the recognition descriptor is able to identify correctly and that are considered for controlling the robot. (
<bold>a</bold>
) With just one hand, left or right; (
<bold>b</bold>
) Sequence of two gestures with both hands.</p>
</caption>
<graphic xlink:href="sensors-15-29853-g020"></graphic>
</fig>
</sec>
</sec>
<sec id="sec4-sensors-15-29853">
<title>4. Discussion and Conclusions</title>
<p>This paper presented a low-cost robotic system to assist people with monitoring tasks in home environments. The robotic system is managed by means of a human interface that recognises gestures from both hands. Our human interface consists of a low-cost RGBD sensor, such as the Kinect, and a set of algorithms to detect the locations of human hands based on the skeleton of the user’s body and to recognise their poses and orientations using a 3D descriptor of surfaces. This component of the system functions correctly regardless of the characteristics of the environment and the human operator. </p>
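<p>As an illustration of this recognition pipeline, the following minimal Python sketch crops the point cloud around the tracked hand joint and matches a crude histogram descriptor against a small trained database. It is not the authors' implementation: the histogram is only a stand-in for the 3D surface descriptor used in the paper, and all names, radii and parameters are illustrative assumptions.</p>
<preformat>
# Minimal sketch (not the authors' implementation): outline of the hand
# recognition pipeline -- crop the hand region around the tracked hand joint,
# compute a surface descriptor, and match it against a trained model database.
# The histogram below is a crude stand-in for the paper's 3D descriptor.
import numpy as np

HAND_RADIUS_M = 0.12  # assumed crop radius around the hand joint

def crop_hand(cloud_xyz, hand_joint_xyz, radius=HAND_RADIUS_M):
    """Keep only the points within `radius` of the tracked hand joint."""
    dist = np.linalg.norm(cloud_xyz - hand_joint_xyz, axis=1)
    return cloud_xyz[dist < radius]

def describe(hand_points, bins=8):
    """Stand-in descriptor: normalised histograms of the centred x/y spread."""
    if len(hand_points) == 0:
        return np.zeros(2 * bins)
    centred = hand_points - hand_points.mean(axis=0)
    hx, _ = np.histogram(centred[:, 0], bins=bins, range=(-0.15, 0.15))
    hy, _ = np.histogram(centred[:, 1], bins=bins, range=(-0.15, 0.15))
    desc = np.concatenate([hx, hy]).astype(float)
    return desc / (desc.sum() + 1e-9)

def recognise(hand_points, model_db):
    """Return the label of the closest descriptor in the trained database."""
    d = describe(hand_points)
    label, _ = min(model_db.items(), key=lambda kv: np.linalg.norm(d - kv[1]))
    return label

# Usage with synthetic data: two "trained" gestures and one test view.
rng = np.random.default_rng(0)
db = {"open_hand": describe(rng.normal(0, 0.05, (500, 3))),
      "fist": describe(rng.normal(0, 0.02, (500, 3)))}
test_cloud = rng.normal(0, 0.02, (2000, 3))
print(recognise(crop_hand(test_cloud, np.zeros(3)), db))
</preformat>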
<p>The use of a low-cost robot to perform the proposed tasks is a new approach to reducing the cost of the high-level robotisation of tasks in the home. This approach introduces the possibility of controlling the system with human gestures and provides the user with feedback from the environment via a motorised camera. The primary advantage of this low-cost approach is the ease of replacement of any damaged component; as a trade-off, the designed prototype is not highly robust against unexpected problems that may arise in its operating space.</p>
<p>The robotic system has been programmed to serve as a basis for surveillance activities in the home. It moves through the house and uses a motorised camera, which is also remotely operated by gestures. In this way, dependent individuals can rely less on the assistance of other people over time. The robot is intended to assist the disabled, the elderly or people with mobility problems in tasks that involve physical actions, such as getting up and moving from one place to another inside the home to observe what is occurring in those locations.</p>
<p>The results of the experiments indicate that the proposed human interface based on hand recognition achieves high levels of accuracy in the interpretation of gestures (greater than 86%). Although the experiments also reveal false positives in the recognition process, the system always runs in real time and allows the human operator to repeat a gesture three times to make the interpretation robust and to prevent unwanted or erroneous task commands from being issued to the robot. This repetition concept is inspired by the High Dynamic Range (HDR) mode of certain cameras: each gesture is automatically captured three times, with the intent of improving the range used to register the 3D point cloud that represents the hand. Additionally, a runtime study revealed that the mean runtime of the entire recognition process is 320 ms. This process includes image acquisition, feature extraction, 3D descriptor computation and matching between the test view and the model database. Two disadvantages are that the system requires training and that a larger number of possible gestures increases the runtime and decreases the accuracy, thereby reducing the success rate. However, a considerable advantage is the ability to work with both hands together with the implementation as a State Machine (SM); consequently, more robot commands or actions can be associated than there are gestures, because each action depends on the current state of the SM as well as on the sequence of gestures, not on the gestures alone.</p>
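<p>The next Python sketch illustrates, under stated assumptions, how such a three-step voting step and a gesture-driven state machine can be combined. It is not the published code: the transition table, state names and majority rule are hypothetical examples chosen only to show that the same gesture can trigger different actions depending on the current state.</p>
<preformat>
# Illustrative only (not the published code): accept a gesture only if it wins
# a majority over three consecutive recognitions, then feed it to a small
# state machine so the triggered action depends on the current state too.
from collections import Counter, deque

class VotingFilter:
    def __init__(self, window=3):
        self.window = deque(maxlen=window)

    def push(self, gesture_label):
        """Return the accepted gesture once it wins the 3-step vote, else None."""
        self.window.append(gesture_label)
        if len(self.window) == self.window.maxlen:
            label, votes = Counter(self.window).most_common(1)[0]
            if votes >= 2:          # simple majority over the three captures
                self.window.clear()
                return label
        return None

# Hypothetical transition table: (current_state, gesture) -> (next_state, command)
TRANSITIONS = {
    ("locked",  "open_hand"): ("idle",    "unlock"),
    ("idle",    "point"):     ("forward", "move_forward"),
    ("forward", "fist"):      ("idle",    "halt"),
    ("idle",    "fist"):      ("locked",  "lock"),
}

def step(state, gesture):
    """Apply one accepted gesture to the state machine."""
    return TRANSITIONS.get((state, gesture), (state, None))

# Usage: the noisy third capture is outvoted; accepted gestures then drive
# state-dependent commands.
voter, state = VotingFilter(), "locked"
for g in ["open_hand", "open_hand", "noise", "point", "point", "point"]:
    accepted = voter.push(g)
    if accepted:
        state, command = step(state, accepted)
        print(accepted, "->", state, command)
</preformat>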
<p>Another important aspect is obstacle avoidance, for which the distance sensor mounted on the robot is used. If an obstacle is detected, the robot stops and waits for new commands from the user, who controls the robot's movements with gestures using the view captured by the webcam mounted on the robot. In summary, the designed system offers a new approach to the execution of tasks in the home. Currently, the system is being improved through the development of an easy programming method for defining the tasks and behaviours of the robot.</p>
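<p>A minimal control-loop sketch of this stop-and-wait behaviour is given below. It assumes a safety threshold and uses the hypothetical placeholders read_distance_m() and send(), which stand in for the robot's distance sensor and command API; none of these names come from the published system.</p>
<preformat>
# Minimal sketch (assumed behaviour, not the actual firmware): drive the robot
# according to the last gesture command, but stop and wait for a new command
# whenever the distance sensor reports an obstacle closer than a threshold.
import random, time

STOP_DISTANCE_M = 0.30   # assumed safety threshold

def read_distance_m():
    """Placeholder for the robot's distance sensor reading."""
    return random.uniform(0.1, 2.0)

def send(command):
    """Placeholder for the command API of the mini-robot."""
    print("->", command)

def control_loop(get_gesture_command, cycles=5, period_s=0.1):
    current = "halt"
    for _ in range(cycles):
        if read_distance_m() < STOP_DISTANCE_M:
            send("halt")              # obstacle detected: stop immediately
            current = "halt"          # and wait for a fresh gesture command
        else:
            new = get_gesture_command()
            if new is not None:
                current = new
            send(current)
        time.sleep(period_s)

# Usage: replace the lambda with the output of the gesture recognition stage.
control_loop(lambda: random.choice([None, "move_forward", "turn_left"]))
</preformat>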
</sec>
</body>
<back>
<ack>
<title>Acknowledgments</title>
<p>The research that yielded these results has received funding from the projects DPI2012-32390 and PROMETEO/2013/085.</p>
</ack>
<notes>
<title>Author Contributions</title>
<p>The contributions presented in this work are the result of the joint efforts of all authors who compose the research team. Each of the members contributed to a similar degree in each step of this research, including the analysis, design, development, implementation, and testing of the proposed system. Angel D. Sempere and Pablo Gil implemented the perception system based on 3D descriptors. Arturo Serna and Santiago Puente designed the architecture of the robot, its command API and its communication system. Fernando Torres coordinated the integration of all aspects of the system. All authors contributed to the design of the tests and experiments. </p>
</notes>
<notes>
<title>Conflicts of Interest</title>
<p>The authors declare no conflict of interest.</p>
</notes>
<ref-list>
<title>References</title>
<ref id="B1-sensors-15-29853">
<label>1.</label>
<element-citation publication-type="webpage">
<person-group person-group-type="author">
<collab>Parkinson Disease Foundation</collab>
</person-group>
<article-title>Statistics for Parkinson’s Disease 2014</article-title>
<comment>Available online:
<ext-link ext-link-type="uri" xlink:href="http://www.pdf.org/en/parkinson_statistics">http://www.pdf.org/en/parkinson_statistics</ext-link>
</comment>
<date-in-citation>(accessed on 10 March 2015)</date-in-citation>
</element-citation>
</ref>
<ref id="B2-sensors-15-29853">
<label>2.</label>
<element-citation publication-type="webpage">
<person-group person-group-type="author">
<collab>Stroke Center</collab>
</person-group>
<article-title>Stroke Statistics 2014</article-title>
<comment>Available online:
<ext-link ext-link-type="uri" xlink:href="http://www.strokecenter.org/patients/about-stroke/stroke-statistics/">http://www.strokecenter.org/patients/about-stroke/stroke-statistics/</ext-link>
</comment>
<date-in-citation>(accessed on 11 March 2015)</date-in-citation>
</element-citation>
</ref>
<ref id="B3-sensors-15-29853">
<label>3.</label>
<element-citation publication-type="gov">
<person-group person-group-type="author">
<collab>US Census</collab>
</person-group>
<article-title>The Elderly Population 2014</article-title>
<comment>Available online:
<ext-link ext-link-type="uri" xlink:href="http://www.census.gov/prod/2014pubs/p25-1140.pdf">http://www.census.gov/prod/2014pubs/p25-1140.pdf</ext-link>
</comment>
<date-in-citation>(accessed on 15 March 2015)</date-in-citation>
</element-citation>
</ref>
<ref id="B4-sensors-15-29853">
<label>4.</label>
<element-citation publication-type="webpage">
<person-group person-group-type="author">
<collab>IEEE Spectrum</collab>
</person-group>
<article-title>Where are the Elder Care Robots?</article-title>
<year>2012</year>
<comment>Available online:
<ext-link ext-link-type="uri" xlink:href="http://spectrum.ieee.org/automaton/robotics/home-robots/where-are-the-eldercare-robots">http://spectrum.ieee.org/automaton/robotics/home-robots/where-are-the-eldercare-robots</ext-link>
</comment>
<date-in-citation>(accessed on 10 March 2015)</date-in-citation>
</element-citation>
</ref>
<ref id="B5-sensors-15-29853">
<label>5.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Gross</surname>
<given-names>H.</given-names>
</name>
</person-group>
<article-title>I’ll keep an eye on you: Home robot companion for elderly people with cognitive impairment</article-title>
<source>Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics (SMC)</source>
<conf-loc>Anchorage, AK, USA</conf-loc>
<conf-date>9–12 October 2011</conf-date>
<fpage>2481</fpage>
<lpage>2488</lpage>
</element-citation>
</ref>
<ref id="B6-sensors-15-29853">
<label>6.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Xiong</surname>
<given-names>X.</given-names>
</name>
<name>
<surname>Song</surname>
<given-names>Z.</given-names>
</name>
<name>
<surname>Zhang</surname>
<given-names>J.</given-names>
</name>
</person-group>
<article-title>Domestic robots with multi-function and safe internet connectivity</article-title>
<source>Proceedings of the International Conference on Information and Automation</source>
<conf-loc>Zhuhai, China</conf-loc>
<conf-date>22–25 June 2009</conf-date>
<fpage>277</fpage>
<lpage>282</lpage>
</element-citation>
</ref>
<ref id="B7-sensors-15-29853">
<label>7.</label>
<element-citation publication-type="webpage">
<person-group person-group-type="author">
<collab>IRC5 with FlexPendant</collab>
</person-group>
<article-title>Operating Manual</article-title>
<comment>Available online:
<ext-link ext-link-type="uri" xlink:href="http://developercenter.robotstudio.com/index.aspx?DevCenter=ManualsOrFPSDK&OpenDocument&Url=../IRC5FlexPendantOpManual/Custom/IRC5FlexPendantOpManual.html">http://developercenter.robot studio.com/index.aspx?DevCenter=ManualsOrFPSDK&OpenDocument&Url=../IRC5FlexPendantOpManual/Custom/IRC5FlexPendantOpManual.html</ext-link>
</comment>
<date-in-citation>(accessed on 2 November 2015)</date-in-citation>
</element-citation>
</ref>
<ref id="B8-sensors-15-29853">
<label>8.</label>
<element-citation publication-type="webpage">
<article-title>Rovio WowWee</article-title>
<comment>Available online:
<ext-link ext-link-type="uri" xlink:href="http://www.wowwee.com/en/products/tech/telepresence/rovio/rovio">http://www.wowwee.com/en/products/tech/telepresence/rovio/rovio</ext-link>
</comment>
<date-in-citation>(accessed on 4 November 2015)</date-in-citation>
</element-citation>
</ref>
<ref id="B9-sensors-15-29853">
<label>9.</label>
<element-citation publication-type="webpage">
<article-title>Vigilio Vigi’Fall</article-title>
<comment>Available online:
<ext-link ext-link-type="uri" xlink:href="http://www.vigilio.fr">http://www.vigilio.fr</ext-link>
</comment>
<date-in-citation>(accessed on 17 June 2015)</date-in-citation>
</element-citation>
</ref>
<ref id="B10-sensors-15-29853">
<label>10.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Jackson</surname>
<given-names>R.D.</given-names>
</name>
</person-group>
<article-title>Robotics and its role in helping disabled people</article-title>
<source>Eng. Sci. Educ. J.</source>
<year>1993</year>
<volume>2</volume>
<fpage>267</fpage>
<lpage>272</lpage>
<pub-id pub-id-type="doi">10.1049/esej:19930077</pub-id>
</element-citation>
</ref>
<ref id="B11-sensors-15-29853">
<label>11.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Moreno Avalos</surname>
<given-names>H.A.</given-names>
</name>
<name>
<surname>Carrera Calderón</surname>
<given-names>I.G.</given-names>
</name>
<name>
<surname>Romero Hernández</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Cruz Morales</surname>
<given-names>V.</given-names>
</name>
</person-group>
<article-title>Concept Design Process for Robotic Devices: The Case of an Assistive Robot</article-title>
<source>Multibody Mechatron. Syst. Mech. Mach. Sci.</source>
<year>2015</year>
<volume>25</volume>
<fpage>295</fpage>
<lpage>304</lpage>
</element-citation>
</ref>
<ref id="B12-sensors-15-29853">
<label>12.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Suarez</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Murphy</surname>
<given-names>R.R.</given-names>
</name>
</person-group>
<article-title>Hand gesture recognition with depth images: A review</article-title>
<source>Proceedings of the IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)</source>
<conf-loc>Paris, France</conf-loc>
<conf-date>9–13 September 2012</conf-date>
<fpage>411</fpage>
<lpage>417</lpage>
</element-citation>
</ref>
<ref id="B13-sensors-15-29853">
<label>13.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Van den Bergh</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Carton</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>de Nijs</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Mitsou</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Landsiedel</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Kuehnlenz</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Wollherr</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>van Gool</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Buss</surname>
<given-names>M.</given-names>
</name>
</person-group>
<article-title>Real-time 3D hand gesture interaction with a robot for understanding directions from humans</article-title>
<source>Proceedings of the IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)</source>
<conf-loc>Atlanta, GA, USA</conf-loc>
<conf-date>31 July–3 August 2011</conf-date>
<fpage>357</fpage>
<lpage>362</lpage>
</element-citation>
</ref>
<ref id="B14-sensors-15-29853">
<label>14.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Alonso-Mora</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Haegeli Lohaus</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Leemann</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Siegwart</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Beardsley</surname>
<given-names>P.</given-names>
</name>
</person-group>
<article-title>Gesture based human—Multi-robot swarm interaction and its application to an interactive display</article-title>
<source>Proceedings of the IEEE International Conference on Robotics and Automation (ICRA)</source>
<conf-loc>Seattle, WA, USA</conf-loc>
<conf-date>26–30 May 2015</conf-date>
<fpage>5948</fpage>
<lpage>5953</lpage>
</element-citation>
</ref>
<ref id="B15-sensors-15-29853">
<label>15.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Asad</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Abhayaratne</surname>
<given-names>C.</given-names>
</name>
</person-group>
<article-title>Kinect depth stream pre-processing for hand gesture recognition</article-title>
<source>Proceedings of the 20th IEEE International Conference on Image Processing (ICIP)</source>
<conf-loc>Melbourne, Australia</conf-loc>
<conf-date>13–18 September 2013</conf-date>
<fpage>3735</fpage>
<lpage>3739</lpage>
</element-citation>
</ref>
<ref id="B16-sensors-15-29853">
<label>16.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wang</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Liu</surname>
<given-names>Z.</given-names>
</name>
<name>
<surname>Chan</surname>
<given-names>S.-C.</given-names>
</name>
</person-group>
<article-title>Superpixel-Based Hand Gesture Recognition with Kinect Depth Camera</article-title>
<source>IEEE Trans. Multimed.</source>
<year>2015</year>
<volume>17</volume>
<fpage>29</fpage>
<lpage>39</lpage>
<pub-id pub-id-type="doi">10.1109/TMM.2014.2374357</pub-id>
</element-citation>
</ref>
<ref id="B17-sensors-15-29853">
<label>17.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Kopinski</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Magand</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Gepperth</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Handmann</surname>
<given-names>U.</given-names>
</name>
</person-group>
<article-title>A light-weight real-time applicable hand gesture recognition system for automotive applications</article-title>
<source>Proceedings of the IEEE Intelligent Vehicles Symposium (IV)</source>
<conf-loc>Seoul, Korea</conf-loc>
<conf-date>28 June–1 July 2015</conf-date>
<fpage>336</fpage>
<lpage>342</lpage>
</element-citation>
</ref>
<ref id="B18-sensors-15-29853">
<label>18.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Kondori</surname>
<given-names>F.A.</given-names>
</name>
<name>
<surname>Yousefit</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Ostovar</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Liu</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Li</surname>
<given-names>H.</given-names>
</name>
</person-group>
<article-title>A Direct Method for 3D Hand Pose Recovery</article-title>
<source>Proceedings of the 22nd International Conference on Pattern Recognition (ICPR)</source>
<conf-loc>Stockholm, Sweden</conf-loc>
<conf-date>24–28 August 2014</conf-date>
<fpage>345</fpage>
<lpage>350</lpage>
</element-citation>
</ref>
<ref id="B19-sensors-15-29853">
<label>19.</label>
<element-citation publication-type="webpage">
<person-group person-group-type="author">
<collab>Github Website</collab>
</person-group>
<article-title>Charlie the Robot</article-title>
<comment>Available online:
<ext-link ext-link-type="uri" xlink:href="https://github.com/sernaleon/charlie/wiki">https://github.com/sernaleon/charlie/wiki</ext-link>
</comment>
<date-in-citation>(accessed on 4 December 2015)</date-in-citation>
</element-citation>
</ref>
<ref id="B20-sensors-15-29853">
<label>20.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Takano</surname>
<given-names>W.</given-names>
</name>
<name>
<surname>Ishikawa</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Nakamura</surname>
<given-names>Y.</given-names>
</name>
</person-group>
<article-title>Using a human action database to recognize actions in monocular image sequences: Recovering human whole body configurations</article-title>
<source>Adv. Robot.</source>
<year>2015</year>
<volume>29</volume>
<fpage>771</fpage>
<lpage>784</lpage>
<pub-id pub-id-type="doi">10.1080/01691864.2014.996604</pub-id>
</element-citation>
</ref>
<ref id="B21-sensors-15-29853">
<label>21.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Rösch</surname>
<given-names>O.K.</given-names>
</name>
<name>
<surname>Schilling</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Roth</surname>
<given-names>H.</given-names>
</name>
</person-group>
<article-title>Haptic interfaces for the remote control of mobile robots</article-title>
<source>Control Eng. Pract.</source>
<year>2002</year>
<volume>10</volume>
<fpage>1309</fpage>
<lpage>1313</lpage>
<pub-id pub-id-type="doi">10.1016/S0967-0661(02)00153-3</pub-id>
</element-citation>
</ref>
<ref id="B22-sensors-15-29853">
<label>22.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Okamura</surname>
<given-names>A.M.</given-names>
</name>
</person-group>
<article-title>Methods for haptic feedback in teleoperated robot-assisted surgery</article-title>
<source>Ind. Robot Int. J.</source>
<year>2004</year>
<volume>31</volume>
<fpage>499</fpage>
<lpage>508</lpage>
<pub-id pub-id-type="doi">10.1108/01439910410566362</pub-id>
<pub-id pub-id-type="pmid">16429611</pub-id>
</element-citation>
</ref>
<ref id="B23-sensors-15-29853">
<label>23.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Roesener</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Perner</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Zerawa</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Hutter</surname>
<given-names>S.</given-names>
</name>
</person-group>
<article-title>Interface for non-haptic control in automation</article-title>
<source>Proceedings of the 8th IEEE International Conference on Industrial Informatics (INDIN)</source>
<conf-loc>Osaka, Japan</conf-loc>
<conf-date>13–16 July 2010</conf-date>
<fpage>961</fpage>
<lpage>966</lpage>
</element-citation>
</ref>
<ref id="B24-sensors-15-29853">
<label>24.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Dadgostar</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Sarrafzadeh</surname>
<given-names>A.</given-names>
</name>
</person-group>
<article-title>An adaptive real-time skin detector based on hue thresholding: Comparison on two motion tracking methods</article-title>
<source>Pattern Recogn. Lett.</source>
<year>2006</year>
<volume>27</volume>
<fpage>1342</fpage>
<lpage>1352</lpage>
<pub-id pub-id-type="doi">10.1016/j.patrec.2006.01.007</pub-id>
</element-citation>
</ref>
<ref id="B25-sensors-15-29853">
<label>25.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Tara</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Santosa</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Adji</surname>
<given-names>T.</given-names>
</name>
</person-group>
<article-title>Hand Segmentation from Depth Image Using Anthropometric Approach in Natural Interface Development</article-title>
<source>Int. J. Sci. Eng. Res.</source>
<year>2012</year>
<volume>3</volume>
<fpage>1</fpage>
<lpage>4</lpage>
</element-citation>
</ref>
<ref id="B26-sensors-15-29853">
<label>26.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Chen</surname>
<given-names>F.S.</given-names>
</name>
<name>
<surname>Fu</surname>
<given-names>C.M.</given-names>
</name>
<name>
<surname>Huang</surname>
<given-names>C.L.</given-names>
</name>
</person-group>
<article-title>Hand gesture recognition using a real-time tracking method and hidden Markov models</article-title>
<source>Image Vis. Comput.</source>
<year>2003</year>
<volume>21</volume>
<fpage>745</fpage>
<lpage>758</lpage>
<pub-id pub-id-type="doi">10.1016/S0262-8856(03)00070-2</pub-id>
</element-citation>
</ref>
<ref id="B27-sensors-15-29853">
<label>27.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Palacios</surname>
<given-names>J.M.</given-names>
</name>
<name>
<surname>Sagüés</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Montijano</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Llorente</surname>
<given-names>S.</given-names>
</name>
</person-group>
<article-title>Human-Computer Interaction Based on Hand Gestures Using RGB-D Sensors</article-title>
<source>Sensors</source>
<year>2013</year>
<volume>13</volume>
<fpage>11842</fpage>
<lpage>11860</lpage>
<pub-id pub-id-type="doi">10.3390/s130911842</pub-id>
<pub-id pub-id-type="pmid">24018953</pub-id>
</element-citation>
</ref>
<ref id="B28-sensors-15-29853">
<label>28.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Liang</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Yuan</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Thalmann</surname>
<given-names>D.</given-names>
</name>
</person-group>
<article-title>3D Fingertip and Palm Tracking in Depth Image Sequences</article-title>
<source>Proceedings of the 20th ACM International Conference on Multimedia</source>
<conf-loc>Nara, Japan</conf-loc>
<conf-date>29 October–2 November 2012</conf-date>
<fpage>785</fpage>
<lpage>788</lpage>
</element-citation>
</ref>
<ref id="B29-sensors-15-29853">
<label>29.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Gil</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Mateo</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Torres</surname>
<given-names>F.</given-names>
</name>
</person-group>
<article-title>3D Visual Sensing of the Human Hand for the Remote Operation of a Robotic Hand</article-title>
<source>Int. J. Adv. Robot. Syst.</source>
<year>2014</year>
<fpage>11</fpage>
<lpage>26</lpage>
<pub-id pub-id-type="doi">10.5772/57525</pub-id>
</element-citation>
</ref>
<ref id="B30-sensors-15-29853">
<label>30.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Caputo</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Denker</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Dums</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Umlauf</surname>
<given-names>G.</given-names>
</name>
</person-group>
<article-title>3D Hand Gesture Recognition Based on Sensor Fusion of Commodity Hardware</article-title>
<source>Mensch Comput.</source>
<year>2012</year>
<volume>2012</volume>
<fpage>293</fpage>
<lpage>302</lpage>
</element-citation>
</ref>
<ref id="B31-sensors-15-29853">
<label>31.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Cosgun</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Bunger</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Christensen</surname>
<given-names>H.I.</given-names>
</name>
</person-group>
<article-title>Accuracy Analysis of Skeleton Trackers for Safety in HRI</article-title>
<source>Proceedings of the Workshop on Safety and Comfort of Humanoid Coworker and Assistant (HUMANOIDS)</source>
<conf-loc>Atlanta, GA, USA</conf-loc>
<conf-date>15–17 October 2013</conf-date>
</element-citation>
</ref>
<ref id="B32-sensors-15-29853">
<label>32.</label>
<element-citation publication-type="webpage">
<person-group person-group-type="author">
<collab>DINED Anthropometric Database</collab>
</person-group>
<article-title>Hand Length Size of North European Communities</article-title>
<comment>Available online:
<ext-link ext-link-type="uri" xlink:href="http://dined.io.tudelft.nl/dined/">http://dined.io.tudelft.nl/dined/</ext-link>
</comment>
<date-in-citation>(accessed on 2 November 2015)</date-in-citation>
</element-citation>
</ref>
<ref id="B33-sensors-15-29853">
<label>33.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Keskin</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Kirac</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Kara</surname>
<given-names>Y.E.</given-names>
</name>
<name>
<surname>Akarun</surname>
<given-names>L.</given-names>
</name>
</person-group>
<article-title>Real time hand pose estimation using depth sensors</article-title>
<source>Proceedings of the Computer Vision Workshops</source>
<conf-loc>Barcelona, Spain</conf-loc>
<conf-date>6–13 November 2011</conf-date>
<fpage>1228</fpage>
<lpage>1234</lpage>
</element-citation>
</ref>
<ref id="B34-sensors-15-29853">
<label>34.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ren</surname>
<given-names>Z.</given-names>
</name>
<name>
<surname>Yuan</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Meng</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Zhang</surname>
<given-names>Z.</given-names>
</name>
</person-group>
<article-title>Robust Part-Based Hand Gesture Recognition Using Kinect Sensor</article-title>
<source>IEEE Trans. Multimed.</source>
<year>2013</year>
<volume>15</volume>
<fpage>1110</fpage>
<lpage>1120</lpage>
<pub-id pub-id-type="doi">10.1109/TMM.2013.2246148</pub-id>
</element-citation>
</ref>
<ref id="B35-sensors-15-29853">
<label>35.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Yong</surname>
<given-names>W.</given-names>
</name>
<name>
<surname>Tianli</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Shi</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Zhu</surname>
<given-names>L.</given-names>
</name>
</person-group>
<article-title>Using human body gestures as inputs for gaming via depth analysis</article-title>
<source>Proceedings of the IEEE International Conference on Multimedia and Expo</source>
<conf-loc>Hannover, Germany</conf-loc>
<conf-date>23–26 June 2008</conf-date>
<fpage>993</fpage>
<lpage>996</lpage>
</element-citation>
</ref>
<ref id="B36-sensors-15-29853">
<label>36.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Malassiotis</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Aifanti</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Strintzis</surname>
<given-names>M.G.</given-names>
</name>
</person-group>
<article-title>A gesture recognition system using 3D data</article-title>
<source>Proceedings of the First International Symposium on 3D Data Processing Visualization and Transmission</source>
<conf-loc>Padova, Italy</conf-loc>
<conf-date>19–21 June 2002</conf-date>
<fpage>190</fpage>
<lpage>193</lpage>
</element-citation>
</ref>
<ref id="B37-sensors-15-29853">
<label>37.</label>
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Ferris</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Turk</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Raskar</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Tan</surname>
<given-names>K.H.</given-names>
</name>
<name>
<surname>Ohashi</surname>
<given-names>G.</given-names>
</name>
</person-group>
<article-title>Recognition of Isolated Fingerspelling Gestures Using Depth Edges</article-title>
<source>Real-Time Vision for Human-Computer Interaction</source>
<publisher-name>Springer US</publisher-name>
<publisher-loc>New York, NY, USA</publisher-loc>
<year>2005</year>
</element-citation>
</ref>
<ref id="B38-sensors-15-29853">
<label>38.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Rusu</surname>
<given-names>R.B.</given-names>
</name>
<name>
<surname>Brandski</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Thibaux</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Hsu</surname>
<given-names>J.</given-names>
</name>
</person-group>
<article-title>Fast 3D Recognition and Pose Using the Viewpoint Feature Histogram</article-title>
<source>Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)</source>
<conf-loc>Taipei, Taiwan</conf-loc>
<conf-date>18–22 October 2010</conf-date>
<fpage>2155</fpage>
<lpage>2162</lpage>
</element-citation>
</ref>
<ref id="B39-sensors-15-29853">
<label>39.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Rusu</surname>
<given-names>R.B.</given-names>
</name>
<name>
<surname>Blodow</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Beetz</surname>
<given-names>M.</given-names>
</name>
</person-group>
<article-title>Fast Point Feature Histograms (FPFH) for 3D Registration</article-title>
<source>Proceedings of the International Conference on Robotics and Automation (ICRA)</source>
<conf-loc>Kobe, Japan</conf-loc>
<conf-date>12–17 May 2009</conf-date>
<fpage>3212</fpage>
<lpage>3217</lpage>
</element-citation>
</ref>
<ref id="B40-sensors-15-29853">
<label>40.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Mateo</surname>
<given-names>C.M.</given-names>
</name>
<name>
<surname>Gil</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Torres</surname>
<given-names>F.</given-names>
</name>
</person-group>
<article-title>Visual perception for the 3D recognition of geometric pieces in robotic manipulation</article-title>
<source>Int. J. Adv. Manuf. Technol.</source>
<year>2015</year>
<pub-id pub-id-type="doi">10.1007/s00170-015-7708-8</pub-id>
</element-citation>
</ref>
<ref id="B41-sensors-15-29853">
<label>41.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kuremoto</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Obayashi</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Kobayashi</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Feng</surname>
<given-names>L.-B.</given-names>
</name>
</person-group>
<article-title>Instruction learning systems for partner robots</article-title>
<source>Adv. Robot. Model. Control Appl.</source>
<year>2012</year>
<volume>8</volume>
<fpage>149</fpage>
<lpage>170</lpage>
</element-citation>
</ref>
</ref-list>
</back>
</pmc>
</record>

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/HapticV1/Data/Pmc/Curation
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 000388 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/Pmc/Curation/biblio.hfd -nk 000388 | SxmlIndent | more

To add a link to this page in the Wicri network

{{Explor lien
   |wiki=    Ticri/CIDE
   |area=    HapticV1
   |flux=    Pmc
   |étape=   Curation
   |type=    RBID
   |clé=     PMC:4721773
   |texte=   Control and Guidance of Low-Cost Robots via Gesture Perception for Monitoring Activities in the Home
}}

To generate wiki pages

HfdIndexSelect -h $EXPLOR_AREA/Data/Pmc/Curation/RBID.i   -Sk "pubmed:26690448" \
       | HfdSelect -Kh $EXPLOR_AREA/Data/Pmc/Curation/biblio.hfd   \
       | NlmPubMed2Wicri -a HapticV1 

Wicri

This area was generated with Dilib version V0.6.23.
Data generation: Mon Jun 13 01:09:46 2016. Site generation: Wed Mar 6 09:54:07 2024