Exploration server on haptic devices

Warning: this site is under development!
Warning: this site is generated automatically from raw corpora.
The information has therefore not been validated.

Vision-Based Pose Estimation for Robot-Mediated Hand Telerehabilitation

Internal identifier: 000005 (Pmc/Checkpoint); previous: 000004; next: 000006

Authors: Giuseppe Airò Farulla; Daniele Pianu; Marco Cempini; Mario Cortese; Ludovico O. Russo; Marco Indaco; Roberto Nerino; Antonio Chimienti; Calogero M. Oddo; Nicola Vitiello

Source:

RBID: PMC:4801584

Abstract

Vision-based Pose Estimation (VPE) represents a non-invasive solution allowing smooth and natural interaction between a human user and a robotic system, without requiring complex calibration procedures. Moreover, VPE interfaces are gaining momentum as they are highly intuitive, such that they can be used even by untrained personnel (e.g., a generic caregiver) in delicate tasks such as rehabilitation exercises. In this paper, we present a novel master–slave setup for hand telerehabilitation with an intuitive and simple interface for the remote control of a wearable hand exoskeleton, named HX. While rehabilitative exercises are performed, the master unit evaluates the 3D positions of a human operator’s hand joints in real-time using only an RGB-D camera, and remotely commands the slave exoskeleton. Within the slave unit, the exoskeleton replicates the hand movements, and an external grip sensor records the interaction forces, which are fed back to the operator-therapist, allowing direct real-time assessment of the rehabilitative task. Experimental data collected with an operator and six volunteers are provided to show the feasibility and performance of the proposed system. The results demonstrate that, using our system, the operator was able to directly control the volunteers’ hand movements.


Url:
DOI: 10.3390/s16020208
PubMed: 26861333
PubMed Central: 4801584


Affiliations:



The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">Vision-Based Pose Estimation for Robot-Mediated Hand Telerehabilitation</title>
<author>
<name sortKey="Air Farulla, Giuseppe" sort="Air Farulla, Giuseppe" uniqKey="Air Farulla G" first="Giuseppe" last="Air Farulla">Giuseppe Air Farulla</name>
<affiliation>
<nlm:aff id="af1-sensors-16-00208">Department of Control and Computer Engineering, Politecnico di Torino, Corso Duca degli Abruzzi 24, Turin 10129, Italy;
<email>ludovico.russo@polito.it</email>
(L.O.R.);
<email>marco.indaco@polito.it</email>
(M.I.)</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Pianu, Daniele" sort="Pianu, Daniele" uniqKey="Pianu D" first="Daniele" last="Pianu">Daniele Pianu</name>
<affiliation>
<nlm:aff id="af2-sensors-16-00208">Institute of Electronics, Computer and Telecommunication Engineering, National Research Council, Corso Duca degli Abruzzi 24, Turin 10129, Italy;
<email>daniele.pianu@ieiit.cnr.it</email>
(D.P.);
<email>roberto.nerino@ieiit.cnr.it</email>
(R.N.);
<email>antonio.chimienti@ieiit.cnr.it</email>
(A.C.)</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Cempini, Marco" sort="Cempini, Marco" uniqKey="Cempini M" first="Marco" last="Cempini">Marco Cempini</name>
<affiliation>
<nlm:aff id="af3-sensors-16-00208">The BioRobotics Institute, Scuola Superiore Sant’Anna, viale Rinaldo Piaggio 34, Pontedera 56025, Italy;
<email>m.cempini@sssup.it</email>
(M.Ce.);
<email>m.cortese@sssup.it</email>
(M.Co.);
<email>oddoc@sssup.it</email>
(C.M.O.);
<email>n.vitiello@sssup.it</email>
(N.V.)</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Cortese, Mario" sort="Cortese, Mario" uniqKey="Cortese M" first="Mario" last="Cortese">Mario Cortese</name>
<affiliation>
<nlm:aff id="af3-sensors-16-00208">The BioRobotics Institute, Scuola Superiore Sant’Anna, viale Rinaldo Piaggio 34, Pontedera 56025, Italy;
<email>m.cempini@sssup.it</email>
(M.Ce.);
<email>m.cortese@sssup.it</email>
(M.Co.);
<email>oddoc@sssup.it</email>
(C.M.O.);
<email>n.vitiello@sssup.it</email>
(N.V.)</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Russo, Ludovico O" sort="Russo, Ludovico O" uniqKey="Russo L" first="Ludovico O." last="Russo">Ludovico O. Russo</name>
<affiliation>
<nlm:aff id="af1-sensors-16-00208">Department of Control and Computer Engineering, Politecnico di Torino, Corso Duca degli Abruzzi 24, Turin 10129, Italy;
<email>ludovico.russo@polito.it</email>
(L.O.R.);
<email>marco.indaco@polito.it</email>
(M.I.)</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Indaco, Marco" sort="Indaco, Marco" uniqKey="Indaco M" first="Marco" last="Indaco">Marco Indaco</name>
<affiliation>
<nlm:aff id="af1-sensors-16-00208">Department of Control and Computer Engineering, Politecnico di Torino, Corso Duca degli Abruzzi 24, Turin 10129, Italy;
<email>ludovico.russo@polito.it</email>
(L.O.R.);
<email>marco.indaco@polito.it</email>
(M.I.)</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Nerino, Roberto" sort="Nerino, Roberto" uniqKey="Nerino R" first="Roberto" last="Nerino">Roberto Nerino</name>
<affiliation>
<nlm:aff id="af2-sensors-16-00208">Institute of Electronics, Computer and Telecommunication Engineering, National Research Council, Corso Duca degli Abruzzi 24, Turin 10129, Italy;
<email>daniele.pianu@ieiit.cnr.it</email>
(D.P.);
<email>roberto.nerino@ieiit.cnr.it</email>
(R.N.);
<email>antonio.chimienti@ieiit.cnr.it</email>
(A.C.)</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Chimienti, Antonio" sort="Chimienti, Antonio" uniqKey="Chimienti A" first="Antonio" last="Chimienti">Antonio Chimienti</name>
<affiliation>
<nlm:aff id="af2-sensors-16-00208">Institute of Electronics, Computer and Telecommunication Engineering, National Research Council, Corso Duca degli Abruzzi 24, Turin 10129, Italy;
<email>daniele.pianu@ieiit.cnr.it</email>
(D.P.);
<email>roberto.nerino@ieiit.cnr.it</email>
(R.N.);
<email>antonio.chimienti@ieiit.cnr.it</email>
(A.C.)</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Oddo, Calogero M" sort="Oddo, Calogero M" uniqKey="Oddo C" first="Calogero M." last="Oddo">Calogero M. Oddo</name>
<affiliation>
<nlm:aff id="af3-sensors-16-00208">The BioRobotics Institute, Scuola Superiore Sant’Anna, viale Rinaldo Piaggio 34, Pontedera 56025, Italy;
<email>m.cempini@sssup.it</email>
(M.Ce.);
<email>m.cortese@sssup.it</email>
(M.Co.);
<email>oddoc@sssup.it</email>
(C.M.O.);
<email>n.vitiello@sssup.it</email>
(N.V.)</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Vitiello, Nicola" sort="Vitiello, Nicola" uniqKey="Vitiello N" first="Nicola" last="Vitiello">Nicola Vitiello</name>
<affiliation>
<nlm:aff id="af3-sensors-16-00208">The BioRobotics Institute, Scuola Superiore Sant’Anna, viale Rinaldo Piaggio 34, Pontedera 56025, Italy;
<email>m.cempini@sssup.it</email>
(M.Ce.);
<email>m.cortese@sssup.it</email>
(M.Co.);
<email>oddoc@sssup.it</email>
(C.M.O.);
<email>n.vitiello@sssup.it</email>
(N.V.)</nlm:aff>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PMC</idno>
<idno type="pmid">26861333</idno>
<idno type="pmc">4801584</idno>
<idno type="url">http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4801584</idno>
<idno type="RBID">PMC:4801584</idno>
<idno type="doi">10.3390/s16020208</idno>
<date when="2016">2016</date>
<idno type="wicri:Area/Pmc/Corpus">000573</idno>
<idno type="wicri:Area/Pmc/Curation">000573</idno>
<idno type="wicri:Area/Pmc/Checkpoint">000005</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a" type="main">Vision-Based Pose Estimation for Robot-Mediated Hand Telerehabilitation</title>
<author>
<name sortKey="Air Farulla, Giuseppe" sort="Air Farulla, Giuseppe" uniqKey="Air Farulla G" first="Giuseppe" last="Air Farulla">Giuseppe Air Farulla</name>
<affiliation>
<nlm:aff id="af1-sensors-16-00208">Department of Control and Computer Engineering, Politecnico di Torino, Corso Duca degli Abruzzi 24, Turin 10129, Italy;
<email>ludovico.russo@polito.it</email>
(L.O.R.);
<email>marco.indaco@polito.it</email>
(M.I.)</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Pianu, Daniele" sort="Pianu, Daniele" uniqKey="Pianu D" first="Daniele" last="Pianu">Daniele Pianu</name>
<affiliation>
<nlm:aff id="af2-sensors-16-00208">Institute of Electronics, Computer and Telecommunication Engineering, National Research Council, Corso Duca degli Abruzzi 24, Turin 10129, Italy;
<email>daniele.pianu@ieiit.cnr.it</email>
(D.P.);
<email>roberto.nerino@ieiit.cnr.it</email>
(R.N.);
<email>antonio.chimienti@ieiit.cnr.it</email>
(A.C.)</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Cempini, Marco" sort="Cempini, Marco" uniqKey="Cempini M" first="Marco" last="Cempini">Marco Cempini</name>
<affiliation>
<nlm:aff id="af3-sensors-16-00208">The BioRobotics Institute, Scuola Superiore Sant’Anna, viale Rinaldo Piaggio 34, Pontedera 56025, Italy;
<email>m.cempini@sssup.it</email>
(M.Ce.);
<email>m.cortese@sssup.it</email>
(M.Co.);
<email>oddoc@sssup.it</email>
(C.M.O.);
<email>n.vitiello@sssup.it</email>
(N.V.)</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Cortese, Mario" sort="Cortese, Mario" uniqKey="Cortese M" first="Mario" last="Cortese">Mario Cortese</name>
<affiliation>
<nlm:aff id="af3-sensors-16-00208">The BioRobotics Institute, Scuola Superiore Sant’Anna, viale Rinaldo Piaggio 34, Pontedera 56025, Italy;
<email>m.cempini@sssup.it</email>
(M.Ce.);
<email>m.cortese@sssup.it</email>
(M.Co.);
<email>oddoc@sssup.it</email>
(C.M.O.);
<email>n.vitiello@sssup.it</email>
(N.V.)</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Russo, Ludovico O" sort="Russo, Ludovico O" uniqKey="Russo L" first="Ludovico O." last="Russo">Ludovico O. Russo</name>
<affiliation>
<nlm:aff id="af1-sensors-16-00208">Department of Control and Computer Engineering, Politecnico di Torino, Corso Duca degli Abruzzi 24, Turin 10129, Italy;
<email>ludovico.russo@polito.it</email>
(L.O.R.);
<email>marco.indaco@polito.it</email>
(M.I.)</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Indaco, Marco" sort="Indaco, Marco" uniqKey="Indaco M" first="Marco" last="Indaco">Marco Indaco</name>
<affiliation>
<nlm:aff id="af1-sensors-16-00208">Department of Control and Computer Engineering, Politecnico di Torino, Corso Duca degli Abruzzi 24, Turin 10129, Italy;
<email>ludovico.russo@polito.it</email>
(L.O.R.);
<email>marco.indaco@polito.it</email>
(M.I.)</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Nerino, Roberto" sort="Nerino, Roberto" uniqKey="Nerino R" first="Roberto" last="Nerino">Roberto Nerino</name>
<affiliation>
<nlm:aff id="af2-sensors-16-00208">Institute of Electronics, Computer and Telecommunication Engineering, National Research Council, Corso Duca degli Abruzzi 24, Turin 10129, Italy;
<email>daniele.pianu@ieiit.cnr.it</email>
(D.P.);
<email>roberto.nerino@ieiit.cnr.it</email>
(R.N.);
<email>antonio.chimienti@ieiit.cnr.it</email>
(A.C.)</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Chimienti, Antonio" sort="Chimienti, Antonio" uniqKey="Chimienti A" first="Antonio" last="Chimienti">Antonio Chimienti</name>
<affiliation>
<nlm:aff id="af2-sensors-16-00208">Institute of Electronics, Computer and Telecommunication Engineering, National Research Council, Corso Duca degli Abruzzi 24, Turin 10129, Italy;
<email>daniele.pianu@ieiit.cnr.it</email>
(D.P.);
<email>roberto.nerino@ieiit.cnr.it</email>
(R.N.);
<email>antonio.chimienti@ieiit.cnr.it</email>
(A.C.)</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Oddo, Calogero M" sort="Oddo, Calogero M" uniqKey="Oddo C" first="Calogero M." last="Oddo">Calogero M. Oddo</name>
<affiliation>
<nlm:aff id="af3-sensors-16-00208">The BioRobotics Institute, Scuola Superiore Sant’Anna, viale Rinaldo Piaggio 34, Pontedera 56025, Italy;
<email>m.cempini@sssup.it</email>
(M.Ce.);
<email>m.cortese@sssup.it</email>
(M.Co.);
<email>oddoc@sssup.it</email>
(C.M.O.);
<email>n.vitiello@sssup.it</email>
(N.V.)</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Vitiello, Nicola" sort="Vitiello, Nicola" uniqKey="Vitiello N" first="Nicola" last="Vitiello">Nicola Vitiello</name>
<affiliation>
<nlm:aff id="af3-sensors-16-00208">The BioRobotics Institute, Scuola Superiore Sant’Anna, viale Rinaldo Piaggio 34, Pontedera 56025, Italy;
<email>m.cempini@sssup.it</email>
(M.Ce.);
<email>m.cortese@sssup.it</email>
(M.Co.);
<email>oddoc@sssup.it</email>
(C.M.O.);
<email>n.vitiello@sssup.it</email>
(N.V.)</nlm:aff>
</affiliation>
</author>
</analytic>
<series>
<title level="j">Sensors (Basel, Switzerland)</title>
<idno type="eISSN">1424-8220</idno>
<imprint>
<date when="2016">2016</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass></textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">
<p>Vision-based Pose Estimation (VPE) represents a non-invasive solution allowing smooth and natural interaction between a human user and a robotic system, without requiring complex calibration procedures. Moreover, VPE interfaces are gaining momentum as they are highly intuitive, such that they can be used even by untrained personnel (e.g., a generic caregiver) in delicate tasks such as rehabilitation exercises. In this paper, we present a novel master–slave setup for hand telerehabilitation with an intuitive and simple interface for the remote control of a wearable hand exoskeleton, named HX. While rehabilitative exercises are performed, the master unit evaluates the 3D positions of a human operator’s hand joints in real-time using only an RGB-D camera, and remotely commands the slave exoskeleton. Within the slave unit, the exoskeleton replicates the hand movements, and an external grip sensor records the interaction forces, which are fed back to the operator-therapist, allowing direct real-time assessment of the rehabilitative task. Experimental data collected with an operator and six volunteers are provided to show the feasibility and performance of the proposed system. The results demonstrate that, using our system, the operator was able to directly control the volunteers’ hand movements.</p>
</div>
</front>
<back>
<div1 type="bibliography">
<listBibl>
<biblStruct>
<analytic>
<author>
<name sortKey="Dobkin, B" uniqKey="Dobkin B">B. Dobkin</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Fox, S" uniqKey="Fox S">S. Fox</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Popescu, V G" uniqKey="Popescu V">V.G. Popescu</name>
</author>
<author>
<name sortKey="Burdea, G C" uniqKey="Burdea G">G.C. Burdea</name>
</author>
<author>
<name sortKey="Bouzit, M" uniqKey="Bouzit M">M. Bouzit</name>
</author>
<author>
<name sortKey="Hentz, V R" uniqKey="Hentz V">V.R. Hentz</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Burdea, G" uniqKey="Burdea G">G. Burdea</name>
</author>
<author>
<name sortKey="Popescu, V" uniqKey="Popescu V">V. Popescu</name>
</author>
<author>
<name sortKey="Hentz, V" uniqKey="Hentz V">V. Hentz</name>
</author>
<author>
<name sortKey="Colbert, K" uniqKey="Colbert K">K. Colbert</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Giansanti, D" uniqKey="Giansanti D">D. Giansanti</name>
</author>
<author>
<name sortKey="Morelli, S" uniqKey="Morelli S">S. Morelli</name>
</author>
<author>
<name sortKey="Maccioni, G" uniqKey="Maccioni G">G. Maccioni</name>
</author>
<author>
<name sortKey="Macellari, V" uniqKey="Macellari V">V. Macellari</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Holden, M K" uniqKey="Holden M">M.K. Holden</name>
</author>
<author>
<name sortKey="Dyar, T A" uniqKey="Dyar T">T.A. Dyar</name>
</author>
<author>
<name sortKey="Dayan Cimadoro, L" uniqKey="Dayan Cimadoro L">L. Dayan-Cimadoro</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Holden, M K" uniqKey="Holden M">M.K. Holden</name>
</author>
<author>
<name sortKey="Dyar, T A" uniqKey="Dyar T">T.A. Dyar</name>
</author>
<author>
<name sortKey="Schwamm, L" uniqKey="Schwamm L">L. Schwamm</name>
</author>
<author>
<name sortKey="Bizzi, E" uniqKey="Bizzi E">E. Bizzi</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Holden, M K" uniqKey="Holden M">M.K. Holden</name>
</author>
<author>
<name sortKey="Dyar, T" uniqKey="Dyar T">T. Dyar</name>
</author>
<author>
<name sortKey="Schwamm, L" uniqKey="Schwamm L">L. Schwamm</name>
</author>
<author>
<name sortKey="Bizzi, E" uniqKey="Bizzi E">E. Bizzi</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Song, A" uniqKey="Song A">A. Song</name>
</author>
<author>
<name sortKey="Pan, L" uniqKey="Pan L">L. Pan</name>
</author>
<author>
<name sortKey="Xu, G" uniqKey="Xu G">G. Xu</name>
</author>
<author>
<name sortKey="Li, H" uniqKey="Li H">H. Li</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Basteris, A" uniqKey="Basteris A">A. Basteris</name>
</author>
<author>
<name sortKey="Nijenhuis, S M" uniqKey="Nijenhuis S">S.M. Nijenhuis</name>
</author>
<author>
<name sortKey="Stienen, A" uniqKey="Stienen A">A. Stienen</name>
</author>
<author>
<name sortKey="Buurke, J H" uniqKey="Buurke J">J.H. Buurke</name>
</author>
<author>
<name sortKey="Prange, G B" uniqKey="Prange G">G.B. Prange</name>
</author>
<author>
<name sortKey="Amirabdollahian, F" uniqKey="Amirabdollahian F">F. Amirabdollahian</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Pan, L" uniqKey="Pan L">L. Pan</name>
</author>
<author>
<name sortKey="Song, A" uniqKey="Song A">A. Song</name>
</author>
<author>
<name sortKey="Xu, G" uniqKey="Xu G">G. Xu</name>
</author>
<author>
<name sortKey="Li, H" uniqKey="Li H">H. Li</name>
</author>
<author>
<name sortKey="Xu, B" uniqKey="Xu B">B. Xu</name>
</author>
<author>
<name sortKey="Xiong, P" uniqKey="Xiong P">P. Xiong</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Brochard, S" uniqKey="Brochard S">S. Brochard</name>
</author>
<author>
<name sortKey="Robertson, J" uniqKey="Robertson J">J. Robertson</name>
</author>
<author>
<name sortKey="Medee, B" uniqKey="Medee B">B. Medee</name>
</author>
<author>
<name sortKey="Remy Neris, O" uniqKey="Remy Neris O">O. Remy-Neris</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Balasubramanian, S" uniqKey="Balasubramanian S">S. Balasubramanian</name>
</author>
<author>
<name sortKey="Klein, J" uniqKey="Klein J">J. Klein</name>
</author>
<author>
<name sortKey="Burdet, E" uniqKey="Burdet E">E. Burdet</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mehrholz, J" uniqKey="Mehrholz J">J. Mehrholz</name>
</author>
<author>
<name sortKey="Platz, T" uniqKey="Platz T">T. Platz</name>
</author>
<author>
<name sortKey="Kugler, J" uniqKey="Kugler J">J. Kugler</name>
</author>
<author>
<name sortKey="Pohl, M" uniqKey="Pohl M">M. Pohl</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Reinkensmeyer, J" uniqKey="Reinkensmeyer J">J. Reinkensmeyer</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Burgar, C G" uniqKey="Burgar C">C.G. Burgar</name>
</author>
<author>
<name sortKey="Lum, P S" uniqKey="Lum P">P.S. Lum</name>
</author>
<author>
<name sortKey="Shor, P C" uniqKey="Shor P">P.C. Shor</name>
</author>
<author>
<name sortKey="Van Der Loos, H M" uniqKey="Van Der Loos H">H.M. Van der Loos</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Song, G" uniqKey="Song G">G. Song</name>
</author>
<author>
<name sortKey="Guo, S" uniqKey="Guo S">S. Guo</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Peng, Q" uniqKey="Peng Q">Q. Peng</name>
</author>
<author>
<name sortKey="Park, H S" uniqKey="Park H">H.S. Park</name>
</author>
<author>
<name sortKey="Zhang, L Q" uniqKey="Zhang L">L.Q. Zhang</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Duong, M D" uniqKey="Duong M">M.D. Duong</name>
</author>
<author>
<name sortKey="Terashima, K" uniqKey="Terashima K">K. Terashima</name>
</author>
<author>
<name sortKey="Miyoshi, T" uniqKey="Miyoshi T">T. Miyoshi</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Cortese, M" uniqKey="Cortese M">M. Cortese</name>
</author>
<author>
<name sortKey="Cempini, M" uniqKey="Cempini M">M. Cempini</name>
</author>
<author>
<name sortKey="De Almeida Ribeiro, P R" uniqKey="De Almeida Ribeiro P">P.R. de Almeida Ribeiro</name>
</author>
<author>
<name sortKey="Soekadar, S R" uniqKey="Soekadar S">S.R. Soekadar</name>
</author>
<author>
<name sortKey="Carrozza, M C" uniqKey="Carrozza M">M.C. Carrozza</name>
</author>
<author>
<name sortKey="Vitiello, N" uniqKey="Vitiello N">N. Vitiello</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ant N, D" uniqKey="Ant N D">D. Antón</name>
</author>
<author>
<name sortKey="Goni, A" uniqKey="Goni A">A. Goni</name>
</author>
<author>
<name sortKey="Illarramendi, A" uniqKey="Illarramendi A">A. Illarramendi</name>
</author>
<author>
<name sortKey="Torres Unda, J J" uniqKey="Torres Unda J">J.J. Torres-Unda</name>
</author>
<author>
<name sortKey="Seco, J" uniqKey="Seco J">J. Seco</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Russo, L O" uniqKey="Russo L">L.O. Russo</name>
</author>
<author>
<name sortKey="Air Farulla, G" uniqKey="Air Farulla G">G. Airò Farulla</name>
</author>
<author>
<name sortKey="Pianu, D" uniqKey="Pianu D">D. Pianu</name>
</author>
<author>
<name sortKey="Salgarella, A R" uniqKey="Salgarella A">A.R. Salgarella</name>
</author>
<author>
<name sortKey="Controzzi, M" uniqKey="Controzzi M">M. Controzzi</name>
</author>
<author>
<name sortKey="Cipriani, C" uniqKey="Cipriani C">C. Cipriani</name>
</author>
<author>
<name sortKey="Oddo, C M" uniqKey="Oddo C">C.M. Oddo</name>
</author>
<author>
<name sortKey="Geraci, C" uniqKey="Geraci C">C. Geraci</name>
</author>
<author>
<name sortKey="Rosa, S" uniqKey="Rosa S">S. Rosa</name>
</author>
<author>
<name sortKey="Indaco, M" uniqKey="Indaco M">M. Indaco</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Shotton, J" uniqKey="Shotton J">J. Shotton</name>
</author>
<author>
<name sortKey="Sharp, T" uniqKey="Sharp T">T. Sharp</name>
</author>
<author>
<name sortKey="Kipman, A" uniqKey="Kipman A">A. Kipman</name>
</author>
<author>
<name sortKey="Fitzgibbon, A" uniqKey="Fitzgibbon A">A. Fitzgibbon</name>
</author>
<author>
<name sortKey="Finocchio, M" uniqKey="Finocchio M">M. Finocchio</name>
</author>
<author>
<name sortKey="Blake, A" uniqKey="Blake A">A. Blake</name>
</author>
<author>
<name sortKey="Cook, M" uniqKey="Cook M">M. Cook</name>
</author>
<author>
<name sortKey="Moore, R" uniqKey="Moore R">R. Moore</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Keskin, C" uniqKey="Keskin C">C. Keskin</name>
</author>
<author>
<name sortKey="K Rac, F" uniqKey="K Rac F">F. Kıraç</name>
</author>
<author>
<name sortKey="Kara, Y E" uniqKey="Kara Y">Y.E. Kara</name>
</author>
<author>
<name sortKey="Akarun, L" uniqKey="Akarun L">L. Akarun</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Oikonomidis, I" uniqKey="Oikonomidis I">I. Oikonomidis</name>
</author>
<author>
<name sortKey="Kyriazis, N" uniqKey="Kyriazis N">N. Kyriazis</name>
</author>
<author>
<name sortKey="Argyros, A A" uniqKey="Argyros A">A.A. Argyros</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Wachs, J P" uniqKey="Wachs J">J.P. Wachs</name>
</author>
<author>
<name sortKey="Kolsch, M" uniqKey="Kolsch M">M. Kölsch</name>
</author>
<author>
<name sortKey="Stern, H" uniqKey="Stern H">H. Stern</name>
</author>
<author>
<name sortKey="Edan, Y" uniqKey="Edan Y">Y. Edan</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kennedy, J" uniqKey="Kennedy J">J. Kennedy</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Breiman, L" uniqKey="Breiman L">L. Breiman</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Cempini, M" uniqKey="Cempini M">M. Cempini</name>
</author>
<author>
<name sortKey="Cortese, M" uniqKey="Cortese M">M. Cortese</name>
</author>
<author>
<name sortKey="Vitiello, N" uniqKey="Vitiello N">N. Vitiello</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Donati, M" uniqKey="Donati M">M. Donati</name>
</author>
<author>
<name sortKey="Vitiello, N" uniqKey="Vitiello N">N. Vitiello</name>
</author>
<author>
<name sortKey="De Rossi, S M M" uniqKey="De Rossi S">S.M.M. De Rossi</name>
</author>
<author>
<name sortKey="Lenzi, T" uniqKey="Lenzi T">T. Lenzi</name>
</author>
<author>
<name sortKey="Crea, S" uniqKey="Crea S">S. Crea</name>
</author>
<author>
<name sortKey="Persichetti, A" uniqKey="Persichetti A">A. Persichetti</name>
</author>
<author>
<name sortKey="Giovacchini, F" uniqKey="Giovacchini F">F. Giovacchini</name>
</author>
<author>
<name sortKey="Koopman, B" uniqKey="Koopman B">B. Koopman</name>
</author>
<author>
<name sortKey="Podobnik, J" uniqKey="Podobnik J">J. Podobnik</name>
</author>
<author>
<name sortKey="Munih, M" uniqKey="Munih M">M. Munih</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Fischler, M A" uniqKey="Fischler M">M.A. Fischler</name>
</author>
<author>
<name sortKey="Bolles, R C" uniqKey="Bolles R">R.C. Bolles</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Comaniciu, D" uniqKey="Comaniciu D">D. Comaniciu</name>
</author>
<author>
<name sortKey="Meer, P" uniqKey="Meer P">P. Meer</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lin, J" uniqKey="Lin J">J. Lin</name>
</author>
<author>
<name sortKey="Wu, Y" uniqKey="Wu Y">Y. Wu</name>
</author>
<author>
<name sortKey="Huang, T S" uniqKey="Huang T">T.S. Huang</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lenzi, T" uniqKey="Lenzi T">T. Lenzi</name>
</author>
<author>
<name sortKey="Vitiello, N" uniqKey="Vitiello N">N. Vitiello</name>
</author>
<author>
<name sortKey="De Rossi, S M M" uniqKey="De Rossi S">S.M.M. De Rossi</name>
</author>
<author>
<name sortKey="Persichetti, A" uniqKey="Persichetti A">A. Persichetti</name>
</author>
<author>
<name sortKey="Giovacchini, F" uniqKey="Giovacchini F">F. Giovacchini</name>
</author>
<author>
<name sortKey="Roccella, S" uniqKey="Roccella S">S. Roccella</name>
</author>
<author>
<name sortKey="Vecchi, F" uniqKey="Vecchi F">F. Vecchi</name>
</author>
<author>
<name sortKey="Carrozza, M C" uniqKey="Carrozza M">M.C. Carrozza</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Sari, M" uniqKey="Sari M">M. Šarić</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mohammadi, A" uniqKey="Mohammadi A">A. Mohammadi</name>
</author>
<author>
<name sortKey="Tavakoli, M" uniqKey="Tavakoli M">M. Tavakoli</name>
</author>
<author>
<name sortKey="Marquez, H J" uniqKey="Marquez H">H.J. Marquez</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Tropp, H" uniqKey="Tropp H">H. Tropp</name>
</author>
<author>
<name sortKey="Alaranta, H" uniqKey="Alaranta H">H. Alaranta</name>
</author>
<author>
<name sortKey="Renstrom, P" uniqKey="Renstrom P">P. Renstrom</name>
</author>
</analytic>
</biblStruct>
</listBibl>
</div1>
</back>
</TEI>
<pmc article-type="research-article">
<pmc-dir>properties open_access</pmc-dir>
<front>
<journal-meta>
<journal-id journal-id-type="nlm-ta">Sensors (Basel)</journal-id>
<journal-id journal-id-type="iso-abbrev">Sensors (Basel)</journal-id>
<journal-id journal-id-type="publisher-id">sensors</journal-id>
<journal-title-group>
<journal-title>Sensors (Basel, Switzerland)</journal-title>
</journal-title-group>
<issn pub-type="epub">1424-8220</issn>
<publisher>
<publisher-name>MDPI</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="pmid">26861333</article-id>
<article-id pub-id-type="pmc">4801584</article-id>
<article-id pub-id-type="doi">10.3390/s16020208</article-id>
<article-id pub-id-type="publisher-id">sensors-16-00208</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Article</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>Vision-Based Pose Estimation for Robot-Mediated Hand Telerehabilitation</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name>
<surname>Airò Farulla</surname>
<given-names>Giuseppe</given-names>
</name>
<xref ref-type="aff" rid="af1-sensors-16-00208">1</xref>
<xref rid="c1-sensors-16-00208" ref-type="corresp">*</xref>
<xref ref-type="author-notes" rid="fn1-sensors-16-00208"></xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Pianu</surname>
<given-names>Daniele</given-names>
</name>
<xref ref-type="aff" rid="af2-sensors-16-00208">2</xref>
<xref ref-type="author-notes" rid="fn1-sensors-16-00208"></xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Cempini</surname>
<given-names>Marco</given-names>
</name>
<xref ref-type="aff" rid="af3-sensors-16-00208">3</xref>
<xref ref-type="author-notes" rid="fn1-sensors-16-00208"></xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Cortese</surname>
<given-names>Mario</given-names>
</name>
<xref ref-type="aff" rid="af3-sensors-16-00208">3</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Russo</surname>
<given-names>Ludovico O.</given-names>
</name>
<xref ref-type="aff" rid="af1-sensors-16-00208">1</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Indaco</surname>
<given-names>Marco</given-names>
</name>
<xref ref-type="aff" rid="af1-sensors-16-00208">1</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Nerino</surname>
<given-names>Roberto</given-names>
</name>
<xref ref-type="aff" rid="af2-sensors-16-00208">2</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Chimienti</surname>
<given-names>Antonio</given-names>
</name>
<xref ref-type="aff" rid="af2-sensors-16-00208">2</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Oddo</surname>
<given-names>Calogero M.</given-names>
</name>
<xref ref-type="aff" rid="af3-sensors-16-00208">3</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Vitiello</surname>
<given-names>Nicola</given-names>
</name>
<xref ref-type="aff" rid="af3-sensors-16-00208">3</xref>
</contrib>
</contrib-group>
<contrib-group>
<contrib contrib-type="editor">
<name>
<surname>Shen</surname>
<given-names>Yajing</given-names>
</name>
<role>Academic Editor</role>
</contrib>
</contrib-group>
<aff id="af1-sensors-16-00208">
<label>1</label>
Department of Control and Computer Engineering, Politecnico di Torino, Corso Duca degli Abruzzi 24, Turin 10129, Italy;
<email>ludovico.russo@polito.it</email>
(L.O.R.);
<email>marco.indaco@polito.it</email>
(M.I.)</aff>
<aff id="af2-sensors-16-00208">
<label>2</label>
Institute of Electronics, Computer and Telecommunication Engineering, National Research Council, Corso Duca degli Abruzzi 24, Turin 10129, Italy;
<email>daniele.pianu@ieiit.cnr.it</email>
(D.P.);
<email>roberto.nerino@ieiit.cnr.it</email>
(R.N.);
<email>antonio.chimienti@ieiit.cnr.it</email>
(A.C.)</aff>
<aff id="af3-sensors-16-00208">
<label>3</label>
The BioRobotics Institute, Scuola Superiore Sant’Anna, viale Rinaldo Piaggio 34, Pontedera 56025, Italy;
<email>m.cempini@sssup.it</email>
(M.Ce.);
<email>m.cortese@sssup.it</email>
(M.Co.);
<email>oddoc@sssup.it</email>
(C.M.O.);
<email>n.vitiello@sssup.it</email>
(N.V.)</aff>
<author-notes>
<corresp id="c1-sensors-16-00208">
<label>*</label>
Correspondence:
<email>giuseppe.airofarulla@polito.it</email>
; Tel.: +39-11-90-7191; Fax: +39-11-90-7195</corresp>
<fn id="fn1-sensors-16-00208">
<label></label>
<p>These authors contributed equally to this work.</p>
</fn>
</author-notes>
<pub-date pub-type="epub">
<day>05</day>
<month>2</month>
<year>2016</year>
</pub-date>
<pub-date pub-type="collection">
<month>2</month>
<year>2016</year>
</pub-date>
<volume>16</volume>
<issue>2</issue>
<elocation-id>208</elocation-id>
<history>
<date date-type="received">
<day>16</day>
<month>12</month>
<year>2015</year>
</date>
<date date-type="accepted">
<day>29</day>
<month>1</month>
<year>2016</year>
</date>
</history>
<permissions>
<copyright-statement>© 2016 by the authors; licensee MDPI, Basel, Switzerland.</copyright-statement>
<copyright-year>2016</copyright-year>
<license>
<license-p>
<pmc-comment>CREATIVE COMMONS</pmc-comment>
This article is an open access article distributed under the terms and conditions of the Creative Commons by Attribution (CC-BY) license (
<ext-link ext-link-type="uri" xlink:href="http://creativecommons.org/licenses/by/4.0/">http://creativecommons.org/licenses/by/4.0/</ext-link>
).</license-p>
</license>
</permissions>
<abstract>
<p>Vision-based Pose Estimation (VPE) represents a non-invasive solution allowing smooth and natural interaction between a human user and a robotic system, without requiring complex calibration procedures. Moreover, VPE interfaces are gaining momentum as they are highly intuitive, such that they can be used even by untrained personnel (e.g., a generic caregiver) in delicate tasks such as rehabilitation exercises. In this paper, we present a novel master–slave setup for hand telerehabilitation with an intuitive and simple interface for the remote control of a wearable hand exoskeleton, named HX. While rehabilitative exercises are performed, the master unit evaluates the 3D positions of a human operator’s hand joints in real-time using only an RGB-D camera, and remotely commands the slave exoskeleton. Within the slave unit, the exoskeleton replicates the hand movements, and an external grip sensor records the interaction forces, which are fed back to the operator-therapist, allowing direct real-time assessment of the rehabilitative task. Experimental data collected with an operator and six volunteers are provided to show the feasibility and performance of the proposed system. The results demonstrate that, using our system, the operator was able to directly control the volunteers’ hand movements.</p>
</abstract>
<kwd-group>
<kwd>hand telerehabilitation</kwd>
<kwd>hand exoskeleton</kwd>
<kwd>motion tracking</kwd>
<kwd>upper limb rehabilitation</kwd>
</kwd-group>
</article-meta>
</front>
<body>
<sec id="sec1-sensors-16-00208">
<title>1. Introduction</title>
<p>Traditional rehabilitation is performed in a one-to-one fashion, namely one therapist (or sometimes several) working with one patient, leading to high personnel and management costs, especially for demanding patients such as those with brain or post-surgery injuries. Due to the high hospitalization costs, all these patients are leaving clinics and returning to their homes sooner than in the past [
<xref rid="B1-sensors-16-00208" ref-type="bibr">1</xref>
], before their rehabilitative program is finished. These patients can greatly benefit from telerehabilitation equipment, which can provide remote assistance and relief without the burden of going to the clinic on a daily basis. On the other hand, therapists can surely benefit from non-invasive systems capable of acquiring information about their movements, which can then be sent to the patient (or even to many patients), possibly in real-time to allow direct control; modern vision-based techniques offer interesting opportunities in this direction. The possibility of providing high-quality rehabilitation programs regardless of the patient’s physical location by leveraging vision is thus certainly attractive.</p>
<sec id="sec1dot1-sensors-16-00208">
<title>1.1. Telerehabilitation</title>
<p>Typical telerehabilitation systems require patients to perform exercises following instructions given by a domestic PC, often in the form of a Virtual Reality (VR) environment or a video game, while their motion kinematics are captured through sensorized tools, such as gloves and grippers, for off-line evaluation by the operator. Relevant examples are the computer-based biomechanical evaluation tool Eval by Greenleaf Medical (Portola Valley, CA, USA) [
<xref rid="B2-sensors-16-00208" ref-type="bibr">2</xref>
], and the system proposed by Popescu
<italic>et al.</italic>
[
<xref rid="B3-sensors-16-00208" ref-type="bibr">3</xref>
,
<xref rid="B4-sensors-16-00208" ref-type="bibr">4</xref>
]. The latter system comprises a VR environment, the force-feedback glove “Rutgers Masters”, and a series of networked PCs: one at the patient’s home, the others recording the rehabilitation performance in the clinical facility. The authors of [
<xref rid="B5-sensors-16-00208" ref-type="bibr">5</xref>
] proposed a commercial sensorized hand glove, the HumanGlove produced by Humanware s.r.l. (Pisa, Italy), for functional assessment of both the hand and the fingers. While these systems prove the relevance of telerehabilitation, they cannot provide feedback on the mobilization of the impaired articulations.</p>
<p>The case is different for telerehabilitation systems that establish direct Real-Time (RT) links between the operator and the patient. An example is the system by Holden
<italic>et al.</italic>
[
<xref rid="B6-sensors-16-00208" ref-type="bibr">6</xref>
,
<xref rid="B7-sensors-16-00208" ref-type="bibr">7</xref>
,
<xref rid="B8-sensors-16-00208" ref-type="bibr">8</xref>
], where a training-by-imitation rehabilitation strategy is enforced through a virtual avatar on the patient’s screen under the supervision of an operator. A drawback of such a system is that it does not allow direct intervention on the subject’s movements, but only corrections on the teaching unit.</p>
<p>Recent studies have suggested that repetitive, long-duration robot-assisted rehabilitation is helpful for upper-limb functional recovery [
<xref rid="B9-sensors-16-00208" ref-type="bibr">9</xref>
,
<xref rid="B10-sensors-16-00208" ref-type="bibr">10</xref>
,
<xref rid="B11-sensors-16-00208" ref-type="bibr">11</xref>
,
<xref rid="B12-sensors-16-00208" ref-type="bibr">12</xref>
,
<xref rid="B13-sensors-16-00208" ref-type="bibr">13</xref>
,
<xref rid="B14-sensors-16-00208" ref-type="bibr">14</xref>
,
<xref rid="B15-sensors-16-00208" ref-type="bibr">15</xref>
,
<xref rid="B16-sensors-16-00208" ref-type="bibr">16</xref>
]. This becomes especially evident when the robotic system can be directly controlled by the operator, who can tune the therapy to the patient’s current conditions and actual residual abilities. Many studies have dealt with the development of systems providing at-home rehabilitation therapy through robotic devices without sacrificing quality of care: such systems are referred to as
<italic>master–slave</italic>
setups. Master–slave setups have also been introduced in mission-critical environments, such as the field of tele-surgery [
<xref rid="B17-sensors-16-00208" ref-type="bibr">17</xref>
]. The master unit records the intended motion of the operator, who guides the patient along a desired motion pattern, adjusting the task parameters as needed on the basis of the feedback received from the slave unit.</p>
<p>The system for the home-treatment of elbow hypertonia presented by Peng
<italic>et al.</italic>
in [
<xref rid="B18-sensors-16-00208" ref-type="bibr">18</xref>
], and the system for upper-limb function recovery by Duong
<italic>et al.</italic>
[
<xref rid="B19-sensors-16-00208" ref-type="bibr">19</xref>
] are examples of valuable
<italic>master–slave</italic>
setups. In an earlier application of the hand exoskeleton (HX) [
<xref rid="B20-sensors-16-00208" ref-type="bibr">20</xref>
], the master unit consists of the commercial sensorized glove Acceleglove (AnthroTronix, Silver Spring, MD, USA), worn by the operator to track his movements, and of a custom post-processing Java routine providing RT records of the intended rehabilitation exercises.</p>
</sec>
<sec id="sec1dot2-sensors-16-00208">
<title>1.2. Vision-Based Hand Pose Estimation</title>
<p>Despite being a natural option for mastering hand telerehabilitation, glove-based interfaces are typically expensive and may encumber the therapist’s movements, thus compromising the efficacy of the protocol. In addition, they typically require calibration prior to each use. A valid alternative comes from modern motion-tracking technologies, which instead offer many advantages in terms of usability, reduced costs, and learning time, and do not require calibration procedures. In this field, KiReS (Kinect Rehabilitation System) [
<xref rid="B21-sensors-16-00208" ref-type="bibr">21</xref>
] is a full-body telerehabilitation system based on the Kinect, which implements markerless video tracking of user movements. While performing the exercises, users are shown two 3D avatars: one demonstrates the correct movements to follow, and the other represents the user and his or her movements as captured by the Kinect. Markerless video tracking is a viable solution for the master unit in a master–slave setup too, as the operator can perform an exercise in front of the camera while a robotic device guides the patient through its correct execution.</p>
<p>We propose a novel paradigm based on modern Vision-based Pose Estimation (VPE) and hand-tracking techniques. VPE has played a leading role in the field of Human–Robot Interaction (HRI), and has already demonstrated its applicability to the remote control of robotic actuators [
<xref rid="B22-sensors-16-00208" ref-type="bibr">22</xref>
]. With the availability of consumer-grade RGB-D (RGB-Depth) sensors, VPE algorithms have gained momentum. State-of-the-art solutions based on RGB-D cameras [
<xref rid="B23-sensors-16-00208" ref-type="bibr">23</xref>
,
<xref rid="B24-sensors-16-00208" ref-type="bibr">24</xref>
,
<xref rid="B25-sensors-16-00208" ref-type="bibr">25</xref>
] for real-time full-body or hand tracking and pose estimation achieve impressive results. Moreover, VPE interfaces are intuitive enough to be used even by untrained personnel (e.g., a generic caregiver) [
<xref rid="B26-sensors-16-00208" ref-type="bibr">26</xref>
].</p>
<p>In the context of pose estimation using RGB-D sensors (more generally, within the field of VPE), we can distinguish between two main approaches:
<italic>model-based</italic>
(also known as
<italic>generative</italic>
) and
<italic>appearance-based</italic>
(also called
<italic>discriminative</italic>
) ones.</p>
<p>Algorithms following the model-based approach search, within the space of possible hand poses, for the one that minimizes a
<italic>dissimilarity</italic>
function with respect to the hand as seen by the RGB-D sensor. This search is often expressed as a non-linear optimization problem. Particle Swarm Optimization (PSO) [
<xref rid="B27-sensors-16-00208" ref-type="bibr">27</xref>
] is a well-established algorithm specifically developed to optimize continuous non-linear functions. It is commonly employed in model-based approaches [
<xref rid="B25-sensors-16-00208" ref-type="bibr">25</xref>
] to guarantee convergence within reasonable time. Nevertheless, these approaches show limitations in reaching RT performance on consumer hardware.</p>
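
To make the model-based approach concrete, the following is a minimal sketch of a global-best PSO loop in the spirit of [27]. The pose dimensionality, the parameter values, and the quadratic stand-in for the render-and-compare dissimilarity function are illustrative assumptions, not the implementation of [25].

import numpy as np

def pso_minimize(dissimilarity, dim, n_particles=32, n_iters=50,
                 w=0.72, c1=1.49, c2=1.49, bounds=(-1.0, 1.0)):
    # Minimize `dissimilarity` over a dim-dimensional pose space.
    lo, hi = bounds
    pos = np.random.uniform(lo, hi, (n_particles, dim))  # candidate hand poses
    vel = np.zeros_like(pos)
    pbest = pos.copy()                                   # per-particle best pose
    pbest_val = np.array([dissimilarity(p) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()             # swarm-wide best pose
    for _ in range(n_iters):
        r1, r2 = np.random.rand(2, n_particles, 1)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.array([dissimilarity(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

# Toy usage: 26 degrees of freedom is a common (assumed) hand parameterization.
best_pose, best_score = pso_minimize(lambda p: np.sum(p ** 2), dim=26)
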
<p>Appearance-based approaches rely on machine learning algorithms specifically trained to estimate hand poses from run-time observations. Here, training represents the most computationally demanding task, but it is performed only once, off-line. These approaches thus easily achieve RT performance. A previous study by Shotton
<italic>et al.</italic>
[
<xref rid="B23-sensors-16-00208" ref-type="bibr">23</xref>
] lays the foundations for the current state of the art: the authors perform a per-part classification of the human body using a Random Forest (RF) classifier [
<xref rid="B28-sensors-16-00208" ref-type="bibr">28</xref>
] and simple per-pixel features computed on data acquired from a Kinect sensor. Human body parts are then clustered to approximate skeleton joints. Keskin
<italic>et al.</italic>
[
<xref rid="B24-sensors-16-00208" ref-type="bibr">24</xref>
] successfully applied the same approach to the hand, which is segmented from the rest of the body, divided into parts, and clustered to approximate its joints.</p>
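
The clustering step just described can be illustrated with a short sketch that reduces per-pixel part posteriors to one 3D joint proposal per part. For brevity this sketch uses probability-weighted centroids, a simplification of the mean-shift mode finding used in the cited works; names and array shapes are assumptions.

import numpy as np

def estimate_joints(points_3d, part_probs, min_mass=1e-6):
    # points_3d:  (N, 3) back-projected pixel coordinates in camera space.
    # part_probs: (N, C) per-pixel posterior over the C hand parts.
    # Returns (C, 3) probability-weighted centroids; NaN where a part got no mass.
    mass = part_probs.sum(axis=0)                 # (C,) total mass per part
    centroids = part_probs.T @ points_3d          # (C, 3) weighted coordinate sums
    centroids /= np.maximum(mass, min_mass)[:, None]
    centroids[mass < min_mass] = np.nan
    return centroids

# Toy usage, matching the 22 hand parts described later in this paper.
pts = np.random.rand(1000, 3)
probs = np.random.dirichlet(np.ones(22), size=1000)
joints = estimate_joints(pts, probs)              # (22, 3)
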
</sec>
<sec id="sec1dot3-sensors-16-00208">
<title>1.3. Vision-Based Hand Telerehabilitation</title>
<p>Currently, only a few works address master–slave hand telerehabilitation, and very few leverage vision-based techniques. To the best of our knowledge, no master–slave platform for hand telerehabilitation based on RGB-D sensors and vision-based algorithms has been presented. In this paper, we introduce a mechatronic master–slave setup for RT hand telerehabilitation, exploiting partial results from [
<xref rid="B20-sensors-16-00208" ref-type="bibr">20</xref>
], but with the master guidance based on a VPE hand-tracking algorithm. The proposed setup combines three independent subsystems, enabling important features for a telerehabilitation protocol: (i) a VPE system through which the operator is able to dynamically drive patients’ hands along a desired exercise; (ii) the multi-joint robotic hand exoskeleton HX [
<xref rid="B29-sensors-16-00208" ref-type="bibr">29</xref>
], driving the subject’s hand and placed under the direct control of the operator; (iii) a sensorized graspable object [
<xref rid="B30-sensors-16-00208" ref-type="bibr">30</xref>
], which detects the fingertip grasp force during the manipulation exercises and feeds it back to the operator. The operator receives as additional feedback the measured positions of the HX joints, to reliably assess the quality and percentage of completion of the exercises.</p>
<p>We argue that the use of a VPE-based master–slave system is attractive, since such systems provide the following advantages: (i) reduced costs and stress for the patients without compromising the quality and accuracy of the rehabilitation; (ii) reduced discomfort and time occupation for the therapist, whose hand movements are not encumbered and can be freely shaped, as well as recorded to be later sent to the slave interface, thus ensuring that a patient performs exactly the same exercise several times or that different patients follow the same therapy; (iii) measurable and precise updates about patients’ performance provided to the operator, who can tune the rehabilitation therapy to the needs and behaviors of each single patient.</p>
<p>The main aim of this work is to demonstrate, through an early validation stage, that vision-based robot-mediated hand telerehabilitation is actually feasible. Experimental results achieved with an operator and six healthy volunteers prove the overall feasibility of our system and the stability of the VPE-based telerehabilitation setup across different speed settings. In addition, the experiments show that no user had difficulty or discomfort in wearing the exoskeleton and performing the exercises, and that the operator always had direct RT control over their movements.</p>
<p>The remainder of the paper is organized as follows:
<xref ref-type="sec" rid="sec2-sensors-16-00208">Section 2</xref>
discusses the theoretical approach and practical implementation of our solution, as well as a description of the integrated technologies;
<xref ref-type="sec" rid="sec3-sensors-16-00208">Section 3</xref>
discusses results derived from our experiments; and finally
<xref ref-type="sec" rid="sec4-sensors-16-00208">Section 4</xref>
concludes the paper and presents planned future activities.</p>
</sec>
</sec>
<sec id="sec2-sensors-16-00208">
<title>2. Experimental Section</title>
<sec id="sec2dot1-sensors-16-00208">
<title>2.1. System Overview</title>
<p>This Section introduces the master–slave telerehabilitation mechatronic apparatus, composed of three main subsystems: (i) the master unit, which consists of a consumer RGB-D camera and a VPE algorithm; and the slave unit, which consists of (ii) a powered robotic hand exoskeleton and (iii) a sensorized object recording gripping forces when handled. The master and slave units are connected by means of a bidirectional communication link. The master unit records, processes, and conveys information about the operator’s hand, and sends RT motion commands to the slave unit, which mobilizes the patient’s hand. The patient can grasp the sensorized object with his hand moved by the robotic exoskeleton. The master unit receives as feedback both the measured positions from the robotic exoskeleton and recordings of the detected grasping forces.</p>
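
As a concrete illustration of this bidirectional link, the following hypothetical sketch shows the two message structures such a setup could exchange; all field names and units are illustrative assumptions, not the actual wire protocol of the system, and only the 30 fps rate is taken from the camera described below.

from dataclasses import dataclass
from typing import List

@dataclass
class MasterCommand:
    # Master -> slave, sent at the camera rate (30 fps in this setup).
    timestamp: float
    joint_angles_rad: List[float]     # desired finger-joint angles from the VPE

@dataclass
class SlaveFeedback:
    # Slave -> master, sent after each control cycle.
    timestamp: float
    measured_angles_rad: List[float]  # measured HX joint positions
    grip_forces_n: List[float]        # fingertip forces from the sensorized object
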
<sec id="sec2dot1dot1-sensors-16-00208">
<title>2.1.1. Master Unit</title>
<p>Here, we propose a custom implementation of the per-part hand classification framework presented in [
<xref rid="B24-sensors-16-00208" ref-type="bibr">24</xref>
], adapting it to our hand telerehabilitation task. The next paragraphs briefly introduce the RF classifier and present our custom implementation.</p>
<sec>
<title>Random Forests</title>
<p>Random Forests [
<xref rid="B28-sensors-16-00208" ref-type="bibr">28</xref>
] are ensembles of decision-tree classifiers, each trained on a random subset of the features and training data. Intermediate nodes store a feature-threshold pair
<inline-formula>
<mml:math id="mm1">
<mml:mfenced separators="" open="(" close=")">
<mml:mi>F</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>τ</mml:mi>
</mml:mfenced>
</mml:math>
</inline-formula>
learned during the training phase. Starting from the root node, for each input datum
<inline-formula>
<mml:math id="mm2">
<mml:mi mathvariant="bold">x</mml:mi>
</mml:math>
</inline-formula>
, the feature response
<inline-formula>
<mml:math id="mm3">
<mml:mrow>
<mml:mi>F</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mi mathvariant="bold">x</mml:mi>
</mml:mfenced>
</mml:mrow>
</mml:math>
</inline-formula>
is compared to the threshold
<italic>τ</italic>
; the datum is then forwarded to one of the child nodes according to the comparison result. Given a tree
<italic>T</italic>
, the comparison is repeated until a leaf node is reached, where a probability distribution
<inline-formula>
<mml:math id="mm4">
<mml:mrow>
<mml:msub>
<mml:mi>P</mml:mi>
<mml:mi>T</mml:mi>
</mml:msub>
<mml:mfenced separators="" open="(" close=")">
<mml:mi>c</mml:mi>
<mml:mo>|</mml:mo>
<mml:mi mathvariant="bold">x</mml:mi>
</mml:mfenced>
</mml:mrow>
</mml:math>
</inline-formula>
over all possible classes
<inline-formula>
<mml:math id="mm5">
<mml:mi mathvariant="script">C</mml:mi>
</mml:math>
</inline-formula>
is stored. The datum
<inline-formula>
<mml:math id="mm6">
<mml:mi mathvariant="bold">x</mml:mi>
</mml:math>
</inline-formula>
descends every tree in the forest and the final probability distribution
<inline-formula>
<mml:math id="mm7">
<mml:mrow>
<mml:mi>P</mml:mi>
<mml:mo>(</mml:mo>
<mml:mi>c</mml:mi>
<mml:mo>|</mml:mo>
<mml:mi mathvariant="bold">x</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:math>
</inline-formula>
is given by the average of the
<inline-formula>
<mml:math id="mm8">
<mml:mrow>
<mml:msub>
<mml:mi>P</mml:mi>
<mml:mi>T</mml:mi>
</mml:msub>
<mml:mfenced separators="" open="(" close=")">
<mml:mi>c</mml:mi>
<mml:mo>|</mml:mo>
<mml:mi mathvariant="bold">x</mml:mi>
</mml:mfenced>
</mml:mrow>
</mml:math>
</inline-formula>
of the reached leaves.</p>
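
A minimal sketch of the descent and averaging just described, assuming each tree is stored as nested dictionaries with callable features; the data layout is an illustrative assumption, not the actual implementation.

def classify_pixel(forest, x):
    # Internal nodes: {"feature": callable, "tau": threshold, "left": ..., "right": ...}
    # Leaves: {"dist": [P_T(c | x) for each class c]}
    dists = []
    for root in forest:
        node = root
        while "dist" not in node:          # descend until a leaf is reached
            branch = "left" if node["feature"](x) < node["tau"] else "right"
            node = node[branch]
        dists.append(node["dist"])
    n_classes = len(dists[0])
    # P(c | x) is the average of the per-tree leaf distributions.
    return [sum(d[c] for d in dists) / len(dists) for c in range(n_classes)]

# Toy usage: a single decision stump with two classes.
stump = {"feature": lambda x: x[0], "tau": 0.5,
         "left": {"dist": [0.9, 0.1]}, "right": {"dist": [0.2, 0.8]}}
print(classify_pixel([stump], x=(0.3,)))   # -> [0.9, 0.1]
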
<p>In the context of per-part hand classification, the input for the RF is represented by the pixels of the depthmaps acquired from the RGB-D camera. For each pixel, the RF per-class posterior represents the probability that it belongs to a given hand part. We distinguish between 22 different hand parts, centered on the finger joints, fingertips, wrist, and palm center (see
<xref ref-type="fig" rid="sensors-16-00208-f001">Figure 1</xref>
a).</p>
</sec>
<sec>
<title>Features</title>
<p>The same feature presented in [
<xref rid="B23-sensors-16-00208" ref-type="bibr">23</xref>
] is computed per-pixel both during RF training and at run-time. More specifically, given a depthmap
<italic>D</italic>
, a pixel
<inline-formula>
<mml:math id="mm9">
<mml:mi mathvariant="bold">x</mml:mi>
</mml:math>
</inline-formula>
and a pair of offsets
<inline-formula>
<mml:math id="mm10">
<mml:mi mathvariant="bold">u</mml:mi>
</mml:math>
</inline-formula>
and
<inline-formula>
<mml:math id="mm11">
<mml:mi mathvariant="bold">v</mml:mi>
</mml:math>
</inline-formula>
, the feature is computed as
<disp-formula id="FD1-sensors-16-00208">
<label>(1)</label>
<mml:math id="mm12">
<mml:mrow>
<mml:msub>
<mml:mi>F</mml:mi>
<mml:mrow>
<mml:mi mathvariant="bold">u</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi mathvariant="bold">v</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mi>D</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mi>x</mml:mi>
<mml:mo>+</mml:mo>
<mml:mfrac>
<mml:mi mathvariant="bold">u</mml:mi>
<mml:mrow>
<mml:mi>D</mml:mi>
<mml:mo>(</mml:mo>
<mml:mi mathvariant="bold">x</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mfrac>
</mml:mfenced>
<mml:mo>-</mml:mo>
<mml:mi>D</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mi>x</mml:mi>
<mml:mo>+</mml:mo>
<mml:mfrac>
<mml:mi mathvariant="bold">v</mml:mi>
<mml:mrow>
<mml:mi>D</mml:mi>
<mml:mo>(</mml:mo>
<mml:mi mathvariant="bold">x</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mfrac>
</mml:mfenced>
</mml:mrow>
</mml:math>
</disp-formula>
</p>
<p>By definition,
<inline-formula>
<mml:math id="mm13">
<mml:msub>
<mml:mi>F</mml:mi>
<mml:mrow>
<mml:mi mathvariant="bold">u</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi mathvariant="bold">v</mml:mi>
</mml:mrow>
</mml:msub>
</mml:math>
</inline-formula>
is invariant with respect to in-plane translations and, due to the normalization by depth
<inline-formula>
<mml:math id="mm14">
<mml:mrow>
<mml:mi>D</mml:mi>
<mml:mo>(</mml:mo>
<mml:mi mathvariant="bold">x</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:math>
</inline-formula>
, to depth variations. Furthermore, since few arithmetic and memory access operations are involved, it requires limited computational resources.</p>
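
A direct transcription of Equation (1) as a sketch; returning a large constant depth for probes falling outside the image follows the convention of [23], while the (row, col) pixel layout, depthmap units, and offset magnitudes in the toy usage are assumptions.

import numpy as np

def depth_feature(D, x, u, v, background=1e4):
    # F_{u,v}(x) = D(x + u / D(x)) - D(x + v / D(x)), Equation (1).
    def probe(offset):
        p = np.round(np.asarray(x) + np.asarray(offset) / D[x]).astype(int)
        inside = 0 <= p[0] < D.shape[0] and 0 <= p[1] < D.shape[1]
        return D[p[0], p[1]] if inside else background
    return probe(u) - probe(v)

# Toy usage: a flat synthetic depthmap at 800 mm; both probes land 10 px away.
D = np.full((240, 320), 800.0)
f = depth_feature(D, x=(120, 160), u=(8000.0, 0.0), v=(0.0, 8000.0))  # -> 0.0
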
<fig id="sensors-16-00208-f001" position="float">
<label>Figure 1</label>
<caption>
<p>Model (
<bold>a</bold>
) and input data (
<bold>b</bold>
,
<bold>c</bold>
) used for training the RF classifier.</p>
</caption>
<graphic xlink:href="sensors-16-00208-g001"></graphic>
</fig>
</sec>
<sec>
<title>Training</title>
<p>The training set consists of segmented depthmap–label pairs representing all the poses of interest for the application. An example of a depthmap–label pair is shown in
<xref ref-type="fig" rid="sensors-16-00208-f001">Figure 1</xref>
b,c. As in [
<xref rid="B23-sensors-16-00208" ref-type="bibr">23</xref>
], we resort to a synthetic generator of the training-set pairs by means of a 3D mesh model and a set of rendering routines: starting from a sub-set of representative hand poses, we extend the training set by generating intermediate poses through key-frame animation.</p>
<p>The training phase aims at finding, for each node
<italic>n</italic>
, the most discriminative pair
<inline-formula>
<mml:math id="mm15">
<mml:mfenced separators="" open="(" close=")">
<mml:mi>F</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>τ</mml:mi>
</mml:mfenced>
</mml:math>
</inline-formula>
,
<italic>i.e.</italic>
, the feature-threshold pair which maximizes the Information Gain
<italic>I</italic>
:
<disp-formula id="FD2-sensors-16-00208">
<label>(2)</label>
<mml:math id="mm16">
<mml:mrow>
<mml:mi>I</mml:mi>
<mml:mo>=</mml:mo>
<mml:mi>H</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:msub>
<mml:mi mathvariant="script">S</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>-</mml:mo>
<mml:munder>
<mml:mo></mml:mo>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo></mml:mo>
<mml:mi>L</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>R</mml:mi>
</mml:mrow>
</mml:munder>
<mml:mfrac>
<mml:mrow>
<mml:mrow>
<mml:mo>|</mml:mo>
</mml:mrow>
<mml:msub>
<mml:mi mathvariant="script">S</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mo>|</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:mo>|</mml:mo>
</mml:mrow>
<mml:msub>
<mml:mi mathvariant="script">S</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mo>|</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:mfrac>
<mml:mi>H</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:msub>
<mml:mi mathvariant="script">S</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</disp-formula>
where
<inline-formula>
<mml:math id="mm17">
<mml:msub>
<mml:mi mathvariant="script">S</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
</mml:math>
</inline-formula>
is the subset of pixels that reaches the current node,
<inline-formula>
<mml:math id="mm18">
<mml:msub>
<mml:mi mathvariant="script">S</mml:mi>
<mml:mi>L</mml:mi>
</mml:msub>
</mml:math>
</inline-formula>
and
<inline-formula>
<mml:math id="mm19">
<mml:msub>
<mml:mi mathvariant="script">S</mml:mi>
<mml:mi>R</mml:mi>
</mml:msub>
</mml:math>
</inline-formula>
are the two subsets obtained by the split against the threshold
<italic>τ</italic>
, and
<inline-formula>
<mml:math id="mm20">
<mml:mrow>
<mml:mi>H</mml:mi>
<mml:mo>(</mml:mo>
<mml:mi mathvariant="script">S</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:math>
</inline-formula>
is the Shannon entropy with respect to classes for the subset
<inline-formula>
<mml:math id="mm21">
<mml:mi mathvariant="script">S</mml:mi>
</mml:math>
</inline-formula>
. After the best pair is chosen, the training procedure recurses on the left and right child nodes until a stopping criterion is met (e.g., maximum tree depth or minimum size of
<inline-formula>
<mml:math id="mm22">
<mml:msub>
<mml:mi mathvariant="script">S</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
</mml:math>
</inline-formula>
). For each leaf node, the probability
<inline-formula>
<mml:math id="mm23">
<mml:mrow>
<mml:msub>
<mml:mi>P</mml:mi>
<mml:mi>T</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>c</mml:mi>
<mml:mo>|</mml:mo>
<mml:mi mathvariant="bold">x</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>
is computed as the ratio between the number of pixels of class
<italic>c</italic>
and the total number of pixels that reach the leaf.</p>
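<p>The split selection of Equation (2) can be sketched as follows; the exhaustive search over candidate features and thresholds is a simplification of the randomized sampling typically used when training RFs.</p>
<preformat>
import numpy as np

def entropy(labels):
    """Shannon entropy H(S) of a set of per-pixel class labels."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def best_split(features, labels, thresholds):
    """Return the (feature, threshold) pair maximizing the
    information gain I of Equation (2). `features` holds precomputed
    depth-feature responses, one column per candidate feature and one
    row per pixel reaching the node."""
    features = np.asarray(features)
    labels = np.asarray(labels)
    n = float(len(labels))
    h_node = entropy(labels)
    best = (None, None, -np.inf)
    for f in range(features.shape[1]):
        for tau in thresholds:
            left = features[:, f] < tau
            if left.all() or not left.any():
                continue  # degenerate split, skip
            gain = h_node - sum(
                mask.sum() / n * entropy(labels[mask])
                for mask in (left, ~left))
            if gain > best[2]:
                best = (f, tau, gain)
    return best
</preformat>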
<p>It is worth noting that our experimental setup includes only two classes of movements and that the positioning of the hand with respect to the camera is highly constrained (see
<xref ref-type="sec" rid="sec2dot1dot3-sensors-16-00208">Section 2.1.3</xref>
). This considerably reduces the set of possible hand poses, consequently reducing the size of the training set and the depth of the decision tree, drastically shortening the training time while still achieving satisfactory results. Furthermore, since the class of movement is known
<italic>a priori</italic>
, a single RF is trained for each specific exercise. Our custom implementation of the training algorithm, which leverages modern GPU architectures, achieves a training time of around 40 h on relatively inexpensive hardware. The same training parameter set as in [
<xref rid="B22-sensors-16-00208" ref-type="bibr">22</xref>
] was used (except for the tree depth, which was set to 16).</p>
</sec>
<sec>
<title>Operator Hand Motion Estimation</title>
<p>The master unit consists of a commercial RGB-D camera (Softkinetic Depthsense DS325) suspended 50 cm over a table, so that it records the operator’s hand movements from a top view. This placement was chosen to minimize the risk of self-occlusions among the operator’s fingers, which represent the main obstacle to the hand-tracking task. The camera is linked to a laptop (Intel Core i7 3630QM, Nvidia GeForce 650M) running our custom implementation of the RF classifier. The laptop reads the depth input stream from the camera at a rate of 30 fps, the highest working frequency allowed by the camera. A preprocessing phase isolates the operator’s hand (foreground) from the table (background): pixels belonging to the table plane (within a 5 mm tolerance) are removed via the RANSAC algorithm [
<xref rid="B31-sensors-16-00208" ref-type="bibr">31</xref>
], while the others are retained for further processing. Once the hand pixels have been segmented, their depth information is processed by the RF classifier, which recognizes the 22 different parts of the hand. Then, the joint, fingertip, palm and wrist positions are approximated by applying the Mean Shift clustering algorithm [
<xref rid="B32-sensors-16-00208" ref-type="bibr">32</xref>
] on the hand sub-parts.</p>
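<p>The clustering step can be sketched with an off-the-shelf Mean Shift implementation; the bandwidth value and the choice of the dominant mode as the position estimate are illustrative assumptions, not the authors’ exact procedure.</p>
<preformat>
import numpy as np
from sklearn.cluster import MeanShift

def part_position(points, part_labels, part_id, bandwidth=0.02):
    """Approximate the 3D position of one hand part (joint, fingertip,
    palm or wrist) as the centre of the dominant Mean Shift mode among
    the points the RF assigned to that part."""
    points = np.asarray(points)
    part_labels = np.asarray(part_labels)
    part = points[part_labels == part_id]
    ms = MeanShift(bandwidth=bandwidth).fit(part)
    largest = np.bincount(ms.labels_).argmax()  # most populated mode
    return ms.cluster_centers_[largest]
</preformat>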
<p>Only five of these hand parts are retained for subsequent processing, as they are the only ones needed to compute the master command signal, as explained in
<xref ref-type="sec" rid="sec2dot1dot3-sensors-16-00208">Section 2.1.3</xref>
. Namely, these parts are the metacarpo-phalangeal (MCP) and the proximal- and distal-interphalangeal (PIP and DIP) joints and the fingertip of the index finger, and the thumb fingertip [
<xref rid="B33-sensors-16-00208" ref-type="bibr">33</xref>
].</p>
</sec>
</sec>
<sec id="sec2dot1dot2-sensors-16-00208">
<title>2.1.2. Slave Unit</title>
<p>The slave unit consists of the HX powered hand orthosis [
<xref rid="B29-sensors-16-00208" ref-type="bibr">29</xref>
], a mechatronic device composed of three modules (
<xref ref-type="fig" rid="sensors-16-00208-f002">Figure 2</xref>
): a bi-digital wearable exoskeleton for the active assistance of the index and thumb fingers; a remote actuation block driving the exoskeleton by means of a cable-sheath system; and an external control/power unit. The exoskeleton comprises four active degrees-of-motion (DoM), two for each finger: for the index finger, they are the MCP joint, and the PIP and DIP joints, under-actuated together; for the thumb, they are the under-actuated flexion/extension (f/e) of the MCP and IP joints, and the carpo-metacarpal (CMC) joint opposition. In the following, these four DoM are respectively addressed as MCP, P-DIP, MC-IP and CMC. Each DoM is driven by a DC motor, placed remotely through a bidirectional cable-sheath transmission in order to minimize the influence of weight and noise on the user. Once worn and tethered to the actuators, the exoskeleton is not back-drivable, and it imposes motion on the wearer’s fingers. HX can drive each finger along the prescribed motion with a constant force of 20 N at the fingertip, or equivalently distributed along the phalanx pads.</p>
<fig id="sensors-16-00208-f002" position="float">
<label>Figure 2</label>
<caption>
<p>HX while holding the sensorized object in a pinch (
<bold>a</bold>
) and lateral (
<bold>b</bold>
) grasping exercise. The DoMs of the HX device are: (1) the flexion/extension of the index MCP; (2) of the index P-DIP (under-actuated); (3) of the thumb MCP and IP (under-actuated) and (4) the CMC opposition. Other Degrees-of-Freedom (DoF), like thumb intra/extra rotation and the index abduction/adduction, are passive [
<xref rid="B29-sensors-16-00208" ref-type="bibr">29</xref>
]. The HX is used to grasp the sensorized object, whose squeezable soft pads provide force information on the basis of an optoelectronic deformation transduction [
<xref rid="B34-sensors-16-00208" ref-type="bibr">34</xref>
].</p>
</caption>
<graphic xlink:href="sensors-16-00208-g002"></graphic>
</fig>
<p>The slave unit also comprises a sensorized grasping object. It is a rectangular block (size 6 × 2.4 × 3 cm) of acrylic resin, with the widest faces covered by two pressure-sensitive pads, based on an opto-electronic sensing technology developed for measuring human-robot interaction forces in wearable rehabilitation robots [
<xref rid="B34-sensors-16-00208" ref-type="bibr">34</xref>
]. The sensorized object is grasped by squeezing two hollow bulk silicone structures (one per contact side, see
<xref ref-type="fig" rid="sensors-16-00208-f002">Figure 2</xref>
). These pads cover a Printed Circuit Board (PCB) hosting a pattern of paired light-sensitive elements. Each pair includes an LED emitter and a light-sensitive receiver. When the silicone is squeezed, the deformation obstructs the light reaching the receiver from the LED, and the voltage drop of each receiver relative to the corresponding emitter is characterized in order to recover the total normal displacement of the silicone pad. Previous work on the characterization of the sensor [
<xref rid="B30-sensors-16-00208" ref-type="bibr">30</xref>
] reported an overall repeatability of the sensors (taking into account error and hysteresis) of about 0.16 N.</p>
</sec>
<sec id="sec2dot1dot3-sensors-16-00208">
<title>2.1.3. Communication</title>
<p>The master and slave units are connected by a bidirectional communication link (UDP/IP). The communication protocol works at a 30 fps rate, in step with the master unit. Each frame, the master unit encodes the intended motor task within a single byte sent to the slave unit, and it can receive feedback from the slave unit about (i) the kinematic and kinetic state of the exoskeleton and (ii) the interaction forces with the sensorized object. A personal computer provides comprehensive RT information about both the master and slave systems. Communication is engineered to require the lowest possible bandwidth. Our implementation showed good performance: the delay between the master and the slave units never exceeded 100 ms, and no data packet was lost.</p>
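<p>A minimal sketch of the master-side link follows; the slave address, port and example payload are placeholders. UDP keeps the per-frame overhead at a single byte, and the slave-side rest-position fallback makes the lack of delivery guarantees acceptable.</p>
<preformat>
import socket

SLAVE_ADDR = ("192.168.0.2", 5005)  # placeholder address and port

def send_command(sock, payload):
    """Send the one-byte command frame to the slave unit; called at
    30 fps, in step with the camera acquisition rate."""
    sock.sendto(payload, SLAVE_ADDR)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_command(sock, bytes([0x32]))  # illustrative payload byte
</preformat>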
<p>To start the exercise, the operator has to place his hand under the camera. This positional constraint does not limit the exercises, since it was chosen to maximize both the operator’s comfort (he can rest his elbow on the table, raising just his hand) and the VPE accuracy (fingers’ self-occlusions are minimized). Once the hand is acquired, the algorithm estimates the hand joints of interest (example sequences are shown in
<xref ref-type="fig" rid="sensors-16-00208-f003">Figure 3</xref>
) and computes the percentage of completion of the exercise. For the pinch grasp, this percentage is related to the normalized distance between the index and thumb fingertips (see
<xref ref-type="fig" rid="sensors-16-00208-f004">Figure 4</xref>
a). For the lateral grasp, the percentage is related to the distance of the thumb fingertip along the normal of the plane containing the index MCP, PIP and DIP joints and fingertip (see
<xref ref-type="fig" rid="sensors-16-00208-f004">Figure 4</xref>
b). These measures were chosen as they are very fast to compute.</p>
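<p>A sketch of the two completion measures follows; the normalization by a fully-open reference distance and the plane fit via SVD are our assumptions for illustration.</p>
<preformat>
import numpy as np

def pinch_percentage(thumb_tip, index_tip, open_dist):
    """Pinch grasp: completion from the normalized thumb-index
    fingertip distance; open_dist is the distance measured with the
    hand fully open."""
    d = np.linalg.norm(np.asarray(thumb_tip) - np.asarray(index_tip))
    return 100.0 * min(1.0, max(0.0, 1.0 - d / open_dist))

def lateral_percentage(thumb_tip, index_points, open_dist):
    """Lateral grasp: completion from the thumb-tip distance along the
    normal of the plane fitted to the index MCP, PIP and DIP joints
    and fingertip."""
    pts = np.asarray(index_points, dtype=float)
    centred = pts - pts.mean(axis=0)
    normal = np.linalg.svd(centred)[2][-1]  # least-variance direction
    d = abs(np.dot(np.asarray(thumb_tip) - pts.mean(axis=0), normal))
    return 100.0 * min(1.0, max(0.0, 1.0 - d / open_dist))
</preformat>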
<fig id="sensors-16-00208-f003" position="float">
<label>Figure 3</label>
<caption>
<p>Pinch (
<bold>a–d</bold>
) and lateral (
<bold>e–h</bold>
) grasping sequences with superimposed fingertips as estimated by our VPE algorithm.</p>
</caption>
<graphic xlink:href="sensors-16-00208-g003"></graphic>
</fig>
<fig id="sensors-16-00208-f004" position="float">
<label>Figure 4</label>
<caption>
<p>Graphical illustration of the distances computed to evaluate the percentage of completion for the rehabilitative exercises: (
<bold>a</bold>
) pinch grasp; (
<bold>b</bold>
) lateral grasp. We used the 3D hand model from the libhand library [
<xref rid="B35-sensors-16-00208" ref-type="bibr">35</xref>
] for illustration purposes.</p>
</caption>
<graphic xlink:href="sensors-16-00208-g004"></graphic>
</fig>
</sec>
</sec>
<sec id="sec2dot2-sensors-16-00208">
<title>2.2. Master–Slave Control Strategy</title>
<p>The closure percentage
<italic>p</italic>
is conveyed to the slave unit in seven bits of the one-byte payload of a UDP packet, with the remaining bit encoding the grasp type (0 for pinch, 1 for lateral grasping). The slave unit controller commands the exoskeleton motors according to the message received from the network, so as to reach the continuously updated desired position. According to the desired grasp, the four DoM are coordinated differently [
<xref rid="B20-sensors-16-00208" ref-type="bibr">20</xref>
]: the set-point of the
<italic>i</italic>
-th DoM is commanded by computing
<disp-formula id="FD3-sensors-16-00208">
<label>(3)</label>
<mml:math id="mm24">
<mml:mrow>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>,</mml:mo>
<mml:mn>0</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:mi>p</mml:mi>
<mml:mfrac>
<mml:mrow>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>e</mml:mi>
<mml:mi>n</mml:mi>
<mml:mi>d</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>-</mml:mo>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>,</mml:mo>
<mml:mn>0</mml:mn>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mn>100</mml:mn>
</mml:mfrac>
</mml:mrow>
</mml:math>
</disp-formula>
where the maximum (
<inline-formula>
<mml:math id="mm25">
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>e</mml:mi>
<mml:mi>n</mml:mi>
<mml:mi>d</mml:mi>
</mml:mrow>
</mml:msub>
</mml:math>
</inline-formula>
) and initial (
<inline-formula>
<mml:math id="mm26">
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>,</mml:mo>
<mml:mn>0</mml:mn>
</mml:mrow>
</mml:msub>
</mml:math>
</inline-formula>
) values of the
<italic>i</italic>
-th joint opening
<inline-formula>
<mml:math id="mm27">
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
</mml:math>
</inline-formula>
are reported in
<xref ref-type="table" rid="sensors-16-00208-t001">Table 1</xref>
.</p>
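<p>The bit packing and the set-point computation of Equation (3) can be sketched as below, using the reference values of Table 1; the assignment of the grasp-type bit to the most significant position is an assumption.</p>
<preformat>
# Reference joint openings from Table 1, in degrees,
# ordered as (MCP, P-DIP, MC-IP, CMC).
X_END = {"pinch": (90, 60, 45, 75), "lateral": (75, 100, 65, 45)}
X_0 = (0, 0, 0, 0)  # both grasps start fully open

def encode_command(p, lateral):
    """Pack the closure percentage p (0-100) into seven bits and the
    grasp type into the remaining bit of the one-byte payload."""
    p = max(0, min(100, int(round(p))))
    return bytes([(int(lateral) << 7) | p])

def setpoints(p, grasp):
    """Equation (3): x_i = x_{i,0} + p * (x_{i,end} - x_{i,0}) / 100."""
    return [x0 + p * (xe - x0) / 100.0
            for x0, xe in zip(X_0, X_END[grasp])]

print(setpoints(50, "pinch"))  # -> [45.0, 30.0, 22.5, 37.5]
</preformat>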
<table-wrap id="sensors-16-00208-t001" position="float">
<object-id pub-id-type="pii">sensors-16-00208-t001_Table 1</object-id>
<label>Table 1</label>
<caption>
<p>Reference values for hand exoskeleton (HX) actuator motions.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th colspan="2" align="center" valign="middle" style="border-bottom:solid thin;border-top:solid thin" rowspan="1">Grasp Type</th>
<th align="center" valign="middle" style="border-bottom:solid thin;border-top:solid thin" rowspan="1" colspan="1">MCP (deg)</th>
<th align="center" valign="middle" style="border-bottom:solid thin;border-top:solid thin" rowspan="1" colspan="1">P-DIP (deg)</th>
<th align="center" valign="middle" style="border-bottom:solid thin;border-top:solid thin" rowspan="1" colspan="1">MC-IP (deg)</th>
<th align="center" valign="middle" style="border-bottom:solid thin;border-top:solid thin" rowspan="1" colspan="1">CMC (deg)</th>
</tr>
</thead>
<tbody>
<tr>
<td rowspan="2" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">Pinch</td>
<td align="center" valign="middle" rowspan="1" colspan="1">
<inline-formula>
<mml:math id="mm28">
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
</mml:math>
</inline-formula>
</td>
<td align="center" valign="middle" rowspan="1" colspan="1">0</td>
<td align="center" valign="middle" rowspan="1" colspan="1">0</td>
<td align="center" valign="middle" rowspan="1" colspan="1">0</td>
<td align="center" valign="middle" rowspan="1" colspan="1">0</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">
<inline-formula>
<mml:math id="mm29">
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mrow>
<mml:mi>e</mml:mi>
<mml:mi>n</mml:mi>
<mml:mi>d</mml:mi>
</mml:mrow>
</mml:msub>
</mml:math>
</inline-formula>
</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">90</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">60</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">45</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">75</td>
</tr>
<tr>
<td rowspan="2" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">Lateral</td>
<td align="center" valign="middle" rowspan="1" colspan="1">
<inline-formula>
<mml:math id="mm30">
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
</mml:math>
</inline-formula>
</td>
<td align="center" valign="middle" rowspan="1" colspan="1">0</td>
<td align="center" valign="middle" rowspan="1" colspan="1">0</td>
<td align="center" valign="middle" rowspan="1" colspan="1">0</td>
<td align="center" valign="middle" rowspan="1" colspan="1">0</td>
</tr>
<tr>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">
<inline-formula>
<mml:math id="mm31">
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mrow>
<mml:mi>e</mml:mi>
<mml:mi>n</mml:mi>
<mml:mi>d</mml:mi>
</mml:mrow>
</mml:msub>
</mml:math>
</inline-formula>
</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">75</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">100</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">65</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">45</td>
</tr>
</tbody>
</table>
</table-wrap>
</sec>
<sec id="sec2dot3-sensors-16-00208">
<title>2.3. Experimental Design and Methods</title>
<p>The experimental protocol consists of repeated sequences of finger “opening and closing” tasks, commanded by the operator of the master unit and executed on the subject’s hand by the slave exoskeleton. The experimental setup is illustrated in
<xref ref-type="fig" rid="sensors-16-00208-f005">Figure 5</xref>
.</p>
<p>In the experiments, we address two different kinds of grasps, typically used within rehabilitative exercises, both involving only the thumb and index fingers: the pinch and the lateral grasps (illustrated in
<xref ref-type="fig" rid="sensors-16-00208-f003">Figure 3</xref>
). The operator chooses which exercise to perform before activating the RGB-D camera and starting to acquire images.</p>
<p>The experimental protocol comprises, for each subject, two series of repetitions of both pinch and lateral grasps at different speeds self-selected by the operator (30 repetitions per series, roughly divided as 10 each for “slow”, “normal” and “fast” velocities): the first series mainly aimed at letting the subjects familiarize with the exoskeleton, while in the second series the subjects were asked to grasp the sensorized object while being guided through the rehabilitative exercises.</p>
<p>Within the exoskeleton, the
<italic>i</italic>
-th DoM motor of the slave setup tracked the
<inline-formula>
<mml:math id="mm32">
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
</mml:math>
</inline-formula>
set-points from Equation (
<xref ref-type="disp-formula" rid="FD3-sensors-16-00208">3</xref>
) according to a filtering stage (2nd-order low-pass Butterworth filter with cutoff frequency of 0.45 Hz) and a PI controller (estimated bandwidth of 80 Hz). As a consequence, the motor-driven positions—which we will indicate by
<inline-formula>
<mml:math id="mm33">
<mml:mover accent="true">
<mml:mi>x</mml:mi>
<mml:mo>^</mml:mo>
</mml:mover>
</mml:math>
</inline-formula>
—are delayed by approximately 200 ms but are less noisy with respect to the command signal
<inline-formula>
<mml:math id="mm34">
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
</mml:math>
</inline-formula>
.</p>
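<p>A sketch of the slave-side smoothing stage is given below, assuming SciPy’s Butterworth design; the 30 Hz rate and the 0.45 Hz cutoff follow the values given above, while the causal filtering (and hence part of the reported delay) is inherent to the on-line setting.</p>
<preformat>
from scipy.signal import butter, lfilter

FS = 30.0  # set-point update rate (Hz)

# 2nd-order low-pass Butterworth, 0.45 Hz cutoff (normalized to Nyquist).
b, a = butter(N=2, Wn=0.45 / (FS / 2.0), btype="low")

def smooth_setpoints(x):
    """Causally filter the raw set-point stream before the PI position
    loop; smoothing reduces noise at the cost of the roughly 200 ms
    delay observed on the motor-driven positions."""
    return lfilter(b, a, x)
</preformat>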
<fig id="sensors-16-00208-f005" position="float">
<label>Figure 5</label>
<caption>
<p>Our experimental setup provides an RT communication link, natural direct driving of the gesture, and on-the-fly supervision by the operator.</p>
</caption>
<graphic xlink:href="sensors-16-00208-g005"></graphic>
</fig>
<p>Since our research project is at its very first stage of development, we preferred for the moment to conduct an early-stage validation of the system without directly including a real therapist or any impaired subject (e.g., stroke survivors), who will instead be involved in further and larger experiments. For this reason, we asked healthy right-handed subjects to volunteer for this early validation stage. Six subjects participated in the experiments, while one volunteered to act as operator and received training in VPE. The operator acted as master in all the experiments. Subjects were introduced to the system and the protocol, and assisted in wearing the exoskeleton: they sat in front of the operator, but had no visual cue of the operator’s intentions due to a panel.
<italic>Vice-versa</italic>
, the operator had visual feedback on the slave unit and received graphic feedback from both the hand tracker and the slave unit. An external PC was used for network-traffic capture, data storage, postprocessing and statistics. The whole pipeline is depicted in
<xref ref-type="fig" rid="sensors-16-00208-f006">Figure 6</xref>
.</p>
<fig id="sensors-16-00208-f006" position="float">
<label>Figure 6</label>
<caption>
<p>Experimental setup pipeline. (
<bold>a</bold>
) master unit: the VPE camera computes the joints positions of the operator’s hand; (
<bold>b</bold>
) slave unit: the subject wears the HX exoskeleton, which drives his/her fingers towards closure; the subject can freely move the arm; (
<bold>c</bold>
) squeezable grip sensor; (
<bold>d</bold>
) the bi-directional link is realized through UDP/IP communication between the VPE acquiring PC and the real-time control board driving the HX; (
<bold>e</bold>
) in the same PC, the operator visualizes in RT the gripping-force feedback from the slave unit. Bottom: the same setup, addressing a lateral grasp. The white panel prevents the subject from having any visual cue of the operator’s intentions.</p>
</caption>
<graphic xlink:href="sensors-16-00208-g006"></graphic>
</fig>
</sec>
</sec>
<sec id="sec3-sensors-16-00208">
<title>3. Results and Discussion</title>
<p>All six subjects could wear the exoskeleton without reporting any hindrance or being harmed by the device, and the operator was able to correct and adapt the motion sequence based on the visual feedback of the patient’s environment. During the grasping session, the operator drove the closure of HX until the sensorized object was stably gripped: this condition could be verified either by the visual feedback or by the interaction force reported by the sensor.
<xref ref-type="fig" rid="sensors-16-00208-f007">Figure 7</xref>
shows illustrative trials of opening-closing sequences for both grasps. We analyzed the setup performance on the basis of the master input motion speed. This speed was not fixed for each trial, since the operator was fully free to drive the motion; however, he was asked to try different speeds, based on his own perception. For each closure repetition, we estimated the speed from the slope of the closure command percentage—
<italic>i.e.</italic>
, the slope of the rising part of the dotted red curve in the first panel in
<xref ref-type="fig" rid="sensors-16-00208-f007">Figure 7</xref>
a,b.</p>
<fig id="sensors-16-00208-f007" position="float">
<label>Figure 7</label>
<caption>
<p>Example profiles of the tele-rehabilitation results. The first panel of (
<bold>a</bold>
) and (
<bold>b</bold>
shows the master desired (pre-filtering) closing percentage
<italic>p</italic>
, and the grip force as recorded by the sensorized object; the other panels show the slave desired (red) and measured (blue) positions of the four DoM.</p>
</caption>
<graphic xlink:href="sensors-16-00208-g007"></graphic>
</fig>
<p>To assess the performance of the slave setup in tracking the master commands, we analyzed the root-mean-squared error (RMSE) between the desired
<inline-formula>
<mml:math id="mm35">
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
</mml:math>
</inline-formula>
and the actual motion of each DoM. Such performance changes as the operator varies the grasp velocity, due to the intrinsic limitations of the slave motors and the added inertia of the exoskeleton and of the mechanical transmission. Hence, we evaluated the distribution of the actual-
<italic>versus</italic>
-desired motion discrepancy across the operator’s self-selected speeds. Expected (but small) discrepancies between the motor-driven positions
<inline-formula>
<mml:math id="mm36">
<mml:mover accent="true">
<mml:mi>x</mml:mi>
<mml:mo>^</mml:mo>
</mml:mover>
</mml:math>
</inline-formula>
and the command signal
<inline-formula>
<mml:math id="mm37">
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
</mml:math>
</inline-formula>
(respectively blue and red curves in the last panels of
<xref ref-type="fig" rid="sensors-16-00208-f007">Figure 7</xref>
a,b) are also due to the filtering stage described in
<xref ref-type="sec" rid="sec2dot3-sensors-16-00208">Section 2.3</xref>
.</p>
<p>For each grasp repetition, we isolated the motion profiles from just before the operator started closing (
<inline-formula>
<mml:math id="mm38">
<mml:msub>
<mml:mi>t</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mi>n</mml:mi>
</mml:mrow>
</mml:msub>
</mml:math>
</inline-formula>
) to just after the operator returned to the open position (
<inline-formula>
<mml:math id="mm39">
<mml:msub>
<mml:mi>t</mml:mi>
<mml:mrow>
<mml:mi>e</mml:mi>
<mml:mi>n</mml:mi>
<mml:mi>d</mml:mi>
</mml:mrow>
</mml:msub>
</mml:math>
</inline-formula>
), and we evaluated the RMSE
<italic>ε</italic>
in this time-window, while the closure speed
<inline-formula>
<mml:math id="mm40">
<mml:mover accent="true">
<mml:mi>p</mml:mi>
<mml:mo>˙</mml:mo>
</mml:mover>
</mml:math>
</inline-formula>
was evaluated as the mean slope in the closing phase (starting at
<inline-formula>
<mml:math id="mm41">
<mml:msub>
<mml:mi>t</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mi>n</mml:mi>
</mml:mrow>
</mml:msub>
</mml:math>
</inline-formula>
and ending at
<inline-formula>
<mml:math id="mm42">
<mml:msub>
<mml:mi>t</mml:mi>
<mml:mrow>
<mml:mi>s</mml:mi>
<mml:mi>t</mml:mi>
<mml:mi>o</mml:mi>
<mml:mi>p</mml:mi>
</mml:mrow>
</mml:msub>
</mml:math>
</inline-formula>
):
<disp-formula id="FD4-sensors-16-00208">
<label>(4)</label>
<mml:math id="mm43">
<mml:mrow>
<mml:msub>
<mml:mi>ε</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:msqrt>
<mml:mstyle scriptlevel="0" displaystyle="true">
<mml:mfrac>
<mml:mrow>
<mml:msubsup>
<mml:mo></mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>t</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mi>n</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:msub>
<mml:mi>t</mml:mi>
<mml:mrow>
<mml:mi>e</mml:mi>
<mml:mi>n</mml:mi>
<mml:mi>d</mml:mi>
</mml:mrow>
</mml:msub>
</mml:msubsup>
<mml:msup>
<mml:mfenced separators="" open="(" close=")">
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mo>-</mml:mo>
<mml:msub>
<mml:mover accent="true">
<mml:mi>x</mml:mi>
<mml:mo>^</mml:mo>
</mml:mover>
<mml:mi>i</mml:mi>
</mml:msub>
</mml:mfenced>
<mml:mn>2</mml:mn>
</mml:msup>
<mml:mi mathvariant="normal">d</mml:mi>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mi>t</mml:mi>
<mml:mrow>
<mml:mi>e</mml:mi>
<mml:mi>n</mml:mi>
<mml:mi>d</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>-</mml:mo>
<mml:msub>
<mml:mi>t</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mi>n</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfrac>
</mml:mstyle>
</mml:msqrt>
<mml:mspace width="0.166667em"></mml:mspace>
<mml:mtext>with</mml:mtext>
<mml:mspace width="4.pt"></mml:mspace>
<mml:mi>i</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>,</mml:mo>
<mml:mi></mml:mi>
<mml:mo>,</mml:mo>
<mml:mn>4</mml:mn>
<mml:mo>;</mml:mo>
<mml:mspace width="0.166667em"></mml:mspace>
<mml:mover accent="true">
<mml:mi>p</mml:mi>
<mml:mo>˙</mml:mo>
</mml:mover>
<mml:mo>=</mml:mo>
<mml:mstyle scriptlevel="0" displaystyle="true">
<mml:mfrac>
<mml:mrow>
<mml:mi>p</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:msub>
<mml:mi>t</mml:mi>
<mml:mrow>
<mml:mi>s</mml:mi>
<mml:mi>t</mml:mi>
<mml:mi>o</mml:mi>
<mml:mi>p</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>-</mml:mo>
<mml:mi>p</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:msub>
<mml:mi>t</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mi>n</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mi>t</mml:mi>
<mml:mrow>
<mml:mi>s</mml:mi>
<mml:mi>t</mml:mi>
<mml:mi>o</mml:mi>
<mml:mi>p</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>-</mml:mo>
<mml:msub>
<mml:mi>t</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mi>n</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfrac>
</mml:mstyle>
<mml:mspace width="0.166667em"></mml:mspace>
</mml:mrow>
</mml:math>
</disp-formula>
</p>
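<p>The per-trial metrics of Equation (4) can be sketched as follows; sampled signals and trapezoidal integration stand in for the continuous-time integral.</p>
<preformat>
import numpy as np

def trial_metrics(t, x_des, x_meas, p, t_in, t_stop, t_end):
    """RMSE between desired and measured DoM motion over [t_in, t_end],
    and closure speed as the mean slope of the closure percentage p
    over the closing phase [t_in, t_stop]."""
    t = np.asarray(t, dtype=float)
    win = (t >= t_in) & (t <= t_end)
    err = np.asarray(x_des)[win] - np.asarray(x_meas)[win]
    rmse = np.sqrt(np.trapz(err ** 2, t[win]) / (t_end - t_in))
    i_in, i_stop = np.searchsorted(t, [t_in, t_stop])
    speed = (p[i_stop] - p[i_in]) / (t_stop - t_in)
    return rmse, speed
</preformat>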
<p>Aggregated results, comprising all subjects and all trials, separately for each DoM, are shown in
<xref ref-type="fig" rid="sensors-16-00208-f008">Figure 8</xref>
: each marker represents
<inline-formula>
<mml:math id="mm44">
<mml:mover accent="true">
<mml:mi>p</mml:mi>
<mml:mo>˙</mml:mo>
</mml:mover>
</mml:math>
</inline-formula>
and
<italic>ε</italic>
of a single closure trial. For the statistical analysis, we divided the operator speeds into 30 equal intervals, ranging from the minimum to the maximum
<inline-formula>
<mml:math id="mm45">
<mml:mover accent="true">
<mml:mi>p</mml:mi>
<mml:mo>˙</mml:mo>
</mml:mover>
</mml:math>
</inline-formula>
recorded. Within each speed interval, we estimated the mean value of the corresponding
<italic>ε</italic>
belonging to the second and the third quartile: such values are shown by the histograms in
<xref ref-type="fig" rid="sensors-16-00208-f008">Figure 8</xref>
. We interpolated these values with a linear function, weighting each mean
<italic>ε</italic>
with the number of closure trials enclosed in the corresponding speed interval: this linear relationship between
<inline-formula>
<mml:math id="mm46">
<mml:mover accent="true">
<mml:mi>p</mml:mi>
<mml:mo>˙</mml:mo>
</mml:mover>
</mml:math>
</inline-formula>
and
<italic>ε</italic>
is represented by the straight segments in
<xref ref-type="fig" rid="sensors-16-00208-f008">Figure 8</xref>
.</p>
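<p>The binning, quartile-restricted averaging and weighted linear fit can be sketched as follows; the handling of bins at the edges of the speed range is simplified.</p>
<preformat>
import numpy as np

def speed_error_fit(speeds, errors, n_bins=30):
    """Divide the recorded speeds into n_bins equal intervals, average
    within each bin the errors falling between the first and third
    quartiles, and fit a line weighted by the number of trials per bin."""
    speeds = np.asarray(speeds, dtype=float)
    errors = np.asarray(errors, dtype=float)
    edges = np.linspace(speeds.min(), speeds.max(), n_bins + 1)
    centres, means, weights = [], [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = errors[(speeds >= lo) & (speeds < hi)]
        if sel.size == 0:
            continue  # empty interval, no trials recorded
        q1, q3 = np.percentile(sel, [25, 75])
        means.append(sel[(sel >= q1) & (sel <= q3)].mean())
        centres.append(0.5 * (lo + hi))
        weights.append(sel.size)
    slope, intercept = np.polyfit(centres, means, deg=1, w=weights)
    return slope, intercept
</preformat>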
<p>Results shown in
<xref ref-type="fig" rid="sensors-16-00208-f007">Figure 7</xref>
demonstrate the instantaneous communication between the master command
<italic>p</italic>
(top panels, red curve) and the setpoint of the HX robot DoM
<inline-formula>
<mml:math id="mm47">
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
</mml:math>
</inline-formula>
(other panels, red solid curve), which are exactly aligned. As for the motion actuation, the blue DoM output curve tracks the red input with a small delay and settling time, visually appreciable in the graphs but covering only about 200 ms, due to the filtering and time-response of the actuators (see
<xref ref-type="sec" rid="sec2dot3-sensors-16-00208">Section 2.3</xref>
). On top of this, there is the communication delay between the master and slave units, and another delay between the HX motion (blue curve) and the force response from the gripper (top black curve): this is the time needed by the HX to reach the gripper and squeeze it; the same effect is also visible when releasing the gripper. In any case, the delay between the operator reaching the desired posture and the peak response from the gripper was never noticeable in our experiments and never interfered with the exercises. Conversely, when the speed increases, the RMSE between the real and the desired motion,
<italic>ε</italic>
, also increases: this is visible from the plots in
<xref ref-type="fig" rid="sensors-16-00208-f008">Figure 8</xref>
. The calculated
<italic>ε</italic>
is mainly due to the discrepancy between
<inline-formula>
<mml:math id="mm48">
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
</mml:math>
</inline-formula>
and
<inline-formula>
<mml:math id="mm49">
<mml:msub>
<mml:mover accent="true">
<mml:mi>x</mml:mi>
<mml:mo>^</mml:mo>
</mml:mover>
<mml:mi>i</mml:mi>
</mml:msub>
</mml:math>
</inline-formula>
in the grasping (increasing
<italic>p</italic>
) and releasing (decreasing
<italic>p</italic>
) dynamic phases, while in the
<italic>static</italic>
part the difference is not appreciable.
<xref ref-type="table" rid="sensors-16-00208-t002">Table 2</xref>
reports limit values (
<italic>Slow</italic>
,
<italic>Medium</italic>
and
<italic>Fast</italic>
) of the linear interpolation
<italic>ε</italic>
against closure speed
<inline-formula>
<mml:math id="mm50">
<mml:mover accent="true">
<mml:mi>p</mml:mi>
<mml:mo>˙</mml:mo>
</mml:mover>
</mml:math>
</inline-formula>
for each motor and each grasp type, together with the standard deviation from the collected data, when available: indeed, for certain speeds, and especially for the lateral grasp, the collected data were not sufficient to compute a meaningful standard deviation.</p>
<p>A qualitative analysis of
<xref ref-type="fig" rid="sensors-16-00208-f008">Figure 8</xref>
suggests that the operator preferred to concentrate grasping speeds in the
<inline-formula>
<mml:math id="mm51">
<mml:mrow>
<mml:mn>0</mml:mn>
<mml:mo>÷</mml:mo>
<mml:mn>0</mml:mn>
<mml:mo>.</mml:mo>
<mml:mn>85</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula>
Hz range for the pinch grasp, and in the
<inline-formula>
<mml:math id="mm52">
<mml:mrow>
<mml:mn>0</mml:mn>
<mml:mo>÷</mml:mo>
<mml:mn>0</mml:mn>
<mml:mo>.</mml:mo>
<mml:mn>5</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula>
Hz range for the lateral one (with 1 Hz representing a whole closure-and-opening task performed in one second). Quicker grasps falling above these intervals, although intercepted by the linear regression, are still outside the statistics (being above the mean speed plus twice the standard deviation). The
<italic>Medium</italic>
rows reported in
<xref ref-type="table" rid="sensors-16-00208-t002">Table 2</xref>
are chosen as the maximum speed values of these preferred ranges. In addition, we can notice that, for the lateral grasp, the
<italic>Medium</italic>
and
<italic>Fast</italic>
closure speeds attained are lower: this is mainly due to the fact that the HX motion covers a smaller space when
<italic>p</italic>
ranges from 0 to 100% (see
<xref ref-type="table" rid="sensors-16-00208-t001">Table 1</xref>
).</p>
<fig id="sensors-16-00208-f008" position="float">
<label>Figure 8</label>
<caption>
<p>Aggregated cross-results of operator’s task execution speed and RMSE of HX motion from the desired, respectively for (
<bold>a</bold>
) pinch and (
<bold>b</bold>
) lateral grasps. The histogram bars represent the RMSE in degrees, while each task repetition is reported as a circle dot. The small black line on top of each histogram represents the mean speed, the black whiskers around it represent the standard deviation, and the gray whisker indicates the mean speed plus twice the standard deviation. Results are divided among the four DoM.</p>
</caption>
<graphic xlink:href="sensors-16-00208-g008"></graphic>
</fig>
<table-wrap id="sensors-16-00208-t002" position="float">
<object-id pub-id-type="pii">sensors-16-00208-t002_Table 2</object-id>
<label>Table 2</label>
<caption>
<p>Mean value ± standard deviation of
<italic>ε</italic>
, classified according to the operator’s self-selected velocity 
<inline-formula>
<mml:math id="mm53">
<mml:mover accent="true">
<mml:mi>p</mml:mi>
<mml:mo>˙</mml:mo>
</mml:mover>
</mml:math>
</inline-formula>
.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th colspan="3" align="center" valign="middle" style="border-bottom:solid thin;border-top:solid thin" rowspan="1">Grasp Type and Speed</th>
<th align="center" valign="middle" style="border-bottom:solid thin;border-top:solid thin" rowspan="1" colspan="1">MCP (deg)</th>
<th align="center" valign="middle" style="border-bottom:solid thin;border-top:solid thin" rowspan="1" colspan="1">P-DIP (deg)</th>
<th align="center" valign="middle" style="border-bottom:solid thin;border-top:solid thin" rowspan="1" colspan="1">MC-IP (deg)</th>
<th align="center" valign="middle" style="border-bottom:solid thin;border-top:solid thin" rowspan="1" colspan="1">CMC (deg)</th>
</tr>
</thead>
<tbody>
<tr>
<td rowspan="3" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">Pinch</td>
<td align="left" valign="middle" rowspan="1" colspan="1">Slow</td>
<td align="left" valign="middle" rowspan="1" colspan="1">
<inline-formula>
<mml:math id="mm54">
<mml:mrow>
<mml:mover accent="true">
<mml:mi>p</mml:mi>
<mml:mo>˙</mml:mo>
</mml:mover>
<mml:mo><</mml:mo>
<mml:mn>0</mml:mn>
<mml:mo>.</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula>
Hz</td>
<td align="center" valign="middle" rowspan="1" colspan="1">3.0 ± 0.5</td>
<td align="center" valign="middle" rowspan="1" colspan="1">4.2 ± 1.0</td>
<td align="center" valign="middle" rowspan="1" colspan="1">2.5 ± 0.4</td>
<td align="center" valign="middle" rowspan="1" colspan="1">2.4 ± 0.6</td>
</tr>
<tr>
<td align="left" valign="middle" rowspan="1" colspan="1">Medium</td>
<td align="left" valign="middle" rowspan="1" colspan="1">
<inline-formula>
<mml:math id="mm55">
<mml:mrow>
<mml:mover accent="true">
<mml:mi>p</mml:mi>
<mml:mo>˙</mml:mo>
</mml:mover>
<mml:mo></mml:mo>
<mml:mn>0</mml:mn>
<mml:mo>.</mml:mo>
<mml:mn>35</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula>
Hz</td>
<td align="center" valign="middle" rowspan="1" colspan="1">4.8 ± 1.7</td>
<td align="center" valign="middle" rowspan="1" colspan="1">5.7 ± 1.5</td>
<td align="center" valign="middle" rowspan="1" colspan="1">3.2 ± 0.8</td>
<td align="center" valign="middle" rowspan="1" colspan="1">4.2 ± 2.3</td>
</tr>
<tr>
<td align="left" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Fast</td>
<td align="left" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">
<inline-formula>
<mml:math id="mm56">
<mml:mrow>
<mml:mover accent="true">
<mml:mi>p</mml:mi>
<mml:mo>˙</mml:mo>
</mml:mover>
<mml:mo>></mml:mo>
<mml:mn>0</mml:mn>
<mml:mo>.</mml:mo>
<mml:mn>85</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula>
Hz</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">7.6 ± 3.7</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">8.2 ± 2.6</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">4.5 ± 1.8</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">7.1 ± 3.6</td>
</tr>
<tr>
<td rowspan="3" align="center" valign="middle" style="border-bottom:solid thin" colspan="1">Lateral</td>
<td align="left" valign="middle" rowspan="1" colspan="1">Slow</td>
<td align="left" valign="middle" rowspan="1" colspan="1">
<inline-formula>
<mml:math id="mm57">
<mml:mrow>
<mml:mover accent="true">
<mml:mi>p</mml:mi>
<mml:mo>˙</mml:mo>
</mml:mover>
<mml:mo><</mml:mo>
<mml:mn>0</mml:mn>
<mml:mo>.</mml:mo>
<mml:mn>05</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula>
Hz</td>
<td align="center" valign="middle" rowspan="1" colspan="1">5.1 ± 2.1</td>
<td align="center" valign="middle" rowspan="1" colspan="1">7.4 ± 3.7</td>
<td align="center" valign="middle" rowspan="1" colspan="1">4.4 ± 1.1</td>
<td align="center" valign="middle" rowspan="1" colspan="1">2.4 ± 0.6</td>
</tr>
<tr>
<td align="left" valign="middle" rowspan="1" colspan="1">Medium</td>
<td align="left" valign="middle" rowspan="1" colspan="1">
<inline-formula>
<mml:math id="mm58">
<mml:mrow>
<mml:mover accent="true">
<mml:mi>p</mml:mi>
<mml:mo>˙</mml:mo>
</mml:mover>
<mml:mo></mml:mo>
<mml:mn>0</mml:mn>
<mml:mo>.</mml:mo>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula>
Hz</td>
<td align="center" valign="middle" rowspan="1" colspan="1">6.1 ± 4.4</td>
<td align="center" valign="middle" rowspan="1" colspan="1">9.0 ± 5.7</td>
<td align="center" valign="middle" rowspan="1" colspan="1">5.1 ± 2.0</td>
<td align="center" valign="middle" rowspan="1" colspan="1">2.5 ± 1.0</td>
</tr>
<tr>
<td align="left" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">Fast</td>
<td align="left" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">
<inline-formula>
<mml:math id="mm59">
<mml:mrow>
<mml:mover accent="true">
<mml:mi>p</mml:mi>
<mml:mo>˙</mml:mo>
</mml:mover>
<mml:mo>></mml:mo>
<mml:mn>0</mml:mn>
<mml:mo>.</mml:mo>
<mml:mn>6</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula>
Hz</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">8.1</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">12.1</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">6.7</td>
<td align="center" valign="middle" style="border-bottom:solid thin" rowspan="1" colspan="1">2.8</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>The collected results demonstrate that the proposed experimental setup works reliably; in addition, the low variances of the error
<italic>ε</italic>
shown in
<xref ref-type="table" rid="sensors-16-00208-t002">Table 2</xref>
demonstrate good across-subject repeatability. The results also show that HX can actually drive the human hand along the imposed path and maintain a stable object grasp, that the hand-tracking algorithm is capable of real-time performance and accurate enough for the purpose, and that the decoding algorithm of the master system is simple but effective, requiring neither expensive materials (such as external sensors) nor an intensive training phase. Furthermore, the operator could successfully drive the volunteers along the intended task in all trials, at any preferred speed setting, while being able to change it dynamically on-the-fly.</p>
<p>The proposed setup has been specifically designed to allow RT-direct telerehabilitation, with operator and patient simultaneously receiving mutual feedback. Still, it could also be used in off-line rehabilitative tasks by recording the operator’s motion and replaying it to the slave unit on request (possibly multiple times): such a feature could be useful for patients who have to perform exactly the same exercise repeatedly. The operator could later receive a summary of the patients’ performances.</p>
<p>Communication between the master and slave units has been designed to require very low bandwidth, thus allowing RT-direct controlled rehabilitation to take place even over unstable or poor Internet connections. No packet loss in the master–slave bidirectional communication was observed in our experiments, and exercises were never affected by appreciable communication delays. In any case, delays will not compromise the system stability, as the exoskeleton implements safety mechanisms that prevent it from harming the patient and allow it to reach a rest position when no control command is received from the master unit; in case of delays, moreover, the operator could analyze off-line the feedback received from the slave unit. The slave unit records information about the patient’s range of motion for each addressed DoM as well as the interaction forces. Such data can be used to assess improvements and the patient’s evolution, representing useful and valid support both to the operator (who can prepare a set of rehabilitative exercises only once, thus saving time) and to the patient (who can receive precise information about his/her improvements).</p>
</sec>
<sec id="sec4-sensors-16-00208">
<title>4. Conclusions</title>
<p>In this paper, we introduced the design of a telerehabilitation system for hand functional recovery, and presented the results of preliminary experimental activities assessing the system usability and accuracy.</p>
<p>The proposed system goes beyond the current state of the art in several features. In telerehabilitation systems, a strong limitation is usually due to time delays and loss of information [
<xref rid="B36-sensors-16-00208" ref-type="bibr">36</xref>
], which might affect the reliability and stability of the RT link: the telecommunication system must comply with some minimum standards (maximum time lag, information loss and speed). Our implementation showed good performance, since the lag between the master and slave systems never affected the regular development of the exercises.</p>
<p>Most current rehabilitation robotic aids are independent, poorly networked mechanical systems providing little interaction with the operator. This represents a non-negligible limitation, because the operator needs to monitor the patient’s progress to adapt the exercises as needed. The main novelty of the presented application is the combination of a markerless hand-tracking system, leveraging a VPE algorithm, and the multi-joint HX hand exoskeleton. Together, they provide the operator with natural and reliable information about the evolution of the exercise kinematics and a way to enforce and change it. In addition, information about interaction forces from an external compliant sensor can be provided to the operator, to quantitatively evaluate whether the task’s goal was attained. Patients may feel more motivated to exercise at home under the guidance of a highly adaptive robotic tool.</p>
<p>Experimental results proved the overall feasibility and stability of the telerehabilitation setup across different speed settings and for different subjects.</p>
<p>Future studies will deal with the extension of the VPE framework to allow the automatic detection of the exercise performed by the operator. In addition, we will define clinical protocols to evaluate the efficacy of our telerehabilitation system with impaired subjects. In fact, hand rehabilitation therapy is relevant for post-stroke patients, who often show residual hand functionality that can be improved by continuous exercise. It is important to train patients constantly and effectively. Hemiplegic patients benefit from continuous exercise of the affected hand, especially if aided by the other hand, which they can still control. Post-traumatic healing and the prevention of repeated injuries are likewise achieved through rehabilitation. The aim of such treatment is to develop strength, flexibility, and proprioception in the affected body segment [
<xref rid="B37-sensors-16-00208" ref-type="bibr">37</xref>
]. We think that our system paves the way for a set of telerehabilitation tools and procedures specifically designed for post-stroke patients.</p>
</sec>
</body>
<back>
<ack>
<title>Acknowledgments</title>
<p>This research has been partially supported by: (i) the “Smart Cities and Social Innovation Under 30” program of the Italian Ministry of Research and University through the PARLOMA Project (SIN_00132), (ii) the Italian Ministry of Health, Ricerca Finalizzata 2009—VRehab Project (Grant RF-2009-1472190), (iii) the EU Commission within the WAY Project (“Wearable interfaces for hand function recoverY”, FP7-ICT-Ch5 G.A. 288551), (iv) the Italian Ministry of Economic Development within the AMULOS Project (ADVANCED MULOS, Contract MI01_00319), and (v) the TIM Joint Open Lab.</p>
</ack>
<notes>
<title>Author Contributions</title>
<p>All authors contributed extensively to the work presented in this paper. G.A.F., D.P., R.N. and A.C. wrote code and assembled input data; M.Ce., M.Co. and N.V. designed and implemented the hand exoskeleton, and analyzed and interpreted the experimental data. C.M.O. and N.V. designed and supervised the experiments. G.A.F., D.P., M.Ce., M.Co., L.O.R., M.I., C.M.O. and N.V. wrote the manuscript. The authors give final approval of the version to be submitted.</p>
</notes>
<notes>
<title>Conflicts of Interest</title>
<p>The authors declare no conflict of interest.</p>
</notes>
<glossary>
<title>Abbreviations</title>
<p>The following abbreviations are used in this manuscript:
<def-list>
<def-item>
<term>a/a</term>
<def>
<p>abduction/adduction</p>
</def>
</def-item>
<def-item>
<term>f/e</term>
<def>
<p>flexion/extension</p>
</def>
</def-item>
<def-item>
<term>DIP</term>
<def>
<p>Distal-Inter-Phalangeal</p>
</def>
</def-item>
<def-item>
<term>DoM</term>
<def>
<p>Degree of Motion</p>
</def>
</def-item>
<def-item>
<term>GPU</term>
<def>
<p>Graphics Processor Unit</p>
</def>
</def-item>
<def-item>
<term>HRI</term>
<def>
<p>Human Robot Interaction</p>
</def>
</def-item>
<def-item>
<term>HX</term>
<def>
<p>Hand eXoskeleton</p>
</def>
</def-item>
<def-item>
<term>MCP</term>
<def>
<p>Meta-Carpo-Phalangeal</p>
</def>
</def-item>
<def-item>
<term>MC-IP</term>
<def>
<p>Meta-Carpo-Inter-Phalangeal</p>
</def>
</def-item>
<def-item>
<term>P-DIP</term>
<def>
<p>Proximal-Distal-Inter-Phalangeal</p>
</def>
</def-item>
<def-item>
<term>PIP</term>
<def>
<p>Proximal-Inter-Phalangeal</p>
</def>
</def-item>
<def-item>
<term>PSO</term>
<def>
<p>Particle Swarm Optimization</p>
</def>
</def-item>
<def-item>
<term>RGB-D</term>
<def>
<p>Red, Green, Blue, and Depth</p>
</def>
</def-item>
<def-item>
<term>RF</term>
<def>
<p>Random Forest</p>
</def>
</def-item>
<def-item>
<term>RMSE</term>
<def>
<p>Root-Mean-Squared Error</p>
</def>
</def-item>
<def-item>
<term>RT</term>
<def>
<p>Real Time</p>
</def>
</def-item>
<def-item>
<term>UDP/IP</term>
<def>
<p>User Datagram Protocol/Internet Protocol</p>
</def>
</def-item>
<def-item>
<term>VPE</term>
<def>
<p>Vision-Based Pose Estimation</p>
</def>
</def-item>
<def-item>
<term>VR</term>
<def>
<p>Virtual Reality</p>
</def>
</def-item>
</def-list>
</p>
</glossary>
<ref-list>
<title>References</title>
<ref id="B1-sensors-16-00208">
<label>1.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Dobkin</surname>
<given-names>B.</given-names>
</name>
</person-group>
<article-title>The economic impact of stroke</article-title>
<source>Neurology</source>
<year>1995</year>
<volume>45</volume>
<fpage>S6</fpage>
<lpage>S9</lpage>
<pub-id pub-id-type="pmid">7885589</pub-id>
</element-citation>
</ref>
<ref id="B2-sensors-16-00208">
<label>2.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Fox</surname>
<given-names>S.</given-names>
</name>
</person-group>
<article-title>EVAL-revolutionizing hand exams</article-title>
<source>Adv. Occup. Ther.</source>
<year>1991</year>
<volume>7</volume>
<fpage>1</fpage>
<lpage>7</lpage>
</element-citation>
</ref>
<ref id="B3-sensors-16-00208">
<label>3.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Popescu</surname>
<given-names>V.G.</given-names>
</name>
<name>
<surname>Burdea</surname>
<given-names>G.C.</given-names>
</name>
<name>
<surname>Bouzit</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Hentz</surname>
<given-names>V.R.</given-names>
</name>
</person-group>
<article-title>A virtual-reality-based telerehabilitation system with force feedback</article-title>
<source>IEEE Trans. Inf. Technol. Biomed.</source>
<year>2000</year>
<volume>4</volume>
<fpage>45</fpage>
<lpage>51</lpage>
<pub-id pub-id-type="doi">10.1109/4233.826858</pub-id>
<pub-id pub-id-type="pmid">10761773</pub-id>
</element-citation>
</ref>
<ref id="B4-sensors-16-00208">
<label>4.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Burdea</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Popescu</surname>
<given-names>V.</given-names>
</name>
<name>
<surname>Hentz</surname>
<given-names>V.</given-names>
</name>
<name>
<surname>Colbert</surname>
<given-names>K.</given-names>
</name>
</person-group>
<article-title>Virtual reality-based orthopedic telerehabilitation</article-title>
<source>IEEE Trans. Rehabil. Eng.</source>
<year>2000</year>
<volume>8</volume>
<fpage>430</fpage>
<lpage>432</lpage>
<pub-id pub-id-type="doi">10.1109/86.867886</pub-id>
<pub-id pub-id-type="pmid">11001524</pub-id>
</element-citation>
</ref>
<ref id="B5-sensors-16-00208">
<label>5.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Giansanti</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Morelli</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Maccioni</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Macellari</surname>
<given-names>V.</given-names>
</name>
</person-group>
<article-title>Validation of a tele-home-care for hand-telerehabilitation</article-title>
<source>Conf. Proc. IEEE Eng. Med. Biol. Soc.</source>
<year>2007</year>
<volume>2007</volume>
<fpage>3830</fpage>
<lpage>3832</lpage>
<pub-id pub-id-type="pmid">18002833</pub-id>
</element-citation>
</ref>
<ref id="B6-sensors-16-00208">
<label>6.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Holden</surname>
<given-names>M.K.</given-names>
</name>
<name>
<surname>Dyar</surname>
<given-names>T.A.</given-names>
</name>
<name>
<surname>Dayan-Cimadoro</surname>
<given-names>L.</given-names>
</name>
</person-group>
<article-title>Telerehabilitation using a virtual environment improves upper extremity function in patients with stroke</article-title>
<source>IEEE Trans. Neural Syst. Rehabil. Eng.</source>
<year>2007</year>
<volume>15</volume>
<fpage>36</fpage>
<lpage>42</lpage>
<pub-id pub-id-type="doi">10.1109/TNSRE.2007.891388</pub-id>
<pub-id pub-id-type="pmid">17436874</pub-id>
</element-citation>
</ref>
<ref id="B7-sensors-16-00208">
<label>7.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Holden</surname>
<given-names>M.K.</given-names>
</name>
<name>
<surname>Dyar</surname>
<given-names>T.A.</given-names>
</name>
<name>
<surname>Schwamm</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Bizzi</surname>
<given-names>E.</given-names>
</name>
</person-group>
<article-title>Virtual-environment-based telerehabilitation in patients with stroke</article-title>
<source>Presence</source>
<year>2005</year>
<volume>14</volume>
<fpage>214</fpage>
<lpage>233</lpage>
<pub-id pub-id-type="doi">10.1162/1054746053967058</pub-id>
</element-citation>
</ref>
<ref id="B8-sensors-16-00208">
<label>8.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Holden</surname>
<given-names>M.K.</given-names>
</name>
<name>
<surname>Dyar</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Schwamm</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Bizzi</surname>
<given-names>E.</given-names>
</name>
</person-group>
<article-title>Home-based telerehabilitation using a virtual environment system</article-title>
<source>Proceedings of the 2nd International Workshop on Virtual Rehabilitation</source>
<conf-loc>Piscataway, NJ, USA</conf-loc>
<conf-date>13–20 September 2003</conf-date>
<fpage>4</fpage>
<lpage>12</lpage>
</element-citation>
</ref>
<ref id="B9-sensors-16-00208">
<label>9.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Song</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Pan</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Xu</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Li</surname>
<given-names>H.</given-names>
</name>
</person-group>
<article-title>Adaptive motion control of arm rehabilitation robot based on impedance identification</article-title>
<source>Robotica</source>
<year>2015</year>
<volume>33</volume>
<fpage>1795</fpage>
<lpage>1812</lpage>
<pub-id pub-id-type="doi">10.1017/S026357471400099X</pub-id>
</element-citation>
</ref>
<ref id="B10-sensors-16-00208">
<label>10.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Basteris</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Nijenhuis</surname>
<given-names>S.M.</given-names>
</name>
<name>
<surname>Stienen</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Buurke</surname>
<given-names>J.H.</given-names>
</name>
<name>
<surname>Prange</surname>
<given-names>G.B.</given-names>
</name>
<name>
<surname>Amirabdollahian</surname>
<given-names>F.</given-names>
</name>
</person-group>
<article-title>Training modalities in robot-mediated upper limb rehabilitation in stroke: A framework for classification based on a systematic review</article-title>
<source>J. Neuroeng. Rehabil.</source>
<year>2014</year>
<volume>11</volume>
<fpage>111</fpage>
<pub-id pub-id-type="doi">10.1186/1743-0003-11-111</pub-id>
<pub-id pub-id-type="pmid">25012864</pub-id>
</element-citation>
</ref>
<ref id="B11-sensors-16-00208">
<label>11.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Pan</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Song</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Xu</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Li</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Xu</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Xiong</surname>
<given-names>P.</given-names>
</name>
</person-group>
<article-title>Hierarchical safety supervisory control strategy for robot-assisted rehabilitation exercise</article-title>
<source>Robotica</source>
<year>2013</year>
<volume>31</volume>
<fpage>757</fpage>
<lpage>766</lpage>
<pub-id pub-id-type="doi">10.1017/S0263574713000052</pub-id>
</element-citation>
</ref>
<ref id="B12-sensors-16-00208">
<label>12.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Brochard</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Robertson</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Medee</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Remy-Neris</surname>
<given-names>O.</given-names>
</name>
</person-group>
<article-title>What’s new in new technologies for upper extremity rehabilitation?</article-title>
<source>Curr. Opin. Neurol.</source>
<year>2010</year>
<volume>23</volume>
<fpage>683</fpage>
<lpage>687</lpage>
<pub-id pub-id-type="doi">10.1097/WCO.0b013e32833f61ce</pub-id>
<pub-id pub-id-type="pmid">20852420</pub-id>
</element-citation>
</ref>
<ref id="B13-sensors-16-00208">
<label>13.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Balasubramanian</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Klein</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Burdet</surname>
<given-names>E.</given-names>
</name>
</person-group>
<article-title>Robot-assisted rehabilitation of hand function</article-title>
<source>Curr. Opin. Neurol.</source>
<year>2010</year>
<volume>23</volume>
<fpage>661</fpage>
<lpage>670</lpage>
<pub-id pub-id-type="doi">10.1097/WCO.0b013e32833e99a4</pub-id>
<pub-id pub-id-type="pmid">20852421</pub-id>
</element-citation>
</ref>
<ref id="B14-sensors-16-00208">
<label>14.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Mehrholz</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Platz</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Kugler</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Pohl</surname>
<given-names>M.</given-names>
</name>
</person-group>
<article-title>Electromechanical-assisted training for improving arm function and disability after stroke</article-title>
<source>Cochrane Database Syst. Rev.</source>
<year>2008</year>
<volume>4</volume>
<fpage>CD006876</fpage>
<pub-id pub-id-type="pmid">18843735</pub-id>
</element-citation>
</ref>
<ref id="B15-sensors-16-00208">
<label>15.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Reinkensmeyer</surname>
<given-names>J.</given-names>
</name>
</person-group>
<article-title>Comparison of robot-assisted reaching to free reaching in promoting recovery from chronic stroke</article-title>
<source>Proceedings of the 7th International Conference on Rehabilitation Robotics</source>
<conf-loc>Evry Cedex, France</conf-loc>
<conf-date>1 January 2001</conf-date>
<fpage>39</fpage>
</element-citation>
</ref>
<ref id="B16-sensors-16-00208">
<label>16.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Burgar</surname>
<given-names>C.G.</given-names>
</name>
<name>
<surname>Lum</surname>
<given-names>P.S.</given-names>
</name>
<name>
<surname>Shor</surname>
<given-names>P.C.</given-names>
</name>
<name>
<surname>Van der Loos</surname>
<given-names>H.M.</given-names>
</name>
</person-group>
<article-title>Development of robots for rehabilitation therapy: The Palo Alto VA/Stanford experience</article-title>
<source>J. Rehabil. Res. Dev.</source>
<year>2000</year>
<volume>37</volume>
<fpage>663</fpage>
<lpage>674</lpage>
<pub-id pub-id-type="pmid">11321002</pub-id>
</element-citation>
</ref>
<ref id="B17-sensors-16-00208">
<label>17.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Song</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Guo</surname>
<given-names>S.</given-names>
</name>
</person-group>
<article-title>Development of a novel tele-rehabilitation system</article-title>
<source>Proceedings of the IEEE International Conference on Robotics and Biomimetics (ROBIO’06)</source>
<conf-loc>Kunming, China</conf-loc>
<conf-date>17–20 December 2006</conf-date>
<fpage>785</fpage>
<lpage>789</lpage>
</element-citation>
</ref>
<ref id="B18-sensors-16-00208">
<label>18.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Peng</surname>
<given-names>Q.</given-names>
</name>
<name>
<surname>Park</surname>
<given-names>H.S.</given-names>
</name>
<name>
<surname>Zhang</surname>
<given-names>L.Q.</given-names>
</name>
</person-group>
<article-title>A low-cost portable telerehabilitation system for the treatment and assessment of the elbow deformity of stroke patients</article-title>
<source>Proceedings of the IEEE 9th International Conference on Rehabilitation Robotics (ICORR 2005)</source>
<conf-loc>Chicago, IL, USA</conf-loc>
<conf-date>28 June–1 July 2005</conf-date>
<fpage>149</fpage>
<lpage>151</lpage>
</element-citation>
</ref>
<ref id="B19-sensors-16-00208">
<label>19.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Duong</surname>
<given-names>M.D.</given-names>
</name>
<name>
<surname>Terashima</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Miyoshi</surname>
<given-names>T.</given-names>
</name>
</person-group>
<article-title>A novel stable teleoperation with haptic feedback by means of impedance adjustment via arbitrary time delay environment for rehabilitation</article-title>
<source>Proceedings of the 2009 IEEE International Conference on Control Applications (CCA) & Intelligent Control (ISIC)</source>
<conf-loc>Saint Petersburg, Russia</conf-loc>
<conf-date>8–10 July 2009</conf-date>
<fpage>1744</fpage>
<lpage>1749</lpage>
</element-citation>
</ref>
<ref id="B20-sensors-16-00208">
<label>20.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Cortese</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Cempini</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>de Almeida Ribeiro</surname>
<given-names>P.R.</given-names>
</name>
<name>
<surname>Soekadar</surname>
<given-names>S.R.</given-names>
</name>
<name>
<surname>Carrozza</surname>
<given-names>M.C.</given-names>
</name>
<name>
<surname>Vitiello</surname>
<given-names>N.</given-names>
</name>
</person-group>
<article-title>A Mechatronic System for Robot-Mediated Hand Telerehabilitation</article-title>
<source>IEEE/ASME Trans. Mechatron.</source>
<year>2014</year>
<volume>20</volume>
<fpage>1753</fpage>
<lpage>1764</lpage>
<pub-id pub-id-type="doi">10.1109/TMECH.2014.2353298</pub-id>
</element-citation>
</ref>
<ref id="B21-sensors-16-00208">
<label>21.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Antón</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Goni</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Illarramendi</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Torres-Unda</surname>
<given-names>J.J.</given-names>
</name>
<name>
<surname>Seco</surname>
<given-names>J.</given-names>
</name>
</person-group>
<article-title>KiReS: A Kinect-based telerehabilitation system</article-title>
<source>Proceedings of the 2013 IEEE 15th International Conference on E-Health Networking, Applications & Services (Healthcom)</source>
<conf-loc>Lisbon, Portugal</conf-loc>
<conf-date>9–12 October 2013</conf-date>
<fpage>444</fpage>
<lpage>448</lpage>
</element-citation>
</ref>
<ref id="B22-sensors-16-00208">
<label>22.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Russo</surname>
<given-names>L.O.</given-names>
</name>
<name>
<surname>Airò Farulla</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Pianu</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Salgarella</surname>
<given-names>A.R.</given-names>
</name>
<name>
<surname>Controzzi</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Cipriani</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Oddo</surname>
<given-names>C.M.</given-names>
</name>
<name>
<surname>Geraci</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Rosa</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Indaco</surname>
<given-names>M.</given-names>
</name>
</person-group>
<article-title>PARLOMA – A Novel Human-Robot Interaction System for Deaf-Blind Remote Communication</article-title>
<source>Int. J. Adv. Robot. Syst.</source>
<year>2015</year>
<pub-id pub-id-type="doi">10.5772/60416</pub-id>
</element-citation>
</ref>
<ref id="B23-sensors-16-00208">
<label>23.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Shotton</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Sharp</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Kipman</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Fitzgibbon</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Finocchio</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Blake</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Cook</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Moore</surname>
<given-names>R.</given-names>
</name>
</person-group>
<article-title>Real-time human pose recognition in parts from single depth images</article-title>
<source>Commun. ACM</source>
<year>2013</year>
<volume>56</volume>
<fpage>116</fpage>
<lpage>124</lpage>
<pub-id pub-id-type="doi">10.1145/2398356.2398381</pub-id>
</element-citation>
</ref>
<ref id="B24-sensors-16-00208">
<label>24.</label>
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Keskin</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Kıraç</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Kara</surname>
<given-names>Y.E.</given-names>
</name>
<name>
<surname>Akarun</surname>
<given-names>L.</given-names>
</name>
</person-group>
<article-title>Real time hand pose estimation using depth sensors</article-title>
<source>Consumer Depth Cameras for Computer Vision</source>
<publisher-name>Springer</publisher-name>
<publisher-loc>Zurich, Switzerland</publisher-loc>
<year>2013</year>
<fpage>119</fpage>
<lpage>137</lpage>
</element-citation>
</ref>
<ref id="B25-sensors-16-00208">
<label>25.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Oikonomidis</surname>
<given-names>I.</given-names>
</name>
<name>
<surname>Kyriazis</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Argyros</surname>
<given-names>A.A.</given-names>
</name>
</person-group>
<article-title>Efficient model-based 3D tracking of hand articulations using Kinect</article-title>
<source>BMVC</source>
<year>2011</year>
<volume>1</volume>
<fpage>3</fpage>
</element-citation>
</ref>
<ref id="B26-sensors-16-00208">
<label>26.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wachs</surname>
<given-names>J.P.</given-names>
</name>
<name>
<surname>Kölsch</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Stern</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Edan</surname>
<given-names>Y.</given-names>
</name>
</person-group>
<article-title>Vision-based hand-gesture applications</article-title>
<source>Commun. ACM</source>
<year>2011</year>
<volume>54</volume>
<fpage>60</fpage>
<lpage>71</lpage>
<pub-id pub-id-type="doi">10.1145/1897816.1897838</pub-id>
</element-citation>
</ref>
<ref id="B27-sensors-16-00208">
<label>27.</label>
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Kennedy</surname>
<given-names>J.</given-names>
</name>
</person-group>
<article-title>Particle swarm optimization</article-title>
<source>Encyclopedia of Machine Learning</source>
<publisher-name>Springer</publisher-name>
<publisher-loc>New York, NY, USA</publisher-loc>
<year>2010</year>
<fpage>760</fpage>
<lpage>766</lpage>
</element-citation>
</ref>
<ref id="B28-sensors-16-00208">
<label>28.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Breiman</surname>
<given-names>L.</given-names>
</name>
</person-group>
<article-title>Random forests</article-title>
<source>Mach. Learn.</source>
<year>2001</year>
<volume>45</volume>
<fpage>5</fpage>
<lpage>32</lpage>
<pub-id pub-id-type="doi">10.1023/A:1010933404324</pub-id>
</element-citation>
</ref>
<ref id="B29-sensors-16-00208">
<label>29.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Cempini</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Cortese</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Vitiello</surname>
<given-names>N.</given-names>
</name>
</person-group>
<article-title>A Powered Finger–Thumb Wearable Hand Exoskeleton With Self-Aligning Joint Axes</article-title>
<source>IEEE/ASME Trans. Mechatron.</source>
<year>2015</year>
<volume>20</volume>
<fpage>705</fpage>
<lpage>716</lpage>
<pub-id pub-id-type="doi">10.1109/TMECH.2014.2315528</pub-id>
</element-citation>
</ref>
<ref id="B30-sensors-16-00208">
<label>30.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Donati</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Vitiello</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>De Rossi</surname>
<given-names>S.M.M.</given-names>
</name>
<name>
<surname>Lenzi</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Crea</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Persichetti</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Giovacchini</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Koopman</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Podobnik</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Munih</surname>
<given-names>M.</given-names>
</name>
<etal></etal>
</person-group>
<article-title>A flexible sensor technology for the distributed measurement of interaction pressure</article-title>
<source>Sensors</source>
<year>2013</year>
<volume>13</volume>
<fpage>1021</fpage>
<lpage>1045</lpage>
<pub-id pub-id-type="doi">10.3390/s130101021</pub-id>
<pub-id pub-id-type="pmid">23322104</pub-id>
</element-citation>
</ref>
<ref id="B31-sensors-16-00208">
<label>31.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Fischler</surname>
<given-names>M.A.</given-names>
</name>
<name>
<surname>Bolles</surname>
<given-names>R.C.</given-names>
</name>
</person-group>
<article-title>Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography</article-title>
<source>Commun. ACM</source>
<year>1981</year>
<volume>24</volume>
<fpage>381</fpage>
<lpage>395</lpage>
<pub-id pub-id-type="doi">10.1145/358669.358692</pub-id>
</element-citation>
</ref>
<ref id="B32-sensors-16-00208">
<label>32.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Comaniciu</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Meer</surname>
<given-names>P.</given-names>
</name>
</person-group>
<article-title>Mean shift: A robust approach toward feature space analysis</article-title>
<source>IEEE Trans. Pattern Anal. Mach. Intell.</source>
<year>2002</year>
<volume>24</volume>
<fpage>603</fpage>
<lpage>619</lpage>
<pub-id pub-id-type="doi">10.1109/34.1000236</pub-id>
</element-citation>
</ref>
<ref id="B33-sensors-16-00208">
<label>33.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Lin</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Wu</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Huang</surname>
<given-names>T.S.</given-names>
</name>
</person-group>
<article-title>Modeling the constraints of human hand motion</article-title>
<source>Proceedings of the IEEE Workshop on Human Motion</source>
<conf-loc>Los Alamitos, CA, USA</conf-loc>
<conf-date>7–8 December 2000</conf-date>
<fpage>121</fpage>
<lpage>126</lpage>
</element-citation>
</ref>
<ref id="B34-sensors-16-00208">
<label>34.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Lenzi</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Vitiello</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>De Rossi</surname>
<given-names>S.M.M.</given-names>
</name>
<name>
<surname>Persichetti</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Giovacchini</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Roccella</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Vecchi</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Carrozza</surname>
<given-names>M.C.</given-names>
</name>
</person-group>
<article-title>Measuring human–robot interaction on wearable robots: A distributed approach</article-title>
<source>Mechatronics</source>
<year>2011</year>
<volume>21</volume>
<fpage>1123</fpage>
<lpage>1131</lpage>
<pub-id pub-id-type="doi">10.1016/j.mechatronics.2011.04.003</pub-id>
</element-citation>
</ref>
<ref id="B35-sensors-16-00208">
<label>35.</label>
<element-citation publication-type="webpage">
<person-group person-group-type="author">
<name>
<surname>Šarić</surname>
<given-names>M.</given-names>
</name>
</person-group>
<article-title>LibHand: A Library for Hand Articulation, Version 0.9</article-title>
<comment>Available online:
<ext-link ext-link-type="uri" xlink:href="http://www.libhand.org/">http://www.libhand.org/</ext-link>
</comment>
<date-in-citation>(accessed on 29 January 2016)</date-in-citation>
</element-citation>
</ref>
<ref id="B36-sensors-16-00208">
<label>36.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Mohammadi</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Tavakoli</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Marquez</surname>
<given-names>H.J.</given-names>
</name>
</person-group>
<article-title>Control of nonlinear teleoperation systems subject to disturbances and variable time delays</article-title>
<source>Proceedings of the 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)</source>
<conf-loc>Algarve, Portugal</conf-loc>
<conf-date>7–12 October 2012</conf-date>
<fpage>3017</fpage>
<lpage>3022</lpage>
</element-citation>
</ref>
<ref id="B37-sensors-16-00208">
<label>37.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Tropp</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Alaranta</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Renstrom</surname>
<given-names>P.</given-names>
</name>
</person-group>
<article-title>Proprioception and coordination training in injury prevention</article-title>
<source>Sports Inj. Basic Princ. Prev. Care</source>
<year>1993</year>
<volume>4</volume>
<fpage>277</fpage>
<lpage>290</lpage>
</element-citation>
</ref>
</ref-list>
</back>
</pmc>
<affiliations>
<list></list>
<tree>
<noCountry>
<name sortKey="Air Farulla, Giuseppe" sort="Air Farulla, Giuseppe" uniqKey="Air Farulla G" first="Giuseppe" last="Air Farulla">Giuseppe Air Farulla</name>
<name sortKey="Cempini, Marco" sort="Cempini, Marco" uniqKey="Cempini M" first="Marco" last="Cempini">Marco Cempini</name>
<name sortKey="Chimienti, Antonio" sort="Chimienti, Antonio" uniqKey="Chimienti A" first="Antonio" last="Chimienti">Antonio Chimienti</name>
<name sortKey="Cortese, Mario" sort="Cortese, Mario" uniqKey="Cortese M" first="Mario" last="Cortese">Mario Cortese</name>
<name sortKey="Indaco, Marco" sort="Indaco, Marco" uniqKey="Indaco M" first="Marco" last="Indaco">Marco Indaco</name>
<name sortKey="Nerino, Roberto" sort="Nerino, Roberto" uniqKey="Nerino R" first="Roberto" last="Nerino">Roberto Nerino</name>
<name sortKey="Oddo, Calogero M" sort="Oddo, Calogero M" uniqKey="Oddo C" first="Calogero M." last="Oddo">Calogero M. Oddo</name>
<name sortKey="Pianu, Daniele" sort="Pianu, Daniele" uniqKey="Pianu D" first="Daniele" last="Pianu">Daniele Pianu</name>
<name sortKey="Russo, Ludovico O" sort="Russo, Ludovico O" uniqKey="Russo L" first="Ludovico O." last="Russo">Ludovico O. Russo</name>
<name sortKey="Vitiello, Nicola" sort="Vitiello, Nicola" uniqKey="Vitiello N" first="Nicola" last="Vitiello">Nicola Vitiello</name>
</noCountry>
</tree>
</affiliations>
</record>

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/HapticV1/Data/Pmc/Checkpoint
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 000005 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/Pmc/Checkpoint/biblio.hfd -nk 000005 | SxmlIndent | more
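
As a minimal sketch (assuming only standard shell redirection in addition to the Dilib tools shown above; the output file name record-000005.xml is illustrative, not part of the site), the indented record can also be saved for offline inspection:

HfdSelect -h $EXPLOR_AREA/Data/Pmc/Checkpoint/biblio.hfd -nk 000005 | SxmlIndent > record-000005.xml   # save the pretty-printed SXML record to a file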

To add a link to this page within the Wicri network

{{Explor lien
   |wiki=    Ticri/CIDE
   |area=    HapticV1
   |flux=    Pmc
   |étape=   Checkpoint
   |type=    RBID
   |clé=     PMC:4801584
   |texte=   Vision-Based Pose Estimation for Robot-Mediated Hand Telerehabilitation
}}

To generate wiki pages

HfdIndexSelect -h $EXPLOR_AREA/Data/Pmc/Checkpoint/RBID.i   -Sk "pubmed:26861333" \
       | HfdSelect -Kh $EXPLOR_AREA/Data/Pmc/Checkpoint/biblio.hfd   \
       | NlmPubMed2Wicri -a HapticV1 
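
As a sketch only (assuming the RBID.i index is also keyed by the PMC identifier that serves as this record's RBID, which the generated page does not confirm), the same pipeline could be driven by the RBID instead of the PubMed identifier:

HfdIndexSelect -h $EXPLOR_AREA/Data/Pmc/Checkpoint/RBID.i   -Sk "PMC:4801584" \
       | HfdSelect -Kh $EXPLOR_AREA/Data/Pmc/Checkpoint/biblio.hfd   \
       | NlmPubMed2Wicri -a HapticV1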

Wicri

This area was generated with Dilib version V0.6.23.
Data generation: Mon Jun 13 01:09:46 2016. Site generation: Wed Mar 6 09:54:07 2024