Exploration server on haptic devices

Warning: this site is under development!
Warning: this site is generated automatically from raw corpora.
The information has therefore not been validated.

Identification of Haptic Based Guiding Using Hard Reins

Internal identifier: 000309 (Pmc/Curation); previous: 000308; next: 000310

Identification of Haptic Based Guiding Using Hard Reins

Authors: Anuradha Ranasinghe [United Kingdom]; Prokar Dasgupta [United Kingdom]; Kaspar Althoefer [United Kingdom]; Thrishantha Nanayakkara [United Kingdom]

Source:

RBID : PMC:4511788

Abstract

This paper presents identifications of human-human interaction in which one person with limited auditory and visual perception of the environment (a follower) is guided by an agent with full perceptual capabilities (a guider) via a hard rein along a given path. We investigate several aspects of the interaction between the guider and the follower, such as computational models that map states of the follower to actions of the guider, and the computational basis by which the guider modulates the force on the rein in response to the trust level of the follower. Experimental system identification based on human demonstrations shows that the guider and the follower learn optimal, stable, state-dependent 3rd- and 2nd-order auto-regressive predictive and reactive control policies, respectively. By modeling the follower’s dynamics as a time-varying virtual damped inertial system, we found that the coefficient of virtual damping is most appropriate to explain the trust level of the follower at any given time. Moreover, we demonstrate the stability of the extracted guiding policy when implemented on a planar 1-DoF robotic arm. Our findings provide a theoretical basis for designing advanced human-robot interaction algorithms applicable to a variety of situations where a human requires the assistance of a robot to perceive the environment.


URL:
DOI: 10.1371/journal.pone.0132020
PubMed: 26201076
PubMed Central: 4511788

Links to previous steps (curation, corpus...)


Links to Exploration step

PMC:4511788

The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">Identification of Haptic Based Guiding Using Hard Reins</title>
<author>
<name sortKey="Ranasinghe, Anuradha" sort="Ranasinghe, Anuradha" uniqKey="Ranasinghe A" first="Anuradha" last="Ranasinghe">Anuradha Ranasinghe</name>
<affiliation wicri:level="1">
<nlm:aff id="aff001">
<addr-line>Department of Informatics/ Center for Robotics Research, King’s College London, London, United Kingdom</addr-line>
</nlm:aff>
<country xml:lang="fr">Royaume-Uni</country>
<wicri:regionArea>Department of Informatics/ Center for Robotics Research, King’s College London, London</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Dasgupta, Prokar" sort="Dasgupta, Prokar" uniqKey="Dasgupta P" first="Prokar" last="Dasgupta">Prokar Dasgupta</name>
<affiliation wicri:level="1">
<nlm:aff id="aff002">
<addr-line>MRC Center for Transplantation, DTIMB &amp; NIHR BRC, King’s College London, London, United Kingdom</addr-line>
</nlm:aff>
<country xml:lang="fr">Royaume-Uni</country>
<wicri:regionArea>MRC Center for Transplantation, DTIMB &amp; NIHR BRC, King’s College London, London</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Althoefer, Kaspar" sort="Althoefer, Kaspar" uniqKey="Althoefer K" first="Kaspar" last="Althoefer">Kaspar Althoefer</name>
<affiliation wicri:level="1">
<nlm:aff id="aff001">
<addr-line>Department of Informatics/ Center for Robotics Research, King’s College London, London, United Kingdom</addr-line>
</nlm:aff>
<country xml:lang="fr">Royaume-Uni</country>
<wicri:regionArea>Department of Informatics/ Center for Robotics Research, King’s College London, London</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Nanayakkara, Thrishantha" sort="Nanayakkara, Thrishantha" uniqKey="Nanayakkara T" first="Thrishantha" last="Nanayakkara">Thrishantha Nanayakkara</name>
<affiliation wicri:level="1">
<nlm:aff id="aff001">
<addr-line>Department of Informatics/ Center for Robotics Research, King’s College London, London, United Kingdom</addr-line>
</nlm:aff>
<country xml:lang="fr">Royaume-Uni</country>
<wicri:regionArea>Department of Informatics/ Center for Robotics Research, King’s College London, London</wicri:regionArea>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PMC</idno>
<idno type="pmid">26201076</idno>
<idno type="pmc">4511788</idno>
<idno type="url">http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4511788</idno>
<idno type="RBID">PMC:4511788</idno>
<idno type="doi">10.1371/journal.pone.0132020</idno>
<date when="2015">2015</date>
<idno type="wicri:Area/Pmc/Corpus">000309</idno>
<idno type="wicri:Area/Pmc/Curation">000309</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a" type="main">Identification of Haptic Based Guiding Using Hard Reins</title>
<author>
<name sortKey="Ranasinghe, Anuradha" sort="Ranasinghe, Anuradha" uniqKey="Ranasinghe A" first="Anuradha" last="Ranasinghe">Anuradha Ranasinghe</name>
<affiliation wicri:level="1">
<nlm:aff id="aff001">
<addr-line>Department of Informatics/ Center for Robotics Research, King’s College London, London, United Kingdom</addr-line>
</nlm:aff>
<country xml:lang="fr">Royaume-Uni</country>
<wicri:regionArea>Department of Informatics/ Center for Robotics Research, King’s College London, London</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Dasgupta, Prokar" sort="Dasgupta, Prokar" uniqKey="Dasgupta P" first="Prokar" last="Dasgupta">Prokar Dasgupta</name>
<affiliation wicri:level="1">
<nlm:aff id="aff002">
<addr-line>MRC Center for Transplantation, DTIMB &amp; NIHR BRC, King’s College London, London, United Kingdom</addr-line>
</nlm:aff>
<country xml:lang="fr">Royaume-Uni</country>
<wicri:regionArea>MRC Center for Transplantation, DTIMB &amp; NIHR BRC, King’s College London, London</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Althoefer, Kaspar" sort="Althoefer, Kaspar" uniqKey="Althoefer K" first="Kaspar" last="Althoefer">Kaspar Althoefer</name>
<affiliation wicri:level="1">
<nlm:aff id="aff001">
<addr-line>Department of Informatics/ Center for Robotics Research, King’s College London, London, United Kingdom</addr-line>
</nlm:aff>
<country xml:lang="fr">Royaume-Uni</country>
<wicri:regionArea>Department of Informatics/ Center for Robotics Research, King’s College London, London</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Nanayakkara, Thrishantha" sort="Nanayakkara, Thrishantha" uniqKey="Nanayakkara T" first="Thrishantha" last="Nanayakkara">Thrishantha Nanayakkara</name>
<affiliation wicri:level="1">
<nlm:aff id="aff001">
<addr-line>Department of Informatics/ Center for Robotics Research, King’s College London, London, United Kingdom</addr-line>
</nlm:aff>
<country xml:lang="fr">Royaume-Uni</country>
<wicri:regionArea>Department of Informatics/ Center for Robotics Research, King’s College London, London</wicri:regionArea>
</affiliation>
</author>
</analytic>
<series>
<title level="j">PLoS ONE</title>
<idno type="eISSN">1932-6203</idno>
<imprint>
<date when="2015">2015</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass></textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">
<p>This paper presents identifications of human-human interaction in which one person with limited auditory and visual perception of the environment (a follower) is guided by an agent with full perceptual capabilities (a guider) via a hard rein along a given path. We investigate several aspects of the interaction between the guider and the follower, such as computational models that map states of the follower to actions of the guider, and the computational basis by which the guider modulates the force on the rein in response to the trust level of the follower. Experimental system identification based on human demonstrations shows that the guider and the follower learn optimal, stable, state-dependent 3
<sup>rd</sup>
- and 2
<sup>nd</sup>
-order auto-regressive predictive and reactive control policies, respectively. By modeling the follower’s dynamics as a time-varying virtual damped inertial system, we found that the coefficient of virtual damping is most appropriate to explain the trust level of the follower at any given time. Moreover, we demonstrate the stability of the extracted guiding policy when implemented on a planar 1-DoF robotic arm. Our findings provide a theoretical basis for designing advanced human-robot interaction algorithms applicable to a variety of situations where a human requires the assistance of a robot to perceive the environment.</p>
</div>
</front>
<back>
<div1 type="bibliography">
<listBibl>
<biblStruct>
<analytic>
<author>
<name sortKey="Murphy, Rr" uniqKey="Murphy R">RR Murphy</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Goodrich, Ma" uniqKey="Goodrich M">MA Goodrich</name>
</author>
<author>
<name sortKey="Schultz, Ac" uniqKey="Schultz A">AC Schultz</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Casper, J" uniqKey="Casper J">J Casper</name>
</author>
<author>
<name sortKey="Murphy, Rr" uniqKey="Murphy R">RR Murphy</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Finzi, A" uniqKey="Finzi A">A Finzi</name>
</author>
<author>
<name sortKey="Orlandini, A" uniqKey="Orlandini A">A Orlandini</name>
</author>
</analytic>
</biblStruct>
<biblStruct></biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Marston, Jr" uniqKey="Marston J">JR Marston</name>
</author>
<author>
<name sortKey="Loomis, Jm" uniqKey="Loomis J">JM Loomis</name>
</author>
<author>
<name sortKey="Klatzky, Rl" uniqKey="Klatzky R">RL Klatzky</name>
</author>
<author>
<name sortKey="Golledge, Rg" uniqKey="Golledge R">RG Golledge</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Penders, J" uniqKey="Penders J">J Penders</name>
</author>
<author>
<name sortKey="Alboul, L" uniqKey="Alboul L">L Alboul</name>
</author>
<author>
<name sortKey="Witkowski, U" uniqKey="Witkowski U">U Witkowski</name>
</author>
<author>
<name sortKey="Naghsh, A" uniqKey="Naghsh A">A Naghsh</name>
</author>
<author>
<name sortKey="Saez Pons, J" uniqKey="Saez Pons J">J Saez-Pons</name>
</author>
<author>
<name sortKey="Herbrechtsmeier, S" uniqKey="Herbrechtsmeier S">S Herbrechtsmeier</name>
</author>
</analytic>
</biblStruct>
<biblStruct></biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Loomis, Jm" uniqKey="Loomis J">JM Loomis</name>
</author>
<author>
<name sortKey="Golledge, Rg" uniqKey="Golledge R">RG Golledge</name>
</author>
<author>
<name sortKey="Klatzky, Rl" uniqKey="Klatzky R">RL Klatzky</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ulrich, I" uniqKey="Ulrich I">I Ulrich</name>
</author>
<author>
<name sortKey="Borenstein, J" uniqKey="Borenstein J">J Borenstein</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Tachi, S" uniqKey="Tachi S">S Tachi</name>
</author>
<author>
<name sortKey="Tanie, K" uniqKey="Tanie K">K Tanie</name>
</author>
<author>
<name sortKey="Komoriya, K" uniqKey="Komoriya K">K Komoriya</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Loomis, Jm" uniqKey="Loomis J">JM Loomis</name>
</author>
<author>
<name sortKey="Klatzky, Rl" uniqKey="Klatzky R">RL Klatzky</name>
</author>
<author>
<name sortKey="Golledge, Rg" uniqKey="Golledge R">RG Golledge</name>
</author>
</analytic>
</biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Park, E" uniqKey="Park E">E Park</name>
</author>
<author>
<name sortKey="Quaneisha, J" uniqKey="Quaneisha J">J Quaneisha</name>
</author>
<author>
<name sortKey="Xiaochun, J" uniqKey="Xiaochun J">J Xiaochun</name>
</author>
</analytic>
</biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mrtl, A" uniqKey="Mrtl A">A Mörtl</name>
</author>
<author>
<name sortKey="Lawitzky, M" uniqKey="Lawitzky M">M Lawitzky</name>
</author>
<author>
<name sortKey="Kucukyilmaz, A" uniqKey="Kucukyilmaz A">A Kucukyilmaz</name>
</author>
<author>
<name sortKey="Sezgin, M" uniqKey="Sezgin M">M Sezgin</name>
</author>
<author>
<name sortKey="Basdogan, C" uniqKey="Basdogan C">C Basdogan</name>
</author>
<author>
<name sortKey="Hirche, S" uniqKey="Hirche S">S Hirche</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hancock, Pa" uniqKey="Hancock P">PA Hancock</name>
</author>
<author>
<name sortKey="Billings, Dr" uniqKey="Billings D">DR Billings</name>
</author>
<author>
<name sortKey="Schaefer, Ke" uniqKey="Schaefer K">KE Schaefer</name>
</author>
<author>
<name sortKey="Chen, Jy" uniqKey="Chen J">JY Chen</name>
</author>
<author>
<name sortKey="De Visser, Ej" uniqKey="De Visser E">EJ De Visser</name>
</author>
<author>
<name sortKey="Parasuraman, R" uniqKey="Parasuraman R">R Parasuraman</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Flanders, M" uniqKey="Flanders M">M Flanders</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Richardson, Mj" uniqKey="Richardson M">MJ Richardson</name>
</author>
<author>
<name sortKey="Flash, T" uniqKey="Flash T">T Flash</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Christopher, Hm" uniqKey="Christopher H">HM Christopher</name>
</author>
<author>
<name sortKey="Wolpert, Dm" uniqKey="Wolpert D">DM Wolpert</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bhushan, N" uniqKey="Bhushan N">N Bhushan</name>
</author>
<author>
<name sortKey="Shadmehr, R" uniqKey="Shadmehr R">R Shadmehr</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Saeb, S" uniqKey="Saeb S">S Saeb</name>
</author>
<author>
<name sortKey="Cornelius, W" uniqKey="Cornelius W">W Cornelius</name>
</author>
<author>
<name sortKey="Jochen, T" uniqKey="Jochen T">T Jochen</name>
</author>
</analytic>
</biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Sterman, Jd" uniqKey="Sterman J">JD Sterman</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Van Mourik, Am" uniqKey="Van Mourik A">AM Van Mourik</name>
</author>
<author>
<name sortKey="Daffertshofer, A" uniqKey="Daffertshofer A">A Daffertshofer</name>
</author>
<author>
<name sortKey="Beek, Pj" uniqKey="Beek P">PJ Beek</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Johannsen, G" uniqKey="Johannsen G">G Johannsen</name>
</author>
<author>
<name sortKey="Rouse, Wb" uniqKey="Rouse W">WB Rouse</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Stokes, Ia" uniqKey="Stokes I">IA Stokes</name>
</author>
<author>
<name sortKey="Gardner, Mm" uniqKey="Gardner M">MM Gardner</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Thoroughman, Ka" uniqKey="Thoroughman K">KA Thoroughman</name>
</author>
<author>
<name sortKey="Reza, S" uniqKey="Reza S">S Reza</name>
</author>
</analytic>
</biblStruct>
</listBibl>
</div1>
</back>
</TEI>
<pmc article-type="research-article">
<pmc-dir>properties open_access</pmc-dir>
<front>
<journal-meta>
<journal-id journal-id-type="nlm-ta">PLoS One</journal-id>
<journal-id journal-id-type="iso-abbrev">PLoS ONE</journal-id>
<journal-id journal-id-type="publisher-id">plos</journal-id>
<journal-id journal-id-type="pmc">plosone</journal-id>
<journal-title-group>
<journal-title>PLoS ONE</journal-title>
</journal-title-group>
<issn pub-type="epub">1932-6203</issn>
<publisher>
<publisher-name>Public Library of Science</publisher-name>
<publisher-loc>San Francisco, CA USA</publisher-loc>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="pmid">26201076</article-id>
<article-id pub-id-type="pmc">4511788</article-id>
<article-id pub-id-type="publisher-id">PONE-D-14-33768</article-id>
<article-id pub-id-type="doi">10.1371/journal.pone.0132020</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Research Article</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>Identification of Haptic Based Guiding Using Hard Reins</article-title>
<alt-title alt-title-type="running-head">Identification of Haptic Based Guiding</alt-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name>
<surname>Ranasinghe</surname>
<given-names>Anuradha</given-names>
</name>
<xref ref-type="aff" rid="aff001">
<sup>1</sup>
</xref>
<xref ref-type="corresp" rid="cor001">*</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Dasgupta</surname>
<given-names>Prokar</given-names>
</name>
<xref ref-type="aff" rid="aff002">
<sup>2</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Althoefer</surname>
<given-names>Kaspar</given-names>
</name>
<xref ref-type="aff" rid="aff001">
<sup>1</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Nanayakkara</surname>
<given-names>Thrishantha</given-names>
</name>
<xref ref-type="aff" rid="aff001">
<sup>1</sup>
</xref>
</contrib>
</contrib-group>
<aff id="aff001">
<label>1</label>
<addr-line>Department of Informatics/ Center for Robotics Research, King’s College London, London, United Kingdom</addr-line>
</aff>
<aff id="aff002">
<label>2</label>
<addr-line>MRC Center for Transplantation, DTIMB &amp; NIHR BRC, King’s College London, London, United Kingdom</addr-line>
</aff>
<contrib-group>
<contrib contrib-type="editor">
<name>
<surname>Federici</surname>
<given-names>Stefano</given-names>
</name>
<role>Editor</role>
<xref ref-type="aff" rid="edit1"></xref>
</contrib>
</contrib-group>
<aff id="edit1">
<addr-line>University of Perugia, ITALY</addr-line>
</aff>
<author-notes>
<fn fn-type="conflict" id="coi001">
<p>
<bold>Competing Interests: </bold>
The authors have declared that no competing interests exist.</p>
</fn>
<fn fn-type="con" id="contrib001">
<p>Conceived and designed the experiments: AR PD KA TN. Performed the experiments: AR. Analyzed the data: AR TN. Contributed reagents/materials/analysis tools: AR PD KA TN. Wrote the paper: AR PD KA TN.</p>
</fn>
<corresp id="cor001">* E-mail:
<email>anuradha.ranasinghe@kcl.ac.uk</email>
</corresp>
</author-notes>
<pub-date pub-type="collection">
<year>2015</year>
</pub-date>
<pub-date pub-type="epub">
<day>22</day>
<month>7</month>
<year>2015</year>
</pub-date>
<volume>10</volume>
<issue>7</issue>
<elocation-id>e0132020</elocation-id>
<history>
<date date-type="received">
<day>1</day>
<month>8</month>
<year>2014</year>
</date>
<date date-type="accepted">
<day>9</day>
<month>6</month>
<year>2015</year>
</date>
</history>
<permissions>
<copyright-year>2015</copyright-year>
<copyright-holder>Ranasinghe et al</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/">
<license-p>This is an open access article distributed under the terms of the
<ext-link ext-link-type="uri" xlink:href="http://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution License</ext-link>
, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited</license-p>
</license>
</permissions>
<self-uri content-type="pdf" xlink:type="simple" xlink:href="pone.0132020.pdf"></self-uri>
<abstract>
<p>This paper presents identifications of human-human interaction in which one person with limited auditory and visual perception of the environment (a follower) is guided by an agent with full perceptual capabilities (a guider) via a hard rein along a given path. We investigate several aspects of the interaction between the guider and the follower, such as computational models that map states of the follower to actions of the guider, and the computational basis by which the guider modulates the force on the rein in response to the trust level of the follower. Experimental system identification based on human demonstrations shows that the guider and the follower learn optimal, stable, state-dependent 3
<sup>rd</sup>
- and 2
<sup>nd</sup>
-order auto-regressive predictive and reactive control policies, respectively. By modeling the follower’s dynamics as a time-varying virtual damped inertial system, we found that the coefficient of virtual damping is most appropriate to explain the trust level of the follower at any given time. Moreover, we demonstrate the stability of the extracted guiding policy when implemented on a planar 1-DoF robotic arm. Our findings provide a theoretical basis for designing advanced human-robot interaction algorithms applicable to a variety of situations where a human requires the assistance of a robot to perceive the environment.</p>
</abstract>
<funding-group>
<funding-statement>This study is supported by the UK Engineering and Physical Sciences Research Council (EPSRC) grant no. EP/I028765/1, the Guy’s and St Thomas’ Charity grant on developing clinician-scientific interfaces in robotic assisted surgery: translating technical innovation into improved clinical care (grant no. R090705), and Vattikuti foundation. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.</funding-statement>
</funding-group>
<counts>
<fig-count count="10"></fig-count>
<table-count count="5"></table-count>
<page-count count="22"></page-count>
</counts>
<custom-meta-group>
<custom-meta id="data-availability">
<meta-name>Data Availability</meta-name>
<meta-value>All relevant data are within the paper and its Supporting Information files.</meta-value>
</custom-meta>
</custom-meta-group>
</article-meta>
<notes>
<title>Data Availability</title>
<p>All relevant data are within the paper and its Supporting Information files.</p>
</notes>
</front>
<body>
<sec sec-type="intro" id="sec001">
<title>Introduction</title>
<p>Robots have been used in urban search and rescue (USAR) for the last ten years [
<xref rid="pone.0132020.ref001" ref-type="bibr">1</xref>
]. Human-Robot Interaction (HRI) is a field of study dedicated to understanding, designing, and evaluating robotic systems for use by or in interaction with humans [
<xref rid="pone.0132020.ref002" ref-type="bibr">2</xref>
]. The need for advanced HRI algorithms that are responsive to real time variations of the physical and psychological states of human users in uncalibrated environments has been felt in many applications like fire-fighting, disaster response, and search and rescue operations [
<xref rid="pone.0132020.ref003" ref-type="bibr">3</xref>
,
<xref rid="pone.0132020.ref004" ref-type="bibr">4</xref>
].</p>
<p>Several attempts have been made to guide people who have vision and auditory impairments [
<xref rid="pone.0132020.ref005" ref-type="bibr">5</xref>
] or find themselves in situations which cause their vision and hearing to be impaired. For example, blind people use guide dogs [
<xref rid="pone.0132020.ref006" ref-type="bibr">6</xref>
] to help them find their way, while fire-fighters who find themselves in low visibility conditions with high auditory distractions depend on the touch sensation of walls [
<xref rid="pone.0132020.ref007" ref-type="bibr">7</xref>
]. Fire-fighters have to work in low visibility conditions due to smoke or dust, and under high auditory distraction due to their oxygen masks and other sounds in a typical fire-fighting environment. Nowadays, they depend on the touch sensation (haptics) of walls for localization and on ropes for finding direction [
<xref rid="pone.0132020.ref007" ref-type="bibr">7</xref>
]. A personal navigation system which uses Global Positioning System (GPS) and magnetic sensors was introduced to guide blind people in [
<xref rid="pone.0132020.ref006" ref-type="bibr">6</xref>
]. The main limitation of this approach is that upon arriving at a decision making point the user has to depend on gesture based visual communication with the navigation support system, which may not be appropriate in low visibility conditions.</p>
<p>This paper presents an identification of the abstracted dynamics of haptic-based human control policies and human responses when guiding/following via hard reins in low visibility conditions. The extracted haptic-based guidance policies can be implemented on a robot to guide a human in low visibility conditions such as indoor fire-fighting, disaster response, and search and rescue operations.</p>
<p>A robotic guide dog with environment perception capability called Rovi has been developed [
<xref rid="pone.0132020.ref008" ref-type="bibr">8</xref>
] to guide a human with limited environment perception. Rovi could avoid obstacles and reach a target on a smooth indoor floor; however, it encountered difficulties in uncertain environments. An auditory navigation support system for the blind is discussed in [
<xref rid="pone.0132020.ref009" ref-type="bibr">9</xref>
], where visually impaired (blindfolded) human subjects were given verbal commands by a speech synthesizer. However, speech synthesis is not appropriate for guiding a visually impaired person in stressful situations such as a fire emergency, where background noise levels are high [
<xref rid="pone.0132020.ref010" ref-type="bibr">10</xref>
]. Ulrich
<italic>et al</italic>
. developed a guide cane without acoustic feedback in 2001 [
<xref rid="pone.0132020.ref010" ref-type="bibr">10</xref>
]. The guide cane has the ability to analyze the situation, determine an appropriate direction to avoid the obstacle, and steer the wheels without requiring any conscious effort [
<xref rid="pone.0132020.ref010" ref-type="bibr">10</xref>
]. Most of the developed devices do not receive feedback from the visually impaired user. A robotic guide called MELDOG was designed by Tachi
<italic>et al</italic>
. [
<xref rid="pone.0132020.ref011" ref-type="bibr">11</xref>
] to introduce effective mobility aids for blind people. Loomis
<italic>et al</italic>
. [
<xref rid="pone.0132020.ref012" ref-type="bibr">12</xref>
] developed a personal navigation system to guide blind people in familiar and unfamiliar environments. Both the MELDOG [
<xref rid="pone.0132020.ref011" ref-type="bibr">11</xref>
] and Loomis
<italic>et al</italic>
. [
<xref rid="pone.0132020.ref012" ref-type="bibr">12</xref>
] navigators could only follow commands given by the user to reach the destination; the user’s response was not taken into account for navigation. In contrast, human responses were considered in the cooperative human-robot haptic navigation of [
<xref rid="pone.0132020.ref013" ref-type="bibr">13</xref>
] in unstructured environments. The method in [
<xref rid="pone.0132020.ref013" ref-type="bibr">13</xref>
] was designed for unstructured environments since the mobile robot was designed to use its on board sensors to localize in the environment and follow a path, while the blind user is tracked using an RGB-D camera placed on the mobile robot. However, our intention is to extract the guiding/following control policies which can be implemented on an intelligent agent to guide a human in unstructured environments in low visibility conditions.</p>
<p>Reinforcement learning and learning from demonstration are commonly used to develop control policies for human guidance by robots in low visibility environments. To identify the parameters of an auto-regressive policy, we chose learning from demonstration. This method is safer because the controller structure and parameters can be identified offline. The identified controller can be tested for optimality and stability using simple numerical simulations before being tested online on a robotic hardware platform. In this particular scenario, the human guiders used a hard rein to guide the follower. They gave guiding signals by swinging the rein left/right in the horizontal plane, with negligible vertical movement. Therefore, the human guiding strategy can be realized by a planar 1-DoF robotic arm with a passive joint at the end point to connect the hard rein. We demonstrated the effectiveness of this idea by exporting the controller identified from human-human demonstrations directly to the planar robotic arm. The identified controller was used without any modification.</p>
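The offline stability check described above can be sketched as follows. This is an illustrative reconstruction, not the paper's code: it assumes a purely auto-regressive approximation of the guiding policy, and the coefficient values shown are hypothetical, not the ones identified from the demonstrations.

```python
import numpy as np

def ar_characteristic_roots(coeffs):
    """Roots of the characteristic polynomial of an AR(N) policy
    theta_t = a1*theta_(t-1) + ... + aN*theta_(t-N).
    All root moduli below 1.0 imply a stable (decaying) policy."""
    # characteristic polynomial: z**N - a1*z**(N-1) - ... - aN = 0
    poly = np.concatenate(([1.0], -np.asarray(coeffs, dtype=float)))
    return np.roots(poly)

# hypothetical 3rd-order coefficients standing in for identified values
a = [0.5, 0.2, 0.1]
moduli = np.abs(ar_characteristic_roots(a))
print("largest root modulus:", moduli.max())  # stable if below 1.0
```

A check of this kind can be run on every identified parameter set before any hardware trial, which is the safety advantage of offline identification noted above.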
<p>We presented haptic based human guidance in our previous studies where one person with limited auditory and visual perception of the environment is guided by an agent with full perceptual capabilities via a hard rein in [
<xref rid="pone.0132020.ref014" ref-type="bibr">14</xref>
] and [
<xref rid="pone.0132020.ref015" ref-type="bibr">15</xref>
]. In this study, we were inspired by the guide dog scenario, among the many examples of guiding via reins, because a hard rein is used to establish a connection between the dog and the visually impaired follower. To the best of our knowledge, this is the first paper to present a computational model of a closed-loop state-dependent haptic guiding policy and a state-transition following policy.</p>
<p>We argue that any robotic assistant to a person with limited perception of the environment should account for the level of trust of the person. Trust is one of the most critical factors in urban search and rescue missions because it can impact the decisions humans make in uncertain conditions [
<xref rid="pone.0132020.ref016" ref-type="bibr">16</xref>
]. Several attempts have been made to study the level of trust of a human with limited perception of the environment [
<xref rid="pone.0132020.ref017" ref-type="bibr">17</xref>
], [
<xref rid="pone.0132020.ref018" ref-type="bibr">18</xref>
] in different environments. In a simulated game of fire-fighting, Stormont
<italic>et al</italic>
. [
<xref rid="pone.0132020.ref017" ref-type="bibr">17</xref>
] showed that the fire-fighters become increasingly dependent on robotic agents when the fire starts to spread due to randomly changing wind directions. Freedy [
<xref rid="pone.0132020.ref018" ref-type="bibr">18</xref>
] has discussed how self-confidence correlates with trust in automation in human-robot collaboration. Recent studies confirmed that higher trust levels increase activeness in human-robot shared control [
<xref rid="pone.0132020.ref019" ref-type="bibr">19</xref>
,
<xref rid="pone.0132020.ref020" ref-type="bibr">20</xref>
]. Moreover, [
<xref rid="pone.0132020.ref020" ref-type="bibr">20</xref>
] and [
<xref rid="pone.0132020.ref016" ref-type="bibr">16</xref>
] studied how human trust can be explained quantitatively. However, our attempt is not only to quantify human trust but also to model it in real time. In HRI, it is becoming increasingly important to consider the human’s trust in the robotic counterpart in uncertain environments like real fire. In this paper we discuss a novel optimal state-dependent controller that accounts for the trust level of the follower as part of the state.</p>
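The real-time trust model used here, a time-varying virtual damped inertial system, can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the model form (mass times acceleration plus damping times velocity equals rein force) is taken from the abstract, while the sliding-window least-squares estimator and all variable names are assumptions.

```python
import numpy as np

def estimate_virtual_damping(force, vel, acc, mass, window=50):
    """Estimate a time-varying virtual damping coefficient b(t) from
    measured rein force and follower velocity/acceleration, assuming
    the damped inertial model  mass*acc + b*vel = force.
    One least-squares estimate of b is produced per sliding window."""
    b_hat = []
    for k in range(len(force) - window):
        v_w = vel[k:k + window]
        # force attributable to damping: b*vel = force - mass*acc
        rhs = force[k:k + window] - mass * acc[k:k + window]
        # scalar least squares: b = (v . rhs) / (v . v)
        b_hat.append(float(v_w @ rhs / (v_w @ v_w)))
    return np.array(b_hat)
```

Under this reading, the estimated damping trace tracks how readily the follower complies with the guider's tugs at each moment, which is the quantity the paper interprets as the follower's momentary trust level.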
<p>In summary, this paper presents the experimental methodology employed to collect data of human-human interaction via a hard rein while tracking an arbitrary path. For simplicity, hereafter “the follower” refers to the person with limited auditory and visual perception and “the guider” refers to the person with full auditory and visual perception. We describe the mathematical models of the guider’s and the follower’s state-dependent control policies in detail. The experimental results of human subjects, along with numerical simulation results, are used to show the stability of the control policy identified through experiments. The paper also discusses the virtual time-varying damped inertial model used to estimate the trust of the follower. Moreover, we validate the extracted guiding control policy by implementing it on a planar 1-DoF robotic arm in human-robot interactions.</p>
</sec>
<sec id="sec002">
<title>Modeling</title>
<sec id="sec003">
<title>The guider’s closed loop control policy</title>
<p>Let the state be the relative orientation between the guider and the follower given by
<italic>ϕ</italic>
, and the action be the angle of the rein relative to the sensor on the chest of the guider given by
<italic>θ</italic>
as shown in
<xref ref-type="fig" rid="pone.0132020.g001">Fig 1C</xref>
(see
<xref ref-type="sec" rid="sec028">Material and methods</xref>
: Experiment 1: Extracting guiding/following control policies). We model the guider&#8217;s control policy by an auto-regressive (AR) model as an
<italic>N</italic>
-th order state-dependent discrete linear controller. The AR model captures both the temporal (model nature) and structural (model order) relationships. The order
<italic>N</italic>
depends on the number of discrete state samples used to calculate the current action.</p>
<fig id="pone.0132020.g001" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0132020.g001</object-id>
<label>Fig 1</label>
<caption>
<title>The experimental setup.</title>
<p>Experiment 1: (A) Tracking the path by the duo: The visually and auditorily distracted follower is guided by the guider. The tug signal was given via a hard rein, (B) The state
<italic>ϕ</italic>
(the relative orientation difference between the guider and the follower) and the action
<italic>θ</italic>
(angle of the rein relative to the guiding agent), (C) The detailed diagram of the labeled wiggly path on the floor, (D) EMG sensors attached to the anterior deltoid, posterior deltoid, biceps, and triceps of the subject&#8217;s arm, Experiment 2: (E) The experimental layout of the trust studies: P1: ninety-degree turn, P2: sixty-degree turn, and P3: straight path, (F) Experiment 3:
<italic>ϕ</italic>
is the relative orientation difference between the motor shaft and the guider, and
<italic>θ</italic>
is the swing action in horizontal plane, and (G) Experimental setup: The hard rein was held by the human follower connected to the robotic arm across a passive joint. The cord was attached to the waist belt of the blindfolded subjects and the encoder on the shaft platform to measure the relative error (
<italic>ϕ</italic>
).</p>
</caption>
<graphic xlink:href="pone.0132020.g001"></graphic>
</fig>
<p>Then the linear discrete control policy of the guider is given by
<disp-formula id="pone.0132020.e001">
<alternatives>
<graphic xlink:href="pone.0132020.e001.jpg" id="pone.0132020.e001g" mimetype="image" position="anchor" orientation="portrait"></graphic>
<mml:math id="M1">
<mml:mtable displaystyle="true">
<mml:mtr>
<mml:mtd columnalign="right">
<mml:mrow>
<mml:msub>
<mml:mi>θ</mml:mi>
<mml:mi>g</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>k</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>=</mml:mo>
<mml:munderover>
<mml:mo></mml:mo>
<mml:mrow>
<mml:mi>r</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>0</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mi>N</mml:mi>
<mml:mo>-</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:munderover>
<mml:msubsup>
<mml:mi>a</mml:mi>
<mml:mi>r</mml:mi>
<mml:mrow>
<mml:mi>g</mml:mi>
<mml:mi>R</mml:mi>
<mml:mi>e</mml:mi>
</mml:mrow>
</mml:msubsup>
<mml:msub>
<mml:mi>ϕ</mml:mi>
<mml:mi>g</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>k</mml:mi>
<mml:mo>-</mml:mo>
<mml:mi>r</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>+</mml:mo>
<mml:msup>
<mml:mi>c</mml:mi>
<mml:mrow>
<mml:mi>g</mml:mi>
<mml:mi>R</mml:mi>
<mml:mi>e</mml:mi>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:math>
</alternatives>
<label>(1)</label>
</disp-formula>
if it is a reactive controller, and
<disp-formula id="pone.0132020.e002">
<alternatives>
<graphic xlink:href="pone.0132020.e002.jpg" id="pone.0132020.e002g" mimetype="image" position="anchor" orientation="portrait"></graphic>
<mml:math id="M2">
<mml:mtable displaystyle="true">
<mml:mtr>
<mml:mtd columnalign="right">
<mml:mrow>
<mml:msub>
<mml:mi>θ</mml:mi>
<mml:mi>g</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>k</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>=</mml:mo>
<mml:munderover>
<mml:mo></mml:mo>
<mml:mrow>
<mml:mi>r</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>0</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mi>N</mml:mi>
<mml:mo>-</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:munderover>
<mml:msubsup>
<mml:mi>a</mml:mi>
<mml:mi>r</mml:mi>
<mml:mrow>
<mml:mi>g</mml:mi>
<mml:mi>P</mml:mi>
<mml:mi>r</mml:mi>
<mml:mi>e</mml:mi>
</mml:mrow>
</mml:msubsup>
<mml:msub>
<mml:mi>ϕ</mml:mi>
<mml:mi>g</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>k</mml:mi>
<mml:mo>+</mml:mo>
<mml:mi>r</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>+</mml:mo>
<mml:msup>
<mml:mi>c</mml:mi>
<mml:mrow>
<mml:mi>g</mml:mi>
<mml:mi>P</mml:mi>
<mml:mi>r</mml:mi>
<mml:mi>e</mml:mi>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:math>
</alternatives>
<label>(2)</label>
</disp-formula>
if it is a predictive controller, where,
<italic>k</italic>
denotes the sampling step,
<italic>N</italic>
is the order of the polynomial,
<inline-formula id="pone.0132020.e003">
<alternatives>
<graphic xlink:href="pone.0132020.e003.jpg" id="pone.0132020.e003g" mimetype="image" position="anchor" orientation="portrait"></graphic>
<mml:math id="M3">
<mml:mrow>
<mml:msubsup>
<mml:mi>a</mml:mi>
<mml:mi>r</mml:mi>
<mml:mrow>
<mml:mi>g</mml:mi>
<mml:mi>R</mml:mi>
<mml:mi>e</mml:mi>
</mml:mrow>
</mml:msubsup>
<mml:mo>,</mml:mo>
<mml:msubsup>
<mml:mi>a</mml:mi>
<mml:mi>r</mml:mi>
<mml:mrow>
<mml:mi>g</mml:mi>
<mml:mi>P</mml:mi>
<mml:mi>r</mml:mi>
<mml:mi>e</mml:mi>
</mml:mrow>
</mml:msubsup>
<mml:mo>,</mml:mo>
<mml:mi>r</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>,</mml:mo>
<mml:mn>2</mml:mn>
<mml:mo>,</mml:mo>
<mml:mo></mml:mo>
<mml:mo>,</mml:mo>
<mml:mi>N</mml:mi>
<mml:mo></mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:math>
</alternatives>
</inline-formula>
are the polynomial coefficients corresponding to the
<italic>r</italic>
-th state in the reactive and predictive models, respectively, and
<italic>c</italic>
<sup>
<italic>gRe</italic>
</sup>
,
<italic>c</italic>
<sup>
<italic>gPre</italic>
</sup>
are corresponding scalars.</p>
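As a concrete illustration, both the reactive policy of Eq (1) and the predictive policy of Eq (2) can be estimated from recorded state and action sequences by ordinary least squares. The following is a minimal Python sketch of that regression (NumPy assumed; `fit_ar_policy` is a hypothetical helper written for illustration, not the authors&#8217; code):

```python
import numpy as np

def fit_ar_policy(phi, theta, N, predictive=False):
    """Least-squares fit of the N-th order linear policy in Eqs (1)/(2):
    reactive:   theta(k) = sum_r a_r * phi(k - r) + c
    predictive: theta(k) = sum_r a_r * phi(k + r) + c
    Returns (a, c, R^2). Hypothetical helper for illustration only."""
    phi, theta = np.asarray(phi, float), np.asarray(theta, float)
    rows, targets = [], []
    for k in range(len(phi)):
        idx = [k + r if predictive else k - r for r in range(N)]
        if min(idx) < 0 or max(idx) >= len(phi):
            continue  # skip samples whose regressors fall outside the record
        rows.append([phi[i] for i in idx] + [1.0])  # states plus intercept term
        targets.append(theta[k])
    X, y = np.array(rows), np.array(targets)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    a, c = coef[:-1], coef[-1]
    r2 = 1.0 - np.sum((y - X @ coef) ** 2) / np.sum((y - y.mean()) ** 2)
    return a, c, r2
```

Fitting both variants at several orders N and comparing the resulting R&#178; values is exactly the model-selection procedure applied to the experimental data below.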
</sec>
<sec id="sec004">
<title>The follower’s state transition policy</title>
<p>While the guider’s control policy is represented by Eqs (
<xref ref-type="disp-formula" rid="pone.0132020.e001">1</xref>
) and (
<xref ref-type="disp-formula" rid="pone.0132020.e002">2</xref>
), we again model the follower’s state transition policy as an
<italic>N</italic>
-th order action-dependent discrete linear controller to understand the behavior of the follower. The order
<italic>N</italic>
depends on the number of past actions used to calculate the current state. Then the linear discrete control policy of the follower is given by
<disp-formula id="pone.0132020.e004">
<alternatives>
<graphic xlink:href="pone.0132020.e004.jpg" id="pone.0132020.e004g" mimetype="image" position="anchor" orientation="portrait"></graphic>
<mml:math id="M4">
<mml:mtable displaystyle="true">
<mml:mtr>
<mml:mtd columnalign="right">
<mml:mrow>
<mml:msub>
<mml:mi>ϕ</mml:mi>
<mml:mi>f</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>k</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>=</mml:mo>
<mml:munderover>
<mml:mo></mml:mo>
<mml:mrow>
<mml:mi>r</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>0</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mi>N</mml:mi>
<mml:mo>-</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:munderover>
<mml:msubsup>
<mml:mi>a</mml:mi>
<mml:mi>r</mml:mi>
<mml:mrow>
<mml:mi>f</mml:mi>
<mml:mi>R</mml:mi>
<mml:mi>e</mml:mi>
</mml:mrow>
</mml:msubsup>
<mml:msub>
<mml:mi>θ</mml:mi>
<mml:mi>f</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>k</mml:mi>
<mml:mo>-</mml:mo>
<mml:mi>r</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>+</mml:mo>
<mml:msup>
<mml:mi>c</mml:mi>
<mml:mrow>
<mml:mi>f</mml:mi>
<mml:mi>R</mml:mi>
<mml:mi>e</mml:mi>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:math>
</alternatives>
<label>(3)</label>
</disp-formula>
if it is a reactive controller, and
<disp-formula id="pone.0132020.e005">
<alternatives>
<graphic xlink:href="pone.0132020.e005.jpg" id="pone.0132020.e005g" mimetype="image" position="anchor" orientation="portrait"></graphic>
<mml:math id="M5">
<mml:mtable displaystyle="true">
<mml:mtr>
<mml:mtd columnalign="right">
<mml:mrow>
<mml:msub>
<mml:mi>ϕ</mml:mi>
<mml:mi>f</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>k</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>=</mml:mo>
<mml:munderover>
<mml:mo></mml:mo>
<mml:mrow>
<mml:mi>r</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>0</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mi>N</mml:mi>
<mml:mo>-</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:munderover>
<mml:msubsup>
<mml:mi>a</mml:mi>
<mml:mi>r</mml:mi>
<mml:mrow>
<mml:mi>f</mml:mi>
<mml:mi>P</mml:mi>
<mml:mi>r</mml:mi>
<mml:mi>e</mml:mi>
</mml:mrow>
</mml:msubsup>
<mml:msub>
<mml:mi>θ</mml:mi>
<mml:mi>f</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>k</mml:mi>
<mml:mo>+</mml:mo>
<mml:mi>r</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>+</mml:mo>
<mml:msup>
<mml:mi>c</mml:mi>
<mml:mrow>
<mml:mi>f</mml:mi>
<mml:mi>P</mml:mi>
<mml:mi>r</mml:mi>
<mml:mi>e</mml:mi>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:math>
</alternatives>
<label>(4)</label>
</disp-formula>
</p>
<p>if it is a predictive controller, where,
<italic>k</italic>
denotes the sampling step,
<italic>N</italic>
is the order of the polynomial,
<inline-formula id="pone.0132020.e006">
<alternatives>
<graphic xlink:href="pone.0132020.e006.jpg" id="pone.0132020.e006g" mimetype="image" position="anchor" orientation="portrait"></graphic>
<mml:math id="M6">
<mml:mrow>
<mml:msubsup>
<mml:mi>a</mml:mi>
<mml:mi>r</mml:mi>
<mml:mrow>
<mml:mi>f</mml:mi>
<mml:mi>R</mml:mi>
<mml:mi>e</mml:mi>
</mml:mrow>
</mml:msubsup>
<mml:mo>,</mml:mo>
<mml:msubsup>
<mml:mi>a</mml:mi>
<mml:mi>r</mml:mi>
<mml:mrow>
<mml:mi>f</mml:mi>
<mml:mi>P</mml:mi>
<mml:mi>r</mml:mi>
<mml:mi>e</mml:mi>
</mml:mrow>
</mml:msubsup>
<mml:mo>,</mml:mo>
<mml:mi>r</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>,</mml:mo>
<mml:mn>2</mml:mn>
<mml:mo>,</mml:mo>
<mml:mo></mml:mo>
<mml:mo>,</mml:mo>
<mml:mi>N</mml:mi>
<mml:mo></mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:math>
</alternatives>
</inline-formula>
are the polynomial coefficients corresponding to the
<italic>r</italic>
-th action in the reactive and predictive models, respectively, and
<italic>c</italic>
<sup>
<italic>fRe</italic>
</sup>
,
<italic>c</italic>
<sup>
<italic>fPre</italic>
</sup>
are corresponding scalars. These linear controllers in Eqs (
<xref ref-type="disp-formula" rid="pone.0132020.e001">1</xref>
), (
<xref ref-type="disp-formula" rid="pone.0132020.e002">2</xref>
), (
<xref ref-type="disp-formula" rid="pone.0132020.e004">3</xref>
), and (
<xref ref-type="disp-formula" rid="pone.0132020.e005">4</xref>
) can be regressed with the experiment 1 data obtained in the guider-follower experiments above to get the behavior of the polynomial coefficients across trials (see
<xref ref-type="sec" rid="sec028">material and methods</xref>
Experiment 1: Extracting guiding/following control policies). The behavior of these coefficients for all human subjects across the learning trials will give us useful insights as to the predictive/reactive nature, variability, and stability of the control policy learned by human guiders. Furthermore, a linear control policy given in Eqs (
<xref ref-type="disp-formula" rid="pone.0132020.e001">1</xref>
), (
<xref ref-type="disp-formula" rid="pone.0132020.e002">2</xref>
), (
<xref ref-type="disp-formula" rid="pone.0132020.e004">3</xref>
), and (
<xref ref-type="disp-formula" rid="pone.0132020.e005">4</xref>
) would make it easy to transfer the fully learned control policy to a robotic guider in low visibility conditions.</p>
</sec>
<sec id="sec005">
<title>Modeling the follower as a virtual time varying damped inertial system</title>
<p>In order to study how the above control policy would interact with the follower in an arbitrary path tracking task, we model the voluntary following behavior of the blindfolded human subject (follower) as a damped inertial system, where a tug force
<italic>F</italic>
(
<italic>k</italic>
) applied along the follower’s heading direction at sampling step
<italic>k</italic>
would result in a transition of position given by
<disp-formula id="pone.0132020.e007">
<alternatives>
<graphic xlink:href="pone.0132020.e007.jpg" id="pone.0132020.e007g" mimetype="image" position="anchor" orientation="portrait"></graphic>
<mml:math id="M7">
<mml:mtable displaystyle="true">
<mml:mtr>
<mml:mtd columnalign="right">
<mml:mrow>
<mml:mi>F</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>k</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>=</mml:mo>
<mml:mi>M</mml:mi>
<mml:mover accent="true">
<mml:msub>
<mml:mi>P</mml:mi>
<mml:mi>f</mml:mi>
</mml:msub>
<mml:mo>¨</mml:mo>
</mml:mover>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>k</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>+</mml:mo>
<mml:mi>ζ</mml:mi>
<mml:mover accent="true">
<mml:msub>
<mml:mi>P</mml:mi>
<mml:mi>f</mml:mi>
</mml:msub>
<mml:mo>˙</mml:mo>
</mml:mover>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>k</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:math>
</alternatives>
<label>(5)</label>
</disp-formula>
where the tug force
<italic>F</italic>
(
<italic>k</italic>
) ∈
<italic>Re</italic>
<sup>2</sup>
, the virtual mass
<italic>M</italic>
&#8712;
<italic>Re</italic>
, position vector
<italic>P</italic>
<sub>
<italic>f</italic>
</sub>
(
<italic>k</italic>
) ∈
<italic>Re</italic>
<sup>2</sup>
, and the virtual damping coefficient
<italic>ζ</italic>
&#8712;
<italic>Re</italic>
. It should be noted that the virtual mass and damping coefficients are not the real coefficients of the follower&#8217;s stationary body, but those felt by the guider while the duo is in voluntary movement. This dynamic equation can be approximated by a discrete state-space equation given by
<disp-formula id="pone.0132020.e008">
<alternatives>
<graphic xlink:href="pone.0132020.e008.jpg" id="pone.0132020.e008g" mimetype="image" position="anchor" orientation="portrait"></graphic>
<mml:math id="M8">
<mml:mtable displaystyle="true">
<mml:mtr>
<mml:mtd columnalign="right">
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mo>(</mml:mo>
<mml:mi>k</mml:mi>
<mml:mo>+</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>)</mml:mo>
<mml:mo>=</mml:mo>
<mml:mi>A</mml:mi>
<mml:mi>x</mml:mi>
<mml:mo>(</mml:mo>
<mml:mi>k</mml:mi>
<mml:mo>)</mml:mo>
<mml:mo>+</mml:mo>
<mml:mi>B</mml:mi>
<mml:mi>u</mml:mi>
<mml:mo>(</mml:mo>
<mml:mi>k</mml:mi>
<mml:mo>+</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:math>
</alternatives>
<label>(6)</label>
</disp-formula>
where,
<italic>k</italic>
is the sampling step,
<inline-formula id="pone.0132020.e009">
<alternatives>
<graphic xlink:href="pone.0132020.e009.jpg" id="pone.0132020.e009g" mimetype="image" position="anchor" orientation="portrait"></graphic>
<mml:math id="M9">
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mo stretchy="false">(</mml:mo>
<mml:mi>k</mml:mi>
<mml:mo>+</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo stretchy="false">)</mml:mo>
<mml:mo>=</mml:mo>
<mml:mrow>
<mml:mo stretchy="true">[</mml:mo>
<mml:mtable>
<mml:mtr>
<mml:mtd>
<mml:msub>
<mml:mi>P</mml:mi>
<mml:mi>f</mml:mi>
</mml:msub>
<mml:mo stretchy="false">(</mml:mo>
<mml:mi>k</mml:mi>
<mml:mo>+</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo stretchy="false">)</mml:mo>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:msub>
<mml:mi>P</mml:mi>
<mml:mi>f</mml:mi>
</mml:msub>
<mml:mo stretchy="false">(</mml:mo>
<mml:mi>k</mml:mi>
<mml:mo stretchy="false">)</mml:mo>
</mml:mtd>
</mml:mtr>
</mml:mtable>
<mml:mo stretchy="true">]</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</alternatives>
</inline-formula>
,
<inline-formula id="pone.0132020.e010">
<alternatives>
<graphic xlink:href="pone.0132020.e010.jpg" id="pone.0132020.e010g" mimetype="image" position="anchor" orientation="portrait"></graphic>
<mml:math id="M10">
<mml:mrow>
<mml:mi>A</mml:mi>
<mml:mo>=</mml:mo>
<mml:mrow>
<mml:mo stretchy="true">[</mml:mo>
<mml:mtable>
<mml:mtr>
<mml:mtd>
<mml:mo stretchy="false">(</mml:mo>
<mml:mn>2</mml:mn>
<mml:mi>M</mml:mi>
<mml:mo>+</mml:mo>
<mml:mi>T</mml:mi>
<mml:mi>ζ</mml:mi>
<mml:mo stretchy="false">)</mml:mo>
<mml:mo>/</mml:mo>
<mml:mo stretchy="false">(</mml:mo>
<mml:mi>M</mml:mi>
<mml:mo>+</mml:mo>
<mml:mi>T</mml:mi>
<mml:mi>ζ</mml:mi>
<mml:mo stretchy="false">)</mml:mo>
</mml:mtd>
<mml:mtd>
<mml:mo></mml:mo>
<mml:mi>M</mml:mi>
<mml:mo>/</mml:mo>
<mml:mo stretchy="false">(</mml:mo>
<mml:mi>M</mml:mi>
<mml:mo>+</mml:mo>
<mml:mi>T</mml:mi>
<mml:mi>ζ</mml:mi>
<mml:mo stretchy="false">)</mml:mo>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mn>1</mml:mn>
</mml:mtd>
<mml:mtd>
<mml:mn>0</mml:mn>
</mml:mtd>
</mml:mtr>
</mml:mtable>
<mml:mo stretchy="true">]</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</alternatives>
</inline-formula>
,
<inline-formula id="pone.0132020.e011">
<alternatives>
<graphic xlink:href="pone.0132020.e011.jpg" id="pone.0132020.e011g" mimetype="image" position="anchor" orientation="portrait"></graphic>
<mml:math id="M11">
<mml:mrow>
<mml:mi>B</mml:mi>
<mml:mo>=</mml:mo>
<mml:mrow>
<mml:mo stretchy="true">[</mml:mo>
<mml:mtable>
<mml:mtr>
<mml:mtd>
<mml:msup>
<mml:mi>T</mml:mi>
<mml:mn>2</mml:mn>
</mml:msup>
<mml:mo>/</mml:mo>
<mml:mo stretchy="false">(</mml:mo>
<mml:mi>M</mml:mi>
<mml:mo>+</mml:mo>
<mml:mi>T</mml:mi>
<mml:mi>ζ</mml:mi>
<mml:mo stretchy="false">)</mml:mo>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mn>0</mml:mn>
</mml:mtd>
</mml:mtr>
</mml:mtable>
<mml:mo stretchy="true">]</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</alternatives>
</inline-formula>
,
<italic>u</italic>
(
<italic>k</italic>
) =
<italic>F</italic>
(
<italic>k</italic>
), and <italic>T</italic> is the sampling time.</p>
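To make the discretization concrete: substituting backward differences for the derivatives in Eq (5) and solving for the next position yields exactly the A and B matrices above. The following minimal NumPy sketch performs one step of Eq (6) for a single scalar coordinate (an illustration, not the authors&#8217; code):

```python
import numpy as np

def damped_inertial_step(p_curr, p_prev, force, M, zeta, T):
    """One step of the discrete state-space model of Eq (6), obtained from
    F = M*P_ddot + zeta*P_dot via backward differences. Illustrative sketch."""
    A = np.array([[(2 * M + T * zeta) / (M + T * zeta), -M / (M + T * zeta)],
                  [1.0, 0.0]])
    B = np.array([T ** 2 / (M + T * zeta), 0.0])
    x_next = A @ np.array([p_curr, p_prev]) + B * force
    return x_next[0]  # the follower's next position P_f(k+1)
```

Note that increasing the virtual damping &#950; shrinks the displacement produced by a given tug force, which is what makes &#950; a natural candidate for quantifying the follower&#8217;s resistance, and hence trust.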
<p>Given the updated position of the follower
<italic>P</italic>
<sub>
<italic>f</italic>
</sub>
(
<italic>k</italic>
), the new position of the guider
<italic>P</italic>
<sub>
<italic>g</italic>
</sub>
(
<italic>k</italic>
) can be easily calculated by imposing the constraint ‖
<italic>P</italic>
<sub>
<italic>f</italic>
</sub>
(
<italic>k</italic>
) −
<italic>P</italic>
<sub>
<italic>g</italic>
</sub>
(
<italic>k</italic>
)‖ =
<italic>L</italic>
, where
<italic>L</italic>
is the length of the hard rein. We obtain the guider’s location assuming that the guider is always on the known desired path. Therefore, given a follower’s position
<italic>P</italic>
<sub>
<italic>f</italic>
</sub>
(
<italic>k</italic>
) the intersection of the desired path and the circle with center at
<italic>P</italic>
<sub>
<italic>f</italic>
</sub>
(
<italic>k</italic>
) and radius
<italic>L</italic>
will give the guider’s location.</p>
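The geometric constraint &#8214;P<sub>f</sub>(k) &#8722; P<sub>g</sub>(k)&#8214; = L can be resolved numerically by intersecting the circle of radius L around the follower with the desired path. The sketch below assumes the path is represented as a polyline of waypoints (a simplification of the actual wiggly path; `guider_position` is an illustrative helper, not the authors&#8217; implementation):

```python
import numpy as np

def guider_position(p_f, path, L):
    """Return the intersection of the desired path (a polyline of waypoints)
    with the circle of radius L centred at the follower's position p_f,
    i.e. the guider's position under the hard-rein constraint."""
    p_f = np.asarray(p_f, float)
    for a, b in zip(path[:-1], path[1:]):
        a, b = np.asarray(a, float), np.asarray(b, float)
        d = b - a                      # segment direction
        f = a - p_f
        # Solve |a + t*d - p_f|^2 = L^2 for t in [0, 1].
        qa, qb, qc = d @ d, 2 * f @ d, f @ f - L ** 2
        disc = qb ** 2 - 4 * qa * qc
        if qa == 0 or disc < 0:
            continue
        # Prefer the larger root: the intersection further along the path.
        for t in sorted([(-qb - np.sqrt(disc)) / (2 * qa),
                         (-qb + np.sqrt(disc)) / (2 * qa)], reverse=True):
            if 0.0 <= t <= 1.0:
                return a + t * d
    return None  # follower is more than L away from every path segment
```

Taking the larger root on each segment places the guider ahead of the follower along the path, consistent with the guiding scenario.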
</sec>
</sec>
<sec sec-type="results" id="sec006">
<title>Results</title>
<sec id="sec007">
<title>Experiment 1: Extracting guiding/following control policies</title>
<p>We conducted Experiment 1 with 15 naive pairs to understand how the coefficients of the control policies relating states
<italic>ϕ</italic>
and actions
<italic>θ</italic>
given in Eqs (
<xref ref-type="disp-formula" rid="pone.0132020.e001">1</xref>
), (
<xref ref-type="disp-formula" rid="pone.0132020.e002">2</xref>
), (
<xref ref-type="disp-formula" rid="pone.0132020.e004">3</xref>
), and (
<xref ref-type="disp-formula" rid="pone.0132020.e005">4</xref>
) settle down across learning trials. In order to have a deeper insight into how the coefficients in the discrete linear controller in Eqs (
<xref ref-type="disp-formula" rid="pone.0132020.e001">1</xref>
), (
<xref ref-type="disp-formula" rid="pone.0132020.e002">2</xref>
), (
<xref ref-type="disp-formula" rid="pone.0132020.e004">3</xref>
), and (
<xref ref-type="disp-formula" rid="pone.0132020.e005">4</xref>
) change across learning trials, we ask whether 1) the guider and the follower tend to learn a predictive/reactive controller across trials, 2) the order of the control policy of the guider in Eqs (
<xref ref-type="disp-formula" rid="pone.0132020.e001">1</xref>
) and (
<xref ref-type="disp-formula" rid="pone.0132020.e002">2</xref>
) and the order of the control policy of the follower in Eqs (
<xref ref-type="disp-formula" rid="pone.0132020.e004">3</xref>
) and (
<xref ref-type="disp-formula" rid="pone.0132020.e005">4</xref>
) change over trials, and if so, what their steady-state orders would be.</p>
<sec id="sec008">
<title>Adoption of wave families for action and state vector profiles</title>
<p>Since the raw motion data are noisy, we used the Wavelet Toolbox (The MathWorks Inc.) to reduce the noise in the action and state vectors before finding regression coefficients. The guider&#8217;s action is a continuous swing and pull in the horizontal plane. For clarity, we plotted the guider&#8217;s arm movement in the horizontal and vertical planes for a representative trial, as shown in
<xref ref-type="fig" rid="pone.0132020.g002">Fig 2A</xref>
. The vertical movements are very slow compared to the horizontal movements. Therefore, we consider only horizontal movements to represent the arm action. We chose the Daubechies wavelet family (for sinusoidal waves) [
<xref rid="pone.0132020.ref021" ref-type="bibr">21</xref>
] in the wavelet analysis. According to previous studies [
<xref rid="pone.0132020.ref022" ref-type="bibr">22</xref>
,
<xref rid="pone.0132020.ref023" ref-type="bibr">23</xref>
] human arm movements are continuous and smooth. Therefore, a continuous mother wavelet (
<italic>db10</italic>
) is taken to represent the swing actions in wavelet analysis. For further clarity, we compared the percentage of energy representation of
<italic>db10</italic>
and
<italic>haar</italic>
wave families as shown in
<xref ref-type="fig" rid="pone.0132020.g002">Fig 2B</xref>
. Given its higher percentage of energy, we selected
<italic>db10</italic>
for our swing-type action analysis.</p>
<fig id="pone.0132020.g002" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0132020.g002</object-id>
<label>Fig 2</label>
<caption>
<title>Selection of wavelet family for guiding agent action and state vectors.</title>
<p>(A) The vertical movements and the horizontal action vector of the guider in a representative trial, (B) The percentage of energy representation of the action vector of all subjects in all trials for the db10 and haar wavelet families, and (C) The percentage of energy corresponding to decomposition levels 1 to 4 in the db10 wavelet family. The action vector averaged across all subjects over trials is used.</p>
</caption>
<graphic xlink:href="pone.0132020.g002"></graphic>
</fig>
<p>Then different decomposition levels were tested for
<italic>db10</italic>
. The percentage of energy corresponding to the approximation was found to be 99.66%, 93.47%, and 86.73% for decomposition levels 4, 8, and 15 respectively. The highest percentage of energy was obtained at decomposition level 4.</p>
<p>
<xref ref-type="fig" rid="pone.0132020.g002">Fig 2C</xref>
shows the percentage energy corresponding to decomposition levels 1–4 of the action vector. We use the 4
<sup>th</sup>
decomposition level for action vector analysis since this level has the highest percentage value (88%). The same procedure was applied to the state vector profile, and the results correspond to those obtained for the action vector profile. Based on these results we adopt the 4
<sup>th</sup>
decomposition level of
<italic>db10</italic>
wave family to analyze raw data of the action and the state.</p>
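The energy criterion used above can be reproduced with any orthonormal discrete wavelet transform: decompose the signal, then take the ratio of the approximation-coefficient energy to the total energy. The sketch below uses a plain-NumPy Haar transform purely for illustration (the study itself used db10 in the MATLAB Wavelet Toolbox):

```python
import numpy as np

def haar_step(x):
    """One level of the orthonormal Haar DWT (length of x assumed even)."""
    x = np.asarray(x, float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return approx, detail

def approx_energy_percent(x, levels=4):
    """Percentage of total signal energy retained by the level-`levels`
    approximation coefficients. Haar shown for simplicity; the study
    used db10."""
    x = np.asarray(x, float)
    total = np.sum(x ** 2)
    approx = x
    for _ in range(levels):
        approx, _detail = haar_step(approx)
    return 100.0 * np.sum(approx ** 2) / total
```

A smooth, slow arm trajectory concentrates nearly all its energy in the approximation band, whereas broadband noise spreads it across the detail levels; this is why a high approximation-energy percentage indicates a suitable decomposition level for denoising.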
</sec>
<sec id="sec009">
<title>Determination of the guider’s control policy</title>
<p>Hereafter, the 4
<sup>th</sup>
decomposition level of
<italic>db10</italic>
of action
<italic>θ</italic>
and state
<italic>ϕ</italic>
vectors are used for regression in Eqs (
<xref ref-type="disp-formula" rid="pone.0132020.e001">1</xref>
) and (
<xref ref-type="disp-formula" rid="pone.0132020.e002">2</xref>
). Once the coefficients of the polynomial in Eqs (
<xref ref-type="disp-formula" rid="pone.0132020.e001">1</xref>
) and (
<xref ref-type="disp-formula" rid="pone.0132020.e002">2</xref>
) are estimated, the best control policy (Eqs (
<xref ref-type="disp-formula" rid="pone.0132020.e001">1</xref>
) or (
<xref ref-type="disp-formula" rid="pone.0132020.e002">2</xref>
)), and the corresponding best order of the polynomial should give the best
<italic>R</italic>
<sup>2</sup>
value for a given trial across all subjects. Here, twenty experimental trials were binned to five for clarity.</p>
</sec>
<sec id="sec010">
<title>Determination of predictive/reactive nature of the guider’s control policy</title>
<p>Coefficients of Eqs (
<xref ref-type="disp-formula" rid="pone.0132020.e001">1</xref>
) and (
<xref ref-type="disp-formula" rid="pone.0132020.e002">2</xref>
) were estimated from 1
<sup>st</sup>
order to 4
<sup>th</sup>
order polynomials in
<xref ref-type="fig" rid="pone.0132020.g003">Fig 3A</xref>
to select best-fit policies. Dashed and solid lines denote reactive and predictive models, respectively. From
<xref ref-type="fig" rid="pone.0132020.g003">Fig 3A</xref>
, we can observe that the R
<sup>2</sup>
values corresponding to the 1
<sup>st</sup>
order model in both Eqs (
<xref ref-type="disp-formula" rid="pone.0132020.e001">1</xref>
) and (
<xref ref-type="disp-formula" rid="pone.0132020.e002">2</xref>
) are the lowest. The relatively high R
<sup>2</sup>
values of the higher order models suggest that the control policy is of order > 1. Therefore, we consider the percentage (%) differences of
<italic>R</italic>
<sup>2</sup>
values of higher order polynomials relative to the 1
<sup>st</sup>
order polynomial for both Eqs (
<xref ref-type="disp-formula" rid="pone.0132020.e001">1</xref>
) and (
<xref ref-type="disp-formula" rid="pone.0132020.e002">2</xref>
) to assess the fitness of the predictive control policy given in
<xref ref-type="disp-formula" rid="pone.0132020.e002">Eq (2)</xref>
relative to the reactive policy given in
<xref ref-type="disp-formula" rid="pone.0132020.e001">Eq (1)</xref>
.
<xref ref-type="fig" rid="pone.0132020.g003">Fig 3B</xref>
shows that the marginal percentage (%) gain in
<italic>R</italic>
<sup>2</sup>
value (%△
<italic>R</italic>
<sup>2</sup>
) of 2
<sup>nd</sup>
, 3
<sup>rd</sup>
, and 4
<sup>th</sup>
order polynomials in
<xref ref-type="disp-formula" rid="pone.0132020.e002">Eq (2)</xref>
predictive control policy (solid line) grows compared to those of the reactive control (dashed line) policy in
<xref ref-type="disp-formula" rid="pone.0132020.e001">Eq (1)</xref>
. Therefore, we conclude that the guider gradually comes to prefer a predictive control policy over a reactive one.</p>
<fig id="pone.0132020.g003" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0132020.g003</object-id>
<label>Fig 3</label>
<caption>
<title>
<italic>R</italic>
<sup>2</sup>
values from 1
<sup>st</sup>
order to 4
<sup>th</sup>
order polynomials for the guider and the follower.</title>
<p>reactive models (dashed line) and predictive models (solid line): (A) and (C) are the
<italic>R</italic>
<sup>2</sup>
value variation of the reactive and predictive from 1
<sup>st</sup>
to 4
<sup>th</sup>
order polynomials over trials for the guider and the follower respectively. (B) and (D) are the percentage (%) differences of
<italic>R</italic>
<sup>2</sup>
values of 2
<sup>nd</sup>
to 4
<sup>th</sup>
order polynomials with respect to 1st order polynomial for the guider’s and the follower’s control policies respectively: 1st to 2
<sup>nd</sup>
order (blue), 1st to 3
<sup>rd</sup>
order (black), 1
<sup>st</sup>
to 4
<sup>th</sup>
order (green).</p>
</caption>
<graphic xlink:href="pone.0132020.g003"></graphic>
</fig>
</sec>
<sec id="sec011">
<title>Determination of the model order of the guider’s control policy</title>
<p>After binning, the data are not sufficient to test whether the population follows a normal distribution. The Mann-Whitney test does not require the assumption that the differences between the two samples are normally distributed. Therefore, the non-parametric Mann-Whitney U test (
<italic>α</italic>
= 0.05) was conducted to test significance. The percentage (%) gain of the 3
<sup>rd</sup>
order polynomial is the highest compared to the 2
<sup>nd</sup>
and 4
<sup>th</sup>
order polynomials as shown in
<xref ref-type="table" rid="pone.0132020.t001">Table 1</xref>
by numerical values and
<xref ref-type="fig" rid="pone.0132020.g003">Fig 3B</xref>
. There is a statistically significant improvement from 2
<sup>nd</sup>
to 3
<sup>rd</sup>
order models (
<italic>p</italic>
= 0.008), while there is no significant information gain from 3
<sup>rd</sup>
to 4
<sup>th</sup>
order models (
<italic>p</italic>
= 0.54). This means that the guider&#8217;s predictive control policy is best explained when the order is
<italic>N</italic>
= 3; no additional information is gained for orders beyond
<italic>N</italic>
= 3. Therefore, hereafter, we use the 3
<sup>rd</sup>
order predictive control policy to explain the guider’s control policy.</p>
<table-wrap id="pone.0132020.t001" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0132020.t001</object-id>
<label>Table 1</label>
<caption>
<title>Guider predictive △
<italic>R</italic>
<sup>2</sup>
% of 2
<sup>nd</sup>
to 4
<sup>th</sup>
order polynomials w.r.t 1
<sup>st</sup>
order. Statistical significance was computed using the Mann-Whitney U test (
<italic>α</italic>
= 0.05).</title>
</caption>
<alternatives>
<graphic id="pone.0132020.t001g" xlink:href="pone.0132020.t001"></graphic>
<table frame="box" rules="all" border="0">
<colgroup span="1">
<col align="left" valign="top" span="1"></col>
<col align="left" valign="top" span="1"></col>
<col align="left" valign="top" span="1"></col>
<col align="left" valign="top" span="1"></col>
<col align="left" valign="top" span="1"></col>
</colgroup>
<thead>
<tr>
<th align="left" rowspan="1" colspan="1">Trial No:</th>
<th align="left" rowspan="1" colspan="1">2
<sup>nd</sup>
order</th>
<th align="left" rowspan="1" colspan="1">3
<sup>rd</sup>
order</th>
<th align="left" rowspan="1" colspan="1">4
<sup>th</sup>
order</th>
<th align="left" rowspan="1" colspan="1">
<italic>p</italic>
values</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" rowspan="1" colspan="1">4</td>
<td align="char" char="." rowspan="1" colspan="1">8.94</td>
<td align="char" char="." rowspan="1" colspan="1">11.37</td>
<td align="char" char="." rowspan="1" colspan="1">11.97</td>
<td align="left" rowspan="1" colspan="1"></td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">8</td>
<td align="char" char="." rowspan="1" colspan="1">8.26</td>
<td align="char" char="." rowspan="1" colspan="1">10.98</td>
<td align="char" char="." rowspan="1" colspan="1">11.62</td>
<td align="left" rowspan="1" colspan="1"></td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">12</td>
<td align="char" char="." rowspan="1" colspan="1">7.81</td>
<td align="char" char="." rowspan="1" colspan="1">10.36</td>
<td align="char" char="." rowspan="1" colspan="1">10.74</td>
<td align="left" rowspan="1" colspan="1">
<italic>p</italic>
(2
<sup>
<italic>nd</italic>
</sup>
↔ 3
<sup>
<italic>rd</italic>
</sup>
) < 0.008*,</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">16</td>
<td align="char" char="." rowspan="1" colspan="1">9.38</td>
<td align="char" char="." rowspan="1" colspan="1">11.68</td>
<td align="char" char="." rowspan="1" colspan="1">12.25</td>
<td align="left" rowspan="1" colspan="1">
<italic>p</italic>
(3
<sup>
<italic>rd</italic>
</sup>
↔ 4
<sup>
<italic>th</italic>
</sup>
) > 0.5</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">20</td>
<td align="char" char="." rowspan="1" colspan="1">9.74</td>
<td align="char" char="." rowspan="1" colspan="1">14.00</td>
<td align="char" char="." rowspan="1" colspan="1">14.70</td>
<td align="left" rowspan="1" colspan="1"></td>
</tr>
</tbody>
</table>
</alternatives>
</table-wrap>
<p>Therefore, the guider’s control policy can be written as
<disp-formula id="pone.0132020.e012">
<alternatives>
<graphic xlink:href="pone.0132020.e012.jpg" id="pone.0132020.e012g" mimetype="image" position="anchor" orientation="portrait"></graphic>
<mml:math id="M12">
<mml:mtable displaystyle="true">
<mml:mtr>
<mml:mtd columnalign="right">
<mml:mrow>
<mml:msub>
<mml:mi>θ</mml:mi>
<mml:mi>g</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>k</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>=</mml:mo>
<mml:msubsup>
<mml:mi>a</mml:mi>
<mml:mn>0</mml:mn>
<mml:mrow>
<mml:mi>g</mml:mi>
<mml:mi>P</mml:mi>
<mml:mi>r</mml:mi>
<mml:mi>e</mml:mi>
</mml:mrow>
</mml:msubsup>
<mml:msub>
<mml:mi>ϕ</mml:mi>
<mml:mi>g</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>k</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>+</mml:mo>
<mml:msubsup>
<mml:mi>a</mml:mi>
<mml:mn>1</mml:mn>
<mml:mrow>
<mml:mi>g</mml:mi>
<mml:mi>P</mml:mi>
<mml:mi>r</mml:mi>
<mml:mi>e</mml:mi>
</mml:mrow>
</mml:msubsup>
<mml:msub>
<mml:mi>ϕ</mml:mi>
<mml:mi>g</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>k</mml:mi>
<mml:mo>+</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>+</mml:mo>
<mml:msubsup>
<mml:mi>a</mml:mi>
<mml:mn>2</mml:mn>
<mml:mrow>
<mml:mi>g</mml:mi>
<mml:mi>P</mml:mi>
<mml:mi>r</mml:mi>
<mml:mi>e</mml:mi>
</mml:mrow>
</mml:msubsup>
<mml:msub>
<mml:mi>ϕ</mml:mi>
<mml:mi>g</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>k</mml:mi>
<mml:mo>+</mml:mo>
<mml:mn>2</mml:mn>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>+</mml:mo>
<mml:msup>
<mml:mi>c</mml:mi>
<mml:mrow>
<mml:mi>g</mml:mi>
<mml:mi>P</mml:mi>
<mml:mi>r</mml:mi>
<mml:mi>e</mml:mi>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:math>
</alternatives>
<label>(7)</label>
</disp-formula>
</p>
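<p>A minimal numerical sketch of evaluating the predictive policy in Eq (7) is given below; the coefficient values are hypothetical stand-ins, not the fitted values reported in Fig 4.</p>

```python
def guider_predictive_action(phi_future, a, c):
    """Evaluate the guider's predictive policy of Eq (7):
    theta_g(k) = a0*phi_g(k) + a1*phi_g(k+1) + a2*phi_g(k+2) + c.
    phi_future holds [phi_g(k), phi_g(k+1), phi_g(k+2)]."""
    return sum(ai * pi for ai, pi in zip(a, phi_future)) + c

# Hypothetical coefficients and predicted follower states (radians)
a = [0.5, 0.3, 0.1]
c = 0.05
phi = [0.2, 0.25, 0.3]
theta_g = guider_predictive_action(phi, a, c)
```

In a real guiding loop, phi_g(k+1) and phi_g(k+2) would come from the guider’s forward model of the follower, which is what makes the policy predictive rather than reactive.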
</sec>
<sec id="sec012">
<title>Determination of the follower’s state transition policy</title>
<p>Next, we characterize how the follower changes state in response to the guider’s actions, hereafter referred to as the follower’s state transition policy.</p>
</sec>
<sec id="sec013">
<title>Determination of predictive/reactive nature of the follower’s state transition policy</title>
<p>We used experimental data for state
<italic>θ</italic>
and action
<italic>ϕ</italic>
in Eqs (
<xref ref-type="disp-formula" rid="pone.0132020.e004">3</xref>
) and (
<xref ref-type="disp-formula" rid="pone.0132020.e005">4</xref>
) to extract features of the follower’s state transition policy from 1
<sup>st</sup>
to 4
<sup>th</sup>
order polynomials over trials as shown in
<xref ref-type="fig" rid="pone.0132020.g003">Fig 3C</xref>
. Here, we used the same mathematical and statistical method as in the guider’s model.
<xref ref-type="fig" rid="pone.0132020.g003">Fig 3C</xref>
shows that the marginal percentage (%) gain in R
<sup>2</sup>
value (%△
<italic>R</italic>
<sup>2</sup>
) of 2
<sup>nd</sup>
, 3
<sup>rd</sup>
, and 4
<sup>th</sup>
order polynomials for the reactive control policy in
<xref ref-type="disp-formula" rid="pone.0132020.e004">Eq (3)</xref>
(dashed line) grows compared to that of the predictive control policy in
<xref ref-type="disp-formula" rid="pone.0132020.e005">Eq (4)</xref>
(solid line). Therefore, we conclude that the follower increasingly relies on a reactive policy rather than a predictive one.</p>
</sec>
<sec id="sec014">
<title>Determination of the model order of the follower’s state transition policy</title>
<p>The marginal percentage (%) gain of the 2
<sup>nd</sup>
order polynomial is the highest, compared to the additional gains of the 3
<sup>rd</sup>
and 4
<sup>th</sup>
order polynomials, as the numerical values in
<xref ref-type="table" rid="pone.0132020.t002">Table 2</xref>
show. Interestingly, there is no statistically significant improvement from 2
<sup>nd</sup>
to 3
<sup>rd</sup>
order models (
<italic>p</italic>
= 0.42) or from 3
<sup>rd</sup>
to 4
<sup>th</sup>
order models (
<italic>p</italic>
= 0.54). Therefore, the follower’s reactive policy is best explained when the order is
<italic>N</italic>
= 2. Hereafter, we consider the 2
<sup>nd</sup>
order reactive policy to describe the follower’s state transition policy.</p>
<table-wrap id="pone.0132020.t002" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0132020.t002</object-id>
<label>Table 2</label>
<caption>
<title>Follower reactive △
<italic>R</italic>
<sup>2</sup>
% of 2
<sup>nd</sup>
to 4
<sup>th</sup>
order polynomials w.r.t. the 1
<sup>st</sup>
order. Statistical significance was computed using the Mann-Whitney U test (
<italic>α</italic>
= 0.05).</title>
</caption>
<alternatives>
<graphic id="pone.0132020.t002g" xlink:href="pone.0132020.t002"></graphic>
<table frame="box" rules="all" border="0">
<colgroup span="1">
<col align="left" valign="top" span="1"></col>
<col align="left" valign="top" span="1"></col>
<col align="left" valign="top" span="1"></col>
<col align="left" valign="top" span="1"></col>
<col align="left" valign="top" span="1"></col>
</colgroup>
<thead>
<tr>
<th align="left" rowspan="1" colspan="1">Trial No:</th>
<th align="left" rowspan="1" colspan="1">2
<sup>
<italic>nd</italic>
</sup>
order</th>
<th align="left" rowspan="1" colspan="1">3
<sup>
<italic>rd</italic>
</sup>
order</th>
<th align="left" rowspan="1" colspan="1">4
<sup>
<italic>th</italic>
</sup>
order</th>
<th align="left" rowspan="1" colspan="1">
<italic>p</italic>
values</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" rowspan="1" colspan="1">4</td>
<td align="char" char="." rowspan="1" colspan="1">8.58</td>
<td align="char" char="." rowspan="1" colspan="1">9.57</td>
<td align="char" char="." rowspan="1" colspan="1">9.91</td>
<td align="left" rowspan="1" colspan="1"></td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">8</td>
<td align="char" char="." rowspan="1" colspan="1">8.31</td>
<td align="char" char="." rowspan="1" colspan="1">10.33</td>
<td align="char" char="." rowspan="1" colspan="1">10.77</td>
<td align="left" rowspan="1" colspan="1"></td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">12</td>
<td align="char" char="." rowspan="1" colspan="1">7.41</td>
<td align="char" char="." rowspan="1" colspan="1">8.46</td>
<td align="char" char="." rowspan="1" colspan="1">8.70</td>
<td align="left" rowspan="1" colspan="1">
<italic>p</italic>
(2
<sup>
<italic>nd</italic>
</sup>
↔ 3
<sup>
<italic>rd</italic>
</sup>
) > 0.1,</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">16</td>
<td align="char" char="." rowspan="1" colspan="1">9.45</td>
<td align="char" char="." rowspan="1" colspan="1">10.21</td>
<td align="char" char="." rowspan="1" colspan="1">10.51</td>
<td align="left" rowspan="1" colspan="1">
<italic>p</italic>
(3
<sup>
<italic>rd</italic>
</sup>
↔ 4
<sup>
<italic>th</italic>
</sup>
) > 0.5</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">20</td>
<td align="char" char="." rowspan="1" colspan="1">9.96</td>
<td align="char" char="." rowspan="1" colspan="1">11.82</td>
<td align="char" char="." rowspan="1" colspan="1">12.29</td>
<td align="left" rowspan="1" colspan="1"></td>
</tr>
</tbody>
</table>
</alternatives>
</table-wrap>
<p>The follower’s state transition policy can be written as,
<disp-formula id="pone.0132020.e013">
<alternatives>
<graphic xlink:href="pone.0132020.e013.jpg" id="pone.0132020.e013g" mimetype="image" position="anchor" orientation="portrait"></graphic>
<mml:math id="M13">
<mml:mtable displaystyle="true">
<mml:mtr>
<mml:mtd columnalign="right">
<mml:mrow>
<mml:msub>
<mml:mi>ϕ</mml:mi>
<mml:mi>f</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>k</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>=</mml:mo>
<mml:msubsup>
<mml:mi>a</mml:mi>
<mml:mn>0</mml:mn>
<mml:mrow>
<mml:mi>f</mml:mi>
<mml:mi>R</mml:mi>
<mml:mi>e</mml:mi>
</mml:mrow>
</mml:msubsup>
<mml:msub>
<mml:mi>θ</mml:mi>
<mml:mi>f</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>k</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>+</mml:mo>
<mml:msubsup>
<mml:mi>a</mml:mi>
<mml:mn>1</mml:mn>
<mml:mrow>
<mml:mi>f</mml:mi>
<mml:mi>R</mml:mi>
<mml:mi>e</mml:mi>
</mml:mrow>
</mml:msubsup>
<mml:msub>
<mml:mi>θ</mml:mi>
<mml:mi>f</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>k</mml:mi>
<mml:mo>-</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>+</mml:mo>
<mml:msup>
<mml:mi>c</mml:mi>
<mml:mrow>
<mml:mi>f</mml:mi>
<mml:mi>R</mml:mi>
<mml:mi>e</mml:mi>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:math>
</alternatives>
<label>(8)</label>
</disp-formula>
</p>
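<p>The reactive policy in Eq (8) can be identified from recorded state/action sequences by ordinary least squares. The sketch below uses synthetic data (not the experimental recordings) to illustrate fitting the 2nd order model and reading off its R² value.</p>

```python
import numpy as np

def fit_reactive_policy(theta_f, phi_f, order):
    """Least-squares fit of phi_f(k) = sum_i a_i * theta_f(k - i) + c,
    the order-N reactive form of Eq (8). Returns (coefficients, R^2)."""
    n = len(phi_f)
    # Regressor rows: lagged states theta_f(k), ..., theta_f(k-order+1), plus 1
    X = np.array([[theta_f[k - i] for i in range(order)] + [1.0]
                  for k in range(order - 1, n)])
    y = np.asarray(phi_f[order - 1:], dtype=float)
    coef, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
    residual = y - X @ coef
    r2 = 1.0 - residual.var() / y.var()
    return coef, r2

# Synthetic follower data generated by a known 2nd order reactive rule
rng = np.random.default_rng(0)
theta = rng.standard_normal(200)
phi = np.empty(200)
phi[0] = 0.0
phi[1:] = (0.6 * theta[1:] + 0.2 * theta[:-1] + 0.1
           + 0.01 * rng.standard_normal(199))
coef, r2 = fit_reactive_policy(theta, phi, order=2)
```

The same routine evaluated at orders 1 to 4 yields the R² gains compared in Table 2; the marginal gain flattens after order 2, which is the paper’s selection criterion.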
</sec>
<sec id="sec015">
<title>Polynomial parameters of auto-regressive state dependent behavioral policies of the duo</title>
<p>We proceed to explore how the polynomial parameters of the guider’s 3
<sup>rd</sup>
order predictive and the follower’s 2
<sup>nd</sup>
order reactive policies can change across learning trials in Eqs (
<xref ref-type="disp-formula" rid="pone.0132020.e002">2</xref>
) and (
<xref ref-type="disp-formula" rid="pone.0132020.e004">3</xref>
) for the guider and the follower respectively. We notice in Figs
<xref ref-type="fig" rid="pone.0132020.g004">4</xref>
and
<xref ref-type="fig" rid="pone.0132020.g005">5</xref>
that the polynomial coefficients fluctuate within bounds for both the guider’s predictive and the follower’s reactive policies. The average and standard deviation values of the coefficients are labeled in Figs
<xref ref-type="fig" rid="pone.0132020.g004">4</xref>
and
<xref ref-type="fig" rid="pone.0132020.g005">5</xref>
(denoted by avg: and std: respectively). This may result from variability across subjects as well as variability of the parameters across trials. Therefore, the above control policies can be treated as bounded stochastic decision-making processes.</p>
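<p>One way to use the observation that the coefficients stay within bounds is to treat each coefficient as a bounded random variable around its across-trial mean. The sketch below samples coefficients in this way; the means, standard deviations, and bounds are hypothetical placeholders for the avg: and std: values labeled in Figs 4 and 5.</p>

```python
import numpy as np

def sample_policy_coefficients(avg, std, lo, hi, rng):
    """Draw one realization of the policy coefficients as a bounded
    stochastic process: Gaussian around the across-trial mean,
    clipped to the observed fluctuation band."""
    return np.clip(rng.normal(avg, std), lo, hi)

rng = np.random.default_rng(1)
avg = np.array([0.5, 0.3, 0.1])     # hypothetical across-trial means
std = np.array([0.05, 0.05, 0.02])  # hypothetical standard deviations
lo = np.array([0.3, 0.1, 0.0])      # hypothetical lower bounds
hi = np.array([0.7, 0.5, 0.2])      # hypothetical upper bounds
samples = np.array([sample_policy_coefficients(avg, std, lo, hi, rng)
                    for _ in range(100)])
```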
<fig id="pone.0132020.g004" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0132020.g004</object-id>
<label>Fig 4</label>
<caption>
<title>The representation of coefficients of the 3
<sup>rd</sup>
order auto regressive predictive controller of the guider.</title>
<p>The 10
<sup>th</sup>
trial is marked by a red dashed line. Only trials from the 10
<sup>th</sup>
to the 20
<sup>th</sup>
were used for the simulation in
<xref ref-type="fig" rid="pone.0132020.g010">Fig 10</xref>
.</p>
</caption>
<graphic xlink:href="pone.0132020.g004"></graphic>
</fig>
<fig id="pone.0132020.g005" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0132020.g005</object-id>
<label>Fig 5</label>
<caption>
<title>The evolution of coefficients of the 2
<sup>nd</sup>
order auto regressive reactive controller of the follower.</title>
</caption>
<graphic xlink:href="pone.0132020.g005"></graphic>
</fig>
</sec>
<sec id="sec016">
<title>Optimality of muscle recruitment</title>
<p>To understand the optimality of muscle activation, we study the responsibility assignment of the muscles from EMG recordings. We used the Wavelet Toolbox (The MathWorks Inc.) to reduce the noise of the raw EMG data. The raw EMG signal is a continuous oscillatory wave as shown in
<xref ref-type="fig" rid="pone.0132020.g006">Fig 6A</xref>
. Therefore we chose the
<italic>sym8</italic>
wavelet from the Symlets family (The MathWorks Inc.) [
<xref rid="pone.0132020.ref021" ref-type="bibr">21</xref>
] for the EMG analysis. The percentage of energy corresponding to
<italic>sym8</italic>
(Symlets) and
<italic>haar</italic>
(Haar) is 72.9% and 68% respectively. This is demonstrated in the bar chart shown in
<xref ref-type="fig" rid="pone.0132020.g006">Fig 6B</xref>
. Considering the highest energy percentage, we selected
<italic>sym8</italic>
for our EMG wavelet analysis. Then different decomposition levels were tested for
<italic>sym8</italic>
. The percentages of energy corresponding to the approximation were found to be 99.52%, 95.97%, 92.05%, 85.41%, and 20.36% for decomposition levels 3, 4, 5, 6, and 7 respectively. The highest percentage of energy was obtained at decomposition level 3.
<xref ref-type="fig" rid="pone.0132020.g006">Fig 6C</xref>
shows the percentage energy corresponding to the 1 to 3 decomposition levels of the EMG signal. Since the 3
<sup>rd</sup>
decomposition level has the highest percentage of energy, we use it hereafter to analyze raw EMG data.</p>
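<p>The level-selection criterion — the share of signal energy retained in the approximation coefficients — can be computed from any discrete wavelet decomposition. The sketch below implements it with a hand-written Haar DWT purely for self-containedness; the paper used sym8 from the MATLAB Wavelet Toolbox, and the signal here is synthetic rather than recorded EMG.</p>

```python
import numpy as np

def haar_dwt(x):
    """One level of the orthonormal Haar DWT: (approximation, detail)."""
    x = x[: len(x) // 2 * 2]  # even length
    approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return approx, detail

def approximation_energy_percent(signal, level):
    """Percent of total energy kept in the level-`level` approximation."""
    approx = np.asarray(signal, dtype=float)
    detail_energy = 0.0
    for _ in range(level):
        approx, detail = haar_dwt(approx)
        detail_energy += float(np.sum(detail ** 2))
    approx_energy = float(np.sum(approx ** 2))
    return 100.0 * approx_energy / (approx_energy + detail_energy)

# A slowly varying synthetic signal keeps almost all of its energy
# in the level-3 approximation, mirroring the selection criterion.
t = np.linspace(0.0, 1.0, 1024)
signal = np.sin(2.0 * np.pi * 4.0 * t)
e3 = approximation_energy_percent(signal, 3)
```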
<fig id="pone.0132020.g006" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0132020.g006</object-id>
<label>Fig 6</label>
<caption>
<title>Selection of wavelet family for EMG vector.</title>
<p>(A) A representative raw EMG signal from the guider, (B) The percentage of energy representation for the haar and sym8 wavelet families for the raw EMG signal in
<xref ref-type="fig" rid="pone.0132020.g007">Fig 7A</xref>
, (C) The percentage of energy corresponding to 1 to 3 decomposition levels for sym8 wave family for the EMG signal in
<xref ref-type="fig" rid="pone.0132020.g007">Fig 7A</xref>
.</p>
</caption>
<graphic xlink:href="pone.0132020.g006"></graphic>
</fig>
</sec>
<sec id="sec017">
<title>Behavior of antagonist muscles</title>
<p>When the guider moves the arm in the horizontal plane as shown in
<xref ref-type="fig" rid="pone.0132020.g002">Fig 2A</xref>
, the action can be pushing/pulling or swinging in the horizontal plane. The anterior and posterior deltoids are recruited to perform the pushing and pulling actions. The guider can use the elbow joint in two different ways: one is to swing the rein in the vertical plane by flexing the elbow without moving the shoulder joint, and the other is to pull the rein if the elbow is flexed in synchrony with a shoulder joint flexion. To understand the muscle recruitment, we plotted the average normalized activation of each individual muscle and the averaged normalized EMG ratio between frontal and dorsal muscles in all trials as shown in
<xref ref-type="fig" rid="pone.0132020.g007">Fig 7A and Fig 7B</xref>
respectively. There is a downward trend in the ratio of anterior deltoid and posterior deltoid muscles in all trials (
<xref ref-type="fig" rid="pone.0132020.g007">Fig 7B:M1</xref>
), while the ratio of biceps and triceps muscles shows an upward trend in all trials (
<xref ref-type="fig" rid="pone.0132020.g007">Fig 7B:M2</xref>
). This could indicate that a forward model of task dynamics is learnt across trials. A Kolmogorov-Smirnov test indicated that the average muscle activations come from a normal distribution, so significance was tested with a t-test. The test was conducted between the first 5 and last 5 trials of M1 and M2 using a single-tailed t-test. The results show that the anterior/posterior deltoid ratio (M1) differs significantly between the first five and last five trials (
<italic>p</italic>
= 0.00004), while there is no significant difference between the first five and last five trials of the biceps/triceps ratio (M2) (
<italic>p</italic>
= 0.85). This suggests that the forward model [
<xref rid="pone.0132020.ref024" ref-type="bibr">24</xref>
,
<xref rid="pone.0132020.ref025" ref-type="bibr">25</xref>
] that predicts the consequence of guiding actions accounts more for the activity of the deltoids than for that of the elbow muscles. This may be because the elbow joint is mainly responsible for keeping the guider’s actions in the horizontal plane.</p>
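<p>The trial-wise significance test described above can be reproduced with a pooled two-sample t statistic; the muscle-ratio values below are hypothetical, and the critical value 1.860 is the standard one-tailed threshold for α = 0.05 with 8 degrees of freedom.</p>

```python
import numpy as np

def pooled_t_statistic(first, last):
    """Two-sample t statistic (pooled variance) for testing whether
    the mean of `last` has dropped below the mean of `first`."""
    first = np.asarray(first, dtype=float)
    last = np.asarray(last, dtype=float)
    n1, n2 = len(first), len(last)
    pooled_var = ((n1 - 1) * first.var(ddof=1)
                  + (n2 - 1) * last.var(ddof=1)) / (n1 + n2 - 2)
    return (first.mean() - last.mean()) / np.sqrt(
        pooled_var * (1.0 / n1 + 1.0 / n2))

# Hypothetical M1 (anterior/posterior deltoid) ratios, first vs last 5 trials
m1_first = [0.95, 0.90, 0.92, 0.88, 0.93]
m1_last = [0.70, 0.72, 0.68, 0.74, 0.71]
t_stat = pooled_t_statistic(m1_first, m1_last)
T_CRIT = 1.860  # one-tailed critical value, alpha = 0.05, df = 8
downward_trend_significant = t_stat > T_CRIT
```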
<fig id="pone.0132020.g007" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0132020.g007</object-id>
<label>Fig 7</label>
<caption>
<title>The behavior of the average normalized muscle EMGs.</title>
<p>(A) Average normalized muscle EMGs of the anterior deltoid, posterior deltoid, biceps, and triceps. The gradient and intercept of the individual muscles are (−0.005, 0.315), (0.004, 0.426), (0.001, 0.133), and (−0.013, 0.995) for the anterior deltoid, posterior deltoid, biceps, and triceps respectively, (B) Frontal and dorsal muscle ratios: M1, anterior deltoid/posterior deltoid ratio; M2, biceps/triceps ratio, and (C) The behavior of the cost indicator
<italic>J</italic>
of the 2
<sup>nd</sup>
order best fit curve for average EMGs of all four muscles of the ten subjects across trials.</p>
</caption>
<graphic xlink:href="pone.0132020.g007"></graphic>
</fig>
</sec>
<sec id="sec018">
<title>Behavior of total EMG over trials</title>
<p>To obtain an estimate of the total energy consumed during guiding, we compute the average EMG for all four muscles of all fifteen pairs, which reflects the average energy consumed in a trial, given by
<inline-formula id="pone.0132020.e014">
<alternatives>
<graphic xlink:href="pone.0132020.e014.jpg" id="pone.0132020.e014g" mimetype="image" position="anchor" orientation="portrait"></graphic>
<mml:math id="M14">
<mml:mrow>
<mml:mi>J</mml:mi>
<mml:mo>=</mml:mo>
<mml:msqrt>
<mml:mrow>
<mml:msubsup>
<mml:mo></mml:mo>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mn>4</mml:mn>
</mml:msubsup>
<mml:msubsup>
<mml:mo></mml:mo>
<mml:mrow>
<mml:mi>j</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:msub>
<mml:mi>S</mml:mi>
<mml:mi>N</mml:mi>
</mml:msub>
</mml:msubsup>
<mml:mi>E</mml:mi>
<mml:mi>M</mml:mi>
<mml:msubsup>
<mml:mi>G</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mi>j</mml:mi>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msubsup>
</mml:mrow>
</mml:msqrt>
</mml:mrow>
</mml:math>
</alternatives>
</inline-formula>
, where
<italic>S</italic>
<sub>
<italic>N</italic>
</sub>
is the number of subjects,
<italic>EMG</italic>
<sub>
<italic>ij</italic>
</sub>
is the average rectified EMG of the
<italic>i</italic>
th muscle of the
<italic>j</italic>
th subject. The behavior of this energy consumption indicator
<italic>J</italic>
is shown in
<xref ref-type="fig" rid="pone.0132020.g007">Fig 7C</xref>
. We can clearly observe from the 2
<sup>nd</sup>
order best fit curve that
<italic>J</italic>
increases to a maximum in the first half of the trials and decreases in the last 10 trials. This suggests that optimization is a non-monotonic process. During the first half of the trials, the guider may have given priority to selecting the order of the predictive control policy (
<xref ref-type="disp-formula" rid="pone.0132020.e012">Eq (7)</xref>
) and to forming the forward model that predicts the follower’s future state, rather than to optimization in the muscle activation space, which is also reflected in the behavior of
<italic>R</italic>
<sup>2</sup>
values in
<xref ref-type="fig" rid="pone.0132020.g003">Fig 3</xref>
. Once the optimal order is selected, subjects exhibit monotonic optimization in the muscle activation space as seen in the last 10 trials of
<xref ref-type="fig" rid="pone.0132020.g007">Fig 7C</xref>
, with a corresponding increase of
<italic>R</italic>
<sup>2</sup>
values in
<xref ref-type="fig" rid="pone.0132020.g003">Fig 3</xref>
.</p>
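<p>The cost indicator defined above reduces to the root of a sum of squares over muscles and subjects. A minimal sketch with made-up average rectified EMG values (three subjects for brevity, not the experimental cohort):</p>

```python
import numpy as np

def energy_cost(emg):
    """J = sqrt( sum_i sum_j EMG_ij^2 ), with rows i = muscles (4)
    and columns j = subjects; EMG_ij is the average rectified EMG."""
    emg = np.asarray(emg, dtype=float)
    assert emg.shape[0] == 4, "expects one row per muscle"
    return float(np.sqrt(np.sum(emg ** 2)))

# Hypothetical average rectified EMG: 4 muscles x 3 subjects
emg = np.array([
    [0.3, 0.2, 0.4],   # anterior deltoid
    [0.1, 0.3, 0.2],   # posterior deltoid
    [0.5, 0.4, 0.3],   # biceps
    [0.2, 0.1, 0.2],   # triceps
])
J = energy_cost(emg)
```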
</sec>
</sec>
<sec id="sec019">
<title>Experiment 2: Modeling the follower’s trust in different paths</title>
<p>Here our intention is to incorporate the instantaneous trust level of the follower into the state-space of the closed loop controller. We show results from 14 naive pairs of subjects on the variability of the blindfolded follower’s voluntary movements, modeled as a virtual damped inertial system. We address the question of how the follower’s trust towards the guider should be accounted for in designing a closed loop controller. We argue that the trust of the follower in any given context should be reflected in how compliant his/her voluntary movements are to the instructions of the guider.</p>
<p>The experimental results of 14 pairs of subjects in three types of paths—90° turn, 60° turn, and straight—are shown in
<xref ref-type="fig" rid="pone.0132020.g008">Fig 8</xref>
. Here we extracted motion data within a window of 10 seconds around the 90° and 60° turns; for fairness of comparison, we took the same window on the straight path. The regression analysis estimates the virtual damping coefficient, the virtual stiffness coefficient, and the virtual mass for the three paths.
<xref ref-type="fig" rid="pone.0132020.g008">Fig 8A</xref>
shows the variability of the virtual damping coefficient, and
<xref ref-type="fig" rid="pone.0132020.g008">Fig 8B</xref>
shows the virtual mass for the above three contexts. We can notice from
<xref ref-type="fig" rid="pone.0132020.g008">Fig 8A</xref>
that the variability of the virtual damping coefficient is highest in the path with a 90° turn while there is less variability in the 60° turn path and least variability in the straight path.</p>
<fig id="pone.0132020.g008" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0132020.g008</object-id>
<label>Fig 8</label>
<caption>
<title>Regression coefficients in
<xref ref-type="disp-formula" rid="pone.0132020.e015">Eq (9)</xref>
of different paths.</title>
<p>(A) Virtual damping coefficient for paths: 90° turn (red), 60° turn (yellow), and straight path (green). The average values are 3.055, 1.605, and −0.586 for the 90° turn, 60° turn, and straight path respectively, (B) Virtual mass coefficient for paths: 90° turn (red), 60° turn (yellow), and straight path (green). The average values are 2.066, −0.083, and 0.002 for the 90° turn, 60° turn, and straight path respectively, (C) Virtual stiffness coefficient for paths: 90° turn (red), 60° turn (yellow), and straight path (green). The average values are 0.0325, −0.1385, and 0.0117 for the 90° turn, 60° turn, and straight path respectively, and (D) The follower’s response on the trust scale: the trust scale varies from 1 (lowest) to 10 (highest).</p>
</caption>
<graphic xlink:href="pone.0132020.g008"></graphic>
</fig>
<p>When the follower voluntarily moves forward in response to the small tug-signal of the guider, any increase in the force felt by the guider must come from a reduction in the voluntary nature of the follower’s movement. Therefore we modeled the follower as a virtual damped inertial system. To represent the variable voluntary nature of the follower, we did not consider virtual stiffness, because the original location is irrelevant in a voluntary movement. However, we tested the variability of virtual stiffness by adding a stiffness term to
<xref ref-type="disp-formula" rid="pone.0132020.e007">Eq (5)</xref>
.</p>
<p>Then
<xref ref-type="disp-formula" rid="pone.0132020.e007">Eq (5)</xref>
becomes
<disp-formula id="pone.0132020.e015">
<alternatives>
<graphic xlink:href="pone.0132020.e015.jpg" id="pone.0132020.e015g" mimetype="image" position="anchor" orientation="portrait"></graphic>
<mml:math id="M15">
<mml:mtable displaystyle="true">
<mml:mtr>
<mml:mtd columnalign="right">
<mml:mrow>
<mml:mi>F</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>k</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>=</mml:mo>
<mml:mi>M</mml:mi>
<mml:mover accent="true">
<mml:msub>
<mml:mi>P</mml:mi>
<mml:mi>f</mml:mi>
</mml:msub>
<mml:mo>¨</mml:mo>
</mml:mover>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>k</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>+</mml:mo>
<mml:mi>ζ</mml:mi>
<mml:mover accent="true">
<mml:msub>
<mml:mi>P</mml:mi>
<mml:mi>f</mml:mi>
</mml:msub>
<mml:mo>˙</mml:mo>
</mml:mover>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>k</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>+</mml:mo>
<mml:mi>k</mml:mi>
<mml:msub>
<mml:mi>P</mml:mi>
<mml:mi>f</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>k</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:math>
</alternatives>
<label>(9)</label>
</disp-formula>
</p>
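<p>The coefficients of Eq (9) can be estimated by linear regression of the measured rein force on the follower’s acceleration, velocity, and position. The sketch below fits a synthetic trajectory with known parameters; in the experiment, the same regression was applied to a 10 s motion window around each turn.</p>

```python
import numpy as np

def fit_virtual_impedance(force, position, dt):
    """Least-squares estimate of (M, zeta, k) in
    F = M * accel + zeta * vel + k * position (Eq 9),
    with velocity and acceleration from finite differences."""
    velocity = np.gradient(position, dt)
    acceleration = np.gradient(velocity, dt)
    X = np.column_stack([acceleration, velocity, position])
    params, _, _, _ = np.linalg.lstsq(X, force, rcond=None)
    return params  # [M, zeta, k]

# Synthetic two-frequency trajectory with known impedance (M=2, zeta=3, k=0.5);
# two frequencies keep acceleration and position linearly independent.
dt = 0.01
t = np.arange(0.0, 10.0, dt)
position = np.sin(t) + 0.5 * np.sin(2.3 * t)
velocity_true = np.cos(t) + 0.5 * 2.3 * np.cos(2.3 * t)
accel_true = -np.sin(t) - 0.5 * 2.3 ** 2 * np.sin(2.3 * t)
force = 2.0 * accel_true + 3.0 * velocity_true + 0.5 * position
M, zeta, k = fit_virtual_impedance(force, position, dt)
```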
<p>
<xref ref-type="fig" rid="pone.0132020.g008">Fig 8C</xref>
shows the variability of the virtual stiffness for 90° turn, 60° turn, and straight path. The variability of the stiffness and the mass are low as shown in
<xref ref-type="fig" rid="pone.0132020.g008">Fig 8B</xref>
and
<xref ref-type="fig" rid="pone.0132020.g008">Fig 8C</xref>
while variability of damping coefficient is high as shown in
<xref ref-type="fig" rid="pone.0132020.g008">Fig 8A</xref>
. In
<xref ref-type="fig" rid="pone.0132020.g008">Fig 8A</xref>
, in
<xref ref-type="fig" rid="pone.0132020.g008">Fig 8B</xref>
, and in
<xref ref-type="fig" rid="pone.0132020.g008">Fig 8C</xref>
the average values of the virtual damping coefficient, the virtual mass, and the virtual stiffness distributions are lowest in the straight path. This shows that the trust level of the follower is greatest in the straight path.
<xref ref-type="table" rid="pone.0132020.t003">Table 3</xref>
,
<xref ref-type="table" rid="pone.0132020.t004">Table 4</xref>
, and
<xref ref-type="table" rid="pone.0132020.t005">Table 5</xref>
show the results of Mann-Whitney U test for different paths (90° turn, 60° turn, straight path) of coefficients in
<xref ref-type="disp-formula" rid="pone.0132020.e015">Eq (9)</xref>
. Results in
<xref ref-type="table" rid="pone.0132020.t003">Table 3</xref>
show that the virtual damping coefficient in 90° turn was significantly different from that in straight path (
<italic>p</italic>
= 0.009). Moreover, virtual damping coefficient in 60° turn was also significantly different from that in straight path (
<italic>p</italic>
= 0.01). There was no statistically significant difference between the virtual damping coefficient with the 90° and 60° turns (
<italic>p</italic>
= 0.90). The virtual mass distribution in
<xref ref-type="disp-formula" rid="pone.0132020.e015">Eq (9)</xref>
is shown in
<xref ref-type="fig" rid="pone.0132020.g008">Fig 8B</xref>
. Significance test results show that the straight path is different from the path with the 90° turn (
<italic>p</italic>
= 0.006). However, there was no significant difference between the 60° turn path and the straight path (
<italic>p</italic>
= 0.8). This may be because the follower trusts the guider more when following a straight path than paths with turns.</p>
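<p>The path-wise comparisons in Tables 3–5 use the Mann-Whitney U test. A self-contained sketch using its normal approximation (no tie correction) on hypothetical damping coefficients:</p>

```python
import math
import numpy as np

def mann_whitney_u(x, y):
    """Mann-Whitney U statistic and two-sided p-value via the normal
    approximation (adequate for n1 = n2 = 14; ties not corrected)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    n1, n2 = len(x), len(y)
    combined = np.concatenate([x, y])
    ranks = np.empty(n1 + n2)
    ranks[combined.argsort(kind="mergesort")] = np.arange(1, n1 + n2 + 1)
    u1 = ranks[:n1].sum() - n1 * (n1 + 1) / 2.0
    mean_u = n1 * n2 / 2.0
    sd_u = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (u1 - mean_u) / sd_u
    p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
    return u1, p

# Hypothetical virtual damping coefficients for 14 pairs per path
rng = np.random.default_rng(3)
damping_90 = 3.0 + rng.standard_normal(14)        # 90-degree-turn path
damping_straight = -0.6 + rng.standard_normal(14)  # straight path
u, p = mann_whitney_u(damping_90, damping_straight)
```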
<table-wrap id="pone.0132020.t003" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0132020.t003</object-id>
<label>Table 3</label>
<caption>
<title>Virtual damping coefficients. Statistical significance was computed using the Mann-Whitney U test (
<italic>α</italic>
= 0.05).</title>
</caption>
<alternatives>
<graphic id="pone.0132020.t003g" xlink:href="pone.0132020.t003"></graphic>
<table frame="box" rules="all" border="0">
<colgroup span="1">
<col align="left" valign="top" span="1"></col>
<col align="left" valign="top" span="1"></col>
<col align="left" valign="top" span="1"></col>
</colgroup>
<thead>
<tr>
<th align="left" rowspan="1" colspan="1">Paths</th>
<th align="left" rowspan="1" colspan="1">Mean</th>
<th align="left" rowspan="1" colspan="1"></th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" rowspan="1" colspan="1">90° turn</td>
<td align="char" char="." rowspan="1" colspan="1">3.055</td>
<td align="left" rowspan="1" colspan="1">
<italic>p</italic>
(90°
<italic>turn</italic>
↔ 60°
<italic>turn</italic>
) > 0.6,</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">60° turn</td>
<td align="char" char="." rowspan="1" colspan="1">1.605</td>
<td align="left" rowspan="1" colspan="1">
<italic>p</italic>
(60°
<italic>turn</italic>
<italic>Straightpath</italic>
) < 0.02*,</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Straight path</td>
<td align="char" char="." rowspan="1" colspan="1">−0.586</td>
<td align="left" rowspan="1" colspan="1">
<italic>p</italic>
(90°
<italic>turn</italic>
<italic>Straightpath</italic>
) < 0.01*</td>
</tr>
</tbody>
</table>
</alternatives>
</table-wrap>
<table-wrap id="pone.0132020.t004" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0132020.t004</object-id>
<label>Table 4</label>
<caption>
<title>Virtual mass coefficients. Statistical significance was computed using the Mann-Whitney U test (
<italic>α</italic>
= 0.05).</title>
</caption>
<alternatives>
<graphic id="pone.0132020.t004g" xlink:href="pone.0132020.t004"></graphic>
<table frame="box" rules="all" border="0">
<colgroup span="1">
<col align="left" valign="top" span="1"></col>
<col align="left" valign="top" span="1"></col>
<col align="left" valign="top" span="1"></col>
</colgroup>
<thead>
<tr>
<th align="left" rowspan="1" colspan="1">Paths</th>
<th align="left" rowspan="1" colspan="1">Mean</th>
<th align="left" rowspan="1" colspan="1"></th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" rowspan="1" colspan="1">90° turn</td>
<td align="char" char="." rowspan="1" colspan="1">2.066</td>
<td align="left" rowspan="1" colspan="1">
<italic>p</italic>
(90°
<italic>turn</italic>
↔ 60°
<italic>turn</italic>
) > 0.8,</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">60° turn</td>
<td align="char" char="." rowspan="1" colspan="1">−0.083</td>
<td align="left" rowspan="1" colspan="1">
<italic>p</italic>
(60°
<italic>turn</italic>
<italic>Straightpath</italic>
) > 0.7,</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Straight path</td>
<td align="char" char="." rowspan="1" colspan="1">0.002</td>
<td align="left" rowspan="1" colspan="1">
<italic>p</italic>
(90°
<italic>turn</italic>
<italic>Straightpath</italic>
) < 0.01*</td>
</tr>
</tbody>
</table>
</alternatives>
</table-wrap>
<table-wrap id="pone.0132020.t005" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0132020.t005</object-id>
<label>Table 5</label>
<caption>
<title>Virtual stiffness coefficients. Statistical significance was computed using the Mann-Whitney U test (
<italic>α</italic>
= 0.05).</title>
</caption>
<alternatives>
<graphic id="pone.0132020.t005g" xlink:href="pone.0132020.t005"></graphic>
<table frame="box" rules="all" border="0">
<colgroup span="1">
<col align="left" valign="top" span="1"></col>
<col align="left" valign="top" span="1"></col>
<col align="left" valign="top" span="1"></col>
</colgroup>
<thead>
<tr>
<th align="left" rowspan="1" colspan="1">Paths</th>
<th align="left" rowspan="1" colspan="1">Mean</th>
<th align="left" rowspan="1" colspan="1"></th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" rowspan="1" colspan="1">90° turn</td>
<td align="char" char="." rowspan="1" colspan="1">0.0325</td>
<td align="left" rowspan="1" colspan="1">
<italic>p</italic>
(90°
<italic>turn</italic>
↔ 60°
<italic>turn</italic>
) < 0.05*,</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">60° turn</td>
<td align="char" char="." rowspan="1" colspan="1">−0.1385</td>
<td align="left" rowspan="1" colspan="1">
<italic>p</italic>
(60°
<italic>turn</italic>
<italic>Straightpath</italic>
) < 0.05*,</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Straight path</td>
<td align="char" char="." rowspan="1" colspan="1">0.0117</td>
<td align="left" rowspan="1" colspan="1">
<italic>p</italic>
(90°
<italic>turn</italic>
<italic>Straightpath</italic>
) < 0.05*</td>
</tr>
</tbody>
</table>
</alternatives>
</table-wrap>
<p>However, the virtual stiffness is significantly different in the 90° turn path compared to the straight path (
<italic>p</italic>
= 0.002) and the 60° turn path (
<italic>p</italic>
= 0.004). Moreover, the 60° turn is significantly different from the straight path (
<italic>p</italic>
= 0.001). Even though the virtual stiffness differs significantly across the three paths, its variability is very low, whereas the variability of the virtual damping coefficient is higher than that of the virtual mass and stiffness. Therefore, these results confirm that the follower’s trust level is reflected in the time varying parameters of the virtual damped inertial system. We also note that the virtual damping coefficient reflects the level of trust more accurately than the virtual mass or stiffness.</p>
<p>We conclude that the virtual damping coefficient can be a good indicator to control the push/pull behavior of an intelligent guider using a feedback controller of the form given in
<xref ref-type="disp-formula" rid="pone.0132020.e016">Eq (10)</xref>
, where F(k) is the pushing/pulling tug force along the rein from the human guider at the
<italic>k</italic>
<sup>th</sup>
sampling step, M is the time varying virtual mass,
<italic>M</italic>
<sub>0</sub>
is its desired value,
<italic>ζ</italic>
is the time varying virtual damping coefficient,
<italic>ζ</italic>
<sub>0</sub>
is its desired value.
<disp-formula id="pone.0132020.e016">
<alternatives>
<graphic xlink:href="pone.0132020.e016.jpg" id="pone.0132020.e016g" mimetype="image" position="anchor" orientation="portrait"></graphic>
<mml:math id="M16">
<mml:mtable displaystyle="true">
<mml:mtr>
<mml:mtd columnalign="right">
<mml:mrow>
<mml:mi>F</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>k</mml:mi>
<mml:mo>+</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>=</mml:mo>
<mml:mi>F</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>k</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>-</mml:mo>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>M</mml:mi>
<mml:mo>-</mml:mo>
<mml:msub>
<mml:mi>M</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mover accent="true">
<mml:msub>
<mml:mi>P</mml:mi>
<mml:mi>f</mml:mi>
</mml:msub>
<mml:mo>¨</mml:mo>
</mml:mover>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>k</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>-</mml:mo>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>ζ</mml:mi>
<mml:mo>-</mml:mo>
<mml:msub>
<mml:mi>ζ</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mover accent="true">
<mml:msub>
<mml:mi>P</mml:mi>
<mml:mi>f</mml:mi>
</mml:msub>
<mml:mo>˙</mml:mo>
</mml:mover>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>k</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:math>
</alternatives>
<label>(10)</label>
</disp-formula>
</p>
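The force-modulation rule of Eq (10) can be sketched as a one-line scalar update. The variable names and sample values here are illustrative assumptions, not data from the study.

```python
# Minimal sketch of the tug-force modulation rule in Eq (10):
#   F(k+1) = F(k) - (M - M0) * Pf_ddot(k) - (zeta - zeta0) * Pf_dot(k)
def next_tug_force(F_k, M, M0, zeta, zeta0, pf_ddot, pf_dot):
    """One step of the guider's force feedback controller."""
    return F_k - (M - M0) * pf_ddot - (zeta - zeta0) * pf_dot

# When the follower's virtual damping rises above its desired value
# (reflecting lower trust), the commanded force is reduced for the
# same follower velocity:
F_next = next_tug_force(F_k=5.0, M=10.0, M0=10.0,
                        zeta=6.0, zeta0=4.0,
                        pf_ddot=0.0, pf_dot=0.5)
print(F_next)  # 5.0 - 2.0 * 0.5 = 4.0
```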
<p>Human subjects consistently confirmed that their trust level in following the guider dropped as they moved from the straight path, to the 60° turn path and further decreased when they took the 90° turn path. Moreover, we present the followers’ response towards the defined trust scale (see
<xref ref-type="sec" rid="sec028">material and methods</xref>
) in
<xref ref-type="fig" rid="pone.0132020.g008">Fig 8D</xref>
, where the average trust scale values across all the subjects for straight, 60° turn, and 90° turn are shown in
<xref ref-type="fig" rid="pone.0132020.g008">Fig 8D</xref>
. The variability of 90° turn is higher than that of the 60° turn and straight paths as shown in
<xref ref-type="fig" rid="pone.0132020.g008">Fig 8D</xref>
. For further clarity, the significance was computed by Mann-Whitney U test as shown in
<xref ref-type="fig" rid="pone.0132020.g008">Fig 8D</xref>
. The results show that the straight path differs significantly from the 90° turn (
<italic>p</italic>
 = 0.01) and from the 60° turn (
<italic>p</italic>
 = 0.03). The followers’ responses after each trial confirm that the follower shows more confidence when following the guider along a straight path than along paths with 90° and 60° turns.</p>
<sec id="sec020">
<title>Developing a closed loop path tracking controller incorporating the follower’s trust level</title>
<p>We combine the guider’s 3
<sup>rd</sup>
order predictive policy in
<xref ref-type="disp-formula" rid="pone.0132020.e012">Eq (7)</xref>
to control the swing movement of the hard rein with the tug force modulation rule in
<xref ref-type="disp-formula" rid="pone.0132020.e016">Eq (10)</xref>
to form a complete controller that accounts for the state of the follower indicating his/her trust level.</p>
<p>We use the coefficient values from the last 10 trials (marked on Figs
<xref ref-type="fig" rid="pone.0132020.g004">4</xref>
and
<xref ref-type="fig" rid="pone.0132020.g005">5</xref>
by red dashed lines) to calculate the statistical features of the regression coefficients, in order to make sure the model reflects the behavior of the human subjects at a mature learning stage. At this stage, we assume that the coefficients are normally distributed random variables. The model parameters were then found to be:
<italic>a</italic>
<sub>0</sub>
=
<italic>N</italic>
(−1.6784, 0.1930
<sup>2</sup>
),
<italic>a</italic>
<sub>1</sub>
=
<italic>N</italic>
(1.4710,0.5052
<sup>2</sup>
),
<italic>a</italic>
<sub>2</sub>
=
<italic>N</italic>
(−0.5295,0.5052
<sup>2</sup>
), and
<italic>c</italic>
=
<italic>N</italic>
(−0.4446,0.2643
<sup>2</sup>
).</p>
<p>In order to ascertain whether the control policy obtained by this system identification process is stable for an arbitrarily different scenario, we conducted numerical simulation studies forming a closed loop dynamic control system of the guider and the follower using the control policy given in
<xref ref-type="disp-formula" rid="pone.0132020.e012">Eq (7)</xref>
together with the discrete state space equation of the follower dynamics given in
<xref ref-type="disp-formula" rid="pone.0132020.e008">Eq (6)</xref>
. The length of the hard rein
<italic>L</italic>
= 0.7
<italic>m</italic>
, the mass of the follower
<italic>M</italic>
= 10
<italic>kg</italic>
with the damping coefficient
<italic>ζ</italic>
= 4
<italic>Nsec</italic>
/
<italic>m</italic>
, the magnitude of the force exerted along the rein was 5N, and the sampling step
<italic>T</italic>
= 0.2.</p>
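With the parameters above, the follower's damped inertial dynamics can be sketched as a simple discrete simulation. The forward Euler integration scheme is an assumption for illustration; it is not necessarily the discretization used in the study.

```python
# Sketch of the follower's damped inertial dynamics:
#   M * a = F - zeta * v
# with the stated parameters M = 10 kg, zeta = 4 N sec/m,
# |F| = 5 N, and sampling step T = 0.2.
M, ZETA, F, T = 10.0, 4.0, 5.0, 0.2

def step(pos, vel):
    acc = (F - ZETA * vel) / M
    vel = vel + acc * T       # forward Euler (assumed scheme)
    pos = pos + vel * T
    return pos, vel

pos, vel = 0.0, 0.0
for _ in range(200):          # 200 steps = 40 s of simulated time
    pos, vel = step(pos, vel)

# Velocity settles near the terminal value F / zeta = 1.25 m/s.
print(round(vel, 2))
```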
<p>Moreover, when the follower’s desired angles are
<italic>ϕ</italic>
(0) = +65° and
<italic>ϕ</italic>
(0) = −65°, we observe the simulated damped behavior of the follower’s error reduction in
<xref ref-type="fig" rid="pone.0132020.g009">Fig 9B</xref>
. This confirms that the guiding control policy yields simulation results (
<xref ref-type="fig" rid="pone.0132020.g009">Fig 9B</xref>
) comparable with the human-robot experimental results (
<xref ref-type="fig" rid="pone.0132020.g009">Fig 9A</xref>
) for reaching the same desired angles of +65° and −65°.</p>
<fig id="pone.0132020.g009" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0132020.g009</object-id>
<label>Fig 9</label>
<caption>
<title>Experimental setup and results to validate the guider’s control policy.</title>
<p>(A) The experimental results of completing the task for 10 naive subjects for the desired angles +65° and −65°. Individual subjects’ completion curves are shown by dashed lines; the average fitted task-completion curve across all subjects is shown by a solid line. (B) Simulation results for task completion for the desired angles +65° and −65°. (C) Average rise time across 10 subjects for desired angles +65°, +45°, +25°, −65°, −45°, and −25°.</p>
</caption>
<graphic xlink:href="pone.0132020.g009"></graphic>
</fig>
<p>To understand the variability of the virtual model parameters based on the model, we set the virtual mass
<italic>M</italic>
= 15
<italic>kg</italic>
from
<italic>t</italic>
= 2
<italic>sec</italic>
to
<italic>t</italic>
= 3
<italic>sec</italic>
and the virtual damping coefficient
<italic>ζ</italic>
= 6
<italic>Nsec</italic>
/
<italic>m</italic>
from
<italic>t</italic>
= 6
<italic>sec</italic>
to
<italic>t</italic>
= 7
<italic>sec</italic>
to observe tug force variation in
<xref ref-type="disp-formula" rid="pone.0132020.e008">Eq (6)</xref>
as shown in
<xref ref-type="fig" rid="pone.0132020.g010">Fig 10</xref>
. The tug force variation in
<xref ref-type="fig" rid="pone.0132020.g010">Fig 10</xref>
shows that the virtual damping coefficient influences the tug force more strongly than the virtual mass. The results suggest that this virtual model parameter can be used to represent the level of trust of the follower.</p>
<fig id="pone.0132020.g010" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0132020.g010</object-id>
<label>Fig 10</label>
<caption>
<title>Simulation results.</title>
<p>The tug force variation of the follower in order to achieve a sudden change of the virtual mass
<italic>M</italic>
= 15[kg] from
<italic>t</italic>
= 2s to
<italic>t</italic>
= 3s and the virtual damping coefficient
<italic>ζ</italic>
= 6[Nsec/m] from
<italic>t</italic>
= 6s to
<italic>t</italic>
= 7s.
<italic>F</italic>
<sub>
<italic>X</italic>
</sub>
and
<italic>F</italic>
<sub>
<italic>Y</italic>
</sub>
are forces in X and Y directions.</p>
</caption>
<graphic xlink:href="pone.0132020.g010"></graphic>
</fig>
</sec>
</sec>
<sec id="sec021">
<title>Experiment 3: Validating the guider’s control policy</title>
<sec id="sec022">
<title>The guider’s closed loop control policy validation</title>
<p>We implemented the guider’s control policy in
<xref ref-type="disp-formula" rid="pone.0132020.e012">Eq (7)</xref>
to generate a tug force from the planar 1-DoF robotic arm in order to guide the blindfolded follower as shown in
<xref ref-type="fig" rid="pone.0132020.g001">Fig 1G</xref>
. The experimental results of the trials involving 10 naive subjects show that the closed loop controller minimizes the error in guiding the subject to the desired target as shown in
<xref ref-type="fig" rid="pone.0132020.g009">Fig 9A</xref>
. The individual subject’s task completion is represented by a dashed line while the average fitted settling curve across all subjects is represented by a solid line in
<xref ref-type="fig" rid="pone.0132020.g009">Fig 9A</xref>
. Moreover, a comparison of experimental results (
<xref ref-type="fig" rid="pone.0132020.g009">Fig 9A</xref>
) and simulation results (
<xref ref-type="fig" rid="pone.0132020.g009">Fig 9B</xref>
) suggests that the closed loop guiding controller can minimize the following error and bring the human subject to the desired point. There is no significant difference between the average distribution of the experimental recordings (
<italic>p</italic>
= 0.857) in
<xref ref-type="fig" rid="pone.0132020.g009">Fig 9A</xref>
across the subjects and the simulation (
<italic>p</italic>
= 0.067) in
<xref ref-type="fig" rid="pone.0132020.g009">Fig 9B</xref>
for reaching −65° and +65°. The significance test results confirm the validation of the proposed virtual damped inertial model given in
<xref ref-type="disp-formula" rid="pone.0132020.e016">Eq (10)</xref>
for the follower. Combining the trust studies in
<xref ref-type="fig" rid="pone.0132020.g008">Fig 8A</xref>
and
<xref ref-type="fig" rid="pone.0132020.g008">Fig 8B</xref>
, and the simulation in
<xref ref-type="fig" rid="pone.0132020.g009">Fig 9A</xref>
and
<xref ref-type="fig" rid="pone.0132020.g009">Fig 9B</xref>
, we conclude that the virtual damping coefficient can be used as an indicator of the human follower’s trust level.</p>
<p>We show how the average error
<italic>ϕ</italic>
was reduced over time across the trials for ten subjects for desired angles −65° and +65° in
<xref ref-type="fig" rid="pone.0132020.g009">Fig 9A</xref>
and
<xref ref-type="fig" rid="pone.0132020.g009">Fig 9B</xref>
for experimental data and simulation respectively. The results show that the implemented guider’s control policy is able to bring the blindfolded subject into the desired position and settle down in a reasonable time. For clarity, we demonstrate the average rise time across all subjects for the given six desired angles as shown in
<xref ref-type="fig" rid="pone.0132020.g009">Fig 9C</xref>
. The desired angles were −65°, −45°, −25°, +25°, +45°, and +65°. We measured the rise time as the number of commands needed to move from 10% to 90% of the desired angle, extracted with the stepinfo function (MATLAB 2012b). A single trial was run for 90 seconds. The experimental results show that the subjects can reach the desired angle and settle within a reasonable time, again confirming that the implemented controller can bring subjects to the desired positions.</p>
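The 10%-to-90% rise-time measure described above can be sketched as follows, as a stdlib analogue of MATLAB's stepinfo; the trajectory data are hypothetical.

```python
# Sketch of the rise-time measure: the number of commands (samples)
# taken to move from 10% to 90% of the desired angle, analogous to
# MATLAB's stepinfo rise time. Data below are hypothetical.
def rise_time_samples(response, target):
    lo, hi = 0.1 * target, 0.9 * target
    t_lo = next(i for i, v in enumerate(response) if v >= lo)
    t_hi = next(i for i, v in enumerate(response) if v >= hi)
    return t_hi - t_lo

# Hypothetical settling trajectory toward a +65 degree target:
traj = [0, 4, 9, 18, 30, 44, 55, 60, 63, 64, 65]
print(rise_time_samples(traj, 65.0))
```

This simple version assumes a monotonically rising positive trajectory; negative targets would need the thresholds mirrored.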
</sec>
</sec>
</sec>
<sec sec-type="conclusions" id="sec023">
<title>Discussion</title>
<p>This study was conducted to explore how two human participants interact with each other using haptic signals through a hard rein to achieve a path tracking goal when one partner (the follower) is blindfolded, while the other (the guider) receives full state feedback of the follower.</p>
<sec id="sec024">
<title>The duo’s policies</title>
<p>If an intelligent agent (man/machine) is given the task of guiding such a follower using only a hard rein, the guiding agent should learn a control policy that can effectively manage the variability of the follower’s behavior [
<xref rid="pone.0132020.ref026" ref-type="bibr">26</xref>
]. In this study, we conducted experiments to understand how two human subjects interact with each other using haptic signals through a hard rein to achieve a path tracking goal when one partner was cut off from auditory and visual feedback from the environment (the follower), while the other, with full environmental perception (the guider), received full state feedback of the follower, allowing us to characterize the variability of movement and the uncertainty of the behavior.</p>
<p>The
<italic>R</italic>
<sup>2</sup>
values of the guider’s predictive and the follower’s reactive behavioral policies increased over the course of the trials, as shown in
<xref ref-type="fig" rid="pone.0132020.g003">Fig 3A</xref>
and
<xref ref-type="fig" rid="pone.0132020.g003">Fig 3C</xref>
. The significance test results among different orders of auto-regressive policies confirm that the guider’s policy is best approximated by a 3
<sup>rd</sup>
order model while the follower’s state transition policy is best approximated by a 2
<sup>nd</sup>
order model. The results suggest that in general, the guider depends on more information than the follower. The follower’s 2
<sup>nd</sup>
order reactive and the guider’s 3
<sup>rd</sup>
 order predictive control policies suggest that a reactive behavior does not need as many past states as a predictive behavior to take actions. The proposed control policy based on human-human demonstrations is mainly intended for use in robots that guide people with good vision working in low visibility environments, as in fire-fighting and other disaster response operations.</p>
<p>Variability is an indispensable feature in human behavior [
<xref rid="pone.0132020.ref027" ref-type="bibr">27</xref>
]. Therefore, we set out to understand the specific properties of variability of human guiding behavior in this particular task by observing the variation of polynomial coefficients in Eqs (
<xref ref-type="disp-formula" rid="pone.0132020.e002">2</xref>
) and (
<xref ref-type="disp-formula" rid="pone.0132020.e004">3</xref>
) across trials. By modeling the control policy learned by the guiding agent as a discrete state dependent auto-regressive function, we found that the guiding agent learns a stable stochastic control policy across 20 trials, as shown in Figs
<xref ref-type="fig" rid="pone.0132020.g004">4</xref>
and
<xref ref-type="fig" rid="pone.0132020.g005">5</xref>
. These results are consistent with those of previous studies on stochastic human behavior [
<xref rid="pone.0132020.ref027" ref-type="bibr">27</xref>
<xref rid="pone.0132020.ref029" ref-type="bibr">29</xref>
] in similar contexts.</p>
</sec>
<sec id="sec025">
<title>The human follower’s trust</title>
<p>Previous studies on human trust on a guiding agent have shown that humans tend to depend entirely on the guiding agent when they are in hazardous environments [
<xref rid="pone.0132020.ref017" ref-type="bibr">17</xref>
] until a sudden change occurs [
<xref rid="pone.0132020.ref030" ref-type="bibr">30</xref>
]. This implies that the degree of compliance in a follower should diminish if the follower loses trust in the guiding agent. By modeling the impedance of the follower as a virtual inertial damped stiffness system, we then considered the variability of the follower’s impedance parameters (the virtual mass, damping, and stiffness coefficients) at different turn angles. Regarding the three types of paths shown in
<xref ref-type="fig" rid="pone.0132020.g008">Fig 8</xref>
, the blindfolded subjects who played the role of the follower confirmed that their trust in following the guider was highest on the straight path, decreased on the other paths, and was lowest on the path with the 90° turn. The results of the virtual impedance parameters in
<xref ref-type="disp-formula" rid="pone.0132020.e015">Eq (9)</xref>
are shown in
<xref ref-type="fig" rid="pone.0132020.g008">Fig 8A</xref>
,
<xref ref-type="fig" rid="pone.0132020.g008">Fig 8B</xref>
, and
<xref ref-type="fig" rid="pone.0132020.g008">Fig 8C</xref>
. Our experimental results of human subjects also show that the variability of the virtual damping coefficients correlates more with the complexity of the path in
<xref ref-type="fig" rid="pone.0132020.g008">Fig 8</xref>
—reflecting the trust level of the follower—than that of the virtual mass or stiffness coefficients. Therefore, we consider the follower as virtual damped inertial system as in
<xref ref-type="disp-formula" rid="pone.0132020.e007">Eq (5)</xref>
.
<xref ref-type="fig" rid="pone.0132020.g008">Fig 8A</xref>
,
<xref ref-type="fig" rid="pone.0132020.g008">Fig 8B</xref>
, and
<xref ref-type="fig" rid="pone.0132020.g008">Fig 8C</xref>
show that the higher trust of the follower in the straight path results in a lower average value of the virtual damping coefficient. When the follower’s trust decreases during the 90° and 60° turns, the guider has to exert a higher tug force to lead the follower to the desired trajectory. This results in higher average values for the virtual mass, damping, and stiffness coefficients.</p>
<p>Moreover, the experimental average trust scale test results in
<xref ref-type="fig" rid="pone.0132020.g008">Fig 8C</xref>
suggest that the follower’s trust decreases in 90° and 60° turns.</p>
<p>Once the parameters of the
<xref ref-type="disp-formula" rid="pone.0132020.e002">Eq (2)</xref>
are known, the damped inertial model of the voluntary movement of the follower can be combined to form a complete state dependent controller that accounts for the trust level of the follower as given by
<inline-formula id="pone.0132020.e017">
<alternatives>
<graphic xlink:href="pone.0132020.e017.jpg" id="pone.0132020.e017g" mimetype="image" position="anchor" orientation="portrait"></graphic>
<mml:math id="M17">
<mml:mrow>
<mml:mrow>
<mml:mo stretchy="true">[</mml:mo>
<mml:mtable>
<mml:mtr>
<mml:mtd>
<mml:mi>F</mml:mi>
<mml:mo stretchy="false">(</mml:mo>
<mml:mi>k</mml:mi>
<mml:mo>+</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo stretchy="false">)</mml:mo>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mi>θ</mml:mi>
<mml:mo stretchy="false">(</mml:mo>
<mml:mi>k</mml:mi>
<mml:mo>+</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo stretchy="false">)</mml:mo>
</mml:mtd>
</mml:mtr>
</mml:mtable>
<mml:mo stretchy="true">]</mml:mo>
</mml:mrow>
<mml:mo>=</mml:mo>
<mml:mrow>
<mml:mo stretchy="true">[</mml:mo>
<mml:mtable>
<mml:mtr>
<mml:mtd>
<mml:mn>1</mml:mn>
</mml:mtd>
<mml:mtd>
<mml:mn>0</mml:mn>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mn>0</mml:mn>
</mml:mtd>
<mml:mtd>
<mml:mn>1</mml:mn>
</mml:mtd>
</mml:mtr>
</mml:mtable>
<mml:mo stretchy="true">]</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mo stretchy="true">[</mml:mo>
<mml:mtable>
<mml:mtr>
<mml:mtd>
<mml:mi>F</mml:mi>
<mml:mo stretchy="false">(</mml:mo>
<mml:mi>k</mml:mi>
<mml:mo stretchy="false">)</mml:mo>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mi>θ</mml:mi>
<mml:mo stretchy="false">(</mml:mo>
<mml:mi>k</mml:mi>
<mml:mo stretchy="false">)</mml:mo>
</mml:mtd>
</mml:mtr>
</mml:mtable>
<mml:mo stretchy="true">]</mml:mo>
</mml:mrow>
<mml:mo>+</mml:mo>
<mml:mrow>
<mml:mo stretchy="true">[</mml:mo>
<mml:mtable>
<mml:mtr>
<mml:mtd>
<mml:mo stretchy="false">(</mml:mo>
<mml:mi>M</mml:mi>
<mml:mo>−</mml:mo>
<mml:msub>
<mml:mi>M</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
<mml:mo stretchy="false">)</mml:mo>
<mml:msub>
<mml:mover accent="true">
<mml:mi>P</mml:mi>
<mml:mo>..</mml:mo>
</mml:mover>
<mml:mi>f</mml:mi>
</mml:msub>
<mml:mo stretchy="false">(</mml:mo>
<mml:mi>k</mml:mi>
<mml:mo stretchy="false">)</mml:mo>
<mml:mo>−</mml:mo>
<mml:mo stretchy="false">(</mml:mo>
<mml:mi>ζ</mml:mi>
<mml:mo>−</mml:mo>
<mml:msub>
<mml:mi>ζ</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
<mml:mo stretchy="false">)</mml:mo>
<mml:msub>
<mml:mover accent="true">
<mml:mi>P</mml:mi>
<mml:mo>.</mml:mo>
</mml:mover>
<mml:mi>f</mml:mi>
</mml:msub>
<mml:mo stretchy="false">(</mml:mo>
<mml:mi>k</mml:mi>
<mml:mo stretchy="false">)</mml:mo>
<mml:mo stretchy="false">)</mml:mo>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:msubsup>
<mml:mo>∑</mml:mo>
<mml:mrow>
<mml:mi>r</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>0</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mi>N</mml:mi>
<mml:mo>−</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:msubsup>
<mml:mi>a</mml:mi>
<mml:mi>r</mml:mi>
<mml:mrow>
<mml:mi>P</mml:mi>
<mml:mi>r</mml:mi>
<mml:mi>e</mml:mi>
</mml:mrow>
</mml:msubsup>
<mml:mi>ϕ</mml:mi>
<mml:mo stretchy="false">(</mml:mo>
<mml:mi>k</mml:mi>
<mml:mo>+</mml:mo>
<mml:mi>r</mml:mi>
<mml:mo stretchy="false">)</mml:mo>
<mml:mo>+</mml:mo>
<mml:msup>
<mml:mi>C</mml:mi>
<mml:mrow>
<mml:mi>P</mml:mi>
<mml:mi>r</mml:mi>
<mml:mi>e</mml:mi>
</mml:mrow>
</mml:msup>
</mml:mtd>
</mml:mtr>
</mml:mtable>
<mml:mo stretchy="true">]</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</alternatives>
</inline-formula>
where
<italic>M</italic>
<sub>0</sub>
and
<italic>ζ</italic>
<sub>0</sub>
 are the desired mass and damping coefficients, respectively. This complete state dependent controller can be readily implemented in a potential human-robot interaction scenario.</p>
<p>Therefore, our results from human-human demonstrations provide useful design guidelines to human-robot interaction that should account for the real-time trust level of the human counterpart. In a human-robot interaction scenario, such as one involving a fire-fighter being guided by a robot through thick smoke, the estimate of the followers’ trust using the above method could be used to change acceleration/deceleration of the intelligent agent.</p>
</sec>
<sec id="sec026">
<title>Arm muscle recruitment for cost minimization</title>
<p>Previous work [
<xref rid="pone.0132020.ref031" ref-type="bibr">31</xref>
] has shown that the total muscle activation for a single task decreases over learning trials. From the 2
<sup>nd</sup>
order best fit curve for the quadratic sum of EMG
<italic>J</italic>
for all muscles as shown in
<xref ref-type="fig" rid="pone.0132020.g007">Fig 7C</xref>
, we can observe that
<italic>J</italic>
 increases to a maximum around the 10th trial and then decreases in the last 10 trials. This suggests that effort optimization is a non-monotonic process. During the first 10 trials, subjects may have given priority to order selection rather than to optimization in the muscle activation space, which is also reflected in the behavior of
<italic>R</italic>
<sup>2</sup>
values in
<xref ref-type="fig" rid="pone.0132020.g003">Fig 3</xref>
. Once the optimal order is selected, subjects exhibit monotonic optimization in the muscle activation space, as seen in the last 10 trials of
<xref ref-type="fig" rid="pone.0132020.g007">Fig 7C</xref>
, with a corresponding increase of
<italic>R</italic>
<sup>2</sup>
values in
<xref ref-type="fig" rid="pone.0132020.g003">Fig 3</xref>
. Moreover, we observed that the guider’s muscle activation gradually progressed from an initial muscle co-contraction based command generation strategy to a low energy policy with minimal muscle co-contraction. This is in agreement with other studies that show a similar pattern of reduction in muscle co-contraction as motor learning progresses [
<xref rid="pone.0132020.ref024" ref-type="bibr">24</xref>
,
<xref rid="pone.0132020.ref025" ref-type="bibr">25</xref>
]. This phenomenon may result from the fact that the guiding agent builds internal models [
<xref rid="pone.0132020.ref032" ref-type="bibr">32</xref>
] of hand and task dynamics to guide the blindfolded follower. The human guiding strategy can be realized by a planar 1-DoF robotic arm with a passive joint at the end point to connect the hard rein. We demonstrated the effectiveness of this idea by deploying the controller identified from human-human demonstrations directly on the planar robotic arm without any modifications.</p>
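The muscle-effort cost J discussed above can be sketched as a quadratic sum of EMG activations across the four recorded muscles within a trial. The exact normalization used in the paper is not quoted here; this definition and the activation samples are illustrative assumptions.

```python
# Sketch of a quadratic EMG cost J: sum of squared (rectified) EMG
# samples over all four recorded muscles in one trial.
# Activation values below are hypothetical, for illustration only.
def emg_cost(channels):
    """channels: list of per-muscle EMG sample lists (rectified)."""
    return sum(v * v for ch in channels for v in ch)

anterior_deltoid  = [0.1, 0.2, 0.1]
biceps            = [0.2, 0.3, 0.2]
posterior_deltoid = [0.1, 0.1, 0.1]
lateral_triceps   = [0.2, 0.2, 0.1]

J = emg_cost([anterior_deltoid, biceps,
              posterior_deltoid, lateral_triceps])
print(round(J, 2))
```

Tracking J across trials, as in Fig 7C, would then reveal whether effort decreases monotonically with learning.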
</sec>
<sec id="sec027">
<title>Future applications and research directions</title>
<p>The guiding control policy in Eqs (
<xref ref-type="disp-formula" rid="pone.0132020.e001">1</xref>
) and (
<xref ref-type="disp-formula" rid="pone.0132020.e002">2</xref>
) together with the virtual damped inertial model which estimates the trust level of the follower opens up the opportunity for the development of an integrated controller that treats the trust level of the follower as a part of the state vector. This will enable the controller to adjust to the changes of the behavioral dynamics of the follower in varying distraction and stress conditions. In this study we propose a model that predicts the future states of the follower and can be used in a predictive control policy. This forward model may contain some approximation of the follower’s reactive behavior. It will be interesting to understand the detailed computational nature of this prediction used by the guider. Another unexplored area is to understand factors determining how the follower perceives haptic control commands given by the guider.</p>
<p>Moreover, if a group can be trained to follow the person immediately in front or a leader, the robot can guide just one human using a hard rein, and that person can be linked to the others using hard or soft reins. The group can then be modeled as a soft passive dynamic system with multiple degrees of freedom.</p>
</sec>
</sec>
<sec sec-type="materials|methods" id="sec028">
<title>Materials and Methods</title>
<p>We conducted three separate experiments: 1) to understand the state dependent control policy of human subjects when one human guides another, who has limited visual and auditory perception of the environment, along an arbitrary complex path, 2) to model the trust level of the follower using a time varying damped inertial system, and 3) to validate the guider’s control policy.</p>
<sec id="sec029">
<title>Experimental protocol</title>
<p>In all experiments, subjects signed a written consent form approved by the King’s College London Biomedical Sciences, Medicine, Dentistry and Natural and Mathematical Sciences research ethics committee (REC reference number BDM/11/12-20).</p>
</sec>
<sec id="sec030">
<title>Experiment 1: Extracting guiding/following control policies</title>
<p>Experiment 1 was conducted to extract guiding/following control policies. Fifteen (11 male, 4 female) naive subjects participated in 20 trials. Subjects were healthy and in the 23–43 age group (avg: 28.20, std: 5.12 years).
<xref ref-type="fig" rid="pone.0132020.g001">Fig 1A</xref>
shows how the guider and the follower held the two ends of a hard rein, 0.7 m long and weighing 500 g, to track the wiggly path.
<xref ref-type="fig" rid="pone.0132020.g001">Fig 1A</xref>
shows that the follower was blindfolded and prevented from using auditory feedback.
<xref ref-type="fig" rid="pone.0132020.g001">Fig 1B</xref>
shows the relative orientation difference between the guider and the follower (referred to as state hereafter), and angle of the rein relative to the guiding agent (referred to as action hereafter).</p>
<p>For clarity, the detailed wiggly path is shown in
<xref ref-type="fig" rid="pone.0132020.g001">Fig 1C</xref>
. The 9 m long path was divided into nine milestones as shown in
<xref ref-type="fig" rid="pone.0132020.g001">Fig 1C</xref>
. In any given trial, the guider was asked to take the follower from one milestone to another six milestones away, up or down the path (e.g. 1–7, 2–8, 3–9, 9–3, 8–2, and 7–1). The starting milestone was pseudo-randomly changed from trial to trial and the follower was disoriented before starting every trial in order to eliminate the effect of any memory of the path. The guider was instructed to move the handle of the hard rein only in the horizontal plane to generate left/right turn and push/pull commands; hence the guider made only left/right and push/pull movements in the horizontal plane, with negligible vertical movement. Furthermore, the guider was instructed to use push and pull commands for forwards and backwards movements to keep the follower on the defined path as shown in
<xref ref-type="fig" rid="pone.0132020.g001">Fig 1C</xref>
. The follower was instructed to pay attention to the commands via the hard rein to follow the guider. The follower started to follow the guider once a gentle tug was given via the rein. The subjects were asked to maintain a natural walking speed during the trial. Experimental data can be found in
<xref ref-type="supplementary-material" rid="pone.0132020.s001">S1 File</xref>
: Motion data for subject 1 to subject 8,
<xref ref-type="supplementary-material" rid="pone.0132020.s002">S2 File</xref>
: Motion data for subject 9 to subject 15. Moreover,
<xref ref-type="supplementary-material" rid="pone.0132020.s003">S3 File</xref>
: EMG data for subject 1 to subject 8, and
<xref ref-type="supplementary-material" rid="pone.0132020.s004">S4 File</xref>
: EMG data for subject 9 to subject 15.</p>
</sec>
<sec id="sec031">
<title>Experiment 2: Modeling the follower’s trust in different paths</title>
<p>Experiment 2 was conducted to study how to model the trust of the follower in different path tracking contexts. Fourteen naive pairs (10 male, 4 female) of subjects participated in 10 trials each for three different paths as shown in
<xref ref-type="fig" rid="pone.0132020.g001">Fig 1E</xref>
. Subjects were healthy and in the 23–43 age group (avg: 26.20, std: 2.21 years). The path was pseudo-randomly changed from trial to trial and the follower was disoriented before starting every trial in order to eliminate the effect of any memory of the path. The subjects were given 5 minute breaks after every 6 trials and were asked to maintain a natural walking speed during each trial. To study the human follower’s trust, a trust scale from 1 (lowest) to 10 (highest) was introduced before starting the experiments, and subjects were asked to rate their trust in following the guider after each trial. Experimental data can be found in
<xref ref-type="supplementary-material" rid="pone.0132020.s005">S5 File</xref>
: Force data for subject 1 to subject 5,
<xref ref-type="supplementary-material" rid="pone.0132020.s006">S6 File</xref>
: Force data for subject 6 to subject 10, and
<xref ref-type="supplementary-material" rid="pone.0132020.s007">S7 File</xref>
: Force data for subject 11 to subject 15. Moreover,
<xref ref-type="supplementary-material" rid="pone.0132020.s008">S8 File</xref>
: Motion data for subject 1 to subject 8, and
<xref ref-type="supplementary-material" rid="pone.0132020.s009">S9 File</xref>
: Motion data for subject 9 to subject 15.</p>
</sec>
<sec id="sec032">
<title>Experiment 3: Validating the guider’s control policy</title>
<p>Experiment 3 was conducted to validate the guider’s control policy and test its stability. We conducted experiments with 10 naive subjects (7 male, 3 female). Subjects were healthy and in the 21–28 age group (avg: 25.90, std: 1.91 years). Each subject participated in 3 trials. We implemented the guider’s control policy in
<xref ref-type="disp-formula" rid="pone.0132020.e012">Eq (7)</xref>
on a 1-DoF planar robotic arm to generate swing actions to guide a follower to a desired point. The schematic diagram of the experimental setup is shown in
<xref ref-type="fig" rid="pone.0132020.g001">Fig 1F</xref>
.
<xref ref-type="fig" rid="pone.0132020.g001">Fig 1G</xref>
shows the actual experimental setup.</p>
<p>Here a cord was attached to the waist belt of the blindfolded subjects. The encoder on the motor shaft platform was used to measure the orientation difference between the follower and the motor shaft, as shown in
<xref ref-type="fig" rid="pone.0132020.g001">Fig 1G</xref>
. The subjects were instructed to move proportionally to the force they felt and in the direction of the tug force. Once a trial started, the encoder mounted on the motor shaft read the instantaneous error
<italic>ϕ</italic>
of the blindfolded subject’s position relative to the desired angle. We defined −65°, −45°, −25°, +25°, +45°, and +65° as desired angles. The robotic arm then computed commands to perturb the arm so as to minimize the following error between the human subject and the robotic arm. A single trial ran for 90 seconds. Experimental data can be found in the supporting information:
<xref ref-type="supplementary-material" rid="pone.0132020.s010">S10 File</xref>
: Reaching data for 10 subjects.</p>
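The closed loop described above can be illustrated with a short simulation: a state-dependent auto-regressive guiding policy drives the following error <italic>ϕ</italic> toward zero. The coefficients, gain, and follower dynamics below are hypothetical placeholders, not the fitted parameters of Eq (7).

```python
import numpy as np

def guider_action(phi_hist, a=(0.5, 0.3, 0.2), k=0.8):
    """Hypothetical 3rd-order auto-regressive guiding policy:
    the commanded swing is a weighted sum of the last three state
    errors phi (follower-guider orientation difference). The
    coefficients a and gain k are illustrative only."""
    return k * sum(ai * p for ai, p in zip(a, phi_hist[-1:-4:-1]))

# Simulated closed loop: the follower reduces the error in
# proportion to the commanded swing (illustrative dynamics only).
phi = [30.0, 30.0, 30.0]               # initial error history (degrees)
for _ in range(50):
    theta = guider_action(phi)         # swing command from the policy
    phi.append(phi[-1] - 0.5 * theta)  # follower moves toward target

assert abs(phi[-1]) < abs(phi[2])      # error shrinks over the trial
```

With these placeholder values the characteristic roots of the closed loop lie inside the unit circle, so the error decays over the 90-second trial; the stability analysis of the actual policy is reported in the paper.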
</sec>
<sec id="sec033">
<title>Sensing</title>
<p>MTx motion capture sensors (3-axis acceleration, 3-axis magnetic field strength, 4 quaternions, and 3-axis gyroscope readings; Xsens, USA) were used to measure the states
<italic>ϕ</italic>
and actions
<italic>θ</italic>
of the duo. Two MTx sensors were attached to the chests of the guider and the follower to measure the rate of change of the orientation difference between them (the state). Another motion tracker was attached to the hard rein to measure the angle of the rein relative to the sensor on the guider’s chest (the guider’s action). Four electromyography (EMG) electrodes, sampled at 1500 Hz, were fixed on the guider’s anterior deltoid, biceps, posterior deltoid, and lateral triceps along the upper arm, as shown in
<xref ref-type="fig" rid="pone.0132020.g006">Fig 6D</xref>
. Before attaching the EMG electrodes, the skin was cleaned with alcohol. An extra motion tracker with a switch was worn by the guider. We synchronized the MTx motion sensors with the EMG sensors by serially connecting a channel of the EMG recorder to the magnetic sensor of an MTx sensor via a switch. When the guider closed the switch, a magnetic pulse was induced in the MTx motion sensor while a voltage pulse was recorded in one of the EMG channels. Since we used five MTx sensors, we sampled data at 25 Hz to stay within hardware design limits.</p>
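The pulse-based synchronization described above can be sketched as follows: the onset of the switch-induced step is located in both the EMG channel and the MTx magnetometer channel, which gives the time offset between the two recordings. The sampling rates follow the text; the channel layout and onset detection are illustrative assumptions.

```python
import numpy as np

def align_streams(emg_sync, mag_sync, f_emg=1500.0, f_mtx=25.0):
    """Locate the switch-induced step in the EMG sync channel and
    in the MTx magnetometer channel, and return the time offset
    (seconds) between the two time axes."""
    i_emg = int(np.argmax(np.abs(np.diff(emg_sync))))  # pulse onset in EMG
    i_mtx = int(np.argmax(np.abs(np.diff(mag_sync))))  # pulse onset in MTx
    return i_emg / f_emg - i_mtx / f_mtx

# Synthetic demonstration: a step at t = 2.0 s in both channels.
t_emg = np.arange(0, 5, 1 / 1500.0)
t_mtx = np.arange(0, 5, 1 / 25.0)
emg_sync = (t_emg >= 2.0).astype(float)
mag_sync = (t_mtx >= 2.0).astype(float)
offset = align_streams(emg_sync, mag_sync)
assert abs(offset) < 0.05  # streams agree to within ~one MTx sample
```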
<p>In the second experiment, in addition to the MTx sensors, an ATI Mini40 6-axis force/torque transducer was attached to the hard rein to measure, at 1000 Hz, the tug force applied along the horizontal plane to guide the follower. The acceleration of the follower was measured by the MTx sensors, as shown in
<xref ref-type="fig" rid="pone.0132020.g001">Fig 1B</xref>
.</p>
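Because the force and motion streams were sampled at different rates (1000 Hz and 25 Hz), pairing them per time step requires resampling onto a common timeline. The paper does not specify its resampling method; a plausible sketch using linear interpolation is:

```python
import numpy as np

def resample_to_motion(force, f_force=1000.0, f_mtx=25.0):
    """Downsample the 1000 Hz rein-force signal onto the 25 Hz MTx
    motion timeline by linear interpolation, so force and state
    samples can be paired. Illustrative, not the paper's method."""
    t_force = np.arange(len(force)) / f_force
    t_mtx = np.arange(0, t_force[-1], 1 / f_mtx)
    return t_mtx, np.interp(t_mtx, t_force, force)

# Example: a 2-second, 1 Hz sinusoidal tug-force profile.
force = np.sin(2 * np.pi * 1.0 * np.arange(2000) / 1000.0)
t_mtx, f_on_motion_grid = resample_to_motion(force)
assert len(f_on_motion_grid) == len(t_mtx)
```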
</sec>
<sec id="sec034">
<title>Data Analysis</title>
<p>All data were analyzed using MATLAB R2012a (The MathWorks Inc.). We used the Daubechies wavelet family (db10) of the MATLAB Wavelet Toolbox to extract the action of the guider and the state of the follower. The Symlet wavelet family (sym8) was used for the EMG analysis. Statistical significance was computed using the Mann-Whitney U test and the t-test.</p>
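An analogous pipeline can be sketched outside MATLAB. The example below uses PyWavelets for a db10 wavelet denoising step (soft universal thresholding is an assumption; the paper does not state its thresholding rule) and SciPy for the Mann-Whitney U test, all on synthetic data.

```python
import numpy as np
import pywt
from scipy.stats import mannwhitneyu

def wavelet_denoise(signal, wavelet="db10", level=4):
    """Decompose with db10 (as in the text), soft-threshold the
    detail coefficients with the universal threshold (an assumed
    rule), and reconstruct."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745   # noise estimate
    thr = sigma * np.sqrt(2 * np.log(len(signal)))
    coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(signal)]

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1024)
clean = np.sin(2 * np.pi * 2 * t)
noisy = clean + 0.3 * rng.standard_normal(t.size)
denoised = wavelet_denoise(noisy)
# The denoised trace should be closer to the underlying signal.
assert np.mean((denoised - clean) ** 2) < np.mean((noisy - clean) ** 2)

# Group comparison as in the paper's statistics (illustrative data).
u_stat, p_value = mannwhitneyu(rng.normal(0, 1, 30), rng.normal(1, 1, 30))
```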
</sec>
<sec id="sec035">
<title>Ethics statement</title>
<p>The experimental study, protocol, information sheet, and consent form were approved by the King’s College London Biomedical Sciences, Medicine, Dentistry and Natural and Mathematical Sciences research ethics committee (REC reference number BDM/11/12-20). In all experiments, subjects signed a written consent form approved by the same committee.</p>
</sec>
</sec>
<sec sec-type="supplementary-material" id="sec036">
<title>Supporting Information</title>
<supplementary-material content-type="local-data" id="pone.0132020.s001">
<label>S1 File</label>
<caption>
<title>Motion data in human movements in Experiment 1 for subject 1 to subject 8.</title>
<p>(ZIP)</p>
</caption>
<media xlink:href="pone.0132020.s001.zip">
<caption>
<p>Click here for additional data file.</p>
</caption>
</media>
</supplementary-material>
<supplementary-material content-type="local-data" id="pone.0132020.s002">
<label>S2 File</label>
<caption>
<title>Motion data in human movements in Experiment 1 for subject 9 to subject 15.</title>
<p>(ZIP)</p>
</caption>
<media xlink:href="pone.0132020.s002.zip">
<caption>
<p>Click here for additional data file.</p>
</caption>
</media>
</supplementary-material>
<supplementary-material content-type="local-data" id="pone.0132020.s003">
<label>S3 File</label>
<caption>
<title>EMG recordings in Experiment 1 for subject 1 to subject 8.</title>
<p>(ZIP)</p>
</caption>
<media xlink:href="pone.0132020.s003.zip">
<caption>
<p>Click here for additional data file.</p>
</caption>
</media>
</supplementary-material>
<supplementary-material content-type="local-data" id="pone.0132020.s004">
<label>S4 File</label>
<caption>
<title>EMG recordings in Experiment 1 for subject 9 to subject 15.</title>
<p>(ZIP)</p>
</caption>
<media xlink:href="pone.0132020.s004.zip">
<caption>
<p>Click here for additional data file.</p>
</caption>
</media>
</supplementary-material>
<supplementary-material content-type="local-data" id="pone.0132020.s005">
<label>S5 File</label>
<caption>
<title>Force data in Experiment 2 for subject 1 to subject 5.</title>
<p>(ZIP)</p>
</caption>
<media xlink:href="pone.0132020.s005.zip">
<caption>
<p>Click here for additional data file.</p>
</caption>
</media>
</supplementary-material>
<supplementary-material content-type="local-data" id="pone.0132020.s006">
<label>S6 File</label>
<caption>
<title>Force data in Experiment 2 for subject 6 to subject 10.</title>
<p>(ZIP)</p>
</caption>
<media xlink:href="pone.0132020.s006.zip">
<caption>
<p>Click here for additional data file.</p>
</caption>
</media>
</supplementary-material>
<supplementary-material content-type="local-data" id="pone.0132020.s007">
<label>S7 File</label>
<caption>
<title>Force data in Experiment 2 for subject 11 to subject 15.</title>
<p>(ZIP)</p>
</caption>
<media xlink:href="pone.0132020.s007.zip">
<caption>
<p>Click here for additional data file.</p>
</caption>
</media>
</supplementary-material>
<supplementary-material content-type="local-data" id="pone.0132020.s008">
<label>S8 File</label>
<caption>
<title>Motion data in human movements in Experiment 2 for subject 1 to subject 8.</title>
<p>(ZIP)</p>
</caption>
<media xlink:href="pone.0132020.s008.zip">
<caption>
<p>Click here for additional data file.</p>
</caption>
</media>
</supplementary-material>
<supplementary-material content-type="local-data" id="pone.0132020.s009">
<label>S9 File</label>
<caption>
<title>Motion data in human movements in Experiment 2 for subject 9 to subject 15.</title>
<p>(ZIP)</p>
</caption>
<media xlink:href="pone.0132020.s009.zip">
<caption>
<p>Click here for additional data file.</p>
</caption>
</media>
</supplementary-material>
<supplementary-material content-type="local-data" id="pone.0132020.s010">
<label>S10 File</label>
<caption>
<title>Reaching data in Experiment 3 for 10 subjects.</title>
<p>(ZIP)</p>
</caption>
<media xlink:href="pone.0132020.s010.zip">
<caption>
<p>Click here for additional data file.</p>
</caption>
</media>
</supplementary-material>
</sec>
</body>
<back>
<ack>
<p>The authors would like to thank the UK Engineering and Physical Sciences Research Council (EPSRC) (grant no. EP/I028765/1), the Guy’s and St Thomas’ Charity grant on developing clinician-scientific interfaces in robotic assisted surgery: translating technical innovation into improved clinical care (grant no. R090705), and the Vattikuti Foundation.</p>
</ack>
</back>
</pmc>
</record>
