Exploration server on haptic devices

Warning: this site is under development!
Warning: this site is generated automatically from raw corpora.
The information is therefore not validated.

Stereo Camera Based Virtual Cane System with Identifiable Distance Tactile Feedback for the Blind

Internal identifier: 002481 (Pmc/Curation); previous: 002480; next: 002482


Authors: Donghun Kim; Kwangtaek Kim [South Korea]; Sangyoun Lee [South Korea]

Source :

RBID : PMC:4118356

Abstract

In this paper, we propose a new haptic-assisted virtual cane system operated by a simple finger pointing gesture. The system is developed in two stages: development of a visual information delivery assistant (VIDA) with a stereo camera, and the addition of a tactile feedback interface with dual actuators for guidance and distance feedback. In the first stage, the user's pointing finger is automatically detected using color and disparity data from stereo images, and the 3D pointing direction of the finger is then estimated from its geometric and textural features. Finally, any object within the estimated pointing trajectory in 3D space is detected and its distance is estimated in real time. In the second stage, identifiable tactile signals are designed through a series of identification experiments, and an identifiable tactile feedback interface is developed and integrated into the VIDA system. Our approach differs in that navigation guidance is provided by a simple finger pointing gesture and the tactile distance feedback is perfectly identifiable to the blind.


Url:
DOI: 10.3390/s140610412
PubMed: 24932864
PubMed Central: 4118356


The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">Stereo Camera Based Virtual Cane System with Identifiable Distance Tactile Feedback for the Blind</title>
<author>
<name sortKey="Kim, Donghun" sort="Kim, Donghun" uniqKey="Kim D" first="Donghun" last="Kim">Donghun Kim</name>
<affiliation>
<nlm:aff id="af1-sensors-14-10412"> School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN 47906, USA; E-Mail:
<email>zava@purdue.edu</email>
</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Kim, Kwangtaek" sort="Kim, Kwangtaek" uniqKey="Kim K" first="Kwangtaek" last="Kim">Kwangtaek Kim</name>
<affiliation wicri:level="1">
<nlm:aff id="af2-sensors-14-10412"> Department of Electrical and Electronic Engineering, Institute of BioMed-IT, Energy-IT and Smart-IT Technology (Best), Yonsei University, Yonsei-ro, Seodaemun-gu, Seoul 120-749, Korea</nlm:aff>
<country xml:lang="fr">Corée du Sud</country>
<wicri:regionArea> Department of Electrical and Electronic Engineering, Institute of BioMed-IT, Energy-IT and Smart-IT Technology (Best), Yonsei University, Yonsei-ro, Seodaemun-gu, Seoul 120-749</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Lee, Sangyoun" sort="Lee, Sangyoun" uniqKey="Lee S" first="Sangyoun" last="Lee">Sangyoun Lee</name>
<affiliation wicri:level="1">
<nlm:aff id="af2-sensors-14-10412"> Department of Electrical and Electronic Engineering, Institute of BioMed-IT, Energy-IT and Smart-IT Technology (Best), Yonsei University, Yonsei-ro, Seodaemun-gu, Seoul 120-749, Korea</nlm:aff>
<country xml:lang="fr">Corée du Sud</country>
<wicri:regionArea> Department of Electrical and Electronic Engineering, Institute of BioMed-IT, Energy-IT and Smart-IT Technology (Best), Yonsei University, Yonsei-ro, Seodaemun-gu, Seoul 120-749</wicri:regionArea>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PMC</idno>
<idno type="pmid">24932864</idno>
<idno type="pmc">4118356</idno>
<idno type="url">http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4118356</idno>
<idno type="RBID">PMC:4118356</idno>
<idno type="doi">10.3390/s140610412</idno>
<date when="2014">2014</date>
<idno type="wicri:Area/Pmc/Corpus">002481</idno>
<idno type="wicri:Area/Pmc/Curation">002481</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a" type="main">Stereo Camera Based Virtual Cane System with Identifiable Distance Tactile Feedback for the Blind</title>
<author>
<name sortKey="Kim, Donghun" sort="Kim, Donghun" uniqKey="Kim D" first="Donghun" last="Kim">Donghun Kim</name>
<affiliation>
<nlm:aff id="af1-sensors-14-10412"> School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN 47906, USA; E-Mail:
<email>zava@purdue.edu</email>
</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Kim, Kwangtaek" sort="Kim, Kwangtaek" uniqKey="Kim K" first="Kwangtaek" last="Kim">Kwangtaek Kim</name>
<affiliation wicri:level="1">
<nlm:aff id="af2-sensors-14-10412"> Department of Electrical and Electronic Engineering, Institute of BioMed-IT, Energy-IT and Smart-IT Technology (Best), Yonsei University, Yonsei-ro, Seodaemun-gu, Seoul 120-749, Korea</nlm:aff>
<country xml:lang="fr">Corée du Sud</country>
<wicri:regionArea> Department of Electrical and Electronic Engineering, Institute of BioMed-IT, Energy-IT and Smart-IT Technology (Best), Yonsei University, Yonsei-ro, Seodaemun-gu, Seoul 120-749</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Lee, Sangyoun" sort="Lee, Sangyoun" uniqKey="Lee S" first="Sangyoun" last="Lee">Sangyoun Lee</name>
<affiliation wicri:level="1">
<nlm:aff id="af2-sensors-14-10412"> Department of Electrical and Electronic Engineering, Institute of BioMed-IT, Energy-IT and Smart-IT Technology (Best), Yonsei University, Yonsei-ro, Seodaemun-gu, Seoul 120-749, Korea</nlm:aff>
<country xml:lang="fr">Corée du Sud</country>
<wicri:regionArea> Department of Electrical and Electronic Engineering, Institute of BioMed-IT, Energy-IT and Smart-IT Technology (Best), Yonsei University, Yonsei-ro, Seodaemun-gu, Seoul 120-749</wicri:regionArea>
</affiliation>
</author>
</analytic>
<series>
<title level="j">Sensors (Basel, Switzerland)</title>
<idno type="eISSN">1424-8220</idno>
<imprint>
<date when="2014">2014</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass></textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">
<p>In this paper, we propose a new haptic-assisted virtual cane system operated by a simple finger pointing gesture. The system is developed in two stages: development of a visual information delivery assistant (VIDA) with a stereo camera, and the addition of a tactile feedback interface with dual actuators for guidance and distance feedback. In the first stage, the user's pointing finger is automatically detected using color and disparity data from stereo images, and the 3D pointing direction of the finger is then estimated from its geometric and textural features. Finally, any object within the estimated pointing trajectory in 3D space is detected and its distance is estimated in real time. In the second stage, identifiable tactile signals are designed through a series of identification experiments, and an identifiable tactile feedback interface is developed and integrated into the VIDA system. Our approach differs in that navigation guidance is provided by a simple finger pointing gesture and the tactile distance feedback is perfectly identifiable to the blind.</p>
</div>
</front>
<back>
<div1 type="bibliography">
<listBibl>
<biblStruct>
<analytic>
<author>
<name sortKey="Vera, P" uniqKey="Vera P">P. Vera</name>
</author>
<author>
<name sortKey="Zenteno, D" uniqKey="Zenteno D">D. Zenteno</name>
</author>
<author>
<name sortKey="Salas, J" uniqKey="Salas J">J. Salas</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Dakopoulos, D" uniqKey="Dakopoulos D">D. Dakopoulos</name>
</author>
<author>
<name sortKey="Bourbakis, N" uniqKey="Bourbakis N">N. Bourbakis</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Yuan, D" uniqKey="Yuan D">D. Yuan</name>
</author>
<author>
<name sortKey="Manduchi, R" uniqKey="Manduchi R">R. Manduchi</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Manduchi, R" uniqKey="Manduchi R">R. Manduchi</name>
</author>
<author>
<name sortKey="Coughlan, J" uniqKey="Coughlan J">J. Coughlan</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Dramas, F" uniqKey="Dramas F">F. Dramas</name>
</author>
<author>
<name sortKey="Thorpe, S J" uniqKey="Thorpe S">S.J. Thorpe</name>
</author>
<author>
<name sortKey="Jouffrais, C" uniqKey="Jouffrais C">C. Jouffrais</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Jose, J" uniqKey="Jose J">J. José</name>
</author>
<author>
<name sortKey="Farrajota, M" uniqKey="Farrajota M">M. Farrajota</name>
</author>
<author>
<name sortKey="Rodrigues, J M" uniqKey="Rodrigues J">J.M. Rodrigues</name>
</author>
<author>
<name sortKey="Du Buf, J H" uniqKey="Du Buf J">J.H. du Buf</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Fernandes, H" uniqKey="Fernandes H">H. Fernandes</name>
</author>
<author>
<name sortKey="Costa, P" uniqKey="Costa P">P. Costa</name>
</author>
<author>
<name sortKey="Filipe, V" uniqKey="Filipe V">V. Filipe</name>
</author>
<author>
<name sortKey="Hadjileontiadis, L" uniqKey="Hadjileontiadis L">L. Hadjileontiadis</name>
</author>
<author>
<name sortKey="Barroso, J" uniqKey="Barroso J">J. Barroso</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Brilhault, A" uniqKey="Brilhault A">A. Brilhault</name>
</author>
<author>
<name sortKey="Kammoun, S" uniqKey="Kammoun S">S. Kammoun</name>
</author>
<author>
<name sortKey="Gutierrez, O" uniqKey="Gutierrez O">O. Gutierrez</name>
</author>
<author>
<name sortKey="Truillet, P" uniqKey="Truillet P">P. Truillet</name>
</author>
<author>
<name sortKey="Jouffrais, C" uniqKey="Jouffrais C">C. Jouffrais</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Denis, G" uniqKey="Denis G">G. Denis</name>
</author>
<author>
<name sortKey="Jouffrais, C" uniqKey="Jouffrais C">C. Jouffrais</name>
</author>
<author>
<name sortKey="Vergnieux, V" uniqKey="Vergnieux V">V. Vergnieux</name>
</author>
<author>
<name sortKey="Mace, M" uniqKey="Mace M">M. Macé</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Asano, H" uniqKey="Asano H">H. Asano</name>
</author>
<author>
<name sortKey="Nagayasu, T" uniqKey="Nagayasu T">T. Nagayasu</name>
</author>
<author>
<name sortKey="Orimo, T" uniqKey="Orimo T">T. Orimo</name>
</author>
<author>
<name sortKey="Terabayashi, K" uniqKey="Terabayashi K">K. Terabayashi</name>
</author>
<author>
<name sortKey="Ohta, M" uniqKey="Ohta M">M. Ohta</name>
</author>
<author>
<name sortKey="Umeda, K" uniqKey="Umeda K">K. Umeda</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kim, D" uniqKey="Kim D">D. Kim</name>
</author>
<author>
<name sortKey="Hong, K" uniqKey="Hong K">K. Hong</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Wachs, J P" uniqKey="Wachs J">J.P. Wachs</name>
</author>
<author>
<name sortKey="Kolsch, M" uniqKey="Kolsch M">M. Kölsch</name>
</author>
<author>
<name sortKey="Stern, H" uniqKey="Stern H">H. Stern</name>
</author>
<author>
<name sortKey="Edan, Y" uniqKey="Edan Y">Y. Edan</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Matikainen, P" uniqKey="Matikainen P">P. Matikainen</name>
</author>
<author>
<name sortKey="Pillai, P" uniqKey="Pillai P">P. Pillai</name>
</author>
<author>
<name sortKey="Mummert, L" uniqKey="Mummert L">L. Mummert</name>
</author>
<author>
<name sortKey="Sukthankar, R" uniqKey="Sukthankar R">R. Sukthankar</name>
</author>
<author>
<name sortKey="Hebert, M" uniqKey="Hebert M">M. Hebert</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lee, M" uniqKey="Lee M">M. Lee</name>
</author>
<author>
<name sortKey="Green, R" uniqKey="Green R">R. Green</name>
</author>
<author>
<name sortKey="Billinghurst, M" uniqKey="Billinghurst M">M. Billinghurst</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Nickel, K" uniqKey="Nickel K">K. Nickel</name>
</author>
<author>
<name sortKey="Stiefelhagen, R" uniqKey="Stiefelhagen R">R. Stiefelhagen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Thomas, B" uniqKey="Thomas B">B. Thomas</name>
</author>
<author>
<name sortKey="Piekarski, W" uniqKey="Piekarski W">W. Piekarski</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Segen, J" uniqKey="Segen J">J. Segen</name>
</author>
<author>
<name sortKey="Kumar, S" uniqKey="Kumar S">S. Kumar</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Rehg, J" uniqKey="Rehg J">J. Rehg</name>
</author>
<author>
<name sortKey="Kanade, T" uniqKey="Kanade T">T. Kanade</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ong, E" uniqKey="Ong E">E. Ong</name>
</author>
<author>
<name sortKey="Bowden, R" uniqKey="Bowden R">R. Bowden</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Starner, T" uniqKey="Starner T">T. Starner</name>
</author>
<author>
<name sortKey="Weaver, J" uniqKey="Weaver J">J. Weaver</name>
</author>
<author>
<name sortKey="Pentland, A" uniqKey="Pentland A">A. Pentland</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kolsch, M" uniqKey="Kolsch M">M. Kolsch</name>
</author>
<author>
<name sortKey="Turk, M" uniqKey="Turk M">M. Turk</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Trucco, E" uniqKey="Trucco E">E. Trucco</name>
</author>
<author>
<name sortKey="Verri, A" uniqKey="Verri A">A. Verri</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hartley, R" uniqKey="Hartley R">R. Hartley</name>
</author>
<author>
<name sortKey="Zisserman, A" uniqKey="Zisserman A">A. Zisserman</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Zhang, Z" uniqKey="Zhang Z">Z. Zhang</name>
</author>
<author>
<name sortKey="Faugeras, O" uniqKey="Faugeras O">O. Faugeras</name>
</author>
<author>
<name sortKey="Deriche, R" uniqKey="Deriche R">R. Deriche</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Tsai, R" uniqKey="Tsai R">R. Tsai</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Pressey, N" uniqKey="Pressey N">N. Pressey</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ertan, S" uniqKey="Ertan S">S. Ertan</name>
</author>
<author>
<name sortKey="Lee, C" uniqKey="Lee C">C. Lee</name>
</author>
<author>
<name sortKey="Willets, A" uniqKey="Willets A">A. Willets</name>
</author>
<author>
<name sortKey="Tan, H" uniqKey="Tan H">H. Tan</name>
</author>
<author>
<name sortKey="Pentland, A" uniqKey="Pentland A">A. Pentland</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Velazquez, R" uniqKey="Velazquez R">R. Velázquez</name>
</author>
<author>
<name sortKey="Maingreaud, F" uniqKey="Maingreaud F">F. Maingreaud</name>
</author>
<author>
<name sortKey="Pissaloux, E" uniqKey="Pissaloux E">E. Pissaloux</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hirose, M" uniqKey="Hirose M">M. Hirose</name>
</author>
<author>
<name sortKey="Amemiya, T" uniqKey="Amemiya T">T. Amemiya</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Goldstein, E B" uniqKey="Goldstein E">E.B. Goldstein</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Miller, G A" uniqKey="Miller G">G.A. Miller</name>
</author>
</analytic>
</biblStruct>
</listBibl>
</div1>
</back>
</TEI>
<pmc article-type="research-article">
<pmc-dir>properties open_access</pmc-dir>
<front>
<journal-meta>
<journal-id journal-id-type="nlm-ta">Sensors (Basel)</journal-id>
<journal-id journal-id-type="iso-abbrev">Sensors (Basel)</journal-id>
<journal-title-group>
<journal-title>Sensors (Basel, Switzerland)</journal-title>
</journal-title-group>
<issn pub-type="epub">1424-8220</issn>
<publisher>
<publisher-name>MDPI</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="pmid">24932864</article-id>
<article-id pub-id-type="pmc">4118356</article-id>
<article-id pub-id-type="doi">10.3390/s140610412</article-id>
<article-id pub-id-type="publisher-id">sensors-14-10412</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Article</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>Stereo Camera Based Virtual Cane System with Identifiable Distance Tactile Feedback for the Blind</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name>
<surname>Kim</surname>
<given-names>Donghun</given-names>
</name>
<xref ref-type="aff" rid="af1-sensors-14-10412">
<sup>1</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Kim</surname>
<given-names>Kwangtaek</given-names>
</name>
<xref ref-type="aff" rid="af2-sensors-14-10412">
<sup>2</sup>
</xref>
<xref rid="c1-sensors-14-10412">
<sup>*</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Lee</surname>
<given-names>Sangyoun</given-names>
</name>
<xref ref-type="aff" rid="af2-sensors-14-10412">
<sup>2</sup>
</xref>
<xref rid="c1-sensors-14-10412">
<sup>*</sup>
</xref>
</contrib>
</contrib-group>
<aff id="af1-sensors-14-10412">
<label>1</label>
School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN 47906, USA; E-Mail:
<email>zava@purdue.edu</email>
</aff>
<aff id="af2-sensors-14-10412">
<label>2</label>
Department of Electrical and Electronic Engineering, Institute of BioMed-IT, Energy-IT and Smart-IT Technology (Best), Yonsei University, Yonsei-ro, Seodaemun-gu, Seoul 120-749, Korea</aff>
<author-notes>
<corresp id="c1-sensors-14-10412">
<label>*</label>
Authors to whom correspondence should be addressed; E-Mails:
<email>kwangtaekkim@yonsei.ac.kr</email>
(K.K.);
<email>syleee@yonsei.ac.kr</email>
(S.L.); Tel.: +82-2-2123-5768 (K.K. & S.L.).</corresp>
</author-notes>
<pub-date pub-type="collection">
<month>6</month>
<year>2014</year>
</pub-date>
<pub-date pub-type="epub">
<day>13</day>
<month>6</month>
<year>2014</year>
</pub-date>
<volume>14</volume>
<issue>6</issue>
<fpage>10412</fpage>
<lpage>10431</lpage>
<history>
<date date-type="received">
<day>23</day>
<month>1</month>
<year>2014</year>
</date>
<date date-type="rev-recd">
<day>26</day>
<month>5</month>
<year>2014</year>
</date>
<date date-type="accepted">
<day>05</day>
<month>6</month>
<year>2014</year>
</date>
</history>
<permissions>
<copyright-statement>© 2014 by the authors; licensee MDPI, Basel, Switzerland.</copyright-statement>
<copyright-year>2014</copyright-year>
<license>
<license-p>This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (
<ext-link ext-link-type="uri" xlink:href="http://creativecommons.org/licenses/by/3.0/">http://creativecommons.org/licenses/by/3.0/</ext-link>
).</license-p>
</license>
</permissions>
<abstract>
<p>In this paper, we propose a new haptic-assisted virtual cane system operated by a simple finger pointing gesture. The system is developed in two stages: development of a visual information delivery assistant (VIDA) with a stereo camera, and the addition of a tactile feedback interface with dual actuators for guidance and distance feedback. In the first stage, the user's pointing finger is automatically detected using color and disparity data from stereo images, and the 3D pointing direction of the finger is then estimated from its geometric and textural features. Finally, any object within the estimated pointing trajectory in 3D space is detected and its distance is estimated in real time. In the second stage, identifiable tactile signals are designed through a series of identification experiments, and an identifiable tactile feedback interface is developed and integrated into the VIDA system. Our approach differs in that navigation guidance is provided by a simple finger pointing gesture and the tactile distance feedback is perfectly identifiable to the blind.</p>
</abstract>
<kwd-group>
<kwd>finger pointing gestures</kwd>
<kwd>3D pointing direction estimation</kwd>
<kwd>obstacle detection</kwd>
<kwd>stereo camera system</kwd>
<kwd>human computer interaction</kwd>
<kwd>tactile feedback</kwd>
<kwd>virtual cane</kwd>
</kwd-group>
</article-meta>
</front>
<body>
<sec sec-type="intro">
<label>1.</label>
<title>Introduction</title>
<p>The white cane is a mechanical device that acts as an extended hand, guiding mobility so that the user can move safely and comfortably. It helps blind people avoid obstacles, negotiate steps, and follow the safest walking trajectory while in motion. For decades, the stick-like cane has been the most commonly used aid for people who are blind or visually impaired, compared with guide dogs, which cost more. Despite its popularity, the white cane has drawbacks such as a long training time, a limited sensing range (e.g., only usable within 1–2 m), inconvenient carrying, and contact-based object detection.</p>
<p>From a technical point of view, a virtual cane system can be separated into two main parts: sensing obstacles and providing feedback to avoid the detected obstacles. Sensors play a crucial role in sensing obstacles, and high-tech sensors such as ultrasound and lidar have recently been adopted as new approaches [
<xref rid="b1-sensors-14-10412" ref-type="bibr">1</xref>
<xref rid="b3-sensors-14-10412" ref-type="bibr">3</xref>
]. Nonetheless, those sensors involve trade-offs in terms of accuracy, cost, and portability, so camera sensors have been considered the best option thanks to benefits such as low cost, non-contact object detection, precise shape reconstruction, and computational efficiency. These benefits have also been demonstrated by researchers [
<xref rid="b4-sensors-14-10412" ref-type="bibr">4</xref>
,
<xref rid="b5-sensors-14-10412" ref-type="bibr">5</xref>
] who showed that vision sensors help blind users explore visual environments efficiently in dynamic scenes across various applications.</p>
<p>Recently, stereo camera based approaches have been introduced by several researchers. Jose
<italic>et al.</italic>
[
<xref rid="b6-sensors-14-10412" ref-type="bibr">6</xref>
] developed a virtual cane system using a stereo camera and showed its effectiveness as a wearable system customized for assisting navigation in unknown environments. Fernandes
<italic>et al.</italic>
[
<xref rid="b7-sensors-14-10412" ref-type="bibr">7</xref>
] proposed a robust stereo vision algorithm that extracts predefined landmarks, such as circles, that provide cues for safe walking. As a hybrid system, Brilhault
<italic>et al.</italic>
[
<xref rid="b8-sensors-14-10412" ref-type="bibr">8</xref>
] combined stereo vision and a GPS system to improve the user's positioning. An assistance system that can guide the user's orientation to locate objects was developed by Dramas
<italic>et al.</italic>
[
<xref rid="b5-sensors-14-10412" ref-type="bibr">5</xref>
].</p>
<p>Denis
<italic>et al.</italic>
[
<xref rid="b9-sensors-14-10412" ref-type="bibr">9</xref>
] developed a wearable virtual cane system that can detect objects approaching the user. Additionally, they designed distinct sound feedback for the estimated distance to the detected objects. Although many interesting systems have been developed to date, most of them are passive or work only under certain conditions, such as known environments and predefined landmarks. These limitations can be barriers to designing a natural user interface.</p>
<p>For a natural user interface, vision-based hand gesture and finger pointing technologies have been actively developed by many researchers [
<xref rid="b10-sensors-14-10412" ref-type="bibr">10</xref>
<xref rid="b14-sensors-14-10412" ref-type="bibr">14</xref>
] since they are non-intrusive, convenient, and interactive. In particular, gesture recognition based on 3D range data is reliable and robust enough for practical use, as demonstrated in many game applications with the Kinect. The effectiveness of using 3D depth images for finger gesture recognition has also been corroborated by Matikainen
<italic>et al.</italic>
and Nickel
<italic>et al.</italic>
[
<xref rid="b13-sensors-14-10412" ref-type="bibr">13</xref>
,
<xref rid="b15-sensors-14-10412" ref-type="bibr">15</xref>
] who developed robust pointing gesture technologies for interactive visual scene analysis. These technologies benefit users in that the hand is freed from holding a sensing device and the user can interactively obtain accurate information, in advance, about a place he or she wants to explore.</p>
<p>In general, developing finger pointing recognition technology involves three tasks: finger detection and tracking, estimation of the finger pointing direction, and obstacle detection. First, detecting and tracking fingers with wearable cameras is not simple because of human motion and noisy backgrounds. As pioneering work, several researchers [
<xref rid="b16-sensors-14-10412" ref-type="bibr">16</xref>
<xref rid="b18-sensors-14-10412" ref-type="bibr">18</xref>
] introduced hand detection and tracking algorithms under simplified conditions, such as uniform backgrounds or colored gloves. Afterward, many researchers put considerable effort into improving bare-hand tracking on cluttered backgrounds for sign language applications [
<xref rid="b19-sensors-14-10412" ref-type="bibr">19</xref>
,
<xref rid="b20-sensors-14-10412" ref-type="bibr">20</xref>
] and for human computer interactions [
<xref rid="b21-sensors-14-10412" ref-type="bibr">21</xref>
]. In our work, a dynamic update model for moving backgrounds is proposed to compensate for background changes caused by the motion of body-worn cameras.</p>
<p>Second, with the detected pointing finger, estimating an accurate 3D finger pointing direction can be achieved by using classic theories in stereo vision [
<xref rid="b22-sensors-14-10412" ref-type="bibr">22</xref>
], multiple view geometry [
<xref rid="b23-sensors-14-10412" ref-type="bibr">23</xref>
], and stereo camera calibration [
<xref rid="b24-sensors-14-10412" ref-type="bibr">24</xref>
,
<xref rid="b25-sensors-14-10412" ref-type="bibr">25</xref>
]. In our study, we used this existing theory to estimate a 3D pointing direction from the disparity data of a stereo camera. Compared with the previous steps, detecting obstacles is a challenging problem, since algorithms should detect obstacles even in complex surroundings, as humans naturally do. For accurate detection, segmentation that intelligently extracts target objects in dynamic scenes is essential. To this end, we developed a robust segmentation algorithm suitable for the virtual cane system.</p>
<p>To deliver visual scene information to a blind user, feedback via sound and/or vibration is effective. However, audible feedback often becomes noise and can even mask important information such as traffic sounds and other people's conversations on the street. For this reason, tactile feedback is preferred as a non-intrusive interface for vision systems. In the earliest work, Pressey [
<xref rid="b26-sensors-14-10412" ref-type="bibr">26</xref>
] developed a lightweight hand-held device called the MOWAT SENSOR that could easily be carried while walking. The sensor detected objects within a beam of high-frequency sound and provided tactile feedback with predefined vibrations (e.g., a higher frequency is interpreted as a closer obstacle). Ertan
<italic>et al.</italic>
[
<xref rid="b27-sensors-14-10412" ref-type="bibr">27</xref>
] invented a wearable navigation system based on a haptic directional display embedded in the back of a vest. Directional cues were conveyed through distinct patterns such as lines, circles, and blinking.</p>
<p>Velazquez
<italic>et al.</italic>
[
<xref rid="b28-sensors-14-10412" ref-type="bibr">28</xref>
] introduced the concept of Intelligent Glasses with tactile feedback. A dynamic edge shape extracted from a stereo camera was directly transmitted to a tactile display, a braille-like array of mechanical pins pushed up and down to represent the edge shape. A shortcoming of the system was that the user's hand had to remain on the haptic display at all times to feel the tactile feedback. More recently, Hirose and Amemiya [
<xref rid="b29-sensors-14-10412" ref-type="bibr">29</xref>
] developed a prototype with a PDA (Personal Digital Assistant) device. For tactile feedback, three vibrating motors were attached to the user's arms (left and right) and back. Direction cues were delivered by vibrating single or double motors, while the number of vibration pulses was used for distance cues. However, none of the existing tactile feedback systems designed identifiable feedback signals based on human perception; instead, vibration signals were chosen intuitively. In our work, we designed perfectly identifiable tactile signals through a series of identification experiments and successfully integrated those signals into our virtual cane system.</p>
<p>In this paper, we focus on a robust distance estimation system based on a stereo camera that is operated by a simple finger pointing gesture. Additionally, we propose a complete virtual cane solution by integrating a tactile feedback interface that employs perceptually identifiable frequencies, obtained from one-dimensional frequency identification experiments, for distance-matched tactile feedback.</p>
<p>The remainder of this paper is organized as follows. In Section 2, we describe the visual information delivery system, and the experimental results are presented in Section 3. Section 4 explains how we designed identifiable tactile feedback signals and integrated the tactile feedback system with the visual information delivery system. Conclusions and future work are provided in Section 5.</p>
</sec>
<sec>
<label>2.</label>
<title>Visual Information Delivery Assistant (VIDA)</title>
<p>The Visual Information Delivery Assistant (VIDA) consists of three steps: hand detection, estimation of the 3D pointing direction, and object detection with distance calculation. The flow chart of the algorithm is shown in
<xref rid="f1-sensors-14-10412" ref-type="fig">Figure 1</xref>
.</p>
<sec>
<label>2.1.</label>
<title>Hand Detection</title>
<p>As mentioned earlier, extracting a hand or finger in a complex scene is not easy. The problem becomes even more severe when images are taken from a moving camera, which is the case for wearable virtual cane systems. To tackle this challenge, we combine static and dynamic segmentation methods to improve hand region detection. Hand and finger regions are then detected using skin color information. The detailed algorithms are explained below.</p>
<sec>
<label>2.1.1.</label>
<title>Background Subtraction under Dynamic Scenes</title>
<p>Background subtraction provides a fundamental framework for hand area detection in both static and dynamic environments. Our algorithm is a fusion framework that adaptively combines static and dynamic background subtraction. We define a static background as an image frame whose variations are relatively small, whereas a dynamic background is a frame with large global variations (e.g., the entire scene changes). To implement static background subtraction, we adopted the learning (running) average method, a well-known statistical technique, to build a background model: the model is created from the mean of a set of accumulated image frames, and subsequent frames are subtracted from it to produce the segmentation. This simple approach enables quick detection of a moving hand in a static environment.</p>
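As an illustration, the following minimal Python/NumPy sketch shows the kind of learning (running) average background model described above; the learning rate and difference threshold are assumed values, not parameters reported in the paper.

import numpy as np

class RunningAverageBackground:
    """Running (learning) average background model with frame differencing."""

    def __init__(self, alpha=0.05, diff_thresh=30):
        self.alpha = alpha              # learning rate (assumed value)
        self.diff_thresh = diff_thresh  # per-pixel difference threshold (assumed)
        self.model = None               # accumulated background, float32

    def apply(self, frame):
        frame = frame.astype(np.float32)
        if self.model is None:
            self.model = frame.copy()
        diff = np.abs(frame - self.model)
        if diff.ndim == 3:              # collapse color channels if present
            diff = diff.max(axis=-1)
        foreground = diff > self.diff_thresh
        # blend the new frame into the accumulated background model
        self.model = (1.0 - self.alpha) * self.model + self.alpha * frame
        return foreground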
<p>To take advantage of static background subtraction in dynamic scenes, we introduce a strategy that allows our system to work well in dynamic environments. The strategy combines continuous model updates with detection of global scene changes that triggers building a new background model. The former extends static background subtraction by replacing the static background model with the most up-to-date background captured from the dynamic scene. In other words, any dynamic scene captured at time
<italic>t</italic>
can be treated as a new static background model for hand segmentation in a dynamic environment. This approach works well for both static and dynamic background subtraction.</p>
<p>One remaining issue is how the system knows when to build a new background model while the stereo camera is in motion. We observed two typical types of change in a dynamic environment: global changes caused by the moving camera and local changes caused by moving objects. The latter must be excluded when updating the model, which is why our strategy includes detecting global scene changes. For this, we developed a decision-maker algorithm that compares local variations within predefined window blocks to determine whether the global scene has changed. A threshold for this criterion was determined in a pilot study, and the sum of local variations is compared with it: if the sum exceeds the threshold, a global scene change is declared and a new background model is created; otherwise, the background model is updated locally from the previous model. During this process, the hand region is not updated.
<xref rid="f2-sensors-14-10412" ref-type="fig">Figure 2</xref>
shows how the background model is updated against camera motions and a moving object (the user's arm).</p>
<p>For efficient processing, the input image (320 × 240 pixels) was divided into 8 × 6 blocks of 40 × 40 pixels. For each block, the decision-maker algorithm determines whether a new global background model should be created. If more than half of the pixels in a block show variation, the block is flagged for updating, and the background model is then updated for that block and its eight nearest neighbors. In our approach, both the color background model and the depth background model are updated, excluding the foreground region of the skin-colored hand area used in the next step.</p>
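The block-wise decision maker can be sketched as follows (Python/NumPy); the pixel-difference threshold and the number of changed blocks that counts as a global change are illustrative assumptions, not the paper's pilot-study values.

import numpy as np

def detect_scene_change(prev_gray, cur_gray, block=40,
                        pixel_thresh=25, global_blocks=24):
    """Block-wise decision maker on a 320x240 grayscale pair (8 x 6 blocks).
    Returns (global_change, changed_blocks); thresholds are illustrative."""
    diff = np.abs(cur_gray.astype(np.int16) - prev_gray.astype(np.int16)) > pixel_thresh
    changed_blocks = []
    h, w = diff.shape
    for by in range(0, h, block):
        for bx in range(0, w, block):
            patch = diff[by:by + block, bx:bx + block]
            if patch.mean() > 0.5:      # more than half of the pixels changed
                changed_blocks.append((by // block, bx // block))
    global_change = len(changed_blocks) >= global_blocks
    return global_change, changed_blocks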
</sec>
<sec>
<label>2.1.2.</label>
<title>Hand Detection Using Color Segmentation</title>
<p>With the detected region including the user's arm, the hand region is identified by skin-color information. Two color spaces,
<italic>YUV</italic>
and
<italic>YIQ</italic>
, are used together for creating a unique classifier for color segmentation in our approach. Our classifier was designed to take two feature parameters from the two color spaces (
<italic>i.e.</italic>
, one from each). We chose these particular color models because both are sensitive to low color depth (image bandwidth), similar to human visual perception. It is also known that
<italic>YUV</italic>
and
<italic>YIQ</italic>
have the same luminance component but different coordinate systems for chrominance components. For generating feature parameters with the
<italic>YUV</italic>
model, luminance (
<italic>Y</italic>
) and two chrominance components (U and
<italic>V</italic>
) are computed from the
<italic>RGB</italic>
space by the transformation matrix below:
<disp-formula id="FD1">
<label>(1)</label>
<mml:math id="mm1">
<mml:mrow>
<mml:mrow>
<mml:mo>[</mml:mo>
<mml:mrow>
<mml:mtable>
<mml:mtr>
<mml:mtd>
<mml:mi>Y</mml:mi>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mi>U</mml:mi>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mi>V</mml:mi>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mrow>
<mml:mo>]</mml:mo>
</mml:mrow>
<mml:mo>=</mml:mo>
<mml:mrow>
<mml:mo>[</mml:mo>
<mml:mrow>
<mml:mtable>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:mn>0.299</mml:mn>
</mml:mrow>
</mml:mtd>
<mml:mtd>
<mml:mrow>
<mml:mn>0.587</mml:mn>
</mml:mrow>
</mml:mtd>
<mml:mtd>
<mml:mrow>
<mml:mn>0.114</mml:mn>
</mml:mrow>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:mo></mml:mo>
<mml:mn>0.147</mml:mn>
</mml:mrow>
</mml:mtd>
<mml:mtd>
<mml:mrow>
<mml:mo></mml:mo>
<mml:mn>0.289</mml:mn>
</mml:mrow>
</mml:mtd>
<mml:mtd>
<mml:mrow>
<mml:mn>0.436</mml:mn>
</mml:mrow>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:mn>0.615</mml:mn>
</mml:mrow>
</mml:mtd>
<mml:mtd>
<mml:mrow>
<mml:mo></mml:mo>
<mml:mn>0.515</mml:mn>
</mml:mrow>
</mml:mtd>
<mml:mtd>
<mml:mrow>
<mml:mo></mml:mo>
<mml:mn>0.100</mml:mn>
</mml:mrow>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mrow>
<mml:mo>]</mml:mo>
</mml:mrow>
<mml:mspace width="0.2em"></mml:mspace>
<mml:mrow>
<mml:mo>[</mml:mo>
<mml:mrow>
<mml:mtable>
<mml:mtr>
<mml:mtd>
<mml:mi>R</mml:mi>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mi>G</mml:mi>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mi>B</mml:mi>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mrow>
<mml:mo>]</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</disp-formula>
</p>
<p>The computed
<italic>U</italic>
and
<italic>V</italic>
values are then used for computing two feature parameters, a displacement vector (
<italic>C</italic>
) and the phase angle (
<italic>θ</italic>
), that can be used for color segmentation. Those features are computed as follows:
<disp-formula id="FD2">
<label>(2)</label>
<mml:math id="mm2">
<mml:mrow>
<mml:mi>C</mml:mi>
<mml:mo>=</mml:mo>
<mml:msqrt>
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mrow>
<mml:mo>|</mml:mo>
<mml:mi>U</mml:mi>
<mml:mo>|</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msup>
<mml:mo>+</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mrow>
<mml:mo>|</mml:mo>
<mml:mi>V</mml:mi>
<mml:mo>|</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
</mml:msqrt>
<mml:mspace width="0.2em"></mml:mspace>
<mml:mtext mathvariant="italic">and</mml:mtext>
<mml:mspace width="0.2em"></mml:mspace>
<mml:mi>θ</mml:mi>
<mml:mo>=</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mo>tan</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mo></mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msup>
<mml:mo stretchy="false">(</mml:mo>
<mml:mi>V</mml:mi>
<mml:mo>/</mml:mo>
<mml:mi>U</mml:mi>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:math>
</disp-formula>
</p>
<p>In the
<italic>YIQ</italic>
color space,
<italic>I</italic>
and
<italic>Q</italic>
, representing chrominance values, can also be used as features. Obtaining these values from
<italic>RGB</italic>
is achieved by the transformation matrix:
<disp-formula id="FD3">
<label>(3)</label>
<mml:math id="mm3">
<mml:mrow>
<mml:mrow>
<mml:mo>[</mml:mo>
<mml:mrow>
<mml:mtable>
<mml:mtr>
<mml:mtd>
<mml:mi>Y</mml:mi>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mi>I</mml:mi>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mi>Q</mml:mi>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mrow>
<mml:mo>]</mml:mo>
</mml:mrow>
<mml:mo>=</mml:mo>
<mml:mrow>
<mml:mo>[</mml:mo>
<mml:mrow>
<mml:mtable>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:mn>0.299</mml:mn>
</mml:mrow>
</mml:mtd>
<mml:mtd>
<mml:mrow>
<mml:mn>0.587</mml:mn>
</mml:mrow>
</mml:mtd>
<mml:mtd>
<mml:mrow>
<mml:mn>0.114</mml:mn>
</mml:mrow>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:mn>0.596</mml:mn>
</mml:mrow>
</mml:mtd>
<mml:mtd>
<mml:mrow>
<mml:mo></mml:mo>
<mml:mn>0.274</mml:mn>
</mml:mrow>
</mml:mtd>
<mml:mtd>
<mml:mrow>
<mml:mo></mml:mo>
<mml:mn>0.322</mml:mn>
</mml:mrow>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:mo></mml:mo>
<mml:mn>0.212</mml:mn>
</mml:mrow>
</mml:mtd>
<mml:mtd>
<mml:mrow>
<mml:mo></mml:mo>
<mml:mn>0.523</mml:mn>
</mml:mrow>
</mml:mtd>
<mml:mtd>
<mml:mrow>
<mml:mn>0.311</mml:mn>
</mml:mrow>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mrow>
<mml:mo>]</mml:mo>
</mml:mrow>
<mml:mspace width="0.2em"></mml:mspace>
<mml:mrow>
<mml:mo>[</mml:mo>
<mml:mrow>
<mml:mtable>
<mml:mtr>
<mml:mtd>
<mml:mi>R</mml:mi>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mi>G</mml:mi>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mi>B</mml:mi>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mrow>
<mml:mo>]</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</disp-formula>
</p>
<p>A combination of the four feature values,
<italic>C</italic>
,
<italic>θ</italic>
,
<italic>I</italic>
and
<italic>Q</italic>
, can form a unique criterion that accurately segments human hands, and these values can be tuned to the desired level of segmentation. In our application, we used only two features,
<italic>θ</italic>
and
<italic>I</italic>
. The optimized ranges used for hand segmentation are 105 to 150 and 15 to 100, respectively. After the skin-colored area is detected with these color features, noise is eliminated by connected component analysis. The upper row of
<xref rid="f3-sensors-14-10412" ref-type="fig">Figure 3</xref>
shows this segmentation procedure using the two feature values.</p>
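For illustration, the two-feature skin classifier described above can be sketched as follows; arctan2 is used to keep the phase angle well defined, 8-bit RGB input is assumed, and the thresholds are the ranges reported in the text (θ: 105–150, I: 15–100).

import numpy as np

def skin_mask(rgb):
    """Skin-color mask from the theta (YUV) and I (YIQ) features,
    using the ranges 105-150 degrees and 15-100 reported above."""
    rgb = rgb.astype(np.float32)
    R, G, B = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    U = -0.147 * R - 0.289 * G + 0.436 * B
    V = 0.615 * R - 0.515 * G - 0.100 * B
    I = 0.596 * R - 0.274 * G - 0.322 * B
    theta = np.degrees(np.arctan2(V, U)) % 360.0   # phase angle of (U, V)
    return (theta >= 105) & (theta <= 150) & (I >= 15) & (I <= 100)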
</sec>
</sec>
<sec>
<label>2.2.</label>
<title>Estimation of 3D Pointing Direction</title>
<p>A pointing gesture with the fingers generally forms a particular hand shape created by a combination of convex and concave contours. We exploit this to estimate the finger pointing direction in 2D space through a shape analysis, and the estimated direction is then extended to three dimensions using the camera geometry. We present these two steps in the following.</p>
<p>A pointing direction in 2D space is estimated by taking three steps, as seen in the lower row of
<xref rid="f3-sensors-14-10412" ref-type="fig">Figure 3</xref>
: extracting the hand contour, finding a convex polygon on the extracted contour, and estimating an accurate pointing vector. To extract a precise hand contour, polygonal approximation is applied because of its robustness to illumination. The extracted contour is then verified by a convex hull algorithm that finds both convex vertex points and convexity defects (concave regions). The process of examining convexity is as follows:</p>
<p>Considering two points,
<italic>A</italic>
and
<italic>B</italic>
in a region Ω, the convexity can be evaluated by the following measure:
<disp-formula id="FD4">
<label>(4)</label>
<mml:math id="mm4">
<mml:mrow>
<mml:mi>V</mml:mi>
<mml:mo>=</mml:mo>
<mml:mi>α</mml:mi>
<mml:mi>A</mml:mi>
<mml:mo>+</mml:mo>
<mml:mi>β</mml:mi>
<mml:mi>B</mml:mi>
</mml:mrow>
</mml:math>
</disp-formula>
for 0 <
<italic>α, β</italic>
< 1 and
<italic>α</italic>
+
<italic>β</italic>
= 1.</p>
<p>If all possible
<italic>V</italic>
are in the region Ω for arbitrary values of
<italic>α</italic>
and
<italic>β</italic>
, then the contour from
<italic>A</italic>
to
<italic>B</italic>
is convex. This way, a convex polygon and convexity defects are generated as the output.</p>
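In practice, this convexity test is what standard convex hull routines provide. The sketch below is one possible realization using OpenCV (an assumed tooling choice, not named in the paper): the hand contour is approximated as a polygon and its convex hull and convexity defects are extracted.

import cv2
import numpy as np

def hull_and_defects(hand_mask):
    """Polygonal approximation, convex hull and convexity defects of the
    largest contour in a binary hand mask (OpenCV 4.x return convention)."""
    contours, _ = cv2.findContours(hand_mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    contour = max(contours, key=cv2.contourArea)
    epsilon = 0.01 * cv2.arcLength(contour, True)   # approximation tolerance (illustrative)
    poly = cv2.approxPolyDP(contour, epsilon, True)
    hull_idx = cv2.convexHull(poly, returnPoints=False)  # convex vertex indices
    defects = cv2.convexityDefects(poly, hull_idx)       # concave regions
    return poly, hull_idx, defects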
<p>In the last step, an accurate 2D pointing direction is estimated in two stages, an initial estimate followed by refinement, using the filtered hand contour (
<xref rid="f3-sensors-14-10412" ref-type="fig">Figure 3d</xref>
) and a hand shape polygon (cyan colored lines in
<xref rid="f3-sensors-14-10412" ref-type="fig">Figure 3e</xref>
) formed from the fingertips and convexity points. An initial direction is roughly determined from a bounding rectangle of the hand contour, shown as a blue box in
<xref rid="f3-sensors-14-10412" ref-type="fig">Figure 3e</xref>
. That is, the longer side of the rectangle becomes a unit vector, a blue arrow in
<xref rid="f3-sensors-14-10412" ref-type="fig">Figure 3f</xref>
, of the 2D pointing direction at the geometric center of the hand. The initial pointing direction is then refined towards the index finger using the principal vector of the hand shape polygon (see the longer red line inside the hand shape contour in
<xref rid="f3-sensors-14-10412" ref-type="fig">Figure 3e</xref>
). The final estimated 2D pointing direction is visualized as a red arrow superimposed on the hand image in
<xref rid="f3-sensors-14-10412" ref-type="fig">Figure 3f</xref>
.</p>
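A minimal sketch of the refinement step: the principal vector of the hand-shape polygon is taken as the dominant axis of its points, and its sign is chosen so that the vector points toward the fingertip (the fingertip input and the sign rule are illustrative assumptions).

import numpy as np

def pointing_direction_2d(polygon_pts, fingertip):
    """Principal axis of the hand-shape polygon, oriented toward the fingertip."""
    pts = np.asarray(polygon_pts, dtype=np.float32).reshape(-1, 2)
    center = pts.mean(axis=0)
    # dominant right singular vector of the centered points = principal vector
    _, _, vt = np.linalg.svd(pts - center, full_matrices=False)
    direction = vt[0]
    if np.dot(np.asarray(fingertip, np.float32) - center, direction) < 0:
        direction = -direction                  # point toward the index finger
    return center, direction / np.linalg.norm(direction)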
<p>To estimate the corresponding 3D pointing vector, the intrinsic parameters obtained from camera calibration are used. In theory, the 3D points along the pointing vector are obtained from the corresponding 2D points as follows:
<disp-formula id="FD5">
<label>(5)</label>
<mml:math id="mm5">
<mml:mrow>
<mml:mi>Z</mml:mi>
<mml:mo>=</mml:mo>
<mml:mfrac>
<mml:mi>f</mml:mi>
<mml:mi>d</mml:mi>
</mml:mfrac>
<mml:mi>B</mml:mi>
<mml:mo>,</mml:mo>
<mml:mspace width="0.2em"></mml:mspace>
<mml:mi>X</mml:mi>
<mml:mo>=</mml:mo>
<mml:mfrac>
<mml:mi>u</mml:mi>
<mml:mi>f</mml:mi>
</mml:mfrac>
<mml:mi>Z</mml:mi>
<mml:mo>,</mml:mo>
<mml:mspace width="0.2em"></mml:mspace>
<mml:mi>Y</mml:mi>
<mml:mo>=</mml:mo>
<mml:mfrac>
<mml:mi>v</mml:mi>
<mml:mi>f</mml:mi>
</mml:mfrac>
<mml:mi>Z</mml:mi>
</mml:mrow>
</mml:math>
</disp-formula>
where a 3D point is denoted by a vector (
<italic>X</italic>
,
<italic>Y</italic>
,
<italic>Z</italic>
) from the origin of the camera coordinate system, <italic>f</italic> is the focal length of the camera, <italic>B</italic> is the baseline of the stereo camera,
<italic>d</italic>
is the disparity value at any location, and (
<italic>u</italic>
,
<italic>v</italic>
) is a location on the 2D image.
<xref rid="f4-sensors-14-10412" ref-type="fig">Figure 4</xref>
graphically shows how a 3D pointing direction is estimated from a 2D pointing direction.</p>
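For reference, Equation (5) translates directly into code; the sketch below assumes the disparity d is in pixels and (u, v) are pixel coordinates relative to the principal point.

def pixel_to_3d(u, v, d, f, B):
    """Back-project pixel (u, v) with disparity d into camera coordinates
    using Equation (5); f is the focal length in pixels, B the baseline."""
    if d <= 0:
        raise ValueError("disparity must be positive")
    Z = (f / d) * B
    X = (u / f) * Z
    Y = (v / f) * Z
    return X, Y, Z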
</sec>
<sec>
<label>2.3.</label>
<title>Object Detection</title>
<p>In the VIDA system, any object inside the region of interest (ROI) extracted from the user's pointing gesture is detected as an obstacle. The actual distance to the detected object is computed with the stereo camera geometry, which provides the transformation between 2D and 3D space. The specific algorithms, classification of 3D points and ROI extraction, are presented below.</p>
<sec>
<label>2.3.1.</label>
<title>Classification of 3D Points</title>
<p>Given a 3D pointing vector estimated in the previous step, a line passing through any two points in 3D space can be simply obtained in homogeneous coordinates [
<xref rid="b23-sensors-14-10412" ref-type="bibr">23</xref>
]. For simplicity, we classify the 3D points of object candidates in 2D image planes (
<italic>x-z</italic>
and
<italic>y-z</italic>
projected planes) rather than in 3D space. This greatly reduces the computational cost of measuring distances between 3D points and a 3D line (e.g., the finger pointing vector). The classification algorithm works as follows. First, the line (
<italic>l</italic>
) between two 2D points (
<italic>p</italic>
<sub>1</sub>
,
<italic>p</italic>
<sub>2</sub>
) projected onto a 2D plane is computed as the cross product of the two points,
<disp-formula id="FD6">
<label>(6)</label>
<mml:math id="mm6">
<mml:mrow>
<mml:mi>l</mml:mi>
<mml:mo>=</mml:mo>
<mml:msub>
<mml:mi>p</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
<mml:mo>×</mml:mo>
<mml:msub>
<mml:mi>p</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
</mml:mrow>
</mml:math>
</disp-formula>
where
<italic>l</italic>
,
<italic>p</italic>
<sub>1</sub>
,
<italic>p</italic>
<sub>2</sub>
∈ <italic>R</italic>
<sup>3</sup>
are represented in homogeneous coordinates.</p>
<p>Second, using
<xref rid="FD5" ref-type="disp-formula">Equation (5)</xref>
, all pixels in the 2D image are mapped to their corresponding 3D points by the stereo camera geometry (also called two-view geometry). Each 3D point is then classified into two groups (class of interest (
<italic>C
<sub>I</sub>
</italic>
) and class of non-interest (
<italic>C
<sub>NI</sub>
</italic>
)) by the measured distance in the
<italic>x-z</italic>
and
<italic>y-z</italic>
image planes. The orthogonal distance from a point to a line is obtained by the dot product:
<disp-formula id="FD7">
<label>(7)</label>
<mml:math id="mm7">
<mml:mrow>
<mml:mi>l</mml:mi>
<mml:mo>·</mml:mo>
<mml:mi>p</mml:mi>
<mml:mo>=</mml:mo>
<mml:mi>d</mml:mi>
</mml:mrow>
</mml:math>
</disp-formula>
where
<italic>l</italic>
,
<italic>p</italic>
∈ <italic>R</italic>
<sup>3</sup>
are expressed in homogeneous coordinates and <italic>d</italic> is a scalar.</p>
<p>With a fixed value of
<italic>d</italic>
(orthogonal distance), a virtual cylinder is formed as shown in
<xref rid="f5-sensors-14-10412" ref-type="fig">Figure 5</xref>
and its boundary, defined by
<italic>d</italic>
, becomes the classifier of 3D points. Based on this configuration, any 3D point inside the virtual cylinder falls into the class of interest (
<italic>C
<sub>I</sub>
</italic>
), and otherwise belongs to the class of non-interest (
<italic>C
<sub>NI</sub>
</italic>
). Then, all points in
<italic>C
<sub>I</sub>
</italic>
are back-projected onto the corresponding 2D image plane, and that region becomes an ROI candidate for object detection. This transformation between 2D image coordinates and 3D camera coordinates is illustrated in
<xref rid="f5-sensors-14-10412" ref-type="fig">Figure 5</xref>
. Note that in the figure a smiley-face pictogram marks the origin of the camera coordinate system.</p>
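A minimal sketch of this classification in one projected plane (e.g., x-z): the line coefficients are normalized so that the dot product of Equation (7) gives a Euclidean distance, and the cylinder radius d_max is an illustrative value.

import numpy as np

def classify_points(p1, p2, points_2d, d_max=0.15):
    """Label projected 2D points as class of interest (True) when they lie
    within d_max of the pointing line through p1 and p2 (one projected plane)."""
    p1h = np.array([p1[0], p1[1], 1.0])
    p2h = np.array([p2[0], p2[1], 1.0])
    line = np.cross(p1h, p2h)                  # Equation (6)
    line = line / np.linalg.norm(line[:2])     # normalize so l . p is a distance
    pts = np.asarray(points_2d, dtype=np.float64)
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    dist = np.abs(pts_h @ line)                # Equation (7)
    return dist <= d_max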
</sec>
<sec>
<label>2.3.2.</label>
<title>ROI Extraction and Object Detection</title>
<p>As the outcome of the previous step, ROI candidates were generated on the image plane from the 3D points. However, we observed that ROI candidates are not generated correctly when disparity errors occur (noise or missing data), as seen in
<xref rid="f6-sensors-14-10412" ref-type="fig">Figure 6</xref>
. These problems are mainly caused by illumination changes and errors in computing disparity values. To fix them, disparity noise is filtered first, and the classification of 3D points is then slightly modified from the previous step.</p>
<p>As seen in
<xref rid="f7-sensors-14-10412" ref-type="fig">Figure 7</xref>
, 3D points are classified using two virtual rectangles projected from the virtual cylinder onto the <italic>x-z</italic> and <italic>y-z</italic> planes, respectively. Finally, to extract the final ROI candidate, the classified 3D points in the class of interest are projected onto the image plane, where they form two perpendicular bars. The intersection of the two bars (yellow and pink) shown in
<xref rid="f8-sensors-14-10412" ref-type="fig">Figure 8</xref>
is chosen as the final ROI. The white region in
<xref rid="f8-sensors-14-10412" ref-type="fig">Figure 8a</xref>
is the ROI candidate determined by the virtual cylinder only.</p>
<p>In our VIDA system, the final ROI can be visually magnified so that the user can perceive details (texture information) of the detected objects. The ROI is magnified using bilinear interpolation, which interpolates linearly in one direction and then repeats the interpolation in the other direction. The images resulting from ROI extraction and magnification are shown in
<xref rid="f8-sensors-14-10412" ref-type="fig">Figure 8</xref>
.</p>
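A minimal sketch of the ROI magnification using bilinear interpolation (OpenCV is an assumed tooling choice; the ROI format and scale factor are illustrative).

import cv2

def magnify_roi(image, roi, scale=2.0):
    """Crop the ROI (x, y, w, h) and enlarge it with bilinear interpolation."""
    x, y, w, h = roi
    patch = image[y:y + h, x:x + w]
    return cv2.resize(patch, None, fx=scale, fy=scale,
                      interpolation=cv2.INTER_LINEAR)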
</sec>
</sec>
</sec>
<sec>
<label>3.</label>
<title>Experimental Results of VIDA</title>
<p>The VIDA system shown in
<xref rid="f9-sensors-14-10412" ref-type="fig">Figure 9</xref>
consists of a commercial stereo camera (Bumblebee 2, manufactured by PointGrey Inc., Richmond, BC, Canada), a personal computer (Intel Core 2, 2.2 GHz, 2 GB RAM) and a standard LCD (Liquid-Crystal Display) monitor. The system runs at 6 frames per second on QVGA (Quarter Video Graphics Array, 320 × 240 pixels) input images. The spatial accuracy, reported by the manufacturer as the system calibration error, is 5 mm at a distance of 1 m, which is sufficiently accurate for detecting hands and objects in the virtual cane system.</p>
<p>The developed system, VIDA, was thoroughly verified in terms of accuracy and robustness against illumination changes and occlusions. The evaluation was conducted systematically, first for each algorithm and then for the whole system. The results shown in
<xref rid="f10-sensors-14-10412" ref-type="fig">Figure 10</xref>
demonstrate that our hand detection algorithm works well against backgrounds containing numerous objects.
<xref rid="f11-sensors-14-10412" ref-type="fig">Figure 11</xref>
also demonstrates precise 3D pointing estimation in a dynamic environment. In the resulting images, the estimated pointing vector, colored in red, is superimposed on the user's hand to show the accuracy.</p>
<p>For the system evaluation, we tested VIDA with various objects differing in shape, color and size under dynamic scenes. In this experiment, a user was asked to walk around and point at objects at random. To evaluate the accuracy, the object pointed at by the user was recorded and compared with the object detected by VIDA. The sequential images in
<xref rid="f12-sensors-14-10412" ref-type="fig">Figure 12</xref>
demonstrate the experiment procedure. As clearly seen in
<xref rid="f12-sensors-14-10412" ref-type="fig">Figure 12</xref>
, all objects pointed at by the user were successfully detected, even at long distances (up to 3 m). The numerical results are summarized in
<xref rid="t1-sensors-14-10412" ref-type="table">Table 1</xref>
. For a more accurate evaluation against ground truth, we used a laser pointer attached to the top of the index finger to generate the ground truth.
<xref rid="f13-sensors-14-10412" ref-type="fig">Figure 13</xref>
shows the experimental setup and the image of the pointing target circles. During the experiment, the points marked by the laser on the target image were recorded and compared with the pointing points estimated by our algorithm. Geometric errors between the laser points (ground truth) and the estimated points were computed and averaged over 40 repetitions at each distance (1, 2 and 3 m). The results are summarized in
<xref rid="t2-sensors-14-10412" ref-type="table">Table 2</xref>
.</p>
<p>Additionally, robustness against illumination changes and occlusions was tested. For the illumination test, three lighting conditions (bright, normal, dark), clearly distinguishable in the captured images, were used and compared. For the occlusion evaluation, two user scenarios were developed, interference by extra hands and by faces, since both cases can significantly affect the performance of a system that uses skin color information for hand detection. The results in
<xref rid="f14-sensors-14-10412" ref-type="fig">Figure 14</xref>
show the robustness of our system against illumination changes and occlusions.</p>
</sec>
<sec>
<label>4.</label>
<title>Development of Tactile Feedback Interface for VIDA</title>
<p>In this section, we first describe a frequency identification experiment conducted to choose a set of distinctive tactile signals. We then present a tactile feedback interface integrated into VIDA.</p>
<sec>
<label>4.1.</label>
<title>Finding Identifiable Frequencies for Tactile Feedback</title>
<p>We designed an identification experiment to find a set of identifiable signals for tactile feedback. In designing the haptic feedback waveforms, we focused on frequency identification because our hardware setup with a mini piezo driver (DRV8662, Texas Instruments Inc., Dallas, TX, USA) provides a wider range of responses in frequency than in amplitude. In the experimental setup, the number of cycles, the amplitude, and the waveform were fixed to 3, 60 Vpp, and a square wave, respectively, based on user preference. The square waveform was selected because it delivers a stronger haptic effect than other waveforms (sinusoidal or sawtooth) for the same voltage input. We were particularly interested in finding identifiable frequencies on the index finger, since the goal of the present study is a virtual cane system operated by a simple pointing gesture. It is also well known that the index finger is one of the most tactually sensitive parts of the body [
<xref rid="b30-sensors-14-10412" ref-type="bibr">30</xref>
].</p>
<p>For the identification experiment, a vibrator (see
<xref rid="f15-sensors-14-10412" ref-type="fig">Figure 15a</xref>
) for tactile feedback was built with a piezoelectric actuator (20 mm diameter, Murata Manufacturing Co., Ltd., Nagaokakyo, Kyoto, Japan) affixed to a transparent acrylic square (20 mm long and 2 mm thick). A programmable piezo actuator driver (DRV8662 EVM, Texas Instruments Inc.) was used to drive the vibrator. Tactile signals were pregenerated as square waves at the different frequencies and sent to the piezo driver automatically whenever the participant pressed a key to feel the next test signal. Ten participants (4 females and 6 males; age range 22–36; no previous haptic experience; neither visually impaired nor blind) took part in the identification experiment.</p>
<p>In the experiment, all participants were instructed to place the vibrator on their index finger and completed a five-minute training session to become familiar with the tactile sensations at the frequencies to be tested. Vibrations at the three frequencies were presented to each participant one at a time in randomized order. The participant then had to respond immediately with the identification number of the presented frequency using the keyboard. A PC (personal computer) monitor graphically displayed all the information needed to follow the procedure (questions, trials remaining and elapsed time) until the experiment was complete. In order to obtain unbiased data, a minimum of 50 trials per frequency, as suggested by Miller [
<xref rid="b31-sensors-14-10412" ref-type="bibr">31</xref>
], i.e., at least 150 trials in total, was used for each identification experiment.</p>
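<p>The trial protocol can be summarized as in the sketch below: each frequency is queued at least 50 times, the presentation order is shuffled, and every keyed response increments a stimulus-response confusion matrix. The response-collection callback is a stand-in for the actual keyboard interface of the experiment.</p>
<preformat>
import random
from collections import defaultdict

FREQUENCIES = [10, 100, 300]   # Hz, the first experiment's set
TRIALS_PER_FREQ = 50           # the minimum suggested by Miller [31]

def run_identification(get_response):
    """Present each frequency TRIALS_PER_FREQ times in random order and
    accumulate a stimulus-response confusion matrix."""
    schedule = FREQUENCIES * TRIALS_PER_FREQ
    random.shuffle(schedule)
    confusion = defaultdict(lambda: defaultdict(int))
    for stimulus in schedule:
        # In the real setup the piezo driver plays the burst here and the
        # participant answers via the keyboard; get_response stands in for that.
        response = get_response(stimulus)
        confusion[stimulus][response] += 1
    return confusion

# Example with a simulated participant who always answers correctly.
matrix = run_identification(lambda stimulus: stimulus)
print({s: dict(r) for s, r in matrix.items()})
</preformat>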
<p>The experiment was repeated three times with different sets of frequencies as shown in
<xref rid="t3-sensors-14-10412" ref-type="table">Table 3</xref>
with the same participants. The first experiment tested whether an initial set of three frequencies (10 Hz, 100 Hz and 300 Hz) was identifiable; the second experiment was then conducted with a new set (10 Hz, 100 Hz and 500 Hz), formed by replacing 300 Hz with 500 Hz after examining the result of the first experiment. In the last experiment, a new frequency, 600 Hz, was tested in place of 500 Hz because 500 Hz had not been perfectly identified. In this way, a final set of identifiable frequencies (10 Hz, 100 Hz and 600 Hz) was found.
<xref rid="t3-sensors-14-10412" ref-type="table">Table 3</xref>
shows the three confusion matrices obtained through the three consecutive identification experiments. Every participant completed each experiment within 40 min, so the entire experiment, including a 10-min break, took about two hours per participant.</p>
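<p>The per-frequency identification rates can be read directly off the diagonals of the matrices in
<xref rid="t3-sensors-14-10412" ref-type="table">Table 3</xref>
; the short check below simply reproduces that arithmetic with the reported counts.</p>
<preformat>
import numpy as np

# Accumulated responses from Table 3 (rows = stimulus, columns = response).
experiments = {
    "Experiment 1 (10/100/300 Hz)": [[500, 0, 0], [0, 431, 69], [0, 60, 440]],
    "Experiment 2 (10/100/500 Hz)": [[500, 0, 0], [0, 488, 12], [0, 9, 491]],
    "Experiment 3 (10/100/600 Hz)": [[500, 0, 0], [0, 500, 0], [0, 0, 500]],
}

for name, rows in experiments.items():
    cm = np.array(rows)
    per_frequency = np.diag(cm) / cm.sum(axis=1)   # correct responses per stimulus
    overall = np.trace(cm) / cm.sum()              # overall identification rate
    print(f"{name}: per-frequency {np.round(per_frequency, 3)}, overall {overall:.3f}")
</preformat>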
</sec>
<sec>
<label>4.2.</label>
<title>Design of Tactile Feedback Interface with Identifiable Frequencies</title>
<p>Based on the results of the frequency identification experiment, we propose a novel tactile feedback interface that can be integrated into the VIDA system. Our design provides both identifiable distance feedback and hand guidance feedback that keeps the user's hand within the camera view of VIDA. For the tactile distance feedback, the distance estimated by the VIDA system is mapped to one of the three identifiable frequencies (10 Hz, 100 Hz and 600 Hz) as listed in
<xref rid="t4-sensors-14-10412" ref-type="table">Table 4</xref>
. A higher frequency is assigned to a closer distance, since users must react more quickly to avoid nearby obstacles. In contrast, the guidance feedback always uses the highest frequency (600 Hz), since it is provided only when the user's hand leaves the camera view and people generally perceive higher-frequency signals as warnings. The two signals are delivered to two separate haptic actuators, attached, for example, to the index finger for the distance feedback and to the wrist for the guidance feedback. The interpretations are summarized in
<xref rid="t5-sensors-14-10412" ref-type="table">Table 5</xref>
.
<xref rid="f16-sensors-14-10412" ref-type="fig">Figure 16</xref>
shows how the designed tactile feedback interface is integrated into the VIDA system. The developed tactile feedback interface can also be used with other navigation systems, as long as distinct distance values are provided.</p>
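<p>A minimal sketch of the mapping in Tables 4 and 5 follows, assuming VIDA supplies the estimated obstacle distance in metres and a flag indicating whether the pointing hand is inside the camera view; the function and parameter names are illustrative and not part of the original implementation.</p>
<preformat>
def tactile_feedback(distance_m, hand_in_view):
    """Map VIDA outputs to drive frequencies (Hz) for the two actuators.

    Returns (actuator_I_hz, actuator_II_hz); None means that actuator stays off.
    Actuator I (index finger) encodes distance, Actuator II (wrist) warns when
    the pointing hand has left the camera view (Tables 4 and 5)."""
    if not hand_in_view:
        return None, 600      # guidance feedback only: highest frequency as a warning
    if distance_m >= 3.0:
        return None, None     # nothing within the 3 m feedback range
    if distance_m >= 2.0:
        return 10, None       # far obstacle (2 m to 3 m)
    if distance_m >= 1.0:
        return 100, None      # mid-range obstacle (1 m to 2 m)
    return 600, None          # closest range: demands the quickest reaction

print(tactile_feedback(0.8, True))    # (600, None)
print(tactile_feedback(1.5, True))    # (100, None)
print(tactile_feedback(2.4, True))    # (10, None)
print(tactile_feedback(2.4, False))   # (None, 600)
</preformat>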
</sec>
</sec>
<sec sec-type="conclusions">
<label>5.</label>
<title>Conclusions</title>
<p>We developed a complete virtual cane system by combining a finger pointing gesture interface with tactile feedback. For finger pointing estimation, we proposed a novel algorithm that precisely estimates a 3D finger pointing direction with a stereo camera. The proposed algorithm was thoroughly tested under various conditions (dynamic scenes, different objects, illumination changes and occlusions). The evaluation results show that the developed system (VIDA) is sufficiently robust and provides accurate object detection. In addition, we designed identifiable tactile signals that can be mapped to the distance information estimated by VIDA. Those signals (10 Hz, 100 Hz and 600 Hz) were selected through identification experiments and were then used to develop a tactile feedback interface. As the last step, we demonstrated that the tactile feedback interface can be successfully integrated into VIDA as a virtual cane system.</p>
<p>Our visual system provides accurate finger tracking and finger pointing estimation in real time. This accuracy and real-time performance enable blind people to navigate on the street with only a simple finger pointing gesture. The technology is not only a cost-effective solution but is also extendable to other applications such as finger or hand gesture control for mobile devices, computer games and VR (virtual reality) applications. Towards a complete navigation solution for the blind, we adopted haptic feedback, which is an effective way to deliver obstacle information in dynamic and noisy street environments. Unlike other prior work [
<xref rid="b26-sensors-14-10412" ref-type="bibr">26</xref>
<xref rid="b28-sensors-14-10412" ref-type="bibr">28</xref>
], we adopted identifiable tactile signals that were designed through identification experiments. This approach can benefit researchers and designers who develop human-computer interfaces involving haptic perception. Lastly, our system differs in that navigation guidance is given upon the user's simple gesture action (
<italic>i.e.</italic>
, two-way interaction) and in that our approach remains robust in unknown dynamic scenes.</p>
<p>Our future work will improve the frame update rate (currently 6 Hz) to support faster walkers and will evaluate the proposed virtual cane system through user studies with visually impaired people. We are also interested in exploring the design of tactile signals with other parameters (amplitude and more complex waveforms) through further identification and psychophysical experiments.</p>
</sec>
</body>
<back>
<ack>
<p>This research was supported by the MSIP (Ministry of Science, ICT and Future Planning), Korea, under the ITRC (Information Technology Research Center) support program (NIPA-2014-H0301-14-1012) supervised by the NIPA (National IT Industry Promotion Agency), and also supported by Institute of BioMed-IT and Smart-IT Technology (Best), a Brain Korea 21 Plus program, Yonsei University.</p>
</ack>
<notes>
<title>Author Contributions</title>
<p>Donghun Kim developed the entire visual information delivery system (VIDA) with a stereo camera including the finger pointing estimation algorithm. Kwangtaek Kim defined the research topic, designed and developed an identifiable tactile feedback interface. Sangyoun Lee guided the research direction and verified the research results. All authors made substantial contributions in the writing and revision of the paper.</p>
</notes>
<notes>
<title>Conflicts of Interest</title>
<p>The authors declare no conflict of interest.</p>
</notes>
<ref-list>
<title>References</title>
<ref id="b1-sensors-14-10412">
<label>1.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Vera</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Zenteno</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Salas</surname>
<given-names>J.</given-names>
</name>
</person-group>
<article-title>A smartphone-based virtual white cane</article-title>
<source>Pattern Anal. Appl.</source>
<year>2013</year>
<volume>2013</volume>
<pub-id pub-id-type="doi">10.1007/s10044-013-0328-8</pub-id>
</element-citation>
</ref>
<ref id="b2-sensors-14-10412">
<label>2.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Dakopoulos</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Bourbakis</surname>
<given-names>N.</given-names>
</name>
</person-group>
<article-title>Wearable obstacle avoidance electronic travel aids for blind: A survey</article-title>
<source>IEEE Trans. Syst. Man Cybern. Part C Appl. Rev.</source>
<year>2010</year>
<volume>40</volume>
<fpage>25</fpage>
<lpage>35</lpage>
</element-citation>
</ref>
<ref id="b3-sensors-14-10412">
<label>3.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Yuan</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Manduchi</surname>
<given-names>R.</given-names>
</name>
</person-group>
<article-title>A tool for range sensing and environment discovery for the blind</article-title>
<conf-name>Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshop</conf-name>
<conf-loc>Washington, DC, USA</conf-loc>
<conf-date>27 June–2 July 2004</conf-date>
</element-citation>
</ref>
<ref id="b4-sensors-14-10412">
<label>4.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Manduchi</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Coughlan</surname>
<given-names>J.</given-names>
</name>
</person-group>
<article-title>(Computer) vision without sight</article-title>
<source>ACM Commun.</source>
<year>2012</year>
<volume>55</volume>
<fpage>96</fpage>
<lpage>104</lpage>
</element-citation>
</ref>
<ref id="b5-sensors-14-10412">
<label>5.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Dramas</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Thorpe</surname>
<given-names>S.J.</given-names>
</name>
<name>
<surname>Jouffrais</surname>
<given-names>C.</given-names>
</name>
</person-group>
<article-title>Artificial vision for the blind: A bio-inspired algorithm for objects and obstacles detection</article-title>
<source>Int. J. Image Graph.</source>
<year>2010</year>
<volume>10</volume>
<fpage>531</fpage>
<lpage>544</lpage>
</element-citation>
</ref>
<ref id="b6-sensors-14-10412">
<label>6.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>José</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Farrajota</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Rodrigues</surname>
<given-names>J.M.</given-names>
</name>
<name>
<surname>du Buf</surname>
<given-names>J.H.</given-names>
</name>
</person-group>
<article-title>The SmartVision local navigation aid for blind and visually impaired persons</article-title>
<source>Int. J. Digit. Content Technol. Appl.</source>
<year>2011</year>
<volume>5</volume>
<fpage>362</fpage>
<lpage>375</lpage>
</element-citation>
</ref>
<ref id="b7-sensors-14-10412">
<label>7.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Fernandes</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Costa</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Filipe</surname>
<given-names>V.</given-names>
</name>
<name>
<surname>Hadjileontiadis</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Barroso</surname>
<given-names>J.</given-names>
</name>
</person-group>
<article-title>Stereo vision in blind navigation assistance</article-title>
<conf-name>Proceedings of the IEEE World Automation Congress (WAC)</conf-name>
<conf-loc>Kobe, Japan</conf-loc>
<conf-date>19–23 September 2010</conf-date>
<fpage>1</fpage>
<lpage>6</lpage>
</element-citation>
</ref>
<ref id="b8-sensors-14-10412">
<label>8.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Brilhault</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Kammoun</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Gutierrez</surname>
<given-names>O.</given-names>
</name>
<name>
<surname>Truillet</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Jouffrais</surname>
<given-names>C.</given-names>
</name>
</person-group>
<article-title>Fusion of artificial vision and GPS to improve blind pedestrian positioning</article-title>
<conf-name>Proceedings of the IEEE 4th IFIP International Conference on New Technologies, Mobility and Security (NTMS)</conf-name>
<conf-loc>Paris, France</conf-loc>
<conf-date>7–10 February 2011</conf-date>
<fpage>1</fpage>
<lpage>5</lpage>
</element-citation>
</ref>
<ref id="b9-sensors-14-10412">
<label>9.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Denis</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Jouffrais</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Vergnieux</surname>
<given-names>V.</given-names>
</name>
<name>
<surname>Macé</surname>
<given-names>M.</given-names>
</name>
</person-group>
<article-title>Human faces detection and localization with simulated prosthetic vision</article-title>
<conf-name>Proceedings of the ACM CHI'13 Extended Abstracts on Human Factors in Computing Systems</conf-name>
<conf-loc>Paris, France</conf-loc>
<conf-date>27 April–2 May 2013</conf-date>
<fpage>61</fpage>
<lpage>66</lpage>
</element-citation>
</ref>
<ref id="b10-sensors-14-10412">
<label>10.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Asano</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Nagayasu</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Orimo</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Terabayashi</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Ohta</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Umeda</surname>
<given-names>K.</given-names>
</name>
</person-group>
<article-title>Recognition of finger-pointing direction using color clustering and image segmentation</article-title>
<conf-name>Proceedings of the SICE Annual Conference</conf-name>
<conf-loc>Nagoya, Japan</conf-loc>
<conf-date>14–17 September 2013</conf-date>
<fpage>2029</fpage>
<lpage>2034</lpage>
</element-citation>
</ref>
<ref id="b11-sensors-14-10412">
<label>11.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Kim</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Hong</surname>
<given-names>K.</given-names>
</name>
</person-group>
<article-title>A robust human pointing location estimation using 3D hand and face poses with RGB-D sensor</article-title>
<conf-name>Proceedings of the IEEE International Conference on Consumer Electronics</conf-name>
<conf-loc>Las Vegas, NV, USA</conf-loc>
<conf-date>11–14 January 2013</conf-date>
<fpage>556</fpage>
<lpage>557</lpage>
</element-citation>
</ref>
<ref id="b12-sensors-14-10412">
<label>12.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wachs</surname>
<given-names>J.P.</given-names>
</name>
<name>
<surname>Kölsch</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Stern</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Edan</surname>
<given-names>Y.</given-names>
</name>
</person-group>
<article-title>Vision-based hand-gesture applications</article-title>
<source>Commun. ACM</source>
<year>2011</year>
<volume>54</volume>
<fpage>60</fpage>
<lpage>71</lpage>
<pub-id pub-id-type="pmid">21984822</pub-id>
</element-citation>
</ref>
<ref id="b13-sensors-14-10412">
<label>13.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Matikainen</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Pillai</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Mummert</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Sukthankar</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Hebert</surname>
<given-names>M.</given-names>
</name>
</person-group>
<article-title>Prop-free pointing detection in dynamic cluttered environments</article-title>
<conf-name>Proceedings of the IEEE International Conference on Automatic Face Gesture Recognition and Workshops</conf-name>
<conf-loc>Santa Barbara, CA, USA</conf-loc>
<conf-date>21–25 March 2011</conf-date>
<fpage>374</fpage>
<lpage>381</lpage>
</element-citation>
</ref>
<ref id="b14-sensors-14-10412">
<label>14.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Lee</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Green</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Billinghurst</surname>
<given-names>M.</given-names>
</name>
</person-group>
<article-title>3D natural hand interaction for AR applications</article-title>
<conf-name>Proceedings of the IEEE 23rd International Conference on Image and Vision Computing New Zealand</conf-name>
<conf-loc>Christchurch, New Zealand</conf-loc>
<conf-date>26–28 November 2008</conf-date>
<fpage>1</fpage>
<lpage>6</lpage>
</element-citation>
</ref>
<ref id="b15-sensors-14-10412">
<label>15.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Nickel</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Stiefelhagen</surname>
<given-names>R.</given-names>
</name>
</person-group>
<article-title>Visual recognition of pointing gestures for human-robot interaction</article-title>
<source>Image Vis. Comput.</source>
<year>2007</year>
<volume>25</volume>
<fpage>1875</fpage>
<lpage>1884</lpage>
</element-citation>
</ref>
<ref id="b16-sensors-14-10412">
<label>16.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Thomas</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Piekarski</surname>
<given-names>W.</given-names>
</name>
</person-group>
<article-title>Glove based user interaction techniques for augmented reality in an outdoor environment</article-title>
<source>Virtual Real.</source>
<year>2002</year>
<volume>6</volume>
<fpage>167</fpage>
<lpage>180</lpage>
</element-citation>
</ref>
<ref id="b17-sensors-14-10412">
<label>17.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Segen</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Kumar</surname>
<given-names>S.</given-names>
</name>
</person-group>
<article-title>Gesture VR: Vision-based 3D hand interface for spatial interaction</article-title>
<conf-name>Proceedings of the Sixth ACM International Conference on Multimedia</conf-name>
<conf-loc>Bristol, UK</conf-loc>
<conf-date>12–16 September 1998</conf-date>
<fpage>455</fpage>
<lpage>464</lpage>
</element-citation>
</ref>
<ref id="b18-sensors-14-10412">
<label>18.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Rehg</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Kanade</surname>
<given-names>T.</given-names>
</name>
</person-group>
<article-title>Visual tracking of high DOF articulated structures: An application to human hand tracking</article-title>
<conf-name>Proceedings of the Third European Conference on Computer Vision</conf-name>
<conf-loc>Stockholm, Sweden</conf-loc>
<conf-date>2–6 May 1994</conf-date>
<fpage>35</fpage>
<lpage>46</lpage>
</element-citation>
</ref>
<ref id="b19-sensors-14-10412">
<label>19.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Ong</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Bowden</surname>
<given-names>R.</given-names>
</name>
</person-group>
<article-title>A boosted classifier tree for hand shape detection</article-title>
<conf-name>Proceedings of the Sixth IEEE International Conference on Automatic Face and Gesture Recognition</conf-name>
<conf-loc>Seoul, Korea</conf-loc>
<conf-date>17–19 May 2004</conf-date>
<fpage>889</fpage>
<lpage>894</lpage>
</element-citation>
</ref>
<ref id="b20-sensors-14-10412">
<label>20.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Starner</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Weaver</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Pentland</surname>
<given-names>A.</given-names>
</name>
</person-group>
<article-title>Real-time American sign language recognition using desk and wearable computer based video</article-title>
<source>IEEE Trans. Pattern Anal. Mach. Intell.</source>
<year>1998</year>
<volume>20</volume>
<fpage>1371</fpage>
<lpage>1375</lpage>
</element-citation>
</ref>
<ref id="b21-sensors-14-10412">
<label>21.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Kolsch</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Turk</surname>
<given-names>M.</given-names>
</name>
</person-group>
<article-title>Robust hand detection</article-title>
<conf-name>Proceedings of the IEEE International Conference on Automatic Face and Gesture Recognition</conf-name>
<conf-loc>Seoul, Korea</conf-loc>
<conf-date>17–19 May 2004</conf-date>
<fpage>614</fpage>
<lpage>619</lpage>
</element-citation>
</ref>
<ref id="b22-sensors-14-10412">
<label>22.</label>
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Trucco</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Verri</surname>
<given-names>A.</given-names>
</name>
</person-group>
<source>Introductory Techniques for 3-D Computer Vision</source>
<publisher-name>Prentice Hall</publisher-name>
<publisher-loc>Englewood Cliffs, NJ, USA</publisher-loc>
<year>1998</year>
</element-citation>
</ref>
<ref id="b23-sensors-14-10412">
<label>23.</label>
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Hartley</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Zisserman</surname>
<given-names>A.</given-names>
</name>
</person-group>
<source>Multiple View Geometry in Computer Vision</source>
<publisher-name>Cambridge University Press</publisher-name>
<publisher-loc>Cambridge, UK</publisher-loc>
<year>2003</year>
</element-citation>
</ref>
<ref id="b24-sensors-14-10412">
<label>24.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Zhang</surname>
<given-names>Z.</given-names>
</name>
<name>
<surname>Faugeras</surname>
<given-names>O.</given-names>
</name>
<name>
<surname>Deriche</surname>
<given-names>R.</given-names>
</name>
</person-group>
<article-title>An effective technique for calibrating a binocular stereo through projective reconstruction using both a calibration object and the environment</article-title>
<source>Videre</source>
<year>1997</year>
<volume>1</volume>
<fpage>58</fpage>
<lpage>68</lpage>
</element-citation>
</ref>
<ref id="b25-sensors-14-10412">
<label>25.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Tsai</surname>
<given-names>R.</given-names>
</name>
</person-group>
<article-title>A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses</article-title>
<source>IEEE J. Robot. Autom.</source>
<year>1987</year>
<volume>3</volume>
<fpage>323</fpage>
<lpage>344</lpage>
</element-citation>
</ref>
<ref id="b26-sensors-14-10412">
<label>26.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Pressey</surname>
<given-names>N.</given-names>
</name>
</person-group>
<article-title>Mowat sensor</article-title>
<source>Focus</source>
<year>1977</year>
<volume>11</volume>
<fpage>35</fpage>
<lpage>39</lpage>
</element-citation>
</ref>
<ref id="b27-sensors-14-10412">
<label>27.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Ertan</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Lee</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Willets</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Tan</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Pentland</surname>
<given-names>A.</given-names>
</name>
</person-group>
<article-title>A wearable haptic navigation guidance system</article-title>
<conf-name>Proceedings of the Second International Symposium on Wearable Computers 1998 Digest of Papers</conf-name>
<conf-loc>Pittsburgh, PA, USA</conf-loc>
<conf-date>19–20 October 1998</conf-date>
<fpage>164</fpage>
<lpage>165</lpage>
</element-citation>
</ref>
<ref id="b28-sensors-14-10412">
<label>28.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Velázquez</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Maingreaud</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Pissaloux</surname>
<given-names>E.</given-names>
</name>
</person-group>
<article-title>Intelligent glasses: A new man-machine interface concept integrating computer vision and human tactile perception</article-title>
<conf-name>Proceedings of the EuroHaptics</conf-name>
<conf-loc>Dublin, Ireland</conf-loc>
<conf-date>6–9 July 2003</conf-date>
<fpage>456</fpage>
<lpage>460</lpage>
</element-citation>
</ref>
<ref id="b29-sensors-14-10412">
<label>29.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Hirose</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Amemiya</surname>
<given-names>T.</given-names>
</name>
</person-group>
<article-title>Wearable finger-braille interface for navigation of deaf-blind in ubiquitous barrier-free space</article-title>
<conf-name>Proceedings of the 10th International Conference on Human-Computer Interaction, Universal Access in Human Computer Interaction</conf-name>
<conf-loc>Crete, Greece</conf-loc>
<conf-date>22–27 June 2003</conf-date>
<fpage>1417</fpage>
<lpage>1421</lpage>
</element-citation>
</ref>
<ref id="b30-sensors-14-10412">
<label>30.</label>
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Goldstein</surname>
<given-names>E.B.</given-names>
</name>
</person-group>
<source>Sensation and Perception</source>
<publisher-name>Cengage Learning</publisher-name>
<publisher-loc>Boston, MA, USA</publisher-loc>
<year>2013</year>
</element-citation>
</ref>
<ref id="b31-sensors-14-10412">
<label>31.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Miller</surname>
<given-names>G.A.</given-names>
</name>
</person-group>
<article-title>Note on the bias of information estimates</article-title>
<source>Inf. Theory Psychol.</source>
<year>1955</year>
<volume>2</volume>
<fpage>95</fpage>
<lpage>100</lpage>
</element-citation>
</ref>
</ref-list>
</back>
<floats-group>
<fig id="f1-sensors-14-10412" position="float">
<label>Figure 1.</label>
<caption>
<p>The flow chart of algorithms used in Visual Information Delivery Assistant.</p>
</caption>
<graphic xlink:href="sensors-14-10412f1"></graphic>
</fig>
<fig id="f2-sensors-14-10412" position="float">
<label>Figure 2.</label>
<caption>
<p>Examples of hand detection with our background subtraction under dynamic scenes: a pointing finger (red) and a moving arm (blue). Note that images were taken sequentially from left to right.</p>
</caption>
<graphic xlink:href="sensors-14-10412f2"></graphic>
</fig>
<fig id="f3-sensors-14-10412" position="float">
<label>Figure 3.</label>
<caption>
<p>Photographic representation of estimating a 2D pointing direction:
<bold>(a)</bold>
a detected hand area;
<bold>(b)</bold>
the binary image of
<bold>(a)</bold>
but with noise;
<bold>(c)</bold>
a cleaned hand region after removing noises by using a connected component analysis;
<bold>(d)</bold>
the hand contour;
<bold>(e)</bold>
a hand shape estimated by a geometric analysis; and
<bold>(f)</bold>
an initial estimated direction vector (blue) by a blue bounding box and a refined direction vector (red) by a hand shape geometry.</p>
</caption>
<graphic xlink:href="sensors-14-10412f3"></graphic>
</fig>
<fig id="f4-sensors-14-10412" position="float">
<label>Figure 4.</label>
<caption>
<p>Estimation of a 3D pointing direction:
<bold>(a)</bold>
a 2D pointing vector;
<bold>(b)</bold>
a hand area projected onto the
<italic>x-y</italic>
plane; and
<bold>(c)</bold>
an estimated 3D vector from the corresponding 2D pointing vector.</p>
</caption>
<graphic xlink:href="sensors-14-10412f4"></graphic>
</fig>
<fig id="f5-sensors-14-10412" position="float">
<label>Figure 5.</label>
<caption>
<p>Relationship between the image coordinates and the camera coordinates.</p>
</caption>
<graphic xlink:href="sensors-14-10412f5"></graphic>
</fig>
<fig id="f6-sensors-14-10412" position="float">
<label>Figure 6.</label>
<caption>
<p>Examples of
<bold>(a)</bold>
noisy disparity data and
<bold>(b)</bold>
missing disparity values. These areas are highlighted by ellipses.</p>
</caption>
<graphic xlink:href="sensors-14-10412f6"></graphic>
</fig>
<fig id="f7-sensors-14-10412" position="float">
<label>Figure 7.</label>
<caption>
<p>Virtual rectangles used as a classifier on
<bold>(a)</bold>
the
<italic>x-z</italic>
plane and
<bold>(b)</bold>
the
<italic>y-z</italic>
plane.</p>
</caption>
<graphic xlink:href="sensors-14-10412f7"></graphic>
</fig>
<fig id="f8-sensors-14-10412" position="float">
<label>Figure 8.</label>
<caption>
<p>ROI extraction and the magnified view:
<bold>(a)</bold>
the extracted ROI region highlighted by a red rectangle;
<bold>(b)</bold>
the superimposed ROI; and
<bold>(c)</bold>
its magnified view.</p>
</caption>
<graphic xlink:href="sensors-14-10412f8"></graphic>
</fig>
<fig id="f9-sensors-14-10412" position="float">
<label>Figure 9.</label>
<caption>
<p>Our developed VIDA system:
<bold>(a)</bold>
the overall system appearance;
<bold>(b)</bold>
an example of the head mounted VIDA system; and
<bold>(c)</bold>
the user scenario.</p>
</caption>
<graphic xlink:href="sensors-14-10412f9"></graphic>
</fig>
<fig id="f10-sensors-14-10412" position="float">
<label>Figure 10.</label>
<caption>
<p>Results of hand detection.</p>
</caption>
<graphic xlink:href="sensors-14-10412f10"></graphic>
</fig>
<fig id="f11-sensors-14-10412" position="float">
<label>Figure 11.</label>
<caption>
<p>Results of finger pointing estimation (red arrow) with dynamic backgrounds.</p>
</caption>
<graphic xlink:href="sensors-14-10412f11"></graphic>
</fig>
<fig id="f12-sensors-14-10412" position="float">
<label>Figure 12.</label>
<caption>
<p>Results of object detection and magnification for
<bold>(a)</bold>
a calendar;
<bold>(b)</bold>
an instrument;
<bold>(c)</bold>
a humidifier;
<bold>(d)</bold>
a small window;
<bold>(e)</bold>
a drawn shape on a blackboard. Each row, from top to bottom, shows the sequential order for ROI extractions and the magnified ROI display.</p>
</caption>
<graphic xlink:href="sensors-14-10412f12"></graphic>
</fig>
<fig id="f13-sensors-14-10412" position="float">
<label>Figure 13.</label>
<caption>
<p>Quantitative experimental setup to evaluate our finger pointing algorithm with a laser pointer (ground truth):
<bold>(a)</bold>
a laser pointer attached to the top of the index finger and
<bold>(b)</bold>
the experiment with a target circle image at a distance (1 m).</p>
</caption>
<graphic xlink:href="sensors-14-10412f13"></graphic>
</fig>
<fig id="f14-sensors-14-10412" position="float">
<label>Figure 14.</label>
<caption>
<p>Results with varying illumination conditions and occlusions:
<bold>(a–c)</bold>
three lighting conditions (bright, normal, dark) and
<bold>(d,e)</bold>
occlusions by other people's hand and face.</p>
</caption>
<graphic xlink:href="sensors-14-10412f14"></graphic>
</fig>
<fig id="f15-sensors-14-10412" position="float">
<label>Figure 15.</label>
<caption>
<p>Identification experiment setup to investigate identifiable frequencies for tactile feedback:
<bold>(a)</bold>
Piezoelectric actuator and
<bold>(b)</bold>
the actuator attached to the index finger.</p>
</caption>
<graphic xlink:href="sensors-14-10412f15"></graphic>
</fig>
<fig id="f16-sensors-14-10412" position="float">
<label>Figure 16.</label>
<caption>
<p>A flow chart of a complete solution of our proposed virtual cane system including a tactile feedback interface.</p>
</caption>
<graphic xlink:href="sensors-14-10412f16"></graphic>
</fig>
<table-wrap id="t1-sensors-14-10412" position="float">
<label>Table 1.</label>
<caption>
<p>The accuracy of object detection from
<xref rid="f12-sensors-14-10412" ref-type="fig">Figure 12</xref>
.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th valign="middle" align="center" rowspan="1" colspan="1"></th>
<th valign="middle" align="center" rowspan="1" colspan="1">
<bold>(a)</bold>
</th>
<th valign="middle" align="center" rowspan="1" colspan="1">
<bold>(b)</bold>
</th>
<th valign="middle" align="center" rowspan="1" colspan="1">
<bold>(c)</bold>
</th>
<th valign="middle" align="center" rowspan="1" colspan="1">
<bold>(d)</bold>
</th>
<th valign="middle" align="center" rowspan="1" colspan="1">
<bold>(e)</bold>
</th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="center" rowspan="1" colspan="1">Object size</td>
<td valign="top" align="center" rowspan="1" colspan="1">8 × 18</td>
<td valign="top" align="center" rowspan="1" colspan="1">12 × 17</td>
<td valign="top" align="center" rowspan="1" colspan="1">13 × 16</td>
<td valign="top" align="center" rowspan="1" colspan="1">36 × 17</td>
<td valign="top" align="center" rowspan="1" colspan="1">17 × 17</td>
</tr>
<tr>
<td valign="top" align="center" rowspan="1" colspan="1">Distance error (pixels)</td>
<td valign="top" align="center" rowspan="1" colspan="1">7.28</td>
<td valign="top" align="center" rowspan="1" colspan="1">8.06</td>
<td valign="top" align="center" rowspan="1" colspan="1">6.40</td>
<td valign="top" align="center" rowspan="1" colspan="1">8.00</td>
<td valign="top" align="center" rowspan="1" colspan="1">7.07</td>
</tr>
</tbody>
</table>
</table-wrap>
<table-wrap id="t2-sensors-14-10412" position="float">
<label>Table 2.</label>
<caption>
<p>Quantitative evaluation results of our proposed pointing algorithm using a laser pointer.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th valign="middle" align="center" rowspan="1" colspan="1">
<bold>Distance</bold>
</th>
<th valign="middle" align="center" rowspan="1" colspan="1">
<bold>1 m</bold>
</th>
<th valign="middle" align="center" rowspan="1" colspan="1">
<bold>2 m</bold>
</th>
<th valign="middle" align="center" rowspan="1" colspan="1">
<bold>3 m</bold>
</th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="center" rowspan="1" colspan="1">Average error (pixels)</td>
<td valign="top" align="center" rowspan="1" colspan="1">8.46</td>
<td valign="top" align="center" rowspan="1" colspan="1">11.27</td>
<td valign="top" align="center" rowspan="1" colspan="1">13.56</td>
</tr>
<tr>
<td valign="top" align="center" rowspan="1" colspan="1">Average error (cm)</td>
<td valign="top" align="center" rowspan="1" colspan="1">6.98</td>
<td valign="top" align="center" rowspan="1" colspan="1">15.21</td>
<td valign="top" align="center" rowspan="1" colspan="1">25.58</td>
</tr>
<tr>
<td valign="top" align="center" rowspan="1" colspan="1">Standard dev. (cm)</td>
<td valign="top" align="center" rowspan="1" colspan="1">3.43</td>
<td valign="top" align="center" rowspan="1" colspan="1">3.25</td>
<td valign="top" align="center" rowspan="1" colspan="1">6.39</td>
</tr>
</tbody>
</table>
</table-wrap>
<table-wrap id="t3-sensors-14-10412" position="float">
<label>Table 3.</label>
<caption>
<p>Stimulus-response confusion matrices obtained from three repetitions of the frequency identification experiment with different sets of frequencies. Each cell shows the accumulated responses from ten participants. Note that the maximum value "500" indicates perfect identification.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th valign="middle" align="center" rowspan="1" colspan="1"></th>
<th colspan="4" valign="top" align="center" rowspan="1">
<bold>Response (Experiment 1)</bold>
</th>
<th colspan="4" valign="top" align="center" rowspan="1">
<bold>Response (Experiment 2)</bold>
</th>
<th colspan="4" valign="top" align="center" rowspan="1">
<bold>Response (Experiment 3)</bold>
</th>
</tr>
</thead>
<tbody>
<tr>
<td valign="middle" rowspan="4" align="center" colspan="1">Stimulus</td>
<td valign="top" align="center" rowspan="1" colspan="1">Hz</td>
<td valign="top" align="center" rowspan="1" colspan="1">10</td>
<td valign="top" align="center" rowspan="1" colspan="1">100</td>
<td valign="top" align="center" rowspan="1" colspan="1">300</td>
<td valign="top" align="center" rowspan="1" colspan="1">Hz</td>
<td valign="top" align="center" rowspan="1" colspan="1">10</td>
<td valign="top" align="center" rowspan="1" colspan="1">100</td>
<td valign="top" align="center" rowspan="1" colspan="1">500</td>
<td valign="top" align="center" rowspan="1" colspan="1">Hz</td>
<td valign="top" align="center" rowspan="1" colspan="1">10</td>
<td valign="top" align="center" rowspan="1" colspan="1">100</td>
<td valign="top" align="center" rowspan="1" colspan="1">600</td>
</tr>
<tr>
<td valign="top" align="center" rowspan="1" colspan="1">10</td>
<td valign="top" align="center" rowspan="1" colspan="1">
<bold>500</bold>
</td>
<td valign="top" align="center" rowspan="1" colspan="1">0</td>
<td valign="top" align="center" rowspan="1" colspan="1">0</td>
<td valign="top" align="center" rowspan="1" colspan="1">10</td>
<td valign="top" align="center" rowspan="1" colspan="1">
<bold>500</bold>
</td>
<td valign="top" align="center" rowspan="1" colspan="1">0</td>
<td valign="top" align="center" rowspan="1" colspan="1">0</td>
<td valign="top" align="center" rowspan="1" colspan="1">10</td>
<td valign="top" align="center" rowspan="1" colspan="1">
<bold>500</bold>
</td>
<td valign="top" align="center" rowspan="1" colspan="1">0</td>
<td valign="top" align="center" rowspan="1" colspan="1">0</td>
</tr>
<tr>
<td valign="top" align="center" rowspan="1" colspan="1">100</td>
<td valign="top" align="center" rowspan="1" colspan="1">0</td>
<td valign="top" align="center" rowspan="1" colspan="1">
<bold>431</bold>
</td>
<td valign="top" align="center" rowspan="1" colspan="1">69</td>
<td valign="top" align="center" rowspan="1" colspan="1">100</td>
<td valign="top" align="center" rowspan="1" colspan="1">0</td>
<td valign="top" align="center" rowspan="1" colspan="1">
<bold>488</bold>
</td>
<td valign="top" align="center" rowspan="1" colspan="1">12</td>
<td valign="top" align="center" rowspan="1" colspan="1">100</td>
<td valign="top" align="center" rowspan="1" colspan="1">0</td>
<td valign="top" align="center" rowspan="1" colspan="1">
<bold>500</bold>
</td>
<td valign="top" align="center" rowspan="1" colspan="1">0</td>
</tr>
<tr>
<td valign="top" align="center" rowspan="1" colspan="1">300</td>
<td valign="top" align="center" rowspan="1" colspan="1">0</td>
<td valign="top" align="center" rowspan="1" colspan="1">60</td>
<td valign="top" align="center" rowspan="1" colspan="1">
<bold>440</bold>
</td>
<td valign="top" align="center" rowspan="1" colspan="1">500</td>
<td valign="top" align="center" rowspan="1" colspan="1">0</td>
<td valign="top" align="center" rowspan="1" colspan="1">9</td>
<td valign="top" align="center" rowspan="1" colspan="1">
<bold>491</bold>
</td>
<td valign="top" align="center" rowspan="1" colspan="1">600</td>
<td valign="top" align="center" rowspan="1" colspan="1">0</td>
<td valign="top" align="center" rowspan="1" colspan="1">0</td>
<td valign="top" align="center" rowspan="1" colspan="1">
<bold>500</bold>
</td>
</tr>
</tbody>
</table>
</table-wrap>
<table-wrap id="t4-sensors-14-10412" position="float">
<label>Table 4.</label>
<caption>
<p>A proposed set of identifiable tactile signals for dual actuators, mapped to distinct distance ranges.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th colspan="2" valign="middle" align="center" rowspan="1">
<bold>Actuator I (Distance Feedback)</bold>
</th>
<th valign="middle" align="center" rowspan="1" colspan="1">
<bold>Actuator II (Guidance Feedback)</bold>
</th>
</tr>
<tr>
<th valign="bottom" colspan="3" rowspan="1">
<hr></hr>
</th>
</tr>
<tr>
<th valign="middle" align="center" rowspan="1" colspan="1">Distance (m)</th>
<th valign="middle" align="center" rowspan="1" colspan="1">Frequency (Hz)</th>
<th valign="middle" align="center" rowspan="1" colspan="1">Frequency (Hz)</th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="center" rowspan="1" colspan="1">
<italic>D</italic>
<1</td>
<td valign="top" align="center" rowspan="1" colspan="1">600</td>
<td valign="middle" rowspan="3" align="center" colspan="1">600</td>
</tr>
<tr>
<td valign="top" align="center" rowspan="1" colspan="1">1 ≰
<italic>D</italic>
<2</td>
<td valign="top" align="center" rowspan="1" colspan="1">100</td>
</tr>
<tr>
<td valign="top" align="center" rowspan="1" colspan="1">
<italic>2</italic>
<italic>D</italic>
<3</td>
<td valign="top" align="center" rowspan="1" colspan="1">10</td>
</tr>
</tbody>
</table>
</table-wrap>
<table-wrap id="t5-sensors-14-10412" position="float">
<label>Table 5.</label>
<caption>
<p>The configuration of the dual actuators and its interpretation.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th valign="middle" align="center" rowspan="1" colspan="1">
<bold>Interpretation</bold>
</th>
<th valign="middle" align="center" rowspan="1" colspan="1">
<bold>Actuator I</bold>
</th>
<th valign="middle" align="center" rowspan="1" colspan="1">
<bold>Actuator II</bold>
</th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="center" rowspan="1" colspan="1">Display distance information</td>
<td valign="top" align="center" rowspan="1" colspan="1">ON</td>
<td valign="top" align="center" rowspan="1" colspan="1">OFF</td>
</tr>
<tr>
<td valign="top" align="center" rowspan="1" colspan="1">Alert to align watching and pointing</td>
<td valign="top" align="center" rowspan="1" colspan="1">OFF</td>
<td valign="top" align="center" rowspan="1" colspan="1">ON</td>
</tr>
</tbody>
</table>
</table-wrap>
</floats-group>
</pmc>
</record>

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/HapticV1/Data/Pmc/Curation
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 002481 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/Pmc/Curation/biblio.hfd -nk 002481 | SxmlIndent | more

To add a link to this page in the Wicri network

{{Explor lien
   |wiki=    Ticri/CIDE
   |area=    HapticV1
   |flux=    Pmc
   |étape=   Curation
   |type=    RBID
   |clé=     PMC:4118356
   |texte=   Stereo Camera Based Virtual Cane System with Identifiable Distance Tactile Feedback for the Blind
}}

To generate wiki pages

HfdIndexSelect -h $EXPLOR_AREA/Data/Pmc/Curation/RBID.i   -Sk "pubmed:24932864" \
       | HfdSelect -Kh $EXPLOR_AREA/Data/Pmc/Curation/biblio.hfd   \
       | NlmPubMed2Wicri -a HapticV1 

Wicri

This area was generated with Dilib version V0.6.23.
Data generation: Mon Jun 13 01:09:46 2016. Site generation: Wed Mar 6 09:54:07 2024