OCR exploration server


Reading Visual Braille with a Retinal Prosthesis

Internal identifier: 000091 (Pmc/Checkpoint)

Authors: Thomas Z. Lauritzen [United States]; Jordan Harris [United States]; Saddek Mohand-Said [France]; Jose A. Sahel [France]; Jessy D. Dorn [United States]; Kelly McClure [United States]; Robert J. Greenberg [United States]

Source: Frontiers in Neuroscience (2012)

RBID: PMC:3504310

Abstract

Retinal prostheses, which restore partial vision to patients blinded by outer retinal degeneration, are currently in clinical trial. The Argus II retinal prosthesis system was recently awarded CE approval for commercial use in Europe. While retinal prosthesis users have achieved remarkable visual improvement to the point of reading letters and short sentences, the reading process is still fairly cumbersome. This study investigates the possibility of using an epiretinal prosthesis to stimulate visual braille as a sensory substitution for reading written letters and words. The Argus II retinal prosthesis system, used in this study, includes a 10 × 6 electrode array implanted epiretinally, a tiny video camera mounted on a pair of glasses, and a wearable computer that processes the video and determines the stimulation current of each electrode in real time. In the braille reading system, individual letters are created by a subset of dots from a 3 by 2 array of six dots. For the visual braille experiment, a grid of six electrodes was chosen out of the 10 × 6 Argus II array. Groups of these electrodes were then directly stimulated (bypassing the camera) to create visual percepts of individual braille letters. Experiments were performed in a single subject. Single letters were stimulated in an alternative forced choice (AFC) paradigm, and short 2–4-letter words were stimulated (one letter at a time) in an open-choice reading paradigm. The subject correctly identified 89% of single letters, 80% of 2-letter, 60% of 3-letter, and 70% of 4-letter words. This work suggests that text can successfully be stimulated and read as visual braille in retinal prosthesis patients.


URL: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3504310
DOI: 10.3389/fnins.2012.00168
PubMed: 23189036
PubMed Central: 3504310


Affiliations: Second Sight Medical Products, Sylmar, CA, USA; Brigham Young University – Idaho, Rexburg, ID, USA; UMR-S 968, Institut de la Vision, Paris, France; CIC INSERM DHOS 503, National Ophthalmology Hospital, Paris, France


The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">Reading Visual Braille with a Retinal Prosthesis</title>
<author>
<name sortKey="Lauritzen, Thomas Z" sort="Lauritzen, Thomas Z" uniqKey="Lauritzen T" first="Thomas Z." last="Lauritzen">Thomas Z. Lauritzen</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<institution>Second Sight Medical Products</institution>
<country>Sylmar, CA, USA</country>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Harris, Jordan" sort="Harris, Jordan" uniqKey="Harris J" first="Jordan" last="Harris">Jordan Harris</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<institution>Second Sight Medical Products</institution>
<country>Sylmar, CA, USA</country>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="aff2">
<institution>Brigham Young University – Idaho</institution>
<country>Rexburg, ID, USA</country>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Mohand Said, Saddek" sort="Mohand Said, Saddek" uniqKey="Mohand Said S" first="Saddek" last="Mohand-Said">Saddek Mohand-Said</name>
<affiliation wicri:level="1">
<nlm:aff id="aff3">
<institution>UMR-S 968, Institut de la Vision</institution>
<country>Paris, France</country>
</nlm:aff>
<country xml:lang="fr">France</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="aff4">
<institution>CIC INSERM DHOS 503, National Ophthalmology Hospital</institution>
<country>Paris, France</country>
</nlm:aff>
<country xml:lang="fr">France</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Sahel, Jose A" sort="Sahel, Jose A" uniqKey="Sahel J" first="Jose A." last="Sahel">Jose A. Sahel</name>
<affiliation wicri:level="1">
<nlm:aff id="aff3">
<institution>UMR-S 968, Institut de la Vision</institution>
<country>Paris, France</country>
</nlm:aff>
<country xml:lang="fr">France</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="aff4">
<institution>CIC INSERM DHOS 503, National Ophthalmology Hospital</institution>
<country>Paris, France</country>
</nlm:aff>
<country xml:lang="fr">France</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Dorn, Jessy D" sort="Dorn, Jessy D" uniqKey="Dorn J" first="Jessy D." last="Dorn">Jessy D. Dorn</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<institution>Second Sight Medical Products</institution>
<country>Sylmar, CA, USA</country>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Mcclure, Kelly" sort="Mcclure, Kelly" uniqKey="Mcclure K" first="Kelly" last="Mcclure">Kelly Mcclure</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<institution>Second Sight Medical Products</institution>
<country>Sylmar, CA, USA</country>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Greenberg, Robert J" sort="Greenberg, Robert J" uniqKey="Greenberg R" first="Robert J." last="Greenberg">Robert J. Greenberg</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<institution>Second Sight Medical Products</institution>
<country>Sylmar, CA, USA</country>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PMC</idno>
<idno type="pmid">23189036</idno>
<idno type="pmc">3504310</idno>
<idno type="url">http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3504310</idno>
<idno type="RBID">PMC:3504310</idno>
<idno type="doi">10.3389/fnins.2012.00168</idno>
<date when="2012">2012</date>
<idno type="wicri:Area/Pmc/Corpus">000162</idno>
<idno type="wicri:Area/Pmc/Curation">000162</idno>
<idno type="wicri:Area/Pmc/Checkpoint">000091</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a" type="main">Reading Visual Braille with a Retinal Prosthesis</title>
<author>
<name sortKey="Lauritzen, Thomas Z" sort="Lauritzen, Thomas Z" uniqKey="Lauritzen T" first="Thomas Z." last="Lauritzen">Thomas Z. Lauritzen</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<institution>Second Sight Medical Products</institution>
<country>Sylmar, CA, USA</country>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Harris, Jordan" sort="Harris, Jordan" uniqKey="Harris J" first="Jordan" last="Harris">Jordan Harris</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<institution>Second Sight Medical Products</institution>
<country>Sylmar, CA, USA</country>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="aff2">
<institution>Brigham Young University – Idaho</institution>
<country>Rexburg, ID, USA</country>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Mohand Said, Saddek" sort="Mohand Said, Saddek" uniqKey="Mohand Said S" first="Saddek" last="Mohand-Said">Saddek Mohand-Said</name>
<affiliation wicri:level="1">
<nlm:aff id="aff3">
<institution>UMR-S 968, Institut de la Vision</institution>
<country>Paris, France</country>
</nlm:aff>
<country xml:lang="fr">France</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="aff4">
<institution>CIC INSERM DHOS 503, National Ophthalmology Hospital</institution>
<country>Paris, France</country>
</nlm:aff>
<country xml:lang="fr">France</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Sahel, Jose A" sort="Sahel, Jose A" uniqKey="Sahel J" first="Jose A." last="Sahel">Jose A. Sahel</name>
<affiliation wicri:level="1">
<nlm:aff id="aff3">
<institution>UMR-S 968, Institut de la Vision</institution>
<country>Paris, France</country>
</nlm:aff>
<country xml:lang="fr">France</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="aff4">
<institution>CIC INSERM DHOS 503, National Ophthalmology Hospital</institution>
<country>Paris, France</country>
</nlm:aff>
<country xml:lang="fr">France</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Dorn, Jessy D" sort="Dorn, Jessy D" uniqKey="Dorn J" first="Jessy D." last="Dorn">Jessy D. Dorn</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<institution>Second Sight Medical Products</institution>
<country>Sylmar, CA, USA</country>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Mcclure, Kelly" sort="Mcclure, Kelly" uniqKey="Mcclure K" first="Kelly" last="Mcclure">Kelly Mcclure</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<institution>Second Sight Medical Products</institution>
<country>Sylmar, CA, USA</country>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Greenberg, Robert J" sort="Greenberg, Robert J" uniqKey="Greenberg R" first="Robert J." last="Greenberg">Robert J. Greenberg</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<institution>Second Sight Medical Products</institution>
<country>Sylmar, CA, USA</country>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
</analytic>
<series>
<title level="j">Frontiers in Neuroscience</title>
<idno type="ISSN">1662-4548</idno>
<idno type="eISSN">1662-453X</idno>
<imprint>
<date when="2012">2012</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass></textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">
<p>Retinal prostheses, which restore partial vision to patients blinded by outer retinal degeneration, are currently in clinical trial. The Argus II retinal prosthesis system was recently awarded CE approval for commercial use in Europe. While retinal prosthesis users have achieved remarkable visual improvement to the point of reading letters and short sentences, the reading process is still fairly cumbersome. This study investigates the possibility of using an epiretinal prosthesis to stimulate visual braille as a sensory substitution for reading written letters and words. The Argus II retinal prosthesis system, used in this study, includes a 10 × 6 electrode array implanted epiretinally, a tiny video camera mounted on a pair of glasses, and a wearable computer that processes the video and determines the stimulation current of each electrode in real time. In the braille reading system, individual letters are created by a subset of dots from a 3 by 2 array of six dots. For the visual braille experiment, a grid of six electrodes was chosen out of the 10 × 6 Argus II array. Groups of these electrodes were then directly stimulated (bypassing the camera) to create visual percepts of individual braille letters. Experiments were performed in a single subject. Single letters were stimulated in an alternative forced choice (AFC) paradigm, and short 2–4-letter words were stimulated (one letter at a time) in an open-choice reading paradigm. The subject correctly identified 89% of single letters, 80% of 2-letter, 60% of 3-letter, and 70% of 4-letter words. This work suggests that text can successfully be stimulated and read as visual braille in retinal prosthesis patients.</p>
</div>
</front>
<back>
<div1 type="bibliography">
<listBibl>
<biblStruct>
<analytic>
<author>
<name sortKey="Baayen, R H" uniqKey="Baayen R">R. H. Baayen</name>
</author>
<author>
<name sortKey="Piepenbrock, R" uniqKey="Piepenbrock R">R. Piepenbrock</name>
</author>
<author>
<name sortKey="Van Rijn, H" uniqKey="Van Rijn H">H. van Rijn</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Chen, X" uniqKey="Chen X">X. Chen</name>
</author>
<author>
<name sortKey="Yuille, A L" uniqKey="Yuille A">A. L. Yuille</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Da Cruz, L" uniqKey="Da Cruz L">L. da Cruz</name>
</author>
<author>
<name sortKey="Coley, B" uniqKey="Coley B">B. Coley</name>
</author>
<author>
<name sortKey="Christopher, P" uniqKey="Christopher P">P. Christopher</name>
</author>
<author>
<name sortKey="Merlini, F" uniqKey="Merlini F">F. Merlini</name>
</author>
<author>
<name sortKey="Wuyyuru, V" uniqKey="Wuyyuru V">V. Wuyyuru</name>
</author>
<author>
<name sortKey="Sahel, J" uniqKey="Sahel J">J. Sahel</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Dobelle, W H" uniqKey="Dobelle W">W. H. Dobelle</name>
</author>
<author>
<name sortKey="Mladejovski, M G" uniqKey="Mladejovski M">M. G. Mladejovski</name>
</author>
<author>
<name sortKey="Evans, J R" uniqKey="Evans J">J. R. Evans</name>
</author>
<author>
<name sortKey="Roberts, T S" uniqKey="Roberts T">T. S. Roberts</name>
</author>
<author>
<name sortKey="Girvin, J P" uniqKey="Girvin J">J. P. Girvin</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Humayun, M S" uniqKey="Humayun M">M. S. Humayun</name>
</author>
<author>
<name sortKey="Weiland, J D" uniqKey="Weiland J">J. D. Weiland</name>
</author>
<author>
<name sortKey="Fujii, G Y" uniqKey="Fujii G">G. Y. Fujii</name>
</author>
<author>
<name sortKey="Greenberg, R" uniqKey="Greenberg R">R. Greenberg</name>
</author>
<author>
<name sortKey="Williamson, R" uniqKey="Williamson R">R. Williamson</name>
</author>
<author>
<name sortKey="Little, J" uniqKey="Little J">J. Little</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lauritzen, T Z" uniqKey="Lauritzen T">T. Z. Lauritzen</name>
</author>
<author>
<name sortKey="Nanduri, D" uniqKey="Nanduri D">D. Nanduri</name>
</author>
<author>
<name sortKey="Weiland, J D" uniqKey="Weiland J">J. D. Weiland</name>
</author>
<author>
<name sortKey="Dorn, J" uniqKey="Dorn J">J. Dorn</name>
</author>
<author>
<name sortKey="Mcclure, K" uniqKey="Mcclure K">K. McClure</name>
</author>
<author>
<name sortKey="Greenberg, R" uniqKey="Greenberg R">R. Greenberg</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="New, B" uniqKey="New B">B. New</name>
</author>
<author>
<name sortKey="Pallier, C" uniqKey="Pallier C">C. Pallier</name>
</author>
<author>
<name sortKey="Brysbaert, M" uniqKey="Brysbaert M">M. Brysbaert</name>
</author>
<author>
<name sortKey="Ferrand, L" uniqKey="Ferrand L">L. Ferrand</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="New, B" uniqKey="New B">B. New</name>
</author>
<author>
<name sortKey="Pallier, C" uniqKey="Pallier C">C. Pallier</name>
</author>
<author>
<name sortKey="Ferrand, L" uniqKey="Ferrand L">L. Ferrand</name>
</author>
<author>
<name sortKey="Matos, R" uniqKey="Matos R">R. Matos</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Sahel, J A" uniqKey="Sahel J">J. A. Sahel</name>
</author>
<author>
<name sortKey="Da Cruz, L" uniqKey="Da Cruz L">L. da Cruz</name>
</author>
<author>
<name sortKey="Hafezi, F" uniqKey="Hafezi F">F. Hafezi</name>
</author>
<author>
<name sortKey="Stanga, P E" uniqKey="Stanga P">P. E. Stanga</name>
</author>
<author>
<name sortKey="Merlini, F" uniqKey="Merlini F">F. Merlini</name>
</author>
<author>
<name sortKey="Coley, B" uniqKey="Coley B">B. Coley</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Shen, H" uniqKey="Shen H">H. Shen</name>
</author>
<author>
<name sortKey="Coughlan, J" uniqKey="Coughlan J">J. Coughlan</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Wilke, R" uniqKey="Wilke R">R. Wilke</name>
</author>
<author>
<name sortKey="Gabel, V P" uniqKey="Gabel V">V.-P. Gabel</name>
</author>
<author>
<name sortKey="Sachs, H" uniqKey="Sachs H">H. Sachs</name>
</author>
<author>
<name sortKey="Schmidt, K U B" uniqKey="Schmidt K">K.-U. B. Schmidt</name>
</author>
<author>
<name sortKey="Gekeler, F" uniqKey="Gekeler F">F. Gekeler</name>
</author>
<author>
<name sortKey="Besch, D" uniqKey="Besch D">D. Besch</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Wilke, R G" uniqKey="Wilke R">R. G. Wilke</name>
</author>
<author>
<name sortKey="Geppmaier, U" uniqKey="Geppmaier U">U. Geppmaier</name>
</author>
<author>
<name sortKey="Stingle, K" uniqKey="Stingle K">K. Stingle</name>
</author>
<author>
<name sortKey="Zrenner, E" uniqKey="Zrenner E">E. Zrenner</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Zrenner, E" uniqKey="Zrenner E">E. Zrenner</name>
</author>
<author>
<name sortKey="Bartz Schmidt, K U" uniqKey="Bartz Schmidt K">K. U. Bartz-Schmidt</name>
</author>
<author>
<name sortKey="Benav, H" uniqKey="Benav H">H. Benav</name>
</author>
<author>
<name sortKey="Besch, D" uniqKey="Besch D">D. Besch</name>
</author>
<author>
<name sortKey="Bruckmann, A" uniqKey="Bruckmann A">A. Bruckmann</name>
</author>
<author>
<name sortKey="Gabel, V P" uniqKey="Gabel V">V.-P. Gabel</name>
</author>
</analytic>
</biblStruct>
</listBibl>
</div1>
</back>
</TEI>
<pmc article-type="research-article">
<pmc-dir>properties open_access</pmc-dir>
<front>
<journal-meta>
<journal-id journal-id-type="nlm-ta">Front Neurosci</journal-id>
<journal-id journal-id-type="publisher-id">Front. Neurosci.</journal-id>
<journal-title-group>
<journal-title>Frontiers in Neuroscience</journal-title>
</journal-title-group>
<issn pub-type="ppub">1662-4548</issn>
<issn pub-type="epub">1662-453X</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="pmid">23189036</article-id>
<article-id pub-id-type="pmc">3504310</article-id>
<article-id pub-id-type="doi">10.3389/fnins.2012.00168</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Neuroscience</subject>
<subj-group>
<subject>Original Research</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Reading Visual Braille with a Retinal Prosthesis</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name>
<surname>Lauritzen</surname>
<given-names>Thomas Z.</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
<xref ref-type="author-notes" rid="fn001">*</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Harris</surname>
<given-names>Jordan</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
<xref ref-type="aff" rid="aff2">
<sup>2</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Mohand-Said</surname>
<given-names>Saddek</given-names>
</name>
<xref ref-type="aff" rid="aff3">
<sup>3</sup>
</xref>
<xref ref-type="aff" rid="aff4">
<sup>4</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Sahel</surname>
<given-names>Jose A.</given-names>
</name>
<xref ref-type="aff" rid="aff3">
<sup>3</sup>
</xref>
<xref ref-type="aff" rid="aff4">
<sup>4</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Dorn</surname>
<given-names>Jessy D.</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>McClure</surname>
<given-names>Kelly</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Greenberg</surname>
<given-names>Robert J.</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
</contrib>
</contrib-group>
<aff id="aff1">
<sup>1</sup>
<institution>Second Sight Medical Products</institution>
<country>Sylmar, CA, USA</country>
</aff>
<aff id="aff2">
<sup>2</sup>
<institution>Brigham Young University – Idaho</institution>
<country>Rexburg, ID, USA</country>
</aff>
<aff id="aff3">
<sup>3</sup>
<institution>UMR-S 968, Institut de la Vision</institution>
<country>Paris, France</country>
</aff>
<aff id="aff4">
<sup>4</sup>
<institution>CIC INSERM DHOS 503, National Ophthalmology Hospital</institution>
<country>Paris, France</country>
</aff>
<author-notes>
<fn fn-type="edited-by">
<p>Edited by: John P. Donoghue, Brown University, USA</p>
</fn>
<fn fn-type="edited-by">
<p>Reviewed by: Silvestro Micera, Scuola Superiore Sant’Anna, Italy; Chet T. Moritz, University of Washington, USA</p>
</fn>
<corresp id="fn001">*Correspondence: Thomas Z. Lauritzen, Second Sight Medical Products, 12744 San Fernando Road, Building 3, Sylmar, CA 91342, USA. e-mail:
<email xlink:type="simple">tlauritzen@2-sight.com</email>
</corresp>
<fn fn-type="other" id="fn002">
<p>This article was submitted to Frontiers in Neuroprosthetics, a specialty of Frontiers in Neuroscience.</p>
</fn>
</author-notes>
<pub-date pub-type="epub">
<day>22</day>
<month>11</month>
<year>2012</year>
</pub-date>
<pub-date pub-type="collection">
<year>2012</year>
</pub-date>
<volume>6</volume>
<elocation-id>168</elocation-id>
<history>
<date date-type="received">
<day>07</day>
<month>7</month>
<year>2012</year>
</date>
<date date-type="accepted">
<day>01</day>
<month>11</month>
<year>2012</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright © 2012 Lauritzen, Harris, Mohand-Said, Sahel, Dorn, McClure and Greenberg.</copyright-statement>
<copyright-year>2012</copyright-year>
<license license-type="open-access" xlink:href="http://www.frontiersin.org/licenseagreement">
<license-p>This is an open-access article distributed under the terms of the
<uri xlink:type="simple" xlink:href="http://creativecommons.org/licenses/by/3.0/">Creative Commons Attribution License</uri>
, which permits use, distribution and reproduction in other forums, provided the original authors and source are credited and subject to any copyright notices concerning any third-party graphics etc.</license-p>
</license>
</permissions>
<abstract>
<p>Retinal prostheses, which restore partial vision to patients blinded by outer retinal degeneration, are currently in clinical trial. The Argus II retinal prosthesis system was recently awarded CE approval for commercial use in Europe. While retinal prosthesis users have achieved remarkable visual improvement to the point of reading letters and short sentences, the reading process is still fairly cumbersome. This study investigates the possibility of using an epiretinal prosthesis to stimulate visual braille as a sensory substitution for reading written letters and words. The Argus II retinal prosthesis system, used in this study, includes a 10 × 6 electrode array implanted epiretinally, a tiny video camera mounted on a pair of glasses, and a wearable computer that processes the video and determines the stimulation current of each electrode in real time. In the braille reading system, individual letters are created by a subset of dots from a 3 by 2 array of six dots. For the visual braille experiment, a grid of six electrodes was chosen out of the 10 × 6 Argus II array. Groups of these electrodes were then directly stimulated (bypassing the camera) to create visual percepts of individual braille letters. Experiments were performed in a single subject. Single letters were stimulated in an alternative forced choice (AFC) paradigm, and short 2–4-letter words were stimulated (one letter at a time) in an open-choice reading paradigm. The subject correctly identified 89% of single letters, 80% of 2-letter, 60% of 3-letter, and 70% of 4-letter words. This work suggests that text can successfully be stimulated and read as visual braille in retinal prosthesis patients.</p>
</abstract>
<kwd-group>
<kwd>retina</kwd>
<kwd>epiretinal prosthesis</kwd>
<kwd>sensory substitution</kwd>
<kwd>retinitis pigmentosa</kwd>
<kwd>blindness</kwd>
<kwd>perception</kwd>
<kwd>degeneration</kwd>
<kwd>sight restoration</kwd>
</kwd-group>
<counts>
<fig-count count="7"></fig-count>
<table-count count="1"></table-count>
<equation-count count="0"></equation-count>
<ref-count count="13"></ref-count>
<page-count count="7"></page-count>
<word-count count="4525"></word-count>
</counts>
</article-meta>
</front>
<body>
<sec>
<title>Introduction</title>
<p>Retinal prostheses restore partial vision to people blinded by outer retinal degenerative diseases such as Retinitis Pigmentosa (RP) or Macular Degeneration (Humayun et al.,
<xref ref-type="bibr" rid="B5">2003</xref>
). Recent results have demonstrated the ability of prosthesis users to read large letters and short words and sentences for some subjects (Sahel et al.,
<xref ref-type="bibr" rid="B9">2011</xref>
; Zrenner et al.,
<xref ref-type="bibr" rid="B13">2011</xref>
). But with the current spatial resolution of prosthetic vision, reading takes tens of seconds for single letters and minutes for short words, and requires letters to be ∼1–20 cm high at normal (∼30 cm) reading distance (da Cruz et al.,
<xref ref-type="bibr" rid="B3">2010</xref>
; Sahel et al.,
<xref ref-type="bibr" rid="B9">2011</xref>
; Zrenner et al.,
<xref ref-type="bibr" rid="B13">2011</xref>
). While these results are in themselves impressive, and the performance is expected to improve significantly with future prosthesis development, the practical application at the current level is limited. For example, signs one might read while walking around have letters a few centimeters in height, but are intended to be read from a distance of several meters, and it is not practical to spend minutes reading each sign one might encounter.</p>
<p>An alternative is to use the prosthesis to create percepts in the form of braille letters (to be read visually rather than tactually). For example, letter recognition software could identify text (e.g., from a sign), which could then be translated into braille and stimulated via the visual prosthesis. This study addresses the feasibility of reading visual braille with retinal prostheses. The specific device used in this study is the Second Sight Argus
<sup>®</sup>
II System (Second Sight Medical Products, Sylmar, CA, USA).</p>
<p>The Argus II System consists of a surgically implanted 60-channel stimulating microelectrode array, an inductive coil link used to transmit power and data to the internal portion of the implant, an external video processing unit (VPU), and a miniature camera mounted on a pair of glasses. The video camera captures a portion of the visual field and relays the information to the VPU. The VPU digitizes the signal in real time, applies a series of image processing filters, down-samples the image to a 6 by 10 pixelated grid, and creates a series of stimulus pulses customized to the individual user based on pixel gray-scale values. The Argus II System is commercially available in Europe (CE approval) and in clinical trial in the USA.</p>
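To make the processing chain concrete, the following is a minimal Python sketch of the idea described above: block-averaging a gray-scale camera frame down to the 6 by 10 electrode grid and mapping gray values onto per-electrode current amplitudes. The block-averaging filter and the linear gray-to-current mapping are illustrative assumptions, not the actual Argus II VPU algorithms or fitting procedure.

# Illustrative sketch only: the block-averaging filter and the linear
# gray-to-current map are assumptions, not the actual Argus II VPU algorithms.
import numpy as np

GRID_ROWS, GRID_COLS = 6, 10  # Argus II electrode grid

def downsample_to_grid(frame):
    """Block-average a gray-scale camera frame (2D array) to a 6 x 10 grid."""
    rows = np.array_split(np.arange(frame.shape[0]), GRID_ROWS)
    cols = np.array_split(np.arange(frame.shape[1]), GRID_COLS)
    return np.array([[frame[np.ix_(r, c)].mean() for c in cols] for r in rows])

def gray_to_current(grid, threshold_ua, max_ua):
    """Map gray values (0-255) linearly onto per-electrode current amplitudes
    between each electrode's perceptual threshold and its maximum level."""
    scale = np.clip(grid / 255.0, 0.0, 1.0)
    return threshold_ua + scale * (max_ua - threshold_ua)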
<p>Here we present results showing that an Argus II subject can read visually stimulated braille. Performance is 89% correct for individual letters at 500 ms presentation, and 60–80% correct for short words, proving the feasibility of reading via visual braille.</p>
</sec>
<sec sec-type="materials|methods">
<title>Materials and Methods</title>
<sec>
<title>Subject selection</title>
<p>Second Sight has 30 subjects enrolled in a clinical study,
<uri xlink:type="simple" xlink:href="http://clinicaltrials.gov">http://clinicaltrials.gov</uri>
(NCT00407602). The subjects are blinded by the degenerative retinal disease Retinitis Pigmentosa (RP). RP causes the photoreceptor cells in the retina to die. Subjects are implanted with the Argus II retinal prosthesis system, which stimulates the surviving cells in the retina. Subjects have been implanted for between 2 and 4.5 years. All subjects enrolled in the study have no cognitive impairments or learning ability deficiencies. A single subject was selected for this feasibility study based on three criteria: the ability to read (tactile) braille, spatial resolution high enough to isolate responses from six individual electrodes arranged in a 3 by 2 pattern, and availability for testing. The subject is an experienced braille reader. The experiments were carried out from September 2011 to March 2012, were approved by the Institutional Review Board at the location of the experiments (Centre Hospitalier National d’Ophtalmologie des Quinze-Vingts, Paris, France), and were conducted under the principles of the Declaration of Helsinki.</p>
</sec>
<sec>
<title>Description of device</title>
<p>The Argus II System consists of an implantable device surgically implanted on and in the eye, and an external unit worn by the user. The external unit consists of a small camera and transmitter mounted on a pair of sunglasses and a VPU and battery that can be worn on a belt or shoulder strap (Figure
<xref ref-type="fig" rid="F1">1</xref>
A). The implanted portion (Figure
<xref ref-type="fig" rid="F1">1</xref>
B) consists of a receiving and transmitting coil and a hermetically sealed electronics case, fixed to the sclera outside of the eye, and an electrode array (a 6 by 10 array of 60 electrodes, 200 μm in diameter, with 525 μm center-to-center spacing between nearest neighbors along the cardinal axes) that is secured to the surface of the retina (epiretinally) inside the eye by a retinal tack. The electrode array is connected to the electronics by a metalized polymer cable that penetrates the sclera in the pars plana. The camera captures video and sends the information to the processor, which converts the image to electronic signals that are then sent to the transmitter on the glasses. The implanted receiver wirelessly receives these data and sends the signal to the electrode array via a small bus, where electric stimulation pulses are emitted. The controlled electrical stimulation of the retina induces cellular responses in retinal ganglion cells that travel through the optic nerve to the visual cortex, resulting in visual percepts.</p>
<fig id="F1" position="float">
<label>Figure 1</label>
<caption>
<p>
<bold>Overview of Argus II system</bold>
.
<bold>(A)</bold>
External portion consisting of a miniature camera mounted on a pair of sunglasses, a Video Processing Unit (VPU), and a transmitter coil.
<bold>(B)</bold>
The internal portion, consisting of a receiver coil connected with a bus to a 60 electrode epiretinal array.</p>
</caption>
<graphic xlink:href="fnins-06-00168-g001"></graphic>
</fig>
</sec>
<sec>
<title>Selection of basis for stimuli</title>
<p>In this experiment, the Argus II System was used in “direct stimulation mode.” The camera was bypassed and individual electrodes were stimulated, controlled by a computer. Therefore, no visual reading software was used in these experiments.</p>
<p>The basis for the braille alphabet is a 3 by 2 array of dots, and each letter has a specific configuration (Figure
<xref ref-type="fig" rid="F2">2</xref>
A). For braille stimulation, sets of six electrodes were picked that spanned a 3 by 2 array. All six electrodes were stimulated at the same time with 20 Hz trains of 500 ms of 1 ms cathodic-anodic square pulses, i.e., 10 pulses. The current amplitude of pulses was set individually for each of the six electrodes to be 2.5–3 times the threshold for detection of a single electrode. A set of six electrodes resulting in a perceived stimulus of 3 by 2 dots was selected based on feedback from the subject (Figure
<xref ref-type="fig" rid="F2">2</xref>
B).</p>
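As an illustration of this direct-stimulation setup, the sketch below (Python) encodes a few braille letters as sets of active dots and derives the pulse parameters given above (20 Hz for 500 ms, i.e., 10 pulses, at 2.5–3 times each electrode's threshold). The standard braille dot numbering and all electrode labels except F5 (identified later in the paper as the lower left dot) are illustrative assumptions.

# Sketch under stated assumptions; dot numbering follows standard braille cells
# (1-3 down the left column, 4-6 down the right). Only electrode F5 (lower left
# dot) is taken from the paper; the other electrode labels are hypothetical.
BRAILLE_DOTS = {   # partial letter-to-dot mapping, shown for a few letters
    "a": {1}, "b": {1, 2}, "c": {1, 4}, "d": {1, 4, 5}, "e": {1, 5},
    "f": {1, 2, 4}, "k": {1, 3}, "l": {1, 2, 3},
}
DOT_TO_ELECTRODE = {1: "D5", 2: "E5", 3: "F5", 4: "D6", 5: "E6", 6: "F6"}

def letter_stimulus(letter, thresholds_ua, rate_hz=20, duration_s=0.5, factor=2.75):
    """List (electrode, amplitude_uA, n_pulses) for each active dot of a letter.
    20 Hz for 500 ms gives 10 biphasic 1 ms pulses; amplitude is set to
    'factor' times the electrode's single-electrode detection threshold."""
    n_pulses = int(rate_hz * duration_s)
    return [(DOT_TO_ELECTRODE[d],
             factor * thresholds_ua[DOT_TO_ELECTRODE[d]],
             n_pulses)
            for d in sorted(BRAILLE_DOTS[letter])]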
<fig id="F2" position="float">
<label>Figure 2</label>
<caption>
<p>
<bold>Stimulating braille</bold>
.
<bold>(A)</bold>
The braille alphabet.
<bold>(B)</bold>
Six electrodes forming the basis of the braille stimulation used in the experiment.</p>
</caption>
<graphic xlink:href="fnins-06-00168-g002"></graphic>
</fig>
</sec>
<sec>
<title>Visual braille stimulation</title>
<p>The experimental paradigm was inspired by the character recognition experiments of the Argus II subjects (da Cruz et al.,
<xref ref-type="bibr" rid="B3">2010</xref>
; Sahel et al.,
<xref ref-type="bibr" rid="B9">2011</xref>
). For the single letter recognition experiments, the 26 letters of the alphabet were split into three sets of 8 or 9 letters: set 1 (f, g, h, l, o, p, r, v), set 2 (a, c, d, i, k, m, s, w, y), and set 3 (b, e, j, n, q, t, u, x, z). The subject was aware of which letters were contained in the current set. The letters for each set were picked randomly, with the one rule that letters whose dots share the same geometric structure and differ only in spacing would not be placed in the same set. Four such pairs exist: b-k, f-m, g-x, and h-u. For example, b and k are both made up of two dots in a vertical line with just a difference in spacing (see Figure
<xref ref-type="fig" rid="F2">2</xref>
A). With this rule, it would be possible to split the alphabet into first, middle, and last thirds. But set 1 was picked as a pilot set of letters, avoiding the simplest letters made up of only one or two dots, and was kept in the main experiment. Sets 2 and 3 were subsequently constructed from the remaining letters. The letters were then stimulated in random order with five repeats of each letter in an 8- or 9-alternative forced choice (AFC) paradigm. After each visual braille letter stimulation, the subject identified which letter was perceived, and the response was recorded by the experimenter. During the experiment, the subject could request that the letter set be repeated (i.e., he could be reminded of which letters were possible within the set). No other information was given, to avoid biasing answers. A letter was presented as a 500 ms pulse train at 20 Hz with the subset of the six basis electrodes forming a given letter being active. To ensure performance was not dependent on a narrow parameter range, the experiments were repeated with 40 and 60 Hz stimulation.</p>
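A minimal sketch of the randomized presentation schedule described above (each letter of a set repeated five times, in shuffled order); the function and parameter names are ours, not the study's software.

import random

LETTER_SETS = {              # letter sets as listed above
    1: list("fghloprv"),     # 8 letters
    2: list("acdikmswy"),    # 9 letters
    3: list("bejnqtuxz"),    # 9 letters
}

def trial_order(set_id, repeats=5, seed=None):
    """Shuffle five repeats of every letter in the chosen set into one run."""
    rng = random.Random(seed)
    trials = LETTER_SETS[set_id] * repeats
    rng.shuffle(trials)
    return trials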
<p>The subject was a native French speaker. To test the subject’s ability to read words in visual braille, the 10 most common 2-, 3-, and 4-letter words in French (Table
<xref ref-type="table" rid="T1">1</xref>
) were picked based on usage frequency
<xref ref-type="fn" rid="fn1">
<sup>1</sup>
</xref>
(New et al.,
<xref ref-type="bibr" rid="B8">2001</xref>
,
<xref ref-type="bibr" rid="B7">2004</xref>
). Each word was presented with 500 ms per letter and a 1000 ms break between letters. Considerations on the timing between letters are discussed in
<italic>Discussion</italic>
. The subject was informed that short words would be presented, but was not aware of which words were contained in the set. The order of the words was random and each word was stimulated once. The subject was allowed to request a single repetition of a word, but a guess would be considered a final answer. Responses were recorded by the experimenter.</p>
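A sketch of the word-presentation timing (500 ms of stimulation per letter, 1000 ms between letters); stimulate_letter is a placeholder for the direct-stimulation call, not an actual device API.

import time

def present_word(word, stimulate_letter, on_s=0.5, gap_s=1.0):
    """Present a word one letter at a time: 500 ms stimulation per letter,
    with a 1000 ms pause between letters (no pause after the last letter)."""
    for i, letter in enumerate(word):
        stimulate_letter(letter)   # placeholder for driving the electrode subset
        time.sleep(on_s)
        if i < len(word) - 1:
            time.sleep(gap_s)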
<table-wrap id="T1" position="float">
<label>Table 1</label>
<caption>
<p>
<bold>List of words (in French)</bold>
.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="left" rowspan="1" colspan="1">2-Letter</th>
<th align="left" rowspan="1" colspan="1">3-Letter</th>
<th align="left" rowspan="1" colspan="1">4-Letter</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" rowspan="1" colspan="1">de</td>
<td align="left" rowspan="1" colspan="1">les</td>
<td align="left" rowspan="1" colspan="1">dans</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">la</td>
<td align="left" rowspan="1" colspan="1">des</td>
<td align="left" rowspan="1" colspan="1">pour</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">et</td>
<td align="left" rowspan="1" colspan="1">que</td>
<td align="left" rowspan="1" colspan="1">elle</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">le</td>
<td align="left" rowspan="1" colspan="1">une</td>
<td align="left" rowspan="1" colspan="1">plus</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">il</td>
<td align="left" rowspan="1" colspan="1">est</td>
<td align="left" rowspan="1" colspan="1">mais</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">un</td>
<td align="left" rowspan="1" colspan="1">qui</td>
<td align="left" rowspan="1" colspan="1">nous</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">en</td>
<td align="left" rowspan="1" colspan="1">pas</td>
<td align="left" rowspan="1" colspan="1">avec</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">du</td>
<td align="left" rowspan="1" colspan="1">par</td>
<td align="left" rowspan="1" colspan="1">tout</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">je</td>
<td align="left" rowspan="1" colspan="1">sur</td>
<td align="left" rowspan="1" colspan="1">vous</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">ne</td>
<td align="left" rowspan="1" colspan="1">son</td>
<td align="left" rowspan="1" colspan="1">bien</td>
</tr>
</tbody>
</table>
</table-wrap>
</sec>
<sec>
<title>Analysis</title>
<p>Answers were summed, and the significance of the proportion of correct answers was determined based on binomial distributions (correct/wrong) and chance levels of 1/8 or 1/9, depending on the letter set.</p>
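The paper does not spell out the exact test; assuming a one-sided binomial test against the chance level, the computation would look like the sketch below (Python with SciPy).

from scipy.stats import binom

def p_above_chance(n_correct, n_trials, chance):
    """One-sided p-value: probability of >= n_correct successes when guessing."""
    return binom.sf(n_correct - 1, n_trials, chance)

# Example: 40 of 45 trials correct in a 9-AFC set (chance = 1/9)
# p_above_chance(40, 45, 1/9) is far below 0.001.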
<p>Error analysis was performed by comparing the braille pattern of the letter guessed by the subject to the pattern of the correct letter. The degree of error was determined by assigning one point for each dot that was stimulated but not perceived, each dot that was perceived but not stimulated (false positive), or each dot that was perceived in a wrong place, and summing the points. This resulted in 0 degrees of error denoting a correct identification and a maximum possible error of 6 degrees.</p>
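A sketch of this scoring, assuming that one missed dot paired with one extra dot counts as a single "wrong location" point (the paper does not state how the three error types are disambiguated):

def error_degree(stimulated, perceived):
    """Score the difference between two dot sets: one point per missed dot,
    per extra (false positive) dot, or per dot perceived in a wrong location
    (a missed dot paired with an extra dot, under our assumption)."""
    missed = stimulated - perceived
    extra = perceived - stimulated
    relocated = min(len(missed), len(extra))
    return relocated + (len(missed) - relocated) + (len(extra) - relocated)

# error_degree({1, 2}, {1, 2}) == 0     # correct identification
# error_degree({1, 2}, {1, 2, 4}) == 1  # one extra perceived dot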
</sec>
</sec>
<sec>
<title>Results</title>
<p>A subject, blinded by RP and implanted with the Argus
<sup>®</sup>
II retinal prosthesis system, was presented visual braille via six electrodes arranged in a 3 by 2 pattern to span the braille alphabet. The subject had no cognitive or learning ability impairments, and was an experienced (tactile) braille reader.</p>
<sec>
<title>Single letter recognition</title>
<p>Single letters were stimulated in sets of 8 or 9 letters in an AFC paradigm with five repetitions of each letter. Single letters were presented for 500 ms. Letter recognition was high for all presented letters. The detection rate at 20 Hz stimulation for the three letter sets ranged between 75 and 98% with a mean of 89% correct, and all were highly significantly above chance level (
<italic>p</italic>
 < 0.001; Figure
<xref ref-type="fig" rid="F3">3</xref>
). Stimulation at 40 and 60 Hz yielded 85 and 77% mean correct, both significantly above chance recognition (
<italic>p</italic>
 < 0.001) and not significantly different from the recognition rate at 20 Hz stimulation (data not shown).</p>
<fig id="F3" position="float">
<label>Figure 3</label>
<caption>
<p>
<bold>Identification of single letters</bold>
. Proportion correct of identification of single letters in set 1 (8 AFC), set 2 (9 AFC), and set 3 (9 AFC) and the summary percent correct. Each letter was presented five times in random order within its set. The black horizontal lines denote chance level for the respective set. *
<italic>p</italic>
 < 0.001 (binomial probability distribution).</p>
</caption>
<graphic xlink:href="fnins-06-00168-g003"></graphic>
</fig>
<p>While the complexity of letters varies, there is no indication that performance depended on the complexity of letters, measured as the number of dots in a letter (Figure
<xref ref-type="fig" rid="F4">4</xref>
).</p>
<fig id="F4" position="float">
<label>Figure 4</label>
<caption>
<p>
<bold>Identification of single letters as a function of letter complexity, measured as the number of dots forming a letter</bold>
. All letter complexities have a high identification rate, and there is no systematic change in identification with complexity.</p>
</caption>
<graphic xlink:href="fnins-06-00168-g004"></graphic>
</fig>
<p>Error matrices show the perceived letter as a function of the displayed letter (Figure
<xref ref-type="fig" rid="F5">5</xref>
). There is no systematic error in the misperceived letters. To determine a degree of error, the perception errors were scored by adding a point for each extra perceived dot, missed dot, or dot perceived in a wrong location. Zero degrees of error is a correct perception (89%), and the maximum possible error with a 6-dot basis is 6 degrees. Nine percent of the perceptions had 1 degree of error (82% of all errors), 2% had 2 degrees of error (18% of all errors), and there were no higher errors (Figure
<xref ref-type="fig" rid="F6">6</xref>
A). Splitting the errors up in extra perceived, missed, or dot in wrong location, we see that by far the most errors (64%) are caused by one or two extra perceived dots, while 21 and 14% respectively are caused by a missed dot or a dot perceived in the wrong location (Figure
<xref ref-type="fig" rid="F6">6</xref>
B). Further, the error matrices (Figure
<xref ref-type="fig" rid="F5">5</xref>
) show that electrode F5, representing the lower left dot, is involved in 9 of the 14 total errors.</p>
<fig id="F5" position="float">
<label>Figure 5</label>
<caption>
<p>
<bold>Error matrices</bold>
. Matrix plots of letter perceived (
<italic>y</italic>
-axis) as a function of the letter displayed (
<italic>x</italic>
-axis). The ideal case would be a diagonal matrix.</p>
</caption>
<graphic xlink:href="fnins-06-00168-g005"></graphic>
</fig>
<fig id="F6" position="float">
<label>Figure 6</label>
<caption>
<p>
<bold>Degree of error in recognizing single letters</bold>
.
<bold>(A)</bold>
The distance in error between the stimulated and perceived letter. Each degree is a perceived dot added, missed, or perceived in a wrong location. A zero degree difference is a correct identification. Most errors are a single degree. Theoretically, the maximum error is 6 degrees.
<bold>(B)</bold>
The type of error, an extra perceived dot, a misplaced dot, or a missed dot, as a function of the distance in error.</p>
</caption>
<graphic xlink:href="fnins-06-00168-g006"></graphic>
</fig>
</sec>
<sec>
<title>Word recognition</title>
<p>The subject was presented 10 2-, 3-, and 4-letter words (Table
<xref ref-type="table" rid="T1">1</xref>
) and correctly identified eight, six, and seven words respectively (Figure
<xref ref-type="fig" rid="F7">7</xref>
). The proportion of words recognized was highly significantly above chance under random letter presentation (for example, since the whole alphabet was available, the chance probability of guessing a 2-letter word is 1/26
<sup>2</sup>
 = 0.0015). The proportion of word recognition is not significantly different from what would be predicted by the single letter recognition proportion [0.89
<sup>(word length)</sup>
; Figure
<xref ref-type="fig" rid="F7">7</xref>
]. Eighty-nine percent is the average proportion correct from the 8- and 9-AFC experiments; it is reasonable to expect a similar number in a 26-AFC task (ignoring the use-frequency of individual letters in regular text). Comparing the presented and guessed words, the nine word errors contained a total of 15 single letter errors. Of these, eight were single dot errors, five were a missed letter, and one involved flipping the order of two letters (which counts as two single letter errors). Only one letter error contained multiple (three) dot errors.</p>
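The numbers behind this comparison, as a short worked example:

# Chance of guessing a word if letters were random, and the word-level
# proportions predicted from the 89% single letter identification rate.
chance_2_letter = (1 / 26) ** 2              # ~0.0015
predicted = {n: 0.89 ** n for n in (2, 3, 4)}
# -> approximately {2: 0.79, 3: 0.70, 4: 0.63},
#    versus the observed 0.8, 0.6, and 0.7 for 2-, 3-, and 4-letter words.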
<fig id="F7" position="float">
<label>Figure 7</label>
<caption>
<p>
<bold>Recognition of braille words</bold>
. Proportion correct identification of 2-, 3-, and 4-letter words. Black line represents the expected proportion correct given a proportion of single letter identification rate of 89%.</p>
</caption>
<graphic xlink:href="fnins-06-00168-g007"></graphic>
</fig>
</sec>
</sec>
<sec sec-type="discussion">
<title>Discussion</title>
<p>This work shows that an Argus II user can read both single letters and short words in visually stimulated braille. The subject recognized 89% of presented letters. Eighty-two percent of the errors were due to a single dot misperception, and there is no indication that the complexity of the letter played a role in perception. Sixty-four percent of the errors were caused by the perception of an extra dot. Similarly, the electrode representing the lower left dot (electrode F5) was involved in 64% of all the errors, including 6 of the total of 11 extra perceived dots. This indicates that improving the performance of that electrode would significantly improve the results. The subject also identified eight of the 2-letter, six of the 3-letter, and seven of the 4-letter words, out of 10 presented words of each length. It is reasonable to expect that performance will improve with training. The subject is an experienced braille reader; while we did not test it specifically in this study, it is safe to assume a 100% identification rate for tactile braille. Thus the discrepancy is due to visual stimulus comprehension and not braille comprehension. This opens the possibility for Argus II users to read text through a sensory substitution to visual braille.</p>
<sec>
<title>Comparison to other visual prosthetic stimulation</title>
<p>Dobelle et al. (
<xref ref-type="bibr" rid="B4">1976</xref>
) stimulated visual braille with a visual cortex prosthesis. Presenting randomized single letters for 500 ms to a subject, they reported 73–85% correct responses, depending on the exact experimental paradigm. These results are similar to the results presented here. Dobelle et al. (
<xref ref-type="bibr" rid="B4">1976</xref>
) picked six electrodes spanning a 3 by 2 array of perceived phosphenes. Interestingly, the perceived locations of the six electrodes were “scrambled” compared to their array locations. We expect these non-linear discrepancies from the retinotopic map are due to their large electrode array (several cm) covering several sulci and gyri. The phosphene locations of the electrodes in the Argus II subject were more linearly arranged one-to-one, as expected when stimulating the retinotopic space of the retina.</p>
<p>Other retinal prostheses have the ability to function in a “direct stimulation” mode (Wilke et al.,
<xref ref-type="bibr" rid="B11">2011a</xref>
; Zrenner et al.,
<xref ref-type="bibr" rid="B13">2011</xref>
). To the best of our knowledge, these groups have not experimented with visual braille in direct stimulation. Real-world use of visual braille for reading requires visual processing filters, such as character recognition software, to translate text into braille. Thus, the Argus II system is currently the only available system able to apply such filters for stimulation in real-world use.</p>
</sec>
<sec>
<title>Considerations on braille reading speed</title>
<p>The stimulation time used in these experiments (500 ms per letter and 1000 ms between letters) is significantly faster than the current reading speed reported with retinal prostheses (tens of seconds per letter; da Cruz et al.,
<xref ref-type="bibr" rid="B3">2010</xref>
; Sahel et al.,
<xref ref-type="bibr" rid="B9">2011</xref>
; Zrenner et al.,
<xref ref-type="bibr" rid="B13">2011</xref>
). The current study did not explore details on how stimulation time affects perception. In a short pilot experiment, we did set the stimulation time to 250 ms in a run of letter set 1, and found that the subject perceived 77.5% of the letters correctly. This is not significantly different from the 75% correct at 500 ms (Figure
<xref ref-type="fig" rid="F3">3</xref>
). This indicates that it is possible to perceive visual braille at very short presentation times, down to at least 250 ms.</p>
<p>While shortening the presentation time of individual letters may increase word reading speed, we expect a limiting factor is the timing between letters and words. Recent experiments with direct stimulation in retinal prostheses indicate that the persistence of a phosphene is 150–200 ms (Lauritzen et al.,
<xref ref-type="bibr" rid="B6">2011</xref>
; Wilke et al.,
<xref ref-type="bibr" rid="B12">2011b</xref>
). Similarly, Dobelle et al. (
<xref ref-type="bibr" rid="B4">1976</xref>
) reported that “at frames faster than 4s
<sup>−1</sup>
, presentations tend to blur”, indicating that phosphenes generated by direct cortical stimulation have a similar persistence. These findings indicate that a theoretical lower limit for the inter-letter interval in visual braille reading is slightly higher than 150–200 ms, say ∼250 ms. If letter (and word-space) presentations are also ∼250 ms, i.e., ∼500 ms per letter plus space, a realistic goal for reading speed is ∼120 letters per minute. This is an adequate speed for reading signs and shorter messages.</p>
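The arithmetic behind the ∼120 letters per minute estimate, under the stated assumption of ∼250 ms presentation plus a ∼250 ms inter-letter interval:

letter_period_s = 0.250 + 0.250              # presentation + inter-letter interval
letters_per_minute = 60 / letter_period_s    # -> 120.0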
</sec>
<sec>
<title>Considerations on braille reading performance</title>
<p>In this experiment, single letter performance was 89% correct, and performance in reading short words aligned well with the expectation based on single letter performance (Figure
<xref ref-type="fig" rid="F7">7</xref>
). While the single letter performance is high, and we expect it to improve with training, a simple multiplication of probabilities would result in a larger number of errors for even slightly longer words. But this is alleviated by the increased structure of longer words and the context of sentences (e.g., Baayen et al.,
<xref ref-type="bibr" rid="B1">1995</xref>
; New et al.,
<xref ref-type="bibr" rid="B7">2004</xref>
). For example, missing a letter in the word “restaurant” does not alter it to something unrecognizable.</p>
</sec>
<sec>
<title>Considerations for prosthetic applications</title>
<p>Implementing a visual braille function in prosthetic vision requires optical character recognition software for reading text in the VPU. Such software is in common use (e.g., Google Goggles)
<xref ref-type="fn" rid="fn2">
<sup>2</sup>
</xref>
and open-source implementations are available
<xref ref-type="fn" rid="fn3">
<sup>3</sup>
</xref>
. Reading identified text is only part of the problem. Identifying text in the environment is the other. Different groups have published algorithms for detecting and reading text in natural scenes (Chen and Yuille,
<xref ref-type="bibr" rid="B2">2004</xref>
; Shen and Coughlan,
<xref ref-type="bibr" rid="B10">2012</xref>
). In particular, Chen and Yuille (
<xref ref-type="bibr" rid="B2">2004</xref>
) report a success rate of detecting and reading text of more than 90% (detecting 97.2% of all text in natural images, and reading 93% of the detected text). Algorithms like this are only expected to improve in the future.</p>
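As an illustrative sketch of this text-to-braille pathway (not part of the Argus II software), the open-source Tesseract engine cited in the footnotes can be driven from Python through the pytesseract wrapper; the mapping table and function below are hypothetical.

from PIL import Image
import pytesseract  # wrapper around the Tesseract OCR engine (see footnote 3)

def text_to_braille_dots(image_path, braille_dots):
    """OCR an image (e.g., a photographed sign) and map each recognized letter
    to the braille dot pattern to be stimulated.
    braille_dots: dict mapping a lowercase letter to its set of dot indices."""
    text = pytesseract.image_to_string(Image.open(image_path)).lower()
    return [braille_dots[ch] for ch in text if ch in braille_dots]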
<p>Further, the user would need to be able to read visual braille. The subject in this study reads braille, but only about 10% of blind people read braille
<xref ref-type="fn" rid="fn4">
<sup>4</sup>
</xref>
. Interestingly, the subject in the Dobelle et al. (
<xref ref-type="bibr" rid="B4">1976</xref>
) study did not know (tactile) braille at the onset of the study. During the study, they tested both tactile and visual braille, and the subject averaged only 28% correct letter identification using tactile braille, as opposed to 73–85% letter identification using visual braille. This validates the notion that visual braille is a different modality from tactile braille. While knowledge of tactile braille is useful, it is likely not a necessity for success in reading visual braille.</p>
</sec>
</sec>
<sec>
<title>Conclusion</title>
<p>In summary, stimulation of visual braille is feasible for conveying text to visual prosthesis users, and the technology needed can readily be implemented. It is a requirement that the user is able to read braille, but this can be learned with limited effort if the user does not already have this ability.</p>
</sec>
<sec>
<title>Conflict of Interest Statement</title>
<p>Thomas Z. Lauritzen, Jessy D. Dorn, Kelly McClure, and Robert J. Greenberg are employees of and have financial interests in Second Sight Medical Products. Jordan Harris has no conflicts of interest.</p>
</sec>
</body>
<back>
<ack>
<p>This work was supported by Second Sight Medical Products and NIH-NEI (EY020778).</p>
</ack>
<fn-group>
<fn id="fn1">
<p>
<sup>1</sup>
<uri xlink:type="simple" xlink:href="http://www.lexique.org">www.lexique.org</uri>
</p>
</fn>
<fn id="fn2">
<p>
<sup>2</sup>
<uri xlink:type="simple" xlink:href="http://www.google.com/mobile/goggles">http://www.google.com/mobile/goggles</uri>
</p>
</fn>
<fn id="fn3">
<p>
<sup>3</sup>
<uri xlink:type="simple" xlink:href="http://code.google.com/p/tesseract-ocr/">http://code.google.com/p/tesseract-ocr/</uri>
</p>
</fn>
<fn id="fn4">
<p>
<sup>4</sup>
<uri xlink:type="simple" xlink:href="http://nfb.org/braille-campaign">http://nfb.org/braille-campaign</uri>
</p>
</fn>
</fn-group>
<ref-list>
<title>References</title>
<ref id="B1">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Baayen</surname>
<given-names>R. H.</given-names>
</name>
<name>
<surname>Piepenbrock</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>van Rijn</surname>
<given-names>H.</given-names>
</name>
</person-group>
(
<year>1995</year>
).
<source>The CELEX Lexical database</source>
.
<publisher-loc>Philadelphia</publisher-loc>
:
<publisher-name>Linguistic Data Consortium, University of Pennsylvania</publisher-name>
</mixed-citation>
</ref>
<ref id="B2">
<mixed-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Chen</surname>
<given-names>X.</given-names>
</name>
<name>
<surname>Yuille</surname>
<given-names>A. L.</given-names>
</name>
</person-group>
(
<year>2004</year>
).
<article-title>“Detecting and reading text in natural scenes,”</article-title>
in
<conf-name>Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition</conf-name>
(
<italic>CVPR</italic>
)
<volume>2</volume>
,
<fpage>366</fpage>
<lpage>373</lpage>
</mixed-citation>
</ref>
<ref id="B3">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>da Cruz</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Coley</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Christopher</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Merlini</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Wuyyuru</surname>
<given-names>V.</given-names>
</name>
<name>
<surname>Sahel</surname>
<given-names>J.</given-names>
</name>
<etal></etal>
</person-group>
(
<year>2010</year>
).
<article-title>Patients blinded by outer retinal dystrophies are able to identify letters using the Argus II retinal prosthesis system</article-title>
.
<source>Invest. Ophthalmol. Vis. Sci.</source>
<volume>51</volume>
, ARVO E-Abstract 2023.</mixed-citation>
</ref>
<ref id="B4">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Dobelle</surname>
<given-names>W. H.</given-names>
</name>
<name>
<surname>Mladejovski</surname>
<given-names>M. G.</given-names>
</name>
<name>
<surname>Evans</surname>
<given-names>J. R.</given-names>
</name>
<name>
<surname>Roberts</surname>
<given-names>T. S.</given-names>
</name>
<name>
<surname>Girvin</surname>
<given-names>J. P.</given-names>
</name>
</person-group>
(
<year>1976</year>
).
<article-title>“Braille” reading by a blind volunteer by visual cortex stimulation</article-title>
.
<source>Nature</source>
<volume>259</volume>
,
<fpage>111</fpage>
<lpage>112</lpage>
<pub-id pub-id-type="doi">10.1038/259111a0</pub-id>
<pub-id pub-id-type="pmid">1246346</pub-id>
</mixed-citation>
</ref>
<ref id="B5">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Humayun</surname>
<given-names>M. S.</given-names>
</name>
<name>
<surname>Weiland</surname>
<given-names>J. D.</given-names>
</name>
<name>
<surname>Fujii</surname>
<given-names>G. Y.</given-names>
</name>
<name>
<surname>Greenberg</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Williamson</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Little</surname>
<given-names>J.</given-names>
</name>
<etal></etal>
</person-group>
(
<year>2003</year>
).
<article-title>Visual perception in a blind subject with a chronic microelectronic retinal prosthesis</article-title>
.
<source>Vision Res.</source>
<volume>43</volume>
,
<fpage>2573</fpage>
<lpage>2581</lpage>
<pub-id pub-id-type="doi">10.1016/S0042-6989(03)00457-7</pub-id>
<pub-id pub-id-type="pmid">13129543</pub-id>
</mixed-citation>
</ref>
<ref id="B6">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Lauritzen</surname>
<given-names>T. Z.</given-names>
</name>
<name>
<surname>Nanduri</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Weiland</surname>
<given-names>J. D.</given-names>
</name>
<name>
<surname>Dorn</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>McClure</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Greenberg</surname>
<given-names>R.</given-names>
</name>
<etal></etal>
</person-group>
(
<year>2011</year>
).
<article-title>Inter-electrode discriminability correlates with spatial visual performance in Argus II subjects</article-title>
.
<source>Invest. Ophthalmol. Vis. Sci.</source>
<volume>52</volume>
, ARVO E-Abstract 4927.</mixed-citation>
</ref>
<ref id="B7">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>New</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Pallier</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Brysbaert</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Ferrand</surname>
<given-names>L.</given-names>
</name>
</person-group>
(
<year>2004</year>
).
<article-title>Lexique 2: a new French lexical database</article-title>
.
<source>Behav. Res. Methods</source>
<volume>36</volume>
,
<fpage>516</fpage>
<lpage>524</lpage>
<pub-id pub-id-type="doi">10.3758/BF03195598</pub-id>
</mixed-citation>
</ref>
<ref id="B8">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>New</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Pallier</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Ferrand</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Matos</surname>
<given-names>R.</given-names>
</name>
</person-group>
(
<year>2001</year>
).
<article-title>Une base de données lexicales du français contemporain sur internet: LEXIQUE</article-title>
.
<source>L'Année Psychologique</source>
<volume>101</volume>
,
<fpage>447</fpage>
<lpage>462</lpage>
<pub-id pub-id-type="doi">10.3406/psy.2001.1341</pub-id>
</mixed-citation>
</ref>
<ref id="B9">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Sahel</surname>
<given-names>J. A.</given-names>
</name>
<name>
<surname>da Cruz</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Hafezi</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Stanga</surname>
<given-names>P. E.</given-names>
</name>
<name>
<surname>Merlini</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Coley</surname>
<given-names>B.</given-names>
</name>
<etal></etal>
</person-group>
(
<year>2011</year>
).
<article-title>Subjects blind from outer retinal dystrophies are able to consistently read short sentences using the Argus II retinal prosthesis system</article-title>
.
<source>Invest. Ophthalmol. Vis. Sci.</source>
<volume>52</volume>
, ARVO E-Abstract 3420.</mixed-citation>
</ref>
<ref id="B10">
<mixed-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Shen</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Coughlan</surname>
<given-names>J.</given-names>
</name>
</person-group>
(
<year>2012</year>
).
<article-title>Towards a real-time system for finding and reading signs for visually impaired users</article-title>
.
<conf-name>Proceedings of the 13th International Conference on Computers Helping People with Special Needs (ICCHP ’12)</conf-name>
<volume>2</volume>
,
<fpage>41</fpage>
<lpage>47</lpage>
</mixed-citation>
</ref>
<ref id="B11">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wilke</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Gabel</surname>
<given-names>V.-P.</given-names>
</name>
<name>
<surname>Sachs</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Schmidt</surname>
<given-names>K.-U. B.</given-names>
</name>
<name>
<surname>Gekeler</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Besch</surname>
<given-names>D.</given-names>
</name>
<etal></etal>
</person-group>
(
<year>2011a</year>
).
<article-title>Spatial resolution and perception of patterns mediated by a subretinal 16-electrode array in patients blinded by hereditary retinal dystrophies</article-title>
.
<source>Invest. Ophthalmol. Vis. Sci.</source>
<volume>52</volume>
,
<fpage>5995</fpage>
<lpage>6003</lpage>
<pub-id pub-id-type="doi">10.1167/iovs.10-6946</pub-id>
</mixed-citation>
</ref>
<ref id="B12">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wilke</surname>
<given-names>R. G.</given-names>
</name>
<name>
<surname>Greppmaier</surname>
<given-names>U.</given-names>
</name>
<name>
<surname>Stingl</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Zrenner</surname>
<given-names>E.</given-names>
</name>
</person-group>
(
<year>2011b</year>
).
<article-title>Fading of perception in retinal implants is a function of time and space between sites of stimulation</article-title>
.
<source>Invest. Ophthalmol. Vis. Sci.</source>
<volume>52</volume>
, ARVO E-Abstract 458.
<pub-id pub-id-type="doi">10.1167/iovs.10-6946</pub-id>
</mixed-citation>
</ref>
<ref id="B13">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Zrenner</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Bartz-Schmidt</surname>
<given-names>K. U.</given-names>
</name>
<name>
<surname>Benav</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Besch</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Bruckmann</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Gabel</surname>
<given-names>V.-P.</given-names>
</name>
<etal></etal>
</person-group>
(
<year>2011</year>
).
<article-title>Subretinal electronic chips allow blind patients to read letters and combine them to words</article-title>
.
<source>Proc. Biol. Sci.</source>
<volume>278</volume>
,
<fpage>1489</fpage>
<lpage>1497</lpage>
<pub-id pub-id-type="doi">10.1098/rspb.2010.1747</pub-id>
</mixed-citation>
</ref>
</ref-list>
</back>
</pmc>
<affiliations>
<list>
<country>
<li>France</li>
<li>États-Unis</li>
</country>
</list>
<tree>
<country name="États-Unis">
<noRegion>
<name sortKey="Lauritzen, Thomas Z" sort="Lauritzen, Thomas Z" uniqKey="Lauritzen T" first="Thomas Z." last="Lauritzen">Thomas Z. Lauritzen</name>
</noRegion>
<name sortKey="Dorn, Jessy D" sort="Dorn, Jessy D" uniqKey="Dorn J" first="Jessy D." last="Dorn">Jessy D. Dorn</name>
<name sortKey="Greenberg, Robert J" sort="Greenberg, Robert J" uniqKey="Greenberg R" first="Robert J." last="Greenberg">Robert J. Greenberg</name>
<name sortKey="Harris, Jordan" sort="Harris, Jordan" uniqKey="Harris J" first="Jordan" last="Harris">Jordan Harris</name>
<name sortKey="Harris, Jordan" sort="Harris, Jordan" uniqKey="Harris J" first="Jordan" last="Harris">Jordan Harris</name>
<name sortKey="Mcclure, Kelly" sort="Mcclure, Kelly" uniqKey="Mcclure K" first="Kelly" last="Mcclure">Kelly Mcclure</name>
</country>
<country name="France">
<noRegion>
<name sortKey="Mohand Said, Saddek" sort="Mohand Said, Saddek" uniqKey="Mohand Said S" first="Saddek" last="Mohand-Said">Saddek Mohand-Said</name>
</noRegion>
<name sortKey="Mohand Said, Saddek" sort="Mohand Said, Saddek" uniqKey="Mohand Said S" first="Saddek" last="Mohand-Said">Saddek Mohand-Said</name>
<name sortKey="Sahel, Jose A" sort="Sahel, Jose A" uniqKey="Sahel J" first="Jose A." last="Sahel">Jose A. Sahel</name>
<name sortKey="Sahel, Jose A" sort="Sahel, Jose A" uniqKey="Sahel J" first="Jose A." last="Sahel">Jose A. Sahel</name>
</country>
</tree>
</affiliations>
</record>

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/OcrV1/Data/Pmc/Checkpoint
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 000091 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/Pmc/Checkpoint/biblio.hfd -nk 000091 | SxmlIndent | more
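For example, to save this record and list the DOIs it contains (a sketch reusing only the Dilib invocations shown above together with standard Unix tools; the output file name is arbitrary):

EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/OcrV1/Data/Pmc/Checkpoint
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 000091 | SxmlIndent > record_000091.xml
grep -o '10\.[0-9]*/[^<"]*' record_000091.xml | sort -u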

To add a link to this page in the Wicri network

{{Explor lien
   |wiki=    Ticri/CIDE
   |area=    OcrV1
   |flux=    Pmc
   |étape=   Checkpoint
   |type=    RBID
   |clé=     PMC:3504310
   |texte=   Reading Visual Braille with a Retinal Prosthesis
}}

To generate wiki pages

HfdIndexSelect -h $EXPLOR_AREA/Data/Pmc/Checkpoint/RBID.i   -Sk "pubmed:23189036" \
       | HfdSelect -Kh $EXPLOR_AREA/Data/Pmc/Checkpoint/biblio.hfd   \
       | NlmPubMed2Wicri -a OcrV1 
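Read left to right, this pipeline presumably selects the entry whose RBID index key matches the given PubMed identifier, retrieves the corresponding record from biblio.hfd, and passes it to NlmPubMed2Wicri to render wiki pages for the OcrV1 area (this reading is inferred from the tool and option names; the Dilib documentation gives the exact semantics).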

Wicri

This area was generated with Dilib version V0.6.32.
Data generation: Sat Nov 11 16:53:45 2017. Site generation: Mon Mar 11 23:15:16 2024