Exploration Server for Haptic Devices

Warning: this site is under development!
Warning: this site was generated automatically from raw corpora.
The information it contains has therefore not been validated.

‘Visual’ parsing can be taught quickly without visual experience during critical periods

Internal identifier: 000638 (Pmc/Curation); previous: 000637; next: 000639


Authors: Lior Reich [Israel]; Amir Amedi [Israel]

Source:

RBID: PMC:4611203

Abstract

Cases of invasive sight restoration in congenitally blind adults have demonstrated that acquiring visual abilities is extremely challenging, presumably because visual experience during critical periods is crucial for learning uniquely visual concepts (e.g. size constancy). Visual rehabilitation can also be achieved using sensory substitution devices (SSDs), which convey visual information non-invasively through sounds. We tested whether one critical concept – visual parsing, which is highly impaired in sight-restored patients – can be learned using an SSD. To this end, congenitally blind adults participated in a unique, relatively short (~70 hours) SSD-‘vision’ training program. Following this, participants successfully parsed 2D and 3D visual objects. Control individuals naïve to SSDs demonstrated that while some aspects of parsing with an SSD are intuitive, the blind participants’ success could not be attributed to auditory processing alone. Furthermore, we had a unique opportunity to compare the SSD-users’ abilities to those reported for sight-restored patients who performed similar tasks visually after months of eyesight. Intriguingly, the SSD-users outperformed the patients on most criteria tested. These findings suggest that with adequate training and technologies, key high-order visual features can be acquired quickly in adulthood, and that a lack of visual experience during critical periods can be somewhat compensated for. Practically, they highlight the potential of SSDs as standalone aids or combined with invasive restoration approaches.


URL:
DOI: 10.1038/srep15359
PubMed: 26482105
PubMed Central: 4611203

Links toward previous steps (curation, corpus...)


Links to Exploration step

PMC:4611203

The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">‘Visual’ parsing can be taught quickly without visual experience during critical periods</title>
<author>
<name sortKey="Reich, Lior" sort="Reich, Lior" uniqKey="Reich L" first="Lior" last="Reich">Lior Reich</name>
<affiliation wicri:level="1">
<nlm:aff id="a1">
<institution>Department of Medical Neurobiology, The Institute for Medical Research Israel-Canada, Faculty of Medicine, The Hebrew University of Jerusalem</institution>
, Jerusalem 91220,
<country>Israel</country>
</nlm:aff>
<country xml:lang="fr">Israël</country>
<wicri:regionArea># see nlm:aff country strict</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Amedi, Amir" sort="Amedi, Amir" uniqKey="Amedi A" first="Amir" last="Amedi">Amir Amedi</name>
<affiliation wicri:level="1">
<nlm:aff id="a1">
<institution>Department of Medical Neurobiology, The Institute for Medical Research Israel-Canada, Faculty of Medicine, The Hebrew University of Jerusalem</institution>
, Jerusalem 91220,
<country>Israel</country>
</nlm:aff>
<country xml:lang="fr">Israël</country>
<wicri:regionArea># see nlm:aff country strict</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="a2">
<institution>The Edmond and Lily Safra Center for Brain Sciences (ELSC), The Hebrew University of Jerusalem</institution>
, Jerusalem 91220,
<country>Israel</country>
</nlm:aff>
<country xml:lang="fr">Israël</country>
<wicri:regionArea># see nlm:aff country strict</wicri:regionArea>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PMC</idno>
<idno type="pmid">26482105</idno>
<idno type="pmc">4611203</idno>
<idno type="url">http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4611203</idno>
<idno type="RBID">PMC:4611203</idno>
<idno type="doi">10.1038/srep15359</idno>
<date when="2015">2015</date>
<idno type="wicri:Area/Pmc/Corpus">000638</idno>
<idno type="wicri:Area/Pmc/Curation">000638</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a" type="main">‘Visual’ parsing can be taught quickly without visual experience during critical periods</title>
<author>
<name sortKey="Reich, Lior" sort="Reich, Lior" uniqKey="Reich L" first="Lior" last="Reich">Lior Reich</name>
<affiliation wicri:level="1">
<nlm:aff id="a1">
<institution>Department of Medical Neurobiology, The Institute for Medical Research Israel-Canada, Faculty of Medicine, The Hebrew University of Jerusalem</institution>
, Jerusalem 91220,
<country>Israel</country>
</nlm:aff>
<country xml:lang="fr">Israël</country>
<wicri:regionArea># see nlm:aff country strict</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Amedi, Amir" sort="Amedi, Amir" uniqKey="Amedi A" first="Amir" last="Amedi">Amir Amedi</name>
<affiliation wicri:level="1">
<nlm:aff id="a1">
<institution>Department of Medical Neurobiology, The Institute for Medical Research Israel-Canada, Faculty of Medicine, The Hebrew University of Jerusalem</institution>
, Jerusalem 91220,
<country>Israel</country>
</nlm:aff>
<country xml:lang="fr">Israël</country>
<wicri:regionArea># see nlm:aff country strict</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="a2">
<institution>The Edmond and Lily Safra Center for Brain Sciences (ELSC), The Hebrew University of Jerusalem</institution>
, Jerusalem 91220,
<country>Israel</country>
</nlm:aff>
<country xml:lang="fr">Israël</country>
<wicri:regionArea># see nlm:aff country strict</wicri:regionArea>
</affiliation>
</author>
</analytic>
<series>
<title level="j">Scientific Reports</title>
<idno type="eISSN">2045-2322</idno>
<imprint>
<date when="2015">2015</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass></textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">
<p>Cases of invasive sight restoration in congenitally blind adults have demonstrated that acquiring visual abilities is extremely challenging, presumably because visual experience during critical periods is crucial for learning uniquely visual concepts (e.g. size constancy). Visual rehabilitation can also be achieved using sensory substitution devices (SSDs), which convey visual information non-invasively through sounds. We tested whether one critical concept – visual parsing, which is highly impaired in sight-restored patients – can be learned using an SSD. To this end, congenitally blind adults participated in a unique, relatively short (~70 hours) SSD-‘vision’ training program. Following this, participants successfully parsed 2D and 3D visual objects. Control individuals naïve to SSDs demonstrated that while some aspects of parsing with an SSD are intuitive, the blind participants’ success could not be attributed to auditory processing alone. Furthermore, we had a unique opportunity to compare the SSD-users’ abilities to those reported for sight-restored patients who performed similar tasks visually after months of eyesight. Intriguingly, the SSD-users outperformed the patients on most criteria tested. These findings suggest that with adequate training and technologies, key high-order visual features can be acquired quickly in adulthood, and that a lack of visual experience during critical periods can be somewhat compensated for. Practically, they highlight the potential of SSDs as standalone aids or combined with invasive restoration approaches.</p>
</div>
</front>
<back>
<div1 type="bibliography">
<listBibl>
<biblStruct>
<analytic>
<author>
<name sortKey="Zrenner, E" uniqKey="Zrenner E">E. Zrenner</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ahuja, A K" uniqKey="Ahuja A">A. K. Ahuja</name>
</author>
<author>
<name sortKey="Behrend, M R" uniqKey="Behrend M">M. R. Behrend</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Weiland, J D" uniqKey="Weiland J">J. D. Weiland</name>
</author>
<author>
<name sortKey="Cho, A K" uniqKey="Cho A">A. K. Cho</name>
</author>
<author>
<name sortKey="Humayun, M S" uniqKey="Humayun M">M. S. Humayun</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Luo, Y H L" uniqKey="Luo Y">Y. H.-L. Luo</name>
</author>
<author>
<name sortKey="Da Cruz, L" uniqKey="Da Cruz L">L. da Cruz</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Da Cruz, L" uniqKey="Da Cruz L">L. da Cruz</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Humayun, M S" uniqKey="Humayun M">M. S. Humayun</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lauritzen, T Z" uniqKey="Lauritzen T">T. Z. Lauritzen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Dorn, J D" uniqKey="Dorn J">J. D. Dorn</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gregory, R L" uniqKey="Gregory R">R. L. Gregory</name>
</author>
<author>
<name sortKey="Wallace, J G" uniqKey="Wallace J">J. G. Wallace</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ackroyd, C" uniqKey="Ackroyd C">C. Ackroyd</name>
</author>
<author>
<name sortKey="Humphrey, N K" uniqKey="Humphrey N">N. K. Humphrey</name>
</author>
<author>
<name sortKey="Warrington, E K" uniqKey="Warrington E">E. K. Warrington</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Carlson, S" uniqKey="Carlson S">S. Carlson</name>
</author>
<author>
<name sortKey="Hyvarinen, L" uniqKey="Hyvarinen L">L. Hyvarinen</name>
</author>
<author>
<name sortKey="Raninen, A" uniqKey="Raninen A">A. Raninen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Fine, I" uniqKey="Fine I">I. Fine</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ostrovsky, Y" uniqKey="Ostrovsky Y">Y. Ostrovsky</name>
</author>
<author>
<name sortKey="Meyers, E" uniqKey="Meyers E">E. Meyers</name>
</author>
<author>
<name sortKey="Ganesh, S" uniqKey="Ganesh S">S. Ganesh</name>
</author>
<author>
<name sortKey="Mathur, U" uniqKey="Mathur U">U. Mathur</name>
</author>
<author>
<name sortKey="Sinha, P" uniqKey="Sinha P">P. Sinha</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Levin, N" uniqKey="Levin N">N. Levin</name>
</author>
<author>
<name sortKey="Dumoulin, S O" uniqKey="Dumoulin S">S. O. Dumoulin</name>
</author>
<author>
<name sortKey="Winawer, J" uniqKey="Winawer J">J. Winawer</name>
</author>
<author>
<name sortKey="Dougherty, R F" uniqKey="Dougherty R">R. F. Dougherty</name>
</author>
<author>
<name sortKey="Wandell, B A" uniqKey="Wandell B">B. A. Wandell</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Sinha, P" uniqKey="Sinha P">P. Sinha</name>
</author>
<author>
<name sortKey="Held, R" uniqKey="Held R">R. Held</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Wiesel, T N" uniqKey="Wiesel T">T. N. Wiesel</name>
</author>
<author>
<name sortKey="Hubel, D H" uniqKey="Hubel D">D. H. Hubel</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Wiesel, T N" uniqKey="Wiesel T">T. N. Wiesel</name>
</author>
<author>
<name sortKey="Hubel, D H" uniqKey="Hubel D">D. H. Hubel</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Dormal, G" uniqKey="Dormal G">G. Dormal</name>
</author>
<author>
<name sortKey="Lepore, F" uniqKey="Lepore F">F. Lepore</name>
</author>
<author>
<name sortKey="Collignon, O" uniqKey="Collignon O">O. Collignon</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Putzar, L" uniqKey="Putzar L">L. Putzar</name>
</author>
<author>
<name sortKey="Hotting, K" uniqKey="Hotting K">K. Hötting</name>
</author>
<author>
<name sortKey="Rosler, F" uniqKey="Rosler F">F. Rösler</name>
</author>
<author>
<name sortKey="Roder, B" uniqKey="Roder B">B. Röder</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Maurer, D" uniqKey="Maurer D">D. Maurer</name>
</author>
<author>
<name sortKey="Mondloch, C J" uniqKey="Mondloch C">C. J. Mondloch</name>
</author>
<author>
<name sortKey="Lewis, T L" uniqKey="Lewis T">T. L. Lewis</name>
</author>
<author>
<name sortKey="Von Hofsten, C" uniqKey="Von Hofsten C">C. von Hofsten</name>
</author>
<author>
<name sortKey="Rosander, K" uniqKey="Rosander K">K. Rosander</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ellemberg, D" uniqKey="Ellemberg D">D. Ellemberg</name>
</author>
<author>
<name sortKey="Lewis, T L" uniqKey="Lewis T">T. L. Lewis</name>
</author>
<author>
<name sortKey="Maurer, D" uniqKey="Maurer D">D. Maurer</name>
</author>
<author>
<name sortKey="Brar, S" uniqKey="Brar S">S. Brar</name>
</author>
<author>
<name sortKey="Brent, H P" uniqKey="Brent H">H. P. Brent</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Chang, W C" uniqKey="Chang W">W. C. Chang</name>
</author>
<author>
<name sortKey="Bin, I" uniqKey="Bin I">I. Bin</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Borenstein, E" uniqKey="Borenstein E">E. Borenstein</name>
</author>
<author>
<name sortKey="Ullman, S" uniqKey="Ullman S">S. Ullman</name>
</author>
<author>
<name sortKey="Heyden, A" uniqKey="Heyden A">A. Heyden</name>
</author>
<author>
<name sortKey="Sparr, G" uniqKey="Sparr G">G. Sparr</name>
</author>
<author>
<name sortKey="Nielsen, M" uniqKey="Nielsen M">M. Nielsen</name>
</author>
<author>
<name sortKey="Johansen, P" uniqKey="Johansen P">P. Johansen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Dura Bernal, S" uniqKey="Dura Bernal S">S. Dura-Bernal</name>
</author>
<author>
<name sortKey="Wennekers, T" uniqKey="Wennekers T">T. Wennekers</name>
</author>
<author>
<name sortKey="Denham, S L" uniqKey="Denham S">S. L. Denham</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ptito, M" uniqKey="Ptito M">M. Ptito</name>
</author>
<author>
<name sortKey="Matteau, I" uniqKey="Matteau I">I. Matteau</name>
</author>
<author>
<name sortKey="Gjedde, A" uniqKey="Gjedde A">A. Gjedde</name>
</author>
<author>
<name sortKey="Kupers, R" uniqKey="Kupers R">R. Kupers</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kupers, R" uniqKey="Kupers R">R. Kupers</name>
</author>
<author>
<name sortKey="Chebat, D R" uniqKey="Chebat D">D. R. Chebat</name>
</author>
<author>
<name sortKey="Madsen, K H" uniqKey="Madsen K">K. H. Madsen</name>
</author>
<author>
<name sortKey="Paulson, O B" uniqKey="Paulson O">O. B. Paulson</name>
</author>
<author>
<name sortKey="Ptito, M" uniqKey="Ptito M">M. Ptito</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Renier, L" uniqKey="Renier L">L. Renier</name>
</author>
<author>
<name sortKey="De Volder, A G" uniqKey="De Volder A">A. G. De Volder</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Chebat, D R" uniqKey="Chebat D">D. R. Chebat</name>
</author>
<author>
<name sortKey="Schneider, F C" uniqKey="Schneider F">F. C. Schneider</name>
</author>
<author>
<name sortKey="Kupers, R" uniqKey="Kupers R">R. Kupers</name>
</author>
<author>
<name sortKey="Ptito, M" uniqKey="Ptito M">M. Ptito</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Striem Amit, E" uniqKey="Striem Amit E">E. Striem-Amit</name>
</author>
<author>
<name sortKey="Cohen, L" uniqKey="Cohen L">L. Cohen</name>
</author>
<author>
<name sortKey="Dehaene, S" uniqKey="Dehaene S">S. Dehaene</name>
</author>
<author>
<name sortKey="Amedi, A" uniqKey="Amedi A">A. Amedi</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Abboud, S" uniqKey="Abboud S">S. Abboud</name>
</author>
<author>
<name sortKey="Hanassy, S" uniqKey="Hanassy S">S. Hanassy</name>
</author>
<author>
<name sortKey="Levy Tzedek, S" uniqKey="Levy Tzedek S">S. Levy-Tzedek</name>
</author>
<author>
<name sortKey="Maidenbaum, S" uniqKey="Maidenbaum S">S. Maidenbaum</name>
</author>
<author>
<name sortKey="Amedi, A" uniqKey="Amedi A">A. Amedi</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Meijer, P B" uniqKey="Meijer P">P. B. Meijer</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Maurer, D" uniqKey="Maurer D">D. Maurer</name>
</author>
<author>
<name sortKey="Lewis, T L" uniqKey="Lewis T">T. L. Lewis</name>
</author>
<author>
<name sortKey="Mondloch, C J" uniqKey="Mondloch C">C. J. Mondloch</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lewis, T L" uniqKey="Lewis T">T. L. Lewis</name>
</author>
<author>
<name sortKey="Maurer, D" uniqKey="Maurer D">D. Maurer</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Marr, D" uniqKey="Marr D">D. Marr</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Elli, G V" uniqKey="Elli G">G. V. Elli</name>
</author>
<author>
<name sortKey="Benetti, S" uniqKey="Benetti S">S. Benetti</name>
</author>
<author>
<name sortKey="Collignon, O" uniqKey="Collignon O">O. Collignon</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Deroy, O" uniqKey="Deroy O">O. Deroy</name>
</author>
<author>
<name sortKey="Auvray, M" uniqKey="Auvray M">M. Auvray</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ward, J" uniqKey="Ward J">J. Ward</name>
</author>
<author>
<name sortKey="Meijer, P" uniqKey="Meijer P">P. Meijer</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ostrovsky, Y" uniqKey="Ostrovsky Y">Y. Ostrovsky</name>
</author>
<author>
<name sortKey="Andalman, A" uniqKey="Andalman A">A. Andalman</name>
</author>
<author>
<name sortKey="Sinha, P" uniqKey="Sinha P">P. Sinha</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Maidenbaum, S" uniqKey="Maidenbaum S">S. Maidenbaum</name>
</author>
<author>
<name sortKey="Abboud, S" uniqKey="Abboud S">S. Abboud</name>
</author>
<author>
<name sortKey="Amedi, A" uniqKey="Amedi A">A. Amedi</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Matteau, I" uniqKey="Matteau I">I. Matteau</name>
</author>
<author>
<name sortKey="Kupers, R" uniqKey="Kupers R">R. Kupers</name>
</author>
<author>
<name sortKey="Ricciardi, E" uniqKey="Ricciardi E">E. Ricciardi</name>
</author>
<author>
<name sortKey="Pietrini, P" uniqKey="Pietrini P">P. Pietrini</name>
</author>
<author>
<name sortKey="Ptito, M" uniqKey="Ptito M">M. Ptito</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Amedi, A" uniqKey="Amedi A">A. Amedi</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kim, J K" uniqKey="Kim J">J. K. Kim</name>
</author>
<author>
<name sortKey="Zatorre, R J" uniqKey="Zatorre R">R. J. Zatorre</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ptito, M" uniqKey="Ptito M">M. Ptito</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Striem Amit, E" uniqKey="Striem Amit E">E. Striem-Amit</name>
</author>
<author>
<name sortKey="Dakwar, O" uniqKey="Dakwar O">O. Dakwar</name>
</author>
<author>
<name sortKey="Reich, L" uniqKey="Reich L">L. Reich</name>
</author>
<author>
<name sortKey="Amedi, A" uniqKey="Amedi A">A. Amedi</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Reich, L" uniqKey="Reich L">L. Reich</name>
</author>
<author>
<name sortKey="Maidenbaum, S" uniqKey="Maidenbaum S">S. Maidenbaum</name>
</author>
<author>
<name sortKey="Amedi, A" uniqKey="Amedi A">A. Amedi</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ricciardi, E" uniqKey="Ricciardi E">E. Ricciardi</name>
</author>
<author>
<name sortKey="Bonino, D" uniqKey="Bonino D">D. Bonino</name>
</author>
<author>
<name sortKey="Pellegrini, S" uniqKey="Pellegrini S">S. Pellegrini</name>
</author>
<author>
<name sortKey="Pietrini, P" uniqKey="Pietrini P">P. Pietrini</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Merabet, L B" uniqKey="Merabet L">L. B. Merabet</name>
</author>
<author>
<name sortKey="Pascual Leone, A" uniqKey="Pascual Leone A">A. Pascual-Leone</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Striem Amit, E" uniqKey="Striem Amit E">E. Striem-Amit</name>
</author>
<author>
<name sortKey="Guendelman, M" uniqKey="Guendelman M">M. Guendelman</name>
</author>
<author>
<name sortKey="Amedi, A" uniqKey="Amedi A">A. Amedi</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Maidenbaum, S" uniqKey="Maidenbaum S">S. Maidenbaum</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kupers, R" uniqKey="Kupers R">R. Kupers</name>
</author>
<author>
<name sortKey="Ptito, M" uniqKey="Ptito M">M. Ptito</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ptito, M" uniqKey="Ptito M">M. Ptito</name>
</author>
<author>
<name sortKey="Moesgaard, S M" uniqKey="Moesgaard S">S. M. Moesgaard</name>
</author>
<author>
<name sortKey="Gjedde, A" uniqKey="Gjedde A">A. Gjedde</name>
</author>
<author>
<name sortKey="Kupers, R" uniqKey="Kupers R">R. Kupers</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Maurer, D" uniqKey="Maurer D">D. Maurer</name>
</author>
<author>
<name sortKey="Hensch, T K" uniqKey="Hensch T">T. K. Hensch</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Levy Tzedek, S" uniqKey="Levy Tzedek S">S. Levy-Tzedek</name>
</author>
</analytic>
</biblStruct>
</listBibl>
</div1>
</back>
</TEI>
<pmc article-type="research-article">
<pmc-dir>properties open_access</pmc-dir>
<front>
<journal-meta>
<journal-id journal-id-type="nlm-ta">Sci Rep</journal-id>
<journal-id journal-id-type="iso-abbrev">Sci Rep</journal-id>
<journal-title-group>
<journal-title>Scientific Reports</journal-title>
</journal-title-group>
<issn pub-type="epub">2045-2322</issn>
<publisher>
<publisher-name>Nature Publishing Group</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="pmid">26482105</article-id>
<article-id pub-id-type="pmc">4611203</article-id>
<article-id pub-id-type="pii">srep15359</article-id>
<article-id pub-id-type="doi">10.1038/srep15359</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Article</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>‘Visual’ parsing can be taught quickly without visual experience during critical periods</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name>
<surname>Reich</surname>
<given-names>Lior</given-names>
</name>
<xref ref-type="aff" rid="a1">1</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Amedi</surname>
<given-names>Amir</given-names>
</name>
<xref ref-type="corresp" rid="c1">a</xref>
<xref ref-type="aff" rid="a1">1</xref>
<xref ref-type="aff" rid="a2">2</xref>
</contrib>
<aff id="a1">
<label>1</label>
<institution>Department of Medical Neurobiology, The Institute for Medical Research Israel-Canada, Faculty of Medicine, The Hebrew University of Jerusalem</institution>
, Jerusalem 91220,
<country>Israel</country>
</aff>
<aff id="a2">
<label>2</label>
<institution>The Edmond and Lily Safra Center for Brain Sciences (ELSC), The Hebrew University of Jerusalem</institution>
, Jerusalem 91220,
<country>Israel</country>
</aff>
</contrib-group>
<author-notes>
<corresp id="c1">
<label>a</label>
<email>amir.amedi@ekmd.huji.ac.il</email>
</corresp>
</author-notes>
<pub-date pub-type="epub">
<day>20</day>
<month>10</month>
<year>2015</year>
</pub-date>
<pub-date pub-type="collection">
<year>2015</year>
</pub-date>
<volume>5</volume>
<elocation-id>15359</elocation-id>
<history>
<date date-type="received">
<day>28</day>
<month>05</month>
<year>2015</year>
</date>
<date date-type="accepted">
<day>15</day>
<month>09</month>
<year>2015</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright © 2015, Macmillan Publishers Limited</copyright-statement>
<copyright-year>2015</copyright-year>
<copyright-holder>Macmillan Publishers Limited</copyright-holder>
<license license-type="open-access" xlink:href="http://creativecommons.org/licenses/by/4.0/">
<pmc-comment>author-paid</pmc-comment>
<license-p>This work is licensed under a Creative Commons Attribution 4.0 International License. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in the credit line; if the material is not included under the Creative Commons license, users will need to obtain permission from the license holder to reproduce the material. To view a copy of this license, visit
<ext-link ext-link-type="uri" xlink:href="http://creativecommons.org/licenses/by/4.0/">http://creativecommons.org/licenses/by/4.0/</ext-link>
</license-p>
</license>
</permissions>
<abstract>
<p>Cases of invasive sight restoration in congenitally blind adults have demonstrated that acquiring visual abilities is extremely challenging, presumably because visual experience during critical periods is crucial for learning uniquely visual concepts (e.g. size constancy). Visual rehabilitation can also be achieved using sensory substitution devices (SSDs), which convey visual information non-invasively through sounds. We tested whether one critical concept – visual parsing, which is highly impaired in sight-restored patients – can be learned using an SSD. To this end, congenitally blind adults participated in a unique, relatively short (~70 hours) SSD-‘vision’ training program. Following this, participants successfully parsed 2D and 3D visual objects. Control individuals naïve to SSDs demonstrated that while some aspects of parsing with an SSD are intuitive, the blind participants’ success could not be attributed to auditory processing alone. Furthermore, we had a unique opportunity to compare the SSD-users’ abilities to those reported for sight-restored patients who performed similar tasks visually after months of eyesight. Intriguingly, the SSD-users outperformed the patients on most criteria tested. These findings suggest that with adequate training and technologies, key high-order visual features can be acquired quickly in adulthood, and that a lack of visual experience during critical periods can be somewhat compensated for. Practically, they highlight the potential of SSDs as standalone aids or combined with invasive restoration approaches.</p>
</abstract>
</article-meta>
</front>
<body>
<p>39,000,000 people worldwide are blind, constituting a major clinical challenge to develop effective visual rehabilitation techniques. The most straightforward clinical approach is to surgically correct the function of the eyes’ non-neural components (e.g. by removing cataracts, which are the major cause of blindness in developing countries due to low treatment accessibility, or by corneal transplantation). Such treatments result in nearly full resolution of visual input. However they are only applicable to specific causes and stages of vision loss. To treat other blindness etiologies which damage the retina, visual prostheses
<xref ref-type="bibr" rid="b1">1</xref>
<xref ref-type="bibr" rid="b2">2</xref>
<xref ref-type="bibr" rid="b3">3</xref>
<xref ref-type="bibr" rid="b4">4</xref>
are being developed (for current visual performance using prostheses see
<xref ref-type="bibr" rid="b1">1</xref>
<xref ref-type="bibr" rid="b2">2</xref>
<xref ref-type="bibr" rid="b5">5</xref>
<xref ref-type="bibr" rid="b6">6</xref>
<xref ref-type="bibr" rid="b7">7</xref>
<xref ref-type="bibr" rid="b8">8</xref>
). This promising field is growing extremely fast and involves massive research, engineering and economic efforts.</p>
<p>However, even if full resolution of visual input is delivered to the brain (as in cataract removal; but this is far from being the case with current prostheses, which provide very low-resolution information), the acquisition of higher visual function in adulthood is still very challenging, even after weeks, months or years of rich post-surgery visual experience. Thus, reports on individuals
<xref ref-type="bibr" rid="b9">9</xref>
<xref ref-type="bibr" rid="b10">10</xref>
<xref ref-type="bibr" rid="b11">11</xref>
<xref ref-type="bibr" rid="b12">12</xref>
<xref ref-type="bibr" rid="b13">13</xref>
<xref ref-type="bibr" rid="b14">14</xref>
<xref ref-type="bibr" rid="b15">15</xref>
who had limited or no visual experience during development and medically regained fairly complete visual input in adulthood have found profound deficits in various visual skills. While some functions (e.g. motion detection, basic form recognition) recovered relatively fast, many others (e.g. 3D perception, object and face recognition, interpretation of transparency and perspective cues) were massively impaired and recovered slowly (if at all). It seems as if the regained visual input had been provided to a brain wholly unpracticed at analyzing and interpreting it, and the visual experience acquired at this stage may have come too late or offered too little. This is commonly hypothesized to result from the absence of natural visual information during critical (or sensitive) periods, a notion first introduced by Hubel and Wiesel
<xref ref-type="bibr" rid="b16">16</xref>
<xref ref-type="bibr" rid="b17">17</xref>
who showed in animal models that even short periods of visual deprivation during early developmental stages may irreversibly damage visual perception at older ages. Notably, even a short period of congenital blindness in humans, although treated in early childhood, can lead to some persistent (though much less dramatic) functional deficits
<xref ref-type="bibr" rid="b18">18</xref>
<xref ref-type="bibr" rid="b19">19</xref>
<xref ref-type="bibr" rid="b20">20</xref>
<xref ref-type="bibr" rid="b21">21</xref>
.</p>
<p>One highly important task consistently reported to be impaired following sight restoration in adulthood
<xref ref-type="bibr" rid="b13">13</xref>
is visual parsing, i.e., the ability to segregate a visual scene into distinct, complete objects. Consider, for instance, a typical office desk with a computer screen, a keyboard and some stationery on it. When looking at the scene, you do not perceive a messy collection of areas of different hues, luminance levels, textures and contours, but rather see separate, meaningful objects. While this parsing task seems trivial to the normally-developed sighted, it is very complex and demanding, sometimes almost impossible, for a person with limited visual experience, as it requires interpreting the visual input based on previous knowledge and on visual concepts that have no intuitive parallel in other sensory modalities (e.g. shadow, transparency)
<xref ref-type="bibr" rid="b22">22</xref>
. It is worth noting that visual parsing is extremely difficult even for most computer-vision algorithms, as they are based on basic image-driven features such as continuity of grey-level and bounding contours
<xref ref-type="bibr" rid="b23">23</xref>
and lack higher-order feedback input, which has an important role in object perception
<xref ref-type="bibr" rid="b24">24</xref>
.</p>
<p>An elegant study by Ostrovsky and colleagues
<xref ref-type="bibr" rid="b13">13</xref>
showed that individuals who had their sight restored medically (by cataract removal or refractive correction
<xref ref-type="bibr" rid="b13">13</xref>
) performed very poorly when parsing images far simpler than the scene described above: when attempting to parse an image, they made judgments based only on color, closed loops, luminance levels and motion cues, applied no higher-order visual interpretation, and thus over-fragmented the image. For instance, they misinterpreted a 3D cube as three different patches in different grayscale levels.</p>
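<p>To make this over-fragmentation concrete, the sketch below (ours, not from the study) implements the kind of purely image-driven parsing described above: counting connected patches of uniform grey-level. Applied to a toy "cube" whose three visible faces are rendered at three luminance levels, it reports three objects where a normally-sighted observer reports one. The array values and the use of scipy.ndimage.label are illustrative assumptions.</p>

```python
import numpy as np
from scipy import ndimage

# Toy image of a shaded cube: three visible faces drawn as three uniform
# grey-level patches (1, 2, 3) on a background of 0.
cube = np.array([
    [0, 0, 1, 1, 0, 0],
    [0, 1, 1, 1, 1, 0],
    [2, 2, 1, 1, 3, 3],
    [2, 2, 2, 3, 3, 3],
    [0, 2, 2, 3, 3, 0],
])

def parse_by_grey_level(img):
    """Image-driven parsing: one segment per connected patch of uniform grey."""
    segments = 0
    for level in np.unique(img):
        if level == 0:                  # skip the background
            continue
        _, n_patches = ndimage.label(img == level)
        segments += n_patches
    return segments

print(parse_by_grey_level(cube))        # -> 3: the single cube is over-fragmented
```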
<p>Here we took advantage of a unique structured-training program, developed and perfected in our lab over the last 7 years, which enables the blind to ‘see’ using another class of visual rehabilitation approaches – non-invasive sensory substitution devices (SSDs) – and tested whether training the adult brain could help it acquire this key function.</p>
<p>Visual-to-auditory SSDs (Supp. Fig. 1A) transform visual images into sound representations (‘soundscapes’), while preserving the image’s spatial topography (Supp. Fig. 1B), thus theoretically enabling the blind to ‘see’ using their ears in a cheap and easily accessible manner. Whether these SSDs are useful and successful for visual rehabilitation is still an open question, but one that has elicited growing interest in recent years. Although there is accumulating evidence demonstrating functional abilities in various ‘visual’ tasks using SSDs
<xref ref-type="bibr" rid="b25">25</xref>
<xref ref-type="bibr" rid="b26">26</xref>
<xref ref-type="bibr" rid="b27">27</xref>
<xref ref-type="bibr" rid="b28">28</xref>
<xref ref-type="bibr" rid="b29">29</xref>
<xref ref-type="bibr" rid="b30">30</xref>
, no group has directly tested ‘visual’ parsing – one of the most basic functions, fundamental for recognizing objects and interacting with them, and thus for the practical use of SSDs. We are also not aware of any formal, organized program for teaching SSD usage; this lack is one of the main limitations to their adoption.</p>
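<p>For readers unfamiliar with this class of device, the following minimal sketch illustrates a vOICe-like visual-to-auditory mapping (our simplified reconstruction, not the device's actual implementation): the image is swept left to right over a fixed duration, each column becoming a chord in which a pixel's row sets the frequency of a sine component and its brightness sets that component's amplitude. The frequency range, sample rate and exponential frequency spacing are assumptions for illustration.</p>

```python
import numpy as np

def soundscape(image, sweep_s=2.0, fs=22050, f_lo=500.0, f_hi=5000.0):
    """vOICe-like mapping (sketch): left-to-right sweep, row -> pitch,
    brightness -> loudness. `image` is 2D, values in [0, 1], row 0 = top."""
    n_rows, n_cols = image.shape
    # Top rows map to high frequencies (exponential spacing; an assumption).
    freqs = f_hi * (f_lo / f_hi) ** (np.arange(n_rows) / (n_rows - 1))
    samples_per_col = int(sweep_s * fs / n_cols)
    t = np.arange(samples_per_col) / fs
    cols = []
    for c in range(n_cols):                        # left-to-right scan
        col = image[:, c]
        chord = (col[:, None] * np.sin(2 * np.pi * freqs[:, None] * t)).sum(0)
        cols.append(chord)
    audio = np.concatenate(cols)
    return audio / max(1e-9, np.abs(audio).max())  # normalize to [-1, 1]

# A single bright dot near the top-left corner becomes a brief high-pitched
# tone at the very start of the 2-second soundscape.
img = np.zeros((64, 64))
img[5, 3] = 1.0
wave = soundscape(img)
```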
<p>The main aims of the current study were thus to: 1) test whether the concept of ‘visual’ parsing (and the required underlying visual knowledge, such as understanding transparency) can be acquired in adulthood by the congenitally blind who lack any visual experience, and whether it can be implemented practically using the vOICe SSD
<xref ref-type="bibr" rid="b31">31</xref>
after limited training; 2) take advantage of a unique opportunity to compare, at least to some extent, the parsing abilities of the SSD-users to those reported
<xref ref-type="bibr" rid="b13">13</xref>
for sight-restored individuals. Specifically, can the use of an SSD to perceive ‘visual’ information help overcome some of the challenges observed in these patients?</p>
<p>As an additional related question, given the topographical nature of the vOICe SSD, we assessed to what extent the parsing task could be performed intuitively without any training by sighted individuals.</p>
<sec disp-level="1" sec-type="results">
<title>Results</title>
<p>All blind participants were enrolled in a novel structured-training program in which they learned how to extract and interpret high-resolution visual information from the complex soundscapes generated by the vOICe SSD (Supp. Fig. 1; see Methods for full details). Each subject underwent ~70 hours of one-on-one training, in 2-hour weekly sessions. The program was composed of two main components: structured 2D training in lab settings, and live-view training in more natural settings. During the 2D stage, participants were taught how to process the soundscapes of 2D static images from various visual categories (Supp. Fig. 1D). During each training trial, the participants heard a soundscape and were asked to describe the image, paying attention to both the location and the shapes of all elements in the image, and to integrate the details into meaningful wholes. Additionally, more general visual principles, such as the conversion of 3D objects to 2D images (and back), were demonstrated. Training was conducted using guiding questions, verbal explanations and tangible-image feedback (see Supp. Fig. 1E). In the initial stages of training, the participants were also asked to draw, by engraving, the ‘visual’ mental image constructed in their mind’s eye. After this structured training, participants could indicate which category a soundscape represented
<xref ref-type="bibr" rid="b29">29</xref>
, and identify multiple features enabling differentiation between objects within the same category. During live-view training, participants used a mobile kit of the vOICe (Supp. Fig. 1C) to acquire on-line dynamic images and actively sense the environment, thus moving from perception to action. Visual knowledge and skills were also introduced at this stage. For example, the change in a seen object’s size with distance was counter-intuitive for the participants, since this is not the case when judging an object’s size and distance by touch, and we had to explain it explicitly and practice its implications intensively. Similarly, they practiced head-“eye”-hand coordination, orienting their heads (and the sunglasses supporting the camera) to the objects at hand, etc.</p>
<p>Importantly, the skills tested here were not directly taught during this general structured-training program, but were only introduced in a short pre-test training session (which included completely different stimuli from those used in the test;
<xref ref-type="fig" rid="f1">Fig. 1A</xref>
).</p>
<p>In the ‘visual’ parsing test, 7 congenitally fully blind adults were presented with soundscapes of images containing 1, 2 or 3 shapes and were requested to indicate the number of objects. Specifically, there were a few types of stimuli: a) 1, 2 or 3 non-overlapping shapes (filled opaque, line drawings or filled transparent; see examples in
<xref ref-type="fig" rid="f1">Fig. 1B</xref>
i-iii); b) 2 overlapping shapes (filled opaque, line drawings or filled transparent;
<xref ref-type="fig" rid="f1">Fig. 1B</xref>
iv–vi); c) a single 3D shape (
<xref ref-type="fig" rid="f1">Fig. 1B</xref>
vii).</p>
<p>The total success rate of the SSD-users group was 84.1% ± 7.6 (s.d.), with a performance of 97.1% ± 4.9, 76.7% ± 13.2 and 98.1% ± 3.3 for stimuli containing 1, 2 or 3 2D shapes, respectively (
<xref ref-type="fig" rid="f1">Fig. 1C</xref>
; for detailed performance in each stimulus type see Supp. Fig. 2). All success rates were significantly above chance level (33.3% on a 3-alternative forced choice; p < 0.0006 (n = 7) for all comparisons, as assessed by a Wilcoxon rank-sum test; importantly, this is the lowest p-value possible in this non-parametric test given the number of subjects. All p-values reported here were corrected for multiple comparisons using the most conservative Bonferroni correction).</p>
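<p>The stated p-value floor follows from the permutation structure of the test: with two samples of 7 observations and no ties, the rank-sum statistic ranges over C(14, 7) = 3432 equally likely rank assignments under the null, so the most extreme two-sided p-value is 2/3432 ≈ 0.00058 < 0.0006. A quick check (a sketch under those assumptions):</p>

```python
from math import comb

n = m = 7                       # two groups of 7, no ties (assumption)
arrangements = comb(n + m, n)   # 3432 equally likely rank assignments
p_min_two_sided = 2 / arrangements
print(p_min_two_sided)          # ~0.000583, hence the floor p < 0.0006
```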
<p>In order to account not only for the participants’ success rates but also for the errors committed, we further calculated the d′ sensitivity measure. Averaged d′ was 3.6 ± 1.2, 2.5 ± 0.4 and 5.6 ± 2 for responding “1”, “2” or “3”, respectively. The full data matrix of the participants’ responses is presented in
<xref ref-type="supplementary-material" rid="S1">Supp. Table 1</xref>
.</p>
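<p>For reference, d′ in such designs is the difference between the z-transformed hit and false-alarm rates. The sketch below shows one standard way to compute it; the clipping convention and the example rates are our illustrative assumptions, not the study's data.</p>

```python
from scipy.stats import norm

def d_prime(hit_rate, fa_rate, n_trials=None):
    """Sensitivity index d' = z(H) - z(F). The optional 1/(2N) clipping keeps
    rates of exactly 0 or 1 finite (one common convention; an assumption)."""
    if n_trials:
        eps = 1.0 / (2 * n_trials)
        hit_rate = min(max(hit_rate, eps), 1 - eps)
        fa_rate = min(max(fa_rate, eps), 1 - eps)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Illustrative rates for the response "2": hits = 2-shape trials answered "2",
# false alarms = 1- or 3-shape trials answered "2".
print(round(d_prime(0.77, 0.05), 2))   # ~2.38
```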
<p>The average reaction time per stimulus was 7.4 ± 3.2 seconds, i.e. about 3.7 repetitions of the stimulus (since the scanning rate was 2 seconds per image). No significant correlation (r
<sup>2</sup>
 = 0.366) was found between participants’ performance and reaction time (Supp. Fig. 3). We next looked specifically at our subjects’ ability to identify two overlapping shapes as two distinct objects, an ability that is highly impaired in sight-restored individuals even after weeks to months of visual experience
<xref ref-type="bibr" rid="b13">13</xref>
. The SSD-users performed significantly above chance (73% ± 17.5; p < 0.0006), regardless of whether the overlapping shapes were presented as line drawings or as transparent shapes (68.6% ± 22.3 and 76.2% ± 21 respectively, p < 0.0006,
<xref ref-type="fig" rid="f1">Fig. 1D</xref>
).</p>
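<p>To see why an r² of 0.366 is non-significant at this sample size, here is a worked check (assuming the correlation was computed across the 7 congenitally blind subjects; the n is our assumption):</p>

```python
from math import sqrt
from scipy.stats import t

r2, n = 0.366, 7                       # r^2 from the text; n = 7 assumed
r = sqrt(r2)                           # ~0.60
t_stat = r * sqrt(n - 2) / sqrt(1 - r2)
p = 2 * t.sf(t_stat, df=n - 2)
print(round(t_stat, 2), round(p, 2))   # t ~1.70, p ~0.15 > 0.05
```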
<p>Since sight-restored individuals have been reported to successfully parse overlapping shapes when these were in different colors
<xref ref-type="bibr" rid="b13">13</xref>
, we also tested our subjects’ ability to parse two overlapping opaque shapes of different luminance levels, which is the closest parallel to color in the grayscale-only conversion of the vOICe. In this case as well, the SSD-users were very successful (72.4% ± 16.5 correct; Supp. Fig. 2).</p>
<p>In addition to the group of 7 congenitally fully blind individuals, we also tested 2 subjects who had some very limited visual experience. FN has faint light (but not form) perception, and HBH had some vision in one eye during her first year of life (
<xref ref-type="table" rid="t1">Table 1</xref>
). These 2 subjects performed similarly to the group (
<xref ref-type="fig" rid="f1">Fig. 1C</xref>
, represented by cyan diamonds; 82.1% and 86.3% total performance, 77.8% and 75.6% parsing the two overlapping shapes for FN and HBH, respectively).</p>
<p>Next, we assessed whether the ‘visual’ parsing capacity of the blind using SSD extends to 3D objects, by testing their ability to perceive 3D shapes as single entities, despite the fact that the shape is made up of several facets with different luminance levels. The congenitally fully blind SSD-users performed very well (
<xref ref-type="fig" rid="f1">Fig. 1D</xref>
; 84.3% ± 12.7; p < 0.0006). FN and HBH were also successful, each with a 70% success rate.</p>
<p>In order to verify that the blind SSD-users’ ‘visual’ parsing ability could not be attributed to auditory processing alone, and to assess what level of parsing can be achieved without any training, 7 sighted controls, matched to the group of 7 congenitally fully blind participants and naïve to SSDs, performed the same experiment (see
<xref ref-type="fig" rid="f2">Fig. 2</xref>
). Their overall performance was 61.3% ± 10.1, significantly lower than that of the blind (p < 0.0006). Interestingly, the sighted performed significantly above chance level (p < 0.0006), demonstrating that some aspects of the visual-to-auditory transformation and of ‘visual’ parsing using the device are intuitive.</p>
<p>However, when looking specifically at the stimuli of interest, i.e. the 3D shapes and the two overlapping shapes, the naïve sighted controls’ performance was much lower and did not differ significantly from chance (p > 0.05): 40% ± 8.6 correct for two overlapping line-drawing shapes, 37.1% ± 26.1 for two overlapping transparent shapes, and 51.4% ± 22.7 for 3D shapes. All scores were significantly lower than those of the blind (p = 0.0125, p = 0.0135 and p = 0.003, respectively). Thus, the blind participants’ success was not based on the auditory input alone but rather required visual interpretation.</p>
<p>The SSD-users’ success was further manifested when comparing their individual achievements (represented by orange diamonds in
<xref ref-type="fig" rid="f1">Figs 1</xref>
and
<xref ref-type="fig" rid="f3">3</xref>
) to those of the 3 sight-restored individuals described in the intriguing work by Ostrovsky and colleagues
<xref ref-type="bibr" rid="b13">13</xref>
(see
<xref ref-type="fig" rid="f3">Fig. 3A</xref>
for a comparison between the groups’ characteristics). These individuals were tested on comparable static visual parsing tasks twice: two weeks to three months post-restoration, and again (on some of the tasks) 10–18 months post-restoration to determine progress. The stimuli used in our experiment were similar in principle, though not completely identical, to those used in the sight-restoration study (e.g. some of the shapes were different). Moreover, we conducted a 3-alternative forced-choice experiment, to enable statistical analysis of significance, whereas the sight-restoration study required free responses. Nevertheless, although a fully direct comparison was impossible, comparing the two studies was relevant and instructive.</p>
<p>When tested weeks post-restoration, the sight-restored patients failed on the three comparable tasks; i.e., parsing two overlapping line drawing shapes, parsing two overlapping transparent shapes, and parsing 3D shapes. Thus, each of the 9 SSD-users outperformed them (
<xref ref-type="fig" rid="f3">Fig. 3B–D</xref>
). At the second time-point at which the sight-restored were tested, 10–18 months post-surgery, some improvement was reported in parsing overlapping line-drawing shapes. One patient had nearly perfect performance, and the other two also improved markedly. Nevertheless, as can be seen in
<xref ref-type="fig" rid="f3">Fig. 3B</xref>
four of the SSD-users still outperformed the sight-restored and two SSD-users had comparable performance. Finally, when tested again on 3D parsing, one of the sight-restored subjects still had 0% success, but the other two showed some improvement. In this case, all the individual SSD-users outperformed the sight-restored patients (see
<xref ref-type="fig" rid="f3">Fig. 3D</xref>
). Parsing of two overlapping transparent shapes in the sight-restored was not tested at this time-point, so no comparison could be made.</p>
</sec>
<sec disp-level="1" sec-type="discussion">
<title>Discussion</title>
<p>The findings show that a key complex visual concept – ‘visual’ parsing – can be learned and implemented in adulthood using sound-represented visual images, without any visual experience (
<xref ref-type="fig" rid="f1">Fig. 1</xref>
). After participation in our unique structured-training program, the congenitally blind SSD-users experienced success (at both the group level and the single-subject level) on various parsing tasks: they correctly perceived 2D overlapping shapes in different formats as distinct objects, and perceived 3D objects as single entities (
<xref ref-type="fig" rid="f1">Fig. 1C,D</xref>
). When considering our results, one must appreciate how far from trivial it is for the blind to acquire functional vision and perform ‘visual’ tasks, however much these are taken for granted by the normally-developed sighted. Indeed, for some medically sight-restored individuals, learning to interpret the regained visual information, and to actually see, was not only slow and challenging (as discussed in the introduction) but so difficult that they regressed to living in functional, self-defined blindness
<xref ref-type="bibr" rid="b10">10</xref>
<xref ref-type="bibr" rid="b11">11</xref>
.</p>
<p>We further had the opportunity to compare (
<xref ref-type="fig" rid="f3">Fig. 3</xref>
) the parsing abilities of our subjects to those reported for individuals who regained sight by cataract removal or refractive correction
<xref ref-type="bibr" rid="b13">13</xref>
, i.e. to compare the two visual rehabilitation approaches. This was a unique opportunity, as both highly-trained congenitally blind SSD-users and medically sight-restored individuals are relatively rare groups that are not easily accessed. We found that the SSD-users acquired this skill faster: they succeeded on the task after only ~70 training hours (of which only ~20 minutes concerned the specific tested tasks), whereas the sight-restored patients completely failed even following weeks of constant natural vision (i.e., a minimum of 210 waking hours for the patient tested the shortest time post-surgery) – and in some respects the SSD-users performed better than the sight-restored did even after ~1 year of eyesight. This is especially intriguing considering the complete lack of visual experience in 7 of our subjects (whereas the medically-restored patients had some light perception throughout their lives, as these procedures require functioning photoreceptors). This finding is also highly relevant to other means of sight restoration, since cataract removal represents the best-case scenario in terms of the resulting input resolution.</p>
<p>Finally, we showed in a control experiment with naïve sighted individuals (
<xref ref-type="fig" rid="f2">Fig. 2</xref>
) that some (simple) aspects of the visual-to-auditory transformation and of ‘visual’ parsing using the device are intuitive. Moreover, the findings demonstrate that not only were the blind SSD-users better than the sight-restored; their abilities following training also exceeded the intuitive understanding of sighted individuals who had normal visual experience during development and could base their judgments on extensive visual knowledge.</p>
<p>Our results have both theoretical and practical implications, which will be discussed below.</p>
<p>On a theoretical level, the findings suggest that with adequate training and technologies, high-order visual
<bold>concepts</bold>
can be learned in adulthood using an out-of-the-box approach, and that complete lack of visual experience during the relatively narrow critical period window
<xref ref-type="bibr" rid="b32">32</xref>
<xref ref-type="bibr" rid="b33">33</xref>
can be somewhat compensated for.</p>
<p>Because the visual-to-auditory transformation is not associative but rather preserves the visual spatial layout, because the task could not be performed based on auditory processing alone (
<xref ref-type="fig" rid="f2">Fig. 2</xref>
) and because all the experimental stimuli were novel to the subjects, their success reflects the implementation of visual principles and a generalized learning of the tested skills.</p>
<p>Importantly, we do not claim that our subjects’ ‘visual’ abilities necessarily imply that they generated holistic 3D mental ‘visual’ representations. However, even if such a representation was not created, and the task was performed based on more local features in the image and/or on low-level cues (probably at the level of a 2.5D sketch, as suggested by Marr
<xref ref-type="bibr" rid="b34">34</xref>
), and by using different strategies from those of normally sighted individuals, the results are still very encouraging, both theoretically and in terms of practical rehabilitation. They suggest that: 1) the information conveyed through the vOICe suffices to perform complex visual tasks; 2) various execution techniques can be learned, such that visual capacities can be recovered in a top-down manner (based on feedback from higher-order areas, previous experience and cognitive processing, all mediated through abundant backward connectivity) even when bottom-up pathways are massively impaired (and will remain so even after an invasive intervention).</p>
<p>On a practical level, this is the first time that ‘visual’ parsing abilities using an SSD have been directly tested. This ability is necessary for using SSDs in everyday life, since properly parsing a visual scene into distinct whole objects is an initial step in recognizing them. Therefore, the participants’ success is very encouraging with regard to the potential of SSDs to aid the blind, providing them with otherwise unavailable visual information and capacities. SSDs may be especially beneficial for the specific sub-group of the blind who, due to their etiology, cannot undergo invasive restoration procedures (i.e. all congenitally fully blind individuals, and late-blind individuals with non-functioning components in the visual pathway between the operated areas and the brain), but they will also be extremely helpful for the entire blind population, since the vast majority resides in poor developing countries and has scant access to medical treatment (WHO fact sheet N282, 2013).</p>
<p>Nevertheless, SSDs also have disadvantages
<xref ref-type="bibr" rid="b35">35</xref>
. These include the absence of subjective visual qualia
<xref ref-type="bibr" rid="b36">36</xref>
(though see
<xref ref-type="bibr" rid="b37">37</xref>
), a need for organized structured-training, possible interference with environmental auditory inputs, and less automatic, more cognitively demanding perception. Visual-to-auditory SSDs are also slow compared to natural vision (e.g. ~7 seconds on average in the current experiment; though see also the relatively slow reaction time in a sight-restored individual
<xref ref-type="bibr" rid="b38">38</xref>
and even slower times in retinal prosthesis implantees on similar or much easier tasks
<xref ref-type="bibr" rid="b5">5</xref>
). This is partly inherent to the transformation algorithm, which, in the case of the vOICe, displays the image sequentially. These disadvantages may (in addition to psychological and social factors) account for the fact that no SSD has been widely adopted by the blind to date.</p>
<p>This said, based on the behavioral achievements reported by us and by others
<xref ref-type="bibr" rid="b25">25</xref>
<xref ref-type="bibr" rid="b26">26</xref>
<xref ref-type="bibr" rid="b27">27</xref>
<xref ref-type="bibr" rid="b28">28</xref>
<xref ref-type="bibr" rid="b29">29</xref>
<xref ref-type="bibr" rid="b30">30</xref>
, together with the growing implementation of adequate training procedures (e.g. an online training program that will help expand SSD usage and training from the lab to the field
<xref ref-type="bibr" rid="b39">39</xref>
) and improvement in SSD technology (generating more user-friendly devices and upgrading their capabilities), SSDs have great promise for visual rehabilitation as standalone daily aids
<xref ref-type="bibr" rid="b35">35</xref>
.</p>
<p>Furthermore, we suggest that SSDs can be combined, complementarily and synergistically, with invasive sight-restoration procedures, taking into consideration the advantages and disadvantages of each approach. Thus, SSDs can be used before invasive sight-restoration procedures (see
<xref ref-type="fig" rid="f4">Fig. 4A</xref>
), to familiarize the operated individual with unique visual features in order to ease rehabilitation. For instance, blind individuals might benefit from learning and practicing, before surgery, concepts like visual parsing, which can be learned quickly with an SSD (
<xref ref-type="fig" rid="f1">Figs 1</xref>
and
<xref ref-type="fig" rid="f3">3</xref>
), but which were impaired following invasive procedures.</p>
<p>Moreover, a growing body of evidence shows that the ‘visual’ cortex of the blind follows the original functional organization and task specialization of the sighted visual cortex, and that SSD-‘vision’ recruits largely the same neural networks engaged by natural vision
<xref ref-type="bibr" rid="b29">29</xref>
<xref ref-type="bibr" rid="b40">40</xref>
<xref ref-type="bibr" rid="b41">41</xref>
<xref ref-type="bibr" rid="b42">42</xref>
<xref ref-type="bibr" rid="b43">43</xref>
<xref ref-type="bibr" rid="b44">44</xref>
(reviewed in
<xref ref-type="bibr" rid="b45">45</xref>
<xref ref-type="bibr" rid="b46">46</xref>
). Therefore, prior SSD training may also be used to induce adult plasticity and strengthen the visual networks, thus supporting sight-restoration efforts
<xref ref-type="bibr" rid="b47">47</xref>
.</p>
<p>Additionally, when the invasive procedure involves a visual prosthesis, a combined post-operation aid can be used (see
<xref ref-type="fig" rid="f4">Fig. 4B</xref>
), delivering the visual information simultaneously through the prosthesis’ electrodes (providing vivid visual qualia) and through the SSD (providing explanatory input to the visual signal). Based on our demonstration (
<xref ref-type="fig" rid="f3">Fig. 3B–D</xref>
) that the same visual task is learned faster by SSD-users than by sight-restored individuals, the dual, synchronous “visual” information should speed up rehabilitation.</p>
<p>Finally, SSDs can be used to provide input beyond the maximal capabilities of the prosthesis (
<xref ref-type="fig" rid="f4">Fig. 4C</xref>
). Thus, the technical resolution of the vOICe SSD
<xref ref-type="bibr" rid="b31">31</xref>
can be up to two orders of magnitude higher than that of current retinal prostheses
<xref ref-type="bibr" rid="b4">4</xref>
; and the functional ‘visual’ acuity of blind vOICe-users was shown to exceed the acuity reported with any other visual rehabilitation approach
<xref ref-type="bibr" rid="b48">48</xref>
. Therefore, the information from the prosthesis might not suffice for various visual tasks, which could be relatively easily performed using SSDs (see the demonstration in
<xref ref-type="fig" rid="f4">Fig. 4C</xref>
). Thus, an individual will probably be able to recognize the typical configuration of a face using the prosthesis, but in order to recognize facial expressions the SSD will have to be turned on. Complementary color and depth information, which are currently not conveyed through prostheses, can also be conveyed through recently developed SSDs
<xref ref-type="bibr" rid="b30">30</xref>
<xref ref-type="bibr" rid="b49">49</xref>
.</p>
<p>Taken together, SSDs have great rehabilitative potential as standalone assistive aids or combined (pre- or post-surgery) with invasive sight-restoration techniques.</p>
<p>One interesting question, which is beyond the scope of this article, is why the SSD-users, who received the visual information through their ears, performed better and required less experience than the sight-restored individuals, who received the information in the natural way. One speculative explanation is that since the initial processing of the SSD-delivered information is likely to be carried out by the auditory system (e.g. identifying the sound frequency and timing), SSD ‘vision’ benefits from the superior auditory skills of the blind
<xref ref-type="bibr" rid="b50">50</xref>
, their greater reliance on audition in daily life and their richer auditory (vs. visual) experience. However, critically, none of the tasks reported here could be performed based on the auditory input alone, but rather required ‘visual’-specific processing (see
<xref ref-type="fig" rid="f2">Fig. 2</xref>
).</p>
<p>Another, not mutually exclusive, explanation, which we believe played a central role in the achievements reported here, is the specific structured-training approach we used, during which the foundations of vision were gradually and explicitly taught and various ‘visual’ tasks were intensively practiced. We stress that all (invasive or non-invasive) visual rehabilitation approaches should be accompanied by structured training (see also
<xref ref-type="bibr" rid="b35">35</xref>
). Training is important not only for the early blind but also for late-blind individuals trying to cope with an atypical and degraded input such as that arriving from SSDs or visual prostheses (which is very different from the natural stimulation of neurons during eyesight). The importance of training was demonstrated, for instance, by showing that while sighted individuals were able to spontaneously extract SSD-conveyed pictorial depth cues and use them for depth estimation, early blind individuals could do so only after a training session in which they experienced various aspects of ‘visual’ depth
<xref ref-type="bibr" rid="b27">27</xref>
. Additionally, brain imaging studies have shown that SSD training strengthened the functional connectivity between the auditory cortex and task related ‘visual’ areas
<xref ref-type="bibr" rid="b42">42</xref>
; and that SSD-induced occipital cortex activation was stronger following training
<xref ref-type="bibr" rid="b51">51</xref>
. Even in amblyopia, an easier (and unilateral) case of visual impairment, a combined treatment that includes structured visual training was shown to trigger adult plasticity and greatly improve visual perceptual outcomes
<xref ref-type="bibr" rid="b52">52</xref>
.</p>
<p>Regardless of the explanation, our results clearly show that the absence of visual experience need not limit the acquisition of ‘visual’ parsing, a critical high-level aspect of vision. Most probably, given proper training, the ability of the blind to learn vision-unique concepts using out-of-the-box methods also applies to at least some other functions, such as size constancy. Future studies should examine this, as well as whether the action-perception loop can be closed using SSDs and/or other visual rehabilitation approaches
<xref ref-type="bibr" rid="b53">53</xref>
. Additionally, the practical contribution of SSDs as a means of ‘visual’ training before/after sight restoration, and whether they indeed help overcome the serious deficits observed in practical visual perception after sight is regained, still needs thorough evaluation in future clinical trials. Finally, we plan to use the training program we developed to also train medically sight-restored patients who do not use SSDs, to test whether their abilities improve in a similar manner when eyesight alone is used in the training process.</p>
</sec>
<sec disp-level="1" sec-type="methods">
<title>Methods</title>
<sec disp-level="2">
<title>Participants</title>
<p>Nine blind individuals (see
<xref ref-type="table" rid="t1">Table 1</xref>
for full details) participated in the experiment. Seven were congenitally fully blind, one (FN) was congenitally blind but had faint light perception, and the remaining participant (HBH) had congenital blindness in her left eye and lost sight in the right eye at the age of 1 year. Subjects’ ages ranged from 21 to 53; all had normal hearing (except PH, who had slightly impaired hearing in her right ear) and no neurological or psychiatric conditions. None of the participants had any experience with SSDs prior to training. An additional seven sighted individuals, matched in gender and age to the seven congenitally and fully blind participants (average age: 33, range: 20–52) and totally naïve to SSDs (unfamiliar with the visual-to-auditory transformation algorithm), participated in the experiment as a control group. The Hebrew University’s ethics committee for research involving human subjects approved the experiments, and written informed consent was obtained from each participant. All methods were carried out in accordance with the approved guidelines.</p>
</sec>
<sec disp-level="2">
<title>“The vOICe” visual-to-auditory sensory substitution device</title>
<p>The vOICe SSD
<xref ref-type="bibr" rid="b31">31</xref>
converts images into sounds, preserving visual detail at high resolution (up to 25,344 pixels; this was the resolution used here) using a pre-determined algorithm (see Supp. Fig. 1B for full details), thus enabling “seeing with sound” for highly trained users.</p>
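<p>For intuition, the core of such a visual-to-auditory transformation can be sketched in a few lines of Python. This is a simplified illustration only, not the vOICe implementation itself (the actual device has its own frequency mapping, timing and stereo panning; the frequency range and sample rate below are arbitrary choices): image columns are scanned left to right over the soundscape duration, each row drives a sinusoid whose frequency increases with elevation, and pixel brightness sets the sinusoid’s amplitude.</p>
<preformat>
import numpy as np

def image_to_soundscape(img, duration=2.0, sr=22050, f_lo=500.0, f_hi=5000.0):
    """Toy visual-to-auditory transform: `img` is a 2D array of brightness
    values in [0, 1]; columns are scanned left to right over `duration`
    seconds, higher rows map to higher frequencies, brightness to loudness."""
    n_rows, n_cols = img.shape
    col_len = int(sr * duration / n_cols)            # samples per column
    t = np.arange(col_len) / sr
    # exponential frequency spacing: top row -> f_hi, bottom row -> f_lo
    freqs = f_lo * (f_hi / f_lo) ** (np.arange(n_rows)[::-1] / (n_rows - 1))
    sound = np.concatenate([
        (img[:, c, None] * np.sin(2 * np.pi * freqs[:, None] * t)).sum(axis=0)
        for c in range(n_cols)
    ])
    return sound / (np.abs(sound).max() + 1e-9)      # normalize to [-1, 1]

# a bright diagonal line from bottom-left to top-right yields a rising sweep
wave = image_to_soundscape(np.eye(32)[::-1])
</preformat>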
</sec>
<sec disp-level="2">
<title>Structured-training procedure</title>
<p>Blind participants were enrolled in a novel, unique training program in which they learned how to extract and interpret visual information using the vOICe SSD. Each participant was trained for several months in weekly 2-hour training sessions by a single trainer on a one-to-one basis. The training duration (71.2 hours on average) and progress rate varied across participants and were determined by personal achievements and difficulties, as well as other time constraints.</p>
<p>The program was composed of two main stages. During the structured 2D training, participants learned to extract highly detailed 2D visual information from still (static) images. During each training trial, participants heard a soundscape and had to describe the image as accurately as possible and to recognize what they ‘saw’. Occasionally, mostly in the first few training sessions, the participants were asked to draw the image they ‘saw’ (by engraving, thus making the image tangible). This requirement forced them to reach definite conclusions as to how they imagined the image, and enabled the trainer to fully assess their ‘visual’ perception. In cases where the participants failed to describe the image perfectly, or mistook or missed some details, the trainer asked them guiding questions and directed them toward processing strategies they could use to interpret the sounds. Thus, participants were instructed to attend to various properties of the sound (e.g. its duration, whether the sound’s frequency is constant or changing, etc.). Then, they were encouraged to think what shape (or combination of shapes, creating a complex image) could be represented by these specific properties. Additional useful hints, such as the relative size of an object compared to a known object (e.g. the participant’s hand), were discussed. This active technique enabled the participants to better understand how to avoid mistakes in the future and which questions they should ask themselves to correctly interpret the soundscape. This prepared them for future independent use of the vOICe, without the trainer’s guidance. In addition to the verbal description of the sound and the image it represented, we provided the blind subjects with tangible image feedback, identical to the images they “saw” using the vOICe, which provided further understanding of the image (see Supp. Fig. 1E).</p>
<p>Special emphasis was given to the features characterizing the object category. For example, in the body posture category participants were encouraged to mirror the posture presented, in the face category they were instructed how to identify features that characterize a face in general and features that differentiate faces (e.g. hair length, eye shape and position), and in the house category they were encouraged to identify the general structure of a building, as well as specific features such as number of floors, number and location of windows and the shape of the roof.</p>
<p>In the second training stage, participants practiced dynamic active ‘vision’ in real environments using a mobile setup of the vOICe (Supp. Fig. 1C). The difficulty of the practiced tasks increased gradually, starting with localizing and reaching for simple objects placed on a homogeneous background, through “eye”-hand coordination tasks, and finally distance estimation of objects, corridor navigation and obstacle avoidance. After these demonstrations of general principles, the second stage was unstructured and varied across participants, such that every blind user was trained on the specific tasks that matched his/her needs and interests.</p>
<p>Importantly, in both training stages participants were shown and taught general visual-perception principles that they were unfamiliar with, such as variations in apparent object size at different distances and the transparency of objects. These complex visual concepts were first explicitly explained (e.g. “if one object occludes another one, then the first must be closer”) and then directly practiced to enable implementation of the acquired knowledge.</p>
</sec>
<sec disp-level="2">
<title>‘Visual’ parsing test</title>
<p>General experimental design: vOICe soundscapes of image stimuli were played in a fixed pseudo-randomized order, at a scanning rate of 2 seconds per image, using Presentation software (Neurobehavioral Systems, CA, USA). Participants indicated their answer using the keyboard. Each sound was played until the subject responded, and the next stimulus was presented only after the subject pressed the space key. Answers and reaction times were recorded for each trial. No feedback was given to the participants during the experiment. All stimuli in the experiment were novel, and were not presented to the subjects in any previous training session, thus requiring generalization of the learned skills.</p>
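<p>The trial logic amounts to a simple self-paced loop. The sketch below is illustrative only (the actual experiment was run in Presentation; `play_soundscape` is a hypothetical callable returning a handle with a `stop()` method, and keyboard handling is reduced to `input()` for brevity):</p>
<preformat>
import random, time

def run_block(stimuli, play_soundscape, seed=0):
    """One run: each soundscape repeats until a 1/2/3 response; the next
    trial starts only when the subject asks for it; no feedback is given."""
    order = list(stimuli)
    random.Random(seed).shuffle(order)   # fixed pseudo-random order for all subjects
    log = []
    for stim in order:
        handle = play_soundscape(stim)   # 2-s scan per image, looped until response
        t0 = time.monotonic()
        answer = input("How many shapes (1/2/3)? ")
        rt = time.monotonic() - t0       # reaction time recorded per trial
        handle.stop()
        log.append((stim, answer, rt))
        input("Press Enter for the next stimulus...")   # subject-paced
    return log
</preformat>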
<p>Experimental stimuli: The methodology and stimuli were based on those used by Ostrovsky and colleagues
<xref ref-type="bibr" rid="b13">13</xref>
(“Tests of Static Visual Parsing” section), who assessed visual parsing in medically sight-restored individuals. Image stimuli consisted of 1, 2 non-overlapping, 2 overlapping or 3 non-overlapping 2D shapes or a single 3D shape, and subjects had to indicate whether each stimulus contained 1, 2 or 3 shapes. The 2D shapes were a circle, square, rectangle, triangle and pentagon. The shapes were presented in one of three formats: line-drawings, filled opaque shapes (with different luminance levels for different shapes within an image) or filled transparent shapes (same luminance level for all shapes within a single image). The 3D objects were filled opaque shapes corresponding to the 2D shapes (e.g. a cube instead of a square). The stimuli of most interest were the 2-overlapping shapes and 3D shapes. The other stimuli were used to control for the subjects’ general ability to identify the number of distinct objects and to eliminate any potential psychological bias if most stimuli had contained the same number of objects. However, in order to decrease the number of stimuli in the experiment so that subjects would remain focused and attentive, there were fewer repetitions of the control stimuli than the stimuli of interest. Specifically, the experiment included a total of 95 stimuli (divided into two runs): 45 images with 2 overlapping shapes (15 images per shape format), 15 images with 2 non-overlapping shapes (5 stimuli per shape format), 15 images with 3 non-overlapping shapes (5 stimuli per format), 10 images with a single shape (5 in an outline format and 5 in a full solid shape format) and 10 stimuli with a single 3D shape. The shapes’ locations varied randomly between the images, thus the timing of objects in the soundscapes could not have been informative about their number. See
<xref ref-type="fig" rid="f1">Fig. 1B</xref>
for examples of the different stimuli and their auditory representation.</p>
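<p>As a bookkeeping aid, the composition of the 95-stimulus set described above can be reproduced as follows (a sketch; the dictionary fields are arbitrary labels, not the original stimulus files):</p>
<preformat>
from itertools import product

FORMATS_2D = ["line_drawing", "filled_opaque", "filled_transparent"]

def build_stimulus_set():
    stimuli = []
    # 45 images with 2 overlapping shapes: 15 per 2D format
    stimuli += [{"n": 2, "kind": "overlap", "format": f, "rep": i}
                for f, i in product(FORMATS_2D, range(15))]
    # 15 images with 2 and 15 with 3 non-overlapping shapes: 5 per format
    for n in (2, 3):
        stimuli += [{"n": n, "kind": "non_overlap", "format": f, "rep": i}
                    for f, i in product(FORMATS_2D, range(5))]
    # 10 single 2D shapes (5 outline, 5 filled solid) and 10 single 3D shapes
    stimuli += [{"n": 1, "kind": "single_2d", "format": f, "rep": i}
                for f, i in product(["outline", "solid"], range(5))]
    stimuli += [{"n": 1, "kind": "single_3d", "format": "filled_opaque", "rep": i}
                for i in range(10)]
    assert len(stimuli) == 95    # 45 + 15 + 15 + 10 + 10
    return stimuli
</preformat>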
<p>Blind participants were briefly trained (~20 minutes; see
<xref ref-type="fig" rid="f1">Fig. 1A</xref>
) for the specific task before the experiment to familiarize them with the visual principles of object occlusion, transparency, segmentation and overlap. During training, one stimulus of each type (i.e. a 3D shape, 2 overlapping filled opaque shapes, etc.) was presented using different shapes than those used in the experiment (a trapezoid, a rhombus, a cylinder). Sighted controls were not trained and remained naïve to the visual-to-auditory transformation algorithm.</p>
</sec>
<sec disp-level="2">
<title>Statistical analysis</title>
<p>Average percent correct was calculated, and a Wilcoxon rank-sum test was used to test for significance (relative to chance level, which was 1/3 as there were 3 possible answers, or between the blind and sighted groups). A Bonferroni correction was used to account for multiple comparisons: we divided the target α = 0.05 by the number of statistical comparisons performed (eight), which yielded p < 0.00625 as the threshold for significance. Additionally, the d′ sensitivity measure was calculated for the main results.</p>
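<p>In code, the analysis reduces to a few standard calls. This is a minimal sketch assuming SciPy; the paper names a Wilcoxon rank-sum test, and for the one-sample comparison against the 1/3 chance level we read this as the signed-rank variant applied to per-subject accuracies (an assumption on our part):</p>
<preformat>
import numpy as np
from scipy import stats

ALPHA, N_COMPARISONS = 0.05, 8        # 0.05 / 8 = 0.00625
THRESHOLD = ALPHA / N_COMPARISONS     # Bonferroni-corrected significance level

def vs_chance(percent_correct, chance=1/3):
    """Per-subject accuracies tested against chance (signed-rank variant)."""
    _, p = stats.wilcoxon(np.asarray(percent_correct) - chance)
    return p, p < THRESHOLD

def blind_vs_sighted(blind_scores, sighted_scores):
    """Two-sample Wilcoxon rank-sum comparison between the groups."""
    _, p = stats.ranksums(blind_scores, sighted_scores)
    return p, p < THRESHOLD

def d_prime(hit_rate, false_alarm_rate):
    """Sensitivity index: d' = Z(hit rate) - Z(false-alarm rate)."""
    return stats.norm.ppf(hit_rate) - stats.norm.ppf(false_alarm_rate)
</preformat>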
</sec>
</sec>
<sec disp-level="1">
<title>Additional Information</title>
<p>
<bold>How to cite this article</bold>
: Reich, L. and Amedi, A. ‘Visual’ parsing can be taught quickly without visual experience during critical periods.
<italic>Sci. Rep.</italic>
<bold>5</bold>
, 15359; doi: 10.1038/srep15359 (2015).</p>
</sec>
<sec sec-type="supplementary-material" id="S1">
<title>Supplementary Material</title>
<supplementary-material id="d33e23" content-type="local-data">
<caption>
<title>Supplementary Information</title>
</caption>
<media xlink:href="srep15359-s1.pdf"></media>
</supplementary-material>
</sec>
</body>
<back>
<ack>
<p>We thank Sharon Taub for her help in running the experiment and Ella Striem-Amit for useful discussions. LR is supported by the Ariane de Rothschild Women’s Doctoral Program. AA is a European Research Council fellow and is supported by ERC-ITG grant (310809); The Charitable Gatsby Foundation and The James S. McDonnell Foundation scholar award for understanding human cognition (grant number 220020284).</p>
</ack>
<ref-list>
<ref id="b1">
<mixed-citation publication-type="journal">
<name>
<surname>Zrenner</surname>
<given-names>E.</given-names>
</name>
<italic>et al.</italic>
<article-title>Subretinal electronic chips allow blind patients to read letters and combine them to words</article-title>
.
<source>P. Roy. Soc. B-Biol. Sci.</source>
<volume>278</volume>
,
<fpage>1489</fpage>
<lpage>1497</lpage>
(
<year>2011</year>
).</mixed-citation>
</ref>
<ref id="b2">
<mixed-citation publication-type="journal">
<name>
<surname>Ahuja</surname>
<given-names>A. K.</given-names>
</name>
&
<name>
<surname>Behrend</surname>
<given-names>M. R.</given-names>
</name>
<article-title>The Argus™ II retinal prosthesis: Factors affecting patient selection for implantation</article-title>
.
<source>Prog. Retin. Eye Res.</source>
<volume>36</volume>
,
<fpage>1</fpage>
<lpage>23</lpage>
(
<year>2013</year>
).
<pub-id pub-id-type="pmid">23500412</pub-id>
</mixed-citation>
</ref>
<ref id="b3">
<mixed-citation publication-type="journal">
<name>
<surname>Weiland</surname>
<given-names>J. D.</given-names>
</name>
,
<name>
<surname>Cho</surname>
<given-names>A. K.</given-names>
</name>
&
<name>
<surname>Humayun</surname>
<given-names>M. S.</given-names>
</name>
<article-title>Retinal Prostheses: Current Clinical Results and Future Needs</article-title>
.
<source>Ophthalmology</source>
<volume>118</volume>
,
<fpage>2227</fpage>
<lpage>2237</lpage>
(
<year>2011</year>
).
<pub-id pub-id-type="pmid">22047893</pub-id>
</mixed-citation>
</ref>
<ref id="b4">
<mixed-citation publication-type="journal">
<name>
<surname>Luo</surname>
<given-names>Y. H.-L.</given-names>
</name>
&
<name>
<surname>da Cruz</surname>
<given-names>L.</given-names>
</name>
<article-title>A review and update on the current status of retinal prostheses (bionic eye)</article-title>
.
<source>Brit. Med. Bull.</source>
<volume>109</volume>
,
<fpage>31</fpage>
<lpage>44</lpage>
(
<year>2014</year>
).
<pub-id pub-id-type="pmid">24526779</pub-id>
</mixed-citation>
</ref>
<ref id="b5">
<mixed-citation publication-type="journal">
<name>
<surname>da Cruz</surname>
<given-names>L.</given-names>
</name>
<italic>et al.</italic>
<article-title>The Argus II epiretinal prosthesis system allows letter and word reading and long-term function in patients with profound vision loss</article-title>
.
<source>Brit. J. Ophthalmol.</source>
<volume>97</volume>
,
<fpage>632</fpage>
<lpage>636</lpage>
(
<year>2013</year>
).
<pub-id pub-id-type="pmid">23426738</pub-id>
</mixed-citation>
</ref>
<ref id="b6">
<mixed-citation publication-type="journal">
<name>
<surname>Humayun</surname>
<given-names>M. S.</given-names>
</name>
<italic>et al.</italic>
<article-title>Interim results from the international trial of Second Sight’s visual prosthesis</article-title>
.
<source>Ophthalmology</source>
<volume>119</volume>
,
<fpage>779</fpage>
<lpage>788</lpage>
(
<year>2012</year>
).
<pub-id pub-id-type="pmid">22244176</pub-id>
</mixed-citation>
</ref>
<ref id="b7">
<mixed-citation publication-type="journal">
<name>
<surname>Lauritzen</surname>
<given-names>T. Z.</given-names>
</name>
<italic>et al.</italic>
<article-title>Reading visual Braille with a retinal prosthesis</article-title>
.
<source>Front. Neurosci.</source>
<volume>6</volume>
,
<pub-id pub-id-type="doi">10.3389/fnins.2012.00168</pub-id>
(
<year>2012</year>
).</mixed-citation>
</ref>
<ref id="b8">
<mixed-citation publication-type="journal">
<name>
<surname>Dorn</surname>
<given-names>J. D.</given-names>
</name>
<italic>et al.</italic>
<article-title>The Detection of Motion by Blind Subjects With the Epiretinal 60-Electrode (Argus II) Retinal Prosthesis</article-title>
.
<source>JAMA Ophthalmol.</source>
<volume>131</volume>
,
<fpage>183</fpage>
<lpage>189</lpage>
(
<year>2013</year>
).
<pub-id pub-id-type="pmid">23544203</pub-id>
</mixed-citation>
</ref>
<ref id="b9">
<mixed-citation publication-type="journal">
<name>
<surname>Gregory</surname>
<given-names>R. L.</given-names>
</name>
&
<name>
<surname>Wallace</surname>
<given-names>J. G.</given-names>
</name>
<article-title>Recovery from early blindness: A case study</article-title>
. In
<source>Experimental Psychology Society Monograph No. 2</source>
(Heffers,
<year>1963</year>
).</mixed-citation>
</ref>
<ref id="b10">
<mixed-citation publication-type="journal">
<name>
<surname>Ackroyd</surname>
<given-names>C.</given-names>
</name>
,
<name>
<surname>Humphrey</surname>
<given-names>N. K.</given-names>
</name>
&
<name>
<surname>Warrington</surname>
<given-names>E. K.</given-names>
</name>
<article-title>Lasting effects of early blindness: A case study</article-title>
.
<source>Q. J. Exp. Psychol.</source>
<volume>26</volume>
,
<fpage>114</fpage>
<lpage>124</lpage>
(
<year>1974</year>
).
<pub-id pub-id-type="pmid">4592530</pub-id>
</mixed-citation>
</ref>
<ref id="b11">
<mixed-citation publication-type="journal">
<name>
<surname>Carlson</surname>
<given-names>S.</given-names>
</name>
,
<name>
<surname>Hyvarinen</surname>
<given-names>L.</given-names>
</name>
&
<name>
<surname>Raninen</surname>
<given-names>A.</given-names>
</name>
<article-title>Persistent behavioural blindness after early visual deprivation and active visual rehabilitation: a case report</article-title>
.
<source>Brit. J. Ophthalmol.</source>
<volume>70</volume>
,
<fpage>607</fpage>
<lpage>611</lpage>
(
<year>1986</year>
).</mixed-citation>
</ref>
<ref id="b12">
<mixed-citation publication-type="journal">
<name>
<surname>Fine</surname>
<given-names>I.</given-names>
</name>
<italic>et al.</italic>
<article-title>Long-term deprivation affects visual perception and cortex</article-title>
.
<source>Nat. Neurosci.</source>
<volume>6</volume>
,
<fpage>915</fpage>
<lpage>916</lpage>
(
<year>2003</year>
).
<pub-id pub-id-type="pmid">12937420</pub-id>
</mixed-citation>
</ref>
<ref id="b13">
<mixed-citation publication-type="journal">
<name>
<surname>Ostrovsky</surname>
<given-names>Y.</given-names>
</name>
,
<name>
<surname>Meyers</surname>
<given-names>E.</given-names>
</name>
,
<name>
<surname>Ganesh</surname>
<given-names>S.</given-names>
</name>
,
<name>
<surname>Mathur</surname>
<given-names>U.</given-names>
</name>
&
<name>
<surname>Sinha</surname>
<given-names>P.</given-names>
</name>
<article-title>Visual parsing after recovery from blindness</article-title>
.
<source>Psychol. Sci.</source>
<volume>20</volume>
,
<fpage>1484</fpage>
<lpage>1491</lpage>
(
<year>2009</year>
).
<pub-id pub-id-type="pmid">19891751</pub-id>
</mixed-citation>
</ref>
<ref id="b14">
<mixed-citation publication-type="journal">
<name>
<surname>Levin</surname>
<given-names>N.</given-names>
</name>
,
<name>
<surname>Dumoulin</surname>
<given-names>S. O.</given-names>
</name>
,
<name>
<surname>Winawer</surname>
<given-names>J.</given-names>
</name>
,
<name>
<surname>Dougherty</surname>
<given-names>R. F.</given-names>
</name>
&
<name>
<surname>Wandell</surname>
<given-names>B. A.</given-names>
</name>
<article-title>Cortical Maps and White Matter Tracts following Long Period of Visual Deprivation and Retinal Image Restoration</article-title>
.
<source>Neuron</source>
<volume>65</volume>
,
<fpage>21</fpage>
<lpage>31</lpage>
(
<year>2010</year>
).
<pub-id pub-id-type="pmid">20152110</pub-id>
</mixed-citation>
</ref>
<ref id="b15">
<mixed-citation publication-type="journal">
<name>
<surname>Sinha</surname>
<given-names>P.</given-names>
</name>
&
<name>
<surname>Held</surname>
<given-names>R.</given-names>
</name>
<article-title>Sight restoration</article-title>
.
<source>F1000 Med. Rep.</source>
<volume>4</volume>
,
<fpage>17</fpage>
(
<year>2012</year>
).
<pub-id pub-id-type="pmid">22991579</pub-id>
</mixed-citation>
</ref>
<ref id="b16">
<mixed-citation publication-type="journal">
<name>
<surname>Wiesel</surname>
<given-names>T. N.</given-names>
</name>
&
<name>
<surname>Hubel</surname>
<given-names>D. H.</given-names>
</name>
<article-title>Comparison of the effects of unilateral and bilateral eye closure on cortical unit responses in kittens</article-title>
.
<source>J. Neurophysiol.</source>
<volume>28</volume>
,
<fpage>1029</fpage>
<lpage>1040</lpage>
(
<year>1965</year>
).
<pub-id pub-id-type="pmid">5883730</pub-id>
</mixed-citation>
</ref>
<ref id="b17">
<mixed-citation publication-type="journal">
<name>
<surname>Wiesel</surname>
<given-names>T. N.</given-names>
</name>
&
<name>
<surname>Hubel</surname>
<given-names>D. H.</given-names>
</name>
<article-title>Single-Cell Responses in Striate Cortex of Kittens Deprived of Vision in One Eye</article-title>
.
<source>J. Neurophysiol.</source>
<volume>26</volume>
,
<fpage>1003</fpage>
<lpage>1017</lpage>
(
<year>1963</year>
).
<pub-id pub-id-type="pmid">14084161</pub-id>
</mixed-citation>
</ref>
<ref id="b18">
<mixed-citation publication-type="journal">
<name>
<surname>Dormal</surname>
<given-names>G.</given-names>
</name>
,
<name>
<surname>Lepore</surname>
<given-names>F.</given-names>
</name>
&
<name>
<surname>Collignon</surname>
<given-names>O.</given-names>
</name>
<article-title>Plasticity of the Dorsal “Spatial” Stream in Visually Deprived Individuals</article-title>
.
<source>Neural Plast.</source>
<volume>2012</volume>
,
<pub-id pub-id-type="doi">10.1155/2012/687659</pub-id>
(
<year>2012</year>
).</mixed-citation>
</ref>
<ref id="b19">
<mixed-citation publication-type="journal">
<name>
<surname>Putzar</surname>
<given-names>L.</given-names>
</name>
,
<name>
<surname>Hötting</surname>
<given-names>K.</given-names>
</name>
,
<name>
<surname>Rösler</surname>
<given-names>F.</given-names>
</name>
&
<name>
<surname>Röder</surname>
<given-names>B.</given-names>
</name>
<article-title>The development of visual feature binding processes after visual deprivation in early infancy</article-title>
.
<source>Vision Res.</source>
<volume>47</volume>
,
<fpage>2616</fpage>
<lpage>2626</lpage>
(
<year>2007</year>
).
<pub-id pub-id-type="pmid">17697691</pub-id>
</mixed-citation>
</ref>
<ref id="b20">
<mixed-citation publication-type="journal">
<name>
<surname>Maurer</surname>
<given-names>D.</given-names>
</name>
,
<name>
<surname>Mondloch</surname>
<given-names>C. J.</given-names>
</name>
&
<name>
<surname>Lewis</surname>
<given-names>T. L.</given-names>
</name>
in
<source>Progress in Brain Research</source>
<volume>Vol. 164</volume>
(eds
<name>
<surname>von Hofsten</surname>
<given-names>C.</given-names>
</name>
&
<name>
<surname>Rosander</surname>
<given-names>K.</given-names>
</name>
)
<fpage>87</fpage>
<lpage>104</lpage>
(Elsevier,
<year>2007</year>
).
<pub-id pub-id-type="pmid">17920427</pub-id>
</mixed-citation>
</ref>
<ref id="b21">
<mixed-citation publication-type="journal">
<name>
<surname>Ellemberg</surname>
<given-names>D.</given-names>
</name>
,
<name>
<surname>Lewis</surname>
<given-names>T. L.</given-names>
</name>
,
<name>
<surname>Maurer</surname>
<given-names>D.</given-names>
</name>
,
<name>
<surname>Brar</surname>
<given-names>S.</given-names>
</name>
&
<name>
<surname>Brent</surname>
<given-names>H. P.</given-names>
</name>
<article-title>Better perception of global motion after monocular than after binocular deprivation</article-title>
.
<source>Vision Res.</source>
<volume>42</volume>
,
<fpage>169</fpage>
<lpage>179</lpage>
(
<year>2002</year>
).
<pub-id pub-id-type="pmid">11809471</pub-id>
</mixed-citation>
</ref>
<ref id="b22">
<mixed-citation publication-type="journal">
<name>
<surname>Chang</surname>
<given-names>W. C.</given-names>
</name>
&
<name>
<surname>Bin</surname>
<given-names>I.</given-names>
</name>
<article-title>The Difficulties in Teaching an Adult with Congenital Blindness to Draw Cubes: A Case Study</article-title>
.
<source>J. Visual Impair. Blind.</source>
<volume>107</volume>
,
<fpage>144</fpage>
<lpage>149</lpage>
(
<year>2013</year>
).</mixed-citation>
</ref>
<ref id="b23">
<mixed-citation publication-type="journal">
<name>
<surname>Borenstein</surname>
<given-names>E.</given-names>
</name>
&
<name>
<surname>Ullman</surname>
<given-names>S.</given-names>
</name>
in
<italic>Computer Vision — ECCV 2002</italic>
<volume>Vol. 2351</volume>
<source>Lecture Notes in Computer Science</source>
(eds
<name>
<surname>Heyden</surname>
<given-names>A.</given-names>
</name>
,
<name>
<surname>Sparr</surname>
<given-names>G.</given-names>
</name>
,
<name>
<surname>Nielsen</surname>
<given-names>M.</given-names>
</name>
&
<name>
<surname>Johansen</surname>
<given-names>P.</given-names>
</name>
) Ch. 8,
<fpage>109</fpage>
<lpage>122</lpage>
(Springer Berlin Heidelberg,
<year>2002</year>
).</mixed-citation>
</ref>
<ref id="b24">
<mixed-citation publication-type="journal">
<name>
<surname>Dura-Bernal</surname>
<given-names>S.</given-names>
</name>
,
<name>
<surname>Wennekers</surname>
<given-names>T.</given-names>
</name>
&
<name>
<surname>Denham</surname>
<given-names>S. L.</given-names>
</name>
<article-title>The role of feedback in a hierarchical model of object perception</article-title>
.
<source>Adv. Exp. Med. Biol.</source>
<volume>718</volume>
,
<fpage>165</fpage>
<lpage>179</lpage>
(
<year>2011</year>
).
<pub-id pub-id-type="pmid">21744218</pub-id>
</mixed-citation>
</ref>
<ref id="b25">
<mixed-citation publication-type="journal">
<name>
<surname>Ptito</surname>
<given-names>M.</given-names>
</name>
,
<name>
<surname>Matteau</surname>
<given-names>I.</given-names>
</name>
,
<name>
<surname>Gjedde</surname>
<given-names>A.</given-names>
</name>
&
<name>
<surname>Kupers</surname>
<given-names>R.</given-names>
</name>
<article-title>Recruitment of the middle temporal area by tactile motion in congenital blindness</article-title>
.
<source>Neuroreport</source>
<volume>20</volume>
,
<fpage>543</fpage>
<lpage>547</lpage>
(
<year>2009</year>
).
<pub-id pub-id-type="pmid">19240660</pub-id>
</mixed-citation>
</ref>
<ref id="b26">
<mixed-citation publication-type="journal">
<name>
<surname>Kupers</surname>
<given-names>R.</given-names>
</name>
,
<name>
<surname>Chebat</surname>
<given-names>D. R.</given-names>
</name>
,
<name>
<surname>Madsen</surname>
<given-names>K. H.</given-names>
</name>
,
<name>
<surname>Paulson</surname>
<given-names>O. B.</given-names>
</name>
&
<name>
<surname>Ptito</surname>
<given-names>M.</given-names>
</name>
<article-title>Neural correlates of virtual route recognition in congenital blindness</article-title>
.
<source>Proc. Natl. Acad. Sci. U S A</source>
<volume>107</volume>
,
<fpage>12716</fpage>
<lpage>12721</lpage>
(
<year>2010</year>
).
<pub-id pub-id-type="pmid">20616025</pub-id>
</mixed-citation>
</ref>
<ref id="b27">
<mixed-citation publication-type="journal">
<name>
<surname>Renier</surname>
<given-names>L.</given-names>
</name>
&
<name>
<surname>De Volder</surname>
<given-names>A. G.</given-names>
</name>
<article-title>Vision substitution and depth perception: early blind subjects experience visual perspective through their ears</article-title>
.
<source>Disabil. Rehabil. Assist. Technol.</source>
<volume>5</volume>
,
<fpage>175</fpage>
<lpage>183</lpage>
(
<year>2010</year>
).
<pub-id pub-id-type="pmid">20214472</pub-id>
</mixed-citation>
</ref>
<ref id="b28">
<mixed-citation publication-type="journal">
<name>
<surname>Chebat</surname>
<given-names>D. R.</given-names>
</name>
,
<name>
<surname>Schneider</surname>
<given-names>F. C.</given-names>
</name>
,
<name>
<surname>Kupers</surname>
<given-names>R.</given-names>
</name>
&
<name>
<surname>Ptito</surname>
<given-names>M.</given-names>
</name>
<article-title>Navigation with a sensory substitution device in congenitally blind individuals</article-title>
.
<source>Neuroreport</source>
<volume>22</volume>
,
<fpage>342</fpage>
<lpage>347</lpage>
(
<year>2011</year>
).
<pub-id pub-id-type="pmid">21451425</pub-id>
</mixed-citation>
</ref>
<ref id="b29">
<mixed-citation publication-type="journal">
<name>
<surname>Striem-Amit</surname>
<given-names>E.</given-names>
</name>
,
<name>
<surname>Cohen</surname>
<given-names>L.</given-names>
</name>
,
<name>
<surname>Dehaene</surname>
<given-names>S.</given-names>
</name>
&
<name>
<surname>Amedi</surname>
<given-names>A.</given-names>
</name>
<article-title>Reading with sounds: sensory substitution selectively activates the visual word form area in the blind</article-title>
.
<source>Neuron</source>
<volume>76</volume>
,
<fpage>640</fpage>
<lpage>652</lpage>
(
<year>2012</year>
).
<pub-id pub-id-type="pmid">23141074</pub-id>
</mixed-citation>
</ref>
<ref id="b30">
<mixed-citation publication-type="journal">
<name>
<surname>Abboud</surname>
<given-names>S.</given-names>
</name>
,
<name>
<surname>Hanassy</surname>
<given-names>S.</given-names>
</name>
,
<name>
<surname>Levy-Tzedek</surname>
<given-names>S.</given-names>
</name>
,
<name>
<surname>Maidenbaum</surname>
<given-names>S.</given-names>
</name>
&
<name>
<surname>Amedi</surname>
<given-names>A.</given-names>
</name>
<article-title>EyeMusic: Introducing a “visual” colorful experience for the blind using auditory sensory substitution</article-title>
.
<source>Restor. Neurol. Neurosci.</source>
,
<volume>32</volume>
,
<fpage>247</fpage>
<lpage>257</lpage>
(
<year>2014</year>
).</mixed-citation>
</ref>
<ref id="b31">
<mixed-citation publication-type="journal">
<name>
<surname>Meijer</surname>
<given-names>P. B.</given-names>
</name>
<article-title>An experimental system for auditory image representations</article-title>
.
<source>IEEE Trans. Biomed. Eng.</source>
<volume>39</volume>
,
<fpage>112</fpage>
<lpage>121</lpage>
(
<year>1992</year>
).
<pub-id pub-id-type="pmid">1612614</pub-id>
</mixed-citation>
</ref>
<ref id="b32">
<mixed-citation publication-type="journal">
<name>
<surname>Maurer</surname>
<given-names>D.</given-names>
</name>
,
<name>
<surname>Lewis</surname>
<given-names>T. L.</given-names>
</name>
&
<name>
<surname>Mondloch</surname>
<given-names>C. J.</given-names>
</name>
<article-title>Missing sights: consequences for visual cognitive development</article-title>
.
<source>Trends Cogn. Sci.</source>
<volume>9</volume>
,
<fpage>144</fpage>
<lpage>151</lpage>
(
<year>2005</year>
).
<pub-id pub-id-type="pmid">15737823</pub-id>
</mixed-citation>
</ref>
<ref id="b33">
<mixed-citation publication-type="journal">
<name>
<surname>Lewis</surname>
<given-names>T. L.</given-names>
</name>
&
<name>
<surname>Maurer</surname>
<given-names>D.</given-names>
</name>
<article-title>Multiple sensitive periods in human visual development: evidence from visually deprived children</article-title>
.
<source>Dev. Psychobiol.</source>
<volume>46</volume>
,
<fpage>163</fpage>
<lpage>183</lpage>
(
<year>2005</year>
).
<pub-id pub-id-type="pmid">15772974</pub-id>
</mixed-citation>
</ref>
<ref id="b34">
<mixed-citation publication-type="journal">
<name>
<surname>Marr</surname>
<given-names>D.</given-names>
</name>
<source>Vision</source>
(W.H.Freeman,
<year>1982</year>
).</mixed-citation>
</ref>
<ref id="b35">
<mixed-citation publication-type="journal">
<name>
<surname>Elli</surname>
<given-names>G. V.</given-names>
</name>
,
<name>
<surname>Benetti</surname>
<given-names>S.</given-names>
</name>
&
<name>
<surname>Collignon</surname>
<given-names>O.</given-names>
</name>
<article-title>Is There a Future for Sensory Substitution Outside Academic Laboratories?</article-title>
<source>Multisens. Res.</source>
<volume>27</volume>
,
<fpage>271</fpage>
<lpage>291</lpage>
(
<year>2014</year>
).
<pub-id pub-id-type="pmid">25693297</pub-id>
</mixed-citation>
</ref>
<ref id="b36">
<mixed-citation publication-type="journal">
<name>
<surname>Deroy</surname>
<given-names>O.</given-names>
</name>
&
<name>
<surname>Auvray</surname>
<given-names>M.</given-names>
</name>
<article-title>Reading the world through the skin and ears: a new perspective on sensory substitution</article-title>
.
<source>Front. Psychol.</source>
<volume>3</volume>
,
<pub-id pub-id-type="doi">10.3389/fpsyg.2012.00457</pub-id>
(
<year>2012</year>
).</mixed-citation>
</ref>
<ref id="b37">
<mixed-citation publication-type="journal">
<name>
<surname>Ward</surname>
<given-names>J.</given-names>
</name>
&
<name>
<surname>Meijer</surname>
<given-names>P.</given-names>
</name>
<article-title>Visual experiences in the blind induced by an auditory sensory substitution device</article-title>
.
<source>Conscious Cogn.</source>
<volume>19</volume>
,
<fpage>492</fpage>
<lpage>500</lpage>
(
<year>2010</year>
).
<pub-id pub-id-type="pmid">19955003</pub-id>
</mixed-citation>
</ref>
<ref id="b38">
<mixed-citation publication-type="journal">
<name>
<surname>Ostrovsky</surname>
<given-names>Y.</given-names>
</name>
,
<name>
<surname>Andalman</surname>
<given-names>A.</given-names>
</name>
&
<name>
<surname>Sinha</surname>
<given-names>P.</given-names>
</name>
<article-title>Vision following extended congenital blindness</article-title>
.
<source>Psychol. Sci.</source>
<volume>17</volume>
,
<fpage>1009</fpage>
<lpage>1014</lpage>
(
<year>2006</year>
).
<pub-id pub-id-type="pmid">17201779</pub-id>
</mixed-citation>
</ref>
<ref id="b39">
<mixed-citation publication-type="journal">
<name>
<surname>Maidenbaum</surname>
<given-names>S.</given-names>
</name>
,
<name>
<surname>Abboud</surname>
<given-names>S.</given-names>
</name>
&
<name>
<surname>Amedi</surname>
<given-names>A.</given-names>
</name>
<article-title>Sensory substitution: Closing the gap between basic research and widespread practical visual rehabilitation</article-title>
.
<source>Neurosci. Biobehav. Rev.</source>
<volume>41</volume>
,
<fpage>3</fpage>
<lpage>15</lpage>
(
<year>2014</year>
).
<pub-id pub-id-type="pmid">24275274</pub-id>
</mixed-citation>
</ref>
<ref id="b40">
<mixed-citation publication-type="journal">
<name>
<surname>Matteau</surname>
<given-names>I.</given-names>
</name>
,
<name>
<surname>Kupers</surname>
<given-names>R.</given-names>
</name>
,
<name>
<surname>Ricciardi</surname>
<given-names>E.</given-names>
</name>
,
<name>
<surname>Pietrini</surname>
<given-names>P.</given-names>
</name>
&
<name>
<surname>Ptito</surname>
<given-names>M.</given-names>
</name>
<article-title>Beyond visual, aural and haptic movement perception: hMT+ is activated by electrotactile motion stimulation of the tongue in sighted and in congenitally blind individuals</article-title>
.
<source>Brain Res. Bull.</source>
<volume>82</volume>
,
<fpage>264</fpage>
<lpage>270</lpage>
(
<year>2010</year>
).
<pub-id pub-id-type="pmid">20466041</pub-id>
</mixed-citation>
</ref>
<ref id="b41">
<mixed-citation publication-type="journal">
<name>
<surname>Amedi</surname>
<given-names>A.</given-names>
</name>
<italic>et al.</italic>
<article-title>Shape conveyed by visual-to-auditory sensory substitution activates the lateral occipital complex</article-title>
.
<source>Nat. Neurosci.</source>
<volume>10</volume>
,
<fpage>687</fpage>
<lpage>689</lpage>
(
<year>2007</year>
).
<pub-id pub-id-type="pmid">17515898</pub-id>
</mixed-citation>
</ref>
<ref id="b42">
<mixed-citation publication-type="journal">
<name>
<surname>Kim</surname>
<given-names>J. K.</given-names>
</name>
&
<name>
<surname>Zatorre</surname>
<given-names>R. J.</given-names>
</name>
<article-title>Tactile-auditory shape learning engages the lateral occipital complex</article-title>
.
<source>J. Neurosci.</source>
<volume>31</volume>
,
<fpage>7848</fpage>
<lpage>7856</lpage>
(
<year>2011</year>
).
<pub-id pub-id-type="pmid">21613498</pub-id>
</mixed-citation>
</ref>
<ref id="b43">
<mixed-citation publication-type="journal">
<name>
<surname>Ptito</surname>
<given-names>M.</given-names>
</name>
<italic>et al.</italic>
<article-title>Crossmodal recruitment of the ventral visual stream in congenital blindness</article-title>
.
<source>Neural Plast.</source>
<volume>2012</volume>
,
<pub-id pub-id-type="doi">10.1155/2012/304045</pub-id>
(
<year>2012</year>
).</mixed-citation>
</ref>
<ref id="b44">
<mixed-citation publication-type="journal">
<name>
<surname>Striem-Amit</surname>
<given-names>E.</given-names>
</name>
,
<name>
<surname>Dakwar</surname>
<given-names>O.</given-names>
</name>
,
<name>
<surname>Reich</surname>
<given-names>L.</given-names>
</name>
&
<name>
<surname>Amedi</surname>
<given-names>A.</given-names>
</name>
<article-title>The large-scale organization of “visual” streams emerges without visual experience</article-title>
.
<source>Cereb. Cortex</source>
<volume>22</volume>
,
<fpage>1698</fpage>
<lpage>1709</lpage>
(
<year>2012</year>
).
<pub-id pub-id-type="pmid">21940707</pub-id>
</mixed-citation>
</ref>
<ref id="b45">
<mixed-citation publication-type="journal">
<name>
<surname>Reich</surname>
<given-names>L.</given-names>
</name>
,
<name>
<surname>Maidenbaum</surname>
<given-names>S.</given-names>
</name>
&
<name>
<surname>Amedi</surname>
<given-names>A.</given-names>
</name>
<article-title>The brain as a flexible task machine: implications for visual rehabilitation using noninvasive vs. invasive approaches</article-title>
.
<source>Curr. Opin. Neurol.</source>
<volume>25</volume>
,
<fpage>86</fpage>
<lpage>95</lpage>
(
<year>2012</year>
).
<pub-id pub-id-type="pmid">22157107</pub-id>
</mixed-citation>
</ref>
<ref id="b46">
<mixed-citation publication-type="journal">
<name>
<surname>Ricciardi</surname>
<given-names>E.</given-names>
</name>
,
<name>
<surname>Bonino</surname>
<given-names>D.</given-names>
</name>
,
<name>
<surname>Pellegrini</surname>
<given-names>S.</given-names>
</name>
&
<name>
<surname>Pietrini</surname>
<given-names>P.</given-names>
</name>
<article-title>Mind the blind brain to understand the sighted one! Is there a supramodal cortical functional architecture?</article-title>
<source>Neurosci. Biobehav. Rev.</source>
<volume>41C</volume>
,
<fpage>64</fpage>
<lpage>77</lpage>
(
<year>2013</year>
).
<pub-id pub-id-type="pmid">24157726</pub-id>
</mixed-citation>
</ref>
<ref id="b47">
<mixed-citation publication-type="journal">
<name>
<surname>Merabet</surname>
<given-names>L. B.</given-names>
</name>
&
<name>
<surname>Pascual-Leone</surname>
<given-names>A.</given-names>
</name>
<article-title>Neural reorganization following sensory loss: the opportunity of change</article-title>
.
<source>Nat. Rev. Neurosci.</source>
<volume>11</volume>
,
<fpage>44</fpage>
<lpage>52</lpage>
(
<year>2010</year>
).
<pub-id pub-id-type="pmid">19935836</pub-id>
</mixed-citation>
</ref>
<ref id="b48">
<mixed-citation publication-type="journal">
<name>
<surname>Striem-Amit</surname>
<given-names>E.</given-names>
</name>
,
<name>
<surname>Guendelman</surname>
<given-names>M.</given-names>
</name>
&
<name>
<surname>Amedi</surname>
<given-names>A.</given-names>
</name>
<article-title>‘Visual’ acuity of the congenitally blind using visual-to-auditory sensory substitution</article-title>
.
<source>PLoS One</source>
<volume>7</volume>
,
<fpage>e33136</fpage>
(
<year>2012</year>
).
<pub-id pub-id-type="pmid">22438894</pub-id>
</mixed-citation>
</ref>
<ref id="b49">
<mixed-citation publication-type="journal">
<name>
<surname>Maidenbaum</surname>
<given-names>S.</given-names>
</name>
<italic>et al.</italic>
<article-title>The “EyeCane”, a new electronic travel aid for the blind: Technology, behavior & swift learning</article-title>
.
<source>Restor. Neurol. Neurosci.</source>
<volume>32</volume>
,
<fpage>813</fpage>
<lpage>824</lpage>
(
<year>2014</year>
).
<pub-id pub-id-type="pmid">25201814</pub-id>
</mixed-citation>
</ref>
<ref id="b50">
<mixed-citation publication-type="journal">
<name>
<surname>Kupers</surname>
<given-names>R.</given-names>
</name>
&
<name>
<surname>Ptito</surname>
<given-names>M.</given-names>
</name>
<article-title>Compensatory plasticity and cross-modal reorganization following early visual deprivation</article-title>
.
<source>Neurosci. Biobehav. Rev.</source>
<volume>41</volume>
,
<fpage>36</fpage>
<lpage>52</lpage>
(
<year>2013</year>
).
<pub-id pub-id-type="pmid">23954750</pub-id>
</mixed-citation>
</ref>
<ref id="b51">
<mixed-citation publication-type="journal">
<name>
<surname>Ptito</surname>
<given-names>M.</given-names>
</name>
,
<name>
<surname>Moesgaard</surname>
<given-names>S. M.</given-names>
</name>
,
<name>
<surname>Gjedde</surname>
<given-names>A.</given-names>
</name>
&
<name>
<surname>Kupers</surname>
<given-names>R.</given-names>
</name>
<article-title>Cross-modal plasticity revealed by electrotactile stimulation of the tongue in the congenitally blind</article-title>
.
<source>Brain</source>
<volume>128</volume>
,
<fpage>606</fpage>
<lpage>614</lpage>
(
<year>2005</year>
).
<pub-id pub-id-type="pmid">15634727</pub-id>
</mixed-citation>
</ref>
<ref id="b52">
<mixed-citation publication-type="journal">
<name>
<surname>Maurer</surname>
<given-names>D.</given-names>
</name>
&
<name>
<surname>Hensch</surname>
<given-names>T. K.</given-names>
</name>
<article-title>Amblyopia: background to the special issue on stroke recovery</article-title>
.
<source>Dev. Psychobiol.</source>
<volume>54</volume>
,
<fpage>224</fpage>
<lpage>238</lpage>
(
<year>2012</year>
).
<pub-id pub-id-type="pmid">22415912</pub-id>
</mixed-citation>
</ref>
<ref id="b53">
<mixed-citation publication-type="journal">
<name>
<surname>Levy-Tzedek</surname>
<given-names>S.</given-names>
</name>
<italic>et al.</italic>
<article-title>Cross-sensory transfer of sensory-motor information: visuomotor learning affects performance on an audiomotor task, using sensory-substitution</article-title>
.
<source>Sci. Rep.</source>
<volume>2</volume>
,
<fpage>949</fpage>
(
<year>2012</year>
).
<pub-id pub-id-type="pmid">23230514</pub-id>
</mixed-citation>
</ref>
</ref-list>
<fn-group>
<fn>
<p>
<bold>Author Contributions</bold>
Created and designed the experiments: L.R. and A.A. Conducted the experiments and analyzed the data: L.R. Wrote the manuscript: L.R. and A.A.</p>
</fn>
</fn-group>
</back>
<floats-group>
<fig id="f1">
<label>Figure 1</label>
<caption>
<title>Success of congenitally blind users of the vOICe SSD on the ‘visual’ parsing test.</title>
<p>(
<bold>A</bold>
) Skills and principles required for ‘visual’ parsing were not directly taught during the ~70-hour training, but rather were introduced very briefly (~20 minutes) in a pre-test training session. Different stimuli were used in the training and test phases. (
<bold>B</bold>
) Types of stimuli presented during the ‘visual’ parsing test: a) 1, 2 or 3 non-overlapping shapes (filled opaque or line drawing; see examples in i-iii); b) 2 overlapping shapes (filled opaque, line drawing or filled transparent; iv-vi); c) a single 3D shape (vii). Below each stimulus are the waveform and spectrogram of its soundscape (demonstrating that the visual shape is preserved in the sound). (
<bold>C</bold>
) The SSD-users performed significantly above chance in indicating the number of 2D shapes. Error bars represent standard error of the mean. (
<bold>D</bold>
) The SSD-users performed significantly above chance in parsing overlapping 2D objects and single 3D shapes. The average performance of the 7 congenitally fully blind individuals is depicted by blue bars, that of individual congenitally fully blind SSD-users by orange diamonds, and that of individual SSD-users who had very limited visual experience by cyan diamonds. *** denotes p < 0.0006.</p>
</caption>
<graphic xlink:href="srep15359-f1"></graphic>
</fig>
<fig id="f2">
<label>Figure 2</label>
<caption>
<title>Comparison of performance between the highly-trained congenitally blind SSD-users and a group of sighted controls totally naïve to SSD.</title>
<p>The group of 7 congenitally and fully blind SSD-users (blue bars) performed significantly better than a group of 7 age- and gender-matched naïve sighted individuals (orange bars). Error bars represent standard error of the mean. *** denotes p < 0.0006. ** denotes p < 0.005. * denotes p < 0.05.</p>
</caption>
<graphic xlink:href="srep15359-f2"></graphic>
</fig>
<fig id="f3">
<label>Figure 3</label>
<caption>
<title>Comparison of ‘visual’ parsing abilities of the congenitally blind SSD-users and medically sight-restored individuals.</title>
<p>(
<bold>A</bold>
) Main differences between the two groups. (
<bold>B</bold>
<bold>D</bold>
) Comparison of the performance of our group of blind SSD-users (who performed the task using SSD) vs. that of 3 sight-restored individuals tested by Ostrovsky
<italic>et al.</italic>
<xref ref-type="bibr" rid="b13">13</xref>
a short time and a longer time post-surgery (they performed the task visually). The average performance of the 7 congenitally fully blind SSD-users is depicted by blue bars, that of individual congenitally fully blind SSD-users by orange diamonds, and that of individual SSD-users who had very limited visual experience by cyan diamonds. Each sight-restored individual is represented by a green bar (performance tested a short time post-surgery) and a purple/pink bar (performance tested a longer time post-surgery). *** denotes p < 0.0006. The SSD-users outperformed the sight-restored individuals in most aspects tested, both at the group level and at the single-subject level. Data on the sight-restored individuals were adapted with permission from Ostrovsky
<italic>et al.</italic>
, 2009
<xref ref-type="bibr" rid="b13">13</xref>
. (
<bold>B</bold>
) Identifying 2 overlapping line-drawing shapes as 2 distinct objects. (
<bold>C</bold>
) Identifying 2 overlapping filled transparent shapes as 2 distinct objects. (
<bold>D</bold>
) Identifying a 3-dimensional shape as a single entity.</p>
</caption>
<graphic xlink:href="srep15359-f3"></graphic>
</fig>
<fig id="f4">
<label>Figure 4</label>
<caption>
<title>Combining SSDs with invasive means of sight-restoration to enhance visual abilities.</title>
<p>(
<bold>A</bold>
) Using SSDs to visually train individuals before invasive sight restoration to familiarize them with visually-unique features, and to strengthen the cortical visual networks. (
<bold>B</bold>
) Combining SSDs with visual prostheses as a neuro-rehabilitative post-operative aid. The system includes: 1) a camera continuously capturing images; 2) a processing unit which converts the visual information into (i) an auditory sensory-substitution representation and (ii) neural stimulation conveyed by the visual-prosthesis electrodes. In such a device, the prosthesis provides vivid visual qualia and the SSD provides explanatory input to the visual signal from the prosthesis. The dual synchronous visual information is expected to speed up rehabilitation. (
<bold>C</bold>
) Using SSDs to provide input beyond the maximal capabilities of the prosthesis. Thus, the technical resolution of ‘the vOICe’ SSD stimulation can be up to two orders of magnitude higher than that of currently available prostheses
<xref ref-type="bibr" rid="b4">4</xref>
. Therefore, while the information from the prosthesis might not suffice for various visual tasks such as determining the shape and vantage point of the house, the additional SSD-input would enhance perception.</p>
</caption>
<graphic xlink:href="srep15359-f4"></graphic>
</fig>
<table-wrap position="float" id="t1">
<label>Table 1</label>
<caption>
<title>Blind Participant Demographics.</title>
</caption>
<table frame="hsides" rules="groups" border="1">
<colgroup>
<col align="left"></col>
<col align="center"></col>
<col align="center"></col>
<col align="center"></col>
<col align="center"></col>
<col align="center"></col>
<col align="center"></col>
<col align="center"></col>
</colgroup>
<thead valign="bottom">
<tr>
<th align="left" valign="top" charoff="50">Subject</th>
<th align="center" valign="top" charoff="50">Age(years)&gender</th>
<th align="center" valign="top" charoff="50">Cause of blindness</th>
<th align="center" valign="top" charoff="50">Lightperception</th>
<th align="center" valign="top" charoff="50">Age ofblindnessonset(years)</th>
<th align="center" valign="top" charoff="50">Musicalexperience</th>
<th align="center" valign="top" charoff="50">Prior SSDexperience</th>
<th align="center" valign="top" charoff="50">Braillereading</th>
</tr>
</thead>
<tbody valign="top">
<tr>
<td align="left" valign="top" charoff="50">
<bold>EQ</bold>
</td>
<td align="center" valign="top" charoff="50">34 F</td>
<td align="center" valign="top" charoff="50">ROP</td>
<td align="center" valign="top" charoff="50">None</td>
<td align="center" valign="top" charoff="50">0</td>
<td align="center" valign="top" charoff="50">No</td>
<td align="center" valign="top" charoff="50">No</td>
<td align="center" valign="top" charoff="50">Yes</td>
</tr>
<tr>
<td align="left" valign="top" charoff="50">
<bold>FM</bold>
</td>
<td align="center" valign="top" charoff="50">28 F</td>
<td align="center" valign="top" charoff="50">Microphthalmia</td>
<td align="center" valign="top" charoff="50">None</td>
<td align="center" valign="top" charoff="50">0</td>
<td align="center" valign="top" charoff="50">No</td>
<td align="center" valign="top" charoff="50">No</td>
<td align="center" valign="top" charoff="50">Yes</td>
</tr>
<tr>
<td align="left" valign="top" charoff="50">
<bold>FN</bold>
</td>
<td align="center" valign="top" charoff="50">30 F</td>
<td align="center" valign="top" charoff="50">LCA</td>
<td align="center" valign="top" charoff="50">Faint</td>
<td align="center" valign="top" charoff="50">0</td>
<td align="center" valign="top" charoff="50">No</td>
<td align="center" valign="top" charoff="50">No</td>
<td align="center" valign="top" charoff="50">Yes</td>
</tr>
<tr>
<td align="left" valign="top" charoff="50">
<bold>HBH</bold>
</td>
<td align="center" valign="top" charoff="50">22 F</td>
<td align="center" valign="top" charoff="50">Microphthalmia, Retinal detachment</td>
<td align="center" valign="top" charoff="50">None</td>
<td align="center" valign="top" charoff="50">1</td>
<td align="center" valign="top" charoff="50">No</td>
<td align="center" valign="top" charoff="50">No</td>
<td align="center" valign="top" charoff="50">Yes</td>
</tr>
<tr>
<td align="left" valign="top" charoff="50">
<bold>IS</bold>
</td>
<td align="center" valign="top" charoff="50">29 F</td>
<td align="center" valign="top" charoff="50">ROP</td>
<td align="center" valign="top" charoff="50">None</td>
<td align="center" valign="top" charoff="50">0</td>
<td align="center" valign="top" charoff="50">Works as a music therapist</td>
<td align="center" valign="top" charoff="50">No</td>
<td align="center" valign="top" charoff="50">Yes</td>
</tr>
<tr>
<td align="left" valign="top" charoff="50">
<bold>PC</bold>
</td>
<td align="center" valign="top" charoff="50">36 M</td>
<td align="center" valign="top" charoff="50">ROP</td>
<td align="center" valign="top" charoff="50">None</td>
<td align="center" valign="top" charoff="50">0</td>
<td align="center" valign="top" charoff="50">No</td>
<td align="center" valign="top" charoff="50">No</td>
<td align="center" valign="top" charoff="50">Yes</td>
</tr>
<tr>
<td align="left" valign="top" charoff="50">
<bold>PH</bold>
</td>
<td align="center" valign="top" charoff="50">37 F</td>
<td align="center" valign="top" charoff="50">Congenital rubella</td>
<td align="center" valign="top" charoff="50">None</td>
<td align="center" valign="top" charoff="50">0</td>
<td align="center" valign="top" charoff="50">played musical instruments for ~7 years</td>
<td align="center" valign="top" charoff="50">No</td>
<td align="center" valign="top" charoff="50">Yes</td>
</tr>
<tr>
<td align="left" valign="top" charoff="50">
<bold>TT</bold>
</td>
<td align="center" valign="top" charoff="50">53 M</td>
<td align="center" valign="top" charoff="50">ROP</td>
<td align="center" valign="top" charoff="50">None</td>
<td align="center" valign="top" charoff="50">0</td>
<td align="center" valign="top" charoff="50">No</td>
<td align="center" valign="top" charoff="50">No</td>
<td align="center" valign="top" charoff="50">Yes</td>
</tr>
<tr>
<td align="left" valign="top" charoff="50">
<bold>UM</bold>
</td>
<td align="center" valign="top" charoff="50">21 F</td>
<td align="center" valign="top" charoff="50">ROP</td>
<td align="center" valign="top" charoff="50">None</td>
<td align="center" valign="top" charoff="50">0</td>
<td align="center" valign="top" charoff="50">No</td>
<td align="center" valign="top" charoff="50">No</td>
<td align="center" valign="top" charoff="50">Yes</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<fn id="t1-fn1">
<p>ROP: Retinopathy of prematurity; LCA: Leber congenital amaurosis.</p>
</fn>
</table-wrap-foot>
</table-wrap>
</floats-group>
</pmc>
</record>
