Exploration server on haptic devices

Please note: this site is under development!
Please note: this site is generated automatically from raw corpora.
The information has therefore not been validated.

Aging and Sensory Substitution in a Virtual Navigation Task

Internal identifier: 000168 (Pmc/Checkpoint); previous: 000167; next: 000169


Authors: S. Levy-Tzedek [Israel]; S. Maidenbaum [Israel]; A. Amedi [Israel, France]; J. Lackner [United States]

Source:

RBID: PMC:4805187

Abstract

Virtual environments are becoming ubiquitous, and used in a variety of contexts–from entertainment to training and rehabilitation. Recently, technology for making them more accessible to blind or visually impaired users has been developed, by using sound to represent visual information. The ability of older individuals to interpret these cues has not yet been studied. In this experiment, we studied the effects of age and sensory modality (visual or auditory) on navigation through a virtual maze. We added a layer of complexity by conducting the experiment in a rotating room, in order to test the effect of the spatial bias induced by the rotation on performance. Results from 29 participants showed that with the auditory cues, it took participants a longer time to complete the mazes, they took a longer path length through the maze, they paused more, and had more collisions with the walls, compared to navigation with the visual cues. The older group took a longer time to complete the mazes, they paused more, and had more collisions with the walls, compared to the younger group. There was no effect of room rotation on the performance, nor were there any significant interactions among age, feedback modality and room rotation. We conclude that there is a decline in performance with age, and that while navigation with auditory cues is possible even at an old age, it presents more challenges than visual navigation.


URL:
DOI: 10.1371/journal.pone.0151593
PubMed: 27007812
PubMed Central: 4805187



The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">Aging and Sensory Substitution in a Virtual Navigation Task</title>
<author>
<name sortKey="Levy Tzedek, S" sort="Levy Tzedek, S" uniqKey="Levy Tzedek S" first="S." last="Levy-Tzedek">S. Levy-Tzedek</name>
<affiliation wicri:level="1">
<nlm:aff id="aff001">
<addr-line>Recanati School for Community Health Professions, Department of Physical Therapy, Ben Gurion University of the Negev, Beer-Sheva, Israel</addr-line>
</nlm:aff>
<country xml:lang="fr">Israël</country>
<wicri:regionArea>Recanati School for Community Health Professions, Department of Physical Therapy, Ben Gurion University of the Negev, Beer-Sheva</wicri:regionArea>
<wicri:noRegion>Beer-Sheva</wicri:noRegion>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="aff002">
<addr-line>Zlotowski Center for Neuroscience, Ben Gurion University of the Negev, Beer-Sheva, Israel</addr-line>
</nlm:aff>
<country xml:lang="fr">Israël</country>
<wicri:regionArea>Zlotowski Center for Neuroscience, Ben Gurion University of the Negev, Beer-Sheva</wicri:regionArea>
<wicri:noRegion>Beer-Sheva</wicri:noRegion>
</affiliation>
</author>
<author>
<name sortKey="Maidenbaum, S" sort="Maidenbaum, S" uniqKey="Maidenbaum S" first="S." last="Maidenbaum">S. Maidenbaum</name>
<affiliation wicri:level="1">
<nlm:aff id="aff003">
<addr-line>Department of Medical Neurobiology, Institute for Medical Research Israel-Canada, Faculty of Medicine, Hebrew University of Jerusalem, Jerusalem, Israel</addr-line>
</nlm:aff>
<country xml:lang="fr">Israël</country>
<wicri:regionArea>Department of Medical Neurobiology, Institute for Medical Research Israel-Canada, Faculty of Medicine, Hebrew University of Jerusalem, Jerusalem</wicri:regionArea>
<wicri:noRegion>Jerusalem</wicri:noRegion>
</affiliation>
</author>
<author>
<name sortKey="Amedi, A" sort="Amedi, A" uniqKey="Amedi A" first="A." last="Amedi">A. Amedi</name>
<affiliation wicri:level="1">
<nlm:aff id="aff003">
<addr-line>Department of Medical Neurobiology, Institute for Medical Research Israel-Canada, Faculty of Medicine, Hebrew University of Jerusalem, Jerusalem, Israel</addr-line>
</nlm:aff>
<country xml:lang="fr">Israël</country>
<wicri:regionArea>Department of Medical Neurobiology, Institute for Medical Research Israel-Canada, Faculty of Medicine, Hebrew University of Jerusalem, Jerusalem</wicri:regionArea>
<wicri:noRegion>Jerusalem</wicri:noRegion>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="aff004">
<addr-line>Department of Cognitive Science, Faculty of Humanities, Hebrew University of Jerusalem, Jerusalem, Israel</addr-line>
</nlm:aff>
<country xml:lang="fr">Israël</country>
<wicri:regionArea>Department of Cognitive Science, Faculty of Humanities, Hebrew University of Jerusalem, Jerusalem</wicri:regionArea>
<wicri:noRegion>Jerusalem</wicri:noRegion>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="aff005">
<addr-line>Sorbonne Universités UPMC Univ Paris 06, Institut de la Vision, Paris, France</addr-line>
</nlm:aff>
<country xml:lang="fr">France</country>
<wicri:regionArea>Sorbonne Universités UPMC Univ Paris 06, Institut de la Vision, Paris</wicri:regionArea>
<placeName>
<settlement type="city">Paris</settlement>
</placeName>
</affiliation>
</author>
<author>
<name sortKey="Lackner, J" sort="Lackner, J" uniqKey="Lackner J" first="J." last="Lackner">J. Lackner</name>
<affiliation wicri:level="2">
<nlm:aff id="aff006">
<addr-line>Ashton Graybiel Spatial Orientation Laboratory, Department of Physiology, Brandeis University, Waltham, Massachusetts, United States of America</addr-line>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea>Ashton Graybiel Spatial Orientation Laboratory, Department of Physiology, Brandeis University, Waltham, Massachusetts</wicri:regionArea>
<placeName>
<region type="state">Massachusetts</region>
</placeName>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PMC</idno>
<idno type="pmid">27007812</idno>
<idno type="pmc">4805187</idno>
<idno type="url">http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4805187</idno>
<idno type="RBID">PMC:4805187</idno>
<idno type="doi">10.1371/journal.pone.0151593</idno>
<date when="2016">2016</date>
<idno type="wicri:Area/Pmc/Corpus">000558</idno>
<idno type="wicri:Area/Pmc/Curation">000558</idno>
<idno type="wicri:Area/Pmc/Checkpoint">000168</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a" type="main">Aging and Sensory Substitution in a Virtual Navigation Task</title>
<author>
<name sortKey="Levy Tzedek, S" sort="Levy Tzedek, S" uniqKey="Levy Tzedek S" first="S." last="Levy-Tzedek">S. Levy-Tzedek</name>
<affiliation wicri:level="1">
<nlm:aff id="aff001">
<addr-line>Recanati School for Community Health Professions, Department of Physical Therapy, Ben Gurion University of the Negev, Beer-Sheva, Israel</addr-line>
</nlm:aff>
<country xml:lang="fr">Israël</country>
<wicri:regionArea>Recanati School for Community Health Professions, Department of Physical Therapy, Ben Gurion University of the Negev, Beer-Sheva</wicri:regionArea>
<wicri:noRegion>Beer-Sheva</wicri:noRegion>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="aff002">
<addr-line>Zlotowski Center for Neuroscience, Ben Gurion University of the Negev, Beer-Sheva, Israel</addr-line>
</nlm:aff>
<country xml:lang="fr">Israël</country>
<wicri:regionArea>Zlotowski Center for Neuroscience, Ben Gurion University of the Negev, Beer-Sheva</wicri:regionArea>
<wicri:noRegion>Beer-Sheva</wicri:noRegion>
</affiliation>
</author>
<author>
<name sortKey="Maidenbaum, S" sort="Maidenbaum, S" uniqKey="Maidenbaum S" first="S." last="Maidenbaum">S. Maidenbaum</name>
<affiliation wicri:level="1">
<nlm:aff id="aff003">
<addr-line>Department of Medical Neurobiology, Institute for Medical Research Israel-Canada, Faculty of Medicine, Hebrew University of Jerusalem, Jerusalem, Israel</addr-line>
</nlm:aff>
<country xml:lang="fr">Israël</country>
<wicri:regionArea>Department of Medical Neurobiology, Institute for Medical Research Israel-Canada, Faculty of Medicine, Hebrew University of Jerusalem, Jerusalem</wicri:regionArea>
<wicri:noRegion>Jerusalem</wicri:noRegion>
</affiliation>
</author>
<author>
<name sortKey="Amedi, A" sort="Amedi, A" uniqKey="Amedi A" first="A." last="Amedi">A. Amedi</name>
<affiliation wicri:level="1">
<nlm:aff id="aff003">
<addr-line>Department of Medical Neurobiology, Institute for Medical Research Israel-Canada, Faculty of Medicine, Hebrew University of Jerusalem, Jerusalem, Israel</addr-line>
</nlm:aff>
<country xml:lang="fr">Israël</country>
<wicri:regionArea>Department of Medical Neurobiology, Institute for Medical Research Israel-Canada, Faculty of Medicine, Hebrew University of Jerusalem, Jerusalem</wicri:regionArea>
<wicri:noRegion>Jerusalem</wicri:noRegion>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="aff004">
<addr-line>Department of Cognitive Science, Faculty of Humanities, Hebrew University of Jerusalem, Jerusalem, Israel</addr-line>
</nlm:aff>
<country xml:lang="fr">Israël</country>
<wicri:regionArea>Department of Cognitive Science, Faculty of Humanities, Hebrew University of Jerusalem, Jerusalem</wicri:regionArea>
<wicri:noRegion>Jerusalem</wicri:noRegion>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="aff005">
<addr-line>Sorbonne Universités UPMC Univ Paris 06, Institut de la Vision, Paris, France</addr-line>
</nlm:aff>
<country xml:lang="fr">France</country>
<wicri:regionArea>Sorbonne Universités UPMC Univ Paris 06, Institut de la Vision, Paris</wicri:regionArea>
<placeName>
<settlement type="city">Paris</settlement>
</placeName>
</affiliation>
</author>
<author>
<name sortKey="Lackner, J" sort="Lackner, J" uniqKey="Lackner J" first="J." last="Lackner">J. Lackner</name>
<affiliation wicri:level="2">
<nlm:aff id="aff006">
<addr-line>Ashton Graybiel Spatial Orientation Laboratory, Department of Physiology, Brandeis University, Waltham, Massachusetts, United States of America</addr-line>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea>Ashton Graybiel Spatial Orientation Laboratory, Department of Physiology, Brandeis University, Waltham, Massachusetts</wicri:regionArea>
<placeName>
<region type="state">Massachusetts</region>
</placeName>
</affiliation>
</author>
</analytic>
<series>
<title level="j">PLoS ONE</title>
<idno type="eISSN">1932-6203</idno>
<imprint>
<date when="2016">2016</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass></textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">
<p>Virtual environments are becoming ubiquitous, and used in a variety of contexts–from entertainment to training and rehabilitation. Recently, technology for making them more accessible to blind or visually impaired users has been developed, by using sound to represent visual information. The ability of older individuals to interpret these cues has not yet been studied. In this experiment, we studied the effects of age and sensory modality (visual or auditory) on navigation through a virtual maze. We added a layer of complexity by conducting the experiment in a rotating room, in order to test the effect of the spatial bias induced by the rotation on performance. Results from 29 participants showed that with the auditory cues, it took participants a longer time to complete the mazes, they took a longer path length through the maze, they paused more, and had more collisions with the walls, compared to navigation with the visual cues. The older group took a longer time to complete the mazes, they paused more, and had more collisions with the walls, compared to the younger group. There was no effect of room rotation on the performance, nor were there any significant interactions among age, feedback modality and room rotation. We conclude that there is a decline in performance with age, and that while navigation with auditory cues is possible even at an old age, it presents more challenges than visual navigation.</p>
</div>
</front>
<back>
<div1 type="bibliography">
<listBibl>
<biblStruct>
<analytic>
<author>
<name sortKey="Levy Tzedek, S" uniqKey="Levy Tzedek S">S Levy-Tzedek</name>
</author>
<author>
<name sortKey="Riemer, D" uniqKey="Riemer D">D Riemer</name>
</author>
<author>
<name sortKey="Amedi, A" uniqKey="Amedi A">A Amedi</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Auvray, M" uniqKey="Auvray M">M Auvray</name>
</author>
<author>
<name sortKey="Hanneton, S" uniqKey="Hanneton S">S Hanneton</name>
</author>
<author>
<name sortKey="O Regan, Jk" uniqKey="O Regan J">JK O Regan</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Proulx, Mj" uniqKey="Proulx M">MJ Proulx</name>
</author>
<author>
<name sortKey="Gwinnutt, J" uniqKey="Gwinnutt J">J Gwinnutt</name>
</author>
<author>
<name sortKey="Dell Rba, S" uniqKey="Dell Rba S">S Dell’Erba</name>
</author>
<author>
<name sortKey="Levy Tzedek, S" uniqKey="Levy Tzedek S">S Levy-Tzedek</name>
</author>
<author>
<name sortKey="De Sousa, Aa" uniqKey="De Sousa A">AA de Sousa</name>
</author>
<author>
<name sortKey="Brown, Dj" uniqKey="Brown D">DJ Brown</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Shull, Pb" uniqKey="Shull P">PB Shull</name>
</author>
<author>
<name sortKey="Damian, Dd" uniqKey="Damian D">DD Damian</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Li, S C" uniqKey="Li S">S-C Li</name>
</author>
<author>
<name sortKey="Lindenberger, U" uniqKey="Lindenberger U">U Lindenberger</name>
</author>
<author>
<name sortKey="Sikstroem, S" uniqKey="Sikstroem S">S Sikstroem</name>
</author>
</analytic>
</biblStruct>
<biblStruct></biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hartshorne, Jk" uniqKey="Hartshorne J">JK Hartshorne</name>
</author>
<author>
<name sortKey="Germine, Lt" uniqKey="Germine L">LT Germine</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Wilkniss, Sm" uniqKey="Wilkniss S">SM Wilkniss</name>
</author>
<author>
<name sortKey="Jones, Mg" uniqKey="Jones M">MG Jones</name>
</author>
<author>
<name sortKey="Korol, Dl" uniqKey="Korol D">DL Korol</name>
</author>
<author>
<name sortKey="Gold, Pe" uniqKey="Gold P">PE Gold</name>
</author>
<author>
<name sortKey="Manning, Ca" uniqKey="Manning C">CA Manning</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Yan, Jh" uniqKey="Yan J">JH Yan</name>
</author>
<author>
<name sortKey="Thomas, Jr" uniqKey="Thomas J">JR Thomas</name>
</author>
<author>
<name sortKey="Stelmach, Ge" uniqKey="Stelmach G">GE Stelmach</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Moffat, Sd" uniqKey="Moffat S">SD Moffat</name>
</author>
</analytic>
</biblStruct>
<biblStruct></biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Maidenbaum, S" uniqKey="Maidenbaum S">S Maidenbaum</name>
</author>
<author>
<name sortKey="Hanassy, S" uniqKey="Hanassy S">S Hanassy</name>
</author>
<author>
<name sortKey="Abboud, S" uniqKey="Abboud S">S Abboud</name>
</author>
<author>
<name sortKey="Buchs, G" uniqKey="Buchs G">G Buchs</name>
</author>
<author>
<name sortKey="Chebat, D R" uniqKey="Chebat D">D-R Chebat</name>
</author>
<author>
<name sortKey="Levy Tzedek, S" uniqKey="Levy Tzedek S">S Levy-Tzedek</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Maidenbaum, S" uniqKey="Maidenbaum S">S Maidenbaum</name>
</author>
<author>
<name sortKey="Levy Tzedek, S" uniqKey="Levy Tzedek S">S Levy-Tzedek</name>
</author>
<author>
<name sortKey="Chebat, Dr" uniqKey="Chebat D">DR Chebat</name>
</author>
<author>
<name sortKey="Namer Furstenberg, R" uniqKey="Namer Furstenberg R">R Namer-Furstenberg</name>
</author>
<author>
<name sortKey="Amedi, A" uniqKey="Amedi A">A Amedi</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Maidenbaum, S" uniqKey="Maidenbaum S">S Maidenbaum</name>
</author>
<author>
<name sortKey="Levy Tzedek, S" uniqKey="Levy Tzedek S">S Levy-Tzedek</name>
</author>
<author>
<name sortKey="Chebat, D R" uniqKey="Chebat D">D-R Chebat</name>
</author>
<author>
<name sortKey="Amedi, A" uniqKey="Amedi A">A Amedi</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Merabet, Lb" uniqKey="Merabet L">LB Merabet</name>
</author>
<author>
<name sortKey="Connors, Ec" uniqKey="Connors E">EC Connors</name>
</author>
<author>
<name sortKey="Halko, Ma" uniqKey="Halko M">MA Halko</name>
</author>
<author>
<name sortKey="Sanchez, J" uniqKey="Sanchez J">J Sanchez</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Smith, K" uniqKey="Smith K">K Smith</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Westin, T" uniqKey="Westin T">T Westin</name>
</author>
<author>
<name sortKey="Bierre, K" uniqKey="Bierre K">K Bierre</name>
</author>
<author>
<name sortKey="Gramenos, D" uniqKey="Gramenos D">D Gramenos</name>
</author>
<author>
<name sortKey="Hinn, M" uniqKey="Hinn M">M Hinn</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Levy Tzedek, S" uniqKey="Levy Tzedek S">S Levy-Tzedek</name>
</author>
<author>
<name sortKey="Hanassy, S" uniqKey="Hanassy S">S Hanassy</name>
</author>
<author>
<name sortKey="Abboud, S" uniqKey="Abboud S">S Abboud</name>
</author>
<author>
<name sortKey="Shachar, M" uniqKey="Shachar M">M Shachar</name>
</author>
<author>
<name sortKey="Amedi, A" uniqKey="Amedi A">A Amedi</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Levy Tzedek, S" uniqKey="Levy Tzedek S">S Levy-Tzedek</name>
</author>
<author>
<name sortKey="Novick, I" uniqKey="Novick I">I Novick</name>
</author>
<author>
<name sortKey="Arbel, R" uniqKey="Arbel R">R Arbel</name>
</author>
<author>
<name sortKey="Abboud, S" uniqKey="Abboud S">S Abboud</name>
</author>
<author>
<name sortKey="Maidenbaum, S" uniqKey="Maidenbaum S">S Maidenbaum</name>
</author>
<author>
<name sortKey="Vaadia, E" uniqKey="Vaadia E">E Vaadia</name>
</author>
</analytic>
</biblStruct>
<biblStruct></biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Carriot, J" uniqKey="Carriot J">J Carriot</name>
</author>
<author>
<name sortKey="Bryan, A" uniqKey="Bryan A">A Bryan</name>
</author>
<author>
<name sortKey="Dizio, P" uniqKey="Dizio P">P DiZio</name>
</author>
<author>
<name sortKey="Lackner, Jr" uniqKey="Lackner J">JR Lackner</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lackner, Jr" uniqKey="Lackner J">JR Lackner</name>
</author>
<author>
<name sortKey="Dizio, P" uniqKey="Dizio P">P DiZio</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Box, Ge" uniqKey="Box G">GE Box</name>
</author>
<author>
<name sortKey="Cox, Dr" uniqKey="Cox D">DR Cox</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Rodgers, Mk" uniqKey="Rodgers M">MK Rodgers</name>
</author>
<author>
<name sortKey="Sindone, Ja" uniqKey="Sindone J">JA Sindone</name>
</author>
<author>
<name sortKey="Moffat, Sd" uniqKey="Moffat S">SD Moffat</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Moffat, Sd" uniqKey="Moffat S">SD Moffat</name>
</author>
<author>
<name sortKey="Zonderman, Ab" uniqKey="Zonderman A">AB Zonderman</name>
</author>
<author>
<name sortKey="Resnick, Sm" uniqKey="Resnick S">SM Resnick</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Iaria, G" uniqKey="Iaria G">G Iaria</name>
</author>
<author>
<name sortKey="Palermo, L" uniqKey="Palermo L">L Palermo</name>
</author>
<author>
<name sortKey="Committeri, G" uniqKey="Committeri G">G Committeri</name>
</author>
<author>
<name sortKey="Barton, Jj" uniqKey="Barton J">JJ Barton</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Goeke, C" uniqKey="Goeke C">C Goeke</name>
</author>
<author>
<name sortKey="Kornpetpanee, S" uniqKey="Kornpetpanee S">S Kornpetpanee</name>
</author>
<author>
<name sortKey="Koster, M" uniqKey="Koster M">M Köster</name>
</author>
<author>
<name sortKey="Fernandez Revelles, Ab" uniqKey="Fernandez Revelles A">AB Fernández-Revelles</name>
</author>
<author>
<name sortKey="Gramann, K" uniqKey="Gramann K">K Gramann</name>
</author>
<author>
<name sortKey="Konig, P" uniqKey="Konig P">P König</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Byrne, P" uniqKey="Byrne P">P Byrne</name>
</author>
<author>
<name sortKey="Becker, S" uniqKey="Becker S">S Becker</name>
</author>
<author>
<name sortKey="Burgess, N" uniqKey="Burgess N">N Burgess</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lin, Fr" uniqKey="Lin F">FR Lin</name>
</author>
<author>
<name sortKey="Thorpe, R" uniqKey="Thorpe R">R Thorpe</name>
</author>
<author>
<name sortKey="Gordon Salant, S" uniqKey="Gordon Salant S">S Gordon-Salant</name>
</author>
<author>
<name sortKey="Ferrucci, L" uniqKey="Ferrucci L">L Ferrucci</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ouda, L" uniqKey="Ouda L">L Ouda</name>
</author>
<author>
<name sortKey="Profant, O" uniqKey="Profant O">O Profant</name>
</author>
<author>
<name sortKey="Syka, J" uniqKey="Syka J">J Syka</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Stenfelt, S" uniqKey="Stenfelt S">S Stenfelt</name>
</author>
<author>
<name sortKey="Roennberg, J" uniqKey="Roennberg J">J Roennberg</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Rosenbaum, Rs" uniqKey="Rosenbaum R">RS Rosenbaum</name>
</author>
<author>
<name sortKey="Ziegler, M" uniqKey="Ziegler M">M Ziegler</name>
</author>
<author>
<name sortKey="Winocur, G" uniqKey="Winocur G">G Winocur</name>
</author>
<author>
<name sortKey="Grady, Cl" uniqKey="Grady C">CL Grady</name>
</author>
<author>
<name sortKey="Moscovitch, M" uniqKey="Moscovitch M">M Moscovitch</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Moffat, Sd" uniqKey="Moffat S">SD Moffat</name>
</author>
<author>
<name sortKey="Elkins, W" uniqKey="Elkins W">W Elkins</name>
</author>
<author>
<name sortKey="Resnick, Sm" uniqKey="Resnick S">SM Resnick</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Himann, Je" uniqKey="Himann J">JE Himann</name>
</author>
<author>
<name sortKey="Cunningham, Da" uniqKey="Cunningham D">DA Cunningham</name>
</author>
<author>
<name sortKey="Rechnitzer, Pa" uniqKey="Rechnitzer P">PA Rechnitzer</name>
</author>
<author>
<name sortKey="Paterson, Dh" uniqKey="Paterson D">DH Paterson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kimura, T" uniqKey="Kimura T">T Kimura</name>
</author>
<author>
<name sortKey="Kobayashi, H" uniqKey="Kobayashi H">H Kobayashi</name>
</author>
<author>
<name sortKey="Nakayama, E" uniqKey="Nakayama E">E Nakayama</name>
</author>
<author>
<name sortKey="Hanaoka, M" uniqKey="Hanaoka M">M Hanaoka</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Liu, Y C" uniqKey="Liu Y">Y-C Liu</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bushara, Ko" uniqKey="Bushara K">KO Bushara</name>
</author>
<author>
<name sortKey="Weeks, Ra" uniqKey="Weeks R">RA Weeks</name>
</author>
<author>
<name sortKey="Ishii, K" uniqKey="Ishii K">K Ishii</name>
</author>
<author>
<name sortKey="Catalan, M J" uniqKey="Catalan M">M-J Catalan</name>
</author>
<author>
<name sortKey="Tian, B" uniqKey="Tian B">B Tian</name>
</author>
<author>
<name sortKey="Rauschecker, Jp" uniqKey="Rauschecker J">JP Rauschecker</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Loomis, Jm" uniqKey="Loomis J">JM Loomis</name>
</author>
<author>
<name sortKey="Klatzky, Rl" uniqKey="Klatzky R">RL Klatzky</name>
</author>
<author>
<name sortKey="Mchugh, B" uniqKey="Mchugh B">B McHugh</name>
</author>
<author>
<name sortKey="Giudice, Na" uniqKey="Giudice N">NA Giudice</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mahmood, O" uniqKey="Mahmood O">O Mahmood</name>
</author>
<author>
<name sortKey="Adamo, D" uniqKey="Adamo D">D Adamo</name>
</author>
<author>
<name sortKey="Briceno, E" uniqKey="Briceno E">E Briceno</name>
</author>
<author>
<name sortKey="Moffat, Sd" uniqKey="Moffat S">SD Moffat</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Boot, Wr" uniqKey="Boot W">WR Boot</name>
</author>
<author>
<name sortKey="Blakely, Dp" uniqKey="Blakely D">DP Blakely</name>
</author>
<author>
<name sortKey="Simons, Dj" uniqKey="Simons D">DJ Simons</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Spence, I" uniqKey="Spence I">I Spence</name>
</author>
<author>
<name sortKey="Feng, J" uniqKey="Feng J">J Feng</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Dehaene, S" uniqKey="Dehaene S">S Dehaene</name>
</author>
<author>
<name sortKey="Nakamura, K" uniqKey="Nakamura K">K Nakamura</name>
</author>
<author>
<name sortKey="Jobert, A" uniqKey="Jobert A">A Jobert</name>
</author>
<author>
<name sortKey="Kuroki, C" uniqKey="Kuroki C">C Kuroki</name>
</author>
<author>
<name sortKey="Ogawa, S" uniqKey="Ogawa S">S Ogawa</name>
</author>
<author>
<name sortKey="Cohen, L" uniqKey="Cohen L">L Cohen</name>
</author>
</analytic>
</biblStruct>
</listBibl>
</div1>
</back>
</TEI>
<pmc article-type="research-article">
<pmc-dir>properties open_access</pmc-dir>
<front>
<journal-meta>
<journal-id journal-id-type="nlm-ta">PLoS One</journal-id>
<journal-id journal-id-type="iso-abbrev">PLoS ONE</journal-id>
<journal-id journal-id-type="publisher-id">plos</journal-id>
<journal-id journal-id-type="pmc">plosone</journal-id>
<journal-title-group>
<journal-title>PLoS ONE</journal-title>
</journal-title-group>
<issn pub-type="epub">1932-6203</issn>
<publisher>
<publisher-name>Public Library of Science</publisher-name>
<publisher-loc>San Francisco, CA USA</publisher-loc>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="pmid">27007812</article-id>
<article-id pub-id-type="pmc">4805187</article-id>
<article-id pub-id-type="doi">10.1371/journal.pone.0151593</article-id>
<article-id pub-id-type="publisher-id">PONE-D-15-41129</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Research Article</subject>
</subj-group>
<subj-group subj-group-type="Discipline-v3">
<subject>Biology and Life Sciences</subject>
<subj-group>
<subject>Neuroscience</subject>
<subj-group>
<subject>Sensory Perception</subject>
<subj-group>
<subject>Vision</subject>
</subj-group>
</subj-group>
</subj-group>
</subj-group>
<subj-group subj-group-type="Discipline-v3">
<subject>Biology and Life Sciences</subject>
<subj-group>
<subject>Psychology</subject>
<subj-group>
<subject>Sensory Perception</subject>
<subj-group>
<subject>Vision</subject>
</subj-group>
</subj-group>
</subj-group>
</subj-group>
<subj-group subj-group-type="Discipline-v3">
<subject>Social Sciences</subject>
<subj-group>
<subject>Psychology</subject>
<subj-group>
<subject>Sensory Perception</subject>
<subj-group>
<subject>Vision</subject>
</subj-group>
</subj-group>
</subj-group>
</subj-group>
<subj-group subj-group-type="Discipline-v3">
<subject>Biology and Life Sciences</subject>
<subj-group>
<subject>Neuroscience</subject>
<subj-group>
<subject>Sensory Perception</subject>
</subj-group>
</subj-group>
</subj-group>
<subj-group subj-group-type="Discipline-v3">
<subject>Biology and Life Sciences</subject>
<subj-group>
<subject>Psychology</subject>
<subj-group>
<subject>Sensory Perception</subject>
</subj-group>
</subj-group>
</subj-group>
<subj-group subj-group-type="Discipline-v3">
<subject>Social Sciences</subject>
<subj-group>
<subject>Psychology</subject>
<subj-group>
<subject>Sensory Perception</subject>
</subj-group>
</subj-group>
</subj-group>
<subj-group subj-group-type="Discipline-v3">
<subject>People and Places</subject>
<subj-group>
<subject>Population Groupings</subject>
<subj-group>
<subject>Age Groups</subject>
<subj-group>
<subject>Elderly</subject>
</subj-group>
</subj-group>
</subj-group>
</subj-group>
<subj-group subj-group-type="Discipline-v3">
<subject>Biology and Life Sciences</subject>
<subj-group>
<subject>Developmental Biology</subject>
<subj-group>
<subject>Organism Development</subject>
<subj-group>
<subject>Aging</subject>
</subj-group>
</subj-group>
</subj-group>
</subj-group>
<subj-group subj-group-type="Discipline-v3">
<subject>Biology and Life Sciences</subject>
<subj-group>
<subject>Physiology</subject>
<subj-group>
<subject>Physiological Processes</subject>
<subj-group>
<subject>Aging</subject>
</subj-group>
</subj-group>
</subj-group>
</subj-group>
<subj-group subj-group-type="Discipline-v3">
<subject>Medicine and Health Sciences</subject>
<subj-group>
<subject>Physiology</subject>
<subj-group>
<subject>Physiological Processes</subject>
<subj-group>
<subject>Aging</subject>
</subj-group>
</subj-group>
</subj-group>
</subj-group>
<subj-group subj-group-type="Discipline-v3">
<subject>Medicine and Health Sciences</subject>
<subj-group>
<subject>Ophthalmology</subject>
<subj-group>
<subject>Visual Impairments</subject>
</subj-group>
</subj-group>
</subj-group>
<subj-group subj-group-type="Discipline-v3">
<subject>Biology and Life Sciences</subject>
<subj-group>
<subject>Neuroscience</subject>
<subj-group>
<subject>Cognitive Science</subject>
<subj-group>
<subject>Cognitive Psychology</subject>
<subj-group>
<subject>Learning</subject>
</subj-group>
</subj-group>
</subj-group>
</subj-group>
</subj-group>
<subj-group subj-group-type="Discipline-v3">
<subject>Biology and Life Sciences</subject>
<subj-group>
<subject>Psychology</subject>
<subj-group>
<subject>Cognitive Psychology</subject>
<subj-group>
<subject>Learning</subject>
</subj-group>
</subj-group>
</subj-group>
</subj-group>
<subj-group subj-group-type="Discipline-v3">
<subject>Social Sciences</subject>
<subj-group>
<subject>Psychology</subject>
<subj-group>
<subject>Cognitive Psychology</subject>
<subj-group>
<subject>Learning</subject>
</subj-group>
</subj-group>
</subj-group>
</subj-group>
<subj-group subj-group-type="Discipline-v3">
<subject>Biology and Life Sciences</subject>
<subj-group>
<subject>Neuroscience</subject>
<subj-group>
<subject>Learning and Memory</subject>
<subj-group>
<subject>Learning</subject>
</subj-group>
</subj-group>
</subj-group>
</subj-group>
<subj-group subj-group-type="Discipline-v3">
<subject>Biology and Life Sciences</subject>
<subj-group>
<subject>Neuroscience</subject>
<subj-group>
<subject>Cognitive Science</subject>
<subj-group>
<subject>Cognition</subject>
</subj-group>
</subj-group>
</subj-group>
</subj-group>
<subj-group subj-group-type="Discipline-v3">
<subject>Biology and Life Sciences</subject>
<subj-group>
<subject>Anatomy</subject>
<subj-group>
<subject>Brain</subject>
<subj-group>
<subject>Cerebral Cortex</subject>
<subj-group>
<subject>Parietal Lobe</subject>
</subj-group>
</subj-group>
</subj-group>
</subj-group>
</subj-group>
<subj-group subj-group-type="Discipline-v3">
<subject>Medicine and Health Sciences</subject>
<subj-group>
<subject>Anatomy</subject>
<subj-group>
<subject>Brain</subject>
<subj-group>
<subject>Cerebral Cortex</subject>
<subj-group>
<subject>Parietal Lobe</subject>
</subj-group>
</subj-group>
</subj-group>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Aging and Sensory Substitution in a Virtual Navigation Task</article-title>
<alt-title alt-title-type="running-head">Aging and Sensory Substitution in a Virtual Navigation Task</alt-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name>
<surname>Levy-Tzedek</surname>
<given-names>S.</given-names>
</name>
<xref ref-type="aff" rid="aff001">
<sup>1</sup>
</xref>
<xref ref-type="aff" rid="aff002">
<sup>2</sup>
</xref>
<xref ref-type="corresp" rid="cor001">*</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Maidenbaum</surname>
<given-names>S.</given-names>
</name>
<xref ref-type="aff" rid="aff003">
<sup>3</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Amedi</surname>
<given-names>A.</given-names>
</name>
<xref ref-type="aff" rid="aff003">
<sup>3</sup>
</xref>
<xref ref-type="aff" rid="aff004">
<sup>4</sup>
</xref>
<xref ref-type="aff" rid="aff005">
<sup>5</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Lackner</surname>
<given-names>J.</given-names>
</name>
<xref ref-type="aff" rid="aff006">
<sup>6</sup>
</xref>
</contrib>
</contrib-group>
<aff id="aff001">
<label>1</label>
<addr-line>Recanati School for Community Health Professions, Department of Physical Therapy, Ben Gurion University of the Negev, Beer-Sheva, Israel</addr-line>
</aff>
<aff id="aff002">
<label>2</label>
<addr-line>Zlotowski Center for Neuroscience, Ben Gurion University of the Negev, Beer-Sheva, Israel</addr-line>
</aff>
<aff id="aff003">
<label>3</label>
<addr-line>Department of Medical Neurobiology, Institute for Medical Research Israel-Canada, Faculty of Medicine, Hebrew University of Jerusalem, Jerusalem, Israel</addr-line>
</aff>
<aff id="aff004">
<label>4</label>
<addr-line>Department of Cognitive Science, Faculty of Humanities, Hebrew University of Jerusalem, Jerusalem, Israel</addr-line>
</aff>
<aff id="aff005">
<label>5</label>
<addr-line>Sorbonne Universités UPMC Univ Paris 06, Institut de la Vision, Paris, France</addr-line>
</aff>
<aff id="aff006">
<label>6</label>
<addr-line>Ashton Graybiel Spatial Orientation Laboratory, Department of Physiology, Brandeis University, Waltham, Massachusetts, United States of America</addr-line>
</aff>
<contrib-group>
<contrib contrib-type="editor">
<name>
<surname>Proulx</surname>
<given-names>Michael J</given-names>
</name>
<role>Editor</role>
<xref ref-type="aff" rid="edit1"></xref>
</contrib>
</contrib-group>
<aff id="edit1">
<addr-line>University of Bath, UNITED KINGDOM</addr-line>
</aff>
<author-notes>
<fn fn-type="conflict" id="coi001">
<p>
<bold>Competing Interests: </bold>
The authors have declared that no competing interests exist.</p>
</fn>
<fn fn-type="con" id="contrib001">
<p>Conceived and designed the experiments: SL JL. Performed the experiments: SL JL. Analyzed the data: SL. Contributed reagents/materials/analysis tools: SM AA SL. Wrote the paper: SL JL. Commented on the written manuscript: SM.</p>
</fn>
<corresp id="cor001">* E-mail:
<email>Shelly@bgu.ac.il</email>
</corresp>
</author-notes>
<pub-date pub-type="epub">
<day>23</day>
<month>3</month>
<year>2016</year>
</pub-date>
<pub-date pub-type="collection">
<year>2016</year>
</pub-date>
<volume>11</volume>
<issue>3</issue>
<elocation-id>e0151593</elocation-id>
<history>
<date date-type="received">
<day>17</day>
<month>9</month>
<year>2015</year>
</date>
<date date-type="accepted">
<day>1</day>
<month>3</month>
<year>2016</year>
</date>
</history>
<permissions>
<copyright-statement>© 2016 Levy-Tzedek et al</copyright-statement>
<copyright-year>2016</copyright-year>
<copyright-holder>Levy-Tzedek et al</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/">
<license-p>This is an open access article distributed under the terms of the
<ext-link ext-link-type="uri" xlink:href="http://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution License</ext-link>
, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.</license-p>
</license>
</permissions>
<self-uri content-type="pdf" xlink:type="simple" xlink:href="pone.0151593.pdf"></self-uri>
<abstract>
<p>Virtual environments are becoming ubiquitous, and used in a variety of contexts–from entertainment to training and rehabilitation. Recently, technology for making them more accessible to blind or visually impaired users has been developed, by using sound to represent visual information. The ability of older individuals to interpret these cues has not yet been studied. In this experiment, we studied the effects of age and sensory modality (visual or auditory) on navigation through a virtual maze. We added a layer of complexity by conducting the experiment in a rotating room, in order to test the effect of the spatial bias induced by the rotation on performance. Results from 29 participants showed that with the auditory cues, it took participants a longer time to complete the mazes, they took a longer path length through the maze, they paused more, and had more collisions with the walls, compared to navigation with the visual cues. The older group took a longer time to complete the mazes, they paused more, and had more collisions with the walls, compared to the younger group. There was no effect of room rotation on the performance, nor were there any significant interactions among age, feedback modality and room rotation. We conclude that there is a decline in performance with age, and that while navigation with auditory cues is possible even at an old age, it presents more challenges than visual navigation.</p>
</abstract>
<funding-group>
<funding-statement>The Brandeis-Leir foundation (SL, AA, JL) and the Brandeis-Bronfman foundation (SL, JL) provided funding for this experiment. The research was partially supported by the Helmsley Charitable Trust through the Agricultural, Biological and Cognitive Robotics Center of the Ben-Gurion University of the Negev (SL). The support of the Promobilia Foundation is gratefully acknowledged (SL). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.</funding-statement>
</funding-group>
<counts>
<fig-count count="10"></fig-count>
<table-count count="3"></table-count>
<page-count count="17"></page-count>
</counts>
<custom-meta-group>
<custom-meta id="data-availability">
<meta-name>Data Availability</meta-name>
<meta-value>All relevant data are available from Figshare(
<ext-link ext-link-type="uri" xlink:href="https://figshare.com/articles/subject_raw_data_and_drawings/3100660">https://figshare.com/articles/subject_raw_data_and_drawings/3100660</ext-link>
).</meta-value>
</custom-meta>
</custom-meta-group>
</article-meta>
<notes>
<title>Data Availability</title>
<p>All relevant data are available from Figshare(
<ext-link ext-link-type="uri" xlink:href="https://figshare.com/articles/subject_raw_data_and_drawings/3100660">https://figshare.com/articles/subject_raw_data_and_drawings/3100660</ext-link>
).</p>
</notes>
</front>
<body>
<sec sec-type="intro" id="sec001">
<title>Introduction</title>
<p>Sensory substitution devices (SSDs) convey information that is usually perceived by one sense, using an alternative sense [
<xref rid="pone.0151593.ref001" ref-type="bibr">1</xref>
]. For example, auditory cues can be used to convey information that is usually perceived using visual cues. In recent years, there has been a surge of studies on the use of sensory substitution devices (e.g., [
<xref rid="pone.0151593.ref002" ref-type="bibr">2</xref>
<xref rid="pone.0151593.ref004" ref-type="bibr">4</xref>
]), but none of them, to the best of our knowledge, examined the effects of aging on the ability to use SSDs, or challenged the use of SSDs by introducing a spatial-perception bias while performing a task with the device.</p>
<p>In the current study we made a first step towards bridging this gap. We studied the differences between younger and older adults who perform a navigation task in a virtual environment using sensory substitution. We further explored whether use of SSDs is susceptible to the introduction of external interference in the form of a spatial perception bias.</p>
<p>We thus connected previously parallel lines of research on sensory feedback, aging, virtual navigation and spatial cognition. Specifically, we asked what the combined effects of age, sensory modality and spatial-perception bias are on movement through a virtual maze (our specific hypotheses are detailed below). We addressed these questions in a single experiment, rather than a series of separate experiments, so that we could examine not only the main effects of each factor (age, sensory modality and spatial-perception bias), but also the interaction effects between these factors.</p>
<sec id="sec002">
<title>Aging and movement</title>
<p>Aging is accompanied by a gradual decline in selective cognitive functions, including attention, information processing, and learning [
<xref rid="pone.0151593.ref005" ref-type="bibr">5</xref>
], while functions such as semantic memory, comprehension and vocabulary can remain stable or may even improve with age [
<xref rid="pone.0151593.ref006" ref-type="bibr">6</xref>
<xref rid="pone.0151593.ref007" ref-type="bibr">7</xref>
]. There are changes in the quality of movement control that take place with age. For example, movements are slower (e.g., [
<xref rid="pone.0151593.ref008" ref-type="bibr">8</xref>
]), there is evidence of reduced movement planning, and more reliance on visual feedback during movement execution [
<xref rid="pone.0151593.ref009" ref-type="bibr">9</xref>
]. Complex tasks (e.g., navigation), which engage several aspects of cognition and motor planning, are consequently affected. At least some of these changes in task performance–and particularly in navigational abilities–might be explained by degradation of brain structures involved in navigation; particularly, the hippocampus, the parahippocampal gyrus, the posterior cingulate gyrus, the parietal lobes and the pre-frontal cortex [
<xref rid="pone.0151593.ref010" ref-type="bibr">10</xref>
]. When navigating through space, we must maintain a representation of our position in relation to the three-dimensional world, a task that draws on our spatial cognition abilities. We tap into these abilities whenever we navigate to a familiar place, or learn the route to a new one. We therefore rely on our spatial cognitive abilities ubiquitously in everyday life. We regularly encounter novel environments, and the process of adjusting our movement plan for navigating within them based on their layout, location of obstacles, and moving entities is highly dynamic. And yet, testing of cognitive aging has focused on static, paper-based tests, rather than dynamic ones [
<xref rid="pone.0151593.ref010" ref-type="bibr">10</xref>
]. In order to evaluate spatial cognition, a dynamic test is better suited to uncover the ability to efficiently interpret dynamically changing spatial information.</p>
</sec>
<sec id="sec003">
<title>Virtual navigation</title>
<p>Virtual environments have been used in recent years as a tool to study navigation patterns in simple and complex settings. Several researchers have put particular emphasis on making them accessible to blind and visually impaired individuals–via auditory and tactile cues–so they can be used to study negotiation of space in the absence of visual cues [
<xref rid="pone.0151593.ref011" ref-type="bibr">11</xref>
<xref rid="pone.0151593.ref017" ref-type="bibr">17</xref>
]. In fact, virtual mazes can, in and of themselves, be used as a rehabilitation tool for at-home training of individuals with visual impairment to navigate through an unfamiliar environment (e.g., [
<xref rid="pone.0151593.ref011" ref-type="bibr">11</xref>
]). In this scenario, visual information is conveyed via sounds, in what is termed "sensory substitution" (e.g., [
<xref rid="pone.0151593.ref013" ref-type="bibr">13</xref>
<xref rid="pone.0151593.ref014" ref-type="bibr">14</xref>
,
<xref rid="pone.0151593.ref018" ref-type="bibr">18</xref>
<xref rid="pone.0151593.ref019" ref-type="bibr">19</xref>
]). It has been demonstrated that learning of new environments in the virtual realm transfers to the corresponding real-world setting [
<xref rid="pone.0151593.ref011" ref-type="bibr">11</xref>
,
<xref rid="pone.0151593.ref015" ref-type="bibr">15</xref>
,
<xref rid="pone.0151593.ref020" ref-type="bibr">20</xref>
].</p>
<p>This simple yet powerful tool can be used to study perception of spatial cues arriving from different sensory modalities.</p>
</sec>
<sec id="sec004">
<title>Rotation and biased perception of space</title>
<p>The Slow Rotation Room at Brandeis University has been developed as a tool to study human orientation, movement control, and perception during exposure to angular and linear acceleration.</p>
<p>When exposed to angular acceleration in a dark environment, people experience the oculogyral illusion. In this situation, a head-fixed visual target will appear to move through space and be displaced relative to the person’s body in the direction of acceleration [
<xref rid="pone.0151593.ref021" ref-type="bibr">21</xref>
]. However, an audio signal that is presented at a person’s midline position during acceleration will be perceived as if it has been displaced in the direction
<italic>opposite</italic>
to the direction of acceleration; this phenomenon is known as the audiogyral illusion [
<xref rid="pone.0151593.ref021" ref-type="bibr">21</xref>
]. Body localization is also affected under these circumstances, and this effect is referred to as the somatogravic illusion. Lackner & Dizio [
<xref rid="pone.0151593.ref022" ref-type="bibr">22</xref>
] have shown that the three illusions are correlated in extent and direction, suggesting that there is an internal remapping of a common reference frame for visual, auditory, and haptic localization in the accelerated environment. In other words, a bias in spatial perception is generated, across the senses, as a result of exposure to acceleration. We set out to test the effect of this perceptual bias on the performance of a motor task. Specifically, we tested the effect of the perceptual bias on navigation within a virtual maze when spatial information is provided either visually or auditorily.</p>
<p>To the best of our knowledge, this is the first experiment to study the effects of age on the use of a sensory-substitution device, and the first to use a dynamically changing virtual environment within a rotating-room setting.</p>
</sec>
<sec id="sec005">
<title>Hypotheses</title>
<p>Our hypotheses were that:</p>
<list list-type="order">
<list-item>
<p>Older adults will have a reduced ability to interpret auditory cues representing visual information, compared to young adults. Thus, we expect younger adults to perform better on the virtual-maze task than older adults.</p>
</list-item>
<list-item>
<p>Auditory cues will be more difficult to interpret as indicators of distance than visual information. Thus, we expect that performance on the virtual-maze task will be better when visual cues are available than when auditory cues are available.</p>
</list-item>
<list-item>
<p>Introducing a spatial-perception bias (via centripetal force) will interfere with performance on a motor task. Thus, we expect performance on the virtual-maze task when the room is rotating to be worse than when the room is stationary.</p>
</list-item>
</list>
<p>Our outcome measures for these hypotheses were a series of indicators of success on a virtual navigation task, such as the time it took to complete the virtual maze, and the number of collisions with the maze walls. The specific performance metrics are detailed below.</p>
</sec>
</sec>
<sec sec-type="materials|methods" id="sec006">
<title>Materials and Methods</title>
<sec id="sec007">
<title>Equipment</title>
<sec id="sec008">
<title>The eye cane</title>
<p>Sensory feedback to the participants, as they were navigating through the virtual mazes (described below), was given either using visual cues or auditory cues. Auditory cues were provided using the "virtual EyeCane" [
<xref rid="pone.0151593.ref014" ref-type="bibr">14</xref>
]. The virtual EyeCane is based on a physical sensory-substitution device, called the EyeCane [
<xref rid="pone.0151593.ref012" ref-type="bibr">12</xref>
]. The physical EyeCane is a hand-held device, which uses infra-red sensors to detect distance from objects located up to 5 meters away from the user. The device reports the distance from the object at which it is pointed by producing a series of beeps; the closer an object is to the user, the higher the frequency of the beeps. For example, if users were to stand 5 meters from a wall while pointing the device at it, and then gradually approach it, they would initially not receive any auditory feedback (outside the range of the device), and as they got closer to the wall, the device would start beeping, with the beeps sounding closer together the closer they got to the wall.</p>
<p>The virtual EyeCane is a virtual representation of the physical EyeCane. It produces a series of beeping sounds at a frequency which is determined by the distance of the virtual device from the virtual object at which it is pointed. The participants were represented by 'avatars' as they navigated through the virtual mazes. The avatars held the virtual EyeCane in their hands (the device was not visible to the participants), and the device always pointed in the direction the avatar was facing. The avatar was free to rotate right and left while walking through the mazes.</p>
<p>The virtual EyeCane conveys distances of up to 5 virtual meters to the user by changing the frequency of a series of beeping sounds, such that distances of 5 virtual meters and beyond are silent, and the closer the object at which the device points, the higher the beeping rate.</p>
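<!-- Illustrative note (not part of the original record): the text above gives the 5-meter range and the closer-means-faster-beeping rule, but not the exact mapping. The sketch below therefore assumes a simple linear distance-to-interval mapping; the interval bounds (0.05 s and 1.0 s) are invented for illustration only.

def beep_interval(distance_m, max_range_m=5.0,
                  min_interval_s=0.05, max_interval_s=1.0):
    """Return the pause between beeps for a given distance, or None for silence.

    Hypothetical linear mapping: objects at or beyond the 5 m range are silent,
    and the closer the object, the shorter the interval (higher beep rate).
    """
    if distance_m >= max_range_m:
        return None  # beyond the device range: no auditory feedback
    fraction = max(distance_m, 0.0) / max_range_m
    return min_interval_s + fraction * (max_interval_s - min_interval_s)

# Example: a wall 0.5 m away would beep roughly every 0.14 s;
# at 4.9 m, roughly once per second; at 6 m, not at all.
-->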
</sec>
<sec id="sec009">
<title>The virtual mazes</title>
<p>The virtual mazes were created using Blender 2.49 and Python 2.6.2. The location and orientation of the participants' avatar was tracked and recorded at 20 Hz.</p>
<p>The avatar representing the participants was not visible, and they had a first-person view of the mazes. That is, the visual input was similar to what they would see if they actually navigated through a maze in the real world. Navigation was accomplished using the arrow keys on a standard laptop keyboard, and the auditory cues were delivered via standard headphones. The participants heard a collision sound whenever their avatar came in contact with a virtual wall. Distances within the environment were set so that each 'virtual meter' corresponds to a real-world meter. Thus, the auditory output from the virtual EyeCane at one 'virtual meter' is the same as the output from the real-world EyeCane at a distance of one meter.</p>
<p>The participants controlled their location within the maze using the keyboard in the following way: the 'up' and 'down' arrow keys advanced the avatar forward and backward, respectively, and the 'left' and 'right' arrow keys rotated the avatar in the respective direction. The participants could control the speed of their navigation through the mazes by controlling the rate at which they pressed the arrow keys.</p>
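<!-- Illustrative note (not part of the original record): a minimal sketch of the keyboard control scheme described above, not the original Blender 2.49/Python 2.6.2 implementation; the step size and turn angle per key press are assumptions.

import math

STEP_M = 0.1    # assumed forward/backward translation per key press (virtual meters)
TURN_DEG = 5.0  # assumed rotation per key press (degrees)

def update_avatar(x, y, heading_deg, key):
    """Advance or rotate the avatar according to one arrow-key press."""
    if key in ("up", "down"):
        sign = 1.0 if key == "up" else -1.0
        x += sign * STEP_M * math.cos(math.radians(heading_deg))
        y += sign * STEP_M * math.sin(math.radians(heading_deg))
    elif key == "left":
        heading_deg += TURN_DEG
    elif key == "right":
        heading_deg -= TURN_DEG
    return x, y, heading_deg
-->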
<p>A video demonstrating navigation through a visual and an auditory maze can be found in
<xref ref-type="supplementary-material" rid="pone.0151593.s001">S1 Video</xref>
.</p>
</sec>
<sec id="sec010">
<title>The rotating room</title>
<p>The experiment was conducted in the Brandeis Rotation Room, a fully enclosed circular room, 6.7 m in diameter. In this room, when participants are rotated at a constant velocity in a fully enclosed environment, they feel as if they are stationary in a stationary environment, but feel heavier than normal and experience some body tilt. For each participant, half of the trials were conducted with the room being stationary, and half with the room rotating about its central axis. For the rotation trials, the room was accelerated at 1°/s
<sup>2</sup>
to a velocity of 60°/s (10 rpm), and held there for 1 minute before starting the rotation block of trials. Upon completion of the rotation block of trials, the room was decelerated at 1°/s
<sup>2</sup>
to a stop. The rotation was in the counterclockwise direction. Participants were explicitly aware of the rotation of the room during the rotation trials.</p>
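<!-- Illustrative note (not part of the original record): a back-of-the-envelope calculation of why constant-velocity rotation makes participants feel heavier and tilted. It assumes a participant seated at the full 3.35 m radius of the 6.7 m room and uses the 60 deg/s velocity given above; spin-up from rest at 1 deg/s^2 to 60 deg/s takes 60 s.

import math

radius_m = 6.7 / 2.0                       # assumed seat radius (room periphery)
omega = math.radians(60.0)                 # 60 deg/s = 10 rpm, in rad/s
a_centripetal = omega ** 2 * radius_m      # about 3.7 m/s^2, roughly 0.37 g
g = 9.81
resultant = math.hypot(g, a_centripetal)   # combined gravito-inertial force, ~1.07 g
tilt_deg = math.degrees(math.atan2(a_centripetal, g))  # ~21 degrees from vertical
print(f"{a_centripetal:.2f} m/s^2 centripetal, resultant {resultant/g:.2f} g, tilt {tilt_deg:.0f} deg")
-->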
</sec>
</sec>
<sec id="sec011">
<title>Participants</title>
<p>Twenty-nine participants took part in the experiment: 17 young adults (19–23 years old, mean±SD: 20.7±1.3 years; 11 males, 6 females) and 12 older adults (58–69 years old, 64.3±3.7 years; 8 males, 4 females). Four of the older adults and one of the younger adults had previously been in the rotating room. Individuals who suffered from high blood pressure, claustrophobia, respiratory problems, or ADHD, or who had a history of seizures or problems with balance, were excluded from participation. All participants gave their written informed consent to participate.</p>
</sec>
<sec id="sec012">
<title>Procedure</title>
<p>Participants were screened for medical conditions that would preclude their participation, according to the exclusion criteria listed above. Their blood pressure was measured using a commercially available automated blood-pressure cuff.</p>
<p>Participants were seated comfortably on a chair, padded with pillows for the back and the head, with a laptop computer placed on their lap, on top of a rigid support (see
<xref ref-type="fig" rid="pone.0151593.g001">Fig 1</xref>
). The chair was located at the periphery of the rotating room. The entire experimental procedure took place in a single session inside the rotating room. In all but one case, two participants were tested in parallel, on opposite sides of the rotating room. Each participant was accompanied by an experimenter for the entire duration of the experiment. The experimental protocol was approved by the Institutional Review Board (IRB) of Brandeis University.</p>
<fig id="pone.0151593.g001" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0151593.g001</object-id>
<label>Fig 1</label>
<caption>
<title>Experimental setup.</title>
<p>The participant is seated in the periphery of the Rotating Room, with comfortable head support. She is controlling the movement of the avatar through a virtual maze using the arrow keys of a laptop placed on a flat support located on her lap.</p>
</caption>
<graphic xlink:href="pone.0151593.g001"></graphic>
</fig>
<sec id="sec013">
<title>Training session</title>
<p>The algorithm of the virtual EyeCane was explained to the participants, and they were given the opportunity to navigate within two training mazes as many times as they wished, until they felt comfortable with the task. There was no overlap between the shapes of the training mazes and the shapes of the mazes used during the test session. The training phase was the only part of the experiment in which participants concurrently received both visual information, presented on the computer screen, and auditory information via the headphones, using the virtual EyeCane. They were encouraged to pay attention to the correspondence between the visual and the auditory feedback, so that they could use it during the trials in which only auditory feedback would be given. They were also encouraged to experiment with closing and opening their eyes as they navigated through the maze, so they could compare their perception with their eyes closed to that with their eyes open. All training was done with the room stationary.</p>
</sec>
<sec id="sec014">
<title>Test session</title>
<p>Each participant was presented with eight different mazes (see
<xref ref-type="fig" rid="pone.0151593.g002">Fig 2</xref>
), each repeated five times consecutively, for a total of 40 experimental trials per participant. The goal was to locate the exit from the maze and virtually walk through it. The maximum allowed time for completion of a trial was 3.5 minutes, after which the trial was terminated, and the next trial was presented. After each trial, the participants were asked to draw the shape of the maze on a piece of paper, and mark the start and the end point of the maze.</p>
<fig id="pone.0151593.g002" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0151593.g002</object-id>
<label>Fig 2</label>
<caption>
<title>The virtual mazes.</title>
<p>A sketch of the eight virtual mazes that participants were asked to traverse (marked as mazes A-H). A circle (◦) denotes the starting point of the maze, and a square (□) denotes the end point (exit).</p>
</caption>
<graphic xlink:href="pone.0151593.g002"></graphic>
</fig>
<p>The experimental session comprised two blocks of trials: a stationary block and a rotating block. Each block comprised four mazes: two were presented with visual feedback ('visual condition'), and two with auditory feedback ('auditory condition'; see
<xref ref-type="fig" rid="pone.0151593.g003">Fig 3</xref>
). Each maze was repeated five times for a total of 20 trials per block. Approximately half of the participants completed the rotating block first, and half completed the stationary block first. During the auditory condition, participants wore a blindfold, and the auditory feedback was provided via headphones. A headphone-jack splitter was used, with two sets of headphones connected to it, such that one set was used by the participant, and the other by the experimenter.</p>
<fig id="pone.0151593.g003" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0151593.g003</object-id>
<label>Fig 3</label>
<caption>
<title>The protocol.</title>
<p>A schematic representation of the experimental protocol. Participants started with either a “stationary room” or a “rotating room” block of trials, in which they performed the first half of the trials. Each block was further divided into auditory and visual blocks; within each, participants performed five consecutive repetitions of each of two mazes.</p>
</caption>
<graphic xlink:href="pone.0151593.g003"></graphic>
</fig>
<p>The order of the mazes and the blocks was counter-balanced across participants.</p>
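<p>For illustration, the resulting per-participant trial schedule can be written out explicitly. The following is a minimal sketch under stated assumptions: the assignment of specific mazes (A-H) to conditions shown here is hypothetical, and the actual maze assignment and block order were counterbalanced across participants as described above.</p>
<preformat>
# Minimal sketch (illustrative only): enumerating one participant's 40-trial schedule
# from the design described above. The maze-to-condition assignment below is a
# hypothetical example, not the actual counterbalancing scheme.
REPETITIONS = 5
example_assignment = {
    ("stationary", "visual"): ["A", "B"],
    ("stationary", "auditory"): ["C", "D"],
    ("rotating", "visual"): ["E", "F"],
    ("rotating", "auditory"): ["G", "H"],
}

def trial_schedule(rotation_first="stationary"):
    rotation_order = (["stationary", "rotating"]
                      if rotation_first == "stationary"
                      else ["rotating", "stationary"])
    schedule = []
    for rotation in rotation_order:              # one block per rotation state
        for modality in ("visual", "auditory"):  # two feedback conditions within each block
            for maze in example_assignment[(rotation, modality)]:
                # five consecutive repetitions of the same maze
                schedule.extend([(rotation, modality, maze)] * REPETITIONS)
    return schedule

assert len(trial_schedule()) == 40  # 2 blocks x 2 modalities x 2 mazes x 5 repetitions
</preformat>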
</sec>
</sec>
<sec id="sec015">
<title>Performance metrics</title>
<p>The number of trials that were terminated for exceeding the maximum time limit allotted per trial (3.5 minutes) was calculated per participant. These trials were excluded from further analysis, which was performed using the performance metrics detailed below.</p>
<sec id="sec016">
<title>Time</title>
<p>The time to successful completion of the trial was calculated as the time elapsed from the start of movement within the maze until the user-controlled avatar reached the end of the maze.</p>
</sec>
<sec id="sec017">
<title>Path length</title>
<p>Path length was calculated as the total distance traversed by the user’s avatar from the start to the end of the maze.</p>
</sec>
<sec id="sec018">
<title>Number of pauses</title>
<p>A pause was defined as the absence of movement in the x-y plane for longer than 2 seconds. The number of pauses per trial was calculated.</p>
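<p>For illustration, the trajectory-derived metrics (time, path length, and number of pauses) can be computed directly from logged avatar positions. The following is a minimal sketch, not the analysis code used in the study; it assumes each trial is logged as time-stamped (t, x, y) samples and uses the 2-second pause threshold defined above. Collisions, which depend on the maze-wall geometry, are not computed here.</p>
<preformat>
# Minimal sketch (not the authors' analysis code): trajectory-derived metrics for one
# trial, assuming the virtual-maze software logs (t, x, y) samples ordered by time.
import math

PAUSE_THRESHOLD_S = 2.0   # absence of x-y movement for longer than 2 s counts as a pause
EPS = 1e-6                # displacements below this are treated as "no movement"

def trial_metrics(samples):
    """samples: list of (t, x, y) tuples, ordered by time, for one trial."""
    completion_time = samples[-1][0] - samples[0][0]

    # total distance traversed in the x-y plane
    path_length = sum(
        math.hypot(x1 - x0, y1 - y0)
        for (_, x0, y0), (_, x1, y1) in zip(samples, samples[1:])
    )

    pauses = 0
    still_since = None    # time at which the current stillness episode began
    counted = False       # whether the current episode has already been counted
    for (t0, x0, y0), (t1, x1, y1) in zip(samples, samples[1:]):
        moved = math.hypot(x1 - x0, y1 - y0) >= EPS
        if moved:
            still_since, counted = None, False
        else:
            if still_since is None:
                still_since = t0
            if not counted and t1 - still_since > PAUSE_THRESHOLD_S:
                pauses += 1
                counted = True

    return completion_time, path_length, pauses
</preformat>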
</sec>
<sec id="sec019">
<title>Collisions</title>
<p>A collision was defined as a contact between the user’s avatar and one of the virtual walls of the maze. The number of collisions per trial was calculated.</p>
</sec>
</sec>
<sec id="sec020">
<title>Statistical analysis</title>
<p>A cross-nested mixed-model GLM analysis was performed using IBM SPSS Statistics (Version 23.0; Armonk, NY: IBM Corp). The fixed-effect factors were age, feedback modality, and room rotation, each with two levels. The two-way and three-way interactions among these factors were tested. The random-effect factors were the individual participants and, nested within participants, the maze types and the repetitions within each maze. The REML method was employed for the analysis. Where random effects were found not to contribute to the model, they were removed and the model was re-fitted. To comply with the basic assumptions of the analysis, an appropriate transformation was applied to each of the performance metrics, selected based on a Box-Cox procedure [
<xref rid="pone.0151593.ref023" ref-type="bibr">23</xref>
]. The data on the time it took to complete the mazes underwent a
<inline-formula id="pone.0151593.e001">
<alternatives>
<graphic xlink:href="pone.0151593.e001.jpg" id="pone.0151593.e001g" mimetype="image" position="anchor" orientation="portrait"></graphic>
<mml:math id="M1">
<mml:mfrac>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
</mml:mfrac>
</mml:math>
</alternatives>
</inline-formula>
transformation; the data on the path length and the number of collisions underwent a
<inline-formula id="pone.0151593.e002">
<alternatives>
<graphic xlink:href="pone.0151593.e002.jpg" id="pone.0151593.e002g" mimetype="image" position="anchor" orientation="portrait"></graphic>
<mml:math id="M2">
<mml:mfrac>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>x</mml:mi>
<mml:mo>+</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:mfrac>
</mml:math>
</alternatives>
</inline-formula>
transformation; and the data on the number of pauses underwent a
<inline-formula id="pone.0151593.e003">
<alternatives>
<graphic xlink:href="pone.0151593.e003.jpg" id="pone.0151593.e003g" mimetype="image" position="anchor" orientation="portrait"></graphic>
<mml:math id="M3">
<mml:mfrac>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>x</mml:mi>
<mml:mo>+</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:mfrac>
</mml:math>
</alternatives>
</inline-formula>
transformation for values up to four, and a
<inline-formula id="pone.0151593.e004">
<alternatives>
<graphic xlink:href="pone.0151593.e004.jpg" id="pone.0151593.e004g" mimetype="image" position="anchor" orientation="portrait"></graphic>
<mml:math id="M4">
<mml:mfrac>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mn>5</mml:mn>
<mml:mo>+</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:mfrac>
</mml:math>
</alternatives>
</inline-formula>
transformation for values greater than four. Effect sizes in terms of
<italic>η</italic>
<sup>2</sup>
are not reported since they cannot be directly calculated using this statistical model in a standard manner. P values < 0.05 were considered to indicate a significant difference.</p>
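<p>For illustration, the transformations listed above can be expressed compactly. The following is a minimal sketch (the analysis itself was performed in IBM SPSS); the example data are hypothetical, and pause counts are assumed to be integers.</p>
<preformat>
# Minimal sketch (illustrative only; the study used IBM SPSS): the variance-stabilizing
# transformations described above, applied before fitting the mixed model.
import numpy as np
from scipy import stats

def transform_time(t):
    # completion time: 1/x transformation
    return 1.0 / np.asarray(t, dtype=float)

def transform_path_or_collisions(x):
    # path length and number of collisions: 1/(x + 1)^2 transformation
    return 1.0 / (np.asarray(x, dtype=float) + 1.0) ** 2

def transform_pauses(n):
    # number of pauses (integer counts): 1/(x + 1)^2 for values up to four,
    # and the constant 1/(5 + 1)^2 for values greater than four
    n = np.asarray(n, dtype=float)
    return 1.0 / (np.minimum(n, 5.0) + 1.0) ** 2

# A Box-Cox procedure similar to the one used for selecting these forms can be run
# on strictly positive data, e.g. hypothetical completion times in seconds:
example_times = np.array([30.0, 45.0, 80.0, 120.0])
transformed, lam = stats.boxcox(example_times)
</preformat>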
</sec>
</sec>
<sec sec-type="results" id="sec021">
<title>Results</title>
<p>One participant in the younger group and one in the older group did not complete all trials, due to discomfort. The total number of recorded trials was therefore 1123. Of these, 120 trials were terminated because the participants exceeded the 3.5-min time limit allotted for completing each maze. The majority of the incomplete trials occurred in the auditory rather than in the visual condition (53 trials in the auditory condition vs. 2 in the visual condition for the younger group, and 61 vs. 4 for the older group). These incomplete trials were excluded from further analysis, and a total of 1013 trials were analyzed.</p>
<sec id="sec022">
<title>Main effects</title>
<p>Both age and the feedback modality used for navigation (vision or audition) had a significant effect on all performance metrics: time, path length, number of pauses, and number of collisions. Room rotation had no significant effect on any of the performance metrics. The average values for each of the performance metrics are given in
<xref ref-type="table" rid="pone.0151593.t001">Table 1</xref>
, by condition. For each value of the independent variables (age, feedback modality and rotation), data were averaged over the other independent variables. For example, the data reported for the time it took to complete the maze under the visual condition comprise all visual trials, performed by both age groups and in both rotation conditions.</p>
<table-wrap id="pone.0151593.t001" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0151593.t001</object-id>
<label>Table 1</label>
<caption>
<title>Average values (mean±SD) per performance metric.</title>
</caption>
<alternatives>
<graphic id="pone.0151593.t001g" xlink:href="pone.0151593.t001"></graphic>
<table frame="hsides" rules="groups">
<colgroup span="1">
<col align="left" valign="middle" span="1"></col>
<col align="left" valign="middle" span="1"></col>
<col align="left" valign="middle" span="1"></col>
<col align="left" valign="middle" span="1"></col>
<col align="left" valign="middle" span="1"></col>
<col align="left" valign="middle" span="1"></col>
<col align="left" valign="middle" span="1"></col>
</colgroup>
<thead>
<tr>
<th align="left" rowspan="1" colspan="1">Performance metric</th>
<th align="left" rowspan="1" colspan="1">Visual</th>
<th align="left" rowspan="1" colspan="1">Auditory</th>
<th align="left" rowspan="1" colspan="1">Young</th>
<th align="left" rowspan="1" colspan="1">Old</th>
<th align="left" rowspan="1" colspan="1">Rotating</th>
<th align="left" rowspan="1" colspan="1">Stationary</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" rowspan="1" colspan="1">
<bold>Time (sec)</bold>
</td>
<td align="left" rowspan="1" colspan="1">37.2 ± 20.8</td>
<td align="left" rowspan="1" colspan="1">84.7 ± 53.3</td>
<td align="left" rowspan="1" colspan="1">58.1 ± 47.5</td>
<td align="left" rowspan="1" colspan="1">68.3 ± 46.7</td>
<td align="left" rowspan="1" colspan="1">59.4 ± 44.2</td>
<td align="left" rowspan="1" colspan="1">64.5 ± 50.3</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<bold>Path length (virtual meters)</bold>
</td>
<td align="left" rowspan="1" colspan="1">18.8 ± 7.8</td>
<td align="left" rowspan="1" colspan="1">26.2 ± 13.8</td>
<td align="left" rowspan="1" colspan="1">22.8 ± 13.0</td>
<td align="left" rowspan="1" colspan="1">22.5 ± 9.9</td>
<td align="left" rowspan="1" colspan="1">22.3 ± 12.5</td>
<td align="left" rowspan="1" colspan="1">23.0 ± 11.3</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<bold>Number of pauses</bold>
</td>
<td align="left" rowspan="1" colspan="1">0.1 ± 0.4</td>
<td align="left" rowspan="1" colspan="1">1.0 ± 1.6</td>
<td align="left" rowspan="1" colspan="1">0.4 ± 1.1</td>
<td align="left" rowspan="1" colspan="1">0.8 ± 1.5</td>
<td align="left" rowspan="1" colspan="1">0.5 ± 1.2</td>
<td align="left" rowspan="1" colspan="1">0.6 ± 1.3</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<bold>Number of collisions</bold>
</td>
<td align="left" rowspan="1" colspan="1">0.3 ± 0.9</td>
<td align="left" rowspan="1" colspan="1">1.3 ± 2.1</td>
<td align="left" rowspan="1" colspan="1">0.7 ± 1.8</td>
<td align="left" rowspan="1" colspan="1">1.0 ± 1.4</td>
<td align="left" rowspan="1" colspan="1">0.8 ± 1.5</td>
<td align="left" rowspan="1" colspan="1">0.9 ± 1.8</td>
</tr>
</tbody>
</table>
</alternatives>
</table-wrap>
<p>P-values for the main effects are detailed, per performance metric, in
<xref ref-type="table" rid="pone.0151593.t002">Table 2</xref>
.</p>
<table-wrap id="pone.0151593.t002" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0151593.t002</object-id>
<label>Table 2</label>
<caption>
<title>P-values for the main effects per performance metric.</title>
<p>P-values < 0.05 are marked in bold.</p>
</caption>
<alternatives>
<graphic id="pone.0151593.t002g" xlink:href="pone.0151593.t002"></graphic>
<table frame="hsides" rules="groups">
<colgroup span="1">
<col align="left" valign="middle" span="1"></col>
<col align="left" valign="middle" span="1"></col>
<col align="left" valign="middle" span="1"></col>
<col align="left" valign="middle" span="1"></col>
</colgroup>
<thead>
<tr>
<th align="left" rowspan="1" colspan="1">Performance metric</th>
<th align="left" rowspan="1" colspan="1">Visual vs. Auditory</th>
<th align="left" rowspan="1" colspan="1">Young vs. Old</th>
<th align="left" rowspan="1" colspan="1">Rotating vs. Stationary</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" rowspan="1" colspan="1">
<bold>Time</bold>
</td>
<td align="left" rowspan="1" colspan="1">
<bold><0.001</bold>
</td>
<td align="left" rowspan="1" colspan="1">
<bold>0.005</bold>
</td>
<td align="right" rowspan="1" colspan="1">0.51</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<bold>Path length</bold>
</td>
<td align="left" rowspan="1" colspan="1">
<bold><0.001</bold>
</td>
<td align="right" rowspan="1" colspan="1">
<bold>0.025</bold>
</td>
<td align="right" rowspan="1" colspan="1">0.36</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<bold>Number of pauses</bold>
</td>
<td align="left" rowspan="1" colspan="1">
<bold><0.001</bold>
</td>
<td align="left" rowspan="1" colspan="1">
<bold>0.004</bold>
</td>
<td align="right" rowspan="1" colspan="1">0.32</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<bold>Number of collisions</bold>
</td>
<td align="left" rowspan="1" colspan="1">
<bold><0.001</bold>
</td>
<td align="right" rowspan="1" colspan="1">
<bold><0.001</bold>
</td>
<td align="right" rowspan="1" colspan="1">0.95</td>
</tr>
</tbody>
</table>
</alternatives>
</table-wrap>
<sec id="sec023">
<title>Time</title>
<p>Age (F
<sub>1,27</sub>
= 9.6, p = 0.005) and feedback modality (F
<sub>1,86</sub>
= 170, p<0.001) both had a significant effect on the time it took to complete the mazes (see
<xref ref-type="fig" rid="pone.0151593.g004">Fig 4</xref>
). There was no significant effect of rotation, or of any of the interactions (p>0.4). While no significant interaction effects were present, the data from the various combinations of the independent variables are shown in graph form, for the sake of completeness.</p>
<fig id="pone.0151593.g004" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0151593.g004</object-id>
<label>Fig 4</label>
<caption>
<title>Time to completion of the mazes.</title>
<p>Top row: main effects; bottom row: interaction effects (ns). Significant effects marked with an asterisk.</p>
</caption>
<graphic xlink:href="pone.0151593.g004"></graphic>
</fig>
<p>As seen in
<xref ref-type="fig" rid="pone.0151593.g004">Fig 4</xref>
, bottom center pane, maze completion times were about the same for the visual trials (∼37 sec), whether performed with the room stationary or rotating. In the auditory condition, which on average took ∼85 sec to complete, maze completion times were, on average, shorter when the room was rotating, compared to when it was stationary (not significant, ns).</p>
</sec>
<sec id="sec024">
<title>Path length</title>
<p>An example of the path taken by a participant over the five repetitions of a single maze is shown in
<xref ref-type="fig" rid="pone.0151593.g005">Fig 5</xref>
.</p>
<fig id="pone.0151593.g005" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0151593.g005</object-id>
<label>Fig 5</label>
<caption>
<title>An example of actual paths traversed by the virtual avatar.</title>
<p>Shown here are sample paths from a senior participant completing maze G in the visual, stationary condition. Trials 1–5 for this maze are shown from left to right. A light blue circle denotes the starting point, and a dark blue square denotes the end point.</p>
</caption>
<graphic xlink:href="pone.0151593.g005"></graphic>
</fig>
<p>Age (F
<sub>1,48</sub>
= 5.4, p = 0.025) and feedback modality (F
<sub>1,244</sub>
= 42.3, p<0.001) both had a significant effect on the path length within the virtual mazes (see
<xref ref-type="fig" rid="pone.0151593.g006">Fig 6</xref>
). There was no significant effect of rotation or of any of the interactions (p>0.25).</p>
<fig id="pone.0151593.g006" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0151593.g006</object-id>
<label>Fig 6</label>
<caption>
<title>Path length.</title>
<p>Top row: main effects; bottom row: interaction effects (ns). Significant effects marked with an asterisk.</p>
</caption>
<graphic xlink:href="pone.0151593.g006"></graphic>
</fig>
<p>The young group moved their avatars over 18.5±5.6 virtual meters in the visually guided trials, and over 39.4±17.6 virtual meters in the trials guided by auditory feedback. The older group walked their avatars over 22.0±11.2 virtual meters in the visually guided trials, and over 36.6±10.4 virtual meters in the trials guided by auditory feedback.</p>
<p>As seen in
<xref ref-type="fig" rid="pone.0151593.g006">Fig 6</xref>
, the younger group was affected to a greater extent by the replacement of visual feedback with auditory feedback. They experienced a 57% increase in path length in the auditory vs. the visual condition, whereas the older group experienced an 18% increase in path length in the auditory condition, compared to the visual condition (ns).</p>
</sec>
<sec id="sec025">
<title>Number of pauses</title>
<p>Age (F
<sub>1,26</sub>
= 10, p = 0.004) and feedback modality (F
<sub>1,81</sub>
= 124.3, p<0.001) both had a significant effect on the number of pauses within the virtual mazes. There was no significant effect of rotation (F
<sub>1,74</sub>
= 1.0, p = 0.32) or of any of the interactions (p>0.07).</p>
<p>The young group paused, on average, 0.02±0.1 times during the visual trials, and 0.8±1.4 times during the auditory trials. The older group paused, on average, 0.2±0.5 times during the visual trials, and 1.5±1.8 times during the auditory trials. That is, the older group showed a greater increase in the number of pauses in the auditory vs. the visual condition, compared to the young group (ns).</p>
<p>The average number of pauses per maze for the young group was 0.6±1.3 when the room was stationary, and 0.3±0.8 when the room was rotating. The older group paused, on average, 0.8±1.3 times per maze when the room was stationary, and 0.9±1.6 times per maze when the room was rotating (see
<xref ref-type="fig" rid="pone.0151593.g007">Fig 7</xref>
). That is, the younger group paused less when the room was rotating, whereas the older group paused less when the room was stationary (ns).</p>
<fig id="pone.0151593.g007" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0151593.g007</object-id>
<label>Fig 7</label>
<caption>
<title>Number of pauses.</title>
<p>Top row: main effects; bottom row: interaction effects (ns). Significant effects marked with an asterisk.</p>
</caption>
<graphic xlink:href="pone.0151593.g007"></graphic>
</fig>
<p>Across all participants, the number of pauses in the visual stationary condition was 0.1 ± 0.3, whereas in the visual rotating condition it was 0.1 ± 0.4. In the auditory stationary condition, participants paused 1.2 ± 1.7 times on average per maze, and in the auditory rotating condition, they paused 0.9 ± 1.5 times per maze. That is, whereas rotation did not affect the number of pauses in the visual condition, it had an unexpected effect in the auditory condition: on average, participants paused less when the room was rotating than when the room was stationary, especially the young participants (ns).</p>
</sec>
<sec id="sec026">
<title>Number of collisions</title>
<p>Age (F
<sub>1,27</sub>
= 20, p<0.001) and feedback modality (F
<sub>1,74</sub>
= 91.1, p<0.001) both had a significant effect on the number of collisions with the virtual maze walls. There was no significant effect of rotation (F
<sub>1,71</sub>
= 0.004, p = 0.95), or of any of the interactions (p≥0.2).</p>
<p>Both age groups had more collisions with the virtual walls in the auditory, compared to the visual, condition. As seen in
<xref ref-type="fig" rid="pone.0151593.g008">Fig 8</xref>
, bottom left, the gap between the two groups, which existed in the visual condition (the older adults making 7 times more collisions with the walls than the younger group), closed in the auditory condition, where both groups had about 1.3 collisions per maze, on average.</p>
<fig id="pone.0151593.g008" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0151593.g008</object-id>
<label>Fig 8</label>
<caption>
<title>Number of collisions.</title>
<p>Top row: main effects; bottom row: interaction effects (ns). Significant effects marked with an asterisk.</p>
</caption>
<graphic xlink:href="pone.0151593.g008"></graphic>
</fig>
<p>As shown in
<xref ref-type="fig" rid="pone.0151593.g008">Fig 8</xref>
, bottom middle, the rotation of the room had little effect on the number of collisions in the visual condition (∼0.35 in both stationary and rotating conditions). However, in the auditory condition, which overall had more collisions than the visual condition, there were 27%
<italic>fewer</italic>
collisions when the room was rotating, compared to when it was stationary (1.1±1.8 vs. 1.5± 2.3, respectively, ns).</p>
</sec>
</sec>
<sec id="sec027">
<title>Path drawing</title>
<p>The paths drawn by participants following each trial were analyzed. No statistical analysis of the drawing data was performed.</p>
<p>The paths drawn were classified as “correct” if the maze drawn was identical to the actual maze, including cases where the participants drew the correct maze while omitting a “dead end” corridor (e.g., in the example shown in
<xref ref-type="fig" rid="pone.0151593.g009">Fig 9</xref>
the last two drawings, representing trials 4 and 5, would be classified as "correct"). Classification of the mazes as “incorrect” included cases where the maze drawn was a mirror image of the correct maze (e.g.,
<xref ref-type="fig" rid="pone.0151593.g009">Fig 9</xref>
, trials 1–3), and cases where the participants rotated a dead-end corridor (not leading to the exit) by 90° with respect to its actual direction. We termed this phenomenon “false corridor” drawing (see
<xref ref-type="fig" rid="pone.0151593.g010">Fig 10</xref>
).</p>
<fig id="pone.0151593.g009" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0151593.g009</object-id>
<label>Fig 9</label>
<caption>
<title>Sample drawings.</title>
<p>Sample drawings from a senior participant who completed maze B in the auditory, stationary condition. Trials 1–5 for this maze are shown from left to right. A circle denotes the starting point, and a square denotes the end point. Inset: a sketch of the actual maze. This example demonstrates rotation (in all 5 trials) and mirroring (in the first 3 trials, on the left) of the drawings with respect to the actual maze.</p>
</caption>
<graphic xlink:href="pone.0151593.g009"></graphic>
</fig>
<fig id="pone.0151593.g010" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0151593.g010</object-id>
<label>Fig 10</label>
<caption>
<title>Example of a "false-corridor" drawing.</title>
<p>On the left is a drawing made by a young participant, who completed maze D in the auditory, stationary condition. A circle denotes the starting point, and a square denotes the end point. On the right is a sketch of the actual maze. This example demonstrates what we termed a "false corridor", where a dead-end corridor was drawn at 90° to its actual direction.</p>
</caption>
<graphic xlink:href="pone.0151593.g010"></graphic>
</fig>
<p>On average, participants correctly drew about 20% of the mazes (22% in the young group, 14% in the old group, see
<xref ref-type="table" rid="pone.0151593.t003">Table 3</xref>
). An additional ∼45% of the drawings represented the layout of the mazes correctly, except that they were a mirror image of the actual maze. Rotation of the drawing plane relative to the “true north” was rather common (∼70%), and was not considered incorrect (see examples of rotation in
<xref ref-type="fig" rid="pone.0151593.g009">Fig 9</xref>
). The pieces of paper on which the participants were asked to draw were always presented to them in a particular orientation (short edge towards the participant, long edge along the side), with the unspoken assumption that they would draw the entrance to the maze at the bottom of the page. This was not in fact the case, and we refer to the tendency to draw the starting point in a location other than the bottom of the page as a deviation from the “true north”. Overall, there were more correct mappings in the visual, compared to the auditory condition, and more in the rotating, compared to the stationary condition. Surprisingly, there were also more mirrored drawings in the visual compared to the auditory condition, more mirroring in the young vs. the old group, and more mirroring in the stationary compared to the rotating condition (see
<xref ref-type="table" rid="pone.0151593.t003">Table 3</xref>
).</p>
<table-wrap id="pone.0151593.t003" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0151593.t003</object-id>
<label>Table 3</label>
<caption>
<title>Analysis of maze drawings.</title>
<p>Values reported are means ± SD.</p>
</caption>
<alternatives>
<graphic id="pone.0151593.t003g" xlink:href="pone.0151593.t003"></graphic>
<table frame="hsides" rules="groups">
<colgroup span="1">
<col align="left" valign="middle" span="1"></col>
<col align="left" valign="middle" span="1"></col>
<col align="left" valign="middle" span="1"></col>
<col align="left" valign="middle" span="1"></col>
<col align="left" valign="middle" span="1"></col>
<col align="left" valign="middle" span="1"></col>
<col align="left" valign="middle" span="1"></col>
</colgroup>
<thead>
<tr>
<th align="right" rowspan="1" colspan="1"></th>
<th align="left" rowspan="1" colspan="1">
<italic>Visual</italic>
</th>
<th align="left" rowspan="1" colspan="1">
<italic>Auditory</italic>
</th>
<th align="left" rowspan="1" colspan="1">
<italic>Young</italic>
</th>
<th align="left" rowspan="1" colspan="1">
<italic>Old</italic>
</th>
<th align="left" rowspan="1" colspan="1">
<italic>Rotating</italic>
</th>
<th align="left" rowspan="1" colspan="1">
<italic>Stationary</italic>
</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" rowspan="1" colspan="1">
<bold>
<italic>% Correct mapping</italic>
</bold>
</td>
<td align="left" rowspan="1" colspan="1">22 ± 17</td>
<td align="left" rowspan="1" colspan="1">16 ± 16</td>
<td align="left" rowspan="1" colspan="1">22 ± 9</td>
<td align="left" rowspan="1" colspan="1">15 ± 10</td>
<td align="left" rowspan="1" colspan="1">22 ± 19</td>
<td align="left" rowspan="1" colspan="1">15 ± 17</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<italic>% mirroring</italic>
</td>
<td align="left" rowspan="1" colspan="1">60 ± 25</td>
<td align="left" rowspan="1" colspan="1">32 ± 20</td>
<td align="left" rowspan="1" colspan="1">50 ± 12</td>
<td align="left" rowspan="1" colspan="1">40 ± 23</td>
<td align="left" rowspan="1" colspan="1">42 ± 21</td>
<td align="left" rowspan="1" colspan="1">49 ± 24</td>
</tr>
</tbody>
</table>
</alternatives>
</table-wrap>
</sec>
</sec>
<sec sec-type="conclusions" id="sec028">
<title>Discussion</title>
<sec id="sec029">
<title>Summary of main findings</title>
<p>The present experiment studied the effects of age, sensory modality (visual or auditory), and room rotation on movement through a virtual maze.</p>
<p>We found that
<italic>the sensory modality</italic>
used to navigate through the virtual maze had a significant effect on all of the movement parameters we examined: time, distance travelled, number of pauses, and number of collisions with the maze walls. With the auditory cues, participants took longer to complete the mazes, traversed a longer path through the maze, paused more, and had more collisions with the walls, compared to navigation with the visual cues.
<italic>Age</italic>
had a significant effect on time, number of pauses, and number of collisions: the older group took a longer time to complete the mazes, they paused more, and had more collisions with the walls, compared to the younger group.
<italic>Rotation of the room</italic>
had no significant effect on any of the examined performance metrics. We found no significant interaction effects.</p>
</sec>
<sec id="sec030">
<title>Why are the old not as good as the young with vision?</title>
<p>Our finding that the older adults took a longer time to complete the mazes is in line with previous findings showing that older adults are slower in a navigation task than younger adults [
<xref rid="pone.0151593.ref008" ref-type="bibr">8</xref>
,
<xref rid="pone.0151593.ref024" ref-type="bibr">24</xref>
<xref rid="pone.0151593.ref026" ref-type="bibr">26</xref>
]. The younger group also had fewer pauses both when navigating with vision and when navigating with audition, and had fewer collisions with the walls in the visual condition, compared to the older group. The advantage shown by the younger group may be in part due to the tendency of young individuals to use an allocentric navigation strategy (relying on external cues), as opposed to an egocentric navigation strategy (relying on an internal frame of reference), preferred by older adults [
<xref rid="pone.0151593.ref024" ref-type="bibr">24</xref>
]. The younger group would thus benefit more from the visual cues available during the visual trials. It should be noted that a recent study on virtual navigation failed to find an effect of age on navigation strategy [
<xref rid="pone.0151593.ref027" ref-type="bibr">27</xref>
], and that we did not explicitly test the navigation strategy employed by the two age groups in the current study. Current models of spatial navigation suggest that both egocentric and allocentric representations contribute to spatial memory, and that their relative contributions depend on the timescale of the task [
<xref rid="pone.0151593.ref028" ref-type="bibr">28</xref>
]. In this theoretical framework, on a short time scale (∼20 seconds), the egocentric representation is more pronounced, whereas on a long time scale (> 5 minutes), the allocentric representation is more active [
<xref rid="pone.0151593.ref028" ref-type="bibr">28</xref>
]. The current experimental task, which lasted approximately 3 minutes, is on the medium-term time scale, and thus would benefit from both types of representations. In addition, most of the young participants in the current study (14 out of 17) reported regularly playing video games, compared to a minority of the older participants (3 out of 12). Potentially, that experience made it easier for the younger group to perform the task of controlling a moving avatar while seated.</p>
</sec>
<sec id="sec031">
<title>Why does the old group have a shorter path length than the young group with audition?</title>
<p>Hearing loss affects over 60% of adults aged 70 years and older in the U.S. [
<xref rid="pone.0151593.ref029" ref-type="bibr">29</xref>
]. While the loss of hair cells in the human ear has traditionally been considered the main cause of age-related hearing loss, it is now acknowledged that comprehension of auditory signals depends not only on the state of the peripheral receptors but also on the integrity of the central auditory system [
<xref rid="pone.0151593.ref030" ref-type="bibr">30</xref>
]. Deciphering the meaning of complex auditory signals, such as speech, draws on the function of brain structures outside the auditory cortex [
<xref rid="pone.0151593.ref031" ref-type="bibr">31</xref>
]. The auditory task in the current experiment required more than a mere localization of auditory cues: the participants had to maintain a mental representation of their avatar's location within the virtual environment, and continuously evaluate their distance from nearby maze walls in order to successfully negotiate the mazes. Thus, success on the auditory navigation trials depended on a combination of sensory and cognitive abilities, most likely involving the hippocampus as well as extra-hippocampal regions.</p>
<p>The preference of older adults for an egocentric navigation strategy (relying on an internal frame of reference), in contrast to the allocentric strategy preferred by young adults [
<xref rid="pone.0151593.ref024" ref-type="bibr">24</xref>
], may explain the greater ability of the older group, compared with the younger group, to navigate within the virtual environment in the absence of visual cues.</p>
<p>Rosenbaum et al [
<xref rid="pone.0151593.ref032" ref-type="bibr">32</xref>
] reported that mental navigation involves the retrosplenial cortex (directionality in an allocentric framework), the medial and posterior parietal cortex (space perception within an egocentric coordinate system) and regions of prefrontal cortex (working memory). Moffat and colleagues [
<xref rid="pone.0151593.ref033" ref-type="bibr">33</xref>
] reported that navigation through a virtual environment was associated with activation in the hippocampus, the parahippocampal gyrus, retrosplenial cortex, right and left lateral parietal cortex, medial parietal lobe and the cerebellum. Older adults performing this task showed reduced activation in the hippocampus and parahippocampal gyrus, medial parietal lobe and retrosplenial cortex, and increased activation in anterior cingulate gyrus and medial frontal lobe. Studies on the effects of aging on navigation skills have shown that older adults have reduced abilities in some aspects of the task (e.g., they took longer and made more turning errors), but not in others (e.g., recalling encountered landmarks; see [
<xref rid="pone.0151593.ref008" ref-type="bibr">8</xref>
,
<xref rid="pone.0151593.ref025" ref-type="bibr">25</xref>
]). Importantly, results from studies on the effects of aging on navigational skills have come almost exclusively from tasks where information relied heavily on visual input (for a review, see [
<xref rid="pone.0151593.ref010" ref-type="bibr">10</xref>
]).</p>
<p>The current experiment is the first, to the best of our knowledge, to test the effects of aging on navigational skills using auditory cues alone, via a sensory substitution device. Thus, while aging is associated with a decline in auditory function (both peripheral and central losses [
<xref rid="pone.0151593.ref030" ref-type="bibr">30</xref>
]), and a decline in several cognitive functions [
<xref rid="pone.0151593.ref005" ref-type="bibr">5</xref>
], including spatial cognition and, within it, some navigational skills [
<xref rid="pone.0151593.ref010" ref-type="bibr">10</xref>
], there are also cognitive functions that improve with age [
<xref rid="pone.0151593.ref006" ref-type="bibr">6</xref>
,
<xref rid="pone.0151593.ref007" ref-type="bibr">7</xref>
], which may underlie the surprising finding that the older group did better than the younger group on the auditory trials in terms of path length. An alternative explanation for the finding that they took a shorter path through the maze is that they employed a more conservative, less exploratory approach to navigating. This interpretation is consistent with the longer completion times found in the older group. Taken together, the older adults had an effectively reduced speed of maze negotiation, which corresponds well to findings from real-world gait measurements showing a decline in walking speed with age [
<xref rid="pone.0151593.ref034" ref-type="bibr">34</xref>
<xref rid="pone.0151593.ref035" ref-type="bibr">35</xref>
].</p>
</sec>
<sec id="sec032">
<title>Why was there no significant effect of rotation?</title>
<p>Based on previous studies [
<xref rid="pone.0151593.ref021" ref-type="bibr">21</xref>
,
<xref rid="pone.0151593.ref022" ref-type="bibr">22</xref>
], we expected the perceptual bias that results from room rotation to interfere with task performance. However, no significant effect of room rotation was found in our current study. These findings suggest that the continuous veridical sensory feedback–whether provided visually or via the auditory sensory-substitution device–about the actual, non-biased, location of the avatar in the virtual environment, was able to overcome a perceptual bias, insofar as one resulted from the room’s rotation. Importantly, since previous studies that demonstrated the presence of a bias used much simpler sensory stimuli, it is possible that a perceptual bias was not present in the current dynamic experimental setup.</p>
</sec>
<sec id="sec033">
<title>Why does navigating with auditory cues take longer?</title>
<p>Though auditory cuing has been found to be more effective than visual cuing in certain situations, such as when driving in a simulator [
<xref rid="pone.0151593.ref036" ref-type="bibr">36</xref>
], spatial processing of auditory cues occurs in neural structures separate from those that process visuo-spatial information [
<xref rid="pone.0151593.ref037" ref-type="bibr">37</xref>
], and may take a different time course to process [
<xref rid="pone.0151593.ref038" ref-type="bibr">38</xref>
]. Though spatial information is shared across the senses [
<xref rid="pone.0151593.ref019" ref-type="bibr">19</xref>
], sighted individuals rely more on visual information for creating mental maps, and so the difference in performance between the two modalities may be a result of insufficient training with the auditory cues.</p>
</sec>
<sec id="sec034">
<title>Why did young participants have more correct mappings overall than the old group?</title>
<p>We found that the younger group produced an overall higher rate of correct drawings of the virtual paths they had taken. This result is consistent with findings from a study suggesting that path integration in the older population is not as good as in younger adults [
<xref rid="pone.0151593.ref039" ref-type="bibr">39</xref>
] and with findings from an experiment studying recall errors in route memorization [
<xref rid="pone.0151593.ref008" ref-type="bibr">8</xref>
]. In that experiment, the authors had young and old participants follow an experimenter through an unfamiliar route, and then recall the path. They found that the older adults made more errors, though there was no age-related difference in the "false positives" during recall (i.e., recalling a turn where there was none). It should be noted that the map drawing task relies on the translation of information derived from path integration into a map-like representation. One potential explanation is that young adults have more experience with video games, which some studies claim improve spatial cognition [
<xref rid="pone.0151593.ref040" ref-type="bibr">40</xref>
<xref rid="pone.0151593.ref041" ref-type="bibr">41</xref>
]. That result is consistent with the expectation that younger people, who often have experience with navigation video games, would be better able to later recall the layout of the maze they navigated through than older adults.</p>
</sec>
<sec id="sec035">
<title>Why were there so many mirror drawings?</title>
<p>We found that approximately half of the drawings made by the participants were mirror images of the actual paths taken. A possible explanation for this is what has been termed "mirror invariance" [
<xref rid="pone.0151593.ref042" ref-type="bibr">42</xref>
]. Presumably, humans are born with the ability to invariably recognize (and reproduce) images and their mirrored counterparts; this ability is diminished only when individuals are required to learn to read and write, where mirroring could cause confusion (e.g., for the letters p, q, b & d in the Latin script). This suppression is learned during childhood, and exists only for languages where such mirror confusion is possible, and not for others (e.g., Tamil) or for illiterate individuals [
<xref rid="pone.0151593.ref042" ref-type="bibr">42</xref>
]. Adults who have learned to repress the mirror invariance with respect to letters–in reading and writing–have retained the ability to recognize mirror images of objects, via the ventral visual pathway [
<xref rid="pone.0151593.ref042" ref-type="bibr">42</xref>
]. It may be, then, that the large proportion of mirror images produced in the current experiment is the result of mirror invariance with respect to path representation.</p>
</sec>
</sec>
<sec sec-type="conclusions" id="sec036">
<title>Conclusion</title>
<p>There is a decline in performance on a virtual navigation task with age and with the use of auditory feedback, as opposed to visual feedback. Nonetheless, older adults were able to successfully use a sensory substitution device to navigate through virtual mazes.</p>
</sec>
<sec sec-type="supplementary-material" id="sec037">
<title>Supporting Information</title>
<supplementary-material content-type="local-data" id="pone.0151593.s001">
<label>S1 Video</label>
<caption>
<title>Experimental setup.</title>
<p>A video demonstration of navigation through the virtual mazes with visual and with auditory feedback.</p>
<p>(MP4)</p>
</caption>
<media xlink:href="pone.0151593.s001.mp4">
<caption>
<p>Click here for additional data file.</p>
</caption>
</media>
</supplementary-material>
</sec>
</body>
<back>
<ack>
<p>The authors wish to thank Janna Kaplan for her help with the experiment's coordination, recruitment & running; Joel Ventura, Avijit Bakshi, Lee Picard and Lisa Khlestova for their help with running the experiment; Yoav Raanan for his help with analysis of the navigation data; Tamir Duvdevani, Yedidya Silverman and Yael Baron for their help with analysis of the drawing data; The Brandeis-Leir and the Brandeis-Bronfman foundations for providing funding for this experiment. The research was partially supported by the Helmsley Charitable Trust through the Agricultural, Biological and Cognitive Robotics Center of the Ben-Gurion University of the Negev. The support of the Promobilia Foundation is gratefully acknowledged.</p>
</ack>
<ref-list>
<title>References</title>
<ref id="pone.0151593.ref001">
<label>1</label>
<mixed-citation publication-type="journal">
<name>
<surname>Levy-Tzedek</surname>
<given-names>S</given-names>
</name>
,
<name>
<surname>Riemer</surname>
<given-names>D</given-names>
</name>
,
<name>
<surname>Amedi</surname>
<given-names>A</given-names>
</name>
.
<article-title>Color improves “visual” acuity via sound</article-title>
.
<source>Frontiers in neuroscience</source>
.
<year>2014</year>
;
<volume>8</volume>
.</mixed-citation>
</ref>
<ref id="pone.0151593.ref002">
<label>2</label>
<mixed-citation publication-type="journal">
<name>
<surname>Auvray</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Hanneton</surname>
<given-names>S</given-names>
</name>
,
<name>
<surname>O Regan</surname>
<given-names>JK</given-names>
</name>
.
<article-title>Learning to perceive with a visuo-auditory substitution system: Localisation and object recognition with 'The vOICe'</article-title>
.
<source>Perception</source>
.
<year>2007</year>
;
<volume>36</volume>
(
<issue>3</issue>
):
<fpage>416</fpage>
<pub-id pub-id-type="pmid">17455756</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0151593.ref003">
<label>3</label>
<mixed-citation publication-type="journal">
<name>
<surname>Proulx</surname>
<given-names>MJ</given-names>
</name>
,
<name>
<surname>Gwinnutt</surname>
<given-names>J</given-names>
</name>
,
<name>
<surname>Dell’Erba</surname>
<given-names>S</given-names>
</name>
,
<name>
<surname>Levy-Tzedek</surname>
<given-names>S</given-names>
</name>
,
<name>
<surname>de Sousa</surname>
<given-names>AA</given-names>
</name>
,
<name>
<surname>Brown</surname>
<given-names>DJ</given-names>
</name>
.
<article-title>Other ways of seeing: From behavior to neural mechanisms in the online “visual” control of action with sensory substitution</article-title>
.
<source>Restorative neurology and neuroscience</source>
.
<year>2016</year>
;
<volume>34</volume>
(
<issue>1</issue>
):
<fpage>29</fpage>
<lpage>44</lpage>
.
<pub-id pub-id-type="pmid">26599473</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0151593.ref004">
<label>4</label>
<mixed-citation publication-type="journal">
<name>
<surname>Shull</surname>
<given-names>PB</given-names>
</name>
,
<name>
<surname>Damian</surname>
<given-names>DD</given-names>
</name>
.
<article-title>Haptic wearables as sensory replacement, sensory augmentation and trainer–a review</article-title>
.
<source>Journal of neuroengineering and rehabilitation</source>
.
<year>2015</year>
;
<volume>12</volume>
(
<issue>1</issue>
):
<fpage>1</fpage>
.
<pub-id pub-id-type="pmid">25557982</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0151593.ref005">
<label>5</label>
<mixed-citation publication-type="journal">
<name>
<surname>Li</surname>
<given-names>S-C</given-names>
</name>
,
<name>
<surname>Lindenberger</surname>
<given-names>U</given-names>
</name>
,
<name>
<surname>Sikstroem</surname>
<given-names>S</given-names>
</name>
.
<article-title>Aging cognition: from neuromodulation to representation</article-title>
.
<source>Trends in cognitive sciences</source>
.
<year>2001</year>
;
<volume>5</volume>
(
<issue>11</issue>
):
<fpage>479</fpage>
<lpage>86</lpage>
.
<pub-id pub-id-type="pmid">11684480</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0151593.ref006">
<label>6</label>
<mixed-citation publication-type="other">Park DC. The basic mechanisms accounting for age-related decline in cognitive function. Park DC, Schwarz N, editors. Philadelphia2000. 3–19 p.</mixed-citation>
</ref>
<ref id="pone.0151593.ref007">
<label>7</label>
<mixed-citation publication-type="journal">
<name>
<surname>Hartshorne</surname>
<given-names>JK</given-names>
</name>
,
<name>
<surname>Germine</surname>
<given-names>LT</given-names>
</name>
.
<article-title>When Does Cognitive Functioning Peak? The Asynchronous Rise and Fall of Different Cognitive Abilities Across the Life Span</article-title>
.
<source>Psychological science</source>
.
<year>2015</year>
:0956797614567339.</mixed-citation>
</ref>
<ref id="pone.0151593.ref008">
<label>8</label>
<mixed-citation publication-type="journal">
<name>
<surname>Wilkniss</surname>
<given-names>SM</given-names>
</name>
,
<name>
<surname>Jones</surname>
<given-names>MG</given-names>
</name>
,
<name>
<surname>Korol</surname>
<given-names>DL</given-names>
</name>
,
<name>
<surname>Gold</surname>
<given-names>PE</given-names>
</name>
,
<name>
<surname>Manning</surname>
<given-names>CA</given-names>
</name>
.
<article-title>Age-related differences in an ecologically based study of route learning</article-title>
.
<source>Psychology and aging</source>
.
<year>1997</year>
;
<volume>12</volume>
(
<issue>2</issue>
):
<fpage>372</fpage>
<pub-id pub-id-type="pmid">9189997</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0151593.ref009">
<label>9</label>
<mixed-citation publication-type="journal">
<name>
<surname>Yan</surname>
<given-names>JH</given-names>
</name>
,
<name>
<surname>Thomas</surname>
<given-names>JR</given-names>
</name>
,
<name>
<surname>Stelmach</surname>
<given-names>GE</given-names>
</name>
.
<article-title>Aging and rapid aiming arm movement control</article-title>
.
<source>Experimental aging research</source>
.
<year>1998</year>
;
<volume>24</volume>
(
<issue>2</issue>
):
<fpage>155</fpage>
<lpage>68</lpage>
.
<pub-id pub-id-type="pmid">9555568</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0151593.ref010">
<label>10</label>
<mixed-citation publication-type="journal">
<name>
<surname>Moffat</surname>
<given-names>SD</given-names>
</name>
.
<article-title>Aging and spatial navigation: what do we know and where do we go?</article-title>
<source>Neuropsychology review</source>
.
<year>2009</year>
;
<volume>19</volume>
(
<issue>4</issue>
):
<fpage>478</fpage>
<lpage>89</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1007/s11065-009-9120-3">10.1007/s11065-009-9120-3</ext-link>
</comment>
<pub-id pub-id-type="pmid">19936933</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0151593.ref011">
<label>11</label>
<mixed-citation publication-type="book">Lahav O, Schloerb DW, Kumar S, Srinivasan MA, editors. BlindAid: A learning environment for enabling people who are blind to explore and navigate through unknown real spaces. Virtual Rehabilitation, 2008; 2008: IEEE.</mixed-citation>
</ref>
<ref id="pone.0151593.ref012">
<label>12</label>
<mixed-citation publication-type="journal">
<name>
<surname>Maidenbaum</surname>
<given-names>S</given-names>
</name>
,
<name>
<surname>Hanassy</surname>
<given-names>S</given-names>
</name>
,
<name>
<surname>Abboud</surname>
<given-names>S</given-names>
</name>
,
<name>
<surname>Buchs</surname>
<given-names>G</given-names>
</name>
,
<name>
<surname>Chebat</surname>
<given-names>D-R</given-names>
</name>
,
<name>
<surname>Levy-Tzedek</surname>
<given-names>S</given-names>
</name>
,
<etal>et al</etal>
<article-title>The "EyeCane", a new electronic travel aid for the blind: Technology, behavior & swift learning</article-title>
.
<source>Restorative neurology and neuroscience</source>
.
<year>2014</year>
;
<volume>32</volume>
(
<issue>6</issue>
):
<fpage>813</fpage>
<lpage>24</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.3233/RNN-130351">10.3233/RNN-130351</ext-link>
</comment>
<pub-id pub-id-type="pmid">25201814</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0151593.ref013">
<label>13</label>
<mixed-citation publication-type="journal">
<name>
<surname>Maidenbaum</surname>
<given-names>S</given-names>
</name>
,
<name>
<surname>Levy-Tzedek</surname>
<given-names>S</given-names>
</name>
,
<name>
<surname>Chebat</surname>
<given-names>DR</given-names>
</name>
,
<name>
<surname>Namer-Furstenberg</surname>
<given-names>R</given-names>
</name>
,
<name>
<surname>Amedi</surname>
<given-names>A</given-names>
</name>
.
<article-title>The Effect of Extended Sensory Range via the EyeCane Sensory Substitution Device on the Characteristics of Visionless Virtual Navigation</article-title>
.
<source>Multisensory research</source>
.
<year>2014</year>
;
<volume>27</volume>
(
<issue>5–6</issue>
):
<fpage>379</fpage>
<lpage>97</lpage>
.
<pub-id pub-id-type="pmid">25693302</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0151593.ref014">
<label>14</label>
<mixed-citation publication-type="journal">
<name>
<surname>Maidenbaum</surname>
<given-names>S</given-names>
</name>
,
<name>
<surname>Levy-Tzedek</surname>
<given-names>S</given-names>
</name>
,
<name>
<surname>Chebat</surname>
<given-names>D-R</given-names>
</name>
,
<name>
<surname>Amedi</surname>
<given-names>A</given-names>
</name>
.
<article-title>Increasing Accessibility to the Blind of Virtual Environments, Using a Virtual Mobility Aid Based On the</article-title>
.
<source>PLOS ONE</source>
.
<year>2013</year>
;
<volume>8</volume>
(
<issue>8</issue>
):
<fpage>e72555</fpage>
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1371/journal.pone.0072555">10.1371/journal.pone.0072555</ext-link>
</comment>
<pub-id pub-id-type="pmid">23977316</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0151593.ref015">
<label>15</label>
<mixed-citation publication-type="journal">
<name>
<surname>Merabet</surname>
<given-names>LB</given-names>
</name>
,
<name>
<surname>Connors</surname>
<given-names>EC</given-names>
</name>
,
<name>
<surname>Halko</surname>
<given-names>MA</given-names>
</name>
,
<name>
<surname>Sanchez</surname>
<given-names>J</given-names>
</name>
.
<article-title>Teaching the blind to find their way by playing video games</article-title>
.
<source>PLOS ONE</source>
.
<year>2012</year>
;
<volume>7</volume>
(
<issue>9</issue>
):
<fpage>e44958</fpage>
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1371/journal.pone.0044958">10.1371/journal.pone.0044958</ext-link>
</comment>
<pub-id pub-id-type="pmid">23028703</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0151593.ref016">
<label>16</label>
<mixed-citation publication-type="journal">
<name>
<surname>Smith</surname>
<given-names>K</given-names>
</name>
.
<article-title>Universal life: the use of virtual worlds among people with disabilities</article-title>
.
<source>Universal Access in the Information Society</source>
.
<year>2012</year>
;
<volume>11</volume>
(
<issue>4</issue>
):
<fpage>387</fpage>
<lpage>98</lpage>
.</mixed-citation>
</ref>
<ref id="pone.0151593.ref017">
<label>17</label>
<mixed-citation publication-type="book">
<name>
<surname>Westin</surname>
<given-names>T</given-names>
</name>
,
<name>
<surname>Bierre</surname>
<given-names>K</given-names>
</name>
,
<name>
<surname>Gramenos</surname>
<given-names>D</given-names>
</name>
,
<name>
<surname>Hinn</surname>
<given-names>M</given-names>
</name>
.
<chapter-title>Advances in Game Accessibility from 2005 to 2010</chapter-title>
<source>Universal access in human-computer interaction Users diversity</source>
:
<publisher-name>Springer</publisher-name>
;
<year>2011</year>
p.
<fpage>400</fpage>
<lpage>9</lpage>
.</mixed-citation>
</ref>
<ref id="pone.0151593.ref018">
<label>18</label>
<mixed-citation publication-type="journal">
<name>
<surname>Levy-Tzedek</surname>
<given-names>S</given-names>
</name>
,
<name>
<surname>Hanassy</surname>
<given-names>S</given-names>
</name>
,
<name>
<surname>Abboud</surname>
<given-names>S</given-names>
</name>
,
<name>
<surname>Shachar</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Amedi</surname>
<given-names>A</given-names>
</name>
.
<article-title>Fast, accurate reaching movements with a visual-to-auditory sensory substitution device</article-title>
.
<source>Restorative Neurology and Neuroscience</source>
.
<year>2012</year>
;
<volume>30</volume>
(
<issue>4</issue>
):
<fpage>313</fpage>
<lpage>23</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.3233/RNN-2012-110219">10.3233/RNN-2012-110219</ext-link>
</comment>
<pub-id pub-id-type="pmid">22596353</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0151593.ref019">
<label>19</label>
<mixed-citation publication-type="journal">
<name>
<surname>Levy-Tzedek</surname>
<given-names>S</given-names>
</name>
,
<name>
<surname>Novick</surname>
<given-names>I</given-names>
</name>
,
<name>
<surname>Arbel</surname>
<given-names>R</given-names>
</name>
,
<name>
<surname>Abboud</surname>
<given-names>S</given-names>
</name>
,
<name>
<surname>Maidenbaum</surname>
<given-names>S</given-names>
</name>
,
<name>
<surname>Vaadia</surname>
<given-names>E</given-names>
</name>
,
<etal>et al</etal>
<article-title>Cross-sensory transfer of sensory-motor information: visuomotor learning affects performance on an audiomotor task, using sensory-substitution</article-title>
.
<source>Scientific reports</source>
.
<year>2012</year>
;
<volume>2</volume>
.</mixed-citation>
</ref>
<ref id="pone.0151593.ref020">
<label>20</label>
<mixed-citation publication-type="other">White GR, Fitzpatrick G, McAllister G, editors. Toward accessible 3D virtual environments for the blind and visually impaired. Proceedings of the 3rd international conference on Digital Interactive Media in Entertainment and Arts; 2008: ACM.</mixed-citation>
</ref>
<ref id="pone.0151593.ref021">
<label>21</label>
<mixed-citation publication-type="journal">
<name>
<surname>Carriot</surname>
<given-names>J</given-names>
</name>
,
<name>
<surname>Bryan</surname>
<given-names>A</given-names>
</name>
,
<name>
<surname>DiZio</surname>
<given-names>P</given-names>
</name>
,
<name>
<surname>Lackner</surname>
<given-names>JR</given-names>
</name>
.
<article-title>The oculogyral illusion: retinal and oculomotor factors</article-title>
.
<source>Experimental Brain Research</source>
.
<year>2011</year>
;
<volume>209</volume>
(
<issue>3</issue>
):
<fpage>415</fpage>
<lpage>23</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1007/s00221-011-2567-5">10.1007/s00221-011-2567-5</ext-link>
</comment>
<pub-id pub-id-type="pmid">21298422</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0151593.ref022">
<label>22</label>
<mixed-citation publication-type="journal">
<name>
<surname>Lackner</surname>
<given-names>JR</given-names>
</name>
,
<name>
<surname>DiZio</surname>
<given-names>P</given-names>
</name>
.
<article-title>Audiogravic and oculogravic illusions represent a unified spatial remapping</article-title>
.
<source>Experimental Brain Research</source>
.
<year>2010</year>
;
<volume>202</volume>
(
<issue>2</issue>
):
<fpage>513</fpage>
<lpage>8</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1007/s00221-009-2149-y">10.1007/s00221-009-2149-y</ext-link>
</comment>
<pub-id pub-id-type="pmid">20062982</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0151593.ref023">
<label>23</label>
<mixed-citation publication-type="journal">
<name>
<surname>Box</surname>
<given-names>GE</given-names>
</name>
,
<name>
<surname>Cox</surname>
<given-names>DR</given-names>
</name>
.
<article-title>An analysis of transformations</article-title>
.
<source>Journal of the Royal Statistical Society Series B (Methodological)</source>
.
<year>1964</year>
:
<fpage>211</fpage>
<lpage>52</lpage>
.</mixed-citation>
</ref>
<ref id="pone.0151593.ref024">
<label>24</label>
<mixed-citation publication-type="journal">
<name>
<surname>Rodgers</surname>
<given-names>MK</given-names>
</name>
,
<name>
<surname>Sindone</surname>
<given-names>JA</given-names>
</name>
,
<name>
<surname>Moffat</surname>
<given-names>SD</given-names>
</name>
.
<article-title>Effects of age on navigation strategy</article-title>
.
<source>Neurobiology of aging</source>
.
<year>2012</year>
;
<volume>33</volume>
(
<issue>1</issue>
):
<fpage>202.e15</fpage>
<lpage>202.e22</lpage>
.
<pub-id pub-id-type="pmid">20832911</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0151593.ref025">
<label>25</label>
<mixed-citation publication-type="journal">
<name>
<surname>Moffat</surname>
<given-names>SD</given-names>
</name>
,
<name>
<surname>Zonderman</surname>
<given-names>AB</given-names>
</name>
,
<name>
<surname>Resnick</surname>
<given-names>SM</given-names>
</name>
.
<article-title>Age differences in spatial memory in a virtual environment navigation task</article-title>
.
<source>Neurobiology of aging</source>
.
<year>2001</year>
;
<volume>22</volume>
(
<issue>5</issue>
):
<fpage>787</fpage>
<lpage>96</lpage>
.
<pub-id pub-id-type="pmid">11705638</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0151593.ref026">
<label>26</label>
<mixed-citation publication-type="journal">
<name>
<surname>Iaria</surname>
<given-names>G</given-names>
</name>
,
<name>
<surname>Palermo</surname>
<given-names>L</given-names>
</name>
,
<name>
<surname>Committeri</surname>
<given-names>G</given-names>
</name>
,
<name>
<surname>Barton</surname>
<given-names>JJ</given-names>
</name>
.
<article-title>Age differences in the formation and use of cognitive maps</article-title>
.
<source>Behavioural brain research</source>
.
<year>2009</year>
;
<volume>196</volume>
(
<issue>2</issue>
):
<fpage>187</fpage>
<lpage>91</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1016/j.bbr.2008.08.040">10.1016/j.bbr.2008.08.040</ext-link>
</comment>
<pub-id pub-id-type="pmid">18817815</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0151593.ref027">
<label>27</label>
<mixed-citation publication-type="journal">
<name>
<surname>Goeke</surname>
<given-names>C</given-names>
</name>
,
<name>
<surname>Kornpetpanee</surname>
<given-names>S</given-names>
</name>
,
<name>
<surname>Köster</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Fernández-Revelles</surname>
<given-names>AB</given-names>
</name>
,
<name>
<surname>Gramann</surname>
<given-names>K</given-names>
</name>
,
<name>
<surname>König</surname>
<given-names>P</given-names>
</name>
.
<article-title>Cultural background shapes spatial reference frame proclivity</article-title>
.
<source>Scientific reports</source>
.
<year>2015</year>
;
<volume>5</volume>
.</mixed-citation>
</ref>
<ref id="pone.0151593.ref028">
<label>28</label>
<mixed-citation publication-type="journal">
<name>
<surname>Byrne</surname>
<given-names>P</given-names>
</name>
,
<name>
<surname>Becker</surname>
<given-names>S</given-names>
</name>
,
<name>
<surname>Burgess</surname>
<given-names>N</given-names>
</name>
.
<article-title>Remembering the past and imagining the future: a neural model of spatial memory and imagery</article-title>
.
<source>Psychological review</source>
.
<year>2007</year>
;
<volume>114</volume>
(
<issue>2</issue>
):
<fpage>340</fpage>
<pub-id pub-id-type="pmid">17500630</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0151593.ref029">
<label>29</label>
<mixed-citation publication-type="journal">
<name>
<surname>Lin</surname>
<given-names>FR</given-names>
</name>
,
<name>
<surname>Thorpe</surname>
<given-names>R</given-names>
</name>
,
<name>
<surname>Gordon-Salant</surname>
<given-names>S</given-names>
</name>
,
<name>
<surname>Ferrucci</surname>
<given-names>L</given-names>
</name>
.
<article-title>Hearing loss prevalence and risk factors among older adults in the United States</article-title>
.
<source>The Journals of Gerontology Series A: Biological Sciences and Medical Sciences</source>
.
<year>2011</year>
;
<volume>66</volume>
(
<issue>5</issue>
):
<fpage>582</fpage>
<lpage>90</lpage>
.</mixed-citation>
</ref>
<ref id="pone.0151593.ref030">
<label>30</label>
<mixed-citation publication-type="journal">
<name>
<surname>Ouda</surname>
<given-names>L</given-names>
</name>
,
<name>
<surname>Profant</surname>
<given-names>O</given-names>
</name>
,
<name>
<surname>Syka</surname>
<given-names>J</given-names>
</name>
.
<article-title>Age-related changes in the central auditory system</article-title>
.
<source>Cell and tissue research</source>
.
<year>2015</year>
:
<fpage>1</fpage>
<lpage>22</lpage>
.</mixed-citation>
</ref>
<ref id="pone.0151593.ref031">
<label>31</label>
<mixed-citation publication-type="journal">
<name>
<surname>Stenfelt</surname>
<given-names>S</given-names>
</name>
,
<name>
<surname>Roennberg</surname>
<given-names>J</given-names>
</name>
.
<article-title>The Signal-Cognition interface: Interactions between degraded auditory signals and cognitive processes</article-title>
.
<source>Scandinavian journal of psychology</source>
.
<year>2009</year>
;
<volume>50</volume>
(
<issue>5</issue>
):
<fpage>385</fpage>
<lpage>93</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1111/j.1467-9450.2009.00748.x">10.1111/j.1467-9450.2009.00748.x</ext-link>
</comment>
<pub-id pub-id-type="pmid">19778386</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0151593.ref032">
<label>32</label>
<mixed-citation publication-type="journal">
<name>
<surname>Rosenbaum</surname>
<given-names>RS</given-names>
</name>
,
<name>
<surname>Ziegler</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Winocur</surname>
<given-names>G</given-names>
</name>
,
<name>
<surname>Grady</surname>
<given-names>CL</given-names>
</name>
,
<name>
<surname>Moscovitch</surname>
<given-names>M</given-names>
</name>
.
<article-title>"œI have often walked down this street before": fMRI studies on the hippocampus and other structures during mental navigation of an old environment</article-title>
.
<source>Hippocampus</source>
.
<year>2004</year>
;
<volume>14</volume>
(
<issue>7</issue>
):
<fpage>826</fpage>
<lpage>35</lpage>
.
<pub-id pub-id-type="pmid">15382253</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0151593.ref033">
<label>33</label>
<mixed-citation publication-type="journal">
<name>
<surname>Moffat</surname>
<given-names>SD</given-names>
</name>
,
<name>
<surname>Elkins</surname>
<given-names>W</given-names>
</name>
,
<name>
<surname>Resnick</surname>
<given-names>SM</given-names>
</name>
.
<article-title>Age differences in the neural systems supporting human allocentric spatial navigation</article-title>
.
<source>Neurobiology of aging</source>
.
<year>2006</year>
;
<volume>27</volume>
(
<issue>7</issue>
):
<fpage>965</fpage>
<lpage>72</lpage>
.
<pub-id pub-id-type="pmid">15982787</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0151593.ref034">
<label>34</label>
<mixed-citation publication-type="journal">
<name>
<surname>Himann</surname>
<given-names>JE</given-names>
</name>
,
<name>
<surname>Cunningham</surname>
<given-names>DA</given-names>
</name>
,
<name>
<surname>Rechnitzer</surname>
<given-names>PA</given-names>
</name>
,
<name>
<surname>Paterson</surname>
<given-names>DH</given-names>
</name>
.
<article-title>Age-related changes in speed of walking</article-title>
.
<source>Medicine and science in sports and exercise</source>
.
<year>1988</year>
;
<volume>20</volume>
(
<issue>2</issue>
):
<fpage>161</fpage>
<lpage>6</lpage>
.
<pub-id pub-id-type="pmid">3367751</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0151593.ref035">
<label>35</label>
<mixed-citation publication-type="journal">
<name>
<surname>Kimura</surname>
<given-names>T</given-names>
</name>
,
<name>
<surname>Kobayashi</surname>
<given-names>H</given-names>
</name>
,
<name>
<surname>Nakayama</surname>
<given-names>E</given-names>
</name>
,
<name>
<surname>Hanaoka</surname>
<given-names>M</given-names>
</name>
.
<article-title>Effects of aging on gait patterns in the healthy elderly</article-title>
.
<source>Anthropological Science</source>
.
<year>2007</year>
;
<volume>115</volume>
(
<issue>1</issue>
):
<fpage>67</fpage>
<lpage>72</lpage>
.</mixed-citation>
</ref>
<ref id="pone.0151593.ref036">
<label>36</label>
<mixed-citation publication-type="journal">
<name>
<surname>Liu</surname>
<given-names>Y-C</given-names>
</name>
.
<article-title>Comparative study of the effects of auditory, visual and multimodality displays on drivers' performance in advanced traveller information systems</article-title>
.
<source>Ergonomics</source>
.
<year>2001</year>
;
<volume>44</volume>
(
<issue>4</issue>
):
<fpage>425</fpage>
<lpage>42</lpage>
.
<pub-id pub-id-type="pmid">11291824</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0151593.ref037">
<label>37</label>
<mixed-citation publication-type="journal">
<name>
<surname>Bushara</surname>
<given-names>KO</given-names>
</name>
,
<name>
<surname>Weeks</surname>
<given-names>RA</given-names>
</name>
,
<name>
<surname>Ishii</surname>
<given-names>K</given-names>
</name>
,
<name>
<surname>Catalan</surname>
<given-names>M-J</given-names>
</name>
,
<name>
<surname>Tian</surname>
<given-names>B</given-names>
</name>
,
<name>
<surname>Rauschecker</surname>
<given-names>JP</given-names>
</name>
,
<etal>et al</etal>
<article-title>Modality-specific frontal and parietal areas for auditory and visual spatial localization in humans</article-title>
.
<source>Nature neuroscience</source>
.
<year>1999</year>
;
<volume>2</volume>
(
<issue>8</issue>
):
<fpage>759</fpage>
<lpage>66</lpage>
.
<pub-id pub-id-type="pmid">10412067</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0151593.ref038">
<label>38</label>
<mixed-citation publication-type="journal">
<name>
<surname>Loomis</surname>
<given-names>JM</given-names>
</name>
,
<name>
<surname>Klatzky</surname>
<given-names>RL</given-names>
</name>
,
<name>
<surname>McHugh</surname>
<given-names>B</given-names>
</name>
,
<name>
<surname>Giudice</surname>
<given-names>NA</given-names>
</name>
.
<article-title>Spatial working memory for locations specified by vision and audition: Testing the amodality hypothesis</article-title>
.
<source>Attention, Perception, &amp; Psychophysics</source>
.
<year>2012</year>
;
<volume>74</volume>
(
<issue>6</issue>
):
<fpage>1260</fpage>
<lpage>7</lpage>
.</mixed-citation>
</ref>
<ref id="pone.0151593.ref039">
<label>39</label>
<mixed-citation publication-type="journal">
<name>
<surname>Mahmood</surname>
<given-names>O</given-names>
</name>
,
<name>
<surname>Adamo</surname>
<given-names>D</given-names>
</name>
,
<name>
<surname>Briceno</surname>
<given-names>E</given-names>
</name>
,
<name>
<surname>Moffat</surname>
<given-names>SD</given-names>
</name>
.
<article-title>Age differences in visual path integration</article-title>
.
<source>Behavioural brain research</source>
.
<year>2009</year>
;
<volume>205</volume>
(
<issue>1</issue>
):
<fpage>88</fpage>
<lpage>95</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1016/j.bbr.2009.08.001">10.1016/j.bbr.2009.08.001</ext-link>
</comment>
<pub-id pub-id-type="pmid">19665496</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0151593.ref040">
<label>40</label>
<mixed-citation publication-type="journal">
<name>
<surname>Boot</surname>
<given-names>WR</given-names>
</name>
,
<name>
<surname>Blakely</surname>
<given-names>DP</given-names>
</name>
,
<name>
<surname>Simons</surname>
<given-names>DJ</given-names>
</name>
.
<article-title>Do action video games improve perception and cognition?</article-title>
<source>Frontiers in psychology</source>
.
<year>2011</year>
;
<volume>2</volume>
.</mixed-citation>
</ref>
<ref id="pone.0151593.ref041">
<label>41</label>
<mixed-citation publication-type="journal">
<name>
<surname>Spence</surname>
<given-names>I</given-names>
</name>
,
<name>
<surname>Feng</surname>
<given-names>J</given-names>
</name>
.
<article-title>Video games and spatial cognition</article-title>
.
<source>Review of General Psychology</source>
.
<year>2010</year>
;
<volume>14</volume>
(
<issue>2</issue>
):
<fpage>92</fpage>
.</mixed-citation>
</ref>
<ref id="pone.0151593.ref042">
<label>42</label>
<mixed-citation publication-type="journal">
<name>
<surname>Dehaene</surname>
<given-names>S</given-names>
</name>
,
<name>
<surname>Nakamura</surname>
<given-names>K</given-names>
</name>
,
<name>
<surname>Jobert</surname>
<given-names>A</given-names>
</name>
,
<name>
<surname>Kuroki</surname>
<given-names>C</given-names>
</name>
,
<name>
<surname>Ogawa</surname>
<given-names>S</given-names>
</name>
,
<name>
<surname>Cohen</surname>
<given-names>L</given-names>
</name>
.
<article-title>Why do children make mirror errors in reading? Neural correlates of mirror invariance in the visual word form area</article-title>
.
<source>Neuroimage</source>
.
<year>2010</year>
;
<volume>49</volume>
(
<issue>2</issue>
):
<fpage>1837</fpage>
<lpage>48</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1016/j.neuroimage.2009.09.024">10.1016/j.neuroimage.2009.09.024</ext-link>
</comment>
<pub-id pub-id-type="pmid">19770045</pub-id>
</mixed-citation>
</ref>
</ref-list>
</back>
</pmc>
<affiliations>
<list>
<country>
<li>France</li>
<li>Israël</li>
<li>États-Unis</li>
</country>
<region>
<li>Massachusetts</li>
</region>
<settlement>
<li>Paris</li>
</settlement>
</list>
<tree>
<country name="Israël">
<noRegion>
<name sortKey="Levy Tzedek, S" sort="Levy Tzedek, S" uniqKey="Levy Tzedek S" first="S." last="Levy-Tzedek">S. Levy-Tzedek</name>
</noRegion>
<name sortKey="Amedi, A" sort="Amedi, A" uniqKey="Amedi A" first="A." last="Amedi">A. Amedi</name>
<name sortKey="Amedi, A" sort="Amedi, A" uniqKey="Amedi A" first="A." last="Amedi">A. Amedi</name>
<name sortKey="Levy Tzedek, S" sort="Levy Tzedek, S" uniqKey="Levy Tzedek S" first="S." last="Levy-Tzedek">S. Levy-Tzedek</name>
<name sortKey="Maidenbaum, S" sort="Maidenbaum, S" uniqKey="Maidenbaum S" first="S." last="Maidenbaum">S. Maidenbaum</name>
</country>
<country name="France">
<noRegion>
<name sortKey="Amedi, A" sort="Amedi, A" uniqKey="Amedi A" first="A." last="Amedi">A. Amedi</name>
</noRegion>
</country>
<country name="États-Unis">
<region name="Massachusetts">
<name sortKey="Lackner, J" sort="Lackner, J" uniqKey="Lackner J" first="J." last="Lackner">J. Lackner</name>
</region>
</country>
</tree>
</affiliations>
</record>

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/HapticV1/Data/Pmc/Checkpoint
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 000168 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/Pmc/Checkpoint/biblio.hfd -nk 000168 | SxmlIndent | more
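
The two commands above are equivalent once EXPLOR_AREA points to the root of the exploration area. A minimal sketch, assuming only that $WICRI_ROOT is already set to the local Wicri installation (the value of EXPLOR_AREA below is inferred from the EXPLOR_STEP definition above):

# Assumption: $WICRI_ROOT points to the local Wicri installation
EXPLOR_AREA=$WICRI_ROOT/Ticri/CIDE/explor/HapticV1
HfdSelect -h $EXPLOR_AREA/Data/Pmc/Checkpoint/biblio.hfd -nk 000168 | SxmlIndent | more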

To place a link to this page in the Wicri network

{{Explor lien
   |wiki=    Ticri/CIDE
   |area=    HapticV1
   |flux=    Pmc
   |étape=   Checkpoint
   |type=    RBID
   |clé=     PMC:4805187
   |texte=   Aging and Sensory Substitution in a Virtual Navigation Task
}}

To generate wiki pages

HfdIndexSelect -h $EXPLOR_AREA/Data/Pmc/Checkpoint/RBID.i   -Sk "pubmed:27007812" \
       | HfdSelect -Kh $EXPLOR_AREA/Data/Pmc/Checkpoint/biblio.hfd   \
       | NlmPubMed2Wicri -a HapticV1 

Wicri

This area was generated with Dilib version V0.6.23.
Data generation: Mon Jun 13 01:09:46 2016. Site generation: Wed Mar 6 09:54:07 2024