Exploration server on haptic devices

Warning: this site is under development!
Warning: this site is generated automatically from raw corpora.
The information has therefore not been validated.

Task-dependent calibration of auditory spatial perception through environmental visual observation

Internal identifier: 003A10 (Ncbi/Merge); previous: 003A09; next: 003A11


Authors: Alessia Tonelli; Luca Brayda; Monica Gori

Source:

RBID: PMC:4451354

Abstract

Visual information is paramount to space perception. Vision influences auditory space estimation. Many studies show that simultaneous visual and auditory cues improve the precision of the final multisensory estimate. However, the amount or temporal extent of visual information that is sufficient to influence auditory perception is still unknown. It is therefore interesting to know whether vision can improve auditory precision through a short-term environmental observation preceding the audio task, and whether this influence is task-specific, environment-specific, or both. To test these issues we investigated possible improvements of acoustic precision with sighted blindfolded participants in two audio tasks [minimum audible angle (MAA) and space bisection] and two acoustically different environments (normal room and anechoic room). With respect to a baseline of auditory precision, we found an improvement of precision in the space bisection task but not in the MAA after the observation of a normal room. No improvement was found when performing the same task in an anechoic chamber. In addition, no difference was found between a condition of short environment observation and a condition of full vision during the whole experimental session. Our results suggest that even short-term environmental observation can calibrate auditory spatial performance. They also suggest that echoes can be the cue that underpins visual calibration. Echoes may mediate the transfer of information from the visual to the auditory system.


Url:
DOI: 10.3389/fnsys.2015.00084
PubMed: 26082692
PubMed Central: 4451354

Links to previous steps (curation, corpus...)


Links to Exploration step

PMC:4451354

The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">Task-dependent calibration of auditory spatial perception through environmental visual observation</title>
<author>
<name sortKey="Tonelli, Alessia" sort="Tonelli, Alessia" uniqKey="Tonelli A" first="Alessia" last="Tonelli">Alessia Tonelli</name>
<affiliation>
<nlm:aff id="aff1"></nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Brayda, Luca" sort="Brayda, Luca" uniqKey="Brayda L" first="Luca" last="Brayda">Luca Brayda</name>
<affiliation>
<nlm:aff id="aff1"></nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Gori, Monica" sort="Gori, Monica" uniqKey="Gori M" first="Monica" last="Gori">Monica Gori</name>
<affiliation>
<nlm:aff id="aff1"></nlm:aff>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PMC</idno>
<idno type="pmid">26082692</idno>
<idno type="pmc">4451354</idno>
<idno type="url">http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4451354</idno>
<idno type="RBID">PMC:4451354</idno>
<idno type="doi">10.3389/fnsys.2015.00084</idno>
<date when="2015">2015</date>
<idno type="wicri:Area/Pmc/Corpus">000223</idno>
<idno type="wicri:Area/Pmc/Curation">000223</idno>
<idno type="wicri:Area/Pmc/Checkpoint">000297</idno>
<idno type="wicri:Area/Ncbi/Merge">003A10</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a" type="main">Task-dependent calibration of auditory spatial perception through environmental visual observation</title>
<author>
<name sortKey="Tonelli, Alessia" sort="Tonelli, Alessia" uniqKey="Tonelli A" first="Alessia" last="Tonelli">Alessia Tonelli</name>
<affiliation>
<nlm:aff id="aff1"></nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Brayda, Luca" sort="Brayda, Luca" uniqKey="Brayda L" first="Luca" last="Brayda">Luca Brayda</name>
<affiliation>
<nlm:aff id="aff1"></nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Gori, Monica" sort="Gori, Monica" uniqKey="Gori M" first="Monica" last="Gori">Monica Gori</name>
<affiliation>
<nlm:aff id="aff1"></nlm:aff>
</affiliation>
</author>
</analytic>
<series>
<title level="j">Frontiers in Systems Neuroscience</title>
<idno type="eISSN">1662-5137</idno>
<imprint>
<date when="2015">2015</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass></textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">
<p>Visual information is paramount to space perception. Vision influences auditory space estimation. Many studies show that simultaneous visual and auditory cues improve the precision of the final multisensory estimate. However, the amount or temporal extent of visual information that is sufficient to influence auditory perception is still unknown. It is therefore interesting to know whether vision can improve auditory precision through a short-term environmental observation preceding the audio task, and whether this influence is task-specific, environment-specific, or both. To test these issues we investigated possible improvements of acoustic precision with sighted blindfolded participants in two audio tasks [minimum audible angle (MAA) and space bisection] and two acoustically different environments (normal room and anechoic room). With respect to a baseline of auditory precision, we found an improvement of precision in the space bisection task but not in the MAA after the observation of a normal room. No improvement was found when performing the same task in an anechoic chamber. In addition, no difference was found between a condition of short environment observation and a condition of full vision during the whole experimental session. Our results suggest that even short-term environmental observation can calibrate auditory spatial performance. They also suggest that echoes can be the cue that underpins visual calibration. Echoes may mediate the transfer of information from the visual to the auditory system.</p>
</div>
</front>
<back>
<div1 type="bibliography">
<listBibl>
<biblStruct>
<analytic>
<author>
<name sortKey="Alais, D" uniqKey="Alais D">D. Alais</name>
</author>
<author>
<name sortKey="Burr, D" uniqKey="Burr D">D. Burr</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Buckingham, G" uniqKey="Buckingham G">G. Buckingham</name>
</author>
<author>
<name sortKey="Milne, J L" uniqKey="Milne J">J. L. Milne</name>
</author>
<author>
<name sortKey="Byrne, C M" uniqKey="Byrne C">C. M. Byrne</name>
</author>
<author>
<name sortKey="Goodale, M A" uniqKey="Goodale M">M. A. Goodale</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Burton, G" uniqKey="Burton G">G. Burton</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Clarke, J J" uniqKey="Clarke J">J. J. Clarke</name>
</author>
<author>
<name sortKey="Yuille, A L" uniqKey="Yuille A">A. L. Yuille</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ernst, M O" uniqKey="Ernst M">M. O. Ernst</name>
</author>
<author>
<name sortKey="Banks, M S" uniqKey="Banks M">M. S. Banks</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ghahramani, Z" uniqKey="Ghahramani Z">Z. Ghahramani</name>
</author>
<author>
<name sortKey="Wolpert, D M" uniqKey="Wolpert D">D. M. Wolpert</name>
</author>
<author>
<name sortKey="Jordan, M I" uniqKey="Jordan M">M. I. Jordan</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gori, M" uniqKey="Gori M">M. Gori</name>
</author>
<author>
<name sortKey="Del Viva, M" uniqKey="Del Viva M">M. Del Viva</name>
</author>
<author>
<name sortKey="Sandini, G" uniqKey="Sandini G">G. Sandini</name>
</author>
<author>
<name sortKey="Burr, D C" uniqKey="Burr D">D. C. Burr</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gori, M" uniqKey="Gori M">M. Gori</name>
</author>
<author>
<name sortKey="Giuliana, L" uniqKey="Giuliana L">L. Giuliana</name>
</author>
<author>
<name sortKey="Sandini, G" uniqKey="Sandini G">G. Sandini</name>
</author>
<author>
<name sortKey="Burr, D" uniqKey="Burr D">D. Burr</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gori, M" uniqKey="Gori M">M. Gori</name>
</author>
<author>
<name sortKey="Sandini, G" uniqKey="Sandini G">G. Sandini</name>
</author>
<author>
<name sortKey="Martinoli, C" uniqKey="Martinoli C">C. Martinoli</name>
</author>
<author>
<name sortKey="Burr, D C" uniqKey="Burr D">D. C. Burr</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Jackson, C" uniqKey="Jackson C">C. Jackson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="King, A J" uniqKey="King A">A. J. King</name>
</author>
<author>
<name sortKey="Carlile, S" uniqKey="Carlile S">S. Carlile</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Knudsen, E I" uniqKey="Knudsen E">E. I. Knudsen</name>
</author>
<author>
<name sortKey="Knudsen, P F" uniqKey="Knudsen P">P. F. Knudsen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kolarik, A J" uniqKey="Kolarik A">A. J. Kolarik</name>
</author>
<author>
<name sortKey="Cirstea, S" uniqKey="Cirstea S">S. Cirstea</name>
</author>
<author>
<name sortKey="Pardhan, S" uniqKey="Pardhan S">S. Pardhan</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Landy, M S" uniqKey="Landy M">M. S. Landy</name>
</author>
<author>
<name sortKey="Banks, M S" uniqKey="Banks M">M. S. Banks</name>
</author>
<author>
<name sortKey="Knill, D C" uniqKey="Knill D">D. C. Knill</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lessard, N" uniqKey="Lessard N">N. Lessard</name>
</author>
<author>
<name sortKey="Pare, M" uniqKey="Pare M">M. Paré</name>
</author>
<author>
<name sortKey="Lepore, F" uniqKey="Lepore F">F. Lepore</name>
</author>
<author>
<name sortKey="Lassonde, M" uniqKey="Lassonde M">M. Lassonde</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mateeff, S" uniqKey="Mateeff S">S. Mateeff</name>
</author>
<author>
<name sortKey="Hohnsbein, J" uniqKey="Hohnsbein J">J. Hohnsbein</name>
</author>
<author>
<name sortKey="Noack, T" uniqKey="Noack T">T. Noack</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Milne, J L" uniqKey="Milne J">J. L. Milne</name>
</author>
<author>
<name sortKey="Anello, M" uniqKey="Anello M">M. Anello</name>
</author>
<author>
<name sortKey="Goodale, M A" uniqKey="Goodale M">M. A. Goodale</name>
</author>
<author>
<name sortKey="Thaler, L" uniqKey="Thaler L">L. Thaler</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Milne, J L" uniqKey="Milne J">J. L. Milne</name>
</author>
<author>
<name sortKey="Goodale, M A" uniqKey="Goodale M">M. A. Goodale</name>
</author>
<author>
<name sortKey="Thaler, L" uniqKey="Thaler L">L. Thaler</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Recanzone, G H" uniqKey="Recanzone G">G. H. Recanzone</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Rojas, J A M" uniqKey="Rojas J">J. A. M. Rojas</name>
</author>
<author>
<name sortKey="Hermosilla, J A" uniqKey="Hermosilla J">J. A. Hermosilla</name>
</author>
<author>
<name sortKey="Montero, R S" uniqKey="Montero R">R. S. Montero</name>
</author>
<author>
<name sortKey="Espi, P L L" uniqKey="Espi P">P. L. L. Espí</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Shelton, B R" uniqKey="Shelton B">B. R. Shelton</name>
</author>
<author>
<name sortKey="Searle, C L" uniqKey="Searle C">C. L. Searle</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Tabry, V" uniqKey="Tabry V">V. Tabry</name>
</author>
<author>
<name sortKey="Zatorre, R J" uniqKey="Zatorre R">R. J. Zatorre</name>
</author>
<author>
<name sortKey="Voss, P" uniqKey="Voss P">P. Voss</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Teng, S" uniqKey="Teng S">S. Teng</name>
</author>
<author>
<name sortKey="Puri, A" uniqKey="Puri A">A. Puri</name>
</author>
<author>
<name sortKey="Whitney, D" uniqKey="Whitney D">D. Whitney</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Teng, S" uniqKey="Teng S">S. Teng</name>
</author>
<author>
<name sortKey="Whitney, D" uniqKey="Whitney D">D. Whitney</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Thaler, L" uniqKey="Thaler L">L. Thaler</name>
</author>
<author>
<name sortKey="Arnott, S R" uniqKey="Arnott S">S. R. Arnott</name>
</author>
<author>
<name sortKey="Goodale, M A" uniqKey="Goodale M">M. A. Goodale</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Thaler, L" uniqKey="Thaler L">L. Thaler</name>
</author>
<author>
<name sortKey="Milne, J L" uniqKey="Milne J">J. L. Milne</name>
</author>
<author>
<name sortKey="Arnott, S R" uniqKey="Arnott S">S. R. Arnott</name>
</author>
<author>
<name sortKey="Kish, D" uniqKey="Kish D">D. Kish</name>
</author>
<author>
<name sortKey="Goodale, M A" uniqKey="Goodale M">M. A. Goodale</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Vercillo, T" uniqKey="Vercillo T">T. Vercillo</name>
</author>
<author>
<name sortKey="Milne, J L" uniqKey="Milne J">J. L. Milne</name>
</author>
<author>
<name sortKey="Gori, M" uniqKey="Gori M">M. Gori</name>
</author>
<author>
<name sortKey="Goodale, M A" uniqKey="Goodale M">M. A. Goodale</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Warren, D H" uniqKey="Warren D">D. H. Warren</name>
</author>
<author>
<name sortKey="Welch, R B" uniqKey="Welch R">R. B. Welch</name>
</author>
<author>
<name sortKey="Mccarthy, T J" uniqKey="Mccarthy T">T. J. McCarthy</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Watson, A B" uniqKey="Watson A">A. B. Watson</name>
</author>
<author>
<name sortKey="Pelli, D G" uniqKey="Pelli D">D. G. Pelli</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Will, U" uniqKey="Will U">U. Will</name>
</author>
<author>
<name sortKey="Berg, E" uniqKey="Berg E">E. Berg</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Zwiers, M P" uniqKey="Zwiers M">M. P. Zwiers</name>
</author>
<author>
<name sortKey="Van Opstal, A J" uniqKey="Van Opstal A">A. J. Van Opstal</name>
</author>
<author>
<name sortKey="Paige, G D" uniqKey="Paige G">G. D. Paige</name>
</author>
</analytic>
</biblStruct>
</listBibl>
</div1>
</back>
</TEI>
<pmc article-type="research-article">
<pmc-dir>properties open_access</pmc-dir>
<front>
<journal-meta>
<journal-id journal-id-type="nlm-ta">Front Syst Neurosci</journal-id>
<journal-id journal-id-type="iso-abbrev">Front Syst Neurosci</journal-id>
<journal-id journal-id-type="publisher-id">Front. Syst. Neurosci.</journal-id>
<journal-title-group>
<journal-title>Frontiers in Systems Neuroscience</journal-title>
</journal-title-group>
<issn pub-type="epub">1662-5137</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="pmid">26082692</article-id>
<article-id pub-id-type="pmc">4451354</article-id>
<article-id pub-id-type="doi">10.3389/fnsys.2015.00084</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Neuroscience</subject>
<subj-group>
<subject>Original Research</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Task-dependent calibration of auditory spatial perception through environmental visual observation</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name>
<surname>Tonelli</surname>
<given-names>Alessia</given-names>
</name>
<xref ref-type="aff" rid="aff1"></xref>
<uri xlink:type="simple" xlink:href="http://community.frontiersin.org/people/u/210186"></uri>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Brayda</surname>
<given-names>Luca</given-names>
</name>
<xref ref-type="aff" rid="aff1"></xref>
<uri xlink:type="simple" xlink:href="http://community.frontiersin.org/people/u/209402"></uri>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Gori</surname>
<given-names>Monica</given-names>
</name>
<xref ref-type="aff" rid="aff1"></xref>
<xref ref-type="author-notes" rid="fn001">
<sup>*</sup>
</xref>
<uri xlink:type="simple" xlink:href="http://community.frontiersin.org/people/u/25629"></uri>
</contrib>
</contrib-group>
<aff id="aff1">
<institution>Robotics, Brain and Cognitive Sciences Department, Fondazione Istituto Italiano di Tecnologia</institution>
<country>Genoa, Italy</country>
</aff>
<author-notes>
<fn fn-type="edited-by">
<p>Edited by: Mikhail Lebedev, Duke University, USA</p>
</fn>
<fn fn-type="edited-by">
<p>Reviewed by: Nicholas Altieri, Idaho State University, USA; Caterina Bertini, University of Bologna, Italy; Patrick Bruns, University of Hamburg, Germany</p>
</fn>
<corresp id="fn001">*Correspondence: Monica Gori, Robotics, Brain and Cognitive Sciences Department, Fondazione Istituto Italiano di Tecnologia, via Morego 30, 16163 Genoa, Italy
<email xlink:type="simple">monica.gori@iit.it</email>
</corresp>
</author-notes>
<pub-date pub-type="epub">
<day>02</day>
<month>6</month>
<year>2015</year>
</pub-date>
<pub-date pub-type="collection">
<year>2015</year>
</pub-date>
<volume>9</volume>
<elocation-id>84</elocation-id>
<history>
<date date-type="received">
<day>30</day>
<month>1</month>
<year>2015</year>
</date>
<date date-type="accepted">
<day>18</day>
<month>5</month>
<year>2015</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright © 2015 Tonelli, Brayda and Gori.</copyright-statement>
<copyright-year>2015</copyright-year>
<license license-type="open-access" xlink:href="http://creativecommons.org/licenses/by/4.0/">
<license-p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution and reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</license-p>
</license>
</permissions>
<abstract>
<p>Visual information is paramount to space perception. Vision influences auditory space estimation. Many studies show that simultaneous visual and auditory cues improve the precision of the final multisensory estimate. However, the amount or temporal extent of visual information that is sufficient to influence auditory perception is still unknown. It is therefore interesting to know whether vision can improve auditory precision through a short-term environmental observation preceding the audio task, and whether this influence is task-specific, environment-specific, or both. To test these issues we investigated possible improvements of acoustic precision with sighted blindfolded participants in two audio tasks [minimum audible angle (MAA) and space bisection] and two acoustically different environments (normal room and anechoic room). With respect to a baseline of auditory precision, we found an improvement of precision in the space bisection task but not in the MAA after the observation of a normal room. No improvement was found when performing the same task in an anechoic chamber. In addition, no difference was found between a condition of short environment observation and a condition of full vision during the whole experimental session. Our results suggest that even short-term environmental observation can calibrate auditory spatial performance. They also suggest that echoes can be the cue that underpins visual calibration. Echoes may mediate the transfer of information from the visual to the auditory system.</p>
</abstract>
<kwd-group>
<kwd>audio</kwd>
<kwd>vision</kwd>
<kwd>bisection</kwd>
<kwd>multisensory</kwd>
<kwd>calibration</kwd>
<kwd>space perception</kwd>
<kwd>echoes</kwd>
</kwd-group>
<counts>
<fig-count count="3"></fig-count>
<table-count count="0"></table-count>
<equation-count count="0"></equation-count>
<ref-count count="31"></ref-count>
<page-count count="8"></page-count>
<word-count count="5714"></word-count>
</counts>
</article-meta>
</front>
<body>
<sec sec-type="introduction" id="s1">
<title>Introduction</title>
<p>The visual system is the most accurate sense for estimating spatial properties. Many studies involving adult individuals support this idea, showing that when the spatial locations of audio and visual stimuli are in conflict, vision usually dominates, generating the so-called “ventriloquist effect” (Warren et al.,
<xref rid="B28" ref-type="bibr">1981</xref>
; Mateeff et al.,
<xref rid="B16" ref-type="bibr">1985</xref>
). This effect is possibly due to an optimal combination of cues performed by the human brain, where modalities are weighted by their statistical reliability. Vision dominates over audition in localization tasks (Alais and Burr,
<xref rid="B1" ref-type="bibr">2004</xref>
). When visual and auditory cues are presented simultaneously to convey spatial information, the final multisensory estimate tends to be more precise than either unisensory estimate (Clarke and Yuille,
<xref rid="B4" ref-type="bibr">1990</xref>
; Ghahramani et al.,
<xref rid="B6" ref-type="bibr">1997</xref>
; Ernst and Banks,
<xref rid="B5" ref-type="bibr">2002</xref>
; Alais and Burr,
<xref rid="B1" ref-type="bibr">2004</xref>
; Landy et al.,
<xref rid="B14" ref-type="bibr">2011</xref>
). Interestingly, vision can interact with audition even when a visual stimulus is not provided during an auditory task. For example, although the angle of incidence of a sound source can be estimated with the use of only auditory cues, performance improves when vision is also present (Jackson,
<xref rid="B10" ref-type="bibr">1953</xref>
; Shelton and Searle,
<xref rid="B21" ref-type="bibr">1980</xref>
). A recent study by Tabry et al. (
<xref rid="B22" ref-type="bibr">2013</xref>
) has shown that the mere possibility of observing the setup by keeping eyes open during auditory horizontal and vertical localization tasks can improve audio accuracy, even if no visual cues of the stimuli are provided.</p>
<p>Another example of the connection between audio-spatial and visuo-spatial information is given by a technique used by some blind people, called echolocation. Some studies have shown that this technique can operate as a crude substitute for vision, because some purely visual phenomena, such as size constancy (Milne et al.,
<xref rid="B17" ref-type="bibr">2015</xref>
) or size-weight illusion (Buckingham et al.,
<xref rid="B2" ref-type="bibr">2015</xref>
), are present in expert echolocators, who use echoes to navigate in unknown environments.</p>
<p>However, it is still unknown which visual cues allow an improvement in audio-spatial tasks, nor is it understood how long visual cues should last to produce such an improvement. Likewise, it is unknown whether this phenomenon is task-specific, i.e., whether audio-spatial abilities are improved in general or whether the influence of vision depends on the complexity of the audio task. Can it be argued that increased audio-spatial abilities are due to a transfer of information from the visual to the auditory system? What information is transferred? Is vision more informative for some aspects than for others?</p>
<p>In this paper we tested two audio tasks under various environmental conditions and visual feedback conditions to answer these questions. In particular we investigated: (i) whether environmental visual cues (i.e., a prior short observation of the environment, or full vision during the tasks) can improve auditory precision; (ii) whether this improvement is task-specific; and (iii) which environmental cues mediate the auditory improvement due to the interaction between vision and audition.</p>
<p>To investigate the first point, whether environmental visual cues can improve auditory precision, we tested a sample of blindfolded sighted participants twice: the first time with no visual input of the environment where the auditory task was performed; the second time after they had observed the environment for 1 min. We compared performance with no visual input of the environment to performance after the 1 min observation. We also tested a different group of sighted participants, who performed the two tasks with full vision of the room, without being blindfolded. We compared the performance of this last group with that of the others. Our hypothesis was that if the visual cues coming from environmental observation help to improve auditory precision, then the improvement should occur at least with full vision and possibly with short-term observation.</p>
<p>The second question was whether the auditory precision improvement was task-specific. We tested all participants in two audio tasks: the minimum audible angle (MAA) task and the spatial bisection. In the MAA task the participant had to judge which of two sounds generated by an array of loudspeakers came more from the right. In the spatial bisection task, instead, the participant heard three sounds coming from three distinct locations and had to judge whether the second sound was closer to the first or to the third sound coming from the array. The difference between these two tasks is that the spatial bisection task requires subjects to encode the positions of three sounds, remember them over a period of 1 s, and compare their remembered positions. In contrast, in the MAA task, the subject has to compare the positions of the two sounds relative to the subject’s position. Moreover, while the MAA requires estimating an ordering relation between two acoustic directions, bisection requires estimating an ordering relation between two estimated acoustic distances. To summarize, while the space bisection requires a Euclidean representation of space and involves higher abstraction capabilities, for the MAA task a topological representation of space is sufficient.</p>
<p>Moreover, we chose these two tasks because we recently reported that the visual information is fundamental for the bisection task and not essential for the MAA task (Gori et al.,
<xref rid="B9" ref-type="bibr">2014</xref>
). A visual dominance over audition during development was observed for the space bisection task (Gori et al.,
<xref rid="B8" ref-type="bibr">2012</xref>
), while the absence of visual input in congenitally blind individuals negatively impacts their performance on audio space bisection tasks (Gori et al.,
<xref rid="B9" ref-type="bibr">2014</xref>
). However the absence of vision does not affect the ability of performing the MAA task in visually impaired individuals (Gori et al.,
<xref rid="B9" ref-type="bibr">2014</xref>
; in agreement with Lessard et al.,
<xref rid="B15" ref-type="bibr">1998</xref>
).</p>
<p>The apparent influence of visuo-spatial knowledge on space bisection tasks leads to our second hypothesis: if environmental visual information can improve acoustic spatial precision, then the improvement should be larger for the space bisection task than for the MAA task.</p>
<p>With regard to the third point, investigating the environmental cues that possibly mediate the auditory improvement after observation, we replicated all audio tests in an anechoic room. In such a room, the walls absorb part of the sound energy; therefore the auditory system almost exclusively acquires the direct path of the sound, i.e., the sound not reflected by the walls. Conversely, in the normal room, the walls reflect sounds and generate echoes. Our hypothesis is that if the interpretation of echoes is triggered by visual observation, an improvement of acoustic precision should occur only in the normal room and not in the anechoic chamber.</p>
</sec>
<sec sec-type="materials|methods" id="s2">
<title>Materials and Methods</title>
<sec id="s2-1">
<title>Participants</title>
<p>We measured auditory spatial discrimination in 33 sighted individuals with normal or corrected-to-normal vision (average age 28.5 years; 18 females and 15 males), all with normal hearing (assessed with the Ear Test 1.0 software) and no cognitive impairment. All participants gave informed consent before starting the tests. The study was approved by the ethics committee of the local health service (
<italic>Comitato Etico, ASL3, Genova</italic>
).</p>
</sec>
<sec id="s2-2">
<title>Apparatus and Stimuli</title>
<p>The participants sat 180 cm away from the center of a bank of 23 speakers, 161 cm long (see Figure
<xref ref-type="fig" rid="F1">1</xref>
), and spanning ±25° of visual angle.</p>
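As a quick consistency check of this geometry (a sketch added here for illustration, not part of the original methods), the angular span follows directly from the array length and the listening distance:

```python
import math

# Values taken from the text above: a 161 cm bank of 23 speakers, 180 cm from the listener.
ARRAY_LENGTH_CM = 161.0
DISTANCE_CM = 180.0
N_SPEAKERS = 23

half_angle_deg = math.degrees(math.atan2(ARRAY_LENGTH_CM / 2.0, DISTANCE_CM))
spacing_cm = ARRAY_LENGTH_CM / (N_SPEAKERS - 1)

print(f"half-span ≈ {half_angle_deg:.1f} deg")    # ≈ 24 deg, consistent with the stated ±25°
print(f"speaker spacing ≈ {spacing_cm:.1f} cm")   # ≈ 7.3 cm between adjacent speakers
```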
<fig id="F1" position="float">
<label>Figure 1</label>
<caption>
<p>
<bold>(A)</bold>
Space Bisection Task.
<bold>(B)</bold>
Minimum audible angle (MAA) task.</p>
</caption>
<graphic xlink:href="fnsys-09-00084-g0001"></graphic>
</fig>
<p>During the auditory space bisection task, three stimuli, each having a duration of 75 ms, were presented at interval of 500 ms (see Figure
<xref ref-type="fig" rid="F1">1A</xref>
). The first stimulus was always at −25°, the third always at +25° and the second at an intermediate speaker position which was determined by QUEST (Watson and Pelli,
<xref rid="B29" ref-type="bibr">1983</xref>
), an adaptive algorithm which estimates the best stimulus value to be presented after each trial, given the current participant’s estimate. To ensure that a wide range of positions was sampled, that estimate was jittered by a random amount, drawn from a Gaussian distribution covering the full width of the loudspeaker array, and the nearest speaker to that estimate was chosen. In the MAA task, two 75 ms pink noise (Will and Berg,
<xref rid="B30" ref-type="bibr">2007</xref>
) stimuli were presented with a 500 ms interval. One sound came from the central loudspeaker (12th speaker) and the other one at a random distance from center on its left or on its right (Figure
<xref ref-type="fig" rid="F1">1B</xref>
). Also in this case, the QUEST algorithm determined the position of the second stimulus. For both tasks, the proportion of rightward responses was calculated for each speaker distance. Gaussian error functions, fitted by the maximum likelihood method, were used to estimate both the mean, or PSE (point of subjective equality), and the standard deviation, or JND (just noticeable difference). The standard deviation of the fit was taken as an estimate of the threshold, indicating the precision of the task.</p>
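For concreteness, the following is a minimal sketch (not the authors' code) of how such a cumulative Gaussian psychometric function can be fitted by maximum likelihood, with the fitted mean read off as the PSE and the standard deviation as the JND; the variable names and the use of NumPy/SciPy are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def fit_psychometric(positions, responses):
    """Fit P("right") = Phi((x - mu) / sigma) to per-trial data by maximum likelihood.

    positions -- stimulus position on each trial (e.g., speaker position in cm)
    responses -- 1 if the trial was judged "closer to the right source", else 0
    Returns (PSE, JND), i.e., the fitted mu and sigma.
    """
    positions = np.asarray(positions, dtype=float)
    responses = np.asarray(responses, dtype=float)

    def neg_log_likelihood(params):
        mu, log_sigma = params
        sigma = np.exp(log_sigma)                  # keep sigma strictly positive
        p = norm.cdf((positions - mu) / sigma)
        p = np.clip(p, 1e-6, 1 - 1e-6)             # guard against log(0)
        return -np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))

    start = [positions.mean(), np.log(positions.std() + 1e-3)]
    fit = minimize(neg_log_likelihood, start, method="Nelder-Mead")
    mu, log_sigma = fit.x
    return mu, np.exp(log_sigma)

# Hypothetical usage with the 60 bisection trials of one participant:
# pse, jnd = fit_psychometric(second_sound_positions_cm, judged_closer_to_right)
```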
<p>To better generalize our results, in 15 participants we used three different sound sources (randomized across trials), all with a 75 ms duration and a 60 dB SPL intensity (measured at the participant’s position): a 500 Hz sound (for which interaural time differences are more important for sound localization); a 3000 Hz sound (for which interaural level differences are more important); and pink noise (ranging from 0 to 5 kHz), for which both are important. As the precision in sound localization varied very little among the three sounds, only pink noise burst data were considered.</p>
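The stimulus characteristics above can be illustrated with a short NumPy sketch; the sampling rate, band-limiting approach, and normalization are assumptions (the paper does not describe its stimulus-generation code), and absolute SPL calibration would depend on the playback hardware.

```python
import numpy as np

FS = 44100                     # assumed sampling rate
DUR = 0.075                    # 75 ms bursts, as in the experiment
n = int(FS * DUR)
t = np.arange(n) / FS

tone_500 = np.sin(2 * np.pi * 500 * t)    # low-frequency tone: ITD-dominated localization
tone_3000 = np.sin(2 * np.pi * 3000 * t)  # high-frequency tone: ILD-dominated localization

# Pink (1/f) noise, band-limited to 0-5 kHz by spectral shaping of white noise
spectrum = np.fft.rfft(np.random.randn(n))
freqs = np.fft.rfftfreq(n, 1 / FS)
shape = np.zeros_like(freqs)
shape[freqs > 0] = 1.0 / np.sqrt(freqs[freqs > 0])   # 1/f power -> 1/sqrt(f) amplitude
shape[freqs > 5000] = 0.0
pink_noise = np.fft.irfft(spectrum * shape, n)
pink_noise /= np.max(np.abs(pink_noise))              # normalize to unit peak
```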
</sec>
<sec id="s2-3">
<title>Procedure</title>
<p>Two audio spatial tasks were considered: an auditory space bisection task and an MAA task. The participants were divided into three groups. The first group (composed of 11 participants) performed four audio tasks (the bisection task twice and the MAA task twice) in an anechoic chamber (3 m × 5 m); the second group (composed of 11 participants) performed four audio tasks (the bisection task twice and the MAA task twice) in a normal room (7.20 m × 3.5 m). The participants of these two groups were blindfolded before entering the room; during the first two audio tasks (one audio bisection and one MAA task), they had no notion of the room or the acoustic stimulation setup. After having performed both audio tasks, the participants were allowed to remove the blindfold and observe the room for 1 min: in one case an anechoic chamber (first group) and in the other case a normal room (second group). Afterwards they were blindfolded again and repeated both audio tasks. The last group (composed of 11 participants) was not blindfolded, so they had full vision of the room and the setup during the tasks. They performed two audio tasks (the bisection task once and the MAA task once) only in the normal room. For all groups the bisection and MAA tasks were presented in a random order.</p>
<p>In the auditory space bisection, participants reported verbally whether the second sound was spatially closer to the first sound (produced by the first speaker on the left, number 1) than to the last sound (produced by the last speaker on the right, number 23). Each subject performed 60 trials.</p>
<p>In the MAA task, the participants had to verbally report which sound came more from the right, choosing between the first and the second sound. Each subject performed 60 trials for each task.</p>
</sec>
</sec>
<sec sec-type="results" id="s3">
<title>Results</title>
<p>Figure
<xref ref-type="fig" rid="F2">2</xref>
shows psychometric functions of the proportion of trials judged "closer to the right sound source", plotted against speaker position (in cm). The top of the figure shows the results of one participant, taken as an example of the global trend, in the anechoic chamber for the space bisection (Figure
<xref ref-type="fig" rid="F2">2A</xref>
) and the MAA (Figure
<xref ref-type="fig" rid="F2">2C</xref>
). In the same way, the bottom of Figure
<xref ref-type="fig" rid="F2">2</xref>
shows the results of one participant in the normal room for the space bisection (Figure
<xref ref-type="fig" rid="F2">2B</xref>
) and the MAA (Figure
<xref ref-type="fig" rid="F2">2D</xref>
).</p>
<fig id="F2" position="float">
<label>Figure 2</label>
<caption>
<p>
<bold>Results of the Space Bisection Task and MAA of two participants, one for each group (normal room and anechoic chamber), as examples. (A,B)</bold>
Space bisection: proportion of trials judged "closer to the right sound source", plotted against speaker position (in cm). The area of the dots is the proportion of trials at that position, normalized by the total number of trials performed by each participant. At the top-left the results obtained in the anechoic chamber by participant AT
<bold>(A)</bold>
; at the bottom-left the results obtained in the normal room by participant CP
<bold>(B)</bold>
. Both sets of data are fit with the Gaussian error function.
<bold>(C,D)</bold>
MAA: proportion of trials where the second of a two-sound sequence was reported to the right of the first, plotted against difference in speaker position. At the top-right the results obtained in the anechoic chamber by participant AT
<bold>(C)</bold>
; at the bottom-right the results obtained in the normal room by participant CP
<bold>(D)</bold>
. Again, the fits are Gaussian error functions.</p>
</caption>
<graphic xlink:href="fnsys-09-00084-g0002"></graphic>
</fig>
<p>Figure
<xref ref-type="fig" rid="F3">3</xref>
shows the thresholds obtained before and after observation of the environment; it also shows the performance with eyes open for both tasks: the MAA (Figure
<xref ref-type="fig" rid="F3">3A</xref>
) and the space bisection (Figure
<xref ref-type="fig" rid="F3">3B</xref>
). In both figures, the solid colors refer to the performance before room observation, while the colors with reticulus refer to the performance after the room observation.</p>
<fig id="F3" position="float">
<label>Figure 3</label>
<caption>
<p>
<bold>Shown here are the average precision thresholds obtained in the MAA (A) and Space Bisection (B) tasks. (A)</bold>
The dark green bars, on the left, represent the average precision thresholds obtained in the normal room before (filled dark green bar) and after (reticulus dark green bar) environmental observation. On the right, the light green bars are the average precision thresholds obtained in the anechoic chamber before (filled light green bar) and after (reticulus light green bar) environmental observation. The violet bar is the average precision obtained by the subjects in full vision in the normal room. The dots represent individual data.
<bold>(B)</bold>
For the space bisection, the dark blue bars, on the left, represent the average precision thresholds obtained in the normal room before (filled dark blue bar) and after (reticulus dark blue bar) environmental observation. On the right, the light blue bars are the average precision thresholds obtained in the anechoic chamber before (filled light blue bar) and after (reticulus light blue bar) environmental observation. Also in this case the violet bar represents the average precision obtained by the subjects in full vision in the normal room. The dots represent individual data. (**) Indicates a significant difference in precision between before and after environmental observation in the normal room (
<italic>p</italic>
< 0.01).</p>
</caption>
<graphic xlink:href="fnsys-09-00084-g0003"></graphic>
</fig>
<p>We conducted a two-way (2 × 2) mixed-model ANOVA for both the MAA and space bisection tasks with a between-subjects factor,
<italic>room kind</italic>
(normal room vs. anechoic chamber), and a within-subjects factor,
<italic>room observation</italic>
(before environmental observation vs. after environmental observation). For the space bisection task the ANOVA revealed significant main effects of both factors,
<italic>room observation</italic>
(
<italic>F</italic>
<sub>(2,22)</sub>
= 6.55,
<italic>p</italic>
< 0.02) and
<italic>room kind</italic>
(
<italic>F</italic>
<sub>(2,22)</sub>
= 7.35,
<italic>p</italic>
< 0.01). A significant
<italic>room observation</italic>
×
<italic>room kind</italic>
interaction was also observed (
<italic>F</italic>
<sub>(4,11)</sub>
= 6.86,
<italic>p</italic>
< 0.01). Then we ran Student’s
<italic>t</italic>
-tests that indicated a significant difference between the groups that performed the space bisection task in the normal room and the anechoic chamber before observing the room (two-tailed two-sample
<italic>t</italic>
-test,
<italic>t</italic>
<sub>(20)</sub>
= 3.44,
<italic>p</italic>
< 0.01), and in the normal room between before and after environmental observation (two-tailed paired-sample
<italic>t</italic>
-test,
<italic>t</italic>
<sub>(10)</sub>
= 5.46,
<italic>p</italic>
< 0.001). On the other hand, for the MAA, no significant effect was found (
<italic>room observation</italic>
,
<italic>F</italic>
<sub>(2,22)</sub>
= 0.48,
<italic>p</italic>
= 0.49;
<italic>room kind</italic>
,
<italic>F</italic>
<sub>(2,22)</sub>
= 1.49,
<italic>p</italic>
= 0.28;
<italic>room observation</italic>
×
<italic>room kind F</italic>
<sub>(4,11)</sub>
= 0.506,
<italic>p</italic>
= 0.481).</p>
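As a reading aid (the analysis software used by the authors is not reported here), a minimal sketch of the same 2 × 2 mixed design and the follow-up t-tests could look as follows, assuming the thresholds are stored in a long-format table; the column names, the file name, and the use of pingouin and SciPy are illustrative assumptions.

```python
import pandas as pd
import pingouin as pg          # assumed statistics library, not from the paper
from scipy import stats

# Hypothetical long-format table: one row per participant and observation phase,
# with columns: participant, room ("normal"/"anechoic"), observation ("before"/"after"), threshold
df = pd.read_csv("bisection_thresholds.csv")

# 2 x 2 mixed ANOVA: room kind (between subjects) x room observation (within subjects)
aov = pg.mixed_anova(data=df, dv="threshold", within="observation",
                     between="room", subject="participant")
print(aov)

# Follow-up comparisons analogous to those reported above
before = df[df.observation == "before"]
normal = df[df.room == "normal"]

# Normal room vs. anechoic chamber, before observation (two-tailed two-sample t-test)
print(stats.ttest_ind(before[before.room == "normal"].threshold,
                      before[before.room == "anechoic"].threshold))

# Before vs. after observation in the normal room (two-tailed paired t-test)
print(stats.ttest_rel(normal[normal.observation == "before"].sort_values("participant").threshold,
                      normal[normal.observation == "after"].sort_values("participant").threshold))
```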
<p>No significant difference in precision was found between the condition after environmental observation and the full-vision condition (violet bars), either for the space bisection task (two-tailed two-sample
<italic>t</italic>
-test,
<italic>t</italic>
<sub>(20)</sub>
= 1.279,
<italic>p</italic>
= 0.27) or for the MAA (two-tailed two-sample
<italic>t</italic>
-test,
<italic>t</italic>
<sub>(20)</sub>
= 0.257,
<italic>p</italic>
= 0.799).</p>
<p>No change was observed in the localization bias (PSE) for either group or task (bisection task: two-way (2 × 2) ANOVA with factors
<italic>room observation</italic>
<italic>F</italic>
<sub>(2,22)</sub>
= 0.79,
<italic>p</italic>
= 0.38—and
<italic>room kind</italic>
<italic>F</italic>
<sub>(2,22)</sub>
= 1.48,
<italic>p</italic>
= 0.23—and
<italic>room observation</italic>
×
<italic>room kind</italic>
interaction,
<italic>F</italic>
<sub>(4,11)</sub>
= 0.088,
<italic>p</italic>
= 0.77; MAA task: two-way (2 × 2) ANOVA with factors
<italic>room observation</italic>
<italic>F</italic>
<sub>(2,22)</sub>
= 0.373,
<italic>p</italic>
= 0.545—and
<italic>room kind</italic>
<italic>F</italic>
<sub>(2,22)</sub>
= 1.91,
<italic>p</italic>
= 0.175—, and
<italic>room observation</italic>
×
<italic>room kind</italic>
interaction,
<italic>F</italic>
<sub>(4,11)</sub>
= 0.001,
<italic>p</italic>
= 0.97).</p>
</sec>
<sec sec-type="discussion" id="s4">
<title>Discussion</title>
<p>Recent works suggest that vision can interact with the auditory modality even when visual information is not useful for the auditory task, by improving the accuracy of auditory localization judgments (Jackson,
<xref rid="B10" ref-type="bibr">1953</xref>
; Shelton and Searle,
<xref rid="B21" ref-type="bibr">1980</xref>
; Tabry et al.,
<xref rid="B22" ref-type="bibr">2013</xref>
). For example, acoustic performance has been found to be better when participants were allowed to observe the setup by keeping their eyes open, even if no visual cues were provided (Tabry et al.,
<xref rid="B22" ref-type="bibr">2013</xref>
). Thus even the simple observation of the setup and the environment during the task can improve auditory performance.</p>
<p>Why does this process occur? What are the visual cues that allow for such an auditory improvement?</p>
<p>In this paper we investigated these issues by studying: (i) the environmental visual cues that are involved in auditory precision improvement; and (ii) whether this improvement is task related.</p>
<p>We tested the first point by asking the participants to perform two audio tasks twice. The first time the tasks were performed without observing the room; the second time, after having observed the room for 1 min. The results suggest that observation of the environment for a brief period improves auditory spatial precision and that the improvement is environment dependent. The improvement was found only after the observation of a natural environment; when the test was replicated in an anechoic chamber, no improvement was obtained. Moreover, the improvement was task dependent. Two tasks were tested: an MAA task and an audio spatial bisection task; the improvement was observed only for the space bisection task and not for the MAA.</p>
<p>The first question that arises from these results is why the improvement is task-specific, given that it occurs only for the audio space bisection task. We think that this specificity can be related to the role of visual information in the calibration of the auditory system.</p>
<p>Most recent work on multisensory interaction has concentrated on sensory
<italic>fusion</italic>
, investigating the efficiency of the integration of information from different senses. An equally important, but somewhat neglected, aspect is sensory
<italic>calibration</italic>
.</p>
<p>Our idea is that, while
<italic>precision</italic>
has the highest weight for sensory integration, the most important property for sensory calibration is
<italic>accuracy</italic>
. Precision is a relative measure of the degree of reproducibility or repeatability between measurements, usually quantified as the standard deviation of the distribution. Accuracy, conversely, is defined in absolute terms as the closeness of a measurement to its true physical value.</p>
<p>We have recently observed that during an audio-visual bisection task, sighted children show a strong visual dominance before multisensory integration occurs (Gori et al.,
<xref rid="B8" ref-type="bibr">2012</xref>
). It is reasonable that for the audio bisection task, the sense of vision is the most accurate for estimating space. Therefore, it may be used to calibrate the audio system for this spatial task. An important question arising from our cross-modal calibration theory is: what happens when the calibrating sense is impaired or absent, as is the case for visually impaired adults? We recently tackled this question by testing blind adults in a spatial audio bisection task, demonstrating that, in the absence of visual input, they have deficits in understanding the spatial relationship between sounds (Gori et al.,
<xref rid="B9" ref-type="bibr">2014</xref>
). The audio deficit was not observed, in agreement with previous studies (Lessard et al.,
<xref rid="B15" ref-type="bibr">1998</xref>
), for the MAA task.</p>
<p>Several physiological works confirm that vision is fundamental for some kinds of auditory spatial localization: a series of experiments on animals has documented that displacing vision (Knudsen and Knudsen,
<xref rid="B12" ref-type="bibr">1985</xref>
) or producing total visual deprivation (King and Carlile,
<xref rid="B11" ref-type="bibr">1993</xref>
) often leads to systematic and persistent biases in auditory tasks. In the same way, transitory visual distortions in humans produce dramatic changes in auditory spatial maps (Recanzone,
<xref rid="B19" ref-type="bibr">1998</xref>
; Zwiers et al.,
<xref rid="B31" ref-type="bibr">2003</xref>
).</p>
<p>On the basis of this evidence we can infer that visual environmental information is not directly involved in the calibration of the acoustic system in tasks such as the MAA. This idea would explain why we found a specific audio improvement after environment observation only for the audio bisection task and not for the MAA task.</p>
<p>A second interesting result is that: (i) before the environment observation, audio space bisection performance was worse in the normal room than in the anechoic room; and (ii) after the environment observation, an audio improvement was observed in the normal room but not in the anechoic room. Why did performance in the space bisection task not improve in the anechoic chamber, and why was it worse before environment observation in the normal room than in the anechoic chamber? The observed null effect of the short environmental observation in the anechoic room might have been caused by a ceiling effect, i.e., performance was already at its best before room observation. However, this was not the case in the normal room, suggesting an alternative interpretation: in an anechoic chamber part of the energy of the sounds produced by the loudspeakers is absorbed by the walls; therefore the hearing system acquires almost exclusively the direct sound. This is not true in the normal room, where the sound produced by the speakers is reflected by the walls, therefore producing echoes. This results in stimuli with scattering patterns or spectral coloration, or both, that differ more the farther apart the source locations are. These echoes add perceptual information to the direct path of the sound, which may not be immediately interpretable without visual input, therefore causing a mismatch and worse performance in the normal room than in the anechoic condition. However, the visual system could help the auditory system compensate for such a mismatch and again reach performance comparable to that obtained in the anechoic condition.</p>
<p>For similar reasons, observing an anechoic room does not improve acoustic precision, because visual knowledge of the room structure is by no means related to any acoustic cue. Obtaining improvements in both rooms (or in the anechoic room only) would have supported the hypothesis that vision helps mainly in estimating the direction of arrival of acoustic direct paths, i.e., the only cue present in an anechoic room. However, this did not happen, supporting instead the hypothesis that visual cues relate, even if implicitly, more to the global acoustic footprint of the room than to the local and specific acoustic feedback of our stimulation setup.</p>
<p>As discussed above, the fact that only the space bisection results improved after room observation suggests that the transfer of information from the visual system toward the auditory one occurs only for those aspects for which the visual system can be used to calibrate the auditory one. In this vein, knowledge about room acoustics gained through vision seems to be involved much more when estimating complex relationships between sound sources: while estimating audible angles requires a comparison between the estimated directions of two sound sources, space bisection requires establishing a specific ordering relation between the directions of three sound sources, of which two are far apart in space. This operation may require a Euclidean representation of space (Gori et al.,
<xref rid="B9" ref-type="bibr">2014</xref>
) and involve more spatial processing, possibly related to cues linked to the room structure that visual input helps to interpret.</p>
<p>A final interesting result is that no difference was observed between the performance obtained for the space bisection task after 1 min of environment observation and that obtained in the condition in which the eyes were kept open for the entire experimental session. This suggests that the visual system needs only a brief period of environment observation to allow an improvement in this audio task.</p>
<p>In our past works we suggested that a process of cross-modal calibration might occur during development (Gori et al.,
<xref rid="B7" ref-type="bibr">2008</xref>
). During this process the visual system seems to be involved in the calibration of auditory space bisection (Gori et al.,
<xref rid="B8" ref-type="bibr">2012</xref>
,
<xref rid="B9" ref-type="bibr">2014</xref>
). A possible interpretation of the results presented in this paper is that vision can also calibrate audition in a short-term form in adult individuals. It can indeed improve auditory spatial precision through a transfer of information about environmental cues from the visual system. In particular, our results suggest that visual information might help the hearing system to compensate for the mismatch produced by echoes, and that visual knowledge of the room structure is linked to an understanding of room acoustics.</p>
<p>If this interpretation is correct, then our results can be discussed in relation to the echolocation technique. Some blind individuals use the echoes produced by the environment to their advantage, thanks to echolocation. Human echolocation is the ability to detect objects in the surroundings by sensing echoes from those objects. By actively creating sounds, such as clicks produced by rapidly moving the tongue in the palatal area behind the teeth (Rojas et al.,
<xref rid="B20" ref-type="bibr">2009</xref>
) or sounds produced by external mechanical means such as tapping a cane against the floor (Burton,
<xref rid="B3" ref-type="bibr">2000</xref>
), people trained to orientate with echolocation can interpret the sound waves reflected by nearby objects. Many studies conducted under controlled experimental conditions have shown that echolocation improves blind people’s spatial sensing ability.</p>
<p>For example, a recent study (Vercillo et al.,
<xref rid="B27" ref-type="bibr">2015</xref>
) compared the performance of expert echolocators with that of blind and sighted people with no previous experience of echolocation in a space bisection task. It was found that blind expert echolocators performed the spatial bisection task with similar or even better precision than the sighted group. Moreover, several studies have demonstrated that echolocation improves the ability to determine other characteristics such as distance (Kolarik et al.,
<xref rid="B13" ref-type="bibr">2013</xref>
), motion (Thaler et al.,
<xref rid="B25" ref-type="bibr">2011</xref>
,
<xref rid="B26" ref-type="bibr">2014</xref>
), size (Teng and Whitney,
<xref rid="B23" ref-type="bibr">2011</xref>
; Teng et al.,
<xref rid="B24" ref-type="bibr">2012</xref>
), and shape (Thaler et al.,
<xref rid="B25" ref-type="bibr">2011</xref>
; Milne et al.,
<xref rid="B18" ref-type="bibr">2014</xref>
).</p>
<p>Therefore we can assume that echolocation could serve to recalibrate the ability of blind individuals to represent sounds in some spatial configurations and to compensate for the lack of vision. Our results support the idea that the visual system might in some form compensate for the mismatch produced by echoes in unknown environments by helping to interpret them. Visual information and spatio-acoustic representation therefore appear intertwined. If this is correct, then the use of the echolocation technique can be a way of substituting for the role of the visual system in this process. This would partially explain the improved spatial skills of blind expert echolocators. To conclude, the current findings suggest that vision is important for the auditory system not only during the development of auditory space representation, but also during adulthood. Although the mechanisms that underlie this process still have to be completely understood, our results suggest that the visual system can improve some forms of auditory spatial perception also in adults and after short-term environmental observation.</p>
</sec>
<sec id="s5">
<title>Conflict of Interest Statement</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
</body>
<back>
<fn-group>
<fn fn-type="financial-disclosure">
<p>
<bold>Acknowledgments</bold>
</p>
<p>This research is partly funded by the
<funding-source id="GS1">FP7 EU STREP project BLINDPAD</funding-source>
(
<ext-link ext-link-type="uri" xlink:href="http://www.blindpad.eu">www.blindpad.eu</ext-link>
), under grant
<award-id>611621</award-id>
, by the
<funding-source id="GS2">FP7 EU STREP project ABBI</funding-source>
(
<ext-link ext-link-type="uri" xlink:href="http://www.abbiproject.eu">www.abbiproject.eu</ext-link>
), under grant
<award-id>611452</award-id>
, and by the
<funding-source id="GS3">Fondazione Istituto Italiano di Tecnologia</funding-source>
(
<ext-link ext-link-type="uri" xlink:href="http://www.iit.it">www.iit.it</ext-link>
). The authors would like to thank all the participants in this study.</p>
</fn>
</fn-group>
<ref-list>
<title>References</title>
<ref id="B1">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Alais</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Burr</surname>
<given-names>D.</given-names>
</name>
</person-group>
(
<year>2004</year>
).
<article-title>The ventriloquist effect results from near-optimal bimodal integration</article-title>
.
<source>Curr. Biol.</source>
<volume>14</volume>
,
<fpage>257</fpage>
<lpage>262</lpage>
.
<pub-id pub-id-type="doi">10.1016/s0960-9822(04)00043-0</pub-id>
<pub-id pub-id-type="pmid">14761661</pub-id>
</mixed-citation>
</ref>
<ref id="B2">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Buckingham</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Milne</surname>
<given-names>J. L.</given-names>
</name>
<name>
<surname>Byrne</surname>
<given-names>C. M.</given-names>
</name>
<name>
<surname>Goodale</surname>
<given-names>M. A.</given-names>
</name>
</person-group>
(
<year>2015</year>
).
<article-title>The size-weight illusion induced through human echolocation</article-title>
.
<source>Psychol. Sci.</source>
<volume>26</volume>
,
<fpage>237</fpage>
<lpage>242</lpage>
.
<pub-id pub-id-type="doi">10.1177/0956797614561267</pub-id>
<pub-id pub-id-type="pmid">25526909</pub-id>
</mixed-citation>
</ref>
<ref id="B3">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Burton</surname>
<given-names>G.</given-names>
</name>
</person-group>
(
<year>2000</year>
).
<article-title>The role of the sound of tapping for nonvisual judgment of gap crossability</article-title>
.
<source>J. Exp. Psychol. Hum. Percept. Perform.</source>
<volume>26</volume>
,
<fpage>900</fpage>
<lpage>916</lpage>
.
<pub-id pub-id-type="doi">10.1037/0096-1523.26.3.900</pub-id>
<pub-id pub-id-type="pmid">10884001</pub-id>
</mixed-citation>
</ref>
<ref id="B4">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Clarke</surname>
<given-names>J. J.</given-names>
</name>
<name>
<surname>Yuille</surname>
<given-names>A. L.</given-names>
</name>
</person-group>
(
<year>1990</year>
).
<source>Data Fusion for Sensory Information Processing.</source>
<publisher-loc>Boston</publisher-loc>
:
<publisher-name>Kluwer Academic</publisher-name>
.</mixed-citation>
</ref>
<ref id="B5">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ernst</surname>
<given-names>M. O.</given-names>
</name>
<name>
<surname>Banks</surname>
<given-names>M. S.</given-names>
</name>
</person-group>
(
<year>2002</year>
).
<article-title>Humans integrate visual and haptic information in a statistically optimal fashion</article-title>
.
<source>Nature</source>
<volume>415</volume>
,
<fpage>429</fpage>
<lpage>433</lpage>
.
<pub-id pub-id-type="doi">10.1038/415429a</pub-id>
<pub-id pub-id-type="pmid">11807554</pub-id>
</mixed-citation>
</ref>
<ref id="B6">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Ghahramani</surname>
<given-names>Z.</given-names>
</name>
<name>
<surname>Wolpert</surname>
<given-names>D. M.</given-names>
</name>
<name>
<surname>Jordan</surname>
<given-names>M. I.</given-names>
</name>
</person-group>
(
<year>1997</year>
). “
<article-title>Computational models of sensorimotor integration</article-title>
,” in
<source>Self-Organization, Computational Maps and Motor Control</source>
, eds
<person-group person-group-type="editor">
<name>
<surname>Morasso</surname>
<given-names>P. G.</given-names>
</name>
<name>
<surname>Sanguineti</surname>
<given-names>V.</given-names>
</name>
</person-group>
(
<publisher-loc>Amsterdam</publisher-loc>
:
<publisher-name>Elsevier Science Publ</publisher-name>
),
<fpage>117</fpage>
<lpage>147</lpage>
.</mixed-citation>
</ref>
<ref id="B7">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Gori</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Del Viva</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Sandini</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Burr</surname>
<given-names>D. C.</given-names>
</name>
</person-group>
(
<year>2008</year>
).
<article-title>Young children do not integrate visual and haptic form information</article-title>
.
<source>Curr. Biol.</source>
<volume>18</volume>
,
<fpage>694</fpage>
<lpage>698</lpage>
.
<pub-id pub-id-type="doi">10.1016/j.cub.2008.04.036</pub-id>
<pub-id pub-id-type="pmid">18450446</pub-id>
</mixed-citation>
</ref>
<ref id="B8">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Gori</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Giuliana</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Sandini</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Burr</surname>
<given-names>D.</given-names>
</name>
</person-group>
(
<year>2012</year>
).
<article-title>Visual size perception and haptic calibration during development</article-title>
.
<source>Dev. Sci.</source>
<volume>15</volume>
,
<fpage>854</fpage>
<lpage>862</lpage>
.
<pub-id pub-id-type="doi">10.1111/j.1467-7687.2012.2012.01183.x</pub-id>
<pub-id pub-id-type="pmid">23106739</pub-id>
</mixed-citation>
</ref>
<ref id="B9">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Gori</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Sandini</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Martinoli</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Burr</surname>
<given-names>D. C.</given-names>
</name>
</person-group>
(
<year>2014</year>
).
<article-title>Impairment of auditory spatial localization in congenitally blind human subjects</article-title>
.
<source>Brain</source>
<volume>137</volume>
,
<fpage>288</fpage>
<lpage>293</lpage>
.
<pub-id pub-id-type="doi">10.1093/brain/awt311</pub-id>
<pub-id pub-id-type="pmid">24271326</pub-id>
</mixed-citation>
</ref>
<ref id="B10">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Jackson</surname>
<given-names>C.</given-names>
</name>
</person-group>
(
<year>1953</year>
).
<article-title>Visual factors in auditory localization</article-title>
.
<source>Q. J. Exp. Psychol.</source>
<volume>5</volume>
,
<fpage>52</fpage>
<lpage>65</lpage>
.
<pub-id pub-id-type="doi">10.1080/17470215308416626</pub-id>
</mixed-citation>
</ref>
<ref id="B11">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>King</surname>
<given-names>A. J.</given-names>
</name>
<name>
<surname>Carlile</surname>
<given-names>S.</given-names>
</name>
</person-group>
(
<year>1993</year>
).
<article-title>Changes induced in the representation of auditory space in the superior colliculus by rearing ferrets with binocular eyelid suture</article-title>
.
<source>Exp. Brain Res.</source>
<volume>94</volume>
,
<fpage>444</fpage>
<lpage>455</lpage>
.
<pub-id pub-id-type="doi">10.1007/bf00230202</pub-id>
<pub-id pub-id-type="pmid">8359258</pub-id>
</mixed-citation>
</ref>
<ref id="B12">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Knudsen</surname>
<given-names>E. I.</given-names>
</name>
<name>
<surname>Knudsen</surname>
<given-names>P. F.</given-names>
</name>
</person-group>
(
<year>1985</year>
).
<article-title>Vision guides the adjustment of auditory localization in young barn owls</article-title>
.
<source>Science</source>
<volume>230</volume>
,
<fpage>545</fpage>
<lpage>548</lpage>
.
<pub-id pub-id-type="doi">10.1126/science.4048948</pub-id>
<pub-id pub-id-type="pmid">4048948</pub-id>
</mixed-citation>
</ref>
<ref id="B13">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kolarik</surname>
<given-names>A. J.</given-names>
</name>
<name>
<surname>Cirstea</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Pardhan</surname>
<given-names>S.</given-names>
</name>
</person-group>
(
<year>2013</year>
).
<article-title>Evidence for enhanced discrimination of virtual auditory distance among blind listeners using level and direct-to-reverberant cues</article-title>
.
<source>Exp. Brain Res.</source>
<volume>224</volume>
,
<fpage>623</fpage>
<lpage>633</lpage>
.
<pub-id pub-id-type="doi">10.1007/s00221-012-3340-0</pub-id>
<pub-id pub-id-type="pmid">23178908</pub-id>
</mixed-citation>
</ref>
<ref id="B14">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Landy</surname>
<given-names>M. S.</given-names>
</name>
<name>
<surname>Banks</surname>
<given-names>M. S.</given-names>
</name>
<name>
<surname>Knill</surname>
<given-names>D. C.</given-names>
</name>
</person-group>
(
<year>2011</year>
). “
<article-title>Ideal-observer models of cue integration</article-title>
,” in
<source>Sensory Cue Integration</source>
, eds
<person-group person-group-type="editor">
<name>
<surname>Trommershäuser</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Körding</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Landy</surname>
<given-names>M. S.</given-names>
</name>
</person-group>
(
<publisher-loc>New York</publisher-loc>
:
<publisher-name>Oxford University Press</publisher-name>
),
<fpage>5</fpage>
<lpage>29</lpage>
. </mixed-citation>
</ref>
<ref id="B15">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Lessard</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Paré</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Lepore</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Lassonde</surname>
<given-names>M.</given-names>
</name>
</person-group>
(
<year>1998</year>
).
<article-title>Early-blind human subjects localize sound sources better than sighted subjects</article-title>
.
<source>Nature</source>
<volume>395</volume>
,
<fpage>278</fpage>
<lpage>280</lpage>
.
<pub-id pub-id-type="doi">10.1038/26228</pub-id>
<pub-id pub-id-type="pmid">9751055</pub-id>
</mixed-citation>
</ref>
<ref id="B16">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Mateeff</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Hohnsbein</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Noack</surname>
<given-names>T.</given-names>
</name>
</person-group>
(
<year>1985</year>
).
<article-title>Dynamic visual capture: apparent auditory motion induced by a moving visual target</article-title>
.
<source>Perception</source>
<volume>14</volume>
,
<fpage>721</fpage>
<lpage>727</lpage>
.
<pub-id pub-id-type="doi">10.1068/p140721</pub-id>
<pub-id pub-id-type="pmid">3837873</pub-id>
</mixed-citation>
</ref>
<ref id="B17">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Milne</surname>
<given-names>J. L.</given-names>
</name>
<name>
<surname>Anello</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Goodale</surname>
<given-names>M. A.</given-names>
</name>
<name>
<surname>Thaler</surname>
<given-names>L.</given-names>
</name>
</person-group>
(
<year>2015</year>
).
<article-title>A blind human expert echolocator shows size constancy for objects perceived by echoes</article-title>
.
<source>Neurocase</source>
<volume>21</volume>
,
<fpage>465</fpage>
<lpage>470</lpage>
.
<pub-id pub-id-type="doi">10.1080/13554794.2014.922994</pub-id>
<pub-id pub-id-type="pmid">24874426</pub-id>
</mixed-citation>
</ref>
<ref id="B18">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Milne</surname>
<given-names>J. L.</given-names>
</name>
<name>
<surname>Goodale</surname>
<given-names>M. A.</given-names>
</name>
<name>
<surname>Thaler</surname>
<given-names>L.</given-names>
</name>
</person-group>
(
<year>2014</year>
).
<article-title>The role of head movements in the discrimination of 2-D shape by blind echolocation experts</article-title>
.
<source>Atten. Percept. Psychophys.</source>
<volume>76</volume>
,
<fpage>1828</fpage>
<lpage>1837</lpage>
.
<pub-id pub-id-type="doi">10.3758/s13414-014-0695-2</pub-id>
<pub-id pub-id-type="pmid">24874262</pub-id>
</mixed-citation>
</ref>
<ref id="B19">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Recanzone</surname>
<given-names>G. H.</given-names>
</name>
</person-group>
(
<year>1998</year>
).
<article-title>Rapidly induced auditory plasticity: the ventriloquism aftereffect</article-title>
.
<source>Proc. Natl. Acad. Sci. U S A</source>
<volume>95</volume>
,
<fpage>869</fpage>
<lpage>875</lpage>
.
<pub-id pub-id-type="doi">10.1073/pnas.95.3.869</pub-id>
<pub-id pub-id-type="pmid">9448253</pub-id>
</mixed-citation>
</ref>
<ref id="B20">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Rojas</surname>
<given-names>J. A. M.</given-names>
</name>
<name>
<surname>Hermosilla</surname>
<given-names>J. A.</given-names>
</name>
<name>
<surname>Montero</surname>
<given-names>R. S.</given-names>
</name>
<name>
<surname>Espí</surname>
<given-names>P. L. L.</given-names>
</name>
</person-group>
(
<year>2009</year>
).
<article-title>Physical analysis of several organic signals for human echolocation: oral vacuum pulses</article-title>
.
<source>Acta Acust. United Acust.</source>
<volume>95</volume>
,
<fpage>325</fpage>
<lpage>330</lpage>
.
<pub-id pub-id-type="doi">10.3813/aaa.918155</pub-id>
</mixed-citation>
</ref>
<ref id="B21">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Shelton</surname>
<given-names>B. R.</given-names>
</name>
<name>
<surname>Searle</surname>
<given-names>C. L.</given-names>
</name>
</person-group>
(
<year>1980</year>
).
<article-title>The influence of vision on the absolute identification of sound-source position</article-title>
.
<source>Percept. Psychophys.</source>
<volume>28</volume>
,
<fpage>589</fpage>
<lpage>596</lpage>
.
<pub-id pub-id-type="doi">10.3758/bf03198830</pub-id>
<pub-id pub-id-type="pmid">7208275</pub-id>
</mixed-citation>
</ref>
<ref id="B22">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Tabry</surname>
<given-names>V.</given-names>
</name>
<name>
<surname>Zatorre</surname>
<given-names>R. J.</given-names>
</name>
<name>
<surname>Voss</surname>
<given-names>P.</given-names>
</name>
</person-group>
(
<year>2013</year>
).
<article-title>The influence of vision on sound localization abilities in both the horizontal and vertical planes</article-title>
.
<source>Front. Psychol.</source>
<volume>4</volume>
:
<fpage>932</fpage>
.
<pub-id pub-id-type="doi">10.3389/fpsyg.2013.00932</pub-id>
<pub-id pub-id-type="pmid">24376430</pub-id>
</mixed-citation>
</ref>
<ref id="B24">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Teng</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Puri</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Whitney</surname>
<given-names>D.</given-names>
</name>
</person-group>
(
<year>2012</year>
).
<article-title>Ultrafine spatial acuity of blind expert human echolocators</article-title>
.
<source>Exp. Brain Res.</source>
<volume>216</volume>
,
<fpage>483</fpage>
<lpage>488</lpage>
.
<pub-id pub-id-type="doi">10.1007/s00221-011-2951-1</pub-id>
<pub-id pub-id-type="pmid">22101568</pub-id>
</mixed-citation>
</ref>
<ref id="B23">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Teng</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Whitney</surname>
<given-names>D.</given-names>
</name>
</person-group>
(
<year>2011</year>
).
<article-title>The acuity of echolocation: spatial resolution in the sighted compared to expert performance</article-title>
.
<source>J. Vis. Impair. Blind.</source>
<volume>105</volume>
,
<fpage>20</fpage>
<lpage>32</lpage>
.
<pub-id pub-id-type="pmid">21611133</pub-id>
</mixed-citation>
</ref>
<ref id="B25">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Thaler</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Arnott</surname>
<given-names>S. R.</given-names>
</name>
<name>
<surname>Goodale</surname>
<given-names>M. A.</given-names>
</name>
</person-group>
(
<year>2011</year>
).
<article-title>Neural correlates of natural human echolocation in early and late blind echolocation experts</article-title>
.
<source>PLoS One</source>
<volume>6</volume>
:
<fpage>e20162</fpage>
.
<pub-id pub-id-type="doi">10.1371/journal.pone.0020162</pub-id>
<pub-id pub-id-type="pmid">21633496</pub-id>
</mixed-citation>
</ref>
<ref id="B26">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Thaler</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Milne</surname>
<given-names>J. L.</given-names>
</name>
<name>
<surname>Arnott</surname>
<given-names>S. R.</given-names>
</name>
<name>
<surname>Kish</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Goodale</surname>
<given-names>M. A.</given-names>
</name>
</person-group>
(
<year>2014</year>
).
<article-title>Neural correlates of motion processing through echolocation, source hearing and vision in blind echolocation experts and sighted echolocation novices</article-title>
.
<source>J. Neurophysiol.</source>
<volume>111</volume>
,
<fpage>112</fpage>
<lpage>127</lpage>
.
<pub-id pub-id-type="doi">10.1152/jn.00501.2013</pub-id>
<pub-id pub-id-type="pmid">24133224</pub-id>
</mixed-citation>
</ref>
<ref id="B27">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Vercillo</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Milne</surname>
<given-names>J. L.</given-names>
</name>
<name>
<surname>Gori</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Goodale</surname>
<given-names>M. A.</given-names>
</name>
</person-group>
(
<year>2015</year>
).
<article-title>Enhanced auditory spatial localization in blind echolocators</article-title>
.
<source>Neuropsychologia</source>
<volume>67C</volume>
,
<fpage>35</fpage>
<lpage>40</lpage>
.
<pub-id pub-id-type="doi">10.1016/j.neuropsychologia.2014.12.001</pub-id>
<pub-id pub-id-type="pmid">25484307</pub-id>
</mixed-citation>
</ref>
<ref id="B28">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Warren</surname>
<given-names>D. H.</given-names>
</name>
<name>
<surname>Welch</surname>
<given-names>R. B.</given-names>
</name>
<name>
<surname>McCarthy</surname>
<given-names>T. J.</given-names>
</name>
</person-group>
(
<year>1981</year>
).
<article-title>The role of visual-auditory “compellingness” in the ventriloquism effect: implications for transitivity among the spatial senses</article-title>
.
<source>Percept. Psychophys.</source>
<volume>30</volume>
,
<fpage>557</fpage>
<lpage>564</lpage>
.
<pub-id pub-id-type="doi">10.3758/bf03202010</pub-id>
<pub-id pub-id-type="pmid">7335452</pub-id>
</mixed-citation>
</ref>
<ref id="B29">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Watson</surname>
<given-names>A. B.</given-names>
</name>
<name>
<surname>Pelli</surname>
<given-names>D. G.</given-names>
</name>
</person-group>
(
<year>1983</year>
).
<article-title>QUEST: a Bayesian adaptive psychometric method</article-title>
.
<source>Percept. Psychophys.</source>
<volume>33</volume>
,
<fpage>113</fpage>
<lpage>120</lpage>
.
<pub-id pub-id-type="doi">10.3758/bf03202828</pub-id>
<pub-id pub-id-type="pmid">6844102</pub-id>
</mixed-citation>
</ref>
<ref id="B30">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Will</surname>
<given-names>U.</given-names>
</name>
<name>
<surname>Berg</surname>
<given-names>E.</given-names>
</name>
</person-group>
(
<year>2007</year>
).
<article-title>Brain wave synchronization and entrainment to periodic acoustic stimuli</article-title>
.
<source>Neurosci. Lett.</source>
<volume>424</volume>
,
<fpage>55</fpage>
<lpage>60</lpage>
.
<pub-id pub-id-type="doi">10.1016/j.neulet.2007.07.036</pub-id>
<pub-id pub-id-type="pmid">17709189</pub-id>
</mixed-citation>
</ref>
<ref id="B31">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Zwiers</surname>
<given-names>M. P.</given-names>
</name>
<name>
<surname>Van Opstal</surname>
<given-names>A. J.</given-names>
</name>
<name>
<surname>Paige</surname>
<given-names>G. D.</given-names>
</name>
</person-group>
(
<year>2003</year>
).
<article-title>Plasticity in human sound localization induced by compressed spatial vision</article-title>
.
<source>Nat. Neurosci.</source>
<volume>6</volume>
,
<fpage>175</fpage>
<lpage>181</lpage>
.
<pub-id pub-id-type="doi">10.1038/nn999</pub-id>
<pub-id pub-id-type="pmid">12524547</pub-id>
</mixed-citation>
</ref>
</ref-list>
</back>
</pmc>
<affiliations>
<list></list>
<tree>
<noCountry>
<name sortKey="Brayda, Luca" sort="Brayda, Luca" uniqKey="Brayda L" first="Luca" last="Brayda">Luca Brayda</name>
<name sortKey="Gori, Monica" sort="Gori, Monica" uniqKey="Gori M" first="Monica" last="Gori">Monica Gori</name>
<name sortKey="Tonelli, Alessia" sort="Tonelli, Alessia" uniqKey="Tonelli A" first="Alessia" last="Tonelli">Alessia Tonelli</name>
</noCountry>
</tree>
</affiliations>
</record>

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/HapticV1/Data/Ncbi/Merge
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 003A10 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/Ncbi/Merge/biblio.hfd -nk 003A10 | SxmlIndent | more
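
A minimal sketch (an assumption: $WICRI_ROOT is set and the Dilib tools HfdSelect and SxmlIndent are on the PATH, exactly as in the commands above) for saving the indented record to a local file instead of paging through it:

EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/HapticV1/Data/Ncbi/Merge
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 003A10 | SxmlIndent > 003A10.xml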

To add a link to this page in the Wicri network

{{Explor lien
   |wiki=    Ticri/CIDE
   |area=    HapticV1
   |flux=    Ncbi
   |étape=   Merge
   |type=    RBID
   |clé=     PMC:4451354
   |texte=   Task-dependent calibration of auditory spatial perception through environmental visual observation
}}

To generate wiki pages

HfdIndexSelect -h $EXPLOR_AREA/Data/Ncbi/Merge/RBID.i   -Sk "pubmed:26082692" \
       | HfdSelect -Kh $EXPLOR_AREA/Data/Ncbi/Merge/biblio.hfd   \
       | NlmPubMed2Wicri -a HapticV1 

Wicri

This area was generated with Dilib version V0.6.23.
Data generation: Mon Jun 13 01:09:46 2016. Site generation: Wed Mar 6 09:54:07 2024