Exploration server on haptic devices

Warning: this site is under development!
Warning: this site is generated automatically from raw corpora.
The information has therefore not been validated.

Auditory/visual distance estimation: accuracy and variability

Internal identifier: 003396 (Ncbi/Merge); previous: 003395; next: 003397

Authors: Paul W. Anderson [United States]; Pavel Zahorik [United States]

Source:

RBID: PMC:4188027

Abstract

Past research has shown that auditory distance estimation improves when listeners are given the opportunity to see all possible sound sources when compared to no visual input. It has also been established that distance estimation is more accurate in vision than in audition. The present study investigates the degree to which auditory distance estimation is improved when matched with a congruent visual stimulus. Virtual sound sources based on binaural room impulse response (BRIR) measurements made from distances ranging from approximately 0.3 to 9.8 m in a concert hall were used as auditory stimuli. Visual stimuli were photographs taken from the participant's perspective at each distance in the impulse response measurement setup presented on a large HDTV monitor. Participants were asked to estimate egocentric distance to the sound source in each of three conditions: auditory only (A), visual only (V), and congruent auditory/visual stimuli (A+V). Each condition was presented within its own block. Sixty-two participants were tested in order to quantify the response variability inherent in auditory distance perception. Distance estimates from both the V and A+V conditions were found to be considerably more accurate and less variable than estimates from the A condition.


URL:
DOI: 10.3389/fpsyg.2014.01097
PubMed: 25339924
PubMed Central: 4188027


The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">Auditory/visual distance estimation: accuracy and variability</title>
<author>
<name sortKey="Anderson, Paul W" sort="Anderson, Paul W" uniqKey="Anderson P" first="Paul W." last="Anderson">Paul W. Anderson</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<institution>Department of Psychological and Brain Sciences, University of Louisville</institution>
<country>Louisville, KY, USA</country>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Zahorik, Pavel" sort="Zahorik, Pavel" uniqKey="Zahorik P" first="Pavel" last="Zahorik">Pavel Zahorik</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<institution>Department of Psychological and Brain Sciences, University of Louisville</institution>
<country>Louisville, KY, USA</country>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="aff2">
<institution>Division of Communicative Disorders, Department of Surgery, School of Medicine, University of Louisville</institution>
<country>Louisville, KY, USA</country>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PMC</idno>
<idno type="pmid">25339924</idno>
<idno type="pmc">4188027</idno>
<idno type="url">http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4188027</idno>
<idno type="RBID">PMC:4188027</idno>
<idno type="doi">10.3389/fpsyg.2014.01097</idno>
<date when="2014">2014</date>
<idno type="wicri:Area/Pmc/Corpus">001F19</idno>
<idno type="wicri:Area/Pmc/Curation">001F19</idno>
<idno type="wicri:Area/Pmc/Checkpoint">000E03</idno>
<idno type="wicri:Area/Ncbi/Merge">003396</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a" type="main">Auditory/visual distance estimation: accuracy and variability</title>
<author>
<name sortKey="Anderson, Paul W" sort="Anderson, Paul W" uniqKey="Anderson P" first="Paul W." last="Anderson">Paul W. Anderson</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<institution>Department of Psychological and Brain Sciences, University of Louisville</institution>
<country>Louisville, KY, USA</country>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Zahorik, Pavel" sort="Zahorik, Pavel" uniqKey="Zahorik P" first="Pavel" last="Zahorik">Pavel Zahorik</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<institution>Department of Psychological and Brain Sciences, University of Louisville</institution>
<country>Louisville, KY, USA</country>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="aff2">
<institution>Division of Communicative Disorders, Department of Surgery, School of Medicine, University of Louisville</institution>
<country>Louisville, KY, USA</country>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
</analytic>
<series>
<title level="j">Frontiers in Psychology</title>
<idno type="eISSN">1664-1078</idno>
<imprint>
<date when="2014">2014</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass></textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">
<p>Past research has shown that auditory distance estimation improves when listeners are given the opportunity to see all possible sound sources when compared to no visual input. It has also been established that distance estimation is more accurate in vision than in audition. The present study investigates the degree to which auditory distance estimation is improved when matched with a congruent visual stimulus. Virtual sound sources based on binaural room impulse response (BRIR) measurements made from distances ranging from approximately 0.3 to 9.8 m in a concert hall were used as auditory stimuli. Visual stimuli were photographs taken from the participant's perspective at each distance in the impulse response measurement setup presented on a large HDTV monitor. Participants were asked to estimate egocentric distance to the sound source in each of three conditions: auditory only (A), visual only (V), and congruent auditory/visual stimuli (A+V). Each condition was presented within its own block. Sixty-two participants were tested in order to quantify the response variability inherent in auditory distance perception. Distance estimates from both the V and A+V conditions were found to be considerably more accurate and less variable than estimates from the A condition.</p>
</div>
</front>
<back>
<div1 type="bibliography">
<listBibl>
<biblStruct>
<analytic>
<author>
<name sortKey="Alais, D" uniqKey="Alais D">D. Alais</name>
</author>
<author>
<name sortKey="Burr, D" uniqKey="Burr D">D. Burr</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ashmead, D H" uniqKey="Ashmead D">D. H. Ashmead</name>
</author>
<author>
<name sortKey="Davis, D L" uniqKey="Davis D">D. L. Davis</name>
</author>
<author>
<name sortKey="Northington, A" uniqKey="Northington A">A. Northington</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Blauert, J" uniqKey="Blauert J">J. Blauert</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Calcagno, E R" uniqKey="Calcagno E">E. R. Calcagno</name>
</author>
<author>
<name sortKey="Abregu, E L" uniqKey="Abregu E">E. L. Abregu</name>
</author>
<author>
<name sortKey="Manuel, C E" uniqKey="Manuel C">C. E. Manuel</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Coleman, P D" uniqKey="Coleman P">P. D. Coleman</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Da Silva, J A" uniqKey="Da Silva J">J. A. Da Silva</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ernst, M O" uniqKey="Ernst M">M. O. Ernst</name>
</author>
<author>
<name sortKey="Banks, M S" uniqKey="Banks M">M. S. Banks</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gardner, M B" uniqKey="Gardner M">M. B. Gardner</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gogel, W C" uniqKey="Gogel W">W. C. Gogel</name>
</author>
</analytic>
</biblStruct>
<biblStruct></biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Jack, C E" uniqKey="Jack C">C. E. Jack</name>
</author>
<author>
<name sortKey="Thurlow, W R" uniqKey="Thurlow W">W. R. Thurlow</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Loomis, J M" uniqKey="Loomis J">J. M. Loomis</name>
</author>
<author>
<name sortKey="Philbeck, J W" uniqKey="Philbeck J">J. W. Philbeck</name>
</author>
<author>
<name sortKey="Zahorik, P" uniqKey="Zahorik P">P. Zahorik</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mershon, D H" uniqKey="Mershon D">D. H. Mershon</name>
</author>
<author>
<name sortKey="Ballenger, W L" uniqKey="Ballenger W">W. L. Ballenger</name>
</author>
<author>
<name sortKey="Little, A D" uniqKey="Little A">A. D. Little</name>
</author>
<author>
<name sortKey="Mcmurtry, P L" uniqKey="Mcmurtry P">P. L. McMurtry</name>
</author>
<author>
<name sortKey="Buchanan, J L" uniqKey="Buchanan J">J. L. Buchanan</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mershon, D H" uniqKey="Mershon D">D. H. Mershon</name>
</author>
<author>
<name sortKey="Bowers, J N" uniqKey="Bowers J">J. N. Bowers</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mershon, D H" uniqKey="Mershon D">D. H. Mershon</name>
</author>
<author>
<name sortKey="Desaulniers, D H" uniqKey="Desaulniers D">D. H. Desaulniers</name>
</author>
<author>
<name sortKey="Amerson, T L" uniqKey="Amerson T">T. L. Amerson</name>
</author>
<author>
<name sortKey="Kiefer, S A" uniqKey="Kiefer S">S. A. Kiefer</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mershon, D H" uniqKey="Mershon D">D. H. Mershon</name>
</author>
<author>
<name sortKey="King, L E" uniqKey="King L">L. E. King</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Middlebrooks, J C" uniqKey="Middlebrooks J">J. C. Middlebrooks</name>
</author>
<author>
<name sortKey="Green, D M" uniqKey="Green D">D. M. Green</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mills, A W" uniqKey="Mills A">A. W. Mills</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Rife, D D" uniqKey="Rife D">D. D. Rife</name>
</author>
<author>
<name sortKey="Vanderkooy, J" uniqKey="Vanderkooy J">J. Vanderkooy</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Sedgwick, H A" uniqKey="Sedgwick H">H. A. Sedgwick</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Thurlow, W R" uniqKey="Thurlow W">W. R. Thurlow</name>
</author>
<author>
<name sortKey="Jack, C E" uniqKey="Jack C">C. E. Jack</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Vroomen, J" uniqKey="Vroomen J">J. Vroomen</name>
</author>
<author>
<name sortKey="De Gelder, B" uniqKey="De Gelder B">B. de Gelder</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Wagner, M" uniqKey="Wagner M">M. Wagner</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Wightman, F L" uniqKey="Wightman F">F. L. Wightman</name>
</author>
<author>
<name sortKey="Kistler, D J" uniqKey="Kistler D">D. J. Kistler</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Zahorik, P" uniqKey="Zahorik P">P. Zahorik</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Zahorik, P" uniqKey="Zahorik P">P. Zahorik</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Zahorik, P" uniqKey="Zahorik P">P. Zahorik</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Zahorik, P" uniqKey="Zahorik P">P. Zahorik</name>
</author>
<author>
<name sortKey="Brungart, D S" uniqKey="Brungart D">D. S. Brungart</name>
</author>
<author>
<name sortKey="Bronkhorst, A W" uniqKey="Bronkhorst A">A. W. Bronkhorst</name>
</author>
</analytic>
</biblStruct>
</listBibl>
</div1>
</back>
</TEI>
<pmc article-type="research-article">
<pmc-dir>properties open_access</pmc-dir>
<front>
<journal-meta>
<journal-id journal-id-type="nlm-ta">Front Psychol</journal-id>
<journal-id journal-id-type="iso-abbrev">Front Psychol</journal-id>
<journal-id journal-id-type="publisher-id">Front. Psychol.</journal-id>
<journal-title-group>
<journal-title>Frontiers in Psychology</journal-title>
</journal-title-group>
<issn pub-type="epub">1664-1078</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="pmid">25339924</article-id>
<article-id pub-id-type="pmc">4188027</article-id>
<article-id pub-id-type="doi">10.3389/fpsyg.2014.01097</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Psychology</subject>
<subj-group>
<subject>Original Research Article</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Auditory/visual distance estimation: accuracy and variability</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name>
<surname>Anderson</surname>
<given-names>Paul W.</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
<uri xlink:type="simple" xlink:href="http://community.frontiersin.org/people/u/112854"></uri>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Zahorik</surname>
<given-names>Pavel</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
<xref ref-type="aff" rid="aff2">
<sup>2</sup>
</xref>
<xref ref-type="author-notes" rid="fn001">
<sup>*</sup>
</xref>
<uri xlink:type="simple" xlink:href="http://community.frontiersin.org/people/u/154580"></uri>
</contrib>
</contrib-group>
<aff id="aff1">
<sup>1</sup>
<institution>Department of Psychological and Brain Sciences, University of Louisville</institution>
<country>Louisville, KY, USA</country>
</aff>
<aff id="aff2">
<sup>2</sup>
<institution>Division of Communicative Disorders, Department of Surgery, School of Medicine, University of Louisville</institution>
<country>Louisville, KY, USA</country>
</aff>
<author-notes>
<fn fn-type="edited-by">
<p>Edited by: Brian Simpson, Air Force Research Laboratory, USA</p>
</fn>
<fn fn-type="edited-by">
<p>Reviewed by: Marianne Latinus, Aix-Marseille Université, France; James A. Schirillo, Wake Forest University, USA</p>
</fn>
<corresp id="fn001">*Correspondence: Pavel Zahorik, Division of Communicative Disorders, Department of Surgery, School of Medicine, University of Louisville, MDA Building, Suite 220, 627 S. Preston Street, Louisville, KY 40292, USA e-mail:
<email xlink:type="simple">pavel.zahorik@louisville.edu</email>
</corresp>
<fn fn-type="other" id="fn002">
<p>This article was submitted to Auditory Cognitive Neuroscience, a section of the journal Frontiers in Psychology.</p>
</fn>
</author-notes>
<pub-date pub-type="epub">
<day>07</day>
<month>10</month>
<year>2014</year>
</pub-date>
<pub-date pub-type="collection">
<year>2014</year>
</pub-date>
<volume>5</volume>
<elocation-id>1097</elocation-id>
<history>
<date date-type="received">
<day>17</day>
<month>4</month>
<year>2014</year>
</date>
<date date-type="accepted">
<day>10</day>
<month>9</month>
<year>2014</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright © 2014 Anderson and Zahorik.</copyright-statement>
<copyright-year>2014</copyright-year>
<license license-type="open-access" xlink:href="http://creativecommons.org/licenses/by/4.0/">
<license-p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</license-p>
</license>
</permissions>
<abstract>
<p>Past research has shown that auditory distance estimation improves when listeners are given the opportunity to see all possible sound sources when compared to no visual input. It has also been established that distance estimation is more accurate in vision than in audition. The present study investigates the degree to which auditory distance estimation is improved when matched with a congruent visual stimulus. Virtual sound sources based on binaural room impulse response (BRIR) measurements made from distances ranging from approximately 0.3 to 9.8 m in a concert hall were used as auditory stimuli. Visual stimuli were photographs taken from the participant's perspective at each distance in the impulse response measurement setup presented on a large HDTV monitor. Participants were asked to estimate egocentric distance to the sound source in each of three conditions: auditory only (A), visual only (V), and congruent auditory/visual stimuli (A+V). Each condition was presented within its own block. Sixty-two participants were tested in order to quantify the response variability inherent in auditory distance perception. Distance estimates from both the V and A+V conditions were found to be considerably more accurate and less variable than estimates from the A condition.</p>
</abstract>
<kwd-group>
<kwd>spatial hearing</kwd>
<kwd>sound localization</kwd>
<kwd>distance perception</kwd>
<kwd>multimodal</kwd>
<kwd>virtual sound</kwd>
</kwd-group>
<counts>
<fig-count count="9"></fig-count>
<table-count count="1"></table-count>
<equation-count count="0"></equation-count>
<ref-count count="28"></ref-count>
<page-count count="11"></page-count>
<word-count count="7507"></word-count>
</counts>
</article-meta>
</front>
<body>
<sec sec-type="introduction" id="s1">
<title>Introduction</title>
<p>Within the field of human sound localization, the perception of sound source distance has received relatively little scientific study compared to the perception of sound source direction. This is surprising given that the perception of distance is at least as important as direction for conveying important spatial information about our surroundings, such as locating or avoiding auditory objects under conditions when visual information may be ineffective or unavailable. Although generally less is known about auditory distance perception (ADP) than directional perception, it is clear that ADP results in both highly variable judgments (Zahorik et al.,
<xref rid="B28" ref-type="bibr">2005</xref>
) as well as systematic judgment biases (Zahorik,
<xref rid="B26" ref-type="bibr">2002a</xref>
), especially when compared to directional localization performance, which is comparatively accurate and consistent (Middlebrooks and Green,
<xref rid="B17" ref-type="bibr">1991</xref>
). In terms of judgment bias, there appears to be a general consensus across a variety of studies and listening conditions that far distances are underestimated while closer distances are overestimated (Zahorik et al.,
<xref rid="B28" ref-type="bibr">2005</xref>
). These results are seemingly at odds with our everyday experience of auditory space, which appears consistent and relatively accurate. One possible explanation for this discrepancy is that in many everyday situations, ADP may be influenced by additional spatial information provided by other sensory modalities, such as vision. The goal of the current study is to better understand how visual input may influence both bias and variability in ADP.</p>
<p>Visual influences on the apparent direction of a sound source are well-known: The superior spatial resolution of vision dominates, or “captures,” the less precise directional information input through the auditory modality. This effect, which underlies the ventriloquist's illusion, can influence sound sources separated from visual targets by as much as 55° (Thurlow and Jack,
<xref rid="B21" ref-type="bibr">1973</xref>
). It also appears to be strengthened by temporal synchrony between auditory and visual targets (Jack and Thurlow,
<xref rid="B11" ref-type="bibr">1973</xref>
), but is unaffected by either attention to the visual distracter or feedback provided to the participant (Vroomen and de Gelder,
<xref rid="B22" ref-type="bibr">2004</xref>
).</p>
<p>Visual capture also appears to function in the distance dimension. For example, Gardner (
<xref rid="B8" ref-type="bibr">1968</xref>
) demonstrated a form of visual capture he termed “The Proximity-Image Effect,” in which listeners mistakenly choose the nearest visible sound source as the actual sound source. Mershon et al. (
<xref rid="B15" ref-type="bibr">1980</xref>
) later discovered that the presence of a visual stimulus does not always elicit an underestimation of the physical distance of a sound source, as Gardner's (
<xref rid="B8" ref-type="bibr">1968</xref>
) data suggest. They found that when an occluded sound source was located closer to listeners than a visible dummy loudspeaker, listeners would overestimate the distance of the sound source as being located at the more distant dummy loudspeaker. Taken together, the results from these two studies clearly demonstrate that the presence of plausible visual targets can influence ADP and that under the appropriate circumstances, this influence results in reduced ADP accuracy.</p>
<p>Under other circumstances, visual information can improve ADP accuracy. For example, Zahorik (
<xref rid="B25" ref-type="bibr">2001</xref>
) demonstrated that ADP accuracy in a reverberant environment improves when listeners have the opportunity to view multiple possible sound sources prior to making judgments. Two groups of listeners were tasked with judging the apparent distance to sound sources along a loudspeaker array. One group was able to view the entire loudspeaker array, and the second group was blindfolded throughout the experiment. Distance judgments provided by the group who were able to view the loudspeaker array were more accurate than judgments from the auditory-only group. Similar conclusions were drawn in a study performed by Calcagno et al. (
<xref rid="B4" ref-type="bibr">2012</xref>
) in which visual cues in the form of LEDs were either present or absent during an ADP task in a dark room. Their setup involved a mobile loudspeaker that was moved along a track between trials and LEDs placed at standard intervals along the track. When LEDs were present, listeners were informed of the distance to the LEDs prior to the task. Results showed that auditory distance judgments were more accurate when the LEDs were present.</p>
<p>Visual information can also affect the variability of ADP. Zahorik (
<xref rid="B25" ref-type="bibr">2001</xref>
) found that ADP variability was reduced in the presence of visual information. However, Calcagno et al. (
<xref rid="B4" ref-type="bibr">2012</xref>
) did not observe a reduction in variability in the presence of visual cues. These contradictory findings may stem from differences in the methodologies of the two studies. In Zahorik (
<xref rid="B25" ref-type="bibr">2001</xref>
) visual information included information about the room and all possible locations of the loudspeakers. On the other hand, Calcagno et al.'s (
<xref rid="B4" ref-type="bibr">2012</xref>
) listeners were limited in their visual information to LEDs in a dark room. Therefore, more reliable visual distance information in Zahorik (
<xref rid="B25" ref-type="bibr">2001</xref>
) may have led to less variable distance judgments.</p>
<p>Perhaps more interesting are the potential causes of large ADP variability in the absence of visual information. Few studies have explicitly examined this issue given the experimental demands of collecting datasets of sufficient size to reliably quantify ADP variability. Such variability may be conceptualized as originating from at least two sources: one related to the judgments/percepts within a single listener, and one related to differences in judgments/percepts between listeners. Past studies of ADP have not been designed to measure these sources of variability independently. Instead they typically have concentrated on a single source of variability. For example, some ADP studies have utilized a large number (
<italic>n</italic>
= 80–200) of listeners (Mershon and King,
<xref rid="B16" ref-type="bibr">1975</xref>
; Mershon and Bowers,
<xref rid="B14" ref-type="bibr">1979</xref>
; Mershon et al.,
<xref rid="B13" ref-type="bibr">1989</xref>
), but tested relatively few source distances and/or few repetitions per distance. Such designs limit investigation of ADP variability within individual listeners. Other studies (Coleman,
<xref rid="B5" ref-type="bibr">1968</xref>
; Ashmead et al.,
<xref rid="B2" ref-type="bibr">1995</xref>
; Zahorik,
<xref rid="B26" ref-type="bibr">2002a</xref>
) have tested greater numbers of source distances with many repetitions at each distance, but at the cost of evaluating fewer individual subjects overall (
<italic>n</italic>
= 6–9). Zahorik et al. (
<xref rid="B28" ref-type="bibr">2005</xref>
) reanalyzed the results from Zahorik (
<xref rid="B26" ref-type="bibr">2002a</xref>
) to assess ADP judgment variability and found that distance judgments for a sound source may vary between 20 and 60% of the source distance. However, given the relatively small number of listeners evaluated, it is difficult to know how these results may generalize to the population as a whole.</p>
<p>The present study was motivated by gaps in knowledge surrounding the interaction of vision and audition in the distance domain as well as the inherent judgment variability associated with ADP. To assess the degree to which ADP is improved when an auditory stimulus is matched with a congruent visual stimulus, participants judged egocentric distance to a virtual sound source in three conditions: auditory only (A), visual only (V), and congruent auditory/visual stimuli (A+V). Virtual auditory space techniques (Wightman and Kistler,
<xref rid="B24" ref-type="bibr">1989</xref>
) were used for distance simulation in order to allow simple and rapid switching between source distances throughout the experiment. Based on past results (Zahorik,
<xref rid="B25" ref-type="bibr">2001</xref>
), congruent visual stimuli are expected to yield ADP judgments that are more veridical and less variable; the present study design allows precise quantification of these variability-reduction effects and offers improved generalization to the normal-hearing population as a whole.</p>
</sec>
<sec sec-type="materials|methods" id="s2">
<title>Materials and methods</title>
<sec>
<title>Participants</title>
<p>There were a total of 62 (41 female) participants, ranging in age from 18 to 46 (
<italic>M</italic>
= 22.82). Five participants were removed from analysis: four because of concerns about their understanding of the task, and one because of concerns about self-reported hearing status. All participants had normal hearing based on either self-reports (
<italic>n</italic>
= 30) or pure-tone audiometric screening (
<italic>n</italic>
= 32) at 25 dB HL from 250 to 8000 Hz. Informed consent was obtained from all participants prior to data collection, and participants were awarded either monetary compensation or course credit for their participation. All procedures in this study involving human subject participants were approved by the University of Louisville Institutional Review Board (IRB).</p>
</sec>
<sec>
<title>Auditory stimuli</title>
<p>Binaural room impulse responses (BRIRs) were measured from 11 logarithmically-spaced distances ranging from 0.3048 to 9.7536 m at 0° azimuth in a 558-seat concert hall (Margaret Comstock Concert Hall, University of Louisville). The hall had a broadband reverberation time (T
<sub>60</sub>
) of 1.9 s (ISO-3382,
<xref rid="B10" ref-type="bibr">1997</xref>
). The auditorium was a complex shape with sloping floors and moveable “clouds” in the ceiling. It had a total volume of approximately 5225 m
<sup>3</sup>
(28.956 × 16.9164 × 10.668 m; L × W × H). All BRIR measurements were made with a KEMAR manikin (G.R.A.S. Type 45BM), with IEC711 ear-canal simulators (G.R.A.S. RA0045) and large pinnae (G.R.A.S. KB1060/1) at a fixed location near the edge of the performance stage, facing away from audience seating. The sound source, a high-quality 2-way co-axial loudspeaker (Beyma 8BX) mounted in a sealed 13.5-l cabinet, was moved across the stage to manipulate distance. BRIRs were estimated using Maximum Length Sequence (MLS) system identification techniques (Rife and Vanderkooy,
<xref rid="B19" ref-type="bibr">1989</xref>
). The MLS signal was 2.73 s in duration (17th-order MLS), sampled at 48 kHz with 24-bit resolution. Five repetitions of this signal were presented and averaged to improve signal-to-noise ratio (SNR), which was >35 dB (0.2–20 kHz) at 9.7536 m.</p>
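The reported measurement parameters are easy to sanity-check. The √2 spacing below is an inference from the stated endpoints (0.3048 m = 1 ft, 9.7536 m = 32 ft), not a figure quoted in the text:

```python
import math

# Reader's back-of-the-envelope check of the reported parameters
# (not code from the study itself).
mls_order, fs = 17, 48_000
mls_len = 2**mls_order - 1      # one MLS period: 131071 samples
mls_dur = mls_len / fs          # ~2.73 s, matching the reported duration

# 11 logarithmically spaced distances from 0.3048 m (1 ft) to 9.7536 m
# (32 ft): the endpoints imply a constant ratio of 32**(1/10) = sqrt(2)
# between neighboring distances (an inference, not stated in the text).
ratio = (9.7536 / 0.3048) ** (1 / 10)
distances = [0.3048 * ratio**k for k in range(11)]
```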
<p>All BRIR measurements were post-processed to compensate for the response characteristics of the measurement loudspeaker as well as the presentation headphones (Beyerdynamic DR-990 Pro) when coupled to the head. Because residual noise in the measured BRIRs can be easily detectable following virtual sound source synthesis, an additional time-windowing procedure was used to further improve SNR in the BRIRs. The procedure was based on that described by Zahorik (
<xref rid="B26" ref-type="bibr">2002a</xref>
). Briefly, the BRIR was first divided into 30 frequency bands (1/3-octave bandwidth, Gaussian shape) and an energy-decay curve was computed for each band using reverse integration. A straight line was then fit to the decay curve in dB/s over an energy range of −5 to −35 dB. This fit was then used to derive an exponentially-decaying time window for each frequency band. The time windows were then applied in each band, and the results summed across bands. This procedure was effective at improving SNR particularly in the later portions of the BRIR. The source signal for virtual synthesis was a 100 ms sample of Gaussian noise.</p>
</sec>
<sec>
<title>Visual stimuli</title>
<p>Visual stimuli were digital photographs of the measurement loudspeaker taken from the position of the head of the KEMAR manikin (see Figure
<xref ref-type="fig" rid="F1">1</xref>
). The camera/lens combination (Nikon D70/Tokina f4 12 mm focal length) produced nearly a 90° field of view. The resulting images (2000 × 3008 pixels) were displayed on a high-quality large screen HDTV (either 46 or 40 in. diagonal). The viewing angle was approximately 51° at the participant's location.</p>
<fig id="F1" position="float">
<label>Figure 1</label>
<caption>
<p>
<bold>Visual stimulus example</bold>
. A photograph of the measurement loudspeaker was taken at each distance from the position where the KEMAR manikin was placed during BRIR measurement at the front of the stage. In the V and A+V conditions, a photograph was presented on a large flat screen HDTV and the participant provided a distance judgment to the sound source. In this example, the measurement loudspeaker is placed 2.44 m in front of the camera in Comstock Hall.</p>
</caption>
<graphic xlink:href="fpsyg-05-01097-g0001"></graphic>
</fig>
</sec>
<sec>
<title>Procedure</title>
<p>The entire experiment took place in a double-walled soundproof booth (Acoustic Systems, Austin, TX). Participants were asked to estimate egocentric distance to the sound source in each of the three conditions: A, V, and A+V. Participants had the opportunity to play the auditory stimulus multiple times before entering their distance judgment. Once the stimulus had been played, a distance judgment could be entered at any time; consequently, some listeners may have had only one exposure to the stimulus on a given trial while others had multiple exposures (the number of times a participant listened to the stimulus was not recorded). In the V and A+V conditions, the visual stimulus was present for the entire duration of the trial. Judgments were entered using a computer keyboard, in units of either meters or feet at the participant's option. All judgments were required to be precise to two decimal places, and responses in feet were converted to meters prior to all data analysis. Listeners were instructed to reserve a response of zero for a percept of inside-the-head locatedness (Blauert,
<xref rid="B3" ref-type="bibr">1997</xref>
, p. 132). Most participants (
<italic>n</italic>
= 45) provided judgments in all three conditions. Each condition was tested within its own block of trials, which included 10 judgments for each of the 11 source distances, for a total of 110 judgments. The order of blocks was counterbalanced, and the order of trials within each block was randomized. An additional set of listeners (
<italic>n</italic>
= 17) participated only in the A condition and contributed 30 judgments for each of the 11 source distances, for a total of 330 judgments. The data from this group of listeners were collected to increase the sample of auditory distance judgments, since we were interested in the amount of intra-subject variability inherent in ADP. Feedback was not provided to the participants. MATLAB software (Mathworks Inc., Natick, MA) was used for stimulus presentation and data collection.</p>
</sec>
<sec>
<title>Data analysis</title>
<p>Following methods used in previous ADP and VDP studies (Da Silva,
<xref rid="B6" ref-type="bibr">1985</xref>
; Sedgwick,
<xref rid="B20" ref-type="bibr">1986</xref>
; Zahorik,
<xref rid="B25" ref-type="bibr">2001</xref>
,
<xref rid="B26" ref-type="bibr">2002a</xref>
; Zahorik et al.,
<xref rid="B28" ref-type="bibr">2005</xref>
), power functions of the following form were fit (least-squares criterion) to the geometric means in each condition:
<italic>ŷ</italic>
<sub>
<italic>r</italic>
</sub>
=
<italic>k</italic>
Φ
<sup>
<italic>a</italic>
</sup>
<sub>
<italic>r</italic>
</sub>
(
<italic>ŷ</italic>
<sub>
<italic>r</italic>
</sub>
= perceived distance,
<italic>k</italic>
= constant,
<italic>a</italic>
= power-law exponent, Φ
<sub>
<italic>r</italic>
</sub>
= target source distance). The fit parameters,
<italic>k</italic>
and
<italic>a</italic>
, were used as measures of judgment accuracy. The exponent indicates the amount of non-linear compression (
<italic>a</italic>
< 1) or expansion (
<italic>a</italic>
> 1) in the function. The constant indicates the amount of linear compression (
<italic>k</italic>
< 1) or expansion (
<italic>k</italic>
> 1) in the function. The exponent and constant parameters are equivalent to slope and intercept, respectively, when perceived distance and physical distance are represented in logarithmic coordinates. Residual error from the fitted functions as well as the proportion of variance accounted for by the fitted function (
<italic>R</italic>
<sup>2</sup>
) were used to describe both between-subject and within-subject response variability. Measures of accuracy and variability were compared between conditions using independent samples
<italic>t</italic>
-tests with Bonferroni correction. Independent samples
<italic>t</italic>
-tests were used because not all subjects were tested in all conditions. Intra-subject variability was evaluated using independent
<italic>t</italic>
-tests comparing listeners in the A condition who performed 10 judgments per distance vs. those who performed 30 judgments per distance. Reliability of distance judgments across conditions was analyzed by computing the Pearson correlations across conditions for the fit parameters and
<italic>R</italic>
<sup>2</sup>
values. All analyses were performed using MATLAB (Mathworks Inc., Natick, MA), except for the
<italic>t</italic>
-tests, which were performed using SPSS (IBM Corp., Armonk, NY).</p>
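The power-function fit is conventionally performed in log-log coordinates, where ŷ = kΦ^a becomes a straight line with slope a and intercept log k. A minimal sketch of that fit (our reconstruction; the study itself used MATLAB):

```python
import numpy as np

def fit_power_function(target_m, judged_m):
    """Fit judged = k * target**a by least squares in log-log
    coordinates and return (a, k, R^2). `judged_m` holds the
    geometric mean judgment at each target distance."""
    x = np.log10(target_m)
    y = np.log10(judged_m)
    a, log_k = np.polyfit(x, y, 1)        # slope = a, intercept = log10(k)
    y_hat = log_k + a * x
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return a, 10.0 ** log_k, 1.0 - ss_res / ss_tot
```

With this parameterization, a &lt; 1 or k &lt; 1 reads directly as non-linear or linear compression of the judgments, as described above.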
</sec>
</sec>
<sec sec-type="results" id="s3">
<title>Results</title>
<p>Distance estimation results for a single representative participant (Code QAD) are shown in Figures
<xref ref-type="fig" rid="F2">2A–C</xref>
for the A, V, and A+V conditions respectively. Dots indicate the raw distance judgments provided by the participant (
<italic>y</italic>
), while the open circles represent the geometric mean (
<overline>y</overline>
) for each distance. The function fits for each condition are plotted as a solid line (
<italic>ŷ</italic>
), and the diagonal dotted line represents a perfectly accurate relationship between target distance and estimated distance (i.e.,
<italic>a</italic>
= 1,
<italic>k</italic>
= 1). Each panel includes the fit parameters (
<italic>a</italic>
and
<italic>k</italic>
) and proportion of variability accounted for by the fit (
<italic>R</italic>
<sup>2</sup>
). Consistent with previous studies on both auditory (Zahorik et al.,
<xref rid="B28" ref-type="bibr">2005</xref>
) and visual distance estimation (Da Silva,
<xref rid="B6" ref-type="bibr">1985</xref>
; Sedgwick,
<xref rid="B20" ref-type="bibr">1986</xref>
), power functions appear to be good fits to the data, although the distance judgments are more accurate and less variable in the conditions with visual stimuli for this participant, as evidenced by the increase in
<italic>R</italic>
<sup>2</sup>
and the fact that
<italic>a</italic>
and
<italic>k</italic>
are closer to 1.</p>
<fig id="F2" position="float">
<label>Figure 2</label>
<caption>
<p>
<bold>Data from a single representative participant (code QAD) for auditory (“A,” panel A), visual (“V,” panel B), and auditory/visual (“A+V,” panel C) conditions plotted on logarithmic axes</bold>
. Dots show raw distance judgments (
<italic>y</italic>
): 10 replications/distance. Open circles indicate geometric means (
<overline>
<italic>y</italic>
</overline>
) for each target distance. Data from each condition were fit with a power function (
<italic>ŷ</italic>
; solid line) of the form
<italic>ŷ</italic>
<sub>
<italic>r</italic>
</sub>
=
<italic>k</italic>
Φ
<sup>
<italic>a</italic>
</sup>
<sub>
<italic>r</italic>
</sub>
(
<italic>ŷ</italic>
<sub>
<italic>r</italic>
</sub>
= perceived distance,
<italic>k</italic>
= constant,
<italic>a</italic>
= power-law exponent, Φ
<sub>
<italic>r</italic>
</sub>
= target source distance). Fit parameters and the proportion of variability accounted for by the fit (
<italic>R</italic>
<sup>2</sup>
) are shown in each panel. Perfectly accurate performance is indicated by the dotted line in each panel.</p>
</caption>
<graphic xlink:href="fpsyg-05-01097-g0002"></graphic>
</fig>
<p>Identical analyses were conducted for all remaining participants in each of the three stimulus conditions. Any distance judgments of “zero” were noted and removed from all subsequent analyses. Of most interest were zero responses in the A condition, since listeners were instructed to only provide a judgment of zero when the stimulus was perceived as located “inside the head.” Only 0.25% of all judgments in the A condition were zero, indicating that the virtual sound sources were perceived as being localized outside the head in the vast majority of cases.</p>
<p>The distributions of
<italic>R</italic>
<sup>2</sup>
values across all participants are displayed in Figures
<xref ref-type="fig" rid="F3">3A–C</xref>
for the A, V, and A+V conditions respectively. Because the histograms have a slightly negative skew, both the mean ± one standard deviation and median (interquartile range) are included in each panel along with the number of participants in each condition. High
<italic>R</italic>
<sup>2</sup>
values indicate that power functions were good fits to the data and support the validity of the calculated power function fit parameters. The
<italic>R</italic>
<sup>2</sup>
values were generally lower without visual input. The mean
<italic>R</italic>
<sup>2</sup>
value for the A condition (
<italic>M</italic>
= 0.638,
<italic>SD</italic>
= 0.216) was significantly lower than the mean
<italic>R</italic>
<sup>2</sup>
value for both the V (
<italic>M</italic>
= 0.874,
<italic>SD</italic>
= 0.170) and A+V (
<italic>M</italic>
= 0.836,
<italic>SD</italic>
= 0.184) conditions, as demonstrated by independent-samples
<italic>t</italic>
-tests with Bonferroni correction [A vs. V:
<italic>t</italic>
<sub>(105)</sub>
= −6.085,
<italic>p</italic>
< 0.0003; A vs. A+V:
<italic>t</italic>
<sub>(105)</sub>
= −4.979,
<italic>p</italic>
< 0.0003; V vs. A+V conditions:
<italic>t</italic>
<sub>(88)</sub>
= 1.012,
<italic>p</italic>
> 0.945]. Overall, these results suggest that power functions were relatively good fits to the data, though slightly less so for the A condition.</p>
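The reported t(105) values are consistent with pooled-variance Student's t-tests on the n = 62 and n = 45 groups (df = n1 + n2 − 2 = 105); the fractional df reported later for the constants suggest Welch's correction where variances differed. A dependency-free sketch of the pooled statistic and the Bonferroni step (the p-values themselves would come from the t distribution, e.g., via SPSS or SciPy):

```python
import numpy as np

def pooled_t(x, y):
    """Student's independent-samples t statistic with pooled variance;
    df = len(x) + len(y) - 2 (105 for the n = 62 vs. n = 45 groups)."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    nx, ny = len(x), len(y)
    sp2 = ((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1)) / (nx + ny - 2)
    t = (x.mean() - y.mean()) / np.sqrt(sp2 * (1.0 / nx + 1.0 / ny))
    return t, nx + ny - 2

def bonferroni(p_values):
    """Bonferroni correction: multiply each p by the number of tests, cap at 1.
    Three pairwise condition comparisons give a factor of 3 here."""
    m = len(p_values)
    return [min(p * m, 1.0) for p in p_values]
```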
<fig id="F3" position="float">
<label>Figure 3</label>
<caption>
<p>
<bold>Distributions of
<italic>R</italic>
<sup>2</sup>
values from the power function fits for A (A), V (B), and A+V (C) conditions across participants</bold>
. Each panel includes the following summary statistics: mean,
<italic>M</italic>
± one standard deviation, median,
<italic>Mdn</italic>
(interquartile range), and number of participants,
<italic>n</italic>
, in each condition.</p>
</caption>
<graphic xlink:href="fpsyg-05-01097-g0003"></graphic>
</fig>
<p>Exponents from the power function fits provide information about the amount of non-linear compression in the distance judgments. Figures
<xref ref-type="fig" rid="F4">4A–C</xref>
display histograms of the exponent values across all participants for the A, V, and A+V conditions respectively. Each panel includes the mean ± one standard deviation, the median (and interquartile range), and the number of participants in each condition. Considerable inter-subject variability may be noted. Using independent-samples
<italic>t</italic>
-tests with Bonferroni correction, it was determined that the exponents in the A condition (
<italic>M</italic>
= 0.614,
<italic>SD</italic>
= 0.299) were significantly lower than the exponents for both the V condition (
<italic>M</italic>
= 0.916,
<italic>SD</italic>
= 0.267) and A+V condition (
<italic>M</italic>
= 0.874,
<italic>SD</italic>
= 0.271) indicating greater compression in the A condition [A vs. V:
<italic>t</italic>
<sub>(105)</sub>
= −5.398,
<italic>p</italic>
< 0.0003; A vs. A+V:
<italic>t</italic>
<sub>(105)</sub>
= −4.612,
<italic>p</italic>
< 0.0003; V vs. A+V conditions:
<italic>t</italic>
<sub>(88)</sub>
= 0.755,
<italic>p</italic>
> 0.999]. One-sample
<italic>t</italic>
-tests were also performed to determine whether the exponents in each condition were different from a value of one, which corresponds to no compression. Exponents in all three conditions were significantly less than one [A:
<italic>t</italic>
<sub>(61)</sub>
= −10.150,
<italic>p</italic>
< 0.0001; V:
<italic>t</italic>
<sub>(44)</sub>
= −2.082,
<italic>p</italic>
< 0.043; A+V:
<italic>t</italic>
<sub>(44)</sub>
= −3.115,
<italic>p</italic>
< 0.003], indicating non-linear compression in all conditions.</p>
<fig id="F4" position="float">
<label>Figure 4</label>
<caption>
<p>
<bold>Distributions of exponents (
<italic>a</italic>
) from power fits for all participants in A (A), V (B), and A+V (C) conditions</bold>
. Each panel includes the following summary statistics: mean,
<italic>M</italic>
± one standard deviation, median,
<italic>Mdn</italic>
(interquartile range), and number of participants,
<italic>n</italic>
, in each condition.</p>
</caption>
<graphic xlink:href="fpsyg-05-01097-g0004"></graphic>
</fig>
<p>Constant values from the fits provide information about the amount of linear compression/expansion of the function. Figures
<xref ref-type="fig" rid="F5">5A–C</xref>
display histograms of the distributions of constant values across participants in the A, V, and A+V conditions respectively. The histograms are positively skewed, so both the mean ± one standard deviation and median (interquartile range) are included in each panel. Each panel also includes the number of participants in each condition. As in Figure
<xref ref-type="fig" rid="F4">4</xref>
, considerable inter-subject variability may be noted. Based on independent
<italic>t</italic>
-tests with Bonferroni correction, the constants in the A condition (
<italic>M</italic>
= 2.217,
<italic>SD</italic>
= 1.992) were significantly greater than constants in either the V (
<italic>M</italic>
= 1.281,
<italic>SD</italic>
= 0.801) or A+V conditions (
<italic>M</italic>
= 1.383,
<italic>SD</italic>
= 0.912). Overall, these results suggest that near distances are more overestimated in the A condition than in the V or A+V condition. The V and A+V conditions were not significantly different from each other [A vs. V:
<italic>t</italic>
<sub>(85.359)</sub>
= 3.343,
<italic>p</italic>
< 0.003; A vs. A+V:
<italic>t</italic>
<sub>(90.815)</sub>
= 2.904,
<italic>p</italic>
< 0.015; V vs. A+V:
<italic>t</italic>
<sub>(88)</sub>
= −0.559,
<italic>p</italic>
> 0.999]. One-sample
<italic>t</italic>
-tests confirmed that constants in all three conditions were greater than one [A:
<italic>t</italic>
<sub>(61)</sub>
= 4.810,
<italic>p</italic>
< 0.0001; V:
<italic>t</italic>
<sub>(44)</sub>
= 2.356,
<italic>p</italic>
< 0.023; A+V:
<italic>t</italic>
<sub>(44)</sub>
= 2.816,
<italic>p</italic>
< 0.007], indicating overestimation for distances less than 1 m in all conditions.</p>
<fig id="F5" position="float">
<label>Figure 5</label>
<caption>
<p>
<bold>Distributions of constants (
<italic>k</italic>
) from power fits for all participants in the A (A), V (B), and A+V (C) conditions</bold>
. Each panel includes the following summary statistics: mean,
<italic>M</italic>
± one standard deviation, median,
<italic>Mdn</italic>
(interquartile range), and number of participants,
<italic>n</italic>
, in each condition.</p>
</caption>
<graphic xlink:href="fpsyg-05-01097-g0005"></graphic>
</fig>
<p>In order to assess the intra-subject variability of distance judgments, residuals from the power function fits for each participant were analyzed for each condition. Such analyses allow the judgment variability explained by the power function fit to be removed from the data. What remains is an estimate of judgment error independent of the power-law relationship. Figures
<xref ref-type="fig" rid="F6">6A–C</xref>
display the log-transformed residuals plotted as a function of target distance in the A, V, and A+V conditions respectively for a representative participant (code QAD, see Figure
<xref ref-type="fig" rid="F2">2</xref>
). The RMS error listed in each panel is a measure of the average deviation of the responses from the best-fitting power function, computed as the square root of the mean squared deviation of the log-transformed residuals from zero. Although Figure
<xref ref-type="fig" rid="F6">6B</xref>
shows the log-transformed residuals decreasing in variability with increasing distance, this pattern is not generally representative of all participants in the study.</p>
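The RMS error metric can be stated compactly: it is the root of the mean squared log-transformed residual. Base-10 logarithms are assumed in this sketch (the text does not specify the base; the choice only rescales the metric):

```python
import numpy as np

def rms_log_error(judged_m, fitted_m):
    """RMS error: square root of the mean squared deviation of the
    log-transformed residuals (log judged - log fitted) from zero."""
    resid = np.log10(judged_m) - np.log10(fitted_m)
    return float(np.sqrt(np.mean(resid ** 2)))
```

Because the residuals are logarithmic, the metric captures ratio error: an RMS of 0.3, for example, corresponds to judgments that typically miss the fitted function by about a factor of two in either direction.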
<fig id="F6" position="float">
<label>Figure 6</label>
<caption>
<p>
<bold>Log-transformed residuals from the power function fit for a single representative participant (code QAD, see Figure
<xref ref-type="fig" rid="F2">2</xref>
) for the A (A), V (B), and A+V (C) conditions</bold>
. RMS error across all distances is indicated in each panel. Small random jitter was added to the target distances on the x-axis for visualization purposes.</p>
</caption>
<graphic xlink:href="fpsyg-05-01097-g0006"></graphic>
</fig>
<p>Log-transformed residuals pooled across all participants in the study are shown in Figures
<xref ref-type="fig" rid="F7">7A–C</xref>
. These residuals represent error remaining after power functions were fit to the individual subject data. Overall, the spread of the residuals was relatively homogeneous as a function of source distance, which indicates that judgment error was relatively independent of source distance. This was the rationale for our residual RMS error metric, which averages over all source distances. We also examined the distributions of the log-transformed residuals across all target distances. Figures
<xref ref-type="fig" rid="F8">8A–C</xref>
display normal-probability plots of the log-transformed residuals collapsed across distance for the A, V, and A+V conditions respectively. The dashed diagonal line in each panel indicates a normal distribution. In all three conditions, it may be observed that the distributions of the log-transformed residuals are very close to normal over a large range of probability values (0.025 and 0.975 are indicated by the dotted lines). Although very extreme values (
<italic>p</italic>
< 0.025 or
<italic>p</italic>
> 0.975) do appear to deviate somewhat from normality, these distribution results are overall consistent with the notion that the underlying internal representation of distance and distance errors are logarithmically spaced (Zahorik,
<xref rid="B27" ref-type="bibr">2002b</xref>
).</p>
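A normal-probability plot of this kind pairs the sorted residuals with standard-normal quantiles of their plotting positions. A sketch using the midpoint plotting-position rule (i − 0.5)/n, one common convention (the exact rule used for the figures is not stated):

```python
from statistics import NormalDist

def normal_probability_points(residuals):
    """Coordinates for a normal-probability plot: sorted residuals
    paired with standard-normal quantiles of the plotting positions
    (i - 0.5) / n. Near-normal data falls on a straight line."""
    nd = NormalDist()
    xs = sorted(residuals)
    n = len(xs)
    qs = [nd.inv_cdf((i + 0.5) / n) for i in range(n)]
    return qs, xs
```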
<fig id="F7" position="float">
<label>Figure 7</label>
<caption>
<p>
<bold>Same as Figure
<xref ref-type="fig" rid="F6">6</xref>
, except results from all participants are shown</bold>
. Each panel includes the number of participants per condition. Note that the spread of the residuals is relatively homogeneous as a function of distance.</p>
</caption>
<graphic xlink:href="fpsyg-05-01097-g0007"></graphic>
</fig>
<fig id="F8" position="float">
<label>Figure 8</label>
<caption>
<p>
<bold>Normal-probability plots of the log-transformed residuals (all participants) for the A (A), V (B), and A+V (C) conditions</bold>
. The dashed diagonal line in each panel indicates normally distributed data. Probability values of 0.025 and 0.975 are shown for reference.</p>
</caption>
<graphic xlink:href="fpsyg-05-01097-g0008"></graphic>
</fig>
<p>Distributions of RMS error in the A, V and A+V conditions are displayed in Figures
<xref ref-type="fig" rid="F9">9A–C</xref>
respectively. Each panel includes the following summary statistics: mean ± one standard deviation, median (interquartile range), and number of participants in each condition. The average RMS error for the A (
<italic>M</italic>
= 0.226,
<italic>SD</italic>
= 0.111) condition was significantly greater than both the V (
<italic>M</italic>
= 0.152,
<italic>SD</italic>
= 0.108) and A+V (
<italic>M</italic>
= 0.163,
<italic>SD</italic>
= 0.086) conditions. The V and A+V conditions were not significantly different from each other based on independent samples
<italic>t</italic>
-tests with Bonferroni correction [A vs. V:
<italic>t</italic>
<sub>(105)</sub>
= 3.440,
<italic>p</italic>
< 0.003; A vs. A+V:
<italic>t</italic>
<sub>(105)</sub>
= 3.190,
<italic>p</italic>
< 0.006; V vs. A+V:
<italic>t</italic>
<sub>(88)</sub>
= −0.523,
<italic>p</italic>
> 0.999]. These results indicate that when visual stimuli were present, the distance estimates within individual subjects were less variable.</p>
<fig id="F9" position="float">
<label>Figure 9</label>
<caption>
<p>
<bold>Distributions of RMS errors from the power function fits from individual participants in the A (A), V (B), and A+V (C) conditions</bold>
. Each panel includes the following summary statistics: mean,
<italic>M</italic>
± one standard deviation, median,
<italic>Mdn</italic>
(interquartile range), and number of participants,
<italic>n</italic>
, in each condition.</p>
</caption>
<graphic xlink:href="fpsyg-05-01097-g0009"></graphic>
</fig>
<p>To evaluate the sensitivity of the power function fit procedures to the number of judgments available, fit parameters and
<italic>R</italic>
<sup>2</sup>
values were compared between participants who performed 10 judgments per distance (
<italic>a</italic>
:
<italic>M</italic>
= 0.649,
<italic>SD</italic>
= 0.259;
<italic>k</italic>
:
<italic>M</italic>
= 2.267,
<italic>SD</italic>
= 2.098;
<italic>R</italic>
<sup>2</sup>
:
<italic>M</italic>
= 0.650,
<italic>SD</italic>
= 0.208) and a subset of participants who performed 30 judgments per distance (
<italic>a</italic>
:
<italic>M</italic>
= 0.588,
<italic>SD</italic>
= 0.274;
<italic>k</italic>
:
<italic>M</italic>
= 2.130,
<italic>SD</italic>
= 1.694;
<italic>R</italic>
<sup>2</sup>
:
<italic>M</italic>
= 0.635,
<italic>SD</italic>
= 0.201). Independent
<italic>t</italic>
-tests found no statistically significant difference between the two groups for either fit parameter or
<italic>R</italic>
<sup>2</sup>
[
<italic>a</italic>
:
<italic>t</italic>
<sub>(60)</sub>
= 0.802,
<italic>p</italic>
> 0.426;
<italic>k</italic>
:
<italic>t</italic>
<sub>(60)</sub>
= 0.240,
<italic>p</italic>
> 0.811;
<italic>R</italic>
<sup>2</sup>
:
<italic>t</italic>
<sub>(60)</sub>
= 0.246,
<italic>p</italic>
> 0.806]. These results indicate that 10 judgments per distance is sufficient to reliably estimate the distance psychophysical function.</p>
<p>In order to assess reliability of distance judgments across the three stimulus conditions, correlations between power function fit parameters and statistics were computed.
<italic>R</italic>
<sup>2</sup>
values in all three conditions were positively correlated [A and V:
<italic>r</italic>
<sub>(43)</sub>
= 0.660,
<italic>p</italic>
< 0.001; A and A+V:
<italic>r</italic>
<sub>(43)</sub>
= 0.674,
<italic>p</italic>
< 0.001; V and A+V:
<italic>r</italic>
<sub>(43)</sub>
= 0.922,
<italic>p</italic>
< 0.001]. This indicates that if a participant's power function fit was good in one condition then it was likely also a good fit in the remaining conditions. Exponents between all three conditions were also significantly positively correlated [A and V:
<italic>r</italic>
<sub>(43)</sub>
= 0.537,
<italic>p</italic>
< 0.001; A and A+V:
<italic>r</italic>
<sub>(43)</sub>
= 0.557,
<italic>p</italic>
< 0.001; V and A+V:
<italic>r</italic>
<sub>(43)</sub>
= 0.896,
<italic>p</italic>
< 0.001]. This indicates that participants with greater amounts of power-function compression, for example, display this trait consistently across stimulus conditions. Similar positive correlations were also observed for the fitted constant values [A and V:
<italic>r</italic>
<sub>(43)</sub>
= 0.422,
<italic>p</italic>
< 0.004; A and A+V:
<italic>r</italic>
<sub>(43)</sub>
= 0.343,
<italic>p</italic>
< 0.021; V and A+V:
<italic>r</italic>
<sub>(43)</sub>
= 0.885,
<italic>p</italic>
< 0.001].</p>
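The reported r(43) values correspond to Pearson correlations over the n = 45 participants tested in all three conditions, with df = n − 2. A minimal sketch:

```python
import numpy as np

def cross_condition_r(values_cond1, values_cond2):
    """Pearson correlation of one fit statistic (e.g., the exponent a)
    between two conditions for the same participants; df = n - 2,
    matching the reported r(43) with n = 45."""
    r = float(np.corrcoef(values_cond1, values_cond2)[0, 1])
    return r, len(values_cond1) - 2
```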
</sec>
<sec sec-type="discussion" id="s4">
<title>Discussion</title>
<p>Overall, the results from this study indicate that the presence of visual information improves the accuracy of distance judgments by making the relationship between target distance and judged distance more linear and reducing both inter- and intra-subject variability. These conclusions are based on the results of power function fits to the data in each of the three presentation conditions (A, V, A+V). The decision to fit our data with power functions was based on past reviews of both ADP (Zahorik et al.,
<xref rid="B28" ref-type="bibr">2005</xref>
) and VDP (Da Silva,
<xref rid="B6" ref-type="bibr">1985</xref>
; Sedgwick,
<xref rid="B20" ref-type="bibr">1986</xref>
) that used similar methods. Zahorik et al. (
<xref rid="B28" ref-type="bibr">2005</xref>
) fit power functions to 84 datasets from 21 past ADP articles. Da Silva (
<xref rid="B6" ref-type="bibr">1985</xref>
) summarized power function exponents for various visual distance perception studies. Table
<xref ref-type="table" rid="T1">1</xref>
compares
<italic>R</italic>
<sup>2</sup>
values and fit parameters (mean ± one standard deviation) from these reviews of past ADP (Zahorik et al.,
<xref rid="B28" ref-type="bibr">2005</xref>
) and VDP studies (Da Silva,
<xref rid="B6" ref-type="bibr">1985</xref>
), with those from the current study. The summary of VDP exponents only includes studies in which full-cue conditions were used.
<italic>R</italic>
<sup>2</sup>
values across all conditions and past ADP studies were generally high, indicating that power functions fit both past and present data well. Exponent and constant parameters from the fitted functions, which provide information about the amount of non-linear and linear compression/expansion of the functions, were, in most cases, similar between past and present studies. The mean exponent from the Zahorik et al. (
<xref rid="B28" ref-type="bibr">2005</xref>
) review was similar (within one standard deviation) to that observed in our A condition. Likewise for the V and A+V conditions, the mean exponents were similar (within one standard deviation) to the mean exponent resulting from Da Silva's (
<xref rid="B6" ref-type="bibr">1985</xref>
) summary. The constant values for the A condition were somewhat higher than those reported by Zahorik et al. (
<xref rid="B28" ref-type="bibr">2005</xref>
). Evaluation of these differences is complicated by the fact that the variability of the constant values in the current investigation is much greater. This may be due to between-subject variability in the use of a response scale that lacked a fixed anchor point. Because the Zahorik et al. (
<xref rid="B28" ref-type="bibr">2005</xref>
) dataset was based on average results from different studies, such issues related to individual-subject variability were minimized, which may also account for the somewhat higher average
<italic>R</italic>
<sup>2</sup>
values they reported. Despite differences in sources of variability between studies, the fit parameters and
<italic>R</italic>
<sup>2</sup>
values are all in relative agreement. All are within one standard deviation of each other.</p>
<table-wrap id="T1" position="float">
<label>Table 1</label>
<caption>
<p>
<bold>Summary of results from past reviews of auditory and visual distance perception studies along with results from the current study</bold>
.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="left" rowspan="1" colspan="1">
<bold>Data source</bold>
</th>
<th align="center" rowspan="1" colspan="1">
<bold>A Condition</bold>
</th>
<th align="center" rowspan="1" colspan="1">
<bold>V Condition</bold>
</th>
<th align="center" rowspan="1" colspan="1">
<bold>A+V Condition</bold>
</th>
<th align="center" rowspan="1" colspan="1">
<bold>(Zahorik et al.,
<xref rid="B28" ref-type="bibr">2005</xref>
)—Audition</bold>
</th>
<th align="center" rowspan="1" colspan="1">
<bold>(Da Silva,
<xref rid="B6" ref-type="bibr">1985</xref>
)—Vision</bold>
</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" rowspan="1" colspan="1">
<italic>a</italic>
</td>
<td align="center" rowspan="1" colspan="1">0.61 ± 0.30</td>
<td align="center" rowspan="1" colspan="1">0.92 ± 0.27</td>
<td align="center" rowspan="1" colspan="1">0.87 ± 0.27</td>
<td align="center" rowspan="1" colspan="1">0.54 ± 0.21</td>
<td align="center" rowspan="1" colspan="1">0.99 ± 0.13</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<italic>k</italic>
</td>
<td align="center" rowspan="1" colspan="1">2.22 ± 1.99</td>
<td align="center" rowspan="1" colspan="1">1.28 ± 0.80</td>
<td align="center" rowspan="1" colspan="1">1.38 ± 0.91</td>
<td align="center" rowspan="1" colspan="1">1.32 ± 0.75</td>
<td rowspan="1" colspan="1"></td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">
<italic>R</italic>
<sup>2</sup>
</td>
<td align="center" rowspan="1" colspan="1">0.64 ± 0.22</td>
<td align="center" rowspan="1" colspan="1">0.87 ± 0.17</td>
<td align="center" rowspan="1" colspan="1">0.84 ± 0.18</td>
<td align="center" rowspan="1" colspan="1">0.91 ± 0.13</td>
<td rowspan="1" colspan="1"></td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<p>Power function fit parameters (a and k) and R
<sup>2</sup>
(mean ± one standard deviation) are included from each study, except Da Silva (
<xref rid="B6" ref-type="bibr">1985</xref>
), which provided only a summary of exponent (a) values. Results from Zahorik et al. (
<xref rid="B28" ref-type="bibr">2005</xref>
) summarize data from 21 auditory studies. Results from Da Silva (
<xref rid="B6" ref-type="bibr">1985</xref>
) summarize data from 28 vision studies with full depth cues.</p>
</table-wrap-foot>
</table-wrap>
<p>Another way to evaluate judgment biases beyond the analysis of the power function fit parameters is to determine the crossover point at which overestimation of close source distances switches to underestimation of farther source distances. This crossover point is the distance at which no bias occurs. Increasing or decreasing either fit parameter moves the crossover point farther or closer, respectively. Research in vision suggests that the crossover point may be related to a specific distance tendency (SDT; Gogel,
<xref rid="B9" ref-type="bibr">1969</xref>
), which is the perceived distance of an object reported by participants under conditions with minimal distance cues. Mershon and King (
<xref rid="B16" ref-type="bibr">1975</xref>
) suggested that SDT can also be applied to ADP, given demonstrated tendencies for sounds to be localized toward the crossover point. Specifically, target distances located beyond the crossover point are perceived as closer, and therefore nearer to the crossover point. Conversely, sound sources closer than the crossover point are localized farther away, which is again nearer to the crossover point. Mershon and King (
<xref rid="B16" ref-type="bibr">1975</xref>
) also hypothesized that the SDT for auditory sources is strongly influenced by the reverberation level of a room. Hence, rooms with similar reverberation characteristics should produce similar SDTs.</p>
<p>In the current study, the crossover point for the A condition was approximately 3.23 m, based on the median exponent and constant parameters from the power function fits. This crossover point is greater than that reported for the Zahorik et al. (
<xref rid="B28" ref-type="bibr">2005</xref>
) dataset, which was approximately 1.9 m. Because the exponent values were similar in the two studies, it may be concluded that this crossover point discrepancy is caused primarily by the difference in the power function constant parameters. Following Mershon and King's (
<xref rid="B16" ref-type="bibr">1975</xref>
) hypothesis that SDT is related to reverberation level, it seems plausible that these differences in constant values might be linked to differences in the acoustical properties of the rooms used in the two studies. Although the acoustic environments across the data sets analyzed in Zahorik et al. (
<xref rid="B28" ref-type="bibr">2005</xref>
) varied widely, it is likely that the concert hall environment used in the current study had greater amounts of reverberation than the average room in the Zahorik et al. (
<xref rid="B28" ref-type="bibr">2005</xref>
) dataset. Greater amounts of reverberation are known to produce greater distance judgments (Mershon and King,
<xref rid="B16" ref-type="bibr">1975</xref>
), and therefore perhaps greater constant parameters in the power function fits, which in turn produce a more distant SDT. Such conclusions need to be approached cautiously, however, given the large individual variability observed in the constant values, as previously discussed. For VDP, Gogel (
<xref rid="B9" ref-type="bibr">1969</xref>
) found that visual context was necessary to localize visual targets away from the SDT. Reverberation level in ADP may provide the context necessary for sound sources to appear displaced from the SDT.</p>
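<p>The relation between a power-function fit and the crossover point can be made concrete with a short numerical sketch. Assuming judged distance follows a power function of physical distance, the crossover point is the distance at which judgments equal the physical distance. The parameter values below are hypothetical, chosen only so the crossover lands near the ~3.23 m reported for the A condition; they are not the fitted values from this study.</p>

```python
# Illustrative sketch (not the authors' code): the crossover point implied
# by a power-function fit  d_hat = k * d**a.  Judgments equal physical
# distance where  k * d**a = d,  i.e.  d* = k**(1 / (1 - a))  for a != 1.
# k = 1.8 and a = 0.5 are hypothetical values used for illustration only.

def perceived_distance(d, k, a):
    """Power-function model of judged distance for physical distance d (m)."""
    return k * d ** a

def crossover_point(k, a):
    """Distance (m) at which judged distance equals physical distance."""
    return k ** (1.0 / (1.0 - a))

if __name__ == "__main__":
    k, a = 1.8, 0.5                        # hypothetical constant and exponent
    print(f"crossover: {crossover_point(k, a):.2f} m")   # 1.8**2 = 3.24 m
    # Targets beyond the crossover are judged closer (toward the crossover)...
    print(perceived_distance(9.0, k, a))   # 5.4 < 9.0
    # ...and targets nearer than the crossover are judged farther away.
    print(perceived_distance(1.0, k, a))   # 1.8 > 1.0
```

<p>With an exponent below 1, the model compresses far distances and expands near ones, reproducing the localization-toward-the-crossover pattern described above.</p>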
<p>We take the observation that distance judgment biases in the A+V condition were much lower than in the A condition, and nearly identical to those in the V condition, as evidence of a degree of visual capture in the distance dimension. This result closely parallels the well-known visual capture effects for discrepancies in the angular separation between auditory and visual targets, also known as the “Ventriloquist Effect.” It has been demonstrated that a visual stimulus can bias localization of an auditory sound source when the two are as much as 30° apart in the horizontal plane (Jack and Thurlow,
<xref rid="B11" ref-type="bibr">1973</xref>
) and 55° in the vertical plane (Thurlow and Jack,
<xref rid="B21" ref-type="bibr">1973</xref>
). This is a large effect: more than an order of magnitude larger than the minimum audible angle detectable between two sound sources separated in horizontal angle, which is between 1° and 4° on the median plane (Mills,
<xref rid="B18" ref-type="bibr">1958</xref>
). Strong visual capture effects have been previously observed in the distance dimension (“The Proximity-Image Effect”) when large discrepancies exist between the auditory and visual targets (Mershon et al.,
<xref rid="B15" ref-type="bibr">1980</xref>
) and particularly when auditory distance information is impoverished (Gardner,
<xref rid="B8" ref-type="bibr">1968</xref>
). The capture effects observed here are clearly much more subtle.</p>
<p>On the other hand, there are aspects of our results from the A+V condition that are not entirely consistent with visual capture. Research on multisensory perception emphasizes the optimal integration of multisensory information based on the variances of the two modalities (Ernst and Banks,
<xref rid="B7" ref-type="bibr">2002</xref>
; Alais and Burr,
<xref rid="B1" ref-type="bibr">2004</xref>
). According to this optimal integration model, the variance of the combined bimodal estimate should be lower than that of either modality alone. The model also stipulates that the modalities are weighted by the inverse of their variance, so the modality with lower variance is weighted more heavily at the integration stage of the perceptual process. For example, vision should be heavily weighted in a spatial task; however, if noise is added to the visual stimulus, audition becomes more heavily weighted. Therefore, if optimal integration occurred in our study, the A+V condition should have had lower variance than either the A or V condition alone. This was not observed, which is surprising because even if vision in the A+V condition were weighted 100% by the sensory system, optimal integration theory would still predict lower variance in the A+V condition. It is possible, however, that this apparent lack of optimal integration relates to the response method used in our study. Magnitude estimation methods are inherently noisier than the discrimination methods used in previous studies that have demonstrated optimal integration (Ernst and Banks,
<xref rid="B7" ref-type="bibr">2002</xref>
; Alais and Burr,
<xref rid="B1" ref-type="bibr">2004</xref>
). It is therefore conceivable that the perceptual noise in the A+V condition was in fact lower than in either the A or V condition alone, consistent with optimal integration, but that the response noise was simply too great for this reduction in variance to be observed. Nevertheless, the measurement of variability is of interest in itself, because it has received little systematic attention in distance judgment studies.</p>
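<p>The optimal integration model referenced above can be summarized in a few lines. This is a generic sketch of maximum-likelihood cue combination in the style of Ernst and Banks (2002), not the authors' analysis code, and the estimate and variance values are hypothetical.</p>

```python
# Maximum-likelihood ("optimal") integration of two independent cues: each
# cue is weighted by the inverse of its variance, and the combined estimate
# has lower variance than either cue alone.  Input values are hypothetical.

def optimal_integration(est_a, var_a, est_v, var_v):
    """Return (combined estimate, combined variance) for auditory and visual cues."""
    w_v = (1.0 / var_v) / (1.0 / var_v + 1.0 / var_a)   # weight on vision
    est = w_v * est_v + (1.0 - w_v) * est_a
    var = (var_a * var_v) / (var_a + var_v)              # < min(var_a, var_v)
    return est, var

if __name__ == "__main__":
    # Auditory judgment of 4.0 m (high variance) combined with a visual
    # judgment of 3.0 m (low variance): the result is pulled toward vision.
    est, var = optimal_integration(est_a=4.0, var_a=1.0, est_v=3.0, var_v=0.25)
    print(est, var)   # 3.2, 0.2
```

<p>Because the combined variance equals the product of the unimodal variances divided by their sum, it is necessarily smaller than either one, which is why the model predicts reduced variance in the A+V condition.</p>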
<p>Finally, our measurements of distance judgment variability provide additional and important insights into ADP and VDP both within and across individual participants. The inherent variability in distance judgments, particularly in the auditory domain, has not been well quantified prior to this study. In general, distance judgment variability across participants was found to be reduced when visual cues were present, a result that is consistent with past work that used similar response and analysis methods for apparent distance judgments (Zahorik,
<xref rid="B25" ref-type="bibr">2001</xref>
). This result is inconsistent, however, with recent work by Calcagno et al. (
<xref rid="B4" ref-type="bibr">2012</xref>
), which shows essentially constant judgment variability regardless of whether visual target information is provided to the listener. This discrepancy could be due to differences in the type of visual information available. In the Calcagno et al. study, the visual information (2–4 LEDs in a dark field) was much more limited than that available in either the present study or the Zahorik (
<xref rid="B25" ref-type="bibr">2001</xref>
) study, which provided multiple depth cues to the target locations. It is also worth noting that there were differences in the number of responses used to summarize response variability (24 judgments/distance in Calcagno et al.,
<xref rid="B4" ref-type="bibr">2012</xref>
) vs. 959 judgments/distance in this study), as well as the analysis strategies used to summarize variability (variability of raw judgments in Calcagno et al.,
<xref rid="B4" ref-type="bibr">2012</xref>
vs. variability of log-transformed judgments in this study and in Zahorik,
<xref rid="B25" ref-type="bibr">2001</xref>
). We also show that when the judgment variability is expressed as logarithmic deviation from a best-fitting power function for individual subjects, the distributions of this deviation (error) measure are approximately normal. This, in conjunction with the fact that power functions are generally good fits to the data, suggests that the perceived auditory/visual space surrounding the subject has a logarithmically spaced topology. This conclusion is consistent with past work related to ADP (Zahorik,
<xref rid="B27" ref-type="bibr">2002b</xref>
), as well as visual depth work that demonstrates perceptual foreshortening of faraway objects (Wagner,
<xref rid="B23" ref-type="bibr">1985</xref>
; Loomis et al.,
<xref rid="B12" ref-type="bibr">2002</xref>
).</p>
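<p>The analysis described above, fitting a power function in log-log coordinates and expressing each judgment's error as its logarithmic deviation from the fit, can be sketched as follows. This is an illustration with synthetic, noise-free data, not the authors' code.</p>

```python
# Fit a power function  j = k * d**a  by ordinary least squares in log-log
# space, then express each judgment's error as its log residual from the fit.
# The distances and judgments below are synthetic, for illustration only.
import math

def fit_power(distances, judgments):
    """OLS fit of log(j) = log(k) + a*log(d); returns (k, a)."""
    xs = [math.log(d) for d in distances]
    ys = [math.log(j) for j in judgments]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return math.exp(my - a * mx), a

def log_residuals(distances, judgments, k, a):
    """Logarithmic deviation of each judgment from the fitted power function."""
    return [math.log(j) - (math.log(k) + a * math.log(d))
            for d, j in zip(distances, judgments)]

if __name__ == "__main__":
    ds = [0.3, 1.0, 3.0, 9.8]               # physical distances (m)
    js = [1.8 * d ** 0.5 for d in ds]       # noise-free synthetic judgments
    k, a = fit_power(ds, js)
    print(k, a)                             # recovers the generating k and a
    print(log_residuals(ds, js, k, a))      # all ~0 for noise-free data
```

<p>In real data the residuals are nonzero, and an approximately normal distribution of these log residuals is the signature of logarithmically organized error described above.</p>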
</sec>
<sec sec-type="conclusions" id="s5">
<title>Conclusions</title>
<p>Results from this study indicate that: (1) Distance estimates in all conditions (A, V, A+V) were well-explained by power-function fits; (2) The presence of visual targets increased distance judgment accuracy in the V and A+V conditions compared to the A condition; (3) The A condition had greater unexplained response variance than either the V or A+V condition; (4) The unexplained response variance was approximately normally distributed in logarithmic space for all three conditions. These conclusions are consistent with the notion that visual depth information, when available to the participant, dominates the auditory percept of distance. They are also consistent with the idea that aspects of distance perception in both perceived auditory and perceived visual space appear to be organized logarithmically.</p>
<sec>
<title>Conflict of interest statement</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
</sec>
</body>
<back>
<ack>
<p>We thank Gina Collecchia, Finesse Moreno-Rivera, and Noah Jacobs for their assistance with data collection. Work supported by AFOSR / KY DEPSCoR (FA9550-08-1-0234) and NIH-NEI (R21 EY023767).</p>
</ack>
<ref-list>
<title>References</title>
<ref id="B1">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Alais</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Burr</surname>
<given-names>D.</given-names>
</name>
</person-group>
(
<year>2004</year>
).
<article-title>The ventriloquist effect results from near-optimal bimodal integration</article-title>
.
<source>Curr. Biol</source>
.
<volume>14</volume>
,
<fpage>257</fpage>
<lpage>262</lpage>
<pub-id pub-id-type="doi">10.1016/j.cub.2004.01.029</pub-id>
<pub-id pub-id-type="pmid">14761661</pub-id>
</mixed-citation>
</ref>
<ref id="B2">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ashmead</surname>
<given-names>D. H.</given-names>
</name>
<name>
<surname>Davis</surname>
<given-names>D. L.</given-names>
</name>
<name>
<surname>Northington</surname>
<given-names>A.</given-names>
</name>
</person-group>
(
<year>1995</year>
).
<article-title>Contribution of listeners' approaching motion to auditory distance perception</article-title>
.
<source>J. Exp. Psychol. Hum. Percept. Perform</source>
.
<volume>21</volume>
,
<fpage>239</fpage>
<lpage>256</lpage>
<pub-id pub-id-type="doi">10.1037/0096-1523.21.2.239</pub-id>
<pub-id pub-id-type="pmid">7714470</pub-id>
</mixed-citation>
</ref>
<ref id="B3">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Blauert</surname>
<given-names>J.</given-names>
</name>
</person-group>
(
<year>1997</year>
).
<source>Spatial Hearing: The Psychophysics of Human Sound Localization</source>
.
<publisher-loc>Cambridge, MA</publisher-loc>
:
<publisher-name>MIT press</publisher-name>
</mixed-citation>
</ref>
<ref id="B4">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Calcagno</surname>
<given-names>E. R.</given-names>
</name>
<name>
<surname>Abregu</surname>
<given-names>E. L.</given-names>
</name>
<name>
<surname>Manuel</surname>
<given-names>C. E.</given-names>
</name>
</person-group>
(
<year>2012</year>
).
<article-title>The role of vision in auditory distance perception</article-title>
.
<source>Perception</source>
<volume>41</volume>
,
<fpage>175</fpage>
<lpage>192</lpage>
<pub-id pub-id-type="doi">10.1068/p7153</pub-id>
<pub-id pub-id-type="pmid">22670346</pub-id>
</mixed-citation>
</ref>
<ref id="B5">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Coleman</surname>
<given-names>P. D.</given-names>
</name>
</person-group>
(
<year>1968</year>
).
<article-title>Dual role of frequency spectrum in determination of auditory distance</article-title>
.
<source>J. Acoust. Soc. Am</source>
.
<volume>44</volume>
,
<fpage>631</fpage>
<lpage>632</lpage>
<pub-id pub-id-type="doi">10.1121/1.1911132</pub-id>
<pub-id pub-id-type="pmid">5665535</pub-id>
</mixed-citation>
</ref>
<ref id="B6">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Da Silva</surname>
<given-names>J. A.</given-names>
</name>
</person-group>
(
<year>1985</year>
).
<article-title>Scales for perceived egocentric distance in a large open field: comparison of three psychophysical methods</article-title>
.
<source>Am. J. Psychol</source>
.
<volume>98</volume>
,
<fpage>119</fpage>
<lpage>144</lpage>
<pub-id pub-id-type="doi">10.2307/1422771</pub-id>
<pub-id pub-id-type="pmid">4003616</pub-id>
</mixed-citation>
</ref>
<ref id="B7">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ernst</surname>
<given-names>M. O.</given-names>
</name>
<name>
<surname>Banks</surname>
<given-names>M. S.</given-names>
</name>
</person-group>
(
<year>2002</year>
).
<article-title>Humans integrate visual and haptic information in a statistically optimal fashion</article-title>
.
<source>Nature</source>
<volume>415</volume>
,
<fpage>429</fpage>
<lpage>433</lpage>
<pub-id pub-id-type="doi">10.1038/415429a</pub-id>
<pub-id pub-id-type="pmid">11807554</pub-id>
</mixed-citation>
</ref>
<ref id="B8">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Gardner</surname>
<given-names>M. B.</given-names>
</name>
</person-group>
(
<year>1968</year>
).
<article-title>Proximity image effect in sound localization</article-title>
.
<source>J. Acoust. Soc. Am</source>
.
<volume>43</volume>
,
<fpage>163</fpage>
<pub-id pub-id-type="doi">10.1121/1.1910747</pub-id>
<pub-id pub-id-type="pmid">5636394</pub-id>
</mixed-citation>
</ref>
<ref id="B9">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Gogel</surname>
<given-names>W. C.</given-names>
</name>
</person-group>
(
<year>1969</year>
).
<article-title>The sensing of retinal size</article-title>
.
<source>Vision Res</source>
.
<volume>9</volume>
,
<fpage>1079</fpage>
<lpage>1094</lpage>
<pub-id pub-id-type="doi">10.1016/0042-6989(69)90049-2</pub-id>
<pub-id pub-id-type="pmid">5350376</pub-id>
</mixed-citation>
</ref>
<ref id="B10">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<collab>ISO-3382.</collab>
</person-group>
(
<year>1997</year>
).
<source>Acoustics - Measurement of the Reverberation Time of Rooms with Reference to Other Acoustical Parameters</source>
.
<publisher-loc>Geneva</publisher-loc>
:
<publisher-name>International Standards Organization</publisher-name>
</mixed-citation>
</ref>
<ref id="B11">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Jack</surname>
<given-names>C. E.</given-names>
</name>
<name>
<surname>Thurlow</surname>
<given-names>W. R.</given-names>
</name>
</person-group>
(
<year>1973</year>
).
<article-title>Effects of degree of visual association and angle of displacement on the “ventriloquism” effect</article-title>
.
<source>Percept. Mot. Skills</source>
<volume>37</volume>
,
<fpage>967</fpage>
<lpage>979</lpage>
<pub-id pub-id-type="doi">10.2466/pms.1973.37.3.967</pub-id>
<pub-id pub-id-type="pmid">4764534</pub-id>
</mixed-citation>
</ref>
<ref id="B12">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Loomis</surname>
<given-names>J. M.</given-names>
</name>
<name>
<surname>Philbeck</surname>
<given-names>J. W.</given-names>
</name>
<name>
<surname>Zahorik</surname>
<given-names>P.</given-names>
</name>
</person-group>
(
<year>2002</year>
).
<article-title>Dissociation between location and shape in visual space</article-title>
.
<source>J. Exp. Psychol. Hum. Percept. Perform</source>
.
<volume>28</volume>
,
<fpage>1202</fpage>
<lpage>1212</lpage>
<pub-id pub-id-type="doi">10.1037/0096-1523.28.5.1202</pub-id>
<pub-id pub-id-type="pmid">12421065</pub-id>
</mixed-citation>
</ref>
<ref id="B13">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Mershon</surname>
<given-names>D. H.</given-names>
</name>
<name>
<surname>Ballenger</surname>
<given-names>W. L.</given-names>
</name>
<name>
<surname>Little</surname>
<given-names>A. D.</given-names>
</name>
<name>
<surname>McMurtry</surname>
<given-names>P. L.</given-names>
</name>
<name>
<surname>Buchanan</surname>
<given-names>J. L.</given-names>
</name>
</person-group>
(
<year>1989</year>
).
<article-title>Effects of room reflectance and background noise on perceived auditory distance</article-title>
.
<source>Perception</source>
<volume>18</volume>
,
<fpage>403</fpage>
<lpage>416</lpage>
<pub-id pub-id-type="doi">10.1068/p180403</pub-id>
<pub-id pub-id-type="pmid">2798023</pub-id>
</mixed-citation>
</ref>
<ref id="B14">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Mershon</surname>
<given-names>D. H.</given-names>
</name>
<name>
<surname>Bowers</surname>
<given-names>J. N.</given-names>
</name>
</person-group>
(
<year>1979</year>
).
<article-title>Absolute and relative cues for the auditory perception of egocentric distance</article-title>
.
<source>Perception</source>
<volume>8</volume>
,
<fpage>311</fpage>
<lpage>322</lpage>
<pub-id pub-id-type="doi">10.1068/p080311</pub-id>
<pub-id pub-id-type="pmid">534158</pub-id>
</mixed-citation>
</ref>
<ref id="B15">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Mershon</surname>
<given-names>D. H.</given-names>
</name>
<name>
<surname>Desaulniers</surname>
<given-names>D. H.</given-names>
</name>
<name>
<surname>Amerson</surname>
<given-names>T. L.</given-names>
</name>
<name>
<surname>Kiefer</surname>
<given-names>S. A.</given-names>
</name>
</person-group>
(
<year>1980</year>
).
<article-title>Visual capture in auditory distance perception: proximity image effect reconsidered</article-title>
.
<source>J. Aud. Res</source>
.
<volume>20</volume>
,
<fpage>129</fpage>
<lpage>136</lpage>
<pub-id pub-id-type="pmid">7345059</pub-id>
</mixed-citation>
</ref>
<ref id="B16">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Mershon</surname>
<given-names>D. H.</given-names>
</name>
<name>
<surname>King</surname>
<given-names>L. E.</given-names>
</name>
</person-group>
(
<year>1975</year>
).
<article-title>Intensity and reverberation as factors in the auditory perception of egocentric distance</article-title>
.
<source>Percept. Psychophys</source>
.
<volume>18</volume>
,
<fpage>409</fpage>
<lpage>415</lpage>
<pub-id pub-id-type="doi">10.3758/BF03204113</pub-id>
</mixed-citation>
</ref>
<ref id="B17">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Middlebrooks</surname>
<given-names>J. C.</given-names>
</name>
<name>
<surname>Green</surname>
<given-names>D. M.</given-names>
</name>
</person-group>
(
<year>1991</year>
).
<article-title>Sound localization by human listeners</article-title>
.
<source>Annu. Rev. Psychol</source>
.
<volume>42</volume>
,
<fpage>135</fpage>
<lpage>159</lpage>
<pub-id pub-id-type="doi">10.1146/annurev.ps.42.020191.001031</pub-id>
<pub-id pub-id-type="pmid">2018391</pub-id>
</mixed-citation>
</ref>
<ref id="B18">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Mills</surname>
<given-names>A. W.</given-names>
</name>
</person-group>
(
<year>1958</year>
).
<article-title>On the minimum audible angle</article-title>
.
<source>J. Acoust. Soc. Am</source>
.
<volume>30</volume>
,
<fpage>237</fpage>
<lpage>246</lpage>
<pub-id pub-id-type="doi">10.1121/1.1909553</pub-id>
</mixed-citation>
</ref>
<ref id="B19">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Rife</surname>
<given-names>D. D.</given-names>
</name>
<name>
<surname>Vanderkooy</surname>
<given-names>J.</given-names>
</name>
</person-group>
(
<year>1989</year>
).
<article-title>Transfer-function measurement with maximum-length sequences</article-title>
.
<source>J. Audio Eng. Soc</source>
.
<volume>37</volume>
,
<fpage>419</fpage>
<lpage>444</lpage>
</mixed-citation>
</ref>
<ref id="B20">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Sedgwick</surname>
<given-names>H. A.</given-names>
</name>
</person-group>
(
<year>1986</year>
).
<article-title>Space perception</article-title>
, in
<source>Handbook of Perception and Human Performance</source>
, Vol. 1, eds
<person-group person-group-type="editor">
<name>
<surname>Boff</surname>
<given-names>K. R.</given-names>
</name>
<name>
<surname>Kaufman</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Thomas</surname>
<given-names>J. P.</given-names>
</name>
</person-group>
(
<publisher-loc>New York, NY</publisher-loc>
:
<publisher-name>Wiley</publisher-name>
),
<fpage>21-1</fpage>
<lpage>21-57</lpage>
</mixed-citation>
</ref>
<ref id="B21">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Thurlow</surname>
<given-names>W. R.</given-names>
</name>
<name>
<surname>Jack</surname>
<given-names>C. E.</given-names>
</name>
</person-group>
(
<year>1973</year>
).
<article-title>Certain determinants of the “ventriloquism effect.”</article-title>
<source>Percept. Mot. Skills</source>
<volume>36</volume>
,
<fpage>1171</fpage>
<lpage>1184</lpage>
<pub-id pub-id-type="doi">10.2466/pms.1973.36.3c.1171</pub-id>
<pub-id pub-id-type="pmid">4711968</pub-id>
</mixed-citation>
</ref>
<ref id="B22">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Vroomen</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>de Gelder</surname>
<given-names>B.</given-names>
</name>
</person-group>
(
<year>2004</year>
).
<article-title>Temporal ventriloquism: sound modulates the flash-lag effect</article-title>
.
<source>J. Exp. Psychol. Hum. Percept. Perform</source>
.
<volume>30</volume>
,
<fpage>513</fpage>
<lpage>518</lpage>
<pub-id pub-id-type="doi">10.1037/0096-1523.30.3.513</pub-id>
<pub-id pub-id-type="pmid">15161383</pub-id>
</mixed-citation>
</ref>
<ref id="B23">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wagner</surname>
<given-names>M.</given-names>
</name>
</person-group>
(
<year>1985</year>
).
<article-title>The metric of visual space</article-title>
.
<source>Percept. Psychophys</source>
.
<volume>38</volume>
,
<fpage>483</fpage>
<lpage>495</lpage>
<pub-id pub-id-type="doi">10.3758/BF03207058</pub-id>
<pub-id pub-id-type="pmid">3834394</pub-id>
</mixed-citation>
</ref>
<ref id="B24">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wightman</surname>
<given-names>F. L.</given-names>
</name>
<name>
<surname>Kistler</surname>
<given-names>D. J.</given-names>
</name>
</person-group>
(
<year>1989</year>
).
<article-title>Headphone simulation of free-field listening. I: stimulus synthesis</article-title>
.
<source>J. Acoust. Soc. Am</source>
.
<volume>85</volume>
,
<fpage>858</fpage>
<lpage>867</lpage>
<pub-id pub-id-type="doi">10.1121/1.397557</pub-id>
<pub-id pub-id-type="pmid">2926000</pub-id>
</mixed-citation>
</ref>
<ref id="B25">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Zahorik</surname>
<given-names>P.</given-names>
</name>
</person-group>
(
<year>2001</year>
).
<article-title>Estimating sound source distance with and without vision</article-title>
.
<source>Optom. Vis. Sci</source>
.
<volume>78</volume>
,
<fpage>270</fpage>
<lpage>275</lpage>
<pub-id pub-id-type="doi">10.1097/00006324-200105000-00009</pub-id>
<pub-id pub-id-type="pmid">11384003</pub-id>
</mixed-citation>
</ref>
<ref id="B26">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Zahorik</surname>
<given-names>P.</given-names>
</name>
</person-group>
(
<year>2002a</year>
).
<article-title>Assessing auditory distance perception using virtual acoustics</article-title>
.
<source>J. Acoust. Soc. Am</source>
.
<volume>111</volume>
,
<fpage>1832</fpage>
<lpage>1846</lpage>
<pub-id pub-id-type="doi">10.1121/1.1458027</pub-id>
<pub-id pub-id-type="pmid">12002867</pub-id>
</mixed-citation>
</ref>
<ref id="B27">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Zahorik</surname>
<given-names>P.</given-names>
</name>
</person-group>
(
<year>2002b</year>
).
<article-title>Direct-to-reverberant energy ratio sensitivity</article-title>
.
<source>J. Acoust. Soc. Am</source>
.
<volume>112</volume>
,
<fpage>2110</fpage>
<lpage>2117</lpage>
<pub-id pub-id-type="doi">10.1121/1.1506692</pub-id>
<pub-id pub-id-type="pmid">12430822</pub-id>
</mixed-citation>
</ref>
<ref id="B28">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Zahorik</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Brungart</surname>
<given-names>D. S.</given-names>
</name>
<name>
<surname>Bronkhorst</surname>
<given-names>A. W.</given-names>
</name>
</person-group>
(
<year>2005</year>
).
<article-title>Auditory distance perception in humans: a summary of past and present research</article-title>
.
<source>Acta Acust. United Acust</source>
.
<volume>91</volume>
,
<fpage>409</fpage>
<lpage>420</lpage>
</mixed-citation>
</ref>
</ref-list>
</back>
</pmc>
<affiliations>
<list>
<country>
<li>États-Unis</li>
</country>
</list>
<tree>
<country name="États-Unis">
<noRegion>
<name sortKey="Anderson, Paul W" sort="Anderson, Paul W" uniqKey="Anderson P" first="Paul W." last="Anderson">Paul W. Anderson</name>
</noRegion>
<name sortKey="Zahorik, Pavel" sort="Zahorik, Pavel" uniqKey="Zahorik P" first="Pavel" last="Zahorik">Pavel Zahorik</name>
<name sortKey="Zahorik, Pavel" sort="Zahorik, Pavel" uniqKey="Zahorik P" first="Pavel" last="Zahorik">Pavel Zahorik</name>
</country>
</tree>
</affiliations>
</record>
