Exploration server on haptic devices

Warning: this site is under development!
Warning: this site is generated automatically from raw corpora.
The information has therefore not been validated.

The interaction of vision and audition in two-dimensional space

Internal identifier: 000188 (Pmc/Curation); previous: 000187; next: 000189

Authors: Martine Godfroy-Cooper [United States]; Patrick M. B. Sandor [France]; Joel D. Miller [United States]; Robert B. Welch [United States]

Source:

RBID : PMC:4585004

Abstract

Using a mouse-driven visual pointer, 10 participants made repeated open-loop egocentric localizations of memorized visual, auditory, and combined visual-auditory targets projected randomly across the two-dimensional frontal field (2D). The results are reported in terms of variable error, constant error and local distortion. The results confirmed that auditory and visual maps of the egocentric space differ in their precision (variable error) and accuracy (constant error), both from one another and as a function of eccentricity and direction within a given modality. These differences were used, in turn, to make predictions about the precision and accuracy within which spatially and temporally congruent bimodal visual-auditory targets are localized. Overall, the improvement in precision for bimodal relative to the best unimodal target revealed the presence of optimal integration well-predicted by the Maximum Likelihood Estimation (MLE) model. Conversely, the hypothesis that accuracy in localizing the bimodal visual-auditory targets would represent a compromise between auditory and visual performance in favor of the most precise modality was rejected. Instead, the bimodal accuracy was found to be equivalent to or to exceed that of the best unimodal condition. Finally, we described how the different types of errors could be used to identify properties of the internal representations and coordinate transformations within the central nervous system (CNS). The results provide some insight into the structure of the underlying sensorimotor processes employed by the brain and confirm the usefulness of capitalizing on naturally occurring differences between vision and audition to better understand their interaction and their contribution to multimodal perception.
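The Maximum Likelihood Estimation (MLE) model referenced in the abstract predicts that the bimodal estimate is a reliability-weighted average of the unimodal estimates, with a variance never larger than that of the most precise modality. A minimal sketch of this standard cue-combination rule (after Ernst and Banks, 2002) is shown below; the numeric variances in the usage comment are illustrative, not the study's measured data.

```python
def mle_combine(mu_v, var_v, mu_a, var_a):
    """Optimal (minimum-variance) fusion of a visual estimate (mu_v, var_v)
    and an auditory estimate (mu_a, var_a) under the MLE model."""
    # Each cue is weighted by the relative reliability (inverse variance)
    # of the other: the more precise cue dominates the combined estimate.
    w_v = var_a / (var_v + var_a)
    w_a = var_v / (var_v + var_a)
    mu_va = w_v * mu_v + w_a * mu_a
    # The combined variance is always <= min(var_v, var_a): precision improves.
    var_va = (var_v * var_a) / (var_v + var_a)
    return mu_va, var_va

# Illustrative example: a precise visual cue (variance 1.0) and a less
# precise auditory cue (variance 4.0) that disagree on target location.
mu, var = mle_combine(0.0, 1.0, 10.0, 4.0)
```

With these illustrative values the fused estimate is pulled strongly toward the visual cue (weight 0.8), and the fused variance (0.8) is smaller than either unimodal variance, which is the signature of optimal integration the study tested for.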


URL: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4585004
DOI: 10.3389/fnins.2015.00311
PubMed: 26441492
PubMed Central: 4585004


The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">The interaction of vision and audition in two-dimensional space</title>
<author>
<name sortKey="Godfroy Cooper, Martine" sort="Godfroy Cooper, Martine" uniqKey="Godfroy Cooper M" first="Martine" last="Godfroy-Cooper">Martine Godfroy-Cooper</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<institution>Advanced Controls and Displays Group, Human Systems Integration Division, NASA Ames Research Center</institution>
<country>Moffett Field, CA, USA</country>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="aff2">
<institution>San Jose State University Research Foundation</institution>
<country>San José, CA, USA</country>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Sandor, Patrick M B" sort="Sandor, Patrick M B" uniqKey="Sandor P" first="Patrick M. B." last="Sandor">Patrick M. B. Sandor</name>
<affiliation wicri:level="1">
<nlm:aff id="aff3">
<institution>Institut de Recherche Biomédicale des Armées, Département Action et Cognition en Situation Opérationnelle</institution>
<country>Brétigny-sur-Orge, France</country>
</nlm:aff>
<country xml:lang="fr">France</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="aff4">
<institution>Aix Marseille Université, Centre National de la Recherche Scientifique, ISM UMR 7287</institution>
<country>Marseille, France</country>
</nlm:aff>
<country xml:lang="fr">France</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Miller, Joel D" sort="Miller, Joel D" uniqKey="Miller J" first="Joel D." last="Miller">Joel D. Miller</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<institution>Advanced Controls and Displays Group, Human Systems Integration Division, NASA Ames Research Center</institution>
<country>Moffett Field, CA, USA</country>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="aff2">
<institution>San Jose State University Research Foundation</institution>
<country>San José, CA, USA</country>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Welch, Robert B" sort="Welch, Robert B" uniqKey="Welch R" first="Robert B." last="Welch">Robert B. Welch</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<institution>Advanced Controls and Displays Group, Human Systems Integration Division, NASA Ames Research Center</institution>
<country>Moffett Field, CA, USA</country>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PMC</idno>
<idno type="pmid">26441492</idno>
<idno type="pmc">4585004</idno>
<idno type="url">http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4585004</idno>
<idno type="RBID">PMC:4585004</idno>
<idno type="doi">10.3389/fnins.2015.00311</idno>
<date when="2015">2015</date>
<idno type="wicri:Area/Pmc/Corpus">000188</idno>
<idno type="wicri:Area/Pmc/Curation">000188</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a" type="main">The interaction of vision and audition in two-dimensional space</title>
<author>
<name sortKey="Godfroy Cooper, Martine" sort="Godfroy Cooper, Martine" uniqKey="Godfroy Cooper M" first="Martine" last="Godfroy-Cooper">Martine Godfroy-Cooper</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<institution>Advanced Controls and Displays Group, Human Systems Integration Division, NASA Ames Research Center</institution>
<country>Moffett Field, CA, USA</country>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="aff2">
<institution>San Jose State University Research Foundation</institution>
<country>San José, CA, USA</country>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Sandor, Patrick M B" sort="Sandor, Patrick M B" uniqKey="Sandor P" first="Patrick M. B." last="Sandor">Patrick M. B. Sandor</name>
<affiliation wicri:level="1">
<nlm:aff id="aff3">
<institution>Institut de Recherche Biomédicale des Armées, Département Action et Cognition en Situation Opérationnelle</institution>
<country>Brétigny-sur-Orge, France</country>
</nlm:aff>
<country xml:lang="fr">France</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="aff4">
<institution>Aix Marseille Université, Centre National de la Recherche Scientifique, ISM UMR 7287</institution>
<country>Marseille, France</country>
</nlm:aff>
<country xml:lang="fr">France</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Miller, Joel D" sort="Miller, Joel D" uniqKey="Miller J" first="Joel D." last="Miller">Joel D. Miller</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<institution>Advanced Controls and Displays Group, Human Systems Integration Division, NASA Ames Research Center</institution>
<country>Moffett Field, CA, USA</country>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="aff2">
<institution>San Jose State University Research Foundation</institution>
<country>San José, CA, USA</country>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Welch, Robert B" sort="Welch, Robert B" uniqKey="Welch R" first="Robert B." last="Welch">Robert B. Welch</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<institution>Advanced Controls and Displays Group, Human Systems Integration Division, NASA Ames Research Center</institution>
<country>Moffett Field, CA, USA</country>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
</analytic>
<series>
<title level="j">Frontiers in Neuroscience</title>
<idno type="ISSN">1662-4548</idno>
<idno type="eISSN">1662-453X</idno>
<imprint>
<date when="2015">2015</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass></textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">
<p>Using a mouse-driven visual pointer, 10 participants made repeated open-loop egocentric localizations of memorized visual, auditory, and combined visual-auditory targets projected randomly across the two-dimensional frontal field (2D). The results are reported in terms of variable error, constant error and local distortion. The results confirmed that auditory and visual maps of the egocentric space differ in their precision (variable error) and accuracy (constant error), both from one another and as a function of eccentricity and direction within a given modality. These differences were used, in turn, to make predictions about the precision and accuracy within which spatially and temporally congruent bimodal visual-auditory targets are localized. Overall, the improvement in precision for bimodal relative to the best unimodal target revealed the presence of optimal integration well-predicted by the Maximum Likelihood Estimation (MLE) model. Conversely, the hypothesis that accuracy in localizing the bimodal visual-auditory targets would represent a compromise between auditory and visual performance in favor of the most precise modality was rejected. Instead, the bimodal accuracy was found to be equivalent to or to exceed that of the best unimodal condition. Finally, we described how the different types of errors could be used to identify properties of the internal representations and coordinate transformations within the central nervous system (CNS). The results provide some insight into the structure of the underlying sensorimotor processes employed by the brain and confirm the usefulness of capitalizing on naturally occurring differences between vision and audition to better understand their interaction and their contribution to multimodal perception.</p>
</div>
</front>
<back>
<div1 type="bibliography">
<listBibl>
<biblStruct>
<analytic>
<author>
<name sortKey="Abrams, J" uniqKey="Abrams J">J. Abrams</name>
</author>
<author>
<name sortKey="Nizam, A" uniqKey="Nizam A">A. Nizam</name>
</author>
<author>
<name sortKey="Carrasco, M" uniqKey="Carrasco M">M. Carrasco</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Alais, D" uniqKey="Alais D">D. Alais</name>
</author>
<author>
<name sortKey="Burr, D" uniqKey="Burr D">D. Burr</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Battaglia, P W" uniqKey="Battaglia P">P. W. Battaglia</name>
</author>
<author>
<name sortKey="Jacobs, R A" uniqKey="Jacobs R">R. A. Jacobs</name>
</author>
<author>
<name sortKey="Aslin, R N" uniqKey="Aslin R">R. N. Aslin</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bernardo, J M" uniqKey="Bernardo J">J. M. Bernardo</name>
</author>
<author>
<name sortKey="Smith, A F" uniqKey="Smith A">A. F. Smith</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bertelson, P" uniqKey="Bertelson P">P. Bertelson</name>
</author>
<author>
<name sortKey="Radeau, M" uniqKey="Radeau M">M. Radeau</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bertelson, P" uniqKey="Bertelson P">P. Bertelson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Best, V" uniqKey="Best V">V. Best</name>
</author>
<author>
<name sortKey="Marrone, N" uniqKey="Marrone N">N. Marrone</name>
</author>
<author>
<name sortKey="Mason, C R" uniqKey="Mason C">C. R. Mason</name>
</author>
<author>
<name sortKey="Kidd, G" uniqKey="Kidd G">G. Kidd</name>
</author>
<author>
<name sortKey="Shinn Cunningham, B G" uniqKey="Shinn Cunningham B">B. G. Shinn-Cunningham</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Blauert, J" uniqKey="Blauert J">J. Blauert</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Blauert, J" uniqKey="Blauert J">J. Blauert</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bronkhorst, A W" uniqKey="Bronkhorst A">A. W. Bronkhorst</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Brungart, D S" uniqKey="Brungart D">D. S. Brungart</name>
</author>
<author>
<name sortKey="Rabinowitz, W M" uniqKey="Rabinowitz W">W. M. Rabinowitz</name>
</author>
<author>
<name sortKey="Durlach, N I" uniqKey="Durlach N">N. I. Durlach</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bulthoff, H H" uniqKey="Bulthoff H">H. H. Bülthoff</name>
</author>
<author>
<name sortKey="Yuille, A L" uniqKey="Yuille A">A. L. Yuille</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Carlile, S" uniqKey="Carlile S">S. Carlile</name>
</author>
<author>
<name sortKey="Leong, P" uniqKey="Leong P">P. Leong</name>
</author>
<author>
<name sortKey="Hyams, S" uniqKey="Hyams S">S. Hyams</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Charbonneau, G" uniqKey="Charbonneau G">G. Charbonneau</name>
</author>
<author>
<name sortKey="Veronneau, M" uniqKey="Veronneau M">M. Véronneau</name>
</author>
<author>
<name sortKey="Boudrias Fournier, C" uniqKey="Boudrias Fournier C">C. Boudrias-Fournier</name>
</author>
<author>
<name sortKey="Lepore, F" uniqKey="Lepore F">F. Lepore</name>
</author>
<author>
<name sortKey="Collignon, O" uniqKey="Collignon O">O. Collignon</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Colonius, H" uniqKey="Colonius H">H. Colonius</name>
</author>
<author>
<name sortKey="Diederich, A" uniqKey="Diederich A">A. Diederich</name>
</author>
<author>
<name sortKey="Steenken, R" uniqKey="Steenken R">R. Steenken</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Crawford, J D" uniqKey="Crawford J">J. D. Crawford</name>
</author>
<author>
<name sortKey="Medendorp, W P" uniqKey="Medendorp W">W. P. Medendorp</name>
</author>
<author>
<name sortKey="Marotta, J J" uniqKey="Marotta J">J. J. Marotta</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Culler, E" uniqKey="Culler E">E. Culler</name>
</author>
<author>
<name sortKey="Coakley, J D" uniqKey="Coakley J">J. D. Coakley</name>
</author>
<author>
<name sortKey="Lowy, K" uniqKey="Lowy K">K. Lowy</name>
</author>
<author>
<name sortKey="Gross, N" uniqKey="Gross N">N. Gross</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Curcio, C A" uniqKey="Curcio C">C. A. Curcio</name>
</author>
<author>
<name sortKey="Sloan, K R" uniqKey="Sloan K">K. R. Sloan</name>
</author>
<author>
<name sortKey="Packer, O" uniqKey="Packer O">O. Packer</name>
</author>
<author>
<name sortKey="Hendrickson, A E" uniqKey="Hendrickson A">A. E. Hendrickson</name>
</author>
<author>
<name sortKey="Kalina, R E" uniqKey="Kalina R">R. E. Kalina</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Devalois, R L" uniqKey="Devalois R">R. L. DeValois</name>
</author>
<author>
<name sortKey="Devalois, K K" uniqKey="Devalois K">K. K. DeValois</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Easton, R D" uniqKey="Easton R">R. D. Easton</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ernst, M O" uniqKey="Ernst M">M. O. Ernst</name>
</author>
<author>
<name sortKey="Banks, M S" uniqKey="Banks M">M. S. Banks</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ernst, M O" uniqKey="Ernst M">M. O. Ernst</name>
</author>
<author>
<name sortKey="Bulthoff, H H" uniqKey="Bulthoff H">H. H. Bülthoff</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Fisher, G H" uniqKey="Fisher G">G. H. Fisher</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Freedman, E G" uniqKey="Freedman E">E. G. Freedman</name>
</author>
<author>
<name sortKey="Sparks, D L" uniqKey="Sparks D">D. L. Sparks</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Fuller, S" uniqKey="Fuller S">S. Fuller</name>
</author>
<author>
<name sortKey="Carrasco, M" uniqKey="Carrasco M">M. Carrasco</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gardner, J L" uniqKey="Gardner J">J. L. Gardner</name>
</author>
<author>
<name sortKey="Merriam, E P" uniqKey="Merriam E">E. P. Merriam</name>
</author>
<author>
<name sortKey="Movshon, J A" uniqKey="Movshon J">J. A. Movshon</name>
</author>
<author>
<name sortKey="Heeger, D J" uniqKey="Heeger D">D. J. Heeger</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Godfroy, M" uniqKey="Godfroy M">M. Godfroy</name>
</author>
<author>
<name sortKey="Roumes, C" uniqKey="Roumes C">C. Roumes</name>
</author>
<author>
<name sortKey="Dauchy, P" uniqKey="Dauchy P">P. Dauchy</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Goossens, H H L M" uniqKey="Goossens H">H. H. L. M. Goossens</name>
</author>
<author>
<name sortKey="Van Opstal, A J" uniqKey="Van Opstal A">A. J. Van Opstal</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hairston, W D" uniqKey="Hairston W">W. D. Hairston</name>
</author>
<author>
<name sortKey="Laurienti, P J" uniqKey="Laurienti P">P. J. Laurienti</name>
</author>
<author>
<name sortKey="Mishra, G" uniqKey="Mishra G">G. Mishra</name>
</author>
<author>
<name sortKey="Burdette, J H" uniqKey="Burdette J">J. H. Burdette</name>
</author>
<author>
<name sortKey="Wallace, M T" uniqKey="Wallace M">M. T. Wallace</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hairston, W D" uniqKey="Hairston W">W. D. Hairston</name>
</author>
<author>
<name sortKey="Wallace, M T" uniqKey="Wallace M">M. T. Wallace</name>
</author>
<author>
<name sortKey="Vaughan, J W" uniqKey="Vaughan J">J. W. Vaughan</name>
</author>
<author>
<name sortKey="Stein, B E" uniqKey="Stein B">B. E. Stein</name>
</author>
<author>
<name sortKey="Norris, J L" uniqKey="Norris J">J. L. Norris</name>
</author>
<author>
<name sortKey="Schirillo, J A" uniqKey="Schirillo J">J. A. Schirillo</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hay, J C" uniqKey="Hay J">J. C. Hay</name>
</author>
<author>
<name sortKey="Pick, H L" uniqKey="Pick H">H. L. Pick</name>
</author>
<author>
<name sortKey="Ikeda, K" uniqKey="Ikeda K">K. Ikeda</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Heffner, H E" uniqKey="Heffner H">H. E. Heffner</name>
</author>
<author>
<name sortKey="Heffner, R S" uniqKey="Heffner R">R. S. Heffner</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Heuermann, H" uniqKey="Heuermann H">H. Heuermann</name>
</author>
<author>
<name sortKey="Colonius, H" uniqKey="Colonius H">H. Colonius</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hofman, P M" uniqKey="Hofman P">P. M. Hofman</name>
</author>
<author>
<name sortKey="Van Opstal, A J" uniqKey="Van Opstal A">A. J. Van Opstal</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hofman, P M" uniqKey="Hofman P">P. M. Hofman</name>
</author>
<author>
<name sortKey="Van Opstal, A J" uniqKey="Van Opstal A">A. J. Van Opstal</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Honda, H" uniqKey="Honda H">H. Honda</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hubel, D H" uniqKey="Hubel D">D. H. Hubel</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Jay, M F" uniqKey="Jay M">M. F. Jay</name>
</author>
<author>
<name sortKey="Sparks, D L" uniqKey="Sparks D">D. L. Sparks</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kerzel, D" uniqKey="Kerzel D">D. Kerzel</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Klier, E M" uniqKey="Klier E">E. M. Klier</name>
</author>
<author>
<name sortKey="Wang, H" uniqKey="Wang H">H. Wang</name>
</author>
<author>
<name sortKey="Crawford, J D" uniqKey="Crawford J">J. D. Crawford</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kopco, N" uniqKey="Kopco N">N. Kopco</name>
</author>
<author>
<name sortKey="Lin, I F" uniqKey="Lin I">I. F. Lin</name>
</author>
<author>
<name sortKey="Shinn Cunningham, B G" uniqKey="Shinn Cunningham B">B. G. Shinn-Cunningham</name>
</author>
<author>
<name sortKey="Groh, J M" uniqKey="Groh J">J. M. Groh</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lee, J" uniqKey="Lee J">J. Lee</name>
</author>
<author>
<name sortKey="Groh, J M" uniqKey="Groh J">J. M. Groh</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Loomis, J M" uniqKey="Loomis J">J. M. Loomis</name>
</author>
<author>
<name sortKey="Klatzky, R L" uniqKey="Klatzky R">R. L. Klatzky</name>
</author>
<author>
<name sortKey="Mchugh, B" uniqKey="Mchugh B">B. McHugh</name>
</author>
<author>
<name sortKey="Giudice, N A" uniqKey="Giudice N">N. A. Giudice</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Makous, J C" uniqKey="Makous J">J. C. Makous</name>
</author>
<author>
<name sortKey="Middlebrooks, J C" uniqKey="Middlebrooks J">J. C. Middlebrooks</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mateeff, S" uniqKey="Mateeff S">S. Mateeff</name>
</author>
<author>
<name sortKey="Gourevich, A" uniqKey="Gourevich A">A. Gourevich</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mcintyre, J" uniqKey="Mcintyre J">J. McIntyre</name>
</author>
<author>
<name sortKey="Stratta, F" uniqKey="Stratta F">F. Stratta</name>
</author>
<author>
<name sortKey="Droulez, J" uniqKey="Droulez J">J. Droulez</name>
</author>
<author>
<name sortKey="Lacquaniti, F" uniqKey="Lacquaniti F">F. Lacquaniti</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Middlebrooks, J C" uniqKey="Middlebrooks J">J. C. Middlebrooks</name>
</author>
<author>
<name sortKey="Green, D M" uniqKey="Green D">D. M. Green</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Musseler, J" uniqKey="Musseler J">J. Müsseler</name>
</author>
<author>
<name sortKey="Van Der Heijden, A H C" uniqKey="Van Der Heijden A">A. H. C. Van der Heijden</name>
</author>
<author>
<name sortKey="Mahmud, S H" uniqKey="Mahmud S">S. H. Mahmud</name>
</author>
<author>
<name sortKey="Deubel, H" uniqKey="Deubel H">H. Deubel</name>
</author>
<author>
<name sortKey="Ertsey, S" uniqKey="Ertsey S">S. Ertsey</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Oldfield, S R" uniqKey="Oldfield S">S. R. Oldfield</name>
</author>
<author>
<name sortKey="Parker, S P" uniqKey="Parker S">S. P. Parker</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Oruc, I" uniqKey="Oruc I">I. Oruç</name>
</author>
<author>
<name sortKey="Maloney, L T" uniqKey="Maloney L">L. T. Maloney</name>
</author>
<author>
<name sortKey="Landy, M S" uniqKey="Landy M">M. S. Landy</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Pedersen, J A" uniqKey="Pedersen J">J. A. Pedersen</name>
</author>
<author>
<name sortKey="Jorgensen, T" uniqKey="Jorgensen T">T. Jorgensen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Perrott, D R" uniqKey="Perrott D">D. R. Perrott</name>
</author>
<author>
<name sortKey="Ambarsoom, H" uniqKey="Ambarsoom H">H. Ambarsoom</name>
</author>
<author>
<name sortKey="Tucker, J" uniqKey="Tucker J">J. Tucker</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Radeau, M" uniqKey="Radeau M">M. Radeau</name>
</author>
<author>
<name sortKey="Bertelson, P" uniqKey="Bertelson P">P. Bertelson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Recanzone, G H" uniqKey="Recanzone G">G. H. Recanzone</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Richard, A" uniqKey="Richard A">A. Richard</name>
</author>
<author>
<name sortKey="Churan, J" uniqKey="Churan J">J. Churan</name>
</author>
<author>
<name sortKey="Guitton, D E" uniqKey="Guitton D">D. E. Guitton</name>
</author>
<author>
<name sortKey="Pack, C C" uniqKey="Pack C">C. C. Pack</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Robinson, D A" uniqKey="Robinson D">D. A. Robinson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Rock, I" uniqKey="Rock I">I. Rock</name>
</author>
<author>
<name sortKey="Victor, J" uniqKey="Victor J">J. Victor</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ross, J" uniqKey="Ross J">J. Ross</name>
</author>
<author>
<name sortKey="Morrone, C M" uniqKey="Morrone C">C. M. Morrone</name>
</author>
<author>
<name sortKey="Burr, D C" uniqKey="Burr D">D. C. Burr</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Saarinen, J" uniqKey="Saarinen J">J. Saarinen</name>
</author>
<author>
<name sortKey="Rovamo, J" uniqKey="Rovamo J">J. Rovamo</name>
</author>
<author>
<name sortKey="Virsu, V" uniqKey="Virsu V">V. Virsu</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Seeber, B" uniqKey="Seeber B">B. Seeber</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Seth, B R" uniqKey="Seth B">B. R. Seth</name>
</author>
<author>
<name sortKey="Shimojo, S" uniqKey="Shimojo S">S. Shimojo</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Stein, B E" uniqKey="Stein B">B. E. Stein</name>
</author>
<author>
<name sortKey="Meredith, M A" uniqKey="Meredith M">M. A. Meredith</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Strybel, T Z" uniqKey="Strybel T">T. Z. Strybel</name>
</author>
<author>
<name sortKey="Fujimoto, K" uniqKey="Fujimoto K">K. Fujimoto</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Thurlow, W R" uniqKey="Thurlow W">W. R. Thurlow</name>
</author>
<author>
<name sortKey="Jack, C E" uniqKey="Jack C">C. E. Jack</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Van Beers, R J" uniqKey="Van Beers R">R. J. Van Beers</name>
</author>
<author>
<name sortKey="Sittig, A C" uniqKey="Sittig A">A. C. Sittig</name>
</author>
<author>
<name sortKey="Van Der Gon, J J D" uniqKey="Van Der Gon J">J. J. D. van Der Gon</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Van Opstal, A J" uniqKey="Van Opstal A">A. J. Van Opstal</name>
</author>
<author>
<name sortKey="Van Gisbergen, J A M" uniqKey="Van Gisbergen J">J. A. M. Van Gisbergen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Warren, D H" uniqKey="Warren D">D. H. Warren</name>
</author>
<author>
<name sortKey="Mccarthy, T J" uniqKey="Mccarthy T">T. J. McCarthy</name>
</author>
<author>
<name sortKey="Welch, R B" uniqKey="Welch R">R. B. Welch</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Welch, R B" uniqKey="Welch R">R. B. Welch</name>
</author>
<author>
<name sortKey="Warren, D H" uniqKey="Warren D">D. H. Warren</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Welch, R B" uniqKey="Welch R">R. B. Welch</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Westheimer, G" uniqKey="Westheimer G">G. Westheimer</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Westheimer, G" uniqKey="Westheimer G">G. Westheimer</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Winer, B J" uniqKey="Winer B">B. J. Winer</name>
</author>
<author>
<name sortKey="Brown, D R" uniqKey="Brown D">D. R. Brown</name>
</author>
<author>
<name sortKey="Michles, K M" uniqKey="Michles K">K. M. Michles</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Witten, I B" uniqKey="Witten I">I. B. Witten</name>
</author>
<author>
<name sortKey="Knudsen, E I" uniqKey="Knudsen E">E. I. Knudsen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Yost, W A" uniqKey="Yost W">W. A. Yost</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Yuille, A" uniqKey="Yuille A">A. Yuille</name>
</author>
<author>
<name sortKey="Bulthoff, H H" uniqKey="Bulthoff H">H. H. Bülthoff</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Zwiers, M" uniqKey="Zwiers M">M. Zwiers</name>
</author>
<author>
<name sortKey="Van Opstal, A J" uniqKey="Van Opstal A">A. J. Van Opstal</name>
</author>
<author>
<name sortKey="Cruysberg, J R" uniqKey="Cruysberg J">J. R. Cruysberg</name>
</author>
</analytic>
</biblStruct>
</listBibl>
</div1>
</back>
</TEI>
<pmc article-type="research-article">
<pmc-dir>properties open_access</pmc-dir>
<front>
<journal-meta>
<journal-id journal-id-type="nlm-ta">Front Neurosci</journal-id>
<journal-id journal-id-type="iso-abbrev">Front Neurosci</journal-id>
<journal-id journal-id-type="publisher-id">Front. Neurosci.</journal-id>
<journal-title-group>
<journal-title>Frontiers in Neuroscience</journal-title>
</journal-title-group>
<issn pub-type="ppub">1662-4548</issn>
<issn pub-type="epub">1662-453X</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="pmid">26441492</article-id>
<article-id pub-id-type="pmc">4585004</article-id>
<article-id pub-id-type="doi">10.3389/fnins.2015.00311</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Psychology</subject>
<subj-group>
<subject>Original Research</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>The interaction of vision and audition in two-dimensional space</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name>
<surname>Godfroy-Cooper</surname>
<given-names>Martine</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
<xref ref-type="aff" rid="aff2">
<sup>2</sup>
</xref>
<xref ref-type="author-notes" rid="fn001">
<sup>*</sup>
</xref>
<uri xlink:type="simple" xlink:href="http://loop.frontiersin.org/people/164442/overview"></uri>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Sandor</surname>
<given-names>Patrick M. B.</given-names>
</name>
<xref ref-type="aff" rid="aff3">
<sup>3</sup>
</xref>
<xref ref-type="aff" rid="aff4">
<sup>4</sup>
</xref>
<uri xlink:type="simple" xlink:href="http://loop.frontiersin.org/people/263022/overview"></uri>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Miller</surname>
<given-names>Joel D.</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
<xref ref-type="aff" rid="aff2">
<sup>2</sup>
</xref>
<uri xlink:type="simple" xlink:href="http://loop.frontiersin.org/people/268554/overview"></uri>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Welch</surname>
<given-names>Robert B.</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
<uri xlink:type="simple" xlink:href="http://loop.frontiersin.org/people/194979/overview"></uri>
</contrib>
</contrib-group>
<aff id="aff1">
<sup>1</sup>
<institution>Advanced Controls and Displays Group, Human Systems Integration Division, NASA Ames Research Center</institution>
<country>Moffett Field, CA, USA</country>
</aff>
<aff id="aff2">
<sup>2</sup>
<institution>San Jose State University Research Foundation</institution>
<country>San José, CA, USA</country>
</aff>
<aff id="aff3">
<sup>3</sup>
<institution>Institut de Recherche Biomédicale des Armées, Département Action et Cognition en Situation Opérationnelle</institution>
<country>Brétigny-sur-Orge, France</country>
</aff>
<aff id="aff4">
<sup>4</sup>
<institution>Aix Marseille Université, Centre National de la Recherche Scientifique, ISM UMR 7287</institution>
<country>Marseille, France</country>
</aff>
<author-notes>
<fn fn-type="edited-by">
<p>Edited by: Guillaume Andeol, Institut de Recherche Biomédicale des Armées, France</p>
</fn>
<fn fn-type="edited-by">
<p>Reviewed by: John A. Van Opstal, University of Nijmegen, Netherlands; Simon Carlile, University of Sydney, Australia</p>
</fn>
<corresp id="fn001">*Correspondence: Martine Godfroy-Cooper, NASA Ames Research Center, PO Box 1, Mail Stop 262-4, Moffett Field, CA 94035-0001, USA
<email xlink:type="simple">martine.godfroy-1@nasa.gov</email>
</corresp>
<fn fn-type="other" id="fn002">
<p>This article was submitted to Auditory Cognitive Neuroscience, a section of the journal Frontiers in Neuroscience</p>
</fn>
</author-notes>
<pub-date pub-type="epub">
<day>17</day>
<month>9</month>
<year>2015</year>
</pub-date>
<pub-date pub-type="collection">
<year>2015</year>
</pub-date>
<volume>9</volume>
<elocation-id>311</elocation-id>
<history>
<date date-type="received">
<day>09</day>
<month>6</month>
<year>2014</year>
</date>
<date date-type="accepted">
<day>19</day>
<month>8</month>
<year>2015</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright © 2015 Godfroy-Cooper, Sandor, Miller and Welch.</copyright-statement>
<copyright-year>2015</copyright-year>
<copyright-holder>Godfroy-Cooper, Sandor, Miller and Welch</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/">
<license-p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</license-p>
</license>
</permissions>
<abstract>
<p>Using a mouse-driven visual pointer, 10 participants made repeated open-loop egocentric localizations of memorized visual, auditory, and combined visual-auditory targets projected randomly across the two-dimensional frontal field (2D). The results are reported in terms of variable error, constant error and local distortion. The results confirmed that auditory and visual maps of the egocentric space differ in their precision (variable error) and accuracy (constant error), both from one another and as a function of eccentricity and direction within a given modality. These differences were used, in turn, to make predictions about the precision and accuracy within which spatially and temporally congruent bimodal visual-auditory targets are localized. Overall, the improvement in precision for bimodal relative to the best unimodal target revealed the presence of optimal integration well-predicted by the Maximum Likelihood Estimation (MLE) model. Conversely, the hypothesis that accuracy in localizing the bimodal visual-auditory targets would represent a compromise between auditory and visual performance in favor of the most precise modality was rejected. Instead, the bimodal accuracy was found to be equivalent to or to exceed that of the best unimodal condition. Finally, we described how the different types of errors could be used to identify properties of the internal representations and coordinate transformations within the central nervous system (CNS). The results provide some insight into the structure of the underlying sensorimotor processes employed by the brain and confirm the usefulness of capitalizing on naturally occurring differences between vision and audition to better understand their interaction and their contribution to multimodal perception.</p>
</abstract>
<kwd-group>
<kwd>visual-auditory</kwd>
<kwd>localization</kwd>
<kwd>precision</kwd>
<kwd>accuracy</kwd>
<kwd>2D</kwd>
<kwd>MLE</kwd>
</kwd-group>
<counts>
<fig-count count="7"></fig-count>
<table-count count="1"></table-count>
<equation-count count="12"></equation-count>
<ref-count count="76"></ref-count>
<page-count count="18"></page-count>
<word-count count="13517"></word-count>
</counts>
</article-meta>
</front>
<body>
<sec sec-type="intro" id="s1">
<title>Introduction</title>
<p>The primary goal of this research was to determine whether and to what extent the precision (degree of reproducibility or repeatability between measurements) and accuracy (closeness of a measurement to its true physical value) with which auditory (A) and visual (V) targets are egocentrically localized in the 2D frontal field predict precision and accuracy in localizing physically and temporally congruent visual-auditory (VA) targets. We used the Bayesian framework (MLE, Bülthoff and Yuille,
<xref rid="B12" ref-type="bibr">1996</xref>
; Bernardo and Smith,
<xref rid="B4" ref-type="bibr">2000</xref>
) to test the hypothesis of a weighted integration of A and V cues (1) that are not equally reliable and (2) whose reliability varies as a function of direction and eccentricity in the 2D frontal field. This approach, however, does not address the differences in reference frames for vision and audition or the required sensorimotor transformations. We show that analyzing the orientation of the response distributions and the direction of the error vectors can provide some clues to this problem. We first describe the structural and functional differences between the A and V systems and how the CNS merges their different spatial coordinates. We then review evidence from psychophysics and neurophysiology that sensory inputs from different modalities can influence one another, suggesting that there is a
<italic>translation mechanism</italic>
between the spatial representations of different sensory systems. We then review the Bayesian framework for multisensory integration, which provides a set of rules to optimally combine sensory inputs with variable reliability. Finally, we present a combined quantitative and qualitative approach to test the effect of
<italic>spatial determinants</italic>
on integration of spatially and temporally congruent A and V stimuli.</p>
<sec>
<title>Structural and functional differences between the visual and the auditory systems</title>
<p>The inherent structural and functional differences between vision and audition have important implications for bimodal VA localization performance. First, A and V signals are represented in different neural encoding formats at the level of the cochlea and the retina, respectively. Whereas vision is tuned to spatial processing supported by a 2D retinotopic (eye-centered) spatial organization, audition is primarily tuned to frequency analysis resulting in a tonotopic map, i.e., an orderly map of frequencies along the length of the cochlea (Culler et al.,
<xref rid="B17" ref-type="bibr">1943</xref>
). As a consequence, the auditory system must derive the location of a sound on the basis of acoustic cues that arise from the geometry of the head and the ears (binaural and monaural cues, Yost,
<xref rid="B74" ref-type="bibr">2000</xref>
).</p>
<p>The localization of an auditory stimulus in the horizontal dimension (azimuth, defined by the angle between the source and the forward vector) results from the detection of left-right interaural differences in time (interaural time differences, ITDs, or interaural phase differences, IPDs) and differences in the received intensity (interaural level differences, ILDs, Middlebrooks and Green,
<xref rid="B47" ref-type="bibr">1991</xref>
). To localize a sound in the vertical dimension (elevation, defined by the angle between the source and the horizontal plane) and to resolve front-back confusions, the auditory system relies on the detailed geometry of the pinnae, causing acoustic waves to diffract and undergo direction-dependent reflections (Blauert,
<xref rid="B9" ref-type="bibr">1997</xref>
; Hofman and Van Opstal,
<xref rid="B35" ref-type="bibr">2003</xref>
). These two modes of indirect coding of the position of a sound source in space (as compared to the direct spatial coding of visual stimuli) result in different spatial resolutions along these two directions. Carlile et al. (
<xref rid="B13" ref-type="bibr">1997</xref>
) studied localization
<italic>accuracy</italic>
for sound sources on the sagittal median plane (SMP), defined as the vertical plane passing through the midline, ±20° about the auditory-visual horizon. Using a head-pointing technique, they reported constant errors (
<italic>CE</italic>
s) as small as 2–3° for the horizontal component and between 4 and 9° for the vertical component (see also Oldfield and Parker,
<xref rid="B49" ref-type="bibr">1984</xref>
; Makous and Middlebrooks,
<xref rid="B44" ref-type="bibr">1990</xref>
; Hofman and Van Opstal,
<xref rid="B34" ref-type="bibr">1998</xref>
; for similar results). For frontal sound sources (0° position in both the horizontal and vertical plane), Makous and Middlebrooks reported
<italic>CE</italic>
s of 1.5° in the horizontal plane and 2.5° in the vertical plane. The smallest errors appear to occur at locations on the audio-visual horizon, also referred to as the horizontal median plane (HMP), while locations off the audio-visual horizon were shifted toward it, resulting in a compression of auditory space that is exacerbated at the highest and lowest elevations (Carlile et al.,
<xref rid="B13" ref-type="bibr">1997</xref>
). Such a bias has not been reported for locations in azimuth. Recently, Pedersen and Jorgensen (
<xref rid="B51" ref-type="bibr">2005</xref>
) reported that the size of the
<italic>CE</italic>
s in the SMP depends on the actual sound source elevation and is about +3° at the horizontal plane, 0° at about 23° elevation, and becomes negative at higher elevations (e.g., −3° at about 46°; see also Best et al.,
<xref rid="B7" ref-type="bibr">2009</xref>
).</p>
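As an aside on the binaural geometry described above, the dependence of ITD on azimuth can be sketched with the classic spherical-head (Woodworth) approximation; the head radius and speed-of-sound values below are nominal assumptions for illustration, not measurements from this study:

```python
import math

def woodworth_itd(azimuth_deg, head_radius_m=0.0875, c_m_per_s=343.0):
    """Approximate interaural time difference (seconds) for a spherical
    head (Woodworth model); azimuth 0 deg is straight ahead, positive
    to the listener's right. Both constants are illustrative values."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / c_m_per_s) * (theta + math.sin(theta))

# ITD grows monotonically with laterality, reaching roughly 0.65 ms at 90 deg.
```

Under this sketch the ITD cue saturates toward the sides, which is consistent with the poorer azimuthal resolution reported for lateral sound sources.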
<p>For
<italic>precision</italic>
, variable errors (
<italic>VE</italic>
s) are estimated to be approximately 2° in the frontal horizontal plane near 0° (directly in front of the listener) and 4–8° in elevation (Bronkhorst,
<xref rid="B10" ref-type="bibr">1995</xref>
; Pedersen and Jorgensen,
<xref rid="B51" ref-type="bibr">2005</xref>
). The magnitude of the
<italic>VE</italic>
was shown to increase with sound source laterality (eccentricity in azimuth) to a value of 10° or more for sounds presented on the sides or the rear of the listener, although to a lesser degree than the size of the
<italic>CE</italic>
s (Perrott et al.,
<xref rid="B52" ref-type="bibr">1987</xref>
). For elevation, the
<italic>VEs</italic>
are minimal at the frontal location (0°, 0°) and maximal at the extreme positive and negative elevations.</p>
<p>On the other hand, visual resolution, contrast sensitivity, and perception of spatial form fall off rapidly with eccentricity. This effect is due to the decrease in photoreceptor density in the retina (organized in a circularly symmetric fashion) with distance from the fovea (Westheimer,
<xref rid="B70" ref-type="bibr">1972</xref>
; DeValois and DeValois,
<xref rid="B19" ref-type="bibr">1988</xref>
; Saarinen et al.,
<xref rid="B59" ref-type="bibr">1989</xref>
). Indeed, humans can only see in detail within the central visual field, where spatial resolution (acuity) is remarkable (Westheimer,
<xref rid="B71" ref-type="bibr">1979</xref>
: 0.5°; Recanzone,
<xref rid="B54" ref-type="bibr">2009</xref>
: up to 1–2° with a head-pointing task). Visual spatial resolution also varies consistently at isoeccentric locations in the visual field. At a fixed eccentricity,
<italic>precision</italic>
was reported to be higher along the HMP (where cone density is highest) than along the vertical (or sagittal) median plane (vertical-horizontal anisotropy, VHA). Visual localization was also reported to be more precise along the lower vertical meridian than along the upper vertical meridian (vertical meridian asymmetry, VMA), a phenomenon also attributed to a higher cone density in the superior portion of the retina, which processes the lower visual field (Curcio et al.,
<xref rid="B18" ref-type="bibr">1987</xref>
) up to 30° of polar angle (Abrams et al.,
<xref rid="B1" ref-type="bibr">2012</xref>
). These asymmetries have also been reported at the level of the lateral geniculate nucleus (LGN) and in the visual cortex. It is interesting to note that visual sensitivity at 45° is similar in the four quadrants and intermediate between the vertical and the horizontal meridians (Fuller and Carrasco,
<xref rid="B25" ref-type="bibr">2009</xref>
). For
<italic>accuracy</italic>
, it is well-documented that a brief visual stimulus flashed just before a saccade is mislocalized, and systematically displaced toward the saccadic landing point (Honda,
<xref rid="B36" ref-type="bibr">1991</xref>
). This results in a symmetrical compression of visual space (Ross et al.,
<xref rid="B58" ref-type="bibr">1997</xref>
) known as “foveal bias” (Mateeff and Gourevich,
<xref rid="B45" ref-type="bibr">1983</xref>
; Müsseler et al.,
<xref rid="B48" ref-type="bibr">1999</xref>
; Kerzel,
<xref rid="B39" ref-type="bibr">2002</xref>
) and that has been attributed to an oculomotor signal that transiently influences visual processing (Richard et al.,
<xref rid="B55" ref-type="bibr">2011</xref>
). Visual space compression was also observed in perceptual judgment tasks, where memory delays were involved, revealing that the systematic target mislocalization closer to the center of gaze was independent of eye movements, therefore demonstrating that the effect was perceptual rather than sensorimotor (Seth and Shimojo,
<xref rid="B61" ref-type="bibr">2001</xref>
).</p>
<p>These fundamental differences in encoding are preserved as information is processed and passed on from the receptors to the primary visual and auditory cortices, which raises a number of issues for visual-auditory integration. First, the spatial coordinates of the different sensory events need to be merged and maintained within a common reference frame. For vision, the initial transformation can be described by a logarithmic mapping function that illustrates the correspondence between the Cartesian retinal coordinates and the polar superior colliculus (SC) coordinates. The resulting collicular map can be conceived as an eye-centered map of saccade vectors in polar coordinates, where saccade amplitude and direction are represented roughly along orthogonal dimensions (Robinson,
<xref rid="B56" ref-type="bibr">1972</xref>
; Jay and Sparks,
<xref rid="B38" ref-type="bibr">1984</xref>
; Van Opstal and Van Gisbergen,
<xref rid="B66" ref-type="bibr">1989</xref>
; Freedman and Sparks,
<xref rid="B24" ref-type="bibr">1997</xref>
; Klier et al.,
<xref rid="B40" ref-type="bibr">2001</xref>
).</p>
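The logarithmic retina-to-colliculus mapping mentioned above can be sketched as a complex-logarithm transform; the scale and offset constants below are illustrative placeholders, not the fitted parameters of any published SC model:

```python
import cmath

def retina_to_sc(x_deg, y_deg, b_scale=1.4, a_offset=3.0):
    """Map Cartesian retinal coordinates (deg) to collicular map
    coordinates via a complex logarithm. The offset shifts the
    singularity away from the fovea; both constants are illustrative."""
    z = complex(x_deg, y_deg)
    w = b_scale * cmath.log((z + a_offset) / a_offset)
    return w.real, w.imag  # roughly the amplitude and direction axes
```

In this sketch the fovea maps to the origin, and equal steps along the map's amplitude axis correspond to logarithmically growing retinal eccentricities, i.e., a magnified central representation.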
<p>Conversely, for audition, information about acoustic targets in the SC is combined with eye and head position information to encode targets in a spatial or body-centered frame of reference (motor coordinates, Goossens and Van Opstal,
<xref rid="B28" ref-type="bibr">1999</xref>
). More precisely, the representation of auditory space in the SC involves a hybrid reference frame immediately after sound onset, which evolves to become predominantly eye-centered and more similar to the visual representation by the time of a saccade to that sound (Lee and Groh,
<xref rid="B42" ref-type="bibr">2012</xref>
). Kopco et al. (
<xref rid="B41" ref-type="bibr">2009</xref>
) proposed that the coordinate frame in which vision calibrates auditory spatial representation might be a mixture of eye-centered and craniocentric, suggesting that perhaps both representations are transformed in a way that is more consistent with the motor commands of the response to stimulation in either modality. Such a transformation would potentially facilitate VA interactions by resolving the initial discrepancy between the A and V reference frames. When reach movements are required, which involve coordinating gaze shifts with arm or hand movements, the proprioceptive cues in limb or joint reference frames are also translated into an eye-centered reference frame (Crawford et al.,
<xref rid="B16" ref-type="bibr">2004</xref>
; Gardner et al.,
<xref rid="B26" ref-type="bibr">2008</xref>
).</p>
</sec>
<sec>
<title>Strategies for investigating intersensory interactions and previous related research</title>
<p>Multisensory integration refers to the processes by which information arriving from one sensory modality interacts with, and sometimes biases, the processing in another modality, including how these sensory inputs are combined to yield a unified percept. There is an evolutionary basis to the capacity to merge and integrate the different senses. Integrating information carried by multiple sensors provides substantial survival advantages to an organism, such as improved detection, discrimination, and response speed. Empirical studies have determined a set of rules (determinants) and sites in the brain that govern multisensory integration (Stein and Meredith,
<xref rid="B62" ref-type="bibr">1993</xref>
). Indeed, multisensory integration is supported by the heteromodal (associative) nature of the brain. Multisensory integration starts at the cellular level with the presence of multisensory neurons all the way from subcortical structures such as the SC and inferior colliculus (IC) to cortical areas.</p>
<p>Synchronicity and spatial correspondence are the key determinants for multisensory integration to occur. Indeed, when two or more sensory stimuli occur at the same time and place, they lead to the perception of a unique event that is detected, identified, and eventually responded to faster than either input alone. This multisensory facilitation is reinforced by semantic congruence between the two inputs and can be modulated by attentional factors, instructions, or inter-individual differences. In contrast, cues with a slight temporal and/or spatial discrepancy between them can be significantly less effective in eliciting responses than isolated unimodal stimuli.</p>
<p>Manipulating one or more of the parameters on which the integration of two modality-specific stimuli depends is the privileged approach for the study of multisensory interactions. One major axis of research in the domain of multisensory integration has been the experimental conflict situation in which an observer receives incongruent data from two different sensory modalities and still perceives the unity of the event. Such experimental paradigms, in which observers are exposed to temporally congruent but spatially discrepant A and V targets, reveal substantial intersensory interactions. The most basic example is “perceptual fusion” in which, despite separation by as much as 10° (typically in azimuth), the two targets are perceived to be in the same place (Alais and Burr,
<xref rid="B2" ref-type="bibr">2004</xref>
; Bertelson and Radeau,
<xref rid="B5" ref-type="bibr">1981</xref>
; Godfroy et al.,
<xref rid="B27" ref-type="bibr">2003</xref>
). Determining exactly where that perceived location is requires that observers be provided with a response measure, for example, open-loop reaching, by which the V, A, and VA targets can be
<italic>egocentrically</italic>
localized. Experiments of this sort have consistently shown that localization of the spatially discrepant VA target is strongly biased toward the V target. This phenomenon is referred to as “ventriloquism” because it is the basis of the ventriloquist's ability to make his or her voice seem to emanate from the mouth of the hand-held puppet (Thurlow and Jack,
<xref rid="B64" ref-type="bibr">1973</xref>
; Bertelson,
<xref rid="B6" ref-type="bibr">1999</xref>
). It is important to note, however, that despite its typically inferior status in the presence of VA spatial conflict, audition can contribute to VA localization accuracy in the form of a small shift of the perceived location of the V stimulus toward the A stimulus (Welch and Warren,
<xref rid="B68" ref-type="bibr">1980</xref>
; Easton,
<xref rid="B20" ref-type="bibr">1983</xref>
; Radeau and Bertelson,
<xref rid="B53" ref-type="bibr">1987</xref>
; Hairston et al.,
<xref rid="B30" ref-type="bibr">2003b</xref>
).</p>
<p>The most widely accepted explanation of ventriloquism is the
<italic>Modality Precision</italic>
or
<italic>Modality Appropriateness</italic>
hypothesis, according to which the more precise of two sensory modalities will bias the less precise modality more than the reverse (Rock and Victor,
<xref rid="B57" ref-type="bibr">1964</xref>
; Welch and Warren,
<xref rid="B68" ref-type="bibr">1980</xref>
; Welch,
<xref rid="B69" ref-type="bibr">1999</xref>
). Thus it is that vision, typically more precise than audition (Fisher,
<xref rid="B23" ref-type="bibr">1968</xref>
) and based on a more spatially articulated neuroanatomy (Hubel,
<xref rid="B37" ref-type="bibr">1988</xref>
), is weighted more heavily in the perceived location of VA targets. This model also successfully explains “visual capture” (Hay et al.,
<xref rid="B31" ref-type="bibr">1965</xref>
) in which the felt position of the hand viewed through a light-displacing prism is strongly shifted in the direction of its visual locus. Further support for the visual capture theory was provided in an experiment by Easton (
<xref rid="B20" ref-type="bibr">1983</xref>
), who showed that when participants were directed to move the head from side to side, thereby increasing their auditory localizability in this dimension, ventriloquism declined.</p>
<p>Bayesian models have been shown to be powerful methods for accounting for the optimal combination of multiple sources of information. The Bayesian model makes specific predictions, among them that VA localization
<italic>precision</italic>
will exceed that of the more precise modality (typically vision) according to the formula:
<disp-formula id="E1">
<label>(1)</label>
<mml:math id="M1">
<mml:mrow>
<mml:msubsup>
<mml:mi>σ</mml:mi>
<mml:mrow>
<mml:mi>V</mml:mi>
<mml:mi>A</mml:mi>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mo>=</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:msubsup>
<mml:mi>σ</mml:mi>
<mml:mi>V</mml:mi>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:msubsup>
<mml:mi>σ</mml:mi>
<mml:mi>A</mml:mi>
<mml:mn>2</mml:mn>
</mml:msubsup>
</mml:mrow>
<mml:mrow>
<mml:msubsup>
<mml:mi>σ</mml:mi>
<mml:mi>V</mml:mi>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mo>+</mml:mo>
<mml:msubsup>
<mml:mi>σ</mml:mi>
<mml:mi>A</mml:mi>
<mml:mn>2</mml:mn>
</mml:msubsup>
</mml:mrow>
</mml:mfrac>
<mml:mo>≤</mml:mo>
<mml:mi>m</mml:mi>
<mml:mi>i</mml:mi>
<mml:mi>n</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msubsup>
<mml:mi>σ</mml:mi>
<mml:mi>V</mml:mi>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mo>,</mml:mo>
<mml:msubsup>
<mml:mi>σ</mml:mi>
<mml:mi>A</mml:mi>
<mml:mn>2</mml:mn>
</mml:msubsup>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</disp-formula>
where
<inline-formula>
<mml:math id="M2">
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>A</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:mo>,</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>V</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:mo>,</mml:mo>
</mml:math>
</inline-formula>
and
<inline-formula>
<mml:math id="M3">
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>V</mml:mi>
<mml:mi>A</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:math>
</inline-formula>
, are respectively the variances in the auditory, visual, and bimodal distributions. From the variance of each modality, one may derive, in turn, their
<italic>relative weight</italic>
s, which are the normalized reciprocal variance of the unimodal distributions (Oruç et al.,
<xref rid="B50" ref-type="bibr">2003</xref>
), with respect to the bimodal percept according to the formula:
<disp-formula id="E2">
<label>(2)</label>
<mml:math id="M4">
<mml:mrow>
<mml:msub>
<mml:mi>W</mml:mi>
<mml:mi>V</mml:mi>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mfrac>
<mml:mn>1</mml:mn>
<mml:mrow>
<mml:msubsup>
<mml:mi>σ</mml:mi>
<mml:mi>V</mml:mi>
<mml:mn>2</mml:mn>
</mml:msubsup>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
<mml:mrow>
<mml:mfrac>
<mml:mn>1</mml:mn>
<mml:mrow>
<mml:msubsup>
<mml:mi>σ</mml:mi>
<mml:mi>V</mml:mi>
<mml:mn>2</mml:mn>
</mml:msubsup>
</mml:mrow>
</mml:mfrac>
<mml:mo>+</mml:mo>
<mml:mfrac>
<mml:mn>1</mml:mn>
<mml:mrow>
<mml:msubsup>
<mml:mi>σ</mml:mi>
<mml:mi>A</mml:mi>
<mml:mn>2</mml:mn>
</mml:msubsup>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
</mml:mfrac>
<mml:mi>a</mml:mi>
<mml:mi>n</mml:mi>
<mml:mi>d</mml:mi>
<mml:mtext></mml:mtext>
<mml:msub>
<mml:mi>W</mml:mi>
<mml:mi>A</mml:mi>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>−</mml:mo>
<mml:msub>
<mml:mi>W</mml:mi>
<mml:mi>V</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</disp-formula>
where
<italic>W</italic>
<sub>
<italic>V</italic>
</sub>
represents the visual weight and
<italic>W</italic>
<sub>
<italic>A</italic>
</sub>
the auditory weight. With certain mathematical assumptions, an optimal model of sensory integration has been derived based on maximum-likelihood estimation (MLE) theory. In this framework, the optimal estimation model is a formalization of the modality precision hypothesis and makes mathematically explicit the relation between the reliability of a source and its effect on the sensory interpretation of another source. According to the MLE model of multisensory integration, a sensory source is reliable if the distribution of inferences based on that source has a relatively small variance (Yuille and Bülthoff,
<xref rid="B75" ref-type="bibr">1996</xref>
; Ernst and Banks,
<xref rid="B21" ref-type="bibr">2002</xref>
; Battaglia et al.,
<xref rid="B3" ref-type="bibr">2003</xref>
; Alais and Burr,
<xref rid="B2" ref-type="bibr">2004</xref>
). Conversely, a sensory source is regarded as unreliable if the distribution of inferences has a large variance (a noisy signal). If the noise associated with each individual sensory estimate is independent and the prior flat (all stimulus positions are equally likely), the maximum-likelihood estimate for a bimodal stimulus is a simple weighted average of the unimodal estimates, where the weights are the normalized reciprocal variances of the unimodal distributions:
<disp-formula id="E3">
<label>(3)</label>
<mml:math id="M5">
<mml:mrow>
<mml:mover accent="true">
<mml:mrow>
<mml:msub>
<mml:mi>r</mml:mi>
<mml:mrow>
<mml:mi>V</mml:mi>
<mml:mi>A</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo stretchy="true">^</mml:mo>
</mml:mover>
<mml:mo>=</mml:mo>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mover accent="true">
<mml:mrow>
<mml:msub>
<mml:mi>r</mml:mi>
<mml:mi>V</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mo stretchy="true">^</mml:mo>
</mml:mover>
<mml:msub>
<mml:mi>W</mml:mi>
<mml:mi>V</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>+</mml:mo>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mover accent="true">
<mml:mrow>
<mml:msub>
<mml:mi>r</mml:mi>
<mml:mi>A</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mo stretchy="true">^</mml:mo>
</mml:mover>
<mml:msub>
<mml:mi>W</mml:mi>
<mml:mi>A</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</disp-formula>
where
<inline-formula>
<mml:math id="M6">
<mml:mover accent="false">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>r</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>V</mml:mi>
<mml:mi>A</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo>^</mml:mo>
</mml:mover>
</mml:math>
</inline-formula>
,
<inline-formula>
<mml:math id="M7">
<mml:mover accent="false">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>r</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>V</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo>^</mml:mo>
</mml:mover>
<mml:mo>,</mml:mo>
</mml:math>
</inline-formula>
and
<inline-formula>
<mml:math id="M8">
<mml:mover accent="false">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>r</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>A</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo>^</mml:mo>
</mml:mover>
</mml:math>
</inline-formula>
, are, respectively, the bimodal, visual, and auditory location estimates, and
<italic>W</italic>
<sub>
<italic>V</italic>
</sub>
and
<italic>W</italic>
<sub>
<italic>A</italic>
</sub>
are the weights of the visual and auditory stimuli.</p>
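Equations (1)–(3) can be illustrated numerically; the Gaussian variances below are arbitrary example values chosen for the sketch, not the ones measured in this experiment:

```python
def mle_combine(r_v, var_v, r_a, var_a):
    """Inverse-variance (MLE) combination of a visual and an auditory
    location estimate, following Equations (1)-(3)."""
    w_v = (1.0 / var_v) / (1.0 / var_v + 1.0 / var_a)  # Equation (2)
    w_a = 1.0 - w_v
    r_va = w_v * r_v + w_a * r_a                       # Equation (3)
    var_va = (var_v * var_a) / (var_v + var_a)         # Equation (1)
    return r_va, var_va, w_v, w_a

# Example: vision sigma = 2 deg (variance 4), audition sigma = 6 deg
# (variance 36). Then w_v = 0.9 and var_va = 3.6 < min(4, 36): the
# bimodal estimate lies close to the visual one yet is more precise
# than either cue alone.
```

Note that the combined variance is always at most that of the better unimodal cue, which is the quantitative content of the precision prediction tested here.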
<p>This relation allows quantitative predictions to be made, for example, on the spatial distribution of adaptation to VA displacements. Within this framework, visual capture is simply a case in which the visual signal shows less variability in error and is assigned a weight of one as compared to the less reliable cue (audition), which is assigned a weight of zero. For spatially and temporally coincident A and V stimuli, and assuming that the variance of the bimodal distribution is smaller than that of either modality alone (Witten and Knudsen,
<xref rid="B73" ref-type="bibr">2005</xref>
), multisensory localization trials perceived as unified should be less variable than, and at least as accurate as, localizations made in the best unimodal condition. It is of interest to note that Ernst and Bülthoff (
<xref rid="B22" ref-type="bibr">2004</xref>
) considered that the term
<italic>Modality Precision</italic>
or
<italic>Modality Appropriateness</italic>
is misleading because it is not the modality itself or the stimulus that dominates. Rather, because the dominance is determined by the estimate and how reliably it can be derived within a specific modality from a given stimulus, the term “Estimate Precision” would probably be more appropriate.</p>
<p>Different strategies for testing intersensory interactions can be distinguished: (a) impose a spatial discrepancy between the two modalities (Bertelson and Radeau,
<xref rid="B5" ref-type="bibr">1981</xref>
), (b) use spatially congruent stimuli but reduce the precision of the visual modality by degrading it (Battaglia et al.,
<xref rid="B3" ref-type="bibr">2003</xref>
; Hairston et al.,
<xref rid="B29" ref-type="bibr">2003a</xref>
; Alais and Burr,
<xref rid="B2" ref-type="bibr">2004</xref>
), (c) impose a temporal discrepancy between the two modalities (Colonius et al.,
<xref rid="B15" ref-type="bibr">2009</xref>
), and (d) capitalize on inherent differences in localization precision between the modalities (Warren et al.,
<xref rid="B67" ref-type="bibr">1983</xref>
). In the present research, we used the last of these approaches by examining VA localization precision and accuracy as a function of the eccentricity and direction of physically and spatially congruent V and A targets. The effect of spatial determinants (such as eccentricity and direction) on VA integration has already been investigated, although infrequently and with many restrictions. For eccentricity, Hairston (Hairston et al.,
<xref rid="B30" ref-type="bibr">2003b</xref>
) showed that (1) increasing distance from the midline was associated with more variability in localizing temporally and spatially congruent VA targets, but not in localizing A targets, and (2) that the variability in localizing spatially coincident multisensory targets was inversely correlated with the average bias obtained with spatially discrepant A and V stimuli. They did not report a reduction in localization variability in the bimodal condition. A possible explanation for the lack of multisensory improvement in this study is that the task was limited to target locations in azimuth and, hence, also to responses in azimuth, reducing the uncertainty of the position to one dimension. Experiments on VA spatial integration have almost always been limited to location in azimuth, with the implicit assumption that their results apply equally across the entire 2D field. Very few studies have investigated VA interactions in 2D (azimuth and elevation cues). An early experiment by Thurlow and Jack (
<xref rid="B64" ref-type="bibr">1973</xref>
) compared VA fusion in azimuth vs. in elevation, taking advantage of the inherent differences in auditory precision between these two directions. Consistent with the MLE, fusion was greater in elevation, where auditory localization precision is relatively poor, than in azimuth (results confirmed and extended by Godfroy et al.,
<xref rid="B27" ref-type="bibr">2003</xref>
). Studies of saccadic eye movements to VA targets have also demonstrated a role of direction in VA interactions (Heuermann and Colonius,
<xref rid="B33" ref-type="bibr">2001</xref>
).</p>
</sec>
<sec>
<title>The present research</title>
<p>Besides its greater ecological validity, a 2D experimental paradigm provides the opportunity to investigate the effect of spatial determinants on multisensory integration. The present research compared the effect of direction and eccentricity on the localization of spatially congruent visual-auditory stimuli. Instead of experimentally manipulating the resolution of the A and V stimuli, we capitalized on the previously described variations in localization precision and accuracy as a function of spatial location. The participants were presented with V, A, and physically congruent VA targets in each of an array of 35 spatial locations in the 2D frontal field and were asked to indicate their perceived egocentric location by means of a mouse-controlled pointer in an open-loop condition (i.e., without any direct feedback of sensory-motor input–output). Of interest were the effects of spatial direction (azimuth and elevation) and eccentricity on localization
<italic>precision</italic>
and
<italic>accuracy</italic>
and how these effects may predict localization performance for the VA targets. Following Heffner's conventions (Heffner and Heffner,
<xref rid="B32" ref-type="bibr">2005</xref>
), we distinguished between localization precision, known as the statistical (variable) error (
<italic>VE</italic>
) and the localization bias (sometimes called localization accuracy), or the systematic (constant) error (
<italic>CE</italic>
). The specific predictions of the experiment were:</p>
<sec>
<title>Precision (VE)</title>
<p>Based on the MLE model, localization precision for the VA targets will exceed that of the more precise modality, which is expected to be vision, by amounts that vary across the 2D frontal field. Specifically, the contribution of the visual modality to bimodal precision should be greater toward the center of the visual field than in the periphery. Response variability was also used to provide insight into the performance of the sensorimotor chain. Indeed, a greater level of variability in the estimate of distance (eccentricity) vs. direction (azimuth vs. elevation) would result in a radial pattern of variable error eigenvectors (noise in the polar representation of distance and direction). Conversely, an independent estimate of target distance and direction would lead to an increase in variability in the
<italic>X</italic>
or in the
<italic>Y</italic>
direction, and cause variable errors to align gradually with the
<italic>X</italic>
or the
<italic>Y</italic>
-axis, respectively.</p>
</sec>
<sec>
<title>Accuracy (CE)</title>
<p>In the absence of conflict between the visual and auditory stimuli, the bimodal VA accuracy will be equivalent to that of the most precise modality, i.e., vision. However, based on the expected differences in precision for A and V in the center and in the periphery, we expected that the contribution of vision in the periphery would be reduced and that of audition increased, due to the predicted narrowing of the gap between visual and auditory precision in this region. For direction, given that A accuracy was greater in the upper than in the lower hemifield, it was expected that the differences in accuracy between A and V in the upper hemifield would be minimal, while remaining substantial in the lower hemifield.</p>
</sec>
</sec>
</sec>
<sec sec-type="materials|methods" id="s2">
<title>Materials and methods</title>
<sec>
<title>Participants</title>
<p>Three women and seven men, aged 22–50 years, participated in the experiment. They included two of the authors (
<italic>MGC</italic>
and
<italic>PMBS</italic>
). All participants possessed a minimum of 20/20 visual acuity (corrected, if necessary) and normal audiometric capacities, allowing for typical age-related differences. They were informed of the overall nature of the experiment. With the exception of the authors, they were unaware of the hypotheses being tested and the details of the stimulus configuration to which they would be exposed.</p>
<p>This study was carried out in accordance with the recommendations of the French Comite Consultatif de Protection des Personnes dans la Recherche Biomédicale (CPPPRB) Paris Cochin and received approval from the CPPPRB. All subjects gave written informed consent in accordance with the Declaration of Helsinki.</p>
</sec>
<sec>
<title>Apparatus</title>
<p>The experimental apparatus (Figure
<xref ref-type="fig" rid="F1">1</xref>
) was similar to that used in an earlier study by Godfroy (Godfroy et al.,
<xref rid="B27" ref-type="bibr">2003</xref>
). The participant sat in a chair, head position restrained by a chinrest, in front of a vertical, semi-circular screen with a radius of 120 cm and a height of 145 cm. The distance between the participant's eyes and the screen was 120 cm. A liquid crystal Phillips Hover SV10 video-projector located above and behind the participant, 245 cm from the screen, projected visual stimuli that covered a frontal range of 80° in azimuth and 60° in elevation (Figure
<xref ref-type="fig" rid="F1">1</xref>
, center). The screen was acoustically transparent and served as a surface upon which to project the visual stimuli, which included VA targets, a fixation cross, and a virtual response pointer (a 1°-diameter cross), referred to as an exocentric technique. Sounds were presented via an array of 35 loudspeakers (10 cm diameter Fostex FE103 Sigma) located directly behind (<5 cm) the screen in a 7 × 5 matrix, with a 10° separation between adjacent speakers in both azimuth and elevation (Figure
<xref ref-type="fig" rid="F1">1</xref>
, left). They were not visible to the participant and their orientation was designed to create a virtual sphere centered on the observer's head at eye level.</p>
<fig id="F1" position="float">
<label>Figure 1</label>
<caption>
<p>
<bold>Experimental setup. Left:</bold>
the 35 loudspeakers arranged in a 7 × 5 matrix, with a 10° separation between adjacent speakers both in azimuth and in elevation.
<bold>Center:</bold>
a participant, head position restrained by a chinrest, is facing the acoustically transparent semi-cylindrical screen. The green area represents the 80° by 60° surface of projection. Red stars depict the location of the 35 targets (±30° azimuth, ±20° in elevation). Note that the reference axes represented here are not visible during the experiment.
<bold>Right:</bold>
the leg-mounted trackball is attached to the leg of the participant using Velcro straps.</p>
</caption>
<graphic xlink:href="fnins-09-00311-g0001"></graphic>
</fig>
</sec>
<sec>
<title>The targets</title>
<p>The V target was a spot of light (1° of visual angle) with a luminance of 20 cd/m
<sup>2</sup>
(background ca. 1.5 cd/m
<sup>2</sup>
) presented for 100 ms. The A target was a 100 ms burst of pink noise (broadband noise with constant intensity per octave) that had a 20 ms rise and fall time (to avoid abrupt onset and offset effects) and a 60-ms plateau (broadband sounds have been shown to be highly localizable and less biased, Blauert,
<xref rid="B8" ref-type="bibr">1983</xref>
). The stimulus duration of 100 ms was chosen based on evidence that auditory targets with durations below 80 ms are poorly localized in the vertical dimension (Hofman and Van Opstal,
<xref rid="B34" ref-type="bibr">1998</xref>
). The stimulus A-weighted sound pressure level was calibrated to 49 dB using a precision integrating sound level meter (Brüel and Kjäer Model 2230) at the location of the participant's ear (the relative intensity of the A and V stimuli was tested by a subjective equalization test with three participants). The average background noise level (generated by the video-projector) was 38 dB.</p>
<p>Each light spot was projected to the exact center of its corresponding loudspeaker and thus the simultaneous activation and deactivation of the two stimuli created a spatially and temporally congruent VA target. The 35 speakers and their associated light spots were positioned along the azimuth at 0°, ± 10°, ± 20°, and ± 30° (positive rightward) from the SMP and along the vertical dimension at 0°, ± 10°, and ± 20° (positive upward) relative to the HMP. The locations of the V, A, and VA targets are depicted in Figure
<xref ref-type="fig" rid="F1">1</xref>
, center.</p>
</sec>
<sec>
<title>Procedure</title>
<p>The participants performed a pointing task to remembered A, V, and VA targets in each of the 35 target locations distributed over the 80° by 60° frontal field, indicating the perceived location of each target by directing a visual pointer to the apparent location of the stimulus via a leg-worn computer trackball, as seen in Figure
<xref ref-type="fig" rid="F1">1</xref>
. Besides providing an absolute rather than a relative measure of egocentric location, the advantage of this procedure over those in which the hand, head, or eyes are directed at the targets is that it avoids both (a) the confounding of the mental transformations of sensory target location with the efferent and/or proprioceptive information from the motor system and (b) potential distortions from the use of body-centered coordinates (Brungart et al.,
<xref rid="B11" ref-type="bibr">2000</xref>
; Seeber,
<xref rid="B60" ref-type="bibr">2003</xref>
).</p>
<p>Prior to each session, the chair and the chinrest were adjusted to align the participant's head and eyes with the HMP and SMP. After initial instruction and practice, the test trials were initiated, each beginning with the presentation of the fixation cross at the center (0°, 0°) of the semicircular screen for a random period of 500–1500 ms. The participants were instructed to fixate on the cross until its extinction. Simultaneous with the offset of the fixation cross, the V, A, or VA target (randomized) appeared for 100 ms at one of its 35 potential locations (randomized). Immediately following target offset, a visual pointer appeared off to one side of the target in a random direction (0–360°) and by a random amount (2.5–10° of visual angle). The participant was instructed to move the pointer, using a leg-mounted trackball, to the perceived target location (see Figure
<xref ref-type="fig" rid="F1">1</xref>
, right). Because the target was extinguished before the localization response was initiated, participants received no visual feedback about their performance. After directing the pointer to the remembered location of the target, the participant validated the response by a click of the mouse, which terminated the trial and launched the next after a 1500 ms interval. The
<italic>x</italic>
/
<italic>y</italic>
coordinates of the pointer position (defined as the position of the pointer at the termination of the pointing movement) were registered with a spatial resolution of 0.05 arcmin. Data were obtained from 1050 trials (10 repetitions of each of the 3 modalities × 35 target positions = 1050) distributed over 6 experimental sessions of 175 trials each.</p>
<sec>
<title>The measures of precision and accuracy</title>
<p>The raw data consisted of the 2D coordinates of the terminal position of the pointer relative to a given V, A, or VA target. Outliers (± 3 SD from the mean) were removed for each target location, each modality, and each subject to control for intra-individual variability (0.9% for the A condition, 1.3% for the V condition, and 1.4% for the VA condition). To test the hypothesis of collinearity between the
<italic>x</italic>
and
<italic>y</italic>
components of the localization responses, a hierarchical multiple regression analysis was performed. Tests for multicollinearity indicated that a very low level of multicollinearity was present [variance inflation factor (
<italic>VIF</italic>
) = 1 for the 3 conditions]. Results of the regression analysis confirmed that the data were governed by a bivariate normal distribution (i.e., 2 dimensions were observed).</p>
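The VIF reported here can be computed directly from the squared correlation between the two response components. A minimal sketch (the helper name is ours, not part of the study's analysis code):

```python
import numpy as np

def vif(x, y):
    """Variance inflation factor between two variables:
    VIF = 1 / (1 - R^2), where R^2 is their squared Pearson correlation.
    A VIF of 1 indicates no collinearity between the components."""
    r = np.corrcoef(x, y)[0, 1]
    return 1.0 / (1.0 - r ** 2)

# Hypothetical example: uncorrelated x and y components yield VIF = 1,
# consistent with treating the two dimensions as independent.
print(vif([1.0, 2.0, 3.0, 4.0], [-1.0, 1.0, 1.0, -1.0]))
```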
<p>To analyze the endpoint distributions, we determined for each target and each modality the covariance matrix of all the 2D responses (
<italic>x</italic>
and
<italic>y</italic>
components). The 2D variance (
<inline-formula>
<mml:math id="M9">
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mi>y</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:math>
</inline-formula>
) represents the sum of the variances in the two orthogonal directions (
<inline-formula>
<mml:math id="M10">
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mi>y</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:mo>=</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:mo>+</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>y</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:math>
</inline-formula>
). The distributions were visualized by 95% confidence ellipses. We calculated ellipse orientation (θ
<sub>
<italic>a</italic>
</sub>
) as the orientation of the main eigenvector (
<italic>a</italic>
), which represents the direction of maximal dispersion. The
<italic>orientation deviations</italic>
were calculated as the difference between the ellipse orientation and the direction of the target. Because an axis is an undirected line, with no reason to distinguish one end from the other, the data were computed within a 0–180° range. A measure of
<italic>anisotropy</italic>
of the distributions, ε, was provided, a ratio value close to 1 indicating no preferred direction, and a ratio value close to 0 indicating a preferred direction:
<disp-formula id="E4">
<label>(4)</label>
<mml:math id="M11">
<mml:mrow>
<mml:mtext> </mml:mtext>
<mml:mi>ε</mml:mi>
<mml:mo>=</mml:mo>
<mml:msqrt>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo></mml:mo>
<mml:msup>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>b</mml:mi>
<mml:mo>/</mml:mo>
<mml:mi>a</mml:mi>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
</mml:msqrt>
</mml:mrow>
</mml:math>
</disp-formula>
</p>
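The ellipse orientation θ<italic>a</italic> and the anisotropy ε of Equation (4) follow directly from an eigendecomposition of the covariance matrix of the endpoints. A sketch with synthetic data, applying Equation (4) literally (the helper name is ours):

```python
import numpy as np

def ellipse_stats(points):
    """Ellipse orientation and anisotropy of a 2D endpoint distribution.

    points: (N, 2) array of x/y localization responses for one target.
    Returns (theta_deg, eps): orientation of the main eigenvector in the
    [0, 180) degree range, and eps = sqrt(1 - (b/a)**2), where a and b
    are the major and minor semi-axes (square roots of the eigenvalues)."""
    pts = np.asarray(points, dtype=float)
    cov = np.cov(pts.T)                    # 2x2 covariance matrix
    evals, evecs = np.linalg.eigh(cov)     # eigenvalues in ascending order
    a = np.sqrt(evals[1])                  # major semi-axis
    b = np.sqrt(evals[0])                  # minor semi-axis
    major = evecs[:, 1]                    # main eigenvector (max dispersion)
    theta = np.degrees(np.arctan2(major[1], major[0])) % 180.0
    eps = np.sqrt(1.0 - (b / a) ** 2)
    return theta, eps
```

For a distribution elongated along the horizontal axis, theta is near 0° and eps is close to its extreme value; a circular distribution (b = a) gives the opposite extreme.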
<p>For the measure of localization accuracy, the difference between the actual 2D target position and the centroid of the distributions was computed, providing an error vector
<inline-formula>
<mml:math id="M12">
<mml:mover class="overrightarrow">
<mml:mrow>
<mml:mi>a</mml:mi>
</mml:mrow>
<mml:mo></mml:mo>
</mml:mover>
</mml:math>
</inline-formula>
(Zwiers et al.,
<xref rid="B76" ref-type="bibr">2001</xref>
) that can be analyzed along its length (or amplitude,
<italic>r</italic>
) and angular direction (α). The mean direction of the error vectors was compared to the target direction, providing a measure of the
<italic>direction deviation</italic>
. In this study, we assumed that (1) all the target positions were equally likely (the participants had no prior assumption regarding the number and spatial configuration of the targets) and (2) the noise corrupting the visual signal was independent of that corrupting the auditory signal. Because the present data were governed by a 2D normal distribution, we used a method described previously by Van Beers (Van Beers et al.,
<xref rid="B65" ref-type="bibr">1999</xref>
), which takes into account the “direction” of the 2D distribution. According to Winer (Winer et al.,
<xref rid="B72" ref-type="bibr">1991</xref>
), a 2D normal distribution can be written as:
<disp-formula id="E5">
<label>(5)</label>
<mml:math id="M13">
<mml:mtable columnalign="left">
<mml:mtr>
<mml:mtd>
<mml:mi>P</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>y</mml:mi>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mi>d</mml:mi>
<mml:mi>x</mml:mi>
<mml:mi>d</mml:mi>
<mml:mi>y</mml:mi>
<mml:mo>=</mml:mo>
<mml:mfrac>
<mml:mn>1</mml:mn>
<mml:mrow>
<mml:mn>2</mml:mn>
<mml:mi>π</mml:mi>
<mml:msub>
<mml:mi>σ</mml:mi>
<mml:mi>x</mml:mi>
</mml:msub>
<mml:msub>
<mml:mi>σ</mml:mi>
<mml:mi>y</mml:mi>
</mml:msub>
<mml:msqrt>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo></mml:mo>
<mml:msup>
<mml:mi>ρ</mml:mi>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
</mml:msqrt>
</mml:mrow>
</mml:mfrac>
<mml:mi>e</mml:mi>
<mml:mi>x</mml:mi>
<mml:mi>p</mml:mi>
<mml:mrow>
<mml:mo>[</mml:mo>
<mml:mrow>
<mml:mo></mml:mo>
<mml:mfrac>
<mml:mn>1</mml:mn>
<mml:mrow>
<mml:mn>2</mml:mn>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo></mml:mo>
<mml:msup>
<mml:mi>ρ</mml:mi>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:mfrac>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mfrac>
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mo></mml:mo>
<mml:msub>
<mml:mi>μ</mml:mi>
<mml:mi>x</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
<mml:mrow>
<mml:msubsup>
<mml:mi>σ</mml:mi>
<mml:mi>x</mml:mi>
<mml:mn>2</mml:mn>
</mml:msubsup>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:mrow>
<mml:mtext></mml:mtext>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mtext>                                 </mml:mtext>
<mml:mo>+</mml:mo>
<mml:mrow>
<mml:mrow>
<mml:mfrac>
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>y</mml:mi>
<mml:mo></mml:mo>
<mml:msub>
<mml:mi>μ</mml:mi>
<mml:mi>y</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
<mml:mrow>
<mml:msubsup>
<mml:mi>σ</mml:mi>
<mml:mi>y</mml:mi>
<mml:mn>2</mml:mn>
</mml:msubsup>
</mml:mrow>
</mml:mfrac>
<mml:mo></mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mn>2</mml:mn>
<mml:mi>ρ</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mo></mml:mo>
<mml:msub>
<mml:mi>μ</mml:mi>
<mml:mi>x</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>y</mml:mi>
<mml:mo></mml:mo>
<mml:msub>
<mml:mi>μ</mml:mi>
<mml:mi>y</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mi>σ</mml:mi>
<mml:mi>x</mml:mi>
</mml:msub>
<mml:msub>
<mml:mi>σ</mml:mi>
<mml:mi>y</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
<mml:mo>]</mml:mo>
</mml:mrow>
<mml:mi>d</mml:mi>
<mml:mi>x</mml:mi>
<mml:mi>d</mml:mi>
<mml:mi>y</mml:mi>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:math>
</disp-formula>
where
<inline-formula>
<mml:math id="M14">
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:math>
</inline-formula>
and
<inline-formula>
<mml:math id="M15">
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>y</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:math>
</inline-formula>
are the variances in the orthogonal
<italic>x</italic>
and
<italic>y</italic>
directions, μ
<sub>
<italic>x</italic>
</sub>
and μ
<sub>
<italic>y</italic>
</sub>
are the means in the
<italic>x</italic>
and
<italic>y</italic>
directions, and ρ is the correlation coefficient. The parameters of the bimodal VA distribution
<italic>P</italic>
<sub>
<italic>VA</italic>
</sub>
(
<italic>x, y</italic>
), i.e.,
<inline-formula>
<mml:math id="M16">
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mi>V</mml:mi>
<mml:mi>A</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:math>
</inline-formula>
,
<inline-formula>
<mml:math id="M17">
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>y</mml:mi>
<mml:mi>V</mml:mi>
<mml:mi>A</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:math>
</inline-formula>
, μ
<sub>
<italic>xVA</italic>
</sub>
, and μ
<sub>
<italic>yVA</italic>
</sub>
were computed according to the equations in Appendix 1. The bimodal variance
<inline-formula>
<mml:math id="M18">
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:msubsup>
<mml:mi>σ</mml:mi>
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mi>y</mml:mi>
<mml:mi>V</mml:mi>
<mml:mi>A</mml:mi>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:math>
</inline-formula>
, the estimated variance (
<inline-formula>
<mml:math id="M19">
<mml:mover accent="false">
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mi>y</mml:mi>
<mml:mi>V</mml:mi>
<mml:mi>A</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
<mml:mo>^</mml:mo>
</mml:mover>
</mml:math>
</inline-formula>
), the error vector amplitudes (
<italic>r</italic>
) and directions (α) for each condition were then derived from the initial parameters.</p>
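The constant-error vector and its polar decomposition amount to the following computation (a minimal sketch; the function name is ours):

```python
import numpy as np

def error_vector(target, responses):
    """Constant error for one target: the vector from the target to the
    centroid of its responses, returned as length r (amplitude) and
    direction alpha in degrees, counter-clockwise from the positive x-axis."""
    centroid = np.mean(np.asarray(responses, dtype=float), axis=0)
    dx, dy = centroid - np.asarray(target, dtype=float)
    r = float(np.hypot(dx, dy))
    alpha = float(np.degrees(np.arctan2(dy, dx)) % 360.0)
    return r, alpha

# Hypothetical example: responses centered at (3, 4) for a target at the
# origin give an error vector of length 5 directed up and to the right.
r, alpha = error_vector([0.0, 0.0], [[3.0, 4.0], [3.0, 4.0]])
```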
<p>Last, we provided a measure of multisensory integration (MSI) by calculating the redundancy gain (
<italic>RG</italic>
, Charbonneau et al.,
<xref rid="B14" ref-type="bibr">2013</xref>
), assuming vision to be the more effective unisensory stimulus:
<disp-formula id="E6">
<label>(6)</label>
<mml:math id="M20">
<mml:mrow>
<mml:mi>R</mml:mi>
<mml:mi>G</mml:mi>
<mml:mo>=</mml:mo>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mfrac>
<mml:mrow>
<mml:msubsup>
<mml:mi>σ</mml:mi>
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mi>y</mml:mi>
<mml:mi>V</mml:mi>
<mml:mi>A</mml:mi>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msubsup>
</mml:mrow>
<mml:mrow>
<mml:msubsup>
<mml:mi>σ</mml:mi>
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mi>y</mml:mi>
<mml:mi>V</mml:mi>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msubsup>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>×</mml:mo>
<mml:mn>100</mml:mn>
</mml:mrow>
</mml:math>
</disp-formula>
</p>
<p>Specifically, this measure relates the magnitude of the response to the multisensory stimulus to that evoked by the more effective of the two modality-specific stimulus components. According to the principle of inverse effectiveness (
<italic>IE</italic>
, Stein and Meredith,
<xref rid="B62" ref-type="bibr">1993</xref>
), the reliability of the best sensory estimate and
<italic>RG</italic>
are inversely correlated, i.e., the less reliable the best single stimulus, the greater the
<italic>RG</italic>
gained by adding another stimulus.</p>
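Equation (6) reduces to a one-line computation; a sketch with hypothetical variances:

```python
def redundancy_gain(var_xy_va, var_xy_v):
    """Redundancy gain (RG): the bimodal 2D variance expressed as a
    percentage of the 2D variance of the more effective (visual) modality.
    Values below 100 indicate a multisensory reduction in variability."""
    return (var_xy_va / var_xy_v) * 100.0

# Hypothetical example: the bimodal variance is half the visual variance,
# yielding an RG of 50.
rg = redundancy_gain(0.5, 1.0)
```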
</sec>
</sec>
<sec>
<title>The statistical analyses</title>
<p>To allow for comparison between directions, targets located at ±30° eccentricity in azimuth were disregarded. Univariate and repeated measures analyses of variance (ANOVAs) were used to test for the effects of modality (A, V, VA, MLE), direction [
<italic>X</italic>
(azimuth = horizontal),
<italic>Y</italic>
(elevation = vertical)] and absolute eccentricity value (0, 10, 14, 20, 22, and 28°). Two-tailed
<italic>t</italic>
-tests were conducted with Fisher's PLSD (for univariate analyses) and with the Bonferroni/Dunn correction (for repeated measures) for exploring promising
<italic>ad hoc</italic>
target groupings. These included the comparison between lower hemifield, HMP and upper hemifield on one hand, and left hemifield, SMP and right hemifield on the other hand. Simple and multiple linear regressions were used to determine the performance predictors.</p>
<p>For the measures of the angular/vectorial data [ellipse mean main orientation (θ
<sub>
<italic>a</italic>
</sub>
) and vector mean direction (α)], linear regressions were used to assess the fit with the 24 target orientations/directions [the responses associated with the (0°, 0°) target were excluded since it has, by definition, no direction]. The differences between target and response orientation/direction were computed, allowing for repeated measures between conditions. All of the effects described here were statistically significant at
<italic>p</italic>
< 0.05 or better.</p>
</sec>
</sec>
<sec sec-type="results" id="s3">
<title>Results</title>
<sec>
<title>Unimodal auditory and visual localization performance</title>
<p>The characteristics of the local A and V precision, accuracy, and distortion are illustrated in Figure
<xref ref-type="fig" rid="F2">2</xref>
and summarized in Table
<xref ref-type="table" rid="T1">1</xref>
.</p>
<fig id="F2" position="float">
<label>Figure 2</label>
<caption>
<p>
<bold>Localization Precision (left), Accuracy (center) and Local Distortion (right) for the three modalities of presentation of the targets [top to bottom: Auditory, Visual, Visual-Auditory, and predicted VA (MLE)]</bold>
. The precision for each of the 25 target positions is depicted by confidence ellipses with the maximum eigenvector (
<italic>a</italic>
) representing the direction of maximal dispersion. Accuracy: stars represent each of the 25 response centroids linked to its respective target, illustrating the main direction and length of the error vector. Local Distortion: response centroids from adjacent targets are linked to provide a visualization of the fidelity with which the relative spatial organization of the targets is maintained in the configuration of the final pointing positions.</p>
</caption>
<graphic xlink:href="fnins-09-00311-g0002"></graphic>
</fig>
<table-wrap id="T1" position="float">
<label>Table 1</label>
<caption>
<p>
<bold>Characteristics of observed A, V, VA, and predicted (MLE) measures of localization precision and accuracy (mean = μ,
<italic>sd</italic>
= σ)</bold>
.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th rowspan="1" colspan="1"></th>
<th valign="top" align="center" rowspan="1" colspan="1">
<bold>A</bold>
</th>
<th valign="top" align="center" rowspan="1" colspan="1">
<bold>V</bold>
</th>
<th valign="top" align="center" rowspan="1" colspan="1">
<bold>VA</bold>
</th>
<th valign="top" align="center" rowspan="1" colspan="1">
<bold>MLE</bold>
</th>
<th valign="top" align="center" rowspan="1" colspan="1">
<bold>A</bold>
</th>
<th valign="top" align="center" rowspan="1" colspan="1">
<bold>V</bold>
</th>
<th valign="top" align="center" rowspan="1" colspan="1">
<bold>VA</bold>
</th>
<th valign="top" align="center" rowspan="1" colspan="1">
<bold>MLE</bold>
</th>
</tr>
<tr>
<th rowspan="1" colspan="1"></th>
<th valign="top" align="center" rowspan="1" colspan="1">
<bold>μ (σ)</bold>
</th>
<th valign="top" align="center" rowspan="1" colspan="1">
<bold>μ (σ)</bold>
</th>
<th valign="top" align="center" rowspan="1" colspan="1">
<bold>μ (σ)</bold>
</th>
<th valign="top" align="center" rowspan="1" colspan="1">
<bold>μ (σ)</bold>
</th>
<th valign="top" align="center" rowspan="1" colspan="1">
<bold>μ (σ)</bold>
</th>
<th valign="top" align="center" rowspan="1" colspan="1">
<bold>μ (σ)</bold>
</th>
<th valign="top" align="center" rowspan="1" colspan="1">
<bold>μ (σ)</bold>
</th>
<th valign="top" align="center" rowspan="1" colspan="1">
<bold>μ (σ)</bold>
</th>
</tr>
</thead>
<tbody>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="center" colspan="4" rowspan="1">
<bold>Variable error (precision)</bold>
</td>
<td valign="top" align="center" colspan="4" rowspan="1">
<bold>Constant error (accuracy)</bold>
</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1">Total (
<italic>N</italic>
= 25)</td>
<td valign="top" align="center" rowspan="1" colspan="1">5.73 (0.79)</td>
<td valign="top" align="center" rowspan="1" colspan="1">1.78 (0.50)</td>
<td valign="top" align="center" rowspan="1" colspan="1">1.46 (0.37)</td>
<td valign="top" align="center" rowspan="1" colspan="1">1.53 (0.36)</td>
<td valign="top" align="center" rowspan="1" colspan="1">4.03 (2.37)</td>
<td valign="top" align="center" rowspan="1" colspan="1">2.00 (0.87)</td>
<td valign="top" align="center" rowspan="1" colspan="1">1.67 (0.72)</td>
<td valign="top" align="center" rowspan="1" colspan="1">1.94 (0.69)</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td valign="top" align="center" colspan="4" rowspan="1">
<bold>Orientation deviation</bold>
</td>
<td valign="top" align="center" colspan="4" rowspan="1">
<bold>Direction deviation</bold>
</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1">Total (
<italic>N</italic>
= 25)</td>
<td valign="top" align="center" rowspan="1" colspan="1">39.14 (28.66)</td>
<td valign="top" align="center" rowspan="1" colspan="1">13.05 (11.57)</td>
<td valign="top" align="center" rowspan="1" colspan="1">13.57 (13.52)</td>
<td valign="top" align="center" rowspan="1" colspan="1">25.63 (22.27)</td>
<td valign="top" align="center" rowspan="1" colspan="1">43.02 (27.15)</td>
<td valign="top" align="center" rowspan="1" colspan="1">16.74 (15.91)</td>
<td valign="top" align="center" rowspan="1" colspan="1">12.74 (11.59)</td>
<td valign="top" align="center" rowspan="1" colspan="1">16.52 (14.15)</td>
</tr>
</tbody>
</table>
</table-wrap>
<sec>
<title>Auditory</title>
<p>It can be seen from Figure
<xref ref-type="fig" rid="F2">2</xref>
that auditory localization was characterized by anisotropic response distributions oriented upward over the entire field. The deviation between the target direction and the main orientation of the response ellipse was highest in azimuth and lowest in elevation (
<italic>X</italic>
: μ = 86.83°,
<italic>sd</italic>
= 2.40;
<italic>Y</italic>
: μ = 1.93°,
<italic>sd</italic>
= 0.57;
<italic>X,Y</italic>
:
<italic>t</italic>
= 84.89,
<italic>p</italic>
< 0.0001, see Figure
<xref ref-type="fig" rid="F3">3</xref>
, left). These scatter properties emphasize the fact that azimuth and elevation localization are dissociable processes (see Introduction). Note also that the ellipses were narrower in the SMP than elsewhere (ε: SMP = 0.23; periphery = 0.50; SMP, periphery:
<italic>t</italic>
= −0.26,
<italic>p</italic>
< 0.0001), as seen in Figures
<xref ref-type="fig" rid="F2">2</xref>
,
<xref ref-type="fig" rid="F3">3</xref>
, right. Auditory localization precision was statistically equivalent in the
<italic>X</italic>
and
<italic>Y</italic>
directions (
<italic>X</italic>
: μ = 5.52,
<italic>sd</italic>
= 0.72;
<italic>Y</italic>
: μ = 5.34,
<italic>sd</italic>
= 1.26;
<italic>X</italic>
,
<italic>Y</italic>
:
<italic>t</italic>
= 0.17,
<italic>p</italic>
= 0.76). There was no significant effect of eccentricity [
<italic>X: F</italic>
<sub>(5, 19)</sub>
= 0.70,
<italic>p</italic>
= 0.62].</p>
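The ellipse orientation and ratio statistics reported throughout this section can be derived from an eigendecomposition of the 2D response covariance. The sketch below is illustrative only: the function and the simulated scatter (elongated in elevation, as observed for auditory targets) are hypothetical, not the study's analysis code.

```python
# Illustrative sketch (not the authors' code): derive a variance
# ellipse's main orientation and axis ratio from 2D localization
# responses via eigendecomposition of their covariance matrix.
import math
import numpy as np

def variance_ellipse(xy):
    """Return (orientation_deg, axis_ratio) of a 2D response scatter.

    orientation_deg: angle of the main eigenvector in [0, 180) degrees;
    axis_ratio: minor/major standard-deviation ratio (1 = isotropic,
    values near 0 = strongly elongated).
    """
    cov = np.cov(np.asarray(xy, dtype=float).T)
    eigvals, eigvecs = np.linalg.eigh(cov)      # eigenvalues ascending
    major = eigvecs[:, -1]                      # main eigenvector
    orientation = math.degrees(math.atan2(major[1], major[0])) % 180.0
    ratio = math.sqrt(eigvals[0] / eigvals[-1])
    return orientation, ratio

# Simulated scatter elongated along elevation (hypothetical values):
rng = np.random.default_rng(0)
pts = np.column_stack([rng.normal(0.0, 1.0, 500),   # azimuth responses
                       rng.normal(0.0, 5.0, 500)])  # elevation responses
theta, ratio = variance_ellipse(pts)                # theta near 90 deg
```

An ellipse oriented near 90° with a small axis ratio corresponds to the upward-elongated auditory response distributions described above.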
<fig id="F3" position="float">
<label>Figure 3</label>
<caption>
<p>
<bold>From left to right: Ellipse Orientation Deviation, Error Vector Orientation Deviation, and Ellipse Ratio in the polar coordinate system</bold>
.</p>
</caption>
<graphic xlink:href="fnins-09-00311-g0003"></graphic>
</fig>
<p>Auditory localization accuracy was characterized by significant undershoot of the responses in elevation, as seen in Figures
<xref ref-type="fig" rid="F2">2</xref>
,
<xref ref-type="fig" rid="F3">3</xref>
, center, where the error vector directions are opposite to the directions of the targets relative to the initial fixation point. Auditory localization was roughly three times more accurate in the upper hemifield than in the lower hemifield (upper: μ = 2.26°,
<italic>sd</italic>
= 1.47; lower: μ = 6.48°,
<italic>sd</italic>
= 1.15; upper, lower:
<italic>t</italic>
= −4.22,
<italic>p</italic>
< 0.0001), resulting in an asymmetrical space compression (see Figures
<xref ref-type="fig" rid="F2">2</xref>
,
<xref ref-type="fig" rid="F4">4</xref>
,
<xref ref-type="fig" rid="F5">5</xref>
). The highest accuracy was observed for targets 10° above the HMP (
<italic>Y</italic>
= 0°: μ = 2.66,
<italic>sd</italic>
= 0.83;
<italic>Y</italic>
= +10°: μ = 1.25,
<italic>sd</italic>
= 0.94; 0°,+10°:
<italic>t</italic>
= 1.41,
<italic>p</italic>
= 0.02), suggesting that the A and the V “horizons” may not coincide, as was reported, though not discussed, by Carlile (Carlile et al.,
<xref rid="B13" ref-type="bibr">1997</xref>
). There was no effect of eccentricity in azimuth [
<italic>F</italic>
<sub>(2, 22)</sub>
= 0.36,
<italic>p</italic>
= 0.69].</p>
<fig id="F4" position="float">
<label>Figure 4</label>
<caption>
<p>
<bold>Top:</bold>
Mean Variable Error (
<italic>VE</italic>
) for the V, VA, and the MLE as a function of eccentricity in Azimuth (left) and eccentricity in Elevation (right).
<bold>Bottom:</bold>
Mean Constant Error (
<italic>CE</italic>
) for the A, V, VA conditions and the MLE as a function of eccentricity in Azimuth (left) and eccentricity in Elevation (right).</p>
</caption>
<graphic xlink:href="fnins-09-00311-g0004"></graphic>
</fig>
<fig id="F5" position="float">
<label>Figure 5</label>
<caption>
<p>
<bold>Top:</bold>
precision across the 2D frontal field (horizontal axis = −20°, +20°; vertical axis = −20°, +20°). From left to right: A, V, VA and MLE (predicted VA). The color bar depicts the precision in localization from extremely precise (blue) to imprecise (red).
<bold>Bottom:</bold>
accuracy across the 2D frontal field (horizontal axis = −20°, +20°; vertical axis = −20°, +20°). From left to right: A, V, VA, and MLE. The color bar depicts localization accuracy from more accurate (blue) to less accurate (red). Auditory localization was more accurate in the upper than in the lower hemifield while the opposite holds true for visual localization.</p>
</caption>
<graphic xlink:href="fnins-09-00311-g0005"></graphic>
</fig>
</sec>
<sec>
<title>Visual</title>
<p>The topology of the visual space was characterized by a radial pattern of errors in all directions, as seen in Figure
<xref ref-type="fig" rid="F2">2</xref>
, where all the variance ellipses are aligned in the direction of the targets, relative to the initial fixation point [regression target/ellipse orientation:
<italic>R</italic>
<sup>2</sup>
= 0.89,
<italic>F</italic>
<sub>(1, 22)</sub>
= 205.28,
<italic>p</italic>
< 0.0001;
<italic>r</italic>
= 0.95,
<italic>p</italic>
&lt; 0.0001]. The ellipses were narrower in the SMP than in the HMP, a difference that was statistically significant (ε: SMP = 0.41; HMP = 0.63; SMP, HMP:
<italic>t</italic>
= 0.22,
<italic>p</italic>
= 0.001). For targets on the two orthogonal axes, the ratio was statistically equivalent to that in the
<italic>X</italic>
axis direction (see Figure
<xref ref-type="fig" rid="F3">3</xref>
, right). The overall orientation deviation was independent of the target direction (
<italic>X</italic>
: μ = 9.12°,
<italic>sd</italic>
= 6.75;
<italic>Y</italic>
: μ = 2.45°,
<italic>sd</italic>
= 2.23;
<italic>X</italic>
,
<italic>Y</italic>
:
<italic>t</italic>
= 6.66,
<italic>p</italic>
= 0.39), as seen in Figure
<xref ref-type="fig" rid="F3">3</xref>
, left. These scatter properties reveal the polar organization of the visuomotor system (Van Opstal and Van Gisbergen,
<xref rid="B66" ref-type="bibr">1989</xref>
). Visual localization was slightly more precise in elevation than in azimuth, although the difference did not quite reach significance (
<italic>X</italic>
: μ = 1.77,
<italic>sd</italic>
= 0.42;
<italic>Y</italic>
: μ = 1.29,
<italic>sd</italic>
= 0.54;
<italic>X, Y</italic>
:
<italic>t</italic>
= 0.49,
<italic>p</italic>
= 0.09). Precision decreased systematically with eccentricity in azimuth [
<italic>F</italic>
<sub>(2, 22)</sub>
= 8.88,
<italic>p</italic>
= 0.001], but not in elevation [
<italic>F</italic>
<sub>(2, 22)</sub>
= 1.67,
<italic>p</italic>
= 0.21], as seen in Figures
<xref ref-type="fig" rid="F4">4</xref>
,
<xref ref-type="fig" rid="F5">5</xref>
, where one can see that the variability was higher in the upper hemifield than in the lower hemifield (upper: μ = 2.04,
<italic>sd</italic>
= 0.41; lower: μ = 1.57,
<italic>sd</italic>
= 0.53; upper, lower:
<italic>t</italic>
= 0.47,
<italic>p</italic>
= 0.03).</p>
<p>Visual accuracy was characterized by a systematic undershoot of the responses, i.e., the vector directions were opposite to the direction of the target, and the difference between target and vector direction approached 180° over the entire field (direction deviation: μ = 165.04°,
<italic>sd</italic>
= 47.64). The direction deviations were marginally larger for targets with an oblique direction (i.e., 45, 135, 225, and 315° directions) than for targets on the two orthogonal axes (
<italic>X</italic>
,
<italic>XY</italic>
:
<italic>t</italic>
= −17.96,
<italic>p</italic>
= 0.06;
<italic>Y</italic>
,
<italic>XY</italic>
:
<italic>t</italic>
= −17.46,
<italic>p</italic>
= 0.07, see Figure
<xref ref-type="fig" rid="F3">3</xref>
, center). The localization bias (
<italic>CE</italic>
) represented 11.9% of the target eccentricity, a value consistent with previous studies and stable across directions and eccentricities. Note that the compression of the visual space, resulting from the target undershoot, was more pronounced in the upper hemifield than in the lower hemifield (upper: μ = 2.84,
<italic>sd</italic>
= 0.31; lower: μ = 1.36,
<italic>sd</italic>
= 0.49; upper, lower:
<italic>t</italic>
= 1.47,
<italic>p</italic>
< 0.0001, see Figures
<xref ref-type="fig" rid="F2">2</xref>
,
<xref ref-type="fig" rid="F4">4</xref>
,
<xref ref-type="fig" rid="F5">5</xref>
), an effect opposite to that observed for A localization accuracy.</p>
</sec>
</sec>
<sec>
<title>Bimodal visual-auditory localization performance</title>
<sec>
<title>Observed</title>
<p>The responses showed anisotropic distributions, with the main eigenvector oriented in the direction of the targets relative to the initial fixation point [regression target/ellipse orientation:
<italic>R</italic>
<sup>2</sup>
= 0.87,
<italic>F</italic>
<sub>(1, 22)</sub>
= 158.37,
<italic>p</italic>
< 0.0001;
<italic>r</italic>
= 0.93,
<italic>p</italic>
< 0.0001] as seen in Figures
<xref ref-type="fig" rid="F2">2</xref>
,
<xref ref-type="fig" rid="F3">3</xref>
. As previously reported in the A and the V conditions, the ellipse distributions were narrower in the SMP than in the HMP (ε: SMP = 0.37; HMP = 0.55; SMP, HMP:
<italic>t</italic>
= 0.18,
<italic>p</italic>
= 0.01). The overall orientation deviation was independent of the target direction (
<italic>X</italic>
: μ = 9.04°,
<italic>sd</italic>
= 3.83;
<italic>Y</italic>
: μ = 3.23°,
<italic>sd</italic>
= 2.80;
<italic>X</italic>
,
<italic>Y</italic>
:
<italic>t</italic>
= 5.81,
<italic>p</italic>
= 0.52).</p>
<p>The VA localization was marginally more precise in elevation than in azimuth (
<italic>X</italic>
: μ = 1.49,
<italic>sd</italic>
= 0.18;
<italic>Y</italic>
: μ = 1.08,
<italic>sd</italic>
= 0.51;
<italic>X</italic>
,
<italic>Y</italic>
:
<italic>t</italic>
= 0.41,
<italic>p</italic>
= 0.07), and decreased systematically with eccentricity in azimuth [
<italic>F</italic>
<sub>(2, 22)</sub>
= 13.13,
<italic>p</italic>
< 0.0001], but not in elevation [
<italic>F</italic>
<sub>(2, 22)</sub>
= 0.31,
<italic>p</italic>
= 0.73]. However, the variability was higher in the upper hemifield than in the lower hemifield (upper: μ = 1.68,
<italic>sd</italic>
= 0.25; lower: μ = 1.28,
<italic>sd</italic>
= 0.24; upper, lower:
<italic>t</italic>
= 0.39,
<italic>p</italic>
= 0.01), a characteristic previously reported for visual precision.</p>
<p>The direction deviations were on average four times larger for targets with an oblique direction than for targets on the two orthogonal axes (
<italic>X</italic>
: μ = 2.40,
<italic>sd</italic>
= 1.67;
<italic>Y</italic>
: μ = 3.42,
<italic>sd</italic>
= 3.74;
<italic>XY</italic>
: μ = 18.76,
<italic>sd</italic>
= 10.29;
<italic>X</italic>
,
<italic>Y</italic>
:
<italic>t</italic>
= −1.02,
<italic>p</italic>
= 0.88;
<italic>X</italic>
,
<italic>XY</italic>
:
<italic>t</italic>
= −16.36,
<italic>p</italic>
= 0.01;
<italic>Y</italic>
,
<italic>XY</italic>
:
<italic>t</italic>
= −15.33,
<italic>p</italic>
= 0.02). As for vision, VA localization showed a systematic target undershoot in all directions, as illustrated in Figures
<xref ref-type="fig" rid="F2">2</xref>
,
<xref ref-type="fig" rid="F3">3</xref>
, where one can see that the direction of the vectors is opposite to the direction of the target. The localization bias (μ = 1.39,
<italic>sd</italic>
= 0.65) represented 9.22% of the target eccentricity, a value that decreased slightly with eccentricity without reaching significance [
<italic>F</italic>
<sub>(3, 12)</sub>
= 3.17,
<italic>p</italic>
= 0.06]. Bimodal accuracy was not affected by direction (
<italic>X</italic>
: μ = 1.43,
<italic>sd</italic>
= 0.32;
<italic>Y</italic>
: μ = 1.64,
<italic>sd</italic>
= 0.87;
<italic>X</italic>
,
<italic>Y</italic>
:
<italic>t</italic>
= −0.20,
<italic>p</italic>
= 0.68) and decreased slightly, though not significantly, with eccentricity [
<italic>F</italic>
<sub>(5, 19)</sub>
= 1.40,
<italic>p</italic>
= 0.26]. VA accuracy was higher in the lower hemifield than in the upper hemifield (upper, lower:
<italic>t</italic>
= 1.16,
<italic>p</italic>
< 0.0001), a characteristic already shown for visual localization accuracy (see Figures
<xref ref-type="fig" rid="F2">2</xref>
,
<xref ref-type="fig" rid="F4">4</xref>
,
<xref ref-type="fig" rid="F5">5</xref>
). In the upper hemifield, the magnitude of undershoot averaged 2.38 ± 0.45°, almost twice that observed in the lower hemifield (1.21 ± 0.30°).</p>
</sec>
<sec>
<title>Predicted</title>
<p>The model predicted anisotropic response distributions, with the main eigenvector generally aligned with the direction of the target relative to the initial fixation point [regression target/ellipse orientation:
<italic>R</italic>
<sup>2</sup>
= 0.38,
<italic>F</italic>
<sub>(1, 22)</sub>
= 13.71,
<italic>p</italic>
= 0.01]. Interestingly, the MLE did not predict variations in the anisotropy of the distributions as a function of direction (ε: SMP = 0.43; HMP = 0.58; SMP, HMP:
<italic>t</italic>
= 0.14,
<italic>p</italic>
= 0.29). The orientation deviation was larger in azimuth than in elevation (
<italic>X</italic>
: μ = 47.19°,
<italic>sd</italic>
= 34.62;
<italic>Y</italic>
: μ = 7.57°,
<italic>sd</italic>
= 5.90;
<italic>X, Y</italic>
:
<italic>t</italic>
= 39.61,
<italic>p</italic>
= 0.01), as seen in Figure
<xref ref-type="fig" rid="F3">3</xref>
, left. The predicted variance was statistically equivalent in the
<italic>X</italic>
and
<italic>Y</italic>
directions (
<italic>X</italic>
: μ = 1.58,
<italic>sd</italic>
= 0.37;
<italic>Y</italic>
: μ = 1.15,
<italic>sd</italic>
= 0.50;
<italic>X, Y</italic>
:
<italic>t</italic>
= 0.43,
<italic>p</italic>
= 0.06). The effect of eccentricity was significant in azimuth [
<italic>F</italic>
<sub>(2, 22)</sub>
= 8.72,
<italic>p</italic>
= 0.002] but not in elevation [
<italic>F</italic>
<sub>(2, 22)</sub>
= 1.05,
<italic>p</italic>
= 0.36]; however, the variance was higher in the upper hemifield than in the lower hemifield (upper: μ = 1.72,
<italic>sd</italic>
= 0.15; lower: μ = 1.36,
<italic>sd</italic>
= 0.45; upper, lower:
<italic>t</italic>
= 0.35,
<italic>p</italic>
= 0.02; see Figures
<xref ref-type="fig" rid="F2">2</xref>
,
<xref ref-type="fig" rid="F4">4</xref>
,
<xref ref-type="fig" rid="F5">5</xref>
).</p>
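The predicted VA statistics above follow from the standard MLE cue-combination rule, under which each cue is weighted by its inverse variance. A minimal sketch, with hypothetical unimodal values (the measured means and variances vary by target location):

```python
# Standard MLE (inverse-variance) cue combination, sketched per axis.
# The numeric inputs below are hypothetical, not the study's data.

def mle_combine(mu_v, var_v, mu_a, var_a):
    """Combine visual and auditory estimates of one coordinate."""
    w_v = var_a / (var_v + var_a)               # visual weight
    w_a = var_v / (var_v + var_a)               # auditory weight
    mu_va = w_v * mu_v + w_a * mu_a             # combined estimate
    var_va = (var_v * var_a) / (var_v + var_a)  # <= min(var_v, var_a)
    return mu_va, var_va

# A more reliable visual cue dominates the combined estimate:
mu_va, var_va = mle_combine(mu_v=9.0, var_v=1.8**2,
                            mu_a=11.0, var_a=5.5**2)
```

The combined variance is always at most that of the more precise cue, which is why the model predicts a precision gain for bimodal targets.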
<p>Vector direction deviations were larger in the oblique direction than in the orthogonal directions, as seen in Figure
<xref ref-type="fig" rid="F3">3</xref>
, center (
<italic>X</italic>
: μ = 6.27,
<italic>sd</italic>
= 5.62;
<italic>Y</italic>
: μ = 6.95,
<italic>sd</italic>
= 7.08;
<italic>XY</italic>
: μ = 25.33,
<italic>sd</italic>
= 14.90;
<italic>X</italic>
,
<italic>Y</italic>
:
<italic>t</italic>
= −0.67,
<italic>p</italic>
= 0.94;
<italic>X</italic>
,
<italic>XY</italic>
:
<italic>t</italic>
= −19.06,
<italic>p</italic>
= 0.02;
<italic>Y</italic>
,
<italic>XY</italic>
:
<italic>t</italic>
= −18.38,
<italic>p</italic>
= 0.03). The predicted accuracy showed a systematic target undershoot in all directions, as illustrated in Figures
<xref ref-type="fig" rid="F2">2</xref>
,
<xref ref-type="fig" rid="F3">3</xref>
, where one can see that the direction of the vectors is opposite to the direction of the target. The localization bias (μ = 1.80,
<italic>sd</italic>
= 0.67) represented 10.85% of the target eccentricity, a value that decreased with eccentricity [
<italic>F</italic>
<sub>(4, 19)</sub>
= 8.43,
<italic>p</italic>
< 0.0001]. There was no effect of direction (
<italic>X, Y</italic>
:
<italic>t</italic>
= −0.35,
<italic>p</italic>
= 0.43) or eccentricity [
<italic>F</italic>
<sub>(5, 19)</sub>
= 1.72,
<italic>p</italic>
= 0.17]. The difference in accuracy between upper and lower hemifield observed in the VA condition was well-predicted (upper, lower:
<italic>t</italic>
= 0.74,
<italic>p</italic>
= 0.003), with an undershoot magnitude of 2.46 ± 0.37° in the upper hemifield and 1.71 ± 0.63° in the lower hemifield (see Figures
<xref ref-type="fig" rid="F4">4</xref>
,
<xref ref-type="fig" rid="F5">5</xref>
).</p>
</sec>
</sec>
<sec>
<title>Applying the MLE model to the VA localization and accuracy</title>
<sec>
<title>Orientation deviation</title>
<p>The magnitude of the ellipse orientation deviation (ellipse orientation relative to the target direction) was very similar in the V and VA conditions (V: μ = 13.05°,
<italic>sd</italic>
= 2.36°; VA: μ = 13.67°,
<italic>sd</italic>
= 2.76°;
<italic>t</italic>
= 0.48,
<italic>p</italic>
= 1), as seen in Figure
<xref ref-type="fig" rid="F3">3</xref>
, where the plots for V and VA almost overlap. The MLE predicted larger orientation deviations than observed in the VA condition (μ = 24.73°,
<italic>sd</italic>
= 22.58°, VA, MLE:
<italic>t</italic>
= −12.68,
<italic>p</italic>
= 0.007), primarily in the
<italic>Y</italic>
and
<italic>XY</italic>
directions.</p>
</sec>
<sec>
<title>Precision</title>
<p>Figure
<xref ref-type="fig" rid="F5">5</xref>
, top, depicts, from left to right, the 2D variance (
<inline-formula>
<mml:math id="M21">
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>X</mml:mi>
<mml:mi>Y</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:math>
</inline-formula>
) for the A, V, and VA targets and the predicted MLE estimate. It illustrates the inter- and intra-modality similarities and differences reported earlier. Note the left/right symmetry for all conditions, the greater precision for audition in the upper hemifield than in the lower hemifield, and the improved precision in the VA condition compared to the V condition. The ellipse ratio was higher (i.e., ellipses less anisotropic) in the observed VA condition than in the predicted VA condition (ε: VA = 0.60; MLE = 0.48; VA, MLE:
<italic>t</italic>
= 0.11,
<italic>p</italic>
= 0.002), potentially as a result of an expected greater influence of audition. Comparison between the V, VA and MLE conditions showed a significant effect of modality [
<italic>F</italic>
<sub>(2, 48)</sub>
= 24.71,
<italic>p</italic>
< 0.0001], with less variance in the VA condition than in the V condition (V, VA:
<italic>t</italic>
= 0.31,
<italic>p</italic>
< 0.0001). There was no difference between observed and predicted precision (
<italic>t</italic>
= −0.07,
<italic>p</italic>
= 0.16). There was no interaction with direction [
<italic>F</italic>
<sub>(2, 12)</sub>
= 0.34,
<italic>p</italic>
= 0.71], eccentricity [
<italic>F</italic>
<sub>(10, 38)</sub>
= 1.33,
<italic>p</italic>
= 0.24] or upper/lower hemifield [
<italic>F</italic>
<sub>(2, 36)</sub>
= 0.53,
<italic>p</italic>
= 0.59].</p>
<p>VA precision was significantly correlated with both A and V precision (
<inline-formula>
<mml:math id="M22">
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mi>y</mml:mi>
<mml:mi>A</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:mo>,</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mi>y</mml:mi>
<mml:mi>A</mml:mi>
<mml:mi>V</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:math>
</inline-formula>
:
<italic>r</italic>
= 0.46,
<italic>p</italic>
= 0.01;
<inline-formula>
<mml:math id="M23">
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mi>y</mml:mi>
<mml:mi>V</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:mo>,</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mi>y</mml:mi>
<mml:mi>A</mml:mi>
<mml:mi>V</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:math>
</inline-formula>
:
<italic>r</italic>
= 0.82,
<italic>p</italic>
< 0.0001), which was well-predicted by the model (
<inline-formula>
<mml:math id="M24">
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mi>y</mml:mi>
<mml:mi>A</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:mo>,</mml:mo>
<mml:mover accent="false">
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mi>y</mml:mi>
<mml:mi>V</mml:mi>
<mml:mi>A</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
<mml:mo>^</mml:mo>
</mml:mover>
<mml:mo>:</mml:mo>
</mml:math>
</inline-formula>
<italic>r</italic>
= 0.57,
<italic>p</italic>
= 0.002;
<inline-formula>
<mml:math id="M25">
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mi>y</mml:mi>
<mml:mi>V</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:mo>,</mml:mo>
<mml:mover accent="false">
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mi>y</mml:mi>
<mml:mi>V</mml:mi>
<mml:mi>A</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
<mml:mo>^</mml:mo>
</mml:mover>
</mml:math>
</inline-formula>
:
<italic>r</italic>
= 0.91,
<italic>p</italic>
< 0.0001;
<inline-formula>
<mml:math id="M26">
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mi>y</mml:mi>
<mml:mi>A</mml:mi>
<mml:mi>V</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:mo>,</mml:mo>
<mml:mover accent="false">
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mi>y</mml:mi>
<mml:mi>V</mml:mi>
<mml:mi>A</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
<mml:mo>^</mml:mo>
</mml:mover>
</mml:math>
</inline-formula>
:
<italic>r</italic>
= 0.88,
<italic>p</italic>
< 0.0001).</p>
<p>Step-by-step linear regressions (Enter method) were performed to assess the contribution of V and A precision as predictors of the observed and predicted VA localization precision. In the observed VA condition (Figure
<xref ref-type="fig" rid="F6">6A</xref>
, left), 68% of the variance was explained, exclusively by
<inline-formula>
<mml:math id="M27">
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mi>y</mml:mi>
<mml:mi>V</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:math>
</inline-formula>
[(Constant),
<inline-formula>
<mml:math id="M28">
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mi>y</mml:mi>
<mml:mi>V</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:math>
</inline-formula>
:
<italic>R</italic>
<sup>2</sup>
= 0.67; adjusted
<italic>R</italic>
<sup>2</sup>
= 0.66;
<italic>R</italic>
<sup>2</sup>
change = 0.67;
<italic>F</italic>
<sub>(1, 23)</sub>
= 47.69,
<italic>p</italic>
< 0.0001; (Constant),
<inline-formula>
<mml:math id="M29">
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mi>y</mml:mi>
<mml:mi>V</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:mo>,</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mi>y</mml:mi>
<mml:mi>A</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:math>
</inline-formula>
:
<italic>R</italic>
<sup>2</sup>
= 0.71; adjusted
<italic>R</italic>
<sup>2</sup>
= 0.68;
<italic>R</italic>
<sup>2</sup>
change = 0.03;
<italic>F</italic>
<sub>(1, 22)</sub>
= 2.85,
<italic>p</italic>
= 0.1]. Conversely, the model predicted a significant contribution of both the A and the V precision with an adjusted
<italic>R</italic>
<sup>2</sup>
of 0.91; i.e., 91% of the total variance was explained [see Figure
<xref ref-type="fig" rid="F6">6A</xref>
right, (Constant),
<inline-formula>
<mml:math id="M30">
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mi>y</mml:mi>
<mml:mi>V</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:math>
</inline-formula>
:
<italic>R</italic>
<sup>2</sup>
= 0.84; adjusted
<italic>R</italic>
<sup>2</sup>
= 0.83;
<italic>R</italic>
<sup>2</sup>
change = 0.84;
<italic>F</italic>
<sub>(1, 23)</sub>
= 122.83,
<italic>p</italic>
< 0.0001; (Constant),
<inline-formula>
<mml:math id="M31">
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mi>y</mml:mi>
<mml:mi>V</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:mo>,</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mi>y</mml:mi>
<mml:mi>A</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:math>
</inline-formula>
:
<italic>R</italic>
<sup>2</sup>
= 0.91; adjusted
<italic>R</italic>
<sup>2</sup>
= 0.91;
<italic>R</italic>
<sup>2</sup>
change = 0.07;
<italic>F</italic>
<sub>(1, 22)</sub>
=20.39,
<italic>p</italic>
< 0.0001].</p>
<fig id="F6" position="float">
<label>Figure 6</label>
<caption>
<p>
<bold>(A)</bold>
Regression plots for the bimodal observed (
<inline-formula>
<mml:math id="M32">
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mi>y</mml:mi>
<mml:mi>A</mml:mi>
<mml:mi>V</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:math>
</inline-formula>
, left) and predicted variance (
<inline-formula>
<mml:math id="M33">
<mml:mover accent="false">
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mi>y</mml:mi>
<mml:mi>V</mml:mi>
<mml:mi>A</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
<mml:mo>^</mml:mo>
</mml:mover>
</mml:math>
</inline-formula>
, right). Predictors:
<inline-formula>
<mml:math id="M34">
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mi>y</mml:mi>
<mml:mi>V</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:mo>,</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mi>y</mml:mi>
<mml:mi>A</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:math>
</inline-formula>
.
<bold>(B)</bold>
Redundancy gain (
<italic>RG</italic>
, in %) as a function of the magnitude of the variance in the visual condition (
<inline-formula>
<mml:math id="M35">
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mi>y</mml:mi>
<mml:mi>V</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:math>
</inline-formula>
). The
<italic>RG</italic>
increases as the reliability of the visual estimate decreases (variance increases). Note that the model prediction parallels the observed data, although the magnitude of the observed
<italic>RG</italic>
was significantly higher than predicted by the model.</p>
</caption>
<graphic xlink:href="fnins-09-00311-g0006"></graphic>
</fig>
<p>The observed
<italic>RG</italic>
(18.07%) was positive for 96% (24) of the tested locations and was statistically higher than the model prediction (12.76%) [
<italic>F</italic>
<sub>(1, 23)</sub>
= 7.98,
<italic>p</italic>
= 0.01]. There was no significant difference in gain, observed or predicted, across main directions and eccentricities.</p>
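As a worked example of the redundancy-gain computation, using the group-mean variable errors from the table above (A = 5.73°, V = 1.78°, VA = 1.46°) and assuming RG is defined as the percent improvement of bimodal precision over the best unimodal precision (the study's exact formulation is the one given in its Methods):

```python
# Redundancy gain (RG) as the percent precision improvement over the
# best unimodal cue. This formulation is an assumption for illustration.

def redundancy_gain(sd_v, sd_a, sd_va):
    best_unimodal = min(sd_v, sd_a)
    return 100.0 * (best_unimodal - sd_va) / best_unimodal

# Group-mean variable errors (degrees) from the table above:
rg = redundancy_gain(sd_v=1.78, sd_a=5.73, sd_va=1.46)  # about 18%
```

Under this definition the group-mean gain comes out near 18%, the same order as the 18.07% mean observed gain reported above.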
<p>In order to further investigate the association between the
<italic>RG</italic>
and unimodal localization precision, we correlated the RG with the mean precision of the best unisensory modality. The highest observed RG values were associated with the least precise unimodal estimates (Figure
<xref ref-type="fig" rid="F6">6B</xref>
), although the correlation did not quite reach significance (Pearson's
<italic>r</italic>
= 0.29,
<italic>p</italic>
= 0.07). Meanwhile, the model predicted the
<italic>IE</italic>
effect well (Figure
<xref ref-type="fig" rid="F6">6B</xref>
) with a significant correlation between RG and visual variance (Pearson's
<italic>r</italic>
= 0.53,
<italic>p</italic>
= 0.004).</p>
</sec>
<sec>
<title>Direction deviation</title>
<p>The magnitude of the vector direction deviation was statistically equivalent across the V, VA, and MLE conditions [
<italic>F</italic>
<sub>(246)</sub>
= 1.36,
<italic>p</italic>
= 0.25]. In all conditions, the direction deviations were larger for targets with an oblique direction than for targets on the two orthogonal axes (i.e., around the 45, 135, 225, and 315° directions).</p>
</sec>
<sec>
<title>Accuracy</title>
<p>Comparison of V and VA accuracy showed that VA accuracy was not intermediate between A and V accuracy and that, overall, the VA responses were more accurate than those in the V condition (
<italic>r</italic>
<sub>
<italic>V</italic>
</sub>
,
<italic>r</italic>
<sub>
<italic>VA</italic>
</sub>
:
<italic>t</italic>
= 0.33,
<italic>p</italic>
&lt; 0.0001). Conversely, accuracy predicted by the model was not statistically different from that in the V condition (
<italic>r</italic>
<sub>
<italic>V</italic>
</sub>
,
<inline-formula>
<mml:math id="M36">
<mml:mover accent="false">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>r</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>V</mml:mi>
<mml:mi>A</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo>^</mml:mo>
</mml:mover>
</mml:math>
</inline-formula>
:
<italic>t</italic>
= 0.06,
<italic>p</italic>
= 0.62;
<italic>r</italic>
<sub>
<italic>VA</italic>
</sub>
,
<inline-formula>
<mml:math id="M37">
<mml:mover accent="false">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>r</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>V</mml:mi>
<mml:mi>A</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo>^</mml:mo>
</mml:mover>
</mml:math>
</inline-formula>
:
<italic>t</italic>
= −0.26,
<italic>p</italic>
= 0.01), but was statistically different from the observed accuracy (
<italic>r</italic>
<sub>
<italic>VA</italic>
</sub>
,
<inline-formula>
<mml:math id="M38">
<mml:mover accent="false">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>r</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>V</mml:mi>
<mml:mi>A</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo>^</mml:mo>
</mml:mover>
</mml:math>
</inline-formula>
:
<italic>t</italic>
= −4.98,
<italic>p</italic>
&lt; 0.0001). There was no significant interaction with direction [
<italic>F</italic>
<sub>(15, 57)</sub>
= 0.66,
<italic>p</italic>
= 0.81] or eccentricity [
<italic>F</italic>
<sub>(15, 57)</sub>
= 0.14,
<italic>p</italic>
= 1]. These general observations obscured local differences between modalities. Indeed, there was a significant interaction between modality and upper/lower hemifield [
<italic>F</italic>
<sub>(6, 66)</sub>
= 34.56,
<italic>p</italic>
< 0.0001] as seen in Figures
<xref ref-type="fig" rid="F4">4</xref>
,
<xref ref-type="fig" rid="F5">5</xref>
. A first, somewhat unexpected result is that A and V accuracy were not statistically different in the upper hemifield (
<italic>r</italic>
<sub>
<italic>A</italic>
</sub>
: μ = 2.26,
<italic>sd</italic>
= 1.47;
<italic>r</italic>
<sub>
<italic>V</italic>
</sub>
: μ = 2.84,
<italic>sd</italic>
= 0.31;
<italic>r</italic>
<sub>
<italic>A</italic>
</sub>
,
<italic>r</italic>
<sub>
<italic>V</italic>
</sub>
:
<italic>t</italic>
= −1.31,
<italic>p</italic>
= 0.22), although some local differences in the periphery are visible in Figure
<xref ref-type="fig" rid="F5">5</xref>
. Conversely, in the lower hemifield, V localization was on average roughly five times more accurate than A localization (
<italic>r</italic>
<sub>
<italic>A</italic>
</sub>
: μ = 6.48,
<italic>sd</italic>
= 1.15;
<italic>r</italic>
<sub>
<italic>V</italic>
</sub>
: μ = 1.36,
<italic>sd</italic>
= 0.49;
<italic>r</italic>
<sub>
<italic>A</italic>
</sub>
,
<italic>r</italic>
<sub>
<italic>V</italic>
</sub>
:
<italic>t</italic>
= 5.11,
<italic>p</italic>
< 0.0001). These differences between unimodal conditions provide a unique opportunity to evaluate the relative contribution of A and V to the bimodal localization performance.</p>
<p>In the upper hemifield, the VA localization was more accurate than in the V condition (
<italic>r</italic>
<sub>
<italic>V</italic>
</sub>
,
<italic>r</italic>
<sub>
<italic>VA</italic>
</sub>
:
<italic>t</italic>
= 3.85,
<italic>p</italic>
= 0.004), but not than in the A condition (
<italic>r</italic>
<sub>
<italic>A</italic>
</sub>
,
<italic>r</italic>
<sub>
<italic>VA</italic>
</sub>
:
<italic>t</italic>
= −0.31,
<italic>p</italic>
= 0.76). The model also predicted this pattern (
<italic>r</italic>
<sub>
<italic>A</italic>
</sub>
,
<inline-formula>
<mml:math id="M39">
<mml:mover accent="false">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>r</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>V</mml:mi>
<mml:mi>A</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo>^</mml:mo>
</mml:mover>
</mml:math>
</inline-formula>
:
<italic>t</italic>
= −0.49,
<italic>p</italic>
= 0.63;
<italic>r</italic>
<sub>
<italic>V</italic>
</sub>
,
<inline-formula>
<mml:math id="M40">
<mml:mover accent="false">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>r</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>V</mml:mi>
<mml:mi>A</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo>^</mml:mo>
</mml:mover>
</mml:math>
</inline-formula>
:
<italic>t</italic>
= 2.66,
<italic>p</italic>
= 0.02), and therefore, the difference between observed and predicted accuracy was not significant (
<italic>r</italic>
<sub>
<italic>VA</italic>
</sub>
,
<inline-formula>
<mml:math id="M41">
<mml:mover accent="false">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>r</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>V</mml:mi>
<mml:mi>A</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo>^</mml:mo>
</mml:mover>
</mml:math>
</inline-formula>
:
<italic>t</italic>
= −0.74,
<italic>p</italic>
= 0.47).</p>
<p>In the lower hemifield, however, V and VA localization accuracy did not differ statistically (
<italic>r</italic>
<sub>
<italic>V</italic>
</sub>
,
<italic>r</italic>
<sub>
<italic>VA</italic>
</sub>
:
<italic>t</italic>
= −1.83,
<italic>p</italic>
= 0.10). Meanwhile, the accuracy predicted by the model (μ = 1.71,
<italic>sd</italic>
= 0.63), though less homogeneous, did not differ from the V condition (
<italic>r</italic>
<sub>
<italic>V</italic>
</sub>
,
<inline-formula>
<mml:math id="M42">
<mml:mover accent="false">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>r</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>V</mml:mi>
<mml:mi>A</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo>^</mml:mo>
</mml:mover>
</mml:math>
</inline-formula>
:
<italic>t</italic>
= −1.47,
<italic>p</italic>
= 0.17), but the predicted VA localization was significantly less accurate than observed (
<italic>r</italic>
<sub>
<italic>VA</italic>
</sub>
,
<inline-formula>
<mml:math id="M43">
<mml:mover accent="false">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>r</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>V</mml:mi>
<mml:mi>A</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo>^</mml:mo>
</mml:mover>
</mml:math>
</inline-formula>
:
<italic>t</italic>
= −2.30,
<italic>p</italic>
= 0.04).</p>
</sec>
<sec>
<title>Relationships between precision and accuracy</title>
<p>According to the MLE, the VA accuracy depends, to varying degrees, upon the unimodal A and V precision. The visual weight (
<italic>W</italic>
<sub>
<italic>V</italic>
</sub>
) was computed to provide an estimate of the respective unimodal contribution as a function of direction and eccentricity.</p>
<p>Vision, which is the most reliable modality for elevation, was expected to be associated with a stronger weight along the elevation axis than along the azimuth axis. This is indeed what was observed (
<italic>W</italic>
<sub>
<italic>V</italic>
</sub>
:
<italic>X</italic>
: μ = 0.75,
<italic>sd</italic>
= 0.03;
<italic>W</italic>
<sub>
<italic>V</italic>
</sub>
:
<italic>Y</italic>
: μ = 0.81,
<italic>sd</italic>
= 0.03;
<italic>X</italic>
,
<italic>Y</italic>
:
<italic>t</italic>
= −0.05,
<italic>p</italic>
= 0.05). As expected, the visual weight decreased significantly with eccentricity in azimuth [
<italic>F</italic>
<sub>(2, 22)</sub>
= 10.25,
<italic>p</italic>
= 0.001] but not in elevation [
<italic>F</italic>
<sub>(2, 22)</sub>
= 1.16,
<italic>p</italic>
= 0.33], as seen in Figure
<xref ref-type="fig" rid="F7">7A</xref>
, left. In this axis,
<italic>W</italic>
<sub>
<italic>V</italic>
</sub>
was marginally higher in the lower hemifield than in the upper hemifield (upper: μ = 0.74; lower: μ = 0.78; upper, lower:
<italic>t</italic>
= −0.04,
<italic>p</italic>
= 0.07).</p>
<fig id="F7" position="float">
<label>Figure 7</label>
<caption>
<p>
<bold>(A)</bold>
Visual weight. A value of 0.5 would indicate an equivalent contribution of the A and the V modalities to the VA localization precision. For the examined region (−20 to +20° azimuth, −20 to +20° elevation),
<italic>W</italic>
<sub>
<italic>V</italic>
</sub>
values ranged from 0.60 to 0.90, indicating that vision always contributed more than audition to bimodal precision. Left: In azimuth,
<italic>W</italic>
<sub>
<italic>V</italic>
</sub>
decreases as the eccentricity of the target increases. In elevation,
<italic>W</italic>
<sub>
<italic>V</italic>
</sub>
was marginally higher in the lower than in the upper hemifield. Right: VA accuracy is inversely correlated to
<italic>W</italic>
<sub>
<italic>V</italic>
</sub>
i.e., the highest values of
<italic>W</italic>
<sub>
<italic>V</italic>
</sub>
were associated with the smallest
<italic>CEs</italic>
.
<bold>(B)</bold>
Regression plots for the bimodal observed (
<italic>r</italic>
<sub>
<italic>VA</italic>
</sub>
, left) and predicted accuracy (
<inline-formula>
<mml:math id="M44">
<mml:mover accent="false">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>r</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>V</mml:mi>
<mml:mi>A</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo>^</mml:mo>
</mml:mover>
</mml:math>
</inline-formula>
, right). Significant predictors:
<italic>W</italic>
<sub>
<italic>V</italic>
</sub>
,
<italic>r</italic>
<sub>
<italic>A</italic>
</sub>
and
<italic>r</italic>
<sub>
<italic>V</italic>
</sub>
for
<italic>r</italic>
<sub>
<italic>VA</italic>
</sub>
;
<italic>r</italic>
<sub>
<italic>V</italic>
</sub>
for
<inline-formula>
<mml:math id="M45">
<mml:mover accent="false">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>r</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>V</mml:mi>
<mml:mi>A</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo>^</mml:mo>
</mml:mover>
</mml:math>
</inline-formula>
.</p>
</caption>
<graphic xlink:href="fnins-09-00311-g0007"></graphic>
</fig>
<p>Overall, VA accuracy was inversely correlated to
<italic>W</italic>
<sub>
<italic>V</italic>
</sub>
(
<italic>R</italic>
<sub>
<italic>VA</italic>
</sub>
,
<italic>W</italic>
<sub>
<italic>V</italic>
</sub>
:
<italic>r</italic>
= −0.48,
<italic>p</italic>
= 0.007), i.e., the highest values of
<italic>W</italic>
<sub>
<italic>V</italic>
</sub>
were associated with the smallest values of
<italic>CEs</italic>
, as seen in Figure
<xref ref-type="fig" rid="F7">7A</xref>
, right. However,
<italic>W</italic>
<sub>
<italic>V</italic>
</sub>
alone explained only 20% of the total variance, a contribution that was significant [(Constant),
<italic>W</italic>
<sub>
<italic>V</italic>
</sub>
:
<italic>R</italic>
<sup>2</sup>
= 0.24; adjusted
<italic>R</italic>
<sup>2</sup>
= 0.20;
<italic>R</italic>
<sup>2</sup>
change = 0.24;
<italic>F</italic>
<sub>(1, 32)</sub>
= 7.24,
<italic>p</italic>
= 0.01]. A stepwise linear regression was then performed to assess the potential additional contribution of the V and A accuracy to the bimodal accuracy (
<italic>R</italic>
<sub>
<italic>VA</italic>
</sub>
). Altogether, the three parameters explained 87% of the total variance, with a major contribution of
<italic>R</italic>
<sub>
<italic>V</italic>
</sub>
[Figure
<xref ref-type="fig" rid="F7">7B</xref>
left (Constant),
<italic>W</italic>
<sub>
<italic>V</italic>
</sub>
,
<italic>R</italic>
<sub>
<italic>A</italic>
</sub>
:
<italic>R</italic>
<sup>2</sup>
= 0.31; adjusted
<italic>R</italic>
<sup>2</sup>
= 0.24;
<italic>R</italic>
<sup>2</sup>
change = 0.07;
<italic>F</italic>
<sub>(1, 22)</sub>
= 2.26,
<italic>p</italic>
= 0.14; (Constant),
<italic>W</italic>
<sub>
<italic>V</italic>
</sub>
,
<italic>R</italic>
<sub>
<italic>A</italic>
</sub>
,
<italic>R</italic>
<sub>
<italic>V</italic>
</sub>
:
<italic>R</italic>
<sup>2</sup>
= 0.88; adjusted
<italic>R</italic>
<sup>2</sup>
= 0.87;
<italic>R</italic>
<sup>2</sup>
change = 0.57;
<italic>F</italic>
<sub>(1, 21)</sub>
= 107.47,
<italic>p</italic>
< 0.0001].</p>
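<p>The stepwise regression above enters predictors one at a time and tracks the R² change at each step. A minimal sketch with synthetic data (the generated values and effect sizes are illustrative, not the study's data; since the models are nested, R² can only grow as predictors are added):</p>

```python
import numpy as np

def r_squared(X, y):
    """R^2 of an ordinary least-squares fit with an intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - np.sum(resid ** 2) / np.sum((y - y.mean()) ** 2)

rng = np.random.default_rng(0)
n = 34                                 # illustrative sample size
w_v = rng.normal(size=n)               # hypothetical predictors
r_a = rng.normal(size=n)
r_v = rng.normal(size=n)
r_va = 0.2 * w_v + 0.1 * r_a + 0.9 * r_v + rng.normal(scale=0.3, size=n)

# Enter predictors step by step and compute the R^2 change at each step.
steps = [np.column_stack([w_v]),
         np.column_stack([w_v, r_a]),
         np.column_stack([w_v, r_a, r_v])]
r2 = [r_squared(X, r_va) for X in steps]
changes = [r2[0]] + [b - a for a, b in zip(r2, r2[1:])]
```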
<p>The bimodal VA accuracy was significantly correlated to both V and A accuracy (
<italic>r
<sub>V</sub>
</italic>
,
<italic>r</italic>
<sub>VA</sub>:
<italic>r</italic>
= 0.92,
<italic>p</italic>
< 0.0001;
<italic>r
<sub>A</sub>
, r
<sub>VA</sub>
: r</italic>
= −0.47,
<italic>p</italic>
= 0.01). Of interest here is the negative correlation between
<italic>r</italic>
<sub>
<italic>A</italic>
</sub>
and
<italic>r</italic>
<sub>
<italic>V</italic>
</sub>
(
<italic>r
<sub>A</sub>
</italic>
,
<italic>r
<sub>V</sub>
</italic>
:
<italic>r</italic>
= −0.64,
<italic>p</italic>
< 0.0001), suggesting a trade-off between A and V accuracy.</p>
<p>Meanwhile, there was no significant correlation between the performance predicted by the MLE and
<italic>W</italic>
<sub>
<italic>V</italic>
</sub>
(
<inline-formula>
<mml:math id="M46">
<mml:mover accent="false">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>r</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>V</mml:mi>
<mml:mi>A</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo>^</mml:mo>
</mml:mover>
</mml:math>
</inline-formula>
,
<italic>W</italic>
<sub>
<italic>V</italic>
</sub>
:
<italic>r</italic>
= −0.15,
<italic>p</italic>
= 0.22), and the 49% of explained variance was attributable exclusively to
<italic>r</italic>
<sub>
<italic>V</italic>
</sub>
[Figure
<xref ref-type="fig" rid="F7">7B</xref>
right (Constant),
<italic>W</italic>
<sub>
<italic>V</italic>
</sub>
:
<italic>R</italic>
<sup>2</sup>
= 0.02; adjusted
<italic>R</italic>
<sup>2</sup>
= −0.01;
<italic>R</italic>
<sup>2</sup>
change = 0.02;
<italic>F</italic>
<sub>(1, 23)</sub>
= 0.58,
<italic>p</italic>
= 0.45; (Constant),
<italic>W</italic>
<sub>
<italic>V</italic>
</sub>
<italic>,R</italic>
<sub>
<italic>A</italic>
</sub>
:
<italic>R</italic>
<sup>2</sup>
= 0.06; adjusted
<italic>R</italic>
<sup>2</sup>
= −0.16;
<italic>R</italic>
<sup>2</sup>
change = 0.04;
<italic>F</italic>
<sub>(1, 22)</sub>
= 1.03,
<italic>p</italic>
= 0.31; (Constant),
<italic>W
<sub>V</sub>
,R
<sub>A</sub>
, R
<sub>V</sub>
</italic>
:
<italic>R</italic>
<sup>2</sup>
= 0.56; adjusted
<italic>R</italic>
<sup>2</sup>
= 0.49;
<italic>R</italic>
<sup>2</sup>
change = 0.49;
<italic>F</italic>
<sub>(1, 21)</sub>
= 23.64,
<italic>p</italic>
< 0.0001].</p>
<p>Because the bimodal visual-auditory localization was shown to be more accurate than the most accurate unimodal condition, a result not predicted by the model, one may ask whether bimodal precision could predict bimodal accuracy. Indeed, there was a significant positive correlation between VA precision and VA accuracy (
<inline-formula>
<mml:math id="M47">
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mi>y</mml:mi>
<mml:mi>V</mml:mi>
<mml:mi>A</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:mo>,</mml:mo>
</mml:math>
</inline-formula>
<italic>r</italic>
<sub>
<italic>VA</italic>
</sub>
:
<italic>r</italic>
= 0.62,
<italic>p</italic>
= 0.001), a relation not predicted by the model (
<inline-formula>
<mml:math id="M48">
<mml:mover accent="false">
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mi>y</mml:mi>
<mml:mi>V</mml:mi>
<mml:mi>A</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
<mml:mo>^</mml:mo>
</mml:mover>
</mml:math>
</inline-formula>
,
<inline-formula>
<mml:math id="M49">
<mml:mover accent="false">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>r</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>V</mml:mi>
<mml:mi>A</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo>^</mml:mo>
</mml:mover>
</mml:math>
</inline-formula>
:
<italic>r</italic>
= 0.30,
<italic>p</italic>
= 0.13).</p>
</sec>
</sec>
</sec>
<sec sec-type="discussion" id="s4">
<title>Discussion</title>
<p>The present research reaffirmed and extended previous results by demonstrating that the two-dimensional localization performance of spatially and temporally congruent visual-auditory stimuli generally exceeds that of the best unimodal condition, vision. Establishing exactly how visual-auditory integration occurs in the spatial dimension is not trivial: first, the reliability of each sensory modality varies as a function of the stimulus location in space; second, each sensory modality uses a different format to encode the same properties of the environment. We capitalized on the differences in precision and accuracy between vision and audition as a function of spatial variables, i.e., eccentricity and direction, to assess their respective contribution to bimodal visual-auditory precision and accuracy. By combining two-dimensional quantitative and qualitative measures, we provided an exhaustive description of the performance field for each condition, revealing local and global differences. The well-known characteristics of vision and audition in the frontal perceptive field were verified, providing a solid baseline for the study of visual-auditory localization performance. The experiment yielded the following findings.</p>
<p>First, visual-auditory localization precision exceeded that of the more precise modality, vision, and was well-predicted by the MLE. The redundancy gain observed in the bimodal condition, a signature of crossmodal integration (Stein and Meredith,
<xref rid="B62" ref-type="bibr">1993</xref>
) was greater than predicted by the model and supported an inverse effectiveness effect. The magnitude of the redundancy gain was relatively constant regardless of the reliability of the best unisensory component, a result previously reported by Charbonneau et al. (
<xref rid="B14" ref-type="bibr">2013</xref>
) for the localization of spatially congruent visual-auditory stimuli in azimuth. The bimodal precision, both observed and predicted, was positively correlated with the unimodal precision, with a ratio of 3:1 for vision and audition, respectively. Based on the expected differences in precision for A and V in the center and in the periphery, we anticipated that the contribution of vision in the periphery would be reduced and that of audition increased, owing to the predicted narrower gap between visual and auditory precision in this region. For direction, vision, which is the most reliable modality for elevation, was given a stronger weight along the elevation axis than along the azimuth axis. Less expected was the finding that the visual weight decreased with eccentricity in azimuth only. In elevation, the visual weight was greater in the lower than in the upper hemifield. Meanwhile, the eigenvector's radial localization pattern supported a polar representation of the bimodal space, with directions similar to those in the visual condition. For the model, the eigenvector's localization pattern supported a hybrid representation, in particular for loci where the orientations of the ellipses between modalities were the most discrepant. One may conclude at this point that the improvement in precision for the bimodal stimulus relative to the visual stimulus revealed the presence of optimal integration well-predicted by the Maximum Likelihood Estimation (MLE) model. Further, the bimodal visual-auditory stimulus location appears to be represented in a polar coordinate system at the initial stages of processing in the brain.</p>
<p>Second, visual-auditory localization was also shown to be, on average, more accurate than visual localization, a phenomenon unpredicted by the model. We observed performance enhancement in 64% of the cases, against 44% for the model. In the absence of spatial discrepancy between the visual and the auditory stimuli, the overall MLE prediction was that the bimodal visual-auditory localization accuracy would be equivalent to the most accurate unimodal condition, vision. The results showed that locally, bimodal visual-auditory localization performance was equivalent to the most accurate unimodal condition, suggesting a
<italic>relative</italic>
rather than an
<italic>absolute</italic>
sensory dominance. Of particular interest was how precision was related to accuracy when a bimodal event is perceived as unified in space and time. Overall, VA accuracy was correlated with the visual weight: the stronger the visual weight, the greater the VA accuracy. However, visual accuracy was a stronger predictor of the bimodal accuracy than the visual weight. Also, our results support some form of transitivity between the performance for precision and accuracy, with 62% of the cases of performance enhancement for precision also leading to performance enhancement for accuracy. As for precision, the magnitude of the redundancy gain was relatively constant regardless of the reliability of the best unisensory component. There was no reduction in vector direction deviations in the bimodal condition, which was well-predicted by the model. For all the targets, we observed a relatively homogeneous and proportional underestimation of target distance, with constant errors directed inward toward the origin of the polar coordinate system. The resulting array of the final positions was an undistorted replica of the target array, displaced by a constant error common to all targets. The local distortion (which refers to the fidelity with which the relative spatial organization of the targets is maintained in the configuration of the final pointing positions, McIntyre et al.,
<xref rid="B46" ref-type="bibr">2000</xref>
) indicates an isotropic contraction, possibly produced by an inaccurate sensorimotor transformation.</p>
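<p>The constant-error pattern described above (an undistorted replica of the target array, contracted toward the origin and displaced by a common offset) can be summarized by fitting a linear transformation from target to endpoint positions by least squares, in the spirit of the local-distortion analysis. A minimal sketch with synthetic data (the 0.9 gain and the shift are illustrative, not the study's values):</p>

```python
import numpy as np

def fit_affine(targets, endpoints):
    """Least-squares fit of endpoints ≈ targets @ A + b from paired
    2D target and pointing-endpoint coordinates."""
    T = np.column_stack([targets, np.ones(len(targets))])  # rows [x, y, 1]
    M, *_ = np.linalg.lstsq(T, endpoints, rcond=None)
    return M[:2], M[2]                                     # A (2x2), b (2,)

# Synthetic check: an isotropic contraction toward the origin (gain 0.9)
# plus a constant shift, mimicking the described error pattern.
targets = np.array([[x, y] for x in (-20, 0, 20) for y in (-20, 0, 20)], float)
endpoints = 0.9 * targets + np.array([1.0, -0.5])
A, b = fit_affine(targets, endpoints)
# A recovers ~0.9 * identity (isotropic gain); b recovers the constant error
```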
<p>Lastly, the measurement of the bimodal local distortion represents a local approximation of a global function that can be described by a linear transformation from target to endpoint position, as presented in Appendix 2. One can see the similarities between the functions that describe visual and bimodal local distortion. Meanwhile, the pattern of parallel constant errors observed in the auditory condition reveals a Cartesian representation. The distortions and discrepancies in auditory and visual space described in our results have two main explanations. The first is the possibility that open-loop response measures of egocentric location that involve reaching or pointing are susceptible to confounding by motor variables and/or a reliance on body-centric coordinates. For example, it might be proposed that reaching for visual objects is subject to a motor bias that shifts the response toward the middle of the visual (and body-centric) field, resulting in what appears to be a compression of visual space where none actually exists. A second potential concern with most response measures is that because they involve localizing a target that has just been extinguished, their results may apply to memory-stored rather than currently perceived target locations (Seth and Shimojo,
<xref rid="B61" ref-type="bibr">2001</xref>
). The present results suggest that short-term-memory distortions may have affected the localization performance. The results also speak against the amodality hypothesis (i.e., spatial images have no trace of their modal origins, Loomis et al.,
<xref rid="B43" ref-type="bibr">2012</xref>
) because the patterns of responses clearly reveal the initial coding of the stimuli.</p>
<p>The major contribution of the present research was the demonstration of how the differences between auditory and visual spatial perception, some of which have been reported previously, relate to the interaction of the two modalities in the localization of the VA targets across the 2D frontal field. First, localization response and accuracy were estimated in two dimensions, rather than being decomposed artificially into separate, non-collinear
<italic>x</italic>
and
<italic>y</italic>
response components. Another important difference from previous research is that we used spatially congruent rather than spatially discrepant stimuli, both of which were considered optimal for the task. The differences in precision and accuracy for vision and audition were used to create different ecological levels of reliability of the two modalities, instead of capitalizing on the artificial degradation of one or the other stimulus. One may argue that the integration effect would have been greater with degraded stimuli. This is indubitably true, but it may have obscured the role of eccentricity and direction.</p>
<p>Two other important distinctions between the present research and previous similar efforts were the use of (a) “free field” rather than binaurally created auditory targets and (b) an absolute (i.e., egocentric) localization measure (Oldfield and Parker,
<xref rid="B49" ref-type="bibr">1984</xref>
; Hairston et al.,
<xref rid="B29" ref-type="bibr">2003a</xref>
), rather than a forced-choice (relative) one (Strybel and Fujimoto,
<xref rid="B63" ref-type="bibr">2000</xref>
; Battaglia et al.,
<xref rid="B3" ref-type="bibr">2003</xref>
; Alais and Burr,
<xref rid="B2" ref-type="bibr">2004</xref>
). The advantage of using actual auditory targets is that they are known to provide better cues for localization in the vertical dimension than are binaural stimuli (Blauert,
<xref rid="B9" ref-type="bibr">1997</xref>
) and are, of course, more naturalistic. With respect to the localization measure, although a forced-choice indicator (e.g., “Is the sound to the left of the light or to the right?”) is useful for some experimental questions, it was inappropriate for our research, in which the objective was to measure exactly where in 2D space the V, A, and VA targets appeared to be located. For example, although a forced-choice indicator could be used to measure localization accuracy along the azimuth and elevation, it would be insensitive to any departures from these canonical dimensions. In particular, it could not distinguish a sound that was localized 2° to the right of straight ahead along the azimuth from one localized 2° to the right and 1° above the azimuth. Our absolute measure, in which participants directed a visual pointer at the apparent location of the target, is clearly not constrained in this way.</p>
<p>At this point, it is important to note that the effects reported here could appear quite modest compared with previous studies. This was expected given that we used
<italic>non-degraded</italic>
and
<italic>congruent</italic>
visual and auditory stimuli. Increasing the size of the test region, especially in azimuth, would further shift the relative reliability of vision and audition, to the point where audition would dominate vision. Another limitation of our study is that the head-restrained method we used could have contributed to some of the reported local distortions. Combining a wider field and a head-free method would provide the opportunity to investigate spatial visual-auditory interactions in a more ecological framework.</p>
<p>In conclusion, these results demonstrate that spatial locus, i.e., the spatial congruency effect (SCE), must be added to the long list of factors that influence the relative weights of audition and vision for spatial localization. Thus, rather than making the blanket statement that vision dominates audition in spatial perception, it is important to determine the variables that contribute to (or reduce) this general superiority. The present results clearly show that the two-dimensional target's locus is one of these variables. Finally, we would argue that because our research capitalized on naturally occurring spatial discrepancies between vision and audition using ecologically valid stimulus targets rather than laboratory creations, its results are especially applicable to the interaction of these sensory modalities in the everyday world.</p>
<sec>
<title>Conflict of interest statement</title>
<p>The Guest Associate Editor Guillaume Andeol declares that, despite sharing an affiliation with the author Patrick Maurice Basile Sandor at the Institut de Recherche Biomédicale des Armées, the review was handled objectively. The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
</sec>
</body>
<back>
<ack>
<p>We wish to thank C. Roumes for initial contribution, A. Bichot for software development, R. Bittner for mathematical support and the reviewers for their very helpful comments. A preliminary version of some of the contents of this article is contained in the Proceedings of the 26th European Conference on Visual Perception and in the Proceedings of the 26th Annual Meeting of the Cognitive Science Society. This work was supported by a Direction Générale de l'Armement/Service de Santé des Armées grant and a NASA grant.</p>
</ack>
<ref-list>
<title>References</title>
<ref id="B1">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Abrams</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Nizam</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Carrasco</surname>
<given-names>M.</given-names>
</name>
</person-group>
(
<year>2012</year>
).
<article-title>Isoeccentric locations are not equivalent: the extent of the vertical meridian asymmetry</article-title>
.
<source>Vision Res.</source>
<volume>52</volume>
,
<fpage>70</fpage>
<lpage>78</lpage>
.
<pub-id pub-id-type="doi">10.1016/j.visres.2011.10.016</pub-id>
<pub-id pub-id-type="pmid">22086075</pub-id>
</mixed-citation>
</ref>
<ref id="B2">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Alais</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Burr</surname>
<given-names>D.</given-names>
</name>
</person-group>
(
<year>2004</year>
).
<article-title>The ventriloquist effect results from near-optimal bimodal integration</article-title>
.
<source>Curr. Biol.</source>
<volume>14</volume>
,
<fpage>257</fpage>
<lpage>262</lpage>
.
<pub-id pub-id-type="doi">10.1016/j.cub.2004.01.029</pub-id>
<pub-id pub-id-type="pmid">14761661</pub-id>
</mixed-citation>
</ref>
<ref id="B3">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Battaglia</surname>
<given-names>P. W.</given-names>
</name>
<name>
<surname>Jacobs</surname>
<given-names>R. A.</given-names>
</name>
<name>
<surname>Aslin</surname>
<given-names>R. N.</given-names>
</name>
</person-group>
(
<year>2003</year>
).
<article-title>Bayesian integration of visual and auditory signals for spatial localization</article-title>
.
<source>J. Opt. Soc. Am. A</source>
<volume>20</volume>
,
<fpage>1391</fpage>
<lpage>1397</lpage>
.
<pub-id pub-id-type="doi">10.1364/josaa.20.001391</pub-id>
</mixed-citation>
</ref>
<ref id="B4">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Bernardo</surname>
<given-names>J. M.</given-names>
</name>
<name>
<surname>Smith</surname>
<given-names>A. F.</given-names>
</name>
</person-group>
(
<year>2000</year>
).
<source>Bayesian Theory.</source>
<publisher-loc>Chichester; New York, NY; Weinheim; Brisbane QLD; Singapore; Toronto, ON</publisher-loc>
:
<publisher-name>John Wiley & Sons, Ltd</publisher-name>
.</mixed-citation>
</ref>
<ref id="B5">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bertelson</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Radeau</surname>
<given-names>M.</given-names>
</name>
</person-group>
(
<year>1981</year>
).
<article-title>Cross-modal bias and perceptual fusion with auditory-visual spatial discordance</article-title>
.
<source>Percept. Psychophys.</source>
<volume>29</volume>
,
<fpage>578</fpage>
<lpage>584</lpage>
.
<pub-id pub-id-type="doi">10.3758/BF03207374</pub-id>
<pub-id pub-id-type="pmid">7279586</pub-id>
</mixed-citation>
</ref>
<ref id="B6">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bertelson</surname>
<given-names>P.</given-names>
</name>
</person-group>
(
<year>1999</year>
).
<article-title>Ventriloquism: a case of crossmodal perceptual grouping</article-title>
.
<source>Adv. Psychol.</source>
<volume>129</volume>
,
<fpage>347</fpage>
<lpage>362</lpage>
.
<pub-id pub-id-type="doi">10.1016/S0166-4115(99)80034-X</pub-id>
</mixed-citation>
</ref>
<ref id="B7">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Best</surname>
<given-names>V.</given-names>
</name>
<name>
<surname>Marrone</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Mason</surname>
<given-names>C. R.</given-names>
</name>
<name>
<surname>Kidd</surname>
<given-names>G.</given-names>
<suffix>Jr.</suffix>
</name>
<name>
<surname>Shinn-Cunningham</surname>
<given-names>B. G.</given-names>
</name>
</person-group>
(
<year>2009</year>
).
<article-title>Effects of sensorineural hearing loss on visually guided attention in a multitalker environment</article-title>
.
<source>J. Assoc. Res. Otolaryngol.</source>
<volume>10</volume>
,
<fpage>142</fpage>
<lpage>149</lpage>
.
<pub-id pub-id-type="doi">10.1007/s10162-008-0146-7</pub-id>
<pub-id pub-id-type="pmid">19009321</pub-id>
</mixed-citation>
</ref>
<ref id="B8">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Blauert</surname>
<given-names>J.</given-names>
</name>
</person-group>
(
<year>1983</year>
).
<article-title>Review paper: psychoacoustic binaural phenomena</article-title>
, in
<source>Hearing: Physiological Bases and Psychophysics</source>
, eds
<person-group person-group-type="editor">
<name>
<surname>Klinke</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Hartmann</surname>
<given-names>R.</given-names>
</name>
</person-group>
(
<publisher-loc>Berlin; Heidelberg</publisher-loc>
:
<publisher-name>Springer</publisher-name>
),
<fpage>182</fpage>
<lpage>189</lpage>
.</mixed-citation>
</ref>
<ref id="B9">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Blauert</surname>
<given-names>J.</given-names>
</name>
</person-group>
(
<year>1997</year>
).
<source>Spatial Hearing: The Psychophysics of Human Sound Localization.</source>
<publisher-loc>Cambridge, MA</publisher-loc>
:
<publisher-name>Massachusetts Institute of Technology</publisher-name>
.</mixed-citation>
</ref>
<ref id="B10">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bronkhorst</surname>
<given-names>A. W.</given-names>
</name>
</person-group>
(
<year>1995</year>
).
<article-title>Localization of real and virtual sound sources</article-title>
.
<source>J. Acoust. Soc. Am.</source>
<volume>98</volume>
,
<fpage>2542</fpage>
<lpage>2553</lpage>
.
<pub-id pub-id-type="doi">10.1121/1.413219</pub-id>
</mixed-citation>
</ref>
<ref id="B11">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Brungart</surname>
<given-names>D. S.</given-names>
</name>
<name>
<surname>Rabinowitz</surname>
<given-names>W. M.</given-names>
</name>
<name>
<surname>Durlach</surname>
<given-names>N. I.</given-names>
</name>
</person-group>
(
<year>2000</year>
).
<article-title>Evaluation of response methods for the localization of nearby objects</article-title>
.
<source>Percept. Psychophys.</source>
<volume>62</volume>
,
<fpage>48</fpage>
<lpage>65</lpage>
.
<pub-id pub-id-type="doi">10.3758/BF03212060</pub-id>
<pub-id pub-id-type="pmid">10703255</pub-id>
</mixed-citation>
</ref>
<ref id="B12">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Bülthoff</surname>
<given-names>H. H.</given-names>
</name>
<name>
<surname>Yuille</surname>
<given-names>A. L.</given-names>
</name>
</person-group>
(
<year>1996</year>
).
<article-title>A Bayesian framework for the integration of visual modules</article-title>
, in
<source>Attention and Performance XVI: Information Integration in Perception and Communication</source>
, eds
<person-group person-group-type="editor">
<name>
<surname>Inui</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>McClelland</surname>
<given-names>J. L.</given-names>
</name>
</person-group>
(
<publisher-loc>Hong Kong</publisher-loc>
:
<publisher-name>Palatino</publisher-name>
),
<fpage>49</fpage>
<lpage>70</lpage>
.</mixed-citation>
</ref>
<ref id="B13">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Carlile</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Leong</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Hyams</surname>
<given-names>S.</given-names>
</name>
</person-group>
(
<year>1997</year>
).
<article-title>The nature and distribution of errors in sound localization by human listeners</article-title>
.
<source>Hear. Res.</source>
<volume>114</volume>
,
<fpage>179</fpage>
<lpage>196</lpage>
.
<pub-id pub-id-type="doi">10.1016/S0378-5955(97)00161-5</pub-id>
<pub-id pub-id-type="pmid">9447931</pub-id>
</mixed-citation>
</ref>
<ref id="B14">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Charbonneau</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Véronneau</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Boudrias-Fournier</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Lepore</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Collignon</surname>
<given-names>O.</given-names>
</name>
</person-group>
(
<year>2013</year>
).
<article-title>The ventriloquist in periphery: impact of eccentricity-related reliability on audio-visual localization</article-title>
.
<source>J. Vis.</source>
<volume>13</volume>
,
<fpage>1</fpage>
<lpage>14</lpage>
.
<pub-id pub-id-type="doi">10.1167/13.12.20</pub-id>
</mixed-citation>
</ref>
<ref id="B15">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Colonius</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Diederich</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Steenken</surname>
<given-names>R.</given-names>
</name>
</person-group>
(
<year>2009</year>
).
<article-title>Time-window-of-integration (TWIN) model for saccadic reaction time: effect of auditory masker level on visual–auditory spatial interaction in elevation</article-title>
.
<source>Brain Topogr.</source>
<volume>21</volume>
,
<fpage>177</fpage>
<lpage>184</lpage>
.
<pub-id pub-id-type="doi">10.1007/s10548-009-0091-8</pub-id>
<pub-id pub-id-type="pmid">19337824</pub-id>
</mixed-citation>
</ref>
<ref id="B16">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Crawford</surname>
<given-names>J. D.</given-names>
</name>
<name>
<surname>Medendorp</surname>
<given-names>W. P.</given-names>
</name>
<name>
<surname>Marotta</surname>
<given-names>J. J.</given-names>
</name>
</person-group>
(
<year>2004</year>
).
<article-title>Spatial transformations for eye–hand coordination</article-title>
.
<source>J. Neurophysiol.</source>
<volume>92</volume>
,
<fpage>10</fpage>
<lpage>19</lpage>
.
<pub-id pub-id-type="doi">10.1152/jn.00117.2004</pub-id>
<pub-id pub-id-type="pmid">15212434</pub-id>
</mixed-citation>
</ref>
<ref id="B17">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Culler</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Coakley</surname>
<given-names>J. D.</given-names>
</name>
<name>
<surname>Lowy</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Gross</surname>
<given-names>N.</given-names>
</name>
</person-group>
(
<year>1943</year>
).
<article-title>A revised frequency-map of the guinea-pig cochlea</article-title>
.
<source>Am. J. Psychol.</source>
<volume>56</volume>
,
<fpage>475</fpage>
<lpage>500</lpage>
.
<pub-id pub-id-type="doi">10.2307/1417351</pub-id>
</mixed-citation>
</ref>
<ref id="B18">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Curcio</surname>
<given-names>C. A.</given-names>
</name>
<name>
<surname>Sloan</surname>
<given-names>K. R.</given-names>
<suffix>Jr.</suffix>
</name>
<name>
<surname>Packer</surname>
<given-names>O.</given-names>
</name>
<name>
<surname>Hendrickson</surname>
<given-names>A. E.</given-names>
</name>
<name>
<surname>Kalina</surname>
<given-names>R. E.</given-names>
</name>
</person-group>
(
<year>1987</year>
).
<article-title>Distribution of cones in human and monkey retina: individual variability and radial asymmetry</article-title>
.
<source>Science</source>
<volume>236</volume>
,
<fpage>579</fpage>
<lpage>582</lpage>
.
<pub-id pub-id-type="doi">10.1126/science.3576186</pub-id>
<pub-id pub-id-type="pmid">3576186</pub-id>
</mixed-citation>
</ref>
<ref id="B19">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>DeValois</surname>
<given-names>R. L.</given-names>
</name>
<name>
<surname>DeValois</surname>
<given-names>K. K.</given-names>
</name>
</person-group>
(
<year>1988</year>
).
<source>Spatial Vision</source>
.
<publisher-loc>New York, NY</publisher-loc>
:
<publisher-name>Oxford University Press</publisher-name>
.</mixed-citation>
</ref>
<ref id="B20">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Easton</surname>
<given-names>R. D.</given-names>
</name>
</person-group>
(
<year>1983</year>
).
<article-title>The effect of head movements on visual and auditory dominance</article-title>
.
<source>Perception</source>
<volume>12</volume>
,
<fpage>63</fpage>
<lpage>70</lpage>
.
<pub-id pub-id-type="doi">10.1068/p120063</pub-id>
<pub-id pub-id-type="pmid">6646955</pub-id>
</mixed-citation>
</ref>
<ref id="B21">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ernst</surname>
<given-names>M. O.</given-names>
</name>
<name>
<surname>Banks</surname>
<given-names>M. S.</given-names>
</name>
</person-group>
(
<year>2002</year>
).
<article-title>Humans integrate visual and haptic information in a statistically optimal fashion</article-title>
.
<source>Nature</source>
<volume>415</volume>
,
<fpage>429</fpage>
<lpage>433</lpage>
.
<pub-id pub-id-type="doi">10.1038/415429a</pub-id>
<pub-id pub-id-type="pmid">11807554</pub-id>
</mixed-citation>
</ref>
<ref id="B22">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ernst</surname>
<given-names>M. O.</given-names>
</name>
<name>
<surname>Bülthoff</surname>
<given-names>H. H.</given-names>
</name>
</person-group>
(
<year>2004</year>
).
<article-title>Merging the senses into a robust percept</article-title>
.
<source>Trends Cogn. Sci.</source>
<volume>8</volume>
,
<fpage>162</fpage>
<lpage>169</lpage>
.
<pub-id pub-id-type="doi">10.1016/j.tics.2004.02.002</pub-id>
<pub-id pub-id-type="pmid">15050512</pub-id>
</mixed-citation>
</ref>
<ref id="B23">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Fisher</surname>
<given-names>G. H.</given-names>
</name>
</person-group>
(
<year>1968</year>
).
<article-title>Agreement between the spatial senses</article-title>
.
<source>Percept. Mot. Skills</source>
<volume>26</volume>
,
<fpage>849</fpage>
<lpage>850</lpage>
.
<pub-id pub-id-type="doi">10.2466/pms.1968.26.3.849</pub-id>
<pub-id pub-id-type="pmid">5657734</pub-id>
</mixed-citation>
</ref>
<ref id="B24">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Freedman</surname>
<given-names>E. G.</given-names>
</name>
<name>
<surname>Sparks</surname>
<given-names>D. L.</given-names>
</name>
</person-group>
(
<year>1997</year>
).
<article-title>Activity of cells in the deeper layers of the superior colliculus of the rhesus monkey: evidence for a gaze displacement command</article-title>
.
<source>J. Neurophysiol.</source>
<volume>78</volume>
,
<fpage>1669</fpage>
<lpage>1690</lpage>
.
<pub-id pub-id-type="pmid">9310452</pub-id>
</mixed-citation>
</ref>
<ref id="B25">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Fuller</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Carrasco</surname>
<given-names>M.</given-names>
</name>
</person-group>
(
<year>2009</year>
).
<article-title>Perceptual consequences of visual performance fields: the case of the line motion illusion</article-title>
.
<source>J. Vis.</source>
<volume>9</volume>
:
<fpage>13</fpage>
.
<pub-id pub-id-type="doi">10.1167/9.4.13</pub-id>
<pub-id pub-id-type="pmid">19757922</pub-id>
</mixed-citation>
</ref>
<ref id="B26">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Gardner</surname>
<given-names>J. L.</given-names>
</name>
<name>
<surname>Merriam</surname>
<given-names>E. P.</given-names>
</name>
<name>
<surname>Movshon</surname>
<given-names>J. A.</given-names>
</name>
<name>
<surname>Heeger</surname>
<given-names>D. J.</given-names>
</name>
</person-group>
(
<year>2008</year>
).
<article-title>Maps of visual space in human occipital cortex are retinotopic, not spatiotopic</article-title>
.
<source>J. Neurosci.</source>
<volume>28</volume>
,
<fpage>3988</fpage>
<lpage>3999</lpage>
.
<pub-id pub-id-type="doi">10.1523/JNEUROSCI.5476-07.2008</pub-id>
<pub-id pub-id-type="pmid">18400898</pub-id>
</mixed-citation>
</ref>
<ref id="B27">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Godfroy</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Roumes</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Dauchy</surname>
<given-names>P.</given-names>
</name>
</person-group>
(
<year>2003</year>
).
<article-title>Spatial variations of visual-auditory fusion areas</article-title>
.
<source>Perception</source>
<volume>32</volume>
,
<fpage>1233</fpage>
<lpage>1246</lpage>
.
<pub-id pub-id-type="doi">10.1068/p3344</pub-id>
<pub-id pub-id-type="pmid">14700258</pub-id>
</mixed-citation>
</ref>
<ref id="B28">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Goossens</surname>
<given-names>H. H. L. M.</given-names>
</name>
<name>
<surname>Van Opstal</surname>
<given-names>A. J.</given-names>
</name>
</person-group>
(
<year>1999</year>
).
<article-title>Influence of head position on the spatial representation of acoustic targets</article-title>
.
<source>J. Neurophysiol.</source>
<volume>81</volume>
,
<fpage>2720</fpage>
<lpage>2736</lpage>
.
<pub-id pub-id-type="pmid">10368392</pub-id>
</mixed-citation>
</ref>
<ref id="B29">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hairston</surname>
<given-names>W. D.</given-names>
</name>
<name>
<surname>Laurienti</surname>
<given-names>P. J.</given-names>
</name>
<name>
<surname>Mishra</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Burdette</surname>
<given-names>J. H.</given-names>
</name>
<name>
<surname>Wallace</surname>
<given-names>M. T.</given-names>
</name>
</person-group>
(
<year>2003a</year>
).
<article-title>Multisensory enhancement of localization under conditions of induced myopia</article-title>
.
<source>Exp. Brain Res.</source>
<volume>152</volume>
,
<fpage>404</fpage>
<lpage>408</lpage>
.
<pub-id pub-id-type="doi">10.1007/s00221-003-1646-7</pub-id>
<pub-id pub-id-type="pmid">14504674</pub-id>
</mixed-citation>
</ref>
<ref id="B30">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hairston</surname>
<given-names>W. D.</given-names>
</name>
<name>
<surname>Wallace</surname>
<given-names>M. T.</given-names>
</name>
<name>
<surname>Vaughan</surname>
<given-names>J. W.</given-names>
</name>
<name>
<surname>Stein</surname>
<given-names>B. E.</given-names>
</name>
<name>
<surname>Norris</surname>
<given-names>J. L.</given-names>
</name>
<name>
<surname>Schirillo</surname>
<given-names>J. A.</given-names>
</name>
</person-group>
(
<year>2003b</year>
).
<article-title>Visual localization ability influences cross-modal bias</article-title>
.
<source>J. Cogn. Neurosci.</source>
<volume>15</volume>
,
<fpage>20</fpage>
<lpage>29</lpage>
.
<pub-id pub-id-type="doi">10.1162/089892903321107792</pub-id>
<pub-id pub-id-type="pmid">12590840</pub-id>
</mixed-citation>
</ref>
<ref id="B31">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hay</surname>
<given-names>J. C.</given-names>
</name>
<name>
<surname>Pick</surname>
<given-names>H. L.</given-names>
</name>
<name>
<surname>Ikeda</surname>
<given-names>K.</given-names>
</name>
</person-group>
(
<year>1965</year>
).
<article-title>Visual capture produced by prism spectacles</article-title>
.
<source>Psychon. Sci.</source>
<volume>2</volume>
,
<fpage>215</fpage>
<lpage>216</lpage>
.
<pub-id pub-id-type="doi">10.3758/BF03343413</pub-id>
</mixed-citation>
</ref>
<ref id="B32">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Heffner</surname>
<given-names>H. E.</given-names>
</name>
<name>
<surname>Heffner</surname>
<given-names>R. S.</given-names>
</name>
</person-group>
(
<year>2005</year>
).
<article-title>The sound-localization ability of cats</article-title>
.
<source>J. Neurophysiol.</source>
<volume>94</volume>
,
<fpage>3653</fpage>
<lpage>3655</lpage>
.
<pub-id pub-id-type="doi">10.1152/jn.00720.2005</pub-id>
<pub-id pub-id-type="pmid">16222077</pub-id>
</mixed-citation>
</ref>
<ref id="B33">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Heuermann</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Colonius</surname>
<given-names>H.</given-names>
</name>
</person-group>
(
<year>2001</year>
).
<article-title>Spatial and temporal factors in visual-auditory interaction</article-title>
, in
<source>Proceedings of the 17th Meeting of the International Society for Psychophysics</source>
, eds
<person-group person-group-type="editor">
<name>
<surname>Sommerfeld</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Kompass</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Lachmann</surname>
<given-names>T.</given-names>
</name>
</person-group>
(
<publisher-loc>Lengerich</publisher-loc>
:
<publisher-name>Pabst Science</publisher-name>
),
<fpage>118</fpage>
<lpage>123</lpage>
.</mixed-citation>
</ref>
<ref id="B34">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hofman</surname>
<given-names>P. M.</given-names>
</name>
<name>
<surname>Van Opstal</surname>
<given-names>A. J.</given-names>
</name>
</person-group>
(
<year>1998</year>
).
<article-title>Spectro-temporal factors in two-dimensional human sound localization</article-title>
.
<source>J. Acoust. Soc. Am.</source>
<volume>103</volume>
,
<fpage>2634</fpage>
<lpage>2648</lpage>
.
<pub-id pub-id-type="doi">10.1121/1.422784</pub-id>
<pub-id pub-id-type="pmid">9604358</pub-id>
</mixed-citation>
</ref>
<ref id="B35">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hofman</surname>
<given-names>P. M.</given-names>
</name>
<name>
<surname>Van Opstal</surname>
<given-names>A. J.</given-names>
</name>
</person-group>
(
<year>2003</year>
).
<article-title>Binaural weighting of pinna cues in human sound localization</article-title>
.
<source>Exp. Brain Res.</source>
<volume>148</volume>
,
<fpage>458</fpage>
<lpage>470</lpage>
.
<pub-id pub-id-type="doi">10.1007/s00221-002-1320-5</pub-id>
<pub-id pub-id-type="pmid">12582829</pub-id>
</mixed-citation>
</ref>
<ref id="B36">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Honda</surname>
<given-names>H.</given-names>
</name>
</person-group>
(
<year>1991</year>
).
<article-title>The time courses of visual mislocalization and of extraretinal eye position signals at the time of vertical saccades</article-title>
.
<source>Vision Res.</source>
<volume>31</volume>
,
<fpage>1915</fpage>
<lpage>1921</lpage>
.
<pub-id pub-id-type="doi">10.1016/0042-6989(91)90186-9</pub-id>
<pub-id pub-id-type="pmid">1771775</pub-id>
</mixed-citation>
</ref>
<ref id="B37">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Hubel</surname>
<given-names>D. H.</given-names>
</name>
</person-group>
(
<year>1988</year>
).
<source>Eye, Brain, and Vision.</source>
<publisher-loc>New York, NY</publisher-loc>
:
<publisher-name>Scientific American Library</publisher-name>
.</mixed-citation>
</ref>
<ref id="B38">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Jay</surname>
<given-names>M. F.</given-names>
</name>
<name>
<surname>Sparks</surname>
<given-names>D. L.</given-names>
</name>
</person-group>
(
<year>1984</year>
).
<article-title>Auditory receptive fields in primate superior colliculus shift with changes in eye position</article-title>
.
<source>Nature</source>
<volume>309</volume>
,
<fpage>345</fpage>
<lpage>347</lpage>
.
<pub-id pub-id-type="doi">10.1038/309345a0</pub-id>
<pub-id pub-id-type="pmid">6727988</pub-id>
</mixed-citation>
</ref>
<ref id="B39">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kerzel</surname>
<given-names>D.</given-names>
</name>
</person-group>
(
<year>2002</year>
).
<article-title>Memory for the position of stationary objects: disentangling foveal bias and memory averaging</article-title>
.
<source>Vision Res.</source>
<volume>42</volume>
,
<fpage>159</fpage>
<lpage>167</lpage>
.
<pub-id pub-id-type="doi">10.1016/s0042-6989(01)00274-7</pub-id>
<pub-id pub-id-type="pmid">11809470</pub-id>
</mixed-citation>
</ref>
<ref id="B40">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Klier</surname>
<given-names>E. M.</given-names>
</name>
<name>
<surname>Wang</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Crawford</surname>
<given-names>J. D.</given-names>
</name>
</person-group>
(
<year>2001</year>
).
<article-title>The superior colliculus encodes gaze commands in retinal coordinates</article-title>
.
<source>Nat. Neurosci.</source>
<volume>4</volume>
,
<fpage>627</fpage>
<lpage>632</lpage>
.
<pub-id pub-id-type="doi">10.1038/88450</pub-id>
<pub-id pub-id-type="pmid">11369944</pub-id>
</mixed-citation>
</ref>
<ref id="B41">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kopco</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Lin</surname>
<given-names>I. F.</given-names>
</name>
<name>
<surname>Shinn-Cunningham</surname>
<given-names>B. G.</given-names>
</name>
<name>
<surname>Groh</surname>
<given-names>J. M.</given-names>
</name>
</person-group>
(
<year>2009</year>
).
<article-title>Reference frame of the ventriloquism aftereffect</article-title>
.
<source>J. Neurosci.</source>
<volume>29</volume>
,
<fpage>13809</fpage>
<lpage>13814</lpage>
.
<pub-id pub-id-type="doi">10.1523/JNEUROSCI.2783-09.2009</pub-id>
<pub-id pub-id-type="pmid">19889992</pub-id>
</mixed-citation>
</ref>
<ref id="B42">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Lee</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Groh</surname>
<given-names>J. M.</given-names>
</name>
</person-group>
(
<year>2012</year>
).
<article-title>Auditory signals evolve from hybrid- to eye-centered coordinates in the primate superior colliculus</article-title>
.
<source>J. Neurophysiol.</source>
<volume>108</volume>
,
<fpage>227</fpage>
<lpage>242</lpage>
.
<pub-id pub-id-type="doi">10.1152/jn.00706.2011</pub-id>
<pub-id pub-id-type="pmid">22514295</pub-id>
</mixed-citation>
</ref>
<ref id="B43">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Loomis</surname>
<given-names>J. M.</given-names>
</name>
<name>
<surname>Klatzky</surname>
<given-names>R. L.</given-names>
</name>
<name>
<surname>McHugh</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Giudice</surname>
<given-names>N. A.</given-names>
</name>
</person-group>
(
<year>2012</year>
).
<article-title>Spatial working memory for locations specified by vision and audition: testing the amodality hypothesis</article-title>
.
<source>Atten. Percept. Psychophys.</source>
<volume>74</volume>
,
<fpage>1260</fpage>
<lpage>1267</lpage>
.
<pub-id pub-id-type="doi">10.3758/s13414-012-0311-2</pub-id>
<pub-id pub-id-type="pmid">22552825</pub-id>
</mixed-citation>
</ref>
<ref id="B44">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Makous</surname>
<given-names>J. C.</given-names>
</name>
<name>
<surname>Middlebrooks</surname>
<given-names>J. C.</given-names>
</name>
</person-group>
(
<year>1990</year>
).
<article-title>Two-dimensional sound localization by human listeners</article-title>
.
<source>J. Acoust. Soc. Am.</source>
<volume>87</volume>
,
<fpage>2188</fpage>
<lpage>2200</lpage>
.
<pub-id pub-id-type="doi">10.1121/1.399186</pub-id>
<pub-id pub-id-type="pmid">2348023</pub-id>
</mixed-citation>
</ref>
<ref id="B45">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Mateeff</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Gourevich</surname>
<given-names>A.</given-names>
</name>
</person-group>
(
<year>1983</year>
).
<article-title>Peripheral vision and perceived visual direction</article-title>
.
<source>Biol. Cybern.</source>
<volume>49</volume>
,
<fpage>111</fpage>
<lpage>118</lpage>
.
<pub-id pub-id-type="doi">10.1007/BF00320391</pub-id>
<pub-id pub-id-type="pmid">6661443</pub-id>
</mixed-citation>
</ref>
<ref id="B46">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>McIntyre</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Stratta</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Droulez</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Lacquaniti</surname>
<given-names>F.</given-names>
</name>
</person-group>
(
<year>2000</year>
).
<article-title>Analysis of pointing errors reveals properties of data representations and coordinate transformations within the central nervous system</article-title>
.
<source>Neural Comput.</source>
<volume>12</volume>
,
<fpage>2823</fpage>
<lpage>2855</lpage>
.
<pub-id pub-id-type="doi">10.1162/089976600300014746</pub-id>
<pub-id pub-id-type="pmid">11112257</pub-id>
</mixed-citation>
</ref>
<ref id="B47">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Middlebrooks</surname>
<given-names>J. C.</given-names>
</name>
<name>
<surname>Green</surname>
<given-names>D. M.</given-names>
</name>
</person-group>
(
<year>1991</year>
).
<article-title>Sound localization by human listeners</article-title>
.
<source>Annu. Rev. Psychol.</source>
<volume>42</volume>
,
<fpage>135</fpage>
<lpage>159</lpage>
.
<pub-id pub-id-type="doi">10.1146/annurev.ps.42.020191.001031</pub-id>
<pub-id pub-id-type="pmid">2018391</pub-id>
</mixed-citation>
</ref>
<ref id="B48">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Müsseler</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Van der Heijden</surname>
<given-names>A. H. C.</given-names>
</name>
<name>
<surname>Mahmud</surname>
<given-names>S. H.</given-names>
</name>
<name>
<surname>Deubel</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Ertsey</surname>
<given-names>S.</given-names>
</name>
</person-group>
(
<year>1999</year>
).
<article-title>Relative mislocalization of briefly presented stimuli in the retinal periphery</article-title>
.
<source>Percept. Psychophys.</source>
<volume>61</volume>
,
<fpage>1646</fpage>
<lpage>1661</lpage>
.
<pub-id pub-id-type="doi">10.3758/BF03213124</pub-id>
<pub-id pub-id-type="pmid">10598476</pub-id>
</mixed-citation>
</ref>
<ref id="B49">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Oldfield</surname>
<given-names>S. R.</given-names>
</name>
<name>
<surname>Parker</surname>
<given-names>S. P.</given-names>
</name>
</person-group>
(
<year>1984</year>
).
<article-title>Acuity of sound localization: a topography of auditory space. I. Normal hearing conditions</article-title>
.
<source>Perception</source>
<volume>13</volume>
,
<fpage>581</fpage>
<lpage>600</lpage>
.
<pub-id pub-id-type="doi">10.1068/p130581</pub-id>
<pub-id pub-id-type="pmid">6535983</pub-id>
</mixed-citation>
</ref>
<ref id="B50">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Oruç</surname>
<given-names>I.</given-names>
</name>
<name>
<surname>Maloney</surname>
<given-names>L. T.</given-names>
</name>
<name>
<surname>Landy</surname>
<given-names>M. S.</given-names>
</name>
</person-group>
(
<year>2003</year>
).
<article-title>Weighted linear cue combination with possibly correlated error</article-title>
.
<source>Vision Res.</source>
<volume>43</volume>
,
<fpage>2451</fpage>
<lpage>2468</lpage>
.
<pub-id pub-id-type="doi">10.1016/s0042-6989(03)00435-8</pub-id>
<pub-id pub-id-type="pmid">12972395</pub-id>
</mixed-citation>
</ref>
<ref id="B51">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Pedersen</surname>
<given-names>J. A.</given-names>
</name>
<name>
<surname>Jorgensen</surname>
<given-names>T.</given-names>
</name>
</person-group>
(
<year>2005</year>
).
<article-title>Localization performance of real and virtual sound sources</article-title>
, in
<source>Proceedings of the NATO RTO-MP-HFM-123 New Directions for Improving Audio Effectiveness Conference</source>
, (
<publisher-loc>Neuilly-sur-Seine</publisher-loc>
:
<publisher-name>NATO</publisher-name>
),
<fpage>29-1</fpage>
<lpage>29-30</lpage>
.</mixed-citation>
</ref>
<ref id="B52">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Perrott</surname>
<given-names>D. R.</given-names>
</name>
<name>
<surname>Ambarsoom</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Tucker</surname>
<given-names>J.</given-names>
</name>
</person-group>
(
<year>1987</year>
).
<article-title>Changes in head position as a measure of auditory localization performance: auditory psychomotor coordination under monaural and binaural listening conditions</article-title>
.
<source>J. Acoust. Soc. Am.</source>
<volume>82</volume>
,
<fpage>1637</fpage>
<lpage>1645</lpage>
.
<pub-id pub-id-type="doi">10.1121/1.395155</pub-id>
<pub-id pub-id-type="pmid">3693705</pub-id>
</mixed-citation>
</ref>
<ref id="B53">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Radeau</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Bertelson</surname>
<given-names>P.</given-names>
</name>
</person-group>
(
<year>1987</year>
).
<article-title>Auditory-visual interaction and the timing of inputs</article-title>
.
<source>Psychol. Res.</source>
<volume>49</volume>
,
<fpage>17</fpage>
<lpage>22</lpage>
.
<pub-id pub-id-type="doi">10.1007/bf00309198</pub-id>
<pub-id pub-id-type="pmid">3615744</pub-id>
</mixed-citation>
</ref>
<ref id="B54">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Recanzone</surname>
<given-names>G. H.</given-names>
</name>
</person-group>
(
<year>2009</year>
).
<article-title>Interactions of auditory and visual stimuli in space and time</article-title>
.
<source>Hear. Res.</source>
<volume>258</volume>
,
<fpage>89</fpage>
<lpage>99</lpage>
.
<pub-id pub-id-type="doi">10.1016/j.heares.2009.04.009</pub-id>
<pub-id pub-id-type="pmid">19393306</pub-id>
</mixed-citation>
</ref>
<ref id="B55">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Richard</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Churan</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Guitton</surname>
<given-names>D. E.</given-names>
</name>
<name>
<surname>Pack</surname>
<given-names>C. C.</given-names>
</name>
</person-group>
(
<year>2011</year>
).
<article-title>Perceptual compression of visual space during eye–head gaze shifts</article-title>
.
<source>J. Vis.</source>
<volume>11</volume>
:
<fpage>1</fpage>
.
<pub-id pub-id-type="doi">10.1167/11.12.1</pub-id>
<pub-id pub-id-type="pmid">21980187</pub-id>
</mixed-citation>
</ref>
<ref id="B56">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Robinson</surname>
<given-names>D. A.</given-names>
</name>
</person-group>
(
<year>1972</year>
).
<article-title>Eye movements evoked by collicular stimulation in the alert monkey</article-title>
.
<source>Vision Res.</source>
<volume>12</volume>
,
<fpage>1795</fpage>
<lpage>1808</lpage>
.
<pub-id pub-id-type="doi">10.1016/0042-6989(72)90070-3</pub-id>
<pub-id pub-id-type="pmid">4627952</pub-id>
</mixed-citation>
</ref>
<ref id="B57">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Rock</surname>
<given-names>I.</given-names>
</name>
<name>
<surname>Victor</surname>
<given-names>J.</given-names>
</name>
</person-group>
(
<year>1964</year>
).
<article-title>Vision and touch: an experimentally created conflict between the two senses</article-title>
.
<source>Science</source>
<volume>143</volume>
,
<fpage>594</fpage>
<lpage>596</lpage>
.
<pub-id pub-id-type="doi">10.1126/science.143.3606.594</pub-id>
<pub-id pub-id-type="pmid">14080333</pub-id>
</mixed-citation>
</ref>
<ref id="B58">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ross</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Morrone</surname>
<given-names>C. M.</given-names>
</name>
<name>
<surname>Burr</surname>
<given-names>D. C.</given-names>
</name>
</person-group>
(
<year>1997</year>
).
<article-title>Compression of visual space before saccades</article-title>
.
<source>Nature</source>
<volume>386</volume>
,
<fpage>598</fpage>
<lpage>601</lpage>
.
<pub-id pub-id-type="doi">10.1038/386598a0</pub-id>
<pub-id pub-id-type="pmid">9121581</pub-id>
</mixed-citation>
</ref>
<ref id="B59">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Saarinen</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Rovamo</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Virsu</surname>
<given-names>V.</given-names>
</name>
</person-group>
(
<year>1989</year>
).
<article-title>Analysis of spatial structure in eccentric vision</article-title>
.
<source>Invest. Ophthalmol. Vis. Sci.</source>
<volume>30</volume>
,
<fpage>293</fpage>
<lpage>296</lpage>
.
<pub-id pub-id-type="pmid">2914757</pub-id>
</mixed-citation>
</ref>
<ref id="B60">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Seeber</surname>
<given-names>B.</given-names>
</name>
</person-group>
(
<year>2003</year>
).
<source>Untersuchung der Auditiven Lokalisation mit einer Lichtzeigermethode.</source>
Doctoral dissertation,
<publisher-name>Technische Universität München, Universitätsbibliothek</publisher-name>
.</mixed-citation>
</ref>
<ref id="B61">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Sheth</surname>
<given-names>B. R.</given-names>
</name>
<name>
<surname>Shimojo</surname>
<given-names>S.</given-names>
</name>
</person-group>
(
<year>2001</year>
).
<article-title>Compression of space in visual memory</article-title>
.
<source>Vision Res.</source>
<volume>41</volume>
,
<fpage>329</fpage>
<lpage>341</lpage>
.
<pub-id pub-id-type="doi">10.1016/S0042-6989(00)00230-3</pub-id>
<pub-id pub-id-type="pmid">11164448</pub-id>
</mixed-citation>
</ref>
<ref id="B62">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Stein</surname>
<given-names>B. E.</given-names>
</name>
<name>
<surname>Meredith</surname>
<given-names>M. A.</given-names>
</name>
</person-group>
(
<year>1993</year>
).
<source>The Merging of the Senses</source>
.
<publisher-loc>Cambridge, MA; London</publisher-loc>
:
<publisher-name>The MIT Press</publisher-name>
.</mixed-citation>
</ref>
<ref id="B63">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Strybel</surname>
<given-names>T. Z.</given-names>
</name>
<name>
<surname>Fujimoto</surname>
<given-names>K.</given-names>
</name>
</person-group>
(
<year>2000</year>
).
<article-title>Minimum audible angles in the horizontal and vertical planes: effects of stimulus onset asynchrony and burst duration</article-title>
.
<source>J. Acoust. Soc. Am.</source>
<volume>108</volume>
,
<fpage>3092</fpage>
<lpage>3095</lpage>
.
<pub-id pub-id-type="doi">10.1121/1.1323720</pub-id>
<pub-id pub-id-type="pmid">11144604</pub-id>
</mixed-citation>
</ref>
<ref id="B64">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Thurlow</surname>
<given-names>W. R.</given-names>
</name>
<name>
<surname>Jack</surname>
<given-names>C. E.</given-names>
</name>
</person-group>
(
<year>1973</year>
).
<article-title>Certain determinants of the “ventriloquism effect.”</article-title>
<source>Percept. Mot. Skills</source>
<volume>36</volume>
,
<fpage>1171</fpage>
<lpage>1184</lpage>
.
<pub-id pub-id-type="doi">10.2466/pms.1973.36.3c.1171</pub-id>
<pub-id pub-id-type="pmid">4711968</pub-id>
</mixed-citation>
</ref>
<ref id="B65">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Van Beers</surname>
<given-names>R. J.</given-names>
</name>
<name>
<surname>Sittig</surname>
<given-names>A. C.</given-names>
</name>
<name>
<surname>van Der Gon</surname>
<given-names>J. J. D.</given-names>
</name>
</person-group>
(
<year>1999</year>
).
<article-title>Integration of proprioceptive and visual position-information: an experimentally supported model</article-title>
.
<source>J. Neurophysiol.</source>
<volume>81</volume>
,
<fpage>1355</fpage>
<lpage>1364</lpage>
.
<pub-id pub-id-type="pmid">10085361</pub-id>
</mixed-citation>
</ref>
<ref id="B66">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Van Opstal</surname>
<given-names>A. J.</given-names>
</name>
<name>
<surname>Van Gisbergen</surname>
<given-names>J. A. M.</given-names>
</name>
</person-group>
(
<year>1989</year>
).
<article-title>A nonlinear model for collicular spatial interactions underlying the metrical properties of electrically elicited saccades</article-title>
.
<source>Biol. Cybern.</source>
<volume>60</volume>
,
<fpage>171</fpage>
<lpage>183</lpage>
.
<pub-id pub-id-type="doi">10.1007/BF00207285</pub-id>
<pub-id pub-id-type="pmid">2923922</pub-id>
</mixed-citation>
</ref>
<ref id="B67">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Warren</surname>
<given-names>D. H.</given-names>
</name>
<name>
<surname>McCarthy</surname>
<given-names>T. J.</given-names>
</name>
<name>
<surname>Welch</surname>
<given-names>R. B.</given-names>
</name>
</person-group>
(
<year>1983</year>
).
<article-title>Discrepancy and nondiscrepancy methods of assessing visual-auditory interaction</article-title>
.
<source>Percept. Psychophys.</source>
<volume>33</volume>
,
<fpage>413</fpage>
<lpage>419</lpage>
.
<pub-id pub-id-type="doi">10.3758/BF03202891</pub-id>
<pub-id pub-id-type="pmid">6877986</pub-id>
</mixed-citation>
</ref>
<ref id="B68">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Welch</surname>
<given-names>R. B.</given-names>
</name>
<name>
<surname>Warren</surname>
<given-names>D. H.</given-names>
</name>
</person-group>
(
<year>1980</year>
).
<article-title>Immediate perceptual response to intersensory discrepancy</article-title>
.
<source>Psychol. Bull.</source>
<volume>88</volume>
:
<fpage>638</fpage>
.
<pub-id pub-id-type="doi">10.1037/0033-2909.88.3.638</pub-id>
<pub-id pub-id-type="pmid">7003641</pub-id>
</mixed-citation>
</ref>
<ref id="B69">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Welch</surname>
<given-names>R. B.</given-names>
</name>
</person-group>
(
<year>1999</year>
).
<article-title>Meaning, attention, and the “unity assumption” in the intersensory bias of spatial and temporal perceptions</article-title>
.
<source>Adv. Psychol.</source>
<volume>129</volume>
,
<fpage>371</fpage>
<lpage>387</lpage>
.
<pub-id pub-id-type="doi">10.1016/s0166-4115(99)80036-3</pub-id>
</mixed-citation>
</ref>
<ref id="B70">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Westheimer</surname>
<given-names>G.</given-names>
</name>
</person-group>
(
<year>1972</year>
).
<article-title>Visual acuity and spatial modulation thresholds</article-title>
, in
<source>Visual Psychophysics</source>
, eds
<person-group person-group-type="editor">
<name>
<surname>Jameson</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Hurvich</surname>
<given-names>L. M.</given-names>
</name>
</person-group>
(
<publisher-loc>Berlin; Heidelberg</publisher-loc>
:
<publisher-name>Springer</publisher-name>
),
<fpage>170</fpage>
<lpage>187</lpage>
.</mixed-citation>
</ref>
<ref id="B71">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Westheimer</surname>
<given-names>G.</given-names>
</name>
</person-group>
(
<year>1979</year>
).
<article-title>Scaling of visual acuity measurements</article-title>
.
<source>Arch. Ophthalmol.</source>
<volume>97</volume>
,
<fpage>327</fpage>
<lpage>330</lpage>
.
<pub-id pub-id-type="doi">10.1001/archopht.1979.01020010173020</pub-id>
<pub-id pub-id-type="pmid">550809</pub-id>
</mixed-citation>
</ref>
<ref id="B72">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Winer</surname>
<given-names>B. J.</given-names>
</name>
<name>
<surname>Brown</surname>
<given-names>D. R.</given-names>
</name>
<name>
<surname>Michels</surname>
<given-names>K. M.</given-names>
</name>
</person-group>
(
<year>1991</year>
).
<source>Statistical Principles in Experimental Design, 3rd Edn.</source>
<publisher-loc>New York, NY</publisher-loc>
:
<publisher-name>McGraw-Hill</publisher-name>
.</mixed-citation>
</ref>
<ref id="B73">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Witten</surname>
<given-names>I. B.</given-names>
</name>
<name>
<surname>Knudsen</surname>
<given-names>E. I.</given-names>
</name>
</person-group>
(
<year>2005</year>
).
<article-title>Why seeing is believing: merging auditory and visual worlds</article-title>
.
<source>Neuron</source>
<volume>48</volume>
,
<fpage>489</fpage>
<lpage>496</lpage>
.
<pub-id pub-id-type="doi">10.1016/j.neuron.2005.10.020</pub-id>
<pub-id pub-id-type="pmid">16269365</pub-id>
</mixed-citation>
</ref>
<ref id="B74">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Yost</surname>
<given-names>W. A.</given-names>
</name>
</person-group>
(
<year>2000</year>
).
<source>Fundamentals of Hearing: An Introduction, 4th Edn</source>
.
<publisher-loc>San Diego, CA</publisher-loc>
:
<publisher-name>Academic Press</publisher-name>
.</mixed-citation>
</ref>
<ref id="B75">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Yuille</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Bülthoff</surname>
<given-names>H. H.</given-names>
</name>
</person-group>
(
<year>1996</year>
).
<article-title>Bayesian decision theory and psychophysics</article-title>
, in
<source>Perception as Bayesian Inference</source>
, eds
<person-group person-group-type="editor">
<name>
<surname>Knill</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Richards</surname>
<given-names>W.</given-names>
</name>
</person-group>
(
<publisher-loc>Cambridge, UK</publisher-loc>
:
<publisher-name>Cambridge University Press</publisher-name>
),
<fpage>123</fpage>
<lpage>161</lpage>
.</mixed-citation>
</ref>
<ref id="B76">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Zwiers</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Van Opstal</surname>
<given-names>A. J.</given-names>
</name>
<name>
<surname>Cruysberg</surname>
<given-names>J. R.</given-names>
</name>
</person-group>
(
<year>2001</year>
).
<article-title>Two-dimensional sound-localization behavior of early-blind humans</article-title>
.
<source>Exp. Brain Res.</source>
<volume>140</volume>
,
<fpage>206</fpage>
<lpage>222</lpage>
.
<pub-id pub-id-type="doi">10.1007/s002210100800</pub-id>
<pub-id pub-id-type="pmid">11521153</pub-id>
</mixed-citation>
</ref>
</ref-list>
<app-group>
<app id="A1">
<title>Appendix 1</title>
<p>
<disp-formula id="E7">
<mml:math id="M50">
<mml:mtable columnalign="left">
<mml:mtr>
<mml:mtd columnalign="left">
<mml:msubsup>
<mml:mi>σ</mml:mi>
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mi>V</mml:mi>
<mml:mi>A</mml:mi>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mo>=</mml:mo>
<mml:mi>A</mml:mi>
<mml:mo>/</mml:mo>
<mml:mo mathcolor="black" stretchy="false">(</mml:mo>
<mml:mi>A</mml:mi>
<mml:mi>B</mml:mi>
<mml:mo>−</mml:mo>
<mml:msup>
<mml:mi>E</mml:mi>
<mml:mn>2</mml:mn>
</mml:msup>
<mml:mo mathcolor="black" stretchy="false">)</mml:mo>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd columnalign="left">
<mml:msubsup>
<mml:mi>σ</mml:mi>
<mml:mrow>
<mml:mi>y</mml:mi>
<mml:mi>V</mml:mi>
<mml:mi>A</mml:mi>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mo>=</mml:mo>
<mml:mi>B</mml:mi>
<mml:mo>/</mml:mo>
<mml:mo mathcolor="black" stretchy="false">(</mml:mo>
<mml:mi>A</mml:mi>
<mml:mi>B</mml:mi>
<mml:mo>−</mml:mo>
<mml:msup>
<mml:mi>E</mml:mi>
<mml:mn>2</mml:mn>
</mml:msup>
<mml:mo mathcolor="black" stretchy="false">)</mml:mo>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd columnalign="left">
<mml:msub>
<mml:mi>μ</mml:mi>
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mi>V</mml:mi>
<mml:mi>A</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mo mathcolor="black" stretchy="false">(</mml:mo>
<mml:mi>B</mml:mi>
<mml:mi>C</mml:mi>
<mml:mo>+</mml:mo>
<mml:mi>E</mml:mi>
<mml:mi>D</mml:mi>
<mml:mo mathcolor="black" stretchy="false">)</mml:mo>
<mml:mo>/</mml:mo>
<mml:mo mathcolor="black" stretchy="false">(</mml:mo>
<mml:mi>A</mml:mi>
<mml:mi>B</mml:mi>
<mml:mo>−</mml:mo>
<mml:msup>
<mml:mi>E</mml:mi>
<mml:mn>2</mml:mn>
</mml:msup>
<mml:mo mathcolor="black" stretchy="false">)</mml:mo>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd columnalign="left">
<mml:msub>
<mml:mi>μ</mml:mi>
<mml:mrow>
<mml:mi>y</mml:mi>
<mml:mi>V</mml:mi>
<mml:mi>A</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mo mathcolor="black" stretchy="false">(</mml:mo>
<mml:mi>A</mml:mi>
<mml:mi>D</mml:mi>
<mml:mo>+</mml:mo>
<mml:mi>E</mml:mi>
<mml:mi>C</mml:mi>
<mml:mo mathcolor="black" stretchy="false">)</mml:mo>
<mml:mo>/</mml:mo>
<mml:mo mathcolor="black" stretchy="false">(</mml:mo>
<mml:mi>A</mml:mi>
<mml:mi>B</mml:mi>
<mml:mo>−</mml:mo>
<mml:msup>
<mml:mi>E</mml:mi>
<mml:mn>2</mml:mn>
</mml:msup>
<mml:mo mathcolor="black" stretchy="false">)</mml:mo>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd columnalign="left">
<mml:mstyle class="text" mathcolor="black">
<mml:mtext>  </mml:mtext>
</mml:mstyle>
<mml:msub>
<mml:mi>p</mml:mi>
<mml:mrow>
<mml:mi>V</mml:mi>
<mml:mi>A</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mi>E</mml:mi>
<mml:mo>/</mml:mo>
<mml:msqrt style="color:black">
<mml:mrow>
<mml:mi>A</mml:mi>
<mml:mi>B</mml:mi>
</mml:mrow>
</mml:msqrt>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd columnalign="left">
<mml:mstyle class="text" mathcolor="black">
<mml:mtext>       </mml:mtext>
</mml:mstyle>
<mml:mi>A</mml:mi>
<mml:mo>=</mml:mo>
<mml:mfrac>
<mml:mn>1</mml:mn>
<mml:mrow>
<mml:mo mathcolor="black" stretchy="false">(</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>−</mml:mo>
<mml:msubsup>
<mml:mi>p</mml:mi>
<mml:mi>V</mml:mi>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mo mathcolor="black" stretchy="false">)</mml:mo>
<mml:msubsup>
<mml:mi>S</mml:mi>
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mi>V</mml:mi>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msubsup>
</mml:mrow>
</mml:mfrac>
<mml:mo>+</mml:mo>
<mml:mfrac>
<mml:mn>1</mml:mn>
<mml:mrow>
<mml:mo mathcolor="black" stretchy="false">(</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>−</mml:mo>
<mml:msubsup>
<mml:mi>p</mml:mi>
<mml:mi>A</mml:mi>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mo mathcolor="black" stretchy="false">)</mml:mo>
<mml:msubsup>
<mml:mi>S</mml:mi>
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mi>A</mml:mi>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msubsup>
</mml:mrow>
</mml:mfrac>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd columnalign="left">
<mml:mstyle class="text" mathcolor="black">
<mml:mtext>      </mml:mtext>
</mml:mstyle>
<mml:mi>B</mml:mi>
<mml:mo>=</mml:mo>
<mml:mfrac>
<mml:mn>1</mml:mn>
<mml:mrow>
<mml:mo mathcolor="black" stretchy="false">(</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>−</mml:mo>
<mml:msubsup>
<mml:mi>p</mml:mi>
<mml:mi>V</mml:mi>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mo mathcolor="black" stretchy="false">)</mml:mo>
<mml:msubsup>
<mml:mi>σ</mml:mi>
<mml:mrow>
<mml:mi>y</mml:mi>
<mml:mi>V</mml:mi>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msubsup>
</mml:mrow>
</mml:mfrac>
<mml:mo>+</mml:mo>
<mml:mfrac>
<mml:mn>1</mml:mn>
<mml:mrow>
<mml:mo mathcolor="black" stretchy="false">(</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>−</mml:mo>
<mml:msubsup>
<mml:mi>p</mml:mi>
<mml:mi>A</mml:mi>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mo mathcolor="black" stretchy="false">)</mml:mo>
<mml:msubsup>
<mml:mi>σ</mml:mi>
<mml:mrow>
<mml:mi>y</mml:mi>
<mml:mi>A</mml:mi>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msubsup>
</mml:mrow>
</mml:mfrac>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd columnalign="left">
<mml:mstyle class="text" mathcolor="black">
<mml:mtext>     </mml:mtext>
</mml:mstyle>
<mml:mi>C</mml:mi>
<mml:mo>=</mml:mo>
<mml:mfrac>
<mml:mn>1</mml:mn>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo>−</mml:mo>
<mml:msubsup>
<mml:mi>p</mml:mi>
<mml:mi>V</mml:mi>
<mml:mn>2</mml:mn>
</mml:msubsup>
</mml:mrow>
</mml:mfrac>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mfrac>
<mml:mrow>
<mml:msub>
<mml:mi>μ</mml:mi>
<mml:mrow>
<mml:mi>y</mml:mi>
<mml:mi>V</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mrow>
<mml:msubsup>
<mml:mi>σ</mml:mi>
<mml:mrow>
<mml:mi>y</mml:mi>
<mml:mi>V</mml:mi>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msubsup>
</mml:mrow>
</mml:mfrac>
<mml:mo>−</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:msub>
<mml:mi>p</mml:mi>
<mml:mrow>
<mml:mi>V</mml:mi>
<mml:msub>
<mml:mi>μ</mml:mi>
<mml:mrow>
<mml:mi>y</mml:mi>
<mml:mi>V</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mi>σ</mml:mi>
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mi>V</mml:mi>
</mml:mrow>
</mml:msub>
<mml:msub>
<mml:mi>S</mml:mi>
<mml:mrow>
<mml:mi>y</mml:mi>
<mml:mi>V</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>+</mml:mo>
<mml:mfrac>
<mml:mn>1</mml:mn>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo>−</mml:mo>
<mml:msubsup>
<mml:mi>p</mml:mi>
<mml:mi>A</mml:mi>
<mml:mn>2</mml:mn>
</mml:msubsup>
</mml:mrow>
</mml:mfrac>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mfrac>
<mml:mrow>
<mml:msub>
<mml:mi>μ</mml:mi>
<mml:mrow>
<mml:mi>y</mml:mi>
<mml:mi>A</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mrow>
<mml:msubsup>
<mml:mi>σ</mml:mi>
<mml:mrow>
<mml:mi>y</mml:mi>
<mml:mi>A</mml:mi>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msubsup>
</mml:mrow>
</mml:mfrac>
<mml:mo>−</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:msub>
<mml:mi>p</mml:mi>
<mml:mrow>
<mml:mi>A</mml:mi>
<mml:msub>
<mml:mi>μ</mml:mi>
<mml:mrow>
<mml:mi>y</mml:mi>
<mml:mi>A</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mi>σ</mml:mi>
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mi>A</mml:mi>
</mml:mrow>
</mml:msub>
<mml:msub>
<mml:mi>σ</mml:mi>
<mml:mrow>
<mml:mi>y</mml:mi>
<mml:mi>A</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd columnalign="left">
<mml:mstyle class="text" mathcolor="black">
<mml:mtext>     </mml:mtext>
</mml:mstyle>
<mml:mi>D</mml:mi>
<mml:mo>=</mml:mo>
<mml:mfrac>
<mml:mn>1</mml:mn>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo>−</mml:mo>
<mml:msubsup>
<mml:mi>p</mml:mi>
<mml:mi>V</mml:mi>
<mml:mn>2</mml:mn>
</mml:msubsup>
</mml:mrow>
</mml:mfrac>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mfrac>
<mml:mrow>
<mml:msub>
<mml:mi>μ</mml:mi>
<mml:mrow>
<mml:mi>y</mml:mi>
<mml:mi>V</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mrow>
<mml:msubsup>
<mml:mi>σ</mml:mi>
<mml:mrow>
<mml:mi>y</mml:mi>
<mml:mi>V</mml:mi>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msubsup>
</mml:mrow>
</mml:mfrac>
<mml:mo>−</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:msub>
<mml:mi>p</mml:mi>
<mml:mrow>
<mml:mi>V</mml:mi>
<mml:msub>
<mml:mi>μ</mml:mi>
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mi>V</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mi>σ</mml:mi>
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mi>V</mml:mi>
</mml:mrow>
</mml:msub>
<mml:msub>
<mml:mi>S</mml:mi>
<mml:mrow>
<mml:mi>y</mml:mi>
<mml:mi>V</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>+</mml:mo>
<mml:mfrac>
<mml:mn>1</mml:mn>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo>−</mml:mo>
<mml:msubsup>
<mml:mi>p</mml:mi>
<mml:mi>A</mml:mi>
<mml:mn>2</mml:mn>
</mml:msubsup>
</mml:mrow>
</mml:mfrac>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mfrac>
<mml:mrow>
<mml:msub>
<mml:mi>μ</mml:mi>
<mml:mrow>
<mml:mi>y</mml:mi>
<mml:mi>A</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mrow>
<mml:msubsup>
<mml:mi>σ</mml:mi>
<mml:mrow>
<mml:mi>y</mml:mi>
<mml:mi>A</mml:mi>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msubsup>
</mml:mrow>
</mml:mfrac>
<mml:mo>−</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:msub>
<mml:mi>p</mml:mi>
<mml:mrow>
<mml:mi>A</mml:mi>
<mml:msub>
<mml:mi>μ</mml:mi>
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mi>A</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mi>σ</mml:mi>
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mi>A</mml:mi>
</mml:mrow>
</mml:msub>
<mml:msub>
<mml:mi>σ</mml:mi>
<mml:mrow>
<mml:mi>y</mml:mi>
<mml:mi>A</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd columnalign="left">
<mml:mstyle class="text" mathcolor="black">
<mml:mtext>     </mml:mtext>
</mml:mstyle>
<mml:mi>E</mml:mi>
<mml:mo>=</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:msub>
<mml:mi>p</mml:mi>
<mml:mi>V</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo>−</mml:mo>
<mml:msubsup>
<mml:mi>p</mml:mi>
<mml:mi>V</mml:mi>
<mml:mn>2</mml:mn>
</mml:msubsup>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:msub>
<mml:mi>σ</mml:mi>
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mi>V</mml:mi>
</mml:mrow>
</mml:msub>
<mml:msub>
<mml:mi>σ</mml:mi>
<mml:mrow>
<mml:mi>y</mml:mi>
<mml:mi>V</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfrac>
<mml:mo>+</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:msub>
<mml:mi>p</mml:mi>
<mml:mi>A</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo>−</mml:mo>
<mml:msubsup>
<mml:mi>p</mml:mi>
<mml:mi>A</mml:mi>
<mml:mn>2</mml:mn>
</mml:msubsup>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:msub>
<mml:mi>σ</mml:mi>
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mi>A</mml:mi>
</mml:mrow>
</mml:msub>
<mml:msub>
<mml:mi>σ</mml:mi>
<mml:mrow>
<mml:mi>y</mml:mi>
<mml:mi>A</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfrac>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:math>
</disp-formula>
</p>
</app>
</app-group>
<app-group>
<app id="A2">
<title>Appendix 2</title>
<disp-formula id="E8">
<mml:math id="M51">
<mml:mrow>
<mml:mi>F</mml:mi>
<mml:mo>:</mml:mo>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>y</mml:mi>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>→</mml:mo>
<mml:mo stretchy="false">(</mml:mo>
<mml:mi>r</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>θ</mml:mi>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:math>
</disp-formula>
<p>Auditory:</p>
<disp-formula id="E9">
<mml:math id="M52">
<mml:mtable columnalign="left">
<mml:mtr>
<mml:mtd>
<mml:mi>F</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>y</mml:mi>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>=</mml:mo>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mo>−</mml:mo>
<mml:mn>8.79</mml:mn>
<mml:mo>×</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mn>10</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mo>−</mml:mo>
<mml:mn>8</mml:mn>
</mml:mrow>
</mml:msup>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:msup>
<mml:mi>x</mml:mi>
<mml:mn>4</mml:mn>
</mml:msup>
<mml:mo>+</mml:mo>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mn>1.27</mml:mn>
<mml:mo>×</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mn>10</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mo>−</mml:mo>
<mml:mn>5</mml:mn>
</mml:mrow>
</mml:msup>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:msup>
<mml:mi>x</mml:mi>
<mml:mn>3</mml:mn>
</mml:msup>
<mml:mo>−</mml:mo>
<mml:mn>0.0021</mml:mn>
<mml:msup>
<mml:mi>x</mml:mi>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
</mml:mrow>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mtext>                        </mml:mtext>
<mml:mo>+</mml:mo>
<mml:mn>0.0091</mml:mn>
<mml:mi>x</mml:mi>
<mml:mo>+</mml:mo>
<mml:mn>4.51</mml:mn>
<mml:mo>+</mml:mo>
<mml:mn>0.0003</mml:mn>
<mml:msup>
<mml:mi>y</mml:mi>
<mml:mn>3</mml:mn>
</mml:msup>
<mml:mo>+</mml:mo>
<mml:mn>0.0053</mml:mn>
<mml:msup>
<mml:mi>y</mml:mi>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mtext>                       </mml:mtext>
<mml:mrow>
<mml:mrow>
<mml:mo>−</mml:mo>
<mml:mn>0.201</mml:mn>
<mml:mi>y</mml:mi>
<mml:mo>−</mml:mo>
<mml:mn>1.05</mml:mn>
<mml:mo>,</mml:mo>
<mml:mi>g</mml:mi>
<mml:mo stretchy="false">(</mml:mo>
<mml:mi>x</mml:mi>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:math>
</disp-formula>
<p>where</p>
<disp-formula id="E10">
<mml:math id="M53">
<mml:mrow>
<mml:mi>g</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>x</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>=</mml:mo>
<mml:mrow>
<mml:mo>{</mml:mo>
<mml:mrow>
<mml:mtable>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:mi>π</mml:mi>
<mml:mo>/</mml:mo>
<mml:mn>2</mml:mn>
<mml:mo>,</mml:mo>
<mml:mi>x</mml:mi>
<mml:mo><</mml:mo>
<mml:mn>10</mml:mn>
</mml:mrow>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:mn>0</mml:mn>
<mml:mo>,</mml:mo>
<mml:mi>x</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>10</mml:mn>
</mml:mrow>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:mo>−</mml:mo>
<mml:mi>π</mml:mi>
<mml:mo>/</mml:mo>
<mml:mn>2</mml:mn>
<mml:mo>,</mml:mo>
<mml:mi>x</mml:mi>
<mml:mo>></mml:mo>
<mml:mn>10</mml:mn>
</mml:mrow>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mrow>
<mml:mo>}</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</disp-formula>
<p>Visual:</p>
<disp-formula id="E11">
<mml:math id="M54">
<mml:mtable columnalign="left">
<mml:mtr>
<mml:mtd>
<mml:mi>F</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>y</mml:mi>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>=</mml:mo>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mfrac>
<mml:mrow>
<mml:msqrt>
<mml:mrow>
<mml:msup>
<mml:mi>x</mml:mi>
<mml:mn>2</mml:mn>
</mml:msup>
<mml:mo>+</mml:mo>
<mml:msup>
<mml:mi>y</mml:mi>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
</mml:msqrt>
</mml:mrow>
<mml:mrow>
<mml:mn>11.4</mml:mn>
</mml:mrow>
</mml:mfrac>
<mml:mo>+</mml:mo>
<mml:mo stretchy="false">(</mml:mo>
<mml:mo>−</mml:mo>
<mml:mn>0.001</mml:mn>
<mml:msup>
<mml:mi>x</mml:mi>
<mml:mn>2</mml:mn>
</mml:msup>
<mml:mo>−</mml:mo>
<mml:mn>0.0038</mml:mn>
<mml:mi>x</mml:mi>
<mml:mo>+</mml:mo>
<mml:mn>0.0527</mml:mn>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mtext>               </mml:mtext>
<mml:mo>+</mml:mo>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mo>−</mml:mo>
<mml:mn>0.0011</mml:mn>
<mml:msup>
<mml:mi>y</mml:mi>
<mml:mn>2</mml:mn>
</mml:msup>
<mml:mo>+</mml:mo>
<mml:mn>0.0457</mml:mn>
<mml:mi>y</mml:mi>
<mml:mo>+</mml:mo>
<mml:mn>0.8626</mml:mn>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mtext></mml:mtext>
<mml:mo>,</mml:mo>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:mrow>
<mml:mtext>                                     </mml:mtext>
<mml:mi>t</mml:mi>
<mml:mi>a</mml:mi>
<mml:msup>
<mml:mi>n</mml:mi>
<mml:mrow>
<mml:mo>−</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msup>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mfrac>
<mml:mrow>
<mml:mo>−</mml:mo>
<mml:mi>y</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>−</mml:mo>
<mml:mi>x</mml:mi>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>+</mml:mo>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mfrac>
<mml:mrow>
<mml:mi>π</mml:mi>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>270</mml:mn>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:math>
</disp-formula>
<p>Bimodal:</p>
<disp-formula id="E12">
<mml:math id="M55">
<mml:mtable columnalign="left">
<mml:mtr>
<mml:mtd>
<mml:mi>F</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>y</mml:mi>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>=</mml:mo>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mfrac>
<mml:mrow>
<mml:msqrt>
<mml:mrow>
<mml:msup>
<mml:mi>x</mml:mi>
<mml:mn>2</mml:mn>
</mml:msup>
<mml:mo>+</mml:mo>
<mml:msup>
<mml:mi>y</mml:mi>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
</mml:msqrt>
</mml:mrow>
<mml:mrow>
<mml:mn>13.6</mml:mn>
</mml:mrow>
</mml:mfrac>
<mml:mo>+</mml:mo>
<mml:mo stretchy="false">(</mml:mo>
<mml:mo>−</mml:mo>
<mml:mn>0.0014</mml:mn>
<mml:msup>
<mml:mi>x</mml:mi>
<mml:mn>2</mml:mn>
</mml:msup>
<mml:mo>+</mml:mo>
<mml:mn>0.0043</mml:mn>
<mml:mi>x</mml:mi>
<mml:mo>+</mml:mo>
<mml:mn>0.7187</mml:mn>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mtext>               </mml:mtext>
<mml:mo>−</mml:mo>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mn>1.023</mml:mn>
<mml:mo>×</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mn>10</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mo>−</mml:mo>
<mml:mn>5</mml:mn>
</mml:mrow>
</mml:msup>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:msup>
<mml:mi>y</mml:mi>
<mml:mn>4</mml:mn>
</mml:msup>
<mml:mo>−</mml:mo>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mn>2.73</mml:mn>
<mml:mo>×</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mn>10</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mo>−</mml:mo>
<mml:mn>5</mml:mn>
</mml:mrow>
</mml:msup>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:msup>
<mml:mi>y</mml:mi>
<mml:mn>3</mml:mn>
</mml:msup>
<mml:mo>+</mml:mo>
<mml:mn>0.0036</mml:mn>
<mml:msup>
<mml:mi>y</mml:mi>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mtext>               </mml:mtext>
<mml:mrow>
<mml:mrow>
<mml:mo>+</mml:mo>
<mml:mn>0.0364</mml:mn>
<mml:mi>y</mml:mi>
<mml:mo>−</mml:mo>
<mml:mn>0.0463</mml:mn>
<mml:mo>,</mml:mo>
<mml:mi>t</mml:mi>
<mml:mi>a</mml:mi>
<mml:msup>
<mml:mi>n</mml:mi>
<mml:mrow>
<mml:mo>−</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msup>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mfrac>
<mml:mrow>
<mml:mo>−</mml:mo>
<mml:mi>y</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>−</mml:mo>
<mml:mi>x</mml:mi>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>+</mml:mo>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mfrac>
<mml:mrow>
<mml:mi>π</mml:mi>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>270</mml:mn>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:math>
</disp-formula>
</app>
</app-group>
</back>
</pmc>
</record>
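The closed-form expressions in Appendix 1 amount to summing the two unimodal precision (inverse-covariance) matrices of bivariate Gaussians and then inverting the result, as in the cited van Beers et al. (1999) model. A minimal sketch in Python of that fusion step follows; the function name `fuse_2d` and all numeric values are illustrative assumptions, not taken from the study.

```python
def fuse_2d(mu_v, sd_v, rho_v, mu_a, sd_a, rho_a):
    """MLE fusion of two correlated bivariate Gaussian position estimates.

    mu_*  : (x, y) mean of the visual / auditory estimate (deg)
    sd_*  : (sigma_x, sigma_y) standard deviations (deg)
    rho_* : x-y error correlation of that modality
    Returns the fused mean (x, y) and 2x2 covariance matrix.
    """
    def precision(sd, rho):
        # Inverse of a 2x2 covariance [[sx^2, rho sx sy], [rho sx sy, sy^2]].
        sx, sy = sd
        d = 1.0 - rho * rho
        return [[1.0 / (d * sx * sx), -rho / (d * sx * sy)],
                [-rho / (d * sx * sy), 1.0 / (d * sy * sy)]]

    def mat_vec(m, v):
        return [m[0][0] * v[0] + m[0][1] * v[1],
                m[1][0] * v[0] + m[1][1] * v[1]]

    pv, pa = precision(sd_v, rho_v), precision(sd_a, rho_a)
    # Fused precision is the sum of the unimodal precisions.
    p = [[pv[i][j] + pa[i][j] for j in range(2)] for i in range(2)]
    det = p[0][0] * p[1][1] - p[0][1] * p[1][0]
    cov = [[p[1][1] / det, -p[0][1] / det],
           [-p[1][0] / det, p[0][0] / det]]
    # Fused mean is the precision-weighted combination of unimodal means.
    wv, wa = mat_vec(pv, mu_v), mat_vec(pa, mu_a)
    mu = mat_vec(cov, [wv[0] + wa[0], wv[1] + wa[1]])
    return mu, cov
```

With uncorrelated errors this reduces to the familiar per-axis weighting sigma_VA^2 = 1/(1/sigma_V^2 + 1/sigma_A^2), so the fused variance never exceeds that of the more precise modality, matching the precision gain reported for the bimodal condition.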

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/HapticV1/Data/Pmc/Curation
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 000188 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/Pmc/Curation/biblio.hfd -nk 000188 | SxmlIndent | more

To link to this page within the Wicri network

{{Explor lien
   |wiki=    Ticri/CIDE
   |area=    HapticV1
   |flux=    Pmc
   |étape=   Curation
   |type=    RBID
   |clé=     PMC:4585004
   |texte=   The interaction of vision and audition in two-dimensional space
}}

To generate wiki pages

HfdIndexSelect -h $EXPLOR_AREA/Data/Pmc/Curation/RBID.i   -Sk "pubmed:26441492" \
       | HfdSelect -Kh $EXPLOR_AREA/Data/Pmc/Curation/biblio.hfd   \
       | NlmPubMed2Wicri -a HapticV1 

Wicri

This area was generated with Dilib version V0.6.23.
Data generation: Mon Jun 13 01:09:46 2016. Site generation: Wed Mar 6 09:54:07 2024