Exploration server on haptic devices

Warning: this site is under development!
Warning: this site was generated automatically from raw corpora.
The information is therefore not validated.

Computational Characterization of Visually Induced Auditory Spatial Adaptation

Internal identifier: 001C08 (Pmc/Checkpoint); previous: 001C07; next: 001C09

Computational Characterization of Visually Induced Auditory Spatial Adaptation

Authors: David R. Wozny [United States]; Ladan Shams [United States]

Source:

RBID : PMC:3208186

Abstract

Recent research investigating the principles governing human perception has provided increasing evidence for probabilistic inference in human perception. For example, human auditory and visual localization judgments closely resemble those of a Bayesian causal inference observer, where the underlying causal structure of the stimuli is inferred based on both the available sensory evidence and prior knowledge. However, most previous studies have focused on characterization of perceptual inference within a static environment, and therefore, little is known about how this inference process changes when observers are exposed to a new environment. In this study we aimed to computationally characterize the change in auditory spatial perception induced by repeated auditory–visual spatial conflict, known as the ventriloquist aftereffect. In theory, this change could reflect a shift in the auditory sensory representations (i.e., shift in auditory likelihood distribution), a decrease in the precision of the auditory estimates (i.e., increase in spread of likelihood distribution), a shift in the auditory bias (i.e., shift in prior distribution), or an increase/decrease in strength of the auditory bias (i.e., the spread of prior distribution), or a combination of these. By quantitatively estimating the parameters of the perceptual process for each individual observer using a Bayesian causal inference model, we found that the shift in the perceived locations after exposure was associated with a shift in the mean of the auditory likelihood functions in the direction of the experienced visual offset. The results suggest that repeated exposure to a fixed auditory–visual discrepancy is attributed by the nervous system to sensory representation error and, as a result, the sensory map of space is recalibrated to correct the error.


URL:
DOI: 10.3389/fnint.2011.00075
PubMed: 22069383
PubMed Central: 3208186


Affiliations:



The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">Computational Characterization of Visually Induced Auditory Spatial Adaptation</title>
<author>
<name sortKey="Wozny, David R" sort="Wozny, David R" uniqKey="Wozny D" first="David R." last="Wozny">David R. Wozny</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<institution>Department of Otolaryngology, Oregon Health and Science University</institution>
<country>Portland, OR, USA</country>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="aff2">
<institution>Biomedical Engineering Interdepartmental Program, University of California Los Angeles</institution>
<country>Los Angeles, CA, USA</country>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Shams, Ladan" sort="Shams, Ladan" uniqKey="Shams L" first="Ladan" last="Shams">Ladan Shams</name>
<affiliation wicri:level="1">
<nlm:aff id="aff2">
<institution>Biomedical Engineering Interdepartmental Program, University of California Los Angeles</institution>
<country>Los Angeles, CA, USA</country>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="aff3">
<institution>Department of Psychology and Interdepartmental Neuroscience Program, University of California Los Angeles</institution>
<country>Los Angeles, CA, USA</country>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PMC</idno>
<idno type="pmid">22069383</idno>
<idno type="pmc">3208186</idno>
<idno type="url">http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3208186</idno>
<idno type="RBID">PMC:3208186</idno>
<idno type="doi">10.3389/fnint.2011.00075</idno>
<date when="2011">2011</date>
<idno type="wicri:Area/Pmc/Corpus">001E22</idno>
<idno type="wicri:Area/Pmc/Curation">001E22</idno>
<idno type="wicri:Area/Pmc/Checkpoint">001C08</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a" type="main">Computational Characterization of Visually Induced Auditory Spatial Adaptation</title>
<author>
<name sortKey="Wozny, David R" sort="Wozny, David R" uniqKey="Wozny D" first="David R." last="Wozny">David R. Wozny</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<institution>Department of Otolaryngology, Oregon Health and Science University</institution>
<country>Portland, OR, USA</country>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="aff2">
<institution>Biomedical Engineering Interdepartmental Program, University of California Los Angeles</institution>
<country>Los Angeles, CA, USA</country>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Shams, Ladan" sort="Shams, Ladan" uniqKey="Shams L" first="Ladan" last="Shams">Ladan Shams</name>
<affiliation wicri:level="1">
<nlm:aff id="aff2">
<institution>Biomedical Engineering Interdepartmental Program, University of California Los Angeles</institution>
<country>Los Angeles, CA, USA</country>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="aff3">
<institution>Department of Psychology and Interdepartmental Neuroscience Program, University of California Los Angeles</institution>
<country>Los Angeles, CA, USA</country>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
</analytic>
<series>
<title level="j">Frontiers in Integrative Neuroscience</title>
<idno type="eISSN">1662-5145</idno>
<imprint>
<date when="2011">2011</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass></textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">
<p>Recent research investigating the principles governing human perception has provided increasing evidence for probabilistic inference in human perception. For example, human auditory and visual localization judgments closely resemble that of a Bayesian causal inference observer, where the underlying causal structure of the stimuli are inferred based on both the available sensory evidence and prior knowledge. However, most previous studies have focused on characterization of perceptual inference within a static environment, and therefore, little is known about how this inference process changes when observers are exposed to a new environment. In this study we aimed to computationally characterize the change in auditory spatial perception induced by repeated auditory–visual spatial conflict, known as the ventriloquist aftereffect. In theory, this change could reflect a shift in the auditory sensory representations (i.e., shift in auditory likelihood distribution), a decrease in the precision of the auditory estimates (i.e., increase in spread of likelihood distribution), a shift in the auditory bias (i.e., shift in prior distribution), or an increase/decrease in strength of the auditory bias (i.e., the spread of prior distribution), or a combination of these. By quantitatively estimating the parameters of the perceptual process for each individual observer using a Bayesian causal inference model, we found that the shift in the perceived locations after exposure was associated with a shift in the mean of the auditory likelihood functions in the direction of the experienced visual offset. The results suggest that repeated exposure to a fixed auditory–visual discrepancy is attributed by the nervous system to sensory representation error and as a result, the sensory map of space is recalibrated to correct the error.</p>
</div>
</front>
<back>
<div1 type="bibliography">
<listBibl>
<biblStruct>
<analytic>
<author>
<name sortKey="Adams, W J" uniqKey="Adams W">W. J. Adams</name>
</author>
<author>
<name sortKey="Graf, E W" uniqKey="Graf E">E. W. Graf</name>
</author>
<author>
<name sortKey="Ernst, M O" uniqKey="Ernst M">M. O. Ernst</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Barraza, J F" uniqKey="Barraza J">J. F. Barraza</name>
</author>
<author>
<name sortKey="Grzywacz, N M" uniqKey="Grzywacz N">N. M. Grzywacz</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bentvelzen, A" uniqKey="Bentvelzen A">A. Bentvelzen</name>
</author>
<author>
<name sortKey="Leung, J" uniqKey="Leung J">J. Leung</name>
</author>
<author>
<name sortKey="Alais, D" uniqKey="Alais D">D. Alais</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bertelson, P" uniqKey="Bertelson P">P. Bertelson</name>
</author>
<author>
<name sortKey="Frissen, I" uniqKey="Frissen I">I. Frissen</name>
</author>
<author>
<name sortKey="Vroomen, J" uniqKey="Vroomen J">J. Vroomen</name>
</author>
<author>
<name sortKey="De Gelder, B" uniqKey="De Gelder B">B. De Gelder</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Brainard, D H" uniqKey="Brainard D">D. H. Brainard</name>
</author>
<author>
<name sortKey="Longere, P" uniqKey="Longere P">P. Longère</name>
</author>
<author>
<name sortKey="Delahunt, P B" uniqKey="Delahunt P">P. B. Delahunt</name>
</author>
<author>
<name sortKey="Freeman, W T" uniqKey="Freeman W">W. T. Freeman</name>
</author>
<author>
<name sortKey="Kraft, J M" uniqKey="Kraft J">J. M. Kraft</name>
</author>
<author>
<name sortKey="Xiao, B" uniqKey="Xiao B">B. Xiao</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bresciani, J P" uniqKey="Bresciani J">J. P. Bresciani</name>
</author>
<author>
<name sortKey="Dammeier, F" uniqKey="Dammeier F">F. Dammeier</name>
</author>
<author>
<name sortKey="Ernst, M O" uniqKey="Ernst M">M. O. Ernst</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Butler, J S" uniqKey="Butler J">J. S. Butler</name>
</author>
<author>
<name sortKey="Smith, S T" uniqKey="Smith S">S. T. Smith</name>
</author>
<author>
<name sortKey="Campos, J L" uniqKey="Campos J">J. L. Campos</name>
</author>
<author>
<name sortKey="Bulthoff, H H" uniqKey="Bulthoff H">H. H. Bülthoff</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Canon, L K" uniqKey="Canon L">L. K. Canon</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Clifford, C W" uniqKey="Clifford C">C. W. Clifford</name>
</author>
<author>
<name sortKey="Wenderoth, P" uniqKey="Wenderoth P">P. Wenderoth</name>
</author>
<author>
<name sortKey="Spehar, B" uniqKey="Spehar B">B. Spehar</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ernst, M O" uniqKey="Ernst M">M. O. Ernst</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Fetsch, C R" uniqKey="Fetsch C">C. R. Fetsch</name>
</author>
<author>
<name sortKey="Deangelis, G C" uniqKey="Deangelis G">G. C. Deangelis</name>
</author>
<author>
<name sortKey="Angelaki, D E" uniqKey="Angelaki D">D. E. Angelaki</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Fetsch, C R" uniqKey="Fetsch C">C. R. Fetsch</name>
</author>
<author>
<name sortKey="Turner, A H" uniqKey="Turner A">A. H. Turner</name>
</author>
<author>
<name sortKey="Deangelis, G C" uniqKey="Deangelis G">G. C. Deangelis</name>
</author>
<author>
<name sortKey="Angelaki, D E" uniqKey="Angelaki D">D. E. Angelaki</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Fujisaki, W" uniqKey="Fujisaki W">W. Fujisaki</name>
</author>
<author>
<name sortKey="Shimojo, S" uniqKey="Shimojo S">S. Shimojo</name>
</author>
<author>
<name sortKey="Kashino, M" uniqKey="Kashino M">M. Kashino</name>
</author>
<author>
<name sortKey="Nishida, S" uniqKey="Nishida S">S. Nishida</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Grzywacz, N M" uniqKey="Grzywacz N">N. M. Grzywacz</name>
</author>
<author>
<name sortKey="Balboa, R M" uniqKey="Balboa R">R. M. Balboa</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Grzywacz, N M" uniqKey="Grzywacz N">N. M. Grzywacz</name>
</author>
<author>
<name sortKey="De Juan, J" uniqKey="De Juan J">J. De Juan</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hospedales, T" uniqKey="Hospedales T">T. Hospedales</name>
</author>
<author>
<name sortKey="Vijayakumar, S" uniqKey="Vijayakumar S">S. Vijayakumar</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Jurgens, R" uniqKey="Jurgens R">R. Jürgens</name>
</author>
<author>
<name sortKey="Becker, W" uniqKey="Becker W">W. Becker</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kersten, D" uniqKey="Kersten D">D. Kersten</name>
</author>
<author>
<name sortKey="Mamassian, P" uniqKey="Mamassian P">P. Mamassian</name>
</author>
<author>
<name sortKey="Yuille, A" uniqKey="Yuille A">A. Yuille</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Knill, D C" uniqKey="Knill D">D. C. Knill</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Knill, D C" uniqKey="Knill D">D. C. Knill</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Knill, D C" uniqKey="Knill D">D. C. Knill</name>
</author>
<author>
<name sortKey="Richards, W" uniqKey="Richards W">W. Richards</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kording, K" uniqKey="Kording K">K. Körding</name>
</author>
<author>
<name sortKey="Beierholm, U R" uniqKey="Beierholm U">U. R. Beierholm</name>
</author>
<author>
<name sortKey="Ma, W" uniqKey="Ma W">W. Ma</name>
</author>
<author>
<name sortKey="Quartz, S" uniqKey="Quartz S">S. Quartz</name>
</author>
<author>
<name sortKey="Tenenbaum, J" uniqKey="Tenenbaum J">J. Tenenbaum</name>
</author>
<author>
<name sortKey="Shams, L" uniqKey="Shams L">L. Shams</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kording, K" uniqKey="Kording K">K. Körding</name>
</author>
<author>
<name sortKey="Ku, S" uniqKey="Ku S">S. Ku</name>
</author>
<author>
<name sortKey="Wolpert, D" uniqKey="Wolpert D">D. Wolpert</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kording, K" uniqKey="Kording K">K. Körding</name>
</author>
<author>
<name sortKey="Wolpert, D" uniqKey="Wolpert D">D. Wolpert</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Langley, K" uniqKey="Langley K">K. Langley</name>
</author>
<author>
<name sortKey="Anderson, S J" uniqKey="Anderson S">S. J. Anderson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lewald, J" uniqKey="Lewald J">J. Lewald</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Macneilage, P" uniqKey="Macneilage P">P. Macneilage</name>
</author>
<author>
<name sortKey="Banks, M S" uniqKey="Banks M">M. S. Banks</name>
</author>
<author>
<name sortKey="Berger, D R" uniqKey="Berger D">D. R. Berger</name>
</author>
<author>
<name sortKey="Bulthoff, H H" uniqKey="Bulthoff H">H. H. Bülthoff</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Miyazaki, M" uniqKey="Miyazaki M">M. Miyazaki</name>
</author>
<author>
<name sortKey="Nozaki, D" uniqKey="Nozaki D">D. Nozaki</name>
</author>
<author>
<name sortKey="Nakajima, Y" uniqKey="Nakajima Y">Y. Nakajima</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Miyazaki, M" uniqKey="Miyazaki M">M. Miyazaki</name>
</author>
<author>
<name sortKey="Yamamoto, S" uniqKey="Yamamoto S">S. Yamamoto</name>
</author>
<author>
<name sortKey="Uchida, S" uniqKey="Uchida S">S. Uchida</name>
</author>
<author>
<name sortKey="Kitazawa, S" uniqKey="Kitazawa S">S. Kitazawa</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Nagelkerke, N" uniqKey="Nagelkerke N">N. Nagelkerke</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Radeau, M" uniqKey="Radeau M">M. Radeau</name>
</author>
<author>
<name sortKey="Bertelson, P" uniqKey="Bertelson P">P. Bertelson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Rao, R P N" uniqKey="Rao R">R. P. N. Rao</name>
</author>
<author>
<name sortKey="Olshausen, B A" uniqKey="Olshausen B">B. A. Olshausen</name>
</author>
<author>
<name sortKey="Lewicki, M S" uniqKey="Lewicki M">M. S. Lewicki</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Recanzone, G" uniqKey="Recanzone G">G. Recanzone</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Roach, N W" uniqKey="Roach N">N. W. Roach</name>
</author>
<author>
<name sortKey="Heron, J" uniqKey="Heron J">J. Heron</name>
</author>
<author>
<name sortKey="Mcgraw, P V" uniqKey="Mcgraw P">P. V. Mcgraw</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Rowland, B" uniqKey="Rowland B">B. Rowland</name>
</author>
<author>
<name sortKey="Stanford, T" uniqKey="Stanford T">T. Stanford</name>
</author>
<author>
<name sortKey="Stein, B" uniqKey="Stein B">B. Stein</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Sato, Y" uniqKey="Sato Y">Y. Sato</name>
</author>
<author>
<name sortKey="Toyoizumi, T" uniqKey="Toyoizumi T">T. Toyoizumi</name>
</author>
<author>
<name sortKey="Aihara, K" uniqKey="Aihara K">K. Aihara</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Scarfe, P" uniqKey="Scarfe P">P. Scarfe</name>
</author>
<author>
<name sortKey="Hibbard, P B" uniqKey="Hibbard P">P. B. Hibbard</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Shams, L" uniqKey="Shams L">L. Shams</name>
</author>
<author>
<name sortKey="Ma, W J" uniqKey="Ma W">W. J. Ma</name>
</author>
<author>
<name sortKey="Beierholm, U" uniqKey="Beierholm U">U. Beierholm</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Stocker, A" uniqKey="Stocker A">A. Stocker</name>
</author>
<author>
<name sortKey="Simoncelli, E" uniqKey="Simoncelli E">E. Simoncelli</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Stocker, A A" uniqKey="Stocker A">A. A. Stocker</name>
</author>
<author>
<name sortKey="Simoncelli, E" uniqKey="Simoncelli E">E. Simoncelli</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Van Ee, R" uniqKey="Van Ee R">R. Van Ee</name>
</author>
<author>
<name sortKey="Adams, W J" uniqKey="Adams W">W. J. Adams</name>
</author>
<author>
<name sortKey="Mamassian, P" uniqKey="Mamassian P">P. Mamassian</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Van Wanrooij, M M" uniqKey="Van Wanrooij M">M. M. Van Wanrooij</name>
</author>
<author>
<name sortKey="Bremen, P" uniqKey="Bremen P">P. Bremen</name>
</author>
<author>
<name sortKey="John Van Opstal, A" uniqKey="John Van Opstal A">A. John Van Opstal</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Vroomen, J" uniqKey="Vroomen J">J. Vroomen</name>
</author>
<author>
<name sortKey="Keetels, M" uniqKey="Keetels M">M. Keetels</name>
</author>
<author>
<name sortKey="De Gelder, B" uniqKey="De Gelder B">B. De Gelder</name>
</author>
<author>
<name sortKey="Bertelson, P" uniqKey="Bertelson P">P. Bertelson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Wozny, D R" uniqKey="Wozny D">D. R. Wozny</name>
</author>
<author>
<name sortKey="Beierholm, U" uniqKey="Beierholm U">U. Beierholm</name>
</author>
<author>
<name sortKey="Shams, L" uniqKey="Shams L">L. Shams</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Wozny, D R" uniqKey="Wozny D">D. R. Wozny</name>
</author>
<author>
<name sortKey="Beierholm, U R" uniqKey="Beierholm U">U. R. Beierholm</name>
</author>
<author>
<name sortKey="Shams, L" uniqKey="Shams L">L. Shams</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Wozny, D R" uniqKey="Wozny D">D. R. Wozny</name>
</author>
<author>
<name sortKey="Shams, L" uniqKey="Shams L">L. Shams</name>
</author>
</analytic>
</biblStruct>
</listBibl>
</div1>
</back>
</TEI>
<pmc article-type="research-article">
<pmc-dir>properties open_access</pmc-dir>
<front>
<journal-meta>
<journal-id journal-id-type="nlm-ta">Front Integr Neurosci</journal-id>
<journal-id journal-id-type="publisher-id">Front. Integr. Neurosci.</journal-id>
<journal-title-group>
<journal-title>Frontiers in Integrative Neuroscience</journal-title>
</journal-title-group>
<issn pub-type="epub">1662-5145</issn>
<publisher>
<publisher-name>Frontiers Research Foundation</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="pmid">22069383</article-id>
<article-id pub-id-type="pmc">3208186</article-id>
<article-id pub-id-type="doi">10.3389/fnint.2011.00075</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Neuroscience</subject>
<subj-group>
<subject>Original Research</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Computational Characterization of Visually Induced Auditory Spatial Adaptation</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name>
<surname>Wozny</surname>
<given-names>David R.</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
<xref ref-type="aff" rid="aff2">
<sup>2</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Shams</surname>
<given-names>Ladan</given-names>
</name>
<xref ref-type="aff" rid="aff2">
<sup>2</sup>
</xref>
<xref ref-type="aff" rid="aff3">
<sup>3</sup>
</xref>
<xref ref-type="author-notes" rid="fn001">*</xref>
</contrib>
</contrib-group>
<aff id="aff1">
<sup>1</sup>
<institution>Department of Otolaryngology, Oregon Health and Science University</institution>
<country>Portland, OR, USA</country>
</aff>
<aff id="aff2">
<sup>2</sup>
<institution>Biomedical Engineering Interdepartmental Program, University of California Los Angeles</institution>
<country>Los Angeles, CA, USA</country>
</aff>
<aff id="aff3">
<sup>3</sup>
<institution>Department of Psychology and Interdepartmental Neuroscience Program, University of California Los Angeles</institution>
<country>Los Angeles, CA, USA</country>
</aff>
<author-notes>
<fn fn-type="edited-by">
<p>Edited by: John J. Foxe, Albert Einstein College of Medicine, USA</p>
</fn>
<fn fn-type="edited-by">
<p>Reviewed by: John S. Butler, Albert Einstein College of Medicine, USA; Edmund C. Lalor, Trinity College Dublin, Ireland</p>
</fn>
<corresp id="fn001">*Correspondence: Ladan Shams, Department of Psychology, University of California Los Angeles, 1285 Franz Hall, Box 951563, Los Angeles, CA 90095-1563, USA. e-mail:
<email>ladan@psych.ucla.edu</email>
</corresp>
</author-notes>
<pub-date pub-type="epub">
<day>04</day>
<month>11</month>
<year>2011</year>
</pub-date>
<pub-date pub-type="collection">
<year>2011</year>
</pub-date>
<volume>5</volume>
<elocation-id>75</elocation-id>
<history>
<date date-type="received">
<day>06</day>
<month>7</month>
<year>2011</year>
</date>
<date date-type="accepted">
<day>17</day>
<month>10</month>
<year>2011</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright © 2011 Wozny and Shams.</copyright-statement>
<copyright-year>2011</copyright-year>
<license license-type="open-access" xlink:href="http://www.frontiersin.org/licenseagreement">
<license-p>This is an open-access article subject to a non-exclusive license between the authors and Frontiers Media SA, which permits use, distribution and reproduction in other forums, provided the original authors and source are credited and other Frontiers conditions are complied with.</license-p>
</license>
</permissions>
<abstract>
<p>Recent research investigating the principles governing human perception has provided increasing evidence for probabilistic inference in human perception. For example, human auditory and visual localization judgments closely resemble that of a Bayesian causal inference observer, where the underlying causal structure of the stimuli are inferred based on both the available sensory evidence and prior knowledge. However, most previous studies have focused on characterization of perceptual inference within a static environment, and therefore, little is known about how this inference process changes when observers are exposed to a new environment. In this study we aimed to computationally characterize the change in auditory spatial perception induced by repeated auditory–visual spatial conflict, known as the ventriloquist aftereffect. In theory, this change could reflect a shift in the auditory sensory representations (i.e., shift in auditory likelihood distribution), a decrease in the precision of the auditory estimates (i.e., increase in spread of likelihood distribution), a shift in the auditory bias (i.e., shift in prior distribution), or an increase/decrease in strength of the auditory bias (i.e., the spread of prior distribution), or a combination of these. By quantitatively estimating the parameters of the perceptual process for each individual observer using a Bayesian causal inference model, we found that the shift in the perceived locations after exposure was associated with a shift in the mean of the auditory likelihood functions in the direction of the experienced visual offset. The results suggest that repeated exposure to a fixed auditory–visual discrepancy is attributed by the nervous system to sensory representation error and as a result, the sensory map of space is recalibrated to correct the error.</p>
</abstract>
<kwd-group>
<kwd>multisensory</kwd>
<kwd>perception</kwd>
<kwd>Bayesian</kwd>
<kwd>causal inference</kwd>
<kwd>spatial localization</kwd>
<kwd>adaptation</kwd>
<kwd>recalibration</kwd>
<kwd>ventriloquist aftereffect</kwd>
</kwd-group>
<counts>
<fig-count count="6"></fig-count>
<table-count count="1"></table-count>
<equation-count count="15"></equation-count>
<ref-count count="46"></ref-count>
<page-count count="11"></page-count>
<word-count count="9002"></word-count>
</counts>
</article-meta>
</front>
<body>
<sec>
<title>Introduction</title>
<p>The functional role of perception can be described as an inference about the sources of sensory signals in the environment. This process can be formalized by Bayesian probability theory that combines available sensory evidence (likelihood distribution) with prior knowledge (prior distribution) in making perceptual estimates (Knill and Richards,
<xref ref-type="bibr" rid="B21">1996</xref>
; Rao et al.,
<xref ref-type="bibr" rid="B32">2002</xref>
). This study uses a Bayesian probabilistic model to computationally explain the adaptation of auditory spatial perception in response to auditory–visual conflict.</p>
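The likelihood–prior combination that this formalization describes can be sketched numerically. The following is a minimal illustration, not the authors' model or code: for a Gaussian likelihood and a Gaussian prior, the posterior over location is Gaussian with a precision-weighted mean. The function name and parameter values are invented for illustration.

```python
# Illustrative sketch, not the authors' code: a Bayesian observer that
# combines a Gaussian auditory likelihood with a Gaussian spatial prior.
# For Gaussians, the posterior is Gaussian with a precision-weighted mean.

def posterior_location(x_a, sigma_a, mu_p, sigma_p):
    """Posterior mean and SD of the perceived auditory location.

    x_a     : auditory sensory measurement (degrees)
    sigma_a : SD of the auditory likelihood
    mu_p    : mean of the spatial prior (e.g., straight ahead = 0)
    sigma_p : SD of the spatial prior
    """
    w_a = sigma_a ** -2          # precision of the likelihood
    w_p = sigma_p ** -2          # precision of the prior
    mean = (w_a * x_a + w_p * mu_p) / (w_a + w_p)
    sd = (w_a + w_p) ** -0.5
    return mean, sd

# Arbitrary example values: a sound sensed at 10 deg with likelihood SD
# 5 deg, and a prior centered at 0 deg with SD 10 deg.  The estimate is
# pulled from 10 deg toward the prior mean.
mean, sd = posterior_location(10.0, 5.0, 0.0, 10.0)
print(round(mean, 1))  # → 8.0
```

In these terms, a recalibration of the likelihood mean shifts x_a systematically, whereas a change in sigma_a or in the prior parameters instead changes how strongly estimates are pulled toward mu_p; the full causal inference model additionally weighs whether auditory and visual signals share a common cause.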
<p>Recent studies have shown that Bayesian inference can account for human perception in a number of tasks. Many studies have explained observers’ visual perception within a Bayesian framework, with tasks ranging from speed detection (Stocker and Simoncelli,
<xref ref-type="bibr" rid="B40">2006b</xref>
), to object perception (Kersten et al.,
<xref ref-type="bibr" rid="B18">2004</xref>
), color constancy (Brainard et al.,
<xref ref-type="bibr" rid="B5">2006</xref>
), and slant perception (Knill,
<xref ref-type="bibr" rid="B19">2003</xref>
; Van Ee et al.,
<xref ref-type="bibr" rid="B41">2003</xref>
). An increasing number of studies have also used Bayesian models to account for human perceptual judgments across a range of multisensory tasks, including temporal numerosity judgment (Shams et al.,
<xref ref-type="bibr" rid="B38">2005</xref>
; Bresciani et al.,
<xref ref-type="bibr" rid="B6">2006</xref>
; Wozny et al.,
<xref ref-type="bibr" rid="B44">2008</xref>
), rate perception (Roach et al.,
<xref ref-type="bibr" rid="B34">2006</xref>
), oddity detection (Hospedales and Vijayakumar,
<xref ref-type="bibr" rid="B16">2009</xref>
), self-motion perception (Fetsch et al.,
<xref ref-type="bibr" rid="B12">2009</xref>
,
<xref ref-type="bibr" rid="B11">2010</xref>
; Butler et al.,
<xref ref-type="bibr" rid="B7">2010</xref>
), angular displacement (Jürgens and Becker,
<xref ref-type="bibr" rid="B17">2006</xref>
), gravitoinertial force discrimination (Macneilage et al.,
<xref ref-type="bibr" rid="B27">2007</xref>
), and spatial localization (Körding et al.,
<xref ref-type="bibr" rid="B22">2007</xref>
; Rowland et al.,
<xref ref-type="bibr" rid="B35">2007</xref>
).</p>
<p>Whereas the aforementioned Bayesian models describe perceptual processes within a stationary setting, fewer studies have investigated the computational components of perception that undergo change as a result of adaptation. If the functional principles of sensory processing do indeed follow Bayesian computations, then the observed adaptive behavior should also be well characterized by changes in the Bayesian statistics. Given that the likelihood and prior distributions are the fundamental components of Bayesian inference, possible hypotheses are that adaptive behavior reflects: (i) changes in the prior probabilities, (ii) changes in the likelihood functions, or (iii) a combination of the two. Previous behavioral studies and model simulations appear to provide conflicting results about the perceptual component that undergoes change, and the results appear to be task dependent. Studies of sensory–motor adaptation in reaching (Körding and Wolpert,
<xref ref-type="bibr" rid="B24">2004</xref>
), force estimation (Körding et al.,
<xref ref-type="bibr" rid="B23">2004</xref>
), and coincidence timing (Miyazaki et al.,
<xref ref-type="bibr" rid="B28">2005</xref>
) indicate that adaptation is associated with the update of the priors to match the distribution of recently experienced stimulus patterns. Adaptation of priors has also been reported for visual depth perception (Knill,
<xref ref-type="bibr" rid="B20">2007</xref>
), convexity judgments (Adams et al.,
<xref ref-type="bibr" rid="B1">2004</xref>
), motion adaptation (Langley and Anderson,
<xref ref-type="bibr" rid="B25">2007</xref>
), audio–visual integration (Van Wanrooij et al.,
<xref ref-type="bibr" rid="B42">2010</xref>
), and visual–tactile integration (Ernst,
<xref ref-type="bibr" rid="B10">2007</xref>
).</p>
<p>However, other proposed models attribute adaptation to changes in sensory likelihoods. Stocker and Simoncelli (
<xref ref-type="bibr" rid="B39">2006a</xref>
) demonstrate that repulsive aftereffects observed after motion adaptation are inconsistent with a change in the prior distribution (which would produce attractive aftereffects), but are instead consistent with a sharpening of the likelihood distribution. Similarly, adaptation of the likelihood function has been proposed to qualitatively account for retinal contrast adaptation (Grzywacz and Balboa,
<xref ref-type="bibr" rid="B14">2002</xref>
; Grzywacz and De Juan,
<xref ref-type="bibr" rid="B15">2003</xref>
), speed adaptation (Barraza and Grzywacz,
<xref ref-type="bibr" rid="B2">2008</xref>
), and auditory spatial recalibration (Sato et al.,
<xref ref-type="bibr" rid="B36">2007</xref>
). Yet one study of adaptation has found both repulsive and attractive kinds of changes in a temporal order judgment task depending on stimuli (Miyazaki et al.,
<xref ref-type="bibr" rid="B29">2006</xref>
). In one experiment, subjects judged the temporal order of two tactile stimuli, delivered one to each hand. When the distribution of presented stimuli had a larger proportion of right-hand-first intervals, subjects’ responses were biased toward reporting right-hand-first, as shown by shifts in the point of subjective simultaneity (PSS) of the cumulative psychometric function (and vice-versa for left-hand-first stimulus distributions). These results are in agreement with a Bayesian observer that updates a prior distribution in accordance with the distribution of stimulus presentations. However, in another experiment, when subjects were asked to judge the temporal order of audio–visual stimuli, the opposite effect was reported. The PSS was shifted in the opposite direction of that predicted by a Bayesian observer that updates the prior distribution. Thus, previous studies of calibration have reported different mechanisms of adaptation under different sensory conditions and tasks.</p>
<p>In this study, we are specifically interested in computationally characterizing perceptual adaptation in response to exposure to crossmodal sensory conflict. Acquiring information from multiple sensory organs enables the nervous system to perform self-maintenance and recalibration in response to detection of error in one of the senses. Such mechanisms can be critical in coping with exogenous changes in the environment or endogenous changes that occur as a result of development, injury, or stroke. Our study investigates a well-established example of crossmodally induced adaptation known as the ventriloquist aftereffect (VAE). After repeated exposure to simultaneous but spatially discrepant auditory and visual stimuli, the localization of an auditory stimulus when presented in isolation is shifted in the direction of the previously experienced visual offset (Canon,
<xref ref-type="bibr" rid="B8">1970</xref>
; Radeau and Bertelson,
<xref ref-type="bibr" rid="B31">1974</xref>
; Recanzone,
<xref ref-type="bibr" rid="B33">1998</xref>
; Lewald,
<xref ref-type="bibr" rid="B26">2002</xref>
). It has been argued that in the absence of information useful in determining which modality is biased, the nervous system does not always recalibrate when provided with cues having conflicting biases (Scarfe and Hibbard,
<xref ref-type="bibr" rid="B37">2011</xref>
). However, auditory spatial recalibration by vision (i.e., the VAE) has consistently been shown to occur even when observers are given no information suggesting that the auditory estimates are biased, as is the case in this study. We aimed to gain insight into the computational components of the perceptual process that are modified during this adaptation. Because it has been previously shown that human localization of auditory and visual stimuli is consistent with a Bayesian causal inference observer (Körding et al.,
<xref ref-type="bibr" rid="B22">2007</xref>
; Wozny et al.,
<xref ref-type="bibr" rid="B45">2010</xref>
), we will use this model to characterize the perceptual components (sensory map, sensory noise, perceptual bias, etc.) for each individual observer before and after exposure to adapting stimuli. Simulation results (Figure
<xref ref-type="fig" rid="F1">1</xref>
) show that the VAE can be qualitatively explained by either changes in the likelihood or changes in the prior distributions. Figure
<xref ref-type="fig" rid="F1">1</xref>
A schematically shows the stimuli used during the simulated adaptation period. The observers are exposed to simultaneous auditory and visual stimuli that are presented at a fixed spatial discrepancy from each other, here with sound to the left of the visual stimulus, at different positions along azimuth. The first row and second row show theoretical distributions of the prior (magenta), likelihood (blue), and the resulting posterior distributions (black) before and after exposure, respectively. The likelihood functions are shown for auditory stimuli at three arbitrary locations, −13°, 0°, and +13° of eccentricity (where 0° represents the straight-ahead location). The bottom row panels show the resultant change in the perceived location of sound at those locations. Each column represents one possible scenario in terms of changes mediating adaptation. In column B, the prior distribution is shifted to the right after exposure, which results in a rightward shift in perceived auditory locations, consistent with the VAE (bottom row). In column C, the auditory likelihoods are shifted to the right. Here too, the perceived location of sounds is shifted rightward after exposure (shown in bottom row), consistent with the VAE. Other changes or combinations of changes can produce distinct patterns of auditory aftereffects. One such example is shown in column D. In this example, before exposure the prior distribution is relatively flat. After exposure, a prominent prior with a rightward bias emerges. This would cause asymmetric auditory shifts depending on the location of the prior mean relative to the testing locations. Therefore, the VAE can be qualitatively explained by either a shift in priors or a shift in likelihoods, or perhaps a combination of the two. Thus, to discover which computational changes in processing underlie this spatial adaptation phenomenon, one needs to investigate it quantitatively by comparing psychophysical data with the quantitative predictions of the different models.</p>
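The logic of these simulations can be reproduced in a few lines. The Python sketch below (with arbitrary illustrative widths for the likelihood and prior, not the fitted values reported later) computes the posterior peak as the precision-weighted average of the likelihood and prior means, and shows that a 5° prior shift and a much smaller 1° likelihood shift both yield a uniform rightward aftereffect across test locations:

```python
import numpy as np

def posterior_mean(x_a, sigma_a, mu_p, sigma_p):
    # Product of two Gaussians: the posterior peak is the
    # precision-weighted average of the likelihood and prior means
    w_a = 1 / sigma_a**2
    w_p = 1 / sigma_p**2
    return (w_a * x_a + w_p * mu_p) / (w_a + w_p)

locations = np.array([-13.0, 0.0, 13.0])   # test eccentricities (deg)
sigma_a, sigma_p = 5.0, 10.0               # illustrative widths (deg)

pre = posterior_mean(locations, sigma_a, 0.0, sigma_p)

# Scenario B: the prior mean shifts 5 deg to the right
post_prior = posterior_mean(locations, sigma_a, 5.0, sigma_p)

# Scenario C: the likelihood means shift 1 deg to the right
post_like = posterior_mean(locations + 1.0, sigma_a, 0.0, sigma_p)

print(post_prior - pre)   # uniform +1.0 deg aftereffect at every location
print(post_like - pre)    # uniform +0.8 deg aftereffect at every location
```

As in the scenario of Figure 1C, a much smaller shift of the (narrower) likelihood produces an aftereffect comparable in size to a larger shift of the (wider) prior.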
<fig id="F1" position="float">
<label>Figure 1</label>
<caption>
<p>
<bold>Possible computational mechanisms underlying ventriloquist aftereffect</bold>
.
<bold>(A)</bold>
Schematic illustration of adapting stimuli. We present simulations for the case in which, during exposure, the visual stimuli are to the right of the auditory stimuli by a fixed offset. This kind of exposure has been previously shown to result in a subsequent rightward shift in auditory localization.
<bold>(B–D)</bold>
depict three possible mechanisms of adaptation, and the resultant behavioral effects. Top row panels show distributions prior to exposure to discrepant auditory–visual stimuli shown in
<bold>(A)</bold>
. Blue Gaussians show auditory likelihood distributions for three arbitrary horizontal locations, left (−13°), center (0°), and right (13°). The magenta Gaussian represents the prior distribution. Black Gaussians are the posterior estimates from the product of the likelihood and prior distributions. Middle panels show theoretical distribution changes after exposure. In the scenario depicted in
<bold>(B)</bold>
, the prior distribution is shifted to the right. The broken lines and green arrow highlight the shift in the prior distribution. The bottom panel shows the change in auditory spatial estimates (the maximum of the posterior) after exposure (i.e., post minus pre in the peaks of the black curves). Positive values denote a shift to the right. In the scenario depicted in
<bold>(C)</bold>
, adaptation causes a shift in likelihoods (blue curves). This mechanism produces the same behavioral effect as shown in
<bold>(B)</bold>
as seen in the bottom panel. Note that a smaller shift in likelihood (highlighted by the green arrow) results in the same magnitude of aftereffect as a larger shift in prior due to the relative widths of the distributions. In the scenario depicted in
<bold>(D)</bold>
, before exposure the prior distribution is relatively flat (i.e., there is no bias for location), and after exposure a bias for a location to the right appears. This creates an asymmetrical pattern in aftereffect magnitudes across locations.</p>
</caption>
<graphic xlink:href="fnint-05-00075-g001"></graphic>
</fig>
<p>As mentioned earlier, it has recently been shown that human auditory–visual spatial localization judgments are remarkably consistent with a normative Bayesian causal inference model, where the observer infers the underlying causal structure of the environment based on the available evidence and prior knowledge (Körding et al.,
<xref ref-type="bibr" rid="B22">2007</xref>
). Because the causal inference model allows quantitative estimation of likelihoods and priors, this model can be used to empirically test which one(s) of these quantities undergoes change after adaptation. For each individual participant, model parameters were fitted to auditory–visual localization responses separately for pre-adaptation and post-adaptation data. This allowed us to test for statistically significant changes in the likelihood and prior parameters between the two phases. The key feature of this approach is that it allows simultaneous testing of all hypotheses of parameter changes without any
<italic>a priori</italic>
assumptions.</p>
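As a rough illustration of this fitting approach (a sketch only: the parameter values, synthetic data, and coarse grid search below are hypothetical stand-ins, not the actual fitting procedure), one can recover, say, the auditory bias term by maximizing the likelihood of unisensory auditory responses, which under the model are precision-weighted averages of the noisy sensation and the prior mean (cf. Eq. 7 below):

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative "true" parameters used to generate synthetic responses
sigma_a, sigma_p, mu_p, dmu_a = 6.0, 12.0, 0.0, 2.0
locations = [-13.0, -6.5, 0.0, 6.5, 13.0]

# A unisensory auditory response is the precision-weighted average of the
# noisy sensation x_A ~ N(s + dmu_a, sigma_a) and the prior mean mu_p
w = (1 / sigma_a**2) / (1 / sigma_a**2 + 1 / sigma_p**2)
data = {s: w * (s + dmu_a + rng.normal(0.0, sigma_a, 200)) + (1 - w) * mu_p
        for s in locations}

def neg_loglik(dmu):
    # Negative log-likelihood of all responses for a candidate bias dmu
    # (the response SD, w * sigma_a, is constant, so its log term is dropped)
    total = 0.0
    for s, responses in data.items():
        mean = w * (s + dmu) + (1 - w) * mu_p
        total += np.sum(0.5 * ((responses - mean) / (w * sigma_a)) ** 2)
    return total

# Coarse grid search for the auditory bias, other parameters held fixed
grid = np.arange(-4.0, 4.25, 0.25)
dmu_hat = min(grid, key=neg_loglik)   # close to the generating value of 2.0
```

In the study itself, all model parameters were fitted jointly to both unisensory and bisensory responses, which is what makes the individual components identifiable; the sketch above is only meant to convey the idea of fitting pre- and post-adaptation data separately and comparing the estimates.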
</sec>
<sec sec-type="materials|methods">
<title>Materials and Methods</title>
<sec>
<title>Participants and apparatus</title>
<p>Twenty-four individuals (21 female) with a mean age of 20 (range 18–25) participated in the experiment. All participants reported normal or corrected-to-normal vision and normal hearing, and did not have any known auditory or neurological disorders. Each participant signed a consent form approved by the UCLA IRB. The participants were randomly assigned to one of two experimental groups, AV-adaptation (
<italic>N</italic>
 = 12) and VA-adaptation (
<italic>N</italic>
 = 12) as described below. The pre-test data from these subjects were part of a larger set of data previously published (Wozny et al.,
<xref ref-type="bibr" rid="B45">2010</xref>
). The participants in this study were the only participants subjected to the exposure conditions described below.</p>
<p>Participants sat at a desk in a dimly lit room with their chins positioned on a chin-rest 52 cm from a projection screen of stretched black linen cloth covering a large portion of the visual field (134° width × 60° height). Behind the screen were nine free-field speakers (5 cm × 8 cm, extended-range paper cone), symmetrically positioned around the midline along azimuth, 6.5° apart, 7° below fixation. The visual stimuli were projected from a ceiling-mounted projector set to a resolution of 1280 × 1024 pixels. Figure
<xref ref-type="fig" rid="F2">2</xref>
provides a schematic of the stimulus locations.</p>
<fig id="F2" position="float">
<label>Figure 2</label>
<caption>
<p>
<bold>Spatial configuration of stimuli</bold>
. Location of the visual and auditory stimuli during test phase
<bold>(A)</bold>
and during exposure phase for the AV-adaptation group
<bold>(B)</bold>
and exposure phase for the VA-adaptation group
<bold>(C)</bold>
are schematically shown. Here, the vertical locations of the visual and auditory stimuli are offset for illustration purposes; in the experiment they were vertically aligned at 7° below fixation. All combinations of visual and auditory stimulus locations were presented during test phases
<bold>(A)</bold>
.</p>
</caption>
<graphic xlink:href="fnint-05-00075-g002"></graphic>
</fig>
</sec>
<sec>
<title>Stimuli</title>
<p>The visual stimulus was a white noise disk (0.41 cd/m
<sup>2</sup>
) masked with a Gaussian envelope of 1.5° FWHM, presented 7° below the fixation point on a black background (0.07 cd/m
<sup>2</sup>
), and displayed for 35 ms. It appeared at a position coinciding with the center of one of the five central speakers behind the screen, positioned at −13°, −6.5°, 0°, 6.5°, 13° along azimuth. Auditory stimuli were 35 ms ramped white noise bursts of 69 dB(
<italic>A</italic>
) sound pressure level at a distance of 52 cm and were newly generated on each trial. The speaker locations were unknown to the participants. The central five speakers were used as test locations for the auditory stimuli. The two eccentric speakers on each side were used during the adaptation period only.</p>
</sec>
<sec>
<title>Procedure</title>
<p>The experiment consisted of three phases: pre-adaptation test, adaptation, and post-adaptation test. All three phases were performed in a single session lasting about 2 h. During the pre-adaptation and post-adaptation test phases, participants performed a spatial localization task on unisensory as well as bisensory trials, which were randomly interleaved. These test phases were used to estimate the perceptual parameters (spatial maps, noise, bias, etc.) before and after adaptation. The adaptation period induced the VAE by exposing subjects to spatially offset auditory–visual stimulus pairs.</p>
<p>In order to familiarize participants with the task, each session started with a practice period of 10 randomly interleaved trials in which only an auditory stimulus was presented at a variable location, and subjects were asked to report the location of the auditory stimulus.</p>
<p>Practice was followed by 525 test trials that took about 45 min to complete. Fifteen repetitions of 35 stimulus conditions were presented in pseudorandom order. The stimulus conditions included five unisensory auditory locations, five unisensory visual locations, and all 25 combinations of auditory and visual locations (bisensory conditions). The locations of the stimuli were at −13°, −6.5°, 0°, +6.5°, +13° as shown in Figure
<xref ref-type="fig" rid="F2">2</xref>
A (positive is right of fixation). On bisensory trials, subjects were asked to report
<italic>both</italic>
the location of the auditory stimulus and the location of the visual stimulus, in sequential order. The order of these two responses was consistent throughout the session, and was counter-balanced across subjects. Subjects were told that “the sound and light could come from the same location, or they could come from different locations.” A blue “S” or green “L” placed inside the cursor reminded subjects to respond to the sound or light, respectively. Probing both responses on bisensory trials allowed us to assess the degree of sensory integration or segregation on a given trial.</p>
<p>Each trial started with a fixation cross, followed after 750–1100 ms by the presentation of the stimuli. After 450 ms, the fixation cross was removed and a cursor appeared on the screen vertically just above the horizontal line where the stimuli were presented and at a random horizontal location in order to minimize response bias. The cursor was controlled by a trackball mouse placed in front of the subject, and could only be moved in the horizontal direction. Participants were instructed to “move the cursor as quickly and accurately as possible to the exact location of the stimulus and click the mouse.” This enabled the capture of continuous responses with a resolution of 0.1°/pixel.</p>
<p>Following the pre-adaptation test, a top-up design was used for the adaptation period and post-adaptation test trials. During adaptation, a train of visual stimuli flashed on the screen at only one of the five central locations every 450 ms. At a random point between the 5th and 15th presentations, the flash became noticeably brighter (increasing from 0.41 to 1.23 cd/m
<sup>2</sup>
), and the participant was to detect this change by clicking the mouse. If the change was detected before the next flash presentation, the stimulus moved to a new random location and the procedure continued. If the change was not detected, or a false alarm was reported, the random sequence started over at the same location, and the stimulus did not move until the brightness change was detected. The initial adaptation section lasted for 40 detections (8 detections per location). During the adaptation phase, a simultaneous auditory stimulus was presented 13° either to the left (for the AV-adaptation group, Figure
<xref ref-type="fig" rid="F2">2</xref>
B) or to the right (for the VA-adaptation group, Figure
<xref ref-type="fig" rid="F2">2</xref>
C) of the visual stimulus, depending on the adaptation condition. Post-adaptation test segments consisted of 40 randomly interleaved test trials, followed by 10 randomly interleaved adaptation sequences (2 detections per location), until all 525 post-adaptation test trials were completed. Except for the ordering of the trials, the pre-adaptation and post-adaptation test phases were identical.</p>
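The top-up schedule described above can be summarized as a pseudorandom sequence of detection trials. The sketch below is schematic only (the function name is hypothetical, and it omits the detection and restart-on-miss logic):

```python
import random

def adaptation_block(locations, detections_per_location, seed=0):
    rng = random.Random(seed)
    schedule = locations * detections_per_location
    rng.shuffle(schedule)
    # Each entry is one detection trial: the flash location, and the flash
    # index (between the 5th and 15th) at which the brightness increment occurs
    return [(loc, rng.randint(5, 15)) for loc in schedule]

centers = [-13.0, -6.5, 0.0, 6.5, 13.0]
initial = adaptation_block(centers, 8)           # 40 detections to start
top_up = adaptation_block(centers, 2, seed=1)    # 10 detections between test segments
```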
</sec>
<sec>
<title>Causal inference model</title>
<p>We used a Bayesian causal inference model of multisensory perception (Körding et al.,
<xref ref-type="bibr" rid="B22">2007</xref>
) to probe any parametric changes in likelihood or prior distributions after inducing the VAE. In the causal inference model, the underlying causal structure of the environment is inferred based on the available sensory evidence and prior knowledge. Each stimulus or event
<italic>s</italic>
in the world causes a noisy sensation
<italic>x
<sub>i</sub>
</italic>
of the event (where
<italic>i</italic>
indexes the sensory channels). The sensory estimates in our task are the perceived locations of the auditory and visual stimuli. The mapping from the world to sensory representations is captured by the likelihood function
<italic>p</italic>
(
<italic>x
<sub>i</sub>
</italic>
|
<italic>s</italic>
), which is the probability of experiencing sensation
<italic>x
<sub>i</sub>
</italic>
as a result of event
<italic>s</italic>
occurring in the environment. We use a generative model to simulate experimental trials and subject responses by performing 10,000 Monte Carlo simulations for each condition. Each individual sensation is modeled using the likelihood function
<italic>p</italic>
(
<italic>x
<sub>i</sub>
</italic>
|
<italic>s</italic>
). Trial-to-trial variability is introduced by sampling each sensation from a normal distribution centered on the true stimulus locations
<italic>s
<sub>A</sub>
</italic>
and
<italic>s
<sub>V</sub>
</italic>
, plus bias terms Δμ
<sub>
<italic>A</italic>
</sub>
and Δμ
<sub>
<italic>V</italic>
</sub>
for auditory and visual modalities, respectively. This simulates the corruption of auditory and visual sensory channels by independent Gaussian noise with standard deviation (SD) σ
<sub>
<italic>A</italic>
</sub>
and σ
<sub>
<italic>V</italic>
</sub>
respectively. In other words, the sensations
<italic>x
<sub>A</sub>
</italic>
and
<italic>x
<sub>V</sub>
</italic>
are simulated by sampling from the distributions shown in Eqs
<xref ref-type="disp-formula" rid="E1">1</xref>
and
<xref ref-type="disp-formula" rid="E1">2</xref>
.</p>
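Sampling from Eqs 1 and 2 is straightforward; a minimal sketch (the bias and noise values below are illustrative placeholders, not parameters fitted to any observer):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_sensations(s_a, s_v, n=10_000,
                        dmu_a=0.0, dmu_v=0.0, sigma_a=5.0, sigma_v=2.0):
    """Sample n trials of auditory and visual sensations (Eqs 1 and 2):
    true location plus modality bias, corrupted by independent Gaussian noise."""
    x_a = rng.normal(s_a + dmu_a, sigma_a, n)
    x_v = rng.normal(s_v + dmu_v, sigma_v, n)
    return x_a, x_v

# Example: auditory stimulus at -6.5 deg, visual stimulus at +6.5 deg
x_a, x_v = simulate_sensations(-6.5, 6.5)
```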
<disp-formula id="E1">
<mml:math id="M3">
<mml:mtable class="eqnarray" columnalign="left">
<mml:mtr>
<mml:mtd class="eqnarray-1">
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>A</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mtd>
<mml:mtd class="eqnarray-2">
<mml:mi mathvariant="script">N</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>s</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>A</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-bin">+</mml:mo>
<mml:mi>Δ</mml:mi>
<mml:msub>
<mml:mrow>
<mml:mi>μ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>A</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>A</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mtd>
<mml:mtd class="eqnarray-4">
<mml:mtext class="eqnarray">(1)</mml:mtext>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd class="eqnarray-1">
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>V</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mtd>
<mml:mtd class="eqnarray-2">
<mml:mi mathvariant="script">N</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>s</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>V</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-bin">+</mml:mo>
<mml:mi>Δ</mml:mi>
<mml:msub>
<mml:mrow>
<mml:mi>μ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>V</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>V</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mtd>
<mml:mtd class="eqnarray-4">
<mml:mtext class="eqnarray">(2)</mml:mtext>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:math>
</disp-formula>
<p>We assume there is a prior bias for the spatial location, modeled by a Gaussian distribution centered at μ
<sub>
<italic>P</italic>
</sub>
. The SD of the Gaussian, σ
<sub>
<italic>P</italic>
</sub>
, determines the strength of the bias. Therefore, the prior distribution of spatial location is</p>
<disp-formula id="E2">
<label>(3)</label>
<mml:math id="M4">
<mml:mi>p</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mi>s</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:mi mathvariant="script">N</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>μ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>P</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>P</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:math>
</disp-formula>
<p>It is important to note that the posterior probability of event
<italic>s</italic>
is conditioned on the causal structure of the stimuli. For bisensory stimuli, the competing causal structures are shown in Figure
<xref ref-type="fig" rid="F3">3</xref>
, where the sensations could originate either from a common cause (
<italic>C </italic>
= 1, Figure
<xref ref-type="fig" rid="F3">3</xref>
left, Eq.
<xref ref-type="disp-formula" rid="E3">4</xref>
), or independent causes (
<italic>C </italic>
= 2, Figure
<xref ref-type="fig" rid="F3">3</xref>
right, Eq.
<xref ref-type="disp-formula" rid="E4">5</xref>
).</p>
<fig id="F3" position="float">
<label>Figure 3</label>
<caption>
<p>
<bold>The causal inference model</bold>
. Left: One cause can be responsible for both visual and auditory signals,
<italic>x
<sub>V</sub>
</italic>
and
<italic>x
<sub>A</sub>
</italic>
. Right: Alternatively, two independent causes may generate the visual and auditory sensations. The causal inference model infers the probability of a common cause (left,
<italic>C</italic>
 = 1) vs. two independent causes (right,
<italic>C</italic>
 = 2). The latent variable
<italic>C</italic>
determines which model generates the data.</p>
</caption>
<graphic xlink:href="fnint-05-00075-g003"></graphic>
</fig>
<disp-formula id="E3">
<label>(4)</label>
<mml:math id="M5">
<mml:mi>p</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mi>s</mml:mi>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>A</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>V</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-punc">;</mml:mo>
<mml:mi>C</mml:mi>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:mfenced>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mi>p</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>A</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:mi>s</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mi>p</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>V</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:mi>s</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mi>p</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mi>s</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mrow>
<mml:mi>p</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>A</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>V</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mfrac>
</mml:math>
</disp-formula>
<disp-formula id="E4">
<mml:math id="M6">
<mml:mtable class="eqnarray" columnalign="left">
<mml:mtr>
<mml:mtd class="eqnarray-1">
<mml:mi>p</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>s</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>A</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>A</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-punc">;</mml:mo>
<mml:mi>C</mml:mi>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:mfenced>
</mml:mtd>
<mml:mtd class="eqnarray-2">
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mi>p</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>A</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>s</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>A</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mi>p</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mi>s</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mrow>
<mml:mi>p</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>A</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mfrac>
<mml:mstyle class="text">
<mml:mtext></mml:mtext>
</mml:mstyle>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd class="eqnarray-1">
<mml:mi>p</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>s</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>V</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>V</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-punc">;</mml:mo>
<mml:mi>C</mml:mi>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:mfenced>
</mml:mtd>
<mml:mtd class="eqnarray-2">
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mi>p</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>V</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>s</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>V</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mi>p</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mi>s</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mrow>
<mml:mi>p</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>V</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mfrac>
</mml:mtd>
<mml:mtd class="eqnarray-4">
<mml:mtext class="eqnarray">(5)</mml:mtext>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:math>
</disp-formula>
<p>Given that the likelihood and prior distributions are Gaussian, the resulting posterior distribution is also Gaussian, and the optimal estimates for the auditory and visual locations,
<inline-formula>
<mml:math id="M1">
<mml:mrow>
<mml:msub>
<mml:mover accent="true">
<mml:mi>s</mml:mi>
<mml:mo>^</mml:mo>
</mml:mover>
<mml:mi>A</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>
and
<inline-formula>
<mml:math id="M2">
<mml:mrow>
<mml:msub>
<mml:mover accent="true">
<mml:mi>s</mml:mi>
<mml:mo>^</mml:mo>
</mml:mover>
<mml:mi>V</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>
are taken as the maximum (equivalently, the mean) of the Gaussian posterior. These estimates are given in Eq.
<xref ref-type="disp-formula" rid="E5">6</xref>
for the common cause structure, and in Eq.
<xref ref-type="disp-formula" rid="E6">7</xref>
for the independent cause structure.</p>
<disp-formula id="E5">
<label>(6)</label>
<mml:math id="M7">
<mml:msub>
<mml:mrow>
<mml:mi>ŝ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>A</mml:mi>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:mi>C</mml:mi>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>ŝ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>V</mml:mi>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:mi>C</mml:mi>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mfrac>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>A</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>A</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:mfrac>
<mml:mo class="MathClass-bin">+</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>V</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>V</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:mfrac>
<mml:mo class="MathClass-bin">+</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>μ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>P</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>P</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
<mml:mrow>
<mml:mfrac>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>A</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:mfrac>
<mml:mo class="MathClass-bin">+</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>V</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:mfrac>
<mml:mo class="MathClass-bin">+</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>P</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
</mml:mfrac>
</mml:math>
</disp-formula>
<disp-formula id="E6">
<label>(7)</label>
<mml:math id="M8">
<mml:msub>
<mml:mrow>
<mml:mi>ŝ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>A</mml:mi>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:mi>C</mml:mi>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mfrac>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>A</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>A</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:mfrac>
<mml:mo class="MathClass-bin">+</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>μ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>P</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>P</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
<mml:mrow>
<mml:mfrac>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>A</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:mfrac>
<mml:mo class="MathClass-bin">+</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>P</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
</mml:mfrac>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>ŝ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>V</mml:mi>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:mi>C</mml:mi>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mfrac>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>V</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>V</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:mfrac>
<mml:mo class="MathClass-bin">+</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>μ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>P</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>P</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
<mml:mrow>
<mml:mfrac>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>V</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:mfrac>
<mml:mo class="MathClass-bin">+</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>P</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
</mml:mfrac>
</mml:math>
</disp-formula>
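<p>As a concrete illustration, the structure-conditioned estimates of Eqs 6 and 7 are simple precision-weighted averages. The sketch below uses illustrative parameter values (in degrees), not values fitted to any observer:</p>

```python
# Optimal estimates under each causal structure (Eqs 6-7).
# All parameter values are illustrative, not fitted.
x_A, x_V = 10.0, 4.0                         # noisy sensations (deg)
sigma_A, sigma_V, sigma_P = 5.0, 2.0, 12.0   # likelihood and prior SDs
mu_P = 0.0                                   # prior mean (straight ahead)

w_A, w_V, w_P = sigma_A**-2, sigma_V**-2, sigma_P**-2

# Common cause (C=1): fuse both cues with the prior (Eq. 6).
s_hat_C1 = (x_A*w_A + x_V*w_V + mu_P*w_P) / (w_A + w_V + w_P)

# Independent causes (C=2): each cue combines with the prior alone (Eq. 7).
s_hat_A_C2 = (x_A*w_A + mu_P*w_P) / (w_A + w_P)
s_hat_V_C2 = (x_V*w_V + mu_P*w_P) / (w_V + w_P)
```

<p>Note that the fused estimate lies between the two sensations, pulled toward the more reliable (here, visual) cue.</p>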
<p>These are the optimal auditory and visual estimates given each causal structure. The causal structure itself, however, is not known to the nervous system and must also be inferred from the sensory evidence and prior knowledge. This inference is formulated using Bayes’ rule as follows:</p>
<disp-formula id="E7">
<label>(8)</label>
<mml:math id="M9">
<mml:mi>p</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mi>C</mml:mi>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>A</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>V</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mi>p</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>A</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>V</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:mi>C</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mi>p</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mi>C</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mrow>
<mml:mi>p</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>A</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>V</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mfrac>
</mml:math>
</disp-formula>
<p>The posterior probability of a common cause can be computed by:</p>
<disp-formula id="E8">
<label>(9)</label>
<mml:math id="M10">
<mml:mtable class="eqnarray" columnalign="left">
<mml:mtr>
<mml:mtd class="eqnarray-2">
<mml:mi>p</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mi>C</mml:mi>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>A</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>V</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mo class="MathClass-rel">=</mml:mo>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd class="eqnarray-2">
<mml:mspace width="1em" class="quad"></mml:mspace>
<mml:mfrac>
<mml:mrow>
<mml:mi>p</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>A</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>V</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:mi>C</mml:mi>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:mfenced>
<mml:msub>
<mml:mrow>
<mml:mi>p</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mstyle class="text">
<mml:mtext>common</mml:mtext>
</mml:mstyle>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mrow>
<mml:mi>p</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>A</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>V</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:mi>C</mml:mi>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:mfenced>
<mml:msub>
<mml:mrow>
<mml:mi>p</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mstyle class="text">
<mml:mtext>common</mml:mtext>
</mml:mstyle>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-bin">+</mml:mo>
<mml:mi>p</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>A</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>V</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:mi>C</mml:mi>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:mfenced>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo class="MathClass-bin">-</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>p</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mstyle class="text">
<mml:mtext>common</mml:mtext>
</mml:mstyle>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mfrac>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:math>
</disp-formula>
<p>where
<italic>p</italic>
<sub>common</sub>
is the prior probability of a common cause. The likelihood of experiencing the joint sensations
<italic>x
<sub>A</sub>
</italic>
and
<italic>x
<sub>V</sub>
</italic>
given a causal structure can be found by integrating over the latent variable
<italic>s
<sub>i</sub>
</italic>
:</p>
<disp-formula id="E9">
<mml:math id="M11">
<mml:mtable class="eqnarray" columnalign="left">
<mml:mtr>
<mml:mtd class="eqnarray-1">
<mml:mi>p</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>A</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>V</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:mi>C</mml:mi>
<mml:mspace width="0.3em" class="thinspace"></mml:mspace>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:mspace width="0.3em" class="thinspace"></mml:mspace>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:mfenced>
</mml:mtd>
<mml:mtd class="eqnarray-2">
<mml:mspace width="0.3em" class="thinspace"></mml:mspace>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:mspace width="0.3em" class="thinspace"></mml:mspace>
<mml:mo class="MathClass-op">∫</mml:mo>
<mml:mi>p</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>A</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:mi>s</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mi>p</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>V</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:mi>s</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mi>p</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mi>s</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mi>d</mml:mi>
<mml:mi>s</mml:mi>
</mml:mtd>
<mml:mtd class="eqnarray-4">
<mml:mtext class="eqnarray">(10)</mml:mtext>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd class="eqnarray-1">
<mml:mi>p</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>A</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>V</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:mi>C</mml:mi>
<mml:mspace width="0.3em" class="thinspace"></mml:mspace>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:mspace width="0.3em" class="thinspace"></mml:mspace>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:mfenced>
</mml:mtd>
<mml:mtd class="eqnarray-2">
<mml:mspace width="0.3em" class="thinspace"></mml:mspace>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:mspace width="0.3em" class="thinspace"></mml:mspace>
<mml:mo class="MathClass-op">∫</mml:mo>
<mml:mi>p</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>A</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>s</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>A</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mi>p</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>s</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>A</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mi>d</mml:mi>
<mml:msub>
<mml:mrow>
<mml:mi>s</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>A</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-bin">⋅</mml:mo>
<mml:mo class="MathClass-op">∫</mml:mo>
<mml:mi>p</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>V</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>s</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>V</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mi>p</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>s</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>V</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mi>d</mml:mi>
<mml:msub>
<mml:mrow>
<mml:mi>s</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>V</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mtd>
<mml:mtd class="eqnarray-4">
<mml:mtext class="eqnarray">(11)</mml:mtext>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:math>
</disp-formula>
<p>Again, since all integrands are Gaussian, the analytic solution is as follows:</p>
<disp-formula id="E10">
<label>(12)</label>
<mml:math id="M12">
<mml:mtable class="eqnarray" columnalign="left">
<mml:mtr>
<mml:mtd class="eqnarray-2">
<mml:mi>p</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>A</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>V</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:mi>C</mml:mi>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:mfenced>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
<mml:mi>π</mml:mi>
<mml:msqrt>
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>A</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>V</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:mo class="MathClass-bin">+</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>A</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>P</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:mo class="MathClass-bin">+</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>V</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>P</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:msqrt>
</mml:mrow>
</mml:mfrac>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd class="eqnarray-2">
<mml:mspace width="1em" class="quad"></mml:mspace>
<mml:mo class="qopname">exp</mml:mo>
<mml:mfenced separators="" open="[" close="]">
<mml:mrow>
<mml:mo class="MathClass-bin">-</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:mfrac>
<mml:mfrac>
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>V</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-bin">-</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>A</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msup>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>P</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:mo class="MathClass-bin">+</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>V</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-bin">-</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>μ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>P</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msup>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>A</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:mo class="MathClass-bin">+</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>A</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-bin">-</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>μ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>P</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msup>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>V</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>A</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>V</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:mo class="MathClass-bin">+</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>A</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>P</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:mo class="MathClass-bin">+</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>V</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>P</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
</mml:mfenced>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:math>
</disp-formula>
<disp-formula id="E11">
<label>(13)</label>
<mml:math id="M13">
<mml:mtable class="eqnarray" columnalign="left">
<mml:mtr>
<mml:mtd class="eqnarray-1">
<mml:mi>p</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>A</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>V</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:mi>C</mml:mi>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:mfenced>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
<mml:mi>π</mml:mi>
<mml:msqrt>
<mml:mrow>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>A</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:mo class="MathClass-bin">+</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>P</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:mfenced>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>V</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:mo class="MathClass-bin">+</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>P</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:msqrt>
</mml:mrow>
</mml:mfrac>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd class="eqnarray-2">
<mml:mo class="qopname">exp</mml:mo>
<mml:mfenced separators="" open="[" close="]">
<mml:mrow>
<mml:mo class="MathClass-bin">-</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:mfrac>
<mml:mfrac>
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>A</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-bin">-</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>μ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>P</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msup>
</mml:mrow>
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>A</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:mo class="MathClass-bin">+</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>P</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:mfrac>
<mml:mo class="MathClass-bin">+</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>V</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-bin">-</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>μ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>P</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msup>
</mml:mrow>
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>V</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:mo class="MathClass-bin">+</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>P</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
</mml:mfenced>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:math>
</disp-formula>
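<p>The analytic forms in Eqs 12 and 13 can be checked numerically against the integrals of Eqs 10 and 11. A minimal sketch with illustrative (non-fitted) parameters:</p>

```python
import math

# Analytic likelihoods of the joint sensations under each causal
# structure (Eqs 12-13), checked numerically against the integral of
# Eq. 10. All parameter values are illustrative, not fitted.
x_A, x_V = 10.0, 4.0
sA2, sV2, sP2 = 25.0, 4.0, 144.0   # variances of likelihoods and prior
mu_P = 0.0

def lik_C1(x_A, x_V):
    # Eq. 12: likelihood of the joint sensations under a common cause.
    var = sA2*sV2 + sA2*sP2 + sV2*sP2
    q = ((x_V - x_A)**2*sP2 + (x_V - mu_P)**2*sA2
         + (x_A - mu_P)**2*sV2) / var
    return math.exp(-0.5*q) / (2*math.pi*math.sqrt(var))

def lik_C2(x_A, x_V):
    # Eq. 13: likelihood under independent causes.
    return math.exp(-0.5*((x_A - mu_P)**2/(sA2 + sP2)
                          + (x_V - mu_P)**2/(sV2 + sP2))) \
        / (2*math.pi*math.sqrt((sA2 + sP2)*(sV2 + sP2)))

def gauss(x, m, v):
    return math.exp(-0.5*(x - m)**2/v) / math.sqrt(2*math.pi*v)

# Eq. 10 evaluated by Riemann sum over the latent source location s.
ds = 0.01
num = sum(gauss(x_A, s, sA2)*gauss(x_V, s, sV2)*gauss(s, mu_P, sP2)*ds
          for s in (i*ds - 100.0 for i in range(20000)))
```

<p>The Riemann sum agrees with the closed form of Eq. 12 to numerical precision, since all integrands are Gaussian.</p>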
<p>The posterior probability of independent causes can then be calculated as:</p>
<disp-formula id="E12">
<label>(14)</label>
<mml:math id="M14">
<mml:mi>p</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mi>C</mml:mi>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:mn>2</mml:mn>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>A</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>V</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo class="MathClass-bin">-</mml:mo>
<mml:mi>p</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mi>C</mml:mi>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>A</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>V</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:math>
</disp-formula>
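<p>Combining Eqs 9 and 12-14 gives the posterior probability of a common cause directly. A minimal sketch, with illustrative parameters and a flat structural prior (p<sub>common</sub> = 0.5):</p>

```python
import math

# Posterior probability of a common cause (Eqs 9 and 14), using the
# analytic likelihoods of Eqs 12-13. All parameter values, including
# the flat structural prior p_common = 0.5, are illustrative.
sA2, sV2, sP2, mu_P, p_common = 25.0, 4.0, 144.0, 0.0, 0.5

def posterior_common(x_A, x_V):
    var1 = sA2*sV2 + sA2*sP2 + sV2*sP2
    lik1 = math.exp(-0.5*((x_V - x_A)**2*sP2 + (x_V - mu_P)**2*sA2
                          + (x_A - mu_P)**2*sV2)/var1) \
        / (2*math.pi*math.sqrt(var1))
    lik2 = math.exp(-0.5*((x_A - mu_P)**2/(sA2 + sP2)
                          + (x_V - mu_P)**2/(sV2 + sP2))) \
        / (2*math.pi*math.sqrt((sA2 + sP2)*(sV2 + sP2)))
    return lik1*p_common / (lik1*p_common + lik2*(1 - p_common))
```

<p>Coincident sensations raise the posterior toward a common cause, while a large spatial conflict drives it toward independent causes; the posterior of Eq. 14 is simply the complement.</p>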
<p>At this point we have calculated the probability of each causal structure, and the optimal perceptual estimates assuming (i.e., under
<italic>certainty</italic>
about) each causal structure. The final stage is to obtain the perceptual estimates given the
<italic>uncertainty</italic>
in causal structure. If the goal of the nervous system is to minimize the mean squared error of the perceptual estimates, then the optimal solution would be to take the average of the estimates of the two causal structures, each weighted by their relative probability (Körding et al.,
<xref ref-type="bibr" rid="B22">2007</xref>
). This decision strategy is referred to as
<italic>model averaging</italic>
(Eq.
<xref ref-type="disp-formula" rid="E13">15</xref>
).</p>
<disp-formula id="E13">
<label>(15)</label>
<mml:math id="M15">
<mml:mtable columnalign="left">
<mml:mtr>
<mml:mtd>
<mml:msub>
<mml:mrow>
<mml:mi>ŝ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>A</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:mi>p</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mi>C</mml:mi>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>A</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>V</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:msub>
<mml:mrow>
<mml:mi>ŝ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>A</mml:mi>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:mi>C</mml:mi>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-bin">+</mml:mo>
<mml:mi>p</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mi>C</mml:mi>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:mn>2</mml:mn>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>A</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>V</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:msub>
<mml:mrow>
<mml:mi>ŝ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>A</mml:mi>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:mi>C</mml:mi>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msub>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:msub>
<mml:mrow>
<mml:mi>ŝ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>V</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:mi>p</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mi>C</mml:mi>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>A</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>V</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:msub>
<mml:mrow>
<mml:mi>ŝ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>V</mml:mi>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:mi>C</mml:mi>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-bin">+</mml:mo>
<mml:mi>p</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mi>C</mml:mi>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:mn>2</mml:mn>
<mml:mo class="MathClass-rel">|</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>A</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>V</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:msub>
<mml:mrow>
<mml:mi>ŝ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>V</mml:mi>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:mi>C</mml:mi>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msub>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:math>
</disp-formula>
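<p>Putting the pieces together, a model-averaging observer (Eq. 15) can be sketched end to end; all parameter values below are illustrative, not fitted to any observer:</p>

```python
import math

# A model-averaging observer (Eq. 15): the reported estimate is the
# average of the two structure-conditioned estimates (Eqs 6-7),
# weighted by the posterior over causal structures (Eq. 9).
# All parameter values are illustrative, not fitted.
sA2, sV2, sP2, mu_P, p_common = 25.0, 4.0, 144.0, 0.0, 0.5

def estimates(x_A, x_V):
    wA, wV, wP = 1/sA2, 1/sV2, 1/sP2
    s_C1 = (x_A*wA + x_V*wV + mu_P*wP) / (wA + wV + wP)    # Eq. 6
    sA_C2 = (x_A*wA + mu_P*wP) / (wA + wP)                 # Eq. 7
    sV_C2 = (x_V*wV + mu_P*wP) / (wV + wP)
    var1 = sA2*sV2 + sA2*sP2 + sV2*sP2
    lik1 = math.exp(-0.5*((x_V - x_A)**2*sP2 + (x_V - mu_P)**2*sA2
                          + (x_A - mu_P)**2*sV2)/var1) \
        / (2*math.pi*math.sqrt(var1))                      # Eq. 12
    lik2 = math.exp(-0.5*((x_A - mu_P)**2/(sA2 + sP2)
                          + (x_V - mu_P)**2/(sV2 + sP2))) \
        / (2*math.pi*math.sqrt((sA2 + sP2)*(sV2 + sP2)))   # Eq. 13
    p_C1 = lik1*p_common / (lik1*p_common + lik2*(1 - p_common))  # Eq. 9
    return (p_C1*s_C1 + (1 - p_C1)*sA_C2,                  # Eq. 15
            p_C1*s_C1 + (1 - p_C1)*sV_C2)

s_hat_A, s_hat_V = estimates(10.0, 4.0)
```

<p>With a moderate conflict, the averaged auditory estimate lies between the fused and segregated solutions, capturing partial ventriloquism.</p>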
<p>However, as shown by Wozny et al. (
<xref ref-type="bibr" rid="B45">2010</xref>
), some individuals adopt alternative decision-making strategies and cost functions. One such alternative is Bayesian
<italic>model selection</italic>
, which selects the auditory and visual estimates corresponding to the more probable causal structure:</p>
<disp-formula id="E14">
<label>(16)</label>
<mml:math id="M16">
<mml:mtable columnalign="left">
<mml:mtr>
<mml:mtd>
<mml:msub>
<mml:mover accent="true">
<mml:mi>s</mml:mi>
<mml:mo>^</mml:mo>
</mml:mover>
<mml:mi>A</mml:mi>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mrow>
<mml:mo>{</mml:mo>
<mml:mtable columnalign="left">
<mml:mtr>
<mml:mtd>
<mml:msub>
<mml:mover accent="true">
<mml:mi>s</mml:mi>
<mml:mo>^</mml:mo>
</mml:mover>
<mml:mrow>
<mml:mi>A</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>C</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo></mml:mo>
<mml:mtext>if</mml:mtext>
<mml:mo></mml:mo>
<mml:mi>p</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>C</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>|</mml:mo>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mi>A</mml:mi>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mi>V</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>></mml:mo>
<mml:mn>.5</mml:mn>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:msub>
<mml:mover accent="true">
<mml:mi>s</mml:mi>
<mml:mo>^</mml:mo>
</mml:mover>
<mml:mrow>
<mml:mi>A</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>C</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo></mml:mo>
<mml:mtext>if</mml:mtext>
<mml:mo></mml:mo>
<mml:mi>p</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>C</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>|</mml:mo>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mi>A</mml:mi>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mi>V</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>≤</mml:mo>
<mml:mn>.5</mml:mn>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mrow>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:msub>
<mml:mover accent="true">
<mml:mi>s</mml:mi>
<mml:mo>^</mml:mo>
</mml:mover>
<mml:mi>V</mml:mi>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mrow>
<mml:mo>{</mml:mo>
<mml:mtable columnalign="left">
<mml:mtr>
<mml:mtd>
<mml:msub>
<mml:mover accent="true">
<mml:mi>s</mml:mi>
<mml:mo>^</mml:mo>
</mml:mover>
<mml:mrow>
<mml:mi>V</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>C</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo></mml:mo>
<mml:mtext>if</mml:mtext>
<mml:mo></mml:mo>
<mml:mi>p</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>C</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>|</mml:mo>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mi>A</mml:mi>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mi>V</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>></mml:mo>
<mml:mn>.5</mml:mn>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:msub>
<mml:mover accent="true">
<mml:mi>s</mml:mi>
<mml:mo>^</mml:mo>
</mml:mover>
<mml:mrow>
<mml:mi>V</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>C</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo></mml:mo>
<mml:mtext>if</mml:mtext>
<mml:mo></mml:mo>
<mml:mi>p</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>C</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>|</mml:mo>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mi>A</mml:mi>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mi>V</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>≤</mml:mo>
<mml:mn>.5</mml:mn>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mrow>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:math>
</disp-formula>
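<p>Under model selection the observer commits entirely to one structure rather than averaging. A minimal sketch of the rule in Eq. 16, taking structure-conditioned estimates and the posterior as inputs (the numeric values below are hypothetical):</p>

```python
def select(p_C1, s_C1, s_A_C2, s_V_C2):
    # Eq. 16: report the fused estimate for both modalities if the
    # common-cause structure is more probable, otherwise the
    # segregated estimates.
    if p_C1 > 0.5:
        return s_C1, s_C1
    return s_A_C2, s_V_C2

print(select(0.7, 4.7, 8.5, 3.9))   # common cause wins -> (4.7, 4.7)
print(select(0.3, 4.7, 8.5, 3.9))   # independent causes -> (8.5, 3.9)
```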
<p>The other alternative decision strategy we consider is
<italic>probability matching</italic>
. This is a stochastic strategy; on each trial a causal structure is selected with the probability matching its inferred probability. We simulate this strategy by randomly sampling from a uniform distribution on each trial (within a range from 0 to 1), and choosing the common cause model if its posterior probability is greater than the random sample (Eq.
<xref ref-type="disp-formula" rid="E15">17</xref>
). As an analogy: if there is a 70% chance of rain, one draws a ball from an urn containing 100 balls labeled 1 to 100 before leaving the house, and takes an umbrella if the drawn ball is numbered below 70.</p>
<disp-formula id="E15">
<label>(17)</label>
<mml:math id="M17">
<mml:mtable columnalign="left">
<mml:mtr>
<mml:mtd>
<mml:msub>
<mml:mover accent="true">
<mml:mi>s</mml:mi>
<mml:mo>^</mml:mo>
</mml:mover>
<mml:mi>A</mml:mi>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mrow>
<mml:mo>{</mml:mo>
<mml:mrow>
<mml:mtable>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:msub>
<mml:mover accent="true">
<mml:mi>s</mml:mi>
<mml:mo>^</mml:mo>
</mml:mover>
<mml:mrow>
<mml:mi>A</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>C</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo></mml:mo>
<mml:mtext>if</mml:mtext>
<mml:mo></mml:mo>
<mml:mi>p</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>C</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>|</mml:mo>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mi>A</mml:mi>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mi>V</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>></mml:mo>
<mml:mi>ξ</mml:mi>
</mml:mrow>
</mml:mtd>
<mml:mtd>
<mml:mtable columnalign="left">
<mml:mtr>
<mml:mtd>
<mml:mtext>where</mml:mtext>
<mml:mo></mml:mo>
<mml:mi>ξ</mml:mi>
<mml:mo>∼</mml:mo>
<mml:mo stretchy="false">[</mml:mo>
<mml:mn>0</mml:mn>
<mml:mo>:</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo stretchy="false">]</mml:mo>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mtext>uniform</mml:mtext>
<mml:mo></mml:mo>
<mml:mtext>distribution</mml:mtext>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:msub>
<mml:mover accent="true">
<mml:mi>s</mml:mi>
<mml:mo>^</mml:mo>
</mml:mover>
<mml:mrow>
<mml:mi>A</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>C</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo></mml:mo>
<mml:mtext>if</mml:mtext>
<mml:mo></mml:mo>
<mml:mi>p</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>C</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>|</mml:mo>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mi>A</mml:mi>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mi>V</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>≤</mml:mo>
<mml:mi>ξ</mml:mi>
</mml:mrow>
</mml:mtd>
<mml:mtd>
<mml:mtable columnalign="left">
<mml:mtr>
<mml:mtd>
<mml:mtext>and</mml:mtext>
<mml:mo></mml:mo>
<mml:mtext>sampled</mml:mtext>
<mml:mo></mml:mo>
<mml:mtext>on</mml:mtext>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mtext>each</mml:mtext>
<mml:mo></mml:mo>
<mml:mtext>trial</mml:mtext>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mrow>
</mml:mrow>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:msub>
<mml:mover accent="true">
<mml:mi>s</mml:mi>
<mml:mo>^</mml:mo>
</mml:mover>
<mml:mi>V</mml:mi>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mrow>
<mml:mo>{</mml:mo>
<mml:mtable columnalign="left">
<mml:mtr>
<mml:mtd>
<mml:msub>
<mml:mover accent="true">
<mml:mi>s</mml:mi>
<mml:mo>^</mml:mo>
</mml:mover>
<mml:mrow>
<mml:mi>A</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>C</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo></mml:mo>
<mml:mtext>if</mml:mtext>
<mml:mo></mml:mo>
<mml:mi>p</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>C</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>|</mml:mo>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mi>A</mml:mi>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mi>V</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>></mml:mo>
<mml:mi>ξ</mml:mi>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:msub>
<mml:mover accent="true">
<mml:mi>s</mml:mi>
<mml:mo>^</mml:mo>
</mml:mover>
<mml:mrow>
<mml:mi>A</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>C</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo></mml:mo>
<mml:mtext>if</mml:mtext>
<mml:mo></mml:mo>
<mml:mi>p</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>C</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>|</mml:mo>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mi>A</mml:mi>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mi>V</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>≤</mml:mo>
<mml:mi>ξ</mml:mi>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mrow>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:math>
</disp-formula>
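The probability-matching rule of Eq. 17 can be sketched in a few lines of Python (a hypothetical re-implementation for illustration, not the authors’ Matlab code; the function and variable names are ours):

```python
import random

def probability_matching_estimate(p_c1, s_hat_c1, s_hat_c2):
    """Pick the common-cause estimate (C = 1) with probability equal to the
    inferred posterior p(C = 1 | xA, xV); otherwise pick the
    independent-causes estimate (C = 2), as in Eq. 17."""
    xi = random.random()  # xi ~ uniform[0, 1], drawn anew on each trial
    return s_hat_c1 if p_c1 > xi else s_hat_c2

# Umbrella analogy: with a 70% chance of rain, the umbrella is taken on
# roughly 70% of trials.
random.seed(0)
choices = [probability_matching_estimate(0.7, "umbrella", "none")
           for _ in range(10000)]
fraction = choices.count("umbrella") / len(choices)
```

Averaged over many trials, the stochastic rule reproduces ("matches") the posterior probability itself, which is what distinguishes it from deterministic model selection.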
<p>For each subject, we fitted model parameters to the participant’s response data using each of the three decision-making strategies described above (Wozny et al.,
<xref ref-type="bibr" rid="B45">2010</xref>
). We then chose the parameters and strategy that provided the best fit for each subject. Seven parameters were fitted simultaneously to the entire dataset (all 35 stimulus conditions) in an optimization search that maximized the likelihood of the data given the model parameters: Δμ
<sub>
<italic>A</italic>
</sub>
, σ
<sub>
<italic>A</italic>
</sub>
–the auditory likelihood mean offset and SD; Δμ
<sub>
<italic>V</italic>
</sub>
, σ
<sub>
<italic>V</italic>
</sub>
–the visual likelihood mean offset and SD; μ
<sub>
<italic>P</italic>
</sub>
, σ
<sub>
<italic>P</italic>
</sub>
–the prior mean and SD; and
<italic>p</italic>
<sub>common</sub>
– the prior probability of a common cause. A bounded version of Matlab’s fminsearch simplex algorithm was used for optimization. Parameter values were estimated separately for the pre-adaptation and post-adaptation test data. Paired two-tailed
<italic>t</italic>
-tests were used to test the differences between pre-adaptation and post-adaptation parameter values.</p>
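A minimal sketch of this kind of bounded maximum-likelihood fit, using Python’s SciPy in place of Matlab’s fminsearch, and a toy Gaussian objective standing in for the full 35-condition causal-inference likelihood (all values here are simulated, not the study’s data):

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative stand-in: recover just an auditory likelihood offset and SD
# from simulated unisensory responses; the real objective evaluates the
# causal-inference model over all 35 stimulus conditions and 7 parameters.
rng = np.random.default_rng(1)
true_offset, true_sd = 4.0, 8.0                    # degrees (made up)
responses = rng.normal(true_offset, true_sd, 500)  # simulated responses

def neg_log_likelihood(theta):
    d_mu_a, sigma_a = theta
    z = (responses - d_mu_a) / sigma_a
    return float(np.sum(0.5 * z**2 + np.log(sigma_a * np.sqrt(2 * np.pi))))

# Bounded optimization, analogous in spirit to the bounded simplex search.
fit = minimize(neg_log_likelihood, x0=[0.0, 5.0],
               bounds=[(-20.0, 20.0), (0.1, 50.0)], method="L-BFGS-B")
d_mu_a_hat, sigma_a_hat = fit.x
```

The bounds keep the SD strictly positive and the offset within a plausible spatial range, mirroring the role of the bounded fminsearch variant described above.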
</sec>
</sec>
<sec>
<title>Results</title>
<p>For all figures and spatial parameters, 0° indicates straight ahead; negative and positive values denote left and right, respectively. Comparison between subjects’ post-adaptation and pre-adaptation responses in the unisensory auditory conditions showed significant VAEs at all five tested locations (Figure
<xref ref-type="fig" rid="F4">4</xref>
A). For each subject, the aftereffect magnitudes were calculated as the change (post-adaptation minus pre-adaptation) in the subject’s mean auditory responses. In order to combine the data across the two adaptation groups, we negated the aftereffect values for the VA-adaptation group (so that positive values denote shifts toward the adapting visual offset). The mean magnitude of the shift in auditory spatial localization at each spatial location across all 24 subjects is shown in Figure
<xref ref-type="fig" rid="F4">4</xref>
A. As can be seen, there was a statistically significant adaptation effect at all tested locations.</p>
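The aftereffect computation described above can be sketched as follows (the response values are hypothetical; the sign flip for the VA group is the negation described in the text):

```python
import numpy as np

def aftereffect_magnitude(pre, post, group):
    """Aftereffect = mean(post) - mean(pre) auditory localization; the sign
    is flipped for the VA-adaptation group so that positive values always
    denote a shift toward the adapting visual offset."""
    shift = float(np.mean(post) - np.mean(pre))
    return -shift if group == "VA" else shift

# Hypothetical responses (degrees) for one subject at one tested location:
pre_responses = [-14.0, -12.5, -13.2]
post_responses = [-10.1, -9.8, -10.5]
shift_av = aftereffect_magnitude(pre_responses, post_responses, "AV")
shift_va = aftereffect_magnitude(pre_responses, post_responses, "VA")  # negated
```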
<fig id="F4" position="float">
<label>Figure 4</label>
<caption>
<p>
<bold>The magnitude of the observed adaptation effect</bold>
.
<bold>(A)</bold>
Mean localization aftereffect magnitude at each tested location. Aftereffect magnitudes are measured as the post-adaptation minus pre-adaptation difference in subjects’ mean auditory responses (
<italic>N</italic>
 = 24). Here, positive aftereffects are defined as shifts in the direction of the visual stimulus offset presented during adaptation. *
<italic>p</italic>
 < 0.05 two-tailed paired
<italic>t</italic>
-test, df = 23, Bonferroni corrected.
<bold>(B)</bold>
Scatter plot of aftereffect magnitude vs. pre-adaptation localization error. Aftereffects were measured as the post-adaptation minus pre-adaptation difference in the subjects’ mean auditory responses. Localization error was calculated at all bisensory pre-adaptation test locations with the same discrepancy (±13°) as that during exposure (3 data points per subject × 24 subjects = 72 data points). The stimulus conditions are shown in the legend. The data points corresponding to the VA-adaptation and AV-adaptation groups are represented with filled and open symbols, respectively. Data points derived from the same subject share the same color. The dashed line shows a significant linear correlation of the data (
<italic>r</italic>
 = 0.70,
<italic>p</italic>
 < 0.0001).</p>
</caption>
<graphic xlink:href="fnint-05-00075-g004"></graphic>
</fig>
<p>Next, we examined the relationship between the auditory–visual interactions and the magnitude of the adaptation. We hypothesize that the recalibration is driven by the crossmodal error signal that occurs during the exposure presentations. Since we do not probe the auditory localization error during exposure, we must gather this information from the pre-adaptation data. As can be seen in Figure
<xref ref-type="fig" rid="F4">4</xref>
B, there is a linear correlation between the size of a subject’s aftereffect and the auditory localization error during bisensory pre-adaptation test trials with a discrepancy of 13° (the discrepancy that was presented during exposure). For each subject, there are three possible bisensory conditions that constitute either a positive or negative 13° discrepancy (consistent with exposure conditions for the two groups). For the AV-adaptation group (Figure
<xref ref-type="fig" rid="F2">2</xref>
B), the conditions are (
<italic>A</italic>
,
<italic>V</italic>
) = {(−13, 0); (−6.5, +6.5); (0, +13)}, and for the VA-adaptation group (Figure
<xref ref-type="fig" rid="F2">2</xref>
C), the conditions are (
<italic>A</italic>
,
<italic>V</italic>
) = {(0, −13); (+6.5, −6.5); (+13, 0)}. Localization error is defined as the subject’s auditory response minus the veridical location of the auditory stimulus during these bisensory trials. Aftereffect is defined as the subject’s mean post-adaptation test auditory response minus the mean pre-adaptation test auditory response at each of the three auditory alone conditions. The scatterplot of Figure
<xref ref-type="fig" rid="F4">4</xref>
B shows that the stronger the influence of the visual stimulus on auditory perception in bisensory trials (i.e., the stronger the auditory–visual interactions), the stronger the adaptation of the auditory spatial perception will be.</p>
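The reported relationship is a standard Pearson correlation over the 72 (localization error, aftereffect) pairs; a sketch with made-up values (the real data are those plotted in Figure 4B):

```python
import numpy as np

# Hypothetical (error, aftereffect) pairs in degrees; the study pools
# 3 bisensory conditions x 24 subjects = 72 such points and reports r = 0.70.
localization_error = np.array([2.1, 5.4, 8.0, 1.0, 6.5, 9.2])
aftereffect = np.array([1.5, 3.9, 6.1, 0.8, 4.7, 7.3])

# Pearson correlation between visual capture during bisensory trials and
# the subsequent unisensory aftereffect.
r = float(np.corrcoef(localization_error, aftereffect)[0, 1])
```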
<p>No significant correlation between the size of the aftereffect and either the SD of the auditory responses, or the fitted SD of the auditory likelihood function, σ
<sub>
<italic>A</italic>
</sub>
, (see below) was found. It should be noted that the auditory–visual interaction is a non-linear function of both the auditory and visual SD, as well as the prior bias for perceiving a common source,
<italic>p</italic>
<sub>common</sub>
. Therefore, the absence of a linear correlation between a single variable and the aftereffect magnitude is not surprising.</p>
<p>The results discussed so far replicate the previous findings of VAE, and in addition suggest a direct role for auditory–visual interactions in producing the aftereffect. In order to investigate which perceptual components undergo change in this process and result in the aftereffect, we fitted the causal inference model described in the Methods section to each individual subject’s pre-adaptation and post-adaptation test data separately. All the model fits in this study were based on individual subject’s data (as opposed to group data) in order to test for statistically significant changes in parameters. Similar to our previous study of spatial localization (Wozny et al.,
<xref ref-type="bibr" rid="B45">2010</xref>
), the majority of subjects were best fitted by the probability-matching strategy: 18 (75%) matching; 3 (12.5%) selection; 3 (12.5%) averaging. Model fits to the pre-adaptation test group data for the 18 probability-matching subjects are shown in Figure
<xref ref-type="fig" rid="F5">5</xref>
A for illustration purposes only, to show the bimodal (i.e., two-peaked) nature of the response distributions and the ability of the model to capture these patterns. The post-adaptation test group data for probability-matching subjects in the AV-adaptation group are shown in Figure
<xref ref-type="fig" rid="F5">5</xref>
B again for illustration purposes only. As can be seen, the response distributions in the unisensory auditory conditions (first row) are shifted to the right after adaptation. The model fitted the individual subject’s data very well, on average explaining 89% of the variance in the data (
<italic>R</italic>
<sup>2</sup>
 = 0.89 ± 0.05) across subjects and test phases
<xref ref-type="fn" rid="fn1">
<sup>1</sup>
</xref>
. Although the precision of auditory localization was much worse than that of visual localization in this experiment, and a previous study has suggested that auditory–visual integration may deviate from optimality when the difference in the reliabilities of the two modalities is large (Bentvelzen et al.,
<xref ref-type="bibr" rid="B3">2009</xref>
), we do observe a pattern of behavior in all subjects that is highly consistent with Bayesian causal inference, as evidenced by the high goodness-of-fit values.</p>
<fig id="F5" position="float">
<label>Figure 5</label>
<caption>
<p>
<bold>Subject group response distributions and the model fits</bold>
. Observers’ marginal response log-probabilities for each stimulus condition are shown on the ordinate in shaded areas, and model fits are shown as superimposed solid lines. Vertical dotted lines show the true stimulus locations. The first row shows the five unisensory auditory conditions, with the sound location ranging from left to right along the azimuth as shown by the blue vertical dotted lines. The first column shows the five unisensory visual conditions, again with the stimulus position ranging from left to right as shown by the magenta vertical dotted lines. The remaining 25 panels in each figure show the bisensory conditions with both the visual and auditory response probabilities.
<bold>(A)</bold>
Pre-adaptation test data combined across 18 subjects who used the same decision-making strategy (probability matching).
<bold>(B)</bold>
Post-adaptation test data combined across subjects who were in the AV-adaptation group and used the same decision-making strategy (probability matching). For this group of eight subjects the unisensory auditory responses were shifted to the right after adaptation as can be seen in the first row.</p>
</caption>
<graphic xlink:href="fnint-05-00075-g005"></graphic>
</fig>
<p>The fitted parameter values were first submitted to a 2 × 2 repeated measures MANOVA with Adaptation (AV-adaptation, VA-adaptation) and Response Order (vision-first, audition-first) as between-subject factors and Test as a repeated measure (pre-adaptation, post-adaptation). Parameter estimate mean and SD for each Adaptation group are shown in Table
<xref ref-type="table" rid="T1">1</xref>
. There was no significant main effect of response order, or interactions with response order (
<italic>p</italic>
 > 0.05), indicating that the order of response did not have a significant impact on the results. However, there was a very strong Test × Adaptation interaction (
<italic>p</italic>
 < 0.0001). Planned comparison analysis was then performed on each group’s data separately, using a paired two-tailed
<italic>t</italic>
-test between pre-adaptation and post-adaptation parameter values; these tests were corrected for multiple comparisons using a Bonferroni correction for seven tests (α = 0.007). For both the AV-adaptation and VA-adaptation groups, the auditory likelihood offset parameter Δμ
<sub>
<italic>A</italic>
</sub>
was the only parameter that was found to be significantly different between the two test phases (two-tailed paired
<italic>t</italic>
-test, df = 11,
<italic>p</italic>
 < 0.0001 for both groups). All 24 subjects showed a shift in the auditory likelihood mean in the expected direction (i.e., toward the adapting visual offset). For the VA-adaptation group, there was a trend toward an increase in the spatial prior SD,
<italic>σ
<sub>P</sub>
,</italic>
after adaptation (two-tailed paired
<italic>t</italic>
-test, df = 11,
<italic>p</italic>
 = 0.01); however, this did not survive the Bonferroni correction.</p>
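A sketch of this planned comparison (simulated parameter estimates, not the study’s data; SciPy’s paired t-test stands in for the analysis described above):

```python
import numpy as np
from scipy import stats

# Hypothetical pre- and post-adaptation estimates of one parameter for
# 12 subjects, with a built-in ~3.8 degree shift (cf. Table 1).
rng = np.random.default_rng(2)
pre = rng.normal(0.6, 2.0, 12)
post = pre + 3.8 + rng.normal(0.0, 1.0, 12)

t_stat, p_value = stats.ttest_rel(pre, post)  # paired, two-tailed, df = 11
alpha = 0.05 / 7                              # Bonferroni: seven parameters
significant = p_value < alpha
```

Because the same seven parameters are compared in each group, the per-test threshold is 0.05/7 ≈ 0.007, matching the α reported in the text.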
<table-wrap id="T1" position="float">
<label>Table 1</label>
<caption>
<p>
<bold>Sample mean ± SE parameter estimates for each adaptation group</bold>
.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th colspan="2" align="center" rowspan="1"></th>
<th colspan="2" align="center" rowspan="1">Auditory likelihood
<hr></hr>
</th>
<th colspan="2" align="center" rowspan="1">Visual likelihood
<hr></hr>
</th>
<th colspan="3" align="center" rowspan="1">Prior
<hr></hr>
</th>
</tr>
<tr>
<th colspan="2" align="left" rowspan="1"></th>
<th align="left" rowspan="1" colspan="1">Δμ
<sub>
<italic>A</italic>
</sub>
(degrees)</th>
<th align="left" rowspan="1" colspan="1">σ
<sub>
<italic>A</italic>
</sub>
(degrees)</th>
<th align="left" rowspan="1" colspan="1">Δμ
<sub>
<italic>V</italic>
</sub>
(degrees)</th>
<th align="left" rowspan="1" colspan="1">σ
<sub>
<italic>V</italic>
</sub>
(degrees)</th>
<th align="left" rowspan="1" colspan="1">μ
<sub>
<italic>P</italic>
</sub>
(degrees)</th>
<th align="left" rowspan="1" colspan="1">σ
<sub>
<italic>P</italic>
</sub>
(degrees)</th>
<th align="left" rowspan="1" colspan="1">
<italic>p</italic>
<sub>common</sub>
</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" rowspan="1" colspan="1">AV-adaptation
<italic>N</italic>
 = 12</td>
<td align="left" rowspan="1" colspan="1">Pre</td>
<td align="left" rowspan="1" colspan="1">0.59 ± 0.56</td>
<td align="left" rowspan="1" colspan="1">8.04 ± 0.39</td>
<td align="left" rowspan="1" colspan="1">0.05 ± 0.11</td>
<td align="left" rowspan="1" colspan="1">2.17 ± 0.27</td>
<td align="left" rowspan="1" colspan="1">−0.56 ± 1.33</td>
<td align="left" rowspan="1" colspan="1">49.71 ± 4.78</td>
<td align="left" rowspan="1" colspan="1">0.39 ± 0.08</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">Post</td>
<td align="left" rowspan="1" colspan="1">4.37 ± 0.99</td>
<td align="left" rowspan="1" colspan="1">9.17 ± 0.96</td>
<td align="left" rowspan="1" colspan="1">−0.11 ± 0.10</td>
<td align="left" rowspan="1" colspan="1">2.24 ± 0.12</td>
<td align="left" rowspan="1" colspan="1">−0.99 ± 1.73</td>
<td align="left" rowspan="1" colspan="1">38.62 ± 5.47</td>
<td align="left" rowspan="1" colspan="1">0.45 ± 0.08</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">Post-pre</td>
<td align="left" rowspan="1" colspan="1">3.77 ± 0.58**</td>
<td align="left" rowspan="1" colspan="1">0.07 ± 0.26</td>
<td align="left" rowspan="1" colspan="1">−0.16 ± 0.10</td>
<td align="left" rowspan="1" colspan="1">0.07 ± 0.26</td>
<td align="left" rowspan="1" colspan="1">−0.42 ± 2.12</td>
<td align="left" rowspan="1" colspan="1">−11.09 ± 6.71</td>
<td align="left" rowspan="1" colspan="1">0.06 ± 0.05</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">VA-adaptation
<italic>N</italic>
 = 12</td>
<td align="left" rowspan="1" colspan="1">Pre</td>
<td align="left" rowspan="1" colspan="1">1.94 ± 0.85</td>
<td align="left" rowspan="1" colspan="1">10.19 ± 1.83</td>
<td align="left" rowspan="1" colspan="1">0.34 ± 0.10</td>
<td align="left" rowspan="1" colspan="1">2.19 ± 0.31</td>
<td align="left" rowspan="1" colspan="1">−3.67 ± 2.54</td>
<td align="left" rowspan="1" colspan="1">31.19 ± 7.99</td>
<td align="left" rowspan="1" colspan="1">0.54 ± 0.06</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">Post</td>
<td align="left" rowspan="1" colspan="1">−1.51 ± 0.81</td>
<td align="left" rowspan="1" colspan="1">11.86 ± 2.14</td>
<td align="left" rowspan="1" colspan="1">0.26 ± 0.09</td>
<td align="left" rowspan="1" colspan="1">2.44 ± 0.35</td>
<td align="left" rowspan="1" colspan="1">−4.95 ± 2.34</td>
<td align="left" rowspan="1" colspan="1">42.62 ± 10.54</td>
<td align="left" rowspan="1" colspan="1">0.62 ± 0.06</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">Post-pre</td>
<td align="left" rowspan="1" colspan="1">−3.45 ± 0.55**</td>
<td align="left" rowspan="1" colspan="1">1.67 ± 0.83</td>
<td align="left" rowspan="1" colspan="1">−0.08 ± 0.05</td>
<td align="left" rowspan="1" colspan="1">0.25 ± 0.17</td>
<td align="left" rowspan="1" colspan="1">−1.12 ± 2.00</td>
<td align="left" rowspan="1" colspan="1">11.43 ± 3.55*</td>
<td align="left" rowspan="1" colspan="1">0.07 ± 0.05</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<p>
<italic>*
<italic>p</italic>
 < 0.05 (uncorrected), **
<italic>p</italic>
 < 0.05 (Bonferroni corrected) denote significant changes between pre- and post-tests within each group</italic>
.</p>
</table-wrap-foot>
</table-wrap>
<p>Figure
<xref ref-type="fig" rid="F6">6</xref>
graphically displays the results using the same illustration scheme as in Figure
<xref ref-type="fig" rid="F1">1</xref>
. Actual parameters obtained from the data (shown in Table
<xref ref-type="table" rid="T1">1</xref>
) are used to create the likelihood and prior distributions, and aftereffects magnitudes obtained from subjects’ responses (described above) are shown in the bottom row for each of the two adaptation groups. The exposure conditions are shown in the top row. To avoid crowding the figure, only the +13°, 0°, and −13° stimuli, likelihoods, and posteriors are shown, but aftereffects in the bottom row are shown for all five auditory stimulus conditions. Again, to avoid crowding the figure, the mean of the auditory likelihood functions are only shown for −13° auditory stimulus location. The green arrow denotes the likelihood shift to the right (panel A), or the left (panel B), and is shown again in the bottom panels. The aftereffect appears to be slightly larger at −6.5° and at +6.5° in panels A and B, respectively. However, this difference is not statistically significant. A previous study has suggested asymmetries in spatial generalization of the aftereffect (Bertelson et al.,
<xref ref-type="bibr" rid="B4">2006</xref>
); however, further investigation is required to determine whether the apparent asymmetry observed here is real and, if so, what factor underlies it. One possible hypothesis is that the maximal aftereffect in each group corresponds to the location of maximal overlap in AV exposure, as seen in Figures
<xref ref-type="fig" rid="F2">2</xref>
B,C (i.e., the location having AV exposure conditions on both left and right sides).</p>
<fig id="F6" position="float">
<label>Figure 6</label>
<caption>
<p>
<bold>Graphical representation of the results</bold>
. The top row schematically shows three of the five stimulus configurations for the AV-adaptation group
<bold>(A)</bold>
and VA-adaptation group
<bold>(B)</bold>
. The Gaussian distributions in the second and third rows show the auditory likelihood, prior, and posterior distributions for the pre-adaptation and post-adaptation model fits, constructed with the parameters obtained from the data and shown in Table
<xref ref-type="table" rid="T1">1</xref>
. The dashed lines highlight the mean of the likelihood distribution at one of the spatial locations (chosen arbitrarily for illustration purpose) before and after adaptation, and the green arrow shows the direction of shift in the auditory likelihood. The bottom row shows the actual aftereffects (mean ± SEM) measured from subject responses in the auditory alone conditions.</p>
</caption>
<graphic xlink:href="fnint-05-00075-g006"></graphic>
</fig>
</sec>
<sec>
<title>Discussion</title>
<p>We investigated auditory spatial adaptation using test phases in which auditory–visual and unisensory visual trials were interleaved with unisensory auditory trials. By probing both visual and auditory percepts in conditions with varying degrees of auditory–visual discrepancy, we were able to quantify which of the underlying distributions underwent change during the adaptation process. Since most subjects had an almost flat spatial prior (∼40° SD), for the observed VAE to be explained by a change in prior, either a large change in the prior mean or a narrowing of the prior variance would have been required. We did not observe any such changes. Instead, we find that the shifts in observers’ auditory localization are best explained by a shift in the mean of the auditory likelihood function, rather than by a change in the variance of the likelihood or by a change in the position or strength of the prior bias.</p>
<p>Given that the distribution of spatial location of stimuli during test phases was uniform, it is unclear whether the relatively flat spatial prior observed in the test phases reflects
<italic>a priori</italic>
lack of a strong spatial bias or whether it is quickly learned over the course of the pre-adaptation test phase. Note that the top-up design, which interleaved post-adaptation test trials with adaptation periods, makes it unlikely that the test trials would entirely counteract the changes induced by adaptation. If adaptation had involved the acquisition of a spatial bias as depicted in Figure
<xref ref-type="fig" rid="F1">1</xref>
D, this would have entailed a change in the variance and/or mean of the prior, which was not observed in the data. Regarding the prior bias for a common cause,
<italic>p</italic>
<sub>common</sub>
, one could expect an increase in this bias after adaptation due to exposure to repeated simultaneous auditory–visual presentations, or alternatively, a decrease in this bias due to exposure to spatially discrepant stimuli. However, we did not observe any evidence for change in
<italic>p</italic>
<sub>common</sub>
after adaptation.</p>
<p>Little is known about the longevity and robustness of the VAE. Although we used a top-up design to minimize the possible erosion of the aftereffect by the random auditory–visual discrepancies of the post-adaptation test trials, it is still possible that exposure to these test trials diminished the adaptation effect, and that the actual effect sizes, both in the shift of auditory spatial localization and in the underlying auditory likelihoods, are larger than we detected here.</p>
<p>Our findings are consistent with the theoretical work of Sato et al. (
<xref ref-type="bibr" rid="B36">2007</xref>
) on the VAE, and with Grzywacz and Balboa’s (2002) framework for sensory adaptation, in which adaptation is mediated by the adjustment of parameters related to sensory representations. Our findings are also consistent with the model of Stocker and Simoncelli (
<xref ref-type="bibr" rid="B39">2006a</xref>
), which explains adaptation through a change in sensory likelihood functions. Their model accounts for unisensory repulsive aftereffects, such as motion adaptation or tilt aftereffects, by
<italic>sharpening</italic>
of the likelihood function. Our findings are also in line with the efficient coding theory of Clifford et al. (
<xref ref-type="bibr" rid="B9">2000</xref>
) in which repulsive tilt aftereffects are explained by adaptation in the sensory encoding.</p>
<p>Our results differ from those of some previous studies of adaptation that suggest a change in prior distributions. It should be noted that in many of these previous studies which have reported a pattern of adaptation consistent with change in the priors, no sensory, or sensorimotor conflict was present during adaptation. For example, Adams et al. (
<xref ref-type="bibr" rid="B1">2004</xref>
) showed that the “light-from-above” prior is modified after exposure to light from below stimuli conveyed through haptic cues. This study involved visual–haptic adapting stimuli that were congruent in terms of their underlying light-source.</p>
<p>Körding et al. (
<xref ref-type="bibr" rid="B23">2004</xref>
) showed that the prior expectation of force distributions can be adapted to arm perturbations over the course of an experiment. In their experiment, true visual feedback of finger movement was provided to the subjects at the end of the trial, without producing any conflicts between actual (proprioceptive) and perceived (visual) finger location. Miyazaki et al. (
<xref ref-type="bibr" rid="B28">2005</xref>
) showed that observers can adapt their sensory–motor coincidence timing to match the distribution of trial-by-trial target timing, consistent with updating the Bayesian prior. In this study there was no experimentally imposed conflict between the motor response and sensory feedback.</p>
<p>However, patterns of adaptation consistent with a change in priors have also been reported following exposure to sensory (or sensorimotor) conflict. For example, in the sensorimotor adaptation experiment by Körding and Wolpert (
<xref ref-type="bibr" rid="B24">2004</xref>
), conflicting visual feedback induced adaptive changes in reaching. The authors explain the shifts in motor behavior by the acquisition of a new prior distribution. It should be noted, though, that their model incorporates only visual evidence (likelihood) and does not take the proprioceptive modality into account. An alternative explanation of the results would be a shift in the mean of the proprioceptive likelihood function, as opposed to a shift in the prior distribution. In a study by Miyazaki et al. (
<xref ref-type="bibr" rid="B29">2006</xref>
) which involved temporal order judgment of two tactile stimuli, the shift in perceived simultaneity after adaptation was consistent with a change in prior distribution of ordered stimuli. In contrast, in the same study, another experiment examining temporal order judgment of a sound and a flash showed a shift in perceived simultaneity in a direction
<italic>opposite</italic>
to that predicted by a change in the prior distribution, consistent with previous reports of lag-adaptation (Fujisaki et al.,
<xref ref-type="bibr" rid="B13">2004</xref>
; Vroomen et al.,
<xref ref-type="bibr" rid="B43">2004</xref>
). The authors explain their findings in the audio–visual condition by incorporating a lag-adaptation mechanism, which is akin to a change in the underlying likelihood distribution. While the opposite patterns of adaptation found in these two experiments may be due to the unisensory vs. multisensory nature of the stimuli, we believe it more likely that the different patterns of adaptation are due to the difference in the perceived unity of the stimuli. In the unisensory tactile experiment, the two tactile stimuli were delivered to different hands. This large spatial separation, together with the temporal discrepancy, likely led to the two stimuli being perceived as stemming from independent sources. In contrast, because of the relatively poor spatial acuity of hearing, in the auditory–visual condition it is likely that the two stimuli were perceived as having a common source. Indeed, in a previous study, we found that adaptation depends strongly on the perceived unity of the inducing stimuli (Wozny and Shams,
<xref ref-type="bibr" rid="B46">2011</xref>
). In the unisensory tactile experiment, if the two stimuli were perceived to be independent of each other, then the time difference (the lag) between them would not amount to a sensory conflict. Therefore, in these studies in which exposure to sensory conflict appeared to lead to a change in priors, either the change in priors (vs. likelihoods) or the very presence of sensory conflict remains questionable.</p>
<p>The results of the current study add to the evidence that adaptation to conflicting sensory information results in changes to the likelihood functions. Taken together with the existing data, one could therefore hypothesize that conflicting sensory information attributed to a perceived common source results in a recalibration of the underlying sensory likelihood functions, whereas exposure to stimuli lacking sensory conflict results in a change in the prior distributions. Future studies should put this hypothesis to the test across varying tasks and sensory and sensorimotor conditions.</p>
</sec>
<sec>
<title>Conflict of Interest Statement</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
</body>
<back>
<ack>
<p>We thank Stefan Schaal for helpful comments on the manuscript. David R. Wozny was supported by a UCLA graduate division fellowship and an NIH Neuroimaging Training Fellowship. Ladan Shams was supported by UCLA Faculty Grants Program, Faculty Career Development award.</p>
</ack>
<fn-group>
<fn id="fn1">
<p>
<sup>1</sup>
Goodness of fit was calculated using the generalized coefficient of determination formula described by Nagelkerke (
<xref ref-type="bibr" rid="B30">1991</xref>
). For the null model we use the maximum likelihood estimator of the linear model μ = 
<italic>x</italic>
β. The generalized
<italic>R</italic>
<sup>2</sup>
is interpreted as the proportion of variance in the data that is explained by the model.</p>
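<p>As a concrete sketch, the generalized coefficient of determination can be computed from the maximized log-likelihoods of the fitted model and the null model. This is a minimal illustration of the Nagelkerke (1991) definition; the function and variable names, and the numbers, are ours.</p>

```python
import math

def generalized_r2(loglik_model, loglik_null, n):
    """Generalized coefficient of determination (Nagelkerke, 1991).

    R^2 = 1 - exp(-(2/n) * (loglik_model - loglik_null)),
    i.e., one minus the n-th root of the squared likelihood ratio
    of the null model to the fitted model.
    """
    return 1.0 - math.exp(-(2.0 / n) * (loglik_model - loglik_null))

# Hypothetical maximized log-likelihoods over n = 50 observations.
r2 = generalized_r2(loglik_model=-80.0, loglik_null=-100.0, n=50)
```

<p>For discrete data, whose likelihood is bounded above by 1, Nagelkerke additionally rescales this quantity by its maximum attainable value so that it ranges over [0, 1].</p>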
</fn>
</fn-group>
<ref-list>
<title>References</title>
<ref id="B1">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Adams</surname>
<given-names>W. J.</given-names>
</name>
<name>
<surname>Graf</surname>
<given-names>E. W.</given-names>
</name>
<name>
<surname>Ernst</surname>
<given-names>M. O.</given-names>
</name>
</person-group>
(
<year>2004</year>
).
<article-title>Experience can change the “light-from-above” prior</article-title>
.
<source>Nat. Neurosci.</source>
<volume>7</volume>
,
<fpage>1057</fpage>
<lpage>1058</lpage>
<pub-id pub-id-type="doi">10.1038/nn1312</pub-id>
<pub-id pub-id-type="pmid">15361877</pub-id>
</mixed-citation>
</ref>
<ref id="B2">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Barraza</surname>
<given-names>J. F.</given-names>
</name>
<name>
<surname>Grzywacz</surname>
<given-names>N. M.</given-names>
</name>
</person-group>
(
<year>2008</year>
).
<article-title>Speed adaptation as Kalman filtering</article-title>
.
<source>Vision Res.</source>
<volume>48</volume>
,
<fpage>2485</fpage>
<lpage>2491</lpage>
<pub-id pub-id-type="doi">10.1016/j.visres.2008.08.011</pub-id>
<pub-id pub-id-type="pmid">18782586</pub-id>
</mixed-citation>
</ref>
<ref id="B3">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bentvelzen</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Leung</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Alais</surname>
<given-names>D.</given-names>
</name>
</person-group>
(
<year>2009</year>
).
<article-title>Discriminating audiovisual speed: optimal integration of speed defaults to probability summation when component reliabilities diverge</article-title>
.
<source>Perception</source>
<volume>38</volume>
,
<fpage>966</fpage>
<lpage>987</lpage>
<pub-id pub-id-type="doi">10.1068/p6261</pub-id>
<pub-id pub-id-type="pmid">19764300</pub-id>
</mixed-citation>
</ref>
<ref id="B4">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bertelson</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Frissen</surname>
<given-names>I.</given-names>
</name>
<name>
<surname>Vroomen</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>De Gelder</surname>
<given-names>B.</given-names>
</name>
</person-group>
(
<year>2006</year>
).
<article-title>The after effects of ventriloquism: patterns of spatial generalization</article-title>
.
<source>Percept. Psychophys.</source>
<volume>68</volume>
,
<fpage>428</fpage>
<lpage>436</lpage>
<pub-id pub-id-type="doi">10.3758/BF03193687</pub-id>
<pub-id pub-id-type="pmid">16900834</pub-id>
</mixed-citation>
</ref>
<ref id="B5">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Brainard</surname>
<given-names>D. H.</given-names>
</name>
<name>
<surname>Longère</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Delahunt</surname>
<given-names>P. B.</given-names>
</name>
<name>
<surname>Freeman</surname>
<given-names>W. T.</given-names>
</name>
<name>
<surname>Kraft</surname>
<given-names>J. M.</given-names>
</name>
<name>
<surname>Xiao</surname>
<given-names>B.</given-names>
</name>
</person-group>
(
<year>2006</year>
).
<article-title>Bayesian model of human color constancy</article-title>
.
<source>J. Vis.</source>
<volume>6</volume>
,
<fpage>1267</fpage>
<lpage>1281</lpage>
<pub-id pub-id-type="doi">10.1167/6.11.10</pub-id>
<pub-id pub-id-type="pmid">17209734</pub-id>
</mixed-citation>
</ref>
<ref id="B6">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bresciani</surname>
<given-names>J. P.</given-names>
</name>
<name>
<surname>Dammeier</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Ernst</surname>
<given-names>M. O.</given-names>
</name>
</person-group>
(
<year>2006</year>
).
<article-title>Vision and touch are automatically integrated for the perception of sequences of events</article-title>
.
<source>J. Vis.</source>
<volume>6</volume>
,
<fpage>554</fpage>
<lpage>564</lpage>
<pub-id pub-id-type="doi">10.1167/6.5.2</pub-id>
<pub-id pub-id-type="pmid">16881788</pub-id>
</mixed-citation>
</ref>
<ref id="B7">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Butler</surname>
<given-names>J. S.</given-names>
</name>
<name>
<surname>Smith</surname>
<given-names>S. T.</given-names>
</name>
<name>
<surname>Campos</surname>
<given-names>J. L.</given-names>
</name>
<name>
<surname>Bülthoff</surname>
<given-names>H. H.</given-names>
</name>
</person-group>
(
<year>2010</year>
).
<article-title>Bayesian integration of visual and vestibular signals for heading</article-title>
.
<source>J. Vis.</source>
<volume>10</volume>
,
<fpage>23</fpage>
<pub-id pub-id-type="doi">10.1167/10.11.23</pub-id>
<pub-id pub-id-type="pmid">20884518</pub-id>
</mixed-citation>
</ref>
<ref id="B8">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Canon</surname>
<given-names>L. K.</given-names>
</name>
</person-group>
(
<year>1970</year>
).
<article-title>Intermodality inconsistency of input and directed attention as determinants of the nature of adaptation</article-title>
.
<source>J. Exp. Psychol.</source>
<volume>84</volume>
,
<fpage>141</fpage>
<lpage>147</lpage>
<pub-id pub-id-type="doi">10.1037/h0028925</pub-id>
<pub-id pub-id-type="pmid">5480918</pub-id>
</mixed-citation>
</ref>
<ref id="B9">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Clifford</surname>
<given-names>C. W.</given-names>
</name>
<name>
<surname>Wenderoth</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Spehar</surname>
<given-names>B.</given-names>
</name>
</person-group>
(
<year>2000</year>
).
<article-title>A functional angle on some after-effects in cortical vision</article-title>
.
<source>Proc. Biol. Sci.</source>
<fpage>1705</fpage>
<lpage>1710</lpage>
<pub-id pub-id-type="pmid">12233765</pub-id>
</mixed-citation>
</ref>
<ref id="B10">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ernst</surname>
<given-names>M. O.</given-names>
</name>
</person-group>
(
<year>2007</year>
).
<article-title>Learning to integrate arbitrary signals from vision and touch</article-title>
.
<source>J. Vis.</source>
<volume>7</volume>
,
<fpage>7.1</fpage>
<lpage>14</lpage>
<pub-id pub-id-type="doi">10.1167/7.5.7</pub-id>
<pub-id pub-id-type="pmid">18217847</pub-id>
</mixed-citation>
</ref>
<ref id="B11">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Fetsch</surname>
<given-names>C. R.</given-names>
</name>
<name>
<surname>Deangelis</surname>
<given-names>G. C.</given-names>
</name>
<name>
<surname>Angelaki</surname>
<given-names>D. E.</given-names>
</name>
</person-group>
(
<year>2010</year>
).
<article-title>Visual-vestibular cue integration for heading perception: applications of optimal cue integration theory</article-title>
.
<source>Eur. J. Neurosci.</source>
<volume>31</volume>
,
<fpage>1721</fpage>
<lpage>1729</lpage>
<pub-id pub-id-type="doi">10.1111/j.1460-9568.2010.07207.x</pub-id>
<pub-id pub-id-type="pmid">20584175</pub-id>
</mixed-citation>
</ref>
<ref id="B12">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Fetsch</surname>
<given-names>C. R.</given-names>
</name>
<name>
<surname>Turner</surname>
<given-names>A. H.</given-names>
</name>
<name>
<surname>Deangelis</surname>
<given-names>G. C.</given-names>
</name>
<name>
<surname>Angelaki</surname>
<given-names>D. E.</given-names>
</name>
</person-group>
(
<year>2009</year>
).
<article-title>Dynamic reweighting of visual and vestibular cues during self-motion perception</article-title>
.
<source>J. Neurosci.</source>
<volume>29</volume>
,
<fpage>15601</fpage>
<lpage>15612</lpage>
<pub-id pub-id-type="doi">10.1523/JNEUROSCI.2574-09.2009</pub-id>
<pub-id pub-id-type="pmid">20007484</pub-id>
</mixed-citation>
</ref>
<ref id="B13">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Fujisaki</surname>
<given-names>W.</given-names>
</name>
<name>
<surname>Shimojo</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Kashino</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Nishida</surname>
<given-names>S.</given-names>
</name>
</person-group>
(
<year>2004</year>
).
<article-title>Recalibration of audiovisual simultaneity</article-title>
.
<source>Nat. Neurosci.</source>
<volume>7</volume>
,
<fpage>773</fpage>
<lpage>778</lpage>
<pub-id pub-id-type="doi">10.1038/nn1268</pub-id>
<pub-id pub-id-type="pmid">15195098</pub-id>
</mixed-citation>
</ref>
<ref id="B14">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Grzywacz</surname>
<given-names>N. M.</given-names>
</name>
<name>
<surname>Balboa</surname>
<given-names>R. M.</given-names>
</name>
</person-group>
(
<year>2002</year>
).
<article-title>A Bayesian framework for sensory adaptation</article-title>
.
<source>Neural Comput.</source>
<volume>14</volume>
,
<fpage>543</fpage>
<lpage>559</lpage>
<pub-id pub-id-type="doi">10.1162/089976602317250898</pub-id>
<pub-id pub-id-type="pmid">11860682</pub-id>
</mixed-citation>
</ref>
<ref id="B15">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Grzywacz</surname>
<given-names>N. M.</given-names>
</name>
<name>
<surname>De Juan</surname>
<given-names>J.</given-names>
</name>
</person-group>
(
<year>2003</year>
).
<article-title>Sensory adaptation as Kalman filtering: theory and illustration with contrast adaptation</article-title>
.
<source>Network</source>
<volume>14</volume>
,
<fpage>465</fpage>
<lpage>482</lpage>
<pub-id pub-id-type="doi">10.1088/0954-898X/14/3/305</pub-id>
<pub-id pub-id-type="pmid">12938767</pub-id>
</mixed-citation>
</ref>
<ref id="B16">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hospedales</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Vijayakumar</surname>
<given-names>S.</given-names>
</name>
</person-group>
(
<year>2009</year>
).
<article-title>Multisensory oddity detection as Bayesian inference</article-title>
.
<source>PLoS ONE</source>
<volume>4</volume>
,
<fpage>e4205</fpage>
<pub-id pub-id-type="doi">10.1371/journal.pone.0004205</pub-id>
<pub-id pub-id-type="pmid">19145254</pub-id>
</mixed-citation>
</ref>
<ref id="B17">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Jürgens</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Becker</surname>
<given-names>W.</given-names>
</name>
</person-group>
(
<year>2006</year>
).
<article-title>Perception of angular displacement without landmarks: evidence for Bayesian fusion of vestibular, optokinetic, podokinesthetic, and cognitive information</article-title>
.
<source>Exp. Brain Res.</source>
<volume>174</volume>
,
<fpage>528</fpage>
<lpage>543</lpage>
<pub-id pub-id-type="doi">10.1007/s00221-006-0486-7</pub-id>
<pub-id pub-id-type="pmid">16832684</pub-id>
</mixed-citation>
</ref>
<ref id="B18">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kersten</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Mamassian</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Yuille</surname>
<given-names>A.</given-names>
</name>
</person-group>
(
<year>2004</year>
).
<article-title>Object perception as Bayesian inference</article-title>
.
<source>Annu. Rev. Psychol.</source>
<volume>55</volume>
,
<fpage>271</fpage>
<lpage>304</lpage>
<pub-id pub-id-type="doi">10.1146/annurev.psych.55.090902.142005</pub-id>
<pub-id pub-id-type="pmid">14744217</pub-id>
</mixed-citation>
</ref>
<ref id="B19">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Knill</surname>
<given-names>D. C.</given-names>
</name>
</person-group>
(
<year>2003</year>
).
<article-title>Mixture models and the probabilistic structure of depth cues</article-title>
.
<source>Vision Res.</source>
<volume>43</volume>
,
<fpage>831</fpage>
<lpage>854</lpage>
<pub-id pub-id-type="doi">10.1016/S0042-6989(03)00003-8</pub-id>
<pub-id pub-id-type="pmid">12639607</pub-id>
</mixed-citation>
</ref>
<ref id="B20">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Knill</surname>
<given-names>D. C.</given-names>
</name>
</person-group>
(
<year>2007</year>
).
<article-title>Learning Bayesian priors for depth perception</article-title>
.
<source>J. Vis.</source>
<volume>7</volume>
,
<fpage>13</fpage>
<pub-id pub-id-type="doi">10.1167/7.6.13</pub-id>
<pub-id pub-id-type="pmid">17685820</pub-id>
</mixed-citation>
</ref>
<ref id="B21">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Knill</surname>
<given-names>D. C.</given-names>
</name>
<name>
<surname>Richards</surname>
<given-names>W.</given-names>
</name>
</person-group>
(
<year>1996</year>
).
<source>Perception as Bayesian Inference</source>
.
<publisher-loc>Cambridge</publisher-loc>
:
<publisher-name>Cambridge University Press</publisher-name>
,
<fpage>516</fpage>
</mixed-citation>
</ref>
<ref id="B22">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Körding</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Beierholm</surname>
<given-names>U. R.</given-names>
</name>
<name>
<surname>Ma</surname>
<given-names>W.</given-names>
</name>
<name>
<surname>Quartz</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Tenenbaum</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Shams</surname>
<given-names>L.</given-names>
</name>
</person-group>
(
<year>2007</year>
).
<article-title>Causal inference in multisensory perception</article-title>
.
<source>PLoS ONE</source>
<volume>2</volume>
,
<fpage>e943</fpage>
<pub-id pub-id-type="doi">10.1371/journal.pone.0000943</pub-id>
<pub-id pub-id-type="pmid">17895984</pub-id>
</mixed-citation>
</ref>
<ref id="B23">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Körding</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Ku</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Wolpert</surname>
<given-names>D.</given-names>
</name>
</person-group>
(
<year>2004</year>
).
<article-title>Bayesian integration in force estimation</article-title>
.
<source>J. Neurophysiol.</source>
<volume>92</volume>
,
<fpage>3161</fpage>
<lpage>3165</lpage>
<pub-id pub-id-type="doi">10.1152/jn.00275.2004</pub-id>
<pub-id pub-id-type="pmid">15190091</pub-id>
</mixed-citation>
</ref>
<ref id="B24">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Körding</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Wolpert</surname>
<given-names>D.</given-names>
</name>
</person-group>
(
<year>2004</year>
).
<article-title>Bayesian integration in sensorimotor learning</article-title>
.
<source>Nature</source>
<volume>427</volume>
,
<fpage>244</fpage>
<lpage>247</lpage>
<pub-id pub-id-type="doi">10.1038/nature02169</pub-id>
<pub-id pub-id-type="pmid">14724638</pub-id>
</mixed-citation>
</ref>
<ref id="B25">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Langley</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Anderson</surname>
<given-names>S. J.</given-names>
</name>
</person-group>
(
<year>2007</year>
).
<article-title>Subtractive and divisive adaptation in visual motion computations</article-title>
.
<source>Vision Res.</source>
<volume>47</volume>
,
<fpage>673</fpage>
<lpage>686</lpage>
<pub-id pub-id-type="doi">10.1016/j.visres.2006.09.031</pub-id>
<pub-id pub-id-type="pmid">17257641</pub-id>
</mixed-citation>
</ref>
<ref id="B26">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Lewald</surname>
<given-names>J.</given-names>
</name>
</person-group>
(
<year>2002</year>
).
<article-title>Rapid adaptation to auditory-visual spatial disparity</article-title>
.
<source>Learn. Mem.</source>
<volume>9</volume>
,
<fpage>268</fpage>
<lpage>278</lpage>
<pub-id pub-id-type="doi">10.1101/lm.51402</pub-id>
<pub-id pub-id-type="pmid">12359836</pub-id>
</mixed-citation>
</ref>
<ref id="B27">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Macneilage</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Banks</surname>
<given-names>M. S.</given-names>
</name>
<name>
<surname>Berger</surname>
<given-names>D. R.</given-names>
</name>
<name>
<surname>Bülthoff</surname>
<given-names>H. H.</given-names>
</name>
</person-group>
(
<year>2007</year>
).
<article-title>A Bayesian model of the disambiguation of gravitoinertial force by visual cues</article-title>
.
<source>Exp. Brain Res.</source>
<volume>179</volume>
,
<fpage>263</fpage>
<lpage>290</lpage>
<pub-id pub-id-type="doi">10.1007/s00221-006-0792-0</pub-id>
<pub-id pub-id-type="pmid">17136526</pub-id>
</mixed-citation>
</ref>
<ref id="B28">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Miyazaki</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Nozaki</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Nakajima</surname>
<given-names>Y.</given-names>
</name>
</person-group>
(
<year>2005</year>
).
<article-title>Testing Bayesian models of human coincidence timing</article-title>
.
<source>J. Neurophysiol.</source>
<volume>94</volume>
,
<fpage>395</fpage>
<lpage>399</lpage>
<pub-id pub-id-type="doi">10.1152/jn.01168.2004</pub-id>
<pub-id pub-id-type="pmid">15716368</pub-id>
</mixed-citation>
</ref>
<ref id="B29">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Miyazaki</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Yamamoto</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Uchida</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Kitazawa</surname>
<given-names>S.</given-names>
</name>
</person-group>
(
<year>2006</year>
).
<article-title>Bayesian calibration of simultaneity in tactile temporal order judgment</article-title>
.
<source>Nat. Neurosci.</source>
<volume>9</volume>
,
<fpage>875</fpage>
<lpage>877</lpage>
<pub-id pub-id-type="doi">10.1038/nn1712</pub-id>
<pub-id pub-id-type="pmid">16732276</pub-id>
</mixed-citation>
</ref>
<ref id="B30">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Nagelkerke</surname>
<given-names>N.</given-names>
</name>
</person-group>
(
<year>1991</year>
).
<article-title>A note on a general definition of the coefficient of determination</article-title>
.
<source>Biometrika</source>
<volume>78</volume>
,
<fpage>691</fpage>
<lpage>692</lpage>
<pub-id pub-id-type="doi">10.1093/biomet/78.3.691</pub-id>
</mixed-citation>
</ref>
<ref id="B31">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Radeau</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Bertelson</surname>
<given-names>P.</given-names>
</name>
</person-group>
(
<year>1974</year>
).
<article-title>The after-effects of ventriloquism</article-title>
.
<source>Q. J. Exp. Psychol.</source>
<volume>26</volume>
,
<fpage>63</fpage>
<lpage>71</lpage>
<pub-id pub-id-type="doi">10.1080/14640747408400388</pub-id>
<pub-id pub-id-type="pmid">4814864</pub-id>
</mixed-citation>
</ref>
<ref id="B32">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Rao</surname>
<given-names>R. P. N.</given-names>
</name>
<name>
<surname>Olshausen</surname>
<given-names>B. A.</given-names>
</name>
<name>
<surname>Lewicki</surname>
<given-names>M. S.</given-names>
</name>
</person-group>
(
<year>2002</year>
).
<source>Probabilistic Models of the Brain: Perception and Neural Function</source>
.
<publisher-loc>Cambridge, MA</publisher-loc>
:
<publisher-name>MIT Press</publisher-name>
,
<fpage>324</fpage>
</mixed-citation>
</ref>
<ref id="B33">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Recanzone</surname>
<given-names>G.</given-names>
</name>
</person-group>
(
<year>1998</year>
).
<article-title>Rapidly induced auditory plasticity: the ventriloquism aftereffect</article-title>
.
<source>Proc. Natl. Acad. Sci. U.S.A.</source>
<volume>95</volume>
,
<fpage>869</fpage>
<lpage>875</lpage>
<pub-id pub-id-type="doi">10.1073/pnas.95.3.869</pub-id>
<pub-id pub-id-type="pmid">9448253</pub-id>
</mixed-citation>
</ref>
<ref id="B34">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Roach</surname>
<given-names>N. W.</given-names>
</name>
<name>
<surname>Heron</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Mcgraw</surname>
<given-names>P. V.</given-names>
</name>
</person-group>
(
<year>2006</year>
).
<article-title>Resolving multisensory conflict: a strategy for balancing the costs and benefits of audio-visual integration</article-title>
.
<source>Proc. Biol. Sci.</source>
<volume>273</volume>
,
<fpage>2159</fpage>
<lpage>2168</lpage>
<pub-id pub-id-type="doi">10.1098/rspb.2006.3578</pub-id>
<pub-id pub-id-type="pmid">16901835</pub-id>
</mixed-citation>
</ref>
<ref id="B35">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Rowland</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Stanford</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Stein</surname>
<given-names>B.</given-names>
</name>
</person-group>
(
<year>2007</year>
).
<article-title>A Bayesian model unifies multisensory spatial localization with the physiological properties of the superior colliculus</article-title>
.
<source>Exp. Brain Res.</source>
<volume>180</volume>
,
<fpage>153</fpage>
<lpage>161</lpage>
<pub-id pub-id-type="doi">10.1007/s00221-006-0847-2</pub-id>
<pub-id pub-id-type="pmid">17546470</pub-id>
</mixed-citation>
</ref>
<ref id="B36">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Sato</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Toyoizumi</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Aihara</surname>
<given-names>K.</given-names>
</name>
</person-group>
(
<year>2007</year>
).
<article-title>Bayesian inference explains perception of unity and ventriloquism after effect: identification of common sources of audiovisual stimuli</article-title>
.
<source>Neural Comput.</source>
<volume>19</volume>
,
<fpage>3335</fpage>
<lpage>3355</lpage>
<pub-id pub-id-type="doi">10.1162/neco.2007.19.12.3335</pub-id>
<pub-id pub-id-type="pmid">17970656</pub-id>
</mixed-citation>
</ref>
<ref id="B37">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Scarfe</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Hibbard</surname>
<given-names>P. B.</given-names>
</name>
</person-group>
(
<year>2011</year>
).
<article-title>Statistically optimal integration of biased sensory estimates</article-title>
.
<source>J. Vis.</source>
<volume>11</volume>
,
<fpage>12</fpage>
<pub-id pub-id-type="doi">10.1167/11.7.12</pub-id>
</mixed-citation>
</ref>
<ref id="B38">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Shams</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Ma</surname>
<given-names>W. J.</given-names>
</name>
<name>
<surname>Beierholm</surname>
<given-names>U.</given-names>
</name>
</person-group>
(
<year>2005</year>
).
<article-title>Sound-induced flash illusion as an optimal percept</article-title>
.
<source>Neuroreport</source>
<volume>16</volume>
,
<fpage>1923</fpage>
<lpage>1927</lpage>
<pub-id pub-id-type="doi">10.1097/01.wnr.0000187634.68504.bb</pub-id>
<pub-id pub-id-type="pmid">16272880</pub-id>
</mixed-citation>
</ref>
<ref id="B39">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Stocker</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Simoncelli</surname>
<given-names>E.</given-names>
</name>
</person-group>
(
<year>2006a</year>
).
<article-title>“Sensory adaptation within a Bayesian framework for perception,”</article-title>
in
<source>Advances in Neural Information Processing Systems</source>
, eds
<person-group person-group-type="editor">
<name>
<surname>Weiss</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Schoelkopf</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Platt</surname>
<given-names>J.</given-names>
</name>
</person-group>
(
<publisher-loc>Cambridge, MA</publisher-loc>
:
<publisher-name>MIT Press</publisher-name>
),
<volume>18</volume>
,
<fpage>1291</fpage>
<lpage>1298</lpage>
</mixed-citation>
</ref>
<ref id="B40">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Stocker</surname>
<given-names>A. A.</given-names>
</name>
<name>
<surname>Simoncelli</surname>
<given-names>E.</given-names>
</name>
</person-group>
(
<year>2006b</year>
).
<article-title>Noise characteristics and prior expectations in human visual speed perception</article-title>
.
<source>Nat. Neurosci.</source>
<volume>9</volume>
,
<fpage>578</fpage>
<lpage>585</lpage>
<pub-id pub-id-type="doi">10.1038/nn1669</pub-id>
<pub-id pub-id-type="pmid">16547513</pub-id>
</mixed-citation>
</ref>
<ref id="B41">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Van Ee</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Adams</surname>
<given-names>W. J.</given-names>
</name>
<name>
<surname>Mamassian</surname>
<given-names>P.</given-names>
</name>
</person-group>
(
<year>2003</year>
).
<article-title>Bayesian modeling of cue interaction: bistability in stereoscopic slant perception</article-title>
.
<source>J. Opt. Soc. Am. A Opt. Image Sci. Vis.</source>
<volume>20</volume>
,
<fpage>1398</fpage>
<lpage>1406</lpage>
<pub-id pub-id-type="doi">10.1364/JOSAA.20.001398</pub-id>
<pub-id pub-id-type="pmid">12868644</pub-id>
</mixed-citation>
</ref>
<ref id="B42">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Van Wanrooij</surname>
<given-names>M. M.</given-names>
</name>
<name>
<surname>Bremen</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>John Van Opstal</surname>
<given-names>A.</given-names>
</name>
</person-group>
(
<year>2010</year>
).
<article-title>Acquired prior knowledge modulates audiovisual integration</article-title>
.
<source>Eur. J. Neurosci.</source>
<volume>31</volume>
,
<fpage>1763</fpage>
<lpage>1771</lpage>
<pub-id pub-id-type="doi">10.1111/j.1460-9568.2010.07198.x</pub-id>
<pub-id pub-id-type="pmid">20584180</pub-id>
</mixed-citation>
</ref>
<ref id="B43">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Vroomen</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Keetels</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>De Gelder</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Bertelson</surname>
<given-names>P.</given-names>
</name>
</person-group>
(
<year>2004</year>
).
<article-title>Recalibration of temporal order perception by exposure to audio-visual asynchrony</article-title>
.
<source>Brain Res. Cogn. Brain Res.</source>
<volume>22</volume>
,
<fpage>32</fpage>
<lpage>35</lpage>
<pub-id pub-id-type="doi">10.1016/j.cogbrainres.2004.07.003</pub-id>
<pub-id pub-id-type="pmid">15561498</pub-id>
</mixed-citation>
</ref>
<ref id="B44">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wozny</surname>
<given-names>D. R.</given-names>
</name>
<name>
<surname>Beierholm</surname>
<given-names>U.</given-names>
</name>
<name>
<surname>Shams</surname>
<given-names>L.</given-names>
</name>
</person-group>
(
<year>2008</year>
).
<article-title>Human trimodal perception follows optimal statistical inference</article-title>
.
<source>J. Vis.</source>
<volume>8</volume>
,
<fpage>24</fpage>
<pub-id pub-id-type="doi">10.1167/8.7.24</pub-id>
<pub-id pub-id-type="pmid">18484830</pub-id>
</mixed-citation>
</ref>
<ref id="B45">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wozny</surname>
<given-names>D. R.</given-names>
</name>
<name>
<surname>Beierholm</surname>
<given-names>U. R.</given-names>
</name>
<name>
<surname>Shams</surname>
<given-names>L.</given-names>
</name>
</person-group>
(
<year>2010</year>
).
<article-title>Probability matching as a computational strategy used in perception</article-title>
.
<source>PLoS Comput. Biol.</source>
<volume>6</volume>
,
<fpage>e1000871</fpage>
<pub-id pub-id-type="doi">10.1371/journal.pcbi.1000871</pub-id>
</mixed-citation>
</ref>
<ref id="B46">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wozny</surname>
<given-names>D. R.</given-names>
</name>
<name>
<surname>Shams</surname>
<given-names>L.</given-names>
</name>
</person-group>
(
<year>2011</year>
).
<article-title>Recalibration of auditory space following milliseconds of cross-modal discrepancy</article-title>
.
<source>J. Neurosci.</source>
<volume>31</volume>
,
<fpage>4607</fpage>
<lpage>4612</lpage>
<pub-id pub-id-type="doi">10.1523/JNEUROSCI.6079-10.2011</pub-id>
<pub-id pub-id-type="pmid">21430160</pub-id>
</mixed-citation>
</ref>
</ref-list>
</back>
</pmc>
<affiliations>
<list>
<country>
<li>États-Unis</li>
</country>
</list>
<tree>
<country name="États-Unis">
<noRegion>
<name sortKey="Wozny, David R" sort="Wozny, David R" uniqKey="Wozny D" first="David R." last="Wozny">David R. Wozny</name>
</noRegion>
<name sortKey="Shams, Ladan" sort="Shams, Ladan" uniqKey="Shams L" first="Ladan" last="Shams">Ladan Shams</name>
<name sortKey="Shams, Ladan" sort="Shams, Ladan" uniqKey="Shams L" first="Ladan" last="Shams">Ladan Shams</name>
<name sortKey="Wozny, David R" sort="Wozny, David R" uniqKey="Wozny D" first="David R." last="Wozny">David R. Wozny</name>
</country>
</tree>
</affiliations>
</record>
