Exploration server on haptic devices

Warning: this site is under development!
Warning: this site is generated automatically from raw corpora.
The information has therefore not been validated.

A Comprehensive Model of Audiovisual Perception: Both Percept and Temporal Dynamics

Internal identifier: 001B76 (Ncbi/Merge); previous: 001B75; next: 001B77

A Comprehensive Model of Audiovisual Perception: Both Percept and Temporal Dynamics

Authors: Patricia Besson; Christophe Bourdin; Lionel Bringoux

Source:

RBID: PMC:3161793

Abstract

The sparse information captured by the sensory systems is used by the brain to apprehend the environment, for example, to spatially locate the source of audiovisual stimuli. This is an ill-posed inverse problem whose inherent uncertainty can be solved by jointly processing the information, as well as introducing constraints during this process, on the way this multisensory information is handled. This process and its result - the percept - depend on the contextual conditions perception takes place in. To date, perception has been investigated and modeled on the basis of either one of two of its dimensions: the percept or the temporal dynamics of the process. Here, we extend our previously proposed audiovisual perception model to predict both these dimensions to capture the phenomenon as a whole. Starting from a behavioral analysis, we use a data-driven approach to elicit a Bayesian network which infers the different percepts and dynamics of the process. Context-specific independence analyses enable us to use the model's structure to directly explore how different contexts affect the way subjects handle the same available information. Hence, we establish that, while the percepts yielded by a unisensory stimulus or by the non-fusion of multisensory stimuli may be similar, they result from different processes, as shown by their differing temporal dynamics. Moreover, our model predicts the impact of bottom-up (stimulus driven) factors as well as of top-down factors (induced by instruction manipulation) on both the perception process and the percept itself.


URL:
DOI: 10.1371/journal.pone.0023811
PubMed: 21887324
PubMed Central: 3161793

Links to previous steps (curation, corpus...)


Links to Exploration step

PMC:3161793

The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">A Comprehensive Model of Audiovisual Perception: Both Percept and Temporal Dynamics</title>
<author>
<name sortKey="Besson, Patricia" sort="Besson, Patricia" uniqKey="Besson P" first="Patricia" last="Besson">Patricia Besson</name>
<affiliation>
<nlm:aff id="aff1"></nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Bourdin, Christophe" sort="Bourdin, Christophe" uniqKey="Bourdin C" first="Christophe" last="Bourdin">Christophe Bourdin</name>
<affiliation>
<nlm:aff id="aff1"></nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Bringoux, Lionel" sort="Bringoux, Lionel" uniqKey="Bringoux L" first="Lionel" last="Bringoux">Lionel Bringoux</name>
<affiliation>
<nlm:aff id="aff1"></nlm:aff>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PMC</idno>
<idno type="pmid">21887324</idno>
<idno type="pmc">3161793</idno>
<idno type="url">http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3161793</idno>
<idno type="RBID">PMC:3161793</idno>
<idno type="doi">10.1371/journal.pone.0023811</idno>
<date when="2011">2011</date>
<idno type="wicri:Area/Pmc/Corpus">002415</idno>
<idno type="wicri:Area/Pmc/Curation">002415</idno>
<idno type="wicri:Area/Pmc/Checkpoint">001C69</idno>
<idno type="wicri:Area/Ncbi/Merge">001B76</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a" type="main">A Comprehensive Model of Audiovisual Perception: Both Percept and Temporal Dynamics</title>
<author>
<name sortKey="Besson, Patricia" sort="Besson, Patricia" uniqKey="Besson P" first="Patricia" last="Besson">Patricia Besson</name>
<affiliation>
<nlm:aff id="aff1"></nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Bourdin, Christophe" sort="Bourdin, Christophe" uniqKey="Bourdin C" first="Christophe" last="Bourdin">Christophe Bourdin</name>
<affiliation>
<nlm:aff id="aff1"></nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Bringoux, Lionel" sort="Bringoux, Lionel" uniqKey="Bringoux L" first="Lionel" last="Bringoux">Lionel Bringoux</name>
<affiliation>
<nlm:aff id="aff1"></nlm:aff>
</affiliation>
</author>
</analytic>
<series>
<title level="j">PLoS ONE</title>
<idno type="eISSN">1932-6203</idno>
<imprint>
<date when="2011">2011</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass></textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">
<p>The sparse information captured by the sensory systems is used by the brain to apprehend the environment, for example, to spatially locate the source of audiovisual stimuli. This is an ill-posed inverse problem whose inherent uncertainty can be solved by jointly processing the information, as well as introducing constraints during this process, on the way this multisensory information is handled. This process and its result - the percept - depend on the contextual conditions perception takes place in. To date, perception has been investigated and modeled on the basis of either one of two of its dimensions: the percept or the temporal dynamics of the process. Here, we extend our previously proposed audiovisual perception model to predict both these dimensions to capture the phenomenon as a whole. Starting from a behavioral analysis, we use a data-driven approach to elicit a Bayesian network which infers the different percepts and dynamics of the process. Context-specific independence analyses enable us to use the model's structure to directly explore how different contexts affect the way subjects handle the same available information. Hence, we establish that, while the percepts yielded by a unisensory stimulus or by the non-fusion of multisensory stimuli may be similar, they result from different processes, as shown by their differing temporal dynamics. Moreover, our model predicts the impact of bottom-up (stimulus driven) factors as well as of top-down factors (induced by instruction manipulation) on both the perception process and the percept itself.</p>
</div>
</front>
<back>
<div1 type="bibliography">
<listBibl>
<biblStruct>
<analytic>
<author>
<name sortKey="Clark, Jj" uniqKey="Clark J">JJ Clark</name>
</author>
<author>
<name sortKey="Yuille, Al" uniqKey="Yuille A">AL Yuille</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ernst, Mo" uniqKey="Ernst M">MO Ernst</name>
</author>
<author>
<name sortKey="Bulthoff, Hh" uniqKey="Bulthoff H">HH Bülthoff</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Sato, Y" uniqKey="Sato Y">Y Sato</name>
</author>
<author>
<name sortKey="Toyoizumi, T" uniqKey="Toyoizumi T">T Toyoizumi</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kording, Kp" uniqKey="Kording K">KP Körding</name>
</author>
<author>
<name sortKey="Beierholm, U" uniqKey="Beierholm U">U Beierholm</name>
</author>
<author>
<name sortKey="Ma, Wj" uniqKey="Ma W">WJ Ma</name>
</author>
<author>
<name sortKey="Quartz, S" uniqKey="Quartz S">S Quartz</name>
</author>
<author>
<name sortKey="Tenenbaum, Jb" uniqKey="Tenenbaum J">JB Tenenbaum</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ernst, Mo" uniqKey="Ernst M">MO Ernst</name>
</author>
<author>
<name sortKey="Banks, Ms" uniqKey="Banks M">MS Banks</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Battaglia, Pw" uniqKey="Battaglia P">PW Battaglia</name>
</author>
<author>
<name sortKey="Jacobs, Ra" uniqKey="Jacobs R">RA Jacobs</name>
</author>
<author>
<name sortKey="Aslin, Rn" uniqKey="Aslin R">RN Aslin</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Shams, L" uniqKey="Shams L">L Shams</name>
</author>
<author>
<name sortKey="Ma, Wj" uniqKey="Ma W">WJ Ma</name>
</author>
<author>
<name sortKey="Beierholm, U" uniqKey="Beierholm U">U Beierholm</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bernstein, Hi" uniqKey="Bernstein H">HI Bernstein</name>
</author>
<author>
<name sortKey="Clark, Hm" uniqKey="Clark H">HM Clark</name>
</author>
<author>
<name sortKey="Edelstein, Ab" uniqKey="Edelstein A">AB Edelstein</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bernstein, Hi" uniqKey="Bernstein H">HI Bernstein</name>
</author>
<author>
<name sortKey="Clark, Hm" uniqKey="Clark H">HM Clark</name>
</author>
<author>
<name sortKey="Edelstein, Ab" uniqKey="Edelstein A">AB Edelstein</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hecht, D" uniqKey="Hecht D">D Hecht</name>
</author>
<author>
<name sortKey="Reiner, M" uniqKey="Reiner M">M Reiner</name>
</author>
<author>
<name sortKey="Karni, A" uniqKey="Karni A">A Karni</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Besson, P" uniqKey="Besson P">P Besson</name>
</author>
<author>
<name sortKey="Richiardi, J" uniqKey="Richiardi J">J Richiardi</name>
</author>
<author>
<name sortKey="Bourdin, C" uniqKey="Bourdin C">C Bourdin</name>
</author>
<author>
<name sortKey="Bringoux, L" uniqKey="Bringoux L">L Bringoux</name>
</author>
<author>
<name sortKey="Mestre, Dr" uniqKey="Mestre D">DR Mestre</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Besson, P" uniqKey="Besson P">P Besson</name>
</author>
<author>
<name sortKey="Richiardi, J" uniqKey="Richiardi J">J Richiardi</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Knill, Dc" uniqKey="Knill D">DC Knill</name>
</author>
<author>
<name sortKey="Richards, W" uniqKey="Richards W">W Richards</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Roach, Nw" uniqKey="Roach N">NW Roach</name>
</author>
<author>
<name sortKey="Heron, J" uniqKey="Heron J">J Heron</name>
</author>
<author>
<name sortKey="Mcgraw, Pv" uniqKey="Mcgraw P">PV McGraw</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Wozny, Dr" uniqKey="Wozny D">DR Wozny</name>
</author>
<author>
<name sortKey="Beierholm, Ur" uniqKey="Beierholm U">UR Beierholm</name>
</author>
<author>
<name sortKey="Shams, L" uniqKey="Shams L">L Shams</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Molholm, S" uniqKey="Molholm S">S Molholm</name>
</author>
<author>
<name sortKey="Ritter, W" uniqKey="Ritter W">W Ritter</name>
</author>
<author>
<name sortKey="Murray, Mm" uniqKey="Murray M">MM Murray</name>
</author>
<author>
<name sortKey="Javitt, Dc" uniqKey="Javitt D">DC Javitt</name>
</author>
<author>
<name sortKey="Schroeder, Ce" uniqKey="Schroeder C">CE Schroeder</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Diederich, A" uniqKey="Diederich A">A Diederich</name>
</author>
<author>
<name sortKey="Colonius, H" uniqKey="Colonius H">H Colonius</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Jepma, M" uniqKey="Jepma M">M Jepma</name>
</author>
<author>
<name sortKey="Wagenmakers, Ej" uniqKey="Wagenmakers E">EJ Wagenmakers</name>
</author>
<author>
<name sortKey="Band, Gph" uniqKey="Band G">GPH Band</name>
</author>
<author>
<name sortKey="Nieuwenhuis, S" uniqKey="Nieuwenhuis S">S Nieuwenhuis</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Rowland, Ba" uniqKey="Rowland B">BA Rowland</name>
</author>
<author>
<name sortKey="Quessy, S" uniqKey="Quessy S">S Quessy</name>
</author>
<author>
<name sortKey="Stanford, Tr" uniqKey="Stanford T">TR Stanford</name>
</author>
<author>
<name sortKey="Stein, Be" uniqKey="Stein B">BE Stein</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Nickerson, Rs" uniqKey="Nickerson R">RS Nickerson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Braun, Da" uniqKey="Braun D">DA Braun</name>
</author>
<author>
<name sortKey="Mehring, C" uniqKey="Mehring C">C Mehring</name>
</author>
<author>
<name sortKey="Wolpert, Dm" uniqKey="Wolpert D">DM Wolpert</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Warren, Dh" uniqKey="Warren D">DH Warren</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Sarlegna, F" uniqKey="Sarlegna F">F Sarlegna</name>
</author>
<author>
<name sortKey="Malfait, N" uniqKey="Malfait N">N Malfait</name>
</author>
<author>
<name sortKey="Bringoux, L" uniqKey="Bringoux L">L Bringoux</name>
</author>
<author>
<name sortKey="Bourdin, C" uniqKey="Bourdin C">C Bourdin</name>
</author>
<author>
<name sortKey="Vercher, J" uniqKey="Vercher J">J Vercher</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Neapolitan, Re" uniqKey="Neapolitan R">RE Neapolitan</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Pearl, J" uniqKey="Pearl J">J Pearl</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Verma, T" uniqKey="Verma T">T Verma</name>
</author>
<author>
<name sortKey="Pearl, J" uniqKey="Pearl J">J Pearl</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Boutilier, C" uniqKey="Boutilier C">C Boutilier</name>
</author>
<author>
<name sortKey="Friedman, N" uniqKey="Friedman N">N Friedman</name>
</author>
<author>
<name sortKey="Goldszmidt, M" uniqKey="Goldszmidt M">M Goldszmidt</name>
</author>
<author>
<name sortKey="Koller, D" uniqKey="Koller D">D Koller</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Geiger, D" uniqKey="Geiger D">D Geiger</name>
</author>
<author>
<name sortKey="Heckerman, D" uniqKey="Heckerman D">D Heckerman</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Geiger, D" uniqKey="Geiger D">D Geiger</name>
</author>
<author>
<name sortKey="Heckerman, D" uniqKey="Heckerman D">D Heckerman</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bilmes, J" uniqKey="Bilmes J">J Bilmes</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Cano, A" uniqKey="Cano A">A Cano</name>
</author>
<author>
<name sortKey="Castellano, J" uniqKey="Castellano J">J Castellano</name>
</author>
<author>
<name sortKey="Masegosa, A" uniqKey="Masegosa A">A Masegosa</name>
</author>
<author>
<name sortKey="Moral, S" uniqKey="Moral S">S Moral</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Zhang, Nl" uniqKey="Zhang N">NL Zhang</name>
</author>
<author>
<name sortKey="Poole, D" uniqKey="Poole D">D Poole</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Theodoridis, S" uniqKey="Theodoridis S">S Theodoridis</name>
</author>
<author>
<name sortKey="Koutroumbas, K" uniqKey="Koutroumbas K">K Koutroumbas</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Murphy, Kp" uniqKey="Murphy K">KP Murphy</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hospedales, T" uniqKey="Hospedales T">T Hospedales</name>
</author>
<author>
<name sortKey="Vijayakumar, S" uniqKey="Vijayakumar S">S Vijayakumar</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Talsma, D" uniqKey="Talsma D">D Talsma</name>
</author>
<author>
<name sortKey="Senkowski, D" uniqKey="Senkowski D">D Senkowski</name>
</author>
<author>
<name sortKey="Soto Faraco, S" uniqKey="Soto Faraco S">S Soto-Faraco</name>
</author>
<author>
<name sortKey="Woldorff, Mg" uniqKey="Woldorff M">MG Woldorff</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Koelewijn, T" uniqKey="Koelewijn T">T Koelewijn</name>
</author>
<author>
<name sortKey="Bronkhorst, A" uniqKey="Bronkhorst A">A Bronkhorst</name>
</author>
<author>
<name sortKey="Theeuwes, J" uniqKey="Theeuwes J">J Theeuwes</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hospedales, Tm" uniqKey="Hospedales T">TM Hospedales</name>
</author>
<author>
<name sortKey="Vijayakumar, S" uniqKey="Vijayakumar S">S Vijayakumar</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Shams, L" uniqKey="Shams L">L Shams</name>
</author>
<author>
<name sortKey="Beierholm, U" uniqKey="Beierholm U">U Beierholm</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Spence, C" uniqKey="Spence C">C Spence</name>
</author>
</analytic>
</biblStruct>
</listBibl>
</div1>
</back>
</TEI>
<pmc article-type="research-article">
<pmc-dir>properties open_access</pmc-dir>
<front>
<journal-meta>
<journal-id journal-id-type="nlm-ta">PLoS One</journal-id>
<journal-id journal-id-type="publisher-id">plos</journal-id>
<journal-id journal-id-type="pmc">plosone</journal-id>
<journal-title-group>
<journal-title>PLoS ONE</journal-title>
</journal-title-group>
<issn pub-type="epub">1932-6203</issn>
<publisher>
<publisher-name>Public Library of Science</publisher-name>
<publisher-loc>San Francisco, USA</publisher-loc>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="pmid">21887324</article-id>
<article-id pub-id-type="pmc">3161793</article-id>
<article-id pub-id-type="publisher-id">PONE-D-11-04803</article-id>
<article-id pub-id-type="doi">10.1371/journal.pone.0023811</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Research Article</subject>
</subj-group>
<subj-group subj-group-type="Discipline-v2">
<subject>Biology</subject>
<subj-group>
<subject>Neuroscience</subject>
<subj-group>
<subject>Behavioral Neuroscience</subject>
<subject>Sensory Perception</subject>
</subj-group>
</subj-group>
</subj-group>
<subj-group subj-group-type="Discipline-v2">
<subject>Computer Science</subject>
<subj-group>
<subject>Computing Methods</subject>
<subj-group>
<subject>Computer Inferencing</subject>
</subj-group>
</subj-group>
</subj-group>
<subj-group subj-group-type="Discipline-v2">
<subject>Engineering</subject>
<subj-group>
<subject>Signal Processing</subject>
<subj-group>
<subject>Statistical Signal Processing</subject>
</subj-group>
</subj-group>
</subj-group>
<subj-group subj-group-type="Discipline-v2">
<subject>Mathematics</subject>
<subj-group>
<subject>Probability Theory</subject>
<subj-group>
<subject>Bayes Theorem</subject>
<subject>Probability Distribution</subject>
</subj-group>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>A Comprehensive Model of Audiovisual Perception: Both Percept and Temporal Dynamics</article-title>
<alt-title alt-title-type="running-head">Temporal Dynamics in Perception Modeling</alt-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name>
<surname>Besson</surname>
<given-names>Patricia</given-names>
</name>
<xref ref-type="aff" rid="aff1"></xref>
<xref ref-type="corresp" rid="cor1">
<sup>*</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Bourdin</surname>
<given-names>Christophe</given-names>
</name>
<xref ref-type="aff" rid="aff1"></xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Bringoux</surname>
<given-names>Lionel</given-names>
</name>
<xref ref-type="aff" rid="aff1"></xref>
</contrib>
</contrib-group>
<aff id="aff1">
<addr-line>Institute of Movement Sciences, CNRS - Université de la Méditerranée, Marseille, France</addr-line>
</aff>
<contrib-group>
<contrib contrib-type="editor">
<name>
<surname>Lauwereyns</surname>
<given-names>Jan</given-names>
</name>
<role>Editor</role>
<xref ref-type="aff" rid="edit1"></xref>
</contrib>
</contrib-group>
<aff id="edit1">Kyushu University, Japan</aff>
<author-notes>
<corresp id="cor1">* E-mail:
<email>patricia.besson@univmed.fr</email>
</corresp>
<fn fn-type="con">
<p>Conceived and designed the experiments: PB CB LB. Performed the experiments: PB. Analyzed the data: PB CB LB. Contributed reagents/materials/analysis tools: PB. Wrote the paper: PB CB LB.</p>
</fn>
</author-notes>
<pub-date pub-type="collection">
<year>2011</year>
</pub-date>
<pub-date pub-type="epub">
<day>22</day>
<month>8</month>
<year>2011</year>
</pub-date>
<volume>6</volume>
<issue>8</issue>
<elocation-id>e23811</elocation-id>
<history>
<date date-type="received">
<day>15</day>
<month>3</month>
<year>2011</year>
</date>
<date date-type="accepted">
<day>26</day>
<month>7</month>
<year>2011</year>
</date>
</history>
<permissions>
<copyright-statement>Besson et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.</copyright-statement>
<copyright-year>2011</copyright-year>
</permissions>
<abstract>
<p>The sparse information captured by the sensory systems is used by the brain to apprehend the environment, for example, to spatially locate the source of audiovisual stimuli. This is an ill-posed inverse problem whose inherent uncertainty can be solved by jointly processing the information, as well as introducing constraints during this process, on the way this multisensory information is handled. This process and its result - the percept - depend on the contextual conditions perception takes place in. To date, perception has been investigated and modeled on the basis of either one of two of its dimensions: the percept or the temporal dynamics of the process. Here, we extend our previously proposed audiovisual perception model to predict both these dimensions to capture the phenomenon as a whole. Starting from a behavioral analysis, we use a data-driven approach to elicit a Bayesian network which infers the different percepts and dynamics of the process. Context-specific independence analyses enable us to use the model's structure to directly explore how different contexts affect the way subjects handle the same available information. Hence, we establish that, while the percepts yielded by a unisensory stimulus or by the non-fusion of multisensory stimuli may be similar, they result from different processes, as shown by their differing temporal dynamics. Moreover, our model predicts the impact of bottom-up (stimulus driven) factors as well as of top-down factors (induced by instruction manipulation) on both the perception process and the percept itself.</p>
</abstract>
<counts>
<page-count count="11"></page-count>
</counts>
</article-meta>
</front>
<body>
<sec sec-type="" id="s1">
<title>Introduction</title>
<p>Human beings need to efficiently collect information from their environment in order to make decisions about which action to perform next and to evaluate their actions' impact on this environment. They access this information through the perception process. This process can be understood as an inverse problem, where the cause (the physical source) must be identified from the observed stimuli. This problem is ill-posed since only partial and noisy information is conveyed by the senses
<xref ref-type="bibr" rid="pone.0023811-Clark1">[1]</xref>
,
<xref ref-type="bibr" rid="pone.0023811-Ernst1">[2]</xref>
. To arrive at a stable solution (a percept), some constraints based on high-level knowledge are introduced and modulate the way the information is handled. Joint processing of the information collected by the different senses also constrains the perception problem, as it can help resolve some ambiguities. Hence, perception can be seen as a system performing complex processing of the sensory information, working from the received stimuli (system inputs) to the percept itself (system output).</p>
<p>Several studies have addressed the question of understanding and modeling multisensory perception. Some focused on modeling how different input conditions (different spatio-temporal properties of the stimuli, or multisensory versus unisensory presentation of the information) yield different spatial
<xref ref-type="bibr" rid="pone.0023811-Sato1">[3]</xref>
<xref ref-type="bibr" rid="pone.0023811-Battaglia1">[6]</xref>
or temporal
<xref ref-type="bibr" rid="pone.0023811-Shams1">[7]</xref>
percepts. Others investigated the impact of these different input conditions on the perception process itself from a temporal perspective, through the analysis of reaction times in detection tasks
<xref ref-type="bibr" rid="pone.0023811-Bernstein1">[8]</xref>
,
<xref ref-type="bibr" rid="pone.0023811-Bernstein2">[9]</xref>
or in localization tasks
<xref ref-type="bibr" rid="pone.0023811-Hecht1">[10]</xref>
. The former studies aim at an understanding of how the outputs of the perception system are impacted by different contexts, whereas the latter aim at investigating the perception process itself - in particular, its dynamics. Though the results of these separate analyses suggest that the types of sensory stimulus or the mode of presentation impact both the perception process and its final output, no single model accounts for these two elements, and thus for the whole multisensory perception process.</p>
<p>In this paper, we propose a generative model of the perception process involved in a spatial localization task, in varying contexts, i.e., for different types of sensory stimulus (acoustic or visual) and for different modes of presentation (unisensory or multisensory). Our objective is not only to investigate and model the impact of these different contexts on the percepts (i.e. the outputs of the process), as in our previous work
<xref ref-type="bibr" rid="pone.0023811-Besson1">[11]</xref>
,
<xref ref-type="bibr" rid="pone.0023811-Besson2">[12]</xref>
, but to extend this to a comprehensive model accounting for the process itself. To this end, our new model embeds a temporal mark (the decision time) that characterizes the dynamics of the process. This comprehensive model therefore constitutes the added value of the present paper with respect to both the state of the art and our previous work.</p>
<p>As far as the spatial percept - or output - is concerned, cross-modal biases occur when there is multisensory information. Most of the existing models resort to a Bayesian formalism to infer the output of the perception system
<xref ref-type="bibr" rid="pone.0023811-Ernst1">[2]</xref>
,
<xref ref-type="bibr" rid="pone.0023811-Knill1">[13]</xref>
. Indeed, Bayesian inference affords a principled and flexible statistical approach to inverse problems. It is particularly appropriate for modeling the perception process - which is inherently uncertain - since the constraints can be embedded straightforwardly in the form of prior probability distributions. Thus, the prior - on the way the information is handled - is assumed to be uniform in the classical maximum likelihood estimation (MLE) model
<xref ref-type="bibr" rid="pone.0023811-Ernst2">[5]</xref>
,
<xref ref-type="bibr" rid="pone.0023811-Battaglia1">[6]</xref>
, which explains the integration of multisensory information as a means for the brain to increase the reliability of the sensory estimates
<xref ref-type="bibr" rid="pone.0023811-Ernst2">[5]</xref>
. Indeed, as mentioned, multiple sources of information may help constrain the inverse problem by alleviating some ambiguities
<xref ref-type="bibr" rid="pone.0023811-Clark1">[1]</xref>
. However, for stimuli showing specific physical properties, the multisensory biases may be very weak, or the information even segregated
<xref ref-type="bibr" rid="pone.0023811-Ernst1">[2]</xref>
,
<xref ref-type="bibr" rid="pone.0023811-Krding1">[4]</xref>
,
<xref ref-type="bibr" rid="pone.0023811-Besson1">[11]</xref>
,
<xref ref-type="bibr" rid="pone.0023811-Roach1">[14]</xref>
. Therefore, generalizations of the MLE model have recently been proposed in which non-uniform priors are used, so that the two possible ways of processing multisensory information (integration or segregation) are both taken into account by the model
<xref ref-type="bibr" rid="pone.0023811-Sato1">[3]</xref>
,
<xref ref-type="bibr" rid="pone.0023811-Krding1">[4]</xref>
,
<xref ref-type="bibr" rid="pone.0023811-Shams1">[7]</xref>
,
<xref ref-type="bibr" rid="pone.0023811-Besson1">[11]</xref>
,
<xref ref-type="bibr" rid="pone.0023811-Roach1">[14]</xref>
,
<xref ref-type="bibr" rid="pone.0023811-Wozny1">[15]</xref>
. As specifically shown in our earlier work
<xref ref-type="bibr" rid="pone.0023811-Besson1">[11]</xref>
, the subjects exploited the available audiovisual information in different ways, depending on the type of sensory stimulus they were asked to locate (acoustic or visual). They integrated the audiovisual information when asked to locate the acoustic stimulus, whereas locating the visual stimulus was conditionally independent of the acoustic information. This confirms that, while multisensory information constrains the inverse problem, some higher-level constraints also play a part in the perception process. The Bayesian network (BN) we built earlier modeled the relationship structures connected with these two modes of multisensory information processing, as well as their dependence on the type of sensory stimulus, and ultimately inferred a spatial percept
<xref ref-type="bibr" rid="pone.0023811-Besson1">[11]</xref>
,
<xref ref-type="bibr" rid="pone.0023811-Besson2">[12]</xref>
.</p>
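<p>For readers less familiar with the classical MLE scheme discussed above, the following minimal sketch (ours, not the authors' code; the function name, variable names and example values are purely illustrative) shows the reliability-weighted fusion rule that a uniform prior implies.</p>
<preformat>
# Minimal sketch of the uniform-prior MLE cue-combination rule referred to above:
# the fused estimate is a reliability-weighted average of the unisensory estimates,
# and its variance is lower than either of them.
def mle_fusion(x_a, var_a, x_v, var_v):
    """Fuse an acoustic estimate x_a (variance var_a) with a visual
    estimate x_v (variance var_v)."""
    w_a = (1.0 / var_a) / (1.0 / var_a + 1.0 / var_v)   # acoustic weight
    w_v = 1.0 - w_a                                      # visual weight
    fused = w_a * x_a + w_v * x_v                        # fused percept
    fused_var = 1.0 / (1.0 / var_a + 1.0 / var_v)        # reduced variance
    return fused, fused_var

# Hypothetical example using the unisensory standard deviations reported later in
# the behavioral analysis (acoustic 7.5 deg, visual 2.8 deg): the fused percept is
# dominated by the more reliable visual estimate.
print(mle_fusion(x_a=5.0, var_a=7.5**2, x_v=0.0, var_v=2.8**2))
</preformat>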
<p>The dynamics of the perception process have been widely explored, especially through analysis of reaction time. It has been established that multisensory information speeds up reaction times (multisensory enhancement) in both detection and localization tasks
<xref ref-type="bibr" rid="pone.0023811-Hecht1">[10]</xref>
,
<xref ref-type="bibr" rid="pone.0023811-Molholm1">[16]</xref>
,
<xref ref-type="bibr" rid="pone.0023811-Diederich1">[17]</xref>
. Brain-level investigations using electroencephalography in humans
<xref ref-type="bibr" rid="pone.0023811-Hecht1">[10]</xref>
,
<xref ref-type="bibr" rid="pone.0023811-Molholm1">[16]</xref>
,
<xref ref-type="bibr" rid="pone.0023811-Jepma1">[18]</xref>
or animals
<xref ref-type="bibr" rid="pone.0023811-Rowland1">[19]</xref>
support the view that early stages of the perception process are involved, while late response stages are not significantly affected
<xref ref-type="bibr" rid="pone.0023811-Hecht1">[10]</xref>
. The reaction time to a primary stimulus can be shortened if an accessory stimulus - possibly spatially irrelevant - is presented at approximately the same time (intersensory facilitation of reaction time
<xref ref-type="bibr" rid="pone.0023811-Nickerson1">[20]</xref>
). To the best of our knowledge, no behavioral model has been proposed to support the study of the dynamics of the perception process.</p>
<p>The generative model of the audiovisual perception process we present here yields both the percept from a spatial localization task and a temporal feature of the process dynamics. Our comprehensive model straightforwardly supports the statistical data analysis we first perform. In keeping with our earlier model
<xref ref-type="bibr" rid="pone.0023811-Besson1">[11]</xref>
,
<xref ref-type="bibr" rid="pone.0023811-Besson2">[12]</xref>
, we employ Bayesian networks and focus on making the structure of the variables' statistical relationships emerge from the data throughout the model elicitation process. To this end, we use the information-theoretic framework proposed in
<xref ref-type="bibr" rid="pone.0023811-Besson1">[11]</xref>
. The structure of the relationships stresses the possible invariants attached to the perception process
<xref ref-type="bibr" rid="pone.0023811-Braun1">[21]</xref>
. As such, it conveys more interesting information about the causal links between the subject's percept and the environment than the quantitative strengths of these links do. The model we propose is more general than the MLE model and is relevant to different contexts. First, the type of sensory stimulus to be located is either acoustic or visual. Secondly, both unisensory and multisensory information processes are studied, in the latter case producing cross-modal biases of varying strengths.</p>
<p>The paper starts with a brief reminder of our experimental protocol in
<xref ref-type="bibr" rid="pone.0023811-Besson1">[11]</xref>
combining audiovisual perception with a spatial localization task. Both the subjects' spatial percepts and their decision times are then investigated. Next, the relationships among the variables are systematically analyzed and the model is elicited step by step. Finally, the results of the behavioral and Bayesian network analyses are discussed.</p>
</sec>
<sec sec-type="" id="s2">
<title>Analysis</title>
<sec id="s2a">
<title>Behavioral analysis</title>
<sec id="s2a1">
<title>Experimental protocol</title>
<p>The procedure is briefly outlined here, the interested reader being referred to
<xref ref-type="bibr" rid="pone.0023811-Besson1">[11]</xref>
for a more detailed description.</p>
<p>Ten subjects, seven males and three females (mean age
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e001.jpg" mimetype="image"></inline-graphic>
</inline-formula>
) participated in the experiment. They were all right-handed with normal hearing and normal or corrected-to-normal vision. Informed written consent was obtained from all participants. Since only non-invasive behavioral measurements were carried out, the study was approved by the Institute of Movement Science Laboratory Review Board. The experiment was conducted in accordance with the Declaration of Helsinki.</p>
<p>The subjects were seated in complete darkness, in front of a curved screen. This screen bore nine red LEDs, equally spaced and aligned in the azimuthal eye plane, with a mobile buzzer above it. Two sessions, acoustic and visual, were performed in alternate order by two groups, each composed of half the subjects. In the acoustic perception task, a 35-ms-long acoustic stimulus (
<italic>primary stimulus</italic>
) was emitted at each trial, sometimes together with a visual stimulus (
<italic>secondary stimulus</italic>
), sometimes alone. The subjects were asked to report where they heard the sound. In the visual perception task, the primary and secondary stimuli are the visual and acoustic stimuli respectively. The subjects were asked to report where they saw the flash. The primary stimulus occurs randomly at
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e002.jpg" mimetype="image"></inline-graphic>
</inline-formula>
10 deg,
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e003.jpg" mimetype="image"></inline-graphic>
</inline-formula>
5 deg or 0 deg, and the secondary stimulus, when used, at 0 deg (coincident stimuli),
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e004.jpg" mimetype="image"></inline-graphic>
</inline-formula>
5 deg or
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e005.jpg" mimetype="image"></inline-graphic>
</inline-formula>
10 deg (non-coincident stimuli) from the primary stimulus position. Hence, the possible positions for the secondary stimulus were
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e006.jpg" mimetype="image"></inline-graphic>
</inline-formula>
.</p>
<p>To report the perceived location of the primary stimulus, the subjects used a rotating pointer linked to a potentiometer. They held the tip of the pointer and moved it from its right stop position (the neutral position, located at 40 deg) to the perceived position. They remained in this position for about one second before returning to the neutral position. They were free to move the pointer at whatever speed they wished.</p>
<p>The precise instructions given to the subjects were to locate the sound in the acoustic perception task, and the light in the visual perception task. They were informed that the acoustic stimulus might come with a visual stimulus in the acoustic perception task, and vice-versa in the visual perception task. Nevertheless, the instructions clearly asked them to focus on the primary modality.</p>
<p>For each task (acoustic or visual), the subjects were exposed to 450 stimuli: 75 unimodal stimuli (15 stimulus occurrences per position) and 375 bimodal stimuli. The latter include 75 spatially coincident stimuli (15 occurrences at each of the 5 possible primary stimulus positions) and 300 non-coincident stimuli (60 per primary stimulus position, 15 per secondary position). The whole data set thus comprises 4500 output values corresponding to the 10 subjects' responses to each input stimulus.</p>
</sec>
<sec id="s2a2">
<title>Output of the perception process</title>
<p>The perception process outputs are the subjects' spatial localizations of the primary stimuli. These were presented in
<xref ref-type="bibr" rid="pone.0023811-Besson1">[11]</xref>
, and we only briefly recall some of the main results here. Since our objective is to study and model multisensory perception, variations in the subjects' spatial responses that are unrelated to the percept itself must be removed as far as possible. A bias is observed in the subjects' answers; for a given subject it does not differ significantly between the acoustic and visual tasks (for unisensory inputs), whereas it does differ between subjects. These inter-subject differences are therefore attributed to the sensorimotor component rather than to multisensory perception itself. They are smoothed out by removing the mean of each subject's responses to unisensory stimuli, as was done in
<xref ref-type="bibr" rid="pone.0023811-Besson1">[11]</xref>
. As a result, the subjects' normalized spatial localizations - adopted hereafter - can be assumed to be approximate observations of the subjects' spatial percepts.</p>
<p>The means and standard deviations of the system outputs (values indicated by the subjects) for primary stimuli occurring in the subjects' median plane (0 deg) are shown in
<xref ref-type="fig" rid="pone-0023811-g001">Fig. 1</xref>
. Confirming the visual dominance reported for spatial localization tasks
<xref ref-type="bibr" rid="pone.0023811-Warren1">[22]</xref>
, the subjects were more accurate and less variable (i.e. more precise) in the visual than in the acoustic perception task (averaged standard deviations of 2.8 deg and 7.5 deg, respectively). Adding a spatially coincident secondary stimulus improved the precision of localization in the acoustic perception task (standard deviation of 5.8 deg), whereas it slightly decreased this precision in the visual perception task (standard deviation of 3.2 deg). Generally speaking, the subjects' localizations of the primary stimuli were strongly impacted by the secondary stimuli in the acoustic perception task, contrary to what happened in the visual perception task (for non-coincident stimuli, the standard deviations were 9.5 deg in the acoustic localization task, against 3.0 deg in the visual localization task).</p>
<fig id="pone-0023811-g001" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0023811.g001</object-id>
<label>Figure 1</label>
<caption>
<title>Means and standard deviations of the values indicated by the subjects when locating the median (0 deg) acoustic and visual stimuli in the unisensory, coincident and non-coincident cases.</title>
<p>The values of the possible secondary stimuli are given as distances from the primary stimulus positions.</p>
</caption>
<graphic xlink:href="pone.0023811.g001"></graphic>
</fig>
</sec>
<sec id="s2a3">
<title>Dynamics of the perception process</title>
<p>We now extend the analysis performed on our data set to take into account two temporal features, movement time and decision time, both potentially related to the dynamics of the perception process. Movement onset is defined as the time when the pointer velocity first exceeds 1.5 deg/s; conversely, movement end is the time when it falls back below 1.5 deg/s. This cutoff was chosen after careful data inspection and is comparable to values found in the literature (e.g.,
<xref ref-type="bibr" rid="pone.0023811-Sarlegna1">[23]</xref>
after tangential velocity conversion). Decision time, which we distinguish from reaction time since there was no time constraint in our experiment, separates the presentation of the stimulus from movement onset. A statistical analysis of these features is now performed in order to establish whether one of them can be discarded.</p>
<p>A 2 tasks (visual vs acoustic)×3 modes of presentation (unisensory vs bisensory non-coincident vs bisensory coincident) Analysis of Variance (ANOVA) was conducted on the mean movement times recorded for target localization. It revealed neither significant main effects (Task: p = 0.934; Mode of presentation: p = 0.119) nor an interaction between the two factors (p = 0.443). In other words, neither the nature of the stimuli nor the uni- vs multisensory mode of presentation has any significant impact on movement time.</p>
<p>Since movement time is heavily dependent on motor characteristics, it is not surprising that it does not explicitly convey the dynamics of the perception process. We therefore normalized the movement time by the distance to be traveled in order to minimize this bias. A 2 tasks (visual vs acoustic)×3 modes of presentation (unisensory vs bisensory non-coincident vs bisensory coincident) ANOVA was performed on the mean movement time divided by the distance to be traveled. As with movement time, this variable revealed neither significant main effects (Task: p = 0.946; Mode of presentation: p = 0.176) nor an interaction between the two factors (p = 0.520).</p>
<p>Finally, a 2 tasks (visual vs acoustic)×3 modes of presentation (unisensory vs bisensory non-coincident vs bisensory coincident) ANOVA was conducted on the mean decision time. Contrary to the two previous temporal indicators, it revealed a significant main effect of the task (F(1,9) = 8.983, p = 0.015) and of the mode of presentation (F(2,18) = 26.571, p = 0.000). As illustrated in
<xref ref-type="fig" rid="pone-0023811-g002">Fig. 2</xref>
, while visual stimuli are located more rapidly than acoustic stimuli, the mean decision time appears significantly shorter in bisensory than in unisensory presentations (Newman-Keuls tests: unisensory vs bisensory coincident or non-coincident p
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e007.jpg" mimetype="image"></inline-graphic>
</inline-formula>
; no difference between bisensory coincident and bisensory non-coincident).</p>
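<p>The repeated-measures ANOVAs reported in this section could be reproduced along the following lines; the file name, column names and data layout (one mean value per subject, task and mode of presentation) are assumptions made for the sake of the example, not the authors' actual pipeline.</p>
<preformat>
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Hypothetical layout: one row per subject x task x mode of presentation,
# holding that cell's mean decision time in seconds.
df = pd.read_csv("decision_times.csv")   # columns: subject, task, mode, dt

# 2 (task) x 3 (mode of presentation) within-subject ANOVA on decision time.
result = AnovaRM(data=df, depvar="dt", subject="subject",
                 within=["task", "mode"]).fit()
print(result)   # F and p values for task, mode and their interaction
</preformat>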
<fig id="pone-0023811-g002" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0023811.g002</object-id>
<label>Figure 2</label>
<caption>
<title>Decision time as a function of context (type of sensory stimulus and mode of presentation).</title>
</caption>
<graphic xlink:href="pone.0023811.g002"></graphic>
</fig>
<p>To further explore the impact of the stimulus location, a 2 tasks (visual vs acoustic)×5 primary spatial locations ANOVA was conducted. In addition to the main task effect observed above (F(1,9) = 7.654, p = 0.022), the statistical analysis yielded a significant main effect of the primary stimulus location (F(4,36) = 5.502, p = 0.001). However, data inspection and post hoc tests showed that this effect was due to only one single target, whose eccentricity was maximal (
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e008.jpg" mimetype="image"></inline-graphic>
</inline-formula>
deg). Apart from this position, no obvious influence of the primary stimulus location on decision time was found (Newman-Keuls tests showed no significant difference for the other locations). The interaction between the two factors was not significant.</p>
<p>The influence of the secondary stimulus location was also investigated by a 2 tasks (visual vs. acoustic)×9 secondary spatial locations ANOVA. As previously, it revealed a main task effect (F(1,9) = 8.316, p = 0.018), but also a main effect of the secondary target location (F(8,72) = 5.966, p = 0.000) and a significant interaction between the two factors (F(8,72) = 2.822, p = 0.009). Post hoc tests and visual inspection of the data (see
<xref ref-type="fig" rid="pone-0023811-g003">Fig. 3</xref>
) confirm that decision times are longer in the acoustic than in the visual task, that no effect of the secondary stimulus location is found in the visual task (i.e., no secondary acoustic influence on decision time), and that the effect of the secondary stimulus location in the acoustic task is marginal and mostly due to the most eccentric
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e009.jpg" mimetype="image"></inline-graphic>
</inline-formula>
position (
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e010.jpg" mimetype="image"></inline-graphic>
</inline-formula>
 = 20 deg).</p>
<fig id="pone-0023811-g003" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0023811.g003</object-id>
<label>Figure 3</label>
<caption>
<title>Decision time as a function of the secondary stimulus positions.</title>
</caption>
<graphic xlink:href="pone.0023811.g003"></graphic>
</fig>
<p>The first stage of our approach required us to identify the relevant variables to be embedded in the model. As far as the percept itself was concerned, the choice was relatively straightforward: since the task is spatial localization, the normalized spatial positions indicated by the subjects approximately represent the percepts. Three temporal features - the subjects' movement times, these movement times normalized by the distance to be traveled, and the subjects' decision times - were investigated in order to decide which best characterizes the perception process. The statistical analysis we carried out showed that, unlike their movement times, the subjects' decision times were far more dependent on the sensory nature and on the mode of presentation of the stimuli than on their position. Therefore, decision time is deemed the temporal variable that best characterizes the perception process.</p>
</sec>
</sec>
<sec id="s2b">
<title>Bayesian network analysis</title>
<sec id="s2b1">
<title>Statistical formulations</title>
<p>We have chosen to follow a probabilistic approach relying on Bayesian networks to model audiovisual perception, as there is an inherent uncertainty in the way the environment is perceived and processed by our sensory system. Specifically, a step-by-step elicitation of the model using BNs provides a means of investigating the relationships among the variables involved in the perception process. To this end, we use the information-theoretic framework we proposed in
<xref ref-type="bibr" rid="pone.0023811-Besson1">[11]</xref>
. We therefore start by casting the problem in a statistical framework.</p>
<p>The primary and secondary stimuli are modeled by two random variables (rvs)
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e011.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e012.jpg" mimetype="image"></inline-graphic>
</inline-formula>
enumerating the possible stimulus positions,
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e013.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e014.jpg" mimetype="image"></inline-graphic>
</inline-formula>
respectively. The last
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e015.jpg" mimetype="image"></inline-graphic>
</inline-formula>
range value is arbitrarily assigned to the secondary stimulus in the unimodal case. The model yields two rvs,
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e016.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e017.jpg" mimetype="image"></inline-graphic>
</inline-formula>
.
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e018.jpg" mimetype="image"></inline-graphic>
</inline-formula>
denotes the perceived primary stimulus localization and takes on values in the continuous range
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e019.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, bounded by the physical limitations of the pointer. The second rv
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e020.jpg" mimetype="image"></inline-graphic>
</inline-formula>
models the subjects' decision time.
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e021.jpg" mimetype="image"></inline-graphic>
</inline-formula>
is defined on
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e022.jpg" mimetype="image"></inline-graphic>
</inline-formula>
. We also introduce two binomial rvs,
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e023.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e024.jpg" mimetype="image"></inline-graphic>
</inline-formula>
.
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e025.jpg" mimetype="image"></inline-graphic>
</inline-formula>
models the type of primary sensory stimulus. It is set to 0 in the acoustic perception task, and to 1 in the visual perception task.
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e026.jpg" mimetype="image"></inline-graphic>
</inline-formula>
equals 0 if the inputs are unisensory, 1 if they are bisensory.</p>
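<p>Concretely, a single trial of the data set can be thought of as the following record; the field names are ours and the values are made up, but the encoding follows the definitions given above.</p>
<preformat>
# One trial as described above: stimulus positions in degrees, the subject's
# normalized localization (the percept observation), the decision time, and
# the two binary context variables.
trial = {
    "s1": -5.0,        # primary stimulus position (deg)
    "s2": 5.0,         # secondary stimulus position (deg); sentinel value if unimodal
    "percept": -3.2,   # normalized position indicated by the subject (deg)
    "dt": 0.84,        # decision time (s)
    "task": 1,         # 0 = acoustic perception task, 1 = visual perception task
    "bisensory": 1,    # 0 = unisensory input, 1 = bisensory input
}
</preformat>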
<p>The rv probability density functions (pdfs) are estimated using histograms. The bin width to estimate the pdfs of the input signals
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e027.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e028.jpg" mimetype="image"></inline-graphic>
</inline-formula>
is set to five, and one bin is centered on each possible value of the ground truth (so that there are five bins in total for the primary stimuli, and ten bins for the secondary stimuli). This way, the ground truth pdfs are uniform. Moreover, any possible inaccuracy pertaining to the experimental design is taken into account. The range of
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e029.jpg" mimetype="image"></inline-graphic>
</inline-formula>
is covered by 15 bins: thirteen bins of width 5 are centered on
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e030.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and two larger bins cover the bounding ranges
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e031.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e032.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, where the data is very sparse (hence a trade-off is maintained between the pdf estimate accuracy and overfitting). Obviously, the binomial pdfs of
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e033.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e034.jpg" mimetype="image"></inline-graphic>
</inline-formula>
are estimated using two-bin histograms, with the bins centered on 0 and 1. Finally, a histogram with bins of width 0.2 centered on
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e035.jpg" mimetype="image"></inline-graphic>
</inline-formula>
estimates the pdf of
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e036.jpg" mimetype="image"></inline-graphic>
</inline-formula>
. A
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e037.jpg" mimetype="image"></inline-graphic>
</inline-formula>
bin of no fixed width contains the few possibly remaining values of
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e038.jpg" mimetype="image"></inline-graphic>
</inline-formula>
. The histograms of
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e039.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e040.jpg" mimetype="image"></inline-graphic>
</inline-formula>
are provided in
<xref ref-type="fig" rid="pone-0023811-g010">Figs. 10</xref>
and
<xref ref-type="fig" rid="pone-0023811-g011">11</xref>
.</p>
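<p>A minimal sketch of the histogram-based pdf estimation described above is given below, assuming primary stimulus positions of -10, -5, 0, 5 and 10 deg and one bin of width 5 centered on each position; this is our reading of the text, not the authors' code.</p>
<preformat>
import numpy as np

centers = np.array([-10.0, -5.0, 0.0, 5.0, 10.0])           # assumed positions (deg)
edges = np.concatenate(([centers[0] - 2.5], centers + 2.5))  # one bin per position

def histogram_pdf(samples, edges):
    """Estimate a discrete pdf from samples using fixed bin edges."""
    counts, _ = np.histogram(samples, bins=edges)
    return counts / counts.sum()

# With the ground-truth positions themselves as samples (15 occurrences each),
# the estimated pdf is uniform, as stated above.
print(histogram_pdf(np.repeat(centers, 15), edges))
</preformat>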
</sec>
<sec id="s2b2">
<title>Model elicitation</title>
<p>First, the mutual information (MI) normalized by the joint entropy (so that direct comparisons can be made) is estimated for each pair of rvs and compared to
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e041.jpg" mimetype="image"></inline-graphic>
</inline-formula>
thresholds, to decide whether the values indicate dependent or independent variables. These thresholds account for possible approximations in the pdf estimates. Independent rvs are built by generating uniform pdfs over each rv's range. The MI values obtained with these artificial rvs give us the
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e042.jpg" mimetype="image"></inline-graphic>
</inline-formula>
values. Then, conditional mutual information (CMI) is computed to identify any independences given a third rv (see
<xref ref-type="bibr" rid="pone.0023811-Besson1">[11]</xref>
for a detailed presentation of the method).</p>
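<p>The following sketch shows one way to compute the normalized MI used in this elicitation step (histogram estimator; binning and variable names are illustrative). Applying the same estimator to artificial, uniformly distributed independent variables yields the threshold mentioned above, and conditioning the joint histogram on a third variable extends the computation to the CMI.</p>
<preformat>
import numpy as np

def normalized_mi(x, y, bins_x, bins_y):
    """Mutual information between x and y normalized by their joint entropy,
    estimated from a 2-D histogram with the given bin edges."""
    joint, _, _ = np.histogram2d(x, y, bins=[bins_x, bins_y])
    p_xy = joint / joint.sum()                     # joint pdf estimate
    p_x = p_xy.sum(axis=1, keepdims=True)          # marginal of x
    p_y = p_xy.sum(axis=0, keepdims=True)          # marginal of y
    nz = p_xy > 0
    mi = np.sum(p_xy[nz] * np.log(p_xy[nz] / (p_x @ p_y)[nz]))
    h_xy = -np.sum(p_xy[nz] * np.log(p_xy[nz]))    # joint entropy
    return mi / h_xy

# Threshold: apply the same estimator to variables drawn independently and
# uniformly over the same ranges, with the same sample size.
</preformat>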
<p>MI analysis yields the undirected structure presented in
<xref ref-type="fig" rid="pone-0023811-g004">Fig. 4</xref>
. As expected,
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e043.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e044.jpg" mimetype="image"></inline-graphic>
</inline-formula>
are independent of
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e045.jpg" mimetype="image"></inline-graphic>
</inline-formula>
(the positions of the stimuli are the same in the acoustic and visual perception tasks). Obviously,
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e046.jpg" mimetype="image"></inline-graphic>
</inline-formula>
is largely dependent on
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e047.jpg" mimetype="image"></inline-graphic>
</inline-formula>
(
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e048.jpg" mimetype="image"></inline-graphic>
</inline-formula>
), whereas
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e049.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e050.jpg" mimetype="image"></inline-graphic>
</inline-formula>
are totally independent.
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e051.jpg" mimetype="image"></inline-graphic>
</inline-formula>
shows greatest dependence on
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e052.jpg" mimetype="image"></inline-graphic>
</inline-formula>
(
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e053.jpg" mimetype="image"></inline-graphic>
</inline-formula>
), meaning that decision time is heavily affected by the type of primary sensory stimulus, as established in the data analysis section.
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e054.jpg" mimetype="image"></inline-graphic>
</inline-formula>
also depends strongly on
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e055.jpg" mimetype="image"></inline-graphic>
</inline-formula>
(
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e056.jpg" mimetype="image"></inline-graphic>
</inline-formula>
): adding an accessory stimulus to the localization task impacts the decision time. Though weaker, the dependence between
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e057.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and the stimulus positions,
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e058.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e059.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, or the subject's stimulus localization
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e060.jpg" mimetype="image"></inline-graphic>
</inline-formula>
cannot be disregarded (the MI values are above their respective
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e061.jpg" mimetype="image"></inline-graphic>
</inline-formula>
thresholds). In particular, the MI with
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e062.jpg" mimetype="image"></inline-graphic>
</inline-formula>
is only half as weak as
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e063.jpg" mimetype="image"></inline-graphic>
</inline-formula>
(
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e064.jpg" mimetype="image"></inline-graphic>
</inline-formula>
).</p>
<fig id="pone-0023811-g004" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0023811.g004</object-id>
<label>Figure 4</label>
<caption>
<title>Undirected graphical model based on MI analysis.</title>
<p>The shaded nodes are the model outputs. The MI values for each edge are indicated with the corresponding
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e065.jpg" mimetype="image"></inline-graphic>
</inline-formula>
(threshold) values (in brackets).</p>
</caption>
<graphic xlink:href="pone.0023811.g004"></graphic>
</fig>
<p>We now proceed to the CMI analysis, to identify potential third-rv dependences. In pure machine learning problems, this step allows the computational cost of inference to be reduced. In the present case, it reveals the causal relationships (in the causally sufficient sense stated by Neapolitan in
<xref ref-type="bibr" rid="pone.0023811-Neapolitan1">[24]</xref>
) between environmental properties and the subjects' percepts. We observe that some CMI values are below their respective
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e066.jpg" mimetype="image"></inline-graphic>
</inline-formula>
threshold values. An analysis of the information flow through the network (using the d-separation theorem
<xref ref-type="bibr" rid="pone.0023811-Pearl1">[25]</xref>
) leads to the partially directed acyclic graph
<xref ref-type="bibr" rid="pone.0023811-Verma1">[26]</xref>
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e067.jpg" mimetype="image"></inline-graphic>
</inline-formula>
shown in
<xref ref-type="fig" rid="pone-0023811-g005">Fig. 5</xref>
. This analysis establishes that
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e068.jpg" mimetype="image"></inline-graphic>
</inline-formula>
is conditionally independent of
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e069.jpg" mimetype="image"></inline-graphic>
</inline-formula>
or
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e070.jpg" mimetype="image"></inline-graphic>
</inline-formula>
. Contrary to what might be expected at first glance,
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e071.jpg" mimetype="image"></inline-graphic>
</inline-formula>
is also independent of
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e072.jpg" mimetype="image"></inline-graphic>
</inline-formula>
given
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e073.jpg" mimetype="image"></inline-graphic>
</inline-formula>
(
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e074.jpg" mimetype="image"></inline-graphic>
</inline-formula>
). Actually,
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e075.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e076.jpg" mimetype="image"></inline-graphic>
</inline-formula>
are largely redundant: they both follow Dirac distributions for unimodal inputs, but differ for bimodal inputs, where only
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e077.jpg" mimetype="image"></inline-graphic>
</inline-formula>
still follows a Dirac distribution. Thus,
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e078.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, like
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e079.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, contains information about the presence or absence of an accessory stimulus. But, contrary to
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e080.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, it also provides clues to the location of the information. Note that
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e081.jpg" mimetype="image"></inline-graphic>
</inline-formula>
is also very small (0.004, which is equal to
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e082.jpg" mimetype="image"></inline-graphic>
</inline-formula>
) so that
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e083.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e084.jpg" mimetype="image"></inline-graphic>
</inline-formula>
are not far from being conditionally independent given
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e085.jpg" mimetype="image"></inline-graphic>
</inline-formula>
. As a result, we can conclude that
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e086.jpg" mimetype="image"></inline-graphic>
</inline-formula>
is primarily related to the mode of presentation (uni- or bisensory) of the incoming sensory information, although the position of the information is also a factor. Also, the model confirms the statement made in the previous section: decision time is conditionally independent of the primary input position
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e087.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, contrary to the percept
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e088.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, and thus primarily characterizes the perception process rather than the pointing movement.</p>
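<p>For readers who want a concrete picture of the quantity involved, here is a minimal, purely illustrative sketch of a histogram-based CMI estimator (again, not the authors' code; the names and bin counts are assumptions). A CMI value below its empirical threshold is read as a conditional independence, which is then combined with d-separation to orient or remove edges.</p>
<preformat>
# Illustrative sketch only; not the implementation used in the paper.
import numpy as np

def conditional_mutual_information(x, y, z, bins=8):
    """CMI I(X;Y|Z) (in bits) from a 3-D joint histogram estimate of the pdf."""
    joint, _ = np.histogramdd(np.column_stack([x, y, z]), bins=bins)
    pxyz = joint / joint.sum()
    pxz = pxyz.sum(axis=1, keepdims=True)      # p(x, z)
    pyz = pxyz.sum(axis=0, keepdims=True)      # p(y, z)
    pz = pxyz.sum(axis=(0, 1), keepdims=True)  # p(z)
    nz = pxyz > 0                              # non-empty cells only
    with np.errstate(divide="ignore", invalid="ignore"):
        ratio = (pz * pxyz) / (pxz * pyz)
    return float(np.sum(pxyz[nz] * np.log2(ratio[nz])))

# Toy check: x and y both driven by z give a clearly positive MI(x, y),
# but I(x; y | z) close to zero.
rng = np.random.default_rng(1)
z = rng.normal(size=2000)
x = z + 0.3 * rng.normal(size=2000)
y = z + 0.3 * rng.normal(size=2000)
print(conditional_mutual_information(x, y, z))
</preformat>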
<fig id="pone-0023811-g005" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0023811.g005</object-id>
<label>Figure 5</label>
<caption>
<title>Graphical model
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e089.jpg" mimetype="image"></inline-graphic>
</inline-formula>
resulting from information flow analysis based on CMI values. </title>
</caption>
<graphic xlink:href="pone.0023811.g005"></graphic>
</fig>
<p>The probabilistic law described by
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e090.jpg" mimetype="image"></inline-graphic>
</inline-formula>
is:
<disp-formula>
<graphic xlink:href="pone.0023811.e091"></graphic>
</disp-formula>
<disp-formula>
<graphic xlink:href="pone.0023811.e092"></graphic>
<label>(1)</label>
</disp-formula>
Eq. (1) states that percept
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e093.jpg" mimetype="image"></inline-graphic>
</inline-formula>
(output of the system) and decision time
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e094.jpg" mimetype="image"></inline-graphic>
</inline-formula>
are conditionally independent given the context, i.e. given the sensory nature
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e095.jpg" mimetype="image"></inline-graphic>
</inline-formula>
of the stimulus to be located, the positions of the input stimuli,
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e096.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e097.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, as well as whether or not there is a secondary stimulus
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e098.jpg" mimetype="image"></inline-graphic>
</inline-formula>
(implicitly,
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e099.jpg" mimetype="image"></inline-graphic>
</inline-formula>
encodes whether the available information is unisensory or bisensory).</p>
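<p>To make this factorization concrete, and using our own notation only (the published equation is provided as an image; below, <italic>L</italic> stands for the percept, <italic>T</italic> for the decision time, <italic>M</italic> for the sensory nature of the task, <italic>X<sub>A</sub></italic> and <italic>X<sub>V</sub></italic> for the stimulus positions, and <italic>N</italic> for the presence of the accessory stimulus), a factorization of this kind reads:
<disp-formula>
<tex-math><![CDATA[
p(L, T, M, X_A, X_V, N) = p(L \mid M, X_A, X_V, N)\; p(T \mid M, X_A, X_V, N)\; p(M)\, p(X_A)\, p(X_V)\, p(N),
]]></tex-math>
</disp-formula>
where the conditional independence of the percept and the decision time given the context appears as the absence of any factor coupling <italic>L</italic> and <italic>T</italic> directly; the exact parent sets in Eq. (1) are those determined by the CMI analysis above.</p>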
</sec>
<sec id="s2b3">
<title>Context-specific independence</title>
<p>Since our aim was to bring to light specific top-down effects, depending on the context (environmental properties), we now focus on how changes in the environmental context modulate the structure of the variable relationships. To this end, we take our model-based analysis a step further by adding context-specific independence (CSI). CSI was formalized by Boutilier et al. in
<xref ref-type="bibr" rid="pone.0023811-Boutilier1">[27]</xref>
. It is related to the so-called asymmetric independence used in similarity networks and multinets
<xref ref-type="bibr" rid="pone.0023811-Geiger1">[28]</xref>
,
<xref ref-type="bibr" rid="pone.0023811-Geiger2">[29]</xref>
. While CMI reveals the possible structures of the relationship among variables for all the values these variables can take, CSI identifies dependences for different rv contextual values, i.e., for specific values of the rvs (note that we use the term
<italic>contextual value</italic>
rather than
<italic>context</italic>
as advocated in
<xref ref-type="bibr" rid="pone.0023811-Boutilier1">[27]</xref>
to avoid any confusion with the previous utilization of the word
<italic>context</italic>
in the paper). Thus, CSI further generalizes Bayesian networks
<xref ref-type="bibr" rid="pone.0023811-Bilmes1">[30]</xref>
. To represent the graphical network resulting from CSI analysis, we will resort to multinets, which allow CSI to be represented
<xref ref-type="bibr" rid="pone.0023811-Cano1">[31]</xref>
. It is important to remember that when a context is assigned to a rv, the latter becomes a constant. As a result, its impact on the other graph variables is no longer captured by the graph structure. Instead, it is yielded by the quantitative expression of the joint probabilities described by the local networks of the multinet.</p>
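<p>In practice, a context-specific independence can be screened for with the same MI machinery as above, simply restricted to the data slices where the chosen rv is fixed to one of its contextual values; a dependence that survives in one slice but falls below threshold in the other is a CSI. The sketch below is illustrative only (the variable names and data layout are assumptions), reusing the functions defined earlier.</p>
<preformat>
# Illustrative sketch only; mutual_information() and independence_threshold()
# as defined in the earlier sketch.
import numpy as np

def csi_check(data, pair, context_var, bins=8):
    """data: dict of equal-length 1-D arrays; pair: (name_a, name_b);
    context_var: name of the rv whose contextual values define the slices."""
    results = {}
    for value in np.unique(data[context_var]):
        mask = data[context_var] == value
        mi = mutual_information(data[pair[0]][mask], data[pair[1]][mask], bins=bins)
        thr = independence_threshold(int(mask.sum()), bins=bins)
        results[value] = {"mi": mi, "threshold": thr, "dependent": mi > thr}
    return results

# e.g. csi_check(data, ("percept", "accessory_position"), "task") would compare
# the percept/accessory-stimulus dependence across the two localization tasks
# (hypothetical column names).
</preformat>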
<p>Let us first assign the contextual values 0 or 1 to the rv
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e100.jpg" mimetype="image"></inline-graphic>
</inline-formula>
. We obtain the multinet
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e101.jpg" mimetype="image"></inline-graphic>
</inline-formula>
shown in
<xref ref-type="fig" rid="pone-0023811-g006">Fig. 6</xref>
, which reveals the structure of the variable relationships connected with the acoustic or visual localization tasks respectively. The structures of these local networks provide two interesting results. First, there are two different ways of handling the information as far as the percept is concerned, depending on the sensory nature of the stimulus to be localized. The percept is impacted by the accessory stimulus
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e102.jpg" mimetype="image"></inline-graphic>
</inline-formula>
in the acoustic localization task (integration of the multisensory information), whereas it is conditionally independent of
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e103.jpg" mimetype="image"></inline-graphic>
</inline-formula>
in the visual perception task (segregation of the multisensory information). Second, the structure of the relationships involving
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e104.jpg" mimetype="image"></inline-graphic>
</inline-formula>
remains the same (the factors affecting the dynamics of the perception process are the same) whatever the type of primary sensory stimulus.</p>
<fig id="pone-0023811-g006" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0023811.g006</object-id>
<label>Figure 6</label>
<caption>
<title>Local networks
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e105.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e106.jpg" mimetype="image"></inline-graphic>
</inline-formula>
of the multinet
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e107.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, obtained for the respective contextual values 0 or 1 of
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e108.jpg" mimetype="image"></inline-graphic>
</inline-formula>
.</title>
</caption>
<graphic xlink:href="pone.0023811.g006"></graphic>
</fig>
<p>We now remove any specific contextual value on
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e109.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and set
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e110.jpg" mimetype="image"></inline-graphic>
</inline-formula>
to 0 (unisensory inputs) or 1 (bisensory inputs).
<xref ref-type="sec" rid="s2">Analysis</xref>
of the resulting dependences leads to the multinet
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e111.jpg" mimetype="image"></inline-graphic>
</inline-formula>
shown in
<xref ref-type="fig" rid="pone-0023811-g007">Fig. 7</xref>
. Both model outputs,
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e112.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e113.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, still depend on the type of sensory stimulus
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e114.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, whatever the mode of presentation of the inputs. Unsurprisingly, with unimodal inputs, percept dependence on
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e115.jpg" mimetype="image"></inline-graphic>
</inline-formula>
disappears, whereas it persists with bimodal inputs (recall that we are considering the acoustic and visual localization tasks simultaneously). Once the mode of presentation is fixed (
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e116.jpg" mimetype="image"></inline-graphic>
</inline-formula>
or
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e117.jpg" mimetype="image"></inline-graphic>
</inline-formula>
), the dependence between
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e118.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e119.jpg" mimetype="image"></inline-graphic>
</inline-formula>
vanishes, to be replaced by a slight dependence between
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e120.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e121.jpg" mimetype="image"></inline-graphic>
</inline-formula>
. This confirms the hypothesis we put forward when we observed a conditional independence between
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e122.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e123.jpg" mimetype="image"></inline-graphic>
</inline-formula>
given
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e124.jpg" mimetype="image"></inline-graphic>
</inline-formula>
: once
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e125.jpg" mimetype="image"></inline-graphic>
</inline-formula>
no longer encodes the mode of presentation, it only conveys (when it exists) clues about the spatial position of the information, as does
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e126.jpg" mimetype="image"></inline-graphic>
</inline-formula>
.</p>
<fig id="pone-0023811-g007" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0023811.g007</object-id>
<label>Figure 7</label>
<caption>
<title>Local networks
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e127.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e128.jpg" mimetype="image"></inline-graphic>
</inline-formula>
of the multinet
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e129.jpg" mimetype="image"></inline-graphic>
</inline-formula>
obtained for the respective contextual values 0 or 1 of
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e130.jpg" mimetype="image"></inline-graphic>
</inline-formula>
.</title>
</caption>
<graphic xlink:href="pone.0023811.g007"></graphic>
</fig>
</sec>
</sec>
</sec>
<sec sec-type="" id="s3">
<title>Results</title>
<p>While our primary concern is to elicit the structure of the generative model, so as to investigate the implicit causal inference process attached to perception, we now examine the relevance of our model
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e131.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, via a quantitative analysis. Note that if our objective had been to reduce the computational costs of inference, this quantitative analysis could have been done with the multinets
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e132.jpg" mimetype="image"></inline-graphic>
</inline-formula>
or
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e133.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, the joint pdf of the multinets being recovered via the union-product operator
<xref ref-type="bibr" rid="pone.0023811-Zhang1">[32]</xref>
. It remains to be seen whether the model can correctly infer the different percepts
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e134.jpg" mimetype="image"></inline-graphic>
</inline-formula>
as well as the different decision times
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e135.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, in relation to the multiple generated contexts.</p>
<p>Eq. (1) expresses the joint distribution
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e136.jpg" mimetype="image"></inline-graphic>
</inline-formula>
for the model
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e137.jpg" mimetype="image"></inline-graphic>
</inline-formula>
in terms of posterior and marginal distributions, whose parameters have to be learned. The posterior distributions are a Gaussian for
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e138.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and a Log-normal for
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e139.jpg" mimetype="image"></inline-graphic>
</inline-formula>
(i.e. the logarithms of the decision times are normally distributed). The conditional pdfs for
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e140.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e141.jpg" mimetype="image"></inline-graphic>
</inline-formula>
are uniform and are estimated by multinomial distributions.</p>
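<p>As a minimal sketch of these maximum likelihood fits (assuming plain arrays of raw responses; the actual learning was carried out with the Bayes Net Toolbox for Matlab mentioned below), the Gaussian and log-normal parameters reduce to sample means and standard deviations, the latter computed on the logarithms of the decision times, and the multinomial parameters reduce to relative frequencies.</p>
<preformat>
# Illustrative sketch only; not the implementation used in the paper.
import numpy as np

def fit_gaussian(samples):
    """ML estimates of a Gaussian: sample mean and (biased) standard deviation."""
    samples = np.asarray(samples, dtype=float)
    return float(samples.mean()), float(samples.std(ddof=0))

def fit_lognormal(decision_times):
    """Log-normal fit: a Gaussian fitted to the logarithms of the decision times."""
    return fit_gaussian(np.log(np.asarray(decision_times, dtype=float)))

def fit_multinomial(labels, categories):
    """ML estimates of a multinomial: relative frequencies of each category."""
    labels = np.asarray(labels)
    counts = np.array([np.sum(labels == c) for c in categories], dtype=float)
    return counts / counts.sum()
</preformat>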
<p>A K-fold (with K = 10) cross-validation scheme is followed to learn the parameters and to perform the inference
<xref ref-type="bibr" rid="pone.0023811-Theodoridis1">[33]</xref>
so that no overlap exists between the testing and training sets. A maximum likelihood approach is used to learn the parameters of the multinomial and Gaussian distributions
<xref ref-type="bibr" rid="pone.0023811-Murphy1">[34]</xref>
on the training set. This training set is defined by the percepts and decision times of
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e142.jpg" mimetype="image"></inline-graphic>
</inline-formula>
subjects randomly picked from the 10-subject set. Data for the remaining subject forms the testing set. Once the parameters of the pdfs have been learned, we perform inference (estimating the system outputs given the inputs) using a maximum a posteriori (MAP) approach, where the MAP estimates are defined as:
<disp-formula>
<graphic xlink:href="pone.0023811.e143"></graphic>
<label>(2)</label>
</disp-formula>
<disp-formula>
<graphic xlink:href="pone.0023811.e144"></graphic>
<label>(3)</label>
</disp-formula>
Both the learning and inference stages were implemented using the Bayes Net Toolbox for Matlab
<xref ref-type="bibr" rid="pone.0023811-Murphy1">[34]</xref>
.</p>
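<p>Structurally, the procedure can be summarized by the following illustrative sketch: a leave-one-subject-out loop matching the K&#x2009;=&#x2009;10 folds over the 10 subjects described above, with a discrete MAP search over candidate output values. The <italic>fit</italic> and <italic>infer</italic> callables and the data layout are placeholders, since the actual learning and inference were performed with the Bayes Net Toolbox for Matlab.</p>
<preformat>
# Illustrative sketch only; not the implementation used in the paper.
import numpy as np

def map_estimate(candidates, log_posterior):
    """Return the candidate value maximizing the (log-)posterior over a discrete grid."""
    scores = np.array([log_posterior(value) for value in candidates])
    return candidates[int(np.argmax(scores))]

def cross_validate(subject_data, fit, infer):
    """subject_data: list of per-subject datasets; fit(training_sets) returns learned
    parameters; infer(params, test_set) returns the MAP outputs for the held-out subject."""
    predictions = []
    for held_out in range(len(subject_data)):
        training = [d for i, d in enumerate(subject_data) if i != held_out]
        params = fit(training)
        predictions.append(infer(params, subject_data[held_out]))
    return predictions
</preformat>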
<p>We followed the training and testing procedure 10 times, on audiovisual uni- and bisensory data (with
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e145.jpg" mimetype="image"></inline-graphic>
</inline-formula>
deg). The resulting mean coefficient of determination
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e146.jpg" mimetype="image"></inline-graphic>
</inline-formula>
is 0.91 between the model's MAP estimates and the subjects' percepts
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e147.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, and 0.63 between the model's MAP estimates and the subjects' decision times
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e148.jpg" mimetype="image"></inline-graphic>
</inline-formula>
.
<xref ref-type="table" rid="pone-0023811-t001">Table 1</xref>
details the mean
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e149.jpg" mimetype="image"></inline-graphic>
</inline-formula>
values obtained for the different position couples
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e150.jpg" mimetype="image"></inline-graphic>
</inline-formula>
. The model infers the subjects' percepts very well, and the dynamics of the perception process fairly well, for different secondary stimulus locations in different sensory and modal conditions. An example of the model's performance, when trained on all but the
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e151.jpg" mimetype="image"></inline-graphic>
</inline-formula>
subject, then tested on this excluded
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e152.jpg" mimetype="image"></inline-graphic>
</inline-formula>
subject, is shown in
<xref ref-type="fig" rid="pone-0023811-g008">Fig. 8</xref>
, for the two contextual values of
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e153.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and for the different contextual values of
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e154.jpg" mimetype="image"></inline-graphic>
</inline-formula>
associated with the median position of
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e155.jpg" mimetype="image"></inline-graphic>
</inline-formula>
(
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e156.jpg" mimetype="image"></inline-graphic>
</inline-formula>
deg).</p>
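<p>For completeness, the goodness-of-fit measure reported here can be computed as below: an illustrative sketch using one common definition of the coefficient of determination, where the observed values are the subjects' mean responses per condition and the predicted values are the model's MAP outputs.</p>
<preformat>
# Illustrative sketch only; not the implementation used in the paper.
import numpy as np

def coefficient_of_determination(observed, predicted):
    """R^2 = 1 - SS_res / SS_tot between observed data and model predictions."""
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    ss_res = np.sum((observed - predicted) ** 2)
    ss_tot = np.sum((observed - observed.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)
</preformat>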
<fig id="pone-0023811-g008" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0023811.g008</object-id>
<label>Figure 8</label>
<caption>
<title>Observed and inferred outputs
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e157.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e158.jpg" mimetype="image"></inline-graphic>
</inline-formula>
for the
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e159.jpg" mimetype="image"></inline-graphic>
</inline-formula>
subject (training set consisting of all but this subject) when
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e160.jpg" mimetype="image"></inline-graphic>
</inline-formula>
occurs at 0 deg, for
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e161.jpg" mimetype="image"></inline-graphic>
</inline-formula>
(left hand graph) and
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e162.jpg" mimetype="image"></inline-graphic>
</inline-formula>
(right hand graph).</title>
<p> We remind the reader that
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e163.jpg" mimetype="image"></inline-graphic>
</inline-formula>
deg stands for the unimodal case.</p>
</caption>
<graphic xlink:href="pone.0023811.g008"></graphic>
</fig>
<table-wrap id="pone-0023811-t001" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0023811.t001</object-id>
<label>Table 1</label>
<caption>
<title>Mean coefficients of determination between the data and the model.</title>
</caption>
<alternatives>
<graphic id="pone-0023811-t001-1" xlink:href="pone.0023811.t001"></graphic>
<table frame="hsides" rules="groups">
<colgroup span="1">
<col align="left" span="1"></col>
<col align="center" span="1"></col>
<col align="center" span="1"></col>
<col align="center" span="1"></col>
<col align="center" span="1"></col>
<col align="center" span="1"></col>
</colgroup>
<thead>
<tr>
<td align="left" rowspan="1" colspan="1">
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e164.jpg" mimetype="image"></inline-graphic>
</inline-formula>
positions (in deg)</td>
<td align="left" rowspan="1" colspan="1">−10</td>
<td align="left" rowspan="1" colspan="1">−5</td>
<td align="left" rowspan="1" colspan="1">0</td>
<td align="left" rowspan="1" colspan="1">5</td>
<td align="left" rowspan="1" colspan="1">10</td>
</tr>
</thead>
<tbody>
<tr>
<td align="left" rowspan="1" colspan="1">Mean
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e165.jpg" mimetype="image"></inline-graphic>
</inline-formula>
for
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e166.jpg" mimetype="image"></inline-graphic>
</inline-formula>
</td>
<td align="left" rowspan="1" colspan="1">0.59</td>
<td align="left" rowspan="1" colspan="1">0.69</td>
<td align="left" rowspan="1" colspan="1">0.61</td>
<td align="left" rowspan="1" colspan="1">0.66</td>
<td align="left" rowspan="1" colspan="1">0.57</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Mean
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e167.jpg" mimetype="image"></inline-graphic>
</inline-formula>
for
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e168.jpg" mimetype="image"></inline-graphic>
</inline-formula>
</td>
<td align="left" rowspan="1" colspan="1">0.82</td>
<td align="left" rowspan="1" colspan="1">0.95</td>
<td align="left" rowspan="1" colspan="1">0.96</td>
<td align="left" rowspan="1" colspan="1">0.94</td>
<td align="left" rowspan="1" colspan="1">0.88</td>
</tr>
</tbody>
</table>
</alternatives>
<table-wrap-foot>
<fn id="nt101">
<label></label>
<p>Mean coefficients of determination
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e169.jpg" mimetype="image"></inline-graphic>
</inline-formula>
between the model predictions and the mean subjects' decision times
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e170.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and localizations
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e171.jpg" mimetype="image"></inline-graphic>
</inline-formula>
, for different positions of
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e172.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e173.jpg" mimetype="image"></inline-graphic>
</inline-formula>
(
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e174.jpg" mimetype="image"></inline-graphic>
</inline-formula>
taking on values {
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e175.jpg" mimetype="image"></inline-graphic>
</inline-formula>
}).</p>
</fn>
</table-wrap-foot>
</table-wrap>
<p>It can be observed from these plots that the model fully predicts the fusion and the non-fusion of the information that occurs in the acoustic and visual localization tasks respectively, while still correctly fitting the
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e176.jpg" mimetype="image"></inline-graphic>
</inline-formula>
data in the unimodal case. It also quite faithfully infers the different decision times for the four possible contexts.</p>
<p>The lower
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e177.jpg" mimetype="image"></inline-graphic>
</inline-formula>
values for the decision times most likely stem from the large inherent within-subject variability. This makes it difficult to obtain an accurate estimate of the subjects' mean decision time for each stimulus couple
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e178.jpg" mimetype="image"></inline-graphic>
</inline-formula>
. Actually, this result could be expected from the context-specific independence analysis performed in the previous section of the paper. Indeed,
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e179.jpg" mimetype="image"></inline-graphic>
</inline-formula>
was shown to be conditionally independent of
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e180.jpg" mimetype="image"></inline-graphic>
</inline-formula>
once
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e181.jpg" mimetype="image"></inline-graphic>
</inline-formula>
is known, because
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e182.jpg" mimetype="image"></inline-graphic>
</inline-formula>
not only informs on the unisensory or bisensory property of the stimuli, but also relates to the variability in the subjects' answers. We ensured that this within-subject variability was not related to
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e183.jpg" mimetype="image"></inline-graphic>
</inline-formula>
by testing a model where a direct link between
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e184.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e185.jpg" mimetype="image"></inline-graphic>
</inline-formula>
was present. The addition of this link did not improve the
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e186.jpg" mimetype="image"></inline-graphic>
</inline-formula>
values. Increasing the number of presentations of the same stimulus couples
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e187.jpg" mimetype="image"></inline-graphic>
</inline-formula>
could theoretically solve the problem. In practice, however, this would require a longer experiment, in which decreased vigilance and increased fatigue would certainly degrade the precision of the subjects' answers.</p>
<p>Actually, the data analysis and the model's structure established that the decision times do not depend on
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e188.jpg" mimetype="image"></inline-graphic>
</inline-formula>
positions, so we can remove any reference to these positions and assess the model's ability to predict the subjects' decision times for different modes of presentation of the stimuli. This means that we now look at the data in a way similar to that presented in
<xref ref-type="fig" rid="pone-0023811-g002">Fig. 2</xref>
. With the MAP values obtained for three specific positions of
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e189.jpg" mimetype="image"></inline-graphic>
</inline-formula>
that correspond to three specific ways of presenting the information, namely,
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e190.jpg" mimetype="image"></inline-graphic>
</inline-formula>
deg (unisensory case),
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e191.jpg" mimetype="image"></inline-graphic>
</inline-formula>
(bisensory coincident case) and
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e192.jpg" mimetype="image"></inline-graphic>
</inline-formula>
(bisensory non-coincident case), the mean coefficient of determination
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e193.jpg" mimetype="image"></inline-graphic>
</inline-formula>
becomes 0.92. An illustrative example of these results is presented in
<xref ref-type="fig" rid="pone-0023811-g009">Fig. 9</xref>
, still for the
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e194.jpg" mimetype="image"></inline-graphic>
</inline-formula>
subject. Hence, when the secondary stimulus locations stand for different ways of presenting the information, the model is a very good predictor of the dynamics of the perception process.</p>
<fig id="pone-0023811-g009" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0023811.g009</object-id>
<label>Figure 9</label>
<caption>
<title>Observed and inferred outputs
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e195.jpg" mimetype="image"></inline-graphic>
</inline-formula>
for the
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e196.jpg" mimetype="image"></inline-graphic>
</inline-formula>
subject (training set consisting of all but this subject), for different contexts (type of sensory stimulus and mode of presentation).</title>
</caption>
<graphic xlink:href="pone.0023811.g009"></graphic>
</fig>
<fig id="pone-0023811-g010" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0023811.g010</object-id>
<label>Figure 10</label>
<caption>
<title>Probability density function of the subjects' spatial localizations
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e197.jpg" mimetype="image"></inline-graphic>
</inline-formula>
.</title>
</caption>
<graphic xlink:href="pone.0023811.g010"></graphic>
</fig>
<fig id="pone-0023811-g011" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0023811.g011</object-id>
<label>Figure 11</label>
<caption>
<title>Probability density function of the subjects' decision times
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e198.jpg" mimetype="image"></inline-graphic>
</inline-formula>
.</title>
</caption>
<graphic xlink:href="pone.0023811.g011"></graphic>
</fig>
</sec>
<sec sec-type="" id="s4">
<title>Discussion</title>
<p>This paper has addressed the question of understanding the perception process associated with an audiovisual localization task in its entirety. We have done so by investigating and modeling not only the output of the process but also its temporal dynamics. The BN model that we propose, as a continuation and a formalization of behavioral analysis, meets this objective. The percept and the decision time (the latter deemed to characterize the process dynamics) are both inferred for different environmental properties (contexts), i.e. for different types of sensory stimulus to be located, and for either unisensory or multisensory modes of presentation of the information. Our model is intended to investigate how multisensory integration is modulated by context, through different structural (bottom-up) and cognitive (top-down) factors.</p>
<p>To this end, our approach takes advantage of the compact representation of the problem domain offered by the BN structure, which depicts the relationships among the variables. Importantly, we made no a priori hypothesis about the model's structure; rather, we learned it from the data, following the information-theoretic framework we proposed in
<xref ref-type="bibr" rid="pone.0023811-Besson1">[11]</xref>
. To investigate the impact of different bottom-up and top-down factors on perception, we manipulated the context via observable variables that were then embedded in the model. Our data-driven approach thus differs from the ones taken in
<xref ref-type="bibr" rid="pone.0023811-Krding1">[4]</xref>
,
<xref ref-type="bibr" rid="pone.0023811-Shams1">[7]</xref>
,
<xref ref-type="bibr" rid="pone.0023811-Hospedales1">[35]</xref>
where expert domain knowledge is used to hypothesize a model structure. In these models, a hidden variable mediates a model selection process, favoring either integration or segregation of the multisensory information. In our model, the observable variables
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e199.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e200.jpg" mimetype="image"></inline-graphic>
</inline-formula>
modulate the context and, as a result, the percept is shown to depend or not on both stimuli.
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e201.jpg" mimetype="image"></inline-graphic>
</inline-formula>
determines whether unisensory or multisensory stimuli are presented as inputs (structural factor), while
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e202.jpg" mimetype="image"></inline-graphic>
</inline-formula>
models the type of sensory stimulus to be located, i.e., it stands for instruction manipulation and, indirectly, for intention manipulation (cognitive factor).</p>
<p>As a result, the structure of the general model
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e203.jpg" mimetype="image"></inline-graphic>
</inline-formula>
explicitly captures some of the data analysis results: the perception process depends not only on certain structural properties of the stimuli, such as their spatial position and discrepancy, but also on other contextual properties that might induce top-down or bottom-up effects, such as the instructions given or the mode of presentation of the stimuli. To investigate this point further, we carried out a context-specific independence analysis of the relationships among the variables.</p>
<p>By assigning a contextual value to one of the variables, independences that hold only in this specific context can be revealed. Thus, setting contextual values on
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e204.jpg" mimetype="image"></inline-graphic>
</inline-formula>
to specifically analyze the acoustic or the visual localization task yielded the local networks
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e205.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e206.jpg" mimetype="image"></inline-graphic>
</inline-formula>
shown in
<xref ref-type="fig" rid="pone-0023811-g006">Fig. 6</xref>
. Unsurprisingly, they correspond partly to the models we proposed in
<xref ref-type="bibr" rid="pone.0023811-Besson1">[11]</xref>
, since
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e207.jpg" mimetype="image"></inline-graphic>
</inline-formula>
is conditionally independent of the secondary stimulus position
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e208.jpg" mimetype="image"></inline-graphic>
</inline-formula>
in the visual localization task (
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e209.jpg" mimetype="image"></inline-graphic>
</inline-formula>
set to 1). As discussed in
<xref ref-type="bibr" rid="pone.0023811-Besson1">[11]</xref>
, this mathematically establishes that, in this case, the information is segregated at the percept level. The dominance of vision for spatial localization certainly explains this phenomenon. But the added value of the comprehensive modeling approach proposed here is that it reveals that the dynamics of the perception process are still dependent on whether the inputs are unisensory or multisensory (through the dependence on
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e210.jpg" mimetype="image"></inline-graphic>
</inline-formula>
), whatever the contextual value set for
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e211.jpg" mimetype="image"></inline-graphic>
</inline-formula>
. This establishes that multisensory integration is involved in both acoustic and visual localization tasks, even though in the visual context, percept
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e212.jpg" mimetype="image"></inline-graphic>
</inline-formula>
depends on the same input variables (
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e213.jpg" mimetype="image"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pone.0023811.e214.jpg" mimetype="image"></inline-graphic>
</inline-formula>
) for both the multisensory and the unisensory cases.</p>
<p>Stated differently, the subjects receive and process multisensory information in both the acoustic and visual localization tasks, but they exploit this information differently depending on the sensory context. Therefore, as clearly shown by the global model we propose, multisensory integration phenomena (possibly reinforced by bottom-up cross-modal attention
<xref ref-type="bibr" rid="pone.0023811-Talsma1">[36]</xref>
,
<xref ref-type="bibr" rid="pone.0023811-Koelewijn1">[37]</xref>
) constrain the inverse problem (e.g., shorter decision times yielding the same percept accuracy as in the unisensory case are observed in the visual perception task). However, prior knowledge and top-down processes (the instructions given to the subjects change their intentional or attentional focus) modulate the way this multisensory information is handled. Therefore, with multisensory inputs, there is an integration of the information, visible in the process dynamics, that results in a percept in which this information is either fused or not.
<p>Modeling approaches concerned with process output alone might miss this important result, which sheds light on the potentially wide variations in the underlying brain processes, depending on the bottom-up and top-down factors involved in the task. For example, the computational models proposed in
<xref ref-type="bibr" rid="pone.0023811-Krding1">[4]</xref>
,
<xref ref-type="bibr" rid="pone.0023811-Shams1">[7]</xref>
,
<xref ref-type="bibr" rid="pone.0023811-Hospedales2">[38]</xref>
(see
<xref ref-type="bibr" rid="pone.0023811-Shams2">[39]</xref>
for a review) nicely predict the final percept associated with multisensory inputs characterized by different structural properties (mainly spatial congruency or discrepancy). However, these models cannot discriminate between similar outputs resulting from different processes. Similarly, while electrophysiological studies do investigate multisensory perception at the brain level, they limit their investigation to the temporal (dynamics) dimension of the process (see e.g.
<xref ref-type="bibr" rid="pone.0023811-Hecht1">[10]</xref>
,
<xref ref-type="bibr" rid="pone.0023811-Molholm1">[16]</xref>
,
<xref ref-type="bibr" rid="pone.0023811-Jepma1">[18]</xref>
).</p>
<p>On the basis of the present work, we are convinced that a comprehensive conceptualization of the perception process, in which the output and the dynamics of the process are both investigated and modeled, should lead to a clearer understanding of multisensory perception. In particular, it should provide insights into the complex interconnections between perception and top-down factors, such as those induced by instruction manipulation. The latter may be related to intentional and attentional phenomena which closely interlock with multisensory integration, as discussed in
<xref ref-type="bibr" rid="pone.0023811-Talsma1">[36]</xref>
,
<xref ref-type="bibr" rid="pone.0023811-Koelewijn1">[37]</xref>
,
<xref ref-type="bibr" rid="pone.0023811-Spence1">[40]</xref>
. Dedicated experimental protocols and joint behavioral and electrophysiological studies should be undertaken to investigate this point further.</p>
</sec>
</body>
<back>
<ack>
<p>The authors would like to thank G. Gauthier, J.-L. Vercher and D. R. Mestre for fruitful discussions as well as M. Sweetko for help with the English of the paper. Thanks also to F. Buloup for the electronic support and A. Donneaud for help in building the experimental setup.</p>
</ack>
<fn-group>
<fn fn-type="conflict">
<p>
<bold>Competing Interests: </bold>
The authors have declared that no competing interests exist.</p>
</fn>
<fn fn-type="financial-disclosure">
<p>
<bold>Funding: </bold>
The authors have no support or funding to report.</p>
</fn>
</fn-group>
<ref-list>
<title>References</title>
<ref id="pone.0023811-Clark1">
<label>1</label>
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Clark</surname>
<given-names>JJ</given-names>
</name>
<name>
<surname>Yuille</surname>
<given-names>AL</given-names>
</name>
</person-group>
<year>1990</year>
<source>Data fusion for sensory information processing systems</source>
<publisher-name>Springer, 1st edition</publisher-name>
</element-citation>
</ref>
<ref id="pone.0023811-Ernst1">
<label>2</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ernst</surname>
<given-names>MO</given-names>
</name>
<name>
<surname>Bülthoff</surname>
<given-names>HH</given-names>
</name>
</person-group>
<year>2004</year>
<article-title>Merging the senses into a robust percept.</article-title>
<source>TRENDS in Cognitive Sciences</source>
<volume>8</volume>
<fpage>162</fpage>
<lpage>169</lpage>
<pub-id pub-id-type="pmid">15050512</pub-id>
</element-citation>
</ref>
<ref id="pone.0023811-Sato1">
<label>3</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Sato</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Toyoizumi</surname>
<given-names>T</given-names>
</name>
</person-group>
<year>2007</year>
<article-title>Bayesian inference explains perception of unity and ventriloquism aftereffect: Identification of common sources of audiovisual stimuli.</article-title>
<source>Neural Computation</source>
<volume>19</volume>
<fpage>3335</fpage>
<lpage>3355</lpage>
<pub-id pub-id-type="pmid">17970656</pub-id>
</element-citation>
</ref>
<ref id="pone.0023811-Krding1">
<label>4</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Körding</surname>
<given-names>KP</given-names>
</name>
<name>
<surname>Beierholm</surname>
<given-names>U</given-names>
</name>
<name>
<surname>Ma</surname>
<given-names>WJ</given-names>
</name>
<name>
<surname>Quartz</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Tenenbaum</surname>
<given-names>JB</given-names>
</name>
<etal></etal>
</person-group>
<year>2007</year>
<article-title>Causal inference in multisensory perception.</article-title>
<source>PLoS ONE</source>
<volume>2</volume>
<fpage>e943</fpage>
<pub-id pub-id-type="pmid">17895984</pub-id>
</element-citation>
</ref>
<ref id="pone.0023811-Ernst2">
<label>5</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ernst</surname>
<given-names>MO</given-names>
</name>
<name>
<surname>Banks</surname>
<given-names>MS</given-names>
</name>
</person-group>
<year>2002</year>
<article-title>Humans integrate visual and haptic information in a statistically optimal fashion.</article-title>
<source>Nature</source>
<volume>415</volume>
<fpage>429</fpage>
<lpage>433</lpage>
<pub-id pub-id-type="pmid">11807554</pub-id>
</element-citation>
</ref>
<ref id="pone.0023811-Battaglia1">
<label>6</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Battaglia</surname>
<given-names>PW</given-names>
</name>
<name>
<surname>Jacobs</surname>
<given-names>RA</given-names>
</name>
<name>
<surname>Aslin</surname>
<given-names>RN</given-names>
</name>
</person-group>
<year>2003</year>
<article-title>Bayesian integration of visual and auditory signals for spatial localization.</article-title>
<source>J Opt Soc Am A</source>
<volume>20</volume>
<fpage>1391</fpage>
<lpage>1397</lpage>
</element-citation>
</ref>
<ref id="pone.0023811-Shams1">
<label>7</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Shams</surname>
<given-names>L</given-names>
</name>
<name>
<surname>Ma</surname>
<given-names>WJ</given-names>
</name>
<name>
<surname>Beierholm</surname>
<given-names>U</given-names>
</name>
</person-group>
<year>2005</year>
<article-title>Sound-induced flash illusion as an optimal percept.</article-title>
<source>Neuro Report</source>
<volume>16</volume>
<fpage>1923</fpage>
<lpage>1927</lpage>
</element-citation>
</ref>
<ref id="pone.0023811-Bernstein1">
<label>8</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bernstein</surname>
<given-names>HI</given-names>
</name>
<name>
<surname>Clark</surname>
<given-names>HM</given-names>
</name>
<name>
<surname>Edelstein</surname>
<given-names>AB</given-names>
</name>
</person-group>
<year>1969</year>
<article-title>Intermodal effects in choice reaction time.</article-title>
<source>Journal of Experimental Psychology</source>
<volume>81</volume>
<fpage>405</fpage>
<lpage>407</lpage>
</element-citation>
</ref>
<ref id="pone.0023811-Bernstein2">
<label>9</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bernstein</surname>
<given-names>HI</given-names>
</name>
<name>
<surname>Clark</surname>
<given-names>HM</given-names>
</name>
<name>
<surname>Edelstein</surname>
<given-names>AB</given-names>
</name>
</person-group>
<year>1969</year>
<article-title>Effects of an auditory signal on visual reaction time.</article-title>
<source>Journal of Experimental Psychology</source>
<volume>80</volume>
<fpage>567</fpage>
<lpage>569</lpage>
<pub-id pub-id-type="pmid">5786157</pub-id>
</element-citation>
</ref>
<ref id="pone.0023811-Hecht1">
<label>10</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hecht</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Reiner</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Karni</surname>
<given-names>A</given-names>
</name>
</person-group>
<year>2008</year>
<article-title>Multisensory enhancement: gains in choice and in simple response times.</article-title>
<source>Exp Brain Res</source>
<volume>189</volume>
<fpage>133</fpage>
<lpage>143</lpage>
<pub-id pub-id-type="pmid">18478210</pub-id>
</element-citation>
</ref>
<ref id="pone.0023811-Besson1">
<label>11</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Besson</surname>
<given-names>P</given-names>
</name>
<name>
<surname>Richiardi</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Bourdin</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Bringoux</surname>
<given-names>L</given-names>
</name>
<name>
<surname>Mestre</surname>
<given-names>DR</given-names>
</name>
<etal></etal>
</person-group>
<year>2010</year>
<article-title>Bayesian networks and information theory for audio-visual perception modeling.</article-title>
<source>Biological Cybernetics</source>
<volume>103</volume>
<fpage>213</fpage>
<pub-id pub-id-type="pmid">20502912</pub-id>
</element-citation>
</ref>
<ref id="pone.0023811-Besson2">
<label>12</label>
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Besson</surname>
<given-names>P</given-names>
</name>
<name>
<surname>Richiardi</surname>
<given-names>J</given-names>
</name>
</person-group>
<year>2009</year>
<article-title>A context-specific independence model of multisensory perception.</article-title>
<source>Neuro Comp</source>
<publisher-name>Bordeaux, France</publisher-name>
</element-citation>
</ref>
<ref id="pone.0023811-Knill1">
<label>13</label>
<element-citation publication-type="book">
<person-group person-group-type="editor">
<name>
<surname>Knill</surname>
<given-names>DC</given-names>
</name>
<name>
<surname>Richards</surname>
<given-names>W</given-names>
</name>
</person-group>
<year>1996</year>
<source>Perception as Bayesian Inference</source>
<publisher-loc>New York, USA</publisher-loc>
<publisher-name>Cambridge University Press</publisher-name>
</element-citation>
</ref>
<ref id="pone.0023811-Roach1">
<label>14</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Roach</surname>
<given-names>NW</given-names>
</name>
<name>
<surname>Heron</surname>
<given-names>J</given-names>
</name>
<name>
<surname>McGraw</surname>
<given-names>PV</given-names>
</name>
</person-group>
<year>2006</year>
<article-title>Resolving multisensory conflict: a strategy for balancing the costs and benefits of audio-visual integration.</article-title>
<source>Proceedings of the Royal Society B: Biological Sciences. volume 273</source>
<fpage>2159</fpage>
<lpage>2168</lpage>
<comment>Doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1098/rspb.2006.3578">10.1098/rspb.2006.3578</ext-link>
</comment>
</element-citation>
</ref>
<ref id="pone.0023811-Wozny1">
<label>15</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wozny</surname>
<given-names>DR</given-names>
</name>
<name>
<surname>Beierholm</surname>
<given-names>UR</given-names>
</name>
<name>
<surname>Shams</surname>
<given-names>L</given-names>
</name>
</person-group>
<year>2008</year>
<article-title>Human trimodal perception follows optimal statistical inference.</article-title>
<source>Journal of vision</source>
<volume>8</volume>
<fpage>1</fpage>
<lpage>11</lpage>
<pub-id pub-id-type="pmid">18484830</pub-id>
</element-citation>
</ref>
<ref id="pone.0023811-Molholm1">
<label>16</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Molholm</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Ritter</surname>
<given-names>W</given-names>
</name>
<name>
<surname>Murray</surname>
<given-names>MM</given-names>
</name>
<name>
<surname>Javitt</surname>
<given-names>DC</given-names>
</name>
<name>
<surname>Schroeder</surname>
<given-names>CE</given-names>
</name>
<etal></etal>
</person-group>
<year>2002</year>
<article-title>Multisensory auditory-visual interactions during early sensory processing in humans: a high-density electrical mapping study.</article-title>
<source>Brain Res Cogn Brain Res</source>
<volume>14</volume>
<fpage>115</fpage>
<lpage>128</lpage>
<pub-id pub-id-type="pmid">12063135</pub-id>
</element-citation>
</ref>
<ref id="pone.0023811-Diederich1">
<label>17</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Diederich</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Colonius</surname>
<given-names>H</given-names>
</name>
</person-group>
<year>2004</year>
<article-title>Bimodal and trimodal multisensory enhancement: effects of stimulus onset and intensity on reaction time.</article-title>
<source>Percept Psychophys</source>
<volume>66</volume>
<fpage>1388</fpage>
<lpage>1404</lpage>
<pub-id pub-id-type="pmid">15813202</pub-id>
</element-citation>
</ref>
<ref id="pone.0023811-Jepma1">
<label>18</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Jepma</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Wagenmakers</surname>
<given-names>EJ</given-names>
</name>
<name>
<surname>Band</surname>
<given-names>GPH</given-names>
</name>
<name>
<surname>Nieuwenhuis</surname>
<given-names>S</given-names>
</name>
</person-group>
<year>2009</year>
<article-title>The effects of accessory stimuli on information processing: evidence from electrophysiology and a diffusion model analysis.</article-title>
<source>J Cogn Neurosci</source>
<volume>21</volume>
<fpage>847</fpage>
<lpage>864</lpage>
<pub-id pub-id-type="pmid">18702584</pub-id>
</element-citation>
</ref>
<ref id="pone.0023811-Rowland1">
<label>19</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Rowland</surname>
<given-names>BA</given-names>
</name>
<name>
<surname>Quessy</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Stanford</surname>
<given-names>TR</given-names>
</name>
<name>
<surname>Stein</surname>
<given-names>BE</given-names>
</name>
</person-group>
<year>2007</year>
<article-title>Multisensory integration shortens physiological response latencies.</article-title>
<source>J Neurosci</source>
<volume>27</volume>
<fpage>5879</fpage>
<lpage>5884</lpage>
<pub-id pub-id-type="pmid">17537958</pub-id>
</element-citation>
</ref>
<ref id="pone.0023811-Nickerson1">
<label>20</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Nickerson</surname>
<given-names>RS</given-names>
</name>
</person-group>
<year>1973</year>
<article-title>Intersensory facilitation of reaction time: energy summation or preparation enhancement?</article-title>
<source>Psychol Rev</source>
<volume>80</volume>
<fpage>489</fpage>
<lpage>509</lpage>
<pub-id pub-id-type="pmid">4757060</pub-id>
</element-citation>
</ref>
<ref id="pone.0023811-Braun1">
<label>21</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Braun</surname>
<given-names>DA</given-names>
</name>
<name>
<surname>Mehring</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Wolpert</surname>
<given-names>DM</given-names>
</name>
</person-group>
<year>2010</year>
<article-title>Structure learning in action.</article-title>
<source>Behav Brain Res</source>
<volume>206</volume>
<fpage>157</fpage>
<lpage>165</lpage>
<pub-id pub-id-type="pmid">19720086</pub-id>
</element-citation>
</ref>
<ref id="pone.0023811-Warren1">
<label>22</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Warren</surname>
<given-names>DH</given-names>
</name>
</person-group>
<year>1979</year>
<article-title>Spatial localization under conflict conditions: is there a single explanation?</article-title>
<source>Perception</source>
<volume>8</volume>
<fpage>323</fpage>
<lpage>337</lpage>
<pub-id pub-id-type="pmid">534159</pub-id>
</element-citation>
</ref>
<ref id="pone.0023811-Sarlegna1">
<label>23</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Sarlegna</surname>
<given-names>F</given-names>
</name>
<name>
<surname>Malfait</surname>
<given-names>N</given-names>
</name>
<name>
<surname>Bringoux</surname>
<given-names>L</given-names>
</name>
<name>
<surname>Bourdin</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Vercher</surname>
<given-names>J</given-names>
</name>
</person-group>
<year>2010</year>
<article-title>Force-field adaptation without proprioception: can vision be used to model limb dynamics?</article-title>
<source>Neuropsychologia</source>
<volume>48</volume>
<fpage>60</fpage>
<lpage>67</lpage>
<pub-id pub-id-type="pmid">19695273</pub-id>
</element-citation>
</ref>
<ref id="pone.0023811-Neapolitan1">
<label>24</label>
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Neapolitan</surname>
<given-names>RE</given-names>
</name>
</person-group>
<year>2003</year>
<source>Learning Bayesian Networks</source>
<publisher-loc>Upper Saddle River, NJ</publisher-loc>
<publisher-name>Prentice Hall</publisher-name>
</element-citation>
</ref>
<ref id="pone.0023811-Pearl1">
<label>25</label>
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Pearl</surname>
<given-names>J</given-names>
</name>
</person-group>
<year>1988</year>
<source>Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference</source>
<publisher-loc>San Francisco, California</publisher-loc>
<publisher-name>Morgan Kaufmann</publisher-name>
</element-citation>
</ref>
<ref id="pone.0023811-Verma1">
<label>26</label>
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Verma</surname>
<given-names>T</given-names>
</name>
<name>
<surname>Pearl</surname>
<given-names>J</given-names>
</name>
</person-group>
<year>1992</year>
<article-title>An algorithm for deciding if a set of observed independencies has a causal explanation.</article-title>
<source>Proc. 8th Annual Conf. on Uncertainty in Artificial Intelligence (UAI-92)</source>
<publisher-loc>San Mateo, CA</publisher-loc>
<publisher-name>Morgan Kaufmann</publisher-name>
<fpage>323</fpage>
<lpage>333</lpage>
</element-citation>
</ref>
<ref id="pone.0023811-Boutilier1">
<label>27</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Boutilier</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Friedman</surname>
<given-names>N</given-names>
</name>
<name>
<surname>Goldszmidt</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Koller</surname>
<given-names>D</given-names>
</name>
</person-group>
<year>1996</year>
<article-title>Context-specific independence in Bayesian networks.</article-title>
<source>UAI</source>
</element-citation>
</ref>
<ref id="pone.0023811-Geiger1">
<label>28</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Geiger</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Heckerman</surname>
<given-names>D</given-names>
</name>
</person-group>
<year>1991</year>
<article-title>Advances in probabilistic reasoning.</article-title>
<source>UAI</source>
<fpage>118</fpage>
<lpage>126</lpage>
</element-citation>
</ref>
<ref id="pone.0023811-Geiger2">
<label>29</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Geiger</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Heckerman</surname>
<given-names>D</given-names>
</name>
</person-group>
<year>1996</year>
<article-title>Knowledge representation and inference in similarity networks and Bayesian multinets.</article-title>
<source>Artificial Intelligence</source>
<volume>82</volume>
<fpage>45</fpage>
<lpage>74</lpage>
</element-citation>
</ref>
<ref id="pone.0023811-Bilmes1">
<label>30</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bilmes</surname>
<given-names>J</given-names>
</name>
</person-group>
<year>2000</year>
<article-title>Dynamic Bayesian multinets.</article-title>
<source>Proceedings of the 16th Conference on Uncertainty in Artificial Intelligence (UAI)</source>
</element-citation>
</ref>
<ref id="pone.0023811-Cano1">
<label>31</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Cano</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Castellano</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Masegosa</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Moral</surname>
<given-names>S</given-names>
</name>
</person-group>
<year>2005</year>
<article-title>Methods to determine the branching attribute in Bayesian multinets classifiers.</article-title>
<source>ECSQARU</source>
<fpage>932</fpage>
<lpage>943</lpage>
</element-citation>
</ref>
<ref id="pone.0023811-Zhang1">
<label>32</label>
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Zhang</surname>
<given-names>NL</given-names>
</name>
<name>
<surname>Poole</surname>
<given-names>D</given-names>
</name>
</person-group>
<year>1999</year>
<article-title>On the role of context-specific independence in probabilistic inference.</article-title>
<source>Proceedings of the 16th International Joint Conference on Artificial Intelligence</source>
<publisher-loc>San Francisco, CA, USA</publisher-loc>
<publisher-name>Morgan Kaufmann Publishers Inc.</publisher-name>
<fpage>1288</fpage>
<lpage>1293</lpage>
<comment>volume 2</comment>
</element-citation>
</ref>
<ref id="pone.0023811-Theodoridis1">
<label>33</label>
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Theodoridis</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Koutroumbas</surname>
<given-names>K</given-names>
</name>
</person-group>
<year>2006</year>
<source>Pattern Recognition</source>
<publisher-loc>Orlando, FL, USA</publisher-loc>
<publisher-name>Academic Press</publisher-name>
</element-citation>
</ref>
<ref id="pone.0023811-Murphy1">
<label>34</label>
<element-citation publication-type="other">
<person-group person-group-type="author">
<name>
<surname>Murphy</surname>
<given-names>KP</given-names>
</name>
</person-group>
<year>2002</year>
<article-title>Dynamic Bayesian Networks: Representation, Inference and Learning.</article-title>
<comment>PhD thesis, University of California, Berkeley, USA</comment>
</element-citation>
</ref>
<ref id="pone.0023811-Hospedales1">
<label>35</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hospedales</surname>
<given-names>T</given-names>
</name>
<name>
<surname>Vijayakumar</surname>
<given-names>S</given-names>
</name>
</person-group>
<year>2009</year>
<article-title>Multisensory oddity detection as Bayesian inference.</article-title>
<source>PLoS ONE</source>
<volume>4</volume>
<fpage>e4205</fpage>
<pub-id pub-id-type="pmid">19145254</pub-id>
</element-citation>
</ref>
<ref id="pone.0023811-Talsma1">
<label>36</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Talsma</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Senkowski</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Soto-Faraco</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Woldorff</surname>
<given-names>MG</given-names>
</name>
</person-group>
<year>2010</year>
<article-title>The multifaceted interplay between attention and multisensory integration.</article-title>
<source>Trends Cogn Sci</source>
<volume>14</volume>
<fpage>400</fpage>
<lpage>410</lpage>
<pub-id pub-id-type="pmid">20675182</pub-id>
</element-citation>
</ref>
<ref id="pone.0023811-Koelewijn1">
<label>37</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Koelewijn</surname>
<given-names>T</given-names>
</name>
<name>
<surname>Bronkhorst</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Theeuwes</surname>
<given-names>J</given-names>
</name>
</person-group>
<year>2010</year>
<article-title>Attention and the multiple stages of multisensory integration: A review of audiovisual studies.</article-title>
<source>Acta Psychol (Amst)</source>
<volume>134</volume>
<fpage>372</fpage>
<lpage>384</lpage>
<pub-id pub-id-type="pmid">20427031</pub-id>
</element-citation>
</ref>
<ref id="pone.0023811-Hospedales2">
<label>38</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hospedales</surname>
<given-names>TM</given-names>
</name>
<name>
<surname>Vijayakumar</surname>
<given-names>S</given-names>
</name>
</person-group>
<year>2008</year>
<article-title>Structure inference for Bayesian multisensory scene understanding.</article-title>
<source>IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI)</source>
<volume>30</volume>
<fpage>2140</fpage>
<lpage>2157</lpage>
</element-citation>
</ref>
<ref id="pone.0023811-Shams2">
<label>39</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Shams</surname>
<given-names>L</given-names>
</name>
<name>
<surname>Beierholm</surname>
<given-names>U</given-names>
</name>
</person-group>
<year>2010</year>
<article-title>Causal inference in perception.</article-title>
<source>Trends Cogn Sci</source>
<volume>14</volume>
<fpage>425</fpage>
<lpage>432</lpage>
<pub-id pub-id-type="pmid">20705502</pub-id>
</element-citation>
</ref>
<ref id="pone.0023811-Spence1">
<label>40</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Spence</surname>
<given-names>C</given-names>
</name>
</person-group>
<year>2010</year>
<article-title>Crossmodal spatial attention.</article-title>
<source>Ann N Y Acad Sci</source>
<volume>1191</volume>
<fpage>182</fpage>
<lpage>200</lpage>
<pub-id pub-id-type="pmid">20392281</pub-id>
</element-citation>
</ref>
</ref-list>
</back>
</pmc>
<affiliations>
<list></list>
<tree>
<noCountry>
<name sortKey="Besson, Patricia" sort="Besson, Patricia" uniqKey="Besson P" first="Patricia" last="Besson">Patricia Besson</name>
<name sortKey="Bourdin, Christophe" sort="Bourdin, Christophe" uniqKey="Bourdin C" first="Christophe" last="Bourdin">Christophe Bourdin</name>
<name sortKey="Bringoux, Lionel" sort="Bringoux, Lionel" uniqKey="Bringoux L" first="Lionel" last="Bringoux">Lionel Bringoux</name>
</noCountry>
</tree>
</affiliations>
</record>

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/HapticV1/Data/Ncbi/Merge
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 001B76 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/Ncbi/Merge/biblio.hfd -nk 001B76 | SxmlIndent | more
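Both commands assume the Dilib environment variables are already defined in the current shell. A minimal setup sketch, assuming EXPLOR_AREA is the root of the HapticV1 exploration area (this path is inferred from the EXPLOR_STEP value above and may differ on your installation):

# Hypothetical setup; adjust WICRI_ROOT to your local installation
export WICRI_ROOT=/path/to/wicri
export EXPLOR_AREA=$WICRI_ROOT/Ticri/CIDE/explor/HapticV1
export EXPLOR_STEP=$EXPLOR_AREA/Data/Ncbi/Merge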

To add a link to this page in the Wicri network

{{Explor lien
   |wiki=    Ticri/CIDE
   |area=    HapticV1
   |flux=    Ncbi
   |étape=   Merge
   |type=    RBID
   |clé=     PMC:3161793
   |texte=   A Comprehensive Model of Audiovisual Perception: Both Percept and Temporal Dynamics
}}

To generate wiki pages

HfdIndexSelect -h $EXPLOR_AREA/Data/Ncbi/Merge/RBID.i   -Sk "pubmed:21887324" \
       | HfdSelect -Kh $EXPLOR_AREA/Data/Ncbi/Merge/biblio.hfd   \
       | NlmPubMed2Wicri -a HapticV1 
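As an illustrative variation (assuming NlmPubMed2Wicri writes the generated wiki markup to standard output, which is not confirmed here), the result could be captured in a file for review before publication:

HfdIndexSelect -h $EXPLOR_AREA/Data/Ncbi/Merge/RBID.i   -Sk "pubmed:21887324" \
       | HfdSelect -Kh $EXPLOR_AREA/Data/Ncbi/Merge/biblio.hfd   \
       | NlmPubMed2Wicri -a HapticV1 > 001B76.wiki   # 001B76 is this record's internal identifier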

Wicri

This area was generated with Dilib version V0.6.23.
Data generation: Mon Jun 13 01:09:46 2016. Site generation: Wed Mar 6 09:54:07 2024