Exploration server on haptic devices


Development of Visuo-Auditory Integration in Space and Time

Internal identifier: 001786 (Pmc/Checkpoint); previous: 001785; next: 001787


Authors: Monica Gori [Italy]; Giulio Sandini [Italy]; David Burr [Italy]

Source :

RBID : PMC:3443931

Abstract

Adults integrate multisensory information optimally (e.g., Ernst and Banks, 2002) while children do not integrate multisensory visual-haptic cues until 8–10 years of age (e.g., Gori et al., 2008). Before that age, strong unisensory dominance occurs for size and orientation visual-haptic judgments, possibly reflecting a process of cross-sensory calibration between modalities. It is widely recognized that audition dominates time perception, while vision dominates space perception. Within the framework of the cross-sensory calibration hypothesis, we investigate visual-auditory integration in both space and time with child-friendly spatial and temporal bisection tasks. Unimodal and bimodal (conflictual and non-conflictual) audio-visual thresholds and PSEs were measured and compared with the Bayesian predictions. In the temporal domain, we found that in both children and adults, audition dominates the bimodal visuo-auditory task, both for perceived time and for precision thresholds. In contrast, in the visual-auditory spatial task, children younger than 12 years of age show clear visual dominance (for PSEs) and bimodal thresholds higher than the Bayesian prediction. Only in the adult group did bimodal thresholds become optimal. In agreement with previous studies, our results suggest that adult-like visual-auditory behavior also develops late. We suggest that the visual dominance for space and the auditory dominance for time could reflect a cross-sensory comparison of vision in the spatial visuo-audio task and a cross-sensory comparison of audition in the temporal visuo-audio task.


Url:
DOI: 10.3389/fnint.2012.00077
PubMed: 23060759
PubMed Central: 3443931


Affiliations:



The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">Development of Visuo-Auditory Integration in Space and Time</title>
<author>
<name sortKey="Gori, Monica" sort="Gori, Monica" uniqKey="Gori M" first="Monica" last="Gori">Monica Gori</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<institution>Robotics, Brain and Cognitive Sciences Department, Istituto Italiano di Tecnologia</institution>
<country>Genoa, Italy</country>
</nlm:aff>
<country xml:lang="fr">Italie</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Sandini, Giulio" sort="Sandini, Giulio" uniqKey="Sandini G" first="Giulio" last="Sandini">Giulio Sandini</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<institution>Robotics, Brain and Cognitive Sciences Department, Istituto Italiano di Tecnologia</institution>
<country>Genoa, Italy</country>
</nlm:aff>
<country xml:lang="fr">Italie</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Burr, David" sort="Burr, David" uniqKey="Burr D" first="David" last="Burr">David Burr</name>
<affiliation wicri:level="1">
<nlm:aff id="aff2">
<institution>Department of Psychology, University of Florence</institution>
<country>Florence, Italy</country>
</nlm:aff>
<country xml:lang="fr">Italie</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="aff3">
<institution>Institute of Neuroscience, National Research Council</institution>
<country>Pisa, Italy</country>
</nlm:aff>
<country xml:lang="fr">Italie</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PMC</idno>
<idno type="pmid">23060759</idno>
<idno type="pmc">3443931</idno>
<idno type="url">http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3443931</idno>
<idno type="RBID">PMC:3443931</idno>
<idno type="doi">10.3389/fnint.2012.00077</idno>
<date when="2012">2012</date>
<idno type="wicri:Area/Pmc/Corpus">001E15</idno>
<idno type="wicri:Area/Pmc/Curation">001E15</idno>
<idno type="wicri:Area/Pmc/Checkpoint">001786</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a" type="main">Development of Visuo-Auditory Integration in Space and Time</title>
<author>
<name sortKey="Gori, Monica" sort="Gori, Monica" uniqKey="Gori M" first="Monica" last="Gori">Monica Gori</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<institution>Robotics, Brain and Cognitive Sciences Department, Istituto Italiano di Tecnologia</institution>
<country>Genoa, Italy</country>
</nlm:aff>
<country xml:lang="fr">Italie</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Sandini, Giulio" sort="Sandini, Giulio" uniqKey="Sandini G" first="Giulio" last="Sandini">Giulio Sandini</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<institution>Robotics, Brain and Cognitive Sciences Department, Istituto Italiano di Tecnologia</institution>
<country>Genoa, Italy</country>
</nlm:aff>
<country xml:lang="fr">Italie</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Burr, David" sort="Burr, David" uniqKey="Burr D" first="David" last="Burr">David Burr</name>
<affiliation wicri:level="1">
<nlm:aff id="aff2">
<institution>Department of Psychology, University of Florence</institution>
<country>Florence, Italy</country>
</nlm:aff>
<country xml:lang="fr">Italie</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="aff3">
<institution>Institute of Neuroscience, National Research Council</institution>
<country>Pisa, Italy</country>
</nlm:aff>
<country xml:lang="fr">Italie</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
</analytic>
<series>
<title level="j">Frontiers in Integrative Neuroscience</title>
<idno type="eISSN">1662-5145</idno>
<imprint>
<date when="2012">2012</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass></textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">
<p>Adults integrate multisensory information optimally (e.g., Ernst and Banks,
<xref ref-type="bibr" rid="B8">2002</xref>
) while children do not integrate multisensory visual-haptic cues until 8–10 years of age (e.g., Gori et al.,
<xref ref-type="bibr" rid="B11">2008</xref>
). Before that age, strong unisensory dominance occurs for size and orientation visual-haptic judgments, possibly reflecting a process of cross-sensory calibration between modalities. It is widely recognized that audition dominates time perception, while vision dominates space perception. Within the framework of the cross-sensory calibration hypothesis, we investigate visual-auditory integration in both space and time with child-friendly spatial and temporal bisection tasks. Unimodal and bimodal (conflictual and non-conflictual) audio-visual thresholds and PSEs were measured and compared with the Bayesian predictions. In the temporal domain, we found that in both children and adults, audition dominates the bimodal visuo-auditory task, both for perceived time and for precision thresholds. In contrast, in the visual-auditory spatial task, children younger than 12 years of age show clear visual dominance (for PSEs) and bimodal thresholds higher than the Bayesian prediction. Only in the adult group did bimodal thresholds become optimal. In agreement with previous studies, our results suggest that adult-like visual-auditory behavior also develops late. We suggest that the visual dominance for space and the auditory dominance for time could reflect a cross-sensory comparison of vision in the spatial visuo-audio task and a cross-sensory comparison of audition in the temporal visuo-audio task.</p>
</div>
</front>
<back>
<div1 type="bibliography">
<listBibl>
<biblStruct>
<analytic>
<author>
<name sortKey="Alais, D" uniqKey="Alais D">D. Alais</name>
</author>
<author>
<name sortKey="Burr, D" uniqKey="Burr D">D. Burr</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Berger, T D" uniqKey="Berger T">T. D. Berger</name>
</author>
<author>
<name sortKey="Martelli, M" uniqKey="Martelli M">M. Martelli</name>
</author>
<author>
<name sortKey="Pelli, D G" uniqKey="Pelli D">D. G. Pelli</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Burr, D" uniqKey="Burr D">D. Burr</name>
</author>
<author>
<name sortKey="Banks, M S" uniqKey="Banks M">M. S. Banks</name>
</author>
<author>
<name sortKey="Morrone, M C" uniqKey="Morrone M">M. C. Morrone</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Burr, D" uniqKey="Burr D">D. Burr</name>
</author>
<author>
<name sortKey="Binda, P" uniqKey="Binda P">P. Binda</name>
</author>
<author>
<name sortKey="Gori, M" uniqKey="Gori M">M. Gori</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Burr, D" uniqKey="Burr D">D. Burr</name>
</author>
<author>
<name sortKey="Gori, M" uniqKey="Gori M">M. Gori</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Clarke, J J" uniqKey="Clarke J">J. J. Clarke</name>
</author>
<author>
<name sortKey="Yuille, A L" uniqKey="Yuille A">A. L. Yuille</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Efron, B" uniqKey="Efron B">B. Efron</name>
</author>
<author>
<name sortKey="Tibshirani, R J" uniqKey="Tibshirani R">R. J. Tibshirani</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ernst, M O" uniqKey="Ernst M">M. O. Ernst</name>
</author>
<author>
<name sortKey="Banks, M S" uniqKey="Banks M">M. S. Banks</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gebhard, J W" uniqKey="Gebhard J">J. W. Gebhard</name>
</author>
<author>
<name sortKey="Mowbray, G H" uniqKey="Mowbray G">G. H. Mowbray</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ghahramani, Z" uniqKey="Ghahramani Z">Z. Ghahramani</name>
</author>
<author>
<name sortKey="Wolpert, D M" uniqKey="Wolpert D">D. M. Wolpert</name>
</author>
<author>
<name sortKey="Jordan, M I" uniqKey="Jordan M">M. I. Jordan</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gori, M" uniqKey="Gori M">M. Gori</name>
</author>
<author>
<name sortKey="Del Viva, M" uniqKey="Del Viva M">M. Del Viva</name>
</author>
<author>
<name sortKey="Sandini, G" uniqKey="Sandini G">G. Sandini</name>
</author>
<author>
<name sortKey="Burr, D" uniqKey="Burr D">D. Burr</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gori, M" uniqKey="Gori M">M. Gori</name>
</author>
<author>
<name sortKey="Sandini, G" uniqKey="Sandini G">G. Sandini</name>
</author>
<author>
<name sortKey="Martinoli, C" uniqKey="Martinoli C">C. Martinoli</name>
</author>
<author>
<name sortKey="Burr, D" uniqKey="Burr D">D. Burr</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gori, M" uniqKey="Gori M">M. Gori</name>
</author>
<author>
<name sortKey="Sciutti, A" uniqKey="Sciutti A">A. Sciutti</name>
</author>
<author>
<name sortKey="Burr, D" uniqKey="Burr D">D. Burr</name>
</author>
<author>
<name sortKey="Sandini, G" uniqKey="Sandini G">G. Sandini</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gori, M" uniqKey="Gori M">M. Gori</name>
</author>
<author>
<name sortKey="Tinelli, F" uniqKey="Tinelli F">F. Tinelli</name>
</author>
<author>
<name sortKey="Sandini, G" uniqKey="Sandini G">G. Sandini</name>
</author>
<author>
<name sortKey="Cioni, G" uniqKey="Cioni G">G. Cioni</name>
</author>
<author>
<name sortKey="Burr, D" uniqKey="Burr D">D. Burr</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Landy, M S" uniqKey="Landy M">M. S. Landy</name>
</author>
<author>
<name sortKey="Banks, M S" uniqKey="Banks M">M. S. Banks</name>
</author>
<author>
<name sortKey="Knill, D C" uniqKey="Knill D">D. C. Knill</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mateeff, S" uniqKey="Mateeff S">S. Mateeff</name>
</author>
<author>
<name sortKey="Hohnsbein, J" uniqKey="Hohnsbein J">J. Hohnsbein</name>
</author>
<author>
<name sortKey="Noack, T" uniqKey="Noack T">T. Noack</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Nardini, M" uniqKey="Nardini M">M. Nardini</name>
</author>
<author>
<name sortKey="Bedford, R" uniqKey="Bedford R">R. Bedford</name>
</author>
<author>
<name sortKey="Mareschal, D" uniqKey="Mareschal D">D. Mareschal</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Nardini, M" uniqKey="Nardini M">M. Nardini</name>
</author>
<author>
<name sortKey="Jones, P" uniqKey="Jones P">P. Jones</name>
</author>
<author>
<name sortKey="Bedford, R" uniqKey="Bedford R">R. Bedford</name>
</author>
<author>
<name sortKey="Braddick, O" uniqKey="Braddick O">O. Braddick</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Pick, H L" uniqKey="Pick H">H. L. Pick</name>
</author>
<author>
<name sortKey="Warren, D H" uniqKey="Warren D">D. H. Warren</name>
</author>
<author>
<name sortKey="Hay, J C" uniqKey="Hay J">J. C. Hay</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Rose, D" uniqKey="Rose D">D. Rose</name>
</author>
<author>
<name sortKey="Summers, J" uniqKey="Summers J">J. Summers</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Sekuler, A B" uniqKey="Sekuler A">A. B. Sekuler</name>
</author>
<author>
<name sortKey="Sekuler, R" uniqKey="Sekuler R">R. Sekuler</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Shams, L" uniqKey="Shams L">L. Shams</name>
</author>
<author>
<name sortKey="Kamitani, Y" uniqKey="Kamitani Y">Y. Kamitani</name>
</author>
<author>
<name sortKey="Shimojo, S" uniqKey="Shimojo S">S. Shimojo</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Shams, L" uniqKey="Shams L">L. Shams</name>
</author>
<author>
<name sortKey="Kamitani, Y" uniqKey="Kamitani Y">Y. Kamitani</name>
</author>
<author>
<name sortKey="Thompson, S" uniqKey="Thompson S">S. Thompson</name>
</author>
<author>
<name sortKey="Shimojo, S" uniqKey="Shimojo S">S. Shimojo</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Shipley, T" uniqKey="Shipley T">T. Shipley</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Tomassini, A" uniqKey="Tomassini A">A. Tomassini</name>
</author>
<author>
<name sortKey="Gori, M" uniqKey="Gori M">M. Gori</name>
</author>
<author>
<name sortKey="Burr, D" uniqKey="Burr D">D. Burr</name>
</author>
<author>
<name sortKey="Sandini, G" uniqKey="Sandini G">G. Sandini</name>
</author>
<author>
<name sortKey="Morrone, M C" uniqKey="Morrone M">M. C. Morrone</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Tse, P" uniqKey="Tse P">P. Tse</name>
</author>
<author>
<name sortKey="Intriligator, J" uniqKey="Intriligator J">J. Intriligator</name>
</author>
<author>
<name sortKey="Rivest, J" uniqKey="Rivest J">J. Rivest</name>
</author>
<author>
<name sortKey="Cavanagh, P" uniqKey="Cavanagh P">P. Cavanagh</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Warren, D H" uniqKey="Warren D">D. H. Warren</name>
</author>
<author>
<name sortKey="Welch, R B" uniqKey="Welch R">R. B. Welch</name>
</author>
<author>
<name sortKey="Mccarthy, T J" uniqKey="Mccarthy T">T. J. McCarthy</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Watson, A B" uniqKey="Watson A">A. B. Watson</name>
</author>
<author>
<name sortKey="Pelli, D G" uniqKey="Pelli D">D. G. Pelli</name>
</author>
</analytic>
</biblStruct>
</listBibl>
</div1>
</back>
</TEI>
<pmc article-type="research-article">
<pmc-dir>properties open_access</pmc-dir>
<front>
<journal-meta>
<journal-id journal-id-type="nlm-ta">Front Integr Neurosci</journal-id>
<journal-id journal-id-type="iso-abbrev">Front Integr Neurosci</journal-id>
<journal-id journal-id-type="publisher-id">Front. Integr. Neurosci.</journal-id>
<journal-title-group>
<journal-title>Frontiers in Integrative Neuroscience</journal-title>
</journal-title-group>
<issn pub-type="epub">1662-5145</issn>
<publisher>
<publisher-name>Frontiers Research Foundation</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="pmid">23060759</article-id>
<article-id pub-id-type="pmc">3443931</article-id>
<article-id pub-id-type="doi">10.3389/fnint.2012.00077</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Neuroscience</subject>
<subj-group>
<subject>Original Research</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Development of Visuo-Auditory Integration in Space and Time</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name>
<surname>Gori</surname>
<given-names>Monica</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
<xref ref-type="author-notes" rid="fn001">*</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Sandini</surname>
<given-names>Giulio</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Burr</surname>
<given-names>David</given-names>
</name>
<xref ref-type="aff" rid="aff2">
<sup>2</sup>
</xref>
<xref ref-type="aff" rid="aff3">
<sup>3</sup>
</xref>
</contrib>
</contrib-group>
<aff id="aff1">
<sup>1</sup>
<institution>Robotics, Brain and Cognitive Sciences Department, Istituto Italiano di Tecnologia</institution>
<country>Genoa, Italy</country>
</aff>
<aff id="aff2">
<sup>2</sup>
<institution>Department of Psychology, University of Florence</institution>
<country>Florence, Italy</country>
</aff>
<aff id="aff3">
<sup>3</sup>
<institution>Institute of Neuroscience, National Research Council</institution>
<country>Pisa, Italy</country>
</aff>
<author-notes>
<fn fn-type="edited-by">
<p>Edited by: Zhuanghua Shi, Ludwig-Maximilians-Universität München, Germany</p>
</fn>
<fn fn-type="edited-by">
<p>Reviewed by: David Alais, University of Sydney, Australia; Tino Just, University of Rostock, Germany</p>
</fn>
<corresp id="fn001">*Correspondence: Monica Gori, Robotics, Brain and Cognitive Sciences Department, Istituto Italiano di Tecnologia, via Morego 30, 16163 Genoa, Italy. e-mail:
<email xlink:type="simple">monica.gori@iit.it</email>
</corresp>
</author-notes>
<pub-date pub-type="epreprint">
<day>04</day>
<month>6</month>
<year>2012</year>
</pub-date>
<pub-date pub-type="epub">
<day>17</day>
<month>9</month>
<year>2012</year>
</pub-date>
<pub-date pub-type="collection">
<year>2012</year>
</pub-date>
<volume>6</volume>
<elocation-id>77</elocation-id>
<history>
<date date-type="received">
<day>04</day>
<month>5</month>
<year>2012</year>
</date>
<date date-type="accepted">
<day>29</day>
<month>8</month>
<year>2012</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright © 2012 Gori, Sandini and Burr.</copyright-statement>
<copyright-year>2012</copyright-year>
<license license-type="open-access" xlink:href="http://www.frontiersin.org/licenseagreement">
<license-p>This is an open-access article distributed under the terms of the
<uri xlink:type="simple" xlink:href="http://creativecommons.org/licenses/by/3.0/">Creative Commons Attribution License</uri>
, which permits use, distribution and reproduction in other forums, provided the original authors and source are credited and subject to any copyright notices concerning any third-party graphics etc.</license-p>
</license>
</permissions>
<abstract>
<p>Adults integrate multisensory information optimally (e.g., Ernst and Banks,
<xref ref-type="bibr" rid="B8">2002</xref>
) while children do not integrate multisensory visual-haptic cues until 8–10 years of age (e.g., Gori et al.,
<xref ref-type="bibr" rid="B11">2008</xref>
). Before that age, strong unisensory dominance occurs for size and orientation visual-haptic judgments, possibly reflecting a process of cross-sensory calibration between modalities. It is widely recognized that audition dominates time perception, while vision dominates space perception. Within the framework of the cross-sensory calibration hypothesis, we investigate visual-auditory integration in both space and time with child-friendly spatial and temporal bisection tasks. Unimodal and bimodal (conflictual and non-conflictual) audio-visual thresholds and PSEs were measured and compared with the Bayesian predictions. In the temporal domain, we found that in both children and adults, audition dominates the bimodal visuo-auditory task, both for perceived time and for precision thresholds. In contrast, in the visual-auditory spatial task, children younger than 12 years of age show clear visual dominance (for PSEs) and bimodal thresholds higher than the Bayesian prediction. Only in the adult group did bimodal thresholds become optimal. In agreement with previous studies, our results suggest that adult-like visual-auditory behavior also develops late. We suggest that the visual dominance for space and the auditory dominance for time could reflect a cross-sensory comparison of vision in the spatial visuo-audio task and a cross-sensory comparison of audition in the temporal visuo-audio task.</p>
</abstract>
<kwd-group>
<kwd>audio</kwd>
<kwd>bisection</kwd>
<kwd>development</kwd>
<kwd>integration</kwd>
<kwd>multisensory</kwd>
<kwd>space</kwd>
<kwd>time</kwd>
<kwd>visual</kwd>
</kwd-group>
<counts>
<fig-count count="8"></fig-count>
<table-count count="0"></table-count>
<equation-count count="7"></equation-count>
<ref-count count="28"></ref-count>
<page-count count="8"></page-count>
<word-count count="6687"></word-count>
</counts>
</article-meta>
</front>
<body>
<sec>
<title>Introduction</title>
<p>Multisensory integration is fundamental for our interaction with the world. Many recent studies show that our brain is able to integrate unisensory signals in a statistically optimal fashion, weighting each sense according to its reliability (Clarke and Yuille,
<xref ref-type="bibr" rid="B6">1990</xref>
; Ghahramani et al.,
<xref ref-type="bibr" rid="B10">1997</xref>
; Ernst and Banks,
<xref ref-type="bibr" rid="B8">2002</xref>
; Alais and Burr,
<xref ref-type="bibr" rid="B1">2004</xref>
; Landy et al.,
<xref ref-type="bibr" rid="B15">2011</xref>
). However, children do not integrate unisensory information optimally until late (Gori et al.,
<xref ref-type="bibr" rid="B11">2008</xref>
; Nardini et al.,
<xref ref-type="bibr" rid="B18">2008</xref>
,
<xref ref-type="bibr" rid="B17">2010</xref>
). We recently showed that in a visual-haptic integration task (similar to that used by Ernst and Banks,
<xref ref-type="bibr" rid="B8">2002</xref>
) children younger than 8 years of age show unisensory dominance rather than bimodal integration, and the modality that dominates is task-specific: the haptic modality dominates bimodal size perception and the visual modality dominates bimodal orientation perception (Gori et al.,
<xref ref-type="bibr" rid="B11">2008</xref>
). This dominance could reflect a process of cross-sensory calibration, where in the developing brain the most robust modality is used to calibrate the others (see Burr and Gori,
<xref ref-type="bibr" rid="B5">2011</xref>
for a discussion of this idea). It has been suggested that vision calibrates touch for orientation judgments, and touch calibrates vision for size judgments. A good deal of evidence suggests that this calibration process may be fundamental for acquiring specific perceptual concepts: in particular, we have shown that impairment of the system that should calibrate the other affects the modality that needs calibration (Gori et al.,
<xref ref-type="bibr" rid="B12">2010</xref>
,
<xref ref-type="bibr" rid="B14">2012</xref>
).</p>
<p>If the communication between sensory modalities has a fundamental role in the development of multisensory function, then we should find different forms of calibration for different dimensions, such as space and time. For example, the visual system is the most accurate sense for spatial judgments, so it should be the most influential modality for cross-modal calibration of spatial perception during development. Many studies in adults support this idea, showing that when the spatial locations of audio and visual stimuli are in conflict, vision usually dominates, resulting in the so-called “ventriloquist effect” (Warren et al.,
<xref ref-type="bibr" rid="B27">1981</xref>
; Mateeff et al.,
<xref ref-type="bibr" rid="B16">1985</xref>
). In adults the ventriloquist effect has been explained as the result of optimal cue-combination where each cue is weighted according to its statistical reliability. Vision dominates perceived location because it specifies location more reliably than audition does (Alais and Burr,
<xref ref-type="bibr" rid="B1">2004</xref>
). The auditory system, on the other hand, is the most precise sense for temporal judgments (Burr et al.,
<xref ref-type="bibr" rid="B3">2009</xref>
), so it seems reasonable that it should be the most influential modality in calibrating temporal perception during development. In agreement with this idea, studies in adults show that when a flashed spot is accompanied by two beeps, it appears to flash twice (Shams et al.,
<xref ref-type="bibr" rid="B22">2000</xref>
). Furthermore, the apparent multiple flashes actually had lower discrimination thresholds (Berger et al.,
<xref ref-type="bibr" rid="B2">2003</xref>
). Also the apparent frequency of a flickering visual stimulus can be driven up or down by an accompanying auditory stimulus presented at a different rate (Gebhard and Mowbray,
<xref ref-type="bibr" rid="B9">1959</xref>
; Shipley,
<xref ref-type="bibr" rid="B24">1964</xref>
), audition dominates in audio-visual time bisection tasks (Burr et al.,
<xref ref-type="bibr" rid="B3">2009</xref>
), and in general audition seems to affect the interpretation of a visual stimulus also under many other conditions (e.g., see Sekuler and Sekuler,
<xref ref-type="bibr" rid="B21">1999</xref>
; Shams et al.,
<xref ref-type="bibr" rid="B23">2001</xref>
).</p>
<p>All these results suggest that in adults visual information plays a fundamental role in multisensory space perception, and that audition is fundamental for temporal perception. Like adults, children are immersed in a multisensory world but, as mentioned above, unlike adults they do not integrate optimally across the senses until fairly late in development, at about 8 years of age (Gori et al.,
<xref ref-type="bibr" rid="B11">2008</xref>
) and some unisensory information seems to be particularly important for the development of specific perceptual abilities (Gori et al.,
<xref ref-type="bibr" rid="B11">2008</xref>
,
<xref ref-type="bibr" rid="B12">2010</xref>
,
<xref ref-type="bibr" rid="B13">2011</xref>
; Burr and Gori,
<xref ref-type="bibr" rid="B5">2011</xref>
; Burr et al.,
<xref ref-type="bibr" rid="B4">2011</xref>
). If the cross-sensory calibration process is necessary for development, then the auditory modality should calibrate vision in a bimodal temporal task, and the visual modality should calibrate audition in a bimodal spatial task. To test this idea we measured visual-auditory integration during development in both the temporal and the spatial domains. To compare the results between the two domains we used a bisection task, both in space and in time, to study the relative contributions of visual and auditory stimuli to the perceived timing and position of sensory events. For the spatial task we reproduced in 48 children and adults a child-friendly version of the ventriloquist stimuli used by Alais and Burr (
<xref ref-type="bibr" rid="B1">2004</xref>
). For the temporal task we reproduced in 57 children and adults a child-friendly version of the stimulus used by Burr et al. (
<xref ref-type="bibr" rid="B3">2009</xref>
). We also tested whether, and at what age, the relative contributions of vision and audition can be explained by optimal cue-combination (Ernst and Banks,
<xref ref-type="bibr" rid="B8">2002</xref>
; Alais and Burr,
<xref ref-type="bibr" rid="B1">2004</xref>
; Landy et al.,
<xref ref-type="bibr" rid="B15">2011</xref>
).</p>
</sec>
<sec sec-type="materials|methods">
<title>Materials and Methods</title>
<sec>
<title>Audio-visual temporal bisection task</title>
<p>Fifty-seven children and adults performed the unimodal and bimodal temporal bisection tasks (illustrated in Figure
<xref ref-type="fig" rid="F2">2</xref>
A). All stimuli were delivered within a child-friendly setup (Figures
<xref ref-type="fig" rid="F1">1</xref>
A,B). The child was positioned in front of the setup and observed a sequence of three lights (red, green, and yellow, positioned on the nose of a cartoon clown; Figure
<xref ref-type="fig" rid="F1">1</xref>
B), listened to a sequence of sounds (produced by speakers spatially aligned with the lights; Figure
<xref ref-type="fig" rid="F1">1</xref>
B), or both. Three stimuli (visual, auditory, or both) were presented in succession for a total duration of 1000 ms, and the observer reported whether the middle stimulus appeared closer in time to the first or the third stimulus. To help the children understand the task and the response, they were shown a cartoon with a schematic representation of the two possible responses. In the visual task the subject saw a sequence of three lights: the first was always red, the second yellow, and the third green. The subject had to report whether the yellow light appeared closer in time to the first or the last one (Figure
<xref ref-type="fig" rid="F2">2</xref>
A, upper panel). In the auditory task the subject had to report whether the second sound was presented closer in time to the first or the third one (Figure
<xref ref-type="fig" rid="F2">2</xref>
A, middle panel). In the bimodal task the subject was presented with a sequence of three lights paired with three sounds (Figure
<xref ref-type="fig" rid="F2">2</xref>
A, bottom panel). The sequence of light presentation was identical to that in the visual task. The visual and auditory stimuli could be presented in conflict or not (Δ = −100; 0; 100 ms). The procedure was similar to that used by Burr et al. (
<xref ref-type="bibr" rid="B3">2009</xref>
). In the bimodal condition, all three stimuli carried an audio-visual offset, in which the auditory stimulus preceded or followed the visual stimulus. For the second stimulus the conflict was Δ ms (Δ = −50; 0; 50 ms), while for the first and the third stimuli the offset was inverted in sign (−Δ ms).</p>
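<p>As a rough illustration of the timing scheme described above, the following sketch (not the authors' code; applying the offset to the auditory rather than the visual stream is an assumption) generates the onset times for one bimodal trial:</p>
<preformat>
# Sketch of the audio-visual timing in the bimodal temporal bisection task.
# The three-stimulus sequence spans 1000 ms; the second (probe) stimulus carries
# a conflict of +delta ms, while the first and third carry -delta ms
# (assumed here to be applied to the auditory stream; values in ms).

def bimodal_onsets(t_probe, delta):
    """Return (visual_onsets, auditory_onsets) for one bimodal trial."""
    visual = [0.0, t_probe, 1000.0]                        # first, probe, last
    auditory = [0.0 - delta, t_probe + delta, 1000.0 - delta]
    return visual, auditory

# Example: probe presented at 450 ms with a 50-ms audio-visual conflict.
vis, aud = bimodal_onsets(450.0, 50.0)
</preformat>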
<fig id="F1" position="float">
<label>Figure 1</label>
<caption>
<p>
<bold>(A)</bold>
Representation of the setup used for the temporal bisection task while a subject is tested.
<bold>(B)</bold>
Image showing the setup used for the temporal bisection task. Three lights are presented in front and two speakers are positioned behind.
<bold>(C)</bold>
Representation of the setup used for the space bisection task. The blurring panel was positioned in front of the speakers so that the subject could not see the speakers behind it. For illustrative purposes this has been replaced with a transparent panel to show the speakers.</p>
</caption>
<graphic xlink:href="fnint-06-00077-g001"></graphic>
</fig>
<fig id="F2" position="float">
<label>Figure 2</label>
<caption>
<p>
<bold>(A)</bold>
Temporal bisection task. Representation of the visual stimulation (upper panel), auditory stimulation (middle panel), and bimodal conflictual and not conflictual visual-auditory stimulation (bottom panel).
<bold>(B)</bold>
Spatial bisection task. Representation of the visual stimulation (upper panel), auditory stimulation (middle panel), and bimodal conflictual and not conflictual visual-auditory stimulation (bottom panel). The subject was aligned with the speaker in the middle (number 12).</p>
</caption>
<graphic xlink:href="fnint-06-00077-g002"></graphic>
</fig>
<p>The visual stimuli were 1° diameter LEDs displayed for 74 ms. Auditory stimuli were tones (750 Hz) displayed for 75 ms. Accurate timing of the visual and auditory stimuli was ensured by setting system priority to maximum during stimulus presentation, avoiding interrupts from other processes (and checking synchrony by recording with a microphone and a light sensor). The presentation program waited for a frame-synchronization pulse, then launched the visual and auditory signals. Before collecting data, subjects were familiarized with the task with two training sessions of 10 trials each (one visual and one audio). Subjects indicated after each presentation of the three stimuli whether the second appeared earlier or later than the midpoint between the first and third stimuli. We provided feedback during these training sessions so observers could learn the task and minimize errors in their responses. No feedback was given after the training sessions. During the experiment proper, five different conditions were intermingled within each session: vision only, auditory only, and three audio-visual conditions. Each session comprised a total of 150 trials (30 for each condition). The time of presentation of the probe was varied by independent QUEST routines (Watson and Pelli,
<xref ref-type="bibr" rid="B28">1983</xref>
). Three QUESTs were run simultaneously in the conflict conditions (and one in each of the unisensory conditions). The timing of the second stimulus was adjusted with the QUEST algorithm (Watson and Pelli,
<xref ref-type="bibr" rid="B28">1983</xref>
) to home in on the perceived point of bisection of the first and third stimuli. The timing for each trial was given by this QUEST estimate, plus a random offset drawn from a Gaussian distribution. This procedure ensured that the psychometric function was well sampled at the best point for estimating both the PSE and the slope of the function, as well as giving observers a few “easy” trials from time to time. Also, as the Gaussian offset was centered at zero, it ensured roughly equal numbers of “closer to first” and “closer to third” responses. Data for each condition were fitted by cumulative Gaussians, yielding PSE and threshold estimates from the mean and standard deviation of the best-fitting function, respectively. Standard errors for the PSE and threshold estimates were obtained by bootstrapping (Efron and Tibshirani,
<xref ref-type="bibr" rid="B7">1993</xref>
). One hundred iterations of bootstrapping were used and the standard error was the standard deviation of the bootstrap distribution. All conflict conditions were used to obtain the two-cue threshold estimates. Both unimodal and bimodal (conflict or not) audio-visual thresholds and PSEs were compared with the prediction of the Bayesian optimal-integration model.</p>
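<p>The fitting and bootstrapping steps described above can be sketched as follows. This is an illustrative reconstruction rather than the authors' analysis code; the use of numpy/scipy and all function names are our own assumptions:</p>
<preformat>
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def cum_gauss(x, pse, sigma):
    # Cumulative Gaussian psychometric function: probability of reporting the
    # second stimulus as closer to the third.
    return norm.cdf(x, loc=pse, scale=abs(sigma) + 1e-9)

def fit_psychometric(x, resp):
    # x: probe offsets (ms or deg); resp: binary responses (1 = "closer to third").
    p0 = [np.median(x), np.std(x)]
    (pse, sigma), _ = curve_fit(cum_gauss, x, resp, p0=p0, maxfev=10000)
    return pse, abs(sigma)                  # mean = PSE, SD = threshold

def bootstrap_se(x, resp, n_boot=100):
    # Resample trials with replacement, refit, and take the SD of the estimates.
    x, resp = np.asarray(x, float), np.asarray(resp, float)
    fits = []
    for _ in range(n_boot):
        idx = np.random.randint(0, len(x), len(x))
        fits.append(fit_psychometric(x[idx], resp[idx]))
    fits = np.array(fits)
    return fits[:, 0].std(), fits[:, 1].std()   # SE of PSE, SE of threshold
</preformat>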
</sec>
<sec>
<title>Audio-visual spatial bisection task</title>
<p>Forty-eight children and adults performed the unimodal and bimodal spatial bisection tasks (illustrated in Figure
<xref ref-type="fig" rid="F2">2</xref>
B). Stimuli were presented with a child-friendly setup (Figure
<xref ref-type="fig" rid="F1">1</xref>
C), which displayed a sequence of three red lights, three sounds, or both. The setup comprised 23 speakers, each with a red LED in front of it, which projected onto a white screen in front of the speaker array, yielding a blurred blob of 14° diameter at half-height (see Figure
<xref ref-type="fig" rid="F1">1</xref>
C). The room was otherwise completely dark. The audio stimulus was identical to that used for the temporal bisection task (see previous section). The subject was seated 75 cm from the screen, so that the speaker array subtended 102° (each speaker subtending about 4.5°). The child was positioned in front of the central speaker (number 12). Three stimuli (visual, auditory, or both) were presented in succession for a total duration of 1000 ms (identical to the duration used in the temporal bisection task), with the second stimulus always occurring 500 ms after the first. Observers reported whether the middle stimulus appeared closer in space to the first or the third stimulus (corresponding to the speakers at the extremes of the array; see Figure
<xref ref-type="fig" rid="F1">1</xref>
C).</p>
<p>In the unisensory visual and auditory tasks, subjects were presented with a sequence of three lights or three sounds (Figure
<xref ref-type="fig" rid="F2">2</xref>
B, upper and middle panels). In the bimodal task they were presented with a sequence of three lights paired with three sounds (Figure
<xref ref-type="fig" rid="F2">2</xref>
B, bottom panel). The second stimulus was presented in conflict: the standard now comprised visual and auditory stimuli positioned in different locations, with the visual stimulus at the central position +Δ° and the auditory stimulus at the central position −Δ° (Δ = 0, ±4.5°, or ±9°). For the first and the last stimuli, the auditory and visual components were presented aligned, with no spatial conflict. The position of the second stimulus was adjusted with the QUEST algorithm, as for the temporal task. The durations of the auditory and visual stimuli were both 75 ms.</p>
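<p>As an illustration of the spatial layout and conflict (a sketch only; the conversion from speaker index to degrees is derived from the 102° span reported above and is an approximation):</p>
<preformat>
SPEAKER_SPACING_DEG = 102.0 / 22      # 23 speakers spanning about 102 degrees
CENTER_SPEAKER = 12

def speaker_to_deg(index):
    # Position of a speaker relative to the central one (number 12), in degrees.
    return (index - CENTER_SPEAKER) * SPEAKER_SPACING_DEG

def bimodal_second_stimulus(center_deg, delta_deg):
    # Second stimulus in conflict: visual component at +Delta, auditory at -Delta.
    return {"visual": center_deg + delta_deg, "auditory": center_deg - delta_deg}
</preformat>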
<p>Before collecting data, subjects were familiarized with the task with two training sessions of 10 trials each (one visual and the other audio). To help the child understand the task and the response, during the training phase the child was shown an image of two cartoon monkeys (one red and one green), the red one positioned on the left, near the first speaker, and the green one on the right, near the last speaker (number 23). The child had to report whether the second light was closer to the position of the red or the green monkey. Subjects indicated after each presentation of the three stimuli whether the second appeared closer in space to the first or to the third stimulus. We provided feedback during these training sessions so observers could learn the task and minimize errors in their responses. No feedback was given after the training sessions.</p>
<p>During the experiment proper, seven different conditions were intermingled within each session: vision only, auditory only, and five two-cue conditions. Each session comprised a total of 210 trials (30 for each condition). As before, data for each condition were fitted with cumulative Gaussians, yielding PSE and threshold estimates from the mean and standard deviation of the best-fitting function, respectively. Standard errors for the PSE and threshold estimates were obtained by bootstrapping (Efron and Tibshirani,
<xref ref-type="bibr" rid="B7">1993</xref>
). All conflict conditions were used to obtain the bimodal threshold estimates. Both unimodal and bimodal (conflictual or not) audio-visual thresholds and PSEs were compared with the prediction of the Bayesian optimal-integration model.</p>
<p>In bisection tasks, there are often constant biases, particularly for temporal judgments: the first interval tends to appear longer than the second (Rose and Summers,
<xref ref-type="bibr" rid="B20">1995</xref>
; Tse et al.,
<xref ref-type="bibr" rid="B26">2004</xref>
). These constant biases were of little interest to the current experiment, so we eliminated them by subtracting the PSE for the zero-conflict condition from each PSE estimate.</p>
<p>No children with hearing or vision impairments participated in the two tests. We excluded from data analysis children who were not able to respond correctly in at least 7 of 10 trials in the training condition (in which the distance between the standard and the comparison was maximal and the task was presented in its simplest version).</p>
</sec>
<sec>
<title>Bayesian predictions</title>
<p>The MLE prediction for the visuo-auditory threshold σ
<sub>VA</sub>
is given by:</p>
<disp-formula id="E1">
<label>(1)</label>
<mml:math id="M6">
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mstyle class="text">
<mml:mtext>VA</mml:mtext>
</mml:mstyle>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mstyle class="text">
<mml:mtext>V</mml:mtext>
</mml:mstyle>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mstyle class="text">
<mml:mtext>A</mml:mtext>
</mml:mstyle>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mstyle class="text">
<mml:mtext>V</mml:mtext>
</mml:mstyle>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:mo class="MathClass-bin">+</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mstyle class="text">
<mml:mtext>A</mml:mtext>
</mml:mstyle>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:mfrac>
<mml:mo class="MathClass-rel">≤</mml:mo>
<mml:mo class="qopname">min</mml:mo>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mstyle class="text">
<mml:mtext>V</mml:mtext>
</mml:mstyle>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mstyle class="text">
<mml:mtext>A</mml:mtext>
</mml:mstyle>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:mfenced>
</mml:math>
</disp-formula>
<p>where σ
<sub>V</sub>
and σ
<sub>A</sub>
are the visual and auditory unimodal thresholds. The improvement is greatest (√2) when σ
<sub>V</sub>
= σ
<sub>A</sub>
.</p>
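<p>Eq. 1 can be computed directly from the two unimodal thresholds; a minimal sketch (the function name is ours):</p>
<preformat>
import numpy as np

def mle_bimodal_threshold(sigma_v, sigma_a):
    # Eq. 1: predicted visuo-auditory threshold from the unimodal thresholds.
    return np.sqrt((sigma_v**2 * sigma_a**2) / (sigma_v**2 + sigma_a**2))

# Equal unimodal thresholds give the maximal sqrt(2) improvement:
print(mle_bimodal_threshold(10.0, 10.0))   # ~7.07, i.e. 10 / sqrt(2)
</preformat>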
<p>The MLE calculation also assumes that, for both time and space judgments, the optimal bimodal estimate of PSE
<inline-formula>
<mml:math id="M1">
<mml:mrow>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>Ŝ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mstyle class="text">
<mml:mtext>AV</mml:mtext>
</mml:mstyle>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:math>
</inline-formula>
is given by the weighted sum of the independent audio and visual estimates
<inline-formula>
<mml:math id="M2">
<mml:mrow>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>Ŝ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mstyle class="text">
<mml:mtext>V</mml:mtext>
</mml:mstyle>
</mml:mrow>
</mml:msub>
<mml:mspace width="2.77695pt" class="tmspace"></mml:mspace>
<mml:mstyle class="text">
<mml:mtext>and</mml:mtext>
</mml:mstyle>
<mml:mspace width="2.77695pt" class="tmspace"></mml:mspace>
<mml:msub>
<mml:mrow>
<mml:mi>Ŝ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mstyle class="text">
<mml:mtext>A</mml:mtext>
</mml:mstyle>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:math>
</inline-formula>
.</p>
<disp-formula id="E2">
<label>(2)</label>
<mml:math id="M7">
<mml:msub>
<mml:mrow>
<mml:mi>Ŝ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mstyle class="text">
<mml:mtext>VA</mml:mtext>
</mml:mstyle>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mstyle class="text">
<mml:mtext>V</mml:mtext>
</mml:mstyle>
</mml:mrow>
</mml:msub>
<mml:msub>
<mml:mrow>
<mml:mi>Ŝ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mstyle class="text">
<mml:mtext>V</mml:mtext>
</mml:mstyle>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-bin">+</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mstyle class="text">
<mml:mtext>A</mml:mtext>
</mml:mstyle>
</mml:mrow>
</mml:msub>
<mml:msub>
<mml:mrow>
<mml:mi>Ŝ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mstyle class="text">
<mml:mtext>A</mml:mtext>
</mml:mstyle>
</mml:mrow>
</mml:msub>
</mml:math>
</disp-formula>
<p>where the weights
<italic>w</italic>
<sub>V</sub>
and
<italic>w</italic>
<sub>A</sub>
sum to unity and are inversely proportional to the variance (σ
<sup>2</sup>
) of the underlying noise distribution, assessed from the standard deviation σ of the Gaussian fit of the psychometric functions for visual and auditory judgments:</p>
<disp-formula id="E3">
<label>(3)</label>
<mml:math id="M8">
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mstyle class="text">
<mml:mtext>V</mml:mtext>
</mml:mstyle>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mstyle class="text">
<mml:mtext>A</mml:mtext>
</mml:mstyle>
</mml:mrow>
<mml:mrow>
<mml:mstyle class="text">
<mml:mtext>2</mml:mtext>
</mml:mstyle>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
<mml:mrow>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mstyle class="text">
<mml:mtext>A</mml:mtext>
</mml:mstyle>
</mml:mrow>
<mml:mrow>
<mml:mstyle class="text">
<mml:mtext>2</mml:mtext>
</mml:mstyle>
</mml:mrow>
</mml:msubsup>
<mml:mo class="MathClass-bin">+</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mstyle class="text">
<mml:mtext>V</mml:mtext>
</mml:mstyle>
</mml:mrow>
<mml:mrow>
<mml:mstyle class="text">
<mml:mtext>2</mml:mtext>
</mml:mstyle>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mfrac>
<mml:mo class="MathClass-punc">,</mml:mo>
<mml:mspace width="2.77695pt" class="tmspace"></mml:mspace>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mstyle class="text">
<mml:mtext>A</mml:mtext>
</mml:mstyle>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mstyle class="text">
<mml:mtext>V</mml:mtext>
</mml:mstyle>
</mml:mrow>
<mml:mrow>
<mml:mstyle class="text">
<mml:mtext>2</mml:mtext>
</mml:mstyle>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
<mml:mrow>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mstyle class="text">
<mml:mtext>A</mml:mtext>
</mml:mstyle>
</mml:mrow>
<mml:mrow>
<mml:mstyle class="text">
<mml:mtext>2</mml:mtext>
</mml:mstyle>
</mml:mrow>
</mml:msubsup>
<mml:mo class="MathClass-bin">+</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mstyle class="text">
<mml:mtext>V</mml:mtext>
</mml:mstyle>
</mml:mrow>
<mml:mrow>
<mml:mstyle class="text">
<mml:mtext>2</mml:mtext>
</mml:mstyle>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mfrac>
</mml:math>
</disp-formula>
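<p>A minimal sketch of Eqs. 2 and 3 (illustrative only; variable names are ours):</p>
<preformat>
def mle_weights(sigma_v, sigma_a):
    # Eq. 3: weights are inversely proportional to the unimodal variances and sum to one.
    w_v = sigma_a**2 / (sigma_a**2 + sigma_v**2)
    w_a = sigma_v**2 / (sigma_a**2 + sigma_v**2)
    return w_v, w_a

def predicted_bimodal_pse(s_v, s_a, sigma_v, sigma_a):
    # Eq. 2: optimal bimodal estimate as the reliability-weighted sum of the unimodal estimates.
    w_v, w_a = mle_weights(sigma_v, sigma_a)
    return w_v * s_v + w_a * s_a
</preformat>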
<p>To calculate the visual and auditory weights from the PSEs (Figure
<xref ref-type="fig" rid="F6">6</xref>
), we substituted the actual positions or times (relative to the standard) into Eq.
<xref ref-type="disp-formula" rid="E2">2</xref>
:</p>
<disp-formula id="E4">
<label>(4)</label>
<mml:math id="M9">
<mml:mi>Ŝ</mml:mi>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mi>Δ</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mstyle class="text">
<mml:mtext>V</mml:mtext>
</mml:mstyle>
</mml:mrow>
</mml:msub>
<mml:mi>Δ</mml:mi>
<mml:mo class="MathClass-bin">-</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mstyle class="text">
<mml:mtext>A</mml:mtext>
</mml:mstyle>
</mml:mrow>
</mml:msub>
<mml:mi>Δ</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo class="MathClass-bin">-</mml:mo>
<mml:mn>2</mml:mn>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mstyle class="text">
<mml:mtext>A</mml:mtext>
</mml:mstyle>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
<mml:mi>Δ</mml:mi>
</mml:math>
</disp-formula>
<p>The slope of the function is given by the first derivative:</p>
<disp-formula id="E5">
<label>(5)</label>
<mml:math id="M10">
<mml:mi>Ŝ</mml:mi>
<mml:msup>
<mml:mrow>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mi>Δ</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mrow>
<mml:mi></mml:mi>
</mml:mrow>
</mml:msup>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo class="MathClass-bin">-</mml:mo>
<mml:mn>2</mml:mn>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mstyle class="text">
<mml:mtext>A</mml:mtext>
</mml:mstyle>
</mml:mrow>
</mml:msub>
</mml:math>
</disp-formula>
<p>Rearranging:</p>
<disp-formula id="E6">
<label>(6)</label>
<mml:math id="M11">
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mstyle class="text">
<mml:mtext>A</mml:mtext>
</mml:mstyle>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo class="MathClass-bin">-</mml:mo>
<mml:mi>Ŝ</mml:mi>
<mml:msup>
<mml:mrow>
<mml:mrow>
<mml:mo class="MathClass-open">(</mml:mo>
<mml:mrow>
<mml:mi>Δ</mml:mi>
</mml:mrow>
<mml:mo class="MathClass-close">)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mrow>
<mml:mi></mml:mi>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:mfrac>
</mml:math>
</disp-formula>
<p>The slope
<inline-formula>
<mml:math id="M3">
<mml:mrow>
<mml:mi>Ŝ</mml:mi>
<mml:msup>
<mml:mrow>
<mml:mrow>
<mml:mo class="MathClass-open">(</mml:mo>
<mml:mrow>
<mml:mi>Δ</mml:mi>
</mml:mrow>
<mml:mo class="MathClass-close">)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mrow>
<mml:mi></mml:mi>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula>
was calculated by linear regression of PSEs for all values of Δ, separately for each child and each condition.</p>
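<p>A sketch of how the auditory weight can be recovered from the conflict data following Eqs. 4–6 (assuming numpy; the example values are hypothetical):</p>
<preformat>
import numpy as np

def auditory_weight_from_pses(deltas, pses):
    # Linear regression of PSE against conflict Delta gives the slope (Eq. 5);
    # Eq. 6 then gives the auditory weight w_A = (1 - slope) / 2.
    slope, _intercept = np.polyfit(deltas, pses, 1)
    return (1.0 - slope) / 2.0

# Hypothetical data: Delta = -50, 0, 50 ms with PSEs following the auditory standard.
w_a = auditory_weight_from_pses([-50, 0, 50], [45, 0, -48])   # w_a close to 1
</preformat>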
<p>The data of Figure
<xref ref-type="fig" rid="F5">5</xref>
show, as a function of age, the proportion of the variance of the PSE data explained by the MLE model. The explained variance
<italic>R</italic>
<sup>2</sup>
was calculated by:</p>
<disp-formula id="E7">
<label>(7)</label>
<mml:math id="M12">
<mml:msup>
<mml:mrow>
<mml:mi>R</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msup>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo class="MathClass-bin">-</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mover accent="true">
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mo class="MathClass-op">^</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msup>
<mml:mo class="MathClass-bin">+</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:mfrac>
<mml:mo class="MathClass-bin">⋅</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mi>N</mml:mi>
</mml:mrow>
</mml:mfrac>
<mml:msup>
<mml:mrow>
<mml:mo class="MathClass-bin"></mml:mo>
</mml:mrow>
<mml:mrow>
<mml:munderover accentunder="false" accent="false">
<mml:mrow>
<mml:mo mathsize="big">∑</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo class="MathClass-rel">=</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mi>N</mml:mi>
</mml:mrow>
</mml:munderover>
<mml:msup>
<mml:mrow>
<mml:mfenced separators="" open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>S</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo class="MathClass-bin">-</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>Ŝ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:msup>
</mml:math>
</disp-formula>
<p>Where
<italic>N</italic>
is the total number of PSE values for each specific age group (all children and all values of Δ),
<italic>S
<sub>i</sub>
</italic>
the individual PSEs for time and space,
<inline-formula>
<mml:math id="M4">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>Ŝ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>
is the predicted PSE for each specific condition,
<inline-formula>
<mml:math id="M5">
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mover accent="true">
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mo class="MathClass-op">^</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula>
is the variance associated with the predicted PSEs and σ
<sup>2</sup>
the variance associated with the measured PSEs.
<italic>R</italic>
<sup>2 </sup>
= 1 implies that the model explains all the variance of the data,
<italic>R</italic>
<sup>2 </sup>
= 0 implies that it does no better (or worse) than the mean, and
<italic>R</italic>
<sup>2 </sup>
< 0 implies that the model is worse than the mean.</p>
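<p>Eq. 7 can be sketched as follows (illustrative only; variable names are ours):</p>
<preformat>
import numpy as np

def explained_variance(measured_pse, predicted_pse, var_measured, var_predicted):
    # Eq. 7: proportion of PSE variance explained by the MLE prediction.
    # measured_pse, predicted_pse: arrays with one entry per child and conflict level;
    # var_measured, var_predicted: variances associated with the measured and predicted PSEs.
    measured_pse = np.asarray(measured_pse, dtype=float)
    predicted_pse = np.asarray(predicted_pse, dtype=float)
    mse = np.mean((measured_pse - predicted_pse) ** 2)
    return 1.0 - mse / (var_predicted + var_measured)
</preformat>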
</sec>
</sec>
<sec>
<title>Results</title>
<p>Figure
<xref ref-type="fig" rid="F3">3</xref>
reports the PSEs for both temporal bisection (Figure
<xref ref-type="fig" rid="F3">3</xref>
A) and space bisection (Figure
<xref ref-type="fig" rid="F3">3</xref>
B). In both figures we corrected the PSEs for constant biases by subtracting from each conflictual PSE the PSE obtained in the non-conflictual condition. In the temporal bisection task (Figure
<xref ref-type="fig" rid="F3">3</xref>
A), PSEs tend to follow the green line, suggesting auditory dominance over vision. As may be expected, the results for the 5–7 age-group are noisier than the others, but the tendency is similar at all ages, particularly in the older age-groups. In the audio-visual spatial bisection task (Figure
<xref ref-type="fig" rid="F3">3</xref>
B) PSEs follow the visual standard (indicated by the red line), especially up to 12 years of age.</p>
<fig id="F3" position="float">
<label>Figure 3</label>
<caption>
<p>
<bold>(A)</bold>
PSEs measured for the different conflictual conditions in the temporal bisection task.
<bold>(B)</bold>
PSEs measured for the different conflictual conditions in the spatial bisection task. In both panels the green line represents total auditory dominance and the red line total visual dominance. Different ages are reported in different panels. The number of subjects who participated is indicated in each panel for each age and condition.
</caption>
<graphic xlink:href="fnint-06-00077-g003"></graphic>
</fig>
<p>To observe how much this behavior is predicted by the MLE model, we plotted in Figures
<xref ref-type="fig" rid="F4">4</xref>
A,B the PSEs measured against the PSEs predicted by the Bayesian model (Eq.
<xref ref-type="disp-formula" rid="E2">2</xref>
). Superimposition of the dots on the black line (equality line) would suggest that the behavior of the group is well predicted by the Bayesian model. From this graph we can observe that for the temporal bisection task (Figure
<xref ref-type="fig" rid="F4">4</xref>
A) the behavior becomes adult-like at about 8–9 years of age, when the dots lie close to (but not entirely superimposed on) the black equality line, as in the adult group. On the other hand, for the space bisection task, the dots lie on the equality line only in the adult group (Figure
<xref ref-type="fig" rid="F4">4</xref>
B).</p>
<fig id="F4" position="float">
<label>Figure 4</label>
<caption>
<p>
<bold>(A)</bold>
Measured against predicted PSEs for the different conflictual conditions in the temporal bisection task.
<bold>(B)</bold>
Measured against predicted PSEs for the different conflictual conditions in the spatial bisection task. In both panels the black line represents the prediction of the Bayesian model, i.e., optimal integration. Different ages are reported in different panels. The number of subjects who participated is indicated in each panel for each age and condition.</p>
</caption>
<graphic xlink:href="fnint-06-00077-g004"></graphic>
</fig>
<p>Figure
<xref ref-type="fig" rid="F5">5</xref>
summarizes how visuo-auditory integration develops with age. It plots the amount of variance (
<italic>R</italic>
<sup>2</sup>
) in PSEs explained by the MLE model. A value of 1 means that all the variance was explained by the model, 0 that the model performed as well as the mean, and less than 0 that it performed worse than the mean (see Eq.
<xref ref-type="disp-formula" rid="E7">7</xref>
). For both the spatial and temporal tasks, the MLE model explains a large proportion of the variance at all ages except the youngest (6-year-olds). For both space and time, in the 6-year-old group
<italic>R</italic>
<sup>2</sup>
 ≃ 0, suggesting that the model performed as well as the mean. The 8-year-old group shows a larger proportion of explained variance (
<italic>R</italic>
<sup>2</sup>
 > 0.5), but interestingly there is a dip in the curve at 10–12 years, showing less explained variance, especially for the space bisection task (
<italic>R</italic>
<sup>2</sup>
 < 0.5). In the adult group, more variance is explained by the MLE model in the space bisection task than in the time bisection task, suggesting better integration for the former.</p>
<fig id="F5" position="float">
<label>Figure 5</label>
<caption>
<p>
<bold>(A)</bold>
Proportion of variance (
<italic>R</italic>
<sup>2</sup>
) of the PSE data (Figure
<xref ref-type="fig" rid="F3">3</xref>
) for the time bisection task explained by the MLE model. A value of 1 means that all the variance was explained by the model, 0 that the model performed as well as the mean, and less than 0 that it performed worse than the mean (see Eq.
<xref ref-type="disp-formula" rid="E7">7</xref>
).
<bold>(B)</bold>
The same for the space bisection task.</p>
</caption>
<graphic xlink:href="fnint-06-00077-g005"></graphic>
</fig>
<p>We then calculated the audio and visual weights required for the Bayesian sum (Eq.
<xref ref-type="disp-formula" rid="E2">2</xref>
), separately from the estimates of PSEs (Eqs
<xref ref-type="disp-formula" rid="E4">4</xref>
<xref ref-type="disp-formula" rid="E6">6</xref>
) and from the estimates of unimodal thresholds (Eq.
<xref ref-type="disp-formula" rid="E3">3</xref>
). The results are plotted in Figure
<xref ref-type="fig" rid="F6">6</xref>
, showing auditory weights on the left ordinate and visual weights on the right (the two sum to unity). In general, for the time bisection (Figure
<xref ref-type="fig" rid="F6">6</xref>
A), the auditory weight for the PSEs was greater than that predicted by thresholds (points tend to fall to the right of the bisector). This occurred at all ages, but was clearest for the adults. Conversely, for the space bisection (Figure
<xref ref-type="fig" rid="F6">6</xref>
B), the PSEs had less auditory weight (more visual weight) than predicted by thresholds until adulthood.</p>
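<p>The two ways of estimating the weights can be illustrated with a short Python sketch: one estimate follows from the inverse-variance (MLE) rule applied to the unimodal thresholds, the other from the slope of the PSE vs. conflict function. The ±Δ/2 convention, the resulting relation w_A = 0.5 + slope, and the numbers below are illustrative assumptions, not the exact Eqs 3–6 of the paper.</p>

import numpy as np

def weights_from_thresholds(sigma_a, sigma_v):
    """Auditory and visual weights predicted from unimodal thresholds (MLE rule)."""
    w_a = sigma_v**2 / (sigma_a**2 + sigma_v**2)
    return w_a, 1.0 - w_a

def weights_from_pses(deltas, pses):
    """Empirical auditory weight from the slope of the PSE vs. conflict function,
    assuming the same +delta/2 / -delta/2 conflict convention as above, so that
    PSE = (w_a - w_v) * delta / 2 and hence w_a = 0.5 + slope."""
    slope = np.polyfit(np.asarray(deltas, float), np.asarray(pses, float), 1)[0]
    w_a = 0.5 + slope
    return w_a, 1.0 - w_a

# Hypothetical temporal data: PSEs follow the auditory standard almost completely
deltas = [-50, -25, 0, 25, 50]          # ms of conflict
pses   = [-22, -11, 0, 11, 22]          # ms of measured PSE shift
print(weights_from_thresholds(30.0, 60.0))   # predicted from precision, e.g. (0.8, 0.2)
print(weights_from_pses(deltas, pses))       # measured from PSEs: larger auditory weight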
<fig id="F6" position="float">
<label>Figure 6</label>
<caption>
<p>
<bold>(A)</bold>
Individual weights predicted from thresholds plotted against those predicted from PSEs for different ages for the time bisection task. The black line shows the equality line.
<bold>(B)</bold>
Same for the space bisection task.</p>
</caption>
<graphic xlink:href="fnint-06-00077-g006"></graphic>
</fig>
<p>Figure
<xref ref-type="fig" rid="F7">7</xref>
plots average theoretical auditory and visual weights as a function of age: gray lines show the MLE-predicted weights (Eq.
<xref ref-type="disp-formula" rid="E3">3</xref>
), and blue lines the weights calculated from the PSE vs. conflict functions (Eq.
<xref ref-type="disp-formula" rid="E6">6</xref>
). These graphs tell a similar story to Figure
<xref ref-type="fig" rid="F6">6</xref>
. For temporal judgments (Figure
<xref ref-type="fig" rid="F7">7</xref>
A), the PSEs show a greater auditory weight than predicted by thresholds, while for spatial judgments (Figure
<xref ref-type="fig" rid="F7">7</xref>
B) the PSEs show a greater visual weight than predicted. The only exception is the spatial judgments of adults, where PSE and threshold estimates are very similar (both heavily biased toward vision).</p>
<fig id="F7" position="float">
<label>Figure 7</label>
<caption>
<p>
<bold>(A)</bold>
Average weights as a function of age, predicted from thresholds in gray and from PSEs in blue, for the time bisection task.
<bold>(B)</bold>
Same for the space bisection task.</p>
</caption>
<graphic xlink:href="fnint-06-00077-g007"></graphic>
</fig>
<p>The strongest test of optimal integration is an improvement in bimodal thresholds (given by the standard deviations of the cumulative Gaussian fits). Figure
<xref ref-type="fig" rid="F8">8</xref>
shows the results. For the temporal bisection task (blue dots in Figures
<xref ref-type="fig" rid="F8">8</xref>
A–C), the improvement in thresholds for bimodal presentations was less than predicted at all ages (see stars in Figure
<xref ref-type="fig" rid="F8">8</xref>
C and caption) when compared with the Bayesian prediction (gray symbols in Figures
<xref ref-type="fig" rid="F8">8</xref>
A–C). In the youngest group of children (5–7 years of age), bimodal thresholds follow the poorer modality (the visual one, red and blue dots in Figure
<xref ref-type="fig" rid="F8">8</xref>
A). Interestingly, at this age the bimodal PSEs are also much noisier than in the older groups (see Figure
<xref ref-type="fig" rid="F4">4</xref>
A). After 7 years of age, when the PSEs also become less noisy and adult-like, bimodal thresholds become identical to the auditory thresholds and remain so in the older groups (green dots in Figure
<xref ref-type="fig" rid="F8">8</xref>
A). For the space bisection task too, PSEs and thresholds show related behaviors: when PSEs show less inter-subject variability (in the adult group), the bimodal thresholds become well predicted by the Bayesian model (blue and gray dots in Figure
<xref ref-type="fig" rid="F8">8</xref>
B, see stars in Figure
<xref ref-type="fig" rid="F8">8</xref>
D). In the younger groups they follow the poorer sense (the auditory one, blue and green dots in Figure
<xref ref-type="fig" rid="F8">8</xref>
B).</p>
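<p>The following Python sketch illustrates the two ingredients of this test: estimating a threshold as the standard deviation of a cumulative Gaussian fitted to psychometric data, and computing the MLE-predicted bimodal threshold from the unimodal ones. The psychometric data and parameter values below are hypothetical.</p>

import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def cumulative_gaussian(x, mu, sigma):
    """Psychometric function: proportion of one response type as a function of the probe."""
    return norm.cdf(x, loc=mu, scale=sigma)

def fit_pse_and_threshold(stim_levels, prop_responses):
    """Fit a cumulative Gaussian and return (PSE = mu, threshold = sigma of the fit)."""
    p0 = [np.mean(stim_levels), np.std(stim_levels)]
    (mu, sigma), _ = curve_fit(cumulative_gaussian, stim_levels, prop_responses, p0=p0)
    return mu, abs(sigma)

def predicted_bimodal_threshold(sigma_a, sigma_v):
    """MLE prediction: the bimodal threshold is never worse than the better unimodal one."""
    return np.sqrt(sigma_a**2 * sigma_v**2 / (sigma_a**2 + sigma_v**2))

# Hypothetical spatial data: probe displacement (deg) vs. proportion of "rightward" responses
levels = np.array([-8, -4, -2, 0, 2, 4, 8], dtype=float)
props  = np.array([0.02, 0.15, 0.30, 0.55, 0.75, 0.90, 0.99])
print(fit_pse_and_threshold(levels, props))
print(predicted_bimodal_threshold(sigma_a=6.0, sigma_v=2.0))   # about 1.9 deg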
<fig id="F8" position="float">
<label>Figure 8</label>
<caption>
<p>
<bold>(A)</bold>
Thresholds as a function of age for the temporal bisection task. Visual thresholds are reported in red, auditory in green, bimodal in blue, and predictions of the Bayesian model in gray.
<bold>(B)</bold>
Same for the space bisection task.
<bold>(C)</bold>
Same as A, showing for clarity only bimodal thresholds (blue) and Bayesian prediction (gray).
<bold>(D)</bold>
Same as C for the space bisection task. In all cases, two stars represent a significance level of less than 0.01 and one star a significance level of less than 0.05 in one-tailed one-sample
<italic>t</italic>
-tests.</p>
</caption>
<graphic xlink:href="fnint-06-00077-g008"></graphic>
</fig>
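<p>As an illustration of the statistical comparison reported in the caption, a one-tailed one-sample t-test can be run in Python as follows. The per-subject ratios of measured to predicted bimodal thresholds are hypothetical, and the ratio formulation is our assumption rather than the authors' exact procedure.</p>

import numpy as np
from scipy.stats import ttest_1samp

# Hypothetical per-subject ratios of measured bimodal threshold to the MLE prediction;
# ratios reliably above 1 indicate thresholds worse than the optimal prediction.
ratios = np.array([1.4, 1.1, 1.6, 1.3, 0.9, 1.5, 1.2])

t_stat, p_two_sided = ttest_1samp(ratios, popmean=1.0)
p_one_tailed = p_two_sided / 2.0 if t_stat > 0 else 1.0 - p_two_sided / 2.0
print(t_stat, p_one_tailed)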
</sec>
<sec sec-type="discussion">
<title>Discussion</title>
<sec>
<title>Audio-visual space and time bisection in adults</title>
<p>In this study we investigated audio-visual integration in space and time perception during development. The goal was to examine the roles of the visual and auditory systems in the development of spatial and temporal perception. To compare these two aspects, similar tasks were used to study space and time, requiring subjects to bisect temporal or spatial intervals. Optimal multisensory integration, which has been reported for many tasks in adults (Clarke and Yuille,
<xref ref-type="bibr" rid="B6">1990</xref>
; Ghahramani et al.,
<xref ref-type="bibr" rid="B10">1997</xref>
; Ernst and Banks,
<xref ref-type="bibr" rid="B8">2002</xref>
; Alais and Burr,
<xref ref-type="bibr" rid="B1">2004</xref>
; Landy et al.,
<xref ref-type="bibr" rid="B15">2011</xref>
), is not evident in our temporal bisection task at any age tested, and is evident in our spatial bimodal task only in the adult group. The absence of integration in our temporal task is in agreement with other studies (e.g., Tomassini et al.,
<xref ref-type="bibr" rid="B25">2011</xref>
) showing that multisensory integration is also sub-optimal for visual-tactile time reproduction tasks. It is also in agreement with previous studies that show auditory dominance over vision rather than optimal integration in adults (Shams et al.,
<xref ref-type="bibr" rid="B22">2000</xref>
; Burr et al.,
<xref ref-type="bibr" rid="B3">2009</xref>
) for temporal localization. In particular, Burr et al. (
<xref ref-type="bibr" rid="B3">2009</xref>
) examined audio-visual integration in adults using a bisection task (similar to the one we used), and found that sound does tend to dominate the perceived timing of audio-visual stimuli. Our stimuli were largely similar to those used by Burr et al. (
<xref ref-type="bibr" rid="B3">2009</xref>
) with a few exceptions. One difference was our larger temporal conflicts, and the fact that all three stimuli presented in the conflictual conditions carried conflicting information, while in the Burr et al. (
<xref ref-type="bibr" rid="B3">2009</xref>
) stimuli the conflict was present only in the first and last stimuli. Overall, although there were some differences between the two experiments, our results are largely in agreement with those of Burr et al. (
<xref ref-type="bibr" rid="B3">2009</xref>
), particularly in that the auditory dominance of PSEs was not well predicted by the Bayesian model, with more weight given to audition than predicted from thresholds. This auditory dominance may be specific to the auditory stimulus used. Burr et al. (
<xref ref-type="bibr" rid="B3">2009</xref>
) reported that the bimodal prediction of thresholds was less successful for higher auditory tones (1700 Hz) than for lower tones (200 Hz); in agreement with this finding, we found auditory dominance rather than optimal integration using a relatively high auditory tone (750 Hz).</p>
<p>Our results on audio-visual space integration in adults agree well with previous studies. Like Alais and Burr (
<xref ref-type="bibr" rid="B1">2004</xref>
), we found optimal integration of bimodal thresholds, shown by an increase in precision compared with the unisensory performances. Both visual and multisensory thresholds (considering a similarly blurred visual condition) were similar to those obtained by Alais and Burr (
<xref ref-type="bibr" rid="B1">2004</xref>
). Our auditory thresholds were better than those obtained by Alais and Burr (
<xref ref-type="bibr" rid="B1">2004</xref>
), possibly because of the different auditory stimulation. Indeed, in their experiment the auditory stimulus was defined by only one cue (interaural timing difference), while our stimuli were real loudspeakers in space, thereby providing many cues to localization, both binaural and monaural. On the other hand, our results suggest sub-optimal integration for PSEs, for which the variance of the PSE data is not completely explained by the MLE model (see Figure
<xref ref-type="fig" rid="F5">5</xref>
) and the weights predicted from thresholds do not completely coincide with those computed from PSEs (see Figure
<xref ref-type="fig" rid="F7">7</xref>
). A possible explanation for this difference is that our experiment used a bisection task rather than the discrimination task used by Alais and Burr (
<xref ref-type="bibr" rid="B1">2004</xref>
). Another difference is that Alais and Burr’s subjects were trained extensively on the auditory task and were instructed to attend to both visual and auditory aspects of the stimuli. Given the limited time available to test children (and not wanting procedural differences between children and adults), all of our subjects had the same 20 training trials, without particular attention being drawn to the auditory or bimodal aspects.</p>
</sec>
<sec>
<title>Audio-visual space and time bisection in children</title>
<p>In agreement with our previous results (Gori et al.,
<xref ref-type="bibr" rid="B11">2008</xref>
), we found that for both tasks adult-like bimodal behavior emerges only late in development. For the time bisection task, adult-like behavior occurs after 8 years of age, while for the space bisection task it was fully mature only in our adult group. As in the visual-haptic studies (Gori et al.,
<xref ref-type="bibr" rid="B11">2008</xref>
), children show strong unisensory dominance rather than multisensory integration of auditory and visual space and time perception. In children, audition dominates visual-auditory time perception and vision dominates visual-auditory space perception. This result agrees with our prediction and is in line with our cross-sensory calibration theory (Burr and Gori,
<xref ref-type="bibr" rid="B5">2011</xref>
). The auditory dominance may reflect a process of cross-sensory calibration in which the auditory system is used to calibrate the visual sense of time, since audition is the most accurate sense for temporal judgments. This result is also in agreement with many experiments performed with adults showing a dominant role of the auditory system for time (Gebhard and Mowbray,
<xref ref-type="bibr" rid="B9">1959</xref>
; Sekuler and Sekuler,
<xref ref-type="bibr" rid="B21">1999</xref>
; Shams et al.,
<xref ref-type="bibr" rid="B22">2000</xref>
,
<xref ref-type="bibr" rid="B23">2001</xref>
; Berger et al.,
<xref ref-type="bibr" rid="B2">2003</xref>
; Burr et al.,
<xref ref-type="bibr" rid="B3">2009</xref>
). Why the auditory dominance of both PSEs and bimodal thresholds persists into adulthood is not clear. A possible explanation is that for this kind of task the cross-sensory calibration process is still occurring, because audition is far more accurate than the visual modality, and the relatively poor precision of the visual system for this kind of task prevents the transition from unisensory dominance to multisensory integration. This dominance may, however, not be apparent with a different kind of stimulation. For example, it would be interesting to observe whether auditory dominance in children occurs in other visual-auditory temporal integration tasks for which strong multisensory integration in adults has been reported (for example, by reducing the auditory tone from 750 to 200 Hz).</p>
<p>Similarly, the visual dominance of space during development could reflect a process of cross-sensory calibration in which the visual system is used to calibrate the auditory system for space perception, since it is the most accurate spatial sense. In agreement with this idea, many studies in adults show that the visual system is the most influential in determining the apparent spatial position of auditory stimuli (Pick et al.,
<xref ref-type="bibr" rid="B19">1969</xref>
; Warren et al.,
<xref ref-type="bibr" rid="B27">1981</xref>
; Mateeff et al.,
<xref ref-type="bibr" rid="B16">1985</xref>
; Alais and Burr,
<xref ref-type="bibr" rid="B1">2004</xref>
). Only after 12 years of age does visual-auditory integration seem to occur in this spatial task, suggesting very late development. Audio-visual space integration seems to mature later than visual-haptic spatial integration (which develops after 8–10 years of age; Gori et al.,
<xref ref-type="bibr" rid="B11">2008</xref>
) and also later than visual-auditory temporal integration. This could be related to the time course of maturation of the individual sensory systems. Indeed, our previous work (Gori et al.,
<xref ref-type="bibr" rid="B11">2008</xref>
) suggested that multisensory integration occurs after the maturation of each unisensory system. The unisensory thresholds of Figure
<xref ref-type="fig" rid="F8">8</xref>
suggest that both visual and auditory thresholds continue to improve over the school years, particularly for the spatial task. For the space bisection task, the unisensory thresholds are still not mature at 12 years of age, nor is integration optimal at this age. For the temporal task, unisensory thresholds become adult-like after 8–9 years of age, and at this age the auditory dominance appears. A delay in the development of the unisensory systems thus seems to be related to a delay in the development of adult-like multisensory behavior.</p>
<p>These results support the idea that in children the use of one sense to calibrate the other precludes useful combination of the two sources (Gori et al.,
<xref ref-type="bibr" rid="B11">2008</xref>
; Burr and Gori,
<xref ref-type="bibr" rid="B5">2011</xref>
). On the other hand, given the strong variability between subjects and the noise in the developing system, we cannot exclude the possibility that these results reflect the greater noise in the sensory systems of the developing child. The fact that the weights derived from thresholds lie at the midpoint between auditory and visual dominance does not allow us to exclude this hypothesis.</p>
<p>To examine further whether this dominance reflects a process of cross-sensory calibration, it would be interesting to measure how impairment of the dominant system affects the non-dominant modality that may need calibration (as we did in Gori et al.,
<xref ref-type="bibr" rid="B12">2010</xref>
,
<xref ref-type="bibr" rid="B14">2012</xref>
). In particular, it would be interesting to see how auditory spatial perception is affected in children and adults with visual disabilities, and how visual time perception is affected in children and adults with auditory disabilities, using stimuli and procedures similar to those used in this study. If this dominance really reflects a process of cross-sensory calibration, it should allow clear and important predictions about spatial and temporal deficits in children and adults with visual and auditory disabilities.</p>
</sec>
</sec>
<sec>
<title>Conflict of Interest Statement</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
</body>
<back>
<ack>
<p>We would like to thank the school “Dante Alighieri” of Bolzaneto, the school “De Amicis” of Voltri, the school “ISG” of Genoa, and all the children who participated in this study. We would also like to thank Elisa Freddi and Marco Jacono for their important contributions to this work.</p>
</ack>
<ref-list>
<title>References</title>
<ref id="B1">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Alais</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Burr</surname>
<given-names>D.</given-names>
</name>
</person-group>
(
<year>2004</year>
).
<article-title>The ventriloquist effect results from near-optimal bimodal integration</article-title>
.
<source>Curr. Biol.</source>
<volume>14</volume>
,
<fpage>257</fpage>
<lpage>262</lpage>
<pub-id pub-id-type="doi">10.1016/S0960-9822(04)00043-0</pub-id>
<pub-id pub-id-type="pmid">14761661</pub-id>
</mixed-citation>
</ref>
<ref id="B2">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Berger</surname>
<given-names>T. D.</given-names>
</name>
<name>
<surname>Martelli</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Pelli</surname>
<given-names>D. G.</given-names>
</name>
</person-group>
(
<year>2003</year>
).
<article-title>Flicker flutter: is an illusory event as good as the real thing?</article-title>
<source>J. Vis.</source>
<volume>3</volume>
(
<issue>6</issue>
),
<fpage>406</fpage>
<lpage>412</lpage>
<pub-id pub-id-type="doi">10.1167/3.6.1</pub-id>
<pub-id pub-id-type="pmid">12901711</pub-id>
</mixed-citation>
</ref>
<ref id="B3">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Burr</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Banks</surname>
<given-names>M. S.</given-names>
</name>
<name>
<surname>Morrone</surname>
<given-names>M. C.</given-names>
</name>
</person-group>
(
<year>2009</year>
).
<article-title>Auditory dominance over vision in the perception of interval duration</article-title>
.
<source>Exp. Brain Res.</source>
<volume>198</volume>
,
<fpage>49</fpage>
<lpage>57</lpage>
<pub-id pub-id-type="doi">10.1007/s00221-009-1933-z</pub-id>
<pub-id pub-id-type="pmid">19597804</pub-id>
</mixed-citation>
</ref>
<ref id="B4">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Burr</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Binda</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Gori</surname>
<given-names>M.</given-names>
</name>
</person-group>
(
<year>2011</year>
).
<article-title>“Combining information from different senses: dynamic adjustment of combination weights, and the development of cross-modal integration in children,”in</article-title>
<source>Book of Sensory Cue Integration</source>
, eds
<person-group person-group-type="editor">
<name>
<surname>Trommershauser</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Körding</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Landy</surname>
<given-names>M. S.</given-names>
</name>
</person-group>
(
<publisher-loc>New York</publisher-loc>
:
<publisher-name>Oxford University Press</publisher-name>
),
<fpage>73</fpage>
<lpage>95</lpage>
</mixed-citation>
</ref>
<ref id="B5">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Burr</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Gori</surname>
<given-names>M.</given-names>
</name>
</person-group>
(
<year>2011</year>
).
<article-title>“Multisensory integration develops late in humans,”</article-title>
in
<source>Frontiers in the Neural Bases of Multisensory Processes</source>
, eds
<person-group person-group-type="editor">
<name>
<surname>Wallace</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Murray</surname>
<given-names>M.</given-names>
</name>
</person-group>
(
<publisher-loc>Boca Raton</publisher-loc>
:
<publisher-name>Taylor & Francis Group</publisher-name>
),
<fpage>345</fpage>
<lpage>363</lpage>
</mixed-citation>
</ref>
<ref id="B6">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Clarke</surname>
<given-names>J. J.</given-names>
</name>
<name>
<surname>Yuille</surname>
<given-names>A. L.</given-names>
</name>
</person-group>
(
<year>1990</year>
)
<source>Data fusion for Sensory Information Processing</source>
.
<publisher-loc>Boston</publisher-loc>
:
<publisher-name>Kluwer Academic</publisher-name>
</mixed-citation>
</ref>
<ref id="B7">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Efron</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Tibshirani</surname>
<given-names>R. J.</given-names>
</name>
</person-group>
(
<year>1993</year>
).
<source>An Introduction to the Bootstrap</source>
.
<publisher-loc>New York</publisher-loc>
:
<publisher-name>Chapman & Hall</publisher-name>
</mixed-citation>
</ref>
<ref id="B8">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ernst</surname>
<given-names>M. O.</given-names>
</name>
<name>
<surname>Banks</surname>
<given-names>M. S.</given-names>
</name>
</person-group>
(
<year>2002</year>
).
<article-title>Humans integrate visual and haptic information in a statistically optimal fashion</article-title>
.
<source>Nature</source>
<volume>415</volume>
,
<fpage>429</fpage>
<lpage>433</lpage>
<pub-id pub-id-type="doi">10.1038/415429a</pub-id>
<pub-id pub-id-type="pmid">11807554</pub-id>
</mixed-citation>
</ref>
<ref id="B9">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Gebhard</surname>
<given-names>J. W.</given-names>
</name>
<name>
<surname>Mowbray</surname>
<given-names>G. H.</given-names>
</name>
</person-group>
(
<year>1959</year>
).
<article-title>On discriminating the rate of visual flicker and auditory flutter</article-title>
.
<source>Am. J. Psychol.</source>
<volume>72</volume>
,
<fpage>521</fpage>
<lpage>529</lpage>
<pub-id pub-id-type="doi">10.2307/1419493</pub-id>
<pub-id pub-id-type="pmid">13827044</pub-id>
</mixed-citation>
</ref>
<ref id="B10">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Ghahramani</surname>
<given-names>Z.</given-names>
</name>
<name>
<surname>Wolpert</surname>
<given-names>D. M.</given-names>
</name>
<name>
<surname>Jordan</surname>
<given-names>M. I.</given-names>
</name>
</person-group>
(
<year>1997</year>
).
<article-title>“Computational models of sensorimotor integration,”</article-title>
in
<source>Self-organization, Computational Maps and Motor Control</source>
, eds
<person-group person-group-type="editor">
<name>
<surname>Sanguineti</surname>
<given-names>V.</given-names>
</name>
<name>
<surname>Morasso</surname>
<given-names>P. G.</given-names>
</name>
</person-group>
(
<publisher-loc>Amsterdam</publisher-loc>
:
<publisher-name>Elsevier Science Publication</publisher-name>
),
<fpage>117</fpage>
<lpage>147</lpage>
</mixed-citation>
</ref>
<ref id="B11">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Gori</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Del Viva</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Sandini</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Burr</surname>
<given-names>D.</given-names>
</name>
</person-group>
(
<year>2008</year>
).
<article-title>Young children do not integrate visual and haptic form information</article-title>
.
<source>Curr. Biol.</source>
<volume>18</volume>
,
<fpage>694</fpage>
<lpage>698</lpage>
<pub-id pub-id-type="doi">10.1016/j.cub.2008.04.036</pub-id>
<pub-id pub-id-type="pmid">18450446</pub-id>
</mixed-citation>
</ref>
<ref id="B12">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Gori</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Sandini</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Martinoli</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Burr</surname>
<given-names>D.</given-names>
</name>
</person-group>
(
<year>2010</year>
).
<article-title>Poor haptic orientation discrimination in nonsighted children may reflect disruption of cross-sensory calibration</article-title>
.
<source>Curr. Biol.</source>
<volume>20</volume>
,
<fpage>223</fpage>
<lpage>225</lpage>
<pub-id pub-id-type="doi">10.1016/j.cub.2009.11.069</pub-id>
<pub-id pub-id-type="pmid">20116249</pub-id>
</mixed-citation>
</ref>
<ref id="B13">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Gori</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Sciutti</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Burr</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Sandini</surname>
<given-names>G.</given-names>
</name>
</person-group>
(
<year>2011</year>
).
<article-title>Direct and indirect haptic calibration of visual size judgments</article-title>
.
<source>PLoS ONE</source>
<volume>6</volume>
,
<fpage>e25599</fpage>
<pub-id pub-id-type="doi">10.1371/journal.pone.0025599</pub-id>
<pub-id pub-id-type="pmid">22022420</pub-id>
</mixed-citation>
</ref>
<ref id="B14">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Gori</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Tinelli</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Sandini</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Cioni</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Burr</surname>
<given-names>D.</given-names>
</name>
</person-group>
(
<year>2012</year>
).
<article-title>Impaired visual size-discrimination in children with movement disorders</article-title>
.
<source>Neuropsychologia</source>
<volume>50</volume>
,
<fpage>1838</fpage>
<lpage>1843</lpage>
<pub-id pub-id-type="doi">10.1016/j.neuropsychologia.2012.04.009</pub-id>
<pub-id pub-id-type="pmid">22569216</pub-id>
</mixed-citation>
</ref>
<ref id="B15">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Landy</surname>
<given-names>M. S.</given-names>
</name>
<name>
<surname>Banks</surname>
<given-names>M. S.</given-names>
</name>
<name>
<surname>Knill</surname>
<given-names>D. C.</given-names>
</name>
</person-group>
(
<year>2011</year>
).
<article-title>“Ideal-observer models of cue integration,”</article-title>
in
<source>Book of Sensory Cue Integration</source>
, eds
<person-group person-group-type="editor">
<name>
<surname>Trommershauser</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Körding</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Landy</surname>
<given-names>M. S.</given-names>
</name>
</person-group>
(
<publisher-loc>New York</publisher-loc>
:
<publisher-name>Oxford University Press</publisher-name>
),
<fpage>5</fpage>
<lpage>30</lpage>
</mixed-citation>
</ref>
<ref id="B16">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Mateeff</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Hohnsbein</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Noack</surname>
<given-names>T.</given-names>
</name>
</person-group>
(
<year>1985</year>
).
<article-title>Dynamic visual capture: apparent auditory motion induced by a moving visual target</article-title>
.
<source>Perception</source>
<volume>14</volume>
,
<fpage>721</fpage>
<lpage>727</lpage>
<pub-id pub-id-type="doi">10.1068/p140721</pub-id>
<pub-id pub-id-type="pmid">3837873</pub-id>
</mixed-citation>
</ref>
<ref id="B17">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Nardini</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Bedford</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Mareschal</surname>
<given-names>D.</given-names>
</name>
</person-group>
(
<year>2010</year>
).
<article-title>Fusion of visual cues is not mandatory in children</article-title>
.
<source>Proc. Natl. Acad. Sci. U.S.A.</source>
<volume>107</volume>
,
<fpage>17041</fpage>
<lpage>17046</lpage>
<pub-id pub-id-type="doi">10.1073/pnas.1001699107</pub-id>
<pub-id pub-id-type="pmid">20837526</pub-id>
</mixed-citation>
</ref>
<ref id="B18">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Nardini</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Jones</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Bedford</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Braddick</surname>
<given-names>O.</given-names>
</name>
</person-group>
(
<year>2008</year>
).
<article-title>Development of cue integration in human navigation</article-title>
.
<source>Curr. Biol.</source>
<volume>18</volume>
,
<fpage>689</fpage>
<lpage>693</lpage>
<pub-id pub-id-type="doi">10.1016/j.cub.2008.04.021</pub-id>
<pub-id pub-id-type="pmid">18450447</pub-id>
</mixed-citation>
</ref>
<ref id="B19">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Pick</surname>
<given-names>H. L.</given-names>
</name>
<name>
<surname>Warren</surname>
<given-names>D. H.</given-names>
</name>
<name>
<surname>Hay</surname>
<given-names>J. C.</given-names>
</name>
</person-group>
(
<year>1969</year>
).
<article-title>Sensory conflict in judgements of spatial direction</article-title>
.
<source>Percept. Psychophys.</source>
<volume>6</volume>
,
<fpage>203</fpage>
<lpage>205</lpage>
<pub-id pub-id-type="doi">10.3758/BF03207017</pub-id>
</mixed-citation>
</ref>
<ref id="B20">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Rose</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Summers</surname>
<given-names>J.</given-names>
</name>
</person-group>
(
<year>1995</year>
).
<article-title>Duration illusions in a train of visual stimuli</article-title>
.
<source>Perception</source>
<volume>24</volume>
,
<fpage>1177</fpage>
<lpage>1187</lpage>
<pub-id pub-id-type="doi">10.1068/p241177</pub-id>
<pub-id pub-id-type="pmid">8577576</pub-id>
</mixed-citation>
</ref>
<ref id="B21">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Sekuler</surname>
<given-names>A. B.</given-names>
</name>
<name>
<surname>Sekuler</surname>
<given-names>R.</given-names>
</name>
</person-group>
(
<year>1999</year>
).
<article-title>Collisions between moving visual targets, what controls alternative ways of seeing an ambiguous display?</article-title>
<source>Perception</source>
<volume>28</volume>
,
<fpage>415</fpage>
<lpage>432</lpage>
<pub-id pub-id-type="doi">10.1068/p2909</pub-id>
<pub-id pub-id-type="pmid">10664783</pub-id>
</mixed-citation>
</ref>
<ref id="B22">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Shams</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Kamitani</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Shimojo</surname>
<given-names>S.</given-names>
</name>
</person-group>
(
<year>2000</year>
).
<article-title>Illusions. What you see is what you hear</article-title>
.
<source>Nature</source>
<volume>408</volume>
,
<fpage>788</fpage>
<pub-id pub-id-type="doi">10.1038/35048669</pub-id>
<pub-id pub-id-type="pmid">11130706</pub-id>
</mixed-citation>
</ref>
<ref id="B23">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Shams</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Kamitani</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Thompson</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Shimojo</surname>
<given-names>S.</given-names>
</name>
</person-group>
(
<year>2001</year>
).
<article-title>Sound alters visual evoked potentials in humans</article-title>
.
<source>Neuroreport</source>
<volume>12</volume>
,
<fpage>3849</fpage>
<lpage>3852</lpage>
<pub-id pub-id-type="doi">10.1097/00001756-200112040-00049</pub-id>
<pub-id pub-id-type="pmid">11726807</pub-id>
</mixed-citation>
</ref>
<ref id="B24">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Shipley</surname>
<given-names>T.</given-names>
</name>
</person-group>
(
<year>1964</year>
).
<article-title>Auditory flutter-driving of visual flicker</article-title>
.
<source>Science</source>
<volume>145</volume>
,
<fpage>1328</fpage>
<lpage>1330</lpage>
<pub-id pub-id-type="doi">10.1126/science.145.3638.1328</pub-id>
<pub-id pub-id-type="pmid">14173429</pub-id>
</mixed-citation>
</ref>
<ref id="B25">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Tomassini</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Gori</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Burr</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Sandini</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Morrone</surname>
<given-names>M. C.</given-names>
</name>
</person-group>
(
<year>2011</year>
).
<article-title>Perceived duration of visual and tactile stimuli depends on perceived speed</article-title>
.
<source>Front. Integr. Neurosci.</source>
<volume>5</volume>
:
<fpage>51</fpage>
<pub-id pub-id-type="doi">10.3389/fnint.2011.00051</pub-id>
<pub-id pub-id-type="pmid">21941471</pub-id>
</mixed-citation>
</ref>
<ref id="B26">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Tse</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Intriligator</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Rivest</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Cavanagh</surname>
<given-names>P.</given-names>
</name>
</person-group>
(
<year>2004</year>
).
<article-title>Attention and the subjective expansion of time</article-title>
.
<source>Percept. Psychophys.</source>
<volume>66</volume>
,
<fpage>1171</fpage>
<lpage>1189</lpage>
<pub-id pub-id-type="doi">10.3758/BF03196844</pub-id>
<pub-id pub-id-type="pmid">15751474</pub-id>
</mixed-citation>
</ref>
<ref id="B27">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Warren</surname>
<given-names>D. H.</given-names>
</name>
<name>
<surname>Welch</surname>
<given-names>R. B.</given-names>
</name>
<name>
<surname>McCarthy</surname>
<given-names>T. J.</given-names>
</name>
</person-group>
(
<year>1981</year>
).
<article-title>The role of visual-auditory “compellingness” in the ventriloquism effect: implications for transitivity among the spatial senses</article-title>
.
<source>Percept. Psychophys.</source>
<volume>30</volume>
,
<fpage>557</fpage>
<lpage>564</lpage>
<pub-id pub-id-type="doi">10.3758/BF03202010</pub-id>
<pub-id pub-id-type="pmid">7335452</pub-id>
</mixed-citation>
</ref>
<ref id="B28">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Watson</surname>
<given-names>A. B.</given-names>
</name>
<name>
<surname>Pelli</surname>
<given-names>D. G.</given-names>
</name>
</person-group>
(
<year>1983</year>
).
<article-title>QUEST: a Bayesian adaptive psychometric method</article-title>
.
<source>Percept. Psychophys.</source>
<volume>33</volume>
,
<fpage>113</fpage>
<lpage>120</lpage>
<pub-id pub-id-type="doi">10.3758/BF03202828</pub-id>
<pub-id pub-id-type="pmid">6844102</pub-id>
</mixed-citation>
</ref>
</ref-list>
</back>
</pmc>
<affiliations>
<list>
<country>
<li>Italie</li>
</country>
</list>
<tree>
<country name="Italie">
<noRegion>
<name sortKey="Gori, Monica" sort="Gori, Monica" uniqKey="Gori M" first="Monica" last="Gori">Monica Gori</name>
</noRegion>
<name sortKey="Burr, David" sort="Burr, David" uniqKey="Burr D" first="David" last="Burr">David Burr</name>
<name sortKey="Burr, David" sort="Burr, David" uniqKey="Burr D" first="David" last="Burr">David Burr</name>
<name sortKey="Sandini, Giulio" sort="Sandini, Giulio" uniqKey="Sandini G" first="Giulio" last="Sandini">Giulio Sandini</name>
</country>
</tree>
</affiliations>
</record>

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/HapticV1/Data/Pmc/Checkpoint
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 001786 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/Pmc/Checkpoint/biblio.hfd -nk 001786 | SxmlIndent | more

To add a link to this page in the Wicri network

{{Explor lien
   |wiki=    Ticri/CIDE
   |area=    HapticV1
   |flux=    Pmc
   |étape=   Checkpoint
   |type=    RBID
   |clé=     PMC:3443931
   |texte=   Development of Visuo-Auditory Integration in Space and Time
}}

To generate wiki pages

HfdIndexSelect -h $EXPLOR_AREA/Data/Pmc/Checkpoint/RBID.i   -Sk "pubmed:23060759" \
       | HfdSelect -Kh $EXPLOR_AREA/Data/Pmc/Checkpoint/biblio.hfd   \
       | NlmPubMed2Wicri -a HapticV1 

Wicri

This area was generated with Dilib version V0.6.23.
Data generation: Mon Jun 13 01:09:46 2016. Site generation: Wed Mar 6 09:54:07 2024