Exploration server for haptic devices

Warning: this site is under development.
Warning: this site is generated by computational means from raw corpora.
The information has therefore not been validated.

Audio-Visual Temporal Recalibration Can be Constrained by Content Cues Regardless of Spatial Overlap

Internal identifier: 002677 (Ncbi/Merge); previous: 002676; next: 002678

Authors: Warrick Roseboom [Japan]; Takahiro Kawabe [Japan]; Shin'ya Nishida [Japan]

Source:

RBID: PMC:3633943

Abstract

It has now been well established that the point of subjective synchrony for audio and visual events can be shifted following exposure to asynchronous audio-visual presentations, an effect often referred to as temporal recalibration. Recently it was further demonstrated that it is possible to concurrently maintain two such recalibrated estimates of audio-visual temporal synchrony. However, it remains unclear precisely what defines a given audio-visual pair such that it is possible to maintain a temporal relationship distinct from other pairs. It has been suggested that spatial separation of the different audio-visual pairs is necessary to achieve multiple distinct audio-visual synchrony estimates. Here we investigated if this is necessarily true. Specifically, we examined whether it is possible to obtain two distinct temporal recalibrations for stimuli that differed only in featural content. Using both complex (audio visual speech; see Experiment 1) and simple stimuli (high and low pitch audio matched with either vertically or horizontally oriented Gabors; see Experiment 2) we found concurrent, and opposite, recalibrations despite there being no spatial difference in presentation location at any point throughout the experiment. This result supports the notion that the content of an audio-visual pair alone can be used to constrain distinct audio-visual synchrony estimates regardless of spatial overlap.


URL: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3633943
DOI: 10.3389/fpsyg.2013.00189
PubMed: 23658549
PubMed Central: 3633943


The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">Audio-Visual Temporal Recalibration Can be Constrained by Content Cues Regardless of Spatial Overlap</title>
<author>
<name sortKey="Roseboom, Warrick" sort="Roseboom, Warrick" uniqKey="Roseboom W" first="Warrick" last="Roseboom">Warrick Roseboom</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<institution>Human Information Science Laboratory, NTT Communication Science Laboratories</institution>
<country>Atsugi, Japan</country>
</nlm:aff>
<country xml:lang="fr">Japon</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Kawabe, Takahiro" sort="Kawabe, Takahiro" uniqKey="Kawabe T" first="Takahiro" last="Kawabe">Takahiro Kawabe</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<institution>Human Information Science Laboratory, NTT Communication Science Laboratories</institution>
<country>Atsugi, Japan</country>
</nlm:aff>
<country xml:lang="fr">Japon</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Nishida, Shin A" sort="Nishida, Shin A" uniqKey="Nishida S" first="Shin A" last="Nishida">Shin A Nishida</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<institution>Human Information Science Laboratory, NTT Communication Science Laboratories</institution>
<country>Atsugi, Japan</country>
</nlm:aff>
<country xml:lang="fr">Japon</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PMC</idno>
<idno type="pmid">23658549</idno>
<idno type="pmc">3633943</idno>
<idno type="url">http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3633943</idno>
<idno type="RBID">PMC:3633943</idno>
<idno type="doi">10.3389/fpsyg.2013.00189</idno>
<date when="2013">2013</date>
<idno type="wicri:Area/Pmc/Corpus">001E69</idno>
<idno type="wicri:Area/Pmc/Curation">001E69</idno>
<idno type="wicri:Area/Pmc/Checkpoint">001397</idno>
<idno type="wicri:Area/Ncbi/Merge">002677</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a" type="main">Audio-Visual Temporal Recalibration Can be Constrained by Content Cues Regardless of Spatial Overlap</title>
<author>
<name sortKey="Roseboom, Warrick" sort="Roseboom, Warrick" uniqKey="Roseboom W" first="Warrick" last="Roseboom">Warrick Roseboom</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<institution>Human Information Science Laboratory, NTT Communication Science Laboratories</institution>
<country>Atsugi, Japan</country>
</nlm:aff>
<country xml:lang="fr">Japon</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Kawabe, Takahiro" sort="Kawabe, Takahiro" uniqKey="Kawabe T" first="Takahiro" last="Kawabe">Takahiro Kawabe</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<institution>Human Information Science Laboratory, NTT Communication Science Laboratories</institution>
<country>Atsugi, Japan</country>
</nlm:aff>
<country xml:lang="fr">Japon</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Nishida, Shin A" sort="Nishida, Shin A" uniqKey="Nishida S" first="Shin A" last="Nishida">Shin A Nishida</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<institution>Human Information Science Laboratory, NTT Communication Science Laboratories</institution>
<country>Atsugi, Japan</country>
</nlm:aff>
<country xml:lang="fr">Japon</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
</analytic>
<series>
<title level="j">Frontiers in Psychology</title>
<idno type="eISSN">1664-1078</idno>
<imprint>
<date when="2013">2013</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass></textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">
<p>It has now been well established that the point of subjective synchrony for audio and visual events can be shifted following exposure to asynchronous audio-visual presentations, an effect often referred to as temporal recalibration. Recently it was further demonstrated that it is possible to concurrently maintain two such recalibrated estimates of audio-visual temporal synchrony. However, it remains unclear precisely what defines a given audio-visual pair such that it is possible to maintain a temporal relationship distinct from other pairs. It has been suggested that spatial separation of the different audio-visual pairs is necessary to achieve multiple distinct audio-visual synchrony estimates. Here we investigated if this is necessarily true. Specifically, we examined whether it is possible to obtain two distinct temporal recalibrations for stimuli that differed
<italic>only</italic>
in featural content. Using both complex (audio visual speech; see
<xref ref-type="sec" rid="s1">Experiment 1</xref>
) and simple stimuli (high and low pitch audio matched with either vertically or horizontally oriented Gabors; see
<xref ref-type="sec" rid="s2">Experiment 2</xref>
) we found concurrent, and opposite, recalibrations despite there being no spatial difference in presentation location at any point throughout the experiment. This result supports the notion that the content of an audio-visual pair alone can be used to constrain distinct audio-visual synchrony estimates regardless of spatial overlap.</p>
</div>
</front>
<back>
<div1 type="bibliography">
<listBibl>
<biblStruct>
<analytic>
<author>
<name sortKey="Alais, D" uniqKey="Alais D">D. Alais</name>
</author>
<author>
<name sortKey="Burr, D" uniqKey="Burr D">D. Burr</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Arnold, D H" uniqKey="Arnold D">D. H. Arnold</name>
</author>
<author>
<name sortKey="Tear, M" uniqKey="Tear M">M. Tear</name>
</author>
<author>
<name sortKey="Schindel, R" uniqKey="Schindel R">R. Schindel</name>
</author>
<author>
<name sortKey="Roseboom, W" uniqKey="Roseboom W">W. Roseboom</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Arnold, D H" uniqKey="Arnold D">D. H. Arnold</name>
</author>
<author>
<name sortKey="Yarrow, K" uniqKey="Yarrow K">K. Yarrow</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ayhan, I" uniqKey="Ayhan I">I. Ayhan</name>
</author>
<author>
<name sortKey="Bruno, A" uniqKey="Bruno A">A. Bruno</name>
</author>
<author>
<name sortKey="Nishida, S" uniqKey="Nishida S">S. Nishida</name>
</author>
<author>
<name sortKey="Johnston, A" uniqKey="Johnston A">A. Johnston</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Battaglia, P W" uniqKey="Battaglia P">P. W. Battaglia</name>
</author>
<author>
<name sortKey="Jacobs, R A" uniqKey="Jacobs R">R. A. Jacobs</name>
</author>
<author>
<name sortKey="Aslin, R N" uniqKey="Aslin R">R. N. Aslin</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bennett, R G" uniqKey="Bennett R">R. G. Bennett</name>
</author>
<author>
<name sortKey="Westheimer, G" uniqKey="Westheimer G">G. Westheimer</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bruno, A" uniqKey="Bruno A">A. Bruno</name>
</author>
<author>
<name sortKey="Ayhan, I" uniqKey="Ayhan I">I. Ayhan</name>
</author>
<author>
<name sortKey="Johnston, A" uniqKey="Johnston A">A. Johnston</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Burr, D" uniqKey="Burr D">D. Burr</name>
</author>
<author>
<name sortKey="Corsale, B" uniqKey="Corsale B">B. Corsale</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Burton, A M" uniqKey="Burton A">A. M. Burton</name>
</author>
<author>
<name sortKey="Bruce, V" uniqKey="Bruce V">V. Bruce</name>
</author>
<author>
<name sortKey="Dench, N" uniqKey="Dench N">N. Dench</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Di Luca, M" uniqKey="Di Luca M">M. Di Luca</name>
</author>
<author>
<name sortKey="Machulla, T K" uniqKey="Machulla T">T. K. Machulla</name>
</author>
<author>
<name sortKey="Ernst, M O" uniqKey="Ernst M">M. O. Ernst</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ernst, M" uniqKey="Ernst M">M. Ernst</name>
</author>
<author>
<name sortKey="Banks, M" uniqKey="Banks M">M. Banks</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Evans, K K" uniqKey="Evans K">K. K. Evans</name>
</author>
<author>
<name sortKey="Treisman, A" uniqKey="Treisman A">A. Treisman</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Fujisaki, W" uniqKey="Fujisaki W">W. Fujisaki</name>
</author>
<author>
<name sortKey="Nishida, S" uniqKey="Nishida S">S. Nishida</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Fujisaki, W" uniqKey="Fujisaki W">W. Fujisaki</name>
</author>
<author>
<name sortKey="Shimojo, S" uniqKey="Shimojo S">S. Shimojo</name>
</author>
<author>
<name sortKey="Kashino, M" uniqKey="Kashino M">M. Kashino</name>
</author>
<author>
<name sortKey="Nishida, S" uniqKey="Nishida S">S. Nishida</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Guski, R" uniqKey="Guski R">R. Guski</name>
</author>
<author>
<name sortKey="Troje, N F" uniqKey="Troje N">N. F. Troje</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hanson, J V" uniqKey="Hanson J">J. V. Hanson</name>
</author>
<author>
<name sortKey="Heron, J" uniqKey="Heron J">J. Heron</name>
</author>
<author>
<name sortKey="Whitaker, D" uniqKey="Whitaker D">D. Whitaker</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Harrar, V" uniqKey="Harrar V">V. Harrar</name>
</author>
<author>
<name sortKey="Harris, L R" uniqKey="Harris L">L. R. Harris</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Heron, J" uniqKey="Heron J">J. Heron</name>
</author>
<author>
<name sortKey="Roach, N W" uniqKey="Roach N">N. W. Roach</name>
</author>
<author>
<name sortKey="Hanson, J V" uniqKey="Hanson J">J. V. Hanson</name>
</author>
<author>
<name sortKey="Mcgraw, P V" uniqKey="Mcgraw P">P. V. McGraw</name>
</author>
<author>
<name sortKey="Whitaker, D" uniqKey="Whitaker D">D. Whitaker</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Heron, J" uniqKey="Heron J">J. Heron</name>
</author>
<author>
<name sortKey="Roach, N W" uniqKey="Roach N">N. W. Roach</name>
</author>
<author>
<name sortKey="Whitaker, D" uniqKey="Whitaker D">D. Whitaker</name>
</author>
<author>
<name sortKey="Hanson, J V M" uniqKey="Hanson J">J. V. M. Hanson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Heron, J" uniqKey="Heron J">J. Heron</name>
</author>
<author>
<name sortKey="Whitaker, D" uniqKey="Whitaker D">D. Whitaker</name>
</author>
<author>
<name sortKey="Mcgraw, P V" uniqKey="Mcgraw P">P. V. McGraw</name>
</author>
<author>
<name sortKey="Horoshenkov, K V" uniqKey="Horoshenkov K">K. V. Horoshenkov</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hillis, J M" uniqKey="Hillis J">J. M. Hillis</name>
</author>
<author>
<name sortKey="Ernst, M O" uniqKey="Ernst M">M. O. Ernst</name>
</author>
<author>
<name sortKey="Banks, M S" uniqKey="Banks M">M. S. Banks</name>
</author>
<author>
<name sortKey="Landy, M S" uniqKey="Landy M">M. S. Landy</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Johnston, A" uniqKey="Johnston A">A. Johnston</name>
</author>
<author>
<name sortKey="Arnold, D H" uniqKey="Arnold D">D. H. Arnold</name>
</author>
<author>
<name sortKey="Nishida, S" uniqKey="Nishida S">S. Nishida</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Keetels, M" uniqKey="Keetels M">M. Keetels</name>
</author>
<author>
<name sortKey="Vroomen, J" uniqKey="Vroomen J">J. Vroomen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="King, A J" uniqKey="King A">A. J. King</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kopinska, A" uniqKey="Kopinska A">A. Kopinska</name>
</author>
<author>
<name sortKey="Harris, L R" uniqKey="Harris L">L. R. Harris</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lennie, P" uniqKey="Lennie P">P. Lennie</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ma, W J" uniqKey="Ma W">W. J. Ma</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Machulla, T K" uniqKey="Machulla T">T.-K. Machulla</name>
</author>
<author>
<name sortKey="Di Luca, M" uniqKey="Di Luca M">M. Di Luca</name>
</author>
<author>
<name sortKey="Frolich, E" uniqKey="Frolich E">E. Frölich</name>
</author>
<author>
<name sortKey="Ernst, M O" uniqKey="Ernst M">M. O. Ernst</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Miyazaki, M" uniqKey="Miyazaki M">M. Miyazaki</name>
</author>
<author>
<name sortKey="Yamamoto, S" uniqKey="Yamamoto S">S. Yamamoto</name>
</author>
<author>
<name sortKey="Uchida, S" uniqKey="Uchida S">S. Uchida</name>
</author>
<author>
<name sortKey="Kitazawa, S" uniqKey="Kitazawa S">S. Kitazawa</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Navarra, J" uniqKey="Navarra J">J. Navarra</name>
</author>
<author>
<name sortKey="Garcia Morera, J" uniqKey="Garcia Morera J">J. García-Morera</name>
</author>
<author>
<name sortKey="Spence, C" uniqKey="Spence C">C. Spence</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Navarra, J" uniqKey="Navarra J">J. Navarra</name>
</author>
<author>
<name sortKey="Hartcher O Rien, J" uniqKey="Hartcher O Rien J">J. Hartcher-O’Brien</name>
</author>
<author>
<name sortKey="Piazza, E" uniqKey="Piazza E">E. Piazza</name>
</author>
<author>
<name sortKey="Spence, C" uniqKey="Spence C">C. Spence</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Navarra, J" uniqKey="Navarra J">J. Navarra</name>
</author>
<author>
<name sortKey="Soto Faraco, S" uniqKey="Soto Faraco S">S. Soto-Faraco</name>
</author>
<author>
<name sortKey="Spence, C" uniqKey="Spence C">C. Spence</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Navarra, J" uniqKey="Navarra J">J. Navarra</name>
</author>
<author>
<name sortKey="Vatakis, A" uniqKey="Vatakis A">A. Vatakis</name>
</author>
<author>
<name sortKey="Zampini, M" uniqKey="Zampini M">M. Zampini</name>
</author>
<author>
<name sortKey="Soto Faraco, S" uniqKey="Soto Faraco S">S. Soto-Faraco</name>
</author>
<author>
<name sortKey="Humphreys, W" uniqKey="Humphreys W">W. Humphreys</name>
</author>
<author>
<name sortKey="Spence, C" uniqKey="Spence C">C. Spence</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Okada, M" uniqKey="Okada M">M. Okada</name>
</author>
<author>
<name sortKey="Kashino, M" uniqKey="Kashino M">M. Kashino</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Parise, C" uniqKey="Parise C">C. Parise</name>
</author>
<author>
<name sortKey="Spence, C" uniqKey="Spence C">C. Spence</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Parise, C V" uniqKey="Parise C">C. V. Parise</name>
</author>
<author>
<name sortKey="Spence, C" uniqKey="Spence C">C. Spence</name>
</author>
<author>
<name sortKey="Ernst, M O" uniqKey="Ernst M">M. O. Ernst</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Roach, N W" uniqKey="Roach N">N. W. Roach</name>
</author>
<author>
<name sortKey="Heron, J" uniqKey="Heron J">J. Heron</name>
</author>
<author>
<name sortKey="Whitaker, D" uniqKey="Whitaker D">D. Whitaker</name>
</author>
<author>
<name sortKey="Mcgraw, P V" uniqKey="Mcgraw P">P. V. McGraw</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Roseboom, W" uniqKey="Roseboom W">W. Roseboom</name>
</author>
<author>
<name sortKey="Arnold, D H" uniqKey="Arnold D">D. H. Arnold</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Roseboom, W" uniqKey="Roseboom W">W. Roseboom</name>
</author>
<author>
<name sortKey="Kawabe, T" uniqKey="Kawabe T">T. Kawabe</name>
</author>
<author>
<name sortKey="Nishida, S" uniqKey="Nishida S">S. Nishida</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Roseboom, W" uniqKey="Roseboom W">W. Roseboom</name>
</author>
<author>
<name sortKey="Nishida, S" uniqKey="Nishida S">S. Nishida</name>
</author>
<author>
<name sortKey="Arnold, D H" uniqKey="Arnold D">D. H. Arnold</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Roufs, J A" uniqKey="Roufs J">J. A. Roufs</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Spence, C" uniqKey="Spence C">C. Spence</name>
</author>
<author>
<name sortKey="Deroy, O" uniqKey="Deroy O">O. Deroy</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Spence, C" uniqKey="Spence C">C. Spence</name>
</author>
<author>
<name sortKey="Shore, D I" uniqKey="Shore D">D. I. Shore</name>
</author>
<author>
<name sortKey="Klein, R M" uniqKey="Klein R">R. M. Klein</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Spence, C" uniqKey="Spence C">C. Spence</name>
</author>
<author>
<name sortKey="Squire, S B" uniqKey="Squire S">S. B. Squire</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Stein, B E" uniqKey="Stein B">B. E. Stein</name>
</author>
<author>
<name sortKey="Meredith, M A" uniqKey="Meredith M">M. A. Meredith</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Tanaka, A" uniqKey="Tanaka A">A. Tanaka</name>
</author>
<author>
<name sortKey="Kaori, A" uniqKey="Kaori A">A. Kaori</name>
</author>
<author>
<name sortKey="Hisato, I" uniqKey="Hisato I">I. Hisato</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Titchener, E B" uniqKey="Titchener E">E. B. Titchener</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Van Eijk, R L J" uniqKey="Van Eijk R">R. L. J. van Eijk</name>
</author>
<author>
<name sortKey="Kohlrausch, A" uniqKey="Kohlrausch A">A. Kohlrausch</name>
</author>
<author>
<name sortKey="Juola, J F" uniqKey="Juola J">J. F. Juola</name>
</author>
<author>
<name sortKey="Van De Par, S" uniqKey="Van De Par S">S. van de Par</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Vatakis, A" uniqKey="Vatakis A">A. Vatakis</name>
</author>
<author>
<name sortKey="Navarra, J" uniqKey="Navarra J">J. Navarra</name>
</author>
<author>
<name sortKey="Soto Faraco, S" uniqKey="Soto Faraco S">S. Soto-Faraco</name>
</author>
<author>
<name sortKey="Spence, C" uniqKey="Spence C">C. Spence</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Vatakis, A" uniqKey="Vatakis A">A. Vatakis</name>
</author>
<author>
<name sortKey="Navarra, J" uniqKey="Navarra J">J. Navarra</name>
</author>
<author>
<name sortKey="Soto Faraco, S" uniqKey="Soto Faraco S">S. Soto-Faraco</name>
</author>
<author>
<name sortKey="Spence, C" uniqKey="Spence C">C. Spence</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Vatakis, A" uniqKey="Vatakis A">A. Vatakis</name>
</author>
<author>
<name sortKey="Spence, C" uniqKey="Spence C">C. Spence</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Vatakis, A" uniqKey="Vatakis A">A. Vatakis</name>
</author>
<author>
<name sortKey="Spence, C" uniqKey="Spence C">C. Spence</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Vatakis, A" uniqKey="Vatakis A">A. Vatakis</name>
</author>
<author>
<name sortKey="Spence, C" uniqKey="Spence C">C. Spence</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Vroomen, J" uniqKey="Vroomen J">J. Vroomen</name>
</author>
<author>
<name sortKey="Keetels, M" uniqKey="Keetels M">M. Keetels</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Vroomen, J" uniqKey="Vroomen J">J. Vroomen</name>
</author>
<author>
<name sortKey="Keetels, M" uniqKey="Keetels M">M. Keetels</name>
</author>
<author>
<name sortKey="De Gelder, B" uniqKey="De Gelder B">B. de Gelder</name>
</author>
<author>
<name sortKey="Bertelson, P" uniqKey="Bertelson P">P. Bertelson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Williams, J M" uniqKey="Williams J">J. M. Williams</name>
</author>
<author>
<name sortKey="Lit, A" uniqKey="Lit A">A. Lit</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Yamamoto, S" uniqKey="Yamamoto S">S. Yamamoto</name>
</author>
<author>
<name sortKey="Miyazaki, M" uniqKey="Miyazaki M">M. Miyazaki</name>
</author>
<author>
<name sortKey="Iwano, T" uniqKey="Iwano T">T. Iwano</name>
</author>
<author>
<name sortKey="Kitazawa, S" uniqKey="Kitazawa S">S. Kitazawa</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Yarrow, K" uniqKey="Yarrow K">K. Yarrow</name>
</author>
<author>
<name sortKey="Roseboom, W" uniqKey="Roseboom W">W. Roseboom</name>
</author>
<author>
<name sortKey="Arnold, D H" uniqKey="Arnold D">D. H. Arnold</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Yarrow, K" uniqKey="Yarrow K">K. Yarrow</name>
</author>
<author>
<name sortKey="Jahn, N" uniqKey="Jahn N">N. Jahn</name>
</author>
<author>
<name sortKey="Durant, S" uniqKey="Durant S">S. Durant</name>
</author>
<author>
<name sortKey="Arnold, D H" uniqKey="Arnold D">D. H. Arnold</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Yuan, X" uniqKey="Yuan X">X. Yuan</name>
</author>
<author>
<name sortKey="Li, B" uniqKey="Li B">B. Li</name>
</author>
<author>
<name sortKey="Bi, C" uniqKey="Bi C">C. Bi</name>
</author>
<author>
<name sortKey="Yin, H" uniqKey="Yin H">H. Yin</name>
</author>
<author>
<name sortKey="Huang, X" uniqKey="Huang X">X. Huang</name>
</author>
</analytic>
</biblStruct>
</listBibl>
</div1>
</back>
</TEI>
<pmc article-type="research-article">
<pmc-dir>properties open_access</pmc-dir>
<front>
<journal-meta>
<journal-id journal-id-type="nlm-ta">Front Psychol</journal-id>
<journal-id journal-id-type="iso-abbrev">Front Psychol</journal-id>
<journal-id journal-id-type="publisher-id">Front. Psychol.</journal-id>
<journal-title-group>
<journal-title>Frontiers in Psychology</journal-title>
</journal-title-group>
<issn pub-type="epub">1664-1078</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="pmid">23658549</article-id>
<article-id pub-id-type="pmc">3633943</article-id>
<article-id pub-id-type="doi">10.3389/fpsyg.2013.00189</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Psychology</subject>
<subj-group>
<subject>Original Research</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Audio-Visual Temporal Recalibration Can be Constrained by Content Cues Regardless of Spatial Overlap</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name>
<surname>Roseboom</surname>
<given-names>Warrick</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
<xref ref-type="author-notes" rid="fn001">*</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Kawabe</surname>
<given-names>Takahiro</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Nishida</surname>
<given-names>Shin’Ya</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
</contrib>
</contrib-group>
<aff id="aff1">
<sup>1</sup>
<institution>Human Information Science Laboratory, NTT Communication Science Laboratories</institution>
<country>Atsugi, Japan</country>
</aff>
<author-notes>
<fn fn-type="edited-by">
<p>Edited by: Frans Verstraten, The University of Sydney, Australia</p>
</fn>
<fn fn-type="edited-by">
<p>Reviewed by: David Alais, University of Sydney, Australia; Jean Vroomen, University of Tilburg, Netherlands</p>
</fn>
<corresp id="fn001">*Correspondence: Warrick Roseboom, Human Information Science Laboratory, NTT Communication Science Laboratories, 3-1 Morinosato-Wakamiya, Atsugi, 243-0198 Kanagawa, Japan. e-mail:
<email xlink:type="simple">wjroseboom@gmail.com</email>
</corresp>
<fn fn-type="other" id="fn002">
<p>This article was submitted to Frontiers in Perception Science, a specialty of Frontiers in Psychology.</p>
</fn>
</author-notes>
<pub-date pub-type="epub">
<day>24</day>
<month>4</month>
<year>2013</year>
</pub-date>
<pub-date pub-type="collection">
<year>2013</year>
</pub-date>
<volume>4</volume>
<elocation-id>189</elocation-id>
<history>
<date date-type="received">
<day>21</day>
<month>1</month>
<year>2013</year>
</date>
<date date-type="accepted">
<day>29</day>
<month>3</month>
<year>2013</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright © 2013 Roseboom, Kawabe and Nishida.</copyright-statement>
<copyright-year>2013</copyright-year>
<license license-type="open-access" xlink:href="http://creativecommons.org/licenses/by/3.0/">
<license-p>This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in other forums, provided the original authors and source are credited and subject to any copyright notices concerning any third-party graphics etc.</license-p>
</license>
</permissions>
<abstract>
<p>It has now been well established that the point of subjective synchrony for audio and visual events can be shifted following exposure to asynchronous audio-visual presentations, an effect often referred to as temporal recalibration. Recently it was further demonstrated that it is possible to concurrently maintain two such recalibrated estimates of audio-visual temporal synchrony. However, it remains unclear precisely what defines a given audio-visual pair such that it is possible to maintain a temporal relationship distinct from other pairs. It has been suggested that spatial separation of the different audio-visual pairs is necessary to achieve multiple distinct audio-visual synchrony estimates. Here we investigated if this is necessarily true. Specifically, we examined whether it is possible to obtain two distinct temporal recalibrations for stimuli that differed
<italic>only</italic>
in featural content. Using both complex (audio visual speech; see
<xref ref-type="sec" rid="s1">Experiment 1</xref>
) and simple stimuli (high and low pitch audio matched with either vertically or horizontally oriented Gabors; see
<xref ref-type="sec" rid="s2">Experiment 2</xref>
) we found concurrent, and opposite, recalibrations despite there being no spatial difference in presentation location at any point throughout the experiment. This result supports the notion that the content of an audio-visual pair alone can be used to constrain distinct audio-visual synchrony estimates regardless of spatial overlap.</p>
</abstract>
<kwd-group>
<kwd>lag adaptation</kwd>
<kwd>temporal recalibration</kwd>
<kwd>audio-visual</kwd>
<kwd>multisensory</kwd>
<kwd>speech perception</kwd>
<kwd>spatial</kwd>
<kwd>contextual</kwd>
</kwd-group>
<counts>
<fig-count count="4"></fig-count>
<table-count count="0"></table-count>
<equation-count count="0"></equation-count>
<ref-count count="60"></ref-count>
<page-count count="13"></page-count>
<word-count count="9990"></word-count>
</counts>
</article-meta>
</front>
<body>
<sec>
<title>Introduction</title>
<p>Many events in our everyday environment produce signals that can be perceived by multiple sensory modalities. For example, human speech produces correlated signals in both visual and auditory modalities. Critically, the information perceived by different sensory modalities is initially processed independently and subsequently combined to form a coherent percept. When the sources are redundant, the accuracy of perceptual judgments can be enhanced (Stein and Meredith,
<xref ref-type="bibr" rid="B45">1993</xref>
; Ernst and Banks,
<xref ref-type="bibr" rid="B11">2002</xref>
; Alais and Burr,
<xref ref-type="bibr" rid="B1">2004</xref>
; Arnold et al.,
<xref ref-type="bibr" rid="B2">2010</xref>
). However, a challenge to this process is that a common source of origin for two sensory signals does not guarantee a common perception of time due to differences in both extrinsic and intrinsic signal speeds (Spence and Squire,
<xref ref-type="bibr" rid="B44">2003</xref>
; King,
<xref ref-type="bibr" rid="B24">2005</xref>
). With regards to audio and visual signals, sound (∼330 m/s) travels through air more slowly than light (∼300,000,000 m/s). After reaching sensory receptors, transduction of sound by the hair cells of the inner ear is quicker than photo-transduction of light by the retina, resulting in processing latency differences up to ∼50 ms (King,
<xref ref-type="bibr" rid="B24">2005</xref>
). These differences in physical and neural transmission speeds will cancel each other out at observer distances of ∼10–15 m, but stimulus attributes can also contribute to this variance. For example, speed of neural propagation is correlated with signal intensity (Roufs,
<xref ref-type="bibr" rid="B41">1963</xref>
; Lennie,
<xref ref-type="bibr" rid="B26">1981</xref>
; Williams and Lit,
<xref ref-type="bibr" rid="B56">1983</xref>
; Burr and Corsale,
<xref ref-type="bibr" rid="B8">2001</xref>
; Kopinska and Harris,
<xref ref-type="bibr" rid="B25">2004</xref>
). By a related means, attention also likely contributes (e.g., prior entry; Titchener,
<xref ref-type="bibr" rid="B47">1908</xref>
; Spence et al.,
<xref ref-type="bibr" rid="B43">2001</xref>
). Consequently, discrepancies in the relative timing of audio and visual signals in the order of 10’s of milliseconds can be expected at varying event distances and signal intensities.</p>
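As a rough numerical illustration of the arrival-time argument above, the short Python sketch below computes the net audio-visual asynchrony at a few observer distances, using the approximate speed of sound cited in the text and assuming the ~50 ms auditory transduction advantage as an upper bound; the distances and exact cancellation point are illustrative only.

SPEED_OF_SOUND = 330.0           # m/s, approximate value cited above
NEURAL_ADVANTAGE_AUDIO = 0.050   # s, assumed auditory transduction advantage (upper bound cited above)

def av_asynchrony(distance_m, neural_advantage=NEURAL_ADVANTAGE_AUDIO):
    # Net "audio arrives later than vision" time in seconds. Light travel time is
    # negligible at these distances, so sound lags by distance / speed of sound,
    # partially offset by the faster auditory transduction.
    return distance_m / SPEED_OF_SOUND - neural_advantage

for d in (1, 5, 10, 15, 20, 50):
    print(f"{d:>3} m: {av_asynchrony(d) * 1000:+6.1f} ms")

# With these values the two effects roughly cancel near 15 m; a smaller assumed
# transduction difference moves the cancellation point toward the ~10 m end of
# the range mentioned in the text.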
<p>As our perception of nearby audio-visual events typically contains minimal apparent temporal discrepancy, a critical question regards what possible processes the brain may utilize to create such coherent perception. It has recently been proposed that one strategy to overcome the problem of differential transmission speeds would be to dynamically calibrate audio-visual timing perception based on recent events (Fujisaki et al.,
<xref ref-type="bibr" rid="B14">2004</xref>
; Vroomen et al.,
<xref ref-type="bibr" rid="B55">2004</xref>
; Heron et al.,
<xref ref-type="bibr" rid="B20">2007</xref>
). In support of this idea, many studies (e.g., Fujisaki et al.,
<xref ref-type="bibr" rid="B14">2004</xref>
; Vroomen et al.,
<xref ref-type="bibr" rid="B55">2004</xref>
; Navarra et al.,
<xref ref-type="bibr" rid="B33">2005</xref>
,
<xref ref-type="bibr" rid="B31">2009</xref>
,
<xref ref-type="bibr" rid="B30">2012</xref>
; Miyazaki et al.,
<xref ref-type="bibr" rid="B29">2006</xref>
; Heron et al.,
<xref ref-type="bibr" rid="B20">2007</xref>
,
<xref ref-type="bibr" rid="B19">2010</xref>
,
<xref ref-type="bibr" rid="B18">2012</xref>
; Keetels and Vroomen,
<xref ref-type="bibr" rid="B23">2007</xref>
; Vatakis et al.,
<xref ref-type="bibr" rid="B49">2007</xref>
,
<xref ref-type="bibr" rid="B50">2008</xref>
; Hanson et al.,
<xref ref-type="bibr" rid="B16">2008</xref>
; Harrar and Harris,
<xref ref-type="bibr" rid="B17">2008</xref>
; Di Luca et al.,
<xref ref-type="bibr" rid="B10">2009</xref>
; Roach et al.,
<xref ref-type="bibr" rid="B37">2011</xref>
; Roseboom and Arnold,
<xref ref-type="bibr" rid="B38">2011</xref>
; Tanaka et al.,
<xref ref-type="bibr" rid="B46">2011</xref>
; Yarrow et al.,
<xref ref-type="bibr" rid="B58">2011a</xref>
,
<xref ref-type="bibr" rid="B59">b</xref>
; Machulla et al.,
<xref ref-type="bibr" rid="B28">2012</xref>
; Yuan et al.,
<xref ref-type="bibr" rid="B60">2012</xref>
; see Vroomen and Keetels,
<xref ref-type="bibr" rid="B54">2010</xref>
for review) have demonstrated that following exposure (adaptation) to a short period (<∼3 mins) containing repeated presentations of audio-visual pairs in which the audio and visual components are presented asynchronously (∼100–300 ms), observers’ point of subjective synchrony (PSS) between audio and visual events shifts in the direction of the exposed asynchrony (i.e., observers report physical offsets between audio and visual events in the exposed direction, for example audition lagging vision, as synchronous more often than they had prior to the exposure period). This change is sometimes accompanied by a change in the width of the response distribution (reported either by the just noticeable difference; JND, standard deviation; SD, or full-width half-maximum; FWHM of the distribution) such that observers respond with less temporal precision following adaptation to asynchrony.</p>
<p>Subsequent studies support the existence of similar recalibration processes for many different combinations of both multisensory (Navarra et al.,
<xref ref-type="bibr" rid="B32">2007</xref>
; Hanson et al.,
<xref ref-type="bibr" rid="B16">2008</xref>
; Harrar and Harris,
<xref ref-type="bibr" rid="B17">2008</xref>
; Di Luca et al.,
<xref ref-type="bibr" rid="B10">2009</xref>
) and unisensory signal pairs (Bennett and Westheimer,
<xref ref-type="bibr" rid="B6">1985</xref>
; Okada and Kashino,
<xref ref-type="bibr" rid="B34">2003</xref>
; Arnold and Yarrow,
<xref ref-type="bibr" rid="B3">2011</xref>
). These results suggest that sensory recalibration occurs supra-modally. Combined with results demonstrating that temporal recalibration may transfer across stimuli or tasks (Fujisaki et al.,
<xref ref-type="bibr" rid="B14">2004</xref>
; Keetels and Vroomen,
<xref ref-type="bibr" rid="B23">2007</xref>
; Di Luca et al.,
<xref ref-type="bibr" rid="B10">2009</xref>
; Navarra et al.,
<xref ref-type="bibr" rid="B31">2009</xref>
,
<xref ref-type="bibr" rid="B30">2012</xref>
), these studies indicate that sensory recalibration may represent a change in a generalized mechanism of timing perception. However, humans exist in a spatio-temporally cluttered world with the possibility of perceiving one or more multisensory events, each at a different distance and with differing signal intensities, in close temporal succession. In such an environment, maintaining a single estimate of synchrony generalized across all possible event pairs may not be beneficial for facilitating accurate perception of any given signal pair. Accordingly, it might be possible that humans can concurrently maintain multiple, distinct, estimates of audio-visual synchrony. The results of two recent studies (Roseboom and Arnold,
<xref ref-type="bibr" rid="B38">2011</xref>
; Heron et al.,
<xref ref-type="bibr" rid="B18">2012</xref>
) support such a premise.</p>
<p>A study by Roseboom and Arnold (
<xref ref-type="bibr" rid="B38">2011</xref>
) utilized male and female audio-visual speech stimuli and demonstrated that it is possible for observers to concurrently maintain two temporally opposing estimates of audio-visual synchrony. For example, one estimate for the female identity where audition preferably leads vision, and one estimate for the male identity where audition preferably lags vision. A subsequent study by Heron et al. (
<xref ref-type="bibr" rid="B18">2012</xref>
) replicated this finding for simple stimuli, and further suggested that the spatial location, not the content of stimuli, might constrain differential temporal recalibrations. Using pairs of high or low spatial frequency Gabors paired with high or low temporal frequency auditory tones, they presented all stimuli from the same physical location. This configuration revealed no evidence for differential temporal recalibrations dependent on the content of the stimuli. However, when presenting two identical audio and visual stimuli (Gaussian luminance blobs and auditory white noise) from different spatial locations (left or right of fixation with matched auditory location), the results clearly demonstrated opposite temporal recalibrations constrained by the physical presentation location. This result was consistent with the spatial specificity often shown by temporal adaptation effects (Johnston et al.,
<xref ref-type="bibr" rid="B22">2006</xref>
; Ayhan et al.,
<xref ref-type="bibr" rid="B4">2009</xref>
; Bruno et al.,
<xref ref-type="bibr" rid="B7">2010</xref>
).</p>
<p>However, the result is apparently inconsistent with that reported by Roseboom and Arnold (
<xref ref-type="bibr" rid="B38">2011</xref>
). In this study it was revealed that the recalibrated synchrony estimates for a given stimulus identity (male or female) did not change whether the stimuli were presented from the same or different spatial location from that in which they were presented during the adaptation period. This result indicated that the differential recalibrations were constrained not by the spatial position of presentation but were contingent primarily on the content of the stimulus, in this case the identity of the speaker (i.e., male or female). This suggestion is broadly consistent with several other recent results demonstrating that temporal perception of audio-visual displays can be modulated by the content or featural relation of the signals (e.g., Vatakis and Spence,
<xref ref-type="bibr" rid="B52">2007</xref>
; Parise and Spence,
<xref ref-type="bibr" rid="B35">2009</xref>
; Roseboom et al.,
<xref ref-type="bibr" rid="B39">2013</xref>
).</p>
<p>In trying to reconcile this difference, Heron et al. (
<xref ref-type="bibr" rid="B18">2012</xref>
) pointed to the fact that the stimuli in Roseboom and Arnold (
<xref ref-type="bibr" rid="B38">2011</xref>
) reliably differed during the adaptation phase not only in content (identity) but also in visual spatial location of presentation. By comparison, in Heron et al. (
<xref ref-type="bibr" rid="B18">2012</xref>
) investigation of content constrained temporal recalibration the stimuli were only ever presented from a single central location. One might take this to imply that spatial dissociation, at least during the initial adaptation sequence, may be a critical factor for determining the appropriate audio-visual correspondences in order for a content constrained recalibration to be revealed. However, an alternative interpretation is that while difference in spatial location is an effective factor to facilitate audio-visual correspondence during adaptation, other factors such as featural or content difference may also be able to play a similar role. According to this idea, a spatial location difference is not absolutely necessary to produce differential temporal recalibrations – featural difference may be sufficient.</p>
<p>The role of spatial specificity in temporal recalibration is a critical question. Close spatio-temporal correspondence has been demonstrated to be a critical feature for the most basic level of multisensory integration in the mammalian brain (see Stein and Meredith,
<xref ref-type="bibr" rid="B45">1993</xref>
). While featural correspondence has not been demonstrated to play such a fundamental role in multisensory perception, an array of different natural featural correspondences between different audio and visual pairs have been demonstrated (e.g., high temporal frequency sounds and high spatial frequency visual gratings; Evans and Treisman,
<xref ref-type="bibr" rid="B12">2010</xref>
). However, the evidence to suggest that these correspondences are anything more than common decisional strategies is controversial (see Spence and Deroy,
<xref ref-type="bibr" rid="B42">2013</xref>
for a recent review). Consequently, a characterization of temporal recalibration as a general process, utilizing information from many dimensions of event difference, including spatial, temporal, and featural correspondence, implies different processing requirements to a more specified process constrained only by spatio-temporal relation. We were interested in determining why the results of Roseboom and Arnold (
<xref ref-type="bibr" rid="B38">2011</xref>
) and Heron et al. (
<xref ref-type="bibr" rid="B18">2012</xref>
) support such different characterizations. We wanted to know if it was possible to obtain equivalent results to those reported by Roseboom and Arnold (
<xref ref-type="bibr" rid="B38">2011</xref>
) in stimulus displays that contain no spatial disparity during either the adaptation or test phases.</p>
</sec>
<sec id="s1">
<title>Experiment 1</title>
<p>In the first experiment we constructed a paradigm similar to that previously used by Roseboom and Arnold (
<xref ref-type="bibr" rid="B38">2011</xref>
), with some minor differences. The stimuli were male or female actors saying “ba” (see Figure
<xref ref-type="fig" rid="F1">1</xref>
; Movie
<xref ref-type="supplementary-material" rid="SM1">S1</xref>
in Supplementary Material for example). Critically, there was no difference in spatial location of presentation for the different identity stimuli during any phase of the experiment. As such, this experiment was designed to explicitly confirm whether it is necessary to have spatial disparity during the adaptation stage of the experiment to obtain multiple, concurrent, audio-visual temporal recalibrations constrained only by featural differences for audio-visual speech stimuli.</p>
<fig id="F1" position="float">
<label>Figure 1</label>
<caption>
<p>
<bold>Example Test trial sequence from Experiment 1</bold>
. Each trial began with an Adaptation top-up period in which two repeats of each of the adapting relationships from the previously presented Adaptation phase were repeated. Following the top-up period, participants were informed by a change in the fixation cross from red to green that the next presentation would be a Test presentation to which they would have to respond. The Adaptation phase consisted of 40 repeats of each stimulus configuration, as depicted in the Adaptation top-up period, before proceeding onto the Adaptation top-up/Test presentation cycle.</p>
</caption>
<graphic xlink:href="fpsyg-04-00189-g001"></graphic>
</fig>
<sec>
<title>Participants</title>
<p>There were eight participants, all naïve as to the experimental purpose. All reported normal or corrected to normal vision and hearing. Participants received ¥1000 per hour for their participation. Ethical approval for this study was obtained from the ethical committee at Nippon Telegraph and Telephone Corporation (NTT Communication Science Laboratories Ethical Committee). The experiments were conducted according to the principles laid down in the Helsinki Declaration. Written informed consent was obtained from all participants.</p>
</sec>
<sec>
<title>Apparatus and stimulus</title>
<p>Visual stimuli were generated using a VSG 2/3 from Cambridge Research Systems (CRS) and displayed on a 21″ Sony Trinitron GDM-F520 monitor (resolution of 800 × 600 pixels and refresh rate of 120 Hz). Participants viewed stimuli from a distance of ∼57 cm. Audio signals were presented binaurally via Sennheiser HDA200 headphones. Audio stimulus presentations were controlled by a TDT RM1 Mobile Processor (Tucker-Davis Technologies). Auditory presentation timing was driven via a digital line from a VSG Break-out box (CRS), connected to the VSG, which triggered the RM1. Participants responded using a CRS CT3 response box.</p>
<p>The stimuli consisted of 500 ms movies of native Japanese speakers, either male or female, saying “ba” (recorded using a Sony Handycam HDR-CX560). The visual components of these recordings were sampled at a rate of 30 frames per second. Visual stimuli were presented within an oval aperture (5.65° of visual angle wide, 7.65° of visual angle high) centered 5.75° of visual angle above a central fixation cross (which subtended 0.6° of visual angle in width and height) against a black background (see Figure
<xref ref-type="fig" rid="F1">1</xref>
; Movie
<xref ref-type="supplementary-material" rid="SM1">S1</xref>
in Supplementary Material for depiction). Auditory signals were produced from the original movies (16 bit sample size, mono) and were normalized to a peak sound intensity of ∼65 db SPL. A “Hiss and Hum” filter was applied to audio stimuli below 20 db (using WavePad Audio Editor, NCH Software).</p>
<p>The experiment consisted of two phases, Adaptation and post-adaptation Test. During the Adaptation phase participants observed 40 presentations of each of the male and female stimuli, sequentially alternating between the two (see Movie
<xref ref-type="supplementary-material" rid="SM1">S1</xref>
in Supplementary Material for example trial sequence). The two audio-visual stimuli possessed opposite audio-visual temporal relationships, such that, for example (as in Figure
<xref ref-type="fig" rid="F1">1</xref>
; Movie
<xref ref-type="supplementary-material" rid="SM1">S1</xref>
in Supplementary Material), the onset of the audio stream of the female voice occurred
<italic>prior</italic>
to the onset of the female visual stream, and the onset of the audio stream of the male voice occurred
<italic>following</italic>
the onset of the male visual stream. During the Adaptation phase, the temporal distance between the onset of audio and visual components was always ±300 ms. Between subsequent presentations there was a pause of 1300–1700 ms, determined on a presentation-by-presentation basis. During the adaptation period, participants were instructed to simply pay attention to the temporal relationship between audio and visual presentations, an instruction similar to that typically used (Heron et al.,
<xref ref-type="bibr" rid="B19">2010</xref>
,
<xref ref-type="bibr" rid="B18">2012</xref>
; Roseboom and Arnold,
<xref ref-type="bibr" rid="B38">2011</xref>
).</p>
<p>Subsequent to the Adaptation period, participants completed the Test phase in which they were required to make synchrony/asynchrony judgments regarding presentations of the audio-visual stimuli which they had viewed during the Adaptation phase. In the Test phase the temporal relationship between audio and visual components was manipulated across nine levels (−433, −333, −233, −133, 0, 133, 233, 333, 433 ms; negative numbers indicating audio occurred before vision). Prior to each Test trial presentation, participants viewed an adaptation top-up sequence in which two presentations of each of the previously viewed adapting configurations from the Adaptation phase were again presented. Following this four presentation sequence, participants were informed that they would be required to respond to the next presentation by a change in the central fixation cross from red to green (see Figure
<xref ref-type="fig" rid="F1">1</xref>
; Movie
<xref ref-type="supplementary-material" rid="SM1">S1</xref>
in Supplementary Material).</p>
<p>As there were two audio-visual stimuli, and two possible audio-visual temporal relationships (audio leading vision; audio trailing vision), there were four possible stimulus configurations. Each experimental session concurrently adapted the two different audio-visual stimulus combinations to opposite temporal relationships, creating two experimental conditions (male audio leads vision with female audio lags vision; and male audio lags vision with female audio leads vision). For each condition, participants completed four blocks of 72 trials; 36 Test trials for each of the two audio-visual stimulus combinations, with four repeats at each of the nine audio-visual temporal offsets. The order of completion of trials in a given block was pseudo-random. Each condition required the completion of 288 trials, 576 trials across all four conditions. Each of the eight blocks of trials took ∼25 min to complete. Participants completed the different conditions over a 2 day period with the four blocks of a given condition completed in a single day.</p>
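As a quick arithmetic check of the trial counts just described, the following Python sketch tallies them; variable names are ours, and "condition" here means one of the two concurrent adaptation assignments described above.

offsets_ms = [-433, -333, -233, -133, 0, 133, 233, 333, 433]   # Test-phase offsets
repeats_per_offset = 4
stimuli = ("male", "female")
blocks_per_condition = 4
conditions = 2   # male-leads/female-lags vs male-lags/female-leads

trials_per_stimulus_per_block = repeats_per_offset * len(offsets_ms)   # 4 x 9 = 36
trials_per_block = trials_per_stimulus_per_block * len(stimuli)        # 72
trials_per_condition = trials_per_block * blocks_per_condition         # 288
total_trials = trials_per_condition * conditions                       # 576
print(trials_per_block, trials_per_condition, total_trials)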
</sec>
<sec>
<title>Results</title>
<p>Participants’ PSS’s were estimated separately for each of the stimulus identities, for each of the two possible adaptation timing relationships. The PSS was taken as the peak of a truncated Gaussian function fitted to participants’ response distributions (as done in Roseboom and Arnold,
<xref ref-type="bibr" rid="B38">2011</xref>
) obtained from synchrony/asynchrony judgments completed during Test phases (see Supplemental Material for PSS’s estimated as the average of upper and lower boundaries of a distribution fitted by the difference of two cumulative Gaussian functions based on methods demonstrated in Yarrow et al.,
<xref ref-type="bibr" rid="B59">2011b</xref>
). We also took the SD of the fitted functions as a measure of the width of the response distribution. This value is often used as an indicator of the precision with which participants are responding.</p>
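To make the fitting procedure concrete, the following is a minimal Python sketch (NumPy/SciPy) of estimating a PSS and SD by fitting a scaled Gaussian to the proportion of "synchronous" responses at each test offset. It uses an ordinary rather than a truncated Gaussian, and the response proportions are invented for illustration; they are not data from this study.

import numpy as np
from scipy.optimize import curve_fit

def scaled_gaussian(soa, amplitude, pss, sd):
    # Proportion of "synchronous" responses as a function of audio-visual offset (ms).
    return amplitude * np.exp(-0.5 * ((soa - pss) / sd) ** 2)

soas = np.array([-433, -333, -233, -133, 0, 133, 233, 333, 433], dtype=float)
p_sync = np.array([0.05, 0.10, 0.30, 0.60, 0.85, 0.90, 0.70, 0.40, 0.15])  # hypothetical data

params, _ = curve_fit(scaled_gaussian, soas, p_sync, p0=[1.0, 50.0, 200.0])
amplitude, pss, sd = params
print(f"PSS = {pss:.1f} ms, SD = {abs(sd):.1f} ms")   # peak location and width of the fit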
<p>We conducted a repeated measures analysis of variance (ANOVA) using the individual PSS’s from each of the four possible audio-visual-adaptation relationships (Male and Female, adapting to audio leading and lagging vision relationship; the average of these values for eight participants are shown in Figure
<xref ref-type="fig" rid="F2">2</xref>
). This analysis revealed a main effect of the adapted timing relationship (
<italic>F</italic>
<sub>1,7</sub>
 = 9.705,
<italic>p</italic>
 = 0.017) such that participants’ PSS’s were significantly larger in trials following adaptation to audio lagging vision (Lag = 136.653; SEM = 17.408) compared with trials following adaptation to audio leading vision (Lead = 100.114; SEM = 18.856). There was also a main effect of identity (
<italic>F</italic>
<sub>1,7</sub>
 = 9.228,
<italic>p</italic>
 = 0.019) such that the PSS’s for the male stimulus (Male = 138.987; SEM = 14.814) were larger than for the female stimulus (Female = 97.781; SEM = 20.665), but there was no interaction between stimulus identity and adapting relationship (
<italic>F</italic>
<sub>1,7</sub>
 = 0.115,
<italic>p</italic>
 = 0.745). We also conducted a repeated measures ANOVA on the SD data of the fitted functions. This revealed a significant main effect of the different stimuli (
<italic>F</italic>
<sub>1,7</sub>
 = 9.78,
<italic>p</italic>
 = 0.017) such that the SD was larger for responses regarding the Female stimulus (mean = 248.694; SEM = 21.914) than the Male (mean = 211.969; SEM = 22.734). However, there was no difference in SD’s between adaptation conditions, nor any interaction between adaptation condition and stimulus type (
<italic>F</italic>
’s < 0.722;
<italic>p</italic>
’s > 0.424). Overall, these results are consistent with participants having concurrently adapted to opposite temporal relationships for the different stimulus identities regardless of spatial overlap of presentation.</p>
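For readers who want to run this style of analysis themselves, here is a hedged Python sketch of a 2 x 2 repeated-measures ANOVA (factors: adapted timing relationship and stimulus identity) using pandas and statsmodels' AnovaRM; the PSS values are simulated placeholders, not the data reported above.

import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
rows = []
for subject in range(1, 9):                      # eight participants
    for timing in ("audio_leads", "audio_lags"):
        for identity in ("male", "female"):
            # Simulated PSS (ms): main effects of timing and identity plus noise.
            pss = (100
                   + (40 if timing == "audio_lags" else 0)
                   + (30 if identity == "male" else 0)
                   + rng.normal(0, 20))
            rows.append({"subject": subject, "timing": timing,
                         "identity": identity, "pss": pss})

data = pd.DataFrame(rows)
result = AnovaRM(data, depvar="pss", subject="subject",
                 within=["timing", "identity"]).fit()
print(result)   # F and p values for the two main effects and their interaction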
<fig id="F2" position="float">
<label>Figure 2</label>
<caption>
<p>
<bold>Depictions of data from Experiment 1</bold>
.
<bold>(A,B)</bold>
Distributions of reported audio and visual synchrony for eight participants in trials following adaptation to an audio leading vision relationship (blue) and an audio lagging vision relationship (red) for the Male and Female stimulus types. Broken vertical lines indicate the point of subjective synchrony (PSS). If the red vertical line is placed to the right of the blue line (i.e., more positive) it indicates that an appropriate direction of adaptation was achieved. Adapt to Audio leads Vision refers to trials in which the exposed timing relationship between audition and vision during the Adaptation phase for the given stimulus was such that audition was presented prior to vision by 300 ms. Adapt to Audio lags Vision refers to the reverse case, where the exposed timing relationship during adaptation was such that audition was presented following vision by 300 ms. Note that participants concurrently adapted to opposite audio-visual timing relationships for each stimulus during a given set of trials such that they adapted to audio leads vision for the Male stimulus while concurrently adapting to audio lags vision for the Female stimulus, or vice versa.
<bold>(C)</bold>
Difference in PSS between adapting to an audio leading vision compared to audio lagging vision relationship for each stimulus, for each participant, averaged across the eight participants. Error bars indicate ±1 SEM.</p>
</caption>
<graphic xlink:href="fpsyg-04-00189-g002"></graphic>
</fig>
</sec>
</sec>
<sec id="s2">
<title>Experiment 2</title>
<p>The results of Experiment 1 are consistent with those previously reported by Roseboom and Arnold (
<xref ref-type="bibr" rid="B38">2011</xref>
); specifically, that multiple concurrent temporal recalibrations of audio-visual speech can be constrained by the content of the stimulus, male or female identity of the speaker. This result is found whether the stimuli are presented from the same spatial location during both the Adaptation and Test phases (Experiment 1) or not (Roseboom and Arnold,
<xref ref-type="bibr" rid="B38">2011</xref>
). Critically, the only difference between those two results is that in Experiment 1 of this study, there is no difference in the presentation location at any stage during the experiment. In the previous study by Roseboom and Arnold (
<xref ref-type="bibr" rid="B38">2011</xref>
), the specificity of temporal recalibrations by identity was established by testing the different identity stimuli at different spatial locations from that in which they were presented during the adaptation period. Consequently, the results of Experiment 1 confirm the conclusions of Roseboom and Arnold (
<xref ref-type="bibr" rid="B38">2011</xref>
).</p>
<p>However, one possible criticism of the results presented in Experiment 1 is that, while the
<italic>overall</italic>
position of presentation did not differ between the different stimulus presentations, the spatial properties of the different faces were not precisely matched. Indeed, because the video clips were obtained from real individuals with clearly male and female identities, such differences are bound to arise, as the face dimensions of the two genders are not identical (Burton et al.,
<xref ref-type="bibr" rid="B9">1993</xref>
). Therefore, it may be that while overall presentation location did not vary between the stimuli, small scale differences in spatial configuration may have provided enough information to cue differential temporal recalibration. This speculation, combined with the previous failure to obtain results supporting multiple concurrent recalibrations using more basic stimuli (Heron et al.,
<xref ref-type="bibr" rid="B18">2012</xref>
), makes it unclear whether the constraint by content is unique to complex stimuli containing many small scale differences in spatial configuration, or whether it is possible for truly spatially overlapping stimuli. To investigate this issue we set up an experiment similar to that of Heron et al. (
<xref ref-type="bibr" rid="B18">2012</xref>
) using simple stimuli. The visual stimuli were defined by either vertically or horizontally oriented Gabors and the auditory stimuli were high or low pitch tones (see Figure
<xref ref-type="fig" rid="F3">3</xref>
; Movie
<xref ref-type="supplementary-material" rid="SM1">S2</xref>
in Supplementary Material for example). There was no difference in spatial location of presentation for the different visual or auditory stimuli during any phase of the experiment. As such, this experiment was designed to explicitly investigate whether multiple, concurrent, audio-visual temporal recalibrations are possible for simple stimuli constrained only by featural differences.</p>
<fig id="F3" position="float">
<label>Figure 3</label>
<caption>
<p>
<bold>Example Test trial sequence from Experiment 2</bold>
. Each trial began with an Adaptation top-up period during which each of the adapting relationships from the previously completed Adaptation phase was presented three times. Following the top-up period, participants were informed by a transient change in the fixation cross from white to black that the next presentation would be a Test presentation to which they would have to respond. The Adaptation phase consisted of 30 repeats of each stimulus configuration, as depicted in the Adaptation top-up period, before proceeding to the Adaptation top-up/Test presentation cycle.</p>
</caption>
<graphic xlink:href="fpsyg-04-00189-g003"></graphic>
</fig>
<sec>
<title>Methods</title>
<p>The apparatus was similar to that used in Experiment 1, though the refresh rate of the monitor was 100 Hz. Five participants, naïve as to the experimental purpose, completed the experiment. All reported normal or corrected-to-normal vision and hearing. Written informed consent was obtained from all participants.</p>
<p>The visual stimuli consisted of a vertically or horizontally oriented Gabor patch (SD = 0.7°, background luminance 62 cd/m
<sup>2</sup>
, carrier spatial frequency of 3.5 cycles/degree, Michelson contrast ∼1) centered 2.4° of visual angle above a white (123 cd/m
<sup>2</sup>
) central fixation point (0.4° of visual angle in width and height; see Figure
<xref ref-type="fig" rid="F3">3</xref>
, for depiction). Individual visual stimulus presentations were 20 ms in duration. Auditory signals consisted of a 10 ms pulse of a 300 or 3500 Hz sine-wave carrier, with 2 ms cosine onset and offset ramps, presented at ∼55 dB SPL. As such, there were four possible audio-visual stimulus pairs: vertical Gabor with 300 Hz sound, vertical Gabor with 3500 Hz sound, horizontal Gabor with 300 Hz sound, and horizontal Gabor with 3500 Hz sound.</p>
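<p>For readers who wish to generate comparable stimuli, the following is a minimal sketch (not the authors’ code) of how a Gabor patch and a ramped tone pulse with roughly these parameters could be constructed in Python with NumPy; the pixels-per-degree value, patch size, and audio sample rate are illustrative assumptions.</p>
<preformat>
import numpy as np

# Illustrative constants (assumptions, not specified in the article)
PIX_PER_DEG = 40       # monitor-dependent pixels per degree of visual angle
AUDIO_FS = 44100       # audio sample rate (Hz)

def gabor(bar_orientation_deg, sd_deg=0.7, sf_cpd=3.5, contrast=1.0, size_deg=4.0):
    """Return a Gabor patch (contrast values around mean luminance)."""
    half = size_deg / 2.0
    n = int(size_deg * PIX_PER_DEG)
    x, y = np.meshgrid(np.linspace(-half, half, n), np.linspace(-half, half, n))
    # Luminance modulation runs perpendicular to the bar orientation
    theta = np.deg2rad(bar_orientation_deg + 90.0)
    u = x * np.cos(theta) + y * np.sin(theta)
    carrier = np.cos(2.0 * np.pi * sf_cpd * u)
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sd_deg**2))
    return contrast * carrier * envelope

def tone_pulse(freq_hz, duration_s=0.010, ramp_s=0.002):
    """Return a sine-wave pulse with raised-cosine onset and offset ramps."""
    t = np.arange(int(duration_s * AUDIO_FS)) / AUDIO_FS
    wave = np.sin(2.0 * np.pi * freq_hz * t)
    n_ramp = int(ramp_s * AUDIO_FS)
    ramp = 0.5 * (1.0 - np.cos(np.pi * np.arange(n_ramp) / n_ramp))
    wave[:n_ramp] *= ramp
    wave[-n_ramp:] *= ramp[::-1]
    return wave

vertical_gabor = gabor(90)       # vertically oriented bars
low_tone = tone_pulse(300)       # 300 Hz, 10 ms, 2 ms ramps
</preformat>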
</sec>
<sec>
<title>Procedures</title>
<p>As in Experiment 1, the experiment consisted of two phases, Adaptation and post-adaptation Test. During the Adaptation phase participants observed 30 presentations of each of two audio-visual combinations, sequentially alternating between the two (see Movie
<xref ref-type="supplementary-material" rid="SM1">S2</xref>
in Supplementary Material for example trial sequence). The two audio-visual combinations possessed opposite audio-visual temporal relationships, such that, for example (as in Figure
<xref ref-type="fig" rid="F3">3</xref>
; Movie
<xref ref-type="supplementary-material" rid="SM1">S2</xref>
in Supplementary Material), a low pitch sound occurred
<italic>prior</italic>
to a horizontal Gabor, and a high pitch sound occurred
<italic>following</italic>
a vertical Gabor. During the Adaptation phase, the temporal distance between the onset of audio and visual components was always ±150 ms. Between subsequent presentations there was a pause of 1000–2000 ms, determined on a presentation-by-presentation basis.</p>
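<p>To make the structure of this sequence concrete, the sketch below (an illustration under assumed names and sign conventions, not the experimental code) alternates the two audio-visual pairs, each carrying its own fixed ±150 ms onset asynchrony, with a pause of 1000–2000 ms before each presentation (drawn uniformly here, as an assumption).</p>
<preformat>
import random

# Two adapted pairs with opposite audio-visual onset asynchronies (ms).
# Assumed sign convention: negative = audio leads vision, positive = audio lags vision.
ADAPT_PAIRS = [
    {"visual": "horizontal", "audio_hz": 300,  "soa_ms": -150},
    {"visual": "vertical",   "audio_hz": 3500, "soa_ms": +150},
]

def adaptation_sequence(n_repeats=30, seed=None):
    """Yield the alternating adaptation presentations with jittered pauses."""
    rng = random.Random(seed)
    for _ in range(n_repeats):
        for pair in ADAPT_PAIRS:                 # alternate between the two pairs
            pause_ms = rng.uniform(1000, 2000)   # inter-presentation interval
            yield {**pair, "pause_before_ms": round(pause_ms)}

for presentation in adaptation_sequence(n_repeats=2, seed=1):
    print(presentation)
</preformat>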
<p>Prior to commencing the experiment, participants were shown what the different audio and visual stimuli looked and sounded like. They were then informed explicitly that they would be watching the presentation of two distinct audio-visual pairs and told, for example, that one pair may consist of the vertical visual stimulus and the high pitch audio stimulus, while the other would consist of the horizontal visual stimulus and the low pitch audio stimulus. Moreover, they were informed that the different pairs would possess different audio-visual temporal relationships such that for one pair the visual stimulus would appear prior to the audio stimulus, while for the other pair the visual stimulus would appear following the audio stimulus. They were instructed that their task during the Adaptation period was to pay attention to the temporal discrepancies between audio and visual components for each of the different pairs, a variation on instructions that have previously been shown to be successful in inducing audio-visual temporal recalibration for single audio-visual pairs (Heron et al.,
<xref ref-type="bibr" rid="B19">2010</xref>
). See also Supplemental Experiment 1 for results of a task using slightly different instructions.</p>
<p>Subsequent to the Adaptation period, participants completed the Test phase, in which they were required to make synchrony/asynchrony judgments regarding presentations of the audio-visual stimuli they had viewed during the Adaptation phase. In the Test phase, audio-visual stimuli were always presented in the same pitch-orientation combinations as had been viewed during the immediately preceding Adaptation phase, and the temporal relationship between audio and visual components was manipulated across 11 levels (50 ms steps from −250 to +250 ms). Prior to each Test trial presentation, participants viewed an adaptation top-up sequence in which three presentations of each of the previously viewed adapting configurations from the Adaptation phase were again presented. Following this six-presentation sequence, participants were cued, by a change in the central fixation cross from white to black for 1000 ms, that they would be required to respond to the next presentation (see Movie
<xref ref-type="supplementary-material" rid="SM1">S2</xref>
in Supplementary Materials for example trial sequence).</p>
<p>As there were four audio-visual stimulus combinations and two possible audio-visual temporal relationships (audio leading vision; audio trailing vision), there were eight possible stimulus configurations. In each experimental session, two different audio-visual stimulus combinations were concurrently adapted to opposite temporal relationships, creating four experimental conditions (low pitch-horizontal audio leads vision and high pitch-vertical audio lags vision; low pitch-horizontal audio lags vision and high pitch-vertical audio leads vision; high pitch-horizontal audio leads vision and low pitch-vertical audio lags vision; and high pitch-horizontal audio lags vision and low pitch-vertical audio leads vision). For each condition, participants completed four blocks of 88 trials: 44 Test trials for each of the two audio-visual stimulus combinations, with four repeats at each of the 11 audio-visual temporal offsets. The order of trials within a given block was pseudo-random. Each condition therefore required 352 trials, for a total of 1408 trials across all four conditions. Each of the 16 blocks of trials took ∼20 min to complete. Participants completed the different conditions in a pseudo-random order over a 4-day period, with the four blocks of a given condition completed in a single day.</p>
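<p>The design arithmetic just described can be checked with a short enumeration; the sketch below is purely illustrative bookkeeping for the offsets and trial counts.</p>
<preformat>
# Test-phase design bookkeeping for Experiment 2 (illustrative sketch)
soas_ms = list(range(-250, 251, 50))   # 11 audio-visual offsets in 50 ms steps
assert len(soas_ms) == 11

pairs_per_condition = 2                # two adapted audio-visual combinations
repeats_per_block = 4                  # repeats of each offset per block
blocks_per_condition = 4
n_conditions = 4                       # the four counterbalanced pairings

trials_per_block = pairs_per_condition * repeats_per_block * len(soas_ms)
trials_per_condition = trials_per_block * blocks_per_condition
total_trials = trials_per_condition * n_conditions

print(trials_per_block, trials_per_condition, total_trials)   # 88, 352, 1408
</preformat>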
</sec>
<sec>
<title>Results</title>
<p>Participants’ PSS’s were estimated separately for each of the four audio-visual combinations, at each of the two possible adaptation timing relationships. The PSS was taken as the peak of a truncated Gaussian function fitted to participants’ response distributions (as done in Roseboom and Arnold,
<xref ref-type="bibr" rid="B38">2011</xref>
) obtained from audio-visual synchrony/asynchrony judgments for that condition completed during Test phases (see Supplemental Material for PSS’s estimated as the average of upper and lower boundaries of a distribution fitted by the difference of two cumulative Gaussian functions based on methods demonstrated in Yarrow et al.,
<xref ref-type="bibr" rid="B59">2011b</xref>
). Again, we also took the standard deviation of the fitted function as a measure of the precision with which participants responded.</p>
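<p>To illustrate this fitting step (a simplified sketch, not the authors’ analysis code; the example data and the use of an untruncated, scaled Gaussian are assumptions), the proportion of “synchronous” responses at each test offset can be fitted with a Gaussian-shaped function whose center is taken as the PSS and whose standard deviation is taken as the precision measure.</p>
<preformat>
import numpy as np
from scipy.optimize import curve_fit

def scaled_gaussian(soa, amplitude, pss, sd):
    """Gaussian-shaped synchrony-response function centered on the PSS."""
    return amplitude * np.exp(-((soa - pss) ** 2) / (2.0 * sd ** 2))

# Hypothetical data: test offsets (ms) and proportion of "synchronous" responses
soas = np.arange(-250, 251, 50)
p_sync = np.array([0.05, 0.10, 0.30, 0.55, 0.80, 0.90, 0.85, 0.60, 0.35, 0.15, 0.05])

params, _ = curve_fit(scaled_gaussian, soas, p_sync,
                      p0=[1.0, 0.0, 100.0])     # amplitude, PSS, SD
amplitude, pss, sd = params
print("PSS = %.1f ms, SD = %.1f ms" % (pss, abs(sd)))
</preformat>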
<p>We conducted a repeated measures ANOVA using the individual PSS’s from each of the eight possible audio-visual-adaptation relationships (see Figure
<xref ref-type="fig" rid="F4">4</xref>
for overall data). This analysis revealed a main effect of the adapted timing relationship (
<italic>F</italic>
<sub>1,4</sub>
 = 25.069,
<italic>p</italic>
 = 0.007), such that participants’ PSS’s were significantly larger in trials following adaptation to audio lagging vision (mean = 28.343; SEM = 18.099) compared with trials following adaptation to audio leading vision (mean = 10.883; SEM = 15.915). There was no main effect of different visual stimulus type (
<italic>F</italic>
<sub>1,4</sub>
 = 0.262,
<italic>p</italic>
 = 0.636), though there was a marginal, non-significant effect of auditory stimulus type (
<italic>F</italic>
<sub>1,4</sub>
 = 5.33,
<italic>p</italic>
 = 0.082). However, there was no significant interaction between stimulus types and adaptation timing relationship (
<italic>F</italic>
’s < 3.364;
<italic>p</italic>
’s > 0.141). We also conducted a repeated measures ANOVA on the SD data of the fitted functions. This revealed no significant difference between different stimuli or adaptation conditions (
<italic>F</italic>
’s < 5.135;
<italic>p</italic>
’s > 0.086; overall mean SD = 143.98 ms). Overall, these results are consistent with participants having concurrently adapted to opposite temporal relationships for the different stimulus combinations regardless of spatial overlap of presentation.</p>
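<p>For readers wishing to run an analogous analysis, the sketch below shows how a repeated measures ANOVA on per-participant PSS estimates could be set up with the AnovaRM class from the statsmodels package; the long-format table, column names, and PSS values are hypothetical and for illustration only.</p>
<preformat>
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)

# One (hypothetical) PSS per participant and within-subject cell:
# adapted timing relationship x visual stimulus x auditory stimulus.
rows = []
for participant in range(1, 6):
    for visual in ("horizontal", "vertical"):
        for auditory in ("low", "high"):
            for adapt in ("audio_leads", "audio_lags"):
                mean_pss = 28.0 if adapt == "audio_lags" else 10.0
                rows.append({"participant": participant, "visual": visual,
                             "auditory": auditory, "adapt": adapt,
                             "pss": mean_pss + rng.normal(0.0, 15.0)})

df = pd.DataFrame(rows)
result = AnovaRM(data=df, depvar="pss", subject="participant",
                 within=["adapt", "visual", "auditory"]).fit()
print(result)
</preformat>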
<fig id="F4" position="float">
<label>Figure 4</label>
<caption>
<p>
<bold>Depictions of data from Experiment 2</bold>
.
<bold>(A–D)</bold>
Distributions of reported audio and visual synchrony for five participants in trials following adaptation to an audio leading vision relationship (blue) and an audio lagging vision relationship (red) for the four different stimulus combinations. Broken vertical lines indicate the point of subjective synchrony (PSS). If the red vertical line is placed to the right of the blue line (i.e., more positive) it indicates that an appropriate direction of adaptation was achieved. Adapt to Audio leads Vision refers to trials in which the exposed timing relationship between audition and vision during the Adaptation phase for the given stimulus was such that audition was presented prior to vision by 150 ms. Adapt to Audio lags Vision refers to the reverse case, where the exposed timing relationship during adaptation was such that audition was presented following vision by 150 ms. Note that participants concurrently adapted to opposite audio-visual timing relationships for each stimulus during a given set of trials such that, for example, they adapted to audio leads vision for the Horizontal Gabor and 300 Hz tone combination while concurrently adapting to audio lags vision for the Vertical Gabor and 3000 Hz combination.
<bold>(E)</bold>
Difference in PSS between adapting to an audio leading vision compared to an audio lagging vision relationship for each stimulus, for each participant, averaged across the five participants. Error bars indicate ±1 SEM.</p>
</caption>
<graphic xlink:href="fpsyg-04-00189-g004"></graphic>
</fig>
</sec>
</sec>
<sec id="s3">
<title>General Discussion</title>
<p>The purpose of this study was to determine whether it is possible to obtain multiple concurrent audio-visual temporal recalibrations when stimuli differ in featural content, but not in overall spatial location of presentation at any point during the experimental procedure. This was done in an attempt to resolve the difference in results obtained by two recent studies; Roseboom and Arnold (
<xref ref-type="bibr" rid="B38">2011</xref>
) demonstrated that multiple audio-visual temporal recalibrations could be constrained by featural information of the stimuli, while Heron et al. (
<xref ref-type="bibr" rid="B18">2012</xref>
) suggested that different recalibrations could only be constrained by spatial information. Here, we revealed that two concurrent and opposite audio-visual temporal recalibrations are possible regardless of spatial overlap for both naturally compelling (Experiment 1) and arbitrary stimulus combinations (Experiment 2).</p>
<sec>
<title>Inconsistencies with Heron et al. (2012)</title>
<p>Experiment 1 of this study explicitly addressed one of the primary differences between the two previous studies (Roseboom and Arnold,
<xref ref-type="bibr" rid="B38">2011</xref>
and Heron et al.,
<xref ref-type="bibr" rid="B18">2012</xref>
) – whether a difference in spatial location during the adaptation phase of the experiment is required. However, Experiment 2 might be considered more of a conceptual replication of the experiment from Heron et al. (
<xref ref-type="bibr" rid="B18">2012</xref>
) investigating a case of pure content/featural difference. In that experiment, Heron et al. (
<xref ref-type="bibr" rid="B18">2012</xref>
) found no evidence for multiple concurrent recalibrations, whereas the results of Experiment 2 of this study clearly demonstrate such an effect. This inconsistency may be attributable to minor differences in experimental paradigm between the two studies. These differences are largely superficial, but below we speculate on how they may have contributed to the difference in outcomes.</p>
<sec>
<title>Basic stimulus properties</title>
<p>First, the visual stimuli we used in Experiment 2 were defined by orientation rather than spatial frequency (as in Heron et al.,
<xref ref-type="bibr" rid="B18">2012</xref>
). Further, the audio stimuli were defined by 300 and 3000 Hz sine carrier pure tones, rather than 500 and 2000 Hz. These differences, while minor, may have facilitated participant’s segmentation of the adapting stream into clear audio-visual pairs (e.g., vertical orientated visual paired with 300 Hz tone) to be recalibrated, while the differences in spatial frequency used by Heron et al. (
<xref ref-type="bibr" rid="B18">2012</xref>
) may not have been as clear.</p>
</sec>
<sec>
<title>Temporal structure of adaptation presentations</title>
<p>Along the same lines, the temporal structure of presentation was slightly different in our experiment compared with that of Heron et al. (
<xref ref-type="bibr" rid="B18">2012</xref>
). In their study, during the adaptation phase, successive audio-visual pairs were separated by an interval of between 500 and 1000 ms. In our study, this value was between 1000 and 2000 ms. Given that effective binding of audio and visual events becomes impossible at repetition rates greater than 2–3 Hz (Fujisaki and Nishida,
<xref ref-type="bibr" rid="B13">2010</xref>
), the inter-presentation interval used by Heron et al. (
<xref ref-type="bibr" rid="B18">2012</xref>
) may have been brief enough to sometimes cause confusion as to which audio and visual events comprised a specific pair. A related point in support of this conclusion is that, when audio-visual speech is used, as in Experiment 1 of this study and in Roseboom and Arnold (
<xref ref-type="bibr" rid="B38">2011</xref>
), the repetition rate is much lower, because the speech stimuli (a maximum of 800 ms in this study) are much longer than the simple stimuli (a maximum of 160 ms). This temporal factor, rather than any special ecological validity of audio-visual speech (Yuan et al.,
<xref ref-type="bibr" rid="B60">2012</xref>
), may in fact account for the apparent comparative ease with which concurrent and opposite temporal recalibrations can be obtained for speech relative to simple stimuli. We believe this speculation deserves further investigation.</p>
</sec>
<sec>
<title>Experimental instructions</title>
<p>Finally, the experimental instructions used in Experiment 2 of this study differed slightly from those reportedly used by Heron et al. (
<xref ref-type="bibr" rid="B18">2012</xref>
). In Experiment 2 of our study we provided participants with extensive information about the task and explicitly informed them of which audio and visual signals comprised a pair during a given experimental condition. In the study by Heron et al. (
<xref ref-type="bibr" rid="B18">2012</xref>
), participants were told only to attend to the temporal relationship between audio and visual stimuli. Indeed, when we employed instructions similar to those used by Heron et al. (
<xref ref-type="bibr" rid="B18">2012</xref>
) with five naïve participants, we found no reliable adaptation effects (see Supplemental Experiment 1). Consequently, it seems likely that this factor also contributed to determining the appropriate audio-visual pair to recalibrate to a given audio-visual temporal relationship (note, however, that four and three of the six participants in the first and second experiments of Heron and colleagues, respectively, were the authors).</p>
</sec>
</sec>
<sec>
<title>The comparative role of space and features</title>
<p>An important point is that the ability of content information to constrain multiple temporal recalibrations in the absence of spatial disparity does not imply that spatial relations play no role in multiple concurrent recalibrations, or in temporal recalibration more generally. Indeed, previous evidence strongly supports a role for spatial disparity in constraining temporally recalibrated estimates of synchrony when the task and stimulus configurations provide a clear reason to use it (Yarrow et al.,
<xref ref-type="bibr" rid="B58">2011a</xref>
; Heron et al.,
<xref ref-type="bibr" rid="B18">2012</xref>
; Yuan et al.,
<xref ref-type="bibr" rid="B60">2012</xref>
). However, when there is no requirement to be specific about spatial relationship, as when there is only a single possible audio-visual relationship presented and the task demands require you to treat it as such (Keetels and Vroomen,
<xref ref-type="bibr" rid="B23">2007</xref>
), when there is another strongly compelling cue as to the appropriate audio-visual relationship (e.g., identity; Roseboom and Arnold,
<xref ref-type="bibr" rid="B38">2011</xref>
), or when there is no useful spatial information available (as in this study), spatial cues are not required to determine the appropriate audio and visual signal combination to recalibrate. Certainly, if one were to equate the strength of some set of spatial, content, and task-demand cues such that they contributed equally to determination of the specific audio-visual relationship, then it would be possible to examine a direct trade-off between these different factors. The most appropriate task with which to accomplish this is not entirely clear, as there would be many possible dimensions of interaction; however, we believe it to be conceptually possible. The results of a recent study (Yuan et al.,
<xref ref-type="bibr" rid="B60">2012</xref>
) support this premise. Although in that study the strength of the different cues was not directly equated, they did compare the magnitude of context-constrained and spatially constrained recalibrations when the spatial location of auditory presentations was clear (presented from spatially co-localized loudspeakers) with that when auditory presentations came from spatially non-localized headphones. These comparisons revealed that the relative magnitude of temporal recalibration effects, as defined by spatial or context-based cues, was modulated by whether the spatial information from auditory cues was strong (loudspeaker condition) or less informative (headphone condition).</p>
<p>For achieving useful outcomes in real-world scenarios, it is likely that the strength of a given cue is determined by an interplay between many factors, including top-down influences from attention (Heron et al.,
<xref ref-type="bibr" rid="B20">2007</xref>
), along with stimulus properties that are typically associated with cue combination (signal reliability; e.g., Hillis et al.,
<xref ref-type="bibr" rid="B21">2002</xref>
; Battaglia et al.,
<xref ref-type="bibr" rid="B5">2003</xref>
; and covariance; e.g., Parise et al.,
<xref ref-type="bibr" rid="B36">2012</xref>
) and prior knowledge of the likelihood those signals are related (Guski and Troje,
<xref ref-type="bibr" rid="B15">2003</xref>
; Miyazaki et al.,
<xref ref-type="bibr" rid="B29">2006</xref>
; Vatakis and Spence,
<xref ref-type="bibr" rid="B52">2007</xref>
,
<xref ref-type="bibr" rid="B53">2008</xref>
; see Ma,
<xref ref-type="bibr" rid="B27">2012</xref>
for a recent review of possible statistical implementations in these kinds of scenarios).</p>
</sec>
<sec>
<title>What does this mean for putative mechanisms of temporal recalibration?</title>
<p>It may be important to differentiate how different audio-visual components are selected as appropriate pairs to be recalibrated from how a given temporal recalibration may be implemented. With regard to this latter point, several proposals have been made (e.g., selective modulation of unisensory processing speed, Di Luca et al.,
<xref ref-type="bibr" rid="B10">2009</xref>
; Navarra et al.,
<xref ref-type="bibr" rid="B31">2009</xref>
; modulation of prior likelihood distributions, Yamamoto et al.,
<xref ref-type="bibr" rid="B57">2012</xref>
; asymmetrical change in synchrony judgment criteria, Yarrow et al.,
<xref ref-type="bibr" rid="B59">2011b</xref>
; adaptation of delay sensitive neurons, Roach et al.,
<xref ref-type="bibr" rid="B37">2011</xref>
; note that these possibilities are not necessarily mutually exclusive). That the recalibration effect can be constrained by what would typically be considered highly complex information, such as the identity of a speaker, makes it difficult to reconcile the effect we report here with some of these proposals. Generally speaking, the results of this study support a characterization of audio-visual temporal recalibration as primarily a decision-level effect that occurs as a result of a selective change in synchrony criteria on the side of the exposed asynchrony (Yarrow et al.,
<xref ref-type="bibr" rid="B59">2011b</xref>
) for a specific audio-visual stimulus. An alternative possibility is that the multiple concurrent recalibration effect reflects a process that only acts to constrain the operation of a more basic and direct mechanism of temporal recalibration. Making this kind of distinction suggests a two-stage account of multiple temporal recalibration and may allow the design of paradigms wherein the putative operations are placed in conflict (e.g., Yamamoto et al.,
<xref ref-type="bibr" rid="B57">2012</xref>
). These possibilities remain firmly speculative at this point and further clarification is required before any firm conclusions can be drawn.</p>
<p>Another potentially interesting direction of investigation regards the number of possible concurrent recalibrations that can be maintained. In this and previous studies addressing multiple concurrent recalibrations (Roseboom and Arnold,
<xref ref-type="bibr" rid="B38">2011</xref>
; Heron et al.,
<xref ref-type="bibr" rid="B18">2012</xref>
) only two different audio-visual temporal relationships were used: one with audio leading vision and the other with audio lagging vision. Such an arrangement is preferable under highly constrained experimental conditions, as it maximizes possible differences between the two experimental conditions. However, whether more than two temporal recalibrations can be maintained is an interesting question that may shed light on the nature of the broader mechanism. It has previously been established that the PSS for different audio-visual event pairs can differ according to the type of signals used (e.g., speech compared with music; Vatakis and Spence,
<xref ref-type="bibr" rid="B51">2006</xref>
) and the conditions under which they are judged (e.g., temporally sparse compared with more temporally cluttered; Roseboom et al.,
<xref ref-type="bibr" rid="B40">2009</xref>
; see van Eijk et al.,
<xref ref-type="bibr" rid="B48">2008</xref>
for a review of studies examining subjective synchrony with different stimuli and under different conditions). In this study we adapted the temporal relationship for specific audio-visual pairs over a brief exposure period. Whether the process underlying the observed change in subjective synchrony is associated with longer term determinants of synchrony, or is only a short term adaptive process, is not entirely clear. However, it has recently been demonstrated that, rather than simply dissipating over time, a recalibrated sense of synchrony is maintained until sufficient exposure to contradictory evidence (Machulla et al.,
<xref ref-type="bibr" rid="B28">2012</xref>
). This result may be consistent with the idea that short-term asynchrony exposure simply engages general processes for determining the relationship between specific audio and visual signals.</p>
</sec>
</sec>
<sec>
<title>Conclusion</title>
<p>Determining the appropriate way to interpret an incoming stream of multisensory events is a critical and difficult task for the human perceptual system. In complex sensory environments it makes sense to be flexible and adaptive. Here we add to previous demonstrations showing that humans can not only adjust to inter-sensory temporal discrepancies (Fujisaki et al.,
<xref ref-type="bibr" rid="B14">2004</xref>
; Vroomen et al.,
<xref ref-type="bibr" rid="B55">2004</xref>
), but can do so selectively (Roseboom and Arnold,
<xref ref-type="bibr" rid="B38">2011</xref>
; Heron et al.,
<xref ref-type="bibr" rid="B18">2012</xref>
). This selectivity can be constrained by many factors including apparent spatial (Heron et al.,
<xref ref-type="bibr" rid="B18">2012</xref>
) and featural (Roseboom and Arnold,
<xref ref-type="bibr" rid="B38">2011</xref>
) correspondence. In a complex environment with many cues as to the correspondence between different sensory signals, being able to use important featural information, such as the identity of a speaker, is an attractive strategy. Here we have demonstrated that it is possible to use such rich sources of information in the absence of any spatial discrepancy, for both naturally compelling and arbitrary stimulus combinations. How such information is utilized in creating an altered sense of timing remains an unresolved question, but these results suggest that audio-visual temporal recalibration is the result of complex decisional processes taking into account many aspects of sensory events, including spatial and featural correspondence, along with prior knowledge of likely relatedness.</p>
</sec>
<sec>
<title>Conflict of Interest Statement</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
<sec sec-type="supplementary-material">
<title>Supplementary Material</title>
<p>The Supplementary Material for this article can be found online at
<uri xlink:type="simple" xlink:href="http://www.frontiersin.org/Perception_Science/10.3389/fpsyg.2013.00189/abstract">http://www.frontiersin.org/Perception_Science/10.3389/fpsyg.2013.00189/abstract</uri>
</p>
<supplementary-material content-type="local-data" id="SM1">
<label>Supplementary Movies S1 and S2</label>
<caption>
<p>
<bold>Please note that the supplementary movies provided are not the actual stimuli used in the experiments</bold>
. The movies are only approximations intended to give the reader an impression of the trial presentation appearance. Due to technical constraints we cannot guarantee that these movies precisely match the spatial and temporal properties described for the actual experimental stimuli in the Section “Materials and Methods.”</p>
</caption>
<media xlink:href="45060_Roseboom_Movie1.MOV" mimetype="video" mime-subtype="quicktime">
<caption>
<p>Click here for additional data file.</p>
</caption>
</media>
</supplementary-material>
<supplementary-material content-type="local-data" id="SM2">
<media xlink:href="45060_Roseboom_Movie2.MOV" mimetype="video" mime-subtype="quicktime">
<caption>
<p>Click here for additional data file.</p>
</caption>
</media>
</supplementary-material>
</sec>
</body>
<back>
<ack>
<p>The authors would like to thank Iwaki Toshima and Chie Nagai for their assistance in this project. We would also like to thank Daniel Linares for comments and discussions throughout the course of the project, as well as the two reviewers for their time and contributions to this publication. Finally, thanks to Kielan Yarrow for providing us with the means to analyze the data as shown in the Supplemental Material.</p>
</ack>
<app-group>
<app id="A1">
<title>Appendix</title>
<sec>
<title>Supplemental results</title>
<sec>
<title>Supplemental experiment 1</title>
<sec>
<title>Methods</title>
<p>The methods of Supplemental Experiment 1 were identical to those of Experiment 2 with the following exceptions. Five new participants, all naïve as to the experimental purpose, took part. Unlike in Experiment 2, participants were given no explicit information about the presentation sequence during the Adaptation phase; they were simply instructed to pay attention to the temporal relationship between audio and visual presentations. These instructions approximate those reported to have been used by Heron et al. (
<xref ref-type="bibr" rid="B18">2012</xref>
).</p>
</sec>
<sec>
<title>Results</title>
<p>Results were analyzed as in Experiment 2, with participants’ PSS’s estimated separately for each of the four audio-visual combinations, at each of the two possible adaptation timing relationships. The PSS was taken as the peak of a truncated Gaussian function fitted to participants’ response distributions obtained from audio-visual synchrony/asynchrony judgments for that condition completed during Test phases (see below for results when PSS’s were estimated as the average of upper and lower boundaries of a distribution fitted by the difference of two cumulative Gaussian functions based on methods demonstrated in Yarrow et al.,
<xref ref-type="bibr" rid="B59">2011b</xref>
).</p>
<p>We conducted a repeated measures ANOVA using the individual PSS’s from each of the eight possible audio-visual-adaptation relationships. This analysis revealed no effect of the adapted timing relationship (
<italic>F
<sub>1,4</sub>
</italic>
 = 0.01, 
<italic>p</italic>
 = 0.921), nor any effects of different visual or auditory stimulus type (
<italic>F</italic>
’s < 4.82;
<italic>p</italic>
’s > 0.093). We also conducted a repeated measures ANOVA using the SD of the functions fitted to response distributions. There was no effect of adaptation or stimulus conditions on the width of the fitted functions (
<italic>F</italic>
’s < 3.582;
<italic>p</italic>
’s > 0.131). These results suggest that the instructions provided to participants may be critical to obtaining different concurrent temporal recalibrations. This outcome is broadly consistent with previous findings indicating that where participants direct their attention during the adaptation phase of the experiment can have a significant influence on the magnitude of the recalibration effect (Heron et al.,
<xref ref-type="bibr" rid="B19">2010</xref>
; Tanaka et al.,
<xref ref-type="bibr" rid="B46">2011</xref>
). In Experiment 2, we explicitly instructed participants to attend specifically to the different audio-visual combinations and their respective audio-visual asynchronies. However, in Supplemental Experiment 1, participants were given no instructions regarding any difference between the stimuli. The use of arbitrary stimulus combinations was, in itself, unlikely to promote perception of the different combinations as distinct from one another. By contrast, experimental instructions similar to those used in Supplemental Experiment 1 were also given in Experiment 1. In that case the stimuli were two different clips of real audio-visual speech. Such stimuli may implicitly contain the appropriate information to encourage participants to consider each audio-visual stimulus as distinct from the other. However, the different temporal properties of the stimuli may also be a factor (see
<xref ref-type="sec" rid="s3">General Discussion</xref>
in the main text).</p>
</sec>
<sec>
<title>Fitting response distributions as the difference of two cumulative Gaussian functions</title>
<p>When using synchrony/asynchrony (simultaneity) judgments such as we have used in this study, it is often considered standard practice to fit the obtained response distributions with a probability density function, such as the Gaussian function. However, recently (Yarrow et al.,
<xref ref-type="bibr" rid="B59">2011b</xref>
) it was proposed that an alternative method, fitting the response distribution with two cumulative Gaussian functions, may be superior
<xref ref-type="fn" rid="fn1">
<sup>1</sup>
</xref>
. The reasons for this conclusion remain a matter of debate and are certainly outside the scope of the present study. However, we provide results obtained under both approaches so that readers inclined to compare them can do so.</p>
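<p>As a concrete illustration of this alternative approach (a simplified sketch under assumed starting values and example data, not the code provided by Yarrow and colleagues), the synchrony-response distribution can be modeled as the difference of two cumulative Gaussians whose means correspond to the lower and upper decision boundaries; the PSS is then the average of those two boundary estimates.</p>
<preformat>
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def two_criterion_model(soa, low_bound, high_bound, low_sd, high_sd):
    """P("synchronous"): rises past the lower boundary, falls past the upper one."""
    return (norm.cdf(soa, loc=low_bound, scale=low_sd)
            - norm.cdf(soa, loc=high_bound, scale=high_sd))

# Hypothetical proportions of "synchronous" responses at each test offset (ms)
soas = np.arange(-250, 251, 50)
p_sync = np.array([0.05, 0.10, 0.30, 0.55, 0.80, 0.90, 0.85, 0.60, 0.35, 0.15, 0.05])

params, _ = curve_fit(two_criterion_model, soas, p_sync,
                      p0=[-120.0, 120.0, 50.0, 50.0],
                      bounds=([-400.0, -400.0, 1.0, 1.0],
                              [400.0, 400.0, 400.0, 400.0]))
low_bound, high_bound, low_sd, high_sd = params
pss = (low_bound + high_bound) / 2.0
print("boundaries: %.1f and %.1f ms; PSS = %.1f ms" % (low_bound, high_bound, pss))
</preformat>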
</sec>
</sec>
<sec>
<title>Experiment 1</title>
<p>Participants’ PSS’s were estimated separately for each of the stimulus identities, for each of the two possible adaptation timing relationships. The PSS was taken as the average of the upper and lower boundaries of a distribution fitted by the difference of two cumulative Gaussian functions (Yarrow et al.,
<xref ref-type="bibr" rid="B59">2011b</xref>
; Yuan et al.,
<xref ref-type="bibr" rid="B60">2012</xref>
) obtained from synchrony/asynchrony judgments completed during Test phases.</p>
<p>We conducted a repeated measures ANOVA using the individual PSS’s from each of the four possible audio-visual-adaptation relationships (Male and Female, adapting to audio leading and lagging vision relationship). This analysis revealed a main effect of the adapted timing relationship (
<italic>F
<sub>1,7</sub>
</italic>
 = 6.262, 
<italic>p</italic>
 = 0.041), such that participants’ PSS’s were significantly larger in trials following adaptation to audio lagging vision (Lag = 140.34; SEM = 20.662) compared with trials following adaptation to audio leading vision (Lead = 108.27; SEM = 22.468). There was no main effect of stimulus identity (
<italic>F
<sub>1,7</sub>
</italic>
 = 0.140, 
<italic>p</italic>
 = 0.719) nor interaction between identity and adaptation timing (
<italic>F
<sub>1,7</sub>
</italic>
 = 2.26, 
<italic>p</italic>
 = 0.176). We conducted a repeated measures ANOVA using the SD’s of the functions fitted to the upper and lower bounds of response distributions. There was no effect of adaptation, stimulus conditions, or boundary side (audio leads or lags vision) on the width of the fitted functions (
<italic>F</italic>
’s < 3.878;
<italic>p</italic>
’s > 0.090). Overall, these results are consistent with those reported in the main text indicating that participants concurrently adapted to opposite temporal relationships for the different stimulus identities regardless of spatial overlap.</p>
</sec>
<sec>
<title>Experiment 2</title>
<p>The Supplemental Results of Experiment 2 were analyzed in a similar way to that shown in the Supplemental Results of Experiment 1. We again conducted a repeated measures ANOVA using the individual PSS’s from each of the eight possible audio-visual-adaptation relationships. This analysis revealed a main effect of the adapted timing relationship (
<italic>F
<sub>1,4</sub>
</italic>
 = 12.775, 
<italic>p</italic>
 = 0.023), such that participants’ PSS’s were significantly larger in trials following adaptation to audio lagging vision (mean = 28.915; SEM = 18.488) compared with trials following adaptation to audio leading vision (mean = 7.257; SEM = 13.420). There was no main effect of visual or auditory stimulus type (
<italic>F</italic>
’s < 1.254;
<italic>p</italic>
’s > 0.326). We again conducted a repeated measures ANOVA using the SD’s of the functions fitted to the upper and lower bounds of response distributions. There was no effect of adaptation, stimulus conditions, or boundary side on the width of the fitted functions (
<italic>F</italic>
’s < 4.540;
<italic>p</italic>
’s > 0.077). Overall, these results are consistent with those reported in the main text, supporting the premise that participants concurrently adapted to opposite temporal relationships for different stimulus combinations regardless of spatial overlap.</p>
</sec>
<sec>
<title>Supplemental experiment 1</title>
<p>The Supplemental Results of Supplemental Experiment 1 were analyzed in the same fashion as those for Experiment 2. As for the results reported above for Supplemental Experiment 1, a repeated measures ANOVA using the individual PSS’s from each of the eight possible audio-visual-adaptation relationships revealed no effect of the adapted timing relationship (
<italic>F
<sub>1,4</sub>
</italic>
 = 1.482, 
<italic>p</italic>
 = 0.290), nor any effects of different visual or auditory stimulus type (
<italic>F</italic>
’s < 1.42;
<italic>p</italic>
’s > 0.29). We again conducted a repeated measures ANOVA using the SD’s of the functions fitted to the upper and lower bounds of response distributions. There was no effect of adaptation, stimulus conditions, or boundary side on the width of the fitted functions (
<italic>F</italic>
’s < 2.501;
<italic>p</italic>
’s > 0.189).</p>
</sec>
</sec>
<fn-group>
<fn id="fn1">
<p>
<sup>1</sup>
See Yarrow et al. (
<xref ref-type="bibr" rid="B59">2011b</xref>
) for a detailed description of this approach and a comparison with the standard practice. In short, this approach can be summarized as fitting two cumulative probability functions (in this case cumulative Gaussians) that each describe one of the two sides of the distribution. These functions provide estimates of the decision boundaries for temporal order between the audio and visual signals (i.e., decision regarding audio leading vision on one side and audio lagging vision on the other).</p>
</fn>
</fn-group>
</app>
</app-group>
<ref-list>
<title>References</title>
<ref id="B1">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Alais</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Burr</surname>
<given-names>D.</given-names>
</name>
</person-group>
(
<year>2004</year>
).
<article-title>The ventriloquist effect results from near-optimal bimodal integration</article-title>
.
<source>Curr. Biol.</source>
<volume>14</volume>
,
<fpage>257</fpage>
<lpage>262</lpage>
<pub-id pub-id-type="doi">10.1016/S0960-9822(04)00043-0</pub-id>
<pub-id pub-id-type="pmid">14761661</pub-id>
</mixed-citation>
</ref>
<ref id="B2">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Arnold</surname>
<given-names>D. H.</given-names>
</name>
<name>
<surname>Tear</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Schindel</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Roseboom</surname>
<given-names>W.</given-names>
</name>
</person-group>
(
<year>2010</year>
).
<article-title>Audio-visual speech cue combination</article-title>
.
<source>PLoS ONE</source>
<volume>5</volume>
:
<fpage>e10217</fpage>
<pub-id pub-id-type="doi">10.1371/journal.pone.0010217</pub-id>
<pub-id pub-id-type="pmid">20419130</pub-id>
</mixed-citation>
</ref>
<ref id="B3">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Arnold</surname>
<given-names>D. H.</given-names>
</name>
<name>
<surname>Yarrow</surname>
<given-names>K.</given-names>
</name>
</person-group>
(
<year>2011</year>
).
<article-title>Temporal recalibration of vision</article-title>
.
<source>Proc. R. Soc. Lond. B Biol. Sci.</source>
<volume>278</volume>
,
<fpage>535</fpage>
<lpage>538</lpage>
<pub-id pub-id-type="doi">10.1098/rspb.2010.1396</pub-id>
</mixed-citation>
</ref>
<ref id="B4">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ayhan</surname>
<given-names>I.</given-names>
</name>
<name>
<surname>Bruno</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Nishida</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Johnston</surname>
<given-names>A.</given-names>
</name>
</person-group>
(
<year>2009</year>
).
<article-title>The spatial tuning of adaptation-based time compression</article-title>
.
<source>J. Vis.</source>
<volume>9</volume>
,
<fpage>1</fpage>
<lpage>12</lpage>
<pub-id pub-id-type="doi">10.1167/9.13.1</pub-id>
<pub-id pub-id-type="pmid">20053065</pub-id>
</mixed-citation>
</ref>
<ref id="B5">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Battaglia</surname>
<given-names>P. W.</given-names>
</name>
<name>
<surname>Jacobs</surname>
<given-names>R. A.</given-names>
</name>
<name>
<surname>Aslin</surname>
<given-names>R. N.</given-names>
</name>
</person-group>
(
<year>2003</year>
).
<article-title>Bayesian integration of visual and auditory signals for spatial localization</article-title>
.
<source>J. Opt. Soc. Am.</source>
<volume>20</volume>
,
<fpage>1391</fpage>
<lpage>1397</lpage>
<pub-id pub-id-type="doi">10.1364/JOSAA.20.001391</pub-id>
</mixed-citation>
</ref>
<ref id="B6">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bennett</surname>
<given-names>R. G.</given-names>
</name>
<name>
<surname>Westheimer</surname>
<given-names>G.</given-names>
</name>
</person-group>
(
<year>1985</year>
).
<article-title>A shift in the perceived simultaneity of adjacent visual stimuli following adaptation to stroboscopic motion along the same axis</article-title>
.
<source>Vision Res.</source>
<volume>25</volume>
,
<fpage>565</fpage>
<lpage>569</lpage>
<pub-id pub-id-type="doi">10.1016/0042-6989(85)90161-0</pub-id>
<pub-id pub-id-type="pmid">4060609</pub-id>
</mixed-citation>
</ref>
<ref id="B7">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bruno</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Ayhan</surname>
<given-names>I.</given-names>
</name>
<name>
<surname>Johnston</surname>
<given-names>A.</given-names>
</name>
</person-group>
(
<year>2010</year>
).
<article-title>Retinotopic adaptation-based visual duration compression</article-title>
.
<source>J. Vis.</source>
<volume>10</volume>
,
<fpage>1</fpage>
<lpage>18</lpage>
<pub-id pub-id-type="doi">10.1167/10.7.1413</pub-id>
</mixed-citation>
</ref>
<ref id="B8">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Burr</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Corsale</surname>
<given-names>B.</given-names>
</name>
</person-group>
(
<year>2001</year>
).
<article-title>Dependency of reaction times to motion onset on luminance and chromatic contrast</article-title>
.
<source>Vision Res.</source>
<volume>41</volume>
,
<fpage>1039</fpage>
<lpage>1048</lpage>
<pub-id pub-id-type="doi">10.1016/S0042-6989(01)00072-4</pub-id>
<pub-id pub-id-type="pmid">11301077</pub-id>
</mixed-citation>
</ref>
<ref id="B9">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Burton</surname>
<given-names>A. M.</given-names>
</name>
<name>
<surname>Bruce</surname>
<given-names>V.</given-names>
</name>
<name>
<surname>Dench</surname>
<given-names>N.</given-names>
</name>
</person-group>
(
<year>1993</year>
).
<article-title>What’s the difference between men and women? Evidence from facial measurement</article-title>
.
<source>Perception</source>
<volume>22</volume>
,
<fpage>153</fpage>
<lpage>176</lpage>
<pub-id pub-id-type="doi">10.1068/p220153</pub-id>
<pub-id pub-id-type="pmid">8474841</pub-id>
</mixed-citation>
</ref>
<ref id="B10">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Di Luca</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Machulla</surname>
<given-names>T. K.</given-names>
</name>
<name>
<surname>Ernst</surname>
<given-names>M. O.</given-names>
</name>
</person-group>
(
<year>2009</year>
).
<article-title>Recalibration of multisensory simultaneity: cross-modal transfer coincides with a change in perceptual latency</article-title>
.
<source>J. Vis.</source>
<volume>9</volume>
,
<fpage>1</fpage>
<lpage>16</lpage>
<pub-id pub-id-type="doi">10.1167/9.13.1</pub-id>
<pub-id pub-id-type="pmid">20053098</pub-id>
</mixed-citation>
</ref>
<ref id="B11">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ernst</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Banks</surname>
<given-names>M.</given-names>
</name>
</person-group>
(
<year>2002</year>
).
<article-title>Humans integrate visual and haptic information in a statistically optimal fashion</article-title>
.
<source>Nature</source>
<volume>415</volume>
,
<fpage>429</fpage>
<lpage>433</lpage>
<pub-id pub-id-type="doi">10.1038/415429a</pub-id>
<pub-id pub-id-type="pmid">11807554</pub-id>
</mixed-citation>
</ref>
<ref id="B12">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Evans</surname>
<given-names>K. K.</given-names>
</name>
<name>
<surname>Treisman</surname>
<given-names>A.</given-names>
</name>
</person-group>
(
<year>2010</year>
).
<article-title>Natural cross-modal mappings between visual and auditory features</article-title>
.
<source>J. Vis.</source>
<volume>10</volume>
,
<fpage>1</fpage>
<lpage>12</lpage>
<pub-id pub-id-type="doi">10.1167/10.3.10</pub-id>
<pub-id pub-id-type="pmid">20143899</pub-id>
</mixed-citation>
</ref>
<ref id="B13">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Fujisaki</surname>
<given-names>W.</given-names>
</name>
<name>
<surname>Nishida</surname>
<given-names>S.</given-names>
</name>
</person-group>
(
<year>2010</year>
).
<article-title>A common perceptual temporal limit of binding synchronous inputs across different sensory attributes and modalities</article-title>
.
<source>Proc. R. Soc. Lond. B Biol. Sci.</source>
<volume>277</volume>
,
<fpage>2281</fpage>
<lpage>2290</lpage>
<pub-id pub-id-type="doi">10.1098/rspb.2010.0243</pub-id>
</mixed-citation>
</ref>
<ref id="B14">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Fujisaki</surname>
<given-names>W.</given-names>
</name>
<name>
<surname>Shimojo</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Kashino</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Nishida</surname>
<given-names>S.</given-names>
</name>
</person-group>
(
<year>2004</year>
).
<article-title>Recalibration of audiovisual simultaneity</article-title>
.
<source>Nat. Neurosci.</source>
<volume>7</volume>
,
<fpage>773</fpage>
<lpage>778</lpage>
<pub-id pub-id-type="doi">10.1038/nn1268</pub-id>
<pub-id pub-id-type="pmid">15195098</pub-id>
</mixed-citation>
</ref>
<ref id="B15">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Guski</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Troje</surname>
<given-names>N. F.</given-names>
</name>
</person-group>
(
<year>2003</year>
).
<article-title>Audiovisual phenomenal causality</article-title>
.
<source>Percept. Psychophys.</source>
<volume>65</volume>
,
<fpage>789</fpage>
<lpage>800</lpage>
<pub-id pub-id-type="doi">10.3758/BF03194815</pub-id>
<pub-id pub-id-type="pmid">12956586</pub-id>
</mixed-citation>
</ref>
<ref id="B16">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hanson</surname>
<given-names>J. V.</given-names>
</name>
<name>
<surname>Heron</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Whitaker</surname>
<given-names>D.</given-names>
</name>
</person-group>
(
<year>2008</year>
).
<article-title>Recalibration of perceived time across sensory modalities</article-title>
.
<source>Exp. Brain Res.</source>
<volume>185</volume>
,
<fpage>347</fpage>
<lpage>352</lpage>
<pub-id pub-id-type="doi">10.1007/s00221-008-1282-3</pub-id>
<pub-id pub-id-type="pmid">18236035</pub-id>
</mixed-citation>
</ref>
<ref id="B17">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Harrar</surname>
<given-names>V.</given-names>
</name>
<name>
<surname>Harris</surname>
<given-names>L. R.</given-names>
</name>
</person-group>
(
<year>2008</year>
).
<article-title>The effect of exposure to asynchronous audio, visual, and tactile stimulus combinations on the perception of simultaneity</article-title>
.
<source>Exp. Brain Res.</source>
<volume>186</volume>
,
<fpage>517</fpage>
<lpage>524</lpage>
<pub-id pub-id-type="doi">10.1007/s00221-007-1253-0</pub-id>
<pub-id pub-id-type="pmid">18183377</pub-id>
</mixed-citation>
</ref>
<ref id="B18">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Heron</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Roach</surname>
<given-names>N. W.</given-names>
</name>
<name>
<surname>Hanson</surname>
<given-names>J. V.</given-names>
</name>
<name>
<surname>McGraw</surname>
<given-names>P. V.</given-names>
</name>
<name>
<surname>Whitaker</surname>
<given-names>D.</given-names>
</name>
</person-group>
(
<year>2012</year>
).
<article-title>Audiovisual time perception is spatially specific</article-title>
.
<source>Exp. Brain Res.</source>
<volume>218</volume>
,
<fpage>477</fpage>
<lpage>485</lpage>
<pub-id pub-id-type="doi">10.1007/s00221-012-3038-3</pub-id>
<pub-id pub-id-type="pmid">22367399</pub-id>
</mixed-citation>
</ref>
<ref id="B19">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Heron</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Roach</surname>
<given-names>N. W.</given-names>
</name>
<name>
<surname>Whitaker</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Hanson</surname>
<given-names>J. V. M.</given-names>
</name>
</person-group>
(
<year>2010</year>
).
<article-title>Attention regulates the plasticity of multisensory timing</article-title>
.
<source>Eur. J. Neurosci.</source>
<volume>31</volume>
,
<fpage>1755</fpage>
<lpage>1762</lpage>
<pub-id pub-id-type="doi">10.1111/j.1460-9568.2010.07194.x</pub-id>
<pub-id pub-id-type="pmid">20584179</pub-id>
</mixed-citation>
</ref>
<ref id="B20">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Heron</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Whitaker</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>McGraw</surname>
<given-names>P. V.</given-names>
</name>
<name>
<surname>Horoshenkov</surname>
<given-names>K. V.</given-names>
</name>
</person-group>
(
<year>2007</year>
).
<article-title>Adaptation minimizes distance-related audiovisual delays</article-title>
.
<source>J. Vis.</source>
<volume>7</volume>
,
<fpage>1</fpage>
<lpage>8</lpage>
<pub-id pub-id-type="doi">10.1167/7.6.1</pub-id>
<pub-id pub-id-type="pmid">17997633</pub-id>
</mixed-citation>
</ref>
<ref id="B21">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hillis</surname>
<given-names>J. M.</given-names>
</name>
<name>
<surname>Ernst</surname>
<given-names>M. O.</given-names>
</name>
<name>
<surname>Banks</surname>
<given-names>M. S.</given-names>
</name>
<name>
<surname>Landy</surname>
<given-names>M. S.</given-names>
</name>
</person-group>
(
<year>2002</year>
).
<article-title>Combining sensory information: mandatory fusion within, but not between, senses</article-title>
.
<source>Science</source>
<volume>298</volume>
,
<fpage>1627</fpage>
<lpage>1630</lpage>
<pub-id pub-id-type="doi">10.1126/science.1075396</pub-id>
<pub-id pub-id-type="pmid">12446912</pub-id>
</mixed-citation>
</ref>
<ref id="B22">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Johnston</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Arnold</surname>
<given-names>D. H.</given-names>
</name>
<name>
<surname>Nishida</surname>
<given-names>S.</given-names>
</name>
</person-group>
(
<year>2006</year>
).
<article-title>Spatially localised distortions of perceived duration</article-title>
.
<source>Curr. Biol.</source>
<volume>16</volume>
,
<fpage>472</fpage>
<lpage>479</lpage>
<pub-id pub-id-type="doi">10.1016/j.cub.2006.01.032</pub-id>
<pub-id pub-id-type="pmid">16527741</pub-id>
</mixed-citation>
</ref>
<ref id="B23">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Keetels</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Vroomen</surname>
<given-names>J.</given-names>
</name>
</person-group>
(
<year>2007</year>
).
<article-title>No effect of auditory-visual spatial disparity on temporal recalibration</article-title>
.
<source>Exp. Brain Res.</source>
<volume>182</volume>
,
<fpage>559</fpage>
<lpage>565</lpage>
<pub-id pub-id-type="doi">10.1007/s00221-007-1012-2</pub-id>
<pub-id pub-id-type="pmid">17598092</pub-id>
</mixed-citation>
</ref>
<ref id="B24">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>King</surname>
<given-names>A. J.</given-names>
</name>
</person-group>
(
<year>2005</year>
).
<article-title>Multisensory integration: strategies for synchronization</article-title>
.
<source>Curr. Biol.</source>
<volume>15</volume>
,
<fpage>R339</fpage>
<lpage>341</lpage>
<pub-id pub-id-type="doi">10.1016/j.cub.2005.04.022</pub-id>
<pub-id pub-id-type="pmid">15886092</pub-id>
</mixed-citation>
</ref>
<ref id="B25">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kopinska</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Harris</surname>
<given-names>L. R.</given-names>
</name>
</person-group>
(
<year>2004</year>
).
<article-title>Simultaneity constancy</article-title>
.
<source>Perception</source>
<volume>33</volume>
,
<fpage>1049</fpage>
<lpage>1060</lpage>
<pub-id pub-id-type="doi">10.1068/p5169</pub-id>
<pub-id pub-id-type="pmid">15560507</pub-id>
</mixed-citation>
</ref>
<ref id="B26">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Lennie</surname>
<given-names>P.</given-names>
</name>
</person-group>
(
<year>1981</year>
).
<article-title>The physiological basis of variations in visual latency</article-title>
.
<source>Vision Res.</source>
<volume>21</volume>
,
<fpage>815</fpage>
<lpage>824</lpage>
<pub-id pub-id-type="doi">10.1016/0042-6989(81)90180-2</pub-id>
<pub-id pub-id-type="pmid">7314459</pub-id>
</mixed-citation>
</ref>
<ref id="B27">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ma</surname>
<given-names>W. J.</given-names>
</name>
</person-group>
(
<year>2012</year>
).
<article-title>Organizing probabilistic models of perception</article-title>
.
<source>Trends Cogn. Sci. (Regul. Ed.)</source>
<volume>16</volume>
,
<fpage>511</fpage>
<lpage>518</lpage>
<pub-id pub-id-type="doi">10.1016/j.tics.2012.08.010</pub-id>
<pub-id pub-id-type="pmid">22981359</pub-id>
</mixed-citation>
</ref>
<ref id="B28">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Machulla</surname>
<given-names>T.-K.</given-names>
</name>
<name>
<surname>Di Luca</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Frölich</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Ernst</surname>
<given-names>M. O.</given-names>
</name>
</person-group>
(
<year>2012</year>
).
<article-title>Multisensory simultaneity recalibration: storage of the aftereffect in the absence of counterevidence</article-title>
.
<source>Exp. Brain Res.</source>
<volume>217</volume>
,
<fpage>89</fpage>
<lpage>97</lpage>
<pub-id pub-id-type="doi">10.1007/s00221-011-2976-5</pub-id>
<pub-id pub-id-type="pmid">22207361</pub-id>
</mixed-citation>
</ref>
<ref id="B29">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Miyazaki</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Yamamoto</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Uchida</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Kitazawa</surname>
<given-names>S.</given-names>
</name>
</person-group>
(
<year>2006</year>
).
<article-title>Bayesian calibration of simultaneity in tactile temporal order judgment</article-title>
.
<source>Nat. Neurosci.</source>
<volume>9</volume>
,
<fpage>875</fpage>
<lpage>877</lpage>
<pub-id pub-id-type="doi">10.1038/nn1712</pub-id>
<pub-id pub-id-type="pmid">16732276</pub-id>
</mixed-citation>
</ref>
<ref id="B30">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Navarra</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>García-Morera</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Spence</surname>
<given-names>C.</given-names>
</name>
</person-group>
(
<year>2012</year>
).
<article-title>Temporal adaptation to audiovisual asynchrony generalizes across different sound frequencies</article-title>
.
<source>Front. Psychol.</source>
<volume>3</volume>
:
<fpage>152</fpage>
<pub-id pub-id-type="doi">10.3389/fpsyg.2012.00152</pub-id>
<pub-id pub-id-type="pmid">22615705</pub-id>
</mixed-citation>
</ref>
<ref id="B31">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Navarra</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Hartcher-O’Brien</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Piazza</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Spence</surname>
<given-names>C.</given-names>
</name>
</person-group>
(
<year>2009</year>
).
<article-title>Adaptation to audiovisual asynchrony modulates the speeded detection of sound</article-title>
.
<source>Proc. Natl. Acad. Sci. U.S.A.</source>
<volume>106</volume>
,
<fpage>9169</fpage>
<lpage>9173</lpage>
<pub-id pub-id-type="doi">10.1073/pnas.0810486106</pub-id>
<pub-id pub-id-type="pmid">19458252</pub-id>
</mixed-citation>
</ref>
<ref id="B32">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Navarra</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Soto-Faraco</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Spence</surname>
<given-names>C.</given-names>
</name>
</person-group>
(
<year>2007</year>
).
<article-title>Adaptation to audiotactile asynchrony</article-title>
.
<source>Neurosci. Lett.</source>
<volume>413</volume>
,
<fpage>72</fpage>
<lpage>76</lpage>
<pub-id pub-id-type="doi">10.1016/j.neulet.2006.11.027</pub-id>
<pub-id pub-id-type="pmid">17161530</pub-id>
</mixed-citation>
</ref>
<ref id="B33">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Navarra</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Vatakis</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Zampini</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Soto-Faraco</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Humphreys</surname>
<given-names>W.</given-names>
</name>
<name>
<surname>Spence</surname>
<given-names>C.</given-names>
</name>
</person-group>
(
<year>2005</year>
).
<article-title>Exposure to asynchronous audiovisual speech extends the temporal window for audiovisual integration</article-title>
.
<source>Brain Res. Cogn. Brain Res.</source>
<volume>25</volume>
,
<fpage>499</fpage>
<lpage>507</lpage>
<pub-id pub-id-type="doi">10.1016/j.cogbrainres.2005.07.009</pub-id>
<pub-id pub-id-type="pmid">16137867</pub-id>
</mixed-citation>
</ref>
<ref id="B34">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Okada</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Kashino</surname>
<given-names>M.</given-names>
</name>
</person-group>
(
<year>2003</year>
).
<article-title>The role of spectral change detectors in temporal order judgment of tones</article-title>
.
<source>Neuroreport</source>
<volume>14</volume>
,
<fpage>261</fpage>
<lpage>264</lpage>
<pub-id pub-id-type="doi">10.1097/00001756-200310060-00002</pub-id>
<pub-id pub-id-type="pmid">12598742</pub-id>
</mixed-citation>
</ref>
<ref id="B35">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Parise</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Spence</surname>
<given-names>C.</given-names>
</name>
</person-group>
(
<year>2009</year>
).
<article-title>‘When birds of a feather flock together’: synesthetic correspondences modulate audiovisual integration in nonsynesthetes</article-title>
.
<source>PLoS ONE</source>
<volume>4</volume>
:
<fpage>e5664</fpage>
<pub-id pub-id-type="doi">10.1371/journal.pone.0005664</pub-id>
<pub-id pub-id-type="pmid">19471644</pub-id>
</mixed-citation>
</ref>
<ref id="B36">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Parise</surname>
<given-names>C. V.</given-names>
</name>
<name>
<surname>Spence</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Ernst</surname>
<given-names>M. O.</given-names>
</name>
</person-group>
(
<year>2012</year>
).
<article-title>When correlation implies causation in multisensory integration</article-title>
.
<source>Curr. Biol.</source>
<volume>22</volume>
,
<fpage>46</fpage>
<lpage>49</lpage>
<pub-id pub-id-type="doi">10.1016/j.cub.2011.11.039</pub-id>
<pub-id pub-id-type="pmid">22177899</pub-id>
</mixed-citation>
</ref>
<ref id="B37">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Roach</surname>
<given-names>N. W.</given-names>
</name>
<name>
<surname>Heron</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Whitaker</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>McGraw</surname>
<given-names>P. V.</given-names>
</name>
</person-group>
(
<year>2011</year>
).
<article-title>Asynchrony adaptation reveals neural population code for audio-visual timing</article-title>
.
<source>Proc. R. Soc. Lond. B Biol. Sci.</source>
<volume>278</volume>
,
<fpage>1314</fpage>
<lpage>1322</lpage>
<pub-id pub-id-type="doi">10.1098/rspb.2010.1737</pub-id>
</mixed-citation>
</ref>
<ref id="B38">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Roseboom</surname>
<given-names>W.</given-names>
</name>
<name>
<surname>Arnold</surname>
<given-names>D. H.</given-names>
</name>
</person-group>
(
<year>2011</year>
).
<article-title>Twice upon a time: multiple, concurrent, temporal recalibrations of audio-visual speech</article-title>
.
<source>Psychol. Sci.</source>
<volume>22</volume>
,
<fpage>872</fpage>
<lpage>877</lpage>
<pub-id pub-id-type="doi">10.1177/0956797611413293</pub-id>
</mixed-citation>
</ref>
<ref id="B39">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Roseboom</surname>
<given-names>W.</given-names>
</name>
<name>
<surname>Kawabe</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Nishida</surname>
<given-names>S.</given-names>
</name>
</person-group>
(
<year>2013</year>
).
<article-title>Direction of visual apparent motion driven by perceptual organization of cross-modal signals</article-title>
.
<source>J. Vis.</source>
<volume>13</volume>
,
<fpage>1</fpage>
<lpage>13</lpage>
<pub-id pub-id-type="doi">10.1167/13.3.1</pub-id>
</mixed-citation>
</ref>
<ref id="B40">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Roseboom</surname>
<given-names>W.</given-names>
</name>
<name>
<surname>Nishida</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Arnold</surname>
<given-names>D. H.</given-names>
</name>
</person-group>
(
<year>2009</year>
).
<article-title>The sliding window of audio-visual simultaneity</article-title>
.
<source>J. Vis.</source>
<volume>9</volume>
,
<fpage>1</fpage>
<lpage>8</lpage>
<pub-id pub-id-type="doi">10.1167/9.13.1</pub-id>
</mixed-citation>
</ref>
<ref id="B41">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Roufs</surname>
<given-names>J. A.</given-names>
</name>
</person-group>
(
<year>1963</year>
).
<article-title>Perception lag as a function of stimulus luminance</article-title>
.
<source>Vision Res.</source>
<volume>3</volume>
,
<fpage>81</fpage>
<lpage>91</lpage>
<pub-id pub-id-type="doi">10.1016/0042-6989(63)90070-1</pub-id>
</mixed-citation>
</ref>
<ref id="B42">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Spence</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Deroy</surname>
<given-names>O.</given-names>
</name>
</person-group>
(
<year>2013</year>
).
<article-title>How automatic are crossmodal correspondences?</article-title>
<source>Conscious. Cogn.</source>
<volume>22</volume>
,
<fpage>245</fpage>
<lpage>260</lpage>
<pub-id pub-id-type="doi">10.1016/j.concog.2012.12.006</pub-id>
<pub-id pub-id-type="pmid">23370382</pub-id>
</mixed-citation>
</ref>
<ref id="B43">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Spence</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Shore</surname>
<given-names>D. I.</given-names>
</name>
<name>
<surname>Klein</surname>
<given-names>R. M.</given-names>
</name>
</person-group>
(
<year>2001</year>
).
<article-title>Multisensory prior entry</article-title>
.
<source>J. Exp. Psychol. Gen.</source>
<volume>130</volume>
,
<fpage>799</fpage>
<lpage>832</lpage>
<pub-id pub-id-type="doi">10.1037/0096-3445.130.4.799</pub-id>
<pub-id pub-id-type="pmid">11757881</pub-id>
</mixed-citation>
</ref>
<ref id="B44">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Spence</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Squire</surname>
<given-names>S. B.</given-names>
</name>
</person-group>
(
<year>2003</year>
).
<article-title>Multisensory integration: maintaining the perception of synchrony</article-title>
.
<source>Curr. Biol.</source>
<volume>13</volume>
,
<fpage>R519</fpage>
<lpage>R521</lpage>
<pub-id pub-id-type="doi">10.1016/S0959-440X(03)00103-9</pub-id>
<pub-id pub-id-type="pmid">12842029</pub-id>
</mixed-citation>
</ref>
<ref id="B45">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Stein</surname>
<given-names>B. E.</given-names>
</name>
<name>
<surname>Meredith</surname>
<given-names>M. A.</given-names>
</name>
</person-group>
(
<year>1993</year>
).
<source>The Merging of the Senses</source>
.
<publisher-loc>Cambridge, MA</publisher-loc>
:
<publisher-name>MIT Press</publisher-name>
</mixed-citation>
</ref>
<ref id="B46">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Tanaka</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Asakawa</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Imai</surname>
<given-names>H.</given-names>
</name>
</person-group>
(
<year>2011</year>
).
<article-title>The change in perceptual synchrony between auditory and visual speech after exposure to asynchronous speech</article-title>
.
<source>Neuroreport</source>
<volume>22</volume>
,
<fpage>684</fpage>
<lpage>688</lpage>
<pub-id pub-id-type="doi">10.1097/WNR.0b013e32834a2724</pub-id>
<pub-id pub-id-type="pmid">21817926</pub-id>
</mixed-citation>
</ref>
<ref id="B47">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Titchener</surname>
<given-names>E. B.</given-names>
</name>
</person-group>
(
<year>1908</year>
).
<source>Lectures on the Elementary Psychology of Feeling and Attention</source>
.
<publisher-loc>New York</publisher-loc>
:
<publisher-name>Macmillan</publisher-name>
</mixed-citation>
</ref>
<ref id="B48">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>van Eijk</surname>
<given-names>R. L. J.</given-names>
</name>
<name>
<surname>Kohlrausch</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Juola</surname>
<given-names>J. F.</given-names>
</name>
<name>
<surname>van de Par</surname>
<given-names>S.</given-names>
</name>
</person-group>
(
<year>2008</year>
).
<article-title>Audiovisual synchrony and temporal order judgments: effects of experimental method and stimulus type</article-title>
.
<source>Percept. Psychophys.</source>
<volume>70</volume>
,
<fpage>955</fpage>
<lpage>968</lpage>
<pub-id pub-id-type="doi">10.3758/PP.70.6.955</pub-id>
<pub-id pub-id-type="pmid">18717383</pub-id>
</mixed-citation>
</ref>
<ref id="B49">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Vatakis</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Navarra</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Soto-Faraco</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Spence</surname>
<given-names>C.</given-names>
</name>
</person-group>
(
<year>2007</year>
).
<article-title>Temporal recalibration during asynchronous audiovisual speech perception</article-title>
.
<source>Exp. Brain Res.</source>
<volume>181</volume>
,
<fpage>173</fpage>
<lpage>181</lpage>
<pub-id pub-id-type="doi">10.1007/s00221-007-0918-z</pub-id>
<pub-id pub-id-type="pmid">17431598</pub-id>
</mixed-citation>
</ref>
<ref id="B50">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Vatakis</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Navarra</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Soto-Faraco</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Spence</surname>
<given-names>C.</given-names>
</name>
</person-group>
(
<year>2008</year>
).
<article-title>Audiovisual temporal adaptation of speech: temporal order versus simultaneity judgments</article-title>
.
<source>Exp. Brain Res.</source>
<volume>185</volume>
,
<fpage>521</fpage>
<lpage>529</lpage>
<pub-id pub-id-type="doi">10.1007/s00221-007-1168-9</pub-id>
<pub-id pub-id-type="pmid">17962929</pub-id>
</mixed-citation>
</ref>
<ref id="B51">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Vatakis</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Spence</surname>
<given-names>C.</given-names>
</name>
</person-group>
(
<year>2006</year>
).
<article-title>Audiovisual synchrony perception for music, speech, and object actions</article-title>
.
<source>Brain Res.</source>
<volume>1111</volume>
,
<fpage>134</fpage>
<lpage>142</lpage>
<pub-id pub-id-type="doi">10.1016/j.brainres.2006.05.078</pub-id>
<pub-id pub-id-type="pmid">16876772</pub-id>
</mixed-citation>
</ref>
<ref id="B52">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Vatakis</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Spence</surname>
<given-names>C.</given-names>
</name>
</person-group>
(
<year>2007</year>
).
<article-title>Crossmodal binding: evaluating the “unity assumption” using audiovisual speech stimuli</article-title>
.
<source>Percept. Psychophys.</source>
<volume>69</volume>
,
<fpage>744</fpage>
<lpage>756</lpage>
<pub-id pub-id-type="doi">10.3758/BF03193776</pub-id>
<pub-id pub-id-type="pmid">17929697</pub-id>
</mixed-citation>
</ref>
<ref id="B53">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Vatakis</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Spence</surname>
<given-names>C.</given-names>
</name>
</person-group>
(
<year>2008</year>
).
<article-title>Evaluating the influence of the ‘unity assumption’ on the temporal perception of realistic audiovisual stimuli</article-title>
.
<source>Acta Psychol. (Amst.)</source>
<volume>127</volume>
,
<fpage>12</fpage>
<lpage>23</lpage>
<pub-id pub-id-type="doi">10.1016/j.actpsy.2006.12.002</pub-id>
<pub-id pub-id-type="pmid">17258164</pub-id>
</mixed-citation>
</ref>
<ref id="B54">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Vroomen</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Keetels</surname>
<given-names>M.</given-names>
</name>
</person-group>
(
<year>2010</year>
).
<article-title>Perception of intersensory synchrony: a tutorial review</article-title>
.
<source>Atten. Percept. Psychophys.</source>
<volume>72</volume>
,
<fpage>871</fpage>
<lpage>884</lpage>
<pub-id pub-id-type="doi">10.3758/APP.72.4.871</pub-id>
<pub-id pub-id-type="pmid">20436185</pub-id>
</mixed-citation>
</ref>
<ref id="B55">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Vroomen</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Keetels</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>de Gelder</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Bertelson</surname>
<given-names>P.</given-names>
</name>
</person-group>
(
<year>2004</year>
).
<article-title>Recalibration of temporal order perception by exposure to audio-visual asynchrony</article-title>
.
<source>Cogn. Brain Res.</source>
<volume>22</volume>
,
<fpage>32</fpage>
<lpage>35</lpage>
<pub-id pub-id-type="doi">10.1016/j.cogbrainres.2004.07.003</pub-id>
</mixed-citation>
</ref>
<ref id="B56">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Williams</surname>
<given-names>J. M.</given-names>
</name>
<name>
<surname>Lit</surname>
<given-names>A.</given-names>
</name>
</person-group>
(
<year>1983</year>
).
<article-title>Luminance-dependent visual latency for the Hess effect, the Pulfrich effect, and simple reaction time</article-title>
.
<source>Vision Res.</source>
<volume>23</volume>
,
<fpage>171</fpage>
<lpage>179</lpage>
<pub-id pub-id-type="doi">10.1016/0042-6989(83)90140-2</pub-id>
<pub-id pub-id-type="pmid">6868392</pub-id>
</mixed-citation>
</ref>
<ref id="B57">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Yamamoto</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Miyazaki</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Iwano</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Kitazawa</surname>
<given-names>S.</given-names>
</name>
</person-group>
(
<year>2012</year>
).
<article-title>Bayesian calibration of simultaneity in audiovisual temporal order judgments</article-title>
.
<source>PLoS ONE</source>
<volume>7</volume>
:
<fpage>e40379</fpage>
<pub-id pub-id-type="doi">10.1371/journal.pone.0040379</pub-id>
<pub-id pub-id-type="pmid">22792297</pub-id>
</mixed-citation>
</ref>
<ref id="B58">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Yarrow</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Roseboom</surname>
<given-names>W.</given-names>
</name>
<name>
<surname>Arnold</surname>
<given-names>D. H.</given-names>
</name>
</person-group>
(
<year>2011a</year>
).
<article-title>Spatial grouping resolves ambiguity to drive temporal recalibration</article-title>
.
<source>J. Exp. Psychol. Hum. Percept. Perform.</source>
<volume>37</volume>
,
<fpage>1657</fpage>
<lpage>1661</lpage>
<pub-id pub-id-type="doi">10.1037/a0024235</pub-id>
<pub-id pub-id-type="pmid">21688937</pub-id>
</mixed-citation>
</ref>
<ref id="B59">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Yarrow</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Jahn</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Durant</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Arnold</surname>
<given-names>D. H.</given-names>
</name>
</person-group>
(
<year>2011b</year>
).
<article-title>Shifts of criteria or neural timing? The assumptions underlying timing perception studies</article-title>
.
<source>Conscious. Cogn.</source>
<volume>20</volume>
,
<fpage>1518</fpage>
<lpage>1531</lpage>
<pub-id pub-id-type="doi">10.1016/j.concog.2011.07.003</pub-id>
<pub-id pub-id-type="pmid">21807537</pub-id>
</mixed-citation>
</ref>
<ref id="B60">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Yuan</surname>
<given-names>X.</given-names>
</name>
<name>
<surname>Li</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Bi</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Yin</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Huang</surname>
<given-names>X.</given-names>
</name>
</person-group>
(
<year>2012</year>
).
<article-title>Audiovisual temporal recalibration: space-based versus context-based</article-title>
.
<source>Perception</source>
<volume>41</volume>
,
<fpage>1218</fpage>
<lpage>1233</lpage>
<pub-id pub-id-type="doi">10.1068/p7243</pub-id>
<pub-id pub-id-type="pmid">23469702</pub-id>
</mixed-citation>
</ref>
</ref-list>
</back>
</pmc>
<affiliations>
<list>
<country>
<li>Japon</li>
</country>
</list>
<tree>
<country name="Japon">
<noRegion>
<name sortKey="Roseboom, Warrick" sort="Roseboom, Warrick" uniqKey="Roseboom W" first="Warrick" last="Roseboom">Warrick Roseboom</name>
</noRegion>
<name sortKey="Kawabe, Takahiro" sort="Kawabe, Takahiro" uniqKey="Kawabe T" first="Takahiro" last="Kawabe">Takahiro Kawabe</name>
<name sortKey="Nishida, Shin A" sort="Nishida, Shin A" uniqKey="Nishida S" first="Shin A" last="Nishida">Shin A Nishida</name>
</country>
</tree>
</affiliations>
</record>

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/HapticV1/Data/Ncbi/Merge
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 002677 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/Ncbi/Merge/biblio.hfd -nk 002677 | SxmlIndent | more
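
For example, to list only the reference titles contained in this record, the HfdSelect output can be piped through standard Unix text tools (a minimal sketch, assuming the Dilib environment defined above and a grep/sed toolchain; the article-title tag name is taken from the record itself):

HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 002677 \
       | grep -o '<article-title>[^<]*</article-title>' \
       | sed -e 's/<[^>]*>//g'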

To add a link to this page in the Wicri network

{{Explor lien
   |wiki=    Ticri/CIDE
   |area=    HapticV1
   |flux=    Ncbi
   |étape=   Merge
   |type=    RBID
   |clé=     PMC:3633943
   |texte=   Audio-Visual Temporal Recalibration Can be Constrained by Content Cues Regardless of Spatial Overlap
}}

To generate wiki pages

HfdIndexSelect -h $EXPLOR_AREA/Data/Ncbi/Merge/RBID.i   -Sk "pubmed:23658549" \
       | HfdSelect -Kh $EXPLOR_AREA/Data/Ncbi/Merge/biblio.hfd   \
       | NlmPubMed2Wicri -a HapticV1 
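
Before generating the wiki pages, the record returned by the index lookup can be inspected interactively (a sketch reusing only the commands and paths shown above, with the same pubmed key):

HfdIndexSelect -h $EXPLOR_AREA/Data/Ncbi/Merge/RBID.i   -Sk "pubmed:23658549" \
       | HfdSelect -Kh $EXPLOR_AREA/Data/Ncbi/Merge/biblio.hfd   \
       | SxmlIndent | more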

Wicri

This area was generated with Dilib version V0.6.23.
Data generation: Mon Jun 13 01:09:46 2016. Site generation: Wed Mar 6 09:54:07 2024