Exploration server on haptic devices


‘When Birds of a Feather Flock Together’: Synesthetic Correspondences Modulate Audiovisual Integration in Non-Synesthetes

Internal identifier: 001155 (Ncbi/Merge); previous: 001154; next: 001156


Authors: Cesare Valerio Parise; Charles Spence

Source:

RBID: PMC:2680950

Abstract

Background

Synesthesia is a condition in which the stimulation of one sense elicits an additional experience, often in a different (i.e., unstimulated) sense. Although only a small proportion of the population is synesthetic, there is growing evidence to suggest that neurocognitively-normal individuals also experience some form of synesthetic association between the stimuli presented to different sensory modalities (i.e., between auditory pitch and visual size, where lower frequency tones are associated with large objects and higher frequency tones with small objects). While previous research has highlighted crossmodal interactions between synesthetically corresponding dimensions, the possible role of synesthetic associations in multisensory integration has not been considered previously.

Methodology

Here we investigate the effects of synesthetic associations by presenting pairs of asynchronous or spatially discrepant visual and auditory stimuli that were either synesthetically matched or mismatched. In a series of three psychophysical experiments, participants reported the relative temporal order of presentation or the relative spatial locations of the two stimuli.

Principal Findings

The reliability of non-synesthetic participants' estimates of both audiovisual temporal asynchrony and spatial discrepancy was lower for pairs of synesthetically matched as compared to synesthetically mismatched audiovisual stimuli.

Conclusions

Recent studies of multisensory integration have shown that the reduced reliability of perceptual estimates regarding intersensory conflicts constitutes the marker of a stronger coupling between the unisensory signals. Our results therefore indicate a stronger coupling of synesthetically matched vs. mismatched stimuli and provide the first psychophysical evidence that synesthetic congruency can promote multisensory integration. Synesthetic crossmodal correspondences therefore appear to play a crucial (if unacknowledged) role in the multisensory integration of auditory and visual information.


Url:
DOI: 10.1371/journal.pone.0005664
PubMed: 19471644
PubMed Central: 2680950


The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">‘When Birds of a Feather Flock Together’: Synesthetic Correspondences Modulate Audiovisual Integration in Non-Synesthetes</title>
<author>
<name sortKey="Parise, Cesare Valerio" sort="Parise, Cesare Valerio" uniqKey="Parise C" first="Cesare Valerio" last="Parise">Cesare Valerio Parise</name>
<affiliation>
<nlm:aff id="aff1"></nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Spence, Charles" sort="Spence, Charles" uniqKey="Spence C" first="Charles" last="Spence">Charles Spence</name>
<affiliation>
<nlm:aff id="aff1"></nlm:aff>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PMC</idno>
<idno type="pmid">19471644</idno>
<idno type="pmc">2680950</idno>
<idno type="url">http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2680950</idno>
<idno type="RBID">PMC:2680950</idno>
<idno type="doi">10.1371/journal.pone.0005664</idno>
<date when="2009">2009</date>
<idno type="wicri:Area/Pmc/Corpus">001119</idno>
<idno type="wicri:Area/Pmc/Curation">001119</idno>
<idno type="wicri:Area/Pmc/Checkpoint">001F89</idno>
<idno type="wicri:Area/Ncbi/Merge">001155</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a" type="main">‘When Birds of a Feather Flock Together’: Synesthetic Correspondences Modulate Audiovisual Integration in Non-Synesthetes</title>
<author>
<name sortKey="Parise, Cesare Valerio" sort="Parise, Cesare Valerio" uniqKey="Parise C" first="Cesare Valerio" last="Parise">Cesare Valerio Parise</name>
<affiliation>
<nlm:aff id="aff1"></nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Spence, Charles" sort="Spence, Charles" uniqKey="Spence C" first="Charles" last="Spence">Charles Spence</name>
<affiliation>
<nlm:aff id="aff1"></nlm:aff>
</affiliation>
</author>
</analytic>
<series>
<title level="j">PLoS ONE</title>
<idno type="eISSN">1932-6203</idno>
<imprint>
<date when="2009">2009</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass></textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">
<sec>
<title>Background</title>
<p>Synesthesia is a condition in which the stimulation of one sense elicits an additional experience, often in a different (i.e., unstimulated) sense. Although only a small proportion of the population is synesthetic, there is growing evidence to suggest that neurocognitively-normal individuals also experience some form of synesthetic association between the stimuli presented to different sensory modalities (i.e., between auditory pitch and visual size, where lower frequency tones are associated with large objects and higher frequency tones with small objects). While previous research has highlighted crossmodal interactions between synesthetically corresponding dimensions, the possible role of synesthetic associations in multisensory integration has not been considered previously.</p>
</sec>
<sec sec-type="methods">
<title>Methodology</title>
<p>Here we investigate the effects of synesthetic associations by presenting pairs of asynchronous or spatially discrepant visual and auditory stimuli that were either synesthetically matched or mismatched. In a series of three psychophysical experiments, participants reported the relative temporal order of presentation or the relative spatial locations of the two stimuli.</p>
</sec>
<sec>
<title>Principal Findings</title>
<p>The reliability of non-synesthetic participants' estimates of both audiovisual temporal asynchrony and spatial discrepancy was lower for pairs of synesthetically matched as compared to synesthetically mismatched audiovisual stimuli.</p>
</sec>
<sec>
<title>Conclusions</title>
<p>Recent studies of multisensory integration have shown that the reduced reliability of perceptual estimates regarding intersensory conflicts constitutes the marker of a stronger coupling between the unisensory signals. Our results therefore indicate a stronger coupling of synesthetically matched vs. mismatched stimuli and provide the first psychophysical evidence that synesthetic congruency can promote multisensory integration. Synesthetic crossmodal correspondences therefore appear to play a crucial (if unacknowledged) role in the multisensory integration of auditory and visual information.</p>
</sec>
</div>
</front>
<back>
<div1 type="bibliography">
<listBibl>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
</listBibl>
</div1>
</back>
</TEI>
<pmc article-type="research-article" xml:lang="EN">
<pmc-dir>properties open_access</pmc-dir>
<front>
<journal-meta>
<journal-id journal-id-type="nlm-ta">PLoS ONE</journal-id>
<journal-id journal-id-type="publisher-id">plos</journal-id>
<journal-id journal-id-type="pmc">plosone</journal-id>
<journal-title>PLoS ONE</journal-title>
<issn pub-type="epub">1932-6203</issn>
<publisher>
<publisher-name>Public Library of Science</publisher-name>
<publisher-loc>San Francisco, USA</publisher-loc>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="pmid">19471644</article-id>
<article-id pub-id-type="pmc">2680950</article-id>
<article-id pub-id-type="publisher-id">09-PONE-RA-09028</article-id>
<article-id pub-id-type="doi">10.1371/journal.pone.0005664</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Research Article</subject>
</subj-group>
<subj-group subj-group-type="Discipline">
<subject>Neuroscience/Behavioral Neuroscience</subject>
<subject>Neuroscience/Sensory Systems</subject>
<subject>Neuroscience/Experimental Psychology</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>‘When Birds of a Feather Flock Together’: Synesthetic Correspondences Modulate Audiovisual Integration in Non-Synesthetes</article-title>
<alt-title alt-title-type="running-head">Multisensory Integration</alt-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name>
<surname>Parise</surname>
<given-names>Cesare Valerio</given-names>
</name>
<xref ref-type="aff" rid="aff1"></xref>
<xref ref-type="corresp" rid="cor1">
<sup>*</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Spence</surname>
<given-names>Charles</given-names>
</name>
<xref ref-type="aff" rid="aff1"></xref>
</contrib>
</contrib-group>
<aff id="aff1">
<addr-line>Crossmodal Research Laboratory, Department of Experimental Psychology, University of Oxford, Oxford, United Kingdom</addr-line>
</aff>
<contrib-group>
<contrib contrib-type="editor">
<name>
<surname>Burr</surname>
<given-names>David C.</given-names>
</name>
<role>Editor</role>
<xref ref-type="aff" rid="edit1"></xref>
</contrib>
</contrib-group>
<aff id="edit1">Istituto di Neurofisiologia, Italy</aff>
<author-notes>
<corresp id="cor1">* E-mail:
<email>cesare.parise@psy.ox.ac.uk</email>
</corresp>
<fn fn-type="con">
<p>Conceived and designed the experiments: CVP CS. Performed the experiments: CVP. Analyzed the data: CVP. Contributed reagents/materials/analysis tools: CS. Wrote the paper: CVP CS.</p>
</fn>
</author-notes>
<pub-date pub-type="collection">
<year>2009</year>
</pub-date>
<pub-date pub-type="epub">
<day>27</day>
<month>5</month>
<year>2009</year>
</pub-date>
<volume>4</volume>
<issue>5</issue>
<elocation-id>e5664</elocation-id>
<history>
<date date-type="received">
<day>2</day>
<month>3</month>
<year>2009</year>
</date>
<date date-type="accepted">
<day>24</day>
<month>3</month>
<year>2009</year>
</date>
</history>
<copyright-statement>Parise, Spence. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.</copyright-statement>
<copyright-year>2009</copyright-year>
<abstract>
<sec>
<title>Background</title>
<p>Synesthesia is a condition in which the stimulation of one sense elicits an additional experience, often in a different (i.e., unstimulated) sense. Although only a small proportion of the population is synesthetic, there is growing evidence to suggest that neurocognitively-normal individuals also experience some form of synesthetic association between the stimuli presented to different sensory modalities (i.e., between auditory pitch and visual size, where lower frequency tones are associated with large objects and higher frequency tones with small objects). While previous research has highlighted crossmodal interactions between synesthetically corresponding dimensions, the possible role of synesthetic associations in multisensory integration has not been considered previously.</p>
</sec>
<sec sec-type="methods">
<title>Methodology</title>
<p>Here we investigate the effects of synesthetic associations by presenting pairs of asynchronous or spatially discrepant visual and auditory stimuli that were either synesthetically matched or mismatched. In a series of three psychophysical experiments, participants reported the relative temporal order of presentation or the relative spatial locations of the two stimuli.</p>
</sec>
<sec>
<title>Principal Findings</title>
<p>The reliability of non-synesthetic participants' estimates of both audiovisual temporal asynchrony and spatial discrepancy was lower for pairs of synesthetically matched as compared to synesthetically mismatched audiovisual stimuli.</p>
</sec>
<sec>
<title>Conclusions</title>
<p>Recent studies of multisensory integration have shown that the reduced reliability of perceptual estimates regarding intersensory conflicts constitutes the marker of a stronger coupling between the unisensory signals. Our results therefore indicate a stronger coupling of synesthetically matched vs. mismatched stimuli and provide the first psychophysical evidence that synesthetic congruency can promote multisensory integration. Synesthetic crossmodal correspondences therefore appear to play a crucial (if unacknowledged) role in the multisensory integration of auditory and visual information.</p>
</sec>
</abstract>
<counts>
<page-count count="7"></page-count>
</counts>
</article-meta>
</front>
<body>
<sec id="s1">
<title>Introduction</title>
<p>Over the last few decades, studies have shown that the behavior of non-synesthetic individuals is affected by multisensory interactions that have traditionally been regarded as the prerogative of the synesthetic population
<xref ref-type="bibr" rid="pone.0005664-Gallace1">[1]</xref>
<xref ref-type="bibr" rid="pone.0005664-CohenKadosh1">[11]</xref>
. A paradigmatic example of this is the synesthetic correspondence between auditory pitch and visual size, whereby higher-pitched tones are associated with smaller objects and lower-pitched tones with larger objects
<xref ref-type="bibr" rid="pone.0005664-Gallace1">[1]</xref>
,
<xref ref-type="bibr" rid="pone.0005664-Parise1">[8]</xref>
<xref ref-type="bibr" rid="pone.0005664-Walker2">[10]</xref>
,
<xref ref-type="bibr" rid="pone.0005664-Sapir1">[12]</xref>
. Synesthetic associations in neurocognitively-normal individuals have typically been studied by means of the speeded classification paradigm, in which participants have to classify a series of stimuli in one sensory modality while trying to ignore concurrent task-irrelevant stimuli presented in a second modality
<xref ref-type="bibr" rid="pone.0005664-Marks1">[2]</xref>
,
<xref ref-type="bibr" rid="pone.0005664-Bernstein1">[13]</xref>
. The classic finding is that when the irrelevant stimulus is congruent with the relevant one (i.e., when a high pitched tone is presented with a small visual object), participants respond more rapidly and accurately than on incongruent trials, where the relevant and irrelevant stimuli do not match synesthetically
<xref ref-type="bibr" rid="pone.0005664-Marks1">[2]</xref>
,
<xref ref-type="bibr" rid="pone.0005664-Marks2">[3]</xref>
,
<xref ref-type="bibr" rid="pone.0005664-Bernstein1">[13]</xref>
. Despite a growing number of studies showing synesthetically driven interactions between crossmodal stimuli, there is to date no psychophysical evidence that synesthetic congruency actually modulates multisensory integration.</p>
<p>Here we investigate the role of synesthetic correspondences in the integration of pairs of temporally (Experiments 1 and 2) or spatially (Experiment 3) conflicting auditory and visual stimuli. When spatiotemporally conflicting stimuli from different modalities are integrated, small conflicts are often compensated for, giving rise to the ventriloquist effect, whereby the conflicting stimuli are perceptually “pulled” together toward a single spatiotemporal onset
<xref ref-type="bibr" rid="pone.0005664-Alais1">[14]</xref>
<xref ref-type="bibr" rid="pone.0005664-Slutsky1">[17]</xref>
. Participants therefore tend to perceive combinations of spatiotemporally conflicting stimuli as unitary multisensory events and become less sensitive to any crossmodal conflicts that may be present
<xref ref-type="bibr" rid="pone.0005664-Welch1">[18]</xref>
. Multisensory integration, in fact, has the cost of hampering the brain's access to the individual sensory components feeding into the integrated percept, thus reducing the reliability of estimates of potential crossmodal conflicts
<xref ref-type="bibr" rid="pone.0005664-Ernst1">[19]</xref>
,
<xref ref-type="bibr" rid="pone.0005664-Hillis1">[20]</xref>
. Reliability is defined here as the inverse of the squared discrimination threshold, or just noticeable difference (JND): the minimal difference along a given dimension between a test and a standard stimulus that an observer can detect at a specified level above chance.</p>
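The definitions above can be sketched numerically. A minimal illustration (not the authors' analysis code), assuming the common convention that the JND equals the standard deviation of a cumulative-Gaussian psychometric function:

```python
from math import erf, sqrt

def cum_gauss(x, mu, sigma):
    """Cumulative Gaussian psychometric function (mu = point of
    subjective equality, sigma = discrimination threshold/JND under
    the 84%-correct convention)."""
    return 0.5 * (1 + erf((x - mu) / (sigma * sqrt(2))))

def reliability(jnd):
    """Reliability as defined in the text: the inverse of the squared
    discrimination threshold (JND)."""
    return 1.0 / jnd ** 2
```

Under this convention the 84%-correct point lies one standard deviation from the point of subjective equality, so halving the JND quadruples the reliability.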
<p>According to Bayesian models of multisensory integration, the reliability of participants' estimates regarding intersensory conflicts is proportional to the strength of coupling between the integrated signals
<xref ref-type="bibr" rid="pone.0005664-Ernst2">[21]</xref>
. In particular, strong coupling may lead to a complete fusion of the original signals into the integrated percept, which is evidenced behaviorally by a reduction in the reliability of conflict estimates (i.e., higher discrimination thresholds), whereas a weaker coupling only leads to partial fusion, with the system still retaining access to reliable conflict estimates (i.e., lower discrimination thresholds). The strength of coupling is a function of the sensory system's prior knowledge that the crossmodal stimuli “go together”: such prior knowledge about the mapping between the signals has been modeled by a coupling prior
<xref ref-type="bibr" rid="pone.0005664-Ernst1">[19]</xref>
, representing the expected (i.e., a priori) joint distribution of the signals. The coupling prior influences the strength of coupling in inverse proportion to its variance: a variance approaching infinity (i.e., a flat prior) means that the signals are treated as independent and there is no interaction between the signals presented in the different modalities; conversely, a variance approaching 0 indicates that the signals are completely fused into the integrated percept, whereas intermediate values determine a coupling of the signals without sensory fusion. The variance of the coupling prior (and therefore the strength of coupling) is, in turn, determined by prior knowledge that the stimuli originate from a single object
<xref ref-type="bibr" rid="pone.0005664-Helbig1">[22]</xref>
or event
<xref ref-type="bibr" rid="pone.0005664-Bresciani1">[23]</xref>
and by a repeated exposure to statistical co-occurrence of the signals
<xref ref-type="bibr" rid="pone.0005664-Ernst2">[21]</xref>
.</p>
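The coupling-prior account can be illustrated with the Gaussian case, where the maximum-a-posteriori estimates have a closed form. This is a sketch of the class of models cited in the text, with variable names of our choosing, not the authors' own code:

```python
def coupled_estimates(s_a, s_v, var_a, var_v, var_prior):
    """MAP estimates of an auditory and a visual signal under a Gaussian
    coupling prior on their discrepancy (variance var_prior).

    var_prior -> infinity: flat prior, the signals are treated as
    independent; var_prior -> 0: complete fusion, both estimates collapse
    onto a reliability-weighted average; intermediate values give partial
    coupling without full fusion. (Illustrative sketch only.)"""
    denom = var_a + var_v + var_prior
    est_a = s_a + (var_a / denom) * (s_v - s_a)
    est_v = s_v + (var_v / denom) * (s_a - s_v)
    return est_a, est_v
```

With `var_prior = 0` the two estimates coincide, so the perceived intersensory conflict (and hence the reliability of conflict estimates) vanishes, which is exactly the behavioral marker of strong coupling discussed in the text.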
<p>Within such a framework, if synesthetic information is used by the perceptual system to integrate stimuli from different modalities, the strength of coupling should be higher for synesthetically congruent combinations of stimuli as compared to synesthetically incongruent combinations. Therefore, when presented with synesthetically congruent audiovisual stimuli that are either asynchronous or spatially discrepant, participants' estimates requiring access to such conflicts, such as judgments regarding the relative temporal order or the relative spatial location of the stimuli, should be less reliable (i.e., higher discrimination thresholds for spatiotemporal conflicts) as compared to conditions in which the conflicting stimuli are synesthetically incongruent.</p>
<p>A similar effect has recently been reported in the temporal domain with asynchronously presented audiovisual speech stimuli (human voices and moving lips) that were either matched (i.e., voices and moving lips belonging to the same person) or mismatched (i.e., voices and moving lips belonging to different people). When both modalities provide congruent information, more pronounced multisensory integration takes place, leading to a “unity effect”, which is evidenced behaviorally by an increase in the discrimination thresholds for audiovisual temporal asynchronies
<xref ref-type="bibr" rid="pone.0005664-Vatakis1">[24]</xref>
,
<xref ref-type="bibr" rid="pone.0005664-Vatakis2">[25]</xref>
. Interestingly, subsequent studies have shown that the phenomenon disappears when participants are presented with realistic non-speech stimuli, thus suggesting that the “unity effect” might be specific to speech
<xref ref-type="bibr" rid="pone.0005664-Vatakis1">[24]</xref>
,
<xref ref-type="bibr" rid="pone.0005664-Vatakis3">[26]</xref>
.</p>
<p>An increase in the discrimination thresholds for spatial and temporal conflicts when audiovisual stimuli are synesthetically matched would provide the first psychophysical evidence that synesthetic congruency promotes multisensory integration, thus qualifying synesthetic congruency as a novel, additional cue to multisensory integration. Moreover, such a result would constitute the first empirical demonstration that the “unity effect” is not a prerogative of speech stimuli and that it can also occur in the spatial domain. We anticipate that, in keeping with our predictions, participants' estimates regarding both spatial and temporal conflicts were less reliable with synesthetically congruent audiovisual stimuli than with synesthetically incongruent stimuli, thus supporting the claim that synesthetic congruency promotes multisensory integration.</p>
</sec>
<sec sec-type="materials|methods" id="s2">
<title>Materials and Methods</title>
<sec id="s2a">
<title>Experiment 1: Temporal Conflict – Pitch-Size</title>
<p>Twelve non-synesthetic participants, with normal vision and audition, made unspeeded audiovisual temporal order judgments (TOJs) regarding which stimulus (i.e., visual or auditory) had been presented second
<xref ref-type="bibr" rid="pone.0005664-Shore1">[27]</xref>
. Visual stimuli consisted of light grey circles presented for 26 ms at the centre of a CRT screen against a white background, and subtending 2.1° (small stimulus) or 5.2° (large stimulus) of visual angle at a viewing distance of 55 cm. The auditory stimuli consisted of 26 ms pure tones, with 5 ms linear ramps at on- and off-set and delivered via headphones against background white noise. The frequency of the tones was 300 Hz (low pitched) or 4500 Hz (high pitched). High and low pitched tones in this and the following experiments were made equally loud for each participant through an adaptive psychophysical procedure (QUEST,
<xref ref-type="bibr" rid="pone.0005664-Watson1">[28]</xref>
).</p>
<p>A visual and an auditory stimulus were presented on each trial with a variable stimulus onset asynchrony (SOA; ±467, ±333, ±267, ±200, ±133, ±76 and 0 ms, negative values indicate that visual stimulus trailed the auditory stimulus, positive values indicate that visual stimulus led). Each SOA was presented 10 times (20 for the 0 ms SOA) in each condition (i.e., in both the synesthetically congruent and synesthetically incongruent conditions). The auditory and visual stimuli presented on each trial were equiprobably either synesthetically congruent along the above-mentioned pitch-size dimension (i.e., a higher-pitched tone was paired with a smaller visual stimulus or a lower-pitched tone was paired with a larger visual stimulus) or else synesthetically incongruent (i.e., a higher-pitched tone was paired with a larger visual stimulus and a lower-pitched tone was paired with a smaller visual stimulus, see
<xref ref-type="fig" rid="pone-0005664-g001">Figure 1A</xref>
). In order to maximize the alternation of congruent and incongruent trials, no more than 2 trials from the same condition were presented in a row. Participants performed an unspeeded discrimination task in which they indicated the modality of the second stimulus presented on each trial by pressing one of two response keys.</p>
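The constraint that no more than two trials from the same condition occur in a row can be sketched as a constrained shuffle. The restart-on-dead-end strategy is our choice, not a detail described in the paper:

```python
import random

def make_trial_sequence(n_congruent, n_incongruent, max_run=2):
    """Interleave congruent ('C') and incongruent ('I') trials so that no
    more than max_run trials of the same condition appear in a row."""
    while True:
        remaining = {'C': n_congruent, 'I': n_incongruent}
        seq = []
        dead_end = False
        while sum(remaining.values()) > 0:
            # Conditions still available that would not extend a maximal run.
            options = [c for c in remaining
                       if remaining[c] > 0 and seq[-max_run:] != [c] * max_run]
            if not options:
                dead_end = True   # e.g. only 'C' trials left after two 'C's
                break
            pick = random.choice(options)
            seq.append(pick)
            remaining[pick] -= 1
        if not dead_end:
            return seq
```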
<fig id="pone-0005664-g001" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0005664.g001</object-id>
<label>Figure 1</label>
<caption>
<title>Experiment 1: stimuli and results.</title>
<p>a. Pairs of auditory and visual stimuli presented in synesthetically congruent (top) and incongruent trials (bottom) in Experiment 1. b. Psychometric functions describing performance on synesthetically congruent (continuous line) and incongruent (dashed line) conditions in Experiment 1. Filled and empty circles represent the proportion of “auditory second” responses for each SOA tested averaged over all participants of Experiment 1. c. Scatter and bagplot
<xref ref-type="bibr" rid="pone.0005664-Rousseeuw1">[39]</xref>
of participants' sensitivity (JNDs) on congruent vs. incongruent trials (log-log coordinates). Points below the identity line indicate a stronger coupling of congruent stimuli. The cross at the centre of the bag represents the depth median. d. Sensitivity of participants' responses (JNDs) on congruent and incongruent trials in log scale. The central lines in the boxes represent the median JND, the boxes indicate the first and third quartiles, and the whiskers, the range of the data.</p>
</caption>
<graphic xlink:href="pone.0005664.g001"></graphic>
</fig>
</sec>
<sec id="s2b">
<title>Experiment 2: Temporal Conflict – Pitch/Waveform-Shape</title>
<p>The generalizability of the results of Experiment 1 was tested in a second experiment by varying the synesthetic correspondence between the auditory features of pitch and waveform and the visual features of curvilinearity and the magnitude of the angles of regular shapes (see
<xref ref-type="bibr" rid="pone.0005664-Marks1">[2]</xref>
,
<xref ref-type="bibr" rid="pone.0005664-OBoyle1">[31]</xref>
; see
<xref ref-type="fig" rid="pone-0005664-g002">Fig. 2A</xref>
). The visual stimuli consisted of black 7-pointed stars presented for 26 ms against a white background and subtending 5.2° of visual angle. One star was curvilinear, with a ratio of inscribed to circumscribed circles of 0.65, whereas the other star was angular, with a ratio of inscribed to circumscribed circles of 0.55. The auditory stimuli, delivered via headphones against background white noise, consisted of 26 ms tones with 5 ms linear ramps at onset and offset. One auditory stimulus consisted of a high pitched (1760 Hz) square-wave tone, whereas the other had a lower frequency (440 Hz) and a sinusoidal waveform.</p>
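The star stimuli are parameterized by the ratio of the inscribed to the circumscribed circle. A geometric sketch of the vertex layout (this captures only the radius ratio described in the text; the curvilinear star's rounded contours would require curved segments between these vertices, and rendering details are assumptions):

```python
from math import cos, pi, sin

def star_vertices(n_points=7, r_outer=1.0, ratio=0.55):
    """Vertices of a regular n-pointed star whose inner (inscribed)
    radius is ratio * r_outer, alternating outer and inner vertices.
    ratio = 0.55 corresponds to the angular star of Experiment 2,
    ratio = 0.65 to the curvilinear one (before rounding its contours)."""
    vertices = []
    for k in range(2 * n_points):
        r = r_outer if k % 2 == 0 else ratio * r_outer
        theta = pi * k / n_points - pi / 2   # start with a point at the top
        vertices.append((r * cos(theta), r * sin(theta)))
    return vertices
```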
<fig id="pone-0005664-g002" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0005664.g002</object-id>
<label>Figure 2</label>
<caption>
<title>Experiment 2: stimuli and results.</title>
<p>a. Pairs of auditory and visual stimuli presented in Experiment 2. b. Psychometric functions describing performance on synesthetically congruent (continuous line) and incongruent (dashed line) conditions in Experiment 2. c. Bagplot
<xref ref-type="bibr" rid="pone.0005664-Rousseeuw1">[39]</xref>
of participants' sensitivity (JNDs) on congruent vs. incongruent trials. d. Participants' sensitivity (JNDs), on congruent and incongruent trials in Experiment 2.</p>
</caption>
<graphic xlink:href="pone.0005664.g002"></graphic>
</fig>
<p>The experimental procedure was the same as in Experiment 1, except that the compatible stimulus combination here consisted of the pointed star presented together with the higher pitched tone and the curvilinear star with the lower pitched tone. Conversely, the incompatible stimulus pairs consisted of the pointed star coupled with the lower pitched tone and the curvilinear star with the higher pitched tone.</p>
</sec>
<sec id="s2c">
<title>Experiment 3: Spatial Conflict</title>
<p>Twelve non-synesthetic participants, with normal vision and audition, made unspeeded judgments as to whether an auditory stimulus was presented to either the left or the right of a visual stimulus.</p>
<p>The visual stimuli consisted of white Gaussian blobs projected for 200 ms against a black background on a fine fabric screen (width: 107.7 cm; height: 80.8 cm). The standard deviation of the Gaussian luminance profile of the blobs subtended 0.26° (small stimulus) or 2.3° (large stimulus) of visual angle at a viewing distance of 110.5 cm (a chinrest was used to control head position). The auditory stimuli consisted of 200 ms pure tones with 5 ms linear ramps at onset and offset; the frequency of the tones was either 300 Hz (low pitched) or 4500 Hz (high pitched; see
<xref ref-type="fig" rid="pone-0005664-g003">Fig. 3A</xref>
). In order to provide richer spectral cues for auditory localization, the tones were convolved with white noise
<xref ref-type="bibr" rid="pone.0005664-King1">[32]</xref>
and their intensity was modulated with a sinusoidal profile at a frequency of 50 Hz. The auditory stimuli were delivered from one of four loudspeakers placed behind the fabric screen (5.2 cm and 15.6 cm to the left and the right of the screen's midline), and their intensity was randomly jittered from trial to trial (within ±1% of the standard intensity) to prevent participants from using any slight differences in the intensities of the sounds delivered by the four loudspeakers as auxiliary cues for sound-source localization. White noise was delivered throughout the experimental session by an additional pair of loudspeakers placed behind the screen.</p>
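The auditory stimulus construction (200 ms tone, 5 ms linear on/off ramps, 50 Hz sinusoidal intensity modulation) can be sketched as below. The sample rate and the raised-sine form of the modulator are our assumptions, and the white-noise convolution step described in the text is omitted:

```python
from math import pi, sin

def am_tone(freq=300.0, dur=0.2, ramp=0.005, am_freq=50.0, sr=44100):
    """Pure tone with linear on/off ramps and sinusoidal amplitude
    modulation, sketching Experiment 3's auditory stimuli.
    Returns a list of samples in [-1, 1]."""
    n = int(dur * sr)
    out = []
    for i in range(n):
        t = i / sr
        env = min(1.0, t / ramp, (dur - t) / ramp)    # 5 ms linear ramps
        am = 0.5 * (1.0 + sin(2 * pi * am_freq * t))  # 50 Hz modulation
        out.append(env * am * sin(2 * pi * freq * t))
    return out
```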
<fig id="pone-0005664-g003" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0005664.g003</object-id>
<label>Figure 3</label>
<caption>
<title>Experiment 3: stimuli and results.</title>
<p>a. Pairs of auditory and visual stimuli presented in Experiment 3. b. Psychometric functions describing performance on synesthetically congruent (continuous line) and incongruent (dashed line) conditions in Experiment 3. c. Bagplot
<xref ref-type="bibr" rid="pone.0005664-Rousseeuw1">[39]</xref>
of participants' sensitivity (JNDs) on congruent vs. incongruent trials. d. Participants' sensitivity (JNDs), on congruent and incongruent trials in Experiment 3.</p>
</caption>
<graphic xlink:href="pone.0005664.g003"></graphic>
</fig>
<p>A train of 3 synchronous audiovisual events, with an interstimulus interval randomized between 150 ms and 300 ms, was presented on each trial, with the source of the auditory stimulus randomly located to the left or the right of the visual stimulus and the magnitude of the azimuthal displacement determined using an adaptive psychophysical procedure. At the beginning of the experiment we assumed a psychometric function fitted over a small set of hypothetical data points. In particular, we assumed that participants correctly responded “left” or “right” in 4 trials in which the auditory stimulus was placed 9.7° and 4.9° to the left or the right of the visual stimulus, and fitted a cumulative Gaussian curve over these four points. Then, after each response, the curve was fitted again with the newly-collected data, and the auditory stimulus presented on the next trial was randomly placed to the left or the right of the visual stimulus with a displacement normally distributed around 1 JND (s.d. 1 JND). This procedure was selected because preliminary results indicated high variability in participants' ability to localize sounds, making it hard to select an effective placement of the stimuli in advance (as required by the method of limits
<xref ref-type="bibr" rid="pone.0005664-Dixon1">[33]</xref>
and the method of constant stimuli
<xref ref-type="bibr" rid="pone.0005664-Watson2">[34]</xref>
), and because it optimizes the information provided by each trial by placing the stimuli around the regions that are most relevant for calculating the JND. In order to train participants to localize sounds, before running the experiment they were required to perform a quick task (96 trials) in which a sound was emitted by one of 8 loudspeakers placed behind the screen (4 to the left and 4 to the right of the vertical midline) and they had to determine whether it was coming from the left or the right of the screen's midline (visual feedback was provided after incorrect responses in the training block).</p>
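The adaptive procedure just described can be sketched as follows (a simplified illustration using a least-squares grid search for the cumulative-Gaussian fit; the authors' actual fitting routine is not specified here):

```python
import numpy as np
from math import erf, sqrt

def cum_gauss(x, mu, sigma):
    """Cumulative Gaussian psychometric function."""
    return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

def fit_psychometric(x, p):
    """Least-squares cumulative-Gaussian fit via a coarse grid search.
    x: stimulus displacements; p: proportion of 'right' responses.
    Returns (PSE, JND), with JND = half the 25%-75% spread of the fit."""
    f = np.vectorize(cum_gauss)
    best = (np.inf, 0.0, 1.0)
    for mu in np.linspace(x.min(), x.max(), 81):
        for sigma in np.linspace(0.1, x.max() - x.min(), 120):
            err = float(np.sum((f(x, mu, sigma) - p) ** 2))
            if err < best[0]:
                best = (err, mu, sigma)
    _, mu, sigma = best
    return mu, sigma * 0.6745  # 0.6745 = z-score of the 75th percentile

def next_displacement(pse, jnd, rng=np.random.default_rng()):
    """Place the next sound one JND to a random side of the PSE
    (normally distributed, s.d. = 1 JND), as in the adaptive procedure."""
    return rng.normal(pse + rng.choice([-1.0, 1.0]) * jnd, jnd)
```

After each response, the newly collected trial is appended to `x` and `p`, the curve is refitted, and `next_displacement` yields the placement for the following trial.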
<p>The auditory and visual stimuli presented on each trial were equiprobably either synesthetically congruent along the pitch-size dimension (i.e., a higher-pitched tone was paired with a smaller visual stimulus or a lower-pitched tone was paired with a larger visual stimulus) or else they were synesthetically incongruent (i.e., a higher-pitched tone was paired with a larger visual stimulus or a lower-pitched tone was paired with a smaller visual stimulus, see
<xref ref-type="fig" rid="pone-0005664-g003">Fig. 3A</xref>
). Two hundred and eighty trials were presented in each session (140 congruent and 140 incongruent). Participants performed an unspeeded discrimination task in which they had to press either the left or the right button of a computer mouse in order to indicate whether the auditory stimulus was coming from the left or the right of the visual stimulus.</p>
</sec>
<sec id="s2d">
<title>Ethics statement</title>
<p>This study was conducted in accordance with the Declaration of Helsinki and received ethical approval from the Department of Experimental Psychology at the University of Oxford. All participants provided written informed consent and received course credit or a £5 gift voucher in return.</p>
</sec>
</sec>
<sec id="s3">
<title>Results</title>
<sec id="s3a">
<title>Experiment 1: Temporal Conflict – Pitch-Size</title>
<p>Separate psychometric functions for congruent and incongruent trials were calculated for each participant by fitting the ratios of “auditory second” responses plotted against SOAs with a cumulative Gaussian distribution
<xref ref-type="bibr" rid="pone.0005664-Wichmann1">[29]</xref>
(see
<xref ref-type="fig" rid="pone-0005664-g001">Fig. 1B</xref>
). The just noticeable differences (JNDs), providing a measure of the reliability (i.e., the discrimination threshold) of participants' TOJs, were calculated for both the synesthetically congruent and the synesthetically incongruent conditions by subtracting the SOA at which participants made 25% “auditory second” responses from the SOA at which they made 75% “auditory second” responses and halving the result (see
<xref ref-type="fig" rid="pone-0005664-g001">Fig. 1B–D</xref>
). Synesthetic congruency had a significant influence on the reliability of participants' estimates (Wilcoxon Signed Rank Test, Z = −2.903, p = .004), with smaller JNDs (indicating increased reliability) reported for synesthetically incongruent trials (median = 61 ms, interquartile range (IQR) = 51–71 ms) than for congruent trials (median = 82 ms, IQR = 72–104 ms). This result provides support for the claim that enhanced multisensory integration takes place for congruent as compared to incongruent audiovisual stimulus pairs. Eleven of the 12 participants tested exhibited less reliable TOJ estimates for synesthetically congruent as compared to incongruent stimulus pairs (Sign Test, p = .006). Although the PSE data (denoting the point of maximum uncertainty in participants' judgments) provide relevant information neither regarding the strength of coupling (e.g., see
<xref ref-type="bibr" rid="pone.0005664-Ernst1">[19]</xref>
,
<xref ref-type="bibr" rid="pone.0005664-Ernst3">[30]</xref>
) nor the “unity effect” (e.g., see
<xref ref-type="bibr" rid="pone.0005664-Vatakis1">[24]</xref>
<xref ref-type="bibr" rid="pone.0005664-Vatakis3">[26]</xref>
), the statistical outcome for the effect of synesthetic associations on the PSE is reported for completeness: Z = −0.549, p = .583.</p>
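Concretely, the 25% and 75% points of a fitted cumulative Gaussian can be found by inverting the curve, and half their separation gives the JND (a minimal sketch; the parameter values below are illustrative, not the authors' fits):

```python
from math import erf, sqrt

def cum_gauss(soa, mu, sigma):
    """Fitted proportion of 'auditory second' responses at a given SOA."""
    return 0.5 * (1.0 + erf((soa - mu) / (sigma * sqrt(2.0))))

def soa_at(p, mu, sigma, lo=-1000.0, hi=1000.0):
    """Invert the fitted curve by bisection: the SOA giving response ratio p."""
    for _ in range(60):
        mid = (lo + hi) / 2.0
        if cum_gauss(mid, mu, sigma) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def jnd(mu, sigma):
    """Half the distance between the 75% and 25% points of the fit."""
    return (soa_at(0.75, mu, sigma) - soa_at(0.25, mu, sigma)) / 2.0

# A fit with sigma = 100 ms yields a JND of sigma * 0.6745, roughly 67 ms,
# independently of the PSE (mu).
```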
</sec>
<sec id="s3b">
<title>Experiment 2: Temporal Conflict – Pitch/Waveform-Shape</title>
<p>JNDs (calculated using the procedure described for Experiment 1) were again significantly higher on the synesthetically congruent trials (median = 95 ms, IQR = 77–129 ms) than on the synesthetically incongruent trials (median = 77 ms, IQR = 61–86 ms; Wilcoxon Signed Rank Test, Z = −2.589, p = .010), with 10 of the 12 participants tested exhibiting higher discrimination thresholds in the congruent as compared to the incongruent condition (Sign Test, p = .039, see
<xref ref-type="fig" rid="pone-0005664-g002">Fig. 2B–D</xref>
). No significant effect of condition was found in the PSE data (Z = .893, p = .343).</p>
</sec>
<sec id="s3c">
<title>Experiment 3: Spatial Conflict</title>
<p>Separate psychometric functions were calculated for congruent and incongruent trials for each participant by fitting the ratios of “auditory right” responses plotted against spatial displacement (measured in degrees of visual angle, with negative values indicating that the auditory stimulus was placed to the left of the visual one) with a cumulative Gaussian distribution
<xref ref-type="bibr" rid="pone.0005664-Wichmann1">[29]</xref>
(see
<xref ref-type="fig" rid="pone-0005664-g003">Figure 3B</xref>
). Synesthetic congruency significantly influenced the reliability of participants' estimates (Wilcoxon Signed Rank Test, Z = −3.059, p = .002), with smaller discrimination thresholds reported for synesthetically incongruent trials (median = 1.7°, IQR = 0.9°) than for congruent trials (median = 2.2°, IQR = 2.6°), thus providing support for the claim that enhanced multisensory integration takes place for congruent as compared to incongruent pairs of audiovisual stimuli. All of the participants exhibited lower discrimination thresholds in response to spatial conflicts between synesthetically incongruent as compared to congruent stimulus pairs (Sign Test, p<.001, see
<xref ref-type="fig" rid="pone-0005664-g003">Fig. 3C–D</xref>
). Interestingly, synesthetic congruency also had a significant effect on the PSE data in this experiment: Z = −2.432, p = .015.</p>
</sec>
</sec>
<sec id="s4">
<title>Discussion</title>
<p>The results of the three experiments reported here demonstrate that synesthetic correspondences affect multisensory integration, as assessed by their effect on the reliability of participants' audiovisual TOJs and spatial localization judgments. In particular, estimates requiring access to temporal (Experiments 1 and 2) and spatial (Experiment 3) conflicts between synesthetically congruent auditory and visual stimuli were found to be less reliable (i.e., to have higher discrimination thresholds) than those requiring access to conflicts between synesthetically incongruent stimuli. Reduced reliability of the estimates requiring access to intersensory conflicts reflects the cost of multisensory integration and is a marker of stronger coupling between the unisensory signals
<xref ref-type="bibr" rid="pone.0005664-Ernst1">[19]</xref>
,
<xref ref-type="bibr" rid="pone.0005664-Hillis1">[20]</xref>
,
<xref ref-type="bibr" rid="pone.0005664-Ernst3">[30]</xref>
. These results therefore indicate stronger coupling for synesthetically congruent than for synesthetically incongruent stimuli and provide the first psychophysical evidence that synesthetic congruency can actually promote multisensory integration. It should be noted, however, that the synesthetic associations studied here (as well as in many other studies, see
<xref ref-type="bibr" rid="pone.0005664-Gallace1">[1]</xref>
<xref ref-type="bibr" rid="pone.0005664-Marks2">[3]</xref>
,
<xref ref-type="bibr" rid="pone.0005664-Martino1">[5]</xref>
,
<xref ref-type="bibr" rid="pone.0005664-Melara1">[7]</xref>
<xref ref-type="bibr" rid="pone.0005664-Walker2">[10]</xref>
,
<xref ref-type="bibr" rid="pone.0005664-Bernstein1">[13]</xref>
) are likely relative rather than absolute, depending on the particular range of stimuli used. What is called a ‘big’ circle, in fact, would most likely behave like a small circle if we happened to pair it with an even larger circle, and the same argument would apply,
<italic>mutatis mutandis,</italic>
to any other potential stimulus features that happen to be considered (see
<xref ref-type="bibr" rid="pone.0005664-Marks2">[3]</xref>
on this issue).</p>
<p>Considering that the unimodal signals used in our experiments were identical in the congruent and incongruent conditions (i.e., the signal reliability was the same in both conditions), the difference in the strength of coupling reported here should be attributed to the knowledge of the participants' perceptual systems about which stimuli ‘belong together’ (or, rather, which normally co-occur) and should therefore be integrated. According to Bayesian integration models, such prior knowledge about stimulus mapping, the coupling prior, determines the strength of the coupling between the stimuli in proportion to its reliability (with reliability defined as the inverse of the variance of the coupling-prior distribution); that is, the more certain the system is that two stimuli belong together (i.e., the smaller the variance of the coupling prior), the more strongly those stimuli will be coupled
<xref ref-type="bibr" rid="pone.0005664-Ernst1">[19]</xref>
,
<xref ref-type="bibr" rid="pone.0005664-Ernst3">[30]</xref>
. The effect of synesthetic associations on multisensory integration could, therefore, be interpreted in terms of differences in the variance of the coupling prior (i.e., a smaller variance for synesthetically congruent stimulus pairs than for synesthetically incongruent pairs); that is, synesthetic associations determine the strength of coupling by modulating the variance of the coupling-prior distribution.</p>
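The role of the coupling-prior variance can be illustrated with a toy MAP computation in the spirit of the Bayesian models cited above (the numbers are arbitrary and purely illustrative, not fitted to our data):

```python
import numpy as np

def map_fuse(x_v, x_a, var_v, var_a, var_c):
    """MAP estimates of the visual and auditory locations under a Gaussian
    coupling prior on the audiovisual discrepancy (variance var_c):
    the smaller var_c, the stronger the coupling between the signals.
    Obtained by minimizing the sum of the three quadratic (Gaussian) costs,
    which reduces to a 2x2 linear system."""
    A = np.array([[1/var_v + 1/var_c, -1/var_c],
                  [-1/var_c, 1/var_a + 1/var_c]])
    b = np.array([x_v / var_v, x_a / var_a])
    v_hat, a_hat = np.linalg.solve(A, b)
    return v_hat, a_hat

# A 10-unit audiovisual conflict with equally reliable unimodal signals:
tight = map_fuse(0.0, 10.0, 4.0, 4.0, 1.0)    # strong coupling ('congruent')
loose = map_fuse(0.0, 10.0, 4.0, 4.0, 100.0)  # weak coupling ('incongruent')
# The residual perceived conflict (a_hat - v_hat) shrinks as var_c decreases,
# mirroring the reduced reliability of conflict estimates for congruent pairs.
```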
<p>It should, however, be noted that our results might also be accounted for by the possibility that synesthetic associations modulate the tuning of multisensory spatio-temporal filters (see
<xref ref-type="bibr" rid="pone.0005664-Burr1">[35]</xref>
). The early stages of sensory processing have, in fact, traditionally been modeled in terms of spatial and temporal filters operating upon the incoming sensory information (e.g. see
<xref ref-type="bibr" rid="pone.0005664-deLangeDzn1">[36]</xref>
,
<xref ref-type="bibr" rid="pone.0005664-Robson1">[37]</xref>
). Their role, in a crossmodal setting, would be critical to determining the perceived temporal simultaneity and spatial coincidence of multisensory signals
<xref ref-type="bibr" rid="pone.0005664-Burr1">[35]</xref>
. Synesthetic information might act on these filters by increasing their spatial and temporal constants under conditions of congruent crossmodal stimulation and by reducing those constants when the stimuli are incongruent. In keeping with the data reported here, such a synesthetic modulation of the tuning of multisensory spatio-temporal filters would also give rise to larger windows of both subjective simultaneity and spatial coincidence for congruent as compared to incongruent pairs of audiovisual stimuli.</p>
<p>The results of Experiments 1 and 2 also extend the findings of previous research on the “unity effect” by showing that an increase in the discrimination threshold for temporal asynchronies is not specific to matched audiovisual speech events: synesthetic congruency can also trigger robust unity effects. Vatakis and her colleagues
<xref ref-type="bibr" rid="pone.0005664-Vatakis1">[24]</xref>
<xref ref-type="bibr" rid="pone.0005664-Vatakis3">[26]</xref>
have conducted a number of studies on the integration of asynchronous but ecologically-valid audiovisual stimuli and have consistently found that the “unity effect” is restricted to speech stimuli, concluding that speech is “special” inasmuch as the facilitatory effect on multisensory integration leading to the unity effect is specific to speech. Our results, therefore, not only extend the class of stimuli that are known to lead to a unity effect, but also suggest that synesthetic associations might be “special” too (or, rather, that audiovisual speech stimuli may not be so special, or unique, after all). In addition, the results of Experiment 3, showing that participants' discrimination thresholds for the spatial separation between auditory and visual stimuli are increased when the stimuli are synesthetically congruent, constitute the first experimental evidence that the unity effect also occurs in the spatial domain, and thus provide additional evidence for the claim that the unity effect results from more pronounced multisensory integration.</p>
<p>While research has tended to focus on the spatiotemporal constraints of multisensory integration over the past 25 years
<xref ref-type="bibr" rid="pone.0005664-Calvert1">[38]</xref>
, the results reported here demonstrate that synesthetic congruency provides an additional constraint on such processes.</p>
</sec>
</body>
<back>
<ack>
<p>We would like to thank Jess Hartcher O'Brien, Carmel Levitan, Francesco Pavani, and Argiro Vatakis for their comments on an earlier version of this manuscript. We are also grateful to Andrew King for his advice on the selection of the auditory stimuli.</p>
</ack>
<ref-list>
<title>References</title>
<ref id="pone.0005664-Gallace1">
<label>1</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Gallace</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Spence</surname>
<given-names>C</given-names>
</name>
</person-group>
<year>2006</year>
<article-title>Multisensory synesthetic interactions in the speeded classification of visual size.</article-title>
<source>Perception & Psychophysics</source>
<volume>68</volume>
<fpage>1191</fpage>
<lpage>1203</lpage>
<pub-id pub-id-type="pmid">17355042</pub-id>
</citation>
</ref>
<ref id="pone.0005664-Marks1">
<label>2</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Marks</surname>
<given-names>LE</given-names>
</name>
</person-group>
<year>1987</year>
<article-title>On cross-modal similarity: auditory-visual interactions in speeded discrimination.</article-title>
<source>Journal of Experimental Psychology: Human Perception and Performance</source>
<volume>13</volume>
<fpage>384</fpage>
<lpage>394</lpage>
</citation>
</ref>
<ref id="pone.0005664-Marks2">
<label>3</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Marks</surname>
<given-names>LE</given-names>
</name>
</person-group>
<year>1989</year>
<article-title>On cross-modal similarity: the perceptual structure of pitch, loudness, and brightness.</article-title>
<source>Journal of Experimental Psychology: Human Perception and Performance</source>
<volume>15</volume>
<fpage>586</fpage>
<lpage>602</lpage>
</citation>
</ref>
<ref id="pone.0005664-Marks3">
<label>4</label>
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Marks</surname>
<given-names>L</given-names>
</name>
</person-group>
<year>2004</year>
<article-title>Cross-modal interactions in speeded classification.</article-title>
<person-group person-group-type="editor">
<name>
<surname>Calvert</surname>
<given-names>GA</given-names>
</name>
<name>
<surname>Spence</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Stein</surname>
<given-names>BE</given-names>
</name>
</person-group>
<source>The handbook of multisensory processes</source>
<publisher-loc>Cambridge, MA</publisher-loc>
<publisher-name>MIT Press</publisher-name>
<fpage>85</fpage>
<lpage>106</lpage>
</citation>
</ref>
<ref id="pone.0005664-Martino1">
<label>5</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Martino</surname>
<given-names>G</given-names>
</name>
<name>
<surname>Marks</surname>
<given-names>LE</given-names>
</name>
</person-group>
<year>2000</year>
<article-title>Cross-modal interaction between vision and touch: the role of synesthetic correspondence.</article-title>
<source>Perception</source>
<volume>29</volume>
<fpage>745</fpage>
<lpage>754</lpage>
<pub-id pub-id-type="pmid">11040956</pub-id>
</citation>
</ref>
<ref id="pone.0005664-Martino2">
<label>6</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Martino</surname>
<given-names>G</given-names>
</name>
<name>
<surname>Marks</surname>
<given-names>LE</given-names>
</name>
</person-group>
<year>2001</year>
<article-title>Synesthesia: Strong and weak.</article-title>
<source>Current Directions in Psychological Science</source>
<volume>10</volume>
<fpage>61</fpage>
<lpage>65</lpage>
</citation>
</ref>
<ref id="pone.0005664-Melara1">
<label>7</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Melara</surname>
<given-names>RD</given-names>
</name>
<name>
<surname>O'Brien</surname>
<given-names>TP</given-names>
</name>
</person-group>
<year>1987</year>
<article-title>Interaction between synesthetically corresponding dimensions.</article-title>
<source>Journal of Experimental Psychology: General</source>
<volume>116</volume>
<fpage>323</fpage>
<lpage>336</lpage>
</citation>
</ref>
<ref id="pone.0005664-Parise1">
<label>8</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Parise</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Spence</surname>
<given-names>C</given-names>
</name>
</person-group>
<year>2008</year>
<article-title>Synesthetic congruency modulates the temporal ventriloquism effect.</article-title>
<source>Neuroscience Letters</source>
<volume>442</volume>
<fpage>257</fpage>
<lpage>261</lpage>
<pub-id pub-id-type="pmid">18638522</pub-id>
</citation>
</ref>
<ref id="pone.0005664-Walker1">
<label>9</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Walker</surname>
<given-names>P</given-names>
</name>
<name>
<surname>Smith</surname>
<given-names>S</given-names>
</name>
</person-group>
<year>1984</year>
<article-title>Stroop interference based on the synaesthetic qualities of auditory pitch.</article-title>
<source>Perception</source>
<volume>13</volume>
<fpage>75</fpage>
<lpage>81</lpage>
<pub-id pub-id-type="pmid">6473055</pub-id>
</citation>
</ref>
<ref id="pone.0005664-Walker2">
<label>10</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Walker</surname>
<given-names>P</given-names>
</name>
<name>
<surname>Smith</surname>
<given-names>S</given-names>
</name>
</person-group>
<year>1985</year>
<article-title>Stroop interference based on the multimodal correlates of haptic size and auditory pitch.</article-title>
<source>Perception</source>
<volume>14</volume>
<fpage>729</fpage>
<lpage>736</lpage>
<pub-id pub-id-type="pmid">3837874</pub-id>
</citation>
</ref>
<ref id="pone.0005664-CohenKadosh1">
<label>11</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Cohen Kadosh</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Henik</surname>
<given-names>A</given-names>
</name>
</person-group>
<year>2007</year>
<article-title>Can synaesthesia research inform cognitive science?</article-title>
<source>Trends in Cognitive Sciences</source>
<volume>11</volume>
<fpage>177</fpage>
<lpage>184</lpage>
<pub-id pub-id-type="pmid">17331789</pub-id>
</citation>
</ref>
<ref id="pone.0005664-Sapir1">
<label>12</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Sapir</surname>
<given-names>E</given-names>
</name>
</person-group>
<year>1929</year>
<article-title>A study in phonetic symbolism.</article-title>
<source>Journal of Experimental Psychology</source>
<volume>12</volume>
<fpage>225</fpage>
<lpage>239</lpage>
</citation>
</ref>
<ref id="pone.0005664-Bernstein1">
<label>13</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bernstein</surname>
<given-names>IH</given-names>
</name>
<name>
<surname>Edelstein</surname>
<given-names>BA</given-names>
</name>
</person-group>
<year>1971</year>
<article-title>Effects of some variations in auditory input upon visual choice reaction time.</article-title>
<source>Journal of Experimental Psychology</source>
<volume>87</volume>
<fpage>241</fpage>
<lpage>247</lpage>
<pub-id pub-id-type="pmid">5542226</pub-id>
</citation>
</ref>
<ref id="pone.0005664-Alais1">
<label>14</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Alais</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Burr</surname>
<given-names>D</given-names>
</name>
</person-group>
<year>2004</year>
<article-title>The ventriloquist effect results from near-optimal bimodal integration.</article-title>
<source>Current Biology</source>
<volume>14</volume>
<fpage>257</fpage>
<lpage>262</lpage>
<pub-id pub-id-type="pmid">14761661</pub-id>
</citation>
</ref>
<ref id="pone.0005664-Bertelson1">
<label>15</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bertelson</surname>
<given-names>P</given-names>
</name>
<name>
<surname>Aschersleben</surname>
<given-names>G</given-names>
</name>
</person-group>
<year>1998</year>
<article-title>Automatic visual bias of perceived auditory location.</article-title>
<source>Psychonomic Bulletin & Review</source>
<volume>5</volume>
<fpage>482</fpage>
<lpage>489</lpage>
</citation>
</ref>
<ref id="pone.0005664-MoreinZamir1">
<label>16</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Morein-Zamir</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Soto-Faraco</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Kingstone</surname>
<given-names>A</given-names>
</name>
</person-group>
<year>2003</year>
<article-title>Auditory capture of vision: examining temporal ventriloquism.</article-title>
<source>Cognitive Brain Research</source>
<volume>17</volume>
<fpage>154</fpage>
<lpage>163</lpage>
<pub-id pub-id-type="pmid">12763201</pub-id>
</citation>
</ref>
<ref id="pone.0005664-Slutsky1">
<label>17</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Slutsky</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Recanzone</surname>
<given-names>G</given-names>
</name>
</person-group>
<year>2001</year>
<article-title>Temporal and spatial dependency of the ventriloquism effect.</article-title>
<source>Neuroreport</source>
<volume>12</volume>
<fpage>7</fpage>
<lpage>10</lpage>
<pub-id pub-id-type="pmid">11201094</pub-id>
</citation>
</ref>
<ref id="pone.0005664-Welch1">
<label>18</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Welch</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Warren</surname>
<given-names>D</given-names>
</name>
</person-group>
<year>1980</year>
<article-title>Immediate perceptual response to intersensory discrepancy.</article-title>
<source>Psychological Bulletin</source>
<volume>88</volume>
<fpage>638</fpage>
<lpage>667</lpage>
</citation>
</ref>
<ref id="pone.0005664-Ernst1">
<label>19</label>
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Ernst</surname>
<given-names>M</given-names>
</name>
</person-group>
<year>2005</year>
<article-title>A Bayesian view on multimodal cue integration.</article-title>
<person-group person-group-type="editor">
<name>
<surname>Knoblich</surname>
<given-names>G</given-names>
</name>
<name>
<surname>Thornton</surname>
<given-names>I</given-names>
</name>
<name>
<surname>Grosjean</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Shiffrar</surname>
<given-names>M</given-names>
</name>
</person-group>
<source>Perception of the human body from the inside out</source>
<publisher-loc>New York</publisher-loc>
<publisher-name>Oxford University Press</publisher-name>
<fpage>105</fpage>
<lpage>131</lpage>
</citation>
</ref>
<ref id="pone.0005664-Hillis1">
<label>20</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hillis</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Ernst</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Banks</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Landy</surname>
<given-names>M</given-names>
</name>
</person-group>
<year>2002</year>
<article-title>Combining sensory information: Mandatory fusion within, but not between, senses.</article-title>
<source>Science</source>
<volume>298</volume>
<fpage>1627</fpage>
<lpage>1630</lpage>
<pub-id pub-id-type="pmid">12446912</pub-id>
</citation>
</ref>
<ref id="pone.0005664-Ernst2">
<label>21</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ernst</surname>
<given-names>M</given-names>
</name>
</person-group>
<year>2007</year>
<article-title>Learning to integrate arbitrary signals from vision and touch.</article-title>
<source>Journal of Vision</source>
<volume>7</volume>
<fpage>1</fpage>
<lpage>14</lpage>
<pub-id pub-id-type="pmid">18217847</pub-id>
</citation>
</ref>
<ref id="pone.0005664-Helbig1">
<label>22</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Helbig</surname>
<given-names>H</given-names>
</name>
<name>
<surname>Ernst</surname>
<given-names>M</given-names>
</name>
</person-group>
<year>2007</year>
<article-title>Knowledge about a common source can promote visual-haptic integration.</article-title>
<source>Perception</source>
<volume>36</volume>
<fpage>1523</fpage>
<lpage>1533</lpage>
<pub-id pub-id-type="pmid">18265835</pub-id>
</citation>
</ref>
<ref id="pone.0005664-Bresciani1">
<label>23</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bresciani</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Dammeier</surname>
<given-names>F</given-names>
</name>
<name>
<surname>Ernst</surname>
<given-names>M</given-names>
</name>
</person-group>
<year>2006</year>
<article-title>Vision and touch are automatically integrated for the perception of sequences of events.</article-title>
<source>Journal of Vision</source>
<volume>6</volume>
<fpage>2</fpage>
</citation>
</ref>
<ref id="pone.0005664-Vatakis1">
<label>24</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Vatakis</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Ghazanfar</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Spence</surname>
<given-names>C</given-names>
</name>
</person-group>
<year>2008</year>
<article-title>Facilitation of multisensory integration by the “unity effect” reveals that speech is special.</article-title>
<source>Journal of Vision</source>
<volume>8</volume>
<fpage>1</fpage>
<lpage>11</lpage>
<pub-id pub-id-type="pmid">18831650</pub-id>
</citation>
</ref>
<ref id="pone.0005664-Vatakis2">
<label>25</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Vatakis</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Spence</surname>
<given-names>C</given-names>
</name>
</person-group>
<year>2007</year>
<article-title>Crossmodal binding: evaluating the "unity assumption" using audiovisual speech stimuli.</article-title>
<source>Perception &amp; Psychophysics</source>
<volume>69</volume>
<fpage>744</fpage>
<lpage>756</lpage>
<pub-id pub-id-type="pmid">17929697</pub-id>
</citation>
</ref>
<ref id="pone.0005664-Vatakis3">
<label>26</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Vatakis</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Spence</surname>
<given-names>C</given-names>
</name>
</person-group>
<year>2007</year>
<article-title>Evaluating the influence of the ‘unity assumption’ on the temporal perception of realistic audiovisual stimuli.</article-title>
<source>Acta Psychologica</source>
<volume>127</volume>
<fpage>12</fpage>
<lpage>23</lpage>
<pub-id pub-id-type="pmid">17258164</pub-id>
</citation>
</ref>
<ref id="pone.0005664-Shore1">
<label>27</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Shore</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Spence</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Klein</surname>
<given-names>R</given-names>
</name>
</person-group>
<year>2001</year>
<article-title>Visual Prior Entry.</article-title>
<source>Psychological Science</source>
<volume>12</volume>
<fpage>205</fpage>
<lpage>212</lpage>
<pub-id pub-id-type="pmid">11437302</pub-id>
</citation>
</ref>
<ref id="pone.0005664-Watson1">
<label>28</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Watson</surname>
<given-names>AB</given-names>
</name>
<name>
<surname>Pelli</surname>
<given-names>DG</given-names>
</name>
</person-group>
<year>1983</year>
<article-title>QUEST- A Bayesian adaptive psychometric method.</article-title>
<source>Perception and Psychophysics</source>
<volume>33</volume>
<fpage>113</fpage>
<lpage>120</lpage>
<pub-id pub-id-type="pmid">6844102</pub-id>
</citation>
</ref>
<ref id="pone.0005664-Wichmann1">
<label>29</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wichmann</surname>
<given-names>F</given-names>
</name>
<name>
<surname>Hill</surname>
<given-names>N</given-names>
</name>
</person-group>
<year>2001</year>
<article-title>The psychometric function: I. Fitting, sampling, and goodness of fit.</article-title>
<source>Perception & Psychophysics</source>
<volume>63</volume>
<fpage>1293</fpage>
<lpage>1313</lpage>
<pub-id pub-id-type="pmid">11800458</pub-id>
</citation>
</ref>
<ref id="pone.0005664-Ernst3">
<label>30</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ernst</surname>
<given-names>M</given-names>
</name>
</person-group>
<year>2007</year>
<article-title>Learning to integrate arbitrary signals from vision and touch.</article-title>
<source>Journal of Vision</source>
<volume>7</volume>
<fpage>1</fpage>
<lpage>14</lpage>
<pub-id pub-id-type="pmid">18217847</pub-id>
</citation>
</ref>
<ref id="pone.0005664-OBoyle1">
<label>31</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>O'Boyle</surname>
<given-names>MW</given-names>
</name>
<name>
<surname>Tarte</surname>
<given-names>RD</given-names>
</name>
</person-group>
<year>1980</year>
<article-title>Implications for phonetic symbolism: the relationship between pure tones and geometric figures.</article-title>
<source>Journal of Psycholinguistic Research</source>
<volume>9</volume>
<fpage>535</fpage>
<lpage>544</lpage>
<pub-id pub-id-type="pmid">6162950</pub-id>
</citation>
</ref>
<ref id="pone.0005664-King1">
<label>32</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>King</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Oldfield</surname>
<given-names>S</given-names>
</name>
</person-group>
<year>1997</year>
<article-title>The impact of signal bandwidth on auditory localization: implications for the design of three-dimensional audio displays.</article-title>
<source>Human Factors</source>
<volume>39</volume>
<fpage>287</fpage>
<lpage>295</lpage>
</citation>
</ref>
<ref id="pone.0005664-Dixon1">
<label>33</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Dixon</surname>
<given-names>W</given-names>
</name>
<name>
<surname>Mood</surname>
<given-names>A</given-names>
</name>
</person-group>
<year>1948</year>
<article-title>A method for obtaining and analyzing sensitivity data.</article-title>
<source>Journal of the American Statistical Association</source>
<volume>43</volume>
<fpage>109</fpage>
<lpage>126</lpage>
</citation>
</ref>
<ref id="pone.0005664-Watson2">
<label>34</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Watson</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Fitzhugh</surname>
<given-names>A</given-names>
</name>
</person-group>
<year>1990</year>
<article-title>The method of constant stimuli is inefficient.</article-title>
<source>Perception &amp; Psychophysics</source>
<volume>47</volume>
<fpage>87</fpage>
<lpage>91</lpage>
<pub-id pub-id-type="pmid">2300429</pub-id>
</citation>
</ref>
<ref id="pone.0005664-Burr1">
<label>35</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Burr</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Silva</surname>
<given-names>O</given-names>
</name>
<name>
<surname>Cicchini</surname>
<given-names>GM</given-names>
</name>
<name>
<surname>Banks</surname>
<given-names>MS</given-names>
</name>
<name>
<surname>Morrone</surname>
<given-names>MC</given-names>
</name>
</person-group>
<year>2009</year>
<article-title>Temporal mechanisms of multimodal binding.</article-title>
<source>Proceedings of the Royal Society of London Series B, Biological Sciences</source>
<volume>276</volume>
<fpage>1761</fpage>
<lpage>1769</lpage>
</citation>
</ref>
<ref id="pone.0005664-deLangeDzn1">
<label>36</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>de Lange Dzn</surname>
<given-names>H</given-names>
</name>
</person-group>
<year>1954</year>
<article-title>Relationship between critical flicker-frequency and a set of low-frequency characteristics of the eye.</article-title>
<source>Journal of the Optical Society of America</source>
<volume>44</volume>
<fpage>380</fpage>
<lpage>388</lpage>
<pub-id pub-id-type="pmid">13163770</pub-id>
</citation>
</ref>
<ref id="pone.0005664-Robson1">
<label>37</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Robson</surname>
<given-names>J</given-names>
</name>
</person-group>
<year>1966</year>
<article-title>Spatial and temporal contrast-sensitivity functions of the visual system.</article-title>
<source>Journal of the Optical Society of America</source>
<volume>56</volume>
<fpage>1141</fpage>
<lpage>1142</lpage>
</citation>
</ref>
<ref id="pone.0005664-Calvert1">
<label>38</label>
<citation citation-type="book">
<person-group person-group-type="editor">
<name>
<surname>Calvert</surname>
<given-names>GA</given-names>
</name>
<name>
<surname>Spence</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Stein</surname>
<given-names>BE</given-names>
</name>
</person-group>
<year>2004</year>
<source>The handbook of multisensory processes</source>
<publisher-loc>Cambridge, MA</publisher-loc>
<publisher-name>MIT Press</publisher-name>
</citation>
</ref>
<ref id="pone.0005664-Rousseeuw1">
<label>39</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Rousseeuw</surname>
<given-names>PJ</given-names>
</name>
<name>
<surname>Ruts</surname>
<given-names>I</given-names>
</name>
<name>
<surname>Tukey</surname>
<given-names>JW</given-names>
</name>
</person-group>
<year>1999</year>
<article-title>The bagplot: A bivariate boxplot.</article-title>
<source>The American Statistician</source>
<volume>53</volume>
<fpage>382</fpage>
<lpage>387</lpage>
</citation>
</ref>
</ref-list>
<fn-group>
<fn fn-type="conflict">
<p>
<bold>Competing Interests: </bold>
The authors have declared that no competing interests exist.</p>
</fn>
<fn fn-type="financial-disclosure">
<p>
<bold>Funding: </bold>
The authors have no support or funding to report.</p>
</fn>
</fn-group>
</back>
</pmc>
<affiliations>
<list></list>
<tree>
<noCountry>
<name sortKey="Parise, Cesare Valerio" sort="Parise, Cesare Valerio" uniqKey="Parise C" first="Cesare Valerio" last="Parise">Cesare Valerio Parise</name>
<name sortKey="Spence, Charles" sort="Spence, Charles" uniqKey="Spence C" first="Charles" last="Spence">Charles Spence</name>
</noCountry>
</tree>
</affiliations>
</record>

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/HapticV1/Data/Ncbi/Merge
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 001155 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/Ncbi/Merge/biblio.hfd -nk 001155 | SxmlIndent | more
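Both invocations above assume the `EXPLOR_STEP` and `EXPLOR_AREA` environment variables are already set. Judging from the path shown in the first command, `EXPLOR_AREA` is the `HapticV1` area root and `EXPLOR_STEP` is its `Data/Ncbi/Merge` subdirectory; a minimal setup sketch (the `/srv/wicri` default for `WICRI_ROOT` is hypothetical, adjust to your installation):

```shell
# Root of the local Wicri installation (hypothetical default; override via env)
WICRI_ROOT="${WICRI_ROOT:-/srv/wicri}"

# Area root for this exploration corpus, derived from the path in the
# first HfdSelect command above
EXPLOR_AREA="$WICRI_ROOT/Ticri/CIDE/explor/HapticV1"

# Step directory holding biblio.hfd for the Ncbi/Merge step
EXPLOR_STEP="$EXPLOR_AREA/Data/Ncbi/Merge"

echo "$EXPLOR_STEP"
```

With these variables exported, the two `HfdSelect` forms shown above resolve to the same `biblio.hfd` file.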

To add a link to this page in the Wicri network

{{Explor lien
   |wiki=    Ticri/CIDE
   |area=    HapticV1
   |flux=    Ncbi
   |étape=   Merge
   |type=    RBID
   |clé=     PMC:2680950
   |texte=   ‘When Birds of a Feather Flock Together’: Synesthetic Correspondences Modulate Audiovisual Integration in Non-Synesthetes
}}

To generate wiki pages

HfdIndexSelect -h $EXPLOR_AREA/Data/Ncbi/Merge/RBID.i   -Sk "pubmed:19471644" \
       | HfdSelect -Kh $EXPLOR_AREA/Data/Ncbi/Merge/biblio.hfd   \
       | NlmPubMed2Wicri -a HapticV1 

Wicri

This area was generated with Dilib version V0.6.23.
Data generation: Mon Jun 13 01:09:46 2016. Site generation: Wed Mar 6 09:54:07 2024