Exploration server on haptic devices

Warning: this site is under development!
Warning: this site was generated automatically from raw corpora.
The information has therefore not been validated.

Crossmodal Integration Improves Sensory Detection Thresholds in the Ferret

Internal identifier: 000291 (Pmc/Curation); previous: 000290; next: 000292


Authors: Karl J. Hollensteiner [Germany]; Florian Pieper [Germany]; Gerhard Engler [Germany]; Peter König [Germany]; Andreas K. Engel [Germany]

Source:

RBID: PMC:4430165

Abstract

During the last two decades ferrets (Mustela putorius) have been established as a highly efficient animal model in different fields of neuroscience. Here we asked whether ferrets integrate sensory information according to the same principles established for other species. Since only a few methods and protocols are available for behaving ferrets, we developed a head-free, body-restrained approach allowing a standardized stimulation position and the utilization of the ferret’s natural response behavior. We established a behavioral paradigm to test audiovisual integration in the ferret. Animals had to detect a brief auditory and/or visual stimulus presented either left or right of their midline. We first determined detection thresholds for auditory amplitude and visual contrast. In a second step, we combined both modalities and compared psychometric fits and reaction times between all conditions. We employed Maximum Likelihood Estimation (MLE) to model bimodal psychometric curves and to investigate whether ferrets integrate modalities in an optimal manner. Furthermore, to test for a redundant signal effect we pooled the reaction times of all animals to calculate a race model. We observed that bimodal detection thresholds were reduced and reaction times were faster in the bimodal compared to the unimodal conditions. The race model and MLE modeling showed that ferrets integrate modalities in a statistically optimal fashion. Taken together, the data indicate that principles of multisensory integration previously demonstrated in other species also apply to crossmodal processing in the ferret.


URL:
DOI: 10.1371/journal.pone.0124952
PubMed: 25970327
PubMed Central: 4430165


The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">Crossmodal Integration Improves Sensory Detection Thresholds in the Ferret</title>
<author>
<name sortKey="Hollensteiner, Karl J" sort="Hollensteiner, Karl J" uniqKey="Hollensteiner K" first="Karl J." last="Hollensteiner">Karl J. Hollensteiner</name>
<affiliation wicri:level="1">
<nlm:aff id="aff001">
<addr-line>Dept. of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, 20246 Hamburg, Germany</addr-line>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea>Dept. of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, 20246 Hamburg</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Pieper, Florian" sort="Pieper, Florian" uniqKey="Pieper F" first="Florian" last="Pieper">Florian Pieper</name>
<affiliation wicri:level="1">
<nlm:aff id="aff001">
<addr-line>Dept. of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, 20246 Hamburg, Germany</addr-line>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea>Dept. of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, 20246 Hamburg</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Engler, Gerhard" sort="Engler, Gerhard" uniqKey="Engler G" first="Gerhard" last="Engler">Gerhard Engler</name>
<affiliation wicri:level="1">
<nlm:aff id="aff001">
<addr-line>Dept. of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, 20246 Hamburg, Germany</addr-line>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea>Dept. of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, 20246 Hamburg</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Konig, Peter" sort="Konig, Peter" uniqKey="Konig P" first="Peter" last="König">Peter König</name>
<affiliation wicri:level="1">
<nlm:aff id="aff001">
<addr-line>Dept. of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, 20246 Hamburg, Germany</addr-line>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea>Dept. of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, 20246 Hamburg</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="aff002">
<addr-line>Institute of Cognitive Science, University of Osnabrück, 49069 Osnabrück, Germany</addr-line>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea>Institute of Cognitive Science, University of Osnabrück, 49069 Osnabrück</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Engel, Andreas K" sort="Engel, Andreas K" uniqKey="Engel A" first="Andreas K." last="Engel">Andreas K. Engel</name>
<affiliation wicri:level="1">
<nlm:aff id="aff001">
<addr-line>Dept. of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, 20246 Hamburg, Germany</addr-line>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea>Dept. of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, 20246 Hamburg</wicri:regionArea>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PMC</idno>
<idno type="pmid">25970327</idno>
<idno type="pmc">4430165</idno>
<idno type="url">http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4430165</idno>
<idno type="RBID">PMC:4430165</idno>
<idno type="doi">10.1371/journal.pone.0124952</idno>
<date when="2015">2015</date>
<idno type="wicri:Area/Pmc/Corpus">000291</idno>
<idno type="wicri:Area/Pmc/Curation">000291</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a" type="main">Crossmodal Integration Improves Sensory Detection Thresholds in the Ferret</title>
<author>
<name sortKey="Hollensteiner, Karl J" sort="Hollensteiner, Karl J" uniqKey="Hollensteiner K" first="Karl J." last="Hollensteiner">Karl J. Hollensteiner</name>
<affiliation wicri:level="1">
<nlm:aff id="aff001">
<addr-line>Dept. of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, 20246 Hamburg, Germany</addr-line>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea>Dept. of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, 20246 Hamburg</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Pieper, Florian" sort="Pieper, Florian" uniqKey="Pieper F" first="Florian" last="Pieper">Florian Pieper</name>
<affiliation wicri:level="1">
<nlm:aff id="aff001">
<addr-line>Dept. of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, 20246 Hamburg, Germany</addr-line>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea>Dept. of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, 20246 Hamburg</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Engler, Gerhard" sort="Engler, Gerhard" uniqKey="Engler G" first="Gerhard" last="Engler">Gerhard Engler</name>
<affiliation wicri:level="1">
<nlm:aff id="aff001">
<addr-line>Dept. of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, 20246 Hamburg, Germany</addr-line>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea>Dept. of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, 20246 Hamburg</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Konig, Peter" sort="Konig, Peter" uniqKey="Konig P" first="Peter" last="König">Peter König</name>
<affiliation wicri:level="1">
<nlm:aff id="aff001">
<addr-line>Dept. of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, 20246 Hamburg, Germany</addr-line>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea>Dept. of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, 20246 Hamburg</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="aff002">
<addr-line>Institute of Cognitive Science, University of Osnabrück, 49069 Osnabrück, Germany</addr-line>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea>Institute of Cognitive Science, University of Osnabrück, 49069 Osnabrück</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Engel, Andreas K" sort="Engel, Andreas K" uniqKey="Engel A" first="Andreas K." last="Engel">Andreas K. Engel</name>
<affiliation wicri:level="1">
<nlm:aff id="aff001">
<addr-line>Dept. of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, 20246 Hamburg, Germany</addr-line>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea>Dept. of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, 20246 Hamburg</wicri:regionArea>
</affiliation>
</author>
</analytic>
<series>
<title level="j">PLoS ONE</title>
<idno type="eISSN">1932-6203</idno>
<imprint>
<date when="2015">2015</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass></textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">
<p>During the last two decades ferrets (
<italic>Mustela putorius</italic>
) have been established as a highly efficient animal model in different fields of neuroscience. Here we asked whether ferrets integrate sensory information according to the same principles established for other species. Since only a few methods and protocols are available for behaving ferrets, we developed a head-free, body-restrained approach allowing a standardized stimulation position and the utilization of the ferret’s natural response behavior. We established a behavioral paradigm to test audiovisual integration in the ferret. Animals had to detect a brief auditory and/or visual stimulus presented either left or right of their midline. We first determined detection thresholds for auditory amplitude and visual contrast. In a second step, we combined both modalities and compared psychometric fits and reaction times between all conditions. We employed Maximum Likelihood Estimation (MLE) to model bimodal psychometric curves and to investigate whether ferrets integrate modalities in an optimal manner. Furthermore, to test for a redundant signal effect we pooled the reaction times of all animals to calculate a race model. We observed that bimodal detection thresholds were reduced and reaction times were faster in the bimodal compared to the unimodal conditions. The race model and MLE modeling showed that ferrets integrate modalities in a statistically optimal fashion. Taken together, the data indicate that principles of multisensory integration previously demonstrated in other species also apply to crossmodal processing in the ferret.</p>
</div>
</front>
<back>
<div1 type="bibliography">
<listBibl>
<biblStruct>
<analytic>
<author>
<name sortKey="Alves Pinto, A" uniqKey="Alves Pinto A">A Alves-Pinto</name>
</author>
<author>
<name sortKey="Sollini, J" uniqKey="Sollini J">J Sollini</name>
</author>
<author>
<name sortKey="Sumner, Cj" uniqKey="Sumner C">CJ Sumner</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bizley, Jk" uniqKey="Bizley J">JK Bizley</name>
</author>
<author>
<name sortKey="King, Aj" uniqKey="King A">AJ King</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bizley, Jk" uniqKey="Bizley J">JK Bizley</name>
</author>
<author>
<name sortKey="Nodal, Fr" uniqKey="Nodal F">FR Nodal</name>
</author>
<author>
<name sortKey="Nelken, I" uniqKey="Nelken I">I Nelken</name>
</author>
<author>
<name sortKey="King, Aj" uniqKey="King A">AJ King</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bizley, Jk" uniqKey="Bizley J">JK Bizley</name>
</author>
<author>
<name sortKey="Nodal, Fr" uniqKey="Nodal F">FR Nodal</name>
</author>
<author>
<name sortKey="Bajo, Vm" uniqKey="Bajo V">VM Bajo</name>
</author>
<author>
<name sortKey="Nelken, I" uniqKey="Nelken I">I Nelken</name>
</author>
<author>
<name sortKey="King, Aj" uniqKey="King A">AJ King</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Chiu, C" uniqKey="Chiu C">C Chiu</name>
</author>
<author>
<name sortKey="Weliky, M" uniqKey="Weliky M">M Weliky</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Sukhinin, Di" uniqKey="Sukhinin D">DI Sukhinin</name>
</author>
<author>
<name sortKey="Hilgetag, Cc" uniqKey="Hilgetag C">CC Hilgetag</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Farley, Bj" uniqKey="Farley B">BJ Farley</name>
</author>
<author>
<name sortKey="Yu, H" uniqKey="Yu H">H Yu</name>
</author>
<author>
<name sortKey="Jin, Dz" uniqKey="Jin D">DZ Jin</name>
</author>
<author>
<name sortKey="Sur, M" uniqKey="Sur M">M Sur</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Foxworthy, Wa" uniqKey="Foxworthy W">WA Foxworthy</name>
</author>
<author>
<name sortKey="Allman, Bl" uniqKey="Allman B">BL Allman</name>
</author>
<author>
<name sortKey="Keniston, Lp" uniqKey="Keniston L">LP Keniston</name>
</author>
<author>
<name sortKey="Meredith, Ma" uniqKey="Meredith M">MA Meredith</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Frohlich, F" uniqKey="Frohlich F">F Fröhlich</name>
</author>
<author>
<name sortKey="Mccormick, Da" uniqKey="Mccormick D">DA McCormick</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Homman Ludiye, J" uniqKey="Homman Ludiye J">J Homman-Ludiye</name>
</author>
<author>
<name sortKey="Manger, Pr" uniqKey="Manger P">PR Manger</name>
</author>
<author>
<name sortKey="Bourne, Ja" uniqKey="Bourne J">JA Bourne</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Innocenti, Gm" uniqKey="Innocenti G">GM Innocenti</name>
</author>
<author>
<name sortKey="Manger, Pr" uniqKey="Manger P">PR Manger</name>
</author>
<author>
<name sortKey="Masiello, I" uniqKey="Masiello I">I Masiello</name>
</author>
<author>
<name sortKey="Colin, I" uniqKey="Colin I">I Colin</name>
</author>
<author>
<name sortKey="Tettoni, L" uniqKey="Tettoni L">L Tettoni</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Jarosiewicz, B" uniqKey="Jarosiewicz B">B Jarosiewicz</name>
</author>
<author>
<name sortKey="Schummers, J" uniqKey="Schummers J">J Schummers</name>
</author>
<author>
<name sortKey="Malik, Wq" uniqKey="Malik W">WQ Malik</name>
</author>
<author>
<name sortKey="Brown, En" uniqKey="Brown E">EN Brown</name>
</author>
<author>
<name sortKey="Sur, M" uniqKey="Sur M">M Sur</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Keating, P" uniqKey="Keating P">P Keating</name>
</author>
<author>
<name sortKey="Nodal, Fr" uniqKey="Nodal F">FR Nodal</name>
</author>
<author>
<name sortKey="King, Aj" uniqKey="King A">AJ King</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Keniston, Lp" uniqKey="Keniston L">LP Keniston</name>
</author>
<author>
<name sortKey="Allman, Bl" uniqKey="Allman B">BL Allman</name>
</author>
<author>
<name sortKey="Meredith, Ma" uniqKey="Meredith M">MA Meredith</name>
</author>
<author>
<name sortKey="Clemo, Hr" uniqKey="Clemo H">HR Clemo</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="King, Aj" uniqKey="King A">AJ King</name>
</author>
<author>
<name sortKey="Bajo, Vm" uniqKey="Bajo V">VM Bajo</name>
</author>
<author>
<name sortKey="Bizley, Jk" uniqKey="Bizley J">JK Bizley</name>
</author>
<author>
<name sortKey="Campbell, Raa" uniqKey="Campbell R">RAA Campbell</name>
</author>
<author>
<name sortKey="Nodal, Fr" uniqKey="Nodal F">FR Nodal</name>
</author>
<author>
<name sortKey="Schulz, Al" uniqKey="Schulz A">AL Schulz</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Li, Y" uniqKey="Li Y">Y Li</name>
</author>
<author>
<name sortKey="Fitzpatrick, D" uniqKey="Fitzpatrick D">D Fitzpatrick</name>
</author>
<author>
<name sortKey="White, Le" uniqKey="White L">LE White</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Manger, Pr" uniqKey="Manger P">PR Manger</name>
</author>
<author>
<name sortKey="Masiello, I" uniqKey="Masiello I">I Masiello</name>
</author>
<author>
<name sortKey="Innocenti, Gm" uniqKey="Innocenti G">GM Innocenti</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Manger, Pr" uniqKey="Manger P">PR Manger</name>
</author>
<author>
<name sortKey="Kiper, D" uniqKey="Kiper D">D Kiper</name>
</author>
<author>
<name sortKey="Masiello, I" uniqKey="Masiello I">I Masiello</name>
</author>
<author>
<name sortKey="Murillo, L" uniqKey="Murillo L">L Murillo</name>
</author>
<author>
<name sortKey="Tettoni, L" uniqKey="Tettoni L">L Tettoni</name>
</author>
<author>
<name sortKey="Hunyadi, Z" uniqKey="Hunyadi Z">Z Hunyadi</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Manger, Pr" uniqKey="Manger P">PR Manger</name>
</author>
<author>
<name sortKey="Engler, G" uniqKey="Engler G">G Engler</name>
</author>
<author>
<name sortKey="Moll, Cke" uniqKey="Moll C">CKE Moll</name>
</author>
<author>
<name sortKey="Engel, Ak" uniqKey="Engel A">AK Engel</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Manger, Pr" uniqKey="Manger P">PR Manger</name>
</author>
<author>
<name sortKey="Engler, G" uniqKey="Engler G">G Engler</name>
</author>
<author>
<name sortKey="Moll, Cke" uniqKey="Moll C">CKE Moll</name>
</author>
<author>
<name sortKey="Engel, Ak" uniqKey="Engel A">AK Engel</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Nelken, I" uniqKey="Nelken I">I Nelken</name>
</author>
<author>
<name sortKey="Versnel, H" uniqKey="Versnel H">H Versnel</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Phillips, Dp" uniqKey="Phillips D">DP Phillips</name>
</author>
<author>
<name sortKey="Judge, Pw" uniqKey="Judge P">PW Judge</name>
</author>
<author>
<name sortKey="Kelly, Jb" uniqKey="Kelly J">JB Kelly</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Stitt, I" uniqKey="Stitt I">I Stitt</name>
</author>
<author>
<name sortKey="Galindo Leon, E" uniqKey="Galindo Leon E">E Galindo-Leon</name>
</author>
<author>
<name sortKey="Pieper, F" uniqKey="Pieper F">F Pieper</name>
</author>
<author>
<name sortKey="Engler, G" uniqKey="Engler G">G Engler</name>
</author>
<author>
<name sortKey="Engel, Ak" uniqKey="Engel A">AK Engel</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Yu, H" uniqKey="Yu H">H Yu</name>
</author>
<author>
<name sortKey="Majewska, Ak" uniqKey="Majewska A">AK Majewska</name>
</author>
<author>
<name sortKey="Sur, M" uniqKey="Sur M">M Sur</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Fox, Jg" uniqKey="Fox J">JG Fox</name>
</author>
<author>
<name sortKey="Marini, Rp" uniqKey="Marini R">RP Marini</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Fritz, Jb" uniqKey="Fritz J">JB Fritz</name>
</author>
<author>
<name sortKey="David, Sv" uniqKey="David S">SV David</name>
</author>
<author>
<name sortKey="Radtke Schuller, S" uniqKey="Radtke Schuller S">S Radtke-Schuller</name>
</author>
<author>
<name sortKey="Yin, P" uniqKey="Yin P">P Yin</name>
</author>
<author>
<name sortKey="Shamma, Sa" uniqKey="Shamma S">SA Shamma</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hartley, Deh" uniqKey="Hartley D">DEH Hartley</name>
</author>
<author>
<name sortKey="Vongpaisal, T" uniqKey="Vongpaisal T">T Vongpaisal</name>
</author>
<author>
<name sortKey="Xu, J" uniqKey="Xu J">J Xu</name>
</author>
<author>
<name sortKey="Shepherd, Rk" uniqKey="Shepherd R">RK Shepherd</name>
</author>
<author>
<name sortKey="King, Aj" uniqKey="King A">AJ King</name>
</author>
<author>
<name sortKey="Isaiah, A" uniqKey="Isaiah A">A Isaiah</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Leach, Nd" uniqKey="Leach N">ND Leach</name>
</author>
<author>
<name sortKey="Nodal, Fr" uniqKey="Nodal F">FR Nodal</name>
</author>
<author>
<name sortKey="Cordery, Pm" uniqKey="Cordery P">PM Cordery</name>
</author>
<author>
<name sortKey="King, Aj" uniqKey="King A">AJ King</name>
</author>
<author>
<name sortKey="Bajo, Vm" uniqKey="Bajo V">VM Bajo</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Nodal, Fr" uniqKey="Nodal F">FR Nodal</name>
</author>
<author>
<name sortKey="Bajo, Vm" uniqKey="Bajo V">VM Bajo</name>
</author>
<author>
<name sortKey="Parsons, Ch" uniqKey="Parsons C">CH Parsons</name>
</author>
<author>
<name sortKey="Schnupp, Jw" uniqKey="Schnupp J">JW Schnupp</name>
</author>
<author>
<name sortKey="King, Aj" uniqKey="King A">AJ King</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gleiss, S" uniqKey="Gleiss S">S Gleiss</name>
</author>
<author>
<name sortKey="Kayser, C" uniqKey="Kayser C">C Kayser</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gleiss, S" uniqKey="Gleiss S">S Gleiss</name>
</author>
<author>
<name sortKey="Kayser, C" uniqKey="Kayser C">C Kayser</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Spence, C" uniqKey="Spence C">C Spence</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Angelaki, De" uniqKey="Angelaki D">DE Angelaki</name>
</author>
<author>
<name sortKey="Gu, Y" uniqKey="Gu Y">Y Gu</name>
</author>
<author>
<name sortKey="Deangelis, Gc" uniqKey="Deangelis G">GC DeAngelis</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Stein, Be" uniqKey="Stein B">BE Stein</name>
</author>
<author>
<name sortKey="London, N" uniqKey="London N">N London</name>
</author>
<author>
<name sortKey="Wilkinson, Lk" uniqKey="Wilkinson L">LK Wilkinson</name>
</author>
<author>
<name sortKey="Price, Dd" uniqKey="Price D">DD Price</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Stein, Be" uniqKey="Stein B">BE Stein</name>
</author>
<author>
<name sortKey="Meredith, Ma" uniqKey="Meredith M">MA Meredith</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ernst, Mo" uniqKey="Ernst M">MO Ernst</name>
</author>
<author>
<name sortKey="Banks, Ms" uniqKey="Banks M">MS Banks</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Battaglia, Pw" uniqKey="Battaglia P">PW Battaglia</name>
</author>
<author>
<name sortKey="Jacobs, Ra" uniqKey="Jacobs R">RA Jacobs</name>
</author>
<author>
<name sortKey="Aslin, Rn" uniqKey="Aslin R">RN Aslin</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Alais, D" uniqKey="Alais D">D Alais</name>
</author>
<author>
<name sortKey="Burr, D" uniqKey="Burr D">D Burr</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Knill, Dc" uniqKey="Knill D">DC Knill</name>
</author>
<author>
<name sortKey="Saunders, Ja" uniqKey="Saunders J">JA Saunders</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kalman, Re" uniqKey="Kalman R">RE Kalman</name>
</author>
<author>
<name sortKey="Bucy, Rs" uniqKey="Bucy R">RS Bucy</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Beers, Rj Van" uniqKey="Beers R">RJ van Beers</name>
</author>
<author>
<name sortKey="Sittig, Ac" uniqKey="Sittig A">AC Sittig</name>
</author>
<author>
<name sortKey="Gon, Jjd Van Der" uniqKey="Gon J">JJD van der Gon</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Van Beers, Rj" uniqKey="Van Beers R">RJ Van Beers</name>
</author>
<author>
<name sortKey="Wolpert, Dm" uniqKey="Wolpert D">DM Wolpert</name>
</author>
<author>
<name sortKey="Haggard, P" uniqKey="Haggard P">P Haggard</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Fetsch, Cr" uniqKey="Fetsch C">CR Fetsch</name>
</author>
<author>
<name sortKey="Deangelis, Gc" uniqKey="Deangelis G">GC DeAngelis</name>
</author>
<author>
<name sortKey="Angelaki, De" uniqKey="Angelaki D">DE Angelaki</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gepshtein, S" uniqKey="Gepshtein S">S Gepshtein</name>
</author>
<author>
<name sortKey="Banks, Ms" uniqKey="Banks M">MS Banks</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Knill, Dc" uniqKey="Knill D">DC Knill</name>
</author>
<author>
<name sortKey="Pouget, A" uniqKey="Pouget A">A Pouget</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Morasso, Pg" uniqKey="Morasso P">PG Morasso</name>
</author>
<author>
<name sortKey="Sanguineti, V" uniqKey="Sanguineti V">V Sanguineti</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gu, Y" uniqKey="Gu Y">Y Gu</name>
</author>
<author>
<name sortKey="Angelaki, De" uniqKey="Angelaki D">DE Angelaki</name>
</author>
<author>
<name sortKey="Deangelis, Gc" uniqKey="Deangelis G">GC Deangelis</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hershenson, M" uniqKey="Hershenson M">M Hershenson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Diederich, A" uniqKey="Diederich A">A Diederich</name>
</author>
<author>
<name sortKey="Colonius, H" uniqKey="Colonius H">H Colonius</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Alais, D" uniqKey="Alais D">D Alais</name>
</author>
<author>
<name sortKey="Newell, Fn" uniqKey="Newell F">FN Newell</name>
</author>
<author>
<name sortKey="Mamassian, P" uniqKey="Mamassian P">P Mamassian</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Stein, Be" uniqKey="Stein B">BE Stein</name>
</author>
<author>
<name sortKey="Meredith, Ma" uniqKey="Meredith M">MA Meredith</name>
</author>
<author>
<name sortKey="Huneycutt, Ws" uniqKey="Huneycutt W">WS Huneycutt</name>
</author>
<author>
<name sortKey="Mcdade, L" uniqKey="Mcdade L">L McDade</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Miller, J" uniqKey="Miller J">J Miller</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Miller, J" uniqKey="Miller J">J Miller</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ernst, Mo" uniqKey="Ernst M">MO Ernst</name>
</author>
<author>
<name sortKey="Bulthoff, Hh" uniqKey="Bulthoff H">HH Bülthoff</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Walker, Kmm" uniqKey="Walker K">KMM Walker</name>
</author>
<author>
<name sortKey="Schnupp, Jwh" uniqKey="Schnupp J">JWH Schnupp</name>
</author>
<author>
<name sortKey="Hart Schnupp, Smb" uniqKey="Hart Schnupp S">SMB Hart-Schnupp</name>
</author>
<author>
<name sortKey="King, Aj" uniqKey="King A">AJ King</name>
</author>
<author>
<name sortKey="Bizley, Jk" uniqKey="Bizley J">JK Bizley</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ulrich, R" uniqKey="Ulrich R">R Ulrich</name>
</author>
<author>
<name sortKey="Miller, J" uniqKey="Miller J">J Miller</name>
</author>
<author>
<name sortKey="Schroter, H" uniqKey="Schroter H">H Schröter</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Raab, Dh" uniqKey="Raab D">DH Raab</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kilkenny, C" uniqKey="Kilkenny C">C Kilkenny</name>
</author>
<author>
<name sortKey="Browne, Wj" uniqKey="Browne W">WJ Browne</name>
</author>
<author>
<name sortKey="Cuthill, Ic" uniqKey="Cuthill I">IC Cuthill</name>
</author>
<author>
<name sortKey="Emerson, M" uniqKey="Emerson M">M Emerson</name>
</author>
<author>
<name sortKey="Altman, Dg" uniqKey="Altman D">DG Altman</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Brainard, Dh" uniqKey="Brainard D">DH Brainard</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kaernbach, C" uniqKey="Kaernbach C">C Kaernbach</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Miller, J" uniqKey="Miller J">J Miller</name>
</author>
<author>
<name sortKey="Ulrich, R" uniqKey="Ulrich R">R Ulrich</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Frassinetti, F" uniqKey="Frassinetti F">F Frassinetti</name>
</author>
<author>
<name sortKey="Bolognini, N" uniqKey="Bolognini N">N Bolognini</name>
</author>
<author>
<name sortKey="Ladavas, E" uniqKey="Ladavas E">E Làdavas</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Rowland, B" uniqKey="Rowland B">B Rowland</name>
</author>
<author>
<name sortKey="Stanford, T" uniqKey="Stanford T">T Stanford</name>
</author>
<author>
<name sortKey="Stein, B" uniqKey="Stein B">B Stein</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lippert, M" uniqKey="Lippert M">M Lippert</name>
</author>
<author>
<name sortKey="Logothetis, Nk" uniqKey="Logothetis N">NK Logothetis</name>
</author>
<author>
<name sortKey="Kayser, C" uniqKey="Kayser C">C Kayser</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Teder S Lej Rvi, Wa" uniqKey="Teder S Lej Rvi W">WA Teder-Sälejärvi</name>
</author>
<author>
<name sortKey="Di Russo, F" uniqKey="Di Russo F">F Di Russo</name>
</author>
<author>
<name sortKey="Mcdonald, Jj" uniqKey="Mcdonald J">JJ McDonald</name>
</author>
<author>
<name sortKey="Hillyard, Sa" uniqKey="Hillyard S">SA Hillyard</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mcdonald, Jj" uniqKey="Mcdonald J">JJ McDonald</name>
</author>
<author>
<name sortKey="Teder S Lej Rvi, Wa" uniqKey="Teder S Lej Rvi W">WA Teder-Sälejärvi</name>
</author>
<author>
<name sortKey="Hillyard, Sa" uniqKey="Hillyard S">SA Hillyard</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Oruc, I" uniqKey="Oruc I">I Oruç</name>
</author>
<author>
<name sortKey="Maloney, Lt" uniqKey="Maloney L">LT Maloney</name>
</author>
<author>
<name sortKey="Landy, Ms" uniqKey="Landy M">MS Landy</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Welch, Rb" uniqKey="Welch R">RB Welch</name>
</author>
<author>
<name sortKey="Warren, Dh" uniqKey="Warren D">DH Warren</name>
</author>
<author>
<name sortKey="Boff, K R" uniqKey="Boff K">K.R. Boff</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Helmchen, F" uniqKey="Helmchen F">F Helmchen</name>
</author>
<author>
<name sortKey="Denk, W" uniqKey="Denk W">W Denk</name>
</author>
<author>
<name sortKey="Kerr, Jnd" uniqKey="Kerr J">JND Kerr</name>
</author>
</analytic>
</biblStruct>
</listBibl>
</div1>
</back>
</TEI>
<pmc article-type="research-article">
<pmc-dir>properties open_access</pmc-dir>
<front>
<journal-meta>
<journal-id journal-id-type="nlm-ta">PLoS One</journal-id>
<journal-id journal-id-type="iso-abbrev">PLoS ONE</journal-id>
<journal-id journal-id-type="publisher-id">plos</journal-id>
<journal-id journal-id-type="pmc">plosone</journal-id>
<journal-title-group>
<journal-title>PLoS ONE</journal-title>
</journal-title-group>
<issn pub-type="epub">1932-6203</issn>
<publisher>
<publisher-name>Public Library of Science</publisher-name>
<publisher-loc>San Francisco, CA USA</publisher-loc>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="pmid">25970327</article-id>
<article-id pub-id-type="pmc">4430165</article-id>
<article-id pub-id-type="doi">10.1371/journal.pone.0124952</article-id>
<article-id pub-id-type="publisher-id">PONE-D-15-03709</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Research Article</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>Crossmodal Integration Improves Sensory Detection Thresholds in the Ferret</article-title>
<alt-title alt-title-type="running-head">Crossmodal Integration in the Ferret</alt-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" equal-contrib="yes">
<name>
<surname>Hollensteiner</surname>
<given-names>Karl J.</given-names>
</name>
<xref ref-type="aff" rid="aff001">
<sup>1</sup>
</xref>
<xref rid="cor001" ref-type="corresp">*</xref>
</contrib>
<contrib contrib-type="author" equal-contrib="yes">
<name>
<surname>Pieper</surname>
<given-names>Florian</given-names>
</name>
<xref ref-type="aff" rid="aff001">
<sup>1</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Engler</surname>
<given-names>Gerhard</given-names>
</name>
<xref ref-type="aff" rid="aff001">
<sup>1</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>König</surname>
<given-names>Peter</given-names>
</name>
<xref ref-type="aff" rid="aff001">
<sup>1</sup>
</xref>
<xref ref-type="aff" rid="aff002">
<sup>2</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Engel</surname>
<given-names>Andreas K.</given-names>
</name>
<xref ref-type="aff" rid="aff001">
<sup>1</sup>
</xref>
</contrib>
</contrib-group>
<aff id="aff001">
<label>1</label>
<addr-line>Dept. of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, 20246 Hamburg, Germany</addr-line>
</aff>
<aff id="aff002">
<label>2</label>
<addr-line>Institute of Cognitive Science, University of Osnabrück, 49069 Osnabrück, Germany</addr-line>
</aff>
<contrib-group>
<contrib contrib-type="editor">
<name>
<surname>Ptito</surname>
<given-names>Maurice</given-names>
</name>
<role>Academic Editor</role>
<xref ref-type="aff" rid="edit1"></xref>
</contrib>
</contrib-group>
<aff id="edit1">
<addr-line>University of Montreal, CANADA</addr-line>
</aff>
<author-notes>
<fn fn-type="conflict" id="coi001">
<p>
<bold>Competing Interests: </bold>
The authors have declared that no competing interests exist.</p>
</fn>
<fn fn-type="con" id="contrib001">
<p>Conceived and designed the experiments: KJH FP GE AKE. Performed the experiments: KJH. Analyzed the data: KJH FP. Contributed reagents/materials/analysis tools: KJH FP GE. Wrote the paper: KJH FP GE PK AKE.</p>
</fn>
<corresp id="cor001">* E-mail:
<email>k.hollensteiner@uke.de</email>
</corresp>
</author-notes>
<pub-date pub-type="epub">
<day>13</day>
<month>5</month>
<year>2015</year>
</pub-date>
<pub-date pub-type="collection">
<year>2015</year>
</pub-date>
<volume>10</volume>
<issue>5</issue>
<elocation-id>e0124952</elocation-id>
<history>
<date date-type="received">
<day>26</day>
<month>1</month>
<year>2015</year>
</date>
<date date-type="accepted">
<day>20</day>
<month>3</month>
<year>2015</year>
</date>
</history>
<permissions>
<copyright-year>2015</copyright-year>
<copyright-holder>Hollensteiner et al</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/">
<license-p>This is an open access article distributed under the terms of the
<ext-link ext-link-type="uri" xlink:href="http://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution License</ext-link>
, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited</license-p>
</license>
</permissions>
<self-uri content-type="pdf" xlink:type="simple" xlink:href="pone.0124952.pdf"></self-uri>
<abstract>
<p>During the last two decades ferrets (
<italic>Mustela putorius</italic>
) have been established as a highly efficient animal model in different fields of neuroscience. Here we asked whether ferrets integrate sensory information according to the same principles established for other species. Since only a few methods and protocols are available for behaving ferrets, we developed a head-free, body-restrained approach allowing a standardized stimulation position and the utilization of the ferret’s natural response behavior. We established a behavioral paradigm to test audiovisual integration in the ferret. Animals had to detect a brief auditory and/or visual stimulus presented either left or right of their midline. We first determined detection thresholds for auditory amplitude and visual contrast. In a second step, we combined both modalities and compared psychometric fits and reaction times between all conditions. We employed Maximum Likelihood Estimation (MLE) to model bimodal psychometric curves and to investigate whether ferrets integrate modalities in an optimal manner. Furthermore, to test for a redundant signal effect we pooled the reaction times of all animals to calculate a race model. We observed that bimodal detection thresholds were reduced and reaction times were faster in the bimodal compared to the unimodal conditions. The race model and MLE modeling showed that ferrets integrate modalities in a statistically optimal fashion. Taken together, the data indicate that principles of multisensory integration previously demonstrated in other species also apply to crossmodal processing in the ferret.</p>
</abstract>
<funding-group>
<funding-statement>This research was supported by a Grant from the DFG (SFB 936/A2, B6; A.K.E.;
<ext-link ext-link-type="uri" xlink:href="http://www.dfg.de/">http://www.dfg.de</ext-link>
). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.</funding-statement>
</funding-group>
<counts>
<fig-count count="7"></fig-count>
<table-count count="2"></table-count>
<page-count count="20"></page-count>
</counts>
<custom-meta-group>
<custom-meta id="data-availability">
<meta-name>Data Availability</meta-name>
<meta-value>All relevant data are within the paper and its Supporting Information files.</meta-value>
</custom-meta>
</custom-meta-group>
</article-meta>
<notes>
<title>Data Availability</title>
<p>All relevant data are within the paper and its Supporting Information files.</p>
</notes>
</front>
<body>
<sec sec-type="intro" id="sec001">
<title>Introduction</title>
<p>During the last two decades ferrets (
<italic>Mustela putorius</italic>
) have become increasingly relevant as an animal model in different fields of neuroscience [
<xref rid="pone.0124952.ref001" ref-type="bibr">1</xref>
<xref rid="pone.0124952.ref024" ref-type="bibr">24</xref>
]. Ferrets have been domesticated for over 2000 years and are easy to handle and train on behavioral tasks [
<xref rid="pone.0124952.ref015" ref-type="bibr">15</xref>
,
<xref rid="pone.0124952.ref025" ref-type="bibr">25</xref>
<xref rid="pone.0124952.ref029" ref-type="bibr">29</xref>
]. As carnivores, ferrets have excellent visual and auditory senses and are well suited for cross-modal integration studies. An additional advantage is that the ferret brain shows substantial homologies with that of other animal models established in neuroscience, such as the cat [
<xref rid="pone.0124952.ref010" ref-type="bibr">10</xref>
,
<xref rid="pone.0124952.ref011" ref-type="bibr">11</xref>
,
<xref rid="pone.0124952.ref018" ref-type="bibr">18</xref>
<xref rid="pone.0124952.ref020" ref-type="bibr">20</xref>
] and the primate [
<xref rid="pone.0124952.ref026" ref-type="bibr">26</xref>
]. Extensive work has been performed to map cortical and subcortical regions of the ferret brain functionally and anatomically [
<xref rid="pone.0124952.ref003" ref-type="bibr">3</xref>
,
<xref rid="pone.0124952.ref011" ref-type="bibr">11</xref>
,
<xref rid="pone.0124952.ref017" ref-type="bibr">17</xref>
<xref rid="pone.0124952.ref020" ref-type="bibr">20</xref>
,
<xref rid="pone.0124952.ref022" ref-type="bibr">22</xref>
]. These mapping studies have shown that ferrets have highly complex sensory cortical systems, making them an interesting model for the study of sensory processing pathways, response properties and topographies of sensory neurons. Several studies have addressed multisensory response properties in anesthetized ferrets [
<xref rid="pone.0124952.ref002" ref-type="bibr">2</xref>
,
<xref rid="pone.0124952.ref004" ref-type="bibr">4</xref>
,
<xref rid="pone.0124952.ref008" ref-type="bibr">8</xref>
,
<xref rid="pone.0124952.ref014" ref-type="bibr">14</xref>
], but multisensory interactions have not yet been studied in a behavioral preparation in this species.</p>
<p>Substantial effort has been made to uncover principles of multisensory integration in a variety of species and paradigms [
<xref rid="pone.0124952.ref030" ref-type="bibr">30</xref>
<xref rid="pone.0124952.ref035" ref-type="bibr">35</xref>
]. Multisensory integration is crucial for animals and influences behavior in synergistic or competitive ways. Sensory integration can lead to faster reaction times, better detection rates and higher accuracy values in multi- compared to unimodal conditions [
<xref rid="pone.0124952.ref033" ref-type="bibr">33</xref>
,
<xref rid="pone.0124952.ref036" ref-type="bibr">36</xref>
,
<xref rid="pone.0124952.ref037" ref-type="bibr">37</xref>
]. Specifically, sensory integration increases reliability by reducing the variance of the sensory estimate [
<xref rid="pone.0124952.ref036" ref-type="bibr">36</xref>
,
<xref rid="pone.0124952.ref038" ref-type="bibr">38</xref>
,
<xref rid="pone.0124952.ref039" ref-type="bibr">39</xref>
]. The consistent estimate with the lowest variance is the Maximum Likelihood Estimate (MLE) [
<xref rid="pone.0124952.ref040" ref-type="bibr">40</xref>
], which can be derived from the weighted sum of the individual sensory estimates, with weights being inversely proportional to the variance of the unisensory signals [
<xref rid="pone.0124952.ref036" ref-type="bibr">36</xref>
,
<xref rid="pone.0124952.ref039" ref-type="bibr">39</xref>
]. A substantial number of studies indicate that humans and animals indeed integrate information across sensory modalities in this way [
<xref rid="pone.0124952.ref033" ref-type="bibr">33</xref>
,
<xref rid="pone.0124952.ref036" ref-type="bibr">36</xref>
,
<xref rid="pone.0124952.ref038" ref-type="bibr">38</xref>
,
<xref rid="pone.0124952.ref039" ref-type="bibr">39</xref>
,
<xref rid="pone.0124952.ref041" ref-type="bibr">41</xref>
<xref rid="pone.0124952.ref046" ref-type="bibr">46</xref>
]. For example, Ernst and Banks [
<xref rid="pone.0124952.ref036" ref-type="bibr">36</xref>
] used an MLE model to predict the results of a visual-haptic experiment and showed that humans integrate information in a statistically optimal fashion. Similar results were obtained by application of MLE in a human audio-visual study [
<xref rid="pone.0124952.ref037" ref-type="bibr">37</xref>
] and in a vestibular-visual study in macaque monkeys [
<xref rid="pone.0124952.ref047" ref-type="bibr">47</xref>
]. These studies demonstrate that the MLE is a robust statistical model to predict the crossmodal response and to test whether subjects integrate information in a statistically optimal fashion. As a result of the sensory integration process, the accumulation of information is faster in multimodal than in unimodal conditions, which in turn leads to decreased reaction times (RT) [
<xref rid="pone.0124952.ref048" ref-type="bibr">48</xref>
<xref rid="pone.0124952.ref053" ref-type="bibr">53</xref>
].</p>
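To make the inverse-variance weighting concrete, the following minimal MATLAB sketch combines two unimodal estimates under the MLE rule; all numbers and variable names are illustrative assumptions, not values from the study:

    % MLE cue combination: weights inversely proportional to unimodal variance.
    sA = 1.0;  varA = 0.4;                 % hypothetical auditory estimate and variance
    sV = 1.2;  varV = 0.1;                 % hypothetical visual estimate and variance
    wA = (1/varA) / (1/varA + 1/varV);     % auditory weight
    wV = (1/varV) / (1/varA + 1/varV);     % visual weight
    sAV   = wA*sA + wV*sV;                 % combined bimodal estimate
    varAV = varA*varV / (varA + varV);     % predicted bimodal variance
    fprintf('bimodal estimate %.2f, variance %.2f\n', sAV, varAV)

The predicted bimodal variance never exceeds the smaller of the two unimodal variances, which is the quantitative sense in which combining modalities should lower detection thresholds.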
<p>In the present study, we investigated whether ferrets integrate sensory signals according to the same principles established for humans [
<xref rid="pone.0124952.ref033" ref-type="bibr">33</xref>
,
<xref rid="pone.0124952.ref054" ref-type="bibr">54</xref>
] and non-human primates [
<xref rid="pone.0124952.ref047" ref-type="bibr">47</xref>
]. Previous studies in behaving ferrets have used either freely-moving [
<xref rid="pone.0124952.ref013" ref-type="bibr">13</xref>
,
<xref rid="pone.0124952.ref015" ref-type="bibr">15</xref>
,
<xref rid="pone.0124952.ref055" ref-type="bibr">55</xref>
] or head-restrained [
<xref rid="pone.0124952.ref026" ref-type="bibr">26</xref>
] animals. Here, we developed a head-free, body-restrained approach allowing a standardized stimulation position and the utilization of the ferret’s natural response behavior. An additional demand was that the setup should be sufficiently flexible to allow combination of the behavioral protocol with electrophysiological recordings. We established a behavioral paradigm, requiring detection and integration of stimuli in the auditory and/or visual modality, to investigate features of uni- and multisensory integration in the ferret and compare them to data reported from other species. Ferrets were tested in a 2-alternative-choice task requiring them to detect lateralized auditory, visual, or combined audio-visual targets of varying intensity. We expected the ferrets to perform more accurately and faster in the bimodal cases, because congruent inputs from two modalities provide more reliable sensory evidence. We first determined unimodal thresholds for auditory amplitude and visual contrast detection. Subsequently, we combined both modalities and compared psychometric fits and RTs between all conditions. We used MLE to model psychometric curves and to probe whether ferrets integrate visual and auditory signals in an optimal manner. Furthermore, to test for a redundant signal effect (RSE) we pooled the RTs of all animals in order to calculate a race model and to investigate potential intensity- and modality-dependent effects [
<xref rid="pone.0124952.ref049" ref-type="bibr">49</xref>
,
<xref rid="pone.0124952.ref056" ref-type="bibr">56</xref>
,
<xref rid="pone.0124952.ref057" ref-type="bibr">57</xref>
].</p>
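For the race-model test, a common formulation is Miller's inequality, under which the bimodal RT distribution may not exceed the sum of the two unimodal cumulative distributions [49]. The sketch below checks this bound on simulated reaction times; the data and distribution parameters are invented for illustration and are not the study's:

    % Race-model inequality check on pooled reaction times (simulated data).
    rtA  = 250 + 40*randn(500,1);          % unimodal auditory RTs (ms)
    rtV  = 280 + 40*randn(500,1);          % unimodal visual RTs (ms)
    rtAV = 235 + 35*randn(500,1);          % bimodal RTs (ms)
    t = 100:5:450;                         % time points for the empirical CDFs
    cdfA  = mean(rtA  <= t, 1);            % implicit expansion (MATLAB R2016b or later)
    cdfV  = mean(rtV  <= t, 1);
    cdfAV = mean(rtAV <= t, 1);
    bound = min(cdfA + cdfV, 1);           % race-model upper bound
    fprintf('bound violated at %d of %d time points\n', sum(cdfAV > bound), numel(t))

A violation of the bound indicates that the RSE cannot be explained by statistical facilitation between independent unimodal channels alone.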
</sec>
<sec sec-type="materials|methods" id="sec002">
<title>Materials and Methods</title>
<p>Ferrets were trained in a spatial detection paradigm, which was used to perform two behavioral experiments. In the first experiment, the animals’ auditory and visual unisensory detection thresholds were determined. In the second experiment, unimodal and bimodal thresholds were assessed in a combined approach, using the unimodal results from the first experiment to adjust the test parameters.</p>
<sec id="sec003">
<title>Animals</title>
<p>Four adult female ferrets (
<italic>Mustela putorius</italic>
; Euroferret, Dybbølsgade, Denmark), aged 2 years (n = 2) and 4 years (n = 2), from two different litters were tested in the experiment. They were individually housed in a standard ferret cage with an enriched environment under controlled ambient conditions (21°C, 12-h light/dark cycle, lights on at 8:00 a.m.). The animals had ad libitum access to food pellets. Access to tap water was restricted for 8h before the experiments and the training procedure. All behavioral testing was done during the light cycle between 10:00 a.m. and 2:00 p.m.</p>
</sec>
<sec id="sec004">
<title>Ethics statement</title>
<p>All experiments were approved by the Hamburg state authority for animal welfare (BUG-Hamburg; Permit Number: 22/11) and performed in accordance with the guidelines of the German animal protection law. All sections of this report adhere to the ARRIVE Guidelines for reporting animal research [
<xref rid="pone.0124952.ref058" ref-type="bibr">58</xref>
].</p>
</sec>
<sec id="sec005">
<title>Experimental setup</title>
<p>The experiments were carried out in a dark, sound-attenuated chamber to ensure controlled conditions for sensory stimulation. Once per day each ferret performed the experimental task in a custom-built setup (Fig
<xref rid="pone.0124952.g001" ref-type="fig">1A</xref>
and
<xref rid="pone.0124952.g001" ref-type="fig">1D</xref>
). We crafted a flat-bottomed tube to conveniently house the animal during the experiment. The body of the ferret was slightly restrained by fixation to three points in the tube via a harness, while the head remained freely movable outside the tube throughout the session. The semi-circular tube was fixed on an aluminum pedestal to level the animal’s head at 20cm distance to the center of the LED screen used for visual stimulation (BenQ XL2420T, Taipei, Taiwan). On the front (‘head side’), two convex aluminum semicircles were mounted horizontally below and above the animal’s head, respectively, at 150mm distance. They carried three light-barrier fibers (FT-FM2), in the center, left and right, respectively, connected to high-speed (sampling interval: 250μs) receivers (FX301, SUNX, Aichi, Japan). This allowed detection of the animal’s head position during the experiments. In addition, a waterspout was co-localized with each light-barrier source. On both sides of the LED screen a speaker (T1; Beyerdynamic, Heilbronn, Germany) was placed at a 45° angle to the screen surface and at the height of the horizontal screen midline. A custom-made 3-channel water dispenser was installed outside the sound-attenuated chamber to avoid acoustic interference during the experiments. It consisted of three valves from SMC Corporation (Tokyo, Japan), a Perfusor syringe (Melsungen, Germany) as a water reservoir, and Perfusor tubing to connect it with the waterspouts. The setup was controlled by custom-made routines using the Matlab environment (The Mathworks Inc.; MA, USA) on a MacPro. Behavioral control (light-barriers) and reward application (water dispenser) were triggered through NI-PCI cards (NI-6259 and NI-6251; National Instruments GmbH, Munich, Germany). The Psychtoolbox and the custom-written NI-mex-function referred to the same internal clock, allowing precise timing of behavior and stimulation.</p>
<fig id="pone.0124952.g001" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0124952.g001</object-id>
<label>Fig 1</label>
<caption>
<title>Experimental setup and behavioral task.</title>
<p>(A) Schematic of the components of the experimental setup in a top view: the LED-screen (a) with a speaker (b) on each side, the aluminum pedestal (d), and the three light-barrier-waterspout combinations (c). The semi-circular acrylic tube with a ferret (e) inside was placed on the pedestal. (B) Successive phases in the detection task: the inter-trial window (I), the trial initialization window (II), the event window (III) and the response window (IV). The three circles below each frame represent the light-barriers (white = unbroken, red = broken). The center of the screen displays a static visual random noise pattern. (C) Schematic of trial timing. When the ferret broke the central light-barrier (II) for 500ms, a trial was initialized and the event window started (III), indicated by a decrease in contrast of the static random noise pattern. At a random time between 0 and 1000ms during the event window, the auditory and/or visual stimulus appeared for 100ms either left or right of the center. After stimulus offset the ferret had a response time window of +100–700ms (IV) to pan its head from the central position to the light-barrier on the side of the stimulation. Subsequently, the inter-trial screen (I) appeared again. During the whole session the screen’s global luminance remained unchanged. (D) Three-dimensional rendering of the experimental setup. Labeling of the components as in (A).</p>
</caption>
<graphic xlink:href="pone.0124952.g001"></graphic>
</fig>
</sec>
<sec id="sec006">
<title>Sensory stimulation</title>
<p>Auditory and visual stimuli were created using the Psychtoolbox (V3) [
<xref rid="pone.0124952.ref059" ref-type="bibr">59</xref>
] in a Matlab environment (The Mathworks Inc.; MA, USA). A white noise auditory stimulus (100ms) with up to 50dB sound pressure level (SPL) was used for auditory stimulation. It was generated digitally at 96kHz sample rate on a high-end PCI-audio card (HDSPe AES, RME-Audio, Germany) and delivered through two ‘T1’ Beyerdynamic speakers (Heilbronn, Germany). Visual stimuli consisted of concentric moving circular gratings (22.5°, 0.2cycles/°, 5Hz) up to 0.38 Michelson contrast (Cm) shown for 100ms (6 frames @ 60 Hz monitor-refresh rate). The background was set to half-maximum luminance to avoid global luminance changes at stimulus onset. In the center of the screen, a static random noise pattern was displayed (7°, Cm between 0 and 1). During ‘bimodal’ trials, both visual and auditory stimuli were presented synchronously as described below.</p>
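As a rough illustration of the auditory stimulus parameters given above, a matching white-noise burst could be generated in MATLAB as follows; this is a sketch, not the authors' Psychtoolbox routine, and the mapping of linear amplitude to dB SPL depends on the hardware calibration:

    % 100 ms white-noise burst at the 96 kHz sample rate stated in the text.
    fs  = 96000;                           % audio sample rate (Hz)
    dur = 0.100;                           % burst duration (s)
    x   = 2*rand(round(fs*dur), 1) - 1;    % uniform white noise in [-1, 1]
    x   = 0.5 * x;                         % linear gain; scaling to <= 50 dB SPL is calibration-specific
    % sound(x, fs)                         % playback check outside the chamber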
</sec>
<sec id="sec007">
<title>Detection task</title>
<p>The ferrets were trained to solve a spatial detection task, as shown in Fig
<xref rid="pone.0124952.g001" ref-type="fig">1B</xref>
and
<xref rid="pone.0124952.g001" ref-type="fig">1C</xref>
. To initialize a trial, the ferret had to maintain a central head position by breaking the central light-barrier for 500ms. This caused the random noise pattern in the center of the screen to decrease in contrast, indicating to the animal that the stimulus window (up to 1000ms) had started. During this interval the animal had to continue to maintain a central head position. A stimulus was presented for 100ms on the left or on the right side, starting at a random time in this window. The stimulus could be unimodal visual (circular grating), unimodal auditory (white noise burst), or temporally congruent bimodal (see below for further details). After stimulus offset, the animal had to respond within 600ms by panning its head to the respective side. If the response was correct, the animal received a water reward (~80μl) at that position and could immediately start the next trial. If the response was too early (before stimulus onset or within 100ms after stimulus onset), incorrect (wrong side) or omitted (no response), the trial was immediately terminated, followed by a 2000ms interval during which no new trial could be started.</p>
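For clarity, the trial outcomes defined above reduce to a small decision rule. The MATLAB function below is a hypothetical restatement of that rule (latency measured from stimulus onset; the 100ms stimulus plus the 600ms response window give the 700ms cutoff), not the authors' control code:

    % Hypothetical outcome classification for a single trial.
    % respTime: response latency from stimulus onset (ms); NaN if no response.
    function outcome = classifyTrial(respTime, respSide, stimSide)
        if isnan(respTime)
            outcome = 'omission';          % no response within the window
        elseif respTime < 100
            outcome = 'rash';              % before the response window opened
        elseif respTime > 700
            outcome = 'omission';          % response window already closed
        elseif respSide == stimSide
            outcome = 'hit';               % correct side, rewarded
        else
            outcome = 'incorrect';         % wrong side, trial terminated
        end
    end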
</sec>
<sec id="sec008">
<title>General procedure</title>
<p>Following habituation to the harness, tube and setup, all ferrets learned to detect unimodal stimuli. Two of the animals were trained on the auditory task first and then on the visual task; the other two were trained in the reverse order. After completion of training and attainment of sufficient performance, we presented stimuli of both modalities during the same sessions and determined the individual detection thresholds. Twenty different stimulus amplitudes (0-50dB SPL; 0–0.38Cm) were chosen in a 1down/3up staircase procedure [
<xref rid="pone.0124952.ref060" ref-type="bibr">60</xref>
], i.e., if the animal solved the trial correctly (hit) the stimulus amplitude decreased by one step for the next trial, down to the minimum, whereas false responses (misses or omitted responses) led to a 3-step increase. No change occurred for responses that were issued too early (rash trials). In each trial either the auditory or the visual stimulus was presented in a pseudo-randomized fashion, with individual staircases per modality. To avoid a side- or modality-bias, each modality-side combination was titrated to an equal number of hits within each session. Because of the large number of condition combinations, each ferret had to complete 10–15 sessions to accumulate a sufficient number of trials per amplitude level. The data of each animal were pooled and treated as one sample, i.e., session information was discounted during further analysis. Sensory thresholds were determined by fitting a Weibull function to the data for each ferret individually.</p>
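The 1down/3up rule can be sketched as follows; the simulated observer and its level-to-performance mapping are invented for illustration, and rash trials (which leave the level unchanged) are omitted for brevity:

    % 1-down/3-up staircase over 20 amplitude levels with a simulated observer.
    nLevels = 20;  level = nLevels;          % start at the largest amplitude
    history = zeros(200,1);
    for k = 1:200
        pHit = 0.5 + 0.5*(level/nLevels);    % invented detection-probability model
        if rand < pHit
            level = max(level - 1, 1);       % hit: one step down
        else
            level = min(level + 3, nLevels); % miss or omission: three steps up
        end
        history(k) = level;
    end
    plot(history); xlabel('trial'); ylabel('amplitude level')

With one step down per hit and three steps up per miss, the expected drift is zero where the hit rate is 75% (1·p = 3·(1−p)), so the staircase concentrates trials near the 75%-correct amplitude used as the criterion below.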
<p>In a subsequent set of measurements, we combined simultaneous stimulus presentation in both modalities. To this end, we fixed the stimulus in one modality at the amplitude where the tested animal had reached an accuracy of 75% during the unimodal testing and varied the amplitude in the other modality according to the staircase procedure described above. In these bimodal sessions we again included the unimodal conditions, such that we obtained four different stimulation classes: unimodal auditory (A), unimodal visual (V), auditory supported by visual (Av), and visual supported by auditory (Va). These four stimulation conditions were presented in a pseudo-randomized fashion with separate staircases during the sessions. All ferrets completed 10–12 sessions, and the threshold was determined for each ferret by fitting a Weibull function to the data.</p>
</sec>
<sec id="sec009">
<title>Data Analysis</title>
<p>All offline data analysis was performed using custom written scripts in Matlab (The Mathworks Inc., MA, USA).</p>
<sec id="sec010">
<title>Psychometrics</title>
<p>We evaluated the accuracy values (P) for all N stimulus amplitude classes (a) with at least 6 hit trials in total on both sides using
<xref rid="pone.0124952.e001" ref-type="disp-formula">Eq (1)</xref>
.
<disp-formula id="pone.0124952.e001">
P_a = \frac{N_{a,h}}{N_{a,o} - N_{a,r}}
<label>(1)</label>
</disp-formula>
Here, <italic>a</italic> denotes the stimulus amplitude, <italic>N</italic><sub><italic>a,h</italic></sub> (<italic>hit trials</italic>) is the number of correct-response trials at amplitude <italic>a</italic>, <italic>N</italic><sub><italic>a,o</italic></sub> (<italic>onset trials</italic>) is the number of trials at amplitude <italic>a</italic> in which the animal reached stimulus onset time, and <italic>N</italic><sub><italic>a,r</italic></sub> (<italic>rash trials</italic>) is the number of trials at amplitude <italic>a</italic> in which the animal responded before the response window had started (up to 100ms after stimulus onset), presumably guessing rather than responding on the basis of sufficiently collected sensory evidence. We estimated the detection threshold by fitting a Weibull function to <italic>P</italic><sub><italic>a</italic></sub>,
<disp-formula id="pone.0124952.e002">
F_a = 1 - \exp\left(-(\lambda a)^{k}\right)
<label>(2)</label>
</disp-formula>
Here, <italic>k</italic> denotes the shape parameter and <italic>λ</italic> the scale parameter. The trial numbers were used as weights during the fitting procedure. Because every animal had different thresholds in the respective modalities, we calculated the standard deviation of each fit using a delete-d jackknife method, where d = 20% corresponds to the number of sessions excluded per run, i.e., 2 or 3 sessions, respectively.</p>
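<p>A minimal MATLAB sketch of this threshold estimation (Eqs 1 and 2) is given below. The per-amplitude counts nHit, nOnset and nRash and the amplitude vector amps are assumed inputs; for simplicity, the sketch omits the guess-rate lower bound used in the actual fits.</p>
<preformat>
% Accuracy per amplitude class (Eq 1) and weighted Weibull fit (Eq 2).
P = nHit ./ (nOnset - nRash);                   % accuracy per amplitude
w = nOnset - nRash;                             % trial counts used as fit weights
weibull = @(p, a) 1 - exp(-(p(1) .* a).^p(2));  % p(1) = lambda, p(2) = k
cost = @(p) sum(w .* (P - weibull(p, amps)).^2);% weighted least squares
pHat = fminsearch(cost, [1 / median(amps), 2]); % crude initial guess
% Amplitude at a given accuracy criterion, e.g. 75%:
a75 = (-log(1 - 0.75))^(1 / pHat(2)) / pHat(1);
</preformat>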
</sec>
<sec id="sec011">
<title>Modeling cross-modal interaction</title>
<p>In order to quantify the cross-modal interaction, we used the MLE approach. To this end, we used the auditory and visual accuracies from the multimodal experiment for all tested stimulus intensities. Assuming a model of a hidden Gaussian representation of the sensory input in the brain, we estimated the variance (<italic>σ</italic>) for all points based on the <italic>F</italic><sub><italic>a</italic></sub> values from the Weibull function,
<disp-formula id="pone.0124952.e003">
\sigma = \frac{\sigma_0}{\operatorname{inverf}(F_a)}
<label>(3)</label>
</disp-formula>
where ‘<italic>inverf</italic>’ denotes the inverse error function and <italic>σ</italic><sub>0</sub> an unknown scale factor. Since <italic>σ</italic><sub>0</sub> cancels in the following calculation of <italic>σ</italic><sub><italic>bi</italic></sub>, we can arbitrarily set it to 1. The next step was to combine both unimodal variances to derive the bimodal variance (<italic>σ</italic><sub><italic>bi</italic></sub>) according to
<disp-formula id="pone.0124952.e004">
\sigma_{bi} = \frac{\sigma_{mod}\,\sigma_{fix}}{\sqrt{\sigma_{mod}^{2} + \sigma_{fix}^{2}}}
<label>(4)</label>
</disp-formula>
where <italic>σ</italic><sub><italic>mod</italic></sub> represents the variance of the modality whose intensity was modulated and <italic>σ</italic><sub><italic>fix</italic></sub> that of the modality fixed at its 75% threshold. Subsequently, we used the inverse of the bimodal variance in an error function (<italic>erf</italic>) to determine the bimodal accuracy (5).</p>
<disp-formula id="pone.0124952.e005">
\text{accuracy}_{bi} = \operatorname{erf}\left(\frac{1}{\sigma_{bi}}\right)
<label>(5)</label>
</disp-formula>
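<p>Taken together, Eqs 3–5 yield the MLE prediction of bimodal accuracy directly from the two unimodal accuracies. A minimal MATLAB sketch, assuming Fmod and Ffix hold the unimodal accuracies of the varied and the fixed modality (<italic>σ</italic><sub>0</sub> cancels and is set to 1):</p>
<preformat>
% MLE prediction of bimodal accuracy from unimodal accuracies (Eqs 3-5).
sigmaMod = 1 ./ erfinv(Fmod);                   % Eq 3 with sigma0 = 1
sigmaFix = 1 ./ erfinv(Ffix);
sigmaBi  = (sigmaMod .* sigmaFix) ./ sqrt(sigmaMod.^2 + sigmaFix.^2);   % Eq 4
accBi    = erf(1 ./ sigmaBi);                   % Eq 5
</preformat>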
</sec>
<sec id="sec012">
<title>Reaction time</title>
<p>The RT was defined as the time difference between stimulus-onset and the time point when the animal panned its head out of the central light-barrier. Only intensity classes with at least 6 successful responses (hits) were included in the RT analyses. To quantify the RT differences between the corresponding amplitudes from uni- and bimodal stimulation we computed the Multisensory Response Enhancement (MRE) [
<xref rid="pone.0124952.ref049" ref-type="bibr">49</xref>
] as follows:
<disp-formula id="pone.0124952.e006">
MRE = \frac{\min\left(\overline{RT_A}, \overline{RT_V}\right) - \overline{RT_{AV}}}{\min\left(\overline{RT_A}, \overline{RT_V}\right)}
<label>(6)</label>
</disp-formula>
with \(\overline{RT_A}\) and \(\overline{RT_V}\) referring to the observed mean RT for the auditory and visual stimuli, respectively, and \(\overline{RT_{AV}}\) denoting the mean RT for the corresponding bimodal stimulus.</p>
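<p>As a concrete illustration of Eq 6, the MRE for one amplitude class can be computed from the RT samples as follows (a sketch; rtA, rtV and rtAV are assumed vectors of hit-trial RTs):</p>
<preformat>
% Multisensory Response Enhancement (Eq 6), expressed in percent.
fastestUni = min(mean(rtA), mean(rtV));         % fastest unimodal mean RT
MRE = 100 * (fastestUni - mean(rtAV)) / fastestUni;
% MRE > 0: bimodal responses are faster than the fastest unimodal condition.
</preformat>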
<p>We calculated a race model [
<xref rid="pone.0124952.ref056" ref-type="bibr">56</xref>
] to evaluate a potential redundant signal effect (RSE). In our study, accuracy varied across subjects and sensory conditions. To compare reaction times across subjects and to compute the race model for all related modality combinations, we introduced ‘subjective intensity classes’ (SIC), determined by the accuracy fit in the different unimodal conditions (0–74%, 75–89% and 90–100%, indicating low, medium and high performance accuracy, respectively). This ensured a sufficient number of trials per SIC and additionally normalized for inter-individual differences in the range of stimulus amplitudes. Intensity and modality effects on RT were tested by applying the same grouping approach and computing a two-way ANOVA.</p>
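<p>The SIC grouping and the two-way ANOVA can be sketched as follows (hypothetical variable names; accFit denotes the accuracy predicted by the unimodal Weibull fit at each trial's amplitude):</p>
<preformat>
% Assign trials to subjective intensity classes (SIC) and test intensity
% and modality effects on RT with a two-way ANOVA.
edges = [0 0.75 0.90 1.001];                    % SIC borders: low/medium/high
sic   = discretize(accFit, edges);              % 1: 0-74%, 2: 75-89%, 3: 90-100%
p = anovan(rt, {modality, sic}, 'model', 'interaction', ...
           'varnames', {'Modality', 'Intensity'});
</preformat>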
</sec>
</sec>
</sec>
<sec sec-type="results" id="sec013">
<title>Results</title>
<p>Four ferrets were trained in a lateralized audiovisual spatial detection task until they could reliably solve the detection task in both modalities at high supra-threshold stimulus amplitudes (audio = 50dB SPL, visual = 0.38 Cm). The training was discontinued once the animals showed a stable baseline performance (>90%) across 5 consecutive days with high accuracy levels (audio = 92±1%, visual = 92±1%; mean±SEM). Two of the animals first learned the auditory task (26 and 16 days of training, respectively) and then the visual task (28 days of training in both animals). The two other ferrets acquired the modalities in the opposite sequence (11 and 19 days for the visual and 14 days each for the auditory modality). All animals achieved high performance levels, demonstrating the viability of the training paradigm.</p>
<p>In all experiments for the determination of sensory thresholds, we pooled results from the left and right stimulation sides to calculate the accuracy values for all amplitudes. Testing for a laterality bias by comparing hit performance on both sides with a paired <italic>t</italic>-test revealed no significant bias (unimodal experiment: all animals, <italic>p</italic>>0.05; bimodal experiment: all animals, <italic>p</italic>>0.05).</p>
<sec id="sec014">
<title>Determination of unimodal thresholds</title>
<p>In the first experiment we determined the 75% accuracy threshold for detection of visual and auditory stimuli in a unimodal setting for each individual ferret (
<xref rid="pone.0124952.g002" ref-type="fig">Fig 2</xref>
), with an individual range of stimulus amplitudes for each animal. Ferrets performed on average 12 (±2) sessions (104±26 trials per session; mean±SEM) in the unimodal experiment. Before pooling the sessions, we tested each ferret for non-stationarity effects across sessions by comparing the variance of the 84% accuracy threshold in the first three sessions against that of the last three sessions. We used three sessions as a minimum to ensure a sufficient number of trials for a proper Weibull fit. No animal showed non-stationarity in either modality (<italic>p</italic>>0.05, two-sample <italic>t</italic>-test, 2-sided). The pooled data were well described by a Weibull function (<italic>r</italic><sup><italic>2</italic></sup> = 0.56–0.92,
<xref rid="pone.0124952.g002" ref-type="fig">Fig 2</xref>
).</p>
<fig id="pone.0124952.g002" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0124952.g002</object-id>
<label>Fig 2</label>
<caption>
<title>Detection task performance of the unimodal experiment.</title>
<p>(A) Data for performance in the unimodal auditory detection task. (B) Data for the unimodal visual detection task. Each row represents one animal (1–4). Each dot represents the average performance of N trials (diameter) for the tested auditory amplitudes (dB SPL) or visual contrasts (Cm). The data are fitted by a Weibull function. Numbers within the panels indicate the amplitude values corresponding to the 75% and 84% thresholds, respectively. The blue shaded area around the fit indicates the standard deviation. The unmasked parts of the graphs indicate the range of the actually tested stimulus amplitudes.</p>
</caption>
<graphic xlink:href="pone.0124952.g002"></graphic>
</fig>
</sec>
<sec id="sec015">
<title>Determination of uni- and bimodal thresholds</title>
<p>In the second experiment, the two crossmodal stimulation conditions were added to the sessions. One modality’s intensity was fixed at the 75% threshold determined in the unimodal experiment (
<xref rid="pone.0124952.g002" ref-type="fig">Fig 2</xref>
) while the other modality was varied in amplitude according to a staircase procedure. All ferrets participated in 12 (±1) multimodal sessions (111±37 trials per session; mean±SEM). As for the unimodal sessions, we tested for non-stationarity effects between the first and the last sessions by comparing the 84% accuracy threshold variance as determined by the Weibull fit. Since the introduction of bimodal classes reduced the relative number of unimodal stimulus presentations per session, we had to pool at least the first and last 5 sessions, respectively, to generate a proper Weibull fit. No animal showed non-stationarity across the bimodal sessions (2-sided two-sample <italic>t</italic>-test; <italic>p</italic>>0.05). Subsequently, we calculated the accuracy for each amplitude at which at least 6 trials had been performed, and the psychometric curves were fit using a Weibull function (
<xref rid="pone.0124952.g003" ref-type="fig">Fig 3</xref>
). The pooled data were well described by the fits (<italic>r</italic><sup><italic>2</italic></sup> = 0.39–0.90,
<xref rid="pone.0124952.g003" ref-type="fig">Fig 3</xref>
).</p>
<fig id="pone.0124952.g003" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0124952.g003</object-id>
<label>Fig 3</label>
<caption>
<title>Detection task performance of the bimodal experiment.</title>
<p>(A) Data for the stimulus conditions auditory-only (A) and auditory stimulation supported by a visual stimulus (Av). (B) Data for the stimulus conditions visual-only (V) and visual stimulation supported by an auditory stimulus (Va). Each row represents one ferret (1–4). Each dot represents the average performance of N trials (diameter) at a given auditory amplitude (dB SPL) or visual contrast (Cm). The data are fitted by a Weibull function. The uni- and bimodal fit is represented by the blue and red line, respectively. The shaded area around the fit indicates the standard deviation. Δ
<sub>84</sub>
displays the relative amount of threshold shift of the bimodal compared to the unimodal psychometric function at a performance of 84%. A positive shift indicates a threshold decrease. The black curve represents the MLE model. The unmasked parts of the graphs indicate the range of the actually tested stimulus amplitudes.</p>
</caption>
<graphic xlink:href="pone.0124952.g003"></graphic>
</fig>
<p>The comparison of the unimodal 75% thresholds between both experiments revealed a slight increase from the uni- to the multimodal experiment, except in animal 2, which showed a decrease (
<xref rid="pone.0124952.t001" ref-type="table">Table 1</xref>
). However, the differences were smaller than a single amplitude step of the respective staircase procedure. Furthermore, two of the animals (1 and 4) did not reach a performance above 90±5% at the highest intensities in one modality (audio and visual, respectively). These findings indicate that the bimodal experiments were slightly more demanding, presumably because four stimulation conditions were presented, compared to only two in the unimodal experiment.</p>
<table-wrap id="pone.0124952.t001" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0124952.t001</object-id>
<label>Table 1</label>
<caption>
<title>Comparison of threshold values for uni- and bimodal experiments.</title>
</caption>
<alternatives>
<graphic id="pone.0124952.t001g" xlink:href="pone.0124952.t001"></graphic>
<table frame="hsides" rules="groups">
<colgroup span="1">
<col align="left" valign="middle" span="1"></col>
<col align="left" valign="middle" span="1"></col>
<col align="left" valign="middle" span="1"></col>
<col align="left" valign="middle" span="1"></col>
<col align="left" valign="middle" span="1"></col>
<col align="left" valign="middle" span="1"></col>
</colgroup>
<thead>
<tr>
<th align="left" rowspan="1" colspan="1"></th>
<th colspan="2" align="center" rowspan="1">Amplitude values @ 75%</th>
<th colspan="3" align="center" rowspan="1">Amplitude values @ 84%</th>
</tr>
<tr>
<th align="left" rowspan="1" colspan="1"></th>
<th align="left" rowspan="1" colspan="1">unimodal Exp.</th>
<th align="left" rowspan="1" colspan="1">bimodal Exp.</th>
<th align="left" rowspan="1" colspan="1">unimodal Exp.</th>
<th colspan="2" align="center" rowspan="1">bimodal Exp.</th>
</tr>
<tr>
<th align="left" rowspan="1" colspan="1"></th>
<th align="left" rowspan="1" colspan="1">A</th>
<th align="left" rowspan="1" colspan="1">A</th>
<th align="left" rowspan="1" colspan="1">A</th>
<th align="left" rowspan="1" colspan="1">A</th>
<th align="left" rowspan="1" colspan="1">Av</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" rowspan="1" colspan="1">1</td>
<td align="left" rowspan="1" colspan="1">39</td>
<td align="left" rowspan="1" colspan="1">40</td>
<td align="left" rowspan="1" colspan="1">42</td>
<td align="left" rowspan="1" colspan="1">45</td>
<td align="left" rowspan="1" colspan="1">41</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">2</td>
<td align="left" rowspan="1" colspan="1">27</td>
<td align="left" rowspan="1" colspan="1">30</td>
<td align="left" rowspan="1" colspan="1">29</td>
<td align="left" rowspan="1" colspan="1">32</td>
<td align="left" rowspan="1" colspan="1">26</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">3</td>
<td align="left" rowspan="1" colspan="1">25</td>
<td align="left" rowspan="1" colspan="1">28</td>
<td align="left" rowspan="1" colspan="1">27</td>
<td align="left" rowspan="1" colspan="1">33</td>
<td align="left" rowspan="1" colspan="1">31</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">4</td>
<td align="left" rowspan="1" colspan="1">31</td>
<td align="left" rowspan="1" colspan="1">31</td>
<td align="left" rowspan="1" colspan="1">33</td>
<td align="left" rowspan="1" colspan="1">34</td>
<td align="left" rowspan="1" colspan="1">25</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">V</td>
<td align="left" rowspan="1" colspan="1">V</td>
<td align="left" rowspan="1" colspan="1">V</td>
<td align="left" rowspan="1" colspan="1">V</td>
<td align="left" rowspan="1" colspan="1">Va</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">1</td>
<td align="left" rowspan="1" colspan="1">0.08</td>
<td align="left" rowspan="1" colspan="1">0.08</td>
<td align="left" rowspan="1" colspan="1">0.10</td>
<td align="left" rowspan="1" colspan="1">0.10</td>
<td align="left" rowspan="1" colspan="1">0.09</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">2</td>
<td align="left" rowspan="1" colspan="1">0.09</td>
<td align="left" rowspan="1" colspan="1">0.08</td>
<td align="left" rowspan="1" colspan="1">0.11</td>
<td align="left" rowspan="1" colspan="1">0.09</td>
<td align="left" rowspan="1" colspan="1">0.09</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">3</td>
<td align="left" rowspan="1" colspan="1">0.07</td>
<td align="left" rowspan="1" colspan="1">0.10</td>
<td align="left" rowspan="1" colspan="1">0.09</td>
<td align="left" rowspan="1" colspan="1">0.18</td>
<td align="left" rowspan="1" colspan="1">0.12</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">4</td>
<td align="left" rowspan="1" colspan="1">0.15</td>
<td align="left" rowspan="1" colspan="1">0.19</td>
<td align="left" rowspan="1" colspan="1">0.19</td>
<td align="left" rowspan="1" colspan="1">0.25</td>
<td align="left" rowspan="1" colspan="1">0.10</td>
</tr>
</tbody>
</table>
</alternatives>
<table-wrap-foot>
<fn id="t001fn001">
<p>The amplitude values at the 75% and 84% thresholds (in dB SPL for A and Av; Cm for V and Va) in the unimodal and bimodal experiments (columns) for all animals (rows 1–4).</p>
</fn>
</table-wrap-foot>
</table-wrap>
<p>Because different values were used for the lower bounds in uni- (50%) and crossmodal (75%) fitting, we employed the 84% threshold for the comparison of performance between uni- and crossmodal settings. All fits to the bimodal psychometric functions showed a left shift compared to their unimodal counterparts, except for animal 2 in the V-Va comparison (amplitude decrease ±SEM: A-Av = 5.3±1.5; V-Va = 0.06±0.03; for absolute values see
<xref rid="pone.0124952.t001" ref-type="table">Table 1</xref>
). This demonstrates a decrease in detection thresholds in all ferrets, except for animal 2 in the Va condition where the auditory stimulus had no augmenting effect. For quantification we calculated the relative shifts at the 84% performance-level between the uni- and bimodal psychometric fit (Δ
<sub>84</sub>
in
<xref rid="pone.0124952.g003" ref-type="fig">Fig 3</xref>
). A positive number indicates a lower threshold as determined by the bimodal fit, i.e., an increase in bimodal detection performance. On average, there was a shift (±SEM) of 15±5%, indicating an effective bimodal integration.</p>
</sec>
<sec id="sec016">
<title>Maximum likelihood estimates</title>
<p>To investigate whether ferrets integrate the two sensory modalities in a statistically optimal fashion, we computed a MLE model and compared the
<italic>r</italic>
<sup>
<italic>2</italic>
</sup>
-difference between the empirical data (
<xref rid="pone.0124952.g003" ref-type="fig">Fig 3</xref>
, red) and model (
<xref rid="pone.0124952.g003" ref-type="fig">Fig 3</xref>
, black). The range of the difference Δ
<sub>
<italic>bimodal</italic>
-
<italic>MLE</italic>
</sub>
was -1 to 49% (mean difference ±SEM 14±6). In four cases the MLE matched the bimodal psychometric function and the difference of the explained variance between the empirical fit (
<xref rid="pone.0124952.g003" ref-type="fig">Fig 3</xref>
) and the MLE was 10% or less (A1: Δ
<sub>
<italic>Va</italic>
-
<italic>MLE</italic>
</sub>
= 8%; A2: Δ
<sub>
<italic>Va</italic>
-
<italic>MLE</italic>
</sub>
= 2%; A3: Δ
<sub>
<italic>Av</italic>
-
<italic>MLE</italic>
</sub>
= 1% and Δ
<sub>
<italic>Va</italic>
-
<italic>MLE</italic>
</sub>
= -1%). For one condition (animal 1: Δ
<sub>
<italic>Av</italic>
-
<italic>MLE</italic>
</sub>
= 11%) the MLE underestimated the empirical fit at the highest stimulus amplitudes (
<xref rid="pone.0124952.g003" ref-type="fig">Fig 3A</xref>
). This may be caused by the low unimodal performance at high stimulus amplitudes, since the MLE model depends on the unimodal performance. This argument also holds true for the Va case (Δ
<sub>
<italic>Va</italic>
-
<italic>MLE</italic>
</sub>
= 15%) of animal 4 (
<xref rid="pone.0124952.g003" ref-type="fig">Fig 3B</xref>
, bottom panel). If the animal had shown a unimodal performance comparable to that previously measured in the unimodal experiment, the MLE model would be similar to the empirical bimodal fit. In the other two cases the MLE underestimated the empirical fit in the intermediate amplitude ranges (animal 2: Δ
<sub>
<italic>Av</italic>
-
<italic>MLE</italic>
</sub>
= 25% and animal 4: Δ<sub><italic>Av</italic>-<italic>MLE</italic></sub> = 49%,
<xref rid="pone.0124952.g003" ref-type="fig">Fig 3</xref>
). Overall, the MLE modeling results support the conclusions drawn from the comparison of the 84% performance threshold between uni- and bimodal conditions. The results indicate that ferrets integrated the two modalities as well as, or even better than, predicted by the MLE estimator (
<xref rid="pone.0124952.g003" ref-type="fig">Fig 3</xref>
).</p>
</sec>
<sec id="sec017">
<title>Reaction time analysis</title>
<p>One of the most important benefits of multisensory integration is the reduction of RTs for bimodal stimuli compared to unimodal stimulation. In the multisensory experiment, the measured RT varied with target amplitude in all stimulus conditions. In all stimulus conditions and all animals, RT showed a significant negative correlation with stimulus amplitude (range A:
<italic>r</italic>
= -0.17 to -0.41; V:
<italic>r</italic>
= -0.25 to -0.45; Av:
<italic>r</italic>
= -0.21 to -0.44; Va:
<italic>r</italic>
= -0.34 to -0.46; all correlations:
<italic>p</italic>
< 0.01;
<xref rid="pone.0124952.g004" ref-type="fig">Fig 4</xref>
). RT significantly increased with decreasing amplitude (ANOVA
<italic>p</italic>
< 0.05) in all but one condition (animal 1: audio-alone, ANOVA
<italic>p</italic>
> 0.05). This is an expected finding, because the signal-to-noise ratio (SNR) decreases with decreasing stimulus amplitude and the internal signal processing is slower for low SNR.</p>
<fig id="pone.0124952.g004" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0124952.g004</object-id>
<label>Fig 4</label>
<caption>
<title>Reaction time data from the bimodal experiment.</title>
<p>(A) Data for the stimulus conditions auditory-only (A) and auditory stimulation supported by a visual stimulus (Av). (B) Data for the stimulus conditions visual-only (V) and visual stimulation supported by an auditory stimulus (Va). Each row represents one ferret (1–4). RT ± SEM are shown as a function of stimulus amplitude (red = bimodal, blue = unimodal). Each data point represents the RT average for all hit trials recorded at that amplitude. Asterisks indicate significant differences between uni- and bimodal conditions (
<italic>t-</italic>
test: * =
<italic>p</italic>
< 0.05, ** =
<italic>p</italic>
< 0.01, *** =
<italic>p</italic>
< 0.001). Below each pair of uni- and bimodal RTs the Multisensory Response Enhancement (MRE) is shown as numerical values. In each panel, Pearson correlation coefficient and regression line for both data sets are shown. The two vertical lines mark the borders between the subject intensity classes (left of first line: 0–74%, between the lines 75–89%; right of the second line 90–100% performance).</p>
</caption>
<graphic xlink:href="pone.0124952.g004"></graphic>
</fig>
<p>To reduce the dimensionality and compare reaction times across subjects we used ‘subjective intensity classes’ (SIC) (see
<xref rid="sec002" ref-type="sec">Material and Methods</xref>
). To quantify RT changes reflecting potential multimodal enhancement effects, we calculated the MRE for all uni- and bimodal stimulus pairs and summed these according to the SICs. The average MRE of both modalities was slightly positive (Av = 3.59%; Va = 0.06%). However, about one-third of the cases (7 out of 24,
<xref rid="pone.0124952.t002" ref-type="table">Table 2</xref>
) showed a negative MRE. Such negative MRE values, which indicate that the average unimodal RT was faster than the average RT of the bimodal condition, occurred only in the low and medium SICs. In the highest SIC, the MRE was consistently positive. Overall, the MRE results suggest a multimodal enhancement effect in the high and medium SICs and an interfering effect in the low SIC.</p>
<table-wrap id="pone.0124952.t002" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0124952.t002</object-id>
<label>Table 2</label>
<caption>
<title>Reaction time: average MRE.</title>
</caption>
<alternatives>
<graphic id="pone.0124952.t002g" xlink:href="pone.0124952.t002"></graphic>
<table frame="hsides" rules="groups">
<colgroup span="1">
<col align="left" valign="middle" span="1"></col>
<col align="left" valign="middle" span="1"></col>
<col align="left" valign="middle" span="1"></col>
<col align="left" valign="middle" span="1"></col>
<col align="left" valign="middle" span="1"></col>
<col align="left" valign="middle" span="1"></col>
<col align="left" valign="middle" span="1"></col>
</colgroup>
<thead>
<tr>
<th align="left" rowspan="1" colspan="1"></th>
<th align="left" rowspan="1" colspan="1">0–74%</th>
<th align="left" rowspan="1" colspan="1">75–89%</th>
<th align="left" rowspan="1" colspan="1">90–100%</th>
<th align="left" rowspan="1" colspan="1">0–74%</th>
<th align="left" rowspan="1" colspan="1">75–89%</th>
<th align="left" rowspan="1" colspan="1">90–100%</th>
</tr>
<tr>
<th align="left" rowspan="1" colspan="1"></th>
<th colspan="3" align="center" rowspan="1">MRE Av</th>
<th colspan="3" align="center" rowspan="1">MRE Va</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" rowspan="1" colspan="1">1</td>
<td align="char" char="." rowspan="1" colspan="1">-6.00</td>
<td align="char" char="." rowspan="1" colspan="1">4.33</td>
<td align="char" char="." rowspan="1" colspan="1">8.00</td>
<td align="char" char="." rowspan="1" colspan="1">1.00</td>
<td align="char" char="." rowspan="1" colspan="1">-0.67</td>
<td align="char" char="." rowspan="1" colspan="1">2.22</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">2</td>
<td align="char" char="." rowspan="1" colspan="1">5.50</td>
<td align="char" char="." rowspan="1" colspan="1">6.33</td>
<td align="char" char="." rowspan="1" colspan="1">5.50</td>
<td align="char" char="." rowspan="1" colspan="1">-20.50</td>
<td align="char" char="." rowspan="1" colspan="1">-19.00</td>
<td align="char" char="." rowspan="1" colspan="1">4.40</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">3</td>
<td align="char" char="." rowspan="1" colspan="1">-9.40</td>
<td align="char" char="." rowspan="1" colspan="1">-3.00</td>
<td align="char" char="." rowspan="1" colspan="1">3.63</td>
<td align="char" char="." rowspan="1" colspan="1">-18.75</td>
<td align="char" char="." rowspan="1" colspan="1">3.00</td>
<td align="char" char="." rowspan="1" colspan="1">8.50</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">4</td>
<td align="char" char="." rowspan="1" colspan="1">15.00</td>
<td align="char" char="." rowspan="1" colspan="1">4.00</td>
<td align="char" char="." rowspan="1" colspan="1">9.20</td>
<td align="char" char="." rowspan="1" colspan="1">10.00</td>
<td align="char" char="." rowspan="1" colspan="1">12.50</td>
<td align="char" char="." rowspan="1" colspan="1">18.00</td>
</tr>
</tbody>
</table>
</alternatives>
<table-wrap-foot>
<fn id="t002fn001">
<p>Multisensory Response Enhancement (MRE) computed for the RTs from all animals (rows) and stimulus conditions of the bimodal experiment according to
<xref rid="pone.0124952.e006" ref-type="disp-formula">Eq 6</xref>
. (see
<xref rid="sec002" ref-type="sec">Methods</xref>
). The MRE’s were sorted by the subjective intensity classes (SIC; columns from left to right). Av: auditory supported by visual; Va: visual supported by auditory.</p>
</fn>
</table-wrap-foot>
</table-wrap>
<p>To investigate a potential RSE we calculated a race model on the pooled RTs according to the SICs. The race model assumes that during multimodal stimulation no modality integration happens, but that signals of either modality are processed independently. Whichever of the two leads to a result first triggers and determines the response, i.e., the head movement towards the detected stimulus. Therefore, the bimodal cumulative distribution function (CDF) of the RT can be modeled by sampling from the unimodal RT CDFs. Afterwards the modeled bimodal RT CDF can be compared with the empirical bimodal RT CDF (see
<xref rid="pone.0124952.g005" ref-type="fig">Fig 5</xref>
). If the empirical RT CDF is faster than the modeled RT CDF in 20–50% of the percentiles, the race model can be rejected and modality integration is suggested [
<xref rid="pone.0124952.ref061" ref-type="bibr">61</xref>
]. For a detailed explanation of the race model see Ulrich et al. [
<xref rid="pone.0124952.ref056" ref-type="bibr">56</xref>
].</p>
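<p>One simple way to construct such a modeled CDF is an independent race, in which the bimodal RT of each simulated trial is the faster of two unimodal draws. The following is a minimal MATLAB sketch under this independence assumption, with rtA, rtV and rtAV as pooled RT samples of one SIC (hypothetical variable names):</p>
<preformat>
% Race model: simulate bimodal RTs as the faster of two independent
% unimodal draws and compare percentiles with the empirical bimodal CDF.
nSim   = 1e5;
raceRT = min(rtA(randi(numel(rtA), nSim, 1)), ...
             rtV(randi(numel(rtV), nSim, 1))); % faster modality wins the race
pct    = 5:5:95;                               % percentile grid
qModel = prctile(raceRT, pct);
qEmp   = prctile(rtAV, pct);
% The race model is challenged where the empirical bimodal RTs are faster
% than the model, i.e. where qEmp < qModel in the lower percentiles.
faster = qEmp < qModel;
</preformat>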
<fig id="pone.0124952.g005" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0124952.g005</object-id>
<label>Fig 5</label>
<caption>
<title>Race model example.</title>
<p>Analysis of RT CDFs from animal 4. High visual SIC CDFs are shown for unimodal visual stimulation (V, blue), auditory stimulation at 75% (A75%, green), auditory stimulus supported by visual stimulation (Av, red) and the combination of both unimodal CDFs (V+A75%, black). In this case the race model is rejected, because the empirical bimodal CDF (red) is ‘faster’ than the modeled CDF (black).</p>
</caption>
<graphic xlink:href="pone.0124952.g005"></graphic>
</fig>
<p>We computed the relative (%) deviation from the linear unimodal combination for all stimulus conditions (
<xref rid="pone.0124952.g006" ref-type="fig">Fig 6</xref>
) for each SIC. If this difference is negative for the empirical bimodal CDF in 20–50% of the cases, the race model can be rejected (Miller and Ulrich, 2003). The biggest effect of the supportive value occurred in the highest intensity group, where the deviation was negative compared to the combined unisensory CDF in the lower percentiles (upper row,
<xref rid="pone.0124952.g006" ref-type="fig">Fig 6</xref>
). In the 75–89% SIC no percentile of the crossmodal combinations was negative (middle row
<xref rid="pone.0124952.g006" ref-type="fig">Fig 6</xref>
) and in the lowest intensity-group the bimodal and the supportive value RTs were similar (bottom row
<xref rid="pone.0124952.g006" ref-type="fig">Fig 6</xref>
), i.e., the benefit of the redundant signal seems to diminish with decreasing intensity group. However, in the medium and high performance classes the bimodal RT seemed to be closer to the combined CDF than each of the unimodal distributions. For the high SICs, the distributions suggest that the race model can be rejected at a descriptive level. Overall, these results are compatible with the notion that, for higher SICs, multisensory integration processes are leading to RT gains beyond what can be predicted from the fastest unimodal responses.</p>
<fig id="pone.0124952.g006" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0124952.g006</object-id>
<label>Fig 6</label>
<caption>
<title>Reaction time: race model results.</title>
<p>The RTs were sorted by the SICs (rows) and both modalities (A: audio, B: visual) pooled across all animals. The X-axis displays the cumulative reaction time differences to the race model for each modality (± SEM). A value of 0 at the X-axis corresponds to the prediction from the combination of both unimodal CDF’s. The blue curve displays the unimodal condition, the green curve the RTs at the supportive value and the red curve the bimodal class, respectively.</p>
</caption>
<graphic xlink:href="pone.0124952.g006"></graphic>
</fig>
<p>To investigate intensity, modality and interaction effects on a more global scale we pooled the RT of all animals according to subjective intensity classes and calculated a two-way ANOVA, with modality and intensity as main factors (
<xref rid="pone.0124952.g007" ref-type="fig">Fig 7</xref>
). This revealed main effects in both factors (Modality:
<italic>F</italic>
(3, 4632) = 18.84 (
<italic>p</italic>
< 0.001); Intensity:
<italic>F</italic>
(2, 4633) = 310.65 (
<italic>p</italic>
< 0.001)) and an interaction effect (Modality*Intensity:
<italic>F</italic>
(6, 4624) = 3.93 (
<italic>p</italic>
< 0.01)). A post hoc
<italic>t</italic>
-test (Holm-Bonferroni corrected) revealed significant differences between and within performance classes (
<xref rid="pone.0124952.g007" ref-type="fig">Fig 7</xref>
), respectively. The post hoc
<italic>t</italic>
-tests between the intensity groups and modalities were all highly significant (
<italic>p</italic>
< 0.001). This result suggests that the ferrets’ RTs increase as stimulus intensity decreases, and decrease significantly in the multimodal compared to the unimodal classes.</p>
<fig id="pone.0124952.g007" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0124952.g007</object-id>
<label>Fig 7</label>
<caption>
<title>Reaction time: two-way ANOVA results.</title>
<p>The reaction times (RT) pooled by subjective intensity classes (0–74%, 75–89%, 90–100%). The X-axis displays the three performance classes and the Y-axis shows the RT in milliseconds ± SEM. The solid lines represent the unimodal, the dashed lines the bimodal, red indicates the audio and blue the visual modalities (*:
<italic>p</italic>
< 0.05; **:
<italic>p</italic>
< 0.01; ***:
<italic>p</italic>
< 0.001; Holm-Bonferroni corrected). +++, significant differences between performance classes within each modality (Holm-Bonferroni corrected); red and blue asterisks, significant differences between uni- and bimodal conditions in one performance class (Holm-Bonferroni corrected); green asterisk, significant difference between the two unimodal conditions.</p>
</caption>
<graphic xlink:href="pone.0124952.g007"></graphic>
</fig>
</sec>
</sec>
<sec sec-type="conclusions" id="sec018">
<title>Discussion</title>
<p>How information from different modalities is integrated has been a subject of intense research for many years. Here we asked if ferrets integrate sensory signals according to the same principles established for other species [
<xref rid="pone.0124952.ref031" ref-type="bibr">31</xref>
,
<xref rid="pone.0124952.ref033" ref-type="bibr">33</xref>
,
<xref rid="pone.0124952.ref035" ref-type="bibr">35</xref>
<xref rid="pone.0124952.ref039" ref-type="bibr">39</xref>
,
<xref rid="pone.0124952.ref047" ref-type="bibr">47</xref>
,
<xref rid="pone.0124952.ref062" ref-type="bibr">62</xref>
,
<xref rid="pone.0124952.ref063" ref-type="bibr">63</xref>
]. We expected the ferrets to perform more accurately and with lower RTs in the bimodal cases, because congruent inputs from two modalities provide more reliable sensory evidence [
<xref rid="pone.0124952.ref062" ref-type="bibr">62</xref>
,
<xref rid="pone.0124952.ref064" ref-type="bibr">64</xref>
<xref rid="pone.0124952.ref066" ref-type="bibr">66</xref>
]. As predicted, bimodal detection thresholds were reduced and RTs were faster in the bimodal compared to unimodal conditions, demonstrating multimodal integration effects. Furthermore, our results on MLE modeling suggest that ferrets integrate modalities in a statistically optimal fashion.</p>
<sec id="sec019">
<title>Methodological considerations</title>
<p>Previous studies in behaving ferrets have used either freely-moving [
<xref rid="pone.0124952.ref001" ref-type="bibr">1</xref>
,
<xref rid="pone.0124952.ref013" ref-type="bibr">13</xref>
,
<xref rid="pone.0124952.ref015" ref-type="bibr">15</xref>
,
<xref rid="pone.0124952.ref055" ref-type="bibr">55</xref>
] or head-restrained [
<xref rid="pone.0124952.ref026" ref-type="bibr">26</xref>
] animals. Here, we developed a head-free, body-restrained approach allowing a standardized stimulation position and the utilization of the ferret’s natural response behavior. The setup is especially suited for psychometric investigations because the distance between animal and the stimulus sources remains constant across trials. The high inter-trial-consistency and the fixed animal position allow the combination of behavioral protocols with neurophysiological recordings comparable to head-restrained approaches [
<xref rid="pone.0124952.ref026" ref-type="bibr">26</xref>
]. An additional advantage is the use of a screen instead of a single light-source for the visual stimulation [
<xref rid="pone.0124952.ref001" ref-type="bibr">1</xref>
,
<xref rid="pone.0124952.ref031" ref-type="bibr">31</xref>
], enabling the spatially flexible presentation of a broad variety of visual stimuli. Similar to other ferret studies [
<xref rid="pone.0124952.ref013" ref-type="bibr">13</xref>
,
<xref rid="pone.0124952.ref055" ref-type="bibr">55</xref>
], one limitation of our approach lies in the relatively low number of trials collected per session. We therefore had to pool data from different sessions to obtain a sufficient number of trials for the fitting of psychometric functions. Merging of sessions was justified by the absence of non-stationarity effects and the large amount of variance explained by the fits. This also indicates a low day-to-day variability of perceptual thresholds. Our results complement those of an earlier study in ferrets demonstrating that measured thresholds were not affected by trial-to-trial fluctuations in the animals’ decision criterion [
<xref rid="pone.0124952.ref001" ref-type="bibr">1</xref>
]. Overall, these findings suggest that the experimental design presented in this study is well suited for psychophysical investigations.</p>
<p>Establishing links across species, our behavioral paradigm was inspired by previous human psychophysical studies which showed that temporally congruent crossmodal stimuli enhance detection [
<xref rid="pone.0124952.ref062" ref-type="bibr">62</xref>
,
<xref rid="pone.0124952.ref064" ref-type="bibr">64</xref>
<xref rid="pone.0124952.ref066" ref-type="bibr">66</xref>
]. Frassinetti et al. [
<xref rid="pone.0124952.ref062" ref-type="bibr">62</xref>
] adopted an animal approach [
<xref rid="pone.0124952.ref051" ref-type="bibr">51</xref>
] to humans and obtained similar results in terms of multisensory enhancement effects. Another study, from Lippert and colleagues [
<xref rid="pone.0124952.ref064" ref-type="bibr">64</xref>
], showed that informative congruent sounds improve detection rates, but that this gain disappears when subjects are not aware that the additional sound offers information about the visual stimulus. They concluded that cross-modal influences in simple detection tasks do not exclusively reflect hard-wired sensory integration mechanisms but, rather, point to a prominent role for cognitive and contextual effects. This contrasts with more classical views suggesting that information from different sensory modalities may be integrated pre-attentively, relying substantially on automatic bottom-up processing [
<xref rid="pone.0124952.ref035" ref-type="bibr">35</xref>
]. Our observation of the inter-experiment threshold increase for the unimodal conditions might suggest possible contextual effects. A possibility is that, in the second experiment, the inclusion of the bimodal conditions may have created a contextual, or motivational, bias of the animals towards solving the bimodal trials because more sensory evidence was provided. This could also explain why the performance in the unimodal conditions of the bimodal experiment did not reach 95–100% accuracy even at the highest intensities, unlike in the unimodal experiment.</p>
<p>Taken together, our study demonstrates that the implemented behavioral paradigm is suitable to determine uni- and bimodal thresholds and to operationalize multisensory integration processes. Possible contextual and attention-like effects seem hard to elucidate by pure psychometrics, but simultaneous electrophysiological recordings could provide valuable insights into the underlying brain processes during the task.</p>
</sec>
<sec id="sec020">
<title>Optimal modality integration</title>
<p>This is the first study on behaving ferrets to quantify multimodal enhancement effects and to test for optimal modality integration. The results of our bimodal experiment show clear multisensory enhancement effects. The left shift of the psychometric function and the variance reduction, derived at 84% accuracy, demonstrate increased detection rates and enhanced reliability for lower test-intensities in the bimodal stimulation conditions, indicating that the ferrets indeed integrate information across modalities as shown for other species [
<xref rid="pone.0124952.ref031" ref-type="bibr">31</xref>
,
<xref rid="pone.0124952.ref035" ref-type="bibr">35</xref>
,
<xref rid="pone.0124952.ref037" ref-type="bibr">37</xref>
,
<xref rid="pone.0124952.ref047" ref-type="bibr">47</xref>
,
<xref rid="pone.0124952.ref054" ref-type="bibr">54</xref>
,
<xref rid="pone.0124952.ref063" ref-type="bibr">63</xref>
<xref rid="pone.0124952.ref065" ref-type="bibr">65</xref>
]. MLE modeling is typically used in multisensory integration to test the hypothesis that the integrative process is statistically optimal, by fitting the parameters of the model to unisensory response distributions and then comparing the multimodal prediction of the model to the empirical data. Studies on humans have shown that different modalities are integrated in a statistically optimal fashion. For example, Battaglia et al. [
<xref rid="pone.0124952.ref037" ref-type="bibr">37</xref>
] found that human subjects integrate audio and visual modalities as an optimal observer. The same is true for visual and haptic integration [
<xref rid="pone.0124952.ref036" ref-type="bibr">36</xref>
], and integration of stereo and texture information [
<xref rid="pone.0124952.ref039" ref-type="bibr">39</xref>
,
<xref rid="pone.0124952.ref067" ref-type="bibr">67</xref>
]. Furthermore, Alais and Burr [
<xref rid="pone.0124952.ref038" ref-type="bibr">38</xref>
] showed that the ventriloquist effect is based on near-optimal sensory integration. Rowland and colleagues demonstrated statistically optimal audio-visual integration in the cat [
<xref rid="pone.0124952.ref063" ref-type="bibr">63</xref>
] and Gu et al. [
<xref rid="pone.0124952.ref047" ref-type="bibr">47</xref>
] demonstrated the same principle in macaques for visual and vestibular sensory integration. Similar to the abovementioned studies, our results on MLE modeling suggest that ferrets integrate modalities in a statistically optimal fashion. Surprisingly, in two of our cases the MLE underestimated the empirical fit, which is counterintuitive because the MLE already provides the maximum estimate. A potential explanation might be that the multisensory benefit is larger for some modalities than for others, as suggested by the modality precision hypothesis of Welch and Warren [
<xref rid="pone.0124952.ref068" ref-type="bibr">68</xref>
]. This hypothesis states that discrepancies are always resolved in favor of the more precise modality, i.e., the modality with the highest SNR is weighted more strongly in the final sensory estimate. Battaglia and coworkers [
<xref rid="pone.0124952.ref037" ref-type="bibr">37</xref>
] showed that humans have a bias towards the visual modality in a multisensory spatial detection task. Finally, the underestimation could also be caused by low unimodal performance at intermediate intensities, since the MLE model depends on the unimodal performance. In summary, the MLE model provides evidence that ferrets merge modalities in a near-optimal fashion, similar to other species [
<xref rid="pone.0124952.ref036" ref-type="bibr">36</xref>
<xref rid="pone.0124952.ref038" ref-type="bibr">38</xref>
,
<xref rid="pone.0124952.ref047" ref-type="bibr">47</xref>
,
<xref rid="pone.0124952.ref067" ref-type="bibr">67</xref>
].</p>
</sec>
<sec id="sec021">
<title>Multisensory response enhancement</title>
<p>In a second analysis approach we compared RTs of the uni- and bimodal stimulation conditions and computed a race model to test for a redundant signal effect (RSE). Our main results are in line with findings from other species. Previous work in humans revealed that subjects respond faster to bimodal compared to unimodal stimuli [
<xref rid="pone.0124952.ref049" ref-type="bibr">49</xref>
,
<xref rid="pone.0124952.ref064" ref-type="bibr">64</xref>
]. Miller [
<xref rid="pone.0124952.ref053" ref-type="bibr">53</xref>
] showed that this RT gain is a result of a modality integration effect and not only a product of the fastest processed modality. Gleiss and Kayser [
<xref rid="pone.0124952.ref031" ref-type="bibr">31</xref>
] demonstrated that additional non-informative white noise decreases RT in a visual detection task in rats; the effect size of the RT gain increased as the light intensity decreased. In our study, the influence of amplitude on RT is directly related to the SNR, i.e., internal signal processing is faster for high SNR. For high intensities of the varied modality (>75% unimodal performance), the SNR should be higher than that of the fixed supporting modality. Decreasing the intensity of the varied modality leads to a continuous decrease of its SNR (towards 0), such that for low intensities the RT is completely determined by the amplitude of the supporting modality. Interestingly, some MRE values were negative in the low and intermediate subjective intensity classes. This is because the MRE computation uses the fastest unimodal RT, and the RT at the supporting value was faster than the average bimodal RT. The varied modality thus seems to have a competitive effect on RT at low intensities, because the average bimodal RT was slower than the RT at the supportive value.</p>
<p>In addition to the MRE analysis, we computed a race model for the RT data. The race model tests RT effects in a more sophisticated way, by comparing a modeled bimodal RT CDF with the empirical bimodal RT CDF. In our dataset, the benefit of the redundant signal increased from low to high SIC. Data reached the criterion to reject the race model only in the high SIC. In the intermediate and low SIC the linear unimodal combination was faster compared to the empirical bimodal conditions. Nevertheless, in the intermediate SIC the bimodal percentiles were closer to the linear combination than the unimodal groups, indicating a minor gain of the supportive value and therefore a multisensory enhancement effect. In contrast, in the low SIC the bimodal group matches the supporting value group, implying that the supportive value is the driving modality in the sensory process [
<xref rid="pone.0124952.ref057" ref-type="bibr">57</xref>
,
<xref rid="pone.0124952.ref061" ref-type="bibr">61</xref>
].</p>
</sec>
</sec>
<sec sec-type="conclusions" id="sec022">
<title>Conclusions</title>
<p>In conclusion, our data demonstrate that basic principles of multisensory integration, such as enhancement effects of bimodal stimuli on detection rates, precision and RT, apply to crossmodal processing in the ferret brain. The race model and MLE modeling provide evidence that ferrets integrate modalities in a statistically optimal fashion. To quantify this in more detail, more advanced behavioral paradigms would be required, in which the stimulus onset varies across modalities and a broader range of stimulus amplitudes of the supporting modality is covered.</p>
<p>The setup we have developed to test ferrets in uni- and bimodal conditions is similar to human and non-human primate tasks, and can be combined in future research with approaches for the study of the underlying neural processes. Our behavioral paradigm could be combined with neuroscientific approaches such as optogenetics or
<italic>in vivo</italic>
imaging [
<xref rid="pone.0124952.ref069" ref-type="bibr">69</xref>
]. Furthermore, the same setup could be used to implement more complex behavioral paradigms such as discrimination or go/no-go tasks [
<xref rid="pone.0124952.ref026" ref-type="bibr">26</xref>
]. Moreover, the setup would also be suitable to investigate aspects of sensory processing other than multisensory integration relating, e.g., to altered developmental conditions [
<xref rid="pone.0124952.ref007" ref-type="bibr">7</xref>
,
<xref rid="pone.0124952.ref012" ref-type="bibr">12</xref>
,
<xref rid="pone.0124952.ref024" ref-type="bibr">24</xref>
], to top-down influences on sensory processing, or to large-scale communication across distinct sensory regions during different behavioral paradigms. Altogether, our results describe a highly multifunctional experimental approach, which may further enhance the viability and suitability of the ferret model.</p>
</sec>
<sec sec-type="supplementary-material" id="sec023">
<title>Supporting Information</title>
<supplementary-material content-type="local-data" id="pone.0124952.s001">
<label>S1 ARRIVE Guidelines</label>
<caption>
<title>Completed “ARRIVE Guidelines Checklist” for reporting animal data in this manuscript.</title>
<p>(PDF)</p>
</caption>
<media xlink:href="pone.0124952.s001.pdf">
<caption>
<p>Click here for additional data file.</p>
</caption>
</media>
</supplementary-material>
<supplementary-material content-type="local-data" id="pone.0124952.s002">
<label>S1 Dataset</label>
<caption>
<title>Raw data of the unimodal detection task for all animals.</title>
<p>(TXT)</p>
</caption>
<media xlink:href="pone.0124952.s002.txt">
<caption>
<p>Click here for additional data file.</p>
</caption>
</media>
</supplementary-material>
<supplementary-material content-type="local-data" id="pone.0124952.s003">
<label>S2 Dataset</label>
<caption>
<title>Raw data of the multimodal detection task for all animals.</title>
<p>(TXT)</p>
</caption>
<media xlink:href="pone.0124952.s003.txt">
<caption>
<p>Click here for additional data file.</p>
</caption>
</media>
</supplementary-material>
</sec>
</body>
<back>
<ack>
<p>We would like to thank Dorrit Bystron for assistance during data acquisition and Guido Nolte for advice in data modeling.</p>
</ack>
<ref-list>
<title>References</title>
<ref id="pone.0124952.ref001">
<label>1</label>
<mixed-citation publication-type="journal">
<name>
<surname>Alves-Pinto</surname>
<given-names>A</given-names>
</name>
,
<name>
<surname>Sollini</surname>
<given-names>J</given-names>
</name>
,
<name>
<surname>Sumner</surname>
<given-names>CJ</given-names>
</name>
.
<article-title>Signal detection in animal psychoacoustics: analysis and simulation of sensory and decision-related influences</article-title>
.
<source>Neuroscience</source>
.
<year>2012</year>
;
<volume>220</volume>
:
<fpage>215</fpage>
<lpage>227</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1016/j.neuroscience.2012.06.001">10.1016/j.neuroscience.2012.06.001</ext-link>
</comment>
<pub-id pub-id-type="pmid">22698686</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0124952.ref002">
<label>2</label>
<mixed-citation publication-type="journal">
<name>
<surname>Bizley</surname>
<given-names>JK</given-names>
</name>
,
<name>
<surname>King</surname>
<given-names>AJ</given-names>
</name>
.
<article-title>Visual influences on ferret auditory cortex</article-title>
.
<source>Hear Res</source>
.
<year>2009</year>
;
<volume>258</volume>
:
<fpage>55</fpage>
<lpage>63</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1016/j.heares.2009.06.017">10.1016/j.heares.2009.06.017</ext-link>
</comment>
<pub-id pub-id-type="pmid">19595754</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0124952.ref003">
<label>3</label>
<mixed-citation publication-type="journal">
<name>
<surname>Bizley</surname>
<given-names>JK</given-names>
</name>
,
<name>
<surname>Nodal</surname>
<given-names>FR</given-names>
</name>
,
<name>
<surname>Nelken</surname>
<given-names>I</given-names>
</name>
,
<name>
<surname>King</surname>
<given-names>AJ</given-names>
</name>
.
<article-title>Functional organization of ferret auditory cortex</article-title>
.
<source>Cereb Cortex</source>
.
<year>2005</year>
;
<volume>15</volume>
:
<fpage>1637</fpage>
<lpage>1653</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1093/cercor/bhi042">10.1093/cercor/bhi042</ext-link>
</comment>
<pub-id pub-id-type="pmid">15703254</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0124952.ref004">
<label>4</label>
<mixed-citation publication-type="journal">
<name>
<surname>Bizley</surname>
<given-names>JK</given-names>
</name>
,
<name>
<surname>Nodal</surname>
<given-names>FR</given-names>
</name>
,
<name>
<surname>Bajo</surname>
<given-names>VM</given-names>
</name>
,
<name>
<surname>Nelken</surname>
<given-names>I</given-names>
</name>
,
<name>
<surname>King</surname>
<given-names>AJ</given-names>
</name>
.
<article-title>Physiological and anatomical evidence for multisensory interactions in auditory cortex</article-title>
.
<source>Cereb Cortex</source>
.
<year>2007</year>
;
<volume>17</volume>
:
<fpage>2172</fpage>
<lpage>2189</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1093/cercor/bhl128">10.1093/cercor/bhl128</ext-link>
</comment>
<pub-id pub-id-type="pmid">17135481</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0124952.ref005">
<label>5</label>
<mixed-citation publication-type="journal">
<name>
<surname>Chiu</surname>
<given-names>C</given-names>
</name>
,
<name>
<surname>Weliky</surname>
<given-names>M</given-names>
</name>
.
<article-title>Spontaneous activity in developing ferret visual cortex in vivo</article-title>
.
<source>J Neurosci Off J Soc Neurosci</source>
.
<year>2001</year>
;
<volume>21</volume>
:
<fpage>8906</fpage>
<lpage>8914</lpage>
.</mixed-citation>
</ref>
<ref id="pone.0124952.ref006">
<label>6</label>
<mixed-citation publication-type="journal">
<name>
<surname>Sukhinin</surname>
<given-names>DI</given-names>
</name>
,
<name>
<surname>Hilgetag</surname>
<given-names>CC</given-names>
</name>
.
<article-title>Building the ferretome</article-title>
.
<source>Front Neuroinform Conference Abstract: Neuroinformatics 2013, Stockholm, Sweden</source>
;
<year>2013</year>
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.3389/conf.fninf.2013.09.00007">10.3389/conf.fninf.2013.09.00007</ext-link>
</comment>
</mixed-citation>
</ref>
<ref id="pone.0124952.ref007">
<label>7</label>
<mixed-citation publication-type="journal">
<name>
<surname>Farley</surname>
<given-names>BJ</given-names>
</name>
,
<name>
<surname>Yu</surname>
<given-names>H</given-names>
</name>
,
<name>
<surname>Jin</surname>
<given-names>DZ</given-names>
</name>
,
<name>
<surname>Sur</surname>
<given-names>M</given-names>
</name>
.
<article-title>Alteration of visual input results in a coordinated reorganization of multiple visual cortex maps</article-title>
.
<source>J Neurosci</source>
.
<year>2007</year>
;
<volume>27</volume>
:
<fpage>10299</fpage>
<lpage>10310</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1523/JNEUROSCI.2257-07.2007">10.1523/JNEUROSCI.2257-07.2007</ext-link>
</comment>
<pub-id pub-id-type="pmid">17881536</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0124952.ref008">
<label>8</label>
<mixed-citation publication-type="journal">
<name>
<surname>Foxworthy</surname>
<given-names>WA</given-names>
</name>
,
<name>
<surname>Allman</surname>
<given-names>BL</given-names>
</name>
,
<name>
<surname>Keniston</surname>
<given-names>LP</given-names>
</name>
,
<name>
<surname>Meredith</surname>
<given-names>MA</given-names>
</name>
.
<article-title>Multisensory and unisensory neurons in ferret parietal cortex exhibit distinct functional properties</article-title>
.
<source>Eur J Neurosci</source>
.
<year>2013</year>
;
<volume>37</volume>
:
<fpage>910</fpage>
<lpage>923</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1111/ejn.12085">10.1111/ejn.12085</ext-link>
</comment>
<pub-id pub-id-type="pmid">23279600</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0124952.ref009">
<label>9</label>
<mixed-citation publication-type="journal">
<name>
<surname>Fröhlich</surname>
<given-names>F</given-names>
</name>
,
<name>
<surname>McCormick</surname>
<given-names>DA</given-names>
</name>
.
<article-title>Endogenous electric fields may guide neocortical network activity</article-title>
.
<source>Neuron</source>
.
<year>2010</year>
;
<volume>67</volume>
:
<fpage>129</fpage>
<lpage>143</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1016/j.neuron.2010.06.005">10.1016/j.neuron.2010.06.005</ext-link>
</comment>
<pub-id pub-id-type="pmid">20624597</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0124952.ref010">
<label>10</label>
<mixed-citation publication-type="journal">
<name>
<surname>Homman-Ludiye</surname>
<given-names>J</given-names>
</name>
,
<name>
<surname>Manger</surname>
<given-names>PR</given-names>
</name>
,
<name>
<surname>Bourne</surname>
<given-names>JA</given-names>
</name>
.
<article-title>Immunohistochemical parcellation of the ferret (Mustela putorius) visual cortex reveals substantial homology with the cat (Felis catus)</article-title>
.
<source>J Comp Neurol</source>
.
<year>2010</year>
;
<volume>518</volume>
:
<fpage>4439</fpage>
<lpage>4462</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1002/cne.22465">10.1002/cne.22465</ext-link>
</comment>
<pub-id pub-id-type="pmid">20853515</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0124952.ref011">
<label>11</label>
<mixed-citation publication-type="journal">
<name>
<surname>Innocenti</surname>
<given-names>GM</given-names>
</name>
,
<name>
<surname>Manger</surname>
<given-names>PR</given-names>
</name>
,
<name>
<surname>Masiello</surname>
<given-names>I</given-names>
</name>
,
<name>
<surname>Colin</surname>
<given-names>I</given-names>
</name>
,
<name>
<surname>Tettoni</surname>
<given-names>L</given-names>
</name>
.
<article-title>Architecture and callosal connections of visual areas 17, 18, 19 and 21 in the ferret (Mustela putorius)</article-title>
.
<source>Cereb Cortex</source>
.
<year>2002</year>
;
<volume>12</volume>
:
<fpage>411</fpage>
<lpage>422</lpage>
.
<pub-id pub-id-type="pmid">11884356</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0124952.ref012">
<label>12</label>
<mixed-citation publication-type="journal">
<name>
<surname>Jarosiewicz</surname>
<given-names>B</given-names>
</name>
,
<name>
<surname>Schummers</surname>
<given-names>J</given-names>
</name>
,
<name>
<surname>Malik</surname>
<given-names>WQ</given-names>
</name>
,
<name>
<surname>Brown</surname>
<given-names>EN</given-names>
</name>
,
<name>
<surname>Sur</surname>
<given-names>M</given-names>
</name>
.
<article-title>Functional biases in visual cortex neurons with identified projections to higher cortical targets</article-title>
.
<source>Curr Biol</source>
.
<year>2012</year>
;
<volume>22</volume>
:
<fpage>269</fpage>
<lpage>277</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1016/j.cub.2012.01.011">10.1016/j.cub.2012.01.011</ext-link>
</comment>
<pub-id pub-id-type="pmid">22305753</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0124952.ref013">
<label>13</label>
<mixed-citation publication-type="journal">
<name>
<surname>Keating</surname>
<given-names>P</given-names>
</name>
,
<name>
<surname>Nodal</surname>
<given-names>FR</given-names>
</name>
,
<name>
<surname>King</surname>
<given-names>AJ</given-names>
</name>
.
<article-title>Behavioural sensitivity to binaural spatial cues in ferrets: evidence for plasticity in the duplex theory of sound localization</article-title>
.
<source>Eur J Neurosci</source>
.
<year>2014</year>
;
<volume>39</volume>
:
<fpage>197</fpage>
<lpage>206</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1111/ejn.12402">10.1111/ejn.12402</ext-link>
</comment>
<pub-id pub-id-type="pmid">24256073</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0124952.ref014">
<label>14</label>
<mixed-citation publication-type="journal">
<name>
<surname>Keniston</surname>
<given-names>LP</given-names>
</name>
,
<name>
<surname>Allman</surname>
<given-names>BL</given-names>
</name>
,
<name>
<surname>Meredith</surname>
<given-names>MA</given-names>
</name>
,
<name>
<surname>Clemo</surname>
<given-names>HR</given-names>
</name>
.
<article-title>Somatosensory and multisensory properties of the medial bank of the ferret rostral suprasylvian sulcus</article-title>
.
<source>Exp Brain Res</source>
.
<year>2009</year>
;
<volume>196</volume>
:
<fpage>239</fpage>
<lpage>251</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1007/s00221-009-1843-0">10.1007/s00221-009-1843-0</ext-link>
</comment>
<pub-id pub-id-type="pmid">19466399</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0124952.ref015">
<label>15</label>
<mixed-citation publication-type="journal">
<name>
<surname>King</surname>
<given-names>AJ</given-names>
</name>
,
<name>
<surname>Bajo</surname>
<given-names>VM</given-names>
</name>
,
<name>
<surname>Bizley</surname>
<given-names>JK</given-names>
</name>
,
<name>
<surname>Campbell</surname>
<given-names>RAA</given-names>
</name>
,
<name>
<surname>Nodal</surname>
<given-names>FR</given-names>
</name>
,
<name>
<surname>Schulz</surname>
<given-names>AL</given-names>
</name>
,
<etal>et al</etal>
<article-title>Physiological and behavioral studies of spatial coding in the auditory cortex</article-title>
.
<source>Hear Res</source>
.
<year>2007</year>
;
<volume>229</volume>
:
<fpage>106</fpage>
<lpage>115</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1016/j.heares.2007.01.001">10.1016/j.heares.2007.01.001</ext-link>
</comment>
<pub-id pub-id-type="pmid">17314017</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0124952.ref016">
<label>16</label>
<mixed-citation publication-type="journal">
<name>
<surname>Li</surname>
<given-names>Y</given-names>
</name>
,
<name>
<surname>Fitzpatrick</surname>
<given-names>D</given-names>
</name>
,
<name>
<surname>White</surname>
<given-names>LE</given-names>
</name>
.
<article-title>The development of direction selectivity in ferret visual cortex requires early visual experience</article-title>
.
<source>Nat Neurosci</source>
.
<year>2006</year>
;
<volume>9</volume>
:
<fpage>676</fpage>
<lpage>681</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1038/nn1684">10.1038/nn1684</ext-link>
</comment>
<pub-id pub-id-type="pmid">16604068</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0124952.ref017">
<label>17</label>
<mixed-citation publication-type="journal">
<name>
<surname>Manger</surname>
<given-names>PR</given-names>
</name>
,
<name>
<surname>Masiello</surname>
<given-names>I</given-names>
</name>
,
<name>
<surname>Innocenti</surname>
<given-names>GM</given-names>
</name>
.
<article-title>Areal organization of the posterior parietal cortex of the Ferret (Mustela putorius)</article-title>
.
<source>Cereb Cortex</source>
.
<year>2002</year>
;
<volume>12</volume>
:
<fpage>1280</fpage>
<lpage>1297</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1093/cercor/12.12.1280">10.1093/cercor/12.12.1280</ext-link>
</comment>
<pub-id pub-id-type="pmid">12427679</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0124952.ref018">
<label>18</label>
<mixed-citation publication-type="journal">
<name>
<surname>Manger</surname>
<given-names>PR</given-names>
</name>
,
<name>
<surname>Kiper</surname>
<given-names>D</given-names>
</name>
,
<name>
<surname>Masiello</surname>
<given-names>I</given-names>
</name>
,
<name>
<surname>Murillo</surname>
<given-names>L</given-names>
</name>
,
<name>
<surname>Tettoni</surname>
<given-names>L</given-names>
</name>
,
<name>
<surname>Hunyadi</surname>
<given-names>Z</given-names>
</name>
,
<etal>et al</etal>
<article-title>The representation of the visual field in three extrastriate areas of the ferret (Mustela putorius) and the relationship of retinotopy and field boundaries to callosal connectivity</article-title>
.
<source>Cereb Cortex</source>
.
<year>2002</year>
;
<volume>12</volume>
:
<fpage>423</fpage>
<lpage>437</lpage>
.
<pub-id pub-id-type="pmid">11884357</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0124952.ref019">
<label>19</label>
<mixed-citation publication-type="journal">
<name>
<surname>Manger</surname>
<given-names>PR</given-names>
</name>
,
<name>
<surname>Engler</surname>
<given-names>G</given-names>
</name>
,
<name>
<surname>Moll</surname>
<given-names>CKE</given-names>
</name>
,
<name>
<surname>Engel</surname>
<given-names>AK</given-names>
</name>
.
<article-title>The anterior ectosylvian visual area of the ferret: a homologue for an enigmatic visual cortical area of the cat?</article-title>
<source>Eur J Neurosci</source>
.
<year>2005</year>
;
<volume>22</volume>
:
<fpage>706</fpage>
<lpage>714</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1111/j.1460-9568.2005.04246.x">10.1111/j.1460-9568.2005.04246.x</ext-link>
</comment>
<pub-id pub-id-type="pmid">16101752</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0124952.ref020">
<label>20</label>
<mixed-citation publication-type="journal">
<name>
<surname>Manger</surname>
<given-names>PR</given-names>
</name>
,
<name>
<surname>Engler</surname>
<given-names>G</given-names>
</name>
,
<name>
<surname>Moll</surname>
<given-names>CKE</given-names>
</name>
,
<name>
<surname>Engel</surname>
<given-names>AK</given-names>
</name>
.
<article-title>Location, architecture, and retinotopy of the anteromedial lateral suprasylvian visual area (AMLS) of the ferret (Mustela putorius)</article-title>
.
<source>Vis Neurosci</source>
.
<year>2008</year>
;
<volume>25</volume>
:
<fpage>27</fpage>
<lpage>37</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1017/S0952523808080036">10.1017/S0952523808080036</ext-link>
</comment>
<pub-id pub-id-type="pmid">18282308</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0124952.ref021">
<label>21</label>
<mixed-citation publication-type="journal">
<name>
<surname>Nelken</surname>
<given-names>I</given-names>
</name>
,
<name>
<surname>Versnel</surname>
<given-names>H</given-names>
</name>
.
<article-title>Responses to linear and logarithmic frequency-modulated sweeps in ferret primary auditory cortex</article-title>
.
<source>Eur J Neurosci</source>
.
<year>2000</year>
;
<volume>12</volume>
:
<fpage>549</fpage>
<lpage>562</lpage>
.
<pub-id pub-id-type="pmid">10712634</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0124952.ref022">
<label>22</label>
<mixed-citation publication-type="journal">
<name>
<surname>Phillips</surname>
<given-names>DP</given-names>
</name>
,
<name>
<surname>Judge</surname>
<given-names>PW</given-names>
</name>
,
<name>
<surname>Kelly</surname>
<given-names>JB</given-names>
</name>
.
<article-title>Primary auditory cortex in the ferret (Mustela putorius): neural response properties and topographic organization</article-title>
.
<source>Brain Res</source>
.
<year>1988</year>
;
<volume>443</volume>
:
<fpage>281</fpage>
<lpage>294</lpage>
.
<pub-id pub-id-type="pmid">3359271</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0124952.ref023">
<label>23</label>
<mixed-citation publication-type="journal">
<name>
<surname>Stitt</surname>
<given-names>I</given-names>
</name>
,
<name>
<surname>Galindo-Leon</surname>
<given-names>E</given-names>
</name>
,
<name>
<surname>Pieper</surname>
<given-names>F</given-names>
</name>
,
<name>
<surname>Engler</surname>
<given-names>G</given-names>
</name>
,
<name>
<surname>Engel</surname>
<given-names>AK</given-names>
</name>
.
<article-title>Laminar profile of visual response properties in ferret superior colliculus</article-title>
.
<source>J Neurophysiol</source>
.
<year>2013</year>
;
<volume>110</volume>
:
<fpage>1333</fpage>
<lpage>1345</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1152/jn.00957.2012">10.1152/jn.00957.2012</ext-link>
</comment>
<pub-id pub-id-type="pmid">23803328</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0124952.ref024">
<label>24</label>
<mixed-citation publication-type="journal">
<name>
<surname>Yu</surname>
<given-names>H</given-names>
</name>
,
<name>
<surname>Majewska</surname>
<given-names>AK</given-names>
</name>
,
<name>
<surname>Sur</surname>
<given-names>M</given-names>
</name>
.
<article-title>Rapid experience-dependent plasticity of synapse function and structure in ferret visual cortex in vivo</article-title>
.
<source>Proc Natl Acad Sci U S A</source>
.
<year>2011</year>
;
<volume>108</volume>
:
<fpage>21235</fpage>
<lpage>21240</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1073/pnas.1108270109">10.1073/pnas.1108270109</ext-link>
</comment>
<pub-id pub-id-type="pmid">22160713</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0124952.ref025">
<label>25</label>
<mixed-citation publication-type="book">
<name>
<surname>Fox</surname>
<given-names>JG</given-names>
</name>
,
<name>
<surname>Marini</surname>
<given-names>RP</given-names>
</name>
.
<source>Biology and diseases of the ferret</source>
.
<publisher-name>John Wiley & Sons</publisher-name>
;
<year>2014</year>
.</mixed-citation>
</ref>
<ref id="pone.0124952.ref026">
<label>26</label>
<mixed-citation publication-type="journal">
<name>
<surname>Fritz</surname>
<given-names>JB</given-names>
</name>
,
<name>
<surname>David</surname>
<given-names>SV</given-names>
</name>
,
<name>
<surname>Radtke-Schuller</surname>
<given-names>S</given-names>
</name>
,
<name>
<surname>Yin</surname>
<given-names>P</given-names>
</name>
,
<name>
<surname>Shamma</surname>
<given-names>SA</given-names>
</name>
.
<article-title>Adaptive, behaviorally gated, persistent encoding of task-relevant auditory information in ferret frontal cortex</article-title>
.
<source>Nat Neurosci</source>
.
<year>2010</year>
;
<volume>13</volume>
:
<fpage>1011</fpage>
<lpage>1019</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1038/nn.2598">10.1038/nn.2598</ext-link>
</comment>
<pub-id pub-id-type="pmid">20622871</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0124952.ref027">
<label>27</label>
<mixed-citation publication-type="journal">
<name>
<surname>Hartley</surname>
<given-names>DEH</given-names>
</name>
,
<name>
<surname>Vongpaisal</surname>
<given-names>T</given-names>
</name>
,
<name>
<surname>Xu</surname>
<given-names>J</given-names>
</name>
,
<name>
<surname>Shepherd</surname>
<given-names>RK</given-names>
</name>
,
<name>
<surname>King</surname>
<given-names>AJ</given-names>
</name>
,
<name>
<surname>Isaiah</surname>
<given-names>A</given-names>
</name>
.
<article-title>Bilateral cochlear implantation in the ferret: A novel animal model for behavioral studies</article-title>
.
<source>J Neurosci Methods</source>
.
<year>2010</year>
;
<volume>190</volume>
:
<fpage>214</fpage>
<lpage>228</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1016/j.jneumeth.2010.05.014">10.1016/j.jneumeth.2010.05.014</ext-link>
</comment>
<pub-id pub-id-type="pmid">20576507</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0124952.ref028">
<label>28</label>
<mixed-citation publication-type="journal">
<name>
<surname>Leach</surname>
<given-names>ND</given-names>
</name>
,
<name>
<surname>Nodal</surname>
<given-names>FR</given-names>
</name>
,
<name>
<surname>Cordery</surname>
<given-names>PM</given-names>
</name>
,
<name>
<surname>King</surname>
<given-names>AJ</given-names>
</name>
,
<name>
<surname>Bajo</surname>
<given-names>VM</given-names>
</name>
.
<article-title>Cortical cholinergic input is required for normal auditory perception and experience-dependent plasticity in adult ferrets</article-title>
.
<source>J Neurosci</source>
.
<year>2013</year>
;
<volume>33</volume>
:
<fpage>6659</fpage>
<lpage>6671</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1523/JNEUROSCI.5039-12.2013">10.1523/JNEUROSCI.5039-12.2013</ext-link>
</comment>
<pub-id pub-id-type="pmid">23575862</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0124952.ref029">
<label>29</label>
<mixed-citation publication-type="journal">
<name>
<surname>Nodal</surname>
<given-names>FR</given-names>
</name>
,
<name>
<surname>Bajo</surname>
<given-names>VM</given-names>
</name>
,
<name>
<surname>Parsons</surname>
<given-names>CH</given-names>
</name>
,
<name>
<surname>Schnupp</surname>
<given-names>JW</given-names>
</name>
,
<name>
<surname>King</surname>
<given-names>AJ</given-names>
</name>
.
<article-title>Sound localization behavior in ferrets: comparison of acoustic orientation and approach-to-target responses</article-title>
.
<source>Neuroscience</source>
.
<year>2008</year>
;
<volume>154</volume>
:
<fpage>397</fpage>
<lpage>408</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1016/j.neuroscience.2007.12.022">10.1016/j.neuroscience.2007.12.022</ext-link>
</comment>
<pub-id pub-id-type="pmid">18281159</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0124952.ref030">
<label>30</label>
<mixed-citation publication-type="journal">
<name>
<surname>Gleiss</surname>
<given-names>S</given-names>
</name>
,
<name>
<surname>Kayser</surname>
<given-names>C</given-names>
</name>
.
<article-title>Acoustic noise improves visual perception and modulates occipital oscillatory states</article-title>
.
<source>J Cogn Neurosci</source>
.
<year>2014</year>
;
<volume>26</volume>
:
<fpage>699</fpage>
<lpage>711</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1162/jocn_a_00524">10.1162/jocn_a_00524</ext-link>
</comment>
<pub-id pub-id-type="pmid">24236698</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0124952.ref031">
<label>31</label>
<mixed-citation publication-type="journal">
<name>
<surname>Gleiss</surname>
<given-names>S</given-names>
</name>
,
<name>
<surname>Kayser</surname>
<given-names>C</given-names>
</name>
.
<article-title>Audio-visual detection benefits in the rat</article-title>
.
<source>PloS One</source>
.
<year>2012</year>
;
<volume>7</volume>
:
<fpage>e45677</fpage>
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1371/journal.pone.0045677">10.1371/journal.pone.0045677</ext-link>
</comment>
<pub-id pub-id-type="pmid">23029179</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0124952.ref032">
<label>32</label>
<mixed-citation publication-type="journal">
<name>
<surname>Spence</surname>
<given-names>C</given-names>
</name>
.
<article-title>Crossmodal correspondences: a tutorial review</article-title>
.
<source>Atten Percept Psychophys</source>
.
<year>2011</year>
;
<volume>73</volume>
:
<fpage>971</fpage>
<lpage>995</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.3758/s13414-010-0073-7">10.3758/s13414-010-0073-7</ext-link>
</comment>
<pub-id pub-id-type="pmid">21264748</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0124952.ref033">
<label>33</label>
<mixed-citation publication-type="journal">
<name>
<surname>Angelaki</surname>
<given-names>DE</given-names>
</name>
,
<name>
<surname>Gu</surname>
<given-names>Y</given-names>
</name>
,
<name>
<surname>DeAngelis</surname>
<given-names>GC</given-names>
</name>
.
<article-title>Multisensory integration: psychophysics, neurophysiology, and computation</article-title>
.
<source>Curr Opin Neurobiol</source>
.
<year>2009</year>
;
<volume>19</volume>
:
<fpage>452</fpage>
<lpage>458</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1016/j.conb.2009.06.008">10.1016/j.conb.2009.06.008</ext-link>
</comment>
<pub-id pub-id-type="pmid">19616425</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0124952.ref034">
<label>34</label>
<mixed-citation publication-type="journal">
<name>
<surname>Stein</surname>
<given-names>BE</given-names>
</name>
,
<name>
<surname>London</surname>
<given-names>N</given-names>
</name>
,
<name>
<surname>Wilkinson</surname>
<given-names>LK</given-names>
</name>
,
<name>
<surname>Price</surname>
<given-names>DD</given-names>
</name>
.
<article-title>Enhancement of perceived visual intensity by auditory stimuli: a psychophysical analysis</article-title>
.
<source>J Cogn Neurosci</source>
.
<year>1996</year>
;
<volume>8</volume>
:
<fpage>497</fpage>
<lpage>506</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1162/jocn.1996.8.6.497">10.1162/jocn.1996.8.6.497</ext-link>
</comment>
<pub-id pub-id-type="pmid">23961981</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0124952.ref035">
<label>35</label>
<mixed-citation publication-type="book">
<name>
<surname>Stein</surname>
<given-names>BE</given-names>
</name>
,
<name>
<surname>Meredith</surname>
<given-names>MA</given-names>
</name>
.
<source>The merging of the senses</source>
.
<publisher-loc>Cambridge, Mass.</publisher-loc>
:
<publisher-name>A Bradford Book</publisher-name>
;
<year>1993</year>
.</mixed-citation>
</ref>
<ref id="pone.0124952.ref036">
<label>36</label>
<mixed-citation publication-type="journal">
<name>
<surname>Ernst</surname>
<given-names>MO</given-names>
</name>
,
<name>
<surname>Banks</surname>
<given-names>MS</given-names>
</name>
.
<article-title>Humans integrate visual and haptic information in a statistically optimal fashion</article-title>
.
<source>Nature</source>
.
<year>2002</year>
;
<volume>415</volume>
:
<fpage>429</fpage>
<lpage>433</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1038/415429a">10.1038/415429a</ext-link>
</comment>
<pub-id pub-id-type="pmid">11807554</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0124952.ref037">
<label>37</label>
<mixed-citation publication-type="journal">
<name>
<surname>Battaglia</surname>
<given-names>PW</given-names>
</name>
,
<name>
<surname>Jacobs</surname>
<given-names>RA</given-names>
</name>
,
<name>
<surname>Aslin</surname>
<given-names>RN</given-names>
</name>
.
<article-title>Bayesian integration of visual and auditory signals for spatial localization</article-title>
.
<source>J Opt Soc Am A</source>
.
<year>2003</year>
;
<volume>20</volume>
:
<fpage>1391</fpage>
<lpage>1397</lpage>
.
<pub-id pub-id-type="pmid">12868643</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0124952.ref038">
<label>38</label>
<mixed-citation publication-type="journal">
<name>
<surname>Alais</surname>
<given-names>D</given-names>
</name>
,
<name>
<surname>Burr</surname>
<given-names>D</given-names>
</name>
.
<article-title>The ventriloquist effect results from near-optimal bimodal integration</article-title>
.
<source>Curr Biol</source>
.
<year>2004</year>
;
<volume>14</volume>
:
<fpage>257</fpage>
<lpage>262</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1016/j.cub.2004.01.029">10.1016/j.cub.2004.01.029</ext-link>
</comment>
<pub-id pub-id-type="pmid">14761661</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0124952.ref039">
<label>39</label>
<mixed-citation publication-type="journal">
<name>
<surname>Knill</surname>
<given-names>DC</given-names>
</name>
,
<name>
<surname>Saunders</surname>
<given-names>JA</given-names>
</name>
.
<article-title>Do humans optimally integrate stereo and texture information for judgments of surface slant?</article-title>
<source>Vision Res</source>
.
<year>2003</year>
;
<volume>43</volume>
:
<fpage>2539</fpage>
<lpage>2558</lpage>
.
<pub-id pub-id-type="pmid">13129541</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0124952.ref040">
<label>40</label>
<mixed-citation publication-type="journal">
<name>
<surname>Kalman</surname>
<given-names>RE</given-names>
</name>
,
<name>
<surname>Bucy</surname>
<given-names>RS</given-names>
</name>
.
<article-title>New results in linear filtering and prediction theory</article-title>
.
<source>Trans ASME Ser J Basic Eng</source>
.
<year>1961</year>
;
<volume>83</volume>
.</mixed-citation>
</ref>
<ref id="pone.0124952.ref041">
<label>41</label>
<mixed-citation publication-type="journal">
<name>
<surname>Beers</surname>
<given-names>RJ van</given-names>
</name>
,
<name>
<surname>Sittig</surname>
<given-names>AC</given-names>
</name>
,
<name>
<surname>Gon</surname>
<given-names>JJD van der</given-names>
</name>
.
<article-title>Integration of proprioceptive and visual position-information: An experimentally supported model</article-title>
.
<source>J Neurophysiol</source>
.
<year>1999</year>
;
<volume>81</volume>
:
<fpage>1355</fpage>
<lpage>1364</lpage>
.
<pub-id pub-id-type="pmid">10085361</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0124952.ref042">
<label>42</label>
<mixed-citation publication-type="journal">
<name>
<surname>Van Beers</surname>
<given-names>RJ</given-names>
</name>
,
<name>
<surname>Wolpert</surname>
<given-names>DM</given-names>
</name>
,
<name>
<surname>Haggard</surname>
<given-names>P</given-names>
</name>
.
<article-title>When feeling is more important than seeing in sensorimotor adaptation</article-title>
.
<source>Curr Biol</source>
.
<year>2002</year>
;
<volume>12</volume>
:
<fpage>834</fpage>
<lpage>837</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1016/S0960-9822(02)00836-9">10.1016/S0960-9822(02)00836-9</ext-link>
</comment>
<pub-id pub-id-type="pmid">12015120</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0124952.ref043">
<label>43</label>
<mixed-citation publication-type="journal">
<name>
<surname>Fetsch</surname>
<given-names>CR</given-names>
</name>
,
<name>
<surname>DeAngelis</surname>
<given-names>GC</given-names>
</name>
,
<name>
<surname>Angelaki</surname>
<given-names>DE</given-names>
</name>
.
<article-title>Visual–vestibular cue integration for heading perception: applications of optimal cue integration theory</article-title>
.
<source>Eur J Neurosci</source>
.
<year>2010</year>
;
<volume>31</volume>
:
<fpage>1721</fpage>
<lpage>1729</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1111/j.1460-9568.2010.07207.x">10.1111/j.1460-9568.2010.07207.x</ext-link>
</comment>
<pub-id pub-id-type="pmid">20584175</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0124952.ref044">
<label>44</label>
<mixed-citation publication-type="journal">
<name>
<surname>Gepshtein</surname>
<given-names>S</given-names>
</name>
,
<name>
<surname>Banks</surname>
<given-names>MS</given-names>
</name>
.
<article-title>Viewing geometry determines how vision and haptics combine in size perception</article-title>
.
<source>Curr Biol</source>
.
<year>2003</year>
;
<volume>13</volume>
:
<fpage>483</fpage>
<lpage>488</lpage>
.
<pub-id pub-id-type="pmid">12646130</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0124952.ref045">
<label>45</label>
<mixed-citation publication-type="journal">
<name>
<surname>Knill</surname>
<given-names>DC</given-names>
</name>
,
<name>
<surname>Pouget</surname>
<given-names>A</given-names>
</name>
.
<article-title>The Bayesian brain: the role of uncertainty in neural coding and computation</article-title>
.
<source>Trends Neurosci</source>
.
<year>2004</year>
;
<volume>27</volume>
:
<fpage>712</fpage>
<lpage>719</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1016/j.tins.2004.10.007">10.1016/j.tins.2004.10.007</ext-link>
</comment>
<pub-id pub-id-type="pmid">15541511</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0124952.ref046">
<label>46</label>
<mixed-citation publication-type="book">
<name>
<surname>Morasso</surname>
<given-names>PG</given-names>
</name>
,
<name>
<surname>Sanguineti</surname>
<given-names>V</given-names>
</name>
.
<source>Self-organization, computational maps, and motor control</source>
.
<publisher-name>Elsevier</publisher-name>
;
<year>1997</year>
.</mixed-citation>
</ref>
<ref id="pone.0124952.ref047">
<label>47</label>
<mixed-citation publication-type="journal">
<name>
<surname>Gu</surname>
<given-names>Y</given-names>
</name>
,
<name>
<surname>Angelaki</surname>
<given-names>DE</given-names>
</name>
,
<name>
<surname>Deangelis</surname>
<given-names>GC</given-names>
</name>
.
<article-title>Neural correlates of multisensory cue integration in macaque MSTd</article-title>
.
<source>Nat Neurosci</source>
.
<year>2008</year>
;
<volume>11</volume>
:
<fpage>1201</fpage>
<lpage>1210</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1038/nn.2191">10.1038/nn.2191</ext-link>
</comment>
<pub-id pub-id-type="pmid">18776893</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0124952.ref048">
<label>48</label>
<mixed-citation publication-type="journal">
<name>
<surname>Hershenson</surname>
<given-names>M</given-names>
</name>
.
<article-title>Reaction time as a measure of intersensory facilitation</article-title>
.
<source>J Exp Psychol</source>
.
<year>1962</year>
;
<volume>63</volume>
:
<fpage>289</fpage>
<lpage>293</lpage>
.
<pub-id pub-id-type="pmid">13906889</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0124952.ref049">
<label>49</label>
<mixed-citation publication-type="journal">
<name>
<surname>Diederich</surname>
<given-names>A</given-names>
</name>
,
<name>
<surname>Colonius</surname>
<given-names>H</given-names>
</name>
.
<article-title>Bimodal and trimodal multisensory enhancement: effects of stimulus onset and intensity on reaction time</article-title>
.
<source>Percept Psychophys</source>
.
<year>2004</year>
;
<volume>66</volume>
:
<fpage>1388</fpage>
<lpage>1404</lpage>
.
<pub-id pub-id-type="pmid">15813202</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0124952.ref050">
<label>50</label>
<mixed-citation publication-type="journal">
<name>
<surname>Alais</surname>
<given-names>D</given-names>
</name>
,
<name>
<surname>Newell</surname>
<given-names>FN</given-names>
</name>
,
<name>
<surname>Mamassian</surname>
<given-names>P</given-names>
</name>
.
<article-title>Multisensory processing in review: from physiology to behaviour</article-title>
.
<source>Seeing Perceiving</source>
.
<year>2010</year>
;
<volume>23</volume>
:
<fpage>3</fpage>
<lpage>38</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1163/187847510X488603">10.1163/187847510X488603</ext-link>
</comment>
<pub-id pub-id-type="pmid">20507725</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0124952.ref051">
<label>51</label>
<mixed-citation publication-type="journal">
<name>
<surname>Stein</surname>
<given-names>BE</given-names>
</name>
,
<name>
<surname>Meredith</surname>
<given-names>MA</given-names>
</name>
,
<name>
<surname>Huneycutt</surname>
<given-names>WS</given-names>
</name>
,
<name>
<surname>McDade</surname>
<given-names>L</given-names>
</name>
.
<article-title>Behavioral indices of multisensory integration: orientation to visual cues is affected by auditory stimuli</article-title>
.
<source>J Cogn Neurosci</source>
.
<year>1989</year>
;
<volume>1</volume>
:
<fpage>12</fpage>
<lpage>24</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1162/jocn.1989.1.1.12">10.1162/jocn.1989.1.1.12</ext-link>
</comment>
<pub-id pub-id-type="pmid">23968407</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0124952.ref052">
<label>52</label>
<mixed-citation publication-type="journal">
<name>
<surname>Miller</surname>
<given-names>J</given-names>
</name>
.
<article-title>Divided attention: Evidence for coactivation with redundant signals</article-title>
.
<source>Cognit Psychol</source>
.
<year>1982</year>
;
<volume>14</volume>
:
<fpage>247</fpage>
<lpage>279</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1016/0010-0285(82)90010-X">10.1016/0010-0285(82)90010-X</ext-link>
</comment>
<pub-id pub-id-type="pmid">7083803</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0124952.ref053">
<label>53</label>
<mixed-citation publication-type="journal">
<name>
<surname>Miller</surname>
<given-names>J</given-names>
</name>
.
<article-title>Timecourse of coactivation in bimodal divided attention</article-title>
.
<source>Percept Psychophys</source>
.
<year>1986</year>
;
<volume>40</volume>
:
<fpage>331</fpage>
<lpage>343</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.3758/BF03203025">10.3758/BF03203025</ext-link>
</comment>
<pub-id pub-id-type="pmid">3786102</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0124952.ref054">
<label>54</label>
<mixed-citation publication-type="journal">
<name>
<surname>Ernst</surname>
<given-names>MO</given-names>
</name>
,
<name>
<surname>Bülthoff</surname>
<given-names>HH</given-names>
</name>
.
<article-title>Merging the senses into a robust percept</article-title>
.
<source>Trends Cogn Sci</source>
.
<year>2004</year>
;
<volume>8</volume>
:
<fpage>162</fpage>
<lpage>169</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1016/j.tics.2004.02.002">10.1016/j.tics.2004.02.002</ext-link>
</comment>
<pub-id pub-id-type="pmid">15050512</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0124952.ref055">
<label>55</label>
<mixed-citation publication-type="journal">
<name>
<surname>Walker</surname>
<given-names>KMM</given-names>
</name>
,
<name>
<surname>Schnupp</surname>
<given-names>JWH</given-names>
</name>
,
<name>
<surname>Hart-Schnupp</surname>
<given-names>SMB</given-names>
</name>
,
<name>
<surname>King</surname>
<given-names>AJ</given-names>
</name>
,
<name>
<surname>Bizley</surname>
<given-names>JK</given-names>
</name>
.
<article-title>Pitch discrimination by ferrets for simple and complex sounds</article-title>
.
<source>J Acoust Soc Am</source>
.
<year>2009</year>
;
<volume>126</volume>
:
<fpage>1321</fpage>
<lpage>1335</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1121/1.3179676">10.1121/1.3179676</ext-link>
</comment>
<pub-id pub-id-type="pmid">19739746</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0124952.ref056">
<label>56</label>
<mixed-citation publication-type="journal">
<name>
<surname>Ulrich</surname>
<given-names>R</given-names>
</name>
,
<name>
<surname>Miller</surname>
<given-names>J</given-names>
</name>
,
<name>
<surname>Schröter</surname>
<given-names>H</given-names>
</name>
.
<article-title>Testing the race model inequality: an algorithm and computer programs</article-title>
.
<source>Behav Res Methods</source>
.
<year>2007</year>
;
<volume>39</volume>
:
<fpage>291</fpage>
<lpage>302</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.3758/BF03193160">10.3758/BF03193160</ext-link>
</comment>
<pub-id pub-id-type="pmid">17695357</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0124952.ref057">
<label>57</label>
<mixed-citation publication-type="journal">
<name>
<surname>Raab</surname>
<given-names>DH</given-names>
</name>
.
<article-title>Statistical facilitation of simple reaction times</article-title>
.
<source>Trans N Y Acad Sci</source>
.
<year>1962</year>
;
<volume>24</volume>
:
<fpage>574</fpage>
<lpage>590</lpage>
.
<pub-id pub-id-type="pmid">14489538</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0124952.ref058">
<label>58</label>
<mixed-citation publication-type="journal">
<name>
<surname>Kilkenny</surname>
<given-names>C</given-names>
</name>
,
<name>
<surname>Browne</surname>
<given-names>WJ</given-names>
</name>
,
<name>
<surname>Cuthill</surname>
<given-names>IC</given-names>
</name>
,
<name>
<surname>Emerson</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Altman</surname>
<given-names>DG</given-names>
</name>
.
<article-title>Improving bioscience research reporting: the ARRIVE guidelines for reporting animal research</article-title>
.
<source>Osteoarthr Cartil OARS Osteoarthr Res Soc</source>
.
<year>2012</year>
;
<volume>20</volume>
:
<fpage>256</fpage>
<lpage>260</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1016/j.joca.2012.02.010">10.1016/j.joca.2012.02.010</ext-link>
</comment>
</mixed-citation>
</ref>
<ref id="pone.0124952.ref059">
<label>59</label>
<mixed-citation publication-type="journal">
<name>
<surname>Brainard</surname>
<given-names>DH</given-names>
</name>
.
<article-title>The psychophysics toolbox</article-title>
.
<source>Spat Vis</source>
.
<year>1997</year>
;
<volume>10</volume>
:
<fpage>433</fpage>
<lpage>436</lpage>
.
<pub-id pub-id-type="pmid">9176952</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0124952.ref060">
<label>60</label>
<mixed-citation publication-type="journal">
<name>
<surname>Kaernbach</surname>
<given-names>C</given-names>
</name>
.
<article-title>Simple adaptive testing with the weighted up-down method</article-title>
.
<source>Percept Psychophys</source>
.
<year>1991</year>
;
<volume>49</volume>
:
<fpage>227</fpage>
<lpage>229</lpage>
.
<pub-id pub-id-type="pmid">2011460</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0124952.ref061">
<label>61</label>
<mixed-citation publication-type="journal">
<name>
<surname>Miller</surname>
<given-names>J</given-names>
</name>
,
<name>
<surname>Ulrich</surname>
<given-names>R</given-names>
</name>
.
<article-title>Simple reaction time and statistical facilitation: a parallel grains model</article-title>
.
<source>Cognit Psychol</source>
.
<year>2003</year>
;
<volume>46</volume>
:
<fpage>101</fpage>
<lpage>151</lpage>
.
<pub-id pub-id-type="pmid">12643892</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0124952.ref062">
<label>62</label>
<mixed-citation publication-type="journal">
<name>
<surname>Frassinetti</surname>
<given-names>F</given-names>
</name>
,
<name>
<surname>Bolognini</surname>
<given-names>N</given-names>
</name>
,
<name>
<surname>Làdavas</surname>
<given-names>E</given-names>
</name>
.
<article-title>Enhancement of visual perception by crossmodal visuo-auditory interaction</article-title>
.
<source>Exp Brain Res</source>
.
<year>2002</year>
;
<volume>147</volume>
:
<fpage>332</fpage>
<lpage>343</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1007/s00221-002-1262-y">10.1007/s00221-002-1262-y</ext-link>
</comment>
<pub-id pub-id-type="pmid">12428141</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0124952.ref063">
<label>63</label>
<mixed-citation publication-type="journal">
<name>
<surname>Rowland</surname>
<given-names>B</given-names>
</name>
,
<name>
<surname>Stanford</surname>
<given-names>T</given-names>
</name>
,
<name>
<surname>Stein</surname>
<given-names>B</given-names>
</name>
.
<article-title>A Bayesian model unifies multisensory spatial localization with the physiological properties of the superior colliculus</article-title>
.
<source>Exp Brain Res</source>
.
<year>2007</year>
;
<volume>180</volume>
:
<fpage>153</fpage>
<lpage>161</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1007/s00221-006-0847-2">10.1007/s00221-006-0847-2</ext-link>
</comment>
<pub-id pub-id-type="pmid">17546470</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0124952.ref064">
<label>64</label>
<mixed-citation publication-type="journal">
<name>
<surname>Lippert</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Logothetis</surname>
<given-names>NK</given-names>
</name>
,
<name>
<surname>Kayser</surname>
<given-names>C</given-names>
</name>
.
<article-title>Improvement of visual contrast detection by a simultaneous sound</article-title>
.
<source>Brain Res</source>
.
<year>2007</year>
;
<volume>1173</volume>
:
<fpage>102</fpage>
<lpage>109</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1016/j.brainres.2007.07.050">10.1016/j.brainres.2007.07.050</ext-link>
</comment>
<pub-id pub-id-type="pmid">17765208</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0124952.ref065">
<label>65</label>
<mixed-citation publication-type="journal">
<name>
<surname>Teder-Sälejärvi</surname>
<given-names>WA</given-names>
</name>
,
<name>
<surname>Di Russo</surname>
<given-names>F</given-names>
</name>
,
<name>
<surname>McDonald</surname>
<given-names>JJ</given-names>
</name>
,
<name>
<surname>Hillyard</surname>
<given-names>SA</given-names>
</name>
.
<article-title>Effects of spatial congruity on audio-visual multimodal integration</article-title>
.
<source>J Cogn Neurosci</source>
.
<year>2005</year>
;
<volume>17</volume>
:
<fpage>1396</fpage>
<lpage>1409</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1162/0898929054985383">10.1162/0898929054985383</ext-link>
</comment>
<pub-id pub-id-type="pmid">16197693</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0124952.ref066">
<label>66</label>
<mixed-citation publication-type="journal">
<name>
<surname>McDonald</surname>
<given-names>JJ</given-names>
</name>
,
<name>
<surname>Teder-Sälejärvi</surname>
<given-names>WA</given-names>
</name>
,
<name>
<surname>Hillyard</surname>
<given-names>SA</given-names>
</name>
.
<article-title>Involuntary orienting to sound improves visual perception</article-title>
.
<source>Nature</source>
.
<year>2000</year>
;
<volume>407</volume>
:
<fpage>906</fpage>
<lpage>908</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1038/35038085">10.1038/35038085</ext-link>
</comment>
<pub-id pub-id-type="pmid">11057669</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0124952.ref067">
<label>67</label>
<mixed-citation publication-type="journal">
<name>
<surname>Oruç</surname>
<given-names>I</given-names>
</name>
,
<name>
<surname>Maloney</surname>
<given-names>LT</given-names>
</name>
,
<name>
<surname>Landy</surname>
<given-names>MS</given-names>
</name>
.
<article-title>Weighted linear cue combination with possibly correlated error</article-title>
.
<source>Vision Res</source>
.
<year>2003</year>
;
<volume>43</volume>
:
<fpage>2451</fpage>
<lpage>2468</lpage>
.
<pub-id pub-id-type="pmid">12972395</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0124952.ref068">
<label>68</label>
<mixed-citation publication-type="book">
<name>
<surname>Welch</surname>
<given-names>RB</given-names>
</name>
,
<name>
<surname>Warren</surname>
<given-names>DH</given-names>
</name>
.
<chapter-title>Intersensory interactions</chapter-title>
In
<source>Handbook of perception and human performance</source>
(
<name>
<surname>Boff</surname>
<given-names>K.R.</given-names>
</name>
,
<etal>et al</etal>
, eds.).
<publisher-loc>New York</publisher-loc>
:
<publisher-name>Wiley</publisher-name>
;
<year>1986</year>
.</mixed-citation>
</ref>
<ref id="pone.0124952.ref069">
<label>69</label>
<mixed-citation publication-type="journal">
<name>
<surname>Helmchen</surname>
<given-names>F</given-names>
</name>
,
<name>
<surname>Denk</surname>
<given-names>W</given-names>
</name>
,
<name>
<surname>Kerr</surname>
<given-names>JND</given-names>
</name>
.
<article-title>Miniaturization of two-photon microscopy for imaging in freely moving animals</article-title>
.
<source>Cold Spring Harb Protoc</source>
.
<year>2013</year>
;
<volume>2013</volume>
: pdb.top078147.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1101/pdb.top078147">10.1101/pdb.top078147</ext-link>
</comment>
</mixed-citation>
</ref>
</ref-list>
</back>
</pmc>
</record>

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/HapticV1/Data/Pmc/Curation
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 000291 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/Pmc/Curation/biblio.hfd -nk 000291 | SxmlIndent | more

To add a link to this page in the Wicri network

{{Explor lien
   |wiki=    Ticri/CIDE
   |area=    HapticV1
   |flux=    Pmc
   |étape=   Curation
   |type=    RBID
   |clé=     PMC:4430165
   |texte=   Crossmodal Integration Improves Sensory Detection Thresholds in the Ferret
}}

To generate wiki pages

HfdIndexSelect -h $EXPLOR_AREA/Data/Pmc/Curation/RBID.i   -Sk "pubmed:25970327" \
       | HfdSelect -Kh $EXPLOR_AREA/Data/Pmc/Curation/biblio.hfd   \
       | NlmPubMed2Wicri -a HapticV1 

Wicri

This area was generated with Dilib version V0.6.23.
Data generation: Mon Jun 13 01:09:46 2016. Site generation: Wed Mar 6 09:54:07 2024