Exploration server on haptic devices


Optimal multisensory decision-making in a reaction-time task

Internal identifier: 000A56 (Pmc/Checkpoint); previous: 000A55; next: 000A57


Authors: Jan Drugowitsch [United States, France, Switzerland]; Gregory C. DeAngelis [United States]; Eliana M. Klier [United States]; Dora E. Angelaki [United States]; Alexandre Pouget [United States, Switzerland]

Source:

RBID: PMC:4102720

Abstract

Humans and animals can integrate sensory evidence from various sources to make decisions in a statistically near-optimal manner, provided that the stimulus presentation time is fixed across trials. Little is known about whether optimality is preserved when subjects can choose when to make a decision (reaction-time task), nor when sensory inputs have time-varying reliability. Using a reaction-time version of a visual/vestibular heading discrimination task, we show that behavior is clearly sub-optimal when quantified with traditional optimality metrics that ignore reaction times. We created a computational model that accumulates evidence optimally across both cues and time, and trades off accuracy with decision speed. This model quantitatively explains subjects' choices and reaction times, supporting the hypothesis that subjects do, in fact, accumulate evidence optimally over time and across sensory modalities, even when the reaction time is under the subject's control.

DOI: http://dx.doi.org/10.7554/eLife.03005.001


URL:
DOI: 10.7554/eLife.03005
PubMed: 24929965
PubMed Central: 4102720



The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">Optimal multisensory decision-making in a reaction-time task</title>
<author>
<name sortKey="Drugowitsch, Jan" sort="Drugowitsch, Jan" uniqKey="Drugowitsch J" first="Jan" last="Drugowitsch">Jan Drugowitsch</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<addr-line>Department of Brain and Cognitive Sciences</addr-line>
,
<institution>University of Rochester</institution>
,
<addr-line>New York</addr-line>
,
<country>United States</country>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea># see nlm:aff country strict</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="aff2">
<institution>Institut National de la Santé et de la Recherche Médicale, École Normale Supérieure</institution>
,
<addr-line>Paris</addr-line>
,
<country>France</country>
</nlm:aff>
<country xml:lang="fr">France</country>
<wicri:regionArea># see nlm:aff country strict</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="aff4">
<addr-line>Département des Neurosciences Fondamentales</addr-line>
,
<institution>Université de Genève</institution>
,
<addr-line>Geneva</addr-line>
,
<country>Switzerland</country>
</nlm:aff>
<country xml:lang="fr">Suisse</country>
<wicri:regionArea># see nlm:aff country strict</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Deangelis, Gregory C" sort="Deangelis, Gregory C" uniqKey="Deangelis G" first="Gregory C" last="Deangelis">Gregory C. Deangelis</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<addr-line>Department of Brain and Cognitive Sciences</addr-line>
,
<institution>University of Rochester</institution>
,
<addr-line>New York</addr-line>
,
<country>United States</country>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea># see nlm:aff country strict</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Klier, Eliana M" sort="Klier, Eliana M" uniqKey="Klier E" first="Eliana M" last="Klier">Eliana M. Klier</name>
<affiliation wicri:level="1">
<nlm:aff id="aff3">
<addr-line>Department of Neuroscience</addr-line>
,
<institution>Baylor College of Medicine</institution>
,
<addr-line>Houston</addr-line>
,
<country>United States</country>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea># see nlm:aff country strict</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Angelaki, Dora E" sort="Angelaki, Dora E" uniqKey="Angelaki D" first="Dora E" last="Angelaki">Dora E. Angelaki</name>
<affiliation wicri:level="1">
<nlm:aff id="aff3">
<addr-line>Department of Neuroscience</addr-line>
,
<institution>Baylor College of Medicine</institution>
,
<addr-line>Houston</addr-line>
,
<country>United States</country>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea># see nlm:aff country strict</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Pouget, Alexandre" sort="Pouget, Alexandre" uniqKey="Pouget A" first="Alexandre" last="Pouget">Alexandre Pouget</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<addr-line>Department of Brain and Cognitive Sciences</addr-line>
,
<institution>University of Rochester</institution>
,
<addr-line>New York</addr-line>
,
<country>United States</country>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea># see nlm:aff country strict</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="aff4">
<addr-line>Département des Neurosciences Fondamentales</addr-line>
,
<institution>Université de Genève</institution>
,
<addr-line>Geneva</addr-line>
,
<country>Switzerland</country>
</nlm:aff>
<country xml:lang="fr">Suisse</country>
<wicri:regionArea># see nlm:aff country strict</wicri:regionArea>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PMC</idno>
<idno type="pmid">24929965</idno>
<idno type="pmc">4102720</idno>
<idno type="url">http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4102720</idno>
<idno type="RBID">PMC:4102720</idno>
<idno type="doi">10.7554/eLife.03005</idno>
<date when="2014">2014</date>
<idno type="wicri:Area/Pmc/Corpus">002543</idno>
<idno type="wicri:Area/Pmc/Curation">002543</idno>
<idno type="wicri:Area/Pmc/Checkpoint">000A56</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a" type="main">Optimal multisensory decision-making in a reaction-time task</title>
<author>
<name sortKey="Drugowitsch, Jan" sort="Drugowitsch, Jan" uniqKey="Drugowitsch J" first="Jan" last="Drugowitsch">Jan Drugowitsch</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<addr-line>Department of Brain and Cognitive Sciences</addr-line>
,
<institution>University of Rochester</institution>
,
<addr-line>New York</addr-line>
,
<country>United States</country>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea># see nlm:aff country strict</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="aff2">
<institution>Institut National de la Santé et de la Recherche Médicale, École Normale Supérieure</institution>
,
<addr-line>Paris</addr-line>
,
<country>France</country>
</nlm:aff>
<country xml:lang="fr">France</country>
<wicri:regionArea># see nlm:aff country strict</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="aff4">
<addr-line>Département des Neurosciences Fondamentales</addr-line>
,
<institution>Université de Genève</institution>
,
<addr-line>Geneva</addr-line>
,
<country>Switzerland</country>
</nlm:aff>
<country xml:lang="fr">Suisse</country>
<wicri:regionArea># see nlm:aff country strict</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Deangelis, Gregory C" sort="Deangelis, Gregory C" uniqKey="Deangelis G" first="Gregory C" last="Deangelis">Gregory C. Deangelis</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<addr-line>Department of Brain and Cognitive Sciences</addr-line>
,
<institution>University of Rochester</institution>
,
<addr-line>New York</addr-line>
,
<country>United States</country>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea># see nlm:aff country strict</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Klier, Eliana M" sort="Klier, Eliana M" uniqKey="Klier E" first="Eliana M" last="Klier">Eliana M. Klier</name>
<affiliation wicri:level="1">
<nlm:aff id="aff3">
<addr-line>Department of Neuroscience</addr-line>
,
<institution>Baylor College of Medicine</institution>
,
<addr-line>Houston</addr-line>
,
<country>United States</country>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea># see nlm:aff country strict</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Angelaki, Dora E" sort="Angelaki, Dora E" uniqKey="Angelaki D" first="Dora E" last="Angelaki">Dora E. Angelaki</name>
<affiliation wicri:level="1">
<nlm:aff id="aff3">
<addr-line>Department of Neuroscience</addr-line>
,
<institution>Baylor College of Medicine</institution>
,
<addr-line>Houston</addr-line>
,
<country>United States</country>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea># see nlm:aff country strict</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Pouget, Alexandre" sort="Pouget, Alexandre" uniqKey="Pouget A" first="Alexandre" last="Pouget">Alexandre Pouget</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<addr-line>Department of Brain and Cognitive Sciences</addr-line>
,
<institution>University of Rochester</institution>
,
<addr-line>New York</addr-line>
,
<country>United States</country>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea># see nlm:aff country strict</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="aff4">
<addr-line>Département des Neurosciences Fondamentales</addr-line>
,
<institution>Université de Genève</institution>
,
<addr-line>Geneva</addr-line>
,
<country>Switzerland</country>
</nlm:aff>
<country xml:lang="fr">Suisse</country>
<wicri:regionArea># see nlm:aff country strict</wicri:regionArea>
</affiliation>
</author>
</analytic>
<series>
<title level="j">eLife</title>
<idno type="eISSN">2050-084X</idno>
<imprint>
<date when="2014">2014</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass></textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">
<p>Humans and animals can integrate sensory evidence from various sources to make decisions in a statistically near-optimal manner, provided that the stimulus presentation time is fixed across trials. Little is known about whether optimality is preserved when subjects can choose when to make a decision (reaction-time task), nor when sensory inputs have time-varying reliability. Using a reaction-time version of a visual/vestibular heading discrimination task, we show that behavior is clearly sub-optimal when quantified with traditional optimality metrics that ignore reaction times. We created a computational model that accumulates evidence optimally across both cues and time, and trades off accuracy with decision speed. This model quantitatively explains subjects' choices and reaction times, supporting the hypothesis that subjects do, in fact, accumulate evidence optimally over time and across sensory modalities, even when the reaction time is under the subject's control.</p>
<p>
<bold>DOI:</bold>
<ext-link ext-link-type="doi" xlink:href="10.7554/eLife.03005.001">http://dx.doi.org/10.7554/eLife.03005.001</ext-link>
</p>
</div>
</front>
<back>
<div1 type="bibliography">
<listBibl>
<biblStruct>
<analytic>
<author>
<name sortKey="Battaglia, Pw" uniqKey="Battaglia P">PW Battaglia</name>
</author>
<author>
<name sortKey="Jacobs, Ra" uniqKey="Jacobs R">RA Jacobs</name>
</author>
<author>
<name sortKey="Aslin, Rn" uniqKey="Aslin R">RN Aslin</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Beck, Jm" uniqKey="Beck J">JM Beck</name>
</author>
<author>
<name sortKey="Ma, Wj" uniqKey="Ma W">WJ Ma</name>
</author>
<author>
<name sortKey="Kiani, R" uniqKey="Kiani R">R Kiani</name>
</author>
<author>
<name sortKey="Hanks, T" uniqKey="Hanks T">T Hanks</name>
</author>
<author>
<name sortKey="Churchland, Ak" uniqKey="Churchland A">AK Churchland</name>
</author>
<author>
<name sortKey="Roitman, J" uniqKey="Roitman J">J Roitman</name>
</author>
<author>
<name sortKey="Shadlen, Mn" uniqKey="Shadlen M">MN Shadlen</name>
</author>
<author>
<name sortKey="Latham, Pe" uniqKey="Latham P">PE Latham</name>
</author>
<author>
<name sortKey="Pouget, A" uniqKey="Pouget A">A Pouget</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bogacz, R" uniqKey="Bogacz R">R Bogacz</name>
</author>
<author>
<name sortKey="Brown, E" uniqKey="Brown E">E Brown</name>
</author>
<author>
<name sortKey="Moehlis, J" uniqKey="Moehlis J">J Moehlis</name>
</author>
<author>
<name sortKey="Holmes, P" uniqKey="Holmes P">P Holmes</name>
</author>
<author>
<name sortKey="Cohen, Jd" uniqKey="Cohen J">JD Cohen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Britten, Kh" uniqKey="Britten K">KH Britten</name>
</author>
<author>
<name sortKey="Van Wezel, Rj" uniqKey="Van Wezel R">RJ van Wezel</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Britten, Kh" uniqKey="Britten K">KH Britten</name>
</author>
<author>
<name sortKey="Van Wezel, Rj" uniqKey="Van Wezel R">RJ Van Wezel</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Chen, A" uniqKey="Chen A">A Chen</name>
</author>
<author>
<name sortKey="Deangelis, Gc" uniqKey="Deangelis G">GC DeAngelis</name>
</author>
<author>
<name sortKey="Angelaki, De" uniqKey="Angelaki D">DE Angelaki</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Chen, A" uniqKey="Chen A">A Chen</name>
</author>
<author>
<name sortKey="Deangelis, Gc" uniqKey="Deangelis G">GC DeAngelis</name>
</author>
<author>
<name sortKey="Angelaki, De" uniqKey="Angelaki D">DE Angelaki</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Clark, Jj" uniqKey="Clark J">JJ Clark</name>
</author>
<author>
<name sortKey="Yuille, Al" uniqKey="Yuille A">AL Yuille</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Colonius, H" uniqKey="Colonius H">H Colonius</name>
</author>
<author>
<name sortKey="Arndt, P" uniqKey="Arndt P">P Arndt</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Corneil, Bd" uniqKey="Corneil B">BD Corneil</name>
</author>
<author>
<name sortKey="Van Wanrooij, M" uniqKey="Van Wanrooij M">M Van Wanrooij</name>
</author>
<author>
<name sortKey="Munoz, Dp" uniqKey="Munoz D">DP Munoz</name>
</author>
<author>
<name sortKey="Van Opstal, Aj" uniqKey="Van Opstal A">AJ Van Opstal</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ernst, Mo" uniqKey="Ernst M">MO Ernst</name>
</author>
<author>
<name sortKey="Banks, Ms" uniqKey="Banks M">MS Banks</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Fernandez, C" uniqKey="Fernandez C">C Fernandez</name>
</author>
<author>
<name sortKey="Goldberg, Jm" uniqKey="Goldberg J">JM Goldberg</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Fetsch, Cr" uniqKey="Fetsch C">CR Fetsch</name>
</author>
<author>
<name sortKey="Pouget, A" uniqKey="Pouget A">A Pouget</name>
</author>
<author>
<name sortKey="Deangelis, Gc" uniqKey="Deangelis G">GC DeAngelis</name>
</author>
<author>
<name sortKey="Angelaki, De" uniqKey="Angelaki D">DE Angelaki</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Fetsch, Cr" uniqKey="Fetsch C">CR Fetsch</name>
</author>
<author>
<name sortKey="Rajguru, Sm" uniqKey="Rajguru S">SM Rajguru</name>
</author>
<author>
<name sortKey="Karunaratne, A" uniqKey="Karunaratne A">A Karunaratne</name>
</author>
<author>
<name sortKey="Gu, Y" uniqKey="Gu Y">Y Gu</name>
</author>
<author>
<name sortKey="Angelaki, De" uniqKey="Angelaki D">DE Angelaki</name>
</author>
<author>
<name sortKey="Deangelis, Gc" uniqKey="Deangelis G">GC Deangelis</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Fetsch, Cr" uniqKey="Fetsch C">CR Fetsch</name>
</author>
<author>
<name sortKey="Turner, Ah" uniqKey="Turner A">AH Turner</name>
</author>
<author>
<name sortKey="Deangelis, Gc" uniqKey="Deangelis G">GC DeAngelis</name>
</author>
<author>
<name sortKey="Angelaki, De" uniqKey="Angelaki D">DE Angelaki</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Goodman, Sn" uniqKey="Goodman S">SN Goodman</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Graf, Ab" uniqKey="Graf A">AB Graf</name>
</author>
<author>
<name sortKey="Kohn, A" uniqKey="Kohn A">A Kohn</name>
</author>
<author>
<name sortKey="Jazayeri, M" uniqKey="Jazayeri M">M Jazayeri</name>
</author>
<author>
<name sortKey="Movshon, Ja" uniqKey="Movshon J">JA Movshon</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Grice, Gr" uniqKey="Grice G">GR Grice</name>
</author>
<author>
<name sortKey="Canham, L" uniqKey="Canham L">L Canham</name>
</author>
<author>
<name sortKey="Boroughs, Jm" uniqKey="Boroughs J">JM Boroughs</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gu, Y" uniqKey="Gu Y">Y Gu</name>
</author>
<author>
<name sortKey="Deangelis, Gc" uniqKey="Deangelis G">GC DeAngelis</name>
</author>
<author>
<name sortKey="Angelaki, De" uniqKey="Angelaki D">DE Angelaki</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gu, Y" uniqKey="Gu Y">Y Gu</name>
</author>
<author>
<name sortKey="Angelaki, De" uniqKey="Angelaki D">DE Angelaki</name>
</author>
<author>
<name sortKey="Deangelis, Gc" uniqKey="Deangelis G">GC Deangelis</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gu, Y" uniqKey="Gu Y">Y Gu</name>
</author>
<author>
<name sortKey="Deangelis, Gc" uniqKey="Deangelis G">GC Deangelis</name>
</author>
<author>
<name sortKey="Angelaki, De" uniqKey="Angelaki D">DE Angelaki</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gu, Y" uniqKey="Gu Y">Y Gu</name>
</author>
<author>
<name sortKey="Fetsch, Cr" uniqKey="Fetsch C">CR Fetsch</name>
</author>
<author>
<name sortKey="Adeyemo, B" uniqKey="Adeyemo B">B Adeyemo</name>
</author>
<author>
<name sortKey="Deangelis, Gc" uniqKey="Deangelis G">GC Deangelis</name>
</author>
<author>
<name sortKey="Angelaki, De" uniqKey="Angelaki D">DE Angelaki</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gu, Y" uniqKey="Gu Y">Y Gu</name>
</author>
<author>
<name sortKey="Watkins, Pv" uniqKey="Watkins P">PV Watkins</name>
</author>
<author>
<name sortKey="Angelaki, De" uniqKey="Angelaki D">DE Angelaki</name>
</author>
<author>
<name sortKey="Deangelis, Gc" uniqKey="Deangelis G">GC DeAngelis</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Heuer, Hw" uniqKey="Heuer H">HW Heuer</name>
</author>
<author>
<name sortKey="Britten, Kh" uniqKey="Britten K">KH Britten</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Jeffreys, H" uniqKey="Jeffreys H">H Jeffreys</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kiani, R" uniqKey="Kiani R">R Kiani</name>
</author>
<author>
<name sortKey="Hanks, Td" uniqKey="Hanks T">TD Hanks</name>
</author>
<author>
<name sortKey="Shadlen, Mn" uniqKey="Shadlen M">MN Shadlen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Knill, Dc" uniqKey="Knill D">DC Knill</name>
</author>
<author>
<name sortKey="Saunders, Ja" uniqKey="Saunders J">JA Saunders</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Laming, Drj" uniqKey="Laming D">DRJ Laming</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lisberger, Sg" uniqKey="Lisberger S">SG Lisberger</name>
</author>
<author>
<name sortKey="Movshon, Ja" uniqKey="Movshon J">JA Movshon</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ma, Wj" uniqKey="Ma W">WJ Ma</name>
</author>
<author>
<name sortKey="Beck, Jm" uniqKey="Beck J">JM Beck</name>
</author>
<author>
<name sortKey="Latham, Pe" uniqKey="Latham P">PE Latham</name>
</author>
<author>
<name sortKey="Pouget, A" uniqKey="Pouget A">A Pouget</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mazurek, Me" uniqKey="Mazurek M">ME Mazurek</name>
</author>
<author>
<name sortKey="Roitman, Jd" uniqKey="Roitman J">JD Roitman</name>
</author>
<author>
<name sortKey="Ditterich, J" uniqKey="Ditterich J">J Ditterich</name>
</author>
<author>
<name sortKey="Shadlen, Mn" uniqKey="Shadlen M">MN Shadlen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Miller, J" uniqKey="Miller J">J Miller</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Morgan, Ml" uniqKey="Morgan M">ML Morgan</name>
</author>
<author>
<name sortKey="Deangelis, Gc" uniqKey="Deangelis G">GC Deangelis</name>
</author>
<author>
<name sortKey="Angelaki, De" uniqKey="Angelaki D">DE Angelaki</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Onken, A" uniqKey="Onken A">A Onken</name>
</author>
<author>
<name sortKey="Drugowitsch, J" uniqKey="Drugowitsch J">J Drugowitsch</name>
</author>
<author>
<name sortKey="Kanitscheider, I" uniqKey="Kanitscheider I">I Kanitscheider</name>
</author>
<author>
<name sortKey="Deangelis, Gc" uniqKey="Deangelis G">GC DeAngelis</name>
</author>
<author>
<name sortKey="Beck, Jm" uniqKey="Beck J">JM Beck</name>
</author>
<author>
<name sortKey="Pouget, A" uniqKey="Pouget A">A Pouget</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Otto, Tu" uniqKey="Otto T">TU Otto</name>
</author>
<author>
<name sortKey="Mamassian, P" uniqKey="Mamassian P">P Mamassian</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Palmer, J" uniqKey="Palmer J">J Palmer</name>
</author>
<author>
<name sortKey="Huk, Ac" uniqKey="Huk A">AC Huk</name>
</author>
<author>
<name sortKey="Shadlen, Mn" uniqKey="Shadlen M">MN Shadlen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Papoulis, A" uniqKey="Papoulis A">A Papoulis</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Price, Ns" uniqKey="Price N">NS Price</name>
</author>
<author>
<name sortKey="Ono, S" uniqKey="Ono S">S Ono</name>
</author>
<author>
<name sortKey="Mustari, Mj" uniqKey="Mustari M">MJ Mustari</name>
</author>
<author>
<name sortKey="Ibbotson, Mr" uniqKey="Ibbotson M">MR Ibbotson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Raab, Dh" uniqKey="Raab D">DH Raab</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ratcliff, R" uniqKey="Ratcliff R">R Ratcliff</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ratcliff, R" uniqKey="Ratcliff R">R Ratcliff</name>
</author>
<author>
<name sortKey="Smith, Pl" uniqKey="Smith P">PL Smith</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Schlack, A" uniqKey="Schlack A">A Schlack</name>
</author>
<author>
<name sortKey="Sterbing D Angelo, Sj" uniqKey="Sterbing D Angelo S">SJ Sterbing-D'Angelo</name>
</author>
<author>
<name sortKey="Hartung, K" uniqKey="Hartung K">K Hartung</name>
</author>
<author>
<name sortKey="Hoffmann, Kp" uniqKey="Hoffmann K">KP Hoffmann</name>
</author>
<author>
<name sortKey="Bremmer, F" uniqKey="Bremmer F">F Bremmer</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Smith, Pl" uniqKey="Smith P">PL Smith</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Stanford, Tr" uniqKey="Stanford T">TR Stanford</name>
</author>
<author>
<name sortKey="Shankar, S" uniqKey="Shankar S">S Shankar</name>
</author>
<author>
<name sortKey="Massoglia, Dp" uniqKey="Massoglia D">DP Massoglia</name>
</author>
<author>
<name sortKey="Costello, Mg" uniqKey="Costello M">MG Costello</name>
</author>
<author>
<name sortKey="Salinas, E" uniqKey="Salinas E">E Salinas</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Stephan, Ke" uniqKey="Stephan K">KE Stephan</name>
</author>
<author>
<name sortKey="Penny, Wd" uniqKey="Penny W">WD Penny</name>
</author>
<author>
<name sortKey="Daunizeau, J" uniqKey="Daunizeau J">J Daunizeau</name>
</author>
<author>
<name sortKey="Moran, Rj" uniqKey="Moran R">RJ Moran</name>
</author>
<author>
<name sortKey="Friston, Kj" uniqKey="Friston K">KJ Friston</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Tolhurst, Dj" uniqKey="Tolhurst D">DJ Tolhurst</name>
</author>
<author>
<name sortKey="Movshon, Ja" uniqKey="Movshon J">JA Movshon</name>
</author>
<author>
<name sortKey="Dean, Af" uniqKey="Dean A">AF Dean</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Townsend, Jt" uniqKey="Townsend J">JT Townsend</name>
</author>
<author>
<name sortKey="Wenger, Mj" uniqKey="Wenger M">MJ Wenger</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Van Beers, Rj" uniqKey="Van Beers R">RJ van Beers</name>
</author>
<author>
<name sortKey="Sittig, Ac" uniqKey="Sittig A">AC Sittig</name>
</author>
<author>
<name sortKey="Denier Van Der Gon, Jj" uniqKey="Denier Van Der Gon J">JJ Denier van der Gon</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Whitchurch, Ea" uniqKey="Whitchurch E">EA Whitchurch</name>
</author>
<author>
<name sortKey="Takahashi, Tt" uniqKey="Takahashi T">TT Takahashi</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Wichmann, Fa" uniqKey="Wichmann F">FA Wichmann</name>
</author>
<author>
<name sortKey="Hill, Nj" uniqKey="Hill N">NJ Hill</name>
</author>
</analytic>
</biblStruct>
</listBibl>
</div1>
</back>
</TEI>
<pmc article-type="research-article">
<pmc-dir>properties open_access</pmc-dir>
<front>
<journal-meta>
<journal-id journal-id-type="nlm-ta">eLife</journal-id>
<journal-id journal-id-type="iso-abbrev">Elife (Cambridge)</journal-id>
<journal-id journal-id-type="hwp">eLife</journal-id>
<journal-id journal-id-type="publisher-id">eLife</journal-id>
<journal-title-group>
<journal-title>eLife</journal-title>
</journal-title-group>
<issn pub-type="epub">2050-084X</issn>
<publisher>
<publisher-name>eLife Sciences Publications, Ltd</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="pmid">24929965</article-id>
<article-id pub-id-type="pmc">4102720</article-id>
<article-id pub-id-type="publisher-id">03005</article-id>
<article-id pub-id-type="doi">10.7554/eLife.03005</article-id>
<article-categories>
<subj-group subj-group-type="display-channel">
<subject>Research Article</subject>
</subj-group>
<subj-group subj-group-type="heading">
<subject>Neuroscience</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>Optimal multisensory decision-making in a reaction-time task</article-title>
</title-group>
<contrib-group>
<contrib id="author-12041" contrib-type="author">
<name>
<surname>Drugowitsch</surname>
<given-names>Jan</given-names>
</name>
<xref ref-type="aff" rid="aff1">1</xref>
<xref ref-type="aff" rid="aff2">2</xref>
<xref ref-type="aff" rid="aff4">4</xref>
<xref ref-type="corresp" rid="cor1">*</xref>
<xref ref-type="fn" rid="con1"></xref>
<xref ref-type="fn" rid="conf2"></xref>
</contrib>
<contrib id="author-11799" contrib-type="author">
<name>
<surname>DeAngelis</surname>
<given-names>Gregory C</given-names>
</name>
<xref ref-type="aff" rid="aff1">1</xref>
<xref ref-type="author-notes" rid="equal-contrib"></xref>
<xref ref-type="other" rid="par-2"></xref>
<xref ref-type="fn" rid="con2"></xref>
<xref ref-type="fn" rid="conf2"></xref>
</contrib>
<contrib id="author-13296" contrib-type="author">
<name>
<surname>Klier</surname>
<given-names>Eliana M</given-names>
</name>
<xref ref-type="aff" rid="aff3">3</xref>
<xref ref-type="fn" rid="con4"></xref>
<xref ref-type="fn" rid="conf2"></xref>
</contrib>
<contrib id="author-1039" contrib-type="author">
<name>
<surname>Angelaki</surname>
<given-names>Dora E</given-names>
</name>
<xref ref-type="aff" rid="aff3">3</xref>
<xref ref-type="author-notes" rid="equal-contrib"></xref>
<xref ref-type="other" rid="par-1"></xref>
<xref ref-type="fn" rid="con5"></xref>
<xref ref-type="fn" rid="conf1"></xref>
</contrib>
<contrib id="author-13298" contrib-type="author">
<name>
<surname>Pouget</surname>
<given-names>Alexandre</given-names>
</name>
<xref ref-type="aff" rid="aff1">1</xref>
<xref ref-type="aff" rid="aff4">4</xref>
<xref ref-type="author-notes" rid="equal-contrib"></xref>
<xref ref-type="other" rid="par-3"></xref>
<xref ref-type="other" rid="par-4"></xref>
<xref ref-type="other" rid="par-5"></xref>
<xref ref-type="other" rid="par-6"></xref>
<xref ref-type="fn" rid="con3"></xref>
<xref ref-type="fn" rid="conf2"></xref>
</contrib>
<aff id="aff1">
<label>1</label>
<addr-line>Department of Brain and Cognitive Sciences</addr-line>
,
<institution>University of Rochester</institution>
,
<addr-line>New York</addr-line>
,
<country>United States</country>
</aff>
<aff id="aff2">
<label>2</label>
<institution>Institut National de la Santé et de la Recherche Médicale, École Normale Supérieure</institution>
,
<addr-line>Paris</addr-line>
,
<country>France</country>
</aff>
<aff id="aff3">
<label>3</label>
<addr-line>Department of Neuroscience</addr-line>
,
<institution>Baylor College of Medicine</institution>
,
<addr-line>Houston</addr-line>
,
<country>United States</country>
</aff>
<aff id="aff4">
<label>4</label>
<addr-line>Département des Neurosciences Fondamentales</addr-line>
,
<institution>Université de Genève</institution>
,
<addr-line>Geneva</addr-line>
,
<country>Switzerland</country>
</aff>
</contrib-group>
<contrib-group>
<contrib contrib-type="editor">
<name>
<surname>Marder</surname>
<given-names>Eve</given-names>
</name>
<role>Reviewing editor</role>
<aff>
<institution>Brandeis University</institution>
,
<country>United States</country>
</aff>
</contrib>
</contrib-group>
<author-notes>
<corresp id="cor1">
<label>*</label>
For correspondence:
<email>jdrugo@gmail.com</email>
</corresp>
<fn fn-type="other" id="equal-contrib">
<label></label>
<p>These authors contributed equally to this work.</p>
</fn>
</author-notes>
<pub-date pub-type="epub">
<day>14</day>
<month>6</month>
<year>2014</year>
</pub-date>
<pub-date pub-type="collection">
<year>2014</year>
</pub-date>
<volume>3</volume>
<elocation-id>e03005</elocation-id>
<history>
<date date-type="received">
<day>04</day>
<month>4</month>
<year>2014</year>
</date>
<date date-type="accepted">
<day>12</day>
<month>6</month>
<year>2014</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright © 2014, Drugowitsch et al</copyright-statement>
<copyright-year>2014</copyright-year>
<copyright-holder>Drugowitsch et al</copyright-holder>
<license license-type="open-access" xlink:href="http://creativecommons.org/licenses/by/4.0/">
<license-p>This article is distributed under the terms of the
<ext-link ext-link-type="uri" xlink:href="http://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution License</ext-link>
, which permits unrestricted use and redistribution provided that the original author and source are credited.</license-p>
</license>
</permissions>
<self-uri content-type="pdf" xlink:type="simple" xlink:href="elife03005.pdf"></self-uri>
<abstract>
<p>Humans and animals can integrate sensory evidence from various sources to make decisions in a statistically near-optimal manner, provided that the stimulus presentation time is fixed across trials. Little is known about whether optimality is preserved when subjects can choose when to make a decision (reaction-time task), nor when sensory inputs have time-varying reliability. Using a reaction-time version of a visual/vestibular heading discrimination task, we show that behavior is clearly sub-optimal when quantified with traditional optimality metrics that ignore reaction times. We created a computational model that accumulates evidence optimally across both cues and time, and trades off accuracy with decision speed. This model quantitatively explains subjects' choices and reaction times, supporting the hypothesis that subjects do, in fact, accumulate evidence optimally over time and across sensory modalities, even when the reaction time is under the subject's control.</p>
<p>
<bold>DOI:</bold>
<ext-link ext-link-type="doi" xlink:href="10.7554/eLife.03005.001">http://dx.doi.org/10.7554/eLife.03005.001</ext-link>
</p>
</abstract>
<abstract abstract-type="executive-summary">
<title>eLife digest</title>
<p>Imagine trying out a new roller-coaster ride and doing your best to figure out if you are being hurled to the left or to the right. You might think that this task would be easier if your eyes were open because you could rely on information from your eyes and also from the vestibular system in your ears. This is also what cue combination theory says—our ability to discriminate between two potential outcomes is enhanced when we can draw on more than one of the senses.</p>
<p>However, previous tests of cue combination theory have been limited in that test subjects have been asked to respond after receiving information for a fixed period of time whereas, in real life, we tend to make a decision as soon as we have gathered sufficient information. Now, using data collected from seven human subjects in a simulator, Drugowitsch et al. have confirmed that test subjects do indeed give more correct answers in more realistic conditions when they have two sources of information to rely on, rather than only one.</p>
<p>What makes this result surprising? Traditional cue combination theories do not consider that slower decisions allow us to process more information and therefore tend to be more accurate. Drugowitsch et al. show that this shortcoming causes such theories to conclude that multiple information sources might lead to worse decisions. For example, some of their test subjects made less accurate choices when they were presented with both visual and vestibular information, compared to when only visual information was available, because they made these choices very rapidly.</p>
<p>By developing a theory that takes into account both reaction times and choice accuracy, Drugowitsch et al. were able to show that, despite different trade-offs between speed and accuracy, test subjects still combined the information from their eyes and ears in a way that was close to ideal. As such the work offers a more thorough account of human decision making.</p>
<p>
<bold>DOI:</bold>
<ext-link ext-link-type="doi" xlink:href="10.7554/eLife.03005.002">http://dx.doi.org/10.7554/eLife.03005.002</ext-link>
</p>
</abstract>
<kwd-group kwd-group-type="author-keywords">
<title>Author keywords</title>
<kwd>decision-making</kwd>
<kwd>cue combination</kwd>
<kwd>reaction time</kwd>
<kwd>diffusion models</kwd>
</kwd-group>
<kwd-group kwd-group-type="research-organism">
<title>Research organism</title>
<kwd>human</kwd>
</kwd-group>
<funding-group>
<award-group id="par-1">
<funding-source>National Institutes of Health
<ext-link ext-link-type="uri" xlink:href="http://www.crossref.org/fundref/">FundRef identification ID: </ext-link>
<named-content content-type="funder-id">http://dx.doi.org/10.13039/100000002</named-content>
</funding-source>
<award-id>R01 DC007620</award-id>
<principal-award-recipient>
<name>
<surname>Angelaki</surname>
<given-names>Dora E</given-names>
</name>
</principal-award-recipient>
</award-group>
<award-group id="par-2">
<funding-source>National Institutes of Health
<ext-link ext-link-type="uri" xlink:href="http://www.crossref.org/fundref/">FundRef identification ID: </ext-link>
<named-content content-type="funder-id">http://dx.doi.org/10.13039/100000002</named-content>
</funding-source>
<award-id>R01 EY016178</award-id>
<principal-award-recipient>
<name>
<surname>DeAngelis</surname>
<given-names>Gregory C</given-names>
</name>
</principal-award-recipient>
</award-group>
<award-group id="par-3">
<funding-source>National Science Foundation
<ext-link ext-link-type="uri" xlink:href="http://www.crossref.org/fundref/">FundRef identification ID: </ext-link>
<named-content content-type="funder-id">http://dx.doi.org/10.13039/100000001</named-content>
</funding-source>
<award-id>BCS0446730</award-id>
<principal-award-recipient>
<name>
<surname>Pouget</surname>
<given-names>Alexandre</given-names>
</name>
</principal-award-recipient>
</award-group>
<award-group id="par-4">
<funding-source>U.S. Army Research Laboratory
<ext-link ext-link-type="uri" xlink:href="http://www.crossref.org/fundref/">FundRef identification ID: </ext-link>
<named-content content-type="funder-id">http://dx.doi.org/10.13039/100006754</named-content>
</funding-source>
<award-id>Multidisciplinary University Research Initiative, N00014-07-1-0937</award-id>
<principal-award-recipient>
<name>
<surname>Pouget</surname>
<given-names>Alexandre</given-names>
</name>
</principal-award-recipient>
</award-group>
<award-group id="par-5">
<funding-source>Air Force Office of Scientific Research
<ext-link ext-link-type="uri" xlink:href="http://www.crossref.org/fundref/">FundRef identification ID: </ext-link>
<named-content content-type="funder-id">http://dx.doi.org/10.13039/100000181</named-content>
</funding-source>
<award-id>FA9550-10-1-0336</award-id>
<principal-award-recipient>
<name>
<surname>Pouget</surname>
<given-names>Alexandre</given-names>
</name>
</principal-award-recipient>
</award-group>
<award-group id="par-6">
<funding-source>James S. McDonnell Foundation
<ext-link ext-link-type="uri" xlink:href="http://www.crossref.org/fundref/">FundRef identification ID: </ext-link>
<named-content content-type="funder-id">http://dx.doi.org/10.13039/100000913</named-content>
</funding-source>
<principal-award-recipient>
<name>
<surname>Pouget</surname>
<given-names>Alexandre</given-names>
</name>
</principal-award-recipient>
</award-group>
<funding-statement>The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.</funding-statement>
</funding-group>
<custom-meta-group>
<custom-meta>
<meta-name>elife-xml-version</meta-name>
<meta-value>0.7</meta-value>
</custom-meta>
<custom-meta specific-use="meta-only">
<meta-name>Author impact statement</meta-name>
<meta-value>Through a combination of modeling and experiments it is shown that humans can near-optimally accumulate decision-related evidence across time and cues even when reaction time is under their control.</meta-value>
</custom-meta>
</custom-meta-group>
</article-meta>
</front>
<body>
<sec sec-type="intro" id="s1">
<title>Introduction</title>
<p>Effective decision making in an uncertain, rapidly changing environment requires optimal use of all information available to the decision-maker. Numerous previous studies have examined how integrating multiple sensory cues—either within or across sensory modalities—alters perceptual sensitivity (
<xref rid="bib47" ref-type="bibr">van Beers et al., 1996</xref>
;
<xref rid="bib11" ref-type="bibr">Ernst and Banks, 2002</xref>
;
<xref rid="bib1" ref-type="bibr">Battaglia et al., 2003</xref>
;
<xref rid="bib15" ref-type="bibr">Fetsch et al., 2009</xref>
). These studies generally reveal that subjects' ability to discriminate among stimuli improves when multiple sensory cues are available, such as visual and tactile (
<xref rid="bib47" ref-type="bibr">van Beers et al., 1996</xref>
;
<xref rid="bib11" ref-type="bibr">Ernst and Banks, 2002</xref>
), visual and auditory (
<xref rid="bib1" ref-type="bibr">Battaglia et al., 2003</xref>
), or visual and vestibular (
<xref rid="bib15" ref-type="bibr">Fetsch et al., 2009</xref>
) cues. The performance gains associated with cue integration are generally well predicted by models that combine information across senses in a statistically optimal manner (
<xref rid="bib8" ref-type="bibr">Clark and Yuille, 1990</xref>
). Specifically, we consider cue integration to be optimal if the information in the combined, multisensory condition is the sum of that available from the separate cues (see
<xref ref-type="supplementary-material" rid="SD1-data">Supplementary file 1</xref>
for formal statement) (
<xref rid="bib8" ref-type="bibr">Clark and Yuille, 1990</xref>
).</p>
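For cumulative-Gaussian psychometric fits, this additivity criterion takes a compact form: a cue's information is the inverse variance of its fit, and optimality requires the combined information to equal the sum of the unimodal informations. A minimal Python sketch of the resulting threshold prediction, with illustrative values that are not data from this study:

    import numpy as np

    def predicted_combined_threshold(sigma_vis, sigma_vest):
        # Information (inverse variance) is assumed additive across cues,
        # so sigma_comb = (sigma_vis**-2 + sigma_vest**-2) ** -0.5.
        return (sigma_vis ** -2 + sigma_vest ** -2) ** -0.5

    # Illustrative unimodal thresholds (degrees of heading):
    print(predicted_combined_threshold(2.0, 3.0))  # ~1.66, below both cues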
<p>Previous studies and models share a common fundamental limitation: they only consider situations in which the stimulus duration is fixed and subjects are required to withhold their response until the stimulus epoch expires. In natural settings, by contrast, subjects usually choose for themselves when they have gathered enough information to make a decision. In such contexts, it is possible that subjects integrate multiple cues to gain speed or to increase their proportion of correct responses (or some combination of effects), and it is unknown whether standard criteria for optimal cue integration apply. Indeed, using a reaction-time version of a multimodal heading discrimination task, we demonstrate here that human performance is markedly suboptimal when evaluated with standard criteria that ignore reaction times. Thus, the conventional framework for optimal cue integration is not applicable to behaviors in which decision times are under subjects' control.</p>
<p>On the other hand, there is a large body of empirical studies that has focused on how multisensory integration affects reaction times, but these studies have generally ignored effects on perceptual sensitivity (
<xref rid="bib9" ref-type="bibr">Colonius and Arndt, 2001</xref>
;
<xref rid="bib34" ref-type="bibr">Otto and Mamassian, 2012</xref>
). Some of these studies have reported that reaction times for multisensory stimuli are faster than predicted by ‘parallel race’ models (
<xref rid="bib38" ref-type="bibr">Raab, 1962</xref>
;
<xref rid="bib32" ref-type="bibr">Miller, 1982</xref>
), suggesting that multisensory inputs are combined into a common representation. However, other groups have failed to replicate these findings (
<xref rid="bib10" ref-type="bibr">Corneil et al., 2002</xref>
;
<xref rid="bib48" ref-type="bibr">Whitchurch and Takahashi, 2006</xref>
) and it is unclear whether the sensory inputs are combined optimally. Thus, multisensory integration in reaction time experiments remains poorly understood, and there is no coherent framework for evaluating optimal decision making that incorporates both perceptual sensitivity and reaction times. We address this substantial gap in knowledge both theoretically and experimentally.</p>
<p>For tasks based on information from a single sensory modality, diffusion models (DMs) have proven to be very effective at characterizing both the speed and accuracy of perceptual decisions, as well as speed/accuracy trade-offs (
<xref rid="bib39" ref-type="bibr">Ratcliff, 1978</xref>
;
<xref rid="bib40" ref-type="bibr">Ratcliff and Smith, 2004</xref>
;
<xref rid="bib35" ref-type="bibr">Palmer et al., 2005</xref>
) (where accuracy is used in the sense of percentage of correct responses). Here, we develop a novel form of DM that not only integrates evidence optimally over time but also across different sensory cues, providing an optimal decision model for multisensory integration in a reaction-time context. The model is capable of combining cues optimally even when the reliability of each sensory input varies as a function of time. We show that this model reproduces human subjects' behavior very well, thus demonstrating that subjects near-optimally combine momentary evidence across sensory modalities. The model also predicts the counterintuitive finding that discrimination thresholds are often increased during cue combination, and demonstrates that this departure from standard cue-integration theory is due to a speed-accuracy tradeoff.</p>
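As a rough illustration of the ingredients of such a model, the sketch below simulates one trial: two streams of momentary evidence about heading h, each weighted by its current sensitivity, which here follows the task's Gaussian velocity profile, are accumulated into a single decision variable until a bound is crossed. This is a toy version of the general mechanism, not the paper's fitted model; every parameter value (bound, cue sensitivities, profile width) is an assumption made for the example.

    import numpy as np

    rng = np.random.default_rng(0)

    def simulate_trial(h, dt=0.005, t_max=2.0, bound=1.0,
                       k_vis=3.0, k_vest=2.0):
        # Stimulus velocity profile: Gaussian with peak ~1 s after onset,
        # as in the task; the 0.35 s width is an assumption.
        n = int(t_max / dt)
        t = np.arange(n) * dt
        v = np.exp(-0.5 * ((t - 1.0) / 0.35) ** 2)
        x = 0.0  # decision variable (accumulated log-likelihood ratio)
        for i in range(n):
            for k in (k_vis, k_vest):
                a = k * v[i]  # momentary sensitivity of this cue
                # Momentary evidence dy ~ N(a*h*dt, dt); weighting it by a
                # adds this cue's log-likelihood-ratio increment, so the
                # two cues combine additively in information.
                dy = a * h * dt + np.sqrt(dt) * rng.standard_normal()
                x += a * dy
            if abs(x) >= bound:
                return x > 0, t[i]  # (rightward choice, reaction time)
        return x > 0, t_max  # no bound crossing: decide at timeout

Simulating many such trials per heading yields psychometric and chronometric functions; raising the bound slows decisions and lowers thresholds, which is the speed-accuracy trade-off the fitted model captures.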
<p>Overall, our findings provide a framework for extending cue-integration research to more natural contexts in which decision times are unconstrained and sensory cues vary substantially over time.</p>
</sec>
<sec sec-type="results" id="s2">
<title>Results</title>
<p>We collected behavioral data from seven human subjects, A–G, performing a reaction-time version of a heading discrimination task (
<xref rid="bib19" ref-type="bibr">Gu et al., 2007</xref>
,
<xref rid="bib20" ref-type="bibr">2008</xref>
,
<xref rid="bib22" ref-type="bibr">2010</xref>
;
<xref rid="bib15" ref-type="bibr">Fetsch et al., 2009</xref>
) based on optic flow alone (visual condition), inertial motion alone (vestibular condition), or a combination of both cues (combined condition,
<xref ref-type="fig" rid="fig1">Figure 1A</xref>
). In each stimulus condition, the subjects experienced forward translation with a small leftward or rightward deviation, and their task was to report whether they moved leftward or rightward relative to (an internal standard of) straight ahead (
<xref ref-type="fig" rid="fig1">Figure 1B</xref>
). In the combined condition, visual and vestibular cues were always spatially congruent, and followed temporally synchronized Gaussian velocity profiles (
<xref ref-type="fig" rid="fig1">Figure 1C</xref>
). Reliability of the visual cue was varied randomly across trials by changing the motion coherence of the optic flow stimulus (three coherence levels). For subjects B, D, and F, an additional experiment with six coherence levels was performed (denoted as B2, D2, F2). In contrast to previous tasks conducted with the same apparatus (
<xref rid="bib15" ref-type="bibr">Fetsch et al., 2009</xref>
;
<xref rid="bib22" ref-type="bibr">Gu et al., 2010</xref>
), subjects did not have to wait until the end of the stimulus presentation, but were allowed to respond at any time throughout the trial, which lasted up to 2 s.
<fig id="fig1" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.7554/eLife.03005.003</object-id>
<label>Figure 1.</label>
<caption>
<title>Heading discrimination task.</title>
<p>(
<bold>A</bold>
) Subjects are seated on a motion platform in front of a screen displaying 3D optic flow. They perform a heading discrimination task based on optic flow (visual condition), platform motion (vestibular condition), or both cues in combination (combined condition). Coherence of the optic flow is constant within a trial but varies randomly across trials. (
<bold>B</bold>
) The subjects' task is to indicate whether they are moving rightward or leftward relative to straight ahead. Both motion direction (sign of
<italic>h</italic>
) and heading angle (magnitude of
<inline-formula>
<mml:math id="inf1">
<mml:mrow>
<mml:mrow>
<mml:mo>|</mml:mo>
<mml:mi>h</mml:mi>
<mml:mo>|</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>
) are chosen randomly between trials. (
<bold>C</bold>
) The velocity profile is Gaussian with peak velocity ∼1 s after stimulus onset.</p>
<p>
<bold>DOI:</bold>
<ext-link ext-link-type="doi" xlink:href="10.7554/eLife.03005.003">http://dx.doi.org/10.7554/eLife.03005.003</ext-link>
</p>
</caption>
<graphic xlink:href="elife03005f001"></graphic>
</fig>
</p>
<p>For all conditions and all subjects, heading discrimination performance improved with an increase in heading direction away from straight ahead and with increased visual motion coherence. Let
<italic>h</italic>
denote the heading angle relative to straight ahead (
<italic>h</italic>
> 0 for right,
<italic>h</italic>
< 0 for left), and
<inline-formula>
<mml:math id="inf2">
<mml:mrow>
<mml:mrow>
<mml:mo>|</mml:mo>
<mml:mi>h</mml:mi>
<mml:mo>|</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>
its magnitude. Larger values of
<inline-formula>
<mml:math id="inf3">
<mml:mrow>
<mml:mrow>
<mml:mo>|</mml:mo>
<mml:mi>h</mml:mi>
<mml:mo>|</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>
simplified the discrimination task, as reflected by a larger fraction of correct choices (
<xref ref-type="fig" rid="fig2">Figure 2A</xref>
for subject D2,
<xref ref-type="fig" rid="fig3s1">Figure 3—figure supplement 1</xref>
for other subjects). To quantify discrimination performance, we fitted a cumulative Gaussian function to the psychometric curve for each stimulus condition and coherence. A lower discrimination threshold, given by the standard deviation of the fitted Gaussian, indicates a steeper psychometric curve and thus better performance. For both the visual and combined conditions, discrimination thresholds consistently decreased with an increase in motion coherence (
<xref ref-type="fig" rid="fig2">Figure 2B</xref>
for subject D2,
<xref ref-type="fig" rid="fig2s1">Figure 2—figure supplement 1</xref>
for other subjects), indicating that increasing coherence improves heading discrimination.
<fig id="fig2" position="float" orientation="portrait">
<object-id pub-id-type="doi">10.7554/eLife.03005.004</object-id>
<label>Figure 2.</label>
<caption>
<title>Heading discrimination performance.</title>
<p>(
<bold>A</bold>
) Plots show the proportion of rightward choices for each heading and stimulus condition. Data are shown for subject D2, who was tested with 6 coherence levels. Error bars indicate 95% confidence intervals. (
<bold>B</bold>
) Discrimination threshold for each coherence and condition for subject D2 (see
<xref ref-type="fig" rid="fig2s1">Figure 2—figure supplement 1</xref>
for discrimination thresholds of all subjects). For large coherences, the threshold in the combined condition (solid red curve) lies between that of the vestibular and visual conditions, a marked deviation from the standard prediction (dashed red curve) of optimal cue integration theory. (
<bold>C</bold>
) Observed vs predicted discrimination thresholds for the combined condition for all subjects. Data are color coded by motion coherence. Error bars indicate 95% CIs. For most subjects, observed thresholds are significantly greater than predicted, especially for coherences greater than 25%. For comparison, analogous data from monkeys and humans (black triangles and squares, respectively) are shown from a previous study involving a fixed-duration version of the same task (
<xref rid="bib15" ref-type="bibr">Fetsch et al., 2009</xref>
).</p>
<p>
<bold>DOI:</bold>
<ext-link ext-link-type="doi" xlink:href="10.7554/eLife.03005.004">http://dx.doi.org/10.7554/eLife.03005.004</ext-link>
</p>
</caption>
<graphic xlink:href="elife03005f002"></graphic>
<p content-type="supplemental-figure">
<fig id="fig2s1" specific-use="child-fig" orientation="portrait" position="anchor">
<object-id pub-id-type="doi">10.7554/eLife.03005.005</object-id>
<label>Figure 2—figure supplement 1.</label>
<caption>
<title>Discrimination thresholds for all subjects and conditions.</title>
<p>The psychophysical thresholds are found by fitting a cumulative Gaussian function to the psychometric curve for each condition. The predicted threshold is based on the visual and vestibular thresholds measured at the same coherence. The error bars indicate bootstrapped 95% CIs. Note that the observed thresholds in the combined condition (solid red curves) are consistently greater than the predicted thresholds (dashed red curves), especially at high coherences. For a statistical comparison between various thresholds see
<xref ref-type="supplementary-material" rid="SD2-data">Supplementary file 2A</xref>
.</p>
<p>
<bold>DOI:</bold>
<ext-link ext-link-type="doi" xlink:href="10.7554/eLife.03005.005">http://dx.doi.org/10.7554/eLife.03005.005</ext-link>
</p>
</caption>
<graphic xlink:href="elife03005fs001"></graphic>
</fig>
</p>
</fig>
</p>
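The threshold estimates above come from fitting a cumulative Gaussian to the proportion of rightward choices at each heading. A minimal least-squares sketch with invented data points is shown below; a maximum-likelihood fit with lapse terms (cf. the cited Wichmann and Hill) would be closer to standard practice.

    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.stats import norm

    def psychometric(h, mu, sigma):
        # Probability of a rightward choice as a function of heading h;
        # sigma, the SD of the fitted Gaussian, is the threshold.
        return norm.cdf(h, loc=mu, scale=sigma)

    # Invented example data: headings (deg) and proportion rightward.
    h = np.array([-8.0, -4.0, -2.0, -1.0, 1.0, 2.0, 4.0, 8.0])
    p_right = np.array([0.03, 0.16, 0.30, 0.42, 0.60, 0.72, 0.86, 0.97])

    (mu_hat, sigma_hat), _ = curve_fit(psychometric, h, p_right, p0=(0.0, 2.0))
    print(f"bias = {mu_hat:.2f} deg, threshold = {sigma_hat:.2f} deg")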
<sec id="s2-1">
<title>Sub-optimal cue combination?</title>
<p>Traditional cue combination models predict that the discrimination threshold in the combined condition should be smaller than that of either unimodal condition (
<xref rid="bib8" ref-type="bibr">Clark and Yuille, 1990</xref>
). With a fixed stimulus duration, this prediction has been shown to hold for visual/vestibular heading discrimination in both human and animal subjects (
<xref rid="bib15" ref-type="bibr">Fetsch et al., 2009</xref>
,
<xref rid="bib13" ref-type="bibr">2011</xref>
), consistent with optimal cue combination. In contrast, the discrimination thresholds of subjects in our reaction-time task appear to be substantially sub-optimal. For the example subject of
<xref ref-type="fig" rid="fig2">Figure 2A</xref>
, psychometric functions in the combined condition lie between the visual and vestibular functions. Correspondingly, discrimination thresholds for the combined condition are intermediate between visual and vestibular thresholds for this subject, and for high coherences, are substantially greater than the optimal predictions (
<xref ref-type="fig" rid="fig2">Figure 2B</xref>
).</p>
<p>This pattern of results was consistent across subjects (
<xref ref-type="fig" rid="fig2">Figure 2C</xref>
,
<xref ref-type="fig" rid="fig2s1">Figure 2—figure supplement 1</xref>
). In no case did subjects feature a significantly lower discrimination threshold in the combined condition than the better of the two unimodal conditions (p>0.57, one-tailed,
<xref ref-type="supplementary-material" rid="SD2-data">Supplementary file 2A</xref>
For the largest visual motion coherence (70%), all subjects except one showed thresholds in the combined condition that were significantly greater than visual thresholds and significantly greater than optimal predictions of a conventional cue-integration scheme (p<0.05,
<xref ref-type="supplementary-material" rid="SD2-data">Supplementary file 2A</xref>
). These data lie in stark contrast to previous reports using fixed duration stimuli (
<xref rid="bib15" ref-type="bibr">Fetsch et al., 2009</xref>
,
<xref rid="bib13" ref-type="bibr">2011</xref>
) in which combined thresholds were generally found to improve compared to the unimodal conditions, as expected by standard optimal multisensory integration models. To summarize this contrast, we compare the ratio of observed to predicted thresholds in the combined condition for our subjects to human and monkey subjects performing a similar task in a fixed duration setting (
<xref rid="bib15" ref-type="bibr">Fetsch et al., 2009</xref>
). We found this ratio to be significantly greater for our subjects (
<xref ref-type="fig" rid="fig2">Figure 2C</xref>
; two-sample
<italic>t</italic>
test, t(77) = 3.245, p=0.0017). This indicates that, with respect to predictions of standard multisensory integration models, our subjects performed significantly worse than those engaged in a similar fixed-duration task.</p>
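The ratio comparison reduces to a standard two-sample t test; the sketch below uses invented ratio values purely to show the computation (the reported t(77) = 3.245 comes from the actual per-subject, per-coherence data).

    import numpy as np
    from scipy.stats import ttest_ind

    # Invented observed/predicted threshold ratios for the two task types.
    ratios_reaction_time = np.array([1.6, 1.4, 1.9, 1.3, 1.7, 1.5, 1.8])
    ratios_fixed_duration = np.array([1.00, 1.10, 0.90, 1.05, 0.95, 1.02])

    t_stat, p_val = ttest_ind(ratios_reaction_time, ratios_fixed_duration)
    print(f"t = {t_stat:.3f}, p = {p_val:.4f}")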
<p>A different picture emerges if we take not only discrimination thresholds but also reaction times into account. Short reaction times imply that subjects gather less information to make a decision, yielding greater discrimination thresholds. Longer reaction times may decrease thresholds, but at the cost of time. Consequently, if subjects decide more rapidly in the combined condition than the visual condition, they might feature higher discrimination thresholds in the combined condition even if they make optimal use of all available information. Thus, to assess if subjects perform optimal cue combination, we need to account for the timing of their decisions.</p>
<p>Average reaction times depended on stimulus condition, motion coherence, and heading direction. In general, reaction times were faster for larger heading magnitudes, and reaction times in the vestibular condition were faster than those in the visual condition (
<xref ref-type="fig" rid="fig3">Figure 3</xref>
for subject D2,
<xref ref-type="fig" rid="fig3s1">Figure 3—figure supplement 1</xref>
for other subjects). In the combined condition, however, reaction times were much shorter than those seen for the visual condition and were comparable to those of the vestibular condition (
<xref ref-type="fig" rid="fig3">Figure 3</xref>
). Thus, subjects spent substantially more time integrating evidence in the visual condition, which boosted their discrimination performance when compared to the combined condition. Note also that discrimination thresholds in the combined condition were substantially smaller than vestibular thresholds, especially at 70% coherence (
<xref ref-type="fig" rid="fig2 fig3">Figures 2 and 3</xref>
). Thus, adding optic flow to a vestibular stimulus decreased the discrimination threshold with essentially no loss of speed. A similar overall pattern of results was observed for the other subjects (
<xref ref-type="fig" rid="fig3s1">Figure 3—figure supplement 1</xref>
). These data provide clear evidence that subjects made use of both visual and vestibular information to perform the reaction-time task, but the benefits of cue integration could not be appreciated by considering discrimination thresholds alone.
<fig id="fig3" position="float" orientation="portrait">
<object-id pub-id-type="doi">10.7554/eLife.03005.006</object-id>
<label>Figure 3.</label>
<caption>
<title>Discrimination performance and reaction times for subject D2.</title>
<p>Behavioral data (symbols with error bars) and model fits (lines) are shown separately for each motion coherence. Top plot: reaction times as a function of heading; bottom plot: proportion of rightward choices as a function of heading. Mean reaction times are shown for correct trials, with error bars representing two SEM (in some cases smaller than the symbols). Error bars on the proportion rightward choice data are 95% confidence intervals. Although reaction times are only shown for correct trials, the model is fit to data from both correct and incorrect trials. See
<xref ref-type="fig" rid="fig3s1">Figure 3—figure supplement 1</xref>
for behavioral data and model fits for all subjects.
<xref ref-type="fig" rid="fig3s2">Figure 3—figure supplement 2</xref>
shows the fitted model parameters per subject.</p>
<p>
<bold>DOI:</bold>
<ext-link ext-link-type="doi" xlink:href="10.7554/eLife.03005.006">http://dx.doi.org/10.7554/eLife.03005.006</ext-link>
</p>
</caption>
<graphic xlink:href="elife03005f003"></graphic>
<p content-type="supplemental-figure">
<fig id="fig3s1" specific-use="child-fig" orientation="portrait" position="anchor">
<object-id pub-id-type="doi">10.7554/eLife.03005.007</object-id>
<label>Figure 3—figure supplement 1.</label>
<caption>
<title>Psychometric functions, chronometric functions, and model fits for all subjects.</title>
<p>Behavioral data (symbols with error bars) and model fits (lines) are, for clarity, shown separately for each coherence of the visual motion stimulus. The reaction time shown is the mean reaction time for correct trials, with error bars showing two SEMs (sometimes smaller than the symbols). Error bars on the proportion of rightward choices are 95% confidence intervals. Note that reaction times are shown only for correct trials, while the model is fit to both correct and incorrect trials.</p>
<p>
<bold>DOI:</bold>
<ext-link ext-link-type="doi" xlink:href="10.7554/eLife.03005.007">http://dx.doi.org/10.7554/eLife.03005.007</ext-link>
</p>
</caption>
<graphic xlink:href="elife03005fs002"></graphic>
</fig>
</p>
<p content-type="supplemental-figure">
<fig id="fig3s2" specific-use="child-fig" orientation="portrait" position="anchor">
<object-id pub-id-type="doi">10.7554/eLife.03005.008</object-id>
<label>Figure 3—figure supplement 2.</label>
<caption>
<title>Model parameters for fits of the optimal model and two alternative parameterizations.</title>
<p>Based on the maximum likelihood parameters of full model fits for each subject, the four top plots show how drift rate and normalized bounds are assumed to depend on visual motion coherence. The solid lines show fits for the model described in the main text. The dashed lines show fits for an alternative parameterization with one additional parameter (see
<xref ref-type="supplementary-material" rid="SD1-data">Supplementary file 1</xref>
). The circles show the fits of a model that, instead of linking them by a parametric function, fits these drifts and bounds for each coherence separately. As can be seen, the parametric functions qualitatively match these independent fits. The bottom bar graphs show drift rate and bound for the vestibular modalities and fitted non-decision times for each subject, all for the model parameterization described in the text. All error bars show ±1 SD of the parameter posterior. Each color corresponds to a separate subject, with color scheme given by the bottom left bar graph.</p>
<p>
<bold>DOI:</bold>
<ext-link ext-link-type="doi" xlink:href="10.7554/eLife.03005.008">http://dx.doi.org/10.7554/eLife.03005.008</ext-link>
</p>
</caption>
<graphic xlink:href="elife03005fs003"></graphic>
</fig>
</p>
</fig>
</p>
</sec>
<sec id="s2-2">
<title>Modeling cue combination with a novel diffusion model</title>
<p>To investigate whether subjects accumulate evidence optimally across both time and sensory modalities, we built a model that integrates visual and vestibular cues optimally to perform the heading discrimination task, and we compared predictions of the model to data from our human subjects. The model builds upon the structure of diffusion models (DMs), which have previously been shown to account well for the tradeoff between speed and accuracy of decisions (
<xref rid="bib39" ref-type="bibr">Ratcliff, 1978</xref>
;
<xref rid="bib40" ref-type="bibr">Ratcliff and Smith, 2004</xref>
;
<xref rid="bib35" ref-type="bibr">Palmer et al., 2005</xref>
). Additionally, DMs are known to optimally integrate evidence over time (
<xref rid="bib28" ref-type="bibr">Laming, 1968</xref>
;
<xref rid="bib3" ref-type="bibr">Bogacz et al., 2006</xref>
), given that the reliability of the evidence is time-invariant (such that, at any point in time from stimulus onset, the stimulus provides the same amount of information about the task variable). However, DMs have been used neither to integrate evidence from several sources nor to handle evidence whose reliability changes over time, both of which are required for our purposes.</p>
<p>In the context of heading discrimination, a standard DM would operate as follows (
<xref ref-type="fig" rid="fig4">Figure 4A</xref>
): consider a diffusing particle with dynamics given by
<inline-formula>
<mml:math id="inf4">
<mml:mrow>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>x</mml:mi>
<mml:mo>˙</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mo>=</mml:mo>
<mml:mi>k</mml:mi>
<mml:mi>sin</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>h</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>+</mml:mo>
<mml:mi>η</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>
, where
<italic>h</italic>
is the heading direction,
<italic>k</italic>
is a positive constant relating particle drift to heading direction, and
<italic>η</italic>
(
<italic>t</italic>
) is unit variance Gaussian white noise. The particle starts at
<italic>x</italic>
(0) = 0, drifts with an average slope given by
<italic>ksin</italic>
(
<italic>h</italic>
), and diffuses until it hits either the upper bound
<italic>θ</italic>
or the lower bound −
<italic>θ</italic>
, corresponding to rightward and leftward choices, respectively. The decision time is determined by when the particle hits a bound. Larger
<inline-formula>
<mml:math id="inf5">
<mml:mrow>
<mml:mrow>
<mml:mo>|</mml:mo>
<mml:mi>h</mml:mi>
<mml:mo>|</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>
values lead to shorter decision times and more correct decisions because the drift rate is greater. Lower bound levels,
<inline-formula>
<mml:math id="inf6">
<mml:mrow>
<mml:mrow>
<mml:mo>|</mml:mo>
<mml:mi>θ</mml:mi>
<mml:mo>|</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>
, also lead to shorter decision times but more incorrect decisions. Errors (hitting bound
<italic>θ</italic>
when
<italic>h</italic>
< 0, or hitting bound −
<italic>θ</italic>
when
<italic>h</italic>
> 0) can occur due to the stochasticity of particle motion, which is meant to capture the variability of the momentary sensory evidence. The Fisher information in
<italic>x</italic>
(
<italic>t</italic>
) regarding
<italic>h</italic>
, a measure of how much information
<italic>x</italic>
(
<italic>t</italic>
) provides for discriminating heading (
<xref rid="bib36" ref-type="bibr">Papoulis, 1991</xref>
), is
<italic>I</italic>
<sub>
<italic>x</italic>
</sub>
(
<italic>sin</italic>
(
<italic>h</italic>
)) =
<italic>k</italic>
<sup>2</sup>
per second, showing that
<italic>k</italic>
is a measure of the subject's sensitivity to changes in heading direction. This sensitivity depends on the subject's effectiveness in estimating heading from the cue, which in turn is influenced by the reliability of the cue itself (e.g., coherence).
<fig id="fig4" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.7554/eLife.03005.009</object-id>
<label>Figure 4.</label>
<caption>
<title>Extended diffusion model (DM) for heading discrimination task.</title>
<p>(
<bold>A</bold>
) A drifting particle diffuses until it hits the lower or upper bound, corresponding to choosing ‘left’ or ‘right’, respectively. The rate of drift (black arrow) is determined by heading direction. The time at which a bound is hit corresponds to the decision time. Ten particle traces are shown for the same drift rate, corresponding to one incorrect and nine correct decisions. (
<bold>B</bold>
) Despite time-varying cue sensitivity, optimal temporal integration of evidence in DMs is preserved by weighting the evidence by the momentary measure of its sensitivity. The DM representing the combined condition is formed by an optimal sensitivity-weighted combination of the DMs of the unimodal conditions.</p>
<p>
<bold>DOI:</bold>
<ext-link ext-link-type="doi" xlink:href="10.7554/eLife.03005.009">http://dx.doi.org/10.7554/eLife.03005.009</ext-link>
</p>
</caption>
<graphic xlink:href="elife03005f004"></graphic>
</fig>
</p>
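<p>To make these dynamics concrete, the following Python sketch (our illustration; the parameter values are arbitrary rather than fitted values from this study) simulates a single trial of the standard DM just described:</p>
<preformat>
import numpy as np

def simulate_dm_trial(h_deg, k=5.0, theta=1.5, dt=1e-3, rng=None):
    """One trial of a standard DM: drift k*sin(h), unit-variance noise, bounds at +/-theta.

    Returns (choice, decision_time), with choice = +1 (rightward) or -1 (leftward).
    """
    rng = np.random.default_rng() if rng is None else rng
    drift = k * np.sin(np.deg2rad(h_deg))
    x, t = 0.0, 0.0
    while True:
        # Euler step: drift plus unit-variance white noise (s.d. sqrt(dt) per step)
        x += drift * dt + np.sqrt(dt) * rng.standard_normal()
        t += dt
        if abs(x) >= theta:
            return (1 if x > 0 else -1), t

# Larger |h| increases the drift magnitude, giving faster and more accurate
# decisions; lowering theta speeds decisions at the cost of more errors.
</preformat>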
<p>Now consider both a visual (
<italic>vis</italic>
) and a vestibular (
<italic>vest</italic>
) source of evidence regarding
<italic>h</italic>
,
<inline-formula>
<mml:math id="inf7">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>x</mml:mi>
<mml:mo>˙</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mi>v</mml:mi>
<mml:mi>i</mml:mi>
<mml:mi>s</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:msub>
<mml:mi>k</mml:mi>
<mml:mrow>
<mml:mi>v</mml:mi>
<mml:mi>i</mml:mi>
<mml:mi>s</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>c</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mi>sin</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>h</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>+</mml:mo>
<mml:msub>
<mml:mi>η</mml:mi>
<mml:mrow>
<mml:mi>v</mml:mi>
<mml:mi>i</mml:mi>
<mml:mi>s</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>
and
<inline-formula>
<mml:math id="inf8">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>x</mml:mi>
<mml:mo>˙</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mi>v</mml:mi>
<mml:mi>e</mml:mi>
<mml:mi>s</mml:mi>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:msub>
<mml:mi>k</mml:mi>
<mml:mrow>
<mml:mi>v</mml:mi>
<mml:mi>e</mml:mi>
<mml:mi>s</mml:mi>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mi>sin</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>h</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>+</mml:mo>
<mml:msub>
<mml:mi>η</mml:mi>
<mml:mrow>
<mml:mi>v</mml:mi>
<mml:mi>e</mml:mi>
<mml:mi>s</mml:mi>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>
, where
<italic>k</italic>
<sub>
<italic>vis</italic>
</sub>
(
<italic>c</italic>
) indicates that the sensitivity to the cue in the visual modality depends on motion coherence,
<italic>c</italic>
. Combining these two sources of evidence by a simple sum,
<inline-formula>
<mml:math id="inf9">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>x</mml:mi>
<mml:mo>˙</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mi>v</mml:mi>
<mml:mi>i</mml:mi>
<mml:mi>s</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>x</mml:mi>
<mml:mo>˙</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mi>v</mml:mi>
<mml:mi>e</mml:mi>
<mml:mi>s</mml:mi>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>
, would amount to adding noise to
<inline-formula>
<mml:math id="inf10">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>x</mml:mi>
<mml:mo>˙</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mi>v</mml:mi>
<mml:mi>e</mml:mi>
<mml:mi>s</mml:mi>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>
for low coherences (
<italic>k</italic>
<sub>
<italic>vis</italic>
</sub>
(
<italic>c</italic>
) ≈ 0), which is clearly suboptimal. Rather, it can be shown that the two particle trajectories are combined optimally by weighting their rates of change in proportion to their relative sensitivities (see
<xref ref-type="supplementary-material" rid="SD1-data">Supplementary file 1</xref>
for derivation):
<disp-formula id="equ1">
<label>(1)</label>
<mml:math id="m1">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>x</mml:mi>
<mml:mo>˙</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mi>c</mml:mi>
<mml:mi>o</mml:mi>
<mml:mi>m</mml:mi>
<mml:mi>b</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:msqrt>
<mml:mrow>
<mml:mfrac>
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:msub>
<mml:mi>k</mml:mi>
<mml:mrow>
<mml:mi>v</mml:mi>
<mml:mi>i</mml:mi>
<mml:mi>s</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msup>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>c</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mrow>
<mml:msubsup>
<mml:mi>k</mml:mi>
<mml:mrow>
<mml:mi>v</mml:mi>
<mml:mi>i</mml:mi>
<mml:mi>s</mml:mi>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>c</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>+</mml:mo>
<mml:msubsup>
<mml:mi>k</mml:mi>
<mml:mrow>
<mml:mi>v</mml:mi>
<mml:mi>e</mml:mi>
<mml:mi>s</mml:mi>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msubsup>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
</mml:msqrt>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>x</mml:mi>
<mml:mo>˙</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mi>v</mml:mi>
<mml:mi>i</mml:mi>
<mml:mi>s</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:msqrt>
<mml:mrow>
<mml:mfrac>
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:msub>
<mml:mi>k</mml:mi>
<mml:mrow>
<mml:mi>v</mml:mi>
<mml:mi>e</mml:mi>
<mml:mi>s</mml:mi>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
<mml:mrow>
<mml:msubsup>
<mml:mi>k</mml:mi>
<mml:mrow>
<mml:mi>v</mml:mi>
<mml:mi>i</mml:mi>
<mml:mi>s</mml:mi>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>c</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>+</mml:mo>
<mml:msubsup>
<mml:mi>k</mml:mi>
<mml:mrow>
<mml:mi>v</mml:mi>
<mml:mi>e</mml:mi>
<mml:mi>s</mml:mi>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msubsup>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
</mml:msqrt>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>x</mml:mi>
<mml:mo>˙</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mi>v</mml:mi>
<mml:mi>e</mml:mi>
<mml:mi>s</mml:mi>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>.</mml:mo>
</mml:mrow>
</mml:math>
</disp-formula>
</p>
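<p>A minimal sketch of Equation 1 (ours; it assumes unit-variance noise in each stream, as in the text). Weighting each stream by its relative sensitivity keeps the combined noise at unit variance while producing the combined drift of Equation 2:</p>
<preformat>
import numpy as np

def combine_momentary_evidence(dx_vis, dx_vest, k_vis, k_vest):
    """Sensitivity-weighted combination of momentary evidence (Equation 1).

    The weights k/norm satisfy w_vis**2 + w_vest**2 = 1, so the combined noise
    stays at unit variance while the drift becomes sqrt(k_vis**2 + k_vest**2)*sin(h).
    """
    norm = np.sqrt(k_vis**2 + k_vest**2)
    return (k_vis / norm) * dx_vis + (k_vest / norm) * dx_vest

# At zero coherence k_vis(c) is ~0, so the visual stream (pure noise) receives
# weight ~0 and the combined evidence reduces to the vestibular stream alone.
</preformat>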
<p>This allows us to model the combined condition by a single new DM,
<inline-formula>
<mml:math id="inf11">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>x</mml:mi>
<mml:mo>˙</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mi>c</mml:mi>
<mml:mi>o</mml:mi>
<mml:mi>m</mml:mi>
<mml:mi>b</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:msub>
<mml:mi>k</mml:mi>
<mml:mrow>
<mml:mi>c</mml:mi>
<mml:mi>o</mml:mi>
<mml:mi>m</mml:mi>
<mml:mi>b</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>c</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mi>sin</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>h</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>+</mml:mo>
<mml:msub>
<mml:mi>η</mml:mi>
<mml:mrow>
<mml:mi>c</mml:mi>
<mml:mi>o</mml:mi>
<mml:mi>m</mml:mi>
<mml:mi>b</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>
, which is optimal because it preserves all information contained in both
<italic>x</italic>
<sub>
<italic>vis</italic>
</sub>
and
<italic>x</italic>
<sub>
<italic>vest</italic>
</sub>
(
<xref ref-type="fig" rid="fig4">Figure 4B</xref>
; see ‘Materials and methods’ and
<xref ref-type="supplementary-material" rid="SD1-data">Supplementary file 1</xref>
for a formal treatment). The sensitivity (drift rate coefficient) in the combined condition,
<disp-formula id="equ2">
<label>(2)</label>
<mml:math id="m2">
<mml:mrow>
<mml:msub>
<mml:mi>k</mml:mi>
<mml:mrow>
<mml:mi>c</mml:mi>
<mml:mi>o</mml:mi>
<mml:mi>m</mml:mi>
<mml:mi>b</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>c</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>=</mml:mo>
<mml:msqrt>
<mml:mrow>
<mml:msubsup>
<mml:mi>k</mml:mi>
<mml:mrow>
<mml:mi>v</mml:mi>
<mml:mi>i</mml:mi>
<mml:mi>s</mml:mi>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>c</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>+</mml:mo>
<mml:msubsup>
<mml:mi>k</mml:mi>
<mml:mrow>
<mml:mi>v</mml:mi>
<mml:mi>e</mml:mi>
<mml:mi>s</mml:mi>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msubsup>
</mml:mrow>
</mml:msqrt>
<mml:mo>,</mml:mo>
</mml:mrow>
</mml:math>
</disp-formula>
is a combination of the unimodal sensitivities and is therefore never smaller than the larger of the two (and strictly greater whenever both cues are informative). For instance, with illustrative values <italic>k</italic><sub><italic>vis</italic></sub> = 3 and <italic>k</italic><sub><italic>vest</italic></sub> = 4, the combined sensitivity is √(9 + 16) = 5.</p>
<p>So far we have assumed that the reliability of each cue is time-invariant. However, as the motion velocity changes over time, so does the amount of information about
<italic>h</italic>
provided by each cue, and with it the subject's sensitivity to changes in
<italic>h</italic>
. For the vestibular and visual conditions, motion acceleration
<italic>a</italic>
(
<italic>t</italic>
) and motion velocity
<italic>v</italic>
(
<italic>t</italic>
), respectively, are assumed to be the physical quantities that modulate cue sensitivity (‘Materials and methods’ and ‘Discussion’). To account for these dynamics, the DMs are modified to
<inline-formula>
<mml:math id="inf12">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>x</mml:mi>
<mml:mo>˙</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mi>v</mml:mi>
<mml:mi>e</mml:mi>
<mml:mi>s</mml:mi>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mi>a</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:msub>
<mml:mi>k</mml:mi>
<mml:mrow>
<mml:mi>v</mml:mi>
<mml:mi>e</mml:mi>
<mml:mi>s</mml:mi>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mi>sin</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>h</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>+</mml:mo>
<mml:msub>
<mml:mi>η</mml:mi>
<mml:mrow>
<mml:mi>v</mml:mi>
<mml:mi>e</mml:mi>
<mml:mi>s</mml:mi>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>
and
<inline-formula>
<mml:math id="inf13">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>x</mml:mi>
<mml:mo>˙</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mi>v</mml:mi>
<mml:mi>i</mml:mi>
<mml:mi>s</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mi>v</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:msub>
<mml:mi>k</mml:mi>
<mml:mrow>
<mml:mi>v</mml:mi>
<mml:mi>i</mml:mi>
<mml:mi>s</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>c</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mi>sin</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>h</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>+</mml:mo>
<mml:msub>
<mml:mi>η</mml:mi>
<mml:mrow>
<mml:mi>v</mml:mi>
<mml:mi>i</mml:mi>
<mml:mi>s</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>
. Note that once the drift rate in a DM changes with time, it generally loses its property of integrating evidence optimally over time. For example, at the beginning of each trial when motion velocity is low,
<inline-formula>
<mml:math id="inf14">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>x</mml:mi>
<mml:mo>˙</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mi>v</mml:mi>
<mml:mi>i</mml:mi>
<mml:mi>s</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>
is dominated by noise and integrating
<inline-formula>
<mml:math id="inf15">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>x</mml:mi>
<mml:mo>˙</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mi>v</mml:mi>
<mml:mi>i</mml:mi>
<mml:mi>s</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>
is fruitless. Fortunately, weighting the momentary visual evidence,
<inline-formula>
<mml:math id="inf16">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>x</mml:mi>
<mml:mo>˙</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mi>v</mml:mi>
<mml:mi>i</mml:mi>
<mml:mi>s</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>
, by the velocity profile recovers optimality of the DM (‘Materials and methods’). This temporal weighting causes the visual evidence to contribute more at high velocities, while the noise is downweighted at low velocities. Similarly, vestibular evidence is weighted by the time course of acceleration. The new, weighted particle trajectories are described by the DMs
<inline-formula>
<mml:math id="inf17">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>X</mml:mi>
<mml:mo>˙</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mi>v</mml:mi>
<mml:mi>i</mml:mi>
<mml:mi>s</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mi>v</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>x</mml:mi>
<mml:mo>˙</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mi>v</mml:mi>
<mml:mi>i</mml:mi>
<mml:mi>s</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>
and
<inline-formula>
<mml:math id="inf18">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>X</mml:mi>
<mml:mo>˙</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mi>v</mml:mi>
<mml:mi>e</mml:mi>
<mml:mi>s</mml:mi>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mi>a</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>x</mml:mi>
<mml:mo>˙</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mi>v</mml:mi>
<mml:mi>e</mml:mi>
<mml:mi>s</mml:mi>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>
. The two unimodal DMs are combined as before, resulting in the combined DM given by
<inline-formula>
<mml:math id="inf19">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>X</mml:mi>
<mml:mo>˙</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mi>c</mml:mi>
<mml:mi>o</mml:mi>
<mml:mi>m</mml:mi>
<mml:mi>b</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mi>d</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>x</mml:mi>
<mml:mo>˙</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mi>c</mml:mi>
<mml:mi>o</mml:mi>
<mml:mi>m</mml:mi>
<mml:mi>b</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>
, where the sensitivity profile
<italic>d</italic>
(
<italic>t</italic>
) is a weighted combination of the unimodal sensitivity profiles,
<disp-formula id="equ3">
<label>(3)</label>
<mml:math id="m3">
<mml:mrow>
<mml:mi>d</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>=</mml:mo>
<mml:msqrt>
<mml:mrow>
<mml:mfrac>
<mml:mrow>
<mml:msubsup>
<mml:mi>k</mml:mi>
<mml:mrow>
<mml:mi>v</mml:mi>
<mml:mi>i</mml:mi>
<mml:mi>s</mml:mi>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>c</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mrow>
<mml:msubsup>
<mml:mi>k</mml:mi>
<mml:mrow>
<mml:mi>c</mml:mi>
<mml:mi>o</mml:mi>
<mml:mi>m</mml:mi>
<mml:mi>b</mml:mi>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>c</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:mfrac>
<mml:msup>
<mml:mi>v</mml:mi>
<mml:mn>2</mml:mn>
</mml:msup>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>+</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:msubsup>
<mml:mi>k</mml:mi>
<mml:mrow>
<mml:mi>v</mml:mi>
<mml:mi>e</mml:mi>
<mml:mi>s</mml:mi>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msubsup>
</mml:mrow>
<mml:mrow>
<mml:msubsup>
<mml:mi>k</mml:mi>
<mml:mrow>
<mml:mi>c</mml:mi>
<mml:mi>o</mml:mi>
<mml:mi>m</mml:mi>
<mml:mi>b</mml:mi>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>c</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:mfrac>
<mml:msup>
<mml:mi>a</mml:mi>
<mml:mn>2</mml:mn>
</mml:msup>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:msqrt>
<mml:mo>.</mml:mo>
</mml:mrow>
</mml:math>
</disp-formula>
</p>
<p>(
<xref ref-type="fig" rid="fig4">Figure 4B</xref>
; see
<xref ref-type="supplementary-material" rid="SD1-data">Supplementary file 1</xref>
for derivation). These modifications to the standard DM are sufficient to integrate evidence optimally across time and sensory modalities, even as the sensitivity to the evidence changes over time.</p>
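<p>The following sketch (ours) illustrates the combined sensitivity profile <italic>d</italic>(<italic>t</italic>) of Equation 3; the Gaussian velocity profile and its derivative as acceleration are illustrative assumptions, not the article's exact stimulus:</p>
<preformat>
import numpy as np

def combined_profile(k_vis, k_vest, v, a):
    """Combined sensitivity profile d(t) of Equation 3.

    v, a: velocity and acceleration time courses, which weight the visual
    and vestibular momentary evidence, respectively.
    """
    k_comb_sq = k_vis**2 + k_vest**2
    return np.sqrt((k_vis**2 * v**2 + k_vest**2 * a**2) / k_comb_sq)

t = np.linspace(0.0, 2.0, 2001)           # assumed 2 s stimulus
v = np.exp(-0.5 * ((t - 1.0) / 0.3)**2)   # assumed Gaussian velocity profile
a = np.gradient(v, t)                     # acceleration as dv/dt
d = combined_profile(k_vis=3.0, k_vest=4.0, v=v, a=a)
# Early in the trial a(t) dominates d(t), so vestibular evidence is weighted
# most; around peak velocity the visual term dominates.
</preformat>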
<p>The model assumes that subjects know their cue sensitivities,
<italic>k</italic>
<sub>
<italic>vis</italic>
</sub>
(
<italic>c</italic>
) and
<italic>k</italic>
<sub>
<italic>vest</italic>
</sub>
, as well as the temporal sensitivity profiles,
<italic>a</italic>
(
<italic>t</italic>
) and
<italic>v</italic>
(
<italic>t</italic>
), of each stimulus. In this respect, our model provides an upper bound on performance, since subjects may not have perfect knowledge of these variables, especially since stimulus modalities and visual motion coherence values are randomized across trials (‘Discussion’).</p>
</sec>
<sec id="s2-3">
<title>Quantitative assessment of cue combination performance</title>
<p>We tested whether subjects combined evidence optimally across both time and cues by evaluating how well the model outlined above could explain the observed behavior. The bounds,
<italic>θ</italic>
, of the modified DM, and the sensitivity parameters (
<italic>k</italic>
<sub>
<italic>vis</italic>
</sub>
,
<italic>k</italic>
<sub>
<italic>vest</italic>
</sub>
and
<italic>k</italic>
<sub>
<italic>comb</italic>
</sub>
), were allowed to vary between the visual, vestibular, and combined conditions. Varying the bound was essential to capture the deviation of the discrimination threshold in the combined condition from that predicted by traditional cue combination models (
<xref ref-type="fig" rid="fig2">Figure 2</xref>
). Indeed, this discrimination threshold is inversely proportional to bound and sensitivity (see
<xref ref-type="supplementary-material" rid="SD1-data">Supplementary file 1</xref>
). Since the sensitivity in the bimodal condition is not a free parameter (it is determined by
<xref ref-type="disp-formula" rid="equ2">Equation 2</xref>
), the height of the bound is the only parameter that could modulate the discrimination thresholds.</p>
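<p>For intuition, a standard first-passage identity (not quoted from the article, and exact only for the time-invariant DM) makes this inverse relationship explicit. With drift <italic>k sin</italic>(<italic>h</italic>), unit variance, and bounds ±<italic>θ</italic>, the probability of a rightward choice is a logistic function of 2<italic>θk sin</italic>(<italic>h</italic>), so the heading threshold scales roughly as 1/(<italic>kθ</italic>):</p>
<preformat>
import numpy as np

def p_rightward(h_deg, k, theta):
    """P(hit upper bound) for a DM with drift k*sin(h), unit variance, bounds +/-theta."""
    return 1.0 / (1.0 + np.exp(-2.0 * theta * k * np.sin(np.deg2rad(h_deg))))

def threshold_deg(k, theta, p=0.84):
    """Heading (deg) at which P(rightward) = p; roughly 1/(k*theta) for small headings."""
    return np.rad2deg(np.arcsin(np.log(p / (1.0 - p)) / (2.0 * theta * k)))

# Doubling either k or theta roughly halves the threshold; with k_comb fixed
# by Equation 2, the bound is the only remaining lever on the combined threshold.
</preformat>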
<p>The noise terms
<italic>η</italic>
<sub>
<italic>vis</italic>
</sub>
and
<italic>η</italic>
<sub>
<italic>vest</italic>
</sub>
play crucial roles in the model, as they relate to the reliability of the momentary sensory evidence. To specify the manner in which such noise may depend on motion coherence, we relied on fundamental assumptions about how optic flow stimuli are represented by the brain. We assumed that heading is represented by a neural population code in which neurons have heading tuning curves that, within the range of headings tested in this experiment (±16°,
<xref ref-type="fig" rid="fig5">Figure 5A</xref>
), differ in their heading preferences but have similar shapes. This is broadly consistent with data from area MSTd (
<xref rid="bib13" ref-type="bibr">Fetsch et al., 2011</xref>
), but the exact location of such a code is not important for our argument. For low coherence, motion energy in the stimulus is almost uniform for all heading directions, such that all neurons in the population fire at approximately the same rate (
<xref ref-type="fig" rid="fig5">Figure 5A</xref>
, dark blue curve). For high coherence, population neural activity is strongly peaked around the actual heading direction (
<xref ref-type="fig" rid="fig5">Figure 5A</xref>
, cyan curve) (
<xref rid="bib33" ref-type="bibr">Morgan et al., 2008</xref>
;
<xref rid="bib13" ref-type="bibr">Fetsch et al., 2011</xref>
).
<fig id="fig5" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.7554/eLife.03005.010</object-id>
<label>Figure 5.</label>
<caption>
<title>Scaling of momentary evidence statistics of the diffusion model (DM) with coherence.</title>
<p>(
<bold>A</bold>
) Assumed neural population activity giving rise to the DM mean and variance of the momentary evidence, and their dependence on coherence. Each curve represents the activity of a population of neurons with a range of heading preferences, in response to optic flow with a particular coherence and a heading indicated by the dashed vertical line. (
<bold>B</bold>
) Expected pattern of reaction times if variance is independent of coherence. If neither the DM bound nor the DM variance depend on coherence, the DM predicts the same decision time for all small headings, regardless of coherence. This is due to the DM drift rate,
<italic>k</italic>
<sub>
<italic>vis</italic>
</sub>
(
<italic>c</italic>
)
<italic>sin</italic>
(
<italic>h</italic>
) being close to 0 for small headings,
<italic>h</italic>
≈0, independent of the DM sensitivity
<italic>k</italic>
<sub>
<italic>vis</italic>
</sub>
(
<italic>c</italic>
). (
<bold>C</bold>
) Expected pattern of reaction times when variance scales with coherence. If both DM sensitivity and DM variance scale with coherence while the bound remains constant, the DM predicts different decision times across coherences, even for small headings. Greater coherence causes an increase in variance, which in turn causes the bound to be reached more quickly for higher coherences, even if the heading, and thus the drift rate, is small.</p>
<p>
<bold>DOI:</bold>
<ext-link ext-link-type="doi" xlink:href="10.7554/eLife.03005.010">http://dx.doi.org/10.7554/eLife.03005.010</ext-link>
</p>
</caption>
<graphic xlink:href="elife03005f005"></graphic>
</fig>
</p>
<p>Based on this representation, and assuming that the response variability of the neurons belongs to the exponential family with linear sufficient statistics (
<xref rid="bib30" ref-type="bibr">Ma et al., 2006</xref>
) (an assumption consistent with in vivo data [
<xref rid="bib17" ref-type="bibr">Graf et al., 2011</xref>
]), heading discrimination can be performed optimally by a weighted sum of the activity of all neurons, with weights monotonically related to the preferred heading of each neuron. For a straight-ahead heading,
<italic>h</italic>
= 0, this sum should be 0, and for
<italic>h</italic>
> 0 (or
<italic>h</italic>
< 0) it should be positive (or negative), thus sharing the basic properties of the momentary evidence,
<inline-formula>
<mml:math id="inf20">
<mml:mrow>
<mml:mover accent="true">
<mml:mi>x</mml:mi>
<mml:mo>˙</mml:mo>
</mml:mover>
</mml:mrow>
</mml:math>
</inline-formula>
, in our DM. This allowed us to deduce the mean and variance of the momentary evidence driving
<inline-formula>
<mml:math id="inf21">
<mml:mrow>
<mml:mover accent="true">
<mml:mi>x</mml:mi>
<mml:mo>˙</mml:mo>
</mml:mover>
</mml:mrow>
</mml:math>
</inline-formula>
, based on what we know about the neural responses. First, the sensitivity,
<italic>k</italic>
<sub>
<italic>vis</italic>
</sub>
(
<italic>c</italic>
), which determines how optic flow modulates the mean drift rate of
<inline-formula>
<mml:math id="inf22">
<mml:mrow>
<mml:mover accent="true">
<mml:mi>x</mml:mi>
<mml:mo>˙</mml:mo>
</mml:mover>
</mml:mrow>
</mml:math>
</inline-formula>
, scales in proportion with the ‘peakedness’ of the neural activity, which in turn is proportional to coherence. We assumed a functional form of
<italic>k</italic>
<sub>
<italic>vis</italic>
</sub>
(
<italic>c</italic>
) given by
<inline-formula>
<mml:math id="inf23">
<mml:mrow>
<mml:msub>
<mml:mi>a</mml:mi>
<mml:mrow>
<mml:mi>v</mml:mi>
<mml:mi>i</mml:mi>
<mml:mi>s</mml:mi>
</mml:mrow>
</mml:msub>
<mml:msup>
<mml:mi>c</mml:mi>
<mml:mrow>
<mml:msub>
<mml:mi>γ</mml:mi>
<mml:mrow>
<mml:mi>v</mml:mi>
<mml:mi>i</mml:mi>
<mml:mi>s</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula>
, where
<italic>a</italic>
<sub>
<italic>vis</italic>
</sub>
and
<italic>γ</italic>
<sub>
<italic>vis</italic>
</sub>
are positive parameters. Second, the variance of
<inline-formula>
<mml:math id="inf24">
<mml:mrow>
<mml:mover accent="true">
<mml:mi>x</mml:mi>
<mml:mo>˙</mml:mo>
</mml:mover>
</mml:mrow>
</mml:math>
</inline-formula>
is assumed to be the sum of the variances of the neural responses. Since experimental data suggest that the variance of these responses is proportional to their firing rate (
<xref rid="bib45" ref-type="bibr">Tolhurst et al., 1983</xref>
), the sum of the variances is proportional to the area underneath the population activity profile (
<xref ref-type="fig" rid="fig5">Figure 5A</xref>
). Based on experimental data (
<xref rid="bib24" ref-type="bibr">Heuer and Britten, 2007</xref>
), this area was assumed to scale roughly linearly with coherence, such that the variance of
<inline-formula>
<mml:math id="inf25">
<mml:mrow>
<mml:mover accent="true">
<mml:mi>x</mml:mi>
<mml:mo>˙</mml:mo>
</mml:mover>
</mml:mrow>
</mml:math>
</inline-formula>
is proportional to
<inline-formula>
<mml:math id="inf26">
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo>+</mml:mo>
<mml:msub>
<mml:mi>b</mml:mi>
<mml:mrow>
<mml:mi>v</mml:mi>
<mml:mi>i</mml:mi>
<mml:mi>s</mml:mi>
</mml:mrow>
</mml:msub>
<mml:msup>
<mml:mi>c</mml:mi>
<mml:mrow>
<mml:msub>
<mml:mi>γ</mml:mi>
<mml:mrow>
<mml:mi>v</mml:mi>
<mml:mi>i</mml:mi>
<mml:mi>s</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula>
with free parameters
<italic>b</italic>
<sub>
<italic>vis</italic>
</sub>
and
<italic>γ</italic>
<sub>
<italic>vis</italic>
</sub>
, the latter of which captures possible deviations from linearity. We further assumed the DM bound to be independent of coherence, and given by
<italic>θ</italic>
<sub>
<italic>σ</italic>
,
<italic>vis</italic>
</sub>
. Thus, the effect of motion coherence on the momentary evidence in the DM was modeled by four parameters:
<italic>a</italic>
<sub>
<italic>vis</italic>
</sub>
,
<italic>γ</italic>
<sub>
<italic>vis</italic>
</sub>
,
<italic>b</italic>
<sub>
<italic>vis</italic>
</sub>
, and
<italic>θ</italic>
<sub>
<italic>σ</italic>
,
<italic>vis</italic>
</sub>
.</p>
<p>The above scaling of the diffusion variance by coherence, which is a consequence of the neural code for heading, makes an interesting prediction: reaction times for headings near straight ahead should decrease with increasing coherence in the visual condition, even though the mean drift rate,
<italic>k</italic>
<sub>
<italic>vis</italic>
</sub>
(
<italic>c</italic>
)
<italic>sin</italic>
(
<italic>h</italic>
), is very close to 0. This is indeed what we observed: subjects tended to decide faster for higher coherences even when
<italic>h</italic>
≈ 0 (
<xref ref-type="fig" rid="fig3">Figure 3</xref>
,
<xref ref-type="fig" rid="fig3s1">Figure 3—figure supplement 1</xref>
). This aspect of the data can only be captured by the model if the DM variance is allowed to change with coherence (
<xref ref-type="fig" rid="fig5">Figure 5B,C</xref>
).</p>
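<p>This prediction can be checked against a textbook first-passage result (ours to invoke, not the article's): with zero drift, the expected time to reach bounds ±<italic>θ</italic> is <italic>θ</italic><sup>2</sup>/<italic>σ</italic><sup>2</sup>, so a diffusion variance that grows with coherence shortens decision times at <italic>h</italic> ≈ 0 even though the drift stays near zero:</p>
<preformat>
def mean_decision_time_h0(theta, var):
    """Zero-drift DM: expected first-passage time to +/-theta equals theta**2 / var."""
    return theta**2 / var

theta = 1.5
for c, var in [(0.0, 1.0), (0.25, 1.9), (0.70, 2.6)]:  # var ~ 1 + b_vis * c**gamma_vis
    print(f"coherence {c:.2f}: mean decision time {mean_decision_time_h0(theta, var):.2f} s")
# A coherence-independent variance would give identical times at h = 0
# (Figure 5B); variance growing with coherence shortens them (Figure 5C).
</preformat>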
<p>To summarize, in the combined condition, the diffusion variance was assumed to be proportional to
<inline-formula>
<mml:math id="inf27">
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo>+</mml:mo>
<mml:msub>
<mml:mi>b</mml:mi>
<mml:mrow>
<mml:mi>c</mml:mi>
<mml:mi>o</mml:mi>
<mml:mi>m</mml:mi>
<mml:mi>b</mml:mi>
</mml:mrow>
</mml:msub>
<mml:msup>
<mml:mi>c</mml:mi>
<mml:mrow>
<mml:msub>
<mml:mi>γ</mml:mi>
<mml:mrow>
<mml:mi>c</mml:mi>
<mml:mi>o</mml:mi>
<mml:mi>m</mml:mi>
<mml:mi>b</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula>
, while the bound was fixed at
<italic>θ</italic>
<sub>
<italic>σ</italic>
,
<italic>comb</italic>
</sub>
. By contrast, the diffusion rate (sensitivity) cannot be modeled freely but rather needs to obey
<inline-formula>
<mml:math id="inf28">
<mml:mrow>
<mml:msub>
<mml:mi>k</mml:mi>
<mml:mrow>
<mml:mi>c</mml:mi>
<mml:mi>o</mml:mi>
<mml:mi>m</mml:mi>
<mml:mi>b</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>c</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>=</mml:mo>
<mml:msqrt>
<mml:mrow>
<mml:msubsup>
<mml:mi>k</mml:mi>
<mml:mrow>
<mml:mi>v</mml:mi>
<mml:mi>i</mml:mi>
<mml:mi>s</mml:mi>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>c</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>+</mml:mo>
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:msub>
<mml:mi>k</mml:mi>
<mml:mrow>
<mml:mi>v</mml:mi>
<mml:mi>e</mml:mi>
<mml:mi>s</mml:mi>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
</mml:mrow>
</mml:msqrt>
</mml:mrow>
</mml:math>
</inline-formula>
in order to ensure optimal cue combination. The sensitivity
<italic>k</italic>
<sub>
<italic>vest</italic>
</sub>
and bound
<italic>θ</italic>
<sub>
<italic>σ</italic>
,
<italic>vest</italic>
</sub>
in the vestibular condition do not depend on motion coherence and were thus model parameters that were fitted directly.</p>
<p>Observed reaction times were assumed to be composed of the decision time and some non-decision time. The decision time is the time from the start of integrating evidence until a decision is made, as predicted by the diffusion model. The non-decision time includes the motor latency and the time from stimulus onset to the start of integrating evidence. As the latter can vary between different modalities, we allowed it to differ between visual, vestibular, and combined conditions, but not for different coherences, thus introducing the model parameters
<italic>t</italic>
<sub>
<italic>nd</italic>
,
<italic>vis</italic>
</sub>
,
<italic>t</italic>
<sub>
<italic>nd</italic>
,
<italic>vest</italic>
</sub>
, and
<italic>t</italic>
<sub>
<italic>nd</italic>
,
<italic>comb</italic>
</sub>
. Although the fitted non-decision times were similar across stimulus conditions for most subjects (
<xref ref-type="fig" rid="fig3s2">Figure 3—figure supplement 2</xref>
), a model assuming a single non-decision time resulted in a small but significant decrease in fit quality (
<xref ref-type="fig" rid="fig7s2">Figure 7—figure supplement 2A</xref>
). Overall, 12 parameters were used to model cue sensitivities, bounds, variances, and non-decision times in all conditions, and these 12 parameters were used to fit 312 data points for subjects tested with 6 coherences (168 data points for the three-coherence version). An additional 14 parameters (8 parameters for the three-coherence version; one bias parameter per coherence/condition, one lapse parameter across all conditions) controlled for biases in the motion direction percept and for lapses of attention that were assumed to lead to random choices (‘Materials and methods’). Although these additional parameters were necessary to achieve good model fits (
<xref ref-type="fig" rid="fig7s2">Figure 7—figure supplement 2A</xref>
), it is critical to note that they could not account for differences in heading thresholds or reaction times across stimulus conditions. As such, the additional parameters play no role in determining whether subjects perform optimal multisensory integration. Alternative parameterizations of how drift rates and bounds depend on motion coherence yielded qualitatively similar results, but caused the model fits to worsen decisively (
<xref ref-type="supplementary-material" rid="SD1-data">Supplementary file 1</xref>
;
<xref ref-type="fig" rid="fig7s2">Figure 7—figure supplement 2A</xref>
).</p>
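<p>As a hedged illustration of how bias and lapse parameters typically enter such a fit (our formulation; the article's exact treatment is given in ‘Materials and methods’): a bias shifts the heading at which the model is evaluated, and a lapse rate mixes the model prediction with chance:</p>
<preformat>
def choice_probability(p_model, lapse):
    """Mix the DM prediction with chance performance to model attentional lapses."""
    return lapse / 2.0 + (1.0 - lapse) * p_model

# Example: a DM prediction of P(rightward) = 0.95 becomes ~0.93 with a 5% lapse
# rate; a heading bias would enter upstream, by evaluating the DM at (h - bias).
</preformat>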
<p>Critically, our model predicts that the unimodal sensitivities
<italic>k</italic>
<sub>
<italic>vis</italic>
</sub>
(
<italic>c</italic>
) and
<italic>k</italic>
<sub>
<italic>vest</italic>
</sub>
relate to the combined value by
<inline-formula>
<mml:math id="inf29">
<mml:mrow>
<mml:msubsup>
<mml:mi>k</mml:mi>
<mml:mrow>
<mml:mi>c</mml:mi>
<mml:mi>o</mml:mi>
<mml:mi>m</mml:mi>
<mml:mi>b</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>p</mml:mi>
<mml:mi>r</mml:mi>
<mml:mi>e</mml:mi>
<mml:mi>d</mml:mi>
<mml:mi>i</mml:mi>
<mml:mi>c</mml:mi>
<mml:mi>t</mml:mi>
<mml:mi>e</mml:mi>
<mml:mi>d</mml:mi>
</mml:mrow>
</mml:msubsup>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>c</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>=</mml:mo>
<mml:msqrt>
<mml:mrow>
<mml:msub>
<mml:mi>k</mml:mi>
<mml:mrow>
<mml:mi>v</mml:mi>
<mml:mi>i</mml:mi>
<mml:mi>s</mml:mi>
</mml:mrow>
</mml:msub>
<mml:msup>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>c</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msup>
<mml:mo>+</mml:mo>
<mml:msubsup>
<mml:mi>k</mml:mi>
<mml:mrow>
<mml:mi>v</mml:mi>
<mml:mi>e</mml:mi>
<mml:mi>s</mml:mi>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msubsup>
</mml:mrow>
</mml:msqrt>
</mml:mrow>
</mml:math>
</inline-formula>
, if subjects accumulate evidence optimally across cues. To test this prediction, we separately fitted the unimodal and combined sensitivities,
<italic>k</italic>
<sub>
<italic>vis</italic>
</sub>
(
<italic>c</italic>
),
<italic>k</italic>
<sub>
<italic>vest</italic>
</sub>
and
<italic>k</italic>
<sub>
<italic>comb</italic>
</sub>
to the complete data set from each individual subject using maximum likelihood optimization (‘Materials and methods’), and then compared the fitted values of
<italic>k</italic>
<sub>
<italic>comb</italic>
</sub>
to the predicted values,
<inline-formula>
<mml:math id="inf30">
<mml:mrow>
<mml:msubsup>
<mml:mi>k</mml:mi>
<mml:mrow>
<mml:mi>c</mml:mi>
<mml:mi>o</mml:mi>
<mml:mi>m</mml:mi>
<mml:mi>b</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>p</mml:mi>
<mml:mi>r</mml:mi>
<mml:mi>e</mml:mi>
<mml:mi>d</mml:mi>
<mml:mi>i</mml:mi>
<mml:mi>c</mml:mi>
<mml:mi>t</mml:mi>
<mml:mi>e</mml:mi>
<mml:mi>d</mml:mi>
</mml:mrow>
</mml:msubsup>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>c</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>
. Predicted and observed sensitivities for the combined condition are virtually identical (
<xref ref-type="fig" rid="fig6">Figure 6</xref>
), providing strong support for near-optimal cue combination across both time and cues. Remarkably, for low coherences at which optic flow provides no useful heading information, the sensitivity in the combined condition was not significantly different from that of the vestibular condition (
<xref ref-type="fig" rid="fig6">Figure 6</xref>
). Thus, subjects were able to completely suppress noisy visual information and rely solely on vestibular input, as predicted by the model.
<fig id="fig6" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.7554/eLife.03005.011</object-id>
<label>Figure 6.</label>
<caption>
<title>Predicted and observed sensitivity in the combined condition.</title>
<p>The sensitivity parameter quantifies how sensitive subjects are to changes in heading. The solid red line shows the predicted sensitivity for the combined condition, as computed from the sensitivities of the unimodal conditions (dashed lines). The combined sensitivity measured by fitting the model to each coherence separately (red squares) does not differ significantly from the optimal prediction, providing strong support for the hypothesis that subjects accumulate evidence near-optimally across time and cues. Data are averaged across datasets (except 0%, 12%, 51% coherence: only datasets B2, D2, F2), with shaded areas and error bars showing the 95% CIs.</p>
<p>
<bold>DOI:</bold>
<ext-link ext-link-type="doi" xlink:href="10.7554/eLife.03005.011">http://dx.doi.org/10.7554/eLife.03005.011</ext-link>
</p>
</caption>
<graphic xlink:href="elife03005f006"></graphic>
</fig>
</p>
<p>Having established that cue sensitivities combine according to
<xref ref-type="disp-formula" rid="equ2">Equation 2</xref>
, the model was then fit to data from each individual subject under the assumption of optimal cue combination. Model fits are shown as solid curves for example subject D2 (
<xref ref-type="fig" rid="fig3">Figure 3</xref>
), as well as for all other subjects (
<xref ref-type="fig" rid="fig3s1">Figure 3—figure supplement 1</xref>
). Sensitivity parameters, bounds, and non-decision times resulting from the fits are also shown for each subject, condition, and coherence (
<xref ref-type="fig" rid="fig3s2">Figure 3—figure supplement 2</xref>
). For 8 of 10 datasets, the model explains more than 95% of the variance in the data (adjusted
<italic>R</italic>
<sup>2</sup>
> 0.95), providing additional evidence for near-optimal cue combination across both time and cues (
<xref ref-type="fig" rid="fig7">Figure 7A</xref>
). The subjects associated with these datasets show a clear decrease in reaction times with larger
<inline-formula>
<mml:math id="inf31">
<mml:mrow>
<mml:mrow>
<mml:mo>|</mml:mo>
<mml:mi>h</mml:mi>
<mml:mo>|</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>
, and this effect is more pronounced in the visual condition than in the vestibular and combined conditions (
<xref ref-type="fig" rid="fig3">Figure 3</xref>
,
<xref ref-type="fig" rid="fig3s1">Figure 3—figure supplement 1</xref>
). The remaining two subjects (C and F) show qualitatively different behavior and lower
<italic>R</italic>
<sup>2</sup>
values of approximately 0.80 and 0.90, respectively (
<xref ref-type="fig" rid="fig3s1">Figure 3—figure supplement 1</xref>
). These subjects showed little decline in reaction times with larger values of
<inline-formula>
<mml:math id="inf32">
<mml:mrow>
<mml:mrow>
<mml:mo>|</mml:mo>
<mml:mi>h</mml:mi>
<mml:mo>|</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>
, and their mean reaction times were more similar across the visual, vestibular and combined conditions.
<fig id="fig7" position="float" orientation="portrait">
<object-id pub-id-type="doi">10.7554/eLife.03005.012</object-id>
<label>Figure 7.</label>
<caption>
<title>Model goodness-of-fit and comparison to alternative models.</title>
<p>(
<bold>A</bold>
) Coefficient of determination (adjusted
<italic>R</italic>
<sup>2</sup>
) of the model fit for each of the ten datasets. (
<bold>B</bold>
) Bayes factor of alternative models compared to the optimal model. The abscissa shows the base-10 logarithm of the Bayes factor of the alternative models vs the optimal model (negative values mean that the optimal model outperforms the alternative model). The gray vertical line close to the origin (at a value of −2 on the abscissa) marks the point at which the optimal model is 100 times more likely than each alternative, at which point the difference is considered ‘decisive’ (
<xref rid="bib25" ref-type="bibr">Jeffreys, 1998</xref>
). Only the ‘separate k's‘ model has more parameters than the optimal model, but the Bayes factor indicates that the slight increase in goodness-of-fit does not justify the increased degrees of freedom. The ‘no cue weighting’ model assumes that visual and vestibular cues are weighted equally, independent of their sensitivities. The ‘weighting by acceleration’ and ‘weighting by velocity’ models assume that the momentary evidence of both cues is weighted by the acceleration and velocity profile of the stimulus, respectively. The ‘no temporal weighting’ model assumes that the evidence is not weighted over time according to its sensitivity. The ‘no cue/temporal weighting’ model lacks both weighting of cues by sensitivity and weighting by temporal profile. All of the tested alternative models explain the data decisively worse than the optimal model.
<xref ref-type="fig" rid="fig7s1">Figure 7—figure supplement 1</xref>
shows how individual subjects contribute to this model comparison, together with the results of a more conservative Bayesian random-effects model comparison that supports the same conclusion.
<xref ref-type="fig" rid="fig7s2">Figure 7—figure supplement 2</xref>
compares the proposed model to ones with alternative parameterizations.</p>
<p>
<bold>DOI:</bold>
<ext-link ext-link-type="doi" xlink:href="10.7554/eLife.03005.012">http://dx.doi.org/10.7554/eLife.03005.012</ext-link>
</p>
</caption>
<graphic xlink:href="elife03005f007"></graphic>
<p content-type="supplemental-figure">
<fig id="fig7s1" specific-use="child-fig" orientation="portrait" position="anchor">
<object-id pub-id-type="doi">10.7554/eLife.03005.013</object-id>
<label>Figure 7—figure supplement 1.</label>
<caption>
<title>Model comparison per subject, and random-effects model comparison.</title>
<p>(
<bold>A</bold>
) Shows the contribution of each subject to the model comparison shown in
<xref ref-type="fig" rid="fig7">Figure 7B</xref>
. As in
<xref ref-type="fig" rid="fig7">Figure 7B</xref>
, the grey line shows the threshold above which the alternative models provide a decisively worse (if negative) or better (if positive) model fit. As can be seen, the model comparison is mostly consistent across subjects, except for models that weight both modalities either by acceleration or velocity only. Even in these cases, pooling across subjects leads to a decisively worse fit of the alternative model when compared to the optimal model (
<xref ref-type="fig" rid="fig7">Figure 7</xref>
). (
<bold>B</bold>
) and (
<bold>C</bold>
) Show the results of a random-effects Bayesian model comparison (
<xref rid="bib44" ref-type="bibr">Stephan et al., 2009</xref>
). This model comparison infers, for each subject, the probability that each model generated the observed behavior, and is less sensitive to outlying model fits than the fixed-effects comparison shown in
<xref ref-type="fig" rid="fig7">Figure 7B</xref>
(e.g., a single subject might strongly support an otherwise unsupported model, which could skew the overall comparison). (
<bold>B</bold>
) Shows the inferred distribution over all compared models, and supports the optimal model with exceedance probability p≈0.664 (the probability that the optimal model is more likely than any other model). This random-effects comparison causes models with very similar predictions to share some probability mass—in our case the optimal model and the model assuming evidence weighting by the velocity time-course. In (
<bold>C</bold>
) we perform the same comparison without the ‘weighting by velocity’ model, in which case the exceedance probability supporting the optimal model rises to p≈0.953.</p>
<p>
<bold>DOI:</bold>
<ext-link ext-link-type="doi" xlink:href="10.7554/eLife.03005.013">http://dx.doi.org/10.7554/eLife.03005.013</ext-link>
</p>
</caption>
<graphic xlink:href="elife03005fs004"></graphic>
</fig>
</p>
<p content-type="supplemental-figure">
<fig id="fig7s2" specific-use="child-fig" orientation="portrait" position="anchor">
<object-id pub-id-type="doi">10.7554/eLife.03005.014</object-id>
<label>Figure 7—figure supplement 2.</label>
<caption>
<title>Model comparison for models with alternative parameterization.</title>
<p>(
<bold>A</bold>
) Compares the optimal model as described in the main text to various alternative models. The first model changes how drifts and bounds relate to coherence (see
<xref ref-type="supplementary-material" rid="SD1-data">Supplementary file 1</xref>
), and introduces one additional parameter. The second model fits drifts and bounds separately for all coherences. The other models either use a single non-decision time (instead of one for each modality), no heading biases, or a combination of both. The figure shows the Bayes factor, illustrating that in all cases the alternative models are decisively worse than the original model (the grey line close to the origin indicates the threshold). (
<bold>B</bold>
and
<bold>C</bold>
) Show the overall model goodness-of-fit (left panels) of two models that use an alternative parameterization of how drifts and bounds depend on coherence (see (
<bold>A</bold>
)). Furthermore, these panels compare the models, which still perform optimal evidence accumulation across both time and cues, to sub-optimal models (right panels) that do not (except ‘separate k's’, which is potentially optimal). These figures are analogous to
<xref ref-type="fig" rid="fig7">Figure 7</xref>
and show that neither change of parameterization qualitatively changes our conclusions.</p>
<p>
<bold>DOI:</bold>
<ext-link ext-link-type="doi" xlink:href="10.7554/eLife.03005.014">http://dx.doi.org/10.7554/eLife.03005.014</ext-link>
</p>
</caption>
<graphic xlink:href="elife03005fs005"></graphic>
</fig>
</p>
</fig>
</p>
<p>Critically, the model nicely captures the observation that the psychophysical threshold in the combined condition is typically greater than that for the visual condition, despite near-optimal combination of momentary evidence from the visual and vestibular modalities (e.g.,
<xref ref-type="fig" rid="fig3">Figure 3</xref>
, 70% coherence,
<xref ref-type="fig" rid="fig2s1">Figure 2—figure supplement 1</xref>
,
<xref ref-type="fig" rid="fig3s1">Figure 3—figure supplement 1</xref>
). Thus, the model fits confirm quantitatively that apparent sub-optimality in psychophysical thresholds can arise even if subjects combine all cues in a statistically optimal manner, emphasizing the need for a computational framework that incorporates both decision accuracy and speed.</p>
</sec>
<sec id="s2-4">
<title>Alternative models</title>
<p>To further assess and validate the critical design features of our modified DM, we evaluated six alternative (mostly sub-optimal) versions of the model to see whether these variants could explain the data equally well. We compared these variants to the optimal model using Bayesian model comparison, which trades off fit quality with model complexity to determine whether additional parameters significantly improve the fit (
<xref rid="bib16" ref-type="bibr">Goodman, 1999</xref>
).</p>
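<p>For concreteness, one common way to approximate such a comparison (a sketch under our own assumptions, not necessarily the procedure used here) is via the Bayesian Information Criterion, which penalizes extra parameters:</p>
<preformat>
import numpy as np

def bic(log_likelihood, n_params, n_data):
    """Bayesian Information Criterion: -2*logL + k*ln(n)."""
    return -2.0 * log_likelihood + n_params * np.log(n_data)

def log10_bayes_factor(logL_alt, k_alt, logL_opt, k_opt, n):
    """Approximate log10 Bayes factor of an alternative model vs the optimal model."""
    return -(bic(logL_alt, k_alt, n) - bic(logL_opt, k_opt, n)) / (2.0 * np.log(10.0))

# Values below -2 mean the optimal model is over 100 times more likely, the
# 'decisive' criterion used in Figure 7B.
</preformat>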
<p>With regard to optimality of cue integration across modalities, we examined two model variants. The first variant (also used to generate
<xref ref-type="fig" rid="fig6">Figure 6</xref>
) eliminates the relationship,
<inline-formula>
<mml:math id="inf33">
<mml:mrow>
<mml:msub>
<mml:mi>k</mml:mi>
<mml:mrow>
<mml:mi>c</mml:mi>
<mml:mi>o</mml:mi>
<mml:mi>m</mml:mi>
<mml:mi>b</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>c</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>=</mml:mo>
<mml:msqrt>
<mml:mrow>
<mml:msubsup>
<mml:mi>k</mml:mi>
<mml:mrow>
<mml:mi>v</mml:mi>
<mml:mi>i</mml:mi>
<mml:mi>s</mml:mi>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>c</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>+</mml:mo>
<mml:msubsup>
<mml:mi>k</mml:mi>
<mml:mrow>
<mml:mi>v</mml:mi>
<mml:mi>e</mml:mi>
<mml:mi>s</mml:mi>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msubsup>
</mml:mrow>
</mml:msqrt>
</mml:mrow>
</mml:math>
</inline-formula>
(
<xref ref-type="disp-formula" rid="equ2">Equation 2</xref>
), between the sensitivity parameters in the combined and single-cue conditions. Instead, this variant allows independent sensitivity parameters for the combined condition at each coherence, thus introducing one additional parameter per coherence. Since this variant is strictly more general than the optimal model, it must fit the data at least as well. However, if the subjects' behavior is near optimal, the additional degrees of freedom in this variant should not improve the fit enough to justify the addition of these parameters. This is indeed what we found by Bayesian model comparison (
<xref ref-type="fig" rid="fig7">Figure 7B</xref>
, ‘separate k's’), which shows the optimal model to be ∼10
<sup>70</sup>
times more likely than the variant with independent values of
<italic>k</italic>
<sub>
<italic>comb</italic>
</sub>
(
<italic>c</italic>
). This is well above the threshold value that is considered to provide ‘decisive’ evidence in favor of the optimal model (we use Jeffreys' definition of decisive [
<xref rid="bib25" ref-type="bibr">Jeffreys, 1998</xref>
] according to which a model is said to be decisively better if it is >100 times more likely to have generated the data). The second model variant had the same number of parameters as the optimal model, but assumed that the cues are always weighted equally. Evidence in the combined condition was given by the simple average,
<inline-formula>
<mml:math id="inf34">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>X</mml:mi>
<mml:mo>˙</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mi>c</mml:mi>
<mml:mi>o</mml:mi>
<mml:mi>m</mml:mi>
<mml:mi>b</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mfrac>
<mml:mn>1</mml:mn>
<mml:mn>2</mml:mn>
</mml:mfrac>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>X</mml:mi>
<mml:mo>˙</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mi>v</mml:mi>
<mml:mi>i</mml:mi>
<mml:mi>s</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>X</mml:mi>
<mml:mo>˙</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mi>v</mml:mi>
<mml:mi>e</mml:mi>
<mml:mi>s</mml:mi>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>
, ignoring cue sensitivities. The resulting fits (
<xref ref-type="fig" rid="fig7">Figure 7B</xref>
, ‘no cue weighting’) are also decisively worse than those of the optimal model. Together, these model variants strongly support the hypothesis that subjects weight cues according to their relative sensitivities, as given by
<xref ref-type="disp-formula" rid="equ2">Equation 2</xref>
. These effects were largely consistent across individual subjects (
<xref ref-type="fig" rid="fig7s1">Figure 7—figure supplement 1A</xref>
).</p>
<p>To test the other key assumption of our model—that subjects temporally weight incoming evidence according to the profile of stimulus information—we tested three model variants that modified how temporal weighting was performed without changing the number of parameters in the model. If we assumed that the temporal weighting of both modalities followed the acceleration profile of the stimulus while leaving the model otherwise unchanged, the model fit worsened decisively (
<xref ref-type="fig" rid="fig7">Figure 7B</xref>
, ‘weighting by acceleration’). Assuming that the weighting of both modalities followed the velocity profile of the stimulus also decisively reduced fit quality (
<xref ref-type="fig" rid="fig7">Figure 7B</xref>
, ‘weighting by velocity’), although this effect was not consistent across subjects (
<xref ref-type="fig" rid="fig7s1">Figure 7—figure supplement 1A</xref>
). If we completely removed temporal weighting of cues from the model, fits were dramatically worse than the optimal model (
<xref ref-type="fig" rid="fig7">Figure 7B</xref>
, ‘no temporal weighting’). Finally, for completeness, we also tested a model variant that neither performs temporal weighting of cues nor considers the relative sensitivity to the cues. Again, this model variant fit the data decisively worse than the optimal model (
<xref ref-type="fig" rid="fig7">Figure 7B</xref>
, ‘no cue/temporal weighting’). Thus, subjects seem to be able to take into account their sensitivity to the evidence across time as well as across cues. All of these model comparisons received further support from a more conservative random-effects Bayesian model comparison, shown in
<xref ref-type="fig" rid="fig7s1">Figure 7—figure supplement 1B,C</xref>
.</p>
<p>Finally, we also considered whether a parallel race model could account for our data. The parallel race model (
<xref rid="bib38" ref-type="bibr">Raab, 1962</xref>
;
<xref rid="bib32" ref-type="bibr">Miller, 1982</xref>
;
<xref rid="bib46" ref-type="bibr">Townsend and Wenger, 2004</xref>
;
<xref rid="bib34" ref-type="bibr">Otto and Mamassian, 2012</xref>
) postulates that the decision in the combined condition emerges from the faster of two independent races toward a bound, one for each sensory modality. Because it does not combine information across modalities, the parallel race model predicts that decisions in the combined condition are caused by the faster modality. Consequently, choices in the combined condition are unlikely to be more correct (on average) than those of the faster unimodal condition. For all but one subject, the vestibular modality is substantially faster, even when compared to the visual modality at high coherence and controlling for the effect of heading direction (2-way ANOVA, p<0.0001 for all subjects except C). Critically, all of these subjects feature significantly lower psychophysical thresholds in the combined condition than in the vestibular condition (p<0.039 for all subjects except subject C, p=0.210,
<xref ref-type="supplementary-material" rid="SD2-data">Supplementary file 2A</xref>
). Furthermore, we performed standard tests (Miller's bound and Grice's bound) that compare the observed distribution of reaction times with that predicted by the parallel race model (
<xref rid="bib32" ref-type="bibr">Miller, 1982</xref>
;
<xref rid="bib18" ref-type="bibr">Grice et al., 1984</xref>
). These tests revealed that all but two subjects made significantly slower decisions than predicted by the parallel race model for most coherence/heading combinations (p<0.05 for all subjects except subjects F and B2;
<xref ref-type="supplementary-material" rid="SD2-data">Supplementary file 2B</xref>
), and no subject was faster than predicted (p>0.05, all subjects;
<xref ref-type="supplementary-material" rid="SD2-data">Supplementary file 2B</xref>
). Based on these observations, we can reject the parallel race model as a viable hypothesis to explain the observed behavior.</p>
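<p>The logic of these bound tests can be sketched in a few lines of code. The following Python fragment is a minimal sketch (function and variable names are ours, and it omits the per-coherence/heading stratification and the significance testing used in the actual analysis): it evaluates the empirical reaction-time distribution of the combined condition against Miller's upper bound and Grice's lower bound derived from the unimodal conditions. Observed distributions that fall below Grice's bound correspond to decisions slower than any parallel race model allows.</p>
<preformat>
import numpy as np

def ecdf(samples, grid):
    """Empirical cumulative distribution of RT samples on a time grid."""
    return np.searchsorted(np.sort(samples), grid, side="right") / len(samples)

def race_model_bounds(rt_vis, rt_vest, rt_comb, n_grid=200):
    """Miller's bound: F_comb(t) <= F_vis(t) + F_vest(t);
    Grice's bound:  F_comb(t) >= max(F_vis(t), F_vest(t))."""
    pooled = np.concatenate([rt_vis, rt_vest, rt_comb])
    grid = np.linspace(pooled.min(), pooled.max(), n_grid)
    f_vis, f_vest, f_comb = (ecdf(r, grid) for r in (rt_vis, rt_vest, rt_comb))
    miller = np.minimum(f_vis + f_vest, 1.0)  # race-model upper bound
    grice = np.maximum(f_vis, f_vest)         # race-model lower bound
    return grid, f_comb, miller, grice
</preformat>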
</sec>
</sec>
<sec sec-type="discussion" id="s3">
<title>Discussion</title>
<p>We have shown that, when subjects are allowed to choose how long to accumulate evidence in a cue integration task, their behavior no longer follows the standard predictions of optimal cue integration theory that normally apply when stimulus presentation time is controlled by the experimenter. In particular, subjects exhibit worse discrimination performance (higher psychophysical thresholds) in the combined condition than would be predicted from the unimodal conditions, in some cases even worse than the better of the two unimodal conditions. This occurs because subjects tend to decide more quickly in the combined condition than in the more sensitive unimodal condition and thus have less time to accumulate evidence. This indicates that a more general definition of optimal cue integration must incorporate reaction times. Indeed, subjects' behavior could be reproduced by an extended diffusion model that takes into account both speed and accuracy, thus suggesting that subjects accumulate evidence across both time and cues in a statistically near-optimal manner (i.e., with minimal information loss) despite their reduced discrimination performance in the combined condition.</p>
<p>Previous work on optimal cue integration (e.g.,
<xref rid="bib11" ref-type="bibr">Ernst and Banks, 2002</xref>
;
<xref rid="bib1" ref-type="bibr">Battaglia et al., 2003</xref>
;
<xref rid="bib27" ref-type="bibr">Knill and Saunders, 2003</xref>
;
<xref rid="bib15" ref-type="bibr">Fetsch et al., 2009</xref>
) was based on experiments that employed fixed-duration stimuli and was thus able to ignore how subjects accumulate evidence over time. Moreover, previous work relied on the implicit assumption that subjects make use of all evidence throughout the duration of the stimulus. However, this assumption need not be true and has been shown to be violated even for short presentation durations (
<xref rid="bib31" ref-type="bibr">Mazurek et al., 2003</xref>
;
<xref rid="bib26" ref-type="bibr">Kiani et al., 2008</xref>
). Therefore, apparent sub-optimality in some previous studies of cue integration or in some individual subjects (
<xref rid="bib1" ref-type="bibr">Battaglia et al., 2003</xref>
;
<xref rid="bib15" ref-type="bibr">Fetsch et al., 2009</xref>
) might be attributable either to truly sub-optimal cue combination, to subjects halting evidence accumulation before the end of the stimulus presentation period, or to the difficulty in estimating stimulus processing time (
<xref rid="bib43" ref-type="bibr">Stanford et al., 2010</xref>
). Unfortunately, these potential causes cannot be distinguished using a fixed-duration task. Allowing subjects to register their decisions at any time during the trial alleviates this potential confound.</p>
<p>We model subjects' decision times by assuming an accumulation-to-bound process. In the multisensory context, this raises the question of whether evidence accumulation is bounded for each modality separately, as assumed by the parallel race model, or whether evidence is combined across modalities before being accumulated toward a single bound, as in co-activation models and our modified diffusion model. Based on our behavioral data, we can rule out parallel race models, as they cannot explain lower psychophysical thresholds (better sensitivity) in the combined condition relative to the faster vestibular condition. Further evidence against such models is provided by neurophysiological studies which demonstrate that visual and vestibular cues to heading converge in various cortical areas, including areas MSTd (
<xref rid="bib23" ref-type="bibr">Gu et al., 2006</xref>
), VIP (
<xref rid="bib41" ref-type="bibr">Schlack et al., 2005</xref>
;
<xref rid="bib7" ref-type="bibr">Chen et al., 2011b</xref>
), and VPS (
<xref rid="bib6" ref-type="bibr">Chen et al., 2011a</xref>
). Activity in area MSTd can account for sensitivity-based cue weighting in a fixed-duration task (
<xref rid="bib13" ref-type="bibr">Fetsch et al., 2011</xref>
), and MSTd activity is causally related to multi-modal heading judgments (
<xref rid="bib4" ref-type="bibr">Britten and van Wezel, 1998</xref>
,
<xref rid="bib5" ref-type="bibr">2002</xref>
;
<xref rid="bib21" ref-type="bibr">Gu et al., 2012</xref>
). These physiological studies strongly suggest that visual and vestibular signals are integrated in sensory representations prior to decision-making, inconsistent with parallel race models.</p>
<p>Our model makes the assumption that sensory signals are integrated prior to decision-making and is in this sense similar to co-activation models that have been used previously to model reaction times in multimodal settings (
<xref rid="bib32" ref-type="bibr">Miller, 1982</xref>
;
<xref rid="bib10" ref-type="bibr">Corneil et al., 2002</xref>
;
<xref rid="bib46" ref-type="bibr">Townsend and Wenger, 2004</xref>
). However, it differs from these models in important aspects. First, co-activation models have been introduced to explain reaction times that are faster than those predicted by parallel race models (
<xref rid="bib38" ref-type="bibr">Raab, 1962</xref>
;
<xref rid="bib32" ref-type="bibr">Miller, 1982</xref>
). Our subjects, in contrast, feature reaction times that are slower than those of parallel race models in almost all conditions (
<xref ref-type="supplementary-material" rid="SD2-data">Supplementary file 2B</xref>
). We capture this effect by an elevated effective bound in the combined condition as compared to the faster vestibular condition, such that cue combination remains optimal despite longer reaction times. Second, co-activation models usually combine inputs from the different modalities by a simple sum (e.g.,
<xref rid="bib46" ref-type="bibr">Townsend and Wenger, 2004</xref>
). This entails adding noise to the combined signal if the sensitivity to one of the modalities is low, which is detrimental to discrimination performance. In contrast, we show that different cues need to be weighted according to their sensitivities to achieve statistically optimal integration of multisensory evidence at each moment in time (
<xref ref-type="disp-formula" rid="equ2">Equation 2</xref>
).</p>
<p>Serial race models offer another alternative to co-activation models; they posit that the race corresponding to one cue needs to be completed before the other one starts (e.g.,
<xref rid="bib46" ref-type="bibr">Townsend and Wenger, 2004</xref>
). These models can be ruled out by observing that they predict reaction times in the combined condition to be longer than those in the slower of the two unimodal conditions. This is clearly violated by the subjects' behavior.</p>
<p>Optimal accumulation of evidence over time requires the momentary evidence to be weighted according to its associated sensitivity. For the vestibular modality, we assume that the temporal profile of sensitivity to the evidence follows acceleration. This may appear to conflict with data from multimodal areas MSTd, VIP, and VPS, where neural activity in response to self-motion reflects a mixture of velocity and acceleration components (
<xref rid="bib14" ref-type="bibr">Fetsch et al., 2010</xref>
;
<xref rid="bib6" ref-type="bibr">Chen et al., 2011a</xref>
). Note, however, that the vestibular stimulus is initially encoded by otolith afferents in terms of acceleration (
<xref rid="bib12" ref-type="bibr">Fernandez and Goldberg, 1976</xref>
). Thus, any neural representation of vestibular stimuli in terms of velocity requires a temporal integration of the acceleration signal, and this integration introduces temporal correlations into the signal. As a consequence, a neural response that is maximal at the time of peak stimulus velocity does not imply a simultaneous peak in the information coded about heading direction. Rather, information still follows the time course of its original encoding, which is in terms of acceleration. In contrast, the time course of the sensitivity to the visual stimulus is less clear. For our model we have intuitively assumed it to follow the velocity profile of the stimulus, as information per unit time about heading certainly increases with the velocity of the optic flow field, even when there is no acceleration. This assumption is supported by a decisively worse model fit if we set the weighting of the visual momentary evidence to follow the acceleration profile (
<xref ref-type="fig" rid="fig7">Figure 7B</xref>
, ‘weighting by acceleration’). Nonetheless, we cannot completely exclude a contribution of acceleration components to visual information (
<xref rid="bib29" ref-type="bibr">Lisberger and Movshon, 1999</xref>
;
<xref rid="bib37" ref-type="bibr">Price et al., 2005</xref>
). In any case, our model fits make clear that temporal weighting of vestibular and visual inputs is necessary to predict behavior when stimuli are time-varying.</p>
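<p>The point that temporal integration of an acceleration-coded signal introduces temporal correlations can be illustrated numerically. In this Python sketch we assume, purely for illustration, additive white noise on the acceleration signal; after integration into a velocity signal, neighboring samples of the noise become almost perfectly correlated.</p>
<preformat>
import numpy as np

rng = np.random.default_rng(0)
dt, n_steps, n_trials = 0.01, 200, 2000

# Uncorrelated (white) noise in the acceleration domain.
acc_noise = rng.standard_normal((n_trials, n_steps))
# Temporal integration (acceleration -> velocity) accumulates that noise.
vel_noise = np.cumsum(acc_noise, axis=1) * dt

# Across-trial correlation between neighboring time samples:
r_acc = np.corrcoef(acc_noise[:, 100], acc_noise[:, 101])[0, 1]  # ~0
r_vel = np.corrcoef(vel_noise[:, 100], vel_noise[:, 101])[0, 1]  # ~1
print(f"acceleration noise: r = {r_acc:+.2f}; integrated noise: r = {r_vel:+.2f}")
</preformat>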
<p>The extended DM model described here makes the strong assumption that cue sensitivities are known before combining information from the two modalities, as these sensitivities need to be known in order to weight the cues appropriately. As only the sensitivity to the visual stimulus changes across trials in our experiment, it is possible that subjects can estimate their sensitivity (as influenced by coherence) during the initial low-velocity stimulus period (
<xref ref-type="fig" rid="fig1">Figure 1C</xref>
) in which heading information is minimal but motion coherence is salient. Thus, for our task, it is reasonable to assume that subjects can estimate their sensitivity to cues. We have recently begun to consider how sensitivity estimation and cue integration can be implemented neurally. The neural model (
<xref rid="bib50" ref-type="bibr">Onken et al., 2012</xref>
. Near optimal multisensory integration with nonlinear probabilistic population codes using divisive normalization. The Society for Neuroscience annual meeting 2012) estimates the sensitivity to the visual input from motion sensitive neurons and uses this estimate to perform near-optimal multisensory integration with generalized probabilistic population codes (
<xref rid="bib30" ref-type="bibr">Ma et al., 2006</xref>
;
<xref rid="bib2" ref-type="bibr">Beck et al., 2008</xref>
) using divisive normalization. We intend to extend this model to the integration of evidence over time to predict neural responses (e.g., in area LIP) that should roughly track the temporal evolution of the decision variable (
<italic>x</italic>
<sub>
<italic>comb</italic>
</sub>
(
<italic>t</italic>
), ‘Materials and methods’) in the DM model. This will make predictions for activity in decision-making areas that can be tested in future experiments.</p>
<p>In closing, our findings establish that conventional definitions of optimality do not apply to cue integration tasks in which subjects’ decision times are unconstrained. We establish how sensory evidence should be weighted across modalities and time to achieve optimal performance in reaction-time tasks, and we show that human behavior is broadly consistent with these predictions but not with alternative models. These findings, and the extended diffusion model that we have developed, provide the foundation for building a general understanding of perceptual decision-making under more natural conditions in which multiple cues vary dynamically over time and subjects make rapid decisions when they have acquired sufficient evidence.</p>
</sec>
<sec sec-type="materials|methods" id="s4">
<title>Materials and methods</title>
<sec id="s4-1">
<title>Subjects and apparatus</title>
<p>Seven subjects (3 males) aged 23–38 years with normal or corrected-to-normal vision and no history of vestibular deficits participated in the experiments. All subjects but one were informed of the purposes of the study. Informed consent was obtained from all participants and all procedures were reviewed and approved by the Washington University Office of Human Research Protections (OHRP), Institutional Review Board (IRB; IRB ID# 201109183). Consent to publish was not obtained in writing, as it was not required by the IRB, but all subjects were recruited for this purpose and approved verbally. Of these subjects, three (subjects B, D, F; 1 male) participated in a follow-up experiment roughly 2 years after the initial data collection, with six coherence levels instead of the original three. The six-coherence version of their data is referred to as B2, D2, and F2. Procedures for the follow-up experiment were approved by the Institutional Review Board for Human Subject Research for Baylor College of Medicine and Affiliated Hospitals (BCM IRB, ID# H-29411), and informed consent and consent to publish were given again by all three subjects.</p>
<p>The apparatus, stimuli, and task design have been described in detail previously (
<xref rid="bib15" ref-type="bibr">Fetsch et al., 2009</xref>
;
<xref rid="bib22" ref-type="bibr">Gu et al., 2010</xref>
), and are briefly summarized here. Subjects were seated comfortably in a padded racing seat that was firmly attached to a 6-degree-of-freedom motion platform (MOOG, Inc). A 3-chip DLP projector (Galaxy 6; Barco, Kortrijk, Belgium) was mounted on the motion platform behind the subject and front-projected images onto a large (149 × 127 cm) projection screen via a mirror mounted above the subject’s head. The viewing distance to the projection screen was ∼70 cm, thus allowing for a field of view of ∼94° × 84°. Subjects were secured to the seat using a 5-point racing harness, and a custom-fitted plastic mask immobilized the head against a cushioned head mount. Seated subjects were enclosed in a black aluminum superstructure, such that only the display screen was visible in the darkened room. To render stimuli stereoscopically, subjects wore active stereo shutter glasses (CrystalEyes 3; RealD, Beverly Hills, CA) which restricted the field of view to ∼90° × 70°. Subjects were instructed to look at a centrally-located, head-fixed target throughout each trial. Sounds from the motion platform were masked by playing white noise through headphones. Behavioral task sequences and data acquisition were controlled by Matlab and responses were collected using a button box.</p>
<p>Visual stimuli were generated by an OpenGL accelerator board (nVidia Quadro FX1400), and were plotted with sub-pixel accuracy using hardware anti-aliasing. In the visual and combined conditions, visual stimuli depicted self-translation through a 3D cloud of stars distributed uniformly within a virtual space 130 cm wide, 150 cm tall, and 75 cm deep. Star density was 0.01/cm
<sup>3</sup>
, with each star being a 0.5 cm × 0.5 cm triangle. Motion coherence was manipulated by randomizing the three-dimensional location of a percentage of stars on each display update while the remaining stars moved according to the specified heading. The probability of a single star following the trajectory associated with a particular heading for N video updates is therefore (c/100)
<sup>N</sup>
, where c denotes motion coherence (ranging from 0–100%). At the largest coherence used here (70%), there is only a 3% probability that a particular star would follow the same trajectory for 10 display updates (0.17 s). Thus, it was practically impossible for subjects to track the trajectories of individual stars. This manipulation degraded optic flow as a heading cue and was used to manipulate visual cue reliability in the visual and combined conditions. ‘Zero’ coherence stimuli had c set to 0.1, which was practically indistinguishable from c = 0, but allowed us to maintain a precise definition of the correctness of the subject's choice.</p>
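<p>For concreteness, the survival probability quoted above follows directly from the (c/100)<sup>N</sup> expression (a trivial sketch):</p>
<preformat>
def p_same_trajectory(coherence_pct, n_updates):
    """Probability that a star follows the heading-consistent trajectory
    for n_updates consecutive display updates: (c/100)**N."""
    return (coherence_pct / 100.0) ** n_updates

print(p_same_trajectory(70, 10))  # 0.0282... i.e. ~3%, as stated in the text
</preformat>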
</sec>
<sec id="s4-2">
<title>Behavioral task</title>
<p>In all stimulus conditions, the task was a single-interval, two-alternative forced choice (2AFC) heading discrimination task. In each trial, human subjects were presented with a translational motion stimulus in the horizontal plane (Gaussian velocity profile; peak velocity, 0.403 m/s; peak acceleration, 0.822 m/s
<sup>2</sup>
; total displacement, 0.3 m; maximum duration, 2 s). Heading was varied in small steps around straight ahead (±0.686°, ±1.96°, ±5.6°, ±16°) and subjects were instructed to report (by a button press) their perceived heading (leftward or rightward relative to an internal standard of straight ahead) as quickly and accurately as possible. In the visual and combined conditions, cue reliability was varied across trials by randomly choosing the motion coherence of the visual stimulus from among either a group of three values (25%, 37%, and 70%, subjects A–G) or a group of six values (0%, 12%, 25%, 37%, 51%, and 70%, subjects B2, D2, F2). A coherence of 25% means that 25% of the stars move in a direction consistent with the subject's heading, whereas the remaining 75% of the stars are relocated randomly within the star cloud. In the combined condition, visual and vestibular stimuli always specified the same heading (there was no cue conflict).</p>
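<p>The stated motion parameters are mutually consistent with a Gaussian velocity profile, which can be verified numerically. In the sketch below, the profile width sigma is our assumption, chosen so that the integrated velocity reproduces the stated 0.3 m displacement; the peak acceleration then follows from the profile itself.</p>
<preformat>
import numpy as np

v_peak, displacement, duration = 0.403, 0.3, 2.0  # m/s, m, s
# Assumed width: a Gaussian with this sigma integrates to the displacement.
sigma = displacement / (v_peak * np.sqrt(2.0 * np.pi))  # ~0.297 s

t = np.linspace(0.0, duration, 2001)
v = v_peak * np.exp(-0.5 * ((t - duration / 2.0) / sigma) ** 2)
a = np.gradient(v, t)

print(f"peak acceleration  ~ {np.abs(a).max():.3f} m/s^2")     # ~0.82 (cf. 0.822)
print(f"total displacement ~ {v.sum() * (t[1] - t[0]):.3f} m")  # ~0.30
</preformat>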
<p>During the main phase of data collection, subjects were not informed about the correctness of their choices (no feedback). In the vestibular and combined conditions, platform motion was halted smoothly but rapidly immediately following registration of the decision, and the platform then returned to its original starting point. In the visual condition, the optic flow stimulus disappeared from the screen when a decision was made. In all conditions, 2.5 s after the decision, a sound informed the subjects that they could initiate the next trial by pushing a third button. Once a trial was initiated, the stimulus onset occurred following a randomized delay period (truncated exponential; mean, 987 ms). Prior to data collection, subjects were introduced to the task during 1–2 weeks of ‘training’ sessions, in which they were informed about the correctness of their choices by either a low-frequency (incorrect) or a high-frequency (correct) sound. The training period was terminated once their behavior stabilized across consecutive training sessions. During training, subjects were able to adjust their speed-accuracy trade-off based on feedback. During subsequent data collection, we did not observe any clear changes in the speed-accuracy trade-off exhibited by subjects.</p>
</sec>
<sec id="s4-3">
<title>Data analysis</title>
<p>Analyses and statistical tests were performed using MATLAB R2013a (The Mathworks, MA, USA).</p>
<p>For each subject, discrimination thresholds were determined separately for each combination of stimulus modality (visual-only, vestibular-only, combined) and coherence (25%, 37%, and 70% for subjects A–G; 0%, 12%, 25%, 37%, 51%, and 70% for subjects B2, D2, F2) by plotting the proportion of rightward choices as a function of heading direction (
<xref ref-type="fig" rid="fig2">Figure 2A</xref>
). The psychophysical discrimination threshold was taken as the standard deviation of a cumulative Gaussian function, fitted by maximum likelihood methods. We assumed a common lapse rate (proportion of random choices) across all stimulus conditions, but allowed for a separate bias parameter (horizontal shift of the psychometric function) for each modality/coherence. Confidence intervals for threshold estimates were obtained by taking 5000 parametric bootstrap samples (
<xref rid="bib49" ref-type="bibr">Wichmann and Hill, 2001</xref>
). These samples also form the basis for statistical comparisons of discrimination thresholds: two thresholds were compared by computing the difference between their associated samples, leading to 5000 threshold difference samples. Subsequently, we determined the fraction of differences that were below or above zero, depending on the directionality of interest. This fraction determined the raw significance level for rejecting the null hypothesis (no difference). The reported significance levels are Bonferroni-corrected for multiple comparisons. All comparisons were one-tailed. Following traditional cue combination analyses (
<xref rid="bib8" ref-type="bibr">Clark and Yuille, 1990</xref>
), the optimal threshold
<italic>σ</italic>
<sub>
<italic>pred</italic>
,
<italic>c</italic>
</sub>
in the combined condition for coherence
<italic>c</italic>
was predicted from the visual threshold
<italic>σ</italic>
<sub>
<italic>vis</italic>
,
<italic>c</italic>
</sub>
and the vestibular threshold
<italic>σ</italic>
<sub>
<italic>vest</italic>
</sub>
by
<inline-formula>
<mml:math id="inf35">
<mml:mrow>
<mml:msubsup>
<mml:mi>σ</mml:mi>
<mml:mrow>
<mml:mi>p</mml:mi>
<mml:mi>r</mml:mi>
<mml:mi>e</mml:mi>
<mml:mi>d</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>c</mml:mi>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mo>=</mml:mo>
<mml:msubsup>
<mml:mi>σ</mml:mi>
<mml:mrow>
<mml:mi>v</mml:mi>
<mml:mi>i</mml:mi>
<mml:mi>s</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>c</mml:mi>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:msubsup>
<mml:mi>σ</mml:mi>
<mml:mrow>
<mml:mi>v</mml:mi>
<mml:mi>e</mml:mi>
<mml:mi>s</mml:mi>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mo>/</mml:mo>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:msubsup>
<mml:mi>σ</mml:mi>
<mml:mrow>
<mml:mi>v</mml:mi>
<mml:mi>i</mml:mi>
<mml:mi>s</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>c</mml:mi>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mo>+</mml:mo>
<mml:msubsup>
<mml:mi>σ</mml:mi>
<mml:mrow>
<mml:mi>v</mml:mi>
<mml:mi>e</mml:mi>
<mml:mi>s</mml:mi>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>
. Confidence intervals and statistical tests were again based on applying this formula to individual bootstrap samples of the unimodal threshold estimates.
<xref ref-type="supplementary-material" rid="SD2-data">Supplementary file 2A</xref>
reports the p-values for all subjects and all comparisons.</p>
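<p>As a sketch, the optimal-threshold prediction and the bootstrap-based one-tailed comparison described above reduce to the following (illustrative Python; inputs are assumed to be matched arrays of bootstrap threshold samples, e.g. 5000 per condition):</p>
<preformat>
import numpy as np

def predicted_threshold(sigma_vis, sigma_vest):
    """Traditional optimal-integration prediction:
    sigma_pred^2 = sigma_vis^2 sigma_vest^2 / (sigma_vis^2 + sigma_vest^2)."""
    return np.sqrt(sigma_vis**2 * sigma_vest**2 /
                   (sigma_vis**2 + sigma_vest**2))

def bootstrap_p(threshold_a, threshold_b):
    """One-tailed raw p-value that threshold A exceeds threshold B, from
    matched bootstrap samples; Bonferroni correction is applied afterwards,
    as described in the text."""
    diff = np.asarray(threshold_a) - np.asarray(threshold_b)
    return np.mean(diff <= 0.0)
</preformat>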
<p>For each dataset, we evaluated the absolute goodness-of-fit of the optimal model (
<xref ref-type="fig" rid="fig7">Figure 7A</xref>
) by finding the set of model parameters
<italic>φ</italic>
that maximized the likelihood of the observed choices and reaction times, and then computing the average coefficient of determination,
<inline-formula>
<mml:math id="inf36">
<mml:mrow>
<mml:msup>
<mml:mi>R</mml:mi>
<mml:mn>2</mml:mn>
</mml:msup>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mtext>D</mml:mtext>
<mml:mi>φ</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>=</mml:mo>
<mml:mfrac>
<mml:mn>1</mml:mn>
<mml:mn>2</mml:mn>
</mml:mfrac>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msubsup>
<mml:mi>R</mml:mi>
<mml:mrow>
<mml:mi>p</mml:mi>
<mml:mi>s</mml:mi>
<mml:mi>y</mml:mi>
<mml:mi>c</mml:mi>
<mml:mi>h</mml:mi>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>φ</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>+</mml:mo>
<mml:msubsup>
<mml:mi>R</mml:mi>
<mml:mrow>
<mml:mi>c</mml:mi>
<mml:mi>h</mml:mi>
<mml:mi>r</mml:mi>
<mml:mi>o</mml:mi>
<mml:mi>n</mml:mi>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>φ</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>
. Here,
<inline-formula>
<mml:math id="inf37">
<mml:mrow>
<mml:msubsup>
<mml:mi>R</mml:mi>
<mml:mrow>
<mml:mi>p</mml:mi>
<mml:mi>s</mml:mi>
<mml:mi>y</mml:mi>
<mml:mi>c</mml:mi>
<mml:mi>h</mml:mi>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>φ</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>
and
<inline-formula>
<mml:math id="inf38">
<mml:mrow>
<mml:msubsup>
<mml:mi>R</mml:mi>
<mml:mrow>
<mml:mi>c</mml:mi>
<mml:mi>h</mml:mi>
<mml:mi>r</mml:mi>
<mml:mi>o</mml:mi>
<mml:mi>n</mml:mi>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>φ</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>
denote the adjusted
<italic>R</italic>
<sup>2</sup>
values for the psychometric and chronometric functions, respectively, across all modalities/coherences. The value of
<inline-formula>
<mml:math id="inf39">
<mml:mrow>
<mml:msubsup>
<mml:mi>R</mml:mi>
<mml:mrow>
<mml:mi>p</mml:mi>
<mml:mi>s</mml:mi>
<mml:mi>y</mml:mi>
<mml:mi>c</mml:mi>
<mml:mi>h</mml:mi>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msubsup>
</mml:mrow>
</mml:math>
</inline-formula>
for the psychometric function was based on the probability of making a correct choice across all heading angles, coherences, and conditions, weighted by the number of observations, and adjusted for the number of model parameters. The same procedure, based on the mean reaction times, was used to find
<inline-formula>
<mml:math id="inf40">
<mml:mrow>
<mml:msubsup>
<mml:mi>R</mml:mi>
<mml:mrow>
<mml:mi>c</mml:mi>
<mml:mi>h</mml:mi>
<mml:mi>r</mml:mi>
<mml:mi>o</mml:mi>
<mml:mi>n</mml:mi>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msubsup>
</mml:mrow>
</mml:math>
</inline-formula>
, but we additionally distinguished between mean reaction times for correct and incorrect choices, and fitted both weighted by their corresponding number of observations (see Supplementary file 1 for expressions for
<inline-formula>
<mml:math id="inf41">
<mml:mrow>
<mml:msubsup>
<mml:mi>R</mml:mi>
<mml:mrow>
<mml:mi>p</mml:mi>
<mml:mi>s</mml:mi>
<mml:mi>y</mml:mi>
<mml:mi>c</mml:mi>
<mml:mi>h</mml:mi>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>φ</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>
and
<inline-formula>
<mml:math id="inf42">
<mml:mrow>
<mml:msubsup>
<mml:mi>R</mml:mi>
<mml:mrow>
<mml:mi>c</mml:mi>
<mml:mi>h</mml:mi>
<mml:mi>r</mml:mi>
<mml:mi>o</mml:mi>
<mml:mi>n</mml:mi>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>φ</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>
).</p>
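<p>A sketch of an observation-weighted, parameter-adjusted coefficient of determination of this kind is given below. The exact adjustment used in the analysis is not spelled out above, so the standard (n − 1)/(n − p − 1) correction is assumed here purely for illustration.</p>
<preformat>
import numpy as np

def adjusted_r2(y, y_hat, weights, n_params):
    """Observation-weighted R^2, adjusted for the number of parameters."""
    y, y_hat, w = (np.asarray(v, dtype=float) for v in (y, y_hat, weights))
    ss_res = np.sum(w * (y - y_hat) ** 2)
    ss_tot = np.sum(w * (y - np.average(y, weights=w)) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    n = len(y)
    return 1.0 - (1.0 - r2) * (n - 1) / (n - n_params - 1)

# Overall fit quality, as in Figure 7A:
# r2_total = 0.5 * (r2_psych + r2_chron)
</preformat>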
<p>We compared different variants of the full model (
<xref ref-type="fig" rid="fig7">Figure 7B</xref>
) by Bayesian model comparison based on Bayes factors, which were computed as follows. First, we found for each model
<inline-formula>
<mml:math id="inf56">
<mml:mi mathvariant="script">M</mml:mi>
</mml:math>
</inline-formula>
and subject
<italic>s</italic>
the set of parameters
<italic>φ</italic>
that maximized the likelihood,
<inline-formula>
<mml:math id="inf43">
<mml:mrow>
<mml:msubsup>
<mml:mi>φ</mml:mi>
<mml:mrow>
<mml:mi>s</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi mathvariant="script">M</mml:mi>
</mml:mrow>
<mml:mo>*</mml:mo>
</mml:msubsup>
<mml:mo>=</mml:mo>
<mml:mi>arg</mml:mi>
<mml:msub>
<mml:mrow>
<mml:mi>max</mml:mi>
</mml:mrow>
<mml:mi>φ</mml:mi>
</mml:msub>
<mml:mi>p</mml:mi>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mtext>data of subj </mml:mtext>
<mml:mi>s</mml:mi>
<mml:mo>|</mml:mo>
<mml:mi>φ</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi mathvariant="script">M</mml:mi>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>
. Second, we approximated the Bayesian model evidence, measuring the model posterior probability while marginalizing over the parameters, up to a constant by the Bayesian information criterion,
<inline-formula>
<mml:math id="inf44">
<mml:mrow>
<mml:mi>ln</mml:mi>
<mml:mi>p</mml:mi>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi mathvariant="script">M</mml:mi>
<mml:mo>|</mml:mo>
<mml:mi>s</mml:mi>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mo>≈</mml:mo>
<mml:mo>−</mml:mo>
<mml:mfrac>
<mml:mn>1</mml:mn>
<mml:mn>2</mml:mn>
</mml:mfrac>
<mml:mtext>BIC</mml:mtext>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>s</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi mathvariant="script">M</mml:mi>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>
with
<inline-formula>
<mml:math id="inf45">
<mml:mrow>
<mml:mtext>BIC</mml:mtext>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>s</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>M</mml:mi>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mo>=</mml:mo>
<mml:mo>−</mml:mo>
<mml:mn>2</mml:mn>
<mml:mi>ln</mml:mi>
<mml:mi>p</mml:mi>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>s</mml:mi>
<mml:mo>|</mml:mo>
<mml:msubsup>
<mml:mi>φ</mml:mi>
<mml:mrow>
<mml:mi>s</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi mathvariant="script">M</mml:mi>
</mml:mrow>
<mml:mo>*</mml:mo>
</mml:msubsup>
<mml:mo>,</mml:mo>
<mml:mi mathvariant="script">M</mml:mi>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mo>+</mml:mo>
<mml:msub>
<mml:mi>k</mml:mi>
<mml:mi mathvariant="script">M</mml:mi>
</mml:msub>
<mml:mi>ln</mml:mi>
<mml:msub>
<mml:mi>N</mml:mi>
<mml:mi>s</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>
. Here,
<inline-formula>
<mml:math id="inf57">
<mml:mrow>
<mml:msub>
<mml:mi>k</mml:mi>
<mml:mi mathvariant="script">M</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>
is the number of parameters of model
<inline-formula>
<mml:math id="inf58">
<mml:mi mathvariant="script">M</mml:mi>
</mml:math>
</inline-formula>
, and
<italic>N</italic>
<sub>
<italic>s</italic>
</sub>
is the number of trials for dataset
<italic>s</italic>
, respectively. Based on this, we computed the Bayes factor of model
<inline-formula>
<mml:math id="inf59">
<mml:mi mathvariant="script">M</mml:mi>
</mml:math>
</inline-formula>
vs the optimal model
<inline-formula>
<mml:math id="inf60">
<mml:mi mathvariant="script">M</mml:mi>
</mml:math>
</inline-formula>
<sub>
<italic>opt</italic>
</sub>
by pooling the model evidence over datasets, resulting in
<inline-formula>
<mml:math id="inf46">
<mml:mrow>
<mml:mstyle displaystyle="true">
<mml:msub>
<mml:mo>∑</mml:mo>
<mml:mi>s</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>ln</mml:mi>
<mml:mi>p</mml:mi>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi mathvariant="script">M</mml:mi>
<mml:mo>|</mml:mo>
<mml:mi>s</mml:mi>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mo>−</mml:mo>
<mml:mi>ln</mml:mi>
<mml:mi>p</mml:mi>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="script">M</mml:mi>
<mml:mrow>
<mml:mi>o</mml:mi>
<mml:mi>p</mml:mi>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>|</mml:mo>
<mml:mi>s</mml:mi>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:mrow>
</mml:mstyle>
<mml:mo>.</mml:mo>
</mml:mrow>
</mml:math>
</inline-formula>
These values, converted to a base-10 logarithm, are shown in
<xref ref-type="fig" rid="fig7">Figure 7B</xref>
. In this case, a negative log
<sub>10</sub>
-difference of 2 implies that the optimal model is 100 times more likely given the data than the alternative model, a difference that is considered decisive in favor of the optimal model (
<xref rid="bib25" ref-type="bibr">Jeffreys, 1998</xref>
).</p>
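<p>In code, this comparison reduces to a few lines (an illustrative sketch; inputs are the maximized log-likelihoods, parameter counts, and trial counts per dataset):</p>
<preformat>
import numpy as np

def bic(max_log_lik, n_params, n_trials):
    """BIC(s, M) = -2 ln p(s | phi*, M) + k_M ln N_s."""
    return -2.0 * max_log_lik + n_params * np.log(n_trials)

def pooled_log10_bayes_factor(bic_model, bic_opt):
    """log10 Bayes factor of a model vs the optimal model, pooled over
    datasets, using ln p(M|s) ~ -BIC(s, M)/2. Values below -2 correspond
    to 'decisive' evidence for the optimal model (factor > 100)."""
    log_bf = -0.5 * (np.sum(bic_model) - np.sum(bic_opt))
    return log_bf / np.log(10.0)
</preformat>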
<p>To determine the faster stimulus modality for each subject, we compared reaction times for the vestibular condition with those for the visual condition at 70% coherence. We tested the difference in the logarithm of these reaction times by a 2-way ANOVA with stimulus modality and heading direction as the two factors, and we report the main effect of stimulus modality on reaction times. Although we performed a log-transform of the reaction times to ensure their normality, a Jarque–Bera test revealed that normality did not hold for some heading directions. Thus, we additionally performed a Friedman test on subsampled data (to have the same number of trials per modality/heading) which supported the ANOVA result at the same significance level. In the main text, we only report the main effect of stimulus modality on reaction time from the 2-way ANOVA. Detailed results of the 2-way ANOVA, the Jarque–Bera test, and the Friedman test are reported for each subject in
<xref ref-type="supplementary-material" rid="SD2-data">Supplementary file 2C</xref>
.</p>
</sec>
<sec id="s4-4">
<title>The extended diffusion model</title>
<p>Here we outline the critical extensions to the diffusion model. Detailed derivations and properties of the model are described in the
<xref ref-type="supplementary-material" rid="SD1-data">Supplementary file 1</xref>
.</p>
<p>Discretizing time into small steps of size Δ allows us to describe the particle trajectory
<italic>x</italic>
(
<italic>t</italic>
) in a DM by a random walk,
<inline-formula>
<mml:math id="inf47">
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>=</mml:mo>
<mml:mstyle displaystyle="true">
<mml:msub>
<mml:mo>∑</mml:mo>
<mml:mrow>
<mml:mi>n</mml:mi>
<mml:mo>∈</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>:</mml:mo>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mrow>
<mml:mi>δ</mml:mi>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mstyle>
</mml:mrow>
</mml:math>
</inline-formula>
, where each of the steps
<italic>δx</italic>
<sub>
<italic>n</italic>
</sub>
∼ <italic>N</italic>(
<italic>k</italic>
<italic>sin</italic>
(
<italic>h</italic>
)Δ, Δ), called the momentary evidence, is normally distributed with mean
<italic>k</italic>
<italic>sin</italic>
(
<italic>h</italic>
)Δ and variance Δ (1:
<italic>t</italic>
denotes the set of all steps up to time
<italic>t</italic>
). This representation is exact in the sense that it recovers the diffusion model,
<inline-formula>
<mml:math id="inf48">
<mml:mrow>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>x</mml:mi>
<mml:mo>˙</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mo>=</mml:mo>
<mml:mi>k</mml:mi>
<mml:mi>sin</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>h</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>+</mml:mo>
<mml:mi>η</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>
, in the limit of Δ→0.</p>
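<p>A minimal simulation of this discretized process is given below (illustrative Python with arbitrary parameter values; a fixed symmetric bound is assumed here purely to terminate the walk, so this sketch is not the fitted model):</p>
<preformat>
import numpy as np

rng = np.random.default_rng(1)

def simulate_trial(k, heading_deg, dt=0.005, t_max=2.0, bound=1.0):
    """Random walk x(t) = sum_n dx_n, dx_n ~ Normal(k sin(h) dt, dt)."""
    drift = k * np.sin(np.deg2rad(heading_deg)) * dt
    x, t = 0.0, 0.0
    while t < t_max and abs(x) < bound:
        x += drift + np.sqrt(dt) * rng.standard_normal()
        t += dt
    return ("right" if x > 0 else "left"), t

print(simulate_trial(k=5.0, heading_deg=5.6))
</preformat>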
<p>For the standard diffusion model, the posterior probability of
<italic>sin</italic>
(
<italic>h</italic>
) after observing the stimulus for
<italic>t</italic>
seconds, and under the assumption of a uniform prior, is given by Bayes' rule
<disp-formula id="equ4">
<label>(4)</label>
<mml:math id="m4">
<mml:mrow>
<mml:mi>p</mml:mi>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>sin</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>h</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>|</mml:mo>
<mml:mi>δ</mml:mi>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo>:</mml:mo>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mo>∝</mml:mo>
<mml:mstyle displaystyle="true">
<mml:munder>
<mml:mo>∏</mml:mo>
<mml:mrow>
<mml:mi>n</mml:mi>
<mml:mo>∈</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>:</mml:mo>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:munder>
<mml:mrow>
<mml:mi>p</mml:mi>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>δ</mml:mi>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
<mml:mo>|</mml:mo>
<mml:mi>s</mml:mi>
<mml:mi>i</mml:mi>
<mml:mi>n</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>h</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mo>∝</mml:mo>
<mml:mi>N</mml:mi>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>sin</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>h</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>|</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mrow>
<mml:mi>k</mml:mi>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:mfrac>
<mml:mo>,</mml:mo>
<mml:mfrac>
<mml:mn>1</mml:mn>
<mml:mrow>
<mml:msup>
<mml:mi>k</mml:mi>
<mml:mn>2</mml:mn>
</mml:msup>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:mrow>
</mml:mstyle>
<mml:mo>,</mml:mo>
</mml:mrow>
</mml:math>
</disp-formula>
where
<italic>δx</italic>
<sub>1:
<italic>t</italic>
</sub>
is the momentary evidence up to time
<italic>t</italic>
. From this we can derive the belief that heading is rightward, resulting in
<disp-formula id="equ5">
<label>(5)</label>
<mml:math id="m5">
<mml:mrow>
<mml:mi>p</mml:mi>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>h</mml:mi>
<mml:mo>></mml:mo>
<mml:mn>0</mml:mn>
<mml:mo>|</mml:mo>
<mml:mi>δ</mml:mi>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo>:</mml:mo>
<mml:mi>n</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mo>=</mml:mo>
<mml:mi>p</mml:mi>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>sin</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>h</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>></mml:mo>
<mml:mn>0</mml:mn>
<mml:mo>|</mml:mo>
<mml:mi>δ</mml:mi>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo>:</mml:mo>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mo>=</mml:mo>
<mml:munderover>
<mml:mstyle displaystyle="true">
<mml:mo>∫</mml:mo>
</mml:mstyle>
<mml:mn>0</mml:mn>
<mml:mi>π</mml:mi>
</mml:munderover>
<mml:mi>p</mml:mi>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>sin</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>h</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>|</mml:mo>
<mml:mi>δ</mml:mi>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo>:</mml:mo>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mtext>d</mml:mtext>
<mml:mi>h</mml:mi>
<mml:mo>=</mml:mo>
<mml:mi>Φ</mml:mi>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mfrac>
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mrow>
<mml:msqrt>
<mml:mi>t</mml:mi>
</mml:msqrt>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mo>,</mml:mo>
</mml:mrow>
</mml:math>
</disp-formula>
where
<inline-formula>
<mml:math id="inf49">
<mml:mrow>
<mml:mi>Φ</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mo>·</mml:mo>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>
denotes the standard cumulative Gaussian function. This shows that both the posterior of the actual heading angle, as well as the belief about ‘rightward’ being the correct choice, only depend on
<italic>x</italic>
(
<italic>t</italic>
) rather than the whole trajectory
<italic>δx</italic>
<sub>1:
<italic>t</italic>
</sub>
.</p>
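<p>As a one-line sketch of Equation 5, the belief that ‘rightward’ is correct depends on the accumulated evidence alone:</p>
<preformat>
from math import erf, sqrt

def belief_rightward(x_t, t):
    """p(h > 0 | evidence) = Phi(x(t) / sqrt(t))  (Equation 5)."""
    return 0.5 * (1.0 + erf(x_t / sqrt(2.0 * t)))
</preformat>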
<p>The above formulation assumes that evidence is constant over time, which is not the case for our stimuli. Considering the visual cue and assuming that its associated sensitivity varies with velocity
<italic>v</italic>
(
<italic>t</italic>
), the momentary evidence
<inline-formula>
<mml:math id="inf50">
<mml:mrow>
<mml:mi>δ</mml:mi>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mrow>
<mml:mi>v</mml:mi>
<mml:mi>i</mml:mi>
<mml:mi>s</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>n</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>∼</mml:mo>
<mml:mi>N</mml:mi>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>v</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
<mml:msub>
<mml:mi>k</mml:mi>
<mml:mrow>
<mml:mi>v</mml:mi>
<mml:mi>i</mml:mi>
<mml:mi>s</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>c</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mi>sin</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>h</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mi>Δ</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>Δ</mml:mi>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>
is Gaussian with mean
<italic>v</italic>
<sub>
<italic>n</italic>
</sub>
<italic>k</italic>
<sub>
<italic>vis</italic>
</sub>
(
<italic>c</italic>
)
<italic>sin</italic>
(
<italic>h</italic>
)Δ, where
<italic>v</italic>
<sub>
<italic>n</italic>
</sub>
is the velocity at time step
<italic>n</italic>
, and variance Δ. Using Bayes' rule again to find the posterior of
<italic>sin</italic>
(
<italic>h</italic>
), it is easy to show that
<italic>x</italic>
<sub>
<italic>vis</italic>
</sub>
(
<italic>t</italic>
) is no longer sufficient to determine the posterior distribution. Rather, we need to perform a velocity-weighted accumulation,
<inline-formula>
<mml:math id="inf51">
<mml:mrow>
<mml:msub>
<mml:mi>X</mml:mi>
<mml:mrow>
<mml:mi>v</mml:mi>
<mml:mi>i</mml:mi>
<mml:mi>s</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>=</mml:mo>
<mml:mstyle displaystyle="true">
<mml:msub>
<mml:mo>∑</mml:mo>
<mml:mrow>
<mml:mi>n</mml:mi>
<mml:mo>∈</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>:</mml:mo>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mrow>
<mml:msub>
<mml:mi>v</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
<mml:mi>δ</mml:mi>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mrow>
<mml:mi>v</mml:mi>
<mml:mi>i</mml:mi>
<mml:mi>s</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>n</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mstyle>
</mml:mrow>
</mml:math>
</inline-formula>
to replace
<italic>x</italic>
<sub>
<italic>vis</italic>
</sub>
(
<italic>t</italic>
), and replace time
<italic>t</italic>
with
<inline-formula>
<mml:math id="inf52">
<mml:mrow>
<mml:mi>V</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>=</mml:mo>
<mml:mstyle displaystyle="true">
<mml:msub>
<mml:mo>∑</mml:mo>
<mml:mrow>
<mml:mi>n</mml:mi>
<mml:mo>∈</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>:</mml:mo>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mrow>
<mml:msubsup>
<mml:mi>v</mml:mi>
<mml:mi>n</mml:mi>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mi>Δ</mml:mi>
</mml:mrow>
</mml:mstyle>
</mml:mrow>
</mml:math>
</inline-formula>
, resulting in the following expression for the posterior
<disp-formula id="equ6">
<label>(6)</label>
<mml:math id="m6">
<mml:mrow>
<mml:mi>p</mml:mi>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>sin</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>h</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>|</mml:mo>
<mml:mi>δ</mml:mi>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mrow>
<mml:mi>v</mml:mi>
<mml:mi>i</mml:mi>
<mml:mi>s</mml:mi>
<mml:mo>,</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>:</mml:mo>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mo>=</mml:mo>
<mml:mi>p</mml:mi>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>sin</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>h</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>|</mml:mo>
<mml:msub>
<mml:mi>X</mml:mi>
<mml:mrow>
<mml:mi>v</mml:mi>
<mml:mi>i</mml:mi>
<mml:mi>s</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>,</mml:mo>
<mml:mi>V</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mo>=</mml:mo>
<mml:mi>N</mml:mi>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>sin</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>h</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>|</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:msub>
<mml:mi>X</mml:mi>
<mml:mrow>
<mml:mi>v</mml:mi>
<mml:mi>i</mml:mi>
<mml:mi>s</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mi>k</mml:mi>
<mml:mrow>
<mml:mi>v</mml:mi>
<mml:mi>i</mml:mi>
<mml:mi>s</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>c</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mi>V</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:mfrac>
<mml:mo>,</mml:mo>
<mml:mfrac>
<mml:mn>1</mml:mn>
<mml:mrow>
<mml:msubsup>
<mml:mi>k</mml:mi>
<mml:mrow>
<mml:mi>v</mml:mi>
<mml:mi>i</mml:mi>
<mml:mi>s</mml:mi>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>c</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mi>V</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mo>.</mml:mo>
</mml:mrow>
</mml:math>
</disp-formula>
</p>
<p>Consequently, the belief about ‘rightward’ being correct can also be fully expressed by
<italic>X</italic>
<sub>
<italic>vis</italic>
</sub>
(
<italic>t</italic>
) and
<italic>V</italic>
(
<italic>t</italic>
). This shows that optimal accumulation of evidence with a single-particle diffusion model with time-varying evidence sensitivity requires the momentary evidence to be weighted by its momentary sensitivity. A similar formulation holds for the posterior over heading based on the vestibular cue; however, the vestibular evidence is assumed to be weighted by the temporal profile of stimulus acceleration instead of velocity.</p>
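<p>A sketch of this sensitivity-weighted accumulation is given below; the weight profile w is the stimulus velocity for the visual cue and the stimulus acceleration for the vestibular cue.</p>
<preformat>
import numpy as np

def weighted_accumulation(dx, w, dt):
    """Accumulate momentary evidence dx_n with time-varying weights w_n.
    Returns X(t) = sum_n w_n dx_n and V(t) = sum_n w_n**2 dt, which
    together determine the posterior over sin(h) (Equation 6)."""
    dx, w = np.asarray(dx), np.asarray(w)
    X = np.cumsum(w * dx)
    V = np.cumsum(w ** 2) * dt
    return X, V
</preformat>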
<p>When combining multiple cues into a single DM,
<inline-formula>
<mml:math id="inf53">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mi>X</mml:mi>
<mml:mo>˙</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mi>c</mml:mi>
<mml:mi>o</mml:mi>
<mml:mi>m</mml:mi>
<mml:mi>b</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mi>d</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>d</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:msub>
<mml:mi>k</mml:mi>
<mml:mrow>
<mml:mi>c</mml:mi>
<mml:mi>o</mml:mi>
<mml:mi>m</mml:mi>
<mml:mi>b</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mi>sin</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>h</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>+</mml:mo>
<mml:msub>
<mml:mi>η</mml:mi>
<mml:mrow>
<mml:mi>c</mml:mi>
<mml:mi>o</mml:mi>
<mml:mi>m</mml:mi>
<mml:mi>b</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>
, we aim to find expressions for
<italic>k</italic>
<sub>
<italic>comb</italic>
</sub>
and
<italic>d</italic>
(
<italic>t</italic>
) that keep the posterior over
<italic>sin</italic>
(
<italic>h</italic>
) unchanged, that is
<disp-formula id="equ7">
<label>(7)</label>
<mml:math id="m7">
<mml:mrow>
<mml:mi>p</mml:mi>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>sin</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>h</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>|</mml:mo>
<mml:mi>δ</mml:mi>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mrow>
<mml:mi>c</mml:mi>
<mml:mi>o</mml:mi>
<mml:mi>m</mml:mi>
<mml:mi>b</mml:mi>
<mml:mo>,</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>:</mml:mo>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mo>=</mml:mo>
<mml:mi>p</mml:mi>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>sin</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>h</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>|</mml:mo>
<mml:mi>δ</mml:mi>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mrow>
<mml:mi>v</mml:mi>
<mml:mi>i</mml:mi>
<mml:mi>s</mml:mi>
<mml:mo>,</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>:</mml:mo>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:mi>δ</mml:mi>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mrow>
<mml:mi>v</mml:mi>
<mml:mi>e</mml:mi>
<mml:mi>s</mml:mi>
<mml:mi>t</mml:mi>
<mml:mo>,</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>:</mml:mo>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mo>.</mml:mo>
</mml:mrow>
</mml:math>
</disp-formula>
</p>
<p>
<italic>δx</italic>
<sub>
<italic>comb</italic>
,1:
<italic>t</italic>
</sub>
is the sequence of momentary evidence in the combined condition, following
<inline-formula>
<mml:math id="inf54">
<mml:mrow>
<mml:mi>δ</mml:mi>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mrow>
<mml:mi>c</mml:mi>
<mml:mi>o</mml:mi>
<mml:mi>m</mml:mi>
<mml:mi>b</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>n</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>∼</mml:mo>
<mml:mi>N</mml:mi>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>d</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
<mml:msub>
<mml:mi>k</mml:mi>
<mml:mrow>
<mml:mi>c</mml:mi>
<mml:mi>o</mml:mi>
<mml:mi>m</mml:mi>
<mml:mi>b</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>c</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mi>sin</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>h</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mi>Δ</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>Δ</mml:mi>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>
. Expanding the probabilities reveals that this equality holds if the combined sensitivity is given by
<inline-formula>
<mml:math id="inf55">
<mml:mrow>
<mml:msubsup>
<mml:mi>k</mml:mi>
<mml:mrow>
<mml:mi>c</mml:mi>
<mml:mi>o</mml:mi>
<mml:mi>m</mml:mi>
<mml:mi>b</mml:mi>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>c</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>=</mml:mo>
<mml:msubsup>
<mml:mi>k</mml:mi>
<mml:mrow>
<mml:mi>v</mml:mi>
<mml:mi>i</mml:mi>
<mml:mi>s</mml:mi>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>c</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>+</mml:mo>
<mml:msup>
<mml:mrow>
<mml:msub>
<mml:mi>k</mml:mi>
<mml:mrow>
<mml:mi>v</mml:mi>
<mml:mi>e</mml:mi>
<mml:mi>s</mml:mi>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula>
, and
<italic>d</italic>
(
<italic>t</italic>
) is expressed by
<xref ref-type="disp-formula" rid="equ3">Equation 3</xref>
, leading to
<xref ref-type="disp-formula" rid="equ1">Equation 1</xref>
for optimally combining the momentary evidence (see
<xref ref-type="supplementary-material" rid="SD1-data">Supplementary file 1</xref>
for derivation).</p>
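<p>Under this result, the combined momentary evidence can be formed as in the sketch below. The normalization of d(t) is written out here under the assumption that d(t) is the root-sum-square of k<italic>vis</italic>v(t) and k<italic>vest</italic>a(t) divided by k<italic>comb</italic>, which reproduces the distribution of δx<italic>comb</italic>,<italic>n</italic> stated above; the exact form is given by Equation 3 in the main text.</p>
<preformat>
import numpy as np

def combine_momentary_evidence(dx_vis, dx_vest, k_vis, k_vest, v, a):
    """Sensitivity- and time-weighted combination of the two streams.
    Assumes dx_vis,n ~ N(v_n k_vis sin(h) dt, dt) and
            dx_vest,n ~ N(a_n k_vest sin(h) dt, dt)."""
    k_comb = np.sqrt(k_vis ** 2 + k_vest ** 2)
    d = np.sqrt((k_vis * v) ** 2 + (k_vest * a) ** 2) / k_comb
    d = np.maximum(d, 1e-12)  # guard against v = a = 0 at stimulus onset
    dx_comb = (k_vis * v * dx_vis + k_vest * a * dx_vest) / (k_comb * d)
    return dx_comb, d, k_comb  # dx_comb,n ~ N(d_n k_comb sin(h) dt, dt)
</preformat>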
</sec>
<sec id="s4-5">
<title>Model fitting</title>
<p>The model used to fit the behavioral data is described in the main text. We never averaged data across subjects, because subjects exhibit qualitatively different behavior due to different speed-accuracy tradeoffs. Furthermore, for subjects performing both the three-coherence and the six-coherence version of the experiment, we treated each version as a separate dataset. For each modality/coherence combination (7 combinations for 3 coherences, 13 combinations for 6 coherences) we fitted one bias parameter that prevents behavioral biases from influencing model fits. The fact that performance of subjects often fails to reach 100% correct even for the highest coherences and largest heading angles was modeled by a lapse rate, which describes the frequency with which the subject makes a random choice rather than one based on accumulated evidence. This lapse rate was assumed to be independent of stimulus modality or coherence, and so a single lapse rate parameter is shared among all modality/coherence combinations.</p>
<p>All model fits sought to find the model parameters
<italic>φ</italic>
that maximize the likelihood of the observed choices and reaction times for each dataset. As in
<xref rid="bib35" ref-type="bibr">Palmer et al. (2005)</xref>
, we assumed that the likelihood of the choices follows a binomial distribution, and that the mean reaction times of correct and incorrect choices follow separate Gaussian distributions centered on the empirical means, with spreads given by the standard errors. Model predictions for choice fractions and for the reaction times of correct and incorrect choices were computed from the solution to integral equations describing the first-passage times of bounded diffusion processes (
<xref rid="bib42" ref-type="bibr">Smith, 2000</xref>
). See
<xref ref-type="supplementary-material" rid="SD1-data">Supplementary file 1</xref>
for the exact form of the likelihood function that was used.</p>
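<p>A minimal sketch of this likelihood, in Python with assumed variable names (the actual fits used Matlab): each condition contributes a binomial term for the choices plus Gaussian terms scoring the model-predicted mean reaction times of correct and incorrect choices against the empirical means and standard errors.</p>
<preformat>
import numpy as np
from scipy import stats

def condition_log_likelihood(n_right, n_trials, p_right_pred,
                             rt_mean_obs, rt_sem_obs, rt_mean_pred):
    # Binomial log-likelihood of the observed rightward-choice count.
    ll = stats.binom.logpmf(n_right, n_trials, p_right_pred)
    # Gaussian log-likelihood of the predicted mean RTs; rt_* are
    # length-2 arrays, one entry for correct and one for incorrect choices.
    ll += np.sum(stats.norm.logpdf(rt_mean_pred,
                                   loc=rt_mean_obs, scale=rt_sem_obs))
    return ll

# The total log-likelihood is the sum over all modality/coherence/heading
# conditions; the fits maximize it over the parameter vector phi.
</preformat>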
<p>To avoid getting trapped in local maxima of this likelihood, we used a three-step maximization procedure, sketched below. First, we found a (possibly local) maximum by pseudo-gradient ascent on the likelihood function. Second, starting from this maximum, we used a Markov Chain Monte Carlo procedure to draw 44,000 samples from the parameter posterior under the assumption of a uniform, bounded prior. Finally, we used the highest-likelihood sample, which is expected to be close to the mode of this posterior, as the starting point for a second pseudo-gradient ascent to the posterior mode. The resulting parameter vector was taken as the maximum-likelihood estimate. All pseudo-gradient ascent maximizations were performed with the Optimization Toolbox of Matlab R2013a (Mathworks), using stringent stopping criteria (TolFun = TolX = 10
<sup>−20</sup>
) to prevent premature convergence.</p>
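<p>The following sketch outlines these three steps in Python (sample_posterior is a hypothetical stand-in for the Markov Chain Monte Carlo sampler; the original implementation used Matlab's Optimization Toolbox):</p>
<preformat>
import numpy as np
from scipy.optimize import minimize

def three_step_fit(neg_log_lik, phi0, sample_posterior, n_samples=44000):
    # Step 1: pseudo-gradient ascent on the likelihood, i.e. descent on
    # the negative log-likelihood, starting from an initial guess phi0.
    phi1 = minimize(neg_log_lik, phi0, method="BFGS").x
    # Step 2: MCMC samples from the parameter posterior under a uniform,
    # bounded prior, started at the (possibly local) maximum from step 1.
    samples = sample_posterior(neg_log_lik, phi1, n_samples)
    best = samples[np.argmin([neg_log_lik(s) for s in samples])]
    # Step 3: ascend again from the highest-likelihood sample, which is
    # expected to lie close to the posterior mode.
    return minimize(neg_log_lik, best, method="BFGS").x
</preformat>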
</sec>
</sec>
</body>
<back>
<sec sec-type="funding-information">
<title>Funding Information</title>
<p>This paper was supported by the following grants:</p>
<list list-type="bullet">
<list-item>
<p>
<funding-source>National Institutes of Health
<ext-link ext-link-type="uri" xlink:href="http://www.crossref.org/fundref/">FundRef identification ID: </ext-link>
<named-content content-type="funder-id">http://dx.doi.org/10.13039/100000002</named-content>
</funding-source>
<award-id>R01 DC007620</award-id>
to Dora E Angelaki.</p>
</list-item>
<list-item>
<p>
<funding-source>National Institutes of Health
<ext-link ext-link-type="uri" xlink:href="http://www.crossref.org/fundref/">FundRef identification ID: </ext-link>
<named-content content-type="funder-id">http://dx.doi.org/10.13039/100000002</named-content>
</funding-source>
<award-id>R01 EY016178</award-id>
to Gregory C DeAngelis.</p>
</list-item>
<list-item>
<p>
<funding-source>National Science Foundation
<ext-link ext-link-type="uri" xlink:href="http://www.crossref.org/fundref/">FundRef identification ID: </ext-link>
<named-content content-type="funder-id">http://dx.doi.org/10.13039/100000001</named-content>
</funding-source>
<award-id>BCS0446730</award-id>
to Alexandre Pouget.</p>
</list-item>
<list-item>
<p>
<funding-source>U.S. Army Research Laboratory
<ext-link ext-link-type="uri" xlink:href="http://www.crossref.org/fundref/">FundRef identification ID: </ext-link>
<named-content content-type="funder-id">http://dx.doi.org/10.13039/100006754</named-content>
</funding-source>
<award-id>Multidisciplinary University Research Initiative, N00014-07-1-0937</award-id>
to Alexandre Pouget.</p>
</list-item>
<list-item>
<p>
<funding-source>Air Force Office of Scientific Research
<ext-link ext-link-type="uri" xlink:href="http://www.crossref.org/fundref/">FundRef identification ID: </ext-link>
<named-content content-type="funder-id">http://dx.doi.org/10.13039/100000181</named-content>
</funding-source>
<award-id>FA9550-10-1-0336</award-id>
to Alexandre Pouget.</p>
</list-item>
<list-item>
<p>
<funding-source>James S. McDonnell Foundation
<ext-link ext-link-type="uri" xlink:href="http://www.crossref.org/fundref/">FundRef identification ID: </ext-link>
<named-content content-type="funder-id">http://dx.doi.org/10.13039/100000913</named-content>
</funding-source>
to Alexandre Pouget.</p>
</list-item>
</list>
</sec>
<sec sec-type="additional-information">
<title>Additional information</title>
<fn-group content-type="competing-interest">
<title>
<bold>Competing interests</bold>
</title>
<fn fn-type="conflict" id="conf1">
<p>DEA: Reviewing editor,
<italic>eLife</italic>
.</p>
</fn>
<fn fn-type="conflict" id="conf2">
<p>The other authors declare that no competing interests exist.</p>
</fn>
</fn-group>
<fn-group content-type="author-contribution">
<title>
<bold>Author contributions</bold>
</title>
<fn fn-type="con" id="con1">
<p>JD, Conception and design, Analysis and interpretation of data, Drafting or revising the article.</p>
</fn>
<fn fn-type="con" id="con2">
<p>GCDA, Conception and design, Drafting or revising the article.</p>
</fn>
<fn fn-type="con" id="con3">
<p>AP, Conception and design, Drafting or revising the article.</p>
</fn>
<fn fn-type="con" id="con4">
<p>EMK, Acquisition of data, Drafting or revising the article.</p>
</fn>
<fn fn-type="con" id="con5">
<p>DEA, Conception and design, Acquisition of data, Drafting or revising the article.</p>
</fn>
</fn-group>
<fn-group content-type="ethics-information">
<title>
<bold>Ethics</bold>
</title>
<fn fn-type="other">
<p>Human subjects: Informed consent was obtained from all participants and all procedures were reviewed and approved by the Washington University Office of Human Research Protections (OHRP), Institutional Review Board (IRB; IRB ID# 201109183). Consent to publish was not obtained in writing, as it was not required by the IRB, but all subjects were recruited for this purpose and approved verbally. Of the initial seven subjects, three participated in a follow-up experiment roughly 2 years after the initial data collection. Procedures for the follow-up experiment were approved by the Institutional Review Board for Human Subject Research for Baylor College of Medicine and Affiliated Hospitals (BCM IRB, ID# H-29411), and informed consent and consent to publish were given again by all three subjects.</p>
</fn>
</fn-group>
</sec>
<sec sec-type="supplementary-material">
<title>Additional files</title>
<supplementary-material content-type="local-data" id="SD1-data">
<object-id pub-id-type="doi">10.7554/eLife.03005.015</object-id>
<label>Supplementary file 1.</label>
<caption>
<p>Detailed model derivation and description.</p>
<p>
<bold>DOI:</bold>
<ext-link ext-link-type="doi" xlink:href="10.7554/eLife.03005.015">http://dx.doi.org/10.7554/eLife.03005.015</ext-link>
</p>
</caption>
<media xlink:href="elife03005s001.pdf" mimetype="application" mime-subtype="pdf" orientation="portrait" xlink:type="simple" id="d35e5408" position="anchor"></media>
</supplementary-material>
<supplementary-material content-type="local-data" id="SD2-data">
<object-id pub-id-type="doi">10.7554/eLife.03005.016</object-id>
<label>Supplementary file 2.</label>
<caption>
<p>Outcome of additional statistical hypothesis tests.</p>
<p>
<bold>DOI:</bold>
<ext-link ext-link-type="doi" xlink:href="10.7554/eLife.03005.016">http://dx.doi.org/10.7554/eLife.03005.016</ext-link>
</p>
</caption>
<media xlink:href="elife03005s002.pdf" mimetype="application" mime-subtype="pdf" orientation="portrait" xlink:type="simple" id="d35e5423" position="anchor"></media>
</supplementary-material>
</sec>
<ref-list>
<title>References</title>
<ref id="bib1">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Battaglia</surname>
<given-names>PW</given-names>
</name>
<name>
<surname>Jacobs</surname>
<given-names>RA</given-names>
</name>
<name>
<surname>Aslin</surname>
<given-names>RN</given-names>
</name>
</person-group>
<year>2003</year>
<article-title>Bayesian integration of visual and auditory signals for spatial localization</article-title>
.
<source>Journal of the Optical Society of America A, Optics, Image Science, and Vision</source>
<volume>20</volume>
:
<fpage>1391</fpage>
<lpage>1397</lpage>
. doi:
<pub-id pub-id-type="doi">10.1364/JOSAA.20.001391</pub-id>
</mixed-citation>
</ref>
<ref id="bib2">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Beck</surname>
<given-names>JM</given-names>
</name>
<name>
<surname>Ma</surname>
<given-names>WJ</given-names>
</name>
<name>
<surname>Kiani</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Hanks</surname>
<given-names>T</given-names>
</name>
<name>
<surname>Churchland</surname>
<given-names>AK</given-names>
</name>
<name>
<surname>Roitman</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Shadlen</surname>
<given-names>MN</given-names>
</name>
<name>
<surname>Latham</surname>
<given-names>PE</given-names>
</name>
<name>
<surname>Pouget</surname>
<given-names>A</given-names>
</name>
</person-group>
<year>2008</year>
<article-title>Probabilistic population codes for Bayesian decision making</article-title>
.
<source>Neuron</source>
<volume>60</volume>
:
<fpage>1142</fpage>
<lpage>1152</lpage>
. doi:
<pub-id pub-id-type="doi">10.1016/j.neuron.2008.09.021</pub-id>
<pub-id pub-id-type="pmid">19109917</pub-id>
</mixed-citation>
</ref>
<ref id="bib3">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bogacz</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Brown</surname>
<given-names>E</given-names>
</name>
<name>
<surname>Moehlis</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Holmes</surname>
<given-names>P</given-names>
</name>
<name>
<surname>Cohen</surname>
<given-names>JD</given-names>
</name>
</person-group>
<year>2006</year>
<article-title>The physics of optimal decision making: a formal analysis of models of performance in two-alternative forced-choice tasks</article-title>
.
<source>Psychological Review</source>
<volume>113</volume>
:
<fpage>700</fpage>
<lpage>765</lpage>
. doi:
<pub-id pub-id-type="doi">10.1037/0033-295X.113.4.700</pub-id>
<pub-id pub-id-type="pmid">17014301</pub-id>
</mixed-citation>
</ref>
<ref id="bib4">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Britten</surname>
<given-names>KH</given-names>
</name>
<name>
<surname>van Wezel</surname>
<given-names>RJ</given-names>
</name>
</person-group>
<year>1998</year>
<article-title>Electrical microstimulation of cortical area MST biases heading perception in monkeys</article-title>
.
<source>Nature Neuroscience</source>
<volume>1</volume>
:
<fpage>59</fpage>
<lpage>63</lpage>
. doi:
<pub-id pub-id-type="doi">10.1038/259</pub-id>
</mixed-citation>
</ref>
<ref id="bib5">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Britten</surname>
<given-names>KH</given-names>
</name>
<name>
<surname>Van Wezel</surname>
<given-names>RJ</given-names>
</name>
</person-group>
<year>2002</year>
<article-title>Area MST and heading perception in macaque monkeys</article-title>
.
<source>Cerebral Cortex</source>
<volume>12</volume>
:
<fpage>692</fpage>
<lpage>701</lpage>
. doi:
<pub-id pub-id-type="doi">10.1093/cercor/12.7.692</pub-id>
<pub-id pub-id-type="pmid">12050081</pub-id>
</mixed-citation>
</ref>
<ref id="bib6">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Chen</surname>
<given-names>A</given-names>
</name>
<name>
<surname>DeAngelis</surname>
<given-names>GC</given-names>
</name>
<name>
<surname>Angelaki</surname>
<given-names>DE</given-names>
</name>
</person-group>
<year>2011a</year>
<article-title>A comparison of vestibular spatiotemporal tuning in macaque parietoinsular vestibular cortex, ventral intraparietal area, and medial superior temporal area</article-title>
.
<source>The Journal of Neuroscience</source>
<volume>31</volume>
:
<fpage>3082</fpage>
<lpage>3094</lpage>
. doi:
<pub-id pub-id-type="doi">10.1523/JNEUROSCI.4476-10.2011</pub-id>
<pub-id pub-id-type="pmid">21414929</pub-id>
</mixed-citation>
</ref>
<ref id="bib7">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Chen</surname>
<given-names>A</given-names>
</name>
<name>
<surname>DeAngelis</surname>
<given-names>GC</given-names>
</name>
<name>
<surname>Angelaki</surname>
<given-names>DE</given-names>
</name>
</person-group>
<year>2011b</year>
<article-title>Representation of vestibular and visual cues to self-motion in ventral intraparietal cortex</article-title>
.
<source>The Journal of Neuroscience</source>
<volume>31</volume>
:
<fpage>12036</fpage>
<lpage>12052</lpage>
. doi:
<pub-id pub-id-type="doi">10.1523/JNEUROSCI.0395-11.2011</pub-id>
<pub-id pub-id-type="pmid">21849564</pub-id>
</mixed-citation>
</ref>
<ref id="bib8">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Clark</surname>
<given-names>JJ</given-names>
</name>
<name>
<surname>Yuille</surname>
<given-names>AL</given-names>
</name>
</person-group>
<year>1990</year>
<source>Data fusion for sensory information processing systems</source>
.
<publisher-loc>Boston</publisher-loc>
:
<publisher-name>Kluwer Academic</publisher-name>
</mixed-citation>
</ref>
<ref id="bib9">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Colonius</surname>
<given-names>H</given-names>
</name>
<name>
<surname>Arndt</surname>
<given-names>P</given-names>
</name>
</person-group>
<year>2001</year>
<article-title>A two-stage model for visual-auditory interaction in saccadic latencies</article-title>
.
<source>Perception & Psychophysics</source>
<volume>63</volume>
:
<fpage>126</fpage>
<lpage>147</lpage>
. doi:
<pub-id pub-id-type="doi">10.3758/BF03200508</pub-id>
<pub-id pub-id-type="pmid">11304009</pub-id>
</mixed-citation>
</ref>
<ref id="bib10">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Corneil</surname>
<given-names>BD</given-names>
</name>
<name>
<surname>Van Wanrooij</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Munoz</surname>
<given-names>DP</given-names>
</name>
<name>
<surname>Van Opstal</surname>
<given-names>AJ</given-names>
</name>
</person-group>
<year>2002</year>
<article-title>Auditory-visual interactions subserving goal-directed saccades in a complex scene</article-title>
.
<source>Journal of Neurophysiology</source>
<volume>88</volume>
:
<fpage>438</fpage>
<lpage>454</lpage>
<pub-id pub-id-type="pmid">12091566</pub-id>
</mixed-citation>
</ref>
<ref id="bib11">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ernst</surname>
<given-names>MO</given-names>
</name>
<name>
<surname>Banks</surname>
<given-names>MS</given-names>
</name>
</person-group>
<year>2002</year>
<article-title>Humans integrate visual and haptic information in a statistically optimal fashion</article-title>
.
<source>Nature</source>
<volume>415</volume>
:
<fpage>429</fpage>
<lpage>433</lpage>
. doi:
<pub-id pub-id-type="doi">10.1038/415429a</pub-id>
<pub-id pub-id-type="pmid">11807554</pub-id>
</mixed-citation>
</ref>
<ref id="bib12">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Fernandez</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Goldberg</surname>
<given-names>JM</given-names>
</name>
</person-group>
<year>1976</year>
<article-title>Physiology of peripheral neurons innervating otolith organs of the squirrel monkey. III. Response dynamics</article-title>
.
<source>Journal of Neurophysiology</source>
<volume>39</volume>
:
<fpage>996</fpage>
<lpage>1008</lpage>
<pub-id pub-id-type="pmid">824414</pub-id>
</mixed-citation>
</ref>
<ref id="bib13">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Fetsch</surname>
<given-names>CR</given-names>
</name>
<name>
<surname>Pouget</surname>
<given-names>A</given-names>
</name>
<name>
<surname>DeAngelis</surname>
<given-names>GC</given-names>
</name>
<name>
<surname>Angelaki</surname>
<given-names>DE</given-names>
</name>
</person-group>
<year>2011</year>
<article-title>Neural correlates of reliability-based cue weighting during multisensory integration</article-title>
.
<source>Nature Neuroscience</source>
<volume>15</volume>
:
<fpage>146</fpage>
<lpage>154</lpage>
. doi:
<pub-id pub-id-type="doi">10.1038/nn.2983</pub-id>
</mixed-citation>
</ref>
<ref id="bib14">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Fetsch</surname>
<given-names>CR</given-names>
</name>
<name>
<surname>Rajguru</surname>
<given-names>SM</given-names>
</name>
<name>
<surname>Karunaratne</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Gu</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Angelaki</surname>
<given-names>DE</given-names>
</name>
<name>
<surname>Deangelis</surname>
<given-names>GC</given-names>
</name>
</person-group>
<year>2010</year>
<article-title>Spatiotemporal properties of vestibular responses in area MSTd</article-title>
.
<source>Journal of Neurophysiology</source>
<volume>104</volume>
:
<fpage>1506</fpage>
<lpage>1522</lpage>
. doi:
<pub-id pub-id-type="doi">10.1152/jn.91247.2008</pub-id>
<pub-id pub-id-type="pmid">20631212</pub-id>
</mixed-citation>
</ref>
<ref id="bib15">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Fetsch</surname>
<given-names>CR</given-names>
</name>
<name>
<surname>Turner</surname>
<given-names>AH</given-names>
</name>
<name>
<surname>DeAngelis</surname>
<given-names>GC</given-names>
</name>
<name>
<surname>Angelaki</surname>
<given-names>DE</given-names>
</name>
</person-group>
<year>2009</year>
<article-title>Dynamic reweighting of visual and vestibular cues during self-motion perception</article-title>
.
<source>The Journal of Neuroscience</source>
<volume>29</volume>
:
<fpage>15601</fpage>
<lpage>15612</lpage>
. doi:
<pub-id pub-id-type="doi">10.1523/JNEUROSCI.2574-09.2009</pub-id>
<pub-id pub-id-type="pmid">20007484</pub-id>
</mixed-citation>
</ref>
<ref id="bib16">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Goodman</surname>
<given-names>SN</given-names>
</name>
</person-group>
<year>1999</year>
<article-title>Toward evidence-based medical statistics. 2: the Bayes factor</article-title>
.
<source>Annals of Internal Medicine</source>
<volume>130</volume>
:
<fpage>1005</fpage>
<lpage>1013</lpage>
. doi:
<pub-id pub-id-type="doi">10.7326/0003-4819-130-12-199906150-00019</pub-id>
<pub-id pub-id-type="pmid">10383350</pub-id>
</mixed-citation>
</ref>
<ref id="bib17">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Graf</surname>
<given-names>AB</given-names>
</name>
<name>
<surname>Kohn</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Jazayeri</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Movshon</surname>
<given-names>JA</given-names>
</name>
</person-group>
<year>2011</year>
<article-title>Decoding the activity of neuronal populations in macaque primary visual cortex</article-title>
.
<source>Nature Neuroscience</source>
<volume>14</volume>
:
<fpage>239</fpage>
<lpage>245</lpage>
. doi:
<pub-id pub-id-type="doi">10.1038/nn.2733</pub-id>
</mixed-citation>
</ref>
<ref id="bib18">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Grice</surname>
<given-names>GR</given-names>
</name>
<name>
<surname>Canham</surname>
<given-names>L</given-names>
</name>
<name>
<surname>Boroughs</surname>
<given-names>JM</given-names>
</name>
</person-group>
<year>1984</year>
<article-title>Combination rule for redundant information in reaction time tasks with divided attention</article-title>
.
<source>Perception & Psychophysics</source>
<volume>35</volume>
:
<fpage>451</fpage>
<lpage>463</lpage>
. doi:
<pub-id pub-id-type="doi">10.3758/BF03203922</pub-id>
<pub-id pub-id-type="pmid">6462872</pub-id>
</mixed-citation>
</ref>
<ref id="bib19">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Gu</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>DeAngelis</surname>
<given-names>GC</given-names>
</name>
<name>
<surname>Angelaki</surname>
<given-names>DE</given-names>
</name>
</person-group>
<year>2007</year>
<article-title>A functional link between area MSTd and heading perception based on vestibular signals</article-title>
.
<source>Nature Neuroscience</source>
<volume>10</volume>
:
<fpage>1038</fpage>
<lpage>1047</lpage>
. doi:
<pub-id pub-id-type="doi">10.1038/nn1935</pub-id>
</mixed-citation>
</ref>
<ref id="bib20">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Gu</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Angelaki</surname>
<given-names>DE</given-names>
</name>
<name>
<surname>Deangelis</surname>
<given-names>GC</given-names>
</name>
</person-group>
<year>2008</year>
<article-title>Neural correlates of multisensory cue integration in macaque MSTd</article-title>
.
<source>Nature Neuroscience</source>
<volume>11</volume>
:
<fpage>1201</fpage>
<lpage>1210</lpage>
. doi:
<pub-id pub-id-type="doi">10.1038/nn.2191</pub-id>
</mixed-citation>
</ref>
<ref id="bib21">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Gu</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Deangelis</surname>
<given-names>GC</given-names>
</name>
<name>
<surname>Angelaki</surname>
<given-names>DE</given-names>
</name>
</person-group>
<year>2012</year>
<article-title>Causal links between dorsal medial superior temporal area neurons and multisensory heading perception</article-title>
.
<source>The Journal of Neuroscience</source>
<volume>32</volume>
:
<fpage>2299</fpage>
<lpage>2313</lpage>
. doi:
<pub-id pub-id-type="doi">10.1523/JNEUROSCI.5154-11.2012</pub-id>
<pub-id pub-id-type="pmid">22396405</pub-id>
</mixed-citation>
</ref>
<ref id="bib22">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Gu</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Fetsch</surname>
<given-names>CR</given-names>
</name>
<name>
<surname>Adeyemo</surname>
<given-names>B</given-names>
</name>
<name>
<surname>Deangelis</surname>
<given-names>GC</given-names>
</name>
<name>
<surname>Angelaki</surname>
<given-names>DE</given-names>
</name>
</person-group>
<year>2010</year>
<article-title>Decoding of MSTd population activity accounts for variations in the precision of heading perception</article-title>
.
<source>Neuron</source>
<volume>66</volume>
:
<fpage>596</fpage>
<lpage>609</lpage>
. doi:
<pub-id pub-id-type="doi">10.1016/j.neuron.2010.04.026</pub-id>
<pub-id pub-id-type="pmid">20510863</pub-id>
</mixed-citation>
</ref>
<ref id="bib23">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Gu</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Watkins</surname>
<given-names>PV</given-names>
</name>
<name>
<surname>Angelaki</surname>
<given-names>DE</given-names>
</name>
<name>
<surname>DeAngelis</surname>
<given-names>GC</given-names>
</name>
</person-group>
<year>2006</year>
<article-title>Visual and nonvisual contributions to three-dimensional heading selectivity in the medial superior temporal area</article-title>
.
<source>The Journal of Neuroscience</source>
<volume>26</volume>
:
<fpage>73</fpage>
<lpage>85</lpage>
. doi:
<pub-id pub-id-type="doi">10.1523/JNEUROSCI.2356-05.2006</pub-id>
<pub-id pub-id-type="pmid">16399674</pub-id>
</mixed-citation>
</ref>
<ref id="bib24">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Heuer</surname>
<given-names>HW</given-names>
</name>
<name>
<surname>Britten</surname>
<given-names>KH</given-names>
</name>
</person-group>
<year>2007</year>
<article-title>Linear responses to stochastic motion signals in area MST</article-title>
.
<source>Journal of Neurophysiology</source>
<volume>98</volume>
:
<fpage>1115</fpage>
<lpage>1124</lpage>
. doi:
<pub-id pub-id-type="doi">10.1152/jn.00083.2007</pub-id>
<pub-id pub-id-type="pmid">17615139</pub-id>
</mixed-citation>
</ref>
<ref id="bib25">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Jeffreys</surname>
<given-names>H</given-names>
</name>
</person-group>
<year>1998</year>
<source>Theory of probability</source>
.
<publisher-loc>Oxford</publisher-loc>
:
<publisher-name>Clarendon Press</publisher-name>
</mixed-citation>
</ref>
<ref id="bib26">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kiani</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Hanks</surname>
<given-names>TD</given-names>
</name>
<name>
<surname>Shadlen</surname>
<given-names>MN</given-names>
</name>
</person-group>
<year>2008</year>
<article-title>Bounded integration in parietal cortex underlies decisions even when viewing duration is dictated by the environment</article-title>
.
<source>The Journal of Neuroscience</source>
<volume>28</volume>
:
<fpage>3017</fpage>
<lpage>3029</lpage>
. doi:
<pub-id pub-id-type="doi">10.1523/JNEUROSCI.4761-07.2008</pub-id>
<pub-id pub-id-type="pmid">18354005</pub-id>
</mixed-citation>
</ref>
<ref id="bib27">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Knill</surname>
<given-names>DC</given-names>
</name>
<name>
<surname>Saunders</surname>
<given-names>JA</given-names>
</name>
</person-group>
<year>2003</year>
<article-title>Do humans optimally integrate stereo and texture information for judgments of surface slant?</article-title>
<source>Vision Research</source>
<volume>43</volume>
:
<fpage>2539</fpage>
<lpage>2558</lpage>
. doi:
<pub-id pub-id-type="doi">10.1016/S0042-6989(03)00458-9</pub-id>
<pub-id pub-id-type="pmid">13129541</pub-id>
</mixed-citation>
</ref>
<ref id="bib28">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Laming</surname>
<given-names>DRJ</given-names>
</name>
</person-group>
<year>1968</year>
<source>Information theory of choice-reaction times</source>
.
<publisher-loc>London</publisher-loc>
:
<publisher-name>Academic Press</publisher-name>
</mixed-citation>
</ref>
<ref id="bib29">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Lisberger</surname>
<given-names>SG</given-names>
</name>
<name>
<surname>Movshon</surname>
<given-names>JA</given-names>
</name>
</person-group>
<year>1999</year>
<article-title>Visual motion analysis for pursuit eye movements in area MT of macaque monkeys</article-title>
.
<source>The Journal of Neuroscience</source>
<volume>19</volume>
:
<fpage>2224</fpage>
<lpage>2246</lpage>
<pub-id pub-id-type="pmid">10066275</pub-id>
</mixed-citation>
</ref>
<ref id="bib30">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ma</surname>
<given-names>WJ</given-names>
</name>
<name>
<surname>Beck</surname>
<given-names>JM</given-names>
</name>
<name>
<surname>Latham</surname>
<given-names>PE</given-names>
</name>
<name>
<surname>Pouget</surname>
<given-names>A</given-names>
</name>
</person-group>
<year>2006</year>
<article-title>Bayesian inference with probabilistic population codes</article-title>
.
<source>Nature Neuroscience</source>
<volume>9</volume>
:
<fpage>1432</fpage>
<lpage>1438</lpage>
. doi:
<pub-id pub-id-type="doi">10.1038/nn1790</pub-id>
</mixed-citation>
</ref>
<ref id="bib31">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Mazurek</surname>
<given-names>ME</given-names>
</name>
<name>
<surname>Roitman</surname>
<given-names>JD</given-names>
</name>
<name>
<surname>Ditterich</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Shadlen</surname>
<given-names>MN</given-names>
</name>
</person-group>
<year>2003</year>
<article-title>A role for neural integrators in perceptual decision making</article-title>
.
<source>Cerebral Cortex</source>
<volume>13</volume>
:
<fpage>1257</fpage>
<lpage>1269</lpage>
. doi:
<pub-id pub-id-type="doi">10.1093/cercor/bhg097</pub-id>
<pub-id pub-id-type="pmid">14576217</pub-id>
</mixed-citation>
</ref>
<ref id="bib32">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Miller</surname>
<given-names>J</given-names>
</name>
</person-group>
<year>1982</year>
<article-title>Divided attention: evidence for coactivation with redundant signals</article-title>
.
<source>Cognitive Psychology</source>
<volume>14</volume>
:
<fpage>247</fpage>
<lpage>279</lpage>
. doi:
<pub-id pub-id-type="doi">10.1016/0010-0285(82)90010-X</pub-id>
<pub-id pub-id-type="pmid">7083803</pub-id>
</mixed-citation>
</ref>
<ref id="bib33">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Morgan</surname>
<given-names>ML</given-names>
</name>
<name>
<surname>Deangelis</surname>
<given-names>GC</given-names>
</name>
<name>
<surname>Angelaki</surname>
<given-names>DE</given-names>
</name>
</person-group>
<year>2008</year>
<article-title>Multisensory integration in macaque visual cortex depends on cue reliability</article-title>
.
<source>Neuron</source>
<volume>59</volume>
:
<fpage>662</fpage>
<lpage>673</lpage>
. doi:
<pub-id pub-id-type="doi">10.1016/j.neuron.2008.06.024</pub-id>
<pub-id pub-id-type="pmid">18760701</pub-id>
</mixed-citation>
</ref>
<ref id="bib50">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Onken</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Drugowitsch</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Kanitscheider</surname>
<given-names>I</given-names>
</name>
<name>
<surname>DeAngelis</surname>
<given-names>GC</given-names>
</name>
<name>
<surname>Beck</surname>
<given-names>JM</given-names>
</name>
<name>
<surname>Pouget</surname>
<given-names>A</given-names>
</name>
</person-group>
<year>2012</year>
<article-title>Near optimal multisensory integration with nonlinear probabilistic population codes using divisive normalization</article-title>
.
<comment>
<italic>The Society for Neuroscience annual meeting 2012</italic>
</comment>
</mixed-citation>
</ref>
<ref id="bib34">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Otto</surname>
<given-names>TU</given-names>
</name>
<name>
<surname>Mamassian</surname>
<given-names>P</given-names>
</name>
</person-group>
<year>2012</year>
<article-title>Noise and correlations in parallel perceptual decision making</article-title>
.
<source>Current Biology</source>
<volume>22</volume>
:
<fpage>1391</fpage>
<lpage>1396</lpage>
. doi:
<pub-id pub-id-type="doi">10.1016/j.cub.2012.05.031</pub-id>
<pub-id pub-id-type="pmid">22771043</pub-id>
</mixed-citation>
</ref>
<ref id="bib35">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Palmer</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Huk</surname>
<given-names>AC</given-names>
</name>
<name>
<surname>Shadlen</surname>
<given-names>MN</given-names>
</name>
</person-group>
<year>2005</year>
<article-title>The effect of stimulus strength on the speed and accuracy of a perceptual decision</article-title>
.
<source>Journal of Vision</source>
<volume>5</volume>
:
<fpage>376</fpage>
<lpage>404</lpage>
. doi:
<pub-id pub-id-type="doi">10.1167/5.5.1</pub-id>
<pub-id pub-id-type="pmid">16097871</pub-id>
</mixed-citation>
</ref>
<ref id="bib36">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Papoulis</surname>
<given-names>A</given-names>
</name>
</person-group>
<year>1991</year>
<source>Probability, random variables, and stochastic processes</source>
.
<publisher-loc>New York, London</publisher-loc>
:
<publisher-name>McGraw-Hill</publisher-name>
</mixed-citation>
</ref>
<ref id="bib37">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Price</surname>
<given-names>NS</given-names>
</name>
<name>
<surname>Ono</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Mustari</surname>
<given-names>MJ</given-names>
</name>
<name>
<surname>Ibbotson</surname>
<given-names>MR</given-names>
</name>
</person-group>
<year>2005</year>
<article-title>Comparing acceleration and speed tuning in macaque MT: physiology and modeling</article-title>
.
<source>Journal of Neurophysiology</source>
<volume>94</volume>
:
<fpage>3451</fpage>
<lpage>3464</lpage>
. doi:
<pub-id pub-id-type="doi">10.1152/jn.00564.2005</pub-id>
<pub-id pub-id-type="pmid">16079192</pub-id>
</mixed-citation>
</ref>
<ref id="bib38">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Raab</surname>
<given-names>DH</given-names>
</name>
</person-group>
<year>1962</year>
<article-title>Statistical facilitation of simple reaction times</article-title>
.
<source>Transactions of the New York Academy of Sciences</source>
<volume>24</volume>
:
<fpage>574</fpage>
<lpage>590</lpage>
. doi:
<pub-id pub-id-type="doi">10.1111/j.2164-0947.1962.tb01433.x</pub-id>
<pub-id pub-id-type="pmid">14489538</pub-id>
</mixed-citation>
</ref>
<ref id="bib39">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ratcliff</surname>
<given-names>R</given-names>
</name>
</person-group>
<year>1978</year>
<article-title>A theory of memory retrieval</article-title>
.
<source>Psychological Review</source>
<volume>85</volume>
:
<fpage>59</fpage>
<lpage>108</lpage>
. doi:
<pub-id pub-id-type="doi">10.1037/0033-295X.85.2.59</pub-id>
</mixed-citation>
</ref>
<ref id="bib40">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ratcliff</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Smith</surname>
<given-names>PL</given-names>
</name>
</person-group>
<year>2004</year>
<article-title>A comparison of sequential sampling models for two-choice reaction time</article-title>
.
<source>Psychological Review</source>
<volume>111</volume>
:
<fpage>333</fpage>
<lpage>367</lpage>
. doi:
<pub-id pub-id-type="doi">10.1037/0033-295X.111.2.333</pub-id>
<pub-id pub-id-type="pmid">15065913</pub-id>
</mixed-citation>
</ref>
<ref id="bib41">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Schlack</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Sterbing-D'Angelo</surname>
<given-names>SJ</given-names>
</name>
<name>
<surname>Hartung</surname>
<given-names>K</given-names>
</name>
<name>
<surname>Hoffmann</surname>
<given-names>KP</given-names>
</name>
<name>
<surname>Bremmer</surname>
<given-names>F</given-names>
</name>
</person-group>
<year>2005</year>
<article-title>Multisensory space representations in the macaque ventral intraparietal area</article-title>
.
<source>The Journal of Neuroscience</source>
<volume>25</volume>
:
<fpage>4616</fpage>
<lpage>4625</lpage>
. doi:
<pub-id pub-id-type="doi">10.1523/JNEUROSCI.0455-05.2005</pub-id>
<pub-id pub-id-type="pmid">15872109</pub-id>
</mixed-citation>
</ref>
<ref id="bib42">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Smith</surname>
<given-names>PL</given-names>
</name>
</person-group>
<year>2000</year>
<article-title>Stochastic dynamic models of response time and accuracy: a foundational primer</article-title>
.
<source>Journal of Mathematical Psychology</source>
<volume>44</volume>
:
<fpage>408</fpage>
<lpage>463</lpage>
. doi:
<pub-id pub-id-type="doi">10.1006/jmps.1999.1260</pub-id>
<pub-id pub-id-type="pmid">10973778</pub-id>
</mixed-citation>
</ref>
<ref id="bib43">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Stanford</surname>
<given-names>TR</given-names>
</name>
<name>
<surname>Shankar</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Massoglia</surname>
<given-names>DP</given-names>
</name>
<name>
<surname>Costello</surname>
<given-names>MG</given-names>
</name>
<name>
<surname>Salinas</surname>
<given-names>E</given-names>
</name>
</person-group>
<year>2010</year>
<article-title>Perceptual decision making in less than 30 milliseconds</article-title>
.
<source>Nature Neuroscience</source>
<volume>13</volume>
:
<fpage>379</fpage>
<lpage>385</lpage>
. doi:
<pub-id pub-id-type="doi">10.1038/nn.2485</pub-id>
</mixed-citation>
</ref>
<ref id="bib44">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Stephan</surname>
<given-names>KE</given-names>
</name>
<name>
<surname>Penny</surname>
<given-names>WD</given-names>
</name>
<name>
<surname>Daunizeau</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Moran</surname>
<given-names>RJ</given-names>
</name>
<name>
<surname>Friston</surname>
<given-names>KJ</given-names>
</name>
</person-group>
<year>2009</year>
<article-title>Bayesian model selection for group studies</article-title>
.
<source>NeuroImage</source>
<volume>46</volume>
:
<fpage>1004</fpage>
<lpage>1017</lpage>
. doi:
<pub-id pub-id-type="doi">10.1016/j.neuroimage.2009.03.025</pub-id>
<pub-id pub-id-type="pmid">19306932</pub-id>
</mixed-citation>
</ref>
<ref id="bib45">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Tolhurst</surname>
<given-names>DJ</given-names>
</name>
<name>
<surname>Movshon</surname>
<given-names>JA</given-names>
</name>
<name>
<surname>Dean</surname>
<given-names>AF</given-names>
</name>
</person-group>
<year>1983</year>
<article-title>The statistical reliability of signals in single neurons in cat and monkey visual cortex</article-title>
.
<source>Vision Research</source>
<volume>23</volume>
:
<fpage>775</fpage>
<lpage>785</lpage>
. doi:
<pub-id pub-id-type="doi">10.1016/0042-6989(83)90200-6</pub-id>
<pub-id pub-id-type="pmid">6623937</pub-id>
</mixed-citation>
</ref>
<ref id="bib46">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Townsend</surname>
<given-names>JT</given-names>
</name>
<name>
<surname>Wenger</surname>
<given-names>MJ</given-names>
</name>
</person-group>
<year>2004</year>
<article-title>A theory of interactive parallel processing: new capacity measures and predictions for a response time inequality series</article-title>
.
<source>Psychological Review</source>
<volume>111</volume>
:
<fpage>1003</fpage>
<lpage>1035</lpage>
. doi:
<pub-id pub-id-type="doi">10.1037/0033-295X.111.4.1003</pub-id>
<pub-id pub-id-type="pmid">15482071</pub-id>
</mixed-citation>
</ref>
<ref id="bib47">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>van Beers</surname>
<given-names>RJ</given-names>
</name>
<name>
<surname>Sittig</surname>
<given-names>AC</given-names>
</name>
<name>
<surname>Denier van der Gon</surname>
<given-names>JJ</given-names>
</name>
</person-group>
<year>1996</year>
<article-title>How humans combine simultaneous proprioceptive and visual position information</article-title>
.
<source>Experimental Brain Research</source>
<volume>111</volume>
:
<fpage>253</fpage>
<lpage>261</lpage>
. doi:
<pub-id pub-id-type="doi">10.1016/S0079-6123(08)60413-6</pub-id>
<pub-id pub-id-type="pmid">8891655</pub-id>
</mixed-citation>
</ref>
<ref id="bib48">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Whitchurch</surname>
<given-names>EA</given-names>
</name>
<name>
<surname>Takahashi</surname>
<given-names>TT</given-names>
</name>
</person-group>
<year>2006</year>
<article-title>Combined auditory and visual stimuli facilitate head saccades in the barn owl (Tyto alba)</article-title>
.
<source>Journal of Neurophysiology</source>
<volume>96</volume>
:
<fpage>730</fpage>
<lpage>745</lpage>
. doi:
<pub-id pub-id-type="doi">10.1152/jn.00072.2006</pub-id>
<pub-id pub-id-type="pmid">16672296</pub-id>
</mixed-citation>
</ref>
<ref id="bib49">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wichmann</surname>
<given-names>FA</given-names>
</name>
<name>
<surname>Hill</surname>
<given-names>NJ</given-names>
</name>
</person-group>
<year>2001</year>
<article-title>The psychometric function: II. Bootstrap-based confidence intervals and sampling</article-title>
.
<source>Perception & Psychophysics</source>
<volume>63</volume>
:
<fpage>1314</fpage>
<lpage>1329</lpage>
. doi:
<pub-id pub-id-type="doi">10.3758/BF03194545</pub-id>
<pub-id pub-id-type="pmid">11800459</pub-id>
</mixed-citation>
</ref>
</ref-list>
</back>
<sub-article id="SA1" article-type="article-commentary">
<front-stub>
<article-id pub-id-type="doi">10.7554/eLife.03005.017</article-id>
<title-group>
<article-title>Decision letter</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="editor">
<name>
<surname>Marder</surname>
<given-names>Eve</given-names>
</name>
<role>Reviewing editor</role>
<aff>
<institution>Brandeis University</institution>
,
<country>United States</country>
</aff>
</contrib>
</contrib-group>
</front-stub>
<body>
<boxed-text position="float" orientation="portrait">
<p>eLife posts the editorial decision letter and author response on a selection of the published articles (subject to the approval of the authors). An edited version of the letter sent to the authors after peer review is shown, indicating the substantive concerns or comments; minor concerns are not usually shown. Reviewers have the opportunity to discuss the decision before the letter is sent (see
<ext-link ext-link-type="uri" xlink:href="http://elifesciences.org/review-process">review process</ext-link>
). Similarly, the author response typically shows only responses to the major concerns raised by the reviewers.</p>
</boxed-text>
<p>Thank you for sending your work entitled “Optimal multisensory decision-making in a reaction-time task” for consideration at
<italic>eLife.</italic>
Your article has been favorably evaluated by Eve Marder (Senior editor) and 2 reviewers, one of whom, Emilio Salinas, has agreed to reveal his identity.</p>
<p>The Senior editor and the two reviewers discussed their comments before we reached this decision, and the Senior editor has assembled the following comments to help you prepare a revised submission.</p>
<p>The authors carry out a detailed theoretical analysis of a vestibular-visual cue integration task in which subjects can make a response at any time after the stimulus comes on. Unlike in tasks with fixed information delivery times, the behavioral thresholds in the combined condition of the present study are not better than both of the individual thresholds. The reason for this is that subjects terminate evidence accumulation more quickly in the combined condition. The authors develop a model that incorporates time-varying evidence across both cues up to the reaction time (minus baseline stimulus-response processing), and they show that this model accurately characterizes reaction times and accuracy. They also show that subjects integrate evidence approximately optimally.</p>
<p>This study represents both theoretical and empirical advances. The behavioral experiments have been carefully carried out, the data analysis is detailed and thorough, and the modeling provides an important insight into the behavioral process. I think both the experimental data and the modeling insights are quite compelling and novel. On one hand, multisensory experiments have become quite popular, and performance improvements have been amply documented both in terms of reaction times and of response accuracy. But in retrospect it seems rather surprising that multisensory enhancement has not been studied in the more natural, simultaneous condition in which both time and accuracy are in play. This work not only fills this gap, but also presents results that may seem quite paradoxical when RT and accuracy are analyzed separately from each other. Surprisingly, the combined condition does not produce better (i.e., more accurate) performance, as one may have thought based on previous results, but mostly faster performance.</p>
<p>The study also presents fits of the experimental data to a generalized version of the diffusion model that works with two independent streams of sensory evidence. The model may not be the ultimate one – it is rather abstract and provides little mechanistic intuition about the underlying neuronal coding schemes and circuit interactions – but it does serve its purpose at this point, which is to provide a quantitative statistical benchmark for measuring the effectiveness of those underlying neural interactions, as well as a framework for testing and generating hypotheses. The generalization to two evidence streams, rather than one, and to a time-dependent reliability of the sensory evidence is a clever and useful theoretical advance, and it describes the data quite well.</p>
<p>Minor comments:</p>
<p>1) How were the degrees of freedom calculated for the BIC? Was the model probability calculated by computing a BIC for each subject, and then summing these across subjects? Another approach that is used in functional imaging is to compute exceedance probabilities. Model evidence across multiple subjects inflates degrees of freedom, and exceedance probabilities have been developed to deal with that problem. This is similar to fixed effects vs. mixed effects (or hierarchical) models for analyzing behavioral data across multiple subjects.</p>
<p>2) Within the Results section there is a paragraph about how to set the noise terms in the model, but the reader finds that out several lines ahead. This would be easier to follow if an introductory sentence were added along the lines of 'The noise terms eta_vis and eta_vest play crucial roles in the model, as they relate to the reliability of the momentary sensory evidence. To specify the manner in which such noise may depend on motion coherence, we relied on fundamental assumptions about how optic flow stimuli are represented by the brain...”</p>
</body>
</sub-article>
<sub-article id="SA2" article-type="reply">
<front-stub>
<article-id pub-id-type="doi">10.7554/eLife.03005.018</article-id>
<title-group>
<article-title>Author response</article-title>
</title-group>
</front-stub>
<body>
<p>
<italic>1) How were the degrees of freedom calculated for the BIC? Was the model probability calculated by computing a BIC for each subject, and then summing these across subjects? Another approach that is used in functional imaging is to compute exceedance probabilities. Model evidence across multiple subjects inflates degrees of freedom, and exceedance probabilities have been developed to deal with that problem. This is similar to fixed effects vs. mixed effects (or hierarchical) models for analyzing behavioral data across multiple subjects</italic>
.</p>
<p>As we fitted the model parameters for each subject separately, the BIC was computed for each subject separately and then summed. The details of this procedure are described in the Data Analysis subsection in Methods. All but one model discussed in the main text have the same number of parameters, such that other approaches to taking the number of model parameters into account would have led to the same result.</p>
<p>As suggested by the reviewers, we have additionally performed a random-effects Bayesian model comparison, now included in
<xref ref-type="fig" rid="fig7s1">Figure 7–figure supplement 1</xref>
(panels b and c). The results of this analysis are consistent with our previous BIC analysis, adding strength to the conclusions. We thank the reviewers for this good suggestion.</p>
<p>
<italic>2) Within the Results section there is a paragraph about how to set the noise terms in the model, but the reader finds that out several lines ahead. This would be easier to follow if an introductory sentence were added along the lines of 'The noise terms eta_vis and eta_vest play crucial roles in the model, as they relate to the reliability of the momentary sensory evidence</italic>
.
<italic>To specify the manner in which such noise may depend on motion coherence, we relied on fundamental assumptions about how optic flow stimuli are represented by the brain...”</italic>
</p>
<p>Thank you for this suggestion. We have modified the beginning of this paragraph as suggested.</p>
</body>
</sub-article>
</pmc>
<affiliations>
<list>
<country>
<li>France</li>
<li>Suisse</li>
<li>États-Unis</li>
</country>
</list>
<tree>
<country name="États-Unis">
<noRegion>
<name sortKey="Drugowitsch, Jan" sort="Drugowitsch, Jan" uniqKey="Drugowitsch J" first="Jan" last="Drugowitsch">Jan Drugowitsch</name>
</noRegion>
<name sortKey="Angelaki, Dora E" sort="Angelaki, Dora E" uniqKey="Angelaki D" first="Dora E" last="Angelaki">Dora E. Angelaki</name>
<name sortKey="Deangelis, Gregory C" sort="Deangelis, Gregory C" uniqKey="Deangelis G" first="Gregory C" last="Deangelis">Gregory C. Deangelis</name>
<name sortKey="Klier, Eliana M" sort="Klier, Eliana M" uniqKey="Klier E" first="Eliana M" last="Klier">Eliana M. Klier</name>
<name sortKey="Pouget, Alexandre" sort="Pouget, Alexandre" uniqKey="Pouget A" first="Alexandre" last="Pouget">Alexandre Pouget</name>
</country>
<country name="France">
<noRegion>
<name sortKey="Drugowitsch, Jan" sort="Drugowitsch, Jan" uniqKey="Drugowitsch J" first="Jan" last="Drugowitsch">Jan Drugowitsch</name>
</noRegion>
</country>
<country name="Suisse">
<noRegion>
<name sortKey="Drugowitsch, Jan" sort="Drugowitsch, Jan" uniqKey="Drugowitsch J" first="Jan" last="Drugowitsch">Jan Drugowitsch</name>
</noRegion>
<name sortKey="Pouget, Alexandre" sort="Pouget, Alexandre" uniqKey="Pouget A" first="Alexandre" last="Pouget">Alexandre Pouget</name>
</country>
</tree>
</affiliations>
</record>

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/HapticV1/Data/Pmc/Checkpoint
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 000A56 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/Pmc/Checkpoint/biblio.hfd -nk 000A56 | SxmlIndent | more

To add a link to this page in the Wicri network

{{Explor lien
   |wiki=    Ticri/CIDE
   |area=    HapticV1
   |flux=    Pmc
   |étape=   Checkpoint
   |type=    RBID
   |clé=     PMC:4102720
   |texte=   Optimal multisensory decision-making in a reaction-time task
}}

To generate wiki pages

HfdIndexSelect -h $EXPLOR_AREA/Data/Pmc/Checkpoint/RBID.i   -Sk "pubmed:24929965" \
       | HfdSelect -Kh $EXPLOR_AREA/Data/Pmc/Checkpoint/biblio.hfd   \
       | NlmPubMed2Wicri -a HapticV1 

Wicri

This area was generated with Dilib version V0.6.23.
Data generation: Mon Jun 13 01:09:46 2016. Site generation: Wed Mar 6 09:54:07 2024