Exploration server on haptic devices

Warning: this site is under development!
Warning: this site is generated automatically from raw corpora.
The information has therefore not been validated.

Human discrimination of head-centred visual–inertial yaw rotations

Internal identifier: 000539 (Pmc/Checkpoint); previous: 000538; next: 000540


Authors: Alessandro Nesti [Germany]; Karl A. Beykirch [Germany, Austria]; Paolo Pretto [Germany]; Heinrich H. Bülthoff [Germany, South Korea]

Source:

RBID : PMC:4646930

Abstract

To successfully perform daily activities such as maintaining posture or running, humans need to be sensitive to self-motion over a large range of motion intensities. Recent studies have shown that the human ability to discriminate self-motion in the presence of either inertial-only motion cues or visual-only motion cues is not constant but rather decreases with motion intensity. However, these results do not yet allow for a quantitative description of how self-motion is discriminated in the presence of combined visual and inertial cues, since little is known about visual–inertial perceptual integration and the resulting self-motion perception over a wide range of motion intensity. Here we investigate these two questions for head-centred yaw rotations (0.5 Hz) presented either in darkness or combined with visual cues (optical flow with limited lifetime dots). Participants discriminated a reference motion, repeated unchanged for every trial, from a comparison motion, iteratively adjusted in peak velocity so as to measure the participants’ differential threshold, i.e. the smallest perceivable change in stimulus intensity. A total of six participants were tested at four reference velocities (15, 30, 45 and 60 °/s). Results are combined for further analysis with previously published differential thresholds measured for visual-only yaw rotation cues using the same participants and procedure. Overall, differential thresholds increase with stimulus intensity following a trend described well by three power functions with exponents of 0.36, 0.62 and 0.49 for inertial, visual and visual–inertial stimuli, respectively. Despite the different exponents, differential thresholds do not depend on the type of sensory input significantly, suggesting that combining visual and inertial stimuli does not lead to improved discrimination performance over the investigated range of yaw rotations.
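The abstract's central quantitative result is that differential thresholds grow with stimulus intensity as power functions of peak velocity. A minimal sketch of that relation, DT = k · v^e, is given below; only the exponents (0.36 inertial, 0.62 visual, 0.49 visual–inertial) come from the abstract, while the scaling constant k is an illustrative assumption, not a value reported in the paper.

```python
# Sketch of the power-law model for differential thresholds (DT):
# DT = k * v**e, where v is the reference peak velocity in deg/s.
# Exponents are taken from the abstract; k = 1.0 is an assumed,
# illustrative scaling constant (the paper's fitted constants differ).

EXPONENTS = {"inertial": 0.36, "visual": 0.62, "visual-inertial": 0.49}

def differential_threshold(peak_velocity, condition, k=1.0):
    """Smallest perceivable change in peak velocity (deg/s) under a
    power-law model DT = k * v**e for the given sensory condition."""
    exponent = EXPONENTS[condition]
    return k * peak_velocity ** exponent

# Because every exponent is positive, DT increases with intensity:
# discrimination worsens as the reference velocity grows, across
# the tested range of 15-60 deg/s.
for condition in EXPONENTS:
    dts = [differential_threshold(v, condition) for v in (15, 30, 45, 60)]
    assert dts == sorted(dts)  # monotonically increasing with velocity
```

Note that with an exponent below 1, relative sensitivity (DT/v) still falls with intensity more slowly than a constant Weber fraction would predict, which is consistent with the abstract's claim that discrimination ability is not constant across motion intensities.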


Url:
DOI: 10.1007/s00221-015-4426-2
PubMed: 26319547
PubMed Central: 4646930


Affiliations:


Links toward previous steps (curation, corpus...)


Links to Exploration step

PMC:4646930

The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">Human discrimination of head-centred visual–inertial yaw rotations</title>
<author>
<name sortKey="Nesti, Alessandro" sort="Nesti, Alessandro" uniqKey="Nesti A" first="Alessandro" last="Nesti">Alessandro Nesti</name>
<affiliation wicri:level="3">
<nlm:aff id="Aff1">Department of Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Tübingen, Germany</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea>Department of Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Tübingen</wicri:regionArea>
<placeName>
<region type="land" nuts="1">Bade-Wurtemberg</region>
<region type="district" nuts="2">District de Tübingen</region>
<settlement type="city">Tübingen</settlement>
</placeName>
</affiliation>
</author>
<author>
<name sortKey="Beykirch, Karl A" sort="Beykirch, Karl A" uniqKey="Beykirch K" first="Karl A." last="Beykirch">Karl A. Beykirch</name>
<affiliation wicri:level="3">
<nlm:aff id="Aff1">Department of Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Tübingen, Germany</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea>Department of Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Tübingen</wicri:regionArea>
<placeName>
<region type="land" nuts="1">Bade-Wurtemberg</region>
<region type="district" nuts="2">District de Tübingen</region>
<settlement type="city">Tübingen</settlement>
</placeName>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="Aff2">Research and Development, AMST Systemtechnik GmbH, Ranshofen, Austria</nlm:aff>
<country xml:lang="fr">Autriche</country>
<wicri:regionArea>Research and Development, AMST Systemtechnik GmbH, Ranshofen</wicri:regionArea>
<wicri:noRegion>Ranshofen</wicri:noRegion>
</affiliation>
</author>
<author>
<name sortKey="Pretto, Paolo" sort="Pretto, Paolo" uniqKey="Pretto P" first="Paolo" last="Pretto">Paolo Pretto</name>
<affiliation wicri:level="3">
<nlm:aff id="Aff1">Department of Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Tübingen, Germany</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea>Department of Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Tübingen</wicri:regionArea>
<placeName>
<region type="land" nuts="1">Bade-Wurtemberg</region>
<region type="district" nuts="2">District de Tübingen</region>
<settlement type="city">Tübingen</settlement>
</placeName>
</affiliation>
</author>
<author>
<name sortKey="Bulthoff, Heinrich H" sort="Bulthoff, Heinrich H" uniqKey="Bulthoff H" first="Heinrich H." last="Bülthoff">Heinrich H. Bülthoff</name>
<affiliation wicri:level="3">
<nlm:aff id="Aff1">Department of Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Tübingen, Germany</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea>Department of Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Tübingen</wicri:regionArea>
<placeName>
<region type="land" nuts="1">Bade-Wurtemberg</region>
<region type="district" nuts="2">District de Tübingen</region>
<settlement type="city">Tübingen</settlement>
</placeName>
</affiliation>
<affiliation wicri:level="3">
<nlm:aff id="Aff3">Department of Brain and Cognitive Engineering, Korea University, Seoul, Korea</nlm:aff>
<country xml:lang="fr">Corée du Sud</country>
<wicri:regionArea>Department of Brain and Cognitive Engineering, Korea University, Seoul</wicri:regionArea>
<placeName>
<settlement type="city">Séoul</settlement>
</placeName>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PMC</idno>
<idno type="pmid">26319547</idno>
<idno type="pmc">4646930</idno>
<idno type="url">http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4646930</idno>
<idno type="RBID">PMC:4646930</idno>
<idno type="doi">10.1007/s00221-015-4426-2</idno>
<date when="2015">2015</date>
<idno type="wicri:Area/Pmc/Corpus">000789</idno>
<idno type="wicri:Area/Pmc/Curation">000789</idno>
<idno type="wicri:Area/Pmc/Checkpoint">000539</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a" type="main">Human discrimination of head-centred visual–inertial yaw rotations</title>
<author>
<name sortKey="Nesti, Alessandro" sort="Nesti, Alessandro" uniqKey="Nesti A" first="Alessandro" last="Nesti">Alessandro Nesti</name>
<affiliation wicri:level="3">
<nlm:aff id="Aff1">Department of Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Tübingen, Germany</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea>Department of Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Tübingen</wicri:regionArea>
<placeName>
<region type="land" nuts="1">Bade-Wurtemberg</region>
<region type="district" nuts="2">District de Tübingen</region>
<settlement type="city">Tübingen</settlement>
</placeName>
</affiliation>
</author>
<author>
<name sortKey="Beykirch, Karl A" sort="Beykirch, Karl A" uniqKey="Beykirch K" first="Karl A." last="Beykirch">Karl A. Beykirch</name>
<affiliation wicri:level="3">
<nlm:aff id="Aff1">Department of Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Tübingen, Germany</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea>Department of Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Tübingen</wicri:regionArea>
<placeName>
<region type="land" nuts="1">Bade-Wurtemberg</region>
<region type="district" nuts="2">District de Tübingen</region>
<settlement type="city">Tübingen</settlement>
</placeName>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="Aff2">Research and Development, AMST Systemtechnik GmbH, Ranshofen, Austria</nlm:aff>
<country xml:lang="fr">Autriche</country>
<wicri:regionArea>Research and Development, AMST Systemtechnik GmbH, Ranshofen</wicri:regionArea>
<wicri:noRegion>Ranshofen</wicri:noRegion>
</affiliation>
</author>
<author>
<name sortKey="Pretto, Paolo" sort="Pretto, Paolo" uniqKey="Pretto P" first="Paolo" last="Pretto">Paolo Pretto</name>
<affiliation wicri:level="3">
<nlm:aff id="Aff1">Department of Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Tübingen, Germany</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea>Department of Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Tübingen</wicri:regionArea>
<placeName>
<region type="land" nuts="1">Bade-Wurtemberg</region>
<region type="district" nuts="2">District de Tübingen</region>
<settlement type="city">Tübingen</settlement>
</placeName>
</affiliation>
</author>
<author>
<name sortKey="Bulthoff, Heinrich H" sort="Bulthoff, Heinrich H" uniqKey="Bulthoff H" first="Heinrich H." last="Bülthoff">Heinrich H. Bülthoff</name>
<affiliation wicri:level="3">
<nlm:aff id="Aff1">Department of Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Tübingen, Germany</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea>Department of Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Tübingen</wicri:regionArea>
<placeName>
<region type="land" nuts="1">Bade-Wurtemberg</region>
<region type="district" nuts="2">District de Tübingen</region>
<settlement type="city">Tübingen</settlement>
</placeName>
</affiliation>
<affiliation wicri:level="3">
<nlm:aff id="Aff3">Department of Brain and Cognitive Engineering, Korea University, Seoul, Korea</nlm:aff>
<country xml:lang="fr">Corée du Sud</country>
<wicri:regionArea>Department of Brain and Cognitive Engineering, Korea University, Seoul</wicri:regionArea>
<placeName>
<settlement type="city">Séoul</settlement>
</placeName>
</affiliation>
</author>
</analytic>
<series>
<title level="j">Experimental Brain Research</title>
<idno type="ISSN">0014-4819</idno>
<idno type="eISSN">1432-1106</idno>
<imprint>
<date when="2015">2015</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass></textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">
<p>To successfully perform daily activities such as maintaining posture or running, humans need to be sensitive to self-motion over a large range of motion intensities. Recent studies have shown that the human ability to discriminate self-motion in the presence of either inertial-only motion cues or visual-only motion cues is not constant but rather decreases with motion intensity. However, these results do not yet allow for a quantitative description of how self-motion is discriminated in the presence of combined visual and inertial cues, since little is known about visual–inertial perceptual integration and the resulting self-motion perception over a wide range of motion intensity. Here we investigate these two questions for head-centred yaw rotations (0.5 Hz) presented either in darkness or combined with visual cues (optical flow with limited lifetime dots). Participants discriminated a reference motion, repeated unchanged for every trial, from a comparison motion, iteratively adjusted in peak velocity so as to measure the participants’ differential threshold, i.e. the smallest perceivable change in stimulus intensity. A total of six participants were tested at four reference velocities (15, 30, 45 and 60 °/s). Results are combined for further analysis with previously published differential thresholds measured for visual-only yaw rotation cues using the same participants and procedure. Overall, differential thresholds increase with stimulus intensity following a trend described well by three power functions with exponents of 0.36, 0.62 and 0.49 for inertial, visual and visual–inertial stimuli, respectively. Despite the different exponents, differential thresholds do not depend on the type of sensory input significantly, suggesting that combining visual and inertial stimuli does not lead to improved discrimination performance over the investigated range of yaw rotations.</p>
</div>
</front>
<back>
<div1 type="bibliography">
<listBibl>
<biblStruct>
<analytic>
<author>
<name sortKey="Allum, Jhj" uniqKey="Allum J">JHJ Allum</name>
</author>
<author>
<name sortKey="Graf, W" uniqKey="Graf W">W Graf</name>
</author>
<author>
<name sortKey="Dichgans, J" uniqKey="Dichgans J">J Dichgans</name>
</author>
<author>
<name sortKey="Schmidt, Cl" uniqKey="Schmidt C">CL Schmidt</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Beierholm, U" uniqKey="Beierholm U">U Beierholm</name>
</author>
<author>
<name sortKey="Kording, K" uniqKey="Kording K">K Körding</name>
</author>
<author>
<name sortKey="Shams, L" uniqKey="Shams L">L Shams</name>
</author>
<author>
<name sortKey="Ma, W J" uniqKey="Ma W">W-J Ma</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bertolini, G" uniqKey="Bertolini G">G Bertolini</name>
</author>
<author>
<name sortKey="Ramat, S" uniqKey="Ramat S">S Ramat</name>
</author>
<author>
<name sortKey="Laurens, J" uniqKey="Laurens J">J Laurens</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bos, Je" uniqKey="Bos J">JE Bos</name>
</author>
<author>
<name sortKey="Bles, W" uniqKey="Bles W">W Bles</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Butler, Js" uniqKey="Butler J">JS Butler</name>
</author>
<author>
<name sortKey="Smith, St" uniqKey="Smith S">ST Smith</name>
</author>
<author>
<name sortKey="Campos, Jl" uniqKey="Campos J">JL Campos</name>
</author>
<author>
<name sortKey="Bulthoff, Hh" uniqKey="Bulthoff H">HH Bülthoff</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Butler, Js" uniqKey="Butler J">JS Butler</name>
</author>
<author>
<name sortKey="Campos, Jl" uniqKey="Campos J">JL Campos</name>
</author>
<author>
<name sortKey="Bulthoff, Hh" uniqKey="Bulthoff H">HH Bülthoff</name>
</author>
<author>
<name sortKey="Smith, St" uniqKey="Smith S">ST Smith</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Chaudhuri, Se" uniqKey="Chaudhuri S">SE Chaudhuri</name>
</author>
<author>
<name sortKey="Karmali, F" uniqKey="Karmali F">F Karmali</name>
</author>
<author>
<name sortKey="Merfeld, Dm" uniqKey="Merfeld D">DM Merfeld</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="De Winkel, Kn" uniqKey="De Winkel K">KN De Winkel</name>
</author>
<author>
<name sortKey="Werkhoven, Pj" uniqKey="Werkhoven P">PJ Werkhoven</name>
</author>
<author>
<name sortKey="Groen, El" uniqKey="Groen E">EL Groen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="De Winkel, Kn" uniqKey="De Winkel K">KN De Winkel</name>
</author>
<author>
<name sortKey="Soyka, F" uniqKey="Soyka F">F Soyka</name>
</author>
<author>
<name sortKey="Barnett Cowan, M" uniqKey="Barnett Cowan M">M Barnett-Cowan</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Dichgans, J" uniqKey="Dichgans J">J Dichgans</name>
</author>
<author>
<name sortKey="Brandt, T" uniqKey="Brandt T">T Brandt</name>
</author>
</analytic>
</biblStruct>
<biblStruct></biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Duh, Hb L" uniqKey="Duh H">HB-L Duh</name>
</author>
<author>
<name sortKey="Parker, De" uniqKey="Parker D">DE Parker</name>
</author>
<author>
<name sortKey="Philips, Jo" uniqKey="Philips J">JO Philips</name>
</author>
<author>
<name sortKey="Furness, Ta" uniqKey="Furness T">TA Furness</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ernst, Mo" uniqKey="Ernst M">MO Ernst</name>
</author>
<author>
<name sortKey="Bulthoff, Hh" uniqKey="Bulthoff H">HH Bülthoff</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Fetsch, Cr" uniqKey="Fetsch C">CR Fetsch</name>
</author>
<author>
<name sortKey="Turner, Ah" uniqKey="Turner A">AH Turner</name>
</author>
<author>
<name sortKey="Deangelis, Gc" uniqKey="Deangelis G">GC DeAngelis</name>
</author>
<author>
<name sortKey="Angelaki, De" uniqKey="Angelaki D">DE Angelaki</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gescheider, Ga" uniqKey="Gescheider G">GA Gescheider</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gescheider, Ga" uniqKey="Gescheider G">GA Gescheider</name>
</author>
</analytic>
</biblStruct>
<biblStruct></biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Grossman, Ge" uniqKey="Grossman G">GE Grossman</name>
</author>
<author>
<name sortKey="Leigh, Rj" uniqKey="Leigh R">RJ Leigh</name>
</author>
<author>
<name sortKey="Abel, La" uniqKey="Abel L">LA Abel</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gu, Y" uniqKey="Gu Y">Y Gu</name>
</author>
<author>
<name sortKey="Angelaki, De" uniqKey="Angelaki D">DE Angelaki</name>
</author>
<author>
<name sortKey="Deangelis, Gc" uniqKey="Deangelis G">GC Deangelis</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Guedry, Fej" uniqKey="Guedry F">FEJ Guedry</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Guedry, Fe" uniqKey="Guedry F">FE Guedry</name>
</author>
<author>
<name sortKey="Benson, Aj" uniqKey="Benson A">AJ Benson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Guilford, Jp" uniqKey="Guilford J">JP Guilford</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Johnson, Wh" uniqKey="Johnson W">WH Johnson</name>
</author>
<author>
<name sortKey="Sunahara, Fa" uniqKey="Sunahara F">FA Sunahara</name>
</author>
<author>
<name sortKey="Landolt, Jp" uniqKey="Landolt J">JP Landolt</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Karmali, F" uniqKey="Karmali F">F Karmali</name>
</author>
<author>
<name sortKey="Lim, K" uniqKey="Lim K">K Lim</name>
</author>
<author>
<name sortKey="Merfeld, Dm" uniqKey="Merfeld D">DM Merfeld</name>
</author>
</analytic>
</biblStruct>
<biblStruct></biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mallery, Rm" uniqKey="Mallery R">RM Mallery</name>
</author>
<author>
<name sortKey="Olomu, Ou" uniqKey="Olomu O">OU Olomu</name>
</author>
<author>
<name sortKey="Uchanski, Rm" uniqKey="Uchanski R">RM Uchanski</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Maloney, Lt" uniqKey="Maloney L">LT Maloney</name>
</author>
<author>
<name sortKey="Yang, Jn" uniqKey="Yang J">JN Yang</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Massot, C" uniqKey="Massot C">C Massot</name>
</author>
<author>
<name sortKey="Chacron, Mj" uniqKey="Chacron M">MJ Chacron</name>
</author>
<author>
<name sortKey="Cullen, Ke" uniqKey="Cullen K">KE Cullen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Merfeld, Dm" uniqKey="Merfeld D">DM Merfeld</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Merfeld, Dm" uniqKey="Merfeld D">DM Merfeld</name>
</author>
<author>
<name sortKey="Young, Lr" uniqKey="Young L">LR Young</name>
</author>
<author>
<name sortKey="Oman, Cm" uniqKey="Oman C">CM Oman</name>
</author>
<author>
<name sortKey="Sehlhamer, Mj" uniqKey="Sehlhamer M">MJ Sehlhamer</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mergner, T" uniqKey="Mergner T">T Mergner</name>
</author>
<author>
<name sortKey="Becker, W" uniqKey="Becker W">W Becker</name>
</author>
</analytic>
</biblStruct>
<biblStruct></biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Naseri, Ar" uniqKey="Naseri A">AR Naseri</name>
</author>
<author>
<name sortKey="Grant, Pr" uniqKey="Grant P">PR Grant</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Nesti, A" uniqKey="Nesti A">A Nesti</name>
</author>
<author>
<name sortKey="Barnett Cowan, M" uniqKey="Barnett Cowan M">M Barnett-Cowan</name>
</author>
<author>
<name sortKey="Macneilage, Pr" uniqKey="Macneilage P">PR Macneilage</name>
</author>
<author>
<name sortKey="Bulthoff, Hh" uniqKey="Bulthoff H">HH Bülthoff</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Nesti, A" uniqKey="Nesti A">A Nesti</name>
</author>
<author>
<name sortKey="Beykirch, Ka" uniqKey="Beykirch K">KA Beykirch</name>
</author>
<author>
<name sortKey="Macneilage, Pr" uniqKey="Macneilage P">PR MacNeilage</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Nesti, A" uniqKey="Nesti A">A Nesti</name>
</author>
<author>
<name sortKey="Beykirch, Ka" uniqKey="Beykirch K">KA Beykirch</name>
</author>
<author>
<name sortKey="Pretto, P" uniqKey="Pretto P">P Pretto</name>
</author>
<author>
<name sortKey="Bulthoff, Hh" uniqKey="Bulthoff H">HH Bülthoff</name>
</author>
</analytic>
</biblStruct>
<biblStruct></biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Nieuwenhuizen, Fm" uniqKey="Nieuwenhuizen F">FM Nieuwenhuizen</name>
</author>
<author>
<name sortKey="Bulthoff, Hh" uniqKey="Bulthoff H">HH Bülthoff</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Prsa, M" uniqKey="Prsa M">M Prsa</name>
</author>
<author>
<name sortKey="Gale, S" uniqKey="Gale S">S Gale</name>
</author>
<author>
<name sortKey="Blanke, O" uniqKey="Blanke O">O Blanke</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Pulaski, Pd" uniqKey="Pulaski P">PD Pulaski</name>
</author>
<author>
<name sortKey="Zee, Ds" uniqKey="Zee D">DS Zee</name>
</author>
<author>
<name sortKey="Robinson, Da" uniqKey="Robinson D">DA Robinson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Robinson, Da" uniqKey="Robinson D">DA Robinson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Roditi, Re" uniqKey="Roditi R">RE Roditi</name>
</author>
<author>
<name sortKey="Crane, Bt" uniqKey="Crane B">BT Crane</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Sadeghi, Sg" uniqKey="Sadeghi S">SG Sadeghi</name>
</author>
<author>
<name sortKey="Chacron, Mj" uniqKey="Chacron M">MJ Chacron</name>
</author>
<author>
<name sortKey="Taylor, Mc" uniqKey="Taylor M">MC Taylor</name>
</author>
<author>
<name sortKey="Cullen, Ke" uniqKey="Cullen K">KE Cullen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Seidman, Sh" uniqKey="Seidman S">SH Seidman</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Shams, L" uniqKey="Shams L">L Shams</name>
</author>
<author>
<name sortKey="Beierholm, Ur" uniqKey="Beierholm U">UR Beierholm</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Teghtsoonian, R" uniqKey="Teghtsoonian R">R Teghtsoonian</name>
</author>
</analytic>
</biblStruct>
<biblStruct></biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Telford, L" uniqKey="Telford L">L Telford</name>
</author>
<author>
<name sortKey="Howard, Ip" uniqKey="Howard I">IP Howard</name>
</author>
<author>
<name sortKey="Ohmi, M" uniqKey="Ohmi M">M Ohmi</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Valko, Y" uniqKey="Valko Y">Y Valko</name>
</author>
<author>
<name sortKey="Lewis, Rf" uniqKey="Lewis R">RF Lewis</name>
</author>
<author>
<name sortKey="Priesol, Aj" uniqKey="Priesol A">AJ Priesol</name>
</author>
<author>
<name sortKey="Merfeld, Dm" uniqKey="Merfeld D">DM Merfeld</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Van Atteveldt, Nm" uniqKey="Van Atteveldt N">NM Van Atteveldt</name>
</author>
<author>
<name sortKey="Formisano, E" uniqKey="Formisano E">E Formisano</name>
</author>
<author>
<name sortKey="Blomert, L" uniqKey="Blomert L">L Blomert</name>
</author>
<author>
<name sortKey="Goebel, R" uniqKey="Goebel R">R Goebel</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Van Wassenhove, V" uniqKey="Van Wassenhove V">V Van Wassenhove</name>
</author>
<author>
<name sortKey="Grant, Kw" uniqKey="Grant K">KW Grant</name>
</author>
<author>
<name sortKey="Poeppel, D" uniqKey="Poeppel D">D Poeppel</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Waespe, W" uniqKey="Waespe W">W Waespe</name>
</author>
<author>
<name sortKey="Henn, V" uniqKey="Henn V">V Henn</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Weber, Kp" uniqKey="Weber K">KP Weber</name>
</author>
<author>
<name sortKey="Aw, St" uniqKey="Aw S">ST Aw</name>
</author>
<author>
<name sortKey="Todd, Mj" uniqKey="Todd M">MJ Todd</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Wei, X" uniqKey="Wei X">X Wei</name>
</author>
<author>
<name sortKey="Stocker, Aa" uniqKey="Stocker A">AA Stocker</name>
</author>
</analytic>
</biblStruct>
<biblStruct></biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Zupan, Lh" uniqKey="Zupan L">LH Zupan</name>
</author>
<author>
<name sortKey="Merfeld, Dm" uniqKey="Merfeld D">DM Merfeld</name>
</author>
<author>
<name sortKey="Darlot, C" uniqKey="Darlot C">C Darlot</name>
</author>
</analytic>
</biblStruct>
</listBibl>
</div1>
</back>
</TEI>
<pmc article-type="research-article">
<pmc-dir>properties open_access</pmc-dir>
<front>
<journal-meta>
<journal-id journal-id-type="nlm-ta">Exp Brain Res</journal-id>
<journal-id journal-id-type="iso-abbrev">Exp Brain Res</journal-id>
<journal-title-group>
<journal-title>Experimental Brain Research</journal-title>
</journal-title-group>
<issn pub-type="ppub">0014-4819</issn>
<issn pub-type="epub">1432-1106</issn>
<publisher>
<publisher-name>Springer Berlin Heidelberg</publisher-name>
<publisher-loc>Berlin/Heidelberg</publisher-loc>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="pmid">26319547</article-id>
<article-id pub-id-type="pmc">4646930</article-id>
<article-id pub-id-type="publisher-id">4426</article-id>
<article-id pub-id-type="doi">10.1007/s00221-015-4426-2</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Research Article</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>Human discrimination of head-centred visual–inertial yaw rotations</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" corresp="yes">
<name>
<surname>Nesti</surname>
<given-names>Alessandro</given-names>
</name>
<address>
<email>alessandro.nesti@tuebingen.mpg.de</email>
</address>
<xref ref-type="aff" rid="Aff1"></xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Beykirch</surname>
<given-names>Karl A.</given-names>
</name>
<xref ref-type="aff" rid="Aff1"></xref>
<xref ref-type="aff" rid="Aff2"></xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Pretto</surname>
<given-names>Paolo</given-names>
</name>
<xref ref-type="aff" rid="Aff1"></xref>
</contrib>
<contrib contrib-type="author" corresp="yes">
<name>
<surname>Bülthoff</surname>
<given-names>Heinrich H.</given-names>
</name>
<address>
<email>heinrich.buelthoff@tuebingen.mpg.de</email>
</address>
<xref ref-type="aff" rid="Aff1"></xref>
<xref ref-type="aff" rid="Aff3"></xref>
</contrib>
<aff id="Aff1">
<label></label>
Department of Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Tübingen, Germany</aff>
<aff id="Aff2">
<label></label>
Research and Development, AMST Systemtechnik GmbH, Ranshofen, Austria</aff>
<aff id="Aff3">
<label></label>
Department of Brain and Cognitive Engineering, Korea University, Seoul, Korea</aff>
</contrib-group>
<pub-date pub-type="epub">
<day>30</day>
<month>8</month>
<year>2015</year>
</pub-date>
<pub-date pub-type="pmc-release">
<day>30</day>
<month>8</month>
<year>2015</year>
</pub-date>
<pub-date pub-type="ppub">
<year>2015</year>
</pub-date>
<volume>233</volume>
<issue>12</issue>
<fpage>3553</fpage>
<lpage>3564</lpage>
<history>
<date date-type="received">
<day>16</day>
<month>12</month>
<year>2014</year>
</date>
<date date-type="accepted">
<day>21</day>
<month>8</month>
<year>2015</year>
</date>
</history>
<permissions>
<copyright-statement>© The Author(s) 2015</copyright-statement>
<license license-type="OpenAccess">
<license-p>
<bold>Open Access</bold>
This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (
<ext-link ext-link-type="uri" xlink:href="http://creativecommons.org/licenses/by/4.0/">http://creativecommons.org/licenses/by/4.0/</ext-link>
), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.</license-p>
</license>
</permissions>
<abstract id="Abs1">
<p>To successfully perform daily activities such as maintaining posture or running, humans need to be sensitive to self-motion over a large range of motion intensities. Recent studies have shown that the human ability to discriminate self-motion in the presence of either inertial-only motion cues or visual-only motion cues is not constant but rather decreases with motion intensity. However, these results do not yet allow for a quantitative description of how self-motion is discriminated in the presence of combined visual and inertial cues, since little is known about visual–inertial perceptual integration and the resulting self-motion perception over a wide range of motion intensity. Here we investigate these two questions for head-centred yaw rotations (0.5 Hz) presented either in darkness or combined with visual cues (optical flow with limited lifetime dots). Participants discriminated a reference motion, repeated unchanged for every trial, from a comparison motion, iteratively adjusted in peak velocity so as to measure the participants’ differential threshold, i.e. the smallest perceivable change in stimulus intensity. A total of six participants were tested at four reference velocities (15, 30, 45 and 60 °/s). Results are combined for further analysis with previously published differential thresholds measured for visual-only yaw rotation cues using the same participants and procedure. Overall, differential thresholds increase with stimulus intensity following a trend described well by three power functions with exponents of 0.36, 0.62 and 0.49 for inertial, visual and visual–inertial stimuli, respectively. Despite the different exponents, differential thresholds do not depend on the type of sensory input significantly, suggesting that combining visual and inertial stimuli does not lead to improved discrimination performance over the investigated range of yaw rotations.</p>
</abstract>
<kwd-group xml:lang="en">
<title>Keywords</title>
<kwd>Differential thresholds</kwd>
<kwd>Multisensory integration</kwd>
<kwd>Vection</kwd>
<kwd>Self-motion perception</kwd>
<kwd>Yaw</kwd>
<kwd>Virtual reality</kwd>
<kwd>Psychophysics</kwd>
</kwd-group>
<custom-meta-group>
<custom-meta>
<meta-name>issue-copyright-statement</meta-name>
<meta-value>© Springer-Verlag Berlin Heidelberg 2015</meta-value>
</custom-meta>
</custom-meta-group>
</article-meta>
</front>
<body>
<sec id="Sec1">
<title>Introduction</title>
<p>When moving through the environment, humans need to constantly estimate their own motion to perform a variety of crucial tasks (e.g. maintaining posture in the presence of external disturbances or controlling a vehicle). This estimate of self-motion, computed by the central nervous system (CNS), is the result of complex multisensory information processing of mainly visual and inertial cues and is inevitably affected by noise, and therefore uncertainty. This, for example, can cause two motions with different amplitudes to be perceived as similar, or can cause repetitions of the same motion to be perceived as different.</p>
<p>Over the last century, researchers have been investigating the properties of this perceptual variability, as well as its sources. While a large group of important studies focused on measuring the smallest perceivable motion intensity (absolute threshold) and its dependency on motion direction and frequency (cf. Guedry
<xref ref-type="bibr" rid="CR21">1974</xref>
), only few studies addressed how the smallest perceivable
<italic>change</italic>
in motion intensity (differential threshold, DT) depends on the intensity of the supra-threshold motion (Zaichik et al.
<xref ref-type="bibr" rid="CR56">1999</xref>
; Mallery et al.
<xref ref-type="bibr" rid="CR27">2010</xref>
; Naseri and Grant
<xref ref-type="bibr" rid="CR34">2012</xref>
; Nesti et al.
<xref ref-type="bibr" rid="CR35">2014a</xref>
,
<xref ref-type="bibr" rid="CR37">2015</xref>
). DTs for different intensities of combined visual and inertial motion cues have (to the best of our knowledge) not been investigated yet, as previous studies focused on how visual and inertial sensory cues independently contribute to the discrimination of self-motion. In this study, we investigate the human ability to discriminate rotations centred on the head-vertical axis (yaw) by measuring DTs for different supra-threshold motion intensities in the presence of congruent visual–inertial cues. Moreover, by comparing DTs for visual–inertial rotation cues with DTs for visual-only and inertial-only rotation cues (measured as three separate conditions), we address the question of whether redundant information from different sensory systems can improve discrimination of self-motion.</p>
<sec id="Sec2">
<title>Supra-threshold motion discrimination</title>
<p>In everyday life, humans are frequently exposed to a wide range of self-motion intensities. For example during locomotion, head rotation velocities can range from 0 to 400 °/s and even higher (Grossman et al.
<xref ref-type="bibr" rid="CR19">1988</xref>
). Recent studies investigated human DTs for different motion intensities (Zaichik et al.
<xref ref-type="bibr" rid="CR56">1999</xref>
; Mallery et al.
<xref ref-type="bibr" rid="CR27">2010</xref>
; Naseri and Grant
<xref ref-type="bibr" rid="CR34">2012</xref>
; Nesti et al.
<xref ref-type="bibr" rid="CR35">2014a</xref>
,
<xref ref-type="bibr" rid="CR37">2015</xref>
). This is commonly done by presenting a participant with two consecutive motion stimuli and iteratively adjusting their difference in motion intensity until discrimination performance converges to a specific, statistically derived level of accuracy (Gescheider
<xref ref-type="bibr" rid="CR17">1997</xref>
). By measuring DTs for different reference intensities, these studies showed that DTs increase for increasing motion intensities.</p>
<p>In three recent studies, Mallery et al. (
<xref ref-type="bibr" rid="CR27">2010</xref>
), Naseri and Grant (
<xref ref-type="bibr" rid="CR34">2012</xref>
) and Nesti et al. (
<xref ref-type="bibr" rid="CR35">2014a</xref>
) measured human DTs for inertial-only motion cues (i.e. in darkness) for head-centred yaw rotations, forward–backward translations and vertical translations, respectively. Moreover, Nesti et al. (
<xref ref-type="bibr" rid="CR37">2015</xref>
) measured DTs for yaw self-motion perception as evoked by a purely visual stimulation (vection). These studies have shown that DTs can be described well by a power function of the general form Δ
<italic>S</italic>
 = 
<italic>k</italic>
 
<italic>*</italic>
 
<italic>S</italic>
<sup>
<italic>a</italic>
</sup>
, where Δ
<italic>S</italic>
is the DT,
<italic>S</italic>
is the stimulus intensity and
<italic>k</italic>
and
<italic>a</italic>
are free parameters that depend on the type of motion investigated. Of these two parameters, the exponent is the one that determines how fast DTs change with intensity: an exponent of 0 reflects DTs that do not depend on stimulus intensity, whereas an exponent of 1 results in the well-known Weber’s law (Gescheider
<xref ref-type="bibr" rid="CR16">1988</xref>
), which linearly relates DTs to stimulus intensity. In the studies mentioned above, the exponent ranges from 0.37 for yaw discrimination (Mallery et al.
<xref ref-type="bibr" rid="CR27">2010</xref>
) to 0.60 for discrimination of upward translations (Nesti et al.
<xref ref-type="bibr" rid="CR35">2014a</xref>
). Whether the functions describing DTs for visual-only and inertial-only stimuli also hold for congruent visual–inertial stimuli is still an open question that we address with the present work.</p>
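The power-law relationship above can be illustrated with a short numerical sketch (Python). Only the exponents are taken from the studies cited in the text; the coefficient k = 1.0 below is a placeholder, not a fitted value.

```python
# Power-law model of differential thresholds: delta_S = k * S**a.
# Exponents 0.37 (inertial yaw, Mallery et al. 2010) and 0.60 (upward
# translations, Nesti et al. 2014a) are from the text; k is illustrative.

def differential_threshold(S, k, a):
    """Smallest perceivable change in intensity at stimulus intensity S."""
    return k * S ** a

# An exponent between 0 and 1 means thresholds grow sublinearly:
# doubling the stimulus intensity multiplies the DT by 2**a < 2.
# a = 0 gives intensity-independent DTs; a = 1 recovers Weber's law.
for a in (0.0, 0.37, 0.60, 1.0):
    ratio = differential_threshold(60.0, 1.0, a) / differential_threshold(30.0, 1.0, a)
    print(f"a = {a:.2f}: DT(60)/DT(30) = {ratio:.2f}")
```
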
</sec>
<sec id="Sec3">
<title>Multisensory integration</title>
<p>In a natural setting, humans rely on visual, vestibular, auditory and somatosensory cues to estimate their orientation and self-motion. This information, coded by multiple sensory systems, must be integrated by the CNS to create a coherent and robust perception of self-motion. The theory of maximum likelihood integration (MLI) provides a mathematical framework for how noisy sensory estimates might combine in a statistically optimal fashion (Ernst and Bülthoff
<xref ref-type="bibr" rid="CR14">2004</xref>
; Doya et al.
<xref ref-type="bibr" rid="CR12">2007</xref>
). In addition to providing a prediction of the multisensory percept, MLI theory also predicts the variance (i.e. the uncertainty) associated with that percept, based on the individual variances associated with each sensory modality. According to MLI, multisensory estimates always have lower variances than individual unisensory estimates (Ernst and Bülthoff
<xref ref-type="bibr" rid="CR14">2004</xref>
; Doya et al.
<xref ref-type="bibr" rid="CR12">2007</xref>
).</p>
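The MLI prediction can be summarized in a minimal numerical sketch (Python), assuming two independent Gaussian sensory estimates; the numbers in the example are hypothetical, not data from any study.

```python
# Maximum likelihood integration (MLI) of two independent Gaussian
# sensory estimates: the combined estimate is an inverse-variance
# weighted average, and the combined variance is always smaller than
# either unisensory variance.

def mli_combine(est_v, var_v, est_i, var_i):
    """Return the MLI estimate and variance for visual (v) and inertial (i) cues."""
    w_v = var_i / (var_v + var_i)   # a cue is weighted more when the other is noisier
    w_i = var_v / (var_v + var_i)
    est = w_v * est_v + w_i * est_i
    var = var_v * var_i / (var_v + var_i)
    return est, var

# Hypothetical example: equal variances give the simple average
# of the two estimates and half the unisensory variance.
est, var = mli_combine(10.0, 4.0, 12.0, 4.0)
print(est, var)  # prints "11.0 2.0"
```
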
<p>MLI is supported by a large amount of experimental evidence, for example, in the fields of visual–auditory and visual–haptic integration (cf. Doya et al.
<xref ref-type="bibr" rid="CR12">2007</xref>
). However, it is not unusual for psychophysical studies on visual–inertial integration to report deviations, sometimes substantial, from MLI predictions. For example, De Winkel et al. (
<xref ref-type="bibr" rid="CR9">2010</xref>
) measured the human ability to estimate heading from visual, inertial and congruent visual–inertial motion cues and observed that the variance associated with multimodal estimates was between the variances measured in the unisensory conditions. In a similar heading experiment, Butler et al. (
<xref ref-type="bibr" rid="CR6">2010</xref>
) investigated human heading perception for visual and inertial stimuli as well as for congruent and incongruent visual–inertial stimuli. While congruent multisensory cues led to increased precision, for conflicting multisensory cues more weight was given to the inertial motion cue, resulting in multisensory estimates whose precision was not as high as MLI would predict. The MLI model was also rejected by De Winkel et al. (
<xref ref-type="bibr" rid="CR10">2013</xref>
) in an experiment where participants discriminated between different yaw rotation intensities. In contrast, optimal or near-optimal integration of visual–inertial cues was reported in psychophysical experiments with humans (Butler et al.
<xref ref-type="bibr" rid="CR7">2011</xref>
; Prsa et al.
<xref ref-type="bibr" rid="CR40">2012</xref>
; Karmali et al.
<xref ref-type="bibr" rid="CR25">2014</xref>
), as well as monkeys (Gu et al.
<xref ref-type="bibr" rid="CR20">2008</xref>
; Fetsch et al.
<xref ref-type="bibr" rid="CR15">2009</xref>
). Interestingly, Butler et al. (
<xref ref-type="bibr" rid="CR7">2011</xref>
) suggested that stereo vision might be important in order to achieve MLI of visual and inertial cues, although results from Fetsch et al. (
<xref ref-type="bibr" rid="CR15">2009</xref>
) contradict this hypothesis.</p>
<p>Overall, considering the high degree of similarity between experimental setups and procedures, such qualitative differences in results are surprising. A possible explanation could reside in the intrinsic ambiguity of visual stimuli (De Winkel et al.
<xref ref-type="bibr" rid="CR9">2010</xref>
), which contain information on both object motion and self-motion. Depending on properties of the visual stimuli, such as their duration, participants may or may not experience illusory self-motion perception (vection) (Dichgans and Brandt
<xref ref-type="bibr" rid="CR11">1978</xref>
). If vection is absent or incomplete, sensory integration is not expected to occur, since the visual and inertial sensory channels are believed to inform about two different physical stimuli: the motion of objects in the visual scene and self-motion.</p>
</sec>
<sec id="Sec4">
<title>Current study</title>
<p>The goal of this study is to psychophysically measure DTs for congruent visual–inertial yaw rotations over an intensity range of 15–60 °/s and to identify the parameters of an analytical relationship (power function) that relates yaw DTs to motion intensity. Furthermore, we measure, in a separate condition and with the same participants, DTs to yaw rotation in darkness. We then compare DTs measured for inertial motion cues (inertial-only condition) and visual–inertial motion cues (visual–inertial condition) with DTs measured for visual motion cues (visual-only condition). The latter data were collected in a previous experiment on vection during constant visual yaw rotations conducted in our laboratory (Nesti et al.
<xref ref-type="bibr" rid="CR37">2015</xref>
). The present study is therefore designed to facilitate comparison of the data with Nesti et al. (
<xref ref-type="bibr" rid="CR37">2015</xref>
) and to allow testing of an MLI model that predicts the variance of the bimodal (visual–inertial) estimate based on the variance of the unimodal (visual-only and inertial-only) estimates. Note that the use of constant visual rotation is a drastic deviation from the standard approaches described above for investigating MLI of visual–inertial cues and is motivated by the desire to ensure that, in the presence of visual-only cues, participants’ discrimination is based on self-motion perception rather than object-motion perception. We hypothesize that DTs depend significantly on motion intensity and that providing visual–inertial motion results in DTs lower than those measured for unimodal motion cues, perhaps as low as MLI predicts.</p>
<p>This study extends current knowledge on self-motion perception by investigating motion discrimination with multisensory cues at different motion intensities. These types of stimuli occur frequently in everyday life and are therefore of interest to several applied fields. For instance, motion drive algorithms for motion simulators implement knowledge of self-motion perception to provide more realistic motion experiences within their limited workspace (Telban et al.
<xref ref-type="bibr" rid="CR48">2005</xref>
). Furthermore, models have been developed (Bos and Bles
<xref ref-type="bibr" rid="CR5">2002</xref>
; Newman et al.
<xref ref-type="bibr" rid="CR38">2012</xref>
) and employed to quantify pilots’ perceptions of self-motion and orientation during both simulated and real flight, allowing estimation of any perceived deviation from reality in the simulator. By measuring DTs, we provide necessary information to adapt these multisensory models to account for the effect of stimulus intensity on the perception of self-motion, which in turn will result in more accurate predictions particularly at high motion intensities.</p>
</sec>
</sec>
<sec id="Sec5">
<title>Methods</title>
<sec id="Sec6">
<title>Participants</title>
<p>Six participants (ages 26–53, one female), four naïve and two experimenters (AN and KAB), took part in the study. All had normal or corrected-to-normal vision and reported no history of balance or spinal disorders and no susceptibility to motion sickness. Written informed consent was collected prior to inclusion in the study, in accordance with the ethical standards specified by the 1964 Declaration of Helsinki.</p>
</sec>
<sec id="Sec7">
<title>Setup</title>
<p>The experiment was conducted using the MPI CyberMotion Simulator, an 8 degrees-of-freedom motion system capable of reproducing continuous head-centred yaw rotations [Fig. 
<xref rid="Fig1" ref-type="fig">1</xref>
, for technical details refer to Nieuwenhuizen and Bülthoff (
<xref ref-type="bibr" rid="CR39">2013</xref>
); Robocoaster, KUKA Roboter GmbH, Germany]. Participants sat inside the closed cabin in a chair with a 5-point harness, and visual stimuli were presented on the white inner surface of the cabin door (approximately 60 cm in front of the participants’ head) by means of two chair-fixed projectors (each 1920 × 1200 pixels resolution, 60 Hz frame rate). For this experiment, a field of view of approximately 70° × 90° and an actual stimulus resolution of approximately 20 pixels/° were used. Participants wore headsets that played white noise during stimulus presentation to mask noise from the simulator motors and provide continuous communication with the experimenter (for safety reasons). The participant’s head was restrained with a Velcro band, which, combined with careful instruction to maintain an upright posture, helped participants avoid Coriolis effects (Guedry and Benson
<xref ref-type="bibr" rid="CR22">1976</xref>
; Lackner and Graybiel
<xref ref-type="bibr" rid="CR26">1984</xref>
), i.e. the illusory perception of rolling/pitching following head tilts during constant velocity yaw rotations. Participants controlled the experiment with a button box with three active buttons; one was used to initiate the stimulus (control button) and the other two for providing a forced-choice response (response buttons). As per instruction, the button box was held between the participants’ knees, an active effort to help minimize proprioceptive information from the legs. The seat was also wrapped in foam to help mask vibrations of the simulator.
<fig id="Fig1">
<label>Fig. 1</label>
<caption>
<p>Experimental setup. Participants sat inside the simulator cabin and were presented with visual stimuli projected on the inner surface of the cabin door. The
<italic>inset</italic>
provides a picture of the visual stimulus</p>
</caption>
<graphic xlink:href="221_2015_4426_Fig1_HTML" id="MO1"></graphic>
</fig>
</p>
</sec>
<sec id="Sec8">
<title>Stimuli</title>
<p>In both the visual–inertial and the inertial-only conditions, inertial stimuli consisted of 0.5 Hz sinusoidal yaw rotations centred on the participant’s head. Each stimulus was composed of two consecutive parts characterized by two different peak amplitudes, a reference amplitude and a comparison amplitude, whose presentation order was randomized. The stimulus velocity first increased from 0 °/s to the first peak amplitude following a raised half-cycle cosine mask of 1 s. This amplitude was then maintained for 5 s (2.5 cycles) before changing, again by means of a 1 s raised half-cycle cosine mask, to the comparison amplitude. After 5 s (2.5 cycles) the stimulus was terminated by decreasing its amplitude to 0 °/s through a 3 s raised half-cycle cosine mask. The velocity profile of a typical stimulus is illustrated in Fig. 
<xref rid="Fig2" ref-type="fig">2</xref>
. Different stimulus onset and offset durations were used to hinder comparison of the two constant amplitudes based on stimulus accelerations. As shown by Mallery et al. (
<xref ref-type="bibr" rid="CR27">2010</xref>
) through both modelling and experimental approaches, no confound is to be expected due to velocity storage for such stimuli, i.e. the perception of rotation that persists after the rotational stimulus stops (Bertolini et al.
<xref ref-type="bibr" rid="CR4">2011</xref>
). The stimuli designed for this study resemble those employed by Mallery et al. (
<xref ref-type="bibr" rid="CR27">2010</xref>
) to the greatest possible extent to favour comparison of experimental findings.
<fig id="Fig2">
<label>Fig. 2</label>
<caption>
<p>Velocity profile of a typical stimulus composed of a reference amplitude of 60 °/s and a comparison amplitude of 72 °/s</p>
</caption>
<graphic xlink:href="221_2015_4426_Fig2_HTML" id="MO2"></graphic>
</fig>
</p>
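The stimulus timing described above can be sketched in code (Python). The durations (1 s onset, 5 s at the first amplitude, 1 s transition, 5 s at the second amplitude, 3 s offset) are taken from the text; note that the sketch reproduces only the peak-velocity envelope of the 0.5 Hz sinusoid, not the sinusoid itself.

```python
import math

def raised_cosine(t, duration, start, end):
    """Raised half-cycle cosine transition from start to end over [0, duration]."""
    phase = 0.5 * (1.0 - math.cos(math.pi * t / duration))
    return start + (end - start) * phase

def peak_velocity(t, first, second):
    """Peak-velocity envelope (°/s) of one stimulus at time t (s)."""
    if t < 1.0:                                   # 1 s raised-cosine onset
        return raised_cosine(t, 1.0, 0.0, first)
    if t < 6.0:                                   # 5 s (2.5 cycles) at first amplitude
        return first
    if t < 7.0:                                   # 1 s transition between amplitudes
        return raised_cosine(t - 6.0, 1.0, first, second)
    if t < 12.0:                                  # 5 s (2.5 cycles) at second amplitude
        return second
    if t < 15.0:                                  # 3 s raised-cosine offset
        return raised_cosine(t - 12.0, 3.0, second, 0.0)
    return 0.0

# Example corresponding to Fig. 2: reference 60 °/s, comparison 72 °/s.
print(peak_velocity(3.0, 60.0, 72.0), peak_velocity(9.0, 60.0, 72.0))
```
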
<p>Depending on the experimental condition, stimuli were presented either in darkness (inertial condition) or combined with a virtual visual environment (visual–inertial condition) projected on the inner wall of the cabin (60 cm away from the participant). In the inertial condition, the projectors were off and participants were instructed to close their eyes. Visual stimuli, generated with authoring software for interactive 3D applications (Virtools, 3DVIA), consisted of limited lifetime dots (Fig. 
<xref rid="Fig1" ref-type="fig">1</xref>
) displayed on the surface of a virtual cylinder whose axis coincided with the head-vertical axis of the participants. The radius of the virtual cylinder (5 m) was chosen to achieve a satisfactory visual appearance on the screen (i.e. texture resolution and object size). Dot life was set to 1 s to ensure that no dot outlived a full cycle of the sinusoidal motion, thereby preventing participants from comparing dots’ travelled distances. The number of dots in the scene was maintained constant, and the appearance delay was selected randomly between 0 and 200 ms. Each dot’s diameter as it appeared on the inner wall of the cabin was 3 cm and remained constant for the entire lifetime of the dot. Visual and inertial sinusoidal rotations always had equal intensity and opposite direction, resulting in a congruent multisensory experience of self-motion, that is, the visual scene was perceived as earth-stationary. No visual fixation was used, thereby preserving the participants’ natural behaviour.</p>
<p>Similar to Nesti et al. (
<xref ref-type="bibr" rid="CR37">2015</xref>
), participants were continuously rotating in each session around the head-vertical axis at the constant velocity of 20 °/s. Although the perception of constant inertial rotations disappears within a few seconds after rotation onset (Bertolini et al.
<xref ref-type="bibr" rid="CR4">2011</xref>
), such motion generates vibrations (vibration rms of 0.08 m/s
<sup>2</sup>
) unrelated to the stimulus, which serve multiple purposes. First, as suggested by Butler et al. (
<xref ref-type="bibr" rid="CR6">2010</xref>
), when comparing reference and comparison stimuli, stimulus-unrelated vibrations could mask stimulus-related vibrations from the simulator, which are known to be amplitude dependent (Nesti et al.
<xref ref-type="bibr" rid="CR36">2014b</xref>
). Second, by setting the reference amplitude to 0 °/s, it is possible to measure the yaw absolute threshold in a discrimination task, since the stimulus-unrelated vibrations prevent participants from merely performing a vibration detection task (Mallery et al.
<xref ref-type="bibr" rid="CR27">2010</xref>
; Merfeld
<xref ref-type="bibr" rid="CR30">2011</xref>
). Finally, this allows for a more direct comparison with the DTs estimated by Nesti et al. (
<xref ref-type="bibr" rid="CR37">2015</xref>
). The direction of the constant rotation was reversed approximately every 15 min and stimulus presentation began 1 min after constant velocity was reached to guarantee disappearance of rotational motion perception.</p>
<p>An inertial measurement unit (YEI 3-Space Sensor, 500 Hz) mounted on top of a participant’s head was used to verify the absence of centripetal accelerations during constant and sinusoidal yaw rotations and to measure temporal disparities between visual and inertial motion, a common concern for mechanical and visual systems. This procedure revealed that, when commanded simultaneously, the visual motion preceded the physical motion by approximately 32 ms. Because increasing temporal disparities diminish the influence that multimodal cues have on each other (van Wassenhove et al.
<xref ref-type="bibr" rid="CR52">2007</xref>
; van Atteveldt et al.
<xref ref-type="bibr" rid="CR51">2007</xref>
), temporal disparities were minimized by delaying visual stimuli by 2 frames, which corresponds to approximately 33 ms at the projectors’ frame rate of 60 Hz.</p>
</sec>
<sec id="Sec9">
<title>Procedure</title>
<p>Before stimulus presentation, participants sat in darkness (inertial condition) or in front of the visual environment, initially stationary with respect to the participants (visual–inertial condition). Stimuli were initiated by the participants through the button box and started 1 s after the control button was pressed. A 5 s tone accompanied the presentation of both the reference and the comparison amplitudes. After hearing a beep indicating the end of the stimulus, participants were asked “which rotation felt stronger (1st or 2nd)?”. Participants were specifically instructed to refer to the motion they felt during the two 5-s tone presentations and not during any other part of the stimulus. After a feedback beep, confirming that the answer was recorded, participants waited for 3 s before a beep signalled they could start the next stimulus. In the visual–inertial condition, the visual scene remained visible and stationary with respect to the participants during the time between stimuli.</p>
<p>Both the inertial and the visual–inertial conditions were divided into four sessions of approximately 45 min each, with a 10 min break roughly in the middle of the session to avoid fatigue. Each participant was only allowed to complete one session per day. In every session, the participant’s DT was measured for one of the four reference velocities (15, 30, 45 or 60 °/s) using a psychophysical two-interval forced-choice (2IFC) procedure. While the reference velocity remained constant throughout the whole session, comparison velocities were adjusted for every trial according to an adaptive staircase algorithm: the stimulus level was decreased after three consecutive correct responses and increased after every incorrect response [3-down 1-up rule (Levitt 1971)]. Such an algorithm converges to the stimulus level at which the probability of a single correct answer is 0.794 (cube root of 0.5), i.e. when the probability of a stimulus increase (wrong answer) or decrease (three consecutive correct answers) is equal (
<italic>p</italic>
 = 0.5). The comparison velocity
<italic>c</italic>
<sub>0</sub>
for the first trial was obtained by multiplying the reference velocity by 1.2. The step size, initially set at 2 °/s, was halved every five reversals. Sessions were terminated after 13 reversals (final step size of 0.5 °/s). Typical staircases for one participant are illustrated in Fig. 
<xref rid="Fig3" ref-type="fig">3</xref>
. All participants completed the inertial-only condition before the visual–inertial condition. Reference velocities were tested in random order. An additional session was run to measure the yaw absolute threshold (reference velocity set to 0 °/s) for inertial-only motion stimuli. In this session, the initial comparison velocity was set to 2 °/s with a constant step size of 0.1 °/s.
<fig id="Fig3">
<label>Fig. 3</label>
<caption>
<p>Evolution of the adaptive algorithms for one participant in the inertial (
<italic>black line</italic>
), and visual–inertial (
<italic>red line</italic>
) conditions.
<italic>Blue line</italic>
represents data re-plotted from Nesti et al. (
<xref ref-type="bibr" rid="CR37">2015</xref>
), where DTs for visual-only motion cues were measured using an identical adaptive procedure. Reference velocity was 60 °/s.
<italic>Empty markers</italic>
indicate reversals (color figure online)</p>
</caption>
<graphic xlink:href="221_2015_4426_Fig3_HTML" id="MO3"></graphic>
</fig>
</p>
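The adaptive rule can be sketched as follows (Python). The update logic, step-halving schedule and termination criterion follow the description above; the exact bookkeeping of reversals and the expression of the threshold relative to the reference are assumptions made for this illustration.

```python
class ThreeDownOneUp:
    """3-down 1-up staircase: the comparison velocity decreases after three
    consecutive correct responses and increases after each error; the step
    size is halved every five reversals; the run ends after max_reversals.
    Defaults follow the procedure described above (initial step 2 °/s,
    first comparison = 1.2 * reference, 13 reversals).
    Note: the rule converges where P(correct) = 0.5 ** (1/3) ~ 0.794."""

    def __init__(self, reference, step=2.0, start_factor=1.2, max_reversals=13):
        self.reference = reference
        self.comparison = reference * start_factor
        self.step = step
        self.max_reversals = max_reversals
        self.correct_run = 0
        self.last_move = 0            # +1 = last change went up, -1 = down
        self.reversals = []

    def _move(self, direction):
        if self.last_move == -direction:      # direction change -> reversal
            self.reversals.append(self.comparison)
            if len(self.reversals) % 5 == 0:  # halve step every five reversals
                self.step /= 2.0
        self.last_move = direction
        self.comparison += direction * self.step

    def update(self, correct):
        """Feed one response; call repeatedly until done() is True."""
        if correct:
            self.correct_run += 1
            if self.correct_run == 3:         # three consecutive correct -> harder
                self.correct_run = 0
                self._move(-1)
        else:                                 # any error -> easier
            self.correct_run = 0
            self._move(+1)

    def done(self):
        return len(self.reversals) >= self.max_reversals

    def threshold(self):
        # DT from the mean of the last eight reversal values, relative to
        # the reference (an assumption for this sketch).
        last = self.reversals[-8:]
        return sum(last) / len(last) - self.reference
```

A short run illustrates the rule: starting at 36 °/s for a 30 °/s reference, three correct answers step the comparison down to 34 °/s, and a subsequent error steps it back up, recording the first reversal.
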
</sec>
<sec id="Sec10">
<title>Visual-only condition</title>
<p>Human discrimination of yaw rotations in the presence of visual cues alone was investigated previously by Nesti et al. (
<xref ref-type="bibr" rid="CR37">2015</xref>
), with the aim of allowing comparison with the inertial and visual–inertial cues investigated in the present work. Briefly, in Nesti et al. (
<xref ref-type="bibr" rid="CR37">2015</xref>
) we measured DTs for circular vection for the same six participants of the present study and for the same four reference rotational velocities (15, 30, 45 and 60 °/s). The study also employed the same setup and experimental procedure (2IFC, 3-down 1-up adaptive staircase): at every trial participants experienced two consecutive stimuli and reported which rotation felt stronger. Visual rotations were presented at constant velocity, a stimulus that is known to induce a compelling self-motion perception due to its lack of conflict between visual and inertial information (Dichgans and Brandt
<xref ref-type="bibr" rid="CR11">1978</xref>
). Indeed, human perception of head-centred constant velocity inertial rotations in darkness decays to zero with time. Once this perception has decayed, visual and inertial sensory information during constant visual rotations is non-conflicting, irrespective of the intensity of the inertial rotation. To guarantee that a compelling self-motion perception was induced in the participants
<italic>at</italic>
<italic>every trial</italic>
, visual rotations were terminated by participants via a button press only after the visual scene was confidently perceived as stationary, i.e. all the visual motion was attributed to self-motion. Note that this constitutes a qualitative difference with most published studies on MLI of visual–inertial cues in self-motion perception, where stimuli for the visual-only condition are obtained by simply removing the inertial component from the visual–inertial condition (Butler et al.
<xref ref-type="bibr" rid="CR6">2010</xref>
,
<xref ref-type="bibr" rid="CR7">2011</xref>
; De Winkel et al.
<xref ref-type="bibr" rid="CR9">2010</xref>
,
<xref ref-type="bibr" rid="CR10">2013</xref>
; Prsa et al.
<xref ref-type="bibr" rid="CR40">2012</xref>
). The method used in the present study to measure visual-only DTs has the benefit of avoiding a possible comparison of a true self-motion percept (for inertial-only and visual–inertial conditions) with a mixed perception of object and self-motion, which can occur with the other method. Although the stimuli from Nesti et al. (
<xref ref-type="bibr" rid="CR37">2015</xref>
) differ from the stimuli employed in the present study in terms of stimulus frequency and visual environment, we argue that these differences do not hinder a meaningful comparison of the results of these two studies (see “
<xref rid="Sec18" ref-type="sec">Validity of study comparison</xref>
” in “
<xref rid="Sec14" ref-type="sec">Discussion</xref>
” section). A combined analysis allows for comparison of yaw discrimination in response to visual-only or inertial-only cues. Moreover, it allows for investigation of how redundant sensory information from the visual and inertial sensory systems combines in the presence of multisensory motion cues.</p>
</sec>
<sec id="Sec11">
<title>Data analysis</title>
<p>For every condition, the last eight reversals of the staircase algorithm were averaged in order to compute the DT corresponding to the reference velocity and sensory modality tested. The DTs for each amplitude were averaged across participants for each of the three conditions, inertial-only, visual–inertial and visual-only (Nesti et al.
<xref ref-type="bibr" rid="CR37">2015</xref>
). The averages were fit for each condition to a power function of the form:
<disp-formula id="Equ1">
<label>1</label>
<alternatives>
<tex-math id="M1">\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$\Delta S = k*S^{a}$$\end{document}</tex-math>
<mml:math id="M2" display="block">
<mml:mrow>
<mml:mi mathvariant="normal">Δ</mml:mi>
<mml:mi>S</mml:mi>
<mml:mo>=</mml:mo>
<mml:mi>k</mml:mi>
<mml:mrow></mml:mrow>
<mml:mo></mml:mo>
<mml:msup>
<mml:mi>S</mml:mi>
<mml:mi>a</mml:mi>
</mml:msup>
</mml:mrow>
</mml:math>
<graphic xlink:href="221_2015_4426_Article_Equ1.gif" position="anchor"></graphic>
</alternatives>
</disp-formula>
where Δ
<italic>S</italic>
is the differential threshold,
<italic>S</italic>
is the stimulus intensity, and
<italic>k</italic>
and
<italic>a</italic>
are free parameters. The choice of the power function is motivated by previous studies showing that the power function provides a good description of DTs for self-motion perception as well as for other perceptual modalities (Guilford
<xref ref-type="bibr" rid="CR23">1932</xref>
; Mallery et al.
<xref ref-type="bibr" rid="CR27">2010</xref>
; Nesti et al.
<xref ref-type="bibr" rid="CR35">2014a</xref>
,
<xref ref-type="bibr" rid="CR37">2015</xref>
).</p>
<p>A repeated-measures analysis of covariance (rmANCOVA) was run to assess the effect of the factor “condition” (three levels: “inertial”, “visual” and “visual–inertial”) and of the covariate “motion intensity”. In order to perform the rmANCOVA using the power function model, the following transformation of the data was required:
<disp-formula id="Equ2">
<label>2</label>
<alternatives>
<tex-math id="M3">\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$\log \left( {\Delta S} \right) = \log \left( k \right) + a*\log \left( S \right)$$\end{document}</tex-math>
<mml:math id="M4" display="block">
<mml:mrow>
<mml:mo>log</mml:mo>
<mml:mfenced close=")" open="(" separators="">
<mml:mrow>
<mml:mi mathvariant="normal">Δ</mml:mi>
<mml:mi>S</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo>=</mml:mo>
<mml:mo>log</mml:mo>
<mml:mfenced close=")" open="(">
<mml:mi>k</mml:mi>
</mml:mfenced>
<mml:mo>+</mml:mo>
<mml:mi>a</mml:mi>
<mml:mrow></mml:mrow>
<mml:mo></mml:mo>
<mml:mo>log</mml:mo>
<mml:mfenced close=")" open="(">
<mml:mi>S</mml:mi>
</mml:mfenced>
</mml:mrow>
</mml:math>
<graphic xlink:href="221_2015_4426_Article_Equ2.gif" position="anchor"></graphic>
</alternatives>
</disp-formula>
</p>
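The transformation in Eq. (2) turns the power-function fit into ordinary linear regression in log–log coordinates, which can be sketched as follows (Python; the data below are synthetic, generated from a known power law, not the measured DTs).

```python
import math

# Fit delta_S = k * S**a by least squares on log(delta_S) = log(k) + a*log(S).

def fit_power_law(S, dS):
    """Least-squares line through (log S, log delta_S); returns (k, a)."""
    xs = [math.log(s) for s in S]
    ys = [math.log(d) for d in dS]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    k = math.exp(my - a * mx)   # intercept in log space -> coefficient k
    return k, a

# Synthetic DTs from delta_S = 2 * S**0.5 at the four reference velocities;
# the fit recovers the generating parameters.
S = [15.0, 30.0, 45.0, 60.0]
dS = [2.0 * s ** 0.5 for s in S]
k, a = fit_power_law(S, dS)
print(k, a)
```
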
<p>Additionally, to assess whether the integration of visual and inertial cues followed the MLI model in this experiment, an rmANCOVA was run to compare participants’ DTs in the visual–inertial condition with MLI predictions based on their own DTs as measured in the visual-only and inertial-only conditions. Note that it is common practice to test MLI using the variance of the physiological noise underlying the decision process rather than the experimentally derived thresholds (see, e.g. Butler et al.
<xref ref-type="bibr" rid="CR6">2010</xref>
; De Winkel et al.
<xref ref-type="bibr" rid="CR10">2013</xref>
). For a two-interval discrimination task, such as the one employed here, this requires dividing the DTs by 0.58 (Merfeld
<xref ref-type="bibr" rid="CR30">2011</xref>
). Such a linear transformation of the data does not, however, affect the results of the statistical analysis, and we therefore test MLI directly on the measured DTs using the following equation (Ernst and Bülthoff
<xref ref-type="bibr" rid="CR14">2004</xref>
):
<disp-formula id="Equ3">
<label>3</label>
<alternatives>
<tex-math id="M5">\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$\overline{{{\text{DT}}_{\text{vi}} }}^{2} = \frac{{{\text{DT}}_{\text{v}}^{2} * {\text{DT}}_{\text{i}}^{2} }}{{{\text{DT}}_{\text{v}}^{2} + {\text{DT}}_{\text{i}}^{2} }}$$\end{document}</tex-math>
<mml:math id="M6" display="block">
<mml:mrow>
<mml:msup>
<mml:mover>
<mml:msub>
<mml:mtext>DT</mml:mtext>
<mml:mtext>vi</mml:mtext>
</mml:msub>
<mml:mo>¯</mml:mo>
</mml:mover>
<mml:mn>2</mml:mn>
</mml:msup>
<mml:mo>=</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:msubsup>
<mml:mtext>DT</mml:mtext>
<mml:mrow>
<mml:mtext>v</mml:mtext>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mo>∗</mml:mo>
<mml:msubsup>
<mml:mtext>DT</mml:mtext>
<mml:mrow>
<mml:mtext>i</mml:mtext>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msubsup>
</mml:mrow>
<mml:mrow>
<mml:msubsup>
<mml:mtext>DT</mml:mtext>
<mml:mrow>
<mml:mtext>v</mml:mtext>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mo>+</mml:mo>
<mml:msubsup>
<mml:mtext>DT</mml:mtext>
<mml:mrow>
<mml:mtext>i</mml:mtext>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msubsup>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
</mml:math>
<graphic xlink:href="221_2015_4426_Article_Equ3.gif" position="anchor"></graphic>
</alternatives>
</disp-formula>
where
<inline-formula id="IEq1">
<alternatives>
<tex-math id="M7">\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$${\text{DT}}_{\text{v}}$$\end{document}</tex-math>
<mml:math id="M8">
<mml:msub>
<mml:mtext>DT</mml:mtext>
<mml:mtext>v</mml:mtext>
</mml:msub>
</mml:math>
<inline-graphic xlink:href="221_2015_4426_Article_IEq1.gif"></inline-graphic>
</alternatives>
</inline-formula>
and
<inline-formula id="IEq2">
<alternatives>
<tex-math id="M9">\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$${\text{DT}}_{\text{i}}$$\end{document}</tex-math>
<mml:math id="M10">
<mml:msub>
<mml:mtext>DT</mml:mtext>
<mml:mtext>i</mml:mtext>
</mml:msub>
</mml:math>
<inline-graphic xlink:href="221_2015_4426_Article_IEq2.gif"></inline-graphic>
</alternatives>
</inline-formula>
are the DTs measured in the visual-only and inertial-only conditions, respectively, for every reference intensity and
<inline-formula id="IEq3">
<alternatives>
<tex-math id="M11">\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$\overline{{{\text{DT}}_{\text{vi}} }}$$\end{document}</tex-math>
<mml:math id="M12">
<mml:mover>
<mml:msub>
<mml:mtext>DT</mml:mtext>
<mml:mtext>vi</mml:mtext>
</mml:msub>
<mml:mo>¯</mml:mo>
</mml:mover>
</mml:math>
<inline-graphic xlink:href="221_2015_4426_Article_IEq3.gif"></inline-graphic>
</alternatives>
</inline-formula>
is the MLI prediction for the DT at the given reference velocity in the visual–inertial condition.</p>
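As an illustrative sketch (not part of the original analysis), the MLI prediction of Eq. 3 can be computed directly from the two unimodal DTs; the threshold values below are hypothetical:

```python
# Maximum likelihood integration (MLI) prediction for the
# visual-inertial differential threshold (Eq. 3):
#   DT_vi^2 = (DT_v^2 * DT_i^2) / (DT_v^2 + DT_i^2)
import math

def mli_predicted_dt(dt_v, dt_i):
    """Predicted visual-inertial DT from the unimodal DTs (Eq. 3)."""
    return math.sqrt((dt_v**2 * dt_i**2) / (dt_v**2 + dt_i**2))

# Hypothetical unimodal thresholds in deg/s:
dt_v, dt_i = 4.0, 3.0
print(mli_predicted_dt(dt_v, dt_i))  # 2.4, below min(dt_v, dt_i)
```

By construction the MLI prediction is always smaller than the smaller of the two unimodal thresholds, which is why measured bimodal DTs at or above the unimodal level argue against optimal integration.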
</sec>
<sec id="Sec12">
<title>Stimulus noise analysis</title>
<p>When reproducing motion commands, motion simulators inevitably introduce noise that affects the amplitude and spectral content of the intended inertial stimulus and could affect psychophysical measurements (Seidman
<xref ref-type="bibr" rid="CR45">2008</xref>
; Chaudhuri et al.
<xref ref-type="bibr" rid="CR8">2013</xref>
). As extensively discussed in Nesti et al. (
<xref ref-type="bibr" rid="CR36">2014b</xref>
), analysing the noise introduced in the stimulus by the simulator provides important insights into the study of self-motion perception, as it allows dissociation of the mechanical noise of the experimental setup from the noise that is inherent in the perceptual processes. A signal-to-noise ratio (SNR) analysis (Nesti et al.
<xref ref-type="bibr" rid="CR36">2014b</xref>
) of the motion stimuli was therefore conducted using an inertial measurement unit (STIM300 IMU, Sensonor AS, 250 Hz) rigidly mounted on the floor of the simulator cabin. The SNR expresses the relative amount of commanded signal with respect to motion noise and is therefore an indicator of similarity between commanded and reproduced motion. For every reference velocity, 20 stimulus repetitions were recorded and the noise was then extracted by removing the motion command from the recorded signal (Nesti et al.
<xref ref-type="bibr" rid="CR36">2014b</xref>
). Average SNRs were computed for every reference stimulus and tested by means of an ANCOVA to investigate the effect of motion intensity on the motion SNRs.
<disp-formula id="Equ4">
<label>4</label>
<alternatives>
<tex-math id="M13">\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$${\text{SNR}} = \left( {\frac{{{\text{rms}}_{\text{signal}} }}{{{\text{rms}}_{\text{noise}} }}} \right)^{2}$$\end{document}</tex-math>
<mml:math id="M14" display="block">
<mml:mrow>
<mml:mtext>SNR</mml:mtext>
<mml:mo>=</mml:mo>
<mml:msup>
<mml:mfenced close=")" open="(" separators="">
<mml:mfrac>
<mml:msub>
<mml:mtext>rms</mml:mtext>
<mml:mtext>signal</mml:mtext>
</mml:msub>
<mml:msub>
<mml:mtext>rms</mml:mtext>
<mml:mtext>noise</mml:mtext>
</mml:msub>
</mml:mfrac>
</mml:mfenced>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
</mml:math>
<graphic xlink:href="221_2015_4426_Article_Equ4.gif" position="anchor"></graphic>
</alternatives>
</disp-formula>
where rms<sub>signal</sub> and rms<sub>noise</sub> are the root mean squares of the recorded signal and of the noise signal, respectively (Nesti et al.
<xref ref-type="bibr" rid="CR36">2014b</xref>
).</p>
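The SNR computation of Eq. 4 can be sketched as follows; the 0.5 Hz velocity command and the noise level are illustrative, not the recorded simulator data:

```python
import numpy as np

def snr(recorded, command):
    """SNR of a reproduced motion stimulus (Eq. 4).

    Noise is the residual after removing the commanded motion from
    the recorded signal; SNR = (rms_signal / rms_noise)^2.
    """
    noise = recorded - command
    rms_signal = np.sqrt(np.mean(recorded**2))
    rms_noise = np.sqrt(np.mean(noise**2))
    return (rms_signal / rms_noise) ** 2

# Hypothetical example: a 0.5 Hz, 15 deg/s velocity command plus
# small Gaussian measurement noise
t = np.linspace(0.0, 2.0, 500)
command = 15.0 * np.sin(2.0 * np.pi * 0.5 * t)
recorded = command + np.random.default_rng(0).normal(0.0, 0.5, t.size)
print(snr(recorded, command))
```

Because rms<sub>signal</sub> grows with the commanded amplitude while simulator noise stays comparatively stable, this ratio naturally increases with stimulus intensity, as reported in the Results.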
</sec>
</sec>
<sec id="Sec13">
<title>Results</title>
<p>Motion analysis of the reference stimuli, illustrated in Fig. 
<xref rid="Fig4" ref-type="fig">4</xref>
, shows a significant increase in stimulus SNR for increasing amplitudes of the velocity command (
<italic>F</italic>
(1,78) = 113.8,
<italic>p</italic>
 < 0.001). This is a common feature of motion simulators (cf. Nesti et al.
<xref ref-type="bibr" rid="CR36">2014b</xref>
) and is expected to facilitate motion discrimination of higher as compared to lower motion intensities for those perceptual systems [including the human perceptual system (Greig
<xref ref-type="bibr" rid="CR18">1988</xref>
)] whose discrimination performance increases with SNRs. The fact that human DTs for self-motion increase for increasing motion intensities (Mallery et al.
<xref ref-type="bibr" rid="CR27">2010</xref>
; Naseri and Grant
<xref ref-type="bibr" rid="CR34">2012</xref>
; Nesti et al.
<xref ref-type="bibr" rid="CR35">2014a</xref>
, present study) could indicate an additional noise source inherent to the perceptual system and proportional to stimulus intensity.
<fig id="Fig4">
<label>Fig. 4</label>
<caption>
<p>SNR analysis of the motion stimuli employed in this study.
<bold>a</bold>
SNRs increase for increasing rotational intensities, resulting in the highest comparison stimulus (peak amplitude of 60 °/s) having an SNR approximately five times higher than the lowest comparison stimulus (peak amplitude of 15 °/s).
<italic>Error bars</italic>
represent ±1 SEM.
<bold>b</bold>
Comparison of the commanded and recorded motion profile for one stimulus with 15 °/s amplitude</p>
</caption>
<graphic xlink:href="221_2015_4426_Fig4_HTML" id="MO8"></graphic>
</fig>
</p>
<p>During the experiment, each condition took approximately 40 min and required 61 trials on average. No session needed to be terminated because of fatigue or other reasons, and no participant reported symptoms of motion sickness.</p>
<p>The absolute threshold measured in the inertial-only condition was 0.87 ± 0.13 °/s, a value that is consistent with previous studies (see, e.g. Zaichik et al.
<xref ref-type="bibr" rid="CR56">1999</xref>
; Mallery et al.
<xref ref-type="bibr" rid="CR27">2010</xref>
; Valko et al.
<xref ref-type="bibr" rid="CR50">2012</xref>
; Roditi and Crane
<xref ref-type="bibr" rid="CR43">2012</xref>
).</p>
<p>Fitting Eq. 
<xref rid="Equ1" ref-type="">1</xref>
(power function) to inertial, visual and visual–inertial DTs averaged for each reference velocity results in gain coefficients
<italic>k</italic>
<sub>i</sub>
,
<italic>k</italic>
<sub>v</sub>
and
<italic>k</italic>
<sub>vi</sub>
of 1.33, 0.55 and 0.76 and in exponent coefficients
<italic>a</italic>
<sub>i</sub>
,
<italic>a</italic>
<sub>v</sub>
and
<italic>a</italic>
<sub>vi</sub>
of 0.36, 0.62 and 0.49, where the subscripts i, v and vi stand for inertial, visual and visual–inertial, respectively (Fig. 
<xref rid="Fig5" ref-type="fig">5</xref>
). Goodness of fit is quantified by
<italic>R</italic>
<sup>2</sup>
coefficients of 0.88, 0.89 and 0.99, respectively. Note that the inertial condition qualitatively replicates the findings of Mallery et al. (
<xref ref-type="bibr" rid="CR27">2010</xref>
), where
<italic>k</italic>
<sub>i</sub>
 = 0.88 and
<italic>a</italic>
<sub>i</sub>
 = 0.37. The overall higher thresholds found in our study, reflected in the higher gain (1.33 vs 0.88), are likely due to the use of a different simulator. However, the similar exponents indicate that the effect of motion intensity on self-motion discrimination in darkness is consistent between studies. This is not surprising given the high similarity of the experimental methods. A linear fit resulted in intercept coefficients
<italic>q</italic>
<sub>i</sub>
,
<italic>q</italic>
<sub>v</sub>
and
<italic>q</italic>
<sub>vi</sub>
of 2.88, 1.73 and 2.05 and in slope coefficients
<italic>m</italic>
<sub>i</sub>
,
<italic>m</italic>
<sub>v</sub>
and
<italic>m</italic>
<sub>vi</sub>
of 0.05, 0.09 and 0.06.
<italic>R</italic>
<sup>2</sup>
coefficients are 0.91, 0.87 and 0.99 in the inertial-only, visual-only and visual–inertial condition, respectively. Although the linear model provides a slightly better fit than the power function model for the inertial-only condition, we performed the rmANCOVA using the power function model as it should generalize better for larger ranges of sensory input amplitudes (Guilford
<xref ref-type="bibr" rid="CR23">1932</xref>
; Teghtsoonian
<xref ref-type="bibr" rid="CR47">1971</xref>
).
<fig id="Fig5">
<label>Fig. 5</label>
<caption>
<p>DTs for yaw rotations with inertial (
<italic>blue</italic>
), visual (
<italic>green</italic>
) and visual–inertial (
<italic>red</italic>
) motion cues are well described by three power functions. DTs for visual cues are re-plotted from Nesti et al. (
<xref ref-type="bibr" rid="CR37">2015</xref>
).
<italic>Error bars</italic>
represent ±1 SEM (color figure online)</p>
</caption>
<graphic xlink:href="221_2015_4426_Fig5_HTML" id="MO9"></graphic>
</fig>
</p>
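The power-law fit of Eq. 1, linearized in log–log space as in Eq. 2, amounts to an ordinary least-squares regression. A minimal sketch, using noise-free values generated from the reported inertial coefficients rather than the actual participant data:

```python
import numpy as np

# Fit DT = k * S^a by linear regression in log-log space (Eq. 2):
#   log(DT) = log(k) + a * log(S)
def fit_power_law(S, DT):
    # np.polyfit returns [slope, intercept] for degree 1
    a, log_k = np.polyfit(np.log(S), np.log(DT), 1)
    return np.exp(log_k), a

# Reference velocities (deg/s) and synthetic DTs on the inertial fit:
S = np.array([15.0, 30.0, 45.0, 60.0])
DT = 1.33 * S**0.36
k, a = fit_power_law(S, DT)
print(k, a)  # recovers k = 1.33, a = 0.36
```

With real data the regression is applied to the measured DTs averaged per reference velocity, and the R² of the log–log fit quantifies the goodness of fit reported above.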
<p>The rmANCOVA revealed that DTs increased significantly with motion intensity (
<italic>F</italic>
(1,63) = 32.55,
<italic>p</italic>
 < 0.001), confirming previous results on self-motion discrimination in the presence of visual-only or inertial-only cues and extending the analysis to the case of visual–inertial cues. However, DTs did not depend on the cue type (
<italic>F</italic>
(2,63) = 1.59,
<italic>p</italic>
 = 0.21), i.e. whether participants experienced inertial, visual or visual–inertial stimuli. Predictions based on MLI were contradicted by measured visual–inertial DTs (Fig. 
<xref rid="Fig6" ref-type="fig">6</xref>
), with measured DTs significantly higher than predicted (
<italic>F</italic>
(1,40) = 5.93,
<italic>p</italic>
 = 0.02).
<fig id="Fig6">
<label>Fig. 6</label>
<caption>
<p>Comparison of measured DTs (
<italic>red circles</italic>
) and predicted DTs (
<italic>black squares</italic>
) based on MLI. Data do not support models of statistically optimal integration of visual and inertial sensory information.
<italic>Error bars</italic>
represent ±1 SEM (color figure online)</p>
</caption>
<graphic xlink:href="221_2015_4426_Fig6_HTML" id="MO10"></graphic>
</fig>
</p>
</sec>
<sec id="Sec14">
<title>Discussion</title>
<p>Human self-motion perception involves the contribution of different sensory information from the visual, vestibular, auditory and somatosensory systems. In this study, we investigated human discrimination of self-motion for a wide intensity range of yaw rotations in darkness (inertial-only motion cues) and with congruent visual–inertial motion cues. Measured DTs increase with motion intensity following a trend described well by a power function, in agreement with previous studies on rotations and translations in darkness (Mallery et al.
<xref ref-type="bibr" rid="CR27">2010</xref>
; Naseri and Grant
<xref ref-type="bibr" rid="CR34">2012</xref>
; Nesti et al.
<xref ref-type="bibr" rid="CR35">2014a</xref>
) and for visually induced self-motion perception (Nesti et al.
<xref ref-type="bibr" rid="CR37">2015</xref>
). The use of a power function is consistent with previous work on self-motion perception (Mallery et al.
<xref ref-type="bibr" rid="CR27">2010</xref>
; Nesti et al.
<xref ref-type="bibr" rid="CR35">2014a</xref>
,
<xref ref-type="bibr" rid="CR37">2015</xref>
) and resulted in a high goodness of fit. Note, however, that a Weber’s law fit also provides a similar goodness of fit.</p>
<p>In the next sections, the relationship between DTs and motion intensity and the sub-optimal integration that emerged from the present study are discussed in detail.</p>
<sec id="Sec15">
<title>Discrimination of yaw rotations</title>
<p>Constant discrimination performance (i.e. constant DTs) would be expected if the relationship between physical and perceived motion intensity was linear and affected by constant noise. Instead, we found that human DTs for self-motion are not independent from the intensity of the motion but rather increase for increasing motion intensities. The present study shows that such behaviour is present not only for visual-only (Nesti et al.
<xref ref-type="bibr" rid="CR37">2015</xref>
) and inertial-only conditions (Zaichik et al.
<xref ref-type="bibr" rid="CR56">1999</xref>
; Mallery et al.
<xref ref-type="bibr" rid="CR27">2010</xref>
; Naseri and Grant
<xref ref-type="bibr" rid="CR33">2011</xref>
; Nesti et al.
<xref ref-type="bibr" rid="CR35">2014a</xref>
, present study), but is encountered also for congruent visual and inertial sensory cues. This indicates that the perceptual processes converting physical to perceived motion are nonlinear and/or affected by stimulus-dependent noise (with the amount of noise increasing with the intensity of the physical stimulus). In contrast, responses to head rotations are linear with constant inter-trial variability for neurons in the vestibular afferents (Sadeghi et al.
<xref ref-type="bibr" rid="CR44">2007</xref>
), as well as for eye movements (Pulaski et al.
<xref ref-type="bibr" rid="CR41">1981</xref>
; Weber et al.
<xref ref-type="bibr" rid="CR54">2008</xref>
). A comparison between psychophysical and physiological studies suggests therefore that nonlinearities and/or stimulus-dependent increases in physiological noise occur further along the neuronal pathways processing sensory information and are likely due to central processes, multisensory integration mechanisms and/or cognitive factors. Interestingly, increased variability was observed in neural recordings from the vestibular nuclei of macaque monkeys for faster compared to slower inertial (Massot et al.
<xref ref-type="bibr" rid="CR29">2011</xref>
), visual (Waespe and Henn
<xref ref-type="bibr" rid="CR53">1977</xref>
) and visual–inertial (Allum et al.
<xref ref-type="bibr" rid="CR1">1976</xref>
) yaw rotational cues. We hypothesize that this increase in variability reduces discrimination performance at high stimulus velocities. Future studies are required to better quantify the relationship between stimulus intensity, neural activity and behavioural responses.</p>
<p>Stimulus-dependent DTs might also represent an efficient strategy of the CNS to account for how frequently a particular motion intensity occurs in everyday life. This would indeed result in smaller DTs for low rotation intensities, as they are more common than large rotations during everyday experience. To better illustrate this concept, we present in Fig. 
<xref rid="Fig7" ref-type="fig">7</xref>
rotational velocity intensities recorded with an inertial sensor (YEI 3-Space Sensor, 500 Hz) over 40 min of normal activity (running) and fit with an exponential distribution. A simple model with two parameters, gain and offset, is able to describe the increasing trend of DTs well.
<fig id="Fig7">
<label>Fig. 7</label>
<caption>
<p>Physical stimulus statistics obtained using an IMU are presented in a
<italic>histogram</italic>
where
<italic>bars</italic>
represent the normalized occurrence frequency of yaw rotational velocities during a 40 min running session. Normalized frequencies are obtained by dividing the
<italic>histogram</italic>
of yaw data samples by its area. Fitting data with an exponential distribution [
<italic>red line</italic>
,
<italic>y</italic>
(
<italic>S</italic>
) = 28.5 * exp(−28.5 * 
<italic>S</italic>
), where
<italic>S</italic>
is the stimulus intensity and
<italic>y</italic>
(
<italic>S</italic>
) is the exponential distribution] allows development of a simple model [Δ
<italic>S</italic>
 = 
<italic>a</italic>
 + 
<italic>b</italic>
 * 1/
<italic>y</italic>
(
<italic>S</italic>
)] that relates DTs to motion intensity by accounting for how frequently a particular intensity occurs.
<italic>Error bars</italic>
represent ±1 SEM (color figure online)</p>
</caption>
<graphic xlink:href="221_2015_4426_Fig7_HTML" id="MO11"></graphic>
</fig>
</p>
<p>Note that the simple model from Fig. 
<xref rid="Fig7" ref-type="fig">7</xref>
only serves as an illustrative example. A more systematic approach for using stimulus statistics to model perceptual responses is presented in Wei and Stocker (
<xref ref-type="bibr" rid="CR55">2013</xref>
).</p>
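The illustrative model of Fig. 7 can be written out explicitly. The exponential rate below follows the reported fit to the IMU recordings (with intensities expressed in rad/s), while the gain and offset values are hypothetical:

```python
import math

# Illustrative model from Fig. 7: DTs grow with the inverse of how
# often an intensity occurs in everyday motion. Occurrence frequency
# is modelled as an exponential distribution
#   y(S) = lam * exp(-lam * S),  lam = 28.5 (fit to running data)
# and the DT model is  dS(S) = a + b / y(S), offset a and gain b.
LAM = 28.5

def occurrence(S):
    return LAM * math.exp(-LAM * S)

def predicted_dt(S, a, b):
    # a, b are free parameters; the values used below are hypothetical
    return a + b / occurrence(S)

# The predicted DT increases monotonically with intensity S (rad/s):
dts = [predicted_dt(S, 0.5, 1e-3) for S in (0.25, 0.5, 0.75, 1.0)]
print(dts)
```

Rare (high) intensities thus receive large predicted DTs and common (low) intensities small ones, reproducing the increasing trend of the measured thresholds.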
</sec>
<sec id="Sec16">
<title>Multisensory integration</title>
<p>In this study, we investigated multisensory integration in a yaw intensity discrimination task by comparing DTs for inertial-only and visual-only motion stimuli with DTs for congruent (i.e. redundant) visual–inertial cues. Although a number of studies indicated MLI as a valid model of visual–inertial cue integration for the perception of translational and rotational motion (see, e.g. Gu et al.
<xref ref-type="bibr" rid="CR20">2008</xref>
; Fetsch et al.
<xref ref-type="bibr" rid="CR15">2009</xref>
; Butler et al.
<xref ref-type="bibr" rid="CR7">2011</xref>
; Prsa et al.
<xref ref-type="bibr" rid="CR40">2012</xref>
; Karmali et al.
<xref ref-type="bibr" rid="CR25">2014</xref>
), our data do not seem to follow MLI. This is only partially surprising, as we are not the first to report substantial deviations from MLI (Telford et al.
<xref ref-type="bibr" rid="CR49">1995</xref>
; Butler et al.
<xref ref-type="bibr" rid="CR6">2010</xref>
; De Winkel et al.
<xref ref-type="bibr" rid="CR9">2010</xref>
,
<xref ref-type="bibr" rid="CR10">2013</xref>
). However, when comparing this study with the existing literature, it is important to consider two main differences. First, the great majority of visual–inertial integration studies used a heading task, rather than a rotation intensity discrimination task as we have. Although MLI has been suggested as a general strategy for multisensory integration, the stimuli are radically different and even involve different vestibular sensors (note that a heading stimulus is composed by linear translations only); therefore, caution is advised in the generalization of the results. The only other studies, of which we are aware, that employed yaw stimuli are from Prsa et al. (
<xref ref-type="bibr" rid="CR40">2012</xref>
), whose findings support MLI, and from De Winkel et al. (
<xref ref-type="bibr" rid="CR10">2013</xref>
), where the MLI model is rejected. Second, the stimuli we chose for testing for MLI were designed to avoid visual–inertial conflicts. This required an inertial-only stimulus to which the visual system is insensitive (i.e. motion in darkness) and a visual-only stimulus to which the inertial systems are insensitive (i.e. rotation at constant velocity (Nesti et al.
<xref ref-type="bibr" rid="CR37">2015</xref>
), which lacks inertial accelerations). To the best of our knowledge, such stimuli have not been previously employed for validating MLI of visual–inertial motion cues. Instead, perceptual thresholds for visual-only cues were always investigated by removing the inertial component from the visual–inertial stimulus, a choice that has the clear benefit of minimizing experimental manipulations but might lead to visual–inertial sensory conflicts.</p>
<p>In the light of our experimental results, the visual–inertial DTs may be reconciled with MLI through the theory of causal inference (Beierholm et al.
<xref ref-type="bibr" rid="CR3">2008</xref>
; Shams and Beierholm
<xref ref-type="bibr" rid="CR46">2010</xref>
), which predicts that sensory integration is subordinate to whether stimuli are perceived as originating from the same physical event or not. Although in the present study, the visual and inertial stimuli were always congruent in representing head-centred rotations, we have to consider the possibility that they were not always perceived as congruent by the participants. Indeed, the simple fact that visual stimuli were computer-generated virtual objects might induce in the participants expectations of incongruence with the actual motion (the visual and inertial stimuli “belong” to different environments). Causal inference theory suggests that in this event, stimuli are segregated and participants respond based on the information coming from either one of the two sensory channels. Statistical models, other than causal inference, have been suggested in the literature to account for the possibility that stimuli are not integrated according to MLI because they are perceived as incongruent [see De Winkel et al. (
<xref ref-type="bibr" rid="CR10">2013</xref>
) for a review]. For instance, a “switching strategy” model could be applied to our data by assuming that stimuli perceived as congruent are integrated according to MLI, whereas stimuli perceived as incongruent are segregated and the response is based only on one sensory modality (e.g. the inertial). Equation 
<xref rid="Equ3" ref-type="">3</xref>
would then be modified as follows:
<disp-formula id="Equ5">
<label>5</label>
<alternatives>
<tex-math id="M15">\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$\overline{{\sigma_{\text{iv}} }}^{2} = \frac{{\sigma_{\text{i}}^{2} * \sigma_{\text{v}}^{2} }}{{\sigma_{\text{i}}^{2} + \sigma_{\text{v}}^{2} }}* \pi + \sigma_{\text{i}}^{2} *\left( {1 - \pi } \right)$$\end{document}</tex-math>
<mml:math id="M16" display="block">
<mml:mrow>
<mml:msup>
<mml:mover>
<mml:msub>
<mml:mi mathvariant="italic">σ</mml:mi>
<mml:mtext>iv</mml:mtext>
</mml:msub>
<mml:mo>¯</mml:mo>
</mml:mover>
<mml:mn>2</mml:mn>
</mml:msup>
<mml:mo>=</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:msubsup>
<mml:mi mathvariant="italic">σ</mml:mi>
<mml:mrow>
<mml:mtext>i</mml:mtext>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mo>∗</mml:mo>
<mml:msubsup>
<mml:mi mathvariant="italic">σ</mml:mi>
<mml:mrow>
<mml:mtext>v</mml:mtext>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msubsup>
</mml:mrow>
<mml:mrow>
<mml:msubsup>
<mml:mi mathvariant="italic">σ</mml:mi>
<mml:mrow>
<mml:mtext>i</mml:mtext>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mo>+</mml:mo>
<mml:msubsup>
<mml:mi mathvariant="italic">σ</mml:mi>
<mml:mrow>
<mml:mtext>v</mml:mtext>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msubsup>
</mml:mrow>
</mml:mfrac>
<mml:mo>∗</mml:mo>
<mml:mi mathvariant="italic">π</mml:mi>
<mml:mo>+</mml:mo>
<mml:msubsup>
<mml:mi mathvariant="italic">σ</mml:mi>
<mml:mrow>
<mml:mtext>i</mml:mtext>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mo>∗</mml:mo>
<mml:mfenced close=")" open="(" separators="">
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo>-</mml:mo>
<mml:mi mathvariant="italic">π</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:math>
<graphic xlink:href="221_2015_4426_Article_Equ5.gif" position="anchor"></graphic>
</alternatives>
</disp-formula>
leading to an estimated average probability (
<italic>π</italic>
) of 0.33 that participants perceived the stimuli as congruent.</p>
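Equation 5 is linear in π and can therefore be inverted to estimate the switching probability from the measured bimodal variance. A sketch with illustrative variances (the study's estimated average π is 0.33):

```python
# "Switching strategy" model (Eq. 5): with probability pi the cues
# are integrated according to MLI; otherwise the response relies on
# the inertial cue alone.
#   var_iv = pi * var_mli + (1 - pi) * var_i,
#   var_mli = (var_i * var_v) / (var_i + var_v)
def switching_variance(var_i, var_v, pi):
    var_mli = (var_i * var_v) / (var_i + var_v)
    return pi * var_mli + (1 - pi) * var_i

def estimate_pi(var_iv, var_i, var_v):
    # Invert Eq. 5 for pi, given a measured bimodal variance.
    var_mli = (var_i * var_v) / (var_i + var_v)
    return (var_i - var_iv) / (var_i - var_mli)

# Hypothetical unimodal variances:
var_i, var_v = 9.0, 16.0
var_iv = switching_variance(var_i, var_v, 0.33)
print(estimate_pi(var_iv, var_i, var_v))  # recovers 0.33
```

π = 1 recovers pure MLI and π = 0 pure inertial capture, so intermediate estimates such as 0.33 quantify how often the cues were plausibly treated as congruent.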
</sec>
<sec id="Sec17">
<title>Nonlinear self-motion perception models</title>
<p>Human self-motion perception models compute how people update the estimate of their motion in space in response to physical motion. Several models were developed combining knowledge of sensor dynamics, oculomotor responses, psychophysics and neurophysiology (Merfeld et al.
<xref ref-type="bibr" rid="CR31">1993</xref>
; Bos and Bles
<xref ref-type="bibr" rid="CR5">2002</xref>
; Zupan et al.
<xref ref-type="bibr" rid="CR57">2002</xref>
; Newman et al.
<xref ref-type="bibr" rid="CR38">2012</xref>
). Despite capturing a large variety of perceptual phenomena well, to the best of our knowledge no published model can account for the decrease in discrimination performance with increasing motion intensity. The experimental data collected in this study and by Nesti et al. (
<xref ref-type="bibr" rid="CR37">2015</xref>
) constitute a crucial step towards a more complete approach to self-motion perception models. Considering that the DTs measured here increase with stimulus intensity and were not affected by manipulation of the type of sensory information, a natural and straightforward choice would be to implement a single, common nonlinear process after the integration of the visual and inertial sensory pathways. Future studies should be dedicated to measuring rotational and translational multisensory DTs for the remaining degrees of freedom, implementing perceptual nonlinearities in computational models of human self-motion perception and validating these models using alternative motion profiles and experimental paradigms (e.g. maximum likelihood difference scaling, Maloney and Yang
<xref ref-type="bibr" rid="CR28">2003</xref>
).</p>
</sec>
<sec id="Sec18">
<title>Validity of study comparison</title>
<p>We compared our results with DTs for vection (Nesti et al.
<xref ref-type="bibr" rid="CR37">2015</xref>
) to test the hypothesis that redundant information from the visual and inertial sensory systems is perceptually combined in a statistically optimal fashion. Comparison of DTs measured here and in Nesti et al. (
<xref ref-type="bibr" rid="CR37">2015</xref>
) is particularly natural because of the high similarity between the studies (experimental setup, participants, procedure and stimulus intensities). However, two important differences should be discussed.</p>
<p>First, in the present study 0.5 Hz sinusoidal motion profiles were used, whereas in Nesti et al. (
<xref ref-type="bibr" rid="CR37">2015</xref>
) we measured vection DTs for constant (0 Hz) yaw rotations and stimuli were self-terminated by the participant to account for the high individual variability in vection onset time (Dichgans and Brandt
<xref ref-type="bibr" rid="CR11">1978</xref>
). These choices were made in order to measure DTs for stimuli as free of visual–inertial conflicts as possible, ensuring that all the visual motion is attributed to self-motion rather than object motion. Note how, for supra-threshold motion intensities, a visual stimulus at 0.5 Hz combined with no inertial motion will surely evoke a visual–inertial sensory conflict, as the continuous changes in the velocity of the visual environment conflict with the lack of acceleration signal from the inertial sensory systems. Evidence that conflicts between visual and inertial cues could confound self-motion perception is provided for instance by Johnson et al. (
<xref ref-type="bibr" rid="CR24">1999</xref>
), who showed that in bilateral labyrinthectomized patients, who lack one of the main sources of inertial information (i.e. the vestibular system), vection latencies are shorter than those of healthy subjects. Comparing DTs for constant rotations with DTs for visual–inertial rotations at 0.5 Hz requires, however, the assumption that visual responses remain constant within this frequency range. Previous studies indicate that postural, psychophysical and neurophysiological responses to visually simulated self-motion show low-pass characteristics (Robinson
<xref ref-type="bibr" rid="CR42">1977</xref>
; Mergner and Becker
<xref ref-type="bibr" rid="CR32">1990</xref>
; Duh et al.
<xref ref-type="bibr" rid="CR13">2004</xref>
). For instance, visual responses in the vestibular nuclei only begin to attenuate for frequencies higher than 0.03 Hz (Robinson
<xref ref-type="bibr" rid="CR42">1977</xref>
), while subjective reports of circular vection intensities remain approximately constant for frequencies between 0.025 and 0.8 Hz (Mergner and Becker
<xref ref-type="bibr" rid="CR32">1990</xref>
). It is, however, reasonable to expect that this attenuation is at least in part due to multisensory conflicts that arise at stimulus frequencies to which the inertial sensors respond. Further studies in labyrinthectomized patients might help in clarifying the dependency of visual responses on frequency, although it should not be forgotten that the vestibular system is not the only system contributing to self-motion perception.</p>
<p>The second important difference involves the different visual stimulus: whereas in the present study we employed a limited lifetime dot field, in Nesti et al. (
<xref ref-type="bibr" rid="CR37">2015</xref>
) we employed a 360° panoramic picture of a forest. Although it is known that different visual environments (e.g. with different spatial frequencies) affect vection onset time (Dichgans and Brandt
<xref ref-type="bibr" rid="CR11">1978</xref>
), we suggest that DTs after vection arises (i.e. when the visual environment is perceived as stationary) depend only on the velocity of the optic flow and not on the texture of the visual stimulus. This difference could be obviously eliminated in future studies by employing the same virtual environment for every condition and ensuring that it does not provide visual references.</p>
<p>To the best of our knowledge, this is the first study that focuses on minimizing sensory conflicts when testing MLI of visual–inertial cues for self-motion perception. While we acknowledge that the discussed differences between stimuli in the three conditions call for caution in interpreting the results, we believe that preventing confounds between object-motion and self-motion perception in psychophysical experiments is an important step towards understanding the perceptual processes underlying the integration of visual–inertial cues.</p>
</sec>
</sec>
</body>
<back>
<ack>
<p>We gratefully thank Maria Lächele, Reiner Boss, Michael Kerger and Harald Teufel for technical assistance and Mikhail Katliar for useful discussions. This work was supported by the Brain Korea 21 PLUS Program through the National Research Foundation of Korea funded by the Ministry of Education. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.</p>
</ack>
<ref-list id="Bib1">
<title>References</title>
<ref id="CR1">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Allum</surname>
<given-names>JHJ</given-names>
</name>
<name>
<surname>Graf</surname>
<given-names>W</given-names>
</name>
<name>
<surname>Dichgans</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Schmidt</surname>
<given-names>CL</given-names>
</name>
</person-group>
<article-title>Visual–vestibular interactions in the vestibular nuclei of the goldfish</article-title>
<source>Exp Brain Res</source>
<year>1976</year>
<volume>26</volume>
<fpage>463</fpage>
<lpage>485</lpage>
<pub-id pub-id-type="doi">10.1007/BF00238821</pub-id>
<pub-id pub-id-type="pmid">1087607</pub-id>
</element-citation>
</ref>
<ref id="CR3">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Beierholm</surname>
<given-names>U</given-names>
</name>
<name>
<surname>Körding</surname>
<given-names>K</given-names>
</name>
<name>
<surname>Shams</surname>
<given-names>L</given-names>
</name>
<name>
<surname>Ma</surname>
<given-names>W-J</given-names>
</name>
</person-group>
<article-title>Comparing Bayesian models for multisensory cue combination without mandatory integration</article-title>
<source>Adv Neural Inf Process Syst</source>
<year>2008</year>
<volume>20</volume>
<fpage>81</fpage>
<lpage>88</lpage>
</element-citation>
</ref>
<ref id="CR4">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bertolini</surname>
<given-names>G</given-names>
</name>
<name>
<surname>Ramat</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Laurens</surname>
<given-names>J</given-names>
</name>
<etal></etal>
</person-group>
<article-title>Velocity storage contribution to vestibular self-motion perception in healthy human subjects</article-title>
<source>J Neurophysiol</source>
<year>2011</year>
<volume>105</volume>
<fpage>209</fpage>
<lpage>223</lpage>
<pub-id pub-id-type="doi">10.1152/jn.00154.2010</pub-id>
<pub-id pub-id-type="pmid">21068266</pub-id>
</element-citation>
</ref>
<ref id="CR5">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bos</surname>
<given-names>JE</given-names>
</name>
<name>
<surname>Bles</surname>
<given-names>W</given-names>
</name>
</person-group>
<article-title>Theoretical considerations on canal–otolith interaction and an observer model</article-title>
<source>Biol Cybern</source>
<year>2002</year>
<volume>86</volume>
<fpage>191</fpage>
<lpage>207</lpage>
<pub-id pub-id-type="doi">10.1007/s00422-001-0289-7</pub-id>
<pub-id pub-id-type="pmid">12068786</pub-id>
</element-citation>
</ref>
<ref id="CR6">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Butler</surname>
<given-names>JS</given-names>
</name>
<name>
<surname>Smith</surname>
<given-names>ST</given-names>
</name>
<name>
<surname>Campos</surname>
<given-names>JL</given-names>
</name>
<name>
<surname>Bülthoff</surname>
<given-names>HH</given-names>
</name>
</person-group>
<article-title>Bayesian integration of visual and vestibular signals for heading</article-title>
<source>J Vis</source>
<year>2010</year>
<volume>10</volume>
<fpage>1</fpage>
<lpage>13</lpage>
<pub-id pub-id-type="doi">10.1167/10.11.23</pub-id>
</element-citation>
</ref>
<ref id="CR7">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Butler</surname>
<given-names>JS</given-names>
</name>
<name>
<surname>Campos</surname>
<given-names>JL</given-names>
</name>
<name>
<surname>Bülthoff</surname>
<given-names>HH</given-names>
</name>
<name>
<surname>Smith</surname>
<given-names>ST</given-names>
</name>
</person-group>
<article-title>The role of stereo vision in visual–vestibular integration</article-title>
<source>Seeing Perceiving</source>
<year>2011</year>
<volume>24</volume>
<fpage>453</fpage>
<lpage>470</lpage>
<pub-id pub-id-type="doi">10.1163/187847511X588070</pub-id>
<pub-id pub-id-type="pmid">21888763</pub-id>
</element-citation>
</ref>
<ref id="CR8">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Chaudhuri</surname>
<given-names>SE</given-names>
</name>
<name>
<surname>Karmali</surname>
<given-names>F</given-names>
</name>
<name>
<surname>Merfeld</surname>
<given-names>DM</given-names>
</name>
</person-group>
<article-title>Whole body motion-detection tasks can yield much lower thresholds than direction-recognition tasks: implications for the role of vibration</article-title>
<source>J Neurophysiol</source>
<year>2013</year>
<volume>110</volume>
<fpage>2764</fpage>
<lpage>2772</lpage>
<pub-id pub-id-type="doi">10.1152/jn.00091.2013</pub-id>
<pub-id pub-id-type="pmid">24068754</pub-id>
</element-citation>
</ref>
<ref id="CR9">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>De Winkel</surname>
<given-names>KN</given-names>
</name>
<name>
<surname>Werkhoven</surname>
<given-names>PJ</given-names>
</name>
<name>
<surname>Groen</surname>
<given-names>EL</given-names>
</name>
</person-group>
<article-title>Integration of visual and inertial cues in perceived heading of self-motion</article-title>
<source>J Vis</source>
<year>2010</year>
<volume>10</volume>
<fpage>1</fpage>
<lpage>10</lpage>
<pub-id pub-id-type="doi">10.1167/10.12.1</pub-id>
<pub-id pub-id-type="pmid">21047733</pub-id>
</element-citation>
</ref>
<ref id="CR10">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>De Winkel</surname>
<given-names>KN</given-names>
</name>
<name>
<surname>Soyka</surname>
<given-names>F</given-names>
</name>
<name>
<surname>Barnett-Cowan</surname>
<given-names>M</given-names>
</name>
<etal></etal>
</person-group>
<article-title>Integration of visual and inertial cues in the perception of angular self-motion</article-title>
<source>Exp Brain Res</source>
<year>2013</year>
<volume>231</volume>
<fpage>209</fpage>
<lpage>218</lpage>
<pub-id pub-id-type="doi">10.1007/s00221-013-3683-1</pub-id>
<pub-id pub-id-type="pmid">24013788</pub-id>
</element-citation>
</ref>
<ref id="CR11">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Dichgans</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Brandt</surname>
<given-names>T</given-names>
</name>
</person-group>
<article-title>Visual–vestibular interaction: effects on self-motion perception and postural control</article-title>
<source>Handb Sens Physiol Percept</source>
<year>1978</year>
<volume>8</volume>
<fpage>755</fpage>
<lpage>804</lpage>
</element-citation>
</ref>
<ref id="CR12">
<mixed-citation publication-type="other">Doya K, Ishii S, Pouget A, Rao RPN (eds) (2007) The Bayesian brain: probabilistic approaches to neural coding. MIT Press, Cambridge, MA</mixed-citation>
</ref>
<ref id="CR13">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Duh</surname>
<given-names>HB-L</given-names>
</name>
<name>
<surname>Parker</surname>
<given-names>DE</given-names>
</name>
<name>
<surname>Philips</surname>
<given-names>JO</given-names>
</name>
<name>
<surname>Furness</surname>
<given-names>TA</given-names>
</name>
</person-group>
<article-title>“Conflicting” motion cues to the visual and vestibular self-motion systems around 0.06 Hz evoke simulator sickness</article-title>
<source>Hum Factors</source>
<year>2004</year>
<volume>46</volume>
<fpage>142</fpage>
<lpage>153</lpage>
<pub-id pub-id-type="doi">10.1518/hfes.46.1.142.30384</pub-id>
<pub-id pub-id-type="pmid">15151161</pub-id>
</element-citation>
</ref>
<ref id="CR14">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ernst</surname>
<given-names>MO</given-names>
</name>
<name>
<surname>Bülthoff</surname>
<given-names>HH</given-names>
</name>
</person-group>
<article-title>Merging the senses into a robust percept</article-title>
<source>Trends Cogn Sci</source>
<year>2004</year>
<volume>8</volume>
<fpage>162</fpage>
<lpage>169</lpage>
<pub-id pub-id-type="doi">10.1016/j.tics.2004.02.002</pub-id>
<pub-id pub-id-type="pmid">15050512</pub-id>
</element-citation>
</ref>
<ref id="CR15">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Fetsch</surname>
<given-names>CR</given-names>
</name>
<name>
<surname>Turner</surname>
<given-names>AH</given-names>
</name>
<name>
<surname>DeAngelis</surname>
<given-names>GC</given-names>
</name>
<name>
<surname>Angelaki</surname>
<given-names>DE</given-names>
</name>
</person-group>
<article-title>Dynamic reweighting of visual and vestibular cues during self-motion perception</article-title>
<source>J Neurosci</source>
<year>2009</year>
<volume>29</volume>
<fpage>15601</fpage>
<lpage>15612</lpage>
<pub-id pub-id-type="doi">10.1523/JNEUROSCI.2574-09.2009</pub-id>
<pub-id pub-id-type="pmid">20007484</pub-id>
</element-citation>
</ref>
<ref id="CR16">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Gescheider</surname>
<given-names>GA</given-names>
</name>
</person-group>
<article-title>Psychophysical scaling</article-title>
<source>Annu Rev Psychol</source>
<year>1988</year>
<volume>39</volume>
<fpage>169</fpage>
<lpage>200</lpage>
<pub-id pub-id-type="doi">10.1146/annurev.ps.39.020188.001125</pub-id>
<pub-id pub-id-type="pmid">3278675</pub-id>
</element-citation>
</ref>
<ref id="CR17">
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Gescheider</surname>
<given-names>GA</given-names>
</name>
</person-group>
<source>Psychophysics the fundamentals</source>
<year>1997</year>
<publisher-loc>Mahwah</publisher-loc>
<publisher-name>Lawrence Erlbaum Associates</publisher-name>
</element-citation>
</ref>
<ref id="CR18">
<mixed-citation publication-type="other">Greig GL (1988) Masking of motion cues by random motion: comparison of human performance with a signal detection model. Tech Rep 313</mixed-citation>
</ref>
<ref id="CR19">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Grossman</surname>
<given-names>GE</given-names>
</name>
<name>
<surname>Leigh</surname>
<given-names>RJ</given-names>
</name>
<name>
<surname>Abel</surname>
<given-names>LA</given-names>
</name>
<etal></etal>
</person-group>
<article-title>Frequency and velocity of rotational head perturbations during locomotion</article-title>
<source>Exp Brain Res</source>
<year>1988</year>
<volume>70</volume>
<fpage>470</fpage>
<lpage>476</lpage>
<pub-id pub-id-type="doi">10.1007/BF00247595</pub-id>
<pub-id pub-id-type="pmid">3384048</pub-id>
</element-citation>
</ref>
<ref id="CR20">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Gu</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Angelaki</surname>
<given-names>DE</given-names>
</name>
<name>
<surname>Deangelis</surname>
<given-names>GC</given-names>
</name>
</person-group>
<article-title>Neural correlates of multisensory cue integration in macaque MSTd</article-title>
<source>Nat Neurosci</source>
<year>2008</year>
<volume>11</volume>
<fpage>1201</fpage>
<lpage>1210</lpage>
<pub-id pub-id-type="doi">10.1038/nn.2191</pub-id>
<pub-id pub-id-type="pmid">18776893</pub-id>
</element-citation>
</ref>
<ref id="CR21">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Guedry</surname>
<given-names>FEJ</given-names>
</name>
</person-group>
<article-title>Psychophysics of vestibular sensation</article-title>
<source>Handbook of Sensory Physiology, Vestibular System Part 2: Psychophysics, Applied Aspects and General Interpretations</source>
<year>1974</year>
<volume>6</volume>
<fpage>3</fpage>
<lpage>154</lpage>
</element-citation>
</ref>
<ref id="CR22">
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Guedry</surname>
<given-names>FE</given-names>
</name>
<name>
<surname>Benson</surname>
<given-names>AJ</given-names>
</name>
</person-group>
<source>Coriolis cross-coupling effects: disorienting and nauseogenic or not?</source>
<year>1976</year>
<publisher-loc>Pensacola</publisher-loc>
<publisher-name>Naval Aerospace Medical Research Lab</publisher-name>
</element-citation>
</ref>
<ref id="CR23">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Guilford</surname>
<given-names>JP</given-names>
</name>
</person-group>
<article-title>A generalized psychophysical law</article-title>
<source>Psychol Rev</source>
<year>1932</year>
<volume>39</volume>
<fpage>73</fpage>
<lpage>85</lpage>
<pub-id pub-id-type="doi">10.1037/h0070969</pub-id>
</element-citation>
</ref>
<ref id="CR24">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Johnson</surname>
<given-names>WH</given-names>
</name>
<name>
<surname>Sunahara</surname>
<given-names>FA</given-names>
</name>
<name>
<surname>Landolt</surname>
<given-names>JP</given-names>
</name>
</person-group>
<article-title>Importance of the vestibular system in visually induced nausea and self-vection</article-title>
<source>J Vestib Res</source>
<year>1999</year>
<volume>9</volume>
<fpage>83</fpage>
<lpage>87</lpage>
<pub-id pub-id-type="pmid">10378179</pub-id>
</element-citation>
</ref>
<ref id="CR25">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Karmali</surname>
<given-names>F</given-names>
</name>
<name>
<surname>Lim</surname>
<given-names>K</given-names>
</name>
<name>
<surname>Merfeld</surname>
<given-names>DM</given-names>
</name>
</person-group>
<article-title>Visual and vestibular perceptual thresholds each demonstrate better precision at specific frequencies and also exhibit optimal integration</article-title>
<source>J Neurophysiol</source>
<year>2014</year>
<volume>111</volume>
<fpage>2393</fpage>
<lpage>2403</lpage>
<pub-id pub-id-type="doi">10.1152/jn.00332.2013</pub-id>
<pub-id pub-id-type="pmid">24371292</pub-id>
</element-citation>
</ref>
<ref id="CR26">
<mixed-citation publication-type="other">Lackner JR, Graybiel A (1984) Influence of gravitoinertial force level on apparent magnitude of Coriolis cross-coupled angular accelerations and motion sickness. In: NATO-AGARD aerospace medical panel symposium on motion sickness: mechanisms, prediction, prevention and treatment. AGARD-CP-372, vol 22, pp 1–7</mixed-citation>
</ref>
<ref id="CR27">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Mallery</surname>
<given-names>RM</given-names>
</name>
<name>
<surname>Olomu</surname>
<given-names>OU</given-names>
</name>
<name>
<surname>Uchanski</surname>
<given-names>RM</given-names>
</name>
<etal></etal>
</person-group>
<article-title>Human discrimination of rotational velocities</article-title>
<source>Exp Brain Res</source>
<year>2010</year>
<volume>204</volume>
<fpage>11</fpage>
<lpage>20</lpage>
<pub-id pub-id-type="doi">10.1007/s00221-010-2288-1</pub-id>
<pub-id pub-id-type="pmid">20526711</pub-id>
</element-citation>
</ref>
<ref id="CR28">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Maloney</surname>
<given-names>LT</given-names>
</name>
<name>
<surname>Yang</surname>
<given-names>JN</given-names>
</name>
</person-group>
<article-title>Maximum likelihood difference scaling</article-title>
<source>J Vis</source>
<year>2003</year>
<volume>3</volume>
<fpage>573</fpage>
<lpage>585</lpage>
<pub-id pub-id-type="doi">10.1167/3.8.5</pub-id>
<pub-id pub-id-type="pmid">14632609</pub-id>
</element-citation>
</ref>
<ref id="CR29">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Massot</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Chacron</surname>
<given-names>MJ</given-names>
</name>
<name>
<surname>Cullen</surname>
<given-names>KE</given-names>
</name>
</person-group>
<article-title>Information transmission and detection thresholds in the vestibular nuclei: single neurons vs. population encoding</article-title>
<source>J Neurophysiol</source>
<year>2011</year>
<volume>105</volume>
<fpage>1798</fpage>
<lpage>1814</lpage>
<pub-id pub-id-type="doi">10.1152/jn.00910.2010</pub-id>
<pub-id pub-id-type="pmid">21307329</pub-id>
</element-citation>
</ref>
<ref id="CR30">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Merfeld</surname>
<given-names>DM</given-names>
</name>
</person-group>
<article-title>Signal detection theory and vestibular thresholds: I. Basic theory and practical considerations</article-title>
<source>Exp Brain Res</source>
<year>2011</year>
<volume>210</volume>
<fpage>389</fpage>
<lpage>405</lpage>
<pub-id pub-id-type="doi">10.1007/s00221-011-2557-7</pub-id>
<pub-id pub-id-type="pmid">21359662</pub-id>
</element-citation>
</ref>
<ref id="CR31">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Merfeld</surname>
<given-names>DM</given-names>
</name>
<name>
<surname>Young</surname>
<given-names>LR</given-names>
</name>
<name>
<surname>Oman</surname>
<given-names>CM</given-names>
</name>
<name>
<surname>Shelhamer</surname>
<given-names>MJ</given-names>
</name>
</person-group>
<article-title>A multidimensional model of the effect of gravity on the spatial orientation of the monkey</article-title>
<source>J Vestib Res</source>
<year>1993</year>
<volume>3</volume>
<fpage>141</fpage>
<lpage>161</lpage>
<pub-id pub-id-type="pmid">8275250</pub-id>
</element-citation>
</ref>
<ref id="CR32">
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Mergner</surname>
<given-names>T</given-names>
</name>
<name>
<surname>Becker</surname>
<given-names>W</given-names>
</name>
</person-group>
<person-group person-group-type="editor">
<name>
<surname>Warren</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Wertheim</surname>
<given-names>AH</given-names>
</name>
</person-group>
<article-title>Perception of horizontal self-rotations: multisensory and cognitive aspects</article-title>
<source>Perception and control of self-motion</source>
<year>1990</year>
<publisher-loc>Hillsdale London</publisher-loc>
<publisher-name>Lawrence Erlbaum</publisher-name>
<fpage>219</fpage>
<lpage>263</lpage>
</element-citation>
</ref>
<ref id="CR33">
<mixed-citation publication-type="other">Naseri A, Grant PR (2011) Difference thresholds: measurement and modeling. In: AIAA modeling and simulation technologies conference and exhibit, pp 1–10</mixed-citation>
</ref>
<ref id="CR34">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Naseri</surname>
<given-names>AR</given-names>
</name>
<name>
<surname>Grant</surname>
<given-names>PR</given-names>
</name>
</person-group>
<article-title>Human discrimination of translational accelerations</article-title>
<source>Exp Brain Res</source>
<year>2012</year>
<volume>218</volume>
<fpage>455</fpage>
<lpage>464</lpage>
<pub-id pub-id-type="doi">10.1007/s00221-012-3035-6</pub-id>
<pub-id pub-id-type="pmid">22354103</pub-id>
</element-citation>
</ref>
<ref id="CR35">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Nesti</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Barnett-Cowan</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Macneilage</surname>
<given-names>PR</given-names>
</name>
<name>
<surname>Bülthoff</surname>
<given-names>HH</given-names>
</name>
</person-group>
<article-title>Human sensitivity to vertical self-motion</article-title>
<source>Exp Brain Res</source>
<year>2014</year>
<volume>232</volume>
<fpage>303</fpage>
<lpage>314</lpage>
<pub-id pub-id-type="doi">10.1007/s00221-013-3741-8</pub-id>
<pub-id pub-id-type="pmid">24158607</pub-id>
</element-citation>
</ref>
<ref id="CR36">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Nesti</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Beykirch</surname>
<given-names>KA</given-names>
</name>
<name>
<surname>MacNeilage</surname>
<given-names>PR</given-names>
</name>
<etal></etal>
</person-group>
<article-title>The importance of stimulus noise analysis for self-motion studies</article-title>
<source>PLoS ONE</source>
<year>2014</year>
<volume>9</volume>
<fpage>e94570</fpage>
<pub-id pub-id-type="doi">10.1371/journal.pone.0094570</pub-id>
<pub-id pub-id-type="pmid">24755871</pub-id>
</element-citation>
</ref>
<ref id="CR37">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Nesti</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Beykirch</surname>
<given-names>KA</given-names>
</name>
<name>
<surname>Pretto</surname>
<given-names>P</given-names>
</name>
<name>
<surname>Bülthoff</surname>
<given-names>HH</given-names>
</name>
</person-group>
<article-title>Self-motion sensitivity to visual yaw rotations in humans</article-title>
<source>Exp Brain Res</source>
<year>2015</year>
<volume>233</volume>
<fpage>861</fpage>
<lpage>869</lpage>
<pub-id pub-id-type="doi">10.1007/s00221-014-4161-0</pub-id>
<pub-id pub-id-type="pmid">25511163</pub-id>
</element-citation>
</ref>
<ref id="CR38">
<mixed-citation publication-type="other">Newman MC, Lawson BD, Rupert AH, McGrath BJ (2012) The role of perceptual modeling in the understanding of spatial disorientation during flight and ground-based simulator training. In: AIAA modeling and simulation technologies conference and exhibit, vol 5009</mixed-citation>
</ref>
<ref id="CR39">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Nieuwenhuizen</surname>
<given-names>FM</given-names>
</name>
<name>
<surname>Bülthoff</surname>
<given-names>HH</given-names>
</name>
</person-group>
<article-title>The MPI cybermotion simulator: a novel research platform to investigate human control behavior</article-title>
<source>J Comput Sci Eng</source>
<year>2013</year>
<volume>7</volume>
<fpage>122</fpage>
<lpage>131</lpage>
<pub-id pub-id-type="doi">10.5626/JCSE.2013.7.2.122</pub-id>
</element-citation>
</ref>
<ref id="CR40">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Prsa</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Gale</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Blanke</surname>
<given-names>O</given-names>
</name>
</person-group>
<article-title>Self-motion leads to mandatory cue fusion across sensory modalities</article-title>
<source>J Neurophysiol</source>
<year>2012</year>
<volume>108</volume>
<fpage>2282</fpage>
<lpage>2291</lpage>
<pub-id pub-id-type="doi">10.1152/jn.00439.2012</pub-id>
<pub-id pub-id-type="pmid">22832567</pub-id>
</element-citation>
</ref>
<ref id="CR41">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Pulaski</surname>
<given-names>PD</given-names>
</name>
<name>
<surname>Zee</surname>
<given-names>DS</given-names>
</name>
<name>
<surname>Robinson</surname>
<given-names>DA</given-names>
</name>
</person-group>
<article-title>The behavior of the vestibulo-ocular reflex at high velocities of head rotation</article-title>
<source>Brain Res</source>
<year>1981</year>
<volume>222</volume>
<fpage>159</fpage>
<lpage>165</lpage>
<pub-id pub-id-type="doi">10.1016/0006-8993(81)90952-5</pub-id>
<pub-id pub-id-type="pmid">7296263</pub-id>
</element-citation>
</ref>
<ref id="CR42">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Robinson</surname>
<given-names>DA</given-names>
</name>
</person-group>
<article-title>Linear addition of optokinetic and vestibular signals in the vestibular nucleus</article-title>
<source>Exp Brain Res</source>
<year>1977</year>
<volume>30</volume>
<fpage>447</fpage>
<lpage>450</lpage>
<pub-id pub-id-type="pmid">413730</pub-id>
</element-citation>
</ref>
<ref id="CR43">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Roditi</surname>
<given-names>RE</given-names>
</name>
<name>
<surname>Crane</surname>
<given-names>BT</given-names>
</name>
</person-group>
<article-title>Directional asymmetries and age effects in human self-motion perception</article-title>
<source>J Assoc Res Otolaryngol</source>
<year>2012</year>
<volume>13</volume>
<fpage>381</fpage>
<lpage>401</lpage>
<pub-id pub-id-type="doi">10.1007/s10162-012-0318-3</pub-id>
<pub-id pub-id-type="pmid">22402987</pub-id>
</element-citation>
</ref>
<ref id="CR44">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Sadeghi</surname>
<given-names>SG</given-names>
</name>
<name>
<surname>Chacron</surname>
<given-names>MJ</given-names>
</name>
<name>
<surname>Taylor</surname>
<given-names>MC</given-names>
</name>
<name>
<surname>Cullen</surname>
<given-names>KE</given-names>
</name>
</person-group>
<article-title>Neural variability, detection thresholds, and information transmission in the vestibular system</article-title>
<source>J Neurosci</source>
<year>2007</year>
<volume>27</volume>
<fpage>771</fpage>
<lpage>781</lpage>
<pub-id pub-id-type="doi">10.1523/JNEUROSCI.4690-06.2007</pub-id>
<pub-id pub-id-type="pmid">17251416</pub-id>
</element-citation>
</ref>
<ref id="CR45">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Seidman</surname>
<given-names>SH</given-names>
</name>
</person-group>
<article-title>Translational motion perception and vestiboocular responses in the absence of non-inertial cues</article-title>
<source>Exp Brain Res</source>
<year>2008</year>
<volume>184</volume>
<fpage>13</fpage>
<lpage>29</lpage>
<pub-id pub-id-type="doi">10.1007/s00221-007-1072-3</pub-id>
<pub-id pub-id-type="pmid">17680240</pub-id>
</element-citation>
</ref>
<ref id="CR46">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Shams</surname>
<given-names>L</given-names>
</name>
<name>
<surname>Beierholm</surname>
<given-names>UR</given-names>
</name>
</person-group>
<article-title>Causal inference in perception</article-title>
<source>Trends Cogn Sci</source>
<year>2010</year>
<volume>14</volume>
<fpage>425</fpage>
<lpage>432</lpage>
<pub-id pub-id-type="doi">10.1016/j.tics.2010.07.001</pub-id>
<pub-id pub-id-type="pmid">20705502</pub-id>
</element-citation>
</ref>
<ref id="CR47">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Teghtsoonian</surname>
<given-names>R</given-names>
</name>
</person-group>
<article-title>On the exponents in Stevens’ law and the constant in Ekman’s law</article-title>
<source>Psychol Rev</source>
<year>1971</year>
<volume>78</volume>
<fpage>71</fpage>
<lpage>80</lpage>
<pub-id pub-id-type="doi">10.1037/h0030300</pub-id>
<pub-id pub-id-type="pmid">5545194</pub-id>
</element-citation>
</ref>
<ref id="CR48">
<mixed-citation publication-type="other">Telban RJ, Cardullo FM, Kelly LC (2005) Motion cueing algorithm development: piloted performance testing of the cueing algorithms. NASA/CR–2005-213747, pp 1–183</mixed-citation>
</ref>
<ref id="CR49">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Telford</surname>
<given-names>L</given-names>
</name>
<name>
<surname>Howard</surname>
<given-names>IP</given-names>
</name>
<name>
<surname>Ohmi</surname>
<given-names>M</given-names>
</name>
</person-group>
<article-title>Heading judgments during active and passive self-motion</article-title>
<source>Exp Brain Res</source>
<year>1995</year>
<volume>104</volume>
<fpage>502</fpage>
<lpage>510</lpage>
<pub-id pub-id-type="doi">10.1007/BF00231984</pub-id>
<pub-id pub-id-type="pmid">7589301</pub-id>
</element-citation>
</ref>
<ref id="CR50">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Valko</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Lewis</surname>
<given-names>RF</given-names>
</name>
<name>
<surname>Priesol</surname>
<given-names>AJ</given-names>
</name>
<name>
<surname>Merfeld</surname>
<given-names>DM</given-names>
</name>
</person-group>
<article-title>Vestibular labyrinth contributions to human whole-body motion discrimination</article-title>
<source>J Neurosci</source>
<year>2012</year>
<volume>32</volume>
<fpage>13537</fpage>
<lpage>13542</lpage>
<pub-id pub-id-type="doi">10.1523/JNEUROSCI.2157-12.2012</pub-id>
<pub-id pub-id-type="pmid">23015443</pub-id>
</element-citation>
</ref>
<ref id="CR51">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Van Atteveldt</surname>
<given-names>NM</given-names>
</name>
<name>
<surname>Formisano</surname>
<given-names>E</given-names>
</name>
<name>
<surname>Blomert</surname>
<given-names>L</given-names>
</name>
<name>
<surname>Goebel</surname>
<given-names>R</given-names>
</name>
</person-group>
<article-title>The effect of temporal asynchrony on the multisensory integration of letters and speech sounds</article-title>
<source>Cereb Cortex</source>
<year>2007</year>
<volume>17</volume>
<fpage>962</fpage>
<lpage>974</lpage>
<pub-id pub-id-type="doi">10.1093/cercor/bhl007</pub-id>
<pub-id pub-id-type="pmid">16751298</pub-id>
</element-citation>
</ref>
<ref id="CR52">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Van Wassenhove</surname>
<given-names>V</given-names>
</name>
<name>
<surname>Grant</surname>
<given-names>KW</given-names>
</name>
<name>
<surname>Poeppel</surname>
<given-names>D</given-names>
</name>
</person-group>
<article-title>Temporal window of integration in auditory–visual speech perception</article-title>
<source>Neuropsychologia</source>
<year>2007</year>
<volume>45</volume>
<fpage>598</fpage>
<lpage>607</lpage>
<pub-id pub-id-type="doi">10.1016/j.neuropsychologia.2006.01.001</pub-id>
<pub-id pub-id-type="pmid">16530232</pub-id>
</element-citation>
</ref>
<ref id="CR53">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Waespe</surname>
<given-names>W</given-names>
</name>
<name>
<surname>Henn</surname>
<given-names>V</given-names>
</name>
</person-group>
<article-title>Neuronal activity in the vestibular nuclei of the alert monkey during vestibular and optokinetic stimulation</article-title>
<source>Exp Brain Res</source>
<year>1977</year>
<volume>27</volume>
<fpage>523</fpage>
<lpage>538</lpage>
<pub-id pub-id-type="doi">10.1007/BF00239041</pub-id>
<pub-id pub-id-type="pmid">404173</pub-id>
</element-citation>
</ref>
<ref id="CR54">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Weber</surname>
<given-names>KP</given-names>
</name>
<name>
<surname>Aw</surname>
<given-names>ST</given-names>
</name>
<name>
<surname>Todd</surname>
<given-names>MJ</given-names>
</name>
<etal></etal>
</person-group>
<article-title>Head impulse test in unilateral vestibular loss: vestibulo-ocular reflex and catch-up saccades</article-title>
<source>Neurology</source>
<year>2008</year>
<volume>70</volume>
<fpage>454</fpage>
<lpage>463</lpage>
<pub-id pub-id-type="doi">10.1212/01.wnl.0000299117.48935.2e</pub-id>
<pub-id pub-id-type="pmid">18250290</pub-id>
</element-citation>
</ref>
<ref id="CR55">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wei</surname>
<given-names>X</given-names>
</name>
<name>
<surname>Stocker</surname>
<given-names>AA</given-names>
</name>
</person-group>
<article-title>Efficient coding provides a direct link between prior and likelihood in perceptual Bayesian inference</article-title>
<source>Adv Neural Inf Process Syst</source>
<year>2013</year>
<volume>25</volume>
<fpage>1313</fpage>
<lpage>1321</lpage>
</element-citation>
</ref>
<ref id="CR56">
<mixed-citation publication-type="other">Zaichik L, Rodchenko V, Rufov I, et al (1999) Acceleration perception. In: AIAA modeling and simulation technologies conference and exhibit, pp 512–520</mixed-citation>
</ref>
<ref id="CR57">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Zupan</surname>
<given-names>LH</given-names>
</name>
<name>
<surname>Merfeld</surname>
<given-names>DM</given-names>
</name>
<name>
<surname>Darlot</surname>
<given-names>C</given-names>
</name>
</person-group>
<article-title>Using sensory weighting to model the influence of canal, otolith and visual cues on spatial orientation and eye movements</article-title>
<source>Biol Cybern</source>
<year>2002</year>
<volume>86</volume>
<fpage>209</fpage>
<lpage>230</lpage>
<pub-id pub-id-type="doi">10.1007/s00422-001-0290-1</pub-id>
<pub-id pub-id-type="pmid">12068787</pub-id>
</element-citation>
</ref>
</ref-list>
</back>
</pmc>
<affiliations>
<list>
<country>
<li>Allemagne</li>
<li>Autriche</li>
<li>Corée du Sud</li>
</country>
<region>
<li>Bade-Wurtemberg</li>
<li>District de Tübingen</li>
</region>
<settlement>
<li>Séoul</li>
<li>Tübingen</li>
</settlement>
</list>
<tree>
<country name="Allemagne">
<region name="Bade-Wurtemberg">
<name sortKey="Nesti, Alessandro" sort="Nesti, Alessandro" uniqKey="Nesti A" first="Alessandro" last="Nesti">Alessandro Nesti</name>
</region>
<name sortKey="Beykirch, Karl A" sort="Beykirch, Karl A" uniqKey="Beykirch K" first="Karl A." last="Beykirch">Karl A. Beykirch</name>
<name sortKey="Bulthoff, Heinrich H" sort="Bulthoff, Heinrich H" uniqKey="Bulthoff H" first="Heinrich H." last="Bülthoff">Heinrich H. Bülthoff</name>
<name sortKey="Pretto, Paolo" sort="Pretto, Paolo" uniqKey="Pretto P" first="Paolo" last="Pretto">Paolo Pretto</name>
</country>
<country name="Autriche">
<noRegion>
<name sortKey="Beykirch, Karl A" sort="Beykirch, Karl A" uniqKey="Beykirch K" first="Karl A." last="Beykirch">Karl A. Beykirch</name>
</noRegion>
</country>
<country name="Corée du Sud">
<noRegion>
<name sortKey="Bulthoff, Heinrich H" sort="Bulthoff, Heinrich H" uniqKey="Bulthoff H" first="Heinrich H." last="Bülthoff">Heinrich H. Bülthoff</name>
</noRegion>
</country>
</tree>
</affiliations>
</record>

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/HapticV1/Data/Pmc/Checkpoint
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 000539 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/Pmc/Checkpoint/biblio.hfd -nk 000539 | SxmlIndent | more

To add a link to this page in the Wicri network

{{Explor lien
   |wiki=    Ticri/CIDE
   |area=    HapticV1
   |flux=    Pmc
   |étape=   Checkpoint
   |type=    RBID
   |clé=     PMC:4646930
   |texte=   Human discrimination of head-centred visual–inertial yaw rotations
}}

To generate wiki pages

HfdIndexSelect -h $EXPLOR_AREA/Data/Pmc/Checkpoint/RBID.i   -Sk "pubmed:26319547" \
       | HfdSelect -Kh $EXPLOR_AREA/Data/Pmc/Checkpoint/biblio.hfd   \
       | NlmPubMed2Wicri -a HapticV1 

Wicri

This area was generated with Dilib version V0.6.23.
Data generation: Mon Jun 13 01:09:46 2016. Site generation: Wed Mar 6 09:54:07 2024