Exploration server on haptic devices

Warning: this site is under development!
Warning: this site was generated automatically from raw corpora.
The information has therefore not been validated.

Modeling depth from motion parallax with the motion/pursuit ratio

Internal identifier: 003397 (Ncbi/Merge); previous: 003396; next: 003398

Authors: Mark Nawrot [United States]; Michael Ratzlaff [United States]; Zachary Leonard [United States]; Keith Stroyan [United States]

Source:

RBID: PMC:4186274

Abstract

The perception of unambiguous scaled depth from motion parallax relies on both retinal image motion and an extra-retinal pursuit eye movement signal. The motion/pursuit ratio represents a dynamic geometric model linking these two proximal cues to the ratio of depth to viewing distance. An important step in understanding the visual mechanisms serving the perception of depth from motion parallax is to determine the relationship between these stimulus parameters and empirically determined perceived depth magnitude. Observers compared perceived depth magnitude of dynamic motion parallax stimuli to static binocular disparity comparison stimuli at three different viewing distances, in both head-moving and head-stationary conditions. A stereo-viewing system provided ocular separation for stereo stimuli and monocular viewing of parallax stimuli. For each motion parallax stimulus, a point of subjective equality (PSE) was estimated for the amount of binocular disparity that generates the equivalent magnitude of perceived depth from motion parallax. Similar to previous results, perceived depth from motion parallax had significant foreshortening. Head-moving conditions produced even greater foreshortening due to the differences in the compensatory eye movement signal. An empirical version of the motion/pursuit law, termed the empirical motion/pursuit ratio, which models perceived depth magnitude from these stimulus parameters, is proposed.
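The motion/pursuit ratio described in the abstract can be illustrated with a small numeric sketch. Under the motion/pursuit law (Nawrot and Stroyan), the ratio of the retinal image motion rate to the smooth-pursuit eye-movement rate approximates relative depth, d/f, so multiplying by the viewing distance f yields an estimate of depth d. The function name and the exact form below are illustrative assumptions for this first-order geometric relation, not the paper's fitted empirical model.

```python
def motion_pursuit_depth(retinal_motion_rate, pursuit_rate, viewing_distance):
    """Approximate depth d relative to the fixation plane from the
    motion/pursuit ratio: d/f ~ (dtheta/dt) / (dalpha/dt).

    retinal_motion_rate -- retinal image motion of the point (deg/s)
    pursuit_rate        -- pursuit eye-movement rate (deg/s), nonzero
    viewing_distance    -- fixation distance f (any length unit)

    Returns depth in the same units as viewing_distance. This is the
    small-angle, first-order form; the paper's empirical ratio scales
    this geometric prediction to account for perceptual foreshortening.
    """
    if pursuit_rate == 0:
        raise ValueError("pursuit rate must be nonzero")
    return (retinal_motion_rate / pursuit_rate) * viewing_distance

# A point whose image drifts at 0.5 deg/s during a 5 deg/s pursuit,
# viewed from 1.0 m, is modeled as ~0.1 m from the fixation plane.
print(motion_pursuit_depth(0.5, 5.0, 1.0))
```

Note that foreshortening, as reported in the abstract, means perceived depth falls short of this geometric value, which is what the proposed empirical motion/pursuit ratio is meant to capture.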


URL:
DOI: 10.3389/fpsyg.2014.01103
PubMed: 25339926
PubMed Central: 4186274

Links to previous steps (curation, corpus, ...)


Links to Exploration step

PMC:4186274

The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">Modeling depth from motion parallax with the motion/pursuit ratio</title>
<author>
<name sortKey="Nawrot, Mark" sort="Nawrot, Mark" uniqKey="Nawrot M" first="Mark" last="Nawrot">Mark Nawrot</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<institution>Department of Psychology, Center for Visual and Cognitive Neuroscience, North Dakota State University</institution>
<country>Fargo, ND, USA</country>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Ratzlaff, Michael" sort="Ratzlaff, Michael" uniqKey="Ratzlaff M" first="Michael" last="Ratzlaff">Michael Ratzlaff</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<institution>Department of Psychology, Center for Visual and Cognitive Neuroscience, North Dakota State University</institution>
<country>Fargo, ND, USA</country>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Leonard, Zachary" sort="Leonard, Zachary" uniqKey="Leonard Z" first="Zachary" last="Leonard">Zachary Leonard</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<institution>Department of Psychology, Center for Visual and Cognitive Neuroscience, North Dakota State University</institution>
<country>Fargo, ND, USA</country>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Stroyan, Keith" sort="Stroyan, Keith" uniqKey="Stroyan K" first="Keith" last="Stroyan">Keith Stroyan</name>
<affiliation wicri:level="1">
<nlm:aff id="aff2">
<institution>Math Department, University of Iowa</institution>
<country>Iowa City, IA, USA</country>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PMC</idno>
<idno type="pmid">25339926</idno>
<idno type="pmc">4186274</idno>
<idno type="url">http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4186274</idno>
<idno type="RBID">PMC:4186274</idno>
<idno type="doi">10.3389/fpsyg.2014.01103</idno>
<date when="2014">2014</date>
<idno type="wicri:Area/Pmc/Corpus">001F20</idno>
<idno type="wicri:Area/Pmc/Curation">001F20</idno>
<idno type="wicri:Area/Pmc/Checkpoint">000B10</idno>
<idno type="wicri:Area/Ncbi/Merge">003397</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a" type="main">Modeling depth from motion parallax with the motion/pursuit ratio</title>
<author>
<name sortKey="Nawrot, Mark" sort="Nawrot, Mark" uniqKey="Nawrot M" first="Mark" last="Nawrot">Mark Nawrot</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<institution>Department of Psychology, Center for Visual and Cognitive Neuroscience, North Dakota State University</institution>
<country>Fargo, ND, USA</country>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Ratzlaff, Michael" sort="Ratzlaff, Michael" uniqKey="Ratzlaff M" first="Michael" last="Ratzlaff">Michael Ratzlaff</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<institution>Department of Psychology, Center for Visual and Cognitive Neuroscience, North Dakota State University</institution>
<country>Fargo, ND, USA</country>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Leonard, Zachary" sort="Leonard, Zachary" uniqKey="Leonard Z" first="Zachary" last="Leonard">Zachary Leonard</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<institution>Department of Psychology, Center for Visual and Cognitive Neuroscience, North Dakota State University</institution>
<country>Fargo, ND, USA</country>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Stroyan, Keith" sort="Stroyan, Keith" uniqKey="Stroyan K" first="Keith" last="Stroyan">Keith Stroyan</name>
<affiliation wicri:level="1">
<nlm:aff id="aff2">
<institution>Math Department, University of Iowa</institution>
<country>Iowa City, IA, USA</country>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
</analytic>
<series>
<title level="j">Frontiers in Psychology</title>
<idno type="eISSN">1664-1078</idno>
<imprint>
<date when="2014">2014</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass></textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">
<p>The perception of unambiguous scaled depth from motion parallax relies on both retinal image motion and an extra-retinal pursuit eye movement signal. The motion/pursuit ratio represents a dynamic geometric model linking these two proximal cues to the ratio of depth to viewing distance. An important step in understanding the visual mechanisms serving the perception of depth from motion parallax is to determine the relationship between these stimulus parameters and empirically determined perceived depth magnitude. Observers compared perceived depth magnitude of dynamic motion parallax stimuli to static binocular disparity comparison stimuli at three different viewing distances, in both head-moving and head-stationary conditions. A stereo-viewing system provided ocular separation for stereo stimuli and monocular viewing of parallax stimuli. For each motion parallax stimulus, a point of subjective equality (PSE) was estimated for the amount of binocular disparity that generates the equivalent magnitude of perceived depth from motion parallax. Similar to previous results, perceived depth from motion parallax had significant foreshortening. Head-moving conditions produced even greater foreshortening due to the differences in the compensatory eye movement signal. An empirical version of the motion/pursuit law, termed the empirical motion/pursuit ratio, which models perceived depth magnitude from these stimulus parameters, is proposed.</p>
</div>
</front>
<back>
<div1 type="bibliography">
<listBibl>
<biblStruct>
<analytic>
<author>
<name sortKey="Backus, B T" uniqKey="Backus B">B. T. Backus</name>
</author>
<author>
<name sortKey="Banks, M S" uniqKey="Banks M">M. S. Banks</name>
</author>
<author>
<name sortKey="Van Ee, R" uniqKey="Van Ee R">R. van Ee</name>
</author>
<author>
<name sortKey="Crowell, J A" uniqKey="Crowell J">J. A. Crowell</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Baird, J C" uniqKey="Baird J">J. C. Baird</name>
</author>
<author>
<name sortKey="Biersdorf, W R" uniqKey="Biersdorf W">W. R. Biersdorf</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Barlow, H B" uniqKey="Barlow H">H. B. Barlow</name>
</author>
<author>
<name sortKey="Blakemore, C" uniqKey="Blakemore C">C. Blakemore</name>
</author>
<author>
<name sortKey="Pettigrew, J D" uniqKey="Pettigrew J">J. D. Pettigrew</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bradshaw, M F" uniqKey="Bradshaw M">M. F. Bradshaw</name>
</author>
<author>
<name sortKey="Glennerster, A" uniqKey="Glennerster A">A. Glennerster</name>
</author>
<author>
<name sortKey="Rogers, B J" uniqKey="Rogers B">B. J. Rogers</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bradshaw, M F" uniqKey="Bradshaw M">M. F. Bradshaw</name>
</author>
<author>
<name sortKey="Parton, A D" uniqKey="Parton A">A. D. Parton</name>
</author>
<author>
<name sortKey="Eagle, R A" uniqKey="Eagle R">R. A. Eagle</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bradshaw, M F" uniqKey="Bradshaw M">M. F. Bradshaw</name>
</author>
<author>
<name sortKey="Parton, A D" uniqKey="Parton A">A. D. Parton</name>
</author>
<author>
<name sortKey="Glennerster, A" uniqKey="Glennerster A">A. Glennerster</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Brenner, E" uniqKey="Brenner E">E. Brenner</name>
</author>
<author>
<name sortKey="Smeets, J B" uniqKey="Smeets J">J. B. Smeets</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Brenner, E" uniqKey="Brenner E">E. Brenner</name>
</author>
<author>
<name sortKey="Van Damme, W J" uniqKey="Van Damme W">W. J. Van Damme</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Brenner, E" uniqKey="Brenner E">E. Brenner</name>
</author>
<author>
<name sortKey="Van Den Berg, A V" uniqKey="Van Den Berg A">A. V. van den Berg</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Brindley, G" uniqKey="Brindley G">G. Brindley</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Campbell, F W" uniqKey="Campbell F">F. W. Campbell</name>
</author>
<author>
<name sortKey="Maffei, L" uniqKey="Maffei L">L. Maffei</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Cavanagh, P" uniqKey="Cavanagh P">P. Cavanagh</name>
</author>
<author>
<name sortKey="Tyler, C W" uniqKey="Tyler C">C. W. Tyler</name>
</author>
<author>
<name sortKey="Favreau, O E" uniqKey="Favreau O">O. E. Favreau</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Cormack, R" uniqKey="Cormack R">R. Cormack</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Cormack, R" uniqKey="Cormack R">R. Cormack</name>
</author>
<author>
<name sortKey="Fox, R" uniqKey="Fox R">R. Fox</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Cumming, B G" uniqKey="Cumming B">B. G. Cumming</name>
</author>
<author>
<name sortKey="Johnston, E B" uniqKey="Johnston E">E. B. Johnston</name>
</author>
<author>
<name sortKey="Parker, A J" uniqKey="Parker A">A. J. Parker</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Diener, H C" uniqKey="Diener H">H. C. Diener</name>
</author>
<author>
<name sortKey="Wist, E R" uniqKey="Wist E">E. R. Wist</name>
</author>
<author>
<name sortKey="Dichgans, J" uniqKey="Dichgans J">J. Dichgans</name>
</author>
<author>
<name sortKey="Brant, T" uniqKey="Brant T">T. Brant</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Di Luca, M" uniqKey="Di Luca M">M. Di Luca</name>
</author>
<author>
<name sortKey="Domini, F" uniqKey="Domini F">F. Domini</name>
</author>
<author>
<name sortKey="Caudek, C" uniqKey="Caudek C">C. Caudek</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Dobbins, A C" uniqKey="Dobbins A">A. C. Dobbins</name>
</author>
<author>
<name sortKey="Jeo, R M" uniqKey="Jeo R">R. M. Jeo</name>
</author>
<author>
<name sortKey="Fiser, J" uniqKey="Fiser J">J. Fiser</name>
</author>
<author>
<name sortKey="Allman, J M" uniqKey="Allman J">J. M. Allman</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Domini, F" uniqKey="Domini F">F. Domini</name>
</author>
<author>
<name sortKey="Caudek, C" uniqKey="Caudek C">C. Caudek</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Domini, F" uniqKey="Domini F">F. Domini</name>
</author>
<author>
<name sortKey="Caudek, C" uniqKey="Caudek C">C. Caudek</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Domini, F" uniqKey="Domini F">F. Domini</name>
</author>
<author>
<name sortKey="Caudek, C" uniqKey="Caudek C">C. Caudek</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Durgin, F H" uniqKey="Durgin F">F. H. Durgin</name>
</author>
<author>
<name sortKey="Proffitt, D R" uniqKey="Proffitt D">D. R. Proffitt</name>
</author>
<author>
<name sortKey="Reinke, K S" uniqKey="Reinke K">K. S. Reinke</name>
</author>
<author>
<name sortKey="Olson, T J" uniqKey="Olson T">T. J. Olson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Filehne, W" uniqKey="Filehne W">W. Filehne</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Fisher, S K" uniqKey="Fisher S">S. K. Fisher</name>
</author>
<author>
<name sortKey="Ciuffreda, K J" uniqKey="Ciuffreda K">K. J. Ciuffreda</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Fleischl, E" uniqKey="Fleischl E">E. Fleischl</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Foster, R" uniqKey="Foster R">R. Foster</name>
</author>
<author>
<name sortKey="Fantoni, C" uniqKey="Fantoni C">C. Fantoni</name>
</author>
<author>
<name sortKey="Caudek, C" uniqKey="Caudek C">C. Caudek</name>
</author>
<author>
<name sortKey="Domini, F" uniqKey="Domini F">F. Domini</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Freeman, T C" uniqKey="Freeman T">T. C. Freeman</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Freeman, T C A" uniqKey="Freeman T">T. C. A. Freeman</name>
</author>
<author>
<name sortKey="Banks, M S" uniqKey="Banks M">M. S. Banks</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Freeman, T C A" uniqKey="Freeman T">T. C. A. Freeman</name>
</author>
<author>
<name sortKey="Fowler, T A" uniqKey="Fowler T">T. A. Fowler</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Frisby, J P" uniqKey="Frisby J">J. P. Frisby</name>
</author>
<author>
<name sortKey="Buckley, D" uniqKey="Buckley D">D. Buckley</name>
</author>
<author>
<name sortKey="Duke, P A" uniqKey="Duke P">P. A. Duke</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Garding, J" uniqKey="Garding J">J. Garding</name>
</author>
<author>
<name sortKey="Porrill, J" uniqKey="Porrill J">J. Porrill</name>
</author>
<author>
<name sortKey="Mayhew, J E" uniqKey="Mayhew J">J. E. Mayhew</name>
</author>
<author>
<name sortKey="Frisby, J P" uniqKey="Frisby J">J. P. Frisby</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Glennerster, A" uniqKey="Glennerster A">A. Glennerster</name>
</author>
<author>
<name sortKey="Rogers, B J" uniqKey="Rogers B">B. J. Rogers</name>
</author>
<author>
<name sortKey="Bradshaw, M F" uniqKey="Bradshaw M">M. F. Bradshaw</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gonzalez, F" uniqKey="Gonzalez F">F. Gonzalez</name>
</author>
<author>
<name sortKey="Perez, R" uniqKey="Perez R">R. Perez</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Graham, C H" uniqKey="Graham C">C. H. Graham</name>
</author>
<author>
<name sortKey="Baker, K E" uniqKey="Baker K">K. E. Baker</name>
</author>
<author>
<name sortKey="Hecht, M" uniqKey="Hecht M">M. Hecht</name>
</author>
<author>
<name sortKey="Lloyd, V V" uniqKey="Lloyd V">V. V. Lloyd</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Johnston, E B" uniqKey="Johnston E">E. B. Johnston</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Johnston, E B" uniqKey="Johnston E">E. B. Johnston</name>
</author>
<author>
<name sortKey="Cumming, B G" uniqKey="Cumming B">B. G. Cumming</name>
</author>
<author>
<name sortKey="Landy, M S" uniqKey="Landy M">M. S. Landy</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Liao, K" uniqKey="Liao K">K. Liao</name>
</author>
<author>
<name sortKey="Walker, M F" uniqKey="Walker M">M. F. Walker</name>
</author>
<author>
<name sortKey="Joshi, A" uniqKey="Joshi A">A. Joshi</name>
</author>
<author>
<name sortKey="Millard, R" uniqKey="Millard R">R. Millard</name>
</author>
<author>
<name sortKey="Leigh, R J" uniqKey="Leigh R">R. J. Leigh</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Longuet Higgins, H C" uniqKey="Longuet Higgins H">H. C. Longuet-Higgins</name>
</author>
<author>
<name sortKey="Prazdny, K" uniqKey="Prazdny K">K. Prazdny</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mack, A" uniqKey="Mack A">A. Mack</name>
</author>
<author>
<name sortKey="Herman, E" uniqKey="Herman E">E. Herman</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mackenzie, K J" uniqKey="Mackenzie K">K. J. MacKenzie</name>
</author>
<author>
<name sortKey="Murray, R F" uniqKey="Murray R">R. F. Murray</name>
</author>
<author>
<name sortKey="Wilcox, L M" uniqKey="Wilcox L">L. M. Wilcox</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mckee, S P" uniqKey="Mckee S">S. P. McKee</name>
</author>
<author>
<name sortKey="Taylor, D G" uniqKey="Taylor D">D. G. Taylor</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Miles, F A" uniqKey="Miles F">F. A. Miles</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Miles, F A" uniqKey="Miles F">F. A. Miles</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Miles, F A" uniqKey="Miles F">F. A. Miles</name>
</author>
<author>
<name sortKey="Busettini, C" uniqKey="Busettini C">C. Busettini</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mon Williams, M" uniqKey="Mon Williams M">M. Mon-Williams</name>
</author>
<author>
<name sortKey="Tresilian, J R" uniqKey="Tresilian J">J. R. Tresilian</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mon Williams, M" uniqKey="Mon Williams M">M. Mon-Williams</name>
</author>
<author>
<name sortKey="Tresilian, J R" uniqKey="Tresilian J">J. R. Tresilian</name>
</author>
<author>
<name sortKey="Roberts, A" uniqKey="Roberts A">A. Roberts</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Nadler, J W" uniqKey="Nadler J">J. W. Nadler</name>
</author>
<author>
<name sortKey="Nawrot, M" uniqKey="Nawrot M">M. Nawrot</name>
</author>
<author>
<name sortKey="Angelaki, D E" uniqKey="Angelaki D">D. E. Angelaki</name>
</author>
<author>
<name sortKey="Deangelis, G C" uniqKey="Deangelis G">G. C. DeAngelis</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Naji, J J" uniqKey="Naji J">J. J. Naji</name>
</author>
<author>
<name sortKey="Freeman, T C" uniqKey="Freeman T">T. C. Freeman</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Nakayama, K" uniqKey="Nakayama K">K. Nakayama</name>
</author>
<author>
<name sortKey="Loomis, J M" uniqKey="Loomis J">J. M. Loomis</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Nawrot, M" uniqKey="Nawrot M">M. Nawrot</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Nawrot, M" uniqKey="Nawrot M">M. Nawrot</name>
</author>
<author>
<name sortKey="Joyce, L" uniqKey="Joyce L">L. Joyce</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Nawrot, M" uniqKey="Nawrot M">M. Nawrot</name>
</author>
<author>
<name sortKey="Stroyan, K" uniqKey="Stroyan K">K. Stroyan</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ono, M E" uniqKey="Ono M">M. E. Ono</name>
</author>
<author>
<name sortKey="Rivest, J" uniqKey="Rivest J">J. Rivest</name>
</author>
<author>
<name sortKey="Ono, H" uniqKey="Ono H">H. Ono</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Philbeck, J W" uniqKey="Philbeck J">J. W. Philbeck</name>
</author>
<author>
<name sortKey="Loomis, J M" uniqKey="Loomis J">J. M. Loomis</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ramat, S" uniqKey="Ramat S">S. Ramat</name>
</author>
<author>
<name sortKey="Zee, D S" uniqKey="Zee D">D. S. Zee</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Read, J C" uniqKey="Read J">J. C. Read</name>
</author>
<author>
<name sortKey="Cumming, B G" uniqKey="Cumming B">B. G. Cumming</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ritter, M" uniqKey="Ritter M">M. Ritter</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Rogers, B" uniqKey="Rogers B">B. Rogers</name>
</author>
<author>
<name sortKey="Graham, M" uniqKey="Graham M">M. Graham</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Rogers, B J" uniqKey="Rogers B">B. J. Rogers</name>
</author>
<author>
<name sortKey="Bradshaw, M F" uniqKey="Bradshaw M">M. F. Bradshaw</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Souman, J L" uniqKey="Souman J">J. L. Souman</name>
</author>
<author>
<name sortKey="Freeman, T C A" uniqKey="Freeman T">T. C. A. Freeman</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Stroyan, K" uniqKey="Stroyan K">K. Stroyan</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Stroyan, K" uniqKey="Stroyan K">K. Stroyan</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Stroyan, K" uniqKey="Stroyan K">K. Stroyan</name>
</author>
<author>
<name sortKey="Nawrot, M" uniqKey="Nawrot M">M. Nawrot</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Tittle, J S" uniqKey="Tittle J">J. S. Tittle</name>
</author>
<author>
<name sortKey="Todd, J T" uniqKey="Todd J">J. T. Todd</name>
</author>
<author>
<name sortKey="Perotti, V J" uniqKey="Perotti V">V. J. Perotti</name>
</author>
<author>
<name sortKey="Norman, J F" uniqKey="Norman J">J. F. Norman</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Todd, J T" uniqKey="Todd J">J. T. Todd</name>
</author>
<author>
<name sortKey="Norman, J F" uniqKey="Norman J">J. F. Norman</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Trotter, Y" uniqKey="Trotter Y">Y. Trotter</name>
</author>
<author>
<name sortKey="Celebrini, S" uniqKey="Celebrini S">S. Celebrini</name>
</author>
<author>
<name sortKey="Stricanne, B" uniqKey="Stricanne B">B. Stricanne</name>
</author>
<author>
<name sortKey="Thorpe, S" uniqKey="Thorpe S">S. Thorpe</name>
</author>
<author>
<name sortKey="Imbert, M" uniqKey="Imbert M">M. Imbert</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Tufte, E" uniqKey="Tufte E">E. Tufte</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Turano, K A" uniqKey="Turano K">K. A. Turano</name>
</author>
<author>
<name sortKey="Heidenreich, S M" uniqKey="Heidenreich S">S. M. Heidenreich</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Turano, K A" uniqKey="Turano K">K. A. Turano</name>
</author>
<author>
<name sortKey="Massof, R W" uniqKey="Massof R">R. W. Massof</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Viguier, A" uniqKey="Viguier A">A. Viguier</name>
</author>
<author>
<name sortKey="Clement, G" uniqKey="Clement G">G. Clement</name>
</author>
<author>
<name sortKey="Trotter, Y" uniqKey="Trotter Y">Y. Trotter</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Von Helmholtz, H" uniqKey="Von Helmholtz H">H. von Helmholtz</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Von Hofsten, C" uniqKey="Von Hofsten C">C. Von Hofsten</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Wallach, H" uniqKey="Wallach H">H. Wallach</name>
</author>
<author>
<name sortKey="Zuckerman, C" uniqKey="Zuckerman C">C. Zuckerman</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Watamaniuk, S N J" uniqKey="Watamaniuk S">S. N. J. Watamaniuk</name>
</author>
<author>
<name sortKey="Grzywacz, N M" uniqKey="Grzywacz N">N. M. Grzywacz</name>
</author>
<author>
<name sortKey="Yuille, A L" uniqKey="Yuille A">A. L. Yuille</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Wertheim, A H" uniqKey="Wertheim A">A. H. Wertheim</name>
</author>
<author>
<name sortKey="Van Gelder, P" uniqKey="Van Gelder P">P. Van Gelder</name>
</author>
</analytic>
</biblStruct>
</listBibl>
</div1>
</back>
</TEI>
<pmc article-type="research-article">
<pmc-dir>properties open_access</pmc-dir>
<front>
<journal-meta>
<journal-id journal-id-type="nlm-ta">Front Psychol</journal-id>
<journal-id journal-id-type="iso-abbrev">Front Psychol</journal-id>
<journal-id journal-id-type="publisher-id">Front. Psychol.</journal-id>
<journal-title-group>
<journal-title>Frontiers in Psychology</journal-title>
</journal-title-group>
<issn pub-type="epub">1664-1078</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="pmid">25339926</article-id>
<article-id pub-id-type="pmc">4186274</article-id>
<article-id pub-id-type="doi">10.3389/fpsyg.2014.01103</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Psychology</subject>
<subj-group>
<subject>Original Research Article</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Modeling depth from motion parallax with the motion/pursuit ratio</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name>
<surname>Nawrot</surname>
<given-names>Mark</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
<xref ref-type="author-notes" rid="fn001">
<sup>*</sup>
</xref>
<uri xlink:type="simple" xlink:href="http://community.frontiersin.org/people/u/157703"></uri>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Ratzlaff</surname>
<given-names>Michael</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
<uri xlink:type="simple" xlink:href="http://community.frontiersin.org/people/u/166218"></uri>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Leonard</surname>
<given-names>Zachary</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
<uri xlink:type="simple" xlink:href="http://community.frontiersin.org/people/u/176767"></uri>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Stroyan</surname>
<given-names>Keith</given-names>
</name>
<xref ref-type="aff" rid="aff2">
<sup>2</sup>
</xref>
<uri xlink:type="simple" xlink:href="http://community.frontiersin.org/people/u/173635"></uri>
</contrib>
</contrib-group>
<aff id="aff1">
<sup>1</sup>
<institution>Department of Psychology, Center for Visual and Cognitive Neuroscience, North Dakota State University</institution>
<country>Fargo, ND, USA</country>
</aff>
<aff id="aff2">
<sup>2</sup>
<institution>Math Department, University of Iowa</institution>
<country>Iowa City, IA, USA</country>
</aff>
<author-notes>
<fn fn-type="edited-by">
<p>Edited by: Peter J. Bex, Harvard University, USA</p>
</fn>
<fn fn-type="edited-by">
<p>Reviewed by: Richard J. A. Van Wezel, Radboud University, Netherlands; Tom C. A. Freeman, Cardiff University, UK</p>
</fn>
<corresp id="fn001">*Correspondence: Mark Nawrot, Department of Psychology, Center for Visual and Cognitive Neuroscience, College of Science and Mathematics, North Dakota State University, NDSU Department 2765, PO Box 6050, Fargo, ND 58108-6050, USA e-mail:
<email xlink:type="simple">mark.nawrot@ndsu.edu</email>
</corresp>
<fn fn-type="other" id="fn002">
<p>This article was submitted to Perception Science, a section of the journal Frontiers in Psychology.</p>
</fn>
</author-notes>
<pub-date pub-type="epub">
<day>06</day>
<month>10</month>
<year>2014</year>
</pub-date>
<pub-date pub-type="collection">
<year>2014</year>
</pub-date>
<volume>5</volume>
<elocation-id>1103</elocation-id>
<history>
<date date-type="received">
<day>05</day>
<month>6</month>
<year>2014</year>
</date>
<date date-type="accepted">
<day>11</day>
<month>9</month>
<year>2014</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright © 2014 Nawrot, Ratzlaff, Leonard and Stroyan.</copyright-statement>
<copyright-year>2014</copyright-year>
<license license-type="open-access" xlink:href="http://creativecommons.org/licenses/by/4.0/">
<license-p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</license-p>
</license>
</permissions>
<abstract>
<p>The perception of unambiguous scaled depth from motion parallax relies on both retinal image motion and an extra-retinal pursuit eye movement signal. The motion/pursuit ratio represents a dynamic geometric model linking these two proximal cues to the ratio of depth to viewing distance. An important step in understanding the visual mechanisms serving the perception of depth from motion parallax is to determine the relationship between these stimulus parameters and empirically determined perceived depth magnitude. Observers compared perceived depth magnitude of dynamic motion parallax stimuli to static binocular disparity comparison stimuli at three different viewing distances, in both head-moving and head-stationary conditions. A stereo-viewing system provided ocular separation for stereo stimuli and monocular viewing of parallax stimuli. For each motion parallax stimulus, a point of subjective equality (PSE) was estimated for the amount of binocular disparity that generates the equivalent magnitude of perceived depth from motion parallax. Similar to previous results, perceived depth from motion parallax had significant foreshortening. Head-moving conditions produced even greater foreshortening due to the differences in the compensatory eye movement signal. An empirical version of the motion/pursuit law, termed the empirical motion/pursuit ratio, which models perceived depth magnitude from these stimulus parameters, is proposed.</p>
</abstract>
<kwd-group>
<kwd>depth perception</kwd>
<kwd>motion parallax</kwd>
<kwd>pursuit eye movements</kwd>
<kwd>stereopsis</kwd>
<kwd>motion perception</kwd>
</kwd-group>
<counts>
<fig-count count="9"></fig-count>
<table-count count="0"></table-count>
<equation-count count="6"></equation-count>
<ref-count count="75"></ref-count>
<page-count count="14"></page-count>
<word-count count="12257"></word-count>
</counts>
</article-meta>
</front>
<body>
<sec sec-type="introduction" id="s1">
<title>Introduction</title>
<p>The visual perception of depth is an important part of successful navigation and obstacle avoidance. While the human visual system can employ a variety of visual cues to object depth, the percept of depth created by the relative movements of objects in the scene is especially salient for the moving observer. This apparent relative movement of objectively stationary objects is created by the translation of the observer and is called motion parallax. Specifically, during the lateral translation we study, the observer's visual system maintains fixation on a particular stationary object in the scene by moving the eyes in the direction opposite the translation. Therefore, while the visual system ensures that this fixated object remains stationary on the observer's retina during the translation, presumably to maintain acuity for the visual information available at this location (Miles,
<xref rid="B39" ref-type="bibr">1998</xref>
), the retinal images of objects nearer and farther than the fixation point move in opposite directions on the observer's retina. This combination of retinal motion and eye pursuit was noted as far back as the 1925 edition of von Helmholtz (
<xref rid="B67" ref-type="bibr">1910/1925/1962</xref>
, Vol. III, p. 371) where the passage concludes, “…the probability is that both of them generally contribute to (forming estimates of distance) in some way, although it would be hard to say exactly how.” We now understand geometrically how the ratio of these rates determines relative depth and experimentally why the motion/pursuit ratio is a key quantity.</p>
<p>Information about the direction and speed of both the retinal image motion and the pursuit eye movement is used by the visual system to recover the relative depth of objects in the scene (Nawrot,
<xref rid="B45" ref-type="bibr">2003</xref>
; Naji and Freeman,
<xref rid="B44" ref-type="bibr">2004</xref>
; Nawrot and Joyce,
<xref rid="B46" ref-type="bibr">2006</xref>
; Nadler et al.,
<xref rid="B43" ref-type="bibr">2009</xref>
). The prototypical conditions for motion parallax (Figure
<xref ref-type="fig" rid="F1">1</xref>
, left panel) involve a translating observer maintaining fixation upon a static point (F) giving a viewing distance (
<italic>f</italic>
). The angle of the observer's eye (α) changes over time (at rate
<italic>d</italic>
α/
<italic>dt</italic>
or displacement
<italic>d</italic>
α in a small time increment), which corresponds to the magnitude of the observer's compensatory eye movement. While the fixation point remains stationary on the observer's retina, other points (illustrated here by point D) nearer or farther than the fixation point will move on the observer's retina by the change in angle θ (at rate
<italic>d</italic>
θ/
<italic>dt</italic>
or displacement
<italic>d</italic>
θ in a small time increment), which corresponds to the magnitude of retinal image motion of D. The relationship between these values (
<italic>d</italic>
θ and
<italic>d</italic>
α) and relative depth (
<italic>d/f</italic>
), between points F and D, is geometrically given by the motion/pursuit law (M/PL) (1),
<disp-formula id="E1">
<label>(1)</label>
<mml:math id="M1">
<mml:mrow>
<mml:mfrac>
<mml:mi>d</mml:mi>
<mml:mi>f</mml:mi>
</mml:mfrac>
<mml:mo>=</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mtext>d</mml:mtext>
<mml:mi>θ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mtext>d</mml:mtext>
<mml:mi>α</mml:mi>
</mml:mrow>
</mml:mfrac>
<mml:mfrac>
<mml:mtext>1</mml:mtext>
<mml:mrow>
<mml:mtext>1-d</mml:mtext>
<mml:mi>θ</mml:mi>
<mml:mtext>/d</mml:mtext>
<mml:mi>α</mml:mi>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
</mml:math>
</disp-formula>
which describes how the visual system could use the retinal motion signal (
<italic>d</italic>
θ) and the eye movement signal (
<italic>d</italic>
α) to determine the exact ratio of depth (
<italic>d</italic>
) to viewing distance (
<italic>f</italic>
) (Nawrot and Stroyan,
<xref rid="B47" ref-type="bibr">2009</xref>
; Stroyan and Nawrot,
<xref rid="B58" ref-type="bibr">2012</xref>
). Because of the small value of the motion/pursuit ratio in our experiments, the exact geometric law (1) can be replaced with the simple approximate geometric relationship that says the motion/pursuit ratio (M/PR) approximates relative depth (2):
<disp-formula id="E2">
<label>(2)</label>
<mml:math id="M2">
<mml:mrow>
<mml:mfrac>
<mml:mi>d</mml:mi>
<mml:mi>f</mml:mi>
</mml:mfrac>
<mml:mo>≈</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mtext>d</mml:mtext>
<mml:mi>θ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mtext>d</mml:mtext>
<mml:mi>α</mml:mi>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
</mml:math>
</disp-formula>
<fig id="F1" position="float">
<label>Figure 1</label>
<caption>
<p>
<bold>The left panel depicts one condition producing motion parallax, with the eye (and head) translating laterally to the left</bold>
. Point F is the fixation point at viewing distance (
<italic>f</italic>
), and D is the point with some depth (
<italic>d</italic>
) beyond F. The value
<italic>d</italic>
α gives the increment of eye rotation necessary to maintain fixation on F during an increment of the translation. The value
<italic>d</italic>
θ/
<italic>dt</italic>
gives the velocity of D on the retina. D in any other position would generate a different
<italic>d</italic>
θ increment with the same
<italic>d</italic>
α, and thus a different ratio. The right panel shows that the same values of
<italic>f, d</italic>
,
<italic>d</italic>
α, and
<italic>d</italic>
θ can be created with a translating stimulus and stationary observer.</p>
</caption>
<graphic xlink:href="fpsyg-05-01103-g0001"></graphic>
</fig>
</p>
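To make the relationship between Equations (1) and (2) concrete, the two can be compared numerically. The sketch below is our own illustration (the function names are hypothetical, not code from the study); it shows that the approximation is close for small motion/pursuit ratios and diverges as the ratio grows:

```python
def relative_depth_exact(dtheta, dalpha):
    """Exact motion/pursuit law (Equation 1): d/f = (dθ/dα) · 1/(1 − dθ/dα)."""
    ratio = dtheta / dalpha
    return ratio / (1.0 - ratio)

def relative_depth_approx(dtheta, dalpha):
    """Motion/pursuit ratio (Equation 2): d/f ≈ dθ/dα."""
    return dtheta / dalpha

# Over the ratio range used in the experiments reported below
# (maximum dθ/dα between 0.042 and 0.25):
for r in (0.042, 0.25):
    print(r, relative_depth_exact(r, 1.0))  # 0.042 → ≈0.0438; 0.25 → ≈0.333
```

At a ratio of 0.042 the exact and approximate relative depths agree to within about 4%, while at 0.25 the exact law gives 1/3 rather than 1/4, so the approximation is only reliable for the smaller ratios.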
<p>Of course, if the visual system has an available estimate of viewing distance (
<italic>f</italic>
), like the estimate of viewing distance required to recover depth from retinal disparity for binocular stereopsis, the M/PR could be used to describe the recovery of depth (
<italic>d</italic>
) from motion parallax in a process very similar to that for binocular stereopsis. In fact, there is even a strong geometrical similarity between the M/PR and the ratio of retinal disparity over binocular convergence (Stroyan,
<xref rid="B57" ref-type="bibr">2010</xref>
). Further, there is some evidence that the brain may use “affine” quantities to represent depth (Di Luca et al.,
<xref rid="B16" ref-type="bibr">2010</xref>
), so the affine M/PR may even have a neural representation. Additional details of how the current “motion/pursuit ratio” approach differs from previous “observer velocity” approaches to motion parallax (e.g., Nakayama and Loomis,
<xref rid="B44a" ref-type="bibr">1974</xref>
; Longuet-Higgins and Prazdny,
<xref rid="B35a" ref-type="bibr">1980</xref>
) are provided in Nawrot and Stroyan (
<xref rid="B47" ref-type="bibr">2009</xref>
) and Stroyan and Nawrot (
<xref rid="B58" ref-type="bibr">2012</xref>
). Interactive numeric demonstrations of the motion/pursuit approach can be viewed at Stroyan (
<xref rid="B56" ref-type="bibr">2008</xref>
).</p>
<p>In addition to the case of a translating observer, the same M/PL describes the relationship when the observer is stationary (Figure
<xref ref-type="fig" rid="F1">1</xref>
, right panel) and viewing a translating stimulus (Graham et al.,
<xref rid="B33" ref-type="bibr">1948</xref>
). The primary difference between the two viewing conditions is that
<italic>d</italic>
α comprises a pursuit signal in the observer stationary case, while
<italic>d</italic>
α is a combination of pursuit and translational vestibular ocular response (tVOR) signals in the observer translation case. Previous work (Nawrot and Joyce,
<xref rid="B46" ref-type="bibr">2006</xref>
) has shown that only the pursuit component of the
<italic>d</italic>
α signal is used for motion parallax. Therefore, these two conditions should produce different estimates of perceived depth magnitude. This is one of the hypotheses to be investigated here.</p>
<p>While the M/PR provides a reasonable approximation of the M/PL, neither explains the perceptual underestimate, or foreshortening, of perceived depth from motion parallax (e.g., Ono et al.,
<xref rid="B48" ref-type="bibr">1986</xref>
; Domini and Caudek,
<xref rid="B18" ref-type="bibr">2003</xref>
; Nawrot,
<xref rid="B45" ref-type="bibr">2003</xref>
). For instance, in two experiments Durgin et al. (
<xref rid="B21" ref-type="bibr">1995</xref>
) show motion parallax foreshortening between about 25% and 125% compared to comparable binocular disparity stimuli. This led them to conclude, “ … geometrically equivalent depth information does not lead to the same quantitative perception [of depth] when presented through motion parallax as when presented through binocular disparity.” This is clear evidence that motion parallax and binocular disparity generate different perceptual estimates of depth given the same underlying geometry in a scene.</p>
<p>More recently, McKee and Taylor (
<xref rid="B37" ref-type="bibr">2010</xref>
) reported that motion parallax difference thresholds are about 10 times larger than comparable thresholds with binocular disparity. While studying the precision of depth judgments in a “natural setting”—objects and rods presented on a stage—McKee and Taylor (
<xref rid="B37" ref-type="bibr">2010</xref>
) found that 8–10 cm lateral head translations did not improve static monocular depth thresholds for most observers at the 112 cm viewing distance. Moreover, depth thresholds for all three observers were about a log
<sub>10</sub>
unit higher for motion parallax than for the comparable binocular disparity conditions. This indicates that observers are much less sensitive when using motion parallax, compared to binocular disparity, to recover information about the geometry of a visual scene. The magnitude of the perceptual foreshortening suggested by Durgin et al. (
<xref rid="B21" ref-type="bibr">1995</xref>
) and by McKee and Taylor (
<xref rid="B37" ref-type="bibr">2010</xref>
) is large indeed, and presents a challenge to the purely geometric analysis provided by the M/PL and M/PR. Other important factors must be involved.</p>
<p>These other important factors are the accuracy of the actual eye movement and retinal image velocity signals recovered by the visual system. The depth estimate provided by the M/PR model assumes that the visual system has accurate internal signals regarding retinal image motion and the pursuit eye movement. While this is a reasonable starting point when considering the underlying geometry and how it might theoretically provide the information necessary to recover depth from motion parallax, this assumption of accurate motion signals is a less reasonable assumption for a model of human perception of depth from motion parallax. We know that the accuracy of perceived motion velocity is affected by disparate stimulus parameters such as contrast (Campbell and Maffei,
<xref rid="B10" ref-type="bibr">1981</xref>
), color (Cavanagh et al.,
<xref rid="B11" ref-type="bibr">1984</xref>
), dot density (Watamaniuk et al.,
<xref rid="B70" ref-type="bibr">1993</xref>
), and spatial frequency (Diener et al.,
<xref rid="B15" ref-type="bibr">1976</xref>
; Campbell and Maffei,
<xref rid="B10" ref-type="bibr">1981</xref>
). Moreover, the accuracy of internal eye movement signals, studied in the context of combination with retinal image motion for the perception of head-relative motion, can be quite inaccurate (Freeman and Banks,
<xref rid="B27" ref-type="bibr">1998</xref>
; Freeman,
<xref rid="B26" ref-type="bibr">2001</xref>
; Turano and Massof,
<xref rid="B65" ref-type="bibr">2001</xref>
; Souman and Freeman,
<xref rid="B55" ref-type="bibr">2008</xref>
). To model these inaccuracies, these studies have applied linear gain factors or non-linear transducers to the retinal image velocity and eye movement velocity signals to explain velocity-matching results. For instance, in one of the earliest explorations of how retinal image motion and pursuit eye movements affect perceived slant, Freeman and Fowler (
<xref rid="B28" ref-type="bibr">2000</xref>
) used a linear model of motion and pursuit combination to account for perceived speed, which in turn explained changes in perceived slant. We follow a similar rationale in the current study.</p>
<p>In the present work, the same retinal image velocity and eye movement velocity signals are employed, but for the markedly different purpose of recovering depth, not motion. It is interesting to know whether these signals display similar accuracy for the perception of depth from motion parallax as they do for the perception of head-centric motion. Therefore, the goal of the current study is to: (1) determine how well the M/PR predicts the perception of relative depth from motion parallax in psychophysical observers, and (2) determine whether non-linear transducers applied to the retinal motion signal and to the pursuit signal can produce an “empirical” M/PR model that accounts for the perception of depth magnitude from motion parallax.</p>
</sec>
<sec sec-type="materials|methods" id="s2">
<title>Materials and methods</title>
<p>Observers completed a 2IFC task comparing depth magnitude from a motion parallax stimulus with the depth from a binocular disparity stimulus (e.g., Nawrot,
<xref rid="B45" ref-type="bibr">2003</xref>
; MacKenzie et al.,
<xref rid="B36a" ref-type="bibr">2008</xref>
; Domini and Caudek,
<xref rid="B19" ref-type="bibr">2009</xref>
,
<xref rid="B20" ref-type="bibr">2010</xref>
). The experiment included six conditions: two head-translating motion parallax conditions at two viewing distances (36 and 72 cm), and four head-stationary conditions at three viewing distances (36, 54, and 72 cm). Two head-stationary conditions were run at the 36 cm viewing distance, each having a different range of pursuit (
<italic>d</italic>
α) speeds. Both conditions at 36 cm included the 4.95 d/s pursuit speed, providing a partial replication of those data points. For each motion parallax stimulus, the point of subjective equality (PSE) between the two stimuli (
<italic>d
<sub>stereo</sub>
</italic>
≈ <italic>d</italic>
<sub>
<italic>mp</italic>
</sub>
) allowed the particular stereo stimulus parameters to provide a reasonable estimate of the depth from a particular set of physical motion parallax parameters. It is then possible to compare empirical estimates of
<italic>d
<sub>mp</sub>
</italic>
to the theoretical depth predicted by the parameters of the M/PR, and determine how these empirical estimates differ from the geometric model.</p>
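The paper does not specify the PSE-fitting procedure in code; a common choice for a 2IFC method-of-constant-stimuli design is to fit a cumulative Gaussian to the proportion of trials on which the disparity stimulus was judged deeper, taking its 50% point as the PSE. The following is a minimal, hypothetical sketch of that idea (a crude grid search; real analyses would use a maximum-likelihood fit, possibly with lapse parameters):

```python
import math

def cumulative_gaussian(x, mu, sigma):
    """Modeled probability of judging the disparity stimulus 'deeper' at disparity x."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def fit_pse(disparities, p_deeper):
    """Least-squares grid search over (mu, sigma); mu is the PSE,
    i.e., the disparity perceptually matched to the parallax stimulus."""
    lo, hi = min(disparities), max(disparities)
    best_mu, best_sigma, best_err = lo, hi - lo, float("inf")
    for i in range(201):
        mu = lo + (hi - lo) * i / 200.0
        for j in range(1, 201):
            sigma = (hi - lo) * j / 200.0
            err = sum((cumulative_gaussian(x, mu, sigma) - p) ** 2
                      for x, p in zip(disparities, p_deeper))
            if err < best_err:
                best_mu, best_sigma, best_err = mu, sigma, err
    return best_mu, best_sigma
```

With noiseless synthetic data generated from a known psychometric function, `fit_pse` recovers the 50% point to within the grid resolution; with real trial data the same PSE would then be read off as the disparity equivalent of the motion parallax depth.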
<p>The accuracy of the motion parallax depth magnitude estimates depends on how closely perceived depth from binocular disparity represents the binocular stimulus geometry (depth constancy). While there are examples of systematic distortions in perceived depth from binocular disparity (e.g., Johnston,
<xref rid="B34" ref-type="bibr">1991</xref>
; Tittle et al.,
<xref rid="B60" ref-type="bibr">1995</xref>
; Todd and Norman,
<xref rid="B61" ref-type="bibr">2003</xref>
), most failures of depth constancy are linked to a mis-estimate of viewing distance due to “reduced viewing conditions” (Wallach and Zuckerman,
<xref rid="B69" ref-type="bibr">1963</xref>
; Cumming et al.,
<xref rid="B14" ref-type="bibr">1991</xref>
; Johnston et al.,
<xref rid="B34a" ref-type="bibr">1994</xref>
; Durgin et al.,
<xref rid="B21" ref-type="bibr">1995</xref>
; Todd and Norman,
<xref rid="B61" ref-type="bibr">2003</xref>
; Domini and Caudek,
<xref rid="B20" ref-type="bibr">2010</xref>
). Therefore, the current study employs “full-cue” viewing conditions that optimize distance perception (Mon-Williams and Tresilian,
<xref rid="B41" ref-type="bibr">1999</xref>
) and have led to accurate depth perception (Philbeck and Loomis,
<xref rid="B49" ref-type="bibr">1997</xref>
). Additionally, to optimize the inter-cue depth magnitude comparison, identical viewing conditions were used for both the motion parallax and binocular disparity stimuli. Any distortion in perceived depth resulting from a mis-estimate of viewing distance should affect both motion parallax and binocular disparity. Furthermore, in the Discussion Section below we include an analysis of the data that incorporates a systematic error in the perception of stereoscopic depth, based on Johnston (
<xref rid="B34" ref-type="bibr">1991</xref>
). That analysis determines the effect of such a systematic error in stereo depth constancy on the exponents of the empirical motion/pursuit ratio, and shows that the changes due to a purported mis-estimate of depth from binocular stereopsis are small compared to the systematic under-estimate of perceived depth from motion parallax.</p>
<p>Here the “full-cue” conditions were implemented with a Z-Screen (Stereographics; San Rafael, CA) stereo viewing system that allows natural binocular viewing of the stimulus monitor without mirrors, prisms, or active shutter-glasses. Moreover, viewing distances were less than 80 cm, the distance within which convergence provides the most reliable cue to viewing distance (Von Hofsten,
<xref rid="B68" ref-type="bibr">1976</xref>
; Ritter,
<xref rid="B52" ref-type="bibr">1977</xref>
; Brenner and Van Damme,
<xref rid="B7" ref-type="bibr">1998</xref>
; Brenner and Smeets,
<xref rid="B6" ref-type="bibr">2000</xref>
; Mon-Williams et al.,
<xref rid="B42" ref-type="bibr">2000</xref>
; Viguier et al.,
<xref rid="B66" ref-type="bibr">2001</xref>
) and the distance within which accommodation may contribute as a cue to viewing distance (Fisher and Ciuffreda,
<xref rid="B23" ref-type="bibr">1988</xref>
; Mon-Williams and Tresilian,
<xref rid="B41" ref-type="bibr">1999</xref>
). These short viewing conditions, and the use of passive stereo-viewing glasses, ensured that the vertical disparity information available to scale the distance of the display and monitor (Garding et al.,
<xref rid="B30" ref-type="bibr">1995</xref>
; Rogers and Bradshaw,
<xref rid="B54" ref-type="bibr">1995</xref>
; Bradshaw et al.,
<xref rid="B3" ref-type="bibr">1996</xref>
; Read and Cumming,
<xref rid="B51" ref-type="bibr">2006</xref>
) was large and unobstructed. Therefore, these viewing conditions were optimized for the use of depth-scaling cues such as convergence, accommodation, vertical disparity, and their possible combination (Backus et al.,
<xref rid="B1" ref-type="bibr">1999</xref>
).</p>
<p>The role of the particular psychophysical task in failures of depth constancy in binocular stereopsis has also been examined (Frisby et al.,
<xref rid="B29" ref-type="bibr">1996</xref>
; Glennerster et al.,
<xref rid="B31" ref-type="bibr">1996</xref>
; Bradshaw et al.,
<xref rid="B4" ref-type="bibr">1998</xref>
,
<xref rid="B5" ref-type="bibr">2000</xref>
; Todd and Norman,
<xref rid="B61" ref-type="bibr">2003</xref>
). Psychophysical depth-matching tasks were found to produce more accurate depth constancy than shape-judgment tasks. Such depth-matching tasks are considered “Class A” observations (Brindley,
<xref rid="B9" ref-type="bibr">1970</xref>
), in which the observer directly compares the two sensations of depth produced by viewing two stimuli. Class A observations are believed to be more direct than Class B observations, which require additional mental transformations, as in, for example, a depth-to-half-height task; haptic matching tasks have likewise shown near-perfect depth constancy, similar to visual tasks (Foster et al.,
<xref rid="B25" ref-type="bibr">2011</xref>
). For example, Glennerster et al. (
<xref rid="B31" ref-type="bibr">1996</xref>
), Bradshaw et al. (
<xref rid="B5" ref-type="bibr">2000</xref>
), and Todd and Norman (
<xref rid="B61" ref-type="bibr">2003</xref>
) used both Class A and B observations, and all found more accurate depth judgments with the Class A depth-matching task, with an average performance close to perfect constancy. Therefore, to improve the accuracy of depth perception, the current study employed a Class A depth-matching task in which observers compared the sensation of depth produced by viewing two similar stimuli.</p>
<p>While the full-cue conditions in the current experiment were intended to provide maximum information about viewing distance (
<italic>f</italic>
) and increase accuracy of perceived depth for both the binocular disparity and motion parallax stimuli, depth constancy with these binocular disparity stimuli was investigated in a separate control condition. This condition simulated the design of Glennerster et al. (
<xref rid="B31" ref-type="bibr">1996</xref>
; see also Bradshaw et al.,
<xref rid="B5" ref-type="bibr">2000</xref>
; Todd and Norman,
<xref rid="B61" ref-type="bibr">2003</xref>
) to empirically determine whether the binocular disparity stimulus used here provided a reasonable estimate of perceived depth magnitude for the motion parallax stimuli. To foreshadow the results, the deviation from perfect depth constancy was very small, indicating that the use of this binocular disparity stimulus in a perceptual matching procedure was reasonable for the task of determining perceived depth magnitude from motion parallax.</p>
<sec>
<title>Apparatus</title>
<p>Stimuli were generated on a Macintosh computer (Apple; Cupertino, CA) and presented on an IIyama CRT (IIyama International; Oude Meer, The Netherlands) monitor (1600 × 1200 pixels, 85 Hz refresh). In head-movement conditions, head position was measured with a linear potentiometer (ETI Systems; Carlsbad, CA) using a head-movement recording device (described in detail in Nawrot and Joyce,
<xref rid="B46" ref-type="bibr">2006</xref>
). Head position was registered in the computer at 85 Hz using a 16-bit multifunction I/O board (National Instruments; Austin, TX) connected to the head movement device. The device has excellent linearity (
<italic>r</italic>
> 0.999) and accuracy (<0.1 mm).</p>
<p>A Z-Screen (Stereographics; San Rafael, CA) stereoscopic imaging system, which uses reversing circular polarization for frame-sequential presentation of the stereo images, was used for all conditions of the experiment. While this system gave stereo separation for the stereo stimulus presentation, it was also used to restrict presentation of the motion parallax stimulus to the observer's right eye. That is, the motion parallax stimulus was visible only to the observer's right eye, while the fixation stimulus was visible to both the observer's right and left eye. This maintained the same vergence, accommodation, and vertical disparity information for both the motion parallax and binocular disparity stimuli. Transitions of the polarization state of the Z-Screen were controlled by the experimental computer through a digital output channel in the multi-function I/O board. With this stereoscopic viewing system, observers wore passive “aviator-style” glasses with the two lenses fitted with opposite directions of circular polarization, similar to the “Real3D” glasses commonly used in 3D movie viewing in theaters. The use of these glasses precluded the use of a remote-optics eye tracking system to verify observer fixation in this experiment. Previous work has compared conditions in which fixation was and was not objectively enforced with an eye tracker (Nawrot and Stroyan,
<xref rid="B47" ref-type="bibr">2009</xref>
) and demonstrated very similar quantitative results in both conditions. Here, as in both conditions of Nawrot and Stroyan (
<xref rid="B47" ref-type="bibr">2009</xref>
), observers were given instructions about the importance of maintaining fixation.</p>
<p>To minimize any effect of cross-talk in the binocular viewing system (information presented to one eye that is visible to the other eye), the monitor luminance was reduced to 38.8 cd/m
<sup>2</sup>
, which was further reduced to 16.0 cd/m
<sup>2</sup>
by the Z-screen viewing system. In a functional test of cross-talk, information was presented in one of the two channels to an observer with one eye occluded. Using the non-occluded and non-presented eye, observers were at chance in detecting whether or not a stimulus was presented and were at chance in detecting the direction of a translating stimulus.</p>
<p>In the depth-constancy control conditions, the viewing apparatus was duplicated with one monitor and Z-screen at a viewing distance of 36 cm (and offset to the left of the line of sight, similar to the virtual monitor positions in Figure 1 of Glennerster et al.,
<xref rid="B31" ref-type="bibr">1996</xref>
) and the other monitor and Z-screen at a viewing distance of 72 cm (and offset to the right of the line of sight). The height of the monitors was adjusted to make the centers of the two stimuli level with the observer's eye. Synchrony of the monitors was achieved by splitting the signals to both monitors and Z-screens. The stimulus viewed at 36 cm (left monitor) was drawn on the right side of the screen while the left side of the screen was occluded. The stimulus viewed at 72 cm (right monitor) was drawn on the left side of the screen while the right side of the screen was occluded.</p>
</sec>
<sec>
<title>Stimuli</title>
<p>To allow comparison of these results to other studies in the motion parallax literature, we employed a random-dot stimulus depicting a frontal corrugated surface varying sinusoidally in depth along the vertical dimension (Rogers and Graham,
<xref rid="B53" ref-type="bibr">1979</xref>
). The general design of this type of random-dot stimulus for stereo, head-movement, and head-stationary motion parallax conditions has been detailed elsewhere (Nawrot and Joyce,
<xref rid="B46" ref-type="bibr">2006</xref>
). In the current experiment the stimulus depicted 1 cycle of depth corrugation, with one half-cycle appearing above and below the fixation point.</p>
<p>The square stimulus window was 300 × 300 pixels (0.244 mm/pixel). At the three viewing distances this corresponded to a window subtending 11.5° (2.3 min/pixel) at 36 cm, 7.75° (1.55 min/pixel) at 54 cm, or 5.85° (1.17 min/pixel) on a side at 72 cm. The stimulus was composed of 5000 one-pixel black dots randomly positioned on a white background. The maximum disparity of the corrugated stereo stimulus varied between 1 and 9 pixels, with the angular dimension varying with viewing distance: 36 cm, 2.3–20.7 min; 54 cm, 1.55–14.0 min; 72 cm, 1.17–10.5 min. The horizontal meridian and the fixation point always had zero pixels of disparity. The stereo stimulus was stationary and drawn at the center of the monitor. Motion parallax stimuli varied between maximum
<italic>d</italic>
θ/
<italic>d</italic>
α ratios of 0.042 and 0.25 with a variety of pursuit (
<italic>d</italic>
α/
<italic>dt</italic>
, 1.1–11.57 d/s) and retinal image (
<italic>d</italic>
θ/
<italic>dt</italic>
, 0.14–1.65 d/s) velocities. Motion parallax stimuli were presented to the right eye, while the left eye was presented only the fixation spot. This allowed the fixation spot to be binocularly fused by the observer, ensuring the same ocular convergence and accommodation in both motion parallax and binocular disparity stimuli.</p>
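The per-pixel angular sizes quoted above follow directly from the 0.244 mm pixel pitch and the viewing distance; a small sketch (our arithmetic, not the authors' code) reproduces them:

```python
import math

PIXEL_PITCH_MM = 0.244  # pixel size from the stimulus description above

def arcmin_per_pixel(viewing_distance_cm):
    """Visual angle of one pixel at the given viewing distance, in arc minutes."""
    rad = math.atan((PIXEL_PITCH_MM / 10.0) / viewing_distance_cm)
    return math.degrees(rad) * 60.0

for f_cm in (36, 54, 72):
    print(f_cm, round(arcmin_per_pixel(f_cm), 2))  # 2.33, 1.55, 1.17 arc min
```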
<p>In head-stationary conditions, the motion parallax stimulus window translated 7.3 cm across the monitor at the specified
<italic>d</italic>
α velocity for that stimulus trial. Within the translating stimulus window, dots generating the peak motion parallax cue moved leftwards or rightwards at the peak
<italic>d</italic>
θ velocity for that trial. Since the observer maintained fixation on a point at the center of the translating stimulus window, these
<italic>d</italic>
θ stimulus velocities correspond to retinal image velocities. The duration of the stimulus presentation varied, and depended on the particular
<italic>d</italic>
α velocity.</p>
<p>In head-translation conditions, the motion parallax stimulus window remained stationary on the monitor and was only displayed during the central 7.3 cm of each trial's head translation. Observers were instructed to move at a speed so that the entire head translation took about 1 s, and the stimulus presentation duration was about 0.5 s. This corresponds to a commonly used 0.5 Hz head translation speed (e.g., Nawrot,
<xref rid="B45" ref-type="bibr">2003</xref>
). The precise duration of the observer's head translation through the central 7.3 cm was recorded for each trial and was used to calculate the average
<italic>d</italic>
α and
<italic>d</italic>
θ values. The peak velocity of local stimulus dot movements (within the stimulus window) was linked to the velocity of the observer's head translation, which was measured every 0.012 s with the head movement device. Observers maintained fixation on a point at the center of the stimulus window with eye movements during the head translation, and local stimulus dots moved in relation to this point, making it possible to maintain the proper M/PRs (
<italic>d</italic>
θ/
<italic>d</italic>
α values between 0.042 and 0.25) for each trial, even though the exact head translation velocity varied between trials.</p>
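The yoking described above, in which peak dot velocity is tied to the measured head translation so that dθ/dα stays constant within a trial, can be sketched as follows. This is an illustration with hypothetical names and a simplified update rule; the study's actual stimulus code is not published:

```python
SAMPLE_DT = 0.012  # head-position sampling interval from above (s)

def dot_increment_deg(dalpha_deg_per_s, mp_ratio, dt=SAMPLE_DT):
    """Angular displacement dθ of a peak stimulus dot for one sample,
    chosen so that dθ/dα equals mp_ratio regardless of how fast the
    observer's head (and hence the pursuit signal dα) is moving."""
    dalpha = dalpha_deg_per_s * dt   # pursuit increment implied by head motion
    return mp_ratio * dalpha         # motion increment that preserves the ratio

# e.g., a 5 d/s pursuit rate with dθ/dα = 0.2 gives 0.012° per 12 ms sample
```

Because dθ is recomputed from the measured head velocity on every sample, the intended M/PR (between 0.042 and 0.25) holds even when the head translation speed varies between trials.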
<p>In the depth-constancy control conditions, two stereo stimuli were drawn to the screen at the same time. However, observers saw only the stimulus on the left side of the screen at the 72 cm distance, and the right side stimulus at the 36 cm distance. Stimulus dots viewed at 36 cm were 1 pixel (2.3 arc min) in size, and those viewed at 72 cm were 2 × 2 pixels (2.3 arc min) in size. In one condition the stimulus viewed at 36 cm was fixed at a peak disparity of 23.3 arc min, while the variable stimulus at 72 cm varied between 1.2 and 11.7 arc min of disparity in a method of constant stimuli. In the second condition, the stimulus viewed at 72 cm was fixed at a peak disparity of 4.7 arc min, while the variable stimulus at 36 cm varied between 9.3 and 28 arc min of disparity in a method of constant stimuli. Similar to the other conditions, the phase of the two stimuli was always reversed. Unlike other conditions, viewing of the two stimuli was unrestricted.</p>
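For a fixed physical depth, binocular disparity scales approximately with the inverse square of viewing distance (for a fixed interocular separation), so halving the distance roughly quadruples the matching disparity. This standard small-angle relationship, not anything from the authors' code, explains why the fixed and variable disparity ranges above sit roughly a factor of four apart:

```python
def equivalent_disparity(disparity_arcmin, f_from_cm, f_to_cm):
    """Disparity at distance f_to depicting the same physical depth as
    disparity_arcmin at f_from (small-angle approximation, δ ∝ d / f²)."""
    return disparity_arcmin * (f_from_cm / f_to_cm) ** 2

# The 23.3 arc min standard at 36 cm corresponds to roughly
# 23.3 / 4 ≈ 5.8 arc min at 72 cm, inside the 1.2–11.7 range used;
# the 4.7 arc min standard at 72 cm corresponds to roughly
# 18.8 arc min at 36 cm, inside the 9.3–28 range used.
```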
</sec>
<sec>
<title>Procedure</title>
<p>These procedures were overseen by the North Dakota State University Institutional Review Board and adhered to the tenets of the Declaration of Helsinki. Observers were required to have corrected acuity of 20/40 or better, Pelli-Robson contrast sensitivity of 1.80, a stereothreshold (Randot and Stereofly tests) of 50 arc s or better, and no neurological or ophthalmic disorders. Eight naïve observers performed a 2IFC comparison of perceived depth magnitude between a motion parallax stimulus (first interval) and a binocular disparity stimulus (second interval). Trials began with a fixation dot positioned at the center of the location where the motion parallax stimulus would appear. Following the motion parallax stimulus presentation, the fixation dot moved to the center of the display and, following a 1 s ISI, the binocular disparity stimulus was displayed.</p>
<p>For head-stationary conditions (Figure
<xref ref-type="fig" rid="F2">2</xref>
), trials began with the fixation point displaced to the left or right of the monitor center, indicating the center of the motion parallax stimulus when the trial began. Observers initiated the trial with a button press. For head-translation conditions (Figure
<xref ref-type="fig" rid="F3">3</xref>
), a screen graphic indicated which direction (left or right) the observer was required to move his or her head during the trial. In both figures the stimuli are depicted with perspective information, but are actually perceived as fronto-parallel to the observer. To initiate the trial the observer was required to move his or her head to an appropriate starting position >5 cm from the center head position. When the observer's head was in an appropriate starting position, the graphic indicator vanished and the central fixation point appeared, indicating that the observer's head should then be translated across the display. The motion parallax stimulus was presented when the observer's head movement was within 3.65 cm of the center head position. The stimulus disappeared and the trial ended when the observer's head had traveled through the entire center 7.3 cm of head position. The stimulus disappeared and the trial was repeated if the observer's head movement stopped or reversed while within the central 7.3 cm range of head translation. During the 1 s ISI, the observer's head was moved to a central position and held stationary during presentation of the binocular disparity stimulus.</p>
<fig id="F2" position="float">
<label>Figure 2</label>
<caption>
<p>
<bold>Depicted are the key stimulus events in conditions with a stationary observer</bold>
. Trials began with the fixation point at the position where the motion parallax stimulus would appear and translate across the screen. Following a 1000 ms ISI, the binocular disparity comparison stimulus appeared at the screen center.</p>
</caption>
<graphic xlink:href="fpsyg-05-01103-g0002"></graphic>
</fig>
<fig id="F3" position="float">
<label>Figure 3</label>
<caption>
<p>
<bold>Depicted are the key stimulus events in conditions with a translating observer</bold>
. Trials began with the fixation point at screen center and the observer's head extended to the side indicated on the screen. During observer head translation the motion parallax stimulus was presented at the screen center. Following a 1000 ms ISI, during which the observer's head was returned to a central position, the binocular disparity comparison stimulus appeared at the screen center.</p>
</caption>
<graphic xlink:href="fpsyg-05-01103-g0003"></graphic>
</fig>
<p>Following the presentation of the second stimulus the screen was blanked, and observers could then use a button press to indicate which of the two intervals contained the stimulus with the larger magnitude of depth. Following the response, the appropriate fixation point was drawn to the screen, indicating that the observer could initiate the next trial. Each of the eight observers completed 20 blocks of 117 trials in each of the 6 conditions (~14,000 trials per observer). Leftward and rightward directions of head and eye movements alternated, and the two directions were collapsed in the subsequent analysis.</p>
<p>The experiment included six conditions: two head-translating motion parallax conditions at two viewing distances (36 and 72 cm), and four head-stationary conditions at three viewing distances (36, 54, and 72 cm). Two conditions were run at the 36 cm viewing distance with a stationary head, each with a different range of pursuit (
<italic>d</italic>
α) speeds. Both conditions at 36 cm included the 4.95 d/s pursuit speed, providing a partial replication of those data points.</p>
<p>In the depth-constancy control conditions, trials began with two fixation spots drawn where the two stationary stereo stimuli would appear. Nine naïve observers completed two blocks of 90 trials in two separate conditions. In each of the two control conditions, the peak stimulus disparity at one distance was held constant while the peak disparity at the other distance was varied. Observers initiated each trial with a button press. Both stereo stimuli were presented simultaneously and observers were free to move their gaze back and forth to compare the two stimuli. Observers used a button press to indicate which of the two stimuli appeared to have greater depth. Following the response both stimuli were extinguished, and the fixation spots were redrawn signaling the start of the next trial.</p>
</sec>
</sec>
<sec sec-type="results" id="s3">
<title>Results</title>
<sec>
<title>Control conditions</title>
<p>For each observer, in each of the two control conditions, a PSE was determined from each psychometric function based on a cumulative normal. This PSE gives the binocular disparity of the variable stereo stimulus at one distance that appears to match the magnitude of depth from the fixed-disparity stereo stimulus viewed at the other distance. Figure
<xref ref-type="fig" rid="F4">4</xref>
shows the normalized depth matches found in the control condition. The blue symbols show the results of the two control conditions. The red symbols give the hypothetical results if observers were matching retinal disparity of the two stimuli instead of relative depth. The green symbols give the hypothetical results of the depth matching if observers had a 10% mis-estimate of viewing distance to the variable stimulus, an overestimate when the variable stimulus was at 36 cm, and an underestimate when the variable stimulus was at 72 cm.</p>
<fig id="F4" position="float">
<label>Figure 4</label>
<caption>
<p>
<bold>Shown are the normalized depth matches for the control experiment compared to two hypothetical results</bold>
. The blue line shows the normalized depth match (match/expected) on the vertical axis with the viewing distance of the fixed standard stimulus shown on the horizontal axis. The red line shows the expected results if observers were matching disparity. The green line shows the expected results if the viewing distance to the variable stimulus were mis-estimated by 10%.</p>
</caption>
<graphic xlink:href="fpsyg-05-01103-g0004"></graphic>
</fig>
<p>When the standard stimulus, viewed at 36 cm (Figure
<xref ref-type="fig" rid="F4">4</xref>
, left blue point), was fixed at 23.3 arc min of peak disparity, the average matching stimulus at 72 cm viewing distance had 5.82 (
<italic>SE</italic>
= 0.16) arc min of disparity. In terms of depth, the fixed stimulus at 36 cm had 1.35 cm of depth while the variable stimulus at 72 cm was judged equivalent when it had 1.349 (
<italic>SE</italic>
= 0.04) cm of depth for a normalized depth match of 0.999. Similarly, a psychometric function fit to the cumulative data produced a PSE estimate of 5.80 arc min with β = 0.656 arc min, and σ = 1.08 arc min. This corresponds to a depth discrimination threshold of 0.25 cm for the binocular disparity stimulus, and corresponds to the left error bar shown in Figure
<xref ref-type="fig" rid="F4">4</xref>
.</p>
<p>When the standard stimulus, viewed at 72 cm (Figure
<xref ref-type="fig" rid="F4">4</xref>
, right blue point), was fixed at 4.66 arc min of peak disparity, the average matching stimulus at 36 cm viewing distance had 18.37 (
<italic>SE</italic>
= 0.41) arc min of disparity. In terms of depth, the fixed stimulus at 72 cm had 1.08 cm of depth while the variable stimulus at 36 cm was judged equivalent when it had 1.065 (0.03) cm of depth for a normalized depth match of 0.985. The psychometric function fit to the cumulative data produced a PSE estimate of 18.36 arc min with β = 0.228 arc min, and σ = 3.11 arc min. This corresponds to a depth discrimination threshold of 0.18 cm for the binocular disparity stimulus and corresponds to the right error bar shown in Figure
<xref ref-type="fig" rid="F4">4</xref>
. In both conditions observers were very accurate in their ability to match depths across a doubling of viewing distance. This depth constancy is not unexpected (see Materials and Methods). Indeed, the performance here is very similar to the performance of observers in Glennerster et al. (
<xref rid="B31" ref-type="bibr">1996</xref>
, see their Figure 2A).</p>
<p>These results indicate near perfect depth constancy for the binocular disparity stimuli viewed at the range of distances, and in the particular viewing conditions, used in this study. Such matches would only be possible if depth from each of the two binocular disparity stimuli were accurately scaled with their respective viewing distances. While Glennerster et al. (
<xref rid="B31" ref-type="bibr">1996</xref>
) point out that these results do not preclude a systematic mis-estimation of viewing distance (f), any such mis-estimation would have had to preserve the precise viewing distance ratio used here. This alternative explanation appears unlikely for several reasons: First, the failure of depth constancy (e.g., Johnston,
<xref rid="B34" ref-type="bibr">1991</xref>
) has often been attributed to a mis-perception of viewing distance that itself varies with viewing distance (Johnston et al.,
<xref rid="B34a" ref-type="bibr">1994</xref>
), being over-estimated at near distances and under-estimated at far viewing distances. Such a viewing-distance-dependent pattern of mis-estimation is unlikely to preserve the precise ratio of viewing distances required for accurate depth constancy. That is, if the viewing distances were misestimated, the closer would be overestimated and the farther underestimated, disrupting the precise ratio necessary for this alternative explanation of depth constancy. Second, the purposeful discrimination of distance ratios, as required here, does not appear to be accurate enough (~5% error within 1 m, Baird and Biersdorf,
<xref rid="B1a" ref-type="bibr">1967</xref>
) to provide an alternative explanation for the accurate depth constancy. Moreover, the error in determining viewing distance ratios was even larger over longer viewing distances (see Table 4 in Baird and Biersdorf,
<xref rid="B1a" ref-type="bibr">1967</xref>
) such as those used in Glennerster et al. (
<xref rid="B31" ref-type="bibr">1996</xref>
) and Bradshaw et al. (
<xref rid="B5" ref-type="bibr">2000</xref>
), making the distance-ratio matching hypothesis a less likely explanation in those cases. Finally, there is no evidence that observers can actually match the ratio of two retinal disparities to the inverse ratio of the squared viewing distances. In this control experiment, observers were asked to indicate which of the two stimuli appeared to have greater peak-to-trough depth, a task that they reported was very easy to complete (similar to the reports in Glennerster et al.,
<xref rid="B31" ref-type="bibr">1996</xref>
; Todd and Norman,
<xref rid="B61" ref-type="bibr">2003</xref>
). One might reasonably conclude that depth matching is likely the product of a direct, low-level visual function relying on disparity sensitivity (Barlow et al.,
<xref rid="B2" ref-type="bibr">1967</xref>
) and low-level scaling by viewing distance (Trotter et al.,
<xref rid="B62" ref-type="bibr">1996</xref>
; Dobbins et al.,
<xref rid="B17" ref-type="bibr">1998</xref>
; Gonzalez and Perez,
<xref rid="B32" ref-type="bibr">1998</xref>
), and there is no requirement that it be supplanted by an indirect, high-level, hypothetical distance-ratio computation. Therefore, we contend the current depth-matching data represent accurate depth constancy in these viewing conditions, indicating that the binocular disparity stimuli used in the main experiment provide a reasonable means to estimate perceived depth from motion parallax.</p>
</sec>
<sec>
<title>Experimental conditions</title>
<p>For each observer, in each condition, the 20 blocks of trials were compiled and used to generate a series of psychometric functions, one for each motion parallax M/PR value. Each psychometric function shows the percentage of judgments of the binocular disparity stimulus having greater depth than the motion parallax stimulus, for the nine different disparity values. The 50% PSE for each function gives the magnitude of binocular disparity (δ) that produced a perceived depth magnitude (
<italic>d
<sub>stereo</sub>
</italic>
) equivalent to the perceived depth magnitude for the motion parallax stimulus (
<italic>d
<sub>mp</sub>
</italic>
=
<italic>d</italic>
<sub>
<italic>stereo</italic>
</sub>
) with the particular values of
<italic>d</italic>
θ,
<italic>d</italic>
α, and
<italic>f</italic>
. Knowing the binocular stimulus viewing parameters
<italic>d, f</italic>
and inter-ocular distance (
<italic>i</italic>
), it is possible to estimate
<italic>d
<sub>stereo</sub>
</italic>
from the distance-square law (3), and therefore recover a reasonable estimate of
<italic>d
<sub>mp</sub>
</italic>
.</p>
<disp-formula id="E3">
<label>(3)</label>
<mml:math id="M3">
<mml:mrow>
<mml:msub>
<mml:mi>d</mml:mi>
<mml:mrow>
<mml:mi>m</mml:mi>
<mml:mi>p</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>≈</mml:mo>
<mml:msub>
<mml:mi>d</mml:mi>
<mml:mrow>
<mml:mi>s</mml:mi>
<mml:mi>t</mml:mi>
<mml:mi>e</mml:mi>
<mml:mi>r</mml:mi>
<mml:mi>e</mml:mi>
<mml:mi>o</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>≈</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:msup>
<mml:mi>f</mml:mi>
<mml:mn>2</mml:mn>
</mml:msup>
<mml:mo>*</mml:mo>
<mml:mi>δ</mml:mi>
</mml:mrow>
<mml:mi>i</mml:mi>
</mml:mfrac>
</mml:mrow>
</mml:math>
</disp-formula>
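As a numerical check of (3), the control-condition stimuli can be converted from peak disparity to depth. This sketch assumes an inter-ocular distance of 6.5 cm, a typical value that is not stated in this section; the helper name is ours:

```python
import math

ARCMIN_PER_RAD = 60 * 180 / math.pi  # arc minutes per radian

def depth_from_disparity(delta_arcmin, f_cm, i_cm=6.5):
    """Distance-square law (3): depth ~ f^2 * delta / i, with the peak
    disparity delta converted from arc min to radians."""
    return f_cm ** 2 * (delta_arcmin / ARCMIN_PER_RAD) / i_cm

# The fixed control stimuli, 23.3 arc min at 36 cm and 4.66 arc min at
# 72 cm, correspond to roughly 1.35 cm and 1.08 cm of depth, matching
# the depths reported for the control conditions.
```

The same conversion, run in reverse, recovers the matching disparities reported below (e.g., about 5.8 arc min at 72 cm for 1.35 cm of depth).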
<p>Figure
<xref ref-type="fig" rid="F5">5</xref>
shows the 13 psychometric functions for the group-averaged data in one condition (
<italic>f</italic>
= 36 cm, head stationary). Each line corresponds to a motion parallax stimulus with a different M/PR (
<italic>d</italic>
θ/
<italic>d</italic>
α) (see legend on the right). In a few instances (0.042, 0.083, and 0.167) the same M/PR is produced with different
<italic>d</italic>
θ and
<italic>d</italic>
α values. The horizontal axis shows the binocular disparity of the stereo stimulus being compared to the motion parallax stimulus. The vertical axis shows the percentage of responses for which the perceived depth magnitude of the stereo stimulus was greater than for the motion parallax stimulus. To the left side of the figure, with small disparities, the stereo stimulus is rarely perceived as having greater depth. To the right side, with large disparities, the stereo stimulus is most often perceived as having greater depth.</p>
<fig id="F5" position="float">
<label>Figure 5</label>
<caption>
<p>
<bold>Shown are 13 psychometric functions for group-averaged data in one head stationary condition with a 36 cm viewing distance</bold>
. The horizontal axis shows the peak disparity of the comparison binocular stereopsis stimulus. The vertical axis shows the percentage of trials in which the comparison stimulus was indicated to have greater depth magnitude than the motion parallax stimulus. The 13 different functions represent motion parallax stimuli with different motion/pursuit ratios (see legend). Lines with the same ratio are produced with different
<italic>d</italic>θ
and
<italic>d</italic>
α velocities.</p>
</caption>
<graphic xlink:href="fpsyg-05-01103-g0005"></graphic>
</fig>
<p>The psychometric functions, and PSEs, of seven observers were very similar in all 6 conditions. The remaining observer generated PSEs that were >3 SD from the group means and was excluded from the subsequent group analysis. For each individual and group-averaged psychometric function, in each of the 6 conditions, the
<italic>d
<sub>mp</sub>
</italic>
was determined from the PSE of a fitted cumulative normal (ERF) in MATLAB (Mathworks; Natick, MA). For instance, 13 of these PSEs were determined from the data shown in Figure
<xref ref-type="fig" rid="F5">5</xref>
. The
<italic>d
<sub>mp</sub>
</italic>
estimates determined from group-averaged data were the same as the average of the individual
<italic>d
<sub>mp</sub>
</italic>
estimates. Standard error for the average
<italic>d
<sub>mp</sub>
</italic>
was calculated from the variability of these individual
<italic>d
<sub>mp</sub>
</italic>
estimates.</p>
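The paper fits the cumulative normal (ERF) in MATLAB; a rough, stdlib-only Python equivalent of the PSE extraction, using a simple grid search in place of a proper optimizer (all names are ours), might look like:

```python
import math

def cum_normal(x, mu, sigma):
    """Cumulative normal psychometric function:
    proportion of 'stereo stimulus deeper' responses at disparity x."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def fit_pse(disparities, p_greater, mus, sigmas):
    """Least-squares grid search over (mu, sigma). The PSE is the
    fitted mu: the disparity judged deeper than the motion parallax
    stimulus on 50% of trials."""
    best = None
    for mu in mus:
        for sigma in sigmas:
            err = sum((cum_normal(x, mu, sigma) - p) ** 2
                      for x, p in zip(disparities, p_greater))
            if best is None or err < best[0]:
                best = (err, mu, sigma)
    return best[1], best[2]
```

With the nine disparity values of one psychometric function as `disparities` and the observed response proportions as `p_greater`, the returned PSE can then be converted to a depth estimate via the distance-square law (3).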
<p>Figure
<xref ref-type="fig" rid="F6">6</xref>
shows these 52
<italic>d
<sub>mp</sub>
</italic>
values for the 4 conditions with a stationary head. The horizontal axis shows M/PR (
<italic>d</italic>
θ/
<italic>d</italic>
α). The vertical axis shows depth depicted in the matching binocular disparity stimuli, providing an estimate of
<italic>d
<sub>mp</sub>
</italic>
, in cm. The different color groups correspond to the three different viewing distances (
<italic>f</italic>
) (greens = 72 cm, reds = 54 cm, blues = 36 cm). Lines connect
<italic>d
<sub>mp</sub>
</italic>
values that come from stimuli that have the same pursuit velocity (
<italic>d</italic>
α) (see legend) at the same viewing distance.</p>
<fig id="F6" position="float">
<label>Figure 6</label>
<caption>
<p>
<bold>Shown are the average depth matches in the four head stationary conditions</bold>
. The vertical axis shows depth depicted in the matching binocular disparity stimuli, the horizontal axis shows the different motion/pursuit ratios for motion parallax stimuli. Lines connect stimuli with the same pursuit velocity (
<italic>d</italic>
α, see legend) from the same condition. Lines and symbols shaded in blue are from conditions with 36 cm viewing distance, lines and symbols shaded in red are from 54 cm viewing distance, and lines and symbols shaded in green are from 72 cm distance. Two lone data points (5.81 d/s @ 72 cm, and 11.57 d/s @ 36 cm) are unaccompanied by other data points at that pursuit velocity.</p>
</caption>
<graphic xlink:href="fpsyg-05-01103-g0006"></graphic>
</fig>
<p>Several observations and conclusions can be made from these data. First, the magnitude of
<italic>d
<sub>mp</sub>
</italic>
is much less than that predicted from the geometric M/PR model. For instance, a M/PR of 0.25 and a viewing distance (
<italic>f</italic>
) of 36 cm should produce a
<italic>d
<sub>mp</sub>
</italic>
of 9 cm, but the largest
<italic>d
<sub>mp</sub>
</italic>
found in these conditions was about 1 cm. The
<italic>d
<sub>mp</sub>
</italic>
estimates for the 54 cm and 72 cm viewing distances were similarly an order of magnitude less than that predicted by the geometric model. A subsequent analysis will quantify this pattern of foreshortening for all of the stimulus variables.</p>
<p>Despite the foreshortening,
<italic>d
<sub>mp</sub>
</italic>
is still very orderly and varies with the
<italic>f</italic>
,
<italic>d</italic>
θ, and
<italic>d</italic>
α variables. Illustrating this orderly relationship, data from the three viewing distances shows an orderly increase in
<italic>d
<sub>mp</sub>
</italic>
with an increase in viewing distance (
<italic>f</italic>
) (Ono et al.,
<xref rid="B48" ref-type="bibr">1986</xref>
). In Figure
<xref ref-type="fig" rid="F6">6</xref>
viewing distance is color coded with green points corresponding to 72 cm, red points to 54 cm, and blue points to 36 cm. The three colors are dispersed vertically, meaning that points with similar
<italic>d</italic>
α and M/PR values produce different
<italic>d
<sub>mp</sub>
</italic>
values depending on the viewing distance. This distance scaling is predicted by the M/PR geometry, and appears very orderly with the points for the 54 cm viewing distance falling between those for the 72 cm and 36 cm viewing distances.</p>
<p>Additionally, data points form straight lines along each
<italic>d</italic>
α parameter, with each line sloping upward indicating a linear increase in
<italic>d
<sub>mp</sub>
</italic>
with the increase in the M/PR. This change in M/PR is accomplished here with a change in the
<italic>d</italic>
θ value, since
<italic>d</italic>
α is constant along each line. This shows the well-known role of
<italic>d</italic>
θ in the perception of depth from motion parallax. That is, with other independent variables remaining constant (
<italic>d</italic>
α and
<italic>f</italic>
), an increase in retinal image velocity (
<italic>d</italic>
θ) produces an increase in
<italic>d
<sub>mp</sub>
</italic>
. The direction and linearity of the
<italic>d</italic>
θ effect is predicted by the M/PR, but, as outlined above, the quantitative changes are smaller than those predicted by the geometric model.</p>
<p>A similar, but smaller, effect is found for changes in
<italic>d</italic>
α. The different lines in Figure
<xref ref-type="fig" rid="F6">6</xref>
represent data points with different
<italic>d</italic>
α values, and within a particular viewing distance lines with smaller
<italic>d</italic>
α values produce smaller
<italic>d
<sub>mp</sub>
</italic>
magnitudes than lines with larger
<italic>d</italic>
α values. However, the vertical displacement of these
<italic>d</italic>
α lines is due to a change in both
<italic>d</italic>
α and
<italic>d</italic>
θ, as the M/PR remains constant. The independent effect of
<italic>d</italic>
α is most easily seen in Figure
<xref ref-type="fig" rid="F7">7</xref>
, which re-plots a subset of the points from Figure
<xref ref-type="fig" rid="F6">6</xref>
for which at least 3 points share a common
<italic>d</italic>
θ value in the same viewing condition. The axes and data points are the same as Figure
<xref ref-type="fig" rid="F6">6</xref>
but the lines now connect a fixed
<italic>d</italic>
θ value. Like the
<italic>d</italic>
α lines in Figure
<xref ref-type="fig" rid="F6">6</xref>
, these
<italic>d</italic>
θ lines also slope upwards with increasing M/PR (and therefore increasing
<italic>d</italic>
α), but with shallower slopes.</p>
<fig id="F7" position="float">
<label>Figure 7</label>
<caption>
<p>
<bold>Shown is a subset of the average depth match data from Figure
<xref ref-type="fig" rid="F6">6</xref>
</bold>
. The vertical axis shows depth depicted in the matching binocular disparity stimuli, the horizontal axis shows the different motion/pursuit ratios for motion parallax stimuli. Here lines connect stimuli with the same retinal image velocity (
<italic>d</italic>
θ, see legend) from the same condition. Lines and symbols shaded in blue are from conditions with 36 cm viewing distance, lines and symbols shaded in red are from 54 cm viewing distance, and lines and symbols shaded in green are from 72 cm distance.</p>
</caption>
<graphic xlink:href="fpsyg-05-01103-g0007"></graphic>
</fig>
<p>Figure
<xref ref-type="fig" rid="F8">8</xref>
shows 20
<italic>d
<sub>mp</sub>
</italic>
values for the 2 conditions in which observers made lateral head translations. For comparison, the closest data points from the head-stationary conditions in Figure
<xref ref-type="fig" rid="F6">6</xref>
are shown overlaid, without error bars. Again, the vertical axis shows depth depicted in the matching binocular disparity stimuli, providing an estimate of
<italic>d
<sub>mp</sub>
</italic>
, in cm. The horizontal axis shows the M/PR (
<italic>d</italic>
θ/
<italic>d</italic>
α). The different colored points correspond to the two different viewing distances (
<italic>f</italic>
) (green = 72 cm, blue = 36 cm).</p>
<fig id="F8" position="float">
<label>Figure 8</label>
<caption>
<p>
<bold>Shown are the average depth matches in the two head translating conditions</bold>
. The vertical axis shows depth depicted in the matching binocular disparity stimuli, the horizontal axis shows the different motion/pursuit ratios for motion parallax stimuli. Lines connect stimuli with the same pursuit velocity (
<italic>d</italic>
α) from the same condition. The line and symbols shaded in blue are from the 36 cm viewing distance condition while those shaded in green are from the 72 cm distance condition.</p>
</caption>
<graphic xlink:href="fpsyg-05-01103-g0008"></graphic>
</fig>
<p>To estimate the actual head translation and eye movement speeds, an average head velocity was computed for each trial and each observer over the central 7.3 cm range of head translation. The mean observer head translation speed in the 72 cm viewing distance condition was 12.1 cm/s (
<italic>SE</italic>
= 1.2 cm/s), and in the 36 cm viewing distance condition was 11.0 cm/s (
<italic>SE</italic>
= 0.9 cm/s). With the assumption that the observer maintained accurate fixation on the static fixation point during the stimulus presentation and head translation, these head translation velocities correspond to an average eye movement speed (
<italic>d</italic>
α) of 9.3 and 17.5 d/s, respectively. It is important to note that regardless of the variability in the observer head translation speeds, the stimulus presentation program maintained the proper M/PR for each trial. However, knowing the average eye movement speed allows these results to be compared to those for the head-stationary conditions.</p>
<p>In the comparison of the 36 cm conditions, the blue line (with error bars) showing data from the head-translating condition shown in Figure
<xref ref-type="fig" rid="F8">8</xref>
straddles the 6.6 d/s line (violet line) from the head-stationary condition shown in Figure
<xref ref-type="fig" rid="F6">6</xref>
. This similarity in the perceived depth suggests that at the same M/PR, a
<italic>d</italic>
α of 17.5 d/s during head translation produces the same
<italic>d
<sub>mp</sub>
</italic>
magnitude as a
<italic>d</italic>
α of 6.6 d/s in head-stationary conditions. In the comparison of the 72 cm conditions, the dark green line (with error bars), showing data from the head-translating condition shown in Figure
<xref ref-type="fig" rid="F8">8</xref>
, straddles the 4.98 d/s line (medium green) and the 4.15 d/s line (light green) from the head-stationary condition shown in Figure
<xref ref-type="fig" rid="F6">6</xref>
. Again, a comparison of the head stationary and the head moving conditions indicates that at the same M/PR, a
<italic>d</italic>
α of 9.3 d/s during head translation produces the same magnitude of
<italic>d
<sub>mp</sub>
</italic>
as
<italic>d</italic>
α of 4.15–4.98 d/s in head-stationary conditions.</p>
<p>The difference in the type of eye movements generated in the two conditions may explain this discrepancy: lateral head translations generate a tVOR in addition to the visually driven pursuit eye movement (Miles and Busettini,
<xref rid="B40" ref-type="bibr">1992</xref>
; Miles,
<xref rid="B38" ref-type="bibr">1993</xref>
). However, only the pursuit component of the compensatory eye movement is used in the perception of depth from motion parallax (Nawrot,
<xref rid="B45" ref-type="bibr">2003</xref>
; Nawrot and Joyce,
<xref rid="B46" ref-type="bibr">2006</xref>
). The tVOR signal does not appear to have a role in the mechanisms serving perceived depth. Therefore, the internal
<italic>d</italic>
α signal generated during lateral head movements may be much less than the magnitude of the total compensatory eye movement generated during the lateral head translation. This was the rationale offered by Nawrot and Joyce (
<xref rid="B46" ref-type="bibr">2006</xref>
) to explain the transition and reversal in perceived depth sign between world-fixed and head-fixed motion parallax stimuli.</p>
<p>It appears that across a variety of viewing conditions, tVOR generates about 60% of the eye-movement compensation necessary to maintain fixation (Ramat and Zee,
<xref rid="B50" ref-type="bibr">2003</xref>
; Liao et al.,
<xref rid="B35" ref-type="bibr">2008</xref>
). This means that to maintain fixation, and high visual acuity, the remaining 40% of the compensatory eye movement must come from a visually driven pursuit signal (Miles and Busettini,
<xref rid="B40" ref-type="bibr">1992</xref>
; Miles,
<xref rid="B38" ref-type="bibr">1993</xref>
). In the current experiment, we determined the eye movement velocities for which a head-stationary pursuit signal (36 cm: 6.6 d/s; 72 cm: 4.15 d/s) generates the same
<italic>d
<sub>mp</sub>
</italic>
magnitude as a head-translating tVOR+pursuit signal (36 cm: 17.5 d/s; 72 cm: 9.3 d/s). These pursuit velocities are about 40% (36 cm: 38%; 72 cm: 45%) of the tVOR+pursuit velocity. Therefore, the differences in perceived depth in the head-stationary and head-translating conditions are explained by the differences in the eye movements, and support the proposal that the
<italic>d</italic>
α signal comes solely from the pursuit system (Nawrot and Joyce,
<xref rid="B46" ref-type="bibr">2006</xref>
; Nadler et al.,
<xref rid="B43" ref-type="bibr">2009</xref>
).</p>
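The ~40% pursuit share can be checked directly from the matched velocities reported above (the values are from this experiment; the dictionary layout is purely illustrative):

```python
# Equivalent head-stationary pursuit speed vs. total compensatory
# (tVOR + pursuit) speed during head translation, in deg/s,
# keyed by viewing distance in cm.
matched_speeds = {36: (6.6, 17.5), 72: (4.15, 9.3)}

pursuit_fraction = {f: pursuit / total
                    for f, (pursuit, total) in matched_speeds.items()}
# 36 cm: 6.6 / 17.5 -> ~0.38; 72 cm: 4.15 / 9.3 -> ~0.45,
# bracketing the ~40% visually driven share of the compensatory
# eye movement reported by Ramat and Zee (2003) and Liao et al. (2008).
```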
</sec>
</sec>
<sec sec-type="discussion" id="s4">
<title>Discussion</title>
<p>The results indicate that depth from motion parallax is greatly foreshortened compared to the depth that might be expected from the dynamic geometry. Here, foreshortening means that an object is perceived closer to the point of fixation than the spatial geometry indicates. For objects farther than the fixation point, foreshortening means they are perceived nearer to the observer; for objects closer than the fixation point, it means they are perceived as farther from the observer than they actually are. Even with binocular, full-cue conditions that should provide a reliable estimate of physical viewing distance (which might otherwise affect depth scaling), the depth foreshortening found here represents a nearly 10-fold diminution of perceived depth magnitude, which is further explained in the analysis below.</p>
<p>Returning to the head-stationary conditions and the set of data points shown in Figure
<xref ref-type="fig" rid="F6">6</xref>
, Figure
<xref ref-type="fig" rid="F9">9</xref>
shows a three-dimensional contour plot of these same data using
<italic>Log</italic>
(
<italic>d</italic>
α),
<italic>Log</italic>
(
<italic>d</italic>
θ), and
<italic>Log</italic>
(
<italic>d/f</italic>
). (Note that these are natural Logs, not Log
<sub>10</sub>
, and taking logarithms makes the M/PR (2) an exactly planar graph,
<inline-formula>
<mml:math id="M7">
<mml:mrow>
<mml:mi>L</mml:mi>
<mml:mi>o</mml:mi>
<mml:mi>g</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mfrac>
<mml:mrow>
<mml:mi>d</mml:mi>
<mml:mi>θ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>d</mml:mi>
<mml:mi>α</mml:mi>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>=</mml:mo>
<mml:mi>L</mml:mi>
<mml:mi>o</mml:mi>
<mml:mi>g</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>d</mml:mi>
<mml:mi>θ</mml:mi>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>−</mml:mo>
<mml:mi>L</mml:mi>
<mml:mi>o</mml:mi>
<mml:mi>g</mml:mi>
<mml:mo stretchy="false">(</mml:mo>
<mml:mi>d</mml:mi>
<mml:mi>α</mml:mi>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:math>
</inline-formula>
). Here the aggregate data from the three different viewing distances defines a remarkably flat contour shown with the rainbow coloring. The contour lines show equal relative depth (
<italic>d/f</italic>
). The overlain green plane depicts the least-squares fit to the data set (
<italic>Log</italic>
(
<italic>d/f</italic>
) = −3.463 + 0.416
<italic>Log</italic>
(
<italic>d</italic>
θ) − 0.192
<italic>Log</italic>
(
<italic>d</italic>
α)). The agreement between the green plane and the data is excellent, with
<italic>r</italic>
<sup>2</sup>
= 0.875. (A dynamic, rotatable version of this graph, along with the program and data points used to generate it, can be found in a Mathematica CDF file in the Supplementary Material.) Of course, this least-squares fit is not a test of the M/PR model; rather, it provides a quantitative estimate of the relationship among the variables that the M/PR requires to explain the perceived depth measured here (e.g., Tufte,
<xref rid="B63" ref-type="bibr">1974/2006</xref>
). The gray transparent plane illustrates the geometrically correct depth percept predicted by the M/PR. As noted earlier, perceived depth from motion parallax is greatly foreshortened compared to the depth predicted by the geometric model.</p>
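The log-space plane fit described above can be reproduced with an ordinary least-squares regression. The sketch below is illustrative only: it fits a plane Log(d/f) = a + r·Log(dθ) + e·Log(dα) to synthetic stand-in data generated from the paper's reported coefficients, not to the study's actual PSE measurements, and it assumes NumPy is available.

```python
import numpy as np

# Synthetic stand-in data (NOT the study's measurements), generated from the
# paper's fitted coefficients plus a little noise, just to show the method.
rng = np.random.default_rng(0)
log_dtheta = rng.uniform(-2.0, 1.0, 60)   # Log retinal image velocity
log_dalpha = rng.uniform(0.5, 2.5, 60)    # Log pursuit velocity
log_rel_depth = (-3.463 + 0.416 * log_dtheta - 0.192 * log_dalpha
                 + rng.normal(0.0, 0.05, 60))

# Design matrix [1, Log(d_theta), Log(d_alpha)]; solve by least squares,
# as for the green plane in Figure 9.
X = np.column_stack([np.ones_like(log_dtheta), log_dtheta, log_dalpha])
coef, *_ = np.linalg.lstsq(X, log_rel_depth, rcond=None)
intercept, r_exp, e_exp = coef
print(intercept, r_exp, e_exp)  # recovers roughly -3.463, 0.416, -0.192
```

Because the fit is linear in log space, the two slope coefficients are exactly the power-law exponents of the non-log form of the model.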
<fig id="F9" position="float">
<label>Figure 9</label>
<caption>
<p>
<bold>Shown is a Log-Log-Log plot of relative depth (Log (
<italic>d</italic>
/
<italic>f</italic>
)) on the vertical axis, retinal image velocity (Log
<italic>d</italic>θ
) on the horizontal axis, and pursuit velocity (Log
<italic>d</italic>α
) on the (upper) z-axis</bold>
. The rainbow-shaded surface contains all of the data points from Figure
<xref ref-type="fig" rid="F5">5</xref>
. The green-shaded surface represents the least-squares fit to these data points. The gray-shaded surface represents veridical depth from the motion/pursuit law.</p>
</caption>
<graphic xlink:href="fpsyg-05-01103-g0009"></graphic>
</fig>
<p>One possible reason for this foreshortening is that the visual system is unable to recover, or to use, accurate motion or pursuit signals. The M/PL models a precise depth percept based on having veridical signals regarding
<italic>d</italic>
θ,
<italic>d</italic>
α, and
<italic>f</italic>
. The perception of motion during eye movements is an important problem in visual science (Mack and Herman,
<xref rid="B36" ref-type="bibr">1972</xref>
; Brenner and van den Berg,
<xref rid="B8" ref-type="bibr">1994</xref>
; Turano and Heidenreich,
<xref rid="B64" ref-type="bibr">1996</xref>
). Incorrect estimates of the two dynamic signals,
<italic>d</italic>
θ or
<italic>d</italic>
α, could produce a misestimate of perceived depth magnitude (
<italic>d</italic>
<sub>
<italic>mp</italic>
</sub>
), but the perceived underestimate seems to involve more than just estimates of the basic rates. The issue is how the visual system represents and then combines internal signals about retinal image motion and eye movement to generate an internal representation of object movements in a scene. The visual system's solution to this problem is often inaccurate, as seen in the Aubert-Fleischl phenomenon (Fleischl,
<xref rid="B24" ref-type="bibr">1882</xref>
; Wertheim and Van Gelder,
<xref rid="B71" ref-type="bibr">1990</xref>
), in which the visual pursuit of an object reduces its perceived speed, and the Filehne Illusion (Filehne,
<xref rid="B22" ref-type="bibr">1922</xref>
) in which a stationary object appears to move in the direction opposite an eye movement. One approach to this problem is to understand the inherent errors in the internal eye movement and retinal motion signals (e.g., Freeman and Banks,
<xref rid="B27" ref-type="bibr">1998</xref>
) and model these errors with power-law transducers (Freeman,
<xref rid="B26" ref-type="bibr">2001</xref>
; Turano and Massof,
<xref rid="B65" ref-type="bibr">2001</xref>
; Souman and Freeman,
<xref rid="B55" ref-type="bibr">2008</xref>
). These transducers give the estimated internal eye velocity and retinal image velocity signals based on the actual physical velocities.</p>
<p>The equation for the least-squares surface (the green surface in Figure
<xref ref-type="fig" rid="F9">9</xref>
) in non-log form gives the empirical motion/pursuit ratio:
<disp-formula id="E4">
<label>(4)</label>
<mml:math id="M4">
<mml:mrow>
<mml:msub>
<mml:mi>d</mml:mi>
<mml:mrow>
<mml:mi>m</mml:mi>
<mml:mi>p</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mtext>d</mml:mtext>
<mml:msup>
<mml:mi>θ</mml:mi>
<mml:mrow>
<mml:mtext>0</mml:mtext>
<mml:mo>.</mml:mo>
<mml:mtext>416</mml:mtext>
</mml:mrow>
</mml:msup>
</mml:mrow>
<mml:mrow>
<mml:mi>d</mml:mi>
<mml:msup>
<mml:mi>α</mml:mi>
<mml:mrow>
<mml:mn>0.192</mml:mn>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:mfrac>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mn>0.0313</mml:mn>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mi>f</mml:mi>
</mml:mrow>
</mml:math>
</disp-formula>
</p>
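Equation (4) is straightforward to evaluate numerically. The sketch below is a minimal implementation under stated assumptions: the function name `empirical_mpr` is ours, the angular velocities are taken in degrees per second (the units under which the constant 0.0313 was derived), and the returned depth is in the units of the viewing distance `f`. It also checks that the scaling constant is simply the exponential of the log-space intercept, exp(−3.463) ≈ 0.0313, so the plane fit and Equation (4) agree.

```python
import math

# Hedged sketch of Equation (4), the empirical motion/pursuit ratio.
# d_theta: retinal image velocity (deg/s), d_alpha: pursuit velocity (deg/s),
# f: viewing distance; returns the predicted perceived depth d_mp.
def empirical_mpr(d_theta, d_alpha, f, r=0.416, e=0.192, k=0.0313):
    return (d_theta ** r) / (d_alpha ** e) * k * f

# The scaling constant is the exponential of the log-space intercept:
print(round(math.exp(-3.463), 4))  # 0.0313

# Predicted depth is linear in viewing distance, all else equal:
print(empirical_mpr(0.5, 5.0, 72.0) / empirical_mpr(0.5, 5.0, 36.0))  # 2.0
```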
<p>This gives a result similar to Freeman (
<xref rid="B26" ref-type="bibr">2001</xref>
) and Turano and Massof (
<xref rid="B65" ref-type="bibr">2001</xref>
), where the log-least-squares coefficients act as power-law transducers (<italic>e</italic> and <italic>r</italic>) for the pursuit velocity signal (
<italic>d</italic>
α
<sup>e</sup>
) and for the retinal image velocity signal (
<italic>d</italic>
θ
<sup>r</sup>
). With these power-law transducers, the empirical M/PR provides an excellent account of the perceived depth from motion parallax within the range of variables tested in the head-stationary conditions here.</p>
<p>The transducer exponents derived from this experiment are quite interesting and perhaps a little puzzling. First, the pursuit exponent (e) is smaller than the retinal image motion exponent (r). This is in general agreement with the comparative sizes of the transducers found by Turano and Massof (
<xref rid="B65" ref-type="bibr">2001</xref>
; Table 2) and Freeman (
<xref rid="B26" ref-type="bibr">2001</xref>
; Figure 12). In the current study the relative size indicates that changes in retinal image motion (
<italic>d</italic>
θ) have a larger effect on changes in perceived depth than changes in pursuit velocity (
<italic>d</italic>
α). This corresponds to the relative slopes of the lines in Figure
<xref ref-type="fig" rid="F6">6</xref>
(changes in
<italic>d</italic>
θ) and Figure
<xref ref-type="fig" rid="F7">7</xref>
(changes in
<italic>d</italic>
α), and the relative slopes of the rainbow and green surfaces along the Log(
<italic>d</italic>
θ) and Log(
<italic>d</italic>
α) axes in Figure
<xref ref-type="fig" rid="F9">9</xref>
.</p>
<p>However, these transducer values,
<italic>e</italic>
= 0.192 and
<italic>r</italic>
= 0.416, are smaller than those that characterize the perception of motion during eye movements, which are typically near 1 [e.g., Figure 12, (Freeman,
<xref rid="B26" ref-type="bibr">2001</xref>
); although these values are very similar to those for the ill-fitting nonlinear model of (Turano and Massof,
<xref rid="B65" ref-type="bibr">2001</xref>
), Table 2]. A smaller transducer value means that the visual system is registering, or using, a smaller internal representation of the external physical stimulus. While small transducer values might be problematic for motion perception, the perceptual situation for motion parallax is quite different. In the former, the mechanism operates to determine relative velocity; in the latter, it determines relative depth. Additionally, with motion parallax the objects are not perceived as moving, but as stationary within the environment. It is therefore, perhaps, not unusual that the different mechanisms operate with different types of inputs, and the lower transducer values may contribute to this perceptual difference in object motion with motion parallax. Of course, it is unclear exactly where these signals become inaccurate. Given the higher transducer values for motion perception, it is likely that the reduced values reflect processing of these signals within the mechanism that combines them for motion parallax.</p>
<p>Finally, the scaling constant applied to the ratio (0.0313) appears to be related only to the units chosen to represent angles (degrees vs. radians). Recall that Newton's law of motion says acceleration is proportional to force; the constant of proportionality (mass) depends on the units. For instance, if visual angle had been computed in radians, similar to the distance-square approximation for binocular disparity (e.g., Cormack and Fox,
<xref rid="B13" ref-type="bibr">1985</xref>
) instead of degrees, the scaling constant would be 0.0778 while the transducer values remain unchanged. (A change of scale by a constant (
<italic>c</italic>
) changes dθ
<sup>0.416</sup>
/dα
<sup>0.192</sup>
by the factor c
<sup>0.416</sup>
/c
<sup>0.192</sup>
and 0.0313372/(c
<sup>0.416</sup>
/c
<sup>0.192</sup>
) = 0.0777579 when
<italic>c</italic>
= π/180. To give the scaling constant a value of 1 in the empirical model, the units of visual angle would have to be represented in a unit equivalent to 5.004 × 10
<sup>6</sup>
degrees).</p>
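The units argument above can be checked directly. A minimal sketch, assuming the quoted rounded exponents and constant: rescaling both angular velocities by a constant c changes the ratio dθ^0.416/dα^0.192 by the factor c^(0.416 − 0.192), so converting degrees to radians changes only the scaling constant, not the transducer exponents.

```python
import math

# Rescaling both angular velocities by c changes the ratio by c**(r - e),
# so the scaling constant absorbs the unit change while the exponents
# stay fixed. Values below are the rounded figures quoted in the text.
r, e = 0.416, 0.192
k_degrees = 0.0313372           # scaling constant with angles in degrees
c = math.pi / 180.0             # degrees -> radians
k_radians = k_degrees / (c ** (r - e))
print(round(k_radians, 3))      # 0.078, matching the quoted ~0.0778
```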
<p>Another curious feature of our empirically measured law is the difference in the two exponents. While these are remarkably similar to the differences observed in the transducer model experiments mentioned above, and while the smaller exponent for pursuit, dα
<sup>0.192</sup>
, accounts for the foreshortened depth perception and points more strongly to pursuit as the cause of foreshortening, there is another possible contribution to the difference. The brain combines the retinal motion and pursuit signals. The mathematical values of the retinal motion are much smaller than those of the pursuit rates, and the neural representations of these signals could conceivably be scaled differently before this combination is made; their internal units might be different. A scaled combination of logarithmic signals would appear mathematically as constant multipliers in a difference of logs, like the least-squares log formula above (those constants are our transducer exponents). A better understanding of the difference in transducer exponents may reveal insights about the internal neural mechanisms used to recover relative depth from motion parallax.</p>
<p>As mentioned earlier, and addressed with the control conditions, the accuracy and robustness of the empirical motion/pursuit ratio (Equation 4) may depend on how well depth constancy was preserved in the binocular disparity stimuli with which the motion parallax stimuli were compared. While the Materials and Methods Section outlined the stimulus considerations used to optimize depth constancy in the current study, and the control conditions demonstrated excellent depth constancy, here we consider the implications if depth from binocular disparity were independently overestimated at the near viewing distances used in the current study (Johnston et al.,
<xref rid="B34a" ref-type="bibr">1994</xref>
). Hypothetically, with the depth matching procedure used here, this would produce an underestimate of the perceived depth derived from the motion parallax stimulus parameters. Moreover, we can estimate the effect of such a hypothetical distortion in binocular disparity. For this we used the well-known, and often cited, example of distortion provided by Johnston (
<xref rid="B34" ref-type="bibr">1991</xref>
). Using the data extrapolated from her Figure 4 at the two shortest viewing distances (
<italic>f</italic>
= 53.4 and 107 cm) for both observers (EBJ and JSM), we determined a least squares function of viewing distance. Johnston does not include data at 36 cm, so we needed to extrapolate her result down to our data range. This function was used to scale the depth magnitude estimates at each of the viewing distances in the current study (
<italic>f</italic>
= 36, 54, and 72 cm) with the scaling factor:
<disp-formula id="E5">
<label>(5)</label>
<mml:math id="M5">
<mml:mrow>
<mml:mfrac>
<mml:mrow>
<mml:mi>d</mml:mi>
<mml:mo stretchy="false">(</mml:mo>
<mml:mi>p</mml:mi>
<mml:mi>e</mml:mi>
<mml:mi>r</mml:mi>
<mml:mi>c</mml:mi>
<mml:mi>e</mml:mi>
<mml:mi>i</mml:mi>
<mml:mi>v</mml:mi>
<mml:mi>e</mml:mi>
<mml:mi>d</mml:mi>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mi>d</mml:mi>
<mml:mo stretchy="false">(</mml:mo>
<mml:mi>v</mml:mi>
<mml:mi>e</mml:mi>
<mml:mi>r</mml:mi>
<mml:mi>i</mml:mi>
<mml:mi>d</mml:mi>
<mml:mi>i</mml:mi>
<mml:mi>c</mml:mi>
<mml:mi>a</mml:mi>
<mml:mi>l</mml:mi>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mfrac>
<mml:mo>=</mml:mo>
<mml:mn>2.015</mml:mn>
<mml:mo></mml:mo>
<mml:mn>0.011</mml:mn>
<mml:mi>f</mml:mi>
</mml:mrow>
</mml:math>
</disp-formula>
</p>
<p>Notice that this ratio,
<italic>d</italic>
(
<italic>perceived</italic>
)/
<italic>d</italic>
(
<italic>veridical</italic>
), is 1 at
<italic>f</italic>
= 96.2 cm, rather than at the 80 cm viewing distance (e.g., Johnston et al.,
<xref rid="B34a" ref-type="bibr">1994</xref>
) that Johnston found by another approach. This increases our depth magnitude estimates at all viewing distances (
<italic>f</italic>
= 72, 54, and 36 cm), with a greater increase in distortion at smaller <italic>f</italic>. Compared to the PSE and σ values for depth discrimination from binocular disparity determined in the control studies, this distortion represents a PSE shift of about 2 to 3 σ.</p>
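The hypothetical scaling factor of Equation (5) can be evaluated at the study's three viewing distances. A minimal sketch, using the rounded coefficients quoted in the text (the function name `disparity_scale` is ours):

```python
# Hedged sketch of Equation (5): the hypothetical disparity-distortion
# scaling factor extrapolated from Johnston (1991), with f in cm.
# Coefficients are the rounded values quoted in the text.
def disparity_scale(f, a=2.015, b=0.011):
    return a - b * f

for f in (36, 54, 72):
    print(f, round(disparity_scale(f), 3))  # 1.619, 1.421, 1.223

# The factor exceeds 1 at all three distances and is largest at the
# nearest distance, i.e. the hypothetical overestimation grows as f shrinks.
```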
<p>With the scaling function given in Equation (5) representing a hypothetical distortion in the perception of depth from binocular disparity, the transducer values found in Equation (4) changed from
<italic>r</italic>
= 0.416 and
<italic>e</italic>
= 0.192 to adjusted values of
<italic>r</italic>
<sub>
<italic>a</italic>
</sub>
= 0.428 and
<italic>e</italic>
<sub>
<italic>a</italic>
</sub>
= 0.148, giving an adjusted empirical motion/pursuit ratio:
<disp-formula id="E6">
<label>(6)</label>
<mml:math id="M6">
<mml:mrow>
<mml:msub>
<mml:mi>d</mml:mi>
<mml:mrow>
<mml:mi>m</mml:mi>
<mml:mi>p</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mtext>d</mml:mtext>
<mml:msup>
<mml:mi>θ</mml:mi>
<mml:mrow>
<mml:mtext>0</mml:mtext>
<mml:mo>.</mml:mo>
<mml:mtext>428</mml:mtext>
</mml:mrow>
</mml:msup>
</mml:mrow>
<mml:mrow>
<mml:mi>d</mml:mi>
<mml:msup>
<mml:mi>α</mml:mi>
<mml:mrow>
<mml:mn>0.148</mml:mn>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:mfrac>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mn>0.0444</mml:mn>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mi>f</mml:mi>
</mml:mrow>
</mml:math>
</disp-formula>
</p>
<p>Graphically, this hypothetical adjustment would shift the green surface in Figure
<xref ref-type="fig" rid="F9">9</xref>
vertically upward by less than half a natural log unit. In units of perceived depth, this adjustment corresponds to an increase in the magnitude of perceived depth from motion parallax of a few to several millimeters in the parameter space studied here. Interactive graphs of both the adjusted and unadjusted plots (Figure
<xref ref-type="fig" rid="F9">9</xref>
) are included in the Supplementary Material. The reader can rotate the figures and compare them both with each other and with the motion/pursuit ratio. The data and programs that generated the plots are also included. Of course, this extrapolated adjustment corresponds to an extreme case, but it persuasively demonstrates that any failure of depth constancy in the binocular disparity stimuli would have only a small effect on the interpretation of these results. This is because the documented distortions in the perception of depth from binocular disparity are small compared to the systematic distortion in the perception of depth from motion parallax expressed in the empirical motion/pursuit ratio (Equation 4).</p>
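The claimed vertical shift between the unadjusted (Equation 4) and adjusted (Equation 6) surfaces can be verified numerically. The sketch below is illustrative: the sample velocity pairs are assumptions chosen to lie roughly in the stimulus range, not values from the experiment, and the viewing distance cancels out of the log shift entirely.

```python
import math

# Compare the unadjusted (Eq. 4) and adjusted (Eq. 6) empirical ratios.
def mpr(d_theta, d_alpha, f, r, e, k):
    return (d_theta ** r) / (d_alpha ** e) * k * f

f = 54.0  # cm; note f cancels in the ratio below
for d_theta, d_alpha in [(0.2, 3.0), (0.5, 5.0), (1.0, 8.0)]:
    d4 = mpr(d_theta, d_alpha, f, 0.416, 0.192, 0.0313)  # Equation (4)
    d6 = mpr(d_theta, d_alpha, f, 0.428, 0.148, 0.0444)  # Equation (6)
    shift = math.log(d6 / d4)  # vertical offset in natural-log units
    print(d_theta, d_alpha, round(shift, 3))
```

For these sample parameters the shift stays positive and below half a natural log unit, consistent with the description of the adjusted green surface.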
<p>The results of this study show that the M/PR, with the application of a single set of non-linear transducers that represent the inherent inaccuracies of the internal motion and pursuit signals, can account for the perception of depth from motion parallax over a variety of pursuit velocities, retinal image velocities, and viewing distances. Moreover, the empirical M/PR proposed here provides testable, quantitative predictions for parameters outside this range. While the non-linearities suggest that the empirical M/PR may generalize to a much wider range of parameters, it is unclear what may happen at very large viewing distances. Retinal motion and pursuit are subject to a “speed multiplier” effect at long viewing distances when the observer translates at higher speeds (Nawrot and Stroyan,
<xref rid="B47" ref-type="bibr">2009</xref>
), but the perception of depth may be more closely tied to the apparent distance, rather than the actual physical distance, as it is for stereoscopic depth perception (Cormack,
<xref rid="B12" ref-type="bibr">1984</xref>
). This would, of course, present an obvious difficulty for the quantitative predictions of the model.</p>
<p>Another important caveat is that the empirical M/PR does not account for conditions in which the observer is accelerating and producing involuntary tVOR eye movements. These include conditions in which the observer's head is translated from side to side. In these conditions the compensatory eye movement is a combination of tVOR and smooth pursuit (Miles,
<xref rid="B38" ref-type="bibr">1993</xref>
,
<xref rid="B39" ref-type="bibr">1998</xref>
), but it is only the pursuit component of the compensatory eye movement that contributes to the internal signal
<italic>d</italic>
α (Nawrot and Joyce,
<xref rid="B46" ref-type="bibr">2006</xref>
). As illustrated by the head-translating conditions in the current experiment, a high-velocity eye movement during head translation produces the same
<italic>d</italic>
<sub>
<italic>mp</italic>
</sub>
depth magnitude as a slower eye movement with a stationary head. The difference is due to the tVOR contributing to the eye movement gain, but not to the mechanisms responsible for perceived depth.</p>
<p>The results of this study indicate large depth foreshortening with motion parallax. This is found in both head-stationary viewing (which isolates pursuit eye movements) and head-moving conditions (which elicit both pursuit and tVOR eye movements). The empirical M/PR now addresses this depth foreshortening with power-law transducers adjusting the retinal motion (
<italic>d</italic>
θ) and pursuit (
<italic>d</italic>
α) signals. The use of power-law transducers here is similar to their use in explaining the inaccuracies in perceived motion during eye movements (Freeman,
<xref rid="B26" ref-type="bibr">2001</xref>
; Turano and Massof,
<xref rid="B65" ref-type="bibr">2001</xref>
). However, the exponents found here, for the perception of depth from motion and eye movements, are smaller than those for the perception of motion, but not depth, during eye movements. One way to link the two would be to determine the power-law transducers that model the perception of motion for objects nearer or farther than the fixation plane. Such work would reveal much about how we recover the relative depth of non-fixated moving objects while the observer is also moving, a common occurrence in our cluttered environment.</p>
<sec>
<title>Conflict of interest statement</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
</sec>
</body>
<back>
<ack>
<p>This work was supported by a Centers of Biomedical Research Excellence (COBRE) Grant: NIH P20 RR020151 and P20 GM103505. A portion of this research was initially presented to the Vision Science Society meetings in 2011.</p>
</ack>
<sec sec-type="supplementary-material" id="s5">
<title>Supplementary material</title>
<p>The Supplementary Material for this article can be found online at:
<ext-link ext-link-type="uri" xlink:href="http://www.frontiersin.org/journal/10.3389/fpsyg.2014.01103/abstract">http://www.frontiersin.org/journal/10.3389/fpsyg.2014.01103/abstract</ext-link>
</p>
<p>Supplementary materials present interactive versions of the data presented in Figure 9, along with the effects of a hypothetical distortion in the perception of depth from binocular disparity presented in the discussion. The file is in Mathematica CDF format, which requires the free Mathematica CDF reader available at (
<ext-link ext-link-type="uri" xlink:href="http://www.wolfram.com/cdf-player/">http://www.wolfram.com/cdf-player/</ext-link>
).</p>
<supplementary-material content-type="local-data" id="SM1">
<media xlink:href="Presentation1.PDF">
<caption>
<p>Click here for additional data file.</p>
</caption>
</media>
</supplementary-material>
<supplementary-material content-type="local-data" id="SM2">
<media xlink:href="Presentation1.ZIP">
<caption>
<p>Click here for additional data file.</p>
</caption>
</media>
</supplementary-material>
</sec>
<ref-list>
<title>References</title>
<ref id="B1">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Backus</surname>
<given-names>B. T.</given-names>
</name>
<name>
<surname>Banks</surname>
<given-names>M. S.</given-names>
</name>
<name>
<surname>van Ee</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Crowell</surname>
<given-names>J. A.</given-names>
</name>
</person-group>
(
<year>1999</year>
).
<article-title>Horizontal and vertical disparity, eye position, and stereoscopic slant perception</article-title>
.
<source>Vision Res</source>
.
<volume>39</volume>
,
<fpage>1143</fpage>
<lpage>1170</lpage>
<pub-id pub-id-type="doi">10.1016/S0042-6989(98)00139-4</pub-id>
<pub-id pub-id-type="pmid">10343832</pub-id>
</mixed-citation>
</ref>
<ref id="B1a">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Baird</surname>
<given-names>J. C.</given-names>
</name>
<name>
<surname>Biersdorf</surname>
<given-names>W. R.</given-names>
</name>
</person-group>
(
<year>1967</year>
).
<article-title>Quantitative functions for size and distance judgments</article-title>
.
<source>Percept. Psychophys</source>
.
<volume>2</volume>
,
<fpage>161</fpage>
<lpage>166</lpage>
<pub-id pub-id-type="doi">10.3758/BF03210312</pub-id>
<pub-id pub-id-type="pmid">20972775</pub-id>
</mixed-citation>
</ref>
<ref id="B2">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Barlow</surname>
<given-names>H. B.</given-names>
</name>
<name>
<surname>Blakemore</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Pettigrew</surname>
<given-names>J. D.</given-names>
</name>
</person-group>
(
<year>1967</year>
).
<article-title>The neural mechanism of binocular depth discrimination</article-title>
.
<source>J. Physiol</source>
.
<volume>193</volume>
:
<fpage>327</fpage>
<pub-id pub-id-type="pmid">6065881</pub-id>
</mixed-citation>
</ref>
<ref id="B3">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bradshaw</surname>
<given-names>M. F.</given-names>
</name>
<name>
<surname>Glennerster</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Rogers</surname>
<given-names>B. J.</given-names>
</name>
</person-group>
(
<year>1996</year>
).
<article-title>The effect of display size on disparity scaling from differential perspective and vergence cues</article-title>
.
<source>Vision Res</source>
.
<volume>36</volume>
,
<fpage>1255</fpage>
<lpage>1264</lpage>
<pub-id pub-id-type="doi">10.1016/0042-6989(95)00190-5</pub-id>
<pub-id pub-id-type="pmid">8711905</pub-id>
</mixed-citation>
</ref>
<ref id="B4">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bradshaw</surname>
<given-names>M. F.</given-names>
</name>
<name>
<surname>Parton</surname>
<given-names>A. D.</given-names>
</name>
<name>
<surname>Eagle</surname>
<given-names>R. A.</given-names>
</name>
</person-group>
(
<year>1998</year>
).
<article-title>The interaction of binocular disparity and motion parallax in determining perceived depth and perceived size</article-title>
.
<source>Perception</source>
<volume>27</volume>
,
<fpage>1317</fpage>
<lpage>1333</lpage>
<pub-id pub-id-type="doi">10.1068/p271317</pub-id>
<pub-id pub-id-type="pmid">10505177</pub-id>
</mixed-citation>
</ref>
<ref id="B5">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bradshaw</surname>
<given-names>M. F.</given-names>
</name>
<name>
<surname>Parton</surname>
<given-names>A. D.</given-names>
</name>
<name>
<surname>Glennerster</surname>
<given-names>A.</given-names>
</name>
</person-group>
(
<year>2000</year>
).
<article-title>The task-dependent use of binocular disparity and motion parallax information</article-title>
.
<source>Vision Res</source>
.
<volume>40</volume>
,
<fpage>3725</fpage>
<lpage>3734</lpage>
<pub-id pub-id-type="doi">10.1016/S0042-6989(00)00214-5</pub-id>
<pub-id pub-id-type="pmid">11090665</pub-id>
</mixed-citation>
</ref>
<ref id="B6">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Brenner</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Smeets</surname>
<given-names>J. B.</given-names>
</name>
</person-group>
(
<year>2000</year>
).
<article-title>Comparing extra-retinal information about distance and direction</article-title>
.
<source>Vision Res</source>
.
<volume>40</volume>
,
<fpage>1649</fpage>
<lpage>1651</lpage>
<pub-id pub-id-type="doi">10.1016/S0042-6989(00)00062-6</pub-id>
<pub-id pub-id-type="pmid">10814753</pub-id>
</mixed-citation>
</ref>
<ref id="B7">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Brenner</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Van Damme</surname>
<given-names>W. J.</given-names>
</name>
</person-group>
(
<year>1998</year>
).
<article-title>Judging distance from ocular convergence</article-title>
.
<source>Vision Res</source>
.
<volume>38</volume>
,
<fpage>493</fpage>
<lpage>498</lpage>
<pub-id pub-id-type="doi">10.1016/S0042-6989(97)00236-8</pub-id>
<pub-id pub-id-type="pmid">9536373</pub-id>
</mixed-citation>
</ref>
<ref id="B8">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Brenner</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>van den Berg</surname>
<given-names>A. V.</given-names>
</name>
</person-group>
(
<year>1994</year>
).
<article-title>Judging object velocity during smooth pursuit eye movements</article-title>
.
<source>Exp. Brain Res</source>
.
<volume>99</volume>
,
<fpage>316</fpage>
<lpage>324</lpage>
<pub-id pub-id-type="doi">10.1007/BF00239598</pub-id>
<pub-id pub-id-type="pmid">7925812</pub-id>
</mixed-citation>
</ref>
<ref id="B9">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Brindley</surname>
<given-names>G.</given-names>
</name>
</person-group>
(
<year>1970</year>
).
<source>Physiology of the Retina and Visual Pathways</source>
.
<publisher-loc>Baltimore, MD</publisher-loc>
:
<publisher-name>Williams & Wilkins</publisher-name>
</mixed-citation>
</ref>
<ref id="B10">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Campbell</surname>
<given-names>F. W.</given-names>
</name>
<name>
<surname>Maffei</surname>
<given-names>L.</given-names>
</name>
</person-group>
(
<year>1981</year>
).
<article-title>The influence of spatial frequency and contrast on the perception of moving patterns</article-title>
.
<source>Vision Res</source>
.
<volume>21</volume>
,
<fpage>713</fpage>
<lpage>721</lpage>
<pub-id pub-id-type="doi">10.1016/0042-6989(81)90080-8</pub-id>
<pub-id pub-id-type="pmid">7293002</pub-id>
</mixed-citation>
</ref>
<ref id="B11">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Cavanagh</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Tyler</surname>
<given-names>C. W.</given-names>
</name>
<name>
<surname>Favreau</surname>
<given-names>O. E.</given-names>
</name>
</person-group>
(
<year>1984</year>
).
<article-title>Perceived velocity of moving chromatic gratings</article-title>
.
<source>J. Opt. Soc. Am. A</source>
<volume>1</volume>
,
<fpage>893</fpage>
<lpage>899</lpage>
<pub-id pub-id-type="doi">10.1364/JOSAA.1.000893</pub-id>
<pub-id pub-id-type="pmid">6470841</pub-id>
</mixed-citation>
</ref>
<ref id="B12">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Cormack</surname>
<given-names>R.</given-names>
</name>
</person-group>
(
<year>1984</year>
).
<article-title>Stereoscopic depth perception at far viewing distances</article-title>
.
<source>Percept. Psychophys</source>
.
<volume>35</volume>
,
<fpage>423</fpage>
<lpage>428</lpage>
<pub-id pub-id-type="doi">10.3758/BF03203918</pub-id>
<pub-id pub-id-type="pmid">6462868</pub-id>
</mixed-citation>
</ref>
<ref id="B13">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Cormack</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Fox</surname>
<given-names>R.</given-names>
</name>
</person-group>
(
<year>1985</year>
).
<article-title>The computation of retinal disparity</article-title>
.
<source>Percept. Psychophys</source>
.
<volume>37</volume>
,
<fpage>176</fpage>
<lpage>178</lpage>
<pub-id pub-id-type="doi">10.3758/BF03202855</pub-id>
<pub-id pub-id-type="pmid">4011374</pub-id>
</mixed-citation>
</ref>
<ref id="B14">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Cumming</surname>
<given-names>B. G.</given-names>
</name>
<name>
<surname>Johnston</surname>
<given-names>E. B.</given-names>
</name>
<name>
<surname>Parker</surname>
<given-names>A. J.</given-names>
</name>
</person-group>
(
<year>1991</year>
).
<article-title>Vertical disparities and perception of 3-dimensional shape</article-title>
.
<source>Nature</source>
<volume>349</volume>
,
<fpage>411</fpage>
<lpage>413</lpage>
<pub-id pub-id-type="doi">10.1038/349411a0</pub-id>
<pub-id pub-id-type="pmid">1992341</pub-id>
</mixed-citation>
</ref>
<ref id="B15">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Diener</surname>
<given-names>H. C.</given-names>
</name>
<name>
<surname>Wist</surname>
<given-names>E. R.</given-names>
</name>
<name>
<surname>Dichgans</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Brant</surname>
<given-names>T.</given-names>
</name>
</person-group>
(
<year>1976</year>
).
<article-title>The spatial frequency effect on perceived velocity</article-title>
.
<source>Vision Res</source>
.
<volume>16</volume>
,
<fpage>169</fpage>
<lpage>176</lpage>
<pub-id pub-id-type="doi">10.1016/0042-6989(76)90094-8</pub-id>
<pub-id pub-id-type="pmid">1266057</pub-id>
</mixed-citation>
</ref>
<ref id="B16">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Di Luca</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Domini</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Caudek</surname>
<given-names>C.</given-names>
</name>
</person-group>
(
<year>2010</year>
).
<article-title>Inconsistency of perceived 3D shape</article-title>
.
<source>Vision Res</source>
.
<volume>21</volume>
,
<fpage>1519</fpage>
<lpage>1531</lpage>
<pub-id pub-id-type="doi">10.1016/j.visres.2010.05.006</pub-id>
<pub-id pub-id-type="pmid">20470815</pub-id>
</mixed-citation>
</ref>
<ref id="B17">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Dobbins</surname>
<given-names>A. C.</given-names>
</name>
<name>
<surname>Jeo</surname>
<given-names>R. M.</given-names>
</name>
<name>
<surname>Fiser</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Allman</surname>
<given-names>J. M.</given-names>
</name>
</person-group>
(
<year>1998</year>
).
<article-title>Distance modulation of neural activity in the visual cortex</article-title>
.
<source>Science</source>
<volume>281</volume>
,
<fpage>552</fpage>
<lpage>555</lpage>
<pub-id pub-id-type="doi">10.1126/science.281.5376.552</pub-id>
<pub-id pub-id-type="pmid">9677196</pub-id>
</mixed-citation>
</ref>
<ref id="B18">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Domini</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Caudek</surname>
<given-names>C.</given-names>
</name>
</person-group>
(
<year>2003</year>
).
<article-title>3-D structure perceived from dynamic information: a new theory</article-title>
.
<source>Trends Cogn. Sci</source>
.
<volume>7</volume>
,
<fpage>444</fpage>
<lpage>449</lpage>
<pub-id pub-id-type="doi">10.1016/j.tics.2003.08.007</pub-id>
<pub-id pub-id-type="pmid">14550491</pub-id>
</mixed-citation>
</ref>
<ref id="B19">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Domini</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Caudek</surname>
<given-names>C.</given-names>
</name>
</person-group>
(
<year>2009</year>
).
<article-title>The intrinsic constraint model and Fechnerian sensory scaling</article-title>
.
<source>J. Vis</source>
.
<volume>9</volume>
,
<fpage>25.1</fpage>
<lpage>25.15</lpage>
<pub-id pub-id-type="doi">10.1167/9.2.25</pub-id>
<pub-id pub-id-type="pmid">19271935</pub-id>
</mixed-citation>
</ref>
<ref id="B20">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Domini</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Caudek</surname>
<given-names>C.</given-names>
</name>
</person-group>
(
<year>2010</year>
).
<article-title>Matching perceived depth from disparity and from velocity: modeling and psychophysics</article-title>
.
<source>Acta Psychol</source>
.
<volume>133</volume>
,
<fpage>81</fpage>
<lpage>89</lpage>
<pub-id pub-id-type="doi">10.1016/j.actpsy.2009.10.003</pub-id>
<pub-id pub-id-type="pmid">19963200</pub-id>
</mixed-citation>
</ref>
<ref id="B21">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Durgin</surname>
<given-names>F. H.</given-names>
</name>
<name>
<surname>Proffitt</surname>
<given-names>D. R.</given-names>
</name>
<name>
<surname>Reinke</surname>
<given-names>K. S.</given-names>
</name>
<name>
<surname>Olson</surname>
<given-names>T. J.</given-names>
</name>
</person-group>
(
<year>1995</year>
).
<article-title>Comparing depth from motion with depth from binocular disparity</article-title>
.
<source>J. Exp. Psychol. Hum. Percept. Perform</source>
.
<volume>21</volume>
,
<fpage>679</fpage>
<lpage>699</lpage>
<pub-id pub-id-type="doi">10.1037/0096-1523.21.3.679</pub-id>
<pub-id pub-id-type="pmid">7790841</pub-id>
</mixed-citation>
</ref>
<ref id="B22">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Filehne</surname>
<given-names>W.</given-names>
</name>
</person-group>
(
<year>1922</year>
).
<article-title>Über das optische Wahrnehmen von Bewegungen</article-title>
.
<source>Z. Sinnesphysiol</source>
.
<volume>53</volume>
,
<fpage>134</fpage>
<lpage>145</lpage>
</mixed-citation>
</ref>
<ref id="B23">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Fisher</surname>
<given-names>S. K.</given-names>
</name>
<name>
<surname>Ciuffreda</surname>
<given-names>K. J.</given-names>
</name>
</person-group>
(
<year>1988</year>
).
<article-title>Accommodation and apparent distance</article-title>
.
<source>Perception</source>
<volume>17</volume>
,
<fpage>609</fpage>
<lpage>621</lpage>
<pub-id pub-id-type="doi">10.1068/p170609</pub-id>
<pub-id pub-id-type="pmid">3249669</pub-id>
</mixed-citation>
</ref>
<ref id="B24">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Fleischl</surname>
<given-names>E.</given-names>
</name>
</person-group>
(
<year>1882</year>
).
<article-title>Physiologisch-optische Notizen</article-title>
.
<source>SB Akad. Wiss. Wien</source>
<volume>86</volume>
,
<fpage>17</fpage>
<lpage>25</lpage>
<pub-id pub-id-type="pmid">16784441</pub-id>
</mixed-citation>
</ref>
<ref id="B25">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Foster</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Fantoni</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Caudek</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Domini</surname>
<given-names>F.</given-names>
</name>
</person-group>
(
<year>2011</year>
).
<article-title>Integration of disparity and velocity information for haptic and perceptual judgments of object depth</article-title>
.
<source>Acta Psychol</source>
.
<volume>136</volume>
,
<fpage>300</fpage>
<lpage>310</lpage>
<pub-id pub-id-type="doi">10.1016/j.actpsy.2010.12.003</pub-id>
<pub-id pub-id-type="pmid">21237442</pub-id>
</mixed-citation>
</ref>
<ref id="B26">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Freeman</surname>
<given-names>T. C.</given-names>
</name>
</person-group>
(
<year>2001</year>
).
<article-title>Transducer models of head-centred motion perception</article-title>
.
<source>Vision Res</source>
.
<volume>41</volume>
,
<fpage>2741</fpage>
<lpage>2755</lpage>
<pub-id pub-id-type="doi">10.1016/S0042-6989(01)00159-6</pub-id>
<pub-id pub-id-type="pmid">11587724</pub-id>
</mixed-citation>
</ref>
<ref id="B27">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Freeman</surname>
<given-names>T. C. A.</given-names>
</name>
<name>
<surname>Banks</surname>
<given-names>M. S.</given-names>
</name>
</person-group>
(
<year>1998</year>
).
<article-title>Perceived head-centric speed is affected by both extra-retinal and retinal errors</article-title>
.
<source>Vision Res</source>
.
<volume>38</volume>
,
<fpage>941</fpage>
<lpage>946</lpage>
<pub-id pub-id-type="doi">10.1016/S0042-6989(97)00395-7</pub-id>
<pub-id pub-id-type="pmid">9666976</pub-id>
</mixed-citation>
</ref>
<ref id="B28">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Freeman</surname>
<given-names>T. C. A.</given-names>
</name>
<name>
<surname>Fowler</surname>
<given-names>T. A.</given-names>
</name>
</person-group>
(
<year>2000</year>
).
<article-title>Unequal retinal and extra-retinal motion signals produce different perceived slants of moving surfaces</article-title>
.
<source>Vision Res</source>
.
<volume>40</volume>
,
<fpage>1857</fpage>
<lpage>1868</lpage>
<pub-id pub-id-type="doi">10.1016/S0042-6989(00)00045-6</pub-id>
<pub-id pub-id-type="pmid">10837831</pub-id>
</mixed-citation>
</ref>
<ref id="B29">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Frisby</surname>
<given-names>J. P.</given-names>
</name>
<name>
<surname>Buckley</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Duke</surname>
<given-names>P. A.</given-names>
</name>
</person-group>
(
<year>1996</year>
).
<article-title>Evidence for the good recovery of lengths of real objects seen with natural stereoviewing</article-title>
.
<source>Perception</source>
<volume>25</volume>
,
<fpage>129</fpage>
<lpage>154</lpage>
<pub-id pub-id-type="doi">10.1068/p250129</pub-id>
<pub-id pub-id-type="pmid">8733143</pub-id>
</mixed-citation>
</ref>
<ref id="B30">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Garding</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Porrill</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Mayhew</surname>
<given-names>J. E.</given-names>
</name>
<name>
<surname>Frisby</surname>
<given-names>J. P.</given-names>
</name>
</person-group>
(
<year>1995</year>
).
<article-title>Stereopsis, vertical disparity and relief transformations</article-title>
.
<source>Vision Res</source>
.
<volume>35</volume>
,
<fpage>703</fpage>
<lpage>722</lpage>
<pub-id pub-id-type="doi">10.1016/0042-6989(94)00162-F</pub-id>
<pub-id pub-id-type="pmid">7900308</pub-id>
</mixed-citation>
</ref>
<ref id="B31">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Glennerster</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Rogers</surname>
<given-names>B. J.</given-names>
</name>
<name>
<surname>Bradshaw</surname>
<given-names>M. F.</given-names>
</name>
</person-group>
(
<year>1996</year>
).
<article-title>Stereoscopic depth constancy depends on the subject's task</article-title>
.
<source>Vision Res</source>
.
<volume>36</volume>
,
<fpage>3441</fpage>
<lpage>3456</lpage>
<pub-id pub-id-type="doi">10.1016/0042-6989(96)00090-9</pub-id>
<pub-id pub-id-type="pmid">8977011</pub-id>
</mixed-citation>
</ref>
<ref id="B32">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Gonzalez</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Perez</surname>
<given-names>R.</given-names>
</name>
</person-group>
(
<year>1998</year>
).
<article-title>Modulation of cell responses to horizontal disparities by ocular vergence in the visual cortex of the awake
<italic>Macaca mulatta</italic>
monkey</article-title>
.
<source>Neurosci. Lett</source>
.
<volume>245</volume>
,
<fpage>101</fpage>
<lpage>104</lpage>
<pub-id pub-id-type="doi">10.1016/S0304-3940(98)00191-8</pub-id>
<pub-id pub-id-type="pmid">9605495</pub-id>
</mixed-citation>
</ref>
<ref id="B33">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Graham</surname>
<given-names>C. H.</given-names>
</name>
<name>
<surname>Baker</surname>
<given-names>K. E.</given-names>
</name>
<name>
<surname>Hecht</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Lloyd</surname>
<given-names>V. V.</given-names>
</name>
</person-group>
(
<year>1948</year>
).
<article-title>Factors influencing the thresholds for monocular movement parallax</article-title>
.
<source>J. Exp. Psychol</source>
.
<volume>38</volume>
,
<fpage>205</fpage>
<lpage>223</lpage>
<pub-id pub-id-type="doi">10.1037/h0054067</pub-id>
<pub-id pub-id-type="pmid">18865224</pub-id>
</mixed-citation>
</ref>
<ref id="B34">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Johnston</surname>
<given-names>E. B.</given-names>
</name>
</person-group>
(
<year>1991</year>
).
<article-title>Systematic distortions of shape from stereopsis</article-title>
.
<source>Vision Res</source>
.
<volume>31</volume>
,
<fpage>1351</fpage>
<lpage>1360</lpage>
<pub-id pub-id-type="doi">10.1016/0042-6989(91)90056-B</pub-id>
<pub-id pub-id-type="pmid">1891823</pub-id>
</mixed-citation>
</ref>
<ref id="B34a">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Johnston</surname>
<given-names>E. B.</given-names>
</name>
<name>
<surname>Cumming</surname>
<given-names>B. G.</given-names>
</name>
<name>
<surname>Landy</surname>
<given-names>M. S.</given-names>
</name>
</person-group>
(
<year>1994</year>
).
<article-title>Integration of stereopsis and motion shape cues</article-title>
.
<source>Vision Res</source>
.
<volume>34</volume>
,
<fpage>2259</fpage>
<lpage>2275</lpage>
<pub-id pub-id-type="doi">10.1016/0042-6989(94)90106-6</pub-id>
<pub-id pub-id-type="pmid">7941420</pub-id>
</mixed-citation>
</ref>
<ref id="B35">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Liao</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Walker</surname>
<given-names>M. F.</given-names>
</name>
<name>
<surname>Joshi</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Millard</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Leigh</surname>
<given-names>R. J.</given-names>
</name>
</person-group>
(
<year>2008</year>
).
<article-title>Vestibulo-ocular responses to vertical translation in normal human subjects</article-title>
.
<source>Exp. Brain Res</source>
.
<volume>185</volume>
,
<fpage>553</fpage>
<lpage>563</lpage>
<pub-id pub-id-type="doi">10.1007/s00221-007-1181-z</pub-id>
<pub-id pub-id-type="pmid">17989972</pub-id>
</mixed-citation>
</ref>
<ref id="B35a">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Longuet-Higgins</surname>
<given-names>H. C.</given-names>
</name>
<name>
<surname>Prazdny</surname>
<given-names>K.</given-names>
</name>
</person-group>
(
<year>1980</year>
).
<article-title>The interpretation of a moving retinal image</article-title>
.
<source>Proc. R. Soc. Lond. B Biol. Sci</source>
.
<volume>208</volume>
,
<fpage>385</fpage>
<lpage>397</lpage>
<pub-id pub-id-type="doi">10.1098/rspb.1980.0057</pub-id>
<pub-id pub-id-type="pmid">6106198</pub-id>
</mixed-citation>
</ref>
<ref id="B36">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Mack</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Herman</surname>
<given-names>E.</given-names>
</name>
</person-group>
(
<year>1972</year>
).
<article-title>A new illusion: the underestimation of distance during pursuit eye movements</article-title>
.
<source>Percept. Psychophys</source>
.
<volume>12</volume>
,
<fpage>471</fpage>
<lpage>473</lpage>
<pub-id pub-id-type="doi">10.3758/BF03210937</pub-id>
</mixed-citation>
</ref>
<ref id="B36a">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>MacKenzie</surname>
<given-names>K. J.</given-names>
</name>
<name>
<surname>Murray</surname>
<given-names>R. F.</given-names>
</name>
<name>
<surname>Wilcox</surname>
<given-names>L. M.</given-names>
</name>
</person-group>
(
<year>2008</year>
).
<article-title>The intrinsic constraint approach to cue combination: an empirical and theoretical evaluation</article-title>
.
<source>J. Vis</source>
.
<volume>8</volume>
,
<fpage>5.1</fpage>
<lpage>5.10</lpage>
<pub-id pub-id-type="doi">10.1167/8.8.5</pub-id>
<pub-id pub-id-type="pmid">18831628</pub-id>
</mixed-citation>
</ref>
<ref id="B37">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>McKee</surname>
<given-names>S. P.</given-names>
</name>
<name>
<surname>Taylor</surname>
<given-names>D. G.</given-names>
</name>
</person-group>
(
<year>2010</year>
).
<article-title>The precision of binocular and monocular depth judgments in natural settings</article-title>
.
<source>J. Vis</source>
.
<volume>10</volume>
:
<fpage>5</fpage>
<pub-id pub-id-type="doi">10.1167/10.10.5</pub-id>
<pub-id pub-id-type="pmid">20884470</pub-id>
</mixed-citation>
</ref>
<ref id="B38">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Miles</surname>
<given-names>F. A.</given-names>
</name>
</person-group>
(
<year>1993</year>
).
<article-title>The sensing of rotational and translational optic flow by the primate optokinetic system</article-title>
, in
<source>Visual Motion And Its Role In The Stabilization Of Gaze</source>
, eds
<person-group person-group-type="editor">
<name>
<surname>Miles</surname>
<given-names>F. A.</given-names>
</name>
<name>
<surname>Wallman</surname>
<given-names>J.</given-names>
</name>
</person-group>
(
<publisher-loc>New York, NY</publisher-loc>
:
<publisher-name>Elsevier</publisher-name>
),
<fpage>393</fpage>
<lpage>403</lpage>
</mixed-citation>
</ref>
<ref id="B39">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Miles</surname>
<given-names>F. A.</given-names>
</name>
</person-group>
(
<year>1998</year>
).
<article-title>The neural processing of 3-D visual information: evidence from eye movements</article-title>
.
<source>Eur. J. Neurosci</source>
.
<volume>10</volume>
,
<fpage>811</fpage>
<lpage>822</lpage>
<pub-id pub-id-type="doi">10.1046/j.1460-9568.1998.00112.x</pub-id>
<pub-id pub-id-type="pmid">9753150</pub-id>
</mixed-citation>
</ref>
<ref id="B40">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Miles</surname>
<given-names>F. A.</given-names>
</name>
<name>
<surname>Busettini</surname>
<given-names>C.</given-names>
</name>
</person-group>
(
<year>1992</year>
).
<article-title>Ocular compensation for self-motion. Visual mechanisms</article-title>
.
<source>Ann. N.Y. Acad. Sci</source>
.
<volume>656</volume>
,
<fpage>220</fpage>
<lpage>232</lpage>
<pub-id pub-id-type="doi">10.1111/j.1749-6632.1992.tb25211.x</pub-id>
<pub-id pub-id-type="pmid">1599145</pub-id>
</mixed-citation>
</ref>
<ref id="B41">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Mon-Williams</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Tresilian</surname>
<given-names>J. R.</given-names>
</name>
</person-group>
(
<year>1999</year>
).
<article-title>Some recent studies on the extraretinal contribution to distance perception</article-title>
.
<source>Perception</source>
<volume>28</volume>
,
<fpage>167</fpage>
<lpage>181</lpage>
<pub-id pub-id-type="doi">10.1068/p2737</pub-id>
<pub-id pub-id-type="pmid">10615458</pub-id>
</mixed-citation>
</ref>
<ref id="B42">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Mon-Williams</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Tresilian</surname>
<given-names>J. R.</given-names>
</name>
<name>
<surname>Roberts</surname>
<given-names>A.</given-names>
</name>
</person-group>
(
<year>2000</year>
).
<article-title>Vergence provides veridical depth perception from horizontal retinal image disparities</article-title>
.
<source>Exp. Brain Res</source>
.
<volume>133</volume>
,
<fpage>407</fpage>
<lpage>413</lpage>
<pub-id pub-id-type="doi">10.1007/s002210000410</pub-id>
<pub-id pub-id-type="pmid">10958531</pub-id>
</mixed-citation>
</ref>
<ref id="B43">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Nadler</surname>
<given-names>J. W.</given-names>
</name>
<name>
<surname>Nawrot</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Angelaki</surname>
<given-names>D. E.</given-names>
</name>
<name>
<surname>DeAngelis</surname>
<given-names>G. C.</given-names>
</name>
</person-group>
(
<year>2009</year>
).
<article-title>MT neurons combine visual motion with a smooth eye movement signal to code depth sign from motion parallax</article-title>
.
<source>Neuron</source>
<volume>63</volume>
,
<fpage>523</fpage>
<lpage>532</lpage>
<pub-id pub-id-type="doi">10.1016/j.neuron.2009.07.029</pub-id>
<pub-id pub-id-type="pmid">19709633</pub-id>
</mixed-citation>
</ref>
<ref id="B44">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Naji</surname>
<given-names>J. J.</given-names>
</name>
<name>
<surname>Freeman</surname>
<given-names>T. C.</given-names>
</name>
</person-group>
(
<year>2004</year>
).
<article-title>Perceiving depth order during pursuit eye movement</article-title>
.
<source>Vision Res</source>
.
<volume>44</volume>
,
<fpage>3025</fpage>
<lpage>3034</lpage>
<pub-id pub-id-type="doi">10.1016/j.visres.2004.07.007</pub-id>
<pub-id pub-id-type="pmid">15474575</pub-id>
</mixed-citation>
</ref>
<ref id="B44a">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Nakayama</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Loomis</surname>
<given-names>J. M.</given-names>
</name>
</person-group>
(
<year>1974</year>
).
<article-title>Optical velocity patterns, velocity-sensitive neurons, and space perception: a hypothesis</article-title>
.
<source>Perception</source>
<volume>3</volume>
,
<fpage>63</fpage>
<lpage>80</lpage>
<pub-id pub-id-type="doi">10.1068/p030063</pub-id>
<pub-id pub-id-type="pmid">4444922</pub-id>
</mixed-citation>
</ref>
<ref id="B45">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Nawrot</surname>
<given-names>M.</given-names>
</name>
</person-group>
(
<year>2003</year>
).
<article-title>Depth from motion parallax scales with eye movement gain</article-title>
.
<source>J. Vis</source>
.
<volume>3</volume>
,
<fpage>841</fpage>
<lpage>851</lpage>
<pub-id pub-id-type="doi">10.1167/3.11.17</pub-id>
<pub-id pub-id-type="pmid">14765966</pub-id>
</mixed-citation>
</ref>
<ref id="B46">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Nawrot</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Joyce</surname>
<given-names>L.</given-names>
</name>
</person-group>
(
<year>2006</year>
).
<article-title>The pursuit theory of motion parallax</article-title>
.
<source>Vision Res</source>
.
<volume>46</volume>
,
<fpage>4709</fpage>
<lpage>4725</lpage>
<pub-id pub-id-type="doi">10.1016/j.visres.2006.07.006</pub-id>
<pub-id pub-id-type="pmid">17083957</pub-id>
</mixed-citation>
</ref>
<ref id="B47">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Nawrot</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Stroyan</surname>
<given-names>K.</given-names>
</name>
</person-group>
(
<year>2009</year>
).
<article-title>The motion/pursuit law for visual depth perception from motion parallax</article-title>
.
<source>Vision Res</source>
.
<volume>49</volume>
,
<fpage>1969</fpage>
<lpage>1978</lpage>
<pub-id pub-id-type="doi">10.1016/j.visres.2009.05.008</pub-id>
<pub-id pub-id-type="pmid">19463848</pub-id>
</mixed-citation>
</ref>
<ref id="B48">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ono</surname>
<given-names>M. E.</given-names>
</name>
<name>
<surname>Rivest</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Ono</surname>
<given-names>H.</given-names>
</name>
</person-group>
(
<year>1986</year>
).
<article-title>Depth perception as a function of motion parallax and absolute distance information</article-title>
.
<source>J. Exp. Psychol. Hum. Percept. Perform</source>
.
<volume>12</volume>
,
<fpage>331</fpage>
<lpage>337</lpage>
<pub-id pub-id-type="doi">10.1037/0096-1523.12.3.331</pub-id>
<pub-id pub-id-type="pmid">2943861</pub-id>
</mixed-citation>
</ref>
<ref id="B49">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Philbeck</surname>
<given-names>J. W.</given-names>
</name>
<name>
<surname>Loomis</surname>
<given-names>J. M.</given-names>
</name>
</person-group>
(
<year>1997</year>
).
<article-title>Comparison of two indicators of perceived egocentric distance under full-cue and reduced-cue conditions</article-title>
.
<source>J. Exp. Psychol. Hum. Percept. Perform</source>
.
<volume>23</volume>
:
<fpage>72</fpage>
<pub-id pub-id-type="doi">10.1037/0096-1523.23.1.72</pub-id>
<pub-id pub-id-type="pmid">9090147</pub-id>
</mixed-citation>
</ref>
<ref id="B50">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ramat</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Zee</surname>
<given-names>D. S.</given-names>
</name>
</person-group>
(
<year>2003</year>
).
<article-title>Ocular motor responses to abrupt interaural head translation in normal humans</article-title>
.
<source>J. Neurophysiol</source>
.
<volume>90</volume>
,
<fpage>887</fpage>
<lpage>902</lpage>
<pub-id pub-id-type="doi">10.1152/jn.01121.2002</pub-id>
<pub-id pub-id-type="pmid">12672783</pub-id>
</mixed-citation>
</ref>
<ref id="B51">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Read</surname>
<given-names>J. C.</given-names>
</name>
<name>
<surname>Cumming</surname>
<given-names>B. G.</given-names>
</name>
</person-group>
(
<year>2006</year>
).
<article-title>Does depth perception require vertical-disparity detectors?</article-title>
<source>J. Vis</source>
.
<volume>6</volume>
,
<fpage>1323</fpage>
<lpage>1355</lpage>
<pub-id pub-id-type="doi">10.1167/6.12.1</pub-id>
<pub-id pub-id-type="pmid">17209738</pub-id>
</mixed-citation>
</ref>
<ref id="B52">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ritter</surname>
<given-names>M.</given-names>
</name>
</person-group>
(
<year>1977</year>
).
<article-title>Effect of disparity and viewing distance on perceived depth</article-title>
.
<source>Percept. Psychophys</source>
.
<volume>22</volume>
,
<fpage>400</fpage>
<lpage>407</lpage>
<pub-id pub-id-type="doi">10.3758/BF03199707</pub-id>
</mixed-citation>
</ref>
<ref id="B53">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Rogers</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Graham</surname>
<given-names>M.</given-names>
</name>
</person-group>
(
<year>1979</year>
).
<article-title>Motion parallax as an independent cue for depth perception</article-title>
.
<source>Perception</source>
<volume>8</volume>
,
<fpage>125</fpage>
<lpage>134</lpage>
<pub-id pub-id-type="doi">10.1068/p080125</pub-id>
<pub-id pub-id-type="pmid">471676</pub-id>
</mixed-citation>
</ref>
<ref id="B54">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Rogers</surname>
<given-names>B. J.</given-names>
</name>
<name>
<surname>Bradshaw</surname>
<given-names>M. F.</given-names>
</name>
</person-group>
(
<year>1995</year>
).
<article-title>Disparity scaling and the perception of frontoparallel surfaces</article-title>
.
<source>Perception</source>
<volume>24</volume>
,
<fpage>155</fpage>
<lpage>180</lpage>
<pub-id pub-id-type="doi">10.1068/p240155</pub-id>
<pub-id pub-id-type="pmid">7617423</pub-id>
</mixed-citation>
</ref>
<ref id="B55">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Souman</surname>
<given-names>J. L.</given-names>
</name>
<name>
<surname>Freeman</surname>
<given-names>T. C. A.</given-names>
</name>
</person-group>
(
<year>2008</year>
).
<article-title>Motion perception during sinusoidal smooth pursuit eye movements: signal latencies and non-linearities</article-title>
.
<source>J. Vis</source>
.
<volume>8</volume>
,
<fpage>10.1</fpage>
<lpage>10.14</lpage>
<pub-id pub-id-type="doi">10.1167/8.14.10</pub-id>
<pub-id pub-id-type="pmid">19146311</pub-id>
</mixed-citation>
</ref>
<ref id="B56">
<mixed-citation publication-type="webpage">
<person-group person-group-type="author">
<name>
<surname>Stroyan</surname>
<given-names>K.</given-names>
</name>
</person-group>
(
<year>2008</year>
).
<source>Interactive Computation of Geometric Inputs to Vision, 2008.1 Motion Pursuit Law In 1D: Visual Depth Perception 1</source>
. Available online at:
<ext-link ext-link-type="uri" xlink:href="http://demonstrations.wolfram.com/MotionPursuitLawIn1DVisualDepthPerception1/">http://demonstrations.wolfram.com/MotionPursuitLawIn1DVisualDepthPerception1/</ext-link>
; 2008.11
<italic>Tracking and Separation: Visual Depth Perception 11</italic>
. Available online at:
<ext-link ext-link-type="uri" xlink:href="http://demonstrations.wolfram.com/TrackingAndSeparationVisualDepthPerception11/">http://demonstrations.wolfram.com/TrackingAndSeparationVisualDepthPerception11/</ext-link>
</mixed-citation>
</ref>
<ref id="B57">
<mixed-citation publication-type="webpage">
<person-group person-group-type="author">
<name>
<surname>Stroyan</surname>
<given-names>K.</given-names>
</name>
</person-group>
(
<year>2010</year>
).
<source>Motion Parallax is Asymptotic to Binocular Disparity</source>
. Available online at:
<ext-link ext-link-type="uri" xlink:href="http://arxiv.org/abs/1010.0575">http://arxiv.org/abs/1010.0575</ext-link>
</mixed-citation>
</ref>
<ref id="B58">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Stroyan</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Nawrot</surname>
<given-names>M.</given-names>
</name>
</person-group>
(
<year>2012</year>
).
<article-title>Visual depth from motion parallax and eye pursuit</article-title>
.
<source>J. Math. Biol</source>
.
<volume>64</volume>
,
<fpage>1157</fpage>
<lpage>1188</lpage>
<pub-id pub-id-type="doi">10.1007/s00285-011-0445-1</pub-id>
<pub-id pub-id-type="pmid">21695531</pub-id>
</mixed-citation>
</ref>
<ref id="B60">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Tittle</surname>
<given-names>J. S.</given-names>
</name>
<name>
<surname>Todd</surname>
<given-names>J. T.</given-names>
</name>
<name>
<surname>Perotti</surname>
<given-names>V. J.</given-names>
</name>
<name>
<surname>Norman</surname>
<given-names>J. F.</given-names>
</name>
</person-group>
(
<year>1995</year>
).
<article-title>Systematic distortion of perceived three-dimensional structure from motion and binocular stereopsis</article-title>
.
<source>J. Exp. Psychol. Hum. Percept. Perform</source>
.
<volume>21</volume>
,
<fpage>663</fpage>
<lpage>678</lpage>
<pub-id pub-id-type="doi">10.1037/0096-1523.21.3.663</pub-id>
<pub-id pub-id-type="pmid">7790840</pub-id>
</mixed-citation>
</ref>
<ref id="B61">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Todd</surname>
<given-names>J. T.</given-names>
</name>
<name>
<surname>Norman</surname>
<given-names>J. F.</given-names>
</name>
</person-group>
(
<year>2003</year>
).
<article-title>The visual perception of 3-D shape from multiple cues: are observers capable of perceiving metric structure?</article-title>
<source>Percept. Psychophys</source>
.
<volume>65</volume>
,
<fpage>31</fpage>
<lpage>47</lpage>
<pub-id pub-id-type="doi">10.3758/BF03194781</pub-id>
<pub-id pub-id-type="pmid">12699307</pub-id>
</mixed-citation>
</ref>
<ref id="B62">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Trotter</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Celebrini</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Stricanne</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Thorpe</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Imbert</surname>
<given-names>M.</given-names>
</name>
</person-group>
(
<year>1996</year>
).
<article-title>Neural processing of stereopsis as a function of viewing distance in primate visual cortical area V1</article-title>
.
<source>J. Neurophysiol</source>
.
<volume>76</volume>
,
<fpage>2872</fpage>
<lpage>2885</lpage>
<pub-id pub-id-type="pmid">8930240</pub-id>
</mixed-citation>
</ref>
<ref id="B63">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Tufte</surname>
<given-names>E.</given-names>
</name>
</person-group>
(
<year>1974/2006</year>
).
<source>Data Analysis for Politics and Policy</source>
.
<publisher-loc>Cheshire, CT</publisher-loc>
:
<publisher-name>Graphics Press</publisher-name>
</mixed-citation>
</ref>
<ref id="B64">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Turano</surname>
<given-names>K. A.</given-names>
</name>
<name>
<surname>Heidenreich</surname>
<given-names>S. M.</given-names>
</name>
</person-group>
(
<year>1996</year>
).
<article-title>Speed discrimination of distal stimuli during smooth pursuit eye motion</article-title>
.
<source>Vision Res</source>
.
<volume>36</volume>
,
<fpage>3507</fpage>
<lpage>3517</lpage>
<pub-id pub-id-type="doi">10.1016/0042-6989(96)00071-5</pub-id>
<pub-id pub-id-type="pmid">8977017</pub-id>
</mixed-citation>
</ref>
<ref id="B65">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Turano</surname>
<given-names>K. A.</given-names>
</name>
<name>
<surname>Massof</surname>
<given-names>R. W.</given-names>
</name>
</person-group>
(
<year>2001</year>
).
<article-title>Nonlinear contribution of eye velocity to motion perception</article-title>
.
<source>Vision Res</source>
.
<volume>41</volume>
,
<fpage>385</fpage>
<lpage>395</lpage>
<pub-id pub-id-type="doi">10.1016/S0042-6989(00)00255-8</pub-id>
<pub-id pub-id-type="pmid">11164453</pub-id>
</mixed-citation>
</ref>
<ref id="B66">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Viguier</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Clement</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Trotter</surname>
<given-names>Y.</given-names>
</name>
</person-group>
(
<year>2001</year>
).
<article-title>Distance perception within near visual space</article-title>
.
<source>Perception</source>
<volume>30</volume>
,
<fpage>115</fpage>
<lpage>124</lpage>
<pub-id pub-id-type="doi">10.1068/p3119</pub-id>
<pub-id pub-id-type="pmid">11257974</pub-id>
</mixed-citation>
</ref>
<ref id="B67">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>von Helmholtz</surname>
<given-names>H.</given-names>
</name>
</person-group>
(
<year>1910/1925/1962</year>
).
<source>Treatise on Physiological Optics</source>
.
<publisher-loc>New York, NY</publisher-loc>
:
<publisher-name>Dover</publisher-name>
(English translation by
<person-group person-group-type="translator">
<name>
<surname>Southall</surname>
<given-names>J. P. C.</given-names>
</name>
</person-group>
three volumes bound as two, from the 3rd German edition of Handbuch der Physiologischen Optik, ed. von Kries).</mixed-citation>
</ref>
<ref id="B68">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Von Hofsten</surname>
<given-names>C.</given-names>
</name>
</person-group>
(
<year>1976</year>
).
<article-title>The role of convergence in visual space perception</article-title>
.
<source>Vision Res</source>
.
<volume>16</volume>
,
<fpage>193</fpage>
<lpage>198</lpage>
<pub-id pub-id-type="doi">10.1016/0042-6989(76)90098-5</pub-id>
<pub-id pub-id-type="pmid">1266061</pub-id>
</mixed-citation>
</ref>
<ref id="B69">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wallach</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Zuckerman</surname>
<given-names>C.</given-names>
</name>
</person-group>
(
<year>1963</year>
).
<article-title>The constancy of stereoscopic depth</article-title>
.
<source>Am. J. Psychol</source>
.
<fpage>404</fpage>
<lpage>412</lpage>
<pub-id pub-id-type="doi">10.2307/1419781</pub-id>
<pub-id pub-id-type="pmid">13998575</pub-id>
</mixed-citation>
</ref>
<ref id="B70">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Watamaniuk</surname>
<given-names>S. N. J.</given-names>
</name>
<name>
<surname>Grzywacz</surname>
<given-names>N. M.</given-names>
</name>
<name>
<surname>Yuille</surname>
<given-names>A. L.</given-names>
</name>
</person-group>
(
<year>1993</year>
).
<article-title>Dependence of speed and direction perception on cinematogram dot density</article-title>
.
<source>Vision Res</source>
.
<volume>33</volume>
,
<fpage>849</fpage>
<lpage>859</lpage>
<pub-id pub-id-type="doi">10.1016/0042-6989(93)90204-A</pub-id>
<pub-id pub-id-type="pmid">8351856</pub-id>
</mixed-citation>
</ref>
<ref id="B71">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wertheim</surname>
<given-names>A. H.</given-names>
</name>
<name>
<surname>Van Gelder</surname>
<given-names>P.</given-names>
</name>
</person-group>
(
<year>1990</year>
).
<article-title>An acceleration illusion caused by underestimation of stimulus velocity during pursuit eye movements: Aubert-Fleischl revisited</article-title>
.
<source>Perception</source>
<volume>19</volume>
,
<fpage>471</fpage>
<lpage>482</lpage>
<pub-id pub-id-type="doi">10.1068/p190471</pub-id>
<pub-id pub-id-type="pmid">2096365</pub-id>
</mixed-citation>
</ref>
</ref-list>
</back>
</pmc>
<affiliations>
<list>
<country>
<li>États-Unis</li>
</country>
</list>
<tree>
<country name="États-Unis">
<noRegion>
<name sortKey="Nawrot, Mark" sort="Nawrot, Mark" uniqKey="Nawrot M" first="Mark" last="Nawrot">Mark Nawrot</name>
</noRegion>
<name sortKey="Leonard, Zachary" sort="Leonard, Zachary" uniqKey="Leonard Z" first="Zachary" last="Leonard">Zachary Leonard</name>
<name sortKey="Ratzlaff, Michael" sort="Ratzlaff, Michael" uniqKey="Ratzlaff M" first="Michael" last="Ratzlaff">Michael Ratzlaff</name>
<name sortKey="Stroyan, Keith" sort="Stroyan, Keith" uniqKey="Stroyan K" first="Keith" last="Stroyan">Keith Stroyan</name>
</country>
</tree>
</affiliations>
</record>

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/HapticV1/Data/Ncbi/Merge
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 003397 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/Ncbi/Merge/biblio.hfd -nk 003397 | SxmlIndent | more

To link to this page within the Wicri network

{{Explor lien
   |wiki=    Ticri/CIDE
   |area=    HapticV1
   |flux=    Ncbi
   |étape=   Merge
   |type=    RBID
   |clé=     PMC:4186274
   |texte=   Modeling depth from motion parallax with the motion/pursuit ratio
}}

To generate wiki pages

HfdIndexSelect -h $EXPLOR_AREA/Data/Ncbi/Merge/RBID.i   -Sk "pubmed:25339926" \
       | HfdSelect -Kh $EXPLOR_AREA/Data/Ncbi/Merge/biblio.hfd   \
       | NlmPubMed2Wicri -a HapticV1 

Wicri

This area was generated with Dilib version V0.6.23.
Data generation: Mon Jun 13 01:09:46 2016. Site generation: Wed Mar 6 09:54:07 2024