Exploration server on haptic devices


A multisensory approach to spatial updating: the case of mental rotations

Internal identifier: 001175 (Ncbi/Merge); previous: 001174; next: 001176


Authors: Manuel Vidal [Germany, France]; Alexandre Lehmann [France]; Heinrich H. Bülthoff [Germany]

Source :

RBID : PMC:2708330

Abstract

Mental rotation is the capacity to predict the outcome of spatial relationships after a change in viewpoint. These changes arise either from the rotation of the test object array or from the rotation of the observer. Previous studies showed that the cognitive cost of mental rotations is reduced when viewpoint changes result from the observer’s motion, which was explained by the spatial updating mechanism involved during self-motion. However, little is known about how various sensory cues available might contribute to the updating performance. We used a Virtual Reality setup in a series of experiments to investigate table-top mental rotations under different combinations of modalities among vision, body and audition. We found that mental rotation performance gradually improved when adding sensory cues to the moving observer (from None to Body or Vision and then to Body & Audition or Body & Vision), but that the processing time drops to the same level for any of the sensory contexts. These results are discussed in terms of an additive contribution when sensory modalities are co-activated to the spatial updating mechanism involved during self-motion. Interestingly, this multisensory approach can account for different findings reported in the literature.


Url:
DOI: 10.1007/s00221-009-1892-4
PubMed: 19544058
PubMed Central: 2708330


The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">A multisensory approach to spatial updating: the case of mental rotations</title>
<author>
<name sortKey="Vidal, Manuel" sort="Vidal, Manuel" uniqKey="Vidal M" first="Manuel" last="Vidal">Manuel Vidal</name>
<affiliation wicri:level="3">
<nlm:aff id="Aff1">Max Planck Institute for Biological Cybernetics, Tübingen, Germany</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea>Max Planck Institute for Biological Cybernetics, Tübingen</wicri:regionArea>
<placeName>
<region type="land" nuts="1">Bade-Wurtemberg</region>
<region type="district" nuts="2">District de Tübingen</region>
<settlement type="city">Tübingen</settlement>
</placeName>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="Aff2">LPPA, CNRS, Collège de France, Paris, France</nlm:aff>
<country xml:lang="fr">France</country>
<wicri:regionArea>LPPA, CNRS, Collège de France, Paris</wicri:regionArea>
<placeName>
<settlement type="city">Paris</settlement>
</placeName>
</affiliation>
</author>
<author>
<name sortKey="Lehmann, Alexandre" sort="Lehmann, Alexandre" uniqKey="Lehmann A" first="Alexandre" last="Lehmann">Alexandre Lehmann</name>
<affiliation wicri:level="1">
<nlm:aff id="Aff2">LPPA, CNRS, Collège de France, Paris, France</nlm:aff>
<country xml:lang="fr">France</country>
<wicri:regionArea>LPPA, CNRS, Collège de France, Paris</wicri:regionArea>
<placeName>
<settlement type="city">Paris</settlement>
</placeName>
</affiliation>
</author>
<author>
<name sortKey="Bulthoff, Heinrich H" sort="Bulthoff, Heinrich H" uniqKey="Bulthoff H" first="Heinrich H." last="Bülthoff">Heinrich H. Bülthoff</name>
<affiliation wicri:level="3">
<nlm:aff id="Aff1">Max Planck Institute for Biological Cybernetics, Tübingen, Germany</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea>Max Planck Institute for Biological Cybernetics, Tübingen</wicri:regionArea>
<placeName>
<region type="land" nuts="1">Bade-Wurtemberg</region>
<region type="district" nuts="2">District de Tübingen</region>
<settlement type="city">Tübingen</settlement>
</placeName>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PMC</idno>
<idno type="pmid">19544058</idno>
<idno type="pmc">2708330</idno>
<idno type="url">http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2708330</idno>
<idno type="RBID">PMC:2708330</idno>
<idno type="doi">10.1007/s00221-009-1892-4</idno>
<date when="2009">2009</date>
<idno type="wicri:Area/Pmc/Corpus">000B96</idno>
<idno type="wicri:Area/Pmc/Curation">000B96</idno>
<idno type="wicri:Area/Pmc/Checkpoint">002215</idno>
<idno type="wicri:Area/Ncbi/Merge">001175</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a" type="main">A multisensory approach to spatial updating: the case of mental rotations</title>
<author>
<name sortKey="Vidal, Manuel" sort="Vidal, Manuel" uniqKey="Vidal M" first="Manuel" last="Vidal">Manuel Vidal</name>
<affiliation wicri:level="3">
<nlm:aff id="Aff1">Max Planck Institute for Biological Cybernetics, Tübingen, Germany</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea>Max Planck Institute for Biological Cybernetics, Tübingen</wicri:regionArea>
<placeName>
<region type="land" nuts="1">Bade-Wurtemberg</region>
<region type="district" nuts="2">District de Tübingen</region>
<settlement type="city">Tübingen</settlement>
</placeName>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="Aff2">LPPA, CNRS, Collège de France, Paris, France</nlm:aff>
<country xml:lang="fr">France</country>
<wicri:regionArea>LPPA, CNRS, Collège de France, Paris</wicri:regionArea>
<placeName>
<settlement type="city">Paris</settlement>
</placeName>
</affiliation>
</author>
<author>
<name sortKey="Lehmann, Alexandre" sort="Lehmann, Alexandre" uniqKey="Lehmann A" first="Alexandre" last="Lehmann">Alexandre Lehmann</name>
<affiliation wicri:level="1">
<nlm:aff id="Aff2">LPPA, CNRS, Collège de France, Paris, France</nlm:aff>
<country xml:lang="fr">France</country>
<wicri:regionArea>LPPA, CNRS, Collège de France, Paris</wicri:regionArea>
<placeName>
<settlement type="city">Paris</settlement>
</placeName>
</affiliation>
</author>
<author>
<name sortKey="Bulthoff, Heinrich H" sort="Bulthoff, Heinrich H" uniqKey="Bulthoff H" first="Heinrich H." last="Bülthoff">Heinrich H. Bülthoff</name>
<affiliation wicri:level="3">
<nlm:aff id="Aff1">Max Planck Institute for Biological Cybernetics, Tübingen, Germany</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea>Max Planck Institute for Biological Cybernetics, Tübingen</wicri:regionArea>
<placeName>
<region type="land" nuts="1">Bade-Wurtemberg</region>
<region type="district" nuts="2">District de Tübingen</region>
<settlement type="city">Tübingen</settlement>
</placeName>
</affiliation>
</author>
</analytic>
<series>
<title level="j">Experimental Brain Research. Experimentelle Hirnforschung. Experimentation Cerebrale</title>
<idno type="ISSN">0014-4819</idno>
<idno type="eISSN">1432-1106</idno>
<imprint>
<date when="2009">2009</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass></textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">
<p>Mental rotation is the capacity to predict the outcome of spatial relationships after a change in viewpoint. These changes arise either from the rotation of the test object array or from the rotation of the observer. Previous studies showed that the cognitive cost of mental rotations is reduced when viewpoint changes result from the observer’s motion, which was explained by the spatial updating mechanism involved during self-motion. However, little is known about how various sensory cues available might contribute to the updating performance. We used a Virtual Reality setup in a series of experiments to investigate table-top mental rotations under different combinations of modalities among vision, body and audition. We found that mental rotation performance gradually improved when adding sensory cues to the moving observer (from
<italic>None</italic>
to
<italic>Body</italic>
or
<italic>Vision</italic>
and then to
<italic>Body</italic>
&
<italic>Audition</italic>
or
<italic>Body</italic>
&
<italic>Vision</italic>
), but that the processing time drops to the same level for any of the sensory contexts. These results are discussed in terms of an additive contribution when sensory modalities are co-activated to the spatial updating mechanism involved during self-motion. Interestingly, this multisensory approach can account for different findings reported in the literature.</p>
</div>
</front>
<back>
<div1 type="bibliography">
<listBibl>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
</listBibl>
</div1>
</back>
</TEI>
<pmc xml:lang="EN" article-type="research-article">
<pmc-dir>properties open_access</pmc-dir>
<front>
<journal-meta>
<journal-id journal-id-type="nlm-ta">Exp Brain Res</journal-id>
<journal-title>Experimental Brain Research. Experimentelle Hirnforschung. Experimentation Cerebrale</journal-title>
<issn pub-type="ppub">0014-4819</issn>
<issn pub-type="epub">1432-1106</issn>
<publisher>
<publisher-name>Springer-Verlag</publisher-name>
<publisher-loc>Berlin/Heidelberg</publisher-loc>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="pmid">19544058</article-id>
<article-id pub-id-type="pmc">2708330</article-id>
<article-id pub-id-type="publisher-id">1892</article-id>
<article-id pub-id-type="doi">10.1007/s00221-009-1892-4</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Research Article</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>A multisensory approach to spatial updating: the case of mental rotations</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" corresp="yes">
<name name-style="western">
<surname>Vidal</surname>
<given-names>Manuel</given-names>
</name>
<address>
<email>manuel.vidal@college-de-france.fr</email>
</address>
<xref ref-type="aff" rid="Aff1">1</xref>
<xref ref-type="aff" rid="Aff2">2</xref>
</contrib>
<contrib contrib-type="author">
<name name-style="western">
<surname>Lehmann</surname>
<given-names>Alexandre</given-names>
</name>
<xref ref-type="aff" rid="Aff2">2</xref>
</contrib>
<contrib contrib-type="author">
<name name-style="western">
<surname>Bülthoff</surname>
<given-names>Heinrich H.</given-names>
</name>
<xref ref-type="aff" rid="Aff1">1</xref>
</contrib>
<aff id="Aff1">
<label>1</label>
Max Planck Institute for Biological Cybernetics, Tübingen, Germany</aff>
<aff id="Aff2">
<label>2</label>
LPPA, CNRS, Collège de France, Paris, France</aff>
</contrib-group>
<pub-date pub-type="epub">
<day>21</day>
<month>6</month>
<year>2009</year>
</pub-date>
<pub-date pub-type="ppub">
<month>7</month>
<year>2009</year>
</pub-date>
<volume>197</volume>
<issue>1</issue>
<fpage>59</fpage>
<lpage>68</lpage>
<history>
<date date-type="received">
<day>20</day>
<month>4</month>
<year>2009</year>
</date>
<date date-type="accepted">
<day>1</day>
<month>6</month>
<year>2009</year>
</date>
</history>
<permissions>
<copyright-statement>© The Author(s) 2009</copyright-statement>
</permissions>
<abstract xml:lang="EN">
<p>Mental rotation is the capacity to predict the outcome of spatial relationships after a change in viewpoint. These changes arise either from the rotation of the test object array or from the rotation of the observer. Previous studies showed that the cognitive cost of mental rotations is reduced when viewpoint changes result from the observer’s motion, which was explained by the spatial updating mechanism involved during self-motion. However, little is known about how various sensory cues available might contribute to the updating performance. We used a Virtual Reality setup in a series of experiments to investigate table-top mental rotations under different combinations of modalities among vision, body and audition. We found that mental rotation performance gradually improved when adding sensory cues to the moving observer (from
<italic>None</italic>
to
<italic>Body</italic>
or
<italic>Vision</italic>
and then to
<italic>Body</italic>
&
<italic>Audition</italic>
or
<italic>Body</italic>
&
<italic>Vision</italic>
), but that the processing time drops to the same level for any of the sensory contexts. These results are discussed in terms of an additive contribution when sensory modalities are co-activated to the spatial updating mechanism involved during self-motion. Interestingly, this multisensory approach can account for different findings reported in the literature.</p>
</abstract>
<kwd-group>
<title>Keywords</title>
<kwd>Mental rotations</kwd>
<kwd>Spatial updating</kwd>
<kwd>Multisensory</kwd>
<kwd>Virtual reality</kwd>
</kwd-group>
<custom-meta-wrap>
<custom-meta>
<meta-name>issue-copyright-statement</meta-name>
<meta-value>© Springer-Verlag 2009</meta-value>
</custom-meta>
</custom-meta-wrap>
</article-meta>
</front>
<body>
<sec id="Sec1" sec-type="introduction">
<title>Introduction</title>
<p>In everyday life, we often have to imagine ourselves from another perspective in order to correctly drive our behavior and actions within the environment. This process involves mental rotations (or perspective taking), which can be defined as the capacity to mentally update either the spatial relationships of objects or the structural features within an object after orientation changes in the observer’s reference frame. This dynamic process is considered analogous to the actual physical rotation of the objects or of the observer, and provides a prediction of the outcome of these relationships after the rotation. The present research concerns the specific contributions and interactions of the different senses to the spatial updating mechanism engaged during observer motion in order to perform mental rotations.</p>
<p>Attempting to identify an object from an unusual viewpoint requires a cognitive effort that has long been characterized. The recognition of an oriented object from a novel viewpoint is harder and reaction times are proportional to the angular difference between the two orientations presented simultaneously (Shepard and Metzler
<xref ref-type="bibr" rid="CR20">1971</xref>
), which suggests an on-going mental rotation process performed at constant speed. Other studies also found view dependency effects of spatial memory for object layouts presented successively (Rieser
<xref ref-type="bibr" rid="CR19">1989</xref>
; Diwadkar and McNamara
<xref ref-type="bibr" rid="CR6">1997</xref>
). Christou et al. (
<xref ref-type="bibr" rid="CR5">2003</xref>
) investigated the effect of external cues concerning the change in viewpoint on the recognition of highly view-dependent stimuli. They found that both visual background and indication of the next viewpoint improved participants’ performance, thus providing evidence for egocentric or view-based encoding of shapes.</p>
<p>When an observer is viewing an array of objects, the same relative change in viewpoint can arise either from object array rotation or from viewer rotation. Since in the absence of external visual cues, these two transformations result in identical changes in the observer’s retinal projection of the scene, traditional models of object recognition would predict a similar cognitive performance. Simons and Wang (
<xref ref-type="bibr" rid="CR21">1998</xref>
) were the first to compare these two situations in a table-top mental rotation task. Participants learned an array of objects on a circular table. They were then tested on the same array either after a change in viewing position or after a rotation of the table, both leading to the same change in the relative orientation of the layout. They found that when the change in orientation resulted from the observer’s motion rather than from the table rotation, recognition performance was much higher, indicating an improved mental rotation mechanism. Surprisingly, when participants moved to a novel viewing position, performance was better if the layout had remained static than if it had rotated so as to present the exact same view. This facilitation effect found for physically moving observers, often referred to as the viewer advantage, has also been reported in other types of mental rotation paradigms, namely with imagined rotations (Amorim and Stucchi
<xref ref-type="bibr" rid="CR1">1997</xref>
; Wraga et al.
<xref ref-type="bibr" rid="CR26">1999</xref>
,
<xref ref-type="bibr" rid="CR27">2000</xref>
), with virtual rotations (Christou and Bülthoff
<xref ref-type="bibr" rid="CR4">1999</xref>
; Wraga et al.
<xref ref-type="bibr" rid="CR28">2004</xref>
), and finally for haptically learned object layouts (Pasqualotto et al.
<xref ref-type="bibr" rid="CR18">2005</xref>
; Newell et al.
<xref ref-type="bibr" rid="CR17">2005</xref>
). These findings strongly contradict earlier object recognition theories and suggest the existence of distinct mechanisms at work for static and moving observers. Until recently, the most widely accepted interpretation was that when the change in viewpoint arises from the observer’s locomotion, the latter benefits from the spatial updating mechanism (Simons and Wang
<xref ref-type="bibr" rid="CR21">1998</xref>
; Wang and Simons
<xref ref-type="bibr" rid="CR23">1999</xref>
; Amorim et al.
<xref ref-type="bibr" rid="CR2">1997</xref>
). This updating mechanism would allow self-motion cues to be integrated in order to dynamically update spatial relationships within the test layout, resulting in improved mental rotation performance. Nevertheless, recent findings cast doubt on this interpretation of the locomotion contribution. Indeed, the advantage was cancelled for larger rotations or when additional cues indicating the change in perspective were provided (Motes et al.
<xref ref-type="bibr" rid="CR12">2006</xref>
; Mou et al.
<xref ref-type="bibr" rid="CR16">2009</xref>
). Accordingly, a new interpretation of the role of locomotion was formulated in the latter study: the updating involved during locomotion would simply allow keeping track of reference directions between the study and test views. In order to ensure that differences arise exclusively from the spatial updating mechanism, in the present study we used a table texture with clearly visible wood stripes that provide a strong reference direction in every condition.</p>
<p>This updating mechanism, which has been proposed as a fundamental capacity driving animal navigation behaviors (Wang and Spelke
<xref ref-type="bibr" rid="CR25">2002</xref>
), can be fed by a variety of sensory information. On the one hand, the vestibular system provides self-motion-specific receptors which, in conjunction with somatosensory inputs, allow humans and other mammals to keep track of their position in the environment even in complete darkness, by continuously integrating these cues (Mittelstaedt and Mittelstaedt
<xref ref-type="bibr" rid="CR11">1980</xref>
; Loomis et al.
<xref ref-type="bibr" rid="CR10">1999</xref>
). On the other hand, since the 1950s, many studies have emphasized how processing optic flow provides humans with very efficient mechanisms to evaluate and guide self-motion in a stable environment (Gibson
<xref ref-type="bibr" rid="CR7">1950</xref>
). Klatzky et al. (
<xref ref-type="bibr" rid="CR8">1998</xref>
) studied how visual, vestibular and somatosensory cues combine to update the starting position when participants are exposed to a two-segment path with a turn. Systematic pointing errors were observed when vestibular information was absent, which suggests a discrete on/off contribution of the vestibular input rather than a continuous integration with other cues such as vision. In our view, a reduction of sensory information should reduce the global reliability of motion cues and therefore increase errors, but not in such a systematic fashion. Recent experiments using disorienting maneuvers showed that updating objects placed around participants does not involve allocentric information (Wang and Spelke
<xref ref-type="bibr" rid="CR24">2000</xref>
), suggesting that even if participants were given time to learn their locations, object-to-object relationships are not encoded. A recent study extended these results by showing that if participants learn the object layout from an external point of view, as is the case in table-top mental rotation tasks, object-to-object properties are encoded and participants can point towards the objects even after disorientation (Mou et al.
<xref ref-type="bibr" rid="CR13">2006</xref>
). The authors explain that when objects are placed around participants, allocentric properties such as an intrinsic axis cannot be extracted easily, and only the updating of egocentric representations, which is impaired by disorientation, can be used to perform the task.</p>
<p>Simons and Wang (
<xref ref-type="bibr" rid="CR21">1998</xref>
) investigated whether background visual cues around the test layout played a role in the viewer advantage, using phosphorescent objects in a dark room. Moving observers still yielded significantly better mental rotation performance than static ones, although the advantage was reduced compared to conditions in which environmental cues were available. This reduction led us to believe that in this task, different sources of information, namely visual cues, could contribute to the viewer advantage. Nonetheless, in follow-up experiments these authors reported that extra-retinal cues were responsible for this advantage and claimed that visual cues do not contribute at all (Simons et al.
<xref ref-type="bibr" rid="CR22">2002</xref>
). In this study, very poor visual information was available compared to previous work: static snapshots taken before and after the change in viewpoint, with a rather uniform background, were used. Participants experienced only a limited field of view and were provided with no dynamic visual information about their rotation. In the present work, the visual contribution to the updating mechanism will be assessed independently from the other modalities. We believe that richer visual cues can in fact improve mental rotations or, if not sufficient alone, at least contribute to the overall effect.</p>
<p>To summarize, several studies have manipulated modalities (visual, vestibular, somatosensory) that could be involved in the mental rotation process. Nevertheless, it is not yet clear what the exact contributions of these sensory modalities are and how they might interact. In particular, no previous study has really assessed the role of audition, although relevant acoustic cues could also improve mental rotations. The aim of the present research was to use the exact same setup to compare the mental rotation performance of moving observers when the spatial updating mechanism is provided with unimodal information or with different bimodal combinations. A multimodal virtual reality platform was designed so as to systematically measure performance in a table-top mental rotation task under different sensory contexts. We believe that the richer the sensory context available to the updating mechanism, the better the mental rotations will be. This multisensory approach to mental rotations will help explain a large range of findings reported previously.</p>
</sec>
<sec id="Sec2">
<title>General methods</title>
<sec id="Sec3">
<title>Participants</title>
<p>Twelve university students (4 females and 8 males, mean age 24.4 ± 3.3 years) participated in experiment A and another 12 (5 females and 7 males, mean age 24.5 ± 3.9 years) participated in experiment B. All were right-handed except one in experiment A and two in experiment B. All had normal or corrected-to-normal vision. None of the participants knew of the hypotheses being tested.</p>
</sec>
<sec id="Sec4">
<title>Apparatus</title>
<p>The experiments were conducted using an interactive Virtual Reality setup that immersed participants in a partial virtual environment. The setup consisted of a cabin mounted on top of a six-degree-of-freedom Stewart platform (see Fig. 
<xref rid="Fig1" ref-type="fig">1</xref>
, left panel). Inside this completely enclosed cabin were a seat and a round physical table, 40 cm in diameter, placed between the seat and a large projection screen. The seat position was adjusted in order to have a constant viewing position across participants (50 cm away from the table central axis, 38.5 cm above the table surface, and 138 cm away from the front projection screen, subtending 61° of horizontal FOV). A real-time application was developed using Virtools
<sup>TM</sup>
, a behavioral and rendering engine, in order to synchronously control the platform motion, the visual background and the table layout with the response interface. An infrared camera in the cabin of the setup allowed the experimenter to continuously monitor the participants. In the following sections, we will specify how viewpoint changes were done, how the test objects were displayed, and how the response interface was designed.
<fig id="Fig1">
<label>Fig. 1</label>
<caption>
<p>
<italic>Left</italic>
A sketch of the experimental setup. Participants sat inside a closed cabin mounted on a motion platform; the cabin contained a front projection screen displaying the virtual scene and, in its middle, a table with an embedded screen and touch screen displaying the test object layout and recording the participants’ answers. The different motion cues available during the viewpoint changes were achieved with a combination of the following manipulations:
<italic>P</italic>
the platform rotation,
<italic>R</italic>
the room rotation on the front screen,
<italic>T</italic>
the layout rotation on the table screen, and a speaker providing a stable external sound cue.
<italic>Right</italic>
The five objects used in the spatial layouts: a mobile phone, a shoe, an iron, a teddy bear and a roll of film</p>
</caption>
<graphic position="anchor" xlink:href="221_2009_1892_Fig1_HTML" id="MO1"></graphic>
</fig>
</p>
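As a quick check of the stated viewing geometry, the screen width implied by a 61° horizontal FOV at a 138 cm viewing distance can be recovered with basic trigonometry; the sketch below (Python) assumes a flat screen centred on the viewing axis, which is an assumption since the screen shape is not specified in the text.

import math

# Implied front-screen width for the stated geometry (flat, centred screen assumed):
# width = 2 * distance * tan(FOV / 2)
distance_cm = 138.0
fov_deg = 61.0
width_cm = 2.0 * distance_cm * math.tan(math.radians(fov_deg / 2.0))
print(round(width_cm, 1))  # ~162.6 cm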
<sec id="Sec5">
<title>Changes in observer position</title>
<p>In every trial, participants were passively rotated around the vertical axis centered on the tabletop (an off-axis rotation), using different combinations of motion cueing: body motion, visual scene motion, and acoustic scene motion. Note that this rotation corresponds to a rotation in the simulated environment; therefore, in the case of purely visual rotations, the participant was not physically rotated. In each of the modalities, the rotation corresponded to the same smooth off-axis yaw rotation performed around the table’s vertical axis. The rotation amplitude was always 50° to the right and the duration was 5 s, using a raised-cosine velocity profile. The body rotations around the table were performed via a rotation of the platform that stimulated the vestibular system (canals and otoliths) and, to a lesser extent, proprioception through the inertial forces applied to the body. The visual scene rotation corresponded to the rotation of the viewpoint in the virtual environment. This environment was displayed on the front projection screen and consisted of a detailed model of a rectangular room (a 2 m wide × 3 m long indoor space with furniture). The acoustic scene rotation corresponded to the relative rotation of a church bell that rang during the entire movement. The bell was outside the virtual room, and the sound was played using a loudspeaker placed 30° to the right of participants in the initial position, at a height of 1.05 m. Note that with this setup, since the auditory stimulation was static, it was necessarily combined with the body rotation and its contribution was not assessed independently. At the end of trials involving body motion, the motion platform was repositioned to the starting position using a trapezoidal velocity profile with a maximum instantaneous velocity of 10°/s.</p>
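A minimal sketch of the 50°/5 s raised-cosine velocity profile described above (Python). It assumes the usual parameterization in which the velocity starts and ends at zero and integrates to the rotation amplitude; the exact formula used to drive the platform is not given in the text, so this is illustrative only.

import numpy as np

def raised_cosine_velocity(amplitude_deg=50.0, duration_s=5.0, n=1001):
    # Yaw velocity (deg/s) that is zero at both endpoints and integrates to the
    # rotation amplitude; peak velocity is 2 * amplitude / duration (20 deg/s here).
    t = np.linspace(0.0, duration_s, n)
    v = (amplitude_deg / duration_s) * (1.0 - np.cos(2.0 * np.pi * t / duration_s))
    return t, v

t, v = raised_cosine_velocity()
dt = t[1] - t[0]
print(round(float(np.sum(v) * dt), 1), round(float(v.max()), 1))  # 50.0 deg, 20.0 deg/s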
</sec>
<sec id="Sec6">
<title>The table and the objects</title>
<p>For the mental rotation task, the same five objects were used to create the spatial layouts (mobile phone, shoe, iron, teddy bear, roll of film, see Fig. 
<xref rid="Fig1" ref-type="fig">1</xref>
, right panel). The size and familiarity of these objects were matched. The size was adjusted so as to have equivalent projected surfaces on the horizontal plane and limited discrepancy in height across objects. As for familiarity, we chose objects from everyday life that could potentially be found on a table. These layouts were displayed on a 21-inch TFT screen embedded in the cylindrical table, physically placed in the middle of the cabin and virtually located in the center of the room (see Fig. 
<xref rid="Fig1" ref-type="fig">1</xref>
, left panel). Only a disc section of this screen was visible (29 cm in diameter). The object configurations were generated automatically according to specific rules to avoid overlapping objects. Both the virtual room and the objects on the table were displayed using a passive stereoscopic technique based on anaglyphs. Therefore, during the entire experiment, participants wore red/cyan spectacles in order to correctly filter the image displayed for each eye. Independently of the participants’ rotations, the table and object layout could be virtually rotated while hidden.</p>
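The generation rule is only described as avoiding overlapping objects; the sketch below (Python) illustrates one plausible rejection-sampling implementation. The 6 cm minimum separation and 2 cm edge margin are assumed values chosen for illustration; only the 29 cm visible disc (14.5 cm radius) comes from the text.

import math
import random

def sample_layout(n_objects=5, disc_radius_cm=14.5, min_sep_cm=6.0, margin_cm=2.0):
    # Draw object centres uniformly on the visible disc, rejecting any candidate
    # that would come closer than min_sep_cm to an already placed object.
    positions = []
    while len(positions) < n_objects:
        r = (disc_radius_cm - margin_cm) * math.sqrt(random.random())
        a = random.uniform(0.0, 2.0 * math.pi)
        candidate = (r * math.cos(a), r * math.sin(a))
        if all(math.dist(candidate, p) >= min_sep_cm for p in positions):
            positions.append(candidate)
    return positions

print(sample_layout())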
</sec>
<sec id="Sec7">
<title>The response interface</title>
<p>The mental rotation task required participants to select the object from the layout that had been moved. In order to make responding as natural and intuitive as possible, we used a touch screen mounted on top of the table screen, allowing participants to pick an object simply by touching it with the index finger of their dominant hand. The name of the selected object was displayed on the table screen at the end of the response phase, in order to provide a possible control for errors in the automatic detection process. Reaction times (RTs) were recorded.</p>
</sec>
</sec>
<sec id="Sec8">
<title>Procedure</title>
<p>In the present work, we will compare different combinations of the sensory cues available during the observer’s changes in position, tested in two separate experiments. The results will then be compared to those of a previous experiment (Lehmann et al.
<xref ref-type="bibr" rid="CR9">2008</xref>
), in which for some conditions the observer remained in the same position. The latter validated the experimental setup by replicating as closely as possible the first experiment of Wang and Simons (
<xref ref-type="bibr" rid="CR23">1999</xref>
). Nonetheless, all conditions will be detailed below and the associated experiment will always be mentioned.</p>
<sec id="Sec9">
<title>Time-course of a trial</title>
<p>On each trial, participants viewed a new layout of the five objects on the table for 3 s (learning phase). Then the objects and the table disappeared for 7 s (hidden phase). During this period, participants and the table could rotate independently in the virtual room, and one of the five objects was systematically translated 4 cm in a random direction, ensuring that the movement avoided collisions with the other four. The objects and the table were then displayed again, and participants were asked to pick the object they thought had moved.</p>
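A companion sketch (Python) for the hidden-phase manipulation described above: one object is chosen at random and translated 4 cm in a random direction, resampling the direction until the new position stays on the visible disc and keeps clear of the other four objects. The separation and margin thresholds are the same assumed values as in the layout sketch above, not figures from the paper.

import math
import random

def displace_one(positions, dist_cm=4.0, disc_radius_cm=14.5,
                 min_sep_cm=6.0, margin_cm=2.0):
    # Pick one object and move it dist_cm in a random direction; retry with a new
    # direction if the move would leave the disc or collide with another object.
    i = random.randrange(len(positions))
    others = positions[:i] + positions[i + 1:]
    for _ in range(1000):  # bounded retries, enough in practice for this sketch
        a = random.uniform(0.0, 2.0 * math.pi)
        new = (positions[i][0] + dist_cm * math.cos(a),
               positions[i][1] + dist_cm * math.sin(a))
        inside = math.hypot(new[0], new[1]) <= disc_radius_cm - margin_cm
        if inside and all(math.dist(new, p) >= min_sep_cm for p in others):
            return i, new
    raise RuntimeError("no collision-free displacement found")

# Example (using the sample_layout sketch given earlier): displace_one(sample_layout())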
</sec>
<sec id="Sec10">
<title>Experimental conditions</title>
<p>As illustrated in Fig. 
<xref rid="Fig2" ref-type="fig">2</xref>
, the experimental conditions are defined by a combination of two factors: the view of the layout (
<italic>same</italic>
or
<italic>rotated</italic>
) and the sensory context available (
<italic>Vision</italic>
,
<italic>Body</italic>
,
<italic>Body</italic>
&
<italic>Audition</italic>
,
<italic>Body</italic>
&
<italic>Vision</italic>
and
<italic>None</italic>
). The changes in position took place during the hidden phase, in which participants were passively rotated counter-clockwise around the table by 50° to the second viewpoint, with a given sensory context. The conditions will be described in pairs, corresponding to the two possible values of the test view of the layout. Note that this factor is defined in an egocentric reference frame. In trials with the
<italic>same</italic>
view, participants were tested with the same view of the table as in the learning phase; therefore, the table was rotated so as to compensate for the observer rotation. In trials with
<italic>rotated</italic>
view, they were tested with a clockwise 50° orientation change resulting from the rotation of the observer around the table.
<fig id="Fig2">
<label>Fig. 2</label>
<caption>
<p>Illustration of the experimental conditions according to different simulated self-motion sensory contexts (consistent manipulations of body physical position, visual orientation in the virtual room and external sound source).
<italic>P</italic>
,
<italic>R</italic>
and
<italic>T</italic>
indicate the technical manipulations detailed in Fig. 
<xref rid="Fig1" ref-type="fig">1</xref>
involved in each condition. The
<italic>first column</italic>
shows the learning context while the other
<italic>two</italic>
show the corresponding test conditions with an egocentric rotation of the layout’s view (5 mental rotation conditions) or not (5 control conditions). The
<italic>two asterisked</italic>
sensory contexts at the bottom were studied in the validation experiment published elsewhere (Lehmann et al.
<xref ref-type="bibr" rid="CR9">2008</xref>
)</p>
</caption>
<graphic position="anchor" xlink:href="221_2009_1892_Fig2_HTML" id="MO2"></graphic>
</fig>
</p>
<p>Experiment A addressed the role of visual cues (conditions in the
<italic>Vision</italic>
row of Fig. 
<xref rid="Fig2" ref-type="fig">2</xref>
). There was a pair of experimental conditions in which participants were provided with visual information about their rotation by means of the room rotation projected on the front screen, with no physical motion of the platform. Experiment B addressed the role of auditory and vestibular cues (conditions in the
<italic>Body</italic>
and
<italic>Body</italic>
&
<italic>Audition</italic>
rows). The two pairs of conditions were either a pure vestibular rotation, or the vestibular rotation coupled with an acoustic cue, always in the dark. The acoustic cue was a static external sound source, a church bell played through a loudspeaker (see Fig. 
<xref rid="Fig1" ref-type="fig">1</xref>
), thus rotating in the egocentric reference frame.</p>
<p>The validation experiment published elsewhere addressed, on the one hand, the role of visuo-vestibular cues (conditions in the
<italic>Body</italic>
&
<italic>Vision</italic>
row) when the participant’s position was changed and, on the other hand, the baseline performance when the participant remained in the same position and only the table rotated (conditions in the
<italic>None</italic>
row). In the latter condition, there was no sensory cue about the participant’s rotation, though participants were informed about the table rotation. In the four conditions involved, the visual environment was visible and rotated together with the platform when the position changed.</p>
</sec>
<sec id="Sec11">
<title>Experimental design</title>
<p>In all the experiments, each condition was tested 20 times. In order to avoid both the difficulty of switching from one condition to another and possible order effects, trials were partially blocked within conditions, and their order was counterbalanced both within and across participants using a nested Latin-square design (as in Wang and Simons
<xref ref-type="bibr" rid="CR23">1999</xref>
). The order of the Latin square corresponded to the number of experimental conditions that a given participant was tested on (2 or 4 for experiments A and B, respectively). Trials were arranged into blocks (10 or 20 trials) in which all conditions were tested for five successive trials. The condition orders within these blocks were created using a Latin-square design, and each participant experienced all of these blocks. Finally, the order of these blocks was counterbalanced across participants, also using a Latin-square design. At the beginning of each block, participants were informed of the condition with a text message displayed on the front projection screen. This defined whether and how the view of the layout would change ("The table will rotate" or "You will rotate around the table") or remain the same ("Nothing will rotate" or "You and the table will rotate"). Experiments A and B lasted approximately 45 and 75 min, respectively.</p>
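A minimal sketch (Python) of the counterbalancing idea: a simple cyclic Latin square in which every condition appears once per row and once per column. The actual nested design may have used a different (for example balanced) square, which the text does not specify.

def cyclic_latin_square(n):
    # Row i is the sequence of condition indices rotated by i positions.
    return [[(i + j) % n for j in range(n)] for i in range(n)]

# Experiment B had 4 sensory-context conditions; row i (reused cyclically across
# the 12 participants) gives one within-block condition order.
for i, row in enumerate(cyclic_latin_square(4)):
    print("order", i, "->", row)  # [0, 1, 2, 3], [1, 2, 3, 0], [2, 3, 0, 1], [3, 0, 1, 2]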
</sec>
</sec>
<sec id="Sec12">
<title>Data analysis</title>
<p>The statistical analyses of accuracy and RT were done using two kinds of repeated measures ANOVA designs, according to whether the comparison across sensory contexts was within or between participants from the three experiments (A, B and validation). The independent variables were the view of the layout (same or rotated) and the sensory context, the latter being either a within-participants factor or a group factor (between participants). For each participant, the accuracy and RT costs of the mental rotations in a given motion sensory context were computed as the difference between trials where the view of the layout was the same and trials where it was rotated. Costs were introduced to facilitate the presentation of the results, as comparing them across sensory contexts is statistically equivalent to analyzing the interaction between the two independent factors (view of the layout × sensory context). Note that for RTs, the cost signs were inverted for consistency.</p>
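A minimal sketch (Python) of the cost computation described above, per participant and per sensory context; the input format and the example numbers are illustrative, not data from the experiments.

def rotation_costs(acc_same, acc_rotated, rt_same_ms, rt_rotated_ms):
    # Accuracy cost: how much accuracy drops when the layout view is rotated.
    # RT cost: extra processing time for rotated views; the sign is inverted
    # relative to the raw same-minus-rotated difference so that larger values
    # always mean a larger mental-rotation cost.
    accuracy_cost = acc_same - acc_rotated
    rt_cost_ms = rt_rotated_ms - rt_same_ms
    return accuracy_cost, rt_cost_ms

acc_cost, rt_cost = rotation_costs(0.90, 0.56, 2400.0, 3200.0)
print(round(acc_cost, 2), round(rt_cost, 1))  # 0.34 800.0 (illustrative)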
</sec>
</sec>
<sec id="Sec13" sec-type="results">
<title>Results</title>
<p>Performance in this task is reflected in two observable variables: the accuracy of moved-object detection and the RT. Accuracy is more related to the efficacy of the mechanisms involved, whereas RT provides an indication of the cognitive processing time. The accuracy chance level for this task was 20% (one object out of five); even in the worst condition, participants performed well above chance. The accuracy and RT measured in experiments A and B, as well as baseline data from the previous validation experiment, are presented in Fig. 
<xref rid="Fig3" ref-type="fig">3</xref>
(top plots) together with the associated sensory context costs (bottom plots).
<fig id="Fig3">
<label>Fig. 3</label>
<caption>
<p>The mental rotation task performance plotted together with the results from the previous experiment. The average accuracy and reaction times as a function of the change in layout view (
<italic>top plots</italic>
), and the corresponding mental rotation costs (
<italic>bottom plots</italic>
), for the various sensory combination contexts:
<italic>Body</italic>
,
<italic>Vision</italic>
and
<italic>Body & Audition</italic>
(from the current experiments),
<italic>None</italic>
and
<italic>Body & Vision</italic>
(from the previous validation experiment). The
<italic>error bars</italic>
correspond to the inter-individual standard error</p>
</caption>
<graphic position="anchor" xlink:href="221_2009_1892_Fig3_HTML" id="MO3"></graphic>
</fig>
</p>
<p>The accuracy cost when only the table rotated without motion cues (
<italic>None</italic>
: 34%) was not statistically different from that with vestibular cues [
<italic>Body</italic>
: 32%,
<italic>F</italic>
(1,22) = 0.32;
<italic>P</italic>
 = 0.6] or with visual cues [
<italic>Vision</italic>
: 28%,
<italic>F</italic>
(1,22) = 1.32;
<italic>P</italic>
 = 0.26]. Nevertheless, the associated RT costs were significantly shorter, by 830 ms, for
<italic>Body</italic>
[
<italic>F</italic>
(1,22) = 6.2;
<italic>P</italic>
 < 0.02] and marginally shorter, by 600 ms, for
<italic>Vision</italic>
[
<italic>F</italic>
(1,22) = 2.6;
<italic>P</italic>
 = 0.12]. When two modalities were available, the accuracy cost was statistically lower than without motion cues, for both
<italic>Body</italic>
&
<italic>Audition</italic>
[20%,
<italic>F</italic>
(1,22) = 7.9;
<italic>P</italic>
 < 0.01] and
<italic>Body</italic>
&
<italic>Vision</italic>
[14%,
<italic>F</italic>
(1,11) = 12.5;
<italic>P</italic>
 < 0.005], and the RT cost was significantly shorter for both
<italic>Body</italic>
&
<italic>Audition</italic>
, by 1,180 ms [
<italic>F</italic>
(1,22) = 11.1;
<italic>P</italic>
 < 0.005] and
<italic>Body</italic>
&
<italic>Vision</italic>
, by 570 ms [
<italic>F</italic>
(1,11) = 6.7;
<italic>P</italic>
 < 0.03]. Planned comparisons showed that the significant improvements in performance detailed above stem from statistical differences when the layout view is the same (control conditions) for the accuracy costs of
<italic>Body</italic>
&
<italic>Audition</italic>
(
<italic>P</italic>
 < 0.02) and
<italic>Body</italic>
&
<italic>Vision</italic>
(
<italic>P</italic>
 < 0.001), whereas they stem from statistical differences when the layout view is rotated (mental rotations) for the RT costs of
<italic>Body</italic>
&
<italic>Audition</italic>
(
<italic>P</italic>
 < 0.04), and marginally for
<italic>Body</italic>
(
<italic>P</italic>
 = 0.078) and
<italic>Body</italic>
&
<italic>Vision</italic>
(
<italic>P</italic>
 = 0.083). These results indicate that when participants were provided with unimodal sensory contexts, spatial updating allowed for shorter processing times, although the mechanism was not more efficient. In contrast, both accuracy and RTs improved when participants were stimulated by a combination of two modalities.</p>
<p>The accuracy costs of the studied bimodal sensory contexts were always statistically different from each of the associated unimodal contexts:
<italic>Body</italic>
&
<italic>Audition</italic>
versus
<italic>Body</italic>
[
<italic>F</italic>
(1,11) = 13.1;
<italic>P</italic>
 < 0.005];
<italic>Body</italic>
&
<italic>Vision</italic>
versus
<italic>Body</italic>
[
<italic>F</italic>
(1,22) = 9.8;
<italic>P</italic>
 < 0.005];
<italic>Body</italic>
&
<italic>Vision</italic>
versus
<italic>Vision</italic>
[
<italic>F</italic>
(1,22) = 4.6;
<italic>P</italic>
 < 0.05]. Here again, these differences stemmed mainly from the conditions in which participants were tested with the same layout view (with
<italic>P</italic>
 < 0.04,
<italic>P</italic>
 < 0.001 and
<italic>P</italic>
 = 0.11, respectively). None of these effects was found in the analyses of the RT.</p>
</sec>
<sec id="Sec14">
<title>General discussion</title>
<p>To summarize the results: first, performance improved for unimodal sensory contexts, as indicated by the shorter processing times (RT), compared to when participants remained in the same position and only the table was rotated. Second, with bimodal sensory contexts, performance improved in efficacy (accuracy) compared to both unimodal and static situations. Third, processing times did not differ between unimodal and bimodal situations, but were shorter than when participants were not moved. Fourth, although in terms of efficacy the improvement stemmed from the conditions in which participants were tested with the same view of the layout, the shorter processing times resulted from the conditions in which participants were tested with rotated views.</p>
<sec id="Sec15">
<title>Interpreting our results in a sensory-based framework</title>
<p>Taken together, these findings allow us to conclude that the efficacy of the spatial updating mechanism underlying mental rotations increases gradually when sensory cues are added for the moving observer, whereas the processing time drops to the same level for any of the sensory contexts. This is consistent with the idea that the updating is an ongoing process, which takes the same amount of time independently of the modalities available. This suggests that the same spatial updating mechanism is always at stake during self-motion; it is sensory-independent but shows additive effects when different sensory modalities are consistently co-activated. Note that this improvement of spatial updating should not be considered as quantitatively dependent on the number of sensory modalities stimulated, but rather on the global richness of the cues involved. This predicts a minimal cost in an ecological environment that is maximally rich in terms of the relevant sensory cues available, which is in line with previous results obtained with real setups that showed a negative cost (Simons and Wang
<xref ref-type="bibr" rid="CR21">1998</xref>
). The fact that our experimental platform uses mixed elements of a physical and a virtual setup allowed for innovative multisensory stimulations, despite the limitations in terms of ecological validity. Indeed, while previous apparatuses have offered limited (real setups) or no (imagined rotations) possibility of assessing the modality contributions, we could independently manipulate a large set of the sensory combinations involved when an observer is moving.</p>
<p>The findings reported here provide strong evidence that the spatial updating of a memorized object layout requires a certain amount of continuous motion-related information in order to cognitively bind the layout with the change in viewpoint. We showed that this binding could be performed efficiently if a minimal sensory context provides observers with information about their own movements. Indeed, visual information alone was sufficient to exhibit an advantage for moving observers, and we believe that with maximally rich visual information (full field of view and a natural environment) this improvement could also have been observed in accuracy. Moreover, combining the visual modality with vestibular information also improves performance, which again indicates that vision plays a role in the mental rotation task when the viewpoint change results from a change in observer position. These findings clearly contradict the claims of Simons et al. (
<xref ref-type="bibr" rid="CR22">2002</xref>
). Similarly, combining an auditory cue with the vestibular stimulation also resulted in a significant improvement of mental rotations, showing that stable acoustic landmarks can be used by the spatial updating mechanism.</p>
</sec>
<sec id="Sec16">
<title>Inhibition of the automatic spatial updating process</title>
<p>Finally, we should comment on why, in terms of spatial updating efficiency only, the accuracy cost reduction observed for each sensory context seems to be produced by the decrease in performance in the control conditions (same view), and to a lesser extent by the improvement in the mental rotation conditions (rotated view). Note that the contrary held for the drop in processing times. In fact, since the spatial updating mechanism is automatic, in four of the control conditions it must be inhibited in order to perform the task with the same view of the layout. Otherwise the expected view would not match the tested view, even though it is precisely the learned view. Still considering the sensory-based framework, it is easy to understand that the richer the sensory cues about the rotation of the observer are, the harder it is to prevent the automatic spatial updating mechanism from operating. In fact, the inhibition of the updating is only effective when few or no sensory cues about self-motion are present. This is exactly what we observed in our experiment. It explains why the control conditions get impaired, and why the one with no motion cue at all yields the best performance. Conversely, the better the motion cueing, the better the spatial updating performance, resulting in improved mental rotation performance. As noted before, in our setup the visual and the body stimulations could have been richer, which led to a smaller improvement in mental rotation performance than one could expect. Nevertheless, what really matters is the global effect of the sensory context on the cost of the mental rotation, which is characterized by the difference between the mental rotation condition and the control condition within the
<italic>same</italic>
sensory context.</p>
</sec>
<sec id="Sec17">
<title>Accounting for previous findings</title>
<p>Interestingly, our sensory-based framework also accounts for the wide range of cost reductions previously reported in the literature. Indeed, in a real setup where all cues are naturally available, this cost can even become negative (Wang and Simons
<xref ref-type="bibr" rid="CR23">1999</xref>
), which means that for moving observers it is easier to perform a mental rotation than to keep the same layout view in memory. This result can be explained by the great difficulty of preventing the automatic updating process when the observer is moving, which is necessary if the table also rotates in order to preserve the same view. As discussed in the previous section, the inhibition of the updating is only effective when few or no sensory cues about self-motion are present, again showing the link with sensory richness. Nevertheless, in previous experiments the mental rotation cost for moving observers was also reduced when the sensory context was not maximal, such as with passively moved observers (Wang and Simons
<xref ref-type="bibr" rid="CR23">1999</xref>
) or with observers provided with no visual background (Simons and Wang
<xref ref-type="bibr" rid="CR21">1998</xref>
). According to our framework, the amount of mental rotation cost reduction depends on the richness of the cues provided by the various experimental setups and conditions. It is thus even possible to extend it to imagined-rotation studies, in which the scenario given to the participants and the vividness of their imagination modulated the strength of the effect (Amorim and Stucchi
<xref ref-type="bibr" rid="CR1">1997</xref>
; Wraga et al.
<xref ref-type="bibr" rid="CR26">1999</xref>
,
<xref ref-type="bibr" rid="CR27">2000</xref>
).</p>
<p>When it comes to the effect of manipulating the visual background, the interpretations provided by different authors have not always been consistent. Some claim that the updating of object structures for moving observers is mediated by extra-retinal cues alone, and that visual cues provided by the background do not contribute (Simons et al.
<xref ref-type="bibr" rid="CR22">2002</xref>
). Our findings contradict this conclusion to some extent. Although we could not provide evidence that visual cues alone lead to a significant cost reduction (for reasons discussed previously), we found that together with the vestibular and somatosensory cues, the facilitation becomes significant. The fact that individual cues alone are not sufficient to properly enhance mental rotations shows that visual information contributes to the quality of the updating mechanism. As we explained before, we believe that the manipulation of the visual cues in the study of Simons et al. was not convincing. Indeed, only visual snapshots taken before and after the rotation were provided, with no dynamic information about the observer’s change in viewpoint. According to our sensory interpretation of the viewer advantage effect, it is not surprising that they could not find a significant improvement in the mental rotation task with such a limited "amount" of sensory information and continuity about the rotation.</p>
</sec>
<sec id="Sec18">
<title>Allocentric encoding versus additional sensory cue</title>
<p>In a recent study using a similar paradigm, Burgess et al. (
<xref ref-type="bibr" rid="CR3">2004</xref>
) introduced a phosphorescent cue card providing a landmark external to the array. This cue card could be congruent with an egocentric reference frame (i.e., move with the observer) or with an allocentric reference frame (i.e., remain static in the room). They independently manipulated three factors (subject viewpoint S, table T and cue card C) in order to test three different types of representations of object locations based either on visual snapshots, or egocentric representations that can be updated by self-motion, or allocentric representations that are relative to the external cue. They found that the consistency with the cue card also allowed for improved performance, although less markedly than for static snapshot recognition or for the updating of an egocentric representation. They conclude that part of the effect attributed to egocentric updating by Wang and Simons can be explained by the use of allocentric spatial representations provided by the room.</p>
<p>In our view, there is a confound between the level of spatial knowledge representation and the level of spatial information processing. Indeed, speaking of visual snapshots defines one way of storing spatial relationships using mental imagery, and therefore stands at the representational level. Recognition tasks relying on this type of storage are more efficient when the view is not changed, as detailed in the introduction, showing its intrinsically egocentric nature. In contrast, speaking of representations updated by self-motion involves both an egocentric storage (representational level), which might well consist of visual snapshots after all, and the spatial updating mechanism (processing level), which allows predictions to be made about the outcome after a change in viewpoint, provided that continuous spatial information is available. Finally, we believe that for these table-top mental rotation tasks, the use of extrinsic allocentric cues enabling the encoding of object-to-environment spatial relationships may not be very efficient, given the short presentation delays and the distance between the table and the possible external cues. Instead, intrinsic object-to-object relationships within the layout are probably encoded. Indeed, learning intrinsic directions improves recognition (Mou et al.
<xref ref-type="bibr" rid="CR14">2008a</xref>
), and the movement of an object within the layout is more likely to be detected when the other objects are stationary than when they move (Mou et al.
<xref ref-type="bibr" rid="CR15">2008b</xref>
). Finally, the updating of such a spatial structure during locomotion would rely on an egocentric process in which the orientation of the global structure is updated in a self-to-object manner. In other words, locomotion would allow keeping track of the layout’s intrinsic "reference direction", as described by Mou et al. (
<xref ref-type="bibr" rid="CR16">2009</xref>
). Nonetheless, these reference directions are not necessarily allocentric (e.g. related to the environment).</p>
<p>The multisensory approach introduced in this paper offers another explanation for the reported cue card effects and the so-called use of allocentric representations. One can distinguish two types of self-motion cues that can be processed in order to update spatial features: those provided by internal mechanisms (idiothetic information, as defined by Mittelstaedt and Mittelstaedt
<xref ref-type="bibr" rid="CR11">1980</xref>
), and those coming from external signals (allothetic information). On the one hand, since idiothetic information includes signals from the vestibular and somatosensory systems and efferent copies of motor commands, it is intrinsically egocentric. On the other hand, allothetic information, provided for instance by optic or acoustic flow, must rely on stable sources in the environment in order to be efficient. These sources are thus allocentric in nature. Indeed, without a stable world, a moving observer could not use static sources to correctly integrate the egocentric optic flow. Therefore, the cue card introduced in the experiment of Burgess et al. (
<xref ref-type="bibr" rid="CR3">2004</xref>
) can be considered as nothing more than a stable visual landmark that contributes to the perception of self-motion and provides an additional sensory cue to the updating mechanism. Accordingly, let us compare the resulting mental rotation costs when adding this visual cue to the sensory context of the observer’s motion around the table. We find a considerable improvement when adding this visual cue (ST − S ≈ −12%, stable cue card) as compared to the body cues alone (STC − SC ≈ −2%, moving cue card).</p>
</sec>
</sec>
<sec id="Sec19" sec-type="conclusion">
<title>Conclusion</title>
<p>The original contribution of this work is to have directly assessed how the richness of sensory cues affects the quality of the spatial updating mechanism involved in predicting how a spatial layout will appear after a change in the observer’s position. We found that the sensory contribution to the egocentric updating mechanism in table-top mental rotation tasks is additive: the richer the sensory context during the observer’s motion, the better the mental rotation performance. Finally, we showed that this multisensory approach can account for most of the findings previously reported with similar tasks. In particular, it provides an alternative interpretation of the allocentric contribution introduced by Burgess et al. (
<xref ref-type="bibr" rid="CR3">2004</xref>
). Indeed, a more ecological explanation is available, based on the multisensory redundancy that our brains exploit to efficiently update spatial information while moving.</p>
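<p>To make the notion of an additive sensory contribution concrete, here is a toy model assumed purely for illustration (the linear form and the gain values are assumptions, not quantities estimated from the experiments): each self-motion cue available during the observer’s motion contributes its own reduction of the mental rotation cost.</p>
<preformat>
# Toy additive model of the mental rotation cost (percentage points of error),
# assumed for illustration only; the gains are not fitted to the reported data.
BASELINE_COST = 20.0
CUE_GAIN = {"body": 6.0, "vision": 6.0, "audition": 4.0}

def predicted_cost(available_cues):
    """Richer sensory contexts yield smaller predicted mental rotation costs."""
    reduction = sum(CUE_GAIN[c] for c in available_cues)
    return max(BASELINE_COST - reduction, 0.0)

for context in ([], ["body"], ["body", "audition"], ["body", "vision"]):
    print(context, predicted_cost(context))
</preformat>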
</sec>
</body>
<back>
<ack>
<p>This work was presented at the IMRF 2008 conference. Manuel Vidal received a post-doctoral scholarship from the Max Planck Society and Alexandre Lehmann received a doctoral scholarship from the Centre National de la Recherche Scientifique. We are grateful to the workshop of the Max Planck Institute for the construction of the table set-up.</p>
<p>
<bold>Open Access</bold>
This article is distributed under the terms of the Creative Commons Attribution Noncommercial License which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited.</p>
</ack>
<ref-list id="Bib1">
<title>References</title>
<ref id="CR1">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Amorim</surname>
<given-names>MA</given-names>
</name>
<name>
<surname>Stucchi</surname>
<given-names>N</given-names>
</name>
</person-group>
<article-title>Viewer- and object-centered mental explorations of an imagined environment are not equivalent</article-title>
<source>Brain Res Cogn Brain Res</source>
<year>1997</year>
<volume>5</volume>
<fpage>229</fpage>
<lpage>239</lpage>
<pub-id pub-id-type="doi">10.1016/S0926-6410(96)00073-0</pub-id>
</citation>
<citation citation-type="display-unstructured">Amorim MA, Stucchi N (1997) Viewer- and object-centered mental explorations of an imagined environment are not equivalent. Brain Res Cogn Brain Res 5:229–239
<pub-id pub-id-type="pmid">9088559</pub-id>
</citation>
</ref>
<ref id="CR2">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Amorim</surname>
<given-names>MA</given-names>
</name>
<name>
<surname>Glasauer</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Corpinot</surname>
<given-names>K</given-names>
</name>
<name>
<surname>Berthoz</surname>
<given-names>A</given-names>
</name>
</person-group>
<article-title>Updating an object’s orientation and location during nonvisual navigation: a comparison between two processing modes</article-title>
<source>Percept Psychophys</source>
<year>1997</year>
<volume>59</volume>
<fpage>404</fpage>
<lpage>418</lpage>
</citation>
<citation citation-type="display-unstructured">Amorim MA, Glasauer S, Corpinot K, Berthoz A (1997) Updating an object’s orientation and location during nonvisual navigation: a comparison between two processing modes. Percept Psychophys 59:404–418
<pub-id pub-id-type="pmid">9136270</pub-id>
</citation>
</ref>
<ref id="CR3">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Burgess</surname>
<given-names>N</given-names>
</name>
<name>
<surname>Spiers</surname>
<given-names>HJ</given-names>
</name>
<name>
<surname>Paleologou</surname>
<given-names>E</given-names>
</name>
</person-group>
<article-title>Orientational manoeuvres in the dark: dissociating allocentric and egocentric influences on spatial memory</article-title>
<source>Cognition</source>
<year>2004</year>
<volume>94</volume>
<fpage>149</fpage>
<lpage>166</lpage>
<pub-id pub-id-type="doi">10.1016/j.cognition.2004.01.001</pub-id>
</citation>
<citation citation-type="display-unstructured">Burgess N, Spiers HJ, Paleologou E (2004) Orientational manoeuvres in the dark: dissociating allocentric and egocentric influences on spatial memory. Cognition 94:149–166
<pub-id pub-id-type="pmid">15582624</pub-id>
</citation>
</ref>
<ref id="CR4">
<citation citation-type="other">Christou C, Bülthoff HH (1999) The perception of spatial layout in a virtual world, vol 75. Max Planck Institute Technical Report, Tübingen, Germany</citation>
</ref>
<ref id="CR5">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Christou</surname>
<given-names>CG</given-names>
</name>
<name>
<surname>Tjan</surname>
<given-names>BS</given-names>
</name>
<name>
<surname>Bülthoff</surname>
<given-names>HH</given-names>
</name>
</person-group>
<article-title>Extrinsic cues aid shape recognition from novel viewpoints</article-title>
<source>J Vis</source>
<year>2003</year>
<volume>3</volume>
<fpage>183</fpage>
<lpage>198</lpage>
<pub-id pub-id-type="doi">10.1167/3.3.1</pub-id>
</citation>
<citation citation-type="display-unstructured">Christou CG, Tjan BS, Bülthoff HH (2003) Extrinsic cues aid shape recognition from novel viewpoints. J Vis 3:183–198
<pub-id pub-id-type="pmid">12723964</pub-id>
</citation>
</ref>
<ref id="CR6">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Diwadkar</surname>
<given-names>VA</given-names>
</name>
<name>
<surname>McNamara</surname>
<given-names>TP</given-names>
</name>
</person-group>
<article-title>Viewpoint dependence in scene recognition</article-title>
<source>Psychol Sci</source>
<year>1997</year>
<volume>8</volume>
<fpage>302</fpage>
<lpage>307</lpage>
<pub-id pub-id-type="doi">10.1111/j.1467-9280.1997.tb00442.x</pub-id>
</citation>
<citation citation-type="display-unstructured">Diwadkar VA, McNamara TP (1997) Viewpoint dependence in scene recognition. Psychol Sci 8:302–307 </citation>
</ref>
<ref id="CR7">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Gibson</surname>
<given-names>JJ</given-names>
</name>
</person-group>
<source>The perception of the visual world</source>
<year>1950</year>
<publisher-loc>Boston</publisher-loc>
<publisher-name>Houghton Mifflin</publisher-name>
</citation>
<citation citation-type="display-unstructured">Gibson JJ (1950) The perception of the visual world. Houghton Mifflin, Boston </citation>
</ref>
<ref id="CR8">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Klatzky</surname>
<given-names>RL</given-names>
</name>
<name>
<surname>Loomis</surname>
<given-names>JM</given-names>
</name>
<name>
<surname>Beall</surname>
<given-names>AC</given-names>
</name>
<name>
<surname>Chance</surname>
<given-names>SS</given-names>
</name>
<name>
<surname>Golledge</surname>
<given-names>RG</given-names>
</name>
</person-group>
<article-title>Spatial updating of self-position and orientation during real, imagined, and virtual locomotion</article-title>
<source>Psychol Sci</source>
<year>1998</year>
<volume>9</volume>
<fpage>293</fpage>
<lpage>298</lpage>
<pub-id pub-id-type="doi">10.1111/1467-9280.00058</pub-id>
</citation>
<citation citation-type="display-unstructured">Klatzky RL, Loomis JM, Beall AC, Chance SS, Golledge RG (1998) Spatial updating of self-position and orientation during real, imagined, and virtual locomotion. Psychol Sci 9:293–298 </citation>
</ref>
<ref id="CR9">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Lehmann</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Vidal</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Bülthoff</surname>
<given-names>HH</given-names>
</name>
</person-group>
<article-title>A high-end virtual reality setup for the study of mental rotations</article-title>
<source>Presence Teleoperators Virtual Environ</source>
<year>2008</year>
<volume>17</volume>
<fpage>365</fpage>
<lpage>375</lpage>
<pub-id pub-id-type="doi">10.1162/pres.17.4.365</pub-id>
</citation>
<citation citation-type="display-unstructured">Lehmann A, Vidal M, Bülthoff HH (2008) A high-end virtual reality setup for the study of mental rotations. Presence Teleoperators Virtual Environ 17:365–375 </citation>
</ref>
<ref id="CR10">
<citation citation-type="other">Loomis JM, Klatzky RL, Golledge RG, Philbeck JW (1999) Human navigation by path integration. In: Wayfinding behavior. The John Hopkins University Press, Baltimore, pp 125–151</citation>
</ref>
<ref id="CR11">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Mittelstaedt</surname>
<given-names>ML</given-names>
</name>
<name>
<surname>Mittelstaedt</surname>
<given-names>H</given-names>
</name>
</person-group>
<article-title>Homing by path integration in a mammal</article-title>
<source>Naturwissenschaften</source>
<year>1980</year>
<volume>67</volume>
<fpage>566</fpage>
<lpage>567</lpage>
<pub-id pub-id-type="doi">10.1007/BF00450672</pub-id>
</citation>
<citation citation-type="display-unstructured">Mittelstaedt ML, Mittelstaedt H (1980) Homing by path integration in a mammal. Naturwissenschaftens 67:566–567 </citation>
</ref>
<ref id="CR12">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Motes</surname>
<given-names>MA</given-names>
</name>
<name>
<surname>Finlay</surname>
<given-names>CA</given-names>
</name>
<name>
<surname>Kozhevnikov</surname>
<given-names>M</given-names>
</name>
</person-group>
<article-title>Scene recognition following locomotion around a scene</article-title>
<source>Perception</source>
<year>2006</year>
<volume>35</volume>
<fpage>1507</fpage>
<lpage>1520</lpage>
<pub-id pub-id-type="doi">10.1068/p5459</pub-id>
</citation>
<citation citation-type="display-unstructured">Motes MA, Finlay CA, Kozhevnikov M (2006) Scene recognition following locomotion around a scene. Perception 35:1507–1520
<pub-id pub-id-type="pmid">17286121</pub-id>
</citation>
</ref>
<ref id="CR13">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Mou</surname>
<given-names>W</given-names>
</name>
<name>
<surname>McNamara</surname>
<given-names>TP</given-names>
</name>
<name>
<surname>Rump</surname>
<given-names>B</given-names>
</name>
<name>
<surname>Xiao</surname>
<given-names>C</given-names>
</name>
</person-group>
<article-title>Roles of egocentric and allocentric spatial representations in locomotion and reorientation</article-title>
<source>J Exp Psychol Learn Mem Cogn</source>
<year>2006</year>
<volume>32</volume>
<fpage>1274</fpage>
<lpage>1290</lpage>
<pub-id pub-id-type="doi">10.1037/0278-7393.32.6.1274</pub-id>
</citation>
<citation citation-type="display-unstructured">Mou W, McNamara TP, Rump B, Xiao C (2006) Roles of egocentric and allocentric spatial representations in locomotion and reorientation. J Exp Psychol Learn Mem Cogn 32:1274–1290
<pub-id pub-id-type="pmid">17087583</pub-id>
</citation>
</ref>
<ref id="CR14">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Mou</surname>
<given-names>W</given-names>
</name>
<name>
<surname>Fan</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>McNamara</surname>
<given-names>TP</given-names>
</name>
<name>
<surname>Owen</surname>
<given-names>CB</given-names>
</name>
</person-group>
<article-title>Intrinsic frames of reference and egocentric viewpoints in scene recognition</article-title>
<source>Cognition</source>
<year>2008</year>
<volume>106</volume>
<fpage>750</fpage>
<lpage>769</lpage>
<pub-id pub-id-type="doi">10.1016/j.cognition.2007.04.009</pub-id>
</citation>
<citation citation-type="display-unstructured">Mou W, Fan Y, McNamara TP, Owen CB (2008a) Intrinsic frames of reference and egocentric viewpoints in scene recognition. Cognition 106:750–769
<pub-id pub-id-type="pmid">17540353</pub-id>
</citation>
</ref>
<ref id="CR15">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Mou</surname>
<given-names>W</given-names>
</name>
<name>
<surname>Xiao</surname>
<given-names>C</given-names>
</name>
<name>
<surname>McNamara</surname>
<given-names>TP</given-names>
</name>
</person-group>
<article-title>Reference directions and reference objects in spatial memory of a briefly viewed layout</article-title>
<source>Cognition</source>
<year>2008</year>
<volume>108</volume>
<fpage>136</fpage>
<lpage>154</lpage>
<pub-id pub-id-type="doi">10.1016/j.cognition.2008.02.004</pub-id>
</citation>
<citation citation-type="display-unstructured">Mou W, Xiao C, McNamara TP (2008b) Reference directions and reference objects in spatial memory of a briefly viewed layout. Cognition 108:136–154
<pub-id pub-id-type="pmid">18342299</pub-id>
</citation>
</ref>
<ref id="CR16">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Mou</surname>
<given-names>W</given-names>
</name>
<name>
<surname>Zhang</surname>
<given-names>H</given-names>
</name>
<name>
<surname>McNamara</surname>
<given-names>TP</given-names>
</name>
</person-group>
<article-title>Novel-view scene recognition relies on identifying spatial reference directions</article-title>
<source>Cognition</source>
<year>2009</year>
<volume>111</volume>
<fpage>175</fpage>
<lpage>186</lpage>
<pub-id pub-id-type="doi">10.1016/j.cognition.2009.01.007</pub-id>
</citation>
<citation citation-type="display-unstructured">Mou W, Zhang H, McNamara TP (2009) Novel-view scene recognition relies on identifying spatial reference directions. Cognition 111:175–186
<pub-id pub-id-type="pmid">19281971</pub-id>
</citation>
</ref>
<ref id="CR17">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Newell</surname>
<given-names>FN</given-names>
</name>
<name>
<surname>Woods</surname>
<given-names>AT</given-names>
</name>
<name>
<surname>Mernagh</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Bülthoff</surname>
<given-names>HH</given-names>
</name>
</person-group>
<article-title>Visual, haptic and crossmodal recognition of scenes</article-title>
<source>Exp Brain Res</source>
<year>2005</year>
<volume>161</volume>
<fpage>233</fpage>
<lpage>242</lpage>
<pub-id pub-id-type="doi">10.1007/s00221-004-2067-y</pub-id>
</citation>
<citation citation-type="display-unstructured">Newell FN, Woods AT, Mernagh M, Bülthoff HH (2005) Visual, haptic and crossmodal recognition of scenes. Exp Brain Res 161:233–242
<pub-id pub-id-type="pmid">15490135</pub-id>
</citation>
</ref>
<ref id="CR18">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Pasqualotto</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Finucane</surname>
<given-names>CM</given-names>
</name>
<name>
<surname>Newell</surname>
<given-names>FN</given-names>
</name>
</person-group>
<article-title>Visual and haptic representations of scenes are updated with observer movement</article-title>
<source>Exp Brain Res</source>
<year>2005</year>
<volume>166</volume>
<fpage>481</fpage>
<lpage>488</lpage>
<pub-id pub-id-type="doi">10.1007/s00221-005-2388-5</pub-id>
</citation>
<citation citation-type="display-unstructured">Pasqualotto A, Finucane CM, Newell FN (2005) Visual and haptic representations of scenes are updated with observer movement. Exp Brain Res 166:481–488
<pub-id pub-id-type="pmid">16034564</pub-id>
</citation>
</ref>
<ref id="CR19">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Rieser</surname>
<given-names>JJ</given-names>
</name>
</person-group>
<article-title>Access to knowledge of spatial structure at novel points of observation</article-title>
<source>J Exp Psychol Learn Mem Cogn</source>
<year>1989</year>
<volume>15</volume>
<fpage>1157</fpage>
<lpage>1165</lpage>
</citation>
<citation citation-type="display-unstructured">Rieser JJ (1989) Access to knowledge of spatial structure at novel points of observation. J Exp Psychol Learn Mem Cogn 6:1157–1165
<pub-id pub-id-type="pmid">2530309</pub-id>
</citation>
</ref>
<ref id="CR20">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Shepard</surname>
<given-names>RN</given-names>
</name>
<name>
<surname>Metzler</surname>
<given-names>J</given-names>
</name>
</person-group>
<article-title>Mental rotation of three-dimensional objects</article-title>
<source>Science</source>
<year>1971</year>
<volume>171</volume>
<fpage>701</fpage>
<lpage>703</lpage>
<pub-id pub-id-type="doi">10.1126/science.171.3972.701</pub-id>
</citation>
<citation citation-type="display-unstructured">Shepard RN, Metzler J (1971) Mental rotation of three-dimensional objects. Science 171:701–703
<pub-id pub-id-type="pmid">5540314</pub-id>
</citation>
</ref>
<ref id="CR21">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Simons</surname>
<given-names>DJ</given-names>
</name>
<name>
<surname>Wang</surname>
<given-names>RF</given-names>
</name>
</person-group>
<article-title>Perceiving real-world viewpoint changes</article-title>
<source>Psychol Sci</source>
<year>1998</year>
<volume>9</volume>
<fpage>315</fpage>
<lpage>320</lpage>
<pub-id pub-id-type="doi">10.1111/1467-9280.00062</pub-id>
</citation>
<citation citation-type="display-unstructured">Simons DJ, Wang RF (1998) Perceiving real-world viewpoint changes. Psychol Sci 9:315–320 </citation>
</ref>
<ref id="CR22">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Simons</surname>
<given-names>DJ</given-names>
</name>
<name>
<surname>Wang</surname>
<given-names>RXF</given-names>
</name>
<name>
<surname>Roddenberry</surname>
<given-names>D</given-names>
</name>
</person-group>
<article-title>Object recognition is mediated by extraretinal information</article-title>
<source>Percept Psychophys</source>
<year>2002</year>
<volume>64</volume>
<fpage>521</fpage>
<lpage>530</lpage>
</citation>
<citation citation-type="display-unstructured">Simons DJ, Wang RXF, Roddenberry D (2002) Object recognition is mediated by extraretinal information. Percept Psychophys 64:521–530
<pub-id pub-id-type="pmid">12132755</pub-id>
</citation>
</ref>
<ref id="CR23">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wang</surname>
<given-names>RXF</given-names>
</name>
<name>
<surname>Simons</surname>
<given-names>DJ</given-names>
</name>
</person-group>
<article-title>Active and passive scene recognition across views</article-title>
<source>Cognition</source>
<year>1999</year>
<volume>70</volume>
<fpage>191</fpage>
<lpage>210</lpage>
<pub-id pub-id-type="doi">10.1016/S0010-0277(99)00012-8</pub-id>
</citation>
<citation citation-type="display-unstructured">Wang RXF, Simons DJ (1999) Active and passive scene recognition across views. Cognition 70:191–210
<pub-id pub-id-type="pmid">10349763</pub-id>
</citation>
</ref>
<ref id="CR24">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wang</surname>
<given-names>RF</given-names>
</name>
<name>
<surname>Spelke</surname>
<given-names>ES</given-names>
</name>
</person-group>
<article-title>Updating egocentric representations in human navigation</article-title>
<source>Cognition</source>
<year>2000</year>
<volume>77</volume>
<fpage>215</fpage>
<lpage>250</lpage>
<pub-id pub-id-type="doi">10.1016/S0010-0277(00)00105-0</pub-id>
</citation>
<citation citation-type="display-unstructured">Wang RF, Spelke ES (2000) Updating egocentric representations in human navigation. Cognition 77:215–250
<pub-id pub-id-type="pmid">11018510</pub-id>
</citation>
</ref>
<ref id="CR25">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wang</surname>
<given-names>RF</given-names>
</name>
<name>
<surname>Spelke</surname>
<given-names>ES</given-names>
</name>
</person-group>
<article-title>Human spatial representation: insights from animals</article-title>
<source>Trends Cogn Sci</source>
<year>2002</year>
<volume>6</volume>
<fpage>376</fpage>
<lpage>382</lpage>
<pub-id pub-id-type="doi">10.1016/S1364-6613(02)01961-7</pub-id>
</citation>
<citation citation-type="display-unstructured">Wang RF, Spelke ES (2002) Human spatial representation: insights from animals. Trends Cogn Sci 6:376–382
<pub-id pub-id-type="pmid">12200179</pub-id>
</citation>
</ref>
<ref id="CR26">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wraga</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Creem</surname>
<given-names>SH</given-names>
</name>
<name>
<surname>Proffitt</surname>
<given-names>DR</given-names>
</name>
</person-group>
<article-title>The influence of spatial reference frames on imagined object- and viewer rotations</article-title>
<source>Acta Psychol (Amst)</source>
<year>1999</year>
<volume>102</volume>
<fpage>247</fpage>
<lpage>264</lpage>
<pub-id pub-id-type="doi">10.1016/S0001-6918(98)00057-2</pub-id>
</citation>
<citation citation-type="display-unstructured">Wraga M, Creem SH, Proffitt DR (1999) The influence of spatial reference frames on imagined object- and viewer rotations. Acta Psychol (Amst) 102:247–264
<pub-id pub-id-type="pmid">10504883</pub-id>
</citation>
</ref>
<ref id="CR27">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wraga</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Creem</surname>
<given-names>SH</given-names>
</name>
<name>
<surname>Proffitt</surname>
<given-names>DR</given-names>
</name>
</person-group>
<article-title>Updating displays after imagined object and viewer rotations</article-title>
<source>J Exp Psychol Learn Mem Cogn</source>
<year>2000</year>
<volume>26</volume>
<fpage>151</fpage>
<lpage>168</lpage>
<pub-id pub-id-type="doi">10.1037/0278-7393.26.1.151</pub-id>
</citation>
<citation citation-type="display-unstructured">Wraga M, Creem SH, Proffitt DR (2000) Updating displays after imagined object and viewer rotations. J Exp Psychol Learn Mem Cogn 26:151–168
<pub-id pub-id-type="pmid">10682295</pub-id>
</citation>
</ref>
<ref id="CR28">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wraga</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Creem-Regehr</surname>
<given-names>SH</given-names>
</name>
<name>
<surname>Proffitt</surname>
<given-names>DR</given-names>
</name>
</person-group>
<article-title>Spatial updating of virtual displays during self- and display rotation</article-title>
<source>Mem Cognit</source>
<year>2004</year>
<volume>32</volume>
<fpage>399</fpage>
<lpage>415</lpage>
</citation>
<citation citation-type="display-unstructured">Wraga M, Creem-Regehr SH, Proffitt DR (2004) Spatial updating of virtual displays during self- and display rotation. Mem Cognit 32:399–415
<pub-id pub-id-type="pmid">15285124</pub-id>
</citation>
</ref>
</ref-list>
</back>
</pmc>
<affiliations>
<list>
<country>
<li>Allemagne</li>
<li>France</li>
</country>
<region>
<li>Bade-Wurtemberg</li>
<li>District de Tübingen</li>
</region>
<settlement>
<li>Paris</li>
<li>Tübingen</li>
</settlement>
</list>
<tree>
<country name="Allemagne">
<region name="Bade-Wurtemberg">
<name sortKey="Vidal, Manuel" sort="Vidal, Manuel" uniqKey="Vidal M" first="Manuel" last="Vidal">Manuel Vidal</name>
</region>
<name sortKey="Bulthoff, Heinrich H" sort="Bulthoff, Heinrich H" uniqKey="Bulthoff H" first="Heinrich H." last="Bülthoff">Heinrich H. Bülthoff</name>
</country>
<country name="France">
<noRegion>
<name sortKey="Vidal, Manuel" sort="Vidal, Manuel" uniqKey="Vidal M" first="Manuel" last="Vidal">Manuel Vidal</name>
</noRegion>
<name sortKey="Lehmann, Alexandre" sort="Lehmann, Alexandre" uniqKey="Lehmann A" first="Alexandre" last="Lehmann">Alexandre Lehmann</name>
</country>
</tree>
</affiliations>
</record>

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/HapticV1/Data/Ncbi/Merge
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 001175 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/Ncbi/Merge/biblio.hfd -nk 001175 | SxmlIndent | more

To add a link to this page within the Wicri network

{{Explor lien
   |wiki=    Ticri/CIDE
   |area=    HapticV1
   |flux=    Ncbi
   |étape=   Merge
   |type=    RBID
   |clé=     PMC:2708330
   |texte=   A multisensory approach to spatial updating: the case of mental rotations
}}

To generate wiki pages

HfdIndexSelect -h $EXPLOR_AREA/Data/Ncbi/Merge/RBID.i   -Sk "pubmed:19544058" \
       | HfdSelect -Kh $EXPLOR_AREA/Data/Ncbi/Merge/biblio.hfd   \
       | NlmPubMed2Wicri -a HapticV1 

Wicri

This area was generated with Dilib version V0.6.23.
Data generation: Mon Jun 13 01:09:46 2016. Site generation: Wed Mar 6 09:54:07 2024