Exploration server on haptic devices

Warning: this site is under development!
Warning: this site was generated automatically from raw corpora.
The information has therefore not been validated.

Cross-modal visuo-haptic mental rotation: comparing objects between senses

Internal identifier: 001F29 (Pmc/Checkpoint); previous: 001F28; next: 001F30


Authors: Robert Volcic [Germany]; Maarten W. A. Wijntjes [Netherlands]; Erik C. Kool [Netherlands]; Astrid M. L. Kappers [Netherlands]

Source:

RBID : PMC:2875473

Abstract

The simple experience of a coherent percept while looking and touching an object conceals an intriguing issue: different senses encode and compare information in different modality-specific reference frames. We addressed this problem in a cross-modal visuo-haptic mental rotation task. Two objects in various orientations were presented at the same spatial location, one visually and one haptically. Participants had to identify the objects as same or different. The relative angle between viewing direction and hand orientation was manipulated (Aligned versus Orthogonal). In an additional condition (Delay), a temporal delay was introduced between haptic and visual explorations while the viewing direction and the hand orientation were orthogonal to each other. Whereas the phase shift of the response time function was close to 0° in the Aligned condition, we observed a consistent phase shift in the hand’s direction in the Orthogonal condition. A phase shift, although reduced, was also found in the Delay condition. Counterintuitively, these results mean that seen and touched objects do not need to be physically aligned for optimal performance to occur. The present results suggest that the information about an object is acquired in separate visual and hand-centered reference frames, which directly influence each other and which combine in a time-dependent manner.


Url:
DOI: 10.1007/s00221-010-2262-y
PubMed: 20437169
PubMed Central: 2875473


Affiliations:


Links to previous steps (curation, corpus...)


Links to Exploration step

PMC:2875473

The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">Cross-modal visuo-haptic mental rotation: comparing objects between senses</title>
<author>
<name sortKey="Volcic, Robert" sort="Volcic, Robert" uniqKey="Volcic R" first="Robert" last="Volcic">Robert Volcic</name>
<affiliation wicri:level="3">
<nlm:aff id="Aff1">Psychologisches Institut II, Westfälische Wilhelms-Universität Münster, Fliednerstr. 21, 48149 Münster, Germany</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea>Psychologisches Institut II, Westfälische Wilhelms-Universität Münster, Fliednerstr. 21, 48149 Münster</wicri:regionArea>
<placeName>
<region type="land" nuts="1">Rhénanie-du-Nord-Westphalie</region>
<region type="district" nuts="2">District de Münster</region>
<settlement type="city">Münster</settlement>
</placeName>
</affiliation>
</author>
<author>
<name sortKey="Wijntjes, Maarten W A" sort="Wijntjes, Maarten W A" uniqKey="Wijntjes M" first="Maarten W. A." last="Wijntjes">Maarten W. A. Wijntjes</name>
<affiliation wicri:level="1">
<nlm:aff id="Aff2">Faculty of Industrial Design Engineering, Delft University of Technology, Delft, The Netherlands</nlm:aff>
<country xml:lang="fr">Pays-Bas</country>
<wicri:regionArea>Faculty of Industrial Design Engineering, Delft University of Technology, Delft</wicri:regionArea>
<wicri:noRegion>Delft</wicri:noRegion>
</affiliation>
</author>
<author>
<name sortKey="Kool, Erik C" sort="Kool, Erik C" uniqKey="Kool E" first="Erik C." last="Kool">Erik C. Kool</name>
<affiliation wicri:level="4">
<nlm:aff id="Aff3">Physics of Man, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands</nlm:aff>
<country xml:lang="fr">Pays-Bas</country>
<wicri:regionArea>Physics of Man, Helmholtz Institute, Utrecht University, Utrecht</wicri:regionArea>
<placeName>
<settlement type="city">Utrecht</settlement>
<region nuts="2" type="province">Utrecht (province)</region>
</placeName>
<orgName type="university">Université d'Utrecht</orgName>
</affiliation>
</author>
<author>
<name sortKey="Kappers, Astrid M L" sort="Kappers, Astrid M L" uniqKey="Kappers A" first="Astrid M. L." last="Kappers">Astrid M. L. Kappers</name>
<affiliation wicri:level="4">
<nlm:aff id="Aff3">Physics of Man, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands</nlm:aff>
<country xml:lang="fr">Pays-Bas</country>
<wicri:regionArea>Physics of Man, Helmholtz Institute, Utrecht University, Utrecht</wicri:regionArea>
<placeName>
<settlement type="city">Utrecht</settlement>
<region nuts="2" type="province">Utrecht (province)</region>
</placeName>
<orgName type="university">Université d'Utrecht</orgName>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PMC</idno>
<idno type="pmid">20437169</idno>
<idno type="pmc">2875473</idno>
<idno type="url">http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2875473</idno>
<idno type="RBID">PMC:2875473</idno>
<idno type="doi">10.1007/s00221-010-2262-y</idno>
<date when="2010">2010</date>
<idno type="wicri:Area/Pmc/Corpus">000801</idno>
<idno type="wicri:Area/Pmc/Curation">000801</idno>
<idno type="wicri:Area/Pmc/Checkpoint">001F29</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a" type="main">Cross-modal visuo-haptic mental rotation: comparing objects between senses</title>
<author>
<name sortKey="Volcic, Robert" sort="Volcic, Robert" uniqKey="Volcic R" first="Robert" last="Volcic">Robert Volcic</name>
<affiliation wicri:level="3">
<nlm:aff id="Aff1">Psychologisches Institut II, Westfälische Wilhelms-Universität Münster, Fliednerstr. 21, 48149 Münster, Germany</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea>Psychologisches Institut II, Westfälische Wilhelms-Universität Münster, Fliednerstr. 21, 48149 Münster</wicri:regionArea>
<placeName>
<region type="land" nuts="1">Rhénanie-du-Nord-Westphalie</region>
<region type="district" nuts="2">District de Münster</region>
<settlement type="city">Münster</settlement>
</placeName>
</affiliation>
</author>
<author>
<name sortKey="Wijntjes, Maarten W A" sort="Wijntjes, Maarten W A" uniqKey="Wijntjes M" first="Maarten W. A." last="Wijntjes">Maarten W. A. Wijntjes</name>
<affiliation wicri:level="1">
<nlm:aff id="Aff2">Faculty of Industrial Design Engineering, Delft University of Technology, Delft, The Netherlands</nlm:aff>
<country xml:lang="fr">Pays-Bas</country>
<wicri:regionArea>Faculty of Industrial Design Engineering, Delft University of Technology, Delft</wicri:regionArea>
<wicri:noRegion>Delft</wicri:noRegion>
</affiliation>
</author>
<author>
<name sortKey="Kool, Erik C" sort="Kool, Erik C" uniqKey="Kool E" first="Erik C." last="Kool">Erik C. Kool</name>
<affiliation wicri:level="4">
<nlm:aff id="Aff3">Physics of Man, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands</nlm:aff>
<country xml:lang="fr">Pays-Bas</country>
<wicri:regionArea>Physics of Man, Helmholtz Institute, Utrecht University, Utrecht</wicri:regionArea>
<placeName>
<settlement type="city">Utrecht</settlement>
<region nuts="2" type="province">Utrecht (province)</region>
</placeName>
<orgName type="university">Université d'Utrecht</orgName>
</affiliation>
</author>
<author>
<name sortKey="Kappers, Astrid M L" sort="Kappers, Astrid M L" uniqKey="Kappers A" first="Astrid M. L." last="Kappers">Astrid M. L. Kappers</name>
<affiliation wicri:level="4">
<nlm:aff id="Aff3">Physics of Man, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands</nlm:aff>
<country xml:lang="fr">Pays-Bas</country>
<wicri:regionArea>Physics of Man, Helmholtz Institute, Utrecht University, Utrecht</wicri:regionArea>
<placeName>
<settlement type="city">Utrecht</settlement>
<region nuts="2" type="province">Utrecht (province)</region>
</placeName>
<orgName type="university">Université d'Utrecht</orgName>
</affiliation>
</author>
</analytic>
<series>
<title level="j">Experimental Brain Research. Experimentelle Hirnforschung. Experimentation Cerebrale</title>
<idno type="ISSN">0014-4819</idno>
<idno type="eISSN">1432-1106</idno>
<imprint>
<date when="2010">2010</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass></textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">
<p>The simple experience of a coherent percept while looking and touching an object conceals an intriguing issue: different senses encode and compare information in different modality-specific reference frames. We addressed this problem in a cross-modal visuo-haptic mental rotation task. Two objects in various orientations were presented at the same spatial location, one visually and one haptically. Participants had to identify the objects as same or different. The relative angle between viewing direction and hand orientation was manipulated (Aligned versus Orthogonal). In an additional condition (Delay), a temporal delay was introduced between haptic and visual explorations while the viewing direction and the hand orientation were orthogonal to each other. Whereas the phase shift of the response time function was close to 0° in the Aligned condition, we observed a consistent phase shift in the hand’s direction in the Orthogonal condition. A phase shift, although reduced, was also found in the Delay condition. Counterintuitively, these results mean that seen and touched objects do not need to be physically aligned for optimal performance to occur. The present results suggest that the information about an object is acquired in separate visual and hand-centered reference frames, which directly influence each other and which combine in a time-dependent manner.</p>
</div>
</front>
<back>
<div1 type="bibliography">
<listBibl>
<biblStruct>
<analytic>
<author>
<name sortKey="Brainard, Dh" uniqKey="Brainard D">DH Brainard</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bridgeman, B" uniqKey="Bridgeman B">B Bridgeman</name>
</author>
<author>
<name sortKey="Peery, S" uniqKey="Peery S">S Peery</name>
</author>
<author>
<name sortKey="Anand, S" uniqKey="Anand S">S Anand</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Carpenter, Pa" uniqKey="Carpenter P">PA Carpenter</name>
</author>
<author>
<name sortKey="Eisenberg, P" uniqKey="Eisenberg P">P Eisenberg</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Carrozzo, M" uniqKey="Carrozzo M">M Carrozzo</name>
</author>
<author>
<name sortKey="Stratta, F" uniqKey="Stratta F">F Stratta</name>
</author>
<author>
<name sortKey="Mcintyre, J" uniqKey="Mcintyre J">J McIntyre</name>
</author>
<author>
<name sortKey="Lacquaniti, F" uniqKey="Lacquaniti F">F Lacquaniti</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Corballis, Mc" uniqKey="Corballis M">MC Corballis</name>
</author>
<author>
<name sortKey="Zbrodoff, J" uniqKey="Zbrodoff J">J Zbrodoff</name>
</author>
<author>
<name sortKey="Roldan, C" uniqKey="Roldan C">C Roldan</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Corballis, Mc" uniqKey="Corballis M">MC Corballis</name>
</author>
<author>
<name sortKey="Nagourney, Ba" uniqKey="Nagourney B">BA Nagourney</name>
</author>
<author>
<name sortKey="Shetzer, Li" uniqKey="Shetzer L">LI Shetzer</name>
</author>
<author>
<name sortKey="Stefanatos, G" uniqKey="Stefanatos G">G Stefanatos</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ernst, Mo" uniqKey="Ernst M">MO Ernst</name>
</author>
<author>
<name sortKey="Lange, C" uniqKey="Lange C">C Lange</name>
</author>
<author>
<name sortKey="Newell, Fn" uniqKey="Newell F">FN Newell</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gauthier, I" uniqKey="Gauthier I">I Gauthier</name>
</author>
<author>
<name sortKey="Hayward, Wg" uniqKey="Hayward W">WG Hayward</name>
</author>
<author>
<name sortKey="Tarr, Mj" uniqKey="Tarr M">MJ Tarr</name>
</author>
<author>
<name sortKey="Anderson, Aw" uniqKey="Anderson A">AW Anderson</name>
</author>
<author>
<name sortKey="Skudlarski, P" uniqKey="Skudlarski P">P Skudlarski</name>
</author>
<author>
<name sortKey="Gore, Jc" uniqKey="Gore J">JC Gore</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gibson, Jj" uniqKey="Gibson J">JJ Gibson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gibson, Jj" uniqKey="Gibson J">JJ Gibson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lacey, S" uniqKey="Lacey S">S Lacey</name>
</author>
<author>
<name sortKey="Peters, A" uniqKey="Peters A">A Peters</name>
</author>
<author>
<name sortKey="Sathian, K" uniqKey="Sathian K">K Sathian</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lacey, S" uniqKey="Lacey S">S Lacey</name>
</author>
<author>
<name sortKey="Pappas, M" uniqKey="Pappas M">M Pappas</name>
</author>
<author>
<name sortKey="Kreps, A" uniqKey="Kreps A">A Kreps</name>
</author>
<author>
<name sortKey="Lee, K" uniqKey="Lee K">K Lee</name>
</author>
<author>
<name sortKey="Sathian, K" uniqKey="Sathian K">K Sathian</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lawson, R" uniqKey="Lawson R">R Lawson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Milner, Ad" uniqKey="Milner A">AD Milner</name>
</author>
<author>
<name sortKey="Paulignan, Y" uniqKey="Paulignan Y">Y Paulignan</name>
</author>
<author>
<name sortKey="Dijkerman, Hc" uniqKey="Dijkerman H">HC Dijkerman</name>
</author>
<author>
<name sortKey="Michel, F" uniqKey="Michel F">F Michel</name>
</author>
<author>
<name sortKey="Jeannerod, M" uniqKey="Jeannerod M">M Jeannerod</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Newell, Fn" uniqKey="Newell F">FN Newell</name>
</author>
<author>
<name sortKey="Ernst, Mo" uniqKey="Ernst M">MO Ernst</name>
</author>
<author>
<name sortKey="Tjan, Bs" uniqKey="Tjan B">BS Tjan</name>
</author>
<author>
<name sortKey="Bulthoff, Hh" uniqKey="Bulthoff H">HH Bülthoff</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Norman, Jf" uniqKey="Norman J">JF Norman</name>
</author>
<author>
<name sortKey="Norman, Hf" uniqKey="Norman H">HF Norman</name>
</author>
<author>
<name sortKey="Clayton, Am" uniqKey="Clayton A">AM Clayton</name>
</author>
<author>
<name sortKey="Lianekhammy, J" uniqKey="Lianekhammy J">J Lianekhammy</name>
</author>
<author>
<name sortKey="Zielke, G" uniqKey="Zielke G">G Zielke</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Norman, Jf" uniqKey="Norman J">JF Norman</name>
</author>
<author>
<name sortKey="Clayton, Am" uniqKey="Clayton A">AM Clayton</name>
</author>
<author>
<name sortKey="Norman, Hf" uniqKey="Norman H">HF Norman</name>
</author>
<author>
<name sortKey="Crabtree, Ce" uniqKey="Crabtree C">CE Crabtree</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Pelli, Dg" uniqKey="Pelli D">DG Pelli</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Phillips, F" uniqKey="Phillips F">F Phillips</name>
</author>
<author>
<name sortKey="Egan, Ejl" uniqKey="Egan E">EJL Egan</name>
</author>
<author>
<name sortKey="Perry, Bn" uniqKey="Perry B">BN Perry</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Prather, Sc" uniqKey="Prather S">SC Prather</name>
</author>
<author>
<name sortKey="Sathian, K" uniqKey="Sathian K">K Sathian</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Rossetti, Y" uniqKey="Rossetti Y">Y Rossetti</name>
</author>
<author>
<name sortKey="Gaunet, F" uniqKey="Gaunet F">F Gaunet</name>
</author>
<author>
<name sortKey="Thinus Blanc, C" uniqKey="Thinus Blanc C">C Thinus-Blanc</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Shepard, Rn" uniqKey="Shepard R">RN Shepard</name>
</author>
<author>
<name sortKey="Metzler, J" uniqKey="Metzler J">J Metzler</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Volcic, R" uniqKey="Volcic R">R Volcic</name>
</author>
<author>
<name sortKey="Wijntjes, Mwa" uniqKey="Wijntjes M">MWA Wijntjes</name>
</author>
<author>
<name sortKey="Kappers, Aml" uniqKey="Kappers A">AML Kappers</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Zuidhoek, S" uniqKey="Zuidhoek S">S Zuidhoek</name>
</author>
<author>
<name sortKey="Kappers, Aml" uniqKey="Kappers A">AML Kappers</name>
</author>
<author>
<name sortKey="Lubbe, Rhj" uniqKey="Lubbe R">RHJ Lubbe</name>
</author>
<author>
<name sortKey="Postma, A" uniqKey="Postma A">A Postma</name>
</author>
</analytic>
</biblStruct>
</listBibl>
</div1>
</back>
</TEI>
<pmc article-type="research-article">
<pmc-dir>properties open_access</pmc-dir>
<front>
<journal-meta>
<journal-id journal-id-type="nlm-ta">Exp Brain Res</journal-id>
<journal-title-group>
<journal-title>Experimental Brain Research. Experimentelle Hirnforschung. Experimentation Cerebrale</journal-title>
</journal-title-group>
<issn pub-type="ppub">0014-4819</issn>
<issn pub-type="epub">1432-1106</issn>
<publisher>
<publisher-name>Springer-Verlag</publisher-name>
<publisher-loc>Berlin/Heidelberg</publisher-loc>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="pmid">20437169</article-id>
<article-id pub-id-type="pmc">2875473</article-id>
<article-id pub-id-type="publisher-id">2262</article-id>
<article-id pub-id-type="doi">10.1007/s00221-010-2262-y</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Research Note</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>Cross-modal visuo-haptic mental rotation: comparing objects between senses</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" corresp="yes">
<name>
<surname>Volcic</surname>
<given-names>Robert</given-names>
</name>
<address>
<phone>+49-251-8334177</phone>
<email>volcic@uni-muenster.de</email>
</address>
<xref ref-type="aff" rid="Aff1">1</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Wijntjes</surname>
<given-names>Maarten W. A.</given-names>
</name>
<xref ref-type="aff" rid="Aff2">2</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Kool</surname>
<given-names>Erik C.</given-names>
</name>
<xref ref-type="aff" rid="Aff3">3</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Kappers</surname>
<given-names>Astrid M. L.</given-names>
</name>
<xref ref-type="aff" rid="Aff3">3</xref>
</contrib>
<aff id="Aff1">
<label>1</label>
Psychologisches Institut II, Westfälische Wilhelms-Universität Münster, Fliednerstr. 21, 48149 Münster, Germany</aff>
<aff id="Aff2">
<label>2</label>
Faculty of Industrial Design Engineering, Delft University of Technology, Delft, The Netherlands</aff>
<aff id="Aff3">
<label>3</label>
Physics of Man, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands</aff>
</contrib-group>
<pub-date pub-type="epub">
<day>1</day>
<month>5</month>
<year>2010</year>
</pub-date>
<pub-date pub-type="pmc-release">
<day>1</day>
<month>5</month>
<year>2010</year>
</pub-date>
<pub-date pub-type="ppub">
<month>6</month>
<year>2010</year>
</pub-date>
<volume>203</volume>
<issue>3</issue>
<fpage>621</fpage>
<lpage>627</lpage>
<history>
<date date-type="received">
<day>10</day>
<month>2</month>
<year>2010</year>
</date>
<date date-type="accepted">
<day>9</day>
<month>4</month>
<year>2010</year>
</date>
</history>
<permissions>
<copyright-statement>© The Author(s) 2010</copyright-statement>
</permissions>
<abstract>
<p>The simple experience of a coherent percept while looking and touching an object conceals an intriguing issue: different senses encode and compare information in different modality-specific reference frames. We addressed this problem in a cross-modal visuo-haptic mental rotation task. Two objects in various orientations were presented at the same spatial location, one visually and one haptically. Participants had to identify the objects as same or different. The relative angle between viewing direction and hand orientation was manipulated (Aligned versus Orthogonal). In an additional condition (Delay), a temporal delay was introduced between haptic and visual explorations while the viewing direction and the hand orientation were orthogonal to each other. Whereas the phase shift of the response time function was close to 0° in the Aligned condition, we observed a consistent phase shift in the hand’s direction in the Orthogonal condition. A phase shift, although reduced, was also found in the Delay condition. Counterintuitively, these results mean that seen and touched objects do not need to be physically aligned for optimal performance to occur. The present results suggest that the information about an object is acquired in separate visual and hand-centered reference frames, which directly influence each other and which combine in a time-dependent manner.</p>
</abstract>
<kwd-group>
<title>Keywords</title>
<kwd>Cross-modal perception</kwd>
<kwd>Touch</kwd>
<kwd>Vision</kwd>
<kwd>Frames of reference</kwd>
<kwd>Mental rotation</kwd>
<kwd>Hand</kwd>
</kwd-group>
<custom-meta-group>
<custom-meta>
<meta-name>issue-copyright-statement</meta-name>
<meta-value>© Springer-Verlag 2010</meta-value>
</custom-meta>
</custom-meta-group>
</article-meta>
</front>
<body>
<sec id="Sec1">
<title>Introduction</title>
<p>The integration of multi-modal information forms our internal representation of the sensory world. Whenever we handle an object, we effortlessly achieve a coherent percept based on the different visual and haptic sources. We see the object we are touching, and we touch the object we are looking at. This seemingly simple act of perceiving an object conceals, however, some intriguing issues. Both the visual and haptic modalities can encode coarse information about an object, e.g. its orientation, size and gross shape; however, each modality first performs the encoding in its own reference frame, vision retinotopically and haptics somatotopically. Only at a later stage is this information shared across modalities by way of translation or comparison. How information about objects is shared across modalities is the topic of the current paper.</p>
<p>Humans can effectively compare the shape of 3D objects across the modalities of vision and touch, although cross-modal performance is usually poorer than unimodal performance (Gibson
<xref ref-type="bibr" rid="CR9">1962</xref>
,
<xref ref-type="bibr" rid="CR10">1963</xref>
; Norman et al.
<xref ref-type="bibr" rid="CR16">2004</xref>
,
<xref ref-type="bibr" rid="CR17">2008</xref>
; Phillips et al.
<xref ref-type="bibr" rid="CR19">2009</xref>
). These studies suggest that the two modalities either share a common representation or maintain independent object representations in similar formats that allow effective comparison. Several studies have investigated the effect of orientation on the cross-modal identification of 3D objects, assessing recognition performance across modalities with different experimental methods. In an old/new recognition task and in a forced-choice object recognition task, objects were learned in one modality and recognition was then tested in the other (Newell et al.
<xref ref-type="bibr" rid="CR15">2001</xref>
; Lacey et al.
<xref ref-type="bibr" rid="CR11">2007</xref>
,
<xref ref-type="bibr" rid="CR12">2009</xref>
; Ernst et al.
<xref ref-type="bibr" rid="CR7">2007</xref>
). On the other hand, in a sequential matching task, an object presented in one modality was shortly afterwards compared with a test object through the other modality (Lawson
<xref ref-type="bibr" rid="CR13">2009</xref>
). In all these methods, the test objects were presented either in the same or in a different orientation. In some cases, independently of the experimental method, recognizing objects across modalities incurred an additional performance cost (Newell et al.
<xref ref-type="bibr" rid="CR15">2001</xref>
; Ernst et al.
<xref ref-type="bibr" rid="CR7">2007</xref>
; Lawson
<xref ref-type="bibr" rid="CR13">2009</xref>
), whereas in other cases performance was unaffected by the change in orientation (Lacey et al.
<xref ref-type="bibr" rid="CR11">2007</xref>
,
<xref ref-type="bibr" rid="CR12">2009</xref>
; Lawson
<xref ref-type="bibr" rid="CR13">2009</xref>
). These studies support the idea that object constancy, i.e. the recognition of objects despite changes in size, position and orientation, can be achieved quickly and accurately in both within-modal and cross-modal object recognition, although an additional cost often arises when object representations are compared across modalities. This cost is presumably attributable to object recognition relying on features represented in viewpoint- and modality-specific frames of reference.</p>
<p>In all of the foregoing object recognition experiments, relatively long temporal intervals occurred between the presentations of the first and the second stimulus (or set of stimuli). An alternative method for studying the sharing of information between the visual and haptic modalities, one that minimizes this temporal interval, is a matching task in which the objects are compared simultaneously. A widely used task with these properties is the handedness recognition task employed in most mental rotation studies since its introduction by Shepard and Metzler (
<xref ref-type="bibr" rid="CR22">1971</xref>
). Two objects of the same shape but in different orientations are compared, and the participant has to determine whether the two objects are mirrored versions of each other or identical except for their orientation. In the simplest case, response times are fastest when the objects are physically aligned and increase linearly with the angular misalignment between them. Physical alignment of the objects does not, however, always yield the fastest response times: the shape, and specifically the phase shift, of the response time function depends on the reference frame in which the objects are encoded. In vision, retinal and gravitational encodings were contrasted by having participants perceive the stimuli with the head either upright or tilted (Corballis et al.
<xref ref-type="bibr" rid="CR5">1976</xref>
,
<xref ref-type="bibr" rid="CR6">1978</xref>
). The response time function shifted in accordance with the participants’ head tilt. The phase shifts were, however, only partial: the stimuli were encoded in a reference frame intermediate between a retinally defined egocentric reference frame and an allocentric reference frame.</p>
<p>Interactions between reference frames have also been explored in the haptic domain. In unimanual mental rotation studies, participants were presented with a single letter, in normal or mirror-image form, in various orientations (Carpenter and Eisenberg
<xref ref-type="bibr" rid="CR3">1978</xref>
; Prather and Sathian
<xref ref-type="bibr" rid="CR20">2002</xref>
). The orientation of the hand exploring the stimuli was varied and, consequently, the response time function partially shifted. In these studies, haptic information was compared with an internal representation of the stimuli retrieved from memory. More recently, a haptic mental rotation study was conducted in which two objects were explored separately but simultaneously by the two hands, so that haptic information was compared directly (Volcic et al.
<xref ref-type="bibr" rid="CR23">2009</xref>
). Not surprisingly, with the hands aligned, the fastest response times occurred when the objects were also physically aligned, i.e. the phase shift was zero. However, when the hands were held in either a convergent or a divergent orientation, the response time functions shifted in opposite directions. The condition-dependent directions and extents of the phase shifts suggested an interplay of multiple reference frames, in which the hand-centered reference frame plays the central role.</p>
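The triangle-wave shape of the response time function and its phase shift can be sketched numerically. The following is an illustrative model only: the baseline and slope values are hypothetical and not taken from the paper.

```python
import numpy as np

def rt_triangle(theta_deg, phi_deg=0.0, baseline=1.0, slope=0.005):
    """Illustrative triangle-wave response time model (in seconds).

    theta_deg : angular difference between the two objects (degrees)
    phi_deg   : phase shift of the function (degrees); phi = 0 means
                the fastest responses occur at physical alignment
    baseline  : hypothetical minimum response time (s)
    slope     : hypothetical mental rotation cost per degree (s/deg)
    """
    # Angular distance to the nearest multiple of 360 degrees after
    # shifting by phi, folded into [0, 180] (rotation is symmetric).
    d = np.abs((np.asarray(theta_deg, dtype=float) - phi_deg + 180.0) % 360.0 - 180.0)
    return baseline + slope * d
```

With `phi_deg=0` the minimum lies at `theta_deg=0` (objects physically aligned), as in the Aligned condition; with `phi_deg=90` the whole function shifts so the minimum lies at a 90° misalignment, as reported for the Orthogonal condition.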
<p>A natural next step is to investigate the interaction of reference frames across the visual and haptic modalities. In this context, fundamental questions arise. Does one modality take over from the other, providing a single reference frame in which both visual and haptic information are compared? Or do multiple reference frames coexist and interact with each other?</p>
<p>To address these questions, we used a visuo-haptic cross-modal mental rotation task. One of the objects was viewed and the other was haptically explored. Moreover, we designed the setup such that both objects were perceptually located in exactly the same spatial position (see Fig. 
<xref rid="Fig1" ref-type="fig">1</xref>
b). The logic behind the present experiment was simple: varying the hand orientation while keeping the viewing direction constant allows the dissociation of the visual reference frame and the hand-centered reference frame (see Fig. 
<xref rid="Fig1" ref-type="fig">1</xref>
a). In the Aligned condition, viewing direction and hand orientation were aligned; in the Orthogonal condition, the hand orientation was orthogonal to the viewing direction. This manipulation put the visual reference frame and the hand-centered reference frame in misalignment with each other. In an additional condition, the Delay condition, a temporal delay was introduced between exploration of the haptic object and display of the visual one, while viewing direction and hand orientation remained orthogonal to each other. The latter condition was of interest because several studies suggest that different frames of reference may dominate at different time intervals (Bridgeman et al.
<xref ref-type="bibr" rid="CR2">1997</xref>
; Carrozzo et al.
<xref ref-type="bibr" rid="CR4">2002</xref>
; Milner et al.
<xref ref-type="bibr" rid="CR14">1999</xref>
; Rossetti et al.
<xref ref-type="bibr" rid="CR21">1996</xref>
; Zuidhoek et al.
<xref ref-type="bibr" rid="CR24">2003</xref>
). Typically, egocentric reference frames prevail at short time intervals, whereas the allocentric frame of reference is strengthened at longer intervals (in the range of 5–10 s). The temporal delay in the Delay condition should thus reduce any influence of the misalignment of the hand-centered reference frame with respect to the visual reference frame.
<fig id="Fig1">
<label>Fig. 1</label>
<caption>
<p>
<bold>a</bold>
Experimental conditions of the cross-modal visuo-haptic mental rotation task. In the Aligned condition, viewing direction and hand orientation were aligned. In the Orthogonal condition, the hand was rotated counterclockwise by 90°. In the Delay condition, a 5 s temporal delay was introduced between the haptic and visual exploration of the stimuli.
<bold>b</bold>
Schematic view of the experimental setup. The participant looked in the mirror, which was positioned midway between the table and the projection screen. The visual stimulus displayed on the projection screen was seen via the mirror as if it were located on the table exactly in the same location as the haptic stimulus. Both arms were occluded. The right hand explored the haptic stimulus, whereas the left hand controlled the keyboard below the table.
<bold>c</bold>
Forms of the triangle wave function when the phase shift is
<inline-formula id="IEq7">
<alternatives>
<tex-math id="M1">\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$\upphi = 0^{\circ}$$\end{document}</tex-math>
<inline-graphic xlink:href="221_2010_2262_Article_IEq7.gif"></inline-graphic>
</alternatives>
</inline-formula>
or
<inline-formula id="IEq8">
<alternatives>
<tex-math id="M2">\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$\upphi = 90^{\circ}.$$\end{document}</tex-math>
<inline-graphic xlink:href="221_2010_2262_Article_IEq8.gif"></inline-graphic>
</alternatives>
</inline-formula>
Depending on the experimental condition, different hypotheses (see main text) predict that the function might take these forms or a form with an intermediate phase shift
<inline-formula id="IEq9">
<alternatives>
<tex-math id="M3">\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$\left(0^{\circ} < \upphi < 90^{\circ}\right)$$\end{document}</tex-math>
<inline-graphic xlink:href="221_2010_2262_Article_IEq9.gif"></inline-graphic>
</alternatives>
</inline-formula>
</p>
</caption>
<graphic xlink:href="221_2010_2262_Fig1_HTML" id="MO2"></graphic>
</fig>
</p>
<p>Hypotheses about object encoding that are based on a single reference frame make straightforward predictions. If all the spatial information is encoded in a single visual reference frame, in a single haptic hand-centered reference frame, or in an allocentric reference frame, then no deviation from a zero phase shift should occur in any of the conditions. The fastest responses would occur when the two objects have the same orientation with respect to the reference frame used, and in all conditions the triangle wave function would take the form depicted in Fig. 
<xref rid="Fig1" ref-type="fig">1</xref>
c,
<inline-formula id="IEq1">
<alternatives>
<tex-math id="M4">\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$\upphi = 0^{\circ}.$$\end{document}</tex-math>
<inline-graphic xlink:href="221_2010_2262_Article_IEq1.gif"></inline-graphic>
</alternatives>
</inline-formula>
Interestingly, the same prediction is made by a hypothesis based on multiple reference frames, but only if their interaction is optimal, i.e., if the relative orientation of the viewing direction and the hand is taken into account. By contrast, an interaction of multiple reference frames that discards this proprioceptive information completely predicts a phase shift in the direction, and by the amount, specified by the change in hand orientation (Fig. 
<xref rid="Fig1" ref-type="fig">1</xref>
c,
<inline-formula id="IEq2">
<alternatives>
<tex-math id="M5">\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$\upphi = 0^{\circ}$$\end{document}</tex-math>
<inline-graphic xlink:href="221_2010_2262_Article_IEq2.gif"></inline-graphic>
</alternatives>
</inline-formula>
in the Aligned condition,
<inline-formula id="IEq3">
<alternatives>
<tex-math id="M6">\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$\upphi = 90^{\circ}$$\end{document}</tex-math>
<inline-graphic xlink:href="221_2010_2262_Article_IEq3.gif"></inline-graphic>
</alternatives>
</inline-formula>
in the Orthogonal condition). An intermediate phase shift
<inline-formula id="IEq4">
<alternatives>
<tex-math id="M7">\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$\left(0^{\circ} < \upphi < 90^{\circ}\right)$$\end{document}</tex-math>
<inline-graphic xlink:href="221_2010_2262_Article_IEq4.gif"></inline-graphic>
</alternatives>
</inline-formula>
in the Orthogonal condition would support the hypothesis of multiple interacting reference frames in which the proprioceptive information is only partially incorporated. Any additional effect of the temporal delay between haptic and visual exploration would indicate a time-dependent interaction of reference frames.</p>
</sec>
<sec id="Sec2" sec-type="materials|methods">
<title>Materials and methods</title>
<sec id="Sec3">
<title>Participants</title>
<p>Ten right-handed male participants took part in this experiment. Three of them were authors of the paper; the others were undergraduate students who were paid for their participation. None of the participants (except the authors) had any prior knowledge of the experimental design or the task. The experiment was performed in accordance with the guidelines of the Declaration of Helsinki.</p>
</sec>
<sec id="Sec4">
<title>Apparatus and stimuli</title>
<p>The setup consisted of a large horizontal table, in the center of which an iron plate (30 × 30 cm) was positioned. The iron plate was covered with a plastic layer on which a protractor was printed; the center of the protractor was 20 cm from the long table edge. Participants were seated on a stool near this edge. The 3D objects used as haptic stimuli were made of two cylindrical bars, each with a diameter of 1 cm. The main bar had a length of 20 cm; a smaller bar with a length of 5 cm was attached perpendicularly to it, 5 cm from its center (see Fig. 
<xref rid="Fig1" ref-type="fig">1</xref>
b). One pair of objects had the smaller bar attached on the right side of the main bar, whereas the other pair had it attached on the left side. The main bar had an arrow-shaped end on one side that allowed its orientation to be read off with an accuracy of 0.5°. Small magnets were attached under the bar to prevent accidental rotations. Color photographs of the same objects were used as visual stimuli, presented as virtual images in the plane of the table. This was achieved by projecting the images with an LCD projector onto a horizontal rear projection screen suspended 51 cm above the table. A horizontal front-reflecting mirror was placed face up 25.5 cm above the table, and participants viewed the reflected image of the projection screen binocularly by looking down into the mirror (see Fig. 
<xref rid="Fig1" ref-type="fig">1</xref>
b). Because the screen–mirror distance matched the mirror–table distance, all projected images appeared to be in the plane of the table. The center of the images of the visual stimuli was aligned with the center of the haptic stimuli, and the stimuli were matched in size. A keyboard, placed on a surface 13 cm below the table plane, was used to collect participants’ responses. For visual stimulus presentation and data collection, we used MATLAB with the Psychtoolbox (Brainard
<xref ref-type="bibr" rid="CR1">1997</xref>
; Pelli
<xref ref-type="bibr" rid="CR18">1997</xref>
).</p>
<p>Stimuli were presented in pairs: one stimulus haptically and one visually. The haptic stimulus was oriented at 0°, 90°, 180° or 270°; an orientation of 0° is parallel to the long table edge, and increasing values signify counterclockwise rotation. The visual stimulus was presented at 18 different orientations, from 0° to 340° in steps of 20°. We used more orientation steps for the visual stimulus than for the haptic one because the haptic stimulus had to be set manually by the experimenter, whereas the visual stimulus was presented automatically on the screen. Most importantly, it was the relative orientation between the haptic and visual objects that was manipulated experimentally. Each stimulus was paired with either an identical stimulus (Same trial) or its mirror version (Different trial).</p>
<p>Stimuli were presented in three different experimental conditions (see Fig. 
<xref rid="Fig1" ref-type="fig">1</xref>
a). In the Aligned condition, the main axis of the right hand exploring the haptic stimulus was aligned with the viewing direction. In the Orthogonal condition, the main axis of the right hand was rotated 90° counterclockwise and was thus orthogonal to the viewing direction. In both conditions, the haptic and visual stimuli were explored simultaneously. The Delay condition kept the same relation between the exploring hand and the viewing direction as the Orthogonal condition, but differed in the timing of the visual stimulus presentation: the visual stimulus was presented 5 s after participants stopped exploring the haptic stimulus.</p>
<p>In total, each participant completed 864 trials (2 objects × 4 orientations of the haptic object × 18 orientations of the visual object × 2 same/different pairs × 3 conditions). The order of trials in each experimental condition was random and different for each participant. The order of the experimental conditions was counterbalanced across participants.</p>
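As a check on the design size, the factorial combination above can be enumerated directly (the factor labels are illustrative, but the levels match the text):

```python
# Enumerating the full factorial design described above: 2 objects x
# 4 haptic orientations x 18 visual orientations x 2 pair types x
# 3 conditions = 864 trials per participant.
from itertools import product

objects     = [1, 2]                   # two object pairs
haptic_oris = [0, 90, 180, 270]        # haptic orientations (degrees)
visual_oris = list(range(0, 360, 20))  # 18 visual orientations (degrees)
pair_types  = ["same", "different"]
conditions  = ["Aligned", "Orthogonal", "Delay"]

trials = list(product(objects, haptic_oris, visual_oris, pair_types, conditions))
print(len(trials))  # 864
```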
</sec>
<sec id="Sec5">
<title>Procedure</title>
<p>Participants had to perform a cross-modal visuo-haptic mental rotation task. The time-line of each condition is represented in Fig. 
<xref rid="Fig1" ref-type="fig">1</xref>
a. Before the start of each trial, the experimenter set the haptic stimulus and gave a start signal to the participant. Participants had no direct view of their arm and hand, which were covered by the mirror and a curtain. They were instructed to position their hand in the orientation determined by the experimental condition and to touch the haptic stimulus. Positioning the hand and identifying the distinctive parts of the stimulus took approximately 1 s; by presenting the haptic stimulus before the visual one, we could exclude this task-irrelevant exploration time from the computation of the response time. As soon as participants had identified the distinctive parts of the stimulus, they pressed a key with their left hand. In the Aligned and Orthogonal conditions, the key press triggered the display of the visual stimulus; during its presentation, the right hand was kept in contact with the haptic stimulus. In the Delay condition, the visual stimulus was displayed 5 s after the key press; in this period, participants lifted their hand from the haptic stimulus and repositioned it on the table, keeping it in the same orientation. The timing of the participants’ response started with the presentation of the visual stimulus, that is, simultaneously with the key press in the Aligned and Orthogonal conditions, and 5 s after the key press in the Delay condition. Participants then had to respond as quickly as possible whether the two stimuli were the same or different; the visual stimulus stayed on until the response, which terminated the trial. Responses were collected via key presses. It was stressed that responses should be accurate: participants received feedback, and when an incorrect response was given, the trial was repeated at the end of the experimental condition. Each experimental condition was preceded by practice trials. Experimental sessions ended after one hour to prevent fatigue; participants took on average 3 h to complete all conditions.</p>
<p>In the present study, we performed only the cross-modal conditions, because the main purpose was to allow the participants to explore the haptic and visual objects simultaneously. In within-modal conditions, the objects would inevitably have had to be presented sequentially, requiring a change in viewing direction or hand orientation, which would have made the cross-modal and within-modal conditions difficult to compare.</p>
</sec>
<sec id="Sec6">
<title>Data analysis</title>
<p>Data analysis focused on the response times of the Same trials. The Different trials do not convey any information, since the angle through which two different objects must be rotated to achieve congruence is not defined. The error rates of participants’ responses were low (below 10%) and were not analyzed further.</p>
</sec>
<sec id="Sec7">
<title>Fitting procedure</title>
<p>Response times on Same trials were grouped separately for each participant, for each condition, and each orientation difference. For each orientation difference, we took the median of the response times. For each participant and each condition, a triangle wave function was then fitted through the data to extract the amplitude, the phase shift and the vertical shift from the response time data (see Volcic et al.
<xref ref-type="bibr" rid="CR23">2009</xref>
). The fit of the triangle wave function was performed by minimizing the sum of squares between the median response times and the model. The triangle wave function is a periodic function with a fixed wave period of 360°. We define it as:
<disp-formula id="Equ1">
<label>1</label>
<alternatives>
<tex-math id="M8">\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$ T(x,A,\upphi,\upmu)=2 A \left| \hbox{Int}\left(\frac{x-\upphi}{360^{\circ}}\right)-\frac{x-\upphi}{360^{\circ}}\right| +\upmu -\frac{A}{2} $$\end{document}</tex-math>
<graphic xlink:href="221_2010_2262_Article_Equ1.gif" position="anchor"></graphic>
</alternatives>
</disp-formula>
where
<italic>A</italic>
is the amplitude,
<inline-formula id="IEq5">
<alternatives>
<tex-math id="M9">\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$\upphi$$\end{document}</tex-math>
<inline-graphic xlink:href="221_2010_2262_Article_IEq5.gif"></inline-graphic>
</alternatives>
</inline-formula>
is the phase shift and
<inline-formula id="IEq6">
<alternatives>
<tex-math id="M10">\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$\upmu$$\end{document}</tex-math>
<inline-graphic xlink:href="221_2010_2262_Article_IEq6.gif"></inline-graphic>
</alternatives>
</inline-formula>
is the vertical shift. The function Int(
<italic>x</italic>
) gives the integer closest to
<italic>x</italic>
.</p>
</sec>
</sec>
<sec id="Sec8">
<title>Results</title>
<p>Figure 
<xref rid="Fig2" ref-type="fig">2</xref>
a represents the response times averaged over participants in the Aligned, Orthogonal and Delay conditions. The fitted lines correspond to the triangle wave function. As is clear from these graphs, the response time functions are very similar in the three conditions except for their phase shifts. The phase shift corresponds to the orientation difference between the haptic and the visual object at which the response times are fastest; from that point, response times increase linearly in both the positive and negative directions.
<fig id="Fig2">
<label>Fig. 2</label>
<caption>
<p>
<bold>a</bold>
Response times as a function of the orientation difference averaged over all participants for the Aligned, Orthogonal and Delay conditions. Data are fitted by the triangle wave function.
<bold>b</bold>
Phase shifts
<inline-formula id="IEq10">
<alternatives>
<tex-math id="M11">\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$(\upphi)$$\end{document}</tex-math>
<inline-graphic xlink:href="221_2010_2262_Article_IEq10.gif"></inline-graphic>
</alternatives>
</inline-formula>
averaged over all participants for the Aligned, Orthogonal and Delay conditions.
<italic>Error</italic>
<italic>bars</italic>
indicate the 95% confidence interval of the mean</p>
</caption>
<graphic xlink:href="221_2010_2262_Fig2_HTML" id="MO3"></graphic>
</fig>
</p>
<p>To analyze the differences between conditions, we ran separate repeated measures ANOVAs on the phase shifts, vertical shifts and amplitudes, with experimental condition as a factor. The parameters were computed for each participant individually. We also ran the same analyses without the authors’ data and obtained the same results (except for a
<italic>t</italic>
-test, see below).</p>
<p>The average phase shifts were 5.5°, 37° and 14.4° in the Aligned, Orthogonal and Delay conditions, respectively, and are represented in Fig. 
<xref rid="Fig2" ref-type="fig">2</xref>
b. We found a significant effect of condition,
<italic>F</italic>
(2, 18) = 10.507,
<italic>P</italic>
 < 0.001. Subsequent pair-wise comparisons with Bonferroni corrections showed a significant difference between the Aligned and Orthogonal conditions,
<italic>P</italic>
 < 0.005, and a significant difference between the Orthogonal and Delay conditions,
<italic>P</italic>
 < 0.05. The comparison between the Aligned and Delay conditions was not significant,
<italic>P</italic>
= 0.762. In addition, we ran three one-sample t-tests to check which response time functions actually shifted away from the reference point defined by a 0° orientation difference. The phase shifts in the Aligned condition were not significantly different from zero (
<italic>t</italic>
(9) = 1.349,
<italic>P</italic>
= 0.21). However, the phase shifts did differ significantly from zero in both Orthogonal (
<italic>t</italic>
(9) = 5.787,
<italic>P</italic>
<  0.001) and Delay conditions (
<italic>t</italic>
(9) = 2.861,
<italic>P</italic>
 < 0.05). By excluding the authors’ data from the analyses, the phase shifts in the Delay condition did not differ from zero (
<italic>t</italic>
(6) = 1.851,
<italic>P</italic>
= 0.11). All the other analyses yielded the same results.</p>
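The one-sample t-tests above can be sketched with a hand-rolled test against zero; the per-participant phase shifts below are made up for illustration:

```python
# A hand-rolled one-sample t-test against zero, sketching the analysis
# above; the per-participant phase shifts (degrees) are assumed data.
import math

shifts = [28, 41, 35, 52, 30, 44, 38, 25, 47, 30]     # n = 10, illustrative
n = len(shifts)
mean = sum(shifts) / n
var = sum((s - mean) ** 2 for s in shifts) / (n - 1)  # sample variance
t = mean / math.sqrt(var / n)                         # t statistic, df = n - 1
print(t > 2.262)  # True: exceeds the two-tailed critical t for df = 9
```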
<p>The vertical shifts were 866 ms, 884 ms and 776 ms in the Aligned, Orthogonal and Delay conditions, respectively. No significant effect of condition was found,
<italic>F</italic>
(2, 18) = 1.386,
<italic>P</italic>
= 0.275. The amplitudes were 490 ms, 510 ms and 304 ms in the Aligned, Orthogonal and Delay conditions, respectively. The effect of condition was marginally significant,
<italic>F</italic>
(2, 18) = 3.523,
<italic>P</italic>
= 0.051. However, the subsequent pair-wise comparisons did not reach significance.</p>
</sec>
<sec id="Sec9">
<title>Discussion</title>
<p>There is general agreement that spatial information in different sensory modalities can be encoded in multiple reference frames. A less well-understood problem concerns the interplay of reference frames across modalities. Here, we shed light on how spatial information is encoded in visual and haptic reference frames, and on how these reference frames interact with each other. In the present cross-modal mental rotation experiment, we found that the response time function shifted in the direction of the misalignment between the viewing direction and the orientation of the exploring hand. However, the phase shift was only partial and was reduced with a longer temporal delay between haptic and visual explorations. These phase shifts indicate that, contrary to common sense, the haptic and visual objects do not need to be physically aligned with each other to be quickly identified as being the same.</p>
<p>The hypotheses involving a single reference frame in the encoding of the objects can be discarded on the basis of the present results, because they predict no phase shift. If both visual and haptic spatial information were encoded in a single reference frame, and that common reference frame were, for instance, visual, then the response time functions would have been invariant to the orientation of the hand. The same holds for the haptic hand-centered reference frame and for the allocentric reference frame.</p>
<p>The alternative hypothesis postulates the interaction of multiple reference frames. This interaction, however, could take different forms. One alternative would be a translation process, either from one modality into the other or from both modalities into a multi-modal format. Another would be a direct comparison of the encoded information across modalities. We cannot distinguish among these possibilities, because they do not strictly exclude each other and are most likely closely intertwined. Nevertheless, it is clear that each modality encodes the spatial representation in its own frame of reference, and it is the interplay of these frames of reference that gives rise to the phase shifts observed in the present study.</p>
<p>An optimal spatial mapping between the visual retinotopic information and the tactile somatotopic information should also incorporate the proprioceptive information about the current hand posture. This was clearly not the case: the hand’s misalignment with respect to the viewing direction induced the phase shift of the response time function, so the proprioceptive information was evidently largely ignored. Interestingly, since the effect on the phase shift was reduced after the temporal delay, we may presume that the proprioceptive information was incorporated only at a later stage. The spatial information in the hand-centered reference frame, in combination with the proprioceptive information about one’s hand posture and one’s body position in space, constitutes the information necessary for the construction of an allocentric spatial representation. The temporal delay might have induced the recoding of egocentric spatial information into an allocentric reference frame, which led to the reduction in the phase shift. It should be noted that, given that the phase shift was still biased by the orthogonal orientation of the hand, the effect must be interpreted as the result of an interaction of different reference frames. This outcome is in line with previously reported results (Bridgeman et al.
<xref ref-type="bibr" rid="CR2">1997</xref>
; Carrozzo et al.
<xref ref-type="bibr" rid="CR4">2002</xref>
; Milner et al.
<xref ref-type="bibr" rid="CR14">1999</xref>
; Rossetti et al.
<xref ref-type="bibr" rid="CR21">1996</xref>
; Zuidhoek et al.
<xref ref-type="bibr" rid="CR24">2003</xref>
).</p>
<p>Previous unimodal mental rotation studies reported a substantial reference frame influence on the way spatial information is encoded and compared, both in vision and in haptics (Carpenter and Eisenberg
<xref ref-type="bibr" rid="CR3">1978</xref>
; Corballis et al.
<xref ref-type="bibr" rid="CR5">1976</xref>
,
<xref ref-type="bibr" rid="CR6">1978</xref>
; Prather and Sathian
<xref ref-type="bibr" rid="CR20">2002</xref>
; Volcic et al.
<xref ref-type="bibr" rid="CR23">2009</xref>
). Here, we present the novel finding that spatial orientation information is encoded within modality-specific reference frames and that performance in a visuo-haptic cross-modal mental rotation task is bound to the relative alignment of these reference frames and their interactions. In addition, the intervening temporal delay is presumed to have affected the integration of the proprioceptive information. Although mental rotation and recognition of rotated objects show behaviorally similar effects, they rely on different processes (e.g., Gauthier et al.
<xref ref-type="bibr" rid="CR8">2002</xref>
). Nevertheless, the present findings contribute to the general discussion about how information about objects is shared across modalities. In this respect, we tentatively hypothesize that the view-dependence/independence effects in cross-modal object recognition could depend on the resolution of a conflict between modality-specific reference frames.</p>
</sec>
</body>
<back>
<ack>
<p>This research was supported by a grant from the Netherlands Organisation for Scientific Research (NWO) and a grant from the EU (FP7-ICT-217077-Eyeshots).</p>
<p>
<bold>Open Access</bold>
This article is distributed under the terms of the Creative Commons Attribution Noncommercial License which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited.</p>
</ack>
<ref-list id="Bib1">
<title>References</title>
<ref id="CR1">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Brainard</surname>
<given-names>DH</given-names>
</name>
</person-group>
<article-title>The psychophysics toolbox</article-title>
<source>Spat Vis</source>
<year>1997</year>
<volume>10</volume>
<fpage>433</fpage>
<lpage>436</lpage>
<pub-id pub-id-type="doi">10.1163/156856897X00357</pub-id>
<pub-id pub-id-type="pmid">9176952</pub-id>
</mixed-citation>
</ref>
<ref id="CR2">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bridgeman</surname>
<given-names>B</given-names>
</name>
<name>
<surname>Peery</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Anand</surname>
<given-names>S</given-names>
</name>
</person-group>
<article-title>Interaction of cognitive and sensorimotor maps of visual space</article-title>
<source>Percept Psychophys</source>
<year>1997</year>
<volume>59</volume>
<fpage>456</fpage>
<lpage>469</lpage>
<pub-id pub-id-type="pmid">9136275</pub-id>
</mixed-citation>
</ref>
<ref id="CR3">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Carpenter</surname>
<given-names>PA</given-names>
</name>
<name>
<surname>Eisenberg</surname>
<given-names>P</given-names>
</name>
</person-group>
<article-title>Mental rotation and the frame of reference in blind and sighted individuals</article-title>
<source>Percept Psychophys</source>
<year>1978</year>
<volume>23</volume>
<fpage>117</fpage>
<lpage>124</lpage>
<pub-id pub-id-type="pmid">643507</pub-id>
</mixed-citation>
</ref>
<ref id="CR4">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Carrozzo</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Stratta</surname>
<given-names>F</given-names>
</name>
<name>
<surname>McIntyre</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Lacquaniti</surname>
<given-names>F</given-names>
</name>
</person-group>
<article-title>Cognitive allocentric representations of visual space shape pointing errors</article-title>
<source>Exp Brain Res</source>
<year>2002</year>
<volume>147</volume>
<fpage>426</fpage>
<lpage>436</lpage>
<pub-id pub-id-type="doi">10.1007/s00221-002-1232-4</pub-id>
<pub-id pub-id-type="pmid">12444474</pub-id>
</mixed-citation>
</ref>
<ref id="CR5">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Corballis</surname>
<given-names>MC</given-names>
</name>
<name>
<surname>Zbrodoff</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Roldan</surname>
<given-names>C</given-names>
</name>
</person-group>
<article-title>What’s up in mental rotation</article-title>
<source>Percept Psychophys</source>
<year>1976</year>
<volume>19</volume>
<fpage>525</fpage>
<lpage>530</lpage>
</mixed-citation>
</ref>
<ref id="CR6">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Corballis</surname>
<given-names>MC</given-names>
</name>
<name>
<surname>Nagourney</surname>
<given-names>BA</given-names>
</name>
<name>
<surname>Shetzer</surname>
<given-names>LI</given-names>
</name>
<name>
<surname>Stefanatos</surname>
<given-names>G</given-names>
</name>
</person-group>
<article-title>Mental rotation under head tilt: factors influencing the location of the subjective reference frame</article-title>
<source>Percept Psychophys</source>
<year>1978</year>
<volume>24</volume>
<fpage>263</fpage>
<lpage>273</lpage>
<pub-id pub-id-type="pmid">704287</pub-id>
</mixed-citation>
</ref>
<ref id="CR7">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ernst</surname>
<given-names>MO</given-names>
</name>
<name>
<surname>Lange</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Newell</surname>
<given-names>FN</given-names>
</name>
</person-group>
<article-title>Multisensory recognition of actively explored objects</article-title>
<source>Can J Exp Psychol</source>
<year>2007</year>
<volume>61</volume>
<fpage>242</fpage>
<lpage>253</lpage>
<pub-id pub-id-type="pmid">17974318</pub-id>
</mixed-citation>
</ref>
<ref id="CR8">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Gauthier</surname>
<given-names>I</given-names>
</name>
<name>
<surname>Hayward</surname>
<given-names>WG</given-names>
</name>
<name>
<surname>Tarr</surname>
<given-names>MJ</given-names>
</name>
<name>
<surname>Anderson</surname>
<given-names>AW</given-names>
</name>
<name>
<surname>Skudlarski</surname>
<given-names>P</given-names>
</name>
<name>
<surname>Gore</surname>
<given-names>JC</given-names>
</name>
</person-group>
<article-title>Bold activity during mental rotation and viewpoint-dependent object recognition</article-title>
<source>Neuron</source>
<year>2002</year>
<volume>34</volume>
<fpage>161</fpage>
<lpage>171</lpage>
<pub-id pub-id-type="doi">10.1016/S0896-6273(02)00622-0</pub-id>
<pub-id pub-id-type="pmid">11931750</pub-id>
</mixed-citation>
</ref>
<ref id="CR9">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Gibson</surname>
<given-names>JJ</given-names>
</name>
</person-group>
<article-title>Observations on active touch</article-title>
<source>Psychol Rev</source>
<year>1962</year>
<volume>69</volume>
<fpage>477</fpage>
<lpage>491</lpage>
<pub-id pub-id-type="doi">10.1037/h0046962</pub-id>
<pub-id pub-id-type="pmid">13947730</pub-id>
</mixed-citation>
</ref>
<ref id="CR10">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Gibson</surname>
<given-names>JJ</given-names>
</name>
</person-group>
<article-title>The useful dimensions of sensitivity</article-title>
<source>Am Psychol</source>
<year>1963</year>
<volume>18</volume>
<fpage>1</fpage>
<lpage>15</lpage>
<pub-id pub-id-type="doi">10.1037/h0046033</pub-id>
</mixed-citation>
</ref>
<ref id="CR11">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Lacey</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Peters</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Sathian</surname>
<given-names>K</given-names>
</name>
</person-group>
<article-title>Cross-modal object recognition is viewpoint-independent</article-title>
<source>PLoS ONE</source>
<year>2007</year>
<volume>2</volume>
<fpage>e890</fpage>
<pub-id pub-id-type="doi">10.1371/journal.pone.0000890</pub-id>
<pub-id pub-id-type="pmid">17849019</pub-id>
</mixed-citation>
</ref>
<ref id="CR12">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Lacey</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Pappas</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Kreps</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Lee</surname>
<given-names>K</given-names>
</name>
<name>
<surname>Sathian</surname>
<given-names>K</given-names>
</name>
</person-group>
<article-title>Perceptual learning of view-independence in visuo-haptic object representations</article-title>
<source>Exp Brain Res</source>
<year>2009</year>
<volume>198</volume>
<fpage>329</fpage>
<lpage>337</lpage>
<pub-id pub-id-type="doi">10.1007/s00221-009-1856-8</pub-id>
<pub-id pub-id-type="pmid">19484467</pub-id>
</mixed-citation>
</ref>
<ref id="CR13">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Lawson</surname>
<given-names>R</given-names>
</name>
</person-group>
<article-title>A comparison of the effects of depth rotation on visual and haptic three-dimensional object recognition</article-title>
<source>J Exp Psychol Human</source>
<year>2009</year>
<volume>35</volume>
<fpage>911</fpage>
<lpage>930</lpage>
<pub-id pub-id-type="doi">10.1037/a0015025</pub-id>
</mixed-citation>
</ref>
<ref id="CR14">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Milner</surname>
<given-names>AD</given-names>
</name>
<name>
<surname>Paulignan</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Dijkerman</surname>
<given-names>HC</given-names>
</name>
<name>
<surname>Michel</surname>
<given-names>F</given-names>
</name>
<name>
<surname>Jeannerod</surname>
<given-names>M</given-names>
</name>
</person-group>
<article-title>A paradoxical improvement of misreaching in optic ataxia: new evidence for two separate neural systems for visual localization</article-title>
<source>Proc Biol Sci</source>
<year>1999</year>
<volume>266</volume>
<fpage>2225</fpage>
<lpage>2229</lpage>
<pub-id pub-id-type="doi">10.1098/rspb.1999.0912</pub-id>
<pub-id pub-id-type="pmid">10649637</pub-id>
</mixed-citation>
</ref>
<ref id="CR15">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Newell</surname>
<given-names>FN</given-names>
</name>
<name>
<surname>Ernst</surname>
<given-names>MO</given-names>
</name>
<name>
<surname>Tjan</surname>
<given-names>BS</given-names>
</name>
<name>
<surname>Bülthoff</surname>
<given-names>HH</given-names>
</name>
</person-group>
<article-title>Viewpoint dependence in visual and haptic object recognition</article-title>
<source>Psychol Sci</source>
<year>2001</year>
<volume>12</volume>
<fpage>37</fpage>
<lpage>42</lpage>
<pub-id pub-id-type="doi">10.1111/1467-9280.00307</pub-id>
<pub-id pub-id-type="pmid">11294226</pub-id>
</mixed-citation>
</ref>
<ref id="CR16">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Norman</surname>
<given-names>JF</given-names>
</name>
<name>
<surname>Norman</surname>
<given-names>HF</given-names>
</name>
<name>
<surname>Clayton</surname>
<given-names>AM</given-names>
</name>
<name>
<surname>Lianekhammy</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Zielke</surname>
<given-names>G</given-names>
</name>
</person-group>
<article-title>The visual and haptic perception of natural object shape</article-title>
<source>Percept Psychophys</source>
<year>2004</year>
<volume>66</volume>
<fpage>342</fpage>
<lpage>351</lpage>
<pub-id pub-id-type="pmid">15129753</pub-id>
</mixed-citation>
</ref>
<ref id="CR17">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Norman</surname>
<given-names>JF</given-names>
</name>
<name>
<surname>Clayton</surname>
<given-names>AM</given-names>
</name>
<name>
<surname>Norman</surname>
<given-names>HF</given-names>
</name>
<name>
<surname>Crabtree</surname>
<given-names>CE</given-names>
</name>
</person-group>
<article-title>Learning to perceive differences in solid shape through vision and touch</article-title>
<source>Perception</source>
<year>2008</year>
<volume>37</volume>
<fpage>185</fpage>
<lpage>196</lpage>
<pub-id pub-id-type="doi">10.1068/p5679</pub-id>
<pub-id pub-id-type="pmid">18456923</pub-id>
</mixed-citation>
</ref>
<ref id="CR18">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Pelli</surname>
<given-names>DG</given-names>
</name>
</person-group>
<article-title>The VideoToolbox software for visual psychophysics: transforming numbers into movies</article-title>
<source>Spatial Vision</source>
<year>1997</year>
<volume>10</volume>
<fpage>437</fpage>
<lpage>442</lpage>
<pub-id pub-id-type="doi">10.1163/156856897X00366</pub-id>
<pub-id pub-id-type="pmid">9176953</pub-id>
</mixed-citation>
</ref>
<ref id="CR19">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Phillips</surname>
<given-names>F</given-names>
</name>
<name>
<surname>Egan</surname>
<given-names>EJL</given-names>
</name>
<name>
<surname>Perry</surname>
<given-names>BN</given-names>
</name>
</person-group>
<article-title>Perceptual equivalence between vision and touch is complexity dependent</article-title>
<source>Acta Psychol</source>
<year>2009</year>
<volume>132</volume>
<fpage>259</fpage>
<lpage>266</lpage>
<pub-id pub-id-type="doi">10.1016/j.actpsy.2009.07.010</pub-id>
</mixed-citation>
</ref>
<ref id="CR20">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Prather</surname>
<given-names>SC</given-names>
</name>
<name>
<surname>Sathian</surname>
<given-names>K</given-names>
</name>
</person-group>
<article-title>Mental rotation of tactile stimuli</article-title>
<source>Cognitive Brain Res</source>
<year>2002</year>
<volume>14</volume>
<fpage>91</fpage>
<lpage>98</lpage>
<pub-id pub-id-type="doi">10.1016/S0926-6410(02)00063-0</pub-id>
</mixed-citation>
</ref>
<ref id="CR21">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Rossetti</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Gaunet</surname>
<given-names>F</given-names>
</name>
<name>
<surname>Thinus-Blanc</surname>
<given-names>C</given-names>
</name>
</person-group>
<article-title>Early visual experience affects memorization and spatial representation of proprioceptive targets</article-title>
<source>Neuroreport</source>
<year>1996</year>
<volume>7</volume>
<fpage>1219</fpage>
<lpage>1223</lpage>
<pub-id pub-id-type="doi">10.1097/00001756-199604260-00025</pub-id>
<pub-id pub-id-type="pmid">8817536</pub-id>
</mixed-citation>
</ref>
<ref id="CR22">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Shepard</surname>
<given-names>RN</given-names>
</name>
<name>
<surname>Metzler</surname>
<given-names>J</given-names>
</name>
</person-group>
<article-title>Mental rotation of three-dimensional objects</article-title>
<source>Science</source>
<year>1971</year>
<volume>171</volume>
<fpage>701</fpage>
<lpage>703</lpage>
<pub-id pub-id-type="doi">10.1126/science.171.3972.701</pub-id>
<pub-id pub-id-type="pmid">5540314</pub-id>
</mixed-citation>
</ref>
<ref id="CR23">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Volcic</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Wijntjes</surname>
<given-names>MWA</given-names>
</name>
<name>
<surname>Kappers</surname>
<given-names>AML</given-names>
</name>
</person-group>
<article-title>Haptic mental rotation revisited: multiple reference frame dependence</article-title>
<source>Acta Psychol</source>
<year>2009</year>
<volume>130</volume>
<fpage>251</fpage>
<lpage>259</lpage>
<pub-id pub-id-type="doi">10.1016/j.actpsy.2009.01.004</pub-id>
</mixed-citation>
</ref>
<ref id="CR24">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Zuidhoek</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Kappers</surname>
<given-names>AML</given-names>
</name>
<name>
<surname>Van der Lubbe</surname>
<given-names>RHJ</given-names>
</name>
<name>
<surname>Postma</surname>
<given-names>A</given-names>
</name>
</person-group>
<article-title>Delay improves performance on a haptic spatial matching task</article-title>
<source>Exp Brain Res</source>
<year>2003</year>
<volume>149</volume>
<fpage>320</fpage>
<lpage>330</lpage>
<pub-id pub-id-type="pmid">12632234</pub-id>
</mixed-citation>
</ref>
</ref-list>
</back>
</pmc>
<affiliations>
<list>
<country>
<li>Allemagne</li>
<li>Pays-Bas</li>
</country>
<region>
<li>District de Münster</li>
<li>Rhénanie-du-Nord-Westphalie</li>
<li>Utrecht (province)</li>
</region>
<settlement>
<li>Münster</li>
<li>Utrecht</li>
</settlement>
<orgName>
<li>Université d'Utrecht</li>
</orgName>
</list>
<tree>
<country name="Allemagne">
<region name="Rhénanie-du-Nord-Westphalie">
<name sortKey="Volcic, Robert" sort="Volcic, Robert" uniqKey="Volcic R" first="Robert" last="Volcic">Robert Volcic</name>
</region>
</country>
<country name="Pays-Bas">
<noRegion>
<name sortKey="Wijntjes, Maarten W A" sort="Wijntjes, Maarten W A" uniqKey="Wijntjes M" first="Maarten W. A." last="Wijntjes">Maarten W. A. Wijntjes</name>
</noRegion>
<name sortKey="Kappers, Astrid M L" sort="Kappers, Astrid M L" uniqKey="Kappers A" first="Astrid M. L." last="Kappers">Astrid M. L. Kappers</name>
<name sortKey="Kool, Erik C" sort="Kool, Erik C" uniqKey="Kool E" first="Erik C." last="Kool">Erik C. Kool</name>
</country>
</tree>
</affiliations>
</record>

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/HapticV1/Data/Pmc/Checkpoint
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 001F29 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/Pmc/Checkpoint/biblio.hfd -nk 001F29 | SxmlIndent | more

To add a link to this page within the Wicri network

{{Explor lien
   |wiki=    Ticri/CIDE
   |area=    HapticV1
   |flux=    Pmc
   |étape=   Checkpoint
   |type=    RBID
   |clé=     PMC:2875473
   |texte=   Cross-modal visuo-haptic mental rotation: comparing objects between senses
}}

To generate wiki pages

HfdIndexSelect -h $EXPLOR_AREA/Data/Pmc/Checkpoint/RBID.i   -Sk "pubmed:20437169" \
       | HfdSelect -Kh $EXPLOR_AREA/Data/Pmc/Checkpoint/biblio.hfd   \
       | NlmPubMed2Wicri -a HapticV1 

Wicri

This area was generated with Dilib version V0.6.23.
Data generation: Mon Jun 13 01:09:46 2016. Site generation: Wed Mar 6 09:54:07 2024