Exploration server for haptic devices

Warning: this site is under development!
Warning: this site was generated by automated means from raw corpora.
The information has therefore not been validated.

Rotation-independent representations for haptic movements

Internal identifier: 001027 (Pmc/Checkpoint); previous: 001026; next: 001028


Authors: Satoshi Shioiri; Takanori Yamazaki; Kazumichi Matsumiya; Ichiro Kuriki

Source:

RBID: PMC:3763250

Abstract

The existence of a common mechanism for visual and haptic representations has been reported in object perception. In contrast, representations of movements might be more specific to modalities. Referring to the vertical axis is natural for visual representations, whereas a fixed reference axis might be inappropriate for haptic movements and thus also inappropriate for their representations in the brain. The present study found that visual and haptic movement representations are processed independently. A psychophysical experiment examining mental rotation revealed the well-known effect of rotation angle for visual representations, whereas no such effect was found for haptic representations. We also found no interference between processes for visual and haptic movements in an experiment where different stimuli were presented simultaneously through visual and haptic modalities. These results strongly suggest that (1) there are separate representations of visual and haptic movements, and (2) the haptic process has a rotation-independent representation.


URL:
DOI: 10.1038/srep02595
PubMed: 24005481
PubMed Central: 3763250
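The identifiers above are sufficient to retrieve the source record programmatically. A minimal sketch (the NCBI E-utilities `efetch` endpoint and the doi.org resolver used below are the standard public services; only URLs are built here, no request is sent):

```python
# Build retrieval URLs from this record's identifiers (PMC ID, PMID, DOI).
from urllib.parse import urlencode

PMC_ID = "3763250"
PMID = "24005481"
DOI = "10.1038/srep02595"

def efetch_url(db: str, uid: str) -> str:
    """URL that returns the full record XML from the NCBI E-utilities."""
    base = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi"
    return f"{base}?{urlencode({'db': db, 'id': uid, 'retmode': 'xml'})}"

pmc_url = efetch_url("pmc", PMC_ID)   # full-text XML from PubMed Central
pubmed_url = efetch_url("pubmed", PMID)  # citation record from PubMed
doi_url = f"https://doi.org/{DOI}"    # publisher landing page via the DOI resolver
print(pmc_url)
print(doi_url)
```

Fetching `pmc_url` returns the same JATS XML that is embedded below.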


Affiliations:



The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">Rotation-independent representations for haptic movements</title>
<author>
<name sortKey="Shioiri, Satoshi" sort="Shioiri, Satoshi" uniqKey="Shioiri S" first="Satoshi" last="Shioiri">Satoshi Shioiri</name>
<affiliation>
<nlm:aff id="a1">
<institution>Research Institute of Electrical Communication Tohoku University</institution>
</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Yamazaki, Takanori" sort="Yamazaki, Takanori" uniqKey="Yamazaki T" first="Takanori" last="Yamazaki">Takanori Yamazaki</name>
<affiliation>
<nlm:aff id="a1">
<institution>Research Institute of Electrical Communication Tohoku University</institution>
</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Matsumiya, Kazumichi" sort="Matsumiya, Kazumichi" uniqKey="Matsumiya K" first="Kazumichi" last="Matsumiya">Kazumichi Matsumiya</name>
<affiliation>
<nlm:aff id="a1">
<institution>Research Institute of Electrical Communication Tohoku University</institution>
</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Kuriki, Ichiro" sort="Kuriki, Ichiro" uniqKey="Kuriki I" first="Ichiro" last="Kuriki">Ichiro Kuriki</name>
<affiliation>
<nlm:aff id="a1">
<institution>Research Institute of Electrical Communication Tohoku University</institution>
</nlm:aff>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PMC</idno>
<idno type="pmid">24005481</idno>
<idno type="pmc">3763250</idno>
<idno type="url">http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3763250</idno>
<idno type="RBID">PMC:3763250</idno>
<idno type="doi">10.1038/srep02595</idno>
<date when="2013">2013</date>
<idno type="wicri:Area/Pmc/Corpus">000026</idno>
<idno type="wicri:Area/Pmc/Curation">000026</idno>
<idno type="wicri:Area/Pmc/Checkpoint">001027</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a" type="main">Rotation-independent representations for haptic movements</title>
<author>
<name sortKey="Shioiri, Satoshi" sort="Shioiri, Satoshi" uniqKey="Shioiri S" first="Satoshi" last="Shioiri">Satoshi Shioiri</name>
<affiliation>
<nlm:aff id="a1">
<institution>Research Institute of Electrical Communication Tohoku University</institution>
</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Yamazaki, Takanori" sort="Yamazaki, Takanori" uniqKey="Yamazaki T" first="Takanori" last="Yamazaki">Takanori Yamazaki</name>
<affiliation>
<nlm:aff id="a1">
<institution>Research Institute of Electrical Communication Tohoku University</institution>
</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Matsumiya, Kazumichi" sort="Matsumiya, Kazumichi" uniqKey="Matsumiya K" first="Kazumichi" last="Matsumiya">Kazumichi Matsumiya</name>
<affiliation>
<nlm:aff id="a1">
<institution>Research Institute of Electrical Communication Tohoku University</institution>
</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Kuriki, Ichiro" sort="Kuriki, Ichiro" uniqKey="Kuriki I" first="Ichiro" last="Kuriki">Ichiro Kuriki</name>
<affiliation>
<nlm:aff id="a1">
<institution>Research Institute of Electrical Communication Tohoku University</institution>
</nlm:aff>
</affiliation>
</author>
</analytic>
<series>
<title level="j">Scientific Reports</title>
<idno type="eISSN">2045-2322</idno>
<imprint>
<date when="2013">2013</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass></textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">
<p>The existence of a common mechanism for visual and haptic representations has been reported in object perception. In contrast, representations of movements might be more specific to modalities. Referring to the vertical axis is natural for visual representations, whereas a fixed reference axis might be inappropriate for haptic movements and thus also inappropriate for their representations in the brain. The present study found that visual and haptic movement representations are processed independently. A psychophysical experiment examining mental rotation revealed the well-known effect of rotation angle for visual representations, whereas no such effect was found for haptic representations. We also found no interference between processes for visual and haptic movements in an experiment where different stimuli were presented simultaneously through visual and haptic modalities. These results strongly suggest that (1) there are separate representations of visual and haptic movements, and (2) the haptic process has a rotation-independent representation.</p>
</div>
</front>
<back>
<div1 type="bibliography">
<listBibl>
<biblStruct>
<analytic>
<author>
<name sortKey="Anguera, J A" uniqKey="Anguera J">J. A. Anguera</name>
</author>
<author>
<name sortKey="Reuter Lorenz, P A" uniqKey="Reuter Lorenz P">P. A. Reuter-Lorenz</name>
</author>
<author>
<name sortKey="Willingham, D T" uniqKey="Willingham D">D. T. Willingham</name>
</author>
<author>
<name sortKey="Seidler, R D" uniqKey="Seidler R">R. D. Seidler</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gentaz, E" uniqKey="Gentaz E">E. Gentaz</name>
</author>
<author>
<name sortKey="Baud Bovy, G" uniqKey="Baud Bovy G">G. Baud-Bovy</name>
</author>
<author>
<name sortKey="Luyat, M" uniqKey="Luyat M">M. Luyat</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Prather, S C" uniqKey="Prather S">S. C. Prather</name>
</author>
<author>
<name sortKey="Sathian, K" uniqKey="Sathian K">K. Sathian</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Weiller, C" uniqKey="Weiller C">C. Weiller</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Rowe, J B" uniqKey="Rowe J">J. B. Rowe</name>
</author>
<author>
<name sortKey="Siebner, H R" uniqKey="Siebner H">H. R. Siebner</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Dayan, E" uniqKey="Dayan E">E. Dayan</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Jeannerod, M" uniqKey="Jeannerod M">M. Jeannerod</name>
</author>
<author>
<name sortKey="Frak, V" uniqKey="Frak V">V. Frak</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Pellizzer, G" uniqKey="Pellizzer G">G. Pellizzer</name>
</author>
<author>
<name sortKey="Georgopoulos, A P" uniqKey="Georgopoulos A">A. P. Georgopoulos</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Easton, R D" uniqKey="Easton R">R. D. Easton</name>
</author>
<author>
<name sortKey="Srinivas, K" uniqKey="Srinivas K">K. Srinivas</name>
</author>
<author>
<name sortKey="Greene, A J" uniqKey="Greene A">A. J. Greene</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ballesteros, S" uniqKey="Ballesteros S">S. Ballesteros</name>
</author>
<author>
<name sortKey="Millar, S" uniqKey="Millar S">S. Millar</name>
</author>
<author>
<name sortKey="Reales, J M" uniqKey="Reales J">J. M. Reales</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Woods, A T" uniqKey="Woods A">A. T. Woods</name>
</author>
<author>
<name sortKey="Newell, F N" uniqKey="Newell F">F. N. Newell</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lacey, S" uniqKey="Lacey S">S. Lacey</name>
</author>
<author>
<name sortKey="Campbell, C" uniqKey="Campbell C">C. Campbell</name>
</author>
<author>
<name sortKey="Sathian, K" uniqKey="Sathian K">K. Sathian</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Giudice, N A" uniqKey="Giudice N">N. A. Giudice</name>
</author>
<author>
<name sortKey="Betty, M R" uniqKey="Betty M">M. R. Betty</name>
</author>
<author>
<name sortKey="Loomis, J M" uniqKey="Loomis J">J. M. Loomis</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Zangaladze, A" uniqKey="Zangaladze A">A. Zangaladze</name>
</author>
<author>
<name sortKey="Epstein, C M" uniqKey="Epstein C">C. M. Epstein</name>
</author>
<author>
<name sortKey="Grafton, S T" uniqKey="Grafton S">S. T. Grafton</name>
</author>
<author>
<name sortKey="Sathian, K" uniqKey="Sathian K">K. Sathian</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Amedi, A" uniqKey="Amedi A">A. Amedi</name>
</author>
<author>
<name sortKey="Malach, R" uniqKey="Malach R">R. Malach</name>
</author>
<author>
<name sortKey="Hendler, T" uniqKey="Hendler T">T. Hendler</name>
</author>
<author>
<name sortKey="Peled, S" uniqKey="Peled S">S. Peled</name>
</author>
<author>
<name sortKey="Zohary, E" uniqKey="Zohary E">E. Zohary</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Pietrini, P" uniqKey="Pietrini P">P. Pietrini</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Merabet, L" uniqKey="Merabet L">L. Merabet</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Korman, M" uniqKey="Korman M">M. Korman</name>
</author>
<author>
<name sortKey="Teodorescu, K" uniqKey="Teodorescu K">K. Teodorescu</name>
</author>
<author>
<name sortKey="Cohen, A" uniqKey="Cohen A">A. Cohen</name>
</author>
<author>
<name sortKey="Reiner, M" uniqKey="Reiner M">M. Reiner</name>
</author>
<author>
<name sortKey="Gopher, D" uniqKey="Gopher D">D. Gopher</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Amedi, A" uniqKey="Amedi A">A. Amedi</name>
</author>
<author>
<name sortKey="Von Kriegstein, K" uniqKey="Von Kriegstein K">K. von Kriegstein</name>
</author>
<author>
<name sortKey="Van Atteveldt, N M" uniqKey="Van Atteveldt N">N. M. van Atteveldt</name>
</author>
<author>
<name sortKey="Beauchamp, M S" uniqKey="Beauchamp M">M. S. Beauchamp</name>
</author>
<author>
<name sortKey="Naumer, M J" uniqKey="Naumer M">M. J. Naumer</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Sathian, K" uniqKey="Sathian K">K. Sathian</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Stilla, R" uniqKey="Stilla R">R. Stilla</name>
</author>
<author>
<name sortKey="Sathian, K" uniqKey="Sathian K">K. Sathian</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Tal, N" uniqKey="Tal N">N. Tal</name>
</author>
<author>
<name sortKey="Amedi, A" uniqKey="Amedi A">A. Amedi</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Nabeta, T" uniqKey="Nabeta T">T. Nabeta</name>
</author>
<author>
<name sortKey="Ono, F" uniqKey="Ono F">F. Ono</name>
</author>
<author>
<name sortKey="Kawahara, J" uniqKey="Kawahara J">J. Kawahara</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Livingstone, M" uniqKey="Livingstone M">M. Livingstone</name>
</author>
<author>
<name sortKey="Hubel, D" uniqKey="Hubel D">D. Hubel</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Merigan, W H" uniqKey="Merigan W">W. H. Merigan</name>
</author>
<author>
<name sortKey="Maunsell, J H" uniqKey="Maunsell J">J. H. Maunsell</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Goodale, M A" uniqKey="Goodale M">M. A. Goodale</name>
</author>
<author>
<name sortKey="Milner, A D" uniqKey="Milner A">A. D. Milner</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mishkin, M" uniqKey="Mishkin M">M. Mishkin</name>
</author>
<author>
<name sortKey="Macko, K A" uniqKey="Macko K">K. A. Macko</name>
</author>
<author>
<name sortKey="Ungerleider, L G" uniqKey="Ungerleider L">L. G. Ungerleider</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Rizzolatti, G" uniqKey="Rizzolatti G">G. Rizzolatti</name>
</author>
<author>
<name sortKey="Craighero, L" uniqKey="Craighero L">L. Craighero</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Matteau, I" uniqKey="Matteau I">I. Matteau</name>
</author>
<author>
<name sortKey="Kupers, R" uniqKey="Kupers R">R. Kupers</name>
</author>
<author>
<name sortKey="Ricciardi, E" uniqKey="Ricciardi E">E. Ricciardi</name>
</author>
<author>
<name sortKey="Pietrini, P" uniqKey="Pietrini P">P. Pietrini</name>
</author>
<author>
<name sortKey="Ptito, M" uniqKey="Ptito M">M. Ptito</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Wacker, E" uniqKey="Wacker E">E. Wacker</name>
</author>
<author>
<name sortKey="Spitzer, B" uniqKey="Spitzer B">B. Spitzer</name>
</author>
<author>
<name sortKey="Lutzkendorf, R" uniqKey="Lutzkendorf R">R. Lutzkendorf</name>
</author>
<author>
<name sortKey="Bernarding, J" uniqKey="Bernarding J">J. Bernarding</name>
</author>
<author>
<name sortKey="Blankenburg, F" uniqKey="Blankenburg F">F. Blankenburg</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Amedi, A" uniqKey="Amedi A">A. Amedi</name>
</author>
<author>
<name sortKey="Raz, N" uniqKey="Raz N">N. Raz</name>
</author>
<author>
<name sortKey="Azulay, H" uniqKey="Azulay H">H. Azulay</name>
</author>
<author>
<name sortKey="Malach, R" uniqKey="Malach R">R. Malach</name>
</author>
<author>
<name sortKey="Zohary, E" uniqKey="Zohary E">E. Zohary</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Cooper, L A" uniqKey="Cooper L">L. A. Cooper</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Cooper, L A" uniqKey="Cooper L">L. A. Cooper</name>
</author>
<author>
<name sortKey="Shepard, R N" uniqKey="Shepard R">R. N. Shepard</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Shepard, R N" uniqKey="Shepard R">R. N. Shepard</name>
</author>
<author>
<name sortKey="Metzler, J" uniqKey="Metzler J">J. Metzler</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Sekiyama, K" uniqKey="Sekiyama K">K. Sekiyama</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Volcic, R" uniqKey="Volcic R">R. Volcic</name>
</author>
<author>
<name sortKey="Wijntjes, M W" uniqKey="Wijntjes M">M. W. Wijntjes</name>
</author>
<author>
<name sortKey="Kool, E C" uniqKey="Kool E">E. C. Kool</name>
</author>
<author>
<name sortKey="Kappers, A M" uniqKey="Kappers A">A. M. Kappers</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Wexler, M" uniqKey="Wexler M">M. Wexler</name>
</author>
<author>
<name sortKey="Kosslyn, S M" uniqKey="Kosslyn S">S. M. Kosslyn</name>
</author>
<author>
<name sortKey="Berthoz, A" uniqKey="Berthoz A">A. Berthoz</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Windischberger, C" uniqKey="Windischberger C">C. Windischberger</name>
</author>
<author>
<name sortKey="Lamm, C" uniqKey="Lamm C">C. Lamm</name>
</author>
<author>
<name sortKey="Bauer, H" uniqKey="Bauer H">H. Bauer</name>
</author>
<author>
<name sortKey="Moser, E" uniqKey="Moser E">E. Moser</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Reiner, M" uniqKey="Reiner M">M. Reiner</name>
</author>
<author>
<name sortKey="Korsnes, M" uniqKey="Korsnes M">M. Korsnes</name>
</author>
<author>
<name sortKey="Glover, G" uniqKey="Glover G">G. Glover</name>
</author>
<author>
<name sortKey="Hugdahl, K" uniqKey="Hugdahl K">K. Hugdahl</name>
</author>
<author>
<name sortKey="Feldmann, M" uniqKey="Feldmann M">M. Feldmann</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ernst, M O" uniqKey="Ernst M">M. O. Ernst</name>
</author>
<author>
<name sortKey="Banks, M S" uniqKey="Banks M">M. S. Banks</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Knill, D C" uniqKey="Knill D">D. C. Knill</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lages, M" uniqKey="Lages M">M. Lages</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Macaluso, E" uniqKey="Macaluso E">E. Macaluso</name>
</author>
<author>
<name sortKey="Maravita, A" uniqKey="Maravita A">A. Maravita</name>
</author>
</analytic>
</biblStruct>
</listBibl>
</div1>
</back>
</TEI>
<pmc article-type="research-article">
<pmc-dir>properties open_access</pmc-dir>
<front>
<journal-meta>
<journal-id journal-id-type="nlm-ta">Sci Rep</journal-id>
<journal-id journal-id-type="iso-abbrev">Sci Rep</journal-id>
<journal-title-group>
<journal-title>Scientific Reports</journal-title>
</journal-title-group>
<issn pub-type="epub">2045-2322</issn>
<publisher>
<publisher-name>Nature Publishing Group</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="pmid">24005481</article-id>
<article-id pub-id-type="pmc">3763250</article-id>
<article-id pub-id-type="pii">srep02595</article-id>
<article-id pub-id-type="doi">10.1038/srep02595</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Article</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>Rotation-independent representations for haptic movements</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name>
<surname>Shioiri</surname>
<given-names>Satoshi</given-names>
</name>
<xref ref-type="corresp" rid="c1">a</xref>
<xref ref-type="aff" rid="a1">1</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Yamazaki</surname>
<given-names>Takanori</given-names>
</name>
<xref ref-type="aff" rid="a1">1</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Matsumiya</surname>
<given-names>Kazumichi</given-names>
</name>
<xref ref-type="aff" rid="a1">1</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Kuriki</surname>
<given-names>Ichiro</given-names>
</name>
<xref ref-type="aff" rid="a1">1</xref>
</contrib>
<aff id="a1">
<label>1</label>
<institution>Research Institute of Electrical Communication Tohoku University</institution>
</aff>
</contrib-group>
<author-notes>
<corresp id="c1">
<label>a</label>
<email>shioiri@riec.tohoku.ac.jp</email>
</corresp>
</author-notes>
<pub-date pub-type="epub">
<day>05</day>
<month>09</month>
<year>2013</year>
</pub-date>
<pub-date pub-type="collection">
<year>2013</year>
</pub-date>
<volume>3</volume>
<elocation-id>2595</elocation-id>
<history>
<date date-type="received">
<day>30</day>
<month>11</month>
<year>2012</year>
</date>
<date date-type="accepted">
<day>21</day>
<month>08</month>
<year>2013</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright © 2013, Macmillan Publishers Limited. All rights reserved</copyright-statement>
<copyright-year>2013</copyright-year>
<copyright-holder>Macmillan Publishers Limited. All rights reserved</copyright-holder>
<license license-type="open-access" xlink:href="http://creativecommons.org/licenses/by/3.0/">
<pmc-comment>author-paid</pmc-comment>
<license-p>This work is licensed under a Creative Commons Attribution 3.0 Unported License. To view a copy of this license, visit
<ext-link ext-link-type="uri" xlink:href="http://creativecommons.org/licenses/by/3.0/">http://creativecommons.org/licenses/by/3.0/</ext-link>
</license-p>
</license>
</permissions>
<abstract>
<p>The existence of a common mechanism for visual and haptic representations has been reported in object perception. In contrast, representations of movements might be more specific to modalities. Referring to the vertical axis is natural for visual representations, whereas a fixed reference axis might be inappropriate for haptic movements and thus also inappropriate for their representations in the brain. The present study found that visual and haptic movement representations are processed independently. A psychophysical experiment examining mental rotation revealed the well-known effect of rotation angle for visual representations, whereas no such effect was found for haptic representations. We also found no interference between processes for visual and haptic movements in an experiment where different stimuli were presented simultaneously through visual and haptic modalities. These results strongly suggest that (1) there are separate representations of visual and haptic movements, and (2) the haptic process has a rotation-independent representation.</p>
</abstract>
</article-meta>
</front>
<body>
<p>We move our hands and arms to write, draw, and gesture, referring to representations of actions and/or images in memory, which can be obtained through active/passive haptic movements or through visual information on another person's movements. Both visual and haptic inputs can ultimately be used for haptic movements. Yet to be elucidated, however, is whether visual and haptic information are represented in the same system. Although interaction between the inputs is known for motor skill acquisition
<xref ref-type="bibr" rid="b1">1</xref>
, it does not answer the question of how the representations themselves are processed. Representations of haptic movements that differ from visual ones likely offer an advantage for body-movement control, free from the restriction of a coordinate system fixed to the visual system: the visual system usually uses a coordinate system that takes the vertical axis as its reference, while the motor process has its own coordinate system for movement control, as suggested by the difference in subjective vertical between vision and haptics
<xref ref-type="bibr" rid="b2">2</xref>
<xref ref-type="bibr" rid="b3">3</xref>
. The representation of haptic movements might have fewer constraints in spatial coordinates to allow greater freedom for control. Note that here we consider passive haptic movements and another person's movements as inputs, although our interest is in active haptic movements. This is because the motor cortex is activated by passive movements
<xref ref-type="bibr" rid="b4">4</xref>
<xref ref-type="bibr" rid="b5">5</xref>
and similar brain activity is found for action and visual motion perception
<xref ref-type="bibr" rid="b6">6</xref>
<xref ref-type="bibr" rid="b7">7</xref>
.</p>
<p>Several psychophysical studies have found that signals from visual and haptic modalities are processed in a common multimodal representation system
<xref ref-type="bibr" rid="b8">8</xref>
<xref ref-type="bibr" rid="b9">9</xref>
<xref ref-type="bibr" rid="b10">10</xref>
<xref ref-type="bibr" rid="b11">11</xref>
<xref ref-type="bibr" rid="b12">12</xref>
<xref ref-type="bibr" rid="b13">13</xref>
<xref ref-type="bibr" rid="b14">14</xref>
<xref ref-type="bibr" rid="b15">15</xref>
<xref ref-type="bibr" rid="b16">16</xref>
<xref ref-type="bibr" rid="b17">17</xref>
<xref ref-type="bibr" rid="b18">18</xref>
. Several functional magnetic resonance imaging (fMRI) studies have also shown involvement of common brain areas in visual and haptic recognition
<xref ref-type="bibr" rid="b19">19</xref>
<xref ref-type="bibr" rid="b20">20</xref>
<xref ref-type="bibr" rid="b21">21</xref>
<xref ref-type="bibr" rid="b22">22</xref>
<xref ref-type="bibr" rid="b23">23</xref>
, which also suggests a common process for visual and haptic representations. Representations of haptic movements, however, might be processed in a system different from the one for object representations. For the visual process, there is a known dichotomy between the motion/space/action-related (dorsal) pathway and the color/object-related (ventral) pathway
<xref ref-type="bibr" rid="b24">24</xref>
<xref ref-type="bibr" rid="b25">25</xref>
<xref ref-type="bibr" rid="b26">26</xref>
<xref ref-type="bibr" rid="b27">27</xref>
. There might also be different haptic processes for object and movement information. Knowledge of the object representation process might not help to understand the representation of movement signals obtained through haptic perception.</p>
<p>Because actions are often based on visual information, representations of haptic movements and their relationship with vision should be as important as or even more important than representations for objects, particularly when one mimics another person's limb movements, as suggested by studies on the mirror-neuron system
<xref ref-type="bibr" rid="b28">28</xref>
: A mirror neuron responds to both one's own action and observation of the same action performed by another. To transform someone's action into self-action, haptic representations independent of visual representations potentially play an important role because visual representations are usually related to physical space/objects and are often viewpoint-dependent.</p>
<p>Despite this importance, no study, to the best of our knowledge, has examined whether a modality-specific or multimodal process contributes to movement representations. On the one hand, a few studies have suggested a common process for visual and haptic motion perception
<xref ref-type="bibr" rid="b29">29</xref>
<xref ref-type="bibr" rid="b30">30</xref>
, but that is about perception of object motion, not haptic movements. On the other hand, hand movements have often been used for object perception
<xref ref-type="bibr" rid="b22">22</xref>
<xref ref-type="bibr" rid="b31">31</xref>
, but our interest is in representation of movements themselves, such as those of drawing characters. We compared the characteristics of an imagery task involving mental rotation
<xref ref-type="bibr" rid="b32">32</xref>
<xref ref-type="bibr" rid="b33">33</xref>
<xref ref-type="bibr" rid="b34">34</xref>
<xref ref-type="bibr" rid="b35">35</xref>
between visual and haptic movements using a virtual display of both visual and haptic information (
<xref ref-type="fig" rid="f1">Fig. 1</xref>
). Although similar effects of rotation angle have been reported for visual and haptic object representations
<xref ref-type="bibr" rid="b8">8</xref>
<xref ref-type="bibr" rid="b36">36</xref>
<xref ref-type="bibr" rid="b37">37</xref>
, and the contribution of parietal and premotor cortices to mental rotation of visual stimuli has been suggested
<xref ref-type="bibr" rid="b38">38</xref>
, investigation of movement representations is a different issue. No study has compared the effect of rotation angle on reproducing movement patterns obtained through visual and haptic inputs, although stimulus objects have often been explored with haptic movements
<xref ref-type="bibr" rid="b22">22</xref>
<xref ref-type="bibr" rid="b31">31</xref>
<xref ref-type="bibr" rid="b39">39</xref>
. Perhaps we obtain a mental rotation effect for visual stimuli similar to that obtained with static visual images, because a movement pattern can easily be represented as a static line drawing. In contrast, we might not obtain a similar effect for haptic stimuli, because haptic information without reference to a particular axis is possibly advantageous for controlling limb movements. This contrasts with object recognition, where haptic and visual representations benefit from sharing the same coordinate system independent of the source modality.</p>
<p>For the mental rotation stimuli, we used two-stroke patterns expressed either by the movements of a visual stimulus or by a passive haptic movement. A computer controlled the movements in both cases. There was a learning phase and a test phase (here, learning means temporary memorization of movement patterns, not training of a motor skill). In the learning phase, a movement pattern was presented to the participant either visually on a display or haptically via a force-feedback device. To present visual patterns, a computer moved a yellow disk on the display. To present haptic patterns, a computer moved the stylus of a force-feedback device held in the participant's right hand. The stylus pulled the participant's hand, and the participant perceived the movement pattern through the passive hand movement. The stylus was static during the visual presentation, and the display was dark during the haptic presentation.</p>
<p>In the subsequent test phase, the pattern was rotated by an angle, and the first stroke was presented. The participant's task was to recall the learned pattern and reproduce the second stroke (
<xref ref-type="fig" rid="f1">Fig. 1b</xref>
). There were visual and haptic test conditions. A computer presented the first stroke by the movement of the yellow disk in the visual test. The participant was instructed to indicate the end point of the second stroke by the cursor (the same yellow disk), which was moved with the force-feedback device (
<xref ref-type="fig" rid="f1">Fig. 1c</xref>
). In the haptic test, the computer presented the first stroke by the stylus's movement, and the participant was instructed to draw the second stroke by moving the stylus (no force feedback except the reaction force from the virtual plane). Although the participant used the stylus in both visual and haptic test conditions, the task in the visual test was to place the visual cursor at the terminal point of the imaginary second stroke, whereas that in the haptic test was to move the stylus to draw the second stroke without visual feedback. Because the first stroke was given visually and the participant was instructed to move the visual cursor on the display, the visual signal was assumed to dominate in the visual test. Similarly, because the first stroke was given haptically without visual stimulation and the participant was instructed to draw the second stroke, the haptic signal had to be used in the haptic test. In both test conditions, we measured the latency until the stylus started to move. This is a mental rotation task because the participant had to rotate the representation of the learned pattern to perform it, although it differs from conventional normal/mirror image discrimination
<xref ref-type="bibr" rid="b32">32</xref>
<xref ref-type="bibr" rid="b33">33</xref>
<xref ref-type="bibr" rid="b34">34</xref>
.</p>
<p>To isolate differences in representation between visual and haptic processes from other response-related effects, we used four combinations of learning and test modalities: visual learning and visual test (VV), visual learning and haptic test (VH), haptic learning and visual test (HV), and haptic learning and haptic test (HH). Common features between VV and VH reflect characteristics of visual representations, and those between HV and HH reflect characteristics of haptic representations.</p>
<sec disp-level="1" sec-type="results">
<title>Results</title>
<p>We found different effects of stimulus rotation between visual and haptic learning conditions independent of test conditions. The average response latency, which is the average of the individual median latencies, increased with rotation angle for the visually learned stimulus. In contrast, we observed a much weaker effect of rotation angle for the haptically learned stimulus (
<xref ref-type="fig" rid="f2">Fig. 2</xref>
). A conventional mental rotation effect, longer time for larger rotations
<xref ref-type="bibr" rid="b32">32</xref>
<xref ref-type="bibr" rid="b33">33</xref>
<xref ref-type="bibr" rid="b34">34</xref>
<xref ref-type="bibr" rid="b35">35</xref>
, was found in the visual learning condition with both the haptic and the visual tests. In contrast, for haptic learning, latency differed only between the 0° rotation and the other angles, again in both types of tests. These findings were confirmed by statistical tests. A three-way within-subjects analysis of variance (ANOVA) (learning modality, test modality, and rotation angle) showed significant main effects of test modality (F(1,7) = 532.99, p < 0.0001) and rotation angle (F(3,21) = 14.09, p < 0.0001), and a significant interaction between learning modality and rotation angle (F(3,21) = 3.08, p < 0.05). We also tested the effect of order by separating the data of the first and second sessions. A four-way ANOVA (first/second session, learning modality, test modality, and rotation angle) showed a significant interaction between learning modality and rotation angle (F(3,21) = 4.32, p < 0.01), whereas no significant interaction among learning modality, rotation angle, and first/second session was found (F(3,21) = 0.09, p > 0.1). We also found a shortening of latency (approximately 4% on average) in the second session (F(3,21) = 14.2, p < 0.001), as expected from practice and/or learning effects.</p>
<p>To test differences among rotation angles following the ANOVA results, we performed multiple comparisons between all pairs of rotation angles, except between 90° and 270° (i.e., −90°), for which the angular distance from the original orientation is the same. We used the averages of the visual and haptic test conditions because the ANOVA showed a significant interaction between rotation angle and learning modality, but not between test modality and rotation angle. With Holm's correction, a
<italic>t</italic>
-test showed significant differences between all five comparisons for visual learning. The same test showed a significant difference between only 0° and 90°, 180°, or 270° for haptic learning (p < 0.05).</p>
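The Holm (step-down Bonferroni) procedure used for these comparisons can be sketched in a few lines of Python (a generic illustration with hypothetical p-values, not the authors' analysis code):

```python
def holm_correction(p_values, alpha=0.05):
    """Holm-Bonferroni step-down procedure.

    Sort p-values ascending; the i-th smallest (0-indexed) is compared
    against alpha / (m - i). Reject until the first non-rejection, then
    stop: all later hypotheses are retained.
    """
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    reject = [False] * m
    for rank, i in enumerate(order):
        if p_values[i] <= alpha / (m - rank):
            reject[i] = True
        else:
            break  # step-down: stop at the first non-significant test
    return reject

# Hypothetical p-values from five pairwise latency comparisons
ps = [0.001, 0.004, 0.018, 0.03, 0.20]
print(holm_correction(ps))  # → [True, True, False, False, False]
```

Note that 0.018 fails here even though it is below 0.05: at its rank the Holm threshold is 0.05/3 ≈ 0.017, which is what makes the procedure control the family-wise error rate.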
<p>There was only a trivial effect of rotation angle on response accuracy, as in previous visual mental rotation experiments
<xref ref-type="bibr" rid="b32">32</xref>
<xref ref-type="bibr" rid="b34">34</xref>
. Although the ANOVA showed a significant main effect of rotation angle (F(3,21) = 8.65, p < 0.0001), no interaction was found between any two factors. The effect was very small, less than 15% of the standard deviation (
<xref ref-type="supplementary-material" rid="s1">Supplementary Fig. 1</xref>
). The
<italic>t</italic>
-test with Holm's correction showed significant differences only between 0° and 90° or 270° for the data averaged over all learning and test conditions (p < 0.05). It is not surprising that there was a benefit in task performance in the 0° condition, where the test stimulus was a repetition of the learned stimulus.</p>
<p>These results are consistent with the hypothesis that visual and haptic information are represented by different processes. Haptic movements are coded as rotation-independent representations, as opposed to visual representations, which depend on rotation angle. Note that we do not expect an interaction between test modality and rotation angle here. If visual and haptic information is stored through separate mechanisms, the effect of test angle should be the same for a given learning modality, but need not be the same for a given test modality. Although more time might be required to access the representation when different modalities are used in the learning and test phases, no difference in the effect of rotation angle is expected with the same learning modality as far as mental rotation of the memorized representation is concerned. This is not only strong evidence for separate representations of visual and haptic movements, but also reveals a unique feature of the haptic movement process. This is not to say, however, that visual imagery is never obtained through haptic stimulation. Even if the participants constructed visual representations to some degree from haptic movements, these results suggest that the visual representation obtained from haptic learning was not used, or at least was used less often than the haptic representation.</p>
<p>The difference found between 0° and the other angles for haptic learning appears to complicate the interpretation of the results, because such a difference is not consistent with the assumption of rotation-independent representations. The difference can, however, be attributed to a specific effect of repeated movements in the 0° rotation condition. In the HH condition with 0° rotation, the haptic movement was identical between the learning and test phases, and identical movements could further shorten latency for the second movement. To examine this possible effect of repeated movements, we performed additional experiments using the HV and HH conditions. The only difference from the main experiment was a task interposed between the learning and test phases: for 4 s, the participant tracked a circularly moving target on the display with a visual cursor controlled by the haptic device. Latency data now showed no difference between the 0° and 90° rotations, and indeed no difference between any rotation angles, in either condition (
<xref ref-type="fig" rid="f3">Fig. 3</xref>
). The same ANOVA as in the main experiment showed a significant effect only of the test condition (F(1,7) = 236.20, p < 0.0001). The difference between the 0° and 90° rotations for haptic learning in the main experiment can therefore be attributed to facilitation by repeating the arm movement between the learning and test phases.</p>
<p>The existence of two independent processes for visual and haptic representations raises an important question: are visual and haptic representations integrated into a multimodal representation? The differences between learning modalities, independent of test modality, suggest no integration of visual and haptic representations. If there is a common process, it is likely used for efficient task execution when different modalities are used in the learning and test phases. To address this issue directly, we conducted a second experiment, in which visual and haptic movements were presented simultaneously so that we could look for interference between the two types of representations. We would expect no interference if visual and haptic representations were never integrated. After simultaneously presenting visual and haptic movements in the learning phase, we presented an auditory cue in the interval between the learning and test phases to indicate which of the two movements to recall (one beep for the visual stimulus, two beeps for the haptic stimulus). The participant thus had to memorize both visual and haptic movements for the task in the test phase. There were consistent and inconsistent trials: visual and haptic movements were the same in consistent trials and different in inconsistent trials (
<xref ref-type="fig" rid="f4">Fig. 4</xref>
).</p>
<p>Results in the inconsistent condition revealed little or no interaction between visual and haptic learning/memory processes (
<xref ref-type="fig" rid="f5">Fig. 5(A)</xref>
). Even when visual and haptic information was memorized simultaneously, latency was similar to that under single-modality conditions. Moreover, the effect of rotation angle was specific to the learned modality, as under single-modality conditions.</p>
<p>A four-way within-subjects ANOVA (learning modality, test modality, consistent/inconsistent, and rotation angle) showed significant main effects of test modality (F(1,7) = 569.61, p < 0.0001) and rotation angle (F(3,21) = 11.31, p < 0.0001). We found a significant interaction between rotation angle and learning modality (F(3,21) = 3.86, p < 0.05) as in Experiment 1. A three-way interaction among consistent/inconsistent, test modality and rotation angle was also significant (F(3,21) = 3.89, p < 0.05). In inconsistent trials, the
<italic>t</italic>
-test showed significant differences (p < 0.05) between all five comparisons for visual learning, whereas it showed a significant difference (p < 0.05) between only 0° and 90°, 180°, or 270° for haptic learning with Holm's correction.</p>
<p>Although we had no specific prediction for consistent trials, we expected the response to reflect some mixture of the two representations, because both visual and haptic representations were available in the test. The general trend of the consistent-trial results is similar to that of visual learning in Experiment 1 (
<xref ref-type="fig" rid="f5">Fig. 5(B)</xref>
). However, the latency functions lie somewhere between the visual and haptic learning results of Experiment 1. Rotation angle clearly changed latency, but the magnitude of the angle effect was smaller than that for visual learning in Experiment 1 and that in the inconsistent trials, consistent with the significant three-way interaction among consistent/inconsistent, test modality, and rotation angle. The
<italic>t</italic>
-test showed significant differences (p < 0.05) between all five comparisons for haptic learning and all but one comparison for visual learning (the exception being that between 180° and 270°). This appears to indicate interference between the two representation systems, which would disagree with the idea of independent representations for visual and haptic movements. However, this pattern of results can be predicted by assuming random selection from two independent representations (see the next section).</p>
<sec disp-level="2">
<title>Probabilistic interaction of visual and haptic representations</title>
<p>We investigated whether independent visual and haptic representations are sufficient to predict the apparent interference shown in the consistent trials of the second experiment. Under the assumption that the two processes work independently, the participant should use one of the two representations to perform the task in a consistent trial, choosing between them randomly from trial to trial with some probability. In that case, the median latency can be determined from a mixture of the latency distributions for the visual and haptic representations.</p>
<p>We simulated the latency results of consistent trials by using a model of random selection applied to the latency distributions of inconsistent trials. Assuming that each of the two representations was chosen with a certain probability (α and 1 − α for the visual and haptic representations, respectively; α was varied in steps of 0.05), we obtained a latency distribution of consistent trials for each participant by using 2100 random samples (the number was set to be a multiple of 21, because 1/21 was the step of probability changes). We then calculated the median latency from each individual's distribution and averaged over the individual medians. We repeated this procedure 1000 times and obtained the average for comparison with the actual results (
<xref ref-type="supplementary-material" rid="s1">Supplementary Fig. 2</xref>
). By comparing the prediction error across α values, we chose the α with the minimum least-squares error for each of the visual and haptic responses. With the best α value, the random-selection model predicts the experimental data well, both in shape and in absolute values (
<xref ref-type="fig" rid="f6">Fig. 6</xref>
): α was 0.6 for the visual and 0.75 for the haptic response condition. These values indicate a greater contribution of the visual representation than of the haptic representation, independent of response type.</p>
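The random-selection computation can be sketched in a few lines of Python. The latency pools below are hypothetical stand-ins for the per-participant inconsistent-trial distributions, and `mixture_median` is an illustrative helper, not the authors' code:

```python
import random
import statistics

def mixture_median(visual_latencies, haptic_latencies, alpha, n=2100, seed=0):
    """Median latency predicted by random selection: on each simulated
    trial the visual representation is chosen with probability alpha,
    otherwise the haptic one."""
    rng = random.Random(seed)
    samples = [
        rng.choice(visual_latencies) if rng.random() < alpha
        else rng.choice(haptic_latencies)
        for _ in range(n)
    ]
    return statistics.median(samples)

# Hypothetical latency pools (s); vision assumed faster than haptics here
visual = [0.9, 1.0, 1.1, 1.2]
haptic = [1.5, 1.6, 1.7, 1.8]
predicted = mixture_median(visual, haptic, alpha=0.6)
```

Sweeping `alpha` over a grid and keeping the value with the smallest squared error against the observed consistent-trial medians mirrors the fitting procedure described above.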
<p>Integration of different signals in relation to Bayesian theory has been investigated in several cue integration studies
<xref ref-type="bibr" rid="b40">40</xref>
<xref ref-type="bibr" rid="b41">41</xref>
<xref ref-type="bibr" rid="b42">42</xref>
. Ernst and Banks, for example, showed that visual and haptic signals are averaged with weights determined by signal reliability
<xref ref-type="bibr" rid="b40">40</xref>
. We examined whether the same rule applied to the relative contributions of the visual and haptic representations in consistent trials. Assuming independent visual and haptic processes, our analysis showed a relative contribution, or probability of selection, of about 0.7 (0.6 and 0.75 for the two response conditions) for the visual representation and about 0.3 for the haptic representation. The larger contribution of the visual signal might be related to the higher reliability of the visual representation relative to the haptic one. The reliability of each signal can be estimated from the precision of the response, that is, the standard deviation of the goal direction indicated by the participants. The standard deviations of the response angles were 49.6°, 56.7°, 57.1°, and 57.8° for VV, VH, HH, and HV, respectively. Reliability was slightly higher (the standard deviation was smaller) for the visual than for the haptic learning condition with the visual response, whereas little difference was seen for the haptic response. The relative contributions predicted from reliability were 0.55 for visual learning and 0.45 for haptic learning with the visual response, and 0.50 and 0.50 with the haptic response. The signal selection ratios that we estimated from the latency results (0.60 vs. 0.40 and 0.75 vs. 0.25) are therefore not likely to be determined directly by signal reliability; signal selection might be fundamentally different from signal integration.</p>
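The reliability-based prediction referred to here follows the standard inverse-variance rule for cue weights; a quick check with the reported standard deviations (a generic sketch of the rule, and `reliability_weights` is not from the paper):

```python
def reliability_weights(sd_a, sd_b):
    """Maximum-likelihood cue weights from inverse variances:
    the more reliable signal (smaller SD) gets the larger weight."""
    r_a, r_b = 1.0 / sd_a ** 2, 1.0 / sd_b ** 2
    return r_a / (r_a + r_b), r_b / (r_a + r_b)

# SDs of the indicated direction for the visual-response conditions (deg)
w_vis, w_hap = reliability_weights(49.6, 57.8)  # VV vs. HV
print(round(w_vis, 2), round(w_hap, 2))         # → 0.58 0.42
```

Whether one weights by inverse variance (≈0.58/0.42) or inverse SD, the reliability-based prediction stays near equality, unlike the 0.75 vs. 0.25 selection ratio estimated from the haptic-response latencies, which is the contrast the paragraph above turns on.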
</sec>
</sec>
<sec disp-level="1" sec-type="discussion">
<title>Discussion</title>
<p>Representations of haptic information do not seem to have a particular reference angle and can thus be considered rotation-independent. This study indicated that no cost is incurred in using a haptic movement representation at an angle different from the memorized one. This contrasts with visual representation, which requires more time for larger rotations. The difference might be related to the reference axis of the representation system: there is perhaps no physical direction that is special for haptic movements, whereas the vertical direction is special for visual perception. One can draw a letter similarly on a frontoparallel plane or on a plane at either side of the body. Haptic information has the benefit of being represented relative to the body. However, the rotation-independent representation we found is a general property of haptic perception rather than a property specific to the motor control system. A hand-centered representation cannot explain the present results, because the haptic device was held and the hand position was similar for all rotations in the experiments. If haptic movements were represented, for example, in shoulder coordinates, there would be no reason to expect the absence of a mental rotation effect. We suggest that the difference between visual and haptic representations of movements is not simply due to the coordinate system, but reflects a qualitative difference in the represented information. Such a representation might be related to the mirror-neuron system. When one tries to mimic another person's movements as visually observed, the observed movement information and that used for one's own movements are represented from different viewpoints: the third- and first-person viewpoints. A system with rotation-independent representations could be used to translate between the two and/or to integrate them. The method developed in this study has the potential to reveal this relationship in future studies.</p>
<p>The rotation-independent representation contrasts with the representation process for haptically perceived objects. A mental rotation effect has been reported for haptic objects
<xref ref-type="bibr" rid="b8">8</xref>
<xref ref-type="bibr" rid="b36">36</xref>
<xref ref-type="bibr" rid="b37">37</xref>
and mutual interference between haptic and visual stimulation has also been reported
<xref ref-type="bibr" rid="b22">22</xref>
<xref ref-type="bibr" rid="b43">43</xref>
. These studies suggest a common process for haptic and visual object representations. However, the representation for objects and that for movement signals are possibly very different. Indeed, in vision they are processed in different pathways
<xref ref-type="bibr" rid="b24">24</xref>
<xref ref-type="bibr" rid="b25">25</xref>
: object recognition in the ventral pathway, and movement perception in the dorsal pathway. For haptic signals, there are suggestions that both the dorsal and ventral pathways contribute to haptic perception, but in different ways
<xref ref-type="bibr" rid="b20">20</xref>
<xref ref-type="bibr" rid="b21">21</xref>
<xref ref-type="bibr" rid="b39">39</xref>
. The dorsal pathway likely processes haptic texture and the ventral pathway likely processes haptic objects
<xref ref-type="bibr" rid="b20">20</xref>
<xref ref-type="bibr" rid="b21">21</xref>
. Moreover, representations of texture are perhaps rotation independent, because no particular reference axis is required for texture perception. Our finding of rotation independence in haptic movement representation, which differs from haptic object representation, is consistent with this dichotomy of haptic processes.</p>
</sec>
<sec disp-level="1" sec-type="methods">
<title>Methods</title>
<sec disp-level="2">
<title>Experiment 1</title>
<sec disp-level="3">
<title>Stimulus</title>
<p>The stimuli were two-stroke patterns (
<xref ref-type="fig" rid="f1">Fig. 1</xref>
). The length of each line segment was between 40 and 50 mm (6.0° and 7.5° in visual angle), and the angle between the two segments varied randomly. The visual stimulus was a yellow disk with a diameter of 4 mm (0.6° in visual angle). Disk luminance was 125 cd/m
<sup>2</sup>
with CIE1931
<italic>xy</italic>
-coordinates (0.40, 0.50) against a black background (0.75 cd/m
<sup>2</sup>
). The haptic stimulus was movement of the stylus of a force-feedback device. A computer controlled the force needed to pull the stylus on a virtual plane. Movement speed of the disk and stylus was 6.0 cm/s (9.0°/s).</p>
</sec>
<sec disp-level="3">
<title>Apparatus</title>
<p>The visual stimulus was presented on a cathode-ray-tube display (Iiyama, HF703U), which the participant viewed through a mirror; behind the mirror, the participant moved the stylus of a force-feedback device with the right hand (
<xref ref-type="fig" rid="f1">Fig. 1</xref>
). When the locus of the cursor movement was shown, the participant felt as if drawing with a pen (the locus was never shown during the experiment). The left hand rested on the desk or knee. The display size was 32.5 × 24.5 cm, the refresh rate was 75 Hz, and the viewing distance was 38 cm. The force-feedback device (PHANTOM Omni, SensAble) was positioned so that the tip of the stylus and the cursor on the display coincided. A virtual haptic plane corresponded to the virtual visual display behind the mirror. Position data were obtained from the force-feedback device at the display's refresh rate (75 Hz). Experiments were performed in a room with no light source other than the display.</p>
</sec>
<sec disp-level="3">
<title>Procedure</title>
<p>We used four combinations of visual and haptic stimuli for the learning and test phases: visual learning and visual test (VV), visual learning and haptic test (VH), haptic learning and visual test (HV), and haptic learning and haptic test (HH). In the learning phase, a two-stroke stimulus was presented either by a moving visual disk or by a haptic force that moved the participant's hand to draw the lines.</p>
<p>The direction of the first stroke in the learning phase was randomly chosen from 45°, 135°, 225°, and 315° in each trial, and the angle between the two lines was also randomly chosen in each trial, with the restriction that the same percentage of trials be drawn from eight evenly divided ranges of angles: −22.5° to 22.5°, 22.5° to 67.5°, …, 292.5° to 337.5°. In the test phase, the first stroke was given after the stimulus was rotated by 0°, 90°, 180°, or 270°. The task was to recall and indicate the second stroke as soon as possible after the computer presented the first stroke, mentally rotating the learned stimulus as in a conventional visual mental rotation task (although the task might be performed without mental rotation in haptic learning).</p>
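Mentally rotating the learned pattern corresponds formally to a 2-D rotation of each stroke endpoint about the pattern's origin. A minimal sketch of that geometry (the 45 mm stroke and the `rotate_point` helper are illustrative, not part of the experimental software):

```python
import math

def rotate_point(x, y, angle_deg):
    """Rotate (x, y) counter-clockwise about the origin by angle_deg."""
    a = math.radians(angle_deg)
    return (x * math.cos(a) - y * math.sin(a),
            x * math.sin(a) + y * math.cos(a))

# End point of a hypothetical 45 mm first stroke drawn at 45 degrees
end = rotate_point(45.0, 0.0, 45)
# The four test-phase rotations applied to that end point
rotated = [rotate_point(*end, a) for a in (0, 90, 180, 270)]
```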
<p>A trial started when the participant pressed a button on the haptic device while fixating on a fixation spot on the display. The location of the fixation spot, which corresponded to the stylus location, was randomly determined within a central 20 mm square area at the center of the display. The haptic device pulled the participant's hand by means of the stylus to the location. After a blank interval chosen randomly between 2 and 3 s, a learning stimulus was presented either visually or haptically. Before the test phase, a blank interval was also randomly chosen between 2.5 and 3.5 s. During the interval, the haptic device pulled the participant's hand again to the central area with the same positional randomization. The participant's task was to recall the learned pattern and to show the second stroke in an appropriate rotation. The computer presented the first stroke by movement of the yellow disk in the visual test. The participant was instructed to indicate the end point of the second stroke by the cursor on the display, which was moved by the force-feedback device. In the haptic test, the computer presented the first stroke by the stylus's movement, and the participant was instructed to draw the second stroke by moving the stylus from the end location of the first stroke.</p>
<p>Each session consisted of 128 trials (4 rotations × 4 first-stroke directions in the learning phase × 8 ranges of the angle between the strokes), and each participant performed two sessions in each of the four conditions. For all participants, the conditions were run in the fixed order VV, VH, HV, HH in the first session and in the reverse order in the second session.</p>
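The 128-trial session structure can be enumerated directly as the Cartesian product of the three factors (a sketch of the design, not the authors' code; the bin values shown are the lower bounds of the eight angle ranges):

```python
import itertools
import random

rotations = (0, 90, 180, 270)            # test-phase rotations (deg)
first_directions = (45, 135, 225, 315)   # first-stroke directions (deg)
# Lower bound of each 45-degree bin for the inter-stroke angle
angle_bins = tuple(-22.5 + 45.0 * k for k in range(8))

trials = list(itertools.product(rotations, first_directions, angle_bins))
random.shuffle(trials)  # randomized presentation order within a session
print(len(trials))      # → 128
```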
</sec>
<sec disp-level="3">
<title>Participants</title>
<p>Eight students at Tohoku University participated in the experiments (age range 22–24 years). All had normal or corrected-to-normal visual acuity. All participants were right-handed and held the stylus with their right hand. They had experience in psychophysical experiments but did not know the purpose of the experiment. The research was conducted according to the principles expressed in the Declaration of Helsinki. The experiments were undertaken with the understanding and written consent of each participant.</p>
</sec>
</sec>
<sec disp-level="2">
<title>Experiment 2</title>
<sec disp-level="3">
<title>Procedure</title>
<p>The stimulus, apparatus, and participants were the same as in Experiment 1. The experimental paradigm was also similar, except that visual and haptic movements were presented simultaneously in the learning phase. Either a visual or a haptic stimulus was used in the test phase of a session. Because an auditory cue in the interval between the learning and test phases indicated whether the visual or the haptic pattern would be tested (one beep for visual, two beeps for haptic), the participant had to memorize both visual and haptic movements to perform the task appropriately.</p>
<p>The learning phase had both consistent and inconsistent trials. Visual and haptic movements were the same in consistent trials and different in inconsistent trials. Three possible inconsistent stimuli corresponded to one consistent stimulus (
<xref ref-type="fig" rid="f4">Fig. 4</xref>
). To keep conditions similar between consistent and inconsistent trials, we used inconsistent movements that were symmetrical about either the axis of the first stroke or the perpendicular axis. To differentiate clearly between the two modalities in the inconsistent stimulus, we did not use angles between the first and second strokes close to 0°, 90°, 180°, or 270° (excluding the range of −22.5° to +22.5°). To equate the numbers of consistent and inconsistent trials, we used the consistent stimulus three times as often as each of the inconsistent stimuli. Test modality was fixed within a session, and the learned stimulus to be recalled was chosen randomly in each trial. Two types of sessions differed in test modality (visual test, VV/HV, or haptic test, VH/HH), and both consistent and inconsistent trials were mixed within a session.</p>
<p>For each session, we conducted 192 trials (4 rotations × 4 first-stroke directions in the learning phase × 6 consistent/inconsistent combinations [3 inconsistent and 3 consistent trials] × 2 types of retrieval stimuli) and randomly divided them into two blocks, such that each participant ran one block of 96 trials per day.</p>
</sec>
<sec disp-level="3">
<title>Statistical tests</title>
<p>We used within-subjects ANOVAs to test the effect of rotation angle on latency under the various conditions. To compare latencies across rotation angles, we performed
<italic>t</italic>
-tests with Holm's correction for multiple comparisons.</p>
</sec>
</sec>
</sec>
<sec disp-level="1">
<title>Author Contributions</title>
<p>S.S. and T.Y. designed the experiment; T.Y. and K.M. built the experimental setup; T.Y. performed experiments; S.S., T.Y., K.M. and I.K. wrote the manuscript.</p>
</sec>
<sec sec-type="supplementary-material" id="s1">
<title>Supplementary Material</title>
<supplementary-material id="d33e25" content-type="local-data">
<caption>
<title>Supplementary Information</title>
<p>Supplementary information</p>
</caption>
<media xlink:href="srep02595-s1.doc" mimetype="application" mime-subtype="msword"></media>
</supplementary-material>
</sec>
</body>
<back>
<ack>
<p>This work was partially supported by a Grant-in-Aid for Scientific Research (B 22330198) to S. Shioiri from the Japan Society for the Promotion of Science and by the Core Research for Evolutional Science and Technology of the Japan Science and Technology Corporation. The authors thank Kazuya Matsubara for help in data analysis.</p>
</ack>
<ref-list>
<ref id="b1">
<mixed-citation publication-type="journal">
<name>
<surname>Anguera</surname>
<given-names>J. A.</given-names>
</name>
,
<name>
<surname>Reuter-Lorenz</surname>
<given-names>P. A.</given-names>
</name>
,
<name>
<surname>Willingham</surname>
<given-names>D. T.</given-names>
</name>
&
<name>
<surname>Seidler</surname>
<given-names>R. D.</given-names>
</name>
<article-title>Contributions of spatial working memory to visuomotor learning</article-title>
.
<source>J. Cogn. Neurosci.</source>
<volume>22</volume>
,
<fpage>1917</fpage>
<lpage>1930</lpage>
(
<year>2010</year>
).</mixed-citation>
</ref>
<ref id="b2">
<mixed-citation publication-type="journal">
<name>
<surname>Gentaz</surname>
<given-names>E.</given-names>
</name>
,
<name>
<surname>Baud-Bovy</surname>
<given-names>G.</given-names>
</name>
&
<name>
<surname>Luyat</surname>
<given-names>M.</given-names>
</name>
<article-title>The haptic perception of spatial orientations</article-title>
.
<source>Exp. Brain Res.</source>
<volume>187</volume>
,
<fpage>331</fpage>
<lpage>348</lpage>
(
<year>2008</year>
).
<pub-id pub-id-type="pmid">18446332</pub-id>
</mixed-citation>
</ref>
<ref id="b3">
<mixed-citation publication-type="journal">
<name>
<surname>Prather</surname>
<given-names>S. C.</given-names>
</name>
&
<name>
<surname>Sathian</surname>
<given-names>K.</given-names>
</name>
<article-title>Mental rotation of tactile stimuli</article-title>
.
<source>Cognit. Brain Res.</source>
<volume>14</volume>
,
<fpage>91</fpage>
<lpage>98</lpage>
(
<year>2002</year>
).</mixed-citation>
</ref>
<ref id="b4">
<mixed-citation publication-type="journal">
<name>
<surname>Weiller</surname>
<given-names>C.</given-names>
</name>
<italic>et al.</italic>
<article-title>Brain representation of active and passive movements</article-title>
.
<source>NeuroImage</source>
<volume>4</volume>
,
<fpage>105</fpage>
<lpage>110</lpage>
(
<year>1996</year>
).
<pub-id pub-id-type="pmid">9345502</pub-id>
</mixed-citation>
</ref>
<ref id="b5">
<mixed-citation publication-type="journal">
<name>
<surname>Rowe</surname>
<given-names>J. B.</given-names>
</name>
&
<name>
<surname>Siebner</surname>
<given-names>H. R.</given-names>
</name>
<article-title>The motor system and its disorders</article-title>
.
<source>NeuroImage</source>
<volume>61</volume>
,
<fpage>464</fpage>
<lpage>477</lpage>
(
<year>2012</year>
).
<pub-id pub-id-type="pmid">22227135</pub-id>
</mixed-citation>
</ref>
<ref id="b6">
<mixed-citation publication-type="journal">
<name>
<surname>Dayan</surname>
<given-names>E.</given-names>
</name>
<italic>et al.</italic>
<article-title>Neural representations of kinematic laws of motion: evidence for action-perception coupling</article-title>
.
<source>Proc. Natl. Acad. Sci. USA</source>
<volume>104</volume>
,
<fpage>20582</fpage>
<lpage>20587</lpage>
(
<year>2007</year>
).
<pub-id pub-id-type="pmid">18079289</pub-id>
</mixed-citation>
</ref>
<ref id="b7">
<mixed-citation publication-type="journal">
<name>
<surname>Jeannerod</surname>
<given-names>M.</given-names>
</name>
&
<name>
<surname>Frak</surname>
<given-names>V.</given-names>
</name>
<article-title>Mental imaging of motor activity in humans</article-title>
.
<source>Curr. Opin. Neurobiol.</source>
<volume>9</volume>
,
<fpage>735</fpage>
<lpage>739</lpage>
(
<year>1999</year>
).
<pub-id pub-id-type="pmid">10607647</pub-id>
</mixed-citation>
</ref>
<ref id="b8">
<mixed-citation publication-type="journal">
<name>
<surname>Pellizzer</surname>
<given-names>G.</given-names>
</name>
&
<name>
<surname>Georgopoulos</surname>
<given-names>A. P.</given-names>
</name>
<article-title>Common processing constraints for visuomotor and visual mental rotations</article-title>
.
<source>Exp. Brain Res.</source>
<volume>93</volume>
,
<fpage>165</fpage>
<lpage>172</lpage>
(
<year>1993</year>
).
<pub-id pub-id-type="pmid">8467886</pub-id>
</mixed-citation>
</ref>
<ref id="b9">
<mixed-citation publication-type="journal">
<name>
<surname>Easton</surname>
<given-names>R. D.</given-names>
</name>
,
<name>
<surname>Srinivas</surname>
<given-names>K.</given-names>
</name>
&
<name>
<surname>Greene</surname>
<given-names>A. J.</given-names>
</name>
<article-title>Do vision and haptics share common representations? Implicit and explicit memory within and between modalities</article-title>
.
<source>J. Exp. Psychol. Learn.</source>
<volume>23</volume>
,
<fpage>153</fpage>
<lpage>163</lpage>
(
<year>1997</year>
).</mixed-citation>
</ref>
<ref id="b10">
<mixed-citation publication-type="journal">
<name>
<surname>Ballesteros</surname>
<given-names>S.</given-names>
</name>
,
<name>
<surname>Millar</surname>
<given-names>S.</given-names>
</name>
&
<name>
<surname>Reales</surname>
<given-names>J. M.</given-names>
</name>
<article-title>Symmetry in haptic and in visual shape perception</article-title>
.
<source>Percept Psychophys.</source>
<volume>60</volume>
,
<fpage>389</fpage>
<lpage>404</lpage>
(
<year>1998</year>
).
<pub-id pub-id-type="pmid">9599991</pub-id>
</mixed-citation>
</ref>
<ref id="b11">
<mixed-citation publication-type="journal">
<name>
<surname>Woods</surname>
<given-names>A. T.</given-names>
</name>
&
<name>
<surname>Newell</surname>
<given-names>F. N.</given-names>
</name>
<article-title>Visual, haptic and cross-modal recognition of objects and scenes</article-title>
.
<source>J. Physiol. (Paris)</source>
<volume>98</volume>
,
<fpage>147</fpage>
<lpage>159</lpage>
(
<year>2004</year>
).</mixed-citation>
</ref>
<ref id="b12">
<mixed-citation publication-type="journal">
<name>
<surname>Lacey</surname>
<given-names>S.</given-names>
</name>
,
<name>
<surname>Campbell</surname>
<given-names>C.</given-names>
</name>
&
<name>
<surname>Sathian</surname>
<given-names>K.</given-names>
</name>
<article-title>Vision and touch: multiple or multisensory representations of objects?</article-title>
<source>Perception</source>
<volume>36</volume>
,
<fpage>1513</fpage>
<lpage>1521</lpage>
(
<year>2007</year>
).
<pub-id pub-id-type="pmid">18265834</pub-id>
</mixed-citation>
</ref>
<ref id="b13">
<mixed-citation publication-type="journal">
<name>
<surname>Giudice</surname>
<given-names>N. A.</given-names>
</name>
,
<name>
<surname>Betty</surname>
<given-names>M. R.</given-names>
</name>
&
<name>
<surname>Loomis</surname>
<given-names>J. M.</given-names>
</name>
<article-title>Functional equivalence of spatial images from touch and vision: evidence from spatial updating in blind and sighted individuals</article-title>
.
<source>J. Exp. Psychol. Learn.</source>
<volume>37</volume>
,
<fpage>621</fpage>
<lpage>634</lpage>
(
<year>2011</year>
).</mixed-citation>
</ref>
<ref id="b14">
<mixed-citation publication-type="journal">
<name>
<surname>Zangaladze</surname>
<given-names>A.</given-names>
</name>
,
<name>
<surname>Epstein</surname>
<given-names>C. M.</given-names>
</name>
,
<name>
<surname>Grafton</surname>
<given-names>S. T.</given-names>
</name>
&
<name>
<surname>Sathian</surname>
<given-names>K.</given-names>
</name>
<article-title>Involvement of visual cortex in tactile discrimination of orientation</article-title>
.
<source>Nature</source>
<volume>401</volume>
,
<fpage>587</fpage>
<lpage>590</lpage>
(
<year>1999</year>
).
<pub-id pub-id-type="pmid">10524625</pub-id>
</mixed-citation>
</ref>
<ref id="b15">
<mixed-citation publication-type="journal">
<name>
<surname>Amedi</surname>
<given-names>A.</given-names>
</name>
,
<name>
<surname>Malach</surname>
<given-names>R.</given-names>
</name>
,
<name>
<surname>Hendler</surname>
<given-names>T.</given-names>
</name>
,
<name>
<surname>Peled</surname>
<given-names>S.</given-names>
</name>
&
<name>
<surname>Zohary</surname>
<given-names>E.</given-names>
</name>
<article-title>Visuo-haptic object-related activation in the ventral visual pathway</article-title>
.
<source>Nature Neurosci.</source>
<volume>4</volume>
,
<fpage>324</fpage>
<lpage>330</lpage>
(
<year>2001</year>
).
<pub-id pub-id-type="pmid">11224551</pub-id>
</mixed-citation>
</ref>
<ref id="b16">
<mixed-citation publication-type="journal">
<name>
<surname>Pietrini</surname>
<given-names>P.</given-names>
</name>
<italic>et al.</italic>
<article-title>Beyond sensory images: Object-based representation in the human ventral pathway</article-title>
.
<source>Proc. Natl. Acad. Sci. USA</source>
<volume>101</volume>
,
<fpage>5658</fpage>
<lpage>5663</lpage>
(
<year>2004</year>
).
<pub-id pub-id-type="pmid">15064396</pub-id>
</mixed-citation>
</ref>
<ref id="b17">
<mixed-citation publication-type="journal">
<name>
<surname>Merabet</surname>
<given-names>L.</given-names>
</name>
<italic>et al.</italic>
<article-title>Feeling by sight or seeing by touch?</article-title>
<source>Neuron</source>
<volume>42</volume>
,
<fpage>173</fpage>
<lpage>179</lpage>
(
<year>2004</year>
).
<pub-id pub-id-type="pmid">15066274</pub-id>
</mixed-citation>
</ref>
<ref id="b18">
<mixed-citation publication-type="journal">
<name>
<surname>Korman</surname>
<given-names>M.</given-names>
</name>
,
<name>
<surname>Teodorescu</surname>
<given-names>K.</given-names>
</name>
,
<name>
<surname>Cohen</surname>
<given-names>A.</given-names>
</name>
,
<name>
<surname>Reiner</surname>
<given-names>M.</given-names>
</name>
&
<name>
<surname>Gopher</surname>
<given-names>D.</given-names>
</name>
<article-title>Effects of Order and Sensory Modality in Stiffness Perception</article-title>
.
<source>Presence-Teleop. Virt.</source>
<volume>21</volume>
,
<fpage>295</fpage>
<lpage>304</lpage>
(
<year>2012</year>
).</mixed-citation>
</ref>
<ref id="b19">
<mixed-citation publication-type="journal">
<name>
<surname>Amedi</surname>
<given-names>A.</given-names>
</name>
,
<name>
<surname>von Kriegstein</surname>
<given-names>K.</given-names>
</name>
,
<name>
<surname>van Atteveldt</surname>
<given-names>N. M.</given-names>
</name>
,
<name>
<surname>Beauchamp</surname>
<given-names>M. S.</given-names>
</name>
&
<name>
<surname>Naumer</surname>
<given-names>M. J.</given-names>
</name>
<article-title>Functional imaging of human crossmodal identification and object recognition</article-title>
.
<source>Exp. Brain Res.</source>
<volume>166</volume>
,
<fpage>559</fpage>
<lpage>571</lpage>
(
<year>2005</year>
).
<pub-id pub-id-type="pmid">16028028</pub-id>
</mixed-citation>
</ref>
<ref id="b20">
<mixed-citation publication-type="journal">
<name>
<surname>Sathian</surname>
<given-names>K.</given-names>
</name>
<italic>et al.</italic>
<article-title>Dual pathways for haptic and visual perception of spatial and texture information</article-title>
.
<source>NeuroImage</source>
<volume>57</volume>
,
<fpage>462</fpage>
<lpage>475</lpage>
(
<year>2011</year>
).
<pub-id pub-id-type="pmid">21575727</pub-id>
</mixed-citation>
</ref>
<ref id="b21">
<mixed-citation publication-type="journal">
<name>
<surname>Stilla</surname>
<given-names>R.</given-names>
</name>
&
<name>
<surname>Sathian</surname>
<given-names>K.</given-names>
</name>
<article-title>Selective visuo-haptic processing of shape and texture</article-title>
.
<source>Hum. Brain Mapp.</source>
<volume>29</volume>
,
<fpage>1123</fpage>
<lpage>1138</lpage>
(
<year>2008</year>
).
<pub-id pub-id-type="pmid">17924535</pub-id>
</mixed-citation>
</ref>
<ref id="b22">
<mixed-citation publication-type="journal">
<name>
<surname>Tal</surname>
<given-names>N.</given-names>
</name>
&
<name>
<surname>Amedi</surname>
<given-names>A.</given-names>
</name>
<article-title>Multisensory visual-tactile object related network in humans: insights gained using a novel crossmodal adaptation approach</article-title>
.
<source>Exp. Brain Res.</source>
<volume>198</volume>
,
<fpage>165</fpage>
<lpage>182</lpage>
(
<year>2009</year>
).
<pub-id pub-id-type="pmid">19652959</pub-id>
</mixed-citation>
</ref>
<ref id="b23">
<mixed-citation publication-type="journal">
<name>
<surname>Nabeta</surname>
<given-names>T.</given-names>
</name>
,
<name>
<surname>Ono</surname>
<given-names>F.</given-names>
</name>
&
<name>
<surname>Kawahara</surname>
<given-names>J.</given-names>
</name>
<article-title>Transfer of spatial context from visual to haptic search</article-title>
.
<source>Perception</source>
<volume>32</volume>
,
<fpage>1351</fpage>
<lpage>1358</lpage>
(
<year>2003</year>
).
<pub-id pub-id-type="pmid">14959796</pub-id>
</mixed-citation>
</ref>
<ref id="b24">
<mixed-citation publication-type="journal">
<name>
<surname>Livingstone</surname>
<given-names>M.</given-names>
</name>
&
<name>
<surname>Hubel</surname>
<given-names>D.</given-names>
</name>
<article-title>Segregation of form, color, movement, and depth: anatomy, physiology, and perception</article-title>
.
<source>Science</source>
<volume>240</volume>
,
<fpage>740</fpage>
<lpage>749</lpage>
(
<year>1988</year>
).
<pub-id pub-id-type="pmid">3283936</pub-id>
</mixed-citation>
</ref>
<ref id="b25">
<mixed-citation publication-type="journal">
<name>
<surname>Merigan</surname>
<given-names>W. H.</given-names>
</name>
&
<name>
<surname>Maunsell</surname>
<given-names>J. H.</given-names>
</name>
<article-title>How parallel are the primate visual pathways?</article-title>
<source>Annu. Rev. Neurosci.</source>
<volume>16</volume>
,
<fpage>369</fpage>
<lpage>402</lpage>
(
<year>1993</year>
).
<pub-id pub-id-type="pmid">8460898</pub-id>
</mixed-citation>
</ref>
<ref id="b26">
<mixed-citation publication-type="journal">
<name>
<surname>Goodale</surname>
<given-names>M. A.</given-names>
</name>
&
<name>
<surname>Milner</surname>
<given-names>A. D.</given-names>
</name>
<article-title>Separate visual pathways for perception and action</article-title>
.
<source>Trends. Neurosci.</source>
<volume>15</volume>
,
<fpage>20</fpage>
<lpage>25</lpage>
(
<year>1992</year>
).
<pub-id pub-id-type="pmid">1374953</pub-id>
</mixed-citation>
</ref>
<ref id="b27">
<mixed-citation publication-type="journal">
<name>
<surname>Mishkin</surname>
<given-names>M.</given-names>
</name>
,
<name>
<surname>Macko</surname>
<given-names>K. A.</given-names>
</name>
&
<name>
<surname>Ungerleider</surname>
<given-names>L. G.</given-names>
</name>
<article-title>Object vision and spatial vision: Two cortical pathways</article-title>
.
<source>Trends. Neurosci.</source>
<volume>6</volume>
,
<fpage>414</fpage>
<lpage>417</lpage>
(
<year>1983</year>
).</mixed-citation>
</ref>
<ref id="b28">
<mixed-citation publication-type="journal">
<name>
<surname>Rizzolatti</surname>
<given-names>G.</given-names>
</name>
&
<name>
<surname>Craighero</surname>
<given-names>L.</given-names>
</name>
<article-title>The mirror-neuron system</article-title>
.
<source>Annu. Rev. Neurosci.</source>
<volume>27</volume>
,
<fpage>169</fpage>
<lpage>192</lpage>
(
<year>2004</year>
).
<pub-id pub-id-type="pmid">15217330</pub-id>
</mixed-citation>
</ref>
<ref id="b29">
<mixed-citation publication-type="journal">
<name>
<surname>Matteau</surname>
<given-names>I.</given-names>
</name>
,
<name>
<surname>Kupers</surname>
<given-names>R.</given-names>
</name>
,
<name>
<surname>Ricciardi</surname>
<given-names>E.</given-names>
</name>
,
<name>
<surname>Pietrini</surname>
<given-names>P.</given-names>
</name>
&
<name>
<surname>Ptito</surname>
<given-names>M.</given-names>
</name>
<article-title>Beyond visual, aural and haptic movement perception: hMT+ is activated by electrotactile motion stimulation of the tongue in sighted and in congenitally blind individuals</article-title>
.
<source>Brain Res. Bull.</source>
<volume>82</volume>
,
<fpage>264</fpage>
<lpage>270</lpage>
(
<year>2010</year>
).
<pub-id pub-id-type="pmid">20466041</pub-id>
</mixed-citation>
</ref>
<ref id="b30">
<mixed-citation publication-type="journal">
<name>
<surname>Wacker</surname>
<given-names>E.</given-names>
</name>
,
<name>
<surname>Spitzer</surname>
<given-names>B.</given-names>
</name>
,
<name>
<surname>Lutzkendorf</surname>
<given-names>R.</given-names>
</name>
,
<name>
<surname>Bernarding</surname>
<given-names>J.</given-names>
</name>
&
<name>
<surname>Blankenburg</surname>
<given-names>F.</given-names>
</name>
<article-title>Tactile motion and pattern processing assessed with high-field FMRI</article-title>
.
<source>PLoS One</source>
<volume>6</volume>
,
<fpage>e24860</fpage>
(
<year>2011</year>
).
<pub-id pub-id-type="pmid">21949769</pub-id>
</mixed-citation>
</ref>
<ref id="b31">
<mixed-citation publication-type="journal">
<name>
<surname>Amedi</surname>
<given-names>A.</given-names>
</name>
,
<name>
<surname>Raz</surname>
<given-names>N.</given-names>
</name>
,
<name>
<surname>Azulay</surname>
<given-names>H.</given-names>
</name>
,
<name>
<surname>Malach</surname>
<given-names>R.</given-names>
</name>
&
<name>
<surname>Zohary</surname>
<given-names>E.</given-names>
</name>
<article-title>Cortical activity during tactile exploration of objects in blind and sighted humans</article-title>
.
<source>Restor. Neurol. Neuros.</source>
<volume>28</volume>
,
<fpage>143</fpage>
<lpage>156</lpage>
(
<year>2010</year>
).</mixed-citation>
</ref>
<ref id="b32">
<mixed-citation publication-type="journal">
<name>
<surname>Cooper</surname>
<given-names>L. A.</given-names>
</name>
<article-title>Mental Rotation of Random 2-Dimensional Shapes</article-title>
.
<source>Cognitive Psychol.</source>
<volume>7</volume>
,
<fpage>20</fpage>
<lpage>43</lpage>
(
<year>1975</year>
).</mixed-citation>
</ref>
<ref id="b33">
<mixed-citation publication-type="journal">
<name>
<surname>Cooper</surname>
<given-names>L. A.</given-names>
</name>
&
<name>
<surname>Shepard</surname>
<given-names>R. N.</given-names>
</name>
<article-title>Time Required to Prepare for a Rotated Stimulus</article-title>
.
<source>Mem. Cognition</source>
<volume>1</volume>
,
<fpage>246</fpage>
<lpage>250</lpage>
(
<year>1973</year>
).</mixed-citation>
</ref>
<ref id="b34">
<mixed-citation publication-type="journal">
<name>
<surname>Shepard</surname>
<given-names>R. N.</given-names>
</name>
&
<name>
<surname>Metzler</surname>
<given-names>J.</given-names>
</name>
<article-title>Mental Rotation of 3-Dimensional Objects</article-title>
.
<source>Science</source>
<volume>171</volume>
,
<fpage>701</fpage>
<lpage>703</lpage>
(
<year>1971</year>
).
<pub-id pub-id-type="pmid">5540314</pub-id>
</mixed-citation>
</ref>
<ref id="b35">
<mixed-citation publication-type="journal">
<name>
<surname>Sekiyama</surname>
<given-names>K.</given-names>
</name>
<article-title>Mental rotation of kinesthetic hand images and modes of stimulus presentation</article-title>
.
<source>Shinrigaku Kenkyu</source>
<volume>57</volume>
,
<fpage>342</fpage>
<lpage>349</lpage>
(
<year>1987</year>
).
<pub-id pub-id-type="pmid">3613295</pub-id>
</mixed-citation>
</ref>
<ref id="b36">
<mixed-citation publication-type="journal">
<name>
<surname>Volcic</surname>
<given-names>R.</given-names>
</name>
,
<name>
<surname>Wijntjes</surname>
<given-names>M. W.</given-names>
</name>
,
<name>
<surname>Kool</surname>
<given-names>E. C.</given-names>
</name>
&
<name>
<surname>Kappers</surname>
<given-names>A. M.</given-names>
</name>
<article-title>Cross-modal visuo-haptic mental rotation: comparing objects between senses</article-title>
.
<source>Exp. Brain Res.</source>
<volume>203</volume>
,
<fpage>621</fpage>
<lpage>627</lpage>
(
<year>2010</year>
).
<pub-id pub-id-type="pmid">20437169</pub-id>
</mixed-citation>
</ref>
<ref id="b37">
<mixed-citation publication-type="journal">
<name>
<surname>Wexler</surname>
<given-names>M.</given-names>
</name>
,
<name>
<surname>Kosslyn</surname>
<given-names>S. M.</given-names>
</name>
&
<name>
<surname>Berthoz</surname>
<given-names>A.</given-names>
</name>
<article-title>Motor processes in mental rotation</article-title>
.
<source>Cognition</source>
<volume>68</volume>
,
<fpage>77</fpage>
<lpage>94</lpage>
(
<year>1998</year>
).
<pub-id pub-id-type="pmid">9775517</pub-id>
</mixed-citation>
</ref>
<ref id="b38">
<mixed-citation publication-type="journal">
<name>
<surname>Windischberger</surname>
<given-names>C.</given-names>
</name>
,
<name>
<surname>Lamm</surname>
<given-names>C.</given-names>
</name>
,
<name>
<surname>Bauer</surname>
<given-names>H.</given-names>
</name>
&
<name>
<surname>Moser</surname>
<given-names>E.</given-names>
</name>
<article-title>Human motor cortex activity during mental rotation</article-title>
.
<source>NeuroImage</source>
<volume>20</volume>
,
<fpage>225</fpage>
<lpage>232</lpage>
(
<year>2003</year>
).
<pub-id pub-id-type="pmid">14527583</pub-id>
</mixed-citation>
</ref>
<ref id="b39">
<mixed-citation publication-type="journal">
<name>
<surname>Reiner</surname>
<given-names>M.</given-names>
</name>
,
<name>
<surname>Korsnes</surname>
<given-names>M.</given-names>
</name>
,
<name>
<surname>Glover</surname>
<given-names>G.</given-names>
</name>
,
<name>
<surname>Hugdahl</surname>
<given-names>K.</given-names>
</name>
&
<name>
<surname>Feldmann</surname>
<given-names>M.</given-names>
</name>
<article-title>Seeing shapes and hearing textures: Two neural categories of touch</article-title>
.
<source>The Open Neuroscience Journal</source>
<volume>5</volume>
,
<fpage>8</fpage>
<lpage>15</lpage>
(
<year>2011</year>
).</mixed-citation>
</ref>
<ref id="b40">
<mixed-citation publication-type="journal">
<name>
<surname>Ernst</surname>
<given-names>M. O.</given-names>
</name>
&
<name>
<surname>Banks</surname>
<given-names>M. S.</given-names>
</name>
<article-title>Humans integrate visual and haptic information in a statistically optimal fashion</article-title>
.
<source>Nature</source>
<volume>415</volume>
,
<fpage>429</fpage>
<lpage>433</lpage>
(
<year>2002</year>
).
<pub-id pub-id-type="pmid">11807554</pub-id>
</mixed-citation>
</ref>
<ref id="b41">
<mixed-citation publication-type="journal">
<name>
<surname>Knill</surname>
<given-names>D. C.</given-names>
</name>
<article-title>Learning Bayesian priors for depth perception</article-title>
.
<source>J. Vis.</source>
<volume>7</volume>
,
<fpage>13</fpage>
(
<year>2007</year>
).
<pub-id pub-id-type="pmid">17685820</pub-id>
</mixed-citation>
</ref>
<ref id="b42">
<mixed-citation publication-type="journal">
<name>
<surname>Lages</surname>
<given-names>M.</given-names>
</name>
<article-title>Bayesian models of binocular 3-D motion perception</article-title>
.
<source>J. Vis.</source>
<volume>6</volume>
,
<fpage>508</fpage>
<lpage>522</lpage>
(
<year>2006</year>
).
<pub-id pub-id-type="pmid">16889483</pub-id>
</mixed-citation>
</ref>
<ref id="b43">
<mixed-citation publication-type="journal">
<name>
<surname>Macaluso</surname>
<given-names>E.</given-names>
</name>
&
<name>
<surname>Maravita</surname>
<given-names>A.</given-names>
</name>
<article-title>The representation of space near the body through touch and vision</article-title>
.
<source>Neuropsychologia</source>
<volume>48</volume>
,
<fpage>782</fpage>
<lpage>795</lpage>
(
<year>2010</year>
).
<pub-id pub-id-type="pmid">19837101</pub-id>
</mixed-citation>
</ref>
</ref-list>
</back>
<floats-group>
<fig id="f1">
<label>Figure 1</label>
<caption>
<p>(a) Experimental setup. Visual stimuli were presented on a cathode ray tube display, which the participant viewed through a mirror; behind the mirror, the participant moved the stylus of a force-feedback device with the right hand. A virtual haptic plane corresponded to the virtual visual display. (b) The stimulus was a movement pattern of two strokes. (c) Visual and haptic stimulation in each phase of each modality. In visual learning, a computer moved a yellow disk on the display. In haptic learning, the computer moved the stylus of a force-feedback device held in the participant's right hand. In the visual test, the computer moved a yellow disk to display the first stroke, and the participant moved a cursor (the moving disk of the first stroke) to the end point of the second stroke by moving the stylus. In the haptic test, the computer moved the stylus to display the first stroke, pulling the participant's hand, and the participant drew the second stroke while continuously moving the stylus. Before each learning and test phase, the stylus and hand were returned to the central location.</p>
</caption>
<graphic xlink:href="srep02595-f1"></graphic>
</fig>
<fig id="f2">
<label>Figure 2</label>
<caption>
<title>Average of median response latencies of all participants plotted against rotation angle.</title>
<p>Response latency is longer for larger rotation angles (up to the largest rotation angle of 180°) for visual learning, whereas for haptic learning a clear difference is seen only between 0° and the other rotations. Each curve shows latency data for one of the four combinations of visual/haptic learning and test conditions. Error bars indicate the standard error of the mean across participants; the plot at 360° replicates that at 0°.</p>
</caption>
<graphic xlink:href="srep02595-f2"></graphic>
</fig>
<fig id="f3">
<label>Figure 3</label>
<caption>
<title>Response latency plotted against rotation angle for the control experiment with an interference task between learning and test phases.</title>
<p>The difference between 0° and the other rotations for haptic learning, found in the main experiment, disappeared. Error bars indicate the standard error of the mean across participants; the plot at 360° replicates that at 0°.</p>
</caption>
<graphic xlink:href="srep02595-f3"></graphic>
</fig>
<fig id="f4">
<label>Figure 4</label>
<caption>
<title>Visual and haptic movements in consistent and inconsistent trials.</title>
<p>To equate conditions between consistent and inconsistent trials, we used inconsistent movements that were symmetrical along either the axis of the first stroke or the perpendicular axis. Solid arrows indicate consistent trials; dashed arrows indicate inconsistent trials.</p>
</caption>
<graphic xlink:href="srep02595-f4"></graphic>
</fig>
<fig id="f5">
<label>Figure 5</label>
<caption>
<title>Response latency plotted against rotation angle for inconsistent trials (a) and consistent trials (b). The effect of rotation angle for inconsistent trials is similar to that in Experiment 1, suggesting no interaction between visual and haptic representations.</title>
<p>Error bars indicate standard error of the mean across participants, and the plot at 360° is a replica of that at 0°.</p>
</caption>
<graphic xlink:href="srep02595-f5"></graphic>
</fig>
<fig id="f6">
<label>Figure 6</label>
<caption>
<title>Predicted response latency in consistent trials using latency distribution data obtained in inconsistent trials, assuming random selection from independent representation processes for visual and haptic movements.</title>
<p>The selection probability, assumed to be the same for all participants, was determined to give the best prediction in terms of least-squares error.</p>
</caption>
<graphic xlink:href="srep02595-f6"></graphic>
</fig>
</floats-group>
</pmc>
<affiliations>
<list></list>
<tree>
<noCountry>
<name sortKey="Kuriki, Ichiro" sort="Kuriki, Ichiro" uniqKey="Kuriki I" first="Ichiro" last="Kuriki">Ichiro Kuriki</name>
<name sortKey="Matsumiya, Kazumichi" sort="Matsumiya, Kazumichi" uniqKey="Matsumiya K" first="Kazumichi" last="Matsumiya">Kazumichi Matsumiya</name>
<name sortKey="Shioiri, Satoshi" sort="Shioiri, Satoshi" uniqKey="Shioiri S" first="Satoshi" last="Shioiri">Satoshi Shioiri</name>
<name sortKey="Yamazaki, Takanori" sort="Yamazaki, Takanori" uniqKey="Yamazaki T" first="Takanori" last="Yamazaki">Takanori Yamazaki</name>
</noCountry>
</tree>
</affiliations>
</record>

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/HapticV1/Data/Pmc/Checkpoint
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 001027 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/Pmc/Checkpoint/biblio.hfd -nk 001027 | SxmlIndent | more

To add a link to this page in the Wicri network

{{Explor lien
   |wiki=    Ticri/CIDE
   |area=    HapticV1
   |flux=    Pmc
   |étape=   Checkpoint
   |type=    RBID
   |clé=     PMC:3763250
   |texte=   Rotation-independent representations for haptic movements
}}

To generate wiki pages

HfdIndexSelect -h $EXPLOR_AREA/Data/Pmc/Checkpoint/RBID.i   -Sk "pubmed:24005481" \
       | HfdSelect -Kh $EXPLOR_AREA/Data/Pmc/Checkpoint/biblio.hfd   \
       | NlmPubMed2Wicri -a HapticV1 

Wicri

This area was generated with Dilib version V0.6.23.
Data generation: Mon Jun 13 01:09:46 2016. Site generation: Wed Mar 6 09:54:07 2024