Exploration server on haptic devices

Warning: this site is under development!
Warning: this site was generated automatically from raw corpora.
The information has therefore not been validated.

Cross-Modal Object Recognition Is Viewpoint-Independent

Internal identifier: 002470 (Pmc/Checkpoint); previous: 002469; next: 002471


Authors: Simon Lacey [États-Unis]; Andrew Peters [États-Unis]; K. Sathian [États-Unis]

Source:

RBID: PMC:1964535

Abstract

Background

Previous research suggests that visual and haptic object recognition are viewpoint-dependent both within- and cross-modally. However, this conclusion may not be generally valid as it was reached using objects oriented along their extended y-axis, resulting in differential surface processing in vision and touch. In the present study, we removed this differential by presenting objects along the z-axis, thus making all object surfaces more equally available to vision and touch.

Methodology/Principal Findings

Participants studied previously unfamiliar objects, in groups of four, using either vision or touch. Subsequently, they performed a four-alternative forced-choice object identification task with the studied objects presented in both unrotated and rotated (180° about the x-, y-, and z-axes) orientations. Rotation impaired within-modal recognition accuracy in both vision and touch, but not cross-modal recognition accuracy. Within-modally, visual recognition accuracy was reduced by rotation about the x- and y-axes more than the z-axis, whilst haptic recognition was equally affected by rotation about all three axes. Cross-modal (but not within-modal) accuracy correlated with spatial (but not object) imagery scores.

Conclusions/Significance

The viewpoint-independence of cross-modal object identification points to its mediation by a high-level abstract representation. The correlation between spatial imagery scores and cross-modal performance suggests that construction of this high-level representation is linked to the ability to perform spatial transformations. Within-modal viewpoint-dependence appears to have a different basis in vision than in touch, possibly due to surface occlusion being important in vision but not touch.


URL:
DOI: 10.1371/journal.pone.0000890
PubMed: 17849019
PubMed Central: 1964535


Affiliations:


Links to previous steps (curation, corpus...)


Links to Exploration step

PMC:1964535

The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">Cross-Modal Object Recognition Is Viewpoint-Independent</title>
<author>
<name sortKey="Lacey, Simon" sort="Lacey, Simon" uniqKey="Lacey S" first="Simon" last="Lacey">Simon Lacey</name>
<affiliation wicri:level="2">
<nlm:aff id="aff1">
<addr-line>Department of Neurology, Emory University, Atlanta, Georgia, United States of America</addr-line>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea>Department of Neurology, Emory University, Atlanta, Georgia</wicri:regionArea>
<placeName>
<region type="state">Géorgie (États-Unis)</region>
</placeName>
</affiliation>
</author>
<author>
<name sortKey="Peters, Andrew" sort="Peters, Andrew" uniqKey="Peters A" first="Andrew" last="Peters">Andrew Peters</name>
<affiliation wicri:level="2">
<nlm:aff id="aff1">
<addr-line>Department of Neurology, Emory University, Atlanta, Georgia, United States of America</addr-line>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea>Department of Neurology, Emory University, Atlanta, Georgia</wicri:regionArea>
<placeName>
<region type="state">Géorgie (États-Unis)</region>
</placeName>
</affiliation>
</author>
<author>
<name sortKey="Sathian, K" sort="Sathian, K" uniqKey="Sathian K" first="K." last="Sathian">K. Sathian</name>
<affiliation wicri:level="2">
<nlm:aff id="aff1">
<addr-line>Department of Neurology, Emory University, Atlanta, Georgia, United States of America</addr-line>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea>Department of Neurology, Emory University, Atlanta, Georgia</wicri:regionArea>
<placeName>
<region type="state">Géorgie (États-Unis)</region>
</placeName>
</affiliation>
<affiliation wicri:level="2">
<nlm:aff id="aff2">
<addr-line>Department of Rehabilitation Medicine, Emory University, Atlanta, Georgia, United States of America</addr-line>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea>Department of Rehabilitation Medicine, Emory University, Atlanta, Georgia</wicri:regionArea>
<placeName>
<region type="state">Géorgie (États-Unis)</region>
</placeName>
</affiliation>
<affiliation wicri:level="2">
<nlm:aff id="aff3">
<addr-line>Department of Psychology, Emory University, Atlanta, Georgia, United States of America</addr-line>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea>Department of Psychology, Emory University, Atlanta, Georgia</wicri:regionArea>
<placeName>
<region type="state">Géorgie (États-Unis)</region>
</placeName>
</affiliation>
<affiliation wicri:level="2">
<nlm:aff id="aff4">
<addr-line>Atlanta Veterans Affairs Medical Center, Rehabilitation Research and Development Center of Excellence, Decatur, Georgia, United States of America</addr-line>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea>Atlanta Veterans Affairs Medical Center, Rehabilitation Research and Development Center of Excellence, Decatur, Georgia</wicri:regionArea>
<placeName>
<region type="state">Géorgie (États-Unis)</region>
</placeName>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PMC</idno>
<idno type="pmid">17849019</idno>
<idno type="pmc">1964535</idno>
<idno type="url">http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1964535</idno>
<idno type="RBID">PMC:1964535</idno>
<idno type="doi">10.1371/journal.pone.0000890</idno>
<date when="2007">2007</date>
<idno type="wicri:Area/Pmc/Corpus">002192</idno>
<idno type="wicri:Area/Pmc/Curation">002192</idno>
<idno type="wicri:Area/Pmc/Checkpoint">002470</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a" type="main">Cross-Modal Object Recognition Is Viewpoint-Independent</title>
<author>
<name sortKey="Lacey, Simon" sort="Lacey, Simon" uniqKey="Lacey S" first="Simon" last="Lacey">Simon Lacey</name>
<affiliation wicri:level="2">
<nlm:aff id="aff1">
<addr-line>Department of Neurology, Emory University, Atlanta, Georgia, United States of America</addr-line>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea>Department of Neurology, Emory University, Atlanta, Georgia</wicri:regionArea>
<placeName>
<region type="state">Géorgie (États-Unis)</region>
</placeName>
</affiliation>
</author>
<author>
<name sortKey="Peters, Andrew" sort="Peters, Andrew" uniqKey="Peters A" first="Andrew" last="Peters">Andrew Peters</name>
<affiliation wicri:level="2">
<nlm:aff id="aff1">
<addr-line>Department of Neurology, Emory University, Atlanta, Georgia, United States of America</addr-line>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea>Department of Neurology, Emory University, Atlanta, Georgia</wicri:regionArea>
<placeName>
<region type="state">Géorgie (États-Unis)</region>
</placeName>
</affiliation>
</author>
<author>
<name sortKey="Sathian, K" sort="Sathian, K" uniqKey="Sathian K" first="K." last="Sathian">K. Sathian</name>
<affiliation wicri:level="2">
<nlm:aff id="aff1">
<addr-line>Department of Neurology, Emory University, Atlanta, Georgia, United States of America</addr-line>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea>Department of Neurology, Emory University, Atlanta, Georgia</wicri:regionArea>
<placeName>
<region type="state">Géorgie (États-Unis)</region>
</placeName>
</affiliation>
<affiliation wicri:level="2">
<nlm:aff id="aff2">
<addr-line>Department of Rehabilitation Medicine, Emory University, Atlanta, Georgia, United States of America</addr-line>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea>Department of Rehabilitation Medicine, Emory University, Atlanta, Georgia</wicri:regionArea>
<placeName>
<region type="state">Géorgie (États-Unis)</region>
</placeName>
</affiliation>
<affiliation wicri:level="2">
<nlm:aff id="aff3">
<addr-line>Department of Psychology, Emory University, Atlanta, Georgia, United States of America</addr-line>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea>Department of Psychology, Emory University, Atlanta, Georgia</wicri:regionArea>
<placeName>
<region type="state">Géorgie (États-Unis)</region>
</placeName>
</affiliation>
<affiliation wicri:level="2">
<nlm:aff id="aff4">
<addr-line>Atlanta Veterans Affairs Medical Center, Rehabilitation Research and Development Center of Excellence, Decatur, Georgia, United States of America</addr-line>
</nlm:aff>
<country xml:lang="fr">États-Unis</country>
<wicri:regionArea>Atlanta Veterans Affairs Medical Center, Rehabilitation Research and Development Center of Excellence, Decatur, Georgia</wicri:regionArea>
<placeName>
<region type="state">Géorgie (États-Unis)</region>
</placeName>
</affiliation>
</author>
</analytic>
<series>
<title level="j">PLoS ONE</title>
<idno type="eISSN">1932-6203</idno>
<imprint>
<date when="2007">2007</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass></textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">
<sec>
<title>Background</title>
<p>Previous research suggests that visual and haptic object recognition are viewpoint-dependent both within- and cross-modally. However, this conclusion may not be generally valid as it was reached using objects oriented along their extended y-axis, resulting in differential surface processing in vision and touch. In the present study, we removed this differential by presenting objects along the z-axis, thus making all object surfaces more equally available to vision and touch.</p>
</sec>
<sec>
<title>Methodology/Principal Findings</title>
<p>Participants studied previously unfamiliar objects, in groups of four, using either vision or touch. Subsequently, they performed a four-alternative forced-choice object identification task with the studied objects presented in both unrotated and rotated (180° about the x-, y-, and z-axes) orientations. Rotation impaired within-modal recognition accuracy in both vision and touch, but not cross-modal recognition accuracy. Within-modally, visual recognition accuracy was reduced by rotation about the x- and y-axes more than the z-axis, whilst haptic recognition was equally affected by rotation about all three axes. Cross-modal (but not within-modal) accuracy correlated with spatial (but not object) imagery scores.</p>
</sec>
<sec>
<title>Conclusions/Significance</title>
<p>The viewpoint-independence of cross-modal object identification points to its mediation by a high-level abstract representation. The correlation between spatial imagery scores and cross-modal performance suggest that construction of this high-level representation is linked to the ability to perform spatial transformations. Within-modal viewpoint-dependence appears to have a different basis in vision than in touch, possibly due to surface occlusion being important in vision but not touch.</p>
</sec>
</div>
</front>
<back>
<div1 type="bibliography">
<listBibl>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
</listBibl>
</div1>
</back>
</TEI>
<pmc article-type="research-article">
<pmc-dir>properties open_access</pmc-dir>
<front>
<journal-meta>
<journal-id journal-id-type="nlm-ta">PLoS ONE</journal-id>
<journal-id journal-id-type="publisher-id">plos</journal-id>
<journal-title>PLoS ONE</journal-title>
<issn pub-type="epub">1932-6203</issn>
<publisher>
<publisher-name>Public Library of Science</publisher-name>
<publisher-loc>San Francisco, USA</publisher-loc>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="pmid">17849019</article-id>
<article-id pub-id-type="pmc">1964535</article-id>
<article-id pub-id-type="publisher-id">07-PONE-RA-01545R2</article-id>
<article-id pub-id-type="doi">10.1371/journal.pone.0000890</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Research Article</subject>
</subj-group>
<subj-group subj-group-type="Discipline">
<subject>Neuroscience/Behavioral Neuroscience</subject>
<subject>Neuroscience/Cognitive Neuroscience</subject>
<subject>Neuroscience/Sensory Systems</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>Cross-Modal Object Recognition Is Viewpoint-Independent</article-title>
<alt-title alt-title-type="running-head">Cross-Modal Object Recognition</alt-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name>
<surname>Lacey</surname>
<given-names>Simon</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Peters</surname>
<given-names>Andrew</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Sathian</surname>
<given-names>K.</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
<xref ref-type="aff" rid="aff2">
<sup>2</sup>
</xref>
<xref ref-type="aff" rid="aff3">
<sup>3</sup>
</xref>
<xref ref-type="aff" rid="aff4">
<sup>4</sup>
</xref>
<xref ref-type="corresp" rid="n101">
<sup>*</sup>
</xref>
</contrib>
</contrib-group>
<aff id="aff1">
<label>1</label>
<addr-line>Department of Neurology, Emory University, Atlanta, Georgia, United States of America</addr-line>
</aff>
<aff id="aff2">
<label>2</label>
<addr-line>Department of Rehabilitation Medicine, Emory University, Atlanta, Georgia, United States of America</addr-line>
</aff>
<aff id="aff3">
<label>3</label>
<addr-line>Department of Psychology, Emory University, Atlanta, Georgia, United States of America</addr-line>
</aff>
<aff id="aff4">
<label>4</label>
<addr-line>Atlanta Veterans Affairs Medical Center, Rehabilitation Research and Development Center of Excellence, Decatur, Georgia, United States of America</addr-line>
</aff>
<contrib-group>
<contrib contrib-type="editor">
<name>
<surname>Harris</surname>
<given-names>Justin</given-names>
</name>
<role>Academic Editor</role>
<xref ref-type="aff" rid="edit1"></xref>
</contrib>
</contrib-group>
<aff id="edit1">University of Sydney, Australia</aff>
<author-notes>
<corresp id="n101">* To whom correspondence should be addressed. E-mail:
<email>krish.sathian@emory.edu</email>
</corresp>
<fn fn-type="con">
<p>Conceived and designed the experiments: SL. Performed the experiments: AP. Analyzed the data: KS SL. Contributed reagents/materials/analysis tools: SL. Wrote the paper: KS SL AP.</p>
</fn>
</author-notes>
<pub-date pub-type="collection">
<year>2007</year>
</pub-date>
<pub-date pub-type="epub">
<day>12</day>
<month>9</month>
<year>2007</year>
</pub-date>
<volume>2</volume>
<issue>9</issue>
<elocation-id>e890</elocation-id>
<history>
<date date-type="received">
<day>18</day>
<month>6</month>
<year>2007</year>
</date>
<date date-type="accepted">
<day>24</day>
<month>8</month>
<year>2007</year>
</date>
</history>
<copyright-statement>Lacey et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.</copyright-statement>
<copyright-year>2007</copyright-year>
<abstract>
<sec>
<title>Background</title>
<p>Previous research suggests that visual and haptic object recognition are viewpoint-dependent both within- and cross-modally. However, this conclusion may not be generally valid as it was reached using objects oriented along their extended y-axis, resulting in differential surface processing in vision and touch. In the present study, we removed this differential by presenting objects along the z-axis, thus making all object surfaces more equally available to vision and touch.</p>
</sec>
<sec>
<title>Methodology/Principal Findings</title>
<p>Participants studied previously unfamiliar objects, in groups of four, using either vision or touch. Subsequently, they performed a four-alternative forced-choice object identification task with the studied objects presented in both unrotated and rotated (180° about the x-, y-, and z-axes) orientations. Rotation impaired within-modal recognition accuracy in both vision and touch, but not cross-modal recognition accuracy. Within-modally, visual recognition accuracy was reduced by rotation about the x- and y-axes more than the z-axis, whilst haptic recognition was equally affected by rotation about all three axes. Cross-modal (but not within-modal) accuracy correlated with spatial (but not object) imagery scores.</p>
</sec>
<sec>
<title>Conclusions/Significance</title>
<p>The viewpoint-independence of cross-modal object identification points to its mediation by a high-level abstract representation. The correlation between spatial imagery scores and cross-modal performance suggest that construction of this high-level representation is linked to the ability to perform spatial transformations. Within-modal viewpoint-dependence appears to have a different basis in vision than in touch, possibly due to surface occlusion being important in vision but not touch.</p>
</sec>
</abstract>
<counts>
<page-count count="6"></page-count>
</counts>
</article-meta>
</front>
<body>
<sec id="s1">
<title>Introduction</title>
<p>Previous research suggests that object recognition is viewpoint-dependent within both the visual
<xref ref-type="bibr" rid="pone.0000890-Jolicoeur1">[1]</xref>
and haptic
<xref ref-type="bibr" rid="pone.0000890-Newell1">[2]</xref>
modalities, since recognition accuracy is degraded if objects are rotated between encoding and test presentations. However, what happens for visuo-haptic
<bold>cross-modal</bold>
object recognition is less clear, since differences in the perceptual salience of particular object properties between vision and touch suggest qualitatively different unisensory representations
<xref ref-type="bibr" rid="pone.0000890-Klatzky1">[3]</xref>
, whereas cross-modal priming studies suggest a common representation
<xref ref-type="bibr" rid="pone.0000890-Reales1">[4]</xref>
.
<italic>A priori</italic>
, one would expect that when touch is involved, representations should be viewpoint-independent because the hands can move freely over the object, collecting information from all surfaces. However, cross-modal recognition was reported to be viewpoint-dependent, improving when objects with an elongated vertical (y-) axis were rotated away from the learned view about the x- and y-axes, and degrading when rotated about the z-axis
<xref ref-type="bibr" rid="pone.0000890-Newell1">[2]</xref>
. The explanation suggested for these findings was that haptic exploration naturally favors the far surface of objects, and vision, the near surface
<xref ref-type="bibr" rid="pone.0000890-Newell1">[2]</xref>
. When objects are rotated about the x- and y-axes, the near and far surfaces are exchanged, the haptic far surface becoming the visual near surface. In contrast, rotation about the z-axis does not involve such a surface exchange. But the haptic preference for the far surface may only be true for objects extended along the y-axis: encoding the near surface of these objects haptically is difficult, given the biomechanical constraints of the hand
<xref ref-type="bibr" rid="pone.0000890-Newell1">[2]</xref>
,
<xref ref-type="bibr" rid="pone.0000890-Heller1">[5]</xref>
. If this is true, the observed cross-modal effects might simply reflect the particular experimental design. Here we used multi-part objects extended along the z-axis (
<xref ref-type="fig" rid="pone-0000890-g001">Figure 1</xref>
): this removed the near/far asymmetry since these surfaces were identical facets, making all object surfaces that carried shape information more equally available to haptic exploration. We reasoned that this would allow a truer understanding of the effect of object rotation on cross-modal recognition.</p>
<fig id="pone-0000890-g001" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0000890.g001</object-id>
<label>Figure 1</label>
<caption>
<title>An example object used in the present study in the original orientation (A) and rotated 180° about the z-axis (B), x-axis (C) and y-axis (D).</title>
</caption>
<graphic xlink:href="pone.0000890.g001"></graphic>
</fig>
<p>Recognition of rotated objects involves complex mental spatial transformations. In visual within-modal object recognition, mental rotation and recognition of rotated objects have behaviorally similar signatures (in both, errors and latencies increase with angle of rotation) but rely on different neural networks
<xref ref-type="bibr" rid="pone.0000890-Gauthier1">[6]</xref>
. The relationships between the spatial transformations underlying mental rotation and cross-modal recognition of rotated objects are unclear. As a preliminary step to exploring these relationships further, participants completed the Object-Spatial Imagery Questionnaire (OSIQ)
<xref ref-type="bibr" rid="pone.0000890-Blajenkova1">[7]</xref>
which measures individual preference for both ‘object imagery’ (pictorial object representations primarily concerned with the visual appearance of an object) and ‘spatial imagery’ (abstract spatial representations primarily concerned with the spatial relations between objects, object parts, and complex spatial transformations)
<xref ref-type="bibr" rid="pone.0000890-Blajenkova1">[7]</xref>
,
<xref ref-type="bibr" rid="pone.0000890-Kozhevnikov1">[8]</xref>
. We predicted that performance with our multi-part objects would correlate with the spatial imagery ability reflected in OSIQ-spatial scores, but not with the pictorial imagery ability indexed by OSIQ-object scores.</p>
</sec>
<sec sec-type="materials|methods" id="s2">
<title>Materials and Methods</title>
<p>Forty-eight objects were constructed, each made from six smooth wooden blocks measuring 1.6 cm high, 3.6 cm long and 2.2 cm wide. The resulting objects were 9.5 cm high, the other dimensions varying according to the arrangement of the component blocks. Constructing the objects from smooth wooden component blocks avoided the textural difference between the top and bottom surfaces of Lego™ bricks used by Newell et al.
<xref ref-type="bibr" rid="pone.0000890-Newell1">[2]</xref>
. This was important to obviate undesirable cues to rotation around the x- and y-axes. The objects were painted medium grey to remove visual cues from variations in the natural wood color and grain. Each object had a small (<1 mm) grey pencil dot on one facet that was used to guide presentation of the object by the experimenter to the participant in a particular orientation. Pilot testing showed that participants were never aware of these small dots and debriefing confirmed that this was so in the main experiment also.</p>
<p>The 48 objects were divided into three sets of sixteen, one for each axis of rotation. Each set was further divided into four subsets of four, with one subset for each modality condition. These subsets were checked to ensure that they contained no ‘mirror-image’ pairs. Difference matrices were calculated for the twelve subsets based on the number of differences in the position (three possibilities: in the middle or at either end of the preceding block along the z-axis) and orientation (two possibilities: either the same as, or orthogonal to, the preceding block along the z-axis) of each component block. These values could range from 0 (identical) to 6 (completely different) and were used to calculate the mean difference between objects. The mean difference between objects within a subset ranged from 5.2 to 5.7; the mean of these subset scores within a set was taken as the score for the set and these ranged from 5.4 to 5.5. Paired t-tests on these scores showed no significant differences between subsets or sets (all p values >.05) and the objects were therefore considered equally discriminable.</p>
<p>The procedures were approved by the Institutional Review Board of Emory University. Twenty-four undergraduates (12 male and 12 female, mean age 20 years 3 months) participated after giving informed written consent. Participants performed a four-alternative forced-choice object identification task in two within-modal (visual-visual; haptic-haptic) and two cross-modal (visual-haptic; haptic-visual) conditions. Objects were either unrotated between encoding and test presentations, or rotated by 180° about the x-, y-, and z-axes (
<xref ref-type="fig" rid="pone-0000890-g001">Figure 1</xref>
). In each encoding-recognition sequence, participants learned four objects, identified by numbers, either visually or haptically. Each object was presented for 30 seconds haptically or 15 seconds visually; these times were determined by a pilot experiment. The 2:1 haptic:visual ratio of presentation times reflects that used in previous studies
<xref ref-type="bibr" rid="pone.0000890-Newell1">[2]</xref>
,
<xref ref-type="bibr" rid="pone.0000890-Lacey1">[9]</xref>
,
<xref ref-type="bibr" rid="pone.0000890-Freides1">[10]</xref>
. During visual presentation, participants sat at a table on which the objects were placed. The table was 86 cm high so that the initial viewing distance was 30–40 cm and the initial viewing angle as the participants looked down on the objects was approximately 35–45°. As in the earlier study of Newell et al.
<xref ref-type="bibr" rid="pone.0000890-Newell1">[2]</xref>
, the seated participants were free to move their head and eyes when looking at the objects but were not allowed to get up and walk around them.</p>
<p>During haptic presentation, participants felt the objects behind an opaque cloth screen and were free to move their hands around the objects. Unlike the study of Newell et al.
<xref ref-type="bibr" rid="pone.0000890-Newell1">[2]</xref>
, the objects were not fixed to a surface but placed in the participants' hands: participants were instructed to keep the objects in exactly the same orientation as presented and not to rotate or otherwise manipulate them. On subsequent recognition trials, the four objects were presented both unrotated and rotated by 180°, about a specific axis from the initial orientation, providing blocks of eight trials. Participants were asked to identify each object by its number. Objects were rotated about each axis in turn, all the modality conditions being completed for a given axis before moving on to the next axis of rotation. The order of the modality conditions, axes of rotation and object sets was fully counterbalanced across subjects.</p>
</sec>
<sec id="s3">
<title>Results</title>
<p>
<xref ref-type="fig" rid="pone-0000890-g002">Figure 2</xref>
shows that object rotation substantially degraded recognition accuracy in the within-modal conditions, but only slightly decreased cross-modal recognition accuracy. A two-way (within- vs. cross-modal, unrotated vs. rotated) repeated-measures analysis of variance (RM-ANOVA) showed that object rotation significantly reduced recognition accuracy (F
<sub>1,23</sub>
 = 30.04, p = <.001) and that overall within-modal recognition accuracy was marginally better than overall cross-modal recognition (F
<sub>1,23</sub>
 = 4.23, p = .051). These two factors interacted (F
<sub>1,23</sub>
 = 12.58, p = .002) and post-hoc t-tests showed that this was because within-modal recognition accuracy was highly significantly reduced by rotation (t = 7.25, p <.001) while cross-modal recognition accuracy was not (t = 1.66, p = .11) (
<xref ref-type="fig" rid="pone-0000890-g002">Figure 2</xref>
).</p>
<fig id="pone-0000890-g002" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0000890.g002</object-id>
<label>Figure 2</label>
<caption>
<title>The effect on recognition accuracy of rotating objects away from the learned orientation was confined to the within-modal conditions, with no effect in the cross-modal conditions.</title>
<p>(Error bars = s.e.m.; asterisk = significant difference; horizontal line = chance performance at 25% in the four-alternative forced-choice task used).</p>
</caption>
<graphic xlink:href="pone.0000890.g002"></graphic>
</fig>
<p>Analyzing this further, a three-way (modality: within-modal visual, within-modal haptic, cross-modal visual-haptic and cross-modal haptic-visual; rotation; axis) RM-ANOVA again showed a main effect of object rotation (F
<sub>1,23</sub>
 = 30.04, p = .001) but the axis of rotation was unimportant (F
<sub>2,46</sub>
 = .39, p = .68), and the main effect of modality fell short of significance (F
<sub>3,69</sub>
 = 2.49, p = .07). However, modality and rotation again interacted (F
<sub>2,46</sub>
 = 4.82, p = .004). Three-way (separate within- and cross-modal, rotation, axis) RM-ANOVAs showed again that this was because rotation had an effect in the within-modal conditions (F
<sub>1,23</sub>
 = 52.57, p <.001) but not the cross-modal conditions (F
<sub>1,23</sub>
 = 2.74, p = .11). There were no other significant effects or interactions in the cross-modal conditions.
<xref ref-type="fig" rid="pone-0000890-g003">Figure 3</xref>
illustrates that the two within-modal conditions were similar to each other, as were the two cross-modal conditions.</p>
<fig id="pone-0000890-g003" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0000890.g003</object-id>
<label>Figure 3</label>
<caption>
<title>Interaction between modality and rotation.</title>
<p>Rotation away from the learned orientation only affected within-modal, not cross-modal, recognition accuracy. (Error bars = s.e.m.; asterisk = significant difference; horizontal line = chance performance at 25% in the four-alternative forced-choice task used).</p>
</caption>
<graphic xlink:href="pone.0000890.g003"></graphic>
</fig>
<p>In the within-modal conditions, visual and haptic recognition were not significantly different (F
<sub>1,23</sub>
 = 2.66, p = .12) but modality and axis interacted (F
<sub>2,46</sub>
 = 4.37, p = .02). To investigate this, we ran separate two-way (axis, rotation) RM-ANOVAs for each modality. While rotation reduced both visual (F
<sub>1,23</sub>
 = 36.36, p = .001) and haptic (F
<sub>1,23</sub>
 = 13.54, p = .001) recognition accuracy, there was an effect of axis in vision (F
<sub>2,46</sub>
 = 3.93, p = .03) but not touch (F
<sub>2,46</sub>
 = .56, p = .58). To examine this further, we compared the percentage reduction in accuracy for each axis in vision and touch. This was computed using the formula {[unrotated score–rotated score]/unrotated score}*100. (Four observations (2.7% of the total) could not be calculated because the formula required division by zero as there were no correct responses for unrotated objects in these cases; these instances were set to zero). Paired t-tests on these difference scores showed that visual recognition accuracy after z-rotation was significantly better than after x-rotation (t = −2.97, p = .007) or y-rotation (t = −2.19, p = .04): the x- and y-rotations were not different (t = .49, p = .63). In contrast, haptic recognition accuracy was equally disrupted by each axis of rotation (z-x: t = .71, p = .48; z-y: t = .48, p = .63; x-y: t = −.34, p = .73) (
<xref ref-type="fig" rid="pone-0000890-g004">Figure 4</xref>
).</p>
<fig id="pone-0000890-g004" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0000890.g004</object-id>
<label>Figure 4</label>
<caption>
<title>Interaction between the within-modal conditions and the axis of rotation.</title>
<p>Haptic within-modal recognition accuracy was equally disrupted by rotation about each axis whereas visual within-modal recognition was disrupted by the x- and y-rotations more than the z-rotation. The graph shows the percentage decrease in accuracy due to rotating the object away from the learned view. (Error bars = s.e.m.; asterisk = significant difference).</p>
</caption>
<graphic xlink:href="pone.0000890.g004"></graphic>
</fig>
<p>A three-way (rotation, axis, modality) ANOVA of the cross-modal conditions alone showed that there was no main effect of object rotation (F
<sub>1,23</sub>
 = 2.74, p = .11) or the axis of rotation (F
<sub>2,46</sub>
 = .03, p = .97), and no significant difference between the two cross-modal conditions (F
<sub>1,23</sub>
 = 1.34, p = .25). There were no significant interactions.</p>
<p>OSIQ-spatial scores were significantly correlated with overall accuracy in both rotated (r = .51, p = .01) and unrotated (r = .48, p = .02) conditions. As
<xref ref-type="fig" rid="pone-0000890-g005">Figure 5</xref>
shows, OSIQ-spatial scores were also significantly correlated with cross-modal accuracy in both rotated (r = .58, p = .003) and unrotated (r = .55, p = .005) conditions, but not with within-modal accuracy (rotated: r = .37, p = .08; unrotated: r = .28, p = .19). OSIQ-object scores were uncorrelated with accuracy, as predicted.</p>
<fig id="pone-0000890-g005" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0000890.g005</object-id>
<label>Figure 5</label>
<caption>
<title>Scatterplots showing that OSIQ-spatial imagery scores correlate with cross-modal (A & B) but not within-modal object recognition accuracy (C & D).</title>
</caption>
<graphic xlink:href="pone.0000890.g005"></graphic>
</fig>
</sec>
<sec id="s4">
<title>Discussion</title>
<p>This study is the first to show that visuo-haptic cross-modal object recognition is essentially viewpoint-independent. Both visual and haptic within-modal recognition were significantly reduced by rotation of the object away from the learned view. This was not so for the two cross-modal conditions. It is well established that, as here, cross-modal recognition comes at a cost compared to within-modal recognition [for example, 11–15], but there was no significant additional cost associated with object rotation. This finding is the more robust because the task in this study was more demanding than in the study of Newell et al.
<xref ref-type="bibr" rid="pone.0000890-Newell1">[2]</xref>
and yet the additional difficulty of object rotation had little effect on cross-modal recognition. For example, although we used similar objects as Newell et al.
<xref ref-type="bibr" rid="pone.0000890-Newell1">[2]</xref>
did (with the exception of the removal of a texture cue) we allowed only half the time for object learning. In addition, participants had to discriminate between specific objects rather than just make a new/old judgment between learned objects and unlearned distractors.</p>
<p>In vision, viewpoint-independence suggests mediation by a high-level, relatively abstract representation
<xref ref-type="bibr" rid="pone.0000890-Riesenhuber1">[16]</xref>
. Viewpoint-independence can occur, more trivially, when all object views are familiar
<xref ref-type="bibr" rid="pone.0000890-Tarr1">[17]</xref>
, perhaps because separate, lower-level representations have been established for each viewpoint; or when the object has very distinctive parts
<xref ref-type="bibr" rid="pone.0000890-Biederman1">[18]</xref>
that are easily transformed to match the new viewpoint. However, the objects in the present study were unfamiliar and lacked distinctive parts because the component blocks were identical except in their relationships to one another. Thus, viewpoint-independence could not have arisen simply from object familiarity or distinctiveness of object parts. Rather, the findings of the present study favor the idea of an abstract, high-level, modality-independent representation underlying cross-modal object recognition. Such a representation could be constructed by integrating lower-level, unisensory, viewpoint-dependent representations
<xref ref-type="bibr" rid="pone.0000890-Riesenhuber1">[16]</xref>
. Functional neuroimaging studies have demonstrated convergence of visual and haptic shape processing in the intraparietal sulcus (IPS) and the lateral occipital complex (LOC)
<xref ref-type="bibr" rid="pone.0000890-Amedi1">[19]</xref>
<xref ref-type="bibr" rid="pone.0000890-Peltier1">[22]</xref>
. The nature of the representations in these areas is, however, incompletely understood, and has only been studied using visual stimuli. Activity in parts of the IPS scales with the angle of mental rotation
<xref ref-type="bibr" rid="pone.0000890-Gauthier1">[6]</xref>
and also appears to be viewpoint-dependent
<xref ref-type="bibr" rid="pone.0000890-James2">[23]</xref>
. There is a difference of opinion as to whether LOC activity is viewpoint-dependent
<xref ref-type="bibr" rid="pone.0000890-GrillSpector1">[24]</xref>
or viewpoint-independent
<xref ref-type="bibr" rid="pone.0000890-James2">[23]</xref>
. Thus, at present, the locus of the modality- and viewpoint-independent, high-level representation underlying cross-modal object recognition is unknown.</p>
<p>The existence of the high-level, modality-independent representation inferred here was obscured in earlier work
<xref ref-type="bibr" rid="pone.0000890-Newell1">[2]</xref>
using objects that were extended along the y-axis. Here, we removed the confounding near-far exchange inherent in this earlier study, by selecting a presentation axis that made all object surfaces more equally available to touch, and demonstrated that cross-modal object recognition is consistently viewpoint-independent across all three axes of rotation. This contrasts with within-modal recognition, where viewpoint-dependence suggests mediation by lower-level, unisensory representations that might feed into the high-level viewpoint-independent representation mediating cross-modal recognition. The correlation between spatial imagery scores and cross-modal, but not within-modal, accuracy, and the lack of any correlation of object imagery scores with performance, suggests that the ability to mentally image complex spatial transformations is linked to viewpoint-independent recognition and supports the view that cross-modal performance is served by an abstract spatial representation.</p>
<p>Our results are also the first to suggest differences between visual and haptic viewpoint-dependence. Rotating an object can occlude a surface and transform the global shape in different ways depending on the axis of rotation
<xref ref-type="bibr" rid="pone.0000890-Gauthier1">[6]</xref>
, suggesting potentially different bases for viewpoint-dependence in vision and touch. Varying the axis of rotation may not matter to touch because the hands are free to move around the object or manipulate it into different orientations relative to the hand. Thus no surface is occluded in touch and it is only necessary to deal with shape transformations. However, these manipulations are not possible visually unless one physically changes location with respect to the object
<xref ref-type="bibr" rid="pone.0000890-Pasqualotto1">[25]</xref>
, so that vision has to deal with both shape transformations and surface occlusion.
<xref ref-type="fig" rid="pone-0000890-g004">Figure 4</xref>
suggests that the axis of rotation affects vision but not touch. Visual recognition was best after z-rotation – although this occluded the top surface, the shape transformation is a simple left/right mirror-image in the picture-plane. The x- and y- rotations were more complex; the x-rotation occluded the top surface and produced a mirror-image in the depth-plane. The y-rotation did not occlude a surface but involved two shape transformations, reversing the object from left to right and in the depth-plane. Although it may be counterintuitive that a rotation involving the occlusion of a surface on the main information-bearing axis is easier to process, it should be borne in mind that shape information from the two side surfaces was still available. There is evidence that such picture-plane rotations are easier than depth-plane rotations
<xref ref-type="bibr" rid="pone.0000890-Gauthier1">[6]</xref>
,
<xref ref-type="bibr" rid="pone.0000890-Logothetis1">[26]</xref>
,
<xref ref-type="bibr" rid="pone.0000890-Perrett1">[27]</xref>
. Monkey inferotemporal neurons show faster generalization and exhibit larger generalization fields for picture-plane rotations than depth-plane rotations
<xref ref-type="bibr" rid="pone.0000890-Logothetis1">[26]</xref>
. Face-selective neurons are more sensitive to depth-plane rotations (faces tilted towards/away from the viewer) than to picture-plane rotations (horizontal or inverted faces)
<xref ref-type="bibr" rid="pone.0000890-Perrett1">[27]</xref>
. Picture-plane (z-axis) rotations result in faster and more accurate performance than depth-plane (x- and y-axis) rotations in both object recognition and mental rotation tasks, even though these tasks involve distinct neural networks
<xref ref-type="bibr" rid="pone.0000890-Gauthier1">[6]</xref>
. Thus the picture-plane advantage may be a fairly general one. However, further work is necessary to verify that the differences between vision and touch derive from the nature of shape transformations and the presence of surface occlusion.</p>
<p>Our main conclusion is to clarify an important point about visuo-haptic cross-modal object recognition: that the underlying representation is viewpoint-independent even for unfamiliar objects lacking distinctive local features. Further, despite the unisensory representations each being viewpoint-dependent, there are differences between modalities with the axis of rotation being important in vision but not touch.</p>
</sec>
</body>
<back>
<ref-list>
<title>References</title>
<ref id="pone.0000890-Jolicoeur1">
<label>1</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Jolicoeur</surname>
<given-names>P</given-names>
</name>
</person-group>
<year>1985</year>
<article-title>The time to name disoriented objects.</article-title>
<source>Mem Cognition,</source>
<volume>13</volume>
<fpage>289</fpage>
<lpage>303</lpage>
</citation>
</ref>
<ref id="pone.0000890-Newell1">
<label>2</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Newell</surname>
<given-names>FN</given-names>
</name>
<name>
<surname>Ernst</surname>
<given-names>MO</given-names>
</name>
<name>
<surname>Tjan</surname>
<given-names>BS</given-names>
</name>
<name>
<surname>Bulthoff</surname>
<given-names>HH</given-names>
</name>
</person-group>
<year>2001</year>
<article-title>Viewpoint dependence in visual and haptic object recognition.</article-title>
<source>Psychol Sci,</source>
<volume>12</volume>
<fpage>37</fpage>
<lpage>42</lpage>
<pub-id pub-id-type="pmid">11294226</pub-id>
</citation>
</ref>
<ref id="pone.0000890-Klatzky1">
<label>3</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Klatzky</surname>
<given-names>RL</given-names>
</name>
<name>
<surname>Lederman</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Reed</surname>
<given-names>C</given-names>
</name>
</person-group>
<year>1987</year>
<article-title>There's more to touch than meets the eye: The salience of object attributes for haptics with and without vision.</article-title>
<source>J Exp Psychol: Gen,</source>
<volume>116</volume>
<fpage>356</fpage>
<lpage>369</lpage>
</citation>
</ref>
<ref id="pone.0000890-Reales1">
<label>4</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Reales</surname>
<given-names>JM</given-names>
</name>
<name>
<surname>Ballesteros</surname>
<given-names>S</given-names>
</name>
</person-group>
<year>1999</year>
<article-title>Implicit and explicit memory for visual and haptic objects: Cross-modal priming depends on structural descriptions.</article-title>
<source>J Exp Psychol: Learn,</source>
<volume>25</volume>
<fpage>644</fpage>
<lpage>663</lpage>
</citation>
</ref>
<ref id="pone.0000890-Heller1">
<label>5</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Heller</surname>
<given-names>MA</given-names>
</name>
<name>
<surname>Brackett</surname>
<given-names>DD</given-names>
</name>
<name>
<surname>Scroggs</surname>
<given-names>E</given-names>
</name>
<name>
<surname>Steffen</surname>
<given-names>H</given-names>
</name>
<name>
<surname>Heatherly</surname>
<given-names>K</given-names>
</name>
<etal></etal>
</person-group>
<year>2002</year>
<article-title>Tangible pictures: Viewpoint effects and linear perspective in visually impaired people.</article-title>
<source>Perception,</source>
<volume>31</volume>
<fpage>747</fpage>
<lpage>769</lpage>
<pub-id pub-id-type="pmid">12092800</pub-id>
</citation>
</ref>
<ref id="pone.0000890-Gauthier1">
<label>6</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Gauthier</surname>
<given-names>I</given-names>
</name>
<name>
<surname>Hayward</surname>
<given-names>WG</given-names>
</name>
<name>
<surname>Tarr</surname>
<given-names>MJ</given-names>
</name>
<name>
<surname>Anderson</surname>
<given-names>AW</given-names>
</name>
<name>
<surname>Skudlarski</surname>
<given-names>P</given-names>
</name>
<etal></etal>
</person-group>
<year>2002</year>
<article-title>BOLD activity during mental rotation and viewpoint-dependent object recognition.</article-title>
<source>Neuron,</source>
<volume>34</volume>
<fpage>161</fpage>
<lpage>171</lpage>
<pub-id pub-id-type="pmid">11931750</pub-id>
</citation>
</ref>
<ref id="pone.0000890-Blajenkova1">
<label>7</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Blajenkova</surname>
<given-names>O</given-names>
</name>
<name>
<surname>Kozhevnikov</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Motes</surname>
<given-names>MA</given-names>
</name>
</person-group>
<year>2006</year>
<article-title>Object-spatial imagery: A new self-report questionnaire.</article-title>
<source>Appl Cognit Psychol,</source>
<volume>20</volume>
<fpage>239</fpage>
<lpage>263</lpage>
</citation>
</ref>
<ref id="pone.0000890-Kozhevnikov1">
<label>8</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kozhevnikov</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Kosslyn</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Shephard</surname>
<given-names>J</given-names>
</name>
</person-group>
<year>2005</year>
<article-title>Spatial versus object visualizers: A new characterization of cognitive style.</article-title>
<source>Mem Cognition,</source>
<volume>33</volume>
<fpage>710</fpage>
<lpage>726</lpage>
</citation>
</ref>
<ref id="pone.0000890-Lacey1">
<label>9</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Lacey</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Campbell</surname>
<given-names>C</given-names>
</name>
</person-group>
<year>2006</year>
<article-title>Mental representation in visual/haptic crossmodal memory: Evidence from interference effects.</article-title>
<source>Q J Exp Psychol,</source>
<volume>59</volume>
<fpage>361</fpage>
<lpage>376</lpage>
</citation>
</ref>
<ref id="pone.0000890-Freides1">
<label>10</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Freides</surname>
<given-names>D</given-names>
</name>
</person-group>
<year>1974</year>
<article-title>Human information processing and sensory modality: Cross-modal functions, information complexity, memory, and deficit.</article-title>
<source>Psychol Bull,</source>
<volume>8</volume>
<fpage>284</fpage>
<lpage>310</lpage>
<pub-id pub-id-type="pmid">4608609</pub-id>
</citation>
</ref>
<ref id="pone.0000890-Casey1">
<label>11</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Casey</surname>
<given-names>SJ</given-names>
</name>
<name>
<surname>Newell</surname>
<given-names>FN</given-names>
</name>
</person-group>
<year>2005</year>
<article-title>The role of long-term and short-term familiarity in visual and haptic face recognition.</article-title>
<source>Exp Brain Res,</source>
<volume>166</volume>
<fpage>583</fpage>
<lpage>591</lpage>
<pub-id pub-id-type="pmid">15983771</pub-id>
</citation>
</ref>
<ref id="pone.0000890-Newell2">
<label>12</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Newell</surname>
<given-names>FN</given-names>
</name>
<name>
<surname>Woods</surname>
<given-names>AT</given-names>
</name>
<name>
<surname>Mernagh</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Bulthoff</surname>
<given-names>HH</given-names>
</name>
</person-group>
<year>2005</year>
<article-title>Visual, haptic and crossmodal recognition of scenes.</article-title>
<source>Exp Brain Res,</source>
<volume>161</volume>
<fpage>233</fpage>
<lpage>242</lpage>
<pub-id pub-id-type="pmid">15490135</pub-id>
</citation>
</ref>
<ref id="pone.0000890-Norman1">
<label>13</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Norman</surname>
<given-names>JF</given-names>
</name>
<name>
<surname>Norman</surname>
<given-names>HF</given-names>
</name>
<name>
<surname>Clayton</surname>
<given-names>AM</given-names>
</name>
<name>
<surname>Lianekhammy</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Zielke</surname>
<given-names>G</given-names>
</name>
</person-group>
<year>2004</year>
<article-title>The visual and haptic perception of natural object shape.</article-title>
<source>Percept Psychophys,</source>
<volume>66</volume>
<fpage>342</fpage>
<lpage>351</lpage>
<pub-id pub-id-type="pmid">15129753</pub-id>
</citation>
</ref>
<ref id="pone.0000890-Bushnell1">
<label>14</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bushnell</surname>
<given-names>EW</given-names>
</name>
<name>
<surname>Baxt</surname>
<given-names>C</given-names>
</name>
</person-group>
<year>1999</year>
<article-title>Children's haptic and cross-modal recognition with familiar and unfamiliar objects.</article-title>
<source>J Exp Psychol Human,</source>
<volume>25</volume>
<fpage>1867</fpage>
<lpage>1881</lpage>
</citation>
</ref>
<ref id="pone.0000890-Newell3">
<label>15</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Newell</surname>
<given-names>KM</given-names>
</name>
<name>
<surname>Shapiro</surname>
<given-names>DC</given-names>
</name>
<name>
<surname>Carlton</surname>
<given-names>MJ</given-names>
</name>
</person-group>
<year>1979</year>
<article-title>Coordinating visual and kinaesthetic memory codes.</article-title>
<source>Brit J Psychol,</source>
<volume>70</volume>
<fpage>87</fpage>
<lpage>96</lpage>
<pub-id pub-id-type="pmid">486868</pub-id>
</citation>
</ref>
<ref id="pone.0000890-Riesenhuber1">
<label>16</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Riesenhuber</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Poggio</surname>
<given-names>T</given-names>
</name>
</person-group>
<year>1999</year>
<article-title>Hierarchical models of object recognition in cortex.</article-title>
<source>Nature Neurosci,</source>
<volume>2</volume>
<fpage>1019</fpage>
<lpage>1025</lpage>
<pub-id pub-id-type="pmid">10526343</pub-id>
</citation>
</ref>
<ref id="pone.0000890-Tarr1">
<label>17</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Tarr</surname>
<given-names>MJ</given-names>
</name>
<name>
<surname>Bulthoff</surname>
<given-names>HH</given-names>
</name>
</person-group>
<year>1995</year>
<article-title>Is human object recognition better described by geon structural descriptions or by multiple views: Comment on Biederman and Gerhardstein (1993).</article-title>
<source>J Exp Psychol Human,</source>
<volume>21</volume>
<fpage>1494</fpage>
<lpage>1505</lpage>
</citation>
</ref>
<ref id="pone.0000890-Biederman1">
<label>18</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Biederman</surname>
<given-names>I</given-names>
</name>
</person-group>
<year>1987</year>
<article-title>Recognition-by-components: A theory of human image understanding.</article-title>
<source>Psychol Rev,</source>
<volume>94</volume>
<fpage>115</fpage>
<lpage>147</lpage>
<pub-id pub-id-type="pmid">3575582</pub-id>
</citation>
</ref>
<ref id="pone.0000890-Amedi1">
<label>19</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Amedi</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Malach</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Hendler</surname>
<given-names>T</given-names>
</name>
<name>
<surname>Peled</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Zohary</surname>
<given-names>E</given-names>
</name>
</person-group>
<year>2001</year>
<article-title>Visuo-haptic object-related activation in the ventral pathway.</article-title>
<source>Nature Neurosci,</source>
<volume>4</volume>
<fpage>324</fpage>
<lpage>330</lpage>
<pub-id pub-id-type="pmid">11224551</pub-id>
</citation>
</ref>
<ref id="pone.0000890-James1">
<label>20</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>James</surname>
<given-names>TW</given-names>
</name>
<name>
<surname>Humphrey</surname>
<given-names>GK</given-names>
</name>
<name>
<surname>Gati</surname>
<given-names>JS</given-names>
</name>
<name>
<surname>Servos</surname>
<given-names>P</given-names>
</name>
<name>
<surname>Menon</surname>
<given-names>RS</given-names>
</name>
<etal></etal>
</person-group>
<year>2002</year>
<article-title>Haptic study of three-dimensional objects activates extrastriate visual areas.</article-title>
<source>Neuropsychologia,</source>
<volume>40</volume>
<fpage>1706</fpage>
<lpage>1714</lpage>
<pub-id pub-id-type="pmid">11992658</pub-id>
</citation>
</ref>
<ref id="pone.0000890-Zhang1">
<label>21</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Zhang</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Weisser</surname>
<given-names>VD</given-names>
</name>
<name>
<surname>Stilla</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Prather</surname>
<given-names>SC</given-names>
</name>
<name>
<surname>Sathian</surname>
<given-names>K</given-names>
</name>
</person-group>
<year>2004</year>
<article-title>Multisensory cortical processing of shape and its relation to mental imagery.</article-title>
<source>Cogn Affect Behav Ne,</source>
<volume>4</volume>
<fpage>251</fpage>
<lpage>259</lpage>
</citation>
</ref>
<ref id="pone.0000890-Peltier1">
<label>22</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Peltier</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Stilla</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Mariola</surname>
<given-names>E</given-names>
</name>
<name>
<surname>LaConte</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Hu</surname>
<given-names>X</given-names>
</name>
<etal></etal>
</person-group>
<year>2007</year>
<article-title>Activity and effective connectivity of parietal and occipital cortical regions during haptic shape perception.</article-title>
<source>Neuropsychologia,</source>
<volume>45</volume>
<fpage>476</fpage>
<lpage>483</lpage>
<pub-id pub-id-type="pmid">16616940</pub-id>
</citation>
</ref>
<ref id="pone.0000890-James2">
<label>23</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>James</surname>
<given-names>TW</given-names>
</name>
<name>
<surname>Humphrey</surname>
<given-names>GK</given-names>
</name>
<name>
<surname>Gati</surname>
<given-names>JS</given-names>
</name>
<name>
<surname>Menon</surname>
<given-names>RS</given-names>
</name>
<name>
<surname>Goodale</surname>
<given-names>MA</given-names>
</name>
</person-group>
<year>2002</year>
<article-title>Differential effects of viewpoint on object-driven activation in dorsal and ventral streams.</article-title>
<source>Neuron,</source>
<volume>35</volume>
<fpage>793</fpage>
<lpage>801</lpage>
<pub-id pub-id-type="pmid">12194877</pub-id>
</citation>
</ref>
<ref id="pone.0000890-GrillSpector1">
<label>24</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Grill-Spector</surname>
<given-names>K</given-names>
</name>
<name>
<surname>Kushnir</surname>
<given-names>T</given-names>
</name>
<name>
<surname>Edelman</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Avidan</surname>
<given-names>G</given-names>
</name>
<name>
<surname>Itzchak</surname>
<given-names>Y</given-names>
</name>
<etal></etal>
</person-group>
<year>1999</year>
<article-title>Differential processing of objects under various viewing conditions in the human lateral occipital complex.</article-title>
<source>Neuron,</source>
<volume>24</volume>
<fpage>187</fpage>
<lpage>203</lpage>
<pub-id pub-id-type="pmid">10677037</pub-id>
</citation>
</ref>
<ref id="pone.0000890-Pasqualotto1">
<label>25</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Pasqualotto</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Finucane</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Newell</surname>
<given-names>FN</given-names>
</name>
</person-group>
<year>2005</year>
<article-title>Visual and haptic representations of scenes are updated with observer movement.</article-title>
<source>Exp Brain Res,</source>
<volume>166</volume>
<fpage>481</fpage>
<lpage>488</lpage>
<pub-id pub-id-type="pmid">16034564</pub-id>
</citation>
</ref>
<ref id="pone.0000890-Logothetis1">
<label>26</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Logothetis</surname>
<given-names>NK</given-names>
</name>
<name>
<surname>Pauls</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Poggio</surname>
<given-names>T</given-names>
</name>
</person-group>
<year>1995</year>
<article-title>Shape representation in the inferior temporal cortex of monkeys.</article-title>
<source>Curr Biol,</source>
<volume>5</volume>
<fpage>552</fpage>
<lpage>563</lpage>
<pub-id pub-id-type="pmid">7583105</pub-id>
</citation>
</ref>
<ref id="pone.0000890-Perrett1">
<label>27</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Perrett</surname>
<given-names>DI</given-names>
</name>
<name>
<surname>Smith</surname>
<given-names>PAJ</given-names>
</name>
<name>
<surname>Potter</surname>
<given-names>DD</given-names>
</name>
<name>
<surname>Mistlin</surname>
<given-names>AJ</given-names>
</name>
<name>
<surname>Head</surname>
<given-names>AS</given-names>
</name>
<etal></etal>
</person-group>
<year>1985</year>
<article-title>Visual cells in the temporal cortex sensitive to face view and gaze direction.</article-title>
<source>Proc R Soc Lond B,</source>
<volume>223</volume>
<fpage>293</fpage>
<lpage>317</lpage>
<pub-id pub-id-type="pmid">2858100</pub-id>
</citation>
</ref>
</ref-list>
<fn-group>
<fn fn-type="conflict">
<p>
<bold>Competing Interests: </bold>
The authors have declared that no competing interests exist.</p>
</fn>
<fn fn-type="financial-disclosure">
<p>
<bold>Funding: </bold>
This work was supported by grants to KS from the National Eye Institute at NIH (R01 EY012440 and K24 EY017332) and the National Science Foundation (BCS 0519417). Support to KS by the Veterans Administration is also gratefully acknowledged.</p>
</fn>
</fn-group>
</back>
</pmc>
<affiliations>
<list>
<country>
<li>États-Unis</li>
</country>
<region>
<li>Géorgie (États-Unis)</li>
</region>
</list>
<tree>
<country name="États-Unis">
<region name="Géorgie (États-Unis)">
<name sortKey="Lacey, Simon" sort="Lacey, Simon" uniqKey="Lacey S" first="Simon" last="Lacey">Simon Lacey</name>
</region>
<name sortKey="Peters, Andrew" sort="Peters, Andrew" uniqKey="Peters A" first="Andrew" last="Peters">Andrew Peters</name>
<name sortKey="Sathian, K" sort="Sathian, K" uniqKey="Sathian K" first="K." last="Sathian">K. Sathian</name>
<name sortKey="Sathian, K" sort="Sathian, K" uniqKey="Sathian K" first="K." last="Sathian">K. Sathian</name>
<name sortKey="Sathian, K" sort="Sathian, K" uniqKey="Sathian K" first="K." last="Sathian">K. Sathian</name>
<name sortKey="Sathian, K" sort="Sathian, K" uniqKey="Sathian K" first="K." last="Sathian">K. Sathian</name>
</country>
</tree>
</affiliations>
</record>

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/HapticV1/Data/Pmc/Checkpoint
# Extract record 002470 from the checkpoint bibliography, indent the XML and page through it
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 002470 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/Pmc/Checkpoint/biblio.hfd -nk 002470 | SxmlIndent | more
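For quick scripting outside Dilib, the exported record can also be mined with standard Unix tools. The following is a minimal sketch, not part of the Dilib toolset: it assumes the record is first saved to a file named record.xml with the HfdSelect command above, and it relies only on the <idno type="..."> elements visible in the XML shown on this page.

# Save the record, then read its main identifiers from the <idno type="..."> elements
HfdSelect -h $EXPLOR_AREA/Data/Pmc/Checkpoint/biblio.hfd -nk 002470 > record.xml
grep -o '<idno type="doi">[^<]*' record.xml | sed 's/.*>//'      # DOI
grep -o '<idno type="pmid">[^<]*' record.xml | sed 's/.*>//'     # PubMed ID
grep -o '<idno type="pmc">[^<]*' record.xml | sed 's/.*>//'      # PMC ID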

To put a link to this page in the Wicri network

{{Explor lien
   |wiki=    Ticri/CIDE
   |area=    HapticV1
   |flux=    Pmc
   |étape=   Checkpoint
   |type=    RBID
   |clé=     PMC:1964535
   |texte=   Cross-Modal Object Recognition Is Viewpoint-Independent
}}

To generate wiki pages

# Look up the record by its PubMed identifier in the RBID index, pull the full record
# from the bibliography, and convert it into Wicri wiki pages for the HapticV1 area
HfdIndexSelect -h $EXPLOR_AREA/Data/Pmc/Checkpoint/RBID.i   -Sk "pubmed:17849019" \
       | HfdSelect -Kh $EXPLOR_AREA/Data/Pmc/Checkpoint/biblio.hfd   \
       | NlmPubMed2Wicri -a HapticV1
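To convert several records at once, the same pipeline can be wrapped in a shell loop. The sketch below is only an illustration, assuming the same environment variables as above; the second PubMed identifier is a placeholder and may not belong to this corpus.

# Loop the wiki-generation pipeline over a list of PubMed identifiers (placeholders)
for pmid in 17849019 11294226; do
    HfdIndexSelect -h $EXPLOR_AREA/Data/Pmc/Checkpoint/RBID.i -Sk "pubmed:$pmid" \
        | HfdSelect -Kh $EXPLOR_AREA/Data/Pmc/Checkpoint/biblio.hfd \
        | NlmPubMed2Wicri -a HapticV1
done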

Wicri

This area was generated with Dilib version V0.6.23.
Data generation: Mon Jun 13 01:09:46 2016. Site generation: Wed Mar 6 09:54:07 2024