Exploration server on haptic devices

Warning: this site is under development!
Warning: this site was generated automatically from raw corpora.
The information has therefore not been validated.

Recognizing familiar objects by hand and foot: Haptic shape perception generalizes to inputs from unusual locations and untrained body parts.

Internal identifier: 000553 (PubMed/Checkpoint); previous: 000552; next: 000554


Authors: Rebecca Lawson [United Kingdom]

Source: Attention, perception & psychophysics, 2014, 76(2), 541-58

RBID: pubmed:24197503

English descriptors

Abstract

The limits of generalization of our 3-D shape recognition system to identifying objects by touch was investigated by testing exploration at unusual locations and using untrained effectors. In Experiments 1 and 2, people found identification by hand of real objects, plastic 3-D models of objects, and raised line drawings placed in front of themselves no easier than when exploration was behind their back. Experiment 3 compared one-handed, two-handed, one-footed, and two-footed haptic object recognition of familiar objects. Recognition by foot was slower (7 vs. 13 s) and much less accurate (9 % vs. 47 % errors) than recognition by either one or both hands. Nevertheless, item difficulty was similar across hand and foot exploration, and there was a strong correlation between an individual's hand and foot performance. Furthermore, foot recognition was better with the largest 20 of the 80 items (32 % errors), suggesting that physical limitations hampered exploration by foot. Thus, object recognition by hand generalized efficiently across the spatial location of stimuli, while object recognition by foot seemed surprisingly good given that no prior training was provided. Active touch (haptics) thus efficiently extracts 3-D shape information and accesses stored representations of familiar objects from novel modes of input.

DOI: 10.3758/s13414-013-0559-1
PubMed: 24197503


Affiliations:

United Kingdom: School of Psychology, University of Liverpool, Liverpool, UK
Links to previous steps (curation, corpus...)


Links to Exploration step

pubmed:24197503

The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">Recognizing familiar objects by hand and foot: Haptic shape perception generalizes to inputs from unusual locations and untrained body parts.</title>
<author>
<name sortKey="Lawson, Rebecca" sort="Lawson, Rebecca" uniqKey="Lawson R" first="Rebecca" last="Lawson">Rebecca Lawson</name>
<affiliation wicri:level="1">
<nlm:affiliation>School of Psychology, University of Liverpool, Liverpool, UK, rlawson@liv.ac.uk.</nlm:affiliation>
<country wicri:rule="url">Royaume-Uni</country>
<wicri:regionArea>School of Psychology, University of Liverpool, Liverpool, UK</wicri:regionArea>
<wicri:noRegion>UK</wicri:noRegion>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PubMed</idno>
<date when="2014">2014</date>
<idno type="doi">10.3758/s13414-013-0559-1</idno>
<idno type="RBID">pubmed:24197503</idno>
<idno type="pmid">24197503</idno>
<idno type="wicri:Area/PubMed/Corpus">000820</idno>
<idno type="wicri:Area/PubMed/Curation">000820</idno>
<idno type="wicri:Area/PubMed/Checkpoint">000553</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en">Recognizing familiar objects by hand and foot: Haptic shape perception generalizes to inputs from unusual locations and untrained body parts.</title>
<author>
<name sortKey="Lawson, Rebecca" sort="Lawson, Rebecca" uniqKey="Lawson R" first="Rebecca" last="Lawson">Rebecca Lawson</name>
<affiliation wicri:level="1">
<nlm:affiliation>School of Psychology, University of Liverpool, Liverpool, UK, rlawson@liv.ac.uk.</nlm:affiliation>
<country wicri:rule="url">Royaume-Uni</country>
<wicri:regionArea>School of Psychology, University of Liverpool, Liverpool, UK</wicri:regionArea>
<wicri:noRegion>UK</wicri:noRegion>
</affiliation>
</author>
</analytic>
<series>
<title level="j">Attention, perception & psychophysics</title>
<idno type="eISSN">1943-393X</idno>
<imprint>
<date when="2014" type="published">2014</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass>
<keywords scheme="KwdEn" xml:lang="en">
<term>Adult</term>
<term>Analysis of Variance</term>
<term>Female</term>
<term>Foot</term>
<term>Hand</term>
<term>Healthy Volunteers</term>
<term>Human Body</term>
<term>Humans</term>
<term>Male</term>
<term>Posture (physiology)</term>
<term>Recognition (Psychology) (physiology)</term>
<term>Touch Perception (physiology)</term>
<term>Visual Perception (physiology)</term>
<term>Young Adult</term>
</keywords>
<keywords scheme="MESH" qualifier="physiology" xml:lang="en">
<term>Posture</term>
<term>Recognition (Psychology)</term>
<term>Touch Perception</term>
<term>Visual Perception</term>
</keywords>
<keywords scheme="MESH" xml:lang="en">
<term>Adult</term>
<term>Analysis of Variance</term>
<term>Female</term>
<term>Foot</term>
<term>Hand</term>
<term>Healthy Volunteers</term>
<term>Human Body</term>
<term>Humans</term>
<term>Male</term>
<term>Young Adult</term>
</keywords>
</textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">The limits of generalization of our 3-D shape recognition system to identifying objects by touch was investigated by testing exploration at unusual locations and using untrained effectors. In Experiments 1 and 2, people found identification by hand of real objects, plastic 3-D models of objects, and raised line drawings placed in front of themselves no easier than when exploration was behind their back. Experiment 3 compared one-handed, two-handed, one-footed, and two-footed haptic object recognition of familiar objects. Recognition by foot was slower (7 vs. 13 s) and much less accurate (9 % vs. 47 % errors) than recognition by either one or both hands. Nevertheless, item difficulty was similar across hand and foot exploration, and there was a strong correlation between an individual's hand and foot performance. Furthermore, foot recognition was better with the largest 20 of the 80 items (32 % errors), suggesting that physical limitations hampered exploration by foot. Thus, object recognition by hand generalized efficiently across the spatial location of stimuli, while object recognition by foot seemed surprisingly good given that no prior training was provided. Active touch (haptics) thus efficiently extracts 3-D shape information and accesses stored representations of familiar objects from novel modes of input.</div>
</front>
</TEI>
<pubmed>
<MedlineCitation Owner="NLM" Status="MEDLINE">
<PMID Version="1">24197503</PMID>
<DateCreated>
<Year>2014</Year>
<Month>04</Month>
<Day>09</Day>
</DateCreated>
<DateCompleted>
<Year>2014</Year>
<Month>08</Month>
<Day>08</Day>
</DateCompleted>
<Article PubModel="Print">
<Journal>
<ISSN IssnType="Electronic">1943-393X</ISSN>
<JournalIssue CitedMedium="Internet">
<Volume>76</Volume>
<Issue>2</Issue>
<PubDate>
<Year>2014</Year>
<Month>Feb</Month>
</PubDate>
</JournalIssue>
<Title>Attention, perception &amp; psychophysics</Title>
<ISOAbbreviation>Atten Percept Psychophys</ISOAbbreviation>
</Journal>
<ArticleTitle>Recognizing familiar objects by hand and foot: Haptic shape perception generalizes to inputs from unusual locations and untrained body parts.</ArticleTitle>
<Pagination>
<MedlinePgn>541-58</MedlinePgn>
</Pagination>
<ELocationID EIdType="doi" ValidYN="Y">10.3758/s13414-013-0559-1</ELocationID>
<Abstract>
<AbstractText>The limits of generalization of our 3-D shape recognition system to identifying objects by touch was investigated by testing exploration at unusual locations and using untrained effectors. In Experiments 1 and 2, people found identification by hand of real objects, plastic 3-D models of objects, and raised line drawings placed in front of themselves no easier than when exploration was behind their back. Experiment 3 compared one-handed, two-handed, one-footed, and two-footed haptic object recognition of familiar objects. Recognition by foot was slower (7 vs. 13 s) and much less accurate (9 % vs. 47 % errors) than recognition by either one or both hands. Nevertheless, item difficulty was similar across hand and foot exploration, and there was a strong correlation between an individual's hand and foot performance. Furthermore, foot recognition was better with the largest 20 of the 80 items (32 % errors), suggesting that physical limitations hampered exploration by foot. Thus, object recognition by hand generalized efficiently across the spatial location of stimuli, while object recognition by foot seemed surprisingly good given that no prior training was provided. Active touch (haptics) thus efficiently extracts 3-D shape information and accesses stored representations of familiar objects from novel modes of input.</AbstractText>
</Abstract>
<AuthorList CompleteYN="Y">
<Author ValidYN="Y">
<LastName>Lawson</LastName>
<ForeName>Rebecca</ForeName>
<Initials>R</Initials>
<AffiliationInfo>
<Affiliation>School of Psychology, University of Liverpool, Liverpool, UK, rlawson@liv.ac.uk.</Affiliation>
</AffiliationInfo>
</Author>
</AuthorList>
<Language>eng</Language>
<PublicationTypeList>
<PublicationType UI="D016430">Clinical Trial</PublicationType>
<PublicationType UI="D016428">Journal Article</PublicationType>
<PublicationType UI="D013485">Research Support, Non-U.S. Gov't</PublicationType>
</PublicationTypeList>
</Article>
<MedlineJournalInfo>
<Country>United States</Country>
<MedlineTA>Atten Percept Psychophys</MedlineTA>
<NlmUniqueID>101495384</NlmUniqueID>
<ISSNLinking>1943-3921</ISSNLinking>
</MedlineJournalInfo>
<CitationSubset>C</CitationSubset>
<CitationSubset>IM</CitationSubset>
<MeshHeadingList>
<MeshHeading>
<DescriptorName MajorTopicYN="N" UI="D000328">Adult</DescriptorName>
</MeshHeading>
<MeshHeading>
<DescriptorName MajorTopicYN="N" UI="D000704">Analysis of Variance</DescriptorName>
</MeshHeading>
<MeshHeading>
<DescriptorName MajorTopicYN="N" UI="D005260">Female</DescriptorName>
</MeshHeading>
<MeshHeading>
<DescriptorName MajorTopicYN="N" UI="D005528">Foot</DescriptorName>
</MeshHeading>
<MeshHeading>
<DescriptorName MajorTopicYN="N" UI="D006225">Hand</DescriptorName>
</MeshHeading>
<MeshHeading>
<DescriptorName MajorTopicYN="N" UI="D064368">Healthy Volunteers</DescriptorName>
</MeshHeading>
<MeshHeading>
<DescriptorName MajorTopicYN="N" UI="D018594">Human Body</DescriptorName>
</MeshHeading>
<MeshHeading>
<DescriptorName MajorTopicYN="N" UI="D006801">Humans</DescriptorName>
</MeshHeading>
<MeshHeading>
<DescriptorName MajorTopicYN="N" UI="D008297">Male</DescriptorName>
</MeshHeading>
<MeshHeading>
<DescriptorName MajorTopicYN="N" UI="D011187">Posture</DescriptorName>
<QualifierName MajorTopicYN="N" UI="Q000502">physiology</QualifierName>
</MeshHeading>
<MeshHeading>
<DescriptorName MajorTopicYN="N" UI="D021641">Recognition (Psychology)</DescriptorName>
<QualifierName MajorTopicYN="Y" UI="Q000502">physiology</QualifierName>
</MeshHeading>
<MeshHeading>
<DescriptorName MajorTopicYN="N" UI="D055698">Touch Perception</DescriptorName>
<QualifierName MajorTopicYN="Y" UI="Q000502">physiology</QualifierName>
</MeshHeading>
<MeshHeading>
<DescriptorName MajorTopicYN="N" UI="D014796">Visual Perception</DescriptorName>
<QualifierName MajorTopicYN="N" UI="Q000502">physiology</QualifierName>
</MeshHeading>
<MeshHeading>
<DescriptorName MajorTopicYN="N" UI="D055815">Young Adult</DescriptorName>
</MeshHeading>
</MeshHeadingList>
</MedlineCitation>
<PubmedData>
<History>
<PubMedPubDate PubStatus="entrez">
<Year>2013</Year>
<Month>11</Month>
<Day>8</Day>
<Hour>6</Hour>
<Minute>0</Minute>
</PubMedPubDate>
<PubMedPubDate PubStatus="pubmed">
<Year>2013</Year>
<Month>11</Month>
<Day>8</Day>
<Hour>6</Hour>
<Minute>0</Minute>
</PubMedPubDate>
<PubMedPubDate PubStatus="medline">
<Year>2014</Year>
<Month>8</Month>
<Day>13</Day>
<Hour>6</Hour>
<Minute>0</Minute>
</PubMedPubDate>
</History>
<PublicationStatus>ppublish</PublicationStatus>
<ArticleIdList>
<ArticleId IdType="doi">10.3758/s13414-013-0559-1</ArticleId>
<ArticleId IdType="pubmed">24197503</ArticleId>
</ArticleIdList>
</PubmedData>
</pubmed>
<affiliations>
<list>
<country>
<li>Royaume-Uni</li>
</country>
</list>
<tree>
<country name="Royaume-Uni">
<noRegion>
<name sortKey="Lawson, Rebecca" sort="Lawson, Rebecca" uniqKey="Lawson R" first="Rebecca" last="Lawson">Rebecca Lawson</name>
</noRegion>
</country>
</tree>
</affiliations>
</record>
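
A minimal sketch, not part of the Dilib toolchain: assuming the record above has been saved locally as record.xml (a hypothetical filename), its main identifiers can be extracted with standard Unix tools.

# record.xml is an assumed local copy of the record shown above; sed strips the surrounding tags.
sed -n 's/.*<idno type="doi">\([^<]*\)<\/idno>.*/\1/p' record.xml
sed -n 's/.*<PMID Version="1">\([^<]*\)<\/PMID>.*/\1/p' record.xml
sed -n 's/.*<idno type="RBID">\([^<]*\)<\/idno>.*/\1/p' record.xml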

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/HapticV1/Data/PubMed/Checkpoint
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 000553 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/PubMed/Checkpoint/biblio.hfd -nk 000553 | SxmlIndent | more
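
The same pipeline can be redirected to a file instead of being paged; a minimal sketch reusing only the commands and options shown above (000553.xml is an assumed output filename):

# Save the indented record to a file rather than paging it with more.
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 000553 | SxmlIndent > 000553.xml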

To link to this page from the Wicri network

{{Explor lien
   |wiki=    Ticri/CIDE
   |area=    HapticV1
   |flux=    PubMed
   |étape=   Checkpoint
   |type=    RBID
   |clé=     pubmed:24197503
   |texte=   Recognizing familiar objects by hand and foot: Haptic shape perception generalizes to inputs from unusual locations and untrained body parts.
}}

To generate wiki pages

HfdIndexSelect -h $EXPLOR_AREA/Data/PubMed/Checkpoint/RBID.i   -Sk "pubmed:24197503" \
       | HfdSelect -Kh $EXPLOR_AREA/Data/PubMed/Checkpoint/biblio.hfd   \
       | NlmPubMed2Wicri -a HapticV1 
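
To process several records in one pass, the same pipeline can be wrapped in a loop; a minimal sketch reusing only the commands and options shown above, assuming a plain-text file pmids.txt (a hypothetical name) listing one "pubmed:<id>" key per line:

# Generate wiki text for every key listed in pmids.txt (assumed file, one key per line).
while read KEY; do
    HfdIndexSelect -h $EXPLOR_AREA/Data/PubMed/Checkpoint/RBID.i -Sk "$KEY" \
        | HfdSelect -Kh $EXPLOR_AREA/Data/PubMed/Checkpoint/biblio.hfd \
        | NlmPubMed2Wicri -a HapticV1
done < pmids.txt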

Wicri

This area was generated with Dilib version V0.6.23.
Data generation: Mon Jun 13 01:09:46 2016. Site generation: Wed Mar 6 09:54:07 2024