Exploration server on haptic devices

Warning: this site is under development!
Warning: this site was generated automatically from raw corpora.
The information it contains has therefore not been validated.

Learning and generalization in haptic classification of 2-D raised-line drawings of facial expressions of emotion by sighted and adventitiously blind observers.

Internal identifier: 000F95 (PubMed/Corpus); previous: 000F94; next: 000F96


Authors: Aneta Abramowicz; Roberta L. Klatzky; Susan J. Lederman

Source:

RBID : pubmed:21125953

English descriptors

Abstract

Sighted blindfolded individuals can successfully classify basic facial expressions of emotion (FEEs) by manually exploring simple 2-D raised-line drawings (Lederman et al 2008, IEEE Transactions on Haptics 1 27-38). The effect of training on classification accuracy was assessed by sixty sighted blindfolded participants (experiment 1) and by three adventitiously blind participants (experiment 2). We further investigated whether the underlying learning process(es) constituted token-specific learning and/or generalization. A hybrid learning paradigm comprising pre/post and old/new test comparisons was used. For both participant groups, classification accuracy for old (ie trained) drawings markedly increased over study trials (mean improvement = 76% and 88%, respectively). Additionally, RT decreased by a mean of 30% for the sighted, and 31% for the adventitiously blind. Learning was mostly token-specific, but some generalization was also observed for both groups. The sighted classified novel drawings of all six FEEs faster with training (mean RT decrease = 20%). Accuracy also improved significantly (mean improvement = 20%), but this improvement was restricted to two FEEs (anger and sadness). Two of three adventitiously blind participants classified new drawings more accurately (mean improvement = 30%); however, RTs for this group did not reflect generalization. Based on a limited number of blind subjects, our results tentatively suggest that adventitiously blind individuals learn to haptically classify FEEs as well as, or even better than, sighted persons.

PubMed: 21125953

Links to Exploration step

pubmed:21125953

The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">Learning and generalization in haptic classification of 2-D raised-line drawings of facial expressions of emotion by sighted and adventitiously blind observers.</title>
<author>
<name sortKey="Abramowicz, Aneta" sort="Abramowicz, Aneta" uniqKey="Abramowicz A" first="Aneta" last="Abramowicz">Aneta Abramowicz</name>
<affiliation>
<nlm:affiliation>Department of Psychology, Queen's University, 62 Arch Street, Kingston, Ontario K7L 3N6, Canada. abramo.aneta@gmail.com</nlm:affiliation>
</affiliation>
</author>
<author>
<name sortKey="Klatzky, Roberta L" sort="Klatzky, Roberta L" uniqKey="Klatzky R" first="Roberta L" last="Klatzky">Roberta L. Klatzky</name>
</author>
<author>
<name sortKey="Lederman, Susan J" sort="Lederman, Susan J" uniqKey="Lederman S" first="Susan J" last="Lederman">Susan J. Lederman</name>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PubMed</idno>
<date when="2010">2010</date>
<idno type="RBID">pubmed:21125953</idno>
<idno type="pmid">21125953</idno>
<idno type="wicri:Area/PubMed/Corpus">000F95</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en">Learning and generalization in haptic classification of 2-D raised-line drawings of facial expressions of emotion by sighted and adventitiously blind observers.</title>
<author>
<name sortKey="Abramowicz, Aneta" sort="Abramowicz, Aneta" uniqKey="Abramowicz A" first="Aneta" last="Abramowicz">Aneta Abramowicz</name>
<affiliation>
<nlm:affiliation>Department of Psychology, Queen's University, 62 Arch Street, Kingston, Ontario K7L 3N6, Canada. abramo.aneta@gmail.com</nlm:affiliation>
</affiliation>
</author>
<author>
<name sortKey="Klatzky, Roberta L" sort="Klatzky, Roberta L" uniqKey="Klatzky R" first="Roberta L" last="Klatzky">Roberta L. Klatzky</name>
</author>
<author>
<name sortKey="Lederman, Susan J" sort="Lederman, Susan J" uniqKey="Lederman S" first="Susan J" last="Lederman">Susan J. Lederman</name>
</author>
</analytic>
<series>
<title level="j">Perception</title>
<idno type="ISSN">0301-0066</idno>
<imprint>
<date when="2010" type="published">2010</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass>
<keywords scheme="KwdEn" xml:lang="en">
<term>Adult</term>
<term>Aged</term>
<term>Blindness (psychology)</term>
<term>Discrimination Learning</term>
<term>Emotions</term>
<term>Facial Expression</term>
<term>Humans</term>
<term>Male</term>
<term>Pattern Recognition, Visual</term>
<term>Practice (Psychology)</term>
<term>Recognition (Psychology)</term>
<term>Touch</term>
<term>Young Adult</term>
</keywords>
<keywords scheme="MESH" qualifier="psychology" xml:lang="en">
<term>Blindness</term>
</keywords>
<keywords scheme="MESH" xml:lang="en">
<term>Adult</term>
<term>Aged</term>
<term>Discrimination Learning</term>
<term>Emotions</term>
<term>Facial Expression</term>
<term>Humans</term>
<term>Male</term>
<term>Pattern Recognition, Visual</term>
<term>Practice (Psychology)</term>
<term>Recognition (Psychology)</term>
<term>Touch</term>
<term>Young Adult</term>
</keywords>
</textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">Sighted blindfolded individuals can successfully classify basic facial expressions of emotion (FEEs) by manually exploring simple 2-D raised-line drawings (Lederman et al 2008, IEEE Transactions on Haptics 1 27-38). The effect of training on classification accuracy was assessed by sixty sighted blindfolded participants (experiment 1) and by three adventitiously blind participants (experiment 2). We further investigated whether the underlying learning process(es) constituted token-specific learning and/or generalization. A hybrid learning paradigm comprising pre/post and old/new test comparisons was used. For both participant groups, classification accuracy for old (ie trained) drawings markedly increased over study trials (mean improvement = 76% and 88%, respectively). Additionally, RT decreased by a mean of 30% for the sighted, and 31% for the adventitiously blind. Learning was mostly token-specific, but some generalization was also observed for both groups. The sighted classified novel drawings of all six FEEs faster with training (mean RT decrease = 20%). Accuracy also improved significantly (mean improvement = 20%), but this improvement was restricted to two FEEs (anger and sadness). Two of three adventitiously blind participants classified new drawings more accurately (mean improvement = 30%); however, RTs for this group did not reflect generalization. Based on a limited number of blind subjects, our results tentatively suggest that adventitiously blind individuals learn to haptically classify FEEs as well as, or even better than, sighted persons.</div>
</front>
</TEI>
<pubmed>
<MedlineCitation Owner="NLM" Status="MEDLINE">
<PMID Version="1">21125953</PMID>
<DateCreated>
<Year>2010</Year>
<Month>12</Month>
<Day>03</Day>
</DateCreated>
<DateCompleted>
<Year>2011</Year>
<Month>01</Month>
<Day>11</Day>
</DateCompleted>
<Article PubModel="Print">
<Journal>
<ISSN IssnType="Print">0301-0066</ISSN>
<JournalIssue CitedMedium="Print">
<Volume>39</Volume>
<Issue>9</Issue>
<PubDate>
<Year>2010</Year>
</PubDate>
</JournalIssue>
<Title>Perception</Title>
<ISOAbbreviation>Perception</ISOAbbreviation>
</Journal>
<ArticleTitle>Learning and generalization in haptic classification of 2-D raised-line drawings of facial expressions of emotion by sighted and adventitiously blind observers.</ArticleTitle>
<Pagination>
<MedlinePgn>1261-75</MedlinePgn>
</Pagination>
<Abstract>
<AbstractText>Sighted blindfolded individuals can successfully classify basic facial expressions of emotion (FEEs) by manually exploring simple 2-D raised-line drawings (Lederman et al 2008, IEEE Transactions on Haptics 1 27-38). The effect of training on classification accuracy was assessed by sixty sighted blindfolded participants (experiment 1) and by three adventitiously blind participants (experiment 2). We further investigated whether the underlying learning process(es) constituted token-specific learning and/or generalization. A hybrid learning paradigm comprising pre/post and old/new test comparisons was used. For both participant groups, classification accuracy for old (ie trained) drawings markedly increased over study trials (mean improvement = 76% and 88%, respectively). Additionally, RT decreased by a mean of 30% for the sighted, and 31% for the adventitiously blind. Learning was mostly token-specific, but some generalization was also observed for both groups. The sighted classified novel drawings of all six FEEs faster with training (mean RT decrease = 20%). Accuracy also improved significantly (mean improvement = 20%), but this improvement was restricted to two FEEs (anger and sadness). Two of three adventitiously blind participants classified new drawings more accurately (mean improvement = 30%); however, RTs for this group did not reflect generalization. Based on a limited number of blind subjects, our results tentatively suggest that adventitiously blind individuals learn to haptically classify FEEs as well as, or even better than, sighted persons.</AbstractText>
</Abstract>
<AuthorList CompleteYN="Y">
<Author ValidYN="Y">
<LastName>Abramowicz</LastName>
<ForeName>Aneta</ForeName>
<Initials>A</Initials>
<AffiliationInfo>
<Affiliation>Department of Psychology, Queen's University, 62 Arch Street, Kingston, Ontario K7L 3N6, Canada. abramo.aneta@gmail.com</Affiliation>
</AffiliationInfo>
</Author>
<Author ValidYN="Y">
<LastName>Klatzky</LastName>
<ForeName>Roberta L</ForeName>
<Initials>RL</Initials>
</Author>
<Author ValidYN="Y">
<LastName>Lederman</LastName>
<ForeName>Susan J</ForeName>
<Initials>SJ</Initials>
</Author>
</AuthorList>
<Language>eng</Language>
<GrantList CompleteYN="Y">
<Grant>
<Agency>Canadian Institutes of Health Research</Agency>
<Country>Canada</Country>
</Grant>
</GrantList>
<PublicationTypeList>
<PublicationType UI="D016428">Journal Article</PublicationType>
<PublicationType UI="D013485">Research Support, Non-U.S. Gov't</PublicationType>
</PublicationTypeList>
</Article>
<MedlineJournalInfo>
<Country>England</Country>
<MedlineTA>Perception</MedlineTA>
<NlmUniqueID>0372307</NlmUniqueID>
<ISSNLinking>0301-0066</ISSNLinking>
</MedlineJournalInfo>
<CitationSubset>IM</CitationSubset>
<MeshHeadingList>
<MeshHeading>
<DescriptorName MajorTopicYN="N" UI="D000328">Adult</DescriptorName>
</MeshHeading>
<MeshHeading>
<DescriptorName MajorTopicYN="N" UI="D000368">Aged</DescriptorName>
</MeshHeading>
<MeshHeading>
<DescriptorName MajorTopicYN="N" UI="D001766">Blindness</DescriptorName>
<QualifierName MajorTopicYN="Y" UI="Q000523">psychology</QualifierName>
</MeshHeading>
<MeshHeading>
<DescriptorName MajorTopicYN="Y" UI="D004193">Discrimination Learning</DescriptorName>
</MeshHeading>
<MeshHeading>
<DescriptorName MajorTopicYN="Y" UI="D004644">Emotions</DescriptorName>
</MeshHeading>
<MeshHeading>
<DescriptorName MajorTopicYN="Y" UI="D005149">Facial Expression</DescriptorName>
</MeshHeading>
<MeshHeading>
<DescriptorName MajorTopicYN="N" UI="D006801">Humans</DescriptorName>
</MeshHeading>
<MeshHeading>
<DescriptorName MajorTopicYN="N" UI="D008297">Male</DescriptorName>
</MeshHeading>
<MeshHeading>
<DescriptorName MajorTopicYN="Y" UI="D010364">Pattern Recognition, Visual</DescriptorName>
</MeshHeading>
<MeshHeading>
<DescriptorName MajorTopicYN="N" UI="D011214">Practice (Psychology)</DescriptorName>
</MeshHeading>
<MeshHeading>
<DescriptorName MajorTopicYN="N" UI="D021641">Recognition (Psychology)</DescriptorName>
</MeshHeading>
<MeshHeading>
<DescriptorName MajorTopicYN="Y" UI="D014110">Touch</DescriptorName>
</MeshHeading>
<MeshHeading>
<DescriptorName MajorTopicYN="N" UI="D055815">Young Adult</DescriptorName>
</MeshHeading>
</MeshHeadingList>
</MedlineCitation>
<PubmedData>
<History>
<PubMedPubDate PubStatus="entrez">
<Year>2010</Year>
<Month>12</Month>
<Day>4</Day>
<Hour>6</Hour>
<Minute>0</Minute>
</PubMedPubDate>
<PubMedPubDate PubStatus="pubmed">
<Year>2010</Year>
<Month>12</Month>
<Day>4</Day>
<Hour>6</Hour>
<Minute>0</Minute>
</PubMedPubDate>
<PubMedPubDate PubStatus="medline">
<Year>2011</Year>
<Month>1</Month>
<Day>12</Day>
<Hour>6</Hour>
<Minute>0</Minute>
</PubMedPubDate>
</History>
<PublicationStatus>ppublish</PublicationStatus>
<ArticleIdList>
<ArticleId IdType="pubmed">21125953</ArticleId>
</ArticleIdList>
</PubmedData>
</pubmed>
</record>
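As an illustration outside the Dilib toolchain, the PubMed portion of a record like the one above can be processed with Python's standard library. This is a minimal sketch using a namespace-free excerpt of the `MedlineCitation` element (the TEI portion uses an `nlm:` prefix and would need namespace handling); the `RECORD` string and `major_mesh_terms` helper are hypothetical names introduced here for the example.

```python
import xml.etree.ElementTree as ET

# A minimal, namespace-free excerpt modeled on the MedlineCitation above.
RECORD = """
<MedlineCitation Owner="NLM" Status="MEDLINE">
  <PMID Version="1">21125953</PMID>
  <Article PubModel="Print">
    <ArticleTitle>Learning and generalization in haptic classification
    of 2-D raised-line drawings of facial expressions of emotion by
    sighted and adventitiously blind observers.</ArticleTitle>
  </Article>
  <MeshHeadingList>
    <MeshHeading>
      <DescriptorName MajorTopicYN="Y" UI="D014110">Touch</DescriptorName>
    </MeshHeading>
    <MeshHeading>
      <DescriptorName MajorTopicYN="N" UI="D006801">Humans</DescriptorName>
    </MeshHeading>
  </MeshHeadingList>
</MedlineCitation>
"""

def major_mesh_terms(xml_text):
    """Return MeSH descriptor names flagged as major topics (MajorTopicYN="Y")."""
    root = ET.fromstring(xml_text)
    return [d.text for d in root.iter("DescriptorName")
            if d.get("MajorTopicYN") == "Y"]

# The PMID is a direct child of MedlineCitation.
pmid = ET.fromstring(RECORD).findtext("PMID")
print(pmid, major_mesh_terms(RECORD))
```

On the full record, the same approach would list the starred topics (Discrimination Learning, Emotions, Facial Expression, Pattern Recognition Visual, Touch).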

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/HapticV1/Data/PubMed/Corpus
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 000F95 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/PubMed/Corpus/biblio.hfd -nk 000F95 | SxmlIndent | more

To add a link to this page within the Wicri network

{{Explor lien
   |wiki=    Ticri/CIDE
   |area=    HapticV1
   |flux=    PubMed
   |étape=   Corpus
   |type=    RBID
   |clé=     pubmed:21125953
   |texte=   Learning and generalization in haptic classification of 2-D raised-line drawings of facial expressions of emotion by sighted and adventitiously blind observers.
}}

To generate wiki pages

HfdIndexSelect -h $EXPLOR_AREA/Data/PubMed/Corpus/RBID.i   -Sk "pubmed:21125953" \
       | HfdSelect -Kh $EXPLOR_AREA/Data/PubMed/Corpus/biblio.hfd   \
       | NlmPubMed2Wicri -a HapticV1 

Wicri

This area was generated with Dilib version V0.6.23.
Data generation: Mon Jun 13 01:09:46 2016. Site generation: Wed Mar 6 09:54:07 2024