Exploration server on haptic devices

Warning: this site is under development!
Warning: this site is generated automatically from raw corpora.
The information has therefore not been validated.

Characteristics of eye movements in 3-D object learning: comparison between within-modal and cross-modal object recognition.

Internal identifier: 000B80 (PubMed/Checkpoint); previous: 000B79; next: 000B81


Authors: Yoshiyuki Ueda [Japan]; Jun Saiki

Source:

RBID : pubmed:23513616

English descriptors

Abstract

Recent studies have indicated that the object representation acquired during visual learning depends on the encoding modality during the test phase. However, the nature of the differences between within-modal learning (eg visual learning-visual recognition) and cross-modal learning (eg visual learning-haptic recognition) remains unknown. To address this issue, we utilised eye movement data and investigated object learning strategies during the learning phase of a cross-modal object recognition experiment. Observers informed of the test modality studied an unfamiliar visually presented 3-D object. Quantitative analyses showed that recognition performance was consistent regardless of rotation in the cross-modal condition, but was reduced when objects were rotated in the within-modal condition. In addition, eye movements during learning significantly differed between within-modal and cross-modal learning. Fixations were more diffused for cross-modal learning than in within-modal learning. Moreover, over the course of the trial, fixation durations became longer in cross-modal learning than in within-modal learning. These results suggest that the object learning strategies employed during the learning phase differ according to the modality of the test phase, and that this difference leads to different recognition performances.

PubMed: 23513616


Affiliations:


Links to previous steps (curation, corpus...)


Links to Exploration step

pubmed:23513616

The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">Characteristics of eye movements in 3-D object learning: comparison between within-modal and cross-modal object recognition.</title>
<author>
<name sortKey="Ueda, Yoshiyuki" sort="Ueda, Yoshiyuki" uniqKey="Ueda Y" first="Yoshiyuki" last="Ueda">Yoshiyuki Ueda</name>
<affiliation wicri:level="4">
<nlm:affiliation>Kokoro Research Center, Kyoto University, Yoshida Shimoadachi-cho 46, Sakyo, Kyoto 606-8501, Japan. ueda@educ.kyoto-u.ac.jp</nlm:affiliation>
<country xml:lang="fr">Japon</country>
<wicri:regionArea>Kokoro Research Center, Kyoto University, Yoshida Shimoadachi-cho 46, Sakyo, Kyoto 606-8501</wicri:regionArea>
<orgName type="university">Université de Kyoto</orgName>
<placeName>
<settlement type="city">Kyoto</settlement>
<region type="prefecture">Région du Kansai</region>
</placeName>
</affiliation>
</author>
<author>
<name sortKey="Saiki, Jun" sort="Saiki, Jun" uniqKey="Saiki J" first="Jun" last="Saiki">Jun Saiki</name>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PubMed</idno>
<date when="2012">2012</date>
<idno type="RBID">pubmed:23513616</idno>
<idno type="pmid">23513616</idno>
<idno type="wicri:Area/PubMed/Corpus">000D52</idno>
<idno type="wicri:Area/PubMed/Curation">000D52</idno>
<idno type="wicri:Area/PubMed/Checkpoint">000B80</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en">Characteristics of eye movements in 3-D object learning: comparison between within-modal and cross-modal object recognition.</title>
<author>
<name sortKey="Ueda, Yoshiyuki" sort="Ueda, Yoshiyuki" uniqKey="Ueda Y" first="Yoshiyuki" last="Ueda">Yoshiyuki Ueda</name>
<affiliation wicri:level="4">
<nlm:affiliation>Kokoro Research Center, Kyoto University, Yoshida Shimoadachi-cho 46, Sakyo, Kyoto 606-8501, Japan. ueda@educ.kyoto-u.ac.jp</nlm:affiliation>
<country xml:lang="fr">Japon</country>
<wicri:regionArea>Kokoro Research Center, Kyoto University, Yoshida Shimoadachi-cho 46, Sakyo, Kyoto 606-8501</wicri:regionArea>
<orgName type="university">Université de Kyoto</orgName>
<placeName>
<settlement type="city">Kyoto</settlement>
<region type="prefecture">Région du Kansai</region>
</placeName>
</affiliation>
</author>
<author>
<name sortKey="Saiki, Jun" sort="Saiki, Jun" uniqKey="Saiki J" first="Jun" last="Saiki">Jun Saiki</name>
</author>
</analytic>
<series>
<title level="j">Perception</title>
<idno type="ISSN">0301-0066</idno>
<imprint>
<date when="2012" type="published">2012</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass>
<keywords scheme="KwdEn" xml:lang="en">
<term>Analysis of Variance</term>
<term>Eye Movements (physiology)</term>
<term>Fixation, Ocular (physiology)</term>
<term>Form Perception (physiology)</term>
<term>Humans</term>
<term>Learning (physiology)</term>
<term>Recognition (Psychology) (physiology)</term>
<term>Time Factors</term>
<term>Touch (physiology)</term>
<term>Visual Perception (physiology)</term>
</keywords>
<keywords scheme="MESH" qualifier="physiology" xml:lang="en">
<term>Eye Movements</term>
<term>Fixation, Ocular</term>
<term>Form Perception</term>
<term>Learning</term>
<term>Recognition (Psychology)</term>
<term>Touch</term>
<term>Visual Perception</term>
</keywords>
<keywords scheme="MESH" xml:lang="en">
<term>Analysis of Variance</term>
<term>Humans</term>
<term>Time Factors</term>
</keywords>
</textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">Recent studies have indicated that the object representation acquired during visual learning depends on the encoding modality during the test phase. However, the nature of the differences between within-modal learning (eg visual learning-visual recognition) and cross-modal learning (eg visual learning-haptic recognition) remains unknown. To address this issue, we utilised eye movement data and investigated object learning strategies during the learning phase of a cross-modal object recognition experiment. Observers informed of the test modality studied an unfamiliar visually presented 3-D object. Quantitative analyses showed that recognition performance was consistent regardless of rotation in the cross-modal condition, but was reduced when objects were rotated in the within-modal condition. In addition, eye movements during learning significantly differed between within-modal and cross-modal learning. Fixations were more diffused for cross-modal learning than in within-modal learning. Moreover, over the course of the trial, fixation durations became longer in cross-modal learning than in within-modal learning. These results suggest that the object learning strategies employed during the learning phase differ according to the modality of the test phase, and that this difference leads to different recognition performances.</div>
</front>
</TEI>
<pubmed>
<MedlineCitation Owner="NLM" Status="MEDLINE">
<PMID Version="1">23513616</PMID>
<DateCreated>
<Year>2013</Year>
<Month>03</Month>
<Day>21</Day>
</DateCreated>
<DateCompleted>
<Year>2013</Year>
<Month>04</Month>
<Day>25</Day>
</DateCompleted>
<Article PubModel="Print">
<Journal>
<ISSN IssnType="Print">0301-0066</ISSN>
<JournalIssue CitedMedium="Print">
<Volume>41</Volume>
<Issue>11</Issue>
<PubDate>
<Year>2012</Year>
</PubDate>
</JournalIssue>
<Title>Perception</Title>
<ISOAbbreviation>Perception</ISOAbbreviation>
</Journal>
<ArticleTitle>Characteristics of eye movements in 3-D object learning: comparison between within-modal and cross-modal object recognition.</ArticleTitle>
<Pagination>
<MedlinePgn>1289-98</MedlinePgn>
</Pagination>
<Abstract>
<AbstractText>Recent studies have indicated that the object representation acquired during visual learning depends on the encoding modality during the test phase. However, the nature of the differences between within-modal learning (eg visual learning-visual recognition) and cross-modal learning (eg visual learning-haptic recognition) remains unknown. To address this issue, we utilised eye movement data and investigated object learning strategies during the learning phase of a cross-modal object recognition experiment. Observers informed of the test modality studied an unfamiliar visually presented 3-D object. Quantitative analyses showed that recognition performance was consistent regardless of rotation in the cross-modal condition, but was reduced when objects were rotated in the within-modal condition. In addition, eye movements during learning significantly differed between within-modal and cross-modal learning. Fixations were more diffused for cross-modal learning than in within-modal learning. Moreover, over the course of the trial, fixation durations became longer in cross-modal learning than in within-modal learning. These results suggest that the object learning strategies employed during the learning phase differ according to the modality of the test phase, and that this difference leads to different recognition performances.</AbstractText>
</Abstract>
<AuthorList CompleteYN="Y">
<Author ValidYN="Y">
<LastName>Ueda</LastName>
<ForeName>Yoshiyuki</ForeName>
<Initials>Y</Initials>
<AffiliationInfo>
<Affiliation>Kokoro Research Center, Kyoto University, Yoshida Shimoadachi-cho 46, Sakyo, Kyoto 606-8501, Japan. ueda@educ.kyoto-u.ac.jp</Affiliation>
</AffiliationInfo>
</Author>
<Author ValidYN="Y">
<LastName>Saiki</LastName>
<ForeName>Jun</ForeName>
<Initials>J</Initials>
</Author>
</AuthorList>
<Language>eng</Language>
<PublicationTypeList>
<PublicationType UI="D003160">Comparative Study</PublicationType>
<PublicationType UI="D016428">Journal Article</PublicationType>
<PublicationType UI="D013485">Research Support, Non-U.S. Gov't</PublicationType>
</PublicationTypeList>
</Article>
<MedlineJournalInfo>
<Country>England</Country>
<MedlineTA>Perception</MedlineTA>
<NlmUniqueID>0372307</NlmUniqueID>
<ISSNLinking>0301-0066</ISSNLinking>
</MedlineJournalInfo>
<CitationSubset>IM</CitationSubset>
<MeshHeadingList>
<MeshHeading>
<DescriptorName MajorTopicYN="N" UI="D000704">Analysis of Variance</DescriptorName>
</MeshHeading>
<MeshHeading>
<DescriptorName MajorTopicYN="N" UI="D005133">Eye Movements</DescriptorName>
<QualifierName MajorTopicYN="Y" UI="Q000502">physiology</QualifierName>
</MeshHeading>
<MeshHeading>
<DescriptorName MajorTopicYN="N" UI="D005403">Fixation, Ocular</DescriptorName>
<QualifierName MajorTopicYN="N" UI="Q000502">physiology</QualifierName>
</MeshHeading>
<MeshHeading>
<DescriptorName MajorTopicYN="N" UI="D005556">Form Perception</DescriptorName>
<QualifierName MajorTopicYN="Y" UI="Q000502">physiology</QualifierName>
</MeshHeading>
<MeshHeading>
<DescriptorName MajorTopicYN="N" UI="D006801">Humans</DescriptorName>
</MeshHeading>
<MeshHeading>
<DescriptorName MajorTopicYN="N" UI="D007858">Learning</DescriptorName>
<QualifierName MajorTopicYN="Y" UI="Q000502">physiology</QualifierName>
</MeshHeading>
<MeshHeading>
<DescriptorName MajorTopicYN="N" UI="D021641">Recognition (Psychology)</DescriptorName>
<QualifierName MajorTopicYN="Y" UI="Q000502">physiology</QualifierName>
</MeshHeading>
<MeshHeading>
<DescriptorName MajorTopicYN="N" UI="D013997">Time Factors</DescriptorName>
</MeshHeading>
<MeshHeading>
<DescriptorName MajorTopicYN="N" UI="D014110">Touch</DescriptorName>
<QualifierName MajorTopicYN="Y" UI="Q000502">physiology</QualifierName>
</MeshHeading>
<MeshHeading>
<DescriptorName MajorTopicYN="N" UI="D014796">Visual Perception</DescriptorName>
<QualifierName MajorTopicYN="Y" UI="Q000502">physiology</QualifierName>
</MeshHeading>
</MeshHeadingList>
</MedlineCitation>
<PubmedData>
<History>
<PubMedPubDate PubStatus="entrez">
<Year>2013</Year>
<Month>3</Month>
<Day>22</Day>
<Hour>6</Hour>
<Minute>0</Minute>
</PubMedPubDate>
<PubMedPubDate PubStatus="pubmed">
<Year>2012</Year>
<Month>1</Month>
<Day>1</Day>
<Hour>0</Hour>
<Minute>0</Minute>
</PubMedPubDate>
<PubMedPubDate PubStatus="medline">
<Year>2013</Year>
<Month>4</Month>
<Day>26</Day>
<Hour>6</Hour>
<Minute>0</Minute>
</PubMedPubDate>
</History>
<PublicationStatus>ppublish</PublicationStatus>
<ArticleIdList>
<ArticleId IdType="pubmed">23513616</ArticleId>
</ArticleIdList>
</PubmedData>
</pubmed>
<affiliations>
<list>
<country>
<li>Japon</li>
</country>
<region>
<li>Région du Kansai</li>
</region>
<settlement>
<li>Kyoto</li>
</settlement>
<orgName>
<li>Université de Kyoto</li>
</orgName>
</list>
<tree>
<noCountry>
<name sortKey="Saiki, Jun" sort="Saiki, Jun" uniqKey="Saiki J" first="Jun" last="Saiki">Jun Saiki</name>
</noCountry>
<country name="Japon">
<region name="Région du Kansai">
<name sortKey="Ueda, Yoshiyuki" sort="Ueda, Yoshiyuki" uniqKey="Ueda Y" first="Yoshiyuki" last="Ueda">Yoshiyuki Ueda</name>
</region>
</country>
</tree>
</affiliations>
</record>

Manipulating this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/HapticV1/Data/PubMed/Checkpoint
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 000B80 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/PubMed/Checkpoint/biblio.hfd -nk 000B80 | SxmlIndent | more
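When the Dilib toolchain is not available, identifiers can also be pulled out of the exported XML with standard Unix tools. A minimal sketch, assuming the record has been saved to a file (the file name `record.xml` and the reduced record content below are illustrative assumptions, not part of the Dilib workflow):

```shell
# record.xml is a hypothetical file holding the exported <record> element;
# only the <PMID> line of the full record is reproduced here for brevity.
cat > record.xml <<'EOF'
<record>
<PMID Version="1">23513616</PMID>
</record>
EOF
# sed keeps only the digits between the <PMID> tags
sed -n 's|.*<PMID[^>]*>\([0-9]*\)</PMID>.*|\1|p' record.xml
```

This prints the PubMed identifier (23513616) on its own line, which can then be fed to other tools in a pipeline.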

To add a link to this page in the Wicri network

{{Explor lien
   |wiki=    Ticri/CIDE
   |area=    HapticV1
   |flux=    PubMed
   |étape=   Checkpoint
   |type=    RBID
   |clé=     pubmed:23513616
   |texte=   Characteristics of eye movements in 3-D object learning: comparison between within-modal and cross-modal object recognition.
}}

To generate wiki pages

HfdIndexSelect -h $EXPLOR_AREA/Data/PubMed/Checkpoint/RBID.i   -Sk "pubmed:23513616" \
       | HfdSelect -Kh $EXPLOR_AREA/Data/PubMed/Checkpoint/biblio.hfd   \
       | NlmPubMed2Wicri -a HapticV1 

Wicri

This area was generated with Dilib version V0.6.23.
Data generation: Mon Jun 13 01:09:46 2016. Site generation: Wed Mar 6 09:54:07 2024