Exploration server on haptic devices

Warning: this site is under development!
Warning: this site is generated by computational means from raw corpora.
The information it contains has therefore not been validated.

Does men's advantage in mental rotation persist when real three-dimensional objects are either felt or seen?

Internal identifier: 001B69 (PubMed/Corpus); previous: 001B68; next: 001B70


Authors: Michèle Robert; Eliane Chevrier

Source: Memory & cognition, 31(7), 1136-1145, 2003

RBID: pubmed:14704028


Abstract

In several spatial tasks in which men outperform women in the processing of visual input, the sex difference has been eliminated in matching contexts limited to haptic input. The present experiment tested whether such contrasting results would be reproduced in a mental rotation task. A standard visual condition involved two-dimensional illustrations of three-dimensional stimuli; in a haptic condition, three-dimensional replicas of these stimuli were only felt; in an additional visual condition, these replicas were seen. The results indicated that, irrespective of condition, men's response times were shorter than women's, although accuracy did not significantly differ according to sex. For both men and women, response times were shorter and accuracy was higher in the standard condition than in the haptic one, the best performances being recorded when full replicas were shown. Self-reported solving strategies also varied as a function of sex and condition. The discussion emphasizes the robustness of men's faster speed in mental rotation. With respect to both speed and accuracy, the demanding sequential processing called for in the haptic setting, relative to the standard condition, is underscored, as is the benefit resulting from easier access to depth cues in the visual context with real three-dimensional objects.

PubMed: 14704028

Links to Exploration step

pubmed:14704028

The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">Does men's advantage in mental rotation persist when real three-dimensional objects are either felt or seen?</title>
<author>
<name sortKey="Robert, Michele" sort="Robert, Michele" uniqKey="Robert M" first="Michèle" last="Robert">Michèle Robert</name>
<affiliation>
<nlm:affiliation>Département de Psychologie, Université de Montréal, Montréal, Québec, Canada. michele.robert@umontreal.ca</nlm:affiliation>
</affiliation>
</author>
<author>
<name sortKey="Chevrier, Eliane" sort="Chevrier, Eliane" uniqKey="Chevrier E" first="Eliane" last="Chevrier">Eliane Chevrier</name>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PubMed</idno>
<date when="2003">2003</date>
<idno type="RBID">pubmed:14704028</idno>
<idno type="pmid">14704028</idno>
<idno type="wicri:Area/PubMed/Corpus">001B69</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en">Does men's advantage in mental rotation persist when real three-dimensional objects are either felt or seen?</title>
<author>
<name sortKey="Robert, Michele" sort="Robert, Michele" uniqKey="Robert M" first="Michèle" last="Robert">Michèle Robert</name>
<affiliation>
<nlm:affiliation>Département de Psychologie, Université de Montréal, Montréal, Québec, Canada. michele.robert@umontreal.ca</nlm:affiliation>
</affiliation>
</author>
<author>
<name sortKey="Chevrier, Eliane" sort="Chevrier, Eliane" uniqKey="Chevrier E" first="Eliane" last="Chevrier">Eliane Chevrier</name>
</author>
</analytic>
<series>
<title level="j">Memory & cognition</title>
<idno type="ISSN">0090-502X</idno>
<imprint>
<date when="2003" type="published">2003</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass>
<keywords scheme="KwdEn" xml:lang="en">
<term>Adult</term>
<term>Aptitude</term>
<term>Depth Perception</term>
<term>Discrimination Learning</term>
<term>Female</term>
<term>Humans</term>
<term>Male</term>
<term>Orientation</term>
<term>Pattern Recognition, Visual</term>
<term>Problem Solving</term>
<term>Psychophysics</term>
<term>Reaction Time</term>
<term>Sex Characteristics</term>
<term>Stereognosis</term>
</keywords>
<keywords scheme="MESH" xml:lang="en">
<term>Adult</term>
<term>Aptitude</term>
<term>Depth Perception</term>
<term>Discrimination Learning</term>
<term>Female</term>
<term>Humans</term>
<term>Male</term>
<term>Orientation</term>
<term>Pattern Recognition, Visual</term>
<term>Problem Solving</term>
<term>Psychophysics</term>
<term>Reaction Time</term>
<term>Sex Characteristics</term>
<term>Stereognosis</term>
</keywords>
</textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">In several spatial tasks in which men outperform women in the processing of visual input, the sex difference has been eliminated in matching contexts limited to haptic input. The present experiment tested whether such contrasting results would be reproduced in a mental rotation task. A standard visual condition involved two-dimensional illustrations of three-dimensional stimuli; in a haptic condition, three-dimensional replicas of these stimuli were only felt; in an additional visual condition, these replicas were seen. The results indicated that, irrespective of condition, men's response times were shorter than women's, although accuracy did not significantly differ according to sex. For both men and women, response times were shorter and accuracy was higher in the standard condition than in the haptic one, the best performances being recorded when full replicas were shown. Self-reported solving strategies also varied as a function of sex and condition. The discussion emphasizes the robustness of men's faster speed in mental rotation. With respect to both speed and accuracy, the demanding sequential processing called for in the haptic setting, relative to the standard condition, is underscored, as is the benefit resulting from easier access to depth cues in the visual context with real three-dimensional objects.</div>
</front>
</TEI>
<pubmed>
<MedlineCitation Owner="NLM" Status="MEDLINE">
<PMID Version="1">14704028</PMID>
<DateCreated>
<Year>2004</Year>
<Month>01</Month>
<Day>05</Day>
</DateCreated>
<DateCompleted>
<Year>2004</Year>
<Month>02</Month>
<Day>20</Day>
</DateCompleted>
<DateRevised>
<Year>2006</Year>
<Month>11</Month>
<Day>15</Day>
</DateRevised>
<Article PubModel="Print">
<Journal>
<ISSN IssnType="Print">0090-502X</ISSN>
<JournalIssue CitedMedium="Print">
<Volume>31</Volume>
<Issue>7</Issue>
<PubDate>
<Year>2003</Year>
<Month>Oct</Month>
</PubDate>
</JournalIssue>
<Title>Memory &amp; cognition</Title>
<ISOAbbreviation>Mem Cognit</ISOAbbreviation>
</Journal>
<ArticleTitle>Does men's advantage in mental rotation persist when real three-dimensional objects are either felt or seen?</ArticleTitle>
<Pagination>
<MedlinePgn>1136-45</MedlinePgn>
</Pagination>
<Abstract>
<AbstractText>In several spatial tasks in which men outperform women in the processing of visual input, the sex difference has been eliminated in matching contexts limited to haptic input. The present experiment tested whether such contrasting results would be reproduced in a mental rotation task. A standard visual condition involved two-dimensional illustrations of three-dimensional stimuli; in a haptic condition, three-dimensional replicas of these stimuli were only felt; in an additional visual condition, these replicas were seen. The results indicated that, irrespective of condition, men's response times were shorter than women's, although accuracy did not significantly differ according to sex. For both men and women, response times were shorter and accuracy was higher in the standard condition than in the haptic one, the best performances being recorded when full replicas were shown. Self-reported solving strategies also varied as a function of sex and condition. The discussion emphasizes the robustness of men's faster speed in mental rotation. With respect to both speed and accuracy, the demanding sequential processing called for in the haptic setting, relative to the standard condition, is underscored, as is the benefit resulting from easier access to depth cues in the visual context with real three-dimensional objects.</AbstractText>
</Abstract>
<AuthorList CompleteYN="Y">
<Author ValidYN="Y">
<LastName>Robert</LastName>
<ForeName>Michèle</ForeName>
<Initials>M</Initials>
<AffiliationInfo>
<Affiliation>Département de Psychologie, Université de Montréal, Montréal, Québec, Canada. michele.robert@umontreal.ca</Affiliation>
</AffiliationInfo>
</Author>
<Author ValidYN="Y">
<LastName>Chevrier</LastName>
<ForeName>Eliane</ForeName>
<Initials>E</Initials>
</Author>
</AuthorList>
<Language>eng</Language>
<PublicationTypeList>
<PublicationType UI="D003160">Comparative Study</PublicationType>
<PublicationType UI="D016428">Journal Article</PublicationType>
<PublicationType UI="D013485">Research Support, Non-U.S. Gov't</PublicationType>
</PublicationTypeList>
</Article>
<MedlineJournalInfo>
<Country>United States</Country>
<MedlineTA>Mem Cognit</MedlineTA>
<NlmUniqueID>0357443</NlmUniqueID>
<ISSNLinking>0090-502X</ISSNLinking>
</MedlineJournalInfo>
<CitationSubset>IM</CitationSubset>
<MeshHeadingList>
<MeshHeading>
<DescriptorName MajorTopicYN="N" UI="D000328">Adult</DescriptorName>
</MeshHeading>
<MeshHeading>
<DescriptorName MajorTopicYN="Y" UI="D001076">Aptitude</DescriptorName>
</MeshHeading>
<MeshHeading>
<DescriptorName MajorTopicYN="Y" UI="D003867">Depth Perception</DescriptorName>
</MeshHeading>
<MeshHeading>
<DescriptorName MajorTopicYN="N" UI="D004193">Discrimination Learning</DescriptorName>
</MeshHeading>
<MeshHeading>
<DescriptorName MajorTopicYN="N" UI="D005260">Female</DescriptorName>
</MeshHeading>
<MeshHeading>
<DescriptorName MajorTopicYN="N" UI="D006801">Humans</DescriptorName>
</MeshHeading>
<MeshHeading>
<DescriptorName MajorTopicYN="N" UI="D008297">Male</DescriptorName>
</MeshHeading>
<MeshHeading>
<DescriptorName MajorTopicYN="Y" UI="D009949">Orientation</DescriptorName>
</MeshHeading>
<MeshHeading>
<DescriptorName MajorTopicYN="Y" UI="D010364">Pattern Recognition, Visual</DescriptorName>
</MeshHeading>
<MeshHeading>
<DescriptorName MajorTopicYN="N" UI="D011340">Problem Solving</DescriptorName>
</MeshHeading>
<MeshHeading>
<DescriptorName MajorTopicYN="N" UI="D011601">Psychophysics</DescriptorName>
</MeshHeading>
<MeshHeading>
<DescriptorName MajorTopicYN="N" UI="D011930">Reaction Time</DescriptorName>
</MeshHeading>
<MeshHeading>
<DescriptorName MajorTopicYN="Y" UI="D012727">Sex Characteristics</DescriptorName>
</MeshHeading>
<MeshHeading>
<DescriptorName MajorTopicYN="Y" UI="D013236">Stereognosis</DescriptorName>
</MeshHeading>
</MeshHeadingList>
</MedlineCitation>
<PubmedData>
<History>
<PubMedPubDate PubStatus="pubmed">
<Year>2004</Year>
<Month>1</Month>
<Day>6</Day>
<Hour>5</Hour>
<Minute>0</Minute>
</PubMedPubDate>
<PubMedPubDate PubStatus="medline">
<Year>2004</Year>
<Month>2</Month>
<Day>21</Day>
<Hour>5</Hour>
<Minute>0</Minute>
</PubMedPubDate>
<PubMedPubDate PubStatus="entrez">
<Year>2004</Year>
<Month>1</Month>
<Day>6</Day>
<Hour>5</Hour>
<Minute>0</Minute>
</PubMedPubDate>
</History>
<PublicationStatus>ppublish</PublicationStatus>
<ArticleIdList>
<ArticleId IdType="pubmed">14704028</ArticleId>
</ArticleIdList>
</PubmedData>
</pubmed>
</record>
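
Outside Dilib, this record can also be queried with standard XML tooling. A minimal sketch, assuming the <record> element above is saved as record.xml and that xmllint (from libxml2) is installed; note that the undeclared nlm: prefix in the TEI header may trigger a namespace warning, which xmllint recovers from:

# Print the PubMed identifier of the record
xmllint --xpath 'string(//PMID)' record.xml

# Print the English article title
xmllint --xpath 'string(//ArticleTitle)' record.xml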

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/HapticV1/Data/PubMed/Corpus
# Select the record with key 001B69 from the bibliographic base and page through it, indented
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 001B69 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/PubMed/Corpus/biblio.hfd -nk 001B69 | SxmlIndent | more
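
As a variation, the indented output can be piped through standard Unix filters rather than a pager; a minimal sketch, assuming SxmlIndent emits one element per line:

# Show only the article title line of record 001B69
HfdSelect -h $EXPLOR_AREA/Data/PubMed/Corpus/biblio.hfd -nk 001B69 | SxmlIndent | grep 'ArticleTitle'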

To add a link to this page in the Wicri network

{{Explor lien
   |wiki=    Ticri/CIDE
   |area=    HapticV1
   |flux=    PubMed
   |étape=   Corpus
   |type=    RBID
   |clé=     pubmed:14704028
   |texte=   Does men's advantage in mental rotation persist when real three-dimensional objects are either felt or seen?
}}

To generate wiki pages

HfdIndexSelect -h $EXPLOR_AREA/Data/PubMed/Corpus/RBID.i   -Sk "pubmed:14704028" \
       | HfdSelect -Kh $EXPLOR_AREA/Data/PubMed/Corpus/biblio.hfd   \
       | NlmPubMed2Wicri -a HapticV1 
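
For repeated use, the same pipeline can be wrapped in a small shell function taking the RBID as a parameter; a minimal sketch reusing only the commands shown above (the function name wicri_gen is hypothetical):

# Generate the wiki page for a given RBID key, e.g. "pubmed:14704028"
wicri_gen () {
    HfdIndexSelect -h $EXPLOR_AREA/Data/PubMed/Corpus/RBID.i -Sk "$1" \
        | HfdSelect -Kh $EXPLOR_AREA/Data/PubMed/Corpus/biblio.hfd \
        | NlmPubMed2Wicri -a HapticV1
}

wicri_gen "pubmed:14704028"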

Wicri

This area was generated with Dilib version V0.6.23.
Data generation: Mon Jun 13 01:09:46 2016. Site generation: Wed Mar 6 09:54:07 2024