Exploration server on haptic devices

Warning: this site is under development!
Warning: this site is generated automatically from raw corpora.
The information is therefore not validated.

Are representations of unfamiliar faces independent of encoding modality?

Internal identifier: 000B52 (PascalFrancis/Corpus); previous: 000B51; next: 000B53


Authors: Sarah J. Casey; Fiona N. Newell

Source:

RBID : Pascal:07-0255152

French descriptors

English descriptors

Abstract

It is well documented that both featural and configural information are important in visual face recognition. Less is known, however, about the nature of the information underlying haptic face recognition and whether or not this information is the same as in vision. In our experiments we found better within-modal than crossmodal face recognition performance, suggesting that face representations are largely specific to each modality. Moreover, this cost in crossmodal performance was found to be independent of differences in exploratory procedures across the modalities during encoding. We found that crossmodal face perception was most efficient when configural information of the facial features was preserved, suggesting that configural information is shared across modalities. Our findings suggest that face information is processed in a similar manner across vision and touch, but that qualitative differences in the nature of the information encoded underlie efficient within-modal relative to crossmodal recognition.

Record in standard format (ISO 2709)

See the documentation on the Inist Standard format.

pA  
A01 01  1    @0 0028-3932
A02 01      @0 NUPSA6
A03   1    @0 Neuropsychologia
A05       @2 45
A06       @2 3
A08 01  1  ENG  @1 Are representations of unfamiliar faces independent of encoding modality?
A09 01  1  ENG  @1 Advances in multisensory processes
A11 01  1    @1 CASEY (Sarah J.)
A11 02  1    @1 NEWELL (Fiona N.)
A12 01  1    @1 PAVANI (Francesco) @9 ed.
A12 02  1    @1 MURRAY (Micah) @9 ed.
A12 03  1    @1 SCHROEDER (Charles) @9 ed.
A14 01      @1 School of Psychology and Institute of Neuroscience, Trinity College @2 Dublin @3 IRL @Z 1 aut. @Z 2 aut.
A15 01      @1 Department of Cognitive Sciences and Education, University of Trento, Corso Bettini 31 @2 38068 Rovereto @3 ITA @Z 1 aut.
A15 02      @1 Neuropsychology Division and Radiology Service, CHUV, Hôpital Nestlé, 5 avenue Pierre Decker @2 1011 Lausanne @3 CHE @Z 2 aut.
A15 03      @1 The Nathan S. Kline Institute for Psychiatric Research, 140 Old Orangeburg Road @2 Orangeburg, NY 10962 @3 USA @Z 3 aut.
A20       @1 506-513
A21       @1 2007
A23 01      @0 ENG
A43 01      @1 INIST @2 11143 @5 354000159522200050
A44       @0 0000 @1 © 2007 INIST-CNRS. All rights reserved.
A45       @0 3/4 p.
A47 01  1    @0 07-0255152
A60       @1 P
A61       @0 A
A64 01  1    @0 Neuropsychologia
A66 01      @0 GBR
C01 01    ENG  @0 It is well documented that both featural and configural information are important in visual face recognition. Less is known, however, about the nature of the information underlying haptic face recognition and whether or not this information is the same as in vision. In our experiments we found better within modal than crossmodal face recognition performance suggesting that face representations are largely specific to each modality. Moreover, this cost in crossmodal performance was found to be independent of differences in exploratory procedures across the modalities during encoding. We found that crossmodal face perception was most efficient when configural information of the facial features was preserved suggesting that configural information is shared across modalities. Our findings suggest that face information is processed in a similar manner across vision and touch but that qualitative differences in the nature of the information encoded underlies efficient within modal relative to crossmodal recognition.
C02 01  X    @0 002B18C13
C02 02  X    @0 002A26E08
C03 01  X  FRE  @0 Représentation @5 01
C03 01  X  ENG  @0 Representation @5 01
C03 01  X  SPA  @0 Representación @5 01
C03 02  X  FRE  @0 Familiarité stimulus @5 02
C03 02  X  ENG  @0 Stimulus familiarity @5 02
C03 02  X  SPA  @0 Familiaridad estímulo @5 02
C03 03  X  FRE  @0 Face @5 03
C03 03  X  ENG  @0 Face @5 03
C03 03  X  SPA  @0 Cara @5 03
C03 04  X  FRE  @0 Codage @5 04
C03 04  X  ENG  @0 Coding @5 04
C03 04  X  SPA  @0 Codificación @5 04
C03 05  X  FRE  @0 Reconnaissance @5 05
C03 05  X  ENG  @0 Recognition @5 05
C03 05  X  SPA  @0 Reconocimiento @5 05
C03 06  X  FRE  @0 Sensibilité tactile @5 06
C03 06  X  ENG  @0 Tactile sensitivity @5 06
C03 06  X  SPA  @0 Sensibilidad tactil @5 06
C03 07  X  FRE  @0 Perception intermodale @5 07
C03 07  X  ENG  @0 Intermodal perception @5 07
C03 07  X  SPA  @0 Percepción intermodal @5 07
C03 08  X  FRE  @0 Vision @5 08
C03 08  X  ENG  @0 Vision @5 08
C03 08  X  SPA  @0 Visión @5 08
C03 09  X  FRE  @0 Cognition @5 09
C03 09  X  ENG  @0 Cognition @5 09
C03 09  X  SPA  @0 Cognición @5 09
C03 10  X  FRE  @0 Homme @5 18
C03 10  X  ENG  @0 Human @5 18
C03 10  X  SPA  @0 Hombre @5 18
C07 01  X  FRE  @0 Perception @5 37
C07 01  X  ENG  @0 Perception @5 37
C07 01  X  SPA  @0 Percepción @5 37
N21       @1 169

Inist format (server)

NO : PASCAL 07-0255152 INIST
ET : Are representations of unfamiliar faces independent of encoding modality?
AU : CASEY (Sarah J.); NEWELL (Fiona N.); PAVANI (Francesco); MURRAY (Micah); SCHROEDER (Charles)
AF : School of Psychology and Institute of Neuroscience, Trinity College/Dublin/Irlande (1 aut., 2 aut.); Department of Cognitive Sciences and Education, University of Trento, Corso Bettini 31/38068 Rovereto/Italie (1 aut.); Neuropsychology Division and Radiology Service, CHUV, Hôpital Nestlé, 5 avenue Pierre Decker/1011 Lausanne/Suisse (2 aut.); The Nathan S. Kline Institute for Psychiatric Research, 140 Old Orangeburg Road/Orangeburg, NY 10962/Etats-Unis (3 aut.)
DT : Publication en série; Niveau analytique
SO : Neuropsychologia; ISSN 0028-3932; Coden NUPSA6; Royaume-Uni; Da. 2007; Vol. 45; No. 3; Pp. 506-513; Bibl. 3/4 p.
LA : Anglais
EA : It is well documented that both featural and configural information are important in visual face recognition. Less is known, however, about the nature of the information underlying haptic face recognition and whether or not this information is the same as in vision. In our experiments we found better within modal than crossmodal face recognition performance suggesting that face representations are largely specific to each modality. Moreover, this cost in crossmodal performance was found to be independent of differences in exploratory procedures across the modalities during encoding. We found that crossmodal face perception was most efficient when configural information of the facial features was preserved suggesting that configural information is shared across modalities. Our findings suggest that face information is processed in a similar manner across vision and touch but that qualitative differences in the nature of the information encoded underlies efficient within modal relative to crossmodal recognition.
CC : 002B18C13; 002A26E08
FD : Représentation; Familiarité stimulus; Face; Codage; Reconnaissance; Sensibilité tactile; Perception intermodale; Vision; Cognition; Homme
FG : Perception
ED : Representation; Stimulus familiarity; Face; Coding; Recognition; Tactile sensitivity; Intermodal perception; Vision; Cognition; Human
EG : Perception
SD : Representación; Familiaridad estímulo; Cara; Codificación; Reconocimiento; Sensibilidad tactil; Percepción intermodal; Visión; Cognición; Hombre
LO : INIST-11143.354000159522200050
ID : 07-0255152
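The server-format block above is a simple tag/value layout: a short field code, a colon, and a value, with multi-valued fields joined by "; ". A minimal Python sketch for reading such a block into a dictionary follows; the parsing rules are inferred from the record shown here, not from official Dilib documentation, so treat them as an assumption.

```python
# Hypothetical parser for the Inist server format displayed above.
# Assumption: every field line has the shape "<TAG> : <value>".
def parse_inist_server(text: str) -> dict:
    record = {}
    for line in text.splitlines():
        if " : " not in line:
            continue  # skip blank or malformed lines
        tag, value = line.split(" : ", 1)
        record[tag.strip()] = value.strip()
    return record

# Sample fields copied from the record above.
sample = """NO : PASCAL 07-0255152 INIST
ET : Are representations of unfamiliar faces independent of encoding modality?
LA : Anglais"""

rec = parse_inist_server(sample)
print(rec["NO"])  # → PASCAL 07-0255152 INIST
```

Multi-valued fields such as `AU` or `FD` could then be split further on "; " if individual values are needed.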

Links to Exploration step

Pascal:07-0255152

The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en" level="a">Are representations of unfamiliar faces independent of encoding modality?</title>
<author>
<name sortKey="Casey, Sarah J" sort="Casey, Sarah J" uniqKey="Casey S" first="Sarah J." last="Casey">Sarah J. Casey</name>
<affiliation>
<inist:fA14 i1="01">
<s1>School of Psychology and Institute of Neuroscience, Trinity College</s1>
<s2>Dublin</s2>
<s3>IRL</s3>
<sZ>1 aut.</sZ>
<sZ>2 aut.</sZ>
</inist:fA14>
</affiliation>
</author>
<author>
<name sortKey="Newell, Fiona N" sort="Newell, Fiona N" uniqKey="Newell F" first="Fiona N." last="Newell">Fiona N. Newell</name>
<affiliation>
<inist:fA14 i1="01">
<s1>School of Psychology and Institute of Neuroscience, Trinity College</s1>
<s2>Dublin</s2>
<s3>IRL</s3>
<sZ>1 aut.</sZ>
<sZ>2 aut.</sZ>
</inist:fA14>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">INIST</idno>
<idno type="inist">07-0255152</idno>
<date when="2007">2007</date>
<idno type="stanalyst">PASCAL 07-0255152 INIST</idno>
<idno type="RBID">Pascal:07-0255152</idno>
<idno type="wicri:Area/PascalFrancis/Corpus">000B52</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a">Are representations of unfamiliar faces independent of encoding modality?</title>
<author>
<name sortKey="Casey, Sarah J" sort="Casey, Sarah J" uniqKey="Casey S" first="Sarah J." last="Casey">Sarah J. Casey</name>
<affiliation>
<inist:fA14 i1="01">
<s1>School of Psychology and Institute of Neuroscience, Trinity College</s1>
<s2>Dublin</s2>
<s3>IRL</s3>
<sZ>1 aut.</sZ>
<sZ>2 aut.</sZ>
</inist:fA14>
</affiliation>
</author>
<author>
<name sortKey="Newell, Fiona N" sort="Newell, Fiona N" uniqKey="Newell F" first="Fiona N." last="Newell">Fiona N. Newell</name>
<affiliation>
<inist:fA14 i1="01">
<s1>School of Psychology and Institute of Neuroscience, Trinity College</s1>
<s2>Dublin</s2>
<s3>IRL</s3>
<sZ>1 aut.</sZ>
<sZ>2 aut.</sZ>
</inist:fA14>
</affiliation>
</author>
</analytic>
<series>
<title level="j" type="main">Neuropsychologia</title>
<title level="j" type="abbreviated">Neuropsychologia</title>
<idno type="ISSN">0028-3932</idno>
<imprint>
<date when="2007">2007</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
<seriesStmt>
<title level="j" type="main">Neuropsychologia</title>
<title level="j" type="abbreviated">Neuropsychologia</title>
<idno type="ISSN">0028-3932</idno>
</seriesStmt>
</fileDesc>
<profileDesc>
<textClass>
<keywords scheme="KwdEn" xml:lang="en">
<term>Coding</term>
<term>Cognition</term>
<term>Face</term>
<term>Human</term>
<term>Intermodal perception</term>
<term>Recognition</term>
<term>Representation</term>
<term>Stimulus familiarity</term>
<term>Tactile sensitivity</term>
<term>Vision</term>
</keywords>
<keywords scheme="Pascal" xml:lang="fr">
<term>Représentation</term>
<term>Familiarité stimulus</term>
<term>Face</term>
<term>Codage</term>
<term>Reconnaissance</term>
<term>Sensibilité tactile</term>
<term>Perception intermodale</term>
<term>Vision</term>
<term>Cognition</term>
<term>Homme</term>
</keywords>
</textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">It is well documented that both featural and configural information are important in visual face recognition. Less is known, however, about the nature of the information underlying haptic face recognition and whether or not this information is the same as in vision. In our experiments we found better within modal than crossmodal face recognition performance suggesting that face representations are largely specific to each modality. Moreover, this cost in crossmodal performance was found to be independent of differences in exploratory procedures across the modalities during encoding. We found that crossmodal face perception was most efficient when configural information of the facial features was preserved suggesting that configural information is shared across modalities. Our findings suggest that face information is processed in a similar manner across vision and touch but that qualitative differences in the nature of the information encoded underlies efficient within modal relative to crossmodal recognition.</div>
</front>
</TEI>
<inist>
<standard h6="B">
<pA>
<fA01 i1="01" i2="1">
<s0>0028-3932</s0>
</fA01>
<fA02 i1="01">
<s0>NUPSA6</s0>
</fA02>
<fA03 i2="1">
<s0>Neuropsychologia</s0>
</fA03>
<fA05>
<s2>45</s2>
</fA05>
<fA06>
<s2>3</s2>
</fA06>
<fA08 i1="01" i2="1" l="ENG">
<s1>Are representations of unfamiliar faces independent of encoding modality?</s1>
</fA08>
<fA09 i1="01" i2="1" l="ENG">
<s1>Advances in multisensory processes</s1>
</fA09>
<fA11 i1="01" i2="1">
<s1>CASEY (Sarah J.)</s1>
</fA11>
<fA11 i1="02" i2="1">
<s1>NEWELL (Fiona N.)</s1>
</fA11>
<fA12 i1="01" i2="1">
<s1>PAVANI (Francesco)</s1>
<s9>ed.</s9>
</fA12>
<fA12 i1="02" i2="1">
<s1>MURRAY (Micah)</s1>
<s9>ed.</s9>
</fA12>
<fA12 i1="03" i2="1">
<s1>SCHROEDER (Charles)</s1>
<s9>ed.</s9>
</fA12>
<fA14 i1="01">
<s1>School of Psychology and Institute of Neuroscience, Trinity College</s1>
<s2>Dublin</s2>
<s3>IRL</s3>
<sZ>1 aut.</sZ>
<sZ>2 aut.</sZ>
</fA14>
<fA15 i1="01">
<s1>Department of Cognitive Sciences and Education, University of Trento, Corso Bettini 31</s1>
<s2>38068 Rovereto</s2>
<s3>ITA</s3>
<sZ>1 aut.</sZ>
</fA15>
<fA15 i1="02">
<s1>Neuropsychology Division and Radiology Service, CHUV, Hôpital Nestlé, 5 avenue Pierre Decker</s1>
<s2>1011 Lausanne</s2>
<s3>CHE</s3>
<sZ>2 aut.</sZ>
</fA15>
<fA15 i1="03">
<s1>The Nathan S. Kline Institute for Psychiatric Research, 140 Old Orangeburg Road</s1>
<s2>Orangeburg, NY 10962</s2>
<s3>USA</s3>
<sZ>3 aut.</sZ>
</fA15>
<fA20>
<s1>506-513</s1>
</fA20>
<fA21>
<s1>2007</s1>
</fA21>
<fA23 i1="01">
<s0>ENG</s0>
</fA23>
<fA43 i1="01">
<s1>INIST</s1>
<s2>11143</s2>
<s5>354000159522200050</s5>
</fA43>
<fA44>
<s0>0000</s0>
<s1>© 2007 INIST-CNRS. All rights reserved.</s1>
</fA44>
<fA45>
<s0>3/4 p.</s0>
</fA45>
<fA47 i1="01" i2="1">
<s0>07-0255152</s0>
</fA47>
<fA60>
<s1>P</s1>
</fA60>
<fA61>
<s0>A</s0>
</fA61>
<fA64 i1="01" i2="1">
<s0>Neuropsychologia</s0>
</fA64>
<fA66 i1="01">
<s0>GBR</s0>
</fA66>
<fC01 i1="01" l="ENG">
<s0>It is well documented that both featural and configural information are important in visual face recognition. Less is known, however, about the nature of the information underlying haptic face recognition and whether or not this information is the same as in vision. In our experiments we found better within modal than crossmodal face recognition performance suggesting that face representations are largely specific to each modality. Moreover, this cost in crossmodal performance was found to be independent of differences in exploratory procedures across the modalities during encoding. We found that crossmodal face perception was most efficient when configural information of the facial features was preserved suggesting that configural information is shared across modalities. Our findings suggest that face information is processed in a similar manner across vision and touch but that qualitative differences in the nature of the information encoded underlies efficient within modal relative to crossmodal recognition.</s0>
</fC01>
<fC02 i1="01" i2="X">
<s0>002B18C13</s0>
</fC02>
<fC02 i1="02" i2="X">
<s0>002A26E08</s0>
</fC02>
<fC03 i1="01" i2="X" l="FRE">
<s0>Représentation</s0>
<s5>01</s5>
</fC03>
<fC03 i1="01" i2="X" l="ENG">
<s0>Representation</s0>
<s5>01</s5>
</fC03>
<fC03 i1="01" i2="X" l="SPA">
<s0>Representación</s0>
<s5>01</s5>
</fC03>
<fC03 i1="02" i2="X" l="FRE">
<s0>Familiarité stimulus</s0>
<s5>02</s5>
</fC03>
<fC03 i1="02" i2="X" l="ENG">
<s0>Stimulus familiarity</s0>
<s5>02</s5>
</fC03>
<fC03 i1="02" i2="X" l="SPA">
<s0>Familiaridad estímulo</s0>
<s5>02</s5>
</fC03>
<fC03 i1="03" i2="X" l="FRE">
<s0>Face</s0>
<s5>03</s5>
</fC03>
<fC03 i1="03" i2="X" l="ENG">
<s0>Face</s0>
<s5>03</s5>
</fC03>
<fC03 i1="03" i2="X" l="SPA">
<s0>Cara</s0>
<s5>03</s5>
</fC03>
<fC03 i1="04" i2="X" l="FRE">
<s0>Codage</s0>
<s5>04</s5>
</fC03>
<fC03 i1="04" i2="X" l="ENG">
<s0>Coding</s0>
<s5>04</s5>
</fC03>
<fC03 i1="04" i2="X" l="SPA">
<s0>Codificación</s0>
<s5>04</s5>
</fC03>
<fC03 i1="05" i2="X" l="FRE">
<s0>Reconnaissance</s0>
<s5>05</s5>
</fC03>
<fC03 i1="05" i2="X" l="ENG">
<s0>Recognition</s0>
<s5>05</s5>
</fC03>
<fC03 i1="05" i2="X" l="SPA">
<s0>Reconocimiento</s0>
<s5>05</s5>
</fC03>
<fC03 i1="06" i2="X" l="FRE">
<s0>Sensibilité tactile</s0>
<s5>06</s5>
</fC03>
<fC03 i1="06" i2="X" l="ENG">
<s0>Tactile sensitivity</s0>
<s5>06</s5>
</fC03>
<fC03 i1="06" i2="X" l="SPA">
<s0>Sensibilidad tactil</s0>
<s5>06</s5>
</fC03>
<fC03 i1="07" i2="X" l="FRE">
<s0>Perception intermodale</s0>
<s5>07</s5>
</fC03>
<fC03 i1="07" i2="X" l="ENG">
<s0>Intermodal perception</s0>
<s5>07</s5>
</fC03>
<fC03 i1="07" i2="X" l="SPA">
<s0>Percepción intermodal</s0>
<s5>07</s5>
</fC03>
<fC03 i1="08" i2="X" l="FRE">
<s0>Vision</s0>
<s5>08</s5>
</fC03>
<fC03 i1="08" i2="X" l="ENG">
<s0>Vision</s0>
<s5>08</s5>
</fC03>
<fC03 i1="08" i2="X" l="SPA">
<s0>Visión</s0>
<s5>08</s5>
</fC03>
<fC03 i1="09" i2="X" l="FRE">
<s0>Cognition</s0>
<s5>09</s5>
</fC03>
<fC03 i1="09" i2="X" l="ENG">
<s0>Cognition</s0>
<s5>09</s5>
</fC03>
<fC03 i1="09" i2="X" l="SPA">
<s0>Cognición</s0>
<s5>09</s5>
</fC03>
<fC03 i1="10" i2="X" l="FRE">
<s0>Homme</s0>
<s5>18</s5>
</fC03>
<fC03 i1="10" i2="X" l="ENG">
<s0>Human</s0>
<s5>18</s5>
</fC03>
<fC03 i1="10" i2="X" l="SPA">
<s0>Hombre</s0>
<s5>18</s5>
</fC03>
<fC07 i1="01" i2="X" l="FRE">
<s0>Perception</s0>
<s5>37</s5>
</fC07>
<fC07 i1="01" i2="X" l="ENG">
<s0>Perception</s0>
<s5>37</s5>
</fC07>
<fC07 i1="01" i2="X" l="SPA">
<s0>Percepción</s0>
<s5>37</s5>
</fC07>
<fN21>
<s1>169</s1>
</fN21>
</pA>
</standard>
<server>
<NO>PASCAL 07-0255152 INIST</NO>
<ET>Are representations of unfamiliar faces independent of encoding modality?</ET>
<AU>CASEY (Sarah J.); NEWELL (Fiona N.); PAVANI (Francesco); MURRAY (Micah); SCHROEDER (Charles)</AU>
<AF>School of Psychology and Institute of Neuroscience, Trinity College/Dublin/Irlande (1 aut., 2 aut.); Department of Cognitive Sciences and Education, University of Trento, Corso Bettini 31/38068 Rovereto/Italie (1 aut.); Neuropsychology Division and Radiology Service, CHUV, Hôpital Nestlé, 5 avenue Pierre Decker/1011 Lausanne/Suisse (2 aut.); The Nathan S. Kline Institute for Psychiatric Research, 140 Old Orangeburg Road/Orangeburg, NY 10962/Etats-Unis (3 aut.)</AF>
<DT>Publication en série; Niveau analytique</DT>
<SO>Neuropsychologia; ISSN 0028-3932; Coden NUPSA6; Royaume-Uni; Da. 2007; Vol. 45; No. 3; Pp. 506-513; Bibl. 3/4 p.</SO>
<LA>Anglais</LA>
<EA>It is well documented that both featural and configural information are important in visual face recognition. Less is known, however, about the nature of the information underlying haptic face recognition and whether or not this information is the same as in vision. In our experiments we found better within modal than crossmodal face recognition performance suggesting that face representations are largely specific to each modality. Moreover, this cost in crossmodal performance was found to be independent of differences in exploratory procedures across the modalities during encoding. We found that crossmodal face perception was most efficient when configural information of the facial features was preserved suggesting that configural information is shared across modalities. Our findings suggest that face information is processed in a similar manner across vision and touch but that qualitative differences in the nature of the information encoded underlies efficient within modal relative to crossmodal recognition.</EA>
<CC>002B18C13; 002A26E08</CC>
<FD>Représentation; Familiarité stimulus; Face; Codage; Reconnaissance; Sensibilité tactile; Perception intermodale; Vision; Cognition; Homme</FD>
<FG>Perception</FG>
<ED>Representation; Stimulus familiarity; Face; Coding; Recognition; Tactile sensitivity; Intermodal perception; Vision; Cognition; Human</ED>
<EG>Perception</EG>
<SD>Representación; Familiaridad estímulo; Cara; Codificación; Reconocimiento; Sensibilidad tactil; Percepción intermodal; Visión; Cognición; Hombre</SD>
<LO>INIST-11143.354000159522200050</LO>
<ID>07-0255152</ID>
</server>
</inist>
</record>
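The TEI header above carries the controlled keywords as `<term>` elements inside `<keywords>` blocks tagged with a scheme and an `xml:lang` attribute. A minimal Python sketch for pulling out the English terms from such a fragment is shown below; it uses only the standard library, and the inline fragment is a trimmed copy of the `<textClass>` block above (the full record would first need the `inist:` namespace prefix declared before it parses as well-formed XML).

```python
import xml.etree.ElementTree as ET

# Namespace of the xml:lang attribute (predefined in XML).
XML_NS = "http://www.w3.org/XML/1998/namespace"

# Trimmed copy of the <textClass> block from the TEI header above.
fragment = """<textClass>
<keywords scheme="KwdEn" xml:lang="en">
<term>Coding</term>
<term>Cognition</term>
<term>Face</term>
</keywords>
</textClass>"""

root = ET.fromstring(fragment)
english_terms = [
    term.text
    for kw in root.iter("keywords")
    if kw.get(f"{{{XML_NS}}}lang") == "en"
    for term in kw.iter("term")
]
print(english_terms)  # → ['Coding', 'Cognition', 'Face']
```

The same loop with `l == "fr"` on the `scheme="Pascal"` block would recover the French descriptors.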

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/HapticV1/Data/PascalFrancis/Corpus
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 000B52 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/PascalFrancis/Corpus/biblio.hfd -nk 000B52 | SxmlIndent | more

To add a link to this page in the Wicri network

{{Explor lien
   |wiki=    Ticri/CIDE
   |area=    HapticV1
   |flux=    PascalFrancis
   |étape=   Corpus
   |type=    RBID
   |clé=     Pascal:07-0255152
   |texte=   Are representations of unfamiliar faces independent of encoding modality?
}}

Wicri

This area was generated with Dilib version V0.6.23.
Data generation: Mon Jun 13 01:09:46 2016. Site generation: Wed Mar 6 09:54:07 2024