Exploration server on haptic devices

Warning: this site is under development!
Warning: this site is generated automatically from raw corpora.
The information has therefore not been validated.

Does My Face FIT?: A Face Image Task Reveals Structure and Distortions of Facial Feature Representation

Internal identifier: 002306 (Pmc/Curation); previous: 002305; next: 002307


Authors: Christina T. Fuentes; Catarina Runa [Portugal]; Xenxo Alvarez Blanco [Portugal]; Verónica Orvalho [Portugal]; Patrick Haggard

Source:

RBID: PMC:3793930

Abstract

Despite extensive research on face perception, few studies have investigated individuals’ knowledge about the physical features of their own face. In this study, 50 participants indicated the location of key features of their own face, relative to an anchor point corresponding to the tip of the nose, and the results were compared to the true location of the same individual’s features from a standardised photograph. Horizontal and vertical errors were analysed separately. An overall bias to underestimate vertical distances revealed a distorted face representation, with reduced face height. Factor analyses were used to identify separable subconfigurations of facial features with correlated localisation errors. Independent representations of upper and lower facial features emerged from the data pattern. The major source of variation across individuals was in representation of face shape, with a spectrum from tall/thin to short/wide representation. Visual identification of one’s own face is excellent, and facial features are routinely used for establishing personal identity. However, our results show that spatial knowledge of one’s own face is remarkably poor, suggesting that face representation may not contribute strongly to self-awareness.


URL:
DOI: 10.1371/journal.pone.0076805
PubMed: 24130790
PubMed Central: 3793930


The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">Does My Face FIT?: A Face Image Task Reveals Structure and Distortions of Facial Feature Representation</title>
<author>
<name sortKey="Fuentes, Christina T" sort="Fuentes, Christina T" uniqKey="Fuentes C" first="Christina T." last="Fuentes">Christina T. Fuentes</name>
<affiliation>
<nlm:aff id="aff1">
<addr-line>Institute of Cognitive Neuroscience, University College London, London, United Kingdom</addr-line>
 </nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Runa, Catarina" sort="Runa, Catarina" uniqKey="Runa C" first="Catarina" last="Runa">Catarina Runa</name>
<affiliation wicri:level="1">
<nlm:aff id="aff2">
<addr-line>Instituto de Telecomunicações, Porto Interactive Center, Universidade do Porto, Porto, Portugal</addr-line>
</nlm:aff>
<country xml:lang="fr">Portugal</country>
<wicri:regionArea>Instituto de Telecomunicações, Porto Interactive Center, Universidade do Porto, Porto</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Blanco, Xenxo Alvarez" sort="Blanco, Xenxo Alvarez" uniqKey="Blanco X" first="Xenxo Alvarez" last="Blanco">Xenxo Alvarez Blanco</name>
<affiliation wicri:level="1">
<nlm:aff id="aff2">
<addr-line>Instituto de Telecomunicações, Porto Interactive Center, Universidade do Porto, Porto, Portugal</addr-line>
</nlm:aff>
<country xml:lang="fr">Portugal</country>
<wicri:regionArea>Instituto de Telecomunicações, Porto Interactive Center, Universidade do Porto, Porto</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Orvalho, Veronica" sort="Orvalho, Veronica" uniqKey="Orvalho V" first="Verónica" last="Orvalho">Verónica Orvalho</name>
<affiliation wicri:level="1">
<nlm:aff id="aff2">
<addr-line>Instituto de Telecomunicações, Porto Interactive Center, Universidade do Porto, Porto, Portugal</addr-line>
</nlm:aff>
<country xml:lang="fr">Portugal</country>
<wicri:regionArea>Instituto de Telecomunicações, Porto Interactive Center, Universidade do Porto, Porto</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Haggard, Patrick" sort="Haggard, Patrick" uniqKey="Haggard P" first="Patrick" last="Haggard">Patrick Haggard</name>
<affiliation>
<nlm:aff id="aff1">
<addr-line>Institute of Cognitive Neuroscience, University College London, London, United Kingdom</addr-line>
 </nlm:aff>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PMC</idno>
<idno type="pmid">24130790</idno>
<idno type="pmc">3793930</idno>
<idno type="url">http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3793930</idno>
<idno type="RBID">PMC:3793930</idno>
<idno type="doi">10.1371/journal.pone.0076805</idno>
<date when="2013">2013</date>
<idno type="wicri:Area/Pmc/Corpus">002306</idno>
<idno type="wicri:Area/Pmc/Curation">002306</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a" type="main">Does My Face FIT?: A Face Image Task Reveals Structure and Distortions of Facial Feature Representation</title>
<author>
<name sortKey="Fuentes, Christina T" sort="Fuentes, Christina T" uniqKey="Fuentes C" first="Christina T." last="Fuentes">Christina T. Fuentes</name>
<affiliation>
<nlm:aff id="aff1">
<addr-line>Institute of Cognitive Neuroscience, University College London, London, United Kingdom</addr-line>
 </nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Runa, Catarina" sort="Runa, Catarina" uniqKey="Runa C" first="Catarina" last="Runa">Catarina Runa</name>
<affiliation wicri:level="1">
<nlm:aff id="aff2">
<addr-line>Instituto de Telecomunicações, Porto Interactive Center, Universidade do Porto, Porto, Portugal</addr-line>
</nlm:aff>
<country xml:lang="fr">Portugal</country>
<wicri:regionArea>Instituto de Telecomunicações, Porto Interactive Center, Universidade do Porto, Porto</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Blanco, Xenxo Alvarez" sort="Blanco, Xenxo Alvarez" uniqKey="Blanco X" first="Xenxo Alvarez" last="Blanco">Xenxo Alvarez Blanco</name>
<affiliation wicri:level="1">
<nlm:aff id="aff2">
<addr-line>Instituto de Telecomunicações, Porto Interactive Center, Universidade do Porto, Porto, Portugal</addr-line>
</nlm:aff>
<country xml:lang="fr">Portugal</country>
<wicri:regionArea>Instituto de Telecomunicações, Porto Interactive Center, Universidade do Porto, Porto</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Orvalho, Veronica" sort="Orvalho, Veronica" uniqKey="Orvalho V" first="Verónica" last="Orvalho">Verónica Orvalho</name>
<affiliation wicri:level="1">
<nlm:aff id="aff2">
<addr-line>Instituto de Telecomunicações, Porto Interactive Center, Universidade do Porto, Porto, Portugal</addr-line>
</nlm:aff>
<country xml:lang="fr">Portugal</country>
<wicri:regionArea>Instituto de Telecomunicações, Porto Interactive Center, Universidade do Porto, Porto</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Haggard, Patrick" sort="Haggard, Patrick" uniqKey="Haggard P" first="Patrick" last="Haggard">Patrick Haggard</name>
<affiliation>
<nlm:aff id="aff1">
<addr-line>Institute of Cognitive Neuroscience, University College London, London, United Kingdom</addr-line>
 </nlm:aff>
</affiliation>
</author>
</analytic>
<series>
<title level="j">PLoS ONE</title>
<idno type="eISSN">1932-6203</idno>
<imprint>
<date when="2013">2013</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass></textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">
<p>Despite extensive research on face perception, few studies have investigated individuals’ knowledge about the physical features of their own face. In this study, 50 participants indicated the location of key features of their own face, relative to an anchor point corresponding to the tip of the nose, and the results were compared to the true location of the same individual’s features from a standardised photograph. Horizontal and vertical errors were analysed separately. An overall bias to underestimate vertical distances revealed a distorted face representation, with reduced face height. Factor analyses were used to identify separable subconfigurations of facial features with correlated localisation errors. Independent representations of upper and lower facial features emerged from the data pattern. The major source of variation across individuals was in representation of face shape, with a spectrum from tall/thin to short/wide representation. Visual identification of one’s own face is excellent, and facial features are routinely used for establishing personal identity. However, our results show that spatial knowledge of one’s own face is remarkably poor, suggesting that face representation may not contribute strongly to self-awareness.</p>
</div>
</front>
<back>
<div1 type="bibliography">
<listBibl>
<biblStruct>
<analytic>
<author>
<name sortKey="Uddin, Lq" uniqKey="Uddin L">LQ Uddin</name>
</author>
<author>
<name sortKey="Kaplan, Jt" uniqKey="Kaplan J">JT Kaplan</name>
</author>
<author>
<name sortKey="Molnar Szakacs, I" uniqKey="Molnar Szakacs I">I Molnar-Szakacs</name>
</author>
<author>
<name sortKey="Zaidel, E" uniqKey="Zaidel E">E Zaidel</name>
</author>
<author>
<name sortKey="Iacoboni, M" uniqKey="Iacoboni M">M Iacoboni</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Rooney, B" uniqKey="Rooney B">B Rooney</name>
</author>
<author>
<name sortKey="Keyes, H" uniqKey="Keyes H">H Keyes</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Devue, C" uniqKey="Devue C">C Devue</name>
</author>
<author>
<name sortKey="Bredart, S" uniqKey="Bredart S">S Brédart</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bredart, S" uniqKey="Bredart S">S Brédart</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Brady, N" uniqKey="Brady N">N Brady</name>
</author>
<author>
<name sortKey="Campbell, M" uniqKey="Campbell M">M Campbell</name>
</author>
<author>
<name sortKey="Flaherty, M" uniqKey="Flaherty M">M Flaherty</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Brady, N" uniqKey="Brady N">N Brady</name>
</author>
<author>
<name sortKey="Campbell, M" uniqKey="Campbell M">M Campbell</name>
</author>
<author>
<name sortKey="Flaherty, M" uniqKey="Flaherty M">M Flaherty</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Keyes, H" uniqKey="Keyes H">H Keyes</name>
</author>
<author>
<name sortKey="Brady, N" uniqKey="Brady N">N Brady</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Corradi Dell Acqua, C" uniqKey="Corradi Dell Acqua C">C Corradi-Dell’Acqua</name>
</author>
<author>
<name sortKey="Hesse, Md" uniqKey="Hesse M">MD Hesse</name>
</author>
<author>
<name sortKey="Rumiati, Ri" uniqKey="Rumiati R">RI Rumiati</name>
</author>
<author>
<name sortKey="Fink, Gr" uniqKey="Fink G">GR Fink</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Casey, Sj" uniqKey="Casey S">SJ Casey</name>
</author>
<author>
<name sortKey="Newell, Fn" uniqKey="Newell F">FN Newell</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Fuentes, Ct" uniqKey="Fuentes C">CT Fuentes</name>
</author>
<author>
<name sortKey="Pazzaglia, M" uniqKey="Pazzaglia M">M Pazzaglia</name>
</author>
<author>
<name sortKey="Longo, Mr" uniqKey="Longo M">MR Longo</name>
</author>
<author>
<name sortKey="Scivoletto, G" uniqKey="Scivoletto G">G Scivoletto</name>
</author>
<author>
<name sortKey="Haggard, P" uniqKey="Haggard P">P Haggard</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Longo, Mr" uniqKey="Longo M">MR Longo</name>
</author>
<author>
<name sortKey="Haggard, P" uniqKey="Haggard P">P Haggard</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mundfrom, Dj" uniqKey="Mundfrom D">DJ Mundfrom</name>
</author>
<author>
<name sortKey="Shaw, Dg" uniqKey="Shaw D">DG Shaw</name>
</author>
<author>
<name sortKey="Ke, Tl" uniqKey="Ke T">TL Ke</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Longo, Mr" uniqKey="Longo M">MR Longo</name>
</author>
<author>
<name sortKey="Haggard, P" uniqKey="Haggard P">P Haggard</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gandevia, Sc" uniqKey="Gandevia S">SC Gandevia</name>
</author>
<author>
<name sortKey="Phegan, Cm" uniqKey="Phegan C">CM Phegan</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Palmer, Ar" uniqKey="Palmer A">AR Palmer</name>
</author>
<author>
<name sortKey="Strobeck, C" uniqKey="Strobeck C">C Strobeck</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Little, Ac" uniqKey="Little A">AC Little</name>
</author>
<author>
<name sortKey="Jones, Bc" uniqKey="Jones B">BC Jones</name>
</author>
<author>
<name sortKey="Burt, Dm" uniqKey="Burt D">DM Burt</name>
</author>
<author>
<name sortKey="Perrett, Di" uniqKey="Perrett D">DI Perrett</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ramon, M" uniqKey="Ramon M">M Ramon</name>
</author>
<author>
<name sortKey="Rossion, B" uniqKey="Rossion B">B Rossion</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Watson, Tl" uniqKey="Watson T">TL Watson</name>
</author>
<author>
<name sortKey="Clifford, Cwg" uniqKey="Clifford C">CWG Clifford</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Piepers, Dw" uniqKey="Piepers D">DW Piepers</name>
</author>
<author>
<name sortKey="Robbins, Ra" uniqKey="Robbins R">RA Robbins</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Tanaka, Jw" uniqKey="Tanaka J">JW Tanaka</name>
</author>
<author>
<name sortKey="Gordon, I" uniqKey="Gordon I">I Gordon</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Young, Aw" uniqKey="Young A">AW Young</name>
</author>
<author>
<name sortKey="Hellawell, D" uniqKey="Hellawell D">D Hellawell</name>
</author>
<author>
<name sortKey="Hay, Dc" uniqKey="Hay D">DC Hay</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Chamberlain, R" uniqKey="Chamberlain R">R Chamberlain</name>
</author>
<author>
<name sortKey="Mcmanus, Ic" uniqKey="Mcmanus I">IC McManus</name>
</author>
<author>
<name sortKey="Riley, H" uniqKey="Riley H">H Riley</name>
</author>
<author>
<name sortKey="Rankin, Q" uniqKey="Rankin Q">Q Rankin</name>
</author>
<author>
<name sortKey="Brunswick, N" uniqKey="Brunswick N">N Brunswick</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Zhou, G" uniqKey="Zhou G">G Zhou</name>
</author>
<author>
<name sortKey="Cheng, Z" uniqKey="Cheng Z">Z Cheng</name>
</author>
<author>
<name sortKey="Zhang, X" uniqKey="Zhang X">X Zhang</name>
</author>
<author>
<name sortKey="Wong, Acn" uniqKey="Wong A">ACN Wong</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Greenberg, Sn" uniqKey="Greenberg S">SN Greenberg</name>
</author>
<author>
<name sortKey="Goshen Gottstein, Y" uniqKey="Goshen Gottstein Y">Y Goshen-Gottstein</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Metzinger, T" uniqKey="Metzinger T">T Metzinger</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kanwisher, N" uniqKey="Kanwisher N">N Kanwisher</name>
</author>
<author>
<name sortKey="Yovel, G" uniqKey="Yovel G">G Yovel</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gauthier, I" uniqKey="Gauthier I">I Gauthier</name>
</author>
<author>
<name sortKey="Skudlarski, P" uniqKey="Skudlarski P">P Skudlarski</name>
</author>
<author>
<name sortKey="Gore, Jc" uniqKey="Gore J">JC Gore</name>
</author>
<author>
<name sortKey="Anderson, Aw" uniqKey="Anderson A">AW Anderson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Avery, Gc" uniqKey="Avery G">GC Avery</name>
</author>
<author>
<name sortKey="Day, Rh" uniqKey="Day R">RH Day</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bianchi, I" uniqKey="Bianchi I">I Bianchi</name>
</author>
<author>
<name sortKey="Savardi, U" uniqKey="Savardi U">U Savardi</name>
</author>
<author>
<name sortKey="Bertamini, M" uniqKey="Bertamini M">M Bertamini</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Savardi, U" uniqKey="Savardi U">U Savardi</name>
</author>
<author>
<name sortKey="Bianchi, I" uniqKey="Bianchi I">I Bianchi</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Tversky, B" uniqKey="Tversky B">B Tversky</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Shin, Ms" uniqKey="Shin M">MS Shin</name>
</author>
<author>
<name sortKey="Park, Sy" uniqKey="Park S">SY Park</name>
</author>
<author>
<name sortKey="Park, Sr" uniqKey="Park S">SR Park</name>
</author>
<author>
<name sortKey="Seol, Sh" uniqKey="Seol S">SH Seol</name>
</author>
<author>
<name sortKey="Kwon, Js" uniqKey="Kwon J">JS Kwon</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Costa, M" uniqKey="Costa M">M Costa</name>
</author>
<author>
<name sortKey="Menzani, M" uniqKey="Menzani M">M Menzani</name>
</author>
<author>
<name sortKey="Bitti, Per" uniqKey="Bitti P">PER Bitti</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Collishaw, Sm" uniqKey="Collishaw S">SM Collishaw</name>
</author>
<author>
<name sortKey="Hole, Gj" uniqKey="Hole G">GJ Hole</name>
</author>
<author>
<name sortKey="Schwaninger, A" uniqKey="Schwaninger A">A Schwaninger</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Rhodes, G" uniqKey="Rhodes G">G Rhodes</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Thomas, R" uniqKey="Thomas R">R Thomas</name>
</author>
<author>
<name sortKey="Press, C" uniqKey="Press C">C Press</name>
</author>
<author>
<name sortKey="Haggard, P" uniqKey="Haggard P">P Haggard</name>
</author>
</analytic>
</biblStruct>
</listBibl>
</div1>
</back>
</TEI>
<pmc article-type="research-article">
<pmc-dir>properties open_access</pmc-dir>
<front>
<journal-meta>
<journal-id journal-id-type="nlm-ta">PLoS One</journal-id>
<journal-id journal-id-type="iso-abbrev">PLoS ONE</journal-id>
<journal-id journal-id-type="publisher-id">plos</journal-id>
<journal-id journal-id-type="pmc">plosone</journal-id>
<journal-title-group>
<journal-title>PLoS ONE</journal-title>
</journal-title-group>
<issn pub-type="epub">1932-6203</issn>
<publisher>
<publisher-name>Public Library of Science</publisher-name>
<publisher-loc>San Francisco, USA</publisher-loc>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="pmid">24130790</article-id>
<article-id pub-id-type="pmc">3793930</article-id>
<article-id pub-id-type="publisher-id">PONE-D-13-26749</article-id>
<article-id pub-id-type="doi">10.1371/journal.pone.0076805</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Research Article</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>Does My Face FIT?: A Face Image Task Reveals Structure and Distortions of Facial Feature Representation</article-title>
<alt-title alt-title-type="running-head">Face Image Task</alt-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name>
<surname>Fuentes</surname>
<given-names>Christina T.</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Runa</surname>
<given-names>Catarina</given-names>
</name>
<xref ref-type="aff" rid="aff2">
<sup>2</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Blanco</surname>
<given-names>Xenxo Alvarez</given-names>
</name>
<xref ref-type="aff" rid="aff2">
<sup>2</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Orvalho</surname>
<given-names>Verónica</given-names>
</name>
<xref ref-type="aff" rid="aff2">
<sup>2</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Haggard</surname>
<given-names>Patrick</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
<xref ref-type="corresp" rid="cor1">
<sup>*</sup>
</xref>
</contrib>
</contrib-group>
<aff id="aff1">
<label>1</label>
<addr-line>Institute of Cognitive Neuroscience, University College London, London, United Kingdom</addr-line>
 </aff>
<aff id="aff2">
<label>2</label>
<addr-line>Instituto de Telecomunicações, Porto Interactive Center, Universidade do Porto, Porto, Portugal</addr-line>
</aff>
<contrib-group>
<contrib contrib-type="editor">
<name>
<surname>Costantini</surname>
<given-names>Marcello</given-names>
</name>
<role>Editor</role>
<xref ref-type="aff" rid="edit1"></xref>
</contrib>
</contrib-group>
<aff id="edit1">
<addr-line>University G. d'Annunzio, Italy</addr-line>
</aff>
<author-notes>
<corresp id="cor1">* E-mail:
<email>haggard@ucl.ac.uk</email>
</corresp>
<fn fn-type="conflict">
<p>
<bold>Competing Interests: </bold>
The authors have declared that no competing interests exist.</p>
</fn>
<fn fn-type="con">
<p>Conceived and designed the experiments: CTF VO PH. Performed the experiments: CTF. Contributed reagents/materials/analysis tools: CTF CR XB VO PH. Wrote the manuscript: CTF PH. </p>
</fn>
</author-notes>
<pub-date pub-type="collection">
<year>2013</year>
</pub-date>
<pub-date pub-type="epub">
<day>9</day>
<month>10</month>
<year>2013</year>
</pub-date>
<volume>8</volume>
<issue>10</issue>
<elocation-id>e76805</elocation-id>
<history>
<date date-type="received">
<day>27</day>
<month>6</month>
<year>2013</year>
</date>
<date date-type="accepted">
<day>3</day>
<month>9</month>
<year>2013</year>
</date>
</history>
<permissions>
<copyright-year>2013</copyright-year>
<copyright-holder>Fuentes et al</copyright-holder>
<license>
<license-p>This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.</license-p>
</license>
</permissions>
<abstract>
<p>Despite extensive research on face perception, few studies have investigated individuals’ knowledge about the physical features of their own face. In this study, 50 participants indicated the location of key features of their own face, relative to an anchor point corresponding to the tip of the nose, and the results were compared to the true location of the same individual’s features from a standardised photograph. Horizontal and vertical errors were analysed separately. An overall bias to underestimate vertical distances revealed a distorted face representation, with reduced face height. Factor analyses were used to identify separable subconfigurations of facial features with correlated localisation errors. Independent representations of upper and lower facial features emerged from the data pattern. The major source of variation across individuals was in representation of face shape, with a spectrum from tall/thin to short/wide representation. Visual identification of one’s own face is excellent, and facial features are routinely used for establishing personal identity. However, our results show that spatial knowledge of one’s own face is remarkably poor, suggesting that face representation may not contribute strongly to self-awareness.</p>
</abstract>
<funding-group>
<funding-statement>This research was supported by EU FP7 project VERE 257696, work package 1. PH was further supported by an ESRC Professorial Fellowship. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.</funding-statement>
</funding-group>
</article-meta>
</front>
<body>
<sec id="s1">
<title> Introduction</title>
<p>Face perception is a central topic in modern psychology. The field has overwhelmingly used visual stimuli and focussed on face recognition, even when considering perception of one’s own face [
<xref ref-type="bibr" rid="B1">1</xref>
]. People see their own face only rarely – vanishingly rarely until the recent ready availability of mirrors. Nevertheless, several studies indicate a specific mechanism involved in recognising one’s own face (e.g., [
<xref ref-type="bibr" rid="B2">2</xref>
], see also
<xref ref-type="bibr" rid="B3">3</xref>
for a review). Much of this literature has focussed on sensitivity to facial symmetry and its relation to effects of mirrors [
<xref ref-type="bibr" rid="B4">4</xref>
,
<xref ref-type="bibr" rid="B5">5</xref>
], and cerebral hemispheric specialisation [
<xref ref-type="bibr" rid="B6">6</xref>
]. Many visual face recognition studies suggest a superior and accurate visual representation of one’s own face [
<xref ref-type="bibr" rid="B3">3</xref>
]. However, the persistence of this advantage even when faces are inverted suggests that it relies on local rather than configural processing [
<xref ref-type="bibr" rid="B7">7</xref>
].</p>
<p>In general, the self-face visual recognition literature cannot readily distinguish between self-face processing based on familiarity with a visual image of one’s own face suitable for template matching, or based on structural knowledge about what one’s face is like (i.e., a face image or a hypothetical stored representation containing information about the positions of facial features relative to one another, akin to the body structural description [
<xref ref-type="bibr" rid="B8">8</xref>
]). Here we largely remove the visual recognition aspect of self-face processing to focus on the latter, structural representation aspect. Only one study has investigated somatosensory self-face perception [
<xref ref-type="bibr" rid="B9">9</xref>
], and found generally poor performance. Therefore, it remains unclear what people know about their own facial structure, and how this knowledge is stored and represented independent of a specific visual stimulus.</p>
<p>We recently developed tasks for studying the sensed position of body parts (Longo and Haggard, 2012), and stored models of one’s own body [
<xref ref-type="bibr" rid="B10">10</xref>
,
<xref ref-type="bibr" rid="B11">11</xref>
]. These representations both showed systematic patterns of distortion, which potentially indicate how spatial information about bodies is represented and stored in the brain. Here we report results on representation of one’s own facial features using a method that does not involve visual recognition. We show, first, that people make large errors in locating their own facial features, particularly underestimating face height. Second, we show through factor analysis that the representation of facial feature locations follows a characteristic structure. The patterns of localisation errors showed covariance across specific subsets of features, which may be relevant to identifying the organisation of face representation at a supra-featural, or configural, level. The overall structure of face representations implies an important distortion of face shape. Our work provides a novel and systematic approach to a classic question of Gestalt psychology: how are configurations of multiple features represented in the brain as a composite pattern? Our results may also be relevant to the considerable concern regarding one’s own facial structure and appearance in some individuals and cultures.</p>
</sec>
<sec sec-type="methods" id="s2">
<title>Methods</title>
<sec id="s2.1">
<title>Ethics Statement</title>
<p>All participants gave informed written consent. All experiments were approved by the local ethics committee at University College London.</p>
<p>Participants were seated in front of a computer screen in portrait orientation (Dell model 2007WFPb, measuring 43.5 cm vertical, 27.5 cm horizontal) which displayed only a small central dot. The position of the dot on the screen was randomised across trials. Participants were instructed to imagine their own face projected frontally, life-size on the screen, with the tip of the nose located at the dot. They used a mouse to indicate the locations corresponding to 11 landmark facial features. The figure reproduced as
<xref ref-type="fig" rid="pone-0076805-g001">Figure 1A</xref>
was shown to participants before the experiment to indicate the exact anatomical landmarks intended. Before each trial, a text label (e.g., “bottom of chin”, “centre of left eye”) briefly appeared centrally on the screen. Environmental lighting was controlled so that participants could not see any reflection of their face on the screen. Each landmark was judged five times in a random order. To quantify errors in perceived position of facial features, responses were later compared to the actual locations of those landmarks, obtained by taking a photograph under standardized conditions and rendering it at life-size on the same screen. The average horizontal (x) and vertical (y) errors for attempts to locate each facial landmark were calculated.</p>
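The error measure just described is straightforward to reproduce. A minimal Python sketch (our illustration, not the authors' code; array names and shapes are assumptions):

import numpy as np

# judged: (5 repeats, 11 landmarks, 2) mouse-click coordinates in cm,
# expressed relative to the nose-tip anchor shown on each trial.
# actual: (11 landmarks, 2) true positions measured from the life-size
# standardised photograph, in the same nose-tip-centred coordinates.
def localisation_errors(judged, actual):
    """Signed mean horizontal (x) and vertical (y) error per landmark."""
    return judged.mean(axis=0) - actual  # (11, 2) array of biases in cm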
<fig id="pone-0076805-g001" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0076805.g001</object-id>
<label>Figure 1</label>
<caption>
<title>Biases in face representation.</title>
<p>A. Schematic of feature locations used to instruct participants. B. Actual and mean represented locations. C. Average of 50 female faces reproduced with permission from
<ext-link ext-link-type="uri" xlink:href="http://www.perceptionlab.com">www.perceptionlab.com</ext-link>
. Blue arrows indicate mean judgement error for each feature. D. Average female face adjusted according to the mean represented locations of our participants. E, F. As for C, D, with the average of 50 male faces.</p>
</caption>
<graphic xlink:href="pone.0076805.g001"></graphic>
</fig>
<p>Fifty participants (24 female, average age 25 years) took part. The x data from left-sided landmarks (ears, nose and mouth edges, eyes) was reflected in the midline, and averaged with the corresponding right-sided landmark. This imposed an assumption of facial symmetry, but reduced the number of dependent variables and avoided possible confusion regarding the terms
<italic>left</italic>
and
<italic>right</italic>
in the context of the task. By analysing the pattern of errors, we aimed to investigate the internal stored representation of one’s own face.</p>
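The midline folding step admits an equally small sketch, assuming x = 0 at the nose-tip anchor (hypothetical names; left-sided x values are mirrored, then averaged with the right-sided homologue):

import numpy as np

def fold_left_onto_right(x_left, x_right):
    """Mirror left-sided x data about the midline (x = 0 at the nose
    tip) and average with the right-sided homologue; this imposes the
    symmetry assumption described above."""
    return (-np.asarray(x_left) + np.asarray(x_right)) / 2.0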
<p>Finally, a subset of 10 participants were asked to attend for a second session, in which the screen was rotated to landscape mode.</p>
</sec>
</sec>
<sec sec-type="results" id="s3">
<title>Results</title>
<p>The average error vectors are shown superimposed on a schematic face in
<xref ref-type="fig" rid="pone-0076805-g001">Figure 1</xref>
. They reveal large overall biases in locating facial landmarks. The anatomical structure of the face is very different in the horizontal and vertical dimensions. The horizontal dimension is characterised by symmetry and homology, while the vertical dimension lacks both these attributes. Therefore, we expected different patterns of error in the X and Y dimensions, and accordingly analysed each dimension separately. In the horizontal dimension, mouth and eye width are overestimated, while nose width is underestimated. In the vertical dimension, the hairline is represented as lower, and the chin as higher, than their true locations, suggesting that the face is represented as shorter than its true height. No simple geometric distortion can explain the
<italic>overall</italic>
pattern of biases: for example, the compression of face height may appear to be a regression of judgement towards the mean defined by the anchor point on the nose tip. However, eye and ear vertical positions appear to be unaffected by this bias, and the bias is absent in the horizontal dimension, suggesting it is not simply a matter of eccentricity. Moreover, Bonferroni-corrected testing showed significant biases for some facial features close to the anchor point, but not for those farther away (
<xref ref-type="table" rid="pone-0076805-t001">table 1</xref>
).</p>
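The paper does not name the test behind these comparisons; one standard reading is a one-sample t-test of each feature's mean error against zero, Bonferroni-corrected for the 7 features, sketched here with an assumed errors array:

import numpy as np
from scipy import stats

def bonferroni_biases(errors, alpha=0.05):
    """errors: (n_participants, 7) mean errors for one dimension (x or
    y). Returns t, uncorrected p, and significance at alpha / 7."""
    t, p = stats.ttest_1samp(errors, popmean=0.0, axis=0)
    return t, p, p < alpha / errors.shape[1]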
<table-wrap id="pone-0076805-t001" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0076805.t001</object-id>
<label>Table 1</label>
<caption>
<title>Average localisation errors for each feature in cm.</title>
</caption>
<table frame="hsides" rules="groups">
<colgroup span="1">
<col span="1"></col>
<col span="1"></col>
<col span="1"></col>
</colgroup>
<thead>
<tr>
<th rowspan="1" colspan="1">
<underline>Part</underline>
</th>
<th rowspan="1" colspan="1">
<underline>Mean Horizontal Error (cm) (SD)</underline>
</th>
<th rowspan="1" colspan="1">
<underline>Mean Vertical Error (cm) (SD)</underline>
</th>
</tr>
</thead>
<tbody>
<tr>
<td rowspan="1" colspan="1">Hairline</td>
<td rowspan="1" colspan="1">-0.0875 (0.2989)</td>
<td rowspan="1" colspan="1">
<bold>-3.1533 (1.8734)</bold>
</td>
</tr>
<tr>
<td rowspan="1" colspan="1">Chin</td>
<td rowspan="1" colspan="1">-0.0640 (0.3028)</td>
<td rowspan="1" colspan="1">
<bold>1.8987 (1.6650)</bold>
</td>
</tr>
<tr>
<td rowspan="1" colspan="1">Ear</td>
<td rowspan="1" colspan="1">0.0396 (1.5981)</td>
<td rowspan="1" colspan="1">0.3534 (1.7092)</td>
</tr>
<tr>
<td rowspan="1" colspan="1">Nose Bridge</td>
<td rowspan="1" colspan="1">
<bold>0.0735 (0.1401)</bold>
</td>
<td rowspan="1" colspan="1">-0.4734 (1.3835)</td>
</tr>
<tr>
<td rowspan="1" colspan="1">Nose</td>
<td rowspan="1" colspan="1">
<bold>0.4995 (0.6141)</bold>
</td>
<td rowspan="1" colspan="1">-0.0246 (0.5963)</td>
</tr>
<tr>
<td rowspan="1" colspan="1">Mouth</td>
<td rowspan="1" colspan="1">-0.3170 (1.0228)</td>
<td rowspan="1" colspan="1">
<bold>0.9060 (0.9188)</bold>
</td>
</tr>
<tr>
<td rowspan="1" colspan="1">Eyes</td>
<td rowspan="1" colspan="1">-0.2510 (1.0588)</td>
<td rowspan="1" colspan="1">0.2509 (1.4582)</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<fn>
<p>Values that are significantly different from 0 (p<.05, after Bonferroni correction for 7 tests) are shown in
<bold>bold type</bold>
.</p>
</fn>
</table-wrap-foot>
</table-wrap>
<p>In the ten participants who performed the task with the screen in portrait and landscape mode, we found no effects of screen orientation on judgement error, and no interaction between screen orientation and feature judged, in either X or Y dimensions (all F<1, all p>0.60).</p>
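That control is a 2 (orientation) x feature repeated-measures design. A sketch of such an analysis with statsmodels, run on synthetic stand-in data (not the authors' code or data):

import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# One mean vertical error per subject x orientation x feature cell,
# for 10 subjects, with random values standing in for real data.
rng = np.random.default_rng(0)
rows = [(s, o, f, rng.normal())
        for s in range(10)
        for o in ("portrait", "landscape")
        for f in ("hairline", "chin", "eyes", "mouth")]
df = pd.DataFrame(rows, columns=["subject", "orientation", "feature", "err_y"])

# Two-way repeated-measures ANOVA: main effects of orientation and
# feature, plus their interaction (the terms tested in the text).
print(AnovaRM(df, depvar="err_y", subject="subject",
              within=["orientation", "feature"]).fit())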
<p>To investigate the underlying
<italic>structure</italic>
of the face representation shown in
<xref ref-type="fig" rid="pone-0076805-g001">Figure 1</xref>
, we applied separate factor analyses to x and y judgement errors (
<xref ref-type="supplementary-material" rid="pone.0076805.s001">tables S1</xref>
and
<xref ref-type="supplementary-material" rid="pone.0076805.s002">S2</xref>
). The ratio of measurements-to-cases falls within the guideline range for exploratory factor analysis [
<xref ref-type="bibr" rid="B12">12</xref>
]. Principal components were extracted, and varimax rotated. Factors with eigenvalues over 1 were retained (
<xref ref-type="table" rid="pone-0076805-t002">table 2</xref>
and
<xref ref-type="supplementary-material" rid="pone.0076805.s003">Figure S1</xref>
). </p>
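The extraction pipeline (principal components of the error correlations, Kaiser retention, varimax rotation) can be written in a few lines of NumPy. The authors presumably used a standard statistics package; the following is our own sketch of the same textbook steps:

import numpy as np

def varimax(L, tol=1e-8, max_iter=500):
    """Varimax rotation of a loadings matrix L (features x factors)."""
    p, k = L.shape
    R, var = np.eye(k), 0.0
    for _ in range(max_iter):
        Lr = L @ R
        u, s, vt = np.linalg.svd(
            L.T @ (Lr ** 3 - Lr @ np.diag((Lr ** 2).sum(axis=0)) / p))
        R = u @ vt
        if s.sum() < var * (1 + tol):
            break
        var = s.sum()
    return L @ R

def pca_varimax(X):
    """X: (n_participants, n_features) of x or y localisation errors.
    Returns varimax-rotated principal-component loadings for factors
    with eigenvalues over 1, plus those eigenvalues."""
    vals, vecs = np.linalg.eigh(np.corrcoef(X, rowvar=False))
    order = np.argsort(vals)[::-1]
    vals, vecs = vals[order], vecs[:, order]
    keep = vals > 1.0  # eigenvalue-over-1 criterion, as in the text
    return varimax(vecs[:, keep] * np.sqrt(vals[keep])), vals[keep]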
<table-wrap id="pone-0076805-t002" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0076805.t002</object-id>
<label>Table 2</label>
<caption>
<title>Factor scores for horizontal X and vertical Y components.</title>
</caption>
<table frame="hsides" rules="groups">
<colgroup span="1">
<col span="1"></col>
<col span="1"></col>
<col span="1"></col>
<col span="1"></col>
<col span="1"></col>
<col span="1"></col>
</colgroup>
<thead>
<tr>
<th rowspan="1" colspan="1">
<underline>Factor</underline>
</th>
<th rowspan="1" colspan="1">
<underline>X1</underline>
</th>
<th rowspan="1" colspan="1">
<underline>X2</underline>
</th>
<th rowspan="1" colspan="1">
<underline>X3</underline>
</th>
<th rowspan="1" colspan="1">
<underline>Y1</underline>
</th>
<th rowspan="1" colspan="1">
<underline>Y2</underline>
</th>
</tr>
</thead>
<tbody>
<tr>
<td rowspan="1" colspan="1">Eigenvalue</td>
<td rowspan="1" colspan="1">2.75</td>
<td rowspan="1" colspan="1">1.70</td>
<td rowspan="1" colspan="1">1.05</td>
<td rowspan="1" colspan="1">3.09</td>
<td rowspan="1" colspan="1">1.84</td>
</tr>
<tr>
<td rowspan="1" colspan="1">Variance proportion</td>
<td rowspan="1" colspan="1">39%</td>
<td rowspan="1" colspan="1">24%</td>
<td rowspan="1" colspan="1">15%</td>
<td rowspan="1" colspan="1">44%</td>
<td rowspan="1" colspan="1">26%</td>
</tr>
<tr>
<td rowspan="1" colspan="1">Hairline</td>
<td rowspan="1" colspan="1">-0.00134</td>
<td rowspan="1" colspan="1">0.91808</td>
<td rowspan="1" colspan="1">-0.08091</td>
<td rowspan="1" colspan="1">0.86393</td>
<td rowspan="1" colspan="1">-0.14580</td>
</tr>
<tr>
<td rowspan="1" colspan="1">Chin</td>
<td rowspan="1" colspan="1">-0.02570</td>
<td rowspan="1" colspan="1">-0.01205</td>
<td rowspan="1" colspan="1">0.97501</td>
<td rowspan="1" colspan="1">-0.45231</td>
<td rowspan="1" colspan="1">0.76155</td>
</tr>
<tr>
<td rowspan="1" colspan="1">Nose bridge</td>
<td rowspan="1" colspan="1">0.09507</td>
<td rowspan="1" colspan="1">0.89391</td>
<td rowspan="1" colspan="1">0.07555</td>
<td rowspan="1" colspan="1">0.89044</td>
<td rowspan="1" colspan="1">-0.14270</td>
</tr>
<tr>
<td rowspan="1" colspan="1">Nose edge</td>
<td rowspan="1" colspan="1">0.66612</td>
<td rowspan="1" colspan="1">-0.24710</td>
<td rowspan="1" colspan="1">-0.11494</td>
<td rowspan="1" colspan="1">0.21907</td>
<td rowspan="1" colspan="1">0.77971</td>
</tr>
<tr>
<td rowspan="1" colspan="1">Mouth</td>
<td rowspan="1" colspan="1">0.88904</td>
<td rowspan="1" colspan="1">0.09058</td>
<td rowspan="1" colspan="1">-0.14932</td>
<td rowspan="1" colspan="1">-0.17919</td>
<td rowspan="1" colspan="1">0.92808</td>
</tr>
<tr>
<td rowspan="1" colspan="1">Eye</td>
<td rowspan="1" colspan="1">0.90951</td>
<td rowspan="1" colspan="1">0.14324</td>
<td rowspan="1" colspan="1">0.01498</td>
<td rowspan="1" colspan="1">0.92128</td>
<td rowspan="1" colspan="1">-0.02805</td>
</tr>
<tr>
<td rowspan="1" colspan="1">Ear</td>
<td rowspan="1" colspan="1">0.78575</td>
<td rowspan="1" colspan="1">0.14074</td>
<td rowspan="1" colspan="1">0.23051</td>
<td rowspan="1" colspan="1">0.32930</td>
<td rowspan="1" colspan="1">0.24252</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<fn>
<p>Only factors with eigenvalues over 1 are shown.</p>
</fn>
</table-wrap-foot>
</table-wrap>
<p>For horizontal errors, we identified three retainable factors, which we label X1, X2, X3 for convenience, corresponding to the principal, independent sources of variability in horizontal judgement errors for facial features. The first factor (X1) suggested a tendency to expand facial width outward from the midline. It loaded strongly and roughly equally on all lateralised structures (eye, mouth, ear, nose), but not on midline structures (centre of hairline, bridge of nose, chin). The second factor (X2) suggested lateral distortion of the upper face. It loaded largely on the hairline and nose bridge. The third factor (X3) suggested lateral distortion of the lower face, loading almost exclusively on the chin. For analysis of vertical errors, only two factors were retained. The first (Y1) loaded strongly on upper face structures (eyes), including midline structures (nose bridge, hairline), but with some modest negative loading on the chin. This factor suggested a vertical expansion of the face from its centre. The loadings of the second factor (Y2) on lower face structures (mouth, nose edges, chin) suggest a vertical shift confined to the lower face.</p>
<p>We investigated the relation between the factors underlying face representation and our participants’ actual facial features, as measured from photos. Since factor X1 was interpreted as the width of the face, we correlated scores on this factor with the actual ear-to-ear distance. Since factor Y1 was interpreted as the vertical height of the face, we correlated it with the actual hairline-to-chin distance. We found no associations between represented and actual facial dimensions (r=-0.036 NS and 0.016 NS, respectively).</p>
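Each of these checks reduces to a correlation between per-participant factor scores and a measured distance; for example (illustrative names, scipy assumed):

from scipy.stats import pearsonr

def represented_vs_actual(factor_scores, measured_cm):
    """Pearson r between one factor's scores (e.g. X1) and a measured
    facial dimension (e.g. ear-to-ear distance), one value each per
    participant."""
    return pearsonr(factor_scores, measured_cm)  # (r, p)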
<p>These factor solutions carry important information about the internal structure of horizontal and vertical face representation. Factors X1, X2, Y1 and Y2 all loaded on more than one facial feature. The loading patterns suggest complexes of two or more individual features that group together, and which covary across the face representations of different individuals. By this means, we could identify separable representations of lateral and midline horizontal facial features, and separable representations of upper and lower face vertical structure. The effects of varying each factor on an average face are shown as vectors in
<xref ref-type="supplementary-material" rid="pone.0076805.s003">Figure S1</xref>
, and pictorially in
<xref ref-type="supplementary-material" rid="pone.0076805.s004">Figure S2</xref>
.</p>
<p>We also investigated the overall geometry of face representation by seeking an inter-domain association between factors affecting horizontal and vertical errors. We used canonical correlation to identify the principal associations between our horizontal factors (X1, X2) and vertical factors (Y1, Y2).</p>
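Canonical correlations between the two factor-score sets, and the Wilks' Lambda statistic reported below, follow from a standard construction; this sketch is our implementation of the textbook formulas, not the authors' code:

import numpy as np

def canonical_correlations(X, Y):
    """Canonical correlations between score sets X (n x p), Y (n x q),
    via QR orthonormalisation of the centred columns."""
    qx, _ = np.linalg.qr(X - X.mean(axis=0))
    qy, _ = np.linalg.qr(Y - Y.mean(axis=0))
    return np.linalg.svd(qx.T @ qy, compute_uv=False)  # descending rhos

def wilks_rao_F(rho, n, p, q):
    """Wilks' Lambda over all canonical roots, with Rao's F approximation."""
    lam = np.prod(1.0 - rho ** 2)
    s = np.sqrt((p ** 2 * q ** 2 - 4) / (p ** 2 + q ** 2 - 5))
    df1 = p * q
    df2 = s * (n - 1 - (p + q + 1) / 2) - p * q / 2 + 1
    F = (1 - lam ** (1 / s)) / lam ** (1 / s) * df2 / df1
    return lam, F, df1, df2

As a consistency check, reading the variance proportions reported below as squared canonical correlations gives Lambda = (1 - 0.485)(1 - 0.017) ≈ 0.506, and with n = 50 and p = q = 2 the Rao approximation yields F(4, 92) ≈ 9.3, matching the quoted statistics.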
<p>The first canonical variate accounted for 48.5% of the variance between the horizontal and vertical factors and was highly significant (Wilks’ Lambda 0.506, approximated by F(4,92)=9.34, p<.001). The standardised weights showed that the canonical variate related X1 (weighting 0.99) negatively to Y1 (-0.85) and positively, though less strongly, to Y2 (0.53). In contrast, factor X2 made little contribution to this inter-domain association (weighting 0.12), suggesting that it constituted an independent aspect of facial structure. The combination of weightings in the first canonical variate is readily interpretable as face aspect ratio, or 2D shape. The lateral shift of eyes, mouth edges, ears and nose captured by factor X1 was associated with a downward shift of the hairline and nose-bridge (captured by Y1), and some upward shift of the mouth, nose edges and chin (captured by Y2). That is, the lateral expansion of the face was strongly associated with a vertical compression towards the face centre, suggesting that the face aspect ratio is the major structural principle of face representation. The second canonical variate explained only 1.7% of the shared variance between factors, and was far from significant (p=0.37). Factor X3 was excluded from the inter-domain analysis, as its loading was largely confined to a single feature. However, re-running the analysis with this factor included had only small effects on weightings of inter-domain association and did not change the pattern of inference.
<xref ref-type="fig" rid="pone-0076805-g002">Figure 2A</xref>
shows the vectors associated with the major loadings (>0.4) of each factor, adjusted by the factor’s weighting in the canonical variate.
<xref ref-type="fig" rid="pone-0076805-g002">Figure 2B</xref>
shows the face images implied by a positive and negative unit score on the canonical variate.</p>
<fig id="pone-0076805-g002" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0076805.g002</object-id>
<label>Figure 2</label>
<caption>
<title>Association between horizontal and vertical distortion factors demonstrates variation in representation of face shape across individuals.</title>
<p>Results of a canonical correlation between the horizontal (X1,X2) and vertical (Y1,Y2) factors. A. Vectors showing the principal feature loadings (>0.4 or <-0.4) of the factors, adjusted by the coefficients indicating important (>0.4 or <-0.4) contributions to the canonical variate. The vector lengths are shown at 4x the actual values for visual clarity. Note the negative sign for Y1 coefficient. B. Average female and male faces implied by a low and high score on the canonical variate. Note that the canonical variate separates long and thin from short and wide face representations.</p>
</caption>
<graphic xlink:href="pone.0076805.g002"></graphic>
</fig>
</sec>
<sec sec-type="discussion" id="s4">
<title>Discussion</title>
<p>We have developed a new method to investigate stored knowledge about the “face image”, or structural arrangement of one’s own facial features. Importantly, this method allows the structural description of the face to be investigated independently of visual recognition.</p>
<p>Analyses of errors in locating facial landmarks relative to the tip of the nose suggested an internal representation or model of one’s own face, with characteristic structure. We first showed an overall bias to represent face shape as shorter than it really is. This bias was unrelated to the actual height and width of an individual’s face. Second, we showed that the most prominent signature of different individuals’ overall face representations is the extent to which they express a set of associated factors that code for tall/thin vs short/wide face representation. This recalls similar shape distortions for the position sense of the hand [
<xref ref-type="bibr" rid="B13">13</xref>
], and for the body image [
<xref ref-type="bibr" rid="B10">10</xref>
]. Since the shape and size of body parts are not directly signalled by any somatosensory receptor [
<xref ref-type="bibr" rid="B14">14</xref>
], it may be unsurprising that face representation is non-veridical. However, our results show, for the first time, that errors in facial representation are not simply random noise, or regression to the mean, but have a systematic structure.</p>
<p>One striking component of this structure was the aspect ratio defined by facial features. We investigated the horizontal and vertical structure of face representation in two independent analyses. We then investigated the association between these dimensions, and found that facial aspect ratio emerged as a prominent feature of the data pattern. Our data therefore provide strong convergent evidence that aspect ratio is a major source of variation in face representation. Not only are people poor at estimating the shape of their own face (
<xref ref-type="fig" rid="pone-0076805-g001">Figure 1</xref>
), but the principal source of variation across individuals is in the biased representation of face shape.</p>
<p>A second clear component of face structure was the separation between upper and lower facial features. For most of the factors we extracted, we found that high loadings on the upper face were accompanied by low loadings on the lower face, or
<italic>vice versa</italic>
. This dissociation could reflect innervation by different branches of the trigeminal nerve, or it could reflect different functions of the upper face (gaze, attention) and lower face (speech, eating). In any case, our data confirm a fundamental division in face
<italic>representation</italic>
, as opposed to face perception, between upper and lower face.</p>
<p>Third, we found important misrepresentations of the lateral position of midline structures. Interestingly, these midline shifts occurred independently for the upper face (factor X2) and lower face (factor X3), providing further strong evidence for independent representation of upper and lower face, but this time from the orthogonal, horizontal dimension of representation. We note that factor X3 requires a more cautious interpretation, given the marginal eigenvalue and loading on a single feature (the chin). The two midline shift factors could be interpreted as forehead and mandibular asymmetry, respectively. The importance of symmetry in developmental and evolutionary biology is widely accepted [
<xref ref-type="bibr" rid="B15">15</xref>
], and fluctuating asymmetry is also thought to be used as a proxy for biological quality in mate selection [
<xref ref-type="bibr" rid="B16">16</xref>
]. Alternatively, our findings may reflect brain functions underlying face representation, rather than sensitivity to body morphology. Neuroscientific studies suggest that the two cerebral hemispheres may play different roles in face perception [
<xref ref-type="bibr" rid="B17">17</xref>
]. Variation across individuals in such hemispheric specialization might also explain asymmetric representation of one’s own face.</p>
<p>Distortions in face representation have been widely reported in visual perception. For example, one study using adaptation procedures suggested that aspect ratio was a core component of face coding in the human brain [
<xref ref-type="bibr" rid="B18">18</xref>
]. However, that study did not specifically test for
<italic>other</italic>
distortions of face coding, apart from shape, and could design only a limited range of stimuli to test dimensions of coding hypothesised a priori. In our approach, by contrast, the key dimensions of face coding emerge from the pattern of participants’ responses, rather than by experimenters’ choice of stimulus set.</p>
<sec id="s4.1">
<title>Configural processing</title>
<p>Models of face perception distinguish between information about individual facial features, and ‘holistic’ or ‘configural’ information about spacing between features [
<xref ref-type="bibr" rid="B19">19</xref>
]. Psychophysical studies, for example using the composite face effect, confirm that configural information plays an important role in face perception [
<xref ref-type="bibr" rid="B20">20</xref>
,
<xref ref-type="bibr" rid="B21">21</xref>
], and that this information is processed ‘holistically’. However, the structure of the underlying Gestalt or face configuration is not known. Most previous studies have focussed on spatial relations between facial features that are either hypothesised a priori, or motivated by general processing considerations independent of face perception. These include relations between upper and lower, and between left and right, facial features [
<xref ref-type="bibr" rid="B17">17</xref>
]. In contrast, the Face Image Task (FIT) provides a new, hypothesis-free method for investigating how multiple features are combined in configural representations, at least for representation of one’s own face.</p>
<p>In particular, our factor method extracted distinct sets of features whose representations tended to covary, even though we did not impose such a pattern of variation by designing our stimuli, and even though only one feature was ever judged at a time. This grouping of features was not simply defined by proximity (e.g., the edges of the nose grouped with the chin in factor Y2, not with the eyes, despite being closer to the latter than to the former). We suggest that such feature grouping may underlie configural face processing, and could provide a useful data-driven method for identifying what structural information is actually stored in the hypothesised configural representation. Configural processing might reflect precise representation of the spatial relations of features
<italic>within</italic>
a group, while spatial relations
<italic>between</italic>
groups of features might be less precisely represented. These findings generate testable predictions for future face-recognition experiments. For example, laterally shifting ears relative to eyes should be readily detectable, due to the common high loadings of these features on factor X1. But vertically shifting ears relative to eyes should be less detectable, since these features are not strongly grouped by any important factor.</p>
</sec>
<sec id="s4.2">
<title>Perceptual and productive self-representation</title>
<p>Our results show that the structural knowledge about one’s own facial features is remarkably poor. This contrasts with numerous results in visual self-face recognition showing that self-face processing is remarkably good, and superior to processing of other faces (e.g., [
<xref ref-type="bibr" rid="B7">7</xref>
]). Our results suggest that the internal
<italic>representation</italic>
of the face is strongly and systematically distorted, but we have no difficulty in recognising much smaller distortions when
<italic>viewing</italic>
faces (
<xref ref-type="fig" rid="pone-0076805-g001">Figure 1</xref>
). This points to a dissociation between the processes of matching visual input to a perceptual template, and the processes of accessing structural representations directly for purposes of reproducing them. Artists often improve their face drawing skills by learning geometric rules regarding the spacing of facial features. This may be considered a transfer of training from perceptual representation to productive representation. Interestingly, this process is accompanied by strengthened representation of local featural detail in face perception, at the expense of holistic, configural processing [
<xref ref-type="bibr" rid="B22">22</xref>
,
<xref ref-type="bibr" rid="B23">23</xref>
]. Comparisons of self-face and other-face processing also suggest a dominance of local over configural information for one’s own face [
<xref ref-type="bibr" rid="B24">24</xref>
]. Our data suggest that configural information about one’s own face is also poorly represented, because there are systematic biases in judgements about feature locations. Nevertheless, we found that features grouped together by virtue of loading on a single factor. This suggests that some configural structure to face representation is present, albeit of limited accuracy.</p>
<p>In addition, our results offer a dramatic example of the asymmetry between fluent, automatic, stimulus-driven access to object representation, and the limited accessibility of such object representations to the kind of deliberate controlled processing involved in our task. Even our own face appears to be impenetrable to controlled cognition. It is well-known from memory research that recognition is superior to recall. In contrast, the everyday concept of self-awareness implies an opposite pattern. We do not need to recognise our thoughts and mental states as ours. Rather, a stable, persistent core self is held to be directly known, and to provide an origin for mental states, attitudes and actions. This account of the self has recently been questioned [
<xref ref-type="bibr" rid="B25">25</xref>
]. Our approach suggests that bodily self-knowledge is poor, even for elements such as the face, which may be important for personal identity. Therefore, if there is a stable core self underlying self-identity, knowledge about the physical structure of one’s own face does not appear to be strongly linked to it.</p>
</sec>
<sec id="s4.3">
<title>Specificity</title>
<p>It is unclear whether the distortions reported here are specific to representing one’s own face, or indeed to faces as a category. Identifying suitable objects for a control task is problematic. The quality and quantity of experience we have with other people’s faces, and with non-face objects, is entirely different from the experience of our own face. Controlling for modality, familiarity, prototypicality and other relevant factors is therefore difficult. Further, the features of non-face objects cannot match those of faces in number, salience and configuration, almost by definition. Thus, the representation of information about faces cannot easily be compared to representation of other objects. Many perceptual studies suggest a specialised brain system for face processing [
<xref ref-type="bibr" rid="B26">26</xref>
], consistent with specificity. In addition, processing of one’s own face may involve a specialised network not used, or used to a lesser extent, for processing of other faces [
<xref ref-type="bibr" rid="B3">3</xref>
]. Comparisons between perception of faces and of non-face objects generally focus on neural
<italic>processes</italic>
, reflecting the difficulty of comparing the
<italic>content</italic>
of information represented [
<xref ref-type="bibr" rid="B27">27</xref>
].</p>
<p>For these reasons, it remains unclear if our effects are specific to representations of one’s own face. However, the bias towards short and wide face representation recalls similar biases for hands [
<xref ref-type="bibr" rid="B11">11</xref>
] and body shape [
<xref ref-type="bibr" rid="B10">10</xref>
]. The literature on visual perception and memory for shape does not suggest similar distortions for other objects. For example, people robustly overestimate vertical visual distances compared to horizontal distances [
<xref ref-type="bibr" rid="B28">28</xref>
], whereas we found a striking 27.7% underestimation of face height with relatively unbiased representation of face width (
<xref ref-type="fig" rid="pone-0076805-g001">Figure 1</xref>
). A previous study reported systematic overestimates of one’s own head size [
<xref ref-type="bibr" rid="B29">29</xref>
]. However, this conclusion was based on drawing outlines rather than locating features, and more specific analyses identified primarily width overestimation rather than height overestimation [
<xref ref-type="bibr" rid="B30">30</xref>
]. Classic studies of memory for feature locations report several Gestalt-type distortions of spatial representation, but do not mention distortions of aspect ratio [
<xref ref-type="bibr" rid="B31">31</xref>
]. The extensive literature on memory representations for complex figures [
<xref ref-type="bibr" rid="B32">32</xref>
] scarcely mentions distortions of shape – yet it seems unlikely that bias and variability as striking as those we have found for face representation would simply be overlooked. Therefore, we tentatively suggest that the effects reported here may be face-specific, but more research is needed.</p>
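<p>To make the aspect-ratio comparison concrete, here is a minimal sketch of how such a bias can be computed from judged versus true landmark coordinates, expressed relative to a nose-tip anchor at the origin. The coordinates are invented for illustration; only the roughly 28% height shrinkage mirrors the reported effect.</p>
<preformat>
# Sketch: face height/width bias from landmark coordinates (x, y) given
# relative to the nose-tip anchor. All coordinates are invented.
true_pts   = {"hairline": (0, 90), "chin": (0, -60),
              "left_cheek": (-55, 0), "right_cheek": (55, 0)}
judged_pts = {"hairline": (0, 62), "chin": (0, -46),
              "left_cheek": (-54, 2), "right_cheek": (53, 1)}

def height(p): return p["hairline"][1] - p["chin"][1]
def width(p):  return p["right_cheek"][0] - p["left_cheek"][0]

h_bias = 100 * (height(judged_pts) - height(true_pts)) / height(true_pts)
w_bias = 100 * (width(judged_pts) - width(true_pts)) / width(true_pts)
print(f"height bias: {h_bias:+.1f}%, width bias: {w_bias:+.1f}%")
# Negative height bias with near-zero width bias gives the short/wide pattern.
</preformat>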
</sec>
<sec id="s4.4">
<title>Alternative explanations</title>
<p>Could the factor structure we identified arise artefactually, from some process other than face representation? One possibility is a simple rotational error. Any head tilt in the facial photographs we used to measure judgement accuracy, or in the internal representation of the face that participants used to locate features, would produce systematic errors in judging the positions of features. The misrepresentation of face shape cannot be explained in this way because shape is invariant under rotation. However, some of the other distortions we noted could potentially be due to rotation. Tilt of the head (canting) is particularly likely [
<xref ref-type="bibr" rid="B33">33</xref>
], and is known to influence face recognition [
<xref ref-type="bibr" rid="B34">34</xref>
]. The pattern of errors would depend on the precise centre of rotation. For example, a tilt of the head around the centre of the face would cause equal and opposite X shifts in the hairline and chin. Crucially, our analyses would place these shifts in the same factor, with equal and opposite loadings, because the two shifts are perfectly correlated. In fact, we found that hairline and chin shifts were associated with orthogonal factors. Therefore errors in feature judgements do not appear to be due to face rotation.</p>
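<p>The rotation argument can be checked numerically. The following is a minimal sketch, with invented coordinates and tilt angles: rotating hairline and chin landmarks about the face centre yields perfectly anticorrelated horizontal shifts, which factor analysis would place on a single factor with opposite loadings, contrary to the orthogonal factors actually observed.</p>
<preformat>
# Sketch: a pure head tilt about the face centre produces equal and opposite
# X shifts at the hairline and chin. Coordinates and angles are invented.
import numpy as np

rng = np.random.default_rng(1)
hairline, chin = np.array([0.0, 75.0]), np.array([0.0, -75.0])
tilts = np.deg2rad(rng.normal(0.0, 3.0, size=50))  # small random cants

def rot_x(p, a):  # x coordinate of point p rotated by angle a
    return np.cos(a) * p[0] - np.sin(a) * p[1]

hair_x = np.array([rot_x(hairline, a) for a in tilts])
chin_x = np.array([rot_x(chin, a) for a in tilts])
print(np.corrcoef(hair_x, chin_x)[0, 1])  # -1.0: one factor, opposite signs
</preformat>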
<p>A second alternative explanation would involve the spatial distribution of pointing errors around the fixation/anchor point. For example, regression to the mean might cause people to judge all facial features as closer to the nose-tip anchor point than their true location. On this account, errors should vary strictly geometrically with each feature’s position in the face, but we found several aspects of face representation that were feature-specific and independent of position in the face or on the screen. For example, we found that errors in localising the bridge of the nose were lower than errors in localising the edges of the mouth (
<xref ref-type="fig" rid="pone-0076805-g001">Figure 1</xref>
), even though both are approximately equidistant from the nose-tip anchor. Our factor analyses confirmed that individual features make distinct contributions to face representation, which are not simply explained by the feature’s location within the face. For example, factor Y2 loaded strongly on the mouth, but much less on the nose edges and chin, even though these features are all close together. Further, simple geometric features of our response method cannot readily explain the strong correlations between factors underlying vertical and horizontal errors. In a previous study of hand representation, patterns of distortion were shown to be invariant when the hand was presented rotated by 90 degrees relative to the body. This suggested the distortion arose from an allocentric representation of the hand, rather than from egocentric or screen-based responding. Such tests can rule out response-specific explanations of bodily distortions for the hand. Such a test is more challenging for face representation, because the face cannot be repositioned within egocentric space in the same way as the hand.</p>
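<p>The regression-to-the-mean account can likewise be made explicit. Under uniform shrinkage toward the anchor, each feature’s judged distance from the nose tip should be a fixed fraction of its true distance; feature-specific deviations from a flat shrinkage profile argue against that account. A minimal sketch, with invented distances chosen only to illustrate the logic:</p>
<preformat>
# Sketch: testing a regression-to-the-anchor account. Under uniform shrinkage
# toward the nose tip, each feature's judged distance from the anchor should
# be a fixed fraction of its true distance. All distances are invented.
import numpy as np

features    = ["nose_bridge", "eye", "mouth_edge", "nose_edge", "hairline"]
true_dist   = np.array([60.0, 70.0, 60.0, 30.0, 90.0])
judged_dist = np.array([57.0, 58.0, 45.0, 28.0, 80.0])

shrink = (true_dist - judged_dist) / true_dist
for name, s in zip(features, shrink):
    print(f"{name:12s} shrinkage: {s:.2f}")
# Equidistant features (nose bridge vs. mouth edge, both 60 units) showing
# very different shrinkage argue against a simple geometric account.
</preformat>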
</sec>
<sec id="s4.5">
<title>Limitations</title>
<p>Finally, we acknowledge several limitations of our study. First, the number of participants is small, though it meets standards for exploratory factor analysis based on detailed simulation studies [
<xref ref-type="bibr" rid="B12">12</xref>
]. Second, our data reduction method enforced symmetry of the face around the midline, so is insensitive to possible asymmetries in representation of lateral face structures. Fluctuating asymmetry is an important facial cue to health, genetic quality, and judgements of attractiveness [
<xref ref-type="bibr" rid="B35">35</xref>
]. Future research should examine facial symmetry systematically by testing larger groups, and by directly comparing laterally inverted (mirror) versus confrontational (photograph) representations of the face [
<xref ref-type="bibr" rid="B36">36</xref>
]. Interestingly, we nevertheless identified factors involving midline shifts, confirming that asymmetry is an important aspect of face representation. Third, we have tested location judgement relative to just one central anchor, the tip of the nose. Using another anchor might, in principle, give different results – although tests of body image were largely unaffected by moving the anchor from the head to the feet [
<xref ref-type="bibr" rid="B10">10</xref>
]. Fourth, we tested only the representation of one’s own face, so we cannot say whether comparable distortions exist for less familiar faces of others, or for faces as a general semantic category. Fifth and finally, we have used factor analysis to identify the general structure of face representations from individual participants’ errors. However, we could not investigate how differences
<italic>between</italic>
individuals may influence their face representation, due to limited sample size. In particular, an individual’s face representation might depend on their actual facial structure, on their gender, or on cultural factors such as a desire to play down unusual or “unattractive” features.</p>
</sec>
</sec>
<sec sec-type="supplementary-material">
<title>Supporting Information</title>
<supplementary-material content-type="local-data" id="pone.0076805.s001">
<label>Table S1</label>
<caption>
<p>
<bold>Correlation matrix for horizontal errors in feature localisation.</bold>
</p>
<p>(DOCX)</p>
</caption>
<media xlink:href="pone.0076805.s001.docx">
<caption>
<p>Click here for additional data file.</p>
</caption>
</media>
</supplementary-material>
<supplementary-material content-type="local-data" id="pone.0076805.s002">
<label>Table S2</label>
<caption>
<p>
<bold>Correlation matrix for vertical errors in feature localisation.</bold>
</p>
<p>(DOCX)</p>
</caption>
<media xlink:href="pone.0076805.s002.docx">
<caption>
<p>Click here for additional data file.</p>
</caption>
</media>
</supplementary-material>
<supplementary-material content-type="local-data" id="pone.0076805.s003">
<label>Figure S1</label>
<caption>
<p>
<bold>Results of factor analysis of the face image task reveal principal factors of horizontal and vertical distortion in face representation, rendered on an average female face.</bold>
Vectors show the principal feature loadings (>0.4 or <-0.4) of each factor. The vector lengths are shown at 4x the actual values for visual clarity. The percentage variance and tentative interpretation of each factor are given.</p>
<p>(TIF)</p>
</caption>
<media xlink:href="pone.0076805.s003.tif">
<caption>
<p>Click here for additional data file.</p>
</caption>
</media>
</supplementary-material>
<supplementary-material content-type="local-data" id="pone.0076805.s004">
<label>Figure S2</label>
<caption>
<p>
<bold>Pictorial representation of the principal factors of horizontal and vertical distortion.</bold>
For each factor, the upper row shows an average male face distorted by a positive score of 1 standard deviation, and the bottom row shows the same face distorted by a negative unit score. Only features with high (>0.4 or <-0.4) loadings on the relevant factor were used to render the distortions.</p>
<p>(TIF)</p>
</caption>
<media xlink:href="pone.0076805.s004.tif">
<caption>
<p>Click here for additional data file.</p>
</caption>
</media>
</supplementary-material>
</sec>
</body>
<back>
<ack>
<p>We are grateful to Dave Perrett and Amanda Hahn of the Perception Lab, University of St Andrews,
<ext-link ext-link-type="uri" xlink:href="http://www.perceptionlab.com">
<underline>www.perceptionlab.com</underline>
</ext-link>
, for permission to use their average face images.</p>
</ack>
<ref-list>
<title>References</title>
<ref id="B1">
<label>1</label>
<mixed-citation publication-type="journal">
<name>
<surname>Uddin</surname>
<given-names>LQ</given-names>
</name>
,
<name>
<surname>Kaplan</surname>
<given-names>JT</given-names>
</name>
,
<name>
<surname>Molnar-Szakacs</surname>
<given-names>I</given-names>
</name>
,
<name>
<surname>Zaidel</surname>
<given-names>E</given-names>
</name>
,
<name>
<surname>Iacoboni</surname>
<given-names>M</given-names>
</name>
(
<year>2005</year>
)
<article-title>Self-face recognition activates a frontoparietal ‘mirror’ network in the right hemisphere: an event-related fMRI study</article-title>
.
<source>Neuroimage</source>
<volume>25</volume>
:
<fpage>926</fpage>
<lpage>935</lpage>
.
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1016/j.neuroimage.2004.12.018">10.1016/j.neuroimage.2004.12.018</ext-link>
PubMed:
<ext-link ext-link-type="uri" xlink:href="http://www.ncbi.nlm.nih.gov/pubmed/15808992">15808992</ext-link>
<pub-id pub-id-type="pmid">15808992</pub-id>
</mixed-citation>
</ref>
<ref id="B2">
<label>2</label>
<mixed-citation publication-type="journal">
<name>
<surname>Rooney</surname>
<given-names>B</given-names>
</name>
,
<name>
<surname>Keyes</surname>
<given-names>H</given-names>
</name>
(
<year>2012</year>
)
<article-title>Shared or separate mechanisms for self-face and other-face processing? Evidence from adaptation</article-title>
.
<source>Front Psychol</source>
<volume>3</volume>
:
<fpage>66</fpage>
.
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.3389/fpsyg.2012.00066">10.3389/fpsyg.2012.00066</ext-link>
</mixed-citation>
</ref>
<ref id="B3">
<label>3</label>
<mixed-citation publication-type="journal">
<name>
<surname>Devue</surname>
<given-names>C</given-names>
</name>
,
<name>
<surname>Brédart</surname>
<given-names>S</given-names>
</name>
(
<year>2011</year>
)
<article-title>The neural correlates of visual self-recognition</article-title>
.
<source>Conscious Cogn</source>
<volume>20</volume>
:
<fpage>40</fpage>
<lpage>51</lpage>
.
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1016/j.concog.2010.09.007">10.1016/j.concog.2010.09.007</ext-link>
PubMed:
<ext-link ext-link-type="uri" xlink:href="http://www.ncbi.nlm.nih.gov/pubmed/20880722">20880722</ext-link>
<pub-id pub-id-type="pmid">20880722</pub-id>
</mixed-citation>
</ref>
<ref id="B4">
<label>4</label>
<mixed-citation publication-type="journal">
<name>
<surname>Brédart</surname>
<given-names>S</given-names>
</name>
(
<year>2003</year>
)
<article-title>Recognising the usual orientation of one’s own face: the role of asymmetrically located details</article-title>
.
<source>Perception</source>
<volume>32</volume>
:
<fpage>805</fpage>
<lpage>811</lpage>
.
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1068/p3354">10.1068/p3354</ext-link>
PubMed:
<ext-link ext-link-type="uri" xlink:href="http://www.ncbi.nlm.nih.gov/pubmed/12974566">12974566</ext-link>
<pub-id pub-id-type="pmid">12974566</pub-id>
</mixed-citation>
</ref>
<ref id="B5">
<label>5</label>
<mixed-citation publication-type="journal">
<name>
<surname>Brady</surname>
<given-names>N</given-names>
</name>
,
<name>
<surname>Campbell</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Flaherty</surname>
<given-names>M</given-names>
</name>
(
<year>2005</year>
)
<article-title>Perceptual asymmetries are preserved in memory for highly familiar faces of self and friend</article-title>
.
<source>Brain Cogn</source>
<volume>58</volume>
:
<fpage>334</fpage>
<lpage>342</lpage>
.
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1016/j.bandc.2005.01.001">10.1016/j.bandc.2005.01.001</ext-link>
PubMed:
<ext-link ext-link-type="uri" xlink:href="http://www.ncbi.nlm.nih.gov/pubmed/15963384">15963384</ext-link>
<pub-id pub-id-type="pmid">15963384</pub-id>
</mixed-citation>
</ref>
<ref id="B6">
<label>6</label>
<mixed-citation publication-type="journal">
<name>
<surname>Brady</surname>
<given-names>N</given-names>
</name>
,
<name>
<surname>Campbell</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Flaherty</surname>
<given-names>M</given-names>
</name>
(
<year>2004</year>
)
<article-title>My left brain and me: a dissociation in the perception of self and others</article-title>
.
<source>Neuropsychologia</source>
<volume>42</volume>
:
<fpage>1156</fpage>
<lpage>1161</lpage>
.
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1016/j.neuropsychologia.2004.02.007">10.1016/j.neuropsychologia.2004.02.007</ext-link>
PubMed:
<ext-link ext-link-type="uri" xlink:href="http://www.ncbi.nlm.nih.gov/pubmed/15178167">15178167</ext-link>
<pub-id pub-id-type="pmid">15178167</pub-id>
</mixed-citation>
</ref>
<ref id="B7">
<label>7</label>
<mixed-citation publication-type="journal">
<name>
<surname>Keyes</surname>
<given-names>H</given-names>
</name>
,
<name>
<surname>Brady</surname>
<given-names>N</given-names>
</name>
(
<year>2010</year>
)
<article-title>Self-face recognition is characterized by ‘bilateral gain’ and by faster, more accurate performance which persists when faces are inverted</article-title>
.
<source>Q J Exp Psychol (Hove)</source>
<volume>63</volume>
:
<fpage>840</fpage>
<lpage>847</lpage>
.
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1080/17470211003611264">10.1080/17470211003611264</ext-link>
<pub-id pub-id-type="pmid">20198537</pub-id>
</mixed-citation>
</ref>
<ref id="B8">
<label>8</label>
<mixed-citation publication-type="journal">
<name>
<surname>Corradi-Dell’Acqua</surname>
<given-names>C</given-names>
</name>
,
<name>
<surname>Hesse</surname>
<given-names>MD</given-names>
</name>
,
<name>
<surname>Rumiati</surname>
<given-names>RI</given-names>
</name>
,
<name>
<surname>Fink</surname>
<given-names>GR</given-names>
</name>
(
<year>2008</year>
)
<article-title>Where is a nose with respect to a foot? The left posterior parietal cortex processes spatial relationships among body parts</article-title>
.
<source>Cereb Cortex</source>
<volume>18</volume>
:
<fpage>2879</fpage>
<lpage>2890</lpage>
.
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1093/cercor/bhn046">10.1093/cercor/bhn046</ext-link>
PubMed:
<ext-link ext-link-type="uri" xlink:href="http://www.ncbi.nlm.nih.gov/pubmed/18424775">18424775</ext-link>
<pub-id pub-id-type="pmid">18424775</pub-id>
</mixed-citation>
</ref>
<ref id="B9">
<label>9</label>
<mixed-citation publication-type="journal">
<name>
<surname>Casey</surname>
<given-names>SJ</given-names>
</name>
,
<name>
<surname>Newell</surname>
<given-names>FN</given-names>
</name>
(
<year>2005</year>
)
<article-title>The role of long-term and short-term familiarity in visual and haptic face recognition</article-title>
.
<source>Exp Brain Res</source>
<volume>166</volume>
:
<fpage>583</fpage>
<lpage>591</lpage>
.
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1007/s00221-005-2398-3">10.1007/s00221-005-2398-3</ext-link>
PubMed:
<ext-link ext-link-type="uri" xlink:href="http://www.ncbi.nlm.nih.gov/pubmed/15983771">15983771</ext-link>
<pub-id pub-id-type="pmid">15983771</pub-id>
</mixed-citation>
</ref>
<ref id="B10">
<label>10</label>
<mixed-citation publication-type="journal">
<name>
<surname>Fuentes</surname>
<given-names>CT</given-names>
</name>
,
<name>
<surname>Pazzaglia</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Longo</surname>
<given-names>MR</given-names>
</name>
,
<name>
<surname>Scivoletto</surname>
<given-names>G</given-names>
</name>
,
<name>
<surname>Haggard</surname>
<given-names>P</given-names>
</name>
(
<year>2013</year>
)
<article-title>Body image distortions following spinal cord injury</article-title>
.
<source>J Neurol Neurosurg Psychiatry</source>
<volume>84</volume>
:
<fpage>201</fpage>
<lpage>207</lpage>
.
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1136/jnnp-2012-304001">10.1136/jnnp-2012-304001</ext-link>
PubMed:
<ext-link ext-link-type="uri" xlink:href="http://www.ncbi.nlm.nih.gov/pubmed/23204474">23204474</ext-link>
<pub-id pub-id-type="pmid">23204474</pub-id>
</mixed-citation>
</ref>
<ref id="B11">
<label>11</label>
<mixed-citation publication-type="journal">
<name>
<surname>Longo</surname>
<given-names>MR</given-names>
</name>
,
<name>
<surname>Haggard</surname>
<given-names>P</given-names>
</name>
(
<year>2012</year>
)
<article-title>Implicit body representations and the conscious body image</article-title>
.
<source>Acta Psychol (Amst)</source>
<volume>141</volume>
:
<fpage>164</fpage>
<lpage>168</lpage>
.
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1016/j.actpsy.2012.07.015">10.1016/j.actpsy.2012.07.015</ext-link>
PubMed:
<ext-link ext-link-type="uri" xlink:href="http://www.ncbi.nlm.nih.gov/pubmed/22964057">22964057</ext-link>
<pub-id pub-id-type="pmid">22964057</pub-id>
</mixed-citation>
</ref>
<ref id="B12">
<label>12</label>
<mixed-citation publication-type="journal">
<name>
<surname>Mundfrom</surname>
<given-names>DJ</given-names>
</name>
,
<name>
<surname>Shaw</surname>
<given-names>DG</given-names>
</name>
,
<name>
<surname>Ke</surname>
<given-names>TL</given-names>
</name>
(
<year>2005</year>
)
<article-title>Minimum Sample Size Recommendations for Conducting Factor Analyses</article-title>
.
<source>Int J Test</source>
<volume>5</volume>
:
<fpage>159</fpage>
<lpage>168</lpage>
.
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1207/s15327574ijt0502_4">10.1207/s15327574ijt0502_4</ext-link>
</mixed-citation>
</ref>
<ref id="B13">
<label>13</label>
<mixed-citation publication-type="journal">
<name>
<surname>Longo</surname>
<given-names>MR</given-names>
</name>
,
<name>
<surname>Haggard</surname>
<given-names>P</given-names>
</name>
(
<year>2010</year>
)
<article-title>An implicit body representation underlying human position sense</article-title>
.
<source>Proc Natl Acad Sci U S A</source>
<volume>107</volume>
:
<fpage>11727</fpage>
<lpage>11732</lpage>
.
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1073/pnas.1003483107">10.1073/pnas.1003483107</ext-link>
PubMed:
<ext-link ext-link-type="uri" xlink:href="http://www.ncbi.nlm.nih.gov/pubmed/20547858">20547858</ext-link>
<pub-id pub-id-type="pmid">20547858</pub-id>
</mixed-citation>
</ref>
<ref id="B14">
<label>14</label>
<mixed-citation publication-type="journal">
<name>
<surname>Gandevia</surname>
<given-names>SC</given-names>
</name>
,
<name>
<surname>Phegan</surname>
<given-names>CM</given-names>
</name>
(
<year>1999</year>
)
<article-title>Perceptual distortions of the human body image produced by local anaesthesia, pain and cutaneous stimulation</article-title>
.
<source>J Physiol Lond</source>
<volume>514</volume>
(
<issue-id>2</issue-id>
):
<fpage>609</fpage>
<lpage>616</lpage>
.
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1111/j.1469-7793.1999.609ae.x">10.1111/j.1469-7793.1999.609ae.x</ext-link>
PubMed:
<ext-link ext-link-type="uri" xlink:href="http://www.ncbi.nlm.nih.gov/pubmed/9852339">9852339</ext-link>
<pub-id pub-id-type="pmid">9852339</pub-id>
</mixed-citation>
</ref>
<ref id="B15">
<label>15</label>
<mixed-citation publication-type="journal">
<name>
<surname>Palmer</surname>
<given-names>AR</given-names>
</name>
,
<name>
<surname>Strobeck</surname>
<given-names>C</given-names>
</name>
(
<year>1986</year>
)
<article-title>Fluctuating Asymmetry: Measurement, Analysis, Patterns</article-title>
.
<source>Annu Rev Ecol Evol Syst</source>
<volume>17</volume>
:
<fpage>391</fpage>
<lpage>421</lpage>
.
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1146/annurev.es.17.110186.002135">10.1146/annurev.es.17.110186.002135</ext-link>
</mixed-citation>
</ref>
<ref id="B16">
<label>16</label>
<mixed-citation publication-type="journal">
<name>
<surname>Little</surname>
<given-names>AC</given-names>
</name>
,
<name>
<surname>Jones</surname>
<given-names>BC</given-names>
</name>
,
<name>
<surname>Burt</surname>
<given-names>DM</given-names>
</name>
,
<name>
<surname>Perrett</surname>
<given-names>DI</given-names>
</name>
(
<year>2007</year>
)
<article-title>Preferences for symmetry in faces change across the menstrual cycle</article-title>
.
<source>Biol Psychol</source>
<volume>76</volume>
:
<fpage>209</fpage>
<lpage>216</lpage>
.
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1016/j.biopsycho.2007.08.003">10.1016/j.biopsycho.2007.08.003</ext-link>
PubMed:
<ext-link ext-link-type="uri" xlink:href="http://www.ncbi.nlm.nih.gov/pubmed/17919806">17919806</ext-link>
<pub-id pub-id-type="pmid">17919806</pub-id>
</mixed-citation>
</ref>
<ref id="B17">
<label>17</label>
<mixed-citation publication-type="journal">
<name>
<surname>Ramon</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Rossion</surname>
<given-names>B</given-names>
</name>
(
<year>2012</year>
)
<article-title>Hemisphere-dependent holistic processing of familiar faces</article-title>
.
<source>Brain Cogn</source>
<volume>78</volume>
:
<fpage>7</fpage>
<lpage>13</lpage>
.
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1016/j.bandc.2011.10.009">10.1016/j.bandc.2011.10.009</ext-link>
PubMed:
<ext-link ext-link-type="uri" xlink:href="http://www.ncbi.nlm.nih.gov/pubmed/22099150">22099150</ext-link>
<pub-id pub-id-type="pmid">22099150</pub-id>
</mixed-citation>
</ref>
<ref id="B18">
<label>18</label>
<mixed-citation publication-type="journal">
<name>
<surname>Watson</surname>
<given-names>TL</given-names>
</name>
,
<name>
<surname>Clifford</surname>
<given-names>CWG</given-names>
</name>
(
<year>2003</year>
)
<article-title>Pulling faces: an investigation of the face-distortion aftereffect</article-title>
.
<source>Perception</source>
<volume>32</volume>
:
<fpage>1109</fpage>
<lpage>1116</lpage>
.
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1068/p5082">10.1068/p5082</ext-link>
PubMed:
<ext-link ext-link-type="uri" xlink:href="http://www.ncbi.nlm.nih.gov/pubmed/14651323">14651323</ext-link>
<pub-id pub-id-type="pmid">14651323</pub-id>
</mixed-citation>
</ref>
<ref id="B19">
<label>19</label>
<mixed-citation publication-type="journal">
<name>
<surname>Piepers</surname>
<given-names>DW</given-names>
</name>
,
<name>
<surname>Robbins</surname>
<given-names>RA</given-names>
</name>
(
<year>2012</year>
)
<article-title>A review and clarification of the terms ‘holistic,’ ‘configural,’ and ‘relational’ in the face perception literature</article-title>
.
<source>Front Psychol</source>
<volume>3</volume>
:
<fpage>559</fpage>
.
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.3389/fpsyg.2012.00559">10.3389/fpsyg.2012.00559</ext-link>
</mixed-citation>
</ref>
<ref id="B20">
<label>20</label>
<mixed-citation publication-type="book">
<name>
<surname>Tanaka</surname>
<given-names>JW</given-names>
</name>
,
<name>
<surname>Gordon</surname>
<given-names>I</given-names>
</name>
(
<year>2011</year>
)
<article-title>Features, Configuration, and Holistic Face Processing</article-title>
. In:
<person-group person-group-type="editor">
<name>
<surname>Rhodes</surname>
<given-names>G</given-names>
</name>
<name>
<surname>Calder</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Johnson</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Haxby</surname>
<given-names>JV</given-names>
</name>
</person-group>
<source>Oxford Handbook of Face Perception</source>
.
<publisher-name>Oxford University Press</publisher-name>
</mixed-citation>
</ref>
<ref id="B21">
<label>21</label>
<mixed-citation publication-type="journal">
<name>
<surname>Young</surname>
<given-names>AW</given-names>
</name>
,
<name>
<surname>Hellawell</surname>
<given-names>D</given-names>
</name>
,
<name>
<surname>Hay</surname>
<given-names>DC</given-names>
</name>
(
<year>1987</year>
)
<article-title>Configurational information in face perception</article-title>
.
<source>Perception</source>
<volume>16</volume>
:
<fpage>747</fpage>
<lpage>759</lpage>
.
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1068/p160747">10.1068/p160747</ext-link>
PubMed:
<ext-link ext-link-type="uri" xlink:href="http://www.ncbi.nlm.nih.gov/pubmed/3454432">3454432</ext-link>
<pub-id pub-id-type="pmid">3454432</pub-id>
</mixed-citation>
</ref>
<ref id="B22">
<label>22</label>
<mixed-citation publication-type="journal">
<name>
<surname>Chamberlain</surname>
<given-names>R</given-names>
</name>
,
<name>
<surname>McManus</surname>
<given-names>IC</given-names>
</name>
,
<name>
<surname>Riley</surname>
<given-names>H</given-names>
</name>
,
<name>
<surname>Rankin</surname>
<given-names>Q</given-names>
</name>
,
<name>
<surname>Brunswick</surname>
<given-names>N</given-names>
</name>
(
<year>2012</year>
)
<article-title>Local processing enhancements associated with superior observational drawing are due to enhanced perceptual functioning, not weak central coherence</article-title>
.
<source>Q J Exp Psychol (Hove)</source>
.
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1080/17470218.2012.750678">10.1080/17470218.2012.750678</ext-link>
</mixed-citation>
</ref>
<ref id="B23">
<label>23</label>
<mixed-citation publication-type="journal">
<name>
<surname>Zhou</surname>
<given-names>G</given-names>
</name>
,
<name>
<surname>Cheng</surname>
<given-names>Z</given-names>
</name>
,
<name>
<surname>Zhang</surname>
<given-names>X</given-names>
</name>
,
<name>
<surname>Wong</surname>
<given-names>ACN</given-names>
</name>
(
<year>2012</year>
)
<article-title>Smaller holistic processing of faces associated with face drawing experience</article-title>
.
<source>Psychon Bull Rev</source>
<volume>19</volume>
:
<fpage>157</fpage>
<lpage>162</lpage>
.
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.3758/s13423-011-0174-x">10.3758/s13423-011-0174-x</ext-link>
PubMed:
<ext-link ext-link-type="uri" xlink:href="http://www.ncbi.nlm.nih.gov/pubmed/22215464">22215464</ext-link>
<pub-id pub-id-type="pmid">22215464</pub-id>
</mixed-citation>
</ref>
<ref id="B24">
<label>24</label>
<mixed-citation publication-type="journal">
<name>
<surname>Greenberg</surname>
<given-names>SN</given-names>
</name>
,
<name>
<surname>Goshen-Gottstein</surname>
<given-names>Y</given-names>
</name>
(
<year>2009</year>
)
<article-title>Not all faces are processed equally: evidence for featural rather than holistic processing of one’s own face in a face-imaging task</article-title>
.
<source>J Exp Psychol Learn Mem Cogn</source>
<volume>35</volume>
:
<fpage>499</fpage>
<lpage>508</lpage>
.
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1037/a0014640">10.1037/a0014640</ext-link>
PubMed:
<ext-link ext-link-type="uri" xlink:href="http://www.ncbi.nlm.nih.gov/pubmed/19271862">19271862</ext-link>
<pub-id pub-id-type="pmid">19271862</pub-id>
</mixed-citation>
</ref>
<ref id="B25">
<label>25</label>
<mixed-citation publication-type="book">
<name>
<surname>Metzinger</surname>
<given-names>T</given-names>
</name>
(
<year>2004</year>
)
<source>Being No One: The Self-model Theory of Subjectivity</source>
.
<publisher-loc>Cambridge, MA</publisher-loc>
:
<publisher-name>The MIT Press</publisher-name>
</mixed-citation>
</ref>
<ref id="B26">
<label>26</label>
<mixed-citation publication-type="journal">
<name>
<surname>Kanwisher</surname>
<given-names>N</given-names>
</name>
,
<name>
<surname>Yovel</surname>
<given-names>G</given-names>
</name>
(
<year>2006</year>
)
<article-title>The fusiform face area: a cortical region specialized for the perception of faces</article-title>
.
<source>Philos Trans R Soc Lond, B, Biol Sci</source>
<volume>361</volume>
:
<fpage>2109</fpage>
<lpage>2128</lpage>
.
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1098/rstb.2006.1934">10.1098/rstb.2006.1934</ext-link>
PubMed:
<ext-link ext-link-type="uri" xlink:href="http://www.ncbi.nlm.nih.gov/pubmed/17118927">17118927</ext-link>
<pub-id pub-id-type="pmid">17118927</pub-id>
</mixed-citation>
</ref>
<ref id="B27">
<label>27</label>
<mixed-citation publication-type="journal">
<name>
<surname>Gauthier</surname>
<given-names>I</given-names>
</name>
,
<name>
<surname>Skudlarski</surname>
<given-names>P</given-names>
</name>
,
<name>
<surname>Gore</surname>
<given-names>JC</given-names>
</name>
,
<name>
<surname>Anderson</surname>
<given-names>AW</given-names>
</name>
(
<year>2000</year>
)
<article-title>Expertise for cars and birds recruits brain areas involved in face recognition</article-title>
.
<source>Nat Neurosci</source>
<volume>3</volume>
:
<fpage>191</fpage>
<lpage>197</lpage>
.
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1038/72140">10.1038/72140</ext-link>
PubMed:
<ext-link ext-link-type="uri" xlink:href="http://www.ncbi.nlm.nih.gov/pubmed/10649576">10649576</ext-link>
<pub-id pub-id-type="pmid">10649576</pub-id>
</mixed-citation>
</ref>
<ref id="B28">
<label>28</label>
<mixed-citation publication-type="journal">
<name>
<surname>Avery</surname>
<given-names>GC</given-names>
</name>
,
<name>
<surname>Day</surname>
<given-names>RH</given-names>
</name>
(
<year>1969</year>
)
<article-title>Basis of the horizontal-vertical illusion</article-title>
.
<source>J Exp Psychol Hum Learn</source>
<volume>81</volume>
:
<fpage>376</fpage>
<lpage>380</lpage>
PubMed:
<ext-link ext-link-type="uri" xlink:href="http://www.ncbi.nlm.nih.gov/pubmed/5811814">5811814</ext-link>
</mixed-citation>
</ref>
<ref id="B29">
<label>29</label>
<mixed-citation publication-type="journal">
<name>
<surname>Bianchi</surname>
<given-names>I</given-names>
</name>
,
<name>
<surname>Savardi</surname>
<given-names>U</given-names>
</name>
,
<name>
<surname>Bertamini</surname>
<given-names>M</given-names>
</name>
(
<year>2008</year>
)
<article-title>Estimation and representation of head size (people overestimate the size of their head - evidence starting from the 15th century)</article-title>
.
<source>Br J Psychol</source>
<volume>99</volume>
:
<fpage>513</fpage>
<lpage>531</lpage>
.
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1348/000712608X304469">10.1348/000712608X304469</ext-link>
PubMed:
<ext-link ext-link-type="uri" xlink:href="http://www.ncbi.nlm.nih.gov/pubmed/18471345">18471345</ext-link>
<pub-id pub-id-type="pmid">18471345</pub-id>
</mixed-citation>
</ref>
<ref id="B30">
<label>30</label>
<mixed-citation publication-type="journal">
<name>
<surname>Savardi</surname>
<given-names>U</given-names>
</name>
,
<name>
<surname>Bianchi</surname>
<given-names>I</given-names>
</name>
(
<year>2006</year>
)
<article-title>Quanto grande è la mia testa? Contributi dalla fenomenologia sperimentale della percezione</article-title>
.
<source>DIPAV - Quaderni</source>
<volume>15</volume>
:
<fpage>59</fpage>
<lpage>78</lpage>
</mixed-citation>
</ref>
<ref id="B31">
<label>31</label>
<mixed-citation publication-type="journal">
<name>
<surname>Tversky</surname>
<given-names>B</given-names>
</name>
(
<year>1981</year>
)
<article-title>Distortions in memory for maps</article-title>
.
<source>Cogn Psychol</source>
<volume>13</volume>
:
<fpage>407</fpage>
<lpage>433</lpage>
.
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1016/0010-0285(81)90016-5">10.1016/0010-0285(81)90016-5</ext-link>
</mixed-citation>
</ref>
<ref id="B32">
<label>32</label>
<mixed-citation publication-type="journal">
<name>
<surname>Shin</surname>
<given-names>MS</given-names>
</name>
,
<name>
<surname>Park</surname>
<given-names>SY</given-names>
</name>
,
<name>
<surname>Park</surname>
<given-names>SR</given-names>
</name>
,
<name>
<surname>Seol</surname>
<given-names>SH</given-names>
</name>
,
<name>
<surname>Kwon</surname>
<given-names>JS</given-names>
</name>
(
<year>2006</year>
)
<article-title>Clinical and empirical applications of the Rey-Osterrieth Complex Figure Test</article-title>
.
<source>Nat Protoc</source>
<volume>1</volume>
:
<fpage>892</fpage>
<lpage>899</lpage>
.
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1038/nprot.2006.115">10.1038/nprot.2006.115</ext-link>
PubMed:
<ext-link ext-link-type="uri" xlink:href="http://www.ncbi.nlm.nih.gov/pubmed/17406322">17406322</ext-link>
<pub-id pub-id-type="pmid">17406322</pub-id>
</mixed-citation>
</ref>
<ref id="B33">
<label>33</label>
<mixed-citation publication-type="journal">
<name>
<surname>Costa</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Menzani</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Bitti</surname>
<given-names>PER</given-names>
</name>
(
<year>2001</year>
)
<article-title>Head Canting in Paintings: An Historical Study</article-title>
.
<source>J Nonverbal Behav</source>
<volume>25</volume>
:
<fpage>63</fpage>
<lpage>73</lpage>
.
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1023/A:1006737224617">10.1023/A:1006737224617</ext-link>
</mixed-citation>
</ref>
<ref id="B34">
<label>34</label>
<mixed-citation publication-type="journal">
<name>
<surname>Collishaw</surname>
<given-names>SM</given-names>
</name>
,
<name>
<surname>Hole</surname>
<given-names>GJ</given-names>
</name>
,
<name>
<surname>Schwaninger</surname>
<given-names>A</given-names>
</name>
(
<year>2005</year>
)
<article-title>Configural processing and perceptions of head tilt</article-title>
.
<source>Perception</source>
<volume>34</volume>
:
<fpage>163</fpage>
<lpage>168</lpage>
.
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1068/p5216">10.1068/p5216</ext-link>
PubMed:
<ext-link ext-link-type="uri" xlink:href="http://www.ncbi.nlm.nih.gov/pubmed/15832567">15832567</ext-link>
<pub-id pub-id-type="pmid">15832567</pub-id>
</mixed-citation>
</ref>
<ref id="B35">
<label>35</label>
<mixed-citation publication-type="journal">
<name>
<surname>Rhodes</surname>
<given-names>G</given-names>
</name>
(
<year>2006</year>
)
<article-title>The evolutionary psychology of facial beauty</article-title>
.
<source>Annu Rev Psychol</source>
<volume>57</volume>
:
<fpage>199</fpage>
<lpage>226</lpage>
.
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1146/annurev.psych.57.102904.190208">10.1146/annurev.psych.57.102904.190208</ext-link>
PubMed:
<ext-link ext-link-type="uri" xlink:href="http://www.ncbi.nlm.nih.gov/pubmed/16318594">16318594</ext-link>
<pub-id pub-id-type="pmid">16318594</pub-id>
</mixed-citation>
</ref>
<ref id="B36">
<label>36</label>
<mixed-citation publication-type="journal">
<name>
<surname>Thomas</surname>
<given-names>R</given-names>
</name>
,
<name>
<surname>Press</surname>
<given-names>C</given-names>
</name>
,
<name>
<surname>Haggard</surname>
<given-names>P</given-names>
</name>
(
<year>2006</year>
)
<article-title>Shared representations in body perception</article-title>
.
<source>Acta Psychol (Amst)</source>
<volume>121</volume>
:
<fpage>317</fpage>
<lpage>330</lpage>
.
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1016/j.actpsy.2005.08.002">10.1016/j.actpsy.2005.08.002</ext-link>
PubMed:
<ext-link ext-link-type="uri" xlink:href="http://www.ncbi.nlm.nih.gov/pubmed/16194527">16194527</ext-link>
</mixed-citation>
</ref>
</ref-list>
</back>
</pmc>
</record>

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/HapticV1/Data/Pmc/Curation
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 002306 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/Pmc/Curation/biblio.hfd -nk 002306 | SxmlIndent | more

To add a link to this page within the Wicri network

{{Explor lien
   |wiki=    Ticri/CIDE
   |area=    HapticV1
   |flux=    Pmc
   |étape=   Curation
   |type=    RBID
   |clé=     PMC:3793930
   |texte=   Does My Face FIT?: A Face Image Task Reveals Structure and Distortions of Facial Feature Representation
}}

To generate wiki pages

HfdIndexSelect -h $EXPLOR_AREA/Data/Pmc/Curation/RBID.i   -Sk "pubmed:24130790" \
       | HfdSelect -Kh $EXPLOR_AREA/Data/Pmc/Curation/biblio.hfd   \
       | NlmPubMed2Wicri -a HapticV1 

Wicri

This area was generated with Dilib version V0.6.23.
Data generation: Mon Jun 13 01:09:46 2016. Site generation: Wed Mar 6 09:54:07 2024