Exploration server on haptic devices

Please note: this site is under development!
Please note: this site was generated automatically from raw corpora.
The information has therefore not been validated.

Do congenital prosopagnosia and the other-race effect affect the same face recognition mechanisms?

Internal identifier: 000C95 (Pmc/Checkpoint); previous: 000C94; next: 000C96

Do congenital prosopagnosia and the other-race effect affect the same face recognition mechanisms?

Authors: Janina Esins [Germany]; Johannes Schultz [Germany, United Kingdom]; Christian Wallraven [South Korea]; Isabelle Bülthoff [Germany, South Korea]

Source:

RBID : PMC:4179381

Abstract

Congenital prosopagnosia (CP), an innate impairment in recognizing faces, as well as the other-race effect (ORE), a disadvantage in recognizing faces of foreign races, both affect face recognition abilities. Are the same face processing mechanisms affected in both situations? To investigate this question, we tested three groups of 21 participants: German congenital prosopagnosics, South Korean participants and German controls on three different tasks involving faces and objects. First we tested all participants on the Cambridge Face Memory Test in which they had to recognize Caucasian target faces in a 3-alternative-forced-choice task. German controls performed better than Koreans who performed better than prosopagnosics. In the second experiment, participants rated the similarity of Caucasian faces that differed parametrically in either features or second-order relations (configuration). Prosopagnosics were less sensitive to configuration changes than both other groups. In addition, while all groups were more sensitive to changes in features than in configuration, this difference was smaller in Koreans. In the third experiment, participants had to learn exemplars of artificial objects, natural objects, and faces and recognize them among distractors of the same category. Here prosopagnosics performed worse than participants in the other two groups only when they were tested on face stimuli. In sum, Koreans and prosopagnosic participants differed from German controls in different ways in all tests. This suggests that German congenital prosopagnosics perceive Caucasian faces differently than do Korean participants. Importantly, our results suggest that different processing impairments underlie the ORE and CP.


URL:
DOI: 10.3389/fnhum.2014.00759
PubMed: 25324757
PubMed Central: 4179381


Affiliations:


Links to previous steps (curation, corpus, ...)


Links to Exploration step

PMC:4179381

The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">Do congenital prosopagnosia and the other-race effect affect the same face recognition mechanisms?</title>
<author>
<name sortKey="Esins, Janina" sort="Esins, Janina" uniqKey="Esins J" first="Janina" last="Esins">Janina Esins</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<institution>Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics</institution>
<country>Tübingen, Germany</country>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Schultz, Johannes" sort="Schultz, Johannes" uniqKey="Schultz J" first="Johannes" last="Schultz">Johannes Schultz</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<institution>Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics</institution>
<country>Tübingen, Germany</country>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="aff2">
<institution>Department of Psychology, Durham University</institution>
<country>Durham, UK</country>
</nlm:aff>
<country xml:lang="fr">Royaume-Uni</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Wallraven, Christian" sort="Wallraven, Christian" uniqKey="Wallraven C" first="Christian" last="Wallraven">Christian Wallraven</name>
<affiliation wicri:level="1">
<nlm:aff id="aff3">
<institution>Department of Brain and Cognitive Engineering, Korea University</institution>
<country>Seoul, South Korea</country>
</nlm:aff>
<country xml:lang="fr">Corée du Sud</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Bulthoff, Isabelle" sort="Bulthoff, Isabelle" uniqKey="Bulthoff I" first="Isabelle" last="Bülthoff">Isabelle Bülthoff</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<institution>Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics</institution>
<country>Tübingen, Germany</country>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="aff3">
<institution>Department of Brain and Cognitive Engineering, Korea University</institution>
<country>Seoul, South Korea</country>
</nlm:aff>
<country xml:lang="fr">Corée du Sud</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PMC</idno>
<idno type="pmid">25324757</idno>
<idno type="pmc">4179381</idno>
<idno type="url">http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4179381</idno>
<idno type="RBID">PMC:4179381</idno>
<idno type="doi">10.3389/fnhum.2014.00759</idno>
<date when="2014">2014</date>
<idno type="wicri:Area/Pmc/Corpus">001D76</idno>
<idno type="wicri:Area/Pmc/Curation">001D76</idno>
<idno type="wicri:Area/Pmc/Checkpoint">000C95</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a" type="main">Do congenital prosopagnosia and the other-race effect affect the same face recognition mechanisms?</title>
<author>
<name sortKey="Esins, Janina" sort="Esins, Janina" uniqKey="Esins J" first="Janina" last="Esins">Janina Esins</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<institution>Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics</institution>
<country>Tübingen, Germany</country>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Schultz, Johannes" sort="Schultz, Johannes" uniqKey="Schultz J" first="Johannes" last="Schultz">Johannes Schultz</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<institution>Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics</institution>
<country>Tübingen, Germany</country>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="aff2">
<institution>Department of Psychology, Durham University</institution>
<country>Durham, UK</country>
</nlm:aff>
<country xml:lang="fr">Royaume-Uni</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Wallraven, Christian" sort="Wallraven, Christian" uniqKey="Wallraven C" first="Christian" last="Wallraven">Christian Wallraven</name>
<affiliation wicri:level="1">
<nlm:aff id="aff3">
<institution>Department of Brain and Cognitive Engineering, Korea University</institution>
<country>Seoul, South Korea</country>
</nlm:aff>
<country xml:lang="fr">Corée du Sud</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Bulthoff, Isabelle" sort="Bulthoff, Isabelle" uniqKey="Bulthoff I" first="Isabelle" last="Bülthoff">Isabelle Bülthoff</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<institution>Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics</institution>
<country>Tübingen, Germany</country>
</nlm:aff>
<country xml:lang="fr">Allemagne</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="aff3">
<institution>Department of Brain and Cognitive Engineering, Korea University</institution>
<country>Seoul, South Korea</country>
</nlm:aff>
<country xml:lang="fr">Corée du Sud</country>
<wicri:regionArea></wicri:regionArea>
<wicri:regionArea># see nlm:aff region in country</wicri:regionArea>
</affiliation>
</author>
</analytic>
<series>
<title level="j">Frontiers in Human Neuroscience</title>
<idno type="eISSN">1662-5161</idno>
<imprint>
<date when="2014">2014</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass></textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">
<p>Congenital prosopagnosia (CP), an innate impairment in recognizing faces, as well as the other-race effect (ORE), a disadvantage in recognizing faces of foreign races, both affect face recognition abilities. Are the same face processing mechanisms affected in both situations? To investigate this question, we tested three groups of 21 participants: German congenital prosopagnosics, South Korean participants and German controls on three different tasks involving faces and objects. First we tested all participants on the Cambridge Face Memory Test in which they had to recognize Caucasian target faces in a 3-alternative-forced-choice task. German controls performed better than Koreans who performed better than prosopagnosics. In the second experiment, participants rated the similarity of Caucasian faces that differed parametrically in either features or second-order relations (configuration). Prosopagnosics were less sensitive to configuration changes than both other groups. In addition, while all groups were more sensitive to changes in features than in configuration, this difference was smaller in Koreans. In the third experiment, participants had to learn exemplars of artificial objects, natural objects, and faces and recognize them among distractors of the same category. Here prosopagnosics performed worse than participants in the other two groups only when they were tested on face stimuli. In sum, Koreans and prosopagnosic participants differed from German controls in different ways in all tests. This suggests that German congenital prosopagnosics perceive Caucasian faces differently than do Korean participants. Importantly, our results suggest that different processing impairments underlie the ORE and CP.</p>
</div>
</front>
<back>
<div1 type="bibliography">
<listBibl>
<biblStruct>
<analytic>
<author>
<name sortKey="Avidan, G" uniqKey="Avidan G">G. Avidan</name>
</author>
<author>
<name sortKey="Behrmann, M" uniqKey="Behrmann M">M. Behrmann</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Avidan, G" uniqKey="Avidan G">G. Avidan</name>
</author>
<author>
<name sortKey="Hasson, U" uniqKey="Hasson U">U. Hasson</name>
</author>
<author>
<name sortKey="Malach, R" uniqKey="Malach R">R. Malach</name>
</author>
<author>
<name sortKey="Behrmann, M" uniqKey="Behrmann M">M. Behrmann</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Avidan, G" uniqKey="Avidan G">G. Avidan</name>
</author>
<author>
<name sortKey="Tanzer, M" uniqKey="Tanzer M">M. Tanzer</name>
</author>
<author>
<name sortKey="Behrmann, M" uniqKey="Behrmann M">M. Behrmann</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Avidan, G" uniqKey="Avidan G">G. Avidan</name>
</author>
<author>
<name sortKey="Thomas, C" uniqKey="Thomas C">C. Thomas</name>
</author>
<author>
<name sortKey="Behrmann, M" uniqKey="Behrmann M">M. Behrmann</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Barton, J J S" uniqKey="Barton J">J. J. S. Barton</name>
</author>
<author>
<name sortKey="Cherkasova, M V" uniqKey="Cherkasova M">M. V. Cherkasova</name>
</author>
<author>
<name sortKey="Press, D Z" uniqKey="Press D">D. Z. Press</name>
</author>
<author>
<name sortKey="Intriligator, J M" uniqKey="Intriligator J">J. M. Intriligator</name>
</author>
<author>
<name sortKey="O Connor, M" uniqKey="O Connor M">M. O'Connor</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Behrmann, M" uniqKey="Behrmann M">M. Behrmann</name>
</author>
<author>
<name sortKey="Avidan, G" uniqKey="Avidan G">G. Avidan</name>
</author>
<author>
<name sortKey="Marotta, J J" uniqKey="Marotta J">J. J. Marotta</name>
</author>
<author>
<name sortKey="Kimchi, R" uniqKey="Kimchi R">R. Kimchi</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bernstein, M J" uniqKey="Bernstein M">M. J. Bernstein</name>
</author>
<author>
<name sortKey="Young, S G" uniqKey="Young S">S. G. Young</name>
</author>
<author>
<name sortKey="Hugenberg, K" uniqKey="Hugenberg K">K. Hugenberg</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Blais, C" uniqKey="Blais C">C. Blais</name>
</author>
<author>
<name sortKey="Jack, R E" uniqKey="Jack R">R. E. Jack</name>
</author>
<author>
<name sortKey="Scheepers, C" uniqKey="Scheepers C">C. Scheepers</name>
</author>
<author>
<name sortKey="Fiset, D" uniqKey="Fiset D">D. Fiset</name>
</author>
<author>
<name sortKey="Caldara, R" uniqKey="Caldara R">R. Caldara</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Carbon, C C" uniqKey="Carbon C">C.-C. Carbon</name>
</author>
<author>
<name sortKey="Gruter, T" uniqKey="Gruter T">T. Grüter</name>
</author>
<author>
<name sortKey="Weber, J E" uniqKey="Weber J">J. E. Weber</name>
</author>
<author>
<name sortKey="Lueschow, A" uniqKey="Lueschow A">A. Lueschow</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Collishaw, S M" uniqKey="Collishaw S">S. M. Collishaw</name>
</author>
<author>
<name sortKey="Hole, G J" uniqKey="Hole G">G. J. Hole</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Degutis, J M" uniqKey="Degutis J">J. M. DeGutis</name>
</author>
<author>
<name sortKey="Bentin, S" uniqKey="Bentin S">S. Bentin</name>
</author>
<author>
<name sortKey="Robertson, L C" uniqKey="Robertson L">L. C. Robertson</name>
</author>
<author>
<name sortKey="D Esposito, M" uniqKey="D Esposito M">M. D'Esposito</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Degutis, J M" uniqKey="Degutis J">J. M. DeGutis</name>
</author>
<author>
<name sortKey="Wilmer, J" uniqKey="Wilmer J">J. Wilmer</name>
</author>
<author>
<name sortKey="Mercado, R J" uniqKey="Mercado R">R. J. Mercado</name>
</author>
<author>
<name sortKey="Cohan, S" uniqKey="Cohan S">S. Cohan</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Duchaine, B C" uniqKey="Duchaine B">B. C. Duchaine</name>
</author>
<author>
<name sortKey="Nakayama, K" uniqKey="Nakayama K">K. Nakayama</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Duchaine, B C" uniqKey="Duchaine B">B. C. Duchaine</name>
</author>
<author>
<name sortKey="Nakayama, K" uniqKey="Nakayama K">K. Nakayama</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Duchaine, B C" uniqKey="Duchaine B">B. C. Duchaine</name>
</author>
<author>
<name sortKey="Yovel, G" uniqKey="Yovel G">G. Yovel</name>
</author>
<author>
<name sortKey="Nakayama, K" uniqKey="Nakayama K">K. Nakayama</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Esins, J" uniqKey="Esins J">J. Esins</name>
</author>
<author>
<name sortKey="Bulthoff, I" uniqKey="Bulthoff I">I. Bülthoff</name>
</author>
<author>
<name sortKey="Schultz, J" uniqKey="Schultz J">J. Schultz</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Fowler, D R" uniqKey="Fowler D">D. R. Fowler</name>
</author>
<author>
<name sortKey="Meinhardt, H" uniqKey="Meinhardt H">H. Meinhardt</name>
</author>
<author>
<name sortKey="Prusinkiewicz, P" uniqKey="Prusinkiewicz P">P. Prusinkiewicz</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Freire, A" uniqKey="Freire A">A. Freire</name>
</author>
<author>
<name sortKey="Lee, K" uniqKey="Lee K">K. Lee</name>
</author>
<author>
<name sortKey="Symons, L A" uniqKey="Symons L">L. A. Symons</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gai Ert, N" uniqKey="Gai Ert N">N. Gaißert</name>
</author>
<author>
<name sortKey="Wallraven, C" uniqKey="Wallraven C">C. Wallraven</name>
</author>
<author>
<name sortKey="Bulthoff, H H" uniqKey="Bulthoff H">H. H. Bülthoff</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Goffaux, V" uniqKey="Goffaux V">V. Goffaux</name>
</author>
<author>
<name sortKey="Hault, B" uniqKey="Hault B">B. Hault</name>
</author>
<author>
<name sortKey="Michel, C" uniqKey="Michel C">C. Michel</name>
</author>
<author>
<name sortKey="Vuong, Q C" uniqKey="Vuong Q">Q. C. Vuong</name>
</author>
<author>
<name sortKey="Rossion, B" uniqKey="Rossion B">B. Rossion</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gruter, T" uniqKey="Gruter T">T. Grüter</name>
</author>
<author>
<name sortKey="Gruter, M" uniqKey="Gruter M">M. Grüter</name>
</author>
<author>
<name sortKey="Carbon, C C" uniqKey="Carbon C">C.-C. Carbon</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hayward, W G" uniqKey="Hayward W">W. G. Hayward</name>
</author>
<author>
<name sortKey="Rhodes, G" uniqKey="Rhodes G">G. Rhodes</name>
</author>
<author>
<name sortKey="Schwaninger, A" uniqKey="Schwaninger A">A. Schwaninger</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hugenberg, K" uniqKey="Hugenberg K">K. Hugenberg</name>
</author>
<author>
<name sortKey="Young, S G" uniqKey="Young S">S. G. Young</name>
</author>
<author>
<name sortKey="Bernstein, M J" uniqKey="Bernstein M">M. J. Bernstein</name>
</author>
<author>
<name sortKey="Sacco, D F" uniqKey="Sacco D">D. F. Sacco</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kennerknecht, I" uniqKey="Kennerknecht I">I. Kennerknecht</name>
</author>
<author>
<name sortKey="Ho, N Y" uniqKey="Ho N">N. Y. Ho</name>
</author>
<author>
<name sortKey="Wong, V C N" uniqKey="Wong V">V. C. N. Wong</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kimchi, R" uniqKey="Kimchi R">R. Kimchi</name>
</author>
<author>
<name sortKey="Behrmann, M" uniqKey="Behrmann M">M. Behrmann</name>
</author>
<author>
<name sortKey="Avidan, G" uniqKey="Avidan G">G. Avidan</name>
</author>
<author>
<name sortKey="Amishav, R" uniqKey="Amishav R">R. Amishav</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Konar, Y" uniqKey="Konar Y">Y. Konar</name>
</author>
<author>
<name sortKey="Bennett, P J" uniqKey="Bennett P">P. J. Bennett</name>
</author>
<author>
<name sortKey="Sekuler, A B" uniqKey="Sekuler A">A. B. Sekuler</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kress, T" uniqKey="Kress T">T. Kress</name>
</author>
<author>
<name sortKey="Daum, I" uniqKey="Daum I">I. Daum</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Le Grand, R" uniqKey="Le Grand R">R. Le Grand</name>
</author>
<author>
<name sortKey="Cooper, P A" uniqKey="Cooper P">P. A. Cooper</name>
</author>
<author>
<name sortKey="Mondloch, C J" uniqKey="Mondloch C">C. J. Mondloch</name>
</author>
<author>
<name sortKey="Lewis, T L" uniqKey="Lewis T">T. L. Lewis</name>
</author>
<author>
<name sortKey="Sagiv, N" uniqKey="Sagiv N">N. Sagiv</name>
</author>
<author>
<name sortKey="De Gelder, B" uniqKey="De Gelder B">B. De Gelder</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lobmaier, J S" uniqKey="Lobmaier J">J. S. Lobmaier</name>
</author>
<author>
<name sortKey="Bolte, J" uniqKey="Bolte J">J. Bölte</name>
</author>
<author>
<name sortKey="Mast, F W" uniqKey="Mast F">F. W. Mast</name>
</author>
<author>
<name sortKey="Dobel, C" uniqKey="Dobel C">C. Dobel</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Macmillan, N A" uniqKey="Macmillan N">N. A. Macmillan</name>
</author>
<author>
<name sortKey="Creelman, C D" uniqKey="Creelman C">C. D. Creelman</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Maurer, D" uniqKey="Maurer D">D. Maurer</name>
</author>
<author>
<name sortKey="Le Grand, R" uniqKey="Le Grand R">R. Le Grand</name>
</author>
<author>
<name sortKey="Mondloch, C J" uniqKey="Mondloch C">C. J. Mondloch</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Maurer, D" uniqKey="Maurer D">D. Maurer</name>
</author>
<author>
<name sortKey="O Craven, K M" uniqKey="O Craven K">K. M. O'Craven</name>
</author>
<author>
<name sortKey="Le Grand, R" uniqKey="Le Grand R">R. Le Grand</name>
</author>
<author>
<name sortKey="Mondloch, C J" uniqKey="Mondloch C">C. J. Mondloch</name>
</author>
<author>
<name sortKey="Springer, M V" uniqKey="Springer M">M. V. Springer</name>
</author>
<author>
<name sortKey="Lewis, T L" uniqKey="Lewis T">T. L. Lewis</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mckone, E" uniqKey="Mckone E">E. McKone</name>
</author>
<author>
<name sortKey="Aimola Davies, A" uniqKey="Aimola Davies A">A. Aimola Davies</name>
</author>
<author>
<name sortKey="Fernando, D" uniqKey="Fernando D">D. Fernando</name>
</author>
<author>
<name sortKey="Aalders, R" uniqKey="Aalders R">R. Aalders</name>
</author>
<author>
<name sortKey="Leung, H" uniqKey="Leung H">H. Leung</name>
</author>
<author>
<name sortKey="Wickramariyaratne, T" uniqKey="Wickramariyaratne T">T. Wickramariyaratne</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mckone, E" uniqKey="Mckone E">E. McKone</name>
</author>
<author>
<name sortKey="Brewer, J L" uniqKey="Brewer J">J. L. Brewer</name>
</author>
<author>
<name sortKey="Macpherson, S" uniqKey="Macpherson S">S. MacPherson</name>
</author>
<author>
<name sortKey="Rhodes, G" uniqKey="Rhodes G">G. Rhodes</name>
</author>
<author>
<name sortKey="Hayward, W G" uniqKey="Hayward W">W. G. Hayward</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mckone, E" uniqKey="Mckone E">E. McKone</name>
</author>
<author>
<name sortKey="Stokes, S" uniqKey="Stokes S">S. Stokes</name>
</author>
<author>
<name sortKey="Liu, J" uniqKey="Liu J">J. Liu</name>
</author>
<author>
<name sortKey="Cohan, S" uniqKey="Cohan S">S. Cohan</name>
</author>
<author>
<name sortKey="Fiorentini, C" uniqKey="Fiorentini C">C. Fiorentini</name>
</author>
<author>
<name sortKey="Pidcock, M" uniqKey="Pidcock M">M. Pidcock</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Meissner, C A" uniqKey="Meissner C">C. A. Meissner</name>
</author>
<author>
<name sortKey="Brigham, J C" uniqKey="Brigham J">J. C. Brigham</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Michel, C" uniqKey="Michel C">C. Michel</name>
</author>
<author>
<name sortKey="Rossion, B" uniqKey="Rossion B">B. Rossion</name>
</author>
<author>
<name sortKey="Han, J" uniqKey="Han J">J. Han</name>
</author>
<author>
<name sortKey="Chung, C S" uniqKey="Chung C">C.-S. Chung</name>
</author>
<author>
<name sortKey="Caldara, R" uniqKey="Caldara R">R. Caldara</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mondloch, C J" uniqKey="Mondloch C">C. J. Mondloch</name>
</author>
<author>
<name sortKey="Elms, N" uniqKey="Elms N">N. Elms</name>
</author>
<author>
<name sortKey="Maurer, D" uniqKey="Maurer D">D. Maurer</name>
</author>
<author>
<name sortKey="Rhodes, G" uniqKey="Rhodes G">G. Rhodes</name>
</author>
<author>
<name sortKey="Hayward, W G" uniqKey="Hayward W">W. G. Hayward</name>
</author>
<author>
<name sortKey="Tanaka, J W" uniqKey="Tanaka J">J. W. Tanaka</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Rhodes, G" uniqKey="Rhodes G">G. Rhodes</name>
</author>
<author>
<name sortKey="Brake, S" uniqKey="Brake S">S. Brake</name>
</author>
<author>
<name sortKey="Taylor, K" uniqKey="Taylor K">K. Taylor</name>
</author>
<author>
<name sortKey="Tan, S" uniqKey="Tan S">S. Tan</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Rhodes, G" uniqKey="Rhodes G">G. Rhodes</name>
</author>
<author>
<name sortKey="Hayward, W G" uniqKey="Hayward W">W. G. Hayward</name>
</author>
<author>
<name sortKey="Winkler, C" uniqKey="Winkler C">C. Winkler</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Richler, J J" uniqKey="Richler J">J. J. Richler</name>
</author>
<author>
<name sortKey="Cheung, O S" uniqKey="Cheung O">O. S. Cheung</name>
</author>
<author>
<name sortKey="Gauthier, I" uniqKey="Gauthier I">I. Gauthier</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Rivolta, D" uniqKey="Rivolta D">D. Rivolta</name>
</author>
<author>
<name sortKey="Palermo, R" uniqKey="Palermo R">R. Palermo</name>
</author>
<author>
<name sortKey="Schmalzl, L" uniqKey="Schmalzl L">L. Schmalzl</name>
</author>
<author>
<name sortKey="Coltheart, M" uniqKey="Coltheart M">M. Coltheart</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Rotshtein, P" uniqKey="Rotshtein P">P. Rotshtein</name>
</author>
<author>
<name sortKey="Geng, J J" uniqKey="Geng J">J. J. Geng</name>
</author>
<author>
<name sortKey="Driver, J" uniqKey="Driver J">J. Driver</name>
</author>
<author>
<name sortKey="Dolan, R J" uniqKey="Dolan R">R. J. Dolan</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Rushton, J P" uniqKey="Rushton J">J. P. Rushton</name>
</author>
<author>
<name sortKey="Jensen, A R" uniqKey="Jensen A">A. R. Jensen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Stollhoff, R" uniqKey="Stollhoff R">R. Stollhoff</name>
</author>
<author>
<name sortKey="Jost, J" uniqKey="Jost J">J. Jost</name>
</author>
<author>
<name sortKey="Elze, T" uniqKey="Elze T">T. Elze</name>
</author>
<author>
<name sortKey="Kennerknecht, I" uniqKey="Kennerknecht I">I. Kennerknecht</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Towler, J" uniqKey="Towler J">J. Towler</name>
</author>
<author>
<name sortKey="Gosling, A" uniqKey="Gosling A">A. Gosling</name>
</author>
<author>
<name sortKey="Duchaine, B C" uniqKey="Duchaine B">B. C. Duchaine</name>
</author>
<author>
<name sortKey="Eimer, M" uniqKey="Eimer M">M. Eimer</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Troje, N F" uniqKey="Troje N">N. F. Troje</name>
</author>
<author>
<name sortKey="Bulthoff, H H" uniqKey="Bulthoff H">H. H. Bülthoff</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Vetter, T" uniqKey="Vetter T">T. Vetter</name>
</author>
<author>
<name sortKey="Blanz, V" uniqKey="Blanz V">V. Blanz</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Wang, H" uniqKey="Wang H">H. Wang</name>
</author>
<author>
<name sortKey="Stollhoff, R" uniqKey="Stollhoff R">R. Stollhoff</name>
</author>
<author>
<name sortKey="Elze, T" uniqKey="Elze T">T. Elze</name>
</author>
<author>
<name sortKey="Jost, J" uniqKey="Jost J">J. Jost</name>
</author>
<author>
<name sortKey="Kennerknecht, I" uniqKey="Kennerknecht I">I. Kennerknecht</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Yovel, G" uniqKey="Yovel G">G. Yovel</name>
</author>
<author>
<name sortKey="Duchaine, B C" uniqKey="Duchaine B">B. C. Duchaine</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Yovel, G" uniqKey="Yovel G">G. Yovel</name>
</author>
<author>
<name sortKey="Kanwisher, N" uniqKey="Kanwisher N">N. Kanwisher</name>
</author>
</analytic>
</biblStruct>
</listBibl>
</div1>
</back>
</TEI>
<pmc article-type="research-article">
<pmc-dir>properties open_access</pmc-dir>
<front>
<journal-meta>
<journal-id journal-id-type="nlm-ta">Front Hum Neurosci</journal-id>
<journal-id journal-id-type="iso-abbrev">Front Hum Neurosci</journal-id>
<journal-id journal-id-type="publisher-id">Front. Hum. Neurosci.</journal-id>
<journal-title-group>
<journal-title>Frontiers in Human Neuroscience</journal-title>
</journal-title-group>
<issn pub-type="epub">1662-5161</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="pmid">25324757</article-id>
<article-id pub-id-type="pmc">4179381</article-id>
<article-id pub-id-type="doi">10.3389/fnhum.2014.00759</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Neuroscience</subject>
<subj-group>
<subject>Original Research Article</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Do congenital prosopagnosia and the other-race effect affect the same face recognition mechanisms?</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name>
<surname>Esins</surname>
<given-names>Janina</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
<xref ref-type="author-notes" rid="fn001">
<sup>*</sup>
</xref>
<uri xlink:type="simple" xlink:href="http://community.frontiersin.org/people/u/128576"></uri>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Schultz</surname>
<given-names>Johannes</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
<xref ref-type="aff" rid="aff2">
<sup>2</sup>
</xref>
<uri xlink:type="simple" xlink:href="http://community.frontiersin.org/people/u/46545"></uri>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Wallraven</surname>
<given-names>Christian</given-names>
</name>
<xref ref-type="aff" rid="aff3">
<sup>3</sup>
</xref>
<uri xlink:type="simple" xlink:href="http://community.frontiersin.org/people/u/166207"></uri>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Bülthoff</surname>
<given-names>Isabelle</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
<xref ref-type="aff" rid="aff3">
<sup>3</sup>
</xref>
<uri xlink:type="simple" xlink:href="http://community.frontiersin.org/people/u/166929"></uri>
</contrib>
</contrib-group>
<aff id="aff1">
<sup>1</sup>
<institution>Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics</institution>
<country>Tübingen, Germany</country>
</aff>
<aff id="aff2">
<sup>2</sup>
<institution>Department of Psychology, Durham University</institution>
<country>Durham, UK</country>
</aff>
<aff id="aff3">
<sup>3</sup>
<institution>Department of Brain and Cognitive Engineering, Korea University</institution>
<country>Seoul, South Korea</country>
</aff>
<author-notes>
<fn fn-type="edited-by">
<p>Edited by: Davide Rivolta, University of East London, UK</p>
</fn>
<fn fn-type="edited-by">
<p>Reviewed by: Tamara L. Watson, University of Western Sydney, Australia; Roberta Daini, Università degli studi di Milano - Bicocca, Italy</p>
</fn>
<corresp id="fn001">*Correspondence: Janina Esins, Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Spemannstrasse 38, Tübingen 72076, Germany e-mail:
<email xlink:type="simple">janina.esins@tuebingen.mpg.de</email>
</corresp>
<fn fn-type="other" id="fn002">
<p>This article was submitted to the journal Frontiers in Human Neuroscience.</p>
</fn>
</author-notes>
<pub-date pub-type="epub">
<day>29</day>
<month>9</month>
<year>2014</year>
</pub-date>
<pub-date pub-type="collection">
<year>2014</year>
</pub-date>
<volume>8</volume>
<elocation-id>759</elocation-id>
<history>
<date date-type="received">
<day>30</day>
<month>4</month>
<year>2014</year>
</date>
<date date-type="accepted">
<day>08</day>
<month>9</month>
<year>2014</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright © 2014 Esins, Schultz, Wallraven and Bülthoff.</copyright-statement>
<copyright-year>2014</copyright-year>
<license license-type="open-access" xlink:href="http://creativecommons.org/licenses/by/4.0/">
<license-p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</license-p>
</license>
</permissions>
<abstract>
<p>Congenital prosopagnosia (CP), an innate impairment in recognizing faces, as well as the other-race effect (ORE), a disadvantage in recognizing faces of foreign races, both affect face recognition abilities. Are the same face processing mechanisms affected in both situations? To investigate this question, we tested three groups of 21 participants: German congenital prosopagnosics, South Korean participants and German controls on three different tasks involving faces and objects. First we tested all participants on the Cambridge Face Memory Test in which they had to recognize Caucasian target faces in a 3-alternative-forced-choice task. German controls performed better than Koreans who performed better than prosopagnosics. In the second experiment, participants rated the similarity of Caucasian faces that differed parametrically in either features or second-order relations (configuration). Prosopagnosics were less sensitive to configuration changes than both other groups. In addition, while all groups were more sensitive to changes in features than in configuration, this difference was smaller in Koreans. In the third experiment, participants had to learn exemplars of artificial objects, natural objects, and faces and recognize them among distractors of the same category. Here prosopagnosics performed worse than participants in the other two groups only when they were tested on face stimuli. In sum, Koreans and prosopagnosic participants differed from German controls in different ways in all tests. This suggests that German congenital prosopagnosics perceive Caucasian faces differently than do Korean participants. Importantly, our results suggest that different processing impairments underlie the ORE and CP.</p>
</abstract>
<kwd-group>
<kwd>congenital prosopagnosia</kwd>
<kwd>other-race effect</kwd>
<kwd>face recognition</kwd>
<kwd>Asian</kwd>
<kwd>Caucasian</kwd>
</kwd-group>
<counts>
<fig-count count="9"></fig-count>
<table-count count="2"></table-count>
<equation-count count="0"></equation-count>
<ref-count count="51"></ref-count>
<page-count count="14"></page-count>
<word-count count="11410"></word-count>
</counts>
</article-meta>
</front>
<body>
<sec sec-type="introduction" id="s1">
<title>Introduction</title>
<p>Recognizing faces is arguably the most important way to identify other humans and bears great social importance. Even though faces are a visually homogeneous object class, most humans are experts in face identification: within milliseconds we can identify a familiar face in poor lighting, after 15 years of aging, 20 pounds of weight loss, or with a different hairdo—and this is true for the several hundred acquaintances we have on average.</p>
<p>One explanation for this achievement is that we use “holistic processing” for faces: we integrate the different components of a face [e.g., the form and color of the features (eyes, nose, and mouth) and their configuration (i.e., spatial distances between the features)] into a whole and do not process single pieces of information individually (Maurer et al.,
<xref rid="B31" ref-type="bibr">2002</xref>
). If the retrieval of this information is disturbed, holistic processing and thus face recognition are impaired (Collishaw and Hole,
<xref rid="B10" ref-type="bibr">2000</xref>
). Configural processing in particular is considered one of the most important aspects of holistic processing: disturbing this process alone already strongly affects holistic processing of faces (Maurer et al.,
<xref rid="B31" ref-type="bibr">2002</xref>
).</p>
<p>Most humans are undoubtedly experts at everyday face recognition, but this expertise can be disturbed in various ways. Two well-known phenomena in which people show impaired face recognition abilities are congenital prosopagnosia (CP) and the other-race effect (ORE).</p>
<p>CP is an innate impairment in face processing. People with CP often encounter social difficulties, such as being considered arrogant or ignorant because they fail to recognize and greet acquaintances. As a result, some of them tend to lead a socially withdrawn life. An estimated 2.5% of the population is affected (Kennerknecht et al.,
<xref rid="B24" ref-type="bibr">2008</xref>
). In contrast to acquired prosopagnosia, which is caused by brain damage, CP is inborn and involves no evident brain lesions. Moreover, several studies found normal functional brain responses to faces in fMRI studies (e.g., Avidan et al.,
<xref rid="B2" ref-type="bibr">2005</xref>
; Avidan and Behrmann,
<xref rid="B1" ref-type="bibr">2009</xref>
) and EEG studies (e.g., Towler et al.,
<xref rid="B46" ref-type="bibr">2012</xref>
) but subtle differences in connectivity between face processing brain regions for congenital prosopagnosics compared with controls (Avidan et al.,
<xref rid="B4" ref-type="bibr">2008</xref>
). In a single case study of CP, this reduced connectivity could be enhanced by training on spatial integration of the mouth and eye regions of faces. The training also improved face recognition performance, but these gains vanished after a few months (DeGutis et al.,
<xref rid="B11" ref-type="bibr">2007</xref>
).</p>
<p>The ORE describes the fact that we recognize faces of our own (familiar) race faster and more accurately than faces of an unfamiliar ethnicity (Meissner and Brigham,
<xref rid="B36" ref-type="bibr">2001</xref>
). This effect (also called “cross-race bias,” “own-race advantage,” or “other-race deficit”) is a common and well-documented phenomenon. Several models exist to explain the mechanisms underlying the ORE. The most common explanation is the higher level of expertise for same-race faces compared with other-race faces (Meissner and Brigham,
<xref rid="B36" ref-type="bibr">2001</xref>
). This perceptual expertise hypothesis states that frequent encounters with own-race faces and training in individuating them lead to greater experience in encoding the dimensions most useful for individuating faces of that race. Nevertheless, competing models exist, such as the social categorization hypothesis, which states that mere social out-group categorization is sufficient to elicit a drop in face recognition performance (Bernstein et al.,
<xref rid="B7" ref-type="bibr">2007</xref>
). Another hypothesis is the categorization-individuation model, which combines perceptual experience, social categorization, and motivated individuation (discrimination among individuals within a racial group, which requires attending to face-identity characteristics rather than to category-diagnostic characteristics), all three of which act together to generate the ORE (Hugenberg et al.,
<xref rid="B23" ref-type="bibr">2010</xref>
). The underlying mechanisms are not yet clear, but it has been shown that the ORE can be overcome through training, albeit only for the trained faces (McKone et al.,
<xref rid="B34" ref-type="bibr">2007</xref>
).</p>
<p>As nearly everyone has experienced the ORE, it is sometimes cited as an example by congenital prosopagnosics when they try to describe to non-prosopagnosics what they experience in everyday life. Both phenomena are characterized by the difficulty of telling people apart or recognizing previously encountered people based on their faces. Moreover, in both cases there is evidence for parallel disturbances of face processing, as reviewed in the following.</p>
<p>Some studies used the inversion effect or the composite face effect to test participants' face processing abilities. The inversion effect refers to the reduction in face recognition performance when faces are presented upside down; this reduction is significantly larger for faces than for other objects for which we are not experts. The composite face effect describes the illusion of a new identity that arises when the top half of one person's face is combined with the bottom half of another person's face: the two halves cannot be processed individually and create the face of a new, third person. The illusion disappears when the two halves are misaligned. Both effects are considered hallmarks of holistic face processing. Both manipulations disrupt configural information while leaving featural information intact, which again indicates the importance of configural processing for holistic processing (Maurer et al.,
<xref rid="B31" ref-type="bibr">2002</xref>
). A study testing congenital prosopagnosic participants found neither a face inversion effect nor a composite face effect, in either accuracy or reaction times, indicating their impairment in holistic processing of faces (Avidan et al.,
<xref rid="B3" ref-type="bibr">2011</xref>
). Regarding the face inversion effect for other-race faces, two experiments testing European and Asian participants found a larger effect for same-race faces than for other-race faces in both groups of participants (Rhodes et al.,
<xref rid="B39" ref-type="bibr">1989</xref>
). Similarly, when testing Asian and European participants on the composite face task, Michel and colleagues found a significantly larger composite face effect for same-race than for other-race faces (Michel et al.,
<xref rid="B37" ref-type="bibr">2006</xref>
).</p>
<p>In a study conducted by Lobmaier and colleagues, congenital prosopagnosics were tested with scrambled faces (configural information destroyed) and blurred faces (featural information destroyed) in a delayed matching task. Prosopagnosic participants showed significantly worse performance than controls in both conditions (Lobmaier et al.,
<xref rid="B29" ref-type="bibr">2010</xref>
). Chinese and Caucasian-Australian participants tested in an old-new recognition task on blurred and scrambled Asian and Caucasian faces also showed significantly worse performance for other-race faces than for own-race faces in both conditions (Hayward et al.,
<xref rid="B22" ref-type="bibr">2008</xref>
).</p>
<p>In another study, congenital prosopagnosic participants were tested on a same-different task with the so-called “Jane” set of stimuli (Le Grand et al.,
<xref rid="B28" ref-type="bibr">2006</xref>
). These face stimuli differ in either features, configuration, or contour. Only a minority of the prosopagnosic participants performed significantly worse than controls on the faces differing in configuration or features, but most prosopagnosics performed significantly worse on faces differing in their contour. A study with Asian participants using the same “Jane” stimuli and a similarly created Asian female face set also showed only marginal effects (Mondloch et al.,
<xref rid="B38" ref-type="bibr">2010</xref>
): Chinese participants were significantly slower on other-race than on same-race faces (analysis collapsed over all three modification types, with the longest mean reaction times for the faces differing in contour) but showed no significant differences in performance for any modification (features, configuration, contour). Even though this lack of group differences for the “Jane” stimuli was challenged by Yovel and Duchaine (
<xref rid="B50" ref-type="bibr">2006</xref>
) (this will be discussed in our general discussion), we note that similar results were obtained for other-race observers and for prosopagnosic observers in both studies.</p>
<p>There are several different causes that can reduce face recognition ability (aging, illness, drug consumption, etc.). However, the two face recognition disturbances under study here, CP and the ORE, seem to impair face recognition in a similar way, namely by disrupting featural and configural face processing (depending on the stimuli and task used, as reviewed above), causing a lack or reduction of face expertise. Also, in both cases face recognition performance can be improved to a certain extent through training. These similarities could hint that the same face processing mechanisms are impaired.</p>
<p>To verify the hypothesis of a common underlying disturbance, it is necessary to compare in detail whether the same kinds of impairments appear when looking specifically and directly at featural and configural processing. On the one hand, if differences in face recognition performance appear, we can exclude a common underlying disturbance. On the other hand, if similar impairments are found, the hypothesis that the same mechanisms are disturbed is not proven, but remains possible. In any case, a direct comparison between CP and the ORE is a valuable opportunity to gain further insight into the still poorly understood mechanisms underlying face processing and face recognition.</p>
<p>To conduct this direct comparison we recruited three age- and gender-matched participant groups with a comparatively large sample size of 21 participants per group: German congenital prosopagnosic participants, Korean participants, and German controls. All participant groups performed the same three tests: (1) the Cambridge Face Memory Test (CFMT, Duchaine and Nakayama,
<xref rid="B14" ref-type="bibr">2006</xref>
), an objective measure of recognition abilities for Caucasian faces, (2) a parametric test of the sensitivity to configural and featural information in faces; sensitivity to these two types of facial information has been shown to be reduced in congenital prosopagnosics and other-race observers in previous studies, and (3) a recognition task involving faces as well as familiar and unfamiliar objects to test the influence of expertise on recognition performance.</p>
<p>As all face stimuli used in our tests were derived from Caucasian faces, we expected the Korean group to exhibit evidence of the ORE that could be compared with the performance of the prosopagnosics, while the German control group would serve as a baseline. Our predictions for each test were the following: (1) For the CFMT, Koreans and prosopagnosics would have a lower score compared with German controls, due to the disadvantage in recognizing other-race faces for the Koreans and the innate face recognition impairment for the prosopagnosics. This test is a general measure of the severity of face recognition impairments and does not reveal whether differences in the nature of the impairments exist. (2) We expected to find a decreased sensitivity to configural and featural information for prosopagnosics and Koreans. This prediction was based on reported deficits in processing both kinds of information in prosopagnosic as well as other-race observers (Hayward et al.,
<xref rid="B22" ref-type="bibr">2008</xref>
; Lobmaier et al.,
<xref rid="B29" ref-type="bibr">2010</xref>
respectively). If prosopagnosics and Koreans showed differences in the extraction of featural and configural information, we could exclude the possibility that common mechanisms are impaired. (3) In the object and face recognition test we expected impaired recognition performance for the face stimuli in Koreans and prosopagnosics, again due to the disadvantage in recognizing other-race faces for the Koreans and the innate face recognition impairment for the prosopagnosics. We expected to find no differences across participant groups in recognizing the non-expertise object stimuli. Despite a study describing that 54 congenital prosopagnosics self-reported impaired object recognition during interviews (Grüter et al.,
<xref rid="B21" ref-type="bibr">2008</xref>
), most studies explicitly testing object recognition found near-normal to normal object recognition abilities in prosopagnosic participants. When impairments were found, they were less pronounced than the face recognition impairments (see Kress and Daum,
<xref rid="B27" ref-type="bibr">2003</xref>
; Le Grand et al.,
<xref rid="B28" ref-type="bibr">2006</xref>
for reviews).</p>
</sec>
<sec sec-type="materials|methods" id="s2">
<title>Materials and methods</title>
<sec>
<title>Participants</title>
<p>We tested three groups of participants: German congenital prosopagnosic participants (from now on referred to as “prosopagnosics”), South Korean participants (“Koreans”), and German control participants (“Germans”) with 21 participants per group. The ratio of female to male participants as well as the age of participants in each group was matched as closely as possible. Note that it was hard to recruit older male Korean participants, presumably for cultural reasons; therefore we had to resort to younger male participants in that group to have matching numbers of participants in all groups.</p>
<p>So far, no universally accepted standard diagnostic tool for CP exists: while the CFMT is widely used to characterize prosopagnosic participants (e.g., Rivolta et al.,
<xref rid="B42" ref-type="bibr">2011</xref>
; Kimchi et al.,
<xref rid="B25" ref-type="bibr">2012</xref>
), other diagnostic means exist. The prosopagnosics in our study were identified via a questionnaire and an interview (Stollhoff et al.,
<xref rid="B45" ref-type="bibr">2011</xref>
). Due to time constraints the Koreans and Germans did not participate in the diagnostic interview, but they reported having no problems recognizing the faces of their friends and family members. To provide an objective measure of face processing abilities and to maintain comparability with other studies, we tested all participants on the CFMT and report their scores and z-scores, based on the results of the German controls, in Table
<xref ref-type="table" rid="T1">1</xref>
.</p>
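The z-scores above are each participant's CFMT raw score standardized against the German control group. A minimal sketch of this computation, assuming the conventional sample standard deviation (the exact variant is not stated here, and the example numbers below are hypothetical, not the study's data):

```python
from statistics import mean, stdev

def cfmt_z_scores(scores, control_scores):
    """Standardize CFMT raw scores against the control group's
    mean and sample standard deviation."""
    m = mean(control_scores)
    s = stdev(control_scores)  # sample SD (n - 1 denominator)
    return [(score - m) / s for score in scores]

# Hypothetical control scores: mean 60, sample SD ~3.74
controls = [65, 59, 61, 55, 57, 63]
print(cfmt_z_scores([38, 53], controls))  # both fall below the control mean
```

A score far below the control mean (such as 38 here) yields a strongly negative z-score, which is how the prosopagnosic deficits in Table 1 are quantified.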
<table-wrap id="T1" position="float">
<label>Table 1</label>
<caption>
<p>
<bold>Overview of the participants in the three different groups</bold>
.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th rowspan="1" colspan="1"></th>
<th align="center" colspan="4" rowspan="1">
<bold>Prosopagnosic</bold>
</th>
<th align="center" colspan="4" rowspan="1">
<bold>Korean</bold>
</th>
<th align="center" colspan="4" rowspan="1">
<bold>German</bold>
</th>
</tr>
<tr>
<th rowspan="1" colspan="1"></th>
<th align="left" rowspan="1" colspan="1">
<bold>Sex</bold>
</th>
<th align="left" rowspan="1" colspan="1">
<bold>Age</bold>
</th>
<th align="center" colspan="2" rowspan="1">
<bold>CFMT</bold>
</th>
<th align="left" rowspan="1" colspan="1">
<bold>Sex</bold>
</th>
<th align="left" rowspan="1" colspan="1">
<bold>Age</bold>
</th>
<th align="center" colspan="2" rowspan="1">
<bold>CFMT</bold>
</th>
<th align="left" rowspan="1" colspan="1">
<bold>Sex</bold>
</th>
<th align="left" rowspan="1" colspan="1">
<bold>Age</bold>
</th>
<th align="center" colspan="2" rowspan="1">
<bold>CFMT</bold>
</th>
</tr>
<tr>
<th rowspan="1" colspan="1"></th>
<th rowspan="1" colspan="1"></th>
<th rowspan="1" colspan="1"></th>
<th align="left" rowspan="1" colspan="1">
<bold>score</bold>
</th>
<th align="left" rowspan="1" colspan="1">
<bold>z-score</bold>
</th>
<th rowspan="1" colspan="1"></th>
<th rowspan="1" colspan="1"></th>
<th align="left" rowspan="1" colspan="1">
<bold>score</bold>
</th>
<th align="left" rowspan="1" colspan="1">
<bold>z-score</bold>
</th>
<th rowspan="1" colspan="1"></th>
<th rowspan="1" colspan="1"></th>
<th align="left" rowspan="1" colspan="1">
<bold>score</bold>
</th>
<th align="left" rowspan="1" colspan="1">
<bold>z-score</bold>
</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" rowspan="1" colspan="1">1</td>
<td align="center" rowspan="1" colspan="1">f</td>
<td align="center" rowspan="1" colspan="1">21</td>
<td align="center" rowspan="1" colspan="1">38</td>
<td align="center" rowspan="1" colspan="1">−3.57</td>
<td align="center" rowspan="1" colspan="1">f</td>
<td align="center" rowspan="1" colspan="1">22</td>
<td align="center" rowspan="1" colspan="1">53</td>
<td align="center" rowspan="1" colspan="1">−1.05</td>
<td align="center" rowspan="1" colspan="1">f</td>
<td align="center" rowspan="1" colspan="1">23</td>
<td align="center" rowspan="1" colspan="1">65</td>
<td align="center" rowspan="1" colspan="1">0.96</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">2</td>
<td align="center" rowspan="1" colspan="1">f</td>
<td align="center" rowspan="1" colspan="1">22</td>
<td align="center" rowspan="1" colspan="1">44</td>
<td align="center" rowspan="1" colspan="1">−2.57</td>
<td align="center" rowspan="1" colspan="1">f</td>
<td align="center" rowspan="1" colspan="1">23</td>
<td align="center" rowspan="1" colspan="1">53</td>
<td align="center" rowspan="1" colspan="1">−1.05</td>
<td align="center" rowspan="1" colspan="1">f</td>
<td align="center" rowspan="1" colspan="1">24</td>
<td align="center" rowspan="1" colspan="1">69</td>
<td align="center" rowspan="1" colspan="1">1.63</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">3</td>
<td align="center" rowspan="1" colspan="1">f</td>
<td align="center" rowspan="1" colspan="1">24</td>
<td align="center" rowspan="1" colspan="1">37</td>
<td align="center" rowspan="1" colspan="1">−3.74</td>
<td align="center" rowspan="1" colspan="1">m</td>
<td align="center" rowspan="1" colspan="1">24</td>
<td align="center" rowspan="1" colspan="1">47</td>
<td align="center" rowspan="1" colspan="1">−2.06</td>
<td align="center" rowspan="1" colspan="1">f</td>
<td align="center" rowspan="1" colspan="1">24</td>
<td align="center" rowspan="1" colspan="1">64</td>
<td align="center" rowspan="1" colspan="1">0.79</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">4</td>
<td align="center" rowspan="1" colspan="1">f</td>
<td align="center" rowspan="1" colspan="1">27</td>
<td align="center" rowspan="1" colspan="1">47</td>
<td align="center" rowspan="1" colspan="1">−2.06</td>
<td align="center" rowspan="1" colspan="1">m</td>
<td align="center" rowspan="1" colspan="1">24</td>
<td align="center" rowspan="1" colspan="1">57</td>
<td align="center" rowspan="1" colspan="1">−0.38</td>
<td align="center" rowspan="1" colspan="1">f</td>
<td align="center" rowspan="1" colspan="1">25</td>
<td align="center" rowspan="1" colspan="1">57</td>
<td align="center" rowspan="1" colspan="1">−0.38</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">5</td>
<td align="center" rowspan="1" colspan="1">f</td>
<td align="center" rowspan="1" colspan="1">27</td>
<td align="center" rowspan="1" colspan="1">42</td>
<td align="center" rowspan="1" colspan="1">−2.90</td>
<td align="center" rowspan="1" colspan="1">m</td>
<td align="center" rowspan="1" colspan="1">26</td>
<td align="center" rowspan="1" colspan="1">51</td>
<td align="center" rowspan="1" colspan="1">−1.39</td>
<td align="center" rowspan="1" colspan="1">f</td>
<td align="center" rowspan="1" colspan="1">29</td>
<td align="center" rowspan="1" colspan="1">61</td>
<td align="center" rowspan="1" colspan="1">0.29</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">6</td>
<td align="center" rowspan="1" colspan="1">f</td>
<td align="center" rowspan="1" colspan="1">28</td>
<td align="center" rowspan="1" colspan="1">36</td>
<td align="center" rowspan="1" colspan="1">−3.91</td>
<td align="center" rowspan="1" colspan="1">f</td>
<td align="center" rowspan="1" colspan="1">28</td>
<td align="center" rowspan="1" colspan="1">57</td>
<td align="center" rowspan="1" colspan="1">−0.38</td>
<td align="center" rowspan="1" colspan="1">f</td>
<td align="center" rowspan="1" colspan="1">31</td>
<td align="center" rowspan="1" colspan="1">53</td>
<td align="center" rowspan="1" colspan="1">−1.05</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">7</td>
<td align="center" rowspan="1" colspan="1">m</td>
<td align="center" rowspan="1" colspan="1">33</td>
<td align="center" rowspan="1" colspan="1">45</td>
<td align="center" rowspan="1" colspan="1">−2.40</td>
<td align="center" rowspan="1" colspan="1">m</td>
<td align="center" rowspan="1" colspan="1">30</td>
<td align="center" rowspan="1" colspan="1">50</td>
<td align="center" rowspan="1" colspan="1">−1.56</td>
<td align="center" rowspan="1" colspan="1">m</td>
<td align="center" rowspan="1" colspan="1">33</td>
<td align="center" rowspan="1" colspan="1">59</td>
<td align="center" rowspan="1" colspan="1">−0.05</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">8</td>
<td align="center" rowspan="1" colspan="1">m</td>
<td align="center" rowspan="1" colspan="1">34</td>
<td align="center" rowspan="1" colspan="1">33</td>
<td align="center" rowspan="1" colspan="1">−4.41</td>
<td align="center" rowspan="1" colspan="1">m</td>
<td align="center" rowspan="1" colspan="1">37</td>
<td align="center" rowspan="1" colspan="1">53</td>
<td align="center" rowspan="1" colspan="1">−1.05</td>
<td align="center" rowspan="1" colspan="1">f</td>
<td align="center" rowspan="1" colspan="1">36</td>
<td align="center" rowspan="1" colspan="1">55</td>
<td align="center" rowspan="1" colspan="1">−0.72</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">9</td>
<td align="center" rowspan="1" colspan="1">f</td>
<td align="center" rowspan="1" colspan="1">36</td>
<td align="center" rowspan="1" colspan="1">38</td>
<td align="center" rowspan="1" colspan="1">−3.57</td>
<td align="center" rowspan="1" colspan="1">f</td>
<td align="center" rowspan="1" colspan="1">39</td>
<td align="center" rowspan="1" colspan="1">58</td>
<td align="center" rowspan="1" colspan="1">−0.22</td>
<td align="center" rowspan="1" colspan="1">m</td>
<td align="center" rowspan="1" colspan="1">36</td>
<td align="center" rowspan="1" colspan="1">58</td>
<td align="center" rowspan="1" colspan="1">−0.22</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">10</td>
<td align="center" rowspan="1" colspan="1">m</td>
<td align="center" rowspan="1" colspan="1">36</td>
<td align="center" rowspan="1" colspan="1">45</td>
<td align="center" rowspan="1" colspan="1">−2.40</td>
<td align="center" rowspan="1" colspan="1">m</td>
<td align="center" rowspan="1" colspan="1">41</td>
<td align="center" rowspan="1" colspan="1">55</td>
<td align="center" rowspan="1" colspan="1">−0.72</td>
<td align="center" rowspan="1" colspan="1">m</td>
<td align="center" rowspan="1" colspan="1">37</td>
<td align="center" rowspan="1" colspan="1">50</td>
<td align="center" rowspan="1" colspan="1">−1.56</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">11</td>
<td align="center" rowspan="1" colspan="1">m</td>
<td align="center" rowspan="1" colspan="1">37</td>
<td align="center" rowspan="1" colspan="1">34</td>
<td align="center" rowspan="1" colspan="1">−4.24</td>
<td align="center" rowspan="1" colspan="1">f</td>
<td align="center" rowspan="1" colspan="1">41</td>
<td align="center" rowspan="1" colspan="1">55</td>
<td align="center" rowspan="1" colspan="1">−0.72</td>
<td align="center" rowspan="1" colspan="1">f</td>
<td align="center" rowspan="1" colspan="1">37</td>
<td align="center" rowspan="1" colspan="1">64</td>
<td align="center" rowspan="1" colspan="1">0.79</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">12</td>
<td align="center" rowspan="1" colspan="1">f</td>
<td align="center" rowspan="1" colspan="1">41</td>
<td align="center" rowspan="1" colspan="1">34</td>
<td align="center" rowspan="1" colspan="1">−4.24</td>
<td align="center" rowspan="1" colspan="1">f</td>
<td align="center" rowspan="1" colspan="1">42</td>
<td align="center" rowspan="1" colspan="1">53</td>
<td align="center" rowspan="1" colspan="1">−1.05</td>
<td align="center" rowspan="1" colspan="1">m</td>
<td align="center" rowspan="1" colspan="1">39</td>
<td align="center" rowspan="1" colspan="1">62</td>
<td align="center" rowspan="1" colspan="1">0.46</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">13</td>
<td align="center" rowspan="1" colspan="1">f</td>
<td align="center" rowspan="1" colspan="1">46</td>
<td align="center" rowspan="1" colspan="1">44</td>
<td align="center" rowspan="1" colspan="1">−2.57</td>
<td align="center" rowspan="1" colspan="1">f</td>
<td align="center" rowspan="1" colspan="1">42</td>
<td align="center" rowspan="1" colspan="1">63</td>
<td align="center" rowspan="1" colspan="1">0.62</td>
<td align="center" rowspan="1" colspan="1">m</td>
<td align="center" rowspan="1" colspan="1">39</td>
<td align="center" rowspan="1" colspan="1">52</td>
<td align="center" rowspan="1" colspan="1">−1.22</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">14</td>
<td align="center" rowspan="1" colspan="1">f</td>
<td align="center" rowspan="1" colspan="1">46</td>
<td align="center" rowspan="1" colspan="1">39</td>
<td align="center" rowspan="1" colspan="1">−3.40</td>
<td align="center" rowspan="1" colspan="1">f</td>
<td align="center" rowspan="1" colspan="1">45</td>
<td align="center" rowspan="1" colspan="1">64</td>
<td align="center" rowspan="1" colspan="1">0.79</td>
<td align="center" rowspan="1" colspan="1">m</td>
<td align="center" rowspan="1" colspan="1">44</td>
<td align="center" rowspan="1" colspan="1">71</td>
<td align="center" rowspan="1" colspan="1">1.97</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">15</td>
<td align="center" rowspan="1" colspan="1">m</td>
<td align="center" rowspan="1" colspan="1">47</td>
<td align="center" rowspan="1" colspan="1">43</td>
<td align="center" rowspan="1" colspan="1">−2.73</td>
<td align="center" rowspan="1" colspan="1">f</td>
<td align="center" rowspan="1" colspan="1">46</td>
<td align="center" rowspan="1" colspan="1">44</td>
<td align="center" rowspan="1" colspan="1">−2.57</td>
<td align="center" rowspan="1" colspan="1">f</td>
<td align="center" rowspan="1" colspan="1">44</td>
<td align="center" rowspan="1" colspan="1">52</td>
<td align="center" rowspan="1" colspan="1">−1.22</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">16</td>
<td align="center" rowspan="1" colspan="1">m</td>
<td align="center" rowspan="1" colspan="1">52</td>
<td align="center" rowspan="1" colspan="1">40</td>
<td align="center" rowspan="1" colspan="1">−3.24</td>
<td align="center" rowspan="1" colspan="1">f</td>
<td align="center" rowspan="1" colspan="1">50</td>
<td align="center" rowspan="1" colspan="1">47</td>
<td align="center" rowspan="1" colspan="1">−2.06</td>
<td align="center" rowspan="1" colspan="1">f</td>
<td align="center" rowspan="1" colspan="1">46</td>
<td align="center" rowspan="1" colspan="1">59</td>
<td align="center" rowspan="1" colspan="1">−0.05</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">17</td>
<td align="center" rowspan="1" colspan="1">f</td>
<td align="center" rowspan="1" colspan="1">53</td>
<td align="center" rowspan="1" colspan="1">36</td>
<td align="center" rowspan="1" colspan="1">−3.91</td>
<td align="center" rowspan="1" colspan="1">f</td>
<td align="center" rowspan="1" colspan="1">51</td>
<td align="center" rowspan="1" colspan="1">63</td>
<td align="center" rowspan="1" colspan="1">0,62</td>
<td align="center" rowspan="1" colspan="1">f</td>
<td align="center" rowspan="1" colspan="1">47</td>
<td align="center" rowspan="1" colspan="1">54</td>
<td align="center" rowspan="1" colspan="1">−0.89</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">18</td>
<td align="center" rowspan="1" colspan="1">f</td>
<td align="center" rowspan="1" colspan="1">54</td>
<td align="center" rowspan="1" colspan="1">46</td>
<td align="center" rowspan="1" colspan="1">−2.23</td>
<td align="center" rowspan="1" colspan="1">f</td>
<td align="center" rowspan="1" colspan="1">55</td>
<td align="center" rowspan="1" colspan="1">54</td>
<td align="center" rowspan="1" colspan="1">−0.89</td>
<td align="center" rowspan="1" colspan="1">f</td>
<td align="center" rowspan="1" colspan="1">49</td>
<td align="center" rowspan="1" colspan="1">58</td>
<td align="center" rowspan="1" colspan="1">−0.22</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">19</td>
<td align="center" rowspan="1" colspan="1">m</td>
<td align="center" rowspan="1" colspan="1">57</td>
<td align="center" rowspan="1" colspan="1">37</td>
<td align="center" rowspan="1" colspan="1">−3.74</td>
<td align="center" rowspan="1" colspan="1">f</td>
<td align="center" rowspan="1" colspan="1">55</td>
<td align="center" rowspan="1" colspan="1">38</td>
<td align="center" rowspan="1" colspan="1">−3.57</td>
<td align="center" rowspan="1" colspan="1">m</td>
<td align="center" rowspan="1" colspan="1">54</td>
<td align="center" rowspan="1" colspan="1">68</td>
<td align="center" rowspan="1" colspan="1">1.46</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">20</td>
<td align="center" rowspan="1" colspan="1">m</td>
<td align="center" rowspan="1" colspan="1">59</td>
<td align="center" rowspan="1" colspan="1">38</td>
<td align="center" rowspan="1" colspan="1">−3.57</td>
<td align="center" rowspan="1" colspan="1">f</td>
<td align="center" rowspan="1" colspan="1">57</td>
<td align="center" rowspan="1" colspan="1">50</td>
<td align="center" rowspan="1" colspan="1">−1.56</td>
<td align="center" rowspan="1" colspan="1">f</td>
<td align="center" rowspan="1" colspan="1">58</td>
<td align="center" rowspan="1" colspan="1">54</td>
<td align="center" rowspan="1" colspan="1">−0.89</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">21</td>
<td align="center" rowspan="1" colspan="1">f</td>
<td align="center" rowspan="1" colspan="1">64</td>
<td align="center" rowspan="1" colspan="1">38</td>
<td align="center" rowspan="1" colspan="1">−3.57</td>
<td align="center" rowspan="1" colspan="1">f</td>
<td align="center" rowspan="1" colspan="1">58</td>
<td align="center" rowspan="1" colspan="1">50</td>
<td align="center" rowspan="1" colspan="1">−1.56</td>
<td align="center" rowspan="1" colspan="1">m</td>
<td align="center" rowspan="1" colspan="1">60</td>
<td align="center" rowspan="1" colspan="1">60</td>
<td align="center" rowspan="1" colspan="1">0.12</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Mean scores</td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td align="center" rowspan="1" colspan="1">39.71</td>
<td align="center" rowspan="1" colspan="1">−3.28</td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td align="center" rowspan="1" colspan="1">53.10</td>
<td align="center" rowspan="1" colspan="1">−1.04</td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td align="center" rowspan="1" colspan="1">59.29</td>
<td align="center" rowspan="1" colspan="1">0.00</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1"></td>
<td align="center" rowspan="1" colspan="1">8</td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td align="center" rowspan="1" colspan="1">6</td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td align="center" rowspan="1" colspan="1">8</td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Mean age</td>
<td rowspan="1" colspan="1"></td>
<td align="center" rowspan="1" colspan="1">40.2</td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td align="center" rowspan="1" colspan="1">39.8</td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
<td align="center" rowspan="1" colspan="1">38.8</td>
<td rowspan="1" colspan="1"></td>
<td rowspan="1" colspan="1"></td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<p>
<italic>Depicted are the participants' sex (f, female; m, male), age in years, and their CFMT scores as well as the corresponding z-scores, computed relative to the results of the German controls</italic>.</p>
</table-wrap-foot>
</table-wrap>
<p>All participants provided informed consent and had normal or corrected-to-normal visual acuity.</p>
<sec>
<title>German congenital prosopagnosic participants</title>
<p>The prosopagnosics were diagnosed by the Institute of Human Genetics, Universitätsklinikum Münster, based on a screening questionnaire and a diagnostic semi-structured interview (Stollhoff et al.,
<xref rid="B45" ref-type="bibr">2011</xref>
). All prosopagnosics were tested at the Max Planck Institute for Biological Cybernetics in Tübingen, Germany and compensated with 8 Euro per hour plus travel expenses.</p>
</sec>
<sec>
<title>Korean participants</title>
<p>The Korean participants were compensated with 30,000 Won (approximately 20 Euro) for the whole experiment. All participants of this group were tested at Korea University in Seoul, South Korea. The Koreans did not undergo a diagnostic interview but were asked whether they had noticeable problems recognizing the faces of friends and family members. None of the participants reported face recognition impairments.</p>
</sec>
<sec>
<title>German control participants</title>
<p>The German control participants were compensated with 8 Euro per hour. All participants of this group were tested at the Max Planck Institute for Biological Cybernetics in Tübingen, Germany. The Germans did not undergo a diagnostic interview but were asked whether they had noticeable problems recognizing the faces of friends and family members. None of the participants reported face recognition impairments.</p>
</sec>
</sec>
<sec>
<title>Analysis</title>
<p>Many studies found faster reaction times for Asian compared with Caucasian participants regardless of the task (Rushton and Jensen,
<xref rid="B44" ref-type="bibr">2005</xref>
). We made similar observations in our study; hence we do not compare reaction times between our Asian and Caucasian participants, as such a comparison would not yield interpretable results. Nevertheless, we compared the reaction times of prosopagnosics and Germans in the object recognition task, as participants in both groups share the same ethnicity.</p>
<p>All analyses were conducted with Matlab2011b (Natick, MA) and IBM SPSS Statistics Version 20 (Armonk, NY). The dependent variables analyzed in each test are described in the respective sections.</p>
<p>We report effect sizes as partial eta square (η
<sup>2</sup>
<sub>
<italic>p</italic>
</sub>
). For One-Way ANOVAs partial eta square and eta square (η
<sup>2</sup>
) are the same. For our Two-Way ANOVAs partial eta square differs from eta square, therefore we give both values.</p>
</sec>
<sec>
<title>Apparatus</title>
<p>All participants were tested individually. For prosopagnosics and Germans the experiments were run on a desktop PC with a 24″ screen; Koreans performed the tests on a MacBook Pro with a 17″ screen. The CFMT is JavaScript-based; Matlab and Psychtoolbox were used to run the other experiments. Participants were seated at a viewing distance of approximately 60 cm from the screen.</p>
</sec>
<sec>
<title>Procedure</title>
<p>The procedure was approved by the local IRB. All participants completed three tests: (1) the CFMT, (2) a similarity-rating task on faces differing in features or configuration, and (3) an object recognition task. All tests were conducted in the same order to obtain comparable results for each participant. Participants could take self-paced breaks between experiments.</p>
</sec>
</sec>
<sec>
<title>Test battery</title>
<sec>
<title>Cambridge face memory test</title>
<sec>
<title>Motivation</title>
<p>The CFMT was created and provided by Bradley Duchaine and Ken Nakayama (Duchaine and Nakayama,
<xref rid="B14" ref-type="bibr">2006</xref>
). This test assesses recognition abilities using unfamiliar faces in a 3-alternative-forced-choice task. It has been widely used in recent years in studies of CP and of the ORE. Therefore, we used it here as an objective measure of face recognition abilities.</p>
</sec>
<sec>
<title>Stimuli</title>
<p>As this test has been described in detail in the original study, only a short description is given here. Pictures of the faces of young male Caucasians shown under three different viewpoints and under different lighting and noise conditions were used in recognition tests of increasing difficulty. For a complete description of the test see the original study (Duchaine and Nakayama,
<xref rid="B14" ref-type="bibr">2006</xref>
).</p>
</sec>
<sec>
<title>Task</title>
<p>First the participants were familiarized with six target faces which they then had to recognize among distractors in a 3-alternative-forced-choice task with tests of increasing difficulty. No feedback was given. The test can be run in an upright and inverted condition. We only used the upright condition.</p>
</sec>
<sec>
<title>Results</title>
<p>Percent correct recognition was calculated for each participant; the means and standard errors of the three participant groups are depicted in Figure
<xref ref-type="fig" rid="F1">1</xref>
.</p>
<fig id="F1" position="float">
<label>Figure 1</label>
<caption>
<p>
<bold>Performance of the 3 participant groups in the CFMT</bold>
. Data are displayed as mean percentage correct responses. Error bars: SEM.</p>
</caption>
<graphic xlink:href="fnhum-08-00759-g0001"></graphic>
</fig>
<p>Germans (mean percent correct = 82.3%,
<italic>SD</italic>
= 8.3) performed significantly better than Koreans (mean = 73.7%,
<italic>SD</italic>
= 8.8), who performed significantly better than prosopagnosics (mean = 55.2%,
<italic>SD</italic>
= 5.9) [One-Way ANOVA:
<italic>F</italic>
<sub>(2, 62)</sub>
= 67.34,
<italic>p</italic>
< 0.001, η
<sup>2</sup>
<sub>
<italic>p</italic>
</sub>
= 0.69, with Tukey HSD
<italic>post-hoc</italic>
tests: all comparisons
<italic>p</italic>
≤ 0.002].</p>
</sec>
<sec>
<title>Discussion</title>
<p>As predicted, the Koreans and prosopagnosics performed significantly worse than the Germans. Furthermore, the prosopagnosics performed significantly worse than the Koreans. The significant performance difference between Germans and Koreans shows an own-race advantage for the Germans. We assume that the reduced performance of the Koreans is due to the ORE; however, as we did not perform the reverse test with Asian faces, we cannot completely exclude an alternative cause for this difference between participant groups. We consider this very unlikely, because the CFMT and its Chinese version (comprising Chinese faces depicted in a similar way and format as the faces in the CFMT; published only after our data acquisition) have already been used successfully to measure the ORE in a complete cross-over design with Caucasian and Asian participants (McKone et al.,
<xref rid="B35" ref-type="bibr">2012</xref>
).</p>
<p>Our finding that Koreans showed significantly better recognition performance than prosopagnosics does not allow us to exclude that the same mechanisms for processing Caucasian faces are affected in these groups. But we can infer that CP has a stronger impact on face recognition abilities than the ORE.</p>
</sec>
</sec>
<sec>
<title>Similarity rating of faces differing in features or configuration</title>
<sec>
<title>Motivation</title>
<p>This test was conducted to measure in what way and to what extent the retrieval of featural and configural information is disturbed in other-race observers and in prosopagnosics. From this pattern we aim to infer whether we can exclude that the same mechanisms for processing Caucasian faces are affected in CP and the ORE. As discussed in the introduction, previous studies found disturbances in holistic processing (e.g., Avidan et al.,
<xref rid="B3" ref-type="bibr">2011</xref>
for CP; Rhodes et al.,
<xref rid="B39" ref-type="bibr">1989</xref>
; Michel et al.,
<xref rid="B37" ref-type="bibr">2006</xref>
for the ORE), and disruptions of configural and featural processing (e.g., Lobmaier et al.,
<xref rid="B29" ref-type="bibr">2010</xref>
for CP; Hayward et al.,
<xref rid="B22" ref-type="bibr">2008</xref>
for the ORE). However, other studies using different tasks and stimuli found only minor or no impairments in configural and featural processing (e.g., Le Grand et al.,
<xref rid="B28" ref-type="bibr">2006</xref>
for CP; Mondloch et al.,
<xref rid="B38" ref-type="bibr">2010</xref>
for the ORE). The pattern of findings obtained so far was too inconsistent and not detailed enough to draw conclusions regarding our research question. To address this controversy and to obtain more detailed data, we assessed fine-grained sensitivity to featural and configural facial information and compared the effects of CP and the ORE.</p>
</sec>
<sec>
<title>Stimulus creation</title>
<p>We generated eight natural-looking face sets with gradual, small-step changes in features and configuration to determine the degree of sensitivity to featural and configural facial information, without resorting to unnatural modifications (like blurring or scrambling). The faces in each of our stimulus sets differ only in internal features and their configuration. Skin texture and outer face shape were held constant to allow testing purely for sensitivity to internal features and configuration. The face stimuli contain no extra-facial cues (no hair, makeup, clothing, or jewelry).</p>
<p>The stimuli were created using faces from our in-house 3D face database (Troje and Bülthoff,
<xref rid="B47" ref-type="bibr">1996</xref>
). The faces are 3D laser scans of the faces of real persons. A morphable model allows us to isolate and exchange the four main face regions between any faces of the database (Vetter and Blanz,
<xref rid="B48" ref-type="bibr">1999</xref>
). Those four regions are: both eyes (including eyebrows), the nose, the mouth, and the outer face shape (Figure
<xref ref-type="fig" rid="F2">2</xref>
). For these regions, the texture (i.e., “skin”) and/or the shape can be morphed as well as exchanged between all faces. Additionally, the regions can be shifted within each face (e.g., moving the eyes up or further apart).</p>
<fig id="F2" position="float">
<label>Figure 2</label>
<caption>
<p>
<bold>Illustration of the editable regions of the 3D faces of our in-house face database (Troje and Bülthoff,
<xref rid="B47" ref-type="bibr">1996</xref>
)</bold>
.</p>
</caption>
<graphic xlink:href="fnhum-08-00759-g0002"></graphic>
</fig>
<p>We chose pairs of faces from the database such that the faces in each pair differed largely from each other in both configuration and features. Previous studies that have used faces differing in either features or configuration have shown that participants are more sensitive to featural than to configural changes (Freire et al.,
<xref rid="B18" ref-type="bibr">2000</xref>
; Goffaux et al.,
<xref rid="B20" ref-type="bibr">2005</xref>
; Maurer et al.,
<xref rid="B32" ref-type="bibr">2007</xref>
; Rotshtein et al.,
<xref rid="B43" ref-type="bibr">2007</xref>
). For this reason we further increased the configural differences of the face pairs by shifting the features slightly (e.g., we moved the eyes closer together in the face that already had more closely spaced eyes, and moved the eyes further apart in the other face of the pair). This was done to create the best conditions for measuring configural sensitivity, one main focus of our study, while remaining within natural limits. A pilot study, described further below, confirmed that the modified faces are still perceived as natural.</p>
<p>The outer face shape and skin texture of the modified faces were averaged within each pair and applied to both modified faces to create two faces A and B (Figure
<xref ref-type="fig" rid="F3">3B</xref>
). A and B exhibit different features and inner configuration but identical averaged outer face shape and skin texture. Based on the faces A and B we then generated two more faces by creating a face X with features of face A and the configuration of face B (i.e., the features of face A were moved to the feature locations of face B) and vice versa for face Y (see scheme in Figure
<xref ref-type="fig" rid="F3">3A</xref>
; see actual face stimuli in Figure
<xref ref-type="fig" rid="F3">3B</xref>
). By morphing between these four faces in 25% increments we generated a whole set of faces parametrically differing from each other in features (Figure
<xref ref-type="fig" rid="F3">3C</xref>
, horizontal axes) or configuration (Figure
<xref ref-type="fig" rid="F3">3C</xref>
, vertical axes). We created eight different sets in the same way as the one depicted in Figure
<xref ref-type="fig" rid="F3">3C</xref>
, one for each of eight pairs of original faces of our database (note: each original face was used only in one set).</p>
<fig id="F3" position="float">
<label>Figure 3</label>
<caption>
<p>
<bold>(A)</bold>
Schematic four faces which either differ in features (horizontal) or configuration (vertical).
<bold>(B)</bold>
The same design is applied to real faces of our face database.
<bold>(C)</bold>
Morphing between the four faces in
<bold>(B)</bold>
gives a set. Morphing steps between each row and column are equally spaced with 25%.</p>
</caption>
<graphic xlink:href="fnhum-08-00759-g0003"></graphic>
</fig>
<p>To ensure that the faces we created appeared just as natural as the original faces, we ran a pilot study in which participants rated the naturalness of the modified and original faces without any knowledge about the facial modifications. The modified faces we used for our study showed no significant difference in perceived naturalness compared with the original scanned faces of real people (Esins et al.,
<xref rid="B16" ref-type="bibr">2011</xref>
).</p>
<p>Further, to verify that featural and configural modifications introduced similar amounts of change in the pictures, we calculated the mean pixelwise image differences between the stimuli with the greatest configural and featural parametrical differences per set. We took the two endpoint faces of the vertical bar (see Figure
<xref ref-type="fig" rid="F4">4</xref>
) and calculated their Euclidean distance for each pixel; we did the same for the two endpoint faces of the horizontal bar. Then we calculated the average pixel distance for each of the two comparisons
<xref ref-type="fn" rid="fn0001">
<sup>1</sup>
</xref>
. With this method we obtained mean Euclidean pixel distances for configural and featural changes, for each of the eight created sets. A Wilcoxon signed rank test run on all eight mean distances for the featural changes vs. the eight configural change distances was not significant (
<italic>p</italic>
= 0.31), supporting the idea that featural and configural face modifications introduced similar amounts of computational change in the pictures.</p>
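The image-difference check described above can be sketched as follows. This is a hedged illustration using synthetic random images in place of the real endpoint faces; `mean_pixel_distance` is our own helper name, and the comparison uses `scipy.stats.wilcoxon` for the paired signed-rank test:

```python
# Sketch of the pixelwise image-difference check (synthetic stand-in data).
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)

def mean_pixel_distance(img_a, img_b):
    """Mean per-pixel Euclidean distance between two same-sized RGB images."""
    diff = img_a.astype(float) - img_b.astype(float)
    return np.sqrt((diff ** 2).sum(axis=-1)).mean()

# One featural and one configural endpoint-pair distance per set (8 sets);
# random images stand in for the real endpoint faces of each bar.
featural, configural = [], []
for _ in range(8):
    a, b, c, d = (rng.random((64, 64, 3)) for _ in range(4))
    featural.append(mean_pixel_distance(a, b))
    configural.append(mean_pixel_distance(c, d))

# Paired comparison of the 8 featural vs. 8 configural mean distances.
stat, p = wilcoxon(featural, configural)
print(round(p, 3))
```

A non-significant result, as reported above, indicates that the two modification types introduced comparable amounts of computational change.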
<fig id="F4" position="float">
<label>Figure 4</label>
<caption>
<p>
<bold>One of the eight sets of face stimuli used in the similarity rating experiment</bold>
. Only faces of the central horizontal and vertical bars were used for the experiments. The endpoint faces were used to calculate mean pixelwise image differences between the stimuli.</p>
</caption>
<graphic xlink:href="fnhum-08-00759-g0004"></graphic>
</fig>
</sec>
<sec>
<title>Task</title>
<p>Participants had to rate the pair-wise similarity of faces originating from the same set. Due to time limitations we used only nine test faces per set: the ones located on the central horizontal bar (differing in features) and the central vertical bar (differing in configuration) of each set (see Figure
<xref ref-type="fig" rid="F4">4</xref>
). Each face was compared with the eight other faces on the central bars of the same set and with itself. Trials in which faces differed in both features and configuration were considered filler trials to avoid participants realizing the nature of the stimuli and were omitted from the analysis. Thus, for each of the eight sets, we analyzed 29 pair-wise similarity ratings: nine identical face comparisons (100% parametrical similarity), eight face comparisons with 75% parametrical similarity (two faces next to each other in the set), six face comparisons with 50% parametrical similarity, four face comparisons with 25% parametrical similarity, and two face comparisons with 0% parametrical similarity (comparison of the extreme faces of the same bar). In total, there were 232 comparisons in this experiment. The order of comparisons was randomized within and across sets for each participant.</p>
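These counts follow directly from the geometry of the two central bars (five faces each, sharing the center face). A short bookkeeping sketch, written by us for illustration rather than taken from the experiment code, reproduces them:

```python
# Reconstructing the analyzed comparison counts for one stimulus set.
# Each set has a "featural" bar and a "configural" bar of 5 faces,
# positions 0..4 in 25% steps, sharing the center face.
from itertools import combinations

n_per_bar = 5
# identical-face comparisons: one per distinct face (5 + 5 - shared center)
identical = 2 * n_per_bar - 1

# same-bar pairs, grouped by parametric distance (1 step = 25% dissimilarity)
counts = {0: identical}
for i, j in combinations(range(n_per_bar), 2):
    d = j - i
    counts[d] = counts.get(d, 0) + 2  # the pair occurs on both bars

per_set = sum(counts.values())
print(counts)                 # {0: 9, 1: 8, 2: 6, 3: 4, 4: 2}
print(per_set, 8 * per_set)   # 29 232
```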
<p>Participants had to rate the perceived similarity on a Likert scale from 1 (little similarity) to 7 (high similarity/identical) and were told to use the whole range of ratings over the whole experiment. The participants saw the first face for 2000 ms, then a pixelated face mask for 800 ms, and then the second face for another 2000 ms. Subsequently, the Likert scale appeared on the screen: here participants marked their rating by moving a slider via the arrow keys on the keyboard (Figure
<xref ref-type="fig" rid="F5">5</xref>
). The start position of the slider was randomized. There was no time restriction for entering the answer; however, participants were told to rate the similarity without deliberating too long. After every 20 comparisons there was a self-paced pause.</p>
<fig id="F5" position="float">
<label>Figure 5</label>
<caption>
<p>
<bold>Example of one trial of the similarity rating task</bold>
. Both faces in a trial always belong to the same set.</p>
</caption>
<graphic xlink:href="fnhum-08-00759-g0005"></graphic>
</fig>
<p>The face and mask stimuli had a size of approximately 5.7° horizontal and 8.6° vertical visual angle. To prevent pixel matching, the faces were presented at different random positions on the screen within a viewing angle of about 7.6° horizontally and 10.5° vertically.</p>
</sec>
<sec>
<title>Analysis</title>
<p>For every participant we calculated the mean similarity ratings across all eight sets at each of the five levels of parametric similarity (100, 75, 50, 25, 0%). Example data from one German participant are given in Figure
<xref ref-type="fig" rid="F6">6</xref>
. The black triangles show the average rating of face pairs of all sets differing in features, sorted by parametrical similarity. The gray squares show the same for configural changes. As expected, this participant gave similarity ratings close to 7 (high similarity) for very similar faces.</p>
<fig id="F6" position="float">
<label>Figure 6</label>
<caption>
<p>
<bold>Example results of one German participant in the similarity rating task</bold>
. For each of the five similarity levels, the average ratings across all face comparisons of all sets were calculated. The sensitivity ratings for changes in features (black triangles) and configuration (gray squares) are shown separately. The error bars depict standard error. A linear regression (
<italic>y</italic>
= β
<italic>x</italic>
+ ε) was fitted to both curves individually (dotted black and dotted gray, respectively). The slopes (β) serve as measure of the sensitivity to features and configuration.</p>
</caption>
<graphic xlink:href="fnhum-08-00759-g0006"></graphic>
</fig>
<p>A linear regression (
<italic>y</italic>
= β
<italic>x</italic>
+ ε) was fitted to these mean similarity ratings (dotted black and gray lines in Figure
<xref ref-type="fig" rid="F6">6</xref>
). The steepness of the slopes (β) was then used as a measure of sensitivity: steeper slopes indicate more strongly perceived configural or featural changes. For every participant we calculated one regression slope for their featural and one for their configural ratings. The mean and the standard error of the sensitivity β per participant group are illustrated in Figure
<xref ref-type="fig" rid="F7">7A</xref>
.</p>
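The per-participant sensitivity measure can be sketched as below. The mean ratings are invented for illustration (they are not data from the study), and the linear fit uses `numpy.polyfit`:

```python
# Sketch of the sensitivity measure: slope of y = beta * x + eps fitted
# to mean similarity ratings (1-7 scale) at the five similarity levels.
import numpy as np

similarity_levels = np.array([0.0, 0.25, 0.50, 0.75, 1.0])  # parametric similarity

def sensitivity(mean_ratings):
    """Slope beta of the linear fit; a steeper slope = higher sensitivity."""
    beta, _intercept = np.polyfit(similarity_levels, mean_ratings, deg=1)
    return beta

# Hypothetical mean ratings of one participant at the five levels:
featural_ratings = np.array([1.8, 3.0, 4.1, 5.6, 6.8])
configural_ratings = np.array([3.2, 4.0, 4.9, 5.7, 6.7])

beta_feat = sensitivity(featural_ratings)
beta_conf = sensitivity(configural_ratings)
featural_advantage = beta_feat - beta_conf  # the quantity plotted per group
print(beta_feat > beta_conf)  # True: featural changes perceived more strongly
```

One slope per change type is computed for every participant, and the group means of these slopes are then compared.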
<fig id="F7" position="float">
<label>Figure 7</label>
<caption>
<p>
<bold>Results of the similarity rating experiment</bold>
.
<bold>(A)</bold>
Mean values of the slopes (β) of the “feature” and “configuration” regression lines for each group. Error bars: SEM.
<bold>(B)</bold>
“Featural advantage”: mean difference between featural and configural regression slopes (β), calculated for each participant. Error bars: SEM.</p>
</caption>
<graphic xlink:href="fnhum-08-00759-g0007"></graphic>
</fig>
<p>To compare performance data, we took a closer look at the pattern of sensitivity to features and configuration: For each individual participant, we subtracted their configural sensitivity from their featural sensitivity. We refer to this difference as ‘featural advantage’. The illustration in Figure
<xref ref-type="fig" rid="F7">7B</xref>
shows the mean of the calculated differences, i.e., the mean of the featural advantage for each group.</p>
</sec>
<sec>
<title>Results</title>
<p>A 2 × 3 ANOVA on the regression slopes β as a measure of sensitivity showed that the main effect of change type (configural, featural) was significant [
<italic>F</italic>
<sub>(1, 60)</sub>
= 233.7,
<italic>p</italic>
< 0.001, η
<sup>2</sup>
= 0.46, η
<sup>2</sup>
<sub>
<italic>p</italic>
</sub>
= 0.796]. All participants showed a greater sensitivity to changes in features than to changes in configurations. The main effect of participant group (prosopagnosics, Koreans, Germans) was also significant [
<italic>F</italic>
<sub>(2, 60)</sub>
= 6.46,
<italic>p</italic>
= 0.003, η
<sup>2</sup>
= 0.07, η
<sup>2</sup>
<sub>
<italic>p</italic>
</sub>
= 0.18]. The interaction between change type and participant group was significant, too [
<italic>F</italic>
<sub>(2, 60)</sub>
= 5.48,
<italic>p</italic>
= 0.007, η
<sup>2</sup>
= 0.02, η
<sup>2</sup>
<sub>
<italic>p</italic>
</sub>
= 0.15].</p>
<p>We analyzed the simple effects for both change types (configural, featural): the group difference in sensitivity to features approached significance [One-Way ANOVA
<italic>F</italic>
<sub>(2, 62)</sub>
= 3.12,
<italic>p</italic>
= 0.0515, η
<sup>2</sup>
<sub>
<italic>p</italic>
</sub>
= 0.09], which was mainly driven by the difference between prosopagnosics and Germans (Tukey HSD
<italic>post-hoc</italic>
test,
<italic>p</italic>
= 0.051, both other differences
<italic>p</italic>
> 0.17). For configural changes there were significant group differences in sensitivity [One-Way ANOVA
<italic>F</italic>
<sub>(2, 62)</sub>
= 9.11,
<italic>p</italic>
< 0.001, η
<sup>2</sup>
<sub>
<italic>p</italic>
</sub>
= 0.23] with prosopagnosics performing significantly differently from Koreans and Germans (Tukey HSD
<italic>post-hoc</italic>
test,
<italic>p</italic>
= 0.001 and
<italic>p</italic>
= 0.003, respectively. Tukey HSD
<italic>post-hoc</italic>
test for Koreans vs. Germans
<italic>p</italic>
= 0.91).</p>
<p>For analysis of the featural advantage (Figure
<xref ref-type="fig" rid="F7">7B</xref>
) we conducted a One-Way ANOVA to further examine the significant interaction between participant group and change type. The ANOVA showed significant differences between the three groups [
<italic>F</italic>
<sub>(2, 62)</sub>
= 5.48,
<italic>p</italic>
= 0.007, η
<sup>2</sup>
<sub>
<italic>p</italic>
</sub>
= 0.15], which are the same values as for the interaction in the 2 × 3 ANOVA, as expected. The Tukey HSD
<italic>post-hoc</italic>
tests revealed significant differences in the featural advantage between Koreans and prosopagnosics (
<italic>p</italic>
= 0.005), a difference approaching significance for the Koreans vs. the Germans (
<italic>p</italic>
= 0.091) and no difference for prosopagnosics vs. Germans (
<italic>p</italic>
= 0.51).</p>
</sec>
<sec>
<title>Discussion</title>
<p>There is a clear difference between Koreans and prosopagnosics in sensitivity to the features and configuration of our stimulus faces: while both groups show about the same sensitivity to featural changes, prosopagnosics have a significantly reduced sensitivity to configuration compared with Koreans (and Germans). The featural advantage was also significantly smaller for Koreans than for prosopagnosics. These differences in absolute sensitivity to configural and featural changes, together with the differences in featural advantage, suggest that Korean and prosopagnosic participants do not perceive our Caucasian face stimuli in the same way. Because CP and the ORE show parallels in disrupting featural and configural face processing, we had hypothesized that the same mechanisms are disturbed in both cases. This would result in similarly reduced sensitivities to features and configuration for participants affected by CP or the ORE. But as Korean and prosopagnosic participants show different patterns of disturbance of their sensitivity, we can reject this hypothesis and conclude that different underlying mechanisms are affected.</p>
<p>Our similarity rating task also allowed us to obtain a more detailed picture of the sensitivities to featural and configural information in CP and the ORE. For the prosopagnosics compared with the Germans, the difference between both groups approached significance for sensitivity to features and reached significance for sensitivity to configuration (Figure
<xref ref-type="fig" rid="F7">7A</xref>
). Our results show a marginally significant difference between prosopagnosics and Germans in featural sensitivity (
<italic>p</italic>
= 0.051). These results bridge the gap between two studies reporting conflicting results using the so-called “Jane” stimuli (Le Grand et al.,
<xref rid="B28" ref-type="bibr">2006</xref>
) and “Alfred” stimuli (Yovel and Kanwisher,
<xref rid="B51" ref-type="bibr">2004</xref>
; Yovel and Duchaine,
<xref rid="B50" ref-type="bibr">2006</xref>
), which, like our stimuli, also differ in features and configuration (and contour for the “Jane” stimuli). Only a minority of the prosopagnosic participants performed significantly worse than controls on the “Jane” stimuli differing in features and configuration (Le Grand et al.,
<xref rid="B28" ref-type="bibr">2006</xref>
). Based on the data by Le Grand and colleagues given in Table 4 of that study, comparing prosopagnosics and controls, one can estimate that there was a significant performance difference for the configural but not for the featural modifications. Yovel and colleagues also used the “Jane” stimuli with prosopagnosics and controls and confirmed the significant performance difference between groups for configural modifications and non-significant difference for featural modifications (Yovel and Duchaine,
<xref rid="B50" ref-type="bibr">2006</xref>
). However, they challenged the “Jane” stimuli for including obvious brightness differences (due to makeup) for the featural modifications. For their own “Alfred” stimuli they found significantly reduced sensitivity to featural and configural modifications for prosopagnosic participants (Yovel and Duchaine,
<xref rid="B50" ref-type="bibr">2006</xref>
; Duchaine et al.,
<xref rid="B15" ref-type="bibr">2007</xref>
). In turn, their “Alfred” stimuli were criticized because their configural modifications exceed natural limits (as discussed in Maurer et al.,
<xref rid="B32" ref-type="bibr">2007</xref>
). Our newly created stimulus set contains no extra-facial cues (no hair, makeup, glasses, or beard) and exhibits configural changes that were verified to lie within natural limits. With these well-controlled stimuli, our results suggest that the retrieval of configural information from a face is indeed impaired in prosopagnosics compared with Germans. For the sensitivity to features, our results lie between the non-significant results obtained with the “Jane” stimuli and the significant results obtained with the “Alfred” faces. Therefore, we conclude that the retrieval of featural information might be impaired in prosopagnosics, although to a lesser degree than the retrieval of configural information.</p>
<p>We found no significant difference in sensitivity to featural or configural information between the Korean and German groups. Our results are in accordance with a previous study, also using the “Jane” stimuli, that found no differences between Caucasian and Asian participants (Mondloch et al.,
<xref rid="B38" ref-type="bibr">2010</xref>
). In contrast, other studies found an own-race advantage for both configuration and feature changes (Rhodes et al.,
<xref rid="B40" ref-type="bibr">2006</xref>
; Hayward et al.,
<xref rid="B22" ref-type="bibr">2008</xref>
). However, we note that the stimuli used in those latter studies involved different kinds of changes than those used in our present study (features and configuration were changed by blurring and scrambling (Hayward et al.,
<xref rid="B22" ref-type="bibr">2008</xref>
) or features were changed through changes in color (Rhodes et al.,
<xref rid="B40" ref-type="bibr">2006</xref>
)), which opens the possibility that the ORE affects the perception of these different kinds of stimulus modifications differently. Nevertheless, as our stimuli contain more natural and ecologically valid modifications of faces, we believe that our results better reflect participants' face perception. Even though we found no significant differences in sensitivity to featural or configural information between Germans and Koreans, the featural advantage tended to be larger for the Germans than for the Koreans. Although this difference only approached significance, we offer two explanations for this pattern. The first is that, due to the ORE, the sensitivity pattern is altered in our Korean participants: the ORE could reflect Koreans' lower expertise with other-race facial features, whereas their configural processing stays unaffected when viewing other-race faces. The second is that the effect is due to cultural differences. Studies have shown that Western Caucasian and Eastern Asian participants focus on different areas of faces and show dissimilar fixation patterns when looking at faces (Blais et al.,
<xref rid="B8" ref-type="bibr">2008</xref>
). It might be that German and Korean participants employ different strategies when comparing faces in our task, which could have caused the effects we found. In accordance with this hypothesis, a study using Navon figures reported that Eastern Asian participants focus more on global configuration compared with Western Caucasian participants (McKone et al.,
<xref rid="B33" ref-type="bibr">2010</xref>
). By analogy, a greater focus on configurations in faces could explain the reduced featural advantage we observed in the Korean group.</p>
<p>Furthermore, our results show that all groups, regardless of their race and face recognition abilities, were more sensitive to differences in the featural than in the configural dimension of our stimulus set (Figure
<xref ref-type="fig" rid="F7">7A</xref>
). The presence of a featural advantage is in accordance with findings of previous studies using faces modified within natural limits in their configuration and features, where participants showed a higher sensitivity for featural changes as well (Freire et al.,
<xref rid="B18" ref-type="bibr">2000</xref>
; Goffaux et al.,
<xref rid="B20" ref-type="bibr">2005</xref>
; Maurer et al.,
<xref rid="B32" ref-type="bibr">2007</xref>
; Rotshtein et al.,
<xref rid="B43" ref-type="bibr">2007</xref>
). Even though for the “Alfred” stimuli similar sensitivities to featural and configural modifications were found by Yovel and Kanwisher (
<xref rid="B51" ref-type="bibr">2004</xref>
), their result should be regarded with caution in view of the unnatural configural modifications of their face stimuli (as discussed in Maurer et al.,
<xref rid="B32" ref-type="bibr">2007</xref>
). In contrast, we took care that our face stimuli always looked natural, and pixelwise analyses of our stimuli, as described earlier, revealed no differences in induced image changes between the featural and configural dimensions. In other words, our stimuli exhibit the same pixelwise variation for featural and configural changes. The fact that observers nevertheless show a featural advantage suggests that humans are more sensitive to featural information and/or perceive these changes as more profound than changes in configuration. Another possible explanation is that comparing faces differing in configuration is harder than comparing faces differing in features. Additionally, differences between two naturally occurring faces are more likely to be featural than configural. Therefore, the human face discrimination system might have developed to be better at detecting featural than configural differences between faces.</p>
</sec>
</sec>
<sec>
<title>Object recognition</title>
<sec>
<title>Motivation</title>
<p>In this test we measured the influence of expertise on recognition performance. To this end, we compared recognition performance for objects for which one group has expertise (Caucasian faces) to recognition performance for objects for which no group has expertise (seashells and blue objects).</p>
</sec>
<sec>
<title>Stimulus creation</title>
<p>Three categories of stimuli were used: computer renditions of natural objects (seashells), artificial novel objects (blue objects, dissimilar to any known shapes) and faces. See Figure
<xref ref-type="fig" rid="F8">8</xref>
for examples of these three categories of objects. All objects and faces were full 3D models, allowing us to train and test participants on different viewpoints (see below). For each category we created four targets and twelve distractors.</p>
<fig id="F8" position="float">
<label>Figure 8</label>
<caption>
<p>
<bold>Exemplars of the stimuli used in the object recognition experiment</bold>
.</p>
</caption>
<graphic xlink:href="fnhum-08-00759-g0008"></graphic>
</fig>
<p>Sixteen synthetic seashells were taken from a previously created stimulus set (Gaißert et al.,
<xref rid="B19" ref-type="bibr">2010</xref>
). The shells were created using a mathematical model (Fowler et al.,
<xref rid="B17" ref-type="bibr">1992</xref>
) implemented in the software ShellyLib (
<ext-link ext-link-type="uri" xlink:href="http://www.shelly.de">www.shelly.de</ext-link>
). Care was taken to sample stimuli evenly across the parametrically defined stimulus space (see Gaißert et al.,
<xref rid="B19" ref-type="bibr">2010</xref>
for details).</p>
<p>The blue objects were created with 3D Studio Max by Christoph D. Dahl (unpublished work) and were novel to all participants. Differences between these objects are less obvious to a human observer, making recognition more difficult.</p>
<p>For the face stimuli, 16 male Caucasian faces were selected from the MPI 3D face database (Troje and Bülthoff,
<xref rid="B47" ref-type="bibr">1996</xref>
). The 16 faces were chosen to have as few salient distinctive features as possible (all were clean-shaven, had the same gaze direction, and showed no blemishes, moles, etc.).</p>
<p>None of the stimuli had been seen before by our participants. We created two sets of images for each stimulus category: frontal views for the learning phase, and stimuli rotated by 15 degrees to the right around the vertical axis (yaw) for the testing phase. The change between learning and testing was designed to prevent pixel matching of the stimuli.</p>
<p>All stimuli were shown at a viewing angle of approximately 9.5° horizontally and vertically.</p>
</sec>
<sec>
<title>Task</title>
<p>There was one block of trials per stimulus category, with the same procedure in all three blocks, as follows: During the learning phase, participants had to memorize four target exemplars depicted in frontal view. First, all four targets were shown together on the screen, then each of the four targets was shown one after the other, and finally all target exemplars were presented together again. Participants could control when to switch to the next screen via a button press. They were aware that if they switched to the next view they could not return to the previous one. No time restriction was applied. During testing, participants saw the images depicting the targets and distractors of the same category under a new orientation and performed an old-new-decision task by pressing buttons on a standard computer keyboard (old = left hand button press; new = right hand button press). Stimuli were presented for a duration of 2000 ms or until key press, whichever came first. The next image appeared as soon as an answer was entered.</p>
<p>Targets and distractors were presented in pseudo-randomized order: The testing was divided into three runs. Four targets and four distractors per category were shown in each run. While the targets were the same in each run, four new distractors were presented, such that all four targets were seen three times and each of the 12 distractors was seen only once. The order of the stimulus blocks (shells, faces then blue objects) was fixed to induce similar effects of tiredness in all participants. Participants took short self-paced breaks between blocks.</p>
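As an illustration (a minimal sketch, not the authors' actual experiment code; the stimulus labels are hypothetical), the pseudo-randomization described above — the same four targets in every run, each time mixed with four fresh distractors and shuffled — can be expressed in Python:

```python
import random

def make_test_order(targets, distractors, n_runs=3, seed=0):
    """Sketch of the pseudo-randomized test order: every run repeats
    all targets and adds a disjoint, previously unseen set of distractors."""
    rng = random.Random(seed)
    per_run = len(distractors) // n_runs
    runs = []
    for r in range(n_runs):
        fresh = distractors[r * per_run:(r + 1) * per_run]  # 4 new distractors
        trials = list(targets) + fresh
        rng.shuffle(trials)  # randomize target/distractor order within the run
        runs.append(trials)
    return runs

# hypothetical stimulus labels, for illustration only
targets = ["T1", "T2", "T3", "T4"]
distractors = [f"D{i}" for i in range(1, 13)]
runs = make_test_order(targets, distractors)
```

Each target is thus seen three times across the runs, while each of the 12 distractors appears exactly once, matching the design described above.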
<p>We kept the number of targets and distractors low, as performing tests with faces can be demotivating for prosopagnosics. We used the same number of stimuli in all stimulus categories to ensure comparability. The high similarity between the non-face objects was designed to avoid ceiling performance despite the low number of stimuli and to mimic the homogeneity of the face stimuli.</p>
</sec>
<sec>
<title>Analysis</title>
<p>The results were analyzed based on the dependent measure
<italic>d</italic>
′. The term
<italic>d</italic>
′ refers to signal-detection theory measures (Macmillan and Creelman,
<xref rid="B30" ref-type="bibr">2005</xref>
) and is an index of subjects' ability to discriminate between signal (target stimuli) and noise (distractors). The maximum possible
<italic>d</italic>
′ value in this experiment is 3.46 (this depends on the number of trials). A
<italic>d</italic>
′ of zero indicates chance discrimination performance, higher values indicate increasing ability to tell targets and distractors apart.</p>
</sec>
<sec>
<title>Results</title>
<p>For a summary analysis of the general influence of object category (faces, shells, blue objects) and participant group (prosopagnosics, Koreans, Germans) we ran a 3 × 3 ANOVA on the
<italic>d</italic>
′ values. The main effect of participant group was not significant [
<italic>F</italic>
<sub>(2, 60)</sub>
= 1.22,
<italic>p</italic>
= 0.303, η
<sup>2</sup>
= 0.009, η
<sup>2</sup>
<sub>
<italic>p</italic>
</sub>
= 0.04] but the main effect of object category was [
<italic>F</italic>
<sub>(2, 60)</sub>
= 145.54,
<italic>p</italic>
< 0.001, η
<sup>2</sup>
= 0.52, η
<sup>2</sup>
<sub>
<italic>p</italic>
</sub>
= 0.71], as well as the interaction between participant group and object category [
<italic>F</italic>
<sub>(4, 120)</sub>
= 7.14,
<italic>p</italic>
< 0.001, η
<sup>2</sup>
= 0.05, η
<sup>2</sup>
<sub>
<italic>p</italic>
</sub>
= 0.19]. Figure
<xref ref-type="fig" rid="F9">9</xref>
depicts the performance of all groups. The Germans and the Koreans recognized faces best, then shells, and the blue objects worst. This order differed for the prosopagnosics, who recognized shells best, followed by faces and then the blue objects.</p>
<fig id="F9" position="float">
<label>Figure 9</label>
<caption>
<p>
<bold>Performance of the three participant groups in the object recognition task</bold>
. Data are shown as mean
<italic>d</italic>
′ values. Error bars: SEM.</p>
</caption>
<graphic xlink:href="fnhum-08-00759-g0009"></graphic>
</fig>
<p>A One-Way ANOVA on the
<italic>d</italic>
′ values for each object category across participant groups revealed significant differences for the face stimuli:
<italic>F</italic>
<sub>(2, 62)</sub>
= 8.14,
<italic>p</italic>
= 0.001, η
<sup>2</sup>
<sub>
<italic>p</italic>
</sub>
= 0.21. A
<italic>post-hoc</italic>
analysis showed that prosopagnosics' performance was significantly different from that of the other two groups (Games-Howell test,
<italic>p</italic>
≤ 0.01 for prosopagnosics vs. Koreans and prosopagnosics vs. Germans). The other One-Way ANOVAs and
<italic>post-hoc</italic>
tests for shells and blue objects were not significant (all
<italic>p</italic>
s > 0.2).</p>
<p>We also compared the reaction times of Germans and prosopagnosics for the non-face object categories (shells, blue objects) with the Wilcoxon rank-sum test. We found no significant differences (
<italic>p</italic>
= 0.13 for shells,
<italic>p</italic>
= 0.31 for blue objects).</p>
</sec>
<sec>
<title>Discussion</title>
<p>As expected, no significant differences between groups were found for shells and blue objects. This can be explained by the fact that all participants were equally non-expert for these objects. Performance differed only for faces: the prosopagnosics, as non-experts for faces, recognized faces less well than the other two groups. Interestingly, the Koreans, also non-experts for our Caucasian stimuli, did not exhibit lower recognition performance than the Germans. An obvious reason for the absence of the ORE is the small number of targets to be memorized in this test: the task was thus likely too easy for all non-prosopagnosic participants. For the prosopagnosics, our results show that the task is difficult even with this small number of target faces. This confirms the results we observed in the CFMT, namely that CP has a stronger impact on face recognition abilities than the ORE.</p>
<p>We compared recognition performance for faces not with a single object category but with both an easy and a difficult object category, which reduces the risk of ceiling or floor effects. Germans and Koreans recognized the non-face objects less easily than the faces, probably because, even for Koreans, their expertise for faces exceeds their expertise for the visually similar non-face objects. For prosopagnosics, accuracy for faces lay between their performance for the easy and the difficult object category, indicating that the stimuli were not too easy to recognize.</p>
<p>Our findings confirm previous results indicating that, although some prosopagnosics might show object recognition deficits, those impairments are less severe than their face recognition deficits (Kress and Daum,
<xref rid="B27" ref-type="bibr">2003</xref>
; Le Grand et al.,
<xref rid="B28" ref-type="bibr">2006</xref>
). A further aspect of object recognition expertise worth exploring is reaction times. Behrmann and colleagues found that the object recognition deficit of their five prosopagnosic participants showed not in accuracy but in reaction times (Behrmann et al.,
<xref rid="B6" ref-type="bibr">2005</xref>
); and in a study by Duchaine and Nakayama (
<xref rid="B13" ref-type="bibr">2005</xref>
), many prosopagnosic participants exhibited longer reaction times rather than lower recognition accuracy compared with control participants: in most tasks, four of their seven prosopagnosic participants had reaction times more than 2 SD slower than the mean reaction time of the controls. We did not find slower reaction times for non-face object recognition in prosopagnosics compared with Germans. These results thus exclude a general recognition deficit in our prosopagnosics.</p>
</sec>
</sec>
</sec>
<sec>
<title>Correlations between tests</title>
<p>Given that we ran several face processing experiments with different tasks testing for different aspects of recognition, we also examined the degree of correlation between test performances. For this we calculated Pearson's correlations between task performances across participants of all groups (Table
<xref ref-type="table" rid="T2">2</xref>
).</p>
<table-wrap id="T2" position="float">
<label>Table 2</label>
<caption>
<p>
<bold>Pairwise correlations between test scores of all participants combined</bold>
.</p>
</caption>
<graphic xlink:href="fnhum-08-00759-i0001"></graphic>
<table-wrap-foot>
<p>Depicted are the correlation coefficients and, in parentheses, their p-values. Negative correlations are marked in red; significant correlations are written in bold. (CFMT, final score; Feat, sensitivity to featural changes in a face; Conf, sensitivity to configural changes in a face; Shells, Faces, Blue objects: d′
<italic>values for shells, faces and the blue objects in the object recognition task</italic>
.)</p>
</table-wrap-foot>
</table-wrap>
<p>Performance on all four face-related tasks [CFMT, sensitivity to features (Feat) and configuration (Conf), and the object recognition task with face stimuli (Faces)] correlated positively; these correlations were significant or approached significance. The effect sizes of these correlations (0.22 <
<italic>r</italic>
< 0.49) were medium and hence the proportions of shared variance (0.05 <
<italic>r</italic>
<sup>2</sup>
< 0.24) were rather small. Thus, although the tests probe different aspects of face perception (i.e., recognition performance, memory, and sensitivity to features and configuration), these aspects nevertheless appear to be partly interdependent.</p>
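As a reminder of the relation used throughout this section (a generic sketch, not the authors' code), the proportion of shared variance is simply the square of Pearson's r, so even the largest correlation reported here (r = 0.49) leaves about three quarters of the variance unexplained:

```python
import math

def pearson_r(x, y):
    """Plain Pearson correlation coefficient of two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# shared variance for the largest correlation reported above
r = 0.49
shared_variance = r ** 2   # ~0.24: roughly a quarter of the variance is shared
```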
<p>Surprisingly, there was another significant but negative correlation (with a rather small effect size): participants with a high sensitivity to the configuration of a face tended to perform poorly in the shell recognition task. The small proportion of shared variance of
<italic>r</italic>
<sup>2</sup>
= 0.09 led us to refrain from speculation about this relation.</p>
</sec>
<sec>
<title>General discussion</title>
<p>The combination of tasks used in this study tested various aspects of face and object recognition, which allowed us to directly compare the influence of CP and the ORE. Our hypothesis, based on previous findings, was that the same underlying mechanisms might be affected in CP and the ORE. While we could disprove this hypothesis (discussed in detail below), we confirmed results of previous studies and, importantly, gained new insights into the similarities and differences between these two impairments of face recognition.</p>
<p>First, we were able to replicate the findings that congenital prosopagnosics exhibit face recognition deficits but no object recognition deficits (Le Grand et al.,
<xref rid="B28" ref-type="bibr">2006</xref>
). Second, we were able to replicate the ORE with our Korean participants in the CFMT. Interestingly, our results differ somewhat from those of McKone et al. (
<xref rid="B35" ref-type="bibr">2012</xref>
), who found only a trend toward a performance difference between their Asian and Caucasian participants on the original CFMT. A possible explanation for this discrepancy is that their Asian participants may have had more experience with Caucasian faces because they were overseas students living in Australia at the time of testing. Our Asian participants were tested in Korea and thus were likely to have less experience with Caucasian faces. Third, our experiment testing sensitivity to featural and configural changes within a face resolves discrepancies between studies testing prosopagnosics' sensitivity to featural and configural facial information (Le Grand et al.,
<xref rid="B28" ref-type="bibr">2006</xref>
; Yovel and Duchaine,
<xref rid="B50" ref-type="bibr">2006</xref>
). Our results, in the context of previous studies, show that, compared with German controls, prosopagnosics exhibit impaired sensitivity to configural information and possibly, though to a lesser extent, to featural information of a face.</p>
<p>Importantly, besides these confirmations of previous findings, we report the new finding that sensitivities to the features and configuration of a face differ between Korean and prosopagnosic participants. For both groups, the observed sensitivity to featural changes in a face was about the same. The Koreans, however, were better than the prosopagnosics (and as good as the Germans) at detecting fine changes in the configural information of a face. When comparing CP with the ORE, we asked whether they derive from a disturbance of the same underlying mechanisms. Our results indicate that this is not the case: in particular, the different patterns of absolute sensitivity to configural and featural changes in prosopagnosic and other-race observers are a strong indicator that CP and the ORE impair face recognition differently. As we used the same face stimuli to test all participant groups, our results indicate that lacking expertise with a certain face group does not impair configural processing of those faces (Korean group), while CP does (prosopagnosic group). Even though we cannot explain what exactly causes this difference, these results clearly show that different mechanisms underlie the two impairments. Therefore, we are not “prosopagnosic for other-race faces” (see also Wang et al.,
<xref rid="B49" ref-type="bibr">2009</xref>
).</p>
<p>Our second main finding is that face recognition performance is more strongly affected by CP than by the ORE: our prosopagnosics performed significantly worse than the Koreans in all face recognition tasks. A possible explanation is that existing expertise with same-race faces can generally be applied to the recognition of untrained other-race faces, whereas no such expertise exists in CP (Carbon et al.,
<xref rid="B9" ref-type="bibr">2007</xref>
).</p>
<p>The findings of our test battery also have further implications for the general understanding of face perception and face processing. First, we find that better configural sensitivity relates to better face recognition ability: Koreans and Germans performed significantly better than the prosopagnosics in the general face recognition task (the Cambridge Face Memory Test) and at the same time showed a significantly higher sensitivity to configural changes in our second test. So far, the importance of configural processing for holistic processing had only been shown by disrupting configural information, e.g., through the inversion effect (Freire et al.,
<xref rid="B18" ref-type="bibr">2000</xref>
). Our finding is an important result that allows us to gain further insight into which aspects of face recognition relate to being a good face recognizer. When correlating performance in the CFMT with sensitivity to configural changes across all participants, we obtained a significant, medium-sized proportion of shared variance of
<italic>r</italic>
<sup>2</sup>
= 0.24 (which is larger than the proportion of shared variance of
<italic>r</italic>
<sup>2</sup>
= 0.09 between performance in the CFMT and sensitivity to featural changes). Until now, studies looking for processes related to face recognition performance have mostly correlated it with holistic processing in general (e.g., performance in the composite face task or the part-whole face task). Different proportions of shared variance were found: either zero (
<italic>r</italic>
<sup>2</sup>
= 0.003, Konar et al.,
<xref rid="B26" ref-type="bibr">2010</xref>
), or medium (
<italic>r</italic>
<sup>2</sup>
= 0.16, Richler et al.,
<xref rid="B41" ref-type="bibr">2011</xref>
), or similar to our value (
<italic>r</italic>
<sup>2</sup>
= 0.21, DeGutis et al.,
<xref rid="B12" ref-type="bibr">2013</xref>
). The range of results in these studies might be explained by the different measures used for face recognition (CFMT vs. own identity recognition tasks) and holistic processing (composite face task vs. part-whole face task), and by different approaches to calculating the effect scores (subtraction vs. regression scores, and partial vs. complete composite face design). Whether general problems in processing faces result in an inability to see subtle differences in facial configuration, whether a reduced sensitivity to configuration results in impaired face recognition ability, or whether configural sensitivity and face recognition performance are both impaired by disruption of a common underlying process remains an open question. This is a decade-old and as-yet-unanswered issue (Barton et al.,
<xref rid="B5" ref-type="bibr">2003</xref>
) which we cannot address with our current data. Nevertheless, our results strengthen the hypothesis that configural processing is linked to face recognition ability; the proportions of shared variance, however, are only low to medium, which shows that configural sensitivity and/or holistic processing cannot by themselves explain face processing abilities.</p>
<p>The second implication of our findings for face processing stems from the fact that we find no difference in terms of sensitivity to facial features between Koreans and prosopagnosics. This suggests that this aspect is not crucial for determining face recognition abilities. This finding is supported by the low effect size found in correlating the sensitivity to featural changes with face recognition performance (tested either using the CFMT or the face recognition performance in the object recognition task): only a small portion of the variance of face recognition abilities is explained by the sensitivity to differences in features (
<italic>r</italic>
<sup>2</sup>
= 0.09 and 0.11, respectively).</p>
<p>Overall, with our test battery we were able to replicate results of previous studies and to provide new insights into the face processing disturbances caused by CP and the ORE. Thus, when a (Caucasian) prosopagnosic person tries to explain his or her condition to a (Korean) non-prosopagnosic person experiencing the ORE (“They all look the same to you; everyone else does to me, too”), the comparison is inexact. Although Koreans and prosopagnosic observers perceive Caucasian faces differently, the analogy probably gives at least an idea of the problems that congenital prosopagnosics face, albeit to a stronger extent.</p>
<sec>
<title>Conflict of interest statement</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
</sec>
</body>
<back>
<ack>
<p>This research was supported by funding from the Max Planck Society as well as from the World Class University (WCU) program. We thank all participants in this study. The help of Prof. Dr. Ingo Kennerknecht in contacting the prosopagnosic participants is also highly appreciated. We thank Bradley Duchaine and Ken Nakayama, Nina Gaißert, and Christoph Dahl for graciously giving us their stimulus material.</p>
</ack>
<fn-group>
<fn id="fn0001">
<p>
<sup>1</sup>
Only pixels which actually differed between both images were taken into consideration. Thus, the gray background and the common outer face shape were omitted for the averaging process. This avoids an artificial reduction of the mean pixel distances.</p>
</fn>
</fn-group>
<sec sec-type="supplementary-material" id="s3">
<title>Supplementary material</title>
<p>The Supplementary Material for this article can be found online at:
<ext-link ext-link-type="uri" xlink:href="http://www.frontiersin.org/journal/10.3389/fnhum.2014.00759/abstract">http://www.frontiersin.org/journal/10.3389/fnhum.2014.00759/abstract</ext-link>
</p>
<supplementary-material content-type="local-data">
<media xlink:href="DataSheet1.DOCX">
<caption>
<p>Click here for additional data file.</p>
</caption>
</media>
</supplementary-material>
</sec>
<ref-list>
<title>References</title>
<ref id="B1">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Avidan</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Behrmann</surname>
<given-names>M.</given-names>
</name>
</person-group>
(
<year>2009</year>
).
<article-title>Functional MRI reveals compromised neural integrity of the face processing network in congenital prosopagnosia</article-title>
.
<source>Curr. Biol</source>
.
<volume>19</volume>
,
<fpage>1146</fpage>
<lpage>1150</lpage>
<pub-id pub-id-type="doi">10.1016/j.cub.2009.04.060</pub-id>
<pub-id pub-id-type="pmid">19481456</pub-id>
</mixed-citation>
</ref>
<ref id="B2">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Avidan</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Hasson</surname>
<given-names>U.</given-names>
</name>
<name>
<surname>Malach</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Behrmann</surname>
<given-names>M.</given-names>
</name>
</person-group>
(
<year>2005</year>
).
<article-title>Detailed exploration of face-related processing in congenital prosopagnosia: 2. Functional neuroimaging findings</article-title>
.
<source>J. Cogn. Neurosci</source>
.
<volume>17</volume>
,
<fpage>1150</fpage>
<lpage>1167</lpage>
<pub-id pub-id-type="doi">10.1162/0898929054475145</pub-id>
<pub-id pub-id-type="pmid">16102242</pub-id>
</mixed-citation>
</ref>
<ref id="B3">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Avidan</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Tanzer</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Behrmann</surname>
<given-names>M.</given-names>
</name>
</person-group>
(
<year>2011</year>
).
<article-title>Impaired holistic processing in congenital prosopagnosia</article-title>
.
<source>Neuropsychologia</source>
<volume>49</volume>
,
<fpage>2541</fpage>
<lpage>2552</lpage>
<pub-id pub-id-type="doi">10.1016/j.neuropsychologia.2011.05.002</pub-id>
<pub-id pub-id-type="pmid">21601583</pub-id>
</mixed-citation>
</ref>
<ref id="B4">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Avidan</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Thomas</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Behrmann</surname>
<given-names>M.</given-names>
</name>
</person-group>
(
<year>2008</year>
).
<article-title>An integrative approach towards understanding the psychological and neural basis of congenital prosopagnosia</article-title>
, in
<source>Cortical Mechanisms of Vision</source>
, eds
<person-group person-group-type="editor">
<name>
<surname>Jenkin</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Harris</surname>
<given-names>L. R.</given-names>
</name>
</person-group>
(
<publisher-loc>New York, NY</publisher-loc>
:
<publisher-name>Cambridge University Press</publisher-name>
),
<fpage>241</fpage>
<lpage>270</lpage>
Available online at:
<ext-link ext-link-type="uri" xlink:href="http://tdlc.ucsd.edu/research/publications/Avidan_Integrative_Approach_2009.pdf">http://tdlc.ucsd.edu/research/publications/Avidan_Integrative_Approach_2009.pdf</ext-link>
(Accessed June 19, 2014).</mixed-citation>
</ref>
<ref id="B5">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Barton</surname>
<given-names>J. J. S.</given-names>
</name>
<name>
<surname>Cherkasova</surname>
<given-names>M. V.</given-names>
</name>
<name>
<surname>Press</surname>
<given-names>D. Z.</given-names>
</name>
<name>
<surname>Intriligator</surname>
<given-names>J. M.</given-names>
</name>
<name>
<surname>O'Connor</surname>
<given-names>M.</given-names>
</name>
</person-group>
(
<year>2003</year>
).
<article-title>Developmental prosopagnosia: a study of three patients</article-title>
.
<source>Brain Cogn</source>
.
<volume>51</volume>
,
<fpage>12</fpage>
<lpage>30</lpage>
<pub-id pub-id-type="doi">10.1016/S0278-2626(02)00516-X</pub-id>
<pub-id pub-id-type="pmid">12633587</pub-id>
</mixed-citation>
</ref>
<ref id="B6">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Behrmann</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Avidan</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Marotta</surname>
<given-names>J. J.</given-names>
</name>
<name>
<surname>Kimchi</surname>
<given-names>R.</given-names>
</name>
</person-group>
(
<year>2005</year>
).
<article-title>Detailed exploration of face-related processing in congenital prosopagnosia: 1. Behavioral findings</article-title>
.
<source>J. Cogn. Neurosci</source>
.
<volume>17</volume>
,
<fpage>1130</fpage>
<lpage>1149</lpage>
<pub-id pub-id-type="doi">10.1162/0898929054475154</pub-id>
<pub-id pub-id-type="pmid">16102241</pub-id>
</mixed-citation>
</ref>
<ref id="B7">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bernstein</surname>
<given-names>M. J.</given-names>
</name>
<name>
<surname>Young</surname>
<given-names>S. G.</given-names>
</name>
<name>
<surname>Hugenberg</surname>
<given-names>K.</given-names>
</name>
</person-group>
(
<year>2007</year>
).
<article-title>The cross-category effect: mere social categorization is sufficient to elicit an own-group bias in face recognition</article-title>
.
<source>Psychol. Sci</source>
.
<volume>18</volume>
,
<fpage>706</fpage>
<lpage>712</lpage>
<pub-id pub-id-type="doi">10.1111/j.1467-9280.2007.01964.x</pub-id>
<pub-id pub-id-type="pmid">17680942</pub-id>
</mixed-citation>
</ref>
<ref id="B8">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Blais</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Jack</surname>
<given-names>R. E.</given-names>
</name>
<name>
<surname>Scheepers</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Fiset</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Caldara</surname>
<given-names>R.</given-names>
</name>
</person-group>
(
<year>2008</year>
).
<article-title>Culture shapes how we look at faces</article-title>
.
<source>PLoS ONE</source>
<volume>3</volume>
:
<fpage>e3022</fpage>
<pub-id pub-id-type="doi">10.1371/journal.pone.0003022</pub-id>
<pub-id pub-id-type="pmid">18714387</pub-id>
</mixed-citation>
</ref>
<ref id="B9">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Carbon</surname>
<given-names>C.-C.</given-names>
</name>
<name>
<surname>Grüter</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Weber</surname>
<given-names>J. E.</given-names>
</name>
<name>
<surname>Lueschow</surname>
<given-names>A.</given-names>
</name>
</person-group>
(
<year>2007</year>
).
<article-title>Faces as objects of non-expertise: processing of thatcherised faces in congenital prosopagnosia</article-title>
.
<source>Perception</source>
<volume>36</volume>
,
<fpage>1635</fpage>
<lpage>1645</lpage>
<pub-id pub-id-type="doi">10.1068/p5467</pub-id>
<pub-id pub-id-type="pmid">18265844</pub-id>
</mixed-citation>
</ref>
<ref id="B10">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Collishaw</surname>
<given-names>S. M.</given-names>
</name>
<name>
<surname>Hole</surname>
<given-names>G. J.</given-names>
</name>
</person-group>
(
<year>2000</year>
).
<article-title>Featural and configurational processes in the recognition of faces of different familiarity</article-title>
.
<source>Perception</source>
<volume>29</volume>
,
<fpage>893</fpage>
<lpage>909</lpage>
<pub-id pub-id-type="doi">10.1068/p2949</pub-id>
<pub-id pub-id-type="pmid">11145082</pub-id>
</mixed-citation>
</ref>
<ref id="B11">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>DeGutis</surname>
<given-names>J. M.</given-names>
</name>
<name>
<surname>Bentin</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Robertson</surname>
<given-names>L. C.</given-names>
</name>
<name>
<surname>D'Esposito</surname>
<given-names>M.</given-names>
</name>
</person-group>
(
<year>2007</year>
).
<article-title>Functional plasticity in ventral temporal cortex following cognitive rehabilitation of a congenital prosopagnosic</article-title>
.
<source>J. Cogn. Neurosci</source>
.
<volume>19</volume>
,
<fpage>1790</fpage>
<lpage>1802</lpage>
<pub-id pub-id-type="doi">10.1162/jocn.2007.19.11.1790</pub-id>
<pub-id pub-id-type="pmid">17958482</pub-id>
</mixed-citation>
</ref>
<ref id="B12">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>DeGutis</surname>
<given-names>J. M.</given-names>
</name>
<name>
<surname>Wilmer</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Mercado</surname>
<given-names>R. J.</given-names>
</name>
<name>
<surname>Cohan</surname>
<given-names>S.</given-names>
</name>
</person-group>
(
<year>2013</year>
).
<article-title>Using regression to measure holistic face processing reveals a strong link with face recognition ability</article-title>
.
<source>Cognition</source>
<volume>126</volume>
,
<fpage>87</fpage>
<lpage>100</lpage>
<pub-id pub-id-type="doi">10.1016/j.cognition.2012.09.004</pub-id>
<pub-id pub-id-type="pmid">23084178</pub-id>
</mixed-citation>
</ref>
<ref id="B13">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Duchaine</surname>
<given-names>B. C.</given-names>
</name>
<name>
<surname>Nakayama</surname>
<given-names>K.</given-names>
</name>
</person-group>
(
<year>2005</year>
).
<article-title>Dissociations of face and object recognition in developmental prosopagnosia</article-title>
.
<source>J. Cogn. Neurosci</source>
.
<volume>17</volume>
,
<fpage>249</fpage>
<lpage>261</lpage>
<pub-id pub-id-type="doi">10.1162/0898929053124857</pub-id>
<pub-id pub-id-type="pmid">15811237</pub-id>
</mixed-citation>
</ref>
<ref id="B14">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Duchaine</surname>
<given-names>B. C.</given-names>
</name>
<name>
<surname>Nakayama</surname>
<given-names>K.</given-names>
</name>
</person-group>
(
<year>2006</year>
).
<article-title>The Cambridge face memory test: results for neurologically intact individuals and an investigation of its validity using inverted face stimuli and prosopagnosic participants</article-title>
.
<source>Neuropsychologia</source>
<volume>44</volume>
,
<fpage>576</fpage>
<lpage>585</lpage>
<pub-id pub-id-type="doi">10.1016/j.neuropsychologia.2005.07.001</pub-id>
<pub-id pub-id-type="pmid">16169565</pub-id>
</mixed-citation>
</ref>
<ref id="B15">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Duchaine</surname>
<given-names>B. C.</given-names>
</name>
<name>
<surname>Yovel</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Nakayama</surname>
<given-names>K.</given-names>
</name>
</person-group>
(
<year>2007</year>
).
<article-title>No global processing deficit in the Navon task in 14 developmental prosopagnosics</article-title>
.
<source>Soc. Cogn. Affect. Neurosci</source>
.
<volume>2</volume>
,
<fpage>104</fpage>
<lpage>113</lpage>
<pub-id pub-id-type="doi">10.1093/scan/nsm003</pub-id>
<pub-id pub-id-type="pmid">18985129</pub-id>
</mixed-citation>
</ref>
<ref id="B16">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Esins</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Bülthoff</surname>
<given-names>I.</given-names>
</name>
<name>
<surname>Schultz</surname>
<given-names>J.</given-names>
</name>
</person-group>
(
<year>2011</year>
).
<article-title>The role of featural and configural information for perceived similarity between faces</article-title>
.
<source>J. Vis</source>
.
<volume>11</volume>
:
<fpage>673</fpage>
<pub-id pub-id-type="doi">10.1167/11.11.673</pub-id>
</mixed-citation>
</ref>
<ref id="B17">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Fowler</surname>
<given-names>D. R.</given-names>
</name>
<name>
<surname>Meinhardt</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Prusinkiewicz</surname>
<given-names>P.</given-names>
</name>
</person-group>
(
<year>1992</year>
).
<article-title>Modeling seashells</article-title>
.
<source>ACM SIGGRAPH Comput. Graph</source>
.
<volume>26</volume>
,
<fpage>379</fpage>
<lpage>387</lpage>
<pub-id pub-id-type="doi">10.1145/133994.134096</pub-id>
</mixed-citation>
</ref>
<ref id="B18">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Freire</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Lee</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Symons</surname>
<given-names>L. A.</given-names>
</name>
</person-group>
(
<year>2000</year>
).
<article-title>The face-inversion effect as a deficit in the encoding of configural information: direct evidence</article-title>
.
<source>Perception</source>
<volume>29</volume>
,
<fpage>159</fpage>
<lpage>170</lpage>
<pub-id pub-id-type="doi">10.1068/p3012</pub-id>
<pub-id pub-id-type="pmid">10820599</pub-id>
</mixed-citation>
</ref>
<ref id="B19">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Gaißert</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Wallraven</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Bülthoff</surname>
<given-names>H. H.</given-names>
</name>
</person-group>
(
<year>2010</year>
).
<article-title>Visual and haptic perceptual spaces show high similarity in humans</article-title>
.
<source>J. Vis</source>
.
<volume>10</volume>
,
<fpage>1</fpage>
<lpage>20</lpage>
<pub-id pub-id-type="doi">10.1167/10.11.2</pub-id>
<pub-id pub-id-type="pmid">20884497</pub-id>
</mixed-citation>
</ref>
<ref id="B20">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Goffaux</surname>
<given-names>V.</given-names>
</name>
<name>
<surname>Hault</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Michel</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Vuong</surname>
<given-names>Q. C.</given-names>
</name>
<name>
<surname>Rossion</surname>
<given-names>B.</given-names>
</name>
</person-group>
(
<year>2005</year>
).
<article-title>The respective role of low and high spatial frequencies in supporting configural and featural processing of faces</article-title>
.
<source>Perception</source>
<volume>34</volume>
,
<fpage>77</fpage>
<lpage>86</lpage>
<pub-id pub-id-type="doi">10.1068/p5370</pub-id>
<pub-id pub-id-type="pmid">15773608</pub-id>
</mixed-citation>
</ref>
<ref id="B21">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Grüter</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Grüter</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Carbon</surname>
<given-names>C.-C.</given-names>
</name>
</person-group>
(
<year>2008</year>
).
<article-title>Neural and genetic foundations of face recognition and prosopagnosia</article-title>
.
<source>J. Neuropsychol</source>
.
<volume>2</volume>
,
<fpage>79</fpage>
<lpage>97</lpage>
<pub-id pub-id-type="doi">10.1348/174866407X231001</pub-id>
<pub-id pub-id-type="pmid">19334306</pub-id>
</mixed-citation>
</ref>
<ref id="B22">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hayward</surname>
<given-names>W. G.</given-names>
</name>
<name>
<surname>Rhodes</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Schwaninger</surname>
<given-names>A.</given-names>
</name>
</person-group>
(
<year>2008</year>
).
<article-title>An own-race advantage for components as well as configurations in face recognition</article-title>
.
<source>Cognition</source>
<volume>106</volume>
,
<fpage>1017</fpage>
<lpage>1027</lpage>
<pub-id pub-id-type="doi">10.1016/j.cognition.2007.04.002</pub-id>
<pub-id pub-id-type="pmid">17524388</pub-id>
</mixed-citation>
</ref>
<ref id="B23">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hugenberg</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Young</surname>
<given-names>S. G.</given-names>
</name>
<name>
<surname>Bernstein</surname>
<given-names>M. J.</given-names>
</name>
<name>
<surname>Sacco</surname>
<given-names>D. F.</given-names>
</name>
</person-group>
(
<year>2010</year>
).
<article-title>The categorization-individuation model: an integrative account of the other-race recognition deficit</article-title>
.
<source>Psychol. Rev</source>
.
<volume>117</volume>
,
<fpage>1168</fpage>
<lpage>1187</lpage>
<pub-id pub-id-type="doi">10.1037/a0020463</pub-id>
<pub-id pub-id-type="pmid">20822290</pub-id>
</mixed-citation>
</ref>
<ref id="B24">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kennerknecht</surname>
<given-names>I.</given-names>
</name>
<name>
<surname>Ho</surname>
<given-names>N. Y.</given-names>
</name>
<name>
<surname>Wong</surname>
<given-names>V. C. N.</given-names>
</name>
</person-group>
(
<year>2008</year>
).
<article-title>Prevalence of hereditary prosopagnosia (HPA) in Hong Kong Chinese population</article-title>
.
<source>Am. J. Med. Genet. A</source>
<volume>146A</volume>
,
<fpage>2863</fpage>
<lpage>2870</lpage>
<pub-id pub-id-type="doi">10.1002/ajmg.a.32552</pub-id>
<pub-id pub-id-type="pmid">18925678</pub-id>
</mixed-citation>
</ref>
<ref id="B25">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kimchi</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Behrmann</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Avidan</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Amishav</surname>
<given-names>R.</given-names>
</name>
</person-group>
(
<year>2012</year>
).
<article-title>Perceptual separability of featural and configural information in congenital prosopagnosia</article-title>
.
<source>Cogn. Neuropsychol</source>
.
<volume>29</volume>
,
<fpage>447</fpage>
<lpage>463</lpage>
<pub-id pub-id-type="doi">10.1080/02643294.2012.752723</pub-id>
<pub-id pub-id-type="pmid">23428081</pub-id>
</mixed-citation>
</ref>
<ref id="B26">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Konar</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Bennett</surname>
<given-names>P. J.</given-names>
</name>
<name>
<surname>Sekuler</surname>
<given-names>A. B.</given-names>
</name>
</person-group>
(
<year>2010</year>
).
<article-title>Holistic processing is not correlated with face-identification accuracy</article-title>
.
<source>Psychol. Sci</source>
.
<volume>21</volume>
,
<fpage>38</fpage>
<lpage>43</lpage>
<pub-id pub-id-type="doi">10.1177/0956797609356508</pub-id>
<pub-id pub-id-type="pmid">20424020</pub-id>
</mixed-citation>
</ref>
<ref id="B27">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kress</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Daum</surname>
<given-names>I.</given-names>
</name>
</person-group>
(
<year>2003</year>
).
<article-title>Developmental prosopagnosia: a review</article-title>
.
<source>Behav. Neurol</source>
.
<volume>14</volume>
,
<fpage>109</fpage>
<lpage>121</lpage>
<pub-id pub-id-type="doi">10.1155/2003/520476</pub-id>
<pub-id pub-id-type="pmid">14757987</pub-id>
</mixed-citation>
</ref>
<ref id="B28">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Le Grand</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Cooper</surname>
<given-names>P. A.</given-names>
</name>
<name>
<surname>Mondloch</surname>
<given-names>C. J.</given-names>
</name>
<name>
<surname>Lewis</surname>
<given-names>T. L.</given-names>
</name>
<name>
<surname>Sagiv</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>De Gelder</surname>
<given-names>B.</given-names>
</name>
<etal></etal>
</person-group>
(
<year>2006</year>
).
<article-title>What aspects of face processing are impaired in developmental prosopagnosia?</article-title>
<source>Brain Cogn</source>
.
<volume>61</volume>
,
<fpage>139</fpage>
<lpage>158</lpage>
<pub-id pub-id-type="doi">10.1016/j.bandc.2005.11.005</pub-id>
<pub-id pub-id-type="pmid">16466839</pub-id>
</mixed-citation>
</ref>
<ref id="B29">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Lobmaier</surname>
<given-names>J. S.</given-names>
</name>
<name>
<surname>Bölte</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Mast</surname>
<given-names>F. W.</given-names>
</name>
<name>
<surname>Dobel</surname>
<given-names>C.</given-names>
</name>
</person-group>
(
<year>2010</year>
).
<article-title>Configural and featural processing in humans with congenital prosopagnosia</article-title>
.
<source>Adv. Cogn. Psychol</source>
.
<volume>6</volume>
,
<fpage>23</fpage>
<lpage>34</lpage>
<pub-id pub-id-type="doi">10.2478/v10053-008-0074-4</pub-id>
<pub-id pub-id-type="pmid">20689639</pub-id>
</mixed-citation>
</ref>
<ref id="B30">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Macmillan</surname>
<given-names>N. A.</given-names>
</name>
<name>
<surname>Creelman</surname>
<given-names>C. D.</given-names>
</name>
</person-group>
(
<year>2005</year>
).
<source>Detection Theory: A User's Guide. 2nd Edn</source>
.
<publisher-loc>Mahwah, NJ</publisher-loc>
:
<publisher-name>Lawrence Erlbaum Associates</publisher-name>
</mixed-citation>
</ref>
<ref id="B31">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Maurer</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Le Grand</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Mondloch</surname>
<given-names>C. J.</given-names>
</name>
</person-group>
(
<year>2002</year>
).
<article-title>The many faces of configural processing</article-title>
.
<source>Trends Cogn. Sci</source>
.
<volume>6</volume>
,
<fpage>255</fpage>
<lpage>260</lpage>
<pub-id pub-id-type="doi">10.1016/S1364-6613(02)01903-4</pub-id>
<pub-id pub-id-type="pmid">12039607</pub-id>
</mixed-citation>
</ref>
<ref id="B32">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Maurer</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>O'Craven</surname>
<given-names>K. M.</given-names>
</name>
<name>
<surname>Le Grand</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Mondloch</surname>
<given-names>C. J.</given-names>
</name>
<name>
<surname>Springer</surname>
<given-names>M. V.</given-names>
</name>
<name>
<surname>Lewis</surname>
<given-names>T. L.</given-names>
</name>
<etal></etal>
</person-group>
(
<year>2007</year>
).
<article-title>Neural correlates of processing facial identity based on features versus their spacing</article-title>
.
<source>Neuropsychologia</source>
<volume>45</volume>
,
<fpage>1438</fpage>
<lpage>1451</lpage>
<pub-id pub-id-type="doi">10.1016/j.neuropsychologia.2006.11.016</pub-id>
<pub-id pub-id-type="pmid">17204295</pub-id>
</mixed-citation>
</ref>
<ref id="B33">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>McKone</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Aimola Davies</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Fernando</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Aalders</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Leung</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Wickramariyaratne</surname>
<given-names>T.</given-names>
</name>
<etal></etal>
</person-group>
(
<year>2010</year>
).
<article-title>Asia has the global advantage: race and visual attention</article-title>
.
<source>Vision Res</source>
.
<volume>50</volume>
,
<fpage>1540</fpage>
<lpage>1549</lpage>
<pub-id pub-id-type="doi">10.1016/j.visres.2010.05.010</pub-id>
<pub-id pub-id-type="pmid">20488198</pub-id>
</mixed-citation>
</ref>
<ref id="B34">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>McKone</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Brewer</surname>
<given-names>J. L.</given-names>
</name>
<name>
<surname>MacPherson</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Rhodes</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Hayward</surname>
<given-names>W. G.</given-names>
</name>
</person-group>
(
<year>2007</year>
).
<article-title>Familiar other-race faces show normal holistic processing and are robust to perceptual stress</article-title>
.
<source>Perception</source>
<volume>36</volume>
,
<fpage>224</fpage>
<lpage>248</lpage>
<pub-id pub-id-type="doi">10.1068/p5499</pub-id>
<pub-id pub-id-type="pmid">17402665</pub-id>
</mixed-citation>
</ref>
<ref id="B35">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>McKone</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Stokes</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Liu</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Cohan</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Fiorentini</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Pidcock</surname>
<given-names>M.</given-names>
</name>
<etal></etal>
</person-group>
(
<year>2012</year>
).
<article-title>A robust method of measuring other-race and other-ethnicity effects: the Cambridge face memory test format</article-title>
.
<source>PLoS ONE</source>
<volume>7</volume>
:
<fpage>e47956</fpage>
<pub-id pub-id-type="doi">10.1371/journal.pone.0047956</pub-id>
<pub-id pub-id-type="pmid">23118912</pub-id>
</mixed-citation>
</ref>
<ref id="B36">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Meissner</surname>
<given-names>C. A.</given-names>
</name>
<name>
<surname>Brigham</surname>
<given-names>J. C.</given-names>
</name>
</person-group>
(
<year>2001</year>
).
<article-title>Thirty years of investigating the own-race bias in memory for faces: a meta-analytic review</article-title>
.
<source>Psychol. Pub. Policy Law</source>
<volume>7</volume>
,
<fpage>3</fpage>
<lpage>35</lpage>
<pub-id pub-id-type="doi">10.1037/1076-8971.7.1.3</pub-id>
</mixed-citation>
</ref>
<ref id="B37">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Michel</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Rossion</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Han</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Chung</surname>
<given-names>C.-S.</given-names>
</name>
<name>
<surname>Caldara</surname>
<given-names>R.</given-names>
</name>
</person-group>
(
<year>2006</year>
).
<article-title>Holistic processing is finely tuned for faces of one's own race</article-title>
.
<source>Psychol. Sci</source>
.
<volume>17</volume>
,
<fpage>608</fpage>
<lpage>615</lpage>
<pub-id pub-id-type="doi">10.1111/j.1467-9280.2006.01752.x</pub-id>
<pub-id pub-id-type="pmid">16866747</pub-id>
</mixed-citation>
</ref>
<ref id="B38">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Mondloch</surname>
<given-names>C. J.</given-names>
</name>
<name>
<surname>Elms</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Maurer</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Rhodes</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Hayward</surname>
<given-names>W. G.</given-names>
</name>
<name>
<surname>Tanaka</surname>
<given-names>J. W.</given-names>
</name>
<etal></etal>
</person-group>
(
<year>2010</year>
).
<article-title>Processes underlying the cross-race effect: an investigation of holistic, featural, and relational processing of own-race versus other-race faces</article-title>
.
<source>Perception</source>
<volume>39</volume>
,
<fpage>1065</fpage>
<lpage>1085</lpage>
<pub-id pub-id-type="doi">10.1068/p6608</pub-id>
<pub-id pub-id-type="pmid">20942358</pub-id>
</mixed-citation>
</ref>
<ref id="B39">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Rhodes</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Brake</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Taylor</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Tan</surname>
<given-names>S.</given-names>
</name>
</person-group>
(
<year>1989</year>
).
<article-title>Expertise and configural coding in face recognition</article-title>
.
<source>Br. J. Psychol</source>
.
<volume>80</volume>
,
<fpage>313</fpage>
<lpage>331</lpage>
<pub-id pub-id-type="doi">10.1111/j.2044-8295.1989.tb02323.x</pub-id>
<pub-id pub-id-type="pmid">2790391</pub-id>
</mixed-citation>
</ref>
<ref id="B40">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Rhodes</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Hayward</surname>
<given-names>W. G.</given-names>
</name>
<name>
<surname>Winkler</surname>
<given-names>C.</given-names>
</name>
</person-group>
(
<year>2006</year>
).
<article-title>Expert face coding: configural and component coding of own-race and other-race faces</article-title>
.
<source>Psychon. Bull. Rev</source>
.
<volume>13</volume>
,
<fpage>499</fpage>
<lpage>505</lpage>
<pub-id pub-id-type="doi">10.3758/BF03193876</pub-id>
<pub-id pub-id-type="pmid">17048737</pub-id>
</mixed-citation>
</ref>
<ref id="B41">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Richler</surname>
<given-names>J. J.</given-names>
</name>
<name>
<surname>Cheung</surname>
<given-names>O. S.</given-names>
</name>
<name>
<surname>Gauthier</surname>
<given-names>I.</given-names>
</name>
</person-group>
(
<year>2011</year>
).
<article-title>Holistic processing predicts face recognition</article-title>
.
<source>Psychol. Sci</source>
.
<volume>22</volume>
,
<fpage>464</fpage>
<lpage>471</lpage>
<pub-id pub-id-type="doi">10.1177/0956797611401753</pub-id>
<pub-id pub-id-type="pmid">21393576</pub-id>
</mixed-citation>
</ref>
<ref id="B42">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Rivolta</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Palermo</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Schmalzl</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Coltheart</surname>
<given-names>M.</given-names>
</name>
</person-group>
(
<year>2011</year>
).
<article-title>Covert face recognition in congenital prosopagnosia: a group study</article-title>
.
<source>Cortex</source>
<volume>48</volume>
,
<fpage>1</fpage>
<lpage>9</lpage>
<pub-id pub-id-type="doi">10.1016/j.cortex.2011.01.005</pub-id>
<pub-id pub-id-type="pmid">22136876</pub-id>
</mixed-citation>
</ref>
<ref id="B43">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Rotshtein</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Geng</surname>
<given-names>J. J.</given-names>
</name>
<name>
<surname>Driver</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Dolan</surname>
<given-names>R. J.</given-names>
</name>
</person-group>
(
<year>2007</year>
).
<article-title>Role of features and second-order spatial relations in face discrimination, face recognition, and individual face skills: behavioral and functional magnetic resonance imaging data</article-title>
.
<source>J. Cogn. Neurosci</source>
.
<volume>19</volume>
,
<fpage>1435</fpage>
<lpage>1452</lpage>
<pub-id pub-id-type="doi">10.1162/jocn.2007.19.9.1435</pub-id>
<pub-id pub-id-type="pmid">17714006</pub-id>
</mixed-citation>
</ref>
<ref id="B44">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Rushton</surname>
<given-names>J. P.</given-names>
</name>
<name>
<surname>Jensen</surname>
<given-names>A. R.</given-names>
</name>
</person-group>
(
<year>2005</year>
).
<article-title>Thirty years of research on race differences in cognitive ability</article-title>
.
<source>Psychol. Pub. Policy Law</source>
<volume>11</volume>
,
<fpage>235</fpage>
<lpage>294</lpage>
<pub-id pub-id-type="doi">10.1037/1076-8971.11.2.235</pub-id>
</mixed-citation>
</ref>
<ref id="B45">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Stollhoff</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Jost</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Elze</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Kennerknecht</surname>
<given-names>I.</given-names>
</name>
</person-group>
(
<year>2011</year>
).
<article-title>Deficits in long-term recognition memory reveal dissociated subtypes in congenital prosopagnosia</article-title>
.
<source>PLoS ONE</source>
<volume>6</volume>
:
<fpage>e15702</fpage>
<pub-id pub-id-type="doi">10.1371/journal.pone.0015702</pub-id>
<pub-id pub-id-type="pmid">21283572</pub-id>
</mixed-citation>
</ref>
<ref id="B46">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Towler</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Gosling</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Duchaine</surname>
<given-names>B. C.</given-names>
</name>
<name>
<surname>Eimer</surname>
<given-names>M.</given-names>
</name>
</person-group>
(
<year>2012</year>
).
<article-title>The face-sensitive N170 component in developmental prosopagnosia</article-title>
.
<source>Neuropsychologia</source>
<volume>50</volume>
,
<fpage>3588</fpage>
<lpage>3599</lpage>
<pub-id pub-id-type="doi">10.1016/j.neuropsychologia.2012.10.017</pub-id>
<pub-id pub-id-type="pmid">23092937</pub-id>
</mixed-citation>
</ref>
<ref id="B47">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Troje</surname>
<given-names>N. F.</given-names>
</name>
<name>
<surname>Bülthoff</surname>
<given-names>H. H.</given-names>
</name>
</person-group>
(
<year>1996</year>
).
<article-title>Face recognition under varying poses: the role of texture and shape</article-title>
.
<source>Vision Res</source>
.
<volume>36</volume>
,
<fpage>1761</fpage>
<lpage>1771</lpage>
<pub-id pub-id-type="doi">10.1016/0042-6989(95)00230-8</pub-id>
<pub-id pub-id-type="pmid">8759445</pub-id>
</mixed-citation>
</ref>
<ref id="B48">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Vetter</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Blanz</surname>
<given-names>V.</given-names>
</name>
</person-group>
(
<year>1999</year>
).
<article-title>A morphable model for the synthesis of 3D faces</article-title>
, in
<source>SIGGRAPH'99 Proceedings of the 26th annual conference on Computer graphics and interactive techniques</source>
(
<publisher-loc>New York, NY</publisher-loc>
:
<publisher-name>ACM Press/Addison-Wesley Publishing Co.</publisher-name>
),
<fpage>187</fpage>
<lpage>194</lpage>
<pub-id pub-id-type="doi">10.1145/311535.311556</pub-id>
</mixed-citation>
</ref>
<ref id="B49">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wang</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Stollhoff</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Elze</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Jost</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Kennerknecht</surname>
<given-names>I.</given-names>
</name>
</person-group>
(
<year>2009</year>
).
<article-title>Are we all prosopagnosics for other race faces?</article-title>
<source>Perception 38 ECVP Abstract Supplement</source>
,
<volume>78</volume>
<pub-id pub-id-type="doi">10.1371/journal.pone.0003022</pub-id>
<pub-id pub-id-type="pmid">21570991</pub-id>
</mixed-citation>
</ref>
<ref id="B50">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Yovel</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Duchaine</surname>
<given-names>B. C.</given-names>
</name>
</person-group>
(
<year>2006</year>
).
<article-title>Specialized face perception mechanisms extract both part and spacing information: evidence from developmental prosopagnosia</article-title>
.
<source>J. Cogn. Neurosci</source>
.
<volume>18</volume>
,
<fpage>580</fpage>
<lpage>593</lpage>
<pub-id pub-id-type="doi">10.1162/jocn.2006.18.4.580</pub-id>
<pub-id pub-id-type="pmid">16768361</pub-id>
</mixed-citation>
</ref>
<ref id="B51">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Yovel</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Kanwisher</surname>
<given-names>N.</given-names>
</name>
</person-group>
(
<year>2004</year>
).
<article-title>Face perception: domain specific, not process specific</article-title>
.
<source>Neuron</source>
<volume>44</volume>
,
<fpage>889</fpage>
<lpage>898</lpage>
<pub-id pub-id-type="doi">10.1016/j.neuron.2004.11.018</pub-id>
<pub-id pub-id-type="pmid">15572118</pub-id>
</mixed-citation>
</ref>
</ref-list>
</back>
</pmc>
<affiliations>
<list>
<country>
<li>Allemagne</li>
<li>Corée du Sud</li>
<li>Royaume-Uni</li>
</country>
</list>
<tree>
<country name="Allemagne">
<noRegion>
<name sortKey="Esins, Janina" sort="Esins, Janina" uniqKey="Esins J" first="Janina" last="Esins">Janina Esins</name>
</noRegion>
<name sortKey="Bulthoff, Isabelle" sort="Bulthoff, Isabelle" uniqKey="Bulthoff I" first="Isabelle" last="Bülthoff">Isabelle Bülthoff</name>
<name sortKey="Schultz, Johannes" sort="Schultz, Johannes" uniqKey="Schultz J" first="Johannes" last="Schultz">Johannes Schultz</name>
</country>
<country name="Royaume-Uni">
<noRegion>
<name sortKey="Schultz, Johannes" sort="Schultz, Johannes" uniqKey="Schultz J" first="Johannes" last="Schultz">Johannes Schultz</name>
</noRegion>
</country>
<country name="Corée du Sud">
<noRegion>
<name sortKey="Wallraven, Christian" sort="Wallraven, Christian" uniqKey="Wallraven C" first="Christian" last="Wallraven">Christian Wallraven</name>
</noRegion>
<name sortKey="Bulthoff, Isabelle" sort="Bulthoff, Isabelle" uniqKey="Bulthoff I" first="Isabelle" last="Bülthoff">Isabelle Bülthoff</name>
</country>
</tree>
</affiliations>
</record>

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/HapticV1/Data/Pmc/Checkpoint
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 000C95 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/Pmc/Checkpoint/biblio.hfd -nk 000C95 | SxmlIndent | more

To add a link to this page in the Wicri network

{{Explor lien
   |wiki=    Ticri/CIDE
   |area=    HapticV1
   |flux=    Pmc
   |étape=   Checkpoint
   |type=    RBID
   |clé=     PMC:4179381
   |texte=   Do congenital prosopagnosia and the other-race effect affect the same face recognition mechanisms?
}}

To generate wiki pages

HfdIndexSelect -h $EXPLOR_AREA/Data/Pmc/Checkpoint/RBID.i   -Sk "pubmed:25324757" \
       | HfdSelect -Kh $EXPLOR_AREA/Data/Pmc/Checkpoint/biblio.hfd   \
       | NlmPubMed2Wicri -a HapticV1 

Wicri

This area was generated with Dilib version V0.6.23.
Data generation: Mon Jun 13 01:09:46 2016. Site generation: Wed Mar 6 09:54:07 2024