Exploration server on haptic devices

Warning: this site is under development!
Warning: this site is generated automatically from raw corpora.
The information has therefore not been validated.

Modeling the Minimal Newborn's Intersubjective Mind: The Visuotopic-Somatotopic Alignment Hypothesis in the Superior Colliculus

Internal identifier: 002848 (Ncbi/Merge); previous: 002847; next: 002849


Authors: Alexandre Pitti [France]; Yasuo Kuniyoshi [Japan]; Mathias Quoy [France]; Philippe Gaussier [France]

Source:

RBID: PMC:3724856

Abstract

The question of whether newborns possess inborn social skills is a long-standing debate in developmental psychology. Fetal behavioral and anatomical observations show evidence of the control of eye movements and facial behaviors during the third trimester of pregnancy, whereas specific sub-cortical areas, like the superior colliculus (SC) and the striatum, appear to be functionally mature enough to support these behaviors. These observations suggest that the newborn is potentially mature enough to develop minimal social skills. In this manuscript, we propose that the mechanism of sensory alignment observed in SC is particularly important for enabling the social skills observed at birth, such as facial preference and facial mimicry. In a computational simulation of the maturing superior colliculus connected to the simulated facial tissue of a fetus, we model how incoming tactile information is used to direct visual attention toward faces. We suggest that the unisensory superficial visual layer (eye-centered) and the deep somatotopic layer (face-centered) in SC are combined into an intermediate layer for visuo-tactile integration, and that multimodal alignment in this third layer gives newborns a sensitivity to the configuration of eyes and mouth. We show that the visual and tactile maps align through a Hebbian learning stage and strengthen their synaptic links to each other in the intermediate layer. As a result, the global network produces emergent properties such as sensitivity to the spatial configuration of face-like patterns and the detection of eye and mouth movements.
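The alignment idea summarized in the abstract — two topographic maps whose links onto a shared intermediate layer strengthen under co-activation until their receptive fields agree — can be sketched as a toy Hebbian model. This is an illustrative sketch only, not the authors' implementation: the 1-D maps, Gaussian population bumps, common stimulus site, and all parameter values below are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 100        # units per topographic map (1-D for simplicity)
steps = 2000   # number of visuo-tactile co-stimulation events
lr = 0.05      # Hebbian learning rate

def bump(center, n=N, sigma=3.0):
    """Localized population activity around `center` on a 1-D map."""
    x = np.arange(n)
    return np.exp(-((x - center) ** 2) / (2 * sigma ** 2))

# Synaptic weights from the visual (eye-centered) and tactile
# (face-centered) maps onto the shared intermediate layer,
# initialized with weak random values (no alignment yet).
w_vis = rng.uniform(0.0, 0.1, size=(N, N))
w_tac = rng.uniform(0.0, 0.1, size=(N, N))

for _ in range(steps):
    site = rng.integers(N)   # one stimulus touches both modalities
    v = bump(site)           # superficial visual layer response
    t = bump(site)           # deep somatosensory layer response
    m = bump(site)           # intermediate layer, assumed driven by both
    # Hebbian rule: links between co-active pre/post units strengthen.
    w_vis += lr * np.outer(m, v)
    w_tac += lr * np.outer(m, t)

# After learning, each intermediate unit's strongest visual and
# tactile afferents should point at the same map location.
vis_peak = w_vis.argmax(axis=1)
tac_peak = w_tac.argmax(axis=1)
misalignment = np.median(np.abs(vis_peak - tac_peak))
print(misalignment)
```

Because every event activates corresponding sites on both maps, the outer-product updates dominate the random initial weights, and the median peak mismatch between the two learned projections drops to near zero: the maps have aligned.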


URL:
DOI: 10.1371/journal.pone.0069474
PubMed: 23922718
PubMed Central: 3724856

Links to previous steps (curation, corpus...)


Links to Exploration step

PMC:3724856

The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">Modeling the Minimal Newborn's Intersubjective Mind: The Visuotopic-Somatotopic Alignment Hypothesis in the Superior Colliculus</title>
<author>
<name sortKey="Pitti, Alexandre" sort="Pitti, Alexandre" uniqKey="Pitti A" first="Alexandre" last="Pitti">Alexandre Pitti</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<addr-line>Department of Computer Sciences, ETIS Laboratory, UMR CNRS 8051, the University of Cergy-Pontoise, ENSEA, Cergy-Pontoise, France</addr-line>
</nlm:aff>
<country xml:lang="fr">France</country>
<wicri:regionArea>Department of Computer Sciences, ETIS Laboratory, UMR CNRS 8051, the University of Cergy-Pontoise, ENSEA, Cergy-Pontoise</wicri:regionArea>
<wicri:noRegion>Cergy-Pontoise</wicri:noRegion>
</affiliation>
</author>
<author>
<name sortKey="Kuniyoshi, Yasuo" sort="Kuniyoshi, Yasuo" uniqKey="Kuniyoshi Y" first="Yasuo" last="Kuniyoshi">Yasuo Kuniyoshi</name>
<affiliation wicri:level="3">
<nlm:aff id="aff2">
<addr-line>ISI Laboratory, Department of Mechano-Informatics, Graduate School of Information Science and Technology, University of Tokyo, Tokyo, Japan</addr-line>
</nlm:aff>
<country xml:lang="fr">Japon</country>
<wicri:regionArea>ISI Laboratory, Department of Mechano-Informatics, Graduate School of Information Science and Technology, University of Tokyo, Tokyo</wicri:regionArea>
<placeName>
<settlement type="city">Tokyo</settlement>
</placeName>
</affiliation>
</author>
<author>
<name sortKey="Quoy, Mathias" sort="Quoy, Mathias" uniqKey="Quoy M" first="Mathias" last="Quoy">Mathias Quoy</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<addr-line>Department of Computer Sciences, ETIS Laboratory, UMR CNRS 8051, the University of Cergy-Pontoise, ENSEA, Cergy-Pontoise, France</addr-line>
</nlm:aff>
<country xml:lang="fr">France</country>
<wicri:regionArea>Department of Computer Sciences, ETIS Laboratory, UMR CNRS 8051, the University of Cergy-Pontoise, ENSEA, Cergy-Pontoise</wicri:regionArea>
<wicri:noRegion>Cergy-Pontoise</wicri:noRegion>
</affiliation>
</author>
<author>
<name sortKey="Gaussier, Philippe" sort="Gaussier, Philippe" uniqKey="Gaussier P" first="Philippe" last="Gaussier">Philippe Gaussier</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<addr-line>Department of Computer Sciences, ETIS Laboratory, UMR CNRS 8051, the University of Cergy-Pontoise, ENSEA, Cergy-Pontoise, France</addr-line>
</nlm:aff>
<country xml:lang="fr">France</country>
<wicri:regionArea>Department of Computer Sciences, ETIS Laboratory, UMR CNRS 8051, the University of Cergy-Pontoise, ENSEA, Cergy-Pontoise</wicri:regionArea>
<wicri:noRegion>Cergy-Pontoise</wicri:noRegion>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PMC</idno>
<idno type="pmid">23922718</idno>
<idno type="pmc">3724856</idno>
<idno type="url">http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3724856</idno>
<idno type="RBID">PMC:3724856</idno>
<idno type="doi">10.1371/journal.pone.0069474</idno>
<date when="2013">2013</date>
<idno type="wicri:Area/Pmc/Corpus">002289</idno>
<idno type="wicri:Area/Pmc/Curation">002289</idno>
<idno type="wicri:Area/Pmc/Checkpoint">001150</idno>
<idno type="wicri:Area/Ncbi/Merge">002848</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a" type="main">Modeling the Minimal Newborn's Intersubjective Mind: The Visuotopic-Somatotopic Alignment Hypothesis in the Superior Colliculus</title>
<author>
<name sortKey="Pitti, Alexandre" sort="Pitti, Alexandre" uniqKey="Pitti A" first="Alexandre" last="Pitti">Alexandre Pitti</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<addr-line>Department of Computer Sciences, ETIS Laboratory, UMR CNRS 8051, the University of Cergy-Pontoise, ENSEA, Cergy-Pontoise, France</addr-line>
</nlm:aff>
<country xml:lang="fr">France</country>
<wicri:regionArea>Department of Computer Sciences, ETIS Laboratory, UMR CNRS 8051, the University of Cergy-Pontoise, ENSEA, Cergy-Pontoise</wicri:regionArea>
<wicri:noRegion>Cergy-Pontoise</wicri:noRegion>
</affiliation>
</author>
<author>
<name sortKey="Kuniyoshi, Yasuo" sort="Kuniyoshi, Yasuo" uniqKey="Kuniyoshi Y" first="Yasuo" last="Kuniyoshi">Yasuo Kuniyoshi</name>
<affiliation wicri:level="3">
<nlm:aff id="aff2">
<addr-line>ISI Laboratory, Department of Mechano-Informatics, Graduate School of Information Science and Technology, University of Tokyo, Tokyo, Japan</addr-line>
</nlm:aff>
<country xml:lang="fr">Japon</country>
<wicri:regionArea>ISI Laboratory, Department of Mechano-Informatics, Graduate School of Information Science and Technology, University of Tokyo, Tokyo</wicri:regionArea>
<placeName>
<settlement type="city">Tokyo</settlement>
</placeName>
</affiliation>
</author>
<author>
<name sortKey="Quoy, Mathias" sort="Quoy, Mathias" uniqKey="Quoy M" first="Mathias" last="Quoy">Mathias Quoy</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<addr-line>Department of Computer Sciences, ETIS Laboratory, UMR CNRS 8051, the University of Cergy-Pontoise, ENSEA, Cergy-Pontoise, France</addr-line>
</nlm:aff>
<country xml:lang="fr">France</country>
<wicri:regionArea>Department of Computer Sciences, ETIS Laboratory, UMR CNRS 8051, the University of Cergy-Pontoise, ENSEA, Cergy-Pontoise</wicri:regionArea>
<wicri:noRegion>Cergy-Pontoise</wicri:noRegion>
</affiliation>
</author>
<author>
<name sortKey="Gaussier, Philippe" sort="Gaussier, Philippe" uniqKey="Gaussier P" first="Philippe" last="Gaussier">Philippe Gaussier</name>
<affiliation wicri:level="1">
<nlm:aff id="aff1">
<addr-line>Department of Computer Sciences, ETIS Laboratory, UMR CNRS 8051, the University of Cergy-Pontoise, ENSEA, Cergy-Pontoise, France</addr-line>
</nlm:aff>
<country xml:lang="fr">France</country>
<wicri:regionArea>Department of Computer Sciences, ETIS Laboratory, UMR CNRS 8051, the University of Cergy-Pontoise, ENSEA, Cergy-Pontoise</wicri:regionArea>
<wicri:noRegion>Cergy-Pontoise</wicri:noRegion>
</affiliation>
</author>
</analytic>
<series>
<title level="j">PLoS ONE</title>
<idno type="eISSN">1932-6203</idno>
<imprint>
<date when="2013">2013</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass></textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">
<p>The question of whether newborns possess inborn social skills is a long-standing debate in developmental psychology. Fetal behavioral and anatomical observations show evidence of the control of eye movements and facial behaviors during the third trimester of pregnancy, whereas specific sub-cortical areas, like the superior colliculus (SC) and the striatum, appear to be functionally mature enough to support these behaviors. These observations suggest that the newborn is potentially mature enough to develop minimal social skills. In this manuscript, we propose that the mechanism of
<italic>sensory alignment</italic>
observed in SC is particularly important for enabling the social skills observed at birth, such as facial preference and facial mimicry. In a computational simulation of the maturing superior colliculus connected to the simulated facial tissue of a fetus, we model how incoming tactile information is used to direct visual attention toward faces. We suggest that the unisensory superficial visual layer (eye-centered) and the deep somatotopic layer (face-centered) in SC are combined into an intermediate layer for visuo-tactile integration, and that multimodal alignment in this third layer gives newborns a sensitivity to the configuration of eyes and mouth. We show that the visual and tactile maps align through a Hebbian learning stage and strengthen their synaptic links to each other in the intermediate layer. As a result, the global network produces emergent properties such as sensitivity to the spatial configuration of face-like patterns and the detection of eye and mouth movements.</p>
</div>
</front>
<back>
<div1 type="bibliography">
<listBibl>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Rochat, P" uniqKey="Rochat P">P Rochat</name>
</author>
</analytic>
</biblStruct>
<biblStruct></biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Johnson, M" uniqKey="Johnson M">M Johnson</name>
</author>
<author>
<name sortKey="Griffn, R" uniqKey="Griffn R">R Griffn</name>
</author>
<author>
<name sortKey="Csibra, G" uniqKey="Csibra G">G Csibra</name>
</author>
<author>
<name sortKey="Halit, H" uniqKey="Halit H">H Halit</name>
</author>
<author>
<name sortKey="Farroni, T" uniqKey="Farroni T">T Farroni</name>
</author>
</analytic>
</biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kuniyoshi, Y" uniqKey="Kuniyoshi Y">Y Kuniyoshi</name>
</author>
<author>
<name sortKey="Sangawa, S" uniqKey="Sangawa S">S Sangawa</name>
</author>
</analytic>
</biblStruct>
<biblStruct></biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kinjo, K" uniqKey="Kinjo K">K Kinjo</name>
</author>
<author>
<name sortKey="Nabeshima, C" uniqKey="Nabeshima C">C Nabeshima</name>
</author>
<author>
<name sortKey="Sangawa, S" uniqKey="Sangawa S">S Sangawa</name>
</author>
<author>
<name sortKey="Kuniyoshi, Y" uniqKey="Kuniyoshi Y">Y Kuniyoshi</name>
</author>
</analytic>
</biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Meltzoff, A" uniqKey="Meltzoff A">A Meltzoff</name>
</author>
<author>
<name sortKey="Moore, K" uniqKey="Moore K">K Moore</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Meltzoff, A" uniqKey="Meltzoff A">A Meltzoff</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Meltzoff, A" uniqKey="Meltzoff A">A Meltzoff</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ferrari, P" uniqKey="Ferrari P">P Ferrari</name>
</author>
<author>
<name sortKey="Paukner, A" uniqKey="Paukner A">A Paukner</name>
</author>
<author>
<name sortKey="Ruggiero, A" uniqKey="Ruggiero A">A Ruggiero</name>
</author>
<author>
<name sortKey="Darcey, L" uniqKey="Darcey L">L Darcey</name>
</author>
<author>
<name sortKey="Unbehagen, S" uniqKey="Unbehagen S">S Unbehagen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lepage, J" uniqKey="Lepage J">J Lepage</name>
</author>
<author>
<name sortKey="Theoret, H" uniqKey="Theoret H">H Théoret</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Valenza, E" uniqKey="Valenza E">E Valenza</name>
</author>
<author>
<name sortKey="Simion, F" uniqKey="Simion F">F Simion</name>
</author>
<author>
<name sortKey="Macchi Cassia, V" uniqKey="Macchi Cassia V">V Macchi Cassia</name>
</author>
<author>
<name sortKey="Umilta, C" uniqKey="Umilta C">C Umilta</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Simion, F" uniqKey="Simion F">F Simion</name>
</author>
<author>
<name sortKey="Valenza, E" uniqKey="Valenza E">E Valenza</name>
</author>
<author>
<name sortKey="Umilta, C" uniqKey="Umilta C">C Umilta</name>
</author>
<author>
<name sortKey="Dallabarba, B" uniqKey="Dallabarba B">B DallaBarba</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="De Haan, M" uniqKey="De Haan M">M de Haan</name>
</author>
<author>
<name sortKey="Pascalis, O" uniqKey="Pascalis O">O Pascalis</name>
</author>
<author>
<name sortKey="Johnson, M" uniqKey="Johnson M">M Johnson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Johnson, M" uniqKey="Johnson M">M Johnson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Johnson, M" uniqKey="Johnson M">M Johnson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Senju, A" uniqKey="Senju A">A Senju</name>
</author>
<author>
<name sortKey="Johnson, M" uniqKey="Johnson M">M Johnson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Morton, J" uniqKey="Morton J">J Morton</name>
</author>
<author>
<name sortKey="Johnson, M" uniqKey="Johnson M">M Johnson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Johnson, M" uniqKey="Johnson M">M Johnson</name>
</author>
<author>
<name sortKey="Dziurawiec, S" uniqKey="Dziurawiec S">S Dziurawiec</name>
</author>
<author>
<name sortKey="Ellis, H" uniqKey="Ellis H">H Ellis</name>
</author>
<author>
<name sortKey="J, M" uniqKey="J M">M J</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="De Schonen, S" uniqKey="De Schonen S">S de Schonen</name>
</author>
<author>
<name sortKey="Mathivet, E" uniqKey="Mathivet E">E Mathivet</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Acerra, F" uniqKey="Acerra F">F Acerra</name>
</author>
<author>
<name sortKey="Burnod, Y" uniqKey="Burnod Y">Y Burnod</name>
</author>
<author>
<name sortKey="De Schonen, S" uniqKey="De Schonen S">S de Schonen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Nelson, C" uniqKey="Nelson C">C Nelson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Turati, C" uniqKey="Turati C">C Turati</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Heyes, C" uniqKey="Heyes C">C Heyes</name>
</author>
</analytic>
</biblStruct>
<biblStruct></biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ray, E" uniqKey="Ray E">E Ray</name>
</author>
<author>
<name sortKey="Heyes, C" uniqKey="Heyes C">C Heyes</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kalesnykas, R" uniqKey="Kalesnykas R">R Kalesnykas</name>
</author>
<author>
<name sortKey="Sparks, D" uniqKey="Sparks D">D Sparks</name>
</author>
</analytic>
</biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Crish, S" uniqKey="Crish S">S Crish</name>
</author>
<author>
<name sortKey="Dengler Crish, C" uniqKey="Dengler Crish C">C Dengler-Crish</name>
</author>
<author>
<name sortKey="Comer, C" uniqKey="Comer C">C Comer</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Joseph, R" uniqKey="Joseph R">R Joseph</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Stein, B" uniqKey="Stein B">B Stein</name>
</author>
<author>
<name sortKey="Standford, T" uniqKey="Standford T">T Standford</name>
</author>
<author>
<name sortKey="Rowland, B" uniqKey="Rowland B">B Rowland</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Stanojevic, M" uniqKey="Stanojevic M">M Stanojevic</name>
</author>
<author>
<name sortKey="Kurjak, A" uniqKey="Kurjak A">A Kurjak</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="James, D" uniqKey="James D">D James</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Groh, J" uniqKey="Groh J">J Groh</name>
</author>
<author>
<name sortKey="Sparks, D" uniqKey="Sparks D">D Sparks</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Moschovakis, A" uniqKey="Moschovakis A">A Moschovakis</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Stein, B" uniqKey="Stein B">B Stein</name>
</author>
<author>
<name sortKey="Magalhaes Castro, B" uniqKey="Magalhaes Castro B">B Magalhães Castro</name>
</author>
<author>
<name sortKey="Kruger, L" uniqKey="Kruger L">L Kruger</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Dr Ger, U" uniqKey="Dr Ger U">U Dräger</name>
</author>
<author>
<name sortKey="Hubel, D" uniqKey="Hubel D">D Hubel</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="King, A" uniqKey="King A">A King</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Dominey, P" uniqKey="Dominey P">P Dominey</name>
</author>
<author>
<name sortKey="Arbib, M" uniqKey="Arbib M">M Arbib</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Stein, B" uniqKey="Stein B">B Stein</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Wallace, M" uniqKey="Wallace M">M Wallace</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Stein, B" uniqKey="Stein B">B Stein</name>
</author>
<author>
<name sortKey="Perrault Jr, T" uniqKey="Perrault Jr T">T Perrault Jr</name>
</author>
<author>
<name sortKey="Stanford, T" uniqKey="Stanford T">T Stanford</name>
</author>
<author>
<name sortKey="Rowland, B" uniqKey="Rowland B">B Rowland</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Wallace, M" uniqKey="Wallace M">M Wallace</name>
</author>
<author>
<name sortKey="Stein, B" uniqKey="Stein B">B Stein</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Stein, B" uniqKey="Stein B">B Stein</name>
</author>
<author>
<name sortKey="Burr, D" uniqKey="Burr D">D Burr</name>
</author>
<author>
<name sortKey="Constantinidis, C" uniqKey="Constantinidis C">C Constantinidis</name>
</author>
<author>
<name sortKey="Laurienti, P" uniqKey="Laurienti P">P Laurienti</name>
</author>
<author>
<name sortKey="Meredith, M" uniqKey="Meredith M">M Meredith</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bednar, J" uniqKey="Bednar J">J Bednar</name>
</author>
<author>
<name sortKey="Miikulainen, R" uniqKey="Miikulainen R">R Miikulainen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Balas, B" uniqKey="Balas B">B Balas</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Pascalis, O" uniqKey="Pascalis O">O Pascalis</name>
</author>
<author>
<name sortKey="De Haan, M" uniqKey="De Haan M">M de Haan</name>
</author>
<author>
<name sortKey="Nelson, C" uniqKey="Nelson C">C Nelson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Triplett, J" uniqKey="Triplett J">J Triplett</name>
</author>
<author>
<name sortKey="Phan, A" uniqKey="Phan A">A Phan</name>
</author>
<author>
<name sortKey="Yamada, J" uniqKey="Yamada J">J Yamada</name>
</author>
<author>
<name sortKey="Feldheim, D" uniqKey="Feldheim D">D Feldheim</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Benedetti, F" uniqKey="Benedetti F">F Benedetti</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Perrault Jr, T" uniqKey="Perrault Jr T">T Perrault Jr</name>
</author>
<author>
<name sortKey="Vaughan, J" uniqKey="Vaughan J">J Vaughan</name>
</author>
<author>
<name sortKey="Stein, B" uniqKey="Stein B">B Stein</name>
</author>
<author>
<name sortKey="Wallace, M" uniqKey="Wallace M">M Wallace</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Benedetti, F" uniqKey="Benedetti F">F Benedetti</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Wallace, M" uniqKey="Wallace M">M Wallace</name>
</author>
<author>
<name sortKey="Stein, B" uniqKey="Stein B">B Stein</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Wallace, M" uniqKey="Wallace M">M Wallace</name>
</author>
<author>
<name sortKey="Stein, B" uniqKey="Stein B">B Stein</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Van Rullen, R" uniqKey="Van Rullen R">R Van Rullen</name>
</author>
<author>
<name sortKey="Gautrais, J" uniqKey="Gautrais J">J Gautrais</name>
</author>
<author>
<name sortKey="Delorme, A" uniqKey="Delorme A">A Delorme</name>
</author>
<author>
<name sortKey="Thorpe, S" uniqKey="Thorpe S">S Thorpe</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Thorpe, S" uniqKey="Thorpe S">S Thorpe</name>
</author>
<author>
<name sortKey="Delorme, A" uniqKey="Delorme A">A Delorme</name>
</author>
<author>
<name sortKey="Van Rullen, R" uniqKey="Van Rullen R">R Van Rullen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kohonen, T" uniqKey="Kohonen T">T Kohonen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Sirosh, J" uniqKey="Sirosh J">J Sirosh</name>
</author>
<author>
<name sortKey="Miikulainen, I" uniqKey="Miikulainen I">I Miikulainen</name>
</author>
</analytic>
</biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Glaser, C" uniqKey="Glaser C">C Glasër</name>
</author>
<author>
<name sortKey="Joublin, F" uniqKey="Joublin F">F Joublin</name>
</author>
</analytic>
</biblStruct>
<biblStruct></biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Tsunozaki, M" uniqKey="Tsunozaki M">M Tsunozaki</name>
</author>
<author>
<name sortKey="Bautista, D" uniqKey="Bautista D">D Bautista</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Boot, P" uniqKey="Boot P">P Boot</name>
</author>
<author>
<name sortKey="Rowden, G" uniqKey="Rowden G">G Rowden</name>
</author>
<author>
<name sortKey="Walsh, N" uniqKey="Walsh N">N Walsh</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Feller, M" uniqKey="Feller M">M Feller</name>
</author>
<author>
<name sortKey="Butts, D" uniqKey="Butts D">D Butts</name>
</author>
<author>
<name sortKey="Aaron, H" uniqKey="Aaron H">H Aaron</name>
</author>
<author>
<name sortKey="Rokhsar, D" uniqKey="Rokhsar D">D Rokhsar</name>
</author>
<author>
<name sortKey="Shatz, C" uniqKey="Shatz C">C Shatz</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="De Vries, J" uniqKey="De Vries J">J de Vries</name>
</author>
<author>
<name sortKey="Visser, G" uniqKey="Visser G">G Visser</name>
</author>
<author>
<name sortKey="Prechtl, H" uniqKey="Prechtl H">H Prechtl</name>
</author>
</analytic>
</biblStruct>
<biblStruct></biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Crish, S" uniqKey="Crish S">S Crish</name>
</author>
<author>
<name sortKey="Comer, C" uniqKey="Comer C">C Comer</name>
</author>
<author>
<name sortKey="Marasco, P" uniqKey="Marasco P">P Marasco</name>
</author>
<author>
<name sortKey="Catania, K" uniqKey="Catania K">K Catania</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Van Rullen, R" uniqKey="Van Rullen R">R Van Rullen</name>
</author>
<author>
<name sortKey="Thorpe, S" uniqKey="Thorpe S">S Thorpe</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Pellegrini, G" uniqKey="Pellegrini G">G Pellegrini</name>
</author>
<author>
<name sortKey="De Arcangelis, L" uniqKey="De Arcangelis L">L de Arcangelis</name>
</author>
<author>
<name sortKey="Herrmann, H" uniqKey="Herrmann H">H Herrmann</name>
</author>
<author>
<name sortKey="Perrone Capano, C" uniqKey="Perrone Capano C">C Perrone-Capano</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Fruchterman, T" uniqKey="Fruchterman T">T Fruchterman</name>
</author>
<author>
<name sortKey="Reingold, E" uniqKey="Reingold E">E Reingold</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Sporns, O" uniqKey="Sporns O">O Sporns</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Pitti, A" uniqKey="Pitti A">A Pitti</name>
</author>
<author>
<name sortKey="Lungarella, M" uniqKey="Lungarella M">M Lungarella</name>
</author>
<author>
<name sortKey="Kuniyoshi, Y" uniqKey="Kuniyoshi Y">Y Kuniyoshi</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Farroni, T" uniqKey="Farroni T">T Farroni</name>
</author>
<author>
<name sortKey="Johnson, M" uniqKey="Johnson M">M Johnson</name>
</author>
<author>
<name sortKey="Menon, E" uniqKey="Menon E">E Menon</name>
</author>
<author>
<name sortKey="Zulian, L" uniqKey="Zulian L">L Zulian</name>
</author>
<author>
<name sortKey="Faraguna, D" uniqKey="Faraguna D">D Faraguna</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Sprague, J" uniqKey="Sprague J">J Sprague</name>
</author>
<author>
<name sortKey="Meikle, T" uniqKey="Meikle T">T Meikle</name>
</author>
</analytic>
</biblStruct>
<biblStruct></biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lungarella, M" uniqKey="Lungarella M">M Lungarella</name>
</author>
<author>
<name sortKey="Sporns, O" uniqKey="Sporns O">O Sporns</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kurjak, A" uniqKey="Kurjak A">A Kurjak</name>
</author>
<author>
<name sortKey="Azumendi, G" uniqKey="Azumendi G">G Azumendi</name>
</author>
<author>
<name sortKey="Vecek, N" uniqKey="Vecek N">N Vecek</name>
</author>
<author>
<name sortKey="Kupeic, S" uniqKey="Kupeic S">S Kupeic</name>
</author>
<author>
<name sortKey="Solak, M" uniqKey="Solak M">M Solak</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Simion, F" uniqKey="Simion F">F Simion</name>
</author>
<author>
<name sortKey="Regolin, L" uniqKey="Regolin L">L Regolin</name>
</author>
<author>
<name sortKey="Bulf, H" uniqKey="Bulf H">H Bulf</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Farroni, T" uniqKey="Farroni T">T Farroni</name>
</author>
<author>
<name sortKey="Csibra, G" uniqKey="Csibra G">G Csibra</name>
</author>
<author>
<name sortKey="Simion, F" uniqKey="Simion F">F Simion</name>
</author>
<author>
<name sortKey="Johnson, M" uniqKey="Johnson M">M Johnson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Streri, A" uniqKey="Streri A">A Streri</name>
</author>
<author>
<name sortKey="Lhote, M" uniqKey="Lhote M">M Lhote</name>
</author>
<author>
<name sortKey="Dutilleul, S" uniqKey="Dutilleul S">S Dutilleul</name>
</author>
</analytic>
</biblStruct>
<biblStruct></biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Shibata, M" uniqKey="Shibata M">M Shibata</name>
</author>
<author>
<name sortKey="Fuchino, Y" uniqKey="Fuchino Y">Y Fuchino</name>
</author>
<author>
<name sortKey="Naoi, N" uniqKey="Naoi N">N Naoi</name>
</author>
<author>
<name sortKey="Kohno, S" uniqKey="Kohno S">S Kohno</name>
</author>
<author>
<name sortKey="Kawai, M" uniqKey="Kawai M">M Kawai</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Myowa Yamakoshi, M" uniqKey="Myowa Yamakoshi M">M Myowa-Yamakoshi</name>
</author>
<author>
<name sortKey="Takeshita, H" uniqKey="Takeshita H">H Takeshita</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Nagy, E" uniqKey="Nagy E">E Nagy</name>
</author>
<author>
<name sortKey="Molnar, P" uniqKey="Molnar P">P Molnar</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Neil, Pa" uniqKey="Neil P">PA Neil</name>
</author>
<author>
<name sortKey="Chee Ruiter, C" uniqKey="Chee Ruiter C">C Chee-Ruiter</name>
</author>
<author>
<name sortKey="Scheier, C" uniqKey="Scheier C">C Scheier</name>
</author>
<author>
<name sortKey="Lewkowicz, Dj" uniqKey="Lewkowicz D">DJ Lewkowicz</name>
</author>
<author>
<name sortKey="Shimojo, S" uniqKey="Shimojo S">S Shimojo</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Salihagic Kadic, A" uniqKey="Salihagic Kadic A">A Salihagic Kadic</name>
</author>
<author>
<name sortKey="Predojevic, M" uniqKey="Predojevic M">M Predojevic</name>
</author>
<author>
<name sortKey="Kurjak, A" uniqKey="Kurjak A">A Kurjak</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bremner, A" uniqKey="Bremner A">A Bremner</name>
</author>
<author>
<name sortKey="Holmes, N" uniqKey="Holmes N">N Holmes</name>
</author>
<author>
<name sortKey="Spence, C" uniqKey="Spence C">C Spence</name>
</author>
</analytic>
</biblStruct>
<biblStruct></biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Andersen, R" uniqKey="Andersen R">R Andersen</name>
</author>
<author>
<name sortKey="Snyder, L" uniqKey="Snyder L">L Snyder</name>
</author>
<author>
<name sortKey="Li, Cs" uniqKey="Li C">CS Li</name>
</author>
<author>
<name sortKey="Stricanne, B" uniqKey="Stricanne B">B Stricanne</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Pouget, A" uniqKey="Pouget A">A Pouget</name>
</author>
<author>
<name sortKey="Snyder, L" uniqKey="Snyder L">L Snyder</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Salinas, E" uniqKey="Salinas E">E Salinas</name>
</author>
<author>
<name sortKey="Thier, P" uniqKey="Thier P">P Thier</name>
</author>
</analytic>
</biblStruct>
<biblStruct></biblStruct>
</listBibl>
</div1>
</back>
</TEI>
<pmc article-type="research-article">
<pmc-dir>properties open_access</pmc-dir>
<front>
<journal-meta>
<journal-id journal-id-type="nlm-ta">PLoS One</journal-id>
<journal-id journal-id-type="iso-abbrev">PLoS ONE</journal-id>
<journal-id journal-id-type="publisher-id">plos</journal-id>
<journal-id journal-id-type="pmc">plosone</journal-id>
<journal-title-group>
<journal-title>PLoS ONE</journal-title>
</journal-title-group>
<issn pub-type="epub">1932-6203</issn>
<publisher>
<publisher-name>Public Library of Science</publisher-name>
<publisher-loc>San Francisco, USA</publisher-loc>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="pmid">23922718</article-id>
<article-id pub-id-type="pmc">3724856</article-id>
<article-id pub-id-type="publisher-id">PONE-D-13-04392</article-id>
<article-id pub-id-type="doi">10.1371/journal.pone.0069474</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Research Article</subject>
</subj-group>
<subj-group subj-group-type="Discipline-v2">
<subject>Biology</subject>
<subj-group>
<subject>Anatomy and Physiology</subject>
<subj-group>
<subject>Musculoskeletal System</subject>
<subj-group>
<subject>Robotics</subject>
</subj-group>
</subj-group>
</subj-group>
<subj-group>
<subject>Computational Biology</subject>
<subj-group>
<subject>Computational Neuroscience</subject>
<subj-group>
<subject>Sensory Systems</subject>
</subj-group>
</subj-group>
</subj-group>
<subj-group>
<subject>Neuroscience</subject>
<subj-group>
<subject>Computational Neuroscience</subject>
<subj-group>
<subject>Sensory Systems</subject>
</subj-group>
</subj-group>
<subj-group>
<subject>Developmental Neuroscience</subject>
<subject>Neural Networks</subject>
</subj-group>
</subj-group>
</subj-group>
<subj-group subj-group-type="Discipline-v2">
<subject>Medicine</subject>
<subj-group>
<subject>Anatomy and Physiology</subject>
<subj-group>
<subject>Musculoskeletal System</subject>
<subj-group>
<subject>Robotics</subject>
</subj-group>
</subj-group>
</subj-group>
<subj-group>
<subject>Mental Health</subject>
<subj-group>
<subject>Psychology</subject>
<subj-group>
<subject>Developmental Psychology</subject>
</subj-group>
</subj-group>
</subj-group>
</subj-group>
<subj-group subj-group-type="Discipline-v2">
<subject>Social and Behavioral Sciences</subject>
<subj-group>
<subject>Psychology</subject>
<subj-group>
<subject>Developmental Psychology</subject>
</subj-group>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Modeling the Minimal Newborn's Intersubjective Mind: The Visuotopic-Somatotopic Alignment Hypothesis in the Superior Colliculus</article-title>
<alt-title alt-title-type="running-head">Sensory Alignment in SC for a Social Mind</alt-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name>
<surname>Pitti</surname>
<given-names>Alexandre</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
<xref ref-type="corresp" rid="cor1">
<sup>*</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Kuniyoshi</surname>
<given-names>Yasuo</given-names>
</name>
<xref ref-type="aff" rid="aff2">
<sup>2</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Quoy</surname>
<given-names>Mathias</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Gaussier</surname>
<given-names>Philippe</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
</contrib>
</contrib-group>
<aff id="aff1">
<label>1</label>
<addr-line>Department of Computer Science, ETIS Laboratory, UMR CNRS 8051, the University of Cergy-Pontoise, ENSEA, Cergy-Pontoise, France</addr-line>
</aff>
<aff id="aff2">
<label>2</label>
<addr-line>ISI Laboratory, Department of Mechano-Informatics, Graduate School of Information Science and Technology, University of Tokyo, Tokyo, Japan</addr-line>
</aff>
<contrib-group>
<contrib contrib-type="editor">
<name>
<surname>Chacron</surname>
<given-names>Maurice J.</given-names>
</name>
<role>Editor</role>
<xref ref-type="aff" rid="edit1"></xref>
</contrib>
</contrib-group>
<aff id="edit1">
<addr-line>McGill University, Canada</addr-line>
</aff>
<author-notes>
<corresp id="cor1">* E-mail:
<email>alexandre.pitti@ensea.fr</email>
</corresp>
<fn fn-type="conflict">
<p>
<bold>Competing Interests: </bold>
The authors have declared that no competing interests exist.</p>
</fn>
<fn fn-type="con">
<p>Conceived and designed the experiments: AP YK MQ PG. Performed the experiments: AP. Analyzed the data: AP. Contributed reagents/materials/analysis tools: AP. Wrote the paper: AP YK MQ PG. Computational modeling: AP.</p>
</fn>
</author-notes>
<pub-date pub-type="collection">
<year>2013</year>
</pub-date>
<pub-date pub-type="epub">
<day>26</day>
<month>7</month>
<year>2013</year>
</pub-date>
<volume>8</volume>
<issue>7</issue>
<elocation-id>e69474</elocation-id>
<history>
<date date-type="received">
<day>29</day>
<month>1</month>
<year>2013</year>
</date>
<date date-type="accepted">
<day>10</day>
<month>6</month>
<year>2013</year>
</date>
</history>
<permissions>
<copyright-year>2013</copyright-year>
<copyright-holder>Pitti et al</copyright-holder>
<license>
<license-p>This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.</license-p>
</license>
</permissions>
<abstract>
<p>The question of whether newborns possess inborn social skills is a long-standing debate in developmental psychology. Fetal behavioral and anatomical observations show evidence of the control of eye movements and facial behaviors during the third trimester of pregnancy, while specific sub-cortical areas, like the superior colliculus (SC) and the striatum, appear to be functionally mature to support these behaviors. These observations suggest that the newborn is potentially mature for developing minimal social skills. In this manuscript, we propose that the mechanism of
<italic>sensory alignment</italic>
observed in SC is particularly important for enabling the social skills observed at birth, such as facial preference and facial mimicry. In a computational simulation of the maturing superior colliculus connected to the simulated facial tissue of a fetus, we model how incoming tactile information is used to direct visual attention toward faces. We suggest that the unisensory superficial visual layer (eye-centered) and the deep somatotopic layer (face-centered) in SC are combined into an intermediate layer for visuo-tactile integration, and that multimodal alignment in this third layer allows newborns to be sensitive to the configuration of the eyes and mouth. We show that the visual and tactile maps align through a Hebbian learning stage and strengthen their synaptic links to the intermediate layer. As a result, the global network produces emergent properties such as sensitivity to the spatial configuration of face-like patterns and the detection of eye and mouth movements.</p>
</abstract>
<funding-group>
<funding-statement>This study was supported by Japan Science and Technology Agency Asada ERATO Synergistic project (Japan) (
<ext-link ext-link-type="uri" xlink:href="http://www.jst.go.jp/EN/">http://www.jst.go.jp/EN/</ext-link>
) and Agence Nationale de la Recherche project INTERACT ANR09-CORD-014 (France) (
<ext-link ext-link-type="uri" xlink:href="http://www.agence-nationale-recherche.fr/">http://www.agence-nationale-recherche.fr/</ext-link>
). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.</funding-statement>
</funding-group>
<counts>
<page-count count="14"></page-count>
</counts>
</article-meta>
</front>
<body>
<sec id="s1">
<title>Introduction</title>
<p>A growing number of developmental studies suggest that the newborn infant is prepared, evolutionarily and physiologically, to be born intersubjective
<xref ref-type="bibr" rid="pone.0069474-Nagy1">[1]</xref>
<xref ref-type="bibr" rid="pone.0069474-Trevarthen1">[3]</xref>
. Here, social cognition is thought to start at the very beginning of infant development
<xref ref-type="bibr" rid="pone.0069474-Rochat1">[4]</xref>
<xref ref-type="bibr" rid="pone.0069474-Johnson1">[6]</xref>
, instead of at its completion, as Piaget proposed
<xref ref-type="bibr" rid="pone.0069474-Piaget1">[7]</xref>
. The unmatured brain of the fetus is argued to be socially prepared to recognize human faces at birth, to make eye contact with others
<xref ref-type="bibr" rid="pone.0069474-Rigato1">[8]</xref>
, to respond emotionally to biological motion, and to imitate others with limited abilities. In this nature versus nurture debate, we propose to investigate what could be the minimal neural core responsible for the development of the neonate social brain. This work extends our previous investigations, in which we modeled different aspects of fetal and infant development with computer simulations
<xref ref-type="bibr" rid="pone.0069474-Kuniyoshi1">[9]</xref>
<xref ref-type="bibr" rid="pone.0069474-Boucenna1">[16]</xref>
.</p>
<p>Perhaps the most famous experiment in favor of neonate social engagement is the one conducted by Meltzoff, who showed that newborns are capable of imitating facial gestures off-the-shelf
<xref ref-type="bibr" rid="pone.0069474-Meltzoff1">[17]</xref>
. Although still under debate, neonate imitation suggests that the bonding of human newborns is either innate or acquired from an early imprinting of the body image. Whether these neural circuits are pre-wired or not, they necessarily influence the normal cognitive development of neonates to guide the spontaneous interactions in the physical world and in the social world. Meltzoff suggests that neonates interact with others
<italic>because</italic>
they are capable of goal-directed actions and
<italic>because</italic>
they recognize this genuine characteristic in others. He summarized this idea in his “like-me” theory
<xref ref-type="bibr" rid="pone.0069474-Meltzoff2">[18]</xref>
where he proposes that this mirroring mechanism between self and others could be based on a supra-modal representation of the body constructed from intra-uterine motor-babbling experiences. Accordingly, this supramodal body image is supposed to identify organs and their configural relations, which will later serve the cross-modal equivalence underlying imitation
<xref ref-type="bibr" rid="pone.0069474-Meltzoff3">[19]</xref>
. The successful replication of neonatal imitation in monkeys by Ferrari further argues for the commonality of an early recognition mechanism in mammalian development, which may be based on “mouth mirror neurons” for facial and ingestive actions
<xref ref-type="bibr" rid="pone.0069474-Ferrari1">[20]</xref>
,
<xref ref-type="bibr" rid="pone.0069474-Lepage1">[21]</xref>
. Although the visual and motor cortices seem mature enough to support such a system at birth, a subcortical scenario is more probable
<xref ref-type="bibr" rid="pone.0069474-Valenza1">[22]</xref>
,
<xref ref-type="bibr" rid="pone.0069474-Simion1">[23]</xref>
, in which the subcortical units shape the cerebral cortex. This scenario may explain how a primitive body image could be accessible at an early age for sensorimotor coordination.</p>
<p>Consequently, the early functioning of the subcortical structures from the fetal stage appears very important for cortical development and therefore for the development of the social brain
<xref ref-type="bibr" rid="pone.0069474-Johnson1">[6]</xref>
,
<xref ref-type="bibr" rid="pone.0069474-deHaan1">[24]</xref>
<xref ref-type="bibr" rid="pone.0069474-Johnson3">[26]</xref>
. Considering further the case of neonate face recognition, Johnson argues that the visual cortex is not mature enough before two months to support this function
<xref ref-type="bibr" rid="pone.0069474-Senju1">[27]</xref>
. He proposes that a fast-track modulation model that includes the superior colliculus (SC), the pulvinar, and the amygdala is at work in newborns for face detection, mood recognition, and eye contact. He also suggests that this midbrain structure –dubbed the CONSPEC model– includes an innate, nonplastic face-like visual pattern that gradually influences the learning of a separate, plastic cortical system, dubbed the CONLERN model
<xref ref-type="bibr" rid="pone.0069474-Morton1">[28]</xref>
,
<xref ref-type="bibr" rid="pone.0069474-Johnson4">[29]</xref>
; a variant of this model has been given by
<xref ref-type="bibr" rid="pone.0069474-deSchonen1">[30]</xref>
,
<xref ref-type="bibr" rid="pone.0069474-Acerra1">[31]</xref>
.</p>
<p>So far, despite their appealing layouts, Meltzoff's and Johnson's models have been criticized for lacking evidence that
<italic>(i)</italic>
the visual motor pathway has feature detectors that would cause faces to be attractive
<xref ref-type="bibr" rid="pone.0069474-Nelson1">[32]</xref>
,
<xref ref-type="bibr" rid="pone.0069474-Turati1">[33]</xref>
and that
<italic>(ii)</italic>
motor outputs actually look the same from a third-party perspective
<xref ref-type="bibr" rid="pone.0069474-Heyes1">[34]</xref>
, which refers to the so-called correspondence problem
<xref ref-type="bibr" rid="pone.0069474-Brass1">[35]</xref>
,
<xref ref-type="bibr" rid="pone.0069474-Ray1">[36]</xref>
. We propose nonetheless that a framework consistent with both viewpoints can be drawn based on the neural functioning of the SC. More precisely, the SC presents three relevant features that are potentially determinant for the building of a social brain
<xref ref-type="bibr" rid="pone.0069474-Johnson1">[6]</xref>
.</p>
<p>First, SC supports unisensory processing in the visual, auditory and somatosensory domains accessible in a topographically-ordered representation to orient the animal to the source of sensory stimuli. Just as visual cues orient the eyes for tracking behaviors
<xref ref-type="bibr" rid="pone.0069474-Kalesnykas1">[37]</xref>
, somatosensory cues extend the motor repertoire for full-body representation, including the neck and the face
<xref ref-type="bibr" rid="pone.0069474-Stein1">[38]</xref>
<xref ref-type="bibr" rid="pone.0069474-Crish1">[40]</xref>
; the SC is coextensive with the pons, which is concerned with facial sensation, movement and vibro-acoustic sensation
<xref ref-type="bibr" rid="pone.0069474-Joseph1">[41]</xref>
and the face is represented in a magnified fashion with receptive fields
<xref ref-type="bibr" rid="pone.0069474-Stein1">[38]</xref>
. Although the SC is a late-maturing structure, the somatosensory modality is the first to be mapped, in the third trimester of pregnancy
<xref ref-type="bibr" rid="pone.0069474-Stein2">[42]</xref>
, followed by vision, as evidenced by observations of ocular saccade behaviors
<xref ref-type="bibr" rid="pone.0069474-Stanojevic1">[43]</xref>
. These aspects are important since some developmental studies attribute to SC a role in fetal learning, using some form of vibro-acoustic stimulation to explain how the fetus is capable of sensing and learning through the body skin
<xref ref-type="bibr" rid="pone.0069474-James1">[44]</xref>
and since SC is well-known as an important pathway for gaze shifting and saccade control
<xref ref-type="bibr" rid="pone.0069474-Groh1">[45]</xref>
,
<xref ref-type="bibr" rid="pone.0069474-Moschovakis1">[46]</xref>
. Second, the SC supports sensory alignment of each topographic layer. That is, the somatotopic organization (in the deeper layers) is not only topographic but also follows the design of the visual map (in the superficial layers)
<xref ref-type="bibr" rid="pone.0069474-Stein1">[38]</xref>
,
<xref ref-type="bibr" rid="pone.0069474-Stein3">[47]</xref>
<xref ref-type="bibr" rid="pone.0069474-King1">[49]</xref>
. Third, the intermediate layers exhibit ‘multisensory facilitation’ to converging inputs from different sensory modalities within the same region in space. As expressed by King,
<italic>“multisensory facilitation is likely to be extremely useful for aiding localization of biologically important events, such as potential predators and prey, (…) and to a number of behavioral phenomena”</italic>
<xref ref-type="bibr" rid="pone.0069474-King1">[49]</xref>
. Stein and colleagues also underline the importance of the multimodal alignment between the visuotopic and somatotopic organizations for seizing or manipulating prey and for adjusting the body
<xref ref-type="bibr" rid="pone.0069474-Stein3">[47]</xref>
.</p>
<p>Collectively, these aligned colliculus layers suggest that the sensorimotor space of the animal is represented in ego-centered coordinates
<xref ref-type="bibr" rid="pone.0069474-Ferrell1">[39]</xref>
as it has been proposed by Stein and Meredith
<xref ref-type="bibr" rid="pone.0069474-Stein1">[38]</xref>
and others
<xref ref-type="bibr" rid="pone.0069474-Dominey1">[50]</xref>
; the SC is made up not of separate visual, auditory, and somatosensory maps, but rather of a single integrated multisensory map. Although comparative research in cats indicates that multimodal integration in SC is protracted into postnatal periods, after considerable sensory experience
<xref ref-type="bibr" rid="pone.0069474-Stein4">[51]</xref>
<xref ref-type="bibr" rid="pone.0069474-Stein5">[53]</xref>
, multisensory integration is present at birth in the rhesus monkey
<xref ref-type="bibr" rid="pone.0069474-Wallace2">[54]</xref>
and has been suggested to play a role in neonatal orientation behaviors in humans. Moreover, while the difficulty of comparing human development with that of other species has been acknowledged,
<italic>“some human infant studies suggest a developmental pattern wherein some low-level multisensory capabilities appear to be present at birth or emerge shortly thereafter”</italic>
<xref ref-type="bibr" rid="pone.0069474-Stein6">[55]</xref>
.</p>
<p>Considering these points about SC functionalities and developmental observations, we hypothesize that SC supports some neonatal social behaviors, like facial preference and simple facial mimicry, as a multimodal experience between the visual and somatosensory modalities,
<italic>not</italic>
just as a simple visual processing experience as it is commonly understood (see
<xref ref-type="fig" rid="pone-0069474-g001">Fig. 1</xref>
). We argue that, in comparison to standard visual stimuli, face-like visual patterns could constitute a unique type of stimulus, as they overlap almost perfectly with the same region in the visual topographic map and in the somatotopic map. We propose therefore that the alignment of the external face-like stimuli in the SC visual map (another person's face) with the internal facial representation in the somatotopic map (one's own face) may accelerate and intensify multisensory binding between the visual and the somatosensory maps. Ocular saccades to the correct stimulus may further facilitate the fine-tuning of the sensory alignment between the maps.</p>
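To make this binding hypothesis concrete, the multisensory facilitation between aligned maps can be sketched as a plain Hebbian rule linking two unisensory activity vectors to a shared intermediate layer. This is a minimal illustration under our own simplifying assumptions (flattened maps, tanh activation, outer-product update), not the network detailed later in the paper; all names are hypothetical:

```python
import numpy as np

def hebbian_align(visual, tactile, W_v, W_t, lr=0.01):
    """One Hebbian update binding a visual and a tactile activity
    pattern into a shared intermediate layer.

    visual, tactile : 1-D activity vectors (e.g. flattened maps)
    W_v, W_t        : weights from each unisensory map to the
                      intermediate layer (n_inter x n_input)
    """
    # Intermediate activity is driven by both modalities at once.
    inter = np.tanh(W_v @ visual + W_t @ tactile)
    # Hebbian outer-product update: links strengthen when pre- and
    # post-synaptic units are co-active, which aligns the two maps
    # whenever face-like visual input overlaps the facial somatotopy.
    W_v += lr * np.outer(inter, visual)
    W_t += lr * np.outer(inter, tactile)
    return inter, W_v, W_t
```

With spatially aligned inputs, co-active visual and tactile units repeatedly drive the same intermediate units, so their links to that layer strengthen together, which is the alignment effect the hypothesis relies on.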
<fig id="pone-0069474-g001" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0069474.g001</object-id>
<label>Figure 1</label>
<caption>
<title>Proposal for a minimal network in SC for an inter-subjective mind.</title>
<p>In comparison to normal stimuli, we propose that faces are particular patterns because the visual and somatic maps in the superior colliculus are perfectly aligned topologically in the intermediate layer. We suggest that the spatial distribution of the neurons in the somatotopic map is preserved in the intermediate map, which makes the multimodal neurons salient to visual patterns with a similar spatial configuration of eyes and mouth. We hypothesize that this feature potentially influences the social skills of neonates, for detecting faces and reproducing facial movements.</p>
</caption>
<graphic xlink:href="pone.0069474.g001"></graphic>
</fig>
<p>Moreover, in comparison with unimodal models of facial orientation, which support a phylogenetic ground of social development
<xref ref-type="bibr" rid="pone.0069474-Acerra1">[31]</xref>
,
<xref ref-type="bibr" rid="pone.0069474-Bednar1">[56]</xref>
,
<xref ref-type="bibr" rid="pone.0069474-Balas1">[57]</xref>
, this scenario would have the advantage of explaining, from a constructivist viewpoint, why neonates may prefer to look at configurational patterns of eyes and mouth rather than at other types of stimuli
<xref ref-type="bibr" rid="pone.0069474-Johnson2">[25]</xref>
,
<xref ref-type="bibr" rid="pone.0069474-Pascalis1">[58]</xref>
. Stated this way, the ego-centric and multimodal representation in the SC has many similarities with Meltzoff's suggestion of an inter- but not supra-modal representation of the body responsible for neonate imitation.</p>
<p>In this paper, we model the perinatal period, starting from the maturation of the unisensory layers to multisensory integration in the SC. This corresponds first to the fetal maturation of the deep layers (somatosensory only) and of the superficial layer (vision only), then to the post-natal visuo-somatosensory integration in the intermediate layers when the neonate perceives face-like patterns. Nonetheless, we note that we do not model map formation in SC at the molecular level, although there is some evidence that activity-independent mechanisms are used to establish topographic alignment between modalities, such as the molecular gradient-matching mechanism studied in
<xref ref-type="bibr" rid="pone.0069474-Triplett1">[59]</xref>
. Instead, we focus, at the epigenetic level, on the experience-driven formation of the neural maps during sensorimotor learning, modeling the adaptation mechanisms of multisensory integration that occur when stimuli from different senses are in close spatial and temporal proximity
<xref ref-type="bibr" rid="pone.0069474-Benedetti1">[60]</xref>
<xref ref-type="bibr" rid="pone.0069474-Wallace4">[64]</xref>
.</p>
<p>In computer simulations with realistic physiological properties of a fetus's face, we simulate how somatosensory experiences resulting from distortions of the soft tissues (e.g., during the motion of the mouth or the contraction of the eye muscles) contribute to the construction of a facial representation. To this end, we use an original implementation of feed-forward spiking neural networks to model the topological formation that may occur in neural tissues. Its learning mechanism is based on the rank order coding algorithm proposed by Thorpe and colleagues
<xref ref-type="bibr" rid="pone.0069474-VanRullen1">[65]</xref>
,
<xref ref-type="bibr" rid="pone.0069474-Thorpe1">[66]</xref>
, which transforms one input's amplitude into an ordered temporal code. We take advantage of this biologically-plausible mechanism to preserve the input's temporal structure on the one hand and to transpose it into its corresponding spatial topology on the other hand.</p>
<p>In comparison to other topological algorithms
<xref ref-type="bibr" rid="pone.0069474-Kohonen1">[67]</xref>
<xref ref-type="bibr" rid="pone.0069474-Glasr1">[71]</xref>
, the synaptic weights of each neuron encode its vicinity to other neurons based on their rank order: that is, neurons with similar rank codes are spatially close. First, we study how the sensory inputs shape the sensory mapping and how multimodal integration occurs between the two maps within an intermediate layer that learns information from both. We propose that the registration of the somatosensory neural image aligned with the visual coordinates, as it could occur in the SC at birth, may offer a simple solution to the correspondence problem, for instance, for recognizing and mimicking the raw configuration of other people's facial expressions at birth. This scenario is in line with Boucenna and colleagues, who showed how social referencing can emerge from simple sensorimotor systems
<xref ref-type="bibr" rid="pone.0069474-Boucenna1">[16]</xref>
,
<xref ref-type="bibr" rid="pone.0069474-Boucenna2">[72]</xref>
.</p>
</sec>
<sec id="s2">
<title>Models</title>
<sec id="s2a">
<title>Face Modeling</title>
<p>In order to simulate the somatosensory information on the skin, we use a physical simulation that matches the average characteristics of a 7–9-month-old fetus's face. In our experiments, the whole face can move freely, so that its motion can generate weak displacements at the skin surface and strong-amplitude forces during contact.</p>
<p>The facial tissue is modeled as a mass-spring network, and local stretches are calculated with Hooke's spring law (see below), which represents the forces that a spring exerts on two points. The resulting forces on each node of the mesh simulate tactile receptors like the Meissner's corpuscles, which detect facial vibro-acoustic pressures and distortions during facial actions
<xref ref-type="bibr" rid="pone.0069474-Tsunozaki1">[73]</xref>
, see
<xref ref-type="fig" rid="pone-0069474-g002">Fig. 2</xref>
.
<disp-formula id="pone.0069474.e005">
<graphic xlink:href="pone.0069474.e005"></graphic>
</disp-formula>
<disp-formula id="pone.0069474.e006">
<graphic xlink:href="pone.0069474.e006"></graphic>
</disp-formula>
<disp-formula id="pone.0069474.e007">
<graphic xlink:href="pone.0069474.e007"></graphic>
</disp-formula>
<disp-formula id="pone.0069474.e008">
<graphic xlink:href="pone.0069474.e008"></graphic>
</disp-formula>
</p>
<fig id="pone-0069474-g002" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0069474.g002</object-id>
<label>Figure 2</label>
<caption>
<title>Face mesh of the fetus model.</title>
<p>The distortion of the facial tissue is simulated as a mass-spring network of
<inline-formula>
<inline-graphic xlink:href="pone.0069474.e001.jpg"></inline-graphic>
</inline-formula>
tactile points and
<inline-formula>
<inline-graphic xlink:href="pone.0069474.e002.jpg"></inline-graphic>
</inline-formula>
springs. Stress and displacement of the facial tissue are rendered by the actions of muscle groups around the mouth and the eyes. In
<bold>A</bold>
, the front view of the face, the warm colors indicate the position of the segments in depth. The plot in
<bold>B</bold>
, the profile view, indicates the action limits of the face mesh along the Z axis.</p>
</caption>
<graphic xlink:href="pone.0069474.g002"></graphic>
</fig>
<p>This formula represents the force applied to the particles
<inline-formula>
<inline-graphic xlink:href="pone.0069474.e009.jpg"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pone.0069474.e010.jpg"></inline-graphic>
</inline-formula>
; the distance between these particles,
<inline-formula>
<inline-graphic xlink:href="pone.0069474.e011.jpg"></inline-graphic>
</inline-formula>
; the rest length of the spring,
<inline-formula>
<inline-graphic xlink:href="pone.0069474.e012.jpg"></inline-graphic>
</inline-formula>
; the spring constant or stiffness,
<inline-formula>
<inline-graphic xlink:href="pone.0069474.e013.jpg"></inline-graphic>
</inline-formula>
; the damping constant,
<inline-formula>
<inline-graphic xlink:href="pone.0069474.e014.jpg"></inline-graphic>
</inline-formula>
; and the velocity of the particles,
<inline-formula>
<inline-graphic xlink:href="pone.0069474.e015.jpg"></inline-graphic>
</inline-formula>
. The damping term in the equation is needed to simulate the natural damping that would occur due to friction forces. This force, called viscous damping, is the friction force exerted on the mesh network, directly proportional and opposite to the velocity of the moving mass. In practice, the damping term lends stability to the action of the spring. The facial tissue is modeled with
<inline-formula>
<inline-graphic xlink:href="pone.0069474.e016.jpg"></inline-graphic>
</inline-formula>
vertices and
<inline-formula>
<inline-graphic xlink:href="pone.0069474.e017.jpg"></inline-graphic>
</inline-formula>
edges, and the mouth and the eyes apertures represent concave sets forming non-contiguous ensembles. The collision detection between two points or two springs is activated depending on the relative distance between the nodes and whether they are connected or not. On the one hand, for the case of contiguous points –that is, for the points connected with a spring– force collision is proportionnal to the local spring stiffness, to which no ad hoc force is added; this physical model corresponds to the behavior of the Meissner's corpuscles.</p>
<p>On the other hand, for non-contiguous points –that is, unconnected points– virtual springs are added at the contact points to model the softness of the tissue junction and the stress in the radial direction; this physical model corresponds to the behavior of the Merkel cells, which are tactile receptors that detect pressure at localized points
<xref ref-type="bibr" rid="pone.0069474-Boot1">[74]</xref>
. The radial force is added when the nodes' spatial location is below a certain minimal distance
<inline-formula>
<inline-graphic xlink:href="pone.0069474.e018.jpg"></inline-graphic>
</inline-formula>
equal to
<inline-formula>
<inline-graphic xlink:href="pone.0069474.e019.jpg"></inline-graphic>
</inline-formula>
.</p>
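A sketch of this contact rule, assuming the virtual spring simply engages when the distance falls below the contact threshold and uses that threshold as its rest length (our reading, with hypothetical names):

```python
import numpy as np

def contact_force(x_a, x_b, v_a, v_b, d_min, k_s, k_d):
    """Radial force on point a from an unconnected point b, modeled
    as a virtual spring that only engages below the contact
    distance d_min (a simple reading of the Merkel-cell model)."""
    d = x_b - x_a
    dist = np.linalg.norm(d)
    if dist >= d_min:
        # No contact: the virtual spring does not exist yet.
        return np.zeros_like(d)
    u = d / dist
    # Compressed virtual spring (rest length d_min) pushes a away
    # from b; damping opposes the relative radial velocity.
    return (k_s * (dist - d_min) + k_d * np.dot(v_b - v_a, u)) * u
```

Below the threshold, the compressed virtual spring yields a repulsive radial force, which plays the role of localized pressure detection.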
<p>For the sake of simplicity, we model the mouth and eye motor activities with virtual springs on the two lips of the mouth and on the two lids of the eyes. The contractions of these fictitious links control the closure or opening of the aperture of the mouth or of the eyes. In addition, we define as a prior choice that the two eyes move together (no eye blinking).</p>
</sec>
<sec id="s2b">
<title>Visual System Implementation</title>
<p>The eyes are the most controlled of the infant's motor abilities at birth
<xref ref-type="bibr" rid="pone.0069474-Moschovakis1">[46]</xref>
. Although it is still unclear how and why the visual system emerges during development, it has been argued that SC supports early visuomotor transformation
<xref ref-type="bibr" rid="pone.0069474-Johnson2">[25]</xref>
.</p>
<p>Another proposal is that, before birth, traveling waves in the retina could serve as input to organize the formation of topological maps in the collicular visual system, furnishing preferential orientation and direction
<xref ref-type="bibr" rid="pone.0069474-Feller1">[75]</xref>
. This process may take place even in the prenatal period, because the eyes of the fetus can be seen to move in the womb from 18 weeks after conception, although the eyes stay closed until week 26 (6 months)
<xref ref-type="bibr" rid="pone.0069474-deVries1">[76]</xref>
,
<xref ref-type="bibr" rid="pone.0069474-Prechtl1">[77]</xref>
.</p>
<p>We model a rough eye receptive field to simulate this modality with a two-dimensional matrix of
<inline-formula>
<inline-graphic xlink:href="pone.0069474.e021.jpg"></inline-graphic>
</inline-formula>
pixels (no log-polar transform), whose values lie between
<inline-formula>
<inline-graphic xlink:href="pone.0069474.e022.jpg"></inline-graphic>
</inline-formula>
with no neighboring information shared between them. Moreover, the eye position is considered fixed. We note that the topology respects the density distribution of the eye receptors, in order to carry more information at the fovea.</p>
</sec>
<sec id="s2c">
<title>Superior Colliculus Neural Model</title>
<p>Although there is little information about how non-visual information is translated into orienting motor input, numerous studies on fetal learning report motor habituation to vibro-acoustic stimuli
<xref ref-type="bibr" rid="pone.0069474-James1">[44]</xref>
. The exploration of general movements in the womb is likely to generate intrinsic sensory stimuli pertinent for sensorimotor learning
<xref ref-type="bibr" rid="pone.0069474-Joseph1">[41]</xref>
. For instance, recent studies of the SC in the infant mole-rat provide evidence for population coding strategies by which a mammal accomplishes orientation to somatosensory cues, in a fashion similar to the processing of visual cues and to eye control in the SC
<xref ref-type="bibr" rid="pone.0069474-Crish1">[40]</xref>
,
<xref ref-type="bibr" rid="pone.0069474-Crish2">[78]</xref>
, even at birth
<xref ref-type="bibr" rid="pone.0069474-Moschovakis1">[46]</xref>
. Other research further supports activity-dependent integration in the SC during map formation
<xref ref-type="bibr" rid="pone.0069474-Benedetti1">[60]</xref>
,
<xref ref-type="bibr" rid="pone.0069474-Benedetti2">[62]</xref>
, even though some molecular mechanisms are also at work
<xref ref-type="bibr" rid="pone.0069474-Triplett1">[59]</xref>
.</p>
<p>Considering these points, we propose to model the experience-dependent formation of visuotopic and somatopic maps in the SC using a population coding strategy capable of preserving the input topology. To this end, we use the rank-order coding algorithm proposed by Thorpe and colleagues
<xref ref-type="bibr" rid="pone.0069474-VanRullen1">[65]</xref>
,
<xref ref-type="bibr" rid="pone.0069474-VanRullen2">[79]</xref>
, which modulates the neuron's activation depending on the
<italic>rank-ordered</italic>
values of the input vector,
<italic>not</italic>
directly on the input values.</p>
<p>In comparison to Kohonen-like topological maps, this very fast, biologically inspired algorithm has the advantage of preserving the temporal or phasic details of the input structure during learning, which can be exploited to organize the topology of the neural maps rapidly.</p>
<p>The conversion of the input vector from an analog code to a rank-order code is simply done by assigning to each input its ordinality
<inline-formula>
<inline-graphic xlink:href="pone.0069474.e025.jpg"></inline-graphic>
</inline-formula>
depending on its relative value compared to other inputs
<xref ref-type="bibr" rid="pone.0069474-Thorpe1">[66]</xref>
. Each neuron is associated with a specific rank code of the input units, so that it is activated when this sequence occurs. A simple model of the activation function is to modulate the neuron's sensitivity based on the order in the input sequence
<inline-formula>
<inline-graphic xlink:href="pone.0069474.e026.jpg"></inline-graphic>
</inline-formula>
relative to its own ordinal sequence
<inline-formula>
<inline-graphic xlink:href="pone.0069474.e027.jpg"></inline-graphic>
</inline-formula>
, so that any other pattern of firing produces a lower level of activation, with the weakest response produced when the inputs arrive in the opposite order. Its synaptic weights are learned to encode this rank code:
<inline-formula>
<inline-graphic xlink:href="pone.0069474.e028.jpg"></inline-graphic>
</inline-formula>
Its activation function is:
<disp-formula id="pone.0069474.e029">
<graphic xlink:href="pone.0069474.e029"></graphic>
</disp-formula>
<disp-formula id="pone.0069474.e030">
<graphic xlink:href="pone.0069474.e030"></graphic>
</disp-formula>
</p>
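The rank-order activation described above can be sketched as follows; a minimal illustration assuming the geometric rank modulation commonly used with Thorpe's scheme (the modulation factor lam is a hypothetical value):

```python
import numpy as np

def rank_code(x):
    """Assign each input its ordinality: 0 for the strongest input,
    len(x) - 1 for the weakest."""
    order = np.argsort(-np.asarray(x, dtype=float))
    ranks = np.empty(len(order), dtype=int)
    ranks[order] = np.arange(len(order))
    return ranks

def activation(input_ranks, stored_ranks, lam=0.9):
    """A neuron's sensitivity decays geometrically with each input's firing
    rank; the response is maximal when the input order matches the stored
    order and weakest when the inputs arrive in the opposite order."""
    return float(np.sum(lam ** stored_ranks * lam ** input_ranks))

x = [0.1, 0.9, 0.5, 0.3]
r = rank_code(x)                   # ranks of the analog inputs
matched = activation(r, r)         # input order matches the learned code
opposite = (len(r) - 1) - r        # inputs firing in the reverse order
weakest = activation(opposite, r)
```

By the rearrangement inequality, the matched order maximizes the response over all permutations, which is the selectivity property used here.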
<p>The most active neuron wins the competition and sees its weights updated according to a gradient descent rule:
<disp-formula id="pone.0069474.e031">
<graphic xlink:href="pone.0069474.e031"></graphic>
</disp-formula>
with
<inline-formula>
<inline-graphic xlink:href="pone.0069474.e032.jpg"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pone.0069474.e033.jpg"></inline-graphic>
</inline-formula>
the learning rate, which we set to
<inline-formula>
<inline-graphic xlink:href="pone.0069474.e034.jpg"></inline-graphic>
</inline-formula>
.</p>
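The winner-take-all update can be sketched as follows; a toy version assuming Euclidean matching, with an arbitrary map size and learning rate:

```python
import numpy as np

def update_winner(W, x, eta=0.1):
    """The most active neuron (here, the closest weight vector) wins the
    competition and moves toward the input by a gradient-descent step
    W_k += eta * (x - W_k)."""
    k = int(np.argmin(np.linalg.norm(W - x, axis=1)))  # winning neuron
    W[k] = W[k] + eta * (x - W[k])
    return k

rng = np.random.default_rng(0)
W = rng.random((5, 3))            # 5 neurons, 3 afferent inputs
x = np.array([0.2, 0.9, 0.4])
before = np.linalg.norm(W - x, axis=1).min()
k = update_winner(W, x)
after = np.linalg.norm(W[k] - x)  # winner has moved toward the input
```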
<p>By comparing the rank codes stored in the weight vectors, it is possible to measure the relative distance between neurons, which respects the input topology.</p>
<p>During the learning process, we do not impose any lateral connectivity between the neurons. However, neurons with similar weight distributions may be considered neighbors belonging to the same cluster. As stated earlier in this section, map formation proceeds through the mechanism of activity-dependent neural growth
<xref ref-type="bibr" rid="pone.0069474-Pellegrini1">[80]</xref>
. However, we do not model the competition/stabilization processes at the molecular level as described in
<xref ref-type="bibr" rid="pone.0069474-Triplett1">[59]</xref>
. Instead, we model the neurogenesis and neural spatialization with two complementary mechanisms. The first mechanism imposes on each neuron a maximum number of iterations, above which its synaptic weights are no longer plastic; we set
<inline-formula>
<inline-graphic xlink:href="pone.0069474.e035.jpg"></inline-graphic>
</inline-formula>
. The second mechanism creates new neurons with plastic synapses within the map whenever an existing neuron reaches its maximum allowed number of updates. This dual mechanism draws a developmental timeline by adding neurons to the maps, so that the most frequent stimulus patterns become represented by a greater number of neurons.</p>
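The dual freezing/neurogenesis mechanism can be sketched as follows; a toy version with made-up parameters, using Euclidean matching instead of the rank-order code for brevity:

```python
import numpy as np

def grow_map(stimuli, n_init=10, n_max=100, max_updates=50, eta=0.05, seed=0):
    """Each neuron freezes after max_updates weight changes; when a frozen
    neuron wins, a fresh plastic neuron is recruited at the stimulus, so
    frequent patterns end up represented by more neurons."""
    rng = np.random.default_rng(seed)
    W = [rng.random(stimuli.shape[1]) for _ in range(n_init)]
    n_upd = [0] * n_init
    for x in stimuli:
        k = int(np.argmin([np.linalg.norm(x - w) for w in W]))
        if n_upd[k] < max_updates:        # still plastic: learn
            W[k] += eta * (x - W[k])
            n_upd[k] += 1
        elif len(W) < n_max:              # frozen winner: neurogenesis
            W.append(x.copy())
            n_upd.append(0)
    return np.array(W)

rng = np.random.default_rng(1)
M = grow_map(rng.random((2000, 4)))       # map grows beyond its initial size
```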
<p>In our experimental simulations, the maps are initialized with
<inline-formula>
<inline-graphic xlink:href="pone.0069474.e036.jpg"></inline-graphic>
</inline-formula>
neurons and their maximum growth is fixed to
<inline-formula>
<inline-graphic xlink:href="pone.0069474.e037.jpg"></inline-graphic>
</inline-formula>
neurons. In accordance with the Model section, the somatic map is linked to the
<inline-formula>
<inline-graphic xlink:href="pone.0069474.e038.jpg"></inline-graphic>
</inline-formula>
afferent somatic nodes and the vision map is linked to
<inline-formula>
<inline-graphic xlink:href="pone.0069474.e039.jpg"></inline-graphic>
</inline-formula>
afferent retinal nodes. Note that each sensory map in our model is unique, which differs from the real anatomy of the SC, which comprises two hemispheres, each mapped independently and organized such that central visual space along the azimuth axis is represented anteriorly and more peripheral space posteriorly. We nonetheless think that our model is coherent and captures the functional features of the SC, such as sensory alignment. The experiments are presented in the next section.</p>
</sec>
</sec>
<sec sec-type="results" id="s3">
<title>Results</title>
<sec id="s3a">
<title>Development of Unisensory Maps</title>
<p>Our experiments with the fetal face simulation proceed as follows. We make the eyelid and mouth muscles move at random times, alternating rapid and slow periods of contraction and relaxation. The face model simulates the tension lines, which propagate across the whole facial tissue, producing characteristic strain patterns mostly localized around the organ contours, see
<xref ref-type="fig" rid="pone-0069474-g003">Fig. 3</xref>
. Here, the stress induced by the mouth's displacement is distributed to all the neighbouring regions. These graphs show how dynamic the patterns are, owing to the intermingled relations within the mesh network. For instance, the intensity profile of a single node during mouth motion displays complex dynamics that are difficult to apprehend, see
<xref ref-type="fig" rid="pone-0069474-g004">Fig. 4</xref>
for the normalized activity between
<inline-formula>
<inline-graphic xlink:href="pone.0069474.e040.jpg"></inline-graphic>
</inline-formula>
. Thus, an important feature for a learning algorithm is to find the causal links and the topological structure from their temporal correlation patterns. The rank-order coding algorithm satisfies these requirements because it identifies the amplitude relations among the tension nodes. The formation of the visual map follows a similar process. In order to mimic the visuo-spatial stimuli occurring when the fetus touches its face, we model the hand as a ball passing in front of the eye field while touching the skin at the same time (not shown). Note that ocular movements are not modeled.</p>
<fig id="pone-0069474-g003" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0069474.g003</object-id>
<label>Figure 3</label>
<caption>
<title>Strain/stress evolution of the facial tissue during the opening and the closing of the mouth.</title>
<p>The figures highlight the propagation of the strain/stress lines on the facial tissue around the mouth during its opening. The color intensity indicates the variation on each edge of the relative stress, which is propagated from neighbouring point to neighbouring point. The tension lines delineate the functional connectivity of each region of the facial tissue.</p>
</caption>
<graphic xlink:href="pone.0069474.g003"></graphic>
</fig>
<fig id="pone-0069474-g004" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0069474.g004</object-id>
<label>Figure 4</label>
<caption>
<title>Stress intensity profile observed in one node.</title>
<p>We can observe the very dynamic stress intensity level during facial movements on one node, normalized between
<inline-formula>
<inline-graphic xlink:href="pone.0069474.e003.jpg"></inline-graphic>
</inline-formula>
. Its complex activity is due to the intermingled topology of the mesh network on which it resides. Some features of the spatial topology of the whole mesh can nevertheless be extracted from its temporal structure.</p>
</caption>
<graphic xlink:href="pone.0069474.g004"></graphic>
</fig>
<p>During the learning process, the nodes from each map encode one specific temporal pattern and the most frequent patterns get over-represented with new nodes added. The developmental growth of the two maps is described in
<xref ref-type="fig" rid="pone-0069474-g005">Fig. 5</xref>
with the evolution of the map size and of the weights variation parameter,
<inline-formula>
<inline-graphic xlink:href="pone.0069474.e041.jpg"></inline-graphic>
</inline-formula>
, respectively top and bottom. While the convergence rate gradually stabilizes over time, new neurons are recruited, which furnishes some plasticity to the maps. After the transitory period, which corresponds to the learning stage, each neuron becomes selective to specific receptive fields and
<inline-formula>
<inline-graphic xlink:href="pone.0069474.e042.jpg"></inline-graphic>
</inline-formula>
gradually diminishes.</p>
<fig id="pone-0069474-g005" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0069474.g005</object-id>
<label>Figure 5</label>
<caption>
<title>Evolution of the neural growth and synaptic plasticity during map formation.</title>
<p>The plots describe the global variation of the synaptic weights and the number of units in each map over time. The colors correspond respectively to the somatic map (in blue) and to the visual map (in red). Over time, the unisensory layers converge to stable neural populations through the mechanism of reinforcement learning (Hebbian synaptic plasticity), as
<inline-formula>
<inline-graphic xlink:href="pone.0069474.e004.jpg"></inline-graphic>
</inline-formula>
goes to zero, and through neurogenesis, as the maps reach their maximum allowed number of units: one hundred. The density distribution of the neural populations depends on the probability distribution of the sensory activity.</p>
</caption>
<graphic xlink:href="pone.0069474.g005"></graphic>
</fig>
<p>We reconstruct in
<xref ref-type="fig" rid="pone-0069474-g006">Figures 6</xref>
and
<xref ref-type="fig" rid="pone-0069474-g007">7</xref>
the final configuration of the visuotopic and somatopic maps using the Fruchterman-Reingold (FR) layout algorithm
<xref ref-type="bibr" rid="pone.0069474-Fruchterman1">[81]</xref>
, which is a force-directed layout based on a distance measure between the nodes. Although very schematic, the FR algorithm has been used for molecular placement simulations and can serve here, to some extent, to simulate the competition within the SC maps during ontogeny. We compute the Euclidean distance between the weight distributions to evaluate the nodes' similarity and the attraction/repulsion forces between them. The color code used for plotting the visual neurons follows a uniform density distribution displayed in
<xref ref-type="fig" rid="pone-0069474-g006">Fig. 6</xref>
. Here, the units deploy in a retinotopic manner, with more units encoding the center of the image than the periphery. Hence, the FR algorithm models well the logarithmic transformation found in the visual inputs.</p>
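A minimal force-directed sketch in the spirit of the FR layout, with simplified force laws and hypothetical parameters: all nodes repel, while pairs with a small weight-space distance attract, so similar neurons settle close together in the plane:

```python
import numpy as np

def fr_layout(D, n_iter=500, step=0.02, seed=0):
    """Toy Fruchterman-Reingold-style layout driven by a pairwise
    weight-space distance matrix D: global 1/d^2 repulsion plus a spring
    attraction proportional to the similarity exp(-D)."""
    n = len(D)
    rng = np.random.default_rng(seed)
    pos = rng.standard_normal((n, 2))
    sim = np.exp(-np.asarray(D, dtype=float))
    np.fill_diagonal(sim, 0.0)
    for _ in range(n_iter):
        diff = pos[:, None, :] - pos[None, :, :]   # pairwise offsets
        d = np.linalg.norm(diff, axis=-1) + 1e-9
        rep = diff / d[..., None] ** 3             # repulsion ~ 1/d^2
        att = -sim[..., None] * diff               # attraction toward similar nodes
        disp = step * (rep + att).sum(axis=1)
        np.clip(disp, -0.1, 0.1, out=disp)         # temperature cap, as in FR
        pos += disp
    return pos

# Two clusters of mutually similar neurons (small intra-cluster distance).
D = np.full((10, 10), 3.0)
D[:5, :5] = 0.1
D[5:, 5:] = 0.1
np.fill_diagonal(D, 0.0)
pos = fr_layout(D)
```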
<fig id="pone-0069474-g006" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0069474.g006</object-id>
<label>Figure 6</label>
<caption>
<title>Visuotopic reconstruction using the Fruchterman-Reingold layout algorithm.</title>
<p>This graphic layout (right) displays spatially, in a 2D map, the distances between neurons computed in weight space on the principle of attraction/repulsion forces. The layout roughly models the molecular mechanisms of map formation. The graph shows that the visual neural network represents well the fovea-centered distribution of its visual input, shown on the left with the same color code.</p>
</caption>
<graphic xlink:href="pone.0069474.g006"></graphic>
</fig>
<fig id="pone-0069474-g007" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0069474.g007</object-id>
<label>Figure 7</label>
<caption>
<title>Somatopic reconstruction using the Fruchterman-Reingold layout algorithm.</title>
<p>As in the previous figure, the Fruchterman-Reingold graphic layout (right) displays spatially, in a 2D map, the distances between the tactile neurons computed in weight space, based on the principle of attracting and repelling forces. In accordance with the previous figure, the graph shows that the tactile neural network respects quite well the topology of the face (left), with the same color code for the neurons connected to their respective somatic areas: the neural clusters respect the vertical and horizontal symmetries of the face, with the orange-red-pink regions corresponding to the lower part of the face, the green-cyan-blue regions to the upper part, the green and orange regions to the left side and the blue-pink regions to the right side.</p>
</caption>
<graphic xlink:href="pone.0069474.g007"></graphic>
</fig>
<p>In parallel, the topology of the face is well reconstructed by the somatic map, as it preserves the locations of the Merkel cells, see
<xref ref-type="fig" rid="pone-0069474-g007">Fig. 7</xref>
. The neurons' positions respect the neighbouring relations between the tactile cells and the characteristic regions such as the mouth, the nose and the eyes: for instance, the neurons colored in green and blue encode the upper part of the face and are well separated from the neurons tagged in pink, red and orange, which correspond to the mouth region. Moreover, the map is also differentiated along the vertical plane, with the green/yellow regions for the left side of the face and the blue/red regions for its right side.</p>
</sec>
<sec id="s3b">
<title>Multisensory Integration</title>
<p>The unisensory maps have learned somatosensory and visual receptive fields in their respective frames of reference. However, these two layers are not in spatial register. According to Groh
<xref ref-type="bibr" rid="pone.0069474-Groh1">[45]</xref>
, the spatial registration between two neural maps occurs when one receptive field (e.g., somatosensory) lands within the other (e.g., visual). Moreover, cells in true register have to respond to the same spatial locations of visuo-tactile stimuli. Regarding how spatial registration is achieved in the SC, clinical studies and meta-analyses indicate that multimodal integration occurs (1) in the intermediate layers, and (2) later in development, after unimodal maturation
<xref ref-type="bibr" rid="pone.0069474-Stein6">[55]</xref>
.</p>
<p>To simulate the transition that occurs in cognitive development, we introduce a third map that models this intermediate layer for somatic and visual registration between the superficial and deep layers of the SC; see
<xref ref-type="fig" rid="pone-0069474-g001">Figs. 1</xref>
and
<xref ref-type="fig" rid="pone-0069474-g008">8</xref>
. We want to obtain through learning a relative spatial bijection, or one-to-one correspondence, between the neurons of the visual map and those of the somatopic map. Its neurons receive synaptic inputs from the two unimodal maps and are defined with the rank-order coding algorithm, as for the previous maps. Furthermore, this new map follows a similar maturational process, starting with
<inline-formula>
<inline-graphic xlink:href="pone.0069474.e043.jpg"></inline-graphic>
</inline-formula>
neurons initialized with a uniform distribution and containing one hundred neurons at the end.</p>
<fig id="pone-0069474-g008" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0069474.g008</object-id>
<label>Figure 8</label>
<caption>
<title>Multimodal integration schema in SC between vision and tactile information.</title>
<p>Integration is done as follows: the visual signals in the superficial layer and the somatosensory signals in the deep layer converge to the intermediate multimodal map (no reentrance), in which bimodal neurons align pair-wise visuo-tactile associations. In some cases, the synaptic links from different neurons in the unisensory maps converge to the same bimodal neurons, whereas in other cases the synaptic links from the same neurons in the unisensory maps diverge to different bimodal neurons.</p>
</caption>
<graphic xlink:href="pone.0069474.g008"></graphic>
</fig>
<p>We present in
<xref ref-type="fig" rid="pone-0069474-g009">Fig. 9</xref>
the raster plots for the three maps during tactile-visual stimulation when the hand skims over the face; in our case the hand is replaced by a ball moving over the face. One can observe that the spiking rates of the vision map and the tactile map differ, which shows that there is no one-to-one relationship between the two maps and that the multimodal map has to combine their respective topologies partially. The bimodal neurons learn over time the contingent visual and somatosensory activity, and we hypothesize that they associate the common spatial locations between an eye-centered reference frame and the face-centered reference frame. To study this situation, we plot a connectivity diagram in
<xref ref-type="fig" rid="pone-0069474-g010">Fig. 10</xref>
<bold>A</bold>
constructed from the learnt synaptic weights between the three maps. For clarity, the connectivity diagram is created from the most robust visual and tactile links. We observe in this graph some
<italic>hub-like</italic>
nodes in the bimodal map (the blue segment), which correspond to converging neurons from the two unimodal maps. Here, the intermediate neurons bind the two modalities. As an example, we color four links from the visual and tactile maps (resp. cyan, green and magenta, red segments) converging to two neurons of the bimodal map. We transcribe the locations of the associated visual and tactile patterns in the top figures with the same color code. In these figures, on the left, the green dots in the visual map (resp. cyan and blue) indicate where the neurons fire in visual coordinates and, on the right, the red dots in the tactile map (resp. magenta and blue) indicate where the neurons fire in tactile coordinates. Thus, the congruent spatial locations are mostly in register with each other, and the bimodal map matches up the two topologies.</p>
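The convergence of congruent visuo-tactile events onto bimodal neurons can be sketched with a toy winner-take-all Hebbian scheme; the one-hot locations and map sizes below are made up for illustration and do not reproduce the actual model:

```python
import numpy as np

def train_bimodal(pairs, n_vis, n_tac, n_bim, eta=0.3, seed=0):
    """Congruent (visual index, tactile index) events reinforce the winning
    bimodal neuron's links to both unisensory sources, so the two maps end
    up in spatial register through the intermediate layer."""
    rng = np.random.default_rng(seed)
    Wv = rng.random((n_bim, n_vis)) * 0.1   # weak initial visual links
    Wt = rng.random((n_bim, n_tac)) * 0.1   # weak initial tactile links
    for v, t in pairs:
        x_v, x_t = np.eye(n_vis)[v], np.eye(n_tac)[t]
        k = int(np.argmax(Wv @ x_v + Wt @ x_t))  # winning bimodal neuron
        Wv[k] += eta * (x_v - Wv[k])             # Hebbian-like reinforcement
        Wt[k] += eta * (x_t - Wt[k])
    return Wv, Wt

# Train on congruent locations: visual site i always co-occurs with tactile site i.
pairs = [(i, i) for i in range(4)] * 50
Wv, Wt = train_bimodal(pairs, n_vis=4, n_tac=4, n_bim=8)
```

After training, the bimodal neuron most strongly linked to a given visual site is also most strongly linked to the corresponding tactile site, i.e. the two maps are in register.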
<fig id="pone-0069474-g009" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0069474.g009</object-id>
<label>Figure 9</label>
<caption>
<title>Raster plots from the visual, the tactile and the bimodal maps, during visuo-tactual stimulation when the hand skims over the face.</title>
<p>The activity of the visual, tactile and bimodal maps is drawn respectively in the bottom, middle and top frames. At a given time, the spike contingency across the neurons of the three different maps creates the conditions for reinforcing the synaptic links from the neurons of the unisensory maps to the neurons of the bimodal map. The difference in spiking rates between the maps shows that there is no bijective connection between the neurons and that some bimodal neurons may associate groups of visual neurons with groups of tactile neurons.</p>
</caption>
<graphic xlink:href="pone.0069474.g009"></graphic>
</fig>
<fig id="pone-0069474-g010" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0069474.g010</object-id>
<label>Figure 10</label>
<caption>
<title>Networks analysis of visuo-tactile integration and connectivity.</title>
<p>
<bold>A</bold>
Connectivity circle linking the visual and tactile maps (resp. green and red) to the bimodal map (blue). The graph describes the dense connectivity of synaptic links starting from the visual and tactile maps and converging to the multimodal map. The colored links correspond to localized visuo-tactile stimuli on the nose (green/red links) and on the right eye (cyan/magenta links), see the patterns in the upper figure. The links show the correct spatial correspondence between the neurons of the two maps.
<bold>B</bold>
Weights density distribution from the visual and tactile maps to the bimodal map, relative to their strength. These histograms show that the neurons from both modalities have only a few strong connections to each other. This suggests a bijection between the neurons of each map.
<bold>C</bold>
Normalized distance error between linked visual and tactile neurons. When looking at the pairwise neurons of the two maps (red histogram in
<bold>B</bold>
for weights
<inline-formula>
<inline-graphic xlink:href="pone.0069474.e020.jpg"></inline-graphic>
</inline-formula>
), the spatial distortion between the neurons of the two maps is weak: vision neurons coding one location in the eye's receptive field are strongly linked to the tactile neurons coding the same region on the face.</p>
</caption>
<graphic xlink:href="pone.0069474.g010"></graphic>
</fig>
<p>In
<bold>B</bold>
, we reproduce the histogram distribution of the inter-modal connection weights from the tactile and visual maps to the bimodal map. The weights are uniformly distributed for the two modalities (in blue and green), with on average an equal number of weak connections (low values) and strong connections (high values). However, for the neurons that necessarily have strong links from both modalities (the red histogram), the count dramatically diminishes. For these neurons, only
<inline-formula>
<inline-graphic xlink:href="pone.0069474.e044.jpg"></inline-graphic>
</inline-formula>
of the neurons population (i.e., eighteen neurons) have their synaptic weights above
<inline-formula>
<inline-graphic xlink:href="pone.0069474.e045.jpg"></inline-graphic>
</inline-formula>
from the two unimodal populations. For neurons having their synaptic weights above
<inline-formula>
<inline-graphic xlink:href="pone.0069474.e046.jpg"></inline-graphic>
</inline-formula>
, their number decreases to
<inline-formula>
<inline-graphic xlink:href="pone.0069474.e047.jpg"></inline-graphic>
</inline-formula>
of the neurons population (i.e., eight neurons). Although the global nework is not fully recurrent, the probability distribution describes a log-curve distribution very similar to small-world and to complex networks
<xref ref-type="bibr" rid="pone.0069474-Sporns1">[82]</xref>
. Complex networks are well-known structures for efficient information processing, locally within the sub-parts and globally over the whole system
<xref ref-type="bibr" rid="pone.0069474-Pitti2">[83]</xref>
.</p>
<p>The histogram in
<bold>C</bold>
draws a similar probability distribution for the spatial congruence between the visual mapping and the tactile mapping. This histogram displays the spatial error between the associated receptive fields, measured from their respective barycentres (e.g.,
<xref ref-type="fig" rid="pone-0069474-g010">Fig. 10</xref>
) and normalized between
<inline-formula>
<inline-graphic xlink:href="pone.0069474.e048.jpg"></inline-graphic>
</inline-formula>
. It shows that the unimodal receptive fields linked by the intermediate neurons mostly overlap in spatial location, with only
<inline-formula>
<inline-graphic xlink:href="pone.0069474.e049.jpg"></inline-graphic>
</inline-formula>
error. Moreover, the distribution decreases drastically above this value. As a result, most of the neurons of the two maps (
<inline-formula>
<inline-graphic xlink:href="pone.0069474.e050.jpg"></inline-graphic>
</inline-formula>
) are in spatial register.
<xref ref-type="fig" rid="pone-0069474-g011">Figure 11</xref>
plots the spatial alignment between the visual and tactile neurons, above and below respectively, relative to their locations on their respective maps. The links between the neurons are mostly vertical and parallel, and only a few of them cross other spatial regions of the other map. In order to mark out the aligned links, we color in dark grey the links that have a small spatial displacement between the two maps: the darker the link, the more aligned the neurons.</p>
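The normalized distance error between linked receptive fields can be computed from their barycentres, for example as follows; this sketch assumes both maps are expressed in the same normalized [0, 1] x [0, 1] coordinates:

```python
import numpy as np

def registration_error(vis_rf, tac_rf):
    """Normalized spatial error between a linked visual and tactile
    receptive field: distance between their barycentres, divided by the
    map diagonal so the result lies in [0, 1]."""
    bv = np.mean(np.asarray(vis_rf, dtype=float), axis=0)  # visual barycentre
    bt = np.mean(np.asarray(tac_rf, dtype=float), axis=0)  # tactile barycentre
    return float(np.linalg.norm(bv - bt) / np.sqrt(2.0))
```

Perfectly registered fields give an error of 0, while fields at opposite corners of the map give an error of 1.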
<fig id="pone-0069474-g011" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0069474.g011</object-id>
<label>Figure 11</label>
<caption>
<title>Neural arrangement and synaptic alignment.</title>
<p>Spatial topology of the neurons in the visual and tactile maps, with their respective pairwise connections to the bimodal neurons; the darker the link, the more aligned the neurons. In accordance with the results found in Fig. 10, the spatial error between the neurons of each map is weak, which is seen in the alignment of the synapses, which are mostly parallel; e.g., the dark links. Conversely, the few spatial errors present a large spatial distortion (light grey).</p>
</caption>
<graphic xlink:href="pone.0069474.g011"></graphic>
</fig>
</sec>
<sec id="s3c">
<title>Sensitivity to Configuration of Eyes and Mouth</title>
<p>In order to investigate the functional properties of the global network, we replicate the three-dots experiment tested on newborns by Mark Johnson
<xref ref-type="bibr" rid="pone.0069474-Rigato1">[8]</xref>
,
<xref ref-type="bibr" rid="pone.0069474-Johnson2">[25]</xref>
. This test aims at demonstrating facial imitation and facial perception in newborns.</p>
<p>We analyze the network's activity response for different configurations of an iconified face-like pattern exemplified by three large dots corresponding to the two eyes and the mouth, see the framed figure in
<xref ref-type="fig" rid="pone-0069474-g012">Fig. 12</xref>
on the top-left. For this, we rotate this pattern between
<inline-formula>
<inline-graphic xlink:href="pone.0069474.e051.jpg"></inline-graphic>
</inline-formula>
and collect the neural activation responses from the vision map (in blue) and from the intermediate map (in red). When the pattern is rotated by
<inline-formula>
<inline-graphic xlink:href="pone.0069474.e052.jpg"></inline-graphic>
</inline-formula>
radians (120°), we can observe a strong activation response in the visual map, as the face-like stimulus is well aligned with the visual neurons, which have encoded this spatial distribution. Concerning the multimodal map, its neural response presents a similar activity pattern, but twice as strong and shifted by
<inline-formula>
<inline-graphic xlink:href="pone.0069474.e053.jpg"></inline-graphic>
</inline-formula>
radians (30°). This slight difference in response between the two maps indicates that they share some common features in their respective receptive fields but do not completely overlap, since the visual and somatosensory maps are not organized in the same manner, owing to their skin-based and retinotopic reference frames. As exemplified in
<xref ref-type="fig" rid="pone-0069474-g011">Figure 11</xref>
, the intermediate map recodes and aligns the two maps into a common space from the congruent visuo-tactile stimuli presented.</p>
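The 120° periodicity of the response can be illustrated with a toy template-matching sketch; the image size, dot radius and explicit template are hypothetical stand-ins for the learned maps:

```python
import numpy as np

def three_dot_pattern(theta, size=32, r=10):
    """Render an iconified 'face': three bright dots 120 degrees apart on a
    circle of radius r, rotated by theta radians around the image center."""
    img = np.zeros((size, size))
    c = size // 2
    for k in range(3):
        a = theta + k * 2.0 * np.pi / 3.0
        y = int(round(c - r * np.cos(a)))
        x = int(round(c + r * np.sin(a)))
        img[y, x] = 1.0
    return img

template = three_dot_pattern(0.0)  # stands in for the learned face-like layout

def response(theta):
    """Overlap between the rotated pattern and the learned layout: peaks
    whenever the rotation is a multiple of 120 degrees."""
    return float((three_dot_pattern(theta) * template).sum())
```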
<fig id="pone-0069474-g012" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0069474.g012</object-id>
<label>Figure 12</label>
<caption>
<title>Sensitivity to face-like patterns for certain orientations.</title>
<p>This plot presents the sensitivity of the neural network to face-like patterns, with an experimental setup similar to the three-dots test done in newborns
<xref ref-type="bibr" rid="pone.0069474-Johnson4">[29]</xref>
. When rotating the three-dots pattern centered on the eyes, the neural activity within the visual map and the bimodal map gets higher only for certain orientations,
<inline-formula>
<inline-graphic xlink:href="pone.0069474.e023.jpg"></inline-graphic>
</inline-formula>
and
<inline-formula>
<inline-graphic xlink:href="pone.0069474.e024.jpg"></inline-graphic>
</inline-formula>
, when the three dots align correctly with the caricatural configurational topology of the eyes and mouth.</p>
</caption>
<graphic xlink:href="pone.0069474.g012"></graphic>
</fig>
<p>Furthermore, we can observe cross-modal enhancement, as the activity in the multimodal map is higher than that of its visual input. The face-like stimulation pattern boosts the neurons' activity when it is presented in the correct orientation, coinciding with the facial topology. Thus, activity in the intermediate layer is stronger even though it does not receive any information from the tactile map. That is, thanks to the sensory alignment between the two modalities, the intermediate layer is able to simulate the neural activity of the tactile map.</p>
<p>In addition, we perform five other experiments with different visual patterns in order to evaluate our system with respect to infant psychology tests. In
<xref ref-type="fig" rid="pone-0069474-g013">Figure 13</xref>
, we present the averaged activity level of the multimodal map over
<inline-formula>
<inline-graphic xlink:href="pone.0069474.e054.jpg"></inline-graphic>
</inline-formula>
experiments, for the eyes and mouth configurational pattern with the white on black three dots
<bold>A</bold>
, the eyes only
<bold>B</bold>
, mouth only
<bold>C</bold>
and a black pattern, a random pattern and the black on white three dots pattern; resp.
<bold>D</bold>
,
<bold>E</bold>
,
<bold>F</bold>
. In this chart, the white on black three dots pattern in
<bold>A</bold>
is the most selective. In comparison to the eyes two dots pattern in
<bold>B</bold>
and to the one dot pattern in
<bold>C</bold>
, its level is much higher than the sum of its constitutive patterns. Interestingly, a full black pattern, in
<bold>D</bold>
, or a random pattern, in
<bold>E</bold>
, get on average higher scores whereas the inverted three dots pattern in
<bold>F</bold>
gets the lowest level. Patterns
<bold>D</bold>
and
<bold>E</bold>
could correspond to the baseline of the map activity level, whereas pattern
<bold>F</bold>
shows the contrast sensitivity of this type of neuron: rank-order coding neurons have been used to simulate the neurons in V1 and are found to be robust to noise and luminosity, but not to contrast polarity
<xref ref-type="bibr" rid="pone.0069474-VanRullen1">[65]</xref>
,
<xref ref-type="bibr" rid="pone.0069474-Thorpe1">[66]</xref>
,
<xref ref-type="bibr" rid="pone.0069474-VanRullen2">[79]</xref>
. This point is particularly important because it may partly explain results on the contrast sensitivity of neonates to face-like configurations
<xref ref-type="bibr" rid="pone.0069474-Farroni1">[84]</xref>
, although neonates are more sensitive to black on white patterns rather than to the reverse, as in our model.</p>
<fig id="pone-0069474-g013" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0069474.g013</object-id>
<label>Figure 13</label>
<caption>
<title>Performance Tests for different configurational patterns.</title>
<p>We performed several experiments around the three dots test; the results on the sensitivity of the bimodal neurons are averaged over twenty experiments. In
<bold>A</bold>
the performance of the network on the black background and the three white dots, in
<bold>B</bold>
on the eyes only, in
<bold>C</bold>
on the mouth only, in
<bold>D</bold>
on a pitch black pattern, in
<bold>E</bold>
on a random pattern and in
<bold>F</bold>
on the reverse pattern. Bimodal neurons show a maximum intensity for the pattern
<bold>A</bold>
, where the three dots match the spatial location of the eyes and of the mouth. In comparison, its constitutive patterns presented separately to the network in
<bold>B</bold>
and in
<bold>C</bold>
generate a much lower activity, whereas The full back pattern in
<bold>D</bold>
and the random pattern in
<bold>E</bold>
reach an average activity level inside the network, and the reversed pattern in
<bold>F</bold>
, its lowest level. This last result is due to the contrast-polarity sensitivity of the rank-order coding neurons, a characteristic comparable with the capacities of the visual system
<xref ref-type="bibr" rid="pone.0069474-VanRullen1">[65]</xref>
, but here the system learns light components against a dark background rather than dark components against a light background, as observed in infants
<xref ref-type="bibr" rid="pone.0069474-Farroni1">[84]</xref>
.</p>
</caption>
<graphic xlink:href="pone.0069474.g013"></graphic>
</fig>
</sec>
<sec id="s3d">
<title>Detection of Mouth and Eyes Movements</title>
<p>Our next experiment studied the influence of facial expressions on the multimodal system. A sequence of facial expression images, alternating stare and smile, was presented to the visual map at regular intervals. First, the images were pre-processed with a motion detection filter, which simply subtracts two consecutive images, see
<xref ref-type="fig" rid="pone-0069474-g014">Fig. 14</xref>
on the top. As a result, the static regions between the two consecutive images are filtered out (e.g., the background and the cheeks) whereas the dynamic parts (i.e., the eyelids, the eyes, the nose and the mouth) are strongly emphasized when a strong facial expression is produced. In this situation, the salient regions match well the three dots icon in
<xref ref-type="fig" rid="pone-0069474-g012">Fig. 12</xref>
.</p>
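The frame-subtraction pre-processing described above can be sketched in a few lines; a minimal illustration assuming grayscale frames stored as NumPy arrays (the array names and the threshold value are our own, not taken from the paper):

```python
import numpy as np

def motion_filter(prev_frame, next_frame, threshold=0.1):
    """Emphasize dynamic regions by subtracting consecutive frames.

    Static regions (background, cheeks) cancel out, while moving
    parts (eyelids, eyes, mouth) leave a large absolute difference.
    """
    diff = np.abs(next_frame.astype(float) - prev_frame.astype(float))
    return np.where(diff > threshold, diff, 0.0)

# Toy 4x4 "frames": only the pixels that change survive the filter.
stare = np.zeros((4, 4))
smile = np.zeros((4, 4))
smile[3, 1:3] = 1.0  # the "mouth" region moves between frames
salient = motion_filter(stare, smile)
```

The surviving salient region plays the role of the dynamic facial parts that match the three dots icon.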
<fig id="pone-0069474-g014" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0069474.g014</object-id>
<label>Figure 14</label>
<caption>
<title>Neural activity taken from the intermediate visuo-tactile map during observation of a facial expression: surprise (red frame) and stare (green frame).</title>
<p>We present a sequence of facial expressions from surprise to stare and vice versa. The selected bimodal neuron taken from the intermediate map fires in response to the characteristic visual configurational patterns of the face during rapid changes, which permits the detection of mouth and eyes movements. This behavior is due to the sensory alignment and to the high correlation with the tactile distribution of its own face. Note: the subject has given written informed consent to publication of his photograph.</p>
</caption>
<graphic xlink:href="pone.0069474.g014"></graphic>
</fig>
<p>At the network level, not all the neurons are active, but some are very receptive to certain facial expressions and to the dynamic activation of certain spatial regions. We display the dynamics of one neuron in
<xref ref-type="fig" rid="pone-0069474-g014">Fig. 14</xref>
for different facial expressions presented periodically, from staring to surprise and then from surprise to staring.</p>
<p>Here, the visuo-tactile neuron in the intermediate map is visually highly receptive to the regions that characterize the face, because of sensory alignment and because its distribution is correlated with the tactile distribution of its own face. Therefore, whenever a transition occurs in the facial expression, the neuron fires. One can imagine that if the intermediate cells feed this activity forward to the corresponding facial motor activity, then imitation will occur.</p>
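The hypothesized feed-forward route from transition detection to a facial motor command can be caricatured in a few lines; every name and value here is hypothetical, a sketch of the idea rather than the paper's implementation:

```python
def mimicry_step(prev_activity, activity, motor_gain=1.0, threshold=0.5):
    """Fire on a rapid change in bimodal activity and forward it as a
    facial motor command, as hypothesized for neonatal imitation."""
    fired = abs(activity - prev_activity) > threshold
    motor_command = motor_gain * activity if fired else 0.0
    return fired, motor_command

# Stare -> surprise: activity jumps, the cell fires, motor output follows.
fired, cmd = mimicry_step(0.1, 0.9)
# Sustained expression: no transition, no firing, no command.
quiet, _ = mimicry_step(0.9, 0.9)
```

The point of the sketch is only that a transition, not a sustained state, is what would drive the motor side.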
</sec>
</sec>
<sec sec-type="discussion" id="s4">
<title>Discussion</title>
<p>We have introduced a developmental model of SC starting from the fetal stage in the context of primitive social behaviors. We propose that, in comparison to ordinary stimuli, faces are particular patterns because the visual and somatic maps in SC are perfectly aligned topologically. We suggest that multimodal alignment may predispose neonates to social skills, to recognize faces and to generate mimicry. The model consists of two unisensory layers, receiving the raw tactile information from the facial mechano-receptors simulated with a mass-spring mesh network and the raw visual information from the not-yet matured eyes. We note that the SC is comprised of two hemispheres and that a unilateral SC lesion produces contralateral sensory (visual, somatosensory and auditory) deficits
<xref ref-type="bibr" rid="pone.0069474-Sprague1">[85]</xref>
. Although we could have modeled only one hemisphere and given the system only half of the contralateral sensory information, we think our system would have learnt the same. The two circuits are initialized in a primitive stage, starting with few neurons with randomized synaptic connections. We simulate the developmental aspects of map formation during the third trimester of pregnancy through the mechanisms of activity-dependent neural growth
<xref ref-type="bibr" rid="pone.0069474-Pellegrini1">[80]</xref>
and synaptic plasticity. Over time, the two maps evolve into topographic networks and a third map is introduced, which corresponds to the intermediate layer in SC that aligns the visual and tactile sensory modalities with each other. The neurons are modeled with the rank-order coding algorithm proposed by Thorpe and colleagues
<xref ref-type="bibr" rid="pone.0069474-Thorpe1">[66]</xref>
, which defines a fast integrate-and-fire neuron model that learns the discrete phasic information of the input vector.</p>
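The rank-order coding scheme, including the contrast-polarity sensitivity discussed in the results, can be illustrated with a small sketch. This is our simplified reading of Thorpe-style rank-order coding; the modulation factor and the weights are illustrative, not the paper's parameters:

```python
import numpy as np

def rank_order_response(stimulus, weights, mod=0.8):
    """Rank-order coded response: inputs are ranked by intensity
    (brightest pixel = earliest spike) and each active input adds its
    weight attenuated geometrically by its rank."""
    order = np.argsort(-stimulus)  # most intense inputs first
    response = 0.0
    for rank, idx in enumerate(order):
        if stimulus[idx] > 0:      # silent inputs never spike
            response += weights[idx] * mod ** rank
    return response

# A unit tuned to a light-on-dark pattern responds strongly to it...
pattern = np.array([1.0, 0.0, 1.0, 0.0, 1.0])
weights = pattern.copy()
strong = rank_order_response(pattern, weights)
# ...but not to its contrast-reversed version: polarity sensitivity.
weak = rank_order_response(1.0 - pattern, weights)
```

Because ranking is driven by intensity, a detector trained on light components against a dark background receives nothing from the reversed pattern, consistent with the lowest activity level reported for the inverted three dots pattern.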
<p>The major finding of our model is that minimal social features, like the sensitivity to the configuration of eyes and mouth, can emerge from the multimodal integration operated between the topographic maps built from structured sensory information
<xref ref-type="bibr" rid="pone.0069474-Lungarella1">[86]</xref>
,
<xref ref-type="bibr" rid="pone.0069474-Lungarella2">[87]</xref>
. This result is in line with the plastic formation of neural maps built from sensorimotor experiences
<xref ref-type="bibr" rid="pone.0069474-Benedetti1">[60]</xref>
–<xref ref-type="bibr" rid="pone.0069474-Benedetti2">[62]</xref>
. We acknowledge, however, that this model does not account for the fine-tuned discrimination of different mouth actions and for the imitation of the same action. We believe that this can be achieved only to some extent, due to the limitations of our experimental setup. We predict, however, that a more accurate facial model, including the gustative motor system, could represent the somatopic map with finer discrimination of mouth movements, separating throat-jaw and tongue motions (tongue protrusion) from jaw and cheek actions (mouth opening). Moreover, our model of the visual system is rudimentary and does not reproduce, in the three dots experiments, the sensitivity to dark components against a light background observed in infants
<xref ref-type="bibr" rid="pone.0069474-Farroni1">[84]</xref>
. A more accurate model integrating the retina and V1 area may better fit this behavior.</p>
<p>Although it is not clear whether the human system possesses an inborn predisposition for social stimuli, we think our model provides a consistent computational framework for the inner mechanisms supporting that hypothesis. This model may also explain some psychological findings in newborns, like the preference for face-like patterns, the contrast sensitivity to facial patterns and the detection of mouth and eyes movements, which are the premise for facial mimicry. Furthermore, our model is consistent with fetal behavioral and cranial anatomical observations showing, on the one hand, the control of eye movements and facial behaviors during the third trimester
<xref ref-type="bibr" rid="pone.0069474-Kurjak1">[88]</xref>
, and on the other hand the maturation of the specific sub-cortical areas (e.g., the substantia nigra, the inferior-auditory and superior-visual colliculi) responsible for these behaviors
<xref ref-type="bibr" rid="pone.0069474-Stanojevic1">[43]</xref>
.</p>
<p>Clinical studies found that newborns are sensitive to biological motion
<xref ref-type="bibr" rid="pone.0069474-Simion2">[89]</xref>
, to eye gaze
<xref ref-type="bibr" rid="pone.0069474-Farroni2">[90]</xref>
and to face-like patterns
<xref ref-type="bibr" rid="pone.0069474-Morton1">[28]</xref>
. They also demonstrate low-level imitation of facial gestures from birth
<xref ref-type="bibr" rid="pone.0069474-Meltzoff1">[17]</xref>
, which is a result that is also found in newborn monkeys
<xref ref-type="bibr" rid="pone.0069474-Ferrari1">[20]</xref>
. However, if the hypothesis of a minimal social brain is valid, which mechanisms contribute to it? Johnson and colleagues propose for instance that sub-cortical structures embed a coarse template of faces broadly tuned to detect low-level perceptual cues embedded in social stimuli
<xref ref-type="bibr" rid="pone.0069474-Johnson4">[29]</xref>
. They consider that a recognition mechanism based on configural topology is likely to be involved that can describe faces as a collection of general structural and configural properties. A different idea is the proposal of Boucenna and colleagues who suggest that the amygdala is strongly involved in the rapid learning of social references (e.g., smiles)
<xref ref-type="bibr" rid="pone.0069474-Boucenna1">[16]</xref>
,
<xref ref-type="bibr" rid="pone.0069474-Boucenna2">[72]</xref>
. Since eyes and faces are highly salient due to their specific configurations and patterns, the learning of social skills is bootstrapped simply from low-level visuo-motor coordination. Besides, Meltzoff proposes that neonates possess an innate system named the Active Intermodal Matching (AIM mechanism)
<xref ref-type="bibr" rid="pone.0069474-Meltzoff3">[19]</xref>
that identifies organs and their configural relations. He further suggests that this map is at the origin of a supramodal body image built from visuo-motor matching, auditory-oral matching, and visual-tactile matching behaviors during the perinatal period
<xref ref-type="bibr" rid="pone.0069474-Streri1">[91]</xref>
.</p>
<p>How can such a body image be built, and when? Takeshita and colleagues emphasize the importance of tactile sensation during brain maturation in the last trimester of pregnancy
<xref ref-type="bibr" rid="pone.0069474-Takeshita1">[92]</xref>
. NIRS analyses of newborns during bimodal stimulation show that tactile stimuli activate broader brain areas than other stimuli
<xref ref-type="bibr" rid="pone.0069474-Shibata1">[93]</xref>
. As reported in
<xref ref-type="bibr" rid="pone.0069474-Kurjak1">[88]</xref>
, Kurjak and colleagues indicate that human fetuses begin to learn about “their own body”, showing coordinated movements such as hands to mouth, sucking, grasping the hand, tiptoes and knees (22 weeks), opening the mouth before hand-to-mouth/sucking (24 weeks), and various patterns of facial expressions starting from 18 weeks (mouth opening, tongue/lip protrusion, smiling and yawning). Furthermore, supporting observations by Myowa-Yamakoshi and colleagues show evidence of fetal anticipatory mouth opening
<xref ref-type="bibr" rid="pone.0069474-MyowaYamakoshi1">[94]</xref>
, whereas
<xref ref-type="bibr" rid="pone.0069474-Stanojevic1">[43]</xref>
shows continuity between fetal and neonatal neurobehavior with self-exploratory behaviors.</p>
<p>Although neonatal imitation is only a marker that disappears after 2–3 months in humans, we propose that the SC is at the root of this behavior, enabling automatic social interactions. This hypothesis has also been suggested by
<xref ref-type="bibr" rid="pone.0069474-Nagy2">[95]</xref>
–<xref ref-type="bibr" rid="pone.0069474-SalihagicKadic1">[97]</xref>
who emphasized the central place that the SC occupies, relative to other brain regions not yet matured, in fusing the senses. Anatomical studies of collicular cells show that the eye neurons project forward to the deep layers without recurrent synaptic connections, which may confer on the SC a strong computational power due to alignment; e.g., the easy and rapid construction of a primitive body image. This primitive body image may correspond to the first stage of Piaget's landscape of spatial and motor development, characterized by an egocentric representation and sensorimotor coordination before the appearance of a more complex spatial representation of the body in an allocentric metric
<xref ref-type="bibr" rid="pone.0069474-Piaget1">[7]</xref>
,
<xref ref-type="bibr" rid="pone.0069474-Bremner1">[98]</xref>
, mapped into the cortex. The multimodal cells in SC, along with other forebrain structures such as the hippocampus and the amygdala, may help the construction of such a body schema in the parieto-motor cortices. For instance, we proposed in previous works the importance of hippocampal interactions with the parieto-motor cortices for spatial perception and the elaboration of a body image
<xref ref-type="bibr" rid="pone.0069474-Pitti1">[15]</xref>
,
<xref ref-type="bibr" rid="pone.0069474-Pitti3">[99]</xref>
. There, mechanisms other than sensory alignment may be at play, such as the gain-field modulatory effect found in coordinate transformation
<xref ref-type="bibr" rid="pone.0069474-Andersen1">[100]</xref>
–<xref ref-type="bibr" rid="pone.0069474-Pitti4">[103]</xref>
.</p>
</sec>
</body>
<back>
<ack>
<p>We thank Arnaud Blanchard and Jean-Paul Banquet for their comments.</p>
</ack>
<ref-list>
<title>References</title>
<ref id="pone.0069474-Nagy1">
<label>1</label>
<mixed-citation publication-type="other">Nagy E (2010) The newborn infant: A missing stage in developmental psychology. Inf Child Dev: 10.1002/icd.683.</mixed-citation>
</ref>
<ref id="pone.0069474-Porges1">
<label>2</label>
<mixed-citation publication-type="other">Porges S, Furman S (2010) The early development of the autonomic nervous system provides a neural platform for social behaviour: A polyvagal perspective. Inf Child Dev: 10.1002/icd.688.</mixed-citation>
</ref>
<ref id="pone.0069474-Trevarthen1">
<label>3</label>
<mixed-citation publication-type="other">Trevarthen C (2010) What is it like to be a person who knows nothing? defining the active intersubjective mind of a newborn human being. Inf Child Dev: 10.1002/icd.689.</mixed-citation>
</ref>
<ref id="pone.0069474-Rochat1">
<label>4</label>
<mixed-citation publication-type="journal">
<name>
<surname>Rochat</surname>
<given-names>P</given-names>
</name>
(
<year>2011</year>
)
<article-title>The self as phenotype</article-title>
.
<source>Consciousness and Cognition</source>
<volume>20</volume>
:
<fpage>109</fpage>
<lpage>119</lpage>
<pub-id pub-id-type="pmid">21145260</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0069474-Reddy1">
<label>5</label>
<mixed-citation publication-type="other">Reddy V (2008) How Infants Know Minds. Harvard University Press.</mixed-citation>
</ref>
<ref id="pone.0069474-Johnson1">
<label>6</label>
<mixed-citation publication-type="journal">
<name>
<surname>Johnson</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Griffin</surname>
<given-names>R</given-names>
</name>
,
<name>
<surname>Csibra</surname>
<given-names>G</given-names>
</name>
,
<name>
<surname>Halit</surname>
<given-names>H</given-names>
</name>
,
<name>
<surname>Farroni</surname>
<given-names>T</given-names>
</name>
,
<etal>et al</etal>
(
<year>2005</year>
)
<article-title>The emergence of the social brain network: Evidence from typical and atypical development</article-title>
.
<source>Development and Psychopathology</source>
<volume>17</volume>
:
<fpage>599</fpage>
<lpage>619</lpage>
<pub-id pub-id-type="pmid">16262984</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0069474-Piaget1">
<label>7</label>
<mixed-citation publication-type="other">Piaget J (1954) The construction of reality in the child. New York: Basic Books.</mixed-citation>
</ref>
<ref id="pone.0069474-Rigato1">
<label>8</label>
<mixed-citation publication-type="other">Rigato S, Menon E, Johnson M, Faraguna D, Farroni T (2010) Direct gaze may modulate face recognition in newborns. Inf Child Dev: 10.1002/icd.684.</mixed-citation>
</ref>
<ref id="pone.0069474-Kuniyoshi1">
<label>9</label>
<mixed-citation publication-type="other">Kuniyoshi Y, Yorozu Y, Inaba M, Inoue H (2003) From visuo-motor self learning to early imitation - a neural architecture for humanoid learning. International conference on robotics and Automation: 3132–3139.</mixed-citation>
</ref>
<ref id="pone.0069474-Kuniyoshi2">
<label>10</label>
<mixed-citation publication-type="journal">
<name>
<surname>Kuniyoshi</surname>
<given-names>Y</given-names>
</name>
,
<name>
<surname>Sangawa</surname>
<given-names>S</given-names>
</name>
(
<year>2006</year>
)
<article-title>A neural model for exploration and learning of embodied movement patterns</article-title>
.
<source>Bio Cyb</source>
<volume>95</volume>
:
<fpage>589</fpage>
<lpage>605</lpage>
</mixed-citation>
</ref>
<ref id="pone.0069474-Mori1">
<label>11</label>
<mixed-citation publication-type="other">Mori H, Kuniyoshi K (2007) A cognitive developmental scenario of transitional motor primitives acquisition. In: 7th international Conference on Epigenetic Robotics. 93–100.</mixed-citation>
</ref>
<ref id="pone.0069474-Kinjo1">
<label>12</label>
<mixed-citation publication-type="journal">
<name>
<surname>Kinjo</surname>
<given-names>K</given-names>
</name>
,
<name>
<surname>Nabeshima</surname>
<given-names>C</given-names>
</name>
,
<name>
<surname>Sangawa</surname>
<given-names>S</given-names>
</name>
,
<name>
<surname>Kuniyoshi</surname>
<given-names>Y</given-names>
</name>
(
<year>2008</year>
)
<article-title>A neural model for exploration and learning of embodied movement patterns</article-title>
.
<source>J of Rob and Mecha</source>
<volume>20</volume>
:
<fpage>358</fpage>
<lpage>366</lpage>
</mixed-citation>
</ref>
<ref id="pone.0069474-Mori2">
<label>13</label>
<mixed-citation publication-type="other">Mori H, Kuniyoshi Y (2010) A human fetus development simulation: Self-organization of behaviors through tactile sensation. IEEE 9th International Conference on Development and Learning: 82–97.</mixed-citation>
</ref>
<ref id="pone.0069474-Yamada1">
<label>14</label>
<mixed-citation publication-type="other">Yamada Y, Mori H, Kuniyoshi Y (2010) A fetus and infant developmental scenario: Selforganization of goal-directed behaviors based on sensory constraints. 10th International Conference on Epigenetic Robotics: 145–152.</mixed-citation>
</ref>
<ref id="pone.0069474-Pitti1">
<label>15</label>
<mixed-citation publication-type="other">Pitti A, Mori H, Yamada Y, Kuniyoshi Y (2010) A model of spatial development from parietohippocampal learning of body-place associations. 10th International Conference on Epigenetic Robotics: 89–96.</mixed-citation>
</ref>
<ref id="pone.0069474-Boucenna1">
<label>16</label>
<mixed-citation publication-type="other">Boucenna S, Gaussier P, Andry P, Hafemeister L (2010) Imitation as a communication tool for online facial expression learning and recognition. IROS: 1–6.</mixed-citation>
</ref>
<ref id="pone.0069474-Meltzoff1">
<label>17</label>
<mixed-citation publication-type="journal">
<name>
<surname>Meltzoff</surname>
<given-names>A</given-names>
</name>
,
<name>
<surname>Moore</surname>
<given-names>K</given-names>
</name>
(
<year>1977</year>
)
<article-title>Imitation of facial and manual gestures by human neonates</article-title>
.
<source>Science</source>
<volume>198</volume>
:
<fpage>75</fpage>
<lpage>78</lpage>
<pub-id pub-id-type="pmid">17741897</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0069474-Meltzoff2">
<label>18</label>
<mixed-citation publication-type="journal">
<name>
<surname>Meltzoff</surname>
<given-names>A</given-names>
</name>
(
<year>2007</year>
)
<article-title>‘Like me’: a foundation for social cognition</article-title>
.
<source>Developmental Science</source>
<volume>10</volume>
:
<fpage>126</fpage>
<lpage>134</lpage>
<pub-id pub-id-type="pmid">17181710</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0069474-Meltzoff3">
<label>19</label>
<mixed-citation publication-type="journal">
<name>
<surname>Meltzoff</surname>
<given-names>A</given-names>
</name>
(
<year>1997</year>
)
<article-title>Explaining facial imitation: A theoretical model</article-title>
.
<source>Early Development and Parenting</source>
<volume>6</volume>
:
<fpage>179</fpage>
<lpage>192</lpage>
</mixed-citation>
</ref>
<ref id="pone.0069474-Ferrari1">
<label>20</label>
<mixed-citation publication-type="journal">
<name>
<surname>Ferrari</surname>
<given-names>P</given-names>
</name>
,
<name>
<surname>Paukner</surname>
<given-names>A</given-names>
</name>
,
<name>
<surname>Ruggiero</surname>
<given-names>A</given-names>
</name>
,
<name>
<surname>Darcey</surname>
<given-names>L</given-names>
</name>
,
<name>
<surname>Unbehagen</surname>
<given-names>S</given-names>
</name>
,
<etal>et al</etal>
(
<year>2009</year>
)
<article-title>Interindividual differences in neonatal imitation and the development of action chains in rhesus macaques</article-title>
.
<source>Child Development</source>
<volume>80</volume>
:
<fpage>1057</fpage>
<lpage>1068</lpage>
<pub-id pub-id-type="pmid">19630893</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0069474-Lepage1">
<label>21</label>
<mixed-citation publication-type="journal">
<name>
<surname>Lepage</surname>
<given-names>J</given-names>
</name>
,
<name>
<surname>Théoret</surname>
<given-names>H</given-names>
</name>
(
<year>2007</year>
)
<article-title>The mirror neuron system: grasping others' actions from birth?</article-title>
<source>Developmental Science</source>
<volume>10</volume>
:
<fpage>513</fpage>
<lpage>523</lpage>
<pub-id pub-id-type="pmid">17683336</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0069474-Valenza1">
<label>22</label>
<mixed-citation publication-type="journal">
<name>
<surname>Valenza</surname>
<given-names>E</given-names>
</name>
,
<name>
<surname>Simion</surname>
<given-names>F</given-names>
</name>
,
<name>
<surname>Macchi Cassia</surname>
<given-names>V</given-names>
</name>
,
<name>
<surname>Umilta</surname>
<given-names>C</given-names>
</name>
(
<year>1996</year>
)
<article-title>Face preference at birth</article-title>
.
<source>J Exp Psychol Hum Percept Perform</source>
<volume>22</volume>
:
<fpage>892</fpage>
<lpage>903</lpage>
<pub-id pub-id-type="pmid">8756957</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0069474-Simion1">
<label>23</label>
<mixed-citation publication-type="journal">
<name>
<surname>Simion</surname>
<given-names>F</given-names>
</name>
,
<name>
<surname>Valenza</surname>
<given-names>E</given-names>
</name>
,
<name>
<surname>Umilta</surname>
<given-names>C</given-names>
</name>
,
<name>
<surname>DallaBarba</surname>
<given-names>B</given-names>
</name>
(
<year>1998</year>
)
<article-title>Preferential orienting to faces in newborns: a temporal-nasal asymmetry</article-title>
.
<source>J Exp Psychol Hum Percept Perform</source>
<volume>24</volume>
:
<fpage>1399</fpage>
<lpage>1405</lpage>
<pub-id pub-id-type="pmid">9778830</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0069474-deHaan1">
<label>24</label>
<mixed-citation publication-type="journal">
<name>
<surname>de Haan</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Pascalis</surname>
<given-names>O</given-names>
</name>
,
<name>
<surname>Johnson</surname>
<given-names>M</given-names>
</name>
(
<year>2002</year>
)
<article-title>Specialization of neural mechanisms underlying face recognition in human infants</article-title>
.
<source>Journal of Cognitive Neuroscience</source>
<volume>14</volume>
:
<fpage>199</fpage>
<lpage>209</lpage>
<pub-id pub-id-type="pmid">11970786</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0069474-Johnson2">
<label>25</label>
<mixed-citation publication-type="journal">
<name>
<surname>Johnson</surname>
<given-names>M</given-names>
</name>
(
<year>2005</year>
)
<article-title>Subcortical face processing</article-title>
.
<source>Nature Reviews Neuroscience</source>
<volume>6</volume>
:
<fpage>766</fpage>
<lpage>774</lpage>
</mixed-citation>
</ref>
<ref id="pone.0069474-Johnson3">
<label>26</label>
<mixed-citation publication-type="journal">
<name>
<surname>Johnson</surname>
<given-names>M</given-names>
</name>
(
<year>2007</year>
)
<article-title>Developing a social brain</article-title>
.
<source>Acta Paediatrica</source>
<volume>96</volume>
:
<fpage>3</fpage>
<lpage>5</lpage>
</mixed-citation>
</ref>
<ref id="pone.0069474-Senju1">
<label>27</label>
<mixed-citation publication-type="journal">
<name>
<surname>Senju</surname>
<given-names>A</given-names>
</name>
,
<name>
<surname>Johnson</surname>
<given-names>M</given-names>
</name>
(
<year>2009</year>
)
<article-title>The eye contact effect: mechanisms and development</article-title>
.
<source>Trends in Cognitive Sciences</source>
<volume>13</volume>
:
<fpage>127</fpage>
<lpage>134</lpage>
<pub-id pub-id-type="pmid">19217822</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0069474-Morton1">
<label>28</label>
<mixed-citation publication-type="journal">
<name>
<surname>Morton</surname>
<given-names>J</given-names>
</name>
,
<name>
<surname>Johnson</surname>
<given-names>M</given-names>
</name>
(
<year>1991</year>
)
<article-title>Conspec and conlern: a two-process theory of infant face recognition</article-title>
.
<source>Psychological Review</source>
<volume>98</volume>
:
<fpage>164</fpage>
<lpage>181</lpage>
<pub-id pub-id-type="pmid">2047512</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0069474-Johnson4">
<label>29</label>
<mixed-citation publication-type="journal">
<name>
<surname>Johnson</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Dziurawiec</surname>
<given-names>S</given-names>
</name>
,
<name>
<surname>Ellis</surname>
<given-names>H</given-names>
</name>
,
<name>
<surname>Morton</surname>
<given-names>J</given-names>
</name>
(
<year>1991</year>
)
<article-title>Newborns preferential tracking of face-like stimuli and its subsequent decline</article-title>
.
<source>Cognition</source>
<volume>40</volume>
:
<fpage>1</fpage>
<lpage>19</lpage>
<pub-id pub-id-type="pmid">1786670</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0069474-deSchonen1">
<label>30</label>
<mixed-citation publication-type="journal">
<name>
<surname>de Schonen</surname>
<given-names>S</given-names>
</name>
,
<name>
<surname>Mathivet</surname>
<given-names>E</given-names>
</name>
(
<year>1989</year>
)
<article-title>First come first served: a scenario about the development of hemispheric specialization in face processing in infancy</article-title>
.
<source>European Bulletin of Cognitive Psychology</source>
<volume>9</volume>
:
<fpage>3</fpage>
<lpage>44</lpage>
</mixed-citation>
</ref>
<ref id="pone.0069474-Acerra1">
<label>31</label>
<mixed-citation publication-type="journal">
<name>
<surname>Acerra</surname>
<given-names>F</given-names>
</name>
,
<name>
<surname>Burnod</surname>
<given-names>Y</given-names>
</name>
,
<name>
<surname>de Schonen</surname>
<given-names>S</given-names>
</name>
(
<year>2002</year>
)
<article-title>Modelling aspects of face processing in early infancy</article-title>
.
<source>Developmental Science</source>
<volume>5</volume>
:
<fpage>98</fpage>
<lpage>117</lpage>
</mixed-citation>
</ref>
<ref id="pone.0069474-Nelson1">
<label>32</label>
<mixed-citation publication-type="journal">
<name>
<surname>Nelson</surname>
<given-names>C</given-names>
</name>
(
<year>2001</year>
)
<article-title>The development and neural bases of face recognition</article-title>
.
<source>Infant and Child Development</source>
<volume>10</volume>
:
<fpage>3</fpage>
<lpage>18</lpage>
</mixed-citation>
</ref>
<ref id="pone.0069474-Turati1">
<label>33</label>
<mixed-citation publication-type="journal">
<name>
<surname>Turati</surname>
<given-names>C</given-names>
</name>
(
<year>2004</year>
)
<article-title>Why faces are not special to newborns</article-title>
.
<source>Current Directions in Psychological Science</source>
<volume>13</volume>
:
<fpage>5</fpage>
<lpage>8</lpage>
</mixed-citation>
</ref>
<ref id="pone.0069474-Heyes1">
<label>34</label>
<mixed-citation publication-type="journal">
<name>
<surname>Heyes</surname>
<given-names>C</given-names>
</name>
(
<year>2003</year>
)
<article-title>Four routes of cognitive evolution</article-title>
.
<source>Psychological Reviews</source>
<volume>110</volume>
:
<fpage>713</fpage>
<lpage>727</lpage>
</mixed-citation>
</ref>
<ref id="pone.0069474-Brass1">
<label>35</label>
<mixed-citation publication-type="other">Brass M, Heyes C (2005) Imitation: is cognitive neuroscience solving the correspondence problem? Trends in Cognitive Sciences: 489–495.</mixed-citation>
</ref>
<ref id="pone.0069474-Ray1">
<label>36</label>
<mixed-citation publication-type="journal">
<name>
<surname>Ray</surname>
<given-names>E</given-names>
</name>
,
<name>
<surname>Heyes</surname>
<given-names>C</given-names>
</name>
(
<year>2011</year>
)
<article-title>Imitation in infancy: the wealth of the stimulus</article-title>
.
<source>Developmental Science</source>
<volume>14</volume>
:
<fpage>92</fpage>
<lpage>105</lpage>
<pub-id pub-id-type="pmid">21159091</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0069474-Kalesnykas1">
<label>37</label>
<mixed-citation publication-type="journal">
<name>
<surname>Kalesnykas</surname>
<given-names>R</given-names>
</name>
,
<name>
<surname>Sparks</surname>
<given-names>D</given-names>
</name>
(
<year>1996</year>
)
<article-title>The primate superior colliculus and the control of saccadic eye movements</article-title>
.
<source>Neuroscientist</source>
<volume>2</volume>
:
<fpage>284</fpage>
<lpage>292</lpage>
</mixed-citation>
</ref>
<ref id="pone.0069474-Stein1">
<label>38</label>
<mixed-citation publication-type="other">Stein B, Meredith M (1993) The Merging of the Senses. A Bradford Book, Cambridge, MA.</mixed-citation>
</ref>
<ref id="pone.0069474-Ferrell1">
<label>39</label>
<mixed-citation publication-type="other">Ferrell C (1996) Orientation behavior using registered topographic maps. Proceedings of the Fourth International Conference on Simulation of Adaptive Behavior: 94–103.</mixed-citation>
</ref>
<ref id="pone.0069474-Crish1">
<label>40</label>
<mixed-citation publication-type="journal">
<name>
<surname>Crish</surname>
<given-names>S</given-names>
</name>
,
<name>
<surname>Dengler-Crish</surname>
<given-names>C</given-names>
</name>
,
<name>
<surname>Comer</surname>
<given-names>C</given-names>
</name>
(
<year>2006</year>
)
<article-title>Population coding strategies and involvement of the superior colliculus in the tactile orienting behavior of naked mole-rats</article-title>
.
<source>Neuroscience</source>
<volume>139</volume>
:
<fpage>1461</fpage>
<lpage>1466</lpage>
<pub-id pub-id-type="pmid">16603320</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0069474-Joseph1">
<label>41</label>
<mixed-citation publication-type="journal">
<name>
<surname>Joseph</surname>
<given-names>R</given-names>
</name>
(
<year>2000</year>
)
<article-title>Fetal brain behavior and cognitive development</article-title>
.
<source>Developmental Review</source>
<volume>20</volume>
:
<fpage>81</fpage>
<lpage>98</lpage>
</mixed-citation>
</ref>
<ref id="pone.0069474-Stein2">
<label>42</label>
<mixed-citation publication-type="journal">
<name>
<surname>Stein</surname>
<given-names>B</given-names>
</name>
,
<name>
<surname>Stanford</surname>
<given-names>T</given-names>
</name>
,
<name>
<surname>Rowland</surname>
<given-names>B</given-names>
</name>
(
<year>2009</year>
)
<article-title>The neural basis of multisensory integration in the midbrain: Its organization and maturation</article-title>
.
<source>Hearing Research</source>
<volume>258</volume>
:
<fpage>4</fpage>
<lpage>15</lpage>
<pub-id pub-id-type="pmid">19345256</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0069474-Stanojevic1">
<label>43</label>
<mixed-citation publication-type="journal">
<name>
<surname>Stanojevic</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Kurjak</surname>
<given-names>A</given-names>
</name>
(
<year>2008</year>
)
<article-title>Continuity between fetal and neonatal neurobehavior</article-title>
.
<source>Journal of Ultrasound in Obstetrics and Gynecology</source>
<volume>2</volume>
:
<fpage>64</fpage>
<lpage>75</lpage>
</mixed-citation>
</ref>
<ref id="pone.0069474-James1">
<label>44</label>
<mixed-citation publication-type="journal">
<name>
<surname>James</surname>
<given-names>D</given-names>
</name>
(
<year>2010</year>
)
<article-title>Fetal learning: a critical review</article-title>
.
<source>Infant and Child Development</source>
<volume>19</volume>
:
<fpage>45</fpage>
<lpage>54</lpage>
</mixed-citation>
</ref>
<ref id="pone.0069474-Groh1">
<label>45</label>
<mixed-citation publication-type="journal">
<name>
<surname>Groh</surname>
<given-names>J</given-names>
</name>
,
<name>
<surname>Sparks</surname>
<given-names>D</given-names>
</name>
(
<year>1996</year>
)
<article-title>Saccades to somatosensory targets. III. Eye-position-dependent somatosensory activity in primate superior colliculus</article-title>
.
<source>Journal of Neurophysiology</source>
<volume>75</volume>
:
<fpage>439</fpage>
<lpage>453</lpage>
<pub-id pub-id-type="pmid">8822569</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0069474-Moschovakis1">
<label>46</label>
<mixed-citation publication-type="journal">
<name>
<surname>Moschovakis</surname>
<given-names>A</given-names>
</name>
(
<year>1996</year>
)
<article-title>The superior colliculus and eye movement control</article-title>
.
<source>Current Biology</source>
<volume>6</volume>
:
<fpage>811</fpage>
<lpage>816</lpage>
</mixed-citation>
</ref>
<ref id="pone.0069474-Stein3">
<label>47</label>
<mixed-citation publication-type="journal">
<name>
<surname>Stein</surname>
<given-names>B</given-names>
</name>
,
<name>
<surname>Magalhães Castro</surname>
<given-names>B</given-names>
</name>
,
<name>
<surname>Kruger</surname>
<given-names>L</given-names>
</name>
(
<year>1975</year>
)
<article-title>Superior colliculus: Visuotopic-somatotopic overlap</article-title>
.
<source>Science</source>
<volume>189</volume>
:
<fpage>224</fpage>
<lpage>226</lpage>
<pub-id pub-id-type="pmid">1094540</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0069474-Drger1">
<label>48</label>
<mixed-citation publication-type="journal">
<name>
<surname>Dräger</surname>
<given-names>U</given-names>
</name>
,
<name>
<surname>Hubel</surname>
<given-names>D</given-names>
</name>
(
<year>1976</year>
)
<article-title>Topography of visual and somatosensory projections to mouse superior colliculus</article-title>
.
<source>J Neurophysiol</source>
<volume>39</volume>
:
<fpage>91</fpage>
<lpage>101</lpage>
<pub-id pub-id-type="pmid">1249606</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0069474-King1">
<label>49</label>
<mixed-citation publication-type="journal">
<name>
<surname>King</surname>
<given-names>A</given-names>
</name>
(
<year>2004</year>
)
<article-title>The superior colliculus</article-title>
.
<source>Current Biology</source>
<volume>14</volume>
:
<fpage>R335</fpage>
<lpage>R338</lpage>
<pub-id pub-id-type="pmid">15120083</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0069474-Dominey1">
<label>50</label>
<mixed-citation publication-type="journal">
<name>
<surname>Dominey</surname>
<given-names>P</given-names>
</name>
,
<name>
<surname>Arbib</surname>
<given-names>M</given-names>
</name>
(
<year>1992</year>
)
<article-title>A cortico-subcortical model for generation of spatially accurate sequential saccades</article-title>
.
<source>Cerebral Cortex</source>
<volume>2</volume>
:
<fpage>153</fpage>
<lpage>175</lpage>
<pub-id pub-id-type="pmid">1633413</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0069474-Stein4">
<label>51</label>
<mixed-citation publication-type="journal">
<name>
<surname>Stein</surname>
<given-names>B</given-names>
</name>
(
<year>1984</year>
)
<article-title>Development of the superior colliculus</article-title>
.
<source>Ann Rev Neurosci</source>
<volume>7</volume>
:
<fpage>95</fpage>
<lpage>125</lpage>
<pub-id pub-id-type="pmid">6370084</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0069474-Wallace1">
<label>52</label>
<mixed-citation publication-type="journal">
<name>
<surname>Wallace</surname>
<given-names>M</given-names>
</name>
(
<year>2004</year>
)
<article-title>The development of multisensory processes</article-title>
.
<source>Cogn Process</source>
<volume>5</volume>
:
<fpage>69</fpage>
<lpage>83</lpage>
</mixed-citation>
</ref>
<ref id="pone.0069474-Stein5">
<label>53</label>
<mixed-citation publication-type="journal">
<name>
<surname>Stein</surname>
<given-names>B</given-names>
</name>
,
<name>
<surname>Perrault Jr</surname>
<given-names>T</given-names>
</name>
,
<name>
<surname>Stanford</surname>
<given-names>T</given-names>
</name>
,
<name>
<surname>Rowland</surname>
<given-names>B</given-names>
</name>
(
<year>2010</year>
)
<article-title>Postnatal experiences influence how the brain integrates information from different senses</article-title>
.
<source>Frontiers in Integrative Neuroscience</source>
<volume>30</volume>
:
<fpage>4904</fpage>
<lpage>4913</lpage>
</mixed-citation>
</ref>
<ref id="pone.0069474-Wallace2">
<label>54</label>
<mixed-citation publication-type="journal">
<name>
<surname>Wallace</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Stein</surname>
<given-names>B</given-names>
</name>
(
<year>2001</year>
)
<article-title>Sensory and multisensory responses in the newborn monkey superior colliculus</article-title>
.
<source>The Journal of Neuroscience</source>
<volume>21</volume>
:
<fpage>8886</fpage>
<lpage>8894</lpage>
<pub-id pub-id-type="pmid">11698600</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0069474-Stein6">
<label>55</label>
<mixed-citation publication-type="journal">
<name>
<surname>Stein</surname>
<given-names>B</given-names>
</name>
,
<name>
<surname>Burr</surname>
<given-names>D</given-names>
</name>
,
<name>
<surname>Constantinidis</surname>
<given-names>C</given-names>
</name>
,
<name>
<surname>Laurienti</surname>
<given-names>P</given-names>
</name>
,
<name>
<surname>Meredith</surname>
<given-names>M</given-names>
</name>
,
<etal>et al</etal>
(
<year>2010</year>
)
<article-title>Semantic confusion regarding the development of multisensory integration: a practical solution</article-title>
.
<source>European Journal of Neuroscience</source>
<volume>31</volume>
:
<fpage>1713</fpage>
<lpage>1720</lpage>
<pub-id pub-id-type="pmid">20584174</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0069474-Bednar1">
<label>56</label>
<mixed-citation publication-type="journal">
<name>
<surname>Bednar</surname>
<given-names>J</given-names>
</name>
,
<name>
<surname>Miikkulainen</surname>
<given-names>R</given-names>
</name>
(
<year>2003</year>
)
<article-title>Learning innate face preferences</article-title>
.
<source>Neural Computation</source>
<volume>15</volume>
:
<fpage>1525</fpage>
<lpage>1557</lpage>
<pub-id pub-id-type="pmid">12816565</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0069474-Balas1">
<label>57</label>
<mixed-citation publication-type="journal">
<name>
<surname>Balas</surname>
<given-names>B</given-names>
</name>
(
<year>2010</year>
)
<article-title>Using innate visual biases to guide face learning in natural scenes: a computational investigation</article-title>
.
<source>Developmental Science</source>
<volume>5</volume>
:
<fpage>469</fpage>
<lpage>478</lpage>
<pub-id pub-id-type="pmid">20443967</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0069474-Pascalis1">
<label>58</label>
<mixed-citation publication-type="journal">
<name>
<surname>Pascalis</surname>
<given-names>O</given-names>
</name>
,
<name>
<surname>de Haan</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Nelson</surname>
<given-names>C</given-names>
</name>
(
<year>2002</year>
)
<article-title>Is face processing species-specific during the first year of life?</article-title>
<source>Science</source>
<volume>296</volume>
:
<fpage>1321</fpage>
<lpage>1323</lpage>
<pub-id pub-id-type="pmid">12016317</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0069474-Triplett1">
<label>59</label>
<mixed-citation publication-type="journal">
<name>
<surname>Triplett</surname>
<given-names>J</given-names>
</name>
,
<name>
<surname>Phan</surname>
<given-names>A</given-names>
</name>
,
<name>
<surname>Yamada</surname>
<given-names>J</given-names>
</name>
,
<name>
<surname>Feldheim</surname>
<given-names>D</given-names>
</name>
(
<year>2012</year>
)
<article-title>Alignment of multimodal sensory input in the superior colliculus through a gradient-matching mechanism</article-title>
.
<source>The Journal of Neuroscience</source>
<volume>32</volume>
:
<fpage>5264</fpage>
<lpage>5271</lpage>
<pub-id pub-id-type="pmid">22496572</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0069474-Benedetti1">
<label>60</label>
<mixed-citation publication-type="journal">
<name>
<surname>Benedetti</surname>
<given-names>F</given-names>
</name>
(
<year>1995</year>
)
<article-title>Orienting behaviour and superior colliculus sensory representations in mice with the vibrissae bent into the contralateral hemispace</article-title>
.
<source>European Journal of Neuroscience</source>
<volume>7</volume>
:
<fpage>1512</fpage>
<lpage>9</lpage>
<pub-id pub-id-type="pmid">7551177</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0069474-PerraultJr1">
<label>61</label>
<mixed-citation publication-type="journal">
<name>
<surname>Perrault Jr</surname>
<given-names>T</given-names>
</name>
,
<name>
<surname>Vaughan</surname>
<given-names>J</given-names>
</name>
,
<name>
<surname>Stein</surname>
<given-names>B</given-names>
</name>
,
<name>
<surname>Wallace</surname>
<given-names>M</given-names>
</name>
(
<year>2005</year>
)
<article-title>Superior colliculus neurons use distinct operational modes in the integration of multisensory stimuli</article-title>
.
<source>J Neurophysiol</source>
<volume>93</volume>
:
<fpage>2575</fpage>
<lpage>2586</lpage>
<pub-id pub-id-type="pmid">15634709</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0069474-Benedetti2">
<label>62</label>
<mixed-citation publication-type="journal">
<name>
<surname>Benedetti</surname>
<given-names>F</given-names>
</name>
(
<year>1995</year>
)
<article-title>Differential formation of topographic maps in the cerebral cortex and superior colliculus of the mouse by temporally correlated tactile-tactile and tactile-visual inputs</article-title>
.
<source>European Journal of Neuroscience</source>
<volume>7</volume>
:
<fpage>1942</fpage>
<lpage>1951</lpage>
<pub-id pub-id-type="pmid">8528470</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0069474-Wallace3">
<label>63</label>
<mixed-citation publication-type="journal">
<name>
<surname>Wallace</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Stein</surname>
<given-names>B</given-names>
</name>
(
<year>2000</year>
)
<article-title>Onset of cross-modal synthesis in the neonatal superior colliculus is gated by the development of cortical influences</article-title>
.
<source>J Neurophysiol</source>
<volume>83</volume>
:
<fpage>3578</fpage>
<lpage>3582</lpage>
<pub-id pub-id-type="pmid">10848574</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0069474-Wallace4">
<label>64</label>
<mixed-citation publication-type="journal">
<name>
<surname>Wallace</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Stein</surname>
<given-names>B</given-names>
</name>
(
<year>2007</year>
)
<article-title>Early experience determines how the senses will interact</article-title>
.
<source>J Neurophysiol</source>
<volume>97</volume>
:
<fpage>921</fpage>
<lpage>926</lpage>
<pub-id pub-id-type="pmid">16914616</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0069474-VanRullen1">
<label>65</label>
<mixed-citation publication-type="journal">
<name>
<surname>Van Rullen</surname>
<given-names>R</given-names>
</name>
,
<name>
<surname>Gautrais</surname>
<given-names>J</given-names>
</name>
,
<name>
<surname>Delorme</surname>
<given-names>A</given-names>
</name>
,
<name>
<surname>Thorpe</surname>
<given-names>S</given-names>
</name>
(
<year>1998</year>
)
<article-title>Face processing using one spike per neurone</article-title>
.
<source>BioSystems</source>
<volume>48</volume>
:
<fpage>229</fpage>
<lpage>239</lpage>
<pub-id pub-id-type="pmid">9886652</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0069474-Thorpe1">
<label>66</label>
<mixed-citation publication-type="journal">
<name>
<surname>Thorpe</surname>
<given-names>S</given-names>
</name>
,
<name>
<surname>Delorme</surname>
<given-names>A</given-names>
</name>
,
<name>
<surname>Van Rullen</surname>
<given-names>R</given-names>
</name>
(
<year>2001</year>
)
<article-title>Spike-based strategies for rapid processing</article-title>
.
<source>Neural Networks</source>
<volume>14</volume>
:
<fpage>715</fpage>
<lpage>725</lpage>
<pub-id pub-id-type="pmid">11665765</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0069474-Kohonen1">
<label>67</label>
<mixed-citation publication-type="journal">
<name>
<surname>Kohonen</surname>
<given-names>T</given-names>
</name>
(
<year>1982</year>
)
<article-title>Self-organized formation of topologically correct feature maps</article-title>
.
<source>Biological Cybernetics</source>
<volume>43</volume>
:
<fpage>59</fpage>
<lpage>69</lpage>
</mixed-citation>
</ref>
<ref id="pone.0069474-Sirosh1">
<label>68</label>
<mixed-citation publication-type="journal">
<name>
<surname>Sirosh</surname>
<given-names>J</given-names>
</name>
,
<name>
<surname>Miikkulainen</surname>
<given-names>R</given-names>
</name>
(
<year>1994</year>
)
<article-title>Cooperative self-organization of afferent and lateral connections in cortical maps</article-title>
.
<source>Biological Cybernetics</source>
<volume>71</volume>
:
<fpage>65</fpage>
<lpage>78</lpage>
</mixed-citation>
</ref>
<ref id="pone.0069474-Casey1">
<label>69</label>
<mixed-citation publication-type="other">Casey M, Pavlou A (2008) A behavioral model of sensory alignment in the superficial and deep layers of the superior colliculus. Proceedings of the International Joint Conference on Neural Networks (IJCNN) 2008, Hong Kong: IEEE.</mixed-citation>
</ref>
<ref id="pone.0069474-Pavlou1">
<label>70</label>
<mixed-citation publication-type="other">Pavlou A, Casey M (2010) Simulating the effects of cortical feedback in the superior colliculus with topographic maps. Proceedings of the International Joint Conference on Neural Networks (IJCNN) 2010, Barcelona: IEEE.</mixed-citation>
</ref>
<ref id="pone.0069474-Glasr1">
<label>71</label>
<mixed-citation publication-type="journal">
<name>
<surname>Gläser</surname>
<given-names>C</given-names>
</name>
,
<name>
<surname>Joublin</surname>
<given-names>F</given-names>
</name>
(
<year>2011</year>
)
<article-title>Firing rate homeostasis for dynamic neural field formation</article-title>
.
<source>IEEE Transactions on Autonomous Mental Development</source>
<volume>3</volume>
:
<fpage>285</fpage>
<lpage>299</lpage>
</mixed-citation>
</ref>
<ref id="pone.0069474-Boucenna2">
<label>72</label>
<mixed-citation publication-type="other">Boucenna S, Gaussier P, Hafemeister L, Bard K (2010) Autonomous development of social referencing skills.</mixed-citation>
</ref>
<ref id="pone.0069474-Tsunozaki1">
<label>73</label>
<mixed-citation publication-type="journal">
<name>
<surname>Tsunozaki</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Bautista</surname>
<given-names>D</given-names>
</name>
(
<year>2009</year>
)
<article-title>Mammalian somatosensory mechanotransduction</article-title>
.
<source>Current Opinion in Neurobiology</source>
<volume>19</volume>
:
<fpage>1</fpage>
<lpage>8</lpage>
</mixed-citation>
</ref>
<ref id="pone.0069474-Boot1">
<label>74</label>
<mixed-citation publication-type="journal">
<name>
<surname>Boot</surname>
<given-names>P</given-names>
</name>
,
<name>
<surname>Rowden</surname>
<given-names>G</given-names>
</name>
,
<name>
<surname>Walsh</surname>
<given-names>N</given-names>
</name>
(
<year>1992</year>
)
<article-title>The distribution of merkel cells in human fetal and adult skin</article-title>
.
<source>The American Journal of Dermatopathology</source>
<volume>14</volume>
:
<fpage>391</fpage>
<lpage>396</lpage>
<pub-id pub-id-type="pmid">1415956</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0069474-Feller1">
<label>75</label>
<mixed-citation publication-type="journal">
<name>
<surname>Feller</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Butts</surname>
<given-names>D</given-names>
</name>
,
<name>
<surname>Aaron</surname>
<given-names>H</given-names>
</name>
,
<name>
<surname>Rokhsar</surname>
<given-names>D</given-names>
</name>
,
<name>
<surname>Shatz</surname>
<given-names>C</given-names>
</name>
(
<year>1997</year>
)
<article-title>Dynamic processes shape spatiotemporal properties of retinal waves</article-title>
.
<source>Neuron</source>
<volume>19</volume>
:
<fpage>293</fpage>
<lpage>306</lpage>
</mixed-citation>
</ref>
<ref id="pone.0069474-deVries1">
<label>76</label>
<mixed-citation publication-type="journal">
<name>
<surname>de Vries</surname>
<given-names>J</given-names>
</name>
,
<name>
<surname>Visser</surname>
<given-names>G</given-names>
</name>
,
<name>
<surname>Prechtl</surname>
<given-names>H</given-names>
</name>
(
<year>1982</year>
)
<article-title>The emergence of fetal behavior. I. Qualitative aspects</article-title>
.
<source>Early human development</source>
<volume>7</volume>
:
<fpage>301</fpage>
<lpage>322</lpage>
<pub-id pub-id-type="pmid">7169027</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0069474-Prechtl1">
<label>77</label>
<mixed-citation publication-type="other">Prechtl H (2001) Prenatal and early postnatal development of human motor behaviour. In: Kalverboer AF, Gramsbergen A, editors. Handbook of brain and behaviour in human development. Amsterdam: Kluwer: 415–427.</mixed-citation>
</ref>
<ref id="pone.0069474-Crish2">
<label>78</label>
<mixed-citation publication-type="journal">
<name>
<surname>Crish</surname>
<given-names>S</given-names>
</name>
,
<name>
<surname>Comer</surname>
<given-names>C</given-names>
</name>
,
<name>
<surname>Marasco</surname>
<given-names>P</given-names>
</name>
,
<name>
<surname>Catania</surname>
<given-names>K</given-names>
</name>
(
<year>2003</year>
)
<article-title>Somatosensation in the superior colliculus of the star-nosed mole</article-title>
.
<source>The Journal of Comparative Neurology</source>
<volume>464</volume>
:
<fpage>415</fpage>
<lpage>425</lpage>
<pub-id pub-id-type="pmid">12900913</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0069474-VanRullen2">
<label>79</label>
<mixed-citation publication-type="journal">
<name>
<surname>Van Rullen</surname>
<given-names>R</given-names>
</name>
,
<name>
<surname>Thorpe</surname>
<given-names>S</given-names>
</name>
(
<year>2002</year>
)
<article-title>Surfing a spike wave down the ventral stream</article-title>
.
<source>Vision Research</source>
<volume>42</volume>
:
<fpage>2593</fpage>
<lpage>2615</lpage>
<pub-id pub-id-type="pmid">12446033</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0069474-Pellegrini1">
<label>80</label>
<mixed-citation publication-type="journal">
<name>
<surname>Pellegrini</surname>
<given-names>G</given-names>
</name>
,
<name>
<surname>de Arcangelis</surname>
<given-names>L</given-names>
</name>
,
<name>
<surname>Herrmann</surname>
<given-names>H</given-names>
</name>
,
<name>
<surname>Perrone-Capano</surname>
<given-names>C</given-names>
</name>
(
<year>2007</year>
)
<article-title>Activity-dependent neural network model on scale-free networks</article-title>
.
<source>Physical Review E</source>
<volume>76</volume>
:
<fpage>016107</fpage>
</mixed-citation>
</ref>
<ref id="pone.0069474-Fruchterman1">
<label>81</label>
<mixed-citation publication-type="journal">
<name>
<surname>Fruchterman</surname>
<given-names>T</given-names>
</name>
,
<name>
<surname>Reingold</surname>
<given-names>E</given-names>
</name>
(
<year>1991</year>
)
<article-title>Graph drawing by force-directed placement</article-title>
.
<source>Software: Practice and Experience</source>
<volume>21</volume>
:
<fpage>1129</fpage>
<lpage>1164</lpage>
</mixed-citation>
</ref>
<ref id="pone.0069474-Sporns1">
<label>82</label>
<mixed-citation publication-type="journal">
<name>
<surname>Sporns</surname>
<given-names>O</given-names>
</name>
(
<year>2006</year>
)
<article-title>Small-world connectivity, motif composition, and complexity of fractal neuronal connections</article-title>
.
<source>BioSystems</source>
<volume>85</volume>
:
<fpage>55</fpage>
<lpage>64</lpage>
<pub-id pub-id-type="pmid">16757100</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0069474-Pitti2">
<label>83</label>
<mixed-citation publication-type="journal">
<name>
<surname>Pitti</surname>
<given-names>A</given-names>
</name>
,
<name>
<surname>Lungarella</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Kuniyoshi</surname>
<given-names>Y</given-names>
</name>
(
<year>2008</year>
)
<article-title>Metastability and functional integration in anisotropically coupled map lattices</article-title>
.
<source>Eur Phys J B</source>
<volume>63</volume>
:
<fpage>239</fpage>
<lpage>243</lpage>
</mixed-citation>
</ref>
<ref id="pone.0069474-Farroni1">
<label>84</label>
<mixed-citation publication-type="journal">
<name>
<surname>Farroni</surname>
<given-names>T</given-names>
</name>
,
<name>
<surname>Johnson</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Menon</surname>
<given-names>E</given-names>
</name>
,
<name>
<surname>Zulian</surname>
<given-names>L</given-names>
</name>
,
<name>
<surname>Faraguna</surname>
<given-names>D</given-names>
</name>
,
<etal>et al</etal>
(
<year>2005</year>
)
<article-title>Newborns' preference for face-relevant stimuli: Effects of contrast polarity</article-title>
.
<source>Proceedings of the National Academy of Sciences of the USA</source>
<volume>102</volume>
:
<fpage>17245</fpage>
<lpage>17250</lpage>
</mixed-citation>
</ref>
<ref id="pone.0069474-Sprague1">
<label>85</label>
<mixed-citation publication-type="journal">
<name>
<surname>Sprague</surname>
<given-names>J</given-names>
</name>
,
<name>
<surname>Meikle</surname>
<given-names>T</given-names>
</name>
(
<year>1965</year>
)
<article-title>The role of the superior colliculus in visually guided behavior</article-title>
.
<source>Experimental Neurology</source>
<volume>11</volume>
:
<fpage>115</fpage>
<lpage>146</lpage>
</mixed-citation>
</ref>
<ref id="pone.0069474-Lungarella1">
<label>86</label>
<mixed-citation publication-type="other">Lungarella M, Sporns O (2005) Information self-structuring: Key principle for learning and development. Proc of the 4th Int Conf on Development and Learning: 25–30.</mixed-citation>
</ref>
<ref id="pone.0069474-Lungarella2">
<label>87</label>
<mixed-citation publication-type="journal">
<name>
<surname>Lungarella</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Sporns</surname>
<given-names>O</given-names>
</name>
(
<year>2006</year>
)
<article-title>Mapping information flow in sensorimotor networks</article-title>
.
<source>PLoS Computational Biology</source>
<volume>2</volume>
:
<fpage>1301</fpage>
<lpage>1312</lpage>
</mixed-citation>
</ref>
<ref id="pone.0069474-Kurjak1">
<label>88</label>
<mixed-citation publication-type="journal">
<name>
<surname>Kurjak</surname>
<given-names>A</given-names>
</name>
,
<name>
<surname>Azumendi</surname>
<given-names>G</given-names>
</name>
,
<name>
<surname>Vecek</surname>
<given-names>N</given-names>
</name>
,
<name>
<surname>Kupeic</surname>
<given-names>S</given-names>
</name>
,
<name>
<surname>Solak</surname>
<given-names>M</given-names>
</name>
,
<etal>et al</etal>
(
<year>2003</year>
)
<article-title>Fetal hand and facial expression in normal pregnancy studied by four-dimensional sonography</article-title>
.
<source>J Perinat Med</source>
<volume>31</volume>
:
<fpage>496</fpage>
<lpage>508</lpage>
<pub-id pub-id-type="pmid">14711106</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0069474-Simion2">
<label>89</label>
<mixed-citation publication-type="journal">
<name>
<surname>Simion</surname>
<given-names>F</given-names>
</name>
,
<name>
<surname>Regolin</surname>
<given-names>L</given-names>
</name>
,
<name>
<surname>Bulf</surname>
<given-names>H</given-names>
</name>
(
<year>2008</year>
)
<article-title>A predisposition for biological motion in the newborn baby</article-title>
.
<source>Proceedings of the National Academy of Sciences of the USA</source>
<volume>105</volume>
:
<fpage>809</fpage>
<lpage>813</lpage>
</mixed-citation>
</ref>
<ref id="pone.0069474-Farroni2">
<label>90</label>
<mixed-citation publication-type="journal">
<name>
<surname>Farroni</surname>
<given-names>T</given-names>
</name>
,
<name>
<surname>Csibra</surname>
<given-names>G</given-names>
</name>
,
<name>
<surname>Simion</surname>
<given-names>F</given-names>
</name>
,
<name>
<surname>Johnson</surname>
<given-names>M</given-names>
</name>
(
<year>2002</year>
)
<article-title>Eye contact detection in humans from birth</article-title>
.
<source>Proceedings of the National Academy of Sciences of the USA</source>
<volume>99</volume>
:
<fpage>9602</fpage>
<lpage>9605</lpage>
</mixed-citation>
</ref>
<ref id="pone.0069474-Streri1">
<label>91</label>
<mixed-citation publication-type="journal">
<name>
<surname>Streri</surname>
<given-names>A</given-names>
</name>
,
<name>
<surname>Lhote</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Dutilleul</surname>
<given-names>S</given-names>
</name>
(
<year>2000</year>
)
<article-title>Haptic perception in newborns</article-title>
.
<source>Developmental Science</source>
<volume>3</volume>
:
<fpage>319</fpage>
<lpage>327</lpage>
</mixed-citation>
</ref>
<ref id="pone.0069474-Takeshita1">
<label>92</label>
<mixed-citation publication-type="other">Takeshita H, Myowa-Yamakoshi M, Hirata S (2006) A new comparative perspective on prenatal motor behaviors: Preliminary research with four-dimensional (4D) ultrasonography. In: T Matsuzawa, M Tomonaga, M Tanaka (eds), Cognitive development in chimpanzees. Tokyo: Springer-Verlag: 37–47.</mixed-citation>
</ref>
<ref id="pone.0069474-Shibata1">
<label>93</label>
<mixed-citation publication-type="journal">
<name>
<surname>Shibata</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Fuchino</surname>
<given-names>Y</given-names>
</name>
,
<name>
<surname>Naoi</surname>
<given-names>N</given-names>
</name>
,
<name>
<surname>Kohno</surname>
<given-names>S</given-names>
</name>
,
<name>
<surname>Kawai</surname>
<given-names>M</given-names>
</name>
,
<etal>et al</etal>
(
<year>2012</year>
)
<article-title>Broad cortical activation in response to tactile stimulation in newborns</article-title>
.
<source>NeuroReport</source>
<volume>23</volume>
:
<fpage>373</fpage>
<lpage>377</lpage>
</mixed-citation>
</ref>
<ref id="pone.0069474-MyowaYamakoshi1">
<label>94</label>
<mixed-citation publication-type="journal">
<name>
<surname>Myowa-Yamakoshi</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Takeshita</surname>
<given-names>H</given-names>
</name>
(
<year>2006</year>
)
<article-title>Do human fetuses anticipate self-directed actions? A study by four-dimensional (4D) ultrasonography</article-title>
.
<source>Infancy</source>
<volume>10</volume>
:
<fpage>289</fpage>
<lpage>301</lpage>
</mixed-citation>
</ref>
<ref id="pone.0069474-Nagy2">
<label>95</label>
<mixed-citation publication-type="journal">
<name>
<surname>Nagy</surname>
<given-names>E</given-names>
</name>
,
<name>
<surname>Molnar</surname>
<given-names>P</given-names>
</name>
(
<year>2004</year>
)
<article-title>Homo imitans or Homo provocans? Human imprinting model of neonatal imitation</article-title>
.
<source>Infant Behavior and Development</source>
<volume>27</volume>
:
<fpage>54</fpage>
<lpage>63</lpage>
</mixed-citation>
</ref>
<ref id="pone.0069474-Neil1">
<label>96</label>
<mixed-citation publication-type="journal">
<name>
<surname>Neil</surname>
<given-names>PA</given-names>
</name>
,
<name>
<surname>Chee-Ruiter</surname>
<given-names>C</given-names>
</name>
,
<name>
<surname>Scheier</surname>
<given-names>C</given-names>
</name>
,
<name>
<surname>Lewkowicz</surname>
<given-names>DJ</given-names>
</name>
,
<name>
<surname>Shimojo</surname>
<given-names>S</given-names>
</name>
(
<year>2006</year>
)
<article-title>Development of multisensory spatial integration and perception in humans</article-title>
.
<source>Developmental Science</source>
<volume>9</volume>
:
<fpage>454</fpage>
<lpage>464</lpage>
<pub-id pub-id-type="pmid">16911447</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0069474-SalihagicKadic1">
<label>97</label>
<mixed-citation publication-type="journal">
<name>
<surname>Salihagic Kadic</surname>
<given-names>A</given-names>
</name>
,
<name>
<surname>Predojevic</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>Kurjak</surname>
<given-names>A</given-names>
</name>
(
<year>2008</year>
)
<article-title>Advances in fetal neurophysiology</article-title>
.
<source>Journal of Ultrasound in Obstetrics and Gynecology</source>
<volume>2</volume>
:
<fpage>19</fpage>
<lpage>34</lpage>
</mixed-citation>
</ref>
<ref id="pone.0069474-Bremner1">
<label>98</label>
<mixed-citation publication-type="journal">
<name>
<surname>Bremner</surname>
<given-names>A</given-names>
</name>
,
<name>
<surname>Holmes</surname>
<given-names>N</given-names>
</name>
,
<name>
<surname>Spence</surname>
<given-names>C</given-names>
</name>
(
<year>2008</year>
)
<article-title>Infants lost in (peripersonal) space?</article-title>
<source>Trends in Cognitive Sciences</source>
<volume>12</volume>
:
<fpage>298</fpage>
<lpage>305</lpage>
<pub-id pub-id-type="pmid">18606563</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0069474-Pitti3">
<label>99</label>
<mixed-citation publication-type="other">Pitti A, Kuniyoshi Y (2012) Neural models for social development in shared parieto-motor circuits. Book Chapter 11 in Horizons in Neuroscience Research Volume 6, Nova Science Publishers: 247–282.</mixed-citation>
</ref>
<ref id="pone.0069474-Andersen1">
<label>100</label>
<mixed-citation publication-type="journal">
<name>
<surname>Andersen</surname>
<given-names>R</given-names>
</name>
,
<name>
<surname>Snyder</surname>
<given-names>L</given-names>
</name>
,
<name>
<surname>Li</surname>
<given-names>CS</given-names>
</name>
,
<name>
<surname>Stricanne</surname>
<given-names>B</given-names>
</name>
(
<year>1993</year>
)
<article-title>Coordinate transformations in the representation of spatial information</article-title>
.
<source>Current Opinion in Neurobiology</source>
<volume>3</volume>
:
<fpage>171</fpage>
<lpage>176</lpage>
<pub-id pub-id-type="pmid">8513228</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0069474-Pouget1">
<label>101</label>
<mixed-citation publication-type="journal">
<name>
<surname>Pouget</surname>
<given-names>A</given-names>
</name>
,
<name>
<surname>Snyder</surname>
<given-names>L</given-names>
</name>
(
<year>1997</year>
)
<article-title>Spatial transformations in the parietal cortex using basis functions</article-title>
.
<source>Journal of Cognitive Neuroscience</source>
<volume>3</volume>
:
<fpage>1192</fpage>
<lpage>1198</lpage>
</mixed-citation>
</ref>
<ref id="pone.0069474-Salinas1">
<label>102</label>
<mixed-citation publication-type="journal">
<name>
<surname>Salinas</surname>
<given-names>E</given-names>
</name>
,
<name>
<surname>Thier</surname>
<given-names>P</given-names>
</name>
(
<year>2000</year>
)
<article-title>Gain modulation: A major computational principle of the central nervous system</article-title>
.
<source>Neuron</source>
<volume>27</volume>
:
<fpage>15</fpage>
<lpage>21</lpage>
<pub-id pub-id-type="pmid">10939327</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0069474-Pitti4">
<label>103</label>
<mixed-citation publication-type="other">Pitti A, Blanchard A, Cardinaux M, Gaussier P (2012) Gain-field modulation mechanism in multimodal networks for spatial perception. IEEE-RAS Int Conf on Humanoids Robots: 297–307.</mixed-citation>
</ref>
</ref-list>
</back>
</pmc>
<affiliations>
<list>
<country>
<li>France</li>
<li>Japon</li>
</country>
<settlement>
<li>Tokyo</li>
</settlement>
</list>
<tree>
<country name="France">
<noRegion>
<name sortKey="Pitti, Alexandre" sort="Pitti, Alexandre" uniqKey="Pitti A" first="Alexandre" last="Pitti">Alexandre Pitti</name>
</noRegion>
<name sortKey="Gaussier, Philippe" sort="Gaussier, Philippe" uniqKey="Gaussier P" first="Philippe" last="Gaussier">Philippe Gaussier</name>
<name sortKey="Quoy, Mathias" sort="Quoy, Mathias" uniqKey="Quoy M" first="Mathias" last="Quoy">Mathias Quoy</name>
</country>
<country name="Japon">
<noRegion>
<name sortKey="Kuniyoshi, Yasuo" sort="Kuniyoshi, Yasuo" uniqKey="Kuniyoshi Y" first="Yasuo" last="Kuniyoshi">Yasuo Kuniyoshi</name>
</noRegion>
</country>
</tree>
</affiliations>
</record>

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/HapticV1/Data/Ncbi/Merge
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 002848 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/Ncbi/Merge/biblio.hfd -nk 002848 | SxmlIndent | more

To add a link to this page in the Wicri network

{{Explor lien
   |wiki=    Ticri/CIDE
   |area=    HapticV1
   |flux=    Ncbi
   |étape=   Merge
   |type=    RBID
   |clé=     PMC:3724856
   |texte=   Modeling the Minimal Newborn's Intersubjective Mind: The Visuotopic-Somatotopic Alignment Hypothesis in the Superior Colliculus
}}

To generate wiki pages

HfdIndexSelect -h $EXPLOR_AREA/Data/Ncbi/Merge/RBID.i   -Sk "pubmed:23922718" \
       | HfdSelect -Kh $EXPLOR_AREA/Data/Ncbi/Merge/biblio.hfd   \
       | NlmPubMed2Wicri -a HapticV1 

Wicri

This area was generated with Dilib version V0.6.23.
Data generation: Mon Jun 13 01:09:46 2016. Site generation: Wed Mar 6 09:54:07 2024