Exploration server on haptic devices

Warning: this site is under development!
Warning: this site is generated by automated means from raw corpora.
The information has therefore not been validated.

Audio-visual speech perception: a developmental ERP investigation

Internal identifier: 002A45 (Ncbi/Merge); previous: 002A44; next: 002A46

Audio-visual speech perception: a developmental ERP investigation

Authors: Victoria Cp Knowland [United Kingdom]; Evelyne Mercure; Annette Karmiloff-Smith [United Kingdom]; Fred Dick [United Kingdom]; Michael Sc Thomas [United Kingdom]

Source:

RBID: PMC:3995015

Abstract

Being able to see a talking face confers a considerable advantage for speech perception in adulthood. However, behavioural data currently suggest that children fail to make full use of these available visual speech cues until age 8 or 9. This is particularly surprising given the potential utility of multiple informational cues during language learning. We therefore explored this at the neural level. The event-related potential (ERP) technique has been used to assess the mechanisms of audio-visual speech perception in adults, with visual cues reliably modulating auditory ERP responses to speech. Previous work has shown congruence-dependent shortening of auditory N1/P2 latency and congruence-independent attenuation of amplitude in the presence of auditory and visual speech signals, compared to auditory alone. The aim of this study was to chart the development of these well-established modulatory effects over mid-to-late childhood. Experiment 1 employed an adult sample to validate a child-friendly stimulus set and paradigm by replicating previously observed effects of N1/P2 amplitude and latency modulation by visual speech cues; it also revealed greater attenuation of component amplitude given incongruent audio-visual stimuli, pointing to a new interpretation of the amplitude modulation effect. Experiment 2 used the same paradigm to map cross-sectional developmental change in these ERP responses between 6 and 11 years of age. The effect of amplitude modulation by visual cues emerged over development, while the effect of latency modulation was stable over the child sample. These data suggest that auditory ERP modulation by visual speech represents separable underlying cognitive processes, some of which show earlier maturation than others over the course of development.


Url:
DOI: 10.1111/desc.12098
PubMed: 24176002
PubMed Central: 3995015

Links toward previous steps (curation, corpus...)


Links to Exploration step

PMC:3995015

The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">Audio-visual speech perception: a developmental ERP investigation</title>
<author>
<name sortKey="Knowland, Victoria Cp" sort="Knowland, Victoria Cp" uniqKey="Knowland V" first="Victoria Cp" last="Knowland">Victoria Cp Knowland</name>
<affiliation wicri:level="1">
<nlm:aff id="au1">
<institution>School of Health Sciences, City University</institution>
<addr-line>London, UK</addr-line>
</nlm:aff>
<country xml:lang="fr">Royaume-Uni</country>
<wicri:regionArea>London</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="au2">
<institution>Department of Psychological Sciences, Birkbeck College</institution>
<addr-line>London, UK</addr-line>
</nlm:aff>
<country xml:lang="fr">Royaume-Uni</country>
<wicri:regionArea>London</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Mercure, Evelyne" sort="Mercure, Evelyne" uniqKey="Mercure E" first="Evelyne" last="Mercure">Evelyne Mercure</name>
<affiliation>
<nlm:aff id="au3">
<institution>Institute of Cognitive Neuroscience, UCL</institution>
<addr-line>UK</addr-line>
</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Karmiloff Smith, Annette" sort="Karmiloff Smith, Annette" uniqKey="Karmiloff Smith A" first="Annette" last="Karmiloff-Smith">Annette Karmiloff-Smith</name>
<affiliation wicri:level="1">
<nlm:aff id="au4">
<institution>Department of Psychological Sciences, Birkbeck College</institution>
<addr-line>London, UK</addr-line>
</nlm:aff>
<country xml:lang="fr">Royaume-Uni</country>
<wicri:regionArea>London</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Dick, Fred" sort="Dick, Fred" uniqKey="Dick F" first="Fred" last="Dick">Fred Dick</name>
<affiliation wicri:level="1">
<nlm:aff id="au4">
<institution>Department of Psychological Sciences, Birkbeck College</institution>
<addr-line>London, UK</addr-line>
</nlm:aff>
<country xml:lang="fr">Royaume-Uni</country>
<wicri:regionArea>London</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Thomas, Michael Sc" sort="Thomas, Michael Sc" uniqKey="Thomas M" first="Michael Sc" last="Thomas">Michael Sc Thomas</name>
<affiliation wicri:level="1">
<nlm:aff id="au4">
<institution>Department of Psychological Sciences, Birkbeck College</institution>
<addr-line>London, UK</addr-line>
</nlm:aff>
<country xml:lang="fr">Royaume-Uni</country>
<wicri:regionArea>London</wicri:regionArea>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PMC</idno>
<idno type="pmid">24176002</idno>
<idno type="pmc">3995015</idno>
<idno type="url">http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3995015</idno>
<idno type="RBID">PMC:3995015</idno>
<idno type="doi">10.1111/desc.12098</idno>
<date when="2013">2013</date>
<idno type="wicri:Area/Pmc/Corpus">001C78</idno>
<idno type="wicri:Area/Pmc/Curation">001C78</idno>
<idno type="wicri:Area/Pmc/Checkpoint">001396</idno>
<idno type="wicri:Area/Ncbi/Merge">002A45</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a" type="main">Audio-visual speech perception: a developmental ERP investigation</title>
<author>
<name sortKey="Knowland, Victoria Cp" sort="Knowland, Victoria Cp" uniqKey="Knowland V" first="Victoria Cp" last="Knowland">Victoria Cp Knowland</name>
<affiliation wicri:level="1">
<nlm:aff id="au1">
<institution>School of Health Sciences, City University</institution>
<addr-line>London, UK</addr-line>
</nlm:aff>
<country xml:lang="fr">Royaume-Uni</country>
<wicri:regionArea>London</wicri:regionArea>
</affiliation>
<affiliation wicri:level="1">
<nlm:aff id="au2">
<institution>Department of Psychological Sciences, Birkbeck College</institution>
<addr-line>London, UK</addr-line>
</nlm:aff>
<country xml:lang="fr">Royaume-Uni</country>
<wicri:regionArea>London</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Mercure, Evelyne" sort="Mercure, Evelyne" uniqKey="Mercure E" first="Evelyne" last="Mercure">Evelyne Mercure</name>
<affiliation>
<nlm:aff id="au3">
<institution>Institute of Cognitive Neuroscience, UCL</institution>
<addr-line>UK</addr-line>
</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Karmiloff Smith, Annette" sort="Karmiloff Smith, Annette" uniqKey="Karmiloff Smith A" first="Annette" last="Karmiloff-Smith">Annette Karmiloff-Smith</name>
<affiliation wicri:level="1">
<nlm:aff id="au4">
<institution>Department of Psychological Sciences, Birkbeck College</institution>
<addr-line>London, UK</addr-line>
</nlm:aff>
<country xml:lang="fr">Royaume-Uni</country>
<wicri:regionArea>London</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Dick, Fred" sort="Dick, Fred" uniqKey="Dick F" first="Fred" last="Dick">Fred Dick</name>
<affiliation wicri:level="1">
<nlm:aff id="au4">
<institution>Department of Psychological Sciences, Birkbeck College</institution>
<addr-line>London, UK</addr-line>
</nlm:aff>
<country xml:lang="fr">Royaume-Uni</country>
<wicri:regionArea>London</wicri:regionArea>
</affiliation>
</author>
<author>
<name sortKey="Thomas, Michael Sc" sort="Thomas, Michael Sc" uniqKey="Thomas M" first="Michael Sc" last="Thomas">Michael Sc Thomas</name>
<affiliation wicri:level="1">
<nlm:aff id="au4">
<institution>Department of Psychological Sciences, Birkbeck College</institution>
<addr-line>London, UK</addr-line>
</nlm:aff>
<country xml:lang="fr">Royaume-Uni</country>
<wicri:regionArea>London</wicri:regionArea>
</affiliation>
</author>
</analytic>
<series>
<title level="j">Developmental Science</title>
<idno type="ISSN">1363-755X</idno>
<idno type="eISSN">1467-7687</idno>
<imprint>
<date when="2013">2013</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass></textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">
<p>Being able to see a talking face confers a considerable advantage for speech perception in adulthood. However, behavioural data currently suggest that children fail to make full use of these available
<italic>visual speech cues</italic>
until age 8 or 9. This is particularly surprising given the potential utility of multiple informational cues during language learning. We therefore explored this at the neural level. The event-related potential (ERP) technique has been used to assess the mechanisms of audio-visual speech perception in adults, with visual cues reliably modulating auditory ERP responses to speech. Previous work has shown congruence-dependent shortening of auditory N1/P2 latency and congruence-independent attenuation of amplitude in the presence of auditory and visual speech signals, compared to auditory alone. The aim of this study was to chart the development of these well-established modulatory effects over mid-to-late childhood. Experiment 1 employed an adult sample to validate a child-friendly stimulus set and paradigm by replicating previously observed effects of N1/P2 amplitude and latency modulation by visual speech cues; it also revealed greater attenuation of component amplitude given incongruent audio-visual stimuli, pointing to a new interpretation of the amplitude modulation effect. Experiment 2 used the same paradigm to map cross-sectional developmental change in these ERP responses between 6 and 11 years of age. The effect of amplitude modulation by visual cues emerged over development, while the effect of latency modulation was stable over the child sample. These data suggest that auditory ERP modulation by visual speech represents separable underlying cognitive processes, some of which show earlier maturation than others over the course of development.</p>
</div>
</front>
<back>
<div1 type="bibliography">
<listBibl>
<biblStruct>
<analytic>
<author>
<name sortKey="Alais, D" uniqKey="Alais D">D Alais</name>
</author>
<author>
<name sortKey="Burr, D" uniqKey="Burr D">D Burr</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bernstein, Le" uniqKey="Bernstein L">LE Bernstein</name>
</author>
<author>
<name sortKey="Auer, Et" uniqKey="Auer E">ET Auer</name>
</author>
<author>
<name sortKey="Wagner, M" uniqKey="Wagner M">M Wagner</name>
</author>
<author>
<name sortKey="Ponton, Cw" uniqKey="Ponton C">CW Ponton</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Besle, J" uniqKey="Besle J">J Besle</name>
</author>
<author>
<name sortKey="Bertrand, O" uniqKey="Bertrand O">O Bertrand</name>
</author>
<author>
<name sortKey="Giard, Mh" uniqKey="Giard M">MH Giard</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Besle, J" uniqKey="Besle J">J Besle</name>
</author>
<author>
<name sortKey="Fischer, C" uniqKey="Fischer C">C Fischer</name>
</author>
<author>
<name sortKey="Bidet Caulet, A" uniqKey="Bidet Caulet A">A Bidet-Caulet</name>
</author>
<author>
<name sortKey="Lecaignard, F" uniqKey="Lecaignard F">F Lecaignard</name>
</author>
<author>
<name sortKey="Bertrand, O" uniqKey="Bertrand O">O Bertrand</name>
</author>
<author>
<name sortKey="Giard, M H" uniqKey="Giard M">M-H Giard</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Besle, J" uniqKey="Besle J">J Besle</name>
</author>
<author>
<name sortKey="Fort, A" uniqKey="Fort A">A Fort</name>
</author>
<author>
<name sortKey="Delpuech, C" uniqKey="Delpuech C">C Delpuech</name>
</author>
<author>
<name sortKey="Giard, M H" uniqKey="Giard M">M-H Giard</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bishop, Dvm" uniqKey="Bishop D">DVM Bishop</name>
</author>
<author>
<name sortKey="Hardiman, M" uniqKey="Hardiman M">M Hardiman</name>
</author>
<author>
<name sortKey="Uwer, R" uniqKey="Uwer R">R Uwer</name>
</author>
<author>
<name sortKey="Von Suchodeltz, W" uniqKey="Von Suchodeltz W">W von Suchodeltz</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bristow, D" uniqKey="Bristow D">D Bristow</name>
</author>
<author>
<name sortKey="Dehaene Lambertz, G" uniqKey="Dehaene Lambertz G">G Dehaene-Lambertz</name>
</author>
<author>
<name sortKey="Mattout, J" uniqKey="Mattout J">J Mattout</name>
</author>
<author>
<name sortKey="Soares, C" uniqKey="Soares C">C Soares</name>
</author>
<author>
<name sortKey="Gilga, T" uniqKey="Gilga T">T Gilga</name>
</author>
<author>
<name sortKey="Baillet, S" uniqKey="Baillet S">S Baillet</name>
</author>
<author>
<name sortKey="Mangin, F" uniqKey="Mangin F">F Mangin</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Burnham, D" uniqKey="Burnham D">D Burnham</name>
</author>
<author>
<name sortKey="Dodd, B" uniqKey="Dodd B">B Dodd</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bushara, Ko" uniqKey="Bushara K">KO Bushara</name>
</author>
<author>
<name sortKey="Hanawaka, T" uniqKey="Hanawaka T">T Hanawaka</name>
</author>
<author>
<name sortKey="Immisch, I" uniqKey="Immisch I">I Immisch</name>
</author>
<author>
<name sortKey="Toma, K" uniqKey="Toma K">K Toma</name>
</author>
<author>
<name sortKey="Kansaku, K" uniqKey="Kansaku K">K Kansaku</name>
</author>
<author>
<name sortKey="Hallett, M" uniqKey="Hallett M">M Hallett</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Callan, De" uniqKey="Callan D">DE Callan</name>
</author>
<author>
<name sortKey="Jones, Ja" uniqKey="Jones J">JA Jones</name>
</author>
<author>
<name sortKey="Munhall, K" uniqKey="Munhall K">K Munhall</name>
</author>
<author>
<name sortKey="Kroos, C" uniqKey="Kroos C">C Kroos</name>
</author>
<author>
<name sortKey="Callan, Am" uniqKey="Callan A">AM Callan</name>
</author>
<author>
<name sortKey="Vatikiotis Bateson, E" uniqKey="Vatikiotis Bateson E">E Vatikiotis-Bateson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Callaway, E" uniqKey="Callaway E">E Callaway</name>
</author>
<author>
<name sortKey="Halliday, R" uniqKey="Halliday R">R Halliday</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Calvert, G" uniqKey="Calvert G">G Calvert</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Calvert, Ga" uniqKey="Calvert G">GA Calvert</name>
</author>
<author>
<name sortKey="Bullmore, E" uniqKey="Bullmore E">E Bullmore</name>
</author>
<author>
<name sortKey="Brammer, Mj" uniqKey="Brammer M">MJ Brammer</name>
</author>
<author>
<name sortKey="Campbell, R" uniqKey="Campbell R">R Campbell</name>
</author>
<author>
<name sortKey="Woodruff, P" uniqKey="Woodruff P">P Woodruff</name>
</author>
<author>
<name sortKey="Mcguire, P" uniqKey="Mcguire P">P McGuire</name>
</author>
<author>
<name sortKey="Williams, S" uniqKey="Williams S">S Williams</name>
</author>
<author>
<name sortKey="Iversen, Sd" uniqKey="Iversen S">SD Iversen</name>
</author>
<author>
<name sortKey="David, As" uniqKey="David A">AS David</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Calvert, Ga" uniqKey="Calvert G">GA Calvert</name>
</author>
<author>
<name sortKey="Campbell, R" uniqKey="Campbell R">R Campbell</name>
</author>
<author>
<name sortKey="Brammer, Mj" uniqKey="Brammer M">MJ Brammer</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Campbell, R" uniqKey="Campbell R">R Campbell</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Capek, Cm" uniqKey="Capek C">CM Capek</name>
</author>
<author>
<name sortKey="Bavelier, D" uniqKey="Bavelier D">D Bavelier</name>
</author>
<author>
<name sortKey="Corina, D" uniqKey="Corina D">D Corina</name>
</author>
<author>
<name sortKey="Newman, Aj" uniqKey="Newman A">AJ Newman</name>
</author>
<author>
<name sortKey="Jezzard, P" uniqKey="Jezzard P">P Jezzard</name>
</author>
<author>
<name sortKey="Neville, Hj" uniqKey="Neville H">HJ Neville</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Chandrasekaran, C" uniqKey="Chandrasekaran C">C Chandrasekaran</name>
</author>
<author>
<name sortKey="Trubanova, A" uniqKey="Trubanova A">A Trubanova</name>
</author>
<author>
<name sortKey="Stillittano, S" uniqKey="Stillittano S">S Stillittano</name>
</author>
<author>
<name sortKey="Caplier, A" uniqKey="Caplier A">A Caplier</name>
</author>
<author>
<name sortKey="Ghazanfar, Aa" uniqKey="Ghazanfar A">AA Ghazanfar</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Desjardins, R" uniqKey="Desjardins R">R Desjardins</name>
</author>
<author>
<name sortKey="Werker, Jf" uniqKey="Werker J">JF Werker</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Dick, As" uniqKey="Dick A">AS Dick</name>
</author>
<author>
<name sortKey="Solodkin, A" uniqKey="Solodkin A">A Solodkin</name>
</author>
<author>
<name sortKey="Small, S" uniqKey="Small S">S Small</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Fort, M" uniqKey="Fort M">M Fort</name>
</author>
<author>
<name sortKey="Spinelli, E" uniqKey="Spinelli E">E Spinelli</name>
</author>
<author>
<name sortKey="Savariaux, C" uniqKey="Savariaux C">C Savariaux</name>
</author>
<author>
<name sortKey="Kandel, S" uniqKey="Kandel S">S Kandel</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Giard, M H" uniqKey="Giard M">M-H Giard</name>
</author>
<author>
<name sortKey="Perrin, F" uniqKey="Perrin F">F Perrin</name>
</author>
<author>
<name sortKey="Echallier, Jf" uniqKey="Echallier J">JF Echallier</name>
</author>
<author>
<name sortKey="Thevenet, M" uniqKey="Thevenet M">M Thevenet</name>
</author>
<author>
<name sortKey="Fromenet, Jc" uniqKey="Fromenet J">JC Fromenet</name>
</author>
<author>
<name sortKey="Pernier, J" uniqKey="Pernier J">J Pernier</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gori, M" uniqKey="Gori M">M Gori</name>
</author>
<author>
<name sortKey="Del Viva, M" uniqKey="Del Viva M">M Del Viva</name>
</author>
<author>
<name sortKey="Sandini, G" uniqKey="Sandini G">G Sandini</name>
</author>
<author>
<name sortKey="Burr, Dc" uniqKey="Burr D">DC Burr</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gotgay, N" uniqKey="Gotgay N">N Gotgay</name>
</author>
<author>
<name sortKey="Giedd, J" uniqKey="Giedd J">J Giedd</name>
</author>
<author>
<name sortKey="Lusk, L" uniqKey="Lusk L">L Lusk</name>
</author>
<author>
<name sortKey="Hayashi, Km" uniqKey="Hayashi K">KM Hayashi</name>
</author>
<author>
<name sortKey="Greenstein, D" uniqKey="Greenstein D">D Greenstein</name>
</author>
<author>
<name sortKey="Vaituzis, Ac" uniqKey="Vaituzis A">AC Vaituzis</name>
</author>
<author>
<name sortKey="Nugent, Tf" uniqKey="Nugent T">TF Nugent</name>
</author>
<author>
<name sortKey="Iii Herman, Dh" uniqKey="Iii Herman D">DH III Herman</name>
</author>
<author>
<name sortKey="Clasen, Ls" uniqKey="Clasen L">LS Clasen</name>
</author>
<author>
<name sortKey="Toga, Aw" uniqKey="Toga A">AW Toga</name>
</author>
<author>
<name sortKey="Rapopport, Jl" uniqKey="Rapopport J">JL Rapopport</name>
</author>
<author>
<name sortKey="Thompson, Pm" uniqKey="Thompson P">PM Thompson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Grant, Kw" uniqKey="Grant K">KW Grant</name>
</author>
<author>
<name sortKey="Greenberg, S" uniqKey="Greenberg S">S Greenberg</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Grant, Kw" uniqKey="Grant K">KW Grant</name>
</author>
<author>
<name sortKey="Seitz, Pf" uniqKey="Seitz P">PF Seitz</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Green, Kp" uniqKey="Green K">KP Green</name>
</author>
<author>
<name sortKey="Kuhl, Pk" uniqKey="Kuhl P">PK Kuhl</name>
</author>
<author>
<name sortKey="Meltzoff, An" uniqKey="Meltzoff A">AN Meltzoff</name>
</author>
<author>
<name sortKey="Stevens, Eb" uniqKey="Stevens E">EB Stevens</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hall, Da" uniqKey="Hall D">DA Hall</name>
</author>
<author>
<name sortKey="Fussell, C" uniqKey="Fussell C">C Fussell</name>
</author>
<author>
<name sortKey="Summerfield, Aq" uniqKey="Summerfield A">AQ Summerfield</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hoonhorst, I" uniqKey="Hoonhorst I">I Hoonhorst</name>
</author>
<author>
<name sortKey="Serniclaes, W" uniqKey="Serniclaes W">W Serniclaes</name>
</author>
<author>
<name sortKey="Collet, G" uniqKey="Collet G">G Collet</name>
</author>
<author>
<name sortKey="Colin, C" uniqKey="Colin C">C Colin</name>
</author>
<author>
<name sortKey="Markessis, E" uniqKey="Markessis E">E Markessis</name>
</author>
<author>
<name sortKey="Radeau, M" uniqKey="Radeau M">M Radeau</name>
</author>
<author>
<name sortKey="Deltenrea, P" uniqKey="Deltenrea P">P Deltenrea</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hocking, J" uniqKey="Hocking J">J Hocking</name>
</author>
<author>
<name sortKey="Price, Cj" uniqKey="Price C">CJ Price</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hockley, Ns" uniqKey="Hockley N">NS Hockley</name>
</author>
<author>
<name sortKey="Polka, L" uniqKey="Polka L">L Polka</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Jasper, Hh" uniqKey="Jasper H">HH Jasper</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Jerger, S" uniqKey="Jerger S">S Jerger</name>
</author>
<author>
<name sortKey="Damian, Mf" uniqKey="Damian M">MF Damian</name>
</author>
<author>
<name sortKey="Spence, Mj" uniqKey="Spence M">MJ Spence</name>
</author>
<author>
<name sortKey="Tye Murray, N" uniqKey="Tye Murray N">N Tye-Murray</name>
</author>
<author>
<name sortKey="Abdi, H" uniqKey="Abdi H">H Abdi</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kawabe, T" uniqKey="Kawabe T">T Kawabe</name>
</author>
<author>
<name sortKey="Shirai, N" uniqKey="Shirai N">N Shirai</name>
</author>
<author>
<name sortKey="Wada, Y" uniqKey="Wada Y">Y Wada</name>
</author>
<author>
<name sortKey="Miura, K" uniqKey="Miura K">K Miura</name>
</author>
<author>
<name sortKey="Kanazawa, S" uniqKey="Kanazawa S">S Kanazawa</name>
</author>
<author>
<name sortKey="Yamaguci, Mk" uniqKey="Yamaguci M">MK Yamaguci</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kim, J" uniqKey="Kim J">J Kim</name>
</author>
<author>
<name sortKey="Davis, C" uniqKey="Davis C">C Davis</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Klucharev, K" uniqKey="Klucharev K">K Klucharev</name>
</author>
<author>
<name sortKey="Mottonen, R" uniqKey="Mottonen R">R Mottonen</name>
</author>
<author>
<name sortKey="Sams, M" uniqKey="Sams M">M Sams</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kuhl, P" uniqKey="Kuhl P">P Kuhl</name>
</author>
<author>
<name sortKey="Meltzoff, A" uniqKey="Meltzoff A">A Meltzoff</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kuperman, V" uniqKey="Kuperman V">V Kuperman</name>
</author>
<author>
<name sortKey="Stadthagen Gonzalez, H" uniqKey="Stadthagen Gonzalez H">H Stadthagen-Gonzalez</name>
</author>
<author>
<name sortKey="Brysbaert, M" uniqKey="Brysbaert M">M Brysbaert</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kushnerenko, E" uniqKey="Kushnerenko E">E Kushnerenko</name>
</author>
<author>
<name sortKey="Teinonen, T" uniqKey="Teinonen T">T Teinonen</name>
</author>
<author>
<name sortKey="Volien, A" uniqKey="Volien A">A Volien</name>
</author>
<author>
<name sortKey="Csibra, G" uniqKey="Csibra G">G Csibra</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lange, K" uniqKey="Lange K">K Lange</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Leech, R" uniqKey="Leech R">R Leech</name>
</author>
<author>
<name sortKey="Holt, Ll" uniqKey="Holt L">LL Holt</name>
</author>
<author>
<name sortKey="Devlin, Jt" uniqKey="Devlin J">JT Devlin</name>
</author>
<author>
<name sortKey="Dick, F" uniqKey="Dick F">F Dick</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lenroot, Rk" uniqKey="Lenroot R">RK Lenroot</name>
</author>
<author>
<name sortKey="Giedd, Jn" uniqKey="Giedd J">JN Giedd</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lewkowicz, Dj" uniqKey="Lewkowicz D">DJ Lewkowicz</name>
</author>
<author>
<name sortKey="Hansen Tift, Am" uniqKey="Hansen Tift A">AM Hansen-Tift</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Liebenthal, E" uniqKey="Liebenthal E">E Liebenthal</name>
</author>
<author>
<name sortKey="Desai, R" uniqKey="Desai R">R Desai</name>
</author>
<author>
<name sortKey="Ellinson, Mm" uniqKey="Ellinson M">MM Ellinson</name>
</author>
<author>
<name sortKey="Ramachandran, B" uniqKey="Ramachandran B">B Ramachandran</name>
</author>
<author>
<name sortKey="Desai, A" uniqKey="Desai A">A Desai</name>
</author>
<author>
<name sortKey="Binder, Jr" uniqKey="Binder J">JR Binder</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lippe, S" uniqKey="Lippe S">S Lippe</name>
</author>
<author>
<name sortKey="Kovacevic, N" uniqKey="Kovacevic N">N Kovacevic</name>
</author>
<author>
<name sortKey="Mcintosh, Ar" uniqKey="Mcintosh A">AR McIntosh</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mcgurk, H" uniqKey="Mcgurk H">H McGurk</name>
</author>
<author>
<name sortKey="Macdonald, J" uniqKey="Macdonald J">J MacDonald</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Martin, L" uniqKey="Martin L">L Martin</name>
</author>
<author>
<name sortKey="Barajas, Jj" uniqKey="Barajas J">JJ Barajas</name>
</author>
<author>
<name sortKey="Fernandez, R" uniqKey="Fernandez R">R Fernandez</name>
</author>
<author>
<name sortKey="Torres, E" uniqKey="Torres E">E Torres</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Massaro, D" uniqKey="Massaro D">D Massaro</name>
</author>
<author>
<name sortKey="Thompson, L" uniqKey="Thompson L">L Thompson</name>
</author>
<author>
<name sortKey="Barron, B" uniqKey="Barron B">B Barron</name>
</author>
<author>
<name sortKey="Laren, E" uniqKey="Laren E">E Laren</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Moore, Jk" uniqKey="Moore J">JK Moore</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Musacchia, G" uniqKey="Musacchia G">G Musacchia</name>
</author>
<author>
<name sortKey="Sams, M" uniqKey="Sams M">M Sams</name>
</author>
<author>
<name sortKey="Nicol, T" uniqKey="Nicol T">T Nicol</name>
</author>
<author>
<name sortKey="Kraus, N" uniqKey="Kraus N">N Kraus</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Nath, Ar" uniqKey="Nath A">AR Nath</name>
</author>
<author>
<name sortKey="Beauchamp, Ms" uniqKey="Beauchamp M">MS Beauchamp</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Nath, Ar" uniqKey="Nath A">AR Nath</name>
</author>
<author>
<name sortKey="Fava, Ee" uniqKey="Fava E">EE Fava</name>
</author>
<author>
<name sortKey="Beauchamp, Ms" uniqKey="Beauchamp M">MS Beauchamp</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Pang, Ew" uniqKey="Pang E">EW Pang</name>
</author>
<author>
<name sortKey="Taylor, Mj" uniqKey="Taylor M">MJ Taylor</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Patterson, M" uniqKey="Patterson M">M Patterson</name>
</author>
<author>
<name sortKey="Werker, J" uniqKey="Werker J">J Werker</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Picton, Tw" uniqKey="Picton T">TW Picton</name>
</author>
<author>
<name sortKey="Hillyard, Sa" uniqKey="Hillyard S">SA Hillyard</name>
</author>
<author>
<name sortKey="Krausz, Hi" uniqKey="Krausz H">HI Krausz</name>
</author>
<author>
<name sortKey="Galambos, R" uniqKey="Galambos R">R Galambos</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Pilling, M" uniqKey="Pilling M">M Pilling</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ponton, C" uniqKey="Ponton C">C Ponton</name>
</author>
<author>
<name sortKey="Eggermont, Jj" uniqKey="Eggermont J">JJ Eggermont</name>
</author>
<author>
<name sortKey="Khosla, D" uniqKey="Khosla D">D Khosla</name>
</author>
<author>
<name sortKey="Kwong, B" uniqKey="Kwong B">B Kwong</name>
</author>
<author>
<name sortKey="Don, M" uniqKey="Don M">M Don</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Reale, Ra" uniqKey="Reale R">RA Reale</name>
</author>
<author>
<name sortKey="Calvert, Ga" uniqKey="Calvert G">GA Calvert</name>
</author>
<author>
<name sortKey="Thesen, T" uniqKey="Thesen T">T Thesen</name>
</author>
<author>
<name sortKey="Jenison, Rl" uniqKey="Jenison R">RL Jenison</name>
</author>
<author>
<name sortKey="Kawasaki, H" uniqKey="Kawasaki H">H Kawasaki</name>
</author>
<author>
<name sortKey="Oys, H" uniqKey="Oys H">H Oys</name>
</author>
<author>
<name sortKey="Howard, Ma" uniqKey="Howard M">MA Howard</name>
</author>
<author>
<name sortKey="Brugg, Jf" uniqKey="Brugg J">JF Brugg</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ritter, W" uniqKey="Ritter W">W Ritter</name>
</author>
<author>
<name sortKey="Simson, R" uniqKey="Simson R">R Simson</name>
</author>
<author>
<name sortKey="Vaughn, H" uniqKey="Vaughn H">H Vaughn</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Rosenblum, Ld" uniqKey="Rosenblum L">LD Rosenblum</name>
</author>
<author>
<name sortKey="Schmuckler, Ma" uniqKey="Schmuckler M">MA Schmuckler</name>
</author>
<author>
<name sortKey="Johnson, Ja" uniqKey="Johnson J">JA Johnson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ross, La" uniqKey="Ross L">LA Ross</name>
</author>
<author>
<name sortKey="Molholm, S" uniqKey="Molholm S">S Molholm</name>
</author>
<author>
<name sortKey="Blanco, D" uniqKey="Blanco D">D Blanco</name>
</author>
<author>
<name sortKey="Gomez Ramirez, M" uniqKey="Gomez Ramirez M">M Gomez-Ramirez</name>
</author>
<author>
<name sortKey="Saint Amour, D" uniqKey="Saint Amour D">D Saint-Amour</name>
</author>
<author>
<name sortKey="Foxe, J" uniqKey="Foxe J">J Foxe</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Skipper, Ji" uniqKey="Skipper J">JI Skipper</name>
</author>
<author>
<name sortKey="Nusbaum, Hc" uniqKey="Nusbaum H">HC Nusbaum</name>
</author>
<author>
<name sortKey="Small, Sl" uniqKey="Small S">SL Small</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Spreng, M" uniqKey="Spreng M">M Spreng</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Stekelenburg, Jj" uniqKey="Stekelenburg J">JJ Stekelenburg</name>
</author>
<author>
<name sortKey="Vroomen, J" uniqKey="Vroomen J">J Vroomen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Sumby, W" uniqKey="Sumby W">W Sumby</name>
</author>
<author>
<name sortKey="Pollack, I" uniqKey="Pollack I">I Pollack</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Tanabe, Hc" uniqKey="Tanabe H">HC Tanabe</name>
</author>
<author>
<name sortKey="Honda, M" uniqKey="Honda M">M Honda</name>
</author>
<author>
<name sortKey="Sadato, N" uniqKey="Sadato N">N Sadato</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Teder Salejarvi, Wa" uniqKey="Teder Salejarvi W">WA Teder-Salejarvi</name>
</author>
<author>
<name sortKey="Mcdonald, Jj" uniqKey="Mcdonald J">JJ McDonald</name>
</author>
<author>
<name sortKey="Dirusso, F" uniqKey="Dirusso F">F DiRusso</name>
</author>
<author>
<name sortKey="Hillyard, Sa" uniqKey="Hillyard S">SA Hillyard</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Teinonen, T" uniqKey="Teinonen T">T Teinonen</name>
</author>
<author>
<name sortKey="Aslin, R" uniqKey="Aslin R">R Aslin</name>
</author>
<author>
<name sortKey="Alku, P" uniqKey="Alku P">P Alku</name>
</author>
<author>
<name sortKey="Csibra, G" uniqKey="Csibra G">G Csibra</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Thomas, Msc" uniqKey="Thomas M">MSC Thomas</name>
</author>
<author>
<name sortKey="Annaz, D" uniqKey="Annaz D">D Annaz</name>
</author>
<author>
<name sortKey="Ansari, D" uniqKey="Ansari D">D Ansari</name>
</author>
<author>
<name sortKey="Serif, G" uniqKey="Serif G">G Serif</name>
</author>
<author>
<name sortKey="Jarrold, C" uniqKey="Jarrold C">C Jarrold</name>
</author>
<author>
<name sortKey="Karmiloff Smith, A" uniqKey="Karmiloff Smith A">A Karmiloff-Smith</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Thornton, Ard" uniqKey="Thornton A">ARD Thornton</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Tremblay, C" uniqKey="Tremblay C">C Tremblay</name>
</author>
<author>
<name sortKey="Champoux, F" uniqKey="Champoux F">F Champoux</name>
</author>
<author>
<name sortKey="Voss, P" uniqKey="Voss P">P Voss</name>
</author>
<author>
<name sortKey="Bacon, Ba" uniqKey="Bacon B">BA Bacon</name>
</author>
<author>
<name sortKey="Lapore, F" uniqKey="Lapore F">F Lapore</name>
</author>
<author>
<name sortKey="Theoret, H" uniqKey="Theoret H">H Theoret</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Van Wassenhove, V" uniqKey="Van Wassenhove V">V Van Wassenhove</name>
</author>
<author>
<name sortKey="Grant, Kw" uniqKey="Grant K">KW Grant</name>
</author>
<author>
<name sortKey="Poeppel, D" uniqKey="Poeppel D">D Poeppel</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Viswanathan, D" uniqKey="Viswanathan D">D Viswanathan</name>
</author>
<author>
<name sortKey="Jansen, Bh" uniqKey="Jansen B">BH Jansen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Wada, Y" uniqKey="Wada Y">Y Wada</name>
</author>
<author>
<name sortKey="Shirai, N" uniqKey="Shirai N">N Shirai</name>
</author>
<author>
<name sortKey="Midorikawa, A" uniqKey="Midorikawa A">A Midorikawa</name>
</author>
<author>
<name sortKey="Kanazawa, S" uniqKey="Kanazawa S">S Kanazawa</name>
</author>
<author>
<name sortKey="Dan, I" uniqKey="Dan I">I Dan</name>
</author>
<author>
<name sortKey="Yamaguchi, Mk" uniqKey="Yamaguchi M">MK Yamaguchi</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Wightman, F" uniqKey="Wightman F">F Wightman</name>
</author>
<author>
<name sortKey="Kistler, D" uniqKey="Kistler D">D Kistler</name>
</author>
<author>
<name sortKey="Brungart, D" uniqKey="Brungart D">D Brungart</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Wright, Tm" uniqKey="Wright T">TM Wright</name>
</author>
<author>
<name sortKey="Pelphrey, Ka" uniqKey="Pelphrey K">KA Pelphrey</name>
</author>
<author>
<name sortKey="Allison, T" uniqKey="Allison T">T Allison</name>
</author>
<author>
<name sortKey="Mckeown, Mj" uniqKey="Mckeown M">MJ McKeown</name>
</author>
<author>
<name sortKey="Mccarthy, G" uniqKey="Mccarthy G">G McCarthy</name>
</author>
</analytic>
</biblStruct>
</listBibl>
</div1>
</back>
</TEI>
<pmc article-type="research-article">
<pmc-dir>properties open_access</pmc-dir>
<front>
<journal-meta>
<journal-id journal-id-type="nlm-ta">Dev Sci</journal-id>
<journal-id journal-id-type="iso-abbrev">Dev Sci</journal-id>
<journal-id journal-id-type="publisher-id">desc</journal-id>
<journal-title-group>
<journal-title>Developmental Science</journal-title>
</journal-title-group>
<issn pub-type="ppub">1363-755X</issn>
<issn pub-type="epub">1467-7687</issn>
<publisher>
<publisher-name>John Wiley & Sons Ltd</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="pmid">24176002</article-id>
<article-id pub-id-type="pmc">3995015</article-id>
<article-id pub-id-type="doi">10.1111/desc.12098</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Paper</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>Audio-visual speech perception: a developmental ERP investigation</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name>
<surname>Knowland</surname>
<given-names>Victoria CP</given-names>
</name>
<xref ref-type="aff" rid="au1">1</xref>
<xref ref-type="aff" rid="au2">2</xref>
<xref ref-type="corresp" rid="cor1"></xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Mercure</surname>
<given-names>Evelyne</given-names>
</name>
<xref ref-type="aff" rid="au3">3</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Karmiloff-Smith</surname>
<given-names>Annette</given-names>
</name>
<xref ref-type="aff" rid="au4">4</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Dick</surname>
<given-names>Fred</given-names>
</name>
<xref ref-type="aff" rid="au4">4</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Thomas</surname>
<given-names>Michael SC</given-names>
</name>
<xref ref-type="aff" rid="au4">4</xref>
</contrib>
<aff id="au1">
<label>1</label>
<institution>School of Health Sciences, City University</institution>
<addr-line>London, UK</addr-line>
</aff>
<aff id="au2">
<label>2</label>
<institution>Department of Psychological Sciences, Birkbeck College</institution>
<addr-line>London, UK</addr-line>
</aff>
<aff id="au3">
<label>3</label>
<institution>Institute of Cognitive Neuroscience, UCL</institution>
<addr-line>UK</addr-line>
</aff>
<aff id="au4">
<label>4</label>
<institution>Department of Psychological Sciences, Birkbeck College</institution>
<addr-line>London, UK</addr-line>
</aff>
</contrib-group>
<author-notes>
<corresp id="cor1">Address for correspondence: Victoria C.P. Knowland, Department of Language and Communication Science, City University, Northampton Square, London EC1V 0HB, UK; e-mail:
<email>victoria.knowland.1@city.ac.uk</email>
</corresp>
</author-notes>
<pub-date pub-type="ppub">
<month>1</month>
<year>2014</year>
</pub-date>
<pub-date pub-type="epub">
<day>31</day>
<month>10</month>
<year>2013</year>
</pub-date>
<volume>17</volume>
<issue>1</issue>
<fpage>110</fpage>
<lpage>124</lpage>
<permissions>
<copyright-statement>© 2013 The Authors Developmental Science Published by John Wiley & Sons Ltd</copyright-statement>
<copyright-year>2013</copyright-year>
<license license-type="open-access" xlink:href="http://creativecommons.org/licenses/by-nc/3.0/">
<license-p>This is an open access article under the terms of the Creative Commons Attribution-NonCommercial License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited and is not used for commercial purposes.</license-p>
</license>
</permissions>
<abstract>
<p>Being able to see a talking face confers a considerable advantage for speech perception in adulthood. However, behavioural data currently suggest that children fail to make full use of these available
<italic>visual speech cues</italic>
until age 8 or 9. This is particularly surprising given the potential utility of multiple informational cues during language learning. We therefore explored this at the neural level. The event-related potential (ERP) technique has been used to assess the mechanisms of audio-visual speech perception in adults, with visual cues reliably modulating auditory ERP responses to speech. Previous work has shown congruence-dependent shortening of auditory N1/P2 latency and congruence-independent attenuation of amplitude in the presence of auditory and visual speech signals, compared to auditory alone. The aim of this study was to chart the development of these well-established modulatory effects over mid-to-late childhood. Experiment 1 employed an adult sample to validate a child-friendly stimulus set and paradigm by replicating previously observed effects of N1/P2 amplitude and latency modulation by visual speech cues; it also revealed greater attenuation of component amplitude given incongruent audio-visual stimuli, pointing to a new interpretation of the amplitude modulation effect. Experiment 2 used the same paradigm to map cross-sectional developmental change in these ERP responses between 6 and 11 years of age. The effect of amplitude modulation by visual cues emerged over development, while the effect of latency modulation was stable over the child sample. These data suggest that auditory ERP modulation by visual speech represents separable underlying cognitive processes, some of which show earlier maturation than others over the course of development.</p>
</abstract>
</article-meta>
</front>
<body>
<sec>
<title>Research highlights</title>
<list list-type="bullet">
<list-item>
<p>The electrophysiological correlates of audio-visual speech perception show a course of gradual maturation over mid-to-late childhood.</p>
</list-item>
<list-item>
<p>Electrophysiological data reveal that the speed of processing auditory speech is modulated by visual cues earlier in development than is suggested by behavioural data with children.</p>
</list-item>
<list-item>
<p>In adults, the attenuation of auditory ERP component amplitude by visual speech cues is interpreted as an effect of cross-modal competition.</p>
</list-item>
<list-item>
<p>It is suggested that the shortening of auditory ERP component latency by visual cues in adults may represent the prediction of both content and timing of the up-coming auditory speech signal.</p>
</list-item>
</list>
</sec>
<sec>
<title>Speech is multisensory</title>
<p>During face-to-face interaction the perception of speech is a multisensory process, with visual cues available from the talking face according a substantial benefit to adult listeners. Audio-visual speech perception has been fairly extensively studied in the adult population, yet little is understood about the extent to which, or how, children make use of these powerful cues when learning language. The aim of this study was to illuminate this matter through event-related potential (ERP) recordings with a developmental sample to establish how visual input modulates auditory processing over mid-to-late childhood.</p>
<p>Visual speech cues, that is movements of the lips, jaw, tongue and larynx, correlate closely with auditory output (Chandrasekaran, Trubanova, Stillittano, Caplier & Ghazanfar,
<xref rid="b17" ref-type="bibr">2009</xref>
). Such cues are of particular benefit to adult listeners under conditions of auditory noise, when their availability can result in improvements in response accuracy equivalent to as much as a 15 dB increase in the auditory signal-to-noise ratio (Grant & Greenberg,
<xref rid="b24" ref-type="bibr">2001</xref>
; Grant & Seitz,
<xref rid="b25" ref-type="bibr">2000</xref>
; Sumby & Pollack,
<xref rid="b64" ref-type="bibr">1954</xref>
). Visual cues can also create some powerful illusions, including the McGurk illusion, where incongruent auditory and visual inputs result in an overall percept derived from but different to the input from each sensory modality (McGurk & MacDonald,
<xref rid="b45" ref-type="bibr">1976</xref>
). For example a visual /ga/ dubbed over an auditory /ba/ often results in the percept /da/. Other illusions similarly involve visual cues altering the perceived content (Green, Kuhl, Meltzoff & Stevens,
<xref rid="b26" ref-type="bibr">1991</xref>
) or location (Alais & Burr,
<xref rid="b1" ref-type="bibr">2004</xref>
) of the auditory signal.</p>
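For reference, the 15 dB figure quoted above can be unpacked with the standard definition of signal-to-noise ratio in decibels; the arithmetic below is illustrative only and is not taken from the article:

\[
\mathrm{SNR}_{\mathrm{dB}} = 10\,\log_{10}\!\frac{P_{\mathrm{signal}}}{P_{\mathrm{noise}}},
\qquad
\Delta\,\mathrm{SNR} = 15\ \mathrm{dB}
\;\Longrightarrow\;
\frac{P_{\mathrm{signal}}}{P_{\mathrm{noise}}}\ \text{changes by a factor of}\ 10^{15/10} \approx 31.6 .
\]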
</sec>
<sec>
<title>The development of audio-visual speech perception</title>
<p>Work with infants indicates a very early sensitivity to multisensory speech cues. By two months of age infants can match auditory and visual vowels behaviourally (Kuhl & Meltzoff,
<xref rid="b36" ref-type="bibr">1982</xref>
; Patterson & Werker,
<xref rid="b53" ref-type="bibr">1999</xref>
). Bristow and colleagues (Bristow, Dehaene-Lambertz, Mattout, Soares, Gilga, Baillet & Mangin,
<xref rid="b7" ref-type="bibr">2008</xref>
) used an electrophysiological mismatch negativity paradigm to show that visual speech cues habituated 10-week-old infants to auditory tokens of the same phoneme, but not auditory tokens of a different phoneme. Such evidence suggests that infants have a multisensory representation of the phonemes tested, or at least are able to match across senses in the speech domain. By 5 months of age, infants are sensitive to the McGurk illusion, as shown both behaviourally (Burnham & Dodd,
<xref rid="b8" ref-type="bibr">2004</xref>
; Rosenblum, Schmuckler & Johnson
<xref rid="b59" ref-type="bibr">1997</xref>
; Patterson & Werker,
<xref rid="b53" ref-type="bibr">1999</xref>
), and electrophysiologically (Kushnerenko, Teinonen, Volein & Csibra,
<xref rid="b38" ref-type="bibr">2008</xref>
). Notably though, audio-visual speech perception may not be robust or consistent at this age due to a relative lack of experience (Desjardins & Werker,
<xref rid="b18" ref-type="bibr">2004</xref>
). Nevertheless, infants pay attention to the mouths of speakers at critical times for language development over the first year (Lewkowicz & Hansen-Tift,
<xref rid="b42" ref-type="bibr">2012</xref>
), during which time they may even use visual cues to help develop phonemic categories (Teinonen, Aslin, Alku & Csibra,
<xref rid="b67" ref-type="bibr">2008</xref>
).</p>
<p>By contrast, children do not seem to show sensitivity to, or benefit from, visual cues to the extent that the infant data might predict (e.g. Massaro, Thompson, Barron & Laren,
<xref rid="b47" ref-type="bibr">1986</xref>
). Typically, children have been shown to be insensitive to the McGurk illusion at age 5, then to show a gradual or stepped developmental progression to the end of primary school or into the teenage years (Hockley & Polka,
<xref rid="b30" ref-type="bibr">1994</xref>
; McGurk & MacDonald,
<xref rid="b45" ref-type="bibr">1976</xref>
). Reliable responses to this illusion emerge at around 8 or 9 years (Tremblay, Champoux, Voss, Bacon, Lapore & Theoret,
<xref rid="b70" ref-type="bibr">2007</xref>
), the same age at which children robustly use visual cues to help overcome noise in the auditory signal (Wightman, Kistler & Brungart,
<xref rid="b74" ref-type="bibr">2006</xref>
). Ross and colleagues (Ross, Molholm, Blanco, Gomez-Ramirez, Saint-Amour & Foxe,
<xref rid="b60" ref-type="bibr">2011</xref>
) demonstrated not only the increasing benefit of visual cues over the ages of 5 to 14, but also a change in the profile of how useful visual speech cues were under conditions of different auditory signal-to-noise ratios. Of particular interest in a discussion of developmental trajectories is the finding from an indirect measure of audio-visual speech perception that, while 5-year-olds do not show sensitivity to visual cues, 4-year-olds do (Jerger, Damian, Spence, Tye-Murray & Abdi,
<xref rid="b32" ref-type="bibr">2009</xref>
); hinting at a U-shaped developmental trajectory in audio-visual speech development.</p>
<p>This developmental pattern of very early sensitivity but late mastery is mirrored in other domains of multisensory development. For example, at 4 months old infants are subject to low-level audio-visual illusions (Kawabe, Shirai, Wada, Miura, Kanazawa & Yamaguci,
<xref rid="b33" ref-type="bibr">2010</xref>
; Wada, Shirai, Midorikawa, Kanazawa, Dan & Yamaguchi,
<xref rid="b73" ref-type="bibr">2009</xref>
). However, accuracy in the use of information from multiple senses continues to improve through childhood, and mastering the ability to appropriately weight information from different senses according to their reliability only emerges from around age 8 (Gori, Del Viva, Sandini & Burr,
<xref rid="b22" ref-type="bibr">2008</xref>
).</p>
</sec>
<sec>
<title>Electrophysiological recordings of multisensory speech</title>
<p>The aim of the current work was to understand the development of audio-visual speech perception at the neurophysiological level. Event-related potential (ERP) recordings have repeatedly been used to explore the mechanisms of multisensory processing with adult samples, largely due to the excellent temporal resolution of this technique (Besle, Bertrand & Giard,
<xref rid="b3" ref-type="bibr">2009</xref>
; Teder-Salejarvi, McDonald, DiRusso & Hillyard,
<xref rid="b66" ref-type="bibr">2002</xref>
). In this case, we were interested in how visual cues influence, or modulate, auditory processing of speech stimuli. The auditory N1 and P2 ERP components, often referred to together as the
<italic>vertex potential</italic>
, are highly responsive to auditory speech (e.g. Hoonhorst, Serniclaes, Collet, Colin, Markessis, Radeau & Deltenrea,
<xref rid="b28" ref-type="bibr">2009</xref>
; Pang & Taylor,
<xref rid="b52" ref-type="bibr">2000</xref>
). The characteristics of these early-to-mid latency auditory components, when evoked in response to speech stimuli, are modulated by the presence of visual speech cues in adults (Bernstein, Auer, Wagner & Ponton,
<xref rid="b2" ref-type="bibr">2007</xref>
; Besle, Fischer, Bidet-Caulet, Lecaignard, Bertrand & Giard,
<xref rid="b4" ref-type="bibr">2008</xref>
; Besle, Fort, Delpuech & Giard,
<xref rid="b5" ref-type="bibr">2004</xref>
; Klucharev, Mottonen & Sams,
<xref rid="b35" ref-type="bibr">2003</xref>
; Pilling,
<xref rid="b55" ref-type="bibr">2009</xref>
; Stekelenburg & Vroomen,
<xref rid="b63" ref-type="bibr">2007</xref>
; van Wassenhove, Grant & Poeppel,
<xref rid="b71" ref-type="bibr">2005</xref>
). Visual cues are shown to both attenuate the amplitude of N1 and P2 as well as, given congruence between auditory and visual inputs, shorten their latency (Pilling,
<xref rid="b55" ref-type="bibr">2009</xref>
; van Wassenhove
<italic>et al</italic>
.,
<xref rid="b71" ref-type="bibr">2005</xref>
). While auditory N1 and P2 are most robustly modulated by visual speech, even earlier electrophysiological activity is affected. The auditory P50 is attenuated during intracranial (Besle
<italic>et al</italic>
.,
<xref rid="b4" ref-type="bibr">2008</xref>
) and sub-dural (Reale, Calvert, Thesen, Jenison, Kawasaki, Oys, Howard & Brugg,
<xref rid="b57" ref-type="bibr">2007</xref>
) recordings over the lateral superior temporal gyrus; and even auditory brainstem responses and middle latency auditory evoked potentials attenuate in amplitude and reduce in latency when participants are able to see a talking face (Musacchia, Sams, Nicol & Kraus,
<xref rid="b49" ref-type="bibr">2006</xref>
).</p>
<p>Given multiple replications of the modulation of auditory N1 and P2 by visual speech cues in adults (Bernstein
<italic>et al</italic>
.,
<xref rid="b2" ref-type="bibr">2007</xref>
; Besle
<italic>et al</italic>
.,
<xref rid="b5" ref-type="bibr">2004</xref>
,
<xref rid="b4" ref-type="bibr">2008</xref>
; Klucharev
<italic>et al</italic>
.,
<xref rid="b35" ref-type="bibr">2003</xref>
; Pilling,
<xref rid="b55" ref-type="bibr">2009</xref>
; Stekelenburg & Vroomen,
<xref rid="b63" ref-type="bibr">2007</xref>
; Van Wassenhove et al.,
<xref rid="b71" ref-type="bibr">2005</xref>
), and the correlation of these effects with the perception of multisensory illusions (Van Wassenhove et al.,
<xref rid="b71" ref-type="bibr">2005</xref>
), this can reasonably be taken to represent at least the influence of visual cues on auditory processing, even if not necessarily the integration of information at the single-neuron level. Here we traced these markers of audio-visual speech perception through development. Finding either the modulation of amplitude or latency of the N1/P2 complex over development could help establish the limitations on children's use of multisensory speech cues. Experiment 1 therefore used an adult sample to validate a novel child-friendly paradigm and stimulus set by replicating previous findings of congruence-dependent latency modulation and congruence-independent amplitude modulation of auditory N1 and P2 by visual cues (Van Wassenhove et al.,
<xref rid="b71" ref-type="bibr">2005</xref>
). Four experimental conditions allowed the assessment of the impact of visual speech cues on auditory processing: Auditory-only, Visual-only, congruent Audio-Visual and incongruent audio-visual, referred to as Mismatch. The Mismatch condition was included to assess the effect of audio-visual congruency and to control for a more general effect of attention to the talking face. Experiment 2 used the same paradigm to trace the development of these modulatory effects over mid-to-late childhood, with a sample of children ranging from 6 to 11 years.</p>
</sec>
<sec>
<title>Experiment 1</title>
<sec sec-type="methods">
<title>Method</title>
<sec>
<title>Participants</title>
<p>Participants were 12 native English-speaking adults, who were naive to the experimental hypotheses (mean age = 28.10 years, age range = 20.0–34.0 years). Participants were recruited through the Birkbeck College participant pool and were paid in exchange for taking part. Participants gave their written, informed consent. The experiment was approved by the Birkbeck College Ethics Committee.</p>
</sec>
<sec>
<title>Stimuli</title>
<p>When studying auditory ERP components in response to speech stimuli, previous studies have used repetitive consonant-vowel (CV) syllables such as [pa] (e.g. Besle
<italic>et al</italic>
.,
<xref rid="b5" ref-type="bibr">2004</xref>
) or single vowels (Klucharev
<italic>et al</italic>
.,
<xref rid="b35" ref-type="bibr">2003</xref>
). Here, the stimulus set was chosen to be as consistent with previous studies as possible while maximizing the likelihood that young children would remain attentive and motivated. The stimuli therefore consisted of a set of monosyllabic, concrete, highly imageable nouns such as ‘bell’ and ‘pen’. The stimuli were recorded by a phonetically trained, female, native English speaker. In total 62 nouns were used, 19 of which were animal names such as ‘cat’ and ‘pig’. The animal names acted as targets during the paradigm and were therefore not included in the ERP analysis. Of the 43 non-target nouns, 31 began with fricatives and three with affricates (of these 18 were bilabial, nine were alveolar and seven were velar), seven stimuli began with liquids and two with a vowel; in total, 29 stimuli began with a voiced phoneme. Sharp acoustic onsets were maintained across the stimulus set as the auditory N1 is sensitive to changes such as rise time (Spreng,
<xref rid="b62" ref-type="bibr">1980</xref>
). Average age of acquisition of the non-target stimuli was 4.2 years (
<italic>SD</italic>
= 0.9 years) according to American norms (Kuperman, Stadthagen-Gonzalez & Brysbaert,
<xref rid="b37" ref-type="bibr">2012</xref>
), and only two of the stimuli (‘rose’ and ‘jam’) had an age of acquisition marginally above the age of the youngest participant.</p>
<p>Stimuli were recorded with a digital camera, at 25 frames per second, and a separate audio recording was made simultaneously. Each token was recorded twice and the clearest exemplar was used to create the stimulus set. Auditory tokens were lined up with their corresponding visual tokens by matching the points of auditory onset in the tokens recorded by the external microphone and the video-camera's built-in microphone; auditory recordings were made at a sampling rate of 44.1 kHz. Each token was edited to be 2000 ms long, including an 800 ms period at the start of each clip before auditory onset. There were therefore 800 ms during which visual articulatory cues were available before the onset of auditory information. This allowed for the natural temporal dynamics of audio-visual speech to remain intact while ensuring that each clip began with a neutral face. The length of this period was determined by the clip with the latest auditory onset relative to the onset of natural mouth movements, thus ensuring that no clips were manipulated in order to include this 800 ms visual-only period. The audible portion of each clip lasted on average 437 ms (
<italic>SD</italic>
= 51 ms).</p>
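<p>As a minimal Python sketch of the alignment arithmetic just described (an illustrative helper, not the authors' editing pipeline), the excerpt boundaries that place the auditory onset 800 ms after clip start can be computed from the onset time measured in the raw recording:</p>
<preformat>
def clip_bounds(onset_s, fps=25, lead_s=0.8, clip_s=2.0):
    """Return (start_s, end_s) of a clip_s-long excerpt, plus the matching
    video frame indices, such that the auditory onset (onset_s, in seconds
    within the raw recording) falls lead_s after clip start."""
    start_s = onset_s - lead_s
    end_s = start_s + clip_s
    return (start_s, end_s), (round(start_s * fps), round(end_s * fps))

# e.g. an onset measured at 3.24 s gives a clip from 2.44 s to 4.44 s,
# i.e. frames 61 to 111 at 25 frames per second.
</preformat>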
<p>These tokens were used as the stimulus set for the congruent
<italic>Audio-visual</italic>
(AV) condition. The stimuli for the three other conditions were then derived from them. A set of
<italic>Auditory-only</italic>
(AO) and a set of
<italic>Visual-only</italic>
(VO) stimuli were created by splitting the original tokens into their auditory and visual components. A final set of incongruent audio-visual,
<italic>Mismatch</italic>
(MM), stimuli were created by mismatching auditory and visual tokens but maintaining the relative timing. For example the auditory token [lake] was dubbed on top of the visual token |rose| 800 ms after its onset. Tokens were paired according to onset phoneme, but such that none resulted in an illusory percept. Animal tokens were kept separate from non-animal tokens when Mismatch stimuli were made, as they were task-relevant.</p>
</sec>
<sec>
<title>Procedure</title>
<p>Testing was conducted in an electrically shielded room with dimmed lights. Participants were told that they would either see, hear, or both see and hear a woman saying words and that whenever she said an animal word they should press the mouse button. The button press task was included to help maintain the attention and motivation of the child participants. The role of attention is particularly important here, as the auditory N1 is both amplified and shows more temporal precision with increased selective attention (Martin, Barajas, Fernandez & Torres,
<xref rid="b46" ref-type="bibr">1988</xref>
; Ritter, Simson & Vaughn,
<xref rid="b58" ref-type="bibr">1988</xref>
; Thornton,
<xref rid="b69" ref-type="bibr">2008</xref>
). Stimuli were presented via headphones at approximately 65 dB (SPL), as measured by a sound level meter 2 inches from the centre of the ear pad. Participants were seated in a chair 60 cm from the stimulus presentation screen, and used a chin rest to help keep their heads still and ensure that distance from the screen was kept constant.</p>
<p>Participants completed five blocks of 60 trials. Over the course of five blocks, 75 stimuli of each condition were played, including five animal stimuli per block, resulting in a total of 300 trials per participant. In total, 25 trials were target (animal) trials and were therefore not included in the analysis. The 43 non-target nouns were each repeated either once or twice in each of the four conditions over the course of the experiment. Conditions were randomly presented during each block, although the stimuli presented in each block were the same for each participant. During an audio-visual (AV or MM) or Visual-only (VO) trial a fixation screen appeared for a random period of time between 100 and 400 ms, followed immediately by the video clip, as shown in Figure 
<xref ref-type="fig" rid="fig01">1</xref>
. The fixation variation was intended to minimize expectancy, which has been shown to both attenuate N1 amplitude (Lange,
<xref rid="b39" ref-type="bibr">2009</xref>
; Viswanathan & Jansen,
<xref rid="b72" ref-type="bibr">2010</xref>
) and result in slow wave motor anticipatory activity (Teder-Salejarvi
<italic>et al</italic>
.,
<xref rid="b66" ref-type="bibr">2002</xref>
). During Auditory-only (AO) trials, the fixation screen remained during the stimulus presentation, after the same jittered period before auditory stimulus onset as for the other conditions. Participants were instructed to remain looking at the centre of the screen at all times, and deviations of gaze were monitored during each session using a video camera. Cartoon eyes on a white background were used as fixation and were located where the bottom of the speaker's nose appeared during video clips. The testing procedure lasted around 45 minutes.</p>
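<p>As a rough Python sketch of this trial structure (the condition labels and timing constants come from the text; everything else, including the function name, is an assumption), the random ordering and the 100–400 ms fixation jitter could be implemented as follows:</p>
<preformat>
import random

def schedule_block(trials):
    """Shuffle the (condition, stimulus) pairs of one block and attach a
    fixation duration drawn uniformly between 100 and 400 ms, the jitter
    used to minimise expectancy effects."""
    order = list(trials)
    random.shuffle(order)
    return [(cond, stim, random.uniform(100, 400)) for cond, stim in order]
</preformat>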
<fig id="fig01" position="float">
<label>Figure 1</label>
<caption>
<p>Example audio-visual trial timeline.</p>
</caption>
<graphic xlink:href="desc0017-0110-f1"></graphic>
</fig>
</sec>
<sec>
<title>Recording</title>
<p>High density Electrical Geodesics, Inc. (EGI) caps with 128 electrodes joined and aligned according to the international 10–20 system (Jasper,
<xref rid="b31" ref-type="bibr">1958</xref>
1958) were used. All bio-electrical signals were recorded using EGI NetAmps (Eugene, OR), with gain set to 10,000 times. The signals were recorded referenced to the vertex (Cz), and were re-referenced to the average during analysis. Data were recorded at 500 Hz and band-pass filtered online between 0.1 and 200 Hz. An oscilloscope and audio monitor were used to measure the accuracy of the relationship between stimulus presentation and electrophysiological recording, and to check the preservation of the relationship between auditory and visual stimuli. No more than 1 ms of disparity between audio and visual timing was recorded for any condition.</p>
</sec>
</sec>
<sec>
<title>Analysis and results</title>
<sec>
<title>Analysis</title>
<p>The region of interest was defined as that which has previously been reported as most appropriate for recording mid-to-late latency auditory ERP components (see e.g. Giard, Perrin, Echallier, Thevenet, Fromenet & Pernier,
<xref rid="b21" ref-type="bibr">1994</xref>
; Picton, Hillyard, Krausz & Galambos,
<xref rid="b54" ref-type="bibr">1974</xref>
). The region comprised five channels around, and including, the vertex, Cz, which showed the clearest auditory components for these data. The two components analysed at this region of interest were the auditory N1 and auditory P2, with an average of activity taken over the five electrodes. The two ERP measures taken were peak-to-peak amplitude and peak latency for the N1 and P2 components. Windows of analysis were defined as follows: for the P1 (the amplitude of which was used to analyse the N1 component, as the N1 and P2 were measured as peak-to-peak values), a window from 40 to 90 ms post stimulus onset was used; for the N1, 80–140 ms; and for the P2, 160–230 ms. The analysis windows were based on a visual inspection of the grand average waveform and checked against data for each individual participant.</p>
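<p>A minimal Python sketch of these peak measures follows, assuming <italic>erp</italic> is the average waveform over the five-channel region of interest with time zero at auditory onset, sampled at 500 Hz; the helper names are illustrative rather than the authors' code:</p>
<preformat>
import numpy as np

FS = 500  # sampling rate in Hz, as reported above

def peak_in_window(erp, lo_ms, hi_ms, polarity, fs=FS):
    """Latency (ms) and amplitude (microvolts) of the most positive
    (polarity=+1) or most negative (polarity=-1) sample in the window."""
    lo = int(round(lo_ms * fs / 1000.0))
    hi = int(round(hi_ms * fs / 1000.0)) + 1
    idx = lo + int(np.argmax(erp[lo:hi] * polarity))
    return idx * 1000.0 / fs, float(erp[idx])

def n1_p2_measures(erp):
    """Peak latencies plus P1-to-N1 and N1-to-P2 peak-to-peak amplitudes,
    using the 40-90, 80-140 and 160-230 ms windows given in the text."""
    _, p1 = peak_in_window(erp, 40, 90, +1)
    n1_lat, n1 = peak_in_window(erp, 80, 140, -1)
    p2_lat, p2 = peak_in_window(erp, 160, 230, +1)
    return {"N1": (n1_lat, p1 - n1), "P2": (p2_lat, p2 - n1)}
</preformat>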
<p>Artefact detection was conducted using an automatic algorithm to mark channels as bad if activity exceeded 100 μV at any point; these data were then checked by hand. Trials were rejected if 15 or more channels (12%) were marked as bad. Of those trials included in the analysis, an average of 1.1 channels (0.9%) were marked bad and the data for those channels were interpolated from the remaining channels. Participants were included in the analysis if they contributed at least 30 non-target trials per condition. All adult participants met this condition. The average percentage of trials included per condition was as follows: AO – 79% (
<italic>SD</italic>
= 16.8), VO – 90% (
<italic>SD</italic>
= 9.8), AV – 85% (
<italic>SD</italic>
= 10.4), MM – 83% (
<italic>SD</italic>
= 13.4).</p>
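<p>As a Python sketch of the rejection rule just described (the array layout and function names are assumptions, and the actual pipeline also involved checking marked channels by hand):</p>
<preformat>
import numpy as np

def bad_channels(trial, threshold_uv=100.0):
    """Mark a channel as bad if its absolute voltage exceeds 100 microvolts
    at any sample; `trial` is an (n_channels, n_samples) array."""
    return np.abs(trial).max(axis=1) > threshold_uv

def reject_trial(trial, max_bad=15):
    """Reject a trial when 15 or more of the 128 channels are bad; otherwise
    the trial is kept and its bad channels would be interpolated."""
    return int(bad_channels(trial).sum()) >= max_bad

def usable_trials(trials):
    return [t for t in trials if not reject_trial(t)]
</preformat>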
<p>We directly compared activity in response to the audio-visual conditions with that in response to the AO condition, as only the modulation of auditory responses was of interest for the current purposes. Directly comparing unisensory and multisensory conditions avoids the issue of subtracting activity common to both auditory and visual unimodal responses, which can occur when using the more traditional model of comparing multisensory activity to the sum of the unisensory responses (Stekelenburg & Vroomen,
<xref rid="b63" ref-type="bibr">2007</xref>
; Teder-Salejarvi
<italic>et al</italic>
.,
<xref rid="b66" ref-type="bibr">2002</xref>
).</p>
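<p>The two analysis strategies can be written compactly; the Python sketch below assumes <italic>ao</italic>, <italic>vo</italic> and <italic>av</italic> are grand-average waveforms of equal length and is purely illustrative:</p>
<preformat>
import numpy as np

def direct_contrast(av, ao):
    """Approach used here: compare the audio-visual response directly with
    the auditory-only response (AV - AO)."""
    return np.asarray(av) - np.asarray(ao)

def additive_contrast(av, ao, vo):
    """More traditional approach: compare the audio-visual response with the
    sum of the unisensory responses (AV - (A + V))."""
    return np.asarray(av) - (np.asarray(ao) + np.asarray(vo))
</preformat>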
</sec>
</sec>
<sec>
<title>Results</title>
<sec>
<title>Behavioural results</title>
<p>Accuracy of behavioural responses was converted to d′, with a button press in response to an animal trial counting as a hit and any other button press as a false alarm. Only responses to AO and AV trials are reported here as the main aim of the behavioural task was to maintain attention. VO trials are not reported as the task was not designed to assess lip-reading ability, nor MM trials, due to difficulty in interpretation. The average d′ for AO trials was 3.7 (
<italic>SD</italic>
= 1.4) and for AV trials was significantly greater (
<italic>t</italic>
(11) = 3.22,
<italic>p</italic> =
.008) at 5.7 (
<italic>SD</italic>
= 2.3). Correlations were run between these behavioural measures and each electrophysiological measure taken, but none reached significance after Bonferroni correction for multiple comparisons.</p>
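<p>For reference, a standard d′ computation under this task definition might look like the Python sketch below; the smoothing of extreme rates and the example counts are illustrative assumptions, not values from the study:</p>
<preformat>
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """d' = z(hit rate) - z(false-alarm rate), with a simple correction so
    that rates of exactly 0 or 1 do not produce infinite z-scores."""
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# hypothetical counts only: d_prime(20, 5, 4, 246)
</preformat>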
</sec>
<sec>
<title>Electrophysiological results</title>
<p>The adult electrophysiological data followed the same pattern as that seen in previous studies, but with an additional effect of amplitude modulation for the P2 component. A 3 × 2 repeated measures ANOVA was run with three levels of Condition (AO, AV, MM) and two levels of Component (N1 and P2), for amplitude and latency separately. For amplitude, a main effect of Condition was found, <italic>F</italic>(2, 22) = 28.43, <italic>p</italic> < .001, ηp² = 0.72, with Bonferroni corrected pairwise comparisons revealing differences (<italic>p</italic> < .05) between each condition, AO > AV > MM. An interaction between Condition and Component also emerged, <italic>F</italic>(2, 22) = 9.90, <italic>p</italic> = .001, ηp² = 0.47, with P2, <italic>F</italic>(2, 22) = 26.47, <italic>p</italic> < .001, ηp² = 0.71, being more strongly modulated than N1, <italic>F</italic>(2, 22) = 16.33, <italic>p</italic> < .001, ηp² = 0.60. Notably, after Bonferroni correction P2 showed significant (<italic>p</italic> < .01) modulation between all levels of Condition, whereas N1 only showed a difference between AO and each audio-visual condition, at <italic>p</italic> < .01 (see Figure 
<xref ref-type="fig" rid="fig02">2</xref>
). For latency, there was a main effect of Condition, <italic>F</italic>(2, 22) = 4.89, <italic>p</italic> = .017, ηp² = 0.31, driven by the difference (<italic>p</italic> < .05) between the AV condition and the other two conditions, such that AV < AO = MM, given Bonferroni correction for multiple comparisons. Latency modulation was therefore congruence-dependent. No interaction between Condition and Component emerged.</p>
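<p>The 3 × 2 repeated-measures tests reported above can be reproduced in standard statistical software; the Python sketch below (using statsmodels with an assumed long-format table) is one equivalent route, not the authors' analysis script:</p>
<preformat>
import pandas as pd
from statsmodels.stats.anova import AnovaRM

def rm_anova(long_table: pd.DataFrame):
    """Repeated-measures ANOVA with Condition (AO, AV, MM) and Component
    (N1, P2) as within-subject factors; `long_table` is assumed to hold one
    row per participant x condition x component with an 'amplitude' column."""
    model = AnovaRM(long_table, depvar="amplitude", subject="subject",
                    within=["condition", "component"])
    return model.fit().anova_table  # F, degrees of freedom and p per effect
</preformat>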
<fig id="fig02" position="float">
<label>Figure 2</label>
<caption>
<p>Peak amplitude and latency of the auditory N1 and P2 components under the Auditory-only (AO), congruent Audio-visual (AV) and Mismatch (MM) conditions, for the adult participants. *p = 0.05; **p = 0.01.</p>
</caption>
<graphic xlink:href="desc0017-0110-f2"></graphic>
</fig>
</sec>
</sec>
<sec>
<title>Discussion</title>
<p>The aim of experiment 1 was to replicate in adults previous findings of the modulation of auditory ERP components by visual speech cues using a child-friendly paradigm and stimulus set.</p>
<sec>
<title>Adult use of visual cues</title>
<p>Compared to auditory-only speech stimuli, audio-visual stimuli resulted in congruence-independent attenuation of N1 and P2 component amplitude and congruence-dependent shortening of component latency. The modulation of auditory ERP components therefore replicated previous findings (Pilling,
<xref rid="b55" ref-type="bibr">2009</xref>
; Van Wassenhove et al.,
<xref rid="b71" ref-type="bibr">2005</xref>
). This data set validated the use of the child-friendly paradigm on adults for subsequent use with a developmental sample.</p>
<p>van Wassenhove and colleagues (2005) proposed that the shortening of component latency in the presence of visual speech cues represents the use of visual cues to predict the content of the upcoming auditory signal; a proposal known as the ‘predictive coding hypothesis’. This is possible in natural speech as the onset of visual cues occurs between 100 and 300 ms before their auditory counterparts (Chandrasekaran
<italic>et al</italic>
.,
<xref rid="b17" ref-type="bibr">2009</xref>
). van Wassenhove and colleagues found particularly strong support for this notion as latency shortening was not only sensitive to congruency but also to the degree of ambiguity of the onset phoneme. Greater latency modulation was recorded given the syllable [pa] over [ta] and given [ta] over [ka]. In that study, |pa| was the least ambiguous viseme (the visual correlate of an auditory phoneme), and as such was suggested to make a stronger prediction and result in faster processing of the more expected auditory signal. Ease of processing has previously been associated with the shortening of auditory N1 latency (Callaway & Halliday,
<xref rid="b11" ref-type="bibr">1982</xref>
). Although stimuli in the current study could not be analysed by onset phoneme, the congruence-dependent shortening of latency further supports the predictive coding hypothesis.</p>
<p>We additionally replicated findings of amplitude modulation regardless of congruency between the auditory and visual inputs, driven predominantly by the P2 component (Pilling,
<xref rid="b55" ref-type="bibr">2009</xref>
; Van Wassenhove et al.,
<xref rid="b71" ref-type="bibr">2005</xref>
). Two hypotheses have been put forward in the literature to explain congruence-independent effects of one sensory modality on another. van Wassenhove and colleagues suggested that a reduction of amplitude results from visual speech cues driving more efficient auditory processing. The authors proposed that redundant information, carried in both senses, need not be fully processed by the auditory system, resulting in more efficient processing of information available through the auditory channel. In the case of visual speech cues, this may entail a reduction in processing of information from the second and third formants, which carry information about place of articulation.</p>
<p>An alternative explanation, known as the ‘deactivation hypothesis’ (Bushara, Hanawaka, Immisch, Toma, Kansaku & Hallett,
<xref rid="b9" ref-type="bibr">2003</xref>
; Wright, Pelphrey, Allison, McKeown & McCarthy,
<xref rid="b75" ref-type="bibr">2003</xref>
), asserts that different parts of the multisensory processing stream are in competition, such that stimuli from different senses showing temporal and spatial synchrony produce super-additive activity in some areas, but suppression of activity in others. Under this view, when multisensory stimuli are available, regions that process more than one sense dominate over unisensory areas. So, for example, responses in auditory cortex are reduced in the presence of visual information about the same object or event, as multisensory processing regions compete and dominate. Experimental evidence from fMRI studies supports the theoretical notion of competition between unisensory and multisensory areas (Bushara
<italic>et al</italic>
.,
<xref rid="b9" ref-type="bibr">2003</xref>
).</p>
<p>However, in the current data set the attenuation of P2 amplitude was greater for the audio-visual Mismatch condition than for the congruent Audio-visual condition. Given that an incongruent visual cue does not provide more reliable information regarding place of articulation, nor does it result in the perception of a multisensory event, these data are difficult to reconcile with either of the above hypotheses. A possible explanation lies in the nature of the stimuli used here. In the current study, the Mismatch stimuli consisted of entirely unrelated words presented in each sensory modality, for example, auditory [lake] paired with visual |rose|. This is in contrast to previous studies which have used McGurk stimuli (Pilling,
<xref rid="b55" ref-type="bibr">2009</xref>
; Van Wassenhove et al.,
<xref rid="b71" ref-type="bibr">2005</xref>
), that is, incongruent CV syllables which can form coherent percepts despite their physical mismatch.</p>
<p>The current data therefore support an alternative hypothesis that amplitude attenuation reflects competition between sensory inputs, with competition being greater when auditory and visual systems are processing incompatible, and irreconcilable, stimuli. That this effect is restricted to the P2 component is compatible with evidence that it originates in posterior superior temporal cortex (Liebenthal, Desai, Ellinson, Ramachandran, Desai & Binder,
<xref rid="b43" ref-type="bibr">2010</xref>
). The posterior superior temporal cortex is composed of the posterior superior temporal gyrus (pSTG) and sulcus (pSTS) and forms part of a network of regions implicated in audio-visual speech processing. This network also includes primary sensory cortices, frontal and pre-motor regions and the supramarginal gyrus (see Campbell,
<xref rid="b15" ref-type="bibr">2008</xref>
, for a review). The pSTS is the most reliably activated region in fMRI studies in response to audio-visual over auditory speech, and lip-reading (Calvert, Bullmore, Brammer, Campbell, Woodruff, McGuire, Williams, Iversen & David,
<xref rid="b13" ref-type="bibr">1997</xref>
; Calvert, Campbell & Brammer,
<xref rid="b14" ref-type="bibr">2000</xref>
; Callan, Jones, Munhall, Kroos, Callan & Vatikiotis-Bateson,
<xref rid="b10" ref-type="bibr">2004</xref>
; Capek, Bavelier, Corina, Newman, Jezzard & Neville,
<xref rid="b16" ref-type="bibr">2004</xref>
; Hall, Fussell & Summerfield,
<xref rid="b27" ref-type="bibr">2005</xref>
; Skipper, Nusbaum & Small,
<xref rid="b61" ref-type="bibr">2005</xref>
). Furthermore, pSTS is associated with learning inter-sensory pairings (Tanabe, Honda & Sadato,
<xref rid="b65" ref-type="bibr">2005</xref>
), with auditory expertise (Leech, Holt, Devlin & Dick,
<xref rid="b40" ref-type="bibr">2009</xref>
) and shows sensitivity to congruency in ongoing audio-visual speech (Calvert
<italic>et al</italic>
.,
<xref rid="b14" ref-type="bibr">2000</xref>
). In a systematic analysis of the role of pSTS in audio-visual processing, Hocking and Price (
<xref rid="b29" ref-type="bibr">2008</xref>
) suggest that this region is involved in conceptual matching regardless of input modality.</p>
<p>Given that cortical regions involved in the generation of the auditory P2 component are sensitive to matching auditory and visual stimuli, the attenuation of P2 may reflect competition between neurons in a multisensory population responsive to different modalities, with competition increasing given irreconcilable incongruence. A possible next step in the examination of this hypothesis is to compare reconcilable (i.e. McGurk) and irreconcilable incongruent audio-visual speech stimuli within the same paradigm.</p>
</sec>
</sec>
</sec>
<sec>
<title>Experiment 2</title>
<p>Experiment 2 traced the developmental trajectory of auditory ERP modulation by visual speech cues from age 6 to 12, over which period children establish a reliable use of visual cues to aid speech perception as shown using behavioural measures (e.g. Wightman
<italic>et al</italic>
.,
<xref rid="b74" ref-type="bibr">2006</xref>
). We sought to determine whether modulation of ERPs due to multisensory processing could be observed at an earlier age than has been measured behaviourally.</p>
<sec>
<title>Method</title>
<sec>
<title>Participants</title>
<p>Thirty-eight typically developing children participated (mean age = 8.9 years,
<italic>SD</italic>
= 21 months, age range = 6.0–11.10 years, with between five and seven children in each year group). Children were recruited by placing advertisements in the local press, and were rewarded for their participation with small toys. Parents gave written, informed consent for their children. The experiment was approved by the Birkbeck College Ethics Committee. One child was excluded from the analysis as a result of excessive noise in the data.</p>
</sec>
<sec>
<title>Recording and procedure</title>
<p>The experimental procedure for children was almost identical to that used in Experiment 1 for adult participants. The procedure lasted slightly longer for children, around 60 minutes, as more time was spent practising sitting still. Blinking was not mentioned as it was judged that this would be hard for young children to control and would only serve to draw attention to the act. Paediatric EGI electroencephalographic nets with 128 electrodes were used for all child participants.</p>
</sec>
<sec>
<title>Analysis</title>
<p>The same region of interest and the same epoch windows were used for the child sample based on grand average data for each age group and checked against data for each individual participant. After artefact rejection, slightly more data were discarded as noisy than for the adult sample. For child participants, an average of 3.6 channels (2.8%) were marked bad on accepted trials. As per the adults, participants were included in the analysis if they contributed at least 30 non-target trials per condition; one child was excluded from analysis on these grounds. The average percentage of trials included for the child sample was: AO – 57% (
<italic>SD</italic>
= 14.4), VO – 73% (
<italic>SD</italic>
= 12.6), AV – 68% (
<italic>SD</italic>
= 14.7), MM – 67% (
<italic>SD</italic>
= 13.9).</p>
</sec>
</sec>
<sec>
<title>Results</title>
<sec>
<title>Behavioural results</title>
<p>The average d′ for the child sample was 2.5 (<italic>SD</italic> = 1.9) for AO and 2.7 (<italic>SD</italic> = 1.9) for AV trials. d′ was consistently good, with each age group scoring significantly above zero on each measure at <italic>p</italic> < .05, indicating satisfactory attention across all ages. Behavioural performance improved over developmental time, with Age predicting performance on both AO (<italic>R</italic><sup>2</sup> = 0.19, <italic>F</italic>(1, 35) = 8.26, <italic>p</italic> = .007) and AV trials (<italic>R</italic><sup>2</sup> = 0.15, <italic>F</italic>(1, 35) = 6.11, <italic>p</italic> = .018). Unlike the adult sample in Experiment 1, on this simple detection task the child sample showed no behavioural benefit of AV trials over AO trials. Correlations between behavioural d′ and brain responses were calculated for the child sample, but again no correlations survived Bonferroni correction for multiple comparisons.</p>
</sec>
<sec>
<title>Electrophysiological results</title>
<p>Figure 
<xref ref-type="fig" rid="fig03">3</xref>
shows the grand average waveforms for the 6- and 7-year-olds, the 8- and 9-year-olds, the 10- and 11-year-olds, as well as the adults from Experiment 1, with the amplitude and latency values for the auditory N1 and P2 components shown in Table 
<xref ref-type="table" rid="tbl1">1</xref>
. These categorical age groupings are used here to illustrate developmental change but in further analyses age is treated as a continuous variable. To assess change over time, the developmental data were entered into a repeated measures ANCOVA with Condition (AO, AV, MM), and Component (N1, P2) as the within subjects factors, and Age (in months) added as a covariate. Main effects of Condition were analysed separately in an ANOVA (see Thomas, Annaz, Ansari, Serif, Jarrold & Karmiloff-Smith,
<xref rid="b68" ref-type="bibr">2009</xref>
).</p>
<table-wrap id="tbl1" position="float">
<label>Table 1</label>
<caption>
<p>Means (and standard deviations) for auditory N1 and P2 amplitude (peak to peak) and peak latency, for each age group. Latency values are not given for the Visual-only condition, as amplitude values show latent activity within the window of analysis rather than components</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th rowspan="1" colspan="1"></th>
<th rowspan="1" colspan="1"></th>
<th align="center" rowspan="1" colspan="1">Auditory-only</th>
<th align="center" rowspan="1" colspan="1">Visual-only</th>
<th align="center" rowspan="1" colspan="1">Audio-visual</th>
<th align="center" rowspan="1" colspan="1">Mismatch</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" rowspan="1" colspan="1">N1 amplitude (μV)</td>
<td align="center" rowspan="1" colspan="1">6&7</td>
<td align="center" rowspan="1" colspan="1">2.5 (1.6)</td>
<td align="center" rowspan="1" colspan="1">2.3 (1.4)</td>
<td align="center" rowspan="1" colspan="1">3.3 (2.2)</td>
<td align="center" rowspan="1" colspan="1">2.3 (1.6)</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td align="center" rowspan="1" colspan="1">8&9</td>
<td align="center" rowspan="1" colspan="1">3.5 (3.7)</td>
<td align="center" rowspan="1" colspan="1">1.6 (0.9)</td>
<td align="center" rowspan="1" colspan="1">2.9 (2.1)</td>
<td align="center" rowspan="1" colspan="1">3.1 (3.6)</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td align="center" rowspan="1" colspan="1">10&11</td>
<td align="center" rowspan="1" colspan="1">3.7 (1.3)</td>
<td align="center" rowspan="1" colspan="1">1.6 (0.9)</td>
<td align="center" rowspan="1" colspan="1">2.3 (1.1)</td>
<td align="center" rowspan="1" colspan="1">3.0 (1.4)</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td align="center" rowspan="1" colspan="1">Adult</td>
<td align="center" rowspan="1" colspan="1">5.5 (1.4)</td>
<td align="center" rowspan="1" colspan="1">1.5 (0.4)</td>
<td align="center" rowspan="1" colspan="1">4.4 (1.2)</td>
<td align="center" rowspan="1" colspan="1">4.3 (1.4)</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">N1 latency (ms)</td>
<td align="center" rowspan="1" colspan="1">6&7</td>
<td align="center" rowspan="1" colspan="1">114.8 (14.7)</td>
<td align="center" rowspan="1" colspan="1"></td>
<td align="center" rowspan="1" colspan="1">114.1 (10.5)</td>
<td align="center" rowspan="1" colspan="1">117.0 (11.1)</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td align="center" rowspan="1" colspan="1">8&9</td>
<td align="center" rowspan="1" colspan="1">105.5 (10.7)</td>
<td align="center" rowspan="1" colspan="1"></td>
<td align="center" rowspan="1" colspan="1">105.8 (12.4)</td>
<td align="center" rowspan="1" colspan="1">110.4 (14.0)</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td align="center" rowspan="1" colspan="1">10&11</td>
<td align="center" rowspan="1" colspan="1">109.1 (12.5)</td>
<td align="center" rowspan="1" colspan="1"></td>
<td align="center" rowspan="1" colspan="1">102.0 (12.7)</td>
<td align="center" rowspan="1" colspan="1">105.0 (11.4)</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td align="center" rowspan="1" colspan="1">Adult</td>
<td align="center" rowspan="1" colspan="1">103.3 (11.1)</td>
<td align="center" rowspan="1" colspan="1"></td>
<td align="center" rowspan="1" colspan="1">95.6 (11.0)</td>
<td align="center" rowspan="1" colspan="1">101.2 (7.0)</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">P2 amplitude (μV)</td>
<td align="center" rowspan="1" colspan="1">6&7</td>
<td align="center" rowspan="1" colspan="1">3.4 (2.3)</td>
<td align="center" rowspan="1" colspan="1">1.5 (1.1)</td>
<td align="center" rowspan="1" colspan="1">2.9 (2.3)</td>
<td align="center" rowspan="1" colspan="1">2.2 (2.1)</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td align="center" rowspan="1" colspan="1">8&9</td>
<td align="center" rowspan="1" colspan="1">6.6 (3.7)</td>
<td align="center" rowspan="1" colspan="1">1.8 (1.2)</td>
<td align="center" rowspan="1" colspan="1">4.7 (2.8)</td>
<td align="center" rowspan="1" colspan="1">4.2 (4.0)</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td align="center" rowspan="1" colspan="1">10&11</td>
<td align="center" rowspan="1" colspan="1">6.5 (2.7)</td>
<td align="center" rowspan="1" colspan="1">1.5 (1.4)</td>
<td align="center" rowspan="1" colspan="1">4.2 (2.9)</td>
<td align="center" rowspan="1" colspan="1">3.9 (2.2)</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td align="center" rowspan="1" colspan="1">Adult</td>
<td align="center" rowspan="1" colspan="1">10.8 (2.9)</td>
<td align="center" rowspan="1" colspan="1">1.5 (0.6)</td>
<td align="center" rowspan="1" colspan="1">9.1 (2.0)</td>
<td align="center" rowspan="1" colspan="1">8.1 (2.1)</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">P2 latency (ms)</td>
<td align="center" rowspan="1" colspan="1">6&7</td>
<td align="center" rowspan="1" colspan="1">195.2 (15.5)</td>
<td align="center" rowspan="1" colspan="1"></td>
<td align="center" rowspan="1" colspan="1">182.6 (17.1)</td>
<td align="center" rowspan="1" colspan="1">187.0 (20.2)</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td align="center" rowspan="1" colspan="1">8&9</td>
<td align="center" rowspan="1" colspan="1">188.9 (11.6)</td>
<td align="center" rowspan="1" colspan="1"></td>
<td align="center" rowspan="1" colspan="1">182.0 (13.9)</td>
<td align="center" rowspan="1" colspan="1">183.5 (9.6)</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td align="center" rowspan="1" colspan="1">10&11</td>
<td align="center" rowspan="1" colspan="1">195.1 (17.6)</td>
<td align="center" rowspan="1" colspan="1"></td>
<td align="center" rowspan="1" colspan="1">183.3 (19.8)</td>
<td align="center" rowspan="1" colspan="1">180.4 (11.7)</td>
</tr>
<tr>
<td rowspan="1" colspan="1"></td>
<td align="center" rowspan="1" colspan="1">Adult</td>
<td align="center" rowspan="1" colspan="1">196.2 (8.7)</td>
<td align="center" rowspan="1" colspan="1"></td>
<td align="center" rowspan="1" colspan="1">190.0 (9.5)</td>
<td align="center" rowspan="1" colspan="1">196.1 (12.4)</td>
</tr>
</tbody>
</table>
</table-wrap>
<fig id="fig03" position="float">
<label>Figure 3</label>
<caption>
<p>Grand average waveforms for each condition, Auditory-only (AO), Audio-visual (AV), Mismatch (MM) and Visual-only (VO) at the region of interest. Waveforms are shown divided by age group. The onset points of the visual and auditory stimuli are shown.</p>
</caption>
<graphic xlink:href="desc0017-0110-f3"></graphic>
</fig>
<p>A main effect of Condition emerged, <italic>F</italic>(2, 72) = 10.16, <italic>p</italic> < .001, ηp² = 0.22, with Bonferroni corrected pairwise comparisons revealing differences (<italic>p</italic> < .01) between AO and each multisensory condition, AO > AV = MM. An interaction between Condition and Component emerged, <italic>F</italic>(2, 72) = 9.59, <italic>p</italic> < .001, ηp² = 0.21, with the P2 component being affected by Condition, <italic>F</italic>(2, 72) = 17.12, <italic>p</italic> < .001, ηp² = 0.32, but not the N1 (<italic>p</italic> = .420). Again this P2 effect was driven by the difference (<italic>p</italic> < .001) between AO and each multisensory condition (AO > AV = MM), as shown by Bonferroni corrected pairwise comparisons.</p>
<p>There was no main effect of Age, but there was a significant interaction between Age and both Component, <italic>F</italic>(1, 35) = 9.52, <italic>p</italic> = .004, ηp² = 0.21, and Condition, <italic>F</italic>(2, 70) = 4.05, <italic>p</italic> = .022, ηp² = 0.10. The first of these interactions was driven by the P2 component showing a main effect of Age, <italic>F</italic>(1, 35) = 5.31, <italic>p</italic> = .027, ηp² = 0.13, whereas the N1 component did not (<italic>p</italic> = .991). The Age by Condition interaction was driven by the AO condition showing a main effect of Age, <italic>F</italic>(1, 35) = 4.14, <italic>p</italic> = .050, ηp² = 0.11, but not the AV (<italic>p</italic> = .97) or the MM (<italic>p</italic> = .198) conditions. So, the main effect of Condition revealed by the ANOVA seems to have been driven predominantly by the older children, and as a result of the AO response getting larger over development (as illustrated in Figure 
<xref ref-type="fig" rid="fig04">4</xref>
).</p>
<fig id="fig04" position="float">
<label>Figure 4</label>
<caption>
<p>Developmental trajectories for the Auditory-only (AO), Audio-visual (AV) and Mismatch (MM) conditions for auditory N1 and P2 peak to peak amplitude and peak latency.</p>
</caption>
<graphic xlink:href="desc0017-0110-f4"></graphic>
</fig>
<p>To further assess the changing relationship between Conditions over Age, a linear regression was run with Age as a predictor of the difference between AO and each audio-visual condition for N1 and P2. Age was found to significantly predict the difference between the AO and AV conditions for N1 amplitude, <italic>R</italic>² = 0.13, <italic>F</italic>(1, 35) = 5.38, <italic>p</italic> = .026, β = 0.365, and P2 amplitude, <italic>R</italic>² = 0.13, <italic>F</italic>(1, 35) = 5.077, <italic>p</italic> = .031, β = 0.356. The age at which the difference between conditions became significant was determined using the 95% confidence intervals around the regression lines (see Figure 
<xref ref-type="fig" rid="fig05">5</xref>
). The lower boundary crossed zero at 122 months (10.1 years) for N1 amplitude, and at 89 months (7.4 years) for P2 amplitude. The increasing difference between conditions was approximately equivalent for each component. However, Figure 
<xref ref-type="fig" rid="fig04">4</xref>
 suggests that for the N1 component, the change in difference results predominantly from a decrease in Audio-visual response amplitude, while for P2 the change was predominantly driven by an increase in Auditory-only amplitude. Age did not predict the difference between the AO and MM conditions for either the N1 (<italic>p</italic> = .846) or P2 (<italic>p</italic> = .087) components.</p>
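<p>As a Python sketch of this trajectory analysis (variable names are assumptions; the text specifies only a linear regression with 95% confidence intervals), the age at which the lower confidence bound of the AO minus AV difference first exceeds zero could be located as follows:</p>
<preformat>
import numpy as np
import statsmodels.api as sm

def onset_of_significance(age_months, ao_minus_av):
    """Regress the AO - AV amplitude difference on age and return the first
    age (in months) at which the lower 95% confidence bound of the predicted
    difference exceeds zero, or None if it never does."""
    fit = sm.OLS(ao_minus_av, sm.add_constant(age_months)).fit()
    grid = np.arange(min(age_months), max(age_months) + 1)
    lower = fit.get_prediction(sm.add_constant(grid)).conf_int(alpha=0.05)[:, 0]
    above = np.where(lower > 0)[0]
    return int(grid[above[0]]) if above.size else None
</preformat>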
<fig id="fig05" position="float">
<label>Figure 5</label>
<caption>
<p>Regression model with age predicting the difference between the AO and audio-visual conditions for auditory N1 and P2 amplitude. The arrows show the points at which the lower 95% confidence interval crosses 0 (122 and 89 months, respectively).</p>
</caption>
<graphic xlink:href="desc0017-0110-f5"></graphic>
</fig>
<p>For latency, the ANOVA revealed a main effect of Condition, <italic>F</italic>(2, 72) = 5.14, <italic>p</italic> = .008, ηp² = 0.13, driven by the difference (<italic>p</italic> < .05) between the AO and each audio-visual condition, AO > AV = MM. An interaction also emerged between Condition and Component, <italic>F</italic>(2, 72) = 5.52, <italic>p</italic> = .006, ηp² = 0.13. The P2 component was significantly influenced by Condition, <italic>F</italic>(2, 72) = 7.30, <italic>p</italic> = .001, ηp² = 0.17, driven by the Bonferroni corrected difference (<italic>p</italic> < .05) between AO and both audio-visual conditions; the N1 component was not influenced by Condition (<italic>p</italic> = .128).</p>
<p>The ANCOVA for latency revealed a main effect of Age, <italic>F</italic>(1, 35) = 4.56, <italic>p</italic> = .040, ηp² = 0.12, but no interaction between Age and Condition (see Figure 
<xref ref-type="fig" rid="fig04">4</xref>
). So, the latency of these auditory components was seen to shorten over development, but the effect of Condition did not change over this age range.</p>
<p>All analyses were re-run comparing responses to the multisensory conditions with responses to the sum of the unisensory conditions. This is a more traditional approach adopted in multisensory processing studies (see Calvert,
<xref rid="b12" ref-type="bibr">2001</xref>
). The results of this analysis showed the same pattern but with larger sub-additive effects, that is, the effect of Condition was exaggerated for all comparisons and was therefore less conservative.</p>
</sec>
</sec>
<sec>
<title>Discussion</title>
<sec>
<title>The influence of visual cues over mid-to-late childhood</title>
<p>With regard to amplitude, as a group, the children responded similarly to the adults, in that the P2 component was attenuated given congruent and incongruent visual cues compared to the Auditory-only condition. Over developmental time the P2 component increased in amplitude, with this effect being driven by an increase in response to the Auditory-only condition. Age predicted the difference between the Auditory-only and Audio-Visual (congruent) conditions for both components, with this effect on P2 predominantly resulting from an increased response to the Auditory-only stimuli, while for the N1 component a slight decrease in amplitude in response to the Audio-visual stimuli seems to be responsible. The difference between conditions became significant from 10.1 years for the N1 component, and at 7.4 years for the P2 component. The period between these two ages matches that seen in behavioural studies when visual speech cues come to reliably influence auditory perception both in terms of the McGurk illusion and audio-visual advantage during speech-in-noise (e.g. Tremblay
<italic>et al</italic>
.,
<xref rid="b70" ref-type="bibr">2007</xref>
; Wightman
<italic>et al</italic>
.,
<xref rid="b74" ref-type="bibr">2006</xref>
). These results suggest that the modulation of different auditory components represents separate processes in the integration and/or use of visual speech cues, and that this developmental process may be traced at the behavioural level. What is not clear is exactly what the information processing correlates of N1 and P2 attenuation might be.</p>
<p>If amplitude modulation does represent competition between inputs from different sensory modalities, as suggested above, then the developmental data imply that this response only emerges over mid-to-late childhood, but is not fully mature by age 12, as the additional amplitude attenuation seen in adults to incongruent audio-visual stimuli was not seen for the oldest children in this sample. This protracted period of maturation maps onto imaging data showing that regions in superior temporal cortex, which contribute to P2 generation in children as they do in adults (Ponton, Eggermont, Khosla, Kwong & Don,
<xref rid="b56" ref-type="bibr">2002</xref>
), do not mature until the teenage years (Gotgay
<italic>et al</italic>
.,
<xref rid="b23" ref-type="bibr">2004</xref>
; see Lenroot & Giedd,
<xref rid="b41" ref-type="bibr">2006</xref>
). Recent functional imaging data mirror this late development and support the role of STS in children's audio-visual speech perception (Nath, Fava & Beauchamp,
<xref rid="b51" ref-type="bibr">2011</xref>
). Dick and colleagues (Dick, Solodkin & Small,
<xref rid="b19" ref-type="bibr">2010</xref>
) measured brain activity in response to auditory and audio-visual speech in adults and 8- to 11-year-old children, and found that while the same areas were involved in perception for both adults and children, the relationships between those areas differed. For example, the functional connectivity between pSTS and frontal pre-motor regions was stronger for adults given audio-visual over auditory-only speech, but weaker for children.</p>
<p>With regard to latency, a different pattern emerged for the children, as a group, compared to the adult sample in Experiment 1. For the children, only the P2 component exhibited latency modulation in response to visual speech cues, and latency shortening was observed regardless of congruency between auditory and visual cues. Interpretations of previous adult data (Pilling,
<xref rid="b55" ref-type="bibr">2009</xref>
; Van Wassenhove et al.,
<xref rid="b71" ref-type="bibr">2005</xref>
) have rested on the effect of congruence-dependency, with congruent visual cues suggested to allow a prediction of the upcoming auditory signal, such that the degree of latency shortening reflects the difference between expected and perceived events. The current developmental data are not sensitive to congruency, and therefore cannot be interpreted entirely with recourse to the prediction of signal content. The present and previous adult data may therefore not tell the whole story regarding latency modulation. One possibility is that visual cues are involved in predicting not just
<italic>what</italic>
is about to be presented, but also
<italic>when</italic>
it is to be presented. Certainly, using non-speech stimuli, the auditory N1 and P2 components have been shown to be sensitive to both the content and timing of stimulus presentation (Viswanathan & Jansen,
<xref rid="b72" ref-type="bibr">2010</xref>
). In this case, children of the age range tested here may use visual cues to predict the timing but not the content of the upcoming auditory signal.</p>
<p>The idea that visual speech cues may allow a prediction of when important information in the auditory stream will be presented has been proposed before under the ‘peak listening’ hypothesis (Kim & Davis,
<xref rid="b34" ref-type="bibr">2004</xref>
). This theory states that visual speech cues predict when in the auditory signal energetic peaks will occur, which are particularly beneficial when processing speech in noise. If the shortening of latency does represent two predictive measures, then future work should reveal that latency shortening is sensitive to manipulations of both predictability of content and timing of the auditory signal relative to visual cues. Age did not interact with Condition with respect to latency modulation, so no change in the ability to predict the upcoming auditory stimulus emerged over this developmental window. The influence of visual speech cues on the latency of auditory components from age 6 may therefore represent an aspect of audio-visual speech perception that is continuous from infancy despite the U-shaped behavioural trajectory outlined in the introduction. However, the change in congruency dependence must occur after the age of 12, possibly revealing a much later sensitivity to upcoming auditory content.</p>
<p>Over developmental time, a main effect of age on component latency was revealed, indicating that children process these stimuli more rapidly as they get older. Auditory ERP responses are known to show a gradual course of developmental change and maturation over childhood and adolescence (Bishop, Hardiman, Uwer & von Suchodeltz,
<xref rid="b6" ref-type="bibr">2007</xref>
; Lippe, Kovacevic & McIntosh,
<xref rid="b44" ref-type="bibr">2009</xref>
). It is hard to tease apart the extent to which these changes result from the slow physiological maturation of the auditory cortex (Moore,
<xref rid="b48" ref-type="bibr">2002</xref>
), or changes in cognitive processes functionally underlying the activity or, more likely, a complex interaction between the two.</p>
</sec>
</sec>
</sec>
<sec>
<title>General discussion</title>
<p>The aim of this study was to chart the trajectory of the modulation of auditory ERP components by visual speech cues over developmental time. We first validated a new child-friendly paradigm using adult participants, which replicated previous findings of congruence-dependent shortening of ERP component latency and congruence-independent attenuation of component amplitude. A greater attenuation of amplitude emerged given mismatched visual speech cues, suggesting that attenuation may represent competition between inputs from different sensory modalities. This competition may be important for the process of evaluating the nature of multisensory stimuli in order to determine whether information across modalities refers to the same object or event. We have shown that the modulation of auditory ERP components by visual speech cues gradually emerges over developmental time, maturing at around the age when behavioural studies have revealed a use of visual cues in speech perception tasks. Notably though, the additional sensitivity to incongruent visual cues seen in adults was not evident in this developmental sample.</p>
<p>Regarding latency shortening, our adult results replicated previous findings, supporting the notion that latency modulation represents the process of predicting the content of the upcoming auditory signal, the predictive coding hypothesis. However, data from our child sample showed latency shortening for the P2 component regardless of the congruence between auditory and visual signals. We have therefore suggested that latency shortening may represent two predictive processes, relating to both the content and timing of the upcoming auditory signal, but that children within the age range tested here are not yet able to make content predictions.</p>
<p>Overall, these data support and extend previous studies pointing to the influence of visual cues on processing auditory speech. We have supported the notion that amplitude and latency modulation represent different aspects of audio-visual signal processing, but reinterpreted those data in the light of our new paradigm, and the developmental results. Furthermore, we have presented new data revealing that these responses gradually emerge over childhood.</p>
<sec>
<title>Study limitations and outstanding questions</title>
<p>This study was successful in its aim to develop a child-friendly ERP paradigm for the study of audio-visual speech, but was limited in a number of respects. The age range tested here, although relatively wide, was not sufficient to fully trace the development of the electrophysiological markers of audio-visual speech perception into adulthood. Another limitation, in terms of being able to draw firm conclusions, was that the audio-visual Mismatch stimuli used here were all irreconcilably incongruent. While this led to an interesting finding when compared to previous studies with adults, it might also have changed the strategy of participants. As matched and mismatched multisensory stimuli were randomly intermixed within each block, participants may have adopted more of a ‘wait and see’ strategy than they would under more naturalistic settings. One way for future studies to address whether this factor had a significant impact on the results would be to separate conditions by block.</p>
<p>Finally it should be noted that all the stimuli here were presented under conditions of no notable auditory noise. This factor may turn out to substantially impact on electrophysiological data given that dynamic functional changes in connectivity have been recorded between unisensory cortices and the STS as a function of noise (Nath & Beauchamp,
<xref rid="b50" ref-type="bibr">2011</xref>
). This modulation is thought to reflect changes in the weighting of information from each sensory modality, and should be considered in future electrophysiological investigations.</p>
<p>One question that has emerged from the current work is exactly what the development of electrophysiological responses represents at the level of information processing. The data on amplitude modulation presented here fit well with the behavioural data examining the gross benefit of visual cues to children. However, the modulation of component latency was evident at younger ages, and certainly the use of visual cues in infancy suggests that the process is one of continuous change rather than simply ‘coming online’ later in childhood. This developmental profile may represent changes in how visual speech cues are utilized in childhood with increasing experience and cortical maturation. For example, Fort and colleagues (Fort, Spinelli, Savariaux & Kandel,
<xref rid="b20" ref-type="bibr">2012</xref>
) found that during a vowel monitoring task, both adults and 5- to 10-year-old children, as a group, benefited from the availability of visual speech cues, but only adults showed an additional benefit of lexicality. These authors suggest that where adults use visual cues to help retrieve lexical information, children use the same cues to process the phonetic aspects of speech. Over developmental time, then, children may first use visual speech cues to aid phonetic processing, and later to aid comprehension.</p>
<p>This critical issue of the changing relationship between brain and behaviour over development needs to be addressed with further electrophysiological exploration in conjunction with more sensitive behavioural methods aimed at elucidating the different potential uses of visual speech cues. The exploration of audio-visual speech over childhood is important not just for typically developing children learning about the world in auditory noise, but also critically for those children growing up with developmental language disorders, for whom multisensory cues may contain valuable information to assist language development.</p>
</sec>
</sec>
</body>
<back>
<ack>
<p>This work was supported by an ESRC studentship awarded to Victoria C.P. Knowland, and an ESRC grant, ERS-062-23-2721, awarded to Michael S.C. Thomas. The authors would like to thank Torsten Baldeweg for his helpful comments.</p>
</ack>
<ref-list>
<title>References</title>
<ref id="b1">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Alais</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Burr</surname>
<given-names>D</given-names>
</name>
</person-group>
<article-title>The ventriloquist effect results from near optimal bimodal integration</article-title>
<source>Current Biology</source>
<year>2004</year>
<volume>14</volume>
<issue>3</issue>
<fpage>257</fpage>
<lpage>262</lpage>
<pub-id pub-id-type="pmid">14761661</pub-id>
</element-citation>
</ref>
<ref id="b2">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bernstein</surname>
<given-names>LE</given-names>
</name>
<name>
<surname>Auer</surname>
<given-names>ET</given-names>
</name>
<name>
<surname>Wagner</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Ponton</surname>
<given-names>CW</given-names>
</name>
</person-group>
<article-title>Spatiotemporal dynamics of audio-visual speech processing</article-title>
<source>NeuroImage</source>
<year>2007</year>
<volume>39</volume>
<fpage>423</fpage>
<lpage>435</lpage>
<pub-id pub-id-type="pmid">17920933</pub-id>
</element-citation>
</ref>
<ref id="b3">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Besle</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Bertrand</surname>
<given-names>O</given-names>
</name>
<name>
<surname>Giard</surname>
<given-names>MH</given-names>
</name>
</person-group>
<article-title>Electrophysiological (EEG, sEEG, MEG) evidence for multiple audio-visual interactions in the human auditory cortex</article-title>
<source>Hearing Research</source>
<year>2009</year>
<volume>258</volume>
<fpage>143</fpage>
<lpage>151</lpage>
<pub-id pub-id-type="pmid">19573583</pub-id>
</element-citation>
</ref>
<ref id="b4">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Besle</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Fischer</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Bidet-Caulet</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Lecaignard</surname>
<given-names>F</given-names>
</name>
<name>
<surname>Bertrand</surname>
<given-names>O</given-names>
</name>
<name>
<surname>Giard</surname>
<given-names>M-H</given-names>
</name>
</person-group>
<article-title>Visual activation and audio-visual interactions in the auditory cortex during speech perception: intercranial recording in humans</article-title>
<source>Journal of Neuroscience</source>
<year>2008</year>
<volume>28</volume>
<issue>52</issue>
<fpage>14301</fpage>
<lpage>14310</lpage>
<pub-id pub-id-type="pmid">19109511</pub-id>
</element-citation>
</ref>
<ref id="b5">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Besle</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Fort</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Delpuech</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Giard</surname>
<given-names>M-H</given-names>
</name>
</person-group>
<article-title>Bimodal speech: early suppressive visual effects in human auditory cortex</article-title>
<source>European Journal of Neuroscience</source>
<year>2004</year>
<volume>20</volume>
<fpage>2225</fpage>
<lpage>2234</lpage>
<pub-id pub-id-type="pmid">15450102</pub-id>
</element-citation>
</ref>
<ref id="b6">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bishop</surname>
<given-names>DVM</given-names>
</name>
<name>
<surname>Hardiman</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Uwer</surname>
<given-names>R</given-names>
</name>
<name>
<surname>von Suchodeltz</surname>
<given-names>W</given-names>
</name>
</person-group>
<article-title>Maturation of the long-latency auditory ERP: step function changes at start and end of adolescence</article-title>
<source>Developmental Science</source>
<year>2007</year>
<volume>10</volume>
<issue>5</issue>
<fpage>565</fpage>
<lpage>575</lpage>
<pub-id pub-id-type="pmid">17683343</pub-id>
</element-citation>
</ref>
<ref id="b7">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bristow</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Dehaene-Lambertz</surname>
<given-names>G</given-names>
</name>
<name>
<surname>Mattout</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Soares</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Gilga</surname>
<given-names>T</given-names>
</name>
<name>
<surname>Baillet</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Mangin</surname>
<given-names>F</given-names>
</name>
</person-group>
<article-title>Hearing faces: how the infant brain matches the face it sees with the speech it hears</article-title>
<source>Journal of Cognitive Neuroscience</source>
<year>2008</year>
<volume>21</volume>
<issue>5</issue>
<fpage>905</fpage>
<lpage>921</lpage>
<pub-id pub-id-type="pmid">18702595</pub-id>
</element-citation>
</ref>
<ref id="b8">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Burnham</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Dodd</surname>
<given-names>B</given-names>
</name>
</person-group>
<article-title>Auditory-visual speech integration by pre-linguistic infants: perception of an emergent consonant in the McGurk effect</article-title>
<source>Developmental Psychobiology</source>
<year>2004</year>
<volume>44</volume>
<issue>4</issue>
<fpage>204</fpage>
<lpage>220</lpage>
<pub-id pub-id-type="pmid">15549685</pub-id>
</element-citation>
</ref>
<ref id="b9">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bushara</surname>
<given-names>KO</given-names>
</name>
<name>
<surname>Hanawaka</surname>
<given-names>T</given-names>
</name>
<name>
<surname>Immisch</surname>
<given-names>I</given-names>
</name>
<name>
<surname>Toma</surname>
<given-names>K</given-names>
</name>
<name>
<surname>Kansaku</surname>
<given-names>K</given-names>
</name>
<name>
<surname>Hallett</surname>
<given-names>M</given-names>
</name>
</person-group>
<article-title>Neural correlates of cross-modal binding</article-title>
<source>Nature Neuroscience</source>
<year>2003</year>
<volume>6</volume>
<issue>2</issue>
<fpage>190</fpage>
<lpage>195</lpage>
</element-citation>
</ref>
<ref id="b10">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Callan</surname>
<given-names>DE</given-names>
</name>
<name>
<surname>Jones</surname>
<given-names>JA</given-names>
</name>
<name>
<surname>Munhall</surname>
<given-names>K</given-names>
</name>
<name>
<surname>Kroos</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Callan</surname>
<given-names>AM</given-names>
</name>
<name>
<surname>Vatikiotis-Bateson</surname>
<given-names>E</given-names>
</name>
</person-group>
<article-title>Multisensory integration sites identified by perception of spatial wavelet filtered visual speech gesture information</article-title>
<source>Journal of Cognitive Neuroscience</source>
<year>2004</year>
<volume>16</volume>
<fpage>805</fpage>
<lpage>816</lpage>
<pub-id pub-id-type="pmid">15200708</pub-id>
</element-citation>
</ref>
<ref id="b11">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Callaway</surname>
<given-names>E</given-names>
</name>
<name>
<surname>Halliday</surname>
<given-names>R</given-names>
</name>
</person-group>
<article-title>The effect of attentional effort on visual evoked potential N1 latency</article-title>
<source>Psychiatry Research</source>
<year>1982</year>
<volume>7</volume>
<fpage>299</fpage>
<lpage>308</lpage>
<pub-id pub-id-type="pmid">6962438</pub-id>
</element-citation>
</ref>
<ref id="b12">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Calvert</surname>
<given-names>G</given-names>
</name>
</person-group>
<article-title>Crossmodal processing in the human brain: insights from functional neuroimaging studies</article-title>
<source>Cerebral Cortex</source>
<year>2001</year>
<volume>11</volume>
<issue>12</issue>
<fpage>1110</fpage>
<lpage>1123</lpage>
<pub-id pub-id-type="pmid">11709482</pub-id>
</element-citation>
</ref>
<ref id="b13">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Calvert</surname>
<given-names>GA</given-names>
</name>
<name>
<surname>Bullmore</surname>
<given-names>E</given-names>
</name>
<name>
<surname>Brammer</surname>
<given-names>MJ</given-names>
</name>
<name>
<surname>Campbell</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Woodruff</surname>
<given-names>P</given-names>
</name>
<name>
<surname>McGuire</surname>
<given-names>P</given-names>
</name>
<name>
<surname>Williams</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Iversen</surname>
<given-names>SD</given-names>
</name>
<name>
<surname>David</surname>
<given-names>AS</given-names>
</name>
</person-group>
<article-title>Activation of auditory cortex during silent speechreading</article-title>
<source>Science</source>
<year>1997</year>
<volume>276</volume>
<fpage>593</fpage>
<lpage>596</lpage>
<pub-id pub-id-type="pmid">9110978</pub-id>
</element-citation>
</ref>
<ref id="b14">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Calvert</surname>
<given-names>GA</given-names>
</name>
<name>
<surname>Campbell</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Brammer</surname>
<given-names>MJ</given-names>
</name>
</person-group>
<article-title>Evidence from functional magnetic resonance imaging of crossmodal binding in human heteromodal cortex</article-title>
<source>Current Biology</source>
<year>2000</year>
<volume>10</volume>
<fpage>649</fpage>
<lpage>657</lpage>
<pub-id pub-id-type="pmid">10837246</pub-id>
</element-citation>
</ref>
<ref id="b15">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Campbell</surname>
<given-names>R</given-names>
</name>
</person-group>
<article-title>The processing of audio-visual speech: empirical and neural bases</article-title>
<source>Philosophical Transactions of the Royal Society B</source>
<year>2008</year>
<volume>363</volume>
<fpage>1001</fpage>
<lpage>1010</lpage>
<comment>doi:
<ext-link ext-link-type="doi" xlink:href="10.1098/rstb.2007.2155">10.1098/rstb.2007.2155</ext-link>
</comment>
</element-citation>
</ref>
<ref id="b16">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Capek</surname>
<given-names>CM</given-names>
</name>
<name>
<surname>Bavelier</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Corina</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Newman</surname>
<given-names>AJ</given-names>
</name>
<name>
<surname>Jezzard</surname>
<given-names>P</given-names>
</name>
<name>
<surname>Neville</surname>
<given-names>HJ</given-names>
</name>
</person-group>
<article-title>The cortical organization of audio-visual sentence comprehension: an fMRI study at 4 Tesla</article-title>
<source>Cognitive Brain Research</source>
<year>2004</year>
<volume>20</volume>
<fpage>111</fpage>
<lpage>119</lpage>
<pub-id pub-id-type="pmid">15183384</pub-id>
</element-citation>
</ref>
<ref id="b17">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Chandrasekaran</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Trubanova</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Stillittano</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Caplier</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Ghazanfar</surname>
<given-names>AA</given-names>
</name>
</person-group>
<article-title>The natural statistics of audiovisual speech</article-title>
<source>PLOS Computational Biology</source>
<year>2009</year>
<volume>5</volume>
<issue>7</issue>
<fpage>e1000436</fpage>
<pub-id pub-id-type="pmid">19609344</pub-id>
</element-citation>
</ref>
<ref id="b18">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Desjardins</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Werker</surname>
<given-names>JF</given-names>
</name>
</person-group>
<article-title>Is the integration of heard and seen speech mandatory for infants?</article-title>
<source>Developmental Psychobiology</source>
<year>2004</year>
<volume>45</volume>
<issue>4</issue>
<fpage>187</fpage>
<lpage>203</lpage>
<pub-id pub-id-type="pmid">15549681</pub-id>
</element-citation>
</ref>
<ref id="b19">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Dick</surname>
<given-names>AS</given-names>
</name>
<name>
<surname>Solodkin</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Small</surname>
<given-names>S</given-names>
</name>
</person-group>
<article-title>Neural development of networks for audiovisual speech comprehension</article-title>
<source>Brain and Language</source>
<year>2010</year>
<volume>114</volume>
<issue>2</issue>
<fpage>101</fpage>
<lpage>114</lpage>
<pub-id pub-id-type="pmid">19781755</pub-id>
</element-citation>
</ref>
<ref id="b20">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Fort</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Spinelli</surname>
<given-names>E</given-names>
</name>
<name>
<surname>Savariaux</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Kandel</surname>
<given-names>S</given-names>
</name>
</person-group>
<article-title>Audiovisual vowel monitoring and the word superiority effect in children</article-title>
<source>International Journal of Behavioral Development</source>
<year>2012</year>
<volume>36</volume>
<issue>6</issue>
<fpage>457</fpage>
<lpage>467</lpage>
</element-citation>
</ref>
<ref id="b21">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Giard</surname>
<given-names>M-H</given-names>
</name>
<name>
<surname>Perrin</surname>
<given-names>F</given-names>
</name>
<name>
<surname>Echallier</surname>
<given-names>JF</given-names>
</name>
<name>
<surname>Thevenet</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Froment</surname>
<given-names>JC</given-names>
</name>
<name>
<surname>Pernier</surname>
<given-names>J</given-names>
</name>
</person-group>
<article-title>Dissociation of temporal and frontal components in the human auditory N1 wave: a scalp current density and dipole model analysis</article-title>
<source>Electroencephalography and Clinical Neurophysiology</source>
<year>1994</year>
<volume>92</volume>
<fpage>238</fpage>
<lpage>252</lpage>
<pub-id pub-id-type="pmid">7514993</pub-id>
</element-citation>
</ref>
<ref id="b22">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Gori</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Del Viva</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Sandini</surname>
<given-names>G</given-names>
</name>
<name>
<surname>Burr</surname>
<given-names>DC</given-names>
</name>
</person-group>
<article-title>Young children do not integrate visual and haptic form information</article-title>
<source>Current Biology</source>
<year>2008</year>
<volume>18</volume>
<issue>9</issue>
<fpage>694</fpage>
<lpage>698</lpage>
<pub-id pub-id-type="pmid">18450446</pub-id>
</element-citation>
</ref>
<ref id="b23">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Gogtay</surname>
<given-names>N</given-names>
</name>
<name>
<surname>Giedd</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Lusk</surname>
<given-names>L</given-names>
</name>
<name>
<surname>Hayashi</surname>
<given-names>KM</given-names>
</name>
<name>
<surname>Greenstein</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Vaituzis</surname>
<given-names>AC</given-names>
</name>
<name>
<surname>Nugent</surname>
<given-names>TF</given-names>
</name>
<name>
<surname>Herman</surname>
<given-names>DH</given-names>
<suffix>III</suffix>
</name>
<name>
<surname>Clasen</surname>
<given-names>LS</given-names>
</name>
<name>
<surname>Toga</surname>
<given-names>AW</given-names>
</name>
<name>
<surname>Rapoport</surname>
<given-names>JL</given-names>
</name>
<name>
<surname>Thompson</surname>
<given-names>PM</given-names>
</name>
</person-group>
<article-title>Dynamic mapping of human cortical development during childhood through early adulthood</article-title>
<source>Proceedings of the National Academy of Sciences, USA</source>
<year>2004</year>
<volume>101</volume>
<issue>21</issue>
<fpage>8174</fpage>
<lpage>8179</lpage>
</element-citation>
</ref>
<ref id="b24">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Grant</surname>
<given-names>KW</given-names>
</name>
<name>
<surname>Greenberg</surname>
<given-names>S</given-names>
</name>
</person-group>
<article-title>Speech intelligibility derived from asynchronous processing of auditory-visual information</article-title>
<source>Proceedings of the Workshop on Audio-Visual Speech Processing</source>
<year>2001</year>
<comment>(AVSP-2001)</comment>
</element-citation>
</ref>
<ref id="b25">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Grant</surname>
<given-names>KW</given-names>
</name>
<name>
<surname>Seitz</surname>
<given-names>PF</given-names>
</name>
</person-group>
<article-title>The use of visible speech cues for improving auditory detection of spoken sentences</article-title>
<source>Journal of the Acoustical Society of America</source>
<year>2000</year>
<volume>108</volume>
<issue>3</issue>
<fpage>1197</fpage>
<lpage>1208</lpage>
<pub-id pub-id-type="pmid">11008820</pub-id>
</element-citation>
</ref>
<ref id="b26">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Green</surname>
<given-names>KP</given-names>
</name>
<name>
<surname>Kuhl</surname>
<given-names>PK</given-names>
</name>
<name>
<surname>Meltzoff</surname>
<given-names>AN</given-names>
</name>
<name>
<surname>Stevens</surname>
<given-names>EB</given-names>
</name>
</person-group>
<article-title>Integrating speech information across talkers, gender, and sensory modality: female faces and male voices in the McGurk effect</article-title>
<source>Perception & Psychophysics</source>
<year>1991</year>
<volume>50</volume>
<fpage>524</fpage>
<lpage>536</lpage>
<pub-id pub-id-type="pmid">1780200</pub-id>
</element-citation>
</ref>
<ref id="b27">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hall</surname>
<given-names>DA</given-names>
</name>
<name>
<surname>Fussell</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Summerfield</surname>
<given-names>AQ</given-names>
</name>
</person-group>
<article-title>Reading fluent speech from talking faces: typical brain networks and individual differences</article-title>
<source>Journal of Cognitive Neuroscience</source>
<year>2005</year>
<volume>17</volume>
<fpage>939</fpage>
<lpage>953</lpage>
<pub-id pub-id-type="pmid">15969911</pub-id>
</element-citation>
</ref>
<ref id="b28">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hoonhorst</surname>
<given-names>I</given-names>
</name>
<name>
<surname>Serniclaes</surname>
<given-names>W</given-names>
</name>
<name>
<surname>Collet</surname>
<given-names>G</given-names>
</name>
<name>
<surname>Colin</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Markessis</surname>
<given-names>E</given-names>
</name>
<name>
<surname>Radeau</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Deltenre</surname>
<given-names>P</given-names>
</name>
</person-group>
<article-title>N1b and Na subcomponents of the N100 long latency auditory evoked-potential: neurophysiological correlates of voicing in French-speaking subjects</article-title>
<source>Clinical Neurophysiology</source>
<year>2009</year>
<volume>120</volume>
<issue>5</issue>
<fpage>897</fpage>
<lpage>903</lpage>
<pub-id pub-id-type="pmid">19329357</pub-id>
</element-citation>
</ref>
<ref id="b29">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hocking</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Price</surname>
<given-names>CJ</given-names>
</name>
</person-group>
<article-title>The role of the posterior temporal sulcus in audiovisual processing</article-title>
<source>Cerebral Cortex</source>
<year>2008</year>
<volume>18</volume>
<issue>10</issue>
<fpage>2439</fpage>
<lpage>2449</lpage>
<pub-id pub-id-type="pmid">18281303</pub-id>
</element-citation>
</ref>
<ref id="b30">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hockley</surname>
<given-names>NS</given-names>
</name>
<name>
<surname>Polka</surname>
<given-names>L</given-names>
</name>
</person-group>
<article-title>A developmental study of audiovisual speech perception using the McGurk paradigm</article-title>
<source>Journal of the Acoustical Society of America</source>
<year>1994</year>
<volume>96</volume>
<issue>5</issue>
<fpage>3309</fpage>
<lpage>3309</lpage>
</element-citation>
</ref>
<ref id="b31">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Jasper</surname>
<given-names>HH</given-names>
</name>
</person-group>
<article-title>The ten–twenty electrode system of the International Federation</article-title>
<source>Electroencephalography and Clinical Neurophysiology</source>
<year>1958</year>
<volume>10</volume>
<fpage>371</fpage>
<lpage>375</lpage>
</element-citation>
</ref>
<ref id="b32">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Jerger</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Damian</surname>
<given-names>MF</given-names>
</name>
<name>
<surname>Spence</surname>
<given-names>MJ</given-names>
</name>
<name>
<surname>Tye-Murray</surname>
<given-names>N</given-names>
</name>
<name>
<surname>Abdi</surname>
<given-names>H</given-names>
</name>
</person-group>
<article-title>Developmental shifts in children's sensitivity to visual speech: a new multisensory picture word task</article-title>
<source>Journal of Experimental Child Psychology</source>
<year>2009</year>
<volume>102</volume>
<fpage>40</fpage>
<lpage>59</lpage>
<pub-id pub-id-type="pmid">18829049</pub-id>
</element-citation>
</ref>
<ref id="b33">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kawabe</surname>
<given-names>T</given-names>
</name>
<name>
<surname>Shirai</surname>
<given-names>N</given-names>
</name>
<name>
<surname>Wada</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Miura</surname>
<given-names>K</given-names>
</name>
<name>
<surname>Kanazawa</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Yamaguchi</surname>
<given-names>MK</given-names>
</name>
</person-group>
<article-title>The audiovisual tau effect in infancy</article-title>
<source>PLoS ONE</source>
<year>2010</year>
<volume>5</volume>
<issue>3</issue>
<fpage>e9503</fpage>
<comment>doi:
<ext-link ext-link-type="doi" xlink:href="10.1371/journal.pone.0009503">10.1371/journal.pone.0009503</ext-link>
</comment>
<pub-id pub-id-type="pmid">20209137</pub-id>
</element-citation>
</ref>
<ref id="b34">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kim</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Davis</surname>
<given-names>C</given-names>
</name>
</person-group>
<article-title>Investigating the audio-visual speech detection advantage</article-title>
<source>Speech Communication</source>
<year>2004</year>
<volume>44</volume>
<fpage>19</fpage>
<lpage>30</lpage>
</element-citation>
</ref>
<ref id="b35">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Klucharev</surname>
<given-names>V</given-names>
</name>
<name>
<surname>Mottonen</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Sams</surname>
<given-names>M</given-names>
</name>
</person-group>
<article-title>Electrophysiological indicators of phonetic and non-phonetic multisensory interactions during audio-visual speech perception</article-title>
<source>Cognitive Brain Research</source>
<year>2003</year>
<volume>18</volume>
<fpage>65</fpage>
<lpage>75</lpage>
<pub-id pub-id-type="pmid">14659498</pub-id>
</element-citation>
</ref>
<ref id="b36">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kuhl</surname>
<given-names>P</given-names>
</name>
<name>
<surname>Meltzoff</surname>
<given-names>A</given-names>
</name>
</person-group>
<article-title>The bimodal perception of speech in infancy</article-title>
<source>Science</source>
<year>1982</year>
<volume>218</volume>
<fpage>1138</fpage>
<lpage>1141</lpage>
<pub-id pub-id-type="pmid">7146899</pub-id>
</element-citation>
</ref>
<ref id="b37">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kuperman</surname>
<given-names>V</given-names>
</name>
<name>
<surname>Stadthagen-Gonzalez</surname>
<given-names>H</given-names>
</name>
<name>
<surname>Brysbaert</surname>
<given-names>M</given-names>
</name>
</person-group>
<article-title>Age of acquisition ratings for 30,000 English words</article-title>
<source>Behavior Research Methods</source>
<year>2012</year>
<volume>44</volume>
<issue>4</issue>
<fpage>978</fpage>
<lpage>990</lpage>
</element-citation>
</ref>
<ref id="b38">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kushnerenko</surname>
<given-names>E</given-names>
</name>
<name>
<surname>Teinonen</surname>
<given-names>T</given-names>
</name>
<name>
<surname>Volein</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Csibra</surname>
<given-names>G</given-names>
</name>
</person-group>
<article-title>Electrophysiological evidence of illusory audiovisual speech percept in human infants</article-title>
<source>Proceedings of the National Academy of Sciences, USA</source>
<year>2008</year>
<volume>105</volume>
<issue>32</issue>
<fpage>11442</fpage>
<lpage>11445</lpage>
</element-citation>
</ref>
<ref id="b39">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Lange</surname>
<given-names>K</given-names>
</name>
</person-group>
<article-title>Brain correlates of early auditory processing are attenuated by expectations for time and pitch</article-title>
<source>Brain and Cognition</source>
<year>2009</year>
<volume>69</volume>
<issue>1</issue>
<fpage>127</fpage>
<lpage>137</lpage>
<pub-id pub-id-type="pmid">18644669</pub-id>
</element-citation>
</ref>
<ref id="b40">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Leech</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Holt</surname>
<given-names>LL</given-names>
</name>
<name>
<surname>Devlin</surname>
<given-names>JT</given-names>
</name>
<name>
<surname>Dick</surname>
<given-names>F</given-names>
</name>
</person-group>
<article-title>Expertise with artificial non-speech sounds recruits speech-sensitive cortical regions</article-title>
<source>Journal of Neuroscience</source>
<year>2009</year>
<volume>29</volume>
<issue>16</issue>
<fpage>5234</fpage>
<lpage>5239</lpage>
<pub-id pub-id-type="pmid">19386919</pub-id>
</element-citation>
</ref>
<ref id="b41">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Lenroot</surname>
<given-names>RK</given-names>
</name>
<name>
<surname>Giedd</surname>
<given-names>JN</given-names>
</name>
</person-group>
<article-title>Brain development in children and adolescents: insights from anatomical magnetic resonance imaging</article-title>
<source>Neuroscience and Biobehavioral Reviews</source>
<year>2006</year>
<volume>30</volume>
<fpage>718</fpage>
<lpage>729</lpage>
<pub-id pub-id-type="pmid">16887188</pub-id>
</element-citation>
</ref>
<ref id="b42">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Lewkowicz</surname>
<given-names>DJ</given-names>
</name>
<name>
<surname>Hansen-Tift</surname>
<given-names>AM</given-names>
</name>
</person-group>
<article-title>Infants deploy selective attention to the mouth of a talking face when learning speech</article-title>
<source>Proceedings of the National Academy of Sciences, USA</source>
<year>2012</year>
<volume>109</volume>
<issue>5</issue>
<fpage>1431</fpage>
<lpage>1436</lpage>
</element-citation>
</ref>
<ref id="b43">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Liebenthal</surname>
<given-names>E</given-names>
</name>
<name>
<surname>Desai</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Ellingson</surname>
<given-names>MM</given-names>
</name>
<name>
<surname>Ramachandran</surname>
<given-names>B</given-names>
</name>
<name>
<surname>Desai</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Binder</surname>
<given-names>JR</given-names>
</name>
</person-group>
<article-title>Specialisation along the left superior temporal sulcus for auditory categorisation</article-title>
<source>Cerebral Cortex</source>
<year>2010</year>
<volume>20</volume>
<issue>12</issue>
<fpage>2958</fpage>
<lpage>2970</lpage>
<pub-id pub-id-type="pmid">20382643</pub-id>
</element-citation>
</ref>
<ref id="b44">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Lippe</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Kovacevic</surname>
<given-names>N</given-names>
</name>
<name>
<surname>McIntosh</surname>
<given-names>AR</given-names>
</name>
</person-group>
<article-title>Differential maturation of brain signal complexity in the human auditory and visual system</article-title>
<source>Frontiers in Human Neuroscience</source>
<year>2009</year>
<volume>3</volume>
<fpage>48</fpage>
<pub-id pub-id-type="pmid">19949455</pub-id>
</element-citation>
</ref>
<ref id="b45">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>McGurk</surname>
<given-names>H</given-names>
</name>
<name>
<surname>MacDonald</surname>
<given-names>J</given-names>
</name>
</person-group>
<article-title>Hearing lips and seeing voices</article-title>
<source>Nature</source>
<year>1976</year>
<volume>264</volume>
<fpage>746</fpage>
<lpage>748</lpage>
<pub-id pub-id-type="pmid">1012311</pub-id>
</element-citation>
</ref>
<ref id="b46">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Martin</surname>
<given-names>L</given-names>
</name>
<name>
<surname>Barajas</surname>
<given-names>JJ</given-names>
</name>
<name>
<surname>Fernandez</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Torres</surname>
<given-names>E</given-names>
</name>
</person-group>
<article-title>Auditory event-related potentials in well-characterized groups of children</article-title>
<source>Electroencephalography and Clinical Neurophysiology: Evoked Potentials</source>
<year>1988</year>
<volume>71</volume>
<fpage>375</fpage>
<lpage>381</lpage>
<pub-id pub-id-type="pmid">2457489</pub-id>
</element-citation>
</ref>
<ref id="b47">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Massaro</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Thompson</surname>
<given-names>L</given-names>
</name>
<name>
<surname>Barron</surname>
<given-names>B</given-names>
</name>
<name>
<surname>Laren</surname>
<given-names>E</given-names>
</name>
</person-group>
<article-title>Developmental changes in visual and auditory contributions to speech perception</article-title>
<source>Journal of Experimental Child Psychology</source>
<year>1986</year>
<volume>41</volume>
<fpage>93</fpage>
<lpage>113</lpage>
<pub-id pub-id-type="pmid">3950540</pub-id>
</element-citation>
</ref>
<ref id="b48">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Moore</surname>
<given-names>JK</given-names>
</name>
</person-group>
<article-title>Maturation of human auditory cortex: implications for speech perception</article-title>
<source>Annals of Otology, Rhinology and Laryngology</source>
<year>2002</year>
<volume>111</volume>
<fpage>7</fpage>
<lpage>10</lpage>
</element-citation>
</ref>
<ref id="b49">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Musacchia</surname>
<given-names>G</given-names>
</name>
<name>
<surname>Sams</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Nicol</surname>
<given-names>T</given-names>
</name>
<name>
<surname>Kraus</surname>
<given-names>N</given-names>
</name>
</person-group>
<article-title>Seeing speech affects acoustic information processing in the human brainstem</article-title>
<source>Experimental Brain Research</source>
<year>2006</year>
<volume>168</volume>
<issue>1–2</issue>
<fpage>1</fpage>
<lpage>10</lpage>
<pub-id pub-id-type="pmid">16217645</pub-id>
</element-citation>
</ref>
<ref id="b50">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Nath</surname>
<given-names>AR</given-names>
</name>
<name>
<surname>Beauchamp</surname>
<given-names>MS</given-names>
</name>
</person-group>
<article-title>Dynamic changes in superior temporal sulcus connectivity during perception of noisy audiovisual speech</article-title>
<source>Journal of Neuroscience</source>
<year>2011</year>
<volume>31</volume>
<issue>5</issue>
<fpage>1704</fpage>
<lpage>1714</lpage>
<pub-id pub-id-type="pmid">21289179</pub-id>
</element-citation>
</ref>
<ref id="b51">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Nath</surname>
<given-names>AR</given-names>
</name>
<name>
<surname>Fava</surname>
<given-names>EE</given-names>
</name>
<name>
<surname>Beauchamp</surname>
<given-names>MS</given-names>
</name>
</person-group>
<article-title>Neural correlates of interindividual differences in children's audiovisual speech perception</article-title>
<source>Journal of Neuroscience</source>
<year>2011</year>
<volume>31</volume>
<issue>39</issue>
<fpage>13963</fpage>
<lpage>13971</lpage>
<pub-id pub-id-type="pmid">21957257</pub-id>
</element-citation>
</ref>
<ref id="b52">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Pang</surname>
<given-names>EW</given-names>
</name>
<name>
<surname>Taylor</surname>
<given-names>MJ</given-names>
</name>
</person-group>
<article-title>Tracking the development of the N1 from age 3 to adulthood: an examination of speech and non-speech stimuli</article-title>
<source>Clinical Neurophysiology</source>
<year>2000</year>
<volume>111</volume>
<issue>3</issue>
<fpage>388</fpage>
<lpage>397</lpage>
<pub-id pub-id-type="pmid">10699397</pub-id>
</element-citation>
</ref>
<ref id="b53">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Patterson</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Werker</surname>
<given-names>J</given-names>
</name>
</person-group>
<article-title>Matching phonetic information in lips and voice is robust in 4.5-month-old infants</article-title>
<source>Infant Behavior and Development</source>
<year>1999</year>
<volume>22</volume>
<fpage>237</fpage>
<lpage>247</lpage>
</element-citation>
</ref>
<ref id="b54">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Picton</surname>
<given-names>TW</given-names>
</name>
<name>
<surname>Hillyard</surname>
<given-names>SA</given-names>
</name>
<name>
<surname>Krausz</surname>
<given-names>HI</given-names>
</name>
<name>
<surname>Galambos</surname>
<given-names>R</given-names>
</name>
</person-group>
<article-title>Human auditory evoked potentials</article-title>
<source>Electroencephalography and Clinical Neurophysiology</source>
<year>1974</year>
<volume>36</volume>
<fpage>179</fpage>
<lpage>190</lpage>
<pub-id pub-id-type="pmid">4129630</pub-id>
</element-citation>
</ref>
<ref id="b55">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Pilling</surname>
<given-names>M</given-names>
</name>
</person-group>
<article-title>Auditory event-related potentials (ERPs) in audio-visual speech perception</article-title>
<source>Journal of Speech, Language and Hearing Research</source>
<year>2009</year>
<volume>52</volume>
<fpage>1073</fpage>
<lpage>1081</lpage>
</element-citation>
</ref>
<ref id="b56">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ponton</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Eggermont</surname>
<given-names>JJ</given-names>
</name>
<name>
<surname>Khosla</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Kwong</surname>
<given-names>B</given-names>
</name>
<name>
<surname>Don</surname>
<given-names>M</given-names>
</name>
</person-group>
<article-title>Maturation of human central auditory system activity: separating auditory evoked potentials by dipole source modeling</article-title>
<source>Clinical Neurophysiology</source>
<year>2002</year>
<volume>113</volume>
<fpage>407</fpage>
<lpage>420</lpage>
<pub-id pub-id-type="pmid">11897541</pub-id>
</element-citation>
</ref>
<ref id="b57">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Reale</surname>
<given-names>RA</given-names>
</name>
<name>
<surname>Calvert</surname>
<given-names>GA</given-names>
</name>
<name>
<surname>Thesen</surname>
<given-names>T</given-names>
</name>
<name>
<surname>Jenison</surname>
<given-names>RL</given-names>
</name>
<name>
<surname>Kawasaki</surname>
<given-names>H</given-names>
</name>
<name>
<surname>Oya</surname>
<given-names>H</given-names>
</name>
<name>
<surname>Howard</surname>
<given-names>MA</given-names>
</name>
<name>
<surname>Brugge</surname>
<given-names>JF</given-names>
</name>
</person-group>
<article-title>Auditory-visual processing represented in the human superior temporal gyrus</article-title>
<source>Neuroscience</source>
<year>2007</year>
<volume>145</volume>
<issue>1</issue>
<fpage>162</fpage>
<lpage>184</lpage>
<pub-id pub-id-type="pmid">17241747</pub-id>
</element-citation>
</ref>
<ref id="b58">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ritter</surname>
<given-names>W</given-names>
</name>
<name>
<surname>Simson</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Vaughan</surname>
<given-names>H</given-names>
</name>
</person-group>
<article-title>Effects of the amount of stimulus information processed on negative event-related potentials</article-title>
<source>Electroencephalography and Clinical Neurophysiology</source>
<year>1988</year>
<volume>28</volume>
<fpage>244</fpage>
<lpage>258</lpage>
</element-citation>
</ref>
<ref id="b59">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Rosenblum</surname>
<given-names>LD</given-names>
</name>
<name>
<surname>Schmuckler</surname>
<given-names>MA</given-names>
</name>
<name>
<surname>Johnson</surname>
<given-names>JA</given-names>
</name>
</person-group>
<article-title>The McGurk effect in infants</article-title>
<source>Perception & Psychophysics</source>
<year>1997</year>
<volume>59</volume>
<issue>3</issue>
<fpage>347</fpage>
<lpage>357</lpage>
<pub-id pub-id-type="pmid">9136265</pub-id>
</element-citation>
</ref>
<ref id="b60">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ross</surname>
<given-names>LA</given-names>
</name>
<name>
<surname>Molholm</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Blanco</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Gomez-Ramirez</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Saint-Amour</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Foxe</surname>
<given-names>J</given-names>
</name>
</person-group>
<article-title>The development of multisensory speech perception continues into the late childhood years</article-title>
<source>European Journal of Neuroscience</source>
<year>2011</year>
<volume>33</volume>
<issue>12</issue>
<fpage>2329</fpage>
<lpage>2337</lpage>
<pub-id pub-id-type="pmid">21615556</pub-id>
</element-citation>
</ref>
<ref id="b61">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Skipper</surname>
<given-names>JI</given-names>
</name>
<name>
<surname>Nusbaum</surname>
<given-names>HC</given-names>
</name>
<name>
<surname>Small</surname>
<given-names>SL</given-names>
</name>
</person-group>
<article-title>Listening to talking faces: motor cortical activation during speech perception</article-title>
<source>NeuroImage</source>
<year>2005</year>
<volume>25</volume>
<fpage>76</fpage>
<lpage>89</lpage>
<pub-id pub-id-type="pmid">15734345</pub-id>
</element-citation>
</ref>
<ref id="b62">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Spreng</surname>
<given-names>M</given-names>
</name>
</person-group>
<article-title>Influence of impulsive and fluctuating noise upon physiological excitations and short-time readaptation</article-title>
<source>Scandinavian Audiology</source>
<year>1980</year>
<volume>12</volume>
<issue>Suppl</issue>
<fpage>299</fpage>
<lpage>306</lpage>
</element-citation>
</ref>
<ref id="b63">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Stekelenburg</surname>
<given-names>JJ</given-names>
</name>
<name>
<surname>Vroomen</surname>
<given-names>J</given-names>
</name>
</person-group>
<article-title>Neural correlates of multisensory integration of ecologically valid audio-visual events</article-title>
<source>Journal of Cognitive Neuroscience</source>
<year>2007</year>
<volume>19</volume>
<issue>12</issue>
<fpage>1964</fpage>
<lpage>1973</lpage>
<pub-id pub-id-type="pmid">17892381</pub-id>
</element-citation>
</ref>
<ref id="b64">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Sumby</surname>
<given-names>W</given-names>
</name>
<name>
<surname>Pollack</surname>
<given-names>I</given-names>
</name>
</person-group>
<article-title>Visual contribution to speech intelligibility in noise</article-title>
<source>Journal of the Acoustical Society of America</source>
<year>1954</year>
<volume>26</volume>
<fpage>212</fpage>
<lpage>215</lpage>
</element-citation>
</ref>
<ref id="b65">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Tanabe</surname>
<given-names>HC</given-names>
</name>
<name>
<surname>Honda</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Sadato</surname>
<given-names>N</given-names>
</name>
</person-group>
<article-title>Functionally segregated neural substrates for arbitrary audio-visual paired-association learning</article-title>
<source>Journal of Neuroscience</source>
<year>2005</year>
<volume>25</volume>
<issue>27</issue>
<fpage>6409</fpage>
<lpage>6418</lpage>
<pub-id pub-id-type="pmid">16000632</pub-id>
</element-citation>
</ref>
<ref id="b66">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Teder-Salejarvi</surname>
<given-names>WA</given-names>
</name>
<name>
<surname>McDonald</surname>
<given-names>JJ</given-names>
</name>
<name>
<surname>DiRusso</surname>
<given-names>F</given-names>
</name>
<name>
<surname>Hillyard</surname>
<given-names>SA</given-names>
</name>
</person-group>
<article-title>An analysis of audio-visual crossmodal integration by means of event-related potential (ERP) recordings</article-title>
<source>Cognitive Brain Research</source>
<year>2002</year>
<volume>14</volume>
<fpage>106</fpage>
<lpage>114</lpage>
<pub-id pub-id-type="pmid">12063134</pub-id>
</element-citation>
</ref>
<ref id="b67">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Teinonen</surname>
<given-names>T</given-names>
</name>
<name>
<surname>Aslin</surname>
<given-names>R</given-names>
</name>
<name>
<surname>Alku</surname>
<given-names>P</given-names>
</name>
<name>
<surname>Csibra</surname>
<given-names>G</given-names>
</name>
</person-group>
<article-title>Visual speech contributes to phonetic learning in 6-month-old infants</article-title>
<source>Cognition</source>
<year>2008</year>
<volume>108</volume>
<fpage>850</fpage>
<lpage>855</lpage>
<pub-id pub-id-type="pmid">18590910</pub-id>
</element-citation>
</ref>
<ref id="b68">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Thomas</surname>
<given-names>MSC</given-names>
</name>
<name>
<surname>Annaz</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Ansari</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Serif</surname>
<given-names>G</given-names>
</name>
<name>
<surname>Jarrold</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Karmiloff-Smith</surname>
<given-names>A</given-names>
</name>
</person-group>
<article-title>Using developmental trajectories to understand developmental disorders</article-title>
<source>Journal of Speech, Language, and Hearing Research</source>
<year>2009</year>
<volume>52</volume>
<fpage>336</fpage>
<lpage>358</lpage>
</element-citation>
</ref>
<ref id="b69">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Thornton</surname>
<given-names>ARD</given-names>
</name>
</person-group>
<article-title>Evaluation of a technique to measure latency jitter in event-related potentials</article-title>
<source>Journal of Neuroscience Methods</source>
<year>2008</year>
<volume>168</volume>
<issue>1</issue>
<fpage>248</fpage>
<lpage>255</lpage>
<pub-id pub-id-type="pmid">18006068</pub-id>
</element-citation>
</ref>
<ref id="b70">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Tremblay</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Champoux</surname>
<given-names>F</given-names>
</name>
<name>
<surname>Voss</surname>
<given-names>P</given-names>
</name>
<name>
<surname>Bacon</surname>
<given-names>BA</given-names>
</name>
<name>
<surname>Lepore</surname>
<given-names>F</given-names>
</name>
<name>
<surname>Theoret</surname>
<given-names>H</given-names>
</name>
</person-group>
<article-title>Speech and non-speech audio-visual illusions: a developmental study</article-title>
<source>PLoS ONE</source>
<year>2007</year>
<volume>2</volume>
<issue>8</issue>
<fpage>e742</fpage>
</element-citation>
</ref>
<ref id="b71">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Van Wassenhove</surname>
<given-names>V</given-names>
</name>
<name>
<surname>Grant</surname>
<given-names>KW</given-names>
</name>
<name>
<surname>Poeppel</surname>
<given-names>D</given-names>
</name>
</person-group>
<article-title>Visual speech speeds up the neural processing of auditory speech</article-title>
<source>Proceedings of the National Academy of Sciences, USA</source>
<year>2005</year>
<volume>102</volume>
<issue>4</issue>
<fpage>1181</fpage>
<lpage>1186</lpage>
</element-citation>
</ref>
<ref id="b72">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Viswanathan</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Jansen</surname>
<given-names>BH</given-names>
</name>
</person-group>
<article-title>The effect of stimulus expectancy on dishabituation of auditory evoked potentials</article-title>
<source>International Journal of Psychophysiology</source>
<year>2010</year>
<volume>78</volume>
<fpage>251</fpage>
<lpage>256</lpage>
<pub-id pub-id-type="pmid">20800624</pub-id>
</element-citation>
</ref>
<ref id="b73">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wada</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Shirai</surname>
<given-names>N</given-names>
</name>
<name>
<surname>Midorikawa</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Kanazawa</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Dan</surname>
<given-names>I</given-names>
</name>
<name>
<surname>Yamaguchi</surname>
<given-names>MK</given-names>
</name>
</person-group>
<article-title>Sound enhances detection of visual target during infancy: a study using illusory contours</article-title>
<source>Journal of Experimental Child Psychology</source>
<year>2009</year>
<volume>102</volume>
<issue>3</issue>
<fpage>315</fpage>
<lpage>322</lpage>
<pub-id pub-id-type="pmid">18755476</pub-id>
</element-citation>
</ref>
<ref id="b74">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wightman</surname>
<given-names>F</given-names>
</name>
<name>
<surname>Kistler</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Brungart</surname>
<given-names>D</given-names>
</name>
</person-group>
<article-title>Informational masking of speech in children: auditory–visual integration</article-title>
<source>Journal of the Acoustical Society of America</source>
<year>2006</year>
<volume>119</volume>
<issue>6</issue>
<fpage>3940</fpage>
<lpage>3949</lpage>
<pub-id pub-id-type="pmid">16838537</pub-id>
</element-citation>
</ref>
<ref id="b75">
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wright</surname>
<given-names>TM</given-names>
</name>
<name>
<surname>Pelphrey</surname>
<given-names>KA</given-names>
</name>
<name>
<surname>Allison</surname>
<given-names>T</given-names>
</name>
<name>
<surname>McKeown</surname>
<given-names>MJ</given-names>
</name>
<name>
<surname>McCarthy</surname>
<given-names>G</given-names>
</name>
</person-group>
<article-title>Polysensory interactions along lateral temporal regions evoked by audio-visual speech</article-title>
<source>Cerebral Cortex</source>
<year>2003</year>
<volume>13</volume>
<fpage>1034</fpage>
<lpage>1043</lpage>
<pub-id pub-id-type="pmid">12967920</pub-id>
</element-citation>
</ref>
</ref-list>
</back>
</pmc>
<affiliations>
<list>
<country>
<li>Royaume-Uni</li>
</country>
</list>
<tree>
<noCountry>
<name sortKey="Mercure, Evelyne" sort="Mercure, Evelyne" uniqKey="Mercure E" first="Evelyne" last="Mercure">Evelyne Mercure</name>
</noCountry>
<country name="Royaume-Uni">
<noRegion>
<name sortKey="Knowland, Victoria Cp" sort="Knowland, Victoria Cp" uniqKey="Knowland V" first="Victoria Cp" last="Knowland">Victoria Cp Knowland</name>
</noRegion>
<name sortKey="Dick, Fred" sort="Dick, Fred" uniqKey="Dick F" first="Fred" last="Dick">Fred Dick</name>
<name sortKey="Karmiloff Smith, Annette" sort="Karmiloff Smith, Annette" uniqKey="Karmiloff Smith A" first="Annette" last="Karmiloff-Smith">Annette Karmiloff-Smith</name>
<name sortKey="Knowland, Victoria Cp" sort="Knowland, Victoria Cp" uniqKey="Knowland V" first="Victoria Cp" last="Knowland">Victoria Cp Knowland</name>
<name sortKey="Thomas, Michael Sc" sort="Thomas, Michael Sc" uniqKey="Thomas M" first="Michael Sc" last="Thomas">Michael Sc Thomas</name>
</country>
</tree>
</affiliations>
</record>

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/HapticV1/Data/Ncbi/Merge
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 002A45 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/Ncbi/Merge/biblio.hfd -nk 002A45 | SxmlIndent | more
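
The XML emitted by HfdSelect can also be piped into standard XML tooling instead of SxmlIndent. The line below is only a minimal sketch, assuming xmllint is installed on the system (it is not part of Dilib), and it lists the article titles cited by this record:

# List the cited <article-title> elements of record 002A45 (xmllint assumed available)
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 002A45 | xmllint --xpath '//article-title' -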

To add a link to this page within the Wicri network

{{Explor lien
   |wiki=    Ticri/CIDE
   |area=    HapticV1
   |flux=    Ncbi
   |étape=   Merge
   |type=    RBID
   |clé=     PMC:3995015
   |texte=   Audio-visual speech perception: a developmental ERP investigation
}}

To generate wiki pages

HfdIndexSelect -h $EXPLOR_AREA/Data/Ncbi/Merge/RBID.i   -Sk "pubmed:24176002" \
       | HfdSelect -Kh $EXPLOR_AREA/Data/Ncbi/Merge/biblio.hfd   \
       | NlmPubMed2Wicri -a HapticV1 

Wicri

This area was generated with Dilib version V0.6.23.
Data generation: Mon Jun 13 01:09:46 2016. Site generation: Wed Mar 6 09:54:07 2024