Exploration server on haptic devices

Warning: this site is under development!
Warning: this site is generated automatically from raw corpora.
The information has therefore not been validated.

Visual Feedback of Tongue Movement for Novel Speech Sound Learning

Internal identifier: 003E10 (Ncbi/Merge); previous: 003E09; next: 003E11


Authors: William F. Katz; Sonya Mehta

Source:

RBID: PMC:4652268

Abstract

Pronunciation training studies have yielded important information concerning the processing of audiovisual (AV) information. Second language (L2) learners show increased reliance on bottom-up, multimodal input for speech perception (compared to monolingual individuals). However, little is known about the role of viewing one's own speech articulation processes during speech training. The current study investigated whether real-time, visual feedback for tongue movement can improve a speaker's learning of non-native speech sounds. An interactive 3D tongue visualization system based on electromagnetic articulography (EMA) was used in a speech training experiment. Native speakers of American English produced a novel speech sound (/ɖ/; a voiced, coronal, palatal stop) before, during, and after trials in which they viewed their own speech movements using the 3D model. Talkers' productions were evaluated using kinematic (tongue-tip spatial positioning) and acoustic (burst spectra) measures. The results indicated a rapid gain in accuracy associated with visual feedback training. The findings are discussed with respect to neural models for multimodal speech processing.


URL: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4652268
DOI: 10.3389/fnhum.2015.00612
PubMed: 26635571
PubMed Central: 4652268

Links to previous steps (curation, corpus, ...)


Links to Exploration step

PMC:4652268

The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">Visual Feedback of Tongue Movement for Novel Speech Sound Learning</title>
<author>
<name sortKey="Katz, William F" sort="Katz, William F" uniqKey="Katz W" first="William F." last="Katz">William F. Katz</name>
</author>
<author>
<name sortKey="Mehta, Sonya" sort="Mehta, Sonya" uniqKey="Mehta S" first="Sonya" last="Mehta">Sonya Mehta</name>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PMC</idno>
<idno type="pmid">26635571</idno>
<idno type="pmc">4652268</idno>
<idno type="url">http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4652268</idno>
<idno type="RBID">PMC:4652268</idno>
<idno type="doi">10.3389/fnhum.2015.00612</idno>
<date when="2015">2015</date>
<idno type="wicri:Area/Pmc/Corpus">000177</idno>
<idno type="wicri:Area/Pmc/Curation">000177</idno>
<idno type="wicri:Area/Pmc/Checkpoint">000206</idno>
<idno type="wicri:Area/Ncbi/Merge">003E10</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a" type="main">Visual Feedback of Tongue Movement for Novel Speech Sound Learning</title>
<author>
<name sortKey="Katz, William F" sort="Katz, William F" uniqKey="Katz W" first="William F." last="Katz">William F. Katz</name>
</author>
<author>
<name sortKey="Mehta, Sonya" sort="Mehta, Sonya" uniqKey="Mehta S" first="Sonya" last="Mehta">Sonya Mehta</name>
</author>
</analytic>
<series>
<title level="j">Frontiers in Human Neuroscience</title>
<idno type="eISSN">1662-5161</idno>
<imprint>
<date when="2015">2015</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass></textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">
<p>Pronunciation training studies have yielded important information concerning the processing of audiovisual (AV) information. Second language (L2) learners show increased reliance on bottom-up, multimodal input for speech perception (compared to monolingual individuals). However, little is known about the role of viewing one's own speech articulation processes during speech training. The current study investigated whether real-time, visual feedback for tongue movement can improve a speaker's learning of non-native speech sounds. An interactive 3D tongue visualization system based on electromagnetic articulography (EMA) was used in a speech training experiment. Native speakers of American English produced a novel speech sound (/
<underline>ɖ</underline>
/; a voiced, coronal, palatal stop) before, during, and after trials in which they viewed their own speech movements using the 3D model. Talkers' productions were evaluated using kinematic (tongue-tip spatial positioning) and acoustic (burst spectra) measures. The results indicated a rapid gain in accuracy associated with visual feedback training. The findings are discussed with respect to neural models for multimodal speech processing.</p>
</div>
</front>
<back>
<div1 type="bibliography">
<listBibl>
<biblStruct>
<analytic>
<author>
<name sortKey="Arbib, M A" uniqKey="Arbib M">M. A. Arbib</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Arnold, P" uniqKey="Arnold P">P. Arnold</name>
</author>
<author>
<name sortKey="Hill, F" uniqKey="Hill F">F. Hill</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Badin, P" uniqKey="Badin P">P. Badin</name>
</author>
<author>
<name sortKey="Elisei, F" uniqKey="Elisei F">F. Elisei</name>
</author>
<author>
<name sortKey="Bailly, G" uniqKey="Bailly G">G. Bailly</name>
</author>
<author>
<name sortKey="Tarabalka, Y" uniqKey="Tarabalka Y">Y. Tarabalka</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Badin, P" uniqKey="Badin P">P. Badin</name>
</author>
<author>
<name sortKey="Tarabalka, Y" uniqKey="Tarabalka Y">Y. Tarabalka</name>
</author>
<author>
<name sortKey="Elisei, F" uniqKey="Elisei F">F. Elisei</name>
</author>
<author>
<name sortKey="Bailly, G" uniqKey="Bailly G">G. Bailly</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ballard, K J" uniqKey="Ballard K">K. J. Ballard</name>
</author>
<author>
<name sortKey="Smith, H D" uniqKey="Smith H">H. D. Smith</name>
</author>
<author>
<name sortKey="Paramatmuni, D" uniqKey="Paramatmuni D">D. Paramatmuni</name>
</author>
<author>
<name sortKey="Mccabe, P" uniqKey="Mccabe P">P. McCabe</name>
</author>
<author>
<name sortKey="Theodoros, D G" uniqKey="Theodoros D">D. G. Theodoros</name>
</author>
<author>
<name sortKey="Murdoch, B E" uniqKey="Murdoch B">B. E. Murdoch</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Berlucchi, G" uniqKey="Berlucchi G">G. Berlucchi</name>
</author>
<author>
<name sortKey="Aglioti, S" uniqKey="Aglioti S">S. Aglioti</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bernhardt, B" uniqKey="Bernhardt B">B. Bernhardt</name>
</author>
<author>
<name sortKey="Gick, B" uniqKey="Gick B">B. Gick</name>
</author>
<author>
<name sortKey="Bacsfalvi, P" uniqKey="Bacsfalvi P">P. Bacsfalvi</name>
</author>
<author>
<name sortKey="Adler Bock, M" uniqKey="Adler Bock M">M. Adler-Bock</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bernstein, L E" uniqKey="Bernstein L">L. E. Bernstein</name>
</author>
<author>
<name sortKey="Liebenthal, E" uniqKey="Liebenthal E">E. Liebenthal</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Berry, J J" uniqKey="Berry J">J. J. Berry</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bislick, L P" uniqKey="Bislick L">L. P. Bislick</name>
</author>
<author>
<name sortKey="Weir, P C" uniqKey="Weir P">P. C. Weir</name>
</author>
<author>
<name sortKey="Spencer, K" uniqKey="Spencer K">K. Spencer</name>
</author>
<author>
<name sortKey="Kendall, D" uniqKey="Kendall D">D. Kendall</name>
</author>
<author>
<name sortKey="Yorkston, K M" uniqKey="Yorkston K">K. M. Yorkston</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Boersma, P" uniqKey="Boersma P">P. Boersma</name>
</author>
<author>
<name sortKey="Weenink, D" uniqKey="Weenink D">D. Weenink</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Civier, O" uniqKey="Civier O">O. Civier</name>
</author>
<author>
<name sortKey="Tasko, S M" uniqKey="Tasko S">S. M. Tasko</name>
</author>
<author>
<name sortKey="Guenther, F H" uniqKey="Guenther F">F. H. Guenther</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Curio, G" uniqKey="Curio G">G. Curio</name>
</author>
<author>
<name sortKey="Neuloh, G" uniqKey="Neuloh G">G. Neuloh</name>
</author>
<author>
<name sortKey="Numminen, J" uniqKey="Numminen J">J. Numminen</name>
</author>
<author>
<name sortKey="Jousm Ki, V" uniqKey="Jousm Ki V">V. Jousmäki</name>
</author>
<author>
<name sortKey="Hari, R" uniqKey="Hari R">R. Hari</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="D Ausilio, A" uniqKey="D Ausilio A">A. D'Ausilio</name>
</author>
<author>
<name sortKey="Bartoli, E" uniqKey="Bartoli E">E. Bartoli</name>
</author>
<author>
<name sortKey="Maffongelli, L" uniqKey="Maffongelli L">L. Maffongelli</name>
</author>
<author>
<name sortKey="Berry, J J" uniqKey="Berry J">J. J. Berry</name>
</author>
<author>
<name sortKey="Fadiga, L" uniqKey="Fadiga L">L. Fadiga</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Dagenais, P A" uniqKey="Dagenais P">P. A. Dagenais</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Daprati, E" uniqKey="Daprati E">E. Daprati</name>
</author>
<author>
<name sortKey="Sirigu, A" uniqKey="Sirigu A">A. Sirigu</name>
</author>
<author>
<name sortKey="Nico, D" uniqKey="Nico D">D. Nico</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Dart, S N" uniqKey="Dart S">S. N. Dart</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Dayan, E" uniqKey="Dayan E">E. Dayan</name>
</author>
<author>
<name sortKey="Hamann, J M" uniqKey="Hamann J">J. M. Hamann</name>
</author>
<author>
<name sortKey="Averbeck, B B" uniqKey="Averbeck B">B. B. Averbeck</name>
</author>
<author>
<name sortKey="Cohen, L G" uniqKey="Cohen L">L. G. Cohen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Engelen, L" uniqKey="Engelen L">L. Engelen</name>
</author>
<author>
<name sortKey="Prinz, J F" uniqKey="Prinz J">J. F. Prinz</name>
</author>
<author>
<name sortKey="Bosman, F" uniqKey="Bosman F">F. Bosman</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Engwall, O" uniqKey="Engwall O">O. Engwall</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Engwall, O" uniqKey="Engwall O">O. Engwall</name>
</author>
<author>
<name sortKey="B Lter, O" uniqKey="B Lter O">O. Bälter</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Engwall, O" uniqKey="Engwall O">O. Engwall</name>
</author>
<author>
<name sortKey="B Lter, O" uniqKey="B Lter O">O. Bälter</name>
</author>
<author>
<name sortKey="Oster, A M" uniqKey="Oster A">A.-M. Öster</name>
</author>
<author>
<name sortKey="Kjellstrom, H" uniqKey="Kjellstrom H">H. Kjellström</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Engwall, O" uniqKey="Engwall O">O. Engwall</name>
</author>
<author>
<name sortKey="Wik, P" uniqKey="Wik P">P. Wik</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Erber, N P" uniqKey="Erber N">N. P. Erber</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Fagel, S" uniqKey="Fagel S">S. Fagel</name>
</author>
<author>
<name sortKey="Madany, K" uniqKey="Madany K">K. Madany</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Fant, G" uniqKey="Fant G">G. Fant</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Farrer, C" uniqKey="Farrer C">C. Farrer</name>
</author>
<author>
<name sortKey="Franck, N" uniqKey="Franck N">N. Franck</name>
</author>
<author>
<name sortKey="Georgieff, N" uniqKey="Georgieff N">N. Georgieff</name>
</author>
<author>
<name sortKey="Frith, C D" uniqKey="Frith C">C. D. Frith</name>
</author>
<author>
<name sortKey="Decety, J" uniqKey="Decety J">J. Decety</name>
</author>
<author>
<name sortKey="Jeannerod, M" uniqKey="Jeannerod M">M. Jeannerod</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Felps, D" uniqKey="Felps D">D. Felps</name>
</author>
<author>
<name sortKey="Bortfeld, H" uniqKey="Bortfeld H">H. Bortfeld</name>
</author>
<author>
<name sortKey="Gutierrez Osuna, R" uniqKey="Gutierrez Osuna R">R. Gutierrez-Osuna</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Fridriksson, J" uniqKey="Fridriksson J">J. Fridriksson</name>
</author>
<author>
<name sortKey="Hubbard, H I" uniqKey="Hubbard H">H. I. Hubbard</name>
</author>
<author>
<name sortKey="Hudspeth, S G" uniqKey="Hudspeth S">S. G. Hudspeth</name>
</author>
<author>
<name sortKey="Holland, A L" uniqKey="Holland A">A. L. Holland</name>
</author>
<author>
<name sortKey="Bonilha, L" uniqKey="Bonilha L">L. Bonilha</name>
</author>
<author>
<name sortKey="Fromm, D" uniqKey="Fromm D">D. Fromm</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gentilucci, M" uniqKey="Gentilucci M">M. Gentilucci</name>
</author>
<author>
<name sortKey="Corballis, M C" uniqKey="Corballis M">M. C. Corballis</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Goozee, J V" uniqKey="Goozee J">J. V. Goozee</name>
</author>
<author>
<name sortKey="Murdoch, B E" uniqKey="Murdoch B">B. E. Murdoch</name>
</author>
<author>
<name sortKey="Theodoros, D G" uniqKey="Theodoros D">D. G. Theodoros</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Guenther, F H" uniqKey="Guenther F">F. H. Guenther</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Guenther, F H" uniqKey="Guenther F">F. H. Guenther</name>
</author>
<author>
<name sortKey="Ghosh, S S" uniqKey="Ghosh S">S. S. Ghosh</name>
</author>
<author>
<name sortKey="Tourville, J A" uniqKey="Tourville J">J. A. Tourville</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Guenther, F H" uniqKey="Guenther F">F. H. Guenther</name>
</author>
<author>
<name sortKey="Perkell, J S" uniqKey="Perkell J">J. S. Perkell</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Guenther, F H" uniqKey="Guenther F">F. H. Guenther</name>
</author>
<author>
<name sortKey="Vladusich, T" uniqKey="Vladusich T">T. Vladusich</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gunji, A" uniqKey="Gunji A">A. Gunji</name>
</author>
<author>
<name sortKey="Hoshiyama, M" uniqKey="Hoshiyama M">M. Hoshiyama</name>
</author>
<author>
<name sortKey="Kakigi, R" uniqKey="Kakigi R">R. Kakigi</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Haggard, P" uniqKey="Haggard P">P. Haggard</name>
</author>
<author>
<name sortKey="De Boer, L" uniqKey="De Boer L">L. de Boer</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hamann, S" uniqKey="Hamann S">S. Hamann</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hardcastle, W J" uniqKey="Hardcastle W">W. J. Hardcastle</name>
</author>
<author>
<name sortKey="Gibbon, F E" uniqKey="Gibbon F">F. E. Gibbon</name>
</author>
<author>
<name sortKey="Jones, W" uniqKey="Jones W">W. Jones</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hartelius, L" uniqKey="Hartelius L">L. Hartelius</name>
</author>
<author>
<name sortKey="Theodoros, D" uniqKey="Theodoros D">D. Theodoros</name>
</author>
<author>
<name sortKey="Murdoch, B" uniqKey="Murdoch B">B. Murdoch</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Heinks Maldonado, T H" uniqKey="Heinks Maldonado T">T. H. Heinks-Maldonado</name>
</author>
<author>
<name sortKey="Nagarajan, S S" uniqKey="Nagarajan S">S. S. Nagarajan</name>
</author>
<author>
<name sortKey="Houde, J F" uniqKey="Houde J">J. F. Houde</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hickok, G" uniqKey="Hickok G">G. Hickok</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hickok, G" uniqKey="Hickok G">G. Hickok</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hodges, N J" uniqKey="Hodges N">N. J. Hodges</name>
</author>
<author>
<name sortKey="Franks, I M" uniqKey="Franks I">I. M. Franks</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Houde, J F" uniqKey="Houde J">J. F. Houde</name>
</author>
<author>
<name sortKey="Nagarajan, S S" uniqKey="Nagarajan S">S. S. Nagarajan</name>
</author>
<author>
<name sortKey="Sekihara, K" uniqKey="Sekihara K">K. Sekihara</name>
</author>
<author>
<name sortKey="Merzenich, M M" uniqKey="Merzenich M">M. M. Merzenich</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hueber, T" uniqKey="Hueber T">T. Hueber</name>
</author>
<author>
<name sortKey="Ben Youssef, A" uniqKey="Ben Youssef A">A. Ben-Youssef</name>
</author>
<author>
<name sortKey="Badin, P" uniqKey="Badin P">P. Badin</name>
</author>
<author>
<name sortKey="Bailly, G" uniqKey="Bailly G">G. Bailly</name>
</author>
<author>
<name sortKey="Elisei, F" uniqKey="Elisei F">F. Elisei</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Jacks, A" uniqKey="Jacks A">A. Jacks</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Jakobson, R" uniqKey="Jakobson R">R. Jakobson</name>
</author>
<author>
<name sortKey="Fant, G" uniqKey="Fant G">G. Fant</name>
</author>
<author>
<name sortKey="Halle, M" uniqKey="Halle M">M. Halle</name>
</author>
<author>
<name sortKey="Jakobson, R J" uniqKey="Jakobson R">R. J. Jakobson</name>
</author>
<author>
<name sortKey="Fant, R G" uniqKey="Fant R">R. G. Fant</name>
</author>
<author>
<name sortKey="Halle, M" uniqKey="Halle M">M. Halle</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Katz, W F" uniqKey="Katz W">W. F. Katz</name>
</author>
<author>
<name sortKey="Campbell, T F" uniqKey="Campbell T">T. F. Campbell</name>
</author>
<author>
<name sortKey="Wang, J" uniqKey="Wang J">J. Wang</name>
</author>
<author>
<name sortKey="Farrar, E" uniqKey="Farrar E">E. Farrar</name>
</author>
<author>
<name sortKey="Eubanks, J C" uniqKey="Eubanks J">J. C. Eubanks</name>
</author>
<author>
<name sortKey="Balasubramanian, A" uniqKey="Balasubramanian A">A. Balasubramanian</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Katz, W F" uniqKey="Katz W">W. F. Katz</name>
</author>
<author>
<name sortKey="Mcneil, M R" uniqKey="Mcneil M">M. R. McNeil</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Keating, P" uniqKey="Keating P">P. Keating</name>
</author>
<author>
<name sortKey="Lahiri, A" uniqKey="Lahiri A">A. Lahiri</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kohler, E" uniqKey="Kohler E">E. Kohler</name>
</author>
<author>
<name sortKey="Keysers, C" uniqKey="Keysers C">C. Keysers</name>
</author>
<author>
<name sortKey="Umilta, M A" uniqKey="Umilta M">M. A. Umiltá</name>
</author>
<author>
<name sortKey="Fogassi, L" uniqKey="Fogassi L">L. Fogassi</name>
</author>
<author>
<name sortKey="Gallese, V" uniqKey="Gallese V">V. Gallese</name>
</author>
<author>
<name sortKey="Rizzolatti, G" uniqKey="Rizzolatti G">G. Rizzolatti</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kroger, B J" uniqKey="Kroger B">B. J. Kröger</name>
</author>
<author>
<name sortKey="Birkholz, P" uniqKey="Birkholz P">P. Birkholz</name>
</author>
<author>
<name sortKey="Hoffmann, R" uniqKey="Hoffmann R">R. Hoffmann</name>
</author>
<author>
<name sortKey="Meng, H" uniqKey="Meng H">H. Meng</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kroger, B J" uniqKey="Kroger B">B. J. Kröger</name>
</author>
<author>
<name sortKey="Kannampuzha, J" uniqKey="Kannampuzha J">J. Kannampuzha</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kroger, B J" uniqKey="Kroger B">B. J. Kröger</name>
</author>
<author>
<name sortKey="Kannampuzha, J" uniqKey="Kannampuzha J">J. Kannampuzha</name>
</author>
<author>
<name sortKey="Neuschaefer Rube, C" uniqKey="Neuschaefer Rube C">C. Neuschaefer-Rube</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kroos, C" uniqKey="Kroos C">C. Kroos</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ladefoged, P" uniqKey="Ladefoged P">P. Ladefoged</name>
</author>
<author>
<name sortKey="Maddieson, I" uniqKey="Maddieson I">I. Maddieson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Levitt, J S" uniqKey="Levitt J">J. S. Levitt</name>
</author>
<author>
<name sortKey="Katz, W F" uniqKey="Katz W">W. F. Katz</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Levitt, J S" uniqKey="Levitt J">J. S. Levitt</name>
</author>
<author>
<name sortKey="Katz, W F" uniqKey="Katz W">W. F. Katz</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Liu, X" uniqKey="Liu X">X. Liu</name>
</author>
<author>
<name sortKey="Hairston, J" uniqKey="Hairston J">J. Hairston</name>
</author>
<author>
<name sortKey="Schrier, M" uniqKey="Schrier M">M. Schrier</name>
</author>
<author>
<name sortKey="Fan, J" uniqKey="Fan J">J. Fan</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Liu, Y" uniqKey="Liu Y">Y. Liu</name>
</author>
<author>
<name sortKey="Massaro, D W" uniqKey="Massaro D">D. W. Massaro</name>
</author>
<author>
<name sortKey="Chen, T H" uniqKey="Chen T">T. H. Chen</name>
</author>
<author>
<name sortKey="Chan, D" uniqKey="Chan D">D. Chan</name>
</author>
<author>
<name sortKey="Perfetti, C" uniqKey="Perfetti C">C. Perfetti</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Maas, E" uniqKey="Maas E">E. Maas</name>
</author>
<author>
<name sortKey="Mailend, M L" uniqKey="Mailend M">M.-L. Mailend</name>
</author>
<author>
<name sortKey="Guenther, F H" uniqKey="Guenther F">F. H. Guenther</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Maas, E" uniqKey="Maas E">E. Maas</name>
</author>
<author>
<name sortKey="Robin, D A" uniqKey="Robin D">D. A. Robin</name>
</author>
<author>
<name sortKey="Austermann Hula, S N" uniqKey="Austermann Hula S">S. N. Austermann Hula</name>
</author>
<author>
<name sortKey="Freedman, S E" uniqKey="Freedman S">S. E. Freedman</name>
</author>
<author>
<name sortKey="Wulf, G" uniqKey="Wulf G">G. Wulf</name>
</author>
<author>
<name sortKey="Ballard, K J" uniqKey="Ballard K">K. J. Ballard</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Marian, V" uniqKey="Marian V">V. Marian</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Massaro, D W" uniqKey="Massaro D">D. W. Massaro</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Massaro, D W" uniqKey="Massaro D">D. W. Massaro</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Massaro, D W" uniqKey="Massaro D">D. W. Massaro</name>
</author>
<author>
<name sortKey="Bigler, S" uniqKey="Bigler S">S. Bigler</name>
</author>
<author>
<name sortKey="Chen, T H" uniqKey="Chen T">T. H. Chen</name>
</author>
<author>
<name sortKey="Perlman, M" uniqKey="Perlman M">M. Perlman</name>
</author>
<author>
<name sortKey="Ouni, S" uniqKey="Ouni S">S. Ouni</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Massaro, D W" uniqKey="Massaro D">D. W. Massaro</name>
</author>
<author>
<name sortKey="Cohen, M M" uniqKey="Cohen M">M. M. Cohen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Massaro, D W" uniqKey="Massaro D">D. W. Massaro</name>
</author>
<author>
<name sortKey="Light, J" uniqKey="Light J">J. Light</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Massaro, D W" uniqKey="Massaro D">D. W. Massaro</name>
</author>
<author>
<name sortKey="Liu, Y" uniqKey="Liu Y">Y. Liu</name>
</author>
<author>
<name sortKey="Chen, T H" uniqKey="Chen T">T. H. Chen</name>
</author>
<author>
<name sortKey="Perfetti, C" uniqKey="Perfetti C">C. Perfetti</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Max, L" uniqKey="Max L">L. Max</name>
</author>
<author>
<name sortKey="Guenther, F H" uniqKey="Guenther F">F. H. Guenther</name>
</author>
<author>
<name sortKey="Gracco, V L" uniqKey="Gracco V">V. L. Gracco</name>
</author>
<author>
<name sortKey="Ghosh, S S" uniqKey="Ghosh S">S. S. Ghosh</name>
</author>
<author>
<name sortKey="Wallace, M E" uniqKey="Wallace M">M. E. Wallace</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mcgurk, H" uniqKey="Mcgurk H">H. McGurk</name>
</author>
<author>
<name sortKey="Macdonald, J" uniqKey="Macdonald J">J. MacDonald</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mehta, S" uniqKey="Mehta S">S. Mehta</name>
</author>
<author>
<name sortKey="Katz, W F" uniqKey="Katz W">W. F. Katz</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mochida, T" uniqKey="Mochida T">T. Mochida</name>
</author>
<author>
<name sortKey="Kimura, T" uniqKey="Kimura T">T. Kimura</name>
</author>
<author>
<name sortKey="Hiroya, S" uniqKey="Hiroya S">S. Hiroya</name>
</author>
<author>
<name sortKey="Kitagawa, N" uniqKey="Kitagawa N">N. Kitagawa</name>
</author>
<author>
<name sortKey="Gomi, H" uniqKey="Gomi H">H. Gomi</name>
</author>
<author>
<name sortKey="Kondo, T" uniqKey="Kondo T">T. Kondo</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mottonen, R" uniqKey="Mottonen R">R. Möttönen</name>
</author>
<author>
<name sortKey="Schurmann, M" uniqKey="Schurmann M">M. Schürmann</name>
</author>
<author>
<name sortKey="Sams, M" uniqKey="Sams M">M. Sams</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Navarra, J" uniqKey="Navarra J">J. Navarra</name>
</author>
<author>
<name sortKey="Soto Faraco, S" uniqKey="Soto Faraco S">S. Soto-Faraco</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Nordberg, A" uniqKey="Nordberg A">A. Nordberg</name>
</author>
<author>
<name sortKey="Goran, C" uniqKey="Goran C">C. Göran</name>
</author>
<author>
<name sortKey="Lohmander, A" uniqKey="Lohmander A">A. Lohmander</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Numbers, M E" uniqKey="Numbers M">M. E. Numbers</name>
</author>
<author>
<name sortKey="Hudgins, C V" uniqKey="Hudgins C">C. V. Hudgins</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="O Neill, J J" uniqKey="O Neill J">J. J. O'Neill</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ojanen, V" uniqKey="Ojanen V">V. Ojanen</name>
</author>
<author>
<name sortKey="Mottonen, R" uniqKey="Mottonen R">R. Möttönen</name>
</author>
<author>
<name sortKey="Pekkola, J" uniqKey="Pekkola J">J. Pekkola</name>
</author>
<author>
<name sortKey="J Skel Inen, I P" uniqKey="J Skel Inen I">I. P. Jääskeläinen</name>
</author>
<author>
<name sortKey="Joensuu, R" uniqKey="Joensuu R">R. Joensuu</name>
</author>
<author>
<name sortKey="Autti, T" uniqKey="Autti T">T. Autti</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ouni, S" uniqKey="Ouni S">S. Ouni</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Pekkola, J" uniqKey="Pekkola J">J. Pekkola</name>
</author>
<author>
<name sortKey="Ojanen, V" uniqKey="Ojanen V">V. Ojanen</name>
</author>
<author>
<name sortKey="Autti, T" uniqKey="Autti T">T. Autti</name>
</author>
<author>
<name sortKey="J Skel Inen, I P" uniqKey="J Skel Inen I">I. P. Jääskeläinen</name>
</author>
<author>
<name sortKey="Mottonen, R" uniqKey="Mottonen R">R. Möttönen</name>
</author>
<author>
<name sortKey="Sams, M" uniqKey="Sams M">M. Sams</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Pochon, J B" uniqKey="Pochon J">J. B. Pochon</name>
</author>
<author>
<name sortKey="Levy, R" uniqKey="Levy R">R. Levy</name>
</author>
<author>
<name sortKey="Fossati, P" uniqKey="Fossati P">P. Fossati</name>
</author>
<author>
<name sortKey="Lehericy, S" uniqKey="Lehericy S">S. Lehericy</name>
</author>
<author>
<name sortKey="Poline, J B" uniqKey="Poline J">J. B. Poline</name>
</author>
<author>
<name sortKey="Pillon, B" uniqKey="Pillon B">B. Pillon</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Preston, J L" uniqKey="Preston J">J. L. Preston</name>
</author>
<author>
<name sortKey="Leaman, M" uniqKey="Leaman M">M. Leaman</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Preston, J L" uniqKey="Preston J">J. L. Preston</name>
</author>
<author>
<name sortKey="Mccabe, P" uniqKey="Mccabe P">P. McCabe</name>
</author>
<author>
<name sortKey="Rivera Campos, A" uniqKey="Rivera Campos A">A. Rivera-Campos</name>
</author>
<author>
<name sortKey="Whittle, J L" uniqKey="Whittle J">J. L. Whittle</name>
</author>
<author>
<name sortKey="Landry, E" uniqKey="Landry E">E. Landry</name>
</author>
<author>
<name sortKey="Maas, E" uniqKey="Maas E">E. Maas</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Pulvermuller, F" uniqKey="Pulvermuller F">F. Pulvermüller</name>
</author>
<author>
<name sortKey="Fadiga, L" uniqKey="Fadiga L">L. Fadiga</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Pulvermuller, F" uniqKey="Pulvermuller F">F. Pulvermüller</name>
</author>
<author>
<name sortKey="Huss, M" uniqKey="Huss M">M. Huss</name>
</author>
<author>
<name sortKey="Kherif, F" uniqKey="Kherif F">F. Kherif</name>
</author>
<author>
<name sortKey="Moscoso Del Prado Martin, F" uniqKey="Moscoso Del Prado Martin F">F. Moscoso del Prado Martin</name>
</author>
<author>
<name sortKey="Hauk, O" uniqKey="Hauk O">O. Hauk</name>
</author>
<author>
<name sortKey="Shtyrov, Y" uniqKey="Shtyrov Y">Y. Shtyrov</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Reetz, H" uniqKey="Reetz H">H. Reetz</name>
</author>
<author>
<name sortKey="Jongman, A" uniqKey="Jongman A">A. Jongman</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Reisberg, D" uniqKey="Reisberg D">D. Reisberg</name>
</author>
<author>
<name sortKey="Mclean, J" uniqKey="Mclean J">J. McLean</name>
</author>
<author>
<name sortKey="Goldfield, A" uniqKey="Goldfield A">A. Goldfield</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Rizzolatti, G" uniqKey="Rizzolatti G">G. Rizzolatti</name>
</author>
<author>
<name sortKey="Arbib, M A" uniqKey="Arbib M">M. A. Arbib</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Rizzolatti, G" uniqKey="Rizzolatti G">G. Rizzolatti</name>
</author>
<author>
<name sortKey="Cattaneo, L" uniqKey="Cattaneo L">L. Cattaneo</name>
</author>
<author>
<name sortKey="Fabbri Destro, M" uniqKey="Fabbri Destro M">M. Fabbri-Destro</name>
</author>
<author>
<name sortKey="Rozzi, S" uniqKey="Rozzi S">S. Rozzi</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Rizzolatti, G" uniqKey="Rizzolatti G">G. Rizzolatti</name>
</author>
<author>
<name sortKey="Craighero, L" uniqKey="Craighero L">L. Craighero</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Sams, M" uniqKey="Sams M">M. Sams</name>
</author>
<author>
<name sortKey="Mottonen, R" uniqKey="Mottonen R">R. Möttönen</name>
</author>
<author>
<name sortKey="Sihvonen, T" uniqKey="Sihvonen T">T. Sihvonen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Sato, M" uniqKey="Sato M">M. Sato</name>
</author>
<author>
<name sortKey="Troille, E" uniqKey="Troille E">E. Troille</name>
</author>
<author>
<name sortKey="Menard, L" uniqKey="Menard L">L. Ménard</name>
</author>
<author>
<name sortKey="Cathiard, M A" uniqKey="Cathiard M">M.-A. Cathiard</name>
</author>
<author>
<name sortKey="Gracco, V" uniqKey="Gracco V">V. Gracco</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Schmidt, R" uniqKey="Schmidt R">R. Schmidt</name>
</author>
<author>
<name sortKey="Lee, T" uniqKey="Lee T">T. Lee</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Scruggs, T E" uniqKey="Scruggs T">T. E. Scruggs</name>
</author>
<author>
<name sortKey="Mastropieri, M A" uniqKey="Mastropieri M">M. A. Mastropieri</name>
</author>
<author>
<name sortKey="Casto, G" uniqKey="Casto G">G. Casto</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Scruggs, T E" uniqKey="Scruggs T">T. E. Scruggs</name>
</author>
<author>
<name sortKey="Mastropieri, M A" uniqKey="Mastropieri M">M. A. Mastropieri</name>
</author>
<author>
<name sortKey="Cook, S B" uniqKey="Cook S">S. B. Cook</name>
</author>
<author>
<name sortKey="Escobar, C" uniqKey="Escobar C">C. Escobar</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Shirahige, C" uniqKey="Shirahige C">C. Shirahige</name>
</author>
<author>
<name sortKey="Oki, K" uniqKey="Oki K">K. Oki</name>
</author>
<author>
<name sortKey="Morimoto, Y" uniqKey="Morimoto Y">Y. Morimoto</name>
</author>
<author>
<name sortKey="Oisaka, N" uniqKey="Oisaka N">N. Oisaka</name>
</author>
<author>
<name sortKey="Minagi, S" uniqKey="Minagi S">S. Minagi</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Sigrist, R" uniqKey="Sigrist R">R. Sigrist</name>
</author>
<author>
<name sortKey="Rauter, G" uniqKey="Rauter G">G. Rauter</name>
</author>
<author>
<name sortKey="Riener, R" uniqKey="Riener R">R. Riener</name>
</author>
<author>
<name sortKey="Wolf, P" uniqKey="Wolf P">P. Wolf</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Skipper, J I" uniqKey="Skipper J">J. I. Skipper</name>
</author>
<author>
<name sortKey="Goldin Meadow, S" uniqKey="Goldin Meadow S">S. Goldin-Meadow</name>
</author>
<author>
<name sortKey="Nusbaum, H C" uniqKey="Nusbaum H">H. C. Nusbaum</name>
</author>
<author>
<name sortKey="Small, S L" uniqKey="Small S">S. L. Small</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Skipper, J I" uniqKey="Skipper J">J. I. Skipper</name>
</author>
<author>
<name sortKey="Nusbaum, H C" uniqKey="Nusbaum H">H. C. Nusbaum</name>
</author>
<author>
<name sortKey="Small, S L" uniqKey="Small S">S. L. Small</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Skipper, J I" uniqKey="Skipper J">J. I. Skipper</name>
</author>
<author>
<name sortKey="Nusbaum, H C" uniqKey="Nusbaum H">H. C. Nusbaum</name>
</author>
<author>
<name sortKey="Small, S L" uniqKey="Small S">S. L. Small</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Skipper, J I" uniqKey="Skipper J">J. I. Skipper</name>
</author>
<author>
<name sortKey="Van Wassenhove, V" uniqKey="Van Wassenhove V">V. van Wassenhove</name>
</author>
<author>
<name sortKey="Nusbaum, H C" uniqKey="Nusbaum H">H. C. Nusbaum</name>
</author>
<author>
<name sortKey="Small, S L" uniqKey="Small S">S. L. Small</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Stella, M" uniqKey="Stella M">M. Stella</name>
</author>
<author>
<name sortKey="Stella, A" uniqKey="Stella A">A. Stella</name>
</author>
<author>
<name sortKey="Sigona, F" uniqKey="Sigona F">F. Sigona</name>
</author>
<author>
<name sortKey="Bernardini, P" uniqKey="Bernardini P">P. Bernardini</name>
</author>
<author>
<name sortKey="Grimaldi, M" uniqKey="Grimaldi M">M. Grimaldi</name>
</author>
<author>
<name sortKey="Gili Fivela, B" uniqKey="Gili Fivela B">B. Gili Fivela</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Stevens, K N" uniqKey="Stevens K">K. N. Stevens</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Stevens, K N" uniqKey="Stevens K">K. N. Stevens</name>
</author>
<author>
<name sortKey="Blumstein, S E" uniqKey="Blumstein S">S. E. Blumstein</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Stevens, K N" uniqKey="Stevens K">K. N. Stevens</name>
</author>
<author>
<name sortKey="Blumstein, S E" uniqKey="Blumstein S">S. E. Blumstein</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Suemitsu, A" uniqKey="Suemitsu A">A. Suemitsu</name>
</author>
<author>
<name sortKey="Ito, T" uniqKey="Ito T">T. Ito</name>
</author>
<author>
<name sortKey="Tiede, M" uniqKey="Tiede M">M. Tiede</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Sumby, W H" uniqKey="Sumby W">W. H. Sumby</name>
</author>
<author>
<name sortKey="Pollack, I" uniqKey="Pollack I">I. Pollack</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Summerfield, Q" uniqKey="Summerfield Q">Q. Summerfield</name>
</author>
<author>
<name sortKey="Mcgrath, M" uniqKey="Mcgrath M">M. McGrath</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Swinnen, S P" uniqKey="Swinnen S">S. P. Swinnen</name>
</author>
<author>
<name sortKey="Walter, C B" uniqKey="Walter C">C. B. Walter</name>
</author>
<author>
<name sortKey="Lee, T D" uniqKey="Lee T">T. D. Lee</name>
</author>
<author>
<name sortKey="Serrien, D J" uniqKey="Serrien D">D. J. Serrien</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Terband, H" uniqKey="Terband H">H. Terband</name>
</author>
<author>
<name sortKey="Maassen, B" uniqKey="Maassen B">B. Maassen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Terband, H" uniqKey="Terband H">H. Terband</name>
</author>
<author>
<name sortKey="Maassen, B" uniqKey="Maassen B">B. Maassen</name>
</author>
<author>
<name sortKey="Guenther, F H" uniqKey="Guenther F">F. H. Guenther</name>
</author>
<author>
<name sortKey="Brumberg, J" uniqKey="Brumberg J">J. Brumberg</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Terband, H" uniqKey="Terband H">H. Terband</name>
</author>
<author>
<name sortKey="Maassen, B" uniqKey="Maassen B">B. Maassen</name>
</author>
<author>
<name sortKey="Guenther, F H" uniqKey="Guenther F">F. H. Guenther</name>
</author>
<author>
<name sortKey="Brumberg, J" uniqKey="Brumberg J">J. Brumberg</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Terband, H" uniqKey="Terband H">H. Terband</name>
</author>
<author>
<name sortKey="Van Brenk, F" uniqKey="Van Brenk F">F. van Brenk</name>
</author>
<author>
<name sortKey="Van Doornik Van Der Zee, A" uniqKey="Van Doornik Van Der Zee A">A. van Doornik-van der Zee</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Tian, X" uniqKey="Tian X">X. Tian</name>
</author>
<author>
<name sortKey="Poeppel, D" uniqKey="Poeppel D">D. Poeppel</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Uddin, L Q" uniqKey="Uddin L">L. Q. Uddin</name>
</author>
<author>
<name sortKey="Molnar Szakacs, I" uniqKey="Molnar Szakacs I">I. Molnar-Szakacs</name>
</author>
<author>
<name sortKey="Zaidel, E" uniqKey="Zaidel E">E. Zaidel</name>
</author>
<author>
<name sortKey="Iacoboni, M" uniqKey="Iacoboni M">M. Iacoboni</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Wik, P" uniqKey="Wik P">P. Wik</name>
</author>
<author>
<name sortKey="Engwall, O" uniqKey="Engwall O">O. Engwall</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Wilson, S M" uniqKey="Wilson S">S. M. Wilson</name>
</author>
<author>
<name sortKey="Iacoboni, M" uniqKey="Iacoboni M">M. Iacoboni</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Wilson, S" uniqKey="Wilson S">S. Wilson</name>
</author>
<author>
<name sortKey="Saygin, A P" uniqKey="Saygin A">A. P. Saygin</name>
</author>
<author>
<name sortKey="Sereno, M I" uniqKey="Sereno M">M. I. Sereno</name>
</author>
<author>
<name sortKey="Iacoboni, M" uniqKey="Iacoboni M">M. Iacoboni</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Yano, J" uniqKey="Yano J">J. Yano</name>
</author>
<author>
<name sortKey="Shirahige, C" uniqKey="Shirahige C">C. Shirahige</name>
</author>
<author>
<name sortKey="Oki, K" uniqKey="Oki K">K. Oki</name>
</author>
<author>
<name sortKey="Oisaka, N" uniqKey="Oisaka N">N. Oisaka</name>
</author>
<author>
<name sortKey="Kumakura, I" uniqKey="Kumakura I">I. Kumakura</name>
</author>
<author>
<name sortKey="Tsubahara, A" uniqKey="Tsubahara A">A. Tsubahara</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Zaehle, T" uniqKey="Zaehle T">T. Zaehle</name>
</author>
<author>
<name sortKey="Geiser, E" uniqKey="Geiser E">E. Geiser</name>
</author>
<author>
<name sortKey="Alter, K" uniqKey="Alter K">K. Alter</name>
</author>
<author>
<name sortKey="Jancke, L" uniqKey="Jancke L">L. Jancke</name>
</author>
<author>
<name sortKey="Meyer, M" uniqKey="Meyer M">M. Meyer</name>
</author>
</analytic>
</biblStruct>
</listBibl>
</div1>
</back>
</TEI>
<pmc article-type="research-article">
<pmc-dir>properties open_access</pmc-dir>
<front>
<journal-meta>
<journal-id journal-id-type="nlm-ta">Front Hum Neurosci</journal-id>
<journal-id journal-id-type="iso-abbrev">Front Hum Neurosci</journal-id>
<journal-id journal-id-type="publisher-id">Front. Hum. Neurosci.</journal-id>
<journal-title-group>
<journal-title>Frontiers in Human Neuroscience</journal-title>
</journal-title-group>
<issn pub-type="epub">1662-5161</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="pmid">26635571</article-id>
<article-id pub-id-type="pmc">4652268</article-id>
<article-id pub-id-type="doi">10.3389/fnhum.2015.00612</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Neuroscience</subject>
<subj-group>
<subject>Original Research</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Visual Feedback of Tongue Movement for Novel Speech Sound Learning</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name>
<surname>Katz</surname>
<given-names>William F.</given-names>
</name>
<xref ref-type="author-notes" rid="fn001">
<sup>*</sup>
</xref>
<uri xlink:type="simple" xlink:href="http://loop.frontiersin.org/people/186649/overview"></uri>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Mehta</surname>
<given-names>Sonya</given-names>
</name>
</contrib>
</contrib-group>
<aff>
<institution>Speech Production Lab, Callier Center for Communication Disorders, School of Behavioral and Brain Sciences, The University of Texas at Dallas</institution>
<country>Dallas, TX, USA</country>
</aff>
<author-notes>
<fn fn-type="edited-by">
<p>Edited by: Marcelo L. Berthier, University of Malaga, Spain</p>
</fn>
<fn fn-type="edited-by">
<p>Reviewed by: Peter Sörös, University of Western Ontario, Canada; Caroline A. Niziolek, Boston University, USA</p>
</fn>
<corresp id="fn001">*Correspondence: William F. Katz
<email xlink:type="simple">wkatz@utdallas.edu</email>
</corresp>
</author-notes>
<pub-date pub-type="epub">
<day>19</day>
<month>11</month>
<year>2015</year>
</pub-date>
<pub-date pub-type="collection">
<year>2015</year>
</pub-date>
<volume>9</volume>
<elocation-id>612</elocation-id>
<history>
<date date-type="received">
<day>13</day>
<month>6</month>
<year>2015</year>
</date>
<date date-type="accepted">
<day>26</day>
<month>10</month>
<year>2015</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright © 2015 Katz and Mehta.</copyright-statement>
<copyright-year>2015</copyright-year>
<copyright-holder>Katz and Mehta</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/">
<license-p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</license-p>
</license>
</permissions>
<abstract>
<p>Pronunciation training studies have yielded important information concerning the processing of audiovisual (AV) information. Second language (L2) learners show increased reliance on bottom-up, multimodal input for speech perception (compared to monolingual individuals). However, little is known about the role of viewing one's own speech articulation processes during speech training. The current study investigated whether real-time, visual feedback for tongue movement can improve a speaker's learning of non-native speech sounds. An interactive 3D tongue visualization system based on electromagnetic articulography (EMA) was used in a speech training experiment. Native speakers of American English produced a novel speech sound (/
<underline>ɖ</underline>
/; a voiced, coronal, palatal stop) before, during, and after trials in which they viewed their own speech movements using the 3D model. Talkers' productions were evaluated using kinematic (tongue-tip spatial positioning) and acoustic (burst spectra) measures. The results indicated a rapid gain in accuracy associated with visual feedback training. The findings are discussed with respect to neural models for multimodal speech processing.</p>
</abstract>
<kwd-group>
<kwd>speech production</kwd>
<kwd>second language learning</kwd>
<kwd>visual feedback</kwd>
<kwd>audiovisual integration</kwd>
<kwd>electromagnetic articulography</kwd>
<kwd>articulation therapy</kwd>
</kwd-group>
<funding-group>
<award-group>
<funding-source id="cn001">National Institutes of Health
<named-content content-type="fundref-id">10.13039/100000002</named-content>
</funding-source>
<award-id rid="cn001">R43 DC013467</award-id>
</award-group>
</funding-group>
<counts>
<fig-count count="5"></fig-count>
<table-count count="0"></table-count>
<equation-count count="0"></equation-count>
<ref-count count="122"></ref-count>
<page-count count="13"></page-count>
<word-count count="10688"></word-count>
</counts>
</article-meta>
</front>
<body>
<sec sec-type="intro" id="s1">
<title>Introduction</title>
<p>Natural conversation is a multimodal process, where the visual information contained in a speaker's face plays an important role in decoding the speech signal. Integration of the auditory and visual modalities has long been known to be more advantageous to speech perception than either input alone. Early studies of lip-reading found that individuals with hearing loss could more accurately recognize familiar utterances when provided with both auditory and visual cues compared to either modality on its own (Numbers and Hudgins,
<xref rid="B78" ref-type="bibr">1948</xref>
; Erber,
<xref rid="B24" ref-type="bibr">1975</xref>
). Research on healthy hearing populations has also shown that audiovisual integration enhances comprehension of spoken stimuli, particularly in noisy environments or situations where the speaker has a strong foreign accent (O'Neill,
<xref rid="B79" ref-type="bibr">1954</xref>
; Sumby and Pollack,
<xref rid="B109" ref-type="bibr">1954</xref>
; Erber,
<xref rid="B24" ref-type="bibr">1975</xref>
; Reisberg et al.,
<xref rid="B89" ref-type="bibr">1987</xref>
). Even under optimal listening conditions, observing a talker's face improves comprehension of complex utterances, suggesting that visual correlates of speech movement are a central component of processing speech sounds (Reisberg et al.,
<xref rid="B89" ref-type="bibr">1987</xref>
; Arnold and Hill,
<xref rid="B2" ref-type="bibr">2001</xref>
).</p>
<p>Studies investigating how listeners process conflicting audio and visual signals also support a critical role of the visual system during speech perception (McGurk and MacDonald,
<xref rid="B72" ref-type="bibr">1976</xref>
; Massaro,
<xref rid="B65" ref-type="bibr">1984</xref>
; Summerfield and McGrath,
<xref rid="B110" ref-type="bibr">1984</xref>
). For example, listeners presented with the auditory signal for “ba” concurrently with the visual signal for “ga” typically report a blended percept, the well-known “McGurk effect.” A recent study by Sams et al. (
<xref rid="B93" ref-type="bibr">2005</xref>
) demonstrated that the McGurk effect occurs even if the source of the visual input is the listener's
<italic>own</italic>
face. In this study, subjects wore headphones and silently articulated a “pa” or “ka” while observing their productions in a mirror as a congruent or incongruent audio stimulus was simultaneously presented. In addition to replicating the basic McGurk (blended) effect, researchers found that simultaneous silent articulation alone moderately improved auditory comprehension, suggesting that knowledge from one's own motor experience in speech production is also exploited during speech perception. Other cross-modal studies support this view. For instance, silently articulating a syllable in synchrony with the presentation of a concordant auditory and/or visually ambiguous speech stimulus has been found to improve syllable identification, with concurrent mouthing further speeding the perceptual processing of a concordant stimulus (Sato et al.,
<xref rid="B94" ref-type="bibr">2013</xref>
; also see Mochida et al.,
<xref rid="B74" ref-type="bibr">2013</xref>
; D'Ausilio et al.,
<xref rid="B14" ref-type="bibr">2014</xref>
). Taken together, these studies indicate that listeners benefit from multimodal speech information during the perception process.</p>
<p>Audiovisual (AV) information also plays an important role in acquiring novel speech sounds, according to studies of second language (L2) learning. Research has shown that speech comprehension by non-native speakers is influenced by the presence/absence of visual input (see Marian,
<xref rid="B64" ref-type="bibr">2009</xref>
, for review). For instance, Spanish-speakers exposed to Catalan can better discriminate the non-native tense-lax vowel pair /e/ and /ε/ when visual information is added (Navarra and Soto-Faraco,
<xref rid="B76" ref-type="bibr">2007</xref>
).</p>
<p>Computer-assisted pronunciation training (CAPT) systems have provided a new means of examining AV processing during language learning. Many CAPT systems, such as “Baldi” (Massaro and Cohen,
<xref rid="B68" ref-type="bibr">1998</xref>
; Massaro,
<xref rid="B66" ref-type="bibr">2003</xref>
; Massaro et al.,
<xref rid="B70" ref-type="bibr">2006</xref>
), “ARTUR” (Engwall et al.,
<xref rid="B22" ref-type="bibr">2006</xref>
; Engwall and Bälter,
<xref rid="B21" ref-type="bibr">2007</xref>
; Engwall,
<xref rid="B20" ref-type="bibr">2008</xref>
), “ATH” (Badin et al.,
<xref rid="B3" ref-type="bibr">2008</xref>
), “Vivian” (Fagel and Madany,
<xref rid="B25" ref-type="bibr">2008</xref>
), and “Speech Tutor” (Kröger et al.,
<xref rid="B53" ref-type="bibr">2010</xref>
), employ animated talking heads, most of which can optionally display transparent vocal tracts showing tongue movement. “Tongue reading” studies based on these systems have shown small but consistent perceptual improvement when tongue movement information is added to the visual display. Such effects have been noted in word retrieval for acoustically degraded sentences (Wik and Engwall,
<xref rid="B118" ref-type="bibr">2008</xref>
) and in a forced-choice consonant identification task (Badin et al.,
<xref rid="B4" ref-type="bibr">2010</xref>
).</p>
<p>Whereas the visual effects on speech perception are fairly well-established, the visual effects on speech production are less clearly understood. Massaro and Light (
<xref rid="B69" ref-type="bibr">2003</xref>
) investigated the effectiveness of using Baldi in teaching non-native phonetic contrasts (/r/-/l/) to Japanese learners of English. Both external and internal views (i.e., showing images of the speech articulators) of Baldi were found to be effective, with no added benefit noted for the internal articulatory view. A subsequent, rather preliminary report on English-speaking students learning Chinese and Arabic phonetic contrasts reported similar negative results for the addition of visual, articulatory information (Massaro et al.,
<xref rid="B67" ref-type="bibr">2008</xref>
). In this study, training with the Baldi avatar showing face (Mandarin) or internal articulatory processes (Arabic) provided no significant improvement in a small group of students' productions, as rated by native listeners.</p>
<p>In contrast, Liu et al. (
<xref rid="B61" ref-type="bibr">2007</xref>
) observed potentially positive effects of visual feedback on speech production for 101 English-speaking students learning Mandarin. This investigation contrasted three feedback conditions: audio only, human audiovisual, and a Baldi avatar showing visible articulators. Results indicated that all three methods improved students' pronunciation accuracy. However, for the final rime pronunciation, both the human audiovisual and Baldi condition scores were higher than audio-only, with the Baldi condition significantly higher than the audio condition. This pattern is compatible with the view that information concerning the internal articulators assists L2 production. Taken together, these studies suggest that adding visual articulatory information to 3D tutors can lead to improvements in producing certain language contrasts. However, more work is needed to establish the effectiveness, consistency, and strength of these techniques.</p>
<p>At the neurophysiological level, AV speech processing can be related to the issue of whether speech perception and production are supported by a joint action-observation matching system. Such a system has been related to "mirror" neurons originally described in the macaque brain [for reviews see (Rizzolatti and Craighero,
<xref rid="B92" ref-type="bibr">2004</xref>
; Pulvermüller and Fadiga,
<xref rid="B86" ref-type="bibr">2010</xref>
; Rizzolatti et al.,
<xref rid="B91" ref-type="bibr">2014</xref>
); although see (Hickok,
<xref rid="B42" ref-type="bibr">2009</xref>
,
<xref rid="B43" ref-type="bibr">2010</xref>
) for an opposing view]. Mirror neurons are thought to fire both during goal-directed actions and while watching a similar action made by another individual. Research has extended this finding to audiovisual systems in monkeys (Kohler et al.,
<xref rid="B52" ref-type="bibr">2002</xref>
) and speech processing in humans (e.g., Rizzolatti and Arbib,
<xref rid="B90" ref-type="bibr">1998</xref>
; Arbib,
<xref rid="B1" ref-type="bibr">2005</xref>
; Gentilucci and Corballis,
<xref rid="B30" ref-type="bibr">2006</xref>
).</p>
<p>In support of this view, studies have linked auditory and/or visual speech perception with increased activity in brain areas involved in motor speech planning, execution, and proprioceptive control of the mouth (e.g., Möttönen et al.,
<xref rid="B75" ref-type="bibr">2004</xref>
; Wilson et al.,
<xref rid="B120" ref-type="bibr">2004</xref>
; Ojanen et al.,
<xref rid="B80" ref-type="bibr">2005</xref>
; Skipper et al.,
<xref rid="B101" ref-type="bibr">2005</xref>
,
<xref rid="B102" ref-type="bibr">2006</xref>
,
<xref rid="B100" ref-type="bibr">2007a</xref>
,
<xref rid="B103" ref-type="bibr">b</xref>
; Pekkola et al.,
<xref rid="B82" ref-type="bibr">2006</xref>
; Pulvermüller et al.,
<xref rid="B87" ref-type="bibr">2006</xref>
; Wilson and Iacoboni,
<xref rid="B119" ref-type="bibr">2006</xref>
; Zaehle et al.,
<xref rid="B122" ref-type="bibr">2008</xref>
). Similarly, magnetoencephalography (MEG) studies have linked speech production with activity in brain areas specialized for auditory and/or visual speech perception processes (e.g., Curio et al.,
<xref rid="B13" ref-type="bibr">2000</xref>
; Gunji et al.,
<xref rid="B36" ref-type="bibr">2001</xref>
; Houde et al.,
<xref rid="B45" ref-type="bibr">2002</xref>
; Heinks-Maldonado et al.,
<xref rid="B41" ref-type="bibr">2006</xref>
; Tian and Poeppel,
<xref rid="B116" ref-type="bibr">2010</xref>
). While auditory activation during speech production is expected (because acoustic input is normally present), Tian and Poeppel's (
<xref rid="B116" ref-type="bibr">2010</xref>
) study shows auditory cortex activation in the absence of auditory input. This suggests that an imagined motor speech task can nevertheless generate forward predictions via an auditory efference copy.</p>
<p>Overall, these neurophysiological findings suggest a brain basis for the learning of speech motor patterns via visual input, which in turn would strengthen the multimodal speech representations in feedforward models. In everyday situations, visual articulatory input would normally be lip information only. However, instrumental methods of transducing tongue motion (e.g., magnetometry, ultrasound, MRI) raise the possibility that visual tongue information may also play a role.</p>
<p>Neurocomputational models of speech production provide a potentially useful framework for understanding the intricacies of AV speech processing. These models seek to provide an integrated explanation for speech processing, incorporated in testable artificial neural networks. Two prominent models include “Directions Into Velocities of Articulators” (DIVA) (Guenther and Perkell,
<xref rid="B34" ref-type="bibr">2004</xref>
; Guenther,
<xref rid="B32" ref-type="bibr">2006</xref>
; Guenther et al.,
<xref rid="B33" ref-type="bibr">2006</xref>
; Guenther and Vladusich,
<xref rid="B35" ref-type="bibr">2012</xref>
) and “ACTion” (ACT) (Kröger et al.,
<xref rid="B55" ref-type="bibr">2009</xref>
). These models assume as input an abstract speech sound unit (a phoneme, syllable, or word), and generate as output both articulatory and auditory representations of speech. The systems operate by computing neural layers (or “maps”) as distributed activation patterns. Production of an utterance involves fine-tuning between speech sound maps, sensory maps, and motor maps, guided by feedforward (predictive) processes and concurrent feedback from the periphery. Learning in these models critically relies on forward and inverse processes, with the internal speech model iteratively strengthened through the interaction of predicted and actual sensory feedback.</p>
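<p>To make this feedforward/feedback interaction concrete, the toy sketch below is our own illustration under simplifying assumptions, not the published DIVA or ACT implementations; the function names and the <italic>fb_gain</italic> and <italic>learn_rate</italic> parameters are hypothetical. On each trial, a stored feedforward command is combined with a feedback-based correction, and that correction is folded back into the stored command, so production error shrinks as the internal model is tuned.</p>
<preformat><![CDATA[
# Toy sketch of feedforward/feedback interaction in neurocomputational speech models.
# Conceptual illustration only; not the published DIVA/ACT code.
import numpy as np

def simulate_learning(target, plant, feedforward, n_trials=20,
                      fb_gain=0.8, learn_rate=0.5):
    """target: desired sensory outcome; plant: maps a motor command to its sensory outcome."""
    ff = np.asarray(feedforward, dtype=float)
    errors = []
    for _ in range(n_trials):
        predicted = plant(ff)                         # forward prediction for the stored command
        correction = fb_gain * (target - predicted)   # feedback-based corrective command
        produced = plant(ff + correction)             # production with online correction
        ff = ff + learn_rate * correction             # tune the internal (feedforward) model
        errors.append(float(np.linalg.norm(target - produced)))
    return ff, errors

# Toy usage: a linear "vocal tract" whose sensory outcome equals the motor command.
target = np.array([1.0, 0.4])                         # hypothetical articulatory/auditory goal
ff, errs = simulate_learning(target, plant=lambda m: m, feedforward=np.zeros(2))
print(round(errs[0], 3), round(errs[-1], 3))          # error decreases across trials
]]></preformat>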
<p>Researchers have used neurocomputational frameworks to gain important insights about speech and language disorders, including apraxia of speech (AOS) in adults (Jacks,
<xref rid="B47" ref-type="bibr">2008</xref>
; Maas et al.,
<xref rid="B62" ref-type="bibr">2015</xref>
), childhood apraxia (Terband et al.,
<xref rid="B113" ref-type="bibr">2009</xref>
; Terband and Maassen,
<xref rid="B112" ref-type="bibr">2010</xref>
), developmental speech sound disorders (Terband et al.,
<xref rid="B114" ref-type="bibr">2014a</xref>
,
<xref rid="B115" ref-type="bibr">b</xref>
), and stuttering (Max et al.,
<xref rid="B71" ref-type="bibr">2004</xref>
; Civier et al.,
<xref rid="B12" ref-type="bibr">2010</xref>
). For example, DIVA simulations have been used to test the claim that apraxic disorders result from relatively preserved feedback (and impaired feed-forward) speech motor processes (Civier et al.,
<xref rid="B12" ref-type="bibr">2010</xref>
; see also Maas et al.,
<xref rid="B62" ref-type="bibr">2015</xref>
). These neurocomputational modeling-based findings correspond with largely positive results from visual augmented feedback intervention studies for individuals with AOS (see Katz and McNeil,
<xref rid="B50" ref-type="bibr">2010</xref>
for review; also, Preston and Leaman,
<xref rid="B84" ref-type="bibr">2014</xref>
). Overall, these intervention findings have suggested that visual augmented feedback of tongue movement can help remediate speech errors in individuals with AOS, presumably by strengthening the internal model. Other clinical studies have reported that visual feedback can positively influence the speech of children and adults with a variety of speech and language problems, including articulation/phonological disorders, residual sound errors, and dysarthria. This research has included training with electropalatography (EPG) (Hardcastle et al.,
<xref rid="B39" ref-type="bibr">1991</xref>
; Dagenais,
<xref rid="B15" ref-type="bibr">1995</xref>
; Goozee et al.,
<xref rid="B31" ref-type="bibr">1999</xref>
; Hartelius et al.,
<xref rid="B40" ref-type="bibr">2005</xref>
; Nordberg et al.,
<xref rid="B77" ref-type="bibr">2011</xref>
), ultrasound (Bernhardt et al.,
<xref rid="B7" ref-type="bibr">2005</xref>
; Preston et al.,
<xref rid="B85" ref-type="bibr">2014</xref>
) and strain gauge transducer systems (Shirahige et al.,
<xref rid="B98" ref-type="bibr">2012</xref>
; Yano et al.,
<xref rid="B121" ref-type="bibr">2015</xref>
).</p>
<p>Visual feedback training has also been used to study information processing during second language (L2) learning. For example, Levitt and Katz (
<xref rid="B58" ref-type="bibr">2008</xref>
) examined augmented visual feedback in the production of a non-native consonant sound. Two groups of adult monolingual American English speakers were trained to produce the Japanese post-alveolar flap /ɽ/. One group received traditional second language instruction alone and the other group received traditional second language instruction plus visual feedback for tongue movement provided by a 2D EMA system (Carstens AG100, Carstens Medizinelektronik GmbH, Bovenden, Germany,
<ext-link ext-link-type="uri" xlink:href="http://www.articulograph.de">www.articulograph.de</ext-link>
). The data were perceptually rated by monolingual Japanese native listeners and were also analyzed acoustically for flap consonant duration. The results indicated improved acquisition and maintenance by the participants who received traditional instruction plus EMA training. These findings suggest that visual information regarding consonant place of articulation can assist second language learners with accent reduction.</p>
<p>In another recent study, Suemitsu et al. (
<xref rid="B108" ref-type="bibr">2013</xref>
) tested a 2D EMA-based articulatory feedback approach to facilitate production of an unfamiliar English vowel (/æ/) by five native speakers of Japanese. Learner-specific vowel positions were computed for each participant and provided as feedback in the form of a multiple-sensor, mid-sagittal display. Acoustic analysis of subjects' productions indicated that acoustic and articulatory training resulted in significantly improved /æ/ productions. The results suggest feasibility and applicability to vowel production, although additional research will be needed to determine the separable roles of acoustic and articulatory feedback in this version of EMA training.</p>
<p>Recent research has shown that 3D articulography systems afford several advantages over 2D systems: recording in x/y/z dimensions (and two angles), increased accuracy, and the ability to track movement from multiple articulators placed at positions other than tongue midline (Berry,
<xref rid="B9" ref-type="bibr">2011</xref>
; Kroos,
<xref rid="B56" ref-type="bibr">2012</xref>
; Stella et al.,
<xref rid="B104" ref-type="bibr">2013</xref>
). As such, visual augmented feedback provided by these systems may offer new insights on information processing during speech production. A preliminary test of a 3D EMA-based articulatory feedback system was conducted by Katz et al. (
<xref rid="B49" ref-type="bibr">2014</xref>
). Monolingual English speakers were asked to produce several series of four CV syllables. Each series contained four different places of articulation, one of which was an alveolar (e.g., bilabial, velar, alveolar, palatal; such as /pa/-/ka/-/ta/-/ja/). A 1-cm target sphere was placed at each participant's alveolar region. Four of the five participants attempted the series with no visible feedback. The fifth subject was given articulatory visual feedback of their tongue movement and requested to “hit the target” during their series production. The results showed that subjects in the no-feedback condition ranged between 50 and 80% accuracy, while the subject given feedback showed 90% accuracy. These preliminary findings suggested that the 3D EMA system could successfully track lingual movement for consonant feedback purposes, and that feedback could be used by talkers to improve consonantal place of articulation during speech.</p>
<p>A more stringent test of whether 3D visual feedback can modify speech production would involve examining how individuals perform when they must achieve an unfamiliar articulatory target, such as a foreign speech sound. Therefore, in the present experiment we investigated the accuracy with which healthy monolingual talkers could produce a novel, non-English, speech sound (articulated by placing the tongue blade at the palatal region of the oral cavity) and whether this gesture could benefit from short-term articulatory training with visual feedback.</p>
</sec>
<sec sec-type="materials and methods" id="s2">
<title>Materials and methods</title>
<p>This study was conducted in accordance with the Department of Health and Human Services regulations for the protection of human research subjects, with written informed consent received from all subjects prior to the experiment. The protocol for this research was approved by the Institutional Review Board at the University of Texas at Dallas. Consent was obtained from all subjects appearing in audio, video, or figure content included in this article.</p>
<sec>
<title>Participants and stimuli</title>
<p>Five college-age subjects (three male, two female) with General American English (GAE) accents participated in this study. All talkers were native speakers of English with no speech, hearing, or language disorders. Three participants had elementary speaking proficiency with a foreign language (M03, F02:
<italic>Spanish</italic>
; F01:
<italic>French</italic>
). Participants were trained to produce a novel consonant in the /ɑCɑ/ context while an electromagnetic articulograph system recorded lingual movement. For this task, we selected a speech sound not attested as a phoneme among the world's languages: a voiced, coronal, palatal stop. Unlike palatal stops produced with the tongue body, found in languages such as Czech (/c/ and /
<inline-graphic xlink:href="fnhum-09-00612-i0001.jpg"></inline-graphic>
/), subjects were asked to produce a closure with the tongue anterior (tip/blade) contacting the hard palate. This sound is similar to a voiced retroflex alveolar /ɖ/, but is articulated in the palatal, not immediately post-alveolar region. As such, it may be represented in the IPA as a backed, voiced retroflex stop: /
<underline>ɖ</underline>
/. Attested cases appear rarely in the world's languages and only as allophones. For instance, Dart (
<xref rid="B17" ref-type="bibr">1991</xref>
) notes some speakers of O'odham (Papago) produce voiced palatal sounds with (coronal) laminal articulation, instead of the more usual tongue body articulation (see Supplementary Materials for a sample sound file used in the present experiment).</p>
<p>Stimuli were elicited in blocks of 10 /ɑCɑ/ production attempts under a single-subject ABA design. Initially, the experimental protocol called for three pre-training, three training, and three post-training blocks from each subject (for a total of 90 productions). However, because data for this study were collected as part of a larger investigation of stop consonant productions, there was some subject attrition and reduced participation for the current experiment. Thus, the criterion for completion of the experiment was changed to a minimum of one block of baseline (no feedback) probes, 2–3 blocks of visual feedback training, and 1–3 blocks of post-feedback probes, for a total of 40–80 productions from each participant. All trials were conducted within a single experimental session lasting approximately 15 min.</p>
</sec>
<sec>
<title>Procedure</title>
<p>Training sessions were conducted in a quiet testing room at the University of Texas at Dallas. Each participant was seated next to the Wave system, facing a computer monitor located approximately 1 m away. Five sensors were glued to the subject's tongue using a biocompatible adhesive: one each at the tongue tip (~1 cm posterior to the apex), tongue middle (~3 cm posterior to the apex), tongue back (~4 cm posterior to the apex), and the left and right tongue lateral positions. Sensors were also attached to a pair of glasses worn by the subject to establish a frame of reference for head movement. A single sensor was taped on the center of the chin to track jaw movement.</p>
<sec>
<title>Visual feedback apparatus</title>
<p>External visual feedback for lingual movement was provided to subjects using a 3D EMA-based system (
<italic>Opti-Speech</italic>
, Vulintus LLC, Sachse, Texas, United States,
<ext-link ext-link-type="uri" xlink:href="http://www.vulintus.com/">http://www.vulintus.com/</ext-link>
). This system works by tracking speech movement with a magnetometer (
<italic>Wave</italic>
, Northern Digital Incorporated, Waterloo, Ontario, Canada). An interface allows users to view their current tongue position (represented by an image consisting of flesh-point markers and a modeled tongue surface) within a transparent head with a moving jaw. Small blue spheres mark different regions on the animated tongue (tongue tip, tongue middle, tongue back, or tongue left/right lateral). Users may adjust the visibility of these individual markers and/or select or deselect “active” markers for speech training purposes. Articulatory targets, shown on the screen as semi-transparent red or orange spheres, can be placed by the user in the virtual oral cavity. The targets change color to green when the active marker enters, indicating correct tongue position, thus providing immediate visual feedback for place of articulation (see Katz et al.,
<xref rid="B49" ref-type="bibr">2014</xref>
for more information). The target size and “hold time on target” can be varied by the user to make the target matching task easier or harder. An illustration of the system is shown in Figure
<xref ref-type="fig" rid="F1">1</xref>
.</p>
<fig id="F1" position="float">
<label>Figure 1</label>
<caption>
<p>
<bold>Illustration of the
<italic>
<bold>Opti-Speech</bold>
</italic>
system, with subject wearing sensors and head-orientation glasses (lower right insert)</bold>
. A sample target sphere, placed in this example at the subject's alveolar ridge, is shown in red. A blue marker indicates the tongue tip/blade (TT) sensor.</p>
</caption>
<graphic xlink:href="fnhum-09-00612-g0001"></graphic>
</fig>
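<p>For illustration, the sketch below shows how a spherical articulatory target of this kind could register a “hit” from a streamed flesh-point position. This is a minimal sketch under our own assumptions (the class, its parameters, and the millimeter coordinates are hypothetical), not the <italic>Opti-Speech</italic> implementation.</p>
<preformat><![CDATA[
# Minimal sketch of spherical-target hit detection for a streamed EMA marker.
# Hypothetical illustration; not the Opti-Speech source code.
import numpy as np

class SphereTarget:
    def __init__(self, center_mm, radius_mm, hold_s=0.0):
        self.center = np.asarray(center_mm, dtype=float)
        self.radius = float(radius_mm)
        self.hold_s = float(hold_s)        # "hold time on target"; 0 s means any entry counts
        self._inside_since = None

    def update(self, marker_mm, t_s):
        """Return True (target lights up green) once the marker has stayed inside long enough."""
        inside = np.linalg.norm(np.asarray(marker_mm, dtype=float) - self.center) <= self.radius
        if not inside:
            self._inside_since = None
            return False
        if self._inside_since is None:
            self._inside_since = t_s
        return (t_s - self._inside_since) >= self.hold_s

# Hypothetical usage with one streamed tongue-tip sample (coordinates in mm, time in s):
target = SphereTarget(center_mm=[0.0, 55.0, 10.0], radius_mm=5.0, hold_s=0.0)
print(target.update(marker_mm=[1.0, 53.0, 11.0], t_s=0.016))   # -> True (marker inside sphere)
]]></preformat>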
</sec>
<sec>
<title>Pronunciation training</title>
<p>The backed palatal stop consonant /
<underline>ɖ</underline>
/ is produced by making a closure between the tongue tip and hard palate. Therefore, the tongue tip marker was designated as the active marker for this study. A single target was placed at the palatal place of articulation to indicate where the point of maximum constriction should occur during the production of /
<underline>ɖ</underline>
/. To help set the target, participants were requested to press their tongue to the roof of their mouth, allowing the tongue sensors to conform to the contours of the palate. The experimenter then placed the virtual target at the location of the tongue middle sensor, which was estimated to correspond to the palatal (typically, pre-palatal) region. Based on previous work (Katz et al.,
<xref rid="B49" ref-type="bibr">2014</xref>
), we selected a 1.00 cm target sphere, with no hold time.</p>
<p>The current experiment was conducted as part of a larger study investigating stop consonant production that employed visual feedback for training purposes. As such, by the start of the experiment each participant had received an opportunity to accommodate to the presence of the Wave sensors on the tongue and to practice speaking English syllables and words under visual feedback conditions for approximately 25–30 min. In order to keep practice conditions uniform in the actual experiment, none of these warmup tasks involved producing a novel, non-English sound.</p>
<p>For the present experiment, participants were trained to produce the voiced, coronal, palatal stop, /
<underline>ɖ</underline>
/. The investigator (SM) described the sound to subjects as “sound[ing] like a ‘
<italic>d</italic>
,’ but produced further back in the mouth.” A more precise articulatory explanation was also provided, instructing participants to feel along the top of their mouth from front to back to help identify the alveolar ridge. Participants were then told to “place the tip of [their] tongue behind the alveolar ridge and slide it backwards to meet with the roof, or palate, of the mouth.” The investigator, a graduate student with a background in phonetics instruction, produced three repetitions of /ɑ
<underline>ɖ</underline>
ɑ/ (live) for participants to imitate. Each participant was allowed to practice making the novel consonantal sound 3–5 times before beginning the no-feedback trial sessions. This practice schedule was devised based on pilot data suggesting 3–5 practice attempts were sufficient for participants to combine the articulatory, modeled, and feedback information to produce a series of successive “best attempts” at the novel sound. Throughout the training procedure, the investigator provided generally encouraging comments. In addition, if an attempt was judged perceptually to be off-target (e.g., closer to an English /d/ or the palatalized alveolar stop, /d
<sup>j</sup>
/), the investigator pointed out the error and repeated the (articulatory) instructions.</p>
<p>When the participant indicated that he/she understood all of the instructions, pre-training (baseline) trials began. After each block of attempts, participants were given general feedback about their performance and the instructions were reiterated if necessary. Once all pre-training sessions were completed, the participant was informed that the
<italic>Opti-Speech</italic>
visual feedback system would now be used to help them track their tongue movement. Subjects were instructed to use the tongue model as a guide for producing the palatal sound by moving the tongue tip upwards and backwards until the tongue tip marker entered the palatal region and the target lit up green, indicating success (see Figure
<xref ref-type="fig" rid="F2">2</xref>
). Each participant was allowed three practice attempts at producing the novel consonant while simultaneously watching the tongue model and aiming for the virtual target.</p>
<fig id="F2" position="float">
<label>Figure 2</label>
<caption>
<p>
<bold>Close-up of tongue avatar during a “hit” for the production of the voiced, retroflex, palatal stop consonant</bold>
. The target sphere lights up green, providing visual feedback for the correct place of articulation.</p>
</caption>
<graphic xlink:href="fnhum-09-00612-g0002"></graphic>
</fig>
<p>After completing the training sessions, the subject was asked to once again attempt to produce the sound with the visual feedback removed. No practice attempts were allowed between the training and post-training trial sessions. During all trials, the system recorded the talker's kinematic data, including a record of target hits (i.e., accuracy of the tongue-tip sensor entering the subject's palatal zone). The experiments were also audio- and video-recorded.</p>
</sec>
</sec>
</sec>
<sec sec-type="results" id="s3">
<title>Results</title>
<sec>
<title>Kinematic results</title>
<p>All participants completed the speaking task without noticeable difficulty. Speakers' accuracy in achieving the correct articulation was measured as the number of target hits out of the number of attempts in each block. Talker performance is summarized in Figure
<xref ref-type="fig" rid="F3">3</xref>
, which shows accuracy at the baseline (pre-training), visual feedback (shaded), and post-feedback (post-training) probes.</p>
<fig id="F3" position="float">
<label>Figure 3</label>
<caption>
<p>
<bold>Accuracy for five talkers producing a coronal palatal stop</bold>
. Shaded regions indicate visual feedback conditions. Baseline (pre-training) and post-training phases are also indicated.</p>
</caption>
<graphic xlink:href="fnhum-09-00612-g0003"></graphic>
</fig>
<p>All talkers performed relatively poorly in the baseline phase, ranging from 0 to 50% (
<italic>x</italic>
= 12.6%,
<italic>sd</italic>
= 14.1%) accuracy. Each participant showed a rapid increase in accuracy during the visual feedback phase (shaded), ranging from 50 to 100% (
<italic>x</italic>
= 74.9%,
<italic>sd</italic>
= 15.6%). These gains appeared to be maintained during the post-feedback probes, with scores ranging from 70 to 100% (
<italic>x</italic>
= 85.3%,
<italic>sd</italic>
= 12.8%). Group patterns were examined using two-tailed paired
<italic>t</italic>
-tests. The results indicated a significant difference between pre-training and training phases,
<italic>t</italic>
<sub>(4)</sub>
= 8.73,
<italic>p</italic>
< 0.001, and pre-training and post-training phases,
<italic>t</italic>
<sub>(4)</sub>
= 14.0,
<italic>p</italic>
< 0.001. No significant difference was found between training and post-training,
<italic>t</italic>
<sub>(4)</sub>
= 1.66,
<italic>ns</italic>
. This pattern suggests acquisition during the training phase, and maintenance of learned behavior immediately post-training.</p>
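<p>The group comparison amounts to two-tailed paired <italic>t</italic>-tests over the five subjects' phase means (df = 4). A minimal sketch of that computation is given below; the accuracy arrays are hypothetical placeholders, not the study's raw data.</p>
<preformat><![CDATA[
# Two-tailed paired t-tests over per-subject phase accuracies (n = 5, df = 4).
# The arrays below are hypothetical placeholders, not the study's data.
import numpy as np
from scipy import stats

pre   = np.array([0.10, 0.00, 0.20, 0.13, 0.20])   # baseline accuracy (hypothetical)
train = np.array([0.70, 0.60, 0.90, 0.75, 0.80])   # feedback-phase accuracy (hypothetical)
post  = np.array([0.80, 0.70, 1.00, 0.85, 0.90])   # post-training accuracy (hypothetical)

for label, a, b in [("pre vs. training", pre, train),
                    ("pre vs. post-training", pre, post),
                    ("training vs. post-training", train, post)]:
    t, p = stats.ttest_rel(a, b)                    # paired t-test, two-tailed by default
    print(f"{label}: t(4) = {t:.2f}, p = {p:.4f}")
]]></preformat>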
<p>An effect size for each subject was computed using the Percentage of Non-overlapping Data (PND) method described by Scruggs et al. (
<xref rid="B96" ref-type="bibr">1987</xref>
). This non-parametric metric counts the proportion of intervention-phase data points that do not overlap with (i.e., exceed) the baseline range, and interpretive criteria have been suggested (Scruggs et al.,
<xref rid="B97" ref-type="bibr">1986</xref>
). Using this metric, all of the subjects' patterns were found to be greater than 90% (
<italic>highly effective</italic>
) for comparisons of both pre-training vs. training, and pre-training vs. post-training.</p>
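<p>A minimal sketch of the PND computation is shown below; the baseline and intervention values are hypothetical, not the study's data.</p>
<preformat><![CDATA[
# Percentage of Non-overlapping Data (PND): the percentage of intervention-phase
# points that exceed the highest baseline point (for a behavior expected to increase).
def pnd(baseline, intervention):
    ceiling = max(baseline)
    return 100.0 * sum(x > ceiling for x in intervention) / len(intervention)

# Hypothetical accuracy scores (%), not the study's data:
print(pnd(baseline=[10, 20, 0], intervention=[60, 80, 90, 70]))  # -> 100.0 (> 90% = "highly effective")
]]></preformat>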
</sec>
<sec>
<title>Acoustic results</title>
<p>In order to corroborate training effects, we sought acoustic evidence of coronal (tongue blade) palatal stop integrity. This second analysis investigated whether the observed improvement in talkers' articulatory precision resulting from training would be reflected in patterns of the consonant burst spectra. Short-term spectral analyses were obtained at the moment of burst release (Stevens and Blumstein,
<xref rid="B106" ref-type="bibr">1975</xref>
,
<xref rid="B107" ref-type="bibr">1978</xref>
). Although burst spectra may vary considerably from speaker to speaker, certain general patterns may be noted. Coronals generally show energy distributed across the whole spectrum, with at least two peaks between 1.2 and 3.6 kHz, termed “diffuse” in the feature system of Jakobson et al. (
<xref rid="B48" ref-type="bibr">1952</xref>
). Also, coronals typically result in relatively higher-frequency spectral components than articulations produced by lips or the tongue body, and these spectra are therefore described as being “acute” (Jakobson et al.,
<xref rid="B48" ref-type="bibr">1952</xref>
; Hamann,
<xref rid="B38" ref-type="bibr">2003</xref>
) or “diffuse-rising” (Stevens and Blumstein,
<xref rid="B107" ref-type="bibr">1978</xref>
).</p>
<p>Burst frequencies vary as a function of the length of the vocal tract anterior to the constriction. Thus, alveolar constriction results in a relatively high burst, ranging from approximately 2.5 to 4.5 kHz (e.g., Reetz and Jongman,
<xref rid="B88" ref-type="bibr">2009</xref>
), while velar stops, having a longer vocal tract anterior to the constriction, produce lower burst frequencies (ranging from approximately 1.5 to 2.5 kHz). Since palatal stops are produced with a constriction located between the alveolar and velar regions, palatal stop bursts may be expected to have regions of spectral prominence between the two ranges, in the 3.0–5.0 kHz span. Acoustic analyses of Czech or Hungarian velar and palatal stops generally support this view. For instance, Keating and Lahiri (
<xref rid="B51" ref-type="bibr">1993</xref>
) note that the Hungarian palatal stop /ca/ spectrum slopes up to its highest peak “at 3.0–4.0 kHz or even higher,” but otherwise shows “a few peaks of similar amplitude which together dominate the spectrum in a single broad region” (p. 97). A study by Dart (
<xref rid="B17" ref-type="bibr">1991</xref>
) obtained palatographic and spectral data for O'odham (Papago) voiced palatal sounds produced with laminal articulation. Analysis of the burst spectra for these (O'odham) productions revealed mostly diffuse rising spectra, with some talkers showing “a high amplitude peak around 3.0–5.0 kHz” (p. 142).</p>
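<p>The relation between constriction location and burst frequency can be made explicit with a standard quarter-wavelength approximation for the cavity in front of the constriction; this textbook simplification is added here for illustration and is not a computation reported in the cited studies:</p>
<disp-formula>
<tex-math><![CDATA[
f_{\mathrm{front}} \approx \frac{c}{4\,L_{\mathrm{front}}}, \qquad c \approx 35{,}000\ \mathrm{cm/s}:\quad L_{\mathrm{front}} = 2.5\ \mathrm{cm} \Rightarrow f \approx 3.5\ \mathrm{kHz};\quad L_{\mathrm{front}} = 5\ \mathrm{cm} \Rightarrow f \approx 1.75\ \mathrm{kHz}
]]></tex-math>
</disp-formula>
<p>Under this approximation, a more posterior constriction (a longer front cavity) lowers the dominant burst frequencies, which motivates prediction (2) below.</p>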
<p>For the present experiment, three predictions were made: (1) palatal stop consonant bursts prior to training will have diffuse rising spectra with characteristic peaks in the 3.0–5.0 kHz range; (2) following training, these spectral peaks will shift downwards, reflecting a more posterior constriction (e.g., from an alveolar toward a palatal place of articulation); and (3) post-training token-to-token variability should be lower than at baseline, reflecting increased articulatory ability.</p>
</sec>
<sec>
<title>Spectral analysis</title>
<p>Talkers' consonantal productions were digitized and analyzed using PRAAT (Boersma and Weenink,
<xref rid="B11" ref-type="bibr">2001</xref>
) with a scripted procedure based on linear predictive coding (LPC) analysis. A cursor was placed at the beginning of the consonant burst of each syllable and a 12 ms Kaiser window was centered over the stop transient. Autocorrelation-based LPC (24-pole model, +6 dB pre-emphasis) yielded spectral sections. Overlapping plots of subjects' repeated utterances were obtained for visual inspection, with spectral peaks recorded for analysis.</p>
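<p>A minimal sketch of this burst-spectrum procedure is given below. It is our own Python reimplementation rather than the authors' PRAAT script, and the sampling rate, Kaiser beta, pre-emphasis coefficient, and peak-prominence threshold are illustrative assumptions.</p>
<preformat><![CDATA[
# Sketch of burst-spectrum estimation: 12 ms Kaiser window over the stop transient,
# first-difference pre-emphasis (~+6 dB/octave), 24-pole autocorrelation LPC,
# then spectral peaks from the LPC envelope. Parameters are illustrative.
import numpy as np
from scipy.signal import freqz, find_peaks

def lpc_autocorr(x, order):
    """LPC coefficients via the autocorrelation method (Levinson-Durbin recursion)."""
    r = np.correlate(x, x, mode="full")[len(x) - 1: len(x) + order]
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])
        k = -acc / err
        a[1:i] = a[1:i] + k * a[i - 1:0:-1]
        a[i] = k
        err *= (1.0 - k * k)
    return a

def burst_spectrum(signal, fs, burst_idx, win_ms=12.0, order=24, beta=8.0):
    """Return (peak frequencies in Hz, frequency axis, LPC envelope in dB) at the burst."""
    n = int(round(win_ms * 1e-3 * fs))
    seg = np.asarray(signal[burst_idx - n // 2: burst_idx - n // 2 + n], dtype=float)
    seg = np.append(seg[0], seg[1:] - 0.97 * seg[:-1])     # pre-emphasis (~+6 dB/octave)
    seg = seg * np.kaiser(n, beta)                          # Kaiser window over the transient
    a = lpc_autocorr(seg, order)
    freqs, h = freqz(1.0, a, worN=1024, fs=fs)              # all-pole LPC spectral envelope
    env_db = 20.0 * np.log10(np.abs(h) + 1e-12)
    peak_idx, _ = find_peaks(env_db, prominence=3.0)        # record spectral peaks
    return freqs[peak_idx], freqs, env_db
]]></preformat>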
<p>Figure
<xref ref-type="fig" rid="F4">4</xref>
shows overlapping plots of spectra obtained pre- and post-EMA training for four of the five talkers. Plots containing (RMS) averages for pre-training (incorrect) and post-training (correct) spectra are also shown for comparison. Spectra for talker F01 could not be compared because this talker's initial productions were realized as CV syllables (instead of VCV), and differing vowel context is known to greatly affect burst consonant spectral characteristics (Stevens,
<xref rid="B105" ref-type="bibr">2008</xref>
).</p>
<fig id="F4" position="float">
<label>Figure 4</label>
<caption>
<p>
<bold>Overlapping plots of short-term spectra for bursts of voiced, coronal, palatal stops produced before and after EMA training</bold>
. Correct place of articulation (hits) are marked in blue, and errors (misses) in red. Computed averages of incorrect pre-training (red) and correct post-training (blue) spectra are shown at right, for comparison.</p>
</caption>
<graphic xlink:href="fnhum-09-00612-g0004"></graphic>
</fig>
<p>Results revealed mixed support for the experimental predictions. Similar to previous reports (e.g., Dart,
<xref rid="B17" ref-type="bibr">1991</xref>
), there were considerable differences in the shapes of the burst spectral patterns from talker to talker. Three of the four talkers' spectra (M01, M02, and M03) were diffuse, having at least two peaks between 1.2 and 3.6 kHz, while the spectra of talker F02 had peaks in a mid-frequency (“compact”) range of 2.0–3.0 kHz. Patterns of spectral tilt for all speakers were generally falling (instead of rising, as expected).</p>
<p>The prediction that 3.0–5.0 kHz spectral peak frequencies would lower following training was not uniformly supported. Because standard deviations were relatively high and there was much inter-talker variability, the data are summarized rather than tested statistically.</p>
<p>Talker M01's data had six peaks pre-training (
<italic>x</italic>
= 3967 Hz;
<italic>sd</italic>
= 596 Hz) and five peaks post-training (
<italic>x</italic>
= 4575 Hz;
<italic>sd</italic>
= 281 Hz). Talker M02's productions yielded five peaks pre-training (
<italic>x</italic>
= 3846 Hz;
<italic>sd</italic>
= 473 Hz) and nine peaks post-training (
<italic>x</italic>
= 3620 Hz;
<italic>sd</italic>
= 265 Hz). Talker M03 had six peaks pre-training (
<italic>x</italic>
= 4495 Hz;
<italic>sd</italic>
= 353 Hz) and nine peaks post-training (
<italic>x</italic>
= 3687 Hz;
<italic>sd</italic>
= 226 Hz). The spectra of talker F02 had peaks in a mid-frequency (“compact”) range of approximately 2.0–3.0 kHz. This talker's spectral peak values did not shift with training (pre-training:
<italic>x</italic>
= 2359 Hz,
<italic>sd</italic>
= 139 Hz; post-training:
<italic>x</italic>
= 2390 Hz,
<italic>sd</italic>
= 194 Hz). In summary, talkers M03 and M02 showed the expected pattern of spectral peak lowering, F02 showed no training-dependent changes, and M01 showed a pattern in the opposite direction.</p>
<p>Of the talkers with spectral data available, three (M01, M02, and M03) showed a marked reduction in variability (i.e., reduced standard deviation values) from pre-training to post-training, suggesting that training corresponded with increased production consistency. However, this was not the case for talker F02, whose mid-range spectral peaks showed a slight increase in variability after training.</p>
</sec>
</sec>
<sec sec-type="discussion" id="s4">
<title>Discussion</title>
<p>Five English-speaking subjects learned a novel consonant (a voiced, coronal, palatal stop) following a brief training procedure involving visual augmented feedback of tongue movement. The results of the kinematic analyses indicated that real-time visual (articulatory) feedback improved the accuracy of consonant place of articulation: feedback training corresponded with a rapid increase in the accuracy of tongue-tip spatial positioning, and post-training probes indicated (short-term) retention of the learned skill.</p>
<p>Acoustic data for talkers' burst spectra obtained pre- and post-training only partially confirmed the kinematic findings, and there were a number of differences noted from predictions. First, for those talkers that showed diffuse spectra (e.g., with two peaks between 1.2 and 3.6 kHz), the spectra were falling, instead of rising. This may have been due to a number of possible factors, including the current choice of a Kaiser window for spectral analysis. Some of the original studies, such as those which first noted the classic “diffuse rising” patterns in spectral slices, fitted half-Hamming windows over the burst to obtain optimum pre-emphasis for LPC analysis (e.g., Stevens and Blumstein,
<xref rid="B107" ref-type="bibr">1978</xref>
). Second, talker F02 showed mid-range (“compact”) spectral peaks ranging between 2.0 and 3.0 kHz. This may be due to tongue shape, which can affect the spectral characteristics of the stop burst. For example, laminal (tongue blade) articulation results in relatively even spectral spread, while apical (tongue-tip) articulation results in strong mid-frequency peaks (Ladefoged and Maddieson,
<xref rid="B57" ref-type="bibr">1996</xref>
) and less spread (Fant,
<xref rid="B26" ref-type="bibr">1973</xref>
). In the present data, the spectra of talker F02 fit the pattern of a more apical production.</p>
<p>Despite individual differences, there was some evidence supporting the notion of training effects in the acoustic data. Chiefly, the three subjects with diffuse spectra (M01, M02, and M03) showed decreased variability (lowered standard deviations) following training, suggesting stabilized articulatory behavior. Although the current data are few, they suggest that burst spectral variability may be a useful metric to explore in future studies.</p>
<p>It was predicted that spectral peaks in the 3.0–5.0 kHz range would lower in frequency as talkers improved their place of articulation, with training. However, the findings do not generally support this prediction: Talker M03 showed this pattern, M02 showed a trend, F02 showed no differences, and M01 trended in the opposite direction, with higher spectral peaks after training. Since the kinematic data establish that all talkers significantly increased tongue placement accuracy post-training, we speculate that several factors affecting burst spectra (e.g., tongue shape, background noise, or room acoustics) may have obscured any such underlying spectral shifts for the talkers. Future research should examine how burst spectra may be best used to evaluate outcomes in speech training studies.</p>
<p>The current kinematic data replicate and extend the findings of Ouni (
<xref rid="B81" ref-type="bibr">2013</xref>
), who found that talkers produced tongue body gestures more accurately after being exposed to a short training session of real-time ultrasound feedback (post-test) than when recorded at baseline (pre-test). The present results are also consistent with earlier work from our laboratory, which found that monolingual English speakers showed faster and more effective learning of the Japanese post-alveolar flap /ɽ/ using EMA-based visual feedback, compared with traditional Japanese pronunciation instruction (Levitt and Katz,
<xref rid="B58" ref-type="bibr">2008</xref>
). Taken together with the experimental data from this study, these findings provide evidence that EMA-based articulatory visual feedback may help L2 learners acquire novel consonant distinctions.</p>
<p>However, a number of caveats must be considered. First, the current data are limited and the study should therefore be considered preliminary. The number of subjects tested was small (
<italic>n</italic>
= 5). Also, since the consonant trained, /
<underline>ɖ</underline>
/, is not a phoneme in any of the world's languages, it was not possible to include perceptual data, such as native listener judgments (e.g., Levitt and Katz,
<xref rid="B59" ref-type="bibr">2010</xref>
). Additional data obtained from more talkers will therefore be required before any firm conclusions can be drawn concerning the relation to natural language pronunciation.</p>
<p>Second, real-time (live) examples were given to subjects by the experimenter (SM) during the training phase, allowing for the possibility of experimenter bias. This procedure was adopted to simulate a typical second-language instruction setting, and care was taken to produce consistent examples, so as not to introduce “unfair” variability at the start of the experiment. Nevertheless, in retrospect it would have been optimal to have included a condition in which talkers were trained with pre-recorded examples, to eliminate this potential bias.</p>
<p>Third, since articulatory training is assumed to draw on principles of motor learning, several experimental factors must be controlled before it is possible to conclude that a given intervention is optimal for a skill being acquired, generalized, or maintained (e.g., Maas et al.,
<xref rid="B63" ref-type="bibr">2008</xref>
; Bislick et al.,
<xref rid="B10" ref-type="bibr">2012</xref>
; Schmidt and Lee,
<xref rid="B95" ref-type="bibr">2013</xref>
; Sigrist et al.,
<xref rid="B99" ref-type="bibr">2013</xref>
). For example, Ballard et al. (
<xref rid="B5" ref-type="bibr">2012</xref>
) conducted a study in which a group of English talkers was taught the Russian trilled /r/ sound using an EPG-based visual feedback system. In a short-term (five session) learning paradigm, subjects practiced in conditions either with continuous visual feedback provided by an EPG system, or were given no visual feedback. The results suggested that providing kinematic feedback continually though treatment corresponded with lower skill retention. This finding suggests that speech training follows the principle that kinematic feedback is most beneficial in the early phases of training, but may interfere with long-term retention if provided throughout training (Swinnen et al.,
<xref rid="B111" ref-type="bibr">1993</xref>
; Hodges and Franks,
<xref rid="B44" ref-type="bibr">2001</xref>
; Schmidt and Lee,
<xref rid="B95" ref-type="bibr">2013</xref>
). A pattern in the current data also potentially supports this principle. Three of the five participants (M01, M03, and F02) reached their maximum performance in the post-training phase, immediately after the feedback was removed. While this pattern was not statistically significant, it may suggest some interference from the continuous feedback used here. Future research should examine factors such as feedback type and frequency in order to optimize speech sound learning.</p>
<p>The current findings support the notion of a visual feedback pathway during speech processing, as proposed in the ACT neurocomputational model of speech production (Kröger and Kannampuzha,
<xref rid="B54" ref-type="bibr">2008</xref>
). Similar to the DIVA model, ACT relies on feedforward and feedback pathways between distributed neural activation patterns, or maps. ACT includes explicit provisions for separate visual and auditory information processing. In Figure
<xref ref-type="fig" rid="F5">5</xref>
, we present a simplified model of ACT (adapted from Kröger et al.,
<xref rid="B55" ref-type="bibr">2009</xref>
) with (optional) modifications added to highlight pathways for external and internal audiovisual input. Since people do not ordinarily rely on visual feedback of tongue movement, these modifications are intended to explain how people learn under conditions of augmented feedback, rather than serving as key components of everyday speech.</p>
<fig id="F5" position="float">
<label>Figure 5</label>
<caption>
<p>
<bold>Simplified version of ACT model (Kröger and Kannampuzha,
<xref rid="B54" ref-type="bibr">2008</xref>
), showing input pathways for external audiovisual stimuli (oval at bottom right) and optional feedback circuits to the vocal tract (shaded box at bottom)</bold>
. Visual feedback (dotted line) is provided by either external (mirroring) or internal (instrumental augmented) routes.</p>
</caption>
<graphic xlink:href="fnhum-09-00612-g0005"></graphic>
</fig>
<p>The external input route (dotted circle on the right) indicates an outside speech source, that is, speech that is heard and observed from human talkers or from a computerized training agent (e.g., BALDI, ARTUR, ATH, or Vivian). The input audio and visual data are received, preprocessed, and relayed as input to respective unimodal maps. These maps yield output to a multimodal phonetic map that also receives (as input) information from a somatosensory map and from a phonemic map. Reciprocal feedback connections between the phonetic map, visual-phonetic processing, and auditory-phonetic processing modules can account for training effects from computerized training avatars. These pathways would presumably also be involved in AV model-learning behavior, including lip-reading abilities (see Bernstein and Liebenthal,
<xref rid="B8" ref-type="bibr">2014</xref>
for review) and compensatory tendencies noted in individuals with left-hemisphere brain damage, who appear to benefit from visual entrainment to talking mouths other than their own (Fridriksson et al.,
<xref rid="B29" ref-type="bibr">2012</xref>
).</p>
<p>In the (internal) visual feedback route (dotted arrows), a talker's own speech articulation is observed during production. This may include simple mirroring of the lips and jaw, or instrumentally augmented visualizations of the tongue (via EMA, ultrasound, MRI, or articulatory inversion systems that convert sound signals to visual images of the articulators; e.g., Hueber et al.,
<xref rid="B46" ref-type="bibr">2012</xref>
). The remaining audio and visual preprocessing and mapping stages are similar between this internal route and the external (modeled) pathways. The present findings of improved consonantal place of articulation under conditions of visual (self) feedback training support this internal route and the role of body sense/motor familiarity. This internal route may also help explain a number of other phenomena described in the literature, including the fact that talkers can distinguish between natural and unnatural tongue movements displayed by an avatar (Engwall and Wik,
<xref rid="B23" ref-type="bibr">2009</xref>
), and that training systems based on a talkers' own speech may be especially beneficial for L2 learners (see Felps et al.,
<xref rid="B28" ref-type="bibr">2009</xref>
for discussion).</p>
<p>The actual neurophysiological mechanisms underlying AV learning and feedback are currently being investigated. Recent work on oral somatosensory awareness suggests people have a unified “mouth image” that may be qualitatively different from other parts of the body (Haggard and de Boer,
<xref rid="B37" ref-type="bibr">2014</xref>
). Since visual feedback does not ordinarily play a role in mouth experiences, other attributes, such as self-touch, may play a heightened role. For instance, Engelen et al. (
<xref rid="B19" ref-type="bibr">2002</xref>
) note that subjects can achieve high accuracy in determining the size of ball-bearings placed in the mouth, but show reduced performance when fitted with a plastic palate. This suggests that relative movement of an object between tongue and palate is important in oral size perception. We speculate that visual feedback systems rely in part on an oral self-touch mechanism, by visually guiding participants to the correct place of articulation, at which point somatosensory processes take over. This mechanism may prove particularly important for consonants, as opposed to vowels, which are produced with less articulatory contact.</p>
<p>Providing real-time motor feedback may engage different cortical pathways than are recruited in learning systems that employ more traditional methodologies. For example, Farrer et al. (
<xref rid="B27" ref-type="bibr">2003</xref>
) conducted positron emission tomography (PET) experiments in which subjects controlled a virtual hand on a screen under conditions ranging from full control, to partial control, to a condition in which another person controlled the hand (no control). The results showed right inferior parietal lobule activation when subjects felt least in control of the hand, with reverse covariation in the insula. A crucial aspect here is corporeal identity, the feeling of one's own body, which helps determine motor behavior in the environment. Data suggest that body awareness is supported by a large network of neurological structures including parietal and insular cortex, with primary and secondary somatosensory cortex, insula, and posterior parietal cortex playing specific roles (see Daprati et al.,
<xref rid="B16" ref-type="bibr">2010</xref>
for review). A region of particular interest is the right inferior parietal lobule (IPL), often associated to own-body perception and other body discrimination (Berlucchi and Aglioti,
<xref rid="B6" ref-type="bibr">1997</xref>
; Farrer et al.,
<xref rid="B27" ref-type="bibr">2003</xref>
; Uddin et al.,
<xref rid="B117" ref-type="bibr">2006</xref>
). Additional neural structures that likely play a role in augmented feedback training systems include those associated with reward dependence during behavioral performance, including lateral prefrontal cortex (Pochon et al.,
<xref rid="B83" ref-type="bibr">2002</xref>
; Liu et al.,
<xref rid="B60" ref-type="bibr">2011</xref>
; Dayan et al.,
<xref rid="B18" ref-type="bibr">2014</xref>
). As behavioral data accrue with respect to both external (mirroring) and internal (“tongue reading”) visual speech feedback, it will be important to also describe the relevant neural control structures, in order to develop more complete models of speech production.</p>
<p>In summary, we have presented small-scale but promising results from an EMA-based feedback investigation suggesting that augmented visual information concerning one's own tongue movements boosts skill acquisition during the learning of consonant place of articulation. Taken together with other recent data (e.g., Levitt and Katz,
<xref rid="B59" ref-type="bibr">2010</xref>
; Ouni,
<xref rid="B81" ref-type="bibr">2013</xref>
; Suemitsu et al.,
<xref rid="B108" ref-type="bibr">2013</xref>
) the results may have potentially important implications for models of speech production. Specifically, distinct AV learning mechanisms (and likely, underlying neural substrates) appear to be engaged for different types of CAPT systems, with interactive, on-line, eye-to-tongue coordination involved in systems such as
<italic>Opti-Speech</italic>
(and perhaps
<italic>Vizart3D</italic>
, Hueber et al.,
<xref rid="B46" ref-type="bibr">2012</xref>
) being arguably different from the processing involved in using external avatar trainers, such as ARTUR, BALDI, ATH, or Vivian. These different processing routes may be important when interpreting other data, such as the results of real-time, discordant, cross-modal feedback (e.g., the McGurk effect). Future studies should focus on extending the range of speech sounds, features, and articulatory structures trained with real-time feedback, with attention to vowels as well as consonants (see Mehta and Katz,
<xref rid="B73" ref-type="bibr">2015</xref>
). As findings are strengthened with designs that systematically test motor training principles, the results may open new avenues for understanding how AV information is used in speech processing.</p>
</sec>
<sec id="s5">
<title>Author contributions</title>
<p>WK and SM designed the experiments. SM recruited the participants and collected the data. WK and SM performed the kinematic analysis. WK conducted the spectral analysis. WK and SM wrote the manuscript.</p>
<sec>
<title>Conflict of interest statement</title>
<p>This research was partially supported by a grant to Vulintus, LLC entitled “Development of a software package for speech therapy” (NIH-SBIR 1 R43 DC013467). However, the sources of support for this work had no role in the study design, collection, analysis or interpretation of data, or the decision to submit this report for publication. The corresponding author (William F. Katz) had full access to all of the data in the study and takes complete responsibility for the integrity and accuracy of the data. The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
</sec>
</body>
<back>
<ack>
<p>The authors gratefully acknowledge support from the University of Texas at Dallas Office of Sponsored Projects, the UTD Callier Center Excellence in Education Fund, and a grant awarded by NIH/NIDCD (R43 DC013467). We thank the participants for volunteering their time and Carstens Medizinelektronik GmbH for material support toward our research. We would also like to thank Marcus Jones, Amy Berglund, Cameron Watkins, Bill Watts, and Holle Carey for their contributions in apparatus design, data collection, data processing, and other support for this project.</p>
</ack>
<sec sec-type="supplementary-material" id="s6">
<title>Supplementary material</title>
<p>The Supplementary Material for this article can be found online at:
<ext-link ext-link-type="uri" xlink:href="http://journal.frontiersin.org/article/10.3389/fnhum.2015.00612">http://journal.frontiersin.org/article/10.3389/fnhum.2015.00612</ext-link>
</p>
<supplementary-material content-type="local-data">
<media xlink:href="Audio1.WAV">
<caption>
<p>Click here for additional data file.</p>
</caption>
</media>
</supplementary-material>
<supplementary-material content-type="local-data">
<media xlink:href="Video1.MOV">
<caption>
<p>Click here for additional data file.</p>
</caption>
</media>
</supplementary-material>
</sec>
<ref-list>
<title>References</title>
<ref id="B1">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Arbib</surname>
<given-names>M. A.</given-names>
</name>
</person-group>
(
<year>2005</year>
).
<article-title>From monkey-like action recognition to human language: an evolutionary framework for neurolinguistics</article-title>
.
<source>Behav. Brain Sci.</source>
<volume>28</volume>
,
<fpage>105</fpage>
<lpage>124</lpage>
.
<pub-id pub-id-type="doi">10.1017/S0140525X05000038</pub-id>
<pub-id pub-id-type="pmid">16201457</pub-id>
</mixed-citation>
</ref>
<ref id="B2">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Arnold</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Hill</surname>
<given-names>F.</given-names>
</name>
</person-group>
(
<year>2001</year>
).
<article-title>Bisensory augmentation: a speechreading advantage when speech is clearly audible and intact</article-title>
.
<source>Br. J. Psychol.</source>
<volume>92</volume>
(
<issue>Pt 2</issue>
),
<fpage>339</fpage>
<lpage>355</lpage>
.
<pub-id pub-id-type="doi">10.1348/000712601162220</pub-id>
<pub-id pub-id-type="pmid">11802877</pub-id>
</mixed-citation>
</ref>
<ref id="B3">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Badin</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Elisei</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Bailly</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Tarabalka</surname>
<given-names>Y.</given-names>
</name>
</person-group>
(
<year>2008</year>
).
<article-title>An audiovisual talking head for augmented speech generation: models and animations based on a real speaker's articulatory data</article-title>
, in
<source>Vth Conference on Articulated Motion and Deformable Objects (AMDO 2008, LNCS 5098)</source>
, eds
<person-group person-group-type="editor">
<name>
<surname>Perales</surname>
<given-names>F. J.</given-names>
</name>
<name>
<surname>Fisher</surname>
<given-names>R. B.</given-names>
</name>
</person-group>
(
<publisher-loc>Berlin; Heidelberg</publisher-loc>
:
<publisher-name>Springer Verlag</publisher-name>
),
<fpage>132</fpage>
<lpage>143</lpage>
.
<pub-id pub-id-type="doi">10.1007/978-3-540-70517-8_14</pub-id>
</mixed-citation>
</ref>
<ref id="B4">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Badin</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Tarabalka</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Elisei</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Bailly</surname>
<given-names>G.</given-names>
</name>
</person-group>
(
<year>2010</year>
).
<article-title>Can you ‘read’ tongue movements? Evaluation of the contribution of tongue display to speech understanding</article-title>
.
<source>Speech Commun.</source>
<volume>52</volume>
,
<fpage>493</fpage>
<lpage>503</lpage>
.
<pub-id pub-id-type="doi">10.1016/j.specom.2010.03.002</pub-id>
</mixed-citation>
</ref>
<ref id="B5">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ballard</surname>
<given-names>K. J.</given-names>
</name>
<name>
<surname>Smith</surname>
<given-names>H. D.</given-names>
</name>
<name>
<surname>Paramatmuni</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>McCabe</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Theodoros</surname>
<given-names>D. G.</given-names>
</name>
<name>
<surname>Murdoch</surname>
<given-names>B. E.</given-names>
</name>
</person-group>
(
<year>2012</year>
).
<article-title>Amount of kinematic feedback affects learning of speech motor skills</article-title>
.
<source>Motor Contr.</source>
<volume>16</volume>
,
<fpage>106</fpage>
<lpage>119</lpage>
.
<pub-id pub-id-type="pmid">22402216</pub-id>
</mixed-citation>
</ref>
<ref id="B6">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Berlucchi</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Aglioti</surname>
<given-names>S.</given-names>
</name>
</person-group>
(
<year>1997</year>
).
<article-title>The body in the brain: neural bases of corporeal awareness</article-title>
.
<source>Trends Neurosci.</source>
<volume>20</volume>
,
<fpage>560</fpage>
<lpage>564</lpage>
.
<pub-id pub-id-type="doi">10.1016/S0166-2236(97)01136-3</pub-id>
<pub-id pub-id-type="pmid">9416668</pub-id>
</mixed-citation>
</ref>
<ref id="B7">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bernhardt</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Gick</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Bacsfalvi</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Adler-Bock</surname>
<given-names>M.</given-names>
</name>
</person-group>
(
<year>2005</year>
).
<article-title>Ultrasound in speech therapy with adolescents and adults</article-title>
.
<source>Clin. Linguist. Phon.</source>
<volume>19</volume>
,
<fpage>605</fpage>
<lpage>617</lpage>
.
<pub-id pub-id-type="doi">10.1080/02699200500114028</pub-id>
<pub-id pub-id-type="pmid">16206487</pub-id>
</mixed-citation>
</ref>
<ref id="B8">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bernstein</surname>
<given-names>L. E.</given-names>
</name>
<name>
<surname>Liebenthal</surname>
<given-names>E.</given-names>
</name>
</person-group>
(
<year>2014</year>
).
<article-title>Neural pathways for visual speech perception</article-title>
.
<source>Front. Neurosci.</source>
<volume>8</volume>
:
<issue>386</issue>
.
<pub-id pub-id-type="doi">10.3389/fnins.2014.00386</pub-id>
<pub-id pub-id-type="pmid">25520611</pub-id>
</mixed-citation>
</ref>
<ref id="B9">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Berry</surname>
<given-names>J. J.</given-names>
</name>
</person-group>
(
<year>2011</year>
).
<article-title>Accuracy of the NDI wave speech research system</article-title>
.
<source>J. Speech Lang. Hear. Res.</source>
<volume>54</volume>
,
<fpage>1295</fpage>
<lpage>1301</lpage>
.
<pub-id pub-id-type="doi">10.1044/1092-4388(2011/10-0226)</pub-id>
<pub-id pub-id-type="pmid">21498575</pub-id>
</mixed-citation>
</ref>
<ref id="B10">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bislick</surname>
<given-names>L. P.</given-names>
</name>
<name>
<surname>Weir</surname>
<given-names>P. C.</given-names>
</name>
<name>
<surname>Spencer</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Kendall</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Yorkston</surname>
<given-names>K. M.</given-names>
</name>
</person-group>
(
<year>2012</year>
).
<article-title>Do principles of motor learning enhance retention and transfer of speech skills? A systematic review</article-title>
.
<source>Aphasiology</source>
<volume>26</volume>
,
<fpage>709</fpage>
<lpage>728</lpage>
.
<pub-id pub-id-type="doi">10.1080/02687038.2012.676888</pub-id>
</mixed-citation>
</ref>
<ref id="B11">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Boersma</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Weenink</surname>
<given-names>D.</given-names>
</name>
</person-group>
(
<year>2001</year>
).
<article-title>Praat, a system for doing phonetics by computer</article-title>
.
<source>Glot International</source>
<volume>5</volume>
,
<fpage>341</fpage>
<lpage>345</lpage>
.</mixed-citation>
</ref>
<ref id="B12">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Civier</surname>
<given-names>O.</given-names>
</name>
<name>
<surname>Tasko</surname>
<given-names>S. M.</given-names>
</name>
<name>
<surname>Guenther</surname>
<given-names>F. H.</given-names>
</name>
</person-group>
(
<year>2010</year>
).
<article-title>Overreliance on auditory feedback may lead to sound/syllable repetitions: simulations of stuttering and fluency-inducing conditions with a neural model of speech production</article-title>
.
<source>J. Fluency Disord.</source>
<volume>35</volume>
,
<fpage>246</fpage>
<lpage>279</lpage>
.
<pub-id pub-id-type="doi">10.1016/j.jfludis.2010.05.002</pub-id>
<pub-id pub-id-type="pmid">20831971</pub-id>
</mixed-citation>
</ref>
<ref id="B13">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Curio</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Neuloh</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Numminen</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Jousmäki</surname>
<given-names>V.</given-names>
</name>
<name>
<surname>Hari</surname>
<given-names>R.</given-names>
</name>
</person-group>
(
<year>2000</year>
).
<article-title>Speaking modifies voice-evoked activity in the human auditory cortex</article-title>
.
<source>Hum. Brain Mapp.</source>
<volume>9</volume>
,
<fpage>183</fpage>
<lpage>191</lpage>
.
<pub-id pub-id-type="doi">10.1002/(SICI)1097-0193(200004)9:4<183::AID-HBM1>3.0.CO;2-Z</pub-id>
<pub-id pub-id-type="pmid">10770228</pub-id>
</mixed-citation>
</ref>
<ref id="B14">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>D'Ausilio</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Bartoli</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Maffongelli</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Berry</surname>
<given-names>J. J.</given-names>
</name>
<name>
<surname>Fadiga</surname>
<given-names>L.</given-names>
</name>
</person-group>
(
<year>2014</year>
).
<article-title>Vision of tongue movements bias auditory speech perception</article-title>
.
<source>Neuropsychologia</source>
<volume>63</volume>
,
<fpage>85</fpage>
<lpage>91</lpage>
.
<pub-id pub-id-type="doi">10.1016/j.neuropsychologia.2014.08.018</pub-id>
<pub-id pub-id-type="pmid">25172391</pub-id>
</mixed-citation>
</ref>
<ref id="B15">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Dagenais</surname>
<given-names>P. A.</given-names>
</name>
</person-group>
(
<year>1995</year>
).
<article-title>Electropalatography in the treatment of articulation/phonological disorders</article-title>
.
<source>J. Commun. Disord.</source>
<volume>28</volume>
,
<fpage>303</fpage>
<lpage>329</lpage>
.
<pub-id pub-id-type="doi">10.1016/0021-9924(95)00059-1</pub-id>
<pub-id pub-id-type="pmid">8576412</pub-id>
</mixed-citation>
</ref>
<ref id="B16">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Daprati</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Sirigu</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Nico</surname>
<given-names>D.</given-names>
</name>
</person-group>
(
<year>2010</year>
).
<article-title>Body and movement: consciousness in the parietal lobes</article-title>
.
<source>Neuropsychologia</source>
<volume>48</volume>
,
<fpage>756</fpage>
<lpage>762</lpage>
.
<pub-id pub-id-type="doi">10.1016/j.neuropsychologia.2009.10.008</pub-id>
<pub-id pub-id-type="pmid">19837100</pub-id>
</mixed-citation>
</ref>
<ref id="B17">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Dart</surname>
<given-names>S. N.</given-names>
</name>
</person-group>
(
<year>1991</year>
).
<source>Articulatory and Acoustic Properties of Apical and Laminal Articulations</source>
,
<volume>Vol. 79</volume>
<publisher-loc>Los Angeles, CA</publisher-loc>
:
<publisher-name>UCLA Phonetics Laboratory</publisher-name>
.</mixed-citation>
</ref>
<ref id="B18">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Dayan</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Hamann</surname>
<given-names>J. M.</given-names>
</name>
<name>
<surname>Averbeck</surname>
<given-names>B. B.</given-names>
</name>
<name>
<surname>Cohen</surname>
<given-names>L. G.</given-names>
</name>
</person-group>
(
<year>2014</year>
).
<article-title>Brain structural substrates of reward dependence during behavioral performance</article-title>
.
<source>J. Neurosci.</source>
<volume>34</volume>
,
<fpage>16433</fpage>
<lpage>16441</lpage>
.
<pub-id pub-id-type="doi">10.1523/JNEUROSCI.3141-14.2014</pub-id>
<pub-id pub-id-type="pmid">25471581</pub-id>
</mixed-citation>
</ref>
<ref id="B19">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Engelen</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Prinz</surname>
<given-names>J. F.</given-names>
</name>
<name>
<surname>Bosman</surname>
<given-names>F.</given-names>
</name>
</person-group>
(
<year>2002</year>
).
<article-title>The influence of density and material on oral perception of ball size with and without palatal coverage</article-title>
.
<source>Arch. Oral Biol.</source>
<volume>47</volume>
,
<fpage>197</fpage>
<lpage>201</lpage>
.
<pub-id pub-id-type="doi">10.1016/S0003-9969(01)00106-6</pub-id>
<pub-id pub-id-type="pmid">11839355</pub-id>
</mixed-citation>
</ref>
<ref id="B20">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Engwall</surname>
<given-names>O.</given-names>
</name>
</person-group>
(
<year>2008</year>
).
<article-title>Can audio-visual instructions help learners improve their articulation? An ultrasound study of short term changes</article-title>
, in
<source>Interspeech</source>
(
<publisher-loc>Brisbane</publisher-loc>
),
<fpage>2631</fpage>
<lpage>2634</lpage>
.</mixed-citation>
</ref>
<ref id="B21">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Engwall</surname>
<given-names>O.</given-names>
</name>
<name>
<surname>Bälter</surname>
<given-names>O.</given-names>
</name>
</person-group>
(
<year>2007</year>
).
<article-title>Pronunciation feedback from real and virtual language teachers</article-title>
.
<source>Comput. Assist. Lang. Learn.</source>
<volume>20</volume>
,
<fpage>235</fpage>
<lpage>262</lpage>
.
<pub-id pub-id-type="doi">10.1080/09588220701489507</pub-id>
</mixed-citation>
</ref>
<ref id="B22">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Engwall</surname>
<given-names>O.</given-names>
</name>
<name>
<surname>Bälter</surname>
<given-names>O.</given-names>
</name>
<name>
<surname>Öster</surname>
<given-names>A.-M.</given-names>
</name>
<name>
<surname>Kjellström</surname>
<given-names>H.</given-names>
</name>
</person-group>
(
<year>2006</year>
).
<article-title>Feedback management in the pronunciation training system ARTUR</article-title>
, in
<source>CHI'06 Extended Abstracts on Human Factors in Computing Systems</source>
(
<publisher-loc>Montreal</publisher-loc>
:
<publisher-name>ACM</publisher-name>
),
<fpage>231</fpage>
<lpage>234</lpage>
.</mixed-citation>
</ref>
<ref id="B23">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Engwall</surname>
<given-names>O.</given-names>
</name>
<name>
<surname>Wik</surname>
<given-names>P.</given-names>
</name>
</person-group>
(
<year>2009</year>
).
<article-title>Can you tell if tongue movements are real or synthesized?</article-title>
in
<source>Proceedings of Auditory-Visual Speech Processing</source>
(
<publisher-loc>Norwich</publisher-loc>
:
<publisher-name>University of East Anglia</publisher-name>
),
<fpage>96</fpage>
<lpage>101</lpage>
.</mixed-citation>
</ref>
<ref id="B24">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Erber</surname>
<given-names>N. P.</given-names>
</name>
</person-group>
(
<year>1975</year>
).
<article-title>Auditory-visual perception of speech</article-title>
.
<source>J. Speech Hear. Disord.</source>
<volume>40</volume>
,
<fpage>481</fpage>
<lpage>492</lpage>
.
<pub-id pub-id-type="doi">10.1044/jshd.4004.481</pub-id>
<pub-id pub-id-type="pmid">1234963</pub-id>
</mixed-citation>
</ref>
<ref id="B25">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Fagel</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Madany</surname>
<given-names>K.</given-names>
</name>
</person-group>
(
<year>2008</year>
).
<article-title>A 3-D virtual head as a tool for speech therapy for children</article-title>
, in
<source>Proceedings of Interspeech 2008</source>
(
<publisher-loc>Brisbane, QLD</publisher-loc>
),
<fpage>2643</fpage>
<lpage>2646</lpage>
.</mixed-citation>
</ref>
<ref id="B26">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Fant</surname>
<given-names>G.</given-names>
</name>
</person-group>
(
<year>1973</year>
).
<source>Speech Sounds and Features.</source>
<publisher-loc>Cambridge, MA</publisher-loc>
:
<publisher-name>The MIT Press</publisher-name>
.</mixed-citation>
</ref>
<ref id="B27">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Farrer</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Franck</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Georgieff</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Frith</surname>
<given-names>C. D.</given-names>
</name>
<name>
<surname>Decety</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Jeannerod</surname>
<given-names>M.</given-names>
</name>
</person-group>
(
<year>2003</year>
).
<article-title>Modulating the experience of agency: a positron emission tomography study</article-title>
.
<source>Neuroimage</source>
<volume>18</volume>
,
<fpage>324</fpage>
<lpage>333</lpage>
.
<pub-id pub-id-type="doi">10.1016/S1053-8119(02)00041-1</pub-id>
<pub-id pub-id-type="pmid">12595186</pub-id>
</mixed-citation>
</ref>
<ref id="B28">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Felps</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Bortfeld</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Gutierrez-Osuna</surname>
<given-names>R.</given-names>
</name>
</person-group>
(
<year>2009</year>
).
<article-title>Foreign accent conversion in computer assisted pronunciation training</article-title>
.
<source>Speech Commun.</source>
<volume>51</volume>
,
<fpage>920</fpage>
<lpage>932</lpage>
.
<pub-id pub-id-type="doi">10.1016/j.specom.2008.11.004</pub-id>
<pub-id pub-id-type="pmid">21124807</pub-id>
</mixed-citation>
</ref>
<ref id="B29">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Fridriksson</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Hubbard</surname>
<given-names>H. I.</given-names>
</name>
<name>
<surname>Hudspeth</surname>
<given-names>S. G.</given-names>
</name>
<name>
<surname>Holland</surname>
<given-names>A. L.</given-names>
</name>
<name>
<surname>Bonilha</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Fromm</surname>
<given-names>D.</given-names>
</name>
<etal></etal>
</person-group>
. (
<year>2012</year>
).
<article-title>Speech entrainment enables patients with Broca's aphasia to produce fluent speech</article-title>
.
<source>Brain</source>
<volume>135</volume>
,
<fpage>3815</fpage>
<lpage>3829</lpage>
.
<pub-id pub-id-type="doi">10.1093/brain/aws301</pub-id>
<pub-id pub-id-type="pmid">23250889</pub-id>
</mixed-citation>
</ref>
<ref id="B30">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Gentilucci</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Corballis</surname>
<given-names>M. C.</given-names>
</name>
</person-group>
(
<year>2006</year>
).
<article-title>From manual gesture to speech: a gradual transition</article-title>
.
<source>Neurosci. Biobehav. Rev.</source>
<volume>30</volume>
,
<fpage>949</fpage>
<lpage>960</lpage>
.
<pub-id pub-id-type="doi">10.1016/j.neubiorev.2006.02.004</pub-id>
<pub-id pub-id-type="pmid">16620983</pub-id>
</mixed-citation>
</ref>
<ref id="B31">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Goozee</surname>
<given-names>J. V.</given-names>
</name>
<name>
<surname>Murdoch</surname>
<given-names>B. E.</given-names>
</name>
<name>
<surname>Theodoros</surname>
<given-names>D. G.</given-names>
</name>
</person-group>
(
<year>1999</year>
).
<article-title>Electropalatographic assessment of articulatory timing characteristics in dysarthria following traumatic brain injury</article-title>
.
<source>J. Med. Speech Lang. Pathol.</source>
<volume>7</volume>
,
<fpage>209</fpage>
<lpage>222</lpage>
.</mixed-citation>
</ref>
<ref id="B32">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Guenther</surname>
<given-names>F. H.</given-names>
</name>
</person-group>
(
<year>2006</year>
).
<article-title>Cortical interactions underlying the production of speech sounds</article-title>
.
<source>J. Commun. Disord.</source>
<volume>39</volume>
,
<fpage>350</fpage>
<lpage>365</lpage>
.
<pub-id pub-id-type="doi">10.1016/j.jcomdis.2006.06.013</pub-id>
<pub-id pub-id-type="pmid">16887139</pub-id>
</mixed-citation>
</ref>
<ref id="B33">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Guenther</surname>
<given-names>F. H.</given-names>
</name>
<name>
<surname>Ghosh</surname>
<given-names>S. S.</given-names>
</name>
<name>
<surname>Tourville</surname>
<given-names>J. A.</given-names>
</name>
</person-group>
(
<year>2006</year>
).
<article-title>Neural modeling and imaging of the cortical interactions underlying syllable production</article-title>
.
<source>Brain Lang.</source>
<volume>96</volume>
,
<fpage>280</fpage>
<lpage>301</lpage>
.
<pub-id pub-id-type="doi">10.1016/j.bandl.2005.06.001</pub-id>
<pub-id pub-id-type="pmid">16040108</pub-id>
</mixed-citation>
</ref>
<ref id="B34">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Guenther</surname>
<given-names>F. H.</given-names>
</name>
<name>
<surname>Perkell</surname>
<given-names>J. S.</given-names>
</name>
</person-group>
(
<year>2004</year>
).
<article-title>A neural model of speech production and its application to studies of the role of auditory feedback in speech</article-title>
, in
<source>Speech Motor Control in Normal and Disordered Speech</source>
, eds
<person-group person-group-type="editor">
<name>
<surname>Maassen</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Kent</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Peters</surname>
<given-names>H. F. M.</given-names>
</name>
<name>
<surname>Van Lieshout</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Hulstijn</surname>
<given-names>W.</given-names>
</name>
</person-group>
(
<publisher-name>Oxford University Press</publisher-name>
),
<fpage>29</fpage>
<lpage>50</lpage>
.</mixed-citation>
</ref>
<ref id="B35">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Guenther</surname>
<given-names>F. H.</given-names>
</name>
<name>
<surname>Vladusich</surname>
<given-names>T.</given-names>
</name>
</person-group>
(
<year>2012</year>
).
<article-title>A neural theory of speech acquisition and production</article-title>
.
<source>J. Neurolinguistics</source>
<volume>25</volume>
,
<fpage>408</fpage>
<lpage>422</lpage>
.
<pub-id pub-id-type="doi">10.1016/j.jneuroling.2009.08.006</pub-id>
<pub-id pub-id-type="pmid">22711978</pub-id>
</mixed-citation>
</ref>
<ref id="B36">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Gunji</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Hoshiyama</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Kakigi</surname>
<given-names>R.</given-names>
</name>
</person-group>
(
<year>2001</year>
).
<article-title>Auditory response following vocalization: a magnetoencephalographic study</article-title>
.
<source>Clin. Neurophysiol</source>
.
<volume>112</volume>
,
<fpage>514</fpage>
<lpage>520</lpage>
.
<pub-id pub-id-type="doi">10.1016/S1388-2457(01)00462-X</pub-id>
<pub-id pub-id-type="pmid">11222973</pub-id>
</mixed-citation>
</ref>
<ref id="B37">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Haggard</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>de Boer</surname>
<given-names>L.</given-names>
</name>
</person-group>
(
<year>2014</year>
).
<article-title>Oral somatosensory awareness</article-title>
.
<source>Neurosci. Biobehav. Rev.</source>
<volume>47</volume>
,
<fpage>469</fpage>
<lpage>484</lpage>
.
<pub-id pub-id-type="doi">10.1016/j.neubiorev.2014.09.015</pub-id>
<pub-id pub-id-type="pmid">25284337</pub-id>
</mixed-citation>
</ref>
<ref id="B38">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Hamann</surname>
<given-names>S.</given-names>
</name>
</person-group>
(
<year>2003</year>
).
<source>The Phonetics and Phonology of Retroflexes</source>
. Ph.D. dissertation,
<publisher-name>Netherlands Graduate School of Linguistics, University of Utrecht</publisher-name>
,
<publisher-loc>LOT, Utrecht</publisher-loc>
.</mixed-citation>
</ref>
<ref id="B39">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hardcastle</surname>
<given-names>W. J.</given-names>
</name>
<name>
<surname>Gibbon</surname>
<given-names>F. E.</given-names>
</name>
<name>
<surname>Jones</surname>
<given-names>W.</given-names>
</name>
</person-group>
(
<year>1991</year>
).
<article-title>Visual display of tongue-palate contact: electropalatography in the assessment and remediation of speech disorders</article-title>
.
<source>Int. J. Lang. Commun. Disord.</source>
<volume>26</volume>
,
<fpage>41</fpage>
<lpage>74</lpage>
.
<pub-id pub-id-type="doi">10.3109/13682829109011992</pub-id>
<pub-id pub-id-type="pmid">1954115</pub-id>
</mixed-citation>
</ref>
<ref id="B40">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hartelius</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Theodoros</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Murdoch</surname>
<given-names>B.</given-names>
</name>
</person-group>
(
<year>2005</year>
).
<article-title>Use of electropalatography in the treatment of disordered articulation following traumatic brain injury: a case study</article-title>
.
<source>J. Med. Speech Lang. Pathol.</source>
<volume>13</volume>
,
<fpage>189</fpage>
<lpage>204</lpage>
.</mixed-citation>
</ref>
<ref id="B41">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Heinks-Maldonado</surname>
<given-names>T. H.</given-names>
</name>
<name>
<surname>Nagarajan</surname>
<given-names>S. S.</given-names>
</name>
<name>
<surname>Houde</surname>
<given-names>J. F.</given-names>
</name>
</person-group>
(
<year>2006</year>
).
<article-title>Magnetoencephalographic evidence for a precise forward model in speech production</article-title>
.
<source>Neuroreport</source>
<volume>17</volume>
,
<fpage>1375</fpage>
.
<pub-id pub-id-type="doi">10.1097/01.wnr.0000233102.43526.e9</pub-id>
<pub-id pub-id-type="pmid">16932142</pub-id>
</mixed-citation>
</ref>
<ref id="B42">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hickok</surname>
<given-names>G.</given-names>
</name>
</person-group>
(
<year>2009</year>
).
<article-title>Eight problems for the mirror neuron theory of action understanding in monkeys and humans</article-title>
.
<source>J. Cogn. Neurosci.</source>
<volume>21</volume>
,
<fpage>1229</fpage>
<lpage>1243</lpage>
.
<pub-id pub-id-type="doi">10.1162/jocn.2009.21189</pub-id>
<pub-id pub-id-type="pmid">19199415</pub-id>
</mixed-citation>
</ref>
<ref id="B43">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hickok</surname>
<given-names>G.</given-names>
</name>
</person-group>
(
<year>2010</year>
).
<article-title>The role of mirror neurons in speech perception and action word semantics</article-title>
.
<source>Lang. Cogn. Processes</source>
<volume>25</volume>
,
<fpage>749</fpage>
<lpage>776</lpage>
.
<pub-id pub-id-type="doi">10.1080/01690961003595572</pub-id>
</mixed-citation>
</ref>
<ref id="B44">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hodges</surname>
<given-names>N. J.</given-names>
</name>
<name>
<surname>Franks</surname>
<given-names>I. M.</given-names>
</name>
</person-group>
(
<year>2001</year>
).
<article-title>Learning a coordination skill: interactive effects of instruction and feedback</article-title>
.
<source>Res. Q. Exerc. Sport</source>
<volume>72</volume>
,
<fpage>132</fpage>
<lpage>142</lpage>
.
<pub-id pub-id-type="doi">10.1080/02701367.2001.10608943</pub-id>
<pub-id pub-id-type="pmid">11393876</pub-id>
</mixed-citation>
</ref>
<ref id="B45">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Houde</surname>
<given-names>J. F.</given-names>
</name>
<name>
<surname>Nagarajan</surname>
<given-names>S. S.</given-names>
</name>
<name>
<surname>Sekihara</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Merzenich</surname>
<given-names>M. M.</given-names>
</name>
</person-group>
(
<year>2002</year>
).
<article-title>Modulation of the auditory cortex during speech: an MEG study</article-title>
.
<source>J. Cogn. Neurosci.</source>
<volume>14</volume>
,
<fpage>1125</fpage>
<lpage>1138</lpage>
.
<pub-id pub-id-type="doi">10.1162/089892902760807140</pub-id>
<pub-id pub-id-type="pmid">12495520</pub-id>
</mixed-citation>
</ref>
<ref id="B46">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hueber</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Ben-Youssef</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Badin</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Bailly</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Elisei</surname>
<given-names>F.</given-names>
</name>
</person-group>
(
<year>2012</year>
).
<article-title>Vizart3D: retour articulatoire visuel pour l'aide à la prononciation</article-title>
, in
<source>29e Journées d'Études sur la Parole (JEP-TALN-RECITAL'2012)</source>
,
<volume>Vol. 5</volume>
,
<fpage>17</fpage>
<lpage>18</lpage>
.</mixed-citation>
</ref>
<ref id="B47">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Jacks</surname>
<given-names>A.</given-names>
</name>
</person-group>
(
<year>2008</year>
).
<article-title>Bite block vowel production in apraxia of speech</article-title>
.
<source>J. Speech Lang. Hear. Res.</source>
<volume>51</volume>
,
<fpage>898</fpage>
<lpage>913</lpage>
.
<pub-id pub-id-type="doi">10.1044/1092-4388(2008/066)</pub-id>
<pub-id pub-id-type="pmid">18658060</pub-id>
</mixed-citation>
</ref>
<ref id="B48">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Jakobson</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Fant</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Halle</surname>
<given-names>M.</given-names>
</name>
</person-group>
(
<year>1952</year>
).
<source>Preliminaries to Speech Analysis: the Distinctive Features and their Correlates.</source>
Technical Report No. 13, Acoustics Laboratory,
<publisher-name>MIT</publisher-name>
.</mixed-citation>
</ref>
<ref id="B49">
<mixed-citation publication-type="webpage">
<person-group person-group-type="author">
<name>
<surname>Katz</surname>
<given-names>W. F.</given-names>
</name>
<name>
<surname>Campbell</surname>
<given-names>T. F.</given-names>
</name>
<name>
<surname>Wang</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Farrar</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Eubanks</surname>
<given-names>J. C.</given-names>
</name>
<name>
<surname>Balasubramanian</surname>
<given-names>A.</given-names>
</name>
<etal></etal>
</person-group>
(
<year>2014</year>
).
<article-title>Opti-speech: A real-time, 3D visual feedback system for speech training</article-title>
, in
<source>Proceedings of Interspeech</source>
. Available online at:
<ext-link ext-link-type="uri" xlink:href="https://www.utdallas.edu/~wangjun/paper/Interspeech14_opti-speech.pdf">https://www.utdallas.edu/~wangjun/paper/Interspeech14_opti-speech.pdf</ext-link>
</mixed-citation>
</ref>
<ref id="B50">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Katz</surname>
<given-names>W. F.</given-names>
</name>
<name>
<surname>McNeil</surname>
<given-names>M. R.</given-names>
</name>
</person-group>
(
<year>2010</year>
).
<article-title>Studies of articulatory feedback treatment for apraxia of speech based on electromagnetic articulography</article-title>
.
<source>SIG 2 Perspect. Neurophysiol. Neurogenic Speech Lang. Disord.</source>
<volume>20</volume>
,
<fpage>73</fpage>
<lpage>79</lpage>
.
<pub-id pub-id-type="doi">10.1044/nnsld20.3.73</pub-id>
</mixed-citation>
</ref>
<ref id="B51">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Keating</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Lahiri</surname>
<given-names>A.</given-names>
</name>
</person-group>
(
<year>1993</year>
).
<article-title>Fronted velars, palatalized velars, and palatals</article-title>
.
<source>Phonetica</source>
<volume>50</volume>
,
<fpage>73</fpage>
<lpage>101</lpage>
.
<pub-id pub-id-type="doi">10.1159/000261928</pub-id>
<pub-id pub-id-type="pmid">8316582</pub-id>
</mixed-citation>
</ref>
<ref id="B52">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kohler</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Keysers</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Umiltá</surname>
<given-names>M. A.</given-names>
</name>
<name>
<surname>Fogassi</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Gallese</surname>
<given-names>V.</given-names>
</name>
<name>
<surname>Rizzolatti</surname>
<given-names>G.</given-names>
</name>
</person-group>
(
<year>2002</year>
).
<article-title>Hearing sounds, understanding actions: action representation in mirror neurons</article-title>
.
<source>Science</source>
<volume>297</volume>
,
<fpage>846</fpage>
<lpage>848</lpage>
.
<pub-id pub-id-type="doi">10.1126/science.1070311</pub-id>
<pub-id pub-id-type="pmid">12161656</pub-id>
</mixed-citation>
</ref>
<ref id="B53">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Kröger</surname>
<given-names>B. J.</given-names>
</name>
<name>
<surname>Birkholz</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Hoffmann</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Meng</surname>
<given-names>H.</given-names>
</name>
</person-group>
(
<year>2010</year>
).
<article-title>Audiovisual tools for phonetic and articulatory visualization in computer-aided pronunciation training</article-title>
, in
<source>Development of Multimodal Interfaces: Active Listening and Synchrony: Second COST 2102 International Training School</source>
,
<volume>Vol. 5967</volume>
, eds
<person-group person-group-type="editor">
<name>
<surname>Esposito</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Campbell</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Vogel</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Hussain</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Nijholt</surname>
<given-names>A.</given-names>
</name>
</person-group>
(
<publisher-loc>Dublin</publisher-loc>
:
<publisher-name>Springer</publisher-name>
),
<fpage>337</fpage>
<lpage>345</lpage>
.</mixed-citation>
</ref>
<ref id="B54">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Kröger</surname>
<given-names>B. J.</given-names>
</name>
<name>
<surname>Kannampuzha</surname>
<given-names>J.</given-names>
</name>
</person-group>
(
<year>2008</year>
).
<article-title>A neurofunctional model of speech production including aspects of auditory and audio-visual speech perception</article-title>
, in
<source>International Conference on Auditory-Visual Speech Processing</source>
(
<publisher-loc>Queensland</publisher-loc>
),
<fpage>83</fpage>
<lpage>88</lpage>
.</mixed-citation>
</ref>
<ref id="B55">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kröger</surname>
<given-names>B. J.</given-names>
</name>
<name>
<surname>Kannampuzha</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Neuschaefer-Rube</surname>
<given-names>C.</given-names>
</name>
</person-group>
(
<year>2009</year>
).
<article-title>Towards a neurocomputational model of speech production and perception</article-title>
.
<source>Speech Commun.</source>
<volume>51</volume>
,
<fpage>793</fpage>
<lpage>809</lpage>
.
<pub-id pub-id-type="doi">10.1016/j.specom.2008.08.002</pub-id>
</mixed-citation>
</ref>
<ref id="B56">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kroos</surname>
<given-names>C.</given-names>
</name>
</person-group>
(
<year>2012</year>
).
<article-title>Evaluation of the measurement precision in three-dimensional electromagnetic articulography (Carstens AG500)</article-title>
.
<source>J. Phonet.</source>
<volume>40</volume>
,
<fpage>453</fpage>
<lpage>465</lpage>
.
<pub-id pub-id-type="doi">10.1016/j.wocn.2012.03.002</pub-id>
</mixed-citation>
</ref>
<ref id="B57">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Ladefoged</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Maddieson</surname>
<given-names>I.</given-names>
</name>
</person-group>
(
<year>1996</year>
).
<source>The Sounds of the World's Languages</source>
.
<publisher-loc>Oxford</publisher-loc>
:
<publisher-name>Blackwell</publisher-name>
.</mixed-citation>
</ref>
<ref id="B58">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Levitt</surname>
<given-names>J. S.</given-names>
</name>
<name>
<surname>Katz</surname>
<given-names>W. F.</given-names>
</name>
</person-group>
(
<year>2008</year>
).
<article-title>Augmented visual feedback in second language learning: training Japanese post-alveolar flaps to American English speakers</article-title>
, in
<source>Proceedings of Meetings on Acoustics</source>
,
<volume>Vol. 2</volume>
(
<publisher-loc>New Orleans, LA</publisher-loc>
),
<fpage>060002</fpage>
.</mixed-citation>
</ref>
<ref id="B59">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Levitt</surname>
<given-names>J. S.</given-names>
</name>
<name>
<surname>Katz</surname>
<given-names>W. F.</given-names>
</name>
</person-group>
(
<year>2010</year>
).
<article-title>The effects of EMA-based augmented visual feedback on the English speakers' acquisition of the Japanese flap: a perceptual study</article-title>
, in
<source>Proceedings of Interspeech</source>
(
<publisher-loc>Makuhari, Chiba</publisher-loc>
),
<fpage>1862</fpage>
<lpage>1865</lpage>
.</mixed-citation>
</ref>
<ref id="B60">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Liu</surname>
<given-names>X.</given-names>
</name>
<name>
<surname>Hairston</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Schrier</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Fan</surname>
<given-names>J.</given-names>
</name>
</person-group>
(
<year>2011</year>
).
<article-title>Common and distinct networks underlying reward valence and processing stages: a meta-analysis of functional neuroimaging studies</article-title>
.
<source>Neurosci. Biobehav. Rev.</source>
<volume>35</volume>
,
<fpage>1219</fpage>
<lpage>1236</lpage>
.
<pub-id pub-id-type="doi">10.1016/j.neubiorev.2010.12.012</pub-id>
<pub-id pub-id-type="pmid">21185861</pub-id>
</mixed-citation>
</ref>
<ref id="B61">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Liu</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Massaro</surname>
<given-names>D. W.</given-names>
</name>
<name>
<surname>Chen</surname>
<given-names>T. H.</given-names>
</name>
<name>
<surname>Chan</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Perfetti</surname>
<given-names>C.</given-names>
</name>
</person-group>
(
<year>2007</year>
).
<article-title>Using visual speech for training Chinese pronunciation: an in-vivo experiment</article-title>
, in
<source>SLaTE</source>
.
<fpage>29</fpage>
<lpage>32</lpage>
.</mixed-citation>
</ref>
<ref id="B62">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Maas</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Mailend</surname>
<given-names>M.-L.</given-names>
</name>
<name>
<surname>Guenther</surname>
<given-names>F. H.</given-names>
</name>
</person-group>
(
<year>2015</year>
).
<article-title>Feedforward and feedback control in apraxia of speech (AOS): effects of noise masking on vowel production</article-title>
.
<source>J. Speech Lang. Hear. Res.</source>
<volume>58</volume>
,
<fpage>185</fpage>
<lpage>200</lpage>
.
<pub-id pub-id-type="doi">10.1044/2014_JSLHR-S-13-0300</pub-id>
<pub-id pub-id-type="pmid">25565143</pub-id>
</mixed-citation>
</ref>
<ref id="B63">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Maas</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Robin</surname>
<given-names>D. A.</given-names>
</name>
<name>
<surname>Austermann Hula</surname>
<given-names>S. N.</given-names>
</name>
<name>
<surname>Freedman</surname>
<given-names>S. E.</given-names>
</name>
<name>
<surname>Wulf</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Ballard</surname>
<given-names>K. J.</given-names>
</name>
<etal></etal>
</person-group>
. (
<year>2008</year>
).
<article-title>Principles of motor learning in treatment of motor speech disorders</article-title>
.
<source>Am. J. Speech Lang. Pathol.</source>
<volume>17</volume>
,
<fpage>277</fpage>
<lpage>298</lpage>
.
<pub-id pub-id-type="doi">10.1044/1058-0360(2008/025)</pub-id>
<pub-id pub-id-type="pmid">18663111</pub-id>
</mixed-citation>
</ref>
<ref id="B64">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Marian</surname>
<given-names>V.</given-names>
</name>
</person-group>
(
<year>2009</year>
).
<article-title>Audio-visual integration during bilingual language processing</article-title>
, in
<source>The Bilingual Mental Lexicon: Interdisciplinary Approaches</source>
, ed
<person-group person-group-type="editor">
<name>
<surname>Pavlenko</surname>
<given-names>A.</given-names>
</name>
</person-group>
(
<publisher-loc>Bristol, UK</publisher-loc>
:
<publisher-name>Multilingual Matters</publisher-name>
),
<fpage>52</fpage>
<lpage>78</lpage>
. </mixed-citation>
</ref>
<ref id="B65">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Massaro</surname>
<given-names>D. W.</given-names>
</name>
</person-group>
(
<year>1984</year>
).
<article-title>Children's perception of visual and auditory speech</article-title>
.
<source>Child Dev.</source>
<volume>55</volume>
,
<fpage>1777</fpage>
<lpage>1788</lpage>
.
<pub-id pub-id-type="doi">10.2307/1129925</pub-id>
<pub-id pub-id-type="pmid">6510054</pub-id>
</mixed-citation>
</ref>
<ref id="B66">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Massaro</surname>
<given-names>D. W.</given-names>
</name>
</person-group>
(
<year>2003</year>
).
<article-title>A computer-animated tutor for spoken and written language learning</article-title>
, in
<source>Proceedings of the 5th International Conference on Multimodal Interfaces</source>
(
<publisher-loc>New York, NY</publisher-loc>
:
<publisher-name>ACM</publisher-name>
),
<fpage>172</fpage>
<lpage>175</lpage>
.
<pub-id pub-id-type="doi">10.1145/958432.958466</pub-id>
</mixed-citation>
</ref>
<ref id="B67">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Massaro</surname>
<given-names>D. W.</given-names>
</name>
<name>
<surname>Bigler</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Chen</surname>
<given-names>T. H.</given-names>
</name>
<name>
<surname>Perlman</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Ouni</surname>
<given-names>S.</given-names>
</name>
</person-group>
(
<year>2008</year>
).
<article-title>Pronunciation training: the role of eye and ear</article-title>
, in
<source>Proceedings of Interspeech</source>
(
<publisher-loc>Brisbane, QLD</publisher-loc>
),
<fpage>2623</fpage>
<lpage>2626</lpage>
.</mixed-citation>
</ref>
<ref id="B68">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Massaro</surname>
<given-names>D. W.</given-names>
</name>
<name>
<surname>Cohen</surname>
<given-names>M. M.</given-names>
</name>
</person-group>
(
<year>1998</year>
).
<article-title>Visible speech and its potential value for speech training for hearing-impaired perceivers</article-title>
, in
<source>STiLL-Speech Technology in Language Learning</source>
(
<publisher-loc>Marholmen</publisher-loc>
),
<fpage>171</fpage>
<lpage>174</lpage>
.</mixed-citation>
</ref>
<ref id="B69">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Massaro</surname>
<given-names>D. W.</given-names>
</name>
<name>
<surname>Light</surname>
<given-names>J.</given-names>
</name>
</person-group>
(
<year>2003</year>
).
<article-title>Read my tongue movements: bimodal learning to perceive and produce non-native speech /r/ and /l/</article-title>
, in
<source>Proceedings of Eurospeech (Interspeech)</source>
(
<publisher-loc>Geneva</publisher-loc>
: <publisher-name>8th European Conference on Speech Communication and Technology</publisher-name>).</mixed-citation>
</ref>
<ref id="B70">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Massaro</surname>
<given-names>D. W.</given-names>
</name>
<name>
<surname>Liu</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Chen</surname>
<given-names>T. H.</given-names>
</name>
<name>
<surname>Perfetti</surname>
<given-names>C.</given-names>
</name>
</person-group>
(
<year>2006</year>
).
<article-title>A multilingual embodied conversational agent for tutoring speech and language learning</article-title>
, in
<source>Proceedings of Interspeech</source>
(
<publisher-loc>Pittsburgh, PA</publisher-loc>
).</mixed-citation>
</ref>
<ref id="B71">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Max</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Guenther</surname>
<given-names>F. H.</given-names>
</name>
<name>
<surname>Gracco</surname>
<given-names>V. L.</given-names>
</name>
<name>
<surname>Ghosh</surname>
<given-names>S. S.</given-names>
</name>
<name>
<surname>Wallace</surname>
<given-names>M. E.</given-names>
</name>
</person-group>
(
<year>2004</year>
).
<article-title>Unstable or insufficiently activated internal models and feedback-biased motor control as sources of dysfluency: a theoretical model of stuttering</article-title>
.
<source>Contemp. Issues Commun. Sci. Disord.</source>
<volume>31</volume>
,
<fpage>105</fpage>
<lpage>122</lpage>
.
<pub-id pub-id-type="pmid">26177690</pub-id>
</mixed-citation>
</ref>
<ref id="B72">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>McGurk</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>MacDonald</surname>
<given-names>J.</given-names>
</name>
</person-group>
(
<year>1976</year>
).
<article-title>Hearing lips and seeing voices</article-title>
.
<source>Nature</source>
<volume>264</volume>
,
<fpage>746</fpage>
<lpage>748</lpage>
.
<pub-id pub-id-type="doi">10.1038/264746a0</pub-id>
<pub-id pub-id-type="pmid">1012311</pub-id>
</mixed-citation>
</ref>
<ref id="B73">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Mehta</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Katz</surname>
<given-names>W. F.</given-names>
</name>
</person-group>
(
<year>2015</year>
).
<article-title>Articulatory and acoustic correlates of English front vowel productions by native Japanese speakers</article-title>
.
<source>J. Acoust. Soc. Am.</source>
<volume>137</volume>
,
<fpage>2380</fpage>
<lpage>2380</lpage>
.
<pub-id pub-id-type="doi">10.1121/1.4920648</pub-id>
</mixed-citation>
</ref>
<ref id="B74">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Mochida</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Kimura</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Hiroya</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Kitagawa</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Gomi</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Kondo</surname>
<given-names>T.</given-names>
</name>
</person-group>
(
<year>2013</year>
).
<article-title>Speech misperception: speaking and seeing interfere differently with hearing</article-title>
.
<source>PLoS ONE</source>
<volume>8</volume>
:
<fpage>e68619</fpage>
.
<pub-id pub-id-type="doi">10.1371/journal.pone.0068619</pub-id>
<pub-id pub-id-type="pmid">23844227</pub-id>
</mixed-citation>
</ref>
<ref id="B75">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Möttönen</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Schürmann</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Sams</surname>
<given-names>M.</given-names>
</name>
</person-group>
(
<year>2004</year>
).
<article-title>Time course of multisensory interactions during audiovisual speech perception in humans: a magnetoencephalographic study</article-title>
.
<source>Neurosci. Lett.</source>
<volume>363</volume>
,
<fpage>112</fpage>
<lpage>115</lpage>
.
<pub-id pub-id-type="doi">10.1016/j.neulet.2004.03.076</pub-id>
<pub-id pub-id-type="pmid">15172096</pub-id>
</mixed-citation>
</ref>
<ref id="B76">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Navarra</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Soto-Faraco</surname>
<given-names>S.</given-names>
</name>
</person-group>
(
<year>2007</year>
).
<article-title>Hearing lips in a second language: visual articulatory information enables the perception of second language sounds</article-title>
.
<source>Psychol. Res.</source>
<volume>71</volume>
,
<fpage>4</fpage>
<lpage>12</lpage>
.
<pub-id pub-id-type="doi">10.1007/s00426-005-0031-5</pub-id>
<pub-id pub-id-type="pmid">16362332</pub-id>
</mixed-citation>
</ref>
<ref id="B77">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Nordberg</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Göran</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Lohmander</surname>
<given-names>A.</given-names>
</name>
</person-group>
(
<year>2011</year>
).
<article-title>Electropalatography in the description and treatment of speech disorders in five children with cerebral palsy</article-title>
.
<source>Clin. Linguist. Phon.</source>
<volume>25</volume>
,
<fpage>831</fpage>
<lpage>852</lpage>
.
<pub-id pub-id-type="doi">10.3109/02699206.2011.573122</pub-id>
<pub-id pub-id-type="pmid">21591933</pub-id>
</mixed-citation>
</ref>
<ref id="B78">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Numbers</surname>
<given-names>M. E.</given-names>
</name>
<name>
<surname>Hudgins</surname>
<given-names>C. V.</given-names>
</name>
</person-group>
(
<year>1948</year>
).
<article-title>Speech perception in present day education for deaf children</article-title>
.
<source>Volta Rev.</source>
<volume>50</volume>
,
<fpage>449</fpage>
<lpage>456</lpage>
.</mixed-citation>
</ref>
<ref id="B79">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>O'Neill</surname>
<given-names>J. J.</given-names>
</name>
</person-group>
(
<year>1954</year>
).
<article-title>Contributions of the visual components of oral symbols to speech comprehension</article-title>
.
<source>J. Speech Hear. Disord.</source>
<volume>19</volume>
,
<fpage>429</fpage>
<lpage>439</lpage>
.
<pub-id pub-id-type="pmid">13222457</pub-id>
</mixed-citation>
</ref>
<ref id="B80">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ojanen</surname>
<given-names>V.</given-names>
</name>
<name>
<surname>Möttönen</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Pekkola</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Jääskeläinen</surname>
<given-names>I. P.</given-names>
</name>
<name>
<surname>Joensuu</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Autti</surname>
<given-names>T.</given-names>
</name>
<etal></etal>
</person-group>
. (
<year>2005</year>
).
<article-title>Processing of audiovisual speech in Broca's area</article-title>
.
<source>Neuroimage</source>
<volume>25</volume>
,
<fpage>333</fpage>
<lpage>338</lpage>
.
<pub-id pub-id-type="doi">10.1016/j.neuroimage.2004.12.001</pub-id>
<pub-id pub-id-type="pmid">15784412</pub-id>
</mixed-citation>
</ref>
<ref id="B81">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ouni</surname>
<given-names>S.</given-names>
</name>
</person-group>
(
<year>2013</year>
).
<article-title>Tongue control and its implication in pronunciation training</article-title>
.
<source>Comput. Assist. Lang. Learn.</source>
<volume>27</volume>
,
<fpage>439</fpage>
<lpage>453</lpage>
.
<pub-id pub-id-type="doi">10.1080/09588221.2012.761637</pub-id>
</mixed-citation>
</ref>
<ref id="B82">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Pekkola</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Ojanen</surname>
<given-names>V.</given-names>
</name>
<name>
<surname>Autti</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Jääskeläinen</surname>
<given-names>I. P.</given-names>
</name>
<name>
<surname>Möttönen</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Sams</surname>
<given-names>M.</given-names>
</name>
</person-group>
(
<year>2006</year>
).
<article-title>Attention to visual speech gestures enhances hemodynamic activity in the left planum temporale</article-title>
.
<source>Hum. Brain Mapp.</source>
<volume>27</volume>
,
<fpage>471</fpage>
<lpage>477</lpage>
.
<pub-id pub-id-type="doi">10.1002/hbm.20190</pub-id>
<pub-id pub-id-type="pmid">16161166</pub-id>
</mixed-citation>
</ref>
<ref id="B83">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Pochon</surname>
<given-names>J. B.</given-names>
</name>
<name>
<surname>Levy</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Fossati</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Lehericy</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Poline</surname>
<given-names>J. B.</given-names>
</name>
<name>
<surname>Pillon</surname>
<given-names>B.</given-names>
</name>
<etal></etal>
</person-group>
. (
<year>2002</year>
).
<article-title>The neural system that bridges reward and cognition in humans: an fMRI study</article-title>
.
<source>Proc. Natl. Acad. Sci. U.S.A.</source>
<volume>99</volume>
,
<fpage>5669</fpage>
<lpage>5674</lpage>
.
<pub-id pub-id-type="doi">10.1073/pnas.082111099</pub-id>
<pub-id pub-id-type="pmid">11960021</pub-id>
</mixed-citation>
</ref>
<ref id="B84">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Preston</surname>
<given-names>J. L.</given-names>
</name>
<name>
<surname>Leaman</surname>
<given-names>M.</given-names>
</name>
</person-group>
(
<year>2014</year>
).
<article-title>Ultrasound visual feedback for acquired apraxia of speech: a case report</article-title>
.
<source>Aphasiology</source>
<volume>28</volume>
,
<fpage>278</fpage>
<lpage>295</lpage>
.
<pub-id pub-id-type="doi">10.1080/02687038.2013.852901</pub-id>
</mixed-citation>
</ref>
<ref id="B85">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Preston</surname>
<given-names>J. L.</given-names>
</name>
<name>
<surname>McCabe</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Rivera-Campos</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Whittle</surname>
<given-names>J. L.</given-names>
</name>
<name>
<surname>Landry</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Maas</surname>
<given-names>E.</given-names>
</name>
</person-group>
(
<year>2014</year>
).
<article-title>Ultrasound visual feedback treatment and practice variability for residual speech sound errors</article-title>
.
<source>J. Speech Lang. Hear. Res.</source>
<volume>57</volume>
,
<fpage>2102</fpage>
<lpage>2115</lpage>
.
<pub-id pub-id-type="doi">10.1044/2014_JSLHR-S-14-0031</pub-id>
<pub-id pub-id-type="pmid">25087938</pub-id>
</mixed-citation>
</ref>
<ref id="B86">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Pulvermüller</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Fadiga</surname>
<given-names>L.</given-names>
</name>
</person-group>
(
<year>2010</year>
).
<article-title>Active perception: sensorimotor circuits as a cortical basis for language</article-title>
.
<source>Nat. Rev. Neurosci.</source>
<volume>11</volume>
,
<fpage>351</fpage>
<lpage>360</lpage>
.
<pub-id pub-id-type="doi">10.1038/nrn2811</pub-id>
<pub-id pub-id-type="pmid">20383203</pub-id>
</mixed-citation>
</ref>
<ref id="B87">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Pulvermüller</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Huss</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Kherif</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Moscoso del Prado Martin</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Hauk</surname>
<given-names>O.</given-names>
</name>
<name>
<surname>Shtyrov</surname>
<given-names>Y.</given-names>
</name>
</person-group>
(
<year>2006</year>
).
<article-title>Motor cortex maps articulatory features of speech sounds</article-title>
.
<source>Proc. Natl. Acad. Sci. U.S.A.</source>
<volume>103</volume>
,
<fpage>7865</fpage>
<lpage>7870</lpage>
.
<pub-id pub-id-type="doi">10.1073/pnas.0509989103</pub-id>
<pub-id pub-id-type="pmid">16682637</pub-id>
</mixed-citation>
</ref>
<ref id="B88">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Reetz</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Jongman</surname>
<given-names>A.</given-names>
</name>
</person-group>
(
<year>2009</year>
).
<source>Phonetics: Transcription, Production, Acoustics, and Perception.</source>
<publisher-loc>Chichester</publisher-loc>
:
<publisher-name>Wiley-Blackwell</publisher-name>
.</mixed-citation>
</ref>
<ref id="B89">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Reisberg</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>McLean</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Goldfield</surname>
<given-names>A.</given-names>
</name>
</person-group>
(
<year>1987</year>
).
<article-title>Easy to hear but hard to understand: a lip-reading advantage with intact auditory stimuli</article-title>
, in
<source>Hearing by Eye: The Psychology of Lip-reading</source>
, eds
<person-group person-group-type="editor">
<name>
<surname>Dodd</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Campbell</surname>
<given-names>R.</given-names>
</name>
</person-group>
(
<publisher-loc>Hillsdale, NJ</publisher-loc>
:
<publisher-name>Lawrence Erlbaum Associates</publisher-name>
),
<fpage>97</fpage>
<lpage>114</lpage>
.</mixed-citation>
</ref>
<ref id="B90">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Rizzolatti</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Arbib</surname>
<given-names>M. A.</given-names>
</name>
</person-group>
(
<year>1998</year>
).
<article-title>Language within our grasp</article-title>
.
<source>Trends Neurosci.</source>
<volume>21</volume>
,
<fpage>188</fpage>
<lpage>194</lpage>
.
<pub-id pub-id-type="doi">10.1016/S0166-2236(98)01260-0</pub-id>
<pub-id pub-id-type="pmid">9610880</pub-id>
</mixed-citation>
</ref>
<ref id="B91">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Rizzolatti</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Cattaneo</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Fabbri-Destro</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Rozzi</surname>
<given-names>S.</given-names>
</name>
</person-group>
(
<year>2014</year>
).
<article-title>Cortical mechanisms underlying the organization of goal-directed actions and mirror neuron-based action understanding</article-title>
.
<source>Physiol. Rev.</source>
<volume>94</volume>
,
<fpage>655</fpage>
<lpage>706</lpage>
.
<pub-id pub-id-type="doi">10.1152/physrev.00009.2013</pub-id>
<pub-id pub-id-type="pmid">24692357</pub-id>
</mixed-citation>
</ref>
<ref id="B92">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Rizzolatti</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Craighero</surname>
<given-names>L.</given-names>
</name>
</person-group>
(
<year>2004</year>
).
<article-title>The mirror-neuron system</article-title>
.
<source>Annu. Rev. Neurosci</source>
.
<volume>27</volume>
,
<fpage>169</fpage>
<lpage>192</lpage>
.
<pub-id pub-id-type="doi">10.1146/annurev.neuro.27.070203.144230</pub-id>
<pub-id pub-id-type="pmid">15217330</pub-id>
</mixed-citation>
</ref>
<ref id="B93">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Sams</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Möttönen</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Sihvonen</surname>
<given-names>T.</given-names>
</name>
</person-group>
(
<year>2005</year>
).
<article-title>Seeing and hearing others and oneself talk</article-title>
.
<source>Cogn. Brain Res.</source>
<volume>23</volume>
,
<fpage>429</fpage>
<lpage>435</lpage>
.
<pub-id pub-id-type="doi">10.1016/j.cogbrainres.2004.11.006</pub-id>
<pub-id pub-id-type="pmid">15820649</pub-id>
</mixed-citation>
</ref>
<ref id="B94">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Sato</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Troille</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Ménard</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Cathiard</surname>
<given-names>M.-A.</given-names>
</name>
<name>
<surname>Gracco</surname>
<given-names>V.</given-names>
</name>
</person-group>
(
<year>2013</year>
).
<article-title>Silent articulation modulates auditory and audiovisual speech perception</article-title>
.
<source>Exp. Brain Res.</source>
<volume>227</volume>
,
<fpage>275</fpage>
<lpage>288</lpage>
.
<pub-id pub-id-type="doi">10.1007/s00221-013-3510-8</pub-id>
<pub-id pub-id-type="pmid">23591689</pub-id>
</mixed-citation>
</ref>
<ref id="B95">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Schmidt</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Lee</surname>
<given-names>T.</given-names>
</name>
</person-group>
(
<year>2013</year>
).
<source>Motor Learning and Performance: From Principles to Application, 5th Edn</source>
.
<publisher-loc>Champaign, IL</publisher-loc>
:
<publisher-name>Human Kinetics</publisher-name>
.</mixed-citation>
</ref>
<ref id="B96">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Scruggs</surname>
<given-names>T. E.</given-names>
</name>
<name>
<surname>Mastropieri</surname>
<given-names>M. A.</given-names>
</name>
<name>
<surname>Casto</surname>
<given-names>G.</given-names>
</name>
</person-group>
(
<year>1987</year>
).
<article-title>The quantitative synthesis of single-subject research methodology and validation</article-title>
.
<source>Remedial Special Educ.</source>
<volume>8</volume>
,
<fpage>24</fpage>
<lpage>33</lpage>
.
<pub-id pub-id-type="doi">10.1177/074193258700800206</pub-id>
</mixed-citation>
</ref>
<ref id="B97">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Scruggs</surname>
<given-names>T. E.</given-names>
</name>
<name>
<surname>Mastropieri</surname>
<given-names>M. A.</given-names>
</name>
<name>
<surname>Cook</surname>
<given-names>S. B.</given-names>
</name>
<name>
<surname>Escobar</surname>
<given-names>C.</given-names>
</name>
</person-group>
(
<year>1986</year>
).
<article-title>Early intervention for children with conduct disorders: a quantitative synthesis of single-subject research</article-title>
.
<source>Behav. Disord.</source>
<volume>11</volume>
,
<fpage>260</fpage>
<lpage>271</lpage>
.</mixed-citation>
</ref>
<ref id="B98">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Shirahige</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Oki</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Morimoto</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Oisaka</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Minagi</surname>
<given-names>S.</given-names>
</name>
</person-group>
(
<year>2012</year>
).
<article-title>Dynamics of posterior tongue during pronunciation and voluntary tongue lift movement in young adults</article-title>
.
<source>J. Oral. Rehabil.</source>
<volume>39</volume>
,
<fpage>370</fpage>
<lpage>376</lpage>
.
<pub-id pub-id-type="doi">10.1111/j.1365-2842.2011.02283.x</pub-id>
<pub-id pub-id-type="pmid">22288951</pub-id>
</mixed-citation>
</ref>
<ref id="B99">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Sigrist</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Rauter</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Riener</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Wolf</surname>
<given-names>P.</given-names>
</name>
</person-group>
(
<year>2013</year>
).
<article-title>Augmented visual, auditory, haptic, and multimodal feedback in motor learning: a review</article-title>
.
<source>Psychon. Bull. Rev.</source>
<volume>20</volume>
,
<fpage>21</fpage>
<lpage>53</lpage>
.
<pub-id pub-id-type="doi">10.3758/s13423-012-0333-8</pub-id>
<pub-id pub-id-type="pmid">23132605</pub-id>
</mixed-citation>
</ref>
<ref id="B100">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Skipper</surname>
<given-names>J. I.</given-names>
</name>
<name>
<surname>Goldin-Meadow</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Nusbaum</surname>
<given-names>H. C.</given-names>
</name>
<name>
<surname>Small</surname>
<given-names>S. L.</given-names>
</name>
</person-group>
(
<year>2007a</year>
).
<article-title>Speech-associated gestures, Broca's area, and the human mirror system</article-title>
.
<source>Brain Lang.</source>
<volume>101</volume>
,
<fpage>260</fpage>
<lpage>277</lpage>
.
<pub-id pub-id-type="pmid">17533001</pub-id>
</mixed-citation>
</ref>
<ref id="B101">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Skipper</surname>
<given-names>J. I.</given-names>
</name>
<name>
<surname>Nusbaum</surname>
<given-names>H. C.</given-names>
</name>
<name>
<surname>Small</surname>
<given-names>S. L.</given-names>
</name>
</person-group>
(
<year>2005</year>
).
<article-title>Listening to talking faces: motor cortical activation during speech perception</article-title>
.
<source>Neuroimage</source>
<volume>25</volume>
,
<fpage>76</fpage>
<lpage>89</lpage>
.
<pub-id pub-id-type="doi">10.1016/j.neuroimage.2004.11.006</pub-id>
<pub-id pub-id-type="pmid">15734345</pub-id>
</mixed-citation>
</ref>
<ref id="B102">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Skipper</surname>
<given-names>J. I.</given-names>
</name>
<name>
<surname>Nusbaum</surname>
<given-names>H. C.</given-names>
</name>
<name>
<surname>Small</surname>
<given-names>S. L.</given-names>
</name>
</person-group>
(
<year>2006</year>
).
<article-title>Lending a helping hand to hearing: another motor theory of speech perception</article-title>
, in
<source>Action to Language Via the Mirror Neuron System</source>
, ed
<person-group person-group-type="editor">
<name>
<surname>Arbib</surname>
<given-names>M. A.</given-names>
</name>
</person-group>
(
<publisher-loc>Cambridge</publisher-loc>
:
<publisher-name>Cambridge University Press</publisher-name>
),
<fpage>250</fpage>
<lpage>285</lpage>
.</mixed-citation>
</ref>
<ref id="B103">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Skipper</surname>
<given-names>J. I.</given-names>
</name>
<name>
<surname>van Wassenhove</surname>
<given-names>V.</given-names>
</name>
<name>
<surname>Nusbaum</surname>
<given-names>H. C.</given-names>
</name>
<name>
<surname>Small</surname>
<given-names>S. L.</given-names>
</name>
</person-group>
(
<year>2007b</year>
).
<article-title>Hearing lips and seeing voices: how cortical areas supporting speech production mediate audiovisual speech perception</article-title>
.
<source>Cereb. Cortex</source>
<volume>17</volume>
,
<fpage>2387</fpage>
<lpage>2399</lpage>
.
<pub-id pub-id-type="doi">10.1093/cercor/bhl147</pub-id>
<pub-id pub-id-type="pmid">17218482</pub-id>
</mixed-citation>
</ref>
<ref id="B104">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Stella</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Stella</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Sigona</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Bernardini</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Grimaldi</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Gili Fivela</surname>
<given-names>B.</given-names>
</name>
</person-group>
(
<year>2013</year>
).
<article-title>Electromagnetic Articulography with AG500 and AG501</article-title>
, in
<source>14th Annual Conference of the International Speech Communication Association</source>
(
<publisher-loc>Lyon</publisher-loc>
),
<fpage>1316</fpage>
<lpage>1320</lpage>
.</mixed-citation>
</ref>
<ref id="B105">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Stevens</surname>
<given-names>K. N.</given-names>
</name>
</person-group>
(
<year>2008</year>
).
<source>Acoustic Phonetics</source>
.
<publisher-loc>Cambridge, MA</publisher-loc>
:
<publisher-name>MIT Press</publisher-name>
.</mixed-citation>
</ref>
<ref id="B106">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Stevens</surname>
<given-names>K. N.</given-names>
</name>
<name>
<surname>Blumstein</surname>
<given-names>S. E.</given-names>
</name>
</person-group>
(
<year>1975</year>
).
<article-title>Quantal aspects of consonant production and perception: a study of retroflex stop consonants</article-title>
.
<source>J. Phonet.</source>
<volume>3</volume>
,
<fpage>215</fpage>
<lpage>233</lpage>
.</mixed-citation>
</ref>
<ref id="B107">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Stevens</surname>
<given-names>K. N.</given-names>
</name>
<name>
<surname>Blumstein</surname>
<given-names>S. E.</given-names>
</name>
</person-group>
(
<year>1978</year>
).
<article-title>Invariant cues for place of articulation in stop consonants</article-title>
.
<source>J. Acoust. Soc. Am.</source>
<volume>64</volume>
,
<fpage>1358</fpage>
<lpage>1368</lpage>
.
<pub-id pub-id-type="doi">10.1121/1.382102</pub-id>
<pub-id pub-id-type="pmid">744836</pub-id>
</mixed-citation>
</ref>
<ref id="B108">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Suemitsu</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Ito</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Tiede</surname>
<given-names>M.</given-names>
</name>
</person-group>
(
<year>2013</year>
).
<article-title>An EMA-based articulatory feedback approach to facilitate L2 speech production learning</article-title>
.
<source>J. Acoust. Soc. Am.</source>
<volume>133</volume>
,
<fpage>3336</fpage>
<pub-id pub-id-type="doi">10.1121/1.4805613</pub-id>
</mixed-citation>
</ref>
<ref id="B109">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Sumby</surname>
<given-names>W. H.</given-names>
</name>
<name>
<surname>Pollack</surname>
<given-names>I.</given-names>
</name>
</person-group>
(
<year>1954</year>
).
<article-title>Visual contribution to speech intelligibility in noise</article-title>
.
<source>J. Acoust. Soc. Am.</source>
<volume>26</volume>
,
<fpage>212</fpage>
<lpage>215</lpage>
.
<pub-id pub-id-type="doi">10.1121/1.1907309</pub-id>
</mixed-citation>
</ref>
<ref id="B110">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Summerfield</surname>
<given-names>Q.</given-names>
</name>
<name>
<surname>McGrath</surname>
<given-names>M.</given-names>
</name>
</person-group>
(
<year>1984</year>
).
<article-title>Detection and resolution of audio-visual incompatibility in the perception of vowels</article-title>
.
<source>Q. J. Exp. Psychol. Hum. Exp. Psychol.</source>
<volume>36</volume>
,
<fpage>51</fpage>
<lpage>74</lpage>
.
<pub-id pub-id-type="doi">10.1080/14640748408401503</pub-id>
<pub-id pub-id-type="pmid">6536037</pub-id>
</mixed-citation>
</ref>
<ref id="B111">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Swinnen</surname>
<given-names>S. P.</given-names>
</name>
<name>
<surname>Walter</surname>
<given-names>C. B.</given-names>
</name>
<name>
<surname>Lee</surname>
<given-names>T. D.</given-names>
</name>
<name>
<surname>Serrien</surname>
<given-names>D. J.</given-names>
</name>
</person-group>
(
<year>1993</year>
).
<article-title>Acquiring bimanual skills: contrasting forms of information feedback for interlimb decoupling</article-title>
.
<source>J. Exp. Psychol. Learn. Mem. Cogn.</source>
<volume>19</volume>
,
<fpage>1328</fpage>
.
<pub-id pub-id-type="doi">10.1037/0278-7393.19.6.1328</pub-id>
<pub-id pub-id-type="pmid">8270889</pub-id>
</mixed-citation>
</ref>
<ref id="B112">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Terband</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Maassen</surname>
<given-names>B.</given-names>
</name>
</person-group>
(
<year>2010</year>
).
<article-title>Speech motor development in Childhood Apraxia of Speech: generating testable hypotheses by neurocomputational modeling</article-title>
.
<source>Folia Phoniatr. Logop.</source>
<volume>62</volume>
,
<fpage>134</fpage>
<lpage>142</lpage>
.
<pub-id pub-id-type="doi">10.1159/000287212</pub-id>
<pub-id pub-id-type="pmid">20424469</pub-id>
</mixed-citation>
</ref>
<ref id="B113">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Terband</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Maassen</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Guenther</surname>
<given-names>F. H.</given-names>
</name>
<name>
<surname>Brumberg</surname>
<given-names>J.</given-names>
</name>
</person-group>
(
<year>2009</year>
).
<article-title>Computational neural modeling of speech motor control in Childhood Apraxia of Speech (CAS)</article-title>
.
<source>J. Speech Lang. Hear. Res.</source>
<volume>52</volume>
,
<fpage>1595</fpage>
<lpage>1609</lpage>
.
<pub-id pub-id-type="doi">10.1044/1092-4388(2009/07-0283)</pub-id>
<pub-id pub-id-type="pmid">19951927</pub-id>
</mixed-citation>
</ref>
<ref id="B114">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Terband</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Maassen</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Guenther</surname>
<given-names>F. H.</given-names>
</name>
<name>
<surname>Brumberg</surname>
<given-names>J.</given-names>
</name>
</person-group>
(
<year>2014a</year>
).
<article-title>Auditory–motor interactions in pediatric motor speech disorders: neurocomputational modeling of disordered development</article-title>
.
<source>J. Commun. Disord.</source>
<volume>47</volume>
,
<fpage>17</fpage>
<lpage>33</lpage>
.
<pub-id pub-id-type="doi">10.1016/j.jcomdis.2014.01.001</pub-id>
<pub-id pub-id-type="pmid">24491630</pub-id>
</mixed-citation>
</ref>
<ref id="B115">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Terband</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>van Brenk</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>van Doornik-van der Zee</surname>
<given-names>A.</given-names>
</name>
</person-group>
(
<year>2014b</year>
).
<article-title>Auditory feedback perturbation in children with developmental speech sound disorders</article-title>
.
<source>J. Commun. Disord.</source>
<volume>51</volume>
,
<fpage>64</fpage>
<lpage>77</lpage>
.
<pub-id pub-id-type="doi">10.1016/j.jcomdis.2014.06.009</pub-id>
<pub-id pub-id-type="pmid">25127854</pub-id>
</mixed-citation>
</ref>
<ref id="B116">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Tian</surname>
<given-names>X.</given-names>
</name>
<name>
<surname>Poeppel</surname>
<given-names>D.</given-names>
</name>
</person-group>
(
<year>2010</year>
).
<article-title>Mental imagery of speech and movement implicates the dynamics of internal forward models</article-title>
.
<source>Front. Psychol.</source>
<volume>1</volume>
:
<issue>166</issue>
.
<pub-id pub-id-type="doi">10.3389/fpsyg.2010.00166</pub-id>
<pub-id pub-id-type="pmid">21897822</pub-id>
</mixed-citation>
</ref>
<ref id="B117">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Uddin</surname>
<given-names>L. Q.</given-names>
</name>
<name>
<surname>Molnar-Szakacs</surname>
<given-names>I.</given-names>
</name>
<name>
<surname>Zaidel</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Iacoboni</surname>
<given-names>M.</given-names>
</name>
</person-group>
(
<year>2006</year>
).
<article-title>rTMS to the right inferior parietal lobule disrupts self–other discrimination</article-title>
.
<source>Soc. Cogn. Affect. Neurosci.</source>
<volume>1</volume>
,
<fpage>65</fpage>
<lpage>71</lpage>
.
<pub-id pub-id-type="doi">10.1093/scan/nsl003</pub-id>
<pub-id pub-id-type="pmid">17387382</pub-id>
</mixed-citation>
</ref>
<ref id="B118">
<mixed-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Wik</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Engwall</surname>
<given-names>O.</given-names>
</name>
</person-group>
(
<year>2008</year>
).
<article-title>Can visualization of internal articulators support speech perception?</article-title>
, in
<source>Proceedings of Interspeech</source>
(
<publisher-loc>Brisbane</publisher-loc>
),
<fpage>2627</fpage>
<lpage>2630</lpage>
.</mixed-citation>
</ref>
<ref id="B119">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wilson</surname>
<given-names>S. M.</given-names>
</name>
<name>
<surname>Iacoboni</surname>
<given-names>M.</given-names>
</name>
</person-group>
(
<year>2006</year>
).
<article-title>Neural responses to non-native phonemes varying in producibility: evidence for the sensorimotor nature of speech perception</article-title>
.
<source>Neuroimage</source>
<volume>33</volume>
,
<fpage>316</fpage>
<lpage>325</lpage>
.
<pub-id pub-id-type="doi">10.1016/j.neuroimage.2006.05.032</pub-id>
<pub-id pub-id-type="pmid">16919478</pub-id>
</mixed-citation>
</ref>
<ref id="B120">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wilson</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Saygin</surname>
<given-names>A. P.</given-names>
</name>
<name>
<surname>Sereno</surname>
<given-names>M. I.</given-names>
</name>
<name>
<surname>Iacoboni</surname>
<given-names>M.</given-names>
</name>
</person-group>
(
<year>2004</year>
).
<article-title>Listening to speech activates motor areas involved in speech production</article-title>
.
<source>Nat. Neurosci.</source>
<volume>7</volume>
,
<fpage>701</fpage>
<lpage>702</lpage>
.
<pub-id pub-id-type="doi">10.1038/nn1263</pub-id>
<pub-id pub-id-type="pmid">15184903</pub-id>
</mixed-citation>
</ref>
<ref id="B121">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Yano</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Shirahige</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Oki</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Oisaka</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Kumakura</surname>
<given-names>I.</given-names>
</name>
<name>
<surname>Tsubahara</surname>
<given-names>A.</given-names>
</name>
<etal></etal>
</person-group>
. (
<year>2015</year>
).
<article-title>Effect of visual biofeedback of posterior tongue movement on articulation rehabilitation in dysarthria patients</article-title>
.
<source>J. Oral Rehabil.</source>
<volume>42</volume>
,
<fpage>571</fpage>
<lpage>579</lpage>
.
<pub-id pub-id-type="doi">10.1111/joor.12293</pub-id>
<pub-id pub-id-type="pmid">25786577</pub-id>
</mixed-citation>
</ref>
<ref id="B122">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Zaehle</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Geiser</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Alter</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Jancke</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Meyer</surname>
<given-names>M.</given-names>
</name>
</person-group>
(
<year>2008</year>
).
<article-title>Segmental processing in the human auditory dorsal stream</article-title>
.
<source>Brain Res.</source>
<volume>1220</volume>
,
<fpage>179</fpage>
<lpage>190</lpage>
.
<pub-id pub-id-type="doi">10.1016/j.brainres.2007.11.013</pub-id>
<pub-id pub-id-type="pmid">18096139</pub-id>
</mixed-citation>
</ref>
</ref-list>
</back>
</pmc>
<affiliations>
<list></list>
<tree>
<noCountry>
<name sortKey="Katz, William F" sort="Katz, William F" uniqKey="Katz W" first="William F." last="Katz">William F. Katz</name>
<name sortKey="Mehta, Sonya" sort="Mehta, Sonya" uniqKey="Mehta S" first="Sonya" last="Mehta">Sonya Mehta</name>
</noCountry>
</tree>
</affiliations>
</record>

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/HapticV1/Data/Ncbi/Merge
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 003E10 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/Ncbi/Merge/biblio.hfd -nk 003E10 | SxmlIndent | more
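
Once extracted, the indented record can be post-processed with ordinary Unix text tools. A minimal sketch, reusing only the HfdSelect/SxmlIndent command shown above plus standard GNU grep; the file name record.xml is purely illustrative:

HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 003E10 | SxmlIndent > record.xml
grep -o '<pub-id pub-id-type="doi">[^<]*</pub-id>' record.xml

The second command lists the DOIs present in the record, including those of the cited references.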

To add a link to this page in the Wicri network

{{Explor lien
   |wiki=    Ticri/CIDE
   |area=    HapticV1
   |flux=    Ncbi
   |étape=   Merge
   |type=    RBID
   |clé=     PMC:4652268
   |texte=   Visual Feedback of Tongue Movement for Novel Speech Sound Learning
}}

To generate wiki pages

HfdIndexSelect -h $EXPLOR_AREA/Data/Ncbi/Merge/RBID.i   -Sk "pubmed:26635571" \
       | HfdSelect -Kh $EXPLOR_AREA/Data/Ncbi/Merge/biblio.hfd   \
       | NlmPubMed2Wicri -a HapticV1 
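
The same pipeline can be wrapped in a shell loop to regenerate pages for several records at once. A minimal sketch, assuming the RBID index accepts other keys of the form pubmed:<PMID>; only the identifier of this record is listed here:

for pmid in 26635571; do
    HfdIndexSelect -h $EXPLOR_AREA/Data/Ncbi/Merge/RBID.i -Sk "pubmed:$pmid" \
        | HfdSelect -Kh $EXPLOR_AREA/Data/Ncbi/Merge/biblio.hfd \
        | NlmPubMed2Wicri -a HapticV1
done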

Wicri

This area was generated with Dilib version V0.6.23.
Data generation: Mon Jun 13 01:09:46 2016. Site generation: Wed Mar 6 09:54:07 2024