Robust speech perception: Recognize the familiar, generalize to the similar, and adapt to the novel
Internal identifier: 003815 (Ncbi/Checkpoint); previous: 003814; next: 003816
Authors: Dave F. Kleinschmidt; T. Florian Jaeger
Source:
- Psychological review [0033-295X]; 2015.
Abstract
Successful speech perception requires that listeners map the acoustic signal to linguistic categories. These mappings are not only probabilistic, but change depending on the situation. For example, one talker’s /p/ might be physically indistinguishable from another talker’s /b/ (cf. lack of invariance).
Url: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4744792
DOI: 10.1037/a0038695
PubMed: 25844873
PubMed Central: 4744792
Affiliations:
Links to previous steps (curation, corpus, ...)
- to stream Pmc, to step Corpus: 001674
- to stream Pmc, to step Curation: 001674
- to stream Pmc, to step Checkpoint: 000358
- to stream Ncbi, to step Merge: 003815
- to stream Ncbi, to step Curation: 003815
Links to Exploration step
PMC: 4744792
The document in XML format
<record><TEI><teiHeader><fileDesc><titleStmt><title xml:lang="en">Robust speech perception: Recognize the familiar, generalize to the similar, and adapt to the novel</title>
<author><name sortKey="Kleinschmidt, Dave F" sort="Kleinschmidt, Dave F" uniqKey="Kleinschmidt D" first="Dave F." last="Kleinschmidt">Dave F. Kleinschmidt</name>
</author>
<author><name sortKey="Jaeger, T Florian" sort="Jaeger, T Florian" uniqKey="Jaeger T" first="T. Florian" last="Jaeger">T. Florian Jaeger</name>
</author>
</titleStmt>
<publicationStmt><idno type="wicri:source">PMC</idno>
<idno type="pmid">25844873</idno>
<idno type="pmc">4744792</idno>
<idno type="url">http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4744792</idno>
<idno type="RBID">PMC:4744792</idno>
<idno type="doi">10.1037/a0038695</idno>
<date when="2015">2015</date>
<idno type="wicri:Area/Pmc/Corpus">001674</idno>
<idno type="wicri:Area/Pmc/Curation">001674</idno>
<idno type="wicri:Area/Pmc/Checkpoint">000358</idno>
<idno type="wicri:Area/Ncbi/Merge">003815</idno>
<idno type="wicri:Area/Ncbi/Curation">003815</idno>
<idno type="wicri:Area/Ncbi/Checkpoint">003815</idno>
</publicationStmt>
<sourceDesc><biblStruct><analytic><title xml:lang="en" level="a" type="main">Robust speech perception: Recognize the familiar, generalize to the similar, and adapt to the novel</title>
<author><name sortKey="Kleinschmidt, Dave F" sort="Kleinschmidt, Dave F" uniqKey="Kleinschmidt D" first="Dave F." last="Kleinschmidt">Dave F. Kleinschmidt</name>
</author>
<author><name sortKey="Jaeger, T Florian" sort="Jaeger, T Florian" uniqKey="Jaeger T" first="T. Florian" last="Jaeger">T. Florian Jaeger</name>
</author>
</analytic>
<series><title level="j">Psychological review</title>
<idno type="ISSN">0033-295X</idno>
<idno type="eISSN">1939-1471</idno>
<imprint><date when="2015">2015</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc><textClass></textClass>
</profileDesc>
</teiHeader>
<front><div type="abstract" xml:lang="en"><p id="P1">Successful speech perception requires that listeners map the acoustic signal to linguistic categories. These mappings are not only probabilistic, but change depending on the situation. For example, one talker’s /p/ might be physically indistinguishable from another talker’s /b/ (cf. <italic>lack of invariance</italic>
). We characterize the computational problem posed by such a subjectively non-stationary world and propose that the speech perception system overcomes this challenge by (1) recognizing previously encountered situations, (2) generalizing to other situations based on previous similar experience, and (3) adapting to novel situations. We formalize this proposal in the <italic>ideal adapter</italic>
framework: (1) to (3) can be understood as inference under uncertainty about the appropriate generative model for the current talker, thereby facilitating robust speech perception despite the lack of invariance. We focus on two critical aspects of the ideal adapter. First, in situations that clearly deviate from previous experience, listeners need to adapt. We develop a distributional (belief-updating) learning model of incremental adaptation. The model provides a good fit against known and novel phonetic adaptation data, including perceptual recalibration and selective adaptation. Second, robust speech recognition requires listeners learn to represent the <italic>structured</italic>
component of cross-situation variability in the speech signal. We discuss how these two aspects of the ideal adapter provide a unifying explanation for adaptation, talker-specificity, and generalization across talkers and groups of talkers (e.g., accents and dialects). The ideal adapter provides a guiding framework for future investigations into speech perception and adaptation, and more broadly language comprehension.</p>
</div>
</front>
</TEI>
<affiliations><list></list>
<tree><noCountry><name sortKey="Jaeger, T Florian" sort="Jaeger, T Florian" uniqKey="Jaeger T" first="T. Florian" last="Jaeger">T. Florian Jaeger</name>
<name sortKey="Kleinschmidt, Dave F" sort="Kleinschmidt, Dave F" uniqKey="Kleinschmidt D" first="Dave F." last="Kleinschmidt">Dave F. Kleinschmidt</name>
</noCountry>
</tree>
</affiliations>
</record>
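The abstract above frames adaptation as belief updating about the generative model for the current talker. As a purely illustrative sketch (the cue, numeric values, and function name are hypothetical, not taken from the paper), conjugate Normal-Normal updating of a category's mean cue value shows the incremental character of such distributional learning: each new observation pulls the listener's estimate toward the talker's productions, weighted by prior experience.

```python
def update_beliefs(mu0, kappa0, observations):
    """Incrementally update beliefs about a category's mean cue value.

    Conjugate Normal-Normal updating: the prior mean mu0 is backed by
    kappa0 pseudo-observations, and each new observation x shifts the
    posterior mean by a precision-weighted average.
    """
    mu, kappa = mu0, kappa0
    for x in observations:
        mu = (kappa * mu + x) / (kappa + 1)  # precision-weighted average
        kappa += 1                            # one more observation absorbed
    return mu, kappa

# Hypothetical example: a listener's prior says /p/ has a mean voice
# onset time (VOT) of 60 ms, backed by 20 pseudo-observations. A novel
# talker consistently produces /p/ with a 45 ms VOT.
mu, kappa = update_beliefs(60.0, 20.0, [45.0] * 20)
print(mu)  # after 20 exposures, the estimate sits halfway: 52.5
```

With equal weight on prior and data (20 pseudo-observations vs. 20 real ones), the posterior mean lands exactly between the prior and the talker's productions, mirroring the gradual, exposure-dependent recalibration the ideal adapter framework describes.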
To manipulate this document under Unix (Dilib)
EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/HapticV1/Data/Ncbi/Checkpoint
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 003815 | SxmlIndent | more
Or
HfdSelect -h $EXPLOR_AREA/Data/Ncbi/Checkpoint/biblio.hfd -nk 003815 | SxmlIndent | more
To put a link to this page in the Wicri network
{{Explor lien |wiki= Ticri/CIDE |area= HapticV1 |flux= Ncbi |étape= Checkpoint |type= RBID |clé= PMC:4744792 |texte= Robust speech perception: Recognize the familiar, generalize to the similar, and adapt to the novel }}
To generate wiki pages
HfdIndexSelect -h $EXPLOR_AREA/Data/Ncbi/Checkpoint/RBID.i -Sk "pubmed:25844873" \
  | HfdSelect -Kh $EXPLOR_AREA/Data/Ncbi/Checkpoint/biblio.hfd \
  | NlmPubMed2Wicri -a HapticV1
This area was generated with Dilib version V0.6.23.