Exploration server on computer science research in Lorraine

Warning: this site is under development!
Warning: this site is generated automatically from raw corpora.
The information has therefore not been validated.

Non-Linear Interpolation Methods for Speaker Recognition and Verification

Internal identifier: 00D255 (Main/Merge); previous: 00D254; next: 00D256

Non-Linear Interpolation Methods for Speaker Recognition and Verification

Authors: Y. Gong; Jean-Paul Haton [France]

Source:

RBID: CRIN:gong94b

English descriptors

Abstract

We address two problems related to text-dependent speaker recognition and verification using very short utterances (less than 1 second) both for training and recognition/verification: speaker acoustic models and verification decision thresholds. The approach to speaker models consists in exploiting speaker-specific acoustic correlations between two sets of parameter vectors relating to the same speaker. A non-linear vector interpolation technique is used to capture speaker-specific information through least-square-error optimization. To determine an optimum threshold for speaker verification, we studied the minimum risk and minimum error criteria based on Bayes decision rule. Experiments are based on five utterances of 4 phonemes contained in one sentence. One utterance is used for test and the remaining 4 for training. Evaluated on 72 speakers we obtained 3.9% speaker recognition error rate and 0.45% minimum risk speaker verification total error rate.
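The abstract does not give the exact form of the interpolation. As a rough illustration only, the Python/NumPy sketch below fits a non-linear mapping between two sets of parameter vectors by least-square-error optimization and uses its residual to score test data against competing speaker models; the quadratic feature expansion, the toy data, and the fixed verification threshold are assumptions for the sketch, not details taken from the paper.

import numpy as np

# Rough illustration only: a non-linear vector interpolation fitted by least
# squares. The quadratic expansion, toy data and fixed verification threshold
# are assumptions for this sketch, not details from the paper.

def expand(X):
    # assumed non-linear expansion: bias, linear and squared terms per dimension
    return np.hstack([np.ones((len(X), 1)), X, X ** 2])

def train_speaker_model(X, Y):
    # least-square-error fit of a readout W such that expand(X) @ W ~ Y
    W, *_ = np.linalg.lstsq(expand(X), Y, rcond=None)
    return W

def interpolation_error(W, X, Y):
    # residual of the interpolation; a smaller value means a closer speaker match
    return float(np.mean((expand(X) @ W - Y) ** 2))

# Toy usage: recognition picks the model with the smallest residual;
# verification compares the residual of the claimed speaker to a threshold
# (the paper derives its threshold from Bayes minimum-risk / minimum-error
# criteria; here it is just a placeholder constant).
rng = np.random.default_rng(0)
models, data = {}, {}
for spk in ("A", "B"):
    X = rng.normal(size=(40, 8))            # first set of parameter vectors
    Y = X @ rng.normal(size=(8, 8)) * 0.5    # correlated second set
    models[spk], data[spk] = train_speaker_model(X, Y), (X, Y)

X_test, Y_test = data["A"]
scores = {spk: interpolation_error(W, X_test, Y_test) for spk, W in models.items()}
print("recognized:", min(scores, key=scores.get))   # expected: A
print("verified as A:", scores["A"] < 0.05)         # placeholder threshold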

Links to previous steps (curation, corpus...)


Links to Exploration step

CRIN:gong94b

The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en" wicri:score="532">Non-Linear Interpolation Methods for Speaker Recognition and Verification</title>
</titleStmt>
<publicationStmt>
<idno type="RBID">CRIN:gong94b</idno>
<date when="1994" year="1994">1994</date>
<idno type="wicri:Area/Crin/Corpus">001539</idno>
<idno type="wicri:Area/Crin/Curation">001539</idno>
<idno type="wicri:explorRef" wicri:stream="Crin" wicri:step="Curation">001539</idno>
<idno type="wicri:Area/Crin/Checkpoint">002F57</idno>
<idno type="wicri:explorRef" wicri:stream="Crin" wicri:step="Checkpoint">002F57</idno>
<idno type="wicri:Area/Main/Merge">00D255</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en">Non-Linear Interpolation Methods for Speaker Recognition and Verification</title>
<author>
<name sortKey="Gong, Y" sort="Gong, Y" uniqKey="Gong Y" first="Y." last="Gong">Y. Gong</name>
</author>
<author>
<name sortKey="Haton, J P" sort="Haton, J P" uniqKey="Haton J" first="J.-P." last="Haton">Jean-Paul Haton</name>
<affiliation>
<country>France</country>
<placeName>
<settlement type="city">Nancy</settlement>
<region type="region" nuts="2">Grand Est</region>
<region type="region" nuts="2">Lorraine (région)</region>
</placeName>
<orgName type="laboratoire" n="5">Laboratoire lorrain de recherche en informatique et ses applications</orgName>
<orgName type="university">Université de Lorraine</orgName>
<orgName type="institution">Centre national de la recherche scientifique</orgName>
<orgName type="institution">Institut national de recherche en informatique et en automatique</orgName>
</affiliation>
</author>
</analytic>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass>
<keywords scheme="KwdEn" xml:lang="en">
<term>optimum decision</term>
<term>speaker model</term>
</keywords>
</textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en" wicri:score="2502">We address two problems related to text-dependent speaker recognition and verification using very short utterances (less than 1 second) both for training and recognition/verification\, : speaker acoustic models and verification decision thresholds. The approach to speaker models consists in exploiting speaker-specific acoustic correlations between two sets of parameter vectors relating to the same speaker. A non-linear vector interpolation technique is used to capture speaker-specific information through least-square-error optimization. To determine an optimum threshold for speaker verification, we studied the minimum risk and minimum error criteria based on Bayes decision rule. Experiments are based on five utterances of 4 phonemes contained in one sentence. One utterance is used for test and the remaining 4 for training. Evaluated on 72 speakers we obtained 3,9========percnt; speaker recognition error rate and 0,45========percnt; minimum risk speaker verification total error rate.</div>
</front>
</TEI>
</record>

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Wicri/Lorraine/explor/InforLorV4/Data/Main/Merge
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 00D255 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/Main/Merge/biblio.hfd -nk 00D255 | SxmlIndent | more

To link to this page within the Wicri network

{{Explor lien
   |wiki=    Wicri/Lorraine
   |area=    InforLorV4
   |flux=    Main
   |étape=   Merge
   |type=    RBID
   |clé=     CRIN:gong94b
   |texte=   Non-Linear Interpolation Methods for Speaker Recognition and Verification
}}

Wicri

This area was generated with Dilib version V0.6.33.
Data generation: Mon Jun 10 21:56:28 2019. Site generation: Fri Feb 25 15:29:27 2022