Channel robust speaker verification via Bayesian blind stochastic feature transformation

Kwok Kwong Yiu, Man Wai Mak, Sun Yuan Kung

Research output: Contribution to conference › Paper › peer-review


In telephone-based speaker verification, channel conditions can vary significantly from session to session. It is therefore desirable to estimate the channel conditions online and compensate for the acoustic distortion without prior knowledge of the channel characteristics. Because no a priori knowledge is used, the estimation accuracy depends greatly on the length of the verification utterances. This paper extends the Blind Stochastic Feature Transformation (BSFT) algorithm that we recently proposed to handle the short-utterance scenario. The idea is to estimate a set of prior transformation parameters from a development set whose verification utterances cover a wide variety of channel conditions. The prior transformations are then incorporated into the online estimation of the BSFT parameters in a Bayesian (maximum a posteriori) fashion. The resulting transformation parameters therefore depend on both the prior transformations and the verification utterances: for short (long) utterances, the prior transformations play a more (less) important role. We refer to the extended algorithm as Bayesian BSFT (BBSFT) and applied it to the 2001 NIST SRE task. Results show that BBSFT outperforms BSFT for utterances of 4 seconds or shorter.
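The maximum a posteriori idea described above can be sketched as an interpolation between a prior transformation (from the development set) and the maximum-likelihood estimate computed from the verification utterance, with a weight that grows with utterance length. The function name, the relevance factor `tau`, and the linear interpolation formula below are illustrative assumptions, not the exact BBSFT update derived in the paper:

```python
import numpy as np

def map_transform_params(b_prior, b_ml, n_frames, tau=16.0):
    """MAP-style interpolation between a prior feature-transformation
    parameter vector (estimated offline from a development set) and the
    maximum-likelihood estimate from the verification utterance.

    `tau` is a hypothetical relevance factor: the larger it is, the more
    frames are needed before the data-driven estimate dominates.
    """
    alpha = n_frames / (n_frames + tau)  # data weight grows with utterance length
    return alpha * np.asarray(b_ml) + (1.0 - alpha) * np.asarray(b_prior)

# Short utterance: few frames, so the result stays close to the prior.
prior = np.zeros(3)
ml = np.ones(3)
short = map_transform_params(prior, ml, n_frames=4)    # alpha = 0.2
long_ = map_transform_params(prior, ml, n_frames=400)  # alpha ~ 0.96
```

This captures the qualitative behaviour stated in the abstract: for short utterances the prior transformation dominates, while for long utterances the estimate is driven mainly by the observed data.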

Original language: English (US)
Number of pages: 4
State: Published - 2005
Event: 9th European Conference on Speech Communication and Technology - Lisbon, Portugal
Duration: Sep 4, 2005 - Sep 8, 2005


Other: 9th European Conference on Speech Communication and Technology

All Science Journal Classification (ASJC) codes

  • General Engineering


