Environment adaptation for robust speaker verification by cascading maximum likelihood linear regression and reinforced learning

K. K. Yiu, M. W. Mak, S. Y. Kung

Research output: Contribution to journal › Article › peer-review

8 Scopus citations

Abstract

In speaker verification over public telephone networks, utterances can be obtained from different types of handsets, and different handsets may introduce different degrees of distortion to the speech signals. This paper combines a handset selector with (1) handset-specific transformations, (2) reinforced learning, and (3) stochastic feature transformation to reduce the effects of acoustic distortion. Specifically, during training, the clean speaker models and background models are first transformed by MLLR-based handset-specific transformations using a small amount of distorted speech data. Reinforced learning is then applied to adapt the transformed models to handset-dependent speaker models and handset-dependent background models using stochastically transformed speaker patterns. During a verification session, a GMM-based handset classifier identifies the most likely handset used by the claimant, and the corresponding handset-dependent speaker and background model pair is used for verification. Experimental results based on 150 speakers of the HTIMIT corpus show that environment adaptation based on the combination of MLLR, reinforced learning and feature transformation outperforms CMS, Hnorm, Tnorm, and speaker model synthesis.
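The verification-time flow described in the abstract (a GMM-based handset selector followed by a likelihood-ratio test against the handset-dependent speaker and background models) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the use of scikit-learn's GaussianMixture as a stand-in for the paper's GMMs, the model containers, and the decision threshold are all assumptions.

```python
# Minimal sketch of the verification stage, assuming pre-trained GMMs:
# a handset selector picks the most likely handset, then the matching
# handset-dependent speaker/background pair scores the claim.
import numpy as np
from sklearn.mixture import GaussianMixture  # assumed stand-in for the paper's GMMs


def verify(features, handset_gmms, speaker_gmms, background_gmms, threshold=0.0):
    """Accept/reject a claimed identity from a feature matrix (frames x dims).

    handset_gmms:    {handset_id: GaussianMixture} handset selector models
    speaker_gmms:    {handset_id: GaussianMixture} handset-dependent speaker models
    background_gmms: {handset_id: GaussianMixture} handset-dependent background models
    """
    # 1. Handset selection: pick the handset whose GMM gives the highest
    #    average per-frame log-likelihood for the test utterance.
    handset = max(handset_gmms, key=lambda h: handset_gmms[h].score(features))

    # 2. Verification score: average log-likelihood ratio between the
    #    handset-dependent speaker model and background model.
    llr = speaker_gmms[handset].score(features) - background_gmms[handset].score(features)

    # 3. Accept the claim if the score exceeds the decision threshold
    #    (threshold value here is purely illustrative).
    return llr > threshold, llr, handset
```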

Original language: English (US)
Pages (from-to): 231-246
Number of pages: 16
Journal: Computer Speech and Language
Volume: 21
Issue number: 2
DOIs
State: Published - Apr 2007

All Science Journal Classification (ASJC) codes

  • Software
  • Theoretical Computer Science
  • Human-Computer Interaction

