TY - JOUR

T1 - Some Properties of Continuous Hidden Markov Model Representations

AU - Rabiner, L. R.

AU - Juang, B.-H.

AU - Levinson, S. E.

AU - Sondhi, M. M.

PY - 1985/1/1

Y1 - 1985/1/1

N2 - Many signals can be modeled as probabilistic functions of Markov chains in which the observed signal is a random vector whose probability density function (pdf) depends on the current state of an underlying Markov chain. Such models are called Hidden Markov Models (HMMs) and are useful representations for speech signals in terms of some convenient observations (e.g., cepstral coefficients or pseudolog area ratios). One method of estimating parameters of HMMs is the well‐known Baum‐Welch reestimation method. For continuous pdf's, the method was known to work only for elliptically symmetric densities. We have recently shown that the method can be generalized to handle mixtures of elliptically symmetric pdf's. Any continuous pdf can be approximated to any desired accuracy by such mixtures, in particular, by mixtures of multivariate Gaussian pdf's. To effectively make use of this method of parameter estimation, it is necessary to understand how it is affected by the amount of training data available, the number of states in the Markov chain, the dimensionality of the signal, etc. To study these issues, Markov chains and random vector generators were simulated to generate training sequences from “toy” models. The model parameters were estimated from these training sequences and compared to the “true” parameters by means of an appropriate distance measure. The results of several such experiments show the strong sensitivity of the method to some (but not all) of the model parameters. A procedure for getting good initial parameter estimates is, therefore, of considerable importance.

AB - Many signals can be modeled as probabilistic functions of Markov chains in which the observed signal is a random vector whose probability density function (pdf) depends on the current state of an underlying Markov chain. Such models are called Hidden Markov Models (HMMs) and are useful representations for speech signals in terms of some convenient observations (e.g., cepstral coefficients or pseudolog area ratios). One method of estimating parameters of HMMs is the well‐known Baum‐Welch reestimation method. For continuous pdf's, the method was known to work only for elliptically symmetric densities. We have recently shown that the method can be generalized to handle mixtures of elliptically symmetric pdf's. Any continuous pdf can be approximated to any desired accuracy by such mixtures, in particular, by mixtures of multivariate Gaussian pdf's. To effectively make use of this method of parameter estimation, it is necessary to understand how it is affected by the amount of training data available, the number of states in the Markov chain, the dimensionality of the signal, etc. To study these issues, Markov chains and random vector generators were simulated to generate training sequences from “toy” models. The model parameters were estimated from these training sequences and compared to the “true” parameters by means of an appropriate distance measure. The results of several such experiments show the strong sensitivity of the method to some (but not all) of the model parameters. A procedure for getting good initial parameter estimates is, therefore, of considerable importance.

UR - http://www.scopus.com/inward/record.url?scp=0022099671&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=0022099671&partnerID=8YFLogxK

U2 - 10.1002/j.1538-7305.1985.tb00274.x

DO - 10.1002/j.1538-7305.1985.tb00274.x

M3 - Article

AN - SCOPUS:0022099671

VL - 64

SP - 1251

EP - 1270

JO - Bell Labs Technical Journal

JF - Bell Labs Technical Journal

SN - 1089-7089

IS - 6

ER -