Chair: Petar M. Djuric, State University of New York at Stony Brook, USA
Austin B. Frakt, MIT (U.S.A.)
Alan S. Willsky, MIT (U.S.A.)
Few fast statistical signal processing algorithms exist for large problems involving non-stationary processes and irregular measurements. A recently introduced class of multiscale autoregressive models indexed by trees admits signal processing algorithms which can efficiently deal with problems of this type. In this paper we provide a novel and efficient algorithm for translating any second-order prior model to a multiscale autoregressive prior model so that these efficient signal processing algorithms may be applied.
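A minimal sketch of the multiscale autoregressive model class indexed by a dyadic tree, following the coarse-to-fine recursion x(s) = A(s) x(parent(s)) + w(s); the scalar state, tree depth and parameter values here are illustrative choices, and the sketch does not implement the paper's translation algorithm.

```python
import numpy as np

# Illustrative sketch (not the paper's algorithm): draw one sample from a
# multiscale autoregressive model indexed by a dyadic tree,
#   x(s) = A(s) x(parent(s)) + w(s),   w(s) ~ N(0, Q(s)),
# with scalar states and scale-dependent noise variances chosen arbitrarily.

rng = np.random.default_rng(0)

def sample_multiscale_ar(depth, a=0.9, root_var=1.0, noise_var=0.5):
    """Return a list of levels; level d holds the 2**d node values."""
    levels = [np.array([rng.normal(0.0, np.sqrt(root_var))])]   # root node
    for d in range(1, depth + 1):
        parents = np.repeat(levels[-1], 2)          # each node has two children
        noise = rng.normal(0.0, np.sqrt(noise_var / 2**d), size=parents.size)
        levels.append(a * parents + noise)          # coarse-to-fine recursion
    return levels

levels = sample_multiscale_ar(depth=6)
print("finest-scale process of length", levels[-1].size)
```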
Hakan Hjalmarsson, KTH (Sweden)
Brett M. Ninness, University of Newcastle (Australia)
The solution of many important signal processing problems depends on the estimation of the parameters of a Hidden Markov Model (HMM). Unfortunately, to date the only known methods for performing this estimation have been iterative, and therefore computationally demanding. By way of contrast, this paper presents a new fast and non-iterative method that utilizes certain recent "state-space subspace system identification" (4SID) ideas from the control theory literature. A short simulation example presented here indicates this new technique to be almost as accurate as Maximum-Likelihood estimation, but an order of magnitude less computationally demanding than the Baum-Welch (EM) algorithm.
Paul T. Troughton, University of Cambridge (U.K.)
Simon J. Godsill, University of Cambridge (U.K.)
We use reversible jump Markov chain Monte Carlo (MCMC) methods to address the problem of model order uncertainty in autoregressive (AR) time series within a Bayesian framework. Efficient model jumping is achieved by proposing model space moves from the full conditional density for the AR parameters, which is obtained analytically. This is compared with an alternative method, in which proposals are made only for the new parameters in each move, making the moves cheaper to compute. Results are presented for both synthetic and audio time series.
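A minimal sketch of the analytical full conditional exploited above: for a fixed order k, a known noise variance and a zero-mean Gaussian prior on the coefficients (assumptions made here for concreteness), the conditional posterior of the AR parameters is Gaussian and can be sampled directly as a proposal for a model space move.

```python
import numpy as np

def ar_full_conditional(x, k, sigma2=1.0, prior_var=10.0):
    """Mean and covariance of the Gaussian full conditional of AR(k)
    coefficients given data x, noise variance sigma2 and an isotropic
    zero-mean Gaussian prior (illustrative choices, not the paper's)."""
    X = np.column_stack([x[k - j - 1: len(x) - j - 1] for j in range(k)])
    y = x[k:]
    prec = X.T @ X / sigma2 + np.eye(k) / prior_var   # posterior precision
    cov = np.linalg.inv(prec)
    mean = cov @ (X.T @ y) / sigma2
    return mean, cov

rng = np.random.default_rng(1)
x = rng.normal(size=500)
for t in range(2, 500):                               # simulate an AR(2) series
    x[t] += 0.75 * x[t - 1] - 0.5 * x[t - 2]
mean, cov = ar_full_conditional(x, k=2)
proposal = rng.multivariate_normal(mean, cov)         # candidate for a model move
print(mean, proposal)
```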
Iain B. Collings, University of Melbourne (Australia)
Tobias Rydén, Lund University (Sweden)
This paper presents a new algorithm for on-line identification of hidden Markov model (HMM) parameters. The scheme is gradient based, and provides parameter estimates which recursively maximise the likelihood function. It is therefore a recursive maximum likelihood (RML) algorithm, and it has optimal asymptotic properties. The only current on-line HMM identification algorithm with anything other than a suboptimal rate of convergence is based on a prediction error (PE) cost function. As well as presenting a new algorithm, this paper also highlights and explains a counter-intuitive convergence problem for the current recursive PE (RPE) algorithm when operating in low noise conditions. Importantly, this problem does not exist for the new RML algorithm. Simulation studies demonstrate the superior performance of the new algorithm compared with current techniques.
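A minimal sketch of a recursive maximum likelihood update of the kind described above, assuming a two-state HMM with Gaussian emissions in which only the emission means are estimated on-line; the transition matrix, noise variance and gain sequence are fixed illustrative choices rather than the paper's full algorithm.

```python
import numpy as np

# Illustrative RML-style recursion: on-line gradient ascent on the predictive
# log-likelihood of a 2-state Gaussian-emission HMM, updating only the emission
# means. The transition matrix A and variance sigma2 are assumed known.

rng = np.random.default_rng(2)
A = np.array([[0.95, 0.05], [0.10, 0.90]])     # known transition matrix
true_mu, sigma2 = np.array([0.0, 3.0]), 1.0

T, s = 20000, 0                                # simulate observations
y = np.empty(T)
for t in range(T):
    s = rng.choice(2, p=A[s])
    y[t] = rng.normal(true_mu[s], np.sqrt(sigma2))

mu = np.array([-1.0, 1.0])                     # initial estimate of the means
phi = np.full(2, 0.5)                          # filter P(x_t | y_1..t)
dphi = np.zeros((2, 2))                        # d(phi)/d(mu_k), rows index k
for t in range(T):
    p = A.T @ phi                              # predictive state distribution
    dp = dphi @ A                              # its derivative (A is fixed)
    b = np.exp(-(y[t] - mu) ** 2 / (2 * sigma2)) / np.sqrt(2 * np.pi * sigma2)
    db = np.diag(b * (y[t] - mu) / sigma2)     # d(b_i)/d(mu_k)
    c = p @ b                                  # incremental likelihood
    dc = dp @ b + db @ p                       # its derivative
    phi_new = p * b / c
    dphi = (dp * b + db * p) / c - np.outer(dc / c, phi_new)
    phi = phi_new
    mu += 0.5 / (t + 10) ** 0.6 * dc / c       # decreasing-gain gradient step

print("estimated means:", mu)                  # should move toward [0, 3]
```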
Olivier Cappé, ENST / CNRS (France)
Vincent Buchoux, ENST / CNET (France)
Eric Moulines, ENST / CNRS (France)
Hidden Markov models (HMMs) are used in many signal processing applications, including speech recognition and blind equalization of digital communications channels. The most widely used method for maximum likelihood estimation of HMM parameters is the forward-backward (or Baum-Welch) algorithm, which is an early example of the application of the Expectation-Maximization (EM) principle. In this contribution, an alternative fast-converging approach to maximum likelihood estimation of HMM parameters is described. This new technique is based on the use of general-purpose quasi-Newton optimization methods as well as on an efficient, purely recursive algorithm for computing the log-likelihood and its derivative.
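A minimal sketch of the quasi-Newton idea, assuming a discrete-output HMM, a softmax parameterization to keep the probabilities valid, and the optimizer's built-in finite-difference gradient; the paper instead computes the log-likelihood derivative with a purely recursive scheme.

```python
import numpy as np
from scipy.optimize import minimize

# Sketch only: fit a 2-state, 2-symbol HMM by handing the scaled forward
# recursion's negative log-likelihood to a quasi-Newton (BFGS) optimizer.

def softmax_rows(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def neg_log_likelihood(theta, y, K=2, M=2):
    A = softmax_rows(theta[:K * K].reshape(K, K))          # transition matrix
    B = softmax_rows(theta[K * K:].reshape(K, M))          # emission matrix
    alpha = np.full(K, 1.0 / K) * B[:, y[0]]               # forward variable
    loglik = np.log(alpha.sum()); alpha /= alpha.sum()     # scaling step
    for obs in y[1:]:
        alpha = (alpha @ A) * B[:, obs]
        loglik += np.log(alpha.sum()); alpha /= alpha.sum()
    return -loglik

rng = np.random.default_rng(3)
A_true = np.array([[0.9, 0.1], [0.2, 0.8]])                # simulate data
B_true = np.array([[0.8, 0.2], [0.3, 0.7]])
T, s = 1000, 0
y = np.empty(T, dtype=int)
for t in range(T):
    s = rng.choice(2, p=A_true[s])
    y[t] = rng.choice(2, p=B_true[s])

theta0 = rng.normal(scale=0.1, size=8)
res = minimize(neg_log_likelihood, theta0, args=(y,), method="BFGS")
print(softmax_rows(res.x[:4].reshape(2, 2)))               # estimated A
```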
Petar M. Djuric, SUNY, Stony Brook (U.S.A.)
Simon J. Godsill, University of Cambridge (U.K.)
William J. Fitzgerald, University of Cambridge (U.K.)
Peter J.W. Rayner, University of Cambridge (U.K.)
Markov chain Monte Carlo (MCMC) samplers have proved to be a very powerful methodology for estimating signal parameters. With the introduction of the reversible jump MCMC sampler, which is a Metropolis-Hastings method adapted to general state spaces, the potential of MCMC methods has risen to a new level. Consequently, MCMC methods currently play a major role in many research activities. In this paper we propose a reversible jump MCMC sampler based on predictive densities obtained by integrating out unwanted parameters. The proposal densities are approximations of the posterior distributions of the remaining parameters obtained by sampling importance resampling (SIR). We apply the method to the problem of signal detection and parameter estimation. To illustrate the procedure, we present an example of sinusoids embedded in noise.
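A minimal sketch of the sampling importance resampling step, for a toy one-dimensional target known only up to a normalizing constant; the target and importance density here are illustrative, not the predictive densities used in the paper.

```python
import numpy as np

# Illustrative sampling importance resampling (SIR): draw from a broad
# importance density, weight by target/importance, then resample.

rng = np.random.default_rng(4)

def target_unnorm(x):                         # toy bimodal target, unnormalized
    return np.exp(-0.5 * (x - 2) ** 2) + 0.5 * np.exp(-0.5 * (x + 2) ** 2)

def sir(n_draws=5000, n_keep=1000, scale=5.0):
    x = rng.normal(0.0, scale, size=n_draws)  # importance density N(0, scale^2)
    q = np.exp(-0.5 * (x / scale) ** 2) / (scale * np.sqrt(2 * np.pi))
    w = target_unnorm(x) / q                  # importance weights
    w /= w.sum()
    return rng.choice(x, size=n_keep, replace=True, p=w)   # resampling step

samples = sir()
print(samples.mean(), samples.std())
```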
Susan M. Schweizer, Carnegie Mellon University (U.S.A.)
José M.F. Moura, Carnegie Mellon University (U.S.A.)
One aim of hyperspectral imaging sensors is to discriminate man-made objects from dominant clutter environments. Sensors like AVIRIS or HYDICE simultaneously collect hundreds of contiguous, narrowly spaced spectral band images of the same scene. The challenge lies in processing the correspondingly large volume of data collected by the sensors. Usual implementations of the Maximum-Likelihood (ML) detector are precluded because they require the inversion of large data covariance matrices. We apply a Gauss-Markov random field (GMRF) model to derive a computationally efficient ML detector implementation that avoids inversion of the covariance matrix. The paper details the structure of the GMRF model, presents an estimation algorithm to fit the GMRF to the hyperspectral sensor data, and finally develops the structure of the ML detector.
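A minimal sketch of why the GMRF structure avoids the inversion: a first-order GMRF specifies a sparse precision (inverse covariance) matrix directly, so the quadratic form appearing in the Gaussian likelihood is a sparse matrix-vector product. The lattice neighbourhood and parameter value below are illustrative, not those fitted in the paper.

```python
import numpy as np
from scipy import sparse

# Sketch: build the sparse precision matrix of a first-order GMRF on an
# n x n grid and evaluate the quadratic form x^T Lambda x directly,
# without ever forming or inverting the dense covariance matrix.

def gmrf_precision(n, beta=0.2):
    """Precision of a first-order GMRF on an n-by-n lattice:
    Lambda = I - beta * W, with W the 4-nearest-neighbour adjacency."""
    eye = sparse.identity(n, format="csr")
    path = sparse.diags([1.0, 1.0], [-1, 1], shape=(n, n), format="csr")
    W = sparse.kron(eye, path) + sparse.kron(path, eye)    # grid adjacency
    return sparse.identity(n * n, format="csr") - beta * W

n = 64
Lam = gmrf_precision(n)                     # 4096 x 4096, but ~5 nonzeros per row
x = np.random.default_rng(5).normal(size=n * n)
quad = x @ (Lam @ x)                        # O(nonzeros); no matrix inversion
print(Lam.nnz, quad)
```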
Matthew S. Crouse, Rice University (U.S.A.)
Richard G. Baraniuk, Rice University (U.S.A.)
Wavelet-domain Hidden Markov Models (HMMs) are a potent new tool for modeling the statistical properties of wavelet transforms. In addition to characterizing the statistics of individual wavelet coefficients, HMMs capture the salient interactions between wavelet coefficients. However, as we model an increasing number of wavelet coefficient interactions, HMM-based signal processing becomes increasingly complicated. In this paper, we propose a new approach to HMMs based on the notion of context. By modeling wavelet coefficient inter-dependencies via contexts, we retain the approximation capabilities of HMMs, yet substantially reduce their complexity. To illustrate the power of this approach, we develop new algorithms for signal estimation and for efficient synthesis of non-Gaussian, long-range-dependent network traffic.
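A minimal sketch of the context idea, assuming a Haar transform, a binary context given by the magnitude of the parent coefficient, and a Wiener-style shrinkage with a context-dependent variance; these are illustrative simplifications, not the algorithms developed in the paper.

```python
import numpy as np

# Sketch of context-based modelling: Haar detail coefficients are split into
# two populations according to the magnitude of their parent coefficient, a
# variance is fitted to each population, and a per-context shrinkage is applied.

rng = np.random.default_rng(6)
N = 1024
signal = np.cumsum(rng.standard_t(df=3, size=N)) / 10    # heavy-tailed toy signal
noisy = signal + rng.normal(0.0, 1.0, size=N)
noise_var = 1.0

def haar_details(x):
    """Return Haar detail coefficients at the two finest scales."""
    even, odd = x[0::2], x[1::2]
    d1 = (even - odd) / np.sqrt(2)                       # finest scale
    a1 = (even + odd) / np.sqrt(2)
    d2 = (a1[0::2] - a1[1::2]) / np.sqrt(2)              # next coarser scale (parents)
    return d1, d2

d1, d2 = haar_details(noisy)
context = np.abs(np.repeat(d2, 2))                       # each parent covers 2 children
big = context > np.median(context)                       # binary context per child
shrunk = np.empty_like(d1)
for mask in (big, ~big):
    sig_var = max(d1[mask].var() - noise_var, 1e-6)      # fitted signal variance
    shrunk[mask] = d1[mask] * sig_var / (sig_var + noise_var)
print("kept energy fraction:", (shrunk ** 2).sum() / (d1 ** 2).sum())
```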