Signal Processing with Sparseness Constraint

Chair: Bhaskar D. Rao, University of California San Diego, USA


Signal Processing with the Sparseness Constraint

Authors:

Bhaskar D. Rao, University of California, San Diego (U.S.A.)

Volume 3, Page 1861, Paper number 2199

Abstract:

An overview is given of the role of the sparseness constraint in signal processing problems. Computing sparse solutions is argued to be a fundamental problem deserving of attention, and this is illustrated by describing several applications where sparseness of the solution is desired. Lastly, a review is given of some of the algorithms currently available for computing sparse solutions.
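As a minimal illustration of the underlying problem (not taken from the paper), the sketch below contrasts the dense minimum 2-norm solution of an underdetermined linear system with the sparse minimum 1-norm (basis pursuit) solution obtained by linear programming; the dimensions, random data, and threshold for counting nonzeros are assumptions made for the example.

```python
# Minimal sketch (illustrative data): for an underdetermined system A x = b with a
# sparse ground truth, the minimum 2-norm solution is dense while the minimum
# 1-norm (basis pursuit) solution tends to be sparse.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, n, k = 20, 50, 3                          # m equations, n unknowns, k true nonzeros
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
b = A @ x_true

x_l2 = np.linalg.pinv(A) @ b                 # minimum 2-norm solution: generally dense

# Basis pursuit: min ||x||_1 subject to A x = b, via the split x = u - v with u, v >= 0.
c = np.ones(2 * n)
res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=b, bounds=(0, None))
x_l1 = res.x[:n] - res.x[n:]

print("nonzeros (|x| > 1e-6):  l2 solution:", int(np.sum(np.abs(x_l2) > 1e-6)),
      "  l1 solution:", int(np.sum(np.abs(x_l1) > 1e-6)))
```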

ic982199.pdf (From Postscript)


Application of Basis Pursuit in Spectrum Estimation

Authors:

Scott Shaobing Chen, IBM (U.S.A.)
David L. Donoho, Stanford University (U.S.A.)

Volume 3, Page 1865, Paper number 5259

Abstract:

In this paper, we apply Basis Pursuit, an atomic decomposition technique, to spectrum estimation. Compared with several modern time series methods, our approach greatly reduces the problem of power leakage: it is able to superresolve; moreover, it works well with noisy and unevenly sampled signals. We present experiments on irregularly sampled radial velocity data from one of the newly discovered extrasolar planetary systems.
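A hedged sketch of the general approach, not the authors' implementation: an overcomplete dictionary of cosines and sines on a fine frequency grid is fit to noisy, unevenly sampled data under an l1 penalty (here via scikit-learn's Lasso as a basis-pursuit-denoising surrogate); the sample times, test frequencies, and penalty weight are assumptions made for the example.

```python
# Illustrative l1 spectrum estimate on unevenly sampled data (assumed test signal).
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0.0, 100.0, 80))                  # uneven sample times
y = np.cos(2 * np.pi * 0.11 * t) + 0.5 * np.sin(2 * np.pi * 0.13 * t)
y += 0.1 * rng.standard_normal(t.size)                    # additive noise

freqs = np.linspace(0.01, 0.5, 400)                       # fine frequency grid
D = np.hstack([np.cos(2 * np.pi * np.outer(t, freqs)),    # overcomplete dictionary
               np.sin(2 * np.pi * np.outer(t, freqs))])

fit = Lasso(alpha=0.02, fit_intercept=False, max_iter=50000).fit(D, y)
coef = fit.coef_
power = coef[:400] ** 2 + coef[400:] ** 2                 # sparse "line spectrum"
print("estimated line frequencies:", freqs[power > 0.01 * power.max()])
```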

ic985259.pdf (Scanned)


Parsimony and Wavelet Methods for Denoising

Authors:

Hamid Krim, MIT (U.S.A.)
Jean-Christophe Pesquet, University of Paris-Sud (France)
Irvin C. Schick, GTE Internetworking and Harvard University (U.S.A.)

Volume 3, Page 1869, Paper number 2465

Abstract:

Some wavelet-based methods for signal estimation in the presence of noise are reviewed in the context of the parsimonious representation of the underlying signal. Three approaches are considered. The first is based on the application of the MDL principle. The second improves the robustness of this method by relaxing the assumption of a known noise distribution, following Huber's work. In the third approach, a Bayesian strategy is adopted in order to incorporate prior information pertaining to the signal of interest; this method is especially useful at low signal-to-noise ratios.
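The sketch below illustrates only the shared parsimony idea, thresholding of an orthonormal wavelet transform, and corresponds to none of the three estimators in the paper; the hand-rolled Haar transform, the test signal, and the Donoho-Johnstone universal threshold are assumptions made for the example.

```python
# Illustrative wavelet denoising: soft-threshold an orthonormal Haar transform at
# the universal threshold sigma * sqrt(2 log n).  Signal length must be a power of 2.
import numpy as np

def haar_forward(x):
    """Full orthonormal Haar decomposition: list of detail arrays plus the final average."""
    details, a = [], x.astype(float)
    while a.size > 1:
        avg = (a[0::2] + a[1::2]) / np.sqrt(2.0)
        det = (a[0::2] - a[1::2]) / np.sqrt(2.0)
        details.append(det)
        a = avg
    return details, a

def haar_inverse(details, a):
    for det in reversed(details):
        out = np.empty(2 * a.size)
        out[0::2] = (a + det) / np.sqrt(2.0)
        out[1::2] = (a - det) / np.sqrt(2.0)
        a = out
    return a

rng = np.random.default_rng(2)
n, sigma = 1024, 0.3
t = np.linspace(0.0, 1.0, n)
clean = np.where(t < 0.5, 1.0, -0.5) + 0.2 * np.sin(8 * np.pi * t)   # piecewise test signal
noisy = clean + sigma * rng.standard_normal(n)

details, approx = haar_forward(noisy)
thr = sigma * np.sqrt(2.0 * np.log(n))                               # universal threshold
details = [np.sign(d) * np.maximum(np.abs(d) - thr, 0.0) for d in details]
denoised = haar_inverse(details, approx)

print("RMSE noisy:    %.3f" % np.sqrt(np.mean((noisy - clean) ** 2)))
print("RMSE denoised: %.3f" % np.sqrt(np.mean((denoised - clean) ** 2)))
```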

ic982465.pdf (From Postscript)


Parsimonious Side Propagation

Authors:

Paul S. Bradley, University of Wisconsin-Madison (U.S.A.)
Olvi L. Mangasarian, University of Wisconsin-Madison (U.S.A.)

Volume 3, Page 1873, Paper number 1221

Abstract:

A fast, parsimonious, linear-programming-based algorithm for training neural networks is proposed that suppresses redundant features while using a minimal number of hidden units. This is achieved by propagating the task of separating successive groups of unclassified points sideways to newly added hidden units. Computational results show improvements of 26.53% and 19.76% in tenfold cross-validation test correctness over a parsimonious perceptron on two publicly available datasets.
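The sketch below captures only the parsimony ingredient, a 1-norm-penalized linear separator posed as a single linear program, not the side-propagation of hidden units; the toy data, the regularization weight lam, and the function name parsimonious_separator are assumptions made for the example.

```python
# Illustrative 1-norm-regularized linear separation via one linear program: the
# 1-norm term drives the weights of redundant features to exactly zero.
import numpy as np
from scipy.optimize import linprog

def parsimonious_separator(Apos, Bneg, lam=0.05):
    """Find w, gamma with A w > gamma, B w < gamma using as few features as possible."""
    m1, n = Apos.shape
    m2 = Bneg.shape[0]
    # variable vector: [w (n), gamma (1), y (m1 slacks), z (m2 slacks), s (n), |w| <= s]
    c = np.concatenate([np.zeros(n + 1),
                        np.full(m1, 1.0 / m1), np.full(m2, 1.0 / m2),
                        np.full(n, lam)])
    A_ub = np.block([
        [-Apos, np.ones((m1, 1)), -np.eye(m1), np.zeros((m1, m2)), np.zeros((m1, n))],
        [Bneg, -np.ones((m2, 1)), np.zeros((m2, m1)), -np.eye(m2), np.zeros((m2, n))],
        [np.eye(n), np.zeros((n, 1 + m1 + m2)), -np.eye(n)],
        [-np.eye(n), np.zeros((n, 1 + m1 + m2)), -np.eye(n)],
    ])
    b_ub = np.concatenate([-np.ones(m1 + m2), np.zeros(2 * n)])
    bounds = [(None, None)] * (n + 1) + [(0, None)] * (m1 + m2 + n)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    return res.x[:n], res.x[n]

# Toy data: only the first two of ten features carry class information.
rng = np.random.default_rng(3)
Apos = rng.standard_normal((60, 10)) + np.r_[2.0, 2.0, np.zeros(8)]
Bneg = rng.standard_normal((60, 10))
w, gamma = parsimonious_separator(Apos, Bneg)
print("selected features:", np.where(np.abs(w) > 1e-6)[0])
```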

ic981221.pdf (From Postscript)


Fast Optimal and Suboptimal Algorithms for Sparse Solutions to Linear Inverse Problems

Authors:

Gopal Harikumar, Tellabs Research (U.S.A.)
Christophe Couvreur, University of Illinois, Urbana-Champaign (U.S.A.)
Yoram Bresler, University of Illinois, Urbana-Champaign (U.S.A.)

Volume 3, Page 1877, Paper number 5267

Abstract:

We present two "fast" approaches to the NP-hard problem of computing a maximally sparse approximate solution to linear inverse problems, also known as best subset selection. The first approach, a heuristic, is an iterative algorithm globally convergent to sparse elements of any given convex, compact set S in R^n. We demonstrate its effectiveness in bandlimited extrapolation and in sparse filter design. The second approach is a polynomial-time greedy sequential backward elimination algorithm. We show that if A has full column rank and e is small enough, then the algorithm finds the sparsest x satisfying ||Ax - b|| <= e, if such an x exists.
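A sketch in the spirit of the second (backward elimination) approach, without the paper's optimality analysis: starting from all columns of A, repeatedly drop the column whose removal increases the least-squares residual the least, as long as the residual stays within the tolerance e; the test problem and tolerance are assumptions made for the example.

```python
# Illustrative greedy sequential backward elimination for ||A x - b|| <= eps.
import numpy as np

def backward_eliminate(A, b, eps):
    support = list(range(A.shape[1]))

    def residual(cols):
        x, *_ = np.linalg.lstsq(A[:, cols], b, rcond=None)
        return np.linalg.norm(A[:, cols] @ x - b), x

    while len(support) > 1:
        # try removing each remaining column; keep the cheapest removal if still feasible
        best_res, best_j = min((residual([c for c in support if c != j])[0], j)
                               for j in support)
        if best_res > eps:
            break
        support.remove(best_j)

    _, x_s = residual(support)
    x = np.zeros(A.shape[1])
    x[support] = x_s
    return x

rng = np.random.default_rng(4)
A = rng.standard_normal((30, 12))                 # full column rank with high probability
x_true = np.zeros(12)
x_true[[2, 7]] = [1.5, -2.0]
b = A @ x_true + 0.01 * rng.standard_normal(30)
x_hat = backward_eliminate(A, b, eps=0.2)
print("support found:", np.nonzero(x_hat)[0])
```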

ic985267.pdf (Scanned)


Measures and Algorithms for Best Basis Selection

Authors:

Kenneth Kreutz-Delgado, University of California, San Diego (U.S.A.)
Bhaskar D. Rao, University of California, San Diego (U.S.A.)

Volume 3, Page 1881, Paper number 2190

Abstract:

A general framework based on majorization, Schur-concavity, and concavity is given that facilitates the analysis of algorithm performance and clarifies the relationships among previously proposed diversity measures useful for best basis selection. Admissible sparsity measures are given by the Schur-concave functions, which are the class of functions consistent with the partial ordering on vectors known as majorization. Concave functions form an important subclass of the Schur-concave functions; measures in this subclass attain their minima at sparse solutions of the basis selection problem. Based on a particular functional factorization of the gradient, we give a general affine scaling optimization algorithm that converges to a sparse solution for measures chosen from within this subclass.
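As a hedged illustration of this subclass, the sketch below runs a FOCUSS-style affine-scaling iteration for the diversity measure E(x) = sum_i |x_i|^p with 0 < p <= 1; the choice p = 0.5, the stopping rule, and the random test problem are assumptions, and the code is a simplified reading of the framework rather than the paper's exact algorithm.

```python
# Illustrative affine-scaling (FOCUSS-style) iteration: reweight by the current
# iterate, then solve a weighted minimum-norm problem.
import numpy as np

def focuss(A, b, p=0.5, n_iter=100, tol=1e-8):
    x = np.linalg.pinv(A) @ b                  # start from the minimum 2-norm solution
    for _ in range(n_iter):
        w = np.abs(x) ** (1.0 - p / 2.0)       # affine-scaling weights
        q = np.linalg.pinv(A * w) @ b          # A * w scales the columns of A by w
        x_new = w * q
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

rng = np.random.default_rng(5)
A = rng.standard_normal((15, 40))
x_true = np.zeros(40)
x_true[[3, 20, 31]] = [1.0, -2.0, 0.5]
b = A @ x_true                                 # noiseless measurements of a sparse vector
x_hat = focuss(A, b)
print("nonzeros found:", np.where(np.abs(x_hat) > 1e-4)[0])
```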

ic982190.pdf (From Postscript)


Sparse Inverse Solution Methods for Signal and Image Processing Applications

Authors:

Brian D Jeffs, Brigham Young University (U.S.A.)

Volume 3, Page 1885, Paper number 2309

Abstract:

This paper addresses image and signal processing problems where the result most consistent with prior knowledge is the minimum order, or "maximally sparse", solution. These problems arise in such diverse areas as astronomical star image deblurring, neuromagnetic image reconstruction, seismic deconvolution, and thinned array beamformer design. An optimization theoretic formulation for sparse solutions is presented, and its relationship to the MUSIC algorithm is discussed. Two algorithms for sparse inverse problems are introduced, and examples of their application to beamforming array design and star image deblurring are presented.
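A minimal sketch of sparse inversion in this spirit, corresponding to neither of the paper's two algorithms: a blurred spike train is recovered by greedy orthogonal matching pursuit over the columns of the blur matrix; the Gaussian point-spread function, spike locations, and sparsity level k are assumptions made for the example.

```python
# Illustrative sparse deconvolution by orthogonal matching pursuit (OMP).
import numpy as np

def omp(A, b, k):
    """Greedily select k columns of A, refitting the selected set by least squares."""
    support, r = [], b.copy()
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ r)))           # most correlated column
        if j not in support:
            support.append(j)
        x_s, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
        r = b - A[:, support] @ x_s                   # update the residual
    x = np.zeros(A.shape[1])
    x[support] = x_s
    return x

n = 200
psf = np.exp(-0.5 * (np.arange(-10, 11) / 3.0) ** 2)  # assumed Gaussian blur kernel
A = np.array([np.convolve(np.eye(n)[i], psf, mode="same") for i in range(n)]).T

rng = np.random.default_rng(6)
spikes = np.zeros(n)
spikes[[40, 90, 120, 160]] = [2.0, 1.5, -1.0, 0.8]    # sparse "star field"
blurred = A @ spikes + 0.02 * rng.standard_normal(n)

estimate = omp(A, blurred, k=4)
print("recovered spike locations:", np.nonzero(estimate)[0])
```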

ic982309.pdf (From Postscript)


Image Denoising Using Multiple Compaction Domains

Authors:

Prakash Ishwar, University of Illinois, Urbana-Champaign (U.S.A.)
Krishna C. Ratakonda, University of Illinois, Urbana-Champaign (U.S.A.)
Pierre Moulin, University of Illinois, Urbana-Champaign (U.S.A.)
Narendra Ahuja, University of Illinois, Urbana-Champaign (U.S.A.)

Volume 3, Page 1889, Paper number 5249

Abstract:

We present a novel framework for denoising signals from their compact representation in multiple domains. Each domain captures certain signal characteristics better than the others. We define confidence sets around the data in each domain and, using a POCS algorithm, find sparse estimates that lie in the intersection of these sets. Simulations demonstrate the superior quality of the reconstruction (in both mean-square error and perceptual terms) compared with the adaptive Wiener filter.
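A hedged sketch of the POCS mechanics only: elementwise confidence bands are placed around the noisy data's coefficients in two transform domains (here the DCT and the identity domain, illustrative stand-ins for the paper's compaction domains), and alternating projections are run from the zero signal so that the limit point keeps coefficients as small as the bands allow; the band half-width of 3 sigma and the test signal are assumptions made for the example.

```python
# Illustrative POCS denoising with confidence bands in two transform domains.
import numpy as np
from scipy.fft import dct, idct

dct_o = lambda v: dct(v, norm="ortho")           # orthonormal DCT-II and its inverse
idct_o = lambda v: idct(v, norm="ortho")

def project_band(x, center, delta, fwd=lambda v: v, inv=lambda v: v):
    """Project x onto the convex set {v : |fwd(v) - center| <= delta elementwise}."""
    return inv(np.clip(fwd(x), center - delta, center + delta))

rng = np.random.default_rng(7)
n, sigma = 256, 0.1
coeffs = np.zeros(n)
coeffs[[8, 21, 50, 75, 130]] = [3.0, -2.0, 1.5, 2.5, -1.0]   # signal sparse in the DCT
clean = idct_o(coeffs)
noisy = clean + sigma * rng.standard_normal(n)

delta = 3.0 * sigma                              # confidence band half-width (assumed)
center_time, center_dct = noisy, dct_o(noisy)

x = np.zeros(n)                                  # start from zero: favours small coefficients
for _ in range(50):                              # alternating projections (POCS)
    x = project_band(x, center_dct, delta, dct_o, idct_o)
    x = project_band(x, center_time, delta)

print("RMSE noisy:    %.3f" % np.sqrt(np.mean((noisy - clean) ** 2)))
print("RMSE estimate: %.3f" % np.sqrt(np.mean((x - clean) ** 2)))
```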

ic985249.pdf (Scanned)
