Abstract - DSP4


 
DSP4.1

   
Fast, Accurate Subspace Tracking Using Operator Restriction Analysis
C. MacInnes  (NUWC, USA)
A new noniterative subspace tracking method is presented. This method, called operator restriction analysis (OPERA), can be used whenever an update to the principal components of an EVD or SVD of a rank-one update of a given matrix is needed. The updating algorithms are based on the technique of restricting a linear operator to a subspace, and on the two concepts of invariant subspace updating and its generalization, singular subspace updating. The algorithm is shown to be as accurate as an EVD or SVD. An application is made to bearing estimation of highly nonstationary sources. Similarities and differences with other subspace tracking algorithms are discussed. Flop counts, tracking accuracy and signal subspace accuracy for OPERA are compared with those of other fast algorithms and with the EVD.
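
For orientation, the generic problem addressed here is refreshing a principal subspace after a rank-one modification without recomputing a full EVD/SVD. The sketch below is not OPERA; it is a minimal Brand-style incremental SVD update in NumPy, assuming an exponentially windowed data stream with an illustrative forgetting factor lam and a fixed tracking rank r.

    import numpy as np

    def rank_one_subspace_update(U, s, x, lam=0.99):
        """Update a rank-r left singular subspace (U, s) after appending one
        new snapshot x. Generic incremental SVD, not the paper's OPERA method."""
        r = U.shape[1]
        p = U.T @ x                       # component of x inside the current subspace
        e = x - U @ p                     # residual orthogonal to the subspace
        rho = np.linalg.norm(e)
        e = e / rho if rho > 1e-12 else np.zeros_like(e)

        # Small (r+1) x (r+1) core matrix whose SVD yields the updated subspace
        K = np.zeros((r + 1, r + 1))
        K[:r, :r] = np.sqrt(lam) * np.diag(s)
        K[:r, r] = p
        K[r, r] = rho

        Uk, sk, _ = np.linalg.svd(K)
        U_new = np.hstack([U, e[:, None]]) @ Uk[:, :r]   # rotate and truncate to rank r
        return U_new, sk[:r]

    # usage sketch: track the dominant 2-D subspace of streaming 8-D snapshots
    rng = np.random.default_rng(0)
    U, s = np.linalg.qr(rng.standard_normal((8, 2)))[0], np.ones(2)
    for _ in range(200):
        x = rng.standard_normal(8)
        U, s = rank_one_subspace_update(U, s, x)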
 
DSP4.2

   
Recursive Methods for Estimating Multiple Missing Values of a Multivariate Stationary Process
P. Bondon  (CNRS, France);   D. Ruiz, A. Gallego  (Univ. Granada, Spain)
Existing methods for linearly estimating s future values of an m-variate stationary random process from a record of p past vectors consist of first solving the one-step prediction problem and then solving each of the h-step prediction problems for h>1 independently. When the Levinson algorithm is used, each prediction problem is solved with a numerical complexity proportional to p^2. In this paper, we propose new methods that solve the h-step prediction problems for h>1 with a numerical complexity proportional to p.
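
As background, the one-step problem referred to above is classically solved by the Levinson-Durbin recursion at O(p^2) cost. A minimal scalar sketch is given below (the paper itself treats the multivariate case and the h-step problems); the array r is assumed to hold the autocovariances r(0),...,r(p).

    import numpy as np

    def levinson_durbin(r, p):
        """One-step forward linear predictor from autocovariances r[0..p].
        Scalar case only; cost is O(p^2)."""
        a = np.zeros(p + 1)   # predictor polynomial, a[0] = 1
        a[0] = 1.0
        e = r[0]              # prediction-error variance
        for m in range(1, p + 1):
            acc = r[m] + np.dot(a[1:m], r[m-1:0:-1])
            k = -acc / e                      # reflection (PARCOR) coefficient
            a_prev = a[1:m].copy()
            a[1:m] = a_prev + k * a_prev[::-1]
            a[m] = k
            e *= (1.0 - k * k)
        return a, e

    def predict_one_step(a, past):
        """Predict x[n] from past = [x[n-1], ..., x[n-p]]."""
        return -np.dot(a[1:], past)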
 
DSP4.3

   
A generalized Schur algorithm in the Krein space and its application to H-Infinity filtering
K. Kim, J. Chun  (KAIST, Korea)
This paper introduces a generalized Schur algorithm in the Krein space, which carries an indefinite inner product. Concepts such as the Carathéodory and Schur classes used in the classical Schur algorithm cannot be applied in the Krein space, since positive-definiteness there corresponds merely to nonsingularity. We note also that these problems appear when fast algorithms for suboptimal $H^\infty$ filtering are implemented. We derive the extended Chandrasekhar algorithm, a fast implementation of $H^\infty$ filtering, and explain the connection between the generalized Schur algorithm and the Chandrasekhar algorithm. Using this result, it is possible to derive a fast algorithm for suboptimal $H^\infty$ filtering.
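
As a brief reminder of the setting (standard Krein-space material, not taken from the paper): the indefinite inner product is $\langle x, y\rangle_J = y^* J x$ with fundamental symmetry $J = \mathrm{diag}(I_p, -I_q)$, so $\langle x, x\rangle_J$ can be zero or negative for $x \neq 0$, and a nonsingular Gramian need not be positive definite, which is exactly why the Hilbert-space Carathéodory/Schur-class arguments break down.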
 
DSP4.4

   
Matrix Sign Algorithm for Sinusoidal Frequency and DOA Estimation Problems
M. Hasan  (University of Minnesota, Duluth, USA);   J. Hasan  (University of Baghdad, Iraq)
Fast algorithms based on the matrix sign function are developed to estimate the signal and noise subspaces of sample correlation matrices. These subspaces are then utilized to develop high resolution methods such as MUSIC and ESPRIT for sinusoidal frequency and direction of arrival problems. The main feature of these algorithms is that they generate subspaces that are parameterized by the signal-to-noise ratio. Significant computational savings are obtained due to the fast convergence of these higher order iterations and to the fact that subspaces, rather than individual eigenvectors, are actually computed. Simulations showing the performance of these methods are also presented.
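
For orientation, the basic idea can be sketched with the classical Newton iteration for the matrix sign function, S_{k+1} = (S_k + S_k^{-1})/2: applied to R - tau*I it drives the eigenvalues to +/-1, and a projector onto the eigenvectors above the threshold tau follows directly. The snippet below is only this baseline iteration, not the paper's higher-order iterations; tau is an assumed SNR-dependent threshold separating signal and noise eigenvalues.

    import numpy as np

    def sign_subspace(R, tau, iters=30):
        """Estimate the signal subspace of a correlation matrix R as the invariant
        subspace of eigenvalues above tau, via the Newton matrix-sign iteration."""
        n = R.shape[0]
        S = R - tau * np.eye(n)
        for _ in range(iters):
            S = 0.5 * (S + np.linalg.inv(S))      # Newton step; eigenvalues -> +/-1
        P = 0.5 * (np.eye(n) + S)                 # projector onto eigenvalues > tau
        Q, _, _ = np.linalg.svd(P)                # orthonormal basis of the range of P
        rank = int(round(np.trace(P)))
        return Q[:, :rank]

    # usage sketch: a rank-2 "signal" component buried in unit-variance noise
    rng = np.random.default_rng(1)
    A = rng.standard_normal((8, 2))
    R = A @ A.T * 5.0 + np.eye(8)                 # signal eigenvalues well above 1
    U_sig = sign_subspace(R, tau=2.0)             # tau chosen between noise and signal levels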
 
DSP4.5

   
A Delta Least Squares Lattice Algorithm for Fast Sampling
P. De, H. Fan  (University of Cincinnati, USA)
Most shift operator-based adaptive algorithms exhibit poor numerical behavior when the input discrete time process is obtained from a continuous time process by fast sampling. This includes the shift operator-based least squares lattice algorithm. In this paper, we develop a delta least squares lattice algorithm. This algorithm has lower computational complexity than the delta Levinson RLS algorithm and better numerical properties than the shift least squares lattice algorithm. Computer simulations show that the new algorithm also outperforms an existing delta least squares lattice algorithm.
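
To illustrate the fast-sampling issue (standard background, not the paper's lattice recursions): as the sampling period T shrinks, shift-operator poles z = e^{sT} all crowd toward z = 1, while the delta-operator variables gamma = (z - 1)/T converge to the underlying continuous-time poles s, which is the source of the delta parameterization's better numerical behavior. The pole values below are illustrative.

    import numpy as np

    s_poles = np.array([-1.0, -5.0, -20.0])   # continuous-time poles of a stable system

    for T in [1e-1, 1e-2, 1e-3, 1e-4]:
        z = np.exp(s_poles * T)          # shift-operator (q-domain) poles: drift toward 1
        gamma = (z - 1.0) / T            # delta-operator poles: approach the continuous poles
        print(f"T={T:7.0e}  z={np.round(z, 6)}  gamma={np.round(gamma, 3)}")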
 
DSP4.6

   
A New DCT Algorithm Based on Encoding Algebraic Integers
V. Dimitrov, G. Jullien, W. Miller  (University of Windsor, Canada)
In this paper we introduce an algebraic integer encoding scheme for the basis matrix elements of 8x8 DCTs and IDCTs. In particular, we encode the function cos(pi/16) and generate the other matrix elements using standard trigonometric identities. This encoding technique eliminates the requirement to approximate the matrix elements; rather, we use algebraic ‘placeholders’ for them. Using this encoding scheme we are able to produce a multiplication-free implementation of the Feig-Winograd algorithm.
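
A small numerical check of the underlying idea (an illustration of algebraic-integer encoding in general, not the paper's specific scheme): since cos(k*pi/16) = T_k(cos(pi/16)) with T_k the degree-k Chebyshev polynomial, every cosine in the 8x8 DCT basis can be carried exactly as an integer coefficient vector in the single quantity cos(pi/16), and only the final reconstruction introduces a real-valued approximation.

    import numpy as np

    c = np.cos(np.pi / 16)              # the single irrational "seed" value

    # Integer coefficient vectors (lowest degree first) for Chebyshev T_k,
    # built from T_0 = 1, T_1 = x, T_{k+1}(x) = 2x T_k(x) - T_{k-1}(x).
    T = [np.array([1]), np.array([0, 1])]
    for k in range(2, 8):
        prev, cur = T[k - 2], T[k - 1]
        nxt = np.zeros(k + 1, dtype=int)
        nxt[1:] += 2 * cur              # 2x * T_k
        nxt[:len(prev)] -= prev         # - T_{k-1}
        T.append(nxt)

    for k in range(8):
        # cos(k*pi/16) = T_k(cos(pi/16)); T[k] is its exact integer encoding
        approx = np.polyval(T[k][::-1], c)
        assert np.isclose(approx, np.cos(k * np.pi / 16))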
 
DSP4.7

   
FFTW: An Adaptive Software Architecture for the FFT
M. Frigo  (MIT LCS, USA);   S. Johnson  (MIT, USA)
FFT literature has been mostly concerned with minimizing the number of floating-point operations performed by an algorithm. Unfortunately, on present-day microprocessors this measure is far less important than it used to be, and interactions with the processor pipeline and the memory hierarchy have a larger impact on performance. Consequently, one must know the details of a computer architecture in order to design a fast algorithm. In this paper, we propose an adaptive FFT program that tunes the computation automatically for any particular hardware. We compared our program, called FFTW, with over 40 implementations of the FFT on 7 machines. Our tests show that FFTW's self-optimizing approach usually yields significantly better performance than all other publicly available software. FFTW also compares favorably with machine-specific, vendor-optimized libraries.
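
The measure-then-execute idea can be caricatured in a few lines: time a couple of candidate routines for the transform size at hand, keep the winner, and reuse it for every subsequent transform of that size. The sketch below is a deliberately tiny stand-in (a direct DFT versus a recursive radix-2 FFT, timed with timeit), not FFTW's planner or its generated codelets.

    import numpy as np
    import timeit

    def dft_direct(x):
        """O(n^2) direct DFT via an explicit transform matrix."""
        n = len(x)
        k = np.arange(n)
        return np.exp(-2j * np.pi * np.outer(k, k) / n) @ x

    def fft_radix2(x):
        """O(n log n) recursive radix-2 FFT (n must be a power of two)."""
        n = len(x)
        if n == 1:
            return x.astype(complex)
        even, odd = fft_radix2(x[0::2]), fft_radix2(x[1::2])
        tw = np.exp(-2j * np.pi * np.arange(n // 2) / n) * odd
        return np.concatenate([even + tw, even - tw])

    def plan(n, trials=20):
        """'Planner': time the candidate strategies for size n on this machine
        and return the faster one for reuse on all later transforms of that size."""
        x = np.random.randn(n) + 1j * np.random.randn(n)
        candidates = [dft_direct, fft_radix2]
        times = [timeit.timeit(lambda f=f: f(x), number=trials) for f in candidates]
        return candidates[int(np.argmin(times))]

    best = plan(1024)                        # measure once ...
    X = best(np.random.randn(1024))          # ... then execute with the chosen routine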
 
DSP4.8

   
Mapped Inverse Discrete Wavelet Transform for Data Compression
H. Guo  (Chromatic Research, USA)
The Discrete Wavelet Transform (DWT) has been applied to data compression to decorrelate the data and concentrate the energy in a small portion of the coefficients. Compression can be achieved since most of the quantized wavelet coefficients are zeros. For the decoder, the traditional inverse discrete wavelet transform (IDWT) has a complexity proportional to the size of the data. In this paper, we propose a mapped inverse discrete wavelet transform algorithm (MIDWT) that takes advantage of the sparsity of the quantized wavelet coefficients and significantly lowers the complexity of the IDWT, to a level proportional to the number of non-zero coefficients. We further generalize the MIDWT to progressive decoding, and propose a realization of progressive IDWT without any run-time multiplication operations. Experiments show that our algorithms outperform the traditional IDWT for sparse coefficients, especially for progressive decompression.
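
The gist of exploiting sparsity in synthesis can be shown for the simplest case, a single-level Haar inverse transform: instead of filtering full upsampled coefficient arrays, only the non-zero coefficients scatter their two-tap contributions into the output, so the cost scales with the number of non-zeros. This is a single-level Haar illustration only, not the paper's MIDWT or its multiplication-free progressive variant.

    import numpy as np

    def sparse_idwt_haar(approx, detail):
        """Single-level inverse Haar DWT that touches only non-zero coefficients."""
        n = 2 * len(approx)
        x = np.zeros(n)
        s = 1.0 / np.sqrt(2.0)
        # a[k] contributes s*(a[k], a[k]) at positions 2k, 2k+1;
        # d[k] contributes s*(d[k], -d[k]) at the same positions.
        for k in np.flatnonzero(approx):
            x[2*k]     += s * approx[k]
            x[2*k + 1] += s * approx[k]
        for k in np.flatnonzero(detail):
            x[2*k]     += s * detail[k]
            x[2*k + 1] -= s * detail[k]
        return x

    # usage sketch: only two non-zero coefficients, so only two scatter operations
    a, d = np.zeros(8), np.zeros(8)
    a[0], d[3] = 4.0, -2.0
    x = sparse_idwt_haar(a, d)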
 
DSP4.9

   
A Fast Orthogonal Matching Pursuit Algorithm
M. Gharavi-Alkhansari, T. Huang  (University of Illinois, Urbana-Champaign, USA)
The problem of optimally approximating members of a vector space by a linear combination of members of a large overcomplete library of vectors is important in many areas, including image and video coding, image analysis, control theory, and statistics. Finding the optimal solution in the general case is mathematically intractable. Matching pursuit, and its orthogonal version, provide greedy solutions to this problem. Orthogonal matching pursuit typically provides significantly better solutions than the nonorthogonal version, but requires much more computation. This paper presents a fast algorithm for implementing orthogonal matching pursuit which, for many coding applications, has a computational complexity very close to that of the nonorthogonal version.
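
For reference, the baseline orthogonal matching pursuit that is being accelerated looks like this: a greedy correlation step followed by a full least-squares re-fit of all selected atoms at every iteration, which is where the extra cost over plain matching pursuit comes from. The sketch below is that baseline (dictionary columns assumed unit-norm), not the paper's fast implementation.

    import numpy as np

    def omp(D, y, n_atoms):
        """Baseline orthogonal matching pursuit.
        D: (m x N) dictionary with unit-norm columns, y: target, n_atoms: sparsity."""
        residual = y.copy()
        support, coeffs = [], np.zeros(0)
        for _ in range(n_atoms):
            # Greedy step: atom most correlated with the current residual
            idx = int(np.argmax(np.abs(D.T @ residual)))
            support.append(idx)
            # Orthogonal step: re-fit all selected atoms by least squares
            coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
            residual = y - D[:, support] @ coeffs
        return support, coeffs

    # usage sketch: recover a 2-sparse combination of dictionary atoms
    rng = np.random.default_rng(2)
    D = rng.standard_normal((64, 256))
    D /= np.linalg.norm(D, axis=0)
    y = D[:, [3, 77]] @ np.array([1.5, -0.8])
    support, coeffs = omp(D, y, n_atoms=2)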
 
DSP4.10

   
Four Step Genetic Search for Block Motion Estimation
M. So, A. Wu  (City University of Hong Kong, P R China)
Genetic algorithms (GA) are well known for searching for global maxima and minima. In general, the number of search points required by a GA to locate a global extremum is much lower than that of an exhaustive search. The GA has been applied to the block matching algorithm (BMA) and has demonstrated its capability there: the mean square error (MSE) performance of GA-based BMA is close to that of full search (FS). However, the disadvantage of the GA is its computational requirement, which limits practical use. A four-step genetic search algorithm is proposed for BMA. The proposed method combines the advantages of the GA and the four-step search (4SS). Simulation results show that the proposed method has performance similar to full search (FS) in terms of MSE. In addition, the number of search points required by the proposed algorithm is approximately 14% of that of FS and close to that of the three-step search (3SS). The speed-up of the proposed algorithm over FS is 5.6 times.
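
A generic GA-based block matcher (not the proposed four-step hybrid) can be sketched as follows: the chromosome is a candidate motion vector, fitness is the negative block-matching error, and selection, crossover and mutation breed successive populations of candidate vectors inside the search window. Block size, population size, mutation step and generation count below are illustrative choices.

    import numpy as np

    def block_cost(ref, cur, bx, by, dx, dy, B=16):
        """Sum of absolute differences between the current block and the displaced
        reference block; candidates falling outside the frame are penalized."""
        h, w = ref.shape
        x, y = bx + dx, by + dy
        if x < 0 or y < 0 or x + B > w or y + B > h:
            return np.inf
        return np.abs(cur[by:by+B, bx:bx+B].astype(float)
                      - ref[y:y+B, x:x+B].astype(float)).sum()

    def ga_block_match(ref, cur, bx, by, B=16, search=7,
                       pop_size=12, generations=8, rng=np.random.default_rng(3)):
        """Generic genetic search for the best motion vector of one block."""
        pop = rng.integers(-search, search + 1, size=(pop_size, 2))
        for _ in range(generations):
            cost = np.array([block_cost(ref, cur, bx, by, dx, dy, B) for dx, dy in pop])
            parents = pop[np.argsort(cost)[:pop_size // 2]]          # keep the fitter half
            # crossover: child takes dx from one parent, dy from another
            idx = rng.integers(0, len(parents), size=(pop_size - len(parents), 2))
            children = np.stack([parents[idx[:, 0], 0], parents[idx[:, 1], 1]], axis=1)
            # mutation: small random perturbation, clipped to the search window
            children += rng.integers(-1, 2, size=children.shape)
            children = np.clip(children, -search, search)
            pop = np.vstack([parents, children])
        cost = np.array([block_cost(ref, cur, bx, by, dx, dy, B) for dx, dy in pop])
        return tuple(pop[int(np.argmin(cost))])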
 
