ICASSP '98
Abstract - DSP12


 
DSP12.1

   
Analysis of a Delayless Subband Adaptive Filter Structure
P. Diniz, R. Merched, M. Petraglia  (COPPE/EE/UFRJ, Brazil)
In this paper, we present an analysis of the delayless subband adaptive filter structure previously proposed by the authors. We derive a simple expression for the excess MSE of the proposed structure, and show that its computational complexity is up to 3.7 times lower than that of the corresponding fullband LMS structure. We also establish a connection between subband and block adaptive filtering, in which the latter can be interpreted as a special case of the former. Computer simulations are presented in order to verify the performance of the proposed structure and the theoretical results.
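For reference, the fullband LMS filter that serves as the complexity baseline above can be sketched in a few lines. This is the standard textbook LMS update, not the authors' subband structure; the function name and step size are illustrative choices.

```python
import numpy as np

def lms_step(w, x, d, mu=0.05):
    """One fullband LMS update (the baseline the abstract compares against).

    w : current tap-weight vector
    x : current regressor (tapped-delay-line input vector)
    d : desired response sample
    """
    e = d - w @ x          # a-priori estimation error
    return w + mu * e * x, e
```

Running this on a system-identification setup (white input, known FIR plant) drives the weights to the plant's impulse response; the subband structure in the abstract reaches comparable performance at lower cost.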
 
DSP12.2

   
A Comparison of Initialization Schemes for Blind Adaptive Beamforming
T. Biedka  (Raytheon E-Systems, USA)
Many blind adaptive beamforming algorithms require the selection of one or more non-zero initial weight vectors. Proper selection of the initial weight vectors can speed algorithm convergence and help ensure convergence to the desired solutions. Three alternative initialization approaches are compared here, all of which depend only on second order statistics of the observed data. These methods are based on Gram-Schmidt orthogonalization, eigendecomposition, and QR-decomposition of the observed data covariance matrix. We show through computer simulations that the eigendecomposition approach yields the best performance.
 
DSP12.3

   
MSE Analysis of the M-Max NLMS Adaptive Algorithm
T. Aboulnasr  (University of Ottawa, Canada);   K. Mayyas  (Jordan University of Science and Tech, Jordan)
In this paper, we provide a mean square analysis of the M-Max NLMS (MMNLMS) adaptive algorithm introduced in [1]. The algorithm selects, at each iteration, a specified number of coefficients that provide the largest reduction in the error. It is shown that while the MMNLMS algorithm reduces the complexity of the adaptive filter, it maintains the closest performance to the full update NLMS filter for a given number of updates. The stability of the algorithm is shown to be guaranteed for the extreme case of only one update/iteration. Analysis of the MSE convergence and steady state performance for i.i.d. signals is also provided for that extreme case.
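The selective-update idea analysed above can be sketched as follows: at each iteration, apply the NLMS correction only to the M taps whose current input samples have the largest magnitude. This is a generic sketch of the M-Max principle, not necessarily the exact formulation of [1]; the normalization choice and parameter values are assumptions.

```python
import numpy as np

def mmax_nlms_step(w, x, d, mu=0.5, M=4, eps=1e-8):
    """One M-Max NLMS update: only the M taps with the largest |x_i|
    are updated, reducing complexity relative to full-update NLMS."""
    e = d - w @ x
    idx = np.argsort(np.abs(x))[-M:]            # M taps with largest |x_i|
    w = w.copy()
    w[idx] += mu * e * x[idx] / (eps + x @ x)   # partial normalized update
    return w, e
```

Setting M equal to the filter length recovers the ordinary NLMS update, which is the sense in which the partial-update filter stays "closest" to the full-update one.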
 
DSP12.4

   
Analysis of the Sign-Sign Algorithm Based on Gaussian Distributed Tap Weights
S. Koike  (NEC Corporation, Japan)
In this paper, a new set of difference equations is derived for the convergence analysis of adaptive filters using the Sign-Sign Algorithm with a Gaussian input reference and additive Gaussian noise. The analysis is based on the assumption that the tap weights are jointly Gaussian distributed. The residual mean squared error after convergence and simpler approximate difference equations are further developed. Experimental results exhibit good agreement between the theoretically calculated convergence and that observed in simulation for a wide range of adaptive filter parameter values.
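The algorithm being analysed is the classical sign-sign variant of LMS, in which both the error and the regressor enter the update only through their signs, so each tap update needs no multiplications. A minimal sketch (the paper analyses this algorithm rather than proposing it; the step size here is illustrative):

```python
import numpy as np

def sign_sign_step(w, x, d, mu=2e-3):
    """One Sign-Sign LMS update: w <- w + mu * sign(e) * sign(x)."""
    e = d - w @ x                                  # a-priori error
    return w + mu * np.sign(e) * np.sign(x), e    # multiplier-free update
```

Each tap moves by exactly ±mu per iteration, which is what makes the tap-weight distribution (assumed jointly Gaussian in the analysis) the key modelling question.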
 
DSP12.5

   
Mean-Squared Error Analysis of the Binormalized Data-Reusing LMS Algorithm Using a Discrete-Angular-Distribution Model for the Input Signal
M. De Campos  (Instituto Militar de Engenharia, Brazil);   J. Apolinario, Jr., P. Diniz  (COPPE/UFRJ, Brazil)
Providing a quantitative mean-squared error analysis of adaptation algorithms is of great importance for determining their usefulness and for comparison with other algorithms. However, when the algorithm reutilizes previous data, such analysis becomes very involved, as the independence assumption cannot be used. In this paper, a thorough mean-squared-error analysis of the binormalized data-reusing LMS algorithm is carried out. The analysis is based on a simplified model for the input-signal vector, assuming independence between the continuous radial probability distribution and the discrete angular probability distribution. Throughout the analysis only parallel and orthogonal input-signal vectors are used in order to obtain a closed-form formula for the excess mean-squared error. The formula agrees closely with simulation results even when the input-signal vector is generated by a tapped delay line. Furthermore, the analysis can be readily extended to other algorithms with similar expected accuracy.
 
DSP12.6

   
On a Perturbation Approach for the Analysis of Stochastic Tracking Algorithms
E. Moulines  (Ecole Nationale Superieure des Telecommunications, France);   P. Priouret, R. Aguech  (Universite de Paris VI, France)
In this paper, a perturbation expansion technique is introduced to decompose the tracking error of a general adaptive tracking algorithm in a linear regression model. This method allows one to obtain not only tracking-error bounds but also tight approximate expressions for the moments of the tracking error. These expressions make it possible to evaluate, both qualitatively and quantitatively, the impact on tracking performance of several factors that have been overlooked in previous contributions.
 
DSP12.7

   
Numerical Properties of the Linearly Constrained QRD-RLS Adaptive Filter
J. Huo, Y. Leung  (Curtin University of Technology, Australia)
Shepherd and McWhirter proposed a QRD-RLS algorithm for adaptive filtering with linear constraints. In this paper, the numerical properties of this algorithm are considered. In particular, it is shown that the computed weight vector satisfies a set of constraints that are perturbed from the original ones, with the size of the perturbation depending on the wordlength. The linearly constrained FLS algorithm of Resende et al. is also studied. Simulation results show that this algorithm is numerically unstable even in the absence of explosive divergence.
 
DSP12.8

   
Analysis of the Euclidean Direction Set Adaptive Algorithm
G. Xu, T. Bose  (University of Colorado, USA)
A mathematical analysis is performed on a recently reported gradient-based adaptive algorithm named the Euclidean Direction Set (EDS) method. It has been shown that the EDS algorithm has a computational complexity of O(N) for each system update and a rate of convergence (based on computer simulations) comparable to the RLS algorithm. In this paper, the stability of the EDS method is studied and it is shown that the algorithm converges to the true solution. It is also proved that the convergence rate of the EDS method is superior to that of the steepest descent method.
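The core idea behind Euclidean-direction methods is to minimise the quadratic adaptive-filtering cost by exact line searches along the coordinate (Euclidean) directions in turn. The sketch below illustrates that principle on the cost J(w) = w'Rw/2 - p'w; it is an assumption-laden illustration of the idea only, since the reported O(N) EDS algorithm organises these updates recursively rather than as explicit sweeps.

```python
import numpy as np

def eds_sweep(w, R, p):
    """One sweep of coordinate-wise exact line searches for
    J(w) = 0.5 * w'Rw - p'w, whose minimiser solves R w = p."""
    w = w.copy()
    for i in range(len(w)):
        g = R[i] @ w - p[i]    # i-th component of the gradient of J
        w[i] -= g / R[i, i]    # exact minimiser of J along direction e_i
    return w
```

For a symmetric positive-definite R (as with an input autocorrelation matrix), repeated sweeps converge to the Wiener solution R⁻¹p, consistent with the abstract's claim that the method converges to the true solution.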
 
DSP12.9

   
Quantifying the Accuracy of Adaptive Tracking Algorithms
B. Ninness, J. Gomez  (University of Newcastle, Australia)
The use of adaptive algorithms such as Kalman filtering, LMS and RLS together with FIR model structures is very common and extensively analysed. In the interests of improved performance, an extension of the FIR structure has been proposed in which the fixed poles are not all at the origin, but are instead chosen by prior knowledge to be close to where the true poles are. Existing FIR analysis would indicate that the noise and tracking properties of such a scheme are invariant to the choice of fixed pole location. This paper establishes both numerically and theoretically that this is in fact not the case. Instead, the dependence on fixed pole location is made explicit by deriving frequency-domain expressions obtained using new results on generalised Fourier series and generalised Toeplitz matrices.
 
DSP12.10

   
A Unifying View of Error Nonlinearities in LMS Adaptation
T. Al-Naffouri  (Georgia Institute of Technology, USA);   A. Zerguine, M. Bettayeb  (King Fahd University of Petroleum and Minerals, Saudi Arabia)
This paper presents a unifying view of various error nonlinearities that are used in least mean square (LMS) adaptation, such as the least mean fourth (LMF) algorithm and its family and the least-mean mixed-norm algorithm. Specifically, it is shown that the LMS algorithm and its error-modified variants are approximations of two newly proposed optimum nonlinearities, which are expressed in terms of the additive noise probability density function (pdf). This is demonstrated through an approximation of the optimum nonlinearities obtained by expanding the noise pdf in a Gram-Charlier series. Thus, a link is established between intuitively proposed and theoretically justified variants of the LMS algorithm. The approximation also has a practical advantage in that it provides a trade-off between simplicity and more accurate realization of the optimum nonlinearities.
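As a concrete member of the family being unified: the LMF algorithm replaces the LMS error term e with the nonlinearity f(e) = e³ (the stochastic gradient of E[e⁴]/4). A minimal sketch, with an illustrative step size (the paper's contribution is the unifying analysis, not this update rule itself):

```python
import numpy as np

def lmf_step(w, x, d, mu=0.01):
    """One Least-Mean-Fourth (LMF) update: the error enters through
    the cubic nonlinearity f(e) = e**3 instead of LMS's f(e) = e."""
    e = d - w @ x
    return w + mu * (e ** 3) * x, e
```

Swapping `e ** 3` for `e` recovers plain LMS, and a convex combination of the two gives a least-mean mixed-norm update, which is exactly the kind of variant the proposed optimum nonlinearities approximate.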
 
