Chair: V. Stonick, Oregon State University, USA
Paulo S.R. Diniz, COPPE/EE/UFRJ (Brazil)
Ricardo Merched, COPPE/EE/UFRJ (Brazil)
Mariane R Petraglia, COPPE/EE/UFRJ (Brazil)
In this paper, we present an analysis of the delayless subband adaptive filter structure previously proposed by the authors. We derive a simple expression for the excess MSE of the proposed structure and show that it requires up to 3.7 times lower computational complexity than the corresponding fullband LMS structure. We also establish a connection between subband and block adaptive filtering, whereby the latter can be interpreted as a special case of the former. Computer simulations are presented in order to verify the performance of the proposed structure and the theoretical results.
Thomas E Biedka, Raytheon E-Systems (U.S.A.)
Many blind adaptive beamforming algorithms require the selection of one or more non-zero initial weight vectors. Proper selection of the initial weight vectors can speed algorithm convergence and help ensure convergence to the desired solutions. Three alternative initialization approaches are compared here, all of which depend only on second order statistics of the observed data. These methods are based on Gram-Schmidt orthogonalization, eigendecomposition, and QR-decomposition of the observed data covariance matrix. We show through computer simulations that the eigendecomposition approach yields the best performance.
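For illustration, a minimal numpy sketch of the eigendecomposition-based initialization idea follows, assuming the initial weight vectors are taken as the dominant eigenvectors of the sample covariance matrix; the exact construction studied in the paper may differ.

```python
import numpy as np

def eig_init_weights(snapshots, num_weights):
    """Initialize beamformer weight vectors from the dominant eigenvectors of
    the sample covariance matrix (second-order statistics of the observed data).

    snapshots   : (num_sensors, num_snapshots) complex array of observed data
    num_weights : number of non-zero initial weight vectors required
    """
    # Sample covariance matrix of the observed data
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]
    # Hermitian eigendecomposition; eigenvalues are returned in ascending order
    eigvals, eigvecs = np.linalg.eigh(R)
    # Use the eigenvectors associated with the largest eigenvalues as initial weights
    return eigvecs[:, ::-1][:, :num_weights]
```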
T. Aboulnasr, University of Ottawa (Canada)
K. Mayyas, University of Science and Technology (Jordan)
In this paper, we provide a mean-square analysis of the M-Max NLMS (MMNLMS) adaptive algorithm introduced in [1]. The algorithm selects, at each iteration, a specified number of coefficients that provide the largest reduction in the error. It is shown that while the MMNLMS algorithm reduces the complexity of the adaptive filter, it maintains the closest performance to the full-update NLMS filter for a given number of updates. The stability of the algorithm is shown to be guaranteed for the extreme case of only one update per iteration. An analysis of the MSE convergence and steady-state performance for i.i.d. signals is also provided for that extreme case.
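As a point of reference, a partial-update NLMS iteration in the spirit of the M-Max idea can be sketched as below, updating only the M taps associated with the largest-magnitude regressor entries; the selection and normalization details of the algorithm analysed in the paper may differ.

```python
import numpy as np

def mmax_nlms_update(w, x, d, mu, M, eps=1e-8):
    """One partial-update NLMS iteration: only the M taps associated with the
    largest-magnitude entries of the regressor x are updated (sketch only)."""
    e = d - w @ x                        # a priori error
    idx = np.argsort(np.abs(x))[-M:]     # taps contributing most to the update
    w = w.copy()
    w[idx] += mu * e * x[idx] / (eps + x @ x)   # normalized update on selected taps
    return w, e
```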
Shin'ichi Koike, NEC Corporation (Japan)
In this paper, a new set of difference equations is derived for the convergence analysis of adaptive filters using the Sign-Sign Algorithm with a Gaussian input reference and additive Gaussian noise. The analysis is based on the assumption that the tap weights are jointly Gaussian distributed. The residual mean-squared error after convergence and simpler approximate difference equations are further developed. Experimental results exhibit good agreement between the theoretically calculated convergence and that obtained by simulation for a wide range of adaptive filter parameter values.
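For reference, the sign-sign update analysed in the paper has the standard form sketched below (tap-weight vector w, regressor x, desired sample d, step size mu).

```python
import numpy as np

def sign_sign_update(w, x, d, mu):
    """One iteration of the sign-sign algorithm: both the error and the
    regressor enter the tap-weight update only through their signs."""
    e = d - w @ x
    return w + mu * np.sign(e) * np.sign(x), e
```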
Marcello L.R. De Campos, Instituto Militar de Engenharia (Brazil)
Jose A Apolinario Jr, COPPE/UFRJ (Brazil)
Paulo S.R. Diniz, COPPE/UFRJ (Brazil)
Providing a quantitative mean-squared error analysis of adaptation algorithms is of great importance for determining their usefulness and for comparison with other algorithms. However, when the algorithm reuses previous data, such analysis becomes very involved as the independence assumption cannot be used. In this paper, a thorough mean-squared error analysis of the binormalized data-reusing LMS algorithm is carried out. The analysis is based on a simplified model for the input-signal vector, assuming independence between the continuous radial probability distribution and the discrete angular probability distribution. Throughout the analysis only parallel and orthogonal input-signal vectors are used in order to obtain a closed-form formula for the excess mean-squared error. The formula agrees closely with simulation results even when the input-signal vector is a delay line. Furthermore, the analysis can be readily extended to other algorithms with similar expected accuracy.
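As background, the binormalized data-reusing update can be sketched in the order-2 affine-projection-like form below, where the new weight vector is driven by the a priori errors of the two most recent data pairs; the exact recursion analysed in the paper may be arranged differently.

```python
import numpy as np

def binormalized_data_reusing_update(w, x_now, x_prev, d_now, d_prev, mu=1.0, eps=1e-8):
    """Sketch of a data-reusing update driven by the two most recent data
    pairs, with a 2x2 (binormalizing) correction; regularized for safety."""
    X = np.column_stack((x_now, x_prev))            # two most recent regressors
    d = np.array([d_now, d_prev])
    e = d - X.T @ w                                  # a priori errors of both pairs
    w = w + mu * X @ np.linalg.solve(X.T @ X + eps * np.eye(2), e)
    return w, e[0]
```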
Eric Moulines, Ecole Nationale Superieure des Telecommunications (France)
Pierre Priouret, Universite de Paris VI (France)
Rafik Aguech, Universite de Paris VI (France)
In this paper, a perturbation expansion technique is introduced to decompose the tracking error of a general adaptive tracking algorithm in a linear regression model. This method allows one to obtain not only tracking error bounds but also tight approximate expressions for the moments of the tracking error. These expressions make it possible to evaluate, both qualitatively and quantitatively, the impact on the tracking error performance of several factors that have been overlooked in previous contributions.
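For concreteness, the linear regression tracking setting referred to above is commonly written as (standard notation, not taken verbatim from the paper)

$$ y_n = x_n^{T}\theta_n + v_n, \qquad \tilde{\theta}_n = \hat{\theta}_n - \theta_n, $$

where $\theta_n$ is the slowly time-varying true parameter vector, $\hat{\theta}_n$ the adaptive estimate, and $\tilde{\theta}_n$ the tracking error whose bounds and moments the perturbation expansion approximates.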
Jiaquan Huo, Curtin University of Technology (Australia)
Yee H Leung, Curtin University of Technology (Australia)
Shepherd and McWhirter proposed a QRD-RLS algorithm for adaptive filtering with linear constraints. In this paper, the numerical properties of this algorithm are considered. In particular, it is shown that the computed weight vector satisfies a set of constraints that are perturbed from the original ones, the amount of the perturbation being dependent on the wordlength. The linearly constrained FLS algorithm of Resende et al. is also studied. Simulation results show that this algorithm is numerically unstable even in the absence of explosive divergence.
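The linearly constrained filtering problem underlying both algorithms can be stated, in standard notation (ours, not necessarily the paper's), as

$$ \min_{w}\,\| d - X w \|^{2} \quad \text{subject to} \quad C^{T} w = f, $$

so the finite-wordlength result above says that the computed weight vector satisfies $C^{T}\hat{w} = f + \delta$, with the size of the perturbation $\delta$ dependent on the wordlength.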
Guo Fang Xu, University of Colorado (U.S.A.)
Tamal Bose, University of Colorado (U.S.A.)
A mathematical analysis is performed on a recently reported gradient-based adaptive algorithm called the Euclidean Direction Set (EDS) method. It has been shown that the EDS algorithm has a computational complexity of O(N) per system update and a rate of convergence (based on computer simulations) comparable to that of the RLS algorithm. In this paper, the stability of the EDS method is studied and it is shown that the algorithm converges to the true solution. It is also proved that the convergence rate of the EDS method is superior to that of the steepest descent method.
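As a rough illustration of minimization along Euclidean (coordinate) directions, the sketch below performs one exact line search per coordinate on the quadratic least-squares cost; it conveys the flavour of the idea but is not the authors' O(N) recursion.

```python
import numpy as np

def euclidean_direction_sweep(w, R, p):
    """One full sweep of exact line minimization of J(w) = w'Rw - 2p'w along
    the Euclidean directions e_1, ..., e_N, where R is the input
    autocorrelation matrix and p the cross-correlation vector."""
    w = w.copy()
    for i in range(len(w)):
        g = R[i] @ w - p[i]     # derivative of J/2 along direction e_i
        w[i] -= g / R[i, i]     # exact minimizing step in direction e_i
    return w
```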
Brett M. Ninness, University of Newcastle (Australia)
Juan-Carlos Gomez, University of Newcastle (Australia)
The use of adaptive algorithms such as Kalman filtering, LMS and RLS together with FIR model structures is very common and has been extensively analysed. In the interests of improved performance, an extension of the FIR structure has been proposed in which the fixed poles are not all at the origin, but instead are chosen, using prior knowledge, to be close to where the true poles are. Existing FIR analysis would indicate that the noise and tracking properties of such a scheme are invariant to the choice of fixed pole locations. This paper establishes both numerically and theoretically that this is in fact not the case. Instead, the dependence on the fixed pole locations is made explicit by deriving frequency-domain expressions obtained using new results on generalised Fourier series and generalised Toeplitz matrices.
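A minimal sketch of this kind of fixed-pole model structure is given below: regressor signals are formed by filtering the input through first-order sections with fixed poles, and only the linear combination weights are adapted. The particular basis used in the paper (for instance an orthonormalized one) may differ.

```python
import numpy as np
from scipy.signal import lfilter

def fixed_pole_regressors(u, poles):
    """Form regressor signals by passing the input u through a cascade of
    first-order sections z^{-1} / (1 - a z^{-1}) with fixed poles a; with all
    poles at the origin this reduces to the ordinary FIR tapped delay line."""
    regs, x = [], np.asarray(u, dtype=float)
    for a in poles:
        x = lfilter([0.0, 1.0], [1.0, -a], x)   # pole fixed at z = a
        regs.append(x)
    return np.stack(regs)                        # shape: (num_poles, len(u))
```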
Tareq Y Al-Naffouri, Georgia Institute of Technology (U.S.A.)
Azzedine Zerguine, King Fahd University of Petroleum and Minerals (Saudi Arabia)
Maamar Bettayeb, King Fahd University of Petroleum and Minerals (Saudi Arabia)
This paper presents a unifying view of various error nonlinearities that are used in least mean square (LMS) adaptation, such as the least mean fourth (LMF) algorithm and its family and the least-mean mixed-norm algorithm. Specifically, it is shown that the LMS algorithm and its error-modified variants are approximations of two newly proposed optimum nonlinearities, which are expressed in terms of the additive noise probability density function (pdf). This is demonstrated through an approximation of the optimum nonlinearities obtained by expanding the noise pdf in a Gram-Charlier series. Thus, a link is established between intuitively proposed and theoretically justified variants of the LMS algorithm. The approximation also has a practical advantage in that it provides a trade-off between simplicity and more accurate realization of the optimum nonlinearities.
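The error-modified variants discussed above all share the generic update sketched below, differing only in the error nonlinearity f(.); constant factors are absorbed into the step size, and the optimum nonlinearities derived in the paper depend on the noise pdf rather than on these fixed forms.

```python
import numpy as np

def error_nonlinearity_lms_update(w, x, d, mu, f):
    """Generic LMS-type update with an error nonlinearity f: w <- w + mu*f(e)*x."""
    e = d - w @ x
    return w + mu * f(e) * x, e

# Members of the family mentioned in the abstract (constants folded into mu):
f_lms   = lambda e: e                               # least mean square (LMS)
f_lmf   = lambda e: e ** 3                          # least mean fourth (LMF)
f_mixed = lambda e, a=0.5: a * e + (1 - a) * e**3   # least-mean mixed-norm
```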