Jorge D. Ortiz-Fuentes, Universitat d'Alacant (Spain)
Mikel L. Forcada, Universitat d'Alacant (Spain)
This paper compares three first-order recurrent neural network (RNN) architectures (fully recurrent, partially recurrent, and Elman), trained with the real-time recurrent learning (RTRL) algorithm and the GSM training-sequence ratio (26/114), for digital equalization of 2-ary PAM signals. The results show no substantial effect of the particular architecture or of the number of units on overall performance. This is because the RNNs, as a consequence of the learning algorithm, adopt a suboptimal equalization scheme. The results are compared with those obtained using a classical approach (a decision-feedback equalizer).
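As a rough illustration of the training scheme described above, the following sketch implements a single RTRL update for a small fully recurrent network used as a symbol-by-symbol 2-PAM equalizer; the network size, learning rate and choice of output unit are illustrative assumptions rather than the configuration used in the paper.

```python
import numpy as np

# Illustrative RTRL update for a small fully recurrent network acting as a
# symbol-by-symbol 2-PAM equalizer. Network size, learning rate and the use of
# unit 0 as the output are assumptions, not the authors' exact setup.
rng = np.random.default_rng(0)
n_units, n_in = 3, 2                       # recurrent units, external inputs
n_z = n_units + n_in + 1                   # concatenated state, inputs and bias
W = 0.1 * rng.standard_normal((n_units, n_z))
x = np.zeros(n_units)                      # unit activations
P = np.zeros((n_units, n_units, n_z))      # sensitivities dx_k / dW_ij
eta = 0.05

def rtrl_step(u, d):
    """One RTRL step: u = received samples (length n_in), d = desired symbol (+1/-1)."""
    global W, x, P
    z = np.concatenate([x, u, [1.0]])      # state + input + bias
    x_new = np.tanh(W @ z)
    P_new = np.empty_like(P)
    for k in range(n_units):               # propagate the sensitivity equations
        for i in range(n_units):
            for j in range(n_z):
                acc = W[k, :n_units] @ P[:, i, j]
                if k == i:
                    acc += z[j]
                P_new[k, i, j] = (1.0 - x_new[k] ** 2) * acc
    e = d - x_new[0]                       # unit 0 is the equalizer output
    W += eta * e * P_new[0]                # gradient step on the squared error
    x, P = x_new, P_new
    return np.sign(x_new[0])               # hard decision on the 2-PAM symbol
```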
João Gomes, IST - ISR (Portugal)
Victor Barroso, IST - ISR (Portugal)
The design of adaptive equalizers is an important topic for the practical implementation of efficient digital communications. In this paper, the application of a radial basis function (RBF) neural network to blind channel equalization is analysed. This architecture is well suited for equalization of finite impulse response (FIR) channels, partly because the network model closely matches the data model. This allows a rather straightforward design of an optimal receiver in a Bayesian sense. It also provides a simple framework for data classification, in which more complex nonlinear distortions can be accommodated with virtually no modifications. A clustering algorithm for the dynamic creation and combination of local units is proposed, which eliminates the need for channel order estimation. An initialization procedure for the output linear layer is also presented. The network performance is illustrated with Monte Carlo simulations for a family of random channels.
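To make the correspondence between the Bayesian receiver and the RBF network concrete, the sketch below builds the RBF centres from the noise-free output states of a toy FIR channel and takes a sign decision on the weighted kernel activations; the channel taps, window length and kernel width are illustrative assumptions, not the channels simulated in the paper.

```python
import numpy as np

# Toy construction of the Bayesian symbol decision as an RBF network: the
# centres are the noise-free channel output states and the output-layer
# weights are +1/-1 according to the symbol each state corresponds to.
h = np.array([1.0, 0.5])                   # example FIR channel (assumption)
m = 2                                      # equalizer input (window) dimension
sigma2 = 0.05                              # kernel width = noise variance (assumption)

L = len(h) + m - 1                         # symbols affecting one window
seqs = np.array(np.meshgrid(*[[-1, 1]] * L, indexing="ij")).reshape(L, -1).T
centres = np.array([[np.dot(h, s[i:i + len(h)]) for i in range(m)] for s in seqs])
labels = seqs[:, 0]                        # symbol to detect (zero decision delay)

def equalize(r):
    """r: length-m vector of received samples -> hard decision on the symbol."""
    act = np.exp(-np.sum((centres - r) ** 2, axis=1) / (2 * sigma2))
    return np.sign(labels @ act)           # Bayesian decision via the RBF output layer
```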
Cheolwoo You, Yonsei University (Korea)
Daesik Hong, Yonsei University (Korea)
In this paper, a "stop-and-go" decision-directed blind equalization scheme is newly proposed. This scheme uses the structure of complex-valued multilayer feedforward neural networks, instead of the linear transversal filters that are usually used in conventional LMS-type blind equalization schemes. A complex-valued activation function composed of two real functions is used. Each real activation function has multi-saturated output region in order to deal with QAM signals of any constellation sizes. Also, a modified complex backpropagation algorithm is derived for the proposed scheme. Computer simulations are performed to compare the proposed scheme with the conventional "stop-and-go" algorithm in terms of convergence speed, MSE value in the steady state, and constellation of QAM signals after the initial convergence. Simulation results demonstrate the effectiveness of the proposed scheme.
Paolo Campolucci, University of Ancona (Italy)
Simone Fiori, University of Ancona (Italy)
Aurelio Uncini, University of Ancona (Italy)
Francesco Piazza, University of Ancona (Italy)
In this paper we propose a new learning algorithm for locally recurrent neural networks, called Truncated Recursive Back Propagation, which can easily be implemented on-line with good performance. It generalises the algorithm proposed by Waibel et al. for TDNNs, and includes the Back-Tsoi algorithm, as well as BPS and standard on-line Back Propagation, as particular cases. The memory and computational complexity of the proposed algorithm can be adjusted through a careful choice of two parameters, h and h', so it is more flexible than our previous algorithm. Although for the sake of brevity we present the new algorithm only for IIR-MLP networks, it can also be applied to any locally recurrent neural network. Computer simulations of dynamical system identification tests reported in the literature are also presented to assess the performance of the proposed algorithm applied to the IIR-MLP.
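For orientation, the sketch below shows the structure such an algorithm trains, an IIR synapse of a locally recurrent (IIR-MLP) neuron; the filter orders and coefficients are placeholders, and the truncated gradient computation itself is only summarized in comments.

```python
import numpy as np

# Sketch of the building block of an IIR-MLP: each synapse filters its input
# with an IIR filter before the neuron nonlinearity. Orders and coefficients
# below are placeholders, not values from the paper.

class IIRSynapse:
    def __init__(self, b, a):
        self.b = np.asarray(b, dtype=float)     # moving-average (feedforward) taps
        self.a = np.asarray(a, dtype=float)     # autoregressive (feedback) taps
        self.x_hist = np.zeros(len(self.b))     # past inputs  x(n), x(n-1), ...
        self.y_hist = np.zeros(len(self.a))     # past outputs y(n-1), y(n-2), ...

    def step(self, x):
        self.x_hist = np.roll(self.x_hist, 1); self.x_hist[0] = x
        y = self.b @ self.x_hist + self.a @ self.y_hist
        self.y_hist = np.roll(self.y_hist, 1); self.y_hist[0] = y
        return y

def neuron_output(synapses, inputs):
    """Locally recurrent neuron: sum of IIR-filtered inputs through a sigmoid."""
    return np.tanh(sum(s.step(u) for s, u in zip(synapses, inputs)))

# In truncated recursive backpropagation the gradient of the instantaneous
# error is propagated backwards only over a bounded number of recent steps
# (controlled by the two truncation parameters h and h'), so per-sample memory
# and computation stay fixed while the filters' recursive dynamics are still used.
```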
Tamás Szabó, TUB DMIE (Hungary)
Gábor Horváth, TUB DMIE (Hungary)
This paper deals with the effects of finite-precision data representation and arithmetic in principal component analysis (PCA) networks. PCA, or the Karhunen-Loève transform (KLT), is a statistical method that determines an optimal linear transformation of the input vectors of a stationary stochastic process. PCA networks are single-layer linear neural networks that use some version of Oja's learning rule. The paper concentrates on the errors that arise during learning when fixed-point data representation and arithmetic are used. It gives analytical results based on the additive noise model of quantization. The analysis considers all three components of the finite-precision effects: (i) the error due to input data quantization, (ii) the error caused by the finite-precision representation of the network weights, and (iii) the effects of finite-precision arithmetic. The results can be used directly to determine the word lengths required for special hardware implementations of the neural net.
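The sketch below relates the three error components listed above to Oja's single-unit rule under fixed-point weights; the word length, step size and rounding scheme are assumptions chosen for illustration, not the representation analysed in the paper.

```python
import numpy as np

# Minimal sketch of Oja's single-unit rule with fixed-point weight storage.
# Word length B, step size and rounding scheme are illustrative assumptions.
B = 8                                      # fractional bits of the weight words
q = 2.0 ** -B                              # quantization step

def quantize(v):
    """Round to the nearest representable fixed-point value."""
    return np.round(v / q) * q

def oja_step(w, x, eta=0.01):
    """One Oja update w <- w + eta*y*(x - y*w) with requantized weights."""
    y = float(w @ x)                       # (iii) in hardware this product is also finite precision
    w = w + eta * y * (x - y * w)
    return quantize(w)                     # (ii) weights stored with a finite word length

# (i) input quantization would additionally replace x by quantize(x) before use.
```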
Kari Torkkola, Motorola (U.S.A.)
Starting from maximizing information flow through a nonlinear neuron, Bell and Sejnowski derived adaptation equations for blind deconvolution using an FIR filter. In this paper we derive a simpler form of the adaptation rule and apply it to more complex filter structures, such as recursive filters. As an application, we study blind echo cancellation for speech signals. We also present a method that avoids whitening the signals in the procedure.
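For reference, one common form of the Bell-Sejnowski adaptation for an FIR deconvolving filter with a logistic nonlinearity is sketched below; the filter length and step size are illustrative, and the simpler rule derived in this paper and its recursive-filter extension are not reproduced.

```python
import numpy as np

# One common form of the infomax adaptation for an FIR deconvolving filter
# with a logistic nonlinearity. Filter length and step size are assumptions.
L = 16                                     # filter length (assumption)
w = np.zeros(L); w[0] = 1.0                # start from an identity filter
mu = 1e-3

def adapt(x_buf):
    """x_buf: the last L input samples, most recent first."""
    global w
    u = w @ x_buf                          # deconvolving filter output
    y = 1.0 / (1.0 + np.exp(-u))           # logistic squashing
    psi = 1.0 - 2.0 * y                    # score term for the logistic nonlinearity
    w[0] += mu * (1.0 / w[0] + psi * x_buf[0])   # leading (scaling) weight
    w[1:] += mu * psi * x_buf[1:]                # remaining taps: anti-Hebbian-like term
    return u
```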
Shichun He, Southeast University (China)
Zhenya He, Southeast University (China)
This paper investigates the application of a Recurrent Wavelet Neural Network (RWNN) to the blind equalization of nonlinear communication channels. We propose an RWNN-based structure and a novel training approach for blind equalization, and we evaluate its performance via computer simulations for a nonlinear communication channel model. It is shown that the RWNN blind equalizer performs much better than the linear CMA and RRBF blind equalizers in the nonlinear channel case. The small size and high performance of the RWNN equalizer make it suitable for high-speed blind channel equalization.
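For comparison, the sketch below shows the linear CMA baseline mentioned above, a constant-modulus stochastic-gradient update for a transversal equalizer; the tap count, step size and modulus constant are illustrative assumptions, and the RWNN equalizer itself is not reproduced.

```python
import numpy as np

# Linear CMA baseline: constant-modulus stochastic-gradient update for a
# transversal equalizer. Tap count, step size and R2 are assumptions.
L = 11
w = np.zeros(L, dtype=complex); w[L // 2] = 1.0   # centre-spike initialization
mu = 1e-3
R2 = 1.0                                   # constant-modulus target

def cma_step(x_buf):
    """x_buf: the last L received samples, most recent first."""
    global w
    y = w @ x_buf
    e = y * (np.abs(y) ** 2 - R2)          # CMA(2,2) error term
    w -= mu * e * np.conj(x_buf)           # stochastic-gradient tap update
    return y
```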