DocumentCode
1194131
Title
Floating-point error analysis of recursive least-squares and least-mean-squares adaptive filters
Author
Ardalan, Sasan H.
Volume
33
Issue
12
fYear
1986
fDate
12/1/1986
Firstpage
1192
Lastpage
1208
Abstract
A floating-point error analysis of the Recursive Least-Squares (RLS) and Least-Mean-Squares (LMS) algorithms is presented. Both the prewindowed growing-memory RLS algorithm for stationary systems and the exponentially windowed RLS algorithm for time-varying systems are studied. For both algorithms, expressions for the mean-square prediction error and for the expected value of the weight error vector norm are derived in terms of the variances of the floating-point noise sources. The results point to a tradeoff in the choice of the forgetting factor. To reduce the effects of additive noise and of the floating-point noise due to the inner-product calculation of the desired signal, the forgetting factor must be chosen close to one. On the other hand, the floating-point noise due to floating-point addition in the weight vector update recursion increases as the forgetting factor approaches one. Floating-point errors in the calculation of the weight vector correction term, however, do not affect the steady-state error and have only a transient effect. For the prewindowed growing-memory RLS algorithm, exponential divergence may occur due to errors in the floating-point addition in the weight vector update recursion. Conditions for terminating the weight vector update are also presented for stationary systems. The results for the LMS algorithm show that the excess mean-square error due to floating-point arithmetic increases inversely with the loop gain for errors introduced by the summation in the weight vector recursion. The calculation of the desired-signal prediction and of the prediction error leads to an additive noise term, as in the RLS algorithm. Simulations are presented which confirm the theoretical findings of the paper.
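For orientation, the weight vector update recursions the abstract analyzes can be made concrete. Below is a minimal sketch of the standard exponentially windowed RLS and LMS recursions in their common textbook form; the variable names (lam for the forgetting factor, mu for the LMS loop gain) and the exact formulation are illustrative assumptions, not reproduced from the paper.

```python
import numpy as np

def rls_update(w, P, x, d, lam=0.99):
    """One exponentially windowed RLS step (standard textbook form).
    lam is the forgetting factor: values close to one suppress additive
    and inner-product noise, at the cost of larger floating-point
    addition noise in the weight vector update, per the abstract."""
    pi = P @ x                        # P x
    k = pi / (lam + x @ pi)           # gain vector
    e = d - w @ x                     # a priori prediction error
    w = w + k * e                     # weight vector update recursion
    P = (P - np.outer(k, pi)) / lam   # inverse-correlation matrix update
    return w, P, e

def lms_update(w, x, d, mu=0.01):
    """One LMS step; mu is the loop gain (step size)."""
    e = d - w @ x         # desired-signal prediction error
    w = w + mu * e * x    # weight vector recursion
    return w, e

# Illustrative usage on a synthetic (hypothetical) system:
rng = np.random.default_rng(0)
w_true = rng.standard_normal(4)
w, P = np.zeros(4), 100.0 * np.eye(4)  # large initial P is customary
for _ in range(500):
    x = rng.standard_normal(4)
    d = w_true @ x + 0.01 * rng.standard_normal()
    w, P, e = rls_update(w, P, x, d, lam=0.99)
```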
Keywords
Adaptive filters; Digital signal processing (DSP); Floating-point arithmetic; Least-squares optimization; Recursive digital filter wordlength effects; Recursive estimation; Additive noise; Error analysis; Error correction; Least squares approximation; Noise reduction; Resonance light scattering; Steady-state; Time-varying systems
fLanguage
English
Journal_Title
IEEE Transactions on Circuits and Systems
Publisher
IEEE
ISSN
0098-4094
Type
jour
DOI
10.1109/TCS.1986.1085877
Filename
1085877
Link To Document