DocumentCode :
938693
Title :
Adaptive filtering with binary reinforcement
Author :
Gersho, Allen
Volume :
30
Issue :
2
fYear :
1984
fDate :
3/1/1984
Firstpage :
191
Lastpage :
199
Abstract :
Recently there has been increased interest in high speed adaptive filtering where the usual stochastic gradient or least mean-square (LMS) algorithm is replaced with a simpler algorithm in which adaptation is guided only by the polarity of the error signal. In this paper the convergence of this binary reinforcement (BR) algorithm is proved under the usual independence assumption, and the surprising observation is made that, unlike the LMS algorithm, convergence occurs for any positive value of the step-size parameter. While the stochastic gradient algorithm attempts to minimize a mean-square error cost function, the binary reinforcement algorithm in fact attempts to minimize a mean-absolute error cost function. It is proved for the binary reinforcement algorithm that the tap weight vector converges in distribution to a random vector that is suitably concentrated about the optimal value based on a least mean-absolute error cost function. For a sufficiently small step size, the expected cost of the asymptotic weight vector can be made as close as desired to the minimum cost attained by the optimum weight vector.
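The update described in the abstract replaces the LMS error term with its sign, so each tap-weight step is driven only by the error's polarity: w ← w + μ·sgn(e)·u. A minimal sketch of this sign-error (binary reinforcement) adaptation in Python follows; the 3-tap channel, signal lengths, and step size μ are hypothetical choices for illustration, not values from the paper.

```python
import numpy as np

def sign_lms(x, d, n_taps, mu):
    """Binary-reinforcement (sign-error) LMS: the tap-weight update
    uses only the polarity of the error, w <- w + mu * sgn(e) * u."""
    w = np.zeros(n_taps)
    errors = []
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]  # most recent sample first
        e = d[n] - w @ u                   # a priori error
        w += mu * np.sign(e) * u           # sign-error (BR) update
        errors.append(abs(e))
    return w, errors

# Illustrative use: identify a known 3-tap FIR channel from noisy data.
rng = np.random.default_rng(0)
h = np.array([0.5, -0.3, 0.1])             # hypothetical true filter
x = rng.standard_normal(5000)
d = np.convolve(x, h)[:len(x)] + 0.01 * rng.standard_normal(len(x))
w, errors = sign_lms(x, d, n_taps=3, mu=0.005)
```

As the abstract notes, the iterates do not settle at a point but concentrate in distribution around the least mean-absolute-error solution, with a spread controlled by the step size; shrinking μ here trades slower adaptation for a tighter asymptotic cluster around h.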
Keywords :
Adaptive filters; Adaptive systems; Automatic control; Convergence; Cost function; Filtering algorithms; Gain measurement; Least squares approximation; Lyapunov method; Stochastic processes;
fLanguage :
English
Journal_Title :
IEEE Transactions on Information Theory
Publisher :
IEEE
ISSN :
0018-9448
Type :
jour
DOI :
10.1109/TIT.1984.1056890
Filename :
1056890