Authors:
Liu, Weifeng ; Pokharel, Puskal P. ; Principe, Jose C.
Abstract:
The combination of the famed kernel trick and the least-mean-square (LMS) algorithm provides an interesting sample-by-sample update for an adaptive filter in reproducing kernel Hilbert spaces (RKHS), which in this paper is named the kernel least-mean-square (KLMS) algorithm. Unlike the accepted view in kernel methods, this paper shows that in the finite training data case, the KLMS algorithm is well posed in RKHS without the addition of an extra regularization term to penalize solution norms, as was suggested by Kivinen ["Online Learning With Kernels," Kivinen, Smola, and Williamson, IEEE Transactions on Signal Processing, vol. 52, no. 8, pp. 2165-2176, Aug. 2004] and Smale ["Online Learning Algorithms," Smale and Yao, Foundations of Computational Mathematics, vol. 6, no. 2, pp. 145-176, 2006]. This result is the main contribution of the paper and enhances the present understanding of the LMS algorithm from a machine learning perspective. The effect of the KLMS step size is also studied from the viewpoint of regularization. Two experiments are presented to support our conclusion that with finite data the KLMS algorithm can be readily used in high-dimensional spaces, and particularly in RKHS, to derive nonlinear, stable algorithms with performance comparable to batch, regularized solutions.
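For concreteness, the following is a minimal sketch of the sample-by-sample KLMS update described in the abstract, written in Python with an assumed Gaussian kernel. The function names, step size, and kernel width are illustrative assumptions, not the paper's exact experimental settings: the filter grows one center per sample, and the LMS update in RKHS reduces to appending the new input with coefficient eta times the a priori error.

```python
import numpy as np

def gaussian_kernel(x, y, width=1.0):
    """Gaussian (RBF) kernel between two input vectors (assumed kernel choice)."""
    diff = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    return np.exp(-np.dot(diff, diff) / (2.0 * width ** 2))

def klms(inputs, desired, step_size=0.5, width=1.0):
    """Sketch of the KLMS update: grows one center and one coefficient per sample.

    Returns the stored centers, their coefficients, and the a priori errors.
    """
    centers, coeffs, errors = [], [], []
    for u, d in zip(inputs, desired):
        # a priori prediction with the current kernel expansion
        y = sum(a * gaussian_kernel(c, u, width) for c, a in zip(centers, coeffs))
        e = d - y
        errors.append(e)
        # LMS update in RKHS: store the new sample as a center
        # with coefficient step_size * error
        centers.append(np.asarray(u, dtype=float))
        coeffs.append(step_size * e)
    return centers, coeffs, errors

# Toy usage: learn d = sin(u) online from 200 scalar samples
rng = np.random.default_rng(0)
u = rng.uniform(-3.0, 3.0, size=(200, 1))
d = np.sin(u[:, 0])
_, _, err = klms(u, d, step_size=0.5, width=1.0)
```

Note that no norm penalty appears in the update; the abstract's claim is that, with finite data, the step size alone plays the regularizing role that an explicit Tikhonov term would otherwise serve.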
Keywords:
Hilbert spaces; adaptive filters; learning (artificial intelligence); least mean squares methods; adaptive filter; kernel Hilbert spaces; kernel least-mean-square algorithm; machine learning; sample-by-sample update; Adaptive filters; Algorithm design and analysis; Hilbert space; Kernel; Least squares approximation; Machine learning algorithms; Principal component analysis; Radio access networks; Signal processing algorithms; Training data; Kernel methods; Tikhonov regularization; least mean square