Title :
Comments on "Learning convergence in the cerebellar model articulation controller"
Author :
Brown, M. ; Harris, Chris J.
Author_Institution :
Dept. of Aeronaut. & Astronaut., Southampton Univ., UK
Date :
7/1/1995
Abstract :
The commenters refer to the paper by Wong and Sideris (ibid., vol. 3, p. 115-21, 1992), which claims that the original Albus CMAC (or binary CMAC) is capable of learning an arbitrary multivariate lookup table, that the linear optimization process is strictly positive definite, and that the basis functions are linearly independent, given sufficient training data. In recent work by Brown et al. (1994), however, it has been proved that the multivariate binary CMAC is unable to learn certain multivariate lookup tables, and that the number of such orthogonal functions increases exponentially as the generalization parameter is increased. A simple 2D orthogonal function is presented as a counterexample to the original theory. It is also demonstrated that the basis functions are always linearly dependent, in both the univariate and the multivariate case; hence the linear optimization process is only positive semi-definite, and there always exists an infinite number of possible optimal weight vectors.
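The linear dependence claimed in the abstract can be checked numerically. Below is a minimal sketch of a univariate binary CMAC basis matrix; the function name and the simple 1-D addressing scheme (layer k maps input x to cell (x + k) // rho) are illustrative assumptions, not the construction used by the authors. Because each overlapping layer partitions the input space, every layer's columns sum to the all-ones function, so with two or more layers the basis functions are linearly dependent and the Gram matrix is singular, i.e. only positive semi-definite:

```python
import numpy as np

def binary_cmac_basis(n_inputs=8, rho=3):
    """Binary (Albus) CMAC basis matrix over a 1-D integer lattice.

    rho overlapping layers; in layer k, input x activates cell (x + k) // rho.
    Returns the 0/1 matrix A with one row per input and one column per
    binary basis function.  (Hypothetical minimal construction.)
    """
    layers = []
    for k in range(rho):
        n_cells = (n_inputs - 1 + k) // rho + 1
        layer = np.zeros((n_inputs, n_cells))
        for x in range(n_inputs):
            layer[x, (x + k) // rho] = 1.0
        layers.append(layer)
    return np.hstack(layers)

A = binary_cmac_basis(n_inputs=8, rho=3)
# Each layer's columns sum to the all-ones vector, so the columns of A
# are linearly dependent and A.T @ A is singular (positive semi-definite).
print(np.linalg.matrix_rank(A) < A.shape[1])   # True: dependent basis
print(np.linalg.eigvalsh(A.T @ A).min())       # ~0: singular Gram matrix
```

Singular A.T @ A means the least-squares normal equations have infinitely many solutions, matching the abstract's observation that infinitely many optimal weight vectors exist.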
Keywords :
cerebellar model arithmetic computers; convergence; generalisation (artificial intelligence); learning (artificial intelligence); optimisation; table lookup; 2D orthogonal function; Albus CMAC; binary CMAC; cerebellar model articulation controller; generalization parameter; learning convergence; linear optimization; multivariate lookup table; optimal weight vectors; such orthogonal functions; Associative memory; Computer networks; Convergence; Fluctuations; Fuzzy systems; Neural networks; Signal processing; Stochastic processes; Table lookup; Vector quantization;
Journal_Title :
Neural Networks, IEEE Transactions on