Title :
Toward second-order generalisation
Author :
Neville, R.S.; Luk, P.C.K.
Author_Institution :
Dept. of Electr. & Electron. Eng., Hertfordshire Univ., Hatfield, UK
Abstract :
Generalisation in artificial neural networks may be cast into two basic categories, 'standard' and 'higher-order'. We view 'standard' generalisation as a means to interpolate and extrapolate data. A two-layer perceptron network performs 'standard' generalisation when it has learnt a function from a set of discretised vectors that represent that function; the trained network can then interpolate and extrapolate between the data points on which it was initially trained. We define 'higher-order' generalisation as being of a more abstract nature. For example, suppose one trains a unit to learn a function and then manipulates the unit's weight matrix: if the transformed weight matrix allows the unit to perform the inverse function, this is a 'higher-order' generalisation. The article relates how one can perform a set of transforms on the net's weight matrix to enable the transformed net to perform a type of 'higher-order' generalisation.
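The inverse-function example above can be made concrete with a minimal, hypothetical sketch (not the paper's own transform, which operates on perceptron networks with sigmoidal units): for a single linear unit y = w*x + b trained on f(x) = 2x + 1, the weight transform w' = 1/w, b' = -b/w yields a unit that computes the inverse f⁻¹ with no retraining.

```python
import numpy as np

# Hypothetical illustration of a weight transform producing the inverse
# function; a linear unit stands in for the paper's sigmoidal units.
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=100)
y = 2.0 * x + 1.0                        # target function f(x) = 2x + 1

# Fit the unit's weight w and bias b by least squares
# (a stand-in for gradient-descent training).
A = np.stack([x, np.ones_like(x)], axis=1)
(w, b), *_ = np.linalg.lstsq(A, y, rcond=None)

# Transform the weights: the new unit computes f^{-1}(y) = (y - b) / w.
w_inv, b_inv = 1.0 / w, -b / w

y_test = np.array([1.0, 3.0, -1.0])
print(w_inv * y_test + b_inv)            # ~ [0.0, 1.0, -1.0]
```

The transformed unit performs f⁻¹ without any further training, which is the sense in which manipulating the weight matrix, rather than retraining, yields the new behaviour.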
Keywords :
generalisation (artificial intelligence); matrix algebra; neural nets; random-access storage; higher-order generalisation; second-order generalisation; standard generalisation; two-layer perceptron network; weight matrix; Artificial neural networks; Lattices; Mirrors; Multilayer perceptrons; Neurons; Random access memory; Read-write memory; Symmetric matrices; Table lookup; Training data;
Conference_Title :
The 1998 IEEE International Joint Conference on Neural Networks Proceedings. IEEE World Congress on Computational Intelligence
Conference_Location :
Anchorage, AK
Print_ISBN :
0-7803-4859-1
DOI :
10.1109/IJCNN.1998.685965