DocumentCode
328277
Title
Improved generalization through learning a similarity metric and kernel size
Author
Lowe, David G.
Author_Institution
Dept. of Comput. Sci., British Columbia Univ., Vancouver, BC, Canada
Volume
1
fYear
1993
fDate
25-29 Oct. 1993
Firstpage
501
Abstract
Nearest-neighbour interpolation algorithms have many useful properties for applications to learning, but they often exhibit poor generalization. In this paper, it is shown that much better generalization can be obtained by using a variable interpolation kernel in combination with conjugate gradient optimization of the similarity metric and kernel size. The resulting method is called variable-kernel similarity metric (VSM) learning. It has been tested on a number of standard classification data sets, and on these problems it shows better generalization than backpropagation and most other learning methods. An important advantage is that the system can operate as a black box in which no model or minimization parameters need to be experimentally set by the user. The number of parameters that must be determined through optimization is orders of magnitude smaller than for backpropagation or RBF networks, which may indicate that the method better captures the essential degrees of variation in learning.
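The abstract outlines the VSM procedure: kernel-weighted nearest-neighbour interpolation whose similarity metric and kernel size are tuned by conjugate gradient optimization. The sketch below illustrates that idea under stated assumptions, not the paper's exact formulation: it uses a diagonal (per-feature) metric, a Gaussian kernel, and a leave-one-out cross-entropy objective, and all function names are illustrative.

# A minimal VSM-style sketch, assuming a Gaussian kernel, a diagonal
# per-feature metric, and a leave-one-out cross-entropy objective
# (illustrative choices; the paper's exact objective and kernel differ).
import numpy as np
from scipy.optimize import minimize

def _predict_proba(params, X, Y, eps=1e-12):
    """Leave-one-out class probabilities under a weighted Gaussian kernel."""
    d = X.shape[1]
    w = np.exp(params[:d])        # per-dimension metric weights (kept positive)
    sigma2 = np.exp(params[d])    # squared kernel width (kept positive)
    diff = X[:, None, :] - X[None, :, :]          # pairwise differences
    dist2 = np.einsum('ijk,k->ij', diff**2, w)    # metric-weighted squared distances
    K = np.exp(-dist2 / (2.0 * sigma2))           # kernel weights between points
    np.fill_diagonal(K, 0.0)                      # leave-one-out: exclude self-votes
    votes = K @ Y                                 # kernel-weighted class votes
    return votes / (votes.sum(axis=1, keepdims=True) + eps)

def _loo_loss(params, X, Y):
    """Cross-entropy of the leave-one-out predictions (quantity minimized)."""
    P = _predict_proba(params, X, Y)
    return -np.mean(np.log(np.sum(P * Y, axis=1) + 1e-12))

def fit_vsm(X, y, n_classes):
    """Optimize metric weights and kernel size with conjugate gradient."""
    Y = np.eye(n_classes)[y]               # one-hot training labels
    x0 = np.zeros(X.shape[1] + 1)          # start from a unit metric and width
    res = minimize(_loo_loss, x0, args=(X, Y), method='CG')
    return res.x

Prediction on a new point would reuse the learned weights and width to vote over the training set with the same kernel. Note that scipy's CG minimizer falls back to finite-difference gradients here; the paper's conjugate gradient procedure relies on its own objective and gradient formulation.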
Keywords
generalisation (artificial intelligence); interpolation; learning (artificial intelligence); neural nets; optimisation; conjugate gradient optimization; generalization; learning; nearest-neighbour interpolation; neural network; variable interpolation kernel; variable-kernel similarity metric learning; Application software; Computer science; Interpolation; Kernel; Learning systems; Neural networks; Optimization methods; Radial basis function networks; Testing; Training data
fLanguage
English
Publisher
ieee
Conference_Title
Proceedings of 1993 International Joint Conference on Neural Networks (IJCNN '93-Nagoya)
Print_ISBN
0-7803-1421-2
Type
conf
DOI
10.1109/IJCNN.1993.713963
Filename
713963