Title :
Structural generalization in neural classification: incorporation of prior probabilities
Author_Institution :
Control Syst. Centre, Univ. of Manchester, Inst. of Sci. & Technol., UK
Abstract :
Supervised learning of classifications under an l2 (squared-error) cost trains learning classifiers to recall the Bayesian a posteriori probabilities of the possible classes, given the observed measurements. This result yields a number of insights concerning the validation of training, access to the likelihood function, the creation of networks of networks, the incorporation of prior probabilities (which may vary in real time), and the choice of training set. The author focuses on the last two points. Contextual information in the form of priors is used to generalize training data, economizing on both training and computation. Structural generalization is the process whereby data are generalized architecturally rather than parametrically. A training procedure and a postprocessing technique are given which enable learning under one set of prior classification probabilities to be generalized to give (asymptotically) Bayes-optimal classifications under all others.
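The record does not reproduce the paper's postprocessing technique, but the asymptotic result it cites (squared-error training recovers posteriors proportional to class-conditional likelihood times prior) admits a standard Bayes-rule correction. The sketch below, built around the hypothetical helper reweight_posteriors, shows how posteriors learned under one set of priors could be re-scored under another; it is an assumed illustration of the general idea, not the author's procedure.

    import numpy as np

    def reweight_posteriors(posteriors, train_priors, deploy_priors):
        """Map posteriors learned under train_priors to posteriors under deploy_priors.

        Hypothetical sketch, not the paper's method. A classifier trained with
        squared-error loss approximates p_train(c | x), which is proportional to
        p(x | c) * train_priors[c]; dividing out the old priors and multiplying
        in the new ones, then renormalizing, gives
        p_deploy(c | x) proportional to
        p_train(c | x) * deploy_priors[c] / train_priors[c].
        """
        posteriors = np.asarray(posteriors, dtype=float)
        w = posteriors * (np.asarray(deploy_priors) / np.asarray(train_priors))
        return w / w.sum(axis=-1, keepdims=True)

    # Example: one network output under balanced training priors, re-scored for
    # a deployment where class 0 is four times as likely as class 1.
    print(reweight_posteriors([0.7, 0.3], [0.5, 0.5], [0.8, 0.2]))
    # -> [0.90322581 0.09677419]

Because the priors enter only through this elementwise rescaling, the correction can be applied at decision time as the priors vary, without retraining the network; for less confident outputs the Bayes-optimal decision can flip.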
Keywords :
Bayes methods; classification; neural nets; Bayes-optimal classifications; Bayesian a posteriori probabilities; learning classifiers; postprocessing technique; prior probabilities; training set
Conference_Title :
IEE Colloquium on Adaptive Filtering, Non-Linear Dynamics and Neural Networks
Conference_Location :
London