Title :
Invariance constraints for improving generalization in probabilistic neural networks
Author :
Tråvén, Hans G C
Author_Institution :
Dept. of Numerical Anal. & Comput. Sci., R. Inst. of Technol., Stockholm, Sweden
Abstract :
Probabilistic neural networks can approximate class conditional densities in optimal (Bayesian) pattern classifiers. In natural pattern recognition applications, the size of the training set is always limited, making the approximation task difficult. Invariance constraints can significantly simplify the task of density approximation. A technique is presented for learning invariant representations, based on a statistical approach to grounding invariance. An iterative method is developed formally for computing the maximum likelihood estimate of the parameters of an invariant mixture model. The method can be interpreted as a competitive training strategy for a radial basis function (RBF) network. It can be used for the self-organizing formation of both invariant templates and features.
Keywords :
iterative methods; learning (artificial intelligence); maximum likelihood estimation; neural nets; pattern recognition; class conditional densities; competitive training strategy; density approximation; ground invariance; invariant mixture model; invariant representations; invariant templates; iterative method; maximum likelihood estimate; natural pattern recognition applications; optimal pattern classifiers; probabilistic neural networks; radial basis function; training set; Artificial neural networks; Bayesian methods; Computer networks; Intelligent networks; Neural networks; Numerical analysis; Organizing; Parameter estimation; Pattern recognition; Storage area networks;
Conference_Title :
1993 IEEE International Conference on Neural Networks
Conference_Location :
San Francisco, CA
Print_ISBN :
0-7803-0999-5
DOI :
10.1109/ICNN.1993.298753