Title :
Scaling in a hierarchical unsupervised network
Author :
Ghahramani, Zoubin ; Korenberg, Alexander T. ; Hinton, Geoffrey E.
Author_Institution :
Gatsby Comput. Neurosci. Unit, Univ. Coll. London, UK
Abstract :
A persistent worry with computational models of unsupervised learning is that learning will become more difficult as the problem is scaled up. We examine this issue in the context of a novel hierarchical generative model that can be viewed as a nonlinear generalisation of factor analysis and can be implemented in a neural network. The model performs perceptual inference in a probabilistically consistent manner by using top-down, bottom-up and lateral connections. These connections can be learned using simple rules that require only locally available information. We first demonstrate that the model can extract a sparse, distributed, hierarchical representation of global disparity from simplified random-dot stereograms. We then investigate some of the scaling properties of the algorithm on this problem and find that: 1) increasing the image size leads to faster and more reliable learning; 2) increasing the depth of the network from one to two hidden layers leads to better representations at the first hidden layer; and 3) once one part of the network has discovered how to represent disparity, it “supervises” other parts of the network, greatly speeding up their learning.
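The abstract describes the model only at a high level. Since the keywords name a Gaussian belief net and the abstract calls the model a nonlinear generalisation of factor analysis, the following is a minimal sketch of top-down (ancestral) generation in a two-hidden-layer rectified linear-Gaussian hierarchy. The layer sizes, weight scales, noise levels and the choice of rectification as the nonlinearity are illustrative assumptions, not the paper's actual architecture or parameters.

import numpy as np

rng = np.random.default_rng(0)

def rectify(x):
    # Rectification is the nonlinearity assumed here to generalise
    # the purely linear-Gaussian generative model of factor analysis.
    return np.maximum(x, 0.0)

# Hypothetical layer sizes: two hidden layers above a vector of visible units.
n_top, n_hid, n_vis = 4, 16, 64

# Hypothetical top-down generative weights and per-layer noise scales.
W2 = rng.normal(0.0, 0.5, size=(n_hid, n_top))  # top layer -> first hidden layer
W1 = rng.normal(0.0, 0.5, size=(n_vis, n_hid))  # first hidden layer -> visibles
sigma_top, sigma_hid, sigma_vis = 1.0, 0.1, 0.05

def generate():
    """Draw one ancestral (top-down) sample from the generative model."""
    y2 = rectify(rng.normal(0.0, sigma_top, n_top))            # top-level causes
    y1 = rectify(W2 @ y2 + rng.normal(0.0, sigma_hid, n_hid))  # hidden features
    v = W1 @ y1 + rng.normal(0.0, sigma_vis, n_vis)            # linear-Gaussian visibles
    return v

sample = generate()

With the rectification removed and a single hidden layer, generate() reduces exactly to the linear-Gaussian generative model of factor analysis, which is the sense in which such a model "generalises" it; the paper's inference and learning procedures (top-down, bottom-up and lateral connections with local learning rules) are not shown here.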
Keywords :
neural nets; Gaussian belief net; computational models; factor analysis; global disparity; hierarchical unsupervised network; neural network; nonlinear generalisation; perceptual inference; random-dot stereograms; scaling; unsupervised learning;
Conference_Title :
Ninth International Conference on Artificial Neural Networks (ICANN 99), 1999 (Conf. Publ. No. 470)
Conference_Location :
Edinburgh, UK
Print_ISBN :
0-85296-721-7
DOI :
10.1049/cp:19991077