Title :
Convergence of higher-order neural networks with modified updating
Author_Institution :
Centre for AI & Robotics, Bangalore, India
Abstract :
The problem of maximizing a general objective function over the hypercube {-1, 1}^n is formulated as that of maximizing a multilinear polynomial over {-1, 1}^n. Two methods are given for updating the state vector of the neural network, namely the asynchronous and the synchronous rules. They are natural generalizations of the corresponding rules for Hopfield networks with a quadratic objective function. It is shown that the asynchronous updating rule converges to a local maximum of the objective function within a finite number of time steps. A modified synchronous updating rule is presented. It incorporates both temporal and spatial correlations among the neurons. For the modified updating rule, it is shown that, after a finite number of time steps, the network state vector enters a limit cycle of length m, where m is the degree of the objective function.
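The asynchronous rule described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: it assumes the objective `f` is a multilinear polynomial on {-1, 1}^n supplied as a Python callable, and it flips one coordinate at a time whenever doing so strictly increases `f`, which terminates at a local maximum in finitely many steps because `f` takes finitely many values and increases at every accepted flip.

```python
import random

def async_maximize(f, n, seed=0):
    """Asynchronous updating (sketch): repeatedly flip a single
    coordinate of x in {-1, 1}^n to the value that strictly increases
    f(x); stop when no single-coordinate flip improves f.
    `f` is assumed to be a multilinear polynomial given as a callable."""
    rng = random.Random(seed)
    x = [rng.choice([-1, 1]) for _ in range(n)]  # random initial state
    improved = True
    while improved:
        improved = False
        for i in range(n):
            y = x[:]
            y[i] = -y[i]           # candidate single-neuron update
            if f(y) > f(x):        # accept only strict improvement
                x = y
                improved = True
    return x  # a local maximum: no single flip increases f

# Hypothetical example: degree-3 multilinear objective f(x) = x0*x1*x2.
x = async_maximize(lambda v: v[0] * v[1] * v[2], 3)
```

For this particular objective every local maximum is global (from any state with product -1, flipping any coordinate improves f), so the returned state satisfies x0*x1*x2 = 1; for general objectives only a local maximum is guaranteed.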
Keywords :
Hopfield neural nets; convergence; correlation theory; hypercube networks; Hopfield networks; asynchronous rules; higher-order neural networks; hypercube; limit cycle; local maximum; multilinear polynomial; network state vector; objective function; quadratic objective; spatial correlations; state vector; synchronous rules; Artificial intelligence; Convergence; Hopfield neural networks; Hypercubes; Limit-cycles; NP-complete problem; Neural networks; Neurons; Polynomials; Robots;
Conference_Title :
IEEE International Conference on Neural Networks, 1993
Conference_Location :
San Francisco, CA
Print_ISBN :
0-7803-0999-5
DOI :
10.1109/ICNN.1993.298758