Abstract:
An attempt is made to establish a mathematical theory that reveals the intrinsic mechanisms, capabilities, and limitations of information processing by various neural network architectures. A method for the statistical analysis of one-layer neural networks is given, covering the stability of associative mapping and mapping by totally random networks. A fundamental problem of statistical neurodynamics is treated in a way that differs from the spin-glass approach. A dynamic analysis of associative memory models and a general theory of neural learning, in which the learning potential function plays a central role, are presented. An advanced theory of learning and self-organization is proposed, covering backpropagation and its generalizations as well as the formation of topological maps and neural representations of information.
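To make the notion of associative mapping by a one-layer network concrete, the following is a minimal sketch of a correlation-matrix (Hebbian outer-product) associative memory with a sign nonlinearity. The function names, the ±1 pattern coding, and the single synchronous update step are illustrative assumptions, not details taken from the paper itself.

```python
def store(patterns):
    """Hebbian outer-product weights: W[i][j] = sum over stored
    patterns x of x[i] * x[j], with self-connections zeroed."""
    n = len(patterns[0])
    W = [[0] * n for _ in range(n)]
    for x in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:  # no self-connections
                    W[i][j] += x[i] * x[j]
    return W

def recall(W, cue):
    """One synchronous update: output_i = sign(sum_j W[i][j] * cue[j])."""
    n = len(cue)
    out = []
    for i in range(n):
        h = sum(W[i][j] * cue[j] for j in range(n))
        out.append(1 if h >= 0 else -1)
    return out

# Two orthogonal +/-1 patterns; a cue with one flipped bit
# is mapped back to the nearest stored pattern.
patterns = [[1, 1, 1, 1, -1, -1, -1, -1],
            [1, -1, 1, -1, 1, -1, 1, -1]]
W = store(patterns)
noisy_cue = [1, 1, 1, 1, 1, -1, -1, -1]  # patterns[0] with bit 4 flipped
restored = recall(W, noisy_cue)           # recovers patterns[0]
```

The stability question analyzed in the paper concerns exactly this kind of recall: under what statistical conditions a noisy cue is attracted back to the stored pattern rather than to a spurious state.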
Keywords:
content-addressable storage; learning systems; neural nets; statistical analysis; topology; associative mapping; associative memory; associative memory models; backpropagation; information processing; neural learning; neural networks; neurocomputing; statistical neurodynamics; topological maps; biological neural networks; brain modeling; evolution (biology); humans; information analysis; information representation; mathematical model; neurodynamics