Abstract:
Novel general algorithms for multiple-descent cost-competitive learning are presented. These algorithms self-organize neural networks and possess the following features: optimal grouping of applied vector inputs, product form of neurons, neural topologies, excitatory and inhibitory connections, fair competitive bias, oblivion, a winner-take-quota rule, stochastic update, and applicability to a wide class of costs. Both batch and successive training algorithms are given, and each type has its own merits. The two classes are nevertheless equivalent, since a problem solved in batch mode can be computed successively, and vice versa. Besides traditional standard-pattern-set design, the algorithms cover a class of combinatorial optimizations. A rough illustrative sketch of the winner-take-quota update follows.
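As a rough, non-authoritative illustration of the winner-take-quota idea (the abstract does not specify the exact update rule), the following Python sketch moves the best "quota" neurons toward each successively applied input, rather than a single winner as in winner-take-all. The squared-error cost, the quota of 2, the learning rate, and the name winner_take_quota_step are illustrative assumptions, not the paper's definitions.

import numpy as np

def winner_take_quota_step(weights, x, quota=2, lr=0.1):
    # Cost of each neuron for this input (illustrative squared-error cost).
    dists = np.sum((weights - x) ** 2, axis=1)
    # The "quota" best-matching neurons all win, not just one.
    winners = np.argsort(dists)[:quota]
    # Descent step: move each winning neuron toward the input.
    weights[winners] += lr * (x - weights[winners])
    return weights

rng = np.random.default_rng(0)
weights = rng.normal(size=(8, 2))        # 8 neurons, 2-D inputs
for x in rng.normal(size=(100, 2)):      # successive (per-sample) training
    weights = winner_take_quota_step(weights, x)

A batch counterpart would accumulate the updates over all inputs before applying them, which is one way to picture the batch/successive equivalence the abstract claims.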
Keywords:
learning systems; neural nets; applied vector inputs; combinatorial optimizations; cost-competitive learning; excitatory connections; fair competitive bias; inhibitory connections; multiple-descent; neural networks; neural topologies; oblivion; optimal grouping; pattern set design; product form; stochastic update; successive self-organization; successive training algorithms; winner-take-quota rule