DocumentCode :
2214341
Title :
Using information entropy bounds to design VLSI friendly neural networks
Author :
Draghici, Sorin
Author_Institution :
Dept. of Comput. Sci., Wayne State Univ., Detroit, MI, USA
Volume :
1
fYear :
1998
fDate :
4-8 May 1998
Firstpage :
547
Abstract :
This paper presents a method for calculating the minimal size of a VLSI-optimal network for a given problem. A VLSI-optimal network is a network using integer weights in a given range [-p, p] and units with a small constant fan-in. The value p is a small integer calculated from the problem parameters such that the problem is guaranteed to have a solution. It is shown that the number of weights can be lower bounded in the worst case by the expression: m(n+1)[n log(d_max/d_min) + (n-1) log(d_max/d_min + 1) + ½ log(n-1) - ½(n-1) log e + c], where d_min is the minimum distance between patterns of opposite classes, d_max is the maximum distance between any two patterns, m is the number of patterns in the largest class, n is the number of dimensions, and c is a constant. The methodology is tested on various problems using a limited-precision integer-weight constructive algorithm.
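The bound in the abstract can be evaluated directly from the problem parameters. The following is a minimal sketch, assuming base-2 logarithms (the abstract does not state the base) and a default of c = 0 (the abstract does not give the constant's value); the function name and defaults are illustrative, not from the paper.

```python
import math

def weight_lower_bound(m, n, d_min, d_max, c=0.0):
    """Worst-case lower bound on the number of weights, per the abstract.

    m     -- number of patterns in the largest class
    n     -- number of input dimensions (must satisfy n >= 2)
    d_min -- minimum distance between patterns of opposite classes
    d_max -- maximum distance between any two patterns
    c     -- additive constant from the bound (assumed; value not given)

    Base-2 logarithms are an assumption, as the abstract leaves the base
    unspecified.
    """
    r = d_max / d_min
    return m * (n + 1) * (
        n * math.log2(r)                       # n log(d_max/d_min)
        + (n - 1) * math.log2(r + 1)           # (n-1) log(d_max/d_min + 1)
        + 0.5 * math.log2(n - 1)               # ½ log(n-1)
        - 0.5 * (n - 1) * math.log2(math.e)    # -½(n-1) log e
        + c
    )
```

Since m enters only as a multiplicative factor, the bound scales linearly with the size of the largest class; the dependence on the ratio d_max/d_min is logarithmic.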
Keywords :
VLSI; integrated circuit design; learning (artificial intelligence); maximum entropy methods; neural chips; information entropy bounds; integer weights; learning algorithm; lower bound; maximum distance; neural networks; optimal network; Computer network reliability; Computer networks; Computer science; Costs; Heuristic algorithms; Information entropy; Laboratories; Neural network hardware; Very large scale integration
fLanguage :
English
Publisher :
IEEE
Conference_Titel :
Neural Networks Proceedings, 1998 IEEE World Congress on Computational Intelligence: The 1998 IEEE International Joint Conference on Neural Networks
Conference_Location :
Anchorage, AK
ISSN :
1098-7576
Print_ISBN :
0-7803-4859-1
Type :
conf
DOI :
10.1109/IJCNN.1998.682326
Filename :
682326