DocumentCode :
3269904
Title :
Understanding internal representations and generalization properties in backpropagation networks
Author :
Kamangar, Farhad A. ; Leeth
Author_Institution :
Dept. of Comput. Sci. Eng., Texas Univ., Arlington, TX, USA
fYear :
1989
fDate :
0-0 1989
Abstract :
Summary form only given, as follows. Evidence is presented which supports the hypothesis that the behavior and properties of backpropagation (BP) networks with binary input/output values can be interpreted and predicted using propositional logic. First, it is shown that any n-to-m mapping of binary values can be fully described by expressions composed of Boolean operators. When converted to conjunctive normal form, these expressions form the basis for designing near-minimally connected networks capable of computing any arbitrary mapping function. The proposition is then explored that, if such interpretations of BP network internal representations are correct, they should predict how networks generalize after being trained on a partial set of input/response patterns. Experimental data are presented which support this hypothesis. Finally, the significance of these preliminary findings is discussed.
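Illustrative sketch (not the authors' code): the abstract's claim that a binary mapping described in conjunctive normal form yields a small network follows from a standard construction in which each CNF clause becomes one hidden threshold unit (an OR of literals) and the output unit takes the AND of all clauses. The example below, using a hypothetical truth-table representation and 3-input parity as the mapping, shows that construction; a truly near-minimal network would additionally require Boolean minimization of the clauses, which is not attempted here.

# Sketch under stated assumptions: CNF-derived two-layer threshold network.
from itertools import product

def cnf_clauses(truth_table, n_inputs):
    """Return CNF clauses (maxterms) for a single-output Boolean function.

    truth_table: dict mapping input tuples of 0/1 to an output bit.
    Each clause is a list of (index, negated) literals; the clause is the OR
    of its literals, and the function is the AND of all clauses.
    """
    clauses = []
    for x in product((0, 1), repeat=n_inputs):
        if truth_table[x] == 0:
            # Maxterm: a clause that is false only on this falsifying input.
            clauses.append([(i, bit == 1) for i, bit in enumerate(x)])
    return clauses

def clause_unit(clause, x):
    """Hidden threshold unit computing the OR of the clause's literals."""
    s = sum((1 - x[i]) if negated else x[i] for i, negated in clause)
    return 1 if s >= 1 else 0

def cnf_network(clauses, x):
    """Two-layer network: one hidden unit per clause, output is their AND."""
    hidden = [clause_unit(c, x) for c in clauses]
    return 1 if sum(hidden) >= len(hidden) else 0

if __name__ == "__main__":
    # Example mapping: 3-input parity, a classic case needing hidden units.
    table = {x: sum(x) % 2 for x in product((0, 1), repeat=3)}
    clauses = cnf_clauses(table, 3)
    print(f"{len(clauses)} clauses -> {len(clauses)} hidden units")
    for x, y in table.items():
        assert cnf_network(clauses, x) == y
    print("CNF-derived network reproduces the mapping on all inputs")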
Keywords :
Boolean algebra; formal logic; neural nets; Boolean operators; backpropagation; backpropagation networks; binary values; mapping function; propositional logic; Logic; Neural networks;
fLanguage :
English
Publisher :
ieee
Conference_Titel :
International Joint Conference on Neural Networks (IJCNN), 1989
Conference_Location :
Washington, DC, USA
Type :
conf
DOI :
10.1109/IJCNN.1989.118523
Filename :
118523