DocumentCode
395098
Title
Simple recurrent networks and random indexing
Author
Sakurai, Akito; Hyodo, Daisuke
Author_Institution
Keio Univ., Yokohama, Japan
Volume
1
fYear
2002
fDate
18-22 Nov. 2002
Firstpage
35
Abstract
We first show that the dendrogram depicting a lexical hierarchy among words, which Elman obtained by training an SRN (simple recurrent network), can in fact be obtained without training the SRN. We then show that training was not required because the SRN itself (1) assigns each word a random code, which is in fact a set of the SRN's weights, and (2) assigns each word a composite code reflecting its contexts (the sets of words surrounding it), which is in fact a vector of the SRN's hidden-unit activations; a lexical hierarchy among words can then be built from the similarity of their contexts. We ascertained that this scheme remains valid even though the codes are skewed by the non-linear output function (the standard sigmoid). We also note that the coding scheme is similar to the random indexing proposed by Kanerva and his group.
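The two-step scheme the abstract describes can be illustrated with a short sketch: give each word a fixed random index vector (playing the role of the SRN's random weights), form a word's composite code by summing the index vectors of its neighbouring words (playing the role of the hidden-unit activations), and cluster the composite codes to obtain the dendrogram. The corpus, dimensionality, and context window below are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of a random-indexing-style coding scheme, assuming a toy
# Elman-like corpus, DIM=64 random codes, and a one-word context window.
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram

rng = np.random.default_rng(0)

# Toy corpus of simple subject-verb-object sentences (hypothetical).
sentences = [
    ["man", "eats", "bread"], ["woman", "eats", "cookie"],
    ["cat", "chases", "mouse"], ["dog", "chases", "cat"],
    ["man", "sees", "dog"], ["woman", "sees", "cat"],
]
vocab = sorted({w for s in sentences for w in s})

DIM = 64  # dimensionality of the random codes (assumed value)
# Step (1): assign each word a fixed random code, untrained.
index_vec = {w: rng.standard_normal(DIM) for w in vocab}

# Step (2): composite code = sum of the random codes of context words.
context = {w: np.zeros(DIM) for w in vocab}
for s in sentences:
    for i, w in enumerate(s):
        for j in (i - 1, i + 1):          # one word on each side
            if 0 <= j < len(s):
                context[w] += index_vec[s[j]]

# Hierarchical clustering of the composite codes yields the dendrogram;
# words used in similar contexts (e.g. the animate nouns) group together.
X = np.stack([context[w] for w in vocab])
Z = linkage(X, method="average", metric="cosine")
print(dendrogram(Z, labels=vocab, no_plot=True)["ivl"])  # leaf order
```

No weights are ever trained here; the hierarchy emerges purely because words sharing contexts accumulate similar sums of random vectors, which is the point the paper makes about Elman's dendrogram.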
Keywords
computational linguistics; indexing; recurrent neural nets; coding scheme; composite code; dendrogram; hidden unit activations; lexical hierarchy; nonlinear output function; random code; random indexing; simple recurrent networks; standard sigmoidal function; Animals; Backpropagation algorithms; Code standards; Educational technology; Indexing; Joining processes; Natural languages; Recurrent neural networks
fLanguage
English
Publisher
ieee
Conference_Title
Proceedings of the 9th International Conference on Neural Information Processing (ICONIP '02), 2002
Print_ISBN
981-04-7524-1
Type
conf
DOI
10.1109/ICONIP.2002.1202126
Filename
1202126