Title :
Learning, extracting, inserting and verifying grammatical information in recurrent neural networks
Author_Institution :
NEC Res. Inst., Princeton, NJ, USA
Abstract :
Recurrent neural networks can be trained from string examples to behave like deterministic finite-state automata (DFAs) and pushdown automata (PDAs), i.e. to recognize regular grammars and deterministic context-free grammars (DCFGs), respectively. The author discusses some of the successes and failures of this type of 'recurrent neural network' grammatical inference engine, as well as some of the issues in effectively using a priori symbolic knowledge when training dynamic networks. The author presents a method for networks with second-order weights in which inserting prior knowledge into a network becomes a straightforward mapping (or programming) of grammatical rules into weights. A more sophisticated hybrid machine was also developed, denoted a neural network pushdown automaton (NNPDA): a recurrent net connected to a stack memory. This NNPDA learns to operate an external stack and to recognize simple DCFGs from string examples. When hints about the grammars are given during training, the NNPDA is capable of learning more sophisticated DCFGs.
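The rule-insertion idea for second-order networks can be illustrated with a minimal sketch, not the paper's exact construction: assuming one-hot encodings for states and input symbols and a hand-chosen weight strength H, each DFA rule delta(q_i, a_k) = q_j is programmed as a large positive weight W[j, i, k], with competing weights set negative. All function and variable names below are illustrative assumptions.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def insert_rules(n_states, n_symbols, transitions, H=6.0):
    # W[j, i, k] couples current state i and input symbol k to next state j.
    # Start every weight at -H, then program each rule delta(q_i, a_k) = q_j
    # as W[j, i, k] = +H.
    W = np.full((n_states, n_states, n_symbols), -H)
    for (i, k), j in transitions.items():
        W[j, i, k] = +H
    return W

def step(W, state, symbol):
    # Second-order state update: S'_j = g(sum_{i,k} W[j,i,k] * S_i * I_k).
    return sigmoid(np.einsum('jik,i,k->j', W, state, symbol))

# Toy DFA over {0, 1} accepting strings with an even number of 1s.
transitions = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
W = insert_rules(n_states=2, n_symbols=2, transitions=transitions)
state = np.array([1.0, 0.0])          # start in q0, one-hot
for sym in [1, 1, 0]:                 # read the string "110"
    state = step(W, state, np.eye(2)[sym])
print(np.argmax(state))               # 0: back in q0, an even count of 1s

With weights programmed this way, the network's state vector stays close to a one-hot code for the DFA's current state, which is the sense in which rule insertion is a direct mapping of grammar into weights.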
Keywords :
"Formal languages","Finite automata","Learning systems","Recurrent neural networks"
Conference_Title :
Grammatical Inference: Theory, Applications and Alternatives, IEE Colloquium on