Title :
Recurrent neural networks can learn simple, approximate regular languages
Author :
Forcada, Mikel L. ; Corbí-Bellot, Antonio M. ; Gori, Marco ; Maggini, Marco
Author_Institution :
Dept. de Llenguatges i Sistemes Inf., Univ. d'Alacant, Spain
Abstract :
A number of researchers have shown that discrete-time recurrent neural networks (DTRNN) are capable of inferring deterministic finite automata from sets of example and counterexample strings; however, discrete algorithmic methods perform much better at this task, clearly outperforming DTRNN in both space and time complexity. We show how DTRNN may instead be used to learn not the exact language that explains the whole learning set, but an approximate, much simpler language that accounts for the great majority of the examples with simpler rules. This is accomplished by gradually varying the error function so that the DTRNN is eventually allowed to classify clearly, but incorrectly, those strings it has found difficult to learn; such strings are treated as exceptions. The results show that, in this way, the DTRNN usually manages to learn a simplified approximate language.
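Note: the abstract does not spell out the modified error function. The following is a minimal Python sketch of one way the relaxation could look; the function exception_tolerant_error, the blend weight lam, and its annealing schedule are illustrative assumptions, not taken from the paper.

    def exception_tolerant_error(y, t, lam):
        """Gradually relaxed per-string error (illustrative assumption).

        y   : network output in (0, 1) for one string
        t   : target class label, 0.0 or 1.0
        lam : relaxation weight in [0, 1], annealed upwards during training
        """
        # Ordinary squared error towards the true class label t.
        e_target = 0.5 * (y - t) ** 2
        # Error measured towards the *opposite* class: small when the
        # string is classified clearly but incorrectly (an "exception").
        e_flipped = 0.5 * (y - (1.0 - t)) ** 2
        # lam = 0: standard error; lam -> 1: a clear decision either way
        # is accepted, so difficult strings stop dominating the gradient.
        return (1.0 - lam) * e_target + lam * min(e_target, e_flipped)

    if __name__ == "__main__":
        # A 'difficult' negative string (t = 0) that the network keeps
        # classifying confidently as positive (y near 1):
        for epoch in (0, 50, 100):
            lam = epoch / 100.0  # hypothetical annealing schedule
            err = exception_tolerant_error(y=0.95, t=0.0, lam=lam)
            print(f"epoch {epoch:3d}  lam {lam:.2f}  error {err:.4f}")

With lam = 0 this reduces to ordinary squared error; as lam grows, a confidently wrong output incurs almost no error, so the hardest strings can be absorbed as exceptions rather than forcing the network to model a more complex automaton.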
Keywords :
finite automata; formal languages; function approximation; learning (artificial intelligence); recurrent neural nets; deterministic finite automata; discrete-time neural networks; error function; learning set; recurrent neural networks; simple regular languages; simplified approximate language; learning automata; neural networks
Conference_Title :
International Joint Conference on Neural Networks (IJCNN '99), 1999
Conference_Location :
Washington, DC
Print_ISBN :
0-7803-5529-6
DOI :
10.1109/IJCNN.1999.832596