Abstract
This paper introduces a method for expanding the limited vocabulary of neural-network-based language systems. The proposed method draws on developmental constraints observed in human language acquisition to generate increasingly specialist feature maps in linked orthogonal spaces. Each space acts as a semantic filter, channelling words to more specialist spaces. The resulting trace through the spaces corresponds to a full feature list for the word, which can be manipulated symbolically or by another network. This approach allows arbitrary feature accuracy for any word, whilst limiting input dimensionality to the minimum required to uniquely specify the word in the relevant specialist space. Consequently, crossover between unrelated words is also minimised, avoiding the n-squared relation between computation and vocabulary size found in fully connected networks. The resulting topology of spaces also suggests that complex inferences are possible, and the use of a perception-based feature set allows a common knowledge base to be shared between languages.
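The routing idea summarised above can be illustrated with a minimal sketch; the `FeatureSpace` class, the example spaces ("animate", "bird"), and the feature names are hypothetical assumptions for illustration and are not taken from the paper.

```python
# Minimal sketch (illustrative, not the paper's implementation): a word is
# routed through a chain of increasingly specialist feature spaces, and the
# accumulated trace serves as its full feature list.
from __future__ import annotations

from dataclasses import dataclass, field


@dataclass
class FeatureSpace:
    """A feature space acting as a semantic filter that channels words onward."""
    name: str
    features: dict[str, dict[str, float]]            # word -> local feature map
    children: list[FeatureSpace] = field(default_factory=list)

    def route(self, word: str, trace: list | None = None) -> list:
        """Return the trace of (space name, local features) pairs for `word`."""
        trace = trace or []
        if word in self.features:
            trace.append((self.name, self.features[word]))
            # Channel the word to any more specialist space that recognises it.
            for child in self.children:
                if word in child.features:
                    return child.route(word, trace)
        return trace


# Toy hierarchy: a general "animate" space channelling into a "bird" space.
bird_space = FeatureSpace("bird", {"sparrow": {"can_fly": 1.0, "size": 0.1}})
animate_space = FeatureSpace("animate", {"sparrow": {"alive": 1.0}}, [bird_space])

print(animate_space.route("sparrow"))
# [('animate', {'alive': 1.0}), ('bird', {'can_fly': 1.0, 'size': 0.1})]
```

Because each space only needs enough dimensions to distinguish the words it actually receives, the sketch also reflects the abstract's claim that input dimensionality stays at the minimum required within each specialist space.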