Title of article :
A language learning model for finite parameter spaces
Author/Authors :
Niyogi, Partha and Berwick, Robert C.
Issue Information :
Journal with serial year 1996
Pages :
33
From page :
161
To page :
193
Abstract :
This paper shows how to formally characterize language learning in a finite parameter space, for instance in the principles-and-parameters approach to language, as a Markov structure. New language learning results follow directly: we can explicitly calculate how many positive examples, on average (the "sample complexity"), it will take for a learner to correctly identify a target language with high probability. We show how sample complexity varies with input distributions and learning regimes. In particular, we find that the average time to converge under reasonable language input distributions for a simple three-parameter system first described by Gibson and Wexler (1994) is psychologically plausible, in the range of 100–150 positive examples. We further find that a simple random-step algorithm (simply jumping from one language hypothesis to another, rather than changing one parameter at a time) works faster and always converges to the correct target language, in contrast to the single-step, local parameter-setting method advocated in some recent work.
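A minimal sketch may make the setup concrete: the hypothesis space is the eight grammars of a three-parameter system, and an error-driven learner moves through it either by single-parameter flips (in the spirit of Gibson and Wexler's triggering algorithm) or by random jumps, as the abstract contrasts. The Python below is an illustration only; the sentence model (a sentence reveals a random nonempty subset of the target's parameter values) is an invented stand-in, not the actual Gibson-Wexler languages, and all function names are hypothetical.

import random

# Toy 3-parameter space: 8 grammars, each a tuple of binary parameter values.
N_PARAMS = 3
GRAMMARS = [tuple((g >> i) & 1 for i in range(N_PARAMS)) for g in range(2 ** N_PARAMS)]

def sample_sentence(target, rng):
    # Stand-in input model (assumption, not Gibson & Wexler's languages):
    # a sentence exposes a random nonempty subset of the target's parameter values.
    mask = [rng.random() < 0.5 for _ in range(N_PARAMS)]
    if not any(mask):
        mask[rng.randrange(N_PARAMS)] = True
    return tuple((i, target[i]) for i in range(N_PARAMS) if mask[i])

def parses(grammar, sentence):
    # A grammar parses a sentence iff it agrees on every revealed parameter.
    return all(grammar[i] == v for i, v in sentence)

def learn(target, step, rng, max_steps=10_000):
    # Error-driven learning: change hypothesis only on a parsing failure.
    # Returns the number of positive examples consumed before hitting the target.
    hyp = rng.choice(GRAMMARS)
    for n in range(1, max_steps + 1):
        if hyp == target:
            return n - 1
        s = sample_sentence(target, rng)
        if not parses(hyp, s):
            hyp = step(hyp, s, rng)
    return max_steps

def single_step(hyp, s, rng):
    # TLA-style move: flip one randomly chosen parameter, keep the new
    # grammar only if it parses the current sentence (greediness).
    i = rng.randrange(N_PARAMS)
    cand = tuple(v ^ 1 if j == i else v for j, v in enumerate(hyp))
    return cand if parses(cand, s) else hyp

def random_step(hyp, s, rng):
    # Random-walk move: jump to any grammar consistent with the sentence.
    return rng.choice([g for g in GRAMMARS if parses(g, s)])

rng = random.Random(0)
target = GRAMMARS[5]
for name, step in [("single-step", single_step), ("random-step", random_step)]:
    trials = [learn(target, step, rng) for _ in range(1000)]
    print(name, sum(trials) / len(trials))

Under the paper's Markov framing, each learning rule together with an input distribution fixes a transition matrix over the eight grammar states, and the averages printed above are Monte Carlo estimates of the expected absorption time into the target state, the "sample complexity" the abstract refers to.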
Journal title :
Cognition
Serial Year :
1996
Record number :
2075120