Title :
Near-Optimal Approximation Rates for Distribution Free Learning with Exponentially Mixing Observations
Author :
Kurdila, A.J. ; Bin Xu
Author_Institution :
Dept. of Mech. Eng., Virginia Polytech. Inst. & State Univ., Blacksburg, VA, USA
Date :
June 30, 2010 - July 2, 2010
Abstract :
This paper derives the rate of convergence for the distribution free learning problem when the observation process is an exponentially strongly mixing (α-mixing with an exponential rate) Markov chain. If $\{z_k\}_{k=1}^{\infty} = \{(x_k, y_k)\}_{k=1}^{\infty} \subset X \times Y \equiv Z$ is an exponentially strongly mixing Markov chain with stationary measure $\rho$, it is shown that the empirical estimate $f_{\mathbf{z}}$ that minimizes the discrete quadratic risk satisfies the bound $E_{\mathbf{z} \in Z^m}\left(\| f_\rho - f_{\mathbf{z}} \|_{L^2(\rho_X)}\right) \leq C \left(\frac{\ln a}{a}\right)^{r/(2r+1)}$, where $E_{\mathbf{z} \in Z^m}(\cdot)$ is the expectation over the first $m$ steps of the chain, $f_\rho$ is the regressor function in $L^2(\rho_X)$ associated with $\rho$, $r$ is related to the abstract smoothness of the regressor, $\rho_X$ is the marginal measure associated with $\rho$, and $a$ is the rate of concentration of the Markov chain.
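For context, a minimal sketch of the estimator referred to above, assuming the standard least-squares formulation over a hypothesis space $\mathcal{H}$ (the space $\mathcal{H}$ and the uniform $1/m$ weighting are assumptions of this sketch, not stated in this record):
\[
f_{\mathbf{z}} \;=\; \arg\min_{f \in \mathcal{H}} \; \frac{1}{m} \sum_{k=1}^{m} \bigl( f(x_k) - y_k \bigr)^2,
\qquad \mathbf{z} = \{(x_k, y_k)\}_{k=1}^{m},
\]
so that the stated bound controls how fast this minimizer approaches the regressor $f_\rho(x) = \int_Y y \, d\rho(y \mid x)$ in $L^2(\rho_X)$ as the chain mixes.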
Keywords :
Markov processes; approximation theory; convergence of numerical methods; function approximation; learning (artificial intelligence); minimisation; Markov chain; discrete quadratic risk; learning theory; near-optimal approximation; regressor function; Convergence; Hilbert space; History; Kernel; Machine learning; Mechanical engineering; Pattern recognition; Statistical learning; Time measurement;
Conference_Titel :
American Control Conference (ACC), 2010
Conference_Location :
Baltimore, MD
Print_ISBN :
978-1-4244-7426-4
DOI :
10.1109/ACC.2010.5530863