DocumentCode :
3536811
Title :
Exponentially fast parameter estimation in networks using distributed dual averaging
Author :
Shahrampour, Shahin; Jadbabaie, A.
Author_Institution :
Dept. of Electr. & Syst. Eng. & Gen. Robot., Univ. of Pennsylvania, Philadelphia, PA, USA
fYear :
2013
fDate :
10-13 Dec. 2013
Firstpage :
6196
Lastpage :
6201
Abstract :
In this paper we present an optimization-based view of distributed parameter estimation and observational social learning in networks. Agents receive a sequence of random, independent and identically distributed (i.i.d.) signals, each of which individually may not be informative about the underlying true state, but which together are globally informative enough to make the true state identifiable. Using an optimization-based characterization of Bayesian learning as proximal stochastic gradient descent (with the Kullback-Leibler divergence from a prior as the proximal function), we show how to efficiently use a distributed, online variant of Nesterov's dual averaging method to solve the estimation problem using purely local information. When the true state is globally identifiable and the network is connected, we prove that agents eventually learn the true parameter using a randomized gossip scheme. We demonstrate that, with high probability, the convergence is exponentially fast, with a rate that depends on the KL divergence of observations under the true state from observations under the second likeliest state. Furthermore, our work highlights the possibility of learning under continuous adaptation of the network, which is a consequence of employing a constant, unit step size in the algorithm.
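As a rough illustration of the mechanism described in the abstract, the Python sketch below implements a minimal distributed dual-averaging belief update of this flavor: each agent's dual variable accumulates log-likelihoods, a randomized gossip step averages the dual variables of a random pair of agents, and beliefs are the softmax of the dual variables (the form the update takes with a KL proximal term and unit step size). The Gaussian signal model, the four-agent network, and all variable names are illustrative assumptions, not the authors' exact construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup: n agents, a finite hypothesis set of size m, and a true
# hypothesis theta_star. Agent i's signal at time t is Gaussian with a mean
# that depends on the hypothesis (an assumed signal model, not the paper's).
n, m, T, sigma = 4, 3, 300, 1.0
theta_star = 0
means = rng.normal(size=(n, m))  # hypothetical per-agent signal means

def log_lik(i, s):
    """Log-likelihood of scalar signal s at agent i under each hypothesis."""
    return -0.5 * ((s - means[i]) / sigma) ** 2

# Dual variables z[i] accumulate log-likelihoods; with a KL proximal term and
# unit step size, each agent's belief is the softmax of its dual variable.
z = np.zeros((n, m))
for t in range(T):
    # Randomized gossip: a random pair of agents averages its dual variables
    # (a doubly stochastic mixing step).
    i, j = rng.choice(n, size=2, replace=False)
    z[i] = z[j] = 0.5 * (z[i] + z[j])
    # Local dual-averaging step: every agent adds the log-likelihood of its
    # newly observed signal, drawn under the true hypothesis.
    signals = means[:, theta_star] + sigma * rng.normal(size=n)
    for k in range(n):
        z[k] += log_lik(k, signals[k])

# Beliefs concentrate on theta_star as t grows, at a rate governed by the
# KL divergence between signal distributions under competing hypotheses.
beliefs = np.exp(z - z.max(axis=1, keepdims=True))
beliefs /= beliefs.sum(axis=1, keepdims=True)
print(np.argmax(beliefs, axis=1))  # expected: every agent selects theta_star
```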
Keywords :
belief networks; gradient methods; learning (artificial intelligence); network theory (graphs); optimisation; parameter estimation; probability; Bayesian learning; KL divergence; Kullback-Leibler divergence; Nesterov dual averaging method; distributed dual averaging method; exponentially fast parameter estimation; iid signals; independent and identically distributed signals; network adaptation; observational social learning; optimization-based characterization; optimization-based view; probability; proximal function; proximal stochastic gradient descent; randomized gossip scheme; Bayes methods; Convergence; Maximum likelihood estimation; Optimization; Parameter estimation; Robot sensing systems; Vectors;
fLanguage :
English
Publisher :
IEEE
Conference_Titel :
Decision and Control (CDC), 2013 IEEE 52nd Annual Conference on
Conference_Location :
Firenze
ISSN :
0743-1546
Print_ISBN :
978-1-4673-5714-2
Type :
conf
DOI :
10.1109/CDC.2013.6760868
Filename :
6760868