Title :
Network optimization under uncertainty
Author :
Zargham, Michael ; Ribeiro, Alejandro ; Jadbabaie, A.
Author_Institution :
Dept. of Electr. & Syst. Eng., Univ. of Pennsylvania, Philadelphia, PA, USA
Abstract :
Network optimization problems are often solved with dual gradient descent algorithms, which can be implemented in a distributed manner but are known to have slow convergence rates. The accelerated dual descent (ADD) method improves this convergence rate by distributed computation of approximate Newton steps. This paper shows that a stochastic version of ADD can be used to solve network optimization problems with uncertainty in the constraints, as is typical of communication networks. We prove almost sure convergence to an error neighborhood when the mean square error of the uncertainty is bounded, and we give a more restrictive sufficient condition for exact almost sure convergence to the optimal point. Numerical experiments show that stochastic ADD converges in two orders of magnitude fewer iterations than stochastic gradient descent.
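To make the contrast concrete, here is a minimal sketch of dual gradient ascent versus an ADD-style approximate Newton step on a toy network flow problem. Everything here is assumed for illustration: the 4-node graph, the demands, quadratic edge costs (so the dual gradient has a closed form), and the choice of a K-term truncated Neumann series for the inverse dual Hessian; it is not the authors' algorithm or test network, only the general idea behind it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 4-node, 5-edge network (illustrative only). A is the
# node-edge incidence matrix with the root node's row dropped, so
# L = A @ A.T (a grounded graph Laplacian) is nonsingular.
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]
A_full = np.zeros((4, len(edges)))
for e, (u, v) in enumerate(edges):
    A_full[u, e], A_full[v, e] = 1.0, -1.0
A = A_full[1:]
b = np.array([1.0, -2.0, 1.0])  # flow demands at the retained nodes

# Primal: minimize 0.5*||x||^2 subject to A x = b. Then x*(lam) = -A.T @ lam
# and the dual gradient is g(lam) = A @ x*(lam) - b = -L @ lam - b.
L = A @ A.T

def dual_grad(lam, noise=0.0):
    # noise > 0 would model the stochastic setting, where uncertain
    # constraints make the dual gradient a noisy observation.
    g = -L @ lam - b
    return g + noise * rng.standard_normal(g.shape)

def add_direction(g, K=3):
    # ADD-K-flavored approximate Newton direction: truncate the Neumann
    # series for L^{-1} under the splitting L = D - B. Each extra term
    # uses one more hop of neighbor information, which is what keeps the
    # computation distributable.
    D_inv = np.diag(1.0 / np.diag(L))
    B = np.diag(np.diag(L)) - L
    d = D_inv @ g
    term = d.copy()
    for _ in range(K):
        term = D_inv @ (B @ term)
        d = d + term
    return d

lam_gd = np.zeros(3)
lam_add = np.zeros(3)
for _ in range(200):
    lam_gd = lam_gd + 0.1 * dual_grad(lam_gd)              # dual gradient ascent
    lam_add = lam_add + add_direction(dual_grad(lam_add))  # approximate Newton

gd_gap = np.linalg.norm(A @ (-A.T @ lam_gd) - b)
add_gap = np.linalg.norm(A @ (-A.T @ lam_add) - b)
print("constraint violation, gradient ascent:", gd_gap)
print("constraint violation, ADD-like step:", add_gap)
```

On this toy problem the approximate Newton iterates drive the constraint violation to numerical precision while plain dual gradient ascent is still far from feasibility, mirroring the iteration-count gap the abstract reports.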
Keywords :
Newton method; convergence of numerical methods; gradient methods; mean square error methods; network theory (graphs); stochastic programming; ADD method; accelerated dual descent method; bounded mean square error; communication networks; convergence rate improvement; dual gradient descent algorithms; error neighborhood; network optimization problems; stochastic ADD; sufficient condition; uncertainty; Approximation algorithms; Approximation methods; Convergence; Optimization; Stochastic processes; Uncertainty; Vectors
Conference_Title :
2012 IEEE 51st Annual Conference on Decision and Control (CDC)
Conference_Location :
Maui, HI
Print_ISBN :
978-1-4673-2065-8
ISSN :
0743-1546
DOI :
10.1109/CDC.2012.6426324