DocumentCode :
3743914
Title :
Distributed subgradient methods for saddle-point problems
Author :
David Mateos-Núñez; Jorge Cortés
Author_Institution :
UC San Diego, United States of America
fYear :
2015
Firstpage :
5462
Lastpage :
5467
Abstract :
We present provably correct distributed subgradient methods for general min-max problems with agreement constraints on a subset of the arguments of both the convex and concave parts. Applications include separable constrained minimization problems in which each constraint is a sum of convex functions of the agents' local variables. In that case, the proposed algorithm reduces to primal-dual updates that use local subgradients and Laplacian averaging on local copies of the multipliers associated with the global constraints. The framework also encodes minimization problems with semidefinite constraints, yielding novel distributed strategies that are scalable when the order of the matrix inequalities is independent of the network size. For general convex-concave functions, our analysis establishes convergence of the running time-averages of the local estimates to a saddle point under periodic connectivity of the communication digraphs. Specifically, with a suitable choice of subgradient step-sizes, we show that the evaluation error is proportional to 1/√t, where t is the iteration step.
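To illustrate the kind of update the abstract describes, the following is a minimal Python sketch, not the paper's exact algorithm: distributed primal-dual subgradient dynamics with Laplacian averaging on local multiplier copies, applied to an assumed toy problem min Σ aᵢxᵢ² subject to Σ(xᵢ − bᵢ) ≤ 0 over a ring network. The cost and constraint data, the network, the weight matrix, and the step sizes are all illustrative assumptions.

# Sketch only (assumed problem data, not the authors' exact method):
# each agent i holds a primal variable x_i and a local copy z_i of the
# multiplier of the single global constraint sum_i (x_i - b_i) <= 0.
import numpy as np

n = 5                                   # number of agents
rng = np.random.default_rng(0)
a = rng.uniform(1.0, 2.0, n)            # assumed local costs f_i(x) = a_i * x^2
b = rng.uniform(-1.0, 1.0, n)           # assumed local constraints g_i(x) = x - b_i

# Averaging matrix of a ring graph, W = I - 0.25 * L (Laplacian averaging).
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

x = np.zeros(n)                         # local primal variables
z = np.zeros(n)                         # local copies of the global multiplier
x_avg = np.zeros(n)                     # running time-averages (the reported output)
z_avg = np.zeros(n)

T = 20000
for t in range(1, T + 1):
    eta = 1.0 / np.sqrt(t)              # step size matching the O(1/sqrt(t)) rate
    grad_x = 2 * a * x + z              # d/dx_i of f_i(x_i) + z_i * g_i(x_i)
    grad_z = x - b                      # d/dz_i, i.e. the local constraint value
    x = x - eta * grad_x                # subgradient descent on the convex part
    z = W @ z + eta * grad_z            # Laplacian averaging + ascent on multipliers
    z = np.maximum(z, 0.0)              # projection onto the nonnegative orthant
    x_avg += (x - x_avg) / t            # running averages approach a saddle point
    z_avg += (z - z_avg) / t

print("primal averages:", np.round(x_avg, 3))
print("multiplier copies (should agree):", np.round(z_avg, 3))

The 1/√t step size and the running time-averages mirror the convergence guarantee stated in the abstract, and the consensus step on z is what lets each agent track the global multiplier from purely local communication.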
Keywords :
"Optimization","Convergence","Convex functions","Symmetric matrices","Laplace equations","Minimization","Linear matrix inequalities"
Publisher :
ieee
Conference_Title :
2015 54th IEEE Conference on Decision and Control (CDC)
Type :
conf
DOI :
10.1109/CDC.2015.7403075
Filename :
7403075