• DocumentCode
    9609
  • Title
    Randomized Gradient-Free Method for Multiagent Optimization Over Time-Varying Networks
  • Author
    Yuan, Deming; Ho, Daniel W. C.
  • Author_Institution
    College of Automation, Nanjing University of Posts & Telecommunications, Nanjing, China
  • Volume
    26
  • Issue
    6
  • fYear
    2015
  • fDate
    June 2015
  • Firstpage
    1342
  • Lastpage
    1347
  • Abstract
    In this brief, we consider multiagent optimization over a network in which multiple agents try to minimize a sum of nonsmooth but Lipschitz continuous functions, subject to a convex state constraint set. The underlying network topology is modeled as time varying. We propose a randomized derivative-free method in which each update uses random gradient-free oracles in place of subgradients (SGs). In contrast to existing work, we do not require agents to be able to compute the SGs of their objective functions. We establish convergence of the method to an approximate solution of the multiagent optimization problem, to within an error level that depends on the smoothing parameter and the Lipschitz constant of each agent's objective function. Finally, a numerical example is provided to demonstrate the effectiveness of the method.
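    The abstract names the ingredients but not the update rule, so the following is a minimal sketch of the general technique it describes: each agent averages its neighbors' states over a time-varying network, queries a two-point random gradient-free oracle (a finite difference along a random Gaussian direction, divided by a smoothing parameter) in place of a subgradient, and projects onto the constraint set. Everything concrete here (the two alternating doubly stochastic weight matrices, the l1 objectives, the ball constraint, the 1/sqrt(k) step size, and the value of mu) is an illustrative assumption, not the paper's exact setup.

    ```python
    import numpy as np

    def gf_oracle(f, x, mu, rng):
        # Two-point random gradient-free oracle: sample a Gaussian
        # direction u and return the finite-difference estimate
        # (f(x + mu*u) - f(x)) / mu * u of a smoothed gradient.
        u = rng.standard_normal(x.shape)
        return (f(x + mu * u) - f(x)) / mu * u

    def project(x, radius=10.0):
        # Euclidean projection onto a ball, standing in for the
        # convex state constraint set X (hypothetical choice).
        n = np.linalg.norm(x)
        return x if n <= radius else x * (radius / n)

    # Toy problem: 3 agents, each with a nonsmooth Lipschitz objective
    # f_i(x) = sum_j |x_j - b_ij|; the minimizer of sum_i f_i is the
    # coordinate-wise median of the rows b_i, here [1, 0.5].
    rng = np.random.default_rng(0)
    b = np.array([[1.0, -2.0], [3.0, 0.5], [-1.0, 2.0]])
    fs = [lambda x, bi=bi: np.abs(x - bi).sum() for bi in b]

    m, d = b.shape
    x = rng.standard_normal((m, d))   # one state vector per agent
    mu = 1e-3                         # smoothing parameter

    for k in range(2000):
        # Time-varying topology: alternate between two connected
        # doubly stochastic mixing matrices (a stand-in for the
        # joint-connectivity assumptions such analyses use).
        if k % 2 == 0:
            W = np.array([[.5, .5, 0], [.5, .25, .25], [0, .25, .75]])
        else:
            W = np.array([[.75, 0, .25], [0, .5, .5], [.25, .5, .25]])
        step = 1.0 / np.sqrt(k + 1)   # diminishing step size
        mixed = W @ x                 # consensus (averaging) step
        for i in range(m):
            g = gf_oracle(fs[i], mixed[i], mu, rng)
            x[i] = project(mixed[i] - step * g)

    print(x.round(2))  # agent estimates should cluster near [1, 0.5]
    ```

    As in the abstract, the residual error of such a scheme is governed by the smoothing parameter mu and the Lipschitz constants of the agents' objectives: a smaller mu tightens the gap between the smoothed and original problems but makes the single-sample oracle noisier.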
  • Keywords
    gradient methods; multi-agent systems; optimisation; time-varying systems; Lipschitz continuous functions; multiagent optimization problem; randomized derivative-free method; randomized gradient-free method; time-varying networks; Convergence; Learning systems; Linear programming; Network topology; Optimization; Smoothing methods; Vectors; Average consensus; distributed multiagent system; distributed optimization; networked control systems
  • fLanguage
    English
  • Journal_Title
    IEEE Transactions on Neural Networks and Learning Systems
  • Publisher
    IEEE
  • ISSN
    2162-237X
  • Type
    jour
  • DOI
    10.1109/TNNLS.2014.2336806
  • Filename
    6870494