Title :
Gradient methods for iterative distributed control synthesis
Author :
Mårtensson, Karl ; Rantzer, Anders
Author_Institution :
Autom. Control LTH, Lund Univ., Lund, Sweden
Abstract :
In this paper we present a gradient method for iteratively updating the local controllers of a distributed linear system driven by stochastic disturbances. The control objective is to minimize the sum of the variances of the states and inputs in all nodes. We show that the gradients of this objective can be estimated in a distributed fashion using data from a forward simulation of the system model and a backward simulation of the adjoint equations. Iteratively updating the local controllers with these gradient estimates yields convergence to a locally optimal distributed controller.
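The abstract describes gradient estimation through a forward simulation of the closed-loop system and a backward simulation of the adjoint equations, followed by gradient steps on the controller. Below is a minimal, centralized sketch of that idea for a structured static state feedback u = -Kx on a discrete-time system x_{k+1} = Ax_k + Bu_k + w_k with cost E[x'Qx + u'Ru]; the system matrices, sparsity pattern S, step size, and horizon are illustrative assumptions and are not taken from the paper, and a genuinely distributed implementation would compute each local block of the gradient from local data only.

import numpy as np

def simulate_forward(A, B, K, x0, w_std, N, rng):
    """Forward simulation of the closed loop x_{k+1} = (A - B K) x_k + w_k."""
    X = np.zeros((N + 1, A.shape[0]))
    X[0] = x0
    for k in range(N):
        X[k + 1] = (A - B @ K) @ X[k] + w_std * rng.standard_normal(A.shape[0])
    return X

def estimate_gradient(A, B, K, Q, R, X):
    """Estimate dJ/dK from one trajectory via a backward adjoint sweep.

    J = (1/N) sum_k (x_k' Q x_k + u_k' R u_k), u_k = -K x_k.
    Adjoint:  lam_k = (A - B K)' lam_{k+1} + 2 (Q + K' R K) x_k,  lam_N = 0.
    Gradient: dJ/dK = (1/N) sum_k (2 R K x_k - B' lam_{k+1}) x_k'.
    """
    N = X.shape[0] - 1
    A_cl = A - B @ K
    lam_next = np.zeros(A.shape[0])          # lam_{k+1}, initialized as lam_N = 0
    grad = np.zeros_like(K)
    for k in range(N - 1, -1, -1):
        grad += np.outer(2 * R @ K @ X[k] - B.T @ lam_next, X[k])
        lam_next = A_cl.T @ lam_next + 2 * (Q + K.T @ R @ K) @ X[k]
    return grad / N

# Illustrative three-node chain (matrices chosen for the example only).
rng = np.random.default_rng(0)
A = 0.9 * np.eye(3) + 0.2 * (np.eye(3, k=1) + np.eye(3, k=-1))
B = np.eye(3)
Q, R = np.eye(3), 0.1 * np.eye(3)
S = (np.abs(A) > 0).astype(float)   # controller sparsity: nearest-neighbor structure
K = 0.1 * S                          # initial structured feedback u = -K x
step, N = 0.01, 2000

for it in range(50):
    X = simulate_forward(A, B, K, np.zeros(3), 0.1, N, rng)
    G = estimate_gradient(A, B, K, Q, R, X)
    K -= step * (G * S)              # gradient step restricted to the structure

Each iteration of the loop performs one forward simulation, one backward adjoint sweep, and one projected gradient step, so only the nonzero pattern S of the distributed controller is ever updated.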
Keywords :
control system synthesis; distributed parameter systems; gradient methods; iterative methods; optimal control; convergence; distributed linear system; gradient methods; iterative distributed control synthesis; iterative updating of local controllers; locally optimal distributed controller; stochastic disturbances; Control system synthesis; Control systems; Convergence; Distributed control; Equations; Gradient methods; Iterative methods; Linear systems; Optimal control; Stochastic systems;
Conference_Title :
Proceedings of the 48th IEEE Conference on Decision and Control, held jointly with the 28th Chinese Control Conference (CDC/CCC 2009)
Conference_Location :
Shanghai
Print_ISBN :
978-1-4244-3871-6
ISSN :
0191-2216
DOI :
10.1109/CDC.2009.5400233