DocumentCode
671495
Title
Recurrent neural networks with fixed time convergence for linear and quadratic programming
Author
Sanchez-Torres, Juan Diego ; Sanchez, Edgar N. ; Loukianov, Alexander G.
Author_Institution
Autom. Control Lab., CINVESTAV-IPN Guadalajara, Guadalajara, Mexico
fYear
2013
fDate
4-9 Aug. 2013
Firstpage
1
Lastpage
5
Abstract
In this paper, a new class of recurrent neural networks that solve linear and quadratic programs is presented. Their design is formulated as a sliding mode control problem: the network structure is based on the Karush-Kuhn-Tucker (KKT) optimality conditions, with the KKT multipliers regarded as control inputs implemented through fixed-time stabilizing terms instead of the commonly used activation functions. The main feature of the proposed networks is therefore their fixed convergence time to the solution; that is, there is a time, independent of the initial conditions, within which the network converges to the optimal solution. Simulations show the feasibility of the proposed approach.
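To illustrate the fixed-time property described above, the following minimal Python sketch (not from the paper; the gains, exponents, and tolerances are illustrative assumptions) integrates the scalar dynamics dz/dt = -k1*|z|^p*sign(z) - k2*|z|^q*sign(z) with 0 < p < 1 < q, the kind of stabilizing term the abstract refers to, and shows that the settling time stays below a fixed bound no matter how large the initial condition is. In the paper, terms of this form drive the KKT-multiplier control inputs of the network rather than a stand-alone scalar state.

import numpy as np

def settle_time(z0, k1=1.0, k2=1.0, p=0.5, q=1.5, dt=1e-4, tol=1e-6, t_max=20.0):
    """Integrate dz/dt = -k1*|z|^p*sign(z) - k2*|z|^q*sign(z) (0 < p < 1 < q)
    from z0 and return the first time at which |z| drops below tol."""
    z, t = float(z0), 0.0
    while abs(z) >= tol and t < t_max:
        dz = -k1 * abs(z) ** p * np.sign(z) - k2 * abs(z) ** q * np.sign(z)
        z += dt * dz
        t += dt
    return t

# Settling times stay below the same constant even as |z0| grows by orders of magnitude.
for z0 in (1.0, 1e2, 1e4, 1e6):
    print(f"z0 = {z0:9.1e}  ->  settling time ~ {settle_time(z0):.3f} s")

# Standard fixed-time bound for these gains/exponents, independent of z0:
# T <= 1/(k1*(1-p)) + 1/(k2*(q-1)) = 4 s here.
print("upper bound:", 1.0 / (1.0 * (1 - 0.5)) + 1.0 / (1.0 * (1.5 - 1)))

For these illustrative gains the bound evaluates to 4 s for every initial condition, which is precisely the "time independent of the initial conditions" highlighted in the abstract.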
Keywords
linear programming; quadratic programming; recurrent neural nets; KKT multipliers; Karush-Kuhn-Tucker optimality conditions; activation functions; fixed time convergence; fixed time stabilizing terms; network structure; optimization solution; quadratic programs; recurrent neural networks; sliding mode control problem; Convergence; Stability analysis;
fLanguage
English
Publisher
ieee
Conference_Titel
The 2013 International Joint Conference on Neural Networks (IJCNN)
Conference_Location
Dallas, TX
ISSN
2161-4393
Print_ISBN
978-1-4673-6128-6
Type
conf
DOI
10.1109/IJCNN.2013.6706835
Filename
6706835
Link To Document