• DocumentCode
    1980685
  • Title
    Improved linear programming neural networks
  • Author
    Maa, Chia-Yiu ; Shanblatt, Michael A.
  • Author_Institution
    Dept. of Electr. Eng., Michigan State Univ., East Lansing, MI, USA
  • fYear
    1989
  • fDate
    14-16 Aug 1989
  • Firstpage
    748
  • Abstract
    It is shown that, from the viewpoint of optimization theory, a proper form of the Tank-Hopfield network for linear programming may be considered a means to fulfil the Kuhn-Tucker optimality conditions. Due to the nature of the network, however, the convergence state is not the exact solution but an approximation. To obtain a better approximate solution, a new network formulation is introduced. The convergence state can be made very close to the exact solution by sufficiently increasing the network parameter λ. The result of this work is not limited to linear programming; it applies directly to any nonlinear programming problem whose objective function and constraints are convex and differentiable. This work may also serve as a foundation for the application of artificial neural networks to more general optimization problems.
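    The role of the network parameter λ described in the abstract can be illustrated with a minimal sketch. This is not the authors' exact formulation; it is a generic penalty-based gradient network for a one-variable LP (minimize x subject to x ≥ 1), where the energy function, step size, and iteration count are assumptions chosen for illustration. The fixed point of the dynamics is x* = 1 − 1/λ, so the convergence state approaches the exact solution x = 1 as λ grows.

    ```python
    # Hypothetical penalty-network sketch (not the paper's formulation).
    # LP: minimize x subject to x >= 1.
    # Energy: E(x) = x + (lam / 2) * max(0, 1 - x)**2
    # Gradient dynamics: x <- x - step * dE/dx, with dE/dx = 1 - lam * max(0, 1 - x).

    def settle(lam, x=0.0, iters=1000):
        """Run the gradient dynamics to their convergence state."""
        step = 0.5 / lam  # step shrinks with lam to keep the iteration stable
        for _ in range(iters):
            violation = max(0.0, 1.0 - x)   # constraint violation of x >= 1
            grad = 1.0 - lam * violation    # dE/dx at the current state
            x -= step * grad
        return x

    for lam in (10.0, 100.0, 1000.0):
        print(lam, settle(lam))  # convergence state approaches 1.0 as lam grows
    ```

    The approximation error here is exactly 1/λ, matching the abstract's point that the convergence state is not the exact solution but can be driven arbitrarily near it by increasing λ.
    
    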
  • Keywords
    linear programming; neural nets; nonlinear programming; Kuhn-Tucker optimality conditions; Tank-Hopfield network; artificial neural networks; convergence state; linear programming neural networks; network formulation; network parameter; nonlinear programming problem; objective function; optimization theory; Analytical models; Artificial neural networks; Cities and towns; Convergence; Linear programming; Neural networks; Neurons; Nonlinear circuits; Traveling salesman problems; Vectors;
  • fLanguage
    English
  • Publisher
    ieee
  • Conference_Titel
    Proceedings of the 32nd Midwest Symposium on Circuits and Systems, 1989
  • Conference_Location
    Champaign, IL
  • Type
    conf
  • DOI
    10.1109/MWSCAS.1989.101963
  • Filename
    101963