  • DocumentCode
    3062628
  • Title
    A Non-monotone Memory Gradient Method for Unconstrained Optimization

  • Author
    Gui, Shenghua; Wang, Hua

  • Author_Institution
    Dept. of Math., Shanghai Second Polytech. Univ., Shanghai, China
  • fYear
    2012
  • fDate
    23-26 June 2012
  • Firstpage
    385
  • Lastpage
    389
  • Abstract
    The memory gradient method is used for unconstrained optimization, especially for large-scale problems. In this paper, we develop a nonmonotone memory gradient method for unconstrained optimization, in which a class of memory gradient directions is combined efficiently. Global convergence and R-linear convergence are established by using a nonmonotone line search strategy, and numerical tests are given to show the efficiency of the proposed algorithm.
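    For illustration only, the minimal Python sketch below (assuming NumPy) shows one common way a method of this general kind is organized: a memory gradient direction d_k = -g_k + beta_k * d_{k-1} accepted through a nonmonotone, max-of-recent-values Armijo test. The particular beta_k (a Fletcher-Reeves-type quotient), the memory window M, and the backtracking constants delta and rho are assumptions made for this sketch, not the formulas of the paper.

```python
import numpy as np

def nonmonotone_memory_gradient(f, grad, x0, M=5, delta=1e-4, rho=0.5,
                                max_iter=500, tol=1e-6):
    """Illustrative nonmonotone memory gradient iteration (generic sketch)."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                               # first iteration: steepest descent
    f_hist = [f(x)]                      # recent values for the nonmonotone test
    for _ in range(max_iter):
        if np.linalg.norm(g) <= tol:
            break
        gTd = float(g @ d)
        if gTd >= 0.0:                   # safeguard: restart if d is not a descent direction
            d, gTd = -g, -float(g @ g)
        # Nonmonotone Armijo test: compare against the maximum of the last M
        # objective values instead of f(x_k) alone.
        f_ref = max(f_hist[-M:])
        alpha = 1.0
        while f(x + alpha * d) > f_ref + delta * alpha * gTd and alpha > 1e-12:
            alpha *= rho                 # backtrack
        x = x + alpha * d
        g_new = grad(x)
        # Memory gradient direction: new negative gradient plus a multiple of the
        # previous direction (Fletcher-Reeves-type beta, chosen here for illustration).
        beta = float(g_new @ g_new) / max(float(g @ g), 1e-16)
        d = -g_new + beta * d
        g = g_new
        f_hist.append(f(x))
    return x

if __name__ == "__main__":
    # Example: minimize the two-dimensional Rosenbrock function.
    rosen = lambda z: (1.0 - z[0])**2 + 100.0 * (z[1] - z[0]**2)**2
    rosen_grad = lambda z: np.array([
        -2.0 * (1.0 - z[0]) - 400.0 * z[0] * (z[1] - z[0]**2),
        200.0 * (z[1] - z[0]**2),
    ])
    print(nonmonotone_memory_gradient(rosen, rosen_grad, [-1.2, 1.0]))
```

    Comparing the trial point against the maximum of the last M objective values, rather than against f(x_k) alone, is what makes the line search nonmonotone: occasional increases in f are accepted, which typically helps on ill-conditioned large-scale problems.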
  • Keywords
    approximation theory; convergence; gradient methods; search problems; R-linear convergence; large scale problems; memory gradient direction; nonmonotone line search strategy; nonmonotone memory gradient method; numerical tests; unconstrained optimization; Convergence; Educational institutions; Gradient methods; Iterative methods; Search problems; R-linear convergence; memory gradient method; nonmonotone line search; unconstrained optimization
  • fLanguage
    English
  • Publisher
    ieee
  • Conference_Titel
    2012 Fifth International Joint Conference on Computational Sciences and Optimization (CSO)
  • Conference_Location
    Harbin
  • Print_ISBN
    978-1-4673-1365-0
  • Type
    conf
  • DOI
    10.1109/CSO.2012.92
  • Filename
    6274751