A nonmonotone memory gradient method for unconstrained optimization

Research output: Article › peer-review

3 Citations (Scopus)

Abstract

Memory gradient methods are used for unconstrained optimization, especially for large-scale problems. They were first proposed by Miele and Cantrell (1969) and by Cragg and Levy (1969). Recently, Narushima and Yabe (2006) proposed a new memory gradient method that generates a descent search direction for the objective function at every iteration and converges globally to the solution when the Wolfe conditions are satisfied within the line search strategy. In this paper, we propose a nonmonotone memory gradient method based on this work. We show that our method converges globally to the solution. Our numerical results show that the proposed method is efficient for some standard test problems, provided that a parameter included in the method is chosen suitably.
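The abstract does not reproduce the method's formulas, so the following Python sketch only illustrates the two ingredients it names: a memory gradient direction built from a few stored search directions, and a nonmonotone (Grippo-Lampariello-Lucidi style) acceptance rule in place of a monotone line search. The safeguarded choice of the combination coefficients and all parameter names (m, M, delta, rho) are illustrative assumptions, not the formulas of Narushima and Yabe (2006) or of this paper.

    import numpy as np

    def nonmonotone_memory_gradient(f, grad, x0, m=3, M=5, delta=1e-4,
                                    rho=0.5, tol=1e-6, max_iter=1000):
        """Sketch of a memory gradient method with a nonmonotone
        backtracking line search. The safeguarded beta below is an
        illustrative assumption, not the paper's exact formula."""
        x = np.asarray(x0, dtype=float)
        g = grad(x)
        dirs = []                # up to m previous search directions
        f_hist = [f(x)]          # recent f-values for the nonmonotone rule
        for _ in range(max_iter):
            if np.linalg.norm(g) < tol:
                break
            # Memory gradient direction: steepest descent plus a damped
            # combination of stored directions. Each beta is bounded so
            # that g^T d <= -0.5 * ||g||^2 < 0, i.e. d is always descent.
            d = -g
            for d_old in dirs:
                beta = 0.5 * np.linalg.norm(g)**2 / (
                    len(dirs) * (abs(g @ d_old)
                                 + np.linalg.norm(g) * np.linalg.norm(d_old))
                    + 1e-16)
                d = d + beta * d_old
            gTd = g @ d
            if gTd >= 0.0:
                d, gTd = -g, -(g @ g)   # defensive fallback (never triggers)
            # Nonmonotone Armijo condition: compare the trial point against
            # the max of the last M function values instead of f(x_k) alone.
            f_ref = max(f_hist[-M:])
            alpha = 1.0
            while f(x + alpha * d) > f_ref + delta * alpha * gTd and alpha > 1e-16:
                alpha *= rho
            x = x + alpha * d
            g = grad(x)
            f_hist.append(f(x))
            dirs = (dirs + [d])[-m:]    # keep only the m most recent directions
        return x

    # Example usage: minimize the Rosenbrock function.
    f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
    grad = lambda x: np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
                               200 * (x[1] - x[0]**2)])
    x_star = nonmonotone_memory_gradient(f, grad, np.array([-1.2, 1.0]))

Allowing the trial point to beat only the worst of the last M function values permits occasional increases in f, which is the point of a nonmonotone strategy: it can avoid the short, creeping steps a monotone line search takes in narrow curved valleys.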

Original language: English
Pages (from-to): 31-45
Number of pages: 15
Journal: Journal of the Operations Research Society of Japan
Volume: 50
Issue number: 1
DOI
Publication status: Published - Mar 2007
Externally published: Yes

ASJC Scopus subject areas

  • Decision Sciences (all)
  • Management Science and Operations Research
