Conjugate gradient methods using value of objective function for unconstrained optimization

Hideaki Iiduka, Yasushi Narushima

Research output: Contribution to journal › Article › peer-review

15 Citations (Scopus)


Conjugate gradient methods have been widely used to solve large-scale unconstrained optimization problems. In the conventional methods, the search directions are defined using only the gradient of the objective function. This paper proposes two nonlinear conjugate gradient methods whose search directions also incorporate the value of the objective function. We prove that both methods converge globally and compare them numerically with conventional methods. The results show that, with a slight modification of the search direction, one of our methods performs as well as the best conventional method, which employs the Hestenes-Stiefel formula.
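For context, a minimal sketch of the conventional nonlinear conjugate gradient iteration with the Hestenes-Stiefel formula mentioned above, paired with a simple backtracking (Armijo) line search. This illustrates the baseline scheme only, not the authors' proposed function-value-based methods; all function and variable names here are illustrative, and a full implementation would enforce both Wolfe conditions in the line search.

```python
import numpy as np

def _armijo_line_search(f, grad, x, d, c1=1e-4, alpha=1.0):
    # Backtracking until the Armijo (sufficient-decrease) condition holds.
    # A full Wolfe line search would also check the curvature condition.
    g0 = grad(x) @ d
    while f(x + alpha * d) > f(x) + c1 * alpha * g0:
        alpha *= 0.5
        if alpha < 1e-12:
            break
    return alpha

def hs_conjugate_gradient(f, grad, x0, tol=1e-8, max_iter=1000):
    """Conventional nonlinear CG with the Hestenes-Stiefel beta (sketch)."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                  # initial direction: steepest descent
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        alpha = _armijo_line_search(f, grad, x, d)
        x_new = x + alpha * d
        g_new = grad(x_new)
        y = g_new - g                       # gradient difference y_k
        beta_hs = (g_new @ y) / (d @ y)     # Hestenes-Stiefel formula
        d = -g_new + beta_hs * d            # next search direction
        x, g = x_new, g_new
    return x
```

For example, minimizing the quadratic f(x) = x₁² + 10 x₂² from x₀ = (1, 1) drives the gradient norm below the tolerance in a handful of iterations.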

Original language: English
Pages (from-to): 941-955
Number of pages: 15
Journal: Optimization Letters
Issue number: 5
Publication status: Published - 2012 Jun
Externally published: Yes


Keywords

  • Conjugate gradient method
  • Global convergence
  • Unconstrained optimization problem
  • Wolfe conditions

ASJC Scopus subject areas

  • Control and Optimization
