Abstract
Conjugate gradient methods are widely used to solve large-scale unconstrained optimization problems. In the conventional methods, the search directions are defined using only the gradient of the objective function. This paper proposes two nonlinear conjugate gradient methods whose search directions mainly exploit information about the objective function itself. We prove that both methods converge globally and compare them numerically with conventional methods. The results show that, with a slight modification to the search direction, one of our methods performs as well as the best conventional method, which employs the Hestenes–Stiefel formula.
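The abstract does not reproduce the proposed search directions, so the sketch below shows only the conventional baseline it refers to: a nonlinear conjugate gradient iteration using the Hestenes–Stiefel formula for the parameter beta, with a step size satisfying the Wolfe conditions. This is an illustrative implementation in Python/NumPy (using SciPy's Wolfe line search), not the paper's method; the function name `nonlinear_cg_hs` and all parameter defaults are assumptions for demonstration.

```python
import numpy as np
from scipy.optimize import line_search, rosen, rosen_der


def nonlinear_cg_hs(f, grad, x0, max_iter=500, tol=1e-6):
    """Nonlinear CG with the Hestenes-Stiefel beta and a Wolfe line search.

    Illustrative sketch of the conventional baseline only; the paper's
    proposed directions, which also use objective-function values, are
    not reproduced here.
    """
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g  # first search direction: steepest descent
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        # Step size chosen to satisfy the Wolfe conditions (SciPy's line search).
        alpha = line_search(f, grad, x, d)[0]
        if alpha is None:
            # Line search failed; restart along steepest descent.
            d = -g
            alpha = line_search(f, grad, x, d)[0]
            if alpha is None:
                break
        x_new = x + alpha * d
        g_new = grad(x_new)
        y = g_new - g  # gradient difference
        denom = d @ y
        beta = (g_new @ y) / denom if denom != 0 else 0.0  # Hestenes-Stiefel formula
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x


# Example: minimize the Rosenbrock function from a standard starting point.
x_star = nonlinear_cg_hs(rosen, rosen_der, [-1.2, 1.0])
print(x_star)  # converges to approximately [1., 1.]
```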
| Original language | English |
|---|---|
| Pages (from-to) | 941-955 |
| Number of pages | 15 |
| Journal | Optimization Letters |
| Volume | 6 |
| Issue number | 5 |
| DOIs | |
| Publication status | Published - 2012 Jun |
| Externally published | Yes |
Keywords
- Conjugate gradient method
- Global convergence
- Unconstrained optimization problem
- Wolfe conditions
ASJC Scopus subject areas
- Control and Optimization