Exploiting damped techniques for nonlinear conjugate gradient methods

  • Original Article
  • Mathematical Methods of Operations Research

Abstract

In this paper we propose the use of damped techniques within Nonlinear Conjugate Gradient (NCG) methods. Damped techniques were introduced by Powell and recently reproposed by Al-Baali, and so far they have been applied only in the framework of quasi-Newton methods. We extend their use to NCG methods in large scale unconstrained optimization, aiming at possibly improving the efficiency and the robustness of the latter methods, especially when solving difficult problems. We consider both unpreconditioned and preconditioned NCG. In the latter case, we embed damped techniques within a class of preconditioners based on quasi-Newton updates. Our purpose is to provide efficient preconditioners which approximate, in some sense, the inverse of the Hessian matrix, while still preserving information provided by the secant equation or some of its modifications. The results of extensive numerical experiments show that the proposed approach is quite promising.
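For context, the following is a minimal sketch of Powell-type damping of the curvature pair \((s_k, y_k)\), the device the paper builds on; the function name, the NumPy-based formulation and the threshold value \(c = 0.2\) are illustrative assumptions and not the authors' implementation, which embeds damping within NCG methods and their preconditioners.

```python
import numpy as np

def damped_pair(s, y, B, c=0.2):
    """Powell-type damping of the curvature pair (s, y) -- illustrative sketch.

    Replaces y by a convex combination of y and B s so that the damped pair
    satisfies s^T y_hat >= c * s^T B s > 0 whenever B is positive definite,
    keeping a subsequent BFGS-type update well defined even if the curvature
    condition s^T y > 0 fails.
    """
    Bs = B @ s
    sTy = s @ y
    sBs = s @ Bs
    if sTy >= c * sBs:
        theta = 1.0                            # enough curvature: no damping
    else:
        theta = (1.0 - c) * sBs / (sBs - sTy)  # Powell's damping parameter
    y_hat = theta * y + (1.0 - theta) * Bs
    return y_hat, theta
```

By construction \(s^\top \hat{y} = \theta\, s^\top y + (1-\theta)\, s^\top B s \ge c\, s^\top B s\), so positive curvature of the damped pair is guaranteed; how this pair enters the NCG search direction or the quasi-Newton based preconditioner is detailed in the paper.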




References

  • Al-Baali M (1985) Descent property and global convergence of the Fletcher–Reeves method with inexact line search. IMA J Numer Anal 5:121–124
  • Al-Baali M (2014) Damped techniques for enforcing convergence of quasi-Newton methods. Optim Methods Softw 29:919–936
  • Al-Baali M, Fletcher R (1996) On the order of convergence of preconditioned nonlinear conjugate gradient methods. SIAM J Sci Comput 17:658–665
  • Al-Baali M, Grandinetti L (2009) On practical modifications of the quasi-Newton BFGS methods. AMO Adv Model Optim 11:63–76
  • Al-Baali M, Grandinetti L (2017) Improved damped quasi-Newton methods for unconstrained optimization. Pacific J Optim (to appear)
  • Al-Baali M, Purnama A (2012) Numerical experience with damped quasi-Newton optimization methods when the objective function is quadratic. SQU J Sci 17:1–11
  • Al-Baali M, Grandinetti L, Pisacane O (2014a) Damped techniques for the limited memory BFGS method for large-scale optimization. J Optim Theory Appl 161:688–699
  • Al-Baali M, Spedicato E, Maggioni F (2014b) Broyden’s quasi-Newton methods for a nonlinear system of equations and unconstrained optimization: a review and open problems. Optim Methods Softw 29:937–954
  • Boyd S, Vandenberghe L (2004) Convex optimization. Cambridge University Press, New York
  • Caliciotti A, Fasano G, Roma M (2016) Preconditioning strategies for nonlinear conjugate gradient methods, based on quasi-Newton updates. In: Sergeyev Y, Kvasov D, Dell’Accio F, Mukhametzhanov M (eds) AIP conference proceedings, vol 1776. American Institute of Physics
  • Caliciotti A, Fasano G, Roma M (2017a) Novel preconditioners based on quasi-Newton updates for nonlinear conjugate gradient methods. Optim Lett 11:835–853
  • Caliciotti A, Fasano G, Roma M (2017b) Preconditioned nonlinear conjugate gradient methods based on a modified secant equation. Appl Math Comput (submitted)
  • Dolan ED, Moré J (2002) Benchmarking optimization software with performance profiles. Math Program 91:201–213
  • Fasano G, Roma M (2013) Preconditioning Newton–Krylov methods in nonconvex large scale optimization. Comput Optim Appl 56:253–290
  • Fasano G, Roma M (2016) A novel class of approximate inverse preconditioners for large scale positive definite linear systems in optimization. Comput Optim Appl 65:399–429
  • Fletcher R (1987) Practical methods of optimization. Wiley, New York
  • Gilbert J, Nocedal J (1992) Global convergence properties of conjugate gradient methods for optimization. SIAM J Optim 2:21–42
  • Gould NIM, Orban D, Toint PL (2015) CUTEst: a constrained and unconstrained testing environment with safe threads. Comput Optim Appl 60:545–557
  • Gratton S, Sartenaer A, Tshimanga J (2011) On a class of limited memory preconditioners for large scale linear systems with multiple right-hand sides. SIAM J Optim 21:912–935
  • Grippo L, Lucidi S (1997) A globally convergent version of the Polak–Ribière conjugate gradient method. Math Program 78:375–391
  • Grippo L, Lucidi S (2005) Convergence conditions, line search algorithms and trust region implementations for the Polak–Ribière conjugate gradient method. Optim Methods Softw 20:71–98
  • Grippo L, Sciandrone M (2011) Metodi di ottimizzazione non vincolata. Springer, Milan
  • Hager W, Zhang H (2013) The limited memory conjugate gradient method. SIAM J Optim 23:2150–2168
  • Kelley CT (1999) Iterative methods for optimization. Frontiers in applied mathematics. SIAM, Philadelphia, PA
  • Morales J, Nocedal J (2000) Automatic preconditioning by limited memory quasi-Newton updating. SIAM J Optim 10:1079–1096
  • Moré J, Thuente D (1994) Line search algorithms with guaranteed sufficient decrease. ACM Trans Math Softw (TOMS) 20:286–307
  • Nocedal J, Wright S (2006) Numerical optimization, 2nd edn. Springer, New York
  • Powell MJD (1978) Algorithms for nonlinear constraints that use Lagrangian functions. Math Program 14:224–248
  • Powell MJD (1986) How bad are the BFGS and DFP methods when the objective function is quadratic? Math Program 34:34–47
  • Pytlak R (2009) Conjugate gradient algorithms in nonconvex optimization. Springer, Berlin


Author information

Corresponding author

Correspondence to Giovanni Fasano.

Appendix

In this appendix we report a technical result used in the proof of Proposition 1 (see Grippo and Sciandrone 2011).

Lemma 1

Let \(\{\xi _k\}\) be a sequence of nonnegative real numbers. Let \(\varOmega >0\) and \(q \in (0,1)\) and suppose that there exists \(k_1 \ge 1\) such that

$$\begin{aligned} \xi _k \le \varOmega + q \xi _{k-1}, \qquad \hbox {for any}\qquad k \ge k_1. \end{aligned}$$

Then,

$$\begin{aligned} \xi _k \le \frac{\varOmega }{1-q} + \left( \xi _{k_1} - \frac{\varOmega }{1-q}\right) q^{k-k_1}, \qquad \hbox {for any}\qquad k \ge k_1. \end{aligned}$$

Proof

Starting from the relation

$$\begin{aligned} \xi _k \le \varOmega + q \xi _{k-1}, \qquad \hbox {for any}\qquad k \ge k_1, \end{aligned}$$

and applying it recursively \(k-k_1\) times, we obtain

$$\begin{aligned} \xi _k \le \varOmega \left( \sum _{i=0}^{k-k_1-1}q^i \right) + q^{k-k_1}\xi _{k_1}, \end{aligned}$$

from which we obtain

$$\begin{aligned} \xi _k \le \varOmega \Big (\frac{1-q^{k-k_1}}{1-q}\Big )+q^{k-k_1} \xi _{k_1} = \frac{\varOmega }{1-q} + \left( \xi _{k_1} - \frac{\varOmega }{1-q} \right) q^{k-k_1}. \end{aligned}$$

\(\square \)
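As a quick sanity check (not part of the paper), the recursion can be iterated numerically at equality, which is its worst case, and compared with the closed-form bound of Lemma 1; the values of \(\varOmega \), q, \(k_1\) and \(\xi _{k_1}\) below are arbitrary illustrative choices.

```python
# Numerical check of Lemma 1: iterate xi_k = Omega + q * xi_{k-1}
# (the worst case of the inequality) and compare with the bound
# Omega/(1-q) + (xi_{k1} - Omega/(1-q)) * q**(k - k1).
Omega, q, k1, xi_k1 = 1.5, 0.7, 3, 10.0   # illustrative values

xi = xi_k1
for k in range(k1 + 1, k1 + 50):
    xi = Omega + q * xi                                         # worst-case recursion
    bound = Omega / (1 - q) + (xi_k1 - Omega / (1 - q)) * q ** (k - k1)
    assert xi <= bound + 1e-12                                  # Lemma 1 bound holds
print(f"xi_k -> {xi:.6f},  limit Omega/(1 - q) = {Omega / (1 - q):.6f}")
```

In the worst case the recursion attains the bound with equality, and both converge geometrically to \(\varOmega /(1-q)\), the constant appearing in the lemma.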


Cite this article

Al-Baali, M., Caliciotti, A., Fasano, G. et al. Exploiting damped techniques for nonlinear conjugate gradient methods. Math Meth Oper Res 86, 501–522 (2017). https://doi.org/10.1007/s00186-017-0593-1
