Complete Solvers

Linear Programming

The linear programming problem can be stated in standard form as

\[\min_{x \in \mathbb{R}^n} \ c^T x \quad \text{subject to} \ Ax=b, \ x \geq 0,\]

for some matrix \(A\). It is typical to reformulate arbitrary linear programs as an equivalent linear program in standard form. In the solver below, however, they are reformulated in so-called slack form using the SlackFramework module. See the slacks-section.
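
As a quick illustration of the standard form (using SciPy rather than this package, and with made-up data), a tiny LP can be solved as follows:

import numpy as np
from scipy.optimize import linprog

# min c'x  subject to  Ax = b, x >= 0, with made-up data.
c = np.array([1.0, 2.0, 0.0])
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([1.0])

# bounds=(0, None) encodes x >= 0 componentwise.
result = linprog(c, A_eq=A, b_eq=b, bounds=(0, None))
print(result.x)  # all weight goes to the zero-cost variable: [0, 0, 1]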

Convex Quadratic Programming

The convex quadratic programming problem can be stated in standard form as

\[\min_{x \in \mathbb{R}^n} \ c^T x + \tfrac{1}{2} x^T Q x \quad \text{subject to} \ Ax=b, \ x \geq 0,\]

for some matrix \(A\) and some square symmetric positive semi-definite matrix \(Q\). It is typical to reformulate arbitrary quadratic programs as an equivalent quadratic program in standard form. In the solver below, however, they are reformulated in so-called slack form using the SlackFramework module. See the slacks-section.
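
To make the slack reformulation concrete, the sketch below (a generic illustration, not the SlackFramework API) turns inequality constraints \(Ax \leq b\) into equalities by appending nonnegative slack variables:

import numpy as np

def to_slack_form(c, Q, A_ineq, b_ineq):
    # Rewrite  min c'x + 1/2 x'Qx  s.t.  A_ineq x <= b_ineq,  x >= 0
    # as       min c_s'z + 1/2 z'Q_s z  s.t.  A_s z = b_ineq,  z >= 0,
    # where z = [x; s] and s is a vector of slack variables.
    m, n = A_ineq.shape
    A_s = np.hstack([A_ineq, np.eye(m)])    # [A  I] z = b
    c_s = np.concatenate([c, np.zeros(m)])  # slacks carry zero cost
    Q_s = np.zeros((n + m, n + m))
    Q_s[:n, :n] = Q                         # slacks do not enter the quadratic term
    return c_s, Q_s, A_s

# Tiny made-up example: two variables, one inequality.
c_s, Q_s, A_s = to_slack_form(np.array([1.0, 1.0]), np.eye(2),
                              np.array([[1.0, 2.0]]), np.array([4.0]))
print(A_s)  # [[1. 2. 1.]]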

Unconstrained Programming

The unconstrained programming problem can be stated as

\[\min_{x \in \mathbb{R}^n} \ f(x)\]

for some smooth function \(f: \mathbb{R}^n \to \mathbb{R}\). Typically, \(f\) is required to be twice continuously differentiable.

The trunk solver requires access to exact first and second derivatives of \(f\). It minimizes \(f\) by solving a sequence of trust-region subproblems, i.e., problems of the form

\[\min_{d \in \mathbb{R}^n} \ g_k^T d + \tfrac{1}{2} d^T H_k d \quad \text{s.t.} \ \|d\| \leq \Delta_k,\]

where \(g_k = \nabla f(x_k)\), \(H_k = \nabla^2 f(x_k)\) and \(\Delta_k > 0\) is the current trust-region radius.
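
The skeleton of such a method is sketched below on a toy problem; the subproblem is solved crudely at the Cauchy point rather than with the more capable subproblem solvers trunk relies on, and all constants are illustrative:

import numpy as np

def f(x):    return (x[0] - 1.0)**2 + 10.0 * (x[0] + x[1])**2
def grad(x): return np.array([2.0 * (x[0] - 1.0) + 20.0 * (x[0] + x[1]),
                              20.0 * (x[0] + x[1])])

H = np.array([[22.0, 20.0], [20.0, 20.0]])  # constant Hessian of this quadratic

x, radius = np.array([-2.0, 2.0]), 1.0
for k in range(1000):
    g = grad(x)
    if np.linalg.norm(g) < 1.0e-8:
        break
    # Cauchy point: minimize the quadratic model along -g within the radius.
    gHg = g @ H @ g
    tau = 1.0 if gHg <= 0.0 else min(np.linalg.norm(g)**3 / (radius * gHg), 1.0)
    d = -tau * radius * g / np.linalg.norm(g)
    pred = -(g @ d + 0.5 * d @ H @ d)   # decrease predicted by the model
    rho = (f(x) - f(x + d)) / pred      # actual over predicted decrease
    if rho > 0.25:                      # accept the step (here the model is exact,
        x = x + d                       # so every step is accepted)
    radius = 0.5 * radius if rho < 0.25 else (2.0 * radius if rho > 0.75 else radius)

print(x)  # converges to the minimizer (1, -1)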

TRUNK: Trust-Region Method for Unconstrained Programming.

D. Orban, Montreal, Sept. 2003
class trunk.Trunk(nlp, tr, tr_solver, **kwargs)

Bases: object

Abstract trust-region method for unconstrained optimization.

A stationary point of the unconstrained problem

minimize f(x)

is identified by solving a sequence of trust-region constrained quadratic subproblems

min gᵀs + ½ sᵀHs subject to ‖s‖ ≤ Δ.
__init__(nlp, tr, tr_solver, **kwargs)

Instantiate a trust-region solver for nlp.

Parameters:
nlp: a NLPModel instance.
tr: a TrustRegion instance.
tr_solver: a trust-region solver to be passed as argument to the TrustRegionSolver constructor.
Keywords:
x0: starting point (nlp.x0)
reltol: relative stopping tolerance (nlp.stop_d)
abstol: absolute stopping tolerance (1.0e-6)
maxiter: maximum number of iterations (max(1000, 10n))
inexact: use inexact Newton stopping tolerance (False)
ny: apply Nocedal/Yuan linesearch (False)
nbk: maximum number of backtracking steps in Nocedal/Yuan linesearch (5)
monotone: use monotone descent strategy (False)
n_non_monotone: number of iterations for which non-strict descent is tolerated if monotone=False (25)
logger_name: name of a logger object that can be used in the post iteration (None)

Once a Trunk object has been instantiated and the problem is set up, solve the problem by issuing a call to Trunk.solve(). The algorithm stops as soon as the Euclidean norm of the gradient falls below

max(abstol, reltol * g0)

where g0 is the Euclidean norm of the initial gradient.
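
Schematically, the stopping test reads as follows (the tolerance values here are illustrative, since reltol defaults to nlp.stop_d):

import numpy as np

# Stop as soon as ||g_k|| <= max(abstol, reltol * ||g_0||).
def stopping_threshold(g0, abstol=1.0e-6, reltol=1.0e-5):
    return max(abstol, reltol * np.linalg.norm(g0))

print(stopping_threshold(np.array([100.0, 0.0])))  # 1e-5 * 100 = 1e-3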

post_iteration(**kwargs)

Perform work at the end of an iteration.

Use this method for preconditioners that need updating, e.g., a limited-memory BFGS preconditioner.

precon(v, **kwargs)

Generic preconditioning method—must be overridden.

solve(**kwargs)

Solve.

Keywords:
maxiter: maximum number of iterations.

All other keyword arguments are passed directly to the constructor of the trust-region solver.

class trunk.QNTrunk(*args, **kwargs)

Bases: trunk.Trunk

__init__(*args, **kwargs)
post_iteration(**kwargs)
precon(v, **kwargs)

Generic preconditioning method—must be overridden.

solve(**kwargs)

Solve.

Keywords:
maxiter: maximum number of iterations.

All other keyword arguments are passed directly to the constructor of the trust-region solver.

The L-BFGS method only requires access to exact first derivatives of \(f\) and maintains its own approximation to the second derivatives.
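
At the heart of any such method is the two-loop recursion, which applies the inverse Hessian approximation to a vector using only the most recent pairs sᵢ = xᵢ₊₁ - xᵢ and yᵢ = gᵢ₊₁ - gᵢ. A minimal sketch (not this module's implementation):

import numpy as np

def lbfgs_two_loop(g, s_hist, y_hist):
    # Apply the inverse L-BFGS approximation H_k to g using stored pairs
    # (s_i, y_i), ordered oldest first.
    q = g.copy()
    alphas = []
    for s, y in reversed(list(zip(s_hist, y_hist))):
        rho = 1.0 / (y @ s)
        alpha = rho * (s @ q)
        q -= alpha * y
        alphas.append(alpha)
    if s_hist:                            # initial scaling gamma_k = s'y / y'y
        s, y = s_hist[-1], y_hist[-1]
        q *= (s @ y) / (y @ y)
    for (s, y), alpha in zip(zip(s_hist, y_hist), reversed(alphas)):
        rho = 1.0 / (y @ s)
        beta = rho * (y @ q)
        q += (alpha - beta) * s
    return q                              # approximates H_k g

# With an empty history the result is just the gradient itself:
print(lbfgs_two_loop(np.array([1.0, 2.0]), [], []))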

The limited-memory BFGS linesearch method for unconstrained optimization.

class lbfgs.LBFGS(model, **kwargs)

Bases: object

Solve unconstrained problems with the limited-memory BFGS method.

__init__(model, **kwargs)

Instantiate an L-BFGS solver for model.

Parameters:
model: a QuasiNewtonModel based on InverseLBFGSOperator
Keywords:
maxiter: maximum number of iterations (default: max(10n, 1000))
atol: absolute stopping tolerance (default: 1.0e-8)
rtol: relative stopping tolerance (default: 1.0e-6)
logger_name: name of a logger (default: ‘nlp.lbfgs’)
post_iteration()

Bookkeeping at the end of a general iteration.

setup_linesearch(line_model, step0)

Set up linesearch for the line model with the given initial step.

By default, use an ArmijoWolfeLineSearch. Override this method to use a different line search.
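
For reference, the basic Armijo backtracking scheme that such linesearches build on looks as follows; the constants are conventional choices, not this package's defaults:

import numpy as np

def armijo_backtrack(f, x, fx, g, d, step0=1.0, ftol=1.0e-4, factor=0.5, maxbk=20):
    # Shrink t until f(x + t d) <= f(x) + ftol * t * g'd (the Armijo condition).
    t, slope = step0, g @ d  # slope < 0 for a descent direction
    for _ in range(maxbk):
        if f(x + t * d) <= fx + ftol * t * slope:
            return t
        t *= factor
    return t

f = lambda x: x @ x                        # toy quadratic
x = np.array([1.0, 1.0]); g = 2.0 * x; d = -g
print(armijo_backtrack(f, x, f(x), g, d))  # returns 0.5, the exact minimizer here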

solve()

Solve model with the L-BFGS method.

class lbfgs.WolfeLBFGS(model, **kwargs)

Bases: lbfgs.LBFGS

L-BFGS with a strong Wolfe linesearch.

__init__(model, **kwargs)

Instantiate an L-BFGS solver for model.

Parameters:
model: a QuasiNewtonModel based on InverseLBFGSOperator
Keywords:
maxiter: maximum number of iterations (default: max(10n, 1000))
atol: absolute stopping tolerance (default: 1.0e-8)
rtol: relative stopping tolerance (default: 1.0e-6)
logger_name: name of a logger (default: ‘nlp.lbfgs’)
post_iteration()

Bookkeeping at the end of a general iteration.

setup_linesearch(line_model, step0)

Set up linesearch for the line model with the given initial step.

This variant uses the strong Wolfe linesearch of Moré and Thuente.
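
For reference, the strong Wolfe conditions require the step \(\alpha\) to satisfy

\[f(x_k + \alpha d_k) \leq f(x_k) + c_1 \alpha \nabla f(x_k)^T d_k \quad \text{and} \quad |\nabla f(x_k + \alpha d_k)^T d_k| \leq c_2 |\nabla f(x_k)^T d_k|,\]

for constants \(0 < c_1 < c_2 < 1\).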

solve()

Solve model with the L-BFGS method.

Bound-Constrained Programming

The bound-constrained programming problem can be stated as

\[\min_{x \in \mathbb{R}^n} \ f(x) \quad \text{s.t.} \ x_i \geq 0 \ (i \in \mathcal{B})\]

for some smooth function \(f: \mathbb{R}^n \to \mathbb{R}\). Typically, \(f\) is required to be twice continuously differentiable.

Trust-Region Method for Bound-Constrained Programming.

A pure Python/Numpy implementation of TRON as described in

Chih-Jen Lin and Jorge J. Moré, Newton’s Method for Large Bound-Constrained Optimization Problems, SIAM J. Optim., 9(4), 1100–1127, 1999.

class tron.TRON(model, tr_solver, **kwargs)

Bases: object

Trust-region Newton method for bound-constrained problems.

__init__(model, tr_solver, **kwargs)

Instantiate a trust-region solver for a bound-constrained problem.

The model should have the general form

min f(x) subject to l ≤ x ≤ u.
Parameters:
model: a NLPModel instance.
tr_solver: a trust-region solver to be passed as argument to the TrustRegionSolver constructor.
Keywords:
x0: starting point (model.x0)
reltol: relative stopping tolerance (1.0e-5)
abstol: absolute stopping tolerance (1.0e-12)
maxiter: maximum number of iterations (max(1000, 10n))
maxfuncall: maximum number of objective function evaluations (1000)
ny: perform backtracking linesearch when trust-region step is rejected (False)
logger_name: name of a logger object that can be used in the post iteration (None)
cauchy(x, g, H, l, u, delta, alpha)

Compute a Cauchy step.

This step must satisfy a trust-region constraint and a sufficient decrease condition. The Cauchy step is computed for the quadratic

q(s) = gᵀs + ½ sᵀHs,

where H=Hᵀ and g is a vector. Given a parameter α, the Cauchy step is

s[α] = P[x - α g] - x,

with P the projection onto the box [l, u]. The Cauchy step satisfies the trust-region constraint and the sufficient decrease condition

‖s‖ ≤ Δ, q(s) ≤ μ₀ gᵀs,

where μ₀ ∈ (0, 1).
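
A minimal sketch of the projected Cauchy search, using simple halving of α where the actual implementation interpolates and extrapolates:

import numpy as np

def cauchy_step(x, g, H, l, u, delta, alpha=1.0, mu0=0.01, maxit=30):
    # Backtrack on alpha until s[alpha] = P[x - alpha g] - x satisfies
    # both ||s|| <= delta and q(s) <= mu0 * g's.
    for _ in range(maxit):
        s = np.clip(x - alpha * g, l, u) - x   # P projects onto the box [l, u]
        if np.linalg.norm(s) <= delta and g @ s + 0.5 * s @ H @ s <= mu0 * (g @ s):
            return s, alpha
        alpha *= 0.5
    return s, alpha

# Made-up data: the projection activates the lower bound on the first variable.
x = np.array([0.5, 0.0]); g = np.array([1.0, -0.2])
s, alpha = cauchy_step(x, g, np.eye(2), np.zeros(2), np.ones(2), delta=1.0)
print(s, alpha)  # s = [-0.5, 0.2], accepted at alpha = 1.0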

post_iteration(**kwargs)

Override this method to perform work at the end of an iteration.

For example, use this method for preconditioners that need updating, e.g., a limited-memory BFGS preconditioner.

precon(v, **kwargs)

Generic preconditioning method—must be overridden.

projected_linesearch(x, l, u, g, d, H, alpha=1.0)

Use a projected search to compute a satisfactory step.

This step must satisfy a sufficient decrease condition for the quadratic

q(s) = gᵀs + ½ sᵀHs,

where H=Hᵀ and g is a vector. Given the parameter α, the step is

s[α] = P[x + α d] - x,

where d is the search direction and P the projection onto the box [l, u]. The final step s = s[α] satisfies the sufficient decrease condition

q(s) ≤ μ₀ gᵀs,

where μ₀ ∈ (0, 1).

The search direction d must be a descent direction for the quadratic q at x, i.e., the quadratic must be decreasing along the ray x + α d for 0 ≤ α ≤ 1.
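
The mechanics mirror the Cauchy-step sketch above, with a general descent direction d in place of -g and no trust-region constraint; again, interpolation replaces the naive halving used here:

import numpy as np

def projected_search(x, l, u, g, d, H, alpha=1.0, mu0=0.01, maxit=30):
    # Backtrack on alpha until s = P[x + alpha d] - x gives q(s) <= mu0 * g's.
    for _ in range(maxit):
        s = np.clip(x + alpha * d, l, u) - x
        if g @ s + 0.5 * s @ H @ s <= mu0 * (g @ s):
            return s
        alpha *= 0.5
    return s

x = np.array([0.5, 0.0]); g = np.array([1.0, -0.2])
print(projected_search(x, np.zeros(2), np.ones(2), g, -g, np.eye(2)))  # [-0.5, 0.2]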

projected_newton_step(x, g, H, delta, l, u, s, cgtol, itermax)

Generate a sequence of approximate minimizers to the QP subproblem.

min q(x) subject to l ≤ x ≤ u

where q(x₀ + s) = gᵀs + ½ sᵀHs,

x₀ is a base point provided by the user, H=Hᵀ and g is a vector.

At each stage we have an approximate minimizer xₖ, and generate a direction pₖ by using a preconditioned conjugate gradient method on the subproblem

min { q(xₖ + p) : ‖p‖ ≤ Δ, p[fixed] = 0 },

where fixed is the set of variables fixed at xₖ and Δ is the trust-region bound. Given pₖ, the next minimizer is generated by a projected search.

The starting point for this subroutine is x₁ = x₀ + s, where s is the Cauchy step.

Returned status is one of the following:
info = 1: convergence. The final step s satisfies ‖(g + Hs)[free]‖ ≤ cgtol ‖g[free]‖, and the final x is an approximate minimizer in the face defined by the free variables.

info = 2: termination. The trust-region bound does not allow further progress: ‖pₖ‖ = Δ.

info = 3: failure to converge within itermax iterations.

info = 4: the trust-region solver could make no further progress on the problem, i.e., the computed step is zero. Return with the current point.
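
These codes can be collected in a lookup table for logging; the name PNEWTON_STATUS below is illustrative, not part of the module:

PNEWTON_STATUS = {
    1: "convergence: ||(g + Hs)[free]|| <= cgtol * ||g[free]||",
    2: "termination: trust-region bound blocks further progress (||p_k|| = Delta)",
    3: "failure to converge within itermax iterations",
    4: "no further progress possible: the computed step is zero",
}
print(PNEWTON_STATUS[1])
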
solve()

Solve the problem.

All keyword arguments are passed directly to the constructor of the trust-region solver.

General Nonlinear Programming

The general nonlinear programming problem can be stated as

\[\begin{split}\begin{array}{ll} \min_{x \in \mathbb{R}^n} & f(x) \\ \text{s.t.} & h(x) = 0, \\ & c(x) \geq 0, \end{array}\end{split}\]

for smooth functions \(f\), \(h\) and \(c\).
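
As in the linear and quadratic cases above, the inequality constraints can be converted to equalities plus bounds by introducing a slack vector \(s\), mirroring the slack form used for the solvers above; schematically,

\[\begin{split}\begin{array}{ll} \min_{x, s} & f(x) \\ \text{s.t.} & h(x) = 0, \\ & c(x) - s = 0, \\ & s \geq 0. \end{array}\end{split}\]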

Todo

Insert this module.