Modeling in NLP.py
The general problem consists in minimizing an objective \(f(x)\) subject to general constraints and bounds:

\[
\begin{array}{ll}
\displaystyle \min_{x \in \mathbb{R}^n} & f(x) \\
\text{subject to} & g_j^L \leq g_j(x) \leq g_j^U, \quad j = 1, \ldots, m, \\
& x_k^L \leq x_k \leq x_k^U, \quad k = 1, \ldots, n,
\end{array}
\]

where some or all lower bounds \(g_j^L\) and \(x_k^L\) may be equal to \(-\infty\), and some or all upper bounds \(g_j^U\) and \(x_k^U\) may be equal to \(+\infty\).
Modeling with NLPModel Objects
NLPModel objects are the most basic type of model. They require that functions and their derivatives be coded explicitly.
The nlp Module

Abstract base classes to represent continuous optimization models.
class nlpmodel.NLPModel(n, m=0, name='Generic', **kwargs)

    Bases: object

    Abstract continuous optimization model.

    The model features methods to evaluate the objective and constraints, and
    their derivatives. Instances of the general class do not do anything
    interesting; they must be subclassed and specialized.

    __init__(n, m=0, name='Generic', **kwargs)
        Initialize a model with n variables and m constraints.

        Parameters:
        - n: number of variables
        - m: number of general (non-bound) constraints (default: 0)
        - name: model name (default: 'Generic')

        Keywords:
        - x0: initial point (default: all 0)
        - pi0: vector of initial multipliers (default: all 0)
        - Lvar: vector of lower bounds on the variables (default: all -Infinity)
        - Uvar: vector of upper bounds on the variables (default: all +Infinity)
        - Lcon: vector of lower bounds on the constraints (default: all -Infinity)
        - Ucon: vector of upper bounds on the constraints (default: all +Infinity)
    at_optimality(x, z, **kwargs)
        Check whether the KKT residuals meet the stopping conditions.

    bounds(x)
        Return the vector with components x[i] - Lvar[i] or Uvar[i] - x[i].

        Bound constraints on the problem variables are then equivalent to
        bounds(x) >= 0. The bounds are ordered as follows:

            [ lowerB | upperB | rangeB (lower) | rangeB (upper) ].
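        For example (an illustrative case): if x[0] has only the lower bound 0
        and x[1] has the range bounds 1 <= x[1] <= 2, then
        bounds(x) = [x[0] - 0, x[1] - 1, 2 - x[1]], i.e., the lower-bound
        residual first, followed by the lower and upper sides of the range
        bound.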
    complementarity(x, y, z, c=None)
        Evaluate the complementarity residuals at (x, y, z).

        If c is specified, it should conform to cons_pos() and the multipliers
        y should appear in the same order. The multipliers z should conform to
        get_bounds().

        Returns:
        - cy: complementarity residual for general constraints
        - xz: complementarity residual for bound constraints.
    compute_scaling_cons(x=None, g_max=100.0, reset=False)
        Compute constraint scaling.

        Parameters:
        - x: determine scaling by evaluating functions at this point
          (default: self.x0)
        - g_max: maximum allowed gradient (default: g_max = 1e2)
        - reset: set to True to unscale the problem.

    compute_scaling_obj(x=None, g_max=100.0, reset=False)
        Compute objective scaling.

        Parameters:
        - x: determine scaling by evaluating functions at this point
          (default: self.x0)
        - g_max: maximum allowed gradient (default: g_max = 1e2)
        - reset: set to True to unscale the problem.

        The procedure used here closely follows IPOPT's behavior; see Section
        3.8 of

            Wächter and Biegler, 'On the implementation of an interior-point
            filter line-search algorithm for large-scale nonlinear
            programming', Math. Prog. A (106), pp. 25-57, 2006,

        which is a scalar rescaling that ensures the inf-norm of the gradient
        (at x) is not larger than g_max.
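        In other words, the objective is multiplied by a scalar factor of
        roughly the form

        \[
        \sigma_f = \min\left(1, \frac{g_{\max}}{\|\nabla f(x)\|_\infty}\right),
        \]

        so that the scaled gradient satisfies the bound at the scaling point
        (a sketch of the rule; the exact expression used internally may differ
        slightly).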
    cons(x, **kwargs)
        Evaluate the vector of constraints at x.

    cons_pos(x)
        Convenience function to return constraints as nonnegative ones.

        Constraints are reformulated as

            ci(x) - ai  = 0  for i in equalC
            ci(x) - Li >= 0  for i in lowerC + rangeC
            Ui - ci(x) >= 0  for i in upperC + rangeC.

        The constraints appear in natural order, except that the 'upper side'
        of range constraints is appended to the list.

        Scaling should be applied in cons().
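        For example, a range constraint 3 <= c1(x) <= 5 contributes
        c1(x) - 3 >= 0 in its natural position, and its upper side
        5 - c1(x) >= 0 is appended after the other general constraints.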
    display_basic_info()
        Display vital statistics about the current model.

    dual_feasibility(x, y, z, g=None, J=None, **kwargs)
        Evaluate the dual feasibility residual at (x, y, z).

        The argument J, if supplied, should be a linear operator representing
        the constraints Jacobian. It should conform to either jac() or
        jac_pos() depending on the value of all_pos (see below). The
        multipliers z should conform to get_bounds().

        Keywords:
        - obj_weight: weight of the objective gradient in dual feasibility.
          Set to zero to check Fritz-John conditions instead of KKT
          conditions. (default: 1.0)
        - all_pos: if True, indicates that the multipliers y conform to
          jac_pos(). If False, y conforms to jac(). In all cases, y should be
          appropriately ordered. If the positional argument J is specified,
          it must be consistent with the layout of y. (default: True)
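        For reference, with obj_weight denoted \(\omega\), the residual is of
        the usual form

        \[
        \omega \nabla f(x) - J(x)^T y - z,
        \]

        a sketch of the standard definition; consult the source for the exact
        sign conventions applied to y and z.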
    get_stopping_tolerances(*args, **kwargs)
        Return current stopping tolerances.

    ghivprod(x, g, v, **kwargs)
        Evaluate individual dot products (g, Hi*v).

        Evaluate the vector of dot products (g, Hi*v), where Hi is the Hessian
        of the i-th constraint at x, i = 1, ..., ncon.
    grad(x, **kwargs)
        Evaluate the objective gradient at x.

    hess(x, z=None, **kwargs)
        Evaluate the Lagrangian Hessian at (x, z).

    hiprod(i, x, p, **kwargs)
        Constraint Hessian-vector product.

        Evaluate the matrix-vector product between the Hessian of the i-th
        constraint at x and p.

    hop(x, z=None, **kwargs)
        Obtain the Lagrangian Hessian at (x, z) as a linear operator.

    hprod(x, z, p, **kwargs)
        Hessian-vector product.

        Evaluate the matrix-vector product between the Hessian of the
        Lagrangian at (x, z) and p.

    icons(i, x, **kwargs)
        Evaluate the i-th constraint at x.

    igrad(i, x, **kwargs)
        Evaluate the i-th dense constraint gradient at x.
    jac(x, **kwargs)
        Evaluate the constraints Jacobian at x.

    jac_pos(x, **kwargs)
        Evaluate the Jacobian of cons_pos() at x.

    jop(x)
        Obtain the Jacobian at x as a linear operator.

    jop_pos(x)
        Jacobian of cons_pos() at x as a linear operator.

    jprod(x, p, **kwargs)
        Evaluate the Jacobian-vector product at x with p.

    jtprod(x, p, **kwargs)
        Evaluate the transposed-Jacobian-vector product at x with p.
    kkt_residuals(x, y, z, c=None, g=None, J=None, **kwargs)
        Compute the first-order residuals.

        There is no check on the sign of the multipliers unless check is set
        to True. Keyword arguments not specified below are passed directly to
        primal_feasibility(), dual_feasibility() and complementarity().

        If J is specified, it should conform to jac_pos() and the multipliers
        y should be consistent with the Jacobian.

        Keywords:
        - check: check sign of multipliers.

        Returns:
        - kkt: KKT residuals as a KKTresidual instance.

    lag(x, z, **kwargs)
        Evaluate the Lagrangian at (x, z).

        The constraints and bounds are assumed to be ordered as in cons_pos()
        and bounds().
    lin
        Return the indices of linear constraints.

    m
        Number of constraints (excluding bounds).

    n
        Number of variables.

    name
        Problem name.

    ncon
        Number of constraints (excluding bounds).

    net
        Return the indices of network constraints.

    nlin
        Number of linear constraints.

    nln
        Return the indices of nonlinear constraints.

    nnet
        Number of network constraints.

    nnln
        Number of nonlinear constraints.

    nvar
        Number of variables.
    obj(x, **kwargs)
        Evaluate the objective function at x.

    primal_feasibility(x, c=None)
        Evaluate the primal feasibility residual at x.

        If c is given, it should conform to cons_pos().

    set_stopping_tolerances(*args, **kwargs)
        Set stopping tolerances.

    sigrad(i, x, **kwargs)
        Evaluate the i-th sparse constraint gradient at x.

    stop_c
        Tolerance on complementarity.

    stop_d
        Tolerance on dual feasibility.

    stop_p
        Tolerance on primal feasibility.
Example
Todo
Insert example.
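In the meantime, here is a minimal sketch of a hand-coded model: a two-variable quadratic objective with one linear equality constraint and nonnegativity bounds. The import path and the set of methods overridden below follow the interface documented above; they are illustrative and may need to be adapted to your installation.

# -*- coding: utf-8 -*-
"""Minimal NLPModel subclass: functions and derivatives coded explicitly."""
import numpy as np
from nlp.model.nlpmodel import NLPModel


class SimpleQP(NLPModel):
    """Minimize (x0-1)^2 + (x1-2)^2 subject to x0 + x1 = 1 and x >= 0."""

    def __init__(self):
        super(SimpleQP, self).__init__(n=2, m=1, name='SimpleQP',
                                       x0=np.zeros(2),
                                       Lvar=np.zeros(2),
                                       Lcon=np.array([1.0]),
                                       Ucon=np.array([1.0]))

    def obj(self, x, **kwargs):
        return (x[0] - 1.0)**2 + (x[1] - 2.0)**2

    def grad(self, x, **kwargs):
        return np.array([2.0 * (x[0] - 1.0), 2.0 * (x[1] - 2.0)])

    def cons(self, x, **kwargs):
        return np.array([x[0] + x[1]])

    def jac(self, x, **kwargs):
        return np.array([[1.0, 1.0]])

    def hess(self, x, z=None, **kwargs):
        # The constraint is linear, so the Lagrangian Hessian reduces to
        # the Hessian of the objective.
        return 2.0 * np.eye(2)


model = SimpleQP()
print(model.obj(model.x0))    # 5.0 at the default initial point x0 = (0, 0)
print(model.grad(model.x0))   # [-2. -4.]
print(model.cons(model.x0))   # [ 0.]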
Models in the AMPL Modeling Language
AmplModel objects are convenient in that the model is written in the AMPL modeling language, and the AMPL Solver Library takes care of evaluating derivatives for us.
See the AMPL home page for more information on the AMPL algebraic modeling language.
The amplpy Module

Python interface to the AMPL modeling language.
class amplmodel.AmplModel(stub, **kwargs)

    Bases: nlp.model.nlpmodel.NLPModel

    AmplModel creates an instance of an AMPL model.
    If the nl file is already available, simply call AmplModel(stub), where
    the string stub is the name of the model. For instance: AmplModel('elec').
    If only the .mod file is available, set the keyword parameter neednl to
    True so AMPL generates the nl file, as in
    AmplModel('elec.mod', data='elec.dat', neednl=True).
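    For example (the model names are those used above; the corresponding
    files must exist on disk):

    from nlp.model.amplmodel import AmplModel

    # The .nl file already exists next to the stub name.
    model = AmplModel('elec')

    # Only the .mod and .dat files exist; ask AMPL to generate the .nl file.
    model = AmplModel('elec.mod', data='elec.dat', neednl=True)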
    Among important attributes of this class are nvar, the number of
    variables, ncon, the number of constraints, and nbounds, the number of
    variables subject to at least one bound constraint.
    A(*args, **kwargs)
        Evaluate the sparse Jacobian of the linear part of the constraints.

        Useful to obtain the constraint matrix when the problem is a linear
        programming problem.

    __init__(stub, **kwargs)

    cons(x)
        Evaluate the vector of constraints at x.

        Returns a Numpy array. The constraints appear in natural order. To
        order them as follows

        - equalities
        - lower bound only
        - upper bound only
        - range constraints,

        use the permC permutation vector.

    cost()
        Evaluate the sparse cost vector.

        Useful when the problem is a linear program. Returns a sparse vector.
        This method changes the sign of the cost vector if the problem is a
        maximization problem.
    display_basic_info()
        Display vital statistics about the current model.

    get_pi0()

    ghivprod(x, g, v, **kwargs)
        Evaluate individual dot products (g, Hi(x)*v).

        Evaluate the vector of dot products (g, Hi(x)*v), where Hi(x) is the
        Hessian of the i-th constraint at the point x, i = 1, ..., m.

    grad(x, obj_num=0)
        Evaluate the objective gradient at x.

        Returns a Numpy array. This method changes the sign of the objective
        gradient if the problem is a maximization problem.

    hess(x, z=None, obj_num=0, *args, **kwargs)
        Evaluate the Hessian.

        Evaluate the sparse lower triangular Lagrangian Hessian at (x, z).
        By convention, the Lagrangian has the form L = f - c'z.
    hiprod(x, i, v, **kwargs)
        Constraint Hessian-vector product.

        Returns a Numpy array. Bug: x is ignored. See hprod below.

    hprod(x, z, v, **kwargs)
        Hessian-vector product.

        Evaluate the matrix-vector product H(x, z) * v, where H is the
        Hessian of the Lagrangian evaluated at the primal-dual pair (x, z).
        Zero multipliers can be specified as an array of zeros or as None.

        Returns a Numpy array.

        Bug: x is ignored, and is determined as the point at which the
        objective or gradient were last evaluated.

        Keywords:
        - obj_weight: add a weight to the Hessian of the objective function.
          By default, the weight is one. Setting it to zero allows one to
          exclude the Hessian of the objective from the Hessian of the
          Lagrangian.

    icons(i, x)
        Evaluate the value of the i-th constraint at x.

        Returns a floating-point number.
    igrad(i, x)
        Evaluate the dense gradient of the i-th constraint at x.

        Returns a Numpy array.

    irow(i)
        Evaluate the sparse gradient of the linear part of the i-th
        constraint.

        Useful to obtain constraint rows when the problem is a linear
        programming problem.

    islp()
        Determine whether the problem is a linear programming problem.

    jac(x, *args, **kwargs)
        Evaluate the sparse Jacobian of the constraints at x.

        Returns a sparse matrix in coordinate format.
    jac_pos(x, **kwargs)
        Convenience function to evaluate the Jacobian matrix of the
        constraints reformulated as

            ci(x) = ai       for i in equalC
            ci(x) - Li >= 0  for i in lowerC
            ci(x) - Li >= 0  for i in rangeC
            Ui - ci(x) >= 0  for i in upperC
            Ui - ci(x) >= 0  for i in rangeC.

        The gradients of the general constraints appear in 'natural' order,
        i.e., in the order in which they appear in the problem. The gradients
        of range constraints appear in two places: first in the 'natural'
        location and again after all other general constraints, with a
        flipped sign to account for the upper bound on those constraints.

        The overall Jacobian of the new constraints thus has the form

            [ J ]
            [-JR]

        This is a (m + nrangeC)-by-n matrix, where J is the Jacobian of the
        general constraints in the order above, in which the sign of the
        'less than' constraints is flipped, and JR is the Jacobian of the
        'less than' side of range constraints.

    jop(x, *args, **kwargs)
        Jacobian at x as a linear operator.
    jprod(x, p, **kwargs)
        Evaluate the Jacobian-vector product at x with p.

    jtprod(x, p, **kwargs)
        Evaluate the transposed-Jacobian-vector product at x with p.

    obj(x, obj_num=0)
        Evaluate the objective function value at x.

        Returns a floating-point number. This method changes the sign of the
        objective value if the problem is a maximization problem.
    set_x(x)
        Freeze independent variables.

        Set x as the current value for subsequent calls to obj(), grad(),
        jac(), etc. If several of obj(), grad(), jac(), ..., will be called
        with the same argument x, it may be more efficient to first call
        set_x(x). In AMPL, obj(), grad(), etc., normally check whether their
        argument has changed since the last call. Calling set_x() skips this
        check.

        See also unset_x().
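        For example (a sketch of the intended usage; the stub name is the
        illustrative one used elsewhere on this page):

        import numpy as np
        from nlp.model.amplmodel import AmplModel

        model = AmplModel('elec')
        x = np.array(model.x0)

        model.set_x(x)       # freeze x and skip the argument-change checks
        f = model.obj(x)
        g = model.grad(x)
        J = model.jac(x)
        model.unset_x()      # restore the default behavior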
    sgrad(x)
        Evaluate the sparse objective gradient at x.

        Returns a sparse vector. This method changes the sign of the
        objective gradient if the problem is a maximization problem.

    sigrad(i, x)
        Evaluate the sparse gradient of the i-th constraint at x.

        Returns a sparse vector representing the sparse gradient in
        coordinate format.

    unset_x()
        Release independent variables.

        Reinstates the default behavior of obj(), grad(), jac(), etc., which
        is to check whether their argument has changed since the last call.

        See also set_x().

    writesol(x, z, msg)
        Write the primal-dual solution and the message msg to stub.sol.
Example
# -*- coding: utf-8 -*-
"""Test for amplmodel module."""
from nlp.model.amplmodel import AmplModel
import numpy as np
import sys
nargs = len(sys.argv)
if nargs < 2:
sys.stderr.write('Please specify problem name')
exit(1)
problem_name = sys.argv[1]
# Create a model
print 'Problem', problem_name
model = AmplModel(problem_name)
# Query the model
x0 = model.x0
pi0 = model.pi0
nvar = model.nvar
ncon = model.ncon
print 'There are %d variables and %d constraints' % (nvar, ncon)
np.set_printoptions(precision=3, linewidth=79, threshold=10, edgeitems=3)
print 'Initial point: ', x0
print 'Lower bounds on x: ', model.Lvar
print 'Upper bounds on x: ', model.Uvar
print 'f(x0) = ', model.obj(x0)
g0 = model.grad(x0)
print '∇f(x0) = ', g0
if ncon > 0:
print 'Initial multipliers: ', pi0
print 'Lower constraint bounds: ', model.Lcon
print 'Upper constraint bounds: ', model.Ucon
print 'c(x0) = ', model.cons(x0)
jvals, jrows, jcols = model.jac(x0)
hvals, hrows, hcols = model.hess(x0, pi0)
print
print ' nnzJ = ', len(jvals)
print ' nnzH = ', len(hvals)
print 'J(x0) = (in coordinate format)'
print 'vals: ', jvals
print 'rows: ', jrows
print 'cols: ', jcols
print 'Hessian (lower triangle in coordinate format):'
print 'vals: ', hvals
print 'rows: ', hrows
print 'cols: ', hcols
if ncon > 0:
print
print ' Evaluating constraints individually, sparse gradients'
print
for i in range(min(ncon, 5)):
ci = model.icons(i, x0)
print 'c%d(x0) = %-g' % (i, ci)
sgi = model.sigrad(i, x0)
k = sgi.keys()
ssgi = {}
for j in range(min(5, len(k))):
ssgi[k[j]] = sgi[k[j]]
print '∇c%d(x0) = ' % i, ssgi
print
print ' Testing matrix-vector product:'
print
e = np.ones(nvar)
He = model.hprod(x0, pi0, e)
print 'He = ', He
print
print ' Testing objective scaling:'
print
print 'Maximum/Minimum gradient (unscaled): %12.5e / %12.5e' \
% (max(abs(g0)), min(abs(g0)))
model.compute_scaling_obj() # default is to use x0
g = model.grad(x0)
print 'Maximum/Minimum gradient ( scaled): %12.5e / %12.5e' \
% (max(abs(g)), min(abs(g)))
model.compute_scaling_obj(reset=True)
g = model.grad(x0)
print '... after a reset ...'
print 'Maximum/Minimum gradient (unscaled): %12.5e / %12.5e' \
% (max(abs(g)), min(abs(g)))
print
print ' Testing constraint scaling:'
print
for i in xrange(min(ncon, 5)):
model.compute_scaling_cons(reset=True)
sgi = model.sigrad(i, x0)
imax = max(sgi.keys(), key=lambda k: abs(sgi[k]))
imin = min(sgi.keys(), key=lambda k: abs(sgi[k]))
print 'Constraint %3i: ' % i,
print ' Max/Min gradient (unscaled): %12.5e (%3i) / %12.5e (%3i)' \
% (sgi[imax], imax, sgi[imin], imin)
model.compute_scaling_cons()
sgi = model.sigrad(i, x0)
imax = max(sgi.keys(), key=lambda k: abs(sgi[k]))
imin = min(sgi.keys(), key=lambda k: abs(sgi[k]))
print 'Constraint %3i: ' % i,
print ' Max/Min gradient ( scaled): %12.5e (%3i) / %12.5e (%3i)' \
% (sgi[imax], imax, sgi[imin], imin)
# Output "solution"
model.writesol(x0, pi0, 'And the winner is')
Python Models with Automatic Differentiation
The adolcmodel Module

Models where derivatives are computed by ADOL-C.
class adolcmodel.AdolcModel(n, m=0, name='Adolc-Generic', **kwargs)

    Bases: nlp.model.nlpmodel.NLPModel

    Model with derivatives computed by ADOL-C.

    A class to represent optimization problems in which derivatives are
    computed via algorithmic differentiation through ADOL-C. By default, the
    Jacobian and Hessian are returned in dense format. See the documentation
    of NLPModel for further information.

    __init__(n, m=0, name='Adolc-Generic', **kwargs)
        Initialize a model with n variables and m constraints.

    con_trace_id
        Return the trace id for the constraints.

    cons_pos_trace_id
        Return the trace id for the reformulated constraints.

    grad(x, **kwargs)
        Evaluate the objective gradient at x.

    hess(x, z=None, **kwargs)
        Return the dense Hessian of the objective at x.

    hprod(x, z, v, **kwargs)
        Return the Hessian-vector product at (x, z) with v.

    jac(x, **kwargs)
        Return the dense constraints Jacobian at x.

    jac_pos(x, **kwargs)
        Return the dense Jacobian of the reformulated constraints at x.

    jprod(x, v, **kwargs)
        Return the product of v with the Jacobian at x.

    jtprod(x, v, **kwargs)
        Return the product of v with the transpose Jacobian at x.

    lag_trace_id
        Return the trace id for the Lagrangian.

    obj_trace_id
        Return the trace id for the objective function.
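As an illustration, a model written this way only codes the functions themselves; first and second derivatives come from ADOL-C. The sketch below assumes, as with NLPModel, that the objective is supplied by overriding obj(); the import path is illustrative.

# -*- coding: utf-8 -*-
"""Sketch of a model whose derivatives are computed by ADOL-C."""
from nlp.model.adolcmodel import AdolcModel


class AdolcRosenbrock(AdolcModel):
    """Unconstrained two-variable Rosenbrock function."""

    def obj(self, x, **kwargs):
        return 100.0 * (x[1] - x[0]**2)**2 + (1.0 - x[0])**2


model = AdolcRosenbrock(2, name='Rosenbrock')
print(model.obj(model.x0))    # objective value at the initial point
print(model.grad(model.x0))   # gradient obtained by automatic differentiation
print(model.hess(model.x0))   # dense Hessian, also by automatic differentiation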
The cppadmodel Module

Models where derivatives are computed by CppAD.
class cppadmodel.CppADModel(n, m=0, name='CppAD-Generic', **kwargs)

    Bases: nlp.model.nlpmodel.NLPModel

    Model with derivatives computed by CppAD.

    A class to represent optimization problems in which derivatives are
    computed via algorithmic differentiation through CppAD. See the
    documentation of NLPModel for further information.

    __init__(n, m=0, name='CppAD-Generic', **kwargs)
        Initialize a model with n variables and m constraints.

        Parameters:
        - n: number of variables
        - m: number of general (non-bound) constraints (default: 0)
        - name: model name (default: 'Generic')

    grad(x, **kwargs)
        Return the objective gradient at x.

    hess(x, z=None, **kwargs)
        Return the Hessian of the Lagrangian at (x, z).

    hprod(x, z, v, **kwargs)
        Return the Hessian-vector product at x with v.

    jac(x, **kwargs)
        Return the constraints Jacobian at x.

    jprod(x, v, **kwargs)
        Return the product of v with the Jacobian at x.

    jtprod(x, v, **kwargs)
        Return the product of v with the transpose Jacobian at x.
The algopymodel Module

Models with derivatives computed by AlgoPy.

Because AlgoPy does not support fancy indexing, it is necessary to formulate constraints in the form

    ci(x)  = 0  for i in equalC
    ci(x) >= 0  for i in lowerC.
class algopymodel.AlgopyModel(n, m=0, name='Algopy-Generic', **kwargs)

    Bases: nlp.model.nlpmodel.NLPModel

    Model with derivatives computed by AlgoPy.

    A class to represent optimization problems in which derivatives are
    computed via algorithmic differentiation through AlgoPy. AlgoPy only
    supplies dense derivatives. See the documentation of NLPModel for further
    information.

    __init__(n, m=0, name='Algopy-Generic', **kwargs)
        Initialize a model with n variables and m constraints.

        Parameters:
        - n: number of variables (default: 0)
        - m: number of general (non-bound) constraints (default: 0)
        - name: model name (default: 'Generic')

    cg_cons
        Constraint call graph.

    cg_lag
        Lagrangian call graph.

    cg_obj
        Objective function call graph.

    grad(x, **kwargs)
        Evaluate the objective gradient at x.

    hess(x, z=None, **kwargs)
        Return the Hessian of the objective at x.

    hprod(x, z, v, **kwargs)
        Return the Hessian-vector product at x with v.

    jac(x, **kwargs)
        Return the constraints Jacobian at x.

    jac_pos(x, **kwargs)
        Return the reformulated constraints Jacobian at x.

    jprod(x, v, **kwargs)
        Return the Jacobian-vector product at x with v.

    jtprod(x, v, **kwargs)
        Return the transpose-Jacobian-vector product at x with v.

    lag(x, z, **kwargs)
        Evaluate the Lagrangian at (x, z).

        The constraints and bounds are assumed to be ordered as in cons_pos()
        and bounds().
Reformulating Models
The snlp Module

A slack framework for NLP.py.
class snlp.SlackModel(model, **kwargs)

    Bases: nlp.model.nlpmodel.NLPModel

    General framework for converting a nonlinear optimization problem to a
    form using slack variables.

    The original problem

        cᴸ ≤ c(x)
        c(x) ≤ cᵁ
        cᴿᴸ ≤ c(x) ≤ cᴿᵁ
        c(x) = cᴱ
        l ≤ x ≤ u

    is transformed to

        c(x) - sᴸ = 0
        c(x) - sᵁ = 0
        c(x) - sᴿ = 0
        c(x) - cᴱ = 0
        cᴸ ≤ sᴸ
        sᵁ ≤ cᵁ
        cᴿᴸ ≤ sᴿ ≤ cᴿᵁ
        l ≤ x ≤ u

    In the latter problem, the only inequality constraints are bounds on the
    slack and original variables. The other constraints are (typically)
    nonlinear equalities.

    The order of variables in the transformed problem is as follows:

    - x, the original problem variables
    - sᴸ, the slack variables corresponding to general constraints with a
      lower bound only
    - sᵁ, the slack variables corresponding to general constraints with an
      upper bound only
    - sᴿ, the slack variables corresponding to general constraints with both
      a lower and an upper bound.

    This framework initializes the slack variables sᴸ and sᵁ to zero by
    default.

    Note that the slack framework does not update all members of NLPModel,
    such as the index set of constraints with an upper bound, etc., but
    rather performs the evaluations of the constraints for the updated model
    implicitly.
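    For example, a single range constraint 1 ≤ c1(x) ≤ 2 of the original
    model becomes the equality c1(x) - s1 = 0 together with the bounds
    1 ≤ s1 ≤ 2 on the new slack variable s1.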
    A()
        Return the constraint matrix if the problem is a linear program.

        See the documentation of jac() for more information.

    __init__(model, **kwargs)
        Initialize a slack form of an NLPModel.

        Parameters:
        - model: original model to be transformed into a slack form.

    cons(x)
        Evaluate the vector of constraints at x.

        Constraints are stored in the order in which they appear in the
        original problem.

    ghivprod(x, g, v, **kwargs)
        Evaluate individual dot products (g, Hi(x)*v).

        Evaluate the vector of dot products (g, Hi(x)*v), where Hi(x) is the
        Hessian of the i-th constraint at the point x, i = 1, ..., m.

    grad(x)
        Evaluate the objective gradient at x.

        This function is specialized since the original objective function
        only depends on a subvector of x.

    hess(x, z=None, *args, **kwargs)
        Evaluate the Lagrangian Hessian at (x, z).

    hprod(x, y, v, **kwargs)
        Hessian-vector product.

        Evaluate the matrix-vector product between the Hessian of the
        Lagrangian at (x, y) and v.

    initialize_slacks(val=0.0, **kwargs)
        Initialize all slack variables to the given value.

        This method may need to be overridden.
    jac(x)
        Evaluate the constraints Jacobian at x.

        The gradients of the general constraints appear in 'natural' order,
        i.e., in the order in which they appear in the problem.

        The overall Jacobian of the constraints has the form

            [ J  -I ]

        where the columns correspond to the variables x and s, and the rows
        correspond to the general constraints (in natural order).

    jprod(x, v, **kwargs)
        Evaluate the Jacobian-vector product at x with v.

        See the documentation of jac() for more details on how the
        constraints are ordered.

    jtprod(x, v, **kwargs)
        Evaluate the transposed-Jacobian-vector product at x with v.

        See the documentation of jac() for more details on how the
        constraints are ordered.

    obj(x)
        Evaluate the objective function at x.

        This function is specialized since the original objective function
        only depends on a subvector of x.
Example
Todo
Insert example.
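In the meantime, here is a minimal usage sketch: wrap an existing model in a SlackModel and query the reformulated problem. The import paths and the stub name are illustrative and may need to be adapted to your installation.

# -*- coding: utf-8 -*-
"""Sketch of wrapping an existing model in a SlackModel."""
from nlp.model.amplmodel import AmplModel
from nlp.model.snlp import SlackModel

original = AmplModel('elec')        # illustrative stub name used above
slack = SlackModel(original)

# The slack form appends one slack variable per inequality general
# constraint, so it has at least as many variables as the original model.
print(original.nvar)
print(slack.nvar)

# All general constraints of the slack form are equalities; see cons()
# and jac() above for the ordering and the [ J  -I ] structure.
print(slack.cons(slack.x0))
print(slack.jac(slack.x0))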