For the QuadraticProgram backend, the package supports the following Function-in-Set constraints:

| MOI Function | MOI Set |
|---|---|
| `VariableIndex` | `GreaterThan` |
| `VariableIndex` | `LessThan` |
| `VariableIndex` | `EqualTo` |
| `ScalarAffineFunction` | `GreaterThan` |
| `ScalarAffineFunction` | `LessThan` |
| `ScalarAffineFunction` | `EqualTo` |
and the following objective types:

| MOI Function |
|---|
| `VariableIndex` |
| `ScalarAffineFunction` |
| `ScalarQuadraticFunction` |
For the ConicProgram backend, the package supports the following Function-in-Set constraints:

| MOI Function | MOI Set |
|---|---|
| `VectorOfVariables` | `Nonnegatives` |
| `VectorOfVariables` | `Nonpositives` |
| `VectorOfVariables` | `Zeros` |
| `VectorOfVariables` | `SecondOrderCone` |
| `VectorOfVariables` | `PositiveSemidefiniteConeTriangle` |
| `VectorAffineFunction` | `Nonnegatives` |
| `VectorAffineFunction` | `Nonpositives` |
| `VectorAffineFunction` | `Zeros` |
| `VectorAffineFunction` | `SecondOrderCone` |
| `VectorAffineFunction` | `PositiveSemidefiniteConeTriangle` |
and the following objective types:

| MOI Function |
|---|
| `VariableIndex` |
| `ScalarAffineFunction` |
Other conic sets, such as `RotatedSecondOrderCone` and `PositiveSemidefiniteConeSquare`, are supported through bridges.
For the NonlinearProgram backend, the package supports the following Function-in-Set constraints:

| MOI Function | MOI Set |
|---|---|
| `VariableIndex` | `GreaterThan` |
| `VariableIndex` | `LessThan` |
| `VariableIndex` | `EqualTo` |
| `ScalarAffineFunction` | `GreaterThan` |
| `ScalarAffineFunction` | `LessThan` |
| `ScalarAffineFunction` | `EqualTo` |
| `ScalarQuadraticFunction` | `GreaterThan` |
| `ScalarQuadraticFunction` | `LessThan` |
| `ScalarQuadraticFunction` | `EqualTo` |
| `ScalarNonlinearFunction` | `GreaterThan` |
| `ScalarNonlinearFunction` | `LessThan` |
| `ScalarNonlinearFunction` | `EqualTo` |
and the following objective types:

| MOI Function |
|---|
| `VariableIndex` |
| `ScalarAffineFunction` |
| `ScalarQuadraticFunction` |
| `ScalarNonlinearFunction` |
`DiffOpt.NonLinearProgram` supports `MOI.VectorOfVariables`-in-`MOI.VectorNonlinearOracle` constraints through an internal bridge that rewrites the vector oracle into scalar nonlinear constraints.

At a high level, for a vector oracle:

- one scalar nonlinear operator is registered per output row $f_i(x)$
- each row is converted into one or two scalar constraints based on its bounds:
  - finite and equal ($l[i] == u[i]$): `EqualTo(l[i])`
  - finite lower bound only: `GreaterThan(l[i])`
  - finite upper bound only: `LessThan(u[i])`
  - both finite and different: one `GreaterThan` and one `LessThan`
  - infinite bounds are skipped
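The bounds-to-constraints mapping above can be sketched with a small helper. Note that `row_constraints` is a hypothetical illustration, not part of DiffOpt's API:

```julia
# Hypothetical helper mirroring the bounds-to-constraints mapping above
# for a single oracle row with bounds l[i] and u[i].
function row_constraints(l::Float64, u::Float64)
    constraints = Symbol[]
    if isfinite(l) && isfinite(u) && l == u
        push!(constraints, :EqualTo)                     # l[i] == u[i]
    else
        isfinite(l) && push!(constraints, :GreaterThan)  # finite lower bound
        isfinite(u) && push!(constraints, :LessThan)     # finite upper bound
    end
    return constraints  # empty when both bounds are infinite: row is skipped
end

row_constraints(1.0, 1.0)    # [:EqualTo]
row_constraints(0.0, Inf)    # [:GreaterThan]
row_constraints(-Inf, 1.0)   # [:LessThan]
row_constraints(0.0, 1.0)    # [:GreaterThan, :LessThan]
row_constraints(-Inf, Inf)   # Symbol[]
```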
Callback signature requirements follow MOI:

- univariate (`input_dimension == 1`): `f(x)::Real`, `∇f(x)::Real`, `∇²f(x)::Real`
- multivariate (`input_dimension > 1`): `f(x...)::Real`, `∇f(g, x...)` fills `g`, and `∇²f(H, x...)` fills the lower-triangular part of `H`
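As a sketch of the multivariate convention, here is the callback triple for the example function $f(x, y) = x^2 + xy$ (the function itself is an illustrative choice, not taken from the package):

```julia
# Multivariate callbacks in the convention described above.
f(x, y) = x^2 + x * y          # value: f(x...)::Real

function ∇f(g, x, y)           # gradient: fills g in place
    g[1] = 2x + y
    g[2] = x
    return
end

function ∇²f(H, x, y)          # Hessian: fill the lower-triangular part only
    H[1, 1] = 2.0              # ∂²f/∂x²
    H[2, 1] = 1.0              # ∂²f/∂y∂x
    H[2, 2] = 0.0              # ∂²f/∂y²
    return
end
```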
Warm-start mapping for bridged constraints:

- `ConstraintPrimalStart` expects an input vector `x` (not `f(x)`), evaluates the oracle at `x`, and writes starts for each generated scalar constraint.
- `ConstraintDualStart` accepts either:
  - length = output dimension `m`: treated as direct row duals
  - length = input dimension `n`: interpreted as `J' * λ` and converted to row duals `λ` via a least-squares solve
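The least-squares conversion in the second `ConstraintDualStart` case can be illustrated in a few lines of plain Julia (`recover_duals` is a hypothetical name; this is not the bridge's internal code):

```julia
# Given the oracle's Jacobian J ∈ R^{m×n} and a length-n dual start v = J' * λ,
# recover the row duals λ ∈ R^m via a least-squares solve.
recover_duals(J::AbstractMatrix, v::AbstractVector) = J' \ v

J = [1.0 0.0; 0.0 2.0]   # m = 2 output rows, n = 2 inputs
λ = [3.0, 4.0]           # true row duals
v = J' * λ               # length-n dual start, v = J' * λ
recover_duals(J, v)      # ≈ [3.0, 4.0]
```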
Current limitation:

- for dual starts on rows with both finite bounds (`l[i] < u[i]`), only the lower-bound side is propagated explicitly (the upper-bound side of the split is not modeled in this bridge-level helper).
Minimal JuMP example:

```julia
using JuMP, DiffOpt, Ipopt
import MathOptInterface as MOI

model = DiffOpt.nonlinear_diff_model(Ipopt.Optimizer)
@variable(model, x[1:2])
@objective(model, Min, x[1]^2 + x[2]^2)

# Oracle for f(z) = z₁² + z₂², constrained below to f(z) ≤ 1
function eval_f(ret, z)
    ret[1] = z[1]^2 + z[2]^2
    return
end
function eval_jacobian(ret, z)
    ret[1] = 2z[1]
    ret[2] = 2z[2]
    return
end
function eval_hessian_lagrangian(ret, z, μ)
    ret[1] = 2μ[1]  # (1, 1)
    ret[2] = 2μ[1]  # (2, 2)
    return
end

set = MOI.VectorNonlinearOracle(;
    dimension = 2,
    l = [-Inf],
    u = [1.0],
    eval_f,
    jacobian_structure = [(1, 1), (1, 2)],
    eval_jacobian,
    hessian_lagrangian_structure = [(1, 1), (2, 2)],
    eval_hessian_lagrangian,
)
@constraint(model, [x[1], x[2]] in set)
optimize!(model)
```

You can create a differentiable optimizer over an existing MOI solver by using the `diff_optimizer` utility.
DiffOpt requires taking projections and finding projection gradients of vectors while computing the Jacobians. For this purpose, we use MathOptSetDistances.jl, a dedicated package for computing set distances, projections, and projection gradients.
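To illustrate the two operations involved, here is a hand-written specialization to the nonnegative orthant $R^m_+$ (MathOptSetDistances.jl provides these generically for the supported cones; the function names below are illustrative, not the package's API):

```julia
using LinearAlgebra

# Euclidean projection onto the nonnegative orthant R^m_+.
project(v) = max.(v, 0.0)

# Jacobian of that projection: a diagonal 0/1 matrix. At v[i] == 0 the
# projection is nondifferentiable; we pick the subgradient with entry 0.
projection_gradient(v) = Diagonal(Float64.(v .> 0.0))

project([-1.0, 2.0])              # [0.0, 2.0]
projection_gradient([-1.0, 2.0])  # Diagonal([0.0, 1.0])
```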
!!! note
    As of now, when defining a conic or convex quadratic problem, the package uses the SCS geometric form for affine expressions in cones.
Consider a convex conic optimization problem in its primal (P) and dual (D) forms:

```math
\begin{array}{llll}
\textbf{(P)} & \min_{x} \ c^T x & \qquad \textbf{(D)} & \min_{y} \ b^T y \\
& \text{s.t.} \ Ax + s = b & & \text{s.t.} \ A^T y + c = 0 \\
& s \in \mathcal{K} & & y \in \mathcal{K}^*
\end{array}
```

where

- $x \in R^n$ is the primal variable, $y \in R^m$ is the dual variable, and $s \in R^m$ is the primal slack variable
- $\mathcal{K} \subseteq R^m$ is a closed convex cone and $\mathcal{K}^* \subseteq R^m$ is the corresponding dual cone
- $A \in R^{m \times n}$, $b \in R^m$, $c \in R^n$ are problem data
In light of the above, DiffOpt differentiates the program variables $x$, $s$, $y$ with respect to perturbations/sensitivities in the problem data, i.e., $dA$, $db$, $dc$. This is achieved via implicit differentiation and matrix differential calculus.
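The implicit-differentiation step can be sketched abstractly (this is the generic implicit function theorem, not DiffOpt's exact residual map): collect the solution as $z = (x, y, s)$ and the data as $\theta = (A, b, c)$, and write the optimality conditions as a residual $F(z, \theta) = 0$. Differentiating this identity gives the sensitivities:

```math
\frac{\partial F}{\partial z}\, dz + \frac{\partial F}{\partial \theta}\, d\theta = 0
\quad \Longrightarrow \quad
dz = -\left(\frac{\partial F}{\partial z}\right)^{-1} \frac{\partial F}{\partial \theta}\, d\theta
```

so the perturbations $dA$, $db$, $dc$ map linearly to $dx$, $dy$, $ds$ through a single linear solve at the optimal solution.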
Note that the primal (P) and dual (D) problems are duals of each other. Similarly, for the constraints we support, $\mathcal{K}$ has the same format as $\mathcal{K}^*$.
- Differentiating Through a Cone Program (Akshay Agrawal, Shane Barratt, Stephen Boyd, Enzo Busseti, Walaa M. Moursi, 2019)
- qpth: a fast and differentiable QP solver for PyTorch, by Brandon Amos and J. Zico Kolter
- OptNet: Differentiable Optimization as a Layer in Neural Networks (Brandon Amos and J. Zico Kolter)
One possible point of confusion when computing Jacobians is the role of the backward pass vector (see the discussion above eqn (7) of OptNet: Differentiable Optimization as a Layer in Neural Networks). When differentiating convex programs, we often do not need the full Jacobians themselves; instead, we want the product of a Jacobian with a backward pass vector, as used in backpropagation in machine learning/automatic differentiation. This is what the DiffOpt backends compute.
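The distinction can be seen with a linear map (plain Julia, purely illustrative): for $s(\theta) = M\theta$ the full Jacobian is $J = M$, but backpropagation only needs the transposed product $J^T \bar{v}$ for a given backward pass vector $\bar{v}$, which is a single matrix-vector multiply:

```julia
# Vector-Jacobian product for the linear map s(θ) = M * θ.
M = [1.0 2.0; 3.0 4.0; 5.0 6.0]  # Jacobian J ∈ R^{3×2} of the map
v̄ = [1.0, 0.0, 1.0]              # backward pass vector (∂loss/∂s)
vjp = M' * v̄                      # ∂loss/∂θ = J' * v̄, no Jacobian materialized
# vjp == [6.0, 8.0]
```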