# FastPow.jl: Optimal Addition-chain Exponentiation for Julia

## FastPow

This package provides a macro `@fastpow` that can speed up the computation of integer powers in any Julia expression by transforming them into optimal sequences of multiplications, with a slight sacrifice in accuracy compared to Julia's built-in `x^n` function. It also optimizes powers of the form `1^p`, `(-1)^p`, `2^p`, and `10^p`.

In particular, it uses optimal addition-chain exponentiation for (literal) integer powers up to 255, and for larger powers uses repeated squaring to first reduce the power to ≤ 255 and then uses addition chains.
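
As a sketch of that large-power reduction (my hand-worked arithmetic, not the package's actual code): a power like `z^600` can first be rewritten as `(z^8)^75` by squaring the base three times, leaving an exponent ≤ 255 for the addition-chain step.

```julia
# Hypothetical illustration of reducing a large exponent: 600 = 8 * 75,
# so z^600 == (z^8)^75, and 75 <= 255 can then be handled by an addition chain.
z = 1.01
w = z * z    # z^2
w = w * w    # z^4
w = w * w    # z^8
# `w^75` would then be evaluated via an optimal addition chain.
result = w^75
```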

For example, `@fastpow z^25` requires 6 multiplications, and for `z = 0.73` it gives the correct answer to a relative error of `≈ 1.877e-15` (about 8 ulps), vs. the default `z^25` which gives the correct answer to a relative error of `≈ 6.03e-16` (about 3 ulps) but is about 10× slower.
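
One such 6-multiplication chain for `z^25` (a hand-worked example; `@fastpow` may pick a different chain of the same length) follows the exponent sequence 1 → 2 → 4 → 8 → 16 → 24 → 25:

```julia
# A 6-multiplication addition chain for z^25.
z = 0.73
z2  = z   * z    # z^2   (1st multiply)
z4  = z2  * z2   # z^4   (2nd)
z8  = z4  * z4   # z^8   (3rd)
z16 = z8  * z8   # z^16  (4th)
z24 = z16 * z8   # z^24  (5th)
z25 = z24 * z    # z^25  (6th)
```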

Note that you can apply the `@fastpow` macro to a whole block of Julia code at once. For example,

```julia
@fastpow function foo(x,y)
    z = sin(x)^3 + sqrt(y)
    return z^7 - 4x^5.3 + 3y^12
end
```

applies the `@fastpow` transformation to every literal integer exponent (`^3`, `^7`, and `^12`) in the function `foo`.

An alternative to `@fastpow` is to use Julia's built-in `@fastmath` macro, which enables various LLVM optimizations including, in some cases, faster integer powers using repeated multiplication. The advantages of `@fastpow` are that it guarantees optimal addition-chain exponentiation and that it works for exponentiating any Julia type (e.g. complex numbers, matrices, …), whereas LLVM will only optimize a small set of hardware numeric types.

Author: JuliaMath
Source Code: https://github.com/JuliaMath/FastPow.jl

## Gurobi.jl

Gurobi.jl is a wrapper for the Gurobi Optimizer.

It has two components:

- a thin wrapper around the complete C API
- an interface to MathOptInterface

The C API can be accessed via `Gurobi.GRBxx` functions, where the names and arguments are identical to the C API. See the Gurobi documentation for details.

Note: This wrapper is maintained by the JuMP community and is not officially supported by Gurobi. However, we thank Gurobi for providing us with a license to test Gurobi.jl on GitHub. If you are a commercial customer interested in official support for Gurobi in Julia, let them know!

## Installation

Minimum version requirement: Gurobi.jl requires Gurobi version 9.0, 9.1, or 9.5.

First, obtain a license for Gurobi and install the Gurobi solver, following the instructions on Gurobi's website. Then, set the `GUROBI_HOME` environment variable as appropriate and run `Pkg.add("Gurobi")`, then `Pkg.build("Gurobi")`. For example:

```julia
# On Windows, this might be
ENV["GUROBI_HOME"] = "C:\\Program Files\\gurobi950\\win64"
# ... or perhaps ...
ENV["GUROBI_HOME"] = "C:\\gurobi950\\win64"
import Pkg
Pkg.build("Gurobi")

# On Mac, this might be
ENV["GUROBI_HOME"] = "/Library/gurobi950/mac64"
import Pkg
Pkg.build("Gurobi")
```

Note: your path may differ. Check which folder you installed Gurobi in, and update the path accordingly.

By default, `build`ing Gurobi.jl will fail if the Gurobi library is not found. This may not be desirable in certain cases, for example when part of a package's test suite uses Gurobi as an optional test dependency, but Gurobi cannot be installed on a CI server running the test suite. To support this use case, the `GUROBI_JL_SKIP_LIB_CHECK` environment variable may be set (to any value) to make Gurobi.jl installable (but not usable).
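As a hedged sketch of that use case (any value works; the variable only needs to be present), a CI script might set the variable before building:

```julia
# Hypothetical CI sketch: with GUROBI_JL_SKIP_LIB_CHECK set to any value,
# `Pkg.build("Gurobi")` succeeds even when no Gurobi library is installed.
ENV["GUROBI_JL_SKIP_LIB_CHECK"] = "true"
import Pkg
# Pkg.build("Gurobi")  # would now build (but the package remains unusable)
```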

## Use with JuMP

We highly recommend that you use the Gurobi.jl package with higher level packages such as JuMP.jl.

This can be done using the `Gurobi.Optimizer` object. Here is how to create a JuMP model that uses Gurobi as the solver.

```julia
using JuMP, Gurobi

model = Model(Gurobi.Optimizer)
set_optimizer_attribute(model, "TimeLimit", 100)
set_optimizer_attribute(model, "Presolve", 0)
```

See the Gurobi Documentation for a list and description of allowable parameters.

## Reusing the same Gurobi environment for multiple solves

When using this package via other packages such as JuMP.jl, the default behavior is to obtain a new Gurobi license token every time a model is created. If you are using Gurobi in a setting where the number of concurrent Gurobi uses is limited (e.g. "Single-Use" or "Floating-Use" licenses), you might instead prefer to obtain a single license token that is shared by all models that your program solves. You can do this by passing a Gurobi Environment object as the first parameter to `Gurobi.Optimizer`. For example, the following code snippet solves multiple problems with JuMP using the same license token:

```julia
using JuMP, Gurobi

const GRB_ENV = Gurobi.Env()

model1 = Model(() -> Gurobi.Optimizer(GRB_ENV))

# The solvers can have different options too
model2 = Model(() -> Gurobi.Optimizer(GRB_ENV))
set_optimizer_attribute(model2, "OutputFlag", 0)
```

## Accessing Gurobi-specific attributes via JuMP

You can get and set Gurobi-specific variable, constraint, and model attributes via JuMP as follows:

```julia
using JuMP, Gurobi

model = direct_model(Gurobi.Optimizer())
@variable(model, x >= 0)
@constraint(model, c, 2x >= 1)
@objective(model, Min, x)
MOI.set(model, Gurobi.ConstraintAttribute("Lazy"), c, 2)
optimize!(model)

MOI.get(model, Gurobi.VariableAttribute("LB"), x)   # Returns 0.0
MOI.get(model, Gurobi.ModelAttribute("NumConstrs")) # Returns 1
```

Note that we are using JuMP in direct-mode.

A complete list of supported Gurobi attributes can be found in their online documentation.

## Callbacks

Here is an example using Gurobi's solver-specific callbacks.

```julia
using JuMP, Gurobi, Test

model = direct_model(Gurobi.Optimizer())
@variable(model, 0 <= x <= 2.5, Int)
@variable(model, 0 <= y <= 2.5, Int)
@objective(model, Max, y)
cb_calls = Cint[]
function my_callback_function(cb_data, cb_where::Cint)
    # You can reference variables outside the function as normal
    push!(cb_calls, cb_where)
    # You can select where the callback is run
    if cb_where != GRB_CB_MIPSOL && cb_where != GRB_CB_MIPNODE
        return
    end
    # You can query a callback attribute using GRBcbget
    if cb_where == GRB_CB_MIPNODE
        resultP = Ref{Cint}()
        GRBcbget(cb_data, cb_where, GRB_CB_MIPNODE_STATUS, resultP)
        if resultP[] != GRB_OPTIMAL
            return  # Solution is something other than optimal.
        end
    end
    # Before querying `callback_value`, you must call:
    Gurobi.load_callback_variable_primal(cb_data, cb_where)
    x_val = callback_value(cb_data, x)
    y_val = callback_value(cb_data, y)
    # You can submit solver-independent MathOptInterface attributes such as
    # lazy constraints, user-cuts, and heuristic solutions.
    if y_val - x_val > 1 + 1e-6
        con = @build_constraint(y - x <= 1)
        MOI.submit(model, MOI.LazyConstraint(cb_data), con)
    elseif y_val + x_val > 3 + 1e-6
        con = @build_constraint(y + x <= 3)
        MOI.submit(model, MOI.LazyConstraint(cb_data), con)
    end
    if rand() < 0.1
        # You can terminate the callback as follows:
        GRBterminate(backend(model))
    end
    return
end
# You _must_ set this parameter if using lazy constraints.
MOI.set(model, MOI.RawOptimizerAttribute("LazyConstraints"), 1)
MOI.set(model, Gurobi.CallbackFunction(), my_callback_function)
optimize!(model)
@test termination_status(model) == MOI.OPTIMAL
@test primal_status(model) == MOI.FEASIBLE_POINT
@test value(x) == 1
@test value(y) == 2
```

See the Gurobi documentation for other information that can be queried with `GRBcbget`.

### Common Performance Pitfall with JuMP

Gurobi's API works differently than most solvers. Any changes to the model are not applied immediately, but instead sit in an internal buffer (making any modifications appear to be instantaneous) waiting for a call to `GRBupdatemodel` (where the work is done).

This leads to a common performance pitfall that has the following message as its main symptom:

```
Warning: excessive time spent in model updates. Consider calling update less
frequently.
```

This often means the JuMP program was structured in such a way that Gurobi.jl ends up calling `GRBupdatemodel` in each iteration of a loop. Usually, it is possible (and easy) to restructure the JuMP program so that it stays solver-agnostic and achieves close-to-ideal performance with Gurobi.

To guide such restructuring it is good to keep in mind the following bits of information:

1. `GRBupdatemodel` is only called if changes were made since the last `GRBupdatemodel` (i.e., the internal buffer is not empty).
2. `GRBupdatemodel` is called when `JuMP.optimize!` is called, but this is often not the source of the problem.
3. `GRBupdatemodel` may be called when ANY model attribute is queried, even if that specific attribute was not changed, and this is often the source of the problem.
4. The worst-case scenario is, therefore, a loop of modify-query-modify-query, even if what is being modified and what is being queried are two completely distinct things.

As an example, prefer:

```julia
# GOOD
model = Model(Gurobi.Optimizer)
@variable(model, x[1:100] >= 0)
# All modifications are done before any queries.
for i = 1:100
    set_upper_bound(x[i], i)
end
for i = 1:100
    # Only the first `lower_bound` query may trigger a `GRBupdatemodel`.
    println(lower_bound(x[i]))
end
```

to:

```julia
# BAD
model = Model(Gurobi.Optimizer)
@variable(model, x[1:100] >= 0)
for i = 1:100
    set_upper_bound(x[i], i)
    # `GRBupdatemodel` is called on each iteration of this loop.
    println(lower_bound(x[i]))
end
```

## Using Gurobi v9.0 and getting an error like `Q not PSD`?

You need to set the NonConvex parameter:

```julia
model = Model(Gurobi.Optimizer)
set_optimizer_attribute(model, "NonConvex", 2)
```

Author: jump-dev
Source Code: https://github.com/jump-dev/Gurobi.jl

## CPLEX.jl: Julia interface for The CPLEX Optimization Software

CPLEX.jl underwent a major rewrite between versions 0.6.6 and 0.7.0. Users of JuMP should see no breaking changes, but if you used the lower-level C API (e.g., for callbacks), you will need to update your code accordingly. For a full description of the changes, read this discourse post.

To revert to the old API, use:

```julia
import Pkg
Pkg.add(Pkg.PackageSpec(name = "CPLEX", version = v"0.6"))
```

Then restart Julia for the change to take effect.

## CPLEX.jl

CPLEX.jl is a wrapper for the IBM® ILOG® CPLEX® Optimization Studio.

You cannot use CPLEX.jl without having purchased and installed a copy of CPLEX Optimization Studio from IBM. However, CPLEX is available for free to academics and students.

CPLEX.jl has two components:

- a thin wrapper around the complete C API
- an interface to MathOptInterface

The C API can be accessed via `CPLEX.CPXxx` functions, where the names and arguments are identical to the C API. See the CPLEX documentation for details.

Note: This wrapper is maintained by the JuMP community and is not officially supported by IBM. However, we thank IBM for providing us with a CPLEX license to test `CPLEX.jl` on GitHub. If you are a commercial customer interested in official support for CPLEX in Julia, let them know!

## Installation

Minimum version requirement: CPLEX.jl requires CPLEX version 12.10 or 20.1.

First, obtain a license for CPLEX and install the CPLEX solver, following the instructions on IBM's website. Then, set the `CPLEX_STUDIO_BINARIES` environment variable as appropriate and run `Pkg.add("CPLEX")`, then `Pkg.build("CPLEX")`. For example:

```julia
# On Windows, this might be
ENV["CPLEX_STUDIO_BINARIES"] = "C:\\Program Files\\CPLEX_Studio1210\\cplex\\bin\\x86-64_win\\"
import Pkg
Pkg.build("CPLEX")

# On OSX, this might be
ENV["CPLEX_STUDIO_BINARIES"] = "/Applications/CPLEX_Studio1210/cplex/bin/x86-64_osx/"
import Pkg
Pkg.build("CPLEX")

# On Unix, this might be
ENV["CPLEX_STUDIO_BINARIES"] = "/opt/CPLEX_Studio1210/cplex/bin/x86-64_linux/"
import Pkg
Pkg.build("CPLEX")
```

Note: your path may differ. Check which folder you installed CPLEX in, and update the path accordingly.

## Use with JuMP

We highly recommend that you use the CPLEX.jl package with higher level packages such as JuMP.jl.

This can be done using the `CPLEX.Optimizer` object. Here is how to create a JuMP model that uses CPLEX as the solver.

```julia
using JuMP, CPLEX

model = Model(CPLEX.Optimizer)
set_optimizer_attribute(model, "CPX_PARAM_EPINT", 1e-8)
```

Parameters match those of the C API in the CPLEX documentation.

## Callbacks

Here is an example using CPLEX's solver-specific callbacks.

```julia
using JuMP, CPLEX, Test

model = direct_model(CPLEX.Optimizer())
set_silent(model)

# This is very, very important!!! Only use callbacks in single-threaded mode.
MOI.set(model, MOI.NumberOfThreads(), 1)

@variable(model, 0 <= x <= 2.5, Int)
@variable(model, 0 <= y <= 2.5, Int)
@objective(model, Max, y)
cb_calls = Clong[]
function my_callback_function(cb_data::CPLEX.CallbackContext, context_id::Clong)
    # You can reference variables outside the function as normal
    push!(cb_calls, context_id)
    # You can select where the callback is run
    if context_id != CPX_CALLBACKCONTEXT_CANDIDATE
        return
    end
    ispoint_p = Ref{Cint}()
    ret = CPXcallbackcandidateispoint(cb_data, ispoint_p)
    if ret != 0 || ispoint_p[] == 0
        return  # No candidate point available or error
    end
    # You can query CALLBACKINFO items
    valueP = Ref{Cdouble}()
    ret = CPXcallbackgetinfodbl(cb_data, CPXCALLBACKINFO_BEST_BND, valueP)
    @info "Best bound is currently: $(valueP[])"
    # As well as any other C API
    x_p = Vector{Cdouble}(undef, 2)
    obj_p = Ref{Cdouble}()
    ret = CPXcallbackgetincumbent(cb_data, x_p, 0, 1, obj_p)
    if ret == 0
        @info "Objective incumbent is: $(obj_p[])"
        @info "Incumbent solution is: $(x_p)"
        # Use CPLEX.column to map between variable references and the 1-based
        # column.
        x_col = CPLEX.column(cb_data, index(x))
        @info "x = $(x_p[x_col])"
    else
        # Unable to query incumbent.
    end

    # Before querying `callback_value`, you must call:
    CPLEX.load_callback_variable_primal(cb_data, context_id)
    x_val = callback_value(cb_data, x)
    y_val = callback_value(cb_data, y)
    # You can submit solver-independent MathOptInterface attributes such as
    # lazy constraints, user-cuts, and heuristic solutions.
    if y_val - x_val > 1 + 1e-6
        con = @build_constraint(y - x <= 1)
        MOI.submit(model, MOI.LazyConstraint(cb_data), con)
    elseif y_val + x_val > 3 + 1e-6
        con = @build_constraint(y + x <= 3)
        MOI.submit(model, MOI.LazyConstraint(cb_data), con)
    end
end
MOI.set(model, CPLEX.CallbackFunction(), my_callback_function)
optimize!(model)
@test termination_status(model) == MOI.OPTIMAL
@test primal_status(model) == MOI.FEASIBLE_POINT
@test value(x) == 1
@test value(y) == 2
```

## Annotations for automatic Benders' decomposition

Here is an example of using CPLEX's annotation feature for automatic Benders' decomposition:

```julia
using JuMP, CPLEX

function add_annotation(
    model::JuMP.Model,
    variable_classification::Dict;
    all_variables::Bool = true,
)
    num_variables = sum(length(it) for it in values(variable_classification))
    if all_variables
        @assert num_variables == JuMP.num_variables(model)
    end
    indices, annotations = CPXINT[], CPXLONG[]
    for (key, value) in variable_classification
        for variable_ref in value
            push!(indices, variable_ref.index.value - 1)
            push!(annotations, CPX_BENDERS_MASTERVALUE + key)
        end
    end
    cplex = backend(model)
    index_p = Ref{CPXINT}()
    CPXnewlongannotation(
        cplex.env,
        cplex.lp,
        CPX_BENDERS_ANNOTATION,
        CPX_BENDERS_MASTERVALUE,
    )
    CPXgetlongannotationindex(
        cplex.env,
        cplex.lp,
        CPX_BENDERS_ANNOTATION,
        index_p,
    )
    CPXsetlongannotations(
        cplex.env,
        cplex.lp,
        index_p[],
        CPX_ANNOTATIONOBJ_COL,
        length(indices),
        indices,
        annotations,
    )
    return
end

# Problem

function illustrate_full_annotation()
    c_1, c_2 = [1, 4], [2, 3]
    dim_x, dim_y = length(c_1), length(c_2)
    b = [-2; -3]
    A_1, A_2 = [1 -3; -1 -3], [1 -2; -1 -1]
    model = JuMP.direct_model(CPLEX.Optimizer())
    set_optimizer_attribute(model, "CPXPARAM_Benders_Strategy", 1)
    @variable(model, x[1:dim_x] >= 0, Bin)
    @variable(model, y[1:dim_y] >= 0)
    variable_classification = Dict(0 => [x[1], x[2]], 1 => [y[1], y[2]])
    add_annotation(model, variable_classification)
    @constraint(model, A_2 * y + A_1 * x .<= b)
    @objective(model, Min, c_1' * x + c_2' * y)
    optimize!(model)
    x_optimal = value.(x)
    y_optimal = value.(y)
    println("x: $(x_optimal), y: $(y_optimal)")
end

function illustrate_partial_annotation()
    c_1, c_2 = [1, 4], [2, 3]
    dim_x, dim_y = length(c_1), length(c_2)
    b = [-2; -3]
    A_1, A_2 = [1 -3; -1 -3], [1 -2; -1 -1]
    model = JuMP.direct_model(CPLEX.Optimizer())
    # Note that the "CPXPARAM_Benders_Strategy" has to be set to 2 if partial
    # annotation is provided. If "CPXPARAM_Benders_Strategy" is set to 1, then
    # the following error will be thrown:
    # `CPLEX Error  2002: Invalid Benders decomposition.`
    set_optimizer_attribute(model, "CPXPARAM_Benders_Strategy", 2)
    @variable(model, x[1:dim_x] >= 0, Bin)
    @variable(model, y[1:dim_y] >= 0)
    variable_classification = Dict(0 => [x[1]], 1 => [y[1], y[2]])
    add_annotation(model, variable_classification; all_variables = false)
    @constraint(model, A_2 * y + A_1 * x .<= b)
    @objective(model, Min, c_1' * x + c_2' * y)
    optimize!(model)
    x_optimal = value.(x)
    y_optimal = value.(y)
    println("x: $(x_optimal), y: $(y_optimal)")
end
```

Author: jump-dev
Source Code: https://github.com/jump-dev/CPLEX.jl

## Xpress.jl

Xpress.jl is a wrapper for the FICO Xpress Solver.

It has two components:

- a thin wrapper around the complete C API
- an interface to MathOptInterface

The C API can be accessed via `Xpress.Lib.XPRSxx` functions, where the names and arguments are identical to the C API. See the Xpress documentation for details.

The Xpress wrapper for Julia is community driven and not officially supported by FICO Xpress. If you are a commercial customer interested in official support for Julia from FICO Xpress, let them know!

## Installation

Here is the procedure to set up this package:

Obtain a license of Xpress and install Xpress solver, following the instructions on FICO's website.

Install this package using `Pkg.add("Xpress")`.

Make sure the `XPRESSDIR` environment variable is set to the path of the Xpress directory. This is part of a standard installation. The Xpress library will be searched for in `XPRESSDIR/lib` on Unix platforms and `XPRESSDIR/bin` on Windows.
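
That search path can be sketched as follows (the install prefix here is hypothetical; substitute your own):

```julia
# Hypothetical install prefix; adjust to where Xpress is actually installed.
ENV["XPRESSDIR"] = "/opt/xpressmp"
# Per the text above: the library is searched for in XPRESSDIR/lib on Unix
# platforms and XPRESSDIR/bin on Windows.
libdir = joinpath(ENV["XPRESSDIR"], Sys.iswindows() ? "bin" : "lib")
```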

Now, you can start using it.

You should use the Xpress version that matches your Julia installation, and vice versa.

By default, `build`ing Xpress.jl will fail if the Xpress library is not found. This may not be desirable in certain cases, for example when part of a package's test suite uses Xpress as an optional test dependency, but Xpress cannot be installed on a CI server running the test suite. To support this use case, the `XPRESS_JL_SKIP_LIB_CHECK` environment variable may be set (to any value) to make Xpress.jl installable (but not usable).

## Use Other Packages

We highly recommend that you use the Xpress.jl package with higher level packages such as JuMP.jl or MathOptInterface.jl.

This can be done using the `Xpress.Optimizer` object. Here is how to create a JuMP model that uses Xpress as the solver. Parameters are passed as keyword arguments:

```julia
using JuMP, Xpress

model = Model(() -> Xpress.Optimizer(DEFAULTALG = 2, PRESOLVE = 0, logfile = "output.log"))
```

To initialize an optimizer without console printing, run `Xpress.Optimizer(OUTPUTLOG = 0)`. Setting `OUTPUTLOG` to zero will also disable printing to the log file on all systems.

For other parameters, consult the Xpress Optimizer manual or type `julia -e "using Xpress; println(keys(Xpress.XPRS_ATTRIBUTES))"`.

If `logfile` is set to `""`, the log file is disabled and output is printed to the console (there may be issues with console output on Windows, where it is manually implemented with callbacks). If `logfile` is set to a file path, output is printed to that file. By default, `logfile` is set to `""` (console).

Parameters in a JuMP model can be directly modified:

```julia
julia> using Xpress, JuMP;

julia> model = Model(() -> Xpress.Optimizer());

julia> get_optimizer_attribute(model, "logfile")
""

julia> set_optimizer_attribute(model, "logfile", "output.log")

julia> get_optimizer_attribute(model, "logfile")
"output.log"
```

If you've already created an instance of an MOI `Optimizer`, you can use `MOI.RawParameter` to get and set the location of the current logfile.

```julia
julia> using Xpress, MathOptInterface; const MOI = MathOptInterface;

julia> OPTIMIZER = Xpress.Optimizer();

julia> MOI.get(OPTIMIZER, MOI.RawParameter("logfile"))
""

julia> MOI.set(OPTIMIZER, MOI.RawParameter("logfile"), "output.log")

julia> MOI.get(OPTIMIZER, MOI.RawParameter("logfile"))
"output.log"
```

## Callbacks

Solver-specific and solver-independent callbacks work in MathOptInterface and, consequently, in JuMP. However, the current implementation should be considered experimental.

## Environment variables

`XPRESS_JL_SKIP_LIB_CHECK` - Used to skip the build-time library check, as described previously.

`XPRESS_JL_NO_INFO` - Disable license info log.

`XPRESS_JL_NO_DEPS_ERROR` - Disable the error thrown when no deps.jl file is found.

`XPRESS_JL_NO_AUTO_INIT` - Disable the automatic run of `Xpress.initialize()`. Especially useful for explicitly loading the dynamic library.

## Julia version warning

Julia versions 1.1.x do not work properly with MOI due to Julia bugs; hence, these versions are not supported.

## Skipping Xpress.postsolve

In older versions of Xpress, the command `XPRSpostsolve` throws an error for infeasible models. In these older versions, the post-solve should not be executed. To do this, one can use `RawOptimizerAttribute("MOI_POST_SOLVE")` to skip this routine.

## Reference:

FICO optimizer manual

Author: jump-dev
Source Code: https://github.com/jump-dev/Xpress.jl

## ECOS.jl

Julia wrapper for the ECOS embeddable conic optimization interior point solver.

The wrapper has two components:

- a thin wrapper around the complete C API
- an interface to MathOptInterface

## Installation

Install ECOS.jl using `Pkg.add`:

```julia
import Pkg
Pkg.add("ECOS")
```

In addition to installing the ECOS.jl package, this will also download and install the ECOS binaries. (You do not need to install ECOS separately.)

To use a custom binary, read the Custom solver binaries section of the JuMP documentation.

`ECOS.jl` is licensed under the MIT License (see LICENSE.md), but note that ECOS itself is GPL v3.

## Use with JuMP

To use ECOS with JuMP, use `ECOS.Optimizer`:

```julia
using JuMP, ECOS

model = Model(ECOS.Optimizer)
set_optimizer_attribute(model, "maxit", 100)
```

## Options

The list of options is defined in the `ecos.h` header, which we reproduce here:

```
gamma          # scaling the final step length
delta          # regularization parameter
eps            # regularization threshold
feastol        # primal/dual infeasibility tolerance
abstol         # absolute tolerance on duality gap
reltol         # relative tolerance on duality gap
feastol_inacc  # primal/dual infeasibility relaxed tolerance
abstol_inacc   # absolute relaxed tolerance on duality gap
reltol_inacc   # relative relaxed tolerance on duality gap
nitref         # number of iterative refinement steps
maxit          # maximum number of iterations
verbose        # verbosity bool for PRINTLEVEL < 3
```