CPLEX.jl underwent a major rewrite between versions 0.6.6 and 0.7.0. Users of JuMP should see no breaking changes, but if you used the lower-level C API (e.g., for callbacks), you will need to update your code accordingly. For a full description of the changes, read this discourse post.
To revert to the old API, use:
import Pkg
Pkg.add(Pkg.PackageSpec(name = "CPLEX", version = v"0.6"))
Then restart Julia for the change to take effect.
CPLEX.jl is a wrapper for the IBM® ILOG® CPLEX® Optimization Studio.
You cannot use CPLEX.jl without having purchased and installed a copy of CPLEX Optimization Studio from IBM. However, CPLEX is available for free to academics and students.
CPLEX.jl has two components:
- a thin wrapper around the complete C API
- an interface to MathOptInterface
The C API can be accessed via CPLEX.CPXxx functions, where the names and arguments are identical to the C API. See the CPLEX documentation for details.
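For example, here is a minimal, hypothetical sketch (not taken from the README) of calling the C API on a model built in JuMP's direct mode. It assumes, as in the Benders example below, that backend(model) exposes the env and lp pointers expected by the C functions; CPXwriteprob mirrors the C function of the same name:
using JuMP, CPLEX
model = direct_model(CPLEX.Optimizer())
@variable(model, x >= 0)
@objective(model, Min, x)
cplex = backend(model)
# Write the problem to disk in LP format; a return code of 0 indicates success.
ret = CPXwriteprob(cplex.env, cplex.lp, "model.lp", "LP")
@assert ret == 0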
Note: This wrapper is maintained by the JuMP community and is not officially supported by IBM. However, we thank IBM for providing us with a CPLEX license to test CPLEX.jl on GitHub. If you are a commercial customer interested in official support for CPLEX in Julia, let them know!
Minimum version requirement: CPLEX.jl requires CPLEX version 12.10 or 20.1.
First, obtain a license for CPLEX and install the CPLEX solver, following the instructions on IBM's website. Then, set the CPLEX_STUDIO_BINARIES environment variable as appropriate, run Pkg.add("CPLEX"), and then Pkg.build("CPLEX"). For example:
# On Windows, this might be
ENV["CPLEX_STUDIO_BINARIES"] = "C:\\Program Files\\CPLEX_Studio1210\\cplex\\bin\\x86-64_win\\"
import Pkg
Pkg.add("CPLEX")
Pkg.build("CPLEX")
# On OSX, this might be
ENV["CPLEX_STUDIO_BINARIES"] = "/Applications/CPLEX_Studio1210/cplex/bin/x86-64_osx/"
import Pkg
Pkg.add("CPLEX")
Pkg.build("CPLEX")
# On Unix, this might be
ENV["CPLEX_STUDIO_BINARIES"] = "/opt/CPLEX_Studio1210/cplex/bin/x86-64_linux/"
import Pkg
Pkg.add("CPLEX")
Pkg.build("CPLEX")
Note: your path may differ. Check which folder you installed CPLEX in, and update the path accordingly.
We highly recommend that you use the CPLEX.jl package with higher-level packages such as JuMP.jl. This can be done using the CPLEX.Optimizer object. Here is how to create a JuMP model that uses CPLEX as the solver:
using JuMP, CPLEX
model = Model(CPLEX.Optimizer)
set_optimizer_attribute(model, "CPX_PARAM_EPINT", 1e-8)
Parameters match those of the C API in the CPLEX documentation.
Here is an example using CPLEX's solver-specific callbacks.
using JuMP, CPLEX, Test
model = direct_model(CPLEX.Optimizer())
set_silent(model)
# This is very, very important!!! Only use callbacks in single-threaded mode.
MOI.set(model, MOI.NumberOfThreads(), 1)
@variable(model, 0 <= x <= 2.5, Int)
@variable(model, 0 <= y <= 2.5, Int)
@objective(model, Max, y)
cb_calls = Clong[]
function my_callback_function(cb_data::CPLEX.CallbackContext, context_id::Clong)
    # You can reference variables outside the function as normal
    push!(cb_calls, context_id)
    # You can select where the callback is run
    if context_id != CPX_CALLBACKCONTEXT_CANDIDATE
        return
    end
    ispoint_p = Ref{Cint}()
    ret = CPXcallbackcandidateispoint(cb_data, ispoint_p)
    if ret != 0 || ispoint_p[] == 0
        return  # No candidate point available or error
    end
    # You can query CALLBACKINFO items
    valueP = Ref{Cdouble}()
    ret = CPXcallbackgetinfodbl(cb_data, CPXCALLBACKINFO_BEST_BND, valueP)
    @info "Best bound is currently: $(valueP[])"
    # As well as any other C API
    x_p = Vector{Cdouble}(undef, 2)
    obj_p = Ref{Cdouble}()
    ret = CPXcallbackgetincumbent(cb_data, x_p, 0, 1, obj_p)
    if ret == 0
        @info "Objective incumbent is: $(obj_p[])"
        @info "Incumbent solution is: $(x_p)"
        # Use CPLEX.column to map between variable references and the 1-based
        # column.
        x_col = CPLEX.column(cb_data, index(x))
        @info "x = $(x_p[x_col])"
    else
        # Unable to query incumbent.
    end
    # Before querying `callback_value`, you must call:
    CPLEX.load_callback_variable_primal(cb_data, context_id)
    x_val = callback_value(cb_data, x)
    y_val = callback_value(cb_data, y)
    # You can submit solver-independent MathOptInterface attributes such as
    # lazy constraints, user-cuts, and heuristic solutions.
    if y_val - x_val > 1 + 1e-6
        con = @build_constraint(y - x <= 1)
        MOI.submit(model, MOI.LazyConstraint(cb_data), con)
    elseif y_val + x_val > 3 + 1e-6
        con = @build_constraint(y + x <= 3)
        MOI.submit(model, MOI.LazyConstraint(cb_data), con)
    end
end
MOI.set(model, CPLEX.CallbackFunction(), my_callback_function)
optimize!(model)
@test termination_status(model) == MOI.OPTIMAL
@test primal_status(model) == MOI.FEASIBLE_POINT
@test value(x) == 1
@test value(y) == 2
Here is an example of using CPLEX's annotation feature for automatic Benders' decomposition:
using JuMP, CPLEX
function add_annotation(
    model::JuMP.Model,
    variable_classification::Dict;
    all_variables::Bool = true,
)
    num_variables = sum(length(it) for it in values(variable_classification))
    if all_variables
        @assert num_variables == JuMP.num_variables(model)
    end
    indices, annotations = CPXINT[], CPXLONG[]
    for (key, value) in variable_classification
        for variable_ref in value
            push!(indices, variable_ref.index.value - 1)
            push!(annotations, CPX_BENDERS_MASTERVALUE + key)
        end
    end
    cplex = backend(model)
    index_p = Ref{CPXINT}()
    CPXnewlongannotation(
        cplex.env,
        cplex.lp,
        CPX_BENDERS_ANNOTATION,
        CPX_BENDERS_MASTERVALUE,
    )
    CPXgetlongannotationindex(
        cplex.env,
        cplex.lp,
        CPX_BENDERS_ANNOTATION,
        index_p,
    )
    CPXsetlongannotations(
        cplex.env,
        cplex.lp,
        index_p[],
        CPX_ANNOTATIONOBJ_COL,
        length(indices),
        indices,
        annotations,
    )
    return
end
# Problem
function illustrate_full_annotation()
    c_1, c_2 = [1, 4], [2, 3]
    dim_x, dim_y = length(c_1), length(c_2)
    b = [-2; -3]
    A_1, A_2 = [1 -3; -1 -3], [1 -2; -1 -1]
    model = JuMP.direct_model(CPLEX.Optimizer())
    set_optimizer_attribute(model, "CPXPARAM_Benders_Strategy", 1)
    @variable(model, x[1:dim_x] >= 0, Bin)
    @variable(model, y[1:dim_y] >= 0)
    variable_classification = Dict(0 => [x[1], x[2]], 1 => [y[1], y[2]])
    @constraint(model, A_2 * y + A_1 * x .<= b)
    @objective(model, Min, c_1' * x + c_2' * y)
    add_annotation(model, variable_classification)
    optimize!(model)
    x_optimal = value.(x)
    y_optimal = value.(y)
    println("x: $(x_optimal), y: $(y_optimal)")
end
function illustrate_partial_annotation()
    c_1, c_2 = [1, 4], [2, 3]
    dim_x, dim_y = length(c_1), length(c_2)
    b = [-2; -3]
    A_1, A_2 = [1 -3; -1 -3], [1 -2; -1 -1]
    model = JuMP.direct_model(CPLEX.Optimizer())
    # Note that the "CPXPARAM_Benders_Strategy" has to be set to 2 if partial
    # annotation is provided. If "CPXPARAM_Benders_Strategy" is set to 1, then
    # the following error will be thrown:
    # `CPLEX Error 2002: Invalid Benders decomposition.`
    set_optimizer_attribute(model, "CPXPARAM_Benders_Strategy", 2)
    @variable(model, x[1:dim_x] >= 0, Bin)
    @variable(model, y[1:dim_y] >= 0)
    variable_classification = Dict(0 => [x[1]], 1 => [y[1], y[2]])
    @constraint(model, A_2 * y + A_1 * x .<= b)
    @objective(model, Min, c_1' * x + c_2' * y)
    add_annotation(model, variable_classification; all_variables = false)
    optimize!(model)
    x_optimal = value.(x)
    y_optimal = value.(y)
    println("x: $(x_optimal), y: $(y_optimal)")
end
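To try the two examples, call the functions defined above:
illustrate_full_annotation()
illustrate_partial_annotation()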
Author: jump-dev
Source Code: https://github.com/jump-dev/CPLEX.jl
License: MIT license
#julia #interface #optimization
So far in our journey through the Machine Learning universe, we covered several big topics. We investigated some regression algorithms, classification algorithms and algorithms that can be used for both types of problems (SVM, Decision Trees and Random Forest). Apart from that, we dipped our toes into unsupervised learning, saw how we can use this type of learning for clustering, and learned about several clustering techniques.
We also talked about how to quantify machine learning model performance and how to improve it with regularization. In all these articles, we used Python for “from scratch” implementations and libraries like TensorFlow, PyTorch and Scikit-Learn. The word optimization popped up more than once in these articles, so in this and the next article, we focus on optimization techniques, which are an important part of the machine learning process.
In general, every machine learning algorithm is composed of three integral parts: a loss function, an optimization criterion based on that loss function, and an optimization routine (an optimizer) that uses the training data to solve the optimization criterion.
As you were able to see in previous articles, some algorithms were created intuitively and didn't have an optimization criterion in mind; the mathematical explanations of why and how they work came later. Decision Trees and kNN are examples. Other algorithms, developed later, were designed with an optimization criterion from the start; SVM is one example.
During training, we change the parameters of our machine learning model to try to minimize the loss function. However, this raises the questions of how to change those parameters, by how much, and when. To answer these questions we use optimizers: they tie the different parts of the machine learning algorithm together. So far we have mentioned Gradient Descent as an optimization technique, but we haven't explored it in detail. In this article we do exactly that, covering the grandfather of all optimization techniques and its variations. Note that these techniques are not machine learning algorithms themselves; they are solvers of minimization problems in which the function to minimize has a gradient at most points of its domain.
The data we use in this article is the famous Boston Housing Dataset. This dataset is composed of 14 features and contains information collected by the U.S. Census Service concerning housing in the area of Boston, Mass. It is a small dataset with only 506 samples.
For the purpose of this article, make sure that you have installed the following Python libraries: NumPy, Pandas, Matplotlib and Scikit-Learn.
Once installed, make sure that you have imported all the necessary modules used in this tutorial:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import SGDRegressor
Apart from that, it would be good to be at least familiar with the basics of linear algebra, calculus and probability.
Note that we also use simple Linear Regression in all examples. Because we are exploring optimization techniques, we picked the easiest machine learning algorithm. You can see more details about Linear Regression here. As a quick reminder, the formula for linear regression goes like this:
ŷ = w · x + b
where w and b are the parameters of the machine learning algorithm. The entire point of the training process is to set the correct values for w and b, so that we get the desired output from the machine learning model. This means that we are trying to make the value of our error vector as small as possible, i.e. to find a global minimum of the cost function.
One way of solving this problem is to use calculus: we could compute derivatives and then use them to find the extrema of the cost function. However, the cost function is not a function of one or a few variables; it is a function of all the parameters of a machine learning algorithm, so these calculations quickly grow into a monster. That is why we use optimizers.
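To make the idea concrete, here is a minimal sketch (not the article's implementation) of batch gradient descent for simple linear regression. It uses small synthetic data instead of the Boston Housing Dataset so that the snippet is self-contained, and needs only NumPy from the imports above:
import numpy as np

# Synthetic data following y = 3x + 4 plus a little noise.
rng = np.random.default_rng(42)
X = rng.uniform(0, 10, size=200)
y = 3 * X + 4 + rng.normal(0, 1, size=200)

w, b = 0.0, 0.0  # the parameters we want to learn
learning_rate = 0.02
n = len(y)

for epoch in range(1000):
    y_hat = w * X + b  # current predictions
    error = y_hat - y
    # Gradients of the mean squared error with respect to w and b.
    grad_w = (2 / n) * np.dot(error, X)
    grad_b = (2 / n) * error.sum()
    # Take a small step against the gradient.
    w -= learning_rate * grad_w
    b -= learning_rate * grad_b

print(w, b)  # should end up close to 3 and 4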
#ai #machine learning #python #artificial intelligence #batch gradient descent #data science #datascience #deep learning #from scratch #gradient descent #machine learning optimizers #ml optimization #optimizers #scikit learn #software #software craft #software craftsmanship #software development #stochastic gradient descent
Gurobi.jl is a wrapper for the Gurobi Optimizer.
It has two components:
- a thin wrapper around the complete C API
- an interface to MathOptInterface
The C API can be accessed via Gurobi.GRBxx functions, where the names and arguments are identical to the C API. See the Gurobi documentation for details.
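For example, here is a minimal, hypothetical sketch (not taken from the README) of calling the C API on a model built in JuMP's direct mode. It assumes, as in the callback example below, that backend(model) can be passed where the C API expects a Gurobi model pointer; GRBupdatemodel and GRBgetintattr mirror the C functions of the same names:
using JuMP, Gurobi
model = direct_model(Gurobi.Optimizer())
@variable(model, x >= 0)
# Flush any pending modifications, then query a model attribute via the C API.
GRBupdatemodel(backend(model))
numvars_p = Ref{Cint}()
ret = GRBgetintattr(backend(model), "NumVars", numvars_p)
@assert ret == 0 && numvars_p[] == 1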
Note: This wrapper is maintained by the JuMP community and is not officially supported by Gurobi. However, we thank Gurobi for providing us with a license to test Gurobi.jl on GitHub. If you are a commercial customer interested in official support for Gurobi in Julia, let them know!
Minimum version requirement: Gurobi.jl requires Gurobi version 9.0, 9.1, or 9.5.
First, obtain a license for Gurobi and install the Gurobi solver, following the instructions on Gurobi's website. Then, set the GUROBI_HOME environment variable as appropriate, run Pkg.add("Gurobi"), and then Pkg.build("Gurobi"). For example:
# On Windows, this might be
ENV["GUROBI_HOME"] = "C:\\Program Files\\gurobi950\\win64"
# ... or perhaps ...
ENV["GUROBI_HOME"] = "C:\\gurobi950\\win64"
import Pkg
Pkg.add("Gurobi")
Pkg.build("Gurobi")
# On Mac, this might be
ENV["GUROBI_HOME"] = "/Library/gurobi950/mac64"
import Pkg
Pkg.add("Gurobi")
Pkg.build("Gurobi")
Note: your path may differ. Check which folder you installed Gurobi in, and update the path accordingly.
By default, building Gurobi.jl will fail if the Gurobi library is not found. This may not be desirable in certain cases, for example when part of a package's test suite uses Gurobi as an optional test dependency, but Gurobi cannot be installed on a CI server running the test suite. To support this use case, the GUROBI_JL_SKIP_LIB_CHECK environment variable may be set (to any value) to make Gurobi.jl installable (but not usable).
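For example, on a machine without a Gurobi installation (a hypothetical CI setup), one might build with the check skipped:
ENV["GUROBI_JL_SKIP_LIB_CHECK"] = "true"
import Pkg
Pkg.add("Gurobi")
Pkg.build("Gurobi")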
We highly recommend that you use the Gurobi.jl package with higher-level packages such as JuMP.jl. This can be done using the Gurobi.Optimizer object. Here is how to create a JuMP model that uses Gurobi as the solver:
using JuMP, Gurobi
model = Model(Gurobi.Optimizer)
set_optimizer_attribute(model, "TimeLimit", 100)
set_optimizer_attribute(model, "Presolve", 0)
See the Gurobi Documentation for a list and description of allowable parameters.
When using this package via other packages such as JuMP.jl, the default behavior is to obtain a new Gurobi license token every time a model is created. If you are using Gurobi in a setting where the number of concurrent Gurobi uses is limited (e.g. "Single-Use" or "Floating-Use" licenses), you might instead prefer to obtain a single license token that is shared by all models that your program solves. You can do this by passing a Gurobi Environment object as the first parameter to Gurobi.Optimizer. For example, the following code snippet solves multiple problems with JuMP using the same license token:
using JuMP, Gurobi
const GRB_ENV = Gurobi.Env()
model1 = Model(() -> Gurobi.Optimizer(GRB_ENV))
# The solvers can have different options too
model2 = Model(() -> Gurobi.Optimizer(GRB_ENV))
set_optimizer_attribute(model2, "OutputFlag", 0)
You can get and set Gurobi-specific variable, constraint, and model attributes via JuMP as follows:
using JuMP, Gurobi
model = direct_model(Gurobi.Optimizer())
@variable(model, x >= 0)
@constraint(model, c, 2x >= 1)
@objective(model, Min, x)
MOI.set(model, Gurobi.ConstraintAttribute("Lazy"), c, 2)
optimize!(model)
MOI.get(model, Gurobi.VariableAttribute("LB"), x) # Returns 0.0
MOI.get(model, Gurobi.ModelAttribute("NumConstrs")) # Returns 1
Note that we are using JuMP in direct-mode.
A complete list of supported Gurobi attributes can be found in their online documentation.
Here is an example using Gurobi's solver-specific callbacks.
using JuMP, Gurobi, Test
model = direct_model(Gurobi.Optimizer())
@variable(model, 0 <= x <= 2.5, Int)
@variable(model, 0 <= y <= 2.5, Int)
@objective(model, Max, y)
cb_calls = Cint[]
function my_callback_function(cb_data, cb_where::Cint)
    # You can reference variables outside the function as normal
    push!(cb_calls, cb_where)
    # You can select where the callback is run
    if cb_where != GRB_CB_MIPSOL && cb_where != GRB_CB_MIPNODE
        return
    end
    # You can query a callback attribute using GRBcbget
    if cb_where == GRB_CB_MIPNODE
        resultP = Ref{Cint}()
        GRBcbget(cb_data, cb_where, GRB_CB_MIPNODE_STATUS, resultP)
        if resultP[] != GRB_OPTIMAL
            return  # Solution is something other than optimal.
        end
    end
    # Before querying `callback_value`, you must call:
    Gurobi.load_callback_variable_primal(cb_data, cb_where)
    x_val = callback_value(cb_data, x)
    y_val = callback_value(cb_data, y)
    # You can submit solver-independent MathOptInterface attributes such as
    # lazy constraints, user-cuts, and heuristic solutions.
    if y_val - x_val > 1 + 1e-6
        con = @build_constraint(y - x <= 1)
        MOI.submit(model, MOI.LazyConstraint(cb_data), con)
    elseif y_val + x_val > 3 + 1e-6
        con = @build_constraint(y + x <= 3)
        MOI.submit(model, MOI.LazyConstraint(cb_data), con)
    end
    if rand() < 0.1
        # You can terminate the callback as follows:
        GRBterminate(backend(model))
    end
    return
end
# You _must_ set this parameter if using lazy constraints.
MOI.set(model, MOI.RawOptimizerAttribute("LazyConstraints"), 1)
MOI.set(model, Gurobi.CallbackFunction(), my_callback_function)
optimize!(model)
@test termination_status(model) == MOI.OPTIMAL
@test primal_status(model) == MOI.FEASIBLE_POINT
@test value(x) == 1
@test value(y) == 2
See the Gurobi documentation for other information that can be queried with GRBcbget.
Gurobi's API works differently from most solvers. Any changes to the model are not applied immediately, but instead sit in an internal buffer (making any modification appear to be instantaneous) waiting for a call to GRBupdatemodel (where the work is done).
This leads to a common performance pitfall that has the following message as its main symptom:
Warning: excessive time spent in model updates. Consider calling update less
frequently.
This often means the JuMP program was structured in such a way that Gurobi.jl ends up calling GRBupdatemodel in each iteration of a loop. Usually, it is possible (and easy) to restructure the JuMP program in a way that stays solver-agnostic and has close-to-ideal performance with Gurobi.
To guide such restructuring, it is good to keep in mind the following bits of information:
- GRBupdatemodel is only called if changes were made since the last GRBupdatemodel (i.e., the internal buffer is not empty).
- GRBupdatemodel is called when JuMP.optimize! is called, but this is often not the source of the problem.
- GRBupdatemodel may be called when ANY model attribute is queried, even if that specific attribute was not changed, and this is often the source of the problem.
As an example, prefer:
# GOOD
model = Model(Gurobi.Optimizer)
@variable(model, x[1:100] >= 0)
# All modifications are done before any queries.
for i = 1:100
    set_upper_bound(x[i], i)
end
for i = 1:100
    # Only the first `lower_bound` query may trigger a `GRBupdatemodel`.
    println(lower_bound(x[i]))
end
to:
# BAD
model = Model(Gurobi.Optimizer)
@variable(model, x[1:100] >= 0)
for i = 1:100
    set_upper_bound(x[i], i)
    # `GRBupdatemodel` called on each iteration of this loop.
    println(lower_bound(x[i]))
end
If Gurobi reports the error Q not PSD, you need to set the NonConvex parameter:
model = Model(Gurobi.Optimizer)
set_optimizer_attribute(model, "NonConvex", 2)
Author: jump-dev
Source Code: https://github.com/jump-dev/Gurobi.jl
License: MIT license
Custom software or off-the-shelf software? This question is on the minds of many business personnel. Read this blog for help making the right decision for your business.
For a business that wants to upgrade and modernize itself with the help of software, a common dilemma is whether to go for custom-made software or opt for off-the-shelf software. You can find many top software development companies worldwide, but before all that, you should first decide on the type of software: off-the-shelf or custom.
This blog aims to resolve that dilemma and offer some clarity to a business looking to automate its processes.
#custom software vs off-the-shelf software #custom software development companies #top software development companies #off-the-shelf software development #customized software solution #custom software development