diff --git a/documentation/api/optimizers/callbacks/LazyCutCallback.rst b/documentation/api/optimizers/callbacks/LazyCutCallback.rst
index 52bdd879..2197b5f0 100644
--- a/documentation/api/optimizers/callbacks/LazyCutCallback.rst
+++ b/documentation/api/optimizers/callbacks/LazyCutCallback.rst
@@ -3,4 +3,58 @@
 LazyCutCallback
 ===============
 
+Lazy cuts are constraints which are part of an optimization model but which have been omitted from its current
+definition. They typically arise in models with a large number of constraints: instead of enumerating them all, it may
+be judicious to omit some of them so as to obtain a smaller optimization model. The returned solution is then checked
+for feasibility against the whole set of constraints. If a violated constraint is identified, it is added to the model
+and the model is solved again. Otherwise, all constraints are satisfied and the solution is optimal for the original
+model in which every constraint is present.
+
+The LazyCutCallback can be used to implement lazy cut generation on the fly, during the optimization process.
+
+Consider the following optimization problem:
+
+.. math::
+
+    \begin{align}
+        \min_x \ & c^\top x \\
+        \text{s.t.} \ & x\in X, \\
+        & \xi^\top x \le \xi_0 \quad (\xi_0,\xi) \in \Xi,
+    \end{align}
+
+in which we assume that :math:`|\Xi|` is large (potentially infinite). The idea is to start by solving the following
+relaxed master problem, in which the constraints associated with the pairs :math:`(\xi_0,\xi)\in\Xi` have been omitted:
+
+.. math::
+
+    \begin{align}
+        \min_x \ & c^\top x \\
+        \text{s.t.} \ & x\in X.
+    \end{align}
+
+Assuming that this problem is feasible and bounded, let :math:`x^*` denote one of its optimal solutions. Then, we
+search for a constraint ":math:`\xi^\top x \le \xi_0`", with :math:`(\xi_0,\xi)\in\Xi`, which is violated by
+:math:`x^*`. Observe that such a constraint exists if, and only if, the following holds:
+
+.. math::
+
+    \left(\exists(\xi_0,\xi)\in\Xi, \ \xi^\top x^* > \xi_0\right)
+    \Leftrightarrow
+    \left(\min_{(\xi_0,\xi)\in\Xi} \xi_0 - \xi^\top x^* < 0\right).
+
+Thus, the LazyCutCallback automatically solves the optimization problem on the right-hand side and checks its value.
+A new constraint is added if, and only if,
+
+.. math::
+
+    \min_{(\xi_0,\xi)\in\Xi} \xi_0 - \xi^\top x^* < -\varepsilon,
+
+with :math:`\varepsilon` a given tolerance (by default, :code:`Tolerance::Feasibility`).
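+
+Concretely, the callback is given a Model describing the set :math:`\Xi` and the constraint ":math:`\xi^\top x \le \xi_0`"
+expressed in the variables of the model being solved. The sketch below shows the typical attachment pattern; it reuses
+the API from the Benders tutorial linked underneath, and the names :code:`model`, :code:`xi_space` and
+:code:`violated_ctr` are placeholders.
+
+.. code::
+
+    // Minimal sketch (names are illustrative): "xi_space" is a Model whose variables and
+    // constraints describe the set Xi, "violated_ctr" is the constraint template expressed
+    // in the variables of "model".
+    model.use(
+        GLPK()
+            .with_callback(
+                LazyCutCallback(xi_space, violated_ctr)
+                    .with_separation_optimizer(GLPK()) // optimizer used to solve the separation problem
+            )
+    );
+
+    model.optimize();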
+
+.. hint::
+
+    You may also be interested in a tutorial showing how to implement a simple Benders Decomposition using lazy
+    cuts. :ref:`See Benders Decomposition tutorial <decomposition_benders>`.
+
+
 .. doxygenclass:: idol::LazyCutCallback
\ No newline at end of file
diff --git a/documentation/tutorials/decomposition_methods/benders.rst b/documentation/tutorials/decomposition_methods/benders.rst
new file mode 100644
index 00000000..e7fbc3c0
--- /dev/null
+++ b/documentation/tutorials/decomposition_methods/benders.rst
@@ -0,0 +1,153 @@
+.. _decomposition_benders:
+
+.. role:: cpp(code)
+    :language: cpp
+
+Benders Decomposition (with LazyCutCallback)
+============================================
+
+In this section, we will show how to use the LazyCutCallback to implement a simple Benders Decomposition algorithm.
+
+.. hint::
+
+    This tutorial regards the `advanced topic` of Benders Decomposition.
+    Rudimentary notions in the following subjects are recommended:
+
+    - `Benders Decomposition `_
+    - `Linear Programming duality `_
+
+Mathematical Model
+------------------
+
+Original formulation
+^^^^^^^^^^^^^^^^^^^^
+
+We will base our example on the following model taken from `Blanco, V., (2016), Benders Decomposition, MINLP School: Theory
+and Applications `_.
+
+.. math::
+
+    \begin{align}
+        \min_{x,y} \ & 2 x_0 + 3x_1 + 2y \\
+        \text{s.t.} \ & x_0 + 2x_1 + y \ge 3, \\
+        & 2x_0 - x_1 + 3y \ge 4, \\
+        & x,y\ge 0.
+    \end{align}
+
+Benders reformulation
+^^^^^^^^^^^^^^^^^^^^^
+
+We apply a Benders reformulation to this problem by considering :math:`y` as the complicating variable. For a fixed
+:math:`y`, the remaining problem in :math:`x` is a linear program whose value, by LP duality, equals
+:math:`\max_{\lambda\in\Lambda} \lambda_0(3-y) + \lambda_1(4-3y)`; the additional variable :math:`z` is required to
+dominate this value. The Benders reformulation reads:
+
+.. math::
+
+    \begin{align}
+        \min_{y,z} \ & 2y + z \\
+        \text{s.t.} \ & z \ge \lambda_0 ( 3 - y ) + \lambda_1 ( 4 - 3y ) \quad \lambda \in \Lambda, \\
+        & y, z \ge 0,
+    \end{align}
+
+with :math:`\Lambda` defined as the set of all :math:`\lambda\in\mathbb R^2_+` such that
+
+.. math::
+
+    \begin{align}
+        & \lambda_0 + 2 \lambda_1 \le 2, \\
+        & 2\lambda_0 - \lambda_1 \le 3.
+    \end{align}
+
+Implementation
+--------------
+
+We are now ready to implement our decomposition method. We will need to define three different things:
+
+- the master problem;
+- the dual space :math:`\Lambda`;
+- the shape of the cuts to be added.
+
+The master problem
+^^^^^^^^^^^^^^^^^^
+
+The master problem is created like any optimization model; see our :ref:`Modeling tutorial `.
+
+.. code::
+
+    Env env;
+
+    Model master(env);
+
+    auto y = master.add_var(0, Inf, Continuous, "y");
+    auto z = master.add_var(0, Inf, Continuous, "z");
+
+    master.set_obj_expr(2 * y + z);
+
+The dual space :math:`\Lambda`
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+To describe the dual space :math:`\Lambda`, we use another Model object which contains the variables and constraints
+defining :math:`\Lambda`. Its objective function is not used and can be left at zero.
+
+.. code::
+
+    Model dual_space(env);
+
+    auto lambda = dual_space.add_vars(Dim<1>(2), 0, Inf, Continuous, "lambda");
+
+    dual_space.add_ctr(lambda[0] + 2 * lambda[1] <= 2);
+    dual_space.add_ctr(2 * lambda[0] - lambda[1] <= 3);
+
+The cuts to be added
+^^^^^^^^^^^^^^^^^^^^
+
+Finally, we need to define the cuts to be added to the master problem for a given dual point :math:`\lambda`.
+The cuts are always expressed in the "master space". By this, we mean that, here, :math:`y` must appear as a *variable*
+in the constraint while :math:`\lambda` must appear as a *constant*.
+
+This is done as follows.
+
+.. code::
+
+    auto benders_cut = z >= !lambda[0] * (3 - y) + !lambda[1] * (4 - 3 * y);
+
+See how the lambda variables are "turned into" constants by prefixing them with the "!" symbol.
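+
+For intuition, here is a sketch of the kind of constraint this template generates. Suppose, purely for illustration,
+that the separation problem returned the dual point :math:`\lambda = (0, 1)`, which indeed belongs to :math:`\Lambda`.
+The cut added to the master problem is then obtained by fixing :math:`\lambda` in the template, i.e.,
+
+.. math::
+
+    z \ge 0 \cdot (3 - y) + 1 \cdot (4 - 3y) = 4 - 3y.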
+
+Solving the model
+-----------------
+
+In this section, we solve our model using GLPK and the LazyCutCallback from idol.
+
+This is done as follows.
+
+.. code::
+
+    master.use(
+        GLPK()
+            .with_callback(
+                LazyCutCallback(dual_space, benders_cut)
+                    .with_separation_optimizer(GLPK())
+            )
+    );
+
+    master.optimize();
+
+    std::cout << save_primal(master) << std::endl;
+
+Note that we also specified an optimizer for solving the separation problem; here, we use GLPK.
+
+.. warning::
+
+    If you are using Gurobi with the LazyCutCallback, make sure to call the :code:`Gurobi::with_lazy_cuts` method.
+    This is necessary to adjust some Gurobi parameters which would otherwise lead to wrong solutions.
+
+    .. code::
+
+        master.use(
+            Gurobi()
+                .with_lazy_cuts(true)
+                .with_callback(
+                    LazyCutCallback(dual_space, benders_cut)
+                        .with_separation_optimizer(Gurobi())
+                )
+        );
diff --git a/documentation/tutorials/decomposition_methods/dantzig_wolfe.rst b/documentation/tutorials/decomposition_methods/dantzig_wolfe.rst
index 589357c3..8aa3066f 100644
--- a/documentation/tutorials/decomposition_methods/dantzig_wolfe.rst
+++ b/documentation/tutorials/decomposition_methods/dantzig_wolfe.rst
@@ -3,8 +3,8 @@
 .. role:: cpp(code)
     :language: cpp
 
-Dantzig-Wolfe decomposition
-===========================
+Dantzig-Wolfe Decomposition (Automatic)
+=======================================
 
 In this section, we will show how to use the Branch-and-Price solver to solve the *Generalized Assignment Problem* (GAP)
 using an external solver to solve each sub-problem.
@@ -14,7 +14,7 @@ using an external solver to solve each sub-problem.
     This tutorial regards the `advanced topic` of Column Generation and Dantzig-Wolfe decomposition.
     Rudimentary notions in the following subjects are recommended:
 
-    - `Column Generation and Branch-and-Price algorithmd `_
+    - `Column Generation and Branch-and-Price algorithms `_
     - `Dantzig-Wolfe decomposition `_
     - `Generalized Assignment Problem `_.
 
diff --git a/documentation/tutorials/decomposition_methods/index.rst b/documentation/tutorials/decomposition_methods/index.rst
index b6113c99..997bd205 100644
--- a/documentation/tutorials/decomposition_methods/index.rst
+++ b/documentation/tutorials/decomposition_methods/index.rst
@@ -8,3 +8,4 @@ Decomposition methods
     :glob:
 
     dantzig_wolfe
+    benders