Saturday, September 22, 2012

lagrangian relaxation with gurobipy

Note: this is still a new technique to me, so please point out any errors and I will correct them in the post.

Correction: $b_{ij}$ is now $b_j$ in the second set of constraints.

We've been studying Lagrangian Relaxation (LR) in the combinatorial optimization course I'm taking this term, and I had some difficulty finding a simple example covering its application. In case anyone else finds it useful, I'm posting a Python version for solving the Generalized Assignment Problem (GAP). This won't discuss the theory of LR at all, just give example code using Gurobi 5.0.1.

The GAP as defined by Wolsey consists of a maximization problem subject to a set of set packing constraints followed by a set of knapsack constraints:

max $\sum_i \sum_j c_{ij} x_{ij}$
subject to $\sum_j x_{ij} \leq 1 \quad \forall i$
$\sum_i a_{ij} x_{ij} \leq b_j \quad \forall j$
$x_{ij} \in \{0, 1\} \quad \forall i, j$

An unadulterated version of this BIP using gurobipy might look like the following:

#!/usr/bin/env python

# This is the GAP per Wolsey, pg 182

from gurobipy import GRB, Model

model = Model('GAP per Wolsey')
model.modelSense = GRB.MAXIMIZE
model.setParam('OutputFlag', False) # turns off solver chatter

b = [15, 15, 15]
c = [
    [ 6, 10,  1],
    [12, 12,  5],
    [15,  4,  3],
    [10,  3,  9],
    [ 8,  9,  5]
]
a = [
    [ 5,  7,  2],
    [14,  8,  7],
    [10,  6, 12],
    [ 8,  4, 15],
    [ 6, 12,  5]
]

# x[i][j] = 1 if i is assigned to j
x = []
for i in range(len(c)):
    x_i = []
    for _ in c[i]:
        x_i.append(model.addVar(vtype=GRB.BINARY))
    x.append(x_i)

# We have to update the model so it knows about new variables
model.update()

# sum j: x_ij <= 1 for all i
for x_i in x:
    model.addConstr(sum(x_i) <= 1)

# sum i: a_ij * x_ij <= b[j] for all j
for j in range(len(b)):
    model.addConstr(sum(a[i][j] * x[i][j] for i in range(len(x))) <= b[j])

# max sum i,j: c_ij * x_ij
model.setObjective(
    sum(
        sum(c_ij * x_ij for c_ij, x_ij in zip(c_i, x_i))
        for c_i, x_i in zip(c, x)
    )
)
model.optimize()

# Pull objective and variable values out of model
print 'objective =', model.objVal
print 'x = ['
for x_i in x:
    print '   ', [1 if x_ij.x >= 0.5 else 0 for x_ij in x_i]
print ']'

The solver quickly finds the following optimal solution of this toy problem:

objective = 46.0
x = [
    [0, 1, 0]
    [0, 1, 0]
    [1, 0, 0]
    [0, 0, 1]
    [0, 0, 0]
]

There are two sets of constraints we can dualize. Lagrangian Relaxation tends to work well when the relaxed problem is composed of knapsack constraints, so we will dualize the set packing constraints. The first thing we do is replace the set packing constraints from the model above with a new set of variables, penalties, which take the values of the slacks on those constraints. We then modify the objective function, adding the Lagrangian multipliers times these penalties.
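
With multipliers $u_i \geq 0$ on the dualized constraints, the objective we maximize at each iteration becomes

max $\sum_i \sum_j c_{ij} x_{ij} + \sum_i u_i \left(1 - \sum_j x_{ij}\right)$
subject to $\sum_i a_{ij} x_{ij} \leq b_j \quad \forall j$
$x_{ij} \in \{0, 1\} \quad \forall i, j$

where the penalty $p_i = 1 - \sum_j x_{ij}$ is the slack on set packing constraint $i$.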

Instead of optimizing once, we do so iteratively. An important consideration is that we may get nothing more than a dual bound from this process. An integer solution is not guaranteed to be primal feasible unless it satisfies the complementary slackness conditions: for each dualized constraint, either its multiplier or its penalty must be zero.
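
Concretely, the stopping test checks that $u_i \cdot p_i = 0$, within a small tolerance, for every dualized constraint $i$. When the test fails, each multiplier is adjusted by a projected subgradient step: $u_i \leftarrow \max(u_i - s_k\, p_i,\ 0)$, where $s_k$ is the step size at iteration $k$.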

Setting the initial multiplier values to 2 and using a step size of $s_k = 1/k$, we arrive at the following model:

#!/usr/bin/env python

# This is the GAP per Wolsey, pg 182, using Lagrangian Relaxation

from gurobipy import GRB, Model

model = Model('GAP per Wolsey with Lagrangian Relaxation')
model.modelSense = GRB.MAXIMIZE
model.setParam('OutputFlag', False) # turns off solver chatter

b = [15, 15, 15]
c = [
    [ 6, 10,  1],
    [12, 12,  5],
    [15,  4,  3],
    [10,  3,  9],
    [ 8,  9,  5]
]
a = [
    [ 5,  7,  2],
    [14,  8,  7],
    [10,  6, 12],
    [ 8,  4, 15],
    [ 6, 12,  5]
]

# x[i][j] = 1 if i is assigned to j
x = []
for i in range(len(c)):
    x_i = []
    for _ in c[i]:
        x_i.append(model.addVar(vtype=GRB.BINARY))
    x.append(x_i)

# As stated, the GAP includes a set of set packing constraints. We dualize
# those into penalties instead, using variables so we can easily extract
# their values.
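# Note that addVar() defaults to a lower bound of 0, so these penalty
# variables are implicitly nonnegative.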
penalties = [model.addVar() for _ in x]
model.update()

# Dualized constraints: sum j: x_ij <= 1 for all i
for p, x_i in zip(penalties, x):
    model.addConstr(p == 1 - sum(x_i))

# sum i: a_ij * x_ij <= b[j] for all j
for j in range(len(b)):
    model.addConstr(sum(a[i][j] * x[i][j] for i in range(len(x))) <= b[j])

# u[i] = Lagrangian multiplier for set packing constraint i
u = [2.0] * len(x)

# Re-optimize until either we have run a certain number of iterations
# or complementary slackness conditions apply.
for k in range(1, 101):
    # max sum i,j: c_ij * x_ij
    model.setObjective(
        sum(
            # Original objective function
            sum(c_ij * x_ij for c_ij, x_ij in zip(c_i, x_i))
            for c_i, x_i in zip(c, x)
        ) + sum(
            # Penalties for dualized constraints
            u_j * p_j for u_j, p_j in zip(u, penalties)
        )
    )
    model.optimize()

    print 'iteration', k, 'obj =', model.objVal, \
        'u =', u, 'penalties =', [p.x for p in penalties]

    # Test for complementary slackness
    stop = True
    eps = 1e-5
    for u_i, p_i in zip(u, penalties):
        if abs(u_i) > eps and abs(p_i.x) > eps:
            stop = False
            break

    if stop:
        print 'primal feasible & optimal'
        break

    else:
        s = 1.0 / k
        for i in range(len(x)):
            u[i] = max(u[i] - s*(penalties[i].x), 0.0)

# Pull objective and variable values out of model
print 'objective =', model.objVal
print 'x = ['
for x_i in x:
    print '   ', [1 if x_ij.x >= 0.5 else 0 for x_ij in x_i]
print ']'

On this problem LR converges very quickly to an optimal solution, as we can see here:

iteration 1 obj = 48.0 u = [2.0, 2.0, 2.0, 2.0, 2.0] penalties = [0.0, 0.0, 0.0, 0.0, 1.0]
iteration 2 obj = 47.0 u = [2.0, 2.0, 2.0, 2.0, 1.0] penalties = [0.0, 0.0, 0.0, 0.0, 1.0]
iteration 3 obj = 46.5 u = [2.0, 2.0, 2.0, 2.0, 0.5] penalties = [0.0, 0.0, 0.0, 0.0, 1.0]
iteration 4 obj = 46.1666666667 u = [2.0, 2.0, 2.0, 2.0, 0.16666666666666669] penalties = [0.0, 0.0, 0.0, 0.0, 1.0]
iteration 5 obj = 46.0 u = [2.0, 2.0, 2.0, 2.0, 0.0] penalties = [0.0, 0.0, 0.0, 0.0, 1.0]
primal feasible & optimal
objective = 46.0
x = [
    [0, 1, 0]
    [0, 1, 0]
    [1, 0, 0]
    [0, 0, 1]
    [0, 0, 0]
]

Exercise for the reader: change the script to dualize the knapsack constraints instead of the set packing constraints. What is the result of this change in terms of convergence?
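
As a starting point, here is a minimal, untested sketch of the flipped dualization, reusing the data and variable names from the script above; the multiplier update would change analogously, with one multiplier per knapsack constraint:

# Keep the set packing constraints as hard constraints this time.
for x_i in x:
    model.addConstr(sum(x_i) <= 1)

# Dualize the knapsack constraints: each penalty takes the value of
# the slack b[j] - sum i: a_ij * x_ij.
penalties = [model.addVar() for _ in b]
model.update()
for j in range(len(b)):
    model.addConstr(
        penalties[j] == b[j] - sum(a[i][j] * x[i][j] for i in range(len(x)))
    )

# u[j] = Lagrangian multiplier for knapsack constraint j
u = [2.0] * len(b)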

2 comments:

  1. There is a typo in the mathematical formulation: b should have just a single subscript (j).

    Also, I'm not sure that I trust your stopping criterion. I think it is possible for your code to stop with a violation of one of the packing constraints and a corresponding penalty multiplier of zero.

  2. Thanks! I corrected the variable subscript.

    Regarding the latter comment: I suspect you are correct, but I'm having trouble locating the error. Is my understanding of the termination conditions correct (for each penalty and multiplier pair, at least one must be <= epsilon)?

    I might abstract this and package it as a Python module to make studying LR easier, and I want to make sure I get it right.
