Notes on Linear Programming

Dr. John Travis

(Adapted from http://ford.ieor.berkeley.edu/riot/Tools/InteractLP/)

Linear optimization is the problem of finding the optimal value (maximal or minimal, depending on the problem) of a linear function of a certain number of variables (usually labeled x1, x2, ..., xn), subject to a set of m linear constraints on these variables (equalities or inequalities).

Although it may seem quite theoretical at first, linear optimization has many practical applications to real problems. In particular, it is often used in industry, in government, and in the ecological sciences to minimize objective functions (production costs, the number of employees to hire, the quantity of pollutants released, ...) given a set of constraints (availability of workers, of machines, ...).

Linear programming is the practice of solving the linear problems described above: developing the algorithms and software that find an optimal solution to a problem if one exists, or that prove no solution exists otherwise.

Each problem in Linear Optimization can differ in terms of the objective (the goal can be either to minimize or to maximize a function) as well as in terms of the linear constraints (equality constraints, less-than-or-equal inequalities, greater-than-or-equal inequalities, ...). The linear program is therefore usually converted into a standard form.

The generic linear program is formulated as follows:
 

Minimize c1x1 + c2x2 + ... + cnxn (Objective function)

Subject to:

1st constraint: a1,1x1 + a1,2x2 + ... + a1,nxn = b1
2nd constraint: a2,1x1 + a2,2x2 + ... + a2,nxn = b2
...
m-th constraint: am,1x1 + am,2x2 + ... + am,nxn = bm

Nonnegativity: x1 >= 0, x2 >= 0, ..., xn >= 0

Or, using vectors and matrices:

Minimize C*X
Subject to: A*X = B
where X >= 0,
with C the (1*n) row-vector (c1, c2, ..., cn), X the (n*1) column-vector (x1, x2, ..., xn)^t, A the (m*n) matrix of the coefficients ai,j, and B the (m*1) column-vector (b1, b2, ..., bm)^t.
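For instance, this standard form maps directly onto the linprog solver in scipy, whose default variable bounds already impose X >= 0. The data below is made up purely for illustration:

    from scipy.optimize import linprog
    import numpy as np

    # Minimize C*X subject to A*X = B, X >= 0 (standard form; made-up data).
    C = np.array([1.0, 2.0, 0.0])
    A = np.array([[1.0,  1.0, 1.0],
                  [1.0, -1.0, 0.0]])
    B = np.array([4.0, 1.0])

    res = linprog(C, A_eq=A, b_eq=B)   # default bounds already give X >= 0
    print(res.x, res.fun)              # optimal X = (1, 0, 3), optimal value 1.0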

Conversions needed:

Conversion of "less than" inqualites to equalities

Suppose that the kth constraint is a less-than-or-equal inequality, written as follows:

ak,1x1 + ak,2x2 + ... + ak,nxn <= bk

It is equivalent to say that, for some value xn+1,
ak,1x1 + ak,2x2 + ... + ak,nxn + xn+1 = bk

where xn+1 is a nonnegative variable known as a slack variable.  So, we must consider a new column-vector X = (x1, x2, ..., xn, xn+1)^t.  To incorporate this into the original problem, add a column to matrix A with zeros everywhere except in the kth row (where we put a 1, so as to include xn+1 in the kth constraint), so that A becomes an m*(n+1) matrix.

Further, we must add an entry to vector C (simply one zero, since the new slack variable, xn+1, must not affect the value of the objective function) so that C becomes
(c1, c2, ..., cn, cn+1)
where cn+1 = 0.

So we have a new problem, with the same m constraints, but n+1 variables. If we repeat this process with all the "smaller than" inequalities, and introduce one slack variable for each of them, then we obtain a new problem where all the "smaller than" inequalities are replaced by equalities.
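As a minimal numpy sketch of this column bookkeeping (the system, the constraint index k, and the array names are illustrative only, not from the notes):

    import numpy as np

    # Hypothetical system whose 2nd constraint is  3*x1 + x2 <= 6.
    A = np.array([[1.0, 2.0],
                  [3.0, 1.0]])
    C = np.array([5.0, 4.0])

    # Add a column that is zero except for a 1 in row k, so the new slack
    # variable x3 appears only in the kth constraint, plus a 0 cost entry.
    k = 1
    col = np.zeros((A.shape[0], 1))
    col[k, 0] = 1.0
    A = np.hstack([A, col])     # A is now 2 x 3
    C = np.append(C, 0.0)       # C is now (5, 4, 0)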

Conversion of "greater than" inqualites to equalities
Suppose that the kth constraint is a bigger than equality labeled as follows:

ak,1x1+ak,2x2+... +ak,mxm >bk
It is equivalent to say that, for some value xm+1,
ak,1x1+ak,2x2+... +ak,mxm-xm+1=bk
with xm+1 a positive slack variable.  Proceed in a manner similar to above to modify A and C.
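Both conversions can be wrapped in one helper; this is only a sketch for small dense problems, and the function name and 'senses' encoding are invented here:

    import numpy as np

    def to_standard_form(A, b, c, senses):
        # Turn rows marked '<=' or '>=' into equalities by appending one
        # slack (+1) or surplus (-1) column per inequality; '=' rows pass.
        A = np.asarray(A, dtype=float)
        c = list(c)
        new_cols = []
        for i, s in enumerate(senses):
            if s == '=':
                continue
            col = np.zeros((A.shape[0], 1))
            col[i, 0] = 1.0 if s == '<=' else -1.0
            new_cols.append(col)
            c.append(0.0)               # slack/surplus costs nothing
        if new_cols:
            A = np.hstack([A] + new_cols)
        return A, np.asarray(b, dtype=float), np.asarray(c)

    # Example: x1 + x2 <= 4 and x1 - x2 >= 1 gain slack x3 and surplus x4.
    A2, b2, c2 = to_standard_form([[1, 1], [1, -1]], [4, 1], [2, 3], ['<=', '>='])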

Conversion of variables without a sign restriction

You have certainly noticed that the standard formulation of a linear program is quite restrictive, since it only considers nonnegative variables. In practice this is not a problem, since we can easily convert a problem where some variables are not required to be nonnegative into a problem with only nonnegative variables.

Indeed, suppose that the kth variable xk has no nonnegativity constraint.  We then introduce two "artificial variables" x'k and x''k and replace xk by the difference xk = x'k - x''k.  This way xk can still be either positive or negative, while x'k and x''k are both nonnegative.  The new variable column-vector is

(x1, x2, ..., x'k, x''k, ..., xn)^t,
where x'k >= 0 and x''k >= 0.

This duplication of variable xk into two variables obliges us to update matrix A and vector C: the kth column of A is replaced by the pair of columns (ak, -ak), and the cost ck by the pair (ck, -ck).
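In numpy terms, the update amounts to inserting a negated copy of column k and of cost entry ck (a sketch; the function name is invented):

    import numpy as np

    def split_free_variable(A, c, k):
        # Replace free variable x_k by x'_k - x''_k: insert a negated copy
        # of column k of A, and of cost entry c_k, right after position k.
        A = np.asarray(A, dtype=float)
        c = np.asarray(c, dtype=float)
        A_new = np.insert(A, k + 1, -A[:, k], axis=1)
        c_new = np.insert(c, k + 1, -c[k])
        return A_new, c_new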

Maximization Problems

You have certainly noticed that the standard formulation of a linear program is also restrictive in that it only considers minimization problems. In practice this is not a problem either, since it is very easy to convert a maximization problem into a minimization problem.

Indeed, suppose that your problem is:    Maximize c1x1 + c2x2 + ... + cnxn (Objective function).
This is strictly equivalent to:  Minimize -c1x1 - c2x2 - ... - cnxn,
provided we take the opposite of the optimal value of the objective function of this minimization problem as the optimal value for the original (maximization) problem.
So, all you need to do is negate the cost vector C, solve the resulting minimization problem, and negate the optimal objective value you obtain.
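scipy's linprog always minimizes, so it illustrates this trick directly (the problem data below is made up):

    from scipy.optimize import linprog

    # Maximize 3*x1 + 5*x2 subject to x1 <= 4, 2*x2 <= 12, 3*x1 + 2*x2 <= 18.
    c = [3, 5]
    A_ub = [[1, 0], [0, 2], [3, 2]]
    b_ub = [4, 12, 18]

    # linprog minimizes, so negate the costs, then negate the result back.
    res = linprog([-ci for ci in c], A_ub=A_ub, b_ub=b_ub)
    print(res.x, -res.fun)      # optimal point (2, 6), maximum value 36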

HOW TO SOLVE THE LINEAR PROGRAMMING PROBLEM

Geometric Solution:  Since the constraints are all linear, graph the boundary line of each constraint and shade the resulting "feasible region" in the plane.  Identify all corner points of this feasible region.  Draw the (linear) objective function through some point in the feasible region and note that this line is a "level curve": every point on it that lies in the feasible region gives the same value of the objective function.  Then, to minimize the objective function, determine which direction gives level curves with smaller values, and slide the level curve in that direction until you arrive at one of the corner points.
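The corner-point search can be automated by intersecting constraint boundaries pairwise and keeping the feasible intersections (the region below is invented for illustration):

    import numpy as np
    from itertools import combinations

    # Minimize 2*x + 3*y over: x + y >= 2, x <= 3, y <= 3, x >= 0, y >= 0.
    # Every constraint is stored in the uniform form a*x + b*y <= c.
    cons = np.array([[-1.0, -1.0, -2.0],    # x + y >= 2
                     [ 1.0,  0.0,  3.0],    # x <= 3
                     [ 0.0,  1.0,  3.0],    # y <= 3
                     [-1.0,  0.0,  0.0],    # x >= 0
                     [ 0.0, -1.0,  0.0]])   # y >= 0
    c_obj = np.array([2.0, 3.0])

    corners = []
    for i, j in combinations(range(len(cons)), 2):
        M = cons[[i, j], :2]
        if abs(np.linalg.det(M)) < 1e-9:
            continue                                # parallel boundary lines
        p = np.linalg.solve(M, cons[[i, j], 2])     # boundary intersection
        if np.all(cons[:, :2] @ p <= cons[:, 2] + 1e-9):
            corners.append(p)                       # keep feasible corners only

    best = min(corners, key=lambda p: float(c_obj @ p))
    print(best, c_obj @ best)       # minimum at corner (2, 0), value 4.0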

Theorem 9.1:  Suppose the feasible region of a linear program is a nonempty and bounded convex set.  Then the objective function attains both a maximum and a minimum value, each occurring at an extreme point of the region.  If the feasible region is unbounded, the objective function need not attain its optimal values; however, if a maximum or minimum does occur, it must occur at one of the extreme points.

HOMEWORK: page 310 #2, 4

Simplex Method:  As above, first convert all inequalities to equalities by using slack variables, and convert all variables to nonnegative variables using the "artificial variables" above, if needed.  Consider the set of original variables as comprising the independent (nonbasic) set and the slack variables as comprising the dependent (basic) set.

STEPS:

1. Form the initial tableau from the constraint rows and the objective row; the initial corner point sets every independent variable to 0.
2. Check the objective row: if no entry indicates the objective can still be improved, stop; the current corner point is optimal.
3. Otherwise, choose an entering variable from the independent set (a column that improves the objective).
4. Choose a leaving variable from the dependent set by the minimum ratio test: among rows with a positive entry in the entering column, take the one with the smallest ratio of right-hand side to that entry.  If the entering column has no positive entry, the problem is unbounded.
5. Pivot on the chosen entry, exchanging the entering and leaving variables between the two sets, and return to step 2.
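A compact tableau implementation of these steps, for the common case of maximizing c.x subject to A x <= b, x >= 0 with b >= 0 (a sketch only: it uses the most-negative-cost entering rule and ignores degeneracy and cycling):

    import numpy as np

    def simplex(c, A, b):
        # Maximize c.x subject to A x <= b, x >= 0, assuming b >= 0.
        # Tableau: constraint rows [A | I | b], objective row [-c | 0 | 0].
        A, b, c = np.asarray(A, float), np.asarray(b, float), np.asarray(c, float)
        m, n = A.shape
        T = np.zeros((m + 1, n + m + 1))
        T[:m, :n] = A
        T[:m, n:n + m] = np.eye(m)      # slack columns: initial dependent set
        T[:m, -1] = b
        T[-1, :n] = -c
        basis = list(range(n, n + m))
        while True:
            j = int(np.argmin(T[-1, :-1]))      # step 3: entering column
            if T[-1, j] >= -1e-12:
                break                           # step 2: optimal, stop
            col = T[:m, j]
            if np.all(col <= 1e-12):
                raise ValueError("problem is unbounded")
            ratios = np.full(m, np.inf)
            pos = col > 1e-12
            ratios[pos] = T[:m, -1][pos] / col[pos]
            i = int(np.argmin(ratios))          # step 4: minimum ratio test
            T[i] /= T[i, j]                     # step 5: pivot ...
            for r in range(m + 1):
                if r != i:
                    T[r] -= T[r, j] * T[i]      # ... clear rest of column j
            basis[i] = j                        # swap entering/leaving variables
        x = np.zeros(n + m)
        x[basis] = T[:m, -1]
        return x[:n], T[-1, -1]                 # optimal point, objective value

    # Example: maximize 3*x1 + 5*x2 with x1 <= 4, 2*x2 <= 12, 3*x1 + 2*x2 <= 18.
    print(simplex([3, 5], [[1, 0], [0, 2], [3, 2]], [4, 12, 18]))   # (2, 6), 36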

Sensitivity Analysis:

THE BASIC CERTIFICATES

When you solve a problem in linear optimization, one thing that you would usually like to do is prove that your conclusions are true, i.e. that your problem really is infeasible, or really is unbounded, or that the optimal solution you propose really is optimal. For this purpose, the theory of linear programming provides us with systems of equations and inequalities for each of these three possible conclusions. If we can solve such a system (i.e. find vectors such as Y, X, or W that satisfy it, as you will see below), that solution proves the corresponding conclusion is true. These solutions are called the basic certificates.

Certificate of Infeasibility

The certificate of infeasibility consists of one (1*m) row-vector Y such that:
  1. Y * A <= 0.
  2. Y * B > 0.
Indeed, if there existed any feasible vector X for the problem (i.e. a vector X such that A*X = B, X >= 0), then for any vector Y satisfying inequality (1.), we would have:
Y*A <= 0
=> (Y*A)*X <= 0 (since X >= 0)
=> Y*B <= 0 (since A*X = B)
=> no vector Y that satisfies (1.) can satisfy (2.).
So exhibiting such a Y proves that no feasible X can exist.

Certificate of Unboundedness

The certificate of unboundedness consists of two column-vectors, X and W, such that:
  1. A*X = B.
  2. X >= 0.
  3. A*W = 0.
  4. W >= 0.
  5. C*W < 0.
Indeed, (1.) and (2.) guarantee that X is a feasible solution to the problem. (3.) and (4.) guarantee that, for any number l > 0, X + l*W is still a feasible solution, since A*(X + l*W) = A*X + l*(A*W) = B + 0 = B and X + l*W >= 0. Last, (5.) guarantees that, if we choose l large enough, we can find a feasible solution (equal to X + l*W) that gives a value of the objective function as small as we want, since C*(X + l*W) = (C*X) + l*(C*W), with C*W < 0 and l > 0 as large as we want.
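A tiny numeric check (the problem is invented to be obviously unbounded):

    import numpy as np

    # Minimize C*X subject to A*X = B, X >= 0.
    C = np.array([-1.0, 0.0])
    A = np.array([[1.0, -1.0]])
    B = np.array([0.0])

    X = np.array([0.0, 0.0])    # feasible: A*X = B and X >= 0
    W = np.array([1.0, 1.0])    # A*W = 0, W >= 0, and C*W = -1 < 0
    lam = 1000.0
    print(A @ (X + lam * W))    # [0.]     -> still feasible for any lam > 0
    print(C @ (X + lam * W))    # -1000.0  -> objective decreases without bound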
 

Certificate of Optimality

The certificate of optimality consists of two vectors, a column-vector X and a (1*m) row-vector Y, such that:
  1. A*X = B.
  2. X >= 0.
  3. Y*A <= C.
  4. [C - (Y*A)]*X = 0.
Indeed, (1.) and (2.) guarantee that X is a feasible solution to the problem. (3.) guarantees that, for any feasible solution X, C*X >= Y*B (we obtain this result by multiplying (3.) on the right by X >= 0 and replacing A*X by its value, i.e. B). So, Y*B is a lower bound on the value of the objective function over all feasible solutions.
(4.) guarantees that C*X = (Y*A)*X = Y*B, i.e. that this lower bound is attained.
So, X is a feasible solution, and the value of the objective function at X is equal to the lower bound. Thus, X is an optimal solution.
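A tiny numeric check of all four conditions (the problem and the pair X, Y are invented for illustration):

    import numpy as np

    # Minimize C*X subject to A*X = B, X >= 0.
    C = np.array([1.0, 2.0])
    A = np.array([[1.0, 1.0]])
    B = np.array([1.0])

    X = np.array([1.0, 0.0])    # proposed optimal solution
    Y = np.array([1.0])         # proposed certificate

    print(A @ X)                # [1.]    -> A*X = B and X >= 0 (feasible)
    print(Y @ A)                # [1. 1.] -> Y*A <= C = (1, 2)
    print((C - Y @ A) @ X)      # 0.0     -> condition 4 holds
    print(C @ X, Y @ B)         # 1.0 1.0 -> objective equals the lower bound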