Packages Used: No new packages are used in this notebook.
Question: You are planning an international trip and have the opportunity to purchase some estate-grown coffee and high-quality dried peppers while there that you can sell upon your return to a friend who supplies several local restaurants. You can sell the coffee and peppers to your friend for \$6 and \$15 per pound, respectively, and can purchase them at your destination for \$2 and \$6 per pound, respectively. How many pounds of coffee and peppers should you purchase?
The objective is to maximize the total profit. Unit profits for coffee and peppers are $ \pi_1 = \left( p_1 - c_1 \right) = 6 - 2 = \$4 $ and $ \pi_2 = \left( p_2 - c_2 \right) = 15 - 6 = \$9 $ per pound, respectively, so total profit is the linear function
$ \quad \mbox{Total profit:} \quad \mathit{\Pi}(x_1, x_2) = \pi_1 x_1 + \pi_2 x_2 = 4x_1 + 9x_2$,
where the decision variables $x_1$ and $x_2$ are pounds of coffee and peppers purchased, respectively. Since no limit is given on the amount of coffee and peppers that can be purchased and sold, the optimal answer is to purchase as much of each as you can since the gradient is always positive:
$\quad \nabla\!\mathit{\Pi}(x_1,x_2) = \left[ \begin{array}{c} 4 \\ 9 \end{array} \right] $
Limits: If limits were placed on each item, then the optimal solution would be to purchase the limit amounts of each.
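For example (a hypothetical illustration, not part of the stated problem), if purchases were limited to 10 pounds of each item, the optimal solution would be to buy both limit amounts, for a total profit of $\mathit{\Pi}(10, 10) = 4(10) + 9(10) = \$130$.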
Question: Why does this simple procedure work in this case, while more complicated analytical or numerical procedures were needed for the other examples considered so far?
using Optim, Plots, Statistics
P = [1 1; 6 1; 6 5] # Each row is a point
dist2(x,P) = sqrt.(sum((x' .- P).^2, dims = 2)) # Euclidean distances from point x to each row of P
xrng = 0:.1:7 # x-axis range
yrng = 0:.1:6 # y-axis range
contour(xrng, yrng, (x,y) -> sum(dist2([x, y],P)))
scatter!(P[:,1],P[:,2], label="Points in P", legend=:topleft)
# Using [0,0] as starting point for optimization
xy0 = [0., 0.]
scatter!([xy0[1]], [xy0[2]], markercolor=:blue, label="Start at [0,0]")
xyᵒ = optimize(xy -> sum(dist2(xy,P)), xy0).minimizer
scatter!([xyᵒ[1]], [xyᵒ[2]], markercolor=:yellow, markersize=8, label="End from [0,0]")
# Using mean of P as starting point for optimization
xy0 = mean(P, dims=1)[:]
scatter!([xy0[1]], [xy0[2]], markercolor=:green, label="Start at mean P")
xyᵒ = optimize(xy -> sum(dist2(xy,P)), xy0).minimizer
scatter!([xyᵒ[1]], [xyᵒ[2]], markercolor=:red, label="End from mean P")
surface(xrng, yrng, (x,y) -> sum(dist2([x, y],P)))
If possible optimal solutions need to be kept within certain limits, then it is necessary to include these limits as constraints in the optimization process.
Constraints on the feasibility of a solution can be incorporated into an optimization problem in two ways:
A polytope (or polygon in 2-D) can be used to indicate feasibility, either requiring feasible solutions to be inside or outside of the polytope. A convex polytope can be represented by a set of linear inequalities, where each inequality $ \mathbf{a}_i \mathbf{x} \le b_i $ defines half-space $i$ of the polytope (or side $i$ of a polygon) with boundary hyperplane $ \mathbf{a}_i \mathbf{x} = b_i $. The inequality $ \mathbf{a}_i \mathbf{x} \le b_i $ indicates points on or below the boundary, and $ \mathbf{a}_i \mathbf{x} \ge b_i \implies -\mathbf{a}_i \mathbf{x} \le -b_i $ indicates points on or above it. Each half-space can be represented as a row in a matrix $\mathbf{A} = \left[ \mathbf{a}_i \right]$ and as an element in a vector $\mathbf{b} = \left[ b_i \right]$, allowing the boolean-valued indicator function $S$ to be defined to determine if a point $\mathbf{x}$ is on or inside of the polytope:
$ \quad \mbox{In polytope: } S(\mathbf{x}) = \begin{cases} \mathsf{true}, & \mbox{if } \mathbf{Ax \le b} \\ \mathsf{false}, & \mbox{otherwise } \end{cases} $
Suppose any minimum-distance location is restricted to being outside of a polygonal region. The region could represent, for example, a small lake, where it would not be feasible to locate. In the following example, the infeasible region is a rectangle defined by x- and y-axis upper and lower bounds:
$ \begin{eqnarray*} \quad \textit{x}\textrm{-axis UB:}\quad x_1 \le 6 &\Leftrightarrow& 1x_1 + 0x_2 \le 6 \Leftrightarrow a_{1,1}x_1 + a_{1,2}x_2 \le b_1 \Leftrightarrow \mathbf{a}_1 \mathbf{x} \le b_1 \\ \textit{y}\textrm{-axis UB:}\quad x_2 \le 4 &\Leftrightarrow& 0x_1 + 1x_2 \le 4 \Leftrightarrow a_{2,1}x_1 + a_{2,2}x_2 \le b_2 \Leftrightarrow \mathbf{a}_2 \mathbf{x} \le b_2 \\ \textit{x}\textrm{-axis LB:}\quad x_1 \ge 3 &\Leftrightarrow& -x_1 \le -3 \Leftrightarrow -1x_1 + 0x_2 \le -3 \Leftrightarrow a_{3,1}x_1 + a_{3,2}x_2 \le b_3 \Leftrightarrow \mathbf{a}_3 \mathbf{x} \le b_3 \\ \textit{y}\textrm{-axis LB:}\quad x_2 \ge 1 &\Leftrightarrow& -x_2 \le -1 \Leftrightarrow 0x_1 - 1x_2 \le -1 \Leftrightarrow a_{4,1}x_1 + a_{4,2}x_2 \le b_4 \Leftrightarrow \mathbf{a}_4 \mathbf{x} \le b_4 \\ \end{eqnarray*} $
$\quad\mathbf{Ax \le b}: \quad \begin{bmatrix} a_{1,1} & a_{1,2} \\ a_{2,1} & a_{2,2} \\ a_{3,1} & a_{3,2} \\ a_{4,1} & a_{4,2} \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} \le \begin{bmatrix} b_1 \\ b_2 \\ b_3 \\ b_4 \end{bmatrix} \quad\Leftrightarrow\quad \left[ \begin{array}{rr} 1 & 0 \\ 0 & 1 \\ -1 & 0 \\ 0 & -1 \end{array} \right] \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} \le \left[ \begin{array}{r} 6 \\ 4 \\ -3 \\ -1 \end{array} \right]$
A = [ 1  0;   # x-axis UB: x₁ ≤ 6
      0  1;   # y-axis UB: x₂ ≤ 4
     -1  0;   # x-axis LB: x₁ ≥ 3
      0 -1]   # y-axis LB: x₂ ≥ 1
b = [6, 4, -3, -1]
S(x) = all(A*x[:] .<= b) # true if x is on or inside the polytope
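As a quick sanity check (an addition, not in the original notebook), the indicator function can be evaluated at a point inside and a point outside the rectangle:
S([4, 2]) # true: inside the rectangle 3 ≤ x₁ ≤ 6, 1 ≤ x₂ ≤ 4
S([0, 0]) # false: outside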
plot(Shape([3, 6, 6, 3],[1, 1, 4, 4]), legend=false, title=" \nInfeasible Region",
xlabel="x-axis", ylabel="y-axis")
First, solve for the unconstrained location:
P = [1 1; 6 1; 6 5]
dist2(x,P) = sqrt.(sum((x' .- P).^2, dims = 2))
xrng = 0:.1:7 # x-axis range
yrng = 0:.1:6 # y-axis range
contour(xrng, yrng, (x,y) -> sum(dist2([x, y],P)))
plot!(Shape([3, 6, 6, 3],[1, 1, 4, 4]), opacity=.5, label="Infeasible region")
scatter!(P[:,1],P[:,2], label="Points in P", legend=:left, markersize=8)
xyᵒ = optimize(xy -> sum(dist2(xy,P)), [0., 0.]).minimizer
scatter!([xyᵒ[1]],[xyᵒ[2]], label="Unconstrained optimal")
Next, modify the distance function dist2 by using the indicator function S to return a penalty cost of infinity (Inf) for any location in the infeasible region, since this is a minimization problem with the constraint that the solution be outside of the region; otherwise, just return the distance:
dist2cstr(xy,P) = S(xy) ? Inf : dist2(xy,P) # Infeasible if inside for min.
# Using [0,0] as starting point for optimization
xy0 = [0., 0.]
scatter!([xy0[1]], [xy0[2]], markercolor=:yellow, label="Start at [0,0]")
xyᵒ = optimize(xy -> sum(dist2cstr(xy,P)), xy0).minimizer
scatter!([xyᵒ[1]], [xyᵒ[2]], markercolor=:yellow, label="End from [0,0]")
# Using [2,5] as starting point for optimization
xy0 = [2., 5.]
scatter!([xy0[1]], [xy0[2]], markercolor=:red, label="Start at [2,5]")
xyᵒ = optimize(xy -> sum(dist2cstr(xy,P)), xy0).minimizer
scatter!([xyᵒ[1]], [xyᵒ[2]], markercolor=:red, label="End from [2,5]")
The mean of P is inside of the infeasible region. If this mean is used as the starting point for the optimization (see white circle inside the region, below), the Nelder-Mead procedure stops after four function calls and no iterations (but still reports success!); this is because infinity is a hard penalty that cannot be improved. Instead, one needs to either make sure the starting point is not inside the region or, if it is, that a soft penalty is used so that progress can be made to a feasible point:
# Using infeasible mean of P as starting point for optimization
function dist2cstr_with_plot_xy(xy,P) # Wrapper that also plots each point evaluated
    scatter!([xy[1]],[xy[2]], markersize=1)
    return dist2cstr(xy,P)
end
xy0 = mean(P, dims=1)[:]
scatter!([xy0[1]], [xy0[2]], markercolor=:white)
res = optimize(xy -> sum(dist2cstr_with_plot_xy(xy,P)), xy0)
xyᵒ = res.minimizer
display((res, xyᵒ))
scatter!([xyᵒ[1]], [xyᵒ[2]], markercolor=:white, legend=false)
( * Status: success
  * Candidate solution
    Final objective value: 9.834433e+00
  * Found with
    Algorithm: Nelder-Mead
  * Convergence measures
    √(Σ(yᵢ-ȳ)²)/n ≰ 1.0e-08
  * Work counters
    Seconds run: 0 (vs limit Inf)
    Iterations: 0
    f(x) calls: 4
, [6.525, 2.3333333333333335])
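For illustration, here is a minimal sketch of such a soft penalty (an addition to the original notebook; the penalty weight μ and the depth measure are assumed choices): instead of returning Inf, add a finite cost that grows with how far inside the infeasible region a point lies, so Nelder-Mead can still make progress from an infeasible start toward the boundary:
μ = 100.0 # Penalty weight (an assumed value)
depth(xy) = S(xy) ? minimum(b .- A*xy[:]) : 0.0 # Depth inside region; 0 on or outside boundary
fsoft(xy) = sum(dist2(xy, P)) + μ * depth(xy) # Soft-penalized objective for minimization
optimize(fsoft, mean(P, dims=1)[:]).minimizer # Progress is now possible from the mean of P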
A finite region surrounding a set of points P is needed to restrict possible solutions that maximize the minimum distance to the closest point in P; otherwise, the optimal solution is unbounded and Nelder-Mead fails to find a feasible solution:
P = [1 1; 6 1; 6 5; 1 5]
xrng = -3:.1:10 # x-axis range
yrng = -3:.1:9 # y-axis range
display(surface(xrng, yrng, (x,y) -> minimum(dist2([x, y], P)), camera=(45,60)))
contour(xrng, yrng, (x,y) -> minimum(dist2([x, y], P)))
display(scatter!(P[:,1],P[:,2], label="Points in P", legend=false))
optimize(xy -> -minimum(dist2(xy, P)), [0., 0.])
 * Status: failure (reached maximum number of iterations)
 * Candidate solution
    Final objective value: -Inf
 * Found with
    Algorithm: Nelder-Mead
 * Convergence measures
    √(Σ(yᵢ-ȳ)²)/n ≰ 1.0e-08
 * Work counters
    Seconds run: 0 (vs limit Inf)
    Iterations: 1000
    f(x) calls: 2629
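To see why the search diverges (an added check, not in the original): moving ever farther from the points in P only decreases the negated objective, so no finite minimizer exists:
# The objective keeps improving (decreasing) without bound as the point moves away from P
[-minimum(dist2([10.0^k, 10.0^k], P)) for k in 1:4]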
Restricting possible solutions to a finite region defined by $-2 \le x \le 9$ and $-2 \le y \le 8$ results in Nelder-Mead being able to find feasible solutions; in this case, there are multiple local optima that can be reached from different starting points:
A = [ 1  0;   # x-axis UB: x ≤ 9
      0  1;   # y-axis UB: y ≤ 8
     -1  0;   # x-axis LB: x ≥ -2
      0 -1]   # y-axis LB: y ≥ -2
b = [9, 8, 2, 2]
S(x) = all(A*x[:] .<= b) # true if x is inside the feasible region
plot!(Shape([-2, 9, 9, -2],[-2, -2, 8, 8]) , opacity=.25, label="Feasible region")
Similar to what was done in the previous example, the distance function dist2 is modified, but now the indicator function S is used to return a penalty cost of -Inf for any location outside of the feasible region, since this is a maximization problem with the constraint that the solution be inside of the region; otherwise, just return the distance:
dist2cstr(xy,P) = S(xy) ? dist2(xy,P) : -Inf # Feasible if inside for max.
# Using [0, 0] as starting point for optimization
xy0 = [0., 0.]
xyᵒ = optimize(xy -> -minimum(dist2cstr(xy,P)), xy0).minimizer
scatter!([xyᵒ[1]], [xyᵒ[2]], markercolor=:green, label="End from [0,0]")
# Using [7, 6] as starting point for optimization
xy0 = [7., 6.]
xyᵒ = optimize(xy -> -minimum(dist2cstr(xy,P)), xy0).minimizer
scatter!([xyᵒ[1]], [xyᵒ[2]], markercolor=:blue, label="End from [7,6]")
# Using [2, 2] as starting point for optimization
xy0 = [2., 2.]
xyᵒ = optimize(xy -> -minimum(dist2cstr(xy,P)), xy0).minimizer
display(scatter!([xyᵒ[1]], [xyᵒ[2]], markercolor=:red, label="End from [2,2]",
legend=:bottomright))
Question: You are planning an international trip and have the opportunity to purchase some estate-grown coffee and high-quality dried peppers while there that you can sell upon your return to a friend who supplies several local restaurants. You can sell the coffee and peppers to your friend for \$6 and \$15 per pound, respectively, and can purchase them at your destination for \$2 and \$6 per pound, respectively. You have a budget of \$120 to spend on these purchases, and you have estimated that you will be able to pack up to 30 pounds, in total, of both items in your knapsack since there is a maximum weight limit of 50 pounds on the one allowed checked bag and your clothes and the knapsack together weigh 20 pounds. How many pounds of coffee and peppers should you purchase?
The objective is to maximize the total profit from a mix of both items. Unit profits for coffee and peppers are $ \pi_1 = \left( p_1 - c_1 \right) = 6 - 2 = \$4 $ and $ \pi_2 = \left( p_2 - c_2 \right) = 15 - 6 = \$9 $ per pound, respectively, so total profit is the linear function
$ \quad \mbox{Total profit:} \quad \mathit{\Pi} = \pi_1 x_1 + \pi_2 x_2 = 4x_1 + 9x_2$,
where the decision variables $x_1$ and $x_2$ are pounds of coffee and peppers purchased, respectively. The budget and weight constraints define a polygonal region of feasible solutions:
$ \begin{eqnarray*} \quad\mbox{Budget constraint:} \quad \mathbf{a}_1 \mathbf{x} \le b_1 &\Leftrightarrow& a_{1,1}x_1 + a_{1,2}x_2 \le b_1 \\ &\Leftrightarrow& c_1 x_1 + c_2 x_2 \le 120 \Leftrightarrow 2x_1 + 6x_2 \le 120 \\ \mbox{Weight constraint:} \quad \mathbf{a}_2 \mathbf{x} \le b_2 &\Leftrightarrow& a_{2,1}x_1 + a_{2,2}x_2 \le b_2 \Leftrightarrow x_1 + x_2 \le 30 \\ \mathbf{Ax \le b}: \quad \begin{bmatrix} a_{1,1} & a_{1,2} \\ a_{2,1} & a_{2,2} \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} &\le& \begin{bmatrix} b_1 \\ b_2 \end{bmatrix} \Leftrightarrow \begin{bmatrix} 2 & 6 \\ 1 & 1 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} \le \begin{bmatrix} 120 \\ 30 \end{bmatrix} \end{eqnarray*}$
Also, the two rows $ \begin{bmatrix} -1 & 0 \end{bmatrix} $ and $ \begin{bmatrix} 0 & -1 \end{bmatrix} $ can be added to $\mathbf{A}$ and two $0$'s to $\mathbf{b}$ to represent the lower-bound constraints $x_1 \ge 0$ and $x_2 \ge 0$.
plot(Shape([0, 30, 15, 0],[0, 0, 15, 20]) , opacity=.25, label="Feasible region",
xlims=(0,65), ylims=(0,35), xlabel="Coffee (lb)", ylabel="Peppers (lb)",
aspect_ratio=:equal)
x₁ = 0:.1:65
x₂ = 0:.1:35
# Line function: Using sqrt(eps()) ≈ 1.5e-8 to avoid dividing by 0,
# where eps() is the machine epsilon, the gap between 1.0 and the next larger Float64
fline(x₁,a₁,a₂,b) = abs(a₂) > sqrt(eps()) ? (b - a₁*x₁)/a₂ : x₁ # Line function
plot!(x₁, [fline(i,2,6,120) for i in x₁], linewidth=3, label="Budget constraint")
plot!(x₁, [fline(i,1,1,30) for i in x₁], linewidth=3, label="Weight constraint")
fobj(x) = 4x[1] + 9x[2]
contour!(x₁, x₂, (x,y) -> fobj([x, y]))
The display of the feasible region and contour plot indicates that the decision-variable values that maximize total profit occur at the intersection of the Budget and Weight constraints. To solve for their intersection:
x₁ᵒ = optimize(x -> abs(fline(x,2,6,120) - fline(x,1,1,30)), 0, 60).minimizer
15.000000071039214
x₂ᵒ = 30 - x₁ᵒ
14.999999928960786
Πᵒ = fobj([x₁ᵒ, x₂ᵒ])
194.99999964480395
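As a cross-check (an addition, not in the original notebook), since both constraints are binding at the optimum, their intersection can also be solved directly as a 2×2 linear system using Julia's backslash operator:
[2 6; 1 1] \ [120, 30] # Solves 2x₁ + 6x₂ = 120 and x₁ + x₂ = 30, returning [15.0, 15.0]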
plot(Shape([0, 30, 15, 0],[0, 0, 15, 20]) , opacity=.25, label="Feasible region",
xlims=(0,65), ylims=(0,35), xlabel="Coffee (lb)", ylabel="Peppers (lb)",
aspect_ratio=:equal)
plot!(x₁, [fline(i,2,6,120) for i in x₁], linewidth=2, label="Budget constraint")
plot!(x₁, [fline(i,1,1,30) for i in x₁], linewidth=2, label="Weight constraint")
plot!(x₁, [fline(i,4,9,Πᵒ) for i in x₁], linewidth=2, label="Objective function")
scatter!([x₁ᵒ], [x₂ᵒ], label="Optimal solution")
Using the matrix $A$ and vector $b$ to define the feasible region, modify the objective function fobj by using the indicator function S to return a penalty of negative infinity (-Inf) for any point outside the feasible region, since this is a maximization problem; otherwise, just return the objective value:
A = [ 2  6;   # Budget constraint: 2x₁ + 6x₂ ≤ 120
      1  1;   # Weight constraint: x₁ + x₂ ≤ 30
     -1  0;   # Lower bound: x₁ ≥ 0
      0 -1]   # Lower bound: x₂ ≥ 0
b = [120, 30, 0, 0]
S(x) = all(A*x[:] .<= b) # true if x is in the feasible region
fobjcstr(x) = S(x) ? fobj(x) : -Inf # Feasible if inside for max
fobjcstr (generic function with 1 method)
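As a quick check (an addition), the penalized objective can be evaluated at a feasible and an infeasible point:
fobjcstr([15, 15]) # 195: feasible (both constraints binding)
fobjcstr([40, 0]) # -Inf: infeasible (violates the weight constraint)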
Starting at point [0, 0], Nelder-Mead terminates successfully, but it has not found the known (global) optimal solution at [15, 15], which corresponds to a total profit of \$195:
xy0 = [0., 0.] # Using [0,0] as starting point for optimization
res = optimize(xy -> -fobjcstr(xy), xy0) # Negate scalar objective for maximization
(res, res.minimizer)
( * Status: success
  * Candidate solution
    Final objective value: -1.835139e+02
  * Found with
    Algorithm: Nelder-Mead
  * Convergence measures
    √(Σ(yᵢ-ȳ)²)/n ≤ 1.0e-08
  * Work counters
    Seconds run: 0 (vs limit Inf)
    Iterations: 99
    f(x) calls: 202
, [3.513886499714739, 18.828704498252776])
xy0 = [0., 0.] # Using [0,0] as starting point for optimization
res = optimize(xy -> -fobjcstr(xy), xy0, LBFGS())
(res, res.minimizer)
( * Status: success
  * Candidate solution
    Final objective value: -0.000000e+00
  * Found with
    Algorithm: L-BFGS
  * Convergence measures
    |x - x'| = 0.00e+00 ≤ 0.0e+00
    |x - x'|/|x'| = NaN ≰ 0.0e+00
    |f(x) - f(x')| = NaN ≰ 0.0e+00
    |f(x) - f(x')|/|f(x')| = NaN ≰ 0.0e+00
    |g(x)| = Inf ≰ 1.0e-08
  * Work counters
    Seconds run: 0 (vs limit Inf)
    Iterations: 0
    f(x) calls: 1
    ∇f(x) calls: 1
, [0.0, 0.0])
L-BFGS, which uses finite differences to estimate the gradient, did not work, even though it reports a "success"! We can try to tighten the tolerance used by Nelder-Mead to decide when to terminate (was 1e-8, now 1e-12). The only result is that the number of iterations increases, but the solution stays the same:
res = optimize(xy -> -fobjcstr(xy), xy0, Optim.Options(g_tol = 1e-12))
(res, res.minimizer)
( * Status: success
  * Candidate solution
    Final objective value: -1.835139e+02
  * Found with
    Algorithm: Nelder-Mead
  * Convergence measures
    √(Σ(yᵢ-ȳ)²)/n ≤ 1.0e-12
  * Work counters
    Seconds run: 0 (vs limit Inf)
    Iterations: 125
    f(x) calls: 254
, [3.5138864990810075, 18.82870450030618])
Using different starting points that are located more towards the interior of the feasible region, the known optimal solution is found, but the number of iterations required by Nelder-Mead increases even though the starting point is getting closer to the optimal point at [15, 15]:
xy0 = [1., 1.]
res = optimize(xy -> -fobjcstr(xy), xy0)
(res, res.minimizer)
( * Status: success
  * Candidate solution
    Final objective value: -1.949998e+02
  * Found with
    Algorithm: Nelder-Mead
  * Convergence measures
    √(Σ(yᵢ-ȳ)²)/n ≤ 1.0e-08
  * Work counters
    Seconds run: 0 (vs limit Inf)
    Iterations: 119
    f(x) calls: 242
, [14.999806933063688, 15.000064355504307])
Clearly, Nelder-Mead and gradient-based methods are not effective procedures for solving problems with a linear objective function. Luckily, there are specialized procedures like the simplex method (1947) for solving these problems effectively.
The basic idea of the simplex method is as follows:
1. Pick a starting corner point of the feasible region.
2. Check whether moving along any edge of the region away from the corner point would improve the objective; if not, stop: the current corner point is optimal.
3. Otherwise, move along an improving edge to the adjacent corner point and repeat step 2.
The procedure will eventually terminate at the optimal corner point as long as the feasible region is bounded; otherwise, it will report that the problem is unbounded and that no finite optimal solution exists. It will always terminate because, at each corner point considered, the objective is strictly improving (e.g., strictly increasing if it is a maximization and strictly decreasing if it is a minimization problem).
The simplex method utilizes the same matrix $A$ and vector $b$ that were used by the penalty method to define the feasible region, except that the lower bounds on the decision variables can be removed because the variables are assumed to be greater than or equal to zero by default:
$ \begin{eqnarray*} \quad \mbox{Maximize} \quad 4x_1 + 9x_2 \\ \quad \mbox{subject to} \quad 2x_1 + 6x_2 &\le& 120 \\ x_1 + x_2 &\le& 30 \\ x_i &\ge& 0 \mbox{,} \end{eqnarray*} $
Linear algebra is used to solve a series of equations and, as such, each (inequality) constraint must first be transformed into an equation by adding a slack variable, converting the problem to 4-D:
$ \begin{eqnarray*} \quad \mbox{Maximize} \quad 4x_1 + 9x_2 + 0x_3 + 0x_4 \\ \quad \mbox{subject to} \quad 2x_1 + 6x_2 + x_3 &=& 120 \\ x_1 + x_2 + x_4 &=& 30 \\ x_i &\ge& 0 \mbox{.} \end{eqnarray*} $
Pick a starting corner point: Since this is a maximization problem and the point (0,0) is part of the feasible region, the two additional slack variables make it easy to define an initial basic feasible solution (0, 0, 120, 30) for the 4-D problem:
A = [2 6 1 0; 1 1 0 1]
b = [120, 30]
c = [4, 9, 0, 0]
x = [0, 0, 120, 30]
[c' 0; x' 0; A b]
4×5 Matrix{Int64}:
 4  9    0   0    0
 0  0  120  30    0
 2  6    1   0  120
 1  1    0   1   30
TC = c'*x
0
The two variables $x_3$ and $x_4$ define the initial basis; in the final optimal solution, the simplex method will have moved both of these variables out of the basis so that only the real, non-slack variables define the basis. The following procedure is termed the revised simplex method because it is more computationally efficient than the original simplex method (in particular, no costly matrix inversions are required).
Check if optimal; otherwise move to new corner point:
idxB = [3, 4] # Indices of variables in the initial basis
idxN = setdiff(1:length(c),idxB) # Indices of variables not in the initial basis
w = A[:,idxB]'\c[idxB] # Simplex multipliers
r = c[idxN]' - w'*A[:,idxN] # Reduced cost
1×2 adjoint(::Vector{Float64}) with eltype Float64:
 4.0  9.0
idxN2 = findall(r' .> 0) # For maximization, find any positive reduced cost
2-element Vector{Int64}:
 1
 2
if isempty(idxN2) println("Stop: Optimal solution found") end
idxin = minimum(idxN[idxN2]) # Select variable to enter basis
# (pick min index to avoid cycling (Bland's rule))
1
d = A[:,idxB]\-A[:,idxin] # Edge direction
2-element Vector{Float64}:
 -2.0
 -1.0
idxB2 = findall(d .< 0) # Find all negative directions
2-element Vector{Int64}:
 1
 2
if isempty(idxB2) error("Stop: Problem unbounded") end
α = -x[idxB[idxB2]]./d[idxB2] # Step lengths
2-element Vector{Float64}:
 60.0
 30.0
idxB2out = argmin(α) # Select variable with minimum α to leave basis
2
x[idxin] = α[idxB2out] # Update value of variable entering the basis
30.0
x[idxB] .+= α[idxB2out] * d # Update values of variables still in the basis
2-element view(::Vector{Int64}, [3, 4]) with eltype Int64:
 60
  0
idxB[idxB2[idxB2out]] = idxin # Switch basis variable
idxN = setdiff(1:length(c),idxB)
println("TC = ", c'*x, "\tx = ", x, "\tidxB = ", idxB, "\tidxN = ", idxN)
TC = 120 x = [30, 0, 60, 0] idxB = [3, 1] idxN = [2, 4]
Putting all of these steps into a single procedure:
Revised Simplex Method
A = [2 6 1 0; 1 1 0 1] # Constraint matrix
b = [120, 30] # Right-hand side vector
c = [4, 9, 0, 0] # "Cost" vector
idxB = [3, 4] # Indices of variable in the initial basis
idxN = setdiff(1:length(c),idxB) # Indices of variables not in the initial basis
x = [0, 0, 120, 30] # Solution vector
println("TC = ", c'*x, "\tx = ", x, "\tidxB = ", idxB, "\tidxN = ", idxN)
TC = 0 x = [0, 0, 120, 30] idxB = [3, 4] idxN = [1, 2]
done = false
while !done
    idxN = setdiff(1:length(c),idxB)  # Indices of variables not in the current basis
    println("TC = ", c'*x, "\tx = ", x, "\tidxB = ", idxB, "\tidxN = ", idxN)
    w = A[:,idxB]'\c[idxB]            # Simplex multipliers
    r = c[idxN]' - w'*A[:,idxN]       # Reduced cost
    idxN2 = findall(r' .> 0)          # For maximization, find any positive reduced cost
                                      # (for minimization, find any negative reduced cost)
    if isempty(idxN2)
        println("Stop: Optimal solution found")
        done = true
    else
        idxin = minimum(idxN[idxN2])  # Select variable to enter basis
        d = A[:,idxB]\-A[:,idxin]     # Edge direction
        idxB2 = findall(d .< 0)       # Find all negative directions
        if isempty(idxB2)
            error("Stop: Problem unbounded")
        end
        α = -x[idxB[idxB2]]./d[idxB2] # Step lengths
        idxB2out = argmin(α)          # Select variable with minimum α to leave basis
        x[idxin] = α[idxB2out]        # Update value of variable entering the basis
        x[idxB] .+= α[idxB2out] * d   # Update values of variables still in the basis
        idxB[idxB2[idxB2out]] = idxin # Switch basis variable
    end
end
TC = 0    x = [0, 0, 120, 30]   idxB = [3, 4]   idxN = [1, 2]
TC = 120  x = [30, 0, 60, 0]    idxB = [3, 1]   idxN = [2, 4]
TC = 195  x = [15, 15, 0, 0]    idxB = [2, 1]   idxN = [3, 4]
Stop: Optimal solution found
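As a final cross-check (an addition), the total profit of the terminal basic feasible solution matches the \$195 found earlier with the penalty method:
c'*x # 4(15) + 9(15) = 195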