2. Loc 8: Discrete Location and MILP¶

ISE 754, Fall 2024

Packages Used: Functions from the following packages are used in this notebook for the first time:

  • JuMP: a modeling language for mathematical optimization embedded in Julia (see its documentation).

  • Cbc: Cbc.jl, Julia wrapper for the open-source COIN-OR Branch and Cut (Cbc) solver.

  • Gurobi: Gurobi.jl, a Julia interface to the commercial Gurobi Optimizer.

Mixed-integer programming is used when some decision variables must be integer-valued and, more importantly, when decisions need to be made as part of the solution procedure; these "internal decisions" can only be implemented using discrete (typically binary) decision variables.

  • Mixed-integer program (MIP): NLP + some integer variables
  • Mixed-integer linear program (MILP): LP + some integer variables
  • Integer linear program (ILP): LP + all integer variables
  • Binary integer program (BIP): LP + all binary variables

In the following, the focus will be on MILP because it has the widest range of applications. Due to the difficulty of implementing effective solution procedures, MIP with nonlinear elements is usually restricted to only a handful of special NLP models like quadratic programming (QP).

1. Solving a MILP¶

Branch and Bound Steps¶

$ \begin{eqnarray*} \quad \mbox{Maximize} \quad 6x_1 + 8x_2 \\ \quad \mbox{subject to} \quad 2x_1 + 3x_2 &\le& 11 \\ 2x_1 &\le& 7 \\ x_1, x_2 &\ge& 0 \mbox{, integer} \end{eqnarray*} $

[Figures: branch-and-bound tree at successive steps (images not included)]

For a maximization problem, solving the problem as an LP without the integer restrictions provides an upper bound (UB) on the integer-optimal solution. Initially, the lower bound (LB) is zero because the variables are restricted to being nonnegative. Once the first integer-feasible solution is found (termed the incumbent solution), its objective value becomes the current lower bound. The difference between the UB and LB defines the gap, and the search continues through the branch-and-bound tree until the gap falls below a threshold. A node is fathomed, that is, eliminated from the search tree, when its LP relaxation is infeasible, when its bound is no better than the incumbent, or when its solution is integer feasible.
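The UB/LB bookkeeping used in the cells below can be captured in two one-line helpers (illustrative only, not part of the original notebook cells):

```julia
# Absolute and relative optimality gaps for a maximization problem:
# UB is the best LP-relaxation bound and LB is the incumbent objective.
absgap(UB, LB) = UB - LB
relgap(UB, LB) = (UB - LB) / abs(UB)

absgap(31.333333333333336, 26.0)   # gap at node 2 below: ≈ 5.33
```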

First, solve as an LP (branch node 0 in tree):

In [1]:
# Node 0: Initial LP
using JuMP, Cbc

m = Model(Cbc.Optimizer)
@variable(m, 0 <= x₁ )                         # Continuous variable
@variable(m, 0 <= x₂ )
@objective(m, Max, 6x₁ + 8x₂ )
@constraint(m, 2x₁ + 3x₂ <= 11 )
@constraint(m, 2x₁ <= 7 )
print(m)
optimize!(m)
UB = objective_value(m)
LB = 0
println("\nObj: ", objective_value(m), ", x₁: ", value(x₁), ", x₂: ", value(x₂))
println(" UB: ", UB, ", LB: ", LB, ", Gap: ", UB - LB, "\n")
$$ \begin{aligned} \max\quad & 6 x₁ + 8 x₂\\ \text{Subject to} \quad & 2 x₁ + 3 x₂ \leq 11\\ & 2 x₁ \leq 7\\ & x₁ \geq 0\\ & x₂ \geq 0\\ \end{aligned} $$
Obj: 31.666666666666664, x₁: 3.5, x₂: 1.3333333333333333
 UB: 31.666666666666664, LB: 0, Gap: 31.666666666666664

Presolve 0 (-2) rows, 0 (-2) columns and 0 (-3) elements
Optimal - objective value 31.666667
After Postsolve, objective 31.666667, infeasibilities - dual 0 (0), primal 0 (0)
Optimal objective 31.66666667 - 0 iterations time 0.002, Presolve 0.00

Next, select one of the fractional decision variables and branch by adding two integer constraints to the tree. The variable can be selected using the most-fractional rule, the least-fractional rule, or many other criteria.
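The most-fractional rule picks the variable whose value is closest to halfway between two integers. A minimal sketch (the helper names are illustrative, not part of the notebook's cells):

```julia
# Distance of a value from its nearest integer; the most fractional
# variable is the one whose distance is closest to 0.5.
frac(v) = abs(v - round(v))

# Index of the most fractional component of an LP solution vector.
most_fractional(x) = argmax(frac.(x))

x = [3.5, 1.3333333333333333]   # node 0 LP solution above
most_fractional(x)              # → 1, so branch on x₁
```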

In [2]:
# Node 1
@constraint(m, c1, x₁ <= 3)
optimize!(m)
UB = objective_value(m)
println("\nObj: ", objective_value(m), ", x₁: ", value(x₁), ", x₂: ", value(x₂))
println(" UB: ", UB, ", LB: ", LB, ", Gap: ", UB - LB, "\n")
Obj: 31.333333333333336, x₁: 3.0, x₂: 1.6666666666666667
 UB: 31.333333333333336, LB: 0, Gap: 31.333333333333336

Presolve 0 (-3) rows, 0 (-2) columns and 0 (-4) elements
Optimal - objective value 31.333333
After Postsolve, objective 31.333333, infeasibilities - dual 0 (0), primal 0 (0)
Optimal objective 31.33333333 - 0 iterations time 0.002, Presolve 0.00
In [3]:
# Node 2: Incumbent (integer-feasible solution) found
@constraint(m, c2, x₂ <= 1)
optimize!(m)
LB = objective_value(m)
println("\nObj: ", objective_value(m), ", x₁: ", value(x₁), ", x₂: ", value(x₂))
println(" UB: ", UB, ", LB: ", LB, ", Gap: ", UB - LB, "\n")
Obj: 26.0, x₁: 3.0, x₂: 1.0
 UB: 31.333333333333336, LB: 26.0, Gap: 5.333333333333336

Presolve 0 (-4) rows, 0 (-2) columns and 0 (-5) elements
Optimal - objective value 26
After Postsolve, objective 26, infeasibilities - dual 0 (0), primal 0 (0)
Optimal objective 26 - 0 iterations time 0.002, Presolve 0.00
In [4]:
# Node 3
delete(m, c2)
@constraint(m, c3, x₂ >= 2)
optimize!(m)
UB = objective_value(m)
println("\nObj: ", objective_value(m), ", x₁: ", value(x₁), ", x₂: ", value(x₂))
println(" UB: ", UB, ", LB: ", LB, ", Gap: ", UB - LB, "\n")
Obj: 31.0, x₁: 2.5, x₂: 2.0
 UB: 31.0, LB: 26.0, Gap: 5.0

Presolve 1 (-3) rows, 2 (0) columns and 2 (-3) elements
0  Obj 16 Primal inf 1.6666657 (1) Dual inf 16.999998 (2)
1  Obj 31
Optimal - objective value 31
After Postsolve, objective 31, infeasibilities - dual 0 (0), primal 0 (0)
Optimal objective 31 - 1 iterations time 0.002, Presolve 0.00
In [5]:
# Node 4
@constraint(m, c4, x₁ <= 2)
optimize!(m)
UB = objective_value(m)
println("\nObj: ", objective_value(m), ", x₁: ", value(x₁), ", x₂: ", value(x₂))
println(" UB: ", UB, ", LB: ", LB, ", Gap: ", UB - LB, "\n")
Obj: 30.666666666666668, x₁: 2.0, x₂: 2.3333333333333335
 UB: 30.666666666666668, LB: 26.0, Gap: 4.666666666666668

Presolve 0 (-5) rows, 0 (-2) columns and 0 (-6) elements
Optimal - objective value 30.666667
After Postsolve, objective 30.666667, infeasibilities - dual 0 (0), primal 0 (0)
Optimal objective 30.66666667 - 0 iterations time 0.002, Presolve 0.00
In [6]:
# Node 5: New incumbent found
@constraint(m, c5, x₂ <= 2)
optimize!(m)
LB = objective_value(m)
println("\nObj: ", objective_value(m), ", x₁: ", value(x₁), ", x₂: ", value(x₂))
println(" UB: ", UB, ", LB: ", LB, ", Gap: ", UB - LB, "\n")
Obj: 28.0, x₁: 2.0, x₂: 2.0
 UB: 30.666666666666668, LB: 28.0, Gap: 2.666666666666668

Presolve 0 (-6) rows, 0 (-2) columns and 0 (-7) elements
Optimal - objective value 28
After Postsolve, objective 28, infeasibilities - dual 0 (0), primal 0 (0)
Optimal objective 28 - 0 iterations time 0.002, Presolve 0.00
In [7]:
# Node 6: New incumbent found, could stop since absolute gap < 1
delete(m, c5)
@constraint(m, c6, x₂ >= 3)
optimize!(m)
LB = objective_value(m)
println("\nObj: ", objective_value(m), ", x₁: ", value(x₁), ", x₂: ", value(x₂))
println(" UB: ", UB, ", LB: ", LB, ", Gap: ", UB - LB, "\n")
Obj: 30.0, x₁: 0.9999999999999998, x₂: 3.0
 UB: 30.666666666666668, LB: 30.0, Gap: 0.6666666666666679

Presolve 1 (-5) rows, 2 (0) columns and 2 (-5) elements
0  Obj 24 Primal inf 0.66666567 (1) Dual inf 16.999998 (2)
1  Obj 30
Optimal - objective value 30
After Postsolve, objective 30, infeasibilities - dual 0 (0), primal 0 (0)
Optimal objective 30 - 1 iterations time 0.002, Presolve 0.00
In [8]:
# Node 7: Fathomed (node deleted from tree) since infeasible
delete(m, [c4, c6])
@constraint(m, c7, x₁ >= 3)
optimize!(m)
Presolve determined that the problem was infeasible with tolerance of 1e-08
Analysis indicates model infeasible or unbounded
0  Obj -0 Primal inf 4.9999998 (2) Dual inf 14 (2)
2  Obj 31 Primal inf 0.4999999 (1)
Primal infeasible - objective value 31
PrimalInfeasible objective 31 - 2 iterations time 0.002
In [9]:
# Node 8: Fathomed, infeasible, can stop since tree has been searched
delete(m, [c1, c3, c7])
@constraint(m, c8, x₁ >= 4)
optimize!(m)
Presolve determined that the problem was infeasible with tolerance of 1e-08
Analysis indicates model infeasible or unbounded
0  Obj -0 Primal inf 3.9999999 (1) Dual inf 14 (2)
2  Obj 31.666667 Primal inf 0.4999999 (1)
Primal infeasible - objective value 31.666667
PrimalInfeasible objective 31.66666667 - 2 iterations time 0.002

At node 6, the absolute value of the gap was less than one, and as a result, the procedure could stop. In practice, the relative percentage gap is typically monitored, and the search continues until the gap approaches zero or the complete tree has been searched; as a result, nodes 7 and 8 might be evaluated before being fathomed. Since the branch-and-bound tree can grow exponentially, in many applications the search is stopped once the relative gap falls below a threshold such as 1%.
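Because the feasible region of this example is tiny, the branch-and-bound result can be sanity-checked by brute-force enumeration of the integer points (pure Julia, no solver needed):

```julia
# Enumerate all integer points satisfying 2x₁ + 3x₂ ≤ 11 and 2x₁ ≤ 7;
# the constraints imply x₁ ≤ 3 and x₂ ≤ 3, so those ranges suffice.
function brute_force()
    best, xbest = -Inf, (0, 0)
    for x1 in 0:3, x2 in 0:3
        if 2x1 + 3x2 <= 11 && 2x1 <= 7
            obj = 6x1 + 8x2
            if obj > best
                best, xbest = obj, (x1, x2)
            end
        end
    end
    return best, xbest
end

brute_force()    # (30, (1, 3)) — matches the B&B optimum
```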

Solve as an ILP¶

Solve as an ILP (integer linear program) directly:

In [10]:
m = Model(Cbc.Optimizer)
@variable(m, 0 <= x₁, Int )                     # Integer variable
@variable(m, 0 <= x₂, Int )
@objective(m, Max, 6x₁ + 8x₂ )
@constraint(m, 2x₁ + 3x₂ <= 11 )
@constraint(m, 2x₁ <= 7 )
print(m)
set_attribute(m, "logLevel", 0)                 # Added to suppress output 
optimize!(m)
println("\nObj: ", objective_value(m), ", x₁: ", value(x₁), ", x₂: ", value(x₂), "\n")
$$ \begin{aligned} \max\quad & 6 x₁ + 8 x₂\\ \text{Subject to} \quad & 2 x₁ + 3 x₂ \leq 11\\ & 2 x₁ \leq 7\\ & x₁ \geq 0\\ & x₂ \geq 0\\ & x₁ \in \mathbb{Z}\\ & x₂ \in \mathbb{Z}\\ \end{aligned} $$
Obj: 30.0, x₁: 1.0, x₂: 3.0

Here, the Cbc message level set_attribute(m, "logLevel", 2) is used instead of set_attribute(m, "logLevel", 0) so that details of the branch and bound procedure are output:

In [11]:
m = Model(Cbc.Optimizer)
@variable(m, 0 <= x₁, Int )                     # Integer variable
@variable(m, 0 <= x₂, Int )
@objective(m, Max, 6x₁ + 8x₂ )
@constraint(m, 2x₁ + 3x₂ <= 11 )
@constraint(m, 2x₁ <= 7 )
set_attribute(m, "logLevel", 2)
optimize!(m)
println("\nObj: ", objective_value(m), ", x₁: ", value(x₁), ", x₂: ", value(x₂), "\n")
Obj: 30.0, x₁: 1.0, x₂: 3.0

Welcome to the CBC MILP Solver 
Version: 2.10.8 
Build Date: Jan  1 1970 

command line - Cbc_C_Interface -logLevel 2 -solve -quit (default strategy 1)
logLevel was changed from 1 to 2
Continuous objective value is 31.6667 - 0.00 seconds
Cgl0004I processed model has 1 rows, 2 columns (2 integer (0 of which binary)) and 2 elements
Cutoff increment increased from 1e-05 to 1.9999
Cbc0045I Heuristic feasibility pump took 0 seconds (no good)
Cbc0045I Heuristic rounding took 0 seconds (no good)
Cbc0045I Heuristic greedy cover took 0 seconds (no good)
Cbc0045I Heuristic greedy equality took 0 seconds (no good)
Cbc0045I Heuristic DiveCoefficient took 0 seconds (good)
Cbc0012I Integer solution of -26 found by DiveCoefficient after 0 iterations and 0 nodes (0.01 seconds)
Cgl0000I Cut generators found to be infeasible! (or unbounded)
Pre-processing says infeasible
Cbc0045I Heuristic RINS took 0 seconds (no good)
Cbc0046I Root node pass 1, 1 rows, 0 total tight cuts  -  objective -31.333333
Cbc0046I Root node pass 2, 2 rows, 1 total tight cuts  -  objective -30
Cbc0012I Integer solution of -30 found by DiveCoefficient after 1 iterations and 0 nodes (0.01 seconds)
Cbc0031I 1 added rows had average density of 2
Cbc0013I At root node, 1 cuts changed objective from -31.333333 to -30 in 2 passes
Cbc0014I Cut generator 0 (Probing) - 1 row cuts average 2.0 elements, 1 column cuts (1 active)  in 0.000 seconds - new frequency is 1
Cbc0014I Cut generator 1 (Gomory) - 1 row cuts average 2.0 elements, 0 column cuts (0 active)  in 0.000 seconds - new frequency is 1
Cbc0014I Cut generator 2 (Knapsack) - 0 row cuts average 0.0 elements, 0 column cuts (0 active)  in 0.001 seconds - new frequency is -100
Cbc0014I Cut generator 3 (Clique) - 0 row cuts average 0.0 elements, 0 column cuts (0 active)  in 0.000 seconds - new frequency is -100
Cbc0014I Cut generator 4 (MixedIntegerRounding2) - 0 row cuts average 0.0 elements, 0 column cuts (0 active)  in 0.000 seconds - new frequency is -100
Cbc0014I Cut generator 5 (FlowCover) - 0 row cuts average 0.0 elements, 0 column cuts (0 active)  in 0.000 seconds - new frequency is -100
Cbc0001I Search completed - best objective -30, took 1 iterations and 0 nodes (0.01 seconds)
Cbc0035I Maximum depth 0, 0 variables fixed on reduced cost
Cuts at root node changed objective from -31.3333 to -30
Probing was tried 2 times and created 2 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
Gomory was tried 2 times and created 1 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
Knapsack was tried 2 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.001 seconds)
Clique was tried 2 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
MixedIntegerRounding2 was tried 2 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
FlowCover was tried 2 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
TwoMirCuts was tried 1 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)
ZeroHalf was tried 1 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)

Result - Optimal solution found

Objective value:                30.00000000
Enumerated nodes:               0
Total iterations:               1
Time (CPU seconds):             0.01
Time (Wallclock seconds):       0.01

Total time (CPU seconds):       0.01   (Wallclock seconds):       0.01

MIP Solvers¶

JuMP supports a large number of solvers (full list). The following table lists the most well-known commercial and open-source MIP solvers:

  • CPLEX: IBM, the first commercial solver
  • Gurobi: developed by Robert Bixby
  • FICO Xpress: used by LLamasoft
  • SAS/OR: part of the SAS system (not supported in JuMP)
  • Cbc: COIN-OR open-source solver
  • GLPK: GNU open-source solver from the Free Software Foundation

The Logistics Engineering Software Stack:¶

[Figure: the logistics engineering software stack (image not included)]

Internal Decisions¶

In an LP, no decision variable can be restricted to being integer-valued, and no decisions can be made (internally) as part of the solution procedure. In practice, what usually necessitates representing a problem as a MILP is that one or more discrete decisions need to be made as part of the solution procedure. This leads to the following uses of discrete decision variables in a MILP:

  1. Integer variable: variable is inherently discrete
  • Rounding continuous variables to the nearest integer can result in a suboptimal or infeasible solution.
  • Rounding all variables up (down) in a min (max) problem may be feasible but can result in a poor-quality solution.
  • Example: rounding the LP solution $x_1 = 3.5$ from the example above to $4$ would make the problem infeasible since the constraint $2x_1 \le 7$ would be violated.
  2. Direct decision: internal decision variables indicate a choice between two or more alternatives
  • Binary variables are used, for example, to represent a yes/no choice, to indicate and/or count membership, or to define a relationship.
  • Constraints can be used to represent logical conditions.
  • Examples: set covering, bin packing.
  3. Indirect control: internal decision variables control other continuous and discrete variables
  • Typically, an indirect variable controls another decision variable by turning its lower and/or upper bounds on or off.
    • Note: although taking the product of two variables might seem an easier way to implement control, it would make the model nonlinear.
  • Important simplification: conditions only need to be controlled when the solver would otherwise want to violate a constraint to improve the solution.
  • Examples: UFL problem (binary variable used to decide if a NF is located at a site), fixed-charge problems, semi-continuous variables (used, e.g., to represent minimum order quantities), one-time setup costs, minimum/maximum/absolute values.
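The indirect-control pattern in item 3 is the linking constraint $x \le M y$: when the binary $y$ is 0, the continuous $x$ is forced to 0, and when $y$ is 1, $x$ is free up to its bound $M$. A minimal numeric illustration (the value of M and the test points are assumptions for the sketch):

```julia
M = 10.0    # a known upper bound on x (assumed for illustration)

# True when (x, y) satisfies the linking constraints 0 ≤ x ≤ M·y, y ∈ {0, 1}.
linked(x, y) = y in (0, 1) && 0 <= x <= M * y

linked(4.0, 1)   # true:  y = 1 allows flow up to M
linked(4.0, 0)   # false: y = 0 forces x = 0
linked(0.0, 0)   # true
```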

2. MILP Formulation of UFL¶

Ex: Same example used in Loc 5¶

Data: Asheville, Statesville, Greensboro, Raleigh, and Wilmington.

In [12]:
d1(x1, x2) = length(x1) == length(x2) ? sum(abs.(x1 .- x2)) : 
    error("Inputs not same length.")
D1(X₁, X₂) = [d1(i, j) for i in eachrow(X₁), j in eachrow(X₂)]

P = [50 150 220 295 420]'
r, f = 1, 1
w = r * f
k = [150, 200, 150, 150, 200]
C = w * D1(P, P)
Out[12]:
5×5 Matrix{Int64}:
   0  100  170  245  370
 100    0   70  145  270
 170   70    0   75  200
 245  145   75    0  125
 370  270  200  125    0

Weak formulation of the problem:

$ \begin{eqnarray*} \quad \mbox{Minimize} \quad \sum_{i \in N} k_i y_i + \sum_{i \in N} \sum_{j \in M} c_{ij} x_{ij}\\ \quad \mbox{subject to} \quad \sum_{i \in N} x_{ij} &=& 1, &\quad j \in M\\ m y_i &\ge& \sum_{j \in M} x_{ij}, &\quad i \in N\\ 0 \le x_{ij} &\le& 1, &\quad i \in N, j \in M\\ y_i &\in& \bigl\{ 0, 1 \bigr\}, &\quad i \in N,\\ \end{eqnarray*} $
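For reference, the strong (disaggregated) formulation replaces the single aggregated linking constraint per site with one constraint per site–customer pair; it has more rows but a tighter LP relaxation:

```latex
$ \begin{eqnarray*} \quad \mbox{Minimize} \quad \sum_{i \in N} k_i y_i + \sum_{i \in N} \sum_{j \in M} c_{ij} x_{ij}\\ \quad \mbox{subject to} \quad \sum_{i \in N} x_{ij} &=& 1, &\quad j \in M\\ x_{ij} &\le& y_i, &\quad i \in N, j \in M\\ 0 \le x_{ij} &\le& 1, &\quad i \in N, j \in M\\ y_i &\in& \bigl\{ 0, 1 \bigr\}, &\quad i \in N \end{eqnarray*} $
```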

In [13]:
using JuMP, Cbc

n, m = size(C)
N, M = 1:n, 1:m
model = Model(Cbc.Optimizer)
@variable(model, y[N], Bin)
@variable(model, 0 <= x[N, M] <= 1)
@objective(model, Min, 
    sum(k[i]*y[i] for i ∈ N) + sum(C[i,j]*x[i,j] for i ∈ N, j ∈ M))
@constraint(model, [j ∈ M], sum(x[i,j] for i ∈ N) == 1)
@constraint(model, [i ∈ N], m*y[i] >= sum(x[i,j] for j ∈ M))
set_attribute(model, "logLevel", 0)
optimize!(model)
TCᵒ, yᵒ, xᵒ = objective_value(model), value.(y), value.(x)
println("\nObj: ", TCᵒ, "\n  y: ", yᵒ, "\n  x: ", xᵒ, "\n")
Obj: 600.0
  y: 1-dimensional DenseAxisArray{Float64,1,...} with index sets:
    Dimension 1, 1:5
And data, a 5-element Vector{Float64}:
 1.0
 0.0
 0.0
 1.0
 0.0
  x: 2-dimensional DenseAxisArray{Float64,2,...} with index sets:
    Dimension 1, 1:5
    Dimension 2, 1:5
And data, a 5×5 Matrix{Float64}:
 1.0  1.0  0.0  0.0  0.0
 0.0  0.0  0.0  0.0  0.0
 0.0  0.0  0.0  0.0  0.0
 0.0  0.0  1.0  1.0  1.0
 0.0  0.0  0.0  0.0  0.0

3. Gurobi Solver Installation¶

Gurobi is a commercial MIP solver that is free for students. It is able to solve some MIPs that might be difficult for the open-source Cbc solver. You will not need to use Gurobi for most of the assignments in this class, but you may want to use it for your other needs if you find Cbc unable to solve your MILP model.

To install, you must download a copy of the Individual Academic License of Gurobi to your local machine. You will need to create an account using your @ncsu.edu email and then access your free named-user or WLS academic license while connected to the NCSU network on campus. If you are off-campus and want to connect to the NCSU network, use AnyConnect VPN.

Which license to select:

  • Academic Named-User License: can be installed on only one machine, one-year with renewal.

  • Academic WLS License: can be installed on multiple machines, 90 days with renewal.

On Windows, Gurobi can be installed in the default c:\gurobi1103 folder, and the license file gurobi.lic can be stored in your home directory C:\Users\yourusername (info for other OSs). After registering the software with the license, you will be able to use Gurobi anywhere (you only need to be on the NCSU network for your initial download). To install on a second computer using the Academic WLS License, download and install Gurobi, restart, and then store a copy of your license file gurobi.lic on that computer.

After installation, add the Gurobi package in the Julia REPL so that it can be accessed in Jupyter and used by JuMP as its solver. You can test your installation by running the following:

In [14]:
d1(x1, x2) = length(x1) == length(x2) ? sum(abs.(x1 .- x2)) : 
    error("Inputs not same length.")
D1(X₁, X₂) = [d1(i, j) for i in eachrow(X₁), j in eachrow(X₂)]

P = [50 150 220 295 420]'
r, f = 1, 1
w = r * f
k = [150, 200, 150, 150, 200]
C = w * D1(P, P);

The only differences in JuMP when using different solvers are:

  • model = Model(Cbc.Optimizer) vs. model = Model(Gurobi.Optimizer)

and setting solver-specific attributes, e.g.:

  • set_attribute(model, "logLevel", 0) vs. set_attribute(model, "OutputFlag", 0)
In [15]:
using JuMP, Gurobi

n, m = size(C)
N, M = 1:n, 1:m
model = Model(Gurobi.Optimizer)           # Different than Cbc
@variable(model, y[N], Bin)
@variable(model, 0 <= x[N, M] <= 1)
@objective(model, Min, 
    sum(k[i]*y[i] for i ∈ N) + sum(C[i,j]*x[i,j] for i ∈ N, j ∈ M))
@constraint(model, [j ∈ M], sum(x[i,j] for i ∈ N) == 1)
@constraint(model, [i ∈ N], m*y[i] >= sum(x[i,j] for j ∈ M))
set_attribute(model, "OutputFlag", 0)      # Different than Cbc
optimize!(model)
TCᵒ, yᵒ, xᵒ = objective_value(model), value.(y), value.(x)
println("\nObj: ", TCᵒ, "\n  y: ", yᵒ, "\n  x: ", xᵒ, "\n")
Set parameter WLSAccessID
Set parameter WLSSecret
Set parameter LicenseID to value 2558966
Academic license 2558966 - for non-commercial use only - registered to ka___@ncsu.edu

Obj: 600.0
  y: 1-dimensional DenseAxisArray{Float64,1,...} with index sets:
    Dimension 1, 1:5
And data, a 5-element Vector{Float64}:
 1.0
 0.0
 0.0
 1.0
 0.0
  x: 2-dimensional DenseAxisArray{Float64,2,...} with index sets:
    Dimension 1, 1:5
    Dimension 2, 1:5
And data, a 5×5 Matrix{Float64}:
 1.0  1.0  0.0  0.0  0.0
 0.0  0.0  0.0  0.0  0.0
 0.0  0.0  0.0  0.0  0.0
 0.0  0.0  1.0  1.0  1.0
 0.0  0.0  0.0  0.0  0.0

4. Popco Example¶

Copy k from the printout at the end of Loc 7 and read in the saved C matrix:

In [16]:
using DataFrames, CSV

k = 1.0603652532813296e7
C = Matrix(DataFrame(CSV.File("PopcoCmatrix.csv")))
Out[16]:
204×162 Matrix{Float64}:
 5.92658e5  9.64745e5  1.292e7    4.46354e6  …  3.73774e7       1.87629e6
 2.47632e6  2.05098e5  1.27387e7  3.90231e6     3.51108e7       1.74961e6
 3.26274e6  1.25328e6  2.56588e6  3.06537e6     3.17408e7       1.56637e6
 1.00004e7  3.40615e6  2.71957e7  3.51524e5     1.97778e7       9.04456e5
 9.55826e6  3.22017e6  2.55531e7  3.51524e5     2.05911e7       9.49078e5
 6.9755e6   2.00204e6  1.79764e7  1.80016e6  …  2.62702e7       1.262e6
 1.34957e7  4.59604e6  4.19432e7  1.82073e6     1.51729e7       6.56167e5
 1.36062e7  4.74373e6  4.16259e7  1.62926e6     1.36833e7       5.67583e5
 1.2975e7   4.57954e6  3.87712e7  1.34531e6     1.44956e7       6.13991e5
 1.5991e7   5.66665e6  5.10211e7  2.68552e6     9.64018e6       3.47211e5
 1.695e7    5.94058e6  5.54607e7  3.25191e6  …  1.03528e7       4.20822e5
 4.34985e6  2.24362e6  1.23272e7  3.8397e6      3.37139e7       1.68301e6
 2.1192e7   9.04313e6  8.17951e7  1.04155e7     5.11748e7       2.71901e6
 ⋮                                           ⋱  ⋮          
 2.7867e7   1.08369e7  9.82877e7  9.02859e6     2.52029e7       1.46342e6
 2.66438e7  1.03851e7  9.35689e7  8.5855e6      2.46943e7       1.41917e6
 2.80007e7  1.08332e7  9.85572e7  8.90098e6     2.32962e7       1.37301e6
 2.64742e7  1.0266e7   9.26283e7  8.32624e6  …  2.2482e7        1.30789e6
 2.62754e7  1.01495e7  9.16694e7  8.11076e6     2.07827e7       1.22181e6
 2.56158e7  9.54883e6  8.85837e7  7.03751e6     8.61394e6       6.54261e5
 2.53801e7  9.45183e6  8.76597e7  6.92664e6     8.12283e6       6.27847e5
 2.58083e7  9.67618e6  8.92877e7  7.19273e6     1.01561e7       7.23769e5
 2.61116e7  9.88787e6  9.05093e7  7.50102e6  …  1.34562e7  875048.0
 2.38326e7  8.91583e6  8.1461e7   6.3279e6      7.67121e6  562839.0
 2.48895e7  9.12562e6  8.61517e7  6.649e6       7.22077e6       5.6225e5
 2.50296e7  9.19869e6  8.66208e7  6.7083e6      7.10896e6       5.65906e5
In [17]:
using JuMP, Cbc

n, m = size(C)
N, M = 1:n, 1:m
model = Model(Cbc.Optimizer)
@variable(model, y[N], Bin)
@variable(model, 0 <= x[N, M] <= 1)
@objective(model, Min, 
    sum(k*y[i] for i ∈ N) + sum(C[i,j]*x[i,j] for i ∈ N, j ∈ M))   # Using single `k` value
@constraint(model, [j ∈ M], sum(x[i,j] for i ∈ N) == 1)
@constraint(model, [i ∈ N], m*y[i] >= sum(x[i,j] for j ∈ M))
set_attribute(model, "logLevel", 1)
optimize!(model)
Welcome to the CBC MILP Solver 
Version: 2.10.8 
Build Date: Jan  1 1970 

command line - Cbc_C_Interface -logLevel 1 -solve -quit (default strategy 1)
Continuous objective value is 1.75117e+08 - 0.02 seconds
Cgl0004I processed model has 366 rows, 33252 columns (204 integer (204 of which binary)) and 66300 elements
Cbc0038I Initial state - 153 integers unsatisfied sum - 1
Cbc0038I Pass   1: suminf.    0.00000 (0) obj. 6.53997e+09 iterations 328
Cbc0038I Solution found of 6.53997e+09
Cbc0038I Relaxing continuous gives 6.53997e+09
Cbc0038I Before mini branch and bound, 51 integers at bound fixed and 32726 continuous
Cbc0038I Full problem 366 rows 33252 columns, reduced to 152 rows 313 columns
Cbc0038I Mini branch and bound improved solution from 6.53997e+09 to 1.57909e+09 (0.24 seconds)
Cbc0038I Freeing continuous variables gives a solution of 1.35671e+09
Cbc0038I Round again with cutoff of 1.23855e+09
Cbc0038I Pass   2: suminf.    1.00000 (24) obj. 1.23855e+09 iterations 211
Cbc0038I Pass   3: suminf.    0.00000 (0) obj. 1.23855e+09 iterations 380
Cbc0038I Solution found of 1.23855e+09
Cbc0038I Relaxing continuous gives 1.22481e+09
Cbc0038I Before mini branch and bound, 45 integers at bound fixed and 32691 continuous
Cbc0038I Full problem 366 rows 33252 columns, reduced to 184 rows 370 columns
Cbc0038I Mini branch and bound improved solution from 1.22481e+09 to 9.88317e+08 (0.41 seconds)
Cbc0038I Freeing continuous variables gives a solution of 8.7097e+08
Cbc0038I Round again with cutoff of 7.318e+08
Cbc0038I Pass   4: suminf.    1.00000 (36) obj. 7.318e+08 iterations 66
Cbc0038I Pass   5: suminf.    0.18217 (27) obj. 7.318e+08 iterations 222
Cbc0038I Pass   6: suminf.    0.12658 (21) obj. 7.318e+08 iterations 688
Cbc0038I Pass   7: suminf.    0.77025 (96) obj. 7.318e+08 iterations 509
Cbc0038I Pass   8: suminf.    0.77025 (96) obj. 7.318e+08 iterations 0
Cbc0038I Pass   9: suminf.    0.74924 (91) obj. 7.318e+08 iterations 242
Cbc0038I Pass  10: suminf.    0.73812 (91) obj. 7.318e+08 iterations 5
Cbc0038I Pass  11: suminf.    0.74999 (90) obj. 7.318e+08 iterations 170
Cbc0038I Pass  12: suminf.    0.73812 (91) obj. 7.318e+08 iterations 3
Cbc0038I Pass  13: suminf.    0.73764 (89) obj. 7.318e+08 iterations 163
Cbc0038I Pass  14: suminf.    0.72578 (90) obj. 7.318e+08 iterations 2
Cbc0038I Pass  15: suminf.    0.76046 (88) obj. 7.318e+08 iterations 193
Cbc0038I Pass  16: suminf.    0.73809 (90) obj. 7.318e+08 iterations 6
Cbc0038I Pass  17: suminf.    0.76046 (88) obj. 7.318e+08 iterations 180
Cbc0038I Pass  18: suminf.    0.73809 (90) obj. 7.318e+08 iterations 5
Cbc0038I Pass  19: suminf.    0.74995 (89) obj. 7.318e+08 iterations 184
Cbc0038I Pass  20: suminf.    0.73809 (90) obj. 7.318e+08 iterations 3
Cbc0038I Pass  21: suminf.    0.76046 (88) obj. 7.318e+08 iterations 174
Cbc0038I Pass  22: suminf.    0.73809 (90) obj. 7.318e+08 iterations 6
Cbc0038I Pass  23: suminf.    0.78502 (89) obj. 7.318e+08 iterations 165
Cbc0038I Pass  24: suminf.    0.76265 (91) obj. 7.318e+08 iterations 5
Cbc0038I Pass  25: suminf.    0.74982 (89) obj. 7.318e+08 iterations 182
Cbc0038I Pass  26: suminf.    0.73796 (90) obj. 7.318e+08 iterations 2
Cbc0038I Pass  27: suminf.    0.78502 (89) obj. 7.318e+08 iterations 179
Cbc0038I Pass  28: suminf.    0.76265 (91) obj. 7.318e+08 iterations 6
Cbc0038I Pass  29: suminf.    0.74982 (89) obj. 7.318e+08 iterations 180
Cbc0038I Pass  30: suminf.    0.80747 (93) obj. 7.318e+08 iterations 559
Cbc0038I Pass  31: suminf.    0.77606 (95) obj. 7.318e+08 iterations 10
Cbc0038I Pass  32: suminf.    0.76323 (92) obj. 7.318e+08 iterations 154
Cbc0038I Pass  33: suminf.    0.75137 (93) obj. 7.318e+08 iterations 5
Cbc0038I No solution found this major pass
Cbc0038I Before mini branch and bound, 27 integers at bound fixed and 32636 continuous
Cbc0038I Full problem 366 rows 33252 columns, reduced to 238 rows 480 columns
Cbc0038I Mini branch and bound improved solution from 8.7097e+08 to 8.50149e+08 (1.05 seconds)
Cbc0038I Freeing continuous variables gives a solution of 8.34136e+08
Cbc0038I Round again with cutoff of 5.64795e+08
Cbc0038I Pass  33: suminf.    1.00000 (44) obj. 5.64795e+08 iterations 39
Cbc0038I Pass  34: suminf.    0.25849 (38) obj. 5.64795e+08 iterations 257
Cbc0038I Pass  35: suminf.    0.76668 (112) obj. 5.64795e+08 iterations 637
Cbc0038I Pass  36: suminf.    0.75556 (112) obj. 5.64795e+08 iterations 4
Cbc0038I Pass  37: suminf.    0.69518 (105) obj. 5.64795e+08 iterations 224
Cbc0038I Pass  38: suminf.    0.67228 (103) obj. 5.64795e+08 iterations 10
Cbc0038I Pass  39: suminf.    0.74281 (104) obj. 5.64795e+08 iterations 33
Cbc0038I Pass  40: suminf.    0.72494 (108) obj. 5.64795e+08 iterations 10
Cbc0038I Pass  41: suminf.    0.74142 (103) obj. 5.64795e+08 iterations 211
Cbc0038I Pass  42: suminf.    0.71860 (106) obj. 5.64795e+08 iterations 6
Cbc0038I Pass  43: suminf.    0.74775 (105) obj. 5.64795e+08 iterations 197
Cbc0038I Pass  44: suminf.    0.77805 (105) obj. 5.64795e+08 iterations 462
Cbc0038I Pass  45: suminf.    0.73071 (110) obj. 5.64795e+08 iterations 16
Cbc0038I Pass  46: suminf.    0.74258 (108) obj. 5.64795e+08 iterations 186
Cbc0038I Pass  47: suminf.    0.73071 (110) obj. 5.64795e+08 iterations 3
Cbc0038I Pass  48: suminf.    0.74258 (108) obj. 5.64795e+08 iterations 184
Cbc0038I Pass  49: suminf.    0.73071 (110) obj. 5.64795e+08 iterations 4
Cbc0038I Pass  50: suminf.    0.76744 (103) obj. 5.64795e+08 iterations 177
Cbc0038I Pass  51: suminf.    0.73071 (110) obj. 5.64795e+08 iterations 10
Cbc0038I Pass  52: suminf.    0.77791 (103) obj. 5.64795e+08 iterations 172
Cbc0038I Pass  53: suminf.    0.73071 (110) obj. 5.64795e+08 iterations 14
Cbc0038I Pass  54: suminf.    0.75492 (109) obj. 5.64795e+08 iterations 171
Cbc0038I Pass  55: suminf.    0.74306 (111) obj. 5.64795e+08 iterations 3
Cbc0038I Pass  56: suminf.    0.76744 (104) obj. 5.64795e+08 iterations 190
Cbc0038I Pass  57: suminf.    0.73861 (110) obj. 5.64795e+08 iterations 8
Cbc0038I Pass  58: suminf.    0.75954 (105) obj. 5.64795e+08 iterations 206
Cbc0038I Pass  59: suminf.    0.73071 (110) obj. 5.64795e+08 iterations 6
Cbc0038I Pass  60: suminf.    0.75492 (109) obj. 5.64795e+08 iterations 179
Cbc0038I Pass  61: suminf.    0.74306 (111) obj. 5.64795e+08 iterations 3
Cbc0038I Pass  62: suminf.    0.74258 (108) obj. 5.64795e+08 iterations 194
Cbc0038I No solution found this major pass
Cbc0038I Before mini branch and bound, 23 integers at bound fixed and 32686 continuous
Cbc0038I Mini branch and bound did not improve solution (1.66 seconds)
Cbc0038I After 1.67 seconds - Feasibility pump exiting with objective of 8.34136e+08 - took 1.53 seconds
Cbc0012I Integer solution of 8.3413633e+08 found by feasibility pump after 0 iterations and 0 nodes (1.67 seconds)
Cbc0038I Full problem 366 rows 33252 columns, reduced to 320 rows 25754 columns - 1 fixed gives 319, 25753 - still too large
Cbc0012I Integer solution of 8.086049e+08 found by DiveCoefficient after 2181 iterations and 0 nodes (2.96 seconds)
Cbc0031I 801 added rows had average density of 3.7652934
Cbc0013I At root node, 801 cuts changed objective from 1.7511747e+08 to 7.3357813e+08 in 21 passes
Cbc0014I Cut generator 0 (Probing) - 269 row cuts average 2.0 elements, 0 column cuts (502 active)  in 0.051 seconds - new frequency is 1
Cbc0014I Cut generator 1 (Gomory) - 66 row cuts average 173.2 elements, 0 column cuts (0 active)  in 0.137 seconds - new frequency is -100
Cbc0014I Cut generator 2 (Knapsack) - 0 row cuts average 0.0 elements, 0 column cuts (0 active)  in 0.008 seconds - new frequency is -100
Cbc0014I Cut generator 3 (Clique) - 0 row cuts average 0.0 elements, 0 column cuts (0 active)  in 0.003 seconds - new frequency is -100
Cbc0014I Cut generator 4 (MixedIntegerRounding2) - 998 row cuts average 3.6 elements, 0 column cuts (0 active)  in 0.068 seconds - new frequency is 1
Cbc0014I Cut generator 5 (FlowCover) - 1 row cuts average 2.0 elements, 0 column cuts (0 active)  in 0.116 seconds - new frequency is -100
Cbc0014I Cut generator 6 (TwoMirCuts) - 466 row cuts average 68.4 elements, 0 column cuts (0 active)  in 0.138 seconds - new frequency is 1
Cbc0010I After 0 nodes, 1 on tree, 8.086049e+08 best solution, best possible 7.3357813e+08 (3.10 seconds)
Cbc0012I Integer solution of 7.6636999e+08 found by rounding after 2199 iterations and 1 nodes (3.13 seconds)
Cbc0012I Integer solution of 7.44182e+08 found by rounding after 2202 iterations and 2 nodes (3.21 seconds)
Cbc0012I Integer solution of 7.3357835e+08 found by DiveCoefficient after 2202 iterations and 2 nodes (3.23 seconds)
Cbc0001I Search completed - best objective 733578346.8894054, took 2313 iterations and 4 nodes (3.27 seconds)
Cbc0032I Strong branching done 36 times (1012 iterations), fathomed 1 nodes and fixed 0 variables
Cbc0035I Maximum depth 1, 0 variables fixed on reduced cost
Cuts at root node changed objective from 1.75117e+08 to 7.33578e+08
Probing was tried 23 times and created 269 cuts of which 502 were active after adding rounds of cuts (0.069 seconds)
Gomory was tried 21 times and created 66 cuts of which 0 were active after adding rounds of cuts (0.137 seconds)
Knapsack was tried 21 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.008 seconds)
Clique was tried 21 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.003 seconds)
MixedIntegerRounding2 was tried 23 times and created 998 cuts of which 0 were active after adding rounds of cuts (0.070 seconds)
FlowCover was tried 21 times and created 1 cuts of which 0 were active after adding rounds of cuts (0.116 seconds)
TwoMirCuts was tried 23 times and created 466 cuts of which 0 were active after adding rounds of cuts (0.138 seconds)
ZeroHalf was tried 1 times and created 0 cuts of which 0 were active after adding rounds of cuts (0.000 seconds)

Result - Optimal solution found

Objective value:                733578346.88940537
Enumerated nodes:               4
Total iterations:               2313
Time (CPU seconds):             3.35
Time (Wallclock seconds):       3.35

Total time (CPU seconds):       3.35   (Wallclock seconds):       3.35

In [18]:
TCᵒ, yᵒ, xᵒ = objective_value(model), value.(y), value.(x)
nNF = sum(yᵒ .== 1)
println("Obj: ", TCᵒ, "\nnNF: ", nNF)
Obj: 7.335783468894054e8
nNF: 28

Compare the MILP solution with the heuristic solution from Loc 7: which is "better"?

In [19]:
TCufl, nNFufl = 7.34021255286948e8, 27
println("% reduction in TC = ", 100*(TCufl - TCᵒ)/TCufl)
println("  Increase in nNF = ", nNF - nNFufl)
println("   Decrease in TC = ", TCufl - TCᵒ)
println("Fixed cost per NF = ", k)
% reduction in TC = 0.06033999619935413
  Increase in nNF = 1
   Decrease in TC = 442908.39754259586
Fixed cost per NF = 1.0603652532813296e7

5. Set Covering¶

The set covering problem refers to minimizing the number of sets whose union includes all of the objects to be covered. Each set can include (or cover) a different subset of the objects, and each object can be covered by multiple sets. There are many applications of set covering; for example, each set might represent the customers that are within a one-hour drive of a potential site for a service center, and a solution to the set covering problem would correspond to the minimum number of sites required so that each customer can be serviced within one hour.

General formulation of the problem:

$ \begin{eqnarray*} \quad M &=& \bigl\{ 1, \dots, m \bigr\}, & \quad \mbox{objects to be covered} \\ N &=& \bigl\{ 1, \dots, n \bigr\}, & \quad \mbox{indices of } n \mbox{ subsets of } M \\ M_i &\subseteq& M, i \in N, & \quad n \mbox{ subsets of } M \\ I &\subseteq& N, & \quad \mbox{covering of } M \\ I^* &=& \mathrm{arg}\underset{I}{\operatorname{min}} \bigl\{ \lvert I \rvert : \bigcup_{i \in I} M_i = M \bigr\}, & \quad \mbox{min cost covering of } M. \end{eqnarray*}$

BIP formulation of the problem:

$ \begin{eqnarray*} \quad \mbox{Minimize} \quad \sum_{i \in N} x_i \\ \quad \mbox{subject to} \quad \sum_{i \in N} a_{ji} x_i &\ge& 1, &\quad j \in M\\ x_i &\in& \bigl\{ 0, 1 \bigr\}, &\quad i \in N \end{eqnarray*} $

where

$ \begin{eqnarray*} \quad x_i &=& \begin{cases} 1, & \mbox{if } M_i \mbox{ is in cover} \\ 0, & \mbox{otherwise} \end{cases} \\ a_{ji} &=& \begin{cases} 1, & \mbox{if } j \in M_i \\ 0, & \mbox{otherwise.} \end{cases} \\ \end{eqnarray*} $
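The general formulation above defines a min cost covering, while the BIP minimizes only the number of subsets (unit costs). When each subset $M_i$ has a cost $c_i$, the objective generalizes to the weighted set covering problem (a standard extension, not solved in this notebook; the unweighted BIP is the special case $c_i = 1$):

$ \begin{eqnarray*} \quad \mbox{Minimize} \quad \sum_{i \in N} c_i x_i \\ \quad \mbox{subject to} \quad \sum_{i \in N} a_{ji} x_i &\ge& 1, &\quad j \in M\\ x_i &\in& \bigl\{ 0, 1 \bigr\}, &\quad i \in N. \end{eqnarray*} $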

Ex: 6-Object, 5-Subsets¶

image.png

$ \begin{eqnarray*} \quad M &=& \bigl\{ 1, \dots, 6 \bigr\} \\ N &=& \bigl\{ 1, \dots, 5 \bigr\} \\ M_1 &=& \bigl\{ 1, 2 \bigr\}, M_2 = \bigl\{ 1, 4, 5 \bigr\}, M_3 = \bigl\{ 3, 5 \bigr\}, M_4 = \bigl\{ 2, 3, 6 \bigr\}, M_5 = \bigl\{ 6 \bigr\} \\ I^* &=& \mathrm{arg}\underset{I}{\operatorname{min}} \bigl\{\sum_{i \in I} x_i : \bigcup_{i \in I} M_i = M \bigr\} = \bigl\{ 2, 4 \bigr\} \\ \sum_{i \in I^*} x_i &=& 2 \end{eqnarray*}$

In [20]:
m, n = 6, 5
M = 1:m
N = 1:n
Mi = [[1,2],[1,4,5],[3,5],[2,3,6],[6]]
A = zeros(m, n)        # A = objects x subsets
for i = 1:size(A,2)
    A[Mi[i],i] .= 1
end
A
Out[20]:
6×5 Matrix{Float64}:
 1.0  1.0  0.0  0.0  0.0
 1.0  0.0  0.0  1.0  0.0
 0.0  0.0  1.0  1.0  0.0
 0.0  1.0  0.0  0.0  0.0
 0.0  1.0  1.0  0.0  0.0
 0.0  0.0  0.0  1.0  1.0
In [21]:
using JuMP, Cbc

model = Model(Cbc.Optimizer)   # Unweighted set cover
@variable(model, x[N], Bin )
@objective(model, Min, sum(x[i] for i ∈ N) )
@constraint(model, [j ∈ M], sum(A[j,i]*x[i] for i ∈ N) >= 1 )
set_attribute(model, "logLevel", 0)
optimize!(model)
TCᵒ, xᵒ = objective_value(model), Array(value.(x))
Out[21]:
(2.0, [0.0, 1.0, 0.0, 1.0, 0.0])

Since solver messaging was turned off with set_attribute(model, "logLevel", 0), check the termination status to verify that an optimal solution was found:

In [22]:
println(solution_summary(model).termination_status)
OPTIMAL

The result value.(x) was converted to an array with Array(value.(x)) before being assigned to xᵒ so that array functions like findall can be used.

In [23]:
idx = findall(xᵒ .> 0)   # Indices of subsets in the cover
Out[23]:
2-element Vector{Int64}:
 2
 4

Create function for (unweighted) set cover:

In [24]:
using JuMP, Cbc

function setcover(A)
    M = 1:size(A,1)
    N = 1:size(A,2)
    model = Model(Cbc.Optimizer)
    @variable(model, x[1:length(N)], Bin )
    @objective(model, Min, sum(x[i] for i ∈ N) )
    @constraint(model, [j ∈ M], sum(A[j,i]*x[i] for i ∈ N) >= 1 )
    set_attribute(model, "logLevel", 0)
    set_attribute(model, "seconds", 60.0)   # Solution timeout limit
    optimize!(model)
    println(solution_summary(model).termination_status)
    return findall(Array(value.(x)) .> 0)
end

setcover(A)
OPTIMAL
Out[24]:
2-element Vector{Int64}:
 2
 4
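The BIP finds a minimum cover exactly. For large instances, a classic alternative is the greedy heuristic, which repeatedly selects the subset covering the most still-uncovered objects. A minimal sketch (not part of the course code; `greedycover` is a hypothetical helper name), which on the 6-object instance above happens to recover the optimal cover {2, 4}:

```julia
# Greedy set cover (heuristic sketch): repeatedly select the column of A
# that covers the most still-uncovered objects until every object is covered.
function greedycover(A)
    m, n = size(A)
    uncovered = trues(m)
    cover = Int[]
    while any(uncovered)
        counts = [sum(A[:,i] .& uncovered) for i in 1:n]
        i = argmax(counts)
        counts[i] > 0 || error("Some objects cannot be covered.")
        push!(cover, i)
        uncovered .&= .!A[:,i]     # Mark the newly covered objects
    end
    return cover
end

# Same 6-object, 5-subset instance as above
Mi = [[1,2],[1,4,5],[3,5],[2,3,6],[6]]
A = falses(6, 5)
for i in eachindex(Mi); A[Mi[i],i] .= true; end
greedycover(A)                     # → [2, 4], matching the BIP optimum here
```

Greedy is not guaranteed to be optimal in general (its cover can exceed the minimum by a logarithmic factor), but it scales to instances far beyond what branch and bound can handle.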

Ex: Transmitter Location¶

Determine the minimum number of transmitters needed to cover all of North Carolina given that each transmitter can reach up to 100 miles.

In [25]:
using Logjam.DataTools, DataFrames

function dgc(xy₁, xy₂; unit=:mi)
    length(xy₁) == length(xy₂) == 2 || error("Inputs must have length 2.")
    unit in [:mi, :km] || error("Unit must be :mi or :km")

    Δx, Δy = xy₂[1] - xy₁[1], xy₂[2] - xy₁[2]
    a = sind(Δy / 2)^2 + cosd(xy₁[2]) * cosd(xy₂[2]) * sind(Δx / 2)^2
    2 * asin(min(sqrt(a), 1.0)) * (unit == :mi ? 3958.75 : 6371.00)
end

Dgc(X₁, X₂) = [dgc(i, j) for i in eachrow(X₁), j in eachrow(X₂)]

df = filter(r -> (r.STFIP == st2fips(:NC)), uscounty())
first(df, 5)
Out[25]:
5×10 DataFrame
 Row │ STFIP  COFIP  NAME       ST       LAT      LON       POP     ALAND    AWATER   CBSA
     │ Int64  Int64  String31   String3  Float64  Float64   Int64   Float64  Float64  Int64?
─────┼─────────────────────────────────────────────────────────────────────────────────────
   1 │    37      1  Alamance   NC       36.0715  -79.4132  171415  423.452   10.788  15500
   2 │    37      3  Alexander  NC       35.8893  -81.1935   36444  259.985    3.647  25860
   3 │    37      7  Anson      NC       34.9741  -80.1108   22055  531.461    5.638  16740
   4 │    37     13  Beaufort   NC       35.5216  -76.9618   44652  832.736  130.112  47820
   5 │    37     19  Brunswick  NC       34.0418  -78.2166  136693  850.083  199.46   48900
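The dgc function above implements the haversine great-circle formula. As a quick sanity check (a self-contained sketch that restates the same formula; the Raleigh and Charlotte coordinates below are approximate assumptions, not from the course data), the Raleigh-to-Charlotte distance should come out near 130 miles:

```julia
# Haversine great-circle distance, same formula as dgc above
# xy = (lon, lat) in degrees; R = Earth radius in miles
hav(xy₁, xy₂; R=3958.75) = begin
    Δx, Δy = xy₂[1] - xy₁[1], xy₂[2] - xy₁[2]
    a = sind(Δy/2)^2 + cosd(xy₁[2])*cosd(xy₂[2])*sind(Δx/2)^2
    2R*asin(min(sqrt(a), 1.0))
end

# Approximate coordinates (assumed): Raleigh and Charlotte, NC
d = hav((-78.64, 35.78), (-80.84, 35.23))   # roughly 130 mi
```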
In [26]:
P = hcat(df.LON, df.LAT)
D = Dgc(P, P)
Out[26]:
100×100 Matrix{Float64}:
   0.0     100.331    85.3701  142.532   …   75.0221  155.086   161.461
 100.331     0.0      87.8249  238.757      174.423   254.254    61.6489
  85.3701   87.8249    0.0     181.659      148.852   204.213   138.922
 142.532   238.757   181.659     0.0         89.8832   29.3224  300.331
 155.714   211.418   125.619   124.599      163.784   153.917   264.504
 178.996    79.0234  144.007   314.218   …  253.407   331.457    26.7238
 126.894    27.3643  100.868   263.178      201.387   279.737    38.653
  82.2834   46.8642   40.9705  206.115      155.851   225.23    100.473
 117.633    17.3402   99.9844  255.919      191.561   271.576    44.4137
 180.398   280.722   239.834    72.504      107.757    43.1829  341.608
 172.497   258.994   187.315    54.3249  …  136.877    77.893   318.985
 105.804    13.7062   81.0749  240.904      180.663   257.726    60.6543
  24.6289  111.588    74.1856  127.185       75.1373  143.444   173.201
   ⋮                                     ⋱                      
 231.441   132.822   185.153   362.407      306.33    381.324    80.1265
 130.598   229.393   179.721    22.6443      70.4439   25.057   291.015
 151.131    51.7355  131.704   290.488   …  224.082   305.703    10.6611
  57.2275   82.3016   28.1533  165.484      123.14    185.622   140.518
 167.045   267.277   223.361    54.6251      96.636    25.3851  328.505
 164.99     70.3883  118.408   294.278      240.011   313.309    45.5667
  91.4934  167.999    97.0542   87.6399      96.1141  113.651   227.429
 228.436   128.383   189.093   362.897   …  302.802   380.611    70.7225
 177.677   277.093   226.857    47.9049     111.405    23.014   338.661
  75.0221  174.423   148.852    89.8832       0.0      90.9914  234.608
 155.086   254.254   204.213    29.3224      90.9914    0.0     315.85
 161.461    61.6489  138.922   300.331      234.608   315.85      0.0

A transmitter is assumed to be located at the population center of each county. It covers another county if the distance to the far end of that county is less than 100 miles, where the far end is approximated by adding the radius of a circle with the county's combined land and water area to the distance between the two county centers.

In [27]:
a = df.ALAND .+ df.AWATER    # Area (sq-mi)
r = sqrt.(a)/pi              # Radius proxy (mi); sqrt.(a/pi) would be the exact circle radius
Out[27]:
100-element Vector{Float64}:
  6.633077022381751
  5.168317548873791
  7.376957802042519
  9.87708960309957
 10.312174312119152
  8.177219279483754
  7.218247814579716
  6.072373262717991
  6.9345618671747085
  5.606746514095389
 11.610289283682222
  6.492460558495518
  8.475242063843732
  ⋮
  7.256299571197746
  6.802454304841427
  4.741439185283889
  7.128521296489823
  5.773021702526456
  4.9152344434011725
  9.799763201878944
  7.398524280940837
  7.778597506157308
  6.709478609659198
  6.538219885362919
  5.633104793476773
In [28]:
r[:] .+ D   # Broadcast radius as a column vector: add county j's radius to row j
Out[28]:
100×100 Matrix{Float64}:
   6.63308  106.964     92.0032   …   81.6552   161.719    168.094
 105.499      5.16832   92.9932      179.592    259.422     66.8172
  92.7471    95.2018     7.37696     156.229    211.59     146.299
 152.409    248.634    191.536        99.7603    39.1995   310.209
 166.026    221.73     135.931       174.097    164.229    274.816
 187.174     87.2006   152.185    …  261.584    339.634     34.9011
 134.112     34.5825   108.086       208.605    286.955     45.8712
  88.3558    52.9365    47.0429      161.923    231.302    106.545
 124.568     24.2747   106.919       198.495    278.51      51.3483
 186.005    286.329    245.441       113.364     48.7896   347.214
 184.108    270.604    198.925    …  148.487     89.5033   330.596
 112.296     20.1987    87.5673      187.155    264.218     67.1467
  33.1042   120.063     82.6608       83.6125   151.919    181.676
   ⋮                              ⋱                        
 238.698    140.079    192.409       313.587    388.581     87.3828
 137.4      236.195    186.524        77.2464    31.8594   297.818
 155.872     56.477    136.446    …  228.824    310.444     15.4026
  64.3561    89.4301    35.2818      130.268    192.75     147.647
 172.818    273.05     229.134       102.409     31.1581   334.278
 169.906     75.3035   123.323       244.926    318.225     50.4819
 101.293    177.799    106.854       105.914    123.45     237.229
 235.835    135.781    196.492    …  310.201    388.01      78.1211
 185.456    284.871    234.636       119.184     30.7926   346.439
  81.7316   181.133    155.561         6.70948   97.7008   241.318
 161.624    260.792    210.752        97.5296     6.53822  322.388
 167.094     67.282    144.555       240.241    321.483      5.6331
In [29]:
A = r[:] .+ D .< 100
Out[29]:
100×100 BitMatrix:
 1  0  1  0  0  0  0  1  0  0  0  0  1  …  0  0  0  1  0  0  1  0  0  1  0  0
 0  1  1  0  0  1  1  1  1  0  0  1  0     0  0  1  1  0  1  0  0  0  0  0  1
 1  1  1  0  0  0  0  1  0  0  0  1  1     0  0  0  1  0  0  0  0  0  0  0  0
 0  0  0  1  0  0  0  0  0  1  1  0  0     0  1  0  0  1  0  1  0  1  1  1  0
 0  0  0  0  1  0  0  0  0  0  0  0  0     0  0  0  0  0  0  1  0  0  0  0  0
 0  1  0  0  0  1  1  0  1  0  0  1  0  …  1  0  1  0  0  1  0  1  0  0  0  1
 0  1  0  0  0  1  1  1  1  0  0  1  0     0  0  1  0  0  1  0  0  0  0  0  1
 1  1  1  0  0  0  1  1  1  0  0  1  1     0  0  1  1  0  1  0  0  0  0  0  0
 0  1  0  0  0  1  1  1  1  0  0  1  0     0  0  1  0  0  1  0  0  0  0  0  1
 0  0  0  1  0  0  0  0  0  1  0  0  0     0  1  0  0  1  0  0  0  1  0  1  0
 0  0  0  1  0  0  0  0  0  0  1  0  0  …  0  1  0  0  0  0  0  0  1  0  1  0
 0  1  1  0  0  1  1  1  1  0  0  1  0     0  0  1  1  0  1  0  0  0  0  0  1
 1  0  1  0  0  0  0  1  0  0  0  0  1     0  0  0  1  0  0  1  0  0  1  0  0
 ⋮              ⋮              ⋮        ⋱        ⋮              ⋮           
 0  0  0  0  0  1  0  0  0  0  0  0  0     1  0  1  0  0  1  0  1  0  0  0  1
 0  0  0  1  0  0  0  0  0  1  1  0  0     0  1  0  0  1  0  1  0  1  1  1  0
 0  1  0  0  0  1  1  1  1  0  0  1  0  …  1  0  1  0  0  1  0  1  0  0  0  1
 1  1  1  0  0  0  0  1  0  0  0  1  1     0  0  0  1  0  0  1  0  0  0  0  0
 0  0  0  1  0  0  0  0  0  1  0  0  0     0  1  0  0  1  0  0  0  1  0  1  0
 0  1  0  0  0  1  1  1  1  0  0  1  0     1  0  1  0  0  1  0  1  0  0  0  1
 0  0  0  1  1  0  0  0  0  0  0  0  1     0  0  0  1  0  0  1  0  0  0  0  0
 0  0  0  0  0  1  0  0  0  0  0  0  0  …  1  0  1  0  0  1  0  1  0  0  0  1
 0  0  0  1  0  0  0  0  0  1  1  0  0     0  1  0  0  1  0  0  0  1  0  1  0
 1  0  0  1  0  0  0  0  0  0  0  0  1     0  1  0  0  0  0  0  0  0  1  1  0
 0  0  0  1  0  0  0  0  0  1  1  0  0     0  1  0  0  1  0  0  0  1  1  1  0
 0  1  0  0  0  1  1  0  1  0  0  1  0     1  0  1  0  0  1  0  1  0  0  0  1
In [30]:
idx = setcover(A)
OPTIMAL
Out[30]:
4-element Vector{Int64}:
 31
 53
 55
 99
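Any candidate cover can be verified by checking that each row of A has at least one 1 among the selected columns. A self-contained sketch using the 6-object instance from earlier (`iscover` is a hypothetical helper; the NC result can be checked the same way with the A and idx above):

```julia
# Verify a set cover: every object (row) must be covered by ≥ 1 selected subset
iscover(A, idx) = all(vec(sum(A[:, idx], dims=2)) .>= 1)

# 6-object, 5-subset instance from earlier
Mi = [[1,2],[1,4,5],[3,5],[2,3,6],[6]]
A = zeros(6, 5)
for i in eachindex(Mi); A[Mi[i],i] .= 1; end
iscover(A, [2, 4])    # → true: M₂ ∪ M₄ = M
iscover(A, [1, 3])    # → false: objects 4 and 6 are uncovered
```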
In [31]:
using GeoMakie, Logjam.MapTools

fig, ax = makemap(df.LON, df.LAT; xexpand=0.1)

colors = cgrad(:darktest)[LinRange(0, 1, length(idx))]   # Automatically select colors
for (i,j) in zip(idx, colors)
    scatter!(ax, df.LON[A[:, i]], df.LAT[A[:, i]]; color=j, marker='.', markersize=24)
end
x, y = df.LON[idx], df.LAT[idx]
scatter!(ax, x, y, color=colors, markersize=10)
text!(ax, x, y, text=df.NAME[idx]; aligntext(x, y)...)

fig
Out[31]:
[Map of North Carolina counties, each colored by the selected transmitter that covers it, with the four transmitter locations labeled by county name]