Nonlinear optimization is the problem of optimizing an objective function whose argument is a point in a subset of Euclidean space. Unconstrained nonlinear optimization has no constraints on the possible solutions; when there are constraints, the problem is more commonly referred to as nonlinear programming, and linear programming is the special case in which the objective function and constraints are all linear. Mathematical programming encompasses both nonlinear and linear programming. Univariate optimization, the optimization of a function of a single variable on a bounded interval, is the most basic type of nonlinear optimization and serves as a building block in most nonlinear programming procedures.
$ \begin{eqnarray*} x^* &=& \mathrm{arg}\underset{x}{\operatorname{min}} \bigl\{ \, f(x) : L\!B \leq x \leq U\!B \bigr\} \\ T\!C^* &=& f(x^*) \end{eqnarray*}$
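The formulation above can be sketched with a simple brute-force grid search (a minimal illustration only, using a made-up test function; the search methods developed below are far more efficient):

```julia
# Brute-force sketch of x* = argmin{ f(x) : LB <= x <= UB } (illustrative
# only; the test function and grid size n are assumptions)
function gridmin(f, LB, UB; n = 10_001)
    xs = range(LB, UB, length = n)  # evenly spaced candidate points
    TC, i = findmin(f.(xs))         # minimum objective value and its index
    return xs[i], TC                # (x*, TC*)
end

xᵒ, TCᵒ = gridmin(x -> (x - 2)^2 + 1, 0, 10)  # minimum at x = 2, TC = 1
```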
User-defined functions can be used to extend the capabilities of Julia beyond its basic functions. Although developing a set of functions to solve a particular problem is at the heart of using Julia to program, the easiest way to create a function is to build it incrementally in a scripting manner: write each line, execute it, and, only if it works, copy it into the function.
Given a 1-D point x and m other 1-D points in p, create a function dist1 to determine the distance d from x to each of the m points in p:
$ \begin{array}{l} \quad\bf{d} = \left[ \begin{array}{c} \lvert x - p_{1} \rvert \\ \vdots \\ \lvert x - p_{m} \rvert \end{array} \right] \end{array} $
The best way to start is to create some example data for which you know the correct answer:
x = 3.
p = [1, 2, 5, 9];
The first step is to subtract x from each row in p:
x .- p
4-element Vector{Float64}: 2.0 1.0 -2.0 -6.0
Then take the absolute value of the result and assign the result to d:
d = abs.(x .- p)
4-element Vector{Float64}: 2.0 1.0 2.0 6.0
A function can be created and then called to assign the result to d:
function dist1(x,p)
return abs.(x .- p)
end
d = dist1(x,p)
4-element Vector{Float64}: 2.0 1.0 2.0 6.0
The function dist1 can now be used inside of any expression; e.g., to calculate the sum of the distances from x to the points in p:
sum(dist1(x,p))
11.0
One-line functions provide a convenient means to define simple functions. The function dist1 can be defined on a single line:
dist1(x,p) = abs.(x .- p) # (One-line function)
sum(dist1(x,p))
11.0
The sum of the distances from x to the points in p can be defined as the function sumdist1, with only x as an input argument and the current values of p used inside the function:
sumdist1(x) = sum(dist1(x,p))
sumdist1(3.)
11.0
sumdist1(8.)
17.0
An array comprehension can be used to create an array of outputs from sumdist1:
[sumdist1(x) for x in [3., 8.]]
2-element Vector{Float64}: 11.0 17.0
[sumdist1(x) for x in p]
4-element Vector{Int64}: 13 11 11 19
The sum of the distances sumdist1 could have been defined directly without using dist1; but, by first defining dist1, other distance-related functions can be defined in addition to summation:
maxdist1(x) = maximum(dist1(x,p)) # Max distance from x to any point in p
maxdist1(3.)
6.0
maxdist1(8.)
7.0
sum2dist1(x) = sum(dist1(x,p).^2) # Sum of squared distances
sum2dist1(3.)
45.0
The function sum2dist1 of a single variable x can be plotted between LB and UB:
using Plots
LB,UB = 0,10 # Plot range
plot(sum2dist1, LB, UB, legend=false) # Plotting a named function
scatter!(p, [sum2dist1(x) for x in p]) # ! is used to draw on previous plot
To avoid having to first define and name a function (like sum2dist1, e.g.), an anonymous function can be used instead of a named function when the result is only going to be used once as an input argument inside of another function:
plot(x -> sum(dist1(x,p).^2), LB, UB, legend=false) # Plotting with anonymous function
scatter!(p, [sum2dist1(x) for x in p])
In some special cases, it is possible to determine optimal values for a function using analytical techniques, although numerical techniques work in most cases.
$ \begin{eqnarray*} \quad y &=& x - x^3 \\ \dfrac{dy}{dx} &=& 1 - 3x^2 \\ \dfrac{d^{2}y}{dx^2} &=& -6x \\ x^* &=& \pm \sqrt{\dfrac{1}{3}} \end{eqnarray*}$
f(x) = x - x^3
LB,UB = -2,2
using Plots
plot(f, LB, UB, label="f(x)", ylims=(-2,2), linewidth=2)
df(x) = 1 - 3x^2
plot!(df, LB, UB, label="df(x)")
d2f(x) = -6x
plot!(d2f, LB, UB, label="d2f(x)")
xmin, xmax = -sqrt(1/3), sqrt(1/3) # Equate df to 0 => local minima and maxima
(-0.5773502691896257, 0.5773502691896257)
scatter!([xmin],[f(xmin)], label="local minimum")
scatter!([xmax],[f(xmax)], label="local maximum")
xinflect = 0 # Equate d2f to 0 => possible inflection point
scatter!([xinflect],[f(xinflect)], label="inflection point")
df(xmin), df(xmax)
(0.0, 0.0)
d2f(xmin), d2f(xmax)
(3.4641016151377544, -3.4641016151377544)
All numerical optimization methods are designed to minimize a function $f(x)$ since the function can be maximized by minimizing its negation: $-f(x)$. These methods are only guaranteed to find the minimum for unimodal functions.
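A quick grid-based check of the negation trick, using the cubic from above (an illustrative sketch, not one of the search methods below):

```julia
# Maximizing f is the same as minimizing -f: over a grid of candidate
# points, the argmax of f and the argmin of -f land on the same point
f(x) = x - x^3
xs = range(-1, 1, length = 2001)
imax = argmax(f.(xs))           # index of the maximum of f
imin = argmin(-f.(xs))          # index of the minimum of -f
xs[imax] == xs[imin]            # true: both at x ≈ 1/√3
```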
Interval search is a robust but slow method that successively tightens a search interval containing the minimum point until the width of the interval is less than some specified tolerance.
using Plots
xmax = sqrt(1/3)
LB,UB = -1.5,xmax
plot(f, LB, UB, label="unimodal f(x)", ylims=(-2,2), linewidth=2)
x1 = LB + (UB - LB)/3 # Note: The interval is being divided into thirds,
x2 = LB + 2*(UB - LB)/3 # is this optimal?
scatter!([LB,x1,x2,UB],[f(LB),f(x1),f(x2),f(UB)], label="interval search")
eps() # Machine epsilon, the smallest distance between floating-point numbers
2.220446049250313e-16
tol = sqrt(eps()) # Typical tolerance for numerical optimization
1.4901161193847656e-8
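The reason the square root of machine epsilon is used (rather than eps() itself) can be checked numerically, using the cubic f defined earlier (a sketch of the standard argument):

```julia
# Why sqrt(eps()) and not eps()? Near a minimum, f is locally quadratic,
# so a step of sqrt(eps()) away from x* changes f by only about eps(),
# which is below the resolution of Float64; a tighter x-tolerance cannot
# be detected in the function values.
f(x) = x - x^3
xmin = -sqrt(1/3)               # exact local minimizer of f
Δ = sqrt(eps())
abs(f(xmin + Δ) - f(xmin))      # on the order of eps(), i.e., ~1e-16
```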
function myintervalsearch(n,LB,UB)
x1 = LB + (UB - LB)/3 # Note: picking values 1/3 and 2/3 across interval
x2 = LB + 2*(UB - LB)/3
println(n,":",LB,",",x1,",",x2,",",UB)
if abs(x1 - x2) <= 1.5e-8 # Note: not using sqrt(eps()) to be machine independent
return xminest = (x1+x2)/2 # single scalar returned
elseif f(x1) < f(x2)
UB = x2
else
LB = x1
end
return (LB,UB) # single 2-element tuple returned
end
myintervalsearch (generic function with 1 method)
LB,UB = -1.5,xmax
done,n,res = false,0,Inf
while !done
n += 1
res = myintervalsearch(n,LB,UB)
if length(res) == 2
LB,UB = res
else
println("Estimated minimum = ", res)
done = true
end
end
1:-1.5,-0.8075499102701248,-0.11509982054024959,0.5773502691896257 2:-1.5,-1.0383666068467499,-0.5767332136934997,-0.11509982054024959 3:-1.0383666068467499,-0.7306110114112498,-0.4228554159757497,-0.11509982054024959 4:-0.7306110114112498,-0.5254406144542497,-0.32027021749724965,-0.11509982054024959 5:-0.7306110114112498,-0.5938307467732498,-0.45705048213524974,-0.32027021749724965 6:-0.7306110114112498,-0.6394241683192499,-0.5482373252272498,-0.45705048213524974 7:-0.6394241683192499,-0.5786329395912498,-0.5178417108632498,-0.45705048213524974 8:-0.6394241683192499,-0.5988966825005831,-0.5583691966819165,-0.5178417108632498 9:-0.5988966825005831,-0.5718783586214721,-0.5448600347423609,-0.5178417108632498 10:-0.5988966825005831,-0.5808844665811758,-0.5628722506617683,-0.5448600347423609 11:-0.5988966825005831,-0.5868885385543116,-0.5748803946080399,-0.5628722506617683 12:-0.5868885385543116,-0.5788831092567971,-0.5708776799592827,-0.5628722506617683 13:-0.5868885385543116,-0.581551585689302,-0.5762146328242923,-0.5708776799592827 14:-0.581551585689302,-0.5779936171126289,-0.5744356485359559,-0.5708776799592827 15:-0.581551585689302,-0.5791796066381866,-0.5768076275870713,-0.5744356485359559 16:-0.5791796066381866,-0.5775982872707763,-0.5760169679033661,-0.5744356485359559 17:-0.5791796066381866,-0.5781253937265798,-0.5770711808149729,-0.5760169679033661 18:-0.5781253937265798,-0.5774225851188419,-0.576719776511104,-0.5760169679033661 19:-0.5781253937265798,-0.5776568546547545,-0.5771883155829293,-0.576719776511104 20:-0.5776568546547545,-0.5773444952735376,-0.5770321358923208,-0.576719776511104 21:-0.5776568546547545,-0.5774486150672766,-0.5772403754797987,-0.5770321358923208 22:-0.5776568546547545,-0.5775180282631025,-0.5773792018714506,-0.5772403754797987 23:-0.5775180282631025,-0.5774254773353346,-0.5773329264075666,-0.5772403754797987 24:-0.5774254773353346,-0.5773637767168226,-0.5773020760983106,-0.5772403754797987 
25:-0.5774254773353346,-0.57738434358966,-0.5773432098439852,-0.5773020760983106 26:-0.57738434358966,-0.5773569210925436,-0.577329498595427,-0.5773020760983106 27:-0.57738434358966,-0.5773660619249157,-0.5773477802601713,-0.577329498595427 28:-0.5773660619249157,-0.5773538741484194,-0.5773416863719233,-0.577329498595427 29:-0.5773660619249157,-0.5773579367405849,-0.5773498115562541,-0.5773416863719233 30:-0.5773579367405849,-0.577352519951031,-0.5773471031614772,-0.5773416863719233 31:-0.5773579367405849,-0.577354325547549,-0.5773507143545131,-0.5773471031614772 32:-0.577354325547549,-0.577351918085525,-0.5773495106235011,-0.5773471031614772 33:-0.577351918085525,-0.5773503131108424,-0.5773487081361598,-0.5773471031614772 34:-0.577351918085525,-0.5773508481024032,-0.5773497781192816,-0.5773487081361598 35:-0.5773508481024032,-0.577350134780322,-0.577349421458241,-0.5773487081361598 36:-0.5773508481024032,-0.5773503725543492,-0.577349897006295,-0.577349421458241 37:-0.5773508481024032,-0.5773505310703672,-0.5773502140383311,-0.577349897006295 38:-0.5773505310703672,-0.5773503197156764,-0.5773501083609858,-0.577349897006295 39:-0.5773505310703672,-0.57735039016724,-0.5773502492641129,-0.5773501083609858 40:-0.57735039016724,-0.5773502962318219,-0.5773502022964039,-0.5773501083609858 41:-0.57735039016724,-0.5773503275436279,-0.577350264920016,-0.5773502022964039 42:-0.5773503275436279,-0.5773502857945533,-0.5773502440454785,-0.5773502022964039 43:-0.5773503275436279,-0.5773502997109115,-0.577350271878195,-0.5773502440454785 44:-0.5773502997109115,-0.5773502811557671,-0.5773502626006228,-0.5773502440454785 45:-0.5773502811557671,-0.577350268785671,-0.5773502564155747,-0.5773502440454785 Estimated minimum = -0.5773502626006228
(res,xmin,abs(res-xmin))
(-0.5773502626006228, -0.5773502691896257, 6.589002898849117e-9)
Is it optimal to divide the search interval into thirds, as was done above in myintervalsearch? At each iteration, the interval is reduced by a third. It turns out, following Moler, that the maximum reduction is $\rho = 2 - \phi = (3 - \sqrt{5})/2 \approx 0.382$, where $\phi$ is the "golden ratio".
This can be seen by setting the LB and UB of the interval to 0 and 1, respectively, and then determining what $\rho$ would maintain the following ratio as the interval is reduced:
$\quad \dfrac{1 - \rho}{1} = \dfrac{\rho}{1 - \rho}$,
or
$\quad \rho^2 - 3\rho + 1 = 0$.
Solving for $\rho$ gives
$\quad \rho = 2 - \phi = \dfrac{3 - \sqrt{5}}{2} \approx 0.382$,
where $\phi \approx 1.618$ is the golden ratio, so that
$ \begin{eqnarray*} \quad x_1 &=& 0 + \rho(1 - 0) = 0 + \dfrac{3 - \sqrt{5}}{2}(1 - 0) \\ x_2 &=& 0 + (1 - \rho)(1 - 0) = 0 + \dfrac{\sqrt{5} - 1}{2}(1 - 0) \end{eqnarray*}$
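The derivation can be verified numerically: ρ solves the quadratic, equals 2 − φ, and the remaining interval of width 1 − ρ is 1/φ, so each reduced interval keeps the same proportions:

```julia
# Numerical check of the golden-section derivation
ρ = (3 - sqrt(5))/2             # fraction of the interval removed
φ = (1 + sqrt(5))/2             # golden ratio
ρ^2 - 3ρ + 1, ρ - (2 - φ), (1 - ρ) - 1/φ   # all ≈ 0
```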
Picking values for x1 and x2 based on the golden ratio minimizes the number of iterations required in interval search:
function mygoldensearch(n,LB,UB)
x1 = LB + ((3 - sqrt(5))/2)*(UB - LB)
x2 = LB + ((sqrt(5) - 1)/2)*(UB - LB)
println(n,":",LB,",",x1,",",x2,",",UB)
if abs(x1 - x2) <= 1.5e-8 # Note: not using sqrt(eps()) to be machine independent
return xminest = (x1+x2)/2
elseif f(x1) < f(x2)
UB = x2
else
LB = x1
end
return (LB,UB)
end
mygoldensearch (generic function with 1 method)
LB,UB = -1.5,xmax
done,n,res = false,0,Inf
while !done
n += 1
res = mygoldensearch(n,LB,UB)
if length(res) == 2
LB,UB = res
else
println("Estimated minimum = ", res)
done = true
end
end
1:-1.5,-0.7065228037083066,-0.2161269271020676,0.5773502691896257 2:-1.5,-1.0096041233937614,-0.7065228037083062,-0.2161269271020676 3:-1.0096041233937614,-0.7065228037083064,-0.5192082467875225,-0.2161269271020676 4:-0.7065228037083064,-0.5192082467875226,-0.4034414840228514,-0.2161269271020676 5:-0.7065228037083064,-0.5907560409436352,-0.5192082467875226,-0.4034414840228514 6:-0.7065228037083064,-0.6349750095521939,-0.5907560409436352,-0.5192082467875226 7:-0.6349750095521939,-0.5907560409436352,-0.5634272153960813,-0.5192082467875226 8:-0.6349750095521939,-0.60764618400464,-0.5907560409436352,-0.5634272153960813 9:-0.60764618400464,-0.5907560409436352,-0.5803173584570861,-0.5634272153960813 10:-0.5907560409436352,-0.5803173584570861,-0.5738658978826304,-0.5634272153960813 11:-0.5907560409436352,-0.5843045803691795,-0.5803173584570861,-0.5738658978826304 12:-0.5843045803691795,-0.5803173584570861,-0.5778531197947239,-0.5738658978826304 13:-0.5803173584570861,-0.577853119794724,-0.5763301365449925,-0.5738658978826304 14:-0.5803173584570861,-0.5787943752073547,-0.577853119794724,-0.5763301365449925 15:-0.5787943752073547,-0.5778531197947239,-0.5772713919576233,-0.5763301365449925 16:-0.5778531197947239,-0.5772713919576232,-0.5769118643820932,-0.5763301365449925 17:-0.5778531197947239,-0.5774935922191937,-0.5772713919576233,-0.5769118643820932 18:-0.5774935922191937,-0.5772713919576232,-0.5771340646436637,-0.5769118643820932 19:-0.5774935922191937,-0.5773562649052341,-0.5772713919576233,-0.5771340646436637 20:-0.5774935922191937,-0.577408719271583,-0.5773562649052341,-0.5772713919576233 21:-0.577408719271583,-0.5773562649052342,-0.5773238463239722,-0.5772713919576233 22:-0.577408719271583,-0.5773763006903211,-0.5773562649052341,-0.5773238463239722 23:-0.5773763006903211,-0.5773562649052342,-0.5773438821090591,-0.5773238463239722 24:-0.5773763006903211,-0.5773639178941461,-0.5773562649052341,-0.5773438821090591 
25:-0.5773639178941461,-0.5773562649052342,-0.577351535097971,-0.5773438821090591 26:-0.5773562649052342,-0.577351535097971,-0.5773486119163223,-0.5773438821090591 27:-0.5773562649052342,-0.5773533417235853,-0.5773515350979711,-0.5773486119163223 28:-0.5773533417235853,-0.5773515350979711,-0.5773504185419365,-0.5773486119163223 29:-0.5773515350979711,-0.5773504185419366,-0.5773497284723568,-0.5773486119163223 30:-0.5773515350979711,-0.5773508450283914,-0.5773504185419365,-0.5773497284723568 31:-0.5773508450283914,-0.5773504185419365,-0.5773501549588117,-0.5773497284723568 32:-0.5773504185419365,-0.5773501549588116,-0.5773499920554817,-0.5773497284723568 33:-0.5773504185419365,-0.5773502556386064,-0.5773501549588117,-0.5773499920554817 34:-0.5773504185419365,-0.5773503178621416,-0.5773502556386065,-0.5773501549588117 35:-0.5773503178621416,-0.5773502556386065,-0.5773502171823468,-0.5773501549588117 36:-0.5773503178621416,-0.577350279405882,-0.5773502556386064,-0.5773502171823468 37:-0.5773503178621416,-0.577350294094866,-0.577350279405882,-0.5773502556386064 Estimated minimum = -0.577350286750374
A parabola can be fit to any three non-collinear points on a function (termed a "triplet"), and then the minimum value of the fitted parabola can be used to replace one of the points to (hopefully) narrow the search. This can potentially require fewer function evaluations as compared to Golden Section Search.
Given three points $(x_1,y_1),(x_2,y_2),(x_3,y_3)$ on the function, three equations can be created and solved for the three coefficients of the parabola that passes through the points:
$ \begin{eqnarray*} \quad y_1 &=& \alpha_1 x_1^2 + \alpha_2 x_1 + \alpha_3 \\ \quad y_2 &=& \alpha_1 x_2^2 + \alpha_2 x_2 + \alpha_3 \\ \quad y_3 &=& \alpha_1 x_3^2 + \alpha_2 x_3 + \alpha_3 \end{eqnarray*}$
The minimum point of the resulting parabola is then $\dfrac{-\alpha_2}{2\alpha_1}$.
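As a sanity check of the vertex formula, the parabola through three points of a known quadratic (a made-up example, $y = (x-2)^2 + 1$) recovers its minimum exactly:

```julia
# Fit a parabola through three points of y = (x - 2)^2 + 1 and recover
# its known minimum at x = 2 using -α₂/(2α₁)
h(x) = [x^2, x, 1]
x = [0., 1., 3.]                     # any three distinct x-values
y = (x .- 2).^2 .+ 1
α = [h(x[1]) h(x[2]) h(x[3])]' \ y   # solve for the three coefficients
-α[2]/(2α[1])                        # ≈ 2.0
```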
f(x) = x .- x.^3 # function (note: vectorized)
g(x,α) = α[1].*x.^2 .+ α[2].*x .+ α[3] # parabola
h(x) = [x.^2, x, 1]
x = [-1.5, -1., 0.5] # x-values of initial triplet
y = f(x) # y-values of initial triplet
X = [h(x[1]) h(x[2]) h(x[3])]'
3×3 adjoint(::Matrix{Float64}) with eltype Float64: 2.25 -1.5 1.0 1.0 -1.0 1.0 0.25 0.5 1.0
α = X\y # parameter coefficients
3-element Vector{Float64}: 2.0 1.25 -0.7499999999999999
LB,UB = -1.5,xmax
plot(f, LB, UB, label="unimodal f(x)", ylims=(-2,2), linewidth=2)
plot!(i -> g(i,α), LB, UB, label="parabola g(x)") # Note: anonymous function
scatter!(x, [g(i,α) for i in x], label="triplet")
xₚmin = -α[2]/(2α[1]) # using x\_p<TAB>min
scatter!([xₚmin],[g(xₚmin,α)], label="parabola minimum")
scatter!([xₚmin],[f(xₚmin)], label="triplet update")
x = [x[2], xₚmin, x[3]] # Update triplet
α = [h(x[1]) h(x[2]) h(x[3])]'\f(x)
plot!(i -> g(i,α), LB, UB, label="parabola update")
xₚmin = -α[2]/(2α[1])
scatter!([xₚmin], [f(xₚmin)], label="update minimum")
LB,UB = -1.5,xmax
plot(f, LB, UB, label="unimodal f(x)", ylims=(-2,2), linewidth=2)
x = [-.25, -.1, 0] # Update triplet
α = [h(x[1]) h(x[2]) h(x[3])]'\f(x)
plot!(i -> g(i,α), LB, UB, label="bad parabola update")
scatter!(x, [g(i,α) for i in x], label="triplet")
xₚmin = -α[2]/(2α[1])
scatter!([xₚmin], [f(xₚmin)], label="bad update minimum")
Brent's method (1973) is the default method used for univariate optimization in many software packages. It uses parabolic interpolation whenever possible because it can converge to the optimum solution much faster than interval search. However, the parabola's minimum may fall outside the endpoints of the triplet (as seen above); when this is detected, Brent's method switches to interval search for the next step. Also, the triplet points may be collinear, making it impossible to fit a parabola; when this is detected, Brent's method switches to secant search for the next step, which requires only two, possibly collinear, points.
Optimize Function

The function optimize in the Optim package performs general-purpose nonlinear minimization. Note: maximization corresponds to the minimization of the negative of the function to be maximized.
using Optim
LB,UB = -1.5,xmax
optimize(f, LB, UB) # Brent's method is the default
Results of Optimization Algorithm * Algorithm: Brent's Method * Search Interval: [-1.500000, 0.577350] * Minimizer: -5.773503e-01 * Minimum: -3.849002e-01 * Iterations: 11 * Convergence: max(|x - x_upper|, |x - x_lower|) <= 2*(1.5e-08*|x|+2.2e-16): true * Objective Function Calls: 12
optimize(f, LB, UB, GoldenSection())
Results of Optimization Algorithm * Algorithm: Golden Section Search * Search Interval: [-1.500000, 0.577350] * Minimizer: -5.773503e-01 * Minimum: -3.849002e-01 * Iterations: 38 * Convergence: max(|x - x_upper|, |x - x_lower|) <= 2*(1.5e-08*|x|+2.2e-16): true * Objective Function Calls: 39
dist1

Given a bounded search interval [LB, UB], determine a value for x that minimizes the sum of squared distances to each point in p. In this case, the centroid (center of gravity, i.e., the mean) of p corresponds to the point that minimizes the sum of squared distances:
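Why the mean? Setting the derivative of the sum of squared distances to zero, $\frac{d}{dx}\sum_i (x - p_i)^2 = 2\sum_i (x - p_i) = 0$, gives $x = \operatorname{mean}(p)$. A quick numerical check that the mean beats nearby points:

```julia
# The mean of p minimizes the sum of squared distances: perturbing the
# mean in either direction only increases the sum
using Statistics
p = [1, 2, 5, 9]
sum2(x) = sum((x .- p).^2)
xmean = mean(p)                  # 4.25
all(sum2(xmean) < sum2(xmean + δ) for δ in (-0.1, -0.01, 0.01, 0.1))   # true
```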
using Statistics
mean(p)
4.25
This can also be determined using optimize, with the advantage that it can also minimize other functions for which there is no known analytical solution:
LB, UB = 0,10
using Optim
optimize(x -> sum(dist1(x,p).^2), LB, UB)
Results of Optimization Algorithm * Algorithm: Brent's Method * Search Interval: [0.000000, 10.000000] * Minimizer: 4.250000e+00 * Minimum: 3.875000e+01 * Iterations: 5 * Convergence: max(|x - x_upper|, |x - x_lower|) <= 2*(1.5e-08*|x|+2.2e-16): true * Objective Function Calls: 6
To assign the optimal location and minimum total distance values to variables:
res = optimize(x -> sum(dist1(x,p).^2), LB, UB)
xᵒ = res.minimizer
TDᵒ = res.minimum
@show xᵒ TDᵒ;
xᵒ = 4.25 TDᵒ = 38.75
To return only the optimal location:
xᵒ = optimize(x -> sum(dist1(x,p).^2), LB, UB).minimizer
4.25
Can then get the minimum total distance if needed:
TDᵒ = sum(dist1(xᵒ,p).^2)
38.75
plot(x -> sum(dist1(x,p).^2), LB, UB, legend=false, xticks=p)
scatter!([xᵒ],[sum(dist1(xᵒ,p).^2)])
xᵒ = optimize(x -> maximum(dist1(x,p)), LB, UB).minimizer
plot(x -> maximum(dist1(x,p)),LB,UB, legend=false, xticks=p)
scatter!([xᵒ],[maximum(dist1(xᵒ,p))])
Suppose we are trying to determine a location x between 0 and 10 that is as far away as possible from any of the points 1, 2, 5, and 9 in p. This corresponds to maximizing the minimum distance of x from any point in p.
xᵒ = optimize(x -> -minimum(dist1(x,p)), LB, UB).minimizer
plot(x -> minimum(dist1(x,p)), LB, UB, legend=false, xticks=p)
scatter!([xᵒ],[minimum(dist1(xᵒ,p))])
Beware! The global maximum of 2 at 7 was not found. The local maximum of 1.5 at 3.5 can be cut off by increasing the LB used in optimize to 4:
xᵒ = optimize(x -> -minimum(dist1(x,p)), 4, UB).minimizer
plot(x -> minimum(dist1(x,p)), LB, UB, legend=false, xticks=p)
scatter!([xᵒ], [minimum(dist1(xᵒ,p))])
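Rather than adjusting LB by hand, a multistart approach can be sketched (an illustration not from the original; grid search is used in place of optimize to keep the sketch self-contained): since any 1-D search can stop at a local optimum of a multimodal function, search each subinterval between consecutive points of p and keep the best candidate.

```julia
# Multistart sketch: the max-min distance function has a local peak in each
# gap between consecutive points of p, so search every gap and keep the best
p = [1, 2, 5, 9]
LB, UB = 0, 10
mindist(x) = minimum(abs.(x .- p))
edges = sort([LB; p; UB])                 # subinterval breakpoints
best(a, b) = (xs = range(a, b, length = 1001); xs[argmax(mindist.(xs))])
cands = [best(a, b) for (a, b) in zip(edges[1:end-1], edges[2:end])]
xᵒ = cands[argmax(mindist.(cands))]       # global max-min location, x = 7
```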
sumsqrtdist1(x) = sum(sqrt.(dist1(x,p)))
xᵒ = optimize(sumsqrtdist1, LB, UB).minimizer
plot(sumsqrtdist1, LB, UB, legend=false, xticks=p)
scatter!([xᵒ],[sumsqrtdist1(xᵒ)])
f(x) = x - x^3
using Plots
plot(f, -2, 2, label="f(x)", ylims=(-2,2))
scatter!([-1, 0, 1], [0, 0, 0], label="roots")
optimize(x -> abs(f(x)), -2, 2).minimizer
0.9999999961311786
optimize(x -> abs(f(x)), -2, -.5).minimizer
-1.0000000021033948
optimize(x -> abs(f(x)), -.5, .5).minimizer
2.7755575615628914e-17
f(x) = x - x^3
g(x) = 2x^2 - 1
using Plots
plot(f, -2, 2, label="f(x)", ylims=(-2,2))
plot!(g, -2, 2, label="g(x)", ylims=(-2,2))
x = optimize(x -> abs(f(x) - g(x)), -1, 0).minimizer
scatter!([x], [f(x)], label="intersection")
x = optimize(x -> abs(f(x) - g(x)), 0, 1).minimizer
scatter!([x], [f(x)], label="intersection")