Function benchmarks

This notebook showcases how the new and old implementations of the algorithms compare in terms of runtime.

We benchmark both versions of each algorithm for 60 seconds and present the results. The old version of each algorithm is reproduced inline (you can compare it against previous commits of EvoLP.jl), while the new version is loaded directly from EvoLP.jl 1.1.

In [1]:
using Statistics
using EvoLP
using BenchmarkTools

GA

Original

For this example we will use:

  • Rosenbrock function with a 1 ms sleep
  • Random normal generator
  • Rank-based selection
  • Interpolation crossover with $\lambda=0.5$
  • Gaussian mutation with $\sigma = 0.05$
  • No logbook
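For reference, the two variation operators listed above can be sketched from their standard definitions. These are illustrative stand-ins, not EvoLP's internals:

```julia
# Interpolation crossover with λ = 0.5 blends two parents; Gaussian mutation
# with σ = 0.05 adds small normal noise. Standard-definition sketches only.
λ = 0.5
interpolation_cross(a, b) = (1 - λ) .* a .+ λ .* b  # convex blend of parents

σ = 0.05
gaussian_mutate(x) = x .+ σ .* randn(length(x))     # add N(0, σ²) noise per gene

child = interpolation_cross([0.0, 2.0], [2.0, 0.0])  # midpoint when λ = 0.5
```

With $\lambda = 0.5$ the crossover is simply the midpoint of the two parents.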
In [2]:
f(x) = begin
    sleep(0.001)
    return rosenbrock(x)
end
Out[2]:
f (generic function with 1 method)
In [3]:
pop_size = 100
population = normal_rand_vector_pop(pop_size, [0, 0], [1 0; 0 1])
first(population, 3)
Out[3]:
3-element Vector{Vector{Float64}}:
 [1.0517614595972073, 1.8402213160040617]
 [-0.36639692426819936, 0.8691721709188464]
 [-2.403222766197478, 0.5402756188420645]
In [4]:
S = RankBasedSelectionGenerational()
C = InterpolationCrossover(0.5)
M = GaussianMutation(0.05)
Out[4]:
GaussianMutation(0.05)
In [5]:
function oldGA(
    f::Function,
    pop::AbstractVector,
    k_max::Integer,
    S::EvoLP.SelectionMethod,
    C::EvoLP.CrossoverMethod,
    M::EvoLP.MutationMethod
)
    n = length(pop)
    for _ in 1:k_max
        parents = select(S, f.(pop))  # n evaluations per generation: O(k_max * n)
        offspring = [cross(C, pop[p[1]], pop[p[2]]) for p in parents]
        pop .= mutate.(Ref(M), offspring)  # whole population is replaced
    end

    # n more evaluations; returns (minimum f value, index)
    best, best_i = findmin(f, pop)
    n_evals = k_max * n + n
    result = Result(best, pop[best_i], pop, k_max, n_evals)
    return result
end
Out[5]:
oldGA (generic function with 1 method)
In [6]:
@benchmark oldGA(f, population, 10, S, C, M) seconds=60
Out[6]:
BenchmarkTools.Trial: 25 samples with 1 evaluation.
 Range (min … max):  2.375 s …   2.430 s  ┊ GC (min … max): 0.00% … 0.00%
 Time  (median):     2.412 s              ┊ GC (median):    0.00%
 Time  (mean ± σ):   2.407 s ± 14.472 ms  ┊ GC (mean ± σ):  0.02% ± 0.06%

                               █                             
  ▅▁▁▁▁▁▁▅▁▁▅▁▁▁▅▁▅▁▁▁▁▁▁▅▁▁▅▁▁█▁▁▁▁▁▁▁▅▁██▁█▅▅▁▁▁▅▅▁▁▁▅▁▁▅ ▁
  2.37 s         Histogram: frequency by time        2.43 s <

 Memory estimate: 4.20 MiB, allocs estimate: 17782.

Optimised

Here we perform the same benchmark, this time on the optimised GA in EvoLP 1.1:

In [7]:
@benchmark GA(f, population, 10, S, C, M) seconds=60
Out[7]:
BenchmarkTools.Trial: 28 samples with 1 evaluation.
 Range (min … max):  2.160 s …   2.200 s  ┊ GC (min … max): 0.00% … 0.00%
 Time  (median):     2.185 s              ┊ GC (median):    0.00%
 Time  (mean ± σ):   2.185 s ± 10.243 ms  ┊ GC (mean ± σ):  0.02% ± 0.08%

  ▁            ▁█  ▁ ▁  ▁  ▁   ▁ ▁▁▁▁▁▁▁▁▁▁   ▁  ██  ▁▁   █  
  █▁▁▁▁▁▁▁▁▁▁▁▁██▁▁█▁█▁▁█▁▁█▁▁▁█▁██████████▁▁▁█▁▁██▁▁██▁▁▁█ ▁
  2.16 s         Histogram: frequency by time         2.2 s <

 Memory estimate: 4.19 MiB, allocs estimate: 17264.
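The old loop evaluates `f` over the whole population once per generation and then once more at the end (the `O(...)` comments above). One plausible source of the saving, offered as an assumption rather than a reading of EvoLP 1.1's source, is reusing the fitness values already computed in the final generation instead of calling `findmin(f, pop)`:

```julia
# Hypothetical sketch: cache the last generation's fitness values so the
# closing findmin needs no further objective calls. Stand-in objective below.
f(x) = sum(abs2, x)               # stand-in for a costly objective

pop = [randn(2) for _ in 1:100]
fits = f.(pop)                    # n evaluations, kept for reuse
best_f, best_i = findmin(fits)    # no extra evaluations, unlike findmin(f, pop)
```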

PSO

Original

For this example we will use:

  • The same function (Rosenbrock with a 1 ms sleep)
  • Random normal particle generator
    • The new generator should work for both new and old versions of the solver
  • No logbook
In [8]:
population = normal_rand_particle_pop(pop_size, [0, 0], [1 0; 0 1])
first(population, 3)
Out[8]:
3-element Vector{Particle}:
 Particle([0.21505942279789356, -0.41108445112773156], [0.0, 0.0], Inf, [0.21505942279789356, -0.41108445112773156], Inf)
 Particle([-0.6592316786828291, 0.2805535941858653], [0.0, 0.0], Inf, [-0.6592316786828291, 0.2805535941858653], Inf)
 Particle([0.22781682502062509, 0.16678151290193638], [0.0, 0.0], Inf, [0.22781682502062509, 0.16678151290193638], Inf)
In [9]:
function OldPSO(
    f::Function, population::Vector{EvoLP.Particle}, k_max::Integer;
    w=1, c1=1, c2=1
)
    d = length(population[1].x)
    x_best, y_best = copy(population[1].x_best), Inf

    # evaluation loop
    for P in population
        y = f(P.x)  # O(pop)

        if y < y_best
            x_best[:] = P.x
            y_best = y
        end
    end

    for _ in 1:k_max
        for P in population
            r1, r2 = rand(d), rand(d)
            P.x += P.v
            P.v = w*P.v + c1*r1 .* (P.x_best - P.x) + c2*r2 .* (x_best - P.x)
            y = f(P.x)  # O(k_max * pop)

            if y < y_best
                x_best[:] = P.x
                y_best = y
            end

            if y < f(P.x_best)  # O(k_max * pop)
                P.x_best[:] = P.x
            end
        end
    end

    best_i = argmin([f(P.x_best) for P in population])
    best = population[best_i]
    n_evals = 2 * length(population) + 2 * k_max * length(population) + 1

    result = Result(f(best.x_best), best, population, k_max, n_evals)
    return result
end
Out[9]:
OldPSO (generic function with 1 method)
In [10]:
@benchmark OldPSO(f, population, 10) seconds=60
Out[10]:
BenchmarkTools.Trial: 13 samples with 1 evaluation.
 Range (min … max):  4.960 s …   4.989 s  ┊ GC (min … max): 0.00% … 0.00%
 Time  (median):     4.970 s              ┊ GC (median):    0.00%
 Time  (mean ± σ):   4.972 s ± 10.978 ms  ┊ GC (mean ± σ):  0.00% ± 0.00%

  █                                                          
  █▇▁▁▁▁▇▁▁▇▁▁▁▁▁▁▁▁▁▁▇▁▁▁▁▁▁▁▁▇▁▁▁▇▁▁▁▁▁▁▁▁▁▁▁▁▇▇▁▁▁▇▁▁▁▁▇ ▁
  4.96 s         Histogram: frequency by time        4.99 s <

 Memory estimate: 1.33 MiB, allocs estimate: 28419.

Optimised

Here is the new version, which takes roughly half the time:

In [11]:
@benchmark PSO(f, population, 10) seconds=60
Out[11]:
BenchmarkTools.Trial: 25 samples with 1 evaluation.
 Range (min … max):  2.409 s …   2.465 s  ┊ GC (min … max): 0.00% … 0.00%
 Time  (median):     2.436 s              ┊ GC (median):    0.00%
 Time  (mean ± σ):   2.439 s ± 14.486 ms  ┊ GC (mean ± σ):  0.00% ± 0.00%

  ▁           ▁ ▁▁▁  █▁  ▁▁█ ▁     ▁▁█ ▁   ▁    █▁▁      ▁▁  
  █▁▁▁▁▁▁▁▁▁▁▁█▁███▁▁██▁▁███▁█▁▁▁▁▁███▁█▁▁▁█▁▁▁▁███▁▁▁▁▁▁██ ▁
  2.41 s         Histogram: frequency by time        2.47 s <

 Memory estimate: 1.14 MiB, allocs estimate: 21705.
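Most of the old version's cost comes from the redundant `f(P.x_best)` calls inside the loop and in the closing `argmin`. Judging by the extra `Inf` fields in the `Particle` printout above, the 1.1 `Particle` appears to carry cached fitness values; the caching idea can be sketched with a simplified stand-in struct (an assumption, not EvoLP's actual `Particle`):

```julia
# Simplified stand-in for a particle that caches its best fitness, so the
# update step never needs to re-evaluate f at x_best.
mutable struct CachedParticle
    x::Vector{Float64}
    x_best::Vector{Float64}
    y_best::Float64            # cached f(x_best); starts at Inf
end

f(x) = sum(abs2, x)            # stand-in objective

P = CachedParticle([1.0, 2.0], [1.0, 2.0], Inf)
y = f(P.x)                     # the only evaluation in this step
if y < P.y_best                # compare against the cache: no f(P.x_best) call
    P.x_best = copy(P.x)
    P.y_best = y
end
```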

1 + 1 EA

Original

For this problem we will use:

  • OneMax problem with a 1 ms delay
  • Random binary individual of length 100
  • Bitwise mutation with probability 0.001
  • No logbook
In [12]:
sleep_onemax(x) = begin
    sleep(0.001)
    return -onemax(x)
end
Out[12]:
sleep_onemax (generic function with 1 method)
In [13]:
population = binary_vector_pop(1, 100)[1]
M = BitwiseMutation(0.001)
Out[13]:
BitwiseMutation(0.001)
In [14]:
function oldoneplusone(
    f::Function, ind::AbstractVector, k_max::Integer, M::EvoLP.MutationMethod
)
    for _ in 1:k_max
        c = mutate(M, ind)

        if f(c) <= f(ind)  # 2 evaluations per iteration: O(2 * k_max)
            ind = c
        end
    end

    f_ind = f(ind) # O(1)
    n_evals = 2 * k_max + 1
    result = Result(f_ind, ind, [ind], k_max, n_evals)

    return result
end
Out[14]:
oldoneplusone (generic function with 1 method)
In [15]:
@benchmark oldoneplusone(sleep_onemax, population, 10, M) seconds=60
Out[15]:
BenchmarkTools.Trial: 1302 samples with 1 evaluation.
 Range (min … max):  43.003 ms …  48.687 ms  ┊ GC (min … max): 0.00% … 0.00%
 Time  (median):     46.163 ms               ┊ GC (median):    0.00%
 Time  (mean ± σ):   46.060 ms ± 888.528 μs  ┊ GC (mean ± σ):  0.00% ± 0.00%

                             ▂ ▃▄▁▂▅▅▂▄▆▄▇▇▇█▅▄▅▁▂▁▁            
  ▂▁▂▃▂▃▂▂▁▃▃▃▃▃▄▃▄▅▅▄▆▅▇█▇▆████████████████████████▆▅▆▄▅▃▂▃▃▃ ▅
  43 ms           Histogram: frequency by time           48 ms <

 Memory estimate: 20.25 KiB, allocs estimate: 1117.

Optimised

Here is the optimised version. The improvement is negligible, since the new version makes just one fewer objective call per run; with a very expensive objective function, it could still help a little.

In [16]:
@benchmark oneplusone(sleep_onemax, population, 10, M) seconds=60
Out[16]:
BenchmarkTools.Trial: 1369 samples with 1 evaluation.
 Range (min … max):  41.086 ms …  45.970 ms  ┊ GC (min … max): 0.00% … 0.00%
 Time  (median):     43.890 ms               ┊ GC (median):    0.00%
 Time  (mean ± σ):   43.794 ms ± 803.492 μs  ┊ GC (mean ± σ):  0.00% ± 0.00%

                          ▁▁▁▂ ▂▃▃▃▄▃▆▂▄▆▅█▅██▄▆▂▃              
  ▃▃▁▂▁▁▃▃▂▃▂▃▅▃▄▄▃▃▄▆▅▄▇██████████████████████████▆▆▅▆▅▄▄▄▄▂▃ ▅
  41.1 ms         Histogram: frequency by time         45.5 ms <

 Memory estimate: 20.11 KiB, allocs estimate: 1112.
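The saved call is the closing `f(ind)`: if the fitness of the current individual is carried along with it, the value is already known when the loop ends. A sketch of that idea follows (an illustration under stand-in objective and mutation, not EvoLP 1.1's implementation; caching across iterations, as here, would also remove the per-iteration `f(ind)` re-evaluation):

```julia
# Sketch: carry the current fitness alongside the individual so the final
# evaluation after the loop is unnecessary.
f(x) = -count(x)                        # stand-in: minimise negative onemax

function cached_oneplusone(f, ind, k_max)
    f_ind = f(ind)                      # evaluated once up front
    for _ in 1:k_max
        c = xor.(ind, rand(length(ind)) .< 0.1)  # hypothetical bitwise mutation
        f_c = f(c)
        if f_c <= f_ind
            ind, f_ind = c, f_c         # the cache travels with the winner
        end
    end
    return f_ind, ind                   # f_ind is already f(ind): no extra call
end

best_f, best = cached_oneplusone(f, falses(8), 10)
```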

Conclusion

We benchmarked the new implementations of the algorithms provided in EvoLP 1.1 against their previous versions. There is a noticeable reduction in runtime for PSO. For the GA and the 1+1 EA, the reduction is small but can still become significant when the objective function is expensive to evaluate.