From 40c4d5746c61f35d7c844bad9781dc244745f113 Mon Sep 17 00:00:00 2001
From: Guillaume Dalle <22795598+gdalle@users.noreply.github.com>
Date: Tue, 12 Sep 2023 19:09:44 +0200
Subject: [PATCH] Remove outdated info on benchmarks (#46)

---
 docs/src/alt_performance.md | 12 ------------
 1 file changed, 12 deletions(-)

diff --git a/docs/src/alt_performance.md b/docs/src/alt_performance.md
index c66c9d99..74c889b4 100644
--- a/docs/src/alt_performance.md
+++ b/docs/src/alt_performance.md
@@ -17,9 +17,6 @@ The test case is an HMM with diagonal multivariate normal observations.
 - `K`: number of trajectories
 - `I`: number of Baum-Welch iterations
 
-!!! danger "Why is this empty?"
-    The benchmark suite is computationally expensive, and we only run it once for each new release. If the following section contains no plots and the links are broken, you're probably reading the development documentation or a local build of the website. Check out the [stable documentation](https://gdalle.github.io/HiddenMarkovModels.jl/stable/) instead.
-
 ### Single sequence
 
 Full benchmark logs: [`results_single_sequence.csv`](./assets/benchmark/results/results_single_sequence.csv).
@@ -68,15 +65,6 @@ On the plots, we compensate it by subtracting the runtime of the same algorithm
 This estimate for the overhead is put in parentheses in the legend.
 It is probably over-pessimistic, which is fair because it means that the comparison is now biased against Julia.
 
-### Allocations
-
-A major bottleneck of performance in Julia is memory allocations.
-The benchmarks for HMMs.jl thus employ a custom implementation of diagonal multivariate normals, which is entirely allocation-free.
-
-This partly explains the performance gap with HMMBase.jl as the dimension `D` grows beyond 1.
-Such a trick is also possible with HMMBase.jl, but slightly more demanding since it requires subtyping `Distribution` from Distributions.jl, instead of just implementing DensityInterface.jl.
-We might do it in future benchmarks.
-
 ### Parallelism
 
 The packages we include have different approaches to parallelism, which can bias the evaluation in complex ways: