Commit: Typos

gdalle committed Feb 22, 2024
1 parent fa52fbe commit d601615
Showing 3 changed files with 25 additions and 26 deletions.
13 changes: 7 additions & 6 deletions docs/make.jl
@@ -42,11 +42,7 @@ function literate_title(path)
 end
 
 pages = [
-    "First steps" => [
-        "Home" => "index.md",
-        "Alternatives" => "alternatives.md",
-        "API reference" => "api.md",
-    ],
+    "Home" => "index.md",
     "Tutorials" => [
         "Basics" => joinpath("examples", "basics.md"),
         "Types" => joinpath("examples", "types.md"),
@@ -55,7 +51,12 @@ pages = [
"Control dependency" => joinpath("examples", "controlled.md"),
"Autodiff" => joinpath("examples", "autodiff.md"),
],
"Advanced" => ["Debugging" => "debugging.md", "Formulas" => "formulas.md"],
"API reference" => "api.md",
"Advanced" => [
"Alternatives" => "alternatives.md",
"Debugging" => "debugging.md",
"Formulas" => "formulas.md",
],
]

fmt = Documenter.HTML(;
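For context (not part of this commit), the `pages` vector above is the navigation tree handed to Documenter.jl's `makedocs`. A minimal sketch of the surrounding call, assuming the rest of `docs/make.jl` follows the standard Documenter pattern:

```julia
using Documenter

# `pages` is the navigation tree edited in this commit;
# `fmt` is the Documenter.HTML(...) object defined just below it.
makedocs(;
    sitename="HiddenMarkovModels.jl",
    format=fmt,
    pages=pages,
)
```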
36 changes: 17 additions & 19 deletions docs/src/alternatives.md
@@ -4,26 +4,25 @@

 We compare features among the following Julia packages:
 
-* HiddenMarkovModels.jl (abbreviated to HMMs.jl)
+* HiddenMarkovModels.jl
 * [HMMBase.jl](https://github.com/maxmouchet/HMMBase.jl)
 * [HMMGradients.jl](https://github.com/idiap/HMMGradients.jl)
 
 We discard [MarkovModels.jl](https://github.com/FAST-ASR/MarkovModels.jl) because its focus is GPU computation.
 There are also more generic packages for probabilistic programming, which are able to perform MCMC or variational inference (e.g. [Turing.jl](https://github.com/TuringLang/Turing.jl)), but we leave those aside.
 
-| | HMMs.jl | HMMBase.jl | HMMGradients.jl |
-| ------------------------- | ------------------- | ---------------- | --------------- |
-| Algorithms[^1] | V, FB, BW | V, FB, BW | FB |
-| Number types | anything | `Float64` | `AbstractFloat` |
-| Observation types | anything | number or vector | anything |
-| Observation distributions | DensityInterface.jl | Distributions.jl | manual |
-| Multiple sequences | yes | no | yes |
-| Priors / structures | possible | no | possible |
-| Temporal dependency | yes | no | no |
-| Control dependency | yes | no | no |
-| Automatic differentiation | yes | no | yes |
-| Linear algebra speedup | yes | yes | no |
-| Numerical stability | scaling+ | scaling+ | log |
+| | HiddenMarkovModels.jl | HMMBase.jl | HMMGradients.jl |
+| ------------------------- | --------------------- | ---------------- | --------------- |
+| Algorithms[^1] | V, FB, BW | V, FB, BW | FB |
+| Number types | anything | `Float64` | `AbstractFloat` |
+| Observation types | anything | number or vector | anything |
+| Observation distributions | DensityInterface.jl | Distributions.jl | manual |
+| Multiple sequences | yes | no | yes |
+| Priors / structures | possible | no | possible |
+| Control dependency | yes | no | no |
+| Automatic differentiation | yes | no | yes |
+| Linear algebra speedup | yes | yes | no |
+| Numerical stability | scaling+ | scaling+ | log |
 
 
 !!! info "Very small probabilities"
@@ -43,17 +42,16 @@ We compare features among the following Python packages:

 | | hmmlearn | pomegranate | dynamax |
 | ------------------------- | -------------------- | --------------------- | -------------------- |
-| Algorithms[^1] | V, FB, BW, VI | V, FB, BW | FB, V, BW, GD |
-| Number types | NumPy format | PyTorch format | JAX format |
+| Algorithms[^1] | V, FB, BW, VI | FB, BW | FB, V, BW, GD |
+| Number types | NumPy formats | PyTorch formats | JAX formats |
 | Observation types | number or vector | number or vector | number or vector |
 | Observation distributions | discrete or Gaussian | pomegranate catalogue | discrete or Gaussian |
 | Multiple sequences | yes | yes | yes |
-| Priors / structures | yes | no | ? |
-| Temporal dependency | no | no | no |
+| Priors / structures | yes | no | yes |
 | Control dependency | no | no | no |
 | Automatic differentiation | no | yes | yes |
 | Linear algebra speedup | yes | yes | yes |
-| Logarithmic probabilities | scaling / log | log | log |
+| Numerical stability | scaling / log | log | log |
 
 
 [^1]: V = Viterbi, FB = Forward-Backward, BW = Baum-Welch, VI = Variational Inference, GD = Gradient Descent
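To make the algorithm abbreviations in the tables concrete, here is a minimal sketch of how V, FB, and BW are called in HiddenMarkovModels.jl. The constructor and return values follow the package's documented interface as we understand it; check the API reference for exact signatures.

```julia
using Distributions, HiddenMarkovModels

# Two-state Gaussian HMM: initial probabilities, transition matrix, emission distributions.
init = [0.6, 0.4]
trans = [0.9 0.1; 0.2 0.8]
dists = [Normal(-1.0, 1.0), Normal(+1.0, 1.0)]
hmm = HMM(init, trans, dists)

obs_seq = rand(hmm, 100).obs_seq            # simulate 100 observations
best_states, _ = viterbi(hmm, obs_seq)      # V: most likely state sequence
γ, _ = forward_backward(hmm, obs_seq)       # FB: posterior state marginals
hmm_est, loglik = baum_welch(hmm, obs_seq)  # BW: EM parameter estimation
```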
2 changes: 1 addition & 1 deletion examples/autodiff.jl
@@ -168,7 +168,7 @@ Enzyme.jl requires preallocated storage for the gradients, which we happily prov

 ∇parameters_enzyme = Enzyme.make_zero(parameters)
 ∇obs_enzyme = Enzyme.make_zero(obs_seq)
-∇control_enzyme = Enzyme.make_zero(control_seq)
+∇control_enzyme = Enzyme.make_zero(control_seq);
 
 #=
 The syntax is a bit more complex, see the Enzyme.jl docs for details.
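For readers unfamiliar with the Enzyme.jl convention alluded to above: reverse-mode gradients are accumulated into preallocated shadow buffers wrapped in `Duplicated`. A generic sketch on a toy function (not the package's actual loss):

```julia
using Enzyme

f(x) = sum(abs2, x)         # toy scalar-valued function
x = [1.0, 2.0, 3.0]
∇x = Enzyme.make_zero(x)    # preallocated shadow storage, as in the snippet above

# Reverse-mode AD: the gradient of f at x is accumulated into ∇x.
Enzyme.autodiff(Reverse, f, Active, Duplicated(x, ∇x))
@assert ∇x == 2 .* x        # ∇(sum of squares) = 2x
```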
