
Added navbar and removed insert_navbar.sh
github-actions[bot] committed Oct 13, 2024
Commit 09ad1f0 (1 parent: 339d3db)
Showing 2 changed files with 2 additions and 0 deletions.
1 change: 1 addition & 0 deletions previews/PR100/index.html
@@ -457,6 +457,7 @@
});
</script>
<!-- NAVBAR END -->

<div id="documenter"><nav class="docs-sidebar"><div class="docs-package-name"><span class="docs-autofit"><a href>ParetoSmooth.jl</a></span></div><button class="docs-search-query input is-rounded is-small is-clickable my-2 mx-auto py-1 px-2" id="documenter-search-query">Search docs (Ctrl + /)</button><ul class="docs-menu"><li class="is-active"><a class="tocitem" href>Home</a></li><li><a class="tocitem" href="turing/">Using with Turing</a></li></ul><div class="docs-version-selector field has-addons"><div class="control"><span class="docs-label button is-static is-size-7">Version</span></div><div class="docs-selector control is-expanded"><div class="select is-fullwidth is-size-7"><select id="documenter-version-selector"></select></div></div></div></nav><div class="docs-main"><header class="docs-navbar"><a class="docs-sidebar-button docs-navbar-link fa-solid fa-bars is-hidden-desktop" id="documenter-sidebar-button" href="#"></a><nav class="breadcrumb"><ul class="is-hidden-mobile"><li class="is-active"><a href>Home</a></li></ul><ul class="is-hidden-tablet"><li class="is-active"><a href>Home</a></li></ul></nav><div class="docs-right"><a class="docs-navbar-link" href="https://github.com/TuringLang/ParetoSmooth.jl/blob/main/docs/src/index.md#" title="Edit source on GitHub"><span class="docs-icon fa-solid"></span></a><a class="docs-settings-button docs-navbar-link fa-solid fa-gear" id="documenter-settings-button" href="#" title="Settings"></a><a class="docs-article-toggle-button fa-solid fa-chevron-up" id="documenter-article-toggle-button" href="javascript:;" title="Collapse all docstrings"></a></div></header><article class="content" id="documenter-page"><h1 id="ParetoSmooth"><a class="docs-heading-anchor" href="#ParetoSmooth">ParetoSmooth</a><a id="ParetoSmooth-1"></a><a class="docs-heading-anchor-permalink" href="#ParetoSmooth" title="Permalink"></a></h1><p>Documentation for <a href="https://github.com/TuringLang/ParetoSmooth.jl">ParetoSmooth</a>.</p><ul><li><a href="#ParetoSmooth.ModelComparison"><code>ParetoSmooth.ModelComparison</code></a></li><li><a href="#ParetoSmooth.Psis"><code>ParetoSmooth.Psis</code></a></li><li><a href="#ParetoSmooth.PsisLoo"><code>ParetoSmooth.PsisLoo</code></a></li><li><a href="#ParetoSmooth.loo-Tuple"><code>ParetoSmooth.loo</code></a></li><li><a href="#ParetoSmooth.loo_compare-Union{Tuple{AbstractVector{&lt;:PsisLoo}}, Tuple{S}} where S&lt;:(Union{Tuple{Vararg{var&quot;#s41&quot;}}, AbstractVector{&lt;:var&quot;#s41&quot;}} where var&quot;#s41&quot;&lt;:Union{AbstractString, Symbol})"><code>ParetoSmooth.loo_compare</code></a></li><li><a href="#ParetoSmooth.loo_from_psis-Tuple{AbstractArray{&lt;:Real, 3}, Psis}"><code>ParetoSmooth.loo_from_psis</code></a></li><li><a href="#ParetoSmooth.naive_lpd"><code>ParetoSmooth.naive_lpd</code></a></li><li><a href="#ParetoSmooth.pointwise_log_likelihoods-Tuple{Function, AbstractArray{&lt;:Union{Missing, Real}, 3}, Any}"><code>ParetoSmooth.pointwise_log_likelihoods</code></a></li><li><a href="#ParetoSmooth.psis-Union{Tuple{AbstractArray{T, 3}}, Tuple{T}} where T&lt;:Real"><code>ParetoSmooth.psis</code></a></li><li><a href="#ParetoSmooth.psis!-Union{Tuple{AbstractVector{T}}, Tuple{T}, Tuple{AbstractVector{T}, T}} where T&lt;:Real"><code>ParetoSmooth.psis!</code></a></li><li><a href="#ParetoSmooth.psis_ess-Union{Tuple{T}, Tuple{AbstractMatrix{T}, AbstractVector{T}}} where T&lt;:Real"><code>ParetoSmooth.psis_ess</code></a></li><li><a href="#ParetoSmooth.psis_loo-Tuple{AbstractArray{&lt;:Real, 3}, 
Vararg{Any}}"><code>ParetoSmooth.psis_loo</code></a></li><li><a href="#ParetoSmooth.relative_eff-Tuple{AbstractArray{&lt;:Real, 3}}"><code>ParetoSmooth.relative_eff</code></a></li><li><a href="#ParetoSmooth.sup_ess-Union{Tuple{T}, Tuple{AbstractMatrix{T}, AbstractVector{T}}} where T&lt;:Real"><code>ParetoSmooth.sup_ess</code></a></li></ul><article class="docstring"><header><a class="docstring-article-toggle-button fa-solid fa-chevron-down" href="javascript:;" title="Collapse docstring"></a><a class="docstring-binding" id="ParetoSmooth.ModelComparison" href="#ParetoSmooth.ModelComparison"><code>ParetoSmooth.ModelComparison</code></a><span class="docstring-category">Type</span><span class="is-flex-grow-1 docstring-article-toggle-button" title="Collapse docstring"></span></header><section><div><pre><code class="language-julia hljs">ModelComparison</code></pre><p>A struct containing the results of model comparison.</p><p><strong>Fields</strong></p><ul><li><code>pointwise::KeyedArray</code>: A <code>KeyedArray</code> of pointwise estimates. See [<code>PsisLoo</code>]@ref.<ul><li><code>estimates::KeyedArray</code>: A table containing the results of model comparison, with the following columns –<ul><li><code>cv_elpd</code>: The difference in total leave-one-out cross validation scores between models.</li><li><code>cv_avg</code>: The difference in average LOO-CV scores between models.</li><li><code>weight</code>: A set of Akaike-like weights assigned to each model, which can be used in pseudo-Bayesian model averaging.</li></ul></li><li><code>std_err::NamedTuple</code>: A named tuple containing the standard error of <code>cv_elpd</code>. Note that these estimators (incorrectly) assume all folds are independent, despite their substantial overlap, which creates a downward biased estimator. LOO-CV differences are <em>not</em> asymptotically normal, so these standard errors cannot be used to calculate a confidence interval.</li><li><code>gmpd::NamedTuple</code>: The geometric mean of the predictive distribution. It equals the geometric mean of the probability assigned to each data point by the model, that is, <code>exp(cv_avg)</code>. This measure is only meaningful for classifiers (variables with discrete outcomes). We can think of it as measuring how often the model was right: A model that always predicts incorrectly will have a GMPD of 0, while a model that always predicts correctly will have a GMPD of 1. 
However, the GMPD gives a model &quot;Partial points&quot; between 0 and 1 whenever the model assigns a probability other than 0 or 1 to the outcome that actually happened.</li></ul></li></ul><p>See also: <a href="#ParetoSmooth.PsisLoo"><code>PsisLoo</code></a></p></div><a class="docs-sourcelink" target="_blank" href="https://github.com/TuringLang/ParetoSmooth.jl/blob/e96153ef76594bb4560845977b4c0db90bfa6ce9/src/ModelComparison.jl#L6-L36">source</a></section></article><article class="docstring"><header><a class="docstring-article-toggle-button fa-solid fa-chevron-down" href="javascript:;" title="Collapse docstring"></a><a class="docstring-binding" id="ParetoSmooth.Psis" href="#ParetoSmooth.Psis"><code>ParetoSmooth.Psis</code></a><span class="docstring-category">Type</span><span class="is-flex-grow-1 docstring-article-toggle-button" title="Collapse docstring"></span></header><section><div><pre><code class="language-julia hljs">Psis{R&lt;:Real, AT&lt;:AbstractArray{R, 3}, VT&lt;:AbstractVector{R}}</code></pre><p>A struct containing the results of Pareto-smoothed importance sampling.</p><p><strong>Fields</strong></p><ul><li><code>weights</code>: A vector of smoothed, truncated, and normalized importance sampling weights.</li><li><code>pareto_k</code>: Estimates of the shape parameter <code>k</code> of the generalized Pareto distribution.</li><li><code>ess</code>: Estimated effective sample size for each LOO evaluation, based on the variance of the weights.</li><li><code>sup_ess</code>: Estimated effective sample size for each LOO evaluation, based on the supremum norm, i.e. the size of the largest weight. More likely than <code>ess</code> to warn when importance sampling has failed. However, it can have a high variance.</li><li><code>r_eff</code>: The relative efficiency of the MCMC chain, i.e. ESS / posterior sample size.</li><li><code>tail_len</code>: Vector indicating how large the &quot;tail&quot; is for each observation.</li><li><code>posterior_sample_size</code>: How many draws from an MCMC chain were used for PSIS.</li><li><code>data_size</code>: How many data points were used for PSIS.</li></ul></div><a class="docs-sourcelink" target="_blank" href="https://github.com/TuringLang/ParetoSmooth.jl/blob/e96153ef76594bb4560845977b4c0db90bfa6ce9/src/ImportanceSampling.jl#L18-L36">source</a></section></article><article class="docstring"><header><a class="docstring-article-toggle-button fa-solid fa-chevron-down" href="javascript:;" title="Collapse docstring"></a><a class="docstring-binding" id="ParetoSmooth.PsisLoo" href="#ParetoSmooth.PsisLoo"><code>ParetoSmooth.PsisLoo</code></a><span class="docstring-category">Type</span><span class="is-flex-grow-1 docstring-article-toggle-button" title="Collapse docstring"></span></header><section><div><pre><code class="language-julia hljs">PsisLoo &lt;: AbstractCV</code></pre><p>A struct containing the results of leave-one-out cross validation computed with Pareto smoothed importance sampling.</p><p><strong>Fields</strong></p><ul><li><code>estimates::KeyedArray</code>: A KeyedArray with columns <code>:total, :se_total, :mean, :se_mean</code>, and rows <code>:cv_elpd, :naive_lpd, :p_eff</code>. 
See <code># Extended help</code> for more.<ul><li><code>:cv_elpd</code> contains estimates for the out-of-sample prediction error, as estimated using leave-one-out cross validation.</li><li><code>:naive_lpd</code> contains estimates of the in-sample prediction error.</li><li><code>:p_eff</code> is the effective number of parameters – a model with a <code>p_eff</code> of 2 is &quot;about as overfit&quot; as a model with 2 parameters and no regularization.</li></ul></li><li><code>pointwise::KeyedArray</code>: A <code>KeyedArray</code> of pointwise estimates with 5 columns –<ul><li><code>:cv_elpd</code> contains the estimated out-of-sample error for this point, as measured</li></ul>using leave-one-out cross validation.<ul><li><code>:naive_lpd</code> contains the in-sample estimate of error for this point.</li><li><code>:p_eff</code> is the difference in the two previous estimates.</li><li><code>:ess</code> is the L2 effective sample size, which estimates the simulation error caused by using Monte Carlo estimates. It does not measure model performance. </li><li><code>:inf_ess</code> is the supremum-based effective sample size, which estimates the simulation error caused by using Monte Carlo estimates. It is more robust than <code>:ess</code> and should therefore be preferred. It does not measure model performance. </li><li><code>:pareto_k</code> is the estimated value for the parameter <code>ξ</code> of the generalized Pareto distribution. Values above .7 indicate that PSIS has failed to approximate the true distribution.</li></ul></li><li><code>psis_object::Psis</code>: A <code>Psis</code> object containing the results of Pareto-smoothed importance sampling.</li><li><code>gmpd</code>: The geometric mean of the predictive density. It is defined as the geometric mean of the probability assigned to each data point by the model, i.e. <code>exp(cv_avg)</code>. This measure is only interpretable for classifiers (variables with discrete outcomes). We can think of it as measuring how often the model was right: A model that always predicts incorrectly will have a GMPD of 0, while a model that always predicts correctly will have a GMPD of 1. However, the GMPD gives a model &quot;Partial points&quot; between 0 and 1 whenever the model assigns a probability other than 0 or 1 to the outcome that actually happened, making it a fully Bayesian measure of model quality.</li><li><code>mcse</code>: A float containing the estimated Monte Carlo standard error for the total cross-validation estimate.</li></ul><p><strong>Extended help</strong></p><p>The total score depends on the sample size, and summarizes the weight of evidence for or against a model. Total scores are on an interval scale, meaning that only differences of scores are meaningful. <em>It is not possible to interpret a total score by looking at it.</em> The total score is not a goodness-of-fit statistic (for this, see the average score).</p><p>The average score is the total score, divided by the sample size. It estimates the expected log score, i.e. the expectation of the log probability density of observing the next point. The average score is a relative goodness-of-fit statistic which does not depend on sample size. 
</p><p>Unlike for chi-square goodness of fit tests, models do not have to be nested for model comparison using cross-validation methods.</p><p>See also: [<code>loo</code>]@ref, [<code>bayes_cv</code>]@ref, [<code>psis_loo</code>]@ref, [<code>Psis</code>]@ref</p></div><a class="docs-sourcelink" target="_blank" href="https://github.com/TuringLang/ParetoSmooth.jl/blob/e96153ef76594bb4560845977b4c0db90bfa6ce9/src/LeaveOneOut.jl#L23-L32">source</a></section></article><article class="docstring"><header><a class="docstring-article-toggle-button fa-solid fa-chevron-down" href="javascript:;" title="Collapse docstring"></a><a class="docstring-binding" id="ParetoSmooth.loo-Tuple" href="#ParetoSmooth.loo-Tuple"><code>ParetoSmooth.loo</code></a><span class="docstring-category">Method</span><span class="is-flex-grow-1 docstring-article-toggle-button" title="Collapse docstring"></span></header><section><div><pre><code class="language-julia hljs">function loo(args...; kwargs...) -&gt; PsisLoo</code></pre><p>Compute an approximate leave-one-out cross-validation score.</p><p>Currently, this function only serves to call <code>psis_loo</code>, but this could change in the future. The default methods or return type may change without warning, so we recommend using <code>psis_loo</code> instead if reproducibility is required.</p><p>See also: <a href="#ParetoSmooth.psis_loo-Tuple{AbstractArray{&lt;:Real, 3}, Vararg{Any}}"><code>psis_loo</code></a>, <a href="#ParetoSmooth.PsisLoo"><code>PsisLoo</code></a>.</p></div><a class="docs-sourcelink" target="_blank" href="https://github.com/TuringLang/ParetoSmooth.jl/blob/e96153ef76594bb4560845977b4c0db90bfa6ce9/src/LeaveOneOut.jl#L71-L81">source</a></section></article><article class="docstring"><header><a class="docstring-article-toggle-button fa-solid fa-chevron-down" href="javascript:;" title="Collapse docstring"></a><a class="docstring-binding" id="ParetoSmooth.loo_compare-Union{Tuple{AbstractVector{&lt;:PsisLoo}}, Tuple{S}} where S&lt;:(Union{Tuple{Vararg{var&quot;#s41&quot;}}, AbstractVector{&lt;:var&quot;#s41&quot;}} where var&quot;#s41&quot;&lt;:Union{AbstractString, Symbol})" href="#ParetoSmooth.loo_compare-Union{Tuple{AbstractVector{&lt;:PsisLoo}}, Tuple{S}} where S&lt;:(Union{Tuple{Vararg{var&quot;#s41&quot;}}, AbstractVector{&lt;:var&quot;#s41&quot;}} where var&quot;#s41&quot;&lt;:Union{AbstractString, Symbol})"><code>ParetoSmooth.loo_compare</code></a><span class="docstring-category">Method</span><span class="is-flex-grow-1 docstring-article-toggle-button" title="Collapse docstring"></span></header><section><div><pre><code class="language-julia hljs">function loo_compare(
cv_results...;
sort_models::Bool=true,
[remainder of diff not shown]
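
The docstrings added in this file describe the package's core API: `psis_loo` (and its thin wrapper `loo`), the `PsisLoo` and `ModelComparison` result types, and `loo_compare`. As a rough orientation for how those pieces fit together, here is a minimal sketch based only on the docstrings above; it is not part of this commit. It assumes the pointwise log-likelihood array is laid out as (data points, posterior draws, chains), and the array sizes and random values are placeholders.

```julia
# Minimal sketch (not part of this commit): the API documented in the diff above.
# Assumption: log-likelihood arrays are shaped (data points, posterior draws, chains).
using ParetoSmooth

log_lik_1 = randn(100, 1000, 4)  # placeholder pointwise log-likelihoods, model 1
log_lik_2 = randn(100, 1000, 4)  # placeholder pointwise log-likelihoods, model 2

cv_1 = psis_loo(log_lik_1)       # PsisLoo: estimates table, pointwise table, Psis object
cv_2 = psis_loo(log_lik_2)

# loo_compare is documented as accepting multiple CV results (varargs):
comparison = loo_compare(cv_1, cv_2)  # ModelComparison: cv_elpd differences, weights, gmpd
```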
1 change: 1 addition & 0 deletions previews/PR100/turing/index.html
@@ -457,6 +457,7 @@
});
</script>
<!-- NAVBAR END -->

<div id="documenter"><nav class="docs-sidebar"><div class="docs-package-name"><span class="docs-autofit"><a href="../">ParetoSmooth.jl</a></span></div><button class="docs-search-query input is-rounded is-small is-clickable my-2 mx-auto py-1 px-2" id="documenter-search-query">Search docs (Ctrl + /)</button><ul class="docs-menu"><li><a class="tocitem" href="../">Home</a></li><li class="is-active"><a class="tocitem" href>Using with Turing</a><ul class="internal"><li><a class="tocitem" href="#For-Loop-Method"><span>For Loop Method</span></a></li><li><a class="tocitem" href="#Dot-Vectorization-Method"><span>Dot Vectorization Method</span></a></li><li><a class="tocitem" href="#Incorrect-Model-Specification"><span>Incorrect Model Specification</span></a></li></ul></li></ul><div class="docs-version-selector field has-addons"><div class="control"><span class="docs-label button is-static is-size-7">Version</span></div><div class="docs-selector control is-expanded"><div class="select is-fullwidth is-size-7"><select id="documenter-version-selector"></select></div></div></div></nav><div class="docs-main"><header class="docs-navbar"><a class="docs-sidebar-button docs-navbar-link fa-solid fa-bars is-hidden-desktop" id="documenter-sidebar-button" href="#"></a><nav class="breadcrumb"><ul class="is-hidden-mobile"><li class="is-active"><a href>Using with Turing</a></li></ul><ul class="is-hidden-tablet"><li class="is-active"><a href>Using with Turing</a></li></ul></nav><div class="docs-right"><a class="docs-navbar-link" href="https://github.com/TuringLang/ParetoSmooth.jl/blob/main/docs/src/turing.md#" title="Edit source on GitHub"><span class="docs-icon fa-solid"></span></a><a class="docs-settings-button docs-navbar-link fa-solid fa-gear" id="documenter-settings-button" href="#" title="Settings"></a><a class="docs-article-toggle-button fa-solid fa-chevron-up" id="documenter-article-toggle-button" href="javascript:;" title="Collapse all docstrings"></a></div></header><article class="content" id="documenter-page"><h1 id="Turing-Example"><a class="docs-heading-anchor" href="#Turing-Example">Turing Example</a><a id="Turing-Example-1"></a><a class="docs-heading-anchor-permalink" href="#Turing-Example" title="Permalink"></a></h1><p>This example demonstrates how to correctly compute PSIS LOO for a model developed with <a href="https://turinglang.org/stable/">Turing.jl</a>. Below, we show two ways to correctly specify the model in Turing. What is most important is to specify the model so that pointwise log densities are computed for each observation. </p><p>To make things simple, we will use a Gaussian model in each example. Suppose observations <span>$Y = \{y_1,y_2,\dots y_n\}$</span> come from a Gaussian distribution with an uknown parameter <span>$\mu$</span> and known parameter <span>$\sigma=1$</span>. The model can be stated as follows:</p><p><span>$\mu \sim \mathrm{normal}(0, 1)$</span></p><p><span>$Y \sim \mathrm{Normal}(\mu, 1)$</span></p><h2 id="For-Loop-Method"><a class="docs-heading-anchor" href="#For-Loop-Method">For Loop Method</a><a id="For-Loop-Method-1"></a><a class="docs-heading-anchor-permalink" href="#For-Loop-Method" title="Permalink"></a></h2><p>One way to specify a model to correctly compute PSIS LOO is to iterate over the observations using a for loop, as follows:</p><pre><code class="language-julia hljs">using Turing
using ParetoSmooth
using Distributions
[remainder of diff not shown]
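
The Turing example in this file is cut off by the diff view right after its `using` statements. For orientation only, the following is a hypothetical sketch of the kind of for-loop model the surrounding text describes (μ ~ Normal(0, 1), each yᵢ ~ Normal(μ, 1), one likelihood statement per observation so pointwise log densities are recorded). The function name, data, and the commented `psis_loo` call are illustrative assumptions, not the file's actual contents.

```julia
using Turing
using ParetoSmooth
using Distributions

# One likelihood statement per observation, so Turing records a separate
# pointwise log density for each y[i], as the example's text requires.
@model function gaussian_model(y)
    μ ~ Normal(0, 1)
    for i in eachindex(y)
        y[i] ~ Normal(μ, 1)
    end
end

y = rand(Normal(0.5, 1), 50)                  # illustrative synthetic data
chain = sample(gaussian_model(y), NUTS(), 1_000)
# psis_loo(gaussian_model(y), chain)          # assumed Turing-model method; see the full docs page
```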
