diff --git a/docs/articles/cmdstanr.html b/docs/articles/cmdstanr.html
index 0e221bd2..cac1ca1b 100644
--- a/docs/articles/cmdstanr.html
+++ b/docs/articles/cmdstanr.html
@@ -26,7 +26,7 @@
@@ -133,15 +133,15 @@

Getting started with CmdStanR

Jonah Gabry, Rok Češnovar, and Andrew Johnson

Source: vignettes/cmdstanr.Rmd

Introduction

@@ -154,7 +154,7 @@

Introduction
 # we recommend running this in a fresh R session or restarting your current session
-install.packages("cmdstanr", repos = c("https://mc-stan.org/r-packages/", getOption("repos")))

+install.packages("cmdstanr", repos = c("https://stan-dev.r-universe.dev", getOption("repos")))

We can now load the package like any other R package. We’ll also load the bayesplot and posterior packages to use later in examples.
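The loading step described above can be sketched as follows (a minimal sketch; the package names come from the surrounding text, and all three packages are assumed to be installed already):

```r
# Load cmdstanr plus the two helper packages mentioned above.
# posterior is used for working with draws, bayesplot for plotting.
library(cmdstanr)
library(posterior)
library(bayesplot)
```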

@@ -277,30 +277,30 @@

Running MCMC

Running MCMC with 4 parallel chains...
 
-Chain 1 Iteration:    1 / 2000 [  0%]  (Warmup) 
-Chain 1 Iteration:  500 / 2000 [ 25%]  (Warmup) 
-Chain 1 Iteration: 1000 / 2000 [ 50%]  (Warmup) 
-Chain 1 Iteration: 1001 / 2000 [ 50%]  (Sampling) 
-Chain 1 Iteration: 1500 / 2000 [ 75%]  (Sampling) 
-Chain 1 Iteration: 2000 / 2000 [100%]  (Sampling) 
-Chain 2 Iteration:    1 / 2000 [  0%]  (Warmup) 
-Chain 2 Iteration:  500 / 2000 [ 25%]  (Warmup) 
-Chain 2 Iteration: 1000 / 2000 [ 50%]  (Warmup) 
-Chain 2 Iteration: 1001 / 2000 [ 50%]  (Sampling) 
-Chain 2 Iteration: 1500 / 2000 [ 75%]  (Sampling) 
-Chain 2 Iteration: 2000 / 2000 [100%]  (Sampling) 
-Chain 3 Iteration:    1 / 2000 [  0%]  (Warmup) 
-Chain 3 Iteration:  500 / 2000 [ 25%]  (Warmup) 
-Chain 3 Iteration: 1000 / 2000 [ 50%]  (Warmup) 
-Chain 3 Iteration: 1001 / 2000 [ 50%]  (Sampling) 
-Chain 3 Iteration: 1500 / 2000 [ 75%]  (Sampling) 
-Chain 3 Iteration: 2000 / 2000 [100%]  (Sampling) 
-Chain 4 Iteration:    1 / 2000 [  0%]  (Warmup) 
-Chain 4 Iteration:  500 / 2000 [ 25%]  (Warmup) 
-Chain 4 Iteration: 1000 / 2000 [ 50%]  (Warmup) 
-Chain 4 Iteration: 1001 / 2000 [ 50%]  (Sampling) 
-Chain 4 Iteration: 1500 / 2000 [ 75%]  (Sampling) 
-Chain 4 Iteration: 2000 / 2000 [100%]  (Sampling) 
+Chain 1 Iteration:    1 / 2000 [  0%]  (Warmup)
+Chain 1 Iteration:  500 / 2000 [ 25%]  (Warmup)
+Chain 1 Iteration: 1000 / 2000 [ 50%]  (Warmup)
+Chain 1 Iteration: 1001 / 2000 [ 50%]  (Sampling)
+Chain 1 Iteration: 1500 / 2000 [ 75%]  (Sampling)
+Chain 1 Iteration: 2000 / 2000 [100%]  (Sampling)
+Chain 2 Iteration:    1 / 2000 [  0%]  (Warmup)
+Chain 2 Iteration:  500 / 2000 [ 25%]  (Warmup)
+Chain 2 Iteration: 1000 / 2000 [ 50%]  (Warmup)
+Chain 2 Iteration: 1001 / 2000 [ 50%]  (Sampling)
+Chain 2 Iteration: 1500 / 2000 [ 75%]  (Sampling)
+Chain 2 Iteration: 2000 / 2000 [100%]  (Sampling)
+Chain 3 Iteration:    1 / 2000 [  0%]  (Warmup)
+Chain 3 Iteration:  500 / 2000 [ 25%]  (Warmup)
+Chain 3 Iteration: 1000 / 2000 [ 50%]  (Warmup)
+Chain 3 Iteration: 1001 / 2000 [ 50%]  (Sampling)
+Chain 3 Iteration: 1500 / 2000 [ 75%]  (Sampling)
+Chain 3 Iteration: 2000 / 2000 [100%]  (Sampling)
+Chain 4 Iteration:    1 / 2000 [  0%]  (Warmup)
+Chain 4 Iteration:  500 / 2000 [ 25%]  (Warmup)
+Chain 4 Iteration: 1000 / 2000 [ 50%]  (Warmup)
+Chain 4 Iteration: 1001 / 2000 [ 50%]  (Sampling)
+Chain 4 Iteration: 1500 / 2000 [ 75%]  (Sampling)
+Chain 4 Iteration: 2000 / 2000 [100%]  (Sampling)
 Chain 1 finished in 0.0 seconds.
 Chain 2 finished in 0.0 seconds.
 Chain 3 finished in 0.0 seconds.
@@ -564,11 +564,11 @@ 

Optimization

$optimize().

 fit_mle <- mod$optimize(data = data_list, seed = 123)
-Initial log joint probability = -9.51104 
-    Iter      log prob        ||dx||      ||grad||       alpha      alpha0  # evals  Notes  
-       6      -5.00402   0.000103557   2.55661e-07           1           1        9    
-Optimization terminated normally:  
-  Convergence detected: relative gradient magnitude is below tolerance 
+Initial log joint probability = -9.51104
+    Iter      log prob        ||dx||      ||grad||       alpha      alpha0  # evals  Notes
+       6      -5.00402   0.000103557   2.55661e-07           1           1        9
+Optimization terminated normally:
+  Convergence detected: relative gradient magnitude is below tolerance
 Finished in  0.2 seconds.
 fit_mle$print() # includes lp__ (log prob calculated by Stan program)
@@ -598,11 +598,11 @@

Optimization

= TRUE, seed = 123 )

-Initial log joint probability = -11.0088 
-    Iter      log prob        ||dx||      ||grad||       alpha      alpha0  # evals  Notes  
-       5      -6.74802   0.000938344   1.39149e-05           1           1        8    
-Optimization terminated normally:  
-  Convergence detected: relative gradient magnitude is below tolerance 
+Initial log joint probability = -11.0088
+    Iter      log prob        ||dx||      ||grad||       alpha      alpha0  # evals  Notes
+       5      -6.74802   0.000938344   1.39149e-05           1           1        8
+Optimization terminated normally:
+  Convergence detected: relative gradient magnitude is below tolerance
 Finished in  0.1 seconds.
-Calculating Hessian 
-Calculating inverse of Cholesky factor 
-Generating draws 
-iteration: 0 
-iteration: 1000 
-iteration: 2000 
-iteration: 3000 
+Calculating Hessian
+Calculating inverse of Cholesky factor
+Generating draws
+iteration: 0
+iteration: 1000
+iteration: 2000
+iteration: 3000
 Finished in  0.1 seconds.
 fit_laplace$print("theta")
@@ -658,28 +658,28 @@

Variational (ADVI)

= 123, draws = 4000 )

------------------------------------------------------------- 
-EXPERIMENTAL ALGORITHM: 
-  This procedure has not been thoroughly tested and may be unstable 
-  or buggy. The interface is subject to change. 
------------------------------------------------------------- 
-Gradient evaluation took 5e-06 seconds 
-1000 transitions using 10 leapfrog steps per transition would take 0.05 seconds. 
-Adjust your expectations accordingly! 
-Begin eta adaptation. 
-Iteration:   1 / 250 [  0%]  (Adaptation) 
-Iteration:  50 / 250 [ 20%]  (Adaptation) 
-Iteration: 100 / 250 [ 40%]  (Adaptation) 
-Iteration: 150 / 250 [ 60%]  (Adaptation) 
-Iteration: 200 / 250 [ 80%]  (Adaptation) 
-Success! Found best value [eta = 1] earlier than expected. 
-Begin stochastic gradient ascent. 
-  iter             ELBO   delta_ELBO_mean   delta_ELBO_med   notes  
-   100           -6.262             1.000            1.000 
-   200           -6.263             0.500            1.000 
-   300           -6.307             0.336            0.007   MEDIAN ELBO CONVERGED 
-Drawing a sample of size 4000 from the approximate posterior...  
-COMPLETED. 
+------------------------------------------------------------
+EXPERIMENTAL ALGORITHM:
+  This procedure has not been thoroughly tested and may be unstable
+  or buggy. The interface is subject to change.
+------------------------------------------------------------
+Gradient evaluation took 5e-06 seconds
+1000 transitions using 10 leapfrog steps per transition would take 0.05 seconds.
+Adjust your expectations accordingly!
+Begin eta adaptation.
+Iteration:   1 / 250 [  0%]  (Adaptation)
+Iteration:  50 / 250 [ 20%]  (Adaptation)
+Iteration: 100 / 250 [ 40%]  (Adaptation)
+Iteration: 150 / 250 [ 60%]  (Adaptation)
+Iteration: 200 / 250 [ 80%]  (Adaptation)
+Success! Found best value [eta = 1] earlier than expected.
+Begin stochastic gradient ascent.
+  iter             ELBO   delta_ELBO_mean   delta_ELBO_med   notes
+   100           -6.262             1.000            1.000
+   200           -6.263             0.500            1.000
+   300           -6.307             0.336            0.007   MEDIAN ELBO CONVERGED
+Drawing a sample of size 4000 from the approximate posterior...
+COMPLETED.
 Finished in  0.1 seconds.
 fit_vb$print("theta")
@@ -703,23 +703,23 @@

Variational (Pathfinder)

seed = 123, draws = 4000 )

-Path [1] :Initial log joint density = -11.008832 
-Path [1] : Iter      log prob        ||dx||      ||grad||     alpha      alpha0      # evals       ELBO    Best ELBO        Notes  
-              5      -6.748e+00      9.383e-04   1.391e-05    1.000e+00  1.000e+00       126 -6.264e+00 -6.264e+00                   
-Path [1] :Best Iter: [3] ELBO (-6.195408) evaluations: (126) 
-Path [2] :Initial log joint density = -7.318450 
-Path [2] : Iter      log prob        ||dx||      ||grad||     alpha      alpha0      # evals       ELBO    Best ELBO        Notes  
-              4      -6.748e+00      5.414e-03   1.618e-04    1.000e+00  1.000e+00       101 -6.251e+00 -6.251e+00                   
-Path [2] :Best Iter: [3] ELBO (-6.229174) evaluations: (101) 
-Path [3] :Initial log joint density = -12.374612 
-Path [3] : Iter      log prob        ||dx||      ||grad||     alpha      alpha0      # evals       ELBO    Best ELBO        Notes  
-              5      -6.748e+00      1.419e-03   2.837e-05    1.000e+00  1.000e+00       126 -6.199e+00 -6.199e+00                   
-Path [3] :Best Iter: [5] ELBO (-6.199185) evaluations: (126) 
-Path [4] :Initial log joint density = -13.009824 
-Path [4] : Iter      log prob        ||dx||      ||grad||     alpha      alpha0      # evals       ELBO    Best ELBO        Notes  
-              5      -6.748e+00      1.677e-03   3.885e-05    1.000e+00  1.000e+00       126 -6.173e+00 -6.173e+00                   
-Path [4] :Best Iter: [5] ELBO (-6.172860) evaluations: (126) 
-Total log probability function evaluations:4379 
+Path [1] :Initial log joint density = -11.008832
+Path [1] : Iter      log prob        ||dx||      ||grad||     alpha      alpha0      # evals       ELBO    Best ELBO        Notes
+              5      -6.748e+00      9.383e-04   1.391e-05    1.000e+00  1.000e+00       126 -6.264e+00 -6.264e+00
+Path [1] :Best Iter: [3] ELBO (-6.195408) evaluations: (126)
+Path [2] :Initial log joint density = -7.318450
+Path [2] : Iter      log prob        ||dx||      ||grad||     alpha      alpha0      # evals       ELBO    Best ELBO        Notes
+              4      -6.748e+00      5.414e-03   1.618e-04    1.000e+00  1.000e+00       101 -6.251e+00 -6.251e+00
+Path [2] :Best Iter: [3] ELBO (-6.229174) evaluations: (101)
+Path [3] :Initial log joint density = -12.374612
+Path [3] : Iter      log prob        ||dx||      ||grad||     alpha      alpha0      # evals       ELBO    Best ELBO        Notes
+              5      -6.748e+00      1.419e-03   2.837e-05    1.000e+00  1.000e+00       126 -6.199e+00 -6.199e+00
+Path [3] :Best Iter: [5] ELBO (-6.199185) evaluations: (126)
+Path [4] :Initial log joint density = -13.009824
+Path [4] : Iter      log prob        ||dx||      ||grad||     alpha      alpha0      # evals       ELBO    Best ELBO        Notes
+              5      -6.748e+00      1.677e-03   3.885e-05    1.000e+00  1.000e+00       126 -6.173e+00 -6.173e+00
+Path [4] :Best Iter: [5] ELBO (-6.172860) evaluations: (126)
+Total log probability function evaluations:4379
 Finished in  0.1 seconds.
 fit_pf$print("theta")
@@ -902,10 +902,10 @@

Additional resources
@@ -168,7 +168,7 @@

Installing the R package

You can install the latest beta release of the cmdstanr R package with

+install.packages("cmdstanr", repos = c("https://stan-dev.r-universe.dev", getOption("repos")))

This does not install the vignettes, which take a long time to build, but they are always available online at https://mc-stan.org/cmdstanr/articles/.

To instead install the latest development version of the package from GitHub, use

@@ -259,10 +259,10 @@ 

Developers
