From 526d0bc4e5ef52a9df19d9e8a22c3f1edb22b867 Mon Sep 17 00:00:00 2001 From: "Documenter.jl" Date: Sun, 31 Mar 2024 11:47:24 +0000 Subject: [PATCH] build based on 2e7b473 --- v0.1.36/.documenter-siteinfo.json | 2 +- v0.1.36/api/index.html | 98 ++++++------- v0.1.36/call_index/index.html | 2 +- v0.1.36/how-to/loops/index.html | 10 +- v0.1.36/how-to/obc/index.html | 34 ++--- v0.1.36/index.html | 2 +- v0.1.36/search_index.js | 2 +- v0.1.36/tutorials/calibration/index.html | 118 +++++++-------- v0.1.36/tutorials/estimation/index.html | 110 +++++++------- v0.1.36/tutorials/install/index.html | 2 +- v0.1.36/tutorials/rbc/index.html | 16 +-- v0.1.36/tutorials/sw03/index.html | 168 +++++++++++----------- v0.1.36/unfinished_docs/dsl/index.html | 2 +- v0.1.36/unfinished_docs/how_to/index.html | 2 +- v0.1.36/unfinished_docs/todo/index.html | 2 +- 15 files changed, 283 insertions(+), 287 deletions(-) diff --git a/v0.1.36/.documenter-siteinfo.json b/v0.1.36/.documenter-siteinfo.json index 4846af93..d42f0cda 100644 --- a/v0.1.36/.documenter-siteinfo.json +++ b/v0.1.36/.documenter-siteinfo.json @@ -1 +1 @@ -{"documenter":{"julia_version":"1.10.2","generation_timestamp":"2024-03-30T20:13:10","documenter_version":"1.3.0"}} \ No newline at end of file +{"documenter":{"julia_version":"1.10.2","generation_timestamp":"2024-03-31T11:47:17","documenter_version":"1.3.0"}} \ No newline at end of file diff --git a/v0.1.36/api/index.html b/v0.1.36/api/index.html index 89902c29..e23e30bb 100644 --- a/v0.1.36/api/index.html +++ b/v0.1.36/api/index.html @@ -1,12 +1,12 @@ API · MacroModelling.jl
MacroModelling.BetaMethod
Beta(μ, σ, lower_bound, upper_bound; μσ)
-

Convenience wrapper for the truncated Beta distribution.

If μσ = true then μ and σ are translated to the parameters of the distribution. Otherwise μ and σ represent the parameters of the distribution.

Arguments

  • μ [Type: Real]: mean or first parameter of the distribution,
  • σ [Type: Real]: standard deviation or second parameter of the distribution
  • lower_bound [Type: Real]: truncation lower bound of the distribution
  • upper_bound [Type: Real]: truncation upper bound of the distribution

Keyword Arguments

  • μσ [Type: Bool]: switch whether μ and σ represent the moments of the distribution or their parameters
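A minimal usage sketch of the μσ switch, following the signature documented above (the numbers and variable names are illustrative, not taken from the documentation):

using MacroModelling

# μσ = true: interpret 0.5 and 0.05 as mean and standard deviation and
# translate them into the distribution's parameters
prior_from_moments = Beta(0.5, 0.05, μσ = true)

# μσ = false: pass 2.0 and 2.0 on directly as the distribution's parameters
prior_from_parameters = Beta(2.0, 2.0, μσ = false)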
source
MacroModelling.BetaMethod
Beta(μ, σ; μσ)
-

Convenience wrapper for the Beta distribution.

If μσ = true then μ and σ are translated to the parameters of the distribution. Otherwise μ and σ represent the parameters of the distribution.

Arguments

  • μ [Type: Real]: mean or first parameter of the distribution,
  • σ [Type: Real]: standard deviation or second parameter of the distribution

Keyword Arguments

  • μσ [Type: Bool]: switch whether μ and σ represent the moments of the distribution or their parameters
source
MacroModelling.GammaMethod
Gamma(μ, σ, lower_bound, upper_bound; μσ)
-

Convenience wrapper for the truncated Gamma distribution.

If μσ = true then μ and σ are translated to the parameters of the distribution. Otherwise μ and σ represent the parameters of the distribution.

Arguments

  • μ [Type: Real]: mean or first parameter of the distribution,
  • σ [Type: Real]: standard deviation or second parameter of the distribution
  • lower_bound [Type: Real]: truncation lower bound of the distribution
  • upper_bound [Type: Real]: truncation upper bound of the distribution

Keyword Arguments

  • μσ [Type: Bool]: switch whether μ and σ represent the moments of the distribution or their parameters
source
MacroModelling.GammaMethod
Gamma(μ, σ; μσ)
-

Convenience wrapper for the Gamma distribution.

If μσ = true then μ and σ are translated to the parameters of the distribution. Otherwise μ and σ represent the parameters of the distribution.

Arguments

  • μ [Type: Real]: mean or first parameter of the distribution,
  • σ [Type: Real]: standard deviation or second parameter of the distribution

Keyword Arguments

  • μσ [Type: Bool]: switch whether μ and σ represent the moments of the distribution or their parameters
source
MacroModelling.InverseGammaMethod
InverseGamma(μ, σ, lower_bound, upper_bound; μσ)
-

Convenience wrapper for the truncated Inverse Gamma distribution.

If μσ = true then μ and σ are translated to the parameters of the distribution. Otherwise μ and σ represent the parameters of the distribution.

Arguments

  • μ [Type: Real]: mean or first parameter of the distribution,
  • σ [Type: Real]: standard deviation or second parameter of the distribution
  • lower_bound [Type: Real]: truncation lower bound of the distribution
  • upper_bound [Type: Real]: truncation upper bound of the distribution

Keyword Arguments

  • μσ [Type: Bool]: switch whether μ and σ represent the moments of the distribution or their parameters
source
MacroModelling.InverseGammaMethod
InverseGamma(μ, σ; μσ)
-

Convenience wrapper for the Inverse Gamma distribution.

If μσ = true then μ and σ are translated to the parameters of the distribution. Otherwise μ and σ represent the parameters of the distribution.

Arguments

  • μ [Type: Real]: mean or first parameter of the distribution,
  • σ [Type: Real]: standard deviation or second parameter of the distribution

Keyword Arguments

  • μσ [Type: Bool]: switch whether μ and σ represent the moments of the distribution or their parameters
source
MacroModelling.NormalMethod
Normal(μ, σ, lower_bound, upper_bound)
-

Convenience wrapper for the truncated Normal distribution.

Arguments

  • μ [Type: Real]: mean of the distribution,
  • σ [Type: Real]: standard deviation of the distribution
  • lower_bound [Type: Real]: truncation lower bound of the distribution
  • upper_bound [Type: Real]: truncation upper bound of the distribution
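As an illustrative sketch (values made up), a Normal with mean 0 and standard deviation 0.1 truncated to the unit interval can be constructed with the four positional arguments documented above:

using MacroModelling

# truncated Normal prior: mean 0.0, standard deviation 0.1, support [0, 1]
truncated_prior = Normal(0.0, 0.1, 0.0, 1.0)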
source
MacroModelling.get_autocorrelationMethod
get_autocorrelation(
+

Convenience wrapper for the truncated Beta distribution.

If μσ = true then μ and σ are translated to the parameters of the distribution. Otherwise μ and σ represent the parameters of the distribution.

Arguments

  • μ [Type: Real]: mean or first parameter of the distribution,
  • σ [Type: Real]: standard deviation or second parameter of the distribution
  • lower_bound [Type: Real]: truncation lower bound of the distribution
  • upper_bound [Type: Real]: truncation upper bound of the distribution

Keyword Arguments

  • μσ [Type: Bool]: switch whether μ and σ represent the moments of the distribution or their parameters
source
MacroModelling.BetaMethod
Beta(μ, σ; μσ)
+

Convenience wrapper for the Beta distribution.

If μσ = true then μ and σ are translated to the parameters of the distribution. Otherwise μ and σ represent the parameters of the distribution.

Arguments

  • μ [Type: Real]: mean or first parameter of the distribution,
  • σ [Type: Real]: standard deviation or second parameter of the distribution

Keyword Arguments

  • μσ [Type: Bool]: switch whether μ and σ represent the moments of the distribution or their parameters
source
MacroModelling.GammaMethod
Gamma(μ, σ, lower_bound, upper_bound; μσ)
+

Convenience wrapper for the truncated Gamma distribution.

If μσ = true then μ and σ are translated to the parameters of the distribution. Otherwise μ and σ represent the parameters of the distribution.

Arguments

  • μ [Type: Real]: mean or first parameter of the distribution,
  • σ [Type: Real]: standard deviation or second parameter of the distribution
  • lower_bound [Type: Real]: truncation lower bound of the distribution
  • upper_bound [Type: Real]: truncation upper bound of the distribution

Keyword Arguments

  • μσ [Type: Bool]: switch whether μ and σ represent the moments of the distribution or their parameters
source
MacroModelling.GammaMethod
Gamma(μ, σ; μσ)
+

Convenience wrapper for the Gamma distribution.

If μσ = true then μ and σ are translated to the parameters of the distribution. Otherwise μ and σ represent the parameters of the distribution.

Arguments

  • μ [Type: Real]: mean or first parameter of the distribution,
  • σ [Type: Real]: standard deviation or second parameter of the distribution

Keyword Arguments

  • μσ [Type: Bool]: switch whether μ and σ represent the moments of the distribution or their parameters
source
MacroModelling.InverseGammaMethod
InverseGamma(μ, σ, lower_bound, upper_bound; μσ)
+

Convenience wrapper for the truncated Inverse Gamma distribution.

If μσ = true then μ and σ are translated to the parameters of the distribution. Otherwise μ and σ represent the parameters of the distribution.

Arguments

  • μ [Type: Real]: mean or first parameter of the distribution,
  • σ [Type: Real]: standard deviation or second parameter of the distribution
  • lower_bound [Type: Real]: truncation lower bound of the distribution
  • upper_bound [Type: Real]: truncation upper bound of the distribution

Keyword Arguments

  • μσ [Type: Bool]: switch whether μ and σ represent the moments of the distribution or their parameters
source
MacroModelling.InverseGammaMethod
InverseGamma(μ, σ; μσ)
+

Convenience wrapper for the Inverse Gamma distribution.

If μσ = true then μ and σ are translated to the parameters of the distribution. Otherwise μ and σ represent the parameters of the distribution.

Arguments

  • μ [Type: Real]: mean or first parameter of the distribution,
  • σ [Type: Real]: standard deviation or second parameter of the distribution

Keyword Arguments

  • μσ [Type: Bool]: switch whether μ and σ represent the moments of the distribution or their parameters
source
MacroModelling.NormalMethod
Normal(μ, σ, lower_bound, upper_bound)
+

Convenience wrapper for the truncated Normal distribution.

Arguments

  • μ [Type: Real]: mean of the distribution,
  • σ [Type: Real]: standard deviation of the distribution
  • lower_bound [Type: Real]: truncation lower bound of the distribution
  • upper_bound [Type: Real]: truncation upper bound of the distribution
source
MacroModelling.get_autocorrelationMethod
get_autocorrelation(
     𝓂;
     autocorrelation_periods,
     parameters,
@@ -40,7 +40,7 @@
   (:c)    0.966974    0.927263    0.887643    0.849409    0.812761
   (:k)    0.971015    0.931937    0.892277    0.853876    0.817041
   (:q)    0.32237     0.181562    0.148347    0.136867    0.129944
-  (:z)    0.2         0.04        0.008       0.0016      0.00032
source
MacroModelling.get_calibrated_parametersMethod
get_calibrated_parameters(𝓂; values)
 

Returns the parameters (and optionally the values) which are determined by a calibration equation.

Arguments

Keyword Arguments

  • values [Default: false, Type: Bool]: return the values together with the parameter names

Examples

using MacroModelling
 
 @model RBC begin
@@ -67,7 +67,7 @@
 get_calibrated_parameters(RBC)
 # output
 1-element Vector{String}:
- "δ"
source
MacroModelling.get_calibration_equation_parametersMethod
get_calibration_equation_parameters(𝓂)
 

Returns the parameters used in calibration equations which are not used in the equations of the model (see capital_to_output in example).

Arguments

Examples

using MacroModelling
 
 @model RBC begin
@@ -94,7 +94,7 @@
 get_calibration_equation_parameters(RBC)
 # output
 1-element Vector{String}:
- "capital_to_output"
source
MacroModelling.get_calibration_equationsMethod
get_calibration_equations(𝓂)
 

Return the calibration equations declared in the @parameters block. Calibration equations are additional equations which are part of the non-stochastic steady state problem. The additional equation is matched with a calibrated parameter which is part of the equations declared in the @model block and can be retrieved with: get_calibrated_parameters

In case programmatic model writing was used this function returns the parsed equations (see loop over shocks in example).

Note that the output assumes the equations are equal to 0. As in, k / (q * 4) - capital_to_output implies k / (q * 4) - capital_to_output = 0 and therefore: k / (q * 4) = capital_to_output.

Arguments

Examples

using MacroModelling
 
 @model RBC begin
@@ -121,7 +121,7 @@
 get_calibration_equations(RBC)
 # output
 1-element Vector{String}:
- "k / (q * 4) - capital_to_output"
source
MacroModelling.get_correlationMethod
get_correlation(𝓂; parameters, algorithm, verbose)
 

Return the correlations of endogenous variables using the first, pruned second, or pruned third order perturbation solution.

Arguments

Keyword Arguments

  • parameters [Default: nothing]: If nothing is provided, the solution is calculated for the parameters defined previously. Acceptable inputs are a vector of parameter values, a vector or tuple of pairs of the parameter Symbol or String and value. If the new parameter values differ from the previously defined ones, the solution will be recalculated.
  • algorithm [Default: :first_order, Type: Symbol]: algorithm to solve for the dynamics of the model.
  • verbose [Default: false, Type: Bool]: print information about how the NSSS is solved (symbolic or numeric), which solver is used (Levenberg-Marquardt...), and the maximum absolute error.

Examples

using MacroModelling
 
 @model RBC begin
@@ -296,7 +296,7 @@
   (:c)   1.0        0.999812   0.550168   0.314562
   (:k)   0.999812   1.0        0.533879   0.296104
   (:q)   0.550168   0.533879   1.0        0.965726
-  (:z)   0.314562   0.296104   0.965726   1.0
source
MacroModelling.get_dynamic_auxilliary_variablesMethod
get_dynamic_auxilliary_variables(𝓂)
 

Returns the auxiliary variables, without timing subscripts, which are part of the augmented system of equations describing the model dynamics. Augmented means that, in case of variables with leads or lags larger than 1, or exogenous shocks with leads or lags, the system is augmented by auxiliary variables containing the variables or shocks in lead or lag. Auxiliary variables are also created for expressions which cannot become negative (e.g. given log(c/q) an auxiliary variable is created for c/q).

See get_dynamic_equations for more details on the auxiliary variables and equations.

Arguments

Examples

using MacroModelling
 
 @model RBC begin
@@ -325,7 +325,7 @@
 3-element Vector{String}:
  "kᴸ⁽⁻²⁾"
  "kᴸ⁽⁻³⁾"
- "kᴸ⁽⁻¹⁾"
source
MacroModelling.get_dynamic_equationsMethod
get_dynamic_equations(𝓂)
 

Return the augmented system of equations describing the model dynamics. Augmented means that, in case of variables with leads or lags larger than 1, or exogenous shocks with leads or lags, the system is augmented by auxiliary equations containing variables in lead or lag. The augmented system features only variables which are in the present [0], future [1], or past [-1]. For example, Δk_4q[0] = log(k[0]) - log(k[-3]) contains k[-3]. By introducing 2 auxiliary variables (kᴸ⁽⁻¹⁾ and kᴸ⁽⁻²⁾, with ᴸ being the lead/lag operator) and augmenting the system (kᴸ⁽⁻²⁾[0] = kᴸ⁽⁻¹⁾[-1] and kᴸ⁽⁻¹⁾[0] = k[-1]) we can ensure that the timing is smaller than 1 in absolute terms: Δk_4q[0] - (log(k[0]) - log(kᴸ⁽⁻²⁾[-1])).

In case programmatic model writing was used this function returns the parsed equations (see loop over shocks in example).

Note that the output assumes the equations are equal to 0. As in, kᴸ⁽⁻¹⁾[0] - k[-1] implies kᴸ⁽⁻¹⁾[0] - k[-1] = 0 and therefore: kᴸ⁽⁻¹⁾[0] = k[-1].

Arguments

Examples

using MacroModelling
 
 @model RBC begin
@@ -363,7 +363,7 @@
  "kᴸ⁽⁻³⁾[0] - kᴸ⁽⁻²⁾[-1]"
  "kᴸ⁽⁻²⁾[0] - kᴸ⁽⁻¹⁾[-1]"
  "kᴸ⁽⁻¹⁾[0] - k[-1]"
- "Δk_4q[0] - (log(k[0]) - log(kᴸ⁽⁻³⁾[-1]))"
source
MacroModelling.get_equationsMethod
get_equations(𝓂)
 

Return the equations of the model. In case programmatic model writing was used this function returns the parsed equations (see loop over shocks in example).

Arguments

Examples

using MacroModelling
 
 @model RBC begin
@@ -396,7 +396,7 @@
  "z{TFP}[0] = ρ{TFP} * z{TFP}[-1]" ⋯ 18 bytes ⋯ "TFP}[x] + eps_news{TFP}[x - 1])"
  "z{δ}[0] = ρ{δ} * z{δ}[-1] + σ{δ} * (eps{δ}[x] + eps_news{δ}[x - 1])"
  "Δc_share[0] = log(c[0] / q[0]) - log(c[-1] / q[-1])"
- "Δk_4q[0] = log(k[0]) - log(k[-4])"
source
MacroModelling.get_estimated_shocksMethod
get_estimated_shocks(
     𝓂,
     data;
     parameters,
@@ -433,7 +433,7 @@
 →   Periods ∈ 40-element UnitRange{Int64}
 And data, 1×40 Matrix{Float64}:
                (1)          (2)         (3)         (4)         …  (37)         (38)        (39)         (40)
-  (:eps_z₍ₓ₎)    0.0603617    0.614652   -0.519048    0.711454       -0.873774     1.27918    -0.929701    -0.2255
source
MacroModelling.get_estimated_variable_standard_deviationsMethod
get_estimated_variable_standard_deviations(
     𝓂,
     data;
     parameters,
@@ -470,7 +470,7 @@
   (:c)    1.23202e-9    1.84069e-10    8.23181e-11    8.23181e-11        8.23181e-11     8.23181e-11     0.0
   (:k)    0.00509299    0.000382934    2.87922e-5     2.16484e-6         1.6131e-9       9.31323e-10     1.47255e-9
   (:q)    0.0612887     0.0046082      0.000346483    2.60515e-5         1.31709e-9      1.31709e-9      9.31323e-10
-  (:z)    0.00961766    0.000723136    5.43714e-5     4.0881e-6          3.08006e-10     3.29272e-10     2.32831e-10
source
MacroModelling.get_estimated_variablesMethod
get_estimated_variables(
     𝓂,
     data;
     parameters,
@@ -511,7 +511,7 @@
   (:c)    5.92901       5.92797       5.92847       5.92048          5.95845       5.95697         5.95686        5.96173
   (:k)   47.3185       47.3087       47.3125       47.2392          47.6034       47.5969         47.5954        47.6402
   (:q)    6.87159       6.86452       6.87844       6.79352          7.00476       6.9026          6.90727        6.95841
-  (:z)   -0.00109471   -0.00208056    4.43613e-5   -0.0123318        0.0162992     0.000445065     0.00119089     0.00863586
source
MacroModelling.get_irfMethod
get_irf(
+  (:z)   -0.00109471   -0.00208056    4.43613e-5   -0.0123318        0.0162992     0.000445065     0.00119089     0.00863586
source
MacroModelling.get_irfMethod
get_irf(
     𝓂,
     parameters;
     periods,
@@ -546,7 +546,7 @@
  0.00674687  0.00729773  0.00715114  0.00687615  …  0.00146962   0.00140619
  0.0620937   0.0718322   0.0712153   0.0686381      0.0146789    0.0140453
  0.0688406   0.0182781   0.00797091  0.0057232      0.00111425   0.00106615
- 0.01        0.002       0.0004      8.0e-5         2.74878e-29  5.49756e-30
source
MacroModelling.get_irfMethod
get_irf(
     𝓂;
     periods,
     algorithm,
@@ -589,7 +589,7 @@
   (:c)    0.00674687    0.00729773        0.00146962      0.00140619
   (:k)    0.0620937     0.0718322         0.0146789       0.0140453
   (:q)    0.0688406     0.0182781         0.00111425      0.00106615
-  (:z)    0.01          0.002             2.74878e-29     5.49756e-30
source
MacroModelling.get_jump_variablesMethod
get_jump_variables(𝓂)
 

Returns the jump variables of the model. Jump variables occur in the future and not in the past, or occur in all three: past, present, and future.

In case programmatic model writing was used this function returns the parsed variables (see z in example).

Arguments

Examples

using MacroModelling
 
 @model RBC begin
@@ -618,7 +618,7 @@
 3-element Vector{String}:
  "c"
  "z{TFP}"
- "z{δ}"
source
MacroModelling.get_loglikelihoodMethod
get_loglikelihood(
     𝓂,
     data,
     parameter_values;
@@ -649,7 +649,7 @@
 
 get_loglikelihood(RBC, simulated_data([:k], :, :simulate), RBC.parameter_values)
 # output
-58.24780188977981
source
MacroModelling.get_momentsMethod
get_moments(
     𝓂;
     parameters,
     non_stochastic_steady_state,
@@ -704,7 +704,7 @@
   (:c)   0.0266642              2.66642     -0.384359   0.2626     0.144789
   (:k)   0.264677              26.4677      -5.74194    2.99332    6.30323
   (:q)   0.0739325              7.39325     -0.974722   0.726551   1.08
-  (:z)   0.0102062              1.02062      0.0        0.0        0.0
source
MacroModelling.get_non_stochastic_steady_state_residualsMethod
get_non_stochastic_steady_state_residuals(
     𝓂,
     values;
     parameters
@@ -737,7 +737,7 @@
  (:Equation₂)             0.0
  (:Equation₃)             0.0
  (:Equation₄)             0.0
- (:CalibrationEquation₁)  0.0

get_non_stochastic_steady_state_residuals(RBC, [1.1641597, 3.0635781, 1.2254312, 0.0, 0.18157895])

# output

1-dimensional KeyedArray(NamedDimsArray(...)) with keys:
↓   Equation ∈ 5-element Vector{Symbol}
And data, 5-element Vector{Float64}:
 (:Equation₁)             2.7360991250446887e-10
 (:Equation₂)             6.199999980083248e-8
 (:Equation₃)             2.7897102183871425e-8
 (:Equation₄)             0.0
 (:CalibrationEquation₁)  8.160392850342646e-8

source
MacroModelling.get_nonnegativity_auxilliary_variablesMethod
get_nonnegativity_auxilliary_variables(𝓂)
+ (:CalibrationEquation₁)  0.0

get_non_stochastic_steady_state_residuals(RBC, [1.1641597, 3.0635781, 1.2254312, 0.0, 0.18157895])

# output

1-dimensional KeyedArray(NamedDimsArray(...)) with keys:
↓   Equation ∈ 5-element Vector{Symbol}
And data, 5-element Vector{Float64}:
 (:Equation₁)             2.7360991250446887e-10
 (:Equation₂)             6.199999980083248e-8
 (:Equation₃)             2.7897102183871425e-8
 (:Equation₄)             0.0
 (:CalibrationEquation₁)  8.160392850342646e-8

source
MacroModelling.get_nonnegativity_auxilliary_variablesMethod
get_nonnegativity_auxilliary_variables(𝓂)
 

Returns the auxiliary variables, without timing subscripts, added to the non-stochastic steady state problem because certain expressions cannot become negative (e.g. given log(c/q) an auxiliary variable is created for c/q).

See get_steady_state_equations for more details on the auxiliary variables and equations.

Arguments

Examples

using MacroModelling
 
 @model RBC begin
@@ -765,7 +765,7 @@
 # output
 2-element Vector{String}:
  "➕₁"
- "➕₂"
source
MacroModelling.get_parametersMethod
get_parameters(𝓂; values)
 

Returns the parameters (and optionally the values) which have an impact on the model dynamics but do not depend on other parameters and are not determined by calibration equations.

In case programmatic model writing was used this function returns the parsed parameters (see σ in example).

Arguments

Keyword Arguments

  • values [Default: false, Type: Bool]: return the values together with the parameter names

Examples

using MacroModelling
 
 @model RBC begin
@@ -798,7 +798,7 @@
  "ρ{δ}"
  "capital_to_output"
  "alpha"
- "β"
source
MacroModelling.get_parameters_defined_by_parametersMethod
get_parameters_defined_by_parameters(𝓂)
 

Returns the parameters which are defined by other parameters which are not necessarily used in the equations of the model (see α in example).

Arguments

Examples

using MacroModelling
 
 @model RBC begin
@@ -825,7 +825,7 @@
 get_parameters_defined_by_parameters(RBC)
 # output
 1-element Vector{String}:
- "α"
source
MacroModelling.get_parameters_defining_parametersMethod
get_parameters_defining_parameters(𝓂)
 

Returns the parameters which define other parameters in the @parameters block which are not necessarily used in the equations of the model (see alpha in example).

Arguments

Examples

using MacroModelling
 
 @model RBC begin
@@ -852,7 +852,7 @@
 get_parameters_defining_parameters(RBC)
 # output
 1-element Vector{String}:
- "alpha"
source
MacroModelling.get_parameters_in_equationsMethod
get_parameters_in_equations(𝓂)
 

Returns the parameters contained in the model equations. Note that these parameters might be determined by other parameters or calibration equations defined in the @parameters block.

In case programmatic model writing was used this function returns the parsed parameters (see σ in example).

Arguments

Examples

using MacroModelling
 
 @model RBC begin
@@ -885,7 +885,7 @@
  "ρ{TFP}"
  "ρ{δ}"
  "σ{TFP}"
- "σ{δ}"
source
MacroModelling.get_shock_decompositionMethod
get_shock_decomposition(
     𝓂,
     data;
     parameters,
@@ -939,7 +939,7 @@
   (:c)   0.0437976   -0.000187505
   (:k)   0.4394      -0.00187284
   (:q)   0.00985518  -0.000142164
-  (:z)  -0.00366442   8.67362e-19
source
MacroModelling.get_shocksMethod
get_shocks(𝓂)
 

Returns the exogenous shocks.

In case programmatic model writing was used this function returns the parsed variables (see eps in example).

Arguments

Examples

using MacroModelling
 
 @model RBC begin
@@ -969,7 +969,7 @@
  "eps_news{TFP}"
  "eps_news{δ}"
  "eps{TFP}"
- "eps{δ}"
source
MacroModelling.get_solutionMethod
get_solution(𝓂; parameters, algorithm, verbose)
 

Return the solution of the model. In the linear case it returns the linearised solution and the non stochastic steady state (NSSS) of the model. In the nonlinear case (higher order perturbation) the function returns a multidimensional array with the endogenous variables as the second dimension and the state variables, shocks, and perturbation parameter (:Volatility) in the case of higher order solutions as the other dimensions.

The values of the output represent the NSSS in the case of a linear solution and below it the effect that deviations from the NSSS of the respective past states, shocks, and perturbation parameter have (perturbation parameter = 1) on the present value (NSSS deviation) of the model variables.

Arguments

Keyword Arguments

  • parameters [Default: nothing]: If nothing is provided, the solution is calculated for the parameters defined previously. Acceptable inputs are a vector of parameter values, a vector or tuple of pairs of the parameter Symbol or String and value. If the new parameter values differ from the previously defined ones, the solution will be recalculated.
  • algorithm [Default: :first_order, Type: Symbol]: algorithm to solve for the dynamics of the model.
  • verbose [Default: false, Type: Bool]: print information about how the NSSS is solved (symbolic or numeric), which solver is used (Levenberg-Marquardt...), and the maximum absolute error.

The returned KeyedArray shows as columns the endogenous variables including the auxiliary endogenous and exogenous variables (due to leads and lags > 1). The rows and other dimensions (depending on the chosen perturbation order) include the NSSS for the linear case only, followed by the states, and exogenous shocks. Subscripts following variable names indicate the timing (e.g. variable₍₋₁₎ indicates the variable being in the past). Superscripts indicate leads or lags (e.g. variableᴸ⁽²⁾ indicates the variable being in lead by two periods). If no super- or subscripts follow the variable name, the variable is in the present.

Examples

using MacroModelling
 
 @model RBC begin
@@ -997,7 +997,7 @@
   (:Steady_state)   5.93625     47.3903      6.88406     0.0
   (:k₍₋₁₎)          0.0957964    0.956835    0.0726316  -0.0
   (:z₍₋₁₎)          0.134937     1.24187     1.37681     0.2
-  (:eps_z₍ₓ₎)       0.00674687   0.0620937   0.0688406   0.01
source
MacroModelling.get_state_variablesMethod
get_state_variables(𝓂)
 

Returns the state variables of the model. State variables occur in the past and not in the future or occur in all three: past, present, and future.

In case programmatic model writing was used this function returns the parsed variables (see z in example).

Arguments

Examples

using MacroModelling
 
 @model RBC begin
@@ -1033,7 +1033,7 @@
  "kᴸ⁽⁻¹⁾"
  "q"
  "z{TFP}"
- "z{δ}"
source
MacroModelling.get_statisticsMethod
get_statistics(
     𝓂,
     parameter_values;
     parameters,
@@ -1067,7 +1067,7 @@
 get_statistics(RBC, RBC.parameter_values, parameters = RBC.parameters, standard_deviation = RBC.var)
 # output
 1-element Vector{Any}:
- [0.02666420378525503, 0.26467737291221793, 0.07393254045396483, 0.010206207261596574]
source
MacroModelling.get_steady_stateMethod
get_steady_state(
     𝓂;
     parameters,
     derivatives,
@@ -1106,7 +1106,7 @@
   (:c)   5.93625          0.0       0.0   -116.072    55.786     76.1014
   (:k)  47.3903           0.0       0.0  -1304.95    555.264   1445.93
   (:q)   6.88406          0.0       0.0    -94.7805   66.8912   105.02
-  (:z)   0.0              0.0       0.0      0.0       0.0        0.0
source
MacroModelling.get_steady_state_equationsMethod
get_steady_state_equations(𝓂)
 

Return the non-stochastic steady state (NSSS) equations of the model. The difference to the equations as they were written in the @model block is that exogenous shocks are set to 0, time subscripts are eliminated (e.g. c[-1] becomes c), trivial simplifications are carried out (e.g. log(k) - log(k) = 0), and auxiliary variables are added for expressions that cannot become negative.

Auxiliary variables facilitate the solution of the NSSS problem. The package substitutes expressions which cannot become negative with auxiliary variables and adds another equation to the system of equations determining the NSSS. For example, the argument of log(c/q) cannot become negative, so c/q is substituted by an auxiliary variable ➕₁ and an additional equation is added: ➕₁ = c / q.

Note that the output assumes the equations are equal to 0. As in, -z{δ} * ρ{δ} + z{δ} implies -z{δ} * ρ{δ} + z{δ} = 0 and therefore: z{δ} * ρ{δ} = z{δ}.

Arguments

Examples

using MacroModelling
 
 @model RBC begin
@@ -1141,7 +1141,7 @@
  "➕₁ - c / q"
  "➕₂ - c / q"
  "(Δc_share - log(➕₁)) + log(➕₂)"
- "Δk_4q - 0"
source
MacroModelling.get_variablesMethod
get_variables(𝓂)
 

Returns the variables of the model without timing subscripts and not including auxiliary variables.

In case programmatic model writing was used this function returns the parsed variables (see z in example).

Arguments

Examples

using MacroModelling
 
 @model RBC begin
@@ -1174,7 +1174,7 @@
  "z{TFP}"
  "z{δ}"
  "Δc_share"
- "Δk_4q"
source
MacroModelling.get_variance_decompositionMethod
get_variance_decomposition(𝓂; parameters, verbose)
 

Return the variance decomposition of endogenous variables with regards to the shocks using the linearised solution.

Arguments

Keyword Arguments

  • parameters [Default: nothing]: If nothing is provided, the solution is calculated for the parameters defined previously. Acceptable inputs are a vector of parameter values, a vector or tuple of pairs of the parameter Symbol or String and value. If the new parameter values differ from the previously defined ones, the solution will be recalculated.
  • verbose [Default: false, Type: Bool]: print information about how the NSSS is solved (symbolic or numeric), which solver is used (Levenberg-Marquardt...), and the maximum absolute error.

Examples

using MacroModelling
 
 @model RBC_CME begin
@@ -1213,7 +1213,7 @@
   (:c)         0.0134672     0.986533
   (:k)         0.00869568    0.991304
   (:y)         0.000313462   0.999687
-  (:z_delta)   1.0           0.0
source
MacroModelling.plot_model_estimatesMethod
plot_model_estimates(
     𝓂,
     data;
     parameters,
@@ -1399,7 +1399,7 @@
 
 simulation = simulate(RBC_CME)
 
-plot_model_estimates(RBC_CME, simulation([:k],:,:simulate))
source
MacroModelling.translate_mod_fileMethod
translate_mod_file(path_to_mod_file)
-

Reads in a dynare .mod-file, adapts the syntax, tries to capture parameter definitions, and writes a Julia file in the same folder containing the model equations and parameters in MacroModelling.jl syntax. This function is not guaranteed to produce working code. Its purpose is to make it easier to port a model from dynare to MacroModelling.jl.

The recommended workflow is to use this function to translate a .mod-file, and then adapt the output so that it runs and corresponds to the input.

Note that this function copies the .mod-file to a temporary folder and executes it there. All references within that .mod-file are therefore not valid (because those files are not copied) and must be copied into the .mod-file.

Arguments

  • path_to_mod_file [Type: AbstractString]: path including filename of the .mod-file to be translated
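A usage sketch of the documented signature; the file path is hypothetical:

using MacroModelling

# writes a Julia file with the translated model next to the .mod-file
translate_mod_file("models/SW03.mod")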
source
MacroModelling.write_mod_fileMethod
write_mod_file(m)
-

Writes a dynare .mod-file in the current working directory. This function is not guaranteed to produce working code. Its purpose is to make it easier to port a model from MacroModelling.jl to dynare.

The recommended workflow is to use this function to write a .mod-file, and then adapt the output so that it runs and corresponds to the input.
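A usage sketch, assuming RBC is a model previously defined with @model and @parameters:

using MacroModelling

# writes a dynare .mod-file for the model to the current working directory
write_mod_file(RBC)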

Arguments

source
MacroModelling.@modelMacro

Parses the model equations and assigns them to an object.

Arguments

  • 𝓂: name of the object to be created containing the model information.
  • ex: equations

Optional arguments to be placed between 𝓂 and ex

  • max_obc_horizon [Default: 40, Type: Int]: maximum length of anticipated shocks and corresponding unconditional forecast horizon over which the occasionally binding constraint is to be enforced. Increase this number if no solution is found to enforce the constraint.

Variables must be defined with their time subscript in square brackets. Endogenous variables can have the following:

  • present: c[0]
  • non-stochastic steady state: c[ss]. Instead of ss, any of the following is also a valid flag for the non-stochastic steady state: ss, stst, steady, steadystate, steady_state; the parser is case-insensitive (SS or sTst will work as well).
  • past: c[-1] or any negative Integer: e.g. c[-12]
  • future: c[1] or any positive Integer: e.g. c[16] or c[+16]

Signed integers are recognised and parsed as such.

Exogenous variables (shocks) can have the following:

  • present: eps_z[x]. Instead of x, any of the following is also a valid flag for exogenous variables: ex, exo, exogenous; the parser is case-insensitive (Ex or exoGenous will work as well).
  • past: eps_z[x-1]
  • future: eps_z[x+1]

Parameters enter the equations without square brackets.

If an equation contains a max or min operator, then the default dynamic (first order) solution of the model will enforce the occasionally binding constraint. You can choose to ignore it by setting ignore_obc = true in the relevant function calls.

Examples

using MacroModelling
+plot_solution(RBC_CME, :k)
source
MacroModelling.translate_mod_fileMethod
translate_mod_file(path_to_mod_file)
+

Reads in a dynare .mod-file, adapts the syntax, tries to capture parameter definitions, and writes a Julia file in the same folder containing the model equations and parameters in MacroModelling.jl syntax. This function is not guaranteed to produce working code. Its purpose is to make it easier to port a model from dynare to MacroModelling.jl.

The recommended workflow is to use this function to translate a .mod-file, and then adapt the output so that it runs and corresponds to the input.

Note that this function copies the .mod-file to a temporary folder and executes it there. All references within that .mod-file are therefore not valid (because those files are not copied) and must be copied into the .mod-file.

Arguments

  • path_to_mod_file [Type: AbstractString]: path including filename of the .mod-file to be translated
source
MacroModelling.write_mod_fileMethod
write_mod_file(m)
+

Writes a dynare .mod-file in the current working directory. This function is not guaranteed to produce working code. Its purpose is to make it easier to port a model from MacroModelling.jl to dynare.

The recommended workflow is to use this function to write a .mod-file, and then adapt the output so that it runs and corresponds to the input.

Arguments

source
MacroModelling.@modelMacro

Parses the model equations and assigns them to an object.

Arguments

  • 𝓂: name of the object to be created containing the model information.
  • ex: equations

Optional arguments to be placed between 𝓂 and ex

  • max_obc_horizon [Default: 40, Type: Int]: maximum length of anticipated shocks and corresponding unconditional forecast horizon over which the occasionally binding constraint is to be enforced. Increase this number if no solution is found to enforce the constraint.

Variables must be defined with their time subscript in square brackets. Endogenous variables can have the following:

  • present: c[0]
  • non-stochastic steady state: c[ss]. Instead of ss, any of the following is also a valid flag for the non-stochastic steady state: ss, stst, steady, steadystate, steady_state; the parser is case-insensitive (SS or sTst will work as well).
  • past: c[-1] or any negative Integer: e.g. c[-12]
  • future: c[1] or any positive Integer: e.g. c[16] or c[+16]

Signed integers are recognised and parsed as such.

Exogenous variables (shocks) can have the following:

  • present: eps_z[x]. Instead of x, any of the following is also a valid flag for exogenous variables: ex, exo, exogenous; the parser is case-insensitive (Ex or exoGenous will work as well).
  • past: eps_z[x-1]
  • future: eps_z[x+1]

Parameters enter the equations without square brackets.

If an equation contains a max or min operator, then the default dynamic (first order) solution of the model will enforce the occasionally binding constraint. You can choose to ignore it by setting ignore_obc = true in the relevant function calls.

Examples

using MacroModelling
 
 @model RBC begin
     1  /  c[0] = (β  /  c[1]) * (α * exp(z[1]) * k[0]^(α - 1) + (1 - δ))
     c[0] + k[0] = (1 - δ) * k[-1] + q[0]
     q[0] = exp(z[0]) * k[-1]^α
     z[0] = ρ * z[-1] + std_z * eps_z[x]
-end

Programmatic model writing

Parameters and variables can be indexed using curly braces: e.g. c{H}[0], eps_z{F}[x], or α{H}.

for loops can be used to write models programmatically. They can be used either to generate expressions where you iterate over the time index or over the index in curly braces; a short sketch follows the list below:

  • generate equation with different indices in curly braces: for co in [H,F] C{co}[0] + X{co}[0] + Z{co}[0] - Z{co}[-1] end = for co in [H,F] Y{co}[0] end
  • generate multiple equations with different indices in curly braces: for co in [H, F] K{co}[0] = (1-delta{co}) * K{co}[-1] + S{co}[0] end
  • generate equation with different time indices: Y_annual[0] = for lag in -3:0 Y[lag] end or R_annual[0] = for operator = :*, lag in -3:0 R[lag] end
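A minimal sketch of the loop syntax from the list above (model name, variable names, and parameter values are made up; the model is deliberately tiny: two AR(1) processes plus an annualised sum):

using MacroModelling

@model loop_sketch begin
    # one equation per index in curly braces: z{H} and z{F}
    for co in [H, F]
        z{co}[0] = ρ{co} * z{co}[-1] + σ{co} * eps{co}[x]
    end

    # loop over the time index (here: a sum of z{H} over the last four quarters)
    z_annual[0] = for lag in -3:0 z{H}[lag] end
end

@parameters loop_sketch begin
    ρ{H} = 0.9
    ρ{F} = 0.8
    σ{H} = 0.01
    σ{F} = 0.01
end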
source
MacroModelling.@parametersMacro

Adds parameter values and calibration equations to the previously defined model. It also allows providing an initial guess for the non-stochastic steady state (NSSS).

Arguments

  • 𝓂: name of the object previously created containing the model information.
  • ex: parameters, parameter values, and calibration equations

Parameters can be defined in either of the following ways:

  • plain number: δ = 0.02
  • expression containing numbers: δ = 1/50
  • expression containing other parameters: δ = 2 * std_z; in this case it is irrelevant whether std_z is defined before or after. The definitions including other parameters are treated as a system of equations and solved accordingly.
  • expressions containing a target parameter and an equation with endogenous variables in the non-stochastic steady state, other parameters, or numbers: k[ss] / (4 * q[ss]) = 1.5 | δ or α | 4 * q[ss] = δ * k[ss]; in this case the target parameter will be solved simultaneously with the non-stochastic steady state using the equation defined with it.

Optional arguments to be placed between 𝓂 and ex

  • guess [Type: Union{Dict{Symbol, <:Real}, Dict{String, <:Real}}]: Guess for the non-stochastic steady state. The keys must be the variable (and calibrated parameter) names and the values the guesses. Missing values are filled with standard starting values.
  • verbose [Default: false, Type: Bool]: print more information about how the non stochastic steady state is solved
  • silent [Default: false, Type: Bool]: do not print any information
  • symbolic [Default: false, Type: Bool]: try to solve the non stochastic steady state symbolically and fall back to a numerical solution if not possible
  • perturbation_order [Default: 1, Type: Int]: take derivatives only up to the specified order at this stage. In case you want to work with higher order perturbation later on, respective derivatives will be taken at that stage.

Examples

using MacroModelling
+end

Programmatic model writing

Parameters and variables can be indexed using curly braces: e.g. c{H}[0], eps_z{F}[x], or α{H}.

for loops can be used to write models programmatically. They can either be used to generate expressions where you iterate over the time index or the index in curly braces:

  • generate equation with different indices in curly braces: for co in [H,F] C{co}[0] + X{co}[0] + Z{co}[0] - Z{co}[-1] end = for co in [H,F] Y{co}[0] end
  • generate multiple equations with different indices in curly braces: for co in [H, F] K{co}[0] = (1-delta{co}) * K{co}[-1] + S{co}[0] end
  • generate equation with different time indices: Y_annual[0] = for lag in -3:0 Y[lag] end or R_annual[0] = for operator = :*, lag in -3:0 R[lag] end
source
MacroModelling.@parametersMacro

Adds parameter values and calibration equations to the previously defined model. It also allows providing an initial guess for the non-stochastic steady state (NSSS).

Arguments

  • 𝓂: name of the object previously created containing the model information.
  • ex: parameters, parameter values, and calibration equations

Parameters can be defined in either of the following ways:

  • plain number: δ = 0.02
  • expression containing numbers: δ = 1/50
  • expression containing other parameters: δ = 2 * std_z; in this case it is irrelevant whether std_z is defined before or after. The definitions including other parameters are treated as a system of equations and solved accordingly.
  • expressions containing a target parameter and an equation with endogenous variables in the non-stochastic steady state, other parameters, or numbers: k[ss] / (4 * q[ss]) = 1.5 | δ or α | 4 * q[ss] = δ * k[ss]; in this case the target parameter will be solved simultaneously with the non-stochastic steady state using the equation defined with it.

Optional arguments to be placed between 𝓂 and ex

  • guess [Type: Union{Dict{Symbol, <:Real}, Dict{String, <:Real}}]: Guess for the non-stochastic steady state. The keys must be the variable (and calibrated parameter) names and the values the guesses. Missing values are filled with standard starting values.
  • verbose [Default: false, Type: Bool]: print more information about how the non stochastic steady state is solved
  • silent [Default: false, Type: Bool]: do not print any information
  • symbolic [Default: false, Type: Bool]: try to solve the non stochastic steady state symbolically and fall back to a numerical solution if not possible
  • perturbation_order [Default: 1, Type: Int]: take derivatives only up to the specified order at this stage. In case you want to work with higher order perturbation later on, respective derivatives will be taken at that stage.

Examples

using MacroModelling
 
 @model RBC begin
     1  /  c[0] = (β  /  c[1]) * (α * exp(z[1]) * k[0]^(α - 1) + (1 - δ))
@@ -1477,4 +1477,4 @@
     δ = 0.02
     k[ss] / q[ss] = 2.5 | α
     β = 0.95
-end

Programmatic model writing

Variables and parameters indexed with curly braces can be either referenced specifically (e.g. c{H}[ss]) or generally (e.g. alpha). If they are referenced generally, the parser assumes all instances (indices) are meant. For example, in a model where alpha has two indices H and F, the expression alpha = 0.3 is interpreted as two expressions: alpha{H} = 0.3 and alpha{F} = 0.3. The same goes for calibration equations.
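A small sketch of the difference between general and specific referencing (hypothetical model name, variables, and values; rho carries the indices H and F):

using MacroModelling

@model indexed_sketch begin
    for co in [H, F]
        c{co}[0] = rho{co} * c{co}[-1] + eps{co}[x]
    end
end

@parameters indexed_sketch begin
    rho = 0.7    # general reference: sets both rho{H} and rho{F} to 0.7
end

To set the two indices separately, reference them specifically instead, e.g. rho{H} = 0.7 and rho{F} = 0.6.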

source
+end

Programmatic model writing

Variables and parameters indexed with curly braces can be either referenced specifically (e.g. c{H}[ss]) or generally (e.g. alpha). If they are referenced generally, the parser assumes all instances (indices) are meant. For example, in a model where alpha has two indices H and F, the expression alpha = 0.3 is interpreted as two expressions: alpha{H} = 0.3 and alpha{F} = 0.3. The same goes for calibration equations.

source diff --git a/v0.1.36/call_index/index.html b/v0.1.36/call_index/index.html index 9c1edfb2..0b1f7fbf 100644 --- a/v0.1.36/call_index/index.html +++ b/v0.1.36/call_index/index.html @@ -1,2 +1,2 @@ -Index · MacroModelling.jl

Index

+Index · MacroModelling.jl

Index

diff --git a/v0.1.36/how-to/loops/index.html b/v0.1.36/how-to/loops/index.html index d7a1c608..88fa6f13 100644 --- a/v0.1.36/how-to/loops/index.html +++ b/v0.1.36/how-to/loops/index.html @@ -69,10 +69,10 @@ rho{F}{F} = rho{H}{H} rho{H}{F} = 0.088 rho{F}{H} = rho{H}{F} - endRemove redundant variables in non stochastic steady state problem: 0.801 seconds -Set up non stochastic steady state problem: 0.676 seconds -Take symbolic derivatives up to first order: 0.82 seconds -Find non stochastic steady state: 6.952 seconds + endRemove redundant variables in non stochastic steady state problem: 0.65 seconds +Set up non stochastic steady state problem: 0.359 seconds +Take symbolic derivatives up to first order: 0.846 seconds +Find non stochastic steady state: 7.194 seconds Model: Backus_Kehoe_Kydland_1992 Variables Total: 56 @@ -84,4 +84,4 @@ Shocks: 2 Parameters: 28 Calibration -equations: 2 +equations: 2 diff --git a/v0.1.36/how-to/obc/index.html b/v0.1.36/how-to/obc/index.html index 834653c9..2130dbd6 100644 --- a/v0.1.36/how-to/obc/index.html +++ b/v0.1.36/how-to/obc/index.html @@ -90,10 +90,10 @@ std_nu = .0025 - endRemove redundant variables in non stochastic steady state problem: 1.824 seconds -Set up non stochastic steady state problem: 3.728 seconds -Take symbolic derivatives up to first order: 1.062 seconds -Find non stochastic steady state: 6.475 seconds + endRemove redundant variables in non stochastic steady state problem: 1.466 seconds +Set up non stochastic steady state problem: 4.305 seconds +Take symbolic derivatives up to first order: 1.124 seconds +Find non stochastic steady state: 6.99 seconds Model: Gali_2015_chapter_3_obc Variables Total: 68 @@ -231,10 +231,10 @@ std_nu = .0025 R > 1.000001 - endRemove redundant variables in non stochastic steady state problem: 1.069 seconds -Set up non stochastic steady state problem: 3.077 seconds -Take symbolic derivatives up to first order: 0.201 seconds -Find non stochastic steady state: 0.382 seconds + endRemove redundant variables in non stochastic steady state problem: 1.869 seconds +Set up non stochastic steady state problem: 3.813 seconds +Take symbolic derivatives up to first order: 0.367 seconds +Find non stochastic steady state: 0.392 seconds Model: Gali_2015_chapter_3_obc Variables Total: 68 @@ -387,10 +387,10 @@ σ = 0.05 m = 1 γ = 1 - endRemove redundant variables in non stochastic steady state problem: 0.485 seconds -Set up non stochastic steady state problem: 9.769 seconds -Take symbolic derivatives up to first order: 0.118 seconds -Find non stochastic steady state: 2.709 seconds + endRemove redundant variables in non stochastic steady state problem: 0.504 seconds +Set up non stochastic steady state problem: 10.304 seconds +Take symbolic derivatives up to first order: 0.138 seconds +Find non stochastic steady state: 2.871 seconds Model: borrowing_constraint Variables Total: 49 @@ -407,8 +407,8 @@ (:C) 0.95 (:Y) 1.0 (:Χᵒᵇᶜ⁺ꜝ¹ꜝ) 0.0 - (:λ) 0.008157894736842098 - (:χᵒᵇᶜ⁺ꜝ¹ꜝʳ) -0.008157894736842098 + (:λ) 0.008157894736842097 + (:χᵒᵇᶜ⁺ꜝ¹ꜝʳ) -0.008157894736842097 (:χᵒᵇᶜ⁺ꜝ¹ꜝˡ) 8.773722820781672e-25 (:ϵᵒᵇᶜ⁺ꜝ¹ꜝ) 0.0 ⋮ @@ -440,16 +440,16 @@ (:B) 0.999025 0.978434 1.0 1.0 (:C) 0.94805 0.907891 0.95 0.95 (:Y) 0.999025 0.978434 1.0 1.0 - (:λ) 0.00951431 0.0370327 0.00815789 0.00815789

Let's look at the mean and standard deviation of borrowing:

julia> import Statistics
julia> Statistics.mean(sims(:B,:,:))0.959722395107753

and

julia> Statistics.std(sims(:B,:,:))0.08986494438053381

Compare this to the theoretical mean of the model without the occasionally binding constraint:

julia> get_mean(borrowing_constraint)1-dimensional KeyedArray(NamedDimsArray(...)) with keys:
+  (:λ)    0.00951431    0.0370327         0.00815789       0.00815789

Let's look at the mean and standard deviation of borrowing:

julia> import Statistics
julia> Statistics.mean(sims(:B,:,:))0.9597223949671511

and

julia> Statistics.std(sims(:B,:,:))0.08986494469244331

Compare this to the theoretical mean of the model without the occasionally binding constraint:

julia> get_mean(borrowing_constraint)1-dimensional KeyedArray(NamedDimsArray(...)) with keys:
 ↓   Variables ∈ 4-element Vector{Symbol}
 And data, 4-element Vector{Float64}:
  (:B)  1.0
  (:C)  0.95
  (:Y)  1.0
- (:λ)  0.008157894736842098

and the theoretical standard deviation:

julia> get_std(borrowing_constraint)1-dimensional KeyedArray(NamedDimsArray(...)) with keys:
+ (:λ)  0.008157894736842097

and the theoretical standard deviation:

julia> get_std(borrowing_constraint)1-dimensional KeyedArray(NamedDimsArray(...)) with keys:
 ↓   Variables ∈ 4-element Vector{Symbol}
 And data, 4-element Vector{Float64}:
  (:B)  0.1147078669352811
  (:C)  0.1319140469755731
  (:Y)  0.11470786693528105
- (:λ)  0.07031743997991882

The mean of borrowing is lower in the model with occasionally binding constraints compared to the model without and the standard deviation is higher.

+ (:λ) 0.07031743997991882

The mean of borrowing is lower in the model with occasionally binding constraints compared to the model without and the standard deviation is higher.

diff --git a/v0.1.36/index.html b/v0.1.36/index.html index a65a1bf1..3a49755d 100644 --- a/v0.1.36/index.html +++ b/v0.1.36/index.html @@ -1,2 +1,2 @@ -Introduction · MacroModelling.jl

MacroModelling.jl

Author: Thore Kockerols (@thorek1)

MacroModelling.jl is a Julia package for developing and solving dynamic stochastic general equilibrium (DSGE) models.

These kinds of models describe the behavior of a macroeconomy and are particularly suited for counterfactual analysis (economic policy evaluation) and exploring / quantifying specific mechanisms (academic research). Due to the complexity of these models, efficient numerical tools are required, as analytical solutions are often unavailable. MacroModelling.jl serves as a tool for handling the complexities involved, such as forward-looking expectations, nonlinearity, and high dimensionality.

The goal of this package is to reduce coding time and speed up model development by providing functions for working with discrete-time DSGE models. The user-friendly syntax, automatic variable declaration, and effective steady state solver facilitate fast prototyping of models. Furthermore, the package allows the user to work with nonlinear model solutions (up to third order (pruned) perturbation) and estimate the model using gradient based samplers (e.g. NUTS or HMC). Currently, DifferentiableStateSpaceModels.jl is the only other package providing functionality to estimate using gradient based samplers, but its use is limited to models with an analytical solution of the non stochastic steady state (NSSS). Larger models tend to not have an analytical solution of the NSSS and MacroModelling.jl can also use gradient based samplers in this case. The target audience for the package includes central bankers, regulators, graduate students, and others working in academia with an interest in DSGE modelling.

As of now the package can:

  • parse a model written with user friendly syntax (variables are followed by time indices ...[2], [1], [0], [-1], [-2]..., or [x] for shocks)
  • (tries to) solve the model only knowing the model equations and parameter values (no steady state file needed)
  • calculate first, second, and third order (pruned) perturbation solutions (see Villemot (2011), Andreasen et al. (2017) and Levintal (2017)) using symbolic derivatives
  • handle occasionally binding constraints for linear and nonlinear solutions
  • calculate (generalised) impulse response functions, simulate the model, or do conditional forecasts for linear and nonlinear solutions
  • calibrate parameters using (non stochastic) steady state relationships
  • match model moments (also for pruned higher order solutions)
  • estimate the model on data (Kalman filter using first order perturbation; see Durbin and Koopman (2012)) with gradient based samplers (e.g. NUTS, HMC) or estimate nonlinear models using the inversion filter
  • differentiate (forward AD) the model solution, Kalman filter loglikelihood (forward and reverse-mode AD), model moments, steady state, with respect to the parameters

The package is not:

  • guaranteed to find the non stochastic steady state
  • the fastest package around if you already have a fast way to find the NSSS

The former has to do with the fact that solving systems of nonlinear equations is hard (an active area of research). Especially in cases where the values of the solution are far apart (have a high standard deviation - e.g. sol = [-46.324, .993457, 23523.3856]), the algorithms have a hard time finding a solution. The recommended way to tackle this is to set bounds in the @parameters part (e.g. r < 0.2), so that the initial points are closer to the final solution (think of steady state interest rates not being higher than 20% - meaning not being higher than 0.2 or 1.2 depending on the definition).
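As an illustrative sketch of such a bound, reusing the RBC example that appears throughout the documentation (the bound line is the point of interest; the rest is standard):

using MacroModelling

@model RBC_bounded begin
    1 / c[0] = (β / c[1]) * (α * exp(z[1]) * k[0]^(α - 1) + (1 - δ))
    c[0] + k[0] = (1 - δ) * k[-1] + q[0]
    q[0] = exp(z[0]) * k[-1]^α
    z[0] = ρ * z[-1] + std_z * eps_z[x]
end

@parameters RBC_bounded begin
    std_z = 0.01
    ρ = 0.2
    δ = 0.02
    α = 0.5
    β = 0.95
    k > 0    # bound: keeps the steady-state search in the economically relevant region
end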

The latter has to do with the fact that julia code is fast once compiled, and that the package can spend more time finding the non stochastic steady state. This means that it takes more time from executing the code to define the model and parameters for the first time to seeing the first plots than with most other packages. But, once the functions are compiled and the non stochastic steady state has been found the user can benefit from the object oriented nature of the package and generate outputs or change parameters very fast.

The package contains the following models in the models folder:

Comparison with other packages

MacroModelling.jldynareDSGE.jldolo.pySolveDSGE.jlDifferentiableStateSpaceModels.jlStateSpaceEcon.jlIRISRISENBTOOLBOXgEconGDSGETaylor Projection
Host languagejuliaMATLABjuliaPythonjuliajuliajuliaMATLABMATLABMATLABRMATLABMATLAB
Non stochastic steady state solversymbolic or numerical solver of independent blocks; symbolic removal of variables redundant in steady state; inclusion of calibration equations in problemnumerical solver of independent blocks or user-supplied values/functionsnumerical solver of independent blocks or user-supplied values/functionsnumerical solvernumerical solver or user supplied values/equationsnumerical solver of independent blocks or user-supplied values/functionsnumerical solver of independent blocks or user-supplied values/functionsnumerical solver of independent blocks or user-supplied values/functionsuser-supplied steady state file or numerical solvernumerical solver; inclusion of calibration equations in problem
Automatic declaration of variables and parametersyes
Derivatives (Automatic Differentiation) wrt parametersyesyes - for all 1st, 2nd order perturbation solution related output if user supplied steady state equations
Perturbation solution order1, 2, 3k11, 2, 31, 2, 31, 2111 to 5111 to 5
Pruningyesyesyesyes
Automatic derivation of first order conditionsyes
Handles occasionally binding constraintsyesyesyesyesyesyesyes
Global solutionyesyesyes
Estimationyesyesyesyesyesyesyes
Balanced growth pathyesyesyesyesyesyes
Model inputmacro (julia)text filetext filetext filetext filemacro (julia)module (julia)text filetext filetext filetext filetext filetext file
Timing conventionend-of-periodend-of-periodend-of-periodstart-of-periodstart-of-periodend-of-periodend-of-periodend-of-periodend-of-periodend-of-periodstart-of-periodstart-of-period

Bibliography

+Introduction · MacroModelling.jl

MacroModelling.jl

Author: Thore Kockerols (@thorek1)

MacroModelling.jl is a Julia package for developing and solving dynamic stochastic general equilibrium (DSGE) models.

These kinds of models describe the behavior of a macroeconomy and are particularly suited for counterfactual analysis (economic policy evaluation) and exploring / quantifying specific mechanisms (academic research). Due to the complexity of these models, efficient numerical tools are required, as analytical solutions are often unavailable. MacroModelling.jl serves as a tool for handling the complexities involved, such as forward-looking expectations, nonlinearity, and high dimensionality.

The goal of this package is to reduce coding time and speed up model development by providing functions for working with discrete-time DSGE models. The user-friendly syntax, automatic variable declaration, and effective steady state solver facilitate fast prototyping of models. Furthermore, the package allows the user to work with nonlinear model solutions (up to third order (pruned) perturbation) and estimate the model using gradient based samplers (e.g. NUTS or HMC). Currently, DifferentiableStateSpaceModels.jl is the only other package providing functionality to estimate using gradient based samplers, but its use is limited to models with an analytical solution of the non stochastic steady state (NSSS). Larger models tend to not have an analytical solution of the NSSS and MacroModelling.jl can also use gradient based samplers in this case. The target audience for the package includes central bankers, regulators, graduate students, and others working in academia with an interest in DSGE modelling.

As of now the package can:

  • parse a model written with user-friendly syntax (variables are followed by time indices ...[2], [1], [0], [-1], [-2]..., or [x] for shocks) - see the short example after this list
  • (tries to) solve the model knowing only the model equations and parameter values (no steady state file needed)
  • calculate first, second, and third order (pruned) perturbation solutions (see Villemot (2011), Andreasen et al. (2017) and Levintal (2017)) using symbolic derivatives
  • handle occasionally binding constraints for linear and nonlinear solutions
  • calculate (generalised) impulse response functions, simulate the model, or do conditional forecasts for linear and nonlinear solutions
  • calibrate parameters using (non stochastic) steady state relationships
  • match model moments (also for pruned higher order solutions)
  • estimate the model on data (Kalman filter using first order perturbation; see Durbin and Koopman (2012)) with gradient based samplers (e.g. NUTS, HMC) or estimate nonlinear models using the inversion filter
  • differentiate (forward AD) the model solution, Kalman filter loglikelihood (forward and reverse-mode AD), model moments, and steady state with respect to the parameters
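
To give a flavour of the user-friendly syntax, the simple RBC model used in the tutorials below can be written in full as follows (time indices in square brackets, [x] marking an exogenous shock):

    using MacroModelling

    @model RBC begin
        # household Euler equation
        1 / c[0] = (β / c[1]) * (α * exp(z[1]) * k[0]^(α - 1) + (1 - δ))
        # resource constraint / capital accumulation
        c[0] + k[0] = (1 - δ) * k[-1] + q[0]
        # production function
        q[0] = exp(z[0]) * k[-1]^α
        # technology process driven by the shock ϵᶻ
        z[0] = ρᶻ * z[-1] + σᶻ * ϵᶻ[x]
    end

    @parameters RBC begin
        σᶻ = 0.01
        ρᶻ = 0.2
        δ = 0.02
        α = 0.5
        β = 0.95
    end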

The package is not:

  • guaranteed to find the non stochastic steady state
  • the fastest package around if you already have a fast way to find the NSSS

The former has to do with the fact that solving systems of nonlinear equations is hard (an active area of research). Especially in cases where the elements of the solution are far apart in magnitude (have a high standard deviation - e.g. sol = [-46.324, .993457, 23523.3856]), the algorithms have a hard time finding a solution. The recommended way to tackle this is to set bounds in the @parameters part (e.g. r < 0.2), so that the initial points are closer to the final solution (think of steady state interest rates not being higher than 20% - meaning not being higher than 0.2 or 1.2 depending on the definition).
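
A minimal sketch of how this looks for the RBC example, using the bounds and guess syntax described in the tutorial below (the bound on k and the guess of 10 are illustrative choices, not values the model requires):

    @parameters RBC guess = Dict(k => 10) begin
        σᶻ = 0.01
        ρᶻ = 0.2
        δ = 0.02
        α = 0.5
        β = 0.95
        k > 0   # illustrative bound passed on to the steady state solver
        c > 0   # as in the tutorial: positivity information helps the solver
    end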

The latter has to do with the fact that julia code is fast once compiled, and that the package can spend more time finding the non stochastic steady state. This means that, the first time the code defining the model and parameters is executed, it takes longer to get to the first plots than with most other packages. But once the functions are compiled and the non stochastic steady state has been found, the user can benefit from the object oriented nature of the package and generate outputs or change parameters very fast.
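
For example, once the RBC model above has been solved once, changing a parameter and regenerating output is a single call (both calls are taken from the tutorial below):

    import StatsPlots                               # needed once before the first plot
    plot_irf(RBC, parameters = :α => 0.3)           # re-solves around the new steady state and plots the IRFs
    get_steady_state(RBC, parameters = :β => .951)  # returns the new steady state and its derivatives wrt parameters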

The package contains the following models in the models folder:

Comparison with other packages

MacroModelling.jl | dynare | DSGE.jl | dolo.py | SolveDSGE.jl | DifferentiableStateSpaceModels.jl | StateSpaceEcon.jl | IRIS | RISE | NBTOOLBOX | gEcon | GDSGE | Taylor Projection
Host language: julia | MATLAB | julia | Python | julia | julia | julia | MATLAB | MATLAB | MATLAB | R | MATLAB | MATLAB
Non stochastic steady state solver: symbolic or numerical solver of independent blocks; symbolic removal of variables redundant in steady state; inclusion of calibration equations in problem | numerical solver of independent blocks or user-supplied values/functions | numerical solver of independent blocks or user-supplied values/functions | numerical solver | numerical solver or user supplied values/equations | numerical solver of independent blocks or user-supplied values/functions | numerical solver of independent blocks or user-supplied values/functions | numerical solver of independent blocks or user-supplied values/functions | user-supplied steady state file or numerical solver | numerical solver; inclusion of calibration equations in problem
Automatic declaration of variables and parameters: yes
Derivatives (Automatic Differentiation) wrt parameters: yes | yes - for all 1st, 2nd order perturbation solution related output if user supplied steady state equations
Perturbation solution order: 1, 2, 3 | k | 1 | 1, 2, 3 | 1, 2, 3 | 1, 2 | 1 | 1 | 1 to 5 | 1 | 1 | 1 to 5
Pruning: yes | yes | yes | yes
Automatic derivation of first order conditions: yes
Handles occasionally binding constraints: yes | yes | yes | yes | yes | yes | yes
Global solution: yes | yes | yes
Estimation: yes | yes | yes | yes | yes | yes | yes
Balanced growth path: yes | yes | yes | yes | yes | yes
Model input: macro (julia) | text file | text file | text file | text file | macro (julia) | module (julia) | text file | text file | text file | text file | text file | text file
Timing convention: end-of-period | end-of-period | end-of-period | start-of-period | start-of-period | end-of-period | end-of-period | end-of-period | end-of-period | end-of-period | start-of-period | start-of-period

Bibliography

diff --git a/v0.1.36/search_index.js b/v0.1.36/search_index.js index 15dd9a39..b26c05c9 100644 --- a/v0.1.36/search_index.js +++ b/v0.1.36/search_index.js @@ -1,3 +1,3 @@ var documenterSearchIndex = {"docs": -[{"location":"unfinished_docs/todo/#Todo-list","page":"Todo list","title":"Todo list","text":"","category":"section"},{"location":"unfinished_docs/todo/#High-priority","page":"Todo list","title":"High priority","text":"","category":"section"},{"location":"unfinished_docs/todo/","page":"Todo list","title":"Todo list","text":"[ ] ss transition by entering new parameters at given periods\n[ ] check downgrade tests\n[ ] figure out why PG and IS return basically the prior\n[ ] allow external functions to calculate the steady state (and hand it over via SS or get_loglikelihood function) - need to use the check function for implicit derivatives and cannot use it to get him a guess from which he can use internal solver going forward\n[ ] go through custom SS solver once more and try to find parameters and logic that achieves best results\n[ ] SS solver with less equations than variables\n[ ] improve docs: timing in first sentence seems off; have something more general in first sentence; why is the syntax user friendly? give an example; make the former and the latter a footnote\n[ ] write tests/docs/technical details for nonlinear obc, forecasting, (non-linear) solution algorithms, SS solver, obc solver, and other algorithms\n[ ] change docs to reflect that the output of irfs include aux vars and also the model info Base.show includes aux vars\n[ ] recheck function examples and docs (include output description)\n[ ] Docs: document outputs and associated functions to work with function\n[ ] write documentation/docstrings using copilot\n[ ] feedback: sell the sampler better (ESS vs dynare), more details on algorithm (SS solver)\n[ ] NaNMath pow does not work (is not substituted)\n[ ] check whether its possible to run parameters macro/block without rerunning model block\n[ ] eliminate possible log, ^ terms in parameters block equations - because of nonnegativity errors\n[ ] throw error when equations appear more than once\n[ ] plot multiple solutions or models - multioptions in one graph\n[ ] make SS calc faster (func and optim, maybe inplace ops)\n[ ] try preallocation tools for forwarddiff\n[ ] add nonlinear shock decomposition\n[ ] check obc once more\n[ ] rm obc vars from get_SS\n[ ] check why warmup_iterations = 0 makes estimated shocks larger\n[ ] use analytical derivatives also for shocks matching optim (and HMC - implicit diff)\n[ ] info on when what filter is used and chosen options are overridden\n[ ] check warnings, errors throughout. check suppress not interfering with pigeons\n[ ] functions to reverse state_update (input: previous shock and current state, output previous state), find shocks corresponding to bringing one state to the next\n[ ] cover nested case: min(50,a+b+max(c,10))\n[ ] add balanced growth path handling\n[ ] higher order solutions: some kron matrix mults are later compressed. write custom compressed kron mult; check if sometimes dense mult is faster? (e.g. 
GNSS2010 seems dense at higher order)\n[ ] make inversion filter / higher order sols suitable for HMC (forward and reverse diff!!, currently only analytical pushforward, no implicitdiff) | analytic derivatives\n[ ] speed up sparse matrix calcs in implicit diff of higher order funcs\n[ ] compressed higher order derivatives and sparsity of jacobian\n[ ] add user facing option to choose sylvester solver\n[ ] autocorr and covariance with derivatives. return 3d array\n[ ] use ID for sparse output sylvester solvers (filed issue)\n[ ] add pydsge and econpizza to overview\n[ ] add for loop parser in @parameters\n[ ] implement more multi country models\n[ ] speed benchmarking (focus on ImplicitDiff part)\n[ ] for cond forecasting allow less shocks than conditions with a warning. should be svd then\n[ ] have parser accept rss | (r[ss] - 1) * 400 = rss\n[ ] when doing calibration with optimiser have better return values when he doesnt find a solution (probably NaN)\n[ ] sampler returned negative std. investigate and come up with solution ensuring sampler can continue\n[ ] automatically adjust plots for different legend widths and heights\n[ ] include weakdeps: https://pkgdocs.julialang.org/dev/creating-packages/#Weak-dependencies\n[ ] have get_std take variables as an input\n[ ] more informative errors when something goes wrong when writing a model\n[ ] initial state accept keyed array, SS and SSS as arguments\n[ ] plotmodelestimates with unconditional forecast at the end\n[ ] kick out unused parameters from m.parameters\n[ ] use cache for gradient calc in estimation (see DifferentiableStateSpaceModels)\n[ ] write functions to debug (fix_SS.jl...)\n[ ] model compression (speed up 2nd moment calc (derivatives) for large models; gradient loglikelihood is very slow due to large matmuls) -> model setup as maximisation problem (gEcon) -> HANK models\n[ ] implement global solution methods\n[ ] add more models\n[ ] use @assert for errors and @test_throws\n[ ] print SS dependencies (get parameters (in function of parameters) into the dependencies), show SS solver\n[ ] use strings instead of symbols internally\n[ ] write how-to for calibration equations\n[ ] make the nonnegativity trick optional or use nanmath?\n[ ] clean up different parameter types\n[ ] clean up printouts/reporting\n[ ] clean up function inputs and harmonise AD and standard commands\n[ ] figure out combinations for inputs (parameters and variables in different formats for get_irf for example)\n[ ] weed out SS solver and saved objects\n[x] streamline estimation part (dont do string matching... 
but rely on precomputed indices...)\n[x] estimation: run auto-tune before and use solver treating parameters as given\n[x] use arraydist in tests and docs\n[x] include guess in docs\n[x] Find any SS by optimising over both SS guesses and parameter inputs\n[x] riccati with analytical derivatives (much faster if sparse) instead of implicit diff; done for ChainRules; ForwardDiff only feasible for smaller problems -> ID is fine there\n[x] log in parameters block is recognized as variable\n[x] add termination condition if relative change in ss solver is smaller than tol (relevant when values get very large)\n[x] provide option for external SS guess; provided in parameters macro\n[x] make it possible to run multiple ss solver parameter combination including starting points when solving a model\n[x] automatically put the combi first which solves it fastest the first time\n[x] write auto-tune in case he cant find SS (add it to the warning when he cant find the SS)\n[x] nonlinear conditional forecasts for higher order and obc\n[x] for cond forecasting and kalman, get rid of observables input and use axis key of data input\n[x] fix translate dynare mod file from file written using write to dynare file (see test models): added retranslation to test\n[x] use packages for kalman filter: nope sticking to own implementation\n[x] check that there is an error if he cant find SS\n[x] bring solution error into an object of the model so we dont have to pass it on as output: errors get returned by functions and are thrown where appropriate\n[x] include option to provide pruned states for irfs\n[x] use other quadratic iteration for diffable first order solve (useful because schur can error in estimation): used try catch, schur is still fastest\n[x] fix SS solver (failed for backus in guide): works now\n[x] nonlinear estimation using unscented kalman filter / inversion filter (minimization problem: find shocks to match states with data): used inversion filter with gradient optim\n[x] check if higher order effects might distort results for autocorr (problem with order deffinition) - doesnt seem to be the case; full_covar yields same result\n[x] implement occasionally binding constraints with shocks\n[x] add QUEST3 tests\n[x] add obc tests\n[x] highlight NUTS sampler compatibility\n[x] differentiate more vs diffstatespace\n[x] reorder other toolboxes according to popularity\n[x] add JOSS article (see Makie.jl)\n[x] write to mod file for unicode characters. have them take what you would type: \\alpha\\bar\n[x] write dynare model using function converting unicode to tab completion\n[x] write parameter equations to dynare (take ordering on board)\n[x] pruning of 3rd order takes pruned 2nd order input\n[x] implement moment matching for pruned models\n[x] test pruning and add literature\n[x] use more implicit diff for the other functions as well\n[x] handle sparsity in sylvester solver better (hand over indices and nzvals instead of vec)\n[x] redo naming in moments calc and make whole process faster (precalc wrangling matrices)\n[x] write method of moments how to\n[x] check tols - all set to eps() except for dependencies tol (1e-12)\n[x] set to 0 SS values < 1e-12 - doesnt work with Zygote\n[x] sylvester with analytical derivatives (much faster if sparse) instead of implicit diff - yes but there are still way too large matrices being realised. 
implicitdiff is better here\n[x] autocorr to statistics output and in general for higher order pruned sols\n[x] fix product moments and test for cases with more than 2 shocks\n[x] write tests for variables argument in get_moment and for higher order moments\n[x] handle KeyedArrays with strings as dimension names as input\n[x] add mean in output funcs for higher order \n[x] recheck results for third order cov\n[x] have a look again at get_statistics function\n[x] consolidate sylvester solvers (diff)\n[x] put outside of loop the ignore derviatives for derivatives\n[x] write function to smart select variables to calc cov for\n[x] write get function for variables, parameters, equations with proper parsing so people can understand what happens when invoking for loops\n[x] have for loop where the items are multiplied or divided or whatever, defined by operator | + or * only\n[x] write documentation for string inputs\n[x] write documentation for programmatic model writing\n[x] input indices not as symbol\n[x] make sure plots and printed output also uses strings instead of symbols if adequate\n[x] have keyedarray with strings as axis type if necessary as output\n[x] write test for keyedarray with strings as primary axis\n[x] test string input\n[x] have all functions accept strings and write tests for it\n[x] parser model into per equation functions instead of single big functions\n[x] use krylov instead of linearsolve\n[x] implement for loops in model macro (e.g. to setup multi country models)\n[x] fix ss of pruned solution in plotsolution. seems detached\n[x] try solve first order with JuMP - doesnt work because JuMP cannot handle matrix constraints/objectives \n[x] get solution higher order with multidimensional array (states, 1 and 2 partial derivatives variables names as dimensions in 2order case)\n[x] add pruning\n[x] add other outputs from estimation (smoothed, filter states and shocks)\n[x] shorten plot_irf (take inspiration from model estimate)\n[x] fix solution plot\n[x] see if we can avoid try catch and test for invertability instead\n[x] have Flux solve SS field #gradient descent based is worse than LM based\n[x] have parameters keyword accept Int and 2/3\n[x] plot_solution colors change from 2nd to 2rd order\n[x] custom LM: optimize for other RBC models, use third order backtracking\n[x] add SSS for third order (can be different than the one from 2nd order, see Gali (2015)) in solution plot; also put legend to the bottom as with Condition\n[x] check out Aqua.jl as additional tests\n[x] write tests and documentation for solution, estimation... making sure results are consistent\n[x] catch cases where you define calibration equation without declaring conditional variable\n[x] flag if equations contain no info for SS, suggest to set ss values as parameters\n[x] handle SS case where there are equations which have no information for the SS. use SS definitions in parameter block to complete system | no, set steady state values to parameters instead. might fail if redundant equation has y[0] - y[-1] instead of y[0] - y[ss]\n[x] try eval instead of runtimegeneratedfunctions; eval is slower but can be typed\n[x] check correctness of solution for models added\n[x] SpecialFunctions eta and gamma cause conflicts; consider importing used functions explicitly\n[x] bring the parsing of equations after the parameters macro\n[x] rewrite redundant var part so that it works with ssauxequations instead of ss_equations\n[x] catch cases where ss vars are set to zero. 
x[0] * eps_z[x] in SS becomes x[0] * 0 but should be just 0 (use sympy for this)\n[x] remove duplicate nonnegative aux vars to speed up SS solver\n[x] error when defining variable more than once in parameters macro\n[x] consolidate aux vars, use sympy to simplify\n[x] error when writing equations with only one variable\n[x] error when defining variable as parameter\n[x] more options for IRFs, simulate only certain shocks - set stds to 0 instead\n[x] add NBTOOLBOX, IRIS to overview\n[x] input field for SS init guess in all functions #not necessary so far. SS solver works out everything just fine\n[x] symbolic derivatives\n[x] check SW03 SS solver\n[x] more options for IRFs, pass on shock vector\n[x] write to dynare\n[x] add plot for policy function\n[x] add plot for FEVD\n[x] add functions like getvariance, getsd, getvar, getcovar\n[x] add correlation, autocorrelation, and (conditional) variance decomposition\n[x] go through docs to reflect verbose behaviour\n[x] speed up covariance mat calc\n[x] have conditional parameters at end of entry as well (... | alpha instead of alpha | ...)\n[x] Get functions: getoutput, getmoments\n[x] get rid of init_guess\n[x] an and schorfheide estimation\n[x] estimation, IRF matching, system priors\n[x] check derivative tests with finite diff\n[x] release first version\n[x] SS solve: add domain transformation optim\n[x] revisit optimizers for SS\n[x] figure out licenses\n[x] SS: replace variables in log() with auxilliary variable which must be positive to help solver\n[x] complex example with lags > 1, [ss], calib equations, aux nonneg vars\n[x] add NLboxsolve\n[x] try NonlinearSolve - fails due to missing bounds\n[x] make noneg aux part of optim problem for NLboxsolve in order to avoid DomainErrors - not necessary\n[x] have bounds on alpha (failed previously due to naming conflict) - works now","category":"page"},{"location":"unfinished_docs/todo/#Not-high-priority","page":"Todo list","title":"Not high priority","text":"","category":"section"},{"location":"unfinished_docs/todo/","page":"Todo list","title":"Todo list","text":"[ ] estimation codes with missing values (adopt kalman filter)\n[ ] decide on whether levels = false means deviations from NSSS or relevant SS\n[ ] whats a good error measure for higher order solutions (taking whole dist of future shock into account)? use mean error for n number of future shocks\n[ ] improve redundant calculations of SS and other parts of solution\n[ ] restructure functions and containers so that compiler knows what types to expect\n[ ] use RecursiveFactorization and TriangularSolve to solve, instead of MKL or OpenBLAS\n[ ] fix SnoopCompile with generated functions\n[ ] exploit variable incidence and compression for higher order derivatives\n[ ] for estimation use CUDA with st order: linear time iteration starting from last 1st order solution and then LinearSolveCUDA solvers for higher orders. this should bring benefits for large models and HANK models\n[ ] pull request in StatsFuns to have norminv... 
accept type numbers and add translation from matlab: norminv to StatsFuns norminvcdf\n[ ] more informative errors when declaring equations/ calibration\n[ ] unit equation errors\n[ ] implenent reduced linearised system solver + nonlinear\n[ ] implement HANK\n[ ] implement automatic problem derivation (gEcon)\n[ ] print legend for algorithm in last subplot of plot only\n[ ] select variables for moments\n[x] rewrite first order with riccati equation MatrixEquations.jl: not necessary/feasable see dynare package\n[x] test on highly nonlinear model # caldara et al is actually epstein zin wiht stochastic vol\n[x] conditional forecasting\n[x] find way to recover from failed SS solution which is written to init guess\n[x] redo ugly solution for selecting parameters to differentiate for\n[x] conditions for when to use which solution. if solution is outdated redo all solutions which have been done so far and use smart starting points\n[x] Revise 2,3 pert codes to make it more intuitive\n[x] implement blockdiag with julia package instead of python\n[x] Pretty print linear solution\n[x] write function to get_irfs\n[x] Named arrays for irf\n[x] write state space function for solution\n[x] Status print for model container\n[x] implenent 2nd + 3rd order perturbation\n[x] implement fuctions for distributions\n[x] try speedmapping.jl - no improvement\n[x] moment matching\n[x] write tests for higher order pert and standalone function\n[x] add compression back in\n[x] FixedPointAcceleration didnt improve on iterative procedure\n[x] add exogenous variables in lead or lag\n[x] regex in parser of SS and exo\n[x] test SS solver on SW07\n[x] change calibration, distinguish SS/dyn parameters\n[x] plot multiple solutions at same time (save them in separate constructs)\n[x] implement bounds in SS finder\n[x] map pars + vars impacting SS\n[x] check bounds when putting in new calibration\n[x] Save plot option\n[x] Add shock to plot title\n[x] print model name","category":"page"},{"location":"tutorials/rbc/#Write-your-first-model-simple-RBC","page":"Write your first simple model - RBC","title":"Write your first model - simple RBC","text":"","category":"section"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"The following tutorial will walk you through the steps of writing down a model (not explained here / taken as given) and analysing it. Prior knowledge of DSGE models and their solution in practical terms (e.g. having used a mod file with dynare) is useful in understanding this tutorial. For the purpose of this tutorial we will work with a simplified version of a real business cycle (RBC) model. The model laid out below examines capital accumulation, consumption, and random technological progress. Households maximize lifetime utility from consumption, weighing current against future consumption. Firms produce using capital and a stochastic technology factor, setting capital rental rates based on marginal productivity. 
The model integrates households' decisions, firms' production, and random technological shifts to understand economic growth and dynamics.","category":"page"},{"location":"tutorials/rbc/#RBC-derivation-of-model-equations","page":"Write your first simple model - RBC","title":"RBC - derivation of model equations","text":"","category":"section"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"Household's Problem: Households derive utility from consuming goods and discount future consumption. The decision they face every period is how much of their income to consume now versus how much to invest for future consumption.","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"E_0 sum_t=0^infty beta^t ln(c_t)","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"Their budget constraint reflects that their available resources for consumption or investment come from returns on their owned capital (both from the rental rate and from undepreciated capital) and any profits distributed from firms.","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"c_t + k_t = (1-delta) k_t-1 + R_t k_t-1 + Pi_t","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"Combining the first order (optimality) conditions with respect to c_t and k_t shows that households balance the marginal utility of consuming one more unit today against the expected discounted marginal utility of consuming that unit in the future.","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"frac1c_t = beta E_t left (R_t+1 + 1 - delta) frac1c_t+1 right","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"Firm's Problem: Firms rent capital from households to produce goods. Their profits, Pi_t, are the difference between their revenue from selling goods and their costs from renting capital. Competition ensures that profits are 0.","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"Pi_t = q_t - R_t k_t-1","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"Given the Cobb-Douglas production function with a stochastic technology process:","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"q_t = e^z_t k_t-1^alpha","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"The FOC with respect to capital k_t determines the optimal amount of capital the firm should rent. 
It equates the marginal product of capital (how much additional output one more unit of capital would produce) to its cost (the rental rate).","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"R_t = alpha e^z_t k_t-1^alpha-1","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"Market Clearing: This condition ensures that every good produced in the economy is either consumed by households or invested to augment future production capabilities.","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"q_t = c_t + i_t","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"With:","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"i_t = k_t - (1-delta)k_t-1","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"Equations describing the dynamics of the economy:","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"Household's Optimization (Euler Equation): Signifies the balance households strike between current and future consumption. The rental rate of capital has been substituted for.","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"frac1c_t = fracbetac_t+1 left( alpha e^z_t+1 k_t^alpha-1 + (1 - delta) right)","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"Capital Accumulation: Charts the progression of capital stock over time.","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"c_t + k_t = (1-delta)k_t-1 + q_t","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"Production: Describes the output generation from the previous period's capital stock, enhanced by current technology.","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"q_t = e^z_t k_t-1^alpha","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"Technology Process: Traces the evolution of technological progress. Exogenous innovations are captured by epsilon^z_t.","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"z_t = rho^z z_t-1 + sigma^z epsilon^z_t","category":"page"},{"location":"tutorials/rbc/#Define-the-model","page":"Write your first simple model - RBC","title":"Define the model","text":"","category":"section"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"The first step is always to name the model and write down the equations. 
Taking the RBC model described above this would go as follows.","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"First, we load the package and then use the @model macro to define our model. The first argument after @model is the model name and will be the name of the object in the global environment containing all information regarding the model. The second argument to the macro are the equations, which we write down between begin and end. One equation per line and timing of endogenous variables are expressed in the squared brackets following the variable name. Exogenous variables (shocks) are followed by a keyword in squared brackets indicating them being exogenous (in this case [x]). Note that names can leverage julias unicode capabilities (alpha can be written as α).","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"ENV[\"GKSwstype\"] = \"100\"","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"using MacroModelling\n@model RBC begin\n 1 / c[0] = (β / c[1]) * (α * exp(z[1]) * k[0]^(α - 1) + (1 - δ))\n\n c[0] + k[0] = (1 - δ) * k[-1] + q[0]\n\n q[0] = exp(z[0]) * k[-1]^α\n\n z[0] = ρᶻ * z[-1] + σᶻ * ϵᶻ[x]\nend","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"After the model is parsed we get some info on the model variables, and parameters.","category":"page"},{"location":"tutorials/rbc/#Define-the-parameters","page":"Write your first simple model - RBC","title":"Define the parameters","text":"","category":"section"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"Next we need to add the parameters of the model. The macro @parameters takes care of this:","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"@parameters RBC begin\n σᶻ= 0.01\n ρᶻ= 0.2\n δ = 0.02\n α = 0.5\n β = 0.95\nend","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"Parameter definitions are similar to assigning values in julia. Note that we have to write one parameter definition per line.","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"Given the equations and parameters, the package will first attempt to solve the system of nonlinear equations symbolically (including possible calibration equations - see next tutorial for an example). If an analytical solution is not possible, numerical solution methods are used to try and solve it. There is no guarantee that a solution can be found, but it is highly likely, given that a solution exists. The problem setup tries to incorporate parts of the structure of the problem, e.g. bounds on variables: if one equation contains log(k) it must be that k > 0. Nonetheless, the user can also provide useful information such as variable bounds or initial guesses. Bounds can be set by adding another expression to the parameter block e.g.: c > 0. Large values are typically a problem for numerical solvers. 
Therefore, providing a guess for these values will increase the speed of the solver. Guesses can be provided as a Dict after the model name and before the parameter definitions block, e.g.: @parameters RBC guess = Dict(k => 10) begin ... end.","category":"page"},{"location":"tutorials/rbc/#Plot-impulse-response-functions-(IRFs)","page":"Write your first simple model - RBC","title":"Plot impulse response functions (IRFs)","text":"","category":"section"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"A useful output to analyze are IRFs for the exogenous shocks. Calling plot_irf (different names for the same function are also supported: plot_irfs, or plot_IRF) will take care of this. Please note that you need to import the StatsPlots packages once before the first plot. In the background the package solves (symbolically in this simple case) for the non stochastic steady state (SS) and calculates the first order perturbation solution.","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"import StatsPlots\nplot_irf(RBC)","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"(Image: RBC IRF)","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"When the model is solved the first time (in this case by calling plot_irf), the package breaks down the steady state problem into independent blocks and first attempts to solve them symbolically and if that fails numerically.","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"The plot shows the responses of the endogenous variables (c, k, q, and z) to a one standard deviation positive (indicated by Shock⁺ in chart title) unanticipated shock in eps_z. Therefore there are as many subplots as there are combinations of shocks and endogenous variables (which are impacted by the shock). Plots are composed of up to 9 subplots and the plot title shows the model name followed by the name of the shock and which plot we are seeing out of the plots for this shock (e.g. (1/3) means we see the first out of three plots for this shock). Subplots show the sorted endogenous variables with the left y-axis showing the level of the respective variable and the right y-axis showing the percent deviation from the SS (if variable is strictly positive). The horizontal black line marks the SS.","category":"page"},{"location":"tutorials/rbc/#Explore-other-parameter-values","page":"Write your first simple model - RBC","title":"Explore other parameter values","text":"","category":"section"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"Playing around with the model can be especially insightful in the early phase of model development. The package tries to facilitates this process to the extent possible. Typically one wants to try different parameter values and see how the IRFs change. This can be done by using the parameters argument of the plot_irf function. We pass a Pair with the Symbol of the parameter (: in front of the parameter name) we want to change and its new value to the parameter argument (e.g. 
:α => 0.3).","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"plot_irf(RBC, parameters = :α => 0.3)","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"(Image: IRF plot)","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"First, the package finds the new steady state, solves the model dynamics around it and saves the new parameters and solution in the model object. Second, note that the shape of the curves in the plot and the y-axis values changed. Updating the plot for new parameters is significantly faster than calling it the first time. This is because the first call triggers compilations of the model functions, and once compiled the user benefits from the performance of the specialised compiled code.","category":"page"},{"location":"tutorials/rbc/#Plot-model-simulation","page":"Write your first simple model - RBC","title":"Plot model simulation","text":"","category":"section"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"Another insightful output is simulations of the model. Here we can use the plot_simulations function. Please note that you need to import the StatsPlots packages once before the first plot. To the same effect we can use the plot_irf function and specify in the shocks argument that we want to :simulate the model and set the periods argument to 100.","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"plot_simulations(RBC)","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"(Image: Simulate RBC)","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"The plots show the models endogenous variables in response to random draws for all exogenous shocks over 100 periods.","category":"page"},{"location":"tutorials/rbc/#Plot-specific-series-of-shocks","page":"Write your first simple model - RBC","title":"Plot specific series of shocks","text":"","category":"section"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"Sometimes one has a specific series of shocks in mind and wants to see the corresponding model response of endogenous variables. 
This can be achieved by passing a Matrix or KeyedArray of the series of shocks to the shocks argument of the plot_irf function:","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"shock_series = zeros(1,4)\nshock_series[1,2] = 1\nshock_series[1,4] = -1\nplot_irf(RBC, shocks = shock_series)","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"(Image: Series of shocks RBC)","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"The plot shows the two shocks hitting the economy in periods 2 and 4 and then continues the simulation for 40 more quarters.","category":"page"},{"location":"tutorials/rbc/#Model-statistics","page":"Write your first simple model - RBC","title":"Model statistics","text":"","category":"section"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"The package solves for the SS automatically and we got an idea of the SS values in the plots. If we want to see the SS values we can call get_steady_state:","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"get_steady_state(RBC)","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"to get the SS and the derivatives of the SS with respect to the model parameters. The first column of the returned matrix shows the SS while the second to last columns show the derivatives of the SS values (indicated in the rows) with respect to the parameters (indicated in the columns). For example, the derivative of k with respect to β is 165.319. This means that if we increase β by 1, k would increase by 165.319 approximately. Let's see how this plays out by changing β from 0.95 to 0.951 (a change of +0.001):","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"get_steady_state(RBC,parameters = :β => .951)","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"Note that get_steady_state like all other get functions has the parameters argument. Hence, whatever output we are looking at we can change the parameters of the model.","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"The new value of β changed the SS as expected and k increased by 0.168. The elasticity (0.168/0.001) comes close to the partial derivative previously calculated. The derivatives help understanding the effect of parameter changes on the steady state and make for easier navigation of the parameter space.","category":"page"},{"location":"tutorials/rbc/#Standard-deviations","page":"Write your first simple model - RBC","title":"Standard deviations","text":"","category":"section"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"Next to the SS we can also show the model implied standard deviations of the model. get_standard_deviation takes care of this. 
Additionally we will set the parameter values to what they were in the beginning by passing on a Tuple of Pairs containing the Symbols of the parameters to be changed and their new (initial) values (e.g. (:α => 0.5, :β => .95)).","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"get_standard_deviation(RBC, parameters = (:α => 0.5, :β => .95))","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"The function returns the model implied standard deviations of the model variables and their derivatives with respect to the model parameters. For example, the derivative of the standard deviation of c with resect to δ is -0.384. In other words, the standard deviation of c decreases with increasing δ.","category":"page"},{"location":"tutorials/rbc/#Correlations","page":"Write your first simple model - RBC","title":"Correlations","text":"","category":"section"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"Another useful statistic is the model implied correlation of variables. We use get_correlation for this:","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"get_correlation(RBC)","category":"page"},{"location":"tutorials/rbc/#Autocorrelations","page":"Write your first simple model - RBC","title":"Autocorrelations","text":"","category":"section"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"Last but not least, we have a look at the model implied autocorrelations of model variables using the get_autocorrelation function:","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"get_autocorrelation(RBC)","category":"page"},{"location":"tutorials/rbc/#Model-solution","page":"Write your first simple model - RBC","title":"Model solution","text":"","category":"section"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"A further insightful output are the policy and transition functions of the the first order perturbation solution. To retrieve the solution we call the function get_solution:","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"get_solution(RBC)","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"The solution provides information about how past states and present shocks impact present variables. The first row contains the SS for the variables denoted in the columns. The second to last rows contain the past states, with the time index ₍₋₁₎, and present shocks, with exogenous variables denoted by ₍ₓ₎. For example, the immediate impact of a shock to eps_z on q is 0.0688.","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"There is also the possibility to visually inspect the solution. Please note that you need to import the StatsPlots packages once before the first plot. 
We can use the plot_solution function:","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"plot_solution(RBC, :k)","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"(Image: RBC solution)","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"The chart shows the first order perturbation solution mapping from the past state k to the present variables of the model. The state variable covers a range of two standard deviations around the non stochastic steady state and all other states remain in the non stochastic steady state.","category":"page"},{"location":"tutorials/rbc/#Obtain-array-of-IRFs-or-model-simulations","page":"Write your first simple model - RBC","title":"Obtain array of IRFs or model simulations","text":"","category":"section"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"Last but not least the user might want to obtain simulated time series of the model or IRFs without plotting them. For IRFs this is possible by calling get_irf:","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"get_irf(RBC)","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"which returns a 3-dimensional KeyedArray with variables (absolute deviations from the relevant steady state by default) in rows, the period in columns, and the shocks as the third dimension.","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"For simulations this is possible by calling simulate:","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"simulate(RBC)","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"which returns the simulated data in levels in a 3-dimensional KeyedArray of the same structure as for the IRFs.","category":"page"},{"location":"tutorials/rbc/#Conditional-forecasts","page":"Write your first simple model - RBC","title":"Conditional forecasts","text":"","category":"section"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"Conditional forecasting is a useful tool to incorporate for example forecasts into a model and then add shocks on top.","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"For example we might be interested in the model dynamics given a path for c for the first 4 quarters and the next quarter a negative shock to eps_z arrives. 
This can be implemented using the get_conditional_forecast function and visualised with the plot_conditional_forecast function.","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"First, we define the conditions on the endogenous variables as deviations from the non stochastic steady state (c in this case) using a KeyedArray from the AxisKeys package (check get_conditional_forecast for other ways to define the conditions):","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"using AxisKeys\nconditions = KeyedArray(Matrix{Union{Nothing,Float64}}(undef,1,4),Variables = [:c], Periods = 1:4)\nconditions[1:4] .= [-.01,0,.01,.02];","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"Note that all other endogenous variables not part of the KeyedArray are also not conditioned on.","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"Next, we define the conditions on the shocks (eps_z in this case) using a SparseArrayCSC from the SparseArrays package (check get_conditional_forecast for other ways to define the conditions on the shocks):","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"using SparseArrays\nshocks = spzeros(1,5)\nshocks[1,5] = -1;","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"Note that for the first 4 periods the shock has no predetermined value and is determined by the conditions on the endogenous variables.","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"Finally we can get the conditional forecast:","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"get_conditional_forecast(RBC, conditions, shocks = shocks, conditions_in_levels = false)","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"The function returns a KeyedArray with the values of the endogenous variables and shocks matching the conditions exactly.","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"We can also plot the conditional forecast. Please note that you need to import the StatsPlots packages once before the first plot. 
In order to plot we can use:","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"plot_conditional_forecast(RBC, conditions, shocks = shocks, conditions_in_levels = false)","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"(Image: RBC conditional forecast)","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"and we need to set conditions_in_levels = false since the conditions are defined in deviations.","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"Note that the stars indicate the values the model is conditioned on.","category":"page"},{"location":"how-to/loops/#Programmatic-model-writing","page":"Programmatic model writing using for-loops","title":"Programmatic model writing","text":"","category":"section"},{"location":"how-to/loops/","page":"Programmatic model writing using for-loops","title":"Programmatic model writing using for-loops","text":"Programmatic model writing is a powerful tool to write complex models using concise code. More specifically, the @model and @parameters macros allow for the use of indexed variables and for-loops.","category":"page"},{"location":"how-to/loops/#Model-block","page":"Programmatic model writing using for-loops","title":"Model block","text":"","category":"section"},{"location":"how-to/loops/#for-loops-for-time-indices","page":"Programmatic model writing using for-loops","title":"for loops for time indices","text":"","category":"section"},{"location":"how-to/loops/","page":"Programmatic model writing using for-loops","title":"Programmatic model writing using for-loops","text":"In practice this means that you no longer need to write this:","category":"page"},{"location":"how-to/loops/","page":"Programmatic model writing using for-loops","title":"Programmatic model writing using for-loops","text":"Y_annual[0] = Y[0] + Y[-1] + Y[-2] + Y[-3]","category":"page"},{"location":"how-to/loops/","page":"Programmatic model writing using for-loops","title":"Programmatic model writing using for-loops","text":"but instead you can write this:","category":"page"},{"location":"how-to/loops/","page":"Programmatic model writing using for-loops","title":"Programmatic model writing using for-loops","text":"Y_annual[0] = for lag in -3:0 Y[lag] end","category":"page"},{"location":"how-to/loops/","page":"Programmatic model writing using for-loops","title":"Programmatic model writing using for-loops","text":"In the background the package expands the for loop and adds up the elements for the different values of lag.","category":"page"},{"location":"how-to/loops/","page":"Programmatic model writing using for-loops","title":"Programmatic model writing using for-loops","text":"In case you don't want the elements to be added up but multiply the items you can do so:","category":"page"},{"location":"how-to/loops/","page":"Programmatic model writing using for-loops","title":"Programmatic model writing using for-loops","text":"R_annual[0] = for operator = :*, lag in -3:0 R[lag] end","category":"page"},{"location":"how-to/loops/#for-loops-for-variables-/-parameter-specific-indices","page":"Programmatic model writing using for-loops","title":"for loops for variables / parameter specific 
indices","text":"","category":"section"},{"location":"how-to/loops/","page":"Programmatic model writing using for-loops","title":"Programmatic model writing using for-loops","text":"Another use-case are models with repetitive equations such as multi-sector or multi-country models.","category":"page"},{"location":"how-to/loops/","page":"Programmatic model writing using for-loops","title":"Programmatic model writing using for-loops","text":"For example, defining the production function for two countries (home country H and foreign country F) would look as follows without the use of programmatic features:","category":"page"},{"location":"how-to/loops/","page":"Programmatic model writing using for-loops","title":"Programmatic model writing using for-loops","text":"y_H[0] = A_H[0] * k_H[-1]^alpha_H\ny_F[0] = A_F[0] * k_F[-1]^alpha_F","category":"page"},{"location":"how-to/loops/","page":"Programmatic model writing using for-loops","title":"Programmatic model writing using for-loops","text":"and this can be written more conveniently using loops:","category":"page"},{"location":"how-to/loops/","page":"Programmatic model writing using for-loops","title":"Programmatic model writing using for-loops","text":"for co in [H, F] y{co}[0] = A{co}[0] * k{co}[-1]^alpha{co} end","category":"page"},{"location":"how-to/loops/","page":"Programmatic model writing using for-loops","title":"Programmatic model writing using for-loops","text":"Note that the package internally writes out the for loop and creates two equations; one each for country H and F. The variables and parameters are indexed using the curly braces {}. These can also be used outside loops. When using more than one index it is important to make sure the indices are in the right order.","category":"page"},{"location":"how-to/loops/#Example-model-block","page":"Programmatic model writing using for-loops","title":"Example model block","text":"","category":"section"},{"location":"how-to/loops/","page":"Programmatic model writing using for-loops","title":"Programmatic model writing using for-loops","text":"Putting these these elements together we can write the multi-country model equations of the Backus, Kehoe and Kydland (1992) model like this:","category":"page"},{"location":"how-to/loops/","page":"Programmatic model writing using for-loops","title":"Programmatic model writing using for-loops","text":"ENV[\"GKSwstype\"] = \"100\"\nusing Random\nRandom.seed!(3)","category":"page"},{"location":"how-to/loops/","page":"Programmatic model writing using for-loops","title":"Programmatic model writing using for-loops","text":"using MacroModelling\n@model Backus_Kehoe_Kydland_1992 begin\n for co in [H, F]\n Y{co}[0] = ((LAMBDA{co}[0] * K{co}[-4]^theta{co} * N{co}[0]^(1 - theta{co}))^(-nu{co}) + sigma{co} * Z{co}[-1]^(-nu{co}))^(-1 / nu{co})\n\n K{co}[0] = (1 - delta{co}) * K{co}[-1] + S{co}[0]\n\n X{co}[0] = for lag in (-4+1):0 phi{co} * S{co}[lag] end\n\n A{co}[0] = (1 - eta{co}) * A{co}[-1] + N{co}[0]\n\n L{co}[0] = 1 - alpha{co} * N{co}[0] - (1 - alpha{co}) * eta{co} * A{co}[-1]\n\n U{co}[0] = (C{co}[0]^mu{co} * L{co}[0]^(1 - mu{co}))^gamma{co}\n\n psi{co} * mu{co} / C{co}[0] * U{co}[0] = LGM[0]\n\n psi{co} * (1 - mu{co}) / L{co}[0] * U{co}[0] * (-alpha{co}) = - LGM[0] * (1 - theta{co}) / N{co}[0] * (LAMBDA{co}[0] * K{co}[-4]^theta{co} * N{co}[0]^(1 - theta{co}))^(-nu{co}) * Y{co}[0]^(1 + nu{co})\n\n for lag in 0:(4-1) \n beta{co}^lag * LGM[lag]*phi{co}\n end +\n for lag in 1:4\n -beta{co}^lag * LGM[lag] * phi{co} * (1 - delta{co})\n end = beta{co}^4 * 
LGM[+4] * theta{co} / K{co}[0] * (LAMBDA{co}[+4] * K{co}[0]^theta{co} * N{co}[+4]^(1 - theta{co})) ^ (-nu{co}) * Y{co}[+4]^(1 + nu{co})\n\n LGM[0] = beta{co} * LGM[+1] * (1 + sigma{co} * Z{co}[0]^(-nu{co} - 1) * Y{co}[+1]^(1 + nu{co}))\n\n NX{co}[0] = (Y{co}[0] - (C{co}[0] + X{co}[0] + Z{co}[0] - Z{co}[-1])) / Y{co}[0]\n end\n\n (LAMBDA{H}[0] - 1) = rho{H}{H} * (LAMBDA{H}[-1] - 1) + rho{H}{F} * (LAMBDA{F}[-1] - 1) + Z_E{H} * E{H}[x]\n\n (LAMBDA{F}[0] - 1) = rho{F}{F} * (LAMBDA{F}[-1] - 1) + rho{F}{H} * (LAMBDA{H}[-1] - 1) + Z_E{F} * E{F}[x]\n\n for co in [H,F] C{co}[0] + X{co}[0] + Z{co}[0] - Z{co}[-1] end = for co in [H,F] Y{co}[0] end\nend","category":"page"},{"location":"how-to/loops/#Parameter-block","page":"Programmatic model writing using for-loops","title":"Parameter block","text":"","category":"section"},{"location":"how-to/loops/","page":"Programmatic model writing using for-loops","title":"Programmatic model writing using for-loops","text":"Having defined parameters and variables with indices in the model block we can also declare parameter values, including by means of calibration equations, in the parameter block.","category":"page"},{"location":"how-to/loops/","page":"Programmatic model writing using for-loops","title":"Programmatic model writing using for-loops","text":"In the above example we defined the production function fro countries H and F. Implicitly we have two parameters alpha and we can define their value individually by setting","category":"page"},{"location":"how-to/loops/","page":"Programmatic model writing using for-loops","title":"Programmatic model writing using for-loops","text":"alpha{H} = 0.3\nalpha{F} = 0.3","category":"page"},{"location":"how-to/loops/","page":"Programmatic model writing using for-loops","title":"Programmatic model writing using for-loops","text":"or jointly by writing","category":"page"},{"location":"how-to/loops/","page":"Programmatic model writing using for-loops","title":"Programmatic model writing using for-loops","text":"alpha = 0.3","category":"page"},{"location":"how-to/loops/","page":"Programmatic model writing using for-loops","title":"Programmatic model writing using for-loops","text":"By not using the index, the package understands that there are two parameters with this name and different indices and will set both accordingly.","category":"page"},{"location":"how-to/loops/","page":"Programmatic model writing using for-loops","title":"Programmatic model writing using for-loops","text":"This logic extends to calibration equations. We can write:","category":"page"},{"location":"how-to/loops/","page":"Programmatic model writing using for-loops","title":"Programmatic model writing using for-loops","text":"y{H}[ss] = 1 | alpha{H}\ny{F}[ss] = 1 | alpha{F}","category":"page"},{"location":"how-to/loops/","page":"Programmatic model writing using for-loops","title":"Programmatic model writing using for-loops","text":"to find the value of alpha that corresponds to y being equal to 1 in the non-stochastic steady state. 
Alternatively we can not use indices and the package understands that we refer to both indices:","category":"page"},{"location":"how-to/loops/","page":"Programmatic model writing using for-loops","title":"Programmatic model writing using for-loops","text":"y[ss] = 1 | alpha","category":"page"},{"location":"how-to/loops/","page":"Programmatic model writing using for-loops","title":"Programmatic model writing using for-loops","text":"Making use of the indices we could also target a level of y for country H with alpha for country H and target ratio of the two ys with the alpha for country F:","category":"page"},{"location":"how-to/loops/","page":"Programmatic model writing using for-loops","title":"Programmatic model writing using for-loops","text":"y{H}[ss] = 1 | alpha{H}\ny{H}[ss] / y{F}[ss] = y_ratio | alpha{F}\ny_ratio = 0.9","category":"page"},{"location":"how-to/loops/#Example-parameter-block","page":"Programmatic model writing using for-loops","title":"Example parameter block","text":"","category":"section"},{"location":"how-to/loops/","page":"Programmatic model writing using for-loops","title":"Programmatic model writing using for-loops","text":"Making use of this and continuing the example of the Backus, Kehoe and Kydland (1992) model we can define the parameters as follows:","category":"page"},{"location":"how-to/loops/","page":"Programmatic model writing using for-loops","title":"Programmatic model writing using for-loops","text":"@parameters Backus_Kehoe_Kydland_1992 begin\n K_ss = 11\n K[ss] = K_ss | beta\n \n mu = 0.34\n gamma = -1.0\n alpha = 1\n eta = 0.5\n theta = 0.36\n nu = 3\n sigma = 0.01\n delta = 0.025\n phi = 1/4\n psi = 0.5\n\n Z_E = 0.00852\n \n rho{H}{H} = 0.906\n rho{F}{F} = rho{H}{H}\n rho{H}{F} = 0.088\n rho{F}{H} = rho{H}{F}\nend","category":"page"},{"location":"call_index/#Index","page":"Index","title":"Index","text":"","category":"section"},{"location":"call_index/","page":"Index","title":"Index","text":"","category":"page"},{"location":"api/","page":"API","title":"API","text":"Modules = [MacroModelling]\nOrder = [:function, :macro]","category":"page"},{"location":"api/#MacroModelling.Beta-NTuple{4, Real}","page":"API","title":"MacroModelling.Beta","text":"Beta(μ, σ, lower_bound, upper_bound; μσ)\n\n\nConvenience wrapper for the truncated Beta distribution.\n\nIf μσ = true then μ and σ are translated to the parameters of the distribution. Otherwise μ and σ represent the parameters of the distribution.\n\nArguments\n\nμ [Type: Real]: mean or first parameter of the distribution, \nσ [Type: Real]: standard deviation or first parameter of the distribution\nlower_bound [Type: Real]: truncation lower bound of the distribution\nupper_bound [Type: Real]: truncation upper bound of the distribution\n\nKeyword Arguments\n\nμσ [Type: Bool]: switch whether μ and σ represent the moments of the distribution or their parameters\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.Beta-Tuple{Real, Real}","page":"API","title":"MacroModelling.Beta","text":"Beta(μ, σ; μσ)\n\n\nConvenience wrapper for the Beta distribution.\n\nIf μσ = true then μ and σ are translated to the parameters of the distribution. 
Otherwise μ and σ represent the parameters of the distribution.\n\nArguments\n\nμ [Type: Real]: mean or first parameter of the distribution, \nσ [Type: Real]: standard deviation or first parameter of the distribution\n\nKeyword Arguments\n\nμσ [Type: Bool]: switch whether μ and σ represent the moments of the distribution or their parameters\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.Gamma-NTuple{4, Real}","page":"API","title":"MacroModelling.Gamma","text":"Gamma(μ, σ, lower_bound, upper_bound; μσ)\n\n\nConvenience wrapper for the truncated Inverse Gamma distribution.\n\nIf μσ = true then μ and σ are translated to the parameters of the distribution. Otherwise μ and σ represent the parameters of the distribution.\n\nArguments\n\nμ [Type: Real]: mean or first parameter of the distribution, \nσ [Type: Real]: standard deviation or first parameter of the distribution\nlower_bound [Type: Real]: truncation lower bound of the distribution\nupper_bound [Type: Real]: truncation upper bound of the distribution\n\nKeyword Arguments\n\nμσ [Type: Bool]: switch whether μ and σ represent the moments of the distribution or their parameters\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.Gamma-Tuple{Real, Real}","page":"API","title":"MacroModelling.Gamma","text":"Gamma(μ, σ; μσ)\n\n\nConvenience wrapper for the Gamma distribution.\n\nIf μσ = true then μ and σ are translated to the parameters of the distribution. Otherwise μ and σ represent the parameters of the distribution.\n\nArguments\n\nμ [Type: Real]: mean or first parameter of the distribution, \nσ [Type: Real]: standard deviation or first parameter of the distribution\n\nKeyword Arguments\n\nμσ [Type: Bool]: switch whether μ and σ represent the moments of the distribution or their parameters\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.InverseGamma-NTuple{4, Real}","page":"API","title":"MacroModelling.InverseGamma","text":"InverseGamma(μ, σ, lower_bound, upper_bound; μσ)\n\n\nConvenience wrapper for the truncated Inverse Gamma distribution.\n\nIf μσ = true then μ and σ are translated to the parameters of the distribution. Otherwise μ and σ represent the parameters of the distribution.\n\nArguments\n\nμ [Type: Real]: mean or first parameter of the distribution, \nσ [Type: Real]: standard deviation or first parameter of the distribution\nlower_bound [Type: Real]: truncation lower bound of the distribution\nupper_bound [Type: Real]: truncation upper bound of the distribution\n\nKeyword Arguments\n\nμσ [Type: Bool]: switch whether μ and σ represent the moments of the distribution or their parameters\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.InverseGamma-Tuple{Real, Real}","page":"API","title":"MacroModelling.InverseGamma","text":"InverseGamma(μ, σ; μσ)\n\n\nConvenience wrapper for the Inverse Gamma distribution.\n\nIf μσ = true then μ and σ are translated to the parameters of the distribution. 
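The distribution wrappers documented here do not come with a usage example, so here is a minimal sketch, assuming the wrappers are exported by MacroModelling as listed; the variable names and numerical values are purely illustrative and not taken from the documentation:

```julia
using MacroModelling

# Illustrative prior specifications (values are hypothetical).
# With μσ = true the first two arguments are read as mean and standard deviation;
# without it they are the distribution's own parameters.
prior_alpha = Beta(0.3, 0.02, μσ = true)             # mean 0.3, standard deviation 0.02
prior_rho   = Beta(0.5, 0.2, 0.0, 0.999, μσ = true)  # additionally truncated to [0, 0.999]
prior_sigma = Gamma(0.1, 2.0, μσ = true)             # mean 0.1, standard deviation 2.0
```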
Otherwise μ and σ represent the parameters of the distribution.\n\nArguments\n\nμ [Type: Real]: mean or first parameter of the distribution, \nσ [Type: Real]: standard deviation or first parameter of the distribution\n\nKeyword Arguments\n\nμσ [Type: Bool]: switch whether μ and σ represent the moments of the distribution or their parameters\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.Normal-NTuple{4, Real}","page":"API","title":"MacroModelling.Normal","text":"Normal(μ, σ, lower_bound, upper_bound)\n\n\nConvenience wrapper for the truncated Normal distribution.\n\nArguments\n\nμ [Type: Real]: mean of the distribution, \nσ [Type: Real]: standard deviation of the distribution\nlower_bound [Type: Real]: truncation lower bound of the distribution\nupper_bound [Type: Real]: truncation upper bound of the distribution\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.SS","page":"API","title":"MacroModelling.SS","text":"See get_steady_state\n\n\n\n\n\n","category":"function"},{"location":"api/#MacroModelling.SSS-Tuple","page":"API","title":"MacroModelling.SSS","text":"Wrapper for get_steady_state with stochastic = true.\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.autocorr","page":"API","title":"MacroModelling.autocorr","text":"See get_autocorrelation\n\n\n\n\n\n","category":"function"},{"location":"api/#MacroModelling.check_residuals","page":"API","title":"MacroModelling.check_residuals","text":"See get_non_stochastic_steady_state_residuals\n\n\n\n\n\n","category":"function"},{"location":"api/#MacroModelling.corr","page":"API","title":"MacroModelling.corr","text":"See get_correlation\n\n\n\n\n\n","category":"function"},{"location":"api/#MacroModelling.cov","page":"API","title":"MacroModelling.cov","text":"Wrapper for get_moments with covariance = true and non_stochastic_steady_state = false, variance = false, standard_deviation = false.\n\n\n\n\n\n","category":"function"},{"location":"api/#MacroModelling.export_dynare","page":"API","title":"MacroModelling.export_dynare","text":"See write_mod_file\n\n\n\n\n\n","category":"function"},{"location":"api/#MacroModelling.export_mod_file","page":"API","title":"MacroModelling.export_mod_file","text":"See write_mod_file\n\n\n\n\n\n","category":"function"},{"location":"api/#MacroModelling.export_model","page":"API","title":"MacroModelling.export_model","text":"See write_mod_file\n\n\n\n\n\n","category":"function"},{"location":"api/#MacroModelling.export_to_dynare","page":"API","title":"MacroModelling.export_to_dynare","text":"See write_mod_file\n\n\n\n\n\n","category":"function"},{"location":"api/#MacroModelling.fevd","page":"API","title":"MacroModelling.fevd","text":"See get_conditional_variance_decomposition\n\n\n\n\n\n","category":"function"},{"location":"api/#MacroModelling.get_IRF","page":"API","title":"MacroModelling.get_IRF","text":"See get_irf\n\n\n\n\n\n","category":"function"},{"location":"api/#MacroModelling.get_SS","page":"API","title":"MacroModelling.get_SS","text":"See get_steady_state\n\n\n\n\n\n","category":"function"},{"location":"api/#MacroModelling.get_SSS-Tuple","page":"API","title":"MacroModelling.get_SSS","text":"Wrapper for get_steady_state with stochastic = true.\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.get_autocorr","page":"API","title":"MacroModelling.get_autocorr","text":"See 
get_autocorrelation\n\n\n\n\n\n","category":"function"},{"location":"api/#MacroModelling.get_autocorrelation-Tuple{MacroModelling.ℳ}","page":"API","title":"MacroModelling.get_autocorrelation","text":"get_autocorrelation(\n 𝓂;\n autocorrelation_periods,\n parameters,\n algorithm,\n verbose\n)\n\n\nReturn the autocorrelations of endogenous variables using the first, pruned second, or pruned third order perturbation solution. \n\nArguments\n\n𝓂: the object created by @model and @parameters for which to get the solution.\n\nKeyword Arguments\n\nautocorrelation_periods [Default: 1:5]: periods for which to return the autocorrelation\nparameters [Default: nothing]: If nothing is provided, the solution is calculated for the parameters defined previously. Acceptable inputs are a vector of parameter values, a vector or tuple of pairs of the parameter Symbol or String and value. If the new parameter values differ from the previously defined the solution will be recalculated.\nalgorithm [Default: :first_order, Type: Symbol]: algorithm to solve for the dynamics of the model.\nverbose [Default: false, Type: Bool]: print information about how the NSSS is solved (symbolic or numeric), which solver is used (Levenberg-Marquardt...), and the maximum absolute error.\n\nExamples\n\nusing MacroModelling\n\n@model RBC begin\n 1 / c[0] = (β / c[1]) * (α * exp(z[1]) * k[0]^(α - 1) + (1 - δ))\n c[0] + k[0] = (1 - δ) * k[-1] + q[0]\n q[0] = exp(z[0]) * k[-1]^α\n z[0] = ρ * z[-1] + std_z * eps_z[x]\nend\n\n@parameters RBC begin\n std_z = 0.01\n ρ = 0.2\n δ = 0.02\n α = 0.5\n β = 0.95\nend\n\nget_autocorrelation(RBC)\n# output\n2-dimensional KeyedArray(NamedDimsArray(...)) with keys:\n↓ Variables ∈ 4-element Vector{Symbol}\n→ Autocorrelation_orders ∈ 5-element UnitRange{Int64}\nAnd data, 4×5 Matrix{Float64}:\n (1) (2) (3) (4) (5)\n (:c) 0.966974 0.927263 0.887643 0.849409 0.812761\n (:k) 0.971015 0.931937 0.892277 0.853876 0.817041\n (:q) 0.32237 0.181562 0.148347 0.136867 0.129944\n (:z) 0.2 0.04 0.008 0.0016 0.00032\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.get_calibrated_parameters-Tuple{MacroModelling.ℳ}","page":"API","title":"MacroModelling.get_calibrated_parameters","text":"get_calibrated_parameters(𝓂; values)\n\n\nReturns the parameters (and optionally the values) which are determined by a calibration equation. 
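As a small supplement, the values keyword listed among the keyword arguments below can be used to return the solved values alongside the parameter names. This is only a sketch reusing the RBC model defined in the docstring's own example further down:

```julia
# Return the calibrated parameter names together with their values
# (RBC refers to the model defined in the example below)
get_calibrated_parameters(RBC, values = true)
```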
\n\nArguments\n\n𝓂: the object created by @model and @parameters for which to get the solution.\n\nKeyword Arguments\n\nvalues [Default: false, Type: Bool]: return the values together with the parameter names\n\nExamples\n\nusing MacroModelling\n\n@model RBC begin\n 1 / c[0] = (β / c[1]) * (α * exp(z{TFP}[1]) * k[0]^(α - 1) + (1 - exp(z{δ}[1]) * δ))\n c[0] + k[0] = (1 - exp(z{δ}[0])δ) * k[-1] + q[0]\n q[0] = exp(z{TFP}[0]) * k[-1]^α\n for shock in [TFP, δ]\n z{shock}[0] = ρ{shock} * z{shock}[-1] + σ{shock} * (eps{shock}[x] + eps_news{shock}[x-1])\n end\n Δc_share[0] = log(c[0]/q[0]) - log(c[-1]/q[-1])\n Δk_4q[0] = log(k[0]) - log(k[-4])\nend\n\n@parameters RBC begin\n σ = 0.01\n ρ = 0.2\n capital_to_output = 1.5\n k[ss] / (4 * q[ss]) = capital_to_output | δ\n alpha = .5\n α = alpha\n β = 0.95\nend\n\nget_calibrated_parameters(RBC)\n# output\n1-element Vector{String}:\n \"δ\"\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.get_calibration_equation_parameters-Tuple{MacroModelling.ℳ}","page":"API","title":"MacroModelling.get_calibration_equation_parameters","text":"get_calibration_equation_parameters(𝓂)\n\n\nReturns the parameters used in calibration equations which are not used in the equations of the model (see capital_to_output in example).\n\nArguments\n\n𝓂: the object created by @model and @parameters for which to get the solution.\n\nExamples\n\nusing MacroModelling\n\n@model RBC begin\n 1 / c[0] = (β / c[1]) * (α * exp(z{TFP}[1]) * k[0]^(α - 1) + (1 - exp(z{δ}[1]) * δ))\n c[0] + k[0] = (1 - exp(z{δ}[0])δ) * k[-1] + q[0]\n q[0] = exp(z{TFP}[0]) * k[-1]^α\n for shock in [TFP, δ]\n z{shock}[0] = ρ{shock} * z{shock}[-1] + σ{shock} * (eps{shock}[x] + eps_news{shock}[x-1])\n end\n Δc_share[0] = log(c[0]/q[0]) - log(c[-1]/q[-1])\n Δk_4q[0] = log(k[0]) - log(k[-4])\nend\n\n@parameters RBC begin\n σ = 0.01\n ρ = 0.2\n capital_to_output = 1.5\n k[ss] / (4 * q[ss]) = capital_to_output | δ\n alpha = .5\n α = alpha\n β = 0.95\nend\n\nget_calibration_equation_parameters(RBC)\n# output\n1-element Vector{String}:\n \"capital_to_output\"\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.get_calibration_equations-Tuple{MacroModelling.ℳ}","page":"API","title":"MacroModelling.get_calibration_equations","text":"get_calibration_equations(𝓂)\n\n\nReturn the calibration equations declared in the @parameters block. Calibration equations are additional equations which are part of the non-stochastic steady state problem. The additional equation is matched with a calibated parameter which is part of the equations declared in the @model block and can be retrieved with: get_calibrated_parameters\n\nIn case programmatic model writing was used this function returns the parsed equations (see loop over shocks in example).\n\nNote that the ouput assumes the equations are equal to 0. 
As in, k / (q * 4) - capital_to_output implies k / (q * 4) - capital_to_output = 0 and therefore: k / (q * 4) = capital_to_output.\n\nArguments\n\n𝓂: the object created by @model and @parameters for which to get the solution.\n\nExamples\n\nusing MacroModelling\n\n@model RBC begin\n 1 / c[0] = (β / c[1]) * (α * exp(z{TFP}[1]) * k[0]^(α - 1) + (1 - exp(z{δ}[1]) * δ))\n c[0] + k[0] = (1 - exp(z{δ}[0])δ) * k[-1] + q[0]\n q[0] = exp(z{TFP}[0]) * k[-1]^α\n for shock in [TFP, δ]\n z{shock}[0] = ρ{shock} * z{shock}[-1] + σ{shock} * (eps{shock}[x] + eps_news{shock}[x-1])\n end\n Δc_share[0] = log(c[0]/q[0]) - log(c[-1]/q[-1])\n Δk_4q[0] = log(k[0]) - log(k[-4])\nend\n\n@parameters RBC begin\n σ = 0.01\n ρ = 0.2\n capital_to_output = 1.5\n k[ss] / (4 * q[ss]) = capital_to_output | δ\n alpha = .5\n α = alpha\n β = 0.95\nend\n\nget_calibration_equations(RBC)\n# output\n1-element Vector{String}:\n \"k / (q * 4) - capital_to_output\"\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.get_conditional_forecast-Tuple{MacroModelling.ℳ, Union{KeyedArray{Union{Nothing, Float64}}, KeyedArray{Float64}, SparseArrays.SparseMatrixCSC{Float64}, Matrix{Union{Nothing, Float64}}}}","page":"API","title":"MacroModelling.get_conditional_forecast","text":"get_conditional_forecast(\n 𝓂,\n conditions;\n shocks,\n initial_state,\n periods,\n parameters,\n variables,\n conditions_in_levels,\n algorithm,\n levels,\n verbose\n)\n\n\nReturn the conditional forecast given restrictions on endogenous variables and shocks (optional) in a 2-dimensional array. By default (see levels), the values represent absolute deviations from the relevant steady state (e.g. higher order perturbation algorithms are relative to the stochastic steady state). A constrained minimisation problem is solved to find the combinations of shocks with the smallest magnitude to match the conditions.\n\nArguments\n\n𝓂: the object created by @model and @parameters for which to get the solution.\nconditions [Type: Union{Matrix{Union{Nothing,Float64}}, SparseMatrixCSC{Float64}, KeyedArray{Union{Nothing,Float64}}, KeyedArray{Float64}}]: conditions for which to find the corresponding shocks. The input can have multiple formats, but for all types of entries the first dimension corresponds to the number of variables and the second dimension to the number of periods. The conditions can be specified using a matrix of type Matrix{Union{Nothing,Float64}}. In this case the conditions are matrix elements of type Float64 and all remaining (free) entries are nothing. You can also use a SparseMatrixCSC{Float64} as input. In this case only non-zero elements are taken as conditions. Note that you cannot condition variables to be zero using a SparseMatrixCSC{Float64} as input (use other input formats to do so). Another possibility to input conditions is by using a KeyedArray. You can use a KeyedArray{Union{Nothing,Float64}} where, similar to Matrix{Union{Nothing,Float64}}, all entries of type Float64 are recognised as conditions and all other entries have to be nothing. Furthermore, you can specify in the primary axis a subset of variables (of type Symbol or String) for which you specify conditions and all other variables are considered free. 
The same goes for the case when you use KeyedArray{Float64}} as input, whereas in this case the conditions for the specified variables bind for all periods specified in the KeyedArray, because there are no nothing entries permitted with this type.\n\nKeyword Arguments\n\nshocks [Default: nothing, Type: Union{Matrix{Union{Nothing,Float64}}, SparseMatrixCSC{Float64}, KeyedArray{Union{Nothing,Float64}}, KeyedArray{Float64}, Nothing} = nothing]: known values of shocks. This entry allows the user to include certain shock values. By entering restrictions on the shock sin this way the problem to match the conditions on endogenous variables is restricted to the remaining free shocks in the repective period. The input can have multiple formats, but for all types of entries the first dimension corresponds to the number of shocks and the second dimension to the number of periods. The shocks can be specified using a matrix of type Matrix{Union{Nothing,Float64}}. In this case the shocks are matrix elements of type Float64 and all remaining (free) entries are nothing. You can also use a SparseMatrixCSC{Float64} as input. In this case only non-zero elements are taken as certain shock values. Note that you cannot condition shocks to be zero using a SparseMatrixCSC{Float64} as input (use other input formats to do so). Another possibility to input known shocks is by using a KeyedArray. You can use a KeyedArray{Union{Nothing,Float64}} where, similar to Matrix{Union{Nothing,Float64}}, all entries of type Float64 are recognised as known shocks and all other entries have to be nothing. Furthermore, you can specify in the primary axis a subset of shocks (of type Symbol or String) for which you specify values and all other shocks are considered free. The same goes for the case when you use KeyedArray{Float64}} as input, whereas in this case the values for the specified shocks bind for all periods specified in the KeyedArray, because there are no nothing entries permitted with this type.\ninitial_state [Default: [0.0], Type: Union{Vector{Vector{Float64}},Vector{Float64}}]: The initial state defines the starting point for the model and is relevant for normal IRFs. In the case of pruned solution algorithms the initial state can be given as multiple state vectors (Vector{Vector{Float64}}). In this case the initial state must be given in devations from the non-stochastic steady state. In all other cases the initial state must be given in levels. If a pruned solution algorithm is selected and initial state is a Vector{Float64} then it impacts the first order initial state vector only. The state includes all variables as well as exogenous variables in leads or lags if present.\nperiods [Default: 40, Type: Int]: the total number of periods is the sum of the argument provided here and the maximum of periods of the shocks or conditions argument.\nparameters [Default: nothing]: If nothing is provided, the solution is calculated for the parameters defined previously. Acceptable inputs are a vector of parameter values, a vector or tuple of pairs of the parameter Symbol or String and value. If the new parameter values differ from the previously defined the solution will be recalculated.\nvariables [Default: :all_excluding_obc]: variables for which to show the results. Inputs can be a variable name passed on as either a Symbol or String (e.g. :y or \"y\"), or Tuple, Matrix or Vector of String or Symbol. Any variables not part of the model will trigger a warning. 
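As an illustration of the variables keyword described here, the output of the conditional forecast can be restricted to a subset of variables. This sketch reuses the RBC_CME model and the conditions and shocks objects constructed in the example further below; the particular selection of variables is only illustrative:

```julia
# Show only y, c and k in the conditional forecast output
get_conditional_forecast(RBC_CME, conditions,
                         shocks = shocks,
                         conditions_in_levels = false,
                         variables = [:y, :c, :k])
```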
:all_excluding_auxilliary_and_obc contains all shocks less those related to auxilliary variables and related to occasionally binding constraints (obc). :all_excluding_obc contains all shocks less those related to auxilliary variables. :all will contain all variables.\nconditions_in_levels [Default: true, Type: Bool]: indicator whether the conditions are provided in levels. If true the input to the conditions argument will have the non stochastic steady state substracted.\nalgorithm [Default: :first_order, Type: Symbol]: algorithm to solve for the dynamics of the model.\nlevels [Default: false, Type: Bool]: return levels or absolute deviations from steady state corresponding to the solution algorithm (e.g. stochastic steady state for higher order solution algorithms).\nverbose [Default: false, Type: Bool]: print information about how the NSSS is solved (symbolic or numeric), which solver is used (Levenberg-Marquardt...), and the maximum absolute error.\n\nExamples\n\nusing MacroModelling\nusing SparseArrays, AxisKeys\n\n@model RBC_CME begin\n y[0]=A[0]*k[-1]^alpha\n 1/c[0]=beta*1/c[1]*(alpha*A[1]*k[0]^(alpha-1)+(1-delta))\n 1/c[0]=beta*1/c[1]*(R[0]/Pi[+1])\n R[0] * beta =(Pi[0]/Pibar)^phi_pi\n A[0]*k[-1]^alpha=c[0]+k[0]-(1-delta*z_delta[0])*k[-1]\n z_delta[0] = 1 - rho_z_delta + rho_z_delta * z_delta[-1] + std_z_delta * delta_eps[x]\n A[0] = 1 - rhoz + rhoz * A[-1] + std_eps * eps_z[x]\nend\n\n\n@parameters RBC_CME begin\n alpha = .157\n beta = .999\n delta = .0226\n Pibar = 1.0008\n phi_pi = 1.5\n rhoz = .9\n std_eps = .0068\n rho_z_delta = .9\n std_z_delta = .005\nend\n\n# c is conditioned to deviate by 0.01 in period 1 and y is conditioned to deviate by 0.02 in period 3\nconditions = KeyedArray(Matrix{Union{Nothing,Float64}}(undef,2,2),Variables = [:c,:y], Periods = 1:2)\nconditions[1,1] = .01\nconditions[2,2] = .02\n\n# in period 2 second shock (eps_z) is conditioned to take a value of 0.05\nshocks = Matrix{Union{Nothing,Float64}}(undef,2,1)\nshocks[1,1] = .05\n\nget_conditional_forecast(RBC_CME, conditions, shocks = shocks, conditions_in_levels = false)\n# output\n2-dimensional KeyedArray(NamedDimsArray(...)) with keys:\n↓ Variables_and_shocks ∈ 9-element Vector{Symbol}\n→ Periods ∈ 42-element UnitRange{Int64}\nAnd data, 9×42 Matrix{Float64}:\n (1) (2) … (41) (42)\n (:A) 0.0313639 0.0134792 0.000221372 0.000199235\n (:Pi) 0.000780257 0.00020929 -0.000146071 -0.000140137\n (:R) 0.00117156 0.00031425 -0.000219325 -0.000210417\n (:c) 0.01 0.00600605 0.00213278 0.00203751\n (:k) 0.034584 0.0477482 … 0.0397631 0.0380482\n (:y) 0.0446375 0.02 0.00129544 0.001222\n (:z_delta) 0.00025 0.000225 3.69522e-6 3.3257e-6\n (:delta_eps) 0.05 0.0 0.0 0.0\n (:eps_z) 4.61234 -2.16887 0.0 0.0\n\n# The same can be achieved with the other input formats:\n# conditions = Matrix{Union{Nothing,Float64}}(undef,7,2)\n# conditions[4,1] = .01\n# conditions[6,2] = .02\n\n# using SparseArrays\n# conditions = spzeros(7,2)\n# conditions[4,1] = .01\n# conditions[6,2] = .02\n\n# shocks = KeyedArray(Matrix{Union{Nothing,Float64}}(undef,1,1),Variables = [:delta_eps], Periods = [1])\n# shocks[1,1] = .05\n\n# using SparseArrays\n# shocks = spzeros(2,1)\n# shocks[1,1] = .05\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.get_conditional_variance_decomposition-Tuple{MacroModelling.ℳ}","page":"API","title":"MacroModelling.get_conditional_variance_decomposition","text":"get_conditional_variance_decomposition(\n 𝓂;\n periods,\n parameters,\n verbose\n)\n\n\nReturn the conditional variance decomposition of 
endogenous variables with regards to the shocks using the linearised solution. \n\nArguments\n\n𝓂: the object created by @model and @parameters for which to get the solution.\n\nKeyword Arguments\n\nperiods [Default: [1:20...,Inf], Type: Union{Vector{Int},Vector{Float64},UnitRange{Int64}}]: vector of periods for which to calculate the conditional variance decomposition. If the vector conatins Inf, also the unconditional variance decomposition is calculated (same output as get_variance_decomposition).\nparameters [Default: nothing]: If nothing is provided, the solution is calculated for the parameters defined previously. Acceptable inputs are a vector of parameter values, a vector or tuple of pairs of the parameter Symbol or String and value. If the new parameter values differ from the previously defined the solution will be recalculated.\nverbose [Default: false, Type: Bool]: print information about how the NSSS is solved (symbolic or numeric), which solver is used (Levenberg-Marquardt...), and the maximum absolute error.\n\nExamples\n\nusing MacroModelling\n\n@model RBC_CME begin\n y[0]=A[0]*k[-1]^alpha\n 1/c[0]=beta*1/c[1]*(alpha*A[1]*k[0]^(alpha-1)+(1-delta))\n 1/c[0]=beta*1/c[1]*(R[0]/Pi[+1])\n R[0] * beta =(Pi[0]/Pibar)^phi_pi\n A[0]*k[-1]^alpha=c[0]+k[0]-(1-delta*z_delta[0])*k[-1]\n z_delta[0] = 1 - rho_z_delta + rho_z_delta * z_delta[-1] + std_z_delta * delta_eps[x]\n A[0] = 1 - rhoz + rhoz * A[-1] + std_eps * eps_z[x]\nend\n\n\n@parameters RBC_CME begin\n alpha = .157\n beta = .999\n delta = .0226\n Pibar = 1.0008\n phi_pi = 1.5\n rhoz = .9\n std_eps = .0068\n rho_z_delta = .9\n std_z_delta = .005\nend\n\nget_conditional_variance_decomposition(RBC_CME)\n# output\n3-dimensional KeyedArray(NamedDimsArray(...)) with keys:\n↓ Variables ∈ 7-element Vector{Symbol}\n→ Shocks ∈ 2-element Vector{Symbol}\n◪ Periods ∈ 21-element Vector{Float64}\nAnd data, 7×2×21 Array{Float64, 3}:\n[showing 3 of 21 slices]\n[:, :, 1] ~ (:, :, 1.0):\n (:delta_eps) (:eps_z)\n (:A) 0.0 1.0\n (:Pi) 0.00158668 0.998413\n (:R) 0.00158668 0.998413\n (:c) 0.0277348 0.972265\n (:k) 0.00869568 0.991304\n (:y) 0.0 1.0\n (:z_delta) 1.0 0.0\n\n[:, :, 11] ~ (:, :, 11.0):\n (:delta_eps) (:eps_z)\n (:A) 5.88653e-32 1.0\n (:Pi) 0.0245641 0.975436\n (:R) 0.0245641 0.975436\n (:c) 0.0175249 0.982475\n (:k) 0.00869568 0.991304\n (:y) 7.63511e-5 0.999924\n (:z_delta) 1.0 0.0\n\n[:, :, 21] ~ (:, :, Inf):\n (:delta_eps) (:eps_z)\n (:A) 9.6461e-31 1.0\n (:Pi) 0.0156771 0.984323\n (:R) 0.0156771 0.984323\n (:c) 0.0134672 0.986533\n (:k) 0.00869568 0.991304\n (:y) 0.000313462 0.999687\n (:z_delta) 1.0 0.0\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.get_corr","page":"API","title":"MacroModelling.get_corr","text":"See get_correlation\n\n\n\n\n\n","category":"function"},{"location":"api/#MacroModelling.get_correlation-Tuple{MacroModelling.ℳ}","page":"API","title":"MacroModelling.get_correlation","text":"get_correlation(𝓂; parameters, algorithm, verbose)\n\n\nReturn the correlations of endogenous variables using the first, pruned second, or pruned third order perturbation solution. \n\nArguments\n\n𝓂: the object created by @model and @parameters for which to get the solution.\n\nKeyword Arguments\n\nparameters [Default: nothing]: If nothing is provided, the solution is calculated for the parameters defined previously. Acceptable inputs are a vector of parameter values, a vector or tuple of pairs of the parameter Symbol or String and value. 
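The accepted formats for the parameters keyword are described in prose but not demonstrated, so here is a minimal sketch using the RBC model from the earlier examples; the new parameter values are purely illustrative:

```julia
# Tuple of Symbol => value pairs ...
get_correlation(RBC, parameters = (:α => 0.55, :β => 0.96))

# ... or a vector of String => value pairs
get_correlation(RBC, parameters = ["α" => 0.55, "β" => 0.96])
```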
If the new parameter values differ from the previously defined the solution will be recalculated.\nalgorithm [Default: :first_order, Type: Symbol]: algorithm to solve for the dynamics of the model.\nverbose [Default: false, Type: Bool]: print information about how the NSSS is solved (symbolic or numeric), which solver is used (Levenberg-Marquardt...), and the maximum absolute error.\n\nExamples\n\nusing MacroModelling\n\n@model RBC begin\n 1 / c[0] = (β / c[1]) * (α * exp(z[1]) * k[0]^(α - 1) + (1 - δ))\n c[0] + k[0] = (1 - δ) * k[-1] + q[0]\n q[0] = exp(z[0]) * k[-1]^α\n z[0] = ρ * z[-1] + std_z * eps_z[x]\nend\n\n@parameters RBC begin\n std_z = 0.01\n ρ = 0.2\n δ = 0.02\n α = 0.5\n β = 0.95\nend\n\nget_correlation(RBC)\n# output\n2-dimensional KeyedArray(NamedDimsArray(...)) with keys:\n↓ Variables ∈ 4-element Vector{Symbol}\n→ 𝑉𝑎𝑟𝑖𝑎𝑏𝑙𝑒𝑠 ∈ 4-element Vector{Symbol}\nAnd data, 4×4 Matrix{Float64}:\n (:c) (:k) (:q) (:z)\n (:c) 1.0 0.999812 0.550168 0.314562\n (:k) 0.999812 1.0 0.533879 0.296104\n (:q) 0.550168 0.533879 1.0 0.965726\n (:z) 0.314562 0.296104 0.965726 1.0\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.get_cov","page":"API","title":"MacroModelling.get_cov","text":"Wrapper for get_moments with covariance = true and non_stochastic_steady_state = false, variance = false, standard_deviation = false.\n\n\n\n\n\n","category":"function"},{"location":"api/#MacroModelling.get_covariance-Tuple","page":"API","title":"MacroModelling.get_covariance","text":"Wrapper for get_moments with covariance = true and non_stochastic_steady_state = false, variance = false, standard_deviation = false.\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.get_dynamic_auxilliary_variables-Tuple{MacroModelling.ℳ}","page":"API","title":"MacroModelling.get_dynamic_auxilliary_variables","text":"get_dynamic_auxilliary_variables(𝓂)\n\n\nReturns the auxilliary variables, without timing subscripts, part of the augmented system of equations describing the model dynamics. Augmented means that, in case of variables with leads or lags larger than 1, or exogenous shocks with leads or lags, the system is augemented by auxilliary variables containing variables or shocks in lead or lag. because the original equations included variables with leads or lags certain expression cannot be negative (e.g. 
given log(c/q) an auxilliary variable is created for c/q).\n\nSee get_dynamic_equations for more details on the auxilliary variables and equations.\n\nArguments\n\n𝓂: the object created by @model and @parameters for which to get the solution.\n\nExamples\n\nusing MacroModelling\n\n@model RBC begin\n 1 / c[0] = (β / c[1]) * (α * exp(z{TFP}[1]) * k[0]^(α - 1) + (1 - exp(z{δ}[1]) * δ))\n c[0] + k[0] = (1 - exp(z{δ}[0])δ) * k[-1] + q[0]\n q[0] = exp(z{TFP}[0]) * k[-1]^α\n for shock in [TFP, δ]\n z{shock}[0] = ρ{shock} * z{shock}[-1] + σ{shock} * (eps{shock}[x] + eps_news{shock}[x-1])\n end\n Δc_share[0] = log(c[0]/q[0]) - log(c[-1]/q[-1])\n Δk_4q[0] = log(k[0]) - log(k[-4])\nend\n\n@parameters RBC begin\n σ = 0.01\n ρ = 0.2\n capital_to_output = 1.5\n k[ss] / (4 * q[ss]) = capital_to_output | δ\n alpha = .5\n α = alpha\n β = 0.95\nend\n\nget_dynamic_auxilliary_variables(RBC)\n# output\n3-element Vector{String}:\n \"kᴸ⁽⁻²⁾\"\n \"kᴸ⁽⁻³⁾\"\n \"kᴸ⁽⁻¹⁾\"\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.get_dynamic_equations-Tuple{MacroModelling.ℳ}","page":"API","title":"MacroModelling.get_dynamic_equations","text":"get_dynamic_equations(𝓂)\n\n\nReturn the augmented system of equations describing the model dynamics. Augmented means that, in case of variables with leads or lags larger than 1, or exogenous shocks with leads or lags, the system is augemented by auxilliary equations containing variables in lead or lag. The augmented system features only variables which are in the present [0], future [1], or past [-1]. For example, Δk_4q[0] = log(k[0]) - log(k[-3]) contains k[-3]. By introducing 2 auxilliary variables (kᴸ⁽⁻¹⁾ and kᴸ⁽⁻²⁾ with ᴸ being the lead/lag operator) and augmenting the system (kᴸ⁽⁻²⁾[0] = kᴸ⁽⁻¹⁾[-1] and kᴸ⁽⁻¹⁾[0] = k[-1]) we can ensure that the timing is smaller than 1 in absolute terms: Δk_4q[0] - (log(k[0]) - log(kᴸ⁽⁻²⁾[-1])).\n\nIn case programmatic model writing was used this function returns the parsed equations (see loop over shocks in example).\n\nNote that the ouput assumes the equations are equal to 0. 
As in, kᴸ⁽⁻¹⁾[0] - k[-1] implies kᴸ⁽⁻¹⁾[0] - k[-1] = 0 and therefore: kᴸ⁽⁻¹⁾[0] = k[-1].\n\nArguments\n\n𝓂: the object created by @model and @parameters for which to get the solution.\n\nExamples\n\nusing MacroModelling\n\n@model RBC begin\n 1 / c[0] = (β / c[1]) * (α * exp(z{TFP}[1]) * k[0]^(α - 1) + (1 - exp(z{δ}[1]) * δ))\n c[0] + k[0] = (1 - exp(z{δ}[0])δ) * k[-1] + q[0]\n q[0] = exp(z{TFP}[0]) * k[-1]^α\n for shock in [TFP, δ]\n z{shock}[0] = ρ{shock} * z{shock}[-1] + σ{shock} * (eps{shock}[x] + eps_news{shock}[x-1])\n end\n Δc_share[0] = log(c[0]/q[0]) - log(c[-1]/q[-1])\n Δk_4q[0] = log(k[0]) - log(k[-4])\nend\n\n@parameters RBC begin\n σ = 0.01\n ρ = 0.2\n capital_to_output = 1.5\n k[ss] / (4 * q[ss]) = capital_to_output | δ\n alpha = .5\n α = alpha\n β = 0.95\nend\n\nget_dynamic_equations(RBC)\n# output\n12-element Vector{String}:\n \"1 / c[0] - (β / c[1]) * (α * ex\" ⋯ 25 bytes ⋯ \" - 1) + (1 - exp(z{δ}[1]) * δ))\"\n \"(c[0] + k[0]) - ((1 - exp(z{δ}[0]) * δ) * k[-1] + q[0])\"\n \"q[0] - exp(z{TFP}[0]) * k[-1] ^ α\"\n \"eps_news{TFP}[0] - eps_news{TFP}[x]\"\n \"z{TFP}[0] - (ρ{TFP} * z{TFP}[-1] + σ{TFP} * (eps{TFP}[x] + eps_news{TFP}[-1]))\"\n \"eps_news{δ}[0] - eps_news{δ}[x]\"\n \"z{δ}[0] - (ρ{δ} * z{δ}[-1] + σ{δ} * (eps{δ}[x] + eps_news{δ}[-1]))\"\n \"Δc_share[0] - (log(c[0] / q[0]) - log(c[-1] / q[-1]))\"\n \"kᴸ⁽⁻³⁾[0] - kᴸ⁽⁻²⁾[-1]\"\n \"kᴸ⁽⁻²⁾[0] - kᴸ⁽⁻¹⁾[-1]\"\n \"kᴸ⁽⁻¹⁾[0] - k[-1]\"\n \"Δk_4q[0] - (log(k[0]) - log(kᴸ⁽⁻³⁾[-1]))\"\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.get_equations-Tuple{MacroModelling.ℳ}","page":"API","title":"MacroModelling.get_equations","text":"get_equations(𝓂)\n\n\nReturn the equations of the model. In case programmatic model writing was used this function returns the parsed equations (see loop over shocks in example).\n\nArguments\n\n𝓂: the object created by @model and @parameters for which to get the solution.\n\nExamples\n\nusing MacroModelling\n\n@model RBC begin\n 1 / c[0] = (β / c[1]) * (α * exp(z{TFP}[1]) * k[0]^(α - 1) + (1 - exp(z{δ}[1]) * δ))\n c[0] + k[0] = (1 - exp(z{δ}[0])δ) * k[-1] + q[0]\n q[0] = exp(z{TFP}[0]) * k[-1]^α\n for shock in [TFP, δ]\n z{shock}[0] = ρ{shock} * z{shock}[-1] + σ{shock} * (eps{shock}[x] + eps_news{shock}[x-1])\n end\n Δc_share[0] = log(c[0]/q[0]) - log(c[-1]/q[-1])\n Δk_4q[0] = log(k[0]) - log(k[-4])\nend\n\n@parameters RBC begin\n σ = 0.01\n ρ = 0.2\n capital_to_output = 1.5\n k[ss] / (4 * q[ss]) = capital_to_output | δ\n alpha = .5\n α = alpha\n β = 0.95\nend\n\nget_equations(RBC)\n# output\n7-element Vector{String}:\n \"1 / c[0] = (β / c[1]) * (α * ex\" ⋯ 25 bytes ⋯ \" - 1) + (1 - exp(z{δ}[1]) * δ))\"\n \"c[0] + k[0] = (1 - exp(z{δ}[0]) * δ) * k[-1] + q[0]\"\n \"q[0] = exp(z{TFP}[0]) * k[-1] ^ α\"\n \"z{TFP}[0] = ρ{TFP} * z{TFP}[-1]\" ⋯ 18 bytes ⋯ \"TFP}[x] + eps_news{TFP}[x - 1])\"\n \"z{δ}[0] = ρ{δ} * z{δ}[-1] + σ{δ} * (eps{δ}[x] + eps_news{δ}[x - 1])\"\n \"Δc_share[0] = log(c[0] / q[0]) - log(c[-1] / q[-1])\"\n \"Δk_4q[0] = log(k[0]) - log(k[-4])\"\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.get_estimated_shocks-Tuple{MacroModelling.ℳ, KeyedArray{Float64}}","page":"API","title":"MacroModelling.get_estimated_shocks","text":"get_estimated_shocks(\n 𝓂,\n data;\n parameters,\n algorithm,\n filter,\n warmup_iterations,\n data_in_levels,\n smooth,\n verbose\n)\n\n\nReturn the estimated shocks based on the inversion filter (depending on the filter keyword argument), or Kalman filter or smoother (depending on the smooth keyword argument) using the provided data and 
(non-)linear solution of the model. Data is by default assumed to be in levels unless data_in_levels is set to false.\n\nArguments\n\n𝓂: the object created by @model and @parameters for which to get the solution.\ndata [Type: KeyedArray]: data matrix with variables (String or Symbol) in rows and time in columns\n\nKeyword Arguments\n\nparameters [Default: nothing]: If nothing is provided, the solution is calculated for the parameters defined previously. Acceptable inputs are a vector of parameter values, a vector or tuple of pairs of the parameter Symbol or String and value. If the new parameter values differ from the previously defined the solution will be recalculated.\nalgorithm [Default: :first_order, Type: Symbol]: algorithm to solve for the dynamics of the model.\nfilter [Default: :kalman, Type: Symbol]: filter used to compute the shocks given the data, model, and parameters. The Kalman filter only works for linear problems, whereas the inversion filter (:inversion) works for linear and nonlinear models. If a nonlinear solution algorithm is selected, the inversion filter is used.\nwarmup_iterations [Default: 0, Type: Int]: periods added before the first observation for which shocks are computed such that the first observation is matched. A larger value alleviates the problem that the initial value is the relevant steady state.\ndata_in_levels [Default: true, Type: Bool]: indicator whether the data is provided in levels. If true the input to the data argument will have the non stochastic steady state substracted.\nsmooth [Default: true, Type: Bool]: whether to return smoothed (true) or filtered (false) shocks. Only works for the Kalman filter. The inversion filter only returns filtered shocks.\nverbose [Default: false, Type: Bool]: print information about how the NSSS is solved (symbolic or numeric), which solver is used (Levenberg-Marquardt...), and the maximum absolute error.\n\nExamples\n\nusing MacroModelling\n\n@model RBC begin\n 1 / c[0] = (β / c[1]) * (α * exp(z[1]) * k[0]^(α - 1) + (1 - δ))\n c[0] + k[0] = (1 - δ) * k[-1] + q[0]\n q[0] = exp(z[0]) * k[-1]^α\n z[0] = ρ * z[-1] + std_z * eps_z[x]\nend\n\n@parameters RBC begin\n std_z = 0.01\n ρ = 0.2\n δ = 0.02\n α = 0.5\n β = 0.95\nend\n\nsimulation = simulate(RBC)\n\nget_estimated_shocks(RBC,simulation([:c],:,:simulate))\n# output\n2-dimensional KeyedArray(NamedDimsArray(...)) with keys:\n↓ Shocks ∈ 1-element Vector{Symbol}\n→ Periods ∈ 40-element UnitRange{Int64}\nAnd data, 1×40 Matrix{Float64}:\n (1) (2) (3) (4) … (37) (38) (39) (40)\n (:eps_z₍ₓ₎) 0.0603617 0.614652 -0.519048 0.711454 -0.873774 1.27918 -0.929701 -0.2255\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.get_estimated_variable_standard_deviations-Tuple{MacroModelling.ℳ, KeyedArray{Float64}}","page":"API","title":"MacroModelling.get_estimated_variable_standard_deviations","text":"get_estimated_variable_standard_deviations(\n 𝓂,\n data;\n parameters,\n data_in_levels,\n smooth,\n verbose\n)\n\n\nReturn the standard deviations of the Kalman smoother or filter (depending on the smooth keyword argument) estimates of the model variables based on the provided data and first order solution of the model. 
Data is by default assumed to be in levels unless data_in_levels is set to false.\n\nArguments\n\n𝓂: the object created by @model and @parameters for which to get the solution.\ndata [Type: KeyedArray]: data matrix with variables (String or Symbol) in rows and time in columns\n\nKeyword Arguments\n\nparameters [Default: nothing]: If nothing is provided, the solution is calculated for the parameters defined previously. Acceptable inputs are a vector of parameter values, a vector or tuple of pairs of the parameter Symbol or String and value. If the new parameter values differ from the previously defined the solution will be recalculated.\ndata_in_levels [Default: true, Type: Bool]: indicator whether the data is provided in levels. If true the input to the data argument will have the non stochastic steady state substracted.\nsmooth [Default: true, Type: Bool]: whether to return smoothed (true) or filtered (false) shocks. Only works for the Kalman filter. The inversion filter only returns filtered shocks.\nverbose [Default: false, Type: Bool]: print information about how the NSSS is solved (symbolic or numeric), which solver is used (Levenberg-Marquardt...), and the maximum absolute error.\n\nExamples\n\nusing MacroModelling\n\n@model RBC begin\n 1 / c[0] = (β / c[1]) * (α * exp(z[1]) * k[0]^(α - 1) + (1 - δ))\n c[0] + k[0] = (1 - δ) * k[-1] + q[0]\n q[0] = exp(z[0]) * k[-1]^α\n z[0] = ρ * z[-1] + std_z * eps_z[x]\nend\n\n@parameters RBC begin\n std_z = 0.01\n ρ = 0.2\n δ = 0.02\n α = 0.5\n β = 0.95\nend\n\nsimulation = simulate(RBC)\n\nget_estimated_variable_standard_deviations(RBC,simulation([:c],:,:simulate))\n# output\n2-dimensional KeyedArray(NamedDimsArray(...)) with keys:\n↓ Standard_deviations ∈ 4-element Vector{Symbol}\n→ Periods ∈ 40-element UnitRange{Int64}\nAnd data, 4×40 Matrix{Float64}:\n (1) (2) (3) (4) … (38) (39) (40)\n (:c) 1.23202e-9 1.84069e-10 8.23181e-11 8.23181e-11 8.23181e-11 8.23181e-11 0.0\n (:k) 0.00509299 0.000382934 2.87922e-5 2.16484e-6 1.6131e-9 9.31323e-10 1.47255e-9\n (:q) 0.0612887 0.0046082 0.000346483 2.60515e-5 1.31709e-9 1.31709e-9 9.31323e-10\n (:z) 0.00961766 0.000723136 5.43714e-5 4.0881e-6 3.08006e-10 3.29272e-10 2.32831e-10\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.get_estimated_variables-Tuple{MacroModelling.ℳ, KeyedArray{Float64}}","page":"API","title":"MacroModelling.get_estimated_variables","text":"get_estimated_variables(\n 𝓂,\n data;\n parameters,\n algorithm,\n filter,\n warmup_iterations,\n data_in_levels,\n levels,\n smooth,\n verbose\n)\n\n\nReturn the estimated variables (in levels by default, see levels keyword argument) based on the inversion filter (depending on the filter keyword argument), or Kalman filter or smoother (depending on the smooth keyword argument) using the provided data and (non-)linear solution of the model. Data is by default assumed to be in levels unless data_in_levels is set to false.\n\nArguments\n\n𝓂: the object created by @model and @parameters for which to get the solution.\ndata [Type: KeyedArray]: data matrix with variables (String or Symbol) in rows and time in columns\n\nKeyword Arguments\n\nparameters [Default: nothing]: If nothing is provided, the solution is calculated for the parameters defined previously. Acceptable inputs are a vector of parameter values, a vector or tuple of pairs of the parameter Symbol or String and value. 
If the new parameter values differ from the previously defined the solution will be recalculated.\nalgorithm [Default: :first_order, Type: Symbol]: algorithm to solve for the dynamics of the model.\nfilter [Default: :kalman, Type: Symbol]: filter used to compute the shocks given the data, model, and parameters. The Kalman filter only works for linear problems, whereas the inversion filter (:inversion) works for linear and nonlinear models. If a nonlinear solution algorithm is selected, the inversion filter is used.\nwarmup_iterations [Default: 0, Type: Int]: periods added before the first observation for which shocks are computed such that the first observation is matched. A larger value alleviates the problem that the initial value is the relevant steady state.\ndata_in_levels [Default: true, Type: Bool]: indicator whether the data is provided in levels. If true the input to the data argument will have the non stochastic steady state substracted.\nlevels [Default: false, Type: Bool]: return levels or absolute deviations from steady state corresponding to the solution algorithm (e.g. stochastic steady state for higher order solution algorithms).\nsmooth [Default: true, Type: Bool]: whether to return smoothed (true) or filtered (false) shocks. Only works for the Kalman filter. The inversion filter only returns filtered shocks.\nverbose [Default: false, Type: Bool]: print information about how the NSSS is solved (symbolic or numeric), which solver is used (Levenberg-Marquardt...), and the maximum absolute error.\n\nExamples\n\nusing MacroModelling\n\n@model RBC begin\n 1 / c[0] = (β / c[1]) * (α * exp(z[1]) * k[0]^(α - 1) + (1 - δ))\n c[0] + k[0] = (1 - δ) * k[-1] + q[0]\n q[0] = exp(z[0]) * k[-1]^α\n z[0] = ρ * z[-1] + std_z * eps_z[x]\nend\n\n@parameters RBC begin\n std_z = 0.01\n ρ = 0.2\n δ = 0.02\n α = 0.5\n β = 0.95\nend\n\nsimulation = simulate(RBC)\n\nget_estimated_variables(RBC,simulation([:c],:,:simulate))\n# output\n2-dimensional KeyedArray(NamedDimsArray(...)) with keys:\n↓ Variables ∈ 4-element Vector{Symbol}\n→ Periods ∈ 40-element UnitRange{Int64}\nAnd data, 4×40 Matrix{Float64}:\n (1) (2) (3) (4) … (37) (38) (39) (40)\n (:c) 5.92901 5.92797 5.92847 5.92048 5.95845 5.95697 5.95686 5.96173\n (:k) 47.3185 47.3087 47.3125 47.2392 47.6034 47.5969 47.5954 47.6402\n (:q) 6.87159 6.86452 6.87844 6.79352 7.00476 6.9026 6.90727 6.95841\n (:z) -0.00109471 -0.00208056 4.43613e-5 -0.0123318 0.0162992 0.000445065 0.00119089 0.00863586\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.get_fevd","page":"API","title":"MacroModelling.get_fevd","text":"See get_conditional_variance_decomposition\n\n\n\n\n\n","category":"function"},{"location":"api/#MacroModelling.get_first_order_solution-Tuple","page":"API","title":"MacroModelling.get_first_order_solution","text":"Wrapper for get_solution with algorithm = :first_order.\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.get_forecast_error_variance_decomposition","page":"API","title":"MacroModelling.get_forecast_error_variance_decomposition","text":"See get_conditional_variance_decomposition\n\n\n\n\n\n","category":"function"},{"location":"api/#MacroModelling.get_girf-Tuple","page":"API","title":"MacroModelling.get_girf","text":"Wrapper for get_irf with shocks = :simulate.\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.get_irf-Tuple{MacroModelling.ℳ, Vector}","page":"API","title":"MacroModelling.get_irf","text":"get_irf(\n 𝓂,\n parameters;\n periods,\n variables,\n shocks,\n negative_shock,\n 
initial_state,\n levels,\n verbose\n)\n\n\nReturn impulse response functions (IRFs) of the model in a 3-dimensional array. Function to use when differentiating IRFs with repect to parameters.\n\nArguments\n\n𝓂: the object created by @model and @parameters for which to get the solution.\nparameters [Type: Vector]: Parameter values in alphabetical order (sorted by parameter name).\n\nKeyword Arguments\n\nperiods [Default: 40, Type: Int]: number of periods for which to calculate the IRFs. In case a matrix of shocks was provided, periods defines how many periods after the series of shocks the simulation continues.\nvariables [Default: :all_excluding_obc]: variables for which to show the results. Inputs can be a variable name passed on as either a Symbol or String (e.g. :y or \"y\"), or Tuple, Matrix or Vector of String or Symbol. Any variables not part of the model will trigger a warning. :all_excluding_auxilliary_and_obc contains all shocks less those related to auxilliary variables and related to occasionally binding constraints (obc). :all_excluding_obc contains all shocks less those related to auxilliary variables. :all will contain all variables.\nshocks [Default: :all_excluding_obc]: shocks for which to calculate the IRFs. Inputs can be a shock name passed on as either a Symbol or String (e.g. :y, or \"y\"), or Tuple, Matrix or Vector of String or Symbol. :simulate triggers random draws of all shocks (excluding occasionally binding constraints (obc) related shocks). :all_excluding_obc will contain all shocks but not the obc related ones.:all will contain also the obc related shocks. A series of shocks can be passed on using either a Matrix{Float64}, or a KeyedArray{Float64} as input with shocks (Symbol or String) in rows and periods in columns. The period of the simulation will correspond to the length of the input in the period dimension + the number of periods defined in periods. If the series of shocks is input as a KeyedArray{Float64} make sure to name the rows with valid shock names of type Symbol. Any shocks not part of the model will trigger a warning. :none in combination with an initial_state can be used for deterministic simulations.\nnegative_shock [Default: false, Type: Bool]: calculate a negative shock. Relevant for generalised IRFs.\ninitial_state [Default: [0.0], Type: Vector{Float64}]: provide state (in levels, not deviations) from which to start IRFs. Relevant for normal IRFs. The state includes all variables as well as exogenous variables in leads or lags if present.\nlevels [Default: false, Type: Bool]: return levels or absolute deviations from steady state corresponding to the solution algorithm (e.g. 
stochastic steady state for higher order solution algorithms).\nverbose [Default: false, Type: Bool]: print information about how the NSSS is solved (symbolic or numeric), which solver is used (Levenberg-Marquardt...), and the maximum absolute error.\n\nExamples\n\nusing MacroModelling\n\n@model RBC begin\n 1 / c[0] = (β / c[1]) * (α * exp(z[1]) * k[0]^(α - 1) + (1 - δ))\n c[0] + k[0] = (1 - δ) * k[-1] + q[0]\n q[0] = exp(z[0]) * k[-1]^α\n z[0] = ρ * z[-1] + std_z * eps_z[x]\nend\n\n@parameters RBC begin\n std_z = 0.01\n ρ = 0.2\n δ = 0.02\n α = 0.5\n β = 0.95\nend\n\nget_irf(RBC, RBC.parameter_values)\n# output\n4×40×1 Array{Float64, 3}:\n[:, :, 1] =\n 0.00674687 0.00729773 0.00715114 0.00687615 … 0.00146962 0.00140619\n 0.0620937 0.0718322 0.0712153 0.0686381 0.0146789 0.0140453\n 0.0688406 0.0182781 0.00797091 0.0057232 0.00111425 0.00106615\n 0.01 0.002 0.0004 8.0e-5 2.74878e-29 5.49756e-30\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.get_irf-Tuple{MacroModelling.ℳ}","page":"API","title":"MacroModelling.get_irf","text":"get_irf(\n 𝓂;\n periods,\n algorithm,\n parameters,\n variables,\n shocks,\n negative_shock,\n generalised_irf,\n initial_state,\n levels,\n ignore_obc,\n verbose\n)\n\n\nReturn impulse response functions (IRFs) of the model in a 3-dimensional KeyedArray. By default (see levels), the values represent absolute deviations from the relevant steady state (e.g. higher order perturbation algorithms are relative to the stochastic steady state).\n\nArguments\n\n𝓂: the object created by @model and @parameters for which to get the solution.\n\nKeyword Arguments\n\nperiods [Default: 40, Type: Int]: number of periods for which to calculate the IRFs. In case a matrix of shocks was provided, periods defines how many periods after the series of shocks the simulation continues.\nalgorithm [Default: :first_order, Type: Symbol]: algorithm to solve for the dynamics of the model.\nparameters [Default: nothing]: If nothing is provided, the solution is calculated for the parameters defined previously. Acceptable inputs are a vector of parameter values, a vector or tuple of pairs of the parameter Symbol or String and value. If the new parameter values differ from the previously defined the solution will be recalculated.\nvariables [Default: :all_excluding_obc]: variables for which to show the results. Inputs can be a variable name passed on as either a Symbol or String (e.g. :y or \"y\"), or Tuple, Matrix or Vector of String or Symbol. Any variables not part of the model will trigger a warning. :all_excluding_auxilliary_and_obc contains all shocks less those related to auxilliary variables and related to occasionally binding constraints (obc). :all_excluding_obc contains all shocks less those related to auxilliary variables. :all will contain all variables.\nshocks [Default: :all_excluding_obc]: shocks for which to calculate the IRFs. Inputs can be a shock name passed on as either a Symbol or String (e.g. :y, or \"y\"), or Tuple, Matrix or Vector of String or Symbol. :simulate triggers random draws of all shocks (excluding occasionally binding constraints (obc) related shocks). :all_excluding_obc will contain all shocks but not the obc related ones.:all will contain also the obc related shocks. A series of shocks can be passed on using either a Matrix{Float64}, or a KeyedArray{Float64} as input with shocks (Symbol or String) in rows and periods in columns. 
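To make the shock-series input format more concrete, here is a minimal sketch for the RBC model used in the surrounding examples; the axis names and shock values are illustrative assumptions, while the layout (shocks in rows, periods in columns, row keys given as valid shock names of type Symbol) follows the description above:

```julia
using AxisKeys

# 1×4 shock series for eps_z: zero everywhere except a unit shock in period 2
shock_series = KeyedArray(zeros(1, 4), Shocks = [:eps_z], Periods = 1:4)
shock_series[1, 2] = 1.0

get_irf(RBC, shocks = shock_series)
```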
The period of the simulation will correspond to the length of the input in the period dimension + the number of periods defined in periods. If the series of shocks is input as a KeyedArray{Float64} make sure to name the rows with valid shock names of type Symbol. Any shocks not part of the model will trigger a warning. :none in combination with an initial_state can be used for deterministic simulations.\nnegative_shock [Default: false, Type: Bool]: calculate a negative shock. Relevant for generalised IRFs.\ngeneralised_irf [Default: false, Type: Bool]: calculate generalised IRFs. Relevant for nonlinear solutions. Reference steady state for deviations is the stochastic steady state.\ninitial_state [Default: [0.0], Type: Union{Vector{Vector{Float64}},Vector{Float64}}]: The initial state defines the starting point for the model and is relevant for normal IRFs. In the case of pruned solution algorithms the initial state can be given as multiple state vectors (Vector{Vector{Float64}}). In this case the initial state must be given in devations from the non-stochastic steady state. In all other cases the initial state must be given in levels. If a pruned solution algorithm is selected and initial state is a Vector{Float64} then it impacts the first order initial state vector only. The state includes all variables as well as exogenous variables in leads or lags if present.\nlevels [Default: false, Type: Bool]: return levels or absolute deviations from steady state corresponding to the solution algorithm (e.g. stochastic steady state for higher order solution algorithms).\nignore_obc [Default: false, Type: Bool]: solve the model ignoring the occasionally binding constraints.\nverbose [Default: false, Type: Bool]: print information about how the NSSS is solved (symbolic or numeric), which solver is used (Levenberg-Marquardt...), and the maximum absolute error.\n\nExamples\n\nusing MacroModelling\n\n@model RBC begin\n 1 / c[0] = (β / c[1]) * (α * exp(z[1]) * k[0]^(α - 1) + (1 - δ))\n c[0] + k[0] = (1 - δ) * k[-1] + q[0]\n q[0] = exp(z[0]) * k[-1]^α\n z[0] = ρ * z[-1] + std_z * eps_z[x]\nend\n\n@parameters RBC begin\n std_z = 0.01\n ρ = 0.2\n δ = 0.02\n α = 0.5\n β = 0.95\nend\n\nget_irf(RBC)\n# output\n3-dimensional KeyedArray(NamedDimsArray(...)) with keys:\n↓ Variables ∈ 4-element Vector{Symbol}\n→ Periods ∈ 40-element UnitRange{Int64}\n◪ Shocks ∈ 1-element Vector{Symbol}\nAnd data, 4×40×1 Array{Float64, 3}:\n[:, :, 1] ~ (:, :, :eps_z):\n (1) (2) … (39) (40)\n (:c) 0.00674687 0.00729773 0.00146962 0.00140619\n (:k) 0.0620937 0.0718322 0.0146789 0.0140453\n (:q) 0.0688406 0.0182781 0.00111425 0.00106615\n (:z) 0.01 0.002 2.74878e-29 5.49756e-30\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.get_irfs","page":"API","title":"MacroModelling.get_irfs","text":"See get_irf\n\n\n\n\n\n","category":"function"},{"location":"api/#MacroModelling.get_jump_variables-Tuple{MacroModelling.ℳ}","page":"API","title":"MacroModelling.get_jump_variables","text":"get_jump_variables(𝓂)\n\n\nReturns the jump variables of the model. 
Jump variables occur in the future and not in the past or occur in all three: past, present, and future.\n\nIn case programmatic model writing was used this function returns the parsed variables (see z in example).\n\nArguments\n\n𝓂: the object created by @model and @parameters for which to get the solution.\n\nExamples\n\nusing MacroModelling\n\n@model RBC begin\n 1 / c[0] = (β / c[1]) * (α * exp(z{TFP}[1]) * k[0]^(α - 1) + (1 - exp(z{δ}[1]) * δ))\n c[0] + k[0] = (1 - exp(z{δ}[0])δ) * k[-1] + q[0]\n q[0] = exp(z{TFP}[0]) * k[-1]^α\n for shock in [TFP, δ]\n z{shock}[0] = ρ{shock} * z{shock}[-1] + σ{shock} * (eps{shock}[x] + eps_news{shock}[x-1])\n end\n Δc_share[0] = log(c[0]/q[0]) - log(c[-1]/q[-1])\n Δk_4q[0] = log(k[0]) - log(k[-4])\nend\n\n@parameters RBC begin\n σ = 0.01\n ρ = 0.2\n capital_to_output = 1.5\n k[ss] / (4 * q[ss]) = capital_to_output | δ\n alpha = .5\n α = alpha\n β = 0.95\nend\n\nget_jump_variables(RBC)\n# output\n3-element Vector{String}:\n \"c\"\n \"z{TFP}\"\n \"z{δ}\"\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.get_loglikelihood-Union{Tuple{S}, Tuple{MacroModelling.ℳ, KeyedArray{Float64}, Vector{S}}} where S","page":"API","title":"MacroModelling.get_loglikelihood","text":"get_loglikelihood(\n 𝓂,\n data,\n parameter_values;\n algorithm,\n filter,\n warmup_iterations,\n tol,\n verbose\n)\n\n\nReturn the loglikelihood of the model given the data and parameters provided. The loglikelihood is either calculated based on the inversion filter or the Kalman filter (depending on the filter keyword argument). In case of a nonlinear solution algorithm the inversion filter will be used. The data must be provided as a KeyedArray{Float64} with the names of the variables to be matched in rows and the periods in columns.\n\nThis function is differentiable (so far for the Kalman filter only) and can be used in gradient based sampling or optimisation.\n\nArguments\n\n𝓂: the object created by @model and @parameters for which to get the solution.\ndata [Type: KeyedArray]: data matrix with variables (String or Symbol) in rows and time in columns\nparameter_values [Type: Vector]: Parameter values.\n\nKeyword Arguments\n\nalgorithm [Default: :first_order, Type: Symbol]: algorithm to solve for the dynamics of the model.\nfilter [Default: :kalman, Type: Symbol]: filter used to compute the shocks given the data, model, and parameters. The Kalman filter only works for linear problems, whereas the inversion filter (:inversion) works for linear and nonlinear models. If a nonlinear solution algorithm is selected, the inversion filter is used.\nwarmup_iterations [Default: 0, Type: Int]: periods added before the first observation for which shocks are computed such that the first observation is matched. 
A larger value alleviates the problem that the initial value is the relevant steady state.\nverbose [Default: false, Type: Bool]: print information about how the NSSS is solved (symbolic or numeric), which solver is used (Levenberg-Marquardt...), and the maximum absolute error.\n\nExamples\n\nusing MacroModelling\n\n@model RBC begin\n 1 / c[0] = (β / c[1]) * (α * exp(z[1]) * k[0]^(α - 1) + (1 - δ))\n c[0] + k[0] = (1 - δ) * k[-1] + q[0]\n q[0] = exp(z[0]) * k[-1]^α\n z[0] = ρ * z[-1] + std_z * eps_z[x]\nend\n\n@parameters RBC begin\n std_z = 0.01\n ρ = 0.2\n δ = 0.02\n α = 0.5\n β = 0.95\nend\n\nsimulated_data = simulate(RBC)\n\nget_loglikelihood(RBC, simulated_data([:k], :, :simulate), RBC.parameter_values)\n# output\n58.24780188977981\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.get_mean-Tuple","page":"API","title":"MacroModelling.get_mean","text":"Wrapper for get_moments with mean = true, and non_stochastic_steady_state = false, variance = false, standard_deviation = false, covariance = false\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.get_moments-Tuple{MacroModelling.ℳ}","page":"API","title":"MacroModelling.get_moments","text":"get_moments(\n 𝓂;\n parameters,\n non_stochastic_steady_state,\n mean,\n standard_deviation,\n variance,\n covariance,\n variables,\n derivatives,\n parameter_derivatives,\n algorithm,\n dependencies_tol,\n verbose,\n silent\n)\n\n\nReturn the first and second moments of endogenous variables using the first, pruned second, or pruned third order perturbation solution. By default returns: non stochastic steady state (SS), and standard deviations, but can optionally return variances, and covariance matrix.\n\nArguments\n\n𝓂: the object created by @model and @parameters for which to get the solution.\n\nKeyword Arguments\n\nparameters [Default: nothing]: If nothing is provided, the solution is calculated for the parameters defined previously. Acceptable inputs are a vector of parameter values, a vector or tuple of pairs of the parameter Symbol or String and value. If the new parameter values differ from the previously defined the solution will be recalculated.\nnon_stochastic_steady_state [Default: true, Type: Bool]: switch to return SS of endogenous variables\nmean [Default: false, Type: Bool]: switch to return mean of endogenous variables (the mean for the linearised solutoin is the NSSS)\nstandard_deviation [Default: true, Type: Bool]: switch to return standard deviation of endogenous variables\nvariance [Default: false, Type: Bool]: switch to return variance of endogenous variables\ncovariance [Default: false, Type: Bool]: switch to return covariance matrix of endogenous variables\nvariables [Default: :all_excluding_obc]: variables for which to show the results. Inputs can be a variable name passed on as either a Symbol or String (e.g. :y or \"y\"), or Tuple, Matrix or Vector of String or Symbol. Any variables not part of the model will trigger a warning. :all_excluding_auxilliary_and_obc contains all shocks less those related to auxilliary variables and related to occasionally binding constraints (obc). :all_excluding_obc contains all shocks less those related to auxilliary variables. :all will contain all variables.\nderivatives [Default: true, Type: Bool]: calculate derivatives with respect to the parameters.\nparameter_derivatives [Default: :all]: parameters for which to calculate partial derivatives. Inputs can be a parameter name passed on as either a Symbol or String (e.g. 
:alpha, or \"alpha\"), or Tuple, Matrix or Vector of String or Symbol. :all will include all parameters.\nalgorithm [Default: :first_order, Type: Symbol]: algorithm to solve for the dynamics of the model.\ndependencies_tol [Default: 1e-12, Type: AbstractFloat]: tolerance for the effect of a variable on the variable of interest when isolating part of the system for calculating covariance related statistics\nverbose [Default: false, Type: Bool]: print information about how the NSSS is solved (symbolic or numeric), which solver is used (Levenberg-Marquardt...), and the maximum absolute error.\n\nExamples\n\nusing MacroModelling\n\n@model RBC begin\n 1 / c[0] = (β / c[1]) * (α * exp(z[1]) * k[0]^(α - 1) + (1 - δ))\n c[0] + k[0] = (1 - δ) * k[-1] + q[0]\n q[0] = exp(z[0]) * k[-1]^α\n z[0] = ρ * z[-1] + std_z * eps_z[x]\nend\n\n@parameters RBC begin\n std_z = 0.01\n ρ = 0.2\n δ = 0.02\n α = 0.5\n β = 0.95\nend\n\nmoments = get_moments(RBC);\n\nmoments[1]\n# output\n2-dimensional KeyedArray(NamedDimsArray(...)) with keys:\n↓ Variables ∈ 4-element Vector{Symbol}\n→ Steady_state_and_∂steady_state∂parameter ∈ 6-element Vector{Symbol}\nAnd data, 4×6 Matrix{Float64}:\n (:Steady_state) (:std_z) (:ρ) (:δ) (:α) (:β)\n (:c) 5.93625 0.0 0.0 -116.072 55.786 76.1014\n (:k) 47.3903 0.0 0.0 -1304.95 555.264 1445.93\n (:q) 6.88406 0.0 0.0 -94.7805 66.8912 105.02\n (:z) 0.0 0.0 0.0 0.0 0.0 0.0\n\nmoments[2]\n# output\n2-dimensional KeyedArray(NamedDimsArray(...)) with keys:\n↓ Variables ∈ 4-element Vector{Symbol}\n→ Standard_deviation_and_∂standard_deviation∂parameter ∈ 6-element Vector{Symbol}\nAnd data, 4×6 Matrix{Float64}:\n (:Standard_deviation) (:std_z) … (:δ) (:α) (:β)\n (:c) 0.0266642 2.66642 -0.384359 0.2626 0.144789\n (:k) 0.264677 26.4677 -5.74194 2.99332 6.30323\n (:q) 0.0739325 7.39325 -0.974722 0.726551 1.08\n (:z) 0.0102062 1.02062 0.0 0.0 0.0\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.get_non_stochastic_steady_state-Tuple","page":"API","title":"MacroModelling.get_non_stochastic_steady_state","text":"Wrapper for get_steady_state with stochastic = false.\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.get_non_stochastic_steady_state_residuals-Tuple{MacroModelling.ℳ, Union{Dict{String, Float64}, Dict{Symbol, Float64}, KeyedArray{Float64, 1}, Vector{Float64}}}","page":"API","title":"MacroModelling.get_non_stochastic_steady_state_residuals","text":"get_non_stochastic_steady_state_residuals(\n 𝓂,\n values;\n parameters\n)\n\n\nCalculate the residuals of the non-stochastic steady state equations of the model for a given set of values. Values not provided, will be filled with the non-stochastic steady state values corresponding to the current parameters.\n\nArguments\n\n𝓂: the object created by @model and @parameters for which to get the solution.\nvalues [Type: Union{Vector{Float64}, Dict{Symbol, Float64}, Dict{String, Float64}, KeyedArray{Float64, 1}}]: A Vector, Dict, or KeyedArray containing the values of the variables and calibrated parameters in the non-stochastic steady state equations (including calibration equations). \n\nKeyword Arguments\n\nparameters [Default: nothing]: If nothing is provided, the solution is calculated for the parameters defined previously. Acceptable inputs are a vector of parameter values, a vector or tuple of pairs of the parameter Symbol or String and value. 
If the new parameter values differ from the previously defined the solution will be recalculated.\n\nReturns\n\nA KeyedArray containing the absolute values of the residuals of the non-stochastic steady state equations.\n\nExamples\n\nusing MacroModelling\n\n@model RBC begin\n 1 / c[0] = (β / c[1]) * (α * exp(z[1]) * k[0]^(α - 1) + (1 - δ))\n c[0] + k[0] = (1 - δ) * k[-1] + q[0]\n q[0] = exp(z[0]) * k[-1]^α\n z[0] = ρ * z[-1] + std_z * eps_z[x]\nend\n\n@parameters RBC begin\n std_z = 0.01\n ρ = 0.2\n δ = 0.02\n k[ss] / q[ss] = 2.5 | α\n β = 0.95\nend\n\nsteady_state = SS(RBC, derivatives = false)\n\nget_non_stochastic_steady_state_residuals(RBC, steady_state)\n# output\n1-dimensional KeyedArray(NamedDimsArray(...)) with keys:\n↓ Equation ∈ 5-element Vector{Symbol}\nAnd data, 5-element Vector{Float64}:\n (:Equation₁) 0.0\n (:Equation₂) 0.0\n (:Equation₃) 0.0\n (:Equation₄) 0.0\n (:CalibrationEquation₁) 0.0\n\nget_non_stochastic_steady_state_residuals(RBC, [1.1641597, 3.0635781, 1.2254312, 0.0, 0.18157895])\n# output\n1-dimensional KeyedArray(NamedDimsArray(...)) with keys:\n↓ Equation ∈ 5-element Vector{Symbol}\nAnd data, 5-element Vector{Float64}:\n (:Equation₁) 2.7360991250446887e-10\n (:Equation₂) 6.199999980083248e-8\n (:Equation₃) 2.7897102183871425e-8\n (:Equation₄) 0.0\n (:CalibrationEquation₁) 8.160392850342646e-8\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.get_nonnegativity_auxilliary_variables-Tuple{MacroModelling.ℳ}","page":"API","title":"MacroModelling.get_nonnegativity_auxilliary_variables","text":"get_nonnegativity_auxilliary_variables(𝓂)\n\n\nReturns the auxilliary variables, without timing subscripts, added to the non-stochastic steady state problem because certain expressions cannot be negative (e.g. given log(c/q) an auxilliary variable is created for c/q).\n\nSee get_steady_state_equations for more details on the auxilliary variables and equations.\n\nArguments\n\n𝓂: the object created by @model and @parameters for which to get the solution.\n\nExamples\n\nusing MacroModelling\n\n@model RBC begin\n 1 / c[0] = (β / c[1]) * (α * exp(z{TFP}[1]) * k[0]^(α - 1) + (1 - exp(z{δ}[1]) * δ))\n c[0] + k[0] = (1 - exp(z{δ}[0])δ) * k[-1] + q[0]\n q[0] = exp(z{TFP}[0]) * k[-1]^α\n for shock in [TFP, δ]\n z{shock}[0] = ρ{shock} * z{shock}[-1] + σ{shock} * (eps{shock}[x] + eps_news{shock}[x-1])\n end\n Δc_share[0] = log(c[0]/q[0]) - log(c[-1]/q[-1])\n Δk_4q[0] = log(k[0]) - log(k[-4])\nend\n\n@parameters RBC begin\n σ = 0.01\n ρ = 0.2\n capital_to_output = 1.5\n k[ss] / (4 * q[ss]) = capital_to_output | δ\n alpha = .5\n α = alpha\n β = 0.95\nend\n\nget_nonnegativity_auxilliary_variables(RBC)\n# output\n2-element Vector{String}:\n \"➕₁\"\n \"➕₂\"\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.get_parameters-Tuple{MacroModelling.ℳ}","page":"API","title":"MacroModelling.get_parameters","text":"get_parameters(𝓂; values)\n\n\nReturns the parameters (and optionally the values) which have an impact on the model dynamics but do not depend on other parameters and are not determined by calibration equations. 
\n\nIn case programmatic model writing was used this function returns the parsed parameters (see σ in example).\n\nArguments\n\n𝓂: the object created by @model and @parameters for which to get the solution.\n\nKeyword Arguments\n\nvalues [Default: false, Type: Bool]: return the values together with the parameter names\n\nExamples\n\nusing MacroModelling\n\n@model RBC begin\n 1 / c[0] = (β / c[1]) * (α * exp(z{TFP}[1]) * k[0]^(α - 1) + (1 - exp(z{δ}[1]) * δ))\n c[0] + k[0] = (1 - exp(z{δ}[0])δ) * k[-1] + q[0]\n q[0] = exp(z{TFP}[0]) * k[-1]^α\n for shock in [TFP, δ]\n z{shock}[0] = ρ{shock} * z{shock}[-1] + σ{shock} * (eps{shock}[x] + eps_news{shock}[x-1])\n end\n Δc_share[0] = log(c[0]/q[0]) - log(c[-1]/q[-1])\n Δk_4q[0] = log(k[0]) - log(k[-4])\nend\n\n@parameters RBC begin\n σ = 0.01\n ρ = 0.2\n capital_to_output = 1.5\n k[ss] / (4 * q[ss]) = capital_to_output | δ\n alpha = .5\n α = alpha\n β = 0.95\nend\n\nget_parameters(RBC)\n# output\n7-element Vector{String}:\n \"σ{TFP}\"\n \"σ{δ}\"\n \"ρ{TFP}\"\n \"ρ{δ}\"\n \"capital_to_output\"\n \"alpha\"\n \"β\"\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.get_parameters_defined_by_parameters-Tuple{MacroModelling.ℳ}","page":"API","title":"MacroModelling.get_parameters_defined_by_parameters","text":"get_parameters_defined_by_parameters(𝓂)\n\n\nReturns the parameters which are defined by other parameters which are not necessarily used in the equations of the model (see α in example).\n\nArguments\n\n𝓂: the object created by @model and @parameters for which to get the solution.\n\nExamples\n\nusing MacroModelling\n\n@model RBC begin\n 1 / c[0] = (β / c[1]) * (α * exp(z{TFP}[1]) * k[0]^(α - 1) + (1 - exp(z{δ}[1]) * δ))\n c[0] + k[0] = (1 - exp(z{δ}[0])δ) * k[-1] + q[0]\n q[0] = exp(z{TFP}[0]) * k[-1]^α\n for shock in [TFP, δ]\n z{shock}[0] = ρ{shock} * z{shock}[-1] + σ{shock} * (eps{shock}[x] + eps_news{shock}[x-1])\n end\n Δc_share[0] = log(c[0]/q[0]) - log(c[-1]/q[-1])\n Δk_4q[0] = log(k[0]) - log(k[-4])\nend\n\n@parameters RBC begin\n σ = 0.01\n ρ = 0.2\n capital_to_output = 1.5\n k[ss] / (4 * q[ss]) = capital_to_output | δ\n alpha = .5\n α = alpha\n β = 0.95\nend\n\nget_parameters_defined_by_parameters(RBC)\n# output\n1-element Vector{String}:\n \"α\"\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.get_parameters_defining_parameters-Tuple{MacroModelling.ℳ}","page":"API","title":"MacroModelling.get_parameters_defining_parameters","text":"get_parameters_defining_parameters(𝓂)\n\n\nReturns the parameters which define other parameters in the @parameters block which are not necessarily used in the equations of the model (see alpha in example).\n\nArguments\n\n𝓂: the object created by @model and @parameters for which to get the solution.\n\nExamples\n\nusing MacroModelling\n\n@model RBC begin\n 1 / c[0] = (β / c[1]) * (α * exp(z{TFP}[1]) * k[0]^(α - 1) + (1 - exp(z{δ}[1]) * δ))\n c[0] + k[0] = (1 - exp(z{δ}[0])δ) * k[-1] + q[0]\n q[0] = exp(z{TFP}[0]) * k[-1]^α\n for shock in [TFP, δ]\n z{shock}[0] = ρ{shock} * z{shock}[-1] + σ{shock} * (eps{shock}[x] + eps_news{shock}[x-1])\n end\n Δc_share[0] = log(c[0]/q[0]) - log(c[-1]/q[-1])\n Δk_4q[0] = log(k[0]) - log(k[-4])\nend\n\n@parameters RBC begin\n σ = 0.01\n ρ = 0.2\n capital_to_output = 1.5\n k[ss] / (4 * q[ss]) = capital_to_output | δ\n alpha = .5\n α = alpha\n β = 0.95\nend\n\nget_parameters_defining_parameters(RBC)\n# output\n1-element Vector{String}:\n 
\"alpha\"\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.get_parameters_in_equations-Tuple{MacroModelling.ℳ}","page":"API","title":"MacroModelling.get_parameters_in_equations","text":"get_parameters_in_equations(𝓂)\n\n\nReturns the parameters contained in the model equations. Note that these parameters might be determined by other parameters or calibration equations defined in the @parameters block.\n\nIn case programmatic model writing was used this function returns the parsed parameters (see σ in example).\n\nArguments\n\n𝓂: the object created by @model and @parameters for which to get the solution.\n\nExamples\n\nusing MacroModelling\n\n@model RBC begin\n 1 / c[0] = (β / c[1]) * (α * exp(z{TFP}[1]) * k[0]^(α - 1) + (1 - exp(z{δ}[1]) * δ))\n c[0] + k[0] = (1 - exp(z{δ}[0])δ) * k[-1] + q[0]\n q[0] = exp(z{TFP}[0]) * k[-1]^α\n for shock in [TFP, δ]\n z{shock}[0] = ρ{shock} * z{shock}[-1] + σ{shock} * (eps{shock}[x] + eps_news{shock}[x-1])\n end\n Δc_share[0] = log(c[0]/q[0]) - log(c[-1]/q[-1])\n Δk_4q[0] = log(k[0]) - log(k[-4])\nend\n\n@parameters RBC begin\n σ = 0.01\n ρ = 0.2\n capital_to_output = 1.5\n k[ss] / (4 * q[ss]) = capital_to_output | δ\n alpha = .5\n α = alpha\n β = 0.95\nend\n\nget_parameters_in_equations(RBC)\n# output\n7-element Vector{String}:\n \"α\"\n \"β\"\n \"δ\"\n \"ρ{TFP}\"\n \"ρ{δ}\"\n \"σ{TFP}\"\n \"σ{δ}\"\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.get_perturbation_solution-Tuple","page":"API","title":"MacroModelling.get_perturbation_solution","text":"See get_solution\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.get_residuals","page":"API","title":"MacroModelling.get_residuals","text":"See get_non_stochastic_steady_state_residuals\n\n\n\n\n\n","category":"function"},{"location":"api/#MacroModelling.get_second_order_solution-Tuple","page":"API","title":"MacroModelling.get_second_order_solution","text":"Wrapper for get_solution with algorithm = :second_order.\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.get_shock_decomposition-Tuple{MacroModelling.ℳ, KeyedArray{Float64}}","page":"API","title":"MacroModelling.get_shock_decomposition","text":"get_shock_decomposition(\n 𝓂,\n data;\n parameters,\n data_in_levels,\n smooth,\n verbose\n)\n\n\nReturn the shock decomposition in absolute deviations from the non stochastic steady state based on the Kalman smoother or filter (depending on the smooth keyword argument) using the provided data and first order solution of the model. Data is by default assumed to be in levels unless data_in_levels is set to false.\n\nArguments\n\n𝓂: the object created by @model and @parameters for which to get the solution.\ndata [Type: KeyedArray]: data matrix with variables (String or Symbol) in rows and time in columns\n\nKeyword Arguments\n\nparameters [Default: nothing]: If nothing is provided, the solution is calculated for the parameters defined previously. Acceptable inputs are a vector of parameter values, a vector or tuple of pairs of the parameter Symbol or String and value. If the new parameter values differ from the previously defined the solution will be recalculated.\ndata_in_levels [Default: true, Type: Bool]: indicator whether the data is provided in levels. If true the input to the data argument will have the non stochastic steady state substracted.\nsmooth [Default: true, Type: Bool]: whether to return smoothed (true) or filtered (false) shocks. Only works for the Kalman filter. 
The inversion filter only returns filtered shocks.\nverbose [Default: false, Type: Bool]: print information about how the NSSS is solved (symbolic or numeric), which solver is used (Levenberg-Marquardt...), and the maximum absolute error.\n\nExamples\n\nusing MacroModelling\n\n@model RBC begin\n 1 / c[0] = (β / c[1]) * (α * exp(z[1]) * k[0]^(α - 1) + (1 - δ))\n c[0] + k[0] = (1 - δ) * k[-1] + q[0]\n q[0] = exp(z[0]) * k[-1]^α\n z[0] = ρ * z[-1] + std_z * eps_z[x]\nend\n\n@parameters RBC begin\n std_z = 0.01\n ρ = 0.2\n δ = 0.02\n α = 0.5\n β = 0.95\nend\n\nsimulation = simulate(RBC)\n\nget_shock_decomposition(RBC,simulation([:c],:,:simulate))\n# output\n3-dimensional KeyedArray(NamedDimsArray(...)) with keys:\n↓ Variables ∈ 4-element Vector{Symbol}\n→ Shocks ∈ 2-element Vector{Symbol}\n◪ Periods ∈ 40-element UnitRange{Int64}\nAnd data, 4×2×40 Array{Float64, 3}:\n[showing 3 of 40 slices]\n[:, :, 1] ~ (:, :, 1):\n (:eps_z₍ₓ₎) (:Initial_values)\n (:c) 0.000407252 -0.00104779\n (:k) 0.00374808 -0.0104645\n (:q) 0.00415533 -0.000807161\n (:z) 0.000603617 -1.99957e-6\n\n[:, :, 21] ~ (:, :, 21):\n (:eps_z₍ₓ₎) (:Initial_values)\n (:c) 0.026511 -0.000433619\n (:k) 0.25684 -0.00433108\n (:q) 0.115858 -0.000328764\n (:z) 0.0150266 0.0\n\n[:, :, 40] ~ (:, :, 40):\n (:eps_z₍ₓ₎) (:Initial_values)\n (:c) 0.0437976 -0.000187505\n (:k) 0.4394 -0.00187284\n (:q) 0.00985518 -0.000142164\n (:z) -0.00366442 8.67362e-19\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.get_shocks-Tuple{MacroModelling.ℳ}","page":"API","title":"MacroModelling.get_shocks","text":"get_shocks(𝓂)\n\n\nReturns the exogenous shocks.\n\nIn case programmatic model writing was used this function returns the parsed variables (see eps in example).\n\nArguments\n\n𝓂: the object created by @model and @parameters for which to get the solution.\n\nExamples\n\nusing MacroModelling\n\n@model RBC begin\n 1 / c[0] = (β / c[1]) * (α * exp(z{TFP}[1]) * k[0]^(α - 1) + (1 - exp(z{δ}[1]) * δ))\n c[0] + k[0] = (1 - exp(z{δ}[0])δ) * k[-1] + q[0]\n q[0] = exp(z{TFP}[0]) * k[-1]^α\n for shock in [TFP, δ]\n z{shock}[0] = ρ{shock} * z{shock}[-1] + σ{shock} * (eps{shock}[x] + eps_news{shock}[x-1])\n end\n Δc_share[0] = log(c[0]/q[0]) - log(c[-1]/q[-1])\n Δk_4q[0] = log(k[0]) - log(k[-4])\nend\n\n@parameters RBC begin\n σ = 0.01\n ρ = 0.2\n capital_to_output = 1.5\n k[ss] / (4 * q[ss]) = capital_to_output | δ\n alpha = .5\n α = alpha\n β = 0.95\nend\n\nget_shocks(RBC)\n# output\n4-element Vector{String}:\n \"eps_news{TFP}\"\n \"eps_news{δ}\"\n \"eps{TFP}\"\n \"eps{δ}\"\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.get_simulation-Tuple","page":"API","title":"MacroModelling.get_simulation","text":"Wrapper for get_irf with shocks = :simulate. Function returns values in levels by default.\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.get_simulations-Tuple","page":"API","title":"MacroModelling.get_simulations","text":"Wrapper for get_irf with shocks = :simulate. Function returns values in levels by default.\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.get_solution-Tuple{MacroModelling.ℳ}","page":"API","title":"MacroModelling.get_solution","text":"get_solution(𝓂; parameters, algorithm, verbose)\n\n\nReturn the solution of the model. In the linear case it returns the linearised solution and the non stochastic steady state (NSSS) of the model. 
In the nonlinear case (higher order perturbation) the function returns a multidimensional array with the endogenous variables as the second dimension and the state variables, shocks, and perturbation parameter (:Volatility) in the case of higher order solutions as the other dimensions.\n\nThe values of the output represent the NSSS in the case of a linear solution and below it the effect that deviations from the NSSS of the respective past states, shocks, and perturbation parameter have (perturbation parameter = 1) on the present value (NSSS deviation) of the model variables.\n\nArguments\n\n𝓂: the object created by @model and @parameters for which to get the solution.\n\nKeyword Arguments\n\nparameters [Default: nothing]: If nothing is provided, the solution is calculated for the parameters defined previously. Acceptable inputs are a vector of parameter values, a vector or tuple of pairs of the parameter Symbol or String and value. If the new parameter values differ from the previously defined the solution will be recalculated.\nalgorithm [Default: :first_order, Type: Symbol]: algorithm to solve for the dynamics of the model.\nverbose [Default: false, Type: Bool]: print information about how the NSSS is solved (symbolic or numeric), which solver is used (Levenberg-Marquardt...), and the maximum absolute error.\n\nThe returned KeyedArray shows as columns the endogenous variables inlcuding the auxilliary endogenous and exogenous variables (due to leads and lags > 1). The rows and other dimensions (depending on the chosen perturbation order) include the NSSS for the linear case only, followed by the states, and exogenous shocks. Subscripts following variable names indicate the timing (e.g. variable₍₋₁₎ indicates the variable being in the past). Superscripts indicate leads or lags (e.g. variableᴸ⁽²⁾ indicates the variable being in lead by two periods). If no super- or subscripts follow the variable name, the variable is in the present.\n\nExamples\n\nusing MacroModelling\n\n@model RBC begin\n 1 / c[0] = (β / c[1]) * (α * exp(z[1]) * k[0]^(α - 1) + (1 - δ))\n c[0] + k[0] = (1 - δ) * k[-1] + q[0]\n q[0] = exp(z[0]) * k[-1]^α\n z[0] = ρ * z[-1] + std_z * eps_z[x]\nend\n\n@parameters RBC begin\n std_z = 0.01\n ρ = 0.2\n δ = 0.02\n α = 0.5\n β = 0.95\nend\n\nget_solution(RBC)\n# output\n2-dimensional KeyedArray(NamedDimsArray(...)) with keys:\n↓ Steady_state__States__Shocks ∈ 4-element Vector{Symbol}\n→ Variables ∈ 4-element Vector{Symbol}\nAnd data, 4×4 adjoint(::Matrix{Float64}) with eltype Float64:\n (:c) (:k) (:q) (:z)\n (:Steady_state) 5.93625 47.3903 6.88406 0.0\n (:k₍₋₁₎) 0.0957964 0.956835 0.0726316 -0.0\n (:z₍₋₁₎) 0.134937 1.24187 1.37681 0.2\n (:eps_z₍ₓ₎) 0.00674687 0.0620937 0.0688406 0.01\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.get_ss","page":"API","title":"MacroModelling.get_ss","text":"See get_steady_state\n\n\n\n\n\n","category":"function"},{"location":"api/#MacroModelling.get_standard_deviation-Tuple","page":"API","title":"MacroModelling.get_standard_deviation","text":"Wrapper for get_moments with standard_deviation = true and non_stochastic_steady_state = false, variance = false, covariance = false.\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.get_state_variables-Tuple{MacroModelling.ℳ}","page":"API","title":"MacroModelling.get_state_variables","text":"get_state_variables(𝓂)\n\n\nReturns the state variables of the model. 
State variables occur in the past and not in the future or occur in all three: past, present, and future.\n\nIn case programmatic model writing was used this function returns the parsed variables (see z in example).\n\nArguments\n\n𝓂: the object created by @model and @parameters for which to get the solution.\n\nExamples\n\nusing MacroModelling\n\n@model RBC begin\n 1 / c[0] = (β / c[1]) * (α * exp(z{TFP}[1]) * k[0]^(α - 1) + (1 - exp(z{δ}[1]) * δ))\n c[0] + k[0] = (1 - exp(z{δ}[0])δ) * k[-1] + q[0]\n q[0] = exp(z{TFP}[0]) * k[-1]^α\n for shock in [TFP, δ]\n z{shock}[0] = ρ{shock} * z{shock}[-1] + σ{shock} * (eps{shock}[x] + eps_news{shock}[x-1])\n end\n Δc_share[0] = log(c[0]/q[0]) - log(c[-1]/q[-1])\n Δk_4q[0] = log(k[0]) - log(k[-4])\nend\n\n@parameters RBC begin\n σ = 0.01\n ρ = 0.2\n capital_to_output = 1.5\n k[ss] / (4 * q[ss]) = capital_to_output | δ\n alpha = .5\n α = alpha\n β = 0.95\nend\n\nget_state_variables(RBC)\n# output\n10-element Vector{String}:\n \"c\"\n \"eps_news{TFP}\"\n \"eps_news{δ}\"\n \"k\"\n \"kᴸ⁽⁻²⁾\"\n \"kᴸ⁽⁻³⁾\"\n \"kᴸ⁽⁻¹⁾\"\n \"q\"\n \"z{TFP}\"\n \"z{δ}\"\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.get_statistics-Union{Tuple{T}, Tuple{U}, Tuple{Any, Vector{T}}} where {U, T}","page":"API","title":"MacroModelling.get_statistics","text":"get_statistics(\n 𝓂,\n parameter_values;\n parameters,\n non_stochastic_steady_state,\n mean,\n standard_deviation,\n variance,\n covariance,\n autocorrelation,\n autocorrelation_periods,\n algorithm,\n verbose\n)\n\n\nReturn the first and second moments of endogenous variables using either the linearised solution or the pruned second or third order perturbation solution. By default returns: non stochastic steady state (SS), and standard deviations, but can also return variances, and covariance matrix. 
Function to use when differentiating model moments with respect to parameters.\n\nArguments\n\n𝓂: the object created by @model and @parameters for which to get the solution.\nparameter_values [Type: Vector]: Parameter values.\n\nKeyword Arguments\n\nparameters [Type: Vector{Symbol}]: Corresponding names of the parameter values.\nnon_stochastic_steady_state [Default: Symbol[], Type: Vector{Symbol}]: if values are provided the function returns the SS of endogenous variables\nmean [Default: Symbol[], Type: Vector{Symbol}]: if values are provided the function returns the mean of endogenous variables (the mean for the linearised solution is the NSSS)\nstandard_deviation [Default: Symbol[], Type: Vector{Symbol}]: if values are provided the function returns the standard deviation of the mentioned variables\nvariance [Default: Symbol[], Type: Vector{Symbol}]: if values are provided the function returns the variance of the mentioned variables\ncovariance [Default: Symbol[], Type: Vector{Symbol}]: if values are provided the function returns the covariance of the mentioned variables\nautocorrelation [Default: Symbol[], Type: Vector{Symbol}]: if values are provided the function returns the autocorrelation of the mentioned variables\nautocorrelation_periods [Default: 1:5]: periods for which to return the autocorrelation of the mentioned variables\nalgorithm [Default: :first_order, Type: Symbol]: algorithm to solve for the dynamics of the model.\nverbose [Default: false, Type: Bool]: print information about how the NSSS is solved (symbolic or numeric), which solver is used (Levenberg-Marquardt...), and the maximum absolute error.\n\nExamples\n\nusing MacroModelling\n\n@model RBC begin\n 1 / c[0] = (β / c[1]) * (α * exp(z[1]) * k[0]^(α - 1) + (1 - δ))\n c[0] + k[0] = (1 - δ) * k[-1] + q[0]\n q[0] = exp(z[0]) * k[-1]^α\n z[0] = ρ * z[-1] + std_z * eps_z[x]\nend\n\n@parameters RBC begin\n std_z = 0.01\n ρ = 0.2\n δ = 0.02\n α = 0.5\n β = 0.95\nend\n\nget_statistics(RBC, RBC.parameter_values, parameters = RBC.parameters, standard_deviation = RBC.var)\n# output\n1-element Vector{Any}:\n [0.02666420378525503, 0.26467737291221793, 0.07393254045396483, 0.010206207261596574]\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.get_std","page":"API","title":"MacroModelling.get_std","text":"Wrapper for get_moments with standard_deviation = true and non_stochastic_steady_state = false, variance = false, covariance = false.\n\n\n\n\n\n","category":"function"},{"location":"api/#MacroModelling.get_stdev","page":"API","title":"MacroModelling.get_stdev","text":"Wrapper for get_moments with standard_deviation = true and non_stochastic_steady_state = false, variance = false, covariance = false.\n\n\n\n\n\n","category":"function"},{"location":"api/#MacroModelling.get_steady_state-Tuple{MacroModelling.ℳ}","page":"API","title":"MacroModelling.get_steady_state","text":"get_steady_state(\n 𝓂;\n parameters,\n derivatives,\n stochastic,\n algorithm,\n parameter_derivatives,\n return_variables_only,\n verbose,\n silent,\n tol\n)\n\n\nReturn the (non stochastic) steady state, calibrated parameters, and derivatives with respect to model parameters.\n\nArguments\n\n𝓂: the object created by @model and @parameters for which to get the solution.\n\nKeyword Arguments\n\nparameters [Default: nothing]: If nothing is provided, the solution is calculated for the parameters defined previously. Acceptable inputs are a vector of parameter values, a vector or tuple of pairs of the parameter Symbol or String and value. 
If the new parameter values differ from the previously defined the solution will be recalculated.\nderivatives [Default: true, Type: Bool]: calculate derivatives with respect to the parameters.\nstochastic [Default: false, Type: Bool]: return stochastic steady state using second order perturbation\nalgorithm [Default: :first_order, Type: Symbol]: algorithm to solve for the dynamics of the model.\nparameter_derivatives [Default: :all]: parameters for which to calculate partial derivatives. Inputs can be a parameter name passed on as either a Symbol or String (e.g. :alpha, or \"alpha\"), or Tuple, Matrix or Vector of String or Symbol. :all will include all parameters.\nreturn_variables_only [Default: false, Type: Bool]: return only variables and not calibrated parameters\nverbose [Default: false, Type: Bool]: print information about how the NSSS is solved (symbolic or numeric), which solver is used (Levenberg-Marquardt...), and the maximum absolute error.\n\nThe columns show the (non stochastic) steady state and parameters for which derivatives are taken. The rows show the variables and calibrated parameters.\n\nExamples\n\nusing MacroModelling\n\n@model RBC begin\n 1 / c[0] = (β / c[1]) * (α * exp(z[1]) * k[0]^(α - 1) + (1 - δ))\n c[0] + k[0] = (1 - δ) * k[-1] + q[0]\n q[0] = exp(z[0]) * k[-1]^α\n z[0] = ρ * z[-1] + std_z * eps_z[x]\nend\n\n@parameters RBC begin\n std_z = 0.01\n ρ = 0.2\n δ = 0.02\n α = 0.5\n β = 0.95\nend\n\nget_steady_state(RBC)\n# output\n2-dimensional KeyedArray(NamedDimsArray(...)) with keys:\n↓ Variables_and_calibrated_parameters ∈ 4-element Vector{Symbol}\n→ Steady_state_and_∂steady_state∂parameter ∈ 6-element Vector{Symbol}\nAnd data, 4×6 Matrix{Float64}:\n (:Steady_state) (:std_z) (:ρ) (:δ) (:α) (:β)\n (:c) 5.93625 0.0 0.0 -116.072 55.786 76.1014\n (:k) 47.3903 0.0 0.0 -1304.95 555.264 1445.93\n (:q) 6.88406 0.0 0.0 -94.7805 66.8912 105.02\n (:z) 0.0 0.0 0.0 0.0 0.0 0.0\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.get_steady_state_equations-Tuple{MacroModelling.ℳ}","page":"API","title":"MacroModelling.get_steady_state_equations","text":"get_steady_state_equations(𝓂)\n\n\nReturn the non-stochastic steady state (NSSS) equations of the model. The difference to the equations as they were written in the @model block is that exogenous shocks are set to 0, time subscripts are eliminated (e.g. c[-1] becomes c), trivial simplifications are carried out (e.g. log(k) - log(k) = 0), and auxilliary variables are added for expressions that cannot become negative. \n\nAuxilliary variables facilitate the solution of the NSSS problem. The package substitutes expressions which cannot become negative with auxilliary variables and adds another equation to the system of equations determining the NSSS. For example, the argument of log(c/q) cannot be negative, so c/q is substituted by an auxilliary variable ➕₁ and an additional equation is added: ➕₁ = c / q.\n\nNote that the output assumes the equations are equal to 0. 
As in, -z{δ} * ρ{δ} + z{δ} implies -z{δ} * ρ{δ} + z{δ} = 0 and therefore: z{δ} * ρ{δ} = z{δ}.\n\nArguments\n\n𝓂: the object created by @model and @parameters for which to get the solution.\n\nExamples\n\nusing MacroModelling\n\n@model RBC begin\n 1 / c[0] = (β / c[1]) * (α * exp(z{TFP}[1]) * k[0]^(α - 1) + (1 - exp(z{δ}[1]) * δ))\n c[0] + k[0] = (1 - exp(z{δ}[0])δ) * k[-1] + q[0]\n q[0] = exp(z{TFP}[0]) * k[-1]^α\n for shock in [TFP, δ]\n z{shock}[0] = ρ{shock} * z{shock}[-1] + σ{shock} * (eps{shock}[x] + eps_news{shock}[x-1])\n end\n Δc_share[0] = log(c[0]/q[0]) - log(c[-1]/q[-1])\n Δk_4q[0] = log(k[0]) - log(k[-4])\nend\n\n@parameters RBC begin\n σ = 0.01\n ρ = 0.2\n capital_to_output = 1.5\n k[ss] / (4 * q[ss]) = capital_to_output | δ\n alpha = .5\n α = alpha\n β = 0.95\nend\n\nget_steady_state_equations(RBC)\n# output\n9-element Vector{String}:\n \"(-β * ((k ^ (α - 1) * α * exp(z{TFP}) - δ * exp(z{δ})) + 1)) / c + 1 / c\"\n \"((c - k * (-δ * exp(z{δ}) + 1)) + k) - q\"\n \"-(k ^ α) * exp(z{TFP}) + q\"\n \"-z{TFP} * ρ{TFP} + z{TFP}\"\n \"-z{δ} * ρ{δ} + z{δ}\"\n \"➕₁ - c / q\"\n \"➕₂ - c / q\"\n \"(Δc_share - log(➕₁)) + log(➕₂)\"\n \"Δk_4q - 0\"\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.get_stochastic_steady_state-Tuple","page":"API","title":"MacroModelling.get_stochastic_steady_state","text":"Wrapper for get_steady_state with stochastic = true.\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.get_third_order_solution-Tuple","page":"API","title":"MacroModelling.get_third_order_solution","text":"Wrapper for get_solution with algorithm = :third_order.\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.get_var","page":"API","title":"MacroModelling.get_var","text":"Wrapper for get_moments with variance = true and non_stochastic_steady_state = false, standard_deviation = false, covariance = false.\n\n\n\n\n\n","category":"function"},{"location":"api/#MacroModelling.get_var_decomp","page":"API","title":"MacroModelling.get_var_decomp","text":"See get_variance_decomposition\n\n\n\n\n\n","category":"function"},{"location":"api/#MacroModelling.get_variables-Tuple{MacroModelling.ℳ}","page":"API","title":"MacroModelling.get_variables","text":"get_variables(𝓂)\n\n\nReturns the variables of the model without timing subscripts and not including auxilliary variables.\n\nIn case programmatic model writing was used this function returns the parsed variables (see z in example).\n\nArguments\n\n𝓂: the object created by @model and @parameters for which to get the solution.\n\nExamples\n\nusing MacroModelling\n\n@model RBC begin\n 1 / c[0] = (β / c[1]) * (α * exp(z{TFP}[1]) * k[0]^(α - 1) + (1 - exp(z{δ}[1]) * δ))\n c[0] + k[0] = (1 - exp(z{δ}[0])δ) * k[-1] + q[0]\n q[0] = exp(z{TFP}[0]) * k[-1]^α\n for shock in [TFP, δ]\n z{shock}[0] = ρ{shock} * z{shock}[-1] + σ{shock} * (eps{shock}[x] + eps_news{shock}[x-1])\n end\n Δc_share[0] = log(c[0]/q[0]) - log(c[-1]/q[-1])\n Δk_4q[0] = log(k[0]) - log(k[-4])\nend\n\n@parameters RBC begin\n σ = 0.01\n ρ = 0.2\n capital_to_output = 1.5\n k[ss] / (4 * q[ss]) = capital_to_output | δ\n alpha = .5\n α = alpha\n β = 0.95\nend\n\nget_variables(RBC)\n# output\n7-element Vector{String}:\n \"c\"\n \"k\"\n \"q\"\n \"z{TFP}\"\n \"z{δ}\"\n \"Δc_share\"\n \"Δk_4q\"\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.get_variance-Tuple","page":"API","title":"MacroModelling.get_variance","text":"Wrapper for get_moments with variance = true and non_stochastic_steady_state = false, standard_deviation = false, 
covariance = false.\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.get_variance_decomposition-Tuple{MacroModelling.ℳ}","page":"API","title":"MacroModelling.get_variance_decomposition","text":"get_variance_decomposition(𝓂; parameters, verbose)\n\n\nReturn the variance decomposition of endogenous variables with regards to the shocks using the linearised solution. \n\nArguments\n\n𝓂: the object created by @model and @parameters for which to get the solution.\n\nKeyword Arguments\n\nparameters [Default: nothing]: If nothing is provided, the solution is calculated for the parameters defined previously. Acceptable inputs are a vector of parameter values, a vector or tuple of pairs of the parameter Symbol or String and value. If the new parameter values differ from the previously defined the solution will be recalculated.\nverbose [Default: false, Type: Bool]: print information about how the NSSS is solved (symbolic or numeric), which solver is used (Levenberg-Marquardt...), and the maximum absolute error.\n\nExamples\n\nusing MacroModelling\n\n@model RBC_CME begin\n y[0]=A[0]*k[-1]^alpha\n 1/c[0]=beta*1/c[1]*(alpha*A[1]*k[0]^(alpha-1)+(1-delta))\n 1/c[0]=beta*1/c[1]*(R[0]/Pi[+1])\n R[0] * beta =(Pi[0]/Pibar)^phi_pi\n A[0]*k[-1]^alpha=c[0]+k[0]-(1-delta*z_delta[0])*k[-1]\n z_delta[0] = 1 - rho_z_delta + rho_z_delta * z_delta[-1] + std_z_delta * delta_eps[x]\n A[0] = 1 - rhoz + rhoz * A[-1] + std_eps * eps_z[x]\nend\n\n\n@parameters RBC_CME begin\n alpha = .157\n beta = .999\n delta = .0226\n Pibar = 1.0008\n phi_pi = 1.5\n rhoz = .9\n std_eps = .0068\n rho_z_delta = .9\n std_z_delta = .005\nend\n\nget_variance_decomposition(RBC_CME)\n# output\n2-dimensional KeyedArray(NamedDimsArray(...)) with keys:\n↓ Variables ∈ 7-element Vector{Symbol}\n→ Shocks ∈ 2-element Vector{Symbol}\nAnd data, 7×2 Matrix{Float64}:\n (:delta_eps) (:eps_z)\n (:A) 9.78485e-31 1.0\n (:Pi) 0.0156771 0.984323\n (:R) 0.0156771 0.984323\n (:c) 0.0134672 0.986533\n (:k) 0.00869568 0.991304\n (:y) 0.000313462 0.999687\n (:z_delta) 1.0 0.0\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.gr_backend","page":"API","title":"MacroModelling.gr_backend","text":"gr_backend()\n\nRenaming and reexport of Plot.jl function gr() to define GR.jl as backend\n\n\n\n\n\n","category":"function"},{"location":"api/#MacroModelling.import_dynare","page":"API","title":"MacroModelling.import_dynare","text":"See translate_mod_file\n\n\n\n\n\n","category":"function"},{"location":"api/#MacroModelling.import_model","page":"API","title":"MacroModelling.import_model","text":"See translate_mod_file\n\n\n\n\n\n","category":"function"},{"location":"api/#MacroModelling.plot_IRF","page":"API","title":"MacroModelling.plot_IRF","text":"See plot_irf\n\n\n\n\n\n","category":"function"},{"location":"api/#MacroModelling.plot_conditional_forecast-Tuple{MacroModelling.ℳ, Union{KeyedArray{Union{Nothing, Float64}}, KeyedArray{Float64}, SparseArrays.SparseMatrixCSC{Float64}, Matrix{Union{Nothing, Float64}}}}","page":"API","title":"MacroModelling.plot_conditional_forecast","text":"plot_conditional_forecast(\n 𝓂,\n conditions;\n shocks,\n initial_state,\n periods,\n parameters,\n variables,\n conditions_in_levels,\n algorithm,\n levels,\n show_plots,\n save_plots,\n save_plots_format,\n save_plots_path,\n plots_per_page,\n verbose\n)\n\n\nPlot conditional forecast given restrictions on endogenous variables and shocks (optional) of the model. 
The algorithm finds the combinations of shocks with the smallest magnitude to match the conditions and plots both the endogenous variables and shocks.\n\nThe left axis shows the level, and the right axis the deviation from the (non) stochastic steady state, depending on the solution algorithm (e.g. higher order perturbation algorithms will show the stochastic steady state). Variable names are above the subplots, conditioned values are marked, and the title provides information about the model and the number of pages.\n\nArguments\n\n𝓂: the object created by @model and @parameters for which to get the solution.\nconditions [Type: Union{Matrix{Union{Nothing,Float64}}, SparseMatrixCSC{Float64}, KeyedArray{Union{Nothing,Float64}}, KeyedArray{Float64}}]: conditions for which to find the corresponding shocks. The input can have multiple formats, but for all types of entries the first dimension corresponds to the number of variables and the second dimension to the number of periods. The conditions can be specified using a matrix of type Matrix{Union{Nothing,Float64}}. In this case the conditions are matrix elements of type Float64 and all remaining (free) entries are nothing. You can also use a SparseMatrixCSC{Float64} as input. In this case only non-zero elements are taken as conditions. Note that you cannot condition variables to be zero using a SparseMatrixCSC{Float64} as input (use other input formats to do so). Another possibility to input conditions is by using a KeyedArray. You can use a KeyedArray{Union{Nothing,Float64}} where, similar to Matrix{Union{Nothing,Float64}}, all entries of type Float64 are recognised as conditions and all other entries have to be nothing. Furthermore, you can specify in the primary axis a subset of variables (of type Symbol or String) for which you specify conditions and all other variables are considered free. The same goes for the case when you use KeyedArray{Float64} as input, whereas in this case the conditions for the specified variables bind for all periods specified in the KeyedArray, because there are no nothing entries permitted with this type.\n\nKeyword Arguments\n\nshocks [Default: nothing, Type: Union{Matrix{Union{Nothing,Float64}}, SparseMatrixCSC{Float64}, KeyedArray{Union{Nothing,Float64}}, KeyedArray{Float64}, Nothing} = nothing]: known values of shocks. This entry allows the user to include certain shock values. By entering restrictions on the shocks in this way the problem to match the conditions on endogenous variables is restricted to the remaining free shocks in the respective period. The input can have multiple formats, but for all types of entries the first dimension corresponds to the number of shocks and the second dimension to the number of periods. The shocks can be specified using a matrix of type Matrix{Union{Nothing,Float64}}. In this case the shocks are matrix elements of type Float64 and all remaining (free) entries are nothing. You can also use a SparseMatrixCSC{Float64} as input. In this case only non-zero elements are taken as certain shock values. Note that you cannot condition shocks to be zero using a SparseMatrixCSC{Float64} as input (use other input formats to do so). Another possibility to input known shocks is by using a KeyedArray. You can use a KeyedArray{Union{Nothing,Float64}} where, similar to Matrix{Union{Nothing,Float64}}, all entries of type Float64 are recognised as known shocks and all other entries have to be nothing. 
Furthermore, you can specify in the primary axis a subset of shocks (of type Symbol or String) for which you specify values and all other shocks are considered free. The same goes for the case when you use KeyedArray{Float64}} as input, whereas in this case the values for the specified shocks bind for all periods specified in the KeyedArray, because there are no nothing entries permitted with this type.\ninitial_state [Default: [0.0], Type: Union{Vector{Vector{Float64}},Vector{Float64}}]: The initial state defines the starting point for the model and is relevant for normal IRFs. In the case of pruned solution algorithms the initial state can be given as multiple state vectors (Vector{Vector{Float64}}). In this case the initial state must be given in devations from the non-stochastic steady state. In all other cases the initial state must be given in levels. If a pruned solution algorithm is selected and initial state is a Vector{Float64} then it impacts the first order initial state vector only. The state includes all variables as well as exogenous variables in leads or lags if present.\nperiods [Default: 40, Type: Int]: the total number of periods is the sum of the argument provided here and the maximum of periods of the shocks or conditions argument.\nparameters [Default: nothing]: If nothing is provided, the solution is calculated for the parameters defined previously. Acceptable inputs are a vector of parameter values, a vector or tuple of pairs of the parameter Symbol or String and value. If the new parameter values differ from the previously defined the solution will be recalculated.\nvariables [Default: :all_excluding_obc]: variables for which to show the results. Inputs can be a variable name passed on as either a Symbol or String (e.g. :y or \"y\"), or Tuple, Matrix or Vector of String or Symbol. Any variables not part of the model will trigger a warning. :all_excluding_auxilliary_and_obc contains all shocks less those related to auxilliary variables and related to occasionally binding constraints (obc). :all_excluding_obc contains all shocks less those related to auxilliary variables. :all will contain all variables.\n\nconditions_in_levels [Default: true, Type: Bool]: indicator whether the conditions are provided in levels. If true the input to the conditions argument will have the non stochastic steady state substracted.\n\nlevels [Default: false, Type: Bool]: return levels or absolute deviations from steady state corresponding to the solution algorithm (e.g. stochastic steady state for higher order solution algorithms).\nalgorithm [Default: :first_order, Type: Symbol]: algorithm to solve for the dynamics of the model.\nshow_plots [Default: true, Type: Bool]: show plots. Separate plots per shocks and varibles depending on number of variables and plots_per_page.\nsave_plots [Default: false, Type: Bool]: switch to save plots using path and extension from save_plots_path and save_plots_format. Separate files per shocks and variables depending on number of variables and plots_per_page\nsave_plots_format [Default: :pdf, Type: Symbol]: output format of saved plots. 
See input formats compatible with GR for valid formats.\nsave_plots_path [Default: pwd(), Type: String]: path where to save plots\nplots_per_page [Default: 9, Type: Int]: how many plots to show per page\nverbose [Default: false, Type: Bool]: print information about how the NSSS is solved (symbolic or numeric), which solver is used (Levenberg-Marquardt...), and the maximum absolute error.\n\nExamples\n\nusing MacroModelling, StatsPlots\n\n@model RBC_CME begin\n y[0]=A[0]*k[-1]^alpha\n 1/c[0]=beta*1/c[1]*(alpha*A[1]*k[0]^(alpha-1)+(1-delta))\n 1/c[0]=beta*1/c[1]*(R[0]/Pi[+1])\n R[0] * beta =(Pi[0]/Pibar)^phi_pi\n A[0]*k[-1]^alpha=c[0]+k[0]-(1-delta*z_delta[0])*k[-1]\n z_delta[0] = 1 - rho_z_delta + rho_z_delta * z_delta[-1] + std_z_delta * delta_eps[x]\n A[0] = 1 - rhoz + rhoz * A[-1] + std_eps * eps_z[x]\nend\n\n@parameters RBC_CME begin\n alpha = .157\n beta = .999\n delta = .0226\n Pibar = 1.0008\n phi_pi = 1.5\n rhoz = .9\n std_eps = .0068\n rho_z_delta = .9\n std_z_delta = .005\nend\n\n# c is conditioned to deviate by 0.01 in period 1 and y is conditioned to deviate by 0.02 in period 3\nconditions = KeyedArray(Matrix{Union{Nothing,Float64}}(undef,2,2),Variables = [:c,:y], Periods = 1:2)\nconditions[1,1] = .01\nconditions[2,2] = .02\n\n# in period 2 second shock (eps_z) is conditioned to take a value of 0.05\nshocks = Matrix{Union{Nothing,Float64}}(undef,2,1)\nshocks[1,1] = .05\n\nplot_conditional_forecast(RBC_CME, conditions, shocks = shocks, conditions_in_levels = false)\n\n# The same can be achieved with the other input formats:\n# conditions = Matrix{Union{Nothing,Float64}}(undef,7,2)\n# conditions[4,1] = .01\n# conditions[6,2] = .02\n\n# using SparseArrays\n# conditions = spzeros(7,2)\n# conditions[4,1] = .01\n# conditions[6,2] = .02\n\n# shocks = KeyedArray(Matrix{Union{Nothing,Float64}}(undef,1,1),Variables = [:delta_eps], Periods = [1])\n# shocks[1,1] = .05\n\n# using SparseArrays\n# shocks = spzeros(2,1)\n# shocks[1,1] = .05\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.plot_conditional_variance_decomposition-Tuple{MacroModelling.ℳ}","page":"API","title":"MacroModelling.plot_conditional_variance_decomposition","text":"plot_conditional_variance_decomposition(\n 𝓂;\n periods,\n variables,\n parameters,\n show_plots,\n save_plots,\n save_plots_format,\n save_plots_path,\n plots_per_page,\n verbose\n)\n\n\nPlot conditional variance decomposition of the model.\n\nThe vertical axis shows the share of the shocks variance contribution, and horizontal axis the period of the variance decomposition. The stacked bars represent each shocks variance contribution at a specific time horizon.\n\nArguments\n\n𝓂: the object created by @model and @parameters for which to get the solution.\n\nKeyword Arguments\n\nperiods [Default: 40, Type: Int]: number of periods for which to calculate the IRFs. In case a matrix of shocks was provided, periods defines how many periods after the series of shocks the simulation continues.\nvariables [Default: :all_excluding_obc]: variables for which to show the results. Inputs can be a variable name passed on as either a Symbol or String (e.g. :y or \"y\"), or Tuple, Matrix or Vector of String or Symbol. Any variables not part of the model will trigger a warning. :all_excluding_auxilliary_and_obc contains all shocks less those related to auxilliary variables and related to occasionally binding constraints (obc). :all_excluding_obc contains all shocks less those related to auxilliary variables. 
:all will contain all variables.\nparameters [Default: nothing]: If nothing is provided, the solution is calculated for the parameters defined previously. Acceptable inputs are a vector of parameter values, a vector or tuple of pairs of the parameter Symbol or String and value. If the new parameter values differ from the previously defined the solution will be recalculated.\nshow_plots [Default: true, Type: Bool]: show plots. Separate plots per shocks and variables depending on number of variables and plots_per_page.\nsave_plots [Default: false, Type: Bool]: switch to save plots using path and extension from save_plots_path and save_plots_format. Separate files per shocks and variables depending on number of variables and plots_per_page\nsave_plots_format [Default: :pdf, Type: Symbol]: output format of saved plots. See input formats compatible with GR for valid formats.\nsave_plots_path [Default: pwd(), Type: String]: path where to save plots\nplots_per_page [Default: 9, Type: Int]: how many plots to show per page\nverbose [Default: false, Type: Bool]: print information about how the NSSS is solved (symbolic or numeric), which solver is used (Levenberg-Marquardt...), and the maximum absolute error.\n\nExamples\n\nusing MacroModelling, StatsPlots\n\n@model RBC_CME begin\n y[0]=A[0]*k[-1]^alpha\n 1/c[0]=beta*1/c[1]*(alpha*A[1]*k[0]^(alpha-1)+(1-delta))\n 1/c[0]=beta*1/c[1]*(R[0]/Pi[+1])\n R[0] * beta =(Pi[0]/Pibar)^phi_pi\n A[0]*k[-1]^alpha=c[0]+k[0]-(1-delta*z_delta[0])*k[-1]\n z_delta[0] = 1 - rho_z_delta + rho_z_delta * z_delta[-1] + std_z_delta * delta_eps[x]\n A[0] = 1 - rhoz + rhoz * A[-1] + std_eps * eps_z[x]\nend\n\n@parameters RBC_CME begin\n alpha = .157\n beta = .999\n delta = .0226\n Pibar = 1.0008\n phi_pi = 1.5\n rhoz = .9\n std_eps = .0068\n rho_z_delta = .9\n std_z_delta = .005\nend\n\nplot_conditional_variance_decomposition(RBC_CME)\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.plot_fevd","page":"API","title":"MacroModelling.plot_fevd","text":"See plot_conditional_variance_decomposition\n\n\n\n\n\n","category":"function"},{"location":"api/#MacroModelling.plot_forecast_error_variance_decomposition","page":"API","title":"MacroModelling.plot_forecast_error_variance_decomposition","text":"See plot_conditional_variance_decomposition\n\n\n\n\n\n","category":"function"},{"location":"api/#MacroModelling.plot_girf-Tuple","page":"API","title":"MacroModelling.plot_girf","text":"Wrapper for plot_irf with generalised_irf = true.\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.plot_irf-Tuple{MacroModelling.ℳ}","page":"API","title":"MacroModelling.plot_irf","text":"plot_irf(\n 𝓂;\n periods,\n shocks,\n variables,\n parameters,\n show_plots,\n save_plots,\n save_plots_format,\n save_plots_path,\n plots_per_page,\n algorithm,\n negative_shock,\n generalised_irf,\n initial_state,\n ignore_obc,\n verbose\n)\n\n\nPlot impulse response functions (IRFs) of the model.\n\nThe left axis shows the level, and the right the deviation from the reference steady state. Linear solutions use the non stochastic steady state as reference; other solutions use the stochastic steady state. The horizontal black line indicates the reference steady state. Variable names are above the subplots and the title provides information about the model, shocks and number of pages per shock.\n\nArguments\n\n𝓂: the object created by @model and @parameters for which to get the solution.\n\nKeyword Arguments\n\nperiods [Default: 40, Type: Int]: number of periods for which to calculate the IRFs. 
In case a matrix of shocks was provided, periods defines how many periods after the series of shocks the simulation continues.\nshocks [Default: :all_excluding_obc]: shocks for which to calculate the IRFs. Inputs can be a shock name passed on as either a Symbol or String (e.g. :y, or \"y\"), or Tuple, Matrix or Vector of String or Symbol. :simulate triggers random draws of all shocks (excluding occasionally binding constraints (obc) related shocks). :all_excluding_obc will contain all shocks but not the obc related ones.:all will contain also the obc related shocks. A series of shocks can be passed on using either a Matrix{Float64}, or a KeyedArray{Float64} as input with shocks (Symbol or String) in rows and periods in columns. The period of the simulation will correspond to the length of the input in the period dimension + the number of periods defined in periods. If the series of shocks is input as a KeyedArray{Float64} make sure to name the rows with valid shock names of type Symbol. Any shocks not part of the model will trigger a warning. :none in combination with an initial_state can be used for deterministic simulations.\nvariables [Default: :all_excluding_obc]: variables for which to show the results. Inputs can be a variable name passed on as either a Symbol or String (e.g. :y or \"y\"), or Tuple, Matrix or Vector of String or Symbol. Any variables not part of the model will trigger a warning. :all_excluding_auxilliary_and_obc contains all shocks less those related to auxilliary variables and related to occasionally binding constraints (obc). :all_excluding_obc contains all shocks less those related to auxilliary variables. :all will contain all variables.\nparameters [Default: nothing]: If nothing is provided, the solution is calculated for the parameters defined previously. Acceptable inputs are a vector of parameter values, a vector or tuple of pairs of the parameter Symbol or String and value. If the new parameter values differ from the previously defined the solution will be recalculated.\nshow_plots [Default: true, Type: Bool]: show plots. Separate plots per shocks and varibles depending on number of variables and plots_per_page.\nsave_plots [Default: false, Type: Bool]: switch to save plots using path and extension from save_plots_path and save_plots_format. Separate files per shocks and variables depending on number of variables and plots_per_page\nsave_plots_format [Default: :pdf, Type: Symbol]: output format of saved plots. See input formats compatible with GR for valid formats.\nsave_plots_path [Default: pwd(), Type: String]: path where to save plots\nplots_per_page [Default: 9, Type: Int]: how many plots to show per page\nalgorithm [Default: :first_order, Type: Symbol]: algorithm to solve for the dynamics of the model.\nnegative_shock [Default: false, Type: Bool]: calculate a negative shock. Relevant for generalised IRFs.\ngeneralised_irf [Default: false, Type: Bool]: calculate generalised IRFs. Relevant for nonlinear solutions. Reference steady state for deviations is the stochastic steady state.\ninitial_state [Default: [0.0], Type: Union{Vector{Vector{Float64}},Vector{Float64}}]: The initial state defines the starting point for the model and is relevant for normal IRFs. In the case of pruned solution algorithms the initial state can be given as multiple state vectors (Vector{Vector{Float64}}). In this case the initial state must be given in devations from the non-stochastic steady state. In all other cases the initial state must be given in levels. 
If a pruned solution algorithm is selected and initial state is a Vector{Float64} then it impacts the first order initial state vector only. The state includes all variables as well as exogenous variables in leads or lags if present.\nignore_obc [Default: false, Type: Bool]: solve the model ignoring the occasionally binding constraints.\nverbose [Default: false, Type: Bool]: print information about how the NSSS is solved (symbolic or numeric), which solver is used (Levenberg-Marquardt...), and the maximum absolute error.\n\nExamples\n\nusing MacroModelling, StatsPlots\n\n@model RBC begin\n 1 / c[0] = (β / c[1]) * (α * exp(z[1]) * k[0]^(α - 1) + (1 - δ))\n c[0] + k[0] = (1 - δ) * k[-1] + q[0]\n q[0] = exp(z[0]) * k[-1]^α\n z[0] = ρ * z[-1] + std_z * eps_z[x]\nend;\n\n@parameters RBC begin\n std_z = 0.01\n ρ = 0.2\n δ = 0.02\n α = 0.5\n β = 0.95\nend;\n\nplot_irf(RBC)\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.plot_irfs","page":"API","title":"MacroModelling.plot_irfs","text":"See plot_irf\n\n\n\n\n\n","category":"function"},{"location":"api/#MacroModelling.plot_model_estimates-Tuple{MacroModelling.ℳ, KeyedArray{Float64}}","page":"API","title":"MacroModelling.plot_model_estimates","text":"plot_model_estimates(\n 𝓂,\n data;\n parameters,\n algorithm,\n filter,\n warmup_iterations,\n variables,\n shocks,\n data_in_levels,\n shock_decomposition,\n smooth,\n show_plots,\n save_plots,\n save_plots_format,\n save_plots_path,\n plots_per_page,\n transparency,\n verbose\n)\n\n\nPlot model estimates of the variables given the data. The default plot shows the estimated variables, shocks, and the data to estimate the former. The left axis shows the level, and the right the deviation from the reference steady state. The horizontal black line indicates the non stochastic steady state. Variable names are above the subplots and the title provides information about the model, shocks and number of pages per shock.\n\nIn case shock_decomposition = true, then the plot shows the variables, shocks, and data in absolute deviations from the non stochastic steady state plus the contribution of the shocks as a stacked bar chart per period.\n\nArguments\n\n𝓂: the object created by @model and @parameters for which to get the solution.\ndata [Type: KeyedArray]: data matrix with variables (String or Symbol) in rows and time in columns\n\nKeyword Arguments\n\nparameters [Default: nothing]: If nothing is provided, the solution is calculated for the parameters defined previously. Acceptable inputs are a vector of parameter values, a vector or tuple of pairs of the parameter Symbol or String and value. If the new parameter values differ from the previously defined the solution will be recalculated.\nvariables [Default: :all_excluding_obc]: variables for which to show the results. Inputs can be a variable name passed on as either a Symbol or String (e.g. :y or \"y\"), or Tuple, Matrix or Vector of String or Symbol. Any variables not part of the model will trigger a warning. :all_excluding_auxilliary_and_obc contains all shocks less those related to auxilliary variables and related to occasionally binding constraints (obc). :all_excluding_obc contains all shocks less those related to auxilliary variables. :all will contain all variables.\nshocks [Default: :all]: shocks for which to plot the estimates. Inputs can be either a Symbol (e.g. 
:y, or :all), Tuple{Symbol, Vararg{Symbol}}, Matrix{Symbol}, or Vector{Symbol}.\ndata_in_levels [Default: true, Type: Bool]: indicator whether the data is provided in levels. If true the input to the data argument will have the non stochastic steady state substracted.\nshock_decomposition [Default: false, Type: Bool]: whether to show the contribution of the shocks to the deviations from NSSS for each variable. If false, the plot shows the values of the selected variables, data, and shocks\nsmooth [Default: true, Type: Bool]: whether to return smoothed (true) or filtered (false) shocks. Only works for the Kalman filter. The inversion filter only returns filtered shocks.\nshow_plots [Default: true, Type: Bool]: show plots. Separate plots per shocks and varibles depending on number of variables and plots_per_page.\nsave_plots [Default: false, Type: Bool]: switch to save plots using path and extension from save_plots_path and save_plots_format. Separate files per shocks and variables depending on number of variables and plots_per_page\nsave_plots_format [Default: :pdf, Type: Symbol]: output format of saved plots. See input formats compatible with GR for valid formats.\nsave_plots_path [Default: pwd(), Type: String]: path where to save plots\nplots_per_page [Default: 9, Type: Int]: how many plots to show per page\ntransparency [Default: 0.6, Type: Float64]: transparency of bars\nverbose [Default: false, Type: Bool]: print information about how the NSSS is solved (symbolic or numeric), which solver is used (Levenberg-Marquardt...), and the maximum absolute error.\n\nExamples\n\nusing MacroModelling, StatsPlots\n\n\n@model RBC_CME begin\n y[0]=A[0]*k[-1]^alpha\n 1/c[0]=beta*1/c[1]*(alpha*A[1]*k[0]^(alpha-1)+(1-delta))\n 1/c[0]=beta*1/c[1]*(R[0]/Pi[+1])\n R[0] * beta =(Pi[0]/Pibar)^phi_pi\n A[0]*k[-1]^alpha=c[0]+k[0]-(1-delta*z_delta[0])*k[-1]\n z_delta[0] = 1 - rho_z_delta + rho_z_delta * z_delta[-1] + std_z_delta * delta_eps[x]\n A[0] = 1 - rhoz + rhoz * A[-1] + std_eps * eps_z[x]\nend\n\n@parameters RBC_CME begin\n alpha = .157\n beta = .999\n delta = .0226\n Pibar = 1.0008\n phi_pi = 1.5\n rhoz = .9\n std_eps = .0068\n rho_z_delta = .9\n std_z_delta = .005\nend\n\nsimulation = simulate(RBC_CME)\n\nplot_model_estimates(RBC_CME, simulation([:k],:,:simulate))\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.plot_shock_decomposition-Tuple","page":"API","title":"MacroModelling.plot_shock_decomposition","text":"Wrapper for plot_model_estimates with shock_decomposition = true.\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.plot_simulation-Tuple","page":"API","title":"MacroModelling.plot_simulation","text":"Wrapper for plot_irf with shocks = :simulate and periods = 100.\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.plot_simulations-Tuple","page":"API","title":"MacroModelling.plot_simulations","text":"Wrapper for plot_irf with shocks = :simulate and periods = 100.\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.plot_solution-Tuple{MacroModelling.ℳ, Union{String, Symbol}}","page":"API","title":"MacroModelling.plot_solution","text":"plot_solution(\n 𝓂,\n state;\n variables,\n algorithm,\n σ,\n parameters,\n ignore_obc,\n show_plots,\n save_plots,\n save_plots_format,\n save_plots_path,\n plots_per_page,\n verbose\n)\n\n\nPlot the solution of the model (mapping of past states to present variables) around the (non) stochastic steady state (depending on chosen solution algorithm). 
Each plot shows the relationship between the chosen state (defined in state) and one of the chosen variables (defined in variables). \n\nThe (non) stochastic steady state is plotted along with the mapping from the chosen past state to one present variable per plot. All other (non-chosen) states remain in the (non) stochastic steady state.\n\nIn the case of pruned solutions there are as many (latent) state vectors as the perturbation order. The first and third order baseline state vectors are the non stochastic steady state and the second order baseline state vector is the stochastic steady state. Deviations for the chosen state are only added to the first order baseline state. The plot shows the mapping from σ standard deviations (first order) added to the first order non stochastic steady state to the present variables. Note that there is no unique mapping between the \"pruned\" states and the \"actual\" reported state. Hence, the plots shown are just one realisation of infinite possible mappings.\n\nArguments\n\n𝓂: the object created by @model and @parameters for which to get the solution.\nstate [Type: Union{Symbol,String}]: state variable to be shown on x-axis.\n\nKeyword Arguments\n\nvariables [Default: :all_excluding_obc]: variables for which to show the results. Inputs can be a variable name passed on as either a Symbol or String (e.g. :y or \"y\"), or Tuple, Matrix or Vector of String or Symbol. Any variables not part of the model will trigger a warning. :all_excluding_auxilliary_and_obc contains all variables less those related to auxilliary variables and those related to occasionally binding constraints (obc). :all_excluding_obc contains all variables less those related to auxilliary variables. :all will contain all variables.\nalgorithm [Default: :first_order, Type: Union{Symbol,Vector{Symbol}}]: solution algorithm for which to show the IRFs. Can be more than one, e.g.: [:second_order,:pruned_third_order]\nσ [Default: 2, Type: Union{Int64,Float64}]: defines the range of the state variable around the (non) stochastic steady state in standard deviations. E.g. a value of 2 means that the state variable is plotted for values within +/- 2 standard deviations around the (non) stochastic steady state.\nparameters [Default: nothing]: If nothing is provided, the solution is calculated for the parameters defined previously. Acceptable inputs are a vector of parameter values, a vector or tuple of pairs of the parameter Symbol or String and value. If the new parameter values differ from the previously defined ones, the solution will be recalculated.\nignore_obc [Default: false, Type: Bool]: solve the model ignoring the occasionally binding constraints.\nshow_plots [Default: true, Type: Bool]: show plots. Separate plots per shocks and variables depending on number of variables and plots_per_page.\nsave_plots [Default: false, Type: Bool]: switch to save plots using path and extension from save_plots_path and save_plots_format. Separate files per shocks and variables depending on number of variables and plots_per_page\nsave_plots_format [Default: :pdf, Type: Symbol]: output format of saved plots. 
See input formats compatible with GR for valid formats.\nsave_plots_path [Default: pwd(), Type: String]: path where to save plots\nplots_per_page [Default: 6, Type: Int]: how many plots to show per page\nverbose [Default: false, Type: Bool]: print information about how the NSSS is solved (symbolic or numeric), which solver is used (Levenberg-Marquardt...), and the maximum absolute error.\n\nExamples\n\nusing MacroModelling, StatsPlots\n\n@model RBC_CME begin\n y[0]=A[0]*k[-1]^alpha\n 1/c[0]=beta*1/c[1]*(alpha*A[1]*k[0]^(alpha-1)+(1-delta))\n 1/c[0]=beta*1/c[1]*(R[0]/Pi[+1])\n R[0] * beta =(Pi[0]/Pibar)^phi_pi\n A[0]*k[-1]^alpha=c[0]+k[0]-(1-delta*z_delta[0])*k[-1]\n z_delta[0] = 1 - rho_z_delta + rho_z_delta * z_delta[-1] + std_z_delta * delta_eps[x]\n A[0] = 1 - rhoz + rhoz * A[-1] + std_eps * eps_z[x]\nend\n\n@parameters RBC_CME begin\n alpha = .157\n beta = .999\n delta = .0226\n Pibar = 1.0008\n phi_pi = 1.5\n rhoz = .9\n std_eps = .0068\n rho_z_delta = .9\n std_z_delta = .005\nend\n\nplot_solution(RBC_CME, :k)\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.plotlyjs_backend","page":"API","title":"MacroModelling.plotlyjs_backend","text":"plotlyjs_backend()\n\nRenaming and reexport of Plot.jl function plotlyjs() to define PlotlyJS.jl as backend\n\n\n\n\n\n","category":"function"},{"location":"api/#MacroModelling.simulate-Tuple","page":"API","title":"MacroModelling.simulate","text":"Wrapper for get_irf with shocks = :simulate. Function returns values in levels by default.\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.ss-Tuple","page":"API","title":"MacroModelling.ss","text":"See get_steady_state\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.sss-Tuple","page":"API","title":"MacroModelling.sss","text":"Wrapper for get_steady_state with stochastic = true.\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.std","page":"API","title":"MacroModelling.std","text":"Wrapper for get_moments with standard_deviation = true and non_stochastic_steady_state = false, variance = false, covariance = false.\n\n\n\n\n\n","category":"function"},{"location":"api/#MacroModelling.stdev","page":"API","title":"MacroModelling.stdev","text":"Wrapper for get_moments with standard_deviation = true and non_stochastic_steady_state = false, variance = false, covariance = false.\n\n\n\n\n\n","category":"function"},{"location":"api/#MacroModelling.steady_state","page":"API","title":"MacroModelling.steady_state","text":"See get_steady_state\n\n\n\n\n\n","category":"function"},{"location":"api/#MacroModelling.translate_dynare_file","page":"API","title":"MacroModelling.translate_dynare_file","text":"See translate_mod_file\n\n\n\n\n\n","category":"function"},{"location":"api/#MacroModelling.translate_mod_file-Tuple{AbstractString}","page":"API","title":"MacroModelling.translate_mod_file","text":"translate_mod_file(path_to_mod_file)\n\n\nReads in a dynare .mod-file, adapts the syntax, tries to capture parameter definitions, and writes a julia file in the same folder containing the model equations and parameters in MacroModelling.jl syntax. This function is not guaranteed to produce working code. It's purpose is to make it easier to port a model from dynare to MacroModelling.jl. \n\nThe recommended workflow is to use this function to translate a .mod-file, and then adapt the output so that it runs and corresponds to the input.\n\nNote that this function copies the .mod-file to a temporary folder and executes it there. 
All references within that .mod-file are therefore not valid (because those files are not copied) and must be copied into the .mod-file.\n\nArguments\n\npath_to_mod_file [Type: AbstractString]: path including filename of the .mod-file to be translated\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.var","page":"API","title":"MacroModelling.var","text":"Wrapper for get_moments with variance = true and non_stochastic_steady_state = false, standard_deviation = false, covariance = false.\n\n\n\n\n\n","category":"function"},{"location":"api/#MacroModelling.write_dynare_file","page":"API","title":"MacroModelling.write_dynare_file","text":"See write_mod_file\n\n\n\n\n\n","category":"function"},{"location":"api/#MacroModelling.write_mod_file-Tuple{MacroModelling.ℳ}","page":"API","title":"MacroModelling.write_mod_file","text":"write_mod_file(m)\n\n\nWrites a dynare .mod-file in the current working directory. This function is not guaranteed to produce working code. Its purpose is to make it easier to port a model from MacroModelling.jl to dynare. \n\nThe recommended workflow is to use this function to write a .mod-file, and then adapt the output so that it runs and corresponds to the input.\n\nArguments\n\n𝓂: the object created by @model and @parameters for which to get the solution.\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.write_to_dynare","page":"API","title":"MacroModelling.write_to_dynare","text":"See write_mod_file\n\n\n\n\n\n","category":"function"},{"location":"api/#MacroModelling.write_to_dynare_file","page":"API","title":"MacroModelling.write_to_dynare_file","text":"See write_mod_file\n\n\n\n\n\n","category":"function"},{"location":"api/#MacroModelling.@model-Tuple{Any, Vararg{Any}}","page":"API","title":"MacroModelling.@model","text":"Parses the model equations and assigns them to an object.\n\nArguments\n\n𝓂: name of the object to be created containing the model information.\nex: equations\n\nOptional arguments to be placed between 𝓂 and ex\n\nmax_obc_horizon [Default: 40, Type: Int]: maximum length of anticipated shocks and corresponding unconditional forecast horizon over which the occasionally binding constraint is to be enforced. Increase this number if no solution is found to enforce the constraint.\n\nVariables must be defined with their time subscript in squared brackets. Endogenous variables can have the following:\n\npresent: c[0]\nnon-stochastic steady state: c[ss] instead of ss any of the following is also a valid flag for the non-stochastic steady state: ss, stst, steady, steadystate, steady_state, and the parser is case-insensitive (SS or sTst will work as well).\npast: c[-1] or any negative Integer: e.g. c[-12]\nfuture: c[1] or any positive Integer: e.g. c[16] or c[+16]\n\nSigned integers are recognised and parsed as such.\n\nExogenous variables (shocks) can have the following:\n\npresent: eps_z[x] instead of x any of the following is also a valid flag for exogenous variables: ex, exo, exogenous, and the parser is case-insensitive (Ex or exoGenous will work as well).\npast: eps_z[x-1]\nfuture: eps_z[x+1]\n\nParameters enter the equations without squared brackets.\n\nIf an equation contains a max or min operator, then the default dynamic (first order) solution of the model will enforce the occasionally binding constraint. 
You can choose to ignore it by setting ignore_obc = true in the relevant function calls.\n\nExamples\n\nusing MacroModelling\n\n@model RBC begin\n 1 / c[0] = (β / c[1]) * (α * exp(z[1]) * k[0]^(α - 1) + (1 - δ))\n c[0] + k[0] = (1 - δ) * k[-1] + q[0]\n q[0] = exp(z[0]) * k[-1]^α\n z[0] = ρ * z[-1] + std_z * eps_z[x]\nend\n\nProgrammatic model writing\n\nParameters and variables can be indexed using curly braces: e.g. c{H}[0], eps_z{F}[x], or α{H}.\n\nfor loops can be used to write models programmatically. They can either be used to generate expressions where you iterate over the time index or the index in curly braces:\n\ngenerate equation with different indices in curly braces: for co in [H,F] C{co}[0] + X{co}[0] + Z{co}[0] - Z{co}[-1] end = for co in [H,F] Y{co}[0] end\ngenerate multiple equations with different indices in curly braces: for co in [H, F] K{co}[0] = (1-delta{co}) * K{co}[-1] + S{co}[0] end\ngenerate equation with different time indices: Y_annual[0] = for lag in -3:0 Y[lag] end or R_annual[0] = for operator = :*, lag in -3:0 R[lag] end\n\n\n\n\n\n","category":"macro"},{"location":"api/#MacroModelling.@parameters-Tuple{Any, Vararg{Any}}","page":"API","title":"MacroModelling.@parameters","text":"Adds parameter values and calibration equations to the previously defined model. It also allows providing an initial guess for the non-stochastic steady state (NSSS).\n\nArguments\n\n𝓂: name of the object previously created containing the model information.\nex: parameter values and calibration equations\n\nParameters can be defined in either of the following ways:\n\nplain number: δ = 0.02\nexpression containing numbers: δ = 1/50\nexpression containing other parameters: δ = 2 * std_z in this case it is irrelevant if std_z is defined before or after. The definitions including other parameters are treated as a system of equations and solved accordingly.\nexpressions containing a target parameter and an equation with endogenous variables in the non-stochastic steady state, and other parameters, or numbers: k[ss] / (4 * q[ss]) = 1.5 | δ or α | 4 * q[ss] = δ * k[ss] in this case the target parameter will be solved simultaneously with the non-stochastic steady state using the equation defined with it.\n\nOptional arguments to be placed between 𝓂 and ex\n\nguess [Type: Dict{Symbol, <:Real}, Dict{String, <:Real}]: Guess for the non-stochastic steady state. The keys must be the variable (and calibrated parameters) names and the values the guesses. Missing values are filled with standard starting values.\nverbose [Default: false, Type: Bool]: print more information about how the non stochastic steady state is solved\nsilent [Default: false, Type: Bool]: do not print any information\nsymbolic [Default: false, Type: Bool]: try to solve the non stochastic steady state symbolically and fall back to a numerical solution if not possible\nperturbation_order [Default: 1, Type: Int]: take derivatives only up to the specified order at this stage. 
In case you want to work with higher order perturbation later on, respective derivatives will be taken at that stage.\n\nExamples\n\nusing MacroModelling\n\n@model RBC begin\n 1 / c[0] = (β / c[1]) * (α * exp(z[1]) * k[0]^(α - 1) + (1 - δ))\n c[0] + k[0] = (1 - δ) * k[-1] + q[0]\n q[0] = exp(z[0]) * k[-1]^α\n z[0] = ρ * z[-1] + std_z * eps_z[x]\nend\n\n@parameters RBC verbose = true begin\n std_z = 0.01\n ρ = 0.2\n δ = 0.02\n α = 0.5\n β = 0.95\nend\n\n@model RBC_calibrated begin\n 1 / c[0] = (β / c[1]) * (α * exp(z[1]) * k[0]^(α - 1) + (1 - δ))\n c[0] + k[0] = (1 - δ) * k[-1] + q[0]\n q[0] = exp(z[0]) * k[-1]^α\n z[0] = ρ * z[-1] + std_z * eps_z[x]\nend\n\n@parameters RBC_calibrated verbose = true guess = Dict(:k => 3) begin\n std_z = 0.01\n ρ = 0.2\n δ = 0.02\n k[ss] / q[ss] = 2.5 | α\n β = 0.95\nend\n\nProgrammatic model writing\n\nVariables and parameters indexed with curly braces can be either referenced specifically (e.g. c{H}[ss]) or generally (e.g. alpha). If they are referenced generally the parser assumes all instances (indices) are meant. For example, in a model where alpha has two indices H and F, the expression alpha = 0.3 is interpreted as two expressions: alpha{H} = 0.3 and alpha{F} = 0.3. The same goes for calibration equations.\n\n\n\n\n\n","category":"macro"},{"location":"tutorials/install/#Installation","page":"Installation","title":"Installation","text":"","category":"section"},{"location":"tutorials/install/","page":"Installation","title":"Installation","text":"MacroModelling.jl requires julia version 1.8 or higher and an IDE is recommended (e.g. VS Code with the julia extension).","category":"page"},{"location":"tutorials/install/","page":"Installation","title":"Installation","text":"Once set up you can install MacroModelling.jl by typing the following in the julia REPL:","category":"page"},{"location":"tutorials/install/","page":"Installation","title":"Installation","text":"using Pkg; Pkg.add(\"MacroModelling\")","category":"page"},{"location":"how-to/obc/#Occasionally-Binding-Constraints","page":"Occasionally binding constraints","title":"Occasionally Binding Constraints","text":"","category":"section"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"Occasionally binding constraints are a form of nonlinearity frequently used to model effects like the zero lower bound on interest rates, or borrowing constraints. Perturbation methods are not able to capture them as they are local approximations. Nonetheless, there are ways to combine the speed of perturbation solutions and the flexibility of occasionally binding constraints. MacroModelling.jl provides a convenient way to write down the constraints and automatically enforces the constraint equation with shocks. More specifically, the constraint equation is enforced for each period's unconditional forecast (default forecast horizon of 40 periods) by constraint equation specific anticipated shocks, while minimising the shock size.","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"This guide will demonstrate how to write down models containing occasionally binding constraints (e.g. 
effective lower bound and borrowing constraint), show some potential problems the user may encounter and how to overcome them, and go through some use cases.","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"Common problems that may occur are that no perturbation solution is found, or that the algorithm cannot find a combination of shocks which enforce the constraint equation. The former has to do with the fact that occasionally binding constraints can give rise to more than one steady state but only one is suitable for a perturbation solution. The latter has to do with the dynamics of the model and the fact that we use a finite amount of shocks to enforce the constraint equation.","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"Beyond the examples outlined in this guide there is a version of Smets and Wouters (2003) with the ELB in the models folder (filename: SW03_obc.jl).","category":"page"},{"location":"how-to/obc/#Example:-Effective-lower-bound-on-interest-rates","page":"Occasionally binding constraints","title":"Example: Effective lower bound on interest rates","text":"","category":"section"},{"location":"how-to/obc/#Writing-a-model-with-occasionally-binding-constraints","page":"Occasionally binding constraints","title":"Writing a model with occasionally binding constraints","text":"","category":"section"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"Let us take the Galı́ (2015), Chapter 3 model containing a Taylor rule and implement an effective lower bound on interest rates. The Taylor rule in the model: R[0] = 1 / β * Pi[0] ^ ϕᵖⁱ * (Y[0] / Y[ss]) ^ ϕʸ * exp(nu[0]) needs to be modified so that R[0] never goes below an effective lower bound R̄. 
We can do this using the max operator: R[0] = max(R̄ , 1 / β * Pi[0] ^ ϕᵖⁱ * (Y[0] / Y[ss]) ^ ϕʸ * exp(nu[0]))","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"The model definition after the change of the Taylor rule looks like this:","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"ENV[\"GKSwstype\"] = \"100\"\nusing Random\nRandom.seed!(30)","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"using MacroModelling\n@model Gali_2015_chapter_3_obc begin\n W_real[0] = C[0] ^ σ * N[0] ^ φ\n\n Q[0] = β * (C[1] / C[0]) ^ (-σ) * Z[1] / Z[0] / Pi[1]\n\n R[0] = 1 / Q[0]\n\n Y[0] = A[0] * (N[0] / S[0]) ^ (1 - α)\n\n R[0] = Pi[1] * realinterest[0]\n\n C[0] = Y[0]\n\n log(A[0]) = ρ_a * log(A[-1]) + std_a * eps_a[x]\n\n log(Z[0]) = ρ_z * log(Z[-1]) - std_z * eps_z[x]\n\n nu[0] = ρ_ν * nu[-1] + std_nu * eps_nu[x]\n\n MC[0] = W_real[0] / (S[0] * Y[0] * (1 - α) / N[0])\n\n 1 = θ * Pi[0] ^ (ϵ - 1) + (1 - θ) * Pi_star[0] ^ (1 - ϵ)\n\n S[0] = (1 - θ) * Pi_star[0] ^ (( - ϵ) / (1 - α)) + θ * Pi[0] ^ (ϵ / (1 - α)) * S[-1]\n\n Pi_star[0] ^ (1 + ϵ * α / (1 - α)) = ϵ * x_aux_1[0] / x_aux_2[0] * (1 - τ) / (ϵ - 1)\n\n x_aux_1[0] = MC[0] * Y[0] * Z[0] * C[0] ^ (-σ) + β * θ * Pi[1] ^ (ϵ + α * ϵ / (1 - α)) * x_aux_1[1]\n\n x_aux_2[0] = Y[0] * Z[0] * C[0] ^ (-σ) + β * θ * Pi[1] ^ (ϵ - 1) * x_aux_2[1]\n\n log_y[0] = log(Y[0])\n\n log_W_real[0] = log(W_real[0])\n\n log_N[0] = log(N[0])\n\n pi_ann[0] = 4 * log(Pi[0])\n\n i_ann[0] = 4 * log(R[0])\n\n r_real_ann[0] = 4 * log(realinterest[0])\n\n M_real[0] = Y[0] / R[0] ^ η\n\n R[0] = max(R̄ , 1 / β * Pi[0] ^ ϕᵖⁱ * (Y[0] / Y[ss]) ^ ϕʸ * exp(nu[0]))\n\nend","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"In the background the system of equations is augmented by a series of anticipated shocks added to the equation containing the constraint (max/min operator). 
This explains the large number of auxilliary variables and shocks.","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"Next we define the parameters including the new parameter defining the effective lower bound (which we set to 1, which implements a zero lower bound):","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"@parameters Gali_2015_chapter_3_obc begin\n R̄ = 1.0\n\n σ = 1\n\n φ = 5\n\n ϕᵖⁱ = 1.5\n\n ϕʸ = 0.125\n\n θ = 0.75\n\n ρ_ν = 0.5\n\n ρ_z = 0.5\n\n ρ_a = 0.9\n\n β = 0.99\n\n η = 3.77\n\n α = 0.25\n\n ϵ = 9\n\n τ = 0\n\n std_a = .01\n\n std_z = .05\n\n std_nu = .0025\n\nend","category":"page"},{"location":"how-to/obc/#Verify-the-non-stochastic-steady-state","page":"Occasionally binding constraints","title":"Verify the non stochastic steady state","text":"","category":"section"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"Let's check out the non stochastic steady state (NSSS):","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"SS(Gali_2015_chapter_3_obc)\nSS(Gali_2015_chapter_3_obc)(:R)","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"There are a few things to note here. First, we get the NSSS values of the auxilliary variables related to the occasionally binding constraint. Second, the NSSS value of R is 1, and thereby the effective lower bound is binding in the NSSS. While this is a viable NSSS it is not a viable approximation point for perturbation. We can only find a perturbation solution if the effective lower bound is not binding in NSSS. Calling get_solution reveals that there is no stable solution at this NSSS:","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"get_solution(Gali_2015_chapter_3_obc)","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"In order to get the other viable NSSS we have to restrict the values of R to be larger than the effective lower bound. We can do this by adding a constraint on the variable in the @parameter section. 
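In its simplest form this amounts to adding an inequality for the variable at the end of the @parameters block, for example:\n\nR > 1.000001\n\nwhich is exactly the line used in the full redefinition below. 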
Let us redefine the model:","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"@model Gali_2015_chapter_3_obc begin\n W_real[0] = C[0] ^ σ * N[0] ^ φ\n\n Q[0] = β * (C[1] / C[0]) ^ (-σ) * Z[1] / Z[0] / Pi[1]\n\n R[0] = 1 / Q[0]\n\n Y[0] = A[0] * (N[0] / S[0]) ^ (1 - α)\n\n R[0] = Pi[1] * realinterest[0]\n\n C[0] = Y[0]\n\n log(A[0]) = ρ_a * log(A[-1]) + std_a * eps_a[x]\n\n log(Z[0]) = ρ_z * log(Z[-1]) - std_z * eps_z[x]\n\n nu[0] = ρ_ν * nu[-1] + std_nu * eps_nu[x]\n\n MC[0] = W_real[0] / (S[0] * Y[0] * (1 - α) / N[0])\n\n 1 = θ * Pi[0] ^ (ϵ - 1) + (1 - θ) * Pi_star[0] ^ (1 - ϵ)\n\n S[0] = (1 - θ) * Pi_star[0] ^ (( - ϵ) / (1 - α)) + θ * Pi[0] ^ (ϵ / (1 - α)) * S[-1]\n\n Pi_star[0] ^ (1 + ϵ * α / (1 - α)) = ϵ * x_aux_1[0] / x_aux_2[0] * (1 - τ) / (ϵ - 1)\n\n x_aux_1[0] = MC[0] * Y[0] * Z[0] * C[0] ^ (-σ) + β * θ * Pi[1] ^ (ϵ + α * ϵ / (1 - α)) * x_aux_1[1]\n\n x_aux_2[0] = Y[0] * Z[0] * C[0] ^ (-σ) + β * θ * Pi[1] ^ (ϵ - 1) * x_aux_2[1]\n\n log_y[0] = log(Y[0])\n\n log_W_real[0] = log(W_real[0])\n\n log_N[0] = log(N[0])\n\n pi_ann[0] = 4 * log(Pi[0])\n\n i_ann[0] = 4 * log(R[0])\n\n r_real_ann[0] = 4 * log(realinterest[0])\n\n M_real[0] = Y[0] / R[0] ^ η\n\n R[0] = max(R̄ , 1 / β * Pi[0] ^ ϕᵖⁱ * (Y[0] / Y[ss]) ^ ϕʸ * exp(nu[0]))\n\nend\n\n@parameters Gali_2015_chapter_3_obc begin\n R̄ = 1.0\n\n σ = 1\n\n φ = 5\n\n ϕᵖⁱ = 1.5\n\n ϕʸ = 0.125\n\n θ = 0.75\n\n ρ_ν = 0.5\n\n ρ_z = 0.5\n\n ρ_a = 0.9\n\n β = 0.99\n\n η = 3.77\n\n α = 0.25\n\n ϵ = 9\n\n τ = 0\n\n std_a = .01\n\n std_z = .05\n\n std_nu = .0025\n\n R > 1.000001\nend","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"and check the NSSS once more:","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"SS(Gali_2015_chapter_3_obc)\nSS(Gali_2015_chapter_3_obc)(:R)","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"Now we get R > R̄, so that the constraint is not binding in the NSSS and we can work with a stable first order solution:","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"get_solution(Gali_2015_chapter_3_obc)","category":"page"},{"location":"how-to/obc/#Generate-model-output","page":"Occasionally binding constraints","title":"Generate model output","text":"","category":"section"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"Having defined the system with an occasionally binding constraint we can simply simulate the model by calling:","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"import StatsPlots\nplot_simulations(Gali_2015_chapter_3_obc)","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"(Image: Simulation_elb)","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"In the background an optimisation problem is set up to find the smallest shocks in magnitude which enforce the equation containing the occasionally binding constraint over the unconditional forecast horizon 
(default 40 periods) at each period of the simulation. The plots show multiple spells of a binding effective lower bound and many other variables are skewed as a result of the nonlinearity. It can happen that it is not possible to find a combination of shocks which enforce the occasionally binding constraint equation. In this case one solution can be to make the horizon larger over which the algorithm tries to enforce the equation. You can do this by setting the parameter at the beginning of the @model section: @model Gali_2015_chapter_3_obc max_obc_horizon = 60 begin ... end.","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"Next let us change the effective lower bound to 0.99 and plot once more:","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"plot_simulations(Gali_2015_chapter_3_obc, parameters = :R̄ => 0.99)","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"(Image: Simulation_elb2)","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"Now, the effect of the effective lower bound becomes less important as it binds less often.","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"If you want to ignore the occasionally binding constraint you can simply call:","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"plot_simulations(Gali_2015_chapter_3_obc, ignore_obc = true)","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"(Image: Simulation_no_elb)","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"and you get the simulation based on the first order solution approximated around the NSSS, which is the same as the one for the model without the modified Taylor rule.","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"We can plot the impulse response functions for the eps_z shock, while setting the parameter of the occasionally binding constraint back to 1, as follows:","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"plot_irf(Gali_2015_chapter_3_obc, shocks = :eps_z, parameters = :R̄ => 1.0)","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"(Image: IRF_elb)","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"As you can see R remains above the effective lower bound in the first period.","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"Next, let us simulate the model using a series of shocks. E.g. 
three positive shocks to eps_z in periods 5, 10, and 15 in decreasing magnitude:","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"shcks = zeros(1,15)\nshcks[5] = 3.0\nshcks[10] = 2.0\nshcks[15] = 1.0\n\nsks = KeyedArray(shcks; Shocks = [:eps_z], Periods = 1:15)\n\nplot_irf(Gali_2015_chapter_3_obc, shocks = sks, periods = 10)","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"(Image: Shock_series_elb)","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"The effective lower bound is binding after all three shocks but the length of the constraint being binding varies with the shock size and is completely endogenous.","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"Last but not least, we can get the simulated moments of the model (theoretical moments are not available):","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"sims = get_irf(Gali_2015_chapter_3_obc, periods = 1000, shocks = :simulate, levels = true)","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"Let's look at the mean and standard deviation of borrowing:","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"import Statistics\nStatistics.mean(sims(:Y,:,:))","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"and","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"Statistics.std(sims(:Y,:,:))","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"Compare this to the theoretical mean of the model without the occasionally binding constraint:","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"get_mean(Gali_2015_chapter_3_obc)\nget_mean(Gali_2015_chapter_3_obc)(:Y)","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"and the theoretical standard deviation:","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"get_std(Gali_2015_chapter_3_obc)\nget_std(Gali_2015_chapter_3_obc)(:Y)","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"The mean of output is lower in the model with effective lower bound compared to the model without and the standard deviation is higher.","category":"page"},{"location":"how-to/obc/#Example:-Borrowing-constraint","page":"Occasionally binding constraints","title":"Example: Borrowing constraint","text":"","category":"section"},{"location":"how-to/obc/#Model-definition","page":"Occasionally binding constraints","title":"Model definition","text":"","category":"section"},{"location":"how-to/obc/","page":"Occasionally 
binding constraints","title":"Occasionally binding constraints","text":"Let us start with a consumption-saving model containing a borrowing constraint (see [@citet cuba2019likelihood] for details). Output is exogenously given, and households can only borrow up to a fraction of output and decide between saving and consumption. The first order conditions of the model are:","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"beginalign*\nY_t + B_t = C_t + R B_t-1\nlog(Y_t) = rho log(Y_t-1) + sigma varepsilon_t\nC_t^-gamma = beta R mathbbE_t (C_t+1^-gamma) + lambda_t\n0 = lambda_t (B_t - mY_t)\nendalign*","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"in order to write this model down we need to express the Karush-Kuhn-Tucker condition (last equation) using a max (or min) operator, so that it becomes:","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"0 = max(B_t - mY_t -lambda_t)","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"We can write this model containing an occasionally binding constraint in a very convenient way:","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"@model borrowing_constraint begin\n Y[0] + B[0] = C[0] + R * B[-1]\n\n log(Y[0]) = ρ * log(Y[-1]) + σ * ε[x]\n\n C[0]^(-γ) = β * R * C[1]^(-γ) + λ[0]\n\n 0 = max(B[0] - m * Y[0], -λ[0])\nend","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"In the background the system of equations is augmented by a series of anticipated shocks added to the equation containing the constraint (max/min operator). This explains the large number of auxilliary variables and shocks.","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"Next we define the parameters as usual:","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"@parameters borrowing_constraint begin\n R = 1.05\n β = 0.945\n ρ = 0.9\n σ = 0.05\n m = 1\n γ = 1\nend","category":"page"},{"location":"how-to/obc/#Working-with-the-model","page":"Occasionally binding constraints","title":"Working with the model","text":"","category":"section"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"For the non stochastic steady state (NSSS) to exist the constraint has to be binding (B[0] = m * Y[0]). This implies a wedge in the Euler equation (λ > 0).","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"We can check this by getting the NSSS:","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"SS(borrowing_constraint)","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"A common task is to plot impulse response function for positive and negative shocks. 
This should allow us to understand the role of the constraint.","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"First, we need to import the StatsPlots package and then we can plot the positive shock.","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"import StatsPlots\nplot_irf(borrowing_constraint)","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"(Image: Positive_shock)","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"We can see that the constraint is no longer binding in the first five periods because Y and B do not increase by the same amount. They should move by the same amount in the case of a negative shock:","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"import StatsPlots\nplot_irf(borrowing_constraint, negative_shock = true)","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"(Image: Negative_shock)","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"and indeed in this case they move by the same amount. The difference between a positive and negative shock demonstrates the influence of the occasionally binding constraint.","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"Another common exercise is to plot the impulse response functions from a series of shocks. Let's assume in period 10 there is a positive shocks and in period 30 a negative one. Let's view the results for 50 more periods. We can do this as follows:","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"shcks = zeros(1,30)\nshcks[10] = .6\nshcks[30] = -.6\n\nsks = KeyedArray(shcks; Shocks = [:ε], Periods = 1:30)\n\nplot_irf(borrowing_constraint, shocks = sks, periods = 50)","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"(Image: Simulation)","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"In this case the difference between the shocks and the impact of the constraint become quite obvious. Let's compare this with a version of the model that ignores the occasionally binding constraint. In order to plot the impulse response functions without dynamically enforcing the constraint we can simply write:","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"plot_irf(borrowing_constraint, shocks = sks, periods = 50, ignore_obc = true)","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"(Image: Simulation)","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"Another interesting statistic is model moments. 
As there are no theoretical moments we have to rely on simulated data:","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"sims = get_irf(borrowing_constraint, periods = 1000, shocks = :simulate, levels = true)","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"Let's look at the mean and standard deviation of borrowing:","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"import Statistics\nStatistics.mean(sims(:B,:,:))","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"and","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"Statistics.std(sims(:B,:,:))","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"Compare this to the theoretical mean of the model without the occasionally binding constraint:","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"get_mean(borrowing_constraint)","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"and the theoretical standard deviation:","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"get_std(borrowing_constraint)","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"The mean of borrowing is lower in the model with occasionally binding constraints compared to the model without and the standard deviation is higher.","category":"page"},{"location":"unfinished_docs/how_to/#Use-calibration-equations","page":"-","title":"Use calibration equations","text":"","category":"section"},{"location":"unfinished_docs/how_to/","page":"-","title":"-","text":"Next we need to add the parameters of the model. The macro @parameters takes care of this:","category":"page"},{"location":"unfinished_docs/how_to/","page":"-","title":"-","text":"@parameters RBC begin\n std_z = 0.01\n ρ = 0.2\n δ = 0.02\n α = 0.5\n β = 0.95\nend","category":"page"},{"location":"unfinished_docs/how_to/","page":"-","title":"-","text":"No need for line endings. If you want to define a parameter as a function of another parameter you can do this:","category":"page"},{"location":"unfinished_docs/how_to/","page":"-","title":"-","text":"@parameters RBC begin\n std_z = 0.01\n ρ = 0.2\n δ = 0.02\n α = 0.5\n beta1 = 1\n beta2 = .95\n β | β = beta2/beta1\nend","category":"page"},{"location":"unfinished_docs/how_to/","page":"-","title":"-","text":"Note that the parser takes parameters assigned to a numerical value first and then solves for the parameters defined by relationships: β | .... 
This means also the following will work:","category":"page"},{"location":"unfinished_docs/how_to/","page":"-","title":"-","text":"@parameters RBC begin\n β | β = beta2/beta1\n std_z = 0.01\n ρ = 0.2\n δ = 0.02\n α = 0.5\n beta1 = 1\n beta2 = .95\nend","category":"page"},{"location":"unfinished_docs/how_to/","page":"-","title":"-","text":"More interestingly one can use (non-stochastic) steady state values in the relationships:","category":"page"},{"location":"unfinished_docs/how_to/","page":"-","title":"-","text":"@parameters RBC begin\n β = .95\n std_z = 0.01\n ρ = 0.2\n δ = 0.02\n α | k[ss] / (4 * q[ss]) = 1.5\nend","category":"page"},{"location":"unfinished_docs/how_to/#Higher-order-perturbation-solutions","page":"-","title":"Higher order perturbation solutions","text":"","category":"section"},{"location":"unfinished_docs/how_to/#How-to-estimate-a-model","page":"-","title":"How to estimate a model","text":"","category":"section"},{"location":"unfinished_docs/how_to/#Interactive-plotting","page":"-","title":"Interactive plotting","text":"","category":"section"},{"location":"unfinished_docs/dsl/#DSL","page":"-","title":"DSL","text":"","category":"section"},{"location":"unfinished_docs/dsl/","page":"-","title":"-","text":"MacroModelling parses models written using a user-friendly syntax:","category":"page"},{"location":"unfinished_docs/dsl/","page":"-","title":"-","text":"@model RBC begin\n 1 / c[0] = (β / c[1]) * (α * exp(z[1]) * k[0]^(α - 1) + (1 - δ))\n c[0] + k[0] = (1 - δ) * k[-1] + q[0]\n q[0] = exp(z[0]) * k[-1]^α\n z[0] = ρ * z[-1] + std_z * eps_z[x]\nend","category":"page"},{"location":"unfinished_docs/dsl/","page":"-","title":"-","text":"The most important rule is that variables are followed by the timing in squared brackets for endogenous variables, e.g. Y[0], exogenous variables are marked by certain keywords (see below), e.g. ϵ[x], and parameters need no further syntax, e.g. α.","category":"page"},{"location":"unfinished_docs/dsl/","page":"-","title":"-","text":"A model written with this syntax allows the parser to identify, endogenous and exogenous variables and their timing as well as parameters.","category":"page"},{"location":"unfinished_docs/dsl/","page":"-","title":"-","text":"Note that variables in the present (period t or 0) have to be denoted as such: [0]. The parser also takes care of creating auxilliary variables in case the model contains leads or lags of the variables larger than 1:","category":"page"},{"location":"unfinished_docs/dsl/","page":"-","title":"-","text":"@model RBC_lead_lag begin\n 1 / c[0] = (β / c[1]) * (α * exp(z[1]) * k[0]^(α - 1) + (1 - δ))\n c[0] + k[0] = (1 - δ) * k[-1] + q[0]\n q[0] = exp(z[0]) * k[-1]^α\n z[0] = ρ * z[-1] + std_z * (eps_z[x-8] + eps_z[x-4] + eps_z[x+4] + eps_z_s[x])\n c̄⁻[0] = (c[0] + c[-1] + c[-2] + c[-3]) / 4\n c̄⁺[0] = (c[0] + c[1] + c[2] + c[3]) / 4\nend","category":"page"},{"location":"unfinished_docs/dsl/","page":"-","title":"-","text":"The parser recognises a variable as exogenous if the timing bracket contains one of the keyword/letters (case insensitive): x, ex, exo, exogenous. ","category":"page"},{"location":"unfinished_docs/dsl/","page":"-","title":"-","text":"Valid declarations of exogenous variables: ϵ[x], ϵ[Exo], ϵ[exOgenous]. 
","category":"page"},{"location":"unfinished_docs/dsl/","page":"-","title":"-","text":"Invalid declarations: ϵ[xo], ϵ[exogenously], ϵ[main shock x]","category":"page"},{"location":"unfinished_docs/dsl/","page":"-","title":"-","text":"Endogenous and exogenous variables can be in lead or lag, e.g.: the following describe a lead of 1 period: Y[1], Y[+1], Y[+ 1], eps[x+1], eps[Exo + 1] and the same goes for lags and periods > 1: `k[-2], c[+12], eps[x-4]","category":"page"},{"location":"unfinished_docs/dsl/","page":"-","title":"-","text":"Invalid declarations: Y[t-1], Y[t], Y[whatever], eps[x+t+1]","category":"page"},{"location":"unfinished_docs/dsl/","page":"-","title":"-","text":"Equations must be within one line and the = sign is optional.","category":"page"},{"location":"unfinished_docs/dsl/","page":"-","title":"-","text":"The parser recognises all functions in julia including those from StatsFuns.jl. Note that the syntax for distributions is the same as in MATLAB, e.g. normcdf. For those familiar with R the following also work: pnorm, dnorm, qnorm, and it also recognises: norminvcdf and norminv.","category":"page"},{"location":"unfinished_docs/dsl/","page":"-","title":"-","text":"Given these rules it is straightforward to write down a model. Once declared using the @model macro, the package creates an object containing all necessary information regarding the equations of the model.","category":"page"},{"location":"unfinished_docs/dsl/#Lead-/-lags-and-auxilliary-variables","page":"-","title":"Lead / lags and auxilliary variables","text":"","category":"section"},{"location":"tutorials/calibration/#Calibration-/-method-of-moments-Gali-(2015)","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments - Gali (2015)","text":"","category":"section"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"This tutorial is intended to show the workflow to calibrate a model using the method of moments. The tutorial is based on a standard model of monetary policy and will showcase the the use of gradient based optimisers and 2nd and 3rd order pruned solutions.","category":"page"},{"location":"tutorials/calibration/#Define-the-model","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Define the model","text":"","category":"section"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"The first step is always to name the model and write down the equations. 
For the Galı́ (2015), Chapter 3 this would go as follows:","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"ENV[\"GKSwstype\"] = \"100\"\nusing Random\nRandom.seed!(30)","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"using MacroModelling\n\n@model Gali_2015 begin\n W_real[0] = C[0] ^ σ * N[0] ^ φ\n\n Q[0] = β * (C[1] / C[0]) ^ (-σ) * Z[1] / Z[0] / Pi[1]\n\n R[0] = 1 / Q[0]\n\n Y[0] = A[0] * (N[0] / S[0]) ^ (1 - α)\n\n R[0] = Pi[1] * realinterest[0]\n\n R[0] = 1 / β * Pi[0] ^ ϕᵖⁱ * (Y[0] / Y[ss]) ^ ϕʸ * exp(nu[0])\n\n C[0] = Y[0]\n\n log(A[0]) = ρ_a * log(A[-1]) + std_a * eps_a[x]\n\n log(Z[0]) = ρ_z * log(Z[-1]) - std_z * eps_z[x]\n\n nu[0] = ρ_ν * nu[-1] + std_nu * eps_nu[x]\n\n MC[0] = W_real[0] / (S[0] * Y[0] * (1 - α) / N[0])\n\n 1 = θ * Pi[0] ^ (ϵ - 1) + (1 - θ) * Pi_star[0] ^ (1 - ϵ)\n\n S[0] = (1 - θ) * Pi_star[0] ^ (( - ϵ) / (1 - α)) + θ * Pi[0] ^ (ϵ / (1 - α)) * S[-1]\n\n Pi_star[0] ^ (1 + ϵ * α / (1 - α)) = ϵ * x_aux_1[0] / x_aux_2[0] * (1 - τ) / (ϵ - 1)\n\n x_aux_1[0] = MC[0] * Y[0] * Z[0] * C[0] ^ (-σ) + β * θ * Pi[1] ^ (ϵ + α * ϵ / (1 - α)) * x_aux_1[1]\n\n x_aux_2[0] = Y[0] * Z[0] * C[0] ^ (-σ) + β * θ * Pi[1] ^ (ϵ - 1) * x_aux_2[1]\n\n log_y[0] = log(Y[0])\n\n log_W_real[0] = log(W_real[0])\n\n log_N[0] = log(N[0])\n\n pi_ann[0] = 4 * log(Pi[0])\n\n i_ann[0] = 4 * log(R[0])\n\n r_real_ann[0] = 4 * log(realinterest[0])\n\n M_real[0] = Y[0] / R[0] ^ η\n\nend","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"First, we load the package and then use the @model macro to define our model. The first argument after @model is the model name and will be the name of the object in the global environment containing all information regarding the model. The second argument to the macro are the equations, which we write down between begin and end. Equations can contain an equality sign or the expression is assumed to equal 0. Equations cannot span multiple lines (unless you wrap the expression in brackets) and the timing of endogenous variables are expressed in the squared brackets following the variable name (e.g. [-1] for the past period). Exogenous variables (shocks) are followed by a keyword in squared brackets indicating them being exogenous (in this case [x]). Note that names can leverage julia's unicode capabilities (e.g. alpha can be written as α).","category":"page"},{"location":"tutorials/calibration/#Define-the-parameters","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Define the parameters","text":"","category":"section"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"Next we need to add the parameters of the model. 
The macro @parameters takes care of this:","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"@parameters Gali_2015 begin\n σ = 1\n\n φ = 5\n\n ϕᵖⁱ = 1.5\n \n ϕʸ = 0.125\n\n θ = 0.75\n\n ρ_ν = 0.5\n\n ρ_z = 0.5\n\n ρ_a = 0.9\n\n β = 0.99\n\n η = 3.77\n\n α = 0.25\n\n ϵ = 9\n\n τ = 0\n\n std_a = .01\n\n std_z = .05\n\n std_nu = .0025\n\nend","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"The block defining the parameters above only describes the simple parameter definitions the same way you assign values (e.g. α = .25).","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"Note that we have to write one parameter definition per line.","category":"page"},{"location":"tutorials/calibration/#Linear-solution","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Linear solution","text":"","category":"section"},{"location":"tutorials/calibration/#Inspect-model-moments","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Inspect model moments","text":"","category":"section"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"Given the equations and parameters, we have everything to we need for the package to generate the theoretical model moments. 
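Before turning to the moments it can be worth glancing at the non stochastic steady state around which the perturbation solution is computed. A minimal sketch, assuming the package's get_steady_state function (not shown elsewhere in this tutorial):

```julia
# non stochastic steady state implied by the calibration above (illustrative check only)
get_steady_state(Gali_2015)
```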
You can retrieve the mean of the linearised model as follows:","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"get_mean(Gali_2015)","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"and the standard deviation like this:","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"get_standard_deviation(Gali_2015)","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"You could also simply use: std or get_std to the same effect.","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"Another interesting output is the autocorrelation of the model variables:","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"get_autocorrelation(Gali_2015)","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"or the covariance:","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"get_covariance(Gali_2015)","category":"page"},{"location":"tutorials/calibration/#Parameter-sensitivities","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Parameter sensitivities","text":"","category":"section"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"Before embarking on calibrating the model it is useful to get familiar with the impact of parameter changes on model moments. MacroModelling.jl provides the partial derivatives of the model moments with respect to the model parameters. The model we are working with is of a medium size and by default derivatives are automatically shown as long as the calculation does not take too long (too many derivatives need to be taken). 
In this case they are not shown but it is possible to show them by explicitly defining the parameter for which to take the partial derivatives for:","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"get_mean(Gali_2015, parameter_derivatives = :σ)","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"or for multiple parameters:","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"get_mean(Gali_2015, parameter_derivatives = [:σ, :α, :β, :ϕᵖⁱ, :φ])","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"We can do the same for standard deviation or variance, and all parameters:","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"get_std(Gali_2015, parameter_derivatives = get_parameters(Gali_2015))","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"get_variance(Gali_2015, parameter_derivatives = get_parameters(Gali_2015))","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"You can use this information to calibrate certain values to your targets. For example, let's say we want to have higher real wages (:W_real), and lower inflation volatility. 
Since there are too many variables and parameters for them to be shown here, let's print only a subset of them:","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"get_mean(Gali_2015, parameter_derivatives = [:σ, :std_a, :α], variables = [:W_real,:Pi])","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"get_std(Gali_2015, parameter_derivatives = [:σ, :std_a, :α], variables = [:W_real,:Pi])","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"Looking at the sensitivity table we see that lowering the production function parameter :α will increase real wages, but at the same time it will increase inflation volatility. We could compensate that effect by decreasing the standard deviation of the total factor productivity shock :std_a.","category":"page"},{"location":"tutorials/calibration/#Method-of-moments","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Method of moments","text":"","category":"section"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"Instead of doing this by hand we can also set a target and have an optimiser find the corresponding parameter values. In order to do that we need to define targets, and set up an optimisation problem.","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"Our targets are:","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"Mean of W_real = 0.7\nStandard deviation of Pi = 0.01","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"For the optimisation problem we use the L-BFGS algorithm implemented in Optim.jl. This optimisation algorithm is very efficient and gradient based. Note that all model outputs are differentiable with respect to the parameters using automatic and implicit differentiation.","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"The package provides functions specialised for the use with gradient based code (e.g. gradient-based optimisers or samplers). 
For model statistics we can use get_statistics to get the mean of real wages and the standard deviation of inflation like this:","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"get_statistics(Gali_2015, Gali_2015.parameter_values, parameters = Gali_2015.parameters, mean = [:W_real], standard_deviation = [:Pi])","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"First we pass on the model object, followed by the parameter values and the parameter names the values correspond to. Then we define the outputs we want: for the mean we want real wages and for the standard deviation we want inflation. We can also get outputs for variance, covariance, or autocorrelation the same way as for the mean and standard deviation.","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"Next, let's define a function measuring how close we are to our target for given values of :α and :std_a:","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"function distance_to_target(parameter_value_inputs)\n model_statistics = get_statistics(Gali_2015, parameter_value_inputs, parameters = [:α, :std_a], mean = [:W_real], standard_deviation = [:Pi])\n targets = [0.7, 0.01]\n return sum(abs2, vcat(model_statistics...) - targets)\nend","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"Now let's test the function with the current parameter values. 
In case we forgot the parameter values we can also look them up like this:","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"get_parameters(Gali_2015, values = true)","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"with this we can test the distance function:","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"distance_to_target([0.25, 0.01])","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"Next we can pass it on to an optimiser and find the parameters corresponding to the best fit like this:","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"using Optim, LineSearches\nsol = Optim.optimize(distance_to_target,\n [0,0], \n [1,1], \n [0.25, 0.01], \n Optim.Fminbox(Optim.LBFGS(linesearch = LineSearches.BackTracking(order = 3))))","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"The first argument to the optimisation call is the function we defined previously, followed by lower and upper bounds, the starting values, and finally the algorithm. 
For the algorithm we have to add Fminbox because we have bounds (optional) and we set the specific line search method to speed up convergence (recommended but optional).","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"The output shows that we could almost perfectly match the target and the values of the parameters found by the optimiser are:","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"sol.minimizer","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"slightly lower for both parameters (in line with what we understood from the sensitivities).","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"You can combine the method of moments with estimation by simply adding the distance to the target to the posterior loglikelihood.","category":"page"},{"location":"tutorials/calibration/#Nonlinear-solutions","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Nonlinear solutions","text":"","category":"section"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"So far we used the linearised solution of the model. The package also provides nonlinear solutions and can calculate the theoretical model moments for pruned second and third order perturbation solutions. This can be of interest because nonlinear solutions capture volatility effects (at second order) and asymmetries (at third order). Furthermore, the moments of the data are often non-gaussian while linear solutions with gaussian noise can only generate gaussian distributions of model variables. 
Nonetheless, already pruned second order solutions produce non-gaussian skewness and kurtosis with gaussian noise.","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"From a user perspective little changes other than specifying that the solution algorithm is :pruned_second_order or :pruned_third_order.","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"For example we can get the mean for the pruned second order solution:","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"get_mean(Gali_2015, parameter_derivatives = [:σ, :std_a, :α], variables = [:W_real,:Pi], algorithm = :pruned_second_order)","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"Note that the mean of real wages is lower, while inflation is higher. We can see the effect of volatility with the partial derivatives for the shock standard deviations being non-zero. Larger shocks sizes drive down the mean of real wages while they increase inflation.","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"The mean of the variables does not change if we use pruned third order perturbation by construction but the standard deviation does. 
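One can verify this directly by comparing the means of the two pruned solutions. A small sketch, assuming get_mean also accepts algorithm = :pruned_third_order (as get_statistics does); the tolerance is arbitrary:

```julia
# means under pruned 2nd and 3rd order perturbation should coincide up to numerical error
mean_2nd = get_mean(Gali_2015, algorithm = :pruned_second_order)
mean_3rd = get_mean(Gali_2015, algorithm = :pruned_third_order)
maximum(abs, mean_2nd - mean_3rd) < 1e-8
```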
Let's look at the standard deviations for the pruned second order solution first:","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"get_std(Gali_2015, parameter_derivatives = [:σ, :std_a, :α], variables = [:W_real,:Pi], algorithm = :pruned_second_order)","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"for both inflation and real wages the volatility is higher and the standard deviation of the total factor productivity shock std_a has a much larger impact on the standard deviation of real wages compared to the linear solution.","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"At third order we get the following results:","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"get_std(Gali_2015, parameter_derivatives = [:σ, :std_a, :α], variables = [:W_real,:Pi], algorithm = :pruned_third_order)","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"standard deviations of inflation is more than two times as high and for real wages it is also substantially higher. 
Furthermore, standard deviations of shocks matter even more for the volatility of the endogenous variables.","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"These results make it clear that capturing the nonlinear interactions by using nonlinear solutions has important implications for the model moments and by extension the model dynamics.","category":"page"},{"location":"tutorials/calibration/#Method-of-moments-for-nonlinear-solutions","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Method of moments for nonlinear solutions","text":"","category":"section"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"Matching the theoretical moments of the nonlinear model solution to the data is no more complicated for the user than in the linear solution case (see above).","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"We need to define the target value and function and let an optimiser find the parameters minimising the distance to the target.","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"Keeping the targets:","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"Mean of W_real = 0.7\nStandard deviation of Pi = 0.01","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"we need to define the target function and specify that we use a nonlinear solution algorithm (e.g. pruned third order):","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"function distance_to_target(parameter_value_inputs)\n model_statistics = get_statistics(Gali_2015, parameter_value_inputs, algorithm = :pruned_third_order, parameters = [:α, :std_a], mean = [:W_real], standard_deviation = [:Pi])\n targets = [0.7, 0.01]\n return sum(abs2, vcat(model_statistics...) 
- targets)\nend","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"and then we can use the same code to optimise as in the linear solution case:","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"sol = Optim.optimize(distance_to_target,\n [0,0], \n [1,1], \n [0.25, 0.01], \n Optim.Fminbox(Optim.LBFGS(linesearch = LineSearches.BackTracking(order = 3))))","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"the calculations take substantially longer and we don't get as close to our target as for the linear solution case. The parameter values minimising the distance are:","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"sol.minimizer","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"lower than for the linear solution case and the theoretical moments given these parameter are:","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"get_statistics(Gali_2015, sol.minimizer, algorithm = :pruned_third_order, parameters = [:α, :std_a], mean = [:W_real], standard_deviation = [:Pi])","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"The solution does not match the standard deviation of inflation very well.","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"Potentially the partial derivatives change a lot for small changes in parameters and even though the partial derivatives for standard deviation of inflation were large wrt std_a they might be small for value returned from the optimisation. 
We can check this with:","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"get_std(Gali_2015, parameter_derivatives = [:σ, :std_a, :α], variables = [:W_real,:Pi], algorithm = :pruned_third_order, parameters = [:α, :std_a] .=> sol.minimizer)","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"and indeed it seems also the second derivative is large since the first derivative changed significantly.","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"Another parameter we can try is σ. It has a positive impact on the mean of real wages and a negative impact on standard deviation of inflation.","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"We need to redefine our target function and optimise it. Note that the previous call made a permanent change of parameters (as do all calls where parameters are explicitly set) and now std_a is set to 2.91e-9 and no longer 0.01.","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"function distance_to_target(parameter_value_inputs)\n model_statistics = get_statistics(Gali_2015, parameter_value_inputs, algorithm = :pruned_third_order, parameters = [:α, :σ], mean = [:W_real], standard_deviation = [:Pi])\n targets = [0.7, 0.01]\n return sum(abs2, vcat(model_statistics...) - targets)\nend\n\nsol = Optim.optimize(distance_to_target,\n [0,0], \n [1,3], \n [0.25, 1], \n Optim.Fminbox(Optim.LBFGS(linesearch = LineSearches.BackTracking(order = 3))))\n\nsol.minimizer","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"Given the new value for std_a and optimising over σ allows us to match the target exactly.","category":"page"},{"location":"tutorials/estimation/#Estimate-a-simple-model-Schorfheide-(2000)","page":"Estimate a model using gradient based samplers - Schorfheide (2000)","title":"Estimate a simple model - Schorfheide (2000)","text":"","category":"section"},{"location":"tutorials/estimation/","page":"Estimate a model using gradient based samplers - Schorfheide (2000)","title":"Estimate a model using gradient based samplers - Schorfheide (2000)","text":"This tutorial is intended to show the workflow to estimate a model using the No-U-Turn sampler (NUTS). The tutorial works with a benchmark model for estimation and can therefore be compared to results from other software packages (e.g. 
dynare).","category":"page"},{"location":"tutorials/estimation/#Define-the-model","page":"Estimate a model using gradient based samplers - Schorfheide (2000)","title":"Define the model","text":"","category":"section"},{"location":"tutorials/estimation/","page":"Estimate a model using gradient based samplers - Schorfheide (2000)","title":"Estimate a model using gradient based samplers - Schorfheide (2000)","text":"The first step is always to name the model and write down the equations. For the Schorfheide (2000) model this would go as follows:","category":"page"},{"location":"tutorials/estimation/","page":"Estimate a model using gradient based samplers - Schorfheide (2000)","title":"Estimate a model using gradient based samplers - Schorfheide (2000)","text":"ENV[\"GKSwstype\"] = \"100\"\nusing Random\nRandom.seed!(3)","category":"page"},{"location":"tutorials/estimation/","page":"Estimate a model using gradient based samplers - Schorfheide (2000)","title":"Estimate a model using gradient based samplers - Schorfheide (2000)","text":"using MacroModelling\n\n@model FS2000 begin\n dA[0] = exp(gam + z_e_a * e_a[x])\n\n log(m[0]) = (1 - rho) * log(mst) + rho * log(m[-1]) + z_e_m * e_m[x]\n\n - P[0] / (c[1] * P[1] * m[0]) + bet * P[1] * (alp * exp( - alp * (gam + log(e[1]))) * k[0] ^ (alp - 1) * n[1] ^ (1 - alp) + (1 - del) * exp( - (gam + log(e[1])))) / (c[2] * P[2] * m[1])=0\n\n W[0] = l[0] / n[0]\n\n - (psi / (1 - psi)) * (c[0] * P[0] / (1 - n[0])) + l[0] / n[0] = 0\n\n R[0] = P[0] * (1 - alp) * exp( - alp * (gam + z_e_a * e_a[x])) * k[-1] ^ alp * n[0] ^ ( - alp) / W[0]\n\n 1 / (c[0] * P[0]) - bet * P[0] * (1 - alp) * exp( - alp * (gam + z_e_a * e_a[x])) * k[-1] ^ alp * n[0] ^ (1 - alp) / (m[0] * l[0] * c[1] * P[1]) = 0\n\n c[0] + k[0] = exp( - alp * (gam + z_e_a * e_a[x])) * k[-1] ^ alp * n[0] ^ (1 - alp) + (1 - del) * exp( - (gam + z_e_a * e_a[x])) * k[-1]\n\n P[0] * c[0] = m[0]\n\n m[0] - 1 + d[0] = l[0]\n\n e[0] = exp(z_e_a * e_a[x])\n\n y[0] = k[-1] ^ alp * n[0] ^ (1 - alp) * exp( - alp * (gam + z_e_a * e_a[x]))\n\n gy_obs[0] = dA[0] * y[0] / y[-1]\n\n gp_obs[0] = (P[0] / P[-1]) * m[-1] / dA[0]\n\n log_gy_obs[0] = log(gy_obs[0])\n\n log_gp_obs[0] = log(gp_obs[0])\nend","category":"page"},{"location":"tutorials/estimation/","page":"Estimate a model using gradient based samplers - Schorfheide (2000)","title":"Estimate a model using gradient based samplers - Schorfheide (2000)","text":"First, we load the package and then use the @model macro to define our model. The first argument after @model is the model name and will be the name of the object in the global environment containing all information regarding the model. The second argument to the macro are the equations, which we write down between begin and end. Equations can contain an equality sign or the expression is assumed to equal 0. Equations cannot span multiple lines (unless you wrap the expression in brackets) and the timing of endogenous variables are expressed in the squared brackets following the variable name (e.g. [-1] for the past period). Exogenous variables (shocks) are followed by a keyword in squared brackets indicating them being exogenous (in this case [x]). Note that names can leverage julia's unicode capabilities (e.g. 
alpha can be written as α).","category":"page"},{"location":"tutorials/estimation/#Define-the-parameters","page":"Estimate a model using gradient based samplers - Schorfheide (2000)","title":"Define the parameters","text":"","category":"section"},{"location":"tutorials/estimation/","page":"Estimate a model using gradient based samplers - Schorfheide (2000)","title":"Estimate a model using gradient based samplers - Schorfheide (2000)","text":"Next we need to add the parameters of the model. The macro @parameters takes care of this:","category":"page"},{"location":"tutorials/estimation/","page":"Estimate a model using gradient based samplers - Schorfheide (2000)","title":"Estimate a model using gradient based samplers - Schorfheide (2000)","text":"@parameters FS2000 begin \n alp = 0.356\n bet = 0.993\n gam = 0.0085\n mst = 1.0002\n rho = 0.129\n psi = 0.65\n del = 0.01\n z_e_a = 0.035449\n z_e_m = 0.008862\nend","category":"page"},{"location":"tutorials/estimation/","page":"Estimate a model using gradient based samplers - Schorfheide (2000)","title":"Estimate a model using gradient based samplers - Schorfheide (2000)","text":"The block defining the parameters above only describes the simple parameter definitions the same way you assign values (e.g. alp = .356).","category":"page"},{"location":"tutorials/estimation/","page":"Estimate a model using gradient based samplers - Schorfheide (2000)","title":"Estimate a model using gradient based samplers - Schorfheide (2000)","text":"Note that we have to write one parameter definition per line.","category":"page"},{"location":"tutorials/estimation/#Load-data","page":"Estimate a model using gradient based samplers - Schorfheide (2000)","title":"Load data","text":"","category":"section"},{"location":"tutorials/estimation/","page":"Estimate a model using gradient based samplers - Schorfheide (2000)","title":"Estimate a model using gradient based samplers - Schorfheide (2000)","text":"Given the equations and parameters, we only need the entries in the data which correspond to the observables in the model (need to have the exact same name) to estimate the model. First, we load in the data from a CSV file (using the CSV and DataFrames packages) and convert it to a KeyedArray (using the AxisKeys package). Furthermore, we log transform the data provided in levels, and last but not least we select only those variables in the data which are observables in the model.","category":"page"},{"location":"tutorials/estimation/","page":"Estimate a model using gradient based samplers - Schorfheide (2000)","title":"Estimate a model using gradient based samplers - Schorfheide (2000)","text":"using CSV, DataFrames, AxisKeys\n\n# load data\ndat = CSV.read(\"../assets/FS2000_data.csv\", DataFrame)\ndata = KeyedArray(Array(dat)',Variable = Symbol.(\"log_\".*names(dat)),Time = 1:size(dat)[1])\ndata = log.(data)\n\n# declare observables\nobservables = sort(Symbol.(\"log_\".*names(dat)))\n\n# subset observables in data\ndata = data(observables,:)","category":"page"},{"location":"tutorials/estimation/#Define-bayesian-model","page":"Estimate a model using gradient based samplers - Schorfheide (2000)","title":"Define bayesian model","text":"","category":"section"},{"location":"tutorials/estimation/","page":"Estimate a model using gradient based samplers - Schorfheide (2000)","title":"Estimate a model using gradient based samplers - Schorfheide (2000)","text":"Next we define the parameter priors using the Turing package. 
The @model macro of the Turing package allows us to define the prior distributions over the parameters and combine it with the (Kalman filter) loglikelihood of the model and parameters given the data with the help of the get_loglikelihood function. We define the prior distributions in an array and pass it on to the arraydist function inside the @model macro from the Turing package. It is also possible to define the prior distributions inside the macro but especially for reverse mode auto differentiation the arraydist function is substantially faster. When defining the prior distributions we can rely n the distribution implemented in the Distributions package. Note that the μσ parameter allows us to hand over the moments (μ and σ) of the distribution as parameters in case of the non-normal distributions (Gamma, Beta, InverseGamma), and we can also define upper and lower bounds truncating the distribution as third and fourth arguments to the distribution functions. Last but not least, we define the loglikelihood and add it to the posterior loglikelihood with the help of the @addlogprob! macro.","category":"page"},{"location":"tutorials/estimation/","page":"Estimate a model using gradient based samplers - Schorfheide (2000)","title":"Estimate a model using gradient based samplers - Schorfheide (2000)","text":"import Turing\nimport Turing: NUTS, sample, logpdf\n\nprior_distributions = [\n Beta(0.356, 0.02, μσ = true), # alp\n Beta(0.993, 0.002, μσ = true), # bet\n Normal(0.0085, 0.003), # gam\n Normal(1.0002, 0.007), # mst\n Beta(0.129, 0.223, μσ = true), # rho\n Beta(0.65, 0.05, μσ = true), # psi\n Beta(0.01, 0.005, μσ = true), # del\n InverseGamma(0.035449, Inf, μσ = true), # z_e_a\n InverseGamma(0.008862, Inf, μσ = true) # z_e_m\n]\n\nTuring.@model function FS2000_loglikelihood_function(data, model)\n parameters ~ Turing.arraydist(prior_distributions)\n\n Turing.@addlogprob! get_loglikelihood(model, data, parameters)\nend","category":"page"},{"location":"tutorials/estimation/#Sample-from-posterior:-No-U-Turn-Sampler-(NUTS)","page":"Estimate a model using gradient based samplers - Schorfheide (2000)","title":"Sample from posterior: No-U-Turn Sampler (NUTS)","text":"","category":"section"},{"location":"tutorials/estimation/","page":"Estimate a model using gradient based samplers - Schorfheide (2000)","title":"Estimate a model using gradient based samplers - Schorfheide (2000)","text":"We use the NUTS sampler to retrieve the posterior distribution of the parameters. This sampler uses the gradient of the posterior loglikelihood with respect to the model parameters to navigate the parameter space. The NUTS sampler is considered robust, fast, and user-friendly (auto-tuning of hyper-parameters).","category":"page"},{"location":"tutorials/estimation/","page":"Estimate a model using gradient based samplers - Schorfheide (2000)","title":"Estimate a model using gradient based samplers - Schorfheide (2000)","text":"First we define the loglikelihood model with the specific data, and model. 
Next, we draw 1000 samples from the model:","category":"page"},{"location":"tutorials/estimation/","page":"Estimate a model using gradient based samplers - Schorfheide (2000)","title":"Estimate a model using gradient based samplers - Schorfheide (2000)","text":"FS2000_loglikelihood = FS2000_loglikelihood_function(data, FS2000);\n\nn_samples = 1000\n\nchain_NUTS = sample(FS2000_loglikelihood, NUTS(), n_samples, progress = false);","category":"page"},{"location":"tutorials/estimation/#Inspect-posterior","page":"Estimate a model using gradient based samplers - Schorfheide (2000)","title":"Inspect posterior","text":"","category":"section"},{"location":"tutorials/estimation/","page":"Estimate a model using gradient based samplers - Schorfheide (2000)","title":"Estimate a model using gradient based samplers - Schorfheide (2000)","text":"In order to understand the posterior distribution and the sequence of sample we are plot them:","category":"page"},{"location":"tutorials/estimation/","page":"Estimate a model using gradient based samplers - Schorfheide (2000)","title":"Estimate a model using gradient based samplers - Schorfheide (2000)","text":"using StatsPlots\nStatsPlots.plot(chain_NUTS);","category":"page"},{"location":"tutorials/estimation/","page":"Estimate a model using gradient based samplers - Schorfheide (2000)","title":"Estimate a model using gradient based samplers - Schorfheide (2000)","text":"(Image: NUTS chain)","category":"page"},{"location":"tutorials/estimation/","page":"Estimate a model using gradient based samplers - Schorfheide (2000)","title":"Estimate a model using gradient based samplers - Schorfheide (2000)","text":"Next, we are plotting the posterior loglikelihood along two parameters dimensions, with the other parameters ket at the posterior mean, and add the samples to the visualisation. 
This visualisation allows us to understand the curvature of the posterior and puts the samples in context.","category":"page"},{"location":"tutorials/estimation/","page":"Estimate a model using gradient based samplers - Schorfheide (2000)","title":"Estimate a model using gradient based samplers - Schorfheide (2000)","text":"using ComponentArrays, MCMCChains, DynamicPPL\n\nparameter_mean = mean(chain_NUTS)\npars = ComponentArray(parameter_mean.nt[2],Axis(parameter_mean.nt[1]))\n\nlogjoint(FS2000_loglikelihood, pars)\n\nfunction calculate_log_probability(par1, par2, pars_syms, orig_pars, model)\n orig_pars[pars_syms] = [par1, par2]\n logjoint(model, orig_pars)\nend\n\ngranularity = 32;\n\npar1 = :del;\npar2 = :gam;\npar_range1 = collect(range(minimum(chain_NUTS[par1]), stop = maximum(chain_NUTS[par1]), length = granularity));\npar_range2 = collect(range(minimum(chain_NUTS[par2]), stop = maximum(chain_NUTS[par2]), length = granularity));\n\np = surface(par_range1, par_range2, \n (x,y) -> calculate_log_probability(x, y, [par1, par2], pars, FS2000_loglikelihood),\n camera=(30, 65),\n colorbar=false,\n color=:inferno);\n\n\njoint_loglikelihood = [logjoint(FS2000_loglikelihood, ComponentArray(reduce(hcat, get(chain_NUTS, FS2000.parameters)[FS2000.parameters])[s,:], Axis(FS2000.parameters))) for s in 1:length(chain_NUTS)];\n\nscatter3d!(vec(collect(chain_NUTS[par1])),\n vec(collect(chain_NUTS[par2])),\n joint_loglikelihood,\n mc = :viridis, \n marker_z = collect(1:length(chain_NUTS)), \n msw = 0,\n legend = false, \n colorbar = false, \n xlabel = string(par1),\n ylabel = string(par2),\n zlabel = \"Log probability\",\n alpha = 0.5);\n\np","category":"page"},{"location":"tutorials/estimation/","page":"Estimate a model using gradient based samplers - Schorfheide (2000)","title":"Estimate a model using gradient based samplers - Schorfheide (2000)","text":"(Image: Posterior surface)","category":"page"},{"location":"tutorials/estimation/#Find-posterior-mode","page":"Estimate a model using gradient based samplers - Schorfheide (2000)","title":"Find posterior mode","text":"","category":"section"},{"location":"tutorials/estimation/","page":"Estimate a model using gradient based samplers - Schorfheide (2000)","title":"Estimate a model using gradient based samplers - Schorfheide (2000)","text":"Other than the mean and median of the posterior distribution we can also calculate the mode. 
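The mean and median are direct by-products of sampling and can be read off the chain object; a small sketch using utilities from MCMCChains (loaded above; the exact output layout is that package's choice):

```julia
# posterior summary statistics and quantiles from the NUTS draws
summarystats(chain_NUTS)   # mean, standard deviation, effective sample size per parameter
quantile(chain_NUTS)       # posterior quantiles, including the median
```

The mode, by contrast, is not returned by the sampler, so we compute it with a separate optimisation.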
To this end we will use L-BFGS optimisation routines from the Optim package.","category":"page"},{"location":"tutorials/estimation/","page":"Estimate a model using gradient based samplers - Schorfheide (2000)","title":"Estimate a model using gradient based samplers - Schorfheide (2000)","text":"First, we define the posterior loglikelihood function, similar to how we defined it for the Turing model macro.","category":"page"},{"location":"tutorials/estimation/","page":"Estimate a model using gradient based samplers - Schorfheide (2000)","title":"Estimate a model using gradient based samplers - Schorfheide (2000)","text":"function calculate_posterior_loglikelihood(parameters)\n alp, bet, gam, mst, rho, psi, del, z_e_a, z_e_m = parameters\n log_lik = 0\n log_lik -= get_loglikelihood(FS2000, data, parameters)\n log_lik -= logpdf(Beta(0.356, 0.02, μσ = true),alp)\n log_lik -= logpdf(Beta(0.993, 0.002, μσ = true),bet)\n log_lik -= logpdf(Normal(0.0085, 0.003),gam)\n log_lik -= logpdf(Normal(1.0002, 0.007),mst)\n log_lik -= logpdf(Beta(0.129, 0.223, μσ = true),rho)\n log_lik -= logpdf(Beta(0.65, 0.05, μσ = true),psi)\n log_lik -= logpdf(Beta(0.01, 0.005, μσ = true),del)\n log_lik -= logpdf(InverseGamma(0.035449, Inf, μσ = true),z_e_a)\n log_lik -= logpdf(InverseGamma(0.008862, Inf, μσ = true),z_e_m)\n return log_lik\nend","category":"page"},{"location":"tutorials/estimation/","page":"Estimate a model using gradient based samplers - Schorfheide (2000)","title":"Estimate a model using gradient based samplers - Schorfheide (2000)","text":"Next, we set up the optimisation problem, parameter bounds, and use the optimizer L-BFGS.","category":"page"},{"location":"tutorials/estimation/","page":"Estimate a model using gradient based samplers - Schorfheide (2000)","title":"Estimate a model using gradient based samplers - Schorfheide (2000)","text":"using Optim, LineSearches\n\nlbs = [0,0,-10,-10,0,0,0,0,0];\nubs = [1,1,10,10,1,1,1,100,100];\n\nsol = optimize(calculate_posterior_loglikelihood, lbs, ubs , FS2000.parameter_values, Fminbox(LBFGS(linesearch = LineSearches.BackTracking(order = 3))); autodiff = :forward)\n\nsol.minimum","category":"page"},{"location":"tutorials/estimation/#Model-estimates-given-the-data-and-the-model-solution","page":"Estimate a model using gradient based samplers - Schorfheide (2000)","title":"Model estimates given the data and the model solution","text":"","category":"section"},{"location":"tutorials/estimation/","page":"Estimate a model using gradient based samplers - Schorfheide (2000)","title":"Estimate a model using gradient based samplers - Schorfheide (2000)","text":"Having found the parameters at the posterior mode we can retrieve model estimates of the shocks which explain the data used to estimate it. This can be done with the get_estimated_shocks function:","category":"page"},{"location":"tutorials/estimation/","page":"Estimate a model using gradient based samplers - Schorfheide (2000)","title":"Estimate a model using gradient based samplers - Schorfheide (2000)","text":"get_estimated_shocks(FS2000, data, parameters = sol.minimizer)","category":"page"},{"location":"tutorials/estimation/","page":"Estimate a model using gradient based samplers - Schorfheide (2000)","title":"Estimate a model using gradient based samplers - Schorfheide (2000)","text":"As the first argument we pass the model, followed by the data (in levels), and then we pass the parameters at the posterior mode. 
The model is solved with this parameterisation and the shocks are calculated using the Kalman smoother.","category":"page"},{"location":"tutorials/estimation/","page":"Estimate a model using gradient based samplers - Schorfheide (2000)","title":"Estimate a model using gradient based samplers - Schorfheide (2000)","text":"We estimated the model on two variables but our model allows us to look at all variables given the data. Looking at the estimated variables can be done using the get_estimated_variables function:","category":"page"},{"location":"tutorials/estimation/","page":"Estimate a model using gradient based samplers - Schorfheide (2000)","title":"Estimate a model using gradient based samplers - Schorfheide (2000)","text":"get_estimated_variables(FS2000, data)","category":"page"},{"location":"tutorials/estimation/","page":"Estimate a model using gradient based samplers - Schorfheide (2000)","title":"Estimate a model using gradient based samplers - Schorfheide (2000)","text":"Since we already solved the model with the parameters at the posterior mode we do not need to do so again. The function returns a KeyedArray with the values of the variables in levels at each point in time.","category":"page"},{"location":"tutorials/estimation/","page":"Estimate a model using gradient based samplers - Schorfheide (2000)","title":"Estimate a model using gradient based samplers - Schorfheide (2000)","text":"Another useful tool is a historical shock decomposition. It allows us to understand the contribution of the shocks for each variable. This can be done using the get_shock_decomposition function:","category":"page"},{"location":"tutorials/estimation/","page":"Estimate a model using gradient based samplers - Schorfheide (2000)","title":"Estimate a model using gradient based samplers - Schorfheide (2000)","text":"get_shock_decomposition(FS2000, data)","category":"page"},{"location":"tutorials/estimation/","page":"Estimate a model using gradient based samplers - Schorfheide (2000)","title":"Estimate a model using gradient based samplers - Schorfheide (2000)","text":"We get a 3-dimensional array with variables, shocks, and time periods as dimensions. The shocks dimension also includes the initial value as a residual between the actual value and what was explained by the shocks. This computation also relies on the Kalman smoother.","category":"page"},{"location":"tutorials/estimation/","page":"Estimate a model using gradient based samplers - Schorfheide (2000)","title":"Estimate a model using gradient based samplers - Schorfheide (2000)","text":"Last but not least, we can also plot the model estimates and the shock decomposition. 
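For a quick numerical look before plotting, the decomposition can be indexed like the other KeyedArray outputs; a sketch under the assumption that the dimensions are ordered as described above (the variable and period picked here are only examples):

```julia
decomp = get_shock_decomposition(FS2000, data)
# contributions of each shock (plus the initial value) to log_gy_obs in the final period
decomp(:log_gy_obs, :, size(data, 2))
```

Graphical output is usually easier to digest, though.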
The model estimates plot, using plot_model_estimates:","category":"page"},{"location":"tutorials/estimation/","page":"Estimate a model using gradient based samplers - Schorfheide (2000)","title":"Estimate a model using gradient based samplers - Schorfheide (2000)","text":"plot_model_estimates(FS2000, data)","category":"page"},{"location":"tutorials/estimation/","page":"Estimate a model using gradient based samplers - Schorfheide (2000)","title":"Estimate a model using gradient based samplers - Schorfheide (2000)","text":"(Image: Model estimates)","category":"page"},{"location":"tutorials/estimation/","page":"Estimate a model using gradient based samplers - Schorfheide (2000)","title":"Estimate a model using gradient based samplers - Schorfheide (2000)","text":"shows the variables of the model (blue), the estimated shocks (in the last panel), and the data (red) used to estimate the model.","category":"page"},{"location":"tutorials/estimation/","page":"Estimate a model using gradient based samplers - Schorfheide (2000)","title":"Estimate a model using gradient based samplers - Schorfheide (2000)","text":"The shock decomposition can be plotted using plot_shock_decomposition:","category":"page"},{"location":"tutorials/estimation/","page":"Estimate a model using gradient based samplers - Schorfheide (2000)","title":"Estimate a model using gradient based samplers - Schorfheide (2000)","text":"plot_shock_decomposition(FS2000, data)","category":"page"},{"location":"tutorials/estimation/","page":"Estimate a model using gradient based samplers - Schorfheide (2000)","title":"Estimate a model using gradient based samplers - Schorfheide (2000)","text":"(Image: Shock decomposition)","category":"page"},{"location":"tutorials/estimation/","page":"Estimate a model using gradient based samplers - Schorfheide (2000)","title":"Estimate a model using gradient based samplers - Schorfheide (2000)","text":"and it shows the contribution of the shocks and the contribution of the initial value to the deviations of the variables.","category":"page"},{"location":"tutorials/sw03/#Work-with-a-complex-model-Smets-and-Wouters-(2003)","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a complex model - Smets and Wouters (2003)","text":"","category":"section"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"This tutorial is intended to show more advanced features of the package which come into play with more complex models. The tutorial will walk through the same steps as for the simple RBC model but will use the nonlinear Smets and Wouters (2003) model instead. Prior knowledge of DSGE models and their solution in practical terms (e.g. having used a mod file with dynare) is useful in understanding this tutorial.","category":"page"},{"location":"tutorials/sw03/#Define-the-model","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Define the model","text":"","category":"section"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"The first step is always to name the model and write down the equations. 
For the Smets and Wouters (2003) model this would go as follows:","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"ENV[\"GKSwstype\"] = \"100\"","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"using MacroModelling\n@model SW03 begin\n -q[0] + beta * ((1 - tau) * q[1] + epsilon_b[1] * (r_k[1] * z[1] - psi^-1 * r_k[ss] * (-1 + exp(psi * (-1 + z[1])))) * (C[1] - h * C[0])^(-sigma_c))\n -q_f[0] + beta * ((1 - tau) * q_f[1] + epsilon_b[1] * (r_k_f[1] * z_f[1] - psi^-1 * r_k_f[ss] * (-1 + exp(psi * (-1 + z_f[1])))) * (C_f[1] - h * C_f[0])^(-sigma_c))\n -r_k[0] + alpha * epsilon_a[0] * mc[0] * L[0]^(1 - alpha) * (K[-1] * z[0])^(-1 + alpha)\n -r_k_f[0] + alpha * epsilon_a[0] * mc_f[0] * L_f[0]^(1 - alpha) * (K_f[-1] * z_f[0])^(-1 + alpha)\n -G[0] + T[0]\n -G[0] + G_bar * epsilon_G[0]\n -G_f[0] + T_f[0]\n -G_f[0] + G_bar * epsilon_G[0]\n -L[0] + nu_w[0]^-1 * L_s[0]\n -L_s_f[0] + L_f[0] * (W_i_f[0] * W_f[0]^-1)^(lambda_w^-1 * (-1 - lambda_w))\n L_s_f[0] - L_f[0]\n L_s_f[0] + lambda_w^-1 * L_f[0] * W_f[0]^-1 * (-1 - lambda_w) * (-W_disutil_f[0] + W_i_f[0]) * (W_i_f[0] * W_f[0]^-1)^(-1 + lambda_w^-1 * (-1 - lambda_w))\n Pi_ws_f[0] - L_s_f[0] * (-W_disutil_f[0] + W_i_f[0])\n Pi_ps_f[0] - Y_f[0] * (-mc_f[0] + P_j_f[0]) * P_j_f[0]^(-lambda_p^-1 * (1 + lambda_p))\n -Q[0] + epsilon_b[0]^-1 * q[0] * (C[0] - h * C[-1])^(sigma_c)\n -Q_f[0] + epsilon_b[0]^-1 * q_f[0] * (C_f[0] - h * C_f[-1])^(sigma_c)\n -W[0] + epsilon_a[0] * mc[0] * (1 - alpha) * L[0]^(-alpha) * (K[-1] * z[0])^alpha\n -W_f[0] + epsilon_a[0] * mc_f[0] * (1 - alpha) * L_f[0]^(-alpha) * (K_f[-1] * z_f[0])^alpha\n -Y_f[0] + Y_s_f[0]\n Y_s[0] - nu_p[0] * Y[0]\n -Y_s_f[0] + Y_f[0] * P_j_f[0]^(-lambda_p^-1 * (1 + lambda_p))\n beta * epsilon_b[1] * (C_f[1] - h * C_f[0])^(-sigma_c) - epsilon_b[0] * R_f[0]^-1 * (C_f[0] - h * C_f[-1])^(-sigma_c)\n beta * epsilon_b[1] * pi[1]^-1 * (C[1] - h * C[0])^(-sigma_c) - epsilon_b[0] * R[0]^-1 * (C[0] - h * C[-1])^(-sigma_c)\n Y_f[0] * P_j_f[0]^(-lambda_p^-1 * (1 + lambda_p)) - lambda_p^-1 * Y_f[0] * (1 + lambda_p) * (-mc_f[0] + P_j_f[0]) * P_j_f[0]^(-1 - lambda_p^-1 * (1 + lambda_p))\n epsilon_b[0] * W_disutil_f[0] * (C_f[0] - h * C_f[-1])^(-sigma_c) - omega * epsilon_b[0] * epsilon_L[0] * L_s_f[0]^sigma_l\n -1 + xi_p * (pi[0]^-1 * pi[-1]^gamma_p)^(-lambda_p^-1) + (1 - xi_p) * pi_star[0]^(-lambda_p^-1)\n -1 + (1 - xi_w) * (w_star[0] * W[0]^-1)^(-lambda_w^-1) + xi_w * (W[-1] * W[0]^-1)^(-lambda_w^-1) * (pi[0]^-1 * pi[-1]^gamma_w)^(-lambda_w^-1)\n -Phi - Y_s[0] + epsilon_a[0] * L[0]^(1 - alpha) * (K[-1] * z[0])^alpha\n -Phi - Y_f[0] * P_j_f[0]^(-lambda_p^-1 * (1 + lambda_p)) + epsilon_a[0] * L_f[0]^(1 - alpha) * (K_f[-1] * z_f[0])^alpha\n std_eta_b * eta_b[x] - log(epsilon_b[0]) + rho_b * log(epsilon_b[-1])\n -std_eta_L * eta_L[x] - log(epsilon_L[0]) + rho_L * log(epsilon_L[-1])\n std_eta_I * eta_I[x] - log(epsilon_I[0]) + rho_I * log(epsilon_I[-1])\n std_eta_w * eta_w[x] - f_1[0] + f_2[0]\n std_eta_a * eta_a[x] - log(epsilon_a[0]) + rho_a * log(epsilon_a[-1])\n std_eta_p * eta_p[x] - g_1[0] + g_2[0] * (1 + lambda_p)\n std_eta_G * eta_G[x] - log(epsilon_G[0]) + rho_G * log(epsilon_G[-1])\n -f_1[0] + beta * xi_w * f_1[1] * (w_star[0]^-1 * w_star[1])^(lambda_w^-1) * (pi[1]^-1 * pi[0]^gamma_w)^(-lambda_w^-1) + epsilon_b[0] * w_star[0] * 
L[0] * (1 + lambda_w)^-1 * (C[0] - h * C[-1])^(-sigma_c) * (w_star[0] * W[0]^-1)^(-lambda_w^-1 * (1 + lambda_w))\n -f_2[0] + beta * xi_w * f_2[1] * (w_star[0]^-1 * w_star[1])^(lambda_w^-1 * (1 + lambda_w) * (1 + sigma_l)) * (pi[1]^-1 * pi[0]^gamma_w)^(-lambda_w^-1 * (1 + lambda_w) * (1 + sigma_l)) + omega * epsilon_b[0] * epsilon_L[0] * (L[0] * (w_star[0] * W[0]^-1)^(-lambda_w^-1 * (1 + lambda_w)))^(1 + sigma_l)\n -g_1[0] + beta * xi_p * pi_star[0] * g_1[1] * pi_star[1]^-1 * (pi[1]^-1 * pi[0]^gamma_p)^(-lambda_p^-1) + epsilon_b[0] * pi_star[0] * Y[0] * (C[0] - h * C[-1])^(-sigma_c)\n -g_2[0] + beta * xi_p * g_2[1] * (pi[1]^-1 * pi[0]^gamma_p)^(-lambda_p^-1 * (1 + lambda_p)) + epsilon_b[0] * mc[0] * Y[0] * (C[0] - h * C[-1])^(-sigma_c)\n -nu_w[0] + (1 - xi_w) * (w_star[0] * W[0]^-1)^(-lambda_w^-1 * (1 + lambda_w)) + xi_w * nu_w[-1] * (W[-1] * pi[0]^-1 * W[0]^-1 * pi[-1]^gamma_w)^(-lambda_w^-1 * (1 + lambda_w))\n -nu_p[0] + (1 - xi_p) * pi_star[0]^(-lambda_p^-1 * (1 + lambda_p)) + xi_p * nu_p[-1] * (pi[0]^-1 * pi[-1]^gamma_p)^(-lambda_p^-1 * (1 + lambda_p))\n -K[0] + K[-1] * (1 - tau) + I[0] * (1 - 0.5 * varphi * (-1 + I[-1]^-1 * epsilon_I[0] * I[0])^2)\n -K_f[0] + K_f[-1] * (1 - tau) + I_f[0] * (1 - 0.5 * varphi * (-1 + I_f[-1]^-1 * epsilon_I[0] * I_f[0])^2)\n U[0] - beta * U[1] - epsilon_b[0] * ((1 - sigma_c)^-1 * (C[0] - h * C[-1])^(1 - sigma_c) - omega * epsilon_L[0] * (1 + sigma_l)^-1 * L_s[0]^(1 + sigma_l))\n U_f[0] - beta * U_f[1] - epsilon_b[0] * ((1 - sigma_c)^-1 * (C_f[0] - h * C_f[-1])^(1 - sigma_c) - omega * epsilon_L[0] * (1 + sigma_l)^-1 * L_s_f[0]^(1 + sigma_l))\n -epsilon_b[0] * (C[0] - h * C[-1])^(-sigma_c) + q[0] * (1 - 0.5 * varphi * (-1 + I[-1]^-1 * epsilon_I[0] * I[0])^2 - varphi * I[-1]^-1 * epsilon_I[0] * I[0] * (-1 + I[-1]^-1 * epsilon_I[0] * I[0])) + beta * varphi * I[0]^-2 * epsilon_I[1] * q[1] * I[1]^2 * (-1 + I[0]^-1 * epsilon_I[1] * I[1])\n -epsilon_b[0] * (C_f[0] - h * C_f[-1])^(-sigma_c) + q_f[0] * (1 - 0.5 * varphi * (-1 + I_f[-1]^-1 * epsilon_I[0] * I_f[0])^2 - varphi * I_f[-1]^-1 * epsilon_I[0] * I_f[0] * (-1 + I_f[-1]^-1 * epsilon_I[0] * I_f[0])) + beta * varphi * I_f[0]^-2 * epsilon_I[1] * q_f[1] * I_f[1]^2 * (-1 + I_f[0]^-1 * epsilon_I[1] * I_f[1])\n std_eta_pi * eta_pi[x] - log(pi_obj[0]) + rho_pi_bar * log(pi_obj[-1]) + log(calibr_pi_obj) * (1 - rho_pi_bar)\n -C[0] - I[0] - T[0] + Y[0] - psi^-1 * r_k[ss] * K[-1] * (-1 + exp(psi * (-1 + z[0])))\n -calibr_pi + std_eta_R * eta_R[x] - log(R[ss]^-1 * R[0]) + r_Delta_pi * (-log(pi[ss]^-1 * pi[-1]) + log(pi[ss]^-1 * pi[0])) + r_Delta_y * (-log(Y[ss]^-1 * Y[-1]) + log(Y[ss]^-1 * Y[0]) + log(Y_f[ss]^-1 * Y_f[-1]) - log(Y_f[ss]^-1 * Y_f[0])) + rho * log(R[ss]^-1 * R[-1]) + (1 - rho) * (log(pi_obj[0]) + r_pi * (-log(pi_obj[0]) + log(pi[ss]^-1 * pi[-1])) + r_Y * (log(Y[ss]^-1 * Y[0]) - log(Y_f[ss]^-1 * Y_f[0])))\n -C_f[0] - I_f[0] + Pi_ws_f[0] - T_f[0] + Y_f[0] + L_s_f[0] * W_disutil_f[0] - L_f[0] * W_f[0] - psi^-1 * r_k_f[ss] * K_f[-1] * (-1 + exp(psi * (-1 + z_f[0])))\n epsilon_b[0] * (K[-1] * r_k[0] - r_k[ss] * K[-1] * exp(psi * (-1 + z[0]))) * (C[0] - h * C[-1])^(-sigma_c)\n epsilon_b[0] * (K_f[-1] * r_k_f[0] - r_k_f[ss] * K_f[-1] * exp(psi * (-1 + z_f[0]))) * (C_f[0] - h * C_f[-1])^(-sigma_c)\nend","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"First, we load the package and then use the @model macro to define our model. 
The first argument after @model is the model name and will be the name of the object in the global environment containing all information regarding the model. The second argument to the macro is the equations, which we write down between begin and end. Equations can contain an equality sign or the expression is assumed to equal 0. Equations cannot span multiple lines (unless you wrap the expression in brackets) and the timing of endogenous variables is expressed in the square brackets following the variable name (e.g. [-1] for the past period). Exogenous variables (shocks) are followed by a keyword in square brackets indicating that they are exogenous (in this case [x]). In this example there are also variables evaluated at the non stochastic steady state, denoted by [ss]. Note that names can leverage julia's unicode capabilities (alpha can be written as α).","category":"page"},{"location":"tutorials/sw03/#Define-the-parameters","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Define the parameters","text":"","category":"section"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"Next we need to add the parameters of the model. The macro @parameters takes care of this:","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"@parameters SW03 begin \n lambda_p = .368\n G_bar = .362\n lambda_w = 0.5\n Phi = .819\n\n alpha = 0.3\n beta = 0.99\n gamma_w = 0.763\n gamma_p = 0.469\n h = 0.573\n omega = 1\n psi = 0.169\n\n r_pi = 1.684\n r_Y = 0.099\n r_Delta_pi = 0.14\n r_Delta_y = 0.159\n\n sigma_c = 1.353\n sigma_l = 2.4\n tau = 0.025\n varphi = 6.771\n xi_w = 0.737\n xi_p = 0.908\n\n rho = 0.961\n rho_b = 0.855\n rho_L = 0.889\n rho_I = 0.927\n rho_a = 0.823\n rho_G = 0.949\n rho_pi_bar = 0.924\n\n std_eta_b = 0.336\n std_eta_L = 3.52\n std_eta_I = 0.085\n std_eta_a = 0.598\n std_eta_w = 0.6853261\n std_eta_p = 0.7896512\n std_eta_G = 0.325\n std_eta_R = 0.081\n std_eta_pi = 0.017\n\n calibr_pi_obj | 1 = pi_obj[ss]\n calibr_pi | pi[ss] = pi_obj[ss]\nend","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"The block defining the parameters above has two different inputs.","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"First, there are simple parameter definitions, written the same way you assign values (e.g. Phi = .819).","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"Second, there are calibration equations where we treat the value of a parameter as unknown (e.g. calibr_pi_obj) and want an additional equation to hold (e.g. 1 = pi_obj[ss]). The additional equation can contain variables in SS or parameters. Putting it together, a calibration equation is defined by the unknown parameter, and the calibration equation, separated by | (e.g. 
calibr_pi_obj | 1 = pi_obj[ss] and also 1 = pi_obj[ss] | calibr_pi_obj).","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"Note that we have to write one parameter definition per line.","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"Given the equations and parameters, the package will first attempt to solve the system of nonlinear equations symbolically (including possible calibration equations). If an analytical solution is not possible, numerical solution methods are used to try and solve it. There is no guarantee that a solution can be found, but it is highly likely, provided that a solution exists. The problem setup tries to incorporate parts of the structure of the problem, e.g. bounds on variables: if one equation contains log(k) it must be that k > 0. Nonetheless, the user can also provide useful information such as variable bounds or initial guesses. Bounds can be set by adding another expression to the parameter block e.g.: c > 0. Variables taking large values are typically a problem for numerical solvers. Therefore, providing a guess for such variables will increase the speed of the solver. Guesses can be provided as a Dict after the model name and before the parameter definitions block, e.g.: @parameters RBC guess = Dict(k => 10) begin ... end.","category":"page"},{"location":"tutorials/sw03/#Plot-impulse-response-functions-(IRFs)","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Plot impulse response functions (IRFs)","text":"","category":"section"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"A useful output to analyze is the set of IRFs for the exogenous shocks. Calling plot_irf (different names for the same function are also supported: plot_irfs or plot_IRF) will take care of this. Please note that you need to import the StatsPlots package once before the first plot. 
In the background the package solves (numerically in this complex case) for the non stochastic steady state (SS) and calculates the first order perturbation solution.","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"import StatsPlots\nplot_irf(SW03)","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"(Image: RBC IRF)","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"When the model is solved the first time (in this case by calling plot_irf), the package breaks down the steady state problem into independent blocks and first attempts to solve them symbolically and if that fails numerically.","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"The plots show the responses of the endogenous variables to a one standard deviation positive (indicated by Shock⁺ in chart title) unanticipated shock. Therefore there are as many subplots as there are combinations of shocks and endogenous variables (which are impacted by the shock). Plots are composed of up to 9 subplots and the plot title shows the model name followed by the name of the shock and which plot we are seeing out of the plots for this shock (e.g. (1/3) means we see the first out of three plots for this shock). Subplots show the sorted endogenous variables with the left y-axis showing the level of the respective variable and the right y-axis showing the percent deviation from the SS (if variable is strictly positive). The horizontal black line marks the SS.","category":"page"},{"location":"tutorials/sw03/#Explore-other-parameter-values","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Explore other parameter values","text":"","category":"section"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"Playing around with the model can be especially insightful in the early phase of model development. The package tries to facilitate this process to the extent possible. Typically one wants to try different parameter values and see how the IRFs change. This can be done by using the parameters argument of the plot_irf function. We pass a Pair with the Symbol of the parameter (: in front of the parameter name) we want to change and its new value to the parameter argument (e.g. :alpha => 0.305). Furthermore, we want to focus on certain shocks and variables. We select for the example the eta_R shock by passing it as a Symbol to the shocks argument of the plot_irf function. 
For the variables, we choose to plot U, Y, I, R, and C and achieve that by passing a Vector of Symbols to the variables argument of the plot_irf function:","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"plot_irf(SW03, \n parameters = :alpha => 0.305, \n variables = [:U,:Y,:I,:R,:C], \n shocks = :eta_R)","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"(Image: IRF plot)","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"First, the package finds the new steady state, solves the model dynamics around it and saves the new parameters and solution in the model object. Second, note that with the new parameters the IRFs changed (e.g. compare the y-axis values for U). Updating the plot for new parameters is significantly faster than calling it the first time. This is because the first call triggers compilation of the model functions, and once compiled the user benefits from the performance of the specialised compiled code. Furthermore, finding the SS from a valid SS as a starting point is faster.","category":"page"},{"location":"tutorials/sw03/#Plot-model-simulation","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Plot model simulation","text":"","category":"section"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"Another insightful output is a simulation of the model. Here we can use the plot_simulations function. Again we want to only look at a subset of the variables and specify it in the variables argument. Please note that you need to import the StatsPlots package once before the first plot. To the same effect we can use the plot_irf function and specify in the shocks argument that we want to :simulate the model and set the periods argument to 100.","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"plot_simulations(SW03, variables = [:U,:Y,:I,:R,:C])","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"(Image: Simulate SW03)","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"The plots show the model's endogenous variables in response to random draws for all exogenous shocks over 100 periods.","category":"page"},{"location":"tutorials/sw03/#Plot-specific-series-of-shocks","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Plot specific series of shocks","text":"","category":"section"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"Sometimes one has a specific series of shocks in mind and wants to see the corresponding model response of endogenous variables. 
This can be achieved by passing a Matrix or KeyedArray of the series of shocks to the shocks argument of the plot_irf function. Let's assume there is a positive 1 standard deviation shock to eta_b in period 2 and a negative 1 standard deviation shock to eta_w in period 12. This can be implemented as follows:","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"using AxisKeys\nshock_series = KeyedArray(zeros(2,12), Shocks = [:eta_b, :eta_w], Periods = 1:12)\nshock_series[1,2] = 1\nshock_series[2,12] = -1\nplot_irf(SW03, shocks = shock_series, variables = [:W,:r_k,:w_star,:R])","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"(Image: Series of shocks RBC)","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"First, we construct the KeyedArray containing the series of shocks and pass it to the shocks argument. The plot shows the paths of the selected variables for the two shocks hitting the economy in periods 2 and 12 and 40 quarters thereafter.","category":"page"},{"location":"tutorials/sw03/#Model-statistics","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Model statistics","text":"","category":"section"},{"location":"tutorials/sw03/#Steady-state","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Steady state","text":"","category":"section"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"The package solves for the SS automatically and we got an idea of the SS values in the plots. If we want to see the SS values and the derivatives of the SS with respect to the model parameters we can call get_steady_state. The model has 39 parameters and 54 variables. Since we are not interested in all derivatives for all parameters we select a subset. This can be done by passing on a Vector of Symbols of the parameters to the parameter_derivatives argument:","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"get_steady_state(SW03, parameter_derivatives = [:alpha,:beta])","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"The first column of the returned matrix shows the SS while the second to last columns show the derivatives of the SS values (indicated in the rows) with respect to the parameters (indicated in the columns). For example, the derivative of C with respect to beta is 14.4994. This means that if we increase beta by 1, C would increase by 14.4994 approximately. 
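More precisely, the derivative is a local, linear approximation: for a small change in beta the change in C is roughly 14.4994 times the change in beta, so an increase of beta by 0.001 should raise C by about 0.0145. 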
Let's see how this plays out by changing beta from 0.99 to 0.991 (a change of +0.001):","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"get_steady_state(SW03, \n parameter_derivatives = [:alpha,:G_bar], \n parameters = :beta => .991)","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"Note that get_steady_state, like all other get functions, has the parameters argument. Hence, whatever output we are looking at we can change the parameters of the model.","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"The new value of beta changed the SS as expected and C increased by 0.01465. The implied ratio (0.01465/0.001 = 14.65) comes close to the partial derivative previously calculated. The derivatives help in understanding the effect of parameter changes on the steady state and make for easier navigation of the parameter space.","category":"page"},{"location":"tutorials/sw03/#Standard-deviations","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Standard deviations","text":"","category":"section"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"Next to the SS we can also show the model implied standard deviations of the model variables. get_standard_deviation takes care of this. Additionally we will set the parameter values to what they were in the beginning by passing on a Tuple of Pairs containing the Symbols of the parameters to be changed and their new (initial) values (e.g. (:alpha => 0.3, :beta => .99)).","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"get_standard_deviation(SW03, \n parameter_derivatives = [:alpha,:beta], \n parameters = (:alpha => 0.3, :beta => .99))","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"The function returns the model implied standard deviations of the model variables and their derivatives with respect to the model parameters. For example, the derivative of the standard deviation of q with respect to alpha is -19.0184. In other words, the standard deviation of q decreases with increasing alpha.","category":"page"},{"location":"tutorials/sw03/#Correlations","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Correlations","text":"","category":"section"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"Another useful statistic is the model implied correlation of variables. 
We use get_correlation for this:","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"get_correlation(SW03)","category":"page"},{"location":"tutorials/sw03/#Autocorrelations","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Autocorrelations","text":"","category":"section"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"Next, we have a look at the model implied autocorrelations of model variables using the get_autocorrelation function:","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"get_autocorrelation(SW03)","category":"page"},{"location":"tutorials/sw03/#Variance-decomposition","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Variance decomposition","text":"","category":"section"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"The model implied contribution of each shock to the variance of the model variables can be calculated by using the get_variance_decomposition function:","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"get_variance_decomposition(SW03)","category":"page"},{"location":"tutorials/sw03/#Conditional-variance-decomposition","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Conditional variance decomposition","text":"","category":"section"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"Last but not least, we have a look at the model implied contribution of each shock per period to the variance of the model variables (also called forecast error variance decomposition) by using the get_conditional_variance_decomposition function:","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"get_conditional_variance_decomposition(SW03)","category":"page"},{"location":"tutorials/sw03/#Plot-conditional-variance-decomposition","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Plot conditional variance decomposition","text":"","category":"section"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"Especially for the conditional variance decomposition it is convenient to look at a plot instead of the raw numbers. This can be done using the plot_conditional_variance_decomposition function. 
Please note that you need to import the StatsPlots package once before the first plot.","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"plot_conditional_variance_decomposition(SW03, variables = [:U,:Y,:I,:R,:C])","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"(Image: FEVD SW03)","category":"page"},{"location":"tutorials/sw03/#Model-solution","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Model solution","text":"","category":"section"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"Further insightful outputs are the policy and transition functions of the first order perturbation solution. To retrieve the solution we call the function get_solution:","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"get_solution(SW03)","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"The solution provides information about how past states and present shocks impact present variables. The first row contains the SS for the variables denoted in the columns. The second to last rows contain the past states, with the time index ₍₋₁₎, and present shocks, with exogenous variables denoted by ₍ₓ₎. For example, the immediate impact of a shock to eta_w on z is 0.00222469.","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"There is also the possibility to visually inspect the solution using the plot_solution function. Please note that you need to import the StatsPlots package once before the first plot.","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"plot_solution(SW03, :pi, variables = [:C,:I,:K,:L,:W,:R])","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"(Image: SW03 solution)","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"The chart shows the first order perturbation solution mapping from the past state pi to the present variables C, I, K, L, W, and R. 
The state variable covers a range of two standard deviations around the non stochastic steady state and all other states remain in the non stochastic steady state.","category":"page"},{"location":"tutorials/sw03/#Obtain-array-of-IRFs-or-model-simulations","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Obtain array of IRFs or model simulations","text":"","category":"section"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"Last but not least, the user might want to obtain simulated time series of the model or IRFs without plotting them. For IRFs this is possible by calling get_irf:","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"get_irf(SW03)","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"which returns a 3-dimensional KeyedArray with variables (absolute deviations from the relevant steady state by default) in rows, the period in columns, and the shocks as the third dimension.","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"For simulations this is possible by calling simulate:","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"simulate(SW03)","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"which returns the simulated data in levels in a 3-dimensional KeyedArray of the same structure as for the IRFs.","category":"page"},{"location":"tutorials/sw03/#Conditional-forecasts","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Conditional forecasts","text":"","category":"section"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"Conditional forecasting is a useful tool to incorporate, for example, forecasts into a model and then add shocks on top.","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"For example, we might be interested in the model dynamics given a path for Y and pi over the first 4 quarters, with a negative shock to eta_w arriving in the quarter thereafter. Furthermore, we want only a subset of the shocks to be used in the first two periods to match the conditions on the endogenous variables. 
This can be implemented using the get_conditional_forecast function and visualised with the plot_conditional_forecast function.","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"First, we define the conditions on the endogenous variables as deviations from the non stochastic steady state (Y and pi in this case) using a KeyedArray from the AxisKeys package (check get_conditional_forecast for other ways to define the conditions):","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"using AxisKeys\nconditions = KeyedArray(Matrix{Union{Nothing,Float64}}(undef,2,4),Variables = [:Y, :pi], Periods = 1:4)\nconditions[1,1:4] .= [-.01,0,.01,.02];\nconditions[2,1:4] .= [.01,0,-.01,-.02];","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"Note that all other endogenous variables not part of the KeyedArray are also not conditioned on.","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"Next, we define the conditions on the shocks using a Matrix (check get_conditional_forecast for other ways to define the conditions on the shocks):","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"shocks = Matrix{Union{Nothing,Float64}}(undef,9,5)\nshocks[[1:3...,5,9],1:2] .= 0;\nshocks[9,5] = -1;","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"The above shock Matrix means that for the first two periods shocks 1, 2, 3, 5, and 9 are fixed at zero and in the fifth period there is a negative shock of eta_w (the 9th shock).","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"Finally we can get the conditional forecast:","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"get_conditional_forecast(SW03, conditions, shocks = shocks, variables = [:Y,:pi,:W], conditions_in_levels = false)","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"The function returns a KeyedArray with the values of the endogenous variables and shocks matching the conditions exactly.","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"We can also plot the conditional forecast. 
Please note that you need to import the StatsPlots package once before the first plot.","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"plot_conditional_forecast(SW03,conditions, shocks = shocks, plots_per_page = 6,variables = [:Y,:pi,:W],conditions_in_levels = false)","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"(Image: SW03 conditional forecast 1)","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"(Image: SW03 conditional forecast 2)","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"and we need to set conditions_in_levels = false since the conditions are defined in deviations.","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"Note that the stars indicate the values the model is conditioned on.","category":"page"},{"location":"#MacroModelling.jl","page":"Introduction","title":"MacroModelling.jl","text":"","category":"section"},{"location":"","page":"Introduction","title":"Introduction","text":"Author: Thore Kockerols (@thorek1)","category":"page"},{"location":"","page":"Introduction","title":"Introduction","text":"MacroModelling.jl is a Julia package for developing and solving dynamic stochastic general equilibrium (DSGE) models.","category":"page"},{"location":"","page":"Introduction","title":"Introduction","text":"These kinds of models describe the behavior of a macroeconomy and are particularly suited for counterfactual analysis (economic policy evaluation) and exploring / quantifying specific mechanisms (academic research). Due to the complexity of these models, efficient numerical tools are required, as analytical solutions are often unavailable. MacroModelling.jl serves as a tool for handling the complexities involved, such as forward-looking expectations, nonlinearity, and high dimensionality.","category":"page"},{"location":"","page":"Introduction","title":"Introduction","text":"The goal of this package is to reduce coding time and speed up model development by providing functions for working with discrete-time DSGE models. The user-friendly syntax, automatic variable declaration, and effective steady state solver facilitate fast prototyping of models. Furthermore, the package allows the user to work with nonlinear model solutions (up to third order (pruned) perturbation) and estimate the model using gradient based samplers (e.g. NUTS or HMC). Currently, DifferentiableStateSpaceModels.jl is the only other package providing functionality to estimate using gradient based samplers, but its use is limited to models with an analytical solution of the non stochastic steady state (NSSS). Larger models tend not to have an analytical solution of the NSSS and MacroModelling.jl can also use gradient based samplers in this case. 
The target audience for the package includes central bankers, regulators, graduate students, and others working in academia with an interest in DSGE modelling.","category":"page"},{"location":"","page":"Introduction","title":"Introduction","text":"As of now the package can:","category":"page"},{"location":"","page":"Introduction","title":"Introduction","text":"parse a model written with user friendly syntax (variables are followed by time indices ...[2], [1], [0], [-1], [-2]..., or [x] for shocks)\n(tries to) solve the model only knowing the model equations and parameter values (no steady state file needed)\ncalculate first, second, and third order (pruned) perturbation solutions (see Villemot (2011), Andreasen et al. (2017) and Levintal (2017)) using symbolic derivatives\nhandle occasionally binding constraints for linear and nonlinear solutions\ncalculate (generalised) impulse response functions, simulate the model, or do conditional forecasts for linear and nonlinear solutions\ncalibrate parameters using (non stochastic) steady state relationships\nmatch model moments (also for pruned higher order solutions)\nestimate the model on data (Kalman filter using first order perturbation; see Durbin and Koopman (2012)) with gradient based samplers (e.g. NUTS, HMC) or estimate nonlinear models using the inversion filter\ndifferentiate (forward AD) the model solution, Kalman filter loglikelihood (forward and reverse-mode AD), model moments, steady state, with respect to the parameters","category":"page"},{"location":"","page":"Introduction","title":"Introduction","text":"The package is not:","category":"page"},{"location":"","page":"Introduction","title":"Introduction","text":"guaranteed to find the non stochastic steady state\nthe fastest package around if you already have a fast way to find the NSSS","category":"page"},{"location":"","page":"Introduction","title":"Introduction","text":"The former has to do with the fact that solving systems of nonlinear equations is hard (an active area of research). Especially in cases where the values of the solution are far apart (have a high standard deviation - e.g. sol = [-46.324, .993457, 23523.3856]), the algorithms have a hard time finding a solution. The recommended way to tackle this is to set bounds in the @parameters part (e.g. r < 0.2), so that the initial points are closer to the final solution (think of steady state interest rates not being higher than 20% - meaning not being higher than 0.2 or 1.2 depending on the definition).","category":"page"},{"location":"","page":"Introduction","title":"Introduction","text":"The latter has to do with the fact that julia code is fast once compiled, and that the package can spend more time finding the non stochastic steady state. This means that it takes more time from executing the code to define the model and parameters for the first time to seeing the first plots than with most other packages. 
But, once the functions are compiled and the non stochastic steady state has been found the user can benefit from the object oriented nature of the package and generate outputs or change parameters very fast.","category":"page"},{"location":"","page":"Introduction","title":"Introduction","text":"The package contains the following models in the models folder:","category":"page"},{"location":"","page":"Introduction","title":"Introduction","text":"Aguiar and Gopinath (2007) Aguiar_Gopinath_2007.jl\nAscari and Sbordone (2014) Ascari_sbordone_2014.jl\nBackus, Kehoe, and Kydland (1992) Backus_Kehoe_Kydland_1992\nBaxter and King (1993) Baxter_King_1993.jl\nCaldara et al. (2012) Caldara_et_al_2012.jl\nGali (2015) - Chapter 3 Gali_2015_chapter_3_nonlinear.jl\nGali and Monacelli (2005) - CPI inflation-based Taylor rule Gali_Monacelli_2005_CITR.jl\nGerali, Neri, Sessa, and Signoretti (2010) GNSS_2010.jl\nGhironi and Melitz (2005) Ghironi_Melitz_2005.jl\nIreland (2004) Ireland_2004.jl\nJermann and Quadrini (2012) - RBC JQ_2012_RBC.jl\nNew Area-Wide Model (2008) - Euro Area - US NAWM_EAUS_2008.jl\nQUEST3 (2009) QUEST3_2009.jl\nSchmitt-Grohé and Uribe (2003) - debt premium SGU_2003_debt_premium.jl\nSchorfheide (2000) FS2000.jl\nSmets and Wouters (2003) SW03.jl\nSmets and Wouters (2007) SW07.jl","category":"page"},{"location":"#Comparison-with-other-packages","page":"Introduction","title":"Comparison with other packages","text":"","category":"section"},{"location":"","page":"Introduction","title":"Introduction","text":" MacroModelling.jl dynare DSGE.jl dolo.py SolveDSGE.jl DifferentiableStateSpaceModels.jl StateSpaceEcon.jl IRIS RISE NBTOOLBOX gEcon GDSGE Taylor Projection\nHost language julia MATLAB julia Python julia julia julia MATLAB MATLAB MATLAB R MATLAB MATLAB\nNon stochastic steady state solver symbolic or numerical solver of independent blocks; symbolic removal of variables redundant in steady state; inclusion of calibration equations in problem numerical solver of independent blocks or user-supplied values/functions numerical solver of independent blocks or user-supplied values/functions numerical solver numerical solver or user supplied values/equations numerical solver of independent blocks or user-supplied values/functions numerical solver of independent blocks or user-supplied values/functions numerical solver of independent blocks or user-supplied values/functions user-supplied steady state file or numerical solver numerical solver; inclusion of calibration equations in problem \nAutomatic declaration of variables and parameters yes \nDerivatives (Automatic Differentiation) wrt parameters yes yes - for all 1st, 2nd order perturbation solution related output if user supplied steady state equations \nPerturbation solution order 1, 2, 3 k 1 1, 2, 3 1, 2, 3 1, 2 1 1 1 to 5 1 1 1 to 5\nPruning yes yes yes yes \nAutomatic derivation of first order conditions yes \nHandles occasionally binding constraints yes yes yes yes yes yes yes \nGlobal solution yes yes yes \nEstimation yes yes yes yes yes yes yes \nBalanced growth path yes yes yes yes yes yes \nModel input macro (julia) text file text file text file text file macro (julia) module (julia) text file text file text file text file text file text file\nTiming convention end-of-period end-of-period end-of-period start-of-period start-of-period end-of-period end-of-period end-of-period end-of-period end-of-period start-of-period 
start-of-period","category":"page"},{"location":"#Bibliography","page":"Introduction","title":"Bibliography","text":"","category":"section"},{"location":"","page":"Introduction","title":"Introduction","text":"Andreasen, M. M.; Fernández-Villaverde, J. and Rubio-Ramírez, J. F. (2017). The Pruned State-Space System for Non-Linear DSGE Models: Theory and Empirical Applications. The Review of Economic Studies 85, 1–49, arXiv:https://academic.oup.com/restud/article-pdf/85/1/1/23033725/rdx037.pdf.\n\n\n\nDurbin, J. and Koopman, S. J. (2012). Time Series Analysis by State Space Methods, 2nd edn (Oxford University Press).\n\n\n\nGalı́, J. (2015). Monetary policy, inflation, and the business cycle: an introduction to the new Keynesian framework and its applications (Princeton University Press).\n\n\n\nLevintal, O. (2017). Fifth-Order Perturbation Solution to DSGE models. Journal of Economic Dynamics and Control 80, 1–16.\n\n\n\nSchorfheide, F. (2000). Loss function-based evaluation of DSGE models. Journal of Applied Econometrics 15, 645–670, arXiv:https://onlinelibrary.wiley.com/doi/pdf/10.1002/jae.582.\n\n\n\nSmets, F. and Wouters, R. (2003). AN ESTIMATED DYNAMIC STOCHASTIC GENERAL EQUILIBRIUM MODEL OF THE EURO AREA. Journal of the European Economic Association 1, 1123–1175, arXiv:https://onlinelibrary.wiley.com/doi/pdf/10.1162/154247603770383415.\n\n\n\nVillemot, S. (2011). Solving rational expectations models at first order: what Dynare does (Dynare Working Papers 2, CEPREMAP).\n\n\n\n","category":"page"}] +[{"location":"unfinished_docs/todo/#Todo-list","page":"Todo list","title":"Todo list","text":"","category":"section"},{"location":"unfinished_docs/todo/#High-priority","page":"Todo list","title":"High priority","text":"","category":"section"},{"location":"unfinished_docs/todo/","page":"Todo list","title":"Todo list","text":"[ ] ss transition by entering new parameters at given periods\n[ ] check downgrade tests\n[ ] figure out why PG and IS return basically the prior\n[ ] allow external functions to calculate the steady state (and hand it over via SS or get_loglikelihood function) - need to use the check function for implicit derivatives and cannot use it to get him a guess from which he can use internal solver going forward\n[ ] go through custom SS solver once more and try to find parameters and logic that achieves best results\n[ ] SS solver with less equations than variables\n[ ] improve docs: timing in first sentence seems off; have something more general in first sentence; why is the syntax user friendly? 
give an example; make the former and the latter a footnote\n[ ] write tests/docs/technical details for nonlinear obc, forecasting, (non-linear) solution algorithms, SS solver, obc solver, and other algorithms\n[ ] change docs to reflect that the output of irfs include aux vars and also the model info Base.show includes aux vars\n[ ] recheck function examples and docs (include output description)\n[ ] Docs: document outputs and associated functions to work with function\n[ ] write documentation/docstrings using copilot\n[ ] feedback: sell the sampler better (ESS vs dynare), more details on algorithm (SS solver)\n[ ] NaNMath pow does not work (is not substituted)\n[ ] check whether its possible to run parameters macro/block without rerunning model block\n[ ] eliminate possible log, ^ terms in parameters block equations - because of nonnegativity errors\n[ ] throw error when equations appear more than once\n[ ] plot multiple solutions or models - multioptions in one graph\n[ ] make SS calc faster (func and optim, maybe inplace ops)\n[ ] try preallocation tools for forwarddiff\n[ ] add nonlinear shock decomposition\n[ ] check obc once more\n[ ] rm obc vars from get_SS\n[ ] check why warmup_iterations = 0 makes estimated shocks larger\n[ ] use analytical derivatives also for shocks matching optim (and HMC - implicit diff)\n[ ] info on when what filter is used and chosen options are overridden\n[ ] check warnings, errors throughout. check suppress not interfering with pigeons\n[ ] functions to reverse state_update (input: previous shock and current state, output previous state), find shocks corresponding to bringing one state to the next\n[ ] cover nested case: min(50,a+b+max(c,10))\n[ ] add balanced growth path handling\n[ ] higher order solutions: some kron matrix mults are later compressed. write custom compressed kron mult; check if sometimes dense mult is faster? (e.g. GNSS2010 seems dense at higher order)\n[ ] make inversion filter / higher order sols suitable for HMC (forward and reverse diff!!, currently only analytical pushforward, no implicitdiff) | analytic derivatives\n[ ] speed up sparse matrix calcs in implicit diff of higher order funcs\n[ ] compressed higher order derivatives and sparsity of jacobian\n[ ] add user facing option to choose sylvester solver\n[ ] autocorr and covariance with derivatives. return 3d array\n[ ] use ID for sparse output sylvester solvers (filed issue)\n[ ] add pydsge and econpizza to overview\n[ ] add for loop parser in @parameters\n[ ] implement more multi country models\n[ ] speed benchmarking (focus on ImplicitDiff part)\n[ ] for cond forecasting allow less shocks than conditions with a warning. should be svd then\n[ ] have parser accept rss | (r[ss] - 1) * 400 = rss\n[ ] when doing calibration with optimiser have better return values when he doesnt find a solution (probably NaN)\n[ ] sampler returned negative std. 
investigate and come up with solution ensuring sampler can continue\n[ ] automatically adjust plots for different legend widths and heights\n[ ] include weakdeps: https://pkgdocs.julialang.org/dev/creating-packages/#Weak-dependencies\n[ ] have get_std take variables as an input\n[ ] more informative errors when something goes wrong when writing a model\n[ ] initial state accept keyed array, SS and SSS as arguments\n[ ] plotmodelestimates with unconditional forecast at the end\n[ ] kick out unused parameters from m.parameters\n[ ] use cache for gradient calc in estimation (see DifferentiableStateSpaceModels)\n[ ] write functions to debug (fix_SS.jl...)\n[ ] model compression (speed up 2nd moment calc (derivatives) for large models; gradient loglikelihood is very slow due to large matmuls) -> model setup as maximisation problem (gEcon) -> HANK models\n[ ] implement global solution methods\n[ ] add more models\n[ ] use @assert for errors and @test_throws\n[ ] print SS dependencies (get parameters (in function of parameters) into the dependencies), show SS solver\n[ ] use strings instead of symbols internally\n[ ] write how-to for calibration equations\n[ ] make the nonnegativity trick optional or use nanmath?\n[ ] clean up different parameter types\n[ ] clean up printouts/reporting\n[ ] clean up function inputs and harmonise AD and standard commands\n[ ] figure out combinations for inputs (parameters and variables in different formats for get_irf for example)\n[ ] weed out SS solver and saved objects\n[x] streamline estimation part (dont do string matching... but rely on precomputed indices...)\n[x] estimation: run auto-tune before and use solver treating parameters as given\n[x] use arraydist in tests and docs\n[x] include guess in docs\n[x] Find any SS by optimising over both SS guesses and parameter inputs\n[x] riccati with analytical derivatives (much faster if sparse) instead of implicit diff; done for ChainRules; ForwardDiff only feasible for smaller problems -> ID is fine there\n[x] log in parameters block is recognized as variable\n[x] add termination condition if relative change in ss solver is smaller than tol (relevant when values get very large)\n[x] provide option for external SS guess; provided in parameters macro\n[x] make it possible to run multiple ss solver parameter combination including starting points when solving a model\n[x] automatically put the combi first which solves it fastest the first time\n[x] write auto-tune in case he cant find SS (add it to the warning when he cant find the SS)\n[x] nonlinear conditional forecasts for higher order and obc\n[x] for cond forecasting and kalman, get rid of observables input and use axis key of data input\n[x] fix translate dynare mod file from file written using write to dynare file (see test models): added retranslation to test\n[x] use packages for kalman filter: nope sticking to own implementation\n[x] check that there is an error if he cant find SS\n[x] bring solution error into an object of the model so we dont have to pass it on as output: errors get returned by functions and are thrown where appropriate\n[x] include option to provide pruned states for irfs\n[x] use other quadratic iteration for diffable first order solve (useful because schur can error in estimation): used try catch, schur is still fastest\n[x] fix SS solver (failed for backus in guide): works now\n[x] nonlinear estimation using unscented kalman filter / inversion filter (minimization problem: find shocks to match states with data): used inversion filter 
with gradient optim\n[x] check if higher order effects might distort results for autocorr (problem with order deffinition) - doesnt seem to be the case; full_covar yields same result\n[x] implement occasionally binding constraints with shocks\n[x] add QUEST3 tests\n[x] add obc tests\n[x] highlight NUTS sampler compatibility\n[x] differentiate more vs diffstatespace\n[x] reorder other toolboxes according to popularity\n[x] add JOSS article (see Makie.jl)\n[x] write to mod file for unicode characters. have them take what you would type: \\alpha\\bar\n[x] write dynare model using function converting unicode to tab completion\n[x] write parameter equations to dynare (take ordering on board)\n[x] pruning of 3rd order takes pruned 2nd order input\n[x] implement moment matching for pruned models\n[x] test pruning and add literature\n[x] use more implicit diff for the other functions as well\n[x] handle sparsity in sylvester solver better (hand over indices and nzvals instead of vec)\n[x] redo naming in moments calc and make whole process faster (precalc wrangling matrices)\n[x] write method of moments how to\n[x] check tols - all set to eps() except for dependencies tol (1e-12)\n[x] set to 0 SS values < 1e-12 - doesnt work with Zygote\n[x] sylvester with analytical derivatives (much faster if sparse) instead of implicit diff - yes but there are still way too large matrices being realised. implicitdiff is better here\n[x] autocorr to statistics output and in general for higher order pruned sols\n[x] fix product moments and test for cases with more than 2 shocks\n[x] write tests for variables argument in get_moment and for higher order moments\n[x] handle KeyedArrays with strings as dimension names as input\n[x] add mean in output funcs for higher order \n[x] recheck results for third order cov\n[x] have a look again at get_statistics function\n[x] consolidate sylvester solvers (diff)\n[x] put outside of loop the ignore derviatives for derivatives\n[x] write function to smart select variables to calc cov for\n[x] write get function for variables, parameters, equations with proper parsing so people can understand what happens when invoking for loops\n[x] have for loop where the items are multiplied or divided or whatever, defined by operator | + or * only\n[x] write documentation for string inputs\n[x] write documentation for programmatic model writing\n[x] input indices not as symbol\n[x] make sure plots and printed output also uses strings instead of symbols if adequate\n[x] have keyedarray with strings as axis type if necessary as output\n[x] write test for keyedarray with strings as primary axis\n[x] test string input\n[x] have all functions accept strings and write tests for it\n[x] parser model into per equation functions instead of single big functions\n[x] use krylov instead of linearsolve\n[x] implement for loops in model macro (e.g. to setup multi country models)\n[x] fix ss of pruned solution in plotsolution. 
seems detached\n[x] try solve first order with JuMP - doesnt work because JuMP cannot handle matrix constraints/objectives \n[x] get solution higher order with multidimensional array (states, 1 and 2 partial derivatives variables names as dimensions in 2order case)\n[x] add pruning\n[x] add other outputs from estimation (smoothed, filter states and shocks)\n[x] shorten plot_irf (take inspiration from model estimate)\n[x] fix solution plot\n[x] see if we can avoid try catch and test for invertability instead\n[x] have Flux solve SS field #gradient descent based is worse than LM based\n[x] have parameters keyword accept Int and 2/3\n[x] plot_solution colors change from 2nd to 2rd order\n[x] custom LM: optimize for other RBC models, use third order backtracking\n[x] add SSS for third order (can be different than the one from 2nd order, see Gali (2015)) in solution plot; also put legend to the bottom as with Condition\n[x] check out Aqua.jl as additional tests\n[x] write tests and documentation for solution, estimation... making sure results are consistent\n[x] catch cases where you define calibration equation without declaring conditional variable\n[x] flag if equations contain no info for SS, suggest to set ss values as parameters\n[x] handle SS case where there are equations which have no information for the SS. use SS definitions in parameter block to complete system | no, set steady state values to parameters instead. might fail if redundant equation has y[0] - y[-1] instead of y[0] - y[ss]\n[x] try eval instead of runtimegeneratedfunctions; eval is slower but can be typed\n[x] check correctness of solution for models added\n[x] SpecialFunctions eta and gamma cause conflicts; consider importing used functions explicitly\n[x] bring the parsing of equations after the parameters macro\n[x] rewrite redundant var part so that it works with ssauxequations instead of ss_equations\n[x] catch cases where ss vars are set to zero. x[0] * eps_z[x] in SS becomes x[0] * 0 but should be just 0 (use sympy for this)\n[x] remove duplicate nonnegative aux vars to speed up SS solver\n[x] error when defining variable more than once in parameters macro\n[x] consolidate aux vars, use sympy to simplify\n[x] error when writing equations with only one variable\n[x] error when defining variable as parameter\n[x] more options for IRFs, simulate only certain shocks - set stds to 0 instead\n[x] add NBTOOLBOX, IRIS to overview\n[x] input field for SS init guess in all functions #not necessary so far. SS solver works out everything just fine\n[x] symbolic derivatives\n[x] check SW03 SS solver\n[x] more options for IRFs, pass on shock vector\n[x] write to dynare\n[x] add plot for policy function\n[x] add plot for FEVD\n[x] add functions like getvariance, getsd, getvar, getcovar\n[x] add correlation, autocorrelation, and (conditional) variance decomposition\n[x] go through docs to reflect verbose behaviour\n[x] speed up covariance mat calc\n[x] have conditional parameters at end of entry as well (... 
| alpha instead of alpha | ...)\n[x] Get functions: getoutput, getmoments\n[x] get rid of init_guess\n[x] an and schorfheide estimation\n[x] estimation, IRF matching, system priors\n[x] check derivative tests with finite diff\n[x] release first version\n[x] SS solve: add domain transformation optim\n[x] revisit optimizers for SS\n[x] figure out licenses\n[x] SS: replace variables in log() with auxilliary variable which must be positive to help solver\n[x] complex example with lags > 1, [ss], calib equations, aux nonneg vars\n[x] add NLboxsolve\n[x] try NonlinearSolve - fails due to missing bounds\n[x] make noneg aux part of optim problem for NLboxsolve in order to avoid DomainErrors - not necessary\n[x] have bounds on alpha (failed previously due to naming conflict) - works now","category":"page"},{"location":"unfinished_docs/todo/#Not-high-priority","page":"Todo list","title":"Not high priority","text":"","category":"section"},{"location":"unfinished_docs/todo/","page":"Todo list","title":"Todo list","text":"[ ] estimation codes with missing values (adopt kalman filter)\n[ ] decide on whether levels = false means deviations from NSSS or relevant SS\n[ ] whats a good error measure for higher order solutions (taking whole dist of future shock into account)? use mean error for n number of future shocks\n[ ] improve redundant calculations of SS and other parts of solution\n[ ] restructure functions and containers so that compiler knows what types to expect\n[ ] use RecursiveFactorization and TriangularSolve to solve, instead of MKL or OpenBLAS\n[ ] fix SnoopCompile with generated functions\n[ ] exploit variable incidence and compression for higher order derivatives\n[ ] for estimation use CUDA with st order: linear time iteration starting from last 1st order solution and then LinearSolveCUDA solvers for higher orders. this should bring benefits for large models and HANK models\n[ ] pull request in StatsFuns to have norminv... accept type numbers and add translation from matlab: norminv to StatsFuns norminvcdf\n[ ] more informative errors when declaring equations/ calibration\n[ ] unit equation errors\n[ ] implenent reduced linearised system solver + nonlinear\n[ ] implement HANK\n[ ] implement automatic problem derivation (gEcon)\n[ ] print legend for algorithm in last subplot of plot only\n[ ] select variables for moments\n[x] rewrite first order with riccati equation MatrixEquations.jl: not necessary/feasable see dynare package\n[x] test on highly nonlinear model # caldara et al is actually epstein zin wiht stochastic vol\n[x] conditional forecasting\n[x] find way to recover from failed SS solution which is written to init guess\n[x] redo ugly solution for selecting parameters to differentiate for\n[x] conditions for when to use which solution. 
if solution is outdated redo all solutions which have been done so far and use smart starting points\n[x] Revise 2,3 pert codes to make it more intuitive\n[x] implement blockdiag with julia package instead of python\n[x] Pretty print linear solution\n[x] write function to get_irfs\n[x] Named arrays for irf\n[x] write state space function for solution\n[x] Status print for model container\n[x] implenent 2nd + 3rd order perturbation\n[x] implement fuctions for distributions\n[x] try speedmapping.jl - no improvement\n[x] moment matching\n[x] write tests for higher order pert and standalone function\n[x] add compression back in\n[x] FixedPointAcceleration didnt improve on iterative procedure\n[x] add exogenous variables in lead or lag\n[x] regex in parser of SS and exo\n[x] test SS solver on SW07\n[x] change calibration, distinguish SS/dyn parameters\n[x] plot multiple solutions at same time (save them in separate constructs)\n[x] implement bounds in SS finder\n[x] map pars + vars impacting SS\n[x] check bounds when putting in new calibration\n[x] Save plot option\n[x] Add shock to plot title\n[x] print model name","category":"page"},{"location":"tutorials/rbc/#Write-your-first-model-simple-RBC","page":"Write your first simple model - RBC","title":"Write your first model - simple RBC","text":"","category":"section"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"The following tutorial will walk you through the steps of writing down a model (not explained here / taken as given) and analysing it. Prior knowledge of DSGE models and their solution in practical terms (e.g. having used a mod file with dynare) is useful in understanding this tutorial. For the purpose of this tutorial we will work with a simplified version of a real business cycle (RBC) model. The model laid out below examines capital accumulation, consumption, and random technological progress. Households maximize lifetime utility from consumption, weighing current against future consumption. Firms produce using capital and a stochastic technology factor, setting capital rental rates based on marginal productivity. The model integrates households' decisions, firms' production, and random technological shifts to understand economic growth and dynamics.","category":"page"},{"location":"tutorials/rbc/#RBC-derivation-of-model-equations","page":"Write your first simple model - RBC","title":"RBC - derivation of model equations","text":"","category":"section"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"Household's Problem: Households derive utility from consuming goods and discount future consumption. 
The decision they face every period is how much of their income to consume now versus how much to invest for future consumption.","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"E_0 sum_t=0^infty beta^t ln(c_t)","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"Their budget constraint reflects that their available resources for consumption or investment come from returns on their owned capital (both from the rental rate and from undepreciated capital) and any profits distributed from firms.","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"c_t + k_t = (1-delta) k_t-1 + R_t k_t-1 + Pi_t","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"Combining the first order (optimality) conditions with respect to c_t and k_t shows that households balance the marginal utility of consuming one more unit today against the expected discounted marginal utility of consuming that unit in the future.","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"frac1c_t = beta E_t left (R_t+1 + 1 - delta) frac1c_t+1 right","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"Firm's Problem: Firms rent capital from households to produce goods. Their profits, Pi_t, are the difference between their revenue from selling goods and their costs from renting capital. Competition ensures that profits are 0.","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"Pi_t = q_t - R_t k_t-1","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"Given the Cobb-Douglas production function with a stochastic technology process:","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"q_t = e^z_t k_t-1^alpha","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"The FOC with respect to capital k_t determines the optimal amount of capital the firm should rent. 
It equates the marginal product of capital (how much additional output one more unit of capital would produce) to its cost (the rental rate).","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"R_t = alpha e^z_t k_t-1^alpha-1","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"Market Clearing: This condition ensures that every good produced in the economy is either consumed by households or invested to augment future production capabilities.","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"q_t = c_t + i_t","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"With:","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"i_t = k_t - (1-delta)k_t-1","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"Equations describing the dynamics of the economy:","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"Household's Optimization (Euler Equation): Signifies the balance households strike between current and future consumption. The rental rate of capital has been substituted for.","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"frac1c_t = fracbetac_t+1 left( alpha e^z_t+1 k_t^alpha-1 + (1 - delta) right)","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"Capital Accumulation: Charts the progression of capital stock over time.","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"c_t + k_t = (1-delta)k_t-1 + q_t","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"Production: Describes the output generation from the previous period's capital stock, enhanced by current technology.","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"q_t = e^z_t k_t-1^alpha","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"Technology Process: Traces the evolution of technological progress. Exogenous innovations are captured by epsilon^z_t.","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"z_t = rho^z z_t-1 + sigma^z epsilon^z_t","category":"page"},{"location":"tutorials/rbc/#Define-the-model","page":"Write your first simple model - RBC","title":"Define the model","text":"","category":"section"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"The first step is always to name the model and write down the equations. 
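For reference, the four equilibrium conditions derived above, written here in standard LaTeX form (they are exactly the equations entered in the model block below):

```latex
\frac{1}{c_t} = \frac{\beta}{c_{t+1}} \left( \alpha e^{z_{t+1}} k_t^{\alpha-1} + (1 - \delta) \right)

c_t + k_t = (1-\delta)\, k_{t-1} + q_t

q_t = e^{z_t} k_{t-1}^{\alpha}

z_t = \rho^z z_{t-1} + \sigma^z \epsilon^z_t
```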
Taking the RBC model described above this would go as follows.","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"First, we load the package and then use the @model macro to define our model. The first argument after @model is the model name and will be the name of the object in the global environment containing all information regarding the model. The second argument to the macro are the equations, which we write down between begin and end. One equation per line and timing of endogenous variables are expressed in the squared brackets following the variable name. Exogenous variables (shocks) are followed by a keyword in squared brackets indicating them being exogenous (in this case [x]). Note that names can leverage julias unicode capabilities (alpha can be written as α).","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"ENV[\"GKSwstype\"] = \"100\"","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"using MacroModelling\n@model RBC begin\n 1 / c[0] = (β / c[1]) * (α * exp(z[1]) * k[0]^(α - 1) + (1 - δ))\n\n c[0] + k[0] = (1 - δ) * k[-1] + q[0]\n\n q[0] = exp(z[0]) * k[-1]^α\n\n z[0] = ρᶻ * z[-1] + σᶻ * ϵᶻ[x]\nend","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"After the model is parsed we get some info on the model variables, and parameters.","category":"page"},{"location":"tutorials/rbc/#Define-the-parameters","page":"Write your first simple model - RBC","title":"Define the parameters","text":"","category":"section"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"Next we need to add the parameters of the model. The macro @parameters takes care of this:","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"@parameters RBC begin\n σᶻ= 0.01\n ρᶻ= 0.2\n δ = 0.02\n α = 0.5\n β = 0.95\nend","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"Parameter definitions are similar to assigning values in julia. Note that we have to write one parameter definition per line.","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"Given the equations and parameters, the package will first attempt to solve the system of nonlinear equations symbolically (including possible calibration equations - see next tutorial for an example). If an analytical solution is not possible, numerical solution methods are used to try and solve it. There is no guarantee that a solution can be found, but it is highly likely, given that a solution exists. The problem setup tries to incorporate parts of the structure of the problem, e.g. bounds on variables: if one equation contains log(k) it must be that k > 0. Nonetheless, the user can also provide useful information such as variable bounds or initial guesses. Bounds can be set by adding another expression to the parameter block e.g.: c > 0. Large values are typically a problem for numerical solvers. 
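To make this concrete, a hypothetical variant of the parameter block above that adds a bound on c (and an initial guess for k via the guess keyword, described just below) could look as follows; the bound and the guess value are purely illustrative:

```julia
# Sketch only: same calibration as above, plus a variable bound and an
# initial guess for the steady state solver (syntax as described in the text).
@parameters RBC guess = Dict(k => 10) begin
    σᶻ = 0.01
    ρᶻ = 0.2
    δ  = 0.02
    α  = 0.5
    β  = 0.95
    c > 0   # bound: consumption must be strictly positive
end
```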
Therefore, providing a guess for these values will increase the speed of the solver. Guesses can be provided as a Dict after the model name and before the parameter definitions block, e.g.: @parameters RBC guess = Dict(k => 10) begin ... end.","category":"page"},{"location":"tutorials/rbc/#Plot-impulse-response-functions-(IRFs)","page":"Write your first simple model - RBC","title":"Plot impulse response functions (IRFs)","text":"","category":"section"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"A useful output to analyze are IRFs for the exogenous shocks. Calling plot_irf (different names for the same function are also supported: plot_irfs, or plot_IRF) will take care of this. Please note that you need to import the StatsPlots packages once before the first plot. In the background the package solves (symbolically in this simple case) for the non stochastic steady state (SS) and calculates the first order perturbation solution.","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"import StatsPlots\nplot_irf(RBC)","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"(Image: RBC IRF)","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"When the model is solved the first time (in this case by calling plot_irf), the package breaks down the steady state problem into independent blocks and first attempts to solve them symbolically and if that fails numerically.","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"The plot shows the responses of the endogenous variables (c, k, q, and z) to a one standard deviation positive (indicated by Shock⁺ in chart title) unanticipated shock in eps_z. Therefore there are as many subplots as there are combinations of shocks and endogenous variables (which are impacted by the shock). Plots are composed of up to 9 subplots and the plot title shows the model name followed by the name of the shock and which plot we are seeing out of the plots for this shock (e.g. (1/3) means we see the first out of three plots for this shock). Subplots show the sorted endogenous variables with the left y-axis showing the level of the respective variable and the right y-axis showing the percent deviation from the SS (if variable is strictly positive). The horizontal black line marks the SS.","category":"page"},{"location":"tutorials/rbc/#Explore-other-parameter-values","page":"Write your first simple model - RBC","title":"Explore other parameter values","text":"","category":"section"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"Playing around with the model can be especially insightful in the early phase of model development. The package tries to facilitates this process to the extent possible. Typically one wants to try different parameter values and see how the IRFs change. This can be done by using the parameters argument of the plot_irf function. We pass a Pair with the Symbol of the parameter (: in front of the parameter name) we want to change and its new value to the parameter argument (e.g. 
:α => 0.3).","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"plot_irf(RBC, parameters = :α => 0.3)","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"(Image: IRF plot)","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"First, the package finds the new steady state, solves the model dynamics around it and saves the new parameters and solution in the model object. Second, note that the shape of the curves in the plot and the y-axis values changed. Updating the plot for new parameters is significantly faster than calling it the first time. This is because the first call triggers compilations of the model functions, and once compiled the user benefits from the performance of the specialised compiled code.","category":"page"},{"location":"tutorials/rbc/#Plot-model-simulation","page":"Write your first simple model - RBC","title":"Plot model simulation","text":"","category":"section"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"Another insightful output is simulations of the model. Here we can use the plot_simulations function. Please note that you need to import the StatsPlots packages once before the first plot. To the same effect we can use the plot_irf function and specify in the shocks argument that we want to :simulate the model and set the periods argument to 100.","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"plot_simulations(RBC)","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"(Image: Simulate RBC)","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"The plots show the models endogenous variables in response to random draws for all exogenous shocks over 100 periods.","category":"page"},{"location":"tutorials/rbc/#Plot-specific-series-of-shocks","page":"Write your first simple model - RBC","title":"Plot specific series of shocks","text":"","category":"section"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"Sometimes one has a specific series of shocks in mind and wants to see the corresponding model response of endogenous variables. 
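As an aside, the equivalent call through plot_irf for the random simulation shown above (as described in the previous paragraph) would be:

```julia
# Equivalent to plot_simulations(RBC): draw random shocks and simulate 100 periods.
plot_irf(RBC, shocks = :simulate, periods = 100)
```

Returning to the case of a user-specified series of shocks: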
This can be achieved by passing a Matrix or KeyedArray of the series of shocks to the shocks argument of the plot_irf function:","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"shock_series = zeros(1,4)\nshock_series[1,2] = 1\nshock_series[1,4] = -1\nplot_irf(RBC, shocks = shock_series)","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"(Image: Series of shocks RBC)","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"The plot shows the two shocks hitting the economy in periods 2 and 4 and then continues the simulation for 40 more quarters.","category":"page"},{"location":"tutorials/rbc/#Model-statistics","page":"Write your first simple model - RBC","title":"Model statistics","text":"","category":"section"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"The package solves for the SS automatically and we got an idea of the SS values in the plots. If we want to see the SS values we can call get_steady_state:","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"get_steady_state(RBC)","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"to get the SS and the derivatives of the SS with respect to the model parameters. The first column of the returned matrix shows the SS while the second to last columns show the derivatives of the SS values (indicated in the rows) with respect to the parameters (indicated in the columns). For example, the derivative of k with respect to β is 165.319. This means that if we increase β by 1, k would increase by 165.319 approximately. Let's see how this plays out by changing β from 0.95 to 0.951 (a change of +0.001):","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"get_steady_state(RBC,parameters = :β => .951)","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"Note that get_steady_state like all other get functions has the parameters argument. Hence, whatever output we are looking at we can change the parameters of the model.","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"The new value of β changed the SS as expected and k increased by 0.168. The elasticity (0.168/0.001) comes close to the partial derivative previously calculated. The derivatives help understanding the effect of parameter changes on the steady state and make for easier navigation of the parameter space.","category":"page"},{"location":"tutorials/rbc/#Standard-deviations","page":"Write your first simple model - RBC","title":"Standard deviations","text":"","category":"section"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"Next to the SS we can also show the model implied standard deviations of the model. get_standard_deviation takes care of this. 
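As an aside, the covariance getter cov listed in the API section below is documented as a wrapper around a more general get_moments function, so several moments can be requested in a single call. A sketch, assuming the keyword names referenced there:

```julia
# Sketch: request the non-stochastic steady state and standard deviations
# (and switch off the other moments) in one call.
get_moments(RBC,
            non_stochastic_steady_state = true,
            standard_deviation = true,
            variance = false,
            covariance = false)
```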
Additionally we will set the parameter values to what they were in the beginning by passing on a Tuple of Pairs containing the Symbols of the parameters to be changed and their new (initial) values (e.g. (:α => 0.5, :β => .95)).","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"get_standard_deviation(RBC, parameters = (:α => 0.5, :β => .95))","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"The function returns the model implied standard deviations of the model variables and their derivatives with respect to the model parameters. For example, the derivative of the standard deviation of c with resect to δ is -0.384. In other words, the standard deviation of c decreases with increasing δ.","category":"page"},{"location":"tutorials/rbc/#Correlations","page":"Write your first simple model - RBC","title":"Correlations","text":"","category":"section"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"Another useful statistic is the model implied correlation of variables. We use get_correlation for this:","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"get_correlation(RBC)","category":"page"},{"location":"tutorials/rbc/#Autocorrelations","page":"Write your first simple model - RBC","title":"Autocorrelations","text":"","category":"section"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"Last but not least, we have a look at the model implied autocorrelations of model variables using the get_autocorrelation function:","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"get_autocorrelation(RBC)","category":"page"},{"location":"tutorials/rbc/#Model-solution","page":"Write your first simple model - RBC","title":"Model solution","text":"","category":"section"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"A further insightful output are the policy and transition functions of the the first order perturbation solution. To retrieve the solution we call the function get_solution:","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"get_solution(RBC)","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"The solution provides information about how past states and present shocks impact present variables. The first row contains the SS for the variables denoted in the columns. The second to last rows contain the past states, with the time index ₍₋₁₎, and present shocks, with exogenous variables denoted by ₍ₓ₎. For example, the immediate impact of a shock to eps_z on q is 0.0688.","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"There is also the possibility to visually inspect the solution. Please note that you need to import the StatsPlots packages once before the first plot. 
We can use the plot_solution function:","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"plot_solution(RBC, :k)","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"(Image: RBC solution)","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"The chart shows the first order perturbation solution mapping from the past state k to the present variables of the model. The state variable covers a range of two standard deviations around the non stochastic steady state and all other states remain in the non stochastic steady state.","category":"page"},{"location":"tutorials/rbc/#Obtain-array-of-IRFs-or-model-simulations","page":"Write your first simple model - RBC","title":"Obtain array of IRFs or model simulations","text":"","category":"section"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"Last but not least the user might want to obtain simulated time series of the model or IRFs without plotting them. For IRFs this is possible by calling get_irf:","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"get_irf(RBC)","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"which returns a 3-dimensional KeyedArray with variables (absolute deviations from the relevant steady state by default) in rows, the period in columns, and the shocks as the third dimension.","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"For simulations this is possible by calling simulate:","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"simulate(RBC)","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"which returns the simulated data in levels in a 3-dimensional KeyedArray of the same structure as for the IRFs.","category":"page"},{"location":"tutorials/rbc/#Conditional-forecasts","page":"Write your first simple model - RBC","title":"Conditional forecasts","text":"","category":"section"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"Conditional forecasting is a useful tool to incorporate for example forecasts into a model and then add shocks on top.","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"For example we might be interested in the model dynamics given a path for c for the first 4 quarters and the next quarter a negative shock to eps_z arrives. 
This can be implemented using the get_conditional_forecast function and visualised with the plot_conditional_forecast function.","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"First, we define the conditions on the endogenous variables as deviations from the non stochastic steady state (c in this case) using a KeyedArray from the AxisKeys package (check get_conditional_forecast for other ways to define the conditions):","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"using AxisKeys\nconditions = KeyedArray(Matrix{Union{Nothing,Float64}}(undef,1,4),Variables = [:c], Periods = 1:4)\nconditions[1:4] .= [-.01,0,.01,.02];","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"Note that all other endogenous variables not part of the KeyedArray are also not conditioned on.","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"Next, we define the conditions on the shocks (eps_z in this case) using a SparseArrayCSC from the SparseArrays package (check get_conditional_forecast for other ways to define the conditions on the shocks):","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"using SparseArrays\nshocks = spzeros(1,5)\nshocks[1,5] = -1;","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"Note that for the first 4 periods the shock has no predetermined value and is determined by the conditions on the endogenous variables.","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"Finally we can get the conditional forecast:","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"get_conditional_forecast(RBC, conditions, shocks = shocks, conditions_in_levels = false)","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"The function returns a KeyedArray with the values of the endogenous variables and shocks matching the conditions exactly.","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"We can also plot the conditional forecast. Please note that you need to import the StatsPlots packages once before the first plot. 
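As an aside, the same conditions could also be passed as a plain Matrix{Union{Nothing,Float64}}, where Float64 entries are binding and nothing entries are left free, as described in the get_conditional_forecast documentation in the API section. A sketch, assuming c is the first of the model's four endogenous variables:

```julia
# Sketch: matrix form of the same conditions; rows are variables, columns are periods.
conditions_mat = Matrix{Union{Nothing,Float64}}(undef, 4, 4)  # 4 variables, 4 periods
conditions_mat .= nothing                                     # leave all entries free by default
conditions_mat[1, 1:4] .= [-.01, 0, .01, .02]                 # condition on c (assumed to be the first row)
```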
In order to plot we can use:","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"plot_conditional_forecast(RBC, conditions, shocks = shocks, conditions_in_levels = false)","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"(Image: RBC conditional forecast)","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"and we need to set conditions_in_levels = false since the conditions are defined in deviations.","category":"page"},{"location":"tutorials/rbc/","page":"Write your first simple model - RBC","title":"Write your first simple model - RBC","text":"Note that the stars indicate the values the model is conditioned on.","category":"page"},{"location":"how-to/loops/#Programmatic-model-writing","page":"Programmatic model writing using for-loops","title":"Programmatic model writing","text":"","category":"section"},{"location":"how-to/loops/","page":"Programmatic model writing using for-loops","title":"Programmatic model writing using for-loops","text":"Programmatic model writing is a powerful tool to write complex models using concise code. More specifically, the @model and @parameters macros allow for the use of indexed variables and for-loops.","category":"page"},{"location":"how-to/loops/#Model-block","page":"Programmatic model writing using for-loops","title":"Model block","text":"","category":"section"},{"location":"how-to/loops/#for-loops-for-time-indices","page":"Programmatic model writing using for-loops","title":"for loops for time indices","text":"","category":"section"},{"location":"how-to/loops/","page":"Programmatic model writing using for-loops","title":"Programmatic model writing using for-loops","text":"In practice this means that you no longer need to write this:","category":"page"},{"location":"how-to/loops/","page":"Programmatic model writing using for-loops","title":"Programmatic model writing using for-loops","text":"Y_annual[0] = Y[0] + Y[-1] + Y[-2] + Y[-3]","category":"page"},{"location":"how-to/loops/","page":"Programmatic model writing using for-loops","title":"Programmatic model writing using for-loops","text":"but instead you can write this:","category":"page"},{"location":"how-to/loops/","page":"Programmatic model writing using for-loops","title":"Programmatic model writing using for-loops","text":"Y_annual[0] = for lag in -3:0 Y[lag] end","category":"page"},{"location":"how-to/loops/","page":"Programmatic model writing using for-loops","title":"Programmatic model writing using for-loops","text":"In the background the package expands the for loop and adds up the elements for the different values of lag.","category":"page"},{"location":"how-to/loops/","page":"Programmatic model writing using for-loops","title":"Programmatic model writing using for-loops","text":"In case you don't want the elements to be added up but multiply the items you can do so:","category":"page"},{"location":"how-to/loops/","page":"Programmatic model writing using for-loops","title":"Programmatic model writing using for-loops","text":"R_annual[0] = for operator = :*, lag in -3:0 R[lag] end","category":"page"},{"location":"how-to/loops/#for-loops-for-variables-/-parameter-specific-indices","page":"Programmatic model writing using for-loops","title":"for loops for variables / parameter specific 
indices","text":"","category":"section"},{"location":"how-to/loops/","page":"Programmatic model writing using for-loops","title":"Programmatic model writing using for-loops","text":"Another use-case are models with repetitive equations such as multi-sector or multi-country models.","category":"page"},{"location":"how-to/loops/","page":"Programmatic model writing using for-loops","title":"Programmatic model writing using for-loops","text":"For example, defining the production function for two countries (home country H and foreign country F) would look as follows without the use of programmatic features:","category":"page"},{"location":"how-to/loops/","page":"Programmatic model writing using for-loops","title":"Programmatic model writing using for-loops","text":"y_H[0] = A_H[0] * k_H[-1]^alpha_H\ny_F[0] = A_F[0] * k_F[-1]^alpha_F","category":"page"},{"location":"how-to/loops/","page":"Programmatic model writing using for-loops","title":"Programmatic model writing using for-loops","text":"and this can be written more conveniently using loops:","category":"page"},{"location":"how-to/loops/","page":"Programmatic model writing using for-loops","title":"Programmatic model writing using for-loops","text":"for co in [H, F] y{co}[0] = A{co}[0] * k{co}[-1]^alpha{co} end","category":"page"},{"location":"how-to/loops/","page":"Programmatic model writing using for-loops","title":"Programmatic model writing using for-loops","text":"Note that the package internally writes out the for loop and creates two equations; one each for country H and F. The variables and parameters are indexed using the curly braces {}. These can also be used outside loops. When using more than one index it is important to make sure the indices are in the right order.","category":"page"},{"location":"how-to/loops/#Example-model-block","page":"Programmatic model writing using for-loops","title":"Example model block","text":"","category":"section"},{"location":"how-to/loops/","page":"Programmatic model writing using for-loops","title":"Programmatic model writing using for-loops","text":"Putting these these elements together we can write the multi-country model equations of the Backus, Kehoe and Kydland (1992) model like this:","category":"page"},{"location":"how-to/loops/","page":"Programmatic model writing using for-loops","title":"Programmatic model writing using for-loops","text":"ENV[\"GKSwstype\"] = \"100\"\nusing Random\nRandom.seed!(3)","category":"page"},{"location":"how-to/loops/","page":"Programmatic model writing using for-loops","title":"Programmatic model writing using for-loops","text":"using MacroModelling\n@model Backus_Kehoe_Kydland_1992 begin\n for co in [H, F]\n Y{co}[0] = ((LAMBDA{co}[0] * K{co}[-4]^theta{co} * N{co}[0]^(1 - theta{co}))^(-nu{co}) + sigma{co} * Z{co}[-1]^(-nu{co}))^(-1 / nu{co})\n\n K{co}[0] = (1 - delta{co}) * K{co}[-1] + S{co}[0]\n\n X{co}[0] = for lag in (-4+1):0 phi{co} * S{co}[lag] end\n\n A{co}[0] = (1 - eta{co}) * A{co}[-1] + N{co}[0]\n\n L{co}[0] = 1 - alpha{co} * N{co}[0] - (1 - alpha{co}) * eta{co} * A{co}[-1]\n\n U{co}[0] = (C{co}[0]^mu{co} * L{co}[0]^(1 - mu{co}))^gamma{co}\n\n psi{co} * mu{co} / C{co}[0] * U{co}[0] = LGM[0]\n\n psi{co} * (1 - mu{co}) / L{co}[0] * U{co}[0] * (-alpha{co}) = - LGM[0] * (1 - theta{co}) / N{co}[0] * (LAMBDA{co}[0] * K{co}[-4]^theta{co} * N{co}[0]^(1 - theta{co}))^(-nu{co}) * Y{co}[0]^(1 + nu{co})\n\n for lag in 0:(4-1) \n beta{co}^lag * LGM[lag]*phi{co}\n end +\n for lag in 1:4\n -beta{co}^lag * LGM[lag] * phi{co} * (1 - delta{co})\n end = beta{co}^4 * 
LGM[+4] * theta{co} / K{co}[0] * (LAMBDA{co}[+4] * K{co}[0]^theta{co} * N{co}[+4]^(1 - theta{co})) ^ (-nu{co}) * Y{co}[+4]^(1 + nu{co})\n\n LGM[0] = beta{co} * LGM[+1] * (1 + sigma{co} * Z{co}[0]^(-nu{co} - 1) * Y{co}[+1]^(1 + nu{co}))\n\n NX{co}[0] = (Y{co}[0] - (C{co}[0] + X{co}[0] + Z{co}[0] - Z{co}[-1])) / Y{co}[0]\n end\n\n (LAMBDA{H}[0] - 1) = rho{H}{H} * (LAMBDA{H}[-1] - 1) + rho{H}{F} * (LAMBDA{F}[-1] - 1) + Z_E{H} * E{H}[x]\n\n (LAMBDA{F}[0] - 1) = rho{F}{F} * (LAMBDA{F}[-1] - 1) + rho{F}{H} * (LAMBDA{H}[-1] - 1) + Z_E{F} * E{F}[x]\n\n for co in [H,F] C{co}[0] + X{co}[0] + Z{co}[0] - Z{co}[-1] end = for co in [H,F] Y{co}[0] end\nend","category":"page"},{"location":"how-to/loops/#Parameter-block","page":"Programmatic model writing using for-loops","title":"Parameter block","text":"","category":"section"},{"location":"how-to/loops/","page":"Programmatic model writing using for-loops","title":"Programmatic model writing using for-loops","text":"Having defined parameters and variables with indices in the model block we can also declare parameter values, including by means of calibration equations, in the parameter block.","category":"page"},{"location":"how-to/loops/","page":"Programmatic model writing using for-loops","title":"Programmatic model writing using for-loops","text":"In the above example we defined the production function fro countries H and F. Implicitly we have two parameters alpha and we can define their value individually by setting","category":"page"},{"location":"how-to/loops/","page":"Programmatic model writing using for-loops","title":"Programmatic model writing using for-loops","text":"alpha{H} = 0.3\nalpha{F} = 0.3","category":"page"},{"location":"how-to/loops/","page":"Programmatic model writing using for-loops","title":"Programmatic model writing using for-loops","text":"or jointly by writing","category":"page"},{"location":"how-to/loops/","page":"Programmatic model writing using for-loops","title":"Programmatic model writing using for-loops","text":"alpha = 0.3","category":"page"},{"location":"how-to/loops/","page":"Programmatic model writing using for-loops","title":"Programmatic model writing using for-loops","text":"By not using the index, the package understands that there are two parameters with this name and different indices and will set both accordingly.","category":"page"},{"location":"how-to/loops/","page":"Programmatic model writing using for-loops","title":"Programmatic model writing using for-loops","text":"This logic extends to calibration equations. We can write:","category":"page"},{"location":"how-to/loops/","page":"Programmatic model writing using for-loops","title":"Programmatic model writing using for-loops","text":"y{H}[ss] = 1 | alpha{H}\ny{F}[ss] = 1 | alpha{F}","category":"page"},{"location":"how-to/loops/","page":"Programmatic model writing using for-loops","title":"Programmatic model writing using for-loops","text":"to find the value of alpha that corresponds to y being equal to 1 in the non-stochastic steady state. 
Alternatively we can not use indices and the package understands that we refer to both indices:","category":"page"},{"location":"how-to/loops/","page":"Programmatic model writing using for-loops","title":"Programmatic model writing using for-loops","text":"y[ss] = 1 | alpha","category":"page"},{"location":"how-to/loops/","page":"Programmatic model writing using for-loops","title":"Programmatic model writing using for-loops","text":"Making use of the indices we could also target a level of y for country H with alpha for country H and target ratio of the two ys with the alpha for country F:","category":"page"},{"location":"how-to/loops/","page":"Programmatic model writing using for-loops","title":"Programmatic model writing using for-loops","text":"y{H}[ss] = 1 | alpha{H}\ny{H}[ss] / y{F}[ss] = y_ratio | alpha{F}\ny_ratio = 0.9","category":"page"},{"location":"how-to/loops/#Example-parameter-block","page":"Programmatic model writing using for-loops","title":"Example parameter block","text":"","category":"section"},{"location":"how-to/loops/","page":"Programmatic model writing using for-loops","title":"Programmatic model writing using for-loops","text":"Making use of this and continuing the example of the Backus, Kehoe and Kydland (1992) model we can define the parameters as follows:","category":"page"},{"location":"how-to/loops/","page":"Programmatic model writing using for-loops","title":"Programmatic model writing using for-loops","text":"@parameters Backus_Kehoe_Kydland_1992 begin\n K_ss = 11\n K[ss] = K_ss | beta\n \n mu = 0.34\n gamma = -1.0\n alpha = 1\n eta = 0.5\n theta = 0.36\n nu = 3\n sigma = 0.01\n delta = 0.025\n phi = 1/4\n psi = 0.5\n\n Z_E = 0.00852\n \n rho{H}{H} = 0.906\n rho{F}{F} = rho{H}{H}\n rho{H}{F} = 0.088\n rho{F}{H} = rho{H}{F}\nend","category":"page"},{"location":"call_index/#Index","page":"Index","title":"Index","text":"","category":"section"},{"location":"call_index/","page":"Index","title":"Index","text":"","category":"page"},{"location":"api/","page":"API","title":"API","text":"Modules = [MacroModelling]\nOrder = [:function, :macro]","category":"page"},{"location":"api/#MacroModelling.Beta-NTuple{4, Real}","page":"API","title":"MacroModelling.Beta","text":"Beta(μ, σ, lower_bound, upper_bound; μσ)\n\n\nConvenience wrapper for the truncated Beta distribution.\n\nIf μσ = true then μ and σ are translated to the parameters of the distribution. Otherwise μ and σ represent the parameters of the distribution.\n\nArguments\n\nμ [Type: Real]: mean or first parameter of the distribution, \nσ [Type: Real]: standard deviation or first parameter of the distribution\nlower_bound [Type: Real]: truncation lower bound of the distribution\nupper_bound [Type: Real]: truncation upper bound of the distribution\n\nKeyword Arguments\n\nμσ [Type: Bool]: switch whether μ and σ represent the moments of the distribution or their parameters\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.Beta-Tuple{Real, Real}","page":"API","title":"MacroModelling.Beta","text":"Beta(μ, σ; μσ)\n\n\nConvenience wrapper for the Beta distribution.\n\nIf μσ = true then μ and σ are translated to the parameters of the distribution. 
Otherwise μ and σ represent the parameters of the distribution.\n\nArguments\n\nμ [Type: Real]: mean or first parameter of the distribution, \nσ [Type: Real]: standard deviation or first parameter of the distribution\n\nKeyword Arguments\n\nμσ [Type: Bool]: switch whether μ and σ represent the moments of the distribution or their parameters\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.Gamma-NTuple{4, Real}","page":"API","title":"MacroModelling.Gamma","text":"Gamma(μ, σ, lower_bound, upper_bound; μσ)\n\n\nConvenience wrapper for the truncated Inverse Gamma distribution.\n\nIf μσ = true then μ and σ are translated to the parameters of the distribution. Otherwise μ and σ represent the parameters of the distribution.\n\nArguments\n\nμ [Type: Real]: mean or first parameter of the distribution, \nσ [Type: Real]: standard deviation or first parameter of the distribution\nlower_bound [Type: Real]: truncation lower bound of the distribution\nupper_bound [Type: Real]: truncation upper bound of the distribution\n\nKeyword Arguments\n\nμσ [Type: Bool]: switch whether μ and σ represent the moments of the distribution or their parameters\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.Gamma-Tuple{Real, Real}","page":"API","title":"MacroModelling.Gamma","text":"Gamma(μ, σ; μσ)\n\n\nConvenience wrapper for the Gamma distribution.\n\nIf μσ = true then μ and σ are translated to the parameters of the distribution. Otherwise μ and σ represent the parameters of the distribution.\n\nArguments\n\nμ [Type: Real]: mean or first parameter of the distribution, \nσ [Type: Real]: standard deviation or first parameter of the distribution\n\nKeyword Arguments\n\nμσ [Type: Bool]: switch whether μ and σ represent the moments of the distribution or their parameters\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.InverseGamma-NTuple{4, Real}","page":"API","title":"MacroModelling.InverseGamma","text":"InverseGamma(μ, σ, lower_bound, upper_bound; μσ)\n\n\nConvenience wrapper for the truncated Inverse Gamma distribution.\n\nIf μσ = true then μ and σ are translated to the parameters of the distribution. Otherwise μ and σ represent the parameters of the distribution.\n\nArguments\n\nμ [Type: Real]: mean or first parameter of the distribution, \nσ [Type: Real]: standard deviation or first parameter of the distribution\nlower_bound [Type: Real]: truncation lower bound of the distribution\nupper_bound [Type: Real]: truncation upper bound of the distribution\n\nKeyword Arguments\n\nμσ [Type: Bool]: switch whether μ and σ represent the moments of the distribution or their parameters\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.InverseGamma-Tuple{Real, Real}","page":"API","title":"MacroModelling.InverseGamma","text":"InverseGamma(μ, σ; μσ)\n\n\nConvenience wrapper for the Inverse Gamma distribution.\n\nIf μσ = true then μ and σ are translated to the parameters of the distribution. 
Otherwise μ and σ represent the parameters of the distribution.\n\nArguments\n\nμ [Type: Real]: mean or first parameter of the distribution, \nσ [Type: Real]: standard deviation or first parameter of the distribution\n\nKeyword Arguments\n\nμσ [Type: Bool]: switch whether μ and σ represent the moments of the distribution or their parameters\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.Normal-NTuple{4, Real}","page":"API","title":"MacroModelling.Normal","text":"Normal(μ, σ, lower_bound, upper_bound)\n\n\nConvenience wrapper for the truncated Normal distribution.\n\nArguments\n\nμ [Type: Real]: mean of the distribution, \nσ [Type: Real]: standard deviation of the distribution\nlower_bound [Type: Real]: truncation lower bound of the distribution\nupper_bound [Type: Real]: truncation upper bound of the distribution\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.SS","page":"API","title":"MacroModelling.SS","text":"See get_steady_state\n\n\n\n\n\n","category":"function"},{"location":"api/#MacroModelling.SSS-Tuple","page":"API","title":"MacroModelling.SSS","text":"Wrapper for get_steady_state with stochastic = true.\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.autocorr","page":"API","title":"MacroModelling.autocorr","text":"See get_autocorrelation\n\n\n\n\n\n","category":"function"},{"location":"api/#MacroModelling.check_residuals","page":"API","title":"MacroModelling.check_residuals","text":"See get_non_stochastic_steady_state_residuals\n\n\n\n\n\n","category":"function"},{"location":"api/#MacroModelling.corr","page":"API","title":"MacroModelling.corr","text":"See get_correlation\n\n\n\n\n\n","category":"function"},{"location":"api/#MacroModelling.cov","page":"API","title":"MacroModelling.cov","text":"Wrapper for get_moments with covariance = true and non_stochastic_steady_state = false, variance = false, standard_deviation = false.\n\n\n\n\n\n","category":"function"},{"location":"api/#MacroModelling.export_dynare","page":"API","title":"MacroModelling.export_dynare","text":"See write_mod_file\n\n\n\n\n\n","category":"function"},{"location":"api/#MacroModelling.export_mod_file","page":"API","title":"MacroModelling.export_mod_file","text":"See write_mod_file\n\n\n\n\n\n","category":"function"},{"location":"api/#MacroModelling.export_model","page":"API","title":"MacroModelling.export_model","text":"See write_mod_file\n\n\n\n\n\n","category":"function"},{"location":"api/#MacroModelling.export_to_dynare","page":"API","title":"MacroModelling.export_to_dynare","text":"See write_mod_file\n\n\n\n\n\n","category":"function"},{"location":"api/#MacroModelling.fevd","page":"API","title":"MacroModelling.fevd","text":"See get_conditional_variance_decomposition\n\n\n\n\n\n","category":"function"},{"location":"api/#MacroModelling.get_IRF","page":"API","title":"MacroModelling.get_IRF","text":"See get_irf\n\n\n\n\n\n","category":"function"},{"location":"api/#MacroModelling.get_SS","page":"API","title":"MacroModelling.get_SS","text":"See get_steady_state\n\n\n\n\n\n","category":"function"},{"location":"api/#MacroModelling.get_SSS-Tuple","page":"API","title":"MacroModelling.get_SSS","text":"Wrapper for get_steady_state with stochastic = true.\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.get_autocorr","page":"API","title":"MacroModelling.get_autocorr","text":"See 
get_autocorrelation\n\n\n\n\n\n","category":"function"},{"location":"api/#MacroModelling.get_autocorrelation-Tuple{MacroModelling.ℳ}","page":"API","title":"MacroModelling.get_autocorrelation","text":"get_autocorrelation(\n 𝓂;\n autocorrelation_periods,\n parameters,\n algorithm,\n verbose\n)\n\n\nReturn the autocorrelations of endogenous variables using the first, pruned second, or pruned third order perturbation solution. \n\nArguments\n\n𝓂: the object created by @model and @parameters for which to get the solution.\n\nKeyword Arguments\n\nautocorrelation_periods [Default: 1:5]: periods for which to return the autocorrelation\nparameters [Default: nothing]: If nothing is provided, the solution is calculated for the parameters defined previously. Acceptable inputs are a vector of parameter values, a vector or tuple of pairs of the parameter Symbol or String and value. If the new parameter values differ from the previously defined the solution will be recalculated.\nalgorithm [Default: :first_order, Type: Symbol]: algorithm to solve for the dynamics of the model.\nverbose [Default: false, Type: Bool]: print information about how the NSSS is solved (symbolic or numeric), which solver is used (Levenberg-Marquardt...), and the maximum absolute error.\n\nExamples\n\nusing MacroModelling\n\n@model RBC begin\n 1 / c[0] = (β / c[1]) * (α * exp(z[1]) * k[0]^(α - 1) + (1 - δ))\n c[0] + k[0] = (1 - δ) * k[-1] + q[0]\n q[0] = exp(z[0]) * k[-1]^α\n z[0] = ρ * z[-1] + std_z * eps_z[x]\nend\n\n@parameters RBC begin\n std_z = 0.01\n ρ = 0.2\n δ = 0.02\n α = 0.5\n β = 0.95\nend\n\nget_autocorrelation(RBC)\n# output\n2-dimensional KeyedArray(NamedDimsArray(...)) with keys:\n↓ Variables ∈ 4-element Vector{Symbol}\n→ Autocorrelation_orders ∈ 5-element UnitRange{Int64}\nAnd data, 4×5 Matrix{Float64}:\n (1) (2) (3) (4) (5)\n (:c) 0.966974 0.927263 0.887643 0.849409 0.812761\n (:k) 0.971015 0.931937 0.892277 0.853876 0.817041\n (:q) 0.32237 0.181562 0.148347 0.136867 0.129944\n (:z) 0.2 0.04 0.008 0.0016 0.00032\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.get_calibrated_parameters-Tuple{MacroModelling.ℳ}","page":"API","title":"MacroModelling.get_calibrated_parameters","text":"get_calibrated_parameters(𝓂; values)\n\n\nReturns the parameters (and optionally the values) which are determined by a calibration equation. 
\n\nArguments\n\n𝓂: the object created by @model and @parameters for which to get the solution.\n\nKeyword Arguments\n\nvalues [Default: false, Type: Bool]: return the values together with the parameter names\n\nExamples\n\nusing MacroModelling\n\n@model RBC begin\n 1 / c[0] = (β / c[1]) * (α * exp(z{TFP}[1]) * k[0]^(α - 1) + (1 - exp(z{δ}[1]) * δ))\n c[0] + k[0] = (1 - exp(z{δ}[0])δ) * k[-1] + q[0]\n q[0] = exp(z{TFP}[0]) * k[-1]^α\n for shock in [TFP, δ]\n z{shock}[0] = ρ{shock} * z{shock}[-1] + σ{shock} * (eps{shock}[x] + eps_news{shock}[x-1])\n end\n Δc_share[0] = log(c[0]/q[0]) - log(c[-1]/q[-1])\n Δk_4q[0] = log(k[0]) - log(k[-4])\nend\n\n@parameters RBC begin\n σ = 0.01\n ρ = 0.2\n capital_to_output = 1.5\n k[ss] / (4 * q[ss]) = capital_to_output | δ\n alpha = .5\n α = alpha\n β = 0.95\nend\n\nget_calibrated_parameters(RBC)\n# output\n1-element Vector{String}:\n \"δ\"\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.get_calibration_equation_parameters-Tuple{MacroModelling.ℳ}","page":"API","title":"MacroModelling.get_calibration_equation_parameters","text":"get_calibration_equation_parameters(𝓂)\n\n\nReturns the parameters used in calibration equations which are not used in the equations of the model (see capital_to_output in example).\n\nArguments\n\n𝓂: the object created by @model and @parameters for which to get the solution.\n\nExamples\n\nusing MacroModelling\n\n@model RBC begin\n 1 / c[0] = (β / c[1]) * (α * exp(z{TFP}[1]) * k[0]^(α - 1) + (1 - exp(z{δ}[1]) * δ))\n c[0] + k[0] = (1 - exp(z{δ}[0])δ) * k[-1] + q[0]\n q[0] = exp(z{TFP}[0]) * k[-1]^α\n for shock in [TFP, δ]\n z{shock}[0] = ρ{shock} * z{shock}[-1] + σ{shock} * (eps{shock}[x] + eps_news{shock}[x-1])\n end\n Δc_share[0] = log(c[0]/q[0]) - log(c[-1]/q[-1])\n Δk_4q[0] = log(k[0]) - log(k[-4])\nend\n\n@parameters RBC begin\n σ = 0.01\n ρ = 0.2\n capital_to_output = 1.5\n k[ss] / (4 * q[ss]) = capital_to_output | δ\n alpha = .5\n α = alpha\n β = 0.95\nend\n\nget_calibration_equation_parameters(RBC)\n# output\n1-element Vector{String}:\n \"capital_to_output\"\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.get_calibration_equations-Tuple{MacroModelling.ℳ}","page":"API","title":"MacroModelling.get_calibration_equations","text":"get_calibration_equations(𝓂)\n\n\nReturn the calibration equations declared in the @parameters block. Calibration equations are additional equations which are part of the non-stochastic steady state problem. The additional equation is matched with a calibated parameter which is part of the equations declared in the @model block and can be retrieved with: get_calibrated_parameters\n\nIn case programmatic model writing was used this function returns the parsed equations (see loop over shocks in example).\n\nNote that the ouput assumes the equations are equal to 0. 
As in, k / (q * 4) - capital_to_output implies k / (q * 4) - capital_to_output = 0 and therefore: k / (q * 4) = capital_to_output.\n\nArguments\n\n𝓂: the object created by @model and @parameters for which to get the solution.\n\nExamples\n\nusing MacroModelling\n\n@model RBC begin\n 1 / c[0] = (β / c[1]) * (α * exp(z{TFP}[1]) * k[0]^(α - 1) + (1 - exp(z{δ}[1]) * δ))\n c[0] + k[0] = (1 - exp(z{δ}[0])δ) * k[-1] + q[0]\n q[0] = exp(z{TFP}[0]) * k[-1]^α\n for shock in [TFP, δ]\n z{shock}[0] = ρ{shock} * z{shock}[-1] + σ{shock} * (eps{shock}[x] + eps_news{shock}[x-1])\n end\n Δc_share[0] = log(c[0]/q[0]) - log(c[-1]/q[-1])\n Δk_4q[0] = log(k[0]) - log(k[-4])\nend\n\n@parameters RBC begin\n σ = 0.01\n ρ = 0.2\n capital_to_output = 1.5\n k[ss] / (4 * q[ss]) = capital_to_output | δ\n alpha = .5\n α = alpha\n β = 0.95\nend\n\nget_calibration_equations(RBC)\n# output\n1-element Vector{String}:\n \"k / (q * 4) - capital_to_output\"\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.get_conditional_forecast-Tuple{MacroModelling.ℳ, Union{KeyedArray{Union{Nothing, Float64}}, KeyedArray{Float64}, SparseArrays.SparseMatrixCSC{Float64}, Matrix{Union{Nothing, Float64}}}}","page":"API","title":"MacroModelling.get_conditional_forecast","text":"get_conditional_forecast(\n 𝓂,\n conditions;\n shocks,\n initial_state,\n periods,\n parameters,\n variables,\n conditions_in_levels,\n algorithm,\n levels,\n verbose\n)\n\n\nReturn the conditional forecast given restrictions on endogenous variables and shocks (optional) in a 2-dimensional array. By default (see levels), the values represent absolute deviations from the relevant steady state (e.g. higher order perturbation algorithms are relative to the stochastic steady state). A constrained minimisation problem is solved to find the combinations of shocks with the smallest magnitude to match the conditions.\n\nArguments\n\n𝓂: the object created by @model and @parameters for which to get the solution.\nconditions [Type: Union{Matrix{Union{Nothing,Float64}}, SparseMatrixCSC{Float64}, KeyedArray{Union{Nothing,Float64}}, KeyedArray{Float64}}]: conditions for which to find the corresponding shocks. The input can have multiple formats, but for all types of entries the first dimension corresponds to the number of variables and the second dimension to the number of periods. The conditions can be specified using a matrix of type Matrix{Union{Nothing,Float64}}. In this case the conditions are matrix elements of type Float64 and all remaining (free) entries are nothing. You can also use a SparseMatrixCSC{Float64} as input. In this case only non-zero elements are taken as conditions. Note that you cannot condition variables to be zero using a SparseMatrixCSC{Float64} as input (use other input formats to do so). Another possibility to input conditions is by using a KeyedArray. You can use a KeyedArray{Union{Nothing,Float64}} where, similar to Matrix{Union{Nothing,Float64}}, all entries of type Float64 are recognised as conditions and all other entries have to be nothing. Furthermore, you can specify in the primary axis a subset of variables (of type Symbol or String) for which you specify conditions and all other variables are considered free. 
The same goes for the case when you use KeyedArray{Float64}} as input, whereas in this case the conditions for the specified variables bind for all periods specified in the KeyedArray, because there are no nothing entries permitted with this type.\n\nKeyword Arguments\n\nshocks [Default: nothing, Type: Union{Matrix{Union{Nothing,Float64}}, SparseMatrixCSC{Float64}, KeyedArray{Union{Nothing,Float64}}, KeyedArray{Float64}, Nothing} = nothing]: known values of shocks. This entry allows the user to include certain shock values. By entering restrictions on the shocks in this way the problem to match the conditions on endogenous variables is restricted to the remaining free shocks in the respective period. The input can have multiple formats, but for all types of entries the first dimension corresponds to the number of shocks and the second dimension to the number of periods. The shocks can be specified using a matrix of type Matrix{Union{Nothing,Float64}}. In this case the shocks are matrix elements of type Float64 and all remaining (free) entries are nothing. You can also use a SparseMatrixCSC{Float64} as input. In this case only non-zero elements are taken as certain shock values. Note that you cannot condition shocks to be zero using a SparseMatrixCSC{Float64} as input (use other input formats to do so). Another possibility to input known shocks is by using a KeyedArray. You can use a KeyedArray{Union{Nothing,Float64}} where, similar to Matrix{Union{Nothing,Float64}}, all entries of type Float64 are recognised as known shocks and all other entries have to be nothing. Furthermore, you can specify in the primary axis a subset of shocks (of type Symbol or String) for which you specify values and all other shocks are considered free. The same goes for the case when you use KeyedArray{Float64}} as input, whereas in this case the values for the specified shocks bind for all periods specified in the KeyedArray, because there are no nothing entries permitted with this type.\ninitial_state [Default: [0.0], Type: Union{Vector{Vector{Float64}},Vector{Float64}}]: The initial state defines the starting point for the model and is relevant for normal IRFs. In the case of pruned solution algorithms the initial state can be given as multiple state vectors (Vector{Vector{Float64}}). In this case the initial state must be given in deviations from the non-stochastic steady state. In all other cases the initial state must be given in levels. If a pruned solution algorithm is selected and initial state is a Vector{Float64} then it impacts the first order initial state vector only. The state includes all variables as well as exogenous variables in leads or lags if present.\nperiods [Default: 40, Type: Int]: the total number of periods is the sum of the argument provided here and the maximum of periods of the shocks or conditions argument.\nparameters [Default: nothing]: If nothing is provided, the solution is calculated for the parameters defined previously. Acceptable inputs are a vector of parameter values, a vector or tuple of pairs of the parameter Symbol or String and value. If the new parameter values differ from the previously defined the solution will be recalculated.\nvariables [Default: :all_excluding_obc]: variables for which to show the results. Inputs can be a variable name passed on as either a Symbol or String (e.g. :y or \"y\"), or Tuple, Matrix or Vector of String or Symbol. Any variables not part of the model will trigger a warning. 
:all_excluding_auxilliary_and_obc contains all shocks less those related to auxilliary variables and related to occasionally binding constraints (obc). :all_excluding_obc contains all shocks less those related to auxilliary variables. :all will contain all variables.\nconditions_in_levels [Default: true, Type: Bool]: indicator whether the conditions are provided in levels. If true the input to the conditions argument will have the non stochastic steady state substracted.\nalgorithm [Default: :first_order, Type: Symbol]: algorithm to solve for the dynamics of the model.\nlevels [Default: false, Type: Bool]: return levels or absolute deviations from steady state corresponding to the solution algorithm (e.g. stochastic steady state for higher order solution algorithms).\nverbose [Default: false, Type: Bool]: print information about how the NSSS is solved (symbolic or numeric), which solver is used (Levenberg-Marquardt...), and the maximum absolute error.\n\nExamples\n\nusing MacroModelling\nusing SparseArrays, AxisKeys\n\n@model RBC_CME begin\n y[0]=A[0]*k[-1]^alpha\n 1/c[0]=beta*1/c[1]*(alpha*A[1]*k[0]^(alpha-1)+(1-delta))\n 1/c[0]=beta*1/c[1]*(R[0]/Pi[+1])\n R[0] * beta =(Pi[0]/Pibar)^phi_pi\n A[0]*k[-1]^alpha=c[0]+k[0]-(1-delta*z_delta[0])*k[-1]\n z_delta[0] = 1 - rho_z_delta + rho_z_delta * z_delta[-1] + std_z_delta * delta_eps[x]\n A[0] = 1 - rhoz + rhoz * A[-1] + std_eps * eps_z[x]\nend\n\n\n@parameters RBC_CME begin\n alpha = .157\n beta = .999\n delta = .0226\n Pibar = 1.0008\n phi_pi = 1.5\n rhoz = .9\n std_eps = .0068\n rho_z_delta = .9\n std_z_delta = .005\nend\n\n# c is conditioned to deviate by 0.01 in period 1 and y is conditioned to deviate by 0.02 in period 3\nconditions = KeyedArray(Matrix{Union{Nothing,Float64}}(undef,2,2),Variables = [:c,:y], Periods = 1:2)\nconditions[1,1] = .01\nconditions[2,2] = .02\n\n# in period 2 second shock (eps_z) is conditioned to take a value of 0.05\nshocks = Matrix{Union{Nothing,Float64}}(undef,2,1)\nshocks[1,1] = .05\n\nget_conditional_forecast(RBC_CME, conditions, shocks = shocks, conditions_in_levels = false)\n# output\n2-dimensional KeyedArray(NamedDimsArray(...)) with keys:\n↓ Variables_and_shocks ∈ 9-element Vector{Symbol}\n→ Periods ∈ 42-element UnitRange{Int64}\nAnd data, 9×42 Matrix{Float64}:\n (1) (2) … (41) (42)\n (:A) 0.0313639 0.0134792 0.000221372 0.000199235\n (:Pi) 0.000780257 0.00020929 -0.000146071 -0.000140137\n (:R) 0.00117156 0.00031425 -0.000219325 -0.000210417\n (:c) 0.01 0.00600605 0.00213278 0.00203751\n (:k) 0.034584 0.0477482 … 0.0397631 0.0380482\n (:y) 0.0446375 0.02 0.00129544 0.001222\n (:z_delta) 0.00025 0.000225 3.69522e-6 3.3257e-6\n (:delta_eps) 0.05 0.0 0.0 0.0\n (:eps_z) 4.61234 -2.16887 0.0 0.0\n\n# The same can be achieved with the other input formats:\n# conditions = Matrix{Union{Nothing,Float64}}(undef,7,2)\n# conditions[4,1] = .01\n# conditions[6,2] = .02\n\n# using SparseArrays\n# conditions = spzeros(7,2)\n# conditions[4,1] = .01\n# conditions[6,2] = .02\n\n# shocks = KeyedArray(Matrix{Union{Nothing,Float64}}(undef,1,1),Variables = [:delta_eps], Periods = [1])\n# shocks[1,1] = .05\n\n# using SparseArrays\n# shocks = spzeros(2,1)\n# shocks[1,1] = .05\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.get_conditional_variance_decomposition-Tuple{MacroModelling.ℳ}","page":"API","title":"MacroModelling.get_conditional_variance_decomposition","text":"get_conditional_variance_decomposition(\n 𝓂;\n periods,\n parameters,\n verbose\n)\n\n\nReturn the conditional variance decomposition of 
endogenous variables with regards to the shocks using the linearised solution. \n\nArguments\n\n𝓂: the object created by @model and @parameters for which to get the solution.\n\nKeyword Arguments\n\nperiods [Default: [1:20...,Inf], Type: Union{Vector{Int},Vector{Float64},UnitRange{Int64}}]: vector of periods for which to calculate the conditional variance decomposition. If the vector conatins Inf, also the unconditional variance decomposition is calculated (same output as get_variance_decomposition).\nparameters [Default: nothing]: If nothing is provided, the solution is calculated for the parameters defined previously. Acceptable inputs are a vector of parameter values, a vector or tuple of pairs of the parameter Symbol or String and value. If the new parameter values differ from the previously defined the solution will be recalculated.\nverbose [Default: false, Type: Bool]: print information about how the NSSS is solved (symbolic or numeric), which solver is used (Levenberg-Marquardt...), and the maximum absolute error.\n\nExamples\n\nusing MacroModelling\n\n@model RBC_CME begin\n y[0]=A[0]*k[-1]^alpha\n 1/c[0]=beta*1/c[1]*(alpha*A[1]*k[0]^(alpha-1)+(1-delta))\n 1/c[0]=beta*1/c[1]*(R[0]/Pi[+1])\n R[0] * beta =(Pi[0]/Pibar)^phi_pi\n A[0]*k[-1]^alpha=c[0]+k[0]-(1-delta*z_delta[0])*k[-1]\n z_delta[0] = 1 - rho_z_delta + rho_z_delta * z_delta[-1] + std_z_delta * delta_eps[x]\n A[0] = 1 - rhoz + rhoz * A[-1] + std_eps * eps_z[x]\nend\n\n\n@parameters RBC_CME begin\n alpha = .157\n beta = .999\n delta = .0226\n Pibar = 1.0008\n phi_pi = 1.5\n rhoz = .9\n std_eps = .0068\n rho_z_delta = .9\n std_z_delta = .005\nend\n\nget_conditional_variance_decomposition(RBC_CME)\n# output\n3-dimensional KeyedArray(NamedDimsArray(...)) with keys:\n↓ Variables ∈ 7-element Vector{Symbol}\n→ Shocks ∈ 2-element Vector{Symbol}\n◪ Periods ∈ 21-element Vector{Float64}\nAnd data, 7×2×21 Array{Float64, 3}:\n[showing 3 of 21 slices]\n[:, :, 1] ~ (:, :, 1.0):\n (:delta_eps) (:eps_z)\n (:A) 0.0 1.0\n (:Pi) 0.00158668 0.998413\n (:R) 0.00158668 0.998413\n (:c) 0.0277348 0.972265\n (:k) 0.00869568 0.991304\n (:y) 0.0 1.0\n (:z_delta) 1.0 0.0\n\n[:, :, 11] ~ (:, :, 11.0):\n (:delta_eps) (:eps_z)\n (:A) 5.88653e-32 1.0\n (:Pi) 0.0245641 0.975436\n (:R) 0.0245641 0.975436\n (:c) 0.0175249 0.982475\n (:k) 0.00869568 0.991304\n (:y) 7.63511e-5 0.999924\n (:z_delta) 1.0 0.0\n\n[:, :, 21] ~ (:, :, Inf):\n (:delta_eps) (:eps_z)\n (:A) 9.6461e-31 1.0\n (:Pi) 0.0156771 0.984323\n (:R) 0.0156771 0.984323\n (:c) 0.0134672 0.986533\n (:k) 0.00869568 0.991304\n (:y) 0.000313462 0.999687\n (:z_delta) 1.0 0.0\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.get_corr","page":"API","title":"MacroModelling.get_corr","text":"See get_correlation\n\n\n\n\n\n","category":"function"},{"location":"api/#MacroModelling.get_correlation-Tuple{MacroModelling.ℳ}","page":"API","title":"MacroModelling.get_correlation","text":"get_correlation(𝓂; parameters, algorithm, verbose)\n\n\nReturn the correlations of endogenous variables using the first, pruned second, or pruned third order perturbation solution. \n\nArguments\n\n𝓂: the object created by @model and @parameters for which to get the solution.\n\nKeyword Arguments\n\nparameters [Default: nothing]: If nothing is provided, the solution is calculated for the parameters defined previously. Acceptable inputs are a vector of parameter values, a vector or tuple of pairs of the parameter Symbol or String and value. 
If the new parameter values differ from the previously defined the solution will be recalculated.\nalgorithm [Default: :first_order, Type: Symbol]: algorithm to solve for the dynamics of the model.\nverbose [Default: false, Type: Bool]: print information about how the NSSS is solved (symbolic or numeric), which solver is used (Levenberg-Marquardt...), and the maximum absolute error.\n\nExamples\n\nusing MacroModelling\n\n@model RBC begin\n 1 / c[0] = (β / c[1]) * (α * exp(z[1]) * k[0]^(α - 1) + (1 - δ))\n c[0] + k[0] = (1 - δ) * k[-1] + q[0]\n q[0] = exp(z[0]) * k[-1]^α\n z[0] = ρ * z[-1] + std_z * eps_z[x]\nend\n\n@parameters RBC begin\n std_z = 0.01\n ρ = 0.2\n δ = 0.02\n α = 0.5\n β = 0.95\nend\n\nget_correlation(RBC)\n# output\n2-dimensional KeyedArray(NamedDimsArray(...)) with keys:\n↓ Variables ∈ 4-element Vector{Symbol}\n→ 𝑉𝑎𝑟𝑖𝑎𝑏𝑙𝑒𝑠 ∈ 4-element Vector{Symbol}\nAnd data, 4×4 Matrix{Float64}:\n (:c) (:k) (:q) (:z)\n (:c) 1.0 0.999812 0.550168 0.314562\n (:k) 0.999812 1.0 0.533879 0.296104\n (:q) 0.550168 0.533879 1.0 0.965726\n (:z) 0.314562 0.296104 0.965726 1.0\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.get_cov","page":"API","title":"MacroModelling.get_cov","text":"Wrapper for get_moments with covariance = true and non_stochastic_steady_state = false, variance = false, standard_deviation = false.\n\n\n\n\n\n","category":"function"},{"location":"api/#MacroModelling.get_covariance-Tuple","page":"API","title":"MacroModelling.get_covariance","text":"Wrapper for get_moments with covariance = true and non_stochastic_steady_state = false, variance = false, standard_deviation = false.\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.get_dynamic_auxilliary_variables-Tuple{MacroModelling.ℳ}","page":"API","title":"MacroModelling.get_dynamic_auxilliary_variables","text":"get_dynamic_auxilliary_variables(𝓂)\n\n\nReturns the auxilliary variables, without timing subscripts, part of the augmented system of equations describing the model dynamics. Augmented means that, in case of variables with leads or lags larger than 1, or exogenous shocks with leads or lags, the system is augemented by auxilliary variables containing variables or shocks in lead or lag. because the original equations included variables with leads or lags certain expression cannot be negative (e.g. 
given log(c/q) an auxilliary variable is created for c/q).\n\nSee get_dynamic_equations for more details on the auxilliary variables and equations.\n\nArguments\n\n𝓂: the object created by @model and @parameters for which to get the solution.\n\nExamples\n\nusing MacroModelling\n\n@model RBC begin\n 1 / c[0] = (β / c[1]) * (α * exp(z{TFP}[1]) * k[0]^(α - 1) + (1 - exp(z{δ}[1]) * δ))\n c[0] + k[0] = (1 - exp(z{δ}[0])δ) * k[-1] + q[0]\n q[0] = exp(z{TFP}[0]) * k[-1]^α\n for shock in [TFP, δ]\n z{shock}[0] = ρ{shock} * z{shock}[-1] + σ{shock} * (eps{shock}[x] + eps_news{shock}[x-1])\n end\n Δc_share[0] = log(c[0]/q[0]) - log(c[-1]/q[-1])\n Δk_4q[0] = log(k[0]) - log(k[-4])\nend\n\n@parameters RBC begin\n σ = 0.01\n ρ = 0.2\n capital_to_output = 1.5\n k[ss] / (4 * q[ss]) = capital_to_output | δ\n alpha = .5\n α = alpha\n β = 0.95\nend\n\nget_dynamic_auxilliary_variables(RBC)\n# output\n3-element Vector{String}:\n \"kᴸ⁽⁻²⁾\"\n \"kᴸ⁽⁻³⁾\"\n \"kᴸ⁽⁻¹⁾\"\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.get_dynamic_equations-Tuple{MacroModelling.ℳ}","page":"API","title":"MacroModelling.get_dynamic_equations","text":"get_dynamic_equations(𝓂)\n\n\nReturn the augmented system of equations describing the model dynamics. Augmented means that, in case of variables with leads or lags larger than 1, or exogenous shocks with leads or lags, the system is augemented by auxilliary equations containing variables in lead or lag. The augmented system features only variables which are in the present [0], future [1], or past [-1]. For example, Δk_4q[0] = log(k[0]) - log(k[-3]) contains k[-3]. By introducing 2 auxilliary variables (kᴸ⁽⁻¹⁾ and kᴸ⁽⁻²⁾ with ᴸ being the lead/lag operator) and augmenting the system (kᴸ⁽⁻²⁾[0] = kᴸ⁽⁻¹⁾[-1] and kᴸ⁽⁻¹⁾[0] = k[-1]) we can ensure that the timing is smaller than 1 in absolute terms: Δk_4q[0] - (log(k[0]) - log(kᴸ⁽⁻²⁾[-1])).\n\nIn case programmatic model writing was used this function returns the parsed equations (see loop over shocks in example).\n\nNote that the ouput assumes the equations are equal to 0. 
As in, kᴸ⁽⁻¹⁾[0] - k[-1] implies kᴸ⁽⁻¹⁾[0] - k[-1] = 0 and therefore: kᴸ⁽⁻¹⁾[0] = k[-1].\n\nArguments\n\n𝓂: the object created by @model and @parameters for which to get the solution.\n\nExamples\n\nusing MacroModelling\n\n@model RBC begin\n 1 / c[0] = (β / c[1]) * (α * exp(z{TFP}[1]) * k[0]^(α - 1) + (1 - exp(z{δ}[1]) * δ))\n c[0] + k[0] = (1 - exp(z{δ}[0])δ) * k[-1] + q[0]\n q[0] = exp(z{TFP}[0]) * k[-1]^α\n for shock in [TFP, δ]\n z{shock}[0] = ρ{shock} * z{shock}[-1] + σ{shock} * (eps{shock}[x] + eps_news{shock}[x-1])\n end\n Δc_share[0] = log(c[0]/q[0]) - log(c[-1]/q[-1])\n Δk_4q[0] = log(k[0]) - log(k[-4])\nend\n\n@parameters RBC begin\n σ = 0.01\n ρ = 0.2\n capital_to_output = 1.5\n k[ss] / (4 * q[ss]) = capital_to_output | δ\n alpha = .5\n α = alpha\n β = 0.95\nend\n\nget_dynamic_equations(RBC)\n# output\n12-element Vector{String}:\n \"1 / c[0] - (β / c[1]) * (α * ex\" ⋯ 25 bytes ⋯ \" - 1) + (1 - exp(z{δ}[1]) * δ))\"\n \"(c[0] + k[0]) - ((1 - exp(z{δ}[0]) * δ) * k[-1] + q[0])\"\n \"q[0] - exp(z{TFP}[0]) * k[-1] ^ α\"\n \"eps_news{TFP}[0] - eps_news{TFP}[x]\"\n \"z{TFP}[0] - (ρ{TFP} * z{TFP}[-1] + σ{TFP} * (eps{TFP}[x] + eps_news{TFP}[-1]))\"\n \"eps_news{δ}[0] - eps_news{δ}[x]\"\n \"z{δ}[0] - (ρ{δ} * z{δ}[-1] + σ{δ} * (eps{δ}[x] + eps_news{δ}[-1]))\"\n \"Δc_share[0] - (log(c[0] / q[0]) - log(c[-1] / q[-1]))\"\n \"kᴸ⁽⁻³⁾[0] - kᴸ⁽⁻²⁾[-1]\"\n \"kᴸ⁽⁻²⁾[0] - kᴸ⁽⁻¹⁾[-1]\"\n \"kᴸ⁽⁻¹⁾[0] - k[-1]\"\n \"Δk_4q[0] - (log(k[0]) - log(kᴸ⁽⁻³⁾[-1]))\"\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.get_equations-Tuple{MacroModelling.ℳ}","page":"API","title":"MacroModelling.get_equations","text":"get_equations(𝓂)\n\n\nReturn the equations of the model. In case programmatic model writing was used this function returns the parsed equations (see loop over shocks in example).\n\nArguments\n\n𝓂: the object created by @model and @parameters for which to get the solution.\n\nExamples\n\nusing MacroModelling\n\n@model RBC begin\n 1 / c[0] = (β / c[1]) * (α * exp(z{TFP}[1]) * k[0]^(α - 1) + (1 - exp(z{δ}[1]) * δ))\n c[0] + k[0] = (1 - exp(z{δ}[0])δ) * k[-1] + q[0]\n q[0] = exp(z{TFP}[0]) * k[-1]^α\n for shock in [TFP, δ]\n z{shock}[0] = ρ{shock} * z{shock}[-1] + σ{shock} * (eps{shock}[x] + eps_news{shock}[x-1])\n end\n Δc_share[0] = log(c[0]/q[0]) - log(c[-1]/q[-1])\n Δk_4q[0] = log(k[0]) - log(k[-4])\nend\n\n@parameters RBC begin\n σ = 0.01\n ρ = 0.2\n capital_to_output = 1.5\n k[ss] / (4 * q[ss]) = capital_to_output | δ\n alpha = .5\n α = alpha\n β = 0.95\nend\n\nget_equations(RBC)\n# output\n7-element Vector{String}:\n \"1 / c[0] = (β / c[1]) * (α * ex\" ⋯ 25 bytes ⋯ \" - 1) + (1 - exp(z{δ}[1]) * δ))\"\n \"c[0] + k[0] = (1 - exp(z{δ}[0]) * δ) * k[-1] + q[0]\"\n \"q[0] = exp(z{TFP}[0]) * k[-1] ^ α\"\n \"z{TFP}[0] = ρ{TFP} * z{TFP}[-1]\" ⋯ 18 bytes ⋯ \"TFP}[x] + eps_news{TFP}[x - 1])\"\n \"z{δ}[0] = ρ{δ} * z{δ}[-1] + σ{δ} * (eps{δ}[x] + eps_news{δ}[x - 1])\"\n \"Δc_share[0] = log(c[0] / q[0]) - log(c[-1] / q[-1])\"\n \"Δk_4q[0] = log(k[0]) - log(k[-4])\"\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.get_estimated_shocks-Tuple{MacroModelling.ℳ, KeyedArray{Float64}}","page":"API","title":"MacroModelling.get_estimated_shocks","text":"get_estimated_shocks(\n 𝓂,\n data;\n parameters,\n algorithm,\n filter,\n warmup_iterations,\n data_in_levels,\n smooth,\n verbose\n)\n\n\nReturn the estimated shocks based on the inversion filter (depending on the filter keyword argument), or Kalman filter or smoother (depending on the smooth keyword argument) using the provided data and 
(non-)linear solution of the model. Data is by default assumed to be in levels unless data_in_levels is set to false.\n\nArguments\n\n𝓂: the object created by @model and @parameters for which to get the solution.\ndata [Type: KeyedArray]: data matrix with variables (String or Symbol) in rows and time in columns\n\nKeyword Arguments\n\nparameters [Default: nothing]: If nothing is provided, the solution is calculated for the parameters defined previously. Acceptable inputs are a vector of parameter values, a vector or tuple of pairs of the parameter Symbol or String and value. If the new parameter values differ from the previously defined the solution will be recalculated.\nalgorithm [Default: :first_order, Type: Symbol]: algorithm to solve for the dynamics of the model.\nfilter [Default: :kalman, Type: Symbol]: filter used to compute the shocks given the data, model, and parameters. The Kalman filter only works for linear problems, whereas the inversion filter (:inversion) works for linear and nonlinear models. If a nonlinear solution algorithm is selected, the inversion filter is used.\nwarmup_iterations [Default: 0, Type: Int]: periods added before the first observation for which shocks are computed such that the first observation is matched. A larger value alleviates the problem that the initial value is the relevant steady state.\ndata_in_levels [Default: true, Type: Bool]: indicator whether the data is provided in levels. If true the input to the data argument will have the non stochastic steady state substracted.\nsmooth [Default: true, Type: Bool]: whether to return smoothed (true) or filtered (false) shocks. Only works for the Kalman filter. The inversion filter only returns filtered shocks.\nverbose [Default: false, Type: Bool]: print information about how the NSSS is solved (symbolic or numeric), which solver is used (Levenberg-Marquardt...), and the maximum absolute error.\n\nExamples\n\nusing MacroModelling\n\n@model RBC begin\n 1 / c[0] = (β / c[1]) * (α * exp(z[1]) * k[0]^(α - 1) + (1 - δ))\n c[0] + k[0] = (1 - δ) * k[-1] + q[0]\n q[0] = exp(z[0]) * k[-1]^α\n z[0] = ρ * z[-1] + std_z * eps_z[x]\nend\n\n@parameters RBC begin\n std_z = 0.01\n ρ = 0.2\n δ = 0.02\n α = 0.5\n β = 0.95\nend\n\nsimulation = simulate(RBC)\n\nget_estimated_shocks(RBC,simulation([:c],:,:simulate))\n# output\n2-dimensional KeyedArray(NamedDimsArray(...)) with keys:\n↓ Shocks ∈ 1-element Vector{Symbol}\n→ Periods ∈ 40-element UnitRange{Int64}\nAnd data, 1×40 Matrix{Float64}:\n (1) (2) (3) (4) … (37) (38) (39) (40)\n (:eps_z₍ₓ₎) 0.0603617 0.614652 -0.519048 0.711454 -0.873774 1.27918 -0.929701 -0.2255\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.get_estimated_variable_standard_deviations-Tuple{MacroModelling.ℳ, KeyedArray{Float64}}","page":"API","title":"MacroModelling.get_estimated_variable_standard_deviations","text":"get_estimated_variable_standard_deviations(\n 𝓂,\n data;\n parameters,\n data_in_levels,\n smooth,\n verbose\n)\n\n\nReturn the standard deviations of the Kalman smoother or filter (depending on the smooth keyword argument) estimates of the model variables based on the provided data and first order solution of the model. 
Data is by default assumed to be in levels unless data_in_levels is set to false.\n\nArguments\n\n𝓂: the object created by @model and @parameters for which to get the solution.\ndata [Type: KeyedArray]: data matrix with variables (String or Symbol) in rows and time in columns\n\nKeyword Arguments\n\nparameters [Default: nothing]: If nothing is provided, the solution is calculated for the parameters defined previously. Acceptable inputs are a vector of parameter values, a vector or tuple of pairs of the parameter Symbol or String and value. If the new parameter values differ from the previously defined the solution will be recalculated.\ndata_in_levels [Default: true, Type: Bool]: indicator whether the data is provided in levels. If true the input to the data argument will have the non stochastic steady state substracted.\nsmooth [Default: true, Type: Bool]: whether to return smoothed (true) or filtered (false) shocks. Only works for the Kalman filter. The inversion filter only returns filtered shocks.\nverbose [Default: false, Type: Bool]: print information about how the NSSS is solved (symbolic or numeric), which solver is used (Levenberg-Marquardt...), and the maximum absolute error.\n\nExamples\n\nusing MacroModelling\n\n@model RBC begin\n 1 / c[0] = (β / c[1]) * (α * exp(z[1]) * k[0]^(α - 1) + (1 - δ))\n c[0] + k[0] = (1 - δ) * k[-1] + q[0]\n q[0] = exp(z[0]) * k[-1]^α\n z[0] = ρ * z[-1] + std_z * eps_z[x]\nend\n\n@parameters RBC begin\n std_z = 0.01\n ρ = 0.2\n δ = 0.02\n α = 0.5\n β = 0.95\nend\n\nsimulation = simulate(RBC)\n\nget_estimated_variable_standard_deviations(RBC,simulation([:c],:,:simulate))\n# output\n2-dimensional KeyedArray(NamedDimsArray(...)) with keys:\n↓ Standard_deviations ∈ 4-element Vector{Symbol}\n→ Periods ∈ 40-element UnitRange{Int64}\nAnd data, 4×40 Matrix{Float64}:\n (1) (2) (3) (4) … (38) (39) (40)\n (:c) 1.23202e-9 1.84069e-10 8.23181e-11 8.23181e-11 8.23181e-11 8.23181e-11 0.0\n (:k) 0.00509299 0.000382934 2.87922e-5 2.16484e-6 1.6131e-9 9.31323e-10 1.47255e-9\n (:q) 0.0612887 0.0046082 0.000346483 2.60515e-5 1.31709e-9 1.31709e-9 9.31323e-10\n (:z) 0.00961766 0.000723136 5.43714e-5 4.0881e-6 3.08006e-10 3.29272e-10 2.32831e-10\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.get_estimated_variables-Tuple{MacroModelling.ℳ, KeyedArray{Float64}}","page":"API","title":"MacroModelling.get_estimated_variables","text":"get_estimated_variables(\n 𝓂,\n data;\n parameters,\n algorithm,\n filter,\n warmup_iterations,\n data_in_levels,\n levels,\n smooth,\n verbose\n)\n\n\nReturn the estimated variables (in levels by default, see levels keyword argument) based on the inversion filter (depending on the filter keyword argument), or Kalman filter or smoother (depending on the smooth keyword argument) using the provided data and (non-)linear solution of the model. Data is by default assumed to be in levels unless data_in_levels is set to false.\n\nArguments\n\n𝓂: the object created by @model and @parameters for which to get the solution.\ndata [Type: KeyedArray]: data matrix with variables (String or Symbol) in rows and time in columns\n\nKeyword Arguments\n\nparameters [Default: nothing]: If nothing is provided, the solution is calculated for the parameters defined previously. Acceptable inputs are a vector of parameter values, a vector or tuple of pairs of the parameter Symbol or String and value. 
If the new parameter values differ from the previously defined the solution will be recalculated.\nalgorithm [Default: :first_order, Type: Symbol]: algorithm to solve for the dynamics of the model.\nfilter [Default: :kalman, Type: Symbol]: filter used to compute the shocks given the data, model, and parameters. The Kalman filter only works for linear problems, whereas the inversion filter (:inversion) works for linear and nonlinear models. If a nonlinear solution algorithm is selected, the inversion filter is used.\nwarmup_iterations [Default: 0, Type: Int]: periods added before the first observation for which shocks are computed such that the first observation is matched. A larger value alleviates the problem that the initial value is the relevant steady state.\ndata_in_levels [Default: true, Type: Bool]: indicator whether the data is provided in levels. If true the input to the data argument will have the non stochastic steady state substracted.\nlevels [Default: false, Type: Bool]: return levels or absolute deviations from steady state corresponding to the solution algorithm (e.g. stochastic steady state for higher order solution algorithms).\nsmooth [Default: true, Type: Bool]: whether to return smoothed (true) or filtered (false) shocks. Only works for the Kalman filter. The inversion filter only returns filtered shocks.\nverbose [Default: false, Type: Bool]: print information about how the NSSS is solved (symbolic or numeric), which solver is used (Levenberg-Marquardt...), and the maximum absolute error.\n\nExamples\n\nusing MacroModelling\n\n@model RBC begin\n 1 / c[0] = (β / c[1]) * (α * exp(z[1]) * k[0]^(α - 1) + (1 - δ))\n c[0] + k[0] = (1 - δ) * k[-1] + q[0]\n q[0] = exp(z[0]) * k[-1]^α\n z[0] = ρ * z[-1] + std_z * eps_z[x]\nend\n\n@parameters RBC begin\n std_z = 0.01\n ρ = 0.2\n δ = 0.02\n α = 0.5\n β = 0.95\nend\n\nsimulation = simulate(RBC)\n\nget_estimated_variables(RBC,simulation([:c],:,:simulate))\n# output\n2-dimensional KeyedArray(NamedDimsArray(...)) with keys:\n↓ Variables ∈ 4-element Vector{Symbol}\n→ Periods ∈ 40-element UnitRange{Int64}\nAnd data, 4×40 Matrix{Float64}:\n (1) (2) (3) (4) … (37) (38) (39) (40)\n (:c) 5.92901 5.92797 5.92847 5.92048 5.95845 5.95697 5.95686 5.96173\n (:k) 47.3185 47.3087 47.3125 47.2392 47.6034 47.5969 47.5954 47.6402\n (:q) 6.87159 6.86452 6.87844 6.79352 7.00476 6.9026 6.90727 6.95841\n (:z) -0.00109471 -0.00208056 4.43613e-5 -0.0123318 0.0162992 0.000445065 0.00119089 0.00863586\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.get_fevd","page":"API","title":"MacroModelling.get_fevd","text":"See get_conditional_variance_decomposition\n\n\n\n\n\n","category":"function"},{"location":"api/#MacroModelling.get_first_order_solution-Tuple","page":"API","title":"MacroModelling.get_first_order_solution","text":"Wrapper for get_solution with algorithm = :first_order.\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.get_forecast_error_variance_decomposition","page":"API","title":"MacroModelling.get_forecast_error_variance_decomposition","text":"See get_conditional_variance_decomposition\n\n\n\n\n\n","category":"function"},{"location":"api/#MacroModelling.get_girf-Tuple","page":"API","title":"MacroModelling.get_girf","text":"Wrapper for get_irf with shocks = :simulate.\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.get_irf-Tuple{MacroModelling.ℳ, Vector}","page":"API","title":"MacroModelling.get_irf","text":"get_irf(\n 𝓂,\n parameters;\n periods,\n variables,\n shocks,\n negative_shock,\n 
initial_state,\n levels,\n verbose\n)\n\n\nReturn impulse response functions (IRFs) of the model in a 3-dimensional array. Function to use when differentiating IRFs with repect to parameters.\n\nArguments\n\n𝓂: the object created by @model and @parameters for which to get the solution.\nparameters [Type: Vector]: Parameter values in alphabetical order (sorted by parameter name).\n\nKeyword Arguments\n\nperiods [Default: 40, Type: Int]: number of periods for which to calculate the IRFs. In case a matrix of shocks was provided, periods defines how many periods after the series of shocks the simulation continues.\nvariables [Default: :all_excluding_obc]: variables for which to show the results. Inputs can be a variable name passed on as either a Symbol or String (e.g. :y or \"y\"), or Tuple, Matrix or Vector of String or Symbol. Any variables not part of the model will trigger a warning. :all_excluding_auxilliary_and_obc contains all shocks less those related to auxilliary variables and related to occasionally binding constraints (obc). :all_excluding_obc contains all shocks less those related to auxilliary variables. :all will contain all variables.\nshocks [Default: :all_excluding_obc]: shocks for which to calculate the IRFs. Inputs can be a shock name passed on as either a Symbol or String (e.g. :y, or \"y\"), or Tuple, Matrix or Vector of String or Symbol. :simulate triggers random draws of all shocks (excluding occasionally binding constraints (obc) related shocks). :all_excluding_obc will contain all shocks but not the obc related ones.:all will contain also the obc related shocks. A series of shocks can be passed on using either a Matrix{Float64}, or a KeyedArray{Float64} as input with shocks (Symbol or String) in rows and periods in columns. The period of the simulation will correspond to the length of the input in the period dimension + the number of periods defined in periods. If the series of shocks is input as a KeyedArray{Float64} make sure to name the rows with valid shock names of type Symbol. Any shocks not part of the model will trigger a warning. :none in combination with an initial_state can be used for deterministic simulations.\nnegative_shock [Default: false, Type: Bool]: calculate a negative shock. Relevant for generalised IRFs.\ninitial_state [Default: [0.0], Type: Vector{Float64}]: provide state (in levels, not deviations) from which to start IRFs. Relevant for normal IRFs. The state includes all variables as well as exogenous variables in leads or lags if present.\nlevels [Default: false, Type: Bool]: return levels or absolute deviations from steady state corresponding to the solution algorithm (e.g. 
stochastic steady state for higher order solution algorithms).\nverbose [Default: false, Type: Bool]: print information about how the NSSS is solved (symbolic or numeric), which solver is used (Levenberg-Marquardt...), and the maximum absolute error.\n\nExamples\n\nusing MacroModelling\n\n@model RBC begin\n 1 / c[0] = (β / c[1]) * (α * exp(z[1]) * k[0]^(α - 1) + (1 - δ))\n c[0] + k[0] = (1 - δ) * k[-1] + q[0]\n q[0] = exp(z[0]) * k[-1]^α\n z[0] = ρ * z[-1] + std_z * eps_z[x]\nend\n\n@parameters RBC begin\n std_z = 0.01\n ρ = 0.2\n δ = 0.02\n α = 0.5\n β = 0.95\nend\n\nget_irf(RBC, RBC.parameter_values)\n# output\n4×40×1 Array{Float64, 3}:\n[:, :, 1] =\n 0.00674687 0.00729773 0.00715114 0.00687615 … 0.00146962 0.00140619\n 0.0620937 0.0718322 0.0712153 0.0686381 0.0146789 0.0140453\n 0.0688406 0.0182781 0.00797091 0.0057232 0.00111425 0.00106615\n 0.01 0.002 0.0004 8.0e-5 2.74878e-29 5.49756e-30\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.get_irf-Tuple{MacroModelling.ℳ}","page":"API","title":"MacroModelling.get_irf","text":"get_irf(\n 𝓂;\n periods,\n algorithm,\n parameters,\n variables,\n shocks,\n negative_shock,\n generalised_irf,\n initial_state,\n levels,\n ignore_obc,\n verbose\n)\n\n\nReturn impulse response functions (IRFs) of the model in a 3-dimensional KeyedArray. By default (see levels), the values represent absolute deviations from the relevant steady state (e.g. higher order perturbation algorithms are relative to the stochastic steady state).\n\nArguments\n\n𝓂: the object created by @model and @parameters for which to get the solution.\n\nKeyword Arguments\n\nperiods [Default: 40, Type: Int]: number of periods for which to calculate the IRFs. In case a matrix of shocks was provided, periods defines how many periods after the series of shocks the simulation continues.\nalgorithm [Default: :first_order, Type: Symbol]: algorithm to solve for the dynamics of the model.\nparameters [Default: nothing]: If nothing is provided, the solution is calculated for the parameters defined previously. Acceptable inputs are a vector of parameter values, a vector or tuple of pairs of the parameter Symbol or String and value. If the new parameter values differ from the previously defined the solution will be recalculated.\nvariables [Default: :all_excluding_obc]: variables for which to show the results. Inputs can be a variable name passed on as either a Symbol or String (e.g. :y or \"y\"), or Tuple, Matrix or Vector of String or Symbol. Any variables not part of the model will trigger a warning. :all_excluding_auxilliary_and_obc contains all shocks less those related to auxilliary variables and related to occasionally binding constraints (obc). :all_excluding_obc contains all shocks less those related to auxilliary variables. :all will contain all variables.\nshocks [Default: :all_excluding_obc]: shocks for which to calculate the IRFs. Inputs can be a shock name passed on as either a Symbol or String (e.g. :y, or \"y\"), or Tuple, Matrix or Vector of String or Symbol. :simulate triggers random draws of all shocks (excluding occasionally binding constraints (obc) related shocks). :all_excluding_obc will contain all shocks but not the obc related ones.:all will contain also the obc related shocks. A series of shocks can be passed on using either a Matrix{Float64}, or a KeyedArray{Float64} as input with shocks (Symbol or String) in rows and periods in columns. 
The period of the simulation will correspond to the length of the input in the period dimension + the number of periods defined in periods. If the series of shocks is input as a KeyedArray{Float64} make sure to name the rows with valid shock names of type Symbol. Any shocks not part of the model will trigger a warning. :none in combination with an initial_state can be used for deterministic simulations.\nnegative_shock [Default: false, Type: Bool]: calculate a negative shock. Relevant for generalised IRFs.\ngeneralised_irf [Default: false, Type: Bool]: calculate generalised IRFs. Relevant for nonlinear solutions. Reference steady state for deviations is the stochastic steady state.\ninitial_state [Default: [0.0], Type: Union{Vector{Vector{Float64}},Vector{Float64}}]: The initial state defines the starting point for the model and is relevant for normal IRFs. In the case of pruned solution algorithms the initial state can be given as multiple state vectors (Vector{Vector{Float64}}). In this case the initial state must be given in devations from the non-stochastic steady state. In all other cases the initial state must be given in levels. If a pruned solution algorithm is selected and initial state is a Vector{Float64} then it impacts the first order initial state vector only. The state includes all variables as well as exogenous variables in leads or lags if present.\nlevels [Default: false, Type: Bool]: return levels or absolute deviations from steady state corresponding to the solution algorithm (e.g. stochastic steady state for higher order solution algorithms).\nignore_obc [Default: false, Type: Bool]: solve the model ignoring the occasionally binding constraints.\nverbose [Default: false, Type: Bool]: print information about how the NSSS is solved (symbolic or numeric), which solver is used (Levenberg-Marquardt...), and the maximum absolute error.\n\nExamples\n\nusing MacroModelling\n\n@model RBC begin\n 1 / c[0] = (β / c[1]) * (α * exp(z[1]) * k[0]^(α - 1) + (1 - δ))\n c[0] + k[0] = (1 - δ) * k[-1] + q[0]\n q[0] = exp(z[0]) * k[-1]^α\n z[0] = ρ * z[-1] + std_z * eps_z[x]\nend\n\n@parameters RBC begin\n std_z = 0.01\n ρ = 0.2\n δ = 0.02\n α = 0.5\n β = 0.95\nend\n\nget_irf(RBC)\n# output\n3-dimensional KeyedArray(NamedDimsArray(...)) with keys:\n↓ Variables ∈ 4-element Vector{Symbol}\n→ Periods ∈ 40-element UnitRange{Int64}\n◪ Shocks ∈ 1-element Vector{Symbol}\nAnd data, 4×40×1 Array{Float64, 3}:\n[:, :, 1] ~ (:, :, :eps_z):\n (1) (2) … (39) (40)\n (:c) 0.00674687 0.00729773 0.00146962 0.00140619\n (:k) 0.0620937 0.0718322 0.0146789 0.0140453\n (:q) 0.0688406 0.0182781 0.00111425 0.00106615\n (:z) 0.01 0.002 2.74878e-29 5.49756e-30\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.get_irfs","page":"API","title":"MacroModelling.get_irfs","text":"See get_irf\n\n\n\n\n\n","category":"function"},{"location":"api/#MacroModelling.get_jump_variables-Tuple{MacroModelling.ℳ}","page":"API","title":"MacroModelling.get_jump_variables","text":"get_jump_variables(𝓂)\n\n\nReturns the jump variables of the model. 
Jump variables occur in the future and not in the past or occur in all three: past, present, and future.\n\nIn case programmatic model writing was used this function returns the parsed variables (see z in example).\n\nArguments\n\n𝓂: the object created by @model and @parameters for which to get the solution.\n\nExamples\n\nusing MacroModelling\n\n@model RBC begin\n 1 / c[0] = (β / c[1]) * (α * exp(z{TFP}[1]) * k[0]^(α - 1) + (1 - exp(z{δ}[1]) * δ))\n c[0] + k[0] = (1 - exp(z{δ}[0])δ) * k[-1] + q[0]\n q[0] = exp(z{TFP}[0]) * k[-1]^α\n for shock in [TFP, δ]\n z{shock}[0] = ρ{shock} * z{shock}[-1] + σ{shock} * (eps{shock}[x] + eps_news{shock}[x-1])\n end\n Δc_share[0] = log(c[0]/q[0]) - log(c[-1]/q[-1])\n Δk_4q[0] = log(k[0]) - log(k[-4])\nend\n\n@parameters RBC begin\n σ = 0.01\n ρ = 0.2\n capital_to_output = 1.5\n k[ss] / (4 * q[ss]) = capital_to_output | δ\n alpha = .5\n α = alpha\n β = 0.95\nend\n\nget_jump_variables(RBC)\n# output\n3-element Vector{String}:\n \"c\"\n \"z{TFP}\"\n \"z{δ}\"\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.get_loglikelihood-Union{Tuple{S}, Tuple{MacroModelling.ℳ, KeyedArray{Float64}, Vector{S}}} where S","page":"API","title":"MacroModelling.get_loglikelihood","text":"get_loglikelihood(\n 𝓂,\n data,\n parameter_values;\n algorithm,\n filter,\n warmup_iterations,\n tol,\n verbose\n)\n\n\nReturn the loglikelihood of the model given the data and parameters provided. The loglikelihood is either calculated based on the inversion or the Kalman filter (depending on the filter keyword argument). In case of a nonlinear solution algorithm the inversion filter will be used. The data must be provided as a KeyedArray{Float64} with the names of the variables to be matched in rows and the periods in columns.\n\nThis function is differentiable (so far for the Kalman filter only) and can be used in gradient based sampling or optimisation.\n\nArguments\n\n𝓂: the object created by @model and @parameters for which to get the solution.\ndata [Type: KeyedArray]: data matrix with variables (String or Symbol) in rows and time in columns\nparameter_values [Type: Vector]: Parameter values.\n\nKeyword Arguments\n\nalgorithm [Default: :first_order, Type: Symbol]: algorithm to solve for the dynamics of the model.\nfilter [Default: :kalman, Type: Symbol]: filter used to compute the shocks given the data, model, and parameters. The Kalman filter only works for linear problems, whereas the inversion filter (:inversion) works for linear and nonlinear models. If a nonlinear solution algorithm is selected, the inversion filter is used.\nwarmup_iterations [Default: 0, Type: Int]: periods added before the first observation for which shocks are computed such that the first observation is matched. 
A larger value alleviates the problem that the initial value is the relevant steady state.\nverbose [Default: false, Type: Bool]: print information about how the NSSS is solved (symbolic or numeric), which solver is used (Levenberg-Marquardt...), and the maximum absolute error.\n\nExamples\n\nusing MacroModelling\n\n@model RBC begin\n 1 / c[0] = (β / c[1]) * (α * exp(z[1]) * k[0]^(α - 1) + (1 - δ))\n c[0] + k[0] = (1 - δ) * k[-1] + q[0]\n q[0] = exp(z[0]) * k[-1]^α\n z[0] = ρ * z[-1] + std_z * eps_z[x]\nend\n\n@parameters RBC begin\n std_z = 0.01\n ρ = 0.2\n δ = 0.02\n α = 0.5\n β = 0.95\nend\n\nsimulated_data = simulate(RBC)\n\nget_loglikelihood(RBC, simulated_data([:k], :, :simulate), RBC.parameter_values)\n# output\n58.24780188977981\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.get_mean-Tuple","page":"API","title":"MacroModelling.get_mean","text":"Wrapper for get_moments with mean = true, and non_stochastic_steady_state = false, variance = false, standard_deviation = false, covariance = false\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.get_moments-Tuple{MacroModelling.ℳ}","page":"API","title":"MacroModelling.get_moments","text":"get_moments(\n 𝓂;\n parameters,\n non_stochastic_steady_state,\n mean,\n standard_deviation,\n variance,\n covariance,\n variables,\n derivatives,\n parameter_derivatives,\n algorithm,\n dependencies_tol,\n verbose,\n silent\n)\n\n\nReturn the first and second moments of endogenous variables using the first, pruned second, or pruned third order perturbation solution. By default returns: non stochastic steady state (SS), and standard deviations, but can optionally return variances, and covariance matrix.\n\nArguments\n\n𝓂: the object created by @model and @parameters for which to get the solution.\n\nKeyword Arguments\n\nparameters [Default: nothing]: If nothing is provided, the solution is calculated for the parameters defined previously. Acceptable inputs are a vector of parameter values, a vector or tuple of pairs of the parameter Symbol or String and value. If the new parameter values differ from the previously defined the solution will be recalculated.\nnon_stochastic_steady_state [Default: true, Type: Bool]: switch to return SS of endogenous variables\nmean [Default: false, Type: Bool]: switch to return mean of endogenous variables (the mean for the linearised solutoin is the NSSS)\nstandard_deviation [Default: true, Type: Bool]: switch to return standard deviation of endogenous variables\nvariance [Default: false, Type: Bool]: switch to return variance of endogenous variables\ncovariance [Default: false, Type: Bool]: switch to return covariance matrix of endogenous variables\nvariables [Default: :all_excluding_obc]: variables for which to show the results. Inputs can be a variable name passed on as either a Symbol or String (e.g. :y or \"y\"), or Tuple, Matrix or Vector of String or Symbol. Any variables not part of the model will trigger a warning. :all_excluding_auxilliary_and_obc contains all shocks less those related to auxilliary variables and related to occasionally binding constraints (obc). :all_excluding_obc contains all shocks less those related to auxilliary variables. :all will contain all variables.\nderivatives [Default: true, Type: Bool]: calculate derivatives with respect to the parameters.\nparameter_derivatives [Default: :all]: parameters for which to calculate partial derivatives. Inputs can be a parameter name passed on as either a Symbol or String (e.g. 
:alpha, or \"alpha\"), or Tuple, Matrix or Vector of String or Symbol. :all will include all parameters.\nalgorithm [Default: :first_order, Type: Symbol]: algorithm to solve for the dynamics of the model.\ndependencies_tol [Default: 1e-12, Type: AbstractFloat]: tolerance for the effect of a variable on the variable of interest when isolating part of the system for calculating covariance related statistics\nverbose [Default: false, Type: Bool]: print information about how the NSSS is solved (symbolic or numeric), which solver is used (Levenberg-Marquardt...), and the maximum absolute error.\n\nExamples\n\nusing MacroModelling\n\n@model RBC begin\n 1 / c[0] = (β / c[1]) * (α * exp(z[1]) * k[0]^(α - 1) + (1 - δ))\n c[0] + k[0] = (1 - δ) * k[-1] + q[0]\n q[0] = exp(z[0]) * k[-1]^α\n z[0] = ρ * z[-1] + std_z * eps_z[x]\nend\n\n@parameters RBC begin\n std_z = 0.01\n ρ = 0.2\n δ = 0.02\n α = 0.5\n β = 0.95\nend\n\nmoments = get_moments(RBC);\n\nmoments[1]\n# output\n2-dimensional KeyedArray(NamedDimsArray(...)) with keys:\n↓ Variables ∈ 4-element Vector{Symbol}\n→ Steady_state_and_∂steady_state∂parameter ∈ 6-element Vector{Symbol}\nAnd data, 4×6 Matrix{Float64}:\n (:Steady_state) (:std_z) (:ρ) (:δ) (:α) (:β)\n (:c) 5.93625 0.0 0.0 -116.072 55.786 76.1014\n (:k) 47.3903 0.0 0.0 -1304.95 555.264 1445.93\n (:q) 6.88406 0.0 0.0 -94.7805 66.8912 105.02\n (:z) 0.0 0.0 0.0 0.0 0.0 0.0\n\nmoments[2]\n# output\n2-dimensional KeyedArray(NamedDimsArray(...)) with keys:\n↓ Variables ∈ 4-element Vector{Symbol}\n→ Standard_deviation_and_∂standard_deviation∂parameter ∈ 6-element Vector{Symbol}\nAnd data, 4×6 Matrix{Float64}:\n (:Standard_deviation) (:std_z) … (:δ) (:α) (:β)\n (:c) 0.0266642 2.66642 -0.384359 0.2626 0.144789\n (:k) 0.264677 26.4677 -5.74194 2.99332 6.30323\n (:q) 0.0739325 7.39325 -0.974722 0.726551 1.08\n (:z) 0.0102062 1.02062 0.0 0.0 0.0\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.get_non_stochastic_steady_state-Tuple","page":"API","title":"MacroModelling.get_non_stochastic_steady_state","text":"Wrapper for get_steady_state with stochastic = false.\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.get_non_stochastic_steady_state_residuals-Tuple{MacroModelling.ℳ, Union{Dict{String, Float64}, Dict{Symbol, Float64}, KeyedArray{Float64, 1}, Vector{Float64}}}","page":"API","title":"MacroModelling.get_non_stochastic_steady_state_residuals","text":"get_non_stochastic_steady_state_residuals(\n 𝓂,\n values;\n parameters\n)\n\n\nCalculate the residuals of the non-stochastic steady state equations of the model for a given set of values. Values not provided, will be filled with the non-stochastic steady state values corresponding to the current parameters.\n\nArguments\n\n𝓂: the object created by @model and @parameters for which to get the solution.\nvalues [Type: Union{Vector{Float64}, Dict{Symbol, Float64}, Dict{String, Float64}, KeyedArray{Float64, 1}}]: A Vector, Dict, or KeyedArray containing the values of the variables and calibrated parameters in the non-stochastic steady state equations (including calibration equations). \n\nKeyword Arguments\n\nparameters [Default: nothing]: If nothing is provided, the solution is calculated for the parameters defined previously. Acceptable inputs are a vector of parameter values, a vector or tuple of pairs of the parameter Symbol or String and value. 
If the new parameter values differ from the previously defined the solution will be recalculated.\n\nReturns\n\nA KeyedArray containing the absolute values of the residuals of the non-stochastic steady state equations.\n\nExamples\n\nusing MacroModelling\n\n@model RBC begin\n 1 / c[0] = (β / c[1]) * (α * exp(z[1]) * k[0]^(α - 1) + (1 - δ))\n c[0] + k[0] = (1 - δ) * k[-1] + q[0]\n q[0] = exp(z[0]) * k[-1]^α\n z[0] = ρ * z[-1] + std_z * eps_z[x]\nend\n\n@parameters RBC begin\n std_z = 0.01\n ρ = 0.2\n δ = 0.02\n k[ss] / q[ss] = 2.5 | α\n β = 0.95\nend\n\nsteady_state = SS(RBC, derivatives = false)\n\nget_non_stochastic_steady_state_residuals(RBC, steady_state)\n# output\n1-dimensional KeyedArray(NamedDimsArray(...)) with keys:\n↓ Equation ∈ 5-element Vector{Symbol}\nAnd data, 5-element Vector{Float64}:\n (:Equation₁) 0.0\n (:Equation₂) 0.0\n (:Equation₃) 0.0\n (:Equation₄) 0.0\n (:CalibrationEquation₁) 0.0\n\nget_non_stochastic_steady_state_residuals(RBC, [1.1641597, 3.0635781, 1.2254312, 0.0, 0.18157895])\n# output\n1-dimensional KeyedArray(NamedDimsArray(...)) with keys:\n↓ Equation ∈ 5-element Vector{Symbol}\nAnd data, 5-element Vector{Float64}:\n (:Equation₁) 2.7360991250446887e-10\n (:Equation₂) 6.199999980083248e-8\n (:Equation₃) 2.7897102183871425e-8\n (:Equation₄) 0.0\n (:CalibrationEquation₁) 8.160392850342646e-8\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.get_nonnegativity_auxilliary_variables-Tuple{MacroModelling.ℳ}","page":"API","title":"MacroModelling.get_nonnegativity_auxilliary_variables","text":"get_nonnegativity_auxilliary_variables(𝓂)\n\n\nReturns the auxilliary variables, without timing subscripts, added to the non-stochastic steady state problem because certain expressions cannot be negative (e.g. given log(c/q) an auxilliary variable is created for c/q).\n\nSee get_steady_state_equations for more details on the auxilliary variables and equations.\n\nArguments\n\n𝓂: the object created by @model and @parameters for which to get the solution.\n\nExamples\n\nusing MacroModelling\n\n@model RBC begin\n 1 / c[0] = (β / c[1]) * (α * exp(z{TFP}[1]) * k[0]^(α - 1) + (1 - exp(z{δ}[1]) * δ))\n c[0] + k[0] = (1 - exp(z{δ}[0])δ) * k[-1] + q[0]\n q[0] = exp(z{TFP}[0]) * k[-1]^α\n for shock in [TFP, δ]\n z{shock}[0] = ρ{shock} * z{shock}[-1] + σ{shock} * (eps{shock}[x] + eps_news{shock}[x-1])\n end\n Δc_share[0] = log(c[0]/q[0]) - log(c[-1]/q[-1])\n Δk_4q[0] = log(k[0]) - log(k[-4])\nend\n\n@parameters RBC begin\n σ = 0.01\n ρ = 0.2\n capital_to_output = 1.5\n k[ss] / (4 * q[ss]) = capital_to_output | δ\n alpha = .5\n α = alpha\n β = 0.95\nend\n\nget_nonnegativity_auxilliary_variables(RBC)\n# output\n2-element Vector{String}:\n \"➕₁\"\n \"➕₂\"\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.get_parameters-Tuple{MacroModelling.ℳ}","page":"API","title":"MacroModelling.get_parameters","text":"get_parameters(𝓂; values)\n\n\nReturns the parameters (and optionally the values) which have an impact on the model dynamics but do not depend on other parameters and are not determined by calibration equations. 
\n\nIn case programmatic model writing was used this function returns the parsed parameters (see σ in example).\n\nArguments\n\n𝓂: the object created by @model and @parameters for which to get the solution.\n\nKeyword Arguments\n\nvalues [Default: false, Type: Bool]: return the values together with the parameter names\n\nExamples\n\nusing MacroModelling\n\n@model RBC begin\n 1 / c[0] = (β / c[1]) * (α * exp(z{TFP}[1]) * k[0]^(α - 1) + (1 - exp(z{δ}[1]) * δ))\n c[0] + k[0] = (1 - exp(z{δ}[0])δ) * k[-1] + q[0]\n q[0] = exp(z{TFP}[0]) * k[-1]^α\n for shock in [TFP, δ]\n z{shock}[0] = ρ{shock} * z{shock}[-1] + σ{shock} * (eps{shock}[x] + eps_news{shock}[x-1])\n end\n Δc_share[0] = log(c[0]/q[0]) - log(c[-1]/q[-1])\n Δk_4q[0] = log(k[0]) - log(k[-4])\nend\n\n@parameters RBC begin\n σ = 0.01\n ρ = 0.2\n capital_to_output = 1.5\n k[ss] / (4 * q[ss]) = capital_to_output | δ\n alpha = .5\n α = alpha\n β = 0.95\nend\n\nget_parameters(RBC)\n# output\n7-element Vector{String}:\n \"σ{TFP}\"\n \"σ{δ}\"\n \"ρ{TFP}\"\n \"ρ{δ}\"\n \"capital_to_output\"\n \"alpha\"\n \"β\"\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.get_parameters_defined_by_parameters-Tuple{MacroModelling.ℳ}","page":"API","title":"MacroModelling.get_parameters_defined_by_parameters","text":"get_parameters_defined_by_parameters(𝓂)\n\n\nReturns the parameters which are defined by other parameters which are not necessarily used in the equations of the model (see α in example).\n\nArguments\n\n𝓂: the object created by @model and @parameters for which to get the solution.\n\nExamples\n\nusing MacroModelling\n\n@model RBC begin\n 1 / c[0] = (β / c[1]) * (α * exp(z{TFP}[1]) * k[0]^(α - 1) + (1 - exp(z{δ}[1]) * δ))\n c[0] + k[0] = (1 - exp(z{δ}[0])δ) * k[-1] + q[0]\n q[0] = exp(z{TFP}[0]) * k[-1]^α\n for shock in [TFP, δ]\n z{shock}[0] = ρ{shock} * z{shock}[-1] + σ{shock} * (eps{shock}[x] + eps_news{shock}[x-1])\n end\n Δc_share[0] = log(c[0]/q[0]) - log(c[-1]/q[-1])\n Δk_4q[0] = log(k[0]) - log(k[-4])\nend\n\n@parameters RBC begin\n σ = 0.01\n ρ = 0.2\n capital_to_output = 1.5\n k[ss] / (4 * q[ss]) = capital_to_output | δ\n alpha = .5\n α = alpha\n β = 0.95\nend\n\nget_parameters_defined_by_parameters(RBC)\n# output\n1-element Vector{String}:\n \"α\"\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.get_parameters_defining_parameters-Tuple{MacroModelling.ℳ}","page":"API","title":"MacroModelling.get_parameters_defining_parameters","text":"get_parameters_defining_parameters(𝓂)\n\n\nReturns the parameters which define other parameters in the @parameters block which are not necessarily used in the equations of the model (see alpha in example).\n\nArguments\n\n𝓂: the object created by @model and @parameters for which to get the solution.\n\nExamples\n\nusing MacroModelling\n\n@model RBC begin\n 1 / c[0] = (β / c[1]) * (α * exp(z{TFP}[1]) * k[0]^(α - 1) + (1 - exp(z{δ}[1]) * δ))\n c[0] + k[0] = (1 - exp(z{δ}[0])δ) * k[-1] + q[0]\n q[0] = exp(z{TFP}[0]) * k[-1]^α\n for shock in [TFP, δ]\n z{shock}[0] = ρ{shock} * z{shock}[-1] + σ{shock} * (eps{shock}[x] + eps_news{shock}[x-1])\n end\n Δc_share[0] = log(c[0]/q[0]) - log(c[-1]/q[-1])\n Δk_4q[0] = log(k[0]) - log(k[-4])\nend\n\n@parameters RBC begin\n σ = 0.01\n ρ = 0.2\n capital_to_output = 1.5\n k[ss] / (4 * q[ss]) = capital_to_output | δ\n alpha = .5\n α = alpha\n β = 0.95\nend\n\nget_parameters_defining_parameters(RBC)\n# output\n1-element Vector{String}:\n 
\"alpha\"\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.get_parameters_in_equations-Tuple{MacroModelling.ℳ}","page":"API","title":"MacroModelling.get_parameters_in_equations","text":"get_parameters_in_equations(𝓂)\n\n\nReturns the parameters contained in the model equations. Note that these parameters might be determined by other parameters or calibration equations defined in the @parameters block.\n\nIn case programmatic model writing was used this function returns the parsed parameters (see σ in example).\n\nArguments\n\n𝓂: the object created by @model and @parameters for which to get the solution.\n\nExamples\n\nusing MacroModelling\n\n@model RBC begin\n 1 / c[0] = (β / c[1]) * (α * exp(z{TFP}[1]) * k[0]^(α - 1) + (1 - exp(z{δ}[1]) * δ))\n c[0] + k[0] = (1 - exp(z{δ}[0])δ) * k[-1] + q[0]\n q[0] = exp(z{TFP}[0]) * k[-1]^α\n for shock in [TFP, δ]\n z{shock}[0] = ρ{shock} * z{shock}[-1] + σ{shock} * (eps{shock}[x] + eps_news{shock}[x-1])\n end\n Δc_share[0] = log(c[0]/q[0]) - log(c[-1]/q[-1])\n Δk_4q[0] = log(k[0]) - log(k[-4])\nend\n\n@parameters RBC begin\n σ = 0.01\n ρ = 0.2\n capital_to_output = 1.5\n k[ss] / (4 * q[ss]) = capital_to_output | δ\n alpha = .5\n α = alpha\n β = 0.95\nend\n\nget_parameters_in_equations(RBC)\n# output\n7-element Vector{String}:\n \"α\"\n \"β\"\n \"δ\"\n \"ρ{TFP}\"\n \"ρ{δ}\"\n \"σ{TFP}\"\n \"σ{δ}\"\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.get_perturbation_solution-Tuple","page":"API","title":"MacroModelling.get_perturbation_solution","text":"See get_solution\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.get_residuals","page":"API","title":"MacroModelling.get_residuals","text":"See get_non_stochastic_steady_state_residuals\n\n\n\n\n\n","category":"function"},{"location":"api/#MacroModelling.get_second_order_solution-Tuple","page":"API","title":"MacroModelling.get_second_order_solution","text":"Wrapper for get_solution with algorithm = :second_order.\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.get_shock_decomposition-Tuple{MacroModelling.ℳ, KeyedArray{Float64}}","page":"API","title":"MacroModelling.get_shock_decomposition","text":"get_shock_decomposition(\n 𝓂,\n data;\n parameters,\n data_in_levels,\n smooth,\n verbose\n)\n\n\nReturn the shock decomposition in absolute deviations from the non stochastic steady state based on the Kalman smoother or filter (depending on the smooth keyword argument) using the provided data and first order solution of the model. Data is by default assumed to be in levels unless data_in_levels is set to false.\n\nArguments\n\n𝓂: the object created by @model and @parameters for which to get the solution.\ndata [Type: KeyedArray]: data matrix with variables (String or Symbol) in rows and time in columns\n\nKeyword Arguments\n\nparameters [Default: nothing]: If nothing is provided, the solution is calculated for the parameters defined previously. Acceptable inputs are a vector of parameter values, a vector or tuple of pairs of the parameter Symbol or String and value. If the new parameter values differ from the previously defined the solution will be recalculated.\ndata_in_levels [Default: true, Type: Bool]: indicator whether the data is provided in levels. If true the input to the data argument will have the non stochastic steady state substracted.\nsmooth [Default: true, Type: Bool]: whether to return smoothed (true) or filtered (false) shocks. Only works for the Kalman filter. 
The inversion filter only returns filtered shocks.\nverbose [Default: false, Type: Bool]: print information about how the NSSS is solved (symbolic or numeric), which solver is used (Levenberg-Marquardt...), and the maximum absolute error.\n\nExamples\n\nusing MacroModelling\n\n@model RBC begin\n 1 / c[0] = (β / c[1]) * (α * exp(z[1]) * k[0]^(α - 1) + (1 - δ))\n c[0] + k[0] = (1 - δ) * k[-1] + q[0]\n q[0] = exp(z[0]) * k[-1]^α\n z[0] = ρ * z[-1] + std_z * eps_z[x]\nend\n\n@parameters RBC begin\n std_z = 0.01\n ρ = 0.2\n δ = 0.02\n α = 0.5\n β = 0.95\nend\n\nsimulation = simulate(RBC)\n\nget_shock_decomposition(RBC,simulation([:c],:,:simulate))\n# output\n3-dimensional KeyedArray(NamedDimsArray(...)) with keys:\n↓ Variables ∈ 4-element Vector{Symbol}\n→ Shocks ∈ 2-element Vector{Symbol}\n◪ Periods ∈ 40-element UnitRange{Int64}\nAnd data, 4×2×40 Array{Float64, 3}:\n[showing 3 of 40 slices]\n[:, :, 1] ~ (:, :, 1):\n (:eps_z₍ₓ₎) (:Initial_values)\n (:c) 0.000407252 -0.00104779\n (:k) 0.00374808 -0.0104645\n (:q) 0.00415533 -0.000807161\n (:z) 0.000603617 -1.99957e-6\n\n[:, :, 21] ~ (:, :, 21):\n (:eps_z₍ₓ₎) (:Initial_values)\n (:c) 0.026511 -0.000433619\n (:k) 0.25684 -0.00433108\n (:q) 0.115858 -0.000328764\n (:z) 0.0150266 0.0\n\n[:, :, 40] ~ (:, :, 40):\n (:eps_z₍ₓ₎) (:Initial_values)\n (:c) 0.0437976 -0.000187505\n (:k) 0.4394 -0.00187284\n (:q) 0.00985518 -0.000142164\n (:z) -0.00366442 8.67362e-19\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.get_shocks-Tuple{MacroModelling.ℳ}","page":"API","title":"MacroModelling.get_shocks","text":"get_shocks(𝓂)\n\n\nReturns the exogenous shocks.\n\nIn case programmatic model writing was used this function returns the parsed variables (see eps in example).\n\nArguments\n\n𝓂: the object created by @model and @parameters for which to get the solution.\n\nExamples\n\nusing MacroModelling\n\n@model RBC begin\n 1 / c[0] = (β / c[1]) * (α * exp(z{TFP}[1]) * k[0]^(α - 1) + (1 - exp(z{δ}[1]) * δ))\n c[0] + k[0] = (1 - exp(z{δ}[0])δ) * k[-1] + q[0]\n q[0] = exp(z{TFP}[0]) * k[-1]^α\n for shock in [TFP, δ]\n z{shock}[0] = ρ{shock} * z{shock}[-1] + σ{shock} * (eps{shock}[x] + eps_news{shock}[x-1])\n end\n Δc_share[0] = log(c[0]/q[0]) - log(c[-1]/q[-1])\n Δk_4q[0] = log(k[0]) - log(k[-4])\nend\n\n@parameters RBC begin\n σ = 0.01\n ρ = 0.2\n capital_to_output = 1.5\n k[ss] / (4 * q[ss]) = capital_to_output | δ\n alpha = .5\n α = alpha\n β = 0.95\nend\n\nget_shocks(RBC)\n# output\n4-element Vector{String}:\n \"eps_news{TFP}\"\n \"eps_news{δ}\"\n \"eps{TFP}\"\n \"eps{δ}\"\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.get_simulation-Tuple","page":"API","title":"MacroModelling.get_simulation","text":"Wrapper for get_irf with shocks = :simulate. Function returns values in levels by default.\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.get_simulations-Tuple","page":"API","title":"MacroModelling.get_simulations","text":"Wrapper for get_irf with shocks = :simulate. Function returns values in levels by default.\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.get_solution-Tuple{MacroModelling.ℳ}","page":"API","title":"MacroModelling.get_solution","text":"get_solution(𝓂; parameters, algorithm, verbose)\n\n\nReturn the solution of the model. In the linear case it returns the linearised solution and the non stochastic steady state (NSSS) of the model. 
In the nonlinear case (higher order perturbation) the function returns a multidimensional array with the endogenous variables as the second dimension and the state variables, shocks, and perturbation parameter (:Volatility) in the case of higher order solutions as the other dimensions.\n\nThe values of the output represent the NSSS in the case of a linear solution and below it the effect that deviations from the NSSS of the respective past states, shocks, and perturbation parameter have (perturbation parameter = 1) on the present value (NSSS deviation) of the model variables.\n\nArguments\n\n𝓂: the object created by @model and @parameters for which to get the solution.\n\nKeyword Arguments\n\nparameters [Default: nothing]: If nothing is provided, the solution is calculated for the parameters defined previously. Acceptable inputs are a vector of parameter values, a vector or tuple of pairs of the parameter Symbol or String and value. If the new parameter values differ from the previously defined the solution will be recalculated.\nalgorithm [Default: :first_order, Type: Symbol]: algorithm to solve for the dynamics of the model.\nverbose [Default: false, Type: Bool]: print information about how the NSSS is solved (symbolic or numeric), which solver is used (Levenberg-Marquardt...), and the maximum absolute error.\n\nThe returned KeyedArray shows as columns the endogenous variables inlcuding the auxilliary endogenous and exogenous variables (due to leads and lags > 1). The rows and other dimensions (depending on the chosen perturbation order) include the NSSS for the linear case only, followed by the states, and exogenous shocks. Subscripts following variable names indicate the timing (e.g. variable₍₋₁₎ indicates the variable being in the past). Superscripts indicate leads or lags (e.g. variableᴸ⁽²⁾ indicates the variable being in lead by two periods). If no super- or subscripts follow the variable name, the variable is in the present.\n\nExamples\n\nusing MacroModelling\n\n@model RBC begin\n 1 / c[0] = (β / c[1]) * (α * exp(z[1]) * k[0]^(α - 1) + (1 - δ))\n c[0] + k[0] = (1 - δ) * k[-1] + q[0]\n q[0] = exp(z[0]) * k[-1]^α\n z[0] = ρ * z[-1] + std_z * eps_z[x]\nend\n\n@parameters RBC begin\n std_z = 0.01\n ρ = 0.2\n δ = 0.02\n α = 0.5\n β = 0.95\nend\n\nget_solution(RBC)\n# output\n2-dimensional KeyedArray(NamedDimsArray(...)) with keys:\n↓ Steady_state__States__Shocks ∈ 4-element Vector{Symbol}\n→ Variables ∈ 4-element Vector{Symbol}\nAnd data, 4×4 adjoint(::Matrix{Float64}) with eltype Float64:\n (:c) (:k) (:q) (:z)\n (:Steady_state) 5.93625 47.3903 6.88406 0.0\n (:k₍₋₁₎) 0.0957964 0.956835 0.0726316 -0.0\n (:z₍₋₁₎) 0.134937 1.24187 1.37681 0.2\n (:eps_z₍ₓ₎) 0.00674687 0.0620937 0.0688406 0.01\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.get_ss","page":"API","title":"MacroModelling.get_ss","text":"See get_steady_state\n\n\n\n\n\n","category":"function"},{"location":"api/#MacroModelling.get_standard_deviation-Tuple","page":"API","title":"MacroModelling.get_standard_deviation","text":"Wrapper for get_moments with standard_deviation = true and non_stochastic_steady_state = false, variance = false, covariance = false.\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.get_state_variables-Tuple{MacroModelling.ℳ}","page":"API","title":"MacroModelling.get_state_variables","text":"get_state_variables(𝓂)\n\n\nReturns the state variables of the model. 
State variables occur in the past and not in the future or occur in all three: past, present, and future.\n\nIn case programmatic model writing was used this function returns the parsed variables (see z in example).\n\nArguments\n\n𝓂: the object created by @model and @parameters for which to get the solution.\n\nExamples\n\nusing MacroModelling\n\n@model RBC begin\n 1 / c[0] = (β / c[1]) * (α * exp(z{TFP}[1]) * k[0]^(α - 1) + (1 - exp(z{δ}[1]) * δ))\n c[0] + k[0] = (1 - exp(z{δ}[0])δ) * k[-1] + q[0]\n q[0] = exp(z{TFP}[0]) * k[-1]^α\n for shock in [TFP, δ]\n z{shock}[0] = ρ{shock} * z{shock}[-1] + σ{shock} * (eps{shock}[x] + eps_news{shock}[x-1])\n end\n Δc_share[0] = log(c[0]/q[0]) - log(c[-1]/q[-1])\n Δk_4q[0] = log(k[0]) - log(k[-4])\nend\n\n@parameters RBC begin\n σ = 0.01\n ρ = 0.2\n capital_to_output = 1.5\n k[ss] / (4 * q[ss]) = capital_to_output | δ\n alpha = .5\n α = alpha\n β = 0.95\nend\n\nget_state_variables(RBC)\n# output\n10-element Vector{String}:\n \"c\"\n \"eps_news{TFP}\"\n \"eps_news{δ}\"\n \"k\"\n \"kᴸ⁽⁻²⁾\"\n \"kᴸ⁽⁻³⁾\"\n \"kᴸ⁽⁻¹⁾\"\n \"q\"\n \"z{TFP}\"\n \"z{δ}\"\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.get_statistics-Union{Tuple{T}, Tuple{U}, Tuple{Any, Vector{T}}} where {U, T}","page":"API","title":"MacroModelling.get_statistics","text":"get_statistics(\n 𝓂,\n parameter_values;\n parameters,\n non_stochastic_steady_state,\n mean,\n standard_deviation,\n variance,\n covariance,\n autocorrelation,\n autocorrelation_periods,\n algorithm,\n verbose\n)\n\n\nReturn the first and second moments of endogenous variables using either the linearised solution or the pruned second or third order perturbation solution. By default returns: non stochastic steady state (SS), and standard deviations, but can also return variances, and covariance matrix. 
Function to use when differentiating model moments with repect to parameters.\n\nArguments\n\n𝓂: the object created by @model and @parameters for which to get the solution.\nparameter_values [Type: Vector]: Parameter values.\n\nKeyword Arguments\n\nparameters [Type: Vector{Symbol}]: Corresponding names of parameters values.\nnon_stochastic_steady_state [Default: Symbol[], Type: Vector{Symbol}]: if values are provided the function returns the SS of endogenous variables\nmean [Default: Symbol[], Type: Vector{Symbol}]: if values are provided the function returns the mean of endogenous variables (the mean for the linearised solutoin is the NSSS)\nstandard_deviation [Default: Symbol[], Type: Vector{Symbol}]: if values are provided the function returns the standard deviation of the mentioned variables\nvariance [Default: Symbol[], Type: Vector{Symbol}]: if values are provided the function returns the variance of the mentioned variables\ncovariance [Default: Symbol[], Type: Vector{Symbol}]: if values are provided the function returns the covariance of the mentioned variables\nautocorrelation [Default: Symbol[], Type: Vector{Symbol}]: if values are provided the function returns the autocorrelation of the mentioned variables\nautocorrelation_periods [Default: 1:5]: periods for which to return the autocorrelation of the mentioned variables\nalgorithm [Default: :first_order, Type: Symbol]: algorithm to solve for the dynamics of the model.\nverbose [Default: false, Type: Bool]: print information about how the NSSS is solved (symbolic or numeric), which solver is used (Levenberg-Marquardt...), and the maximum absolute error.\n\nExamples\n\nusing MacroModelling\n\n@model RBC begin\n 1 / c[0] = (β / c[1]) * (α * exp(z[1]) * k[0]^(α - 1) + (1 - δ))\n c[0] + k[0] = (1 - δ) * k[-1] + q[0]\n q[0] = exp(z[0]) * k[-1]^α\n z[0] = ρ * z[-1] + std_z * eps_z[x]\nend\n\n@parameters RBC begin\n std_z = 0.01\n ρ = 0.2\n δ = 0.02\n α = 0.5\n β = 0.95\nend\n\nget_statistics(RBC, RBC.parameter_values, parameters = RBC.parameters, standard_deviation = RBC.var)\n# output\n1-element Vector{Any}:\n [0.02666420378525503, 0.26467737291221793, 0.07393254045396483, 0.010206207261596574]\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.get_std","page":"API","title":"MacroModelling.get_std","text":"Wrapper for get_moments with standard_deviation = true and non_stochastic_steady_state = false, variance = false, covariance = false.\n\n\n\n\n\n","category":"function"},{"location":"api/#MacroModelling.get_stdev","page":"API","title":"MacroModelling.get_stdev","text":"Wrapper for get_moments with standard_deviation = true and non_stochastic_steady_state = false, variance = false, covariance = false.\n\n\n\n\n\n","category":"function"},{"location":"api/#MacroModelling.get_steady_state-Tuple{MacroModelling.ℳ}","page":"API","title":"MacroModelling.get_steady_state","text":"get_steady_state(\n 𝓂;\n parameters,\n derivatives,\n stochastic,\n algorithm,\n parameter_derivatives,\n return_variables_only,\n verbose,\n silent,\n tol\n)\n\n\nReturn the (non stochastic) steady state, calibrated parameters, and derivatives with respect to model parameters.\n\nArguments\n\n𝓂: the object created by @model and @parameters for which to get the solution.\n\nKeyword Arguments\n\nparameters [Default: nothing]: If nothing is provided, the solution is calculated for the parameters defined previously. Acceptable inputs are a vector of parameter values, a vector or tuple of pairs of the parameter Symbol or String and value. 
If the new parameter values differ from the previously defined ones, the solution will be recalculated.\nderivatives [Default: true, Type: Bool]: calculate derivatives with respect to the parameters.\nstochastic [Default: false, Type: Bool]: return stochastic steady state using second order perturbation\nalgorithm [Default: :first_order, Type: Symbol]: algorithm to solve for the dynamics of the model.\nparameter_derivatives [Default: :all]: parameters for which to calculate partial derivatives. Inputs can be a parameter name passed on as either a Symbol or String (e.g. :alpha, or \"alpha\"), or Tuple, Matrix or Vector of String or Symbol. :all will include all parameters.\nreturn_variables_only [Default: false, Type: Bool]: return only variables and not calibrated parameters\nverbose [Default: false, Type: Bool]: print information about how the NSSS is solved (symbolic or numeric), which solver is used (Levenberg-Marquardt...), and the maximum absolute error.\n\nThe columns show the (non stochastic) steady state and parameters for which derivatives are taken. The rows show the variables and calibrated parameters.\n\nExamples\n\nusing MacroModelling\n\n@model RBC begin\n 1 / c[0] = (β / c[1]) * (α * exp(z[1]) * k[0]^(α - 1) + (1 - δ))\n c[0] + k[0] = (1 - δ) * k[-1] + q[0]\n q[0] = exp(z[0]) * k[-1]^α\n z[0] = ρ * z[-1] + std_z * eps_z[x]\nend\n\n@parameters RBC begin\n std_z = 0.01\n ρ = 0.2\n δ = 0.02\n α = 0.5\n β = 0.95\nend\n\nget_steady_state(RBC)\n# output\n2-dimensional KeyedArray(NamedDimsArray(...)) with keys:\n↓ Variables_and_calibrated_parameters ∈ 4-element Vector{Symbol}\n→ Steady_state_and_∂steady_state∂parameter ∈ 6-element Vector{Symbol}\nAnd data, 4×6 Matrix{Float64}:\n (:Steady_state) (:std_z) (:ρ) (:δ) (:α) (:β)\n (:c) 5.93625 0.0 0.0 -116.072 55.786 76.1014\n (:k) 47.3903 0.0 0.0 -1304.95 555.264 1445.93\n (:q) 6.88406 0.0 0.0 -94.7805 66.8912 105.02\n (:z) 0.0 0.0 0.0 0.0 0.0 0.0\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.get_steady_state_equations-Tuple{MacroModelling.ℳ}","page":"API","title":"MacroModelling.get_steady_state_equations","text":"get_steady_state_equations(𝓂)\n\n\nReturn the non-stochastic steady state (NSSS) equations of the model. The difference from the equations as they were written in the @model block is that exogenous shocks are set to 0, time subscripts are eliminated (e.g. c[-1] becomes c), trivial simplifications are carried out (e.g. log(k) - log(k) = 0), and auxilliary variables are added for expressions that cannot become negative. \n\nAuxilliary variables facilitate the solution of the NSSS problem. The package substitutes expressions which cannot become negative with auxilliary variables and adds another equation to the system of equations determining the NSSS. For example, the argument of log(c/q) cannot be negative, so c/q is substituted by an auxilliary variable ➕₁ and an additional equation is added: ➕₁ = c / q.\n\nNote that the output assumes the equations are equal to 0. 
As in, -z{δ} * ρ{δ} + z{δ} implies -z{δ} * ρ{δ} + z{δ} = 0 and therefore: z{δ} * ρ{δ} = z{δ}.\n\nArguments\n\n𝓂: the object created by @model and @parameters for which to get the solution.\n\nExamples\n\nusing MacroModelling\n\n@model RBC begin\n 1 / c[0] = (β / c[1]) * (α * exp(z{TFP}[1]) * k[0]^(α - 1) + (1 - exp(z{δ}[1]) * δ))\n c[0] + k[0] = (1 - exp(z{δ}[0])δ) * k[-1] + q[0]\n q[0] = exp(z{TFP}[0]) * k[-1]^α\n for shock in [TFP, δ]\n z{shock}[0] = ρ{shock} * z{shock}[-1] + σ{shock} * (eps{shock}[x] + eps_news{shock}[x-1])\n end\n Δc_share[0] = log(c[0]/q[0]) - log(c[-1]/q[-1])\n Δk_4q[0] = log(k[0]) - log(k[-4])\nend\n\n@parameters RBC begin\n σ = 0.01\n ρ = 0.2\n capital_to_output = 1.5\n k[ss] / (4 * q[ss]) = capital_to_output | δ\n alpha = .5\n α = alpha\n β = 0.95\nend\n\nget_steady_state_equations(RBC)\n# output\n9-element Vector{String}:\n \"(-β * ((k ^ (α - 1) * α * exp(z{TFP}) - δ * exp(z{δ})) + 1)) / c + 1 / c\"\n \"((c - k * (-δ * exp(z{δ}) + 1)) + k) - q\"\n \"-(k ^ α) * exp(z{TFP}) + q\"\n \"-z{TFP} * ρ{TFP} + z{TFP}\"\n \"-z{δ} * ρ{δ} + z{δ}\"\n \"➕₁ - c / q\"\n \"➕₂ - c / q\"\n \"(Δc_share - log(➕₁)) + log(➕₂)\"\n \"Δk_4q - 0\"\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.get_stochastic_steady_state-Tuple","page":"API","title":"MacroModelling.get_stochastic_steady_state","text":"Wrapper for get_steady_state with stochastic = true.\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.get_third_order_solution-Tuple","page":"API","title":"MacroModelling.get_third_order_solution","text":"Wrapper for get_solution with algorithm = :third_order.\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.get_var","page":"API","title":"MacroModelling.get_var","text":"Wrapper for get_moments with variance = true and non_stochastic_steady_state = false, standard_deviation = false, covariance = false.\n\n\n\n\n\n","category":"function"},{"location":"api/#MacroModelling.get_var_decomp","page":"API","title":"MacroModelling.get_var_decomp","text":"See get_variance_decomposition\n\n\n\n\n\n","category":"function"},{"location":"api/#MacroModelling.get_variables-Tuple{MacroModelling.ℳ}","page":"API","title":"MacroModelling.get_variables","text":"get_variables(𝓂)\n\n\nReturns the variables of the model without timing subscripts and not including auxilliary variables.\n\nIn case programmatic model writing was used this function returns the parsed variables (see z in example).\n\nArguments\n\n𝓂: the object created by @model and @parameters for which to get the solution.\n\nExamples\n\nusing MacroModelling\n\n@model RBC begin\n 1 / c[0] = (β / c[1]) * (α * exp(z{TFP}[1]) * k[0]^(α - 1) + (1 - exp(z{δ}[1]) * δ))\n c[0] + k[0] = (1 - exp(z{δ}[0])δ) * k[-1] + q[0]\n q[0] = exp(z{TFP}[0]) * k[-1]^α\n for shock in [TFP, δ]\n z{shock}[0] = ρ{shock} * z{shock}[-1] + σ{shock} * (eps{shock}[x] + eps_news{shock}[x-1])\n end\n Δc_share[0] = log(c[0]/q[0]) - log(c[-1]/q[-1])\n Δk_4q[0] = log(k[0]) - log(k[-4])\nend\n\n@parameters RBC begin\n σ = 0.01\n ρ = 0.2\n capital_to_output = 1.5\n k[ss] / (4 * q[ss]) = capital_to_output | δ\n alpha = .5\n α = alpha\n β = 0.95\nend\n\nget_variables(RBC)\n# output\n7-element Vector{String}:\n \"c\"\n \"k\"\n \"q\"\n \"z{TFP}\"\n \"z{δ}\"\n \"Δc_share\"\n \"Δk_4q\"\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.get_variance-Tuple","page":"API","title":"MacroModelling.get_variance","text":"Wrapper for get_moments with variance = true and non_stochastic_steady_state = false, standard_deviation = false, 
covariance = false.\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.get_variance_decomposition-Tuple{MacroModelling.ℳ}","page":"API","title":"MacroModelling.get_variance_decomposition","text":"get_variance_decomposition(𝓂; parameters, verbose)\n\n\nReturn the variance decomposition of endogenous variables with regards to the shocks using the linearised solution. \n\nArguments\n\n𝓂: the object created by @model and @parameters for which to get the solution.\n\nKeyword Arguments\n\nparameters [Default: nothing]: If nothing is provided, the solution is calculated for the parameters defined previously. Acceptable inputs are a vector of parameter values, a vector or tuple of pairs of the parameter Symbol or String and value. If the new parameter values differ from the previously defined the solution will be recalculated.\nverbose [Default: false, Type: Bool]: print information about how the NSSS is solved (symbolic or numeric), which solver is used (Levenberg-Marquardt...), and the maximum absolute error.\n\nExamples\n\nusing MacroModelling\n\n@model RBC_CME begin\n y[0]=A[0]*k[-1]^alpha\n 1/c[0]=beta*1/c[1]*(alpha*A[1]*k[0]^(alpha-1)+(1-delta))\n 1/c[0]=beta*1/c[1]*(R[0]/Pi[+1])\n R[0] * beta =(Pi[0]/Pibar)^phi_pi\n A[0]*k[-1]^alpha=c[0]+k[0]-(1-delta*z_delta[0])*k[-1]\n z_delta[0] = 1 - rho_z_delta + rho_z_delta * z_delta[-1] + std_z_delta * delta_eps[x]\n A[0] = 1 - rhoz + rhoz * A[-1] + std_eps * eps_z[x]\nend\n\n\n@parameters RBC_CME begin\n alpha = .157\n beta = .999\n delta = .0226\n Pibar = 1.0008\n phi_pi = 1.5\n rhoz = .9\n std_eps = .0068\n rho_z_delta = .9\n std_z_delta = .005\nend\n\nget_variance_decomposition(RBC_CME)\n# output\n2-dimensional KeyedArray(NamedDimsArray(...)) with keys:\n↓ Variables ∈ 7-element Vector{Symbol}\n→ Shocks ∈ 2-element Vector{Symbol}\nAnd data, 7×2 Matrix{Float64}:\n (:delta_eps) (:eps_z)\n (:A) 9.78485e-31 1.0\n (:Pi) 0.0156771 0.984323\n (:R) 0.0156771 0.984323\n (:c) 0.0134672 0.986533\n (:k) 0.00869568 0.991304\n (:y) 0.000313462 0.999687\n (:z_delta) 1.0 0.0\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.gr_backend","page":"API","title":"MacroModelling.gr_backend","text":"gr_backend()\n\nRenaming and reexport of Plot.jl function gr() to define GR.jl as backend\n\n\n\n\n\n","category":"function"},{"location":"api/#MacroModelling.import_dynare","page":"API","title":"MacroModelling.import_dynare","text":"See translate_mod_file\n\n\n\n\n\n","category":"function"},{"location":"api/#MacroModelling.import_model","page":"API","title":"MacroModelling.import_model","text":"See translate_mod_file\n\n\n\n\n\n","category":"function"},{"location":"api/#MacroModelling.plot_IRF","page":"API","title":"MacroModelling.plot_IRF","text":"See plot_irf\n\n\n\n\n\n","category":"function"},{"location":"api/#MacroModelling.plot_conditional_forecast-Tuple{MacroModelling.ℳ, Union{KeyedArray{Union{Nothing, Float64}}, KeyedArray{Float64}, SparseArrays.SparseMatrixCSC{Float64}, Matrix{Union{Nothing, Float64}}}}","page":"API","title":"MacroModelling.plot_conditional_forecast","text":"plot_conditional_forecast(\n 𝓂,\n conditions;\n shocks,\n initial_state,\n periods,\n parameters,\n variables,\n conditions_in_levels,\n algorithm,\n levels,\n show_plots,\n save_plots,\n save_plots_format,\n save_plots_path,\n plots_per_page,\n verbose\n)\n\n\nPlot conditional forecast given restrictions on endogenous variables and shocks (optional) of the model. 
The algorithm finds the combinations of shocks with the smallest magnitude to match the conditions and plots both the endogenous variables and shocks.\n\nThe left axis shows the level, and the right axis the deviation from the (non) stochastic steady state, depending on the solution algorithm (e.g. higher order perturbation algorithms will show the stochastic steady state). Variable names are above the subplots, conditioned values are marked, and the title provides information about the model, and number of pages.\n\nArguments\n\n𝓂: the object created by @model and @parameters for which to get the solution.\nconditions [Type: Union{Matrix{Union{Nothing,Float64}}, SparseMatrixCSC{Float64}, KeyedArray{Union{Nothing,Float64}}, KeyedArray{Float64}}]: conditions for which to find the corresponding shocks. The input can have multiple formats, but for all types of entries the first dimension corresponds to the number of variables and the second dimension to the number of periods. The conditions can be specified using a matrix of type Matrix{Union{Nothing,Float64}}. In this case the conditions are matrix elements of type Float64 and all remaining (free) entries are nothing. You can also use a SparseMatrixCSC{Float64} as input. In this case only non-zero elements are taken as conditions. Note that you cannot condition variables to be zero using a SparseMatrixCSC{Float64} as input (use other input formats to do so). Another possibility to input conditions is by using a KeyedArray. You can use a KeyedArray{Union{Nothing,Float64}} where, similar to Matrix{Union{Nothing,Float64}}, all entries of type Float64 are recognised as conditions and all other entries have to be nothing. Furthermore, you can specify in the primary axis a subset of variables (of type Symbol or String) for which you specify conditions and all other variables are considered free. The same goes for the case when you use KeyedArray{Float64}} as input, whereas in this case the conditions for the specified variables bind for all periods specified in the KeyedArray, because there are no nothing entries permitted with this type.\n\nKeyword Arguments\n\nshocks [Default: nothing, Type: Union{Matrix{Union{Nothing,Float64}}, SparseMatrixCSC{Float64}, KeyedArray{Union{Nothing,Float64}}, KeyedArray{Float64}, Nothing} = nothing]: known values of shocks. This entry allows the user to include certain shock values. By entering restrictions on the shock sin this way the problem to match the conditions on endogenous variables is restricted to the remaining free shocks in the repective period. The input can have multiple formats, but for all types of entries the first dimension corresponds to the number of shocks and the second dimension to the number of periods. The shocks can be specified using a matrix of type Matrix{Union{Nothing,Float64}}. In this case the shocks are matrix elements of type Float64 and all remaining (free) entries are nothing. You can also use a SparseMatrixCSC{Float64} as input. In this case only non-zero elements are taken as certain shock values. Note that you cannot condition shocks to be zero using a SparseMatrixCSC{Float64} as input (use other input formats to do so). Another possibility to input known shocks is by using a KeyedArray. You can use a KeyedArray{Union{Nothing,Float64}} where, similar to Matrix{Union{Nothing,Float64}}, all entries of type Float64 are recognised as known shocks and all other entries have to be nothing. 
Furthermore, you can specify in the primary axis a subset of shocks (of type Symbol or String) for which you specify values and all other shocks are considered free. The same goes for the case when you use KeyedArray{Float64}} as input, whereas in this case the values for the specified shocks bind for all periods specified in the KeyedArray, because there are no nothing entries permitted with this type.\ninitial_state [Default: [0.0], Type: Union{Vector{Vector{Float64}},Vector{Float64}}]: The initial state defines the starting point for the model and is relevant for normal IRFs. In the case of pruned solution algorithms the initial state can be given as multiple state vectors (Vector{Vector{Float64}}). In this case the initial state must be given in devations from the non-stochastic steady state. In all other cases the initial state must be given in levels. If a pruned solution algorithm is selected and initial state is a Vector{Float64} then it impacts the first order initial state vector only. The state includes all variables as well as exogenous variables in leads or lags if present.\nperiods [Default: 40, Type: Int]: the total number of periods is the sum of the argument provided here and the maximum of periods of the shocks or conditions argument.\nparameters [Default: nothing]: If nothing is provided, the solution is calculated for the parameters defined previously. Acceptable inputs are a vector of parameter values, a vector or tuple of pairs of the parameter Symbol or String and value. If the new parameter values differ from the previously defined the solution will be recalculated.\nvariables [Default: :all_excluding_obc]: variables for which to show the results. Inputs can be a variable name passed on as either a Symbol or String (e.g. :y or \"y\"), or Tuple, Matrix or Vector of String or Symbol. Any variables not part of the model will trigger a warning. :all_excluding_auxilliary_and_obc contains all shocks less those related to auxilliary variables and related to occasionally binding constraints (obc). :all_excluding_obc contains all shocks less those related to auxilliary variables. :all will contain all variables.\n\nconditions_in_levels [Default: true, Type: Bool]: indicator whether the conditions are provided in levels. If true the input to the conditions argument will have the non stochastic steady state substracted.\n\nlevels [Default: false, Type: Bool]: return levels or absolute deviations from steady state corresponding to the solution algorithm (e.g. stochastic steady state for higher order solution algorithms).\nalgorithm [Default: :first_order, Type: Symbol]: algorithm to solve for the dynamics of the model.\nshow_plots [Default: true, Type: Bool]: show plots. Separate plots per shocks and varibles depending on number of variables and plots_per_page.\nsave_plots [Default: false, Type: Bool]: switch to save plots using path and extension from save_plots_path and save_plots_format. Separate files per shocks and variables depending on number of variables and plots_per_page\nsave_plots_format [Default: :pdf, Type: Symbol]: output format of saved plots. 
See input formats compatible with GR for valid formats.\nsave_plots_path [Default: pwd(), Type: String]: path where to save plots\nplots_per_page [Default: 9, Type: Int]: how many plots to show per page\nverbose [Default: false, Type: Bool]: print information about how the NSSS is solved (symbolic or numeric), which solver is used (Levenberg-Marquardt...), and the maximum absolute error.\n\nExamples\n\nusing MacroModelling, StatsPlots\n\n@model RBC_CME begin\n y[0]=A[0]*k[-1]^alpha\n 1/c[0]=beta*1/c[1]*(alpha*A[1]*k[0]^(alpha-1)+(1-delta))\n 1/c[0]=beta*1/c[1]*(R[0]/Pi[+1])\n R[0] * beta =(Pi[0]/Pibar)^phi_pi\n A[0]*k[-1]^alpha=c[0]+k[0]-(1-delta*z_delta[0])*k[-1]\n z_delta[0] = 1 - rho_z_delta + rho_z_delta * z_delta[-1] + std_z_delta * delta_eps[x]\n A[0] = 1 - rhoz + rhoz * A[-1] + std_eps * eps_z[x]\nend\n\n@parameters RBC_CME begin\n alpha = .157\n beta = .999\n delta = .0226\n Pibar = 1.0008\n phi_pi = 1.5\n rhoz = .9\n std_eps = .0068\n rho_z_delta = .9\n std_z_delta = .005\nend\n\n# c is conditioned to deviate by 0.01 in period 1 and y is conditioned to deviate by 0.02 in period 3\nconditions = KeyedArray(Matrix{Union{Nothing,Float64}}(undef,2,2),Variables = [:c,:y], Periods = 1:2)\nconditions[1,1] = .01\nconditions[2,2] = .02\n\n# in period 2 second shock (eps_z) is conditioned to take a value of 0.05\nshocks = Matrix{Union{Nothing,Float64}}(undef,2,1)\nshocks[1,1] = .05\n\nplot_conditional_forecast(RBC_CME, conditions, shocks = shocks, conditions_in_levels = false)\n\n# The same can be achieved with the other input formats:\n# conditions = Matrix{Union{Nothing,Float64}}(undef,7,2)\n# conditions[4,1] = .01\n# conditions[6,2] = .02\n\n# using SparseArrays\n# conditions = spzeros(7,2)\n# conditions[4,1] = .01\n# conditions[6,2] = .02\n\n# shocks = KeyedArray(Matrix{Union{Nothing,Float64}}(undef,1,1),Variables = [:delta_eps], Periods = [1])\n# shocks[1,1] = .05\n\n# using SparseArrays\n# shocks = spzeros(2,1)\n# shocks[1,1] = .05\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.plot_conditional_variance_decomposition-Tuple{MacroModelling.ℳ}","page":"API","title":"MacroModelling.plot_conditional_variance_decomposition","text":"plot_conditional_variance_decomposition(\n 𝓂;\n periods,\n variables,\n parameters,\n show_plots,\n save_plots,\n save_plots_format,\n save_plots_path,\n plots_per_page,\n verbose\n)\n\n\nPlot conditional variance decomposition of the model.\n\nThe vertical axis shows the share of the shocks variance contribution, and horizontal axis the period of the variance decomposition. The stacked bars represent each shocks variance contribution at a specific time horizon.\n\nArguments\n\n𝓂: the object created by @model and @parameters for which to get the solution.\n\nKeyword Arguments\n\nperiods [Default: 40, Type: Int]: number of periods for which to calculate the IRFs. In case a matrix of shocks was provided, periods defines how many periods after the series of shocks the simulation continues.\nvariables [Default: :all_excluding_obc]: variables for which to show the results. Inputs can be a variable name passed on as either a Symbol or String (e.g. :y or \"y\"), or Tuple, Matrix or Vector of String or Symbol. Any variables not part of the model will trigger a warning. :all_excluding_auxilliary_and_obc contains all shocks less those related to auxilliary variables and related to occasionally binding constraints (obc). :all_excluding_obc contains all shocks less those related to auxilliary variables. 
:all will contain all variables.\nparameters [Default: nothing]: If nothing is provided, the solution is calculated for the parameters defined previously. Acceptable inputs are a vector of parameter values, a vector or tuple of pairs of the parameter Symbol or String and value. If the new parameter values differ from the previously defined the solution will be recalculated.\nshow_plots [Default: true, Type: Bool]: show plots. Separate plots per shocks and varibles depending on number of variables and plots_per_page.\nsave_plots [Default: false, Type: Bool]: switch to save plots using path and extension from save_plots_path and save_plots_format. Separate files per shocks and variables depending on number of variables and plots_per_page\nsave_plots_format [Default: :pdf, Type: Symbol]: output format of saved plots. See input formats compatible with GR for valid formats.\nsave_plots_path [Default: pwd(), Type: String]: path where to save plots\nplots_per_page [Default: 9, Type: Int]: how many plots to show per page\nverbose [Default: false, Type: Bool]: print information about how the NSSS is solved (symbolic or numeric), which solver is used (Levenberg-Marquardt...), and the maximum absolute error.\n\nExamples\n\nusing MacroModelling, StatsPlots\n\n@model RBC_CME begin\n y[0]=A[0]*k[-1]^alpha\n 1/c[0]=beta*1/c[1]*(alpha*A[1]*k[0]^(alpha-1)+(1-delta))\n 1/c[0]=beta*1/c[1]*(R[0]/Pi[+1])\n R[0] * beta =(Pi[0]/Pibar)^phi_pi\n A[0]*k[-1]^alpha=c[0]+k[0]-(1-delta*z_delta[0])*k[-1]\n z_delta[0] = 1 - rho_z_delta + rho_z_delta * z_delta[-1] + std_z_delta * delta_eps[x]\n A[0] = 1 - rhoz + rhoz * A[-1] + std_eps * eps_z[x]\nend\n\n@parameters RBC_CME begin\n alpha = .157\n beta = .999\n delta = .0226\n Pibar = 1.0008\n phi_pi = 1.5\n rhoz = .9\n std_eps = .0068\n rho_z_delta = .9\n std_z_delta = .005\nend\n\nplot_conditional_variance_decomposition(RBC_CME)\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.plot_fevd","page":"API","title":"MacroModelling.plot_fevd","text":"See plot_conditional_variance_decomposition\n\n\n\n\n\n","category":"function"},{"location":"api/#MacroModelling.plot_forecast_error_variance_decomposition","page":"API","title":"MacroModelling.plot_forecast_error_variance_decomposition","text":"See plot_conditional_variance_decomposition\n\n\n\n\n\n","category":"function"},{"location":"api/#MacroModelling.plot_girf-Tuple","page":"API","title":"MacroModelling.plot_girf","text":"Wrapper for plot_irf with generalised_irf = true.\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.plot_irf-Tuple{MacroModelling.ℳ}","page":"API","title":"MacroModelling.plot_irf","text":"plot_irf(\n 𝓂;\n periods,\n shocks,\n variables,\n parameters,\n show_plots,\n save_plots,\n save_plots_format,\n save_plots_path,\n plots_per_page,\n algorithm,\n negative_shock,\n generalised_irf,\n initial_state,\n ignore_obc,\n verbose\n)\n\n\nPlot impulse response functions (IRFs) of the model.\n\nThe left axis shows the level, and the right the deviation from the reference steady state. Linear solutions have the non stochastic steady state as reference other solution the stochastic steady state. The horizontal black line indicates the reference steady state. Variable names are above the subplots and the title provides information about the model, shocks and number of pages per shock.\n\nArguments\n\n𝓂: the object created by @model and @parameters for which to get the solution.\n\nKeyword Arguments\n\nperiods [Default: 40, Type: Int]: number of periods for which to calculate the IRFs. 
In case a matrix of shocks was provided, periods defines how many periods after the series of shocks the simulation continues.\nshocks [Default: :all_excluding_obc]: shocks for which to calculate the IRFs. Inputs can be a shock name passed on as either a Symbol or String (e.g. :y, or \"y\"), or Tuple, Matrix or Vector of String or Symbol. :simulate triggers random draws of all shocks (excluding occasionally binding constraints (obc) related shocks). :all_excluding_obc will contain all shocks but not the obc related ones.:all will contain also the obc related shocks. A series of shocks can be passed on using either a Matrix{Float64}, or a KeyedArray{Float64} as input with shocks (Symbol or String) in rows and periods in columns. The period of the simulation will correspond to the length of the input in the period dimension + the number of periods defined in periods. If the series of shocks is input as a KeyedArray{Float64} make sure to name the rows with valid shock names of type Symbol. Any shocks not part of the model will trigger a warning. :none in combination with an initial_state can be used for deterministic simulations.\nvariables [Default: :all_excluding_obc]: variables for which to show the results. Inputs can be a variable name passed on as either a Symbol or String (e.g. :y or \"y\"), or Tuple, Matrix or Vector of String or Symbol. Any variables not part of the model will trigger a warning. :all_excluding_auxilliary_and_obc contains all shocks less those related to auxilliary variables and related to occasionally binding constraints (obc). :all_excluding_obc contains all shocks less those related to auxilliary variables. :all will contain all variables.\nparameters [Default: nothing]: If nothing is provided, the solution is calculated for the parameters defined previously. Acceptable inputs are a vector of parameter values, a vector or tuple of pairs of the parameter Symbol or String and value. If the new parameter values differ from the previously defined the solution will be recalculated.\nshow_plots [Default: true, Type: Bool]: show plots. Separate plots per shocks and varibles depending on number of variables and plots_per_page.\nsave_plots [Default: false, Type: Bool]: switch to save plots using path and extension from save_plots_path and save_plots_format. Separate files per shocks and variables depending on number of variables and plots_per_page\nsave_plots_format [Default: :pdf, Type: Symbol]: output format of saved plots. See input formats compatible with GR for valid formats.\nsave_plots_path [Default: pwd(), Type: String]: path where to save plots\nplots_per_page [Default: 9, Type: Int]: how many plots to show per page\nalgorithm [Default: :first_order, Type: Symbol]: algorithm to solve for the dynamics of the model.\nnegative_shock [Default: false, Type: Bool]: calculate a negative shock. Relevant for generalised IRFs.\ngeneralised_irf [Default: false, Type: Bool]: calculate generalised IRFs. Relevant for nonlinear solutions. Reference steady state for deviations is the stochastic steady state.\ninitial_state [Default: [0.0], Type: Union{Vector{Vector{Float64}},Vector{Float64}}]: The initial state defines the starting point for the model and is relevant for normal IRFs. In the case of pruned solution algorithms the initial state can be given as multiple state vectors (Vector{Vector{Float64}}). In this case the initial state must be given in devations from the non-stochastic steady state. In all other cases the initial state must be given in levels. 
If a pruned solution algorithm is selected and initial state is a Vector{Float64} then it impacts the first order initial state vector only. The state includes all variables as well as exogenous variables in leads or lags if present.\nignore_obc [Default: false, Type: Bool]: solve the model ignoring the occasionally binding constraints.\nverbose [Default: false, Type: Bool]: print information about how the NSSS is solved (symbolic or numeric), which solver is used (Levenberg-Marquardt...), and the maximum absolute error.\n\nExamples\n\nusing MacroModelling, StatsPlots\n\n@model RBC begin\n 1 / c[0] = (β / c[1]) * (α * exp(z[1]) * k[0]^(α - 1) + (1 - δ))\n c[0] + k[0] = (1 - δ) * k[-1] + q[0]\n q[0] = exp(z[0]) * k[-1]^α\n z[0] = ρ * z[-1] + std_z * eps_z[x]\nend;\n\n@parameters RBC begin\n std_z = 0.01\n ρ = 0.2\n δ = 0.02\n α = 0.5\n β = 0.95\nend;\n\nplot_irf(RBC)\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.plot_irfs","page":"API","title":"MacroModelling.plot_irfs","text":"See plot_irf\n\n\n\n\n\n","category":"function"},{"location":"api/#MacroModelling.plot_model_estimates-Tuple{MacroModelling.ℳ, KeyedArray{Float64}}","page":"API","title":"MacroModelling.plot_model_estimates","text":"plot_model_estimates(\n 𝓂,\n data;\n parameters,\n algorithm,\n filter,\n warmup_iterations,\n variables,\n shocks,\n data_in_levels,\n shock_decomposition,\n smooth,\n show_plots,\n save_plots,\n save_plots_format,\n save_plots_path,\n plots_per_page,\n transparency,\n verbose\n)\n\n\nPlot model estimates of the variables given the data. The default plot shows the estimated variables, shocks, and the data to estimate the former. The left axis shows the level, and the right the deviation from the reference steady state. The horizontal black line indicates the non stochastic steady state. Variable names are above the subplots and the title provides information about the model, shocks and number of pages per shock.\n\nIn case shock_decomposition = true, then the plot shows the variables, shocks, and data in absolute deviations from the non stochastic steady state plus the contribution of the shocks as a stacked bar chart per period.\n\nArguments\n\n𝓂: the object created by @model and @parameters for which to get the solution.\ndata [Type: KeyedArray]: data matrix with variables (String or Symbol) in rows and time in columns\n\nKeyword Arguments\n\nparameters [Default: nothing]: If nothing is provided, the solution is calculated for the parameters defined previously. Acceptable inputs are a vector of parameter values, a vector or tuple of pairs of the parameter Symbol or String and value. If the new parameter values differ from the previously defined the solution will be recalculated.\nvariables [Default: :all_excluding_obc]: variables for which to show the results. Inputs can be a variable name passed on as either a Symbol or String (e.g. :y or \"y\"), or Tuple, Matrix or Vector of String or Symbol. Any variables not part of the model will trigger a warning. :all_excluding_auxilliary_and_obc contains all shocks less those related to auxilliary variables and related to occasionally binding constraints (obc). :all_excluding_obc contains all shocks less those related to auxilliary variables. :all will contain all variables.\nshocks [Default: :all]: shocks for which to plot the estimates. Inputs can be either a Symbol (e.g. 
:y, or :all), Tuple{Symbol, Vararg{Symbol}}, Matrix{Symbol}, or Vector{Symbol}.\ndata_in_levels [Default: true, Type: Bool]: indicator whether the data is provided in levels. If true the input to the data argument will have the non stochastic steady state substracted.\nshock_decomposition [Default: false, Type: Bool]: whether to show the contribution of the shocks to the deviations from NSSS for each variable. If false, the plot shows the values of the selected variables, data, and shocks\nsmooth [Default: true, Type: Bool]: whether to return smoothed (true) or filtered (false) shocks. Only works for the Kalman filter. The inversion filter only returns filtered shocks.\nshow_plots [Default: true, Type: Bool]: show plots. Separate plots per shocks and varibles depending on number of variables and plots_per_page.\nsave_plots [Default: false, Type: Bool]: switch to save plots using path and extension from save_plots_path and save_plots_format. Separate files per shocks and variables depending on number of variables and plots_per_page\nsave_plots_format [Default: :pdf, Type: Symbol]: output format of saved plots. See input formats compatible with GR for valid formats.\nsave_plots_path [Default: pwd(), Type: String]: path where to save plots\nplots_per_page [Default: 9, Type: Int]: how many plots to show per page\ntransparency [Default: 0.6, Type: Float64]: transparency of bars\nverbose [Default: false, Type: Bool]: print information about how the NSSS is solved (symbolic or numeric), which solver is used (Levenberg-Marquardt...), and the maximum absolute error.\n\nExamples\n\nusing MacroModelling, StatsPlots\n\n\n@model RBC_CME begin\n y[0]=A[0]*k[-1]^alpha\n 1/c[0]=beta*1/c[1]*(alpha*A[1]*k[0]^(alpha-1)+(1-delta))\n 1/c[0]=beta*1/c[1]*(R[0]/Pi[+1])\n R[0] * beta =(Pi[0]/Pibar)^phi_pi\n A[0]*k[-1]^alpha=c[0]+k[0]-(1-delta*z_delta[0])*k[-1]\n z_delta[0] = 1 - rho_z_delta + rho_z_delta * z_delta[-1] + std_z_delta * delta_eps[x]\n A[0] = 1 - rhoz + rhoz * A[-1] + std_eps * eps_z[x]\nend\n\n@parameters RBC_CME begin\n alpha = .157\n beta = .999\n delta = .0226\n Pibar = 1.0008\n phi_pi = 1.5\n rhoz = .9\n std_eps = .0068\n rho_z_delta = .9\n std_z_delta = .005\nend\n\nsimulation = simulate(RBC_CME)\n\nplot_model_estimates(RBC_CME, simulation([:k],:,:simulate))\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.plot_shock_decomposition-Tuple","page":"API","title":"MacroModelling.plot_shock_decomposition","text":"Wrapper for plot_model_estimates with shock_decomposition = true.\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.plot_simulation-Tuple","page":"API","title":"MacroModelling.plot_simulation","text":"Wrapper for plot_irf with shocks = :simulate and periods = 100.\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.plot_simulations-Tuple","page":"API","title":"MacroModelling.plot_simulations","text":"Wrapper for plot_irf with shocks = :simulate and periods = 100.\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.plot_solution-Tuple{MacroModelling.ℳ, Union{String, Symbol}}","page":"API","title":"MacroModelling.plot_solution","text":"plot_solution(\n 𝓂,\n state;\n variables,\n algorithm,\n σ,\n parameters,\n ignore_obc,\n show_plots,\n save_plots,\n save_plots_format,\n save_plots_path,\n plots_per_page,\n verbose\n)\n\n\nPlot the solution of the model (mapping of past states to present variables) around the (non) stochastic steady state (depending on chosen solution algorithm). 
Each plot shows the relationship between the chosen state (defined in state) and one of the chosen variables (defined in variables). \n\nThe (non) stochastic steady state is plotted along with the mapping from the chosen past state to one present variable per plot. All other (non-chosen) states remain in the (non) stochastic steady state.\n\nIn the case of pruned solutions there are as many (latent) state vectors as the perturbation order. The first and third order baseline state vectors are the non stochastic steady state and the second order baseline state vector is the stochastic steady state. Deviations for the chosen state are only added to the first order baseline state. The plot shows the mapping from σ standard deviations (first order) added to the first order non stochastic steady state to the present variables. Note that there is no unique mapping between the \"pruned\" states and the \"actual\" reported state. Hence, the plots shown are just one realisation of infinitely many possible mappings.\n\nArguments\n\n𝓂: the object created by @model and @parameters for which to get the solution.\nstate [Type: Union{Symbol,String}]: state variable to be shown on x-axis.\n\nKeyword Arguments\n\nvariables [Default: :all_excluding_obc]: variables for which to show the results. Inputs can be a variable name passed on as either a Symbol or String (e.g. :y or \"y\"), or Tuple, Matrix or Vector of String or Symbol. Any variables not part of the model will trigger a warning. :all_excluding_auxilliary_and_obc contains all shocks less those related to auxilliary variables and related to occasionally binding constraints (obc). :all_excluding_obc contains all shocks less those related to auxilliary variables. :all will contain all variables.\nalgorithm [Default: :first_order, Type: Union{Symbol,Vector{Symbol}}]: solution algorithm for which to show the IRFs. Can be more than one, e.g.: [:second_order,:pruned_third_order]\nσ [Default: 2, Type: Union{Int64,Float64}]: defines the range of the state variable around the (non) stochastic steady state in standard deviations. E.g. a value of 2 means that the state variable is plotted for values within +/- 2 standard deviations around the (non) stochastic steady state.\nparameters [Default: nothing]: If nothing is provided, the solution is calculated for the parameters defined previously. Acceptable inputs are a vector of parameter values, a vector or tuple of pairs of the parameter Symbol or String and value. If the new parameter values differ from the previously defined ones, the solution will be recalculated.\nignore_obc [Default: false, Type: Bool]: solve the model ignoring the occasionally binding constraints.\nshow_plots [Default: true, Type: Bool]: show plots. Separate plots per shocks and variables depending on number of variables and plots_per_page.\nsave_plots [Default: false, Type: Bool]: switch to save plots using path and extension from save_plots_path and save_plots_format. Separate files per shocks and variables depending on number of variables and plots_per_page\nsave_plots_format [Default: :pdf, Type: Symbol]: output format of saved plots. 
See input formats compatible with GR for valid formats.\nsave_plots_path [Default: pwd(), Type: String]: path where to save plots\nplots_per_page [Default: 6, Type: Int]: how many plots to show per page\nverbose [Default: false, Type: Bool]: print information about how the NSSS is solved (symbolic or numeric), which solver is used (Levenberg-Marquardt...), and the maximum absolute error.\n\nExamples\n\nusing MacroModelling, StatsPlots\n\n@model RBC_CME begin\n y[0]=A[0]*k[-1]^alpha\n 1/c[0]=beta*1/c[1]*(alpha*A[1]*k[0]^(alpha-1)+(1-delta))\n 1/c[0]=beta*1/c[1]*(R[0]/Pi[+1])\n R[0] * beta =(Pi[0]/Pibar)^phi_pi\n A[0]*k[-1]^alpha=c[0]+k[0]-(1-delta*z_delta[0])*k[-1]\n z_delta[0] = 1 - rho_z_delta + rho_z_delta * z_delta[-1] + std_z_delta * delta_eps[x]\n A[0] = 1 - rhoz + rhoz * A[-1] + std_eps * eps_z[x]\nend\n\n@parameters RBC_CME begin\n alpha = .157\n beta = .999\n delta = .0226\n Pibar = 1.0008\n phi_pi = 1.5\n rhoz = .9\n std_eps = .0068\n rho_z_delta = .9\n std_z_delta = .005\nend\n\nplot_solution(RBC_CME, :k)\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.plotlyjs_backend","page":"API","title":"MacroModelling.plotlyjs_backend","text":"plotlyjs_backend()\n\nRenaming and reexport of Plot.jl function plotlyjs() to define PlotlyJS.jl as backend\n\n\n\n\n\n","category":"function"},{"location":"api/#MacroModelling.simulate-Tuple","page":"API","title":"MacroModelling.simulate","text":"Wrapper for get_irf with shocks = :simulate. Function returns values in levels by default.\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.ss-Tuple","page":"API","title":"MacroModelling.ss","text":"See get_steady_state\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.sss-Tuple","page":"API","title":"MacroModelling.sss","text":"Wrapper for get_steady_state with stochastic = true.\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.std","page":"API","title":"MacroModelling.std","text":"Wrapper for get_moments with standard_deviation = true and non_stochastic_steady_state = false, variance = false, covariance = false.\n\n\n\n\n\n","category":"function"},{"location":"api/#MacroModelling.stdev","page":"API","title":"MacroModelling.stdev","text":"Wrapper for get_moments with standard_deviation = true and non_stochastic_steady_state = false, variance = false, covariance = false.\n\n\n\n\n\n","category":"function"},{"location":"api/#MacroModelling.steady_state","page":"API","title":"MacroModelling.steady_state","text":"See get_steady_state\n\n\n\n\n\n","category":"function"},{"location":"api/#MacroModelling.translate_dynare_file","page":"API","title":"MacroModelling.translate_dynare_file","text":"See translate_mod_file\n\n\n\n\n\n","category":"function"},{"location":"api/#MacroModelling.translate_mod_file-Tuple{AbstractString}","page":"API","title":"MacroModelling.translate_mod_file","text":"translate_mod_file(path_to_mod_file)\n\n\nReads in a dynare .mod-file, adapts the syntax, tries to capture parameter definitions, and writes a julia file in the same folder containing the model equations and parameters in MacroModelling.jl syntax. This function is not guaranteed to produce working code. It's purpose is to make it easier to port a model from dynare to MacroModelling.jl. \n\nThe recommended workflow is to use this function to translate a .mod-file, and then adapt the output so that it runs and corresponds to the input.\n\nNote that this function copies the .mod-file to a temporary folder and executes it there. 
All references within that .mod-file are therefore not valid (because those files are not copied) and must be copied into the .mod-file.\n\nArguments\n\npath_to_mod_file [Type: AbstractString]: path including filename of the .mod-file to be translated\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.var","page":"API","title":"MacroModelling.var","text":"Wrapper for get_moments with variance = true and non_stochastic_steady_state = false, standard_deviation = false, covariance = false.\n\n\n\n\n\n","category":"function"},{"location":"api/#MacroModelling.write_dynare_file","page":"API","title":"MacroModelling.write_dynare_file","text":"See write_mod_file\n\n\n\n\n\n","category":"function"},{"location":"api/#MacroModelling.write_mod_file-Tuple{MacroModelling.ℳ}","page":"API","title":"MacroModelling.write_mod_file","text":"write_mod_file(m)\n\n\nWrites a dynare .mod-file in the current working directory. This function is not guaranteed to produce working code. Its purpose is to make it easier to port a model from MacroModelling.jl to dynare. \n\nThe recommended workflow is to use this function to write a .mod-file, and then adapt the output so that it runs and corresponds to the input.\n\nArguments\n\n𝓂: the object created by @model and @parameters for which to get the solution.\n\n\n\n\n\n","category":"method"},{"location":"api/#MacroModelling.write_to_dynare","page":"API","title":"MacroModelling.write_to_dynare","text":"See write_mod_file\n\n\n\n\n\n","category":"function"},{"location":"api/#MacroModelling.write_to_dynare_file","page":"API","title":"MacroModelling.write_to_dynare_file","text":"See write_mod_file\n\n\n\n\n\n","category":"function"},{"location":"api/#MacroModelling.@model-Tuple{Any, Vararg{Any}}","page":"API","title":"MacroModelling.@model","text":"Parses the model equations and assigns them to an object.\n\nArguments\n\n𝓂: name of the object to be created containing the model information.\nex: equations\n\nOptional arguments to be placed between 𝓂 and ex\n\nmax_obc_horizon [Default: 40, Type: Int]: maximum length of anticipated shocks and corresponding unconditional forecast horizon over which the occasionally binding constraint is to be enforced. Increase this number if no solution is found to enforce the constraint.\n\nVariables must be defined with their time subscript in square brackets. Endogenous variables can have the following:\n\npresent: c[0]\nnon-stochastic steady state: c[ss] instead of ss any of the following is also a valid flag for the non-stochastic steady state: ss, stst, steady, steadystate, steady_state, and the parser is case-insensitive (SS or sTst will work as well).\npast: c[-1] or any negative Integer: e.g. c[-12]\nfuture: c[1] or any positive Integer: e.g. c[16] or c[+16]\n\nSigned integers are recognised and parsed as such.\n\nExogenous variables (shocks) can have the following:\n\npresent: eps_z[x] instead of x any of the following is also a valid flag for exogenous variables: ex, exo, exogenous, and the parser is case-insensitive (Ex or exoGenous will work as well).\npast: eps_z[x-1]\nfuture: eps_z[x+1]\n\nParameters enter the equations without square brackets.\n\nIf an equation contains a max or min operator, then the default dynamic (first order) solution of the model will enforce the occasionally binding constraint. 
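As an illustration (a hedged sketch rather than an example from the package documentation; the variables R and Pi, the bound parameter R_lower, and the Taylor-rule term are hypothetical), an effective lower bound on the policy rate could be written directly in the @model block as R[0] = max(R_lower, R[ss] * (Pi[0] / Pi[ss]) ^ phi_pi); the max operator marks the constraint, which is then enforced with the anticipated shocks described under max_obc_horizon. 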
You can choose to ignore it by setting ignore_obc = true in the relevant function calls.\n\nExamples\n\nusing MacroModelling\n\n@model RBC begin\n 1 / c[0] = (β / c[1]) * (α * exp(z[1]) * k[0]^(α - 1) + (1 - δ))\n c[0] + k[0] = (1 - δ) * k[-1] + q[0]\n q[0] = exp(z[0]) * k[-1]^α\n z[0] = ρ * z[-1] + std_z * eps_z[x]\nend\n\nProgrammatic model writing\n\nParameters and variables can be indexed using curly braces: e.g. c{H}[0], eps_z{F}[x], or α{H}.\n\nfor loops can be used to write models programmatically. They can either be used to generate expressions where you iterate over the time index or the index in curly braces:\n\ngenerate equation with different indices in curly braces: for co in [H,F] C{co}[0] + X{co}[0] + Z{co}[0] - Z{co}[-1] end = for co in [H,F] Y{co}[0] end\ngenerate multiple equations with different indices in curly braces: for co in [H, F] K{co}[0] = (1-delta{co}) * K{co}[-1] + S{co}[0] end\ngenerate equation with different time indices: Y_annual[0] = for lag in -3:0 Y[lag] end or R_annual[0] = for operator = :*, lag in -3:0 R[lag] end\n\n\n\n\n\n","category":"macro"},{"location":"api/#MacroModelling.@parameters-Tuple{Any, Vararg{Any}}","page":"API","title":"MacroModelling.@parameters","text":"Adds parameter values and calibration equations to the previously defined model. It also allows providing an initial guess for the non-stochastic steady state (NSSS).\n\nArguments\n\n𝓂: name of the object previously created containing the model information.\nex: parameter values and calibration equations\n\nParameters can be defined in either of the following ways:\n\nplain number: δ = 0.02\nexpression containing numbers: δ = 1/50\nexpression containing other parameters: δ = 2 * std_z in this case it is irrelevant if std_z is defined before or after. The definitions including other parameters are treated as a system of equations and solved accordingly.\nexpressions containing a target parameter and an equation with endogenous variables in the non-stochastic steady state, other parameters, or numbers: k[ss] / (4 * q[ss]) = 1.5 | δ or α | 4 * q[ss] = δ * k[ss] in this case the target parameter will be solved simultaneously with the non-stochastic steady state using the equation defined with it.\n\nOptional arguments to be placed between 𝓂 and ex\n\nguess [Type: Dict{Symbol, <:Real} or Dict{String, <:Real}]: Guess for the non-stochastic steady state. The keys must be the variable (and calibrated parameters) names and the values the guesses. Missing values are filled with standard starting values.\nverbose [Default: false, Type: Bool]: print more information about how the non stochastic steady state is solved\nsilent [Default: false, Type: Bool]: do not print any information\nsymbolic [Default: false, Type: Bool]: try to solve the non stochastic steady state symbolically and fall back to a numerical solution if not possible\nperturbation_order [Default: 1, Type: Int]: take derivatives only up to the specified order at this stage. 
In case you want to work with higher order perturbation later on, respective derivatives will be taken at that stage.\n\nExamples\n\nusing MacroModelling\n\n@model RBC begin\n 1 / c[0] = (β / c[1]) * (α * exp(z[1]) * k[0]^(α - 1) + (1 - δ))\n c[0] + k[0] = (1 - δ) * k[-1] + q[0]\n q[0] = exp(z[0]) * k[-1]^α\n z[0] = ρ * z[-1] + std_z * eps_z[x]\nend\n\n@parameters RBC verbose = true begin\n std_z = 0.01\n ρ = 0.2\n δ = 0.02\n α = 0.5\n β = 0.95\nend\n\n@model RBC_calibrated begin\n 1 / c[0] = (β / c[1]) * (α * exp(z[1]) * k[0]^(α - 1) + (1 - δ))\n c[0] + k[0] = (1 - δ) * k[-1] + q[0]\n q[0] = exp(z[0]) * k[-1]^α\n z[0] = ρ * z[-1] + std_z * eps_z[x]\nend\n\n@parameters RBC_calibrated verbose = true guess = Dict(:k => 3) begin\n std_z = 0.01\n ρ = 0.2\n δ = 0.02\n k[ss] / q[ss] = 2.5 | α\n β = 0.95\nend\n\nProgrammatic model writing\n\nVariables and parameters indexed with curly braces can be either referenced specifically (e.g. c{H}[ss]) or generally (e.g. alpha). If they are referenced generally, the parser assumes all instances (indices) are meant. For example, in a model where alpha has two indices H and F, the expression alpha = 0.3 is interpreted as two expressions: alpha{H} = 0.3 and alpha{F} = 0.3. The same goes for calibration equations.\n\n\n\n\n\n","category":"macro"},{"location":"tutorials/install/#Installation","page":"Installation","title":"Installation","text":"","category":"section"},{"location":"tutorials/install/","page":"Installation","title":"Installation","text":"MacroModelling.jl requires julia version 1.8 or higher and an IDE is recommended (e.g. VS Code with the julia extension).","category":"page"},{"location":"tutorials/install/","page":"Installation","title":"Installation","text":"Once set up you can install MacroModelling.jl by typing the following in the julia REPL:","category":"page"},{"location":"tutorials/install/","page":"Installation","title":"Installation","text":"using Pkg; Pkg.add(\"MacroModelling\")","category":"page"},{"location":"how-to/obc/#Occasionally-Binding-Constraints","page":"Occasionally binding constraints","title":"Occasionally Binding Constraints","text":"","category":"section"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"Occasionally binding constraints are a form of nonlinearity frequently used to model effects like the zero lower bound on interest rates, or borrowing constraints. Perturbation methods are not able to capture them as they are local approximations. Nonetheless, there are ways to combine the speed of perturbation solutions and the flexibility of occasionally binding constraints. MacroModelling.jl provides a convenient way to write down the constraints and automatically enforces the constraint equation with shocks. More specifically, the constraint equation is enforced for each period's unconditional forecast (default forecast horizon of 40 periods) by constraint-equation-specific anticipated shocks, while minimising the shock size.","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"This guide will demonstrate how to write down models containing occasionally binding constraints (e.g. 
effective lower bound and borrowing constraint), show some potential problems the user may encounter and how to overcome them, and go through some use cases.","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"Common problems that may occur are that no perturbation solution is found, or that the algorithm cannot find a combination of shocks which enforce the constraint equation. The former has to do with the fact that occasionally binding constraints can give rise to more than one steady state but only one is suitable for a perturbation solution. The latter has to do with the dynamics of the model and the fact that we use a finite amount of shocks to enforce the constraint equation.","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"Beyond the examples outlined in this guide there is a version of Smets and Wouters (2003) with the ELB in the models folder (filename: SW03_obc.jl).","category":"page"},{"location":"how-to/obc/#Example:-Effective-lower-bound-on-interest-rates","page":"Occasionally binding constraints","title":"Example: Effective lower bound on interest rates","text":"","category":"section"},{"location":"how-to/obc/#Writing-a-model-with-occasionally-binding-constraints","page":"Occasionally binding constraints","title":"Writing a model with occasionally binding constraints","text":"","category":"section"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"Let us take the Galı́ (2015), Chapter 3 model containing a Taylor rule and implement an effective lower bound on interest rates. The Taylor rule in the model: R[0] = 1 / β * Pi[0] ^ ϕᵖⁱ * (Y[0] / Y[ss]) ^ ϕʸ * exp(nu[0]) needs to be modified so that R[0] never goes below an effective lower bound R̄. 
We can do this using the max operator: R[0] = max(R̄ , 1 / β * Pi[0] ^ ϕᵖⁱ * (Y[0] / Y[ss]) ^ ϕʸ * exp(nu[0]))","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"The model definition after the change of the Taylor rule looks like this:","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"ENV[\"GKSwstype\"] = \"100\"\nusing Random\nRandom.seed!(30)","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"using MacroModelling\n@model Gali_2015_chapter_3_obc begin\n W_real[0] = C[0] ^ σ * N[0] ^ φ\n\n Q[0] = β * (C[1] / C[0]) ^ (-σ) * Z[1] / Z[0] / Pi[1]\n\n R[0] = 1 / Q[0]\n\n Y[0] = A[0] * (N[0] / S[0]) ^ (1 - α)\n\n R[0] = Pi[1] * realinterest[0]\n\n C[0] = Y[0]\n\n log(A[0]) = ρ_a * log(A[-1]) + std_a * eps_a[x]\n\n log(Z[0]) = ρ_z * log(Z[-1]) - std_z * eps_z[x]\n\n nu[0] = ρ_ν * nu[-1] + std_nu * eps_nu[x]\n\n MC[0] = W_real[0] / (S[0] * Y[0] * (1 - α) / N[0])\n\n 1 = θ * Pi[0] ^ (ϵ - 1) + (1 - θ) * Pi_star[0] ^ (1 - ϵ)\n\n S[0] = (1 - θ) * Pi_star[0] ^ (( - ϵ) / (1 - α)) + θ * Pi[0] ^ (ϵ / (1 - α)) * S[-1]\n\n Pi_star[0] ^ (1 + ϵ * α / (1 - α)) = ϵ * x_aux_1[0] / x_aux_2[0] * (1 - τ) / (ϵ - 1)\n\n x_aux_1[0] = MC[0] * Y[0] * Z[0] * C[0] ^ (-σ) + β * θ * Pi[1] ^ (ϵ + α * ϵ / (1 - α)) * x_aux_1[1]\n\n x_aux_2[0] = Y[0] * Z[0] * C[0] ^ (-σ) + β * θ * Pi[1] ^ (ϵ - 1) * x_aux_2[1]\n\n log_y[0] = log(Y[0])\n\n log_W_real[0] = log(W_real[0])\n\n log_N[0] = log(N[0])\n\n pi_ann[0] = 4 * log(Pi[0])\n\n i_ann[0] = 4 * log(R[0])\n\n r_real_ann[0] = 4 * log(realinterest[0])\n\n M_real[0] = Y[0] / R[0] ^ η\n\n R[0] = max(R̄ , 1 / β * Pi[0] ^ ϕᵖⁱ * (Y[0] / Y[ss]) ^ ϕʸ * exp(nu[0]))\n\nend","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"In the background the system of equations is augmented by a series of anticipated shocks added to the equation containing the constraint (max/min operator). 
This explains the large number of auxilliary variables and shocks.","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"Next we define the parameters including the new parameter defining the effective lower bound (which we set to 1, which implements a zero lower bound):","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"@parameters Gali_2015_chapter_3_obc begin\n R̄ = 1.0\n\n σ = 1\n\n φ = 5\n\n ϕᵖⁱ = 1.5\n\n ϕʸ = 0.125\n\n θ = 0.75\n\n ρ_ν = 0.5\n\n ρ_z = 0.5\n\n ρ_a = 0.9\n\n β = 0.99\n\n η = 3.77\n\n α = 0.25\n\n ϵ = 9\n\n τ = 0\n\n std_a = .01\n\n std_z = .05\n\n std_nu = .0025\n\nend","category":"page"},{"location":"how-to/obc/#Verify-the-non-stochastic-steady-state","page":"Occasionally binding constraints","title":"Verify the non stochastic steady state","text":"","category":"section"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"Let's check out the non stochastic steady state (NSSS):","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"SS(Gali_2015_chapter_3_obc)\nSS(Gali_2015_chapter_3_obc)(:R)","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"There are a few things to note here. First, we get the NSSS values of the auxilliary variables related to the occasionally binding constraint. Second, the NSSS value of R is 1, and thereby the effective lower bound is binding in the NSSS. While this is a viable NSSS it is not a viable approximation point for perturbation. We can only find a perturbation solution if the effective lower bound is not binding in NSSS. Calling get_solution reveals that there is no stable solution at this NSSS:","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"get_solution(Gali_2015_chapter_3_obc)","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"In order to get the other viable NSSS we have to restrict the values of R to be larger than the effective lower bound. We can do this by adding a constraint on the variable in the @parameter section. 
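In isolation, the change amounts to a single inequality added at the end of the @parameters block (the full redefinition including this line is shown next):

```julia
# Added at the end of the @parameters block: restrict the non-stochastic steady
# state search to the region where the effective lower bound is not binding.
R > 1.000001
```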
Let us redefine the model:","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"@model Gali_2015_chapter_3_obc begin\n W_real[0] = C[0] ^ σ * N[0] ^ φ\n\n Q[0] = β * (C[1] / C[0]) ^ (-σ) * Z[1] / Z[0] / Pi[1]\n\n R[0] = 1 / Q[0]\n\n Y[0] = A[0] * (N[0] / S[0]) ^ (1 - α)\n\n R[0] = Pi[1] * realinterest[0]\n\n C[0] = Y[0]\n\n log(A[0]) = ρ_a * log(A[-1]) + std_a * eps_a[x]\n\n log(Z[0]) = ρ_z * log(Z[-1]) - std_z * eps_z[x]\n\n nu[0] = ρ_ν * nu[-1] + std_nu * eps_nu[x]\n\n MC[0] = W_real[0] / (S[0] * Y[0] * (1 - α) / N[0])\n\n 1 = θ * Pi[0] ^ (ϵ - 1) + (1 - θ) * Pi_star[0] ^ (1 - ϵ)\n\n S[0] = (1 - θ) * Pi_star[0] ^ (( - ϵ) / (1 - α)) + θ * Pi[0] ^ (ϵ / (1 - α)) * S[-1]\n\n Pi_star[0] ^ (1 + ϵ * α / (1 - α)) = ϵ * x_aux_1[0] / x_aux_2[0] * (1 - τ) / (ϵ - 1)\n\n x_aux_1[0] = MC[0] * Y[0] * Z[0] * C[0] ^ (-σ) + β * θ * Pi[1] ^ (ϵ + α * ϵ / (1 - α)) * x_aux_1[1]\n\n x_aux_2[0] = Y[0] * Z[0] * C[0] ^ (-σ) + β * θ * Pi[1] ^ (ϵ - 1) * x_aux_2[1]\n\n log_y[0] = log(Y[0])\n\n log_W_real[0] = log(W_real[0])\n\n log_N[0] = log(N[0])\n\n pi_ann[0] = 4 * log(Pi[0])\n\n i_ann[0] = 4 * log(R[0])\n\n r_real_ann[0] = 4 * log(realinterest[0])\n\n M_real[0] = Y[0] / R[0] ^ η\n\n R[0] = max(R̄ , 1 / β * Pi[0] ^ ϕᵖⁱ * (Y[0] / Y[ss]) ^ ϕʸ * exp(nu[0]))\n\nend\n\n@parameters Gali_2015_chapter_3_obc begin\n R̄ = 1.0\n\n σ = 1\n\n φ = 5\n\n ϕᵖⁱ = 1.5\n\n ϕʸ = 0.125\n\n θ = 0.75\n\n ρ_ν = 0.5\n\n ρ_z = 0.5\n\n ρ_a = 0.9\n\n β = 0.99\n\n η = 3.77\n\n α = 0.25\n\n ϵ = 9\n\n τ = 0\n\n std_a = .01\n\n std_z = .05\n\n std_nu = .0025\n\n R > 1.000001\nend","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"and check the NSSS once more:","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"SS(Gali_2015_chapter_3_obc)\nSS(Gali_2015_chapter_3_obc)(:R)","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"Now we get R > R̄, so that the constraint is not binding in the NSSS and we can work with a stable first order solution:","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"get_solution(Gali_2015_chapter_3_obc)","category":"page"},{"location":"how-to/obc/#Generate-model-output","page":"Occasionally binding constraints","title":"Generate model output","text":"","category":"section"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"Having defined the system with an occasionally binding constraint we can simply simulate the model by calling:","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"import StatsPlots\nplot_simulations(Gali_2015_chapter_3_obc)","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"(Image: Simulation_elb)","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"In the background an optimisation problem is set up to find the smallest shocks in magnitude which enforce the equation containing the occasionally binding constraint over the unconditional forecast horizon 
(default 40 periods) at each period of the simulation. The plots show multiple spells of a binding effective lower bound and many other variables are skewed as a result of the nonlinearity. It can happen that it is not possible to find a combination of shocks which enforce the occasionally binding constraint equation. In this case one solution can be to make the horizon larger over which the algorithm tries to enforce the equation. You can do this by setting the parameter at the beginning of the @model section: @model Gali_2015_chapter_3_obc max_obc_horizon = 60 begin ... end.","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"Next let us change the effective lower bound to 0.99 and plot once more:","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"plot_simulations(Gali_2015_chapter_3_obc, parameters = :R̄ => 0.99)","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"(Image: Simulation_elb2)","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"Now, the effect of the effective lower bound becomes less important as it binds less often.","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"If you want to ignore the occasionally binding constraint you can simply call:","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"plot_simulations(Gali_2015_chapter_3_obc, ignore_obc = true)","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"(Image: Simulation_no_elb)","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"and you get the simulation based on the first order solution approximated around the NSSS, which is the same as the one for the model without the modified Taylor rule.","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"We can plot the impulse response functions for the eps_z shock, while setting the parameter of the occasionally binding constraint back to 1, as follows:","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"plot_irf(Gali_2015_chapter_3_obc, shocks = :eps_z, parameters = :R̄ => 1.0)","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"(Image: IRF_elb)","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"As you can see R remains above the effective lower bound in the first period.","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"Next, let us simulate the model using a series of shocks. E.g. 
three positive shocks to eps_z in periods 5, 10, and 15 in decreasing magnitude:","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"shcks = zeros(1,15)\nshcks[5] = 3.0\nshcks[10] = 2.0\nshcks[15] = 1.0\n\nsks = KeyedArray(shcks; Shocks = [:eps_z], Periods = 1:15)\n\nplot_irf(Gali_2015_chapter_3_obc, shocks = sks, periods = 10)","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"(Image: Shock_series_elb)","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"The effective lower bound is binding after all three shocks but the length of the constraint being binding varies with the shock size and is completely endogenous.","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"Last but not least, we can get the simulated moments of the model (theoretical moments are not available):","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"sims = get_irf(Gali_2015_chapter_3_obc, periods = 1000, shocks = :simulate, levels = true)","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"Let's look at the mean and standard deviation of borrowing:","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"import Statistics\nStatistics.mean(sims(:Y,:,:))","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"and","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"Statistics.std(sims(:Y,:,:))","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"Compare this to the theoretical mean of the model without the occasionally binding constraint:","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"get_mean(Gali_2015_chapter_3_obc)\nget_mean(Gali_2015_chapter_3_obc)(:Y)","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"and the theoretical standard deviation:","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"get_std(Gali_2015_chapter_3_obc)\nget_std(Gali_2015_chapter_3_obc)(:Y)","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"The mean of output is lower in the model with effective lower bound compared to the model without and the standard deviation is higher.","category":"page"},{"location":"how-to/obc/#Example:-Borrowing-constraint","page":"Occasionally binding constraints","title":"Example: Borrowing constraint","text":"","category":"section"},{"location":"how-to/obc/#Model-definition","page":"Occasionally binding constraints","title":"Model definition","text":"","category":"section"},{"location":"how-to/obc/","page":"Occasionally 
binding constraints","title":"Occasionally binding constraints","text":"Let us start with a consumption-saving model containing a borrowing constraint (see [@citet cuba2019likelihood] for details). Output is exogenously given, and households can only borrow up to a fraction of output and decide between saving and consumption. The first order conditions of the model are:","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"\begin{align*}\nY_t + B_t &= C_t + R B_{t-1} \\\n\log(Y_t) &= \rho \log(Y_{t-1}) + \sigma \varepsilon_t \\\nC_t^{-\gamma} &= \beta R \mathbb{E}_t \left(C_{t+1}^{-\gamma}\right) + \lambda_t \\\n0 &= \lambda_t (B_t - m Y_t)\n\end{align*}","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"In order to write this model down we need to express the Karush-Kuhn-Tucker condition (last equation) using a max (or min) operator, so that it becomes:","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"0 = \max(B_t - m Y_t, -\lambda_t)","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"We can write this model containing an occasionally binding constraint in a very convenient way:","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"@model borrowing_constraint begin\n Y[0] + B[0] = C[0] + R * B[-1]\n\n log(Y[0]) = ρ * log(Y[-1]) + σ * ε[x]\n\n C[0]^(-γ) = β * R * C[1]^(-γ) + λ[0]\n\n 0 = max(B[0] - m * Y[0], -λ[0])\nend","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"In the background the system of equations is augmented by a series of anticipated shocks added to the equation containing the constraint (max/min operator). This explains the large number of auxilliary variables and shocks.","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"Next we define the parameters as usual:","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"@parameters borrowing_constraint begin\n R = 1.05\n β = 0.945\n ρ = 0.9\n σ = 0.05\n m = 1\n γ = 1\nend","category":"page"},{"location":"how-to/obc/#Working-with-the-model","page":"Occasionally binding constraints","title":"Working with the model","text":"","category":"section"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"For the non stochastic steady state (NSSS) to exist the constraint has to be binding (B[0] = m * Y[0]). This implies a wedge in the Euler equation (λ > 0).","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"We can check this by getting the NSSS:","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"SS(borrowing_constraint)","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"A common task is to plot impulse response functions for positive and negative shocks. 
This should allow us to understand the role of the constraint.","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"First, we need to import the StatsPlots package and then we can plot the positive shock.","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"import StatsPlots\nplot_irf(borrowing_constraint)","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"(Image: Positive_shock)","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"We can see that the constraint is no longer binding in the first five periods because Y and B do not increase by the same amount. They should move by the same amount in the case of a negative shock:","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"import StatsPlots\nplot_irf(borrowing_constraint, negative_shock = true)","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"(Image: Negative_shock)","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"and indeed in this case they move by the same amount. The difference between a positive and negative shock demonstrates the influence of the occasionally binding constraint.","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"Another common exercise is to plot the impulse response functions from a series of shocks. Let's assume in period 10 there is a positive shocks and in period 30 a negative one. Let's view the results for 50 more periods. We can do this as follows:","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"shcks = zeros(1,30)\nshcks[10] = .6\nshcks[30] = -.6\n\nsks = KeyedArray(shcks; Shocks = [:ε], Periods = 1:30)\n\nplot_irf(borrowing_constraint, shocks = sks, periods = 50)","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"(Image: Simulation)","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"In this case the difference between the shocks and the impact of the constraint become quite obvious. Let's compare this with a version of the model that ignores the occasionally binding constraint. In order to plot the impulse response functions without dynamically enforcing the constraint we can simply write:","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"plot_irf(borrowing_constraint, shocks = sks, periods = 50, ignore_obc = true)","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"(Image: Simulation)","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"Another interesting statistic is model moments. 
As there are no theoretical moments we have to rely on simulated data:","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"sims = get_irf(borrowing_constraint, periods = 1000, shocks = :simulate, levels = true)","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"Let's look at the mean and standard deviation of borrowing:","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"import Statistics\nStatistics.mean(sims(:B,:,:))","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"and","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"Statistics.std(sims(:B,:,:))","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"Compare this to the theoretical mean of the model without the occasionally binding constraint:","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"get_mean(borrowing_constraint)","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"and the theoretical standard deviation:","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"get_std(borrowing_constraint)","category":"page"},{"location":"how-to/obc/","page":"Occasionally binding constraints","title":"Occasionally binding constraints","text":"The mean of borrowing is lower in the model with occasionally binding constraints compared to the model without and the standard deviation is higher.","category":"page"},{"location":"unfinished_docs/how_to/#Use-calibration-equations","page":"-","title":"Use calibration equations","text":"","category":"section"},{"location":"unfinished_docs/how_to/","page":"-","title":"-","text":"Next we need to add the parameters of the model. The macro @parameters takes care of this:","category":"page"},{"location":"unfinished_docs/how_to/","page":"-","title":"-","text":"@parameters RBC begin\n std_z = 0.01\n ρ = 0.2\n δ = 0.02\n α = 0.5\n β = 0.95\nend","category":"page"},{"location":"unfinished_docs/how_to/","page":"-","title":"-","text":"No need for line endings. If you want to define a parameter as a function of another parameter you can do this:","category":"page"},{"location":"unfinished_docs/how_to/","page":"-","title":"-","text":"@parameters RBC begin\n std_z = 0.01\n ρ = 0.2\n δ = 0.02\n α = 0.5\n beta1 = 1\n beta2 = .95\n β | β = beta2/beta1\nend","category":"page"},{"location":"unfinished_docs/how_to/","page":"-","title":"-","text":"Note that the parser takes parameters assigned to a numerical value first and then solves for the parameters defined by relationships: β | .... 
This means also the following will work:","category":"page"},{"location":"unfinished_docs/how_to/","page":"-","title":"-","text":"@parameters RBC begin\n β | β = beta2/beta1\n std_z = 0.01\n ρ = 0.2\n δ = 0.02\n α = 0.5\n beta1 = 1\n beta2 = .95\nend","category":"page"},{"location":"unfinished_docs/how_to/","page":"-","title":"-","text":"More interestingly one can use (non-stochastic) steady state values in the relationships:","category":"page"},{"location":"unfinished_docs/how_to/","page":"-","title":"-","text":"@parameters RBC begin\n β = .95\n std_z = 0.01\n ρ = 0.2\n δ = 0.02\n α | k[ss] / (4 * q[ss]) = 1.5\nend","category":"page"},{"location":"unfinished_docs/how_to/#Higher-order-perturbation-solutions","page":"-","title":"Higher order perturbation solutions","text":"","category":"section"},{"location":"unfinished_docs/how_to/#How-to-estimate-a-model","page":"-","title":"How to estimate a model","text":"","category":"section"},{"location":"unfinished_docs/how_to/#Interactive-plotting","page":"-","title":"Interactive plotting","text":"","category":"section"},{"location":"unfinished_docs/dsl/#DSL","page":"-","title":"DSL","text":"","category":"section"},{"location":"unfinished_docs/dsl/","page":"-","title":"-","text":"MacroModelling parses models written using a user-friendly syntax:","category":"page"},{"location":"unfinished_docs/dsl/","page":"-","title":"-","text":"@model RBC begin\n 1 / c[0] = (β / c[1]) * (α * exp(z[1]) * k[0]^(α - 1) + (1 - δ))\n c[0] + k[0] = (1 - δ) * k[-1] + q[0]\n q[0] = exp(z[0]) * k[-1]^α\n z[0] = ρ * z[-1] + std_z * eps_z[x]\nend","category":"page"},{"location":"unfinished_docs/dsl/","page":"-","title":"-","text":"The most important rule is that variables are followed by the timing in squared brackets for endogenous variables, e.g. Y[0], exogenous variables are marked by certain keywords (see below), e.g. ϵ[x], and parameters need no further syntax, e.g. α.","category":"page"},{"location":"unfinished_docs/dsl/","page":"-","title":"-","text":"A model written with this syntax allows the parser to identify, endogenous and exogenous variables and their timing as well as parameters.","category":"page"},{"location":"unfinished_docs/dsl/","page":"-","title":"-","text":"Note that variables in the present (period t or 0) have to be denoted as such: [0]. The parser also takes care of creating auxilliary variables in case the model contains leads or lags of the variables larger than 1:","category":"page"},{"location":"unfinished_docs/dsl/","page":"-","title":"-","text":"@model RBC_lead_lag begin\n 1 / c[0] = (β / c[1]) * (α * exp(z[1]) * k[0]^(α - 1) + (1 - δ))\n c[0] + k[0] = (1 - δ) * k[-1] + q[0]\n q[0] = exp(z[0]) * k[-1]^α\n z[0] = ρ * z[-1] + std_z * (eps_z[x-8] + eps_z[x-4] + eps_z[x+4] + eps_z_s[x])\n c̄⁻[0] = (c[0] + c[-1] + c[-2] + c[-3]) / 4\n c̄⁺[0] = (c[0] + c[1] + c[2] + c[3]) / 4\nend","category":"page"},{"location":"unfinished_docs/dsl/","page":"-","title":"-","text":"The parser recognises a variable as exogenous if the timing bracket contains one of the keyword/letters (case insensitive): x, ex, exo, exogenous. ","category":"page"},{"location":"unfinished_docs/dsl/","page":"-","title":"-","text":"Valid declarations of exogenous variables: ϵ[x], ϵ[Exo], ϵ[exOgenous]. 
","category":"page"},{"location":"unfinished_docs/dsl/","page":"-","title":"-","text":"Invalid declarations: ϵ[xo], ϵ[exogenously], ϵ[main shock x]","category":"page"},{"location":"unfinished_docs/dsl/","page":"-","title":"-","text":"Endogenous and exogenous variables can be in lead or lag, e.g.: the following describe a lead of 1 period: Y[1], Y[+1], Y[+ 1], eps[x+1], eps[Exo + 1] and the same goes for lags and periods > 1: `k[-2], c[+12], eps[x-4]","category":"page"},{"location":"unfinished_docs/dsl/","page":"-","title":"-","text":"Invalid declarations: Y[t-1], Y[t], Y[whatever], eps[x+t+1]","category":"page"},{"location":"unfinished_docs/dsl/","page":"-","title":"-","text":"Equations must be within one line and the = sign is optional.","category":"page"},{"location":"unfinished_docs/dsl/","page":"-","title":"-","text":"The parser recognises all functions in julia including those from StatsFuns.jl. Note that the syntax for distributions is the same as in MATLAB, e.g. normcdf. For those familiar with R the following also work: pnorm, dnorm, qnorm, and it also recognises: norminvcdf and norminv.","category":"page"},{"location":"unfinished_docs/dsl/","page":"-","title":"-","text":"Given these rules it is straightforward to write down a model. Once declared using the @model macro, the package creates an object containing all necessary information regarding the equations of the model.","category":"page"},{"location":"unfinished_docs/dsl/#Lead-/-lags-and-auxilliary-variables","page":"-","title":"Lead / lags and auxilliary variables","text":"","category":"section"},{"location":"tutorials/calibration/#Calibration-/-method-of-moments-Gali-(2015)","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments - Gali (2015)","text":"","category":"section"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"This tutorial is intended to show the workflow to calibrate a model using the method of moments. The tutorial is based on a standard model of monetary policy and will showcase the the use of gradient based optimisers and 2nd and 3rd order pruned solutions.","category":"page"},{"location":"tutorials/calibration/#Define-the-model","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Define the model","text":"","category":"section"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"The first step is always to name the model and write down the equations. 
For the Galı́ (2015), Chapter 3 this would go as follows:","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"ENV[\"GKSwstype\"] = \"100\"\nusing Random\nRandom.seed!(30)","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"using MacroModelling\n\n@model Gali_2015 begin\n W_real[0] = C[0] ^ σ * N[0] ^ φ\n\n Q[0] = β * (C[1] / C[0]) ^ (-σ) * Z[1] / Z[0] / Pi[1]\n\n R[0] = 1 / Q[0]\n\n Y[0] = A[0] * (N[0] / S[0]) ^ (1 - α)\n\n R[0] = Pi[1] * realinterest[0]\n\n R[0] = 1 / β * Pi[0] ^ ϕᵖⁱ * (Y[0] / Y[ss]) ^ ϕʸ * exp(nu[0])\n\n C[0] = Y[0]\n\n log(A[0]) = ρ_a * log(A[-1]) + std_a * eps_a[x]\n\n log(Z[0]) = ρ_z * log(Z[-1]) - std_z * eps_z[x]\n\n nu[0] = ρ_ν * nu[-1] + std_nu * eps_nu[x]\n\n MC[0] = W_real[0] / (S[0] * Y[0] * (1 - α) / N[0])\n\n 1 = θ * Pi[0] ^ (ϵ - 1) + (1 - θ) * Pi_star[0] ^ (1 - ϵ)\n\n S[0] = (1 - θ) * Pi_star[0] ^ (( - ϵ) / (1 - α)) + θ * Pi[0] ^ (ϵ / (1 - α)) * S[-1]\n\n Pi_star[0] ^ (1 + ϵ * α / (1 - α)) = ϵ * x_aux_1[0] / x_aux_2[0] * (1 - τ) / (ϵ - 1)\n\n x_aux_1[0] = MC[0] * Y[0] * Z[0] * C[0] ^ (-σ) + β * θ * Pi[1] ^ (ϵ + α * ϵ / (1 - α)) * x_aux_1[1]\n\n x_aux_2[0] = Y[0] * Z[0] * C[0] ^ (-σ) + β * θ * Pi[1] ^ (ϵ - 1) * x_aux_2[1]\n\n log_y[0] = log(Y[0])\n\n log_W_real[0] = log(W_real[0])\n\n log_N[0] = log(N[0])\n\n pi_ann[0] = 4 * log(Pi[0])\n\n i_ann[0] = 4 * log(R[0])\n\n r_real_ann[0] = 4 * log(realinterest[0])\n\n M_real[0] = Y[0] / R[0] ^ η\n\nend","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"First, we load the package and then use the @model macro to define our model. The first argument after @model is the model name and will be the name of the object in the global environment containing all information regarding the model. The second argument to the macro are the equations, which we write down between begin and end. Equations can contain an equality sign or the expression is assumed to equal 0. Equations cannot span multiple lines (unless you wrap the expression in brackets) and the timing of endogenous variables are expressed in the squared brackets following the variable name (e.g. [-1] for the past period). Exogenous variables (shocks) are followed by a keyword in squared brackets indicating them being exogenous (in this case [x]). Note that names can leverage julia's unicode capabilities (e.g. alpha can be written as α).","category":"page"},{"location":"tutorials/calibration/#Define-the-parameters","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Define the parameters","text":"","category":"section"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"Next we need to add the parameters of the model. 
The macro @parameters takes care of this:","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"@parameters Gali_2015 begin\n σ = 1\n\n φ = 5\n\n ϕᵖⁱ = 1.5\n \n ϕʸ = 0.125\n\n θ = 0.75\n\n ρ_ν = 0.5\n\n ρ_z = 0.5\n\n ρ_a = 0.9\n\n β = 0.99\n\n η = 3.77\n\n α = 0.25\n\n ϵ = 9\n\n τ = 0\n\n std_a = .01\n\n std_z = .05\n\n std_nu = .0025\n\nend","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"The block defining the parameters above only describes the simple parameter definitions the same way you assign values (e.g. α = .25).","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"Note that we have to write one parameter definition per line.","category":"page"},{"location":"tutorials/calibration/#Linear-solution","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Linear solution","text":"","category":"section"},{"location":"tutorials/calibration/#Inspect-model-moments","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Inspect model moments","text":"","category":"section"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"Given the equations and parameters, we have everything to we need for the package to generate the theoretical model moments. 
You can retrieve the mean of the linearised model as follows:","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"get_mean(Gali_2015)","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"and the standard deviation like this:","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"get_standard_deviation(Gali_2015)","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"You could also simply use: std or get_std to the same effect.","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"Another interesting output is the autocorrelation of the model variables:","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"get_autocorrelation(Gali_2015)","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"or the covariance:","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"get_covariance(Gali_2015)","category":"page"},{"location":"tutorials/calibration/#Parameter-sensitivities","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Parameter sensitivities","text":"","category":"section"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"Before embarking on calibrating the model it is useful to get familiar with the impact of parameter changes on model moments. MacroModelling.jl provides the partial derivatives of the model moments with respect to the model parameters. The model we are working with is of a medium size and by default derivatives are automatically shown as long as the calculation does not take too long (too many derivatives need to be taken). 
In this case they are not shown but it is possible to show them by explicitly defining the parameter for which to take the partial derivatives for:","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"get_mean(Gali_2015, parameter_derivatives = :σ)","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"or for multiple parameters:","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"get_mean(Gali_2015, parameter_derivatives = [:σ, :α, :β, :ϕᵖⁱ, :φ])","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"We can do the same for standard deviation or variance, and all parameters:","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"get_std(Gali_2015, parameter_derivatives = get_parameters(Gali_2015))","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"get_variance(Gali_2015, parameter_derivatives = get_parameters(Gali_2015))","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"You can use this information to calibrate certain values to your targets. For example, let's say we want to have higher real wages (:W_real), and lower inflation volatility. 
Since there are too many variables and parameters for them to be shown here, let's print only a subset of them:","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"get_mean(Gali_2015, parameter_derivatives = [:σ, :std_a, :α], variables = [:W_real,:Pi])","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"get_std(Gali_2015, parameter_derivatives = [:σ, :std_a, :α], variables = [:W_real,:Pi])","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"Looking at the sensitivity table we see that lowering the production function parameter :α will increase real wages, but at the same time it will increase inflation volatility. We could compensate that effect by decreasing the standard deviation of the total factor productivity shock :std_a.","category":"page"},{"location":"tutorials/calibration/#Method-of-moments","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Method of moments","text":"","category":"section"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"Instead of doing this by hand we can also set a target and have an optimiser find the corresponding parameter values. In order to do that we need to define targets, and set up an optimisation problem.","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"Our targets are:","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"Mean of W_real = 0.7\nStandard deviation of Pi = 0.01","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"For the optimisation problem we use the L-BFGS algorithm implemented in Optim.jl. This optimisation algorithm is very efficient and gradient based. Note that all model outputs are differentiable with respect to the parameters using automatic and implicit differentiation.","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"The package provides functions specialised for the use with gradient based code (e.g. gradient-based optimisers or samplers). 
For model statistics we can use get_statistics to get the mean of real wages and the standard deviation of inflation like this:","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"get_statistics(Gali_2015, Gali_2015.parameter_values, parameters = Gali_2015.parameters, mean = [:W_real], standard_deviation = [:Pi])","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"First we pass on the model object, followed by the parameter values and the parameter names the values correspond to. Then we define the outputs we want: for the mean we want real wages and for the standard deviation we want inflation. We can also get outputs for variance, covariance, or autocorrelation the same way as for the mean and standard deviation.","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"Next, let's define a function measuring how close we are to our target for given values of :α and :std_a:","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"function distance_to_target(parameter_value_inputs)\n model_statistics = get_statistics(Gali_2015, parameter_value_inputs, parameters = [:α, :std_a], mean = [:W_real], standard_deviation = [:Pi])\n targets = [0.7, 0.01]\n return sum(abs2, vcat(model_statistics...) - targets)\nend","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"Now let's test the function with the current parameter values. 
In case we forgot the parameter values we can also look them up like this:","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"get_parameters(Gali_2015, values = true)","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"with this we can test the distance function:","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"distance_to_target([0.25, 0.01])","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"Next we can pass it on to an optimiser and find the parameters corresponding to the best fit like this:","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"using Optim, LineSearches\nsol = Optim.optimize(distance_to_target,\n [0,0], \n [1,1], \n [0.25, 0.01], \n Optim.Fminbox(Optim.LBFGS(linesearch = LineSearches.BackTracking(order = 3))))","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"The first argument to the optimisation call is the function we defined previously, followed by lower and upper bounds, the starting values, and finally the algorithm. 
For the algorithm we have to add Fminbox because we have bounds (optional) and we set the specific line search method to speed up convergence (recommended but optional).","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"The output shows that we could almost perfectly match the target and the values of the parameters found by the optimiser are:","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"sol.minimizer","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"slightly lower for both parameters (in line with what we understood from the sensitivities).","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"You can combine the method of moments with estimation by simply adding the distance to the target to the posterior loglikelihood.","category":"page"},{"location":"tutorials/calibration/#Nonlinear-solutions","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Nonlinear solutions","text":"","category":"section"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"So far we used the linearised solution of the model. The package also provides nonlinear solutions and can calculate the theoretical model moments for pruned second and third order perturbation solutions. This can be of interest because nonlinear solutions capture volatility effects (at second order) and asymmetries (at third order). Furthermore, the moments of the data are often non-gaussian while linear solutions with gaussian noise can only generate gaussian distributions of model variables. 
Nonetheless, pruned second order solutions already produce non-gaussian skewness and kurtosis with gaussian noise.","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"From a user perspective little changes other than specifying that the solution algorithm is :pruned_second_order or :pruned_third_order.","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"For example we can get the mean for the pruned second order solution:","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"get_mean(Gali_2015, parameter_derivatives = [:σ, :std_a, :α], variables = [:W_real,:Pi], algorithm = :pruned_second_order)","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"Note that the mean of real wages is lower, while inflation is higher. We can see the effect of volatility with the partial derivatives for the shock standard deviations being non-zero. Larger shock sizes drive down the mean of real wages while they increase inflation.","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"The mean of the variables does not change if we use pruned third order perturbation by construction but the standard deviation does. 
Let's look at the standard deviations for the pruned second order solution first:","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"get_std(Gali_2015, parameter_derivatives = [:σ, :std_a, :α], variables = [:W_real,:Pi], algorithm = :pruned_second_order)","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"for both inflation and real wages the volatility is higher and the standard deviation of the total factor productivity shock std_a has a much larger impact on the standard deviation of real wages compared to the linear solution.","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"At third order we get the following results:","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"get_std(Gali_2015, parameter_derivatives = [:σ, :std_a, :α], variables = [:W_real,:Pi], algorithm = :pruned_third_order)","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"the standard deviation of inflation is more than twice as high and for real wages it is also substantially higher. 
Furthermore, standard deviations of shocks matter even more for the volatility of the endogenous variables.","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"These results make it clear that capturing the nonlinear interactions by using nonlinear solutions has important implications for the model moments and by extension the model dynamics.","category":"page"},{"location":"tutorials/calibration/#Method-of-moments-for-nonlinear-solutions","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Method of moments for nonlinear solutions","text":"","category":"section"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"Matching the theoretical moments of the nonlinear model solution to the data is no more complicated for the user than in the linear solution case (see above).","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"We need to define the target value and function and let an optimiser find the parameters minimising the distance to the target.","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"Keeping the targets:","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"Mean of W_real = 0.7\nStandard deviation of Pi = 0.01","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"we need to define the target function and specify that we use a nonlinear solution algorithm (e.g. pruned third order):","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"function distance_to_target(parameter_value_inputs)\n model_statistics = get_statistics(Gali_2015, parameter_value_inputs, algorithm = :pruned_third_order, parameters = [:α, :std_a], mean = [:W_real], standard_deviation = [:Pi])\n targets = [0.7, 0.01]\n return sum(abs2, vcat(model_statistics...) 
- targets)\nend","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"and then we can use the same code to optimise as in the linear solution case:","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"sol = Optim.optimize(distance_to_target,\n [0,0], \n [1,1], \n [0.25, 0.01], \n Optim.Fminbox(Optim.LBFGS(linesearch = LineSearches.BackTracking(order = 3))))","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"the calculations take substantially longer and we don't get as close to our target as for the linear solution case. The parameter values minimising the distance are:","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"sol.minimizer","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"lower than for the linear solution case and the theoretical moments given these parameters are:","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"get_statistics(Gali_2015, sol.minimizer, algorithm = :pruned_third_order, parameters = [:α, :std_a], mean = [:W_real], standard_deviation = [:Pi])","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"The solution does not match the standard deviation of inflation very well.","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"Potentially the partial derivatives change a lot for small changes in parameters: even though the partial derivative of the standard deviation of inflation with respect to std_a was large, it might be small for the values returned by the optimisation. 
We can check this with:","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"get_std(Gali_2015, parameter_derivatives = [:σ, :std_a, :α], variables = [:W_real,:Pi], algorithm = :pruned_third_order, parameters = [:α, :std_a] .=> sol.minimizer)","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"and indeed it seems also the second derivative is large since the first derivative changed significantly.","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"Another parameter we can try is σ. It has a positive impact on the mean of real wages and a negative impact on standard deviation of inflation.","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"We need to redefine our target function and optimise it. Note that the previous call made a permanent change of parameters (as do all calls where parameters are explicitly set) and now std_a is set to 2.91e-9 and no longer 0.01.","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"function distance_to_target(parameter_value_inputs)\n model_statistics = get_statistics(Gali_2015, parameter_value_inputs, algorithm = :pruned_third_order, parameters = [:α, :σ], mean = [:W_real], standard_deviation = [:Pi])\n targets = [0.7, 0.01]\n return sum(abs2, vcat(model_statistics...) - targets)\nend\n\nsol = Optim.optimize(distance_to_target,\n [0,0], \n [1,3], \n [0.25, 1], \n Optim.Fminbox(Optim.LBFGS(linesearch = LineSearches.BackTracking(order = 3))))\n\nsol.minimizer","category":"page"},{"location":"tutorials/calibration/","page":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","title":"Calibration / method of moments (for higher order perturbation solutions) - Gali (2015)","text":"Given the new value for std_a and optimising over σ allows us to match the target exactly.","category":"page"},{"location":"tutorials/estimation/#Estimate-a-simple-model-Schorfheide-(2000)","page":"Estimate a model using gradient based samplers - Schorfheide (2000)","title":"Estimate a simple model - Schorfheide (2000)","text":"","category":"section"},{"location":"tutorials/estimation/","page":"Estimate a model using gradient based samplers - Schorfheide (2000)","title":"Estimate a model using gradient based samplers - Schorfheide (2000)","text":"This tutorial is intended to show the workflow to estimate a model using the No-U-Turn sampler (NUTS). The tutorial works with a benchmark model for estimation and can therefore be compared to results from other software packages (e.g. 
dynare).","category":"page"},{"location":"tutorials/estimation/#Define-the-model","page":"Estimate a model using gradient based samplers - Schorfheide (2000)","title":"Define the model","text":"","category":"section"},{"location":"tutorials/estimation/","page":"Estimate a model using gradient based samplers - Schorfheide (2000)","title":"Estimate a model using gradient based samplers - Schorfheide (2000)","text":"The first step is always to name the model and write down the equations. For the Schorfheide (2000) model this would go as follows:","category":"page"},{"location":"tutorials/estimation/","page":"Estimate a model using gradient based samplers - Schorfheide (2000)","title":"Estimate a model using gradient based samplers - Schorfheide (2000)","text":"ENV[\"GKSwstype\"] = \"100\"\nusing Random\nRandom.seed!(3)","category":"page"},{"location":"tutorials/estimation/","page":"Estimate a model using gradient based samplers - Schorfheide (2000)","title":"Estimate a model using gradient based samplers - Schorfheide (2000)","text":"using MacroModelling\n\n@model FS2000 begin\n dA[0] = exp(gam + z_e_a * e_a[x])\n\n log(m[0]) = (1 - rho) * log(mst) + rho * log(m[-1]) + z_e_m * e_m[x]\n\n - P[0] / (c[1] * P[1] * m[0]) + bet * P[1] * (alp * exp( - alp * (gam + log(e[1]))) * k[0] ^ (alp - 1) * n[1] ^ (1 - alp) + (1 - del) * exp( - (gam + log(e[1])))) / (c[2] * P[2] * m[1])=0\n\n W[0] = l[0] / n[0]\n\n - (psi / (1 - psi)) * (c[0] * P[0] / (1 - n[0])) + l[0] / n[0] = 0\n\n R[0] = P[0] * (1 - alp) * exp( - alp * (gam + z_e_a * e_a[x])) * k[-1] ^ alp * n[0] ^ ( - alp) / W[0]\n\n 1 / (c[0] * P[0]) - bet * P[0] * (1 - alp) * exp( - alp * (gam + z_e_a * e_a[x])) * k[-1] ^ alp * n[0] ^ (1 - alp) / (m[0] * l[0] * c[1] * P[1]) = 0\n\n c[0] + k[0] = exp( - alp * (gam + z_e_a * e_a[x])) * k[-1] ^ alp * n[0] ^ (1 - alp) + (1 - del) * exp( - (gam + z_e_a * e_a[x])) * k[-1]\n\n P[0] * c[0] = m[0]\n\n m[0] - 1 + d[0] = l[0]\n\n e[0] = exp(z_e_a * e_a[x])\n\n y[0] = k[-1] ^ alp * n[0] ^ (1 - alp) * exp( - alp * (gam + z_e_a * e_a[x]))\n\n gy_obs[0] = dA[0] * y[0] / y[-1]\n\n gp_obs[0] = (P[0] / P[-1]) * m[-1] / dA[0]\n\n log_gy_obs[0] = log(gy_obs[0])\n\n log_gp_obs[0] = log(gp_obs[0])\nend","category":"page"},{"location":"tutorials/estimation/","page":"Estimate a model using gradient based samplers - Schorfheide (2000)","title":"Estimate a model using gradient based samplers - Schorfheide (2000)","text":"First, we load the package and then use the @model macro to define our model. The first argument after @model is the model name and will be the name of the object in the global environment containing all information regarding the model. The second argument to the macro are the equations, which we write down between begin and end. Equations can contain an equality sign or the expression is assumed to equal 0. Equations cannot span multiple lines (unless you wrap the expression in brackets) and the timing of endogenous variables are expressed in the squared brackets following the variable name (e.g. [-1] for the past period). Exogenous variables (shocks) are followed by a keyword in squared brackets indicating them being exogenous (in this case [x]). Note that names can leverage julia's unicode capabilities (e.g. 
alpha can be written as α).","category":"page"},{"location":"tutorials/estimation/#Define-the-parameters","page":"Estimate a model using gradient based samplers - Schorfheide (2000)","title":"Define the parameters","text":"","category":"section"},{"location":"tutorials/estimation/","page":"Estimate a model using gradient based samplers - Schorfheide (2000)","title":"Estimate a model using gradient based samplers - Schorfheide (2000)","text":"Next we need to add the parameters of the model. The macro @parameters takes care of this:","category":"page"},{"location":"tutorials/estimation/","page":"Estimate a model using gradient based samplers - Schorfheide (2000)","title":"Estimate a model using gradient based samplers - Schorfheide (2000)","text":"@parameters FS2000 begin \n alp = 0.356\n bet = 0.993\n gam = 0.0085\n mst = 1.0002\n rho = 0.129\n psi = 0.65\n del = 0.01\n z_e_a = 0.035449\n z_e_m = 0.008862\nend","category":"page"},{"location":"tutorials/estimation/","page":"Estimate a model using gradient based samplers - Schorfheide (2000)","title":"Estimate a model using gradient based samplers - Schorfheide (2000)","text":"The block defining the parameters above only describes the simple parameter definitions the same way you assign values (e.g. alp = .356).","category":"page"},{"location":"tutorials/estimation/","page":"Estimate a model using gradient based samplers - Schorfheide (2000)","title":"Estimate a model using gradient based samplers - Schorfheide (2000)","text":"Note that we have to write one parameter definition per line.","category":"page"},{"location":"tutorials/estimation/#Load-data","page":"Estimate a model using gradient based samplers - Schorfheide (2000)","title":"Load data","text":"","category":"section"},{"location":"tutorials/estimation/","page":"Estimate a model using gradient based samplers - Schorfheide (2000)","title":"Estimate a model using gradient based samplers - Schorfheide (2000)","text":"Given the equations and parameters, we only need the entries in the data which correspond to the observables in the model (need to have the exact same name) to estimate the model. First, we load in the data from a CSV file (using the CSV and DataFrames packages) and convert it to a KeyedArray (using the AxisKeys package). Furthermore, we log transform the data provided in levels, and last but not least we select only those variables in the data which are observables in the model.","category":"page"},{"location":"tutorials/estimation/","page":"Estimate a model using gradient based samplers - Schorfheide (2000)","title":"Estimate a model using gradient based samplers - Schorfheide (2000)","text":"using CSV, DataFrames, AxisKeys\n\n# load data\ndat = CSV.read(\"../assets/FS2000_data.csv\", DataFrame)\ndata = KeyedArray(Array(dat)',Variable = Symbol.(\"log_\".*names(dat)),Time = 1:size(dat)[1])\ndata = log.(data)\n\n# declare observables\nobservables = sort(Symbol.(\"log_\".*names(dat)))\n\n# subset observables in data\ndata = data(observables,:)","category":"page"},{"location":"tutorials/estimation/#Define-bayesian-model","page":"Estimate a model using gradient based samplers - Schorfheide (2000)","title":"Define bayesian model","text":"","category":"section"},{"location":"tutorials/estimation/","page":"Estimate a model using gradient based samplers - Schorfheide (2000)","title":"Estimate a model using gradient based samplers - Schorfheide (2000)","text":"Next we define the parameter priors using the Turing package. 
The @model macro of the Turing package allows us to define the prior distributions over the parameters and combine them with the (Kalman filter) loglikelihood of the model and parameters given the data with the help of the get_loglikelihood function. We define the prior distributions in an array and pass it on to the arraydist function inside the @model macro from the Turing package. It is also possible to define the prior distributions inside the macro but especially for reverse mode auto differentiation the arraydist function is substantially faster. When defining the prior distributions we can rely on the distributions implemented in the Distributions package. Note that the μσ parameter allows us to hand over the moments (μ and σ) of the distribution as parameters in case of the non-normal distributions (Gamma, Beta, InverseGamma), and we can also define upper and lower bounds truncating the distribution as third and fourth arguments to the distribution functions. Last but not least, we define the loglikelihood and add it to the posterior loglikelihood with the help of the @addlogprob! macro.","category":"page"},{"location":"tutorials/estimation/","page":"Estimate a model using gradient based samplers - Schorfheide (2000)","title":"Estimate a model using gradient based samplers - Schorfheide (2000)","text":"import Turing\nimport Turing: NUTS, sample, logpdf\n\nprior_distributions = [\n Beta(0.356, 0.02, μσ = true), # alp\n Beta(0.993, 0.002, μσ = true), # bet\n Normal(0.0085, 0.003), # gam\n Normal(1.0002, 0.007), # mst\n Beta(0.129, 0.223, μσ = true), # rho\n Beta(0.65, 0.05, μσ = true), # psi\n Beta(0.01, 0.005, μσ = true), # del\n InverseGamma(0.035449, Inf, μσ = true), # z_e_a\n InverseGamma(0.008862, Inf, μσ = true) # z_e_m\n]\n\nTuring.@model function FS2000_loglikelihood_function(data, model)\n parameters ~ Turing.arraydist(prior_distributions)\n\n Turing.@addlogprob! get_loglikelihood(model, data, parameters)\nend","category":"page"},{"location":"tutorials/estimation/#Sample-from-posterior:-No-U-Turn-Sampler-(NUTS)","page":"Estimate a model using gradient based samplers - Schorfheide (2000)","title":"Sample from posterior: No-U-Turn Sampler (NUTS)","text":"","category":"section"},{"location":"tutorials/estimation/","page":"Estimate a model using gradient based samplers - Schorfheide (2000)","title":"Estimate a model using gradient based samplers - Schorfheide (2000)","text":"We use the NUTS sampler to retrieve the posterior distribution of the parameters. This sampler uses the gradient of the posterior loglikelihood with respect to the model parameters to navigate the parameter space. The NUTS sampler is considered robust, fast, and user-friendly (auto-tuning of hyper-parameters).","category":"page"},{"location":"tutorials/estimation/","page":"Estimate a model using gradient based samplers - Schorfheide (2000)","title":"Estimate a model using gradient based samplers - Schorfheide (2000)","text":"First we define the loglikelihood model with the specific data and model. 
Next, we draw 1000 samples from the model:","category":"page"},{"location":"tutorials/estimation/","page":"Estimate a model using gradient based samplers - Schorfheide (2000)","title":"Estimate a model using gradient based samplers - Schorfheide (2000)","text":"FS2000_loglikelihood = FS2000_loglikelihood_function(data, FS2000);\n\nn_samples = 1000\n\nchain_NUTS = sample(FS2000_loglikelihood, NUTS(), n_samples, progress = false);","category":"page"},{"location":"tutorials/estimation/#Inspect-posterior","page":"Estimate a model using gradient based samplers - Schorfheide (2000)","title":"Inspect posterior","text":"","category":"section"},{"location":"tutorials/estimation/","page":"Estimate a model using gradient based samplers - Schorfheide (2000)","title":"Estimate a model using gradient based samplers - Schorfheide (2000)","text":"In order to understand the posterior distribution and the sequence of samples we plot them:","category":"page"},{"location":"tutorials/estimation/","page":"Estimate a model using gradient based samplers - Schorfheide (2000)","title":"Estimate a model using gradient based samplers - Schorfheide (2000)","text":"using StatsPlots\nStatsPlots.plot(chain_NUTS);","category":"page"},{"location":"tutorials/estimation/","page":"Estimate a model using gradient based samplers - Schorfheide (2000)","title":"Estimate a model using gradient based samplers - Schorfheide (2000)","text":"(Image: NUTS chain)","category":"page"},{"location":"tutorials/estimation/","page":"Estimate a model using gradient based samplers - Schorfheide (2000)","title":"Estimate a model using gradient based samplers - Schorfheide (2000)","text":"Next, we plot the posterior loglikelihood along two parameter dimensions, with the other parameters kept at the posterior mean, and add the samples to the visualisation. 
This visualisation allows us to understand the curvature of the posterior and puts the samples in context.","category":"page"},{"location":"tutorials/estimation/","page":"Estimate a model using gradient based samplers - Schorfheide (2000)","title":"Estimate a model using gradient based samplers - Schorfheide (2000)","text":"using ComponentArrays, MCMCChains, DynamicPPL\n\nparameter_mean = mean(chain_NUTS)\n\npars = ComponentArray([parameter_mean.nt[2]], Axis(:parameters));\n\nlogjoint(FS2000_loglikelihood, pars)\n\nfunction calculate_log_probability(par1, par2, pars_syms, orig_pars, model)\n orig_pars[1][pars_syms] = [par1, par2]\n logjoint(model, orig_pars)\nend\n\ngranularity = 32;\n\npar1 = :del;\npar2 = :gam;\n\nparidx1 = indexin([par1], FS2000.parameters)[1];\nparidx2 = indexin([par2], FS2000.parameters)[1];\n\npar_range1 = collect(range(minimum(chain_NUTS[Symbol(\"parameters[$paridx1]\")]), stop = maximum(chain_NUTS[Symbol(\"parameters[$paridx1]\")]), length = granularity));\npar_range2 = collect(range(minimum(chain_NUTS[Symbol(\"parameters[$paridx2]\")]), stop = maximum(chain_NUTS[Symbol(\"parameters[$paridx2]\")]), length = granularity));\n\np = surface(par_range1, par_range2, \n (x,y) -> calculate_log_probability(x, y, [paridx1, paridx2], pars, FS2000_loglikelihood),\n camera=(30, 65),\n colorbar=false,\n color=:inferno);\n\njoint_loglikelihood = [logjoint(FS2000_loglikelihood, ComponentArray([reduce(hcat, get(chain_NUTS, :parameters)[1])[s,:]], Axis(:parameters))) for s in 1:length(chain_NUTS)];\n\nscatter3d!(vec(collect(chain_NUTS[Symbol(\"parameters[$paridx1]\")])),\n vec(collect(chain_NUTS[Symbol(\"parameters[$paridx2]\")])),\n joint_loglikelihood,\n mc = :viridis, \n marker_z = collect(1:length(chain_NUTS)), \n msw = 0,\n legend = false, \n colorbar = false, \n xlabel = string(par1),\n ylabel = string(par2),\n zlabel = \"Log probability\",\n alpha = 0.5);\n\np","category":"page"},{"location":"tutorials/estimation/","page":"Estimate a model using gradient based samplers - Schorfheide (2000)","title":"Estimate a model using gradient based samplers - Schorfheide (2000)","text":"(Image: Posterior surface)","category":"page"},{"location":"tutorials/estimation/#Find-posterior-mode","page":"Estimate a model using gradient based samplers - Schorfheide (2000)","title":"Find posterior mode","text":"","category":"section"},{"location":"tutorials/estimation/","page":"Estimate a model using gradient based samplers - Schorfheide (2000)","title":"Estimate a model using gradient based samplers - Schorfheide (2000)","text":"Other than the mean and median of the posterior distribution we can also calculate the mode. 
To this end we will use L-BFGS optimisation routines from the Optim package.","category":"page"},{"location":"tutorials/estimation/","page":"Estimate a model using gradient based samplers - Schorfheide (2000)","title":"Estimate a model using gradient based samplers - Schorfheide (2000)","text":"First, we define the posterior loglikelihood function, similar to how we defined it for the Turing model macro.","category":"page"},{"location":"tutorials/estimation/","page":"Estimate a model using gradient based samplers - Schorfheide (2000)","title":"Estimate a model using gradient based samplers - Schorfheide (2000)","text":"function calculate_posterior_loglikelihood(parameters, prior_distributions)\n alp, bet, gam, mst, rho, psi, del, z_e_a, z_e_m = parameters\n log_lik = 0\n log_lik -= get_loglikelihood(FS2000, data, parameters)\n\n for (dist, val) in zip(prior_distributions, parameters)\n log_lik -= logpdf(dist, val)\n end\n\n return log_lik\nend","category":"page"},{"location":"tutorials/estimation/","page":"Estimate a model using gradient based samplers - Schorfheide (2000)","title":"Estimate a model using gradient based samplers - Schorfheide (2000)","text":"Next, we set up the optimisation problem, parameter bounds, and use the optimizer L-BFGS.","category":"page"},{"location":"tutorials/estimation/","page":"Estimate a model using gradient based samplers - Schorfheide (2000)","title":"Estimate a model using gradient based samplers - Schorfheide (2000)","text":"using Optim, LineSearches\n\nlbs = [0,0,-10,-10,0,0,0,0,0];\nubs = [1,1,10,10,1,1,1,100,100];\n\nsol = optimize(x -> calculate_posterior_loglikelihood(x, prior_distributions), lbs, ubs, FS2000.parameter_values, Fminbox(LBFGS(linesearch = LineSearches.BackTracking(order = 3))); autodiff = :forward)\n\nsol.minimum","category":"page"},{"location":"tutorials/estimation/#Model-estimates-given-the-data-and-the-model-solution","page":"Estimate a model using gradient based samplers - Schorfheide (2000)","title":"Model estimates given the data and the model solution","text":"","category":"section"},{"location":"tutorials/estimation/","page":"Estimate a model using gradient based samplers - Schorfheide (2000)","title":"Estimate a model using gradient based samplers - Schorfheide (2000)","text":"Having found the parameters at the posterior mode we can retrieve model estimates of the shocks which explain the data used to estimate it. This can be done with the get_estimated_shocks function:","category":"page"},{"location":"tutorials/estimation/","page":"Estimate a model using gradient based samplers - Schorfheide (2000)","title":"Estimate a model using gradient based samplers - Schorfheide (2000)","text":"get_estimated_shocks(FS2000, data, parameters = sol.minimizer)","category":"page"},{"location":"tutorials/estimation/","page":"Estimate a model using gradient based samplers - Schorfheide (2000)","title":"Estimate a model using gradient based samplers - Schorfheide (2000)","text":"As the first argument we pass the model, followed by the data (in levels), and then we pass the parameters at the posterior mode. The model is solved with this parameterisation and the shocks are calculated using the Kalman smoother.","category":"page"},{"location":"tutorials/estimation/","page":"Estimate a model using gradient based samplers - Schorfheide (2000)","title":"Estimate a model using gradient based samplers - Schorfheide (2000)","text":"We estimated the model on two variables but our model allows us to look at all variables given the data. 
Looking at the estimated variables can be done using the get_estimated_variables function:","category":"page"},{"location":"tutorials/estimation/","page":"Estimate a model using gradient based samplers - Schorfheide (2000)","title":"Estimate a model using gradient based samplers - Schorfheide (2000)","text":"get_estimated_variables(FS2000, data)","category":"page"},{"location":"tutorials/estimation/","page":"Estimate a model using gradient based samplers - Schorfheide (2000)","title":"Estimate a model using gradient based samplers - Schorfheide (2000)","text":"Since we already solved the model with the parameters at the posterior mode we do not need to do so again. The function returns a KeyedArray with the values of the variables in levels at each point in time.","category":"page"},{"location":"tutorials/estimation/","page":"Estimate a model using gradient based samplers - Schorfheide (2000)","title":"Estimate a model using gradient based samplers - Schorfheide (2000)","text":"Another useful tool is a historical shock decomposition. It allows us to understand the contribution of the shocks for each variable. This can be done using the get_shock_decomposition function:","category":"page"},{"location":"tutorials/estimation/","page":"Estimate a model using gradient based samplers - Schorfheide (2000)","title":"Estimate a model using gradient based samplers - Schorfheide (2000)","text":"get_shock_decomposition(FS2000, data)","category":"page"},{"location":"tutorials/estimation/","page":"Estimate a model using gradient based samplers - Schorfheide (2000)","title":"Estimate a model using gradient based samplers - Schorfheide (2000)","text":"We get a 3-dimensional array with variables, shocks, and time periods as dimensions. The shocks dimension also includes the initial value as a residual between the actual value and what was explained by the shocks. This computation also relies on the Kalman smoother.","category":"page"},{"location":"tutorials/estimation/","page":"Estimate a model using gradient based samplers - Schorfheide (2000)","title":"Estimate a model using gradient based samplers - Schorfheide (2000)","text":"Last but not least, we can also plot the model estimates and the shock decomposition. 
The model estimates plot, using plot_model_estimates:","category":"page"},{"location":"tutorials/estimation/","page":"Estimate a model using gradient based samplers - Schorfheide (2000)","title":"Estimate a model using gradient based samplers - Schorfheide (2000)","text":"plot_model_estimates(FS2000, data)","category":"page"},{"location":"tutorials/estimation/","page":"Estimate a model using gradient based samplers - Schorfheide (2000)","title":"Estimate a model using gradient based samplers - Schorfheide (2000)","text":"(Image: Model estimates)","category":"page"},{"location":"tutorials/estimation/","page":"Estimate a model using gradient based samplers - Schorfheide (2000)","title":"Estimate a model using gradient based samplers - Schorfheide (2000)","text":"shows the variables of the model (blue), the estimated shocks (in the last panel), and the data (red) used to estimate the model.","category":"page"},{"location":"tutorials/estimation/","page":"Estimate a model using gradient based samplers - Schorfheide (2000)","title":"Estimate a model using gradient based samplers - Schorfheide (2000)","text":"The shock decomposition can be plotted using plot_shock_decomposition:","category":"page"},{"location":"tutorials/estimation/","page":"Estimate a model using gradient based samplers - Schorfheide (2000)","title":"Estimate a model using gradient based samplers - Schorfheide (2000)","text":"plot_shock_decomposition(FS2000, data)","category":"page"},{"location":"tutorials/estimation/","page":"Estimate a model using gradient based samplers - Schorfheide (2000)","title":"Estimate a model using gradient based samplers - Schorfheide (2000)","text":"(Image: Shock decomposition)","category":"page"},{"location":"tutorials/estimation/","page":"Estimate a model using gradient based samplers - Schorfheide (2000)","title":"Estimate a model using gradient based samplers - Schorfheide (2000)","text":"and it shows the contribution of the shocks and the contribution of the initial value to the deviations of the variables.","category":"page"},{"location":"tutorials/sw03/#Work-with-a-complex-model-Smets-and-Wouters-(2003)","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a complex model - Smets and Wouters (2003)","text":"","category":"section"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"This tutorial is intended to show more advanced features of the package which come into play with more complex models. The tutorial will walk through the same steps as for the simple RBC model but will use the nonlinear Smets and Wouters (2003) model instead. Prior knowledge of DSGE models and their solution in practical terms (e.g. having used a mod file with dynare) is useful in understanding this tutorial.","category":"page"},{"location":"tutorials/sw03/#Define-the-model","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Define the model","text":"","category":"section"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"The first step is always to name the model and write down the equations. 
For the Smets and Wouters (2003) model this would go as follows:","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"ENV[\"GKSwstype\"] = \"100\"","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"using MacroModelling\n@model Smets_Wouters_2003 begin\n -q[0] + beta * ((1 - tau) * q[1] + epsilon_b[1] * (r_k[1] * z[1] - psi^-1 * r_k[ss] * (-1 + exp(psi * (-1 + z[1])))) * (C[1] - h * C[0])^(-sigma_c))\n -q_f[0] + beta * ((1 - tau) * q_f[1] + epsilon_b[1] * (r_k_f[1] * z_f[1] - psi^-1 * r_k_f[ss] * (-1 + exp(psi * (-1 + z_f[1])))) * (C_f[1] - h * C_f[0])^(-sigma_c))\n -r_k[0] + alpha * epsilon_a[0] * mc[0] * L[0]^(1 - alpha) * (K[-1] * z[0])^(-1 + alpha)\n -r_k_f[0] + alpha * epsilon_a[0] * mc_f[0] * L_f[0]^(1 - alpha) * (K_f[-1] * z_f[0])^(-1 + alpha)\n -G[0] + T[0]\n -G[0] + G_bar * epsilon_G[0]\n -G_f[0] + T_f[0]\n -G_f[0] + G_bar * epsilon_G[0]\n -L[0] + nu_w[0]^-1 * L_s[0]\n -L_s_f[0] + L_f[0] * (W_i_f[0] * W_f[0]^-1)^(lambda_w^-1 * (-1 - lambda_w))\n L_s_f[0] - L_f[0]\n L_s_f[0] + lambda_w^-1 * L_f[0] * W_f[0]^-1 * (-1 - lambda_w) * (-W_disutil_f[0] + W_i_f[0]) * (W_i_f[0] * W_f[0]^-1)^(-1 + lambda_w^-1 * (-1 - lambda_w))\n Pi_ws_f[0] - L_s_f[0] * (-W_disutil_f[0] + W_i_f[0])\n Pi_ps_f[0] - Y_f[0] * (-mc_f[0] + P_j_f[0]) * P_j_f[0]^(-lambda_p^-1 * (1 + lambda_p))\n -Q[0] + epsilon_b[0]^-1 * q[0] * (C[0] - h * C[-1])^(sigma_c)\n -Q_f[0] + epsilon_b[0]^-1 * q_f[0] * (C_f[0] - h * C_f[-1])^(sigma_c)\n -W[0] + epsilon_a[0] * mc[0] * (1 - alpha) * L[0]^(-alpha) * (K[-1] * z[0])^alpha\n -W_f[0] + epsilon_a[0] * mc_f[0] * (1 - alpha) * L_f[0]^(-alpha) * (K_f[-1] * z_f[0])^alpha\n -Y_f[0] + Y_s_f[0]\n Y_s[0] - nu_p[0] * Y[0]\n -Y_s_f[0] + Y_f[0] * P_j_f[0]^(-lambda_p^-1 * (1 + lambda_p))\n beta * epsilon_b[1] * (C_f[1] - h * C_f[0])^(-sigma_c) - epsilon_b[0] * R_f[0]^-1 * (C_f[0] - h * C_f[-1])^(-sigma_c)\n beta * epsilon_b[1] * pi[1]^-1 * (C[1] - h * C[0])^(-sigma_c) - epsilon_b[0] * R[0]^-1 * (C[0] - h * C[-1])^(-sigma_c)\n Y_f[0] * P_j_f[0]^(-lambda_p^-1 * (1 + lambda_p)) - lambda_p^-1 * Y_f[0] * (1 + lambda_p) * (-mc_f[0] + P_j_f[0]) * P_j_f[0]^(-1 - lambda_p^-1 * (1 + lambda_p))\n epsilon_b[0] * W_disutil_f[0] * (C_f[0] - h * C_f[-1])^(-sigma_c) - omega * epsilon_b[0] * epsilon_L[0] * L_s_f[0]^sigma_l\n -1 + xi_p * (pi[0]^-1 * pi[-1]^gamma_p)^(-lambda_p^-1) + (1 - xi_p) * pi_star[0]^(-lambda_p^-1)\n -1 + (1 - xi_w) * (w_star[0] * W[0]^-1)^(-lambda_w^-1) + xi_w * (W[-1] * W[0]^-1)^(-lambda_w^-1) * (pi[0]^-1 * pi[-1]^gamma_w)^(-lambda_w^-1)\n -Phi - Y_s[0] + epsilon_a[0] * L[0]^(1 - alpha) * (K[-1] * z[0])^alpha\n -Phi - Y_f[0] * P_j_f[0]^(-lambda_p^-1 * (1 + lambda_p)) + epsilon_a[0] * L_f[0]^(1 - alpha) * (K_f[-1] * z_f[0])^alpha\n std_eta_b * eta_b[x] - log(epsilon_b[0]) + rho_b * log(epsilon_b[-1])\n -std_eta_L * eta_L[x] - log(epsilon_L[0]) + rho_L * log(epsilon_L[-1])\n std_eta_I * eta_I[x] - log(epsilon_I[0]) + rho_I * log(epsilon_I[-1])\n std_eta_w * eta_w[x] - f_1[0] + f_2[0]\n std_eta_a * eta_a[x] - log(epsilon_a[0]) + rho_a * log(epsilon_a[-1])\n std_eta_p * eta_p[x] - g_1[0] + g_2[0] * (1 + lambda_p)\n std_eta_G * eta_G[x] - log(epsilon_G[0]) + rho_G * log(epsilon_G[-1])\n -f_1[0] + beta * xi_w * f_1[1] * (w_star[0]^-1 * w_star[1])^(lambda_w^-1) * (pi[1]^-1 * pi[0]^gamma_w)^(-lambda_w^-1) + epsilon_b[0] 
* w_star[0] * L[0] * (1 + lambda_w)^-1 * (C[0] - h * C[-1])^(-sigma_c) * (w_star[0] * W[0]^-1)^(-lambda_w^-1 * (1 + lambda_w))\n -f_2[0] + beta * xi_w * f_2[1] * (w_star[0]^-1 * w_star[1])^(lambda_w^-1 * (1 + lambda_w) * (1 + sigma_l)) * (pi[1]^-1 * pi[0]^gamma_w)^(-lambda_w^-1 * (1 + lambda_w) * (1 + sigma_l)) + omega * epsilon_b[0] * epsilon_L[0] * (L[0] * (w_star[0] * W[0]^-1)^(-lambda_w^-1 * (1 + lambda_w)))^(1 + sigma_l)\n -g_1[0] + beta * xi_p * pi_star[0] * g_1[1] * pi_star[1]^-1 * (pi[1]^-1 * pi[0]^gamma_p)^(-lambda_p^-1) + epsilon_b[0] * pi_star[0] * Y[0] * (C[0] - h * C[-1])^(-sigma_c)\n -g_2[0] + beta * xi_p * g_2[1] * (pi[1]^-1 * pi[0]^gamma_p)^(-lambda_p^-1 * (1 + lambda_p)) + epsilon_b[0] * mc[0] * Y[0] * (C[0] - h * C[-1])^(-sigma_c)\n -nu_w[0] + (1 - xi_w) * (w_star[0] * W[0]^-1)^(-lambda_w^-1 * (1 + lambda_w)) + xi_w * nu_w[-1] * (W[-1] * pi[0]^-1 * W[0]^-1 * pi[-1]^gamma_w)^(-lambda_w^-1 * (1 + lambda_w))\n -nu_p[0] + (1 - xi_p) * pi_star[0]^(-lambda_p^-1 * (1 + lambda_p)) + xi_p * nu_p[-1] * (pi[0]^-1 * pi[-1]^gamma_p)^(-lambda_p^-1 * (1 + lambda_p))\n -K[0] + K[-1] * (1 - tau) + I[0] * (1 - 0.5 * varphi * (-1 + I[-1]^-1 * epsilon_I[0] * I[0])^2)\n -K_f[0] + K_f[-1] * (1 - tau) + I_f[0] * (1 - 0.5 * varphi * (-1 + I_f[-1]^-1 * epsilon_I[0] * I_f[0])^2)\n U[0] - beta * U[1] - epsilon_b[0] * ((1 - sigma_c)^-1 * (C[0] - h * C[-1])^(1 - sigma_c) - omega * epsilon_L[0] * (1 + sigma_l)^-1 * L_s[0]^(1 + sigma_l))\n U_f[0] - beta * U_f[1] - epsilon_b[0] * ((1 - sigma_c)^-1 * (C_f[0] - h * C_f[-1])^(1 - sigma_c) - omega * epsilon_L[0] * (1 + sigma_l)^-1 * L_s_f[0]^(1 + sigma_l))\n -epsilon_b[0] * (C[0] - h * C[-1])^(-sigma_c) + q[0] * (1 - 0.5 * varphi * (-1 + I[-1]^-1 * epsilon_I[0] * I[0])^2 - varphi * I[-1]^-1 * epsilon_I[0] * I[0] * (-1 + I[-1]^-1 * epsilon_I[0] * I[0])) + beta * varphi * I[0]^-2 * epsilon_I[1] * q[1] * I[1]^2 * (-1 + I[0]^-1 * epsilon_I[1] * I[1])\n -epsilon_b[0] * (C_f[0] - h * C_f[-1])^(-sigma_c) + q_f[0] * (1 - 0.5 * varphi * (-1 + I_f[-1]^-1 * epsilon_I[0] * I_f[0])^2 - varphi * I_f[-1]^-1 * epsilon_I[0] * I_f[0] * (-1 + I_f[-1]^-1 * epsilon_I[0] * I_f[0])) + beta * varphi * I_f[0]^-2 * epsilon_I[1] * q_f[1] * I_f[1]^2 * (-1 + I_f[0]^-1 * epsilon_I[1] * I_f[1])\n std_eta_pi * eta_pi[x] - log(pi_obj[0]) + rho_pi_bar * log(pi_obj[-1]) + log(calibr_pi_obj) * (1 - rho_pi_bar)\n -C[0] - I[0] - T[0] + Y[0] - psi^-1 * r_k[ss] * K[-1] * (-1 + exp(psi * (-1 + z[0])))\n -calibr_pi + std_eta_R * eta_R[x] - log(R[ss]^-1 * R[0]) + r_Delta_pi * (-log(pi[ss]^-1 * pi[-1]) + log(pi[ss]^-1 * pi[0])) + r_Delta_y * (-log(Y[ss]^-1 * Y[-1]) + log(Y[ss]^-1 * Y[0]) + log(Y_f[ss]^-1 * Y_f[-1]) - log(Y_f[ss]^-1 * Y_f[0])) + rho * log(R[ss]^-1 * R[-1]) + (1 - rho) * (log(pi_obj[0]) + r_pi * (-log(pi_obj[0]) + log(pi[ss]^-1 * pi[-1])) + r_Y * (log(Y[ss]^-1 * Y[0]) - log(Y_f[ss]^-1 * Y_f[0])))\n -C_f[0] - I_f[0] + Pi_ws_f[0] - T_f[0] + Y_f[0] + L_s_f[0] * W_disutil_f[0] - L_f[0] * W_f[0] - psi^-1 * r_k_f[ss] * K_f[-1] * (-1 + exp(psi * (-1 + z_f[0])))\n epsilon_b[0] * (K[-1] * r_k[0] - r_k[ss] * K[-1] * exp(psi * (-1 + z[0]))) * (C[0] - h * C[-1])^(-sigma_c)\n epsilon_b[0] * (K_f[-1] * r_k_f[0] - r_k_f[ss] * K_f[-1] * exp(psi * (-1 + z_f[0]))) * (C_f[0] - h * C_f[-1])^(-sigma_c)\nend","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"First, we load the package and then use the @model macro to define our model. 
The first argument after @model is the model name and will be the name of the object in the global environment containing all information regarding the model. The second argument to the macro are the equations, which we write down between begin and end. Equations can contain an equality sign or the expression is assumed to equal 0. Equations cannot span multiple lines (unless you wrap the expression in brackets) and the timing of endogenous variables are expressed in the squared brackets following the variable name (e.g. [-1] for the past period). Exogenous variables (shocks) are followed by a keyword in squared brackets indicating them being exogenous (in this case [x]). In this example there are also variables in the non stochastic steady state denoted by [ss]. Note that names can leverage julia's unicode capabilities (alpha can be written as α).","category":"page"},{"location":"tutorials/sw03/#Define-the-parameters","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Define the parameters","text":"","category":"section"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"Next we need to add the parameters of the model. The macro @parameters takes care of this:","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"@parameters Smets_Wouters_2003 begin \n lambda_p = .368\n G_bar = .362\n lambda_w = 0.5\n Phi = .819\n\n alpha = 0.3\n beta = 0.99\n gamma_w = 0.763\n gamma_p = 0.469\n h = 0.573\n omega = 1\n psi = 0.169\n\n r_pi = 1.684\n r_Y = 0.099\n r_Delta_pi = 0.14\n r_Delta_y = 0.159\n\n sigma_c = 1.353\n sigma_l = 2.4\n tau = 0.025\n varphi = 6.771\n xi_w = 0.737\n xi_p = 0.908\n\n rho = 0.961\n rho_b = 0.855\n rho_L = 0.889\n rho_I = 0.927\n rho_a = 0.823\n rho_G = 0.949\n rho_pi_bar = 0.924\n\n std_eta_b = 0.336\n std_eta_L = 3.52\n std_eta_I = 0.085\n std_eta_a = 0.598\n std_eta_w = 0.6853261\n std_eta_p = 0.7896512\n std_eta_G = 0.325\n std_eta_R = 0.081\n std_eta_pi = 0.017\n\n calibr_pi_obj | 1 = pi_obj[ss]\n calibr_pi | pi[ss] = pi_obj[ss]\nend","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"The block defining the parameters above has two different inputs.","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"First, there are simple parameter definition the same way you assign values (e.g. Phi = .819).","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"Second, there are calibration equations where we treat the value of a parameter as unknown (e.g. calibr_pi_obj) and want an additional equation to hold (e.g. 1 = pi_obj[ss]). The additional equation can contain variables in SS or parameters. Putting it together a calibration equation is defined by the unknown parameter, and the calibration equation, separated by | (e.g. 
calibr_pi_obj | 1 = pi_obj[ss] and also 1 = pi_obj[ss] | calibr_pi_obj).","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"Note that we have to write one parameter definition per line.","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"Given the equations and parameters, the package will first attempt to solve the system of nonlinear equations symbolically (including possible calibration equations). If an analytical solution is not possible, numerical solution methods are used to try and solve it. There is no guarantee that a solution can be found, but it is highly likely, given that a solution exists. The problem setup tries to incorporate parts of the structure of the problem, e.g. bounds on variables: if one equation contains log(k) it must be that k > 0. Nonetheless, the user can also provide useful information such as variable bounds or initial guesses. Bounds can be set by adding another expression to the parameter block e.g.: c > 0. Large values are typically a problem for numerical solvers. Therefore, providing a guess for these values will increase the speed of the solver. Guesses can be provided as a Dict after the model name and before the parameter definitions block, e.g.: @parameters Smets_Wouters_2003 guess = Dict(k => 10) begin ... end.","category":"page"},{"location":"tutorials/sw03/#Plot-impulse-response-functions-(IRFs)","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Plot impulse response functions (IRFs)","text":"","category":"section"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"A useful output to analyze are IRFs for the exogenous shocks. Calling plot_irf (different names for the same function are also supported: plot_irfs, or plot_IRF) will take care of this. Please note that you need to import the StatsPlots packages once before the first plot. 
In the background the package solves (numerically in this complex case) for the non stochastic steady state (SS) and calculates the first order perturbation solution.","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"import StatsPlots\nplot_irf(Smets_Wouters_2003)","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"(Image: RBC IRF)","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"When the model is solved the first time (in this case by calling plot_irf), the package breaks down the steady state problem into independent blocks and first attempts to solve them symbolically and if that fails numerically.","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"The plots show the responses of the endogenous variables to a one standard deviation positive (indicated by Shock⁺ in chart title) unanticipated shock. Therefore there are as many subplots as there are combinations of shocks and endogenous variables (which are impacted by the shock). Plots are composed of up to 9 subplots and the plot title shows the model name followed by the name of the shock and which plot we are seeing out of the plots for this shock (e.g. (1/3) means we see the first out of three plots for this shock). Subplots show the sorted endogenous variables with the left y-axis showing the level of the respective variable and the right y-axis showing the percent deviation from the SS (if variable is strictly positive). The horizontal black line marks the SS.","category":"page"},{"location":"tutorials/sw03/#Explore-other-parameter-values","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Explore other parameter values","text":"","category":"section"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"Playing around with the model can be especially insightful in the early phase of model development. The package tries to facilitate this process to the extent possible. Typically one wants to try different parameter values and see how the IRFs change. This can be done by using the parameters argument of the plot_irf function. We pass a Pair with the Symbol of the parameter (: in front of the parameter name) we want to change and its new value to the parameter argument (e.g. :alpha => 0.305). Furthermore, we want to focus on certain shocks and variables. We select for the example the eta_R shock by passing it as a Symbol to the shocks argument of the plot_irf function. 
For the variables we choose to plot: U, Y, I, R, and C and achieve that by passing the Vector of Symbols to the variables argument of the plot_irf function:","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"plot_irf(Smets_Wouters_2003, \n parameters = :alpha => 0.305, \n variables = [:U,:Y,:I,:R,:C], \n shocks = :eta_R)","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"(Image: IRF plot)","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"First, the package finds the new steady state, solves the model dynamics around it and saves the new parameters and solution in the model object. Second, note that with the parameters the IRFs changed (e.g. compare the y-axis values for U). Updating the plot for new parameters is significantly faster than calling it the first time. This is because the first call triggers compilations of the model functions, and once compiled the user benefits from the performance of the specialised compiled code. Furthermore, finding the SS from a valid SS as a starting point is faster.","category":"page"},{"location":"tutorials/sw03/#Plot-model-simulation","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Plot model simulation","text":"","category":"section"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"Another insightful output is simulations of the model. Here we can use the plot_simulations function. Again we want to only look at a subset of the variables and specify it in the variables argument. Please note that you need to import the StatsPlots packages once before the first plot. 
To the same effect we can use the plot_irf function and specify in the shocks argument that we want to :simulate the model and set the periods argument to 100.","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"plot_simulations(Smets_Wouters_2003, variables = [:U,:Y,:I,:R,:C])","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"(Image: Simulate Smets_Wouters_2003)","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"The plots show the models endogenous variables in response to random draws for all exogenous shocks over 100 periods.","category":"page"},{"location":"tutorials/sw03/#Plot-specific-series-of-shocks","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Plot specific series of shocks","text":"","category":"section"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"Sometimes one has a specific series of shocks in mind and wants to see the corresponding model response of endogenous variables. This can be achieved by passing a Matrix or KeyedArray of the series of shocks to the shocks argument of the plot_irf function. Let's assume there is a positive 1 standard deviation shock to eta_b in period 2 and a negative 1 standard deviation shock to eta_w in period 12. This can be implemented as follows:","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"using AxisKeys\nshock_series = KeyedArray(zeros(2,12), Shocks = [:eta_b, :eta_w], Periods = 1:12)\nshock_series[1,2] = 1\nshock_series[2,12] = -1\nplot_irf(Smets_Wouters_2003, shocks = shock_series, variables = [:W,:r_k,:w_star,:R])","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"(Image: Series of shocks RBC)","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"First, we construct the KeyedArray containing the series of shocks and pass it to the shocks argument. The plot shows the paths of the selected variables for the two shocks hitting the economy in periods 2 and 12 and 40 quarters thereafter.","category":"page"},{"location":"tutorials/sw03/#Model-statistics","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Model statistics","text":"","category":"section"},{"location":"tutorials/sw03/#Steady-state","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Steady state","text":"","category":"section"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"The package solves for the SS automatically and we got an idea of the SS values in the plots. 
If we want to see the SS values and the derivatives of the SS with respect to the model parameters we can call get_steady_state. The model has 39 parameters and 54 variables. Since we are not interested in all derivatives for all parameters we select a subset. This can be done by passing on a Vector of Symbols of the parameters to the parameter_derivatives argument:","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"get_steady_state(Smets_Wouters_2003, parameter_derivatives = [:alpha,:beta])","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"The first column of the returned matrix shows the SS while the second to last columns show the derivatives of the SS values (indicated in the rows) with respect to the parameters (indicated in the columns). For example, the derivative of C with respect to beta is 14.4994. This means that if we increase beta by 1, C would increase by 14.4994 approximately. Let's see how this plays out by changing beta from 0.99 to 0.991 (a change of +0.001):","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"get_steady_state(Smets_Wouters_2003, \n parameter_derivatives = [:alpha,:G_bar], \n parameters = :beta => .991)","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"Note that get_steady_state like all other get functions has the parameters argument. Hence, whatever output we are looking at we can change the parameters of the model.","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"The new value of beta changed the SS as expected and C increased by 0.01465. The elasticity (0.01465/0.001) comes close to the partial derivative previously calculated. The derivatives help understanding the effect of parameter changes on the steady state and make for easier navigation of the parameter space.","category":"page"},{"location":"tutorials/sw03/#Standard-deviations","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Standard deviations","text":"","category":"section"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"Next to the SS we can also show the model implied standard deviations of the model. get_standard_deviation takes care of this. Additionally we will set the parameter values to what they were in the beginning by passing on a Tuple of Pairs containing the Symbols of the parameters to be changed and their new (initial) values (e.g. 
(:alpha => 0.3, :beta => .99)).","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"get_standard_deviation(Smets_Wouters_2003, \n parameter_derivatives = [:alpha,:beta], \n parameters = (:alpha => 0.3, :beta => .99))","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"The function returns the model implied standard deviations of the model variables and their derivatives with respect to the model parameters. For example, the derivative of the standard deviation of q with respect to alpha is -19.0184. In other words, the standard deviation of q decreases with increasing alpha.","category":"page"},{"location":"tutorials/sw03/#Correlations","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Correlations","text":"","category":"section"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"Another useful statistic is the model implied correlation of variables. We use get_correlation for this:","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"get_correlation(Smets_Wouters_2003)","category":"page"},{"location":"tutorials/sw03/#Autocorrelations","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Autocorrelations","text":"","category":"section"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"Next, we have a look at the model implied autocorrelations of model variables using the get_autocorrelation function:","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"get_autocorrelation(Smets_Wouters_2003)","category":"page"},{"location":"tutorials/sw03/#Variance-decomposition","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Variance decomposition","text":"","category":"section"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"The model implied contribution of each shock to the variance of the model variables can be calculated by using the get_variance_decomposition function:","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"get_variance_decomposition(Smets_Wouters_2003)","category":"page"},{"location":"tutorials/sw03/#Conditional-variance-decomposition","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Conditional variance decomposition","text":"","category":"section"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"Last but not least, we have a look at the model implied contribution of each shock per period to the variance of 
the model variables (also called forecast error variance decomposition) by using the get_conditional_variance_decomposition function:","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"get_conditional_variance_decomposition(Smets_Wouters_2003)","category":"page"},{"location":"tutorials/sw03/#Plot-conditional-variance-decomposition","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Plot conditional variance decomposition","text":"","category":"section"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"Especially for the conditional variance decomposition it is convenient to look at a plot instead of the raw numbers. This can be done using the plot_conditional_variance_decomposition function. Please note that you need to import the StatsPlots package once before the first plot.","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"plot_conditional_variance_decomposition(Smets_Wouters_2003, variables = [:U,:Y,:I,:R,:C])","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"(Image: FEVD Smets_Wouters_2003)","category":"page"},{"location":"tutorials/sw03/#Model-solution","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Model solution","text":"","category":"section"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"A further insightful output is the policy and transition functions of the first order perturbation solution. To retrieve the solution we call the function get_solution:","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"get_solution(Smets_Wouters_2003)","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"The solution provides information about how past states and present shocks impact present variables. The first row contains the SS for the variables denoted in the columns. The second to last rows contain the past states, with the time index ₍₋₁₎, and present shocks, with exogenous variables denoted by ₍ₓ₎. For example, the immediate impact of a shock to eta_w on z is 0.00222469.","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"There is also the possibility to visually inspect the solution using the plot_solution function. 
Please note that you need to import the StatsPlots packages once before the first plot.","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"plot_solution(Smets_Wouters_2003, :pi, variables = [:C,:I,:K,:L,:W,:R])","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"(Image: Smets_Wouters_2003 solution)","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"The chart shows the first order perturbation solution mapping from the past state pi to the present variables C, I, K, L, W, and R. The state variable covers a range of two standard deviations around the non stochastic steady state and all other states remain in the non stochastic steady state.","category":"page"},{"location":"tutorials/sw03/#Obtain-array-of-IRFs-or-model-simulations","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Obtain array of IRFs or model simulations","text":"","category":"section"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"Last but not least the user might want to obtain simulated time series of the model or IRFs without plotting them. For IRFs this is possible by calling get_irf:","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"get_irf(Smets_Wouters_2003)","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"which returns a 3-dimensional KeyedArray with variables (absolute deviations from the relevant steady state by default) in rows, the period in columns, and the shocks as the third dimension.","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"For simulations this is possible by calling simulate:","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"simulate(Smets_Wouters_2003)","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"which returns the simulated data in levels in a 3-dimensional KeyedArray of the same structure as for the IRFs.","category":"page"},{"location":"tutorials/sw03/#Conditional-forecasts","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Conditional forecasts","text":"","category":"section"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"Conditional forecasting is a useful tool to incorporate for example forecasts into a model and then add shocks on 
top.","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"For example we might be interested in the model dynamics given a path for Y and pi for the first 4 quarters and the next quarter a negative shock to eta_w arrives. Furthermore, we want that the first two periods only a subset of shocks is used to match the conditions on the endogenous variables. This can be implemented using the get_conditional_forecast function and visualised with the plot_conditional_forecast function.","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"First, we define the conditions on the endogenous variables as deviations from the non stochastic steady state (Y and pi in this case) using a KeyedArray from the AxisKeys package (check get_conditional_forecast for other ways to define the conditions):","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"using AxisKeys\nconditions = KeyedArray(Matrix{Union{Nothing,Float64}}(undef,2,4),Variables = [:Y, :pi], Periods = 1:4)\nconditions[1,1:4] .= [-.01,0,.01,.02];\nconditions[2,1:4] .= [.01,0,-.01,-.02];","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"Note that all other endogenous variables not part of the KeyedArray are also not conditioned on.","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"Next, we define the conditions on the shocks using a Matrix (check get_conditional_forecast for other ways to define the conditions on the shocks):","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"shocks = Matrix{Union{Nothing,Float64}}(undef,9,5)\nshocks[[1:3...,5,9],1:2] .= 0;\nshocks[9,5] = -1;","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"The above shock Matrix means that for the first two periods shocks 1, 2, 3, 5, and 9 are fixed at zero and in the fifth period there is a negative shock of eta_w (the 9th shock).","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"Finally we can get the conditional forecast:","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"get_conditional_forecast(Smets_Wouters_2003, conditions, shocks = shocks, variables = [:Y,:pi,:W], conditions_in_levels = false)","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"The function returns a KeyedArray with 
the values of the endogenous variables and shocks matching the conditions exactly.","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"We can also plot the conditional forecast. Please note that you need to import the StatsPlots package once before the first plot.","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"plot_conditional_forecast(Smets_Wouters_2003,conditions, shocks = shocks, plots_per_page = 6,variables = [:Y,:pi,:W],conditions_in_levels = false)","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"(Image: Smets_Wouters_2003 conditional forecast 1)","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"(Image: Smets_Wouters_2003 conditional forecast 2)","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"and we need to set conditions_in_levels = false since the conditions are defined in deviations.","category":"page"},{"location":"tutorials/sw03/","page":"Work with a more complex model - Smets and Wouters (2003)","title":"Work with a more complex model - Smets and Wouters (2003)","text":"Note that the stars indicate the values the model is conditioned on.","category":"page"},{"location":"#MacroModelling.jl","page":"Introduction","title":"MacroModelling.jl","text":"","category":"section"},{"location":"","page":"Introduction","title":"Introduction","text":"Author: Thore Kockerols (@thorek1)","category":"page"},{"location":"","page":"Introduction","title":"Introduction","text":"MacroModelling.jl is a Julia package for developing and solving dynamic stochastic general equilibrium (DSGE) models.","category":"page"},{"location":"","page":"Introduction","title":"Introduction","text":"These kinds of models describe the behavior of a macroeconomy and are particularly suited for counterfactual analysis (economic policy evaluation) and exploring / quantifying specific mechanisms (academic research). Due to the complexity of these models, efficient numerical tools are required, as analytical solutions are often unavailable. MacroModelling.jl serves as a tool for handling the complexities involved, such as forward-looking expectations, nonlinearity, and high dimensionality.","category":"page"},{"location":"","page":"Introduction","title":"Introduction","text":"The goal of this package is to reduce coding time and speed up model development by providing functions for working with discrete-time DSGE models. The user-friendly syntax, automatic variable declaration, and effective steady state solver facilitate fast prototyping of models. Furthermore, the package allows the user to work with nonlinear model solutions (up to third order (pruned) perturbation) and estimate the model using gradient based samplers (e.g. NUTS or HMC). 
Currently, DifferentiableStateSpaceModels.jl is the only other package providing functionality to estimate using gradient based samplers but the use is limited to models with an analytical solution of the non stochastic steady state (NSSS). Larger models tend to not have an analytical solution of the NSSS and MacroModelling.jl can also use gradient based sampler in this case. The target audience for the package includes central bankers, regulators, graduate students, and others working in academia with an interest in DSGE modelling.","category":"page"},{"location":"","page":"Introduction","title":"Introduction","text":"As of now the package can:","category":"page"},{"location":"","page":"Introduction","title":"Introduction","text":"parse a model written with user friendly syntax (variables are followed by time indices ...[2], [1], [0], [-1], [-2]..., or [x] for shocks)\n(tries to) solve the model only knowing the model equations and parameter values (no steady state file needed)\ncalculate first, second, and third order (pruned) perturbation solutions (see Villemot (2011), Andreasen et al. (2017) and Levintal (2017)) using symbolic derivatives\nhandle occasionally binding constraints for linear and nonlinear solutions\ncalculate (generalised) impulse response functions, simulate the model, or do conditional forecasts for linear and nonlinear solutions\ncalibrate parameters using (non stochastic) steady state relationships\nmatch model moments (also for pruned higher order solutions)\nestimate the model on data (Kalman filter using first order perturbation; see Durbin and Koopman (2012)) with gradient based samplers (e.g. NUTS, HMC) or estimate nonlinear models using the inversion filter\ndifferentiate (forward AD) the model solution, Kalman filter loglikelihood (forward and reverse-mode AD), model moments, steady state, with respect to the parameters","category":"page"},{"location":"","page":"Introduction","title":"Introduction","text":"The package is not:","category":"page"},{"location":"","page":"Introduction","title":"Introduction","text":"guaranteed to find the non stochastic steady state\nthe fastest package around if you already have a fast way to find the NSSS","category":"page"},{"location":"","page":"Introduction","title":"Introduction","text":"The former has to do with the fact that solving systems of nonlinear equations is hard (an active area of research). Especially in cases where the values of the solution are far apart (have a high standard deviation - e.g. sol = [-46.324, .993457, 23523.3856]), the algorithms have a hard time finding a solution. The recommended way to tackle this is to set bounds in the @parameters part (e.g. r < 0.2), so that the initial points are closer to the final solution (think of steady state interest rates not being higher than 20% - meaning not being higher than 0.2 or 1.2 depending on the definition).","category":"page"},{"location":"","page":"Introduction","title":"Introduction","text":"The latter has to do with the fact that julia code is fast once compiled, and that the package can spend more time finding the non stochastic steady state. This means that it takes more time from executing the code to define the model and parameters for the first time to seeing the first plots than with most other packages. 
But, once the functions are compiled and the non stochastic steady state has been found the user can benefit from the object oriented nature of the package and generate outputs or change parameters very fast.","category":"page"},{"location":"","page":"Introduction","title":"Introduction","text":"The package contains the following models in the models folder:","category":"page"},{"location":"","page":"Introduction","title":"Introduction","text":"Aguiar and Gopinath (2007) Aguiar_Gopinath_2007.jl\nAscari and Sbordone (2014) Ascari_sbordone_2014.jl\nBackus, Kehoe, and Kydland (1992) Backus_Kehoe_Kydland_1992\nBaxter and King (1993) Baxter_King_1993.jl\nCaldara et al. (2012) Caldara_et_al_2012.jl\nGali (2015) - Chapter 3 Gali_2015_chapter_3_nonlinear.jl\nGali and Monacelli (2005) - CPI inflation-based Taylor rule Gali_Monacelli_2005_CITR.jl\nGerali, Neri, Sessa, and Signoretti (2010) GNSS_2010.jl\nGhironi and Melitz (2005) Ghironi_Melitz_2005.jl\nIreland (2004) Ireland_2004.jl\nJermann and Quadrini (2012) - RBC JQ_2012_RBC.jl\nNew Area-Wide Model (2008) - Euro Area - US NAWM_EAUS_2008.jl\nQUEST3 (2009) QUEST3_2009.jl\nSchmitt-Grohé and Uribe (2003) - debt premium SGU_2003_debt_premium.jl\nSchorfheide (2000) FS2000.jl\nSmets and Wouters (2003) SW03.jl\nSmets and Wouters (2007) SW07.jl","category":"page"},{"location":"#Comparison-with-other-packages","page":"Introduction","title":"Comparison with other packages","text":"","category":"section"},{"location":"","page":"Introduction","title":"Introduction","text":" MacroModelling.jl dynare DSGE.jl dolo.py SolveDSGE.jl DifferentiableStateSpaceModels.jl StateSpaceEcon.jl IRIS RISE NBTOOLBOX gEcon GDSGE Taylor Projection\nHost language julia MATLAB julia Python julia julia julia MATLAB MATLAB MATLAB R MATLAB MATLAB\nNon stochastic steady state solver symbolic or numerical solver of independent blocks; symbolic removal of variables redundant in steady state; inclusion of calibration equations in problem numerical solver of independent blocks or user-supplied values/functions numerical solver of independent blocks or user-supplied values/functions numerical solver numerical solver or user supplied values/equations numerical solver of independent blocks or user-supplied values/functions numerical solver of independent blocks or user-supplied values/functions numerical solver of independent blocks or user-supplied values/functions user-supplied steady state file or numerical solver numerical solver; inclusion of calibration equations in problem \nAutomatic declaration of variables and parameters yes \nDerivatives (Automatic Differentiation) wrt parameters yes yes - for all 1st, 2nd order perturbation solution related output if user supplied steady state equations \nPerturbation solution order 1, 2, 3 k 1 1, 2, 3 1, 2, 3 1, 2 1 1 1 to 5 1 1 1 to 5\nPruning yes yes yes yes \nAutomatic derivation of first order conditions yes \nHandles occasionally binding constraints yes yes yes yes yes yes yes \nGlobal solution yes yes yes \nEstimation yes yes yes yes yes yes yes \nBalanced growth path yes yes yes yes yes yes \nModel input macro (julia) text file text file text file text file macro (julia) module (julia) text file text file text file text file text file text file\nTiming convention end-of-period end-of-period end-of-period start-of-period start-of-period end-of-period end-of-period end-of-period end-of-period end-of-period start-of-period 
start-of-period","category":"page"},{"location":"#Bibliography","page":"Introduction","title":"Bibliography","text":"","category":"section"},{"location":"","page":"Introduction","title":"Introduction","text":"Andreasen, M. M.; Fernández-Villaverde, J. and Rubio-Ramírez, J. F. (2017). The Pruned State-Space System for Non-Linear DSGE Models: Theory and Empirical Applications. The Review of Economic Studies 85, 1–49, arXiv:https://academic.oup.com/restud/article-pdf/85/1/1/23033725/rdx037.pdf.\n\n\n\nDurbin, J. and Koopman, S. J. (2012). Time Series Analysis by State Space Methods, 2nd edn (Oxford University Press).\n\n\n\nGalı́, J. (2015). Monetary policy, inflation, and the business cycle: an introduction to the new Keynesian framework and its applications (Princeton University Press).\n\n\n\nLevintal, O. (2017). Fifth-Order Perturbation Solution to DSGE models. Journal of Economic Dynamics and Control 80, 1–16.\n\n\n\nSchorfheide, F. (2000). Loss function-based evaluation of DSGE models. Journal of Applied Econometrics 15, 645–670, arXiv:https://onlinelibrary.wiley.com/doi/pdf/10.1002/jae.582.\n\n\n\nSmets, F. and Wouters, R. (2003). AN ESTIMATED DYNAMIC STOCHASTIC GENERAL EQUILIBRIUM MODEL OF THE EURO AREA. Journal of the European Economic Association 1, 1123–1175, arXiv:https://onlinelibrary.wiley.com/doi/pdf/10.1162/154247603770383415.\n\n\n\nVillemot, S. (2011). Solving rational expectations models at first order: what Dynare does (Dynare Working Papers 2, CEPREMAP).\n\n\n\n","category":"page"}] } diff --git a/v0.1.36/tutorials/calibration/index.html b/v0.1.36/tutorials/calibration/index.html index 5f99c83c..b699e1f5 100644 --- a/v0.1.36/tutorials/calibration/index.html +++ b/v0.1.36/tutorials/calibration/index.html @@ -88,10 +88,10 @@ std_nu = .0025 - endRemove redundant variables in non stochastic steady state problem: 1.495 seconds -Set up non stochastic steady state problem: 0.575 seconds -Take symbolic derivatives up to first order: 0.076 seconds -Find non stochastic steady state: 6.309 seconds + endRemove redundant variables in non stochastic steady state problem: 1.462 seconds +Set up non stochastic steady state problem: 0.593 seconds +Take symbolic derivatives up to first order: 0.077 seconds +Find non stochastic steady state: 6.449 seconds Model: Gali_2015 Variables Total: 23 @@ -105,39 +105,39 @@ ↓ Variables ∈ 23-element Vector{Symbol} And data, 23-element Vector{Float64}: (:A) 1.0 - (:C) 0.9505798249541406 - (:MC) 0.8888888888888886 - (:M_real) 0.9152363832868936 + (:C) 0.9505798249541407 + (:MC) 0.8888888888888887 + (:M_real) 0.9152363832868945 (:N) 0.934655265184067 - (:Pi) 0.9999999999999993 - (:Pi_star) 0.999999999999998 - (:Q) 0.9900000000000009 + (:Pi) 0.9999999999999991 + (:Pi_star) 0.9999999999999973 + (:Q) 0.9900000000000011 ⋮ - (:log_y) -0.050683138513520666 + (:log_y) -0.05068313851352055 (:nu) 0.0 - (:pi_ann) -2.6645352591003765e-15 + (:pi_ann) -3.5527136788005025e-15 (:r_real_ann) 0.04020134341400426 (:realinterest) 1.0101010101010097 - (:x_aux_1) 3.4519956850053126 - (:x_aux_2) 3.883495145631008

and the standard deviation like this:

julia> get_standard_deviation(Gali_2015)1-dimensional KeyedArray(NamedDimsArray(...)) with keys:
+ (:x_aux_1)        3.451995685005287
+ (:x_aux_2)        3.8834951456309885

and the standard deviation like this:

julia> get_standard_deviation(Gali_2015)1-dimensional KeyedArray(NamedDimsArray(...)) with keys:
 ↓   Variables ∈ 23-element Vector{Symbol}
 And data, 23-element Vector{Float64}:
- (:A)             0.022941573387055523
- (:C)             0.033571696779990466
- (:MC)            0.21609085046271806
- (:M_real)        0.05926619832099237
- (:N)             0.03786945958932759
- (:Pi)            0.012358762176559495
- (:Pi_star)       0.03707628652967771
- (:Q)             0.02046853220935796
+ (:A)             0.022941573387056217
+ (:C)             0.03357169677998876
+ (:MC)            0.21609085046272014
+ (:M_real)        0.05926619832099089
+ (:N)             0.03786945958932772
+ (:Pi)            0.012358762176559925
+ (:Pi_star)       0.03707628652967878
+ (:Q)             0.02046853220935823
   ⋮
- (:log_y)         0.03531707269466832
- (:nu)            0.002886751345948121
- (:pi_ann)        0.049435048706238015
- (:r_real_ann)    0.05644645066322966
- (:realinterest)  0.014254154207886272
- (:x_aux_1)       0.9515263512142259
- (:x_aux_2)       0.516658656932619

You could also simply use: std or get_std to the same effect.

Another interesting output is the autocorrelation of the model variables:

julia> get_autocorrelation(Gali_2015)2-dimensional KeyedArray(NamedDimsArray(...)) with keys:
+ (:log_y)         0.03531707269466643
+ (:nu)            0.0028867513459481255
+ (:pi_ann)        0.049435048706239736
+ (:r_real_ann)    0.05644645066322981
+ (:realinterest)  0.014254154207886309
+ (:x_aux_1)       0.9515263512142411
+ (:x_aux_2)       0.5166586569326244

You could also simply use: std or get_std to the same effect.

Another interesting output is the autocorrelation of the model variables:

julia> get_autocorrelation(Gali_2015)2-dimensional KeyedArray(NamedDimsArray(...)) with keys:
 ↓   Variables ∈ 23-element Vector{Symbol}Autocorrelation_orders ∈ 5-element UnitRange{Int64}
 And data, 23×5 Matrix{Float64}:
@@ -168,7 +168,7 @@
   (:Pi)            -0.000159411   0.000169707      0.0115462    0.00587158
    ⋮                                           ⋱
   (:log_y)          0.000425097   0.00118565       0.00768558   0.000326461
-  (:nu)            -3.20378e-17  -8.20937e-6      -0.00016948  -5.38539e-5
+  (:nu)            -1.17583e-17  -8.20937e-6      -0.00016948  -5.38539e-5
   (:pi_ann)        -0.000637646   0.000678826  …   0.0461849    0.0234863
   (:r_real_ann)    -0.000170039   0.00143464       0.0418524    0.0185996
   (:realinterest)  -4.29391e-5    0.000362283      0.0105688    0.00469687
@@ -180,14 +180,14 @@
                    (:Mean)       (:σ)
   (:A)              1.0           0.0
   (:C)              0.95058       0.0060223
-  (:MC)             0.888889      6.68609e-19
+  (:MC)             0.888889     -6.93889e-18
   (:M_real)         0.915236      0.00579838
   (:N)              0.934655      0.00789521
   (:Pi)             1.0          -0.0
    ⋮
   (:log_y)         -0.0506831     0.00633539
   (:nu)             0.0           0.0
-  (:pi_ann)        -2.66454e-15  -0.0
+  (:pi_ann)        -3.55271e-15  -0.0
   (:r_real_ann)     0.0402013     0.0
   (:realinterest)   1.0101        0.0
   (:x_aux_1)        3.452         0.174958
@@ -197,24 +197,24 @@
 And data, 23×6 Matrix{Float64}:
                    (:Mean)       (:σ)(:β)          (:α)
   (:A)              1.0           0.0              0.0           0.0
-  (:C)              0.95058       0.0060223        3.4721e-15   -0.0941921
-  (:MC)             0.888889      6.68609e-19      2.62013e-14  -1.11421e-15
+  (:C)              0.95058       0.0060223        4.78473e-15  -0.0941921
+  (:MC)             0.888889     -6.93889e-18      3.64153e-14  -1.47167e-15
   (:M_real)         0.915236      0.00579838       3.48529      -0.09069
-  (:N)              0.934655      0.00789521   …   4.65293e-15  -0.207701
-  (:Pi)             1.0          -0.0              4.39648e-16  -0.0
+  (:N)              0.934655      0.00789521   …   6.3651e-15   -0.207701
+  (:Pi)             1.0          -0.0             -1.46549e-16  -0.0
    ⋮                                           ⋱   ⋮
-  (:log_y)         -0.0506831     0.00633539       3.65261e-15  -0.0990891
+  (:log_y)         -0.0506831     0.00633539       5.03348e-15  -0.0990891
   (:nu)             0.0           0.0              0.0           0.0
-  (:pi_ann)        -2.66454e-15  -0.0          …   1.75859e-15  -0.0
+  (:pi_ann)        -3.55271e-15  -0.0          …  -5.86198e-16  -0.0
   (:r_real_ann)     0.0402013     0.0             -4.0404        0.0
   (:realinterest)   1.0101        0.0             -1.0203        0.0
-  (:x_aux_1)        3.452         0.174958        10.0544       -1.10452e-13
-  (:x_aux_2)        3.8835        0.196828        11.3112       -8.54455e-17

We can do the same for standard deviation or variance, and all parameters:

julia> get_std(Gali_2015, parameter_derivatives = get_parameters(Gali_2015))2-dimensional KeyedArray(NamedDimsArray(...)) with keys:
+  (:x_aux_1)        3.452         0.174958        10.0544       -1.47167e-13
+  (:x_aux_2)        3.8835        0.196828        11.3112       -0.0

We can do the same for standard deviation or variance, and all parameters:

julia> get_std(Gali_2015, parameter_derivatives = get_parameters(Gali_2015))2-dimensional KeyedArray(NamedDimsArray(...)) with keys:
 ↓   Variables ∈ 23-element Vector{Symbol}Standard_deviation_and_∂standard_deviation∂parameter ∈ 17-element Vector{Symbol}
 And data, 23×17 Matrix{Float64}:
                    (:Standard_deviation)(:std_z)      (:std_nu)
-  (:A)              0.0229416                 4.85894e-33   2.84982e-35
+  (:A)              0.0229416                -2.62184e-33   4.96938e-33
   (:C)              0.0335717                 0.48179       0.0963579
   (:MC)             0.216091                  4.18882       0.837765
   (:M_real)         0.0592662                 0.491332      0.254845
@@ -222,7 +222,7 @@
   (:Pi)             0.0123588                 0.167366      0.0334732
    ⋮                                      ⋱
   (:log_y)          0.0353171                 0.506838      0.101368
-  (:nu)             0.00288675                0.0           1.1547
+  (:nu)             0.00288675               -4.95663e-31   1.1547
   (:pi_ann)         0.049435              …   0.669465      0.133893
   (:r_real_ann)     0.0564465                 1.09678       0.253692
   (:realinterest)   0.0142542                 0.276965      0.0640637
@@ -232,7 +232,7 @@
 →   Variance_and_∂variance∂parameter ∈ 17-element Vector{Symbol}
 And data, 23×17 Matrix{Real}:
                    (:Variance)   (:σ)(:std_z)      (:std_nu)
-  (:A)              0.000526316   9.12332e-20      2.22944e-34   1.30759e-36
+  (:A)              0.000526316  -5.50772e-20     -1.20298e-34   2.28011e-34
   (:C)              0.00112706   -0.00101983       0.032349      0.0064698
   (:MC)             0.0466953    -0.0394622        1.81033       0.362067
   (:M_real)         0.00351248   -0.000388635      0.0582387     0.0302074
@@ -240,7 +240,7 @@
   (:Pi)             0.000152739  -6.64228e-5       0.00413688    0.000827376
    ⋮                                           ⋱
   (:log_y)          0.0012473    -0.00114444       0.0358        0.00716001
-  (:nu)             8.33333e-6    3.1209e-22       0.0           0.00666667
+  (:nu)             8.33333e-6   -1.50032e-21     -2.86171e-33   0.00666667
   (:pi_ann)         0.00244382   -0.00106276   …   0.06619       0.013238
   (:r_real_ann)     0.0031862    -0.00279404       0.123819      0.02864
   (:realinterest)   0.000203181  -0.000178173      0.00789579    0.00182635
@@ -258,8 +258,8 @@
              (:Standard_deviation)  (:σ)         (:α)        (:std_a)
   (:Pi)       0.0123588             -0.00268728  -0.0166021   0.390677
   (:W_real)   0.156462              -0.0674815    0.141894    0.0348056

Looking at the sensitivity table we see that lowering the production function parameter will increase real wages, but at the same time it will increase inflation volatility. We could compensate that effect by decreasing the standard deviation of the total factor productivity shock :std_a.
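Before moving on to a formal optimisation, that intuition can be checked directly by re-evaluating the moments at a trial value using get_std as introduced above. The value 0.005 for std_a below is made up purely for illustration and, like every get_* call with a parameters argument, it overwrites the stored parameter value (reset it to 0.01 afterwards if you keep following along):

get_std(Gali_2015,
        variables = [:Pi, :W_real],
        parameters = :std_a => 0.005)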

Method of moments

Instead of doing this by hand we can also set a target and have an optimiser find the corresponding parameter values. In order to do that we need to define targets, and set up an optimisation problem.

Our targets are:

  • Mean of W_real = 0.7
  • Standard deviation of Pi = 0.01

For the optimisation problem we use the L-BFGS algorithm implemented in Optim.jl. This optimisation algorithm is very efficient and gradient based. Note that all model outputs are differentiable with respect to the parameters using automatic and implicit differentiation.

The package provides functions specialised for the use with gradient based code (e.g. gradient-based optimisers or samplers). For model statistics we can use get_statistics to get the mean of real wages and the standard deviation of inflation like this:

julia> get_statistics(Gali_2015, Gali_2015.parameter_values, parameters = Gali_2015.parameters, mean = [:W_real], standard_deviation = [:Pi])2-element Vector{AbstractArray{Float64}}:
- [0.6780252644037243]
- [0.012358762176559495]

First we pass on the model object, followed by the parameter values and the parameter names the values correspond to. Then we define the outputs we want: for the mean we want real wages and for the standard deviation we want inflation. We can also get outputs for variance, covariance, or autocorrelation the same way as for the mean and standard deviation.
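The other statistics can be requested through the same call; here is a small sketch, assuming the keyword arguments are named after the statistics just mentioned (variance, covariance, autocorrelation), analogous to mean and standard_deviation:

get_statistics(Gali_2015,
               Gali_2015.parameter_values,
               parameters = Gali_2015.parameters,
               variance = [:W_real],
               autocorrelation = [:Pi])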

Next, let's define a function measuring how close we are to our target for given values of :α and :std_a:

julia> function distance_to_target(parameter_value_inputs)
+ [0.6780252644037245]
+ [0.012358762176559925]

First we pass on the model object, followed by the parameter values and the parameter names the values correspond to. Then we define the outputs we want: for the mean we want real wages and for the standard deviation we want inflation. We can also get outputs for variance, covariance, or autocorrelation the same way as for the mean and standard deviation.

Next, let's define a function measuring how close we are to our target for given values of :α and :std_a:

julia> function distance_to_target(parameter_value_inputs)
            model_statistics = get_statistics(Gali_2015, parameter_value_inputs, parameters = [:α, :std_a], mean = [:W_real], standard_deviation = [:Pi])
            targets = [0.7, 0.01]
            return sum(abs2, vcat(model_statistics...) - targets)
@@ -279,7 +279,7 @@
       "τ" => 0.0
   "std_a" => 0.01
   "std_z" => 0.05
- "std_nu" => 0.0025

With this we can test the distance function:

julia> distance_to_target([0.25, 0.01])0.00048845276353179

Next we can pass it on to an optimiser and find the parameters corresponding to the best fit like this:

julia> using Optim, LineSearches
julia> sol = Optim.optimize(distance_to_target, + "std_nu" => 0.0025

With this we can test the distance function:

julia> distance_to_target([0.25, 0.01])0.0004884527635317872

Next we can pass it on to an optimiser and find the parameters corresponding to the best fit like this:

julia> using Optim, LineSearches
julia> sol = Optim.optimize(distance_to_target, [0,0], [1,1], [0.25, 0.01], @@ -296,15 +296,15 @@ |x - x'|/|x'| = 1.03e-05 ≰ 0.0e+00 |f(x) - f(x')| = 0.00e+00 ≤ 0.0e+00 |f(x) - f(x')|/|f(x')| = 0.00e+00 ≤ 0.0e+00 - |g(x)| = 8.71e-09 ≤ 1.0e-08 + |g(x)| = 8.72e-09 ≤ 1.0e-08 * Work counters Seconds run: 1 (vs limit Inf) Iterations: 4 f(x) calls: 26 ∇f(x) calls: 24

The first argument to the optimisation call is the function we defined previously, followed by lower and upper bounds, the starting values, and finally the algorithm. For the algorithm we have to add Fminbox because we have bounds (optional) and we set the specific line search method to speed up convergence (recommended but optional).
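If the bounds are not needed, the Fminbox wrapper can be dropped and the same problem run unconstrained; a sketch with the identical line search:

sol_unconstrained = Optim.optimize(distance_to_target,
                                   [0.25, 0.01],
                                   Optim.LBFGS(linesearch = LineSearches.BackTracking(order = 3)))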

The output shows that we could almost perfectly match the target and the values of the parameters found by the optimiser are:

julia> sol.minimizer2-element Vector{Float64}:
- 0.2233025595376729
- 9.506079827289288e-8

slightly lower for both parameters (in line with what we understood from the sensitivities).

You can combine the method of moments with estimation by simply adding the distance to the target to the posterior loglikelihood.
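A minimal sketch of that idea, assuming a posterior loglikelihood function along the lines of the estimation tutorial (the name posterior_loglikelihood and the index selection are illustrative, not part of the package):

function combined_objective(parameter_values)
    # posterior loglikelihood to be maximised (hypothetical helper)
    log_post = posterior_loglikelihood(parameter_values)
    # subtract the distance to the moment targets; the first two entries are assumed to be α and std_a
    return log_post - distance_to_target(parameter_values[[1, 2]])
end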

Nonlinear solutions

So far we used the linearised solution of the model. The package also provides nonlinear solutions and can calculate the theoretical model moments for pruned second and third order perturbation solutions. This can be of interest because nonlinear solutions capture volatility effects (at second order) and asymmetries (at third order). Furthermore, the moments of the data are often non-gaussian while linear solutions with gaussian noise can only generate gaussian distributions of model variables. Nonetheless, already pruned second order solutions produce non-gaussian skewness and kurtosis with gaussian noise.

From a user perspective little changes other than specifying that the solution algorithm is :pruned_second_order or :pruned_third_order.

For example we can get the mean for the pruned second order solution:

julia> get_mean(Gali_2015, parameter_derivatives = [:σ, :std_a, :α], variables = [:W_real,:Pi], algorithm = :pruned_second_order)2-dimensional KeyedArray(NamedDimsArray(...)) with keys:
+ 0.22330255953850617
+ 9.506284750850915e-8

slightly lower for both parameters (in line with what we understood from the sensitivities).

You can combine the method of moments with estimation by simply adding the distance to the target to the posterior loglikelihood.

Nonlinear solutions

So far we used the linearised solution of the model. The package also provides nonlinear solutions and can calculate the theoretical model moments for pruned second and third order perturbation solutions. This can be of interest because nonlinear solutions capture volatility effects (at second order) and asymmetries (at third order). Furthermore, the moments of the data are often non-gaussian while linear solutions with gaussian noise can only generate gaussian distributions of model variables. Nonetheless, already pruned second order solutions produce non-gaussian skewness and kurtosis with gaussian noise.

From a user perspective little changes other than specifying that the solution algorithm is :pruned_second_order or :pruned_third_order.

For example we can get the mean for the pruned second order solution:

julia> get_mean(Gali_2015, parameter_derivatives = [:σ, :std_a, :α], variables = [:W_real,:Pi], algorithm = :pruned_second_order)2-dimensional KeyedArray(NamedDimsArray(...)) with keys:
 ↓   Variables ∈ 2-element Vector{Symbol}Mean_and_∂mean∂parameter ∈ 4-element Vector{Symbol}
 And data, 2×4 Matrix{Float64}:
@@ -346,14 +346,14 @@
     |g(x)|                 = 2.92e-09 ≤ 1.0e-08
 
  * Work counters
-    Seconds run:   14  (vs limit Inf)
+    Seconds run:   10  (vs limit Inf)
     Iterations:    3
-    f(x) calls:    69
-    ∇f(x) calls:   36

the calculations take substantially longer and we don't get as close to our target as for the linear solution case. The parameter values minimising the distance are:

julia> sol.minimizer2-element Vector{Float64}:
- 0.19722918358106703
- 2.923641833967549e-9

lower than for the linear solution case and the theoretical moments given these parameters are:

julia> get_statistics(Gali_2015, sol.minimizer, algorithm = :pruned_third_order, parameters = [:α, :std_a], mean = [:W_real], standard_deviation = [:Pi])2-element Vector{AbstractArray{Float64}}:
- [0.6999560170746297]
- [0.015199050418151934]

The solution does not match the standard deviation of inflation very well.

Potentially the partial derivatives change a lot for small changes in parameters, and even though the partial derivatives for the standard deviation of inflation were large wrt std_a they might be small for the value returned from the optimisation. We can check this with:

julia> get_std(Gali_2015, parameter_derivatives = [:σ, :std_a, :α], variables = [:W_real,:Pi], algorithm = :pruned_third_order, parameters = [:α, :std_a] .=> sol.minimizer)2-dimensional KeyedArray(NamedDimsArray(...)) with keys:
+    f(x) calls:    32
+    ∇f(x) calls:   30

the calculations take substantially longer and we don't get as close to our target as for the linear solution case. The parameter values minimising the distance are:

julia> sol.minimizer2-element Vector{Float64}:
+ 0.19722918310784004
+ 2.923939582130735e-9

lower than for the linear solution case and the theoretical moments given these parameters are:

julia> get_statistics(Gali_2015, sol.minimizer, algorithm = :pruned_third_order, parameters = [:α, :std_a], mean = [:W_real], standard_deviation = [:Pi])2-element Vector{AbstractArray{Float64}}:
+ [0.6999560174656241]
+ [0.0151990504214938]

The solution does not match the standard deviation of inflation very well.

Potentially the partial derivatives change a lot for small changes in parameters, and even though the partial derivatives for the standard deviation of inflation were large wrt std_a they might be small for the value returned from the optimisation. We can check this with:

julia> get_std(Gali_2015, parameter_derivatives = [:σ, :std_a, :α], variables = [:W_real,:Pi], algorithm = :pruned_third_order, parameters = [:α, :std_a] .=> sol.minimizer)2-dimensional KeyedArray(NamedDimsArray(...)) with keys:
 ↓   Variables ∈ 2-element Vector{Symbol}Standard_deviation_and_∂standard_deviation∂parameter ∈ 4-element Vector{Symbol}
 And data, 2×4 Matrix{Float64}:
@@ -387,5 +387,5 @@
     Iterations:    3
     f(x) calls:    41
     ∇f(x) calls:   41
julia> sol.minimizer2-element Vector{Float64}: - 0.20874375760774933 - 2.123741771007158

Given the new value for std_a, optimising over σ allows us to match the target exactly.

+ 0.20874375760773212 + 2.1237417710035174

Given the new value for std_a, optimising over σ allows us to match the target exactly.

diff --git a/v0.1.36/tutorials/estimation/index.html b/v0.1.36/tutorials/estimation/index.html index 50349bb4..ae93e081 100644 --- a/v0.1.36/tutorials/estimation/index.html +++ b/v0.1.36/tutorials/estimation/index.html @@ -50,10 +50,10 @@ del = 0.01 z_e_a = 0.035449 z_e_m = 0.008862 - endRemove redundant variables in non stochastic steady state problem: 1.862 seconds -Set up non stochastic steady state problem: 2.584 seconds -Take symbolic derivatives up to first order: 0.475 seconds -Find non stochastic steady state: 3.925 seconds + endRemove redundant variables in non stochastic steady state problem: 1.874 seconds +Set up non stochastic steady state problem: 2.482 seconds +Take symbolic derivatives up to first order: 0.5 seconds +Find non stochastic steady state: 4.049 seconds Model: FS2000 Variables Total: 18 @@ -144,24 +144,24 @@ parameters mean Symbol Float64 - parameters[1] 0.4038 + parameters[1] 0.4040 parameters[2] 0.9905 parameters[3] 0.0046 - parameters[4] 1.0142 - parameters[5] 0.8464 - parameters[6] 0.6833 - parameters[7] 0.0024 + parameters[4] 1.0141 + parameters[5] 0.8477 + parameters[6] 0.6822 + parameters[7] 0.0025 parameters[8] 0.0138 - parameters[9] 0.0033
julia> pars = ComponentArray(parameter_mean.nt[2],Axis(parameter_mean.nt[1]))ComponentVector{Float64}(parameters[1] = 0.40382038681748134, parameters[2] = 0.9905215966959959, parameters[3] = 0.004616432653593915, parameters[4] = 1.0141917324986325, parameters[5] = 0.8464069524607625, parameters[6] = 0.6833437289006508, parameters[7] = 0.002443856990694954, parameters[8] = 0.01382125045107092, parameters[9] = 0.0033384241822524622)
julia> logjoint(FS2000_loglikelihood, pars)ERROR: type NamedTuple has no field parameters
julia> function calculate_log_probability(par1, par2, pars_syms, orig_pars, model) - orig_pars[pars_syms] = [par1, par2] + parameters[9] 0.0033
julia> pars = ComponentArray([parameter_mean.nt[2]], Axis(:parameters));
julia> logjoint(FS2000_loglikelihood, pars)1343.412656463074
julia> function calculate_log_probability(par1, par2, pars_syms, orig_pars, model) + orig_pars[1][pars_syms] = [par1, par2] logjoint(model, orig_pars) - endcalculate_log_probability (generic function with 1 method)
julia> granularity = 32;
julia> par1 = :del;
julia> par2 = :gam;
julia> par_range1 = collect(range(minimum(chain_NUTS[par1]), stop = maximum(chain_NUTS[par1]), length = granularity));ERROR: ArgumentError: index del not found
julia> par_range2 = collect(range(minimum(chain_NUTS[par2]), stop = maximum(chain_NUTS[par2]), length = granularity));ERROR: ArgumentError: index gam not found
julia> p = surface(par_range1, par_range2, - (x,y) -> calculate_log_probability(x, y, [par1, par2], pars, FS2000_loglikelihood), + endcalculate_log_probability (generic function with 1 method)
julia> granularity = 32;
julia> par1 = :del;
julia> par2 = :gam;
julia> paridx1 = indexin([par1], FS2000.parameters)[1];
julia> paridx2 = indexin([par2], FS2000.parameters)[1];
julia> par_range1 = collect(range(minimum(chain_NUTS[Symbol("parameters[$paridx1]")]), stop = maximum(chain_NUTS[Symbol("parameters[$paridx1]")]), length = granularity));
julia> par_range2 = collect(range(minimum(chain_NUTS[Symbol("parameters[$paridx2]")]), stop = maximum(chain_NUTS[Symbol("parameters[$paridx2]")]), length = granularity));
julia> p = surface(par_range1, par_range2, + (x,y) -> calculate_log_probability(x, y, [paridx1, paridx2], pars, FS2000_loglikelihood), camera=(30, 65), colorbar=false, - color=:inferno);ERROR: UndefVarError: `par_range1` not defined
julia> joint_loglikelihood = [logjoint(FS2000_loglikelihood, ComponentArray(reduce(hcat, get(chain_NUTS, FS2000.parameters)[FS2000.parameters])[s,:], Axis(FS2000.parameters))) for s in 1:length(chain_NUTS)];ERROR: type NamedTuple has no field alp
julia> scatter3d!(vec(collect(chain_NUTS[par1])), - vec(collect(chain_NUTS[par2])), - joint_loglikelihood, + color=:inferno);
julia> joint_loglikelihood = [logjoint(FS2000_loglikelihood, ComponentArray([reduce(hcat, get(chain_NUTS, :parameters)[1])[s,:]], Axis(:parameters))) for s in 1:length(chain_NUTS)];
julia> scatter3d!(vec(collect(chain_NUTS[Symbol("parameters[$paridx1]")])), + vec(collect(chain_NUTS[Symbol("parameters[$paridx2]")])), + joint_loglikelihood, mc = :viridis, marker_z = collect(1:length(chain_NUTS)), msw = 0, @@ -170,21 +170,17 @@ xlabel = string(par1), ylabel = string(par2), zlabel = "Log probability", - alpha = 0.5);ERROR: ArgumentError: index del not found
julia> pERROR: UndefVarError: `p` not defined

Posterior surface

Find posterior mode

Other than the mean and median of the posterior distribution we can also calculate the mode. To this end we will use L-BFGS optimisation routines from the Optim package.

First, we define the posterior loglikelihood function, similar to how we defined it for the Turing model macro.

julia> function calculate_posterior_loglikelihood(parameters)
+                   alpha = 0.5);
julia> pPlot{Plots.GRBackend() n=2}

Posterior surface

Find posterior mode

Other than the mean and median of the posterior distribution we can also calculate the mode. To this end we will use L-BFGS optimisation routines from the Optim package.

First, we define the posterior loglikelihood function, similar to how we defined it for the Turing model macro.

julia> function calculate_posterior_loglikelihood(parameters, prior_distributions)
            alp, bet, gam, mst, rho, psi, del, z_e_a, z_e_m = parameters
            log_lik = 0
            log_lik -= get_loglikelihood(FS2000, data, parameters)
-           log_lik -= logpdf(Beta(0.356, 0.02, μσ = true),alp)
-           log_lik -= logpdf(Beta(0.993, 0.002, μσ = true),bet)
-           log_lik -= logpdf(Normal(0.0085, 0.003),gam)
-           log_lik -= logpdf(Normal(1.0002, 0.007),mst)
-           log_lik -= logpdf(Beta(0.129, 0.223, μσ = true),rho)
-           log_lik -= logpdf(Beta(0.65, 0.05, μσ = true),psi)
-           log_lik -= logpdf(Beta(0.01, 0.005, μσ = true),del)
-           log_lik -= logpdf(InverseGamma(0.035449, Inf, μσ = true),z_e_a)
-           log_lik -= logpdf(InverseGamma(0.008862, Inf, μσ = true),z_e_m)
+       
+           for (dist, val) in zip(prior_distributions, parameters)
+               log_lik -= logpdf(dist, val)
+           end
+       
            return log_lik
-       endcalculate_posterior_loglikelihood (generic function with 1 method)

Next, we set up the optimisation problem, parameter bounds, and use the optimizer L-BFGS.

julia> using Optim, LineSearches
julia> lbs = [0,0,-10,-10,0,0,0,0,0];
julia> ubs = [1,1,10,10,1,1,1,100,100];
julia> sol = optimize(calculate_posterior_loglikelihood, lbs, ubs , FS2000.parameter_values, Fminbox(LBFGS(linesearch = LineSearches.BackTracking(order = 3))); autodiff = :forward) * Status: success + endcalculate_posterior_loglikelihood (generic function with 1 method)

Next, we set up the optimisation problem, parameter bounds, and use the optimizer L-BFGS.

julia> using Optim, LineSearches
julia> lbs = [0,0,-10,-10,0,0,0,0,0];
julia> ubs = [1,1,10,10,1,1,1,100,100];
julia> sol = optimize(x -> calculate_posterior_loglikelihood(x, prior_distributions), lbs, ubs , FS2000.parameter_values, Fminbox(LBFGS(linesearch = LineSearches.BackTracking(order = 3))); autodiff = :forward) * Status: success * Candidate solution Final objective value: -1.343749e+03 @@ -193,35 +189,35 @@ Algorithm: Fminbox with L-BFGS * Convergence measures - |x - x'| = 1.29e-17 ≰ 0.0e+00 - |x - x'|/|x'| = 7.02e-18 ≰ 0.0e+00 + |x - x'| = 1.20e-17 ≰ 0.0e+00 + |x - x'|/|x'| = 6.52e-18 ≰ 0.0e+00 |f(x) - f(x')| = 0.00e+00 ≤ 0.0e+00 |f(x) - f(x')|/|f(x')| = 0.00e+00 ≤ 0.0e+00 - |g(x)| = 9.61e-05 ≰ 1.0e-08 + |g(x)| = 1.64e-03 ≰ 1.0e-08 * Work counters - Seconds run: 20 (vs limit Inf) - Iterations: 8 - f(x) calls: 608 - ∇f(x) calls: 92
julia> sol.minimum-1343.7491257510762

Model estimates given the data and the model solution

Having found the parameters at the posterior mode we can retrieve model estimates of the shocks which explain the data used to estimate it. This can be done with the get_estimated_shocks function:

julia> get_estimated_shocks(FS2000, data, parameters = sol.minimizer)2-dimensional KeyedArray(NamedDimsArray(...)) with keys:
+    Seconds run:   19  (vs limit Inf)
+    Iterations:    6
+    f(x) calls:    487
+    ∇f(x) calls:   83
julia> sol.minimum-1343.7491257511065

Model estimates given the data and the model solution

Having found the parameters at the posterior mode we can retrieve model estimates of the shocks which explain the data used to estimate it. This can be done with the get_estimated_shocks function:

julia> get_estimated_shocks(FS2000, data, parameters = sol.minimizer)2-dimensional KeyedArray(NamedDimsArray(...)) with keys:
 ↓   Shocks ∈ 2-element Vector{Symbol}Periods ∈ 192-element UnitRange{Int64}
 And data, 2×192 Matrix{Float64}:
              (1)         (2)(191)         (192)
-  (:e_a₍ₓ₎)    3.07802     2.02956          0.31192       0.0219214
+  (:e_a₍ₓ₎)    3.07802     2.02956          0.31192       0.0219213
   (:e_m₍ₓ₎)   -0.338786    0.529109        -0.455173     -0.596633

As the first argument we pass the model, followed by the data (in levels), and then we pass the parameters at the posterior mode. The model is solved with this parameterisation and the shocks are calculated using the Kalman smoother.
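Since the result is a KeyedArray, individual entries can be looked up by key. A small sketch (the key :e_a₍ₓ₎ and period 1 are taken from the output above):

estimated_shocks = get_estimated_shocks(FS2000, data, parameters = sol.minimizer)
estimated_shocks(:e_a₍ₓ₎, 1)   # smoothed technology shock in the first period, ≈ 3.078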

We estimated the model on two variables but our model allows us to look at all variables given the data. Looking at the estimated variables can be done using the get_estimated_variables function:

julia> get_estimated_variables(FS2000, data)2-dimensional KeyedArray(NamedDimsArray(...)) with keys:
 ↓   Variables ∈ 18-element Vector{Symbol}Periods ∈ 192-element UnitRange{Int64}
 And data, 18×192 Matrix{Float64}:
                  (1)          (2)(191)           (192)
-  (:P)             0.585869     0.596628          0.560845        0.559733
-  (:Pᴸ⁽¹⁾)         0.585108     0.595478          0.561547        0.560631
+  (:P)             0.585869     0.596628          0.560846        0.559734
+  (:Pᴸ⁽¹⁾)         0.585109     0.595479          0.561547        0.560631
   (:R)             1.02576      1.02692           1.01966         1.01858
   (:W)             2.99871      3.01049           2.96335         2.9599
   (:c)             1.73321      1.7013     …      1.80033         1.80164
-  (:cᴸ⁽¹⁾)         1.73494      1.7039            1.79936         1.80039
+  (:cᴸ⁽¹⁾)         1.73494      1.7039            1.79935         1.80039
    ⋮                                       ⋱      ⋮
-  (:k)            53.4886      52.032            56.2707         56.2515
+  (:k)            53.4885      52.0319           56.2707         56.2515
   (:l)             0.724385     0.733136          0.705017        0.704388
   (:log_gp_obs)   -0.0044494    0.0038943         0.00295712      0.00285919
   (:log_gy_obs)    0.0376443    0.026348   …      0.00793817      0.00487735
@@ -237,37 +233,37 @@
                  (:e_a₍ₓ₎)     (:e_m₍ₓ₎)     (:Initial_values)
   (:P)            0.0148888    -0.00108998    0.00810035
   (:Pᴸ⁽¹⁾)        0.0146127    -0.000913525   0.00743931
-  (:R)            1.70865e-16  -0.000955353   0.0031234
+  (:R)            1.27548e-15  -0.000955352   0.0031234
    ⋮
   (:log_gy_obs)   0.0328867     8.71844e-5    0.000120402
   (:m)           -0.0          -0.00111942    0.00365979
-  (:n)            0.00322234    3.46638e-5    0.000868554
-  (:y)           -0.018474      0.000187179  -0.0062412
+  (:n)            0.00322234    3.46637e-5    0.000868553
+  (:y)           -0.018474      0.000187179  -0.00624119
 
 [:, :, 97] ~ (:, :, 97):
                  (:e_a₍ₓ₎)     (:e_m₍ₓ₎)     (:Initial_values)
-  (:P)            0.0288478     0.00602704    0.000719126
-  (:Pᴸ⁽¹⁾)        0.0283128     0.00503281    0.000705791
-  (:R)           -1.12372e-15   0.00539571    3.21953e-10
+  (:P)            0.0288477     0.00602704    0.000719125
+  (:Pᴸ⁽¹⁾)        0.0283128     0.00503281    0.00070579
+  (:R)            5.82183e-15   0.00539571    3.21952e-10
    ⋮
-  (:log_gy_obs)  -0.0168618     0.000336676   7.85288e-6
-  (:m)           -8.40128e-16   0.00632233    3.77243e-10
-  (:n)            0.00624343   -0.000223709   0.000155638
-  (:y)           -0.0357943    -0.000897025  -0.000892291
+  (:log_gy_obs)  -0.0168618     0.000336676   7.85287e-6
+  (:m)            8.35267e-15   0.00632233    3.77241e-10
+  (:n)            0.00624342   -0.000223709   0.000155638
+  (:y)           -0.0357942    -0.000897024  -0.000892289
 
 [:, :, 192] ~ (:, :, 192):
                  (:e_a₍ₓ₎)     (:e_m₍ₓ₎)     (:Initial_values)
-  (:P)            0.00129293   -0.00565097    0.000121485
-  (:Pᴸ⁽¹⁾)        0.00126895   -0.00472692    0.000119232
-  (:R)            8.16629e-17  -0.00500928    3.46945e-17
+  (:P)            0.00129291   -0.00565098    0.000121484
+  (:Pᴸ⁽¹⁾)        0.00126893   -0.00472692    0.000119231
+  (:R)           -9.13394e-16  -0.00500928    6.67869e-17
    ⋮
-  (:log_gy_obs)   0.000247177   7.87663e-5    1.32661e-6
-  (:m)            1.06287e-16  -0.00586953    4.16334e-17
-  (:n)            0.000279824   0.000195656   2.62925e-5
-  (:y)           -0.00160426    0.000901756  -0.000150738

We get a 3-dimensional array with variables, shocks, and time periods as dimensions. The shocks dimension also includes the initial value as a residual between the actual value and what was explained by the shocks. This computation also relies on the Kalman smoother.

Last but not least, we can also plot the model estimates and the shock decomposition. The model estimates plot, using plot_model_estimates:

julia> plot_model_estimates(FS2000, data)3-element Vector{Any}:
+  (:log_gy_obs)   0.000247176   7.87664e-5    1.32661e-6
+  (:m)           -1.05673e-15  -0.00586953    8.1532e-17
+  (:n)            0.000279819   0.000195656   2.62924e-5
+  (:y)           -0.00160423    0.000901755  -0.000150737

We get a 3-dimensional array with variables, shocks, and time periods as dimensions. The shocks dimension also includes the initial value as a residual between the actual value and what was explained by the shocks. This computation also relies on the Kalman smoother.
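To work with individual entries of this array programmatically, key-based lookup on the KeyedArray can be used. A sketch, assuming the array above was produced by get_shock_decomposition (the keys :y, :e_a₍ₓ₎, and period 192 are taken from the output above):

decomposition = get_shock_decomposition(FS2000, data)
decomposition(:y, :e_a₍ₓ₎, 192)   # contribution of the e_a shock to y in the last period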

Last but not least, we can also plot the model estimates and the shock decomposition. The model estimates plot, using plot_model_estimates:

julia> plot_model_estimates(FS2000, data)3-element Vector{Any}:
  Plot{Plots.GRBackend() n=38}
  Plot{Plots.GRBackend() n=36}
  Plot{Plots.GRBackend() n=6}

Model estimates

shows the variables of the model (blue), the estimated shocks (in the last panel), and the data (red) used to estimate the model.

The shock decomposition can be plotted using plot_shock_decomposition:

julia> plot_shock_decomposition(FS2000, data)3-element Vector{Any}:
  Plot{Plots.GRBackend() n=50}
  Plot{Plots.GRBackend() n=52}
- Plot{Plots.GRBackend() n=9}

Shock decomposition

and it shows the contribution of the shocks and the contribution of the initial value to the deviations of the variables.

+ Plot{Plots.GRBackend() n=9}

Shock decomposition

and it shows the contribution of the shocks and the contribution of the initial value to the deviations of the variables.

diff --git a/v0.1.36/tutorials/install/index.html b/v0.1.36/tutorials/install/index.html index 69701cff..e3b8af3f 100644 --- a/v0.1.36/tutorials/install/index.html +++ b/v0.1.36/tutorials/install/index.html @@ -1,2 +1,2 @@ -Installation · MacroModelling.jl

Installation

MacroModelling.jl requires julia version 1.8 or higher and an IDE is recommended (e.g. VS Code with the julia extension).

Once set up you can install MacroModelling.jl by typing the following in the julia REPL:

using Pkg; Pkg.add("MacroModelling")
+Installation · MacroModelling.jl

Installation

MacroModelling.jl requires julia version 1.8 or higher and an IDE is recommended (e.g. VS Code with the julia extension).

Once set up you can install MacroModelling.jl by typing the following in the julia REPL:

using Pkg; Pkg.add("MacroModelling")
diff --git a/v0.1.36/tutorials/rbc/index.html b/v0.1.36/tutorials/rbc/index.html index 93fad81b..fa2817c3 100644 --- a/v0.1.36/tutorials/rbc/index.html +++ b/v0.1.36/tutorials/rbc/index.html @@ -22,9 +22,9 @@ δ = 0.02 α = 0.5 β = 0.95 - endRemove redundant variables in non stochastic steady state problem: 0.491 seconds -Set up non stochastic steady state problem: 0.342 seconds -Take symbolic derivatives up to first order: 0.234 seconds + endRemove redundant variables in non stochastic steady state problem: 0.51 seconds +Set up non stochastic steady state problem: 0.353 seconds +Take symbolic derivatives up to first order: 0.206 seconds Find non stochastic steady state: 0.0 seconds Model: RBC Variables @@ -106,10 +106,10 @@ And data, 4×40×1 Array{Float64, 3}: [:, :, 1] ~ (:, :, :simulate): (1) (2)(39) (40) - (:c) 5.93171 5.93828 5.92241 5.91892 - (:k) 47.3485 47.4057 47.2394 47.2194 - (:q) 6.83774 6.94252 7.02507 6.84366 - (:z) -0.00672898 0.00893312 0.0237741 -0.00427716

which returns the simulated data in levels in a 3-dimensional KeyedArray of the same structure as for the IRFs.

Conditional forecasts

Conditional forecasting is a useful tool to incorporate, for example, forecasts into a model and then add shocks on top.

For example, we might be interested in the model dynamics given a path for c for the first 4 quarters, with a negative shock to eps_z arriving in the quarter thereafter. This can be implemented using the get_conditional_forecast function and visualised with the plot_conditional_forecast function.

First, we define the conditions on the endogenous variables as deviations from the non stochastic steady state (c in this case) using a KeyedArray from the AxisKeys package (check get_conditional_forecast for other ways to define the conditions):

julia> using AxisKeys
julia> conditions = KeyedArray(Matrix{Union{Nothing,Float64}}(undef,1,4),Variables = [:c], Periods = 1:4)2-dimensional KeyedArray(NamedDimsArray(...)) with keys: + (:c) 5.9336 5.93924 5.89896 5.90622 + (:k) 47.3658 47.4159 47.0237 47.0863 + (:q) 6.85696 6.93661 6.78392 6.9093 + (:z) -0.00393651 0.00789129 -0.0112774 0.00753332

which returns the simulated data in levels in a 3-dimensional KeyedArray of the same structure as for the IRFs.
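For a quick sketch of how one might work with this array, assuming it was produced by a call such as sim = simulate(RBC) (an assumption; the producing call is not shown here), a single series can be pulled out by key:

sim = simulate(RBC)           # assumed to be the call behind the array above
sim(:c, :, :simulate)         # simulated path of consumption in levels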

Conditional forecasts

Conditional forecasting is a useful tool to incorporate, for example, forecasts into a model and then add shocks on top.

For example, we might be interested in the model dynamics given a path for c for the first 4 quarters, with a negative shock to eps_z arriving in the quarter thereafter. This can be implemented using the get_conditional_forecast function and visualised with the plot_conditional_forecast function.

First, we define the conditions on the endogenous variables as deviations from the non stochastic steady state (c in this case) using a KeyedArray from the AxisKeys package (check get_conditional_forecast for other ways to define the conditions):

julia> using AxisKeys
julia> conditions = KeyedArray(Matrix{Union{Nothing,Float64}}(undef,1,4),Variables = [:c], Periods = 1:4)2-dimensional KeyedArray(NamedDimsArray(...)) with keys: ↓ Variables ∈ 1-element Vector{Symbol}Periods ∈ 4-element UnitRange{Int64} And data, 1×4 Matrix{Union{Nothing, Float64}}: @@ -125,4 +125,4 @@ (:q) -0.102033 0.0832729 0.00177022 0.0016938 (:z) -0.0148217 0.0130675 -3.6669e-30 -7.3338e-31 (:ϵᶻ₍ₓ₎) -1.48217 1.60318 … 0.0 0.0

The function returns a KeyedArray with the values of the endogenous variables and shocks matching the conditions exactly.

We can also plot the conditional forecast. Please note that you need to import the StatsPlots package once before the first plot. In order to plot we can use:

julia> plot_conditional_forecast(RBC, conditions, shocks = shocks, conditions_in_levels = false)1-element Vector{Any}:
- Plot{Plots.GRBackend() n=20}

RBC conditional forecast

and we need to set conditions_in_levels = false since the conditions are defined in deviations.

Note that the stars indicate the values the model is conditioned on.

+ Plot{Plots.GRBackend() n=20}

RBC conditional forecast

and we need to set conditions_in_levels = false since the conditions are defined in deviations.

Note that the stars indicate the values the model is conditioned on.

diff --git a/v0.1.36/tutorials/sw03/index.html b/v0.1.36/tutorials/sw03/index.html index 410cfe55..13d32f8f 100644 --- a/v0.1.36/tutorials/sw03/index.html +++ b/v0.1.36/tutorials/sw03/index.html @@ -1,5 +1,5 @@ -Work with a more complex model - Smets and Wouters (2003) · MacroModelling.jl

Work with a complex model - Smets and Wouters (2003)

This tutorial is intended to show more advanced features of the package which come into play with more complex models. The tutorial will walk through the same steps as for the simple RBC model but will use the nonlinear Smets and Wouters (2003) model instead. Prior knowledge of DSGE models and their solution in practical terms (e.g. having used a mod file with dynare) is useful in understanding this tutorial.

Define the model

The first step is always to name the model and write down the equations. For the Smets and Wouters (2003) model this would go as follows:

julia> using MacroModelling
julia> @model SW03 begin +Work with a more complex model - Smets and Wouters (2003) · MacroModelling.jl

Work with a complex model - Smets and Wouters (2003)

This tutorial is intended to show more advanced features of the package which come into play with more complex models. The tutorial will walk through the same steps as for the simple RBC model but will use the nonlinear Smets and Wouters (2003) model instead. Prior knowledge of DSGE models and their solution in practical terms (e.g. having used a mod file with dynare) is useful in understanding this tutorial.

Define the model

The first step is always to name the model and write down the equations. For the Smets and Wouters (2003) model this would go as follows:

julia> using MacroModelling
julia> @model Smets_Wouters_2003 begin -q[0] + beta * ((1 - tau) * q[1] + epsilon_b[1] * (r_k[1] * z[1] - psi^-1 * r_k[ss] * (-1 + exp(psi * (-1 + z[1])))) * (C[1] - h * C[0])^(-sigma_c)) -q_f[0] + beta * ((1 - tau) * q_f[1] + epsilon_b[1] * (r_k_f[1] * z_f[1] - psi^-1 * r_k_f[ss] * (-1 + exp(psi * (-1 + z_f[1])))) * (C_f[1] - h * C_f[0])^(-sigma_c)) -r_k[0] + alpha * epsilon_a[0] * mc[0] * L[0]^(1 - alpha) * (K[-1] * z[0])^(-1 + alpha) @@ -54,7 +54,7 @@ -C_f[0] - I_f[0] + Pi_ws_f[0] - T_f[0] + Y_f[0] + L_s_f[0] * W_disutil_f[0] - L_f[0] * W_f[0] - psi^-1 * r_k_f[ss] * K_f[-1] * (-1 + exp(psi * (-1 + z_f[0]))) epsilon_b[0] * (K[-1] * r_k[0] - r_k[ss] * K[-1] * exp(psi * (-1 + z[0]))) * (C[0] - h * C[-1])^(-sigma_c) epsilon_b[0] * (K_f[-1] * r_k_f[0] - r_k_f[ss] * K_f[-1] * exp(psi * (-1 + z_f[0]))) * (C_f[0] - h * C_f[-1])^(-sigma_c) - endModel: SW03 + endModel: Smets_Wouters_2003 Variables Total: 54 Auxiliary: 0 @@ -63,7 +63,7 @@ Jumpers: 21 Auxiliary: 0 Shocks: 9 -Parameters: 39

First, we load the package and then use the @model macro to define our model. The first argument after @model is the model name and will be the name of the object in the global environment containing all information regarding the model. The second argument to the macro is the set of equations, which we write down between begin and end. Equations can contain an equality sign; otherwise the expression is assumed to equal 0. Equations cannot span multiple lines (unless you wrap the expression in brackets), and the timing of endogenous variables is expressed in the square brackets following the variable name (e.g. [-1] for the past period). Exogenous variables (shocks) are followed by a keyword in square brackets indicating that they are exogenous (in this case [x]). In this example there are also variables in the non stochastic steady state, denoted by [ss]. Note that names can leverage julia's unicode capabilities (alpha can be written as α).

Define the parameters

Next we need to add the parameters of the model. The macro @parameters takes care of this:

julia> @parameters SW03 begin
+Parameters:   39

First, we load the package and then use the @model macro to define our model. The first argument after @model is the model name and will be the name of the object in the global environment containing all information regarding the model. The second argument to the macro is the set of equations, which we write down between begin and end. Equations can contain an equality sign; otherwise the expression is assumed to equal 0. Equations cannot span multiple lines (unless you wrap the expression in brackets), and the timing of endogenous variables is expressed in the square brackets following the variable name (e.g. [-1] for the past period). Exogenous variables (shocks) are followed by a keyword in square brackets indicating that they are exogenous (in this case [x]). In this example there are also variables in the non stochastic steady state, denoted by [ss]. Note that names can leverage julia's unicode capabilities (alpha can be written as α).
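To make the notation concrete, here is a minimal, purely illustrative equation (the model name Toy, the variable q, the shock eps_q, and the parameters rho and q_bar are not part of the Smets and Wouters (2003) model) using the conventions just described:

@model Toy begin
    # q[0] is today's value, q[-1] yesterday's value and eps_q[x] an exogenous shock;
    # a steady state value would be referenced as q[ss]; rho and q_bar are parameters
    q[0] = (1 - rho) * q_bar + rho * q[-1] + eps_q[x]
end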

Define the parameters

Next we need to add the parameters of the model. The macro @parameters takes care of this:

julia> @parameters Smets_Wouters_2003 begin
            lambda_p = .368
            G_bar = .362
            lambda_w = 0.5
@@ -109,11 +109,11 @@
        
            calibr_pi_obj | 1 = pi_obj[ss]
            calibr_pi | pi[ss] = pi_obj[ss]
-       endRemove redundant variables in non stochastic steady state problem:	6.681 seconds
-Set up non stochastic steady state problem:				2.271 seconds
-Take symbolic derivatives up to first order:				1.219 seconds
-Find non stochastic steady state:					9.556 seconds
-Model:        SW03
+       endRemove redundant variables in non stochastic steady state problem:	7.355 seconds
+Set up non stochastic steady state problem:				2.803 seconds
+Take symbolic derivatives up to first order:				1.295 seconds
+Find non stochastic steady state:					9.702 seconds
+Model:        Smets_Wouters_2003
 Variables
  Total:       54
   Auxiliary:  0
@@ -124,7 +124,7 @@
 Shocks:       9
 Parameters:   39
 Calibration
-equations:    2

The block defining the parameters above has two different inputs.

First, there are simple parameter definitions, written the same way you assign values (e.g. Phi = .819).

Second, there are calibration equations where we treat the value of a parameter as unknown (e.g. calibr_pi_obj) and want an additional equation to hold (e.g. 1 = pi_obj[ss]). The additional equation can contain variables in SS or parameters. Putting it together a calibration equation is defined by the unknown parameter, and the calibration equation, separated by | (e.g. calibr_pi_obj | 1 = pi_obj[ss] and also 1 = pi_obj[ss] | calibr_pi_obj).

Note that we have to write one parameter definition per line.

Given the equations and parameters, the package will first attempt to solve the system of nonlinear equations symbolically (including possible calibration equations). If an analytical solution is not possible, numerical solution methods are used to try and solve it. There is no guarantee that a solution can be found, but it is highly likely, given that a solution exists. The problem setup tries to incorporate parts of the structure of the problem, e.g. bounds on variables: if one equation contains log(k) it must be that k > 0. Nonetheless, the user can also provide useful information such as variable bounds or initial guesses. Bounds can be set by adding another expression to the parameter block e.g.: c > 0. Large values are typically a problem for numerical solvers. Therefore, providing a guess for these values will increase the speed of the solver. Guesses can be provided as a Dict after the model name and before the parameter definitions block, e.g.: @parameters RBC guess = Dict(k => 10) begin ... end.

Plot impulse response functions (IRFs)

A useful output to analyze is the set of IRFs for the exogenous shocks. Calling plot_irf (different names for the same function are also supported: plot_irfs, or plot_IRF) will take care of this. Please note that you need to import the StatsPlots package once before the first plot. In the background the package solves (numerically in this complex case) for the non stochastic steady state (SS) and calculates the first order perturbation solution.

julia> import StatsPlots
julia> plot_irf(SW03)37-element Vector{Any}: +equations: 2

The block defining the parameters above has two different inputs.

First, there are simple parameter definitions, written the same way you assign values (e.g. Phi = .819).

Second, there are calibration equations where we treat the value of a parameter as unknown (e.g. calibr_pi_obj) and want an additional equation to hold (e.g. 1 = pi_obj[ss]). The additional equation can contain variables in SS or parameters. Putting it together a calibration equation is defined by the unknown parameter, and the calibration equation, separated by | (e.g. calibr_pi_obj | 1 = pi_obj[ss] and also 1 = pi_obj[ss] | calibr_pi_obj).

Note that we have to write one parameter definition per line.

Given the equations and parameters, the package will first attempt to solve the system of nonlinear equations symbolically (including possible calibration equations). If an analytical solution is not possible, numerical solution methods are used to try and solve it. There is no guarantee that a solution can be found, but it is highly likely, given that a solution exists. The problem setup tries to incorporate parts of the structure of the problem, e.g. bounds on variables: if one equation contains log(k) it must be that k > 0. Nonetheless, the user can also provide useful information such as variable bounds or initial guesses. Bounds can be set by adding another expression to the parameter block e.g.: c > 0. Large values are typically a problem for numerical solvers. Therefore, providing a guess for these values will increase the speed of the solver. Guesses can be provided as a Dict after the model name and before the parameter definitions block, e.g.: @parameters Smets_Wouters_2003 guess = Dict(k => 10) begin ... end.
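Putting the two hints together, a parameter block for a hypothetical model could provide both a bound and a guess as follows; this is a sketch, and the model name SomeModel, the parameter names, and the values are purely illustrative:

@parameters SomeModel guess = Dict(k => 10) begin
    alpha = 0.3      # plain parameter definition
    beta  = 0.99
    k > 0            # bound passed on to the steady state solver
end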

Plot impulse response functions (IRFs)

A useful output to analyze is the set of IRFs for the exogenous shocks. Calling plot_irf (different names for the same function are also supported: plot_irfs, or plot_IRF) will take care of this. Please note that you need to import the StatsPlots package once before the first plot. In the background the package solves (numerically in this complex case) for the non stochastic steady state (SS) and calculates the first order perturbation solution.

julia> import StatsPlots
julia> plot_irf(Smets_Wouters_2003)37-element Vector{Any}: Plot{Plots.GRBackend() n=36} Plot{Plots.GRBackend() n=36} Plot{Plots.GRBackend() n=32} @@ -144,19 +144,19 @@ Plot{Plots.GRBackend() n=20} Plot{Plots.GRBackend() n=34} Plot{Plots.GRBackend() n=36} - Plot{Plots.GRBackend() n=16}

RBC IRF

When the model is solved the first time (in this case by calling plot_irf), the package breaks down the steady state problem into independent blocks and first attempts to solve them symbolically and if that fails numerically.

The plots show the responses of the endogenous variables to a one standard deviation positive (indicated by Shock⁺ in the chart title) unanticipated shock. Therefore, there are as many subplots as there are combinations of shocks and endogenous variables (which are impacted by the shock). Plots are composed of up to 9 subplots, and the plot title shows the model name followed by the name of the shock and which plot we are seeing out of the plots for this shock (e.g. (1/3) means we see the first out of three plots for this shock). Subplots show the sorted endogenous variables with the left y-axis showing the level of the respective variable and the right y-axis showing the percent deviation from the SS (if the variable is strictly positive). The horizontal black line marks the SS.

Explore other parameter values

Playing around with the model can be especially insightful in the early phase of model development. The package tries to facilitate this process to the extent possible. Typically one wants to try different parameter values and see how the IRFs change. This can be done by using the parameters argument of the plot_irf function. We pass a Pair with the Symbol of the parameter (: in front of the parameter name) we want to change and its new value to the parameter argument (e.g. :alpha => 0.305). Furthermore, we want to focus on certain shocks and variables. We select for the example the eta_R shock by passing it as a Symbol to the shocks argument of the plot_irf function. For the variables we choose to plot: U, Y, I, R, and C and achieve that by passing the Vector of Symbols to the variables argument of the plot_irf function:

julia> plot_irf(SW03,
+ Plot{Plots.GRBackend() n=16}

RBC IRF

When the model is solved the first time (in this case by calling plot_irf), the package breaks down the steady state problem into independent blocks and first attempts to solve them symbolically and if that fails numerically.

The plots show the responses of the endogenous variables to a one standard deviation positive (indicated by Shock⁺ in the chart title) unanticipated shock. Therefore, there are as many subplots as there are combinations of shocks and endogenous variables (which are impacted by the shock). Plots are composed of up to 9 subplots, and the plot title shows the model name followed by the name of the shock and which plot we are seeing out of the plots for this shock (e.g. (1/3) means we see the first out of three plots for this shock). Subplots show the sorted endogenous variables with the left y-axis showing the level of the respective variable and the right y-axis showing the percent deviation from the SS (if the variable is strictly positive). The horizontal black line marks the SS.

Explore other parameter values

Playing around with the model can be especially insightful in the early phase of model development. The package tries to facilitate this process to the extent possible. Typically one wants to try different parameter values and see how the IRFs change. This can be done by using the parameters argument of the plot_irf function. We pass a Pair with the Symbol of the parameter (: in front of the parameter name) we want to change and its new value to the parameter argument (e.g. :alpha => 0.305). Furthermore, we want to focus on certain shocks and variables. We select for the example the eta_R shock by passing it as a Symbol to the shocks argument of the plot_irf function. For the variables we choose to plot: U, Y, I, R, and C and achieve that by passing the Vector of Symbols to the variables argument of the plot_irf function:

julia> plot_irf(Smets_Wouters_2003,
                 parameters = :alpha => 0.305,
                 variables = [:U,:Y,:I,:R,:C],
                 shocks = :eta_R)1-element Vector{Any}:
- Plot{Plots.GRBackend() n=18}

IRF plot

First, the package finds the new steady state, solves the model dynamics around it and saves the new parameters and solution in the model object. Second, note that with the parameters the IRFs changed (e.g. compare the y-axis values for U). Updating the plot for new parameters is significantly faster than calling it the first time. This is because the first call triggers compilations of the model functions, and once compiled the user benefits from the performance of the specialised compiled code. Furthermore, finding the SS from a valid SS as a starting point is faster.

Plot model simulation

Another insightful output is a simulation of the model. Here we can use the plot_simulations function. Again we only want to look at a subset of the variables and specify them in the variables argument. Please note that you need to import the StatsPlots package once before the first plot. To the same effect we can use the plot_irf function, specify in the shocks argument that we want to :simulate the model, and set the periods argument to 100.

julia> plot_simulations(SW03, variables = [:U,:Y,:I,:R,:C])1-element Vector{Any}:
- Plot{Plots.GRBackend() n=12}

Simulate SW03

The plots show the model's endogenous variables in response to random draws for all exogenous shocks over 100 periods.

Plot specific series of shocks

Sometimes one has a specific series of shocks in mind and wants to see the corresponding model response of endogenous variables. This can be achieved by passing a Matrix or KeyedArray of the series of shocks to the shocks argument of the plot_irf function. Let's assume there is a positive 1 standard deviation shock to eta_b in period 2 and a negative 1 standard deviation shock to eta_w in period 12. This can be implemented as follows:

julia> using AxisKeys
julia> shock_series = KeyedArray(zeros(2,12), Shocks = [:eta_b, :eta_w], Periods = 1:12)2-dimensional KeyedArray(NamedDimsArray(...)) with keys: + Plot{Plots.GRBackend() n=18}

IRF plot

First, the package finds the new steady state, solves the model dynamics around it and saves the new parameters and solution in the model object. Second, note that with the parameters the IRFs changed (e.g. compare the y-axis values for U). Updating the plot for new parameters is significantly faster than calling it the first time. This is because the first call triggers compilations of the model functions, and once compiled the user benefits from the performance of the specialised compiled code. Furthermore, finding the SS from a valid SS as a starting point is faster.

Plot model simulation

Another insightful output is a simulation of the model. Here we can use the plot_simulations function. Again we only want to look at a subset of the variables and specify them in the variables argument. Please note that you need to import the StatsPlots package once before the first plot. To the same effect we can use the plot_irf function, specify in the shocks argument that we want to :simulate the model, and set the periods argument to 100.

julia> plot_simulations(Smets_Wouters_2003, variables = [:U,:Y,:I,:R,:C])1-element Vector{Any}:
+ Plot{Plots.GRBackend() n=12}

Simulate Smets_Wouters_2003

The plots show the model's endogenous variables in response to random draws for all exogenous shocks over 100 periods.
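The alternative route via plot_irf mentioned above would look roughly as follows; this is a sketch based directly on that description, using the keyword names that appear throughout this tutorial:

plot_irf(Smets_Wouters_2003,
         shocks = :simulate,
         periods = 100,
         variables = [:U,:Y,:I,:R,:C])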

Plot specific series of shocks

Sometimes one has a specific series of shocks in mind and wants to see the corresponding model response of endogenous variables. This can be achieved by passing a Matrix or KeyedArray of the series of shocks to the shocks argument of the plot_irf function. Let's assume there is a positive 1 standard deviation shock to eta_b in period 2 and a negative 1 standard deviation shock to eta_w in period 12. This can be implemented as follows:

julia> using AxisKeys
julia> shock_series = KeyedArray(zeros(2,12), Shocks = [:eta_b, :eta_w], Periods = 1:12)2-dimensional KeyedArray(NamedDimsArray(...)) with keys: ↓ Shocks ∈ 2-element Vector{Symbol}Periods ∈ 12-element UnitRange{Int64} And data, 2×12 Matrix{Float64}: (1) (2) (3) (4)(9) (10) (11) (12) (:eta_b) 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 - (:eta_w) 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
julia> shock_series[1,2] = 11
julia> shock_series[2,12] = -1-1
julia> plot_irf(SW03, shocks = shock_series, variables = [:W,:r_k,:w_star,:R])1-element Vector{Any}: - Plot{Plots.GRBackend() n=16}

Series of shocks RBC

First, we construct the KeyedArray containing the series of shocks and pass it to the shocks argument. The plot shows the paths of the selected variables for the two shocks hitting the economy in periods 2 and 12, and for the 40 quarters thereafter.

Model statistics

Steady state

The package solves for the SS automatically and we got an idea of the SS values in the plots. If we want to see the SS values and the derivatives of the SS with respect to the model parameters we can call get_steady_state. The model has 39 parameters and 54 variables. Since we are not interested in all derivatives for all parameters we select a subset. This can be done by passing on a Vector of Symbols of the parameters to the parameter_derivatives argument:

julia> get_steady_state(SW03, parameter_derivatives = [:alpha,:beta])2-dimensional KeyedArray(NamedDimsArray(...)) with keys:
+  (:eta_w)    0.0    0.0    0.0    0.0       0.0     0.0     0.0     0.0
julia> shock_series[1,2] = 11
julia> shock_series[2,12] = -1-1
julia> plot_irf(Smets_Wouters_2003, shocks = shock_series, variables = [:W,:r_k,:w_star,:R])1-element Vector{Any}: + Plot{Plots.GRBackend() n=16}

Series of shocks RBC

First, we construct the KeyedArray containing the series of shocks and pass it to the shocks argument. The plot shows the paths of the selected variables for the two shocks hitting the economy in periods 2 and 12, and for the 40 quarters thereafter.

Model statistics

Steady state

The package solves for the SS automatically and we got an idea of the SS values in the plots. If we want to see the SS values and the derivatives of the SS with respect to the model parameters we can call get_steady_state. The model has 39 parameters and 54 variables. Since we are not interested in all derivatives for all parameters we select a subset. This can be done by passing on a Vector of Symbols of the parameters to the parameter_derivatives argument:

julia> get_steady_state(Smets_Wouters_2003, parameter_derivatives = [:alpha,:beta])2-dimensional KeyedArray(NamedDimsArray(...)) with keys:
 ↓   Variables_and_calibrated_parameters ∈ 56-element Vector{Symbol}Steady_state_and_∂steady_state∂parameter ∈ 3-element Vector{Symbol}
 And data, 56×3 Matrix{Float64}:
@@ -168,13 +168,13 @@
   (:I)               0.456928         3.13855      18.5261
   (:I_f)             0.456928         3.13855      18.5261
    ⋮
-  (:r_k)             0.035101        -6.59434e-16  -1.0203
-  (:r_k_f)           0.035101        -2.13196e-16  -1.0203
+  (:r_k)             0.035101        -1.95456e-16  -1.0203
+  (:r_k_f)           0.035101        -3.66578e-16  -1.0203
   (:w_star)          1.14353          4.37676      14.5872
-  (:z)               1.0              3.81412e-15  -3.52282e-16
-  (:z_f)             1.0             -3.31797e-16  -1.57121e-14
+  (:z)               1.0             -1.1115e-15    4.12391e-16
+  (:z_f)             1.0             -2.50298e-15  -6.02757e-15
   (:calibr_pi_obj)   1.0              0.0           0.0
-  (:calibr_pi)       0.0              0.0           0.0

The first column of the returned matrix shows the SS while the second to last columns show the derivatives of the SS values (indicated in the rows) with respect to the parameters (indicated in the columns). For example, the derivative of C with respect to beta is 14.4994. This means that if we increase beta by 1, C would increase by 14.4994 approximately. Let's see how this plays out by changing beta from 0.99 to 0.991 (a change of +0.001):

julia> get_steady_state(SW03,
+  (:calibr_pi)       0.0              0.0           0.0

The first column of the returned matrix shows the SS while the second to last columns show the derivatives of the SS values (indicated in the rows) with respect to the parameters (indicated in the columns). For example, the derivative of C with respect to beta is 14.4994. This means that if we increase beta by 1, C would increase by 14.4994 approximately. Let's see how this plays out by changing beta from 0.99 to 0.991 (a change of +0.001):

julia> get_steady_state(Smets_Wouters_2003,
                         parameter_derivatives = [:alpha,:G_bar],
                         parameters = :beta => .991)2-dimensional KeyedArray(NamedDimsArray(...)) with keys:
 ↓   Variables_and_calibrated_parameters ∈ 56-element Vector{Symbol}
@@ -188,23 +188,23 @@
   (:I)               0.47613          0.102174      3.29983
   (:I_f)             0.47613          0.102174      3.29983
    ⋮
-  (:r_k)             0.0340817        9.56313e-18  -1.49371e-16
-  (:r_k_f)           0.0340817       -1.46491e-17   2.56188e-16
-  (:w_star)          1.15842          2.58338e-16   4.5044
-  (:z)               1.0             -8.89889e-17  -1.35345e-15
-  (:z_f)             1.0              3.73191e-17   1.95831e-15
+  (:r_k)             0.0340817       -3.5695e-20   -6.34317e-17
+  (:r_k_f)           0.0340817        6.99695e-17  -9.79316e-16
+  (:w_star)          1.15842          6.50769e-16   4.5044
+  (:z)               1.0             -7.25976e-17  -7.20244e-16
+  (:z_f)             1.0             -3.20374e-16   1.79173e-15
   (:calibr_pi_obj)   1.0              0.0           0.0
-  (:calibr_pi)       0.0              0.0           0.0

Note that get_steady_state like all other get functions has the parameters argument. Hence, whatever output we are looking at we can change the parameters of the model.

The new value of beta changed the SS as expected and C increased by 0.01465. The elasticity (0.01465/0.001) comes close to the partial derivative calculated previously. The derivatives help in understanding the effect of parameter changes on the steady state and make for easier navigation of the parameter space.

Standard deviations

Next to the SS we can also show the model implied standard deviations of the model. get_standard_deviation takes care of this. Additionally we will set the parameter values to what they were in the beginning by passing on a Tuple of Pairs containing the Symbols of the parameters to be changed and their new (initial) values (e.g. (:alpha => 0.3, :beta => .99)).

julia> get_standard_deviation(SW03,
+  (:calibr_pi)       0.0              0.0           0.0

Note that get_steady_state like all other get functions has the parameters argument. Hence, whatever output we are looking at we can change the parameters of the model.

The new value of beta changed the SS as expected and C increased by 0.01465. The elasticity (0.01465/0.001) comes close to the partial derivative calculated previously. The derivatives help in understanding the effect of parameter changes on the steady state and make for easier navigation of the parameter space.
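The back-of-the-envelope calculation behind that statement, using only the numbers quoted above:

ΔC = 0.01465          # change in C reported above
Δβ = 0.991 - 0.99     # change in beta
ΔC / Δβ               # ≈ 14.65, close to the partial derivative of 14.4994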

Standard deviations

Next to the SS we can also show the model implied standard deviations of the model. get_standard_deviation takes care of this. Additionally we will set the parameter values to what they were in the beginning by passing on a Tuple of Pairs containing the Symbols of the parameters to be changed and their new (initial) values (e.g. (:alpha => 0.3, :beta => .99)).

julia> get_standard_deviation(Smets_Wouters_2003,
                               parameter_derivatives = [:alpha,:beta],
                               parameters = (:alpha => 0.3, :beta => .99))2-dimensional KeyedArray(NamedDimsArray(...)) with keys:
 ↓   Variables ∈ 54-element Vector{Symbol}Standard_deviation_and_∂standard_deviation∂parameter ∈ 3-element Vector{Symbol}
 And data, 54×3 Matrix{Float64}:
              (:Standard_deviation)   (:alpha)        (:beta)
-  (:C)        2.0521                  9.69534        -9.78064
-  (:C_f)      3.05478                13.0916         -0.834583
-  (:G)        0.373165               -8.42617e-11     3.20633e-9
-  (:G_f)      0.373165                9.40394e-11    -1.1577e-8
+  (:C)        2.0521                  9.69534        -9.7807
+  (:C_f)      3.05478                13.0916         -0.834574
+  (:G)        0.373165                1.11462e-10    -4.29636e-9
+  (:G_f)      0.373165                6.85886e-11    -3.85164e-9
   (:I)        3.00453                13.9594         95.9106
   (:I_f)      3.46854                14.3687        105.856
    ⋮
@@ -214,7 +214,7 @@
   (:r_k_f)    0.024052               -0.0464337      -0.879571
   (:w_star)   1.78357                 7.29607        16.3162
   (:z)        3.6145                 -5.55293       -23.9594
-  (:z_f)      4.05456                -7.82757       -30.4171

The function returns the model implied standard deviations of the model variables and their derivatives with respect to the model parameters. For example, the derivative of the standard deviation of q with respect to alpha is -19.0184. In other words, the standard deviation of q decreases with increasing alpha.

Correlations

Another useful statistic is the model implied correlation of variables. We use get_correlation for this:

julia> get_correlation(SW03)2-dimensional KeyedArray(NamedDimsArray(...)) with keys:
+  (:z_f)      4.05456                -7.82757       -30.4171

The function returns the model implied standard deviations of the model variables and their derivatives with respect to the model parameters. For example, the derivative of the standard deviation of q with respect to alpha is -19.0184. In other words, the standard deviation of q decreases with increasing alpha.

Correlations

Another useful statistic is the model implied correlation of variables. We use get_correlation for this:

julia> get_correlation(Smets_Wouters_2003)2-dimensional KeyedArray(NamedDimsArray(...)) with keys:
 ↓   Variables ∈ 54-element Vector{Symbol}𝑉𝑎𝑟𝑖𝑎𝑏𝑙𝑒𝑠 ∈ 54-element Vector{Symbol}
 And data, 54×54 Matrix{Float64}:
@@ -232,7 +232,7 @@
   (:r_k_f)   -0.235315   -0.0656837     -0.235914    0.708626    1.0
   (:w_star)   0.268043   -0.270933       1.0         0.0530152  -0.235914
   (:z)       -0.376451   -0.360704       0.0530152   1.0         0.708626
-  (:z_f)     -0.235315   -0.0656837     -0.235914    0.708626    1.0

Autocorrelations

Next, we have a look at the model implied autocorrelations of model variables using the get_autocorrelation function:

julia> get_autocorrelation(SW03)2-dimensional KeyedArray(NamedDimsArray(...)) with keys:
+  (:z_f)     -0.235315   -0.0656837     -0.235914    0.708626    1.0

Autocorrelations

Next, we have a look at the model implied autocorrelations of model variables using the get_autocorrelation function:

julia> get_autocorrelation(Smets_Wouters_2003)2-dimensional KeyedArray(NamedDimsArray(...)) with keys:
 ↓   Variables ∈ 54-element Vector{Symbol}Autocorrelation_orders ∈ 5-element UnitRange{Int64}
 And data, 54×5 Matrix{Float64}:
@@ -250,25 +250,25 @@
   (:r_k_f)     0.987354    0.964983    0.939136    0.912395    0.885865
   (:w_star)    0.958378    0.913491    0.865866    0.816377    0.76593
   (:z)         0.982643    0.964915    0.946827    0.928418    0.90976
-  (:z_f)       0.987354    0.964983    0.939136    0.912395    0.885865

Variance decomposition

The model implied contribution of each shock to the variance of the model variables can be calculated using the get_variance_decomposition function:

julia> get_variance_decomposition(SW03)2-dimensional KeyedArray(NamedDimsArray(...)) with keys:
+  (:z_f)       0.987354    0.964983    0.939136    0.912395    0.885865

Variance decomposition

The model implied contribution of each shock to the variance of the model variables can be calculated using the get_variance_decomposition function:

julia> get_variance_decomposition(Smets_Wouters_2003)2-dimensional KeyedArray(NamedDimsArray(...)) with keys:
 ↓   Variables ∈ 54-element Vector{Symbol}Shocks ∈ 9-element Vector{Symbol}
 And data, 54×9 Matrix{Float64}:
              (:eta_G)      (:eta_I)(:eta_pi)     (:eta_w)
   (:C)        0.00393945    0.00120919       0.00114385    3.04469e-7
-  (:C_f)      0.00271209    0.000559959      6.17323e-29   8.92466e-33
-  (:G)        1.0           1.37467e-27      2.97128e-27   1.93563e-31
-  (:G_f)      1.0           2.63731e-27      9.61128e-27   9.07727e-31
+  (:C_f)      0.00271209    0.000559959      4.45339e-28   4.58183e-32
+  (:G)        1.0           3.70875e-27      8.40534e-28   2.33665e-31
+  (:G_f)      1.0           1.18096e-27      1.44324e-28   2.46004e-32
   (:I)        0.00281317    0.00430133   …   0.00143853    4.08588e-7
-  (:I_f)      0.00297832    0.00230662       6.26823e-28   5.50665e-32
+  (:I_f)      0.00297832    0.00230662       2.02583e-28   7.08682e-32
    ⋮                                     ⋱
   (:q)        0.00661399    0.00311207       0.00147593    5.46081e-7
-  (:q_f)      0.00680774    0.00191603       3.08693e-28   3.77049e-32
+  (:q_f)      0.00680774    0.00191603       3.2038e-28    1.09488e-31
   (:r_k)      0.00532774    0.0044839    …   0.00144016    1.57736e-6
-  (:r_k_f)    0.00496212    0.00271529       1.47349e-27   1.37743e-31
+  (:r_k_f)    0.00496212    0.00271529       4.46976e-27   6.37958e-31
   (:w_star)   0.000299473   0.000385553      0.00378742    5.33425e-5
   (:z)        0.00532774    0.0044839        0.00144016    1.57736e-6
-  (:z_f)      0.00496212    0.00271529       1.47466e-27   1.37891e-31

Conditional variance decomposition

Last but not least, we have a look at the model implied contribution of each shock per period to the variance of the model variables (also called forecast error variance decomposition) using the get_conditional_variance_decomposition function:

julia> get_conditional_variance_decomposition(SW03)3-dimensional KeyedArray(NamedDimsArray(...)) with keys:
+  (:z_f)      0.00496212    0.00271529       4.48704e-27   6.39522e-31

Conditional variance decomposition

Last but not least, we have a look at the model implied contribution of each shock per period to the variance of the model variables (also called forecast error variance decomposition) using the get_conditional_variance_decomposition function:

julia> get_conditional_variance_decomposition(Smets_Wouters_2003)3-dimensional KeyedArray(NamedDimsArray(...)) with keys:
 ↓   Variables ∈ 54-element Vector{Symbol}Shocks ∈ 9-element Vector{Symbol}Periods ∈ 21-element Vector{Float64}
@@ -277,55 +277,55 @@
 [:, :, 1] ~ (:, :, 1.0):
              (:eta_G)      (:eta_I)(:eta_pi)     (:eta_w)
   (:C)        0.00112613    0.000121972      0.000641471   1.13378e-8
-  (:C_f)      0.000858896   8.75785e-6       1.09656e-30   6.36773e-36
-  (:G)        1.0           8.23169e-34      5.77917e-34   7.17518e-34
+  (:C_f)      0.000858896   8.75785e-6       2.43515e-31   4.99092e-34
+  (:G)        1.0           7.07279e-32      1.54421e-32   4.20519e-34
    ⋮                                     ⋱
-  (:r_k_f)    0.00247112    2.51971e-5   …   3.37779e-30   3.6962e-35
+  (:r_k_f)    0.00247112    2.51971e-5   …   9.37925e-31   3.7531e-34
   (:w_star)   0.000499182   5.21776e-5       0.00154091    0.000651141
   (:z)        0.00467774    2.66346e-5       0.000191172   1.41817e-5
-  (:z_f)      0.00247112    2.51971e-5       3.37779e-30   3.6962e-35
+  (:z_f)      0.00247112    2.51971e-5       9.37925e-31   3.7531e-34
 
 [:, :, 11] ~ (:, :, 11.0):
              (:eta_G)      (:eta_I)(:eta_pi)     (:eta_w)
   (:C)        0.00206241    0.000426699      0.000835055   1.18971e-7
-  (:C_f)      0.00154049    0.000234041      3.12416e-30   4.8928e-33
-  (:G)        1.0           2.385e-28        3.08957e-27   1.01738e-31
+  (:C_f)      0.00154049    0.000234041      2.91219e-28   1.82237e-32
+  (:G)        1.0           5.45023e-28      2.89701e-28   1.23378e-31
    ⋮                                     ⋱
-  (:r_k_f)    0.00172562    7.81447e-5   …   5.41185e-27   3.3167e-31
+  (:r_k_f)    0.00172562    7.81447e-5   …   2.47098e-26   2.70479e-30
   (:w_star)   0.000290106   0.000212217      0.00281349    7.51886e-5
   (:z)        0.0045622     0.000147997      0.00200035    8.29763e-6
-  (:z_f)      0.00172562    7.81447e-5       5.4187e-27    3.3258e-31
+  (:z_f)      0.00172562    7.81447e-5       2.48097e-26   2.71504e-30
 
 [:, :, 21] ~ (:, :, Inf):
              (:eta_G)      (:eta_I)(:eta_pi)     (:eta_w)
-  (:C)        0.00393945    0.00120919       0.00114385    3.0448e-7
-  (:C_f)      0.00271209    0.000559959      1.47366e-27   9.7329e-32
-  (:G)        1.0           1.37463e-27     -3.40354e-25   1.93564e-31
+  (:C)        0.00393945    0.00120919       0.00114385    3.04467e-7
+  (:C_f)      0.00271209    0.000559959      3.14453e-28  -7.74191e-30
+  (:G)        1.0           1.93619e-26      8.60407e-28   2.33717e-31
    ⋮                                     ⋱
-  (:r_k_f)    0.00496212    0.00271529   …  -1.05153e-25   1.42884e-31
+  (:r_k_f)    0.00496212    0.00271529   …   5.13763e-27  -1.73289e-31
   (:w_star)   0.000299473   0.000385553      0.00378742    5.33425e-5
-  (:z)        0.00532774    0.0044839        0.00144016    1.57739e-6
-  (:z_f)      0.00496212    0.00271529      -1.05315e-25   1.43032e-31

Plot conditional variance decomposition

Especially for the conditional variance decomposition, it is convenient to look at a plot instead of the raw numbers. This can be done using the plot_conditional_variance_decomposition function. Please note that you need to import the StatsPlots package once before the first plot.

julia> plot_conditional_variance_decomposition(SW03, variables = [:U,:Y,:I,:R,:C])1-element Vector{Any}:
- Plot{Plots.GRBackend() n=54}

FEVD SW03

Model solution

Further insightful outputs are the policy and transition functions of the first order perturbation solution. To retrieve the solution we call the function get_solution:

julia> get_solution(SW03)2-dimensional KeyedArray(NamedDimsArray(...)) with keys:
+  (:z)        0.00532774    0.0044839        0.00144016    1.57735e-6
+  (:z_f)      0.00496212    0.00271529       5.15806e-27  -1.71693e-31

Plot conditional variance decomposition

Especially for the conditional variance decomposition, it is convenient to look at a plot instead of the raw numbers. This can be done using the plot_conditional_variance_decomposition function. Please note that you need to import the StatsPlots package once before the first plot.

julia> plot_conditional_variance_decomposition(Smets_Wouters_2003, variables = [:U,:Y,:I,:R,:C])1-element Vector{Any}:
+ Plot{Plots.GRBackend() n=54}

FEVD Smets_Wouters_2003

Model solution

Further insightful outputs are the policy and transition functions of the first order perturbation solution. To retrieve the solution we call the function get_solution:

julia> get_solution(Smets_Wouters_2003)2-dimensional KeyedArray(NamedDimsArray(...)) with keys:
 ↓   Steady_state__States__Shocks ∈ 29-element Vector{Symbol}Variables ∈ 54-element Vector{Symbol}
 And data, 29×54 adjoint(::Matrix{Float64}) with eltype Float64:
                    (:C)         (:C_f)(:z)         (:z_f)
   (:Steady_state)   1.20438      1.20438          1.0          1.0
-  (:C₍₋₁₎)          0.536418    -3.36322e-15      0.214756    -2.35619e-14
+  (:C₍₋₁₎)          0.536418     1.9662e-15       0.214756     1.87886e-14
   (:C_f₍₋₁₎)        0.0163196    0.410787         0.00766973   0.139839
-  (:I₍₋₁₎)         -0.102895     1.29327e-14      0.317145     1.01754e-13
+  (:I₍₋₁₎)         -0.102895     7.06502e-14      0.317145     4.47815e-13
   (:I_f₍₋₁₎)        0.0447033   -0.24819      …   0.0276803    0.213957
-  (:K₍₋₁₎)          0.00880026  -1.33396e-15     -0.0587166   -3.98696e-15
+  (:K₍₋₁₎)          0.00880026  -1.0321e-15      -0.0587166   -1.88265e-14
    ⋮                                          ⋱   ⋮
   (:eta_L₍ₓ₎)       0.252424     0.838076         0.0982057    0.430915
-  (:eta_R₍ₓ₎)      -0.229757     1.5001e-14      -0.179716    -1.28709e-14
+  (:eta_R₍ₓ₎)      -0.229757     7.64564e-15     -0.179716    -6.91023e-15
   (:eta_a₍ₓ₎)       0.185454     0.699595     …  -0.551529     0.348639
   (:eta_b₍ₓ₎)       0.087379     0.0150811        0.0338767   -0.013001
-  (:eta_p₍ₓ₎)      -9.72053e-5   5.79773e-17     -0.00134983  -2.38144e-17
-  (:eta_pi₍ₓ₎)      0.0100939   -1.1438e-15       0.00816804   1.02027e-15
-  (:eta_w₍ₓ₎)      -4.24362e-5   2.75629e-18      0.00222469   3.37502e-18

The solution provides information about how past states and present shocks impact present variables. The first row contains the steady state (SS) of the variables denoted in the columns. The rows from the second onward contain the past states, with the time index ₍₋₁₎, and the present shocks, with exogenous variables denoted by ₍ₓ₎. For example, the immediate impact of a shock to eta_w on z is 0.00222469.
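
Since the returned object is a KeyedArray, single entries can also be read off programmatically. A minimal sketch, assuming the key-based lookup syntax provided by AxisKeys (the value is the one shown in the table above):

julia> sol = get_solution(Smets_Wouters_2003);
julia> sol(:eta_w₍ₓ₎, :z)    # immediate impact of the eta_w shock on z
0.00222469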

There is also the possibility to visually inspect the solution using the plot_solution function. Please note that you need to import the StatsPlots package once before the first plot.

julia> plot_solution(Smets_Wouters_2003, :pi, variables = [:C,:I,:K,:L,:W,:R])
1-element Vector{Any}:
 Plot{Plots.GRBackend() n=15}

Smets_Wouters_2003 solution

The chart shows the first order perturbation solution mapping from the past state pi to the present variables C, I, K, L, W, and R. The state variable covers a range of two standard deviations around the non stochastic steady state and all other states remain in the non stochastic steady state.

Obtain array of IRFs or model simulations

Last but not least the user might want to obtain simulated time series of the model or IRFs without plotting them. For IRFs this is possible by calling get_irf:

julia> get_irf(Smets_Wouters_2003)
3-dimensional KeyedArray(NamedDimsArray(...)) with keys:
↓   Variables ∈ 54-element Vector{Symbol}
→   Periods ∈ 40-element UnitRange{Int64}
◪   Shocks ∈ 9-element Vector{Symbol}
⋮
             (1)            (2)          …  (39)            (40)
  (:C)         0.185454       0.266366           0.0448634       0.0445393
  (:C_f)       0.699595       0.807224           0.0641954       0.0635383
  (:G)        -3.04128e-16   -1.61702e-14       -1.43235e-14    -1.40698e-14
   ⋮                                       ⋱     ⋮
  (:r_k_f)     0.00206815     0.00250599   …    -0.0012247      -0.00120755
  (:w_star)   -0.0685999     -0.0257599          0.0104469       0.00989232
⋮
[:, :, 9] ~ (:, :, :eta_w):
             (1)            (2)          …  (39)            (40)
  (:C)        -4.24362e-5    -7.53116e-5        -0.000102464    -0.000101596
  (:C_f)      -2.44018e-17    1.84578e-16       -2.23551e-17    -1.98313e-17
  (:G)         2.4126e-18    -6.61872e-17        1.89553e-17     1.85081e-17
   ⋮                                       ⋱     ⋮
  (:r_k_f)     6.3797e-20     9.37099e-18  …    -4.48838e-19    -3.86194e-19
  (:w_star)    0.012691       0.00165858        -8.17445e-5     -7.60783e-5
  (:z)         0.00222469     0.00187724         0.000271989     0.000277738
  (:z_f)       1.07546e-17    1.58276e-15       -7.5349e-17     -6.47676e-17

which returns a 3-dimensional KeyedArray with variables (absolute deviations from the relevant steady state by default) in rows, the period in columns, and the shocks as the third dimension.
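
Individual IRFs can be extracted from this array by key. A minimal sketch, assuming the AxisKeys lookup syntax with the axis names shown above:

julia> irfs = get_irf(Smets_Wouters_2003);
julia> irfs(:C, :, :eta_w)    # 40-period response of C to the eta_w shock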

For simulations this is possible by calling simulate:

julia> simulate(Smets_Wouters_2003)
3-dimensional KeyedArray(NamedDimsArray(...)) with keys:
↓   Variables ∈ 54-element Vector{Symbol}
→   Periods ∈ 40-element UnitRange{Int64}
◪   Shocks ∈ 1-element Vector{Symbol}
And data, 54×40×1 Array{Float64, 3}:
[:, :, 1] ~ (:, :, :simulate):
             (1)          (2)        …  (39)          (40)
  (:C)         1.77365      2.32079          2.35199       2.28005
  (:C_f)       1.02874      2.21491          2.10372       2.07571
  (:G)         0.319892     0.178097         0.341678      0.42833
  (:G_f)       0.319892     0.178097         0.341678      0.42833
  (:I)         0.769242     1.15501    …     0.0119923     0.299007
  (:I_f)       0.392206     0.757379        -0.716135     -0.403297
   ⋮                                   ⋱     ⋮
  (:q_f)       2.54091      1.81632          0.505155      1.28558
  (:r_k)       0.0386991    0.0340932  …     0.0357681     0.0364107
  (:r_k_f)     0.0345109    0.0376428        0.0270067     0.0284239
  (:w_star)    2.05142      1.9815           1.02772       1.11208
  (:z)         1.60655      0.830116         1.11245       1.22078
  (:z_f)       0.900519     1.42849         -0.364497     -0.125593

which returns the simulated data in levels in a 3-dimensional KeyedArray of the same structure as for the IRFs.
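
The simulated series can be sliced in the same way as the IRFs. A minimal sketch, again assuming the AxisKeys lookup syntax (the single entry of the shock dimension is :simulate):

julia> sim = simulate(Smets_Wouters_2003);
julia> sim(:Y, :, :simulate)    # simulated path of Y in levels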

Conditional forecasts

Conditional forecasting is a useful tool to incorporate, for example, forecasts into a model and then add shocks on top.

For example, we might be interested in the model dynamics given a path for Y and pi over the first 4 quarters, with a negative shock to eta_w arriving in the following quarter. Furthermore, we want only a subset of shocks to be used in the first two periods to match the conditions on the endogenous variables. This can be implemented using the get_conditional_forecast function and visualised with the plot_conditional_forecast function.

First, we define the conditions on the endogenous variables as deviations from the non stochastic steady state (Y and pi in this case) using a KeyedArray from the AxisKeys package (check get_conditional_forecast for other ways to define the conditions):

julia> using AxisKeys
julia> conditions = KeyedArray(Matrix{Union{Nothing,Float64}}(undef,2,4),Variables = [:Y, :pi], Periods = 1:4)
2-dimensional KeyedArray(NamedDimsArray(...)) with keys:
↓   Variables ∈ 2-element Vector{Symbol}
→   Periods ∈ 4-element UnitRange{Int64}
And data, 2×4 Matrix{Union{Nothing, Float64}}:
         (1)       (2)       (3)       (4)
  (:Y)     nothing   nothing   nothing   nothing
  (:pi)    nothing   nothing   nothing   nothing
⋮
julia> shocks[[1:3...,5,9],1:2] .= 0;
julia> shocks[9,5] = -1;

The above shock Matrix means that for the first two periods shocks 1, 2, 3, 5, and 9 are fixed at zero, and in the fifth period there is a negative shock to eta_w (the 9th shock).
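
The definition of shocks itself is not shown above; it can be set up analogously to the conditions object, with nothing for every shock/period that is left unrestricted. A minimal sketch, assuming a 9×5 Matrix{Union{Nothing,Float64}} (9 shocks, 5 periods) as suggested by the assignments above:

julia> shocks = Matrix{Union{Nothing,Float64}}(undef, 9, 5);   # entries display as nothing, i.e. unrestricted
julia> shocks[[1:3...,5,9], 1:2] .= 0;                         # fix shocks 1, 2, 3, 5, and 9 at zero in periods 1-2
julia> shocks[9, 5] = -1;                                      # negative eta_w shock in period 5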

Finally we can get the conditional forecast:

julia> get_conditional_forecast(Smets_Wouters_2003, conditions, shocks = shocks, variables = [:Y,:pi,:W], conditions_in_levels = false)
2-dimensional KeyedArray(NamedDimsArray(...)) with keys:
↓   Variables_and_shocks ∈ 12-element Vector{Symbol}
→   Periods ∈ 45-element UnitRange{Int64}
And data, 12×45 Matrix{Float64}:
                (1)           (2)           …  (44)            (45)
  (:W)           -0.00477569   -0.00178866        -0.00209378     -0.00154841
  (:Y)           -0.01          8.32667e-17        0.0028801       0.00269595
  (:pi)           0.01          5.20417e-18       -0.000376449    -0.00032659
  (:eta_G₍ₓ₎)     0.0           0.0                0.0             0.0
  (:eta_I₍ₓ₎)     0.0           0.0          …     0.0             0.0
  (:eta_L₍ₓ₎)     0.0           0.0                0.0             0.0
   ⋮
  (:eta_b₍ₓ₎)    -1.9752        1.99989            0.0             0.0
  (:eta_p₍ₓ₎)     0.712388     -0.726688     …     0.0             0.0
  (:eta_pi₍ₓ₎)    0.548245     -0.563207           0.0             0.0
  (:eta_w₍ₓ₎)     0.0           0.0                0.0             0.0

The function returns a KeyedArray with the values of the endogenous variables and shocks matching the conditions exactly.
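
Because the conditions are matched exactly, the conditioned values can be recovered from the returned object. A minimal sketch, assuming the AxisKeys lookup syntax (the value corresponds to the first-period condition on Y shown in the table above):

julia> fcst = get_conditional_forecast(Smets_Wouters_2003, conditions, shocks = shocks, variables = [:Y,:pi,:W], conditions_in_levels = false);
julia> fcst(:Y, 1)    # equals the conditioned deviation of Y in period 1
-0.01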

We can also plot the conditional forecast. Please note that you need to import the StatsPlots package once before the first plot.

julia> plot_conditional_forecast(Smets_Wouters_2003, conditions, shocks = shocks, plots_per_page = 6, variables = [:Y,:pi,:W], conditions_in_levels = false)
2-element Vector{Any}:
 Plot{Plots.GRBackend() n=25}
 Plot{Plots.GRBackend() n=16}

Smets_Wouters_2003 conditional forecast 1

Smets_Wouters_2003 conditional forecast 2

We need to set conditions_in_levels = false since the conditions are defined in deviations.

Note that the stars indicate the values the model is conditioned on.

diff --git a/v0.1.36/unfinished_docs/dsl/index.html b/v0.1.36/unfinished_docs/dsl/index.html
index bc482d99..0e164d23 100644
--- a/v0.1.36/unfinished_docs/dsl/index.html
+++ b/v0.1.36/unfinished_docs/dsl/index.html
@@ -11,4 +11,4 @@
 z[0] = ρ * z[-1] + std_z * (eps_z[x-8] + eps_z[x-4] + eps_z[x+4] + eps_z_s[x])
 c̄⁻[0] = (c[0] + c[-1] + c[-2] + c[-3]) / 4
 c̄⁺[0] = (c[0] + c[1] + c[2] + c[3]) / 4
-end

+end

The parser recognises a variable as exogenous if the timing bracket contains one of the keyword/letters (case insensitive): x, ex, exo, exogenous.

Valid declarations of exogenous variables: ϵ[x], ϵ[Exo], ϵ[exOgenous].

Invalid declarations: ϵ[xo], ϵ[exogenously], ϵ[main shock x]

Endogenous and exogenous variables can be in lead or lag. For example, the following describe a lead of 1 period: Y[1], Y[+1], Y[+ 1], eps[x+1], eps[Exo + 1], and the same goes for lags and periods > 1: k[-2], c[+12], eps[x-4]

Invalid declarations: Y[t-1], Y[t], Y[whatever], eps[x+t+1]

Equations must be within one line and the = sign is optional.

The parser recognises all functions in Julia, including those from StatsFuns.jl. Note that the syntax for distributions is the same as in MATLAB, e.g. normcdf. For those familiar with R the following also work: pnorm, dnorm, qnorm, and it also recognises norminvcdf and norminv.

Given these rules it is straightforward to write down a model. Once declared using the @model <name of the model> macro, the package creates an object containing all necessary information regarding the equations of the model.
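
As an illustration of these rules, a minimal model declaration could look as follows (a sketch based on the RBC example used elsewhere in the documentation; variable and parameter names are illustrative):

using MacroModelling

@model RBC begin
    # Euler equation: expectations are implicit in the [1] (lead) timing
    1 / c[0] = (β / c[1]) * (α * exp(z[1]) * k[0]^(α - 1) + (1 - δ))
    # resource constraint and production function
    c[0] + k[0] = (1 - δ) * k[-1] + q[0]
    q[0] = exp(z[0]) * k[-1]^α
    # technology process with the exogenous shock eps_z declared via the [x] timing
    z[0] = ρ * z[-1] + std_z * eps_z[x]
end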

Lead / lags and auxiliary variables

diff --git a/v0.1.36/unfinished_docs/how_to/index.html b/v0.1.36/unfinished_docs/how_to/index.html
index b54534cf..ec56594e 100644
--- a/v0.1.36/unfinished_docs/how_to/index.html
+++ b/v0.1.36/unfinished_docs/how_to/index.html
@@ -27,4 +27,4 @@
 ρ = 0.2
 δ = 0.02
 α | k[ss] / (4 * q[ss]) = 1.5
-end

+end

Higher order perturbation solutions

How to estimate a model

Interactive plotting

diff --git a/v0.1.36/unfinished_docs/todo/index.html b/v0.1.36/unfinished_docs/todo/index.html
index 840345c1..e0417279 100644
--- a/v0.1.36/unfinished_docs/todo/index.html
+++ b/v0.1.36/unfinished_docs/todo/index.html
@@ -1,2 +1,2 @@
-Todo list · MacroModelling.jl

+Todo list · MacroModelling.jl

Todo list

High priority

  • [ ] ss transition by entering new parameters at given periods

  • [ ] check downgrade tests

  • [ ] figure out why PG and IS return basically the prior

  • [ ] allow external functions to calculate the steady state (and hand it over via SS or get_loglikelihood function) - need to use the check function for implicit derivatives and cannot use it to get him a guess from which he can use internal solver going forward

  • [ ] go through custom SS solver once more and try to find parameters and logic that achieves best results

  • [ ] SS solver with less equations than variables

  • [ ] improve docs: timing in first sentence seems off; have something more general in first sentence; why is the syntax user friendly? give an example; make the former and the latter a footnote

  • [ ] write tests/docs/technical details for nonlinear obc, forecasting, (non-linear) solution algorithms, SS solver, obc solver, and other algorithms

  • [ ] change docs to reflect that the output of irfs include aux vars and also the model info Base.show includes aux vars

  • [ ] recheck function examples and docs (include output description)

  • [ ] Docs: document outputs and associated functions to work with function

  • [ ] write documentation/docstrings using copilot

  • [ ] feedback: sell the sampler better (ESS vs dynare), more details on algorithm (SS solver)

  • [ ] NaNMath pow does not work (is not substituted)

  • [ ] check whether its possible to run parameters macro/block without rerunning model block

  • [ ] eliminate possible log, ^ terms in parameters block equations - because of nonnegativity errors

  • [ ] throw error when equations appear more than once

  • [ ] plot multiple solutions or models - multioptions in one graph

  • [ ] make SS calc faster (func and optim, maybe inplace ops)

  • [ ] try preallocation tools for forwarddiff

  • [ ] add nonlinear shock decomposition

  • [ ] check obc once more

  • [ ] rm obc vars from get_SS

  • [ ] check why warmup_iterations = 0 makes estimated shocks larger

  • [ ] use analytical derivatives also for shocks matching optim (and HMC - implicit diff)

  • [ ] info on when what filter is used and chosen options are overridden

  • [ ] check warnings, errors throughout. check suppress not interfering with pigeons

  • [ ] functions to reverse state_update (input: previous shock and current state, output previous state), find shocks corresponding to bringing one state to the next

  • [ ] cover nested case: min(50,a+b+max(c,10))

  • [ ] add balanced growth path handling

  • [ ] higher order solutions: some kron matrix mults are later compressed. write custom compressed kron mult; check if sometimes dense mult is faster? (e.g. GNSS2010 seems dense at higher order)

  • [ ] make inversion filter / higher order sols suitable for HMC (forward and reverse diff!!, currently only analytical pushforward, no implicitdiff) | analytic derivatives

  • [ ] speed up sparse matrix calcs in implicit diff of higher order funcs

  • [ ] compressed higher order derivatives and sparsity of jacobian

  • [ ] add user facing option to choose sylvester solver

  • [ ] autocorr and covariance with derivatives. return 3d array

  • [ ] use ID for sparse output sylvester solvers (filed issue)

  • [ ] add pydsge and econpizza to overview

  • [ ] add for loop parser in @parameters

  • [ ] implement more multi country models

  • [ ] speed benchmarking (focus on ImplicitDiff part)

  • [ ] for cond forecasting allow less shocks than conditions with a warning. should be svd then

  • [ ] have parser accept rss | (r[ss] - 1) * 400 = rss

  • [ ] when doing calibration with optimiser have better return values when he doesnt find a solution (probably NaN)

  • [ ] sampler returned negative std. investigate and come up with solution ensuring sampler can continue

  • [ ] automatically adjust plots for different legend widths and heights

  • [ ] include weakdeps: https://pkgdocs.julialang.org/dev/creating-packages/#Weak-dependencies

  • [ ] have get_std take variables as an input

  • [ ] more informative errors when something goes wrong when writing a model

  • [ ] initial state accept keyed array, SS and SSS as arguments

  • [ ] plotmodelestimates with unconditional forecast at the end

  • [ ] kick out unused parameters from m.parameters

  • [ ] use cache for gradient calc in estimation (see DifferentiableStateSpaceModels)

  • [ ] write functions to debug (fix_SS.jl...)

  • [ ] model compression (speed up 2nd moment calc (derivatives) for large models; gradient loglikelihood is very slow due to large matmuls) -> model setup as maximisation problem (gEcon) -> HANK models

  • [ ] implement global solution methods

  • [ ] add more models

  • [ ] use @assert for errors and @test_throws

  • [ ] print SS dependencies (get parameters (in function of parameters) into the dependencies), show SS solver

  • [ ] use strings instead of symbols internally

  • [ ] write how-to for calibration equations

  • [ ] make the nonnegativity trick optional or use nanmath?

  • [ ] clean up different parameter types

  • [ ] clean up printouts/reporting

  • [ ] clean up function inputs and harmonise AD and standard commands

  • [ ] figure out combinations for inputs (parameters and variables in different formats for get_irf for example)

  • [ ] weed out SS solver and saved objects

  • [x] streamline estimation part (dont do string matching... but rely on precomputed indices...)

  • [x] estimation: run auto-tune before and use solver treating parameters as given

  • [x] use arraydist in tests and docs

  • [x] include guess in docs

  • [x] Find any SS by optimising over both SS guesses and parameter inputs

  • [x] riccati with analytical derivatives (much faster if sparse) instead of implicit diff; done for ChainRules; ForwardDiff only feasible for smaller problems -> ID is fine there

  • [x] log in parameters block is recognized as variable

  • [x] add termination condition if relative change in ss solver is smaller than tol (relevant when values get very large)

  • [x] provide option for external SS guess; provided in parameters macro

  • [x] make it possible to run multiple ss solver parameter combination including starting points when solving a model

  • [x] automatically put the combi first which solves it fastest the first time

  • [x] write auto-tune in case he cant find SS (add it to the warning when he cant find the SS)

  • [x] nonlinear conditional forecasts for higher order and obc

  • [x] for cond forecasting and kalman, get rid of observables input and use axis key of data input

  • [x] fix translate dynare mod file from file written using write to dynare file (see test models): added retranslation to test

  • [x] use packages for kalman filter: nope sticking to own implementation

  • [x] check that there is an error if he cant find SS

  • [x] bring solution error into an object of the model so we dont have to pass it on as output: errors get returned by functions and are thrown where appropriate

  • [x] include option to provide pruned states for irfs

  • [x] use other quadratic iteration for diffable first order solve (useful because schur can error in estimation): used try catch, schur is still fastest

  • [x] fix SS solver (failed for backus in guide): works now

  • [x] nonlinear estimation using unscented kalman filter / inversion filter (minimization problem: find shocks to match states with data): used inversion filter with gradient optim

  • [x] check if higher order effects might distort results for autocorr (problem with order definition) - doesn't seem to be the case; full_covar yields same result

  • [x] implement occasionally binding constraints with shocks

  • [x] add QUEST3 tests

  • [x] add obc tests

  • [x] highlight NUTS sampler compatibility

  • [x] differentiate more vs diffstatespace

  • [x] reorder other toolboxes according to popularity

  • [x] add JOSS article (see Makie.jl)

  • [x] write to mod file for unicode characters. have them take what you would type: \alpha\bar

  • [x] write dynare model using function converting unicode to tab completion

  • [x] write parameter equations to dynare (take ordering on board)

  • [x] pruning of 3rd order takes pruned 2nd order input

  • [x] implement moment matching for pruned models

  • [x] test pruning and add literature

  • [x] use more implicit diff for the other functions as well

  • [x] handle sparsity in sylvester solver better (hand over indices and nzvals instead of vec)

  • [x] redo naming in moments calc and make whole process faster (precalc wrangling matrices)

  • [x] write method of moments how to

  • [x] check tols - all set to eps() except for dependencies tol (1e-12)

  • [x] set to 0 SS values < 1e-12 - doesnt work with Zygote

  • [x] sylvester with analytical derivatives (much faster if sparse) instead of implicit diff - yes but there are still way too large matrices being realised. implicitdiff is better here

  • [x] autocorr to statistics output and in general for higher order pruned sols

  • [x] fix product moments and test for cases with more than 2 shocks

  • [x] write tests for variables argument in get_moment and for higher order moments

  • [x] handle KeyedArrays with strings as dimension names as input

  • [x] add mean in output funcs for higher order

  • [x] recheck results for third order cov

  • [x] have a look again at get_statistics function

  • [x] consolidate sylvester solvers (diff)

  • [x] put outside of loop the ignore derivatives for derivatives

  • [x] write function to smart select variables to calc cov for

  • [x] write get function for variables, parameters, equations with proper parsing so people can understand what happens when invoking for loops

  • [x] have for loop where the items are multiplied or divided or whatever, defined by operator | + or * only

  • [x] write documentation for string inputs

  • [x] write documentation for programmatic model writing

  • [x] input indices not as symbol

  • [x] make sure plots and printed output also uses strings instead of symbols if adequate

  • [x] have keyedarray with strings as axis type if necessary as output

  • [x] write test for keyedarray with strings as primary axis

  • [x] test string input

  • [x] have all functions accept strings and write tests for it

  • [x] parser model into per equation functions instead of single big functions

  • [x] use krylov instead of linearsolve

  • [x] implement for loops in model macro (e.g. to setup multi country models)

  • [x] fix ss of pruned solution in plotsolution. seems detached

  • [x] try solve first order with JuMP - doesnt work because JuMP cannot handle matrix constraints/objectives

  • [x] get solution higher order with multidimensional array (states, 1 and 2 partial derivatives variables names as dimensions in 2order case)

  • [x] add pruning

  • [x] add other outputs from estimation (smoothed, filter states and shocks)

  • [x] shorten plot_irf (take inspiration from model estimate)

  • [x] fix solution plot

  • [x] see if we can avoid try catch and test for invertibility instead

  • [x] have Flux solve SS field #gradient descent based is worse than LM based

  • [x] have parameters keyword accept Int and 2/3

  • [x] plot_solution colors change from 2nd to 3rd order

  • [x] custom LM: optimize for other RBC models, use third order backtracking

  • [x] add SSS for third order (can be different than the one from 2nd order, see Gali (2015)) in solution plot; also put legend to the bottom as with Condition

  • [x] check out Aqua.jl as additional tests

  • [x] write tests and documentation for solution, estimation... making sure results are consistent

  • [x] catch cases where you define calibration equation without declaring conditional variable

  • [x] flag if equations contain no info for SS, suggest to set ss values as parameters

  • [x] handle SS case where there are equations which have no information for the SS. use SS definitions in parameter block to complete system | no, set steady state values to parameters instead. might fail if redundant equation has y[0] - y[-1] instead of y[0] - y[ss]

  • [x] try eval instead of runtimegeneratedfunctions; eval is slower but can be typed

  • [x] check correctness of solution for models added

  • [x] SpecialFunctions eta and gamma cause conflicts; consider importing used functions explicitly

  • [x] bring the parsing of equations after the parameters macro

  • [x] rewrite redundant var part so that it works with ssauxequations instead of ss_equations

  • [x] catch cases where ss vars are set to zero. x[0] * eps_z[x] in SS becomes x[0] * 0 but should be just 0 (use sympy for this)

  • [x] remove duplicate nonnegative aux vars to speed up SS solver

  • [x] error when defining variable more than once in parameters macro

  • [x] consolidate aux vars, use sympy to simplify

  • [x] error when writing equations with only one variable

  • [x] error when defining variable as parameter

  • [x] more options for IRFs, simulate only certain shocks - set stds to 0 instead

  • [x] add NBTOOLBOX, IRIS to overview

  • [x] input field for SS init guess in all functions #not necessary so far. SS solver works out everything just fine

  • [x] symbolic derivatives

  • [x] check SW03 SS solver

  • [x] more options for IRFs, pass on shock vector

  • [x] write to dynare

  • [x] add plot for policy function

  • [x] add plot for FEVD

  • [x] add functions like getvariance, getsd, getvar, getcovar

  • [x] add correlation, autocorrelation, and (conditional) variance decomposition

  • [x] go through docs to reflect verbose behaviour

  • [x] speed up covariance mat calc

  • [x] have conditional parameters at end of entry as well (... | alpha instead of alpha | ...)

  • [x] Get functions: getoutput, getmoments

  • [x] get rid of init_guess

  • [x] an and schorfheide estimation

  • [x] estimation, IRF matching, system priors

  • [x] check derivative tests with finite diff

  • [x] release first version

  • [x] SS solve: add domain transformation optim

  • [x] revisit optimizers for SS

  • [x] figure out licenses

  • [x] SS: replace variables in log() with auxiliary variable which must be positive to help solver

  • [x] complex example with lags > 1, [ss], calib equations, aux nonneg vars

  • [x] add NLboxsolve

  • [x] try NonlinearSolve - fails due to missing bounds

  • [x] make noneg aux part of optim problem for NLboxsolve in order to avoid DomainErrors - not necessary

  • [x] have bounds on alpha (failed previously due to naming conflict) - works now

Not high priority

  • [ ] estimation codes with missing values (adopt kalman filter)

  • [ ] decide on whether levels = false means deviations from NSSS or relevant SS

  • [ ] whats a good error measure for higher order solutions (taking whole dist of future shock into account)? use mean error for n number of future shocks

  • [ ] improve redundant calculations of SS and other parts of solution

  • [ ] restructure functions and containers so that compiler knows what types to expect

  • [ ] use RecursiveFactorization and TriangularSolve to solve, instead of MKL or OpenBLAS

  • [ ] fix SnoopCompile with generated functions

  • [ ] exploit variable incidence and compression for higher order derivatives

  • [ ] for estimation use CUDA with st order: linear time iteration starting from last 1st order solution and then LinearSolveCUDA solvers for higher orders. this should bring benefits for large models and HANK models

  • [ ] pull request in StatsFuns to have norminv... accept type numbers and add translation from matlab: norminv to StatsFuns norminvcdf

  • [ ] more informative errors when declaring equations/ calibration

  • [ ] unit equation errors

  • [ ] implement reduced linearised system solver + nonlinear

  • [ ] implement HANK

  • [ ] implement automatic problem derivation (gEcon)

  • [ ] print legend for algorithm in last subplot of plot only

  • [ ] select variables for moments

  • [x] rewrite first order with riccati equation MatrixEquations.jl: not necessary/feasible see dynare package

  • [x] test on highly nonlinear model # caldara et al is actually epstein zin with stochastic vol

  • [x] conditional forecasting

  • [x] find way to recover from failed SS solution which is written to init guess

  • [x] redo ugly solution for selecting parameters to differentiate for

  • [x] conditions for when to use which solution. if solution is outdated redo all solutions which have been done so far and use smart starting points

  • [x] Revise 2,3 pert codes to make it more intuitive

  • [x] implement blockdiag with julia package instead of python

  • [x] Pretty print linear solution

  • [x] write function to get_irfs

  • [x] Named arrays for irf

  • [x] write state space function for solution

  • [x] Status print for model container

  • [x] implement 2nd + 3rd order perturbation

  • [x] implement functions for distributions

  • [x] try speedmapping.jl - no improvement

  • [x] moment matching

  • [x] write tests for higher order pert and standalone function

  • [x] add compression back in

  • [x] FixedPointAcceleration didnt improve on iterative procedure

  • [x] add exogenous variables in lead or lag

  • [x] regex in parser of SS and exo

  • [x] test SS solver on SW07

  • [x] change calibration, distinguish SS/dyn parameters

  • [x] plot multiple solutions at same time (save them in separate constructs)

  • [x] implement bounds in SS finder

  • [x] map pars + vars impacting SS

  • [x] check bounds when putting in new calibration

  • [x] Save plot option

  • [x] Add shock to plot title

  • [x] print model name