Deploying to gh-pages from @ 2aa69f9 🚀
rickecon committed Nov 10, 2023
1 parent a7a9129 commit 9e68a7f
Showing 5 changed files with 51 additions and 51 deletions.
4 changes: 2 additions & 2 deletions basic_empirics/BasicEmpirMethods.html
@@ -940,7 +940,7 @@ <h2> Contents </h2>
</div>
</div>
<div class="cell_output docutils container">
<div class="output stderr highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>/tmp/ipykernel_2788/3993614049.py:4: SettingWithCopyWarning:
<div class="output stderr highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>/tmp/ipykernel_2739/3993614049.py:4: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame

See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
@@ -1600,7 +1600,7 @@ <h2> Contents </h2>
Model: OLS Adj. R-squared: 0.608
Method: Least Squares F-statistic: 171.4
Date: Fri, 10 Nov 2023 Prob (F-statistic): 4.16e-24
Time: 08:34:48 Log-Likelihood: -119.71
Time: 09:12:57 Log-Likelihood: -119.71
No. Observations: 111 AIC: 243.4
Df Residuals: 109 BIC: 248.8
Df Model: 1
32 changes: 16 additions & 16 deletions basic_empirics/LogisticReg.html
@@ -1141,29 +1141,29 @@ <h2> Contents </h2>
<section id="interpreting-coefficients-log-odds-ratio">
<span id="sec-loglogitinterpret"></span><h4><span class="section-number">13.2.2.4. </span>Interpreting coefficients (log odds ratio)<a class="headerlink" href="#interpreting-coefficients-log-odds-ratio" title="Permalink to this heading">#</a></h4>
<p>The odds ratio in the logistic model provides a nice way to interpret logit model coefficients. Let <span class="math notranslate nohighlight">\(z\equiv X^T\beta = \beta_0 + \beta_1 x_{1,i} + ...\beta_K x_{K,i}\)</span>. The logistic model is stated as the probability that the binary categorical dependent variable equals one, <span class="math notranslate nohighlight">\(y_i=1\)</span>.</p>
<div class="amsmath math notranslate nohighlight" id="equation-8fc74fbe-b7bd-4f5b-81d3-5347c2435265">
<span class="eqno">(13.11)<a class="headerlink" href="#equation-8fc74fbe-b7bd-4f5b-81d3-5347c2435265" title="Permalink to this equation">#</a></span>\[\begin{equation}
<div class="amsmath math notranslate nohighlight" id="equation-1d4c173e-2fce-4bfb-a5e8-99a272ff928a">
<span class="eqno">(13.11)<a class="headerlink" href="#equation-1d4c173e-2fce-4bfb-a5e8-99a272ff928a" title="Permalink to this equation">#</a></span>\[\begin{equation}
P(y_i=1|X,\theta) = \frac{e^z}{1 + e^z}
\end{equation}\]</div>
<p>Given this equation, we know that the probability of the dependent variable being zero, <span class="math notranslate nohighlight">\(y_i=0\)</span>, is just one minus the probability above.</p>
<div class="amsmath math notranslate nohighlight" id="equation-39d03220-d236-4391-896c-d4ca43300d38">
<span class="eqno">(13.12)<a class="headerlink" href="#equation-39d03220-d236-4391-896c-d4ca43300d38" title="Permalink to this equation">#</a></span>\[\begin{equation}
<div class="amsmath math notranslate nohighlight" id="equation-c8e1e2be-42d8-4f04-9c76-37fb356f100c">
<span class="eqno">(13.12)<a class="headerlink" href="#equation-c8e1e2be-42d8-4f04-9c76-37fb356f100c" title="Permalink to this equation">#</a></span>\[\begin{equation}
P(y_i=0|X,\theta) = 1 - P(y_i=1|X,\theta) = 1 - \frac{e^z}{1 + e^z} = \frac{1}{1 + e^z}
\end{equation}\]</div>
<p>The odds ratio is a common way of expressing the probability of an event versus all other events. For example, if the probability of your favorite team winning a game is <span class="math notranslate nohighlight">\(P(win)=0.8\)</span>, then we know that the probability of your favorite team losing that game is <span class="math notranslate nohighlight">\(P(lose)=1-P(win)=0.2\)</span>. The odds ratio is the ratio of these two probabilities.</p>
<div class="amsmath math notranslate nohighlight" id="equation-d42be9db-6ddf-4105-8b82-c29213390469">
<span class="eqno">(13.13)<a class="headerlink" href="#equation-d42be9db-6ddf-4105-8b82-c29213390469" title="Permalink to this equation">#</a></span>\[\begin{equation}
<div class="amsmath math notranslate nohighlight" id="equation-8e6cde9b-7bf4-43f7-af28-db2ebf625f19">
<span class="eqno">(13.13)<a class="headerlink" href="#equation-8e6cde9b-7bf4-43f7-af28-db2ebf625f19" title="Permalink to this equation">#</a></span>\[\begin{equation}
\frac{P(win)}{P(lose)} = \frac{P(win)}{1 - P(win)} = \frac{0.8}{0.2} = \frac{4}{1} \quad\text{or}\quad 4
\end{equation}\]</div>
<p>The odds ratio tells you that your team winning is four times as likely as your team losing. A gambler would say that your odds are 4-to-1. Another way of saying it is that your team will win four out of five times and lose one out of five times.</p>
<p>In the logistic model, the odds ratio reduces the problem nicely.</p>
<div class="amsmath math notranslate nohighlight" id="equation-a43a33df-4990-48f2-93bb-4d1cfe81c61e">
<span class="eqno">(13.14)<a class="headerlink" href="#equation-a43a33df-4990-48f2-93bb-4d1cfe81c61e" title="Permalink to this equation">#</a></span>\[\begin{equation}
<div class="amsmath math notranslate nohighlight" id="equation-d6949e45-66d2-4c95-bae0-fb9253806395">
<span class="eqno">(13.14)<a class="headerlink" href="#equation-d6949e45-66d2-4c95-bae0-fb9253806395" title="Permalink to this equation">#</a></span>\[\begin{equation}
\frac{P(y_i=1|X,\theta)}{1 - P(y_i=1|X,\theta)} = \frac{\frac{e^z}{1 + e^z}}{\frac{1}{1 + e^z}} = e^z
\end{equation}\]</div>
<p>If we take the log of both sides, we see that the log odds ratio is equal to the linear predictor <span class="math notranslate nohighlight">\(z\equiv X^T\beta = \beta_0 + \beta_1 x_{1,i} + ...\beta_K x_{K,i}\)</span>.</p>
<div class="amsmath math notranslate nohighlight" id="equation-8708766c-5003-4636-97ef-d197cb5d34ff">
<span class="eqno">(13.15)<a class="headerlink" href="#equation-8708766c-5003-4636-97ef-d197cb5d34ff" title="Permalink to this equation">#</a></span>\[\begin{equation}
<div class="amsmath math notranslate nohighlight" id="equation-7408916c-8094-4dd1-9a40-f720b2c8f2c3">
<span class="eqno">(13.15)<a class="headerlink" href="#equation-7408916c-8094-4dd1-9a40-f720b2c8f2c3" title="Permalink to this equation">#</a></span>\[\begin{equation}
\ln\left(\frac{P(y_i=1|X,\theta)}{1 - P(y_i=1|X,\theta)}\right) = z = \beta_0 + \beta_1 x_{1,i} + ...\beta_K x_{K,i}
\end{equation}\]</div>
<p>So the interpretation of the coefficients <span class="math notranslate nohighlight">\(\beta_k\)</span> is that a one-unit increase in the variable <span class="math notranslate nohighlight">\(x_{k,i}\)</span> increases the log odds of <span class="math notranslate nohighlight">\(y_i=1\)</span> by <span class="math notranslate nohighlight">\(\beta_k\)</span>; equivalently, it multiplies the odds by <span class="math notranslate nohighlight">\(e^{\beta_k}\)</span>, which for small <span class="math notranslate nohighlight">\(\beta_k\)</span> is approximately a <span class="math notranslate nohighlight">\(100\beta_k\)</span> percent change in the odds.</p>
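A minimal sketch of this interpretation in Python using statsmodels on simulated data (the data-generating process, coefficient values, and variable names here are illustrative assumptions, not the chapter's example):

```python
import numpy as np
import statsmodels.api as sm

# Simulate a binary outcome from a logistic DGP: z = 0.5 + 1.2*x (assumed values)
rng = np.random.default_rng(seed=25)
n = 1000
x = rng.normal(size=n)
p = 1.0 / (1.0 + np.exp(-(0.5 + 1.2 * x)))
y = (rng.uniform(size=n) < p).astype(int)

# Fit the logit model; the estimated params are the beta_k in the log odds equation
res = sm.Logit(y, sm.add_constant(x)).fit(disp=0)
beta_1 = res.params[1]

print("beta_1 (change in log odds per unit of x):", beta_1)
print("odds multiplier per unit of x, e^beta_1:", np.exp(beta_1))
```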
@@ -1175,18 +1175,18 @@ <h2> Contents </h2>
<p>The multinomial logit model is a natural extension of the logit model. In contrast to the logit model, in which the dependent variable has only two categories, the multinomial logit model accommodates <span class="math notranslate nohighlight">\(J\geq2\)</span> categories in the dependent variable. Let <span class="math notranslate nohighlight">\(\eta_j\)</span> be the linear predictor for the <span class="math notranslate nohighlight">\(j\)</span>th category.
<span class="math notranslate nohighlight">\( \eta_j\equiv \beta_{j,0} + \beta_{j,1}x_{1,i} + ...\beta_{j,K}x_{K,i} \quad\forall y_i = j \)</span></p>
<p>The multinomial logit model gives the probability of <span class="math notranslate nohighlight">\(y_i=j\)</span> relative to some reference category <span class="math notranslate nohighlight">\(J\)</span> that is left out.</p>
<div class="amsmath math notranslate nohighlight" id="equation-25201e9d-bffd-4676-aaf1-719ae1ae98c5">
<span class="eqno">(13.16)<a class="headerlink" href="#equation-25201e9d-bffd-4676-aaf1-719ae1ae98c5" title="Permalink to this equation">#</a></span>\[\begin{equation}
<div class="amsmath math notranslate nohighlight" id="equation-442935c0-e9b7-4e2c-a307-bb0ff473ac8e">
<span class="eqno">(13.16)<a class="headerlink" href="#equation-442935c0-e9b7-4e2c-a307-bb0ff473ac8e" title="Permalink to this equation">#</a></span>\[\begin{equation}
Pr(y_i=j|X,\theta) = \frac{e^{\eta_j}}{1 + \sum_{v=1}^{J-1}e^{\eta_v}} \quad\text{for}\quad 1\leq j\leq J-1
\end{equation}\]</div>
<p>Once the <span class="math notranslate nohighlight">\(J-1\)</span> sets of coefficients are estimated, the probability of the omitted <span class="math notranslate nohighlight">\(J\)</span>th category is the residual given by the following expression.</p>
<div class="amsmath math notranslate nohighlight" id="equation-eb40ee8c-8a2f-4eab-882c-0eb7285d33e7">
<span class="eqno">(13.17)<a class="headerlink" href="#equation-eb40ee8c-8a2f-4eab-882c-0eb7285d33e7" title="Permalink to this equation">#</a></span>\[\begin{equation}
<div class="amsmath math notranslate nohighlight" id="equation-41de3b10-6c1d-4ada-9e7a-3fd9c4b2a505">
<span class="eqno">(13.17)<a class="headerlink" href="#equation-41de3b10-6c1d-4ada-9e7a-3fd9c4b2a505" title="Permalink to this equation">#</a></span>\[\begin{equation}
Pr(y_i=J|X,\theta) = \frac{1}{1 + \sum_{v=1}^{J-1}e^{\eta_v}}
\end{equation}\]</div>
<p>The analogous log odds ratio interpretation applies to the multinomial logit model.</p>
<div class="amsmath math notranslate nohighlight" id="equation-150f6b5e-01ae-4ad0-9b7c-cb3ae77dfa78">
<span class="eqno">(13.18)<a class="headerlink" href="#equation-150f6b5e-01ae-4ad0-9b7c-cb3ae77dfa78" title="Permalink to this equation">#</a></span>\[\begin{equation}
<div class="amsmath math notranslate nohighlight" id="equation-c0f46b23-cac6-407e-803b-9f52242c2c4b">
<span class="eqno">(13.18)<a class="headerlink" href="#equation-c0f46b23-cac6-407e-803b-9f52242c2c4b" title="Permalink to this equation">#</a></span>\[\begin{equation}
\ln\left(\frac{Pr(y_i=j|X,\theta)}{Pr(y_i=J|X,\theta)}\right) = \eta_j = \beta_{j,0} + \beta_{j,1}x_{1,i} + ...\beta_{j,K}x_{K,i} \quad\text{for}\quad 1\leq j \leq J-1
\end{equation}\]</div>
<p>This is the odds ratio of <span class="math notranslate nohighlight">\(y_i=j\)</span> relative to <span class="math notranslate nohighlight">\(y_i=J\)</span>. The interpretation of the <span class="math notranslate nohighlight">\(\beta_{j,k}\)</span> coefficient is the predicted change in the log odds ratio of <span class="math notranslate nohighlight">\(y_i=j\)</span> to <span class="math notranslate nohighlight">\(y_i=J\)</span> from a one-unit increase in variable <span class="math notranslate nohighlight">\(x_{k,i}\)</span>.</p>
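A short numerical sketch of these multinomial logit formulas (the linear-predictor values below are hypothetical):

```python
import numpy as np

# Hypothetical linear predictors eta_j for the J-1 = 2 non-reference categories,
# evaluated at one observation's x values; the reference category J is omitted
eta = np.array([0.4, -1.1])

denom = 1.0 + np.exp(eta).sum()
p_j = np.exp(eta) / denom   # Pr(y_i = j | X, theta) for j = 1, ..., J-1
p_J = 1.0 / denom           # Pr(y_i = J | X, theta), the residual category

print(p_j, p_J, p_j.sum() + p_J)   # the J probabilities sum to 1
print(np.log(p_j / p_J))           # log odds ratios recover eta exactly
```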
4 changes: 2 additions & 2 deletions python/SciPy.html
@@ -678,7 +678,7 @@ <h2> Contents </h2>
<div class="output stream highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span> message: The solution converged.
success: True
status: 1
fun: [-1.239e-13 -1.013e-13]
fun: [-1.235e-13 -1.017e-13]
x: [ 8.463e-01 2.331e+00]
nfev: 10
fjac: [[-8.332e-01 5.529e-01]
@@ -688,7 +688,7 @@ <h2> Contents </h2>

The solution for (x, y) is: [0.84630378 2.33101497]

The error values for eq1 and eq2 at the solution are: [-1.2390089e-13 -1.0125234e-13]
The error values for eq1 and eq2 at the solution are: [-1.23456800e-13 -1.01696429e-13]
</pre></div>
</div>
</div>
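The block above has the structure of the OptimizeResult returned by scipy.optimize.root for a two-equation system. A minimal, self-contained sketch of the call pattern; the system of equations below is a stand-in for illustration, not necessarily the chapter's:

```python
import numpy as np
from scipy.optimize import root

def system(vars):
    # Stand-in two-equation nonlinear system (assumed, for illustration)
    x, y = vars
    eq1 = x ** 2 + y ** 2 - 6.15
    eq2 = np.exp(x) - y
    return [eq1, eq2]

sol = root(system, [1.0, 1.0])  # default 'hybr' method reports fun, x, nfev, fjac
print("The solution for (x, y) is:", sol.x)
print("The error values for eq1 and eq2 at the solution are:", sol.fun)
```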
2 changes: 1 addition & 1 deletion searchindex.js

Large diffs are not rendered by default.

60 changes: 30 additions & 30 deletions struct_est/SMM.html
@@ -1289,7 +1289,7 @@ <h2> Contents </h2>
</div>
</div>
<div class="cell_output docutils container">
<div class="output stream highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>mu_SMM1_1= 612.3319048024534 sig_SMM1_1= 197.26303566149244
<div class="output stream highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>mu_SMM1_1= 612.3371352249138 sig_SMM1_1= 197.26434895262162
</pre></div>
</div>
</div>
@@ -1317,18 +1317,18 @@ <h2> Contents </h2>
<div class="cell_output docutils container">
<div class="output stream highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>Data mean of scores = 341.90869565217395 , Data variance of scores = 7827.997292398056

Model mean 1 = 341.6690130387879 , Model variance 1 = 7827.866466512297
Model mean 1 = 341.6692110494425 , Model variance 1 = 7827.864496338213

Error vector 1 = [-7.01013506e-04 -1.67125614e-05]
Error vector 1 = [-7.00434373e-04 -1.69642445e-05]

Results from scipy.optimize.minimize:
message: CONVERGENCE: NORM_OF_PROJECTED_GRADIENT_&lt;=_PGTOL
success: True
status: 0
fun: 4.916992449560748e-07
fun: 4.908960959342433e-07
x: [ 6.123e+02 1.973e+02]
nit: 17
jac: [-7.454e-07 2.357e-06]
jac: [-7.436e-07 2.350e-06]
nfev: 72
njev: 24
hess_inv: &lt;2x2 LbfgsInvHessProduct with dtype=float64&gt;
@@ -1453,19 +1453,19 @@ <h2> Contents </h2>
</div>
<div class="cell_output docutils container">
<div class="output stream highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>Jacobian matrix of derivatives of moment error functions is:
[[ 0.0008975 -0.00290434]
[-0.00114133 0.004457 ]]
[[ 0.00089749 -0.00290433]
[-0.00114132 0.00445698]]

Weighting matrix W is:
[[1. 0.]
[0. 1.]]

Variance-covariance matrix of estimated parameter vector is:
[[602499.98744184 163793.83380005]
[163793.83380005 44881.85524372]]
[[602535.18442996 163802.17330123]
[163802.17330123 44883.79092131]]

Std. err. mu_hat= 776.2087267235777
Std. err. sig_hat= 211.8533814781294
Std. err. mu_hat= 776.23139876583
Std. err. sig_hat= 211.85794986573154
</pre></div>
</div>
</div>
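The variance-covariance block above is consistent with the standard SMM formula \(\hat{\Sigma} = \frac{1}{N}\,(d^T W d)^{-1}\), where \(d\) is the Jacobian of the moment-error vector. A sketch of that computation using the numbers printed above; the scaling constant N is an assumption here, since the right value depends on how the moment errors are defined in the notebook:

```python
import numpy as np

# Jacobian of the moment-error functions and weighting matrix, copied from above
d_err = np.array([[ 0.00089749, -0.00290433],
                  [-0.00114132,  0.00445698]])
W = np.eye(2)

# SMM var-cov of the parameter vector: (1/N) * (d' W d)^{-1}
N = 100  # assumed scaling constant, not taken from the notebook
vcov = np.linalg.inv(d_err.T @ W @ d_err) / N

print(vcov)  # close to the matrix printed above
print("Std. err. mu_hat =", np.sqrt(vcov[0, 0]))
print("Std. err. sig_hat =", np.sqrt(vcov[1, 1]))
```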
@@ -1554,12 +1554,12 @@ <h2> Contents </h2>
</div>
<div class="cell_output docutils container">
<div class="output stream highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>2nd stage est. of var-cov matrix of moment error vec across sims:
[[ 0.00033411 -0.00142288]
[-0.00142288 0.01592874]]
[[ 0.00033411 -0.00142289]
[-0.00142289 0.01592879]]

2nd stage est. of optimal weighting matrix:
[[4830.85282784 431.53075653]
[ 431.53075653 101.32740541]]
[[4830.88530228 431.53378728]
[ 431.53378728 101.32749623]]
</pre></div>
</div>
</div>
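The two-step weighting matrix printed above is the inverse of the estimated variance-covariance matrix of the moment errors, which you can verify directly:

```python
import numpy as np

# 2nd stage estimate of the var-cov matrix of the moment errors (from above)
Omega = np.array([[ 0.00033411, -0.00142289],
                  [-0.00142289,  0.01592879]])

# The optimal two-step weighting matrix is its inverse
W_twostep = np.linalg.inv(Omega)
print(W_twostep)
# approximately [[4830.9, 431.5], [431.5, 101.3]], matching the output above
```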
@@ -1577,7 +1577,7 @@ <h2> Contents </h2>
</div>
</div>
<div class="cell_output docutils container">
<div class="output stream highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>mu_SMM2_1= 619.4295828414915 sig_SMM2_1= 199.0745499003818
<div class="output stream highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>mu_SMM2_1= 619.4303074248937 sig_SMM2_1= 199.0747813692372
</pre></div>
</div>
</div>
@@ -1607,15 +1607,15 @@ <h2> Contents </h2>
[-0.0011259 0.00443426]]

Weighting matrix W is:
[[4830.85282784 431.53075653]
[ 431.53075653 101.32740541]]
[[4830.88530228 431.53378728]
[ 431.53378728 101.32749623]]

Variance-covariance matrix of estimated parameter vector is:
[[2397.35566572 745.28979427]
[ 745.28979427 232.01568064]]
[[2397.38054356 745.29670501]
[ 745.29670501 232.01757158]]

Std. err. mu_hat= 48.96279879380848
Std. err. sig_hat= 15.232060945338231
Std. err. mu_hat= 48.963052841479445
Std. err. sig_hat= 15.232123016118733
</pre></div>
</div>
</div>
@@ -1865,7 +1865,7 @@ <h2> Contents </h2>
</div>
</div>
<div class="cell_output docutils container">
<div class="output stream highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>mu_SMM4_1= 362.560593472098 sig_SMM4_1 46.57515195652185
<div class="output stream highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>mu_SMM4_1= 362.560593472098 sig_SMM4_1 46.5751519565219
message: CONVERGENCE: REL_REDUCTION_OF_F_&lt;=_FACTR*EPSMCH
success: True
status: 0
@@ -2016,8 +2016,8 @@ <h2> Contents </h2>
[[33.48770545 23.53170341]
[23.53170341 18.13459776]]

Std. err. mu_hat= 5.786856266717139
Std. err. sig_hat= 4.258473642422251
Std. err. mu_hat= 5.786856266717126
Std. err. sig_hat= 4.258473642422245
</pre></div>
</div>
</div>
@@ -2139,11 +2139,11 @@ <h2> Contents </h2>
</div>
</div>
<div class="cell_output docutils container">
<div class="output stream highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>mu_SMM4_2= 362.5605400454758 sig_SMM4_2 46.57507128065559
<div class="output stream highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>mu_SMM4_2= 362.5605400454758 sig_SMM4_2 46.57507128065564
message: Optimization terminated successfully
success: True
status: 0
fun: 0.9984266286568925
fun: 0.9984266286568926
x: [ 3.626e+02 4.658e+01]
nit: 1
jac: [ 5.467e-02 8.255e-02]
@@ -2188,7 +2188,7 @@ <h2> Contents </h2>

Error vector (pct. dev.) = [-0.98 0.04678571 0.11720721 -0.075 ]

Criterion func val = 0.9984266286568925
Criterion func val = 0.9984266286568926
</pre></div>
</div>
</div>
@@ -2273,8 +2273,8 @@ <h2> Contents </h2>
[[3.53697411 2.47391182]
[2.47391182 1.77184156]]

Std. err. mu_hat= 1.8806844791000032
Std. err. sig_hat= 1.3311053920876743
Std. err. mu_hat= 1.8806844791000217
Std. err. sig_hat= 1.3311053920876885
</pre></div>
</div>
</div>
