
Commit

Deploying to gh-pages from @ 804e7eb 🚀
rickecon committed Nov 10, 2023
1 parent 5a6aba0 commit d214084
Showing 5 changed files with 51 additions and 51 deletions.
4 changes: 2 additions & 2 deletions basic_empirics/BasicEmpirMethods.html
Original file line number Diff line number Diff line change
@@ -940,7 +940,7 @@ <h2> Contents </h2>
</div>
</div>
<div class="cell_output docutils container">
<div class="output stderr highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>/tmp/ipykernel_2720/3993614049.py:4: SettingWithCopyWarning:
<div class="output stderr highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>/tmp/ipykernel_2762/3993614049.py:4: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame

See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
@@ -1600,7 +1600,7 @@ <h2> Contents </h2>
Model: OLS Adj. R-squared: 0.608
Method: Least Squares F-statistic: 171.4
Date: Fri, 10 Nov 2023 Prob (F-statistic): 4.16e-24
Time: 08:07:03 Log-Likelihood: -119.71
Time: 08:14:56 Log-Likelihood: -119.71
No. Observations: 111 AIC: 243.4
Df Residuals: 109 BIC: 248.8
Df Model: 1
32 changes: 16 additions & 16 deletions basic_empirics/LogisticReg.html
@@ -1141,29 +1141,29 @@ <h2> Contents </h2>
<section id="interpreting-coefficients-log-odds-ratio">
<span id="sec-loglogitinterpret"></span><h4><span class="section-number">13.2.2.4. </span>Interpreting coefficients (log odds ratio)<a class="headerlink" href="#interpreting-coefficients-log-odds-ratio" title="Permalink to this heading">#</a></h4>
<p>The odds ratio in the logistic model provides a nice way to interpret logit model coefficients. Let <span class="math notranslate nohighlight">\(z\equiv X^T\beta = \beta_0 + \beta_1 x_{1,i} + ...\beta_K x_{K,i}\)</span>. The logistic model states the probability that the binary categorical dependent variable equals one, <span class="math notranslate nohighlight">\(y_i=1\)</span>.</p>
<div class="amsmath math notranslate nohighlight" id="equation-ee860652-3193-48f7-a1bd-372ccd95f10b">
<span class="eqno">(13.11)<a class="headerlink" href="#equation-ee860652-3193-48f7-a1bd-372ccd95f10b" title="Permalink to this equation">#</a></span>\[\begin{equation}
<div class="amsmath math notranslate nohighlight" id="equation-f6e74078-e905-42f7-a486-7317612d994b">
<span class="eqno">(13.11)<a class="headerlink" href="#equation-f6e74078-e905-42f7-a486-7317612d994b" title="Permalink to this equation">#</a></span>\[\begin{equation}
P(y_i=1|X,\theta) = \frac{e^z}{1 + e^z}
\end{equation}\]</div>
<p>Given this equation, we know that the probability of the dependent variable being zero <span class="math notranslate nohighlight">\(y_i=0\)</span> is just one minus the probability above.</p>
<div class="amsmath math notranslate nohighlight" id="equation-dc5c7f4a-5b93-4c95-97c6-1b1dbbe9cb58">
<span class="eqno">(13.12)<a class="headerlink" href="#equation-dc5c7f4a-5b93-4c95-97c6-1b1dbbe9cb58" title="Permalink to this equation">#</a></span>\[\begin{equation}
<div class="amsmath math notranslate nohighlight" id="equation-63df07e2-a99c-4dfe-a268-ebd0bf4935c9">
<span class="eqno">(13.12)<a class="headerlink" href="#equation-63df07e2-a99c-4dfe-a268-ebd0bf4935c9" title="Permalink to this equation">#</a></span>\[\begin{equation}
P(y_i=0|X,\theta) = 1 - P(y_i=1|X,\theta) = 1 - \frac{e^z}{1 + e^z} = \frac{1}{1 + e^z}
\end{equation}\]</div>
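<p>A quick numerical sketch of these two probabilities (the value of the linear predictor <code>z</code> below is invented purely for illustration):</p>

```python
import numpy as np

def logit_prob(z):
    """P(y_i=1 | z) = e^z / (1 + e^z) for a linear predictor z."""
    return np.exp(z) / (1.0 + np.exp(z))

z = 0.75  # arbitrary illustrative value of the linear predictor
p1 = logit_prob(z)
p0 = 1.0 - p1  # equals 1 / (1 + e^z)

print(p1, p0)  # the two probabilities sum to one
```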
<p>The odds ratio is a common way of expressing the probability of an event versus all other events. For example, if the probability of your favorite team winning a game is <span class="math notranslate nohighlight">\(P(win)=0.8\)</span>, then we know that the probability of your favorite team losing that game is <span class="math notranslate nohighlight">\(P(lose)=1-P(win)=0.2\)</span>. The odds ratio is the ratio of these two probabilities.</p>
<div class="amsmath math notranslate nohighlight" id="equation-6d572d03-9c4e-49e6-95f1-42e3f372fea1">
<span class="eqno">(13.13)<a class="headerlink" href="#equation-6d572d03-9c4e-49e6-95f1-42e3f372fea1" title="Permalink to this equation">#</a></span>\[\begin{equation}
<div class="amsmath math notranslate nohighlight" id="equation-4daf6d79-8a28-43f0-a7ea-ab45f64d7cdf">
<span class="eqno">(13.13)<a class="headerlink" href="#equation-4daf6d79-8a28-43f0-a7ea-ab45f64d7cdf" title="Permalink to this equation">#</a></span>\[\begin{equation}
\frac{P(win)}{P(lose)} = \frac{P(win)}{1 - P(win)} = \frac{0.8}{0.2} = \frac{4}{1} \quad\text{or}\quad 4
\end{equation}\]</div>
<p>The odds ratio tells you that your team winning is four times as likely as your team losing. A gambler would say that your odds are 4-to-1. Another way of saying it is that your team will win four out of five times and lose one out of five times.</p>
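<p>The arithmetic of this example can be checked in a couple of lines:</p>

```python
p_win = 0.8
odds = p_win / (1.0 - p_win)  # P(win) / P(lose)
print(odds)                   # approximately 4, i.e., a 4-to-1 bet
```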
<p>In the logistic model, the odds ratio simplifies the expression nicely.</p>
<div class="amsmath math notranslate nohighlight" id="equation-08682f55-fc6d-4e3c-bdb6-daf92ae6d8dd">
<span class="eqno">(13.14)<a class="headerlink" href="#equation-08682f55-fc6d-4e3c-bdb6-daf92ae6d8dd" title="Permalink to this equation">#</a></span>\[\begin{equation}
<div class="amsmath math notranslate nohighlight" id="equation-18cc280d-8b86-40c7-a87f-44fd0f334972">
<span class="eqno">(13.14)<a class="headerlink" href="#equation-18cc280d-8b86-40c7-a87f-44fd0f334972" title="Permalink to this equation">#</a></span>\[\begin{equation}
\frac{P(y_i=1|X,\theta)}{1 - P(y_i=1|X,\theta)} = \frac{\frac{e^z}{1 + e^z}}{\frac{1}{1 + e^z}} = e^z
\end{equation}\]</div>
<p>If we take the log of both sides, we see that the log odds ratio is equal to the linear predictor <span class="math notranslate nohighlight">\(z\equiv X^T\beta = \beta_0 + \beta_1 x_{1,i} + ...\beta_K x_{K,i}\)</span>.</p>
<div class="amsmath math notranslate nohighlight" id="equation-a6785f3d-6ae0-46d3-abb0-543b6ffde234">
<span class="eqno">(13.15)<a class="headerlink" href="#equation-a6785f3d-6ae0-46d3-abb0-543b6ffde234" title="Permalink to this equation">#</a></span>\[\begin{equation}
<div class="amsmath math notranslate nohighlight" id="equation-a3e73411-6b51-4185-afaf-33357aee1137">
<span class="eqno">(13.15)<a class="headerlink" href="#equation-a3e73411-6b51-4185-afaf-33357aee1137" title="Permalink to this equation">#</a></span>\[\begin{equation}
\ln\left(\frac{P(y_i=1|X,\theta)}{1 - P(y_i=1|X,\theta)}\right) = z = \beta_0 + \beta_1 x_{1,i} + ...\beta_K x_{K,i}
\end{equation}\]</div>
<p>So the interpretation of the coefficients <span class="math notranslate nohighlight">\(\beta_k\)</span> is that a one-unit increase in the variable <span class="math notranslate nohighlight">\(x_{k,i}\)</span> increases the log odds of <span class="math notranslate nohighlight">\(y_i=1\)</span> by <span class="math notranslate nohighlight">\(\beta_k\)</span>, which is approximately a <span class="math notranslate nohighlight">\(100\beta_k\)</span> percent increase in the odds for small values of <span class="math notranslate nohighlight">\(\beta_k\)</span>.</p>
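<p>A minimal sketch of this interpretation on synthetic data (the true coefficient values, sample size, and seed are invented for illustration; in practice a library such as statsmodels would do the fitting). It fits the logit model by Newton's method and exponentiates the slope to get the odds multiplier:</p>

```python
import numpy as np

# Synthetic data: the "true" coefficients are invented for illustration.
rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])      # add an intercept column
beta_true = np.array([-0.5, 1.2])
p = 1.0 / (1.0 + np.exp(-X @ beta_true))  # P(y_i=1) = e^z / (1 + e^z)
y = rng.binomial(1, p)

# Maximize the logit log likelihood by Newton's method.
beta = np.zeros(2)
for _ in range(25):
    q = 1.0 / (1.0 + np.exp(-X @ beta))   # fitted P(y_i=1 | X, beta)
    grad = X.T @ (y - q)                  # score vector
    hess = -(X * (q * (1 - q))[:, None]).T @ X  # Hessian (negative definite)
    beta = beta - np.linalg.solve(hess, grad)

print("beta_hat:", beta)
# A one-unit increase in x multiplies the odds of y=1 by e^{beta_1}.
print("odds multiplier per unit of x:", np.exp(beta[1]))
```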
@@ -1175,18 +1175,18 @@ <h2> Contents </h2>
<p>The multinomial logit model is a natural extension of the logit model. In contrast to the logit model, in which the dependent variable has only two categories, the multinomial logit model accommodates <span class="math notranslate nohighlight">\(J\geq2\)</span> categories in the dependent variable. Let <span class="math notranslate nohighlight">\(\eta_j\)</span> be the linear predictor for the <span class="math notranslate nohighlight">\(j\)</span>th category.
<span class="math notranslate nohighlight">\( \eta_j\equiv \beta_{j,0} + \beta_{j,1}x_{1,i} + ...\beta_{j,K}x_{K,i} \quad\forall y_i = j \)</span></p>
<p>The multinomial logit model gives the probability of <span class="math notranslate nohighlight">\(y_i=j\)</span> relative to some reference category <span class="math notranslate nohighlight">\(J\)</span> that is left out.</p>
<div class="amsmath math notranslate nohighlight" id="equation-3aa96cd6-eac7-4892-a370-66b758debda0">
<span class="eqno">(13.16)<a class="headerlink" href="#equation-3aa96cd6-eac7-4892-a370-66b758debda0" title="Permalink to this equation">#</a></span>\[\begin{equation}
<div class="amsmath math notranslate nohighlight" id="equation-35d8f320-74c1-4f3d-b5e4-38a73d908198">
<span class="eqno">(13.16)<a class="headerlink" href="#equation-35d8f320-74c1-4f3d-b5e4-38a73d908198" title="Permalink to this equation">#</a></span>\[\begin{equation}
Pr(y_i=j|X,\theta) = \frac{e^{\eta_j}}{1 + \sum_{v=1}^{J-1}e^{\eta_v}} \quad\text{for}\quad 1\leq j\leq J-1
\end{equation}\]</div>
<p>Once the <span class="math notranslate nohighlight">\(J-1\)</span> sets of coefficients are estimated, the probability of the final <span class="math notranslate nohighlight">\(J\)</span>th category is determined residually by the following expression.</p>
<div class="amsmath math notranslate nohighlight" id="equation-aaff249d-6d36-4f61-8fc4-c9121c77f20d">
<span class="eqno">(13.17)<a class="headerlink" href="#equation-aaff249d-6d36-4f61-8fc4-c9121c77f20d" title="Permalink to this equation">#</a></span>\[\begin{equation}
<div class="amsmath math notranslate nohighlight" id="equation-28b4d2d6-f5bd-41c9-8ff7-5c1bc19a56e6">
<span class="eqno">(13.17)<a class="headerlink" href="#equation-28b4d2d6-f5bd-41c9-8ff7-5c1bc19a56e6" title="Permalink to this equation">#</a></span>\[\begin{equation}
Pr(y_i=J|X,\theta) = \frac{1}{1 + \sum_{v=1}^{J-1}e^{\eta_v}}
\end{equation}\]</div>
<p>The analogous log odds ratio interpretation applies to the multinomial logit model.</p>
<div class="amsmath math notranslate nohighlight" id="equation-e49ab23e-2478-4263-bd3d-c5b0d6b42fe5">
<span class="eqno">(13.18)<a class="headerlink" href="#equation-e49ab23e-2478-4263-bd3d-c5b0d6b42fe5" title="Permalink to this equation">#</a></span>\[\begin{equation}
<div class="amsmath math notranslate nohighlight" id="equation-fb2b90c3-be20-4e3e-8c5e-3004b522ceec">
<span class="eqno">(13.18)<a class="headerlink" href="#equation-fb2b90c3-be20-4e3e-8c5e-3004b522ceec" title="Permalink to this equation">#</a></span>\[\begin{equation}
\ln\left(\frac{Pr(y_i=j|X,\theta)}{Pr(y_i=J|X,\theta)}\right) = \eta_j = \beta_{j,0} + \beta_{j,1}x_{1,i} + ...\beta_{j,K}x_{K,i} \quad\text{for}\quad 1\leq j \leq J-1
\end{equation}\]</div>
<p>This is the odds ratio of <span class="math notranslate nohighlight">\(y_i=j\)</span> relative to <span class="math notranslate nohighlight">\(y_i=J\)</span>. The interpretation of the <span class="math notranslate nohighlight">\(\beta_{j,k}\)</span> coefficient is the predicted change in the log odds ratio of <span class="math notranslate nohighlight">\(y_i=j\)</span> to <span class="math notranslate nohighlight">\(y_i=J\)</span> from a one-unit increase in variable <span class="math notranslate nohighlight">\(x_{k,i}\)</span>.</p>
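<p>A small sketch of the multinomial logit probabilities and log odds ratios above (the values of the linear predictors <code>eta</code> are invented purely for illustration):</p>

```python
import numpy as np

# Illustrative linear predictors eta_j for the J-1 = 3 non-reference categories.
eta = np.array([0.4, -1.1, 0.7])

denom = 1.0 + np.exp(eta).sum()
p_nonref = np.exp(eta) / denom  # Pr(y_i=j | X, theta) for j = 1, ..., J-1
p_ref = 1.0 / denom             # Pr(y_i=J | X, theta), the reference category

probs = np.append(p_nonref, p_ref)
print(probs, probs.sum())       # the J probabilities sum to one

# The log odds ratios relative to the reference category recover eta_j exactly.
print(np.log(p_nonref / p_ref))
```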
4 changes: 2 additions & 2 deletions python/SciPy.html
@@ -678,7 +678,7 @@ <h2> Contents </h2>
<div class="output stream highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span> message: The solution converged.
success: True
status: 1
fun: [-1.235e-13 -1.017e-13]
fun: [-1.239e-13 -1.013e-13]
x: [ 8.463e-01 2.331e+00]
nfev: 10
fjac: [[-8.332e-01 5.529e-01]
@@ -688,7 +688,7 @@ <h2> Contents </h2>

The solution for (x, y) is: [0.84630378 2.33101497]

The error values for eq1 and eq2 at the solution are: [-1.23456800e-13 -1.01696429e-13]
The error values for eq1 and eq2 at the solution are: [-1.2390089e-13 -1.0125234e-13]
</pre></div>
</div>
</div>
2 changes: 1 addition & 1 deletion searchindex.js

Large diffs are not rendered by default.

60 changes: 30 additions & 30 deletions struct_est/SMM.html
@@ -1289,7 +1289,7 @@ <h2> Contents </h2>
</div>
</div>
<div class="cell_output docutils container">
<div class="output stream highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>mu_SMM1_1= 612.3371352249138 sig_SMM1_1= 197.26434895262162
<div class="output stream highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>mu_SMM1_1= 612.3319048024534 sig_SMM1_1= 197.26303566149244
</pre></div>
</div>
</div>
@@ -1317,18 +1317,18 @@ <h2> Contents </h2>
<div class="cell_output docutils container">
<div class="output stream highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>Data mean of scores = 341.90869565217395 , Data variance of scores = 7827.997292398056

Model mean 1 = 341.6692110494425 , Model variance 1 = 7827.864496338213
Model mean 1 = 341.6690130387879 , Model variance 1 = 7827.866466512297

Error vector 1 = [-7.00434373e-04 -1.69642445e-05]
Error vector 1 = [-7.01013506e-04 -1.67125614e-05]

Results from scipy.optimize.minimize:
message: CONVERGENCE: NORM_OF_PROJECTED_GRADIENT_&lt;=_PGTOL
success: True
status: 0
fun: 4.908960959342433e-07
fun: 4.916992449560748e-07
x: [ 6.123e+02 1.973e+02]
nit: 17
jac: [-7.436e-07 2.350e-06]
jac: [-7.454e-07 2.357e-06]
nfev: 72
njev: 24
hess_inv: &lt;2x2 LbfgsInvHessProduct with dtype=float64&gt;
@@ -1453,19 +1453,19 @@ <h2> Contents </h2>
</div>
<div class="cell_output docutils container">
<div class="output stream highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>Jacobian matrix of derivatives of moment error functions is:
[[ 0.00089749 -0.00290433]
[-0.00114132 0.00445698]]
[[ 0.0008975 -0.00290434]
[-0.00114133 0.004457 ]]

Weighting matrix W is:
[[1. 0.]
[0. 1.]]

Variance-covariance matrix of estimated parameter vector is:
[[602535.18442996 163802.17330123]
[163802.17330123 44883.79092131]]
[[602499.98744184 163793.83380005]
[163793.83380005 44881.85524372]]

Std. err. mu_hat= 776.23139876583
Std. err. sig_hat= 211.85794986573154
Std. err. mu_hat= 776.2087267235777
Std. err. sig_hat= 211.8533814781294
</pre></div>
</div>
</div>
@@ -1554,12 +1554,12 @@ <h2> Contents </h2>
</div>
<div class="cell_output docutils container">
<div class="output stream highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>2nd stage est. of var-cov matrix of moment error vec across sims:
[[ 0.00033411 -0.00142289]
[-0.00142289 0.01592879]]
[[ 0.00033411 -0.00142288]
[-0.00142288 0.01592874]]

2nd stage est. of optimal weighting matrix:
[[4830.88530228 431.53378728]
[ 431.53378728 101.32749623]]
[[4830.85282784 431.53075653]
[ 431.53075653 101.32740541]]
</pre></div>
</div>
</div>
@@ -1577,7 +1577,7 @@ <h2> Contents </h2>
</div>
</div>
<div class="cell_output docutils container">
<div class="output stream highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>mu_SMM2_1= 619.4303074248937 sig_SMM2_1= 199.0747813692372
<div class="output stream highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>mu_SMM2_1= 619.4295828414915 sig_SMM2_1= 199.0745499003818
</pre></div>
</div>
</div>
@@ -1607,15 +1607,15 @@ <h2> Contents </h2>
[-0.0011259 0.00443426]]

Weighting matrix W is:
[[4830.88530228 431.53378728]
[ 431.53378728 101.32749623]]
[[4830.85282784 431.53075653]
[ 431.53075653 101.32740541]]

Variance-covariance matrix of estimated parameter vector is:
[[2397.38054356 745.29670501]
[ 745.29670501 232.01757158]]
[[2397.35566572 745.28979427]
[ 745.28979427 232.01568064]]

Std. err. mu_hat= 48.963052841479445
Std. err. sig_hat= 15.232123016118733
Std. err. mu_hat= 48.96279879380848
Std. err. sig_hat= 15.232060945338231
</pre></div>
</div>
</div>
@@ -1865,7 +1865,7 @@ <h2> Contents </h2>
</div>
</div>
<div class="cell_output docutils container">
<div class="output stream highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>mu_SMM4_1= 362.560593472098 sig_SMM4_1 46.5751519565219
<div class="output stream highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>mu_SMM4_1= 362.560593472098 sig_SMM4_1 46.57515195652185
message: CONVERGENCE: REL_REDUCTION_OF_F_&lt;=_FACTR*EPSMCH
success: True
status: 0
@@ -2016,8 +2016,8 @@ <h2> Contents </h2>
[[33.48770545 23.53170341]
[23.53170341 18.13459776]]

Std. err. mu_hat= 5.786856266717126
Std. err. sig_hat= 4.258473642422245
Std. err. mu_hat= 5.786856266717139
Std. err. sig_hat= 4.258473642422251
</pre></div>
</div>
</div>
@@ -2139,11 +2139,11 @@ <h2> Contents </h2>
</div>
</div>
<div class="cell_output docutils container">
<div class="output stream highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>mu_SMM4_2= 362.5605400454758 sig_SMM4_2 46.57507128065564
<div class="output stream highlight-myst-ansi notranslate"><div class="highlight"><pre><span></span>mu_SMM4_2= 362.5605400454758 sig_SMM4_2 46.57507128065559
message: Optimization terminated successfully
success: True
status: 0
fun: 0.9984266286568926
fun: 0.9984266286568925
x: [ 3.626e+02 4.658e+01]
nit: 1
jac: [ 5.467e-02 8.255e-02]
@@ -2188,7 +2188,7 @@ <h2> Contents </h2>

Error vector (pct. dev.) = [-0.98 0.04678571 0.11720721 -0.075 ]

Criterion func val = 0.9984266286568926
Criterion func val = 0.9984266286568925
</pre></div>
</div>
</div>
@@ -2273,8 +2273,8 @@ <h2> Contents </h2>
[[3.53697411 2.47391182]
[2.47391182 1.77184156]]

Std. err. mu_hat= 1.8806844791000217
Std. err. sig_hat= 1.3311053920876885
Std. err. mu_hat= 1.8806844791000032
Std. err. sig_hat= 1.3311053920876743
</pre></div>
</div>
</div>
