using AlgebraOfGraphics
@@ -421,10 +421,10 @@ 2 The Emotikon da
3 Downloading and importing the RDS file
This is similar to some of the code shown by Julius Krumbiegel on Monday. In the data directory of the emotikon project on osf.io under Data, the URL for the RDS data file is found to be https://osf.io/xawdb/. Note that we want version 2 of this file.
fn = Downloads.download("https://osf.io/xawdb/download?version=2");

dfrm = rcopy(R"readRDS($fn)")
525126×7 DataFrame (525101 rows omitted)
@@ -718,7 +718,7 @@ 3 Downloading and
Now write this file as an Arrow file and read it back in.
arrowfn = joinpath("data", "fggk21.arrow")
Arrow.write(arrowfn, dfrm; compress=:lz4)
tbl = Arrow.Table(arrowfn)
@@ -733,13 +733,13 @@ 3 Downloading and
:score Float64
filesize(arrowfn)
3077850
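The compressed Arrow file holds half a million rows in about 3 MB. Repetitive column values compress dramatically; the effect can be sketched with Python's standard-library zlib (an analogy only; Arrow's lz4 block compression is a different codec):

```python
# Sketch: why block compression shrinks columnar data.  A column of repeated
# category labels (like :Sex or :Test) is highly redundant, so a general
# compressor collapses it.  zlib stands in for lz4 here; the codec differs
# but the effect is the same in kind.
import zlib

# a made-up column of 100_000 category labels
column = ("female\n" * 50_000 + "male\n" * 50_000).encode()

compressed = zlib.compress(column, level=6)
print(len(column), "->", len(compressed))
```

Real columns of categorical labels or sorted ids behave similarly, which is why the full table fits in a few megabytes.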
df = DataFrame(tbl)
525126×7 DataFrame (525101 rows omitted)
@@ -1044,7 +1044,7 @@ 4 Avoiding needle
5 Creating the smaller table
Child = unique(select(df, :Child, :School, :Cohort, :Sex, :age))
108295×5 DataFrame (108270 rows omitted)
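The same normalization step, collapsing repeated child-level attributes onto one row per child, can be sketched in plain Python (the records and field values below are made up for illustration):

```python
# Sketch: build a child-level table by deduplicating on the child id.
# Each child's attributes repeat once per test score in the full table,
# so keying a dict on the id keeps exactly one copy (made-up records).
scores = [
    {"Child": "c1", "School": "s1", "Sex": "female", "age": 8.9, "Test": "Run"},
    {"Child": "c1", "School": "s1", "Sex": "female", "age": 8.9, "Test": "Star_r"},
    {"Child": "c2", "School": "s1", "Sex": "male", "age": 9.2, "Test": "Run"},
]

child_table = {}
for row in scores:
    # keep only the child-level columns; repeated ids overwrite identically
    child_table[row["Child"]] = {k: row[k] for k in ("School", "Sex", "age")}

print(len(child_table))  # one entry per distinct child
```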
@@ -1281,13 +1281,13 @@ 5 Creating the sm
length(unique(Child.Child)) # should be 108295
108295
filesize(
    Arrow.write("./data/fggk21_Child.arrow", Child; compress=:lz4)
)
@@ -1295,7 +1295,7 @@ 5 Creating the sm
1774946
filesize(
    Arrow.write(
        "./data/fggk21_Score.arrow",
@@ -1321,7 +1321,7 @@ 5 Creating the sm
Now read the Arrow tables in and reassemble the original table.
Score = DataFrame(Arrow.Table("./data/fggk21_Score.arrow"))
525126×3 DataFrame (525101 rows omitted)
@@ -1503,7 +1503,7 @@ 5 Creating the sm
At this point we can create the z-score column by standardizing the scores for each Test. The code to do this follows Julius’s presentation on Monday.
@transform!(groupby(Score, :Test), :zScore = @bycol zscore(:score))
525126×4 DataFrame (525101 rows omitted)
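The group-wise standardization has a simple Python analogue: within each test group, subtract the group mean and divide by the group standard deviation. This sketch uses the population standard deviation from the standard library; StatsBase's zscore uses the sample standard deviation, so the constants differ slightly:

```python
# Sketch: z-score within groups.  For each (test, score) row, standardize
# the score by its own test group's mean and standard deviation.
from collections import defaultdict
from statistics import fmean, pstdev

rows = [("Run", 10.0), ("Run", 12.0), ("Run", 14.0),
        ("S20_r", 4.0), ("S20_r", 6.0)]

by_test = defaultdict(list)
for test, score in rows:
    by_test[test].append(score)

zscores = [(test, (score - fmean(by_test[test])) / pstdev(by_test[test]))
           for test, score in rows]
print(zscores)
```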
@@ -1712,7 +1712,7 @@ 5 Creating the sm
Child = DataFrame(Arrow.Table("./data/fggk21_Child.arrow"))
108295×5 DataFrame (108270 rows omitted)
@@ -1949,7 +1949,7 @@ 5 Creating the sm
df1 = disallowmissing!(leftjoin(Score, Child; on=:Child))
525126×8 DataFrame (525101 rows omitted)
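A left join keyed on the child id can be sketched in plain Python; every score row picks up the attributes of its child (toy data, illustrative field names):

```python
# Sketch: a left join on the child id.  Each score row is widened with the
# matching child-level attributes (toy data, illustrative field names).
child = {
    "c1": {"Sex": "female", "age": 8.9},
    "c2": {"Sex": "male", "age": 9.2},
}
score = [
    {"Child": "c1", "Test": "Run", "zScore": 0.42},
    {"Child": "c2", "Test": "Run", "zScore": -0.10},
    {"Child": "c1", "Test": "Star_r", "zScore": 1.01},
]

# dict-merge keeps all score columns and appends the child columns
joined = [{**row, **child[row["Child"]]} for row in score]
print(joined[0])
```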
@@ -2287,7 +2287,7 @@ 5 Creating the sm
6 Discovering patterns in the data
One of the motivations for creating the Child table was to be able to bin the ages according to the age of each child, not the age of each Child-Test combination. Not all children have all 5 test results. We can check the number of results by grouping on :Child and evaluating the number of rows in each group.
nobsChild = combine(groupby(Score, :Child), nrow => :ntest)
108295×2 DataFrame (108270 rows omitted)
@@ -2441,7 +2441,7 @@ 6 Discovering pat
Now create a table of the number of children with 1, 2, …, 5 test scores.
combine(groupby(nobsChild, :ntest), nrow)
5×2 DataFrame
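Both counting steps, tests per child and then children per test count, map onto Python's collections.Counter (toy child ids for illustration):

```python
# Sketch: tests per child, then children per test count, as two Counters.
from collections import Counter

score_children = ["c1", "c1", "c1", "c2", "c2", "c3"]  # toy child ids

ntest = Counter(score_children)         # number of test scores per child
distribution = Counter(ntest.values())  # number of children per test count
print(ntest, distribution)
```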
@@ -2491,7 +2491,7 @@ 6 Discovering pat
A natural question at this point is whether there is something about those students who have few observations. For example, are they from only a few schools?
One approach to examining properties like this is to add the number of observations for each child to the Child table. Later we can group that table according to :ntest to look at properties of the children by :ntest.
gdf = groupby(
    disallowmissing!(leftjoin(Child, nobsChild; on=:Child)), :ntest
)
@@ -3031,7 +3031,7 @@ 6 Discovering pat
Are the sexes represented more-or-less equally?
combine(groupby(first(gdf), :Sex), nrow => :nchild)
2×2 DataFrame
@@ -3064,7 +3064,7 @@ 6 Discovering pat
combine(groupby(last(gdf), :Sex), nrow => :nchild)
2×2 DataFrame
diff --git a/bootstrap.html b/bootstrap.html
index 1605ee8..44aceb3 100644
--- a/bootstrap.html
+++ b/bootstrap.html
@@ -391,7 +391,7 @@ 1 The parametric
A parametric bootstrap is used with a parametric model, m, that has been fit to data. The procedure is to simulate n response vectors from m using the estimated parameter values and refit m to these responses in turn, accumulating the statistics of interest at each iteration.
The parameters of a LinearMixedModel object are the fixed-effects parameters, β, the standard deviation, σ, of the per-observation noise, and the covariance parameter, θ, that defines the variance-covariance matrices of the random effects. A technical description of the covariance parameter can be found in the MixedModels.jl docs. Lisa Schwetlick and Daniel Backhaus have provided a more beginner-friendly description of the covariance parameter in the documentation for MixedModelsSim.jl. For today’s purposes – looking at the uncertainty in the estimates from a fitted model – we can simply use values from the fitted model, but we will revisit the parametric bootstrap as a convenient way to simulate new data, potentially with different parameter values, for power analysis.
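The bootstrap procedure itself is not specific to mixed models. As a plain-Python sketch under simplified assumptions (an ordinary least-squares line instead of a LinearMixedModel, made-up data), the simulate-refit-accumulate loop looks like this:

```python
# A minimal parametric bootstrap for a simple linear model y = a + b*x + e:
# simulate new responses from the fitted parameters, refit, and accumulate
# the statistic of interest.  Plain-Python sketch, not MixedModels.jl's code.
import random
import statistics

def fit_line(xs, ys):
    """Ordinary least-squares fit; returns (intercept, slope, resid_sd)."""
    n = len(xs)
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = my - slope * mx
    resid = [y - (intercept + slope * x) for x, y in zip(xs, ys)]
    sd = (sum(r * r for r in resid) / (n - 2)) ** 0.5
    return intercept, slope, sd

rng = random.Random(42)
xs = list(range(10))
ys = [1.0 + 2.0 * x + rng.gauss(0, 1) for x in xs]  # made-up data
a_hat, b_hat, s_hat = fit_line(xs, ys)

# simulate from the fitted model and refit, collecting the slope each time
boot_slopes = []
for _ in range(1000):
    ysim = [a_hat + b_hat * x + rng.gauss(0, s_hat) for x in xs]
    boot_slopes.append(fit_line(xs, ysim)[1])

boot_slopes.sort()
lo, hi = boot_slopes[25], boot_slopes[-26]  # ~95% percentile interval
print(round(lo, 2), round(hi, 2))
```

The accumulated slopes play the role of the bootstrap table of parameter estimates below.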
Attach the packages to be used
using AlgebraOfGraphics
@@ -414,9 +414,9 @@ 1 The parametric
2 A model of moderate complexity
The kb07 data (Kronmüller & Barr, 2007) are one of the datasets provided by the MixedModels package.
kb07 = dataset(:kb07)
Arrow.Table with 1789 rows, 7 columns, and schema:
:subj String
:item String
@@ -428,10 +428,10 @@ 2 A model of mode
Convert the table to a DataFrame for summary.
kb07 = DataFrame(kb07)
describe(kb07)
7×7 DataFrame
@@ -533,7 +533,7 @@ 2 A model of mode
The experimental factors (spkr, prec, and load) are two-level factors.
contrasts = Dict(:spkr => EffectsCoding(),
                 :prec => EffectsCoding(),
                 :load => EffectsCoding(),
@@ -543,16 +543,16 @@ 2 A model of mode
The EffectsCoding contrast is used with these to create a ±1 encoding. Furthermore, Grouping contrasts are assigned to the subj and item factors. This is not a contrast per se but an indication that these factors will be used as grouping factors for random effects and, therefore, there is no need to create a contrast matrix. For large numbers of levels in a grouping factor, an attempt to create a contrast matrix may cause a memory overflow. It is not important in these cases but is a good practice in any case.
We can look at an initial fit of moderate complexity:
form = @formula(rt_trunc ~ 1 + spkr * prec * load +
                (1 + spkr + prec + load | subj) +
                (1 + spkr + prec + load | item))
m0 = fit(MixedModel, form, kb07; contrasts)
-Minimizing 799 Time: 0:00:01 ( 1.68 ms/it)
- objective: 28637.123623229592
+Minimizing 894 Time: 0:00:00 ( 0.15 ms/it)
+ objective: 28637.97101084821
@@ -577,79 +577,79 @@ 2 A model of mode
(Intercept)
-2181.6729
-77.3136
-28.22
+2181.6424
+77.3505
+28.20
<1e-99
-301.8062
-362.2579
+301.8688
+362.4643
spkr: old
-67.7491
-18.2664
-3.71
+67.7496
+17.9604
+3.77
0.0002
-42.3795
-40.6807
+33.0582
+41.1114
prec: maintain
--333.9205
-47.1558
+-333.9200
+47.1534
-7.08
<1e-11
-61.9630
-246.9158
+58.8506
+247.3123
load: yes
-78.7702
-19.5298
-4.03
+78.8006
+19.7270
+3.99
<1e-04
-64.9751
-42.3890
+66.9612
+43.3929
spkr: old & prec: maintain
--21.9655
-15.8074
+-21.9960
+15.8191
-1.39
-0.1647
+0.1644
spkr: old & load: yes
-18.3837
-15.8074
+18.3832
+15.8191
1.16
-0.2448
+0.2452
prec: maintain & load: yes
-4.5333
-15.8074
+4.5327
+15.8191
0.29
-0.7743
+0.7745
spkr: old & prec: maintain & load: yes
-23.6073
-15.8074
+23.6377
+15.8191
1.49
-0.1353
+0.1351
Residual
-668.5542
+669.0519
@@ -662,9 +662,9 @@ 2 A model of mode
The default display in Quarto uses the pretty MIME show method for the model and omits the estimated correlations of the random effects.
The VarCorr extractor displays these.
VarCorr(m0)
@@ -690,8 +690,8 @@ 2 A model of mode
subj
(Intercept)
-91087.0055
-301.8062
+91124.7853
+301.8688
@@ -699,35 +699,35 @@ 2 A model of mode
spkr: old
-1796.0221
-42.3795
-+0.79
+1092.8468
+33.0582
++1.00
prec: maintain
-3839.4126
-61.9630
--0.59
-+0.02
+3463.3971
+58.8506
+-0.62
+-0.62
load: yes
-4221.7638
-64.9751
+4483.7997
+66.9612
++0.36
+0.36
-+0.85
-+0.54
++0.51
item
(Intercept)
-131230.7914
-362.2579
+131380.3392
+362.4643
@@ -735,35 +735,35 @@ 2 A model of mode
spkr: old
-1654.9232
-40.6807
-+0.44
+1690.1464
+41.1114
++0.42
prec: maintain
-60967.4037
-246.9158
+61163.3893
+247.3123
-0.69
-+0.35
++0.37
load: yes
-1796.8284
-42.3890
-+0.32
-+0.16
--0.14
+1882.9463
+43.3929
++0.29
++0.14
+-0.13
Residual
-446964.7062
-668.5542
+447630.5113
+669.0519
@@ -773,12 +773,12 @@ 2 A model of mode
None of the two-factor or three-factor interaction terms in the fixed effects is significant. In the random-effects terms only the scalar random effects and the prec random effect for item appear to be warranted, leading to the reduced formula
# formula f4 from https://doi.org/10.33016/nextjournal.100002
form = @formula(rt_trunc ~ 1 + spkr * prec * load + (1 | subj) + (1 + prec | item))
m1 = fit(MixedModel, form, kb07; contrasts)
@@ -886,9 +886,9 @@ 2 A model of mode
VarCorr(m1)
@@ -903,7 +903,7 @@ 2 A model of mode
item
(Intercept)
-133026.918
+133026.917
364.729
@@ -933,9 +933,9 @@ 2 A model of mode
These two models are nested and can be compared with a likelihood-ratio test.
MixedModels.likelihoodratiotest(m0, m1)
@@ -967,10 +967,10 @@ 2 A model of mode
rt_trunc ~ 1 + spkr + prec + load + spkr & prec + spkr & load + prec & load + spkr & prec & load + (1 + spkr + prec + load | subj) + (1 + spkr + prec + load | item)
29
-28637
+28638
21
16
-0.1650
+0.1979
@@ -981,11 +981,14 @@ 2 A model of mode
3 Bootstrap basics
To bootstrap the model parameters, first initialize a random number generator then create a bootstrap sample and extract the table of parameter estimates from it.
const RNG = MersenneTwister(42)
samp = parametricbootstrap(RNG, 5_000, m1)
tbl = samp.tbl
WARNING: redefinition of constant RNG. This may fail, cause incorrect answers, or produce other errors.
Table with 18 columns and 5000 rows:
obj β1 β2 β3 β4 β5 β6 ⋯
┌────────────────────────────────────────────────────────────────────
@@ -1016,29 +1019,29 @@ 3 Bootstrap basic
An empirical density plot of the estimates of the residual standard deviation is obtained as
plt = data(tbl) * mapping(:σ) * AoG.density()
draw(plt; axis=(;title="Parametric bootstrap estimates of σ"))
A density plot of the estimates of the standard deviation of the random effects is obtained as
plt = data(tbl) * mapping(
    [:σ1, :σ2, :σ3] .=> "Bootstrap replicates of standard deviations";
    color=dims(1) => renamer(["Item intercept", "Item speaker", "Subj"])
) * AoG.density()
draw(plt; figure=(;supertitle="Parametric bootstrap estimates of variance components"))
The bootstrap sample can be used to generate intervals that cover a certain percentage of the bootstrapped values. We refer to these as “coverage intervals”, similar to a confidence interval. The shortest such intervals, obtained with the shortestcovint extractor, correspond to a highest posterior density interval in Bayesian inference.
We generate these for all random and fixed effects:
confint(samp)
DictTable with 2 columns and 13 rows:
par lower upper
────┬─────────────────────
@@ -1050,33 +1053,33 @@ 3 Bootstrap basic
β6 │ -11.3574 51.5299
β7 │ -27.1799 36.3332
β8 │ -9.28905 53.5917
- ρ1 │ -0.912778 -0.471128
+ ρ1 │ -0.912778 -0.471124
σ │ 654.965 700.928
σ1 │ 265.276 456.967
- σ2 │ 179.402 321.016
- σ3 │ 232.096 359.744
+ σ2 │ 179.402 321.017
+ σ3 │ 232.096 359.745
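The shortest coverage interval mentioned above can be sketched in Python: sort the bootstrap sample, then slide a window that covers 95% of the points and keep the narrowest one. This is an illustration of the idea, not the shortestcovint implementation:

```python
# Sketch of a "shortest coverage interval": among all windows containing 95%
# of the sorted bootstrap sample, take the narrowest.  For a unimodal sample
# this matches the highest-density idea described in the text.
import random

def shortest_covint(sample, level=0.95):
    xs = sorted(sample)
    n = len(xs)
    k = int(round(level * n))  # number of points the interval must cover
    width, i = min((xs[i + k - 1] - xs[i], i) for i in range(n - k + 1))
    return xs[i], xs[i + k - 1]

rng = random.Random(1)
sample = [rng.gauss(0.0, 1.0) for _ in range(5000)]
lo, hi = shortest_covint(sample)
print(lo, hi)  # roughly symmetric about 0 for a Gaussian sample
```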
draw(
    data(samp.β) * mapping(:β; color=:coefname) * AoG.density();
    figure=(; resolution=(800, 450)),
)
For the fixed effects, MixedModelsMakie provides a convenience interface to plot the combined coverage intervals and density plots:
ridgeplot(samp)
Often the intercept will be on a different scale and potentially less interesting, so we can stop it from being included in the plot:
ridgeplot(samp; show_intercept=false, xlabel="Bootstrap density and 95%CI")
@@ -1084,10 +1087,10 @@ 3 Bootstrap basic
4 Singularity
Let’s consider the classic dyestuff dataset:
dyestuff = dataset(:dyestuff)
mdye = fit(MixedModel, @formula(yield ~ 1 + (1 | batch)), dyestuff)
@@ -1120,30 +1123,30 @@ 4 Singularity
sampdye = parametricbootstrap(MersenneTwister(1234321), 10_000, mdye)
tbldye = sampdye.tbl
Table with 5 columns and 10000 rows:
obj β1 σ σ1 θ1
┌────────────────────────────────────────────────
1 │ 339.022 1509.13 67.4315 14.312 0.212245
2 │ 322.689 1538.08 47.9831 25.5673 0.53284
3 │ 324.002 1508.02 50.1346 21.7622 0.434076
- 4 │ 331.887 1538.47 53.2238 41.0559 0.771383
+ 4 │ 331.887 1538.47 53.2238 41.0559 0.771382
5 │ 317.771 1520.62 45.2975 19.1802 0.423428
6 │ 315.181 1536.94 36.7556 49.1832 1.33812
7 │ 333.641 1519.88 53.8161 46.712 0.867993
8 │ 325.729 1528.43 47.8989 37.6367 0.785752
9 │ 311.601 1497.46 41.4 15.1257 0.365355
10 │ 335.244 1532.65 64.616 0.0 0.0
- 11 │ 327.935 1552.54 57.2036 0.485275 0.00848329
+ 11 │ 327.935 1552.54 57.2036 0.485281 0.00848339
12 │ 323.861 1519.28 49.355 24.3703 0.493776
13 │ 332.736 1509.04 59.6272 18.2905 0.306747
- 14 │ 328.243 1531.7 51.5431 32.4744 0.630043
+ 14 │ 328.243 1531.7 51.5431 32.4743 0.630042
15 │ 336.186 1536.17 64.0205 15.243 0.238096
16 │ 329.468 1526.42 58.6856 0.0 0.0
- 17 │ 320.086 1517.67 43.218 35.9663 0.832207
+ 17 │ 320.086 1517.67 43.218 35.9663 0.832206
18 │ 325.887 1497.86 50.8753 25.9059 0.509205
19 │ 311.31 1529.24 33.8976 49.6557 1.46487
20 │ 309.404 1549.71 33.987 41.1105 1.20959
@@ -1153,28 +1156,28 @@ 4 Singularity
plt = data(tbldye) * mapping(:σ1) * AoG.density()
draw(plt; axis=(;title="Parametric bootstrap estimates of σ_batch"))
Notice that this density plot has a spike, or mode, at zero. Although this mode appears to be diffuse, this is an artifact of the way that density plots are created. In fact, it is a pulse, as can be seen from a histogram.
plt = data(tbldye) * mapping(:σ1) * AoG.histogram(;bins=100)
draw(plt; axis=(;title="Parametric bootstrap estimates of σ_batch"))
A value of zero for the standard deviation of the random effects is an example of a singular covariance. It is easy to detect the singularity in the case of a scalar random-effects term. However, it is not as straightforward to detect singularity in vector-valued random-effects terms.
For example, if we bootstrap a model fit to the sleepstudy data
sleepstudy = dataset(:sleepstudy)
msleep = fit(MixedModel, @formula(reaction ~ 1 + days + (1 + days | subj)),
             sleepstudy)
@@ -1215,43 +1218,43 @@ 4 Singularity
sampsleep = parametricbootstrap(MersenneTwister(666), 10_000, msleep)
tblsleep = sampsleep.tbl
Table with 10 columns and 10000 rows:
obj β1 β2 σ σ1 σ2 ρ1 ⋯
┌────────────────────────────────────────────────────────────────────
1 │ 1721.95 252.488 11.0328 22.4544 29.6185 6.33343 0.233383 ⋯
- 2 │ 1760.85 260.763 8.55352 27.3835 20.8063 4.32895 0.914676 ⋯
+ 2 │ 1760.85 260.763 8.55352 27.3835 20.8062 4.32893 0.914691 ⋯
3 │ 1750.88 246.709 12.4613 25.9951 15.8702 6.33404 0.200358 ⋯
4 │ 1777.33 247.683 12.9824 27.7966 27.5413 4.9878 0.121411 ⋯
- 5 │ 1738.05 245.649 10.5792 25.3596 21.5208 4.26131 0.0526768 ⋯
+ 5 │ 1738.05 245.649 10.5792 25.3596 21.5208 4.26131 0.0526769 ⋯
6 │ 1751.25 255.669 10.1984 26.1432 22.5389 4.58209 0.225968 ⋯
7 │ 1727.51 248.986 7.62095 24.6451 19.0858 4.34881 0.212916 ⋯
8 │ 1754.18 246.075 11.0469 26.9407 19.8341 4.55961 -0.202146 ⋯
9 │ 1757.47 245.407 13.7475 25.8265 20.0014 7.7647 -0.266385 ⋯
10 │ 1752.8 253.911 11.4977 25.7077 20.6409 6.27298 0.171494 ⋯
- 11 │ 1707.8 248.887 10.1608 23.9684 10.5923 4.32048 1.0 ⋯
- 12 │ 1773.69 252.542 10.7379 26.8795 27.7959 6.2055 0.156476 ⋯
+ 11 │ 1707.8 248.887 10.1608 23.9684 10.5923 4.32039 1.0 ⋯
+ 12 │ 1773.69 252.542 10.7379 26.8795 27.7956 6.20553 0.156471 ⋯
13 │ 1761.27 254.712 11.0373 25.7998 23.2005 7.30831 0.368175 ⋯
- 14 │ 1737.0 260.299 10.5659 24.6504 29.0113 4.26877 -0.0785722 ⋯
- 15 │ 1760.12 258.949 10.1464 27.2089 8.02971 7.01898 0.727216 ⋯
+ 14 │ 1737.0 260.299 10.5659 24.6504 29.0113 4.26877 -0.0785721 ⋯
+ 15 │ 1760.12 258.949 10.1464 27.2089 8.02793 7.01923 0.727289 ⋯
16 │ 1723.7 249.204 11.7868 24.9861 18.6887 3.08433 0.633218 ⋯
17 │ 1734.14 262.586 8.96611 24.0011 26.7969 5.37598 0.29709 ⋯
- 18 │ 1788.8 260.376 11.658 28.6099 26.1245 5.85586 0.0323584 ⋯
- 19 │ 1752.44 239.962 11.0195 26.2388 23.2242 4.45586 0.482518 ⋯
- 20 │ 1752.92 258.171 11.6339 25.7146 27.3026 4.87036 0.225691 ⋯
+ 18 │ 1788.8 260.376 11.658 28.6099 26.1245 5.85586 0.0323581 ⋯
+ 19 │ 1752.44 239.962 11.0195 26.2388 23.2242 4.45586 0.482511 ⋯
+ 20 │ 1752.92 258.171 11.6339 25.7146 27.3026 4.87036 0.22569 ⋯
21 │ 1740.81 254.09 7.91985 25.2195 16.2247 6.08679 0.462549 ⋯
22 │ 1756.6 245.791 10.3434 26.2627 23.289 5.50225 -0.143375 ⋯
- 23 │ 1759.01 256.131 9.10794 27.136 27.5008 3.51222 1.0 ⋯
+ 23 │ 1759.01 256.131 9.10794 27.136 27.501 3.51224 1.0 ⋯
⋮ │ ⋮ ⋮ ⋮ ⋮ ⋮ ⋮ ⋮ ⋱
The singularity can be exhibited as a standard deviation of zero or as a correlation of ±1.
confint(sampsleep)
DictTable with 2 columns and 6 rows:
par lower upper
────┬───────────────────
@@ -1264,41 +1267,41 @@ 4 Singularity
A histogram of the estimated correlations from the bootstrap sample has a spike at +1.
plt = data(tblsleep) * mapping(:ρ1) * AoG.histogram(;bins=100)
draw(plt; axis=(;title="Parametric bootstrap samples of correlation of random effects"))
or, as a count,
count(tblsleep.ρ1 .≈ 1)
291
Close examination of the histogram shows a few values of -1.
count(tblsleep.ρ1 .≈ -1)
2
Furthermore, there are even a few cases where the estimate of the standard deviation of the random effect for the intercept is zero.
count(tblsleep.σ1 .≈ 0)
6
There is a general condition to check for singularity of an estimated covariance matrix or matrices in a bootstrap sample. The parameter optimized in the estimation is θ, the relative covariance parameter. Some of the elements of this parameter vector must be non-negative and, when one of these components is approximately zero, one of the covariance matrices will be singular.
The issingular method for a MixedModel object tests whether a parameter vector θ corresponds to a boundary or singular fit. This operation is encapsulated in a method for the issingular function that works on MixedModelBootstrap objects.
count(issingular(sampsleep))
299
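The boundary check behind issingular can be sketched in Python. Here, for simplicity, every θ component is treated as constrained at a lower bound of zero; in MixedModels.jl only some components carry that constraint:

```python
# Sketch of an issingular-style check: a bootstrap replicate is flagged as
# singular when any boundary-constrained θ component is (approximately) zero.
# The θ vectors below are made up for illustration.
import math

theta_samples = [
    [0.85, 0.12, 0.03],  # interior: not singular
    [0.91, 0.0, 0.04],   # a zero component: singular
    [0.0, 0.0, 0.0],     # fully degenerate: singular
    [1.2, 0.4, 1e-12],   # numerically zero: singular
]

def is_singular(theta, atol=1e-8):
    return any(math.isclose(t, 0.0, abs_tol=atol) for t in theta)

print(sum(map(is_singular, theta_samples)))
```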
diff --git a/bootstrap_files/figure-html/cell-12-output-1.svg b/bootstrap_files/figure-html/cell-12-output-1.svg
index 2307d3e..26a4283 100644
--- a/bootstrap_files/figure-html/cell-12-output-1.svg
+++ b/bootstrap_files/figure-html/cell-12-output-1.svg
@@ -2,183 +2,165 @@