diff --git a/inst/tutorials/discovr_08/discovr_08.Rmd b/inst/tutorials/discovr_08/discovr_08.Rmd index 7b9199e..e5d1de4 100644 --- a/inst/tutorials/discovr_08/discovr_08.Rmd +++ b/inst/tutorials/discovr_08/discovr_08.Rmd @@ -762,7 +762,7 @@ quiz( The *p*-values in the table all tell us the long-run probability that we would get a value of *t* at least as large as the ones we have if the true relationship between each predictor and album sales was 0 (i.e., *b* = 0). In all cases the probabilities are less than 0.001, which researchers would generally take to mean that the observed $\hat{b}$s are significantly different from zero. Given that the $\hat{b}$s quantify the relationship between each predictor and album sales, this conclusion implies that each predictor significantly predicts album sales.
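As a sketch of where these statistics could come from, the two models being compared might be fitted and summarised along these lines. The tibble and variable names (`album_tib`, `sales`, `adverts`, `airplay`, `image`) are assumptions based on the text, not the tutorial's own code.

```r
# Hypothetical objects: album_tib with sales, adverts, airplay, image
album_lm  <- lm(sales ~ adverts, data = album_tib)                    # advertising only
album2_lm <- lm(sales ~ adverts + airplay + image, data = album_tib)  # full model

anova(album_lm, album2_lm)               # F-test comparing the two models
summary(album2_lm)$r.squared             # variance in sales explained by the full model
broom::tidy(album2_lm, conf.int = TRUE)  # b-hats, t-statistics and p-values
```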
- `r pencil()` **Report it!** + `r pencil()` **Report**`r rproj()` The model that included the band's image and airplay was a significantly better fit than the model that included advertising budget alone, `r report_aov_compare(album_aov)`. The final model explained `r 100*round(album2_fit$r.squared, 3)`% of the variance in album sales. Advertising budget significantly predicted album sales $\hat{b}$ = `r report_pars(album2_par, row = 2, df_r = album2_fit$df.residual)`, as did airplay $\hat{b}$ = `r report_pars(album2_par, row = 3, df_r = album2_fit$df.residual)` and image, $\hat{b}$ = `r report_pars(album2_par, row = 4, df_r = album2_fit$df.residual)`. diff --git a/inst/tutorials/discovr_09/discovr_09.Rmd b/inst/tutorials/discovr_09/discovr_09.Rmd index d4fc043..757c6ad 100644 --- a/inst/tutorials/discovr_09/discovr_09.Rmd +++ b/inst/tutorials/discovr_09/discovr_09.Rmd @@ -430,7 +430,7 @@ question("Which of these statements about Cohen's *d* is **NOT** correct?", ```
- `r pencil()` **Report it!** + `r pencil()` **Report**`r rproj()` On average, participants given a cloak of invisibility engaged in more acts of mischief (*M* = `r cloak_mod$estimate[2]`, *SE* = 0.48) than those not given a cloak (*M* = `r cloak_mod$estimate[1]`, *SE* = 0.55). Having a cloak of invisibility did not significantly affect the amount of mischief a person got up to: the mean difference, *M* = `r round(cloak_mod$estimate[2]-cloak_mod$estimate[1], 2)`, 95% CI [`r round(cloak_mod$conf.int[1], 2)`, `r round(cloak_mod$conf.int[2], 2)`], was not significantly different from 0, *t*(`r round(as.numeric(cloak_mod$parameter), 2)`) = `r round(cloak_mod$statistic, 2)`, *p* = `r round(cloak_mod$p.value, 2)`. This effect was very large, `r report_es(d_cloak, col = "Cohens_d")`, but the confidence interval for the effect size contained zero. If this confidence interval is one of the 95% that captures the population effect size then this suggests that a zero effect is plausible.
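A sketch of the test and effect size behind this summary. The data (`cloak_tib`) and variables (`mischief`, `cloak`) appear elsewhere in the tutorial; the exact options used there (e.g. Welch vs pooled variances) are assumptions.

```r
# Independent design: Welch's t-test (R's default) and Cohen's d
cloak_mod <- t.test(mischief ~ cloak, data = cloak_tib)
d_cloak   <- effectsize::cohens_d(mischief ~ cloak, data = cloak_tib)

cloak_mod  # t, df, p and the 95% CI for the mean difference
d_cloak    # Cohen's d with its 95% CI
```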
@@ -541,7 +541,7 @@ effectsize::cohens_d(mischief ~ cloak, data = cloak_rm_tib) |>
- `r pencil()` **Report it!** + `r pencil()` **Report**`r rproj()` On average, participants given a cloak of invisibility engaged in more acts of mischief (*M* = 5, *SE* = 0.48) than those not given a cloak (*M* = 3.75, *SE* = 0.55). Having a cloak of invisibility affected the amount of mischief a person got up to: the mean difference, *M* = `r round(as.numeric(cloak_rm_mod$estimate), 2)`, 95% CI [`r round(cloak_rm_mod$conf.int[1], 2)`, `r round(cloak_rm_mod$conf.int[2], 2)`], was significantly different from 0, *t*(`r round(as.numeric(cloak_rm_mod$parameter), 2)`) = `r round(cloak_rm_mod$statistic, 2)`, *p* = `r round(cloak_rm_mod$p.value, 2)`. This effect was very large, `r report_es(d_cloak, col = "Cohens_d")`, but the confidence interval for the effect size contained zero. If this confidence interval is one of the 95% that captures the population effect size then this suggests that a zero effect is plausible.
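A sketch of a paired test consistent with this summary, assuming the repeated-measures data (`cloak_rm_tib`) contains an id column and can be spread to one column per condition; the condition column names below are hypothetical.

```r
# Spread to wide format (one column per condition), then run a paired t-test
cloak_wide <- tidyr::pivot_wider(cloak_rm_tib, names_from = cloak, values_from = mischief)

cloak_rm_mod <- t.test(cloak_wide$cloak, cloak_wide$no_cloak, paired = TRUE)
cloak_rm_mod  # mean difference, 95% CI, t, df and p
```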
@@ -595,7 +595,7 @@ cloak_rob <- WRS2::yuen(mischief ~ cloak, data = cloak_tib)
- `r pencil()` **Report it!** + `r pencil()` **Report**`r rproj()` There was not a significant difference in mischief scores across the two cloak groups, $T_y$ = `r round(cloak_rob$test, 2)`, *p* = `r round(cloak_rob$p.value, 3)`. On average the no cloak group performed one less mischievous act, *M* = `r cloak_rob$diff` with a 95% confidence interval for the trimmed mean difference ranging from `r round(cloak_rob$conf.int[1], 2)` to `r round(cloak_rob$conf.int[2], 2)`.
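The robust test reported here comes from the `WRS2::yuen()` call shown above; as a sketch, its components map onto the write-up like this.

```r
cloak_rob <- WRS2::yuen(mischief ~ cloak, data = cloak_tib)

cloak_rob$test      # Yuen's T_y
cloak_rob$p.value   # p-value
cloak_rob$diff      # trimmed mean difference
cloak_rob$conf.int  # 95% CI for the trimmed mean difference
```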
diff --git a/inst/tutorials/discovr_10/discovr_10.Rmd b/inst/tutorials/discovr_10/discovr_10.Rmd index 5fc506e..8071356 100644 --- a/inst/tutorials/discovr_10/discovr_10.Rmd +++ b/inst/tutorials/discovr_10/discovr_10.Rmd @@ -451,7 +451,7 @@ agg_rob <- parameters::model_parameters(agg_lm, vcov = TRUE, vcov.type = "HC4")
- `r pencil()` **Report it!** + `r pencil()` **Report**`r rproj()` Consistent with the non-robust model, the robust model shows a significant moderation effect, $\hat{b}$ = `r report_pars(agg_rob, row = 4, digits = 3)`.
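As a sketch, `agg_lm` could be a linear model containing the interaction (moderation) term, with robust standard errors then requested as in the call shown above. The tibble and variable names (`video_tib`, `aggression`, `caunts`, `vid_game`) are assumptions.

```r
# Moderation as an interaction in a linear model (hypothetical names)
agg_lm <- lm(aggression ~ caunts*vid_game, data = video_tib)

# HC4 robust standard errors for the parameters (call echoed from the tutorial above)
agg_rob <- parameters::model_parameters(agg_lm, vcov = TRUE, vcov.type = "HC4")
agg_rob
```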
@@ -557,7 +557,7 @@ When callous traits fall below $-17.10$, the values of *y* (the relationship bet The simple slopes analysis reports three models: the model for time spent gaming as a predictor of aggression (1) when callous traits are low (to be precise when the value of callous traits is $-9.62$); (2) at the mean value of callous traits (because we centred callous traits its mean value is 0, as indicated in the output); and (3) when the value of callous traits is 9.62 (i.e., high). We interpret these models as we would any other linear model by looking at the value of b (called Est. in the output), and its significance.
- `r pencil()` **Report it!** + `r pencil()` **Report**`r rproj()` When callous traits are low, there is a non-significant negative relationship between time spent gaming and aggression, $\hat{b} = -0.09$, 95% CI [$-0.30$, $0.12$], $t = -0.86$, $p = 0.39$. At the mean value of callous traits, there is a significant positive relationship between time spent gaming and aggression, $\hat{b} = 0.17$, 95% CI [$0.02$, $0.32$], $t = 2.23$, $p = 0.03$. When callous traits are high, there is a significant positive relationship between time spent gaming and aggression, $\hat{b} = 0.43$, 95% CI [$0.23$, $0.63$], $t = 4.26$, $p < 0.01$.
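One way to reproduce simple slopes like these is to estimate the slope of time spent gaming at low (−1 SD), mean and high (+1 SD) values of centred callous traits; the ±9.62 values come from the text, but the model and variable names are assumptions.

```r
# Simple slopes of vid_game at three values of (centred) caunts
emmeans::emtrends(
  agg_lm, ~ caunts, var = "vid_game",
  at = list(caunts = c(-9.62, 0, 9.62))
  ) |>
  emmeans::test()   # t-tests for each simple slope
```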
@@ -817,7 +817,7 @@ In the second output, for the effects that we assigned labels (a, b, c, indirect The bottom row shows the total effect of pornography consumption on infidelity (outcome). Remember that the total effect is the effect of the predictor on the outcome when the mediator is not present in the model. When relationship commitment is not in the model, pornography consumption significantly predicts infidelity, $\hat{b}$ = `r report_pars(porn_par, row = 11, digits = 2)`. As is the case when we include relationship commitment in the model, pornography consumption has a positive relationship with infidelity (as shown by the positive *b*-value). The most important part of the output is the penultimate row because it displays the results for the indirect effect of pornography consumption on infidelity (i.e., the effect via relationship commitment). The indirect effect is not quite significant, $\hat{b}$ = `r report_pars(porn_par, row = 10, digits = 2)`, suggesting that there isn't a significant mediation effect.
- `r pencil()` **Report it!** + `r pencil()` **Report**`r rproj()` When relationship commitment was not in the model, pornography consumption had a significant positive relationship with infidelity, $\hat{b}$ = `r report_pars(porn_par, row = 11, digits = 2)`. With relationship commitment included in the model, pornography consumption did not quite significantly predict infidelity, $\hat{c}$ = `r report_pars(porn_par, row = 1, digits = 2)`. Pornography consumption significantly predicted relationship commitment, $\hat{a}$ = `r report_pars(porn_par, row = 3, digits = 2)`, and relationship commitment significantly predicted infidelity, $\hat{b}$ = `r report_pars(porn_par, row = 2, digits = 2)`. Most importantly, the indirect effect of pornography consumption on infidelity was not quite significant, $\hat{b}$ = `r report_pars(porn_par, row = 10, digits = 2)`, suggesting that the mediation effect was not significant.
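A hedged sketch of a mediation model with labelled paths (a, b, c) and a defined indirect effect like the one described above, using lavaan; the variable and object names, and the exact syntax used in the tutorial, are assumptions.

```r
porn_model <- '
  infidelity ~ c*porn + b*commitment   # direct effect (c) and path b
  commitment ~ a*porn                  # path a
  indirect := a*b                      # indirect effect
  total    := c + a*b                  # total effect
'
porn_fit <- lavaan::sem(porn_model, data = infidelity_tib)
lavaan::parameterEstimates(porn_fit)   # estimates, CIs and p-values for all paths
```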
diff --git a/inst/tutorials/discovr_11/discovr_11.Rmd b/inst/tutorials/discovr_11/discovr_11.Rmd index 4d14641..7f1d329 100644 --- a/inst/tutorials/discovr_11/discovr_11.Rmd +++ b/inst/tutorials/discovr_11/discovr_11.Rmd @@ -477,7 +477,7 @@ Moving onto the parameter estimates, $\hat{b}_0$ (the value in the column labell The *b*-value for the second dummy variable (labelled [dose30 mins]{.alt}) is equal to the difference between the means of the 30-minute group and the control group (`r pup_sum$mean[3]` $−$ `r pup_sum$mean[1]` = `r get_par(pup_par, row = 3)`). These values demonstrate how dummy coding partitions the variance in happiness scores to compare specific group means. We can see from the significance values of the associated *t*-tests that the difference between the 30-minute group and the control group is significant because *p* = `r sprintf("%.3f", pup_par$p[3])`, which is less than 0.05; however, the difference between the 15-minute and the control group is not (*p* = `r sprintf("%.3f", pup_par$p[2])`).
- `r pencil()` **Report it!** + `r pencil()` **Report**`r rproj()` Overall, happiness was significantly different across the three therapy groups, `r report_aovf(pup_aov)`. Happiness was significantly different to zero in the no puppies group, $\hat{b}$ = `r report_pars(pup_par, row = 1)`, was not significantly higher in the 15-minute therapy group compared to the no puppy control, $\hat{b}$ = `r report_pars(pup_par, row = 2)`, but was significantly higher in the 30-minute therapy group compared to the no puppy control, $\hat{b}$ = `r report_pars(pup_par, row = 3)`. A 30-minute dose of puppies, therefore, appears to improve happiness compared to no puppies, but a 15-minute dose does not.
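A sketch of the dummy-coded model behind these estimates; `puppy_tib`, `happiness` and `dose` appear elsewhere in the tutorial, and the no-puppies control is assumed to be the reference level.

```r
pup_lm <- lm(happiness ~ dose, data = puppy_tib)

anova(pup_lm)                        # overall F for the effect of dose
broom::tidy(pup_lm, conf.int = TRUE) # intercept (control mean) and the two dummy variables
```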
@@ -775,7 +775,7 @@ The table of parameter estimates is different to before. Notice that the contras The second contrast shows that the mean happiness across the people having 30-minutes of puppy therapy was `r sprintf("%.2f", con_par$estimate[3])` higher than those having 15 minutes. Again, if we assume this sample is one of the 95% that yields confidence intervals containing the population values then this difference could be anything between `r sprintf("%.2f", con_par$conf.low[3])` (people who have 30 minutes of puppy therapy are less happy than those having 15 minutes) and `r sprintf("%.2f", con_par$conf.high[3])` (people having 30 minutes of puppy therapy are a fair bit happier than those having 15 minutes). The observed difference of `r sprintf("%.2f", con_par$estimate[3])` is not statistically significantly different from 0 as shown by the *t*-test, which has a *p* = `r sprintf("%.3f", con_par$p.value[3])`. This contrast suggests that happiness was statistically comparable in those receiving 15- and 30-minutes of puppy therapy.
- `r pencil()` **Report it!** + `r pencil()` **Report**`r rproj()` Overall, happiness was significantly different across the three therapy groups, `r report_aovf(con_aov)`. Happiness was significantly different to zero in the no puppies group, $\hat{b}$ = `r report_pars(con_par, row = 1)`. Happiness was significantly higher for those that had any puppy therapy compared to the no puppy control, $\hat{b}$ = `r report_pars(con_par, row = 2)`, but was not significantly different in the 30-minute therapy group compared to the 15-minute group, $\hat{b}$ = `r report_pars(con_par, row = 3)`. A dose of puppies, therefore, appears to improve happiness compared to no puppies but the duration of therapy did not have a significant impact.
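A sketch of planned contrasts consistent with this description: the first compares any puppy therapy with the control, the second compares 30 with 15 minutes. The factor level order (control, 15 minutes, 30 minutes) and the exact weights are assumptions.

```r
puppy_vs_none <- c(-2/3, 1/3, 1/3)   # control vs any puppy therapy
long_vs_short <- c(0, -1/2, 1/2)     # 15 vs 30 minutes
contrasts(puppy_tib$dose) <- cbind(puppy_vs_none, long_vs_short)

con_lm <- lm(happiness ~ dose, data = puppy_tib)
broom::tidy(con_lm, conf.int = TRUE)  # contrast estimates, CIs, t and p
```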
@@ -1006,7 +1006,7 @@ d_1530 <-puppy_tib |> ```
- `r pencil()` **Report it!** + `r pencil()` **Report**`r rproj()` Participants were significantly happier after 30 minutes of puppy therapy compared to no puppies, $M_{\text{difference}}$ = `r report_pars(pup_ph, row = 3)`, `r report_es(d_con30, col = "Hedges_g")`. The effect size was suspiciously large. There was no significant difference in happiness between those exposed to puppies for 15 minutes and those given no puppies, $M_{\text{difference}}$ = `r report_pars(pup_ph, row = 2)`, `r report_es(d_con15, col = "Hedges_g")`, although the effect was large. Also, there was no significant difference in happiness between those exposed for 15 minutes and those exposed for 30 minutes, $M_{\text{difference}}$ = `r report_pars(pup_ph, row = 1)`, `r report_es(d_1530, col = "Hedges_g")`, although the difference was greater than a standard deviation. diff --git a/inst/tutorials/discovr_13/discovr_13.Rmd b/inst/tutorials/discovr_13/discovr_13.Rmd index 50c2eac..7a4757d 100644 --- a/inst/tutorials/discovr_13/discovr_13.Rmd +++ b/inst/tutorials/discovr_13/discovr_13.Rmd @@ -517,7 +517,7 @@ gog_afx_tbl <- goggles_afx$anova_table ```
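As a sketch of how a factorial afex model like `goggles_afx` might be set up (only `goggles_afx` and its `anova_table` come from the tutorial; the tibble and variable names are assumptions):

```r
goggles_afx <- afex::aov_4(
  attractiveness ~ face_type*alcohol + (1|id),
  data = goggles_tib
  )
gog_afx_tbl <- goggles_afx$anova_table  # F-ratios, dfs and p-values used in the report
```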
- `r pencil()` **Report it!** + `r pencil()` **Report**`r rproj()` There were significant effects of the type of face used, `r report_afx(gog_afx_tbl, row = 1)`, and the dose of alcohol, `r report_afx(gog_afx_tbl, row = 2)`. However, these effects were superseded by a significant interaction between the type of face being rated and the dose of alcohol, `r report_afx(gog_afx_tbl, row = 3)`. This interaction suggests that the effect of alcohol is moderated by the type of face being rated (and vice versa). Based on the means (see plot), this interaction supports the 'beer-goggles' hypothesis: when no alcohol is consumed, symmetric faces were rated as more attractive than asymmetric faces, but this difference diminishes as more alcohol is consumed.
@@ -778,7 +778,7 @@ There are two key effects here: To sum up, the significant interaction is being driven by alcohol consumption (any dose compared to placebo, and high dose compared to low) affecting ratings of unattractive face stimuli significantly more than it affects ratings of attractive face stimuli.
- `r pencil()` **Report it!** + `r pencil()` **Report**`r rproj()` There were significant effects of the type of face used, `r report_afx(gog_afx_tbl, row = 1)`, and the dose of alcohol, `r report_afx(gog_afx_tbl, row = 2)`. However, these effects were superseded by a significant interaction between the type of face being rated and the dose of alcohol, `r report_afx(gog_afx_tbl, row = 3)`. Contrasts suggested that the difference between ratings of symmetric and asymmetric faces was significantly smaller after any dose of alcohol compared to no alcohol, $\hat{b}$ = `r report_pars(goggles_par, row = 5)`, and became smaller still when comparing a high dose to a low dose of alcohol, $\hat{b}$ = `r report_pars(goggles_par, row = 6)`. These effects support the 'beer-goggles' hypothesis: when no alcohol is consumed, symmetric faces were rated as more attractive than asymmetric faces, but this difference diminishes as more alcohol is consumed.
@@ -902,7 +902,7 @@ gog_se <- emmeans::joint_tests(goggles_afx, "alcohol") ```
- `r pencil()` **Report it!** + `r pencil()` **Report**`r rproj()` There were significant effects of the type of face used, `r report_afx(gog_afx_tbl, row = 1)`, and the dose of alcohol, `r report_afx(gog_afx_tbl, row = 2)`. However, these effects were superseded by a significant interaction between the type of face being rated and the dose of alcohol, `r report_afx(gog_afx_tbl, row = 3)`. Simple effects analysis revealed that symmetric faces were rated as significantly more attractive than asymmetric faces after no alcohol, `r report_se(gog_se, row = 1)`, and a low dose, `r report_se(gog_se, row = 2)`, but were rated comparably after a high dose of alcohol, `r report_se(gog_se, row = 3)`. These effects support the 'beer-goggles' hypothesis: the standard tendency to rate symmetric faces as more attractive than asymmetric faces was present at low doses and no alcohol, but was eliminated by a high dose of alcohol.
@@ -1099,7 +1099,7 @@ gog_os <- goggles_afx |> The effect sizes are slightly smaller (as we'd expect) when using omega-squared. The interaction effect now explains about 25% of the variation in attractiveness ratings.
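A sketch of how `gog_os` might be completed: partial omega-squared for each effect in the afex model (the exact arguments used in the tutorial are not shown, so these are assumptions).

```r
gog_os <- goggles_afx |>
  effectsize::omega_squared(partial = TRUE, ci = 0.95)
gog_os  # Omega2_partial column used in the report
```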
- `r pencil()` **Report it!** + `r pencil()` **Report**`r rproj()` There were significant effects of the type of face used, `r report_afx(gog_afx_tbl, row = 1)`, `r report_es(gog_os, col = "Omega2_partial", row = 1)`, and the dose of alcohol, `r report_afx(gog_afx_tbl, row = 2)`, `r report_es(gog_os, col = "Omega2_partial", row = 2)`. However, these effects were superseded by a significant interaction between the type of face being rated and the dose of alcohol, `r report_afx(gog_afx_tbl, row = 3)`, `r report_es(gog_os, col = "Omega2_partial", row = 3)`. This interaction suggests that the effect of alcohol is moderated by the type of face being rated (and vice versa). Based on the means (see plot), this interaction supports the 'beer-goggles' hypothesis: when no alcohol is consumed, symmetric faces were rated as more attractive than asymmetric faces, but this difference diminishes as more alcohol is consumed.
diff --git a/inst/tutorials/discovr_14/discovr_14.Rmd b/inst/tutorials/discovr_14/discovr_14.Rmd index 9b8c179..768e012 100644 --- a/inst/tutorials/discovr_14/discovr_14.Rmd +++ b/inst/tutorials/discovr_14/discovr_14.Rmd @@ -1078,7 +1078,7 @@ bob_aov <- anova(cosmetic_bob) |> tibble::as_tibble(rownames = "effect")
- `r pencil()` **Report it!** + `r pencil()` **Report**`r rproj()` There are significant effects of baseline quality of life, `r report_aov(bob_aov, row = 3)` and the months × reason interaction, `r report_aov(bob_aov, row = 4)`, but not the overall effect of months, `r report_aov(bob_aov, row = 1)`, or the main effect of reason, `r report_aov(bob_aov, row = 2)`.
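A sketch of a multilevel model consistent with the effects reported above: fixed effects of months, reason, their interaction and baseline quality of life, plus random intercepts and slopes for months across clinics. The fitting function and the object and variable names (`cosmetic_tib`, `quality_of_life`, `base_qol`, `clinic`) are assumptions.

```r
cosmetic_bob <- lmerTest::lmer(
  quality_of_life ~ months*reason + base_qol + (months|clinic),
  data = cosmetic_tib
  )
anova(cosmetic_bob) |>
  tibble::as_tibble(rownames = "effect")  # F-tests in the order used in the report
```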
@@ -1203,7 +1203,7 @@ The resulting plot shows what we already know from the parameter estimates for t
- `r pencil()` **Report it!** + `r pencil()` **Report**`r rproj()` There was non-zero variability in intercepts and slopes. The estimated standard deviation of intercepts across clinics was $\hat{\sigma}_{u_0}$ = `r report_pars(bob_coef, row = 6, fixed = F)`, the standard deviation of slopes across clinics was $\hat{\sigma}_{u_\text{months}}$ = `r report_pars(bob_coef, row = 8, fixed = F)`, and the residual standard deviation was $\sigma$ = `r report_pars(bob_coef, row = 9, fixed = F)`. The estimated correlation between slopes and intercepts was $r_{u_0, u_\text{months}}$ = `r report_pars(bob_coef, row = 7, fixed = F)`, suggesting that clinics with large intercepts tended to have smaller slopes. diff --git a/inst/tutorials/discovr_15/discovr_15.Rmd b/inst/tutorials/discovr_15/discovr_15.Rmd index 2816542..f436b38 100644 --- a/inst/tutorials/discovr_15/discovr_15.Rmd +++ b/inst/tutorials/discovr_15/discovr_15.Rmd @@ -439,7 +439,7 @@ We've discussed many times that having a single arbitrary cut-off for significan Generalized partial eta-squared differs from eta-squared in ways that you probably don't care about. (tl;dr: $\eta^2_{G}$ is more consistent than $\eta^2_{p}$ across study designs.) We can interpret it in much the same way: for our data, **entity** explains about `r sniff_es`% of the variance in vocalisations, which is a very substantial effect despite what the *p*-value might have you believe.
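A sketch of a one-way repeated-measures model that would produce an anova table with a generalized eta-squared (`ges`) column like the one used below; the names (`sniff_tib`, `dog_id`, `vocalisations`, `entity`) are assumptions.

```r
sniff_afx <- afex::aov_4(
  vocalisations ~ entity + (entity|dog_id),
  data = sniff_tib
  )
sniff_tble <- sniff_afx$anova_table  # includes a ges column by default
```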
- `r pencil()` **Report it!** + `r pencil()` **Report**`r rproj()` There was no significant effect of the type of entity on sniffer dogs' vocalisations when approaching them, `r report_afx(sniff_tble)`. However, the type of entity explained `r sniff_es`% of the variance in vocalisations ($\eta^2_{G}$ = `r get_par(sniff_tble, col = "ges")`), which is a very substantial effect, suggesting that the study may have been underpowered.
@@ -1090,7 +1090,7 @@ Joy wouldn't feel so good if it wasn't for pain (Many Men (Wish Death))"), ```
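As a sketch, the simple effects described below mirror the `joint_tests()` approach used for the factorial design earlier; the object `scent_afx` and the factor name `"scent"` are assumptions based on the text.

```r
scent_se <- emmeans::joint_tests(scent_afx, "scent")  # effect of entity at each scent
scent_se
```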
- `r pencil()` **Report it!** + `r pencil()` **Report**`r rproj()` There were significant main effects of entity, `r report_afx(scent_tbl)`, $\eta^2_G$ = `r get_par(scent_tbl, col = "ges", row = 1)`, and scent, `r report_afx(scent_tbl, row = 2)`, $\eta^2_G$ = `r get_par(scent_tbl, col = "ges", row = 2)`, on the number of vocalisations dogs made when approaching an entity. However, these effects were superseded by a significant entity $\times$ scent interaction, `r report_afx(scent_tbl, row = 3)`, $\eta^2_G$ = `r get_par(scent_tbl, col = "ges", row = 3)`, suggesting that the effect of scent on vocalisations was moderated by the type of entity sniffed (and vice versa). Simple effects analysis revealed that the effect of entity was significant when no scent was used, `r report_se(scent_se, row = 1)`, when human scent was used, `r report_se(scent_se, row = 2)`, and also when fox scent was used, `r report_se(scent_se, row = 3)`.
@@ -1233,7 +1233,7 @@ To sum up, the scents don't distract the sniffer dogs from detecting aliens comp
- `r pencil()` **Report it!** + `r pencil()` **Report**`r rproj()` There were significant main effects of entity, `r report_afx(scent_tbl)`, $\eta^2_G$ = `r get_par(scent_tbl, col = "ges", row = 1)`, and scent, `r report_afx(scent_tbl, row = 2)`, $\eta^2_G$ = `r get_par(scent_tbl, col = "ges", row = 2)` on the number of vocalisations dogs made when approaching an entity. However, these effects were superseded by a significant entity $\times$ scent interaction, `r report_afx(scent_tbl, row = 3)`, $\eta^2_G$ = `r get_par(scent_tbl, col = "ges", row = 3)`, suggesting that the effect of scent on vocalisations was moderated by the type of entity sniffed (and vice versa). diff --git a/inst/tutorials/discovr_15_growth/discovr_15_growth.Rmd b/inst/tutorials/discovr_15_growth/discovr_15_growth.Rmd index a5f9648..62e5c49 100644 --- a/inst/tutorials/discovr_15_growth/discovr_15_growth.Rmd +++ b/inst/tutorials/discovr_15_growth/discovr_15_growth.Rmd @@ -768,7 +768,7 @@ The output shows that * The correlation between slopes and intercepts across the zombies was `r report_pars(rehab_mod_fe, row = 6, fixed = F)`]
- `r pencil()` **Report it!** + `r pencil()` **Report**`r rproj()` There was non-zero variability in intercepts and slopes. The estimated standard deviation of intercepts across zombies was $\hat{\sigma}_{u_0}$ = `r report_pars(rehab_mod_fe, row = 5, fixed = F)`, the standard deviation of slopes across zombies was $\hat{\sigma}_{u_\text{months}}$ = `r report_pars(rehab_mod_fe, row = 7, fixed = F)`, and the residual standard deviation was $\sigma$ = `r report_pars(rehab_mod_fe, row = 8, fixed = F)`. The estimated correlation between slopes and intercepts was $r_{u_0, u_\text{months}}$ = `r report_pars(rehab_mod_fe, row = 6, fixed = F)`, suggesting that zombies with large intercepts tended to have smaller slopes. diff --git a/inst/tutorials/discovr_16/discovr_16.Rmd b/inst/tutorials/discovr_16/discovr_16.Rmd index 51c9325..6e34cf3 100644 --- a/inst/tutorials/discovr_16/discovr_16.Rmd +++ b/inst/tutorials/discovr_16/discovr_16.Rmd @@ -903,7 +903,7 @@ three_way_con <- emmeans::contrast( - The final contrast shows the effect of highly attractive dates relative to average-looking ones, when they display high charisma compared to average charisma, when dates played hard to get relative to when they didn't, `r report_em(three_way_con, row = 4)`. This contrast seems to show that interest in dating attractive dates (as indicated by high ratings) was the same regardless of whether they had high or average charisma (the green and blue dots are in a similar place). However, for average-looking dates, there was more interest when that person had high charisma rather than average charisma (the green dot is lower than the blue dot). The non-significance of this contrast indicates that this pattern of results is very similar when dates played hard to get and when they didn't.
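A sketch of a three-way repeated-measures model consistent with the report below, with Greenhouse-Geisser corrected degrees of freedom; the object and variable names (`date_tib`, `id`, `date_rating`, `strategy`, `looks`, `personality`) are assumptions.

```r
date_afx <- afex::aov_4(
  date_rating ~ (strategy*looks*personality|id),
  data = date_tib,
  anova_table = list(correction = "GG")
  )
date_tbl <- date_afx$anova_table  # GG-corrected dfs, F and p for each effect
```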
- `r pencil()` **Report it!** + `r pencil()` **Report**`r rproj()` Greenhouse-Geisser corrected degrees of freedom are reported throughout. The main effect of strategy was not significant, `r report_afx(date_tbl, row = 1)`, but the main effects of looks, `r report_afx(date_tbl, row = 2)`, and personality, `r report_afx(date_tbl, row = 4)`, were. These effects were superseded by the following significant interactions: strategy $\times$ looks, `r report_afx(date_tbl, row = 3)`, strategy $\times$ personality, `r report_afx(date_tbl, row = 5)`, and personality $\times$ looks, `r report_afx(date_tbl, row = 6)`. These interactions were also superseded by the significant strategy $\times$ personality $\times$ looks interaction, `r report_afx(date_tbl, row = 7)`. Contrasts were used to break down this interaction. diff --git a/inst/tutorials/discovr_19/discovr_19.Rmd b/inst/tutorials/discovr_19/discovr_19.Rmd index 2d1a832..e1fd237 100644 --- a/inst/tutorials/discovr_19/discovr_19.Rmd +++ b/inst/tutorials/discovr_19/discovr_19.Rmd @@ -622,7 +622,7 @@ cat_chi <- chisq.test(dance_tib$reward, dance_tib$dance) Whichever method you use, the output will be the same. The value of the chi-square statistic is `r sprintf("%.2f", cat_chi$statistic)` and this value is highly significant because the associated *p* is smaller than 0.05 (it is `r sprintf("%.8f", cat_chi$p.value)`).
- `r pencil()` **Report it!** + `r pencil()` **Report**`r rproj()` There was a significant association between the type of reward used and whether cats danced, $\chi^2$(`r cat_chi$parameter`) = `r sprintf("%.2f", cat_chi$statistic)`, *p* < 0.001.
@@ -735,7 +735,7 @@ You can take the reciprocal of the odds ratio to reverse the direction of the ef
- `r pencil()` **Report it!** + `r pencil()` **Report**`r rproj()` There was a significant association between the type of reward used and whether cats danced, $\chi^2$(`r cat_chi$parameter`) = `r sprintf("%.2f", cat_chi$statistic)`, *p* < 0.001. If a cat was trained with affection, the odds of their dancing were `r sprintf("%.2f", cat_or$Odds_ratio)` times the odds if they had been trained with food, `r report_es(cat_or, col = "Odds_ratio")`. @@ -809,7 +809,7 @@ so the test reveals that the observed odds ratio is significantly different from
- `r pencil()` **Report it!** + `r pencil()` **Report**`r rproj()` Fisher's exact test showed that the odds ratio was significantly different from 1, $\widehat{\text{OR}}$ = `r sprintf("%.2f", cat_fish$estimate)`, [`r sprintf("%.2f", cat_fish$conf.int[1])`, `r sprintf("%.2f", cat_fish$conf.int[2])`], *p* < 0.001. If we assume that our sample is one of the 95% of samples that generates a 95% confidence interval containing the population value, then the population odds ratio is also not 1. In other words, there was a non-zero effect of reward on dancing.
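A sketch of the exact test reported above, using the same two variables as the earlier chi-square test; the object name matches the inline code, but the exact call used in the tutorial is an assumption.

```r
cat_fish <- fisher.test(dance_tib$reward, dance_tib$dance)

cat_fish$estimate  # sample odds ratio (conditional MLE)
cat_fish$conf.int  # 95% CI for the odds ratio
cat_fish$p.value
```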
diff --git a/inst/tutorials/discovr_20/discovr_20.Rmd b/inst/tutorials/discovr_20/discovr_20.Rmd index c99b184..ffe121f 100644 --- a/inst/tutorials/discovr_20/discovr_20.Rmd +++ b/inst/tutorials/discovr_20/discovr_20.Rmd @@ -587,7 +587,7 @@ If the confidence interval contains 1 then the population value might be one tha Now that we have a basic understanding of what the parameters mean, consider working through the optional section on looking at the overall model fit.
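A sketch of a logistic regression consistent with the parameters reported below: whether presents were delivered predicted from the type of treat. The object and variable names (`santa_tib`, `delivered`, `treat`) are assumptions.

```r
santa_glm <- glm(delivered ~ treat, data = santa_tib, family = binomial())

# Exponentiate to express the parameters as odds and an odds ratio
santa_exp <- parameters::model_parameters(santa_glm, exponentiate = TRUE)
santa_exp
```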
- `r pencil()` **Report it!** + `r pencil()` **Report**`r rproj()` A logistic regression model was fit predicting delivery of presents from the type of treat offered. The odds of delivery in those receiving pudding as a treat were significantly different from 1, odds = `r report_pars(santa_exp, row = 1)`. The odds ratio for the type of treat was also significant, $\widehat{OR}$ = `r report_pars(santa_exp, row = 2)`. The odds of delivery after wine were `r get_par(santa_exp, row = 2)` times the size of the odds of delivery after pudding. @@ -986,7 +986,7 @@ The output shows the effect of the quantity of mulled wine consumed on delivery The interaction effect, therefore, reflects the fact that the effect of quantity on delivery is significantly different for Christmas pudding and mulled wine. As a rough approximation, it means that in the plot the blue and orange lines have different slopes.
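A sketch of the moderated (interaction) model described below, again with hypothetical variable names; the interaction term tests whether the effect of quantity differs between the two treats.

```r
santa_int_glm <- glm(delivered ~ treat*quantity, data = santa_tib, family = binomial())
parameters::model_parameters(santa_int_glm, exponentiate = TRUE)  # Table 2-style output
```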
- `r pencil()` **Report it!** + `r pencil()` **Report**`r rproj()` A logistic regression model was fit predicting delivery of presents from the type of treat offered, the quantity of treats and their interaction. Table 2 shows the parameter estimates, their 95% confidence intervals and significance tests. The main effects of Treat and Quantity were not significant, but the interaction was, suggesting that the effect of quantity was moderated by the type of treat (and vice versa).