Optimized vignettes for faster compilation.
demsarjure committed Dec 7, 2020
1 parent 50d57c6 commit 75d73cb
Showing 7 changed files with 32 additions and 20 deletions.
2 changes: 1 addition & 1 deletion DESCRIPTION
@@ -1,5 +1,5 @@
Package: bayes4psy
-Version: 1.2.4
+Version: 1.2.5
Title: User Friendly Bayesian Data Analysis for Psychology
Description: Contains several Bayesian models for data analysis of psychological tests. A user friendly interface for these models should enable students and researchers to perform professional level Bayesian data analysis without advanced knowledge in programming and Bayesian statistics. This package is based on the Stan platform (Carpenter et al. 2017 <doi:10.18637/jss.v076.i01>).
Authors@R:
6 changes: 6 additions & 0 deletions NEWS.md
@@ -1,3 +1,9 @@
+# bayes4psy 1.2.5
+
+## New features and improvements
+Optimized vignettes for faster compilation.
+
+
# bayes4psy 1.2.4

## New features and improvements
4 changes: 4 additions & 0 deletions cran-comments.md
@@ -1,5 +1,9 @@
# Revisions

+## CRAN submission 7. 12. 2020
+
+Vignettes are now optimized to enable faster package compilation.
+
## CRAN submission 20. 11. 2020

Revised tests to avoid stability issues on Solaris.
4 changes: 2 additions & 2 deletions vignettes/adaptation_level.Rmd
@@ -53,12 +53,12 @@ Once the data is prepared we can fit the Bayesian models, the input data comes i
fit1 <- b_linear(x=group1_part2$sequence,
y=group1_part2$response,
s=group1_part2$subject,
-iter=500, warmup=250, chains=1)
+iter=200, warmup=100, chains=1)
fit2 <- b_linear(x=group2_part2$sequence,
y=group2_part2$response,
s=group2_part2$subject,
-iter=500, warmup=250, chains=1)
+iter=200, warmup=100, chains=1)
```

The fitting process is always followed by the quality analysis.
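As a sketch of what that quality analysis looks like in practice (assuming the fit1 and fit2 objects from the chunk above and the bayes4psy functions used elsewhere in these vignettes):

```r
# minimal quality check for the two linear fits
library(bayes4psy)

# MCMC convergence diagnostics via traceplots
plot_trace(fit1)
plot_trace(fit2)

# visual check of the fitted models against the underlying data
plot(fit1)
plot(fit2)
```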
12 changes: 6 additions & 6 deletions vignettes/afterimages.Rmd
@@ -49,7 +49,7 @@ data_red <- data.frame(r=data_red$r,
b=data_red$b)
# fit
-fit_red <- b_color(colors=data_red, chains=1, iter=500, warmup=100)
+fit_red <- b_color(colors=data_red, chains=1, iter=200, warmup=100)
# inspect
plot_trace(fit_red)
@@ -66,35 +66,35 @@ data_green <- data_all %>% filter(stimuli == "green")
data_green <- data.frame(r=data_green$r,
g=data_green$g,
b=data_green$b)
-fit_green <- b_color(colors=data_green, chains=1, iter=500, warmup=100)
+fit_green <- b_color(colors=data_green, chains=1, iter=200, warmup=100)
# blue
data_blue <- data_all %>% filter(stimuli == "blue")
data_blue <- data.frame(r=data_blue$r,
g=data_blue$g,
b=data_blue$b)
-fit_blue <- b_color(colors=data_blue, chains=1, iter=500, warmup=100)
+fit_blue <- b_color(colors=data_blue, chains=1, iter=200, warmup=100)
# yellow
data_yellow <- data_all %>% filter(stimuli == "yellow")
data_yellow <- data.frame(r=data_yellow$r,
g=data_yellow$g,
b=data_yellow$b)
-fit_yellow <- b_color(colors=data_yellow, chains=1, iter=500, warmup=100)
+fit_yellow <- b_color(colors=data_yellow, chains=1, iter=200, warmup=100)
# cyan
data_cyan <- data_all %>% filter(stimuli == "cyan")
data_cyan <- data.frame(r=data_cyan$r,
g=data_cyan$g,
b=data_cyan$b)
-fit_cyan <- b_color(colors=data_cyan, chains=1, iter=500, warmup=100)
+fit_cyan <- b_color(colors=data_cyan, chains=1, iter=200, warmup=100)
# magenta
data_magenta <- data_all %>% filter(stimuli == "magenta")
data_magenta <- data.frame(r=data_magenta$r,
g=data_magenta$g,
b=data_magenta$b)
-fit_magenta <- b_color(colors=data_magenta, chains=1, iter=500, warmup=100)
+fit_magenta <- b_color(colors=data_magenta, chains=1, iter=200, warmup=100)
```

We start the analysis by loading data about the colors predicted by the trichromatic or the opponent-process theory. We can then use the **plot\_distributions\_hsv** function of the Bayesian color model to visualize how accurately each color coding mechanism's predictions match the responses for each stimulus independently. Each graph visualizes the fitted distribution, the displayed stimulus, and the responses predicted by the trichromatic and opponent-process coding. This additional information can be added to the visualization via annotation points and lines.
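As an illustration, a minimal sketch of such a call for the red stimulus might look like the snippet below; the commented-out annotation arguments (points, lines) are assumptions named after the description above, not a confirmed signature:

```r
# visualize the fitted color distribution for the red stimulus
# (fit_red comes from the chunk above)
plot_red <- plot_distributions_hsv(fit_red)

# annotation points (displayed stimulus) and lines (trichromatic and
# opponent-process predictions) can be layered on top; the argument names
# below are assumptions for illustration only
# plot_red <- plot_distributions_hsv(fit_red, points=..., lines=...)
```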
9 changes: 4 additions & 5 deletions vignettes/flanker.Rmd
@@ -59,19 +59,18 @@ Now we are ready to fit the Bayesian reaction time model for both groups. The mo
# fit
rt_control_fit <- b_reaction_time(t=control_rt$rt,
s=control_rt$subject,
-chains=1, iter=500, warmup=250)
+chains=1, iter=200, warmup=100)
rt_test_fit <- b_reaction_time(t=test_rt$rt,
s=test_rt$subject,
-chains=1, iter=500, warmup=250)
+chains=1, iter=200, warmup=100)
```

Before we interpret the results, we check MCMC diagnostics and model fit.

```{r, message=FALSE, warning=FALSE}
# plot trace
plot_trace(rt_control_fit)
plot_trace(rt_test_fit)
```

@@ -170,12 +169,12 @@ priors <- list(c("p", p_prior))
sr_control_fit <- b_success_rate(r=control_sr$result_numeric,
s=control_sr$subject,
priors=priors,
-chains=1, iter=500, warmup=100)
+chains=1, iter=200, warmup=100)
sr_test_fit <- b_success_rate(r=test_sr$result_numeric,
s=test_sr$subject,
priors=priors,
-chains=1, iter=500, warmup=100)
+chains=1, iter=200, warmup=100)
```

The process for inspecting Bayesian fits is the same as above. When visually inspecting the quality of the fit (the **plot** function) we can set the **subjects** parameter to **FALSE**, which visualizes the fit on the group level. This offers a quicker, but less detailed method of inspection. Again one of the commands is commented out for the sake of brevity.
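For example, a group-level inspection of the success-rate fits from the chunk above could look like this sketch (one call commented out for brevity, mirroring the text):

```r
# group-level visual check of the success rate fits
plot(sr_control_fit, subjects=FALSE)
#plot(sr_test_fit, subjects=FALSE)
```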
15 changes: 9 additions & 6 deletions vignettes/stroop.Rmd
@@ -34,7 +34,7 @@ In each of the listed conditions the participants had to name or read 100 stimul

We are primarily interested in expected task completion times. Since our data are composed of averaged reading times, we can use the Bayesian t-test. The nature of the Stroop test requires a t-test for dependent samples. This example first shows how to execute the Bayesian t-test for dependent samples and then, for illustrative purposes only, how to execute the Bayesian t-test for independent samples. The independent-samples example also shows how to use \pkg{bayes4psy} to compare multiple groups simultaneously; such a comparison makes sense only when working with independent groups.

-To execute the Bayesian t-test for dependent samples we first have to calculate the difference between the samples and then perform Bayesian modelling on those differences. The example below compares reading times between neutral and incongruent cases.
+To execute the Bayesian t-test for dependent samples we first have to calculate the difference between the samples and then perform Bayesian modelling on those differences. The example below compares reading times between neutral and incongruent cases. Note that all fitting processes use an extremely small number of iterations (100 warmup and 100 sampling). We greatly reduced the number of iterations to speed up vignette building; use an appropriate number of iterations when running actual analyses!

```{r, message=FALSE, warning=FALSE, results = 'hide'}
# libs
@@ -49,7 +49,10 @@ ri_vs_rn <- data$reading_incongruent - data$reading_neutral
# fit
fit_ri_vs_rn <- b_ttest(ri_vs_rn,
-iter=4000, warmup=500, chains=1)
+iter=200, warmup=100, chains=1)
+# traceplot
+#plot_trace(fit_ri_vs_rn)
```

Once we fit the Bayesian t-test model to the differences between the reading neutral and reading incongruent conditions, we can compare whether the means differ from 0.
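A minimal sketch of that comparison, using the fit_ri_vs_rn object from the chunk above; passing the reference value through a mu argument of compare_means is an assumption here:

```r
# test whether the mean difference between incongruent and neutral
# reading times differs from 0 (mu argument assumed for illustration)
compare_means(fit_ri_vs_rn, mu=0)
```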
@@ -74,19 +77,19 @@ priors <- list(c("mu", mu_prior),
# fit
fit_reading_neutral <- b_ttest(data$reading_neutral,
priors=priors,
-iter=1000, warmup=500, chains=1)
+iter=200, warmup=100, chains=1)
fit_reading_incongruent <- b_ttest(data$reading_incongruent,
priors=priors,
-iter=1000, warmup=500, chains=1)
+iter=200, warmup=100, chains=1)
fit_naming_neutral <- b_ttest(data$naming_neutral,
priors=priors,
-iter=1000, warmup=500, chains=1)
+iter=200, warmup=100, chains=1)
fit_naming_incongruent <- b_ttest(data$naming_incongruent,
priors=priors,
-iter=1000, warmup=500, chains=1)
+iter=200, warmup=100, chains=1)
```

Next we perform MCMC diagnostics and visual checks of model fits.
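A sketch of those checks, assuming the four fit objects from the chunk above and the same diagnostics used elsewhere in these vignettes:

```r
# MCMC diagnostics: traceplots for each condition's fit
plot_trace(fit_reading_neutral)
plot_trace(fit_reading_incongruent)
plot_trace(fit_naming_neutral)
plot_trace(fit_naming_incongruent)

# visual check of one of the fitted models against its data
plot(fit_reading_neutral)
```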
