Modifications so we are CRAN compliant.
demsarjure committed Dec 13, 2019
1 parent a6c3320 commit 8d597d5
Showing 8 changed files with 43 additions and 39 deletions.
4 changes: 2 additions & 2 deletions CRAN-RELEASE
@@ -1,2 +1,2 @@
-This package was submitted to CRAN on 2019-12-09.
-Once it is accepted, delete this file and tag the release (commit a74156ffe8).
+This package was submitted to CRAN on 2019-12-13.
+Once it is accepted, delete this file and tag the release (commit a6c3320489).
11 changes: 6 additions & 5 deletions DESCRIPTION
@@ -15,7 +15,7 @@ LazyData: true
ByteCompile: true
Depends:
methods,
-R (>= 3.6.1),
+R (>= 3.5.0),
Rcpp (>= 0.12.16)
Imports:
circular (>= 0.4.93),
@@ -29,7 +29,11 @@ Imports:
rstantools (>= 1.5.0),
mcmcse (>= 1.3.2),
stats,
-utils,
+utils
+Suggests:
+testthat,
+rmarkdown,
+knitr
LinkingTo:
StanHeaders (>= 2.17.2),
rstan (>= 2.17.3),
@@ -40,6 +44,3 @@ SystemRequirements: GNU make
VignetteBuilder: knitr
NeedsCompilation: yes
RoxygenNote: 6.1.1
-Suggests:
-testthat,
-knitr
1 change: 1 addition & 0 deletions R/linear_class.R
@@ -211,6 +211,7 @@ setMethod(f="show", signature(object="linear_class"), definition=function(object
#' @title plot
#' @description \code{plot} plots fitted model against the data. Use this function to explore the quality of your fit. You can plot on the subject level (subjects=TRUE) or on the group level (subjects=FALSE).
#' @param x linear_class object.
+#' @param y empty dummy variable, ignore this.
#' @param ... subjects - plot fits on a subject level (default = TRUE).
#' @exportMethod plot
#'
2 changes: 2 additions & 0 deletions man/plot-linear_class-missing-method.Rd

Some generated files are not rendered by default.

13 changes: 6 additions & 7 deletions vignettes/adaptation_level.Rmd
@@ -18,8 +18,8 @@
knitr::opts_chunk$set(fig.width=6, fig.height=4.5)
options(width=800)
-# parallel execution
-options(mc.cores = parallel::detectCores())
+# vignettes support only 2 cores
+options(mc.cores=2)
```

# Introduction
@@ -39,9 +39,8 @@ We will conduct the analysis by using the hierarchical linear model. First we ha
```{r, message=FALSE, warning=FALSE}
# libs
library(bayes4psy)
-library(ggplot2)
-library(plyr)
library(dplyr)
+library(ggplot2)
# load data
data <- adaptation_level
@@ -51,18 +50,18 @@ group1_part2 <- data %>% filter(group == 1 & part == 2)
group2_part2 <- data %>% filter(group == 2 & part == 2)
```

-Once the data is prepared we can fit the Bayesian models, the input data comes in the form of three vectors, $x$ stores indexes of the measurements, $y$ subject's responses and $s$ indexes of subjects.
+Once the data is prepared, we can fit the Bayesian models. The input data comes in the form of three vectors: $x$ stores the indexes of the measurements, $y$ the subjects' responses, and $s$ the indexes of the subjects. Note that, due to vignette limitations, all fits are built using only two chains; using more chains in parallel is usually more efficient.

```{r, message=FALSE, warning=FALSE}
fit1 <- b_linear(x=group1_part2$sequence,
y=group1_part2$response,
s=group1_part2$subject,
-iter=2000, warmup=500)
+iter=2000, warmup=500, chains=2)
fit2 <- b_linear(x=group2_part2$sequence,
y=group2_part2$response,
s=group2_part2$subject,
-iter=2000, warmup=500)
+iter=2000, warmup=500, chains=2)
```

The fitting process is always followed by a quality analysis.
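The quality-analysis code itself is collapsed in this diff. A minimal sketch of what such a check can look like for the two linear fits above (treat the exact function set as an assumption; only the `plot` method with its `subjects` argument is documented in this commit):

```r
# illustrative sketch, not part of the commit
# MCMC convergence check (plot_trace is shown for color fits elsewhere in
# this commit; assuming it accepts linear fits as well)
plot_trace(fit1)

# visual fit check against the data, on the group level (subjects=FALSE)
# or per subject (the default, subjects=TRUE)
plot(fit1, subjects=FALSE)
plot(fit2, subjects=FALSE)
```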
20 changes: 10 additions & 10 deletions vignettes/afterimages.Rmd
@@ -18,8 +18,8 @@
knitr::opts_chunk$set(fig.width=6, fig.height=4.5)
options(width=800)
-# parallel execution
-options(mc.cores = parallel::detectCores())
+# vignettes support only 2 cores
+options(mc.cores=2)
```

# Introduction
@@ -31,9 +31,9 @@ We start our analysis by loading the experiment and stimuli data. The experiment
```{r, message=FALSE, warning=FALSE}
# libs
library(bayes4psy)
-library(cowplot)
library(dplyr)
library(ggplot2)
+library(cowplot)
# load data
data_all <- after_images
@@ -42,7 +42,7 @@ data_all <- after_images
stimuli <- after_images_stimuli
```

-Once we load required libraries and thata we can start fitting the Bayesian color models. Below is a detailed example of fitting the Bayesian color model for red color stimuli.
+Once we load the required libraries and the data, we can start fitting the Bayesian color models. Below is a detailed example of fitting the Bayesian color model for the red color stimulus. Note that, due to vignette limitations, all fits are built using only two chains; using more chains in parallel is usually more efficient.

```{r, message=FALSE, warning=FALSE}
# prepare data
@@ -52,7 +52,7 @@ data_red <- data.frame(r=data_red$r,
b=data_red$b)
# fit
-fit_red <- b_color(colors=data_red)
+fit_red <- b_color(colors=data_red, chains=2)
# inspect
plot_trace(fit_red)
@@ -68,35 +68,35 @@ data_green <- data_all %>% filter(stimuli == "green")
data_green <- data.frame(r=data_green$r,
g=data_green$g,
b=data_green$b)
-fit_green <- b_color(colors=data_green)
+fit_green <- b_color(colors=data_green, chains=2)
# blue
data_blue <- data_all %>% filter(stimuli == "blue")
data_blue <- data.frame(r=data_blue$r,
g=data_blue$g,
b=data_blue$b)
-fit_blue <- b_color(colors=data_blue)
+fit_blue <- b_color(colors=data_blue, chains=2)
# yellow
data_yellow <- data_all %>% filter(stimuli == "yellow")
data_yellow <- data.frame(r=data_yellow$r,
g=data_yellow$g,
b=data_yellow$b)
-fit_yellow <- b_color(colors=data_yellow)
+fit_yellow <- b_color(colors=data_yellow, chains=2)
# cyan
data_cyan <- data_all %>% filter(stimuli == "cyan")
data_cyan <- data.frame(r=data_cyan$r,
g=data_cyan$g,
b=data_cyan$b)
-fit_cyan <- b_color(colors=data_cyan)
+fit_cyan <- b_color(colors=data_cyan, chains=2)
# magenta
data_magenta <- data_all %>% filter(stimuli == "magenta")
data_magenta <- data.frame(r=data_magenta$r,
g=data_magenta$g,
b=data_magenta$b)
-fit_magenta <- b_color(colors=data_magenta)
+fit_magenta <- b_color(colors=data_magenta, chains=2)
```

We start the analysis by loading data about the colors predicted by the trichromatic or the opponent-process theory. We can then use the **plot\_distributions\_hsv** function of the Bayesian color model to visualize how accurately each color coding mechanism predicts the responses for each stimulus independently. Each graph visualizes the fitted distribution, the displayed stimulus, and the responses predicted by the trichromatic and opponent-process coding. This additional information can be added to the visualization via annotation points and lines.
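The visualization code is collapsed in this diff. A minimal sketch of a basic call for the red stimulus fit from above (the annotation points and lines mentioned in the text are passed through additional arguments whose names are not visible here, so they are omitted):

```r
# illustrative sketch, not part of the commit
# fitted distribution for the red stimulus in HSV space; the collapsed
# vignette code additionally overlays the displayed stimulus and the
# trichromatic / opponent-process predictions as annotation points and lines
plot_distributions_hsv(fit_red)
```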
16 changes: 9 additions & 7 deletions vignettes/flanker.Rmd
@@ -18,8 +18,8 @@
knitr::opts_chunk$set(fig.width=6, fig.height=4.5)
options(width=800)
-# parallel execution
-options(mc.cores = parallel::detectCores())
+# vignettes support only 2 cores
+options(mc.cores=2)
```

# Introduction
@@ -56,15 +56,17 @@ The model requires subjects to be indexed from $1$ to $n$. Control group subject
control_rt$subject <- control_rt$subject - 21
```

-Now we are ready to fit the Bayesian reaction time model for both groups. The model function requires two parameters -- a vector of reaction times $t$ and the vector of subject indexes $s$.
+Now we are ready to fit the Bayesian reaction time model for both groups. The model function requires two parameters -- a vector of reaction times $t$ and a vector of subject indexes $s$. Note that, due to vignette limitations, all fits are built using only two chains; using more chains in parallel is usually more efficient.

```{r, message=FALSE, warning=FALSE, results = 'hide'}
# fit
rt_control_fit <- b_reaction_time(t=control_rt$rt,
-s=control_rt$subject)
+s=control_rt$subject,
+chains=2)
rt_test_fit <- b_reaction_time(t=test_rt$rt,
-s=test_rt$subject)
+s=test_rt$subject,
+chains=2)
```

Before we interpret the results, we check MCMC diagnostics and model fit.
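The diagnostics code is collapsed in this diff. A minimal sketch of such a check for the two reaction time fits above (function names assumed from the rest of the package; the exact calls in the vignette may differ):

```r
# illustrative sketch, not part of the commit
# convergence of the MCMC chains
plot_trace(rt_control_fit)
plot_trace(rt_test_fit)

# fitted model against the data, per subject or on the group level
plot(rt_control_fit)
plot(rt_test_fit, subjects=FALSE)
```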
@@ -173,12 +175,12 @@ priors <- list(c("p", p_prior))
sr_control_fit <- b_success_rate(r=control_sr$result_numeric,
s=control_sr$subject,
priors=priors,
-iter=4000)
+iter=4000, chains=2)
sr_test_fit <- b_success_rate(r=test_sr$result_numeric,
s=test_sr$subject,
priors=priors,
-iter=4000)
+iter=4000, chains=2)
```

The process for inspecting Bayesian fits is the same as above. When visually inspecting the quality of the fit (the **plot** function) we can set the **subjects** parameter to **FALSE**, which visualizes the fit on the group level. This offers a quicker, but less detailed method of inspection. Again one of the commands is commented out for the sake of brevity.
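A minimal sketch of the group-level check described here, for the success rate fits above (the collapsed vignette code remains the authoritative version):

```r
# illustrative sketch, not part of the commit
# group-level visual check of the fit (subjects=FALSE)
plot(sr_control_fit, subjects=FALSE)
# plot(sr_test_fit, subjects=FALSE)  # one command left commented out for brevity
```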
15 changes: 7 additions & 8 deletions vignettes/stroop.Rmd
@@ -18,8 +18,8 @@
knitr::opts_chunk$set(fig.width=6, fig.height=4.5)
options(width=800)
-# parallel execution
-options(mc.cores = parallel::detectCores())
+# vignettes support only 2 cores
+options(mc.cores=2)
```

# Introduction
@@ -33,12 +33,11 @@ In our version of the Stroop test participants were faced with four types of con
* Reading incongruent -- name of the color was printed in incongruent ink, the participant had to read the written name of the color.
* Naming incongruent -- name of the color was printed in incongruent ink, the participant had to name the ink color.

-We are primarily interested in expected task completion times. Every participant had the same number of stimuli in every condition, so we opt for a Bayesian t-test. The data are already split into the four conditions described above, so we only need to specify the priors. We based them on our previous experience with similar tasks -- participants finish the task in approximately 1 minute and the typical standard deviation for a participant is less than 2 minutes.
+We are primarily interested in expected task completion times. Every participant had the same number of stimuli in every condition, so we opt for a Bayesian t-test. The data are already split into the four conditions described above, so we only need to specify the priors. We base them on our previous experience with similar tasks -- participants finish the task in approximately 1 minute, and the typical standard deviation for a participant is less than 2 minutes. Note that, due to vignette limitations, all fits are built using only two chains; using more chains in parallel is usually more efficient.
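The chunk that actually builds `mu_prior` and `sigma_prior` is collapsed in this diff (only `priors <- list(c("mu", mu_prior), ...)` is visible below). A minimal sketch of how such priors can be specified, assuming the `b_prior(family=..., pars=...)` constructor used elsewhere in bayes4psy; the distribution parameters below are illustrative, not the committed values:

```r
# illustrative sketch, not part of the commit
# prior on the mean completion time: roughly one minute
mu_prior <- b_prior(family="normal", pars=c(60, 30))
# prior on the per-participant standard deviation: below two minutes
sigma_prior <- b_prior(family="uniform", pars=c(0, 120))

# bundle the priors for b_ttest, matching the list seen in the chunk below
priors <- list(c("mu", mu_prior),
               c("sigma", sigma_prior))
```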

```{r, message=FALSE, warning=FALSE}
# libs
library(bayes4psy)
-library(dplyr)
library(ggplot2)
# load data
@@ -53,19 +52,19 @@ priors <- list(c("mu", mu_prior),
# fit
fit_reading_neutral <- b_ttest(data$reading_neutral,
priors=priors,
-iter=4000, warmup=500)
+iter=4000, warmup=500, chains=2)
fit_reading_incongruent <- b_ttest(data$reading_incongruent,
priors=priors,
-iter=4000, warmup=500)
+iter=4000, warmup=500, chains=2)
fit_naming_neutral <- b_ttest(data$naming_neutral,
priors=priors,
-iter=4000, warmup=500)
+iter=4000, warmup=500, chains=2)
fit_naming_incongruent <- b_ttest(data$naming_incongruent,
priors=priors,
-iter=4000, warmup=500)
+iter=4000, warmup=500, chains=2)
```

Next we perform MCMC diagnostics and visual checks of model fits.
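The diagnostics code is collapsed in this diff. A minimal sketch for one of the t-test fits above (function names assumed from the rest of the package):

```r
# illustrative sketch, not part of the commit
# convergence of the MCMC chains and a visual check of the fit
plot_trace(fit_reading_neutral)
plot(fit_reading_neutral)
```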
