Update readme for revision
Rausch authored and committed on Dec 8, 2024
1 parent 3fb8727 · commit 58237ad
Showing 3 changed files with 30 additions and 9 deletions.
2 changes: 1 addition & 1 deletion DESCRIPTION
@@ -22,7 +22,7 @@ BugReports: https://github.com/ManuelRausch/StatConfR/issues
Depends:
R (>= 4.0)
Imports:
-  parallel, plyr, stats, utils
+  parallel, plyr, stats, utils, ggplot2, Rmisc
Date/Publication: 2023-09-12
Encoding: UTF-8
LazyData: true
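Moving `ggplot2` and `Rmisc` into `Imports` makes them hard dependencies: both must be installed before `statConfR` can be loaded. A minimal sketch of the consequence for users (the package names come from the diff above; the stated role of `Rmisc` is an assumption):

```r
# ggplot2 backs the plotConfModelFit() visualization touched in this commit;
# Rmisc is presumably used for summary statistics when aggregating trials.
# Declared Imports are loaded automatically, so library(ggplot2) is not needed.
install.packages(c("ggplot2", "Rmisc"))
library(statConfR)
```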
33 changes: 27 additions & 6 deletions R/plotConfModelFit.R
@@ -21,7 +21,7 @@
#' determined by the different values of this column)
#'
#' @param fitted_pars a `data.frame` with one row for each participant and model parameters in different columns.
- #' fitted_pars also may contain a column called `model`specifying the model to be visualized.
+ #' fitted_pars also may contain a column called `model` specifying the model to be visualized.
#' If there is no model column in fitted_pars or if there are multiple models in fitted_pars,
#' it is necessary to specify the model argument.
#'
@@ -36,12 +36,28 @@
#' # 1. Fit some models to each subject of the masked orientation discrimination experiment
#' # Normally, the fits should be created using the function fitConfModels
#' # Fits <- fitConfModels(data, models = "WEV", .parallel = TRUE)
- #' # Here, we create a dummy dataframe because fitting models takes about 10 minutes per model fit per participant on a 2.8GHz processor
- #'
+ #' # Here, we create the dataframe manually because fitting models takes about 10 minutes per model fit per participant on a 2.8GHz processor
+ #' pars <- data.frame(participant = 1:16,
+ #'   d_1 = c(0.20, 0.05, 0.41, 0.03, 0.00, 0.01, 0.11, 0.03, 0.19, 0.08, 0.00, 0.24, 0.00, 0.00, 0.25, 0.01),
+ #'   d_2 = c(0.61, 0.19, 0.86, 0.18, 0.17, 0.39, 0.69, 0.14, 0.45, 0.30, 0.00, 0.27, 0.00, 0.05, 0.57, 0.23),
+ #'   d_3 = c(1.08, 1.04, 2.71, 2.27, 1.50, 1.21, 1.83, 0.80, 1.06, 0.68, 0.29, 0.83, 0.77, 2.19, 1.93, 0.54),
+ #'   d_4 = c(3.47, 4.14, 6.92, 4.79, 3.72, 3.24, 4.55, 2.51, 3.78, 2.40, 1.95, 2.55, 4.59, 4.27, 4.08, 1.80),
+ #'   d_5 = c(4.08, 5.29, 7.99, 5.31, 4.53, 4.66, 6.21, 4.67, 5.85, 3.39, 3.39, 4.42, 6.48, 5.35, 5.28, 2.87),
+ #'   c = c(-0.30, -0.15, -1.37, 0.17, -0.12, -0.19, -0.12, 0.41, -0.27, 0.00, -0.19, -0.21, -0.91, -0.26, -0.20, 0.10),
+ #'   theta_minus.4 = c(-2.07, -2.04, -2.76, -2.32, -2.21, -2.33, -2.27, -2.29, -2.69, -3.80, -2.83, -1.74, -2.58, -3.09, -2.20, -1.57),
+ #'   theta_minus.3 = c(-1.25, -1.95, -1.92, -2.07, -1.62, -1.68, -2.04, -2.02, -1.84, -3.37, -1.89, -1.44, -2.31, -2.08, -1.53, -1.46),
+ #'   theta_minus.2 = c(-0.42, -1.40, -0.37, -1.96, -1.45, -1.27, -1.98, -1.66, -1.11, -2.69, -1.60, -1.25, -2.21, -1.68, -1.08, -1.17),
+ #'   theta_minus.1 = c(0.13, -0.90, 0.93, -1.71, -1.25, -0.59, -1.40, -1.00, -0.34, -1.65, -1.21, -0.76, -1.99, -0.92, -0.28, -0.99),
+ #'   theta_plus.1 = c(-0.62, 0.82, -2.77, 2.01, 1.39, 0.60, 1.51, 0.90, 0.18, 1.62, 0.99, 0.88, 1.67, 0.92, 0.18, 0.88),
+ #'   theta_plus.2 = c(0.15, 1.45, -1.13, 2.17, 1.61, 1.24, 1.99, 1.55, 0.96, 2.44, 1.53, 1.66, 2.00, 1.51, 1.08, 1.05),
+ #'   theta_plus.3 = c(1.40, 2.24, 0.77, 2.32, 1.80, 1.58, 2.19, 2.19, 1.54, 3.17, 1.86, 1.85, 2.16, 2.09, 1.47, 1.70),
+ #'   theta_plus.4 = c(2.19, 2.40, 1.75, 2.58, 2.53, 2.24, 2.59, 2.55, 2.58, 3.85, 2.87, 2.15, 2.51, 3.31, 2.27, 1.79),
+ #'   sigma = c(1.01, 0.64, 1.33, 0.39, 0.30, 0.75, 0.75, 1.07, 0.65, 0.29, 0.31, 0.78, 0.39, 0.42, 0.69, 0.52),
+ #'   w = c(0.54, 0.50, 0.38, 0.38, 0.36, 0.44, 0.48, 0.48, 0.52, 0.46, 0.53, 0.48, 0.29, 0.45, 0.51, 0.63))
#'
#' # 2. Plot the predicted probabilities based on the model and fitted parameters over the observed relative frequencies.
#'
- #' PlotFitWEV <- plotConfModelFit(MaskOri, fitted_pars, model="WEV")
+ #' PlotFitWEV <- plotConfModelFit(MaskOri, pars, model="WEV")
#' PlotFitWEV
#'
#' @import ggplot2
@@ -135,8 +151,13 @@ plotConfModelFit <- function(data, fitted_pars, model = NULL){
'GN' = predictDataNoisy,
'PDA' = predictDataISDT,
'logN' = predictDataLognorm)
-  myParams <- fitted_pars[fitted_pars$model == model,]
-  predictedData <- plyr::ddply(myParams, ~participant, predictDataFun)
+
+  if("model" %in% colnames(fitted_pars)){
+    if(length(unique(fitted_pars$model)) > 1){
+      fitted_pars <- fitted_pars[fitted_pars$model == model,]
+  }}
+
+  predictedData <- plyr::ddply(fitted_pars, ~participant, predictDataFun)
predictedData$correct <-
factor(ifelse(predictedData$stimulus==predictedData$response, "correct", "incorrect"))

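The new guard changes the contract of `plotConfModelFit()`: the `model` column in `fitted_pars` is now optional, and subsetting only happens when the column is present and contains more than one model. A hedged sketch of the two calling patterns this enables (`MaskOri`, `pars`, and `fitConfModels()` appear in the example code above; the two-model call is illustrative):

```r
library(statConfR)

# Pattern 1: fitted_pars has no "model" column (e.g. the hand-built
# `pars` data frame from the roxygen example), so the model is named
# explicitly and no subsetting takes place.
PlotFitWEV <- plotConfModelFit(MaskOri, pars, model = "WEV")

# Pattern 2: fitted_pars comes from fitConfModels() with several models;
# the "model" column then has more than one unique value, and the model
# argument picks out the rows to plot.
# fits <- fitConfModels(MaskOri, models = c("WEV", "GN"), .parallel = TRUE)
# plotConfModelFit(MaskOri, fits, model = "WEV")
```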
4 changes: 2 additions & 2 deletions README.rmd
@@ -36,7 +36,7 @@ In addition, the `statConfR` package provides functions for estimating different
kinds of measures of metacognition:

- meta-d$^\prime$/d$^\prime$, the most widely-used measure of metacognitive efficiency, allowing both Maniscalco and Lau (2012)'s and Fleming (2017)'s model specification. Fitting models of confidence is a way to test the assumptions underlying meta-d$^\prime$/d$^\prime$.
- - Information-theoretic measures of metacognition (Dayan, 2023), including
+ - information-theoretic measures of metacognition (Dayan, 2023), including

  - meta-I, an information-theoretic measure of metacognitive sensitivity,
- $meta-I_{1}^{r}$ and $meta-I_{2}^{r}$, two measures of metacognitive efficiency proposed by Dayan (2023),
@@ -158,7 +158,7 @@ features in the confidence judgment. The parameters $w$ and $\sigma$ are free parameters

# Measures of metacognition

- ## meta-d$^\prime$/d$^\prime$
+ ## Meta-d$^\prime$/d$^\prime$
The conceptual idea of meta-d$^\prime$ is to quantify metacognition in terms of sensitivity in a hypothetical signal detection rating model describing the primary task, under the assumption that participants had perfect access to the sensory evidence and were perfectly consistent in placing their confidence criteria (Maniscalco & Lau, 2012, 2014). Because meta-d$^\prime$ quantifies metacognition using a signal detection model of the primary task, it allows a direct comparison between metacognitive accuracy and discrimination performance, as both are measured on the same scale. Meta-d$^\prime$ can be compared against the estimate of the distance between the two stimulus distributions derived from discrimination responses, which is referred to as d$^\prime$: if meta-d$^\prime$ equals d$^\prime$, metacognitive accuracy is exactly as good as expected from discrimination performance; if meta-d$^\prime$ is lower than d$^\prime$, metacognitive accuracy is suboptimal. It can be shown that the implicit model of confidence underlying the meta-d$^\prime$/d$^\prime$ method is identical to different versions of the independent truncated Gaussian model (Rausch et al., 2023), depending on whether the original model specification by Maniscalco and Lau (2012) or the alternative specification by Fleming (2017) is used. We strongly recommend testing whether the independent truncated Gaussian models are adequate descriptions of the data before quantifying metacognitive efficiency with meta-d$^\prime$/d$^\prime$ (see Rausch et al., 2023).

## Information-theoretic measures of metacognition
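Since the README discusses meta-d$^\prime$/d$^\prime$ only conceptually, a short usage sketch may help; `fitMetaDprime()` with `model = "ML"` (Maniscalco and Lau's specification) or `model = "F"` (Fleming's) follows the package documentation, but the shape of the returned data frame is an assumption:

```r
library(statConfR)

# Fit meta-d'/d' for each participant in the masked orientation
# discrimination data set using Maniscalco & Lau's model specification.
MetaDs <- fitMetaDprime(data = MaskOri, model = "ML", .parallel = TRUE)

# A ratio of 1 means metacognitive accuracy is exactly as good as expected
# from discrimination performance; values below 1 indicate suboptimal
# metacognitive accuracy.
head(MetaDs)
```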
