
Commit

knitted readme
Rausch authored and Rausch committed Dec 2, 2024
1 parent 0513393 commit 0d3f7d3
Showing 7 changed files with 268 additions and 266 deletions.
41 changes: 41 additions & 0 deletions R/compareBayes.R
@@ -0,0 +1,41 @@
#' Bayesian Model Comparison
#'
#' \code{compareBayes} performs a Bayesian model comparison based on marginal
#' likelihoods (also known as model evidence), given for different models across
#' different subjects, on the group level using a fixed effects model and a random
#' effects model on the distribution of model probabilities (see Rigoux et al., 2014;
#' Daunizeau et al., 2014).
#' `compareBayes` can be used with the output of \code{\link{fitConfModels}}, i.e. a data frame with information
#' criteria for different models and subjects, using an information criterion to
#' approximate the model evidence.
#' \code{summaryCompareBayes} p
#'
#'
#' @param fits a data frame as returned by \code{\link{fitRTConfModels}}.
#' Should contain a column `model` indicating the model name, a column
#' `subject` (alternatively `sbj` or `participant`) indicating the grouping
#' structure of the data, and a column with the name given by the `measure`
#' argument containing the values of the information criterion that should be
#' used to approximate model evidence.
#' @param measure the name of the column indicating the information criterion
#' used to approximate the model evidence. For outputs of \code{\link{fitRTConfModels}},
#' the available measures are 'BIC', 'AIC', and 'AICc'. Any other approximation
#' of the model evidence may be used; the measure is transformed to a log model
#' evidence by taking -measure/2.
#' @param opts a list with options for the iteration algorithm to estimate
#' the parameters of the Dirichlet distribution. The following values may be provided:
#' * \code{maxiter} the maximum number of iterations (Default: 200)
#' * \code{tol} the tolerance for changes in the free energy approximation
#' used to stop the algorithm: if abs(FE(i+1)-FE(i)) < tol, the algorithm
#' is stopped (Default: 1e-4)
#' * \code{eps} the number substituted for values of 0 in calls to log (Default: 1e-32)
#'
#' @return a matrix with rows for each model (row names indicate the
#' model names for `group_BMS_fits` and for `group_BMS` if
#' row names are available in `mlp`), and the following columns:
#' `alpha` (the alpha parameter of the Dirichlet posterior
#' over model probabilities in the population), `r` (the
#' mean probabilities of each model in the population), `ep`
#' and `pep` (exceedance and protected exceedance
#' probabilities for each model), and `fx_prop` (the
#' posterior model probabilities if a fixed true model is
#' assumed in the population).
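
A minimal usage sketch based on the arguments documented above. The data frame, its values, and the model names are hypothetical and only mirror the columns described in the roxygen block; the call itself is illustrative rather than taken from the package's examples.

```r
# Hypothetical input mirroring the documented structure: one row per
# model x subject combination, with an information criterion per fit.
fits <- data.frame(
  model   = rep(c("SDT", "WEV"), each = 3),
  subject = rep(1:3, times = 2),
  BIC     = c(2650, 2640, 2700, 2630, 2620, 2710)
)

# Group-level comparison, using BIC to approximate the log model evidence
# (taken as -BIC/2); the opts shown are the defaults documented above.
comparison <- compareBayes(fits, measure = "BIC",
                           opts = list(maxiter = 200, tol = 1e-4, eps = 1e-32))
comparison  # matrix with columns alpha, r, ep, pep, and fx_prop
```

Any column holding an information criterion can be passed via `measure`, as long as -measure/2 is a reasonable approximation of the log model evidence.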
426 changes: 169 additions & 257 deletions README.md

Large diffs are not rendered by default.

23 changes: 15 additions & 8 deletions README.rmd
@@ -20,14 +20,7 @@ gitbranch <- "main/"

The `statConfR` package provides functions to fit static models of
decision-making and confidence derived from signal detection theory for
binary discrimination tasks, meta-d′/d′, the most prominent measure of metacognitive efficiency,
meta-I, an information-theoretic measure of metacognitive sensitivity,
as well as $meta-I_{1}^{r}$ and $meta-I_{2}^{r}$, two information-theoretic measures of metacognitive efficiency.

Fitting models of confidence can be used to test the assumptions underlying
meta-d′/d′. Several static models of decision-making and confidence include a metacognition parameter that may
serve as an alternative when the assumptions of meta-d′/d′ are violated and the
corresponding model provides a better fit to the data. The following models are included:
binary discrimination tasks. Up to now, the following models have been included:

- signal detection rating model (Green & Swets, 1966),
- Gaussian noise model (Maniscalco & Lau, 2016),
@@ -38,6 +31,18 @@
- lognormal noise model (Shekhar & Rahnev, 2021), and
- lognormal weighted evidence and visibility model (Shekhar & Rahnev, 2023).

Bayesian model selection on the group level is performed using a fixed effects model (i.e. assuming that data from each subject were caused by the same generative model) and a Dirichlet random-effects model (which assumes that different generative models may have caused the data of different subjects), as proposed by Rigoux et al. (2014) and Daunizeau et al. (2014).
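
To make the two schemes concrete, here is a compact, illustrative sketch (a hypothetical helper, not the package's implementation), assuming a subjects × models matrix `lme` of log model evidences, e.g. -BIC/2. For simplicity it checks convergence on the change in alpha rather than on the free energy.

```r
# Illustrative sketch of group-level Bayesian model selection following
# Rigoux et al. (2014); NOT the statConfR implementation. `lme` is assumed
# to be an N-subjects x K-models matrix of log model evidences (e.g. -BIC/2).
group_bms_sketch <- function(lme, maxiter = 200, tol = 1e-4) {
  K <- ncol(lme)
  alpha0 <- rep(1, K)                 # uniform Dirichlet prior
  alpha  <- alpha0
  for (i in seq_len(maxiter)) {
    # per-subject posterior model assignment probabilities
    logu <- sweep(lme, 2, digamma(alpha) - digamma(sum(alpha)), "+")
    g    <- exp(logu - apply(logu, 1, max))   # subtract row maxima for stability
    g    <- g / rowSums(g)
    alpha_new <- alpha0 + colSums(g)
    if (max(abs(alpha_new - alpha)) < tol) { alpha <- alpha_new; break }
    alpha <- alpha_new
  }
  r <- alpha / sum(alpha)             # mean model probabilities in the population
  # exceedance probabilities via Monte Carlo samples from Dirichlet(alpha)
  samp <- sapply(alpha, function(a) rgamma(1e5, shape = a))
  samp <- samp / rowSums(samp)
  ep   <- tabulate(max.col(samp), nbins = K) / nrow(samp)
  # fixed-effects posterior: the same model is assumed for all subjects
  s <- colSums(lme)
  fx_prop <- exp(s - max(s)) / sum(exp(s - max(s)))
  list(alpha = alpha, r = r, ep = ep, fx_prop = fx_prop)
}
```

The protected exceedance probabilities additionally require the Bayesian omnibus risk, which this sketch omits.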

In addition, the `statConfR` package provides functions for estimating different
kinds of measures of metacognition:
- meta-d$^\prime$/d$^\prime$, the most widely used measure of metacognitive efficiency, allowing both Maniscalco and Lau's (2012) and Fleming's (2017) model specifications. Fitting models of confidence is a way to test the assumptions underlying meta-d′/d′.
- Information-theoretic measures of metacognition (Dayan, 2023), including
  - meta-I, an information-theoretic measure of metacognitive sensitivity,
- $meta-I_{1}^{r}$ and $meta-I_{2}^{r}$, two measures of metacognitive efficiency proposed by Dayan (2023),
- RMI, a novel measure of metacognitive accuracy, also derived from information theory.



## Mathematical description of implemented models of confidence
The models included in the statConfR package are all based on signal detection theory (Green & Swets, 1966).
It is assumed that participants select a binary discrimination response $R$ about a stimulus $S$.
@@ -332,6 +337,7 @@ or [sebastian.hellmann\@ku.de](mailto:[email protected])
or [submit an issue](https://github.com/ManuelRausch/StatConfR/issues).

## References
- Daunizeau, J., Adam, V., & Rigoux, L. (2014). VBA: A probabilistic treatment of nonlinear models for neurobiological and behavioural data. PLOS Computational Biology, 10(1), e1003441. https://doi.org/10.1371/journal.pcbi.1003441
- Dayan, P. (2023). Metacognitive Information Theory. Open Mind, 7, 392–411. https://doi.org/10.1162/opmi_a_00091
- Fleming, S. M. (2017). HMeta-d: Hierarchical Bayesian estimation of metacognitive efficiency from confidence ratings. Neuroscience of Consciousness, 1, 1–14. https://doi.org/10.1093/nc/nix007
- Green, D. M., & Swets, J. A. (1966). Signal detection theory and psychophysics. Wiley.
@@ -342,6 +348,7 @@
- Palminteri, S., Wyart, V., & Koechlin, E. (2017). The importance of falsification in computational cognitive modeling. Trends in Cognitive Sciences, 21(6), 425–433. https://doi.org/10.1016/j.tics.2017.03.011
- Rausch, M., Hellmann, S., & Zehetleitner, M. (2018). Confidence in masked orientation judgments is informed by both evidence and visibility. Attention, Perception, and Psychophysics, 80(1), 134–154. https://doi.org/10.3758/s13414-017-1431-5
- Rausch, M., & Zehetleitner, M. (2017). Should metacognition be measured by logistic regression? Consciousness and Cognition, 49, 291–312. https://doi.org/10.1016/j.concog.2017.02.007
- Rigoux, L., Stephan, K. E., Friston, K. J., & Daunizeau, J. (2014). Bayesian model selection for group studies - revisited. NeuroImage, 84, 971–985. https://doi.org/10.1016/j.neuroimage.2013.08.065
- Shekhar, M., & Rahnev, D. (2021). The Nature of Metacognitive Inefficiency in Perceptual Decision Making. Psychological Review, 128(1), 45–70. https://doi.org/10.1037/rev0000249
- Shekhar, M., & Rahnev, D. (2024). How Do Humans Give Confidence? A Comprehensive Comparison of Process Models of Perceptual Metacognition. Journal of Experimental Psychology: General, 153(3), 656–688. https://doi.org/10.1037/xge0001524

Binary file modified README_files/figure-gfm/unnamed-chunk-5-1.png
Binary file added README_files/figure-gfm/unnamed-chunk-6-1.png
37 changes: 37 additions & 0 deletions paper.bib
@@ -117,6 +117,23 @@ @article{Rausch2017
file = {PDF:C\:\\Users\\PPA714\\Zotero\\storage\\S5EC6DSC\\Rausch (2017) Should metacognition be measured by logistic regression.pdf:application/pdf},
}

@article{Rigoux2014,
title = {Bayesian model selection for group studies - {Revisited}},
volume = {84},
issn = {10959572},
url = {http://dx.doi.org/10.1016/j.neuroimage.2013.08.065},
doi = {10.1016/j.neuroimage.2013.08.065},
abstract = {In this paper, we revisit the problem of Bayesian model selection (BMS) at the group level. We originally addressed this issue in Stephan et al. (2009), where models are treated as random effects that could differ between subjects, with an unknown population distribution. Here, we extend this work, by (i) introducing the Bayesian omnibus risk (BOR) as a measure of the statistical risk incurred when performing group BMS, (ii) highlighting the difference between random effects BMS and classical random effects analyses of parameter estimates, and (iii) addressing the problem of between group or condition model comparisons. We address the first issue by quantifying the chance likelihood of apparent differences in model frequencies. This leads to the notion of protected exceedance probabilities. The second issue arises when people want to ask "whether a model parameter is zero or not" at the group level. Here, we provide guidance as to whether to use a classical second-level analysis of parameter estimates, or random effects BMS. The third issue rests on the evidence for a difference in model labels or frequencies across groups or conditions. Overall, we hope that the material presented in this paper finesses the problems of group-level BMS in the analysis of neuroimaging and behavioural data. © 2013 Elsevier Inc.},
journal = {NeuroImage},
author = {Rigoux, L. and Stephan, K. E. and Friston, K. J. and Daunizeau, J.},
year = {2014},
pmid = {24018303},
note = {Publisher: Elsevier Inc.},
keywords = {Between-condition comparison, Between-group comparison, DCM, Exceedance probability, Mixed effects, Random effects, Statistical risk},
pages = {971--985},
file = {PDF:C\:\\Users\\PPA714\\Zotero\\storage\\YKPFAD78\\Rigoux (2014) Model selection.pdf:application/pdf},
}

@article{Rausch2018,
title = {Confidence in masked orientation judgments is informed by both evidence and visibility},
volume = {80},
@@ -351,3 +368,23 @@ @book{hautus_detection_2021
author = {Hautus, Michael J and Macmillan, Neil A and Creelman, C Douglas},
year = {2021},
}

@article{daunizeau_vba_2014,
title = {{VBA}: {A} {Probabilistic} {Treatment} of {Nonlinear} {Models} for {Neurobiological} and {Behavioural} {Data}},
volume = {10},
issn = {1553-7358},
shorttitle = {{VBA}},
url = {https://dx.plos.org/10.1371/journal.pcbi.1003441},
doi = {10.1371/journal.pcbi.1003441},
abstract = {This work is in line with an on-going effort tending toward a computational (quantitative and refutable) understanding of human neuro-cognitive processes. Many sophisticated models for behavioural and neurobiological data have flourished during the past decade. Most of these models are partly unspecified (i.e. they have unknown parameters) and nonlinear. This makes them difficult to peer with a formal statistical data analysis framework. In turn, this compromises the reproducibility of model-based empirical studies. This work exposes a software toolbox that provides generic, efficient and robust probabilistic solutions to the three problems of model-based analysis of empirical data: (i) data simulation, (ii) parameter estimation/model selection, and (iii) experimental design optimization.},
language = {en},
number = {1},
urldate = {2024-12-02},
journal = {PLoS Computational Biology},
author = {Daunizeau, Jean and Adam, Vincent and Rigoux, Lionel},
editor = {Prlic, Andreas},
month = jan,
year = {2014},
pages = {e1003441},
file = {PDF:C\:\\Users\\PPA714\\Zotero\\storage\\IU3HL9IB\\Daunizeau et al. - 2014 - VBA A Probabilistic Treatment of Nonlinear Models for Neurobiological and Behavioural Data.pdf:application/pdf},
}
7 changes: 6 additions & 1 deletion paper.md
@@ -54,6 +54,11 @@ binary discrimination tasks with confidence ratings:
- the independent truncated Gaussian model [@rausch_measures_2023] based on the model specification
used for the original meta-d$^\prime$/d$^\prime$ method [@Maniscalco2012; @Maniscalco2014], and
- the independent truncated Gaussian model based on the model specification of Hmetad [@Fleming2017a].
Bayesian model selection on the group level is performed using a fixed effects model
(i.e. assuming that data from each subject were caused by the same generative model) and
a Dirichlet random-effects model (which assumes that different generative models may have caused the data of different subjects),
as proposed by @Rigoux2014 and @daunizeau_vba_2014.
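For reference, the quantities reported by the random-effects scheme follow the definitions of @Rigoux2014 (summarised here; the notation is not introduced by the package): given the Dirichlet posterior $\mathrm{Dir}(\alpha_1, \dots, \alpha_K)$ over the population model probabilities $r$,

$$
r_k = \frac{\alpha_k}{\sum_{j=1}^{K} \alpha_j}, \qquad
\mathrm{ep}_k = P\left(r_k > r_j \;\; \forall j \neq k \mid \text{data}\right), \qquad
\mathrm{pep}_k = (1 - \mathrm{BOR})\,\mathrm{ep}_k + \frac{\mathrm{BOR}}{K},
$$

where $\mathrm{BOR}$ is the Bayesian omnibus risk, i.e. the posterior probability that all models are equally frequent in the population, and $K$ is the number of models.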

In addition, the `statConfR` package provides functions for estimating different
kinds of measures of metacognition:
- meta-d$^\prime$/d$^\prime$, the most widely-used measure of metacognitive efficiency, allowing both @Maniscalco2012's and @Fleming2017a's model specification,
@@ -83,7 +88,7 @@ underlying the data is not the independent truncated Gaussian model [@rausch_mea
Likewise, receiver operating characteristics in rating experiments are only appropriate measures of discrimination sensitivity
if the assumptions of the signal detection rating model are correct [@Green1966; @hautus_detection_2021].
At the time of writing, `statConfR` is the only available package for an open-source software environment that allows
researchers to fit a set of static models of decision confidence.
researchers to fit a comprehensive set of static models of decision confidence.
The ReMeta toolbox also provides Python code to fit a variety of confidence models [@guggenmos_reverse_2022],
but some important models such as the independent truncated Gaussian model are missing.
Previous studies modelling confidence have made their analysis scripts freely available on the OSF website
