From 1bc86dc3bb7c172686720653154a7b662fad63c4 Mon Sep 17 00:00:00 2001 From: szorowi1 Date: Mon, 30 Oct 2023 15:36:04 +0000 Subject: [PATCH] deploy: ee17a2685814cd5eb9ce6b4c3fdb378e9f7e7481 --- about/index.html | 2 +- content.json | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/about/index.html b/about/index.html index 75004101..dc08e50a 100644 --- a/about/index.html +++ b/about/index.html @@ -14,7 +14,7 @@

What is the scope of OpenData?

The scope of this project is to catalogue any and all open datasets involving experimental or cognitive tasks (e.g., Stroop, delay discounting, 2-arm bandits). Datasets involving more naturalistic behaviors are also welcomed. The only firm requirement is that trial-level data must be available.

Who maintains OpenData?

-This project is maintained by the Data Science and Sharing Team at the National Institute of Mental Health. +This project is maintained by Sam Zorowitz and the Data Science and Sharing Team at the National Institute of Mental Health.

How can I add a dataset?

Please see the contributing page. diff --git a/content.json b/content.json index 7e5f1171..8ea2496a 100644 --- a/content.json +++ b/content.json @@ -1 +1 @@ -{"pages":[{"title":"About OpenData","text":"OpenData is a database of publicly available behavioral datasets. To browse the database, click on the links above or use the search bar at the top-right of this page. What is the goal of OpenData? The goal of this project is simply to make it easier for researchers to find and use publicly available behavioral data as part of research. There's already so much out there that can be used to: Test new hypotheses or models Calculate effect sizes for power analysis Estimate meta-analytic effects across studies What is the scope of OpenData? The scope of this project is to catalogue any and all open datasets involving experimental or cognitive tasks (e.g., Stroop, delay discounting, 2-arm bandits). Datasets involving more naturalistic behaviors are also welcomed. The only firm requirement is that trial-level data must be available. Who maintains OpenData? This project is maintained by the Data Science and Sharing Team at the National Institute of Mental Health. How can I add a dataset? Please see the contributing page. How can I report an issue? Please open an issue on our Github or directly contact the maintainer.","link":"/opendata/about/index.html"},{"title":"Contributing to OpenData","text":"Contributions to OpenData are welcome! All of the code is managed through the GitHub repository. How to submit a dataset There are three main ways you can submit a dataset for entry to OpenData, which are described in turn below. Note that they are ranked in order of preference, from most preferred to least preferred, by the maintainers of Open Data. 1. Fill out the Google form Have a dataset to submit? Fill out this form: https://forms.gle/N87Dqhs73vbfoPec8 2. Submit a pull request To directly add a dataset yourself, you can open a pull request on Github. 
First, you should fork the OpenData library via GitHub and make modifications on your fork. The OpenData database is written in markdown using a standardized, easy-to-copy template. Once your modification is complete, submit a pull request to merge your changes into the docs branch of OpenData. Pull requests will be reviewed by the maintainers. 3. Open an issue To advertise a dataset for entry into the database, you can open a new issue via GitHub. Please provide a link to a repository for the data and, if available, a link to a corresponding manuscript describing the data.","link":"/opendata/contribute/index.html"}],"posts":[{"title":"Bein et al. (2021)","text":"When our experience violates our predictions, it is adaptive to update our knowledge to promote a more accurate representation of the world and facilitate future predictions. Theoretical models propose that these mnemonic prediction errors should be encoded into a distinct memory trace to prevent interference with previous, conflicting memories. We investigated this proposal by repeatedly exposing participants to pairs of sequentially presented objects (A → B), thus evoking expectations. Then, we violated participants’ expectations by replacing the second object in the pairs with a novel object (A → C). The following item memory test required participants to discriminate between identical old items and similar lures, thus testing detailed and distinctive item memory representations. In two experiments, mnemonic prediction errors enhanced item memory: Participants correctly identified more old items as old when those items violated expectations during learning, compared with items that did not violate expectations. This memory enhancement for C items was only observed when participants later showed intact memory for the related A → B pairs, suggesting that strong predictions are required to facilitate memory for violations. 
Following up on this, a third experiment reduced prediction strength prior to violation and subsequently eliminated the memory advantage of violations. Interestingly, mnemonic prediction errors did not increase gist-based mistakes of identifying old items as similar lures or identifying similar lures as old. Enhanced item memory in the absence of gist-based mistakes suggests that violations enhanced memory for items’ details, which could be mediated via distinct memory traces. Together, these results advance our knowledge of how mnemonic prediction errors promote memory formation.","link":"/opendata/bein-et-al-2021/"},{"title":"Abeles et al. (2023)","text":"People show vast variability in skill performance and learning. What determines a person’s individual performance and learning ability? In this study we explored the possibility of predicting participants’ future performance and learning, based on their behavior during initial skill acquisition. We recruited a large online multi-session sample of participants performing a sequential tapping skill learning task. We used machine learning to predict future performance and learning from raw data acquired during initial skill acquisition, and from engineered features calculated from the raw data. Strong correlations were observed between initial and final performance, and individual learning was not predicted. While canonical experimental tasks developed and selected to detect average effects may constrain insights regarding individual variability, development of novel tasks may shed light on the underlying mechanism of individual skill learning, relevant for real-life scenarios.","link":"/opendata/abeles-et-al-2023/"},{"title":"Abir et al. (2023)","text":"The purpose of exploration is to reduce goal-relevant uncertainty. This can be achieved by choosing to explore the parts of the environment one is most uncertain about. Humans, however, often choose to avoid uncertainty. 
How do humans balance approaching and avoiding uncertainty during exploration? To answer this question, we developed a task requiring participants to explore a simulated environment towards a clear goal. We compared human choices to the predictions of the optimal exploration policy and a hierarchy of simpler strategies. We found that participants generally explored the object they were more uncertain about. However, when overall uncertainty about choice options was high, participants avoided objects they were more uncertain about, learning instead about better known objects. We examined reaction times and individual differences to understand the costs and benefits of this strategy. We conclude that balancing approaching and avoiding uncertainty ameliorates the costs of exploration in a resource-rational manner.","link":"/opendata/abir-et-al-2023/"},{"title":"Abivardi et al. (2023)","text":"Detecting unusual auditory stimuli is crucial for discovering potential threat. Locus coeruleus (LC), which coordinates attention, and amygdala, which is implicated in resource prioritization, both respond to deviant sounds. Evidence concerning their interaction, however, is sparse. Seeking to elucidate if human amygdala affects estimated LC activity during this process, we recorded pupillary responses during an auditory oddball and an illuminance change task, in a female with bilateral amygdala lesions (BG) and in n = 23 matched controls. Neural input in response to oddballs was estimated via pupil dilation, a reported proxy of LC activity, harnessing a linear-time invariant system and individual pupillary dilation response function (IRF) inferred from illuminance responses. While oddball recognition remained intact, estimated LC input for BG was compacted to an impulse rather than the prolonged waveform seen in healthy controls. This impulse had the earliest response mean and highest kurtosis in the sample. 
As a secondary finding, BG showed enhanced early pupillary constriction to darkness. These findings suggest that LC-amygdala communication is required to sustain LC activity in response to anomalous sounds. Our results provide further evidence for amygdala involvement in processing deviant sound targets, although it is not required for their behavioral recognition.","link":"/opendata/abivardi-et-al-2023/"},{"title":"Adkins et al. (2021)","text":"Incentives can be used to increase motivation, leading to better learning and performance on skilled motor tasks. Prior work has shown that monetary punishments enhance on-line performance while equivalent monetary rewards enhance off-line skill retention. However, a large body of literature on loss aversion has shown that losses are treated as larger than equivalent gains. The divergence between the effects of punishments and reward on motor learning could be due to perceived differences in incentive value rather than valence per se. We test this hypothesis by manipulating incentive value and valence while participants trained to perform motor sequences. Consistent with our hypothesis, we found that large reward enhanced on-line performance but impaired the ability to retain the level of performance achieved during training. However, we also found that on-line performance was better with reward than punishment and that the effect of increasing incentive value was more linear with reward (small, medium, large) while the effect of value was more binary with punishment (large vs not large). These results suggest that there are differential effects of punishment and reward on motor learning and that these effects of valence are unlikely to be driven by differences in the subjective magnitude of gains and losses.","link":"/opendata/adkins-et-al-2021/"},{"title":"Alameda et al. (2023)","text":"Throughout the day, humans show natural fluctuations in arousal that impact cognitive function. 
To study the behavioural dynamics of cognitive control during high and low arousal states, healthy participants performed an auditory conflict task during high-intensity physical exercise (N = 39) or drowsiness (N = 33). In line with the pre-registered hypothesis, conflict and conflict adaptation effects were preserved during both altered arousal states. Overall task performance was markedly poorer during low arousal, but not during high arousal. Modelling behavioural dynamics with drift-diffusion analyses revealed that evidence accumulation and non-decision time decelerated, and decisional boundaries became wider during low arousal, whereas high arousal was unexpectedly associated with a decrease in the interference of task-irrelevant information processing. These findings show how arousal differentially modulates cognitive control at both sides of normal alertness, and further validate drowsiness and physical exercise as key experimental models to disentangle the interaction between physiological fluctuations and cognitive dynamics. Statement of Relevance: The variability in arousal and alertness that we naturally experience in our everyday activities is rarely considered in cognitive neuroscience frameworks. Here, we presented a Simon task where the word “left” or “right” was heard through the participant’s right or left ear while either exercising at high intensity or falling asleep, in order to map the behavioural dynamics of cognitive control on both sides of the arousal spectrum. We disentangled different elements involved in decision-making with computational modelling analyses. While basic effects of conflict in cognitive control were preserved regardless of the arousal state, high and low arousal seem to differentially impact decision-making processes. 
These results emphasise that naturally inducing drowsiness and physical exercise can provide a suitable model to test the resilience of decision-making processes when challenged by arousal and show the resilience of cognitive control mechanisms in the face of physiological fluctuations.","link":"/opendata/alameda-et-al-2023/"},{"title":"Albrecht et al. (2016)","text":"The negative symptoms of schizophrenia (SZ) are associated with a pattern of reinforcement learning (RL) deficits likely related to degraded representations of reward values. However, the RL tasks used to date have required active responses to both reward and punishing stimuli. Pavlovian biases have been shown to affect performance on these tasks through invigoration of action to reward and inhibition of action to punishment, and may be partially responsible for the effects found in patients. Forty-five patients with schizophrenia and 30 demographically-matched controls completed a four-stimulus reinforcement learning task that crossed action (Go or NoGo) and the valence of the optimal outcome (reward or punishment-avoidance), such that all combinations of action and outcome valence were tested. Behaviour was modelled using a six-parameter RL model and EEG was simultaneously recorded. Patients demonstrated a reduction in Pavlovian performance bias that was evident in a reduced Go bias across the full group. In a subset of patients administered clozapine, the reduction in Pavlovian bias was enhanced. The reduction in Pavlovian bias in SZ patients was accompanied by feedback processing differences at the time of the P3a component. The reduced Pavlovian bias in patients is suggested to be due to reduced fidelity in the communication between striatal regions and frontal cortex. It may also partially account for previous findings of poorer Go-learning in schizophrenia where Go responses or Pavlovian-consistent responses are required for optimal performance. 
An attenuated P3a component dynamic in patients is consistent with a view that deficits in operant learning are due to impairments in adaptively using feedback to update representations of stimulus value.","link":"/opendata/albrecht-et-al-2016/"},{"title":"Aldridge-Waddon et al. (2022)","text":"The feelings of reward associated with social interaction help to motivate social behaviour and influence preferences for different types of social contact. In two studies conducted in a general population sample, we investigated self-reported and experimentally-assessed social reward processing in personality spectra with prominent interpersonal features, namely schizotypy and psychopathy. Study 1 (n = 154) measured social reward processing using the Social Reward Questionnaire, and a modified version of a Monetary and Social Incentive Delay Task. Study 2 (n = 42; a subsample of Study 1) investigated social reward processing using a Social Reward Subtype Incentive Delay Task. Our results show that schizotypy (specifically Cognitive-Perceptual dimension) and psychopathy (specifically Lifestyle dimension) are associated with diverging responses to social scenarios involving large gatherings or meeting new people (Sociability), with reduced processing in schizotypy and heightened processing in psychopathy. No difference, however, occurred for other social scenarios-with similar patterns of increased antisocial (Negative Social Potency) and reduced prosocial (Admiration, Sociability) reward processing across schizotypy and psychopathy dimensions. Our findings contribute new knowledge on social reward processing within these personality spectra and, with the important exception of Sociability, highlight potentially converging patterns of social reward processing in association with schizotypy and psychopathy.","link":"/opendata/aldridge-waddon-et-al-2022/"},{"title":"Algermissen et al. (2021)","text":"Action selection is biased by the valence of anticipated outcomes. 
To assess mechanisms by which these motivational biases are expressed and controlled, we measured simultaneous EEG-fMRI during a motivational Go/NoGo learning task (N = 36), leveraging the temporal resolution of EEG and subcortical access of fMRI. vmPFC BOLD encoded cue valence, importantly predicting trial-by-trial valence-driven response speed differences and EEG theta power around cue onset. In contrast, striatal BOLD encoded selection of active Go responses and correlated with theta power around response time. Within trials, theta power ramped in the fashion of an evidence accumulation signal for the value of making a Go response, capturing the faster responding to reward cues. Our findings reveal a dual nature of midfrontal theta power, with early components reflecting the vmPFC contribution to motivational biases, and late components reflecting their striatal translation into behavior, in line with influential recent “value of work” theories of striatal processing.","link":"/opendata/algermissen-et-al-2021/"},{"title":"Alister et al. (2022)","text":"The gaze cueing effect is the tendency for people to respond faster to targets appearing at locations gazed at by others compared to locations gazed away from by others. The effect is robust, widely studied, and is an influential finding within social cognition. However, much is still unknown about the cognitive processes that drive this effect. Formal evidence accumulation models provide the dominant theoretical account of the cognitive processes underlying speeded decision making but have never been applied to gaze cueing research and rarely to the study of social cognition more broadly. 
In this study, using a combination of individual-level and hierarchical computational modelling techniques, we applied Evidence Accumulation Models to gaze and arrow cueing data (four data sets total, n = 171, 139,001 trials) for the first time to 1) identify which cognitive mechanisms underlie the gaze cueing effect, and 2) see whether these processes could be considered the same as those that underlie arrow cueing effects. At the group level, people were best described by an attentional orienting mechanism rather than higher-order decision bias or information processing mechanisms. However, we found evidence for individual differences such that not everyone was best described by an attentional orienting mechanism. Further, the same people who were best described by an attentional orienting mechanism for gaze cues tended not to be best described by that same mechanism for arrow cues, suggesting these cueing effects may induce different responses within the same people – although we interpret this finding with caution.","link":"/opendata/alister-et-al-2022/"},{"title":"Almeras et al. (2022)","text":"Exploring novel environments through sequential sampling is essential for efficient decision-making under uncertainty. In the laboratory, human exploration has been studied in situations where exploration is traded against reward maximisation. By design, these ‘explore-exploit’ dilemmas confound the behavioural characteristics of exploration with those of the trade-off itself. Here we designed a sequential sampling task where exploration can be studied and compared in the presence and absence of trade-off with exploitation. Detailed model-based analyses of choice behaviour revealed specific exploration patterns arising in situations where information seeking is not traded against reward seeking. Human choices are directed toward the most uncertain option available, but only after an initial sampling phase consisting of choice streaks from each novel option. 
These findings outline competing cognitive pressures on information seeking: the repeated sampling of the current option (for hypothesis testing), and the directed sampling of the most uncertain option available (for structure mapping).","link":"/opendata/almeras-et-al-2022/"},{"title":"Amir et al. (2022)","text":"People have limited computational resources, yet they make complex strategic decisions over enormous spaces of possibilities. How do people efficiently search spaces with combinatorially branching paths? Here, we study players’ search strategies for a winning move in a “k-in-a-row” game. We find that players use scoring strategies to prune the search space and augment this pruning by a “shutter” heuristic that focuses the search on the paths emanating from their previous move. This strong pruning has its costs-both computational simulations and behavioral data indicate that the shutter size is correlated with players’ blindness to their opponent’s winning moves. However, simulations of the search while varying the shutter size, complexity levels, noise levels, branching factor, and computational limitations indicate that despite its costs, a narrow shutter strategy is the dominant strategy for most of the parameter space. Finally, we show that in the presence of computational limitations, the shutter heuristic enhances the performance of deep learning networks in these end-game scenarios. Together, our findings suggest a novel adaptive heuristic that benefits search in a vast space of possibilities of a strategic game.","link":"/opendata/amir-et-al-2022/"},{"title":"Amsalem et al. (2023)","text":"Statistical learning (SL), the extraction of regularities embedded in the environment, is often viewed as a fundamental and effortless process. However, whether spatial SL requires resources, or it can operate in parallel to other demands, is still not clear. 
To examine this issue, we tested spatial SL using the standard lab experiment under concurrent demands: high and low cognitive load (Experiment 1) and spatial memory load (Experiment 2) during the familiarization phase. We found that any type of high-load demands during the familiarization abolished learning. Experiment 3 compared SL under spatial low-load and no-load. We found robust learning in the no-load condition that was dramatically reduced in the low-load condition. Finally, we compared a no-load condition with a very low-load, infrequent dot-probe condition that posed minimal demands while still requiring attention to the display (Experiment 4). The results showed, once again, that any concurrent task during the familiarization phase largely impaired spatial SL. Taken together, we conclude that spatial SL requires resources, a finding that challenges the view that the extraction of spatial regularities is automatic and implicit and suggests that this fundamental learning process is not as effortless as was typically assumed. We further discuss the practical and methodological implications of these findings.","link":"/opendata/amsalem-et-al-2023/"},{"title":"Antony et al. (2022)","text":"Two fundamental issues in memory research concern when later experiences strengthen or weaken initial memories and when the two memories become linked or remain independent. A promising candidate for explaining these issues is semantic relatedness. Here, across five paired-associate learning experiments (N=1000), we systematically varied the semantic relatedness between initial and later cues, initial and later targets, or both. We found that learning retroactively benefited long-term memory performance for semantically related words (vs. unshown control words), and these benefits increased as a function of relatedness. 
Critically, memory dependence between initial and later pairs also increased with relatedness, suggesting that pre-existing semantic relationships promote interdependence for memories formed across episodes. We also found that modest retroactive benefits, but not interdependencies, emerged when subjects learned via studying rather than practice testing. These findings demonstrate that semantic relatedness during new learning retroactively strengthens old associations while scaffolding new ones into well-fortified memory traces.","link":"/opendata/antony-et-al-2022/"},{"title":"Appelhoff et al. (2022)","text":"When judging the average value of sample stimuli (e.g., numbers) people tend to either over- or underweight extreme sample values, depending on task context. In a context of overweighting, recent work has shown that extreme sample values were overly represented also in neural signals, in terms of an anti-compressed geometry of number samples in multivariate electroencephalography (EEG) patterns. Here, we asked whether neural representational geometries may also reflect a relative underweighting of extreme values (i.e., compression) which has been observed behaviorally in a great variety of tasks. We used a simple experimental manipulation (instructions to average a single-stream or to compare dual-streams of samples) to induce compression or anti-compression in behavior when participants judged rapid number sequences. Model-based representational similarity analysis (RSA) replicated the previous finding of neural anti-compression in the dual-stream task, but failed to provide evidence for neural compression in the single-stream task, despite the evidence for compression in behavior. Instead, the results indicated enhanced neural processing of extreme values in either task, regardless of whether extremes were over- or underweighted in subsequent behavioral choice. 
We further observed more general differences in the neural representation of the sample information between the two tasks. Together, our results indicate a mismatch between sample-level EEG geometries and behavior, which raises new questions about the origin of common psychometric distortions, such as diminishing sensitivity for larger values.","link":"/opendata/appelhoff-et-al-2022/"},{"title":"Arato et al. (2023)","text":"What is the link between eye movements and sensory learning? Although some theories have argued for an automatic interaction between what we know and where we look that continuously modulates human information gathering behavior during both implicit and explicit learning, there exists limited experimental evidence supporting such an ongoing interplay. To address this issue, we used a visual statistical learning paradigm combined with a gaze contingent stimulus presentation and manipulated the explicitness of the task to explore how learning and eye movements interact. During both implicit exploration and explicit visual learning of unknown composite visual scenes, spatial eye movement patterns systematically and gradually changed in accordance with the underlying statistical structure of the scenes. Moreover, the degree of change was directly correlated with the amount and type of knowledge the observers acquired. Our results provide the first evidence for an ongoing and specific bidirectional interaction between hitherto accumulated knowledge and eye movements during both implicit and explicit visual statistical learning, in line with theoretical accounts of active learning.","link":"/opendata/arato-et-al-2023/"},{"title":"Arbuzova et al. 
(2022)","text":"It is still debated whether metacognition, or the ability to monitor our own mental states, relies on processes that are domain-general (a single set of processes can account for the monitoring of any mental process) or domain-specific (metacognition is accomplished by a collection of multiple monitoring modules, one for each cognitive domain). It has been speculated that two broad categories of metacognitive processes may exist: those that monitor primarily externally generated versus those that monitor primarily internally generated information. To test this proposed division, we measured metacognitive performance (using m-ratio, a signal detection theoretical measure) in four tasks that could be ranked along an internal-external axis of the source of information, namely memory, motor, visuomotor, and visual tasks. We found a positive correlation between m-ratios in visuomotor and motor tasks, but no correlations between m-ratios in visual and visuomotor tasks, between visual and memory tasks, or between motor and memory tasks. This pattern of correlations does not support the grouping of domains based on whether the source of information is primarily internal or external. We suggest that other groupings could be more reflective of the nature of metacognition and discuss the need to consider other non-domain task-features when using correlations as a way to test the underlying shared processes between domains.","link":"/opendata/arbuzova-et-al-2022/"},{"title":"Armstrong et al. (2022)","text":"Attentional bias for threat is an adaptive feature of human psychology, but may become maladaptive in anxiety-related disorders, causing distress, distraction, and distorted perception of danger. 
Reaction time measures have revealed automatic, covert attention biases to threat, whereas eye tracking has revealed voluntary biases over a larger timescale, with monitoring or avoidance depending on context. Recently, attentional bias for threat has been studied as a conditioned fear response, providing new insight into how attentional biases are acquired and inhibited through learning experiences. However, very few studies have examined voluntary gaze biases during fear learning. In a novel eye tracking paradigm, we examine the overt components of attentional bias to threat and safety cues. We found that threat cues, but not safety cues, elicited an initial orienting bias, as well as a sustained monitoring bias across 10-second trials. This collective “vigilance” response to threat cues was insensitive to extinction, whereas conditioned fear responding revealed by pupil size and self-report ratings showed marked extinction. Vigilance may be less prone to extinction, compared to autonomic arousal, because eye movements require less energy than preparing the body for defensive behavior. Implications for understanding vigilance in PTSD are considered.","link":"/opendata/armstrong-et-al-2022/"},{"title":"Arnau et al. (2023)","text":"Cognitive performance largely depends on how much effort is invested during task execution. This also means that we rarely perform as well as we could. Cognitive effort is adjusted to the expected outcome of performance, meaning that it is driven by motivation. The results from recent studies investigating the effects of manipulations of motivation clearly suggest that it is the expenditure of cognitive control that is particularly prone to being affected by modulations of cognitive effort. Although recent EEG studies investigated the neural underpinnings of the interaction of effort and control, reports on how cognitive effort is reflected by oscillatory activity of the EEG are quite sparse. 
It is the goal of the present study to bridge this gap by performing an exploratory analysis of high-density EEG data from a switching-task using manipulations of monetary incentives. A beamformer approach is used to localize the sensor-level effects in source-space. The results indicate that the manipulation of cognitive effort was successful. The participants reported significantly higher motivation and cognitive effort in high versus low reward trials. Performance was also significantly increased. The analysis of the EEG data revealed that the increase of cognitive effort was reflected by an increased mid-frontal theta activity during the cue-target interval, suggesting an increased use of proactive control. Alpha-desynchronization throughout the trial was also more pronounced in high reward trials, signaling a bias of attention towards the processing of external stimuli. Source reconstruction suggests that these effects are located in areas related to cognitive control, and visual processing.","link":"/opendata/arnau-et-al-2023/"},{"title":"Ashinoff et al. (2022)","text":"Base-rate neglect is a pervasive bias in judgment that is conceptualized as underweighting of prior information and can have serious consequences in real-world scenarios. This bias is thought to reflect variability in inferential processes but empirical support for a cohesive theory of base-rate neglect with sufficient explanatory power to account for longer-term and real-world beliefs is lacking. A Bayesian formalization of base-rate neglect in the context of sequential belief updating predicts that belief trajectories should exhibit dynamic patterns of dependence on the order in which evidence is presented and its consistency with prior beliefs. To test this, we developed a novel ‘urn-and-beads’ task that systematically manipulated the order of colored bead sequences and elicited beliefs via an incentive-compatible procedure. 
Our results in two independent online studies confirmed the predictions of the sequential base-rate neglect model: people exhibited beliefs that were more influenced by recent evidence and by evidence inconsistent with prior beliefs. We further found support for a noisy-sampling inference model whereby base-rate neglect results from rational discounting of noisy internal representations of prior beliefs. Finally, we found that model-derived indices of base-rate neglect, including noisier prior representation, correlated with propensity for unusual beliefs outside the laboratory. Our work supports the relevance of Bayesian accounts of sequential base-rate neglect to real-world beliefs and hints at strategies to minimize deleterious consequences of this pervasive bias.","link":"/opendata/ashinoff-et-al-2022/"},{"title":"Asutay & Västfjäll (2022)","text":"Affective experience has an important role in decision-making, with recent theories suggesting a modulatory role of affect in ongoing subjective value computations. However, it is unclear how varying expectations and uncertainty dynamically influence affective experience and how dynamic representation of affect modulates risky choices. Using hierarchical Bayesian modeling on data from a risky choice task (N = 101), we find that the temporal integration of recently encountered choice parameters (expected value, uncertainty, and prediction errors) shapes affective experience and impacts subsequent choice behavior. Specifically, self-reported arousal prior to choice was associated with increased loss aversion, risk aversion, and choice consistency. Taken together, these findings provide clear behavioral evidence for continuous affective modulation of subjective value computations during risky decision-making.","link":"/opendata/asutay-vastfjall-2022/"},{"title":"Aydoğan et al. (2023)","text":"Interval timing refers to the ability to perceive and remember intervals in the seconds to minutes range. 
Our contemporary understanding of interval timing is derived from relatively small-scale, isolated studies that investigate a limited range of intervals with a small sample size, usually based on a single task. Consequently, the conclusions drawn from individual studies are not readily generalizable to other tasks, conditions, and task parameters. The current paper presents a live database that provides raw data from interval timing studies (currently composed of 68 datasets from eight different tasks incorporating various interval and temporal order judgments) with an online graphical user interface to easily select, compile, and download the data organized in a standard format. The Timing Database aims to promote and cultivate key and novel analyses of our timing ability by making published and future datasets accessible as open-source resources for the entire research community. In the current paper, we showcase the use of the database by testing various core ideas based on data compiled across studies (i.e., temporal accuracy, scalar property, location of the point of subjective equality, malleability of timing precision). The Timing Database will serve as the repository for interval timing studies through the submission of new datasets.","link":"/opendata/aydogan-et-al-2023/"},{"title":"Aylward et al. (2019)","text":"Anxiety is characterized by altered responses under uncertain conditions, but the precise mechanism by which uncertainty changes the behaviour of anxious individuals is unclear. Here we probe the computational basis of learning under uncertainty in healthy individuals and individuals suffering from a mix of mood and anxiety disorders. Participants were asked to choose between four competing slot machines with fluctuating reward and punishment outcomes during safety and stress. 
We predicted that anxious individuals under stress would learn faster about punishments and exhibit choices that were more affected by those punishments, thus formalizing our predictions as parameters in reinforcement learning accounts of behaviour. Overall, the data suggest that anxious individuals are quicker to update their behaviour in response to negative outcomes (increased punishment learning rates). When treating anxiety, it may therefore be more fruitful to encourage anxious individuals to integrate information over longer horizons when bad things happen, rather than try to blunt their responses to negative outcomes.","link":"/opendata/aylward-et-al-2019/"},{"title":"Bach (2015)","text":"Behavioural inhibition is a key anxiety-like behaviour in rodents and humans, distinct from avoidance of danger, and reduced by anxiolytic drugs. In some situations, it is not clear how behavioural inhibition minimises harm or maximises benefit for the agent, and it can even appear counterproductive. Extant explanations of this phenomenon make use of descriptive models but do not provide a formal assessment of its adaptive value. This hampers a better understanding of the neural computations underlying anxiety behaviour. Here, we analyse a standard rodent anxiety model, the operant conflict test. We harness Bayesian Decision Theory to show that behavioural inhibition normatively arises as a cost-minimising strategy in temporally correlated environments. Importantly, only if behavioural inhibition is aimed at minimising cost does it depend on the probability and magnitude of threat. Harnessing a virtual computer game, we test model predictions in four experiments with human participants. Humans exhibit behavioural inhibition with a strong linear dependence on threat probability and magnitude. Strikingly, inhibition occurs before motor execution and depends on the virtual environment, thus likely resulting from a neural optimisation process rather than a pre-programmed mechanism. 
Individual trait anxiety scores predict behavioural inhibition, underlining the validity of this anxiety model. These findings put anxiety behaviour into the context of cost-minimisation and optimal inference, and may ultimately pave the way towards a mechanistic understanding of the neural computations gone awry in human anxiety disorder.","link":"/opendata/bach-2015/"},{"title":"Bach et al. (2020)","text":"During adolescence and early adulthood, learning when to avoid threats and when to pursue rewards becomes crucial. Using a risky foraging task, we investigated individual differences in this dynamic across 781 individuals aged 14-24 years who were split into a hypothesis-generating discovery sample and a hold-out confirmation sample. Sex was the most important predictor of cautious behaviour and performance. Males earned one standard deviation (or 20%) more reward than females, collected more reward when there was little to lose and reduced foraging to the same level as females when potential losses became high. Other independent predictors of cautiousness and performance were self-reported daringness, IQ and self-reported cognitive complexity. We found no evidence for an impact of age or maturation. Thus, maleness, a high IQ or self-reported cognitive complexity, and self-reported daringness predicted greater success in risky foraging, possibly due to better exploitation of low-risk opportunities in high-risk environments.","link":"/opendata/bach-et-al-2020/"},{"title":"Bae & Luck (2019)","text":"Recent experiences influence the processing of new information even when those experiences are irrelevant to the current task. Does this reflect the indirect effects of a passively maintained representation of the previous experience, or is this representation reactivated when a new event occurs? 
To answer this question, we attempted to decode the orientation of the stimulus on the previous trial from the electroencephalogram on the current trial in a working memory task. Behavioral data confirmed that the previous-trial stimulus orientation influenced the reported orientation on the current trial, even though the previous-trial orientation was now task irrelevant. In two independent experiments, we found that the previous-trial orientation could be decoded from the current-trial electroencephalogram, indicating that the current-trial stimulus reactivated or boosted the representation of the previous-trial orientation. These results suggest that the effects of recent experiences on behavior are driven, in part, by a reactivation of those experiences and not solely by the indirect effects of passive memory traces.","link":"/opendata/bae-luck-2019/"},{"title":"Bahrami & Navajas (2020)","text":"The dataset includes 975 participants, who completed an online version of the 4-arm bandit task in 2014. All participants gave their consent to carry out the experiment. The experiment was approved by the UCL Research Ethics Committee (project 4223/001). The dataset is anonymised and does not include information about the participants' identity. The task followed the 4-arm bandit paradigm described in Daw et al. 2006. In this task the participants were asked to choose between four options on multiple trials. On each trial they had to choose an option and were then given information about the reward obtained by their choice. The rewards of each option drifted over time, in a manner also known as a restless bandit, forcing the participants to constantly explore the different options to obtain the maximum reward. The rewards followed one of three drift schedules which were predefined, see below. The experiment lasted 150 trials. 
Participants failing to respond within 4 seconds missed the trial and moved to the next one with no reward.","link":"/opendata/bahrami-navajas-2020/"},{"title":"Balasubramani et al. (2021)","text":"Choice selection strategies and decision making are typically investigated using multiple-choice gambling paradigms that require participants to maximize reward payoff. However, research shows that performance in such paradigms suffers from individual biases towards the frequency of gains, choosing smaller local gains over larger longer-term gains, also referred to as melioration. Here, we developed a simple two-choice reward task, implemented in 186 healthy human adult subjects across the adult lifespan to understand the behavioral, computational, and neural bases of payoff maximization versus melioration. The observed reward choice behavior on this task was best explained by a reinforcement learning model of differential future reward prediction. Simultaneously recorded and source-localized electroencephalography (EEG) showed that diminished theta-band activations in the right rostral anterior cingulate cortex (rACC) correspond to greater reward payoff maximization, specifically during the presentation of cumulative reward information at the end of each task trial. Notably, these activations (greater rACC theta) predicted depressed mood symptoms, thereby showcasing a reward processing marker of potential clinical utility. This study presents cognitive, computational and neural (EEG-based) analyses of a rapid reward-based decision-making task. The research has the following three highlights. 1) It teases apart two core aspects of reward processing, i.e. long-term expected value maximization versus immediate gain-frequency melioration based choice behavior. 2) It models reinforcement learning based behavioral differences between individuals, showing that observed performance is best explained by differential extents of reward prediction. 
3) It investigates neural correlates in 186 healthy human subjects across the adult lifespan, revealing specific theta band cortical source activations in right rostral anterior cingulate as correlates for maximization that further predict depressed mood across subjects.","link":"/opendata/balasubramani-et-al-2021/"},{"title":"Ballard et al. (2019a)","text":"Animals rely on learned associations to make decisions. Associations can be based on relationships between object features (e.g., the three leaflets of poison ivy leaves) and outcomes (e.g., rash). More often, outcomes are linked to multidimensional states (e.g., poison ivy is green in summer but red in spring). Feature-based reinforcement learning fails when the values of individual features depend on the other features present. One solution is to assign value to multi-featural conjunctive representations. Here, we test if the hippocampus forms separable conjunctive representations that enable the learning of response contingencies for stimuli of the form: AB+, B-, AC-, C+. Pattern analyses on functional MRI data show the hippocampus forms conjunctive representations that are dissociable from feature components and that these representations, along with those of cortex, influence striatal prediction errors. Our results establish a novel role for hippocampal pattern separation and conjunctive representation in reinforcement learning.","link":"/opendata/ballard-et-al-2019a/"},{"title":"Ballard et al. (2019b)","text":"Much is known about the effects of reward and punishment on behavior, yet little research has considered how these incentives influence the information-processing dynamics that underlie decision making. 
We fitted the linear ballistic accumulator to data from a perceptual-judgment task to examine the impacts of reward- and punishment-based incentives on three distinct components of information processing: the quality of the information processed, the quantity of that information, and the decision threshold. The threat of punishment lowered the average quality and quantity of information processed, compared with the prospect of reward or no performance incentive at all. The threat of punishment also induced less cautious decision making by lowering people’s decision thresholds relative to the prospect of reward. These findings suggest that information-processing dynamics are determined not only by objective properties of the decision environment but also by the higher order goals of the system.","link":"/opendata/ballard-et-al-2019b/"},{"title":"Balsdon et al. (2020)","text":"Perceptual decisions are accompanied by feelings of confidence that reflect the likelihood that the decision was correct. Here we aim to clarify the relationship between perception and confidence by studying the same perceptual task across three different confidence contexts. Human observers were asked to categorize the source of sequentially presented visual stimuli. Each additional stimulus provided evidence for making more accurate perceptual decisions, and better confidence judgements. We show that observers’ ability to set appropriate evidence accumulation bounds for perceptual decisions is strongly predictive of their ability to make accurate confidence judgements. When observers were not permitted to control their exposure to evidence, they imposed covert bounds on their perceptual decisions but not on their confidence decisions. This partial dissociation between decision processes is reflected in behaviour and pupil dilation. 
Together, these findings suggest a confidence-regulated accumulation-to-bound process that controls perceptual decision-making even in the absence of explicit speed-accuracy trade-offs.","link":"/opendata/balsdon-et-al-2020/"},{"title":"Balter & Raymond (2022)","text":"Transfer of learning refers to successful application of previously acquired knowledge or skills to novel settings. Although working memory (WM) is thought to play a role in transfer learning, direct evidence of the effect of limitations in WM on transfer learning is lacking. To investigate, we used an acquired equivalence paradigm that included tests of association and transfer learning. The effects of imposing an acute WM limitation on young adults was tested (within-subjects design: N = 27 adults; Mage = 24 years) by conducting learning transfer tests concurrent with a secondary task that required carrying a spatial WM load when performing the learned/transfer trial (Load condition) to acutely limit WM resources or no WM load (No-Load condition; WM was unloaded prior to performing the learned/transfer trial). Analysis using mixed effects models showed that although success on the transfer trials was high in the No-Load condition, performance dropped significantly to chance in the Load condition. Performance on tests of learned associations remained high in both conditions. These results indicate that transfer of learning depends on access to WM resources and suggests that even healthy young individuals may be affected in their ability to cross-utilize when cognitive resources become scarce, such as when engaging in two tasks simultaneously (e.g., using satellite navigation while driving).","link":"/opendata/balter-raymond-2022/"},{"title":"Balzus et al. 
(2022)","text":"Overactive performance monitoring, as reflected by enhanced neural responses to errors (the error-related negativity, ERN), is considered a biomarker for obsessive-compulsive disorder (OCD) and may be a promising target for novel treatment approaches. Prior research suggests that non-invasive brain stimulation with transcranial direct current stimulation (tDCS) may reduce the ERN in healthy individuals, yet no study has investigated its efficacy in attenuating the ERN in OCD. In this preregistered, randomized, sham-controlled, crossover study, we investigated effects of tDCS on performance monitoring in patients with OCD (n = 28) and healthy individuals (n = 28). Cathodal and sham tDCS was applied over the presupplementary motor area (pre-SMA) in two sessions, each followed by electroencephalogram recording during a flanker task. Cathodal tDCS reduced the ERN amplitude compared to sham tDCS, albeit this effect was only marginally significant (p = .052; mean difference: 0.86 μV). Additionally, cathodal tDCS reduced the correct-response negativity and increased the error positivity. These neural modulations were not accompanied by behavioral changes. Moreover, we found no evidence that the tDCS effect was more pronounced in the patient group. In summary, our findings indicate that tDCS over the pre-SMA modulates neural correlates of performance monitoring across groups. Therefore, this study represents a valuable starting point for future research to determine whether repeated tDCS application induces a more pronounced ERN attenuation and normalizes aberrant performance monitoring in the long term, thereby potentially alleviating obsessive-compulsive symptoms and providing a psychophysiological intervention strategy for individuals who do not benefit sufficiently from existing interventions.","link":"/opendata/balzus-et-al-2022/"},{"title":"Banca et al. 
(2023)","text":"Enhanced habit formation, greater automaticity and impaired goal/habit arbitration in obsessive-compulsive disorder (OCD) are key hypotheses from the goal/habit imbalance theory of compulsion which have not been directly investigated. This article tests these hypotheses using a combination of newly developed behavioral tasks. First, we trained patients with OCD and healthy controls, using a novel smartphone app, to perform chunked action sequences, previously shown to engage habit brain circuitry. The motor training was performed daily over a one-month period. There was equivalent procedural learning and attainment of habitual performance (measured with an objective criterion of automaticity) in both groups, despite greater subjective habitual tendencies in patients with OCD, self-reported via a recently developed questionnaire. We then used a combination of follow-up behavioral tasks to further assess the arbitration between previous automatic and new goal-directed action sequences. We found no evidence for impairments of goal/habit arbitration in OCD following re-evaluation based on monetary feedback, although there was a greater preference for engaging in the trained habitual sequence under certain conditions, which may have derived from its intrinsic value. These findings may lead to a reformulation of the goal/habit imbalance hypothesis in OCD. Finally, OCD patients with higher compulsivity scores and habitual tendencies showed more engagement with the motor habit-training app and reported symptom alleviation, with implications for its potential use as a form of habit reversal therapy.","link":"/opendata/banca-et-al-2023/"},{"title":"Bang et al. (2022)","text":"Computing confidence in one's own and others' decisions is critical for social success. While there has been substantial progress in our understanding of confidence estimates about oneself, little is known about how people form confidence estimates about others. 
Here, we address this question by asking participants undergoing fMRI to place bets on perceptual decisions made by themselves or one of three other players of varying ability. We show that participants compute confidence in another player's decisions by combining distinct estimates of player ability and decision difficulty, allowing them to predict that a good player may get a difficult decision wrong and that a bad player may get an easy decision right. We find that this computation is associated with an interaction between brain systems implicated in decision-making (LIP) and theory of mind (TPJ and dmPFC). These results reveal an interplay between self- and other-related processes during a social confidence computation.","link":"/opendata/bang-et-al-2022/"},{"title":"Baptista et al. (2021)","text":"Checking behavior is a natural and adaptive strategy for resolving uncertainty in everyday situations. Here, we aimed at investigating the psychological drivers of checking and its regulation by uncertainty, in non-clinical participants and controlled experimental settings. We found that the sensitivity of participants’ explicit confidence judgments to actual performance (explicit metacognition) predicted the extent to which their checking strategy was regulated by uncertainty. Yet, a more implicit measure of metacognition (derived from asking participants to opt between trials) did not contribute to the regulation of checking behavior. Meanwhile, how participants scored on questionnaires eliciting self-beliefs such as self-confidence and self-reported obsessive-compulsive symptoms also predicted participants’ uncertainty-guided checking tendencies. Altogether, these findings demonstrate that checking behavior is likely the outcome of a core explicit metacognitive process operating at the scale of single decisions, while remaining influenced by general self-beliefs. Our findings are thus consistent with two mechanisms (micro vs. 
macro) through which this otherwise adaptive behavior could go awry in certain psychiatric disorders such as obsessive-compulsive disorder.","link":"/opendata/baptista-et-al-2021/"},{"title":"Barnby et al. (2022a)","text":"Theoretical accounts suggest heightened uncertainty about the state of the world underpins aberrant belief updates, which in turn increase the risk of developing a persecutory delusion. However, this raises the question as to how an agent’s uncertainty may relate to the precise phenomenology of paranoia, as opposed to other qualitatively different forms of belief. We tested whether the same population (n=693) responded similarly to non-social and social contingency changes in a probabilistic reversal learning task and a modified repeated reversal Dictator game, and the impact of paranoia on both. We fitted computational models that included closely related parameters that quantified the rigidity across contingency reversals and the uncertainty about the environment/partner. Consistent with prior work we show that paranoia was associated with uncertainty around a partner’s behavioural policy and rigidity in harmful intent attributions in the social task. In the non-social task we found that pre-existing paranoia was associated with larger decision temperatures and commitment to suboptimal cards. We show relationships between decision temperature in the non-social task and priors over harmful intent attributions and uncertainty over beliefs about partners in the social task. Our results converge across both classes of model, suggesting paranoia is associated with a general uncertainty over the state of the world (and agents within it) that takes longer to resolve, although we demonstrate that this uncertainty is expressed asymmetrically in social contexts. 
Our model and data allow the representation of sociocognitive mechanisms that explain persecutory delusions and provide testable, phenomenologically relevant predictions for causal experiments.","link":"/opendata/barnby-et-al-2022a/"},{"title":"Barnby et al. (2022b)","text":"To benefit from social interactions, people need to predict how their social partners will behave. Such predictions arise through integrating prior expectations with evidence from observations, but where the priors come from and whether they influence the integration into beliefs about a social partner is not clear. Furthermore, this process can be affected by factors such as paranoia, in which the tendency to form biased impressions of others is common. Using a modified social value orientation (SVO) task in a large online sample (n = 697), we showed that participants used a Bayesian inference process to learn about partners, with priors that were based on their own preferences. Paranoia was associated with preferences for earning more than a partner and less flexible beliefs regarding a partner’s social preferences. Alignment between the preferences of participants and their partners was associated with better predictions and with reduced attributions of harmful intent to partners. Together, our data and model expand upon theories of interpersonal relationships by demonstrating how dyadic similarity mechanistically influences social interaction by generating more accurate predictions and less threatening impressions.","link":"/opendata/barnby-et-al-2022b/"},{"title":"Barnby et al. (2023)","text":"Striatal dopamine is important to paranoid attributions, although its computational role in social inference remains elusive. We employed a simple game theoretic paradigm and computational model of intentional attributions to investigate the effects of dopamine D2/D3 antagonism on ongoing mental state inference following social outcomes. 
Haloperidol, compared to placebo, enhanced the impact of partner behaviour on beliefs about harmful intent, and increased learning from recent encounters. These alterations caused significant changes to model covariation and negative correlations between self-interest and harmful intent attributions. Our findings suggest haloperidol improves flexibility in model-based beliefs about others and simultaneously reduces the self-relevance of social observations. Our results may reflect the role of D2/D3 dopamine in supporting self-relevant mentalisation. Our data and model bridge theory between general and social accounts of value representation. We demonstrate initial evidence for the sensitivity of our model and short social paradigm to drug intervention and clinical dimensions, allowing distinctions between mechanisms that operate across traits and states.","link":"/opendata/barnby-et-al-2023/"},{"title":"Barretto-Garcia et al. (2023)","text":"Humans are generally risk averse, preferring smaller certain over larger uncertain outcomes. Economic theories usually explain this by assuming concave utility functions. Here, we provide evidence that risk aversion can also arise from relative underestimation of larger monetary payoffs, a perceptual bias rooted in the noisy logarithmic coding of numerical magnitudes. We confirmed this with psychophysics and functional magnetic resonance imaging, by measuring behavioural and neural acuity of magnitude representations during a magnitude perception task and relating these measures to risk attitudes during separate risky financial decisions. Computational modelling indicated that participants use similar mental magnitude representations in both tasks, with correlated precision across perceptual and risky choices. Participants with more precise magnitude representations in parietal cortex showed less variable behaviour and less risk aversion. 
Our results highlight that at least some individual characteristics of economic behaviour can reflect capacity limitations in perceptual processing rather than processes that assign subjective values to monetary outcomes.","link":"/opendata/barretto-garcia-et-al-2023/"},{"title":"Basili et al. (2022)","text":"In September 2021 we conducted a survey of 1482 people in Italy, while the vaccination campaign against Covid-19 was ongoing. In the first part of the survey we ran three simple tests on players’ behavior in standard tasks with monetary incentives to measure their risk attitudes, willingness to contribute to a public good in an experimental game, and their beliefs about others’ behavior. In the second part, we asked respondents if they were vaccinated and, if not, for what reason. We classified as no-vaxxers those (around [Formula: see text] of the sample) who had not yet started the vaccination process and declared that they intended not to do it in the future. We find that no-vaxxers contribute less to the public good in the experimental game because they trust others less to do so. From the three tests we extrapolated a classification based on the benchmark of rationality and other-regarding preferences for each respondent, and we found that in this respect no-vaxxers do not differ from the rest of the population.","link":"/opendata/basili-et-al-2022/"},{"title":"Bavard et al. (2018)","text":"In economics and perceptual decision-making, contextual effects are well documented, where decision weights are adjusted as a function of the distribution of stimuli. Yet, in the reinforcement learning literature, whether and how contextual information pertaining to decision states is integrated in learning algorithms has received comparably little attention. Here, we investigate reinforcement learning behavior and its computational substrates in a task where we orthogonally manipulate outcome valence and magnitude, resulting in systematic variations in state-values. 
Model comparison indicates that subjects' behavior is best accounted for by an algorithm which includes both reference-point dependence and range adaptation, two crucial features of state-dependent valuation. In addition, we find that state-dependent outcome valuation progressively emerges, is favored by increasing outcome information and correlated with explicit understanding of the task structure. Finally, our data clearly show that, while being locally adaptive (for instance in negative valence and small magnitude contexts), state-dependent valuation comes at the cost of seemingly irrational choices when options are extrapolated out from their original contexts.","link":"/opendata/bavard-et-al-2018/"},{"title":"Bavard et al. (2021)","text":"Evidence suggests that economic values are rescaled as a function of the range of the available options. Although locally adaptive, range adaptation has been shown to lead to suboptimal choices, particularly notable in reinforcement learning (RL) situations when options are extrapolated from their original context to a new one. Range adaptation can be seen as the result of an adaptive coding process aiming at increasing the signal-to-noise ratio. However, this hypothesis leads to a counterintuitive prediction: Decreasing task difficulty should increase range adaptation and, consequently, extrapolation errors. Here, we tested the paradoxical relation between range adaptation and performance in a large sample of participants performing variants of an RL task, where we manipulated task difficulty. Results confirmed that range adaptation induces systematic extrapolation errors and is stronger when decreasing task difficulty. 
Last, we propose a range-adapting model and show that it is able to parsimoniously capture all the behavioral results.","link":"/opendata/bavard-et-al-2021/"},{"title":"Bavard & Palminteri (2023)","text":"Reinforcement learning research in humans and other species indicates that rewards are represented in a context-dependent manner. More specifically, reward representations seem to be normalized as a function of the value of the alternative options. The dominant view postulates that value context-dependence is achieved via a divisive normalization rule, inspired by perceptual decision-making research. However, behavioral and neural evidence points to another plausible mechanism: range normalization. Critically, previous experimental designs were ill-suited to disentangle the divisive and the range normalization accounts, which generate similar behavioral predictions in many circumstances. To address this question, we designed a new learning task where we manipulated, across learning contexts, the number of options and the value ranges. Behavioral and computational analyses falsify the divisive normalization account and rather provide support for the range normalization rule. Together, these results shed new light on the computational mechanisms underlying context-dependence in learning and decision-making.","link":"/opendata/bavard-palminteri-2023/"},{"title":"Bedder et al. (2023)","text":"Humans exhibit distinct risk preferences when facing choices involving potential gains and losses. These preferences are believed to be subject to neuromodulatory influence, particularly from dopamine and serotonin. As neuromodulators manifest circadian rhythms, this suggests decision making under risk might be affected by time of day. Here, in a large subject sample collected using a smartphone application, we found that risky options with potential losses were increasingly chosen over the course of the day. 
We observed this result in both a within-subjects design (N = 2599) comparing risky options chosen earlier and later in the day in the same individuals, and in a between-subjects design (N = 26,720) showing our effect generalizes across ages and genders. Using computational modelling, we show this diurnal change in risk preference reflects a decrease in sensitivity to increasing losses, but no change was observed in the relative impacts of gains and losses on choice (i.e., loss aversion). Thus, our findings reveal a striking diurnal modulation in human decision making, a pattern with potential importance for real-life decisions that include voting, medical decisions, and financial investments.","link":"/opendata/bedder-et-al-2023/"},{"title":"Bejjani et al. (2022)","text":"Cognitive control is guided by learning, as people adjust control to meet changing task demands. The two best-studied instances of control-learning are the enhancement of attentional task focus in response to increased frequencies of incongruent distracter stimuli, reflected in the list-wide proportion congruent (LWPC) effect, and the enhancement of switch-readiness in response to increased frequencies of task switches, reflected in the list-wide proportion switch (LWPS) effect. However, the latent architecture underpinning these adaptations in cognitive stability and flexibility - specifically, whether there is a single, domain-general, or multiple, domain-specific learners - is currently not known. To reveal the underlying structure of control-learning, we had a large sample of participants (N = 950) perform LWPC and LWPS paradigms, and afterwards assessed their explicit awareness of the task manipulations, as well as general cognitive ability and motivation. Structural equation modeling was used to evaluate several preregistered models representing different plausible hypotheses concerning the latent structure of control-learning. Task performance replicated standard LWPC and LWPS effects. 
Crucially, the model that best fit the data had correlated domain- and context-specific latent factors. Thus, people's ability to adapt their on-task focus and between-task switch-readiness to changing levels of demand was mediated by distinct (though correlated) underlying factors. Model fit remained good when accounting for speed-accuracy trade-offs, variance in individual cognitive ability and self-reported motivation, as well as self-reported explicit awareness of manipulations and the order in which different levels of demand were experienced. Implications of these results for the cognitive architecture of dynamic cognitive control are discussed.","link":"/opendata/bejjani-et-al-2022/"},{"title":"Bellana et al. (2022)","text":"Some experiences linger in mind, spontaneously returning to our thoughts for minutes after their conclusion. Other experiences fall out of mind immediately. It remains unclear why. We hypothesize that an input is more likely to persist in our thoughts when it has been deeply processed: when we have extracted its situational meaning rather than its physical properties or low-level semantics. Here, participants read sequences of words with different levels of coherence (word-, sentence-, or narrative-level). We probe participants' spontaneous thoughts via free word association, before and after reading. By measuring lingering subjectively (via self-report) and objectively (via changes in free association content), we find that information lingers when it is coherent at the narrative level. Furthermore, an individual's feeling of transportation into the reading material predicts lingering better than the material's objective coherence. Thus, our thoughts in the present moment echo prior experiences that have been incorporated into deeper, narrative forms of thinking.","link":"/opendata/bellana-et-al-2022/"},{"title":"Beltzer et al. 
(2019)","text":"Adaptive social behavior requires learning probabilities of social reward and punishment, and updating these probabilities when they change. Given prior research on aberrant reinforcement learning in affective disorders, this study examines how social anxiety affects probabilistic social reinforcement learning and dynamic updating of learned probabilities in a volatile environment. N=222 online participants completed questionnaires and a computerized ball-catching game with changing probabilities of reward and punishment. Dynamic learning rates were estimated to assess the relative importance ascribed to new information in response to volatility. Mixed-effects regression was used to analyze throw patterns as a function of social anxiety symptoms. Higher social anxiety predicted fewer throws to the previously punishing avatar and different learning rates after certain role changes, suggesting that social anxiety may be characterized by difficulty updating learned social probabilities. Socially anxious individuals may miss the chance to learn that a once-punishing situation no longer poses a threat.","link":"/opendata/beltzer-et-al-2019/"},{"title":"Ben Artzi & Shahar (2021)","text":"178 Prolific workers completed an online experiment in return for monetary compensation. Participants completed a reinforcement-learning task with four cards and two reward conditions. On each trial of the task, two of the four cards were offered by the computer, and participants were asked to pick one. Each card could lead to a reward with an independently drifting probability across trials. The difference between conditions was in whether participants won extra points or avoided the loss of points. All participants completed the OCI-R, and a partial sample also completed the BDI, STAI, and SPQ.","link":"/opendata/ben-artzi-shahar-2021/"},{"title":"Ben Artzi et al. 
(2022a)","text":"To establish accurate action-outcome associations in the environment, individuals must refrain from assigning value to outcome-irrelevant features. However, reinforcement learning studies have largely ignored the role of attentional control processes in credit assignment (the process of assigning value to one's actions). In the current study, we examined the extent to which working memory – a system that can filter and block the processing of irrelevant information in one's mind – predicted credit assignment to outcome-irrelevant task features. One hundred and seventy-four individuals provided working memory capacity and outcome-irrelevant learning estimates. Outcome-irrelevant learning was estimated in a reinforcement learning task where only the stimuli's visual features predicted reward, but not the response keys used to indicate one's selection. As expected, we found a consistent tendency to assign value to the task's response keys, reflecting outcome-irrelevant learning at the group level. However, we also found substantial individual differences, such that only 55% of participants demonstrated this effect. Importantly, working memory capacity significantly moderated individual differences in outcome-irrelevant learning; individuals with higher capacity were less likely to assign credit to the outcome-irrelevant feature (i.e., response key). We discuss the influence of working memory on outcome-irrelevant learning through the perspective of cognitive control failure.","link":"/opendata/ben-artzi-et-al-2022a/"},{"title":"Ben Artzi et al. (2022b)","text":"Current studies suggest that individuals estimate the value of their choices based on observed feedback. Here, we ask whether individuals update the value of their unchosen actions, even when the associated feedback remains unknown. Two hundred and three individuals completed a multi-armed bandit task, making choices to gain rewards. 
We found robust evidence suggesting inverse value updating for unchosen actions based on the chosen action’s outcome. Computational modeling results suggested that this effect is mainly explained by a value updating mechanism whereby individuals integrate the outcome history for choosing an option with that of avoiding the alternative. Properties of the deliberation (i.e., duration/difficulty) did not moderate the latent value updating of unchosen actions, suggesting that memory traces generated during deliberation take a smaller role in this phenomenon than previously thought. We discuss the mechanisms facilitating credit assignment to unchosen actions and their implications for human decision-making.","link":"/opendata/ben-artzi-et-al-2022b/"},{"title":"Bennett et al. (2021)","text":"Aversion to uncertainty about the future has been proposed as a transdiagnostic trait underlying psychiatric diagnoses including obsessive-compulsive disorder and generalized anxiety. This association might explain the frequency of pathological information-seeking behaviors such as compulsive checking and reassurance-seeking in these disorders. Here we tested the behavioral predictions of this model using a noninstrumental information-seeking task that measured preferences for unusable information about future outcomes in different payout domains (gain, loss, and mixed gain/loss). We administered this task, along with a targeted battery of self-report questionnaires, to a general-population sample of 146 adult participants. Using computational cognitive modeling of choices to test competing theories of information valuation, we found evidence for a model in which preferences for costless and costly information about future outcomes were independent, and in which information preference was modulated by both outcome mean and outcome variance. 
Critically, we also found positive associations between a model parameter controlling preference for costly information and individual differences in latent traits of both anxiety and obsessive-compulsion. These associations were invariant across different payout domains, providing evidence that individuals high in obsessive-compulsive and anxious traits show a generalized increase in willingness-to-pay for unusable information about uncertain future outcomes, even though this behavior reduces their expected future reward.","link":"/opendata/bennett-et-al-2021/"},{"title":"Bennett, Radulescu et al. (2021)","text":"Positive and negative affective states are respectively associated with optimistic and pessimistic expectations regarding future reward. One mechanism that might underlie these affect-related expectation biases is attention to positive- versus negative-valence stimulus features (e.g., attending to the positive reviews of a restaurant versus its expensive price). Here we tested the effects of experimentally induced positive and negative affect on feature-based attention in 120 participants completing a compound-generalization task with eye-tracking. We found that participants' reward expectations for novel compound stimuli were modulated by the affect induction in an affect-congruent way: positive affect increased reward expectations for compounds, whereas negative affect decreased reward expectations. Computational modelling and eye-tracking analyses each revealed that these effects were driven by affect-congruent changes in participants' allocation of attention to high- versus low-value features of compound stimuli. These results provide mechanistic insight into a process by which affect produces biases in generalized reward expectations.","link":"/opendata/bennett-radulescu-et-al-2021/"},{"title":"Benwell et al. (2022)","text":"Human behaviours are guided by how confident we feel in our abilities. 
When confidence does not reflect objective performance, this can impact critical adaptive functions and impair life quality. Distorted decision-making and confidence have been associated with mental health problems. Here, utilising advances in computational and transdiagnostic psychiatry, we sought to map relationships between psychopathology and both decision-making and confidence in the general population across two online studies (N’s = 344 and 473, respectively). The results revealed dissociable decision-making and confidence signatures related to distinct symptom dimensions. A dimension characterised by compulsivity and intrusive thoughts was found to be associated with reduced objective accuracy but, paradoxically, increased absolute confidence, whereas a dimension characterized by anxiety and depression was associated with systematically low confidence in the absence of impairments in objective accuracy. These relationships replicated across both studies and distinct cognitive domains (perception and general knowledge), suggesting that they are reliable and domain general. Additionally, whereas Big-5 personality traits also predicted objective task performance, only symptom dimensions related to subjective confidence. Domain-general signatures of decision-making and metacognition characterise distinct psychological dispositions and psychopathology in the general population and implicate confidence as a central component of mental health.","link":"/opendata/benwell-et-al-2022/"},{"title":"Bertram et al. (2021)","text":"Information about risks and probabilities is ubiquitous in our environment, forming the basis for decisions in an uncertain world. Emotions are known to modulate subjective probability assessments when probabilistic information is emotionally valenced. Yet little is known about the role of emotions in subjective probability assessment of affectively neutral events. 
We investigated this in one correlational study (Study 1, N = 162) and one experimental study (Study 2, N = 119). As predicted, we found that emotional dominance modulated the degree of conservatism in respondents’ neutral probability estimates. Remarkably, this pattern also transferred to realistic risk assessments. Furthermore, respondents’ tendency to use the representativeness heuristic as a proxy for probability was increased in high dominance individuals. Our findings highlight the importance of considering emotions, particularly the little-understood emotion dimension dominance, in research on probabilistic cognition.","link":"/opendata/bertram-et-al-2021/"},{"title":"Biderman & Shohamy (2021)","text":"The goal of deliberation is to separate between options so that we can commit to one and leave the other behind. However, deliberation can, paradoxically, also form an association in memory between the chosen and unchosen options. Here, we consider this possibility and examine its consequences for how outcomes affect not only the value of the options we chose, but also, by association, the value of options we did not choose. In five experiments (total n = 612), including a preregistered experiment (n = 235), we found that the value assigned to unchosen options is inversely related to their chosen counterparts. Moreover, this inverse relationship was associated with participants’ memory of the pairs they chose between. Our findings suggest that deciding between options does not end the competition between them. Deliberation binds choice options together in memory such that the learned value of one can affect the inferred value of the other.","link":"/opendata/biderman-shohamy-2021/"},{"title":"Bioud et al. (2022)","text":"To decide whether a course of action is worth pursuing, individuals typically weigh its expected costs and benefits. 
Optimal decision-making relies upon accurate effort cost anticipation, which is generally assumed to be performed independently from goal valuation. In two experiments (n = 46), we challenged this independence principle of standard decision theory. We presented participants with a series of treadmill routes randomly associated with monetary rewards and collected both accept versus decline decisions and subjective estimates of energetic cost. Behavioural results show that higher monetary prospects led participants to provide higher cost estimates, although reward was independent of effort in our design. Among candidate cognitive explanations, they support a model in which prospective cost assessment is biased by the output of an automatic computation adjusting effort expenditure to goal value. This decision bias might lead people to abandon the pursuit of valuable goals that are in fact not so costly to achieve.","link":"/opendata/bioud-et-al-2022/"},{"title":"Biria et al. (2023)","text":"There has been little analysis of neurochemical correlates of compulsive behaviour to illuminate its underlying neural mechanisms. We use 7-Tesla proton magnetic resonance spectroscopy (1H-MRS) to assess the balance of excitatory and inhibitory neurotransmission by measuring glutamate and GABA levels in anterior cingulate cortex (ACC) and supplementary motor area (SMA) of healthy volunteers and participants with Obsessive-Compulsive Disorder (OCD). Within the SMA, trait and clinical measures of compulsive behaviour are related to glutamate levels, whereas a behavioural index of habitual control correlates with the glutamate:GABA ratio. Participants with OCD also show the latter relationship in the ACC while exhibiting elevated glutamate and lower GABA levels in that region. This study highlights SMA mechanisms of habitual control relevant to compulsive behaviour, common to the healthy sub-clinical and OCD populations. 
The results also demonstrate additional involvement of anterior cingulate in the balance between goal-directed and habitual responding in OCD.","link":"/opendata/biria-et-al-2023/"},{"title":"Bisschop (2021)","text":"Database of cognitive control task data (e.g., Stroop, Flanker tasks).","link":"/opendata/bisschop-2021/"},{"title":"Blackwell et al. (2023)","text":"According to major cognitive accounts of panic disorder, bodily sensations can lead to automatic activation of an associative fear network, potentially triggering a cascade of cognitive, emotional, and physiological responses culminating in a panic attack. However, the evidence for the automatic associations assumed by these models is mixed. This may reflect the heterogeneous nature of panic disorder, in that the relative importance of different bodily sensations and symptoms varies between individuals. The current study aimed to test this possibility via measuring the associations between three different sets of panic symptoms (cognitive, respiratory, cardiac) and scores on three symptom-specific single target implicit association tests (STIATs). A total of 226 unselected female participants aged 18-35 completed the STIATs as well as questionnaires assessing panic symptoms and related measures in a web-based study. Only limited evidence was found to support the idea of specific associations between STIAT stimuli sets and their related panic symptoms. Exploratory analyses indicated that there were only associations between STIAT scores and panic-relevant questionnaires amongst those participants who had experienced a panic attack in the previous 6 months. The results have implications for measuring panic-relevant associations and understanding their role in panic disorder.","link":"/opendata/blackwell-et-al-2023/"},{"title":"Blain & Rutledge (2020)","text":"Subjective well-being or happiness is often associated with wealth. 
Recent studies suggest that momentary happiness is associated with reward prediction error, the difference between experienced and predicted reward, a key component of adaptive behaviour. We tested subjects in a reinforcement learning task in which reward size and probability were uncorrelated, allowing us to dissociate between the contributions of reward and learning to happiness. Using computational modelling, we found convergent evidence across stable and volatile learning tasks that happiness, like behaviour, is sensitive to learning-relevant variables (i.e. probability prediction error). Unlike behaviour, happiness is not sensitive to learning-irrelevant variables (i.e. reward prediction error). Increasing volatility reduces how many past trials influence behaviour but not happiness. Finally, depressive symptoms reduce happiness more in volatile than stable environments. Our results suggest that how we learn about our world may be more important for how we feel than the rewards we actually receive.","link":"/opendata/blain-rutledge-2020/"},{"title":"Blain et al. (2022)","text":"Social interactions influence people's feelings and behavior. Here, we propose that a person's well-being is influenced not only by interactions they experience themselves, but also by those they observe. In particular, we test and quantify the influence of observed selfishness and observed inequality on a bystander's feelings and non-costly punishment decisions. We developed computational models that relate others' (un)selfish acts to observers' emotional reactions and punishment decisions. These characterize the rules by which others' interactions are transformed into bystanders' reactions, and successfully predict those reactions in out-of-sample participants. The models highlight the impact of two social values: 'selfishness aversion' and 'inequality aversion'. 
As for the latter, we find that even small violations of perfect equality have a disproportionately large impact on feelings and punishment. In this age of the internet and social media, we constantly observe others' online interactions, in addition to in-person interactions. Quantifying the consequences of such observations is important for predicting their impact on society.","link":"/opendata/blain-et-al-2022/"},{"title":"Blain et al. (2023)","text":"Humans frequently engage in intrinsically rewarding activities (for example, consuming art, reading). Despite such activities seeming diverse, we show that sensitivity to intrinsic rewards is domain general and associated with mental health. In this cross-sectional study, participants online (N = 483) were presented with putative visual, cognitive and social intrinsic rewards as well as monetary rewards and neutral stimuli. All rewards elicited positive feelings (were 'liked'), generated consummatory behaviour (were 'wanted') and increased the likelihood of the action leading to them (were 'reinforcing'). Factor analysis revealed that ~40% of response variance across stimuli was explained by a general sensitivity to all rewards, but not to neutral stimuli. Affective aspects of mental health were associated with sensitivity to intrinsic, but not monetary, rewards. These results may help explain thriving and suffering: individuals with high reward sensitivity will engage in a variety of intrinsically rewarding activities, eventually finding those they excel at, whereas low sensitivity individuals will not.","link":"/opendata/blain-et-al-2023/"},{"title":"Blankenship & Kibbe (2023)","text":"The ability to use knowledge to guide the completion of goals is a critical cognitive skill, but 3-year-olds struggle to complete goals that require multiple steps. This study asked whether 3-year-olds could benefit from “plan chunking” to complete multistep goals. Thirty-two U.S. 
children (range = 35.75-46.59 months; 18 girls; 9 white, 3 mixed race, 20 unknown; tested between July 2020 and April 2021) were asked to complete “treasure maps,” retrieving four colored map pieces by pressing specific buttons on a “rainbow box.” Children completed more of the four-step sequence correctly when the steps were presented in a way that encouraged chunking the steps into pairs. These findings suggest a potential mechanism supporting memory-guided planning abilities in early childhood.","link":"/opendata/blankenship-kibbe-2023/"},{"title":"Bode et al. (2023)","text":"Previous research has shown that people intrinsically value non-instrumental information, which cannot be used to change the outcome of events, but only provides an early resolution of uncertainty. This is true even for information about rather inconsequential events, such as the outcomes of small lotteries. Here we investigated whether participants’ willingness to pay for non-instrumental information about the outcome of simple coin-flip lotteries with guaranteed winnings was modulated by acute stress. Stress was induced using the Socially Evaluated Cold Pressor Test (SECPT), and information-seeking choices were compared to a warm water control group. Our results neither support the hypothesis that stress decreases information-seeking by directing cognitive resources away from the relevance of the lotteries, nor the opposite hypothesis that stress increases information-seeking by driving anxiety levels up. Instead, we found that despite successful stress induction, as evidenced by increased saliva cortisol levels in the SECPT group, information valuation was remarkably stable. This finding is in line with recent findings that experimentally increased state anxiety did not modulate non-instrumental information seeking. 
Together, these results suggest that the aversiveness of “not knowing” is a stable cognitive state and not easily modulated by situational context, such as acute stress.","link":"/opendata/bode-et-al-2023/"},{"title":"Bogdanov et al. (2021)","text":"Adverse effects following acute stress are traditionally thought to reflect functional impairments of central executive-dependent cognitive-control processes. However, recent evidence demonstrates that cognitive-control application is perceived as effortful and aversive, indicating that stress-related decrements in cognitive performance could denote decreased motivation to expend effort instead. To investigate this hypothesis, we tested 40 young, healthy individuals (20 female, 20 male) under both stress and control conditions in a 2-day study that had a within-subjects design. Cognitive-effort avoidance was assessed using the demand-selection task, in which participants chose between performing low-demand and high-demand variants of a task-switching paradigm. We found that acute stress indeed increased participants' preference for less demanding behavior, whereas task-switching performance remained intact. Additional Bayesian and multiverse analyses confirmed the robustness of this effect. Our findings provide novel insights into how stressful experiences shape behavior by modulating our motivation to employ cognitive control.","link":"/opendata/bogdanov-et-al-2021/"},{"title":"Bognar et al. (2023)","text":"The U-shaped curve has long been recognized as a fundamental concept in psychological science, particularly in theories about motivational accounts and cognitive control. In this study (N=330), we empirically tested the prediction of a non-monotonic, curvilinear relationship between task difficulty and control adaptation. 
Drawing from Motivational Intensity Theory (MIT) and the expected value of control (EVC) framework, we hypothesized that control intensity would increase with task difficulty until a maximum tolerable level, after which it would decrease. To examine this hypothesis, we conducted two experiments utilizing Stroop-like conflict tasks, systematically manipulating the number of distractors to vary task difficulty. We assessed control adaptation and measured subjective task difficulty. Our results revealed a curvilinear pattern between perceived task difficulty and adaptation of control. The findings provide empirical support for the theoretical accounts of MIT and EVC, highlighting the nonlinear nature of the relationship between task difficulty and cognitive control.","link":"/opendata/bognar-et-al-2023/"},{"title":"Bolenz et al. (2019)","text":"Humans employ different strategies when making decisions. Previous research has reported reduced reliance on model-based strategies with aging, but it remains unclear whether this is due to cognitive or motivational factors. Moreover, it is not clear how aging affects the metacontrol of decision making, that is the dynamic adaptation of decision-making strategies to varying situational demands. In this cross-sectional study, we tested younger and older adults in a sequential decision-making task that dissociates model-free and model-based strategies. In contrast to previous research, model-based strategies led to higher payoffs. Moreover, we manipulated the costs and benefits of model-based strategies by varying reward magnitude and the stability of the task structure. Compared to younger adults, older adults showed reduced model-based decision making and less adaptation of decision-making strategies. 
Our findings suggest that aging affects the metacontrol of decision-making strategies and that reduced model-based strategies in older adults are due to limited cognitive abilities.","link":"/opendata/bolenz-et-al-2019/"},{"title":"Bolenz & Eppinger (2021)","text":"The development of metacontrol of decision making and its susceptibility to framing effects were investigated in a sample of 201 adolescents and adults in Germany (12-25 years, 111 female, ethnicity not recorded). In a task that dissociates model-free and model-based decision making, outcome magnitude and outcome valence were manipulated. Both adolescents and adults showed metacontrol and metacontrol tended to increase across adolescence. Furthermore, model-based decision making was more pronounced for loss compared to gain frames but there was no evidence that this framing effect differed with age. Thus, the strategic adaptation of decision making continues to develop into young adulthood and for both adolescents and adults, losses increase the motivation to invest cognitive resources into an effortful decision-making strategy.","link":"/opendata/bolenz-eppinger-2021/"},{"title":"Bolenz et al. (2022)","text":"Humans show metacontrol of decision making, that is they adapt their reliance on decision-making strategies toward situational differences such as differences in reward magnitude. Specifically, when higher rewards are at stake, individuals increase reliance on a more accurate but cognitively effortful strategy. We investigated whether the personality trait Need for Cognition (NFC) explains individual differences in metacontrol. Based on findings of cognitive effort expenditure in executive functions, we expected more metacontrol in individuals low in NFC. In two independent studies, metacontrol was assessed by means of a decision-making task that dissociates different reinforcement-learning strategies and in which reward magnitude was manipulated across trials. 
In contrast to our expectations, NFC did not account for individual differences in metacontrol of decision making. In fact, a Bayesian analysis provided moderate to strong evidence against a relationship between NFC and metacontrol. Beyond this, there was no consistent evidence for a relationship between NFC and overall model-based decision making. These findings show that the effect of rewards on the engagement of effortful decision-making strategies is largely independent of the intrinsic motivation for engaging in cognitively effortful tasks and suggest a differential role of NFC for the regulation of cognitive effort in decision making and executive functions.","link":"/opendata/bolenz-et-al-2022/"},{"title":"Bolenz & Pachur (2023)","text":"Younger and older adults differ in their risky choices. Theoretical frameworks on human aging point to various cognitive and motivational factors that might underlie these differences. Using a novel computational model based on resource rationality, we find that the two age groups select qualitatively different strategies. Importantly, older adults did not use simpler strategies than younger adults, they did not select among fewer strategies, they did not make more errors, and they did not put more weight on cognitive costs. Instead, older adults selected strategies that had different risk propensities than those selected by younger adults. Our modeling approach suggests that rather than being a consequence of cognitive decline, age differences in risky choice seem to mainly reflect motivational differences between age groups.","link":"/opendata/bolenz-pachur-2023/"},{"title":"Bond et al. (2021)","text":"In uncertain or unstable environments, sometimes the best decision is to change your mind. To shed light on this flexibility, we evaluated how the underlying decision policy adapts when the most rewarding action changes. 
Human participants performed a dynamic two-armed bandit task that manipulated the certainty in relative reward (conflict) and the reliability of action-outcomes (volatility). Continuous estimates of conflict and volatility contributed to shifts in exploratory states by changing both the rate of evidence accumulation (drift rate) and the amount of evidence needed to make a decision (boundary height), respectively. At the trialwise level, following a switch in the optimal choice, the drift rate plummets and the boundary height weakly spikes, leading to a slow exploratory state. We find that the drift rate drives most of this response, with an unreliable contribution of boundary height across experiments. Surprisingly, we find no evidence that pupillary responses were associated with decision policy changes. We conclude that humans show a stereotypical shift in their decision policies in response to environmental changes.","link":"/opendata/bond-et-al-2021/"},{"title":"Boned & López-Moliner (2022)","text":"When we try to assess the duration of an event, we are often affected by external information. Studies on multiple timing have found that simultaneous timing information can produce an averaging or central tendency effect, where the perceived duration of the elements tends to be biased towards a general average. We wanted to assess how this effect induced by simultaneous distractors could depend on the temporal similarity between stimuli. We used a duration judgment task in which participants (n = 22) had to compare the duration of two identical targets (1 s) accompanied by simultaneous distractors of different durations (0.3, 0.7, 1.5 or 3 s). We found a central tendency effect, where duration judgments of the target were systematically biased towards the duration of the distractors that accompanied them. We put forward a model based on the concept of duration-channels that can explain the central tendency effect with only one estimated parameter. 
This parameter modulates the rate of decay of this effect as the distractor's duration becomes more different from the duration of the target.","link":"/opendata/boned-lopez-moliner-2022/"},{"title":"Bonnaire et al. (2022)","text":"Blaszczynski and Nower (2002) conceptualized their Pathways Model by postulating the existence of three subtypes of problem gamblers who share common characteristics, but also present specific ones. This study investigated how the psychological mechanisms postulated in the Pathways Model predict clinical status in a sample that combined treatment-seeking gamblers (n = 59) and non-problematic community gamblers (n = 107). To test the Pathways Model, we computed a hierarchic logistic regression in which variables associated with each postulated pathway were entered sequentially to predict the status of the treatment-seeking gambler. Self-report questionnaires measured gambling-related cognitions, alexithymia, emotional reactivity, emotion regulation strategies and impulsivity. Behavioural tasks measured gambling persistence (slot machine task), decision-making under uncertainty (Iowa Gambling Task) and decision-making under risk (Game of Dice Task). We showed that specific factors theorized as underlying mechanisms for each pathway predicted the status of clinical gambler. For each pathway, significant predictors included gambling-related cognitive distortions and behaviourally measured gambling persistence (behaviourally conditioned pathway), emotional reactivity and emotion regulation strategies (emotionally vulnerable pathway), and lack of premeditation impulsivity facet (impulsivist-antisocial pathway). Our study adds to the body of literature confirming the validity of the Pathways Model and holds important implications in terms of assessment and treatment of problem gambling. 
In particular, a standardized assessment based on the Pathways Model should promote individualized treatment strategies to allow clinicians to take into account the high heterogeneity that characterizes gambling disorder.","link":"/opendata/bonnaire-et-al-2022/"},{"title":"Boschet et al. (2022)","text":"Conflicts between avoiding feared stimuli versus approaching them for competing rewards are essential for functional behavior and anxious psychopathology. Yet, little is known about the underlying decision process. We examined approach-avoidance decisions and their temporal dynamics when avoiding Pavlovian fear stimuli conflicted with gaining rewards. First, a formerly neutral stimulus (CS+) was repeatedly paired with an aversive stimulus (US) to establish Pavlovian fear. Another stimulus (CS-) was never paired with the US. A control group received neutral tones instead of aversive USs. Next, in each of 324 trials, participants chose between a CS-/low reward and a CS+/high reward option. For the latter, probability of CS+ presentation (Pavlovian fear information) and reward magnitude (reward information) varied. Computer mouse movements were tracked to capture the decision dynamics. Although no more USs occurred, pronounced and persistent costly avoidance of the Pavlovian fear CS+ was found. Time-continuous multiple regression of movement trajectories revealed a stronger and faster impact of Pavlovian fear compared to reward information during decision-making. The impact of fear information, but not reward information, modestly decreased across trials. These findings suggest a persistently stronger weighting of fear compared to reward information during approach-avoidance decisions, which may facilitate the development of pathological avoidance.","link":"/opendata/boschet-et-al-2022/"},{"title":"Bradshaw & McGettigan (2021)","text":"Recent research suggests that reinforcement learning may underlie trait formation in social interactions with faces. 
The current study investigated whether the same learning mechanisms could be engaged for trait learning from voices. On each trial of a training phase, participants (N = 192) chose from pairs of human or slot machine targets that varied in the (1) reward value and (2) generosity of their payouts. Targets were either auditory (voices or tones; Experiment 1) or visual (faces or icons; Experiment 2) and were presented sequentially before payout feedback. A test phase measured participant choice behaviour, and a post-test recorded their target preference ratings. For auditory targets, we found a significant effect of reward only on target choices, but saw higher preference ratings for more generous humans and slot machines. For visual targets, findings from previous studies were replicated: participants learned about both generosity and reward, but generosity was prioritised in the human condition. These findings provide one of the first demonstrations of reinforcement learning of reward with auditory stimuli in a social learning task, but suggest that the use of auditory targets does alter learning in this paradigm. Conversely, reinforcement learning of reward and trait information with visual stimuli remains intact even when sequential presentation introduces a delay in feedback.","link":"/opendata/bradshaw-mcgettigan-2021/"},{"title":"Bratzke & Ulrich (2019)","text":"The human ability to compare time between sensory modalities implies a supramodal representation of time. This notion is consistent with the pacemaker-counter model (PCM), the core architecture of prominent timing theories. Some theorists, however, have promoted modality-specific timing mechanisms, which might hamper crossmodal temporal comparison. This study tested whether PCM is sufficient to account for intra- as well as crossmodal timing. To account for modality-specific timing differences, we proceeded from the common assumption that the pacemaker runs faster for auditory than for visual stimuli. 
Participants reproduced short and long standards (800 vs. 2,400 ms) by terminating a comparison stimulus. In Experiment 1, in each trial the sensory modalities (auditory vs. visual) of the standard and the comparison were the same (congruent) or different (incongruent). PCM implies that timing performance depends on modality order. However, there should be virtually no congruency effects on overall performance. Although the results largely matched the predictions of PCM, there were substantial congruency effects on reproduction variability especially in the subsecond range. Three intramodal control experiments, however, showed that similar congruency effects can be observed when the standard and the comparison differ in intramodal characteristics. This suggests that temporal representations are not isolated from nontemporal stimulus characteristics, even when these are subtle and within the same modality. The present results can be interpreted as evidence for sensory timing within the subsecond range. Nevertheless, we used computer simulations to evaluate extensions of PCM that could account for the present result pattern, while retaining PCM’s supramodal property.","link":"/opendata/bratzke-ulrich-2019/"},{"title":"Braun et al. (2018)","text":"Many decisions are based on an internal model of the world. Yet, how such a model is constructed from experience and represented in memory remains unknown. We test the hypothesis that reward shapes memory for sequences of events by retroactively prioritizing memory for objects as a function of their distance from reward. Human participants encountered neutral objects while exploring a series of mazes for reward. Across six data sets, we find that reward systematically modulates memory for neutral objects, retroactively prioritizing memory for objects closest to the reward. 
This effect of reward on memory emerges only after a 24-hour delay and is stronger for mazes followed by a longer rest interval, suggesting a role for post-reward replay and overnight consolidation, as predicted by neurobiological data in animals. These findings demonstrate that reward retroactively prioritizes memory along a sequential gradient, consistent with the role of memory in supporting adaptive decision-making.","link":"/opendata/braun-et-al-2018/"},{"title":"Breslav et al. (2022)","text":"As children age, they can learn increasingly complex features of environmental structure, a key prerequisite for adaptive decision-making. Yet when we tested children (N = 304, 4-13 years old) in the Children’s Gambling Task, an age-appropriate variant of the Iowa Gambling Task, we found that age was negatively associated with performance. However, this paradoxical effect of age was found only in children who exhibited a maladaptive deplete-replenish bias, a tendency to shift choices after positive outcomes and repeat choices after negative outcomes. We found that this bias results from sensitivity to incidental nonrandom structure in the canonical, deterministic forms of these tasks, and that it would actually lead to optimal outcomes if the tasks were not deterministic. Our results illustrate that changes in decision-making across early childhood reflect, in part, increasing sensitivity to environmental structure.","link":"/opendata/breslav-et-al-2022/"},{"title":"Brooks & Sokol-Hessner (2023)","text":"Some of the most influential modern theories of risky monetary decision-making assume that choices result from stable, trait-like preferences, invariant to contextual influences such as recent events. 
Recent research has challenged this assumption, demonstrating that even when values and probabilities are explicit and known, decisions under risk are contextually sensitive, affected by recent events on multiple timescales, including immediate (previous monetary outcomes), neighborhood (recently encountered values), and global (cumulative earnings relative to dynamic expectations) events. Such temporal context-dependencies are perplexing, because relying on recent events at any timescale is inconsistent with the assumed goal of risky monetary decision-making: to maximize payoff. Identifying this suboptimal behavioral pattern raises the possibility it can be mitigated using behavioral change strategies. We tested whether the effects of temporal context in risk-taking can be attenuated with an intentional cognitive strategy. 124 participants completed two rounds of a contextually structured gambling task with trial-by-trial feedback, the Emotion Regulation Questionnaire, and working memory capacity tasks. Participants were randomly assigned to complete each gambling round with a strategy either emphasizing a natural, uncontrolled decision-making approach or directly instructing participants to ignore context. Instructions to ignore context influenced temporal context effects on the immediate timescale but did not change those on the neighborhood or global timescales. The strategy was not uniformly effective for all individuals. The cognitive strategy eliminated (and even reversed) contextual effects on the immediate timescale for individuals with moderate and high habitual use of reappraisal. However, the cognitive strategy paradoxically strengthened contextual effects on the immediate timescale for individuals with low habitual use of reappraisal. 
The selective effects of strategic control on contextual influence indicate both intriguing possibilities and limits on the ability to optimize human decision-making, and suggest that people do not simply maximize local utility, but rather that even simple risky decision-making may be fundamentally goal-dependent.","link":"/opendata/brooks-sokol-hessner-2023/"},{"title":"Brown et al. (2022)","text":"When navigating uncertain worlds, humans must balance exploring new options versus exploiting known rewards. Longer horizons and spatially structured option values encourage humans to explore, but the impact of real-world cognitive constraints such as environment size and memory demands on explore-exploit decisions is unclear. In the present study, humans chose between options varying in uncertainty during a multi-armed bandit task with varying environment size and memory demands. Regression and cognitive computational models of choice behavior showed that with a lower cognitive load, humans are more exploratory than a simulated value-maximizing learner, but under cognitive constraints, they adaptively scale down exploration to maintain exploitation. Thus, while humans are curious, cognitive constraints force people to decrease their strategic exploration in a resource-rational-like manner to focus on harvesting known rewards.","link":"/opendata/brown-et-al-2022/"},{"title":"Bruder et al. (2021a)","text":"In recent years the emergence of high-performance virtual reality (VR) technology has opened up new possibilities for the examination of context effects in psychological studies. The opportunity to create ecologically valid stimulation in a highly controlled lab environment is especially relevant for studies of psychiatric disorders, where it can be problematic to confront participants with certain stimuli in real life. 
However, before VR can be confidently applied widely, it is important to establish that commonly used behavioral tasks generate reliable data within a VR surrounding. One field of research that could benefit greatly from VR-applications is studies assessing the reactivity to addiction related cues (cue-reactivity) in participants suffering from gambling disorder. Here we tested the reliability of a commonly used temporal discounting task in a novel VR set-up designed for the concurrent assessment of behavioral and psychophysiological cue-reactivity in gambling disorder. On 2 days, thirty-four healthy non-gambling participants explored two rich and navigable VR-environments (neutral: café vs. gambling-related: casino and sports-betting facility), while their electrodermal activity was measured using remote sensors. In addition, participants completed the temporal discounting task implemented in each VR environment. On a third day, participants performed the task in a standard lab testing context. We then used comprehensive computational modeling using both standard softmax and drift diffusion model (DDM) choice rules to assess the reliability of discounting model parameters assessed in VR. Test-retest reliability estimates were good to excellent for the discount rate log(k), whereas they were poor to moderate for additional DDM parameters. Differences in model parameters between standard lab testing and VR, reflecting reactivity to the different environments, were mostly numerically small and of inconclusive directionality. Finally, while exposure to VR generally increased tonic skin conductance, this effect was not modulated by the neutral versus gambling-related VR-environment. 
Taken together, this proof-of-concept study in non-gambling participants demonstrates that temporal discounting measures obtained in VR are reliable, suggesting that VR is a promising tool for applications in computational psychiatry, including studies on cue-reactivity in addiction.","link":"/opendata/bruder-et-al-2021a/"},{"title":"Bruder et al. (2021b)","text":"High-performance virtual reality (VR) technology has opened new possibilities for the examination of the reactivity towards addiction-related cues (cue-reactivity) in addiction. In this preregistered study (https://osf.io/4mrta), we investigated the subjective, physiological, and behavioral effects of gambling-related VR environment exposure in participants reporting frequent or pathological gambling (n=31) as well as non-gambling controls (n=29). On two separate days, participants explored two rich and navigable VR-environments (neutral: café vs. gambling-related: casino/sports-betting facility), while electrodermal activity and heart rate were continuously measured using remote sensors. Within VR, participants performed a temporal discounting task and a sequential decision-making task designed to assess model-based and model-free contributions to behavior. Replicating previous findings, we found strong evidence for increased temporal discounting and reduced model-based control in participants reporting frequent or pathological gambling. Although VR gambling environment exposure increased subjective craving, there was, if anything, inconclusive evidence for further behavioral or physiological effects. Instead, VR exposure substantially increased physiological arousal (electrodermal activity), across groups and conditions. VR is a promising tool for the investigation of context effects in addiction, but some caution is warranted since effects of real gambling environments might not generally replicate in VR. 
Future studies should delineate how factors such as cognitive load and ecological validity could be balanced to create a more naturalistic VR experience.","link":"/opendata/bruder-et-al-2021b/"},{"title":"Brus et al. (2021)","text":"Confidence, the subjective estimate of decision quality, is a cognitive process necessary for learning from mistakes and guiding future actions. The origins of confidence judgments resulting from economic decisions remain unclear. We devise a task and computational framework that allowed us to formally tease apart the impact of various sources of confidence in value-based decisions, such as uncertainty emerging from encoding and decoding operations, as well as the interplay between gaze-shift dynamics and attentional effort. In line with canonical decision theories, trial-to-trial fluctuations in the precision of value encoding impact economic choice consistency. However, this uncertainty has no influence on confidence reports. Instead, confidence is associated with endogenous attentional effort towards choice alternatives and down-stream noise in the comparison process. These findings provide an explanation for confidence (mis)attributions in value-guided behaviour, suggesting mechanistic influences of endogenous attentional states for guiding decisions and metacognitive awareness of choice certainty.","link":"/opendata/brus-et-al-2021/"},{"title":"Bueno & Cravo (2021)","text":"Studies investigating the neural mechanisms of time perception often measure brain activity while participants perform a temporal task. However, several of these studies are based exclusively on tasks in which time is relevant, making it hard to dissociate activity related to decisions about time from other task-related patterns. In the present study, human participants performed a temporal or color discrimination task of visual stimuli. 
Participants were informed which magnitude they would have to judge before or after presenting the two stimuli (S1 and S2) in different blocks. Our behavioral results showed, as expected, that performance was better when participants knew beforehand which magnitude they would judge. Electrophysiological data (EEG) was analysed using Linear Discriminant Contrasts (LDC) and a Representational Similarity Analysis (RSA) approach to investigate whether and when information about time and color was encoded. During the presentation of S1, we did not find consistent differences in EEG activity as a function of the task. On the other hand, during S2, we found that temporal and color information was encoded in a task-relevant manner. Taken together, our results suggest that task goals strongly modulate decision-related information in EEG activity.","link":"/opendata/bueno-cravo-2021/"},{"title":"Burleigh et al. (2022)","text":"Many symptoms of anxiety and posttraumatic stress disorder are elicited by fearful mental imagery. Yet little is known about how visual imagery of conditioned stimuli (CSs) affects the acquisition of differential fear conditioning. Across three experiments with younger human adults (Experiment 1: n = 33, Experiment 2: n = 27, Experiment 3: n = 26), we observed that participants acquired differential fear conditioning to both viewed and imagined percepts serving as the CSs, as measured via self-reported fear and skin conductance responses. Additionally, this differential conditioning generalized across CS-percept modalities such that differential conditioning acquired in response to visual percepts generalized to the corresponding imagined percepts and vice versa. This is novel evidence that perceived and imagined stimuli engage learning processes in very similar ways and is consistent with the theory that mental imagery is depictive and recruits neural resources shared with visual perception. 
Our findings also provide new insight into the mechanisms of anxiety and related disorders.","link":"/opendata/burleigh-et-al-2022/"},{"title":"Burton et al. (2022)","text":"How people update their beliefs when faced with new information is integral to everyday life. A sizeable body of literature suggests that people’s belief updating is optimistically biased, such that their beliefs are updated more in response to good news than bad news. However, recent research demonstrates that findings previously interpreted as evidence of optimistic belief updating may be the result of flaws in experimental design, rather than motivated reasoning. In light of this controversy, we conduct three pre-registered variations of the standard belief updating paradigm (combined N = 300) in which we test for asymmetric belief updating with neutral, non-valenced stimuli using analytic approaches found in previous research. We find evidence of seemingly biased belief updating with neutral stimuli - results that cannot be attributed to a motivational, valence-based, optimism account - and further show that there is uninterpretable variability across samples and analytic techniques. Jointly, these results serve to highlight the methodological flaws in current optimistic belief updating research.","link":"/opendata/burton-et-al-2022/"},{"title":"Byrne et al. (2020a)","text":"Previous research suggests that depressive symptoms are associated with altered sensitivity to reward and punishment in various decision-making contexts. Building on this work, this study investigated whether depressed-affect symptoms influenced risky decision making under time pressure. The effect of depressed affect on risky choice was assessed in a reward (Experiments 1A and 1B) and loss (Experiment 2) context under low- and high-pressure conditions. Decisions involved learning to choose between a “sure” option and a “risky” option with identical expected values. 
In Experiment 1A, depressed affect predicted increased risky decision making under time pressure but did not affect decision making under low pressure. Experiment 1B replicated this effect. In contrast, in Experiment 2, depressed affect led to reduced risk taking in the low-pressure condition but did not affect decision making under high pressure. These results suggest that the pattern of risky decision making among those experiencing symptoms of depressed affect depends on performance pressure demands.","link":"/opendata/byrne-et-al-2020a/"},{"title":"Byrne et al. (2020b)","text":"Acute stress has been shown to influence reward sensitivity, feedback learning, and risk-taking during decision-making, primarily through activation of the hypothalamic pituitary axis (HPA). However, it is unclear how acute stress affects decision-making among choices that vary in their degree of uncertainty. To address this question, we conducted two experiments in which participants repeatedly chose between two options: a high-uncertainty option that offered highly variable rewards but was advantageous in the long-term, and a low-uncertainty option that offered smaller yet more consistent rewards. The Socially Evaluated Cold Pressor Task (SECPT) was utilized to induce acute stress. Participants in Experiment 1 (N = 114) were exposed to either the SECPT or a warm-water control condition and then completed the decision-making under uncertainty task. Compared to the control condition, those exposed to the acute stress manipulation chose the high-uncertainty option that provided highly variable but larger rewards over the option that provided stable, smaller rewards. Experiment 2 (N = 95) incorporated a salivary cortisol measure. Results replicated the behavioral findings in Experiment 1 and demonstrated that the acute stress manipulation increased salivary cortisol. 
This work suggests that moderate acute stress is associated with tolerance of outcome variability in contexts that depend on learning to maximize rewards.","link":"/opendata/byrne-et-al-2020b/"},{"title":"Byrne et al. (2023)","text":"Eye movement data has been extensively utilized by researchers interested in studying decision-making within the strategic setting of economic games. In this paper, we demonstrate that both deep learning and support vector machine classification methods are able to accurately identify participants’ decision strategies before they commit to action while playing games. Our approach focuses on creating scanpath images that best capture the dynamics of a participant’s gaze behaviour in a way that is meaningful for predictions to the machine learning models. Our results demonstrate a classification accuracy 18 percentage points higher than that of a baseline logistic regression model, which is traditionally used to analyse gaze data recorded during economic games. In a broader context, we aim to illustrate the potential for eye-tracking data to create information asymmetries in strategic environments in favour of those who collect and process the data. These information asymmetries could become especially relevant as eye-tracking is expected to become more widespread in user applications, with the seemingly imminent mass adoption of virtual reality systems and the development of devices with the ability to record eye movement outside of a laboratory setting.","link":"/opendata/byrne-et-al-2023/"},{"title":"Callahan-Flintoft et al. (2020)","text":"Much of our world changes smoothly in time, yet the allocation of attention is typically studied with sudden changes - transients. A sizeable lag in selecting feature information is seen when stimuli change smoothly. Yet this lag is not seen with temporally uncorrelated rapid serial visual presentation (RSVP) stimuli. 
This suggests that temporal autocorrelation of a feature paradoxically increases the latency at which information is sampled. To test this, participants are asked to report the color of a disk when a cue was presented. There is an increase in selection latency when the disk’s color changed smoothly compared to randomly. This increase is due to the smooth color change presented after the cue rather than extrapolated predictions based on the color changes presented before. These results support an attentional drag theory, whereby attentional engagement is prolonged when features change smoothly. A computational model provides insights into the potential underlying neural mechanisms.","link":"/opendata/callahan-flintoft-et-al-2020/"},{"title":"Callaway et al. (2022)","text":"Making good decisions requires thinking ahead, but the huge number of actions and outcomes one could consider makes exhaustive planning infeasible for computationally constrained agents, such as humans. How people are nevertheless able to solve novel problems when their actions have long-reaching consequences is thus a long-standing question in cognitive science. To address this question, we propose a model of resource-constrained planning that allows us to derive optimal planning strategies. We find that previously proposed heuristics such as best-first search are near optimal under some circumstances but not others. In a mouse-tracking paradigm, we show that people adapt their planning strategies accordingly, planning in a manner that is broadly consistent with the optimal model but not with any single heuristic model. We also find systematic deviations from the optimal model that might result from additional cognitive constraints that are yet to be uncovered.","link":"/opendata/callaway-et-al-2022/"},{"title":"Canale et al. (2022)","text":"Previous research indicates that the invigorating effect of stress sensitivity on gambling behavior might be moderated by individual differences. 
The current preregistered study tested whether gender and negative urgency (i.e. an emotion-related impulsivity trait) moderate the relationship between perceived stress and laboratory gambling following experimentally induced sadness. One hundred twenty college students were randomly assigned to a sadness versus a control condition before completing a laboratory gambling task. Although the distribution of the main study variables forced us to slightly deviate from the preregistered data analysis plan, we were able to show that heightened stress sensitivity affects gambling behavior and that this effect differs by gender (but not in terms of negative urgency) under conditions of sadness versus neutral mood. Men with high stress sensitivity gambled more money and more frequently selected the riskier betting option in the sadness condition, whereas women with heightened stress sensitivity displayed the same pattern in the neutral condition. Our study is relevant from a methodological standpoint and answers recent calls for endorsing open-science practices in gambling research. Findings also suggest that more research into female gambling is warranted and that emotion-regulation skills should be a central component of problem gambling prevention.","link":"/opendata/canale-et-al-2022/"},{"title":"Cao & Tsetsos (2022)","text":"Decisions between two economic goods can be swayed by a third unavailable ‘decoy’ alternative, which does not compete for choice, notoriously violating the principles of rational choice theory. 
Although decoy effects typically depend on the decoy’s position in a multiattribute choice space, recent studies using risky prospects (i.e., varying in reward and probability) reported a novel ‘positive’ decoy effect operating on a single value dimension: the higher the ‘expected value’ (EV) of an unavailable (distractor) prospect was, the easier the discrimination between two available target prospects became, especially when their expected-value difference was small. Here, we show that this unidimensional distractor effect affords alternative interpretations: it occurred because the distractor’s EV covaried positively with the subjective utility difference between the two targets. Looking beyond this covariation, we report a modest ‘negative’ distractor effect operating on subjective utility, as well as classic multiattribute decoy effects. A normatively meaningful model (selective integration), in which subjective utilities are shaped by intra-attribute information distortion, reproduces the multiattribute decoy effects, and as an epiphenomenon, the negative unidimensional distractor effect. These findings clarify the modulatory role of an unavailable distracting option, shedding fresh light on the mechanisms that govern multiattribute decisions.","link":"/opendata/cao-tsetsos-2022/"},{"title":"Caron et al. (2020)","text":"Rosenbaum, Mama, and Algom (2017) reported that participants who completed the Stroop task (i.e., name the hue of a color word when the hue and word meaning are congruent or incongruent) showed a smaller Stroop effect (i.e., the difference in response times between congruent and incongruent trials) when they performed the task standing than when sitting. We report five attempted replications (analyzed sample sizes: N = 108, N = 108, N = 98, N = 78, and N = 51, respectively) of Rosenbaum et al.’s findings, which were conducted in two institutions. 
All experiments yielded the standard Stroop effect, but we failed to detect any consistent effect of posture (sitting vs. standing) on the magnitude of the Stroop effect. Taken together, the results suggest that posture does not influence the magnitude of the Stroop effect to the extent that was previously suggested.","link":"/opendata/caron-et-al-2020/"},{"title":"Carsten et al. (2023)","text":"Previous studies suggest that humans are capable of coregulating the speed of decisions and movements if promoted by task incentives. It is unclear however whether such behavior is inherent to the process of translating decisional information into movements, beyond posing a valid strategy in some task contexts. Therefore, in a behavioral online study we imposed time constraints to either decision- or movement phases of a sensorimotor task, ensuring that coregulating decisions and movements was not promoted by task incentives. We found that participants indeed moved faster when fast decisions were promoted and decided faster when subsequent finger tapping movements had to be executed swiftly. These results were further supported by drift diffusion modelling and inspection of psychophysical kernels: Sensorimotor delays related to initiating the finger tapping sequence were shorter in fast-decision as compared to slow-decision blocks. Likewise, the decisional speed-accuracy tradeoff shifted in favor of faster decisions in fast-tapping as compared to slow-tapping blocks. These findings suggest that decisions not only impact movement characteristics, but that properties of movement impact the time taken to decide. We interpret these behavioral results in the context of embodied decision-making, whereby shared neural mechanisms may modulate decisions and movements in a joint fashion.","link":"/opendata/carsten-et-al-2023/"},{"title":"Castellotti et al. 
(2022)","text":"The passing of time can be precisely measured by using clocks, whereas humans’ estimation of temporal durations is influenced by many physical, cognitive and contextual factors, which distort our internal clock. Although it has been shown that temporal estimation accuracy is impaired by non-temporal tasks performed at the same time, no studies have investigated how concurrent cognitive and motor tasks interfere with time estimation. Moreover, most experiments only tested time intervals of a few seconds. In the present study, participants were asked to perform cognitive tasks of different difficulties (look, read, solve simple and hard mathematical operations) and estimate durations of up to two minutes, while walking or sitting. The results show that if observers pay attention only to time without performing any other mental task, they tend to overestimate the durations. Meanwhile, the more difficult the concurrent task, the more they tend to underestimate the time. These distortions are even more pronounced when observers are walking. Estimation biases and uncertainties change differently with durations depending on the task, consistent with a fixed relative uncertainty. Our findings show that cognitive and motor systems interact non-linearly and interfere with time perception processes, suggesting that they all compete for the same resources.","link":"/opendata/castellotti-et-al-2022/"},{"title":"Castro-Rodrigues et al. (2022)","text":"Explicit information obtained through instruction profoundly shapes human choice behaviour. However, this has been studied in computationally simple tasks, and it is unknown how model-based and model-free systems, respectively generating goal-directed and habitual actions, are affected by the absence or presence of instructions. 
We assessed behaviour in a variant of a computationally more complex decision-making task, before and after providing information about task structure, both in healthy volunteers and in individuals suffering from obsessive-compulsive or other disorders. Initial behaviour was model-free, with rewards directly reinforcing preceding actions. Model-based control, employing predictions of states resulting from each action, emerged with experience in a minority of participants, and less in those with obsessive-compulsive disorder. Providing task structure information strongly increased model-based control, similarly across all groups. Thus, in humans, explicit task structural knowledge is a primary determinant of model-based reinforcement learning and is most readily acquired from instruction rather than experience.","link":"/opendata/castro-rodrigues-et-al-2022/"},{"title":"Cataldo et al. (2022)","text":"Healthy adults show better memory for low-arousal positive versus negative stimuli, but depression compromises this positive memory advantage. Existing studies are limited by small samples or analyses that provide limited insight into underlying mechanisms. Our study addresses these concerns by using a multistaged analysis, including diffusion modeling, to identify precise psychological processes underlying the positive memory advantage and its disruption by depression in a large sample. A total of 1,358 participants completed the BDI-II (Beck et al., 1996) and an emotional memory task. At encoding, participants judged whether positive and negative words were positive or self-descriptive. After a free recall test, participants viewed an equal mix of studied and unstudied words and judged whether each was old or new; if judged old, they indicated whether the study source was a valence or self-reference judgment. We replicate the positive memory advantage and its decrease in depression in recall, recognition, and source accuracy. 
The hierarchical drift diffusion model (HDDM; Wiecki et al., 2013) revealed that higher BDI-II scores are associated with more efficient evidence accumulation for negative words in the recognition and source memory tasks. By contrast, evidence accumulation for positive words is unaffected by BDI-II during the recognition task but becomes less efficient with increased BDI-II during the source memory task. In conclusion, in a well-controlled design with a large sample, we find that depression reduces the positive memory advantage. HDDM analyses suggest that this reflects differential effects of depression on the speed of evidence accumulation during the retrieval of positive versus negative memories.","link":"/opendata/cataldo-et-al-2022/"},{"title":"Caulfield et al. (2023)","text":"Worry is a repetitive, negative thought process that is widely experienced as difficult to control. Despite the adverse effects of uncontrollable worry on academic and other role functioning, the mechanisms by which worry becomes uncontrollable remain poorly understood. Previous experimental work has historically emphasized valence (negative versus positive or neutral). However, contemporary cognitive neuroscience also distinguishes between internally-directed attention (e.g., to thoughts) and externally-directed attention (e.g., to perceptual stimuli). To date, no studies have experimentally examined potential dissociable contributions of valence versus attentional direction to impaired disengagement from worry. In a 2 (negative or neutral valence) x 2 (internal or external attention) between-subjects, experimental and prospective design (https://osf.io/vdyfn/), participants (N = 200) completed alternating blocks of a randomly-assigned attention manipulation and validated sustained attention task. Participants also rated trait worry and distress during the experimental session (T1) and a naturalistic stressor (the week before finals; T2). 
There was a main effect, such that internally-directed attention impaired sustained attention (increased commission errors). Worry (internal x negative) also impaired sustained attention (faster and less accurate responding) in planned group contrasts. Trait worry did not moderate these effects. Sustained attention at T1 did not predict distress or worry during the T2 stressor. These findings augment the literature on the attentional consequences of worry and replicate and extend previous findings of altered speed-accuracy tradeoffs following experimentally-induced worry. We also find evidence for impaired disengagement from internally-directed (versus externally-directed) attention, which may help to explain impaired disengagement from related forms of perseverative thought (e.g., rumination).","link":"/opendata/caulfield-et-al-2023/"},{"title":"Cavallaro & Hawkins (2022)","text":"We investigated whether cognitive process models commonly studied in experimental psychology provide a deeper explanation of preferential choices than the descriptive random utility models commonly studied in the applied choice literature, with a specific focus on choices made under time pressure. In two preferential choice scenarios we used two standard manipulations of time pressure to assess whether changes in decision time affect subjective valuations of the features of preferential options, and whether the answer to this question depends on the theoretical lens used to understand the data. Across four experiments, we found that choices between preferential options and the time taken to make them varied as a function of time pressure, reinforcing and generalising findings from lower-level perceptual decisions to higher-level applied choices. Critically, theoretical analyses from the two traditions led to different psychological conclusions about how people adapted to time pressure. 
The random utility analyses suggested that time-pressure induced changes to choices were the result of changes in subjective valuations for the features of preferential options. However, the cognitive process analyses attributed time-pressure induced changes to choices to differential information accumulation; subjective valuations remained stable across contexts, again reinforcing decades of findings from the perceptual decision literature. We argue that cognitive process models provide mechanistic explanations of the processes underlying decisions for preferential options. Furthermore, conventional descriptive models of choice in the applied literature may misattribute variability in choices to the incorrect latent cause, which has implications for out-of-sample prediction in the marketplace.","link":"/opendata/cavallaro-hawkins-2022/"},{"title":"Cavanagh et al. (2019)","text":"In this report, we provide the first evidence that mood and anxiety dimensions are associated with unique aspects of EEG responses to reward and punishment, respectively. We reanalyzed data from our prior publication of a categorical depiction of depression to address more sophisticated dimensional hypotheses. Highly symptomatic depressed individuals (N = 46) completed a probabilistic learning task with concurrent EEG. Measures of anxiety and depression symptomatology were significantly correlated with each other; however, only anxiety predicted better avoidance learning due to a tighter coupling of negative prediction error signaling with punishment-specific EEG features. In contrast, depression predicted a smaller reward-related EEG feature, but this did not affect prediction error coupling or the ability to learn from reward. We suggest that this reward-related alteration reflects motivational or hedonic aspects of reward and not a diminishment in the ability to represent the information content of reinforcements. 
These findings compel further research into the domain-specific neural systems underlying dimensional aspects of psychiatric disease.","link":"/opendata/cavanagh-et-al-2019/"},{"title":"Cavanagh (2021)","text":"Healthy control college students. 23 subjects completed the 3-armed bandit task with oscillating probabilities. For example, the ‘blue’ stim would slowly move from 20% reinforcing to 90% then back to 20% over many trials. The other ‘red’ and ‘green’ stims would move similarly, but in a different phase. See Fig 1 of the paper. This makes the task great for investigating reward processing & reward prediction error in the service of novel task set generation.","link":"/opendata/cavanagh-2021/"},{"title":"Cavanaugh et al. (2017)","text":"Individual differences in dopaminergic tone underlie tendencies to learn from reward versus punishment. These effects are well documented in Parkinson’s patients, who vacillate between low and high tonic dopaminergic states as a function of medication. Yet very few studies have investigated the influence of higher-level cognitive states known to affect downstream dopaminergic learning in Parkinson’s patients. A dopamine-dependent cognitive influence over learning would provide a candidate mechanism for declining cognitive integrity and motivation in Parkinson’s patients. In this report we tested the influence of two high-level cognitive states (cost of conflict and value of volition) that have recently been shown to cause predictable learning biases in healthy young adults as a function of dopamine receptor subtype and dopaminergic challenge. It was hypothesized that Parkinson’s patients OFF medication would have an enhanced cost of conflict and a decreased value of volition, and that these effects would be remediated or reversed ON medication. Participants included N = 28 Parkinson’s disease patients who were each tested ON and OFF dopaminergic medication and 28 age- and sex-matched controls. 
The expected cost of conflict effect was observed in Parkinson’s patients OFF versus ON medication, but only in those who were more recently diagnosed (<5 years). We found an unexpected effect in the value of volition task: medication compromised the ability to learn from difficult a-volitional (instructed) choices. This novel finding was also enhanced in recently diagnosed patients. The difference in learning biases ON versus OFF medication between these two tasks was strongly correlated, bolstering the idea that they tapped into a common underlying imbalance in dopaminergic tone that is particularly variable in earlier-stage Parkinsonism. The finding that these decision biases are specific to earlier but not later stage disease may offer a chance for future studies to quantify phenotypic expressions of idiosyncratic disease progression.","link":"/opendata/cavanaugh-et-al-2017/"},{"title":"Cavanagh & Frank (2023)","text":"Data from N=112 participants who completed the probabilistic selection task.","link":"/opendata/cavanagh-frank-2023/"},{"title":"Cesanek et al. (2023)","text":"Nearly all tasks of daily life involve skilled object manipulation, and successful manipulation requires knowledge of object dynamics. We recently developed a motor learning paradigm that reveals the categorical organization of motor memories of object dynamics. When participants repeatedly lift a constant-density “family” of cylindrical objects that vary in size, and then an outlier object with a greater density is interleaved into the sequence of lifts, they often fail to learn the weight of the outlier, persistently treating it as a family member despite repeated errors. Here we examine eight factors (Similarity, Cardinality, Frequency, History, Structure, Stochasticity, Persistence, and Time Pressure) that could influence the formation and retrieval of category representations in the outlier paradigm. 
In our web-based task, participants (N = 240) anticipated object weights by stretching a virtual spring attached to the top of each object. Using Bayesian t-tests, we analyze the relative impact of each manipulated factor on categorical encoding (strengthen, weaken, or no effect). Our results suggest that category representations of object weight are automatic, rigid, and linear and, as a consequence, the key determinant of whether an outlier is encoded as a member of the family is its discriminability from the family members.","link":"/opendata/cesanek-et-al-2023/"},{"title":"Chakroun et al. (2023)","text":"Dopamine fundamentally contributes to reinforcement learning, but recent accounts also suggest a contribution to specific action selection mechanisms and the regulation of response vigour. Here, we examine dopaminergic mechanisms underlying human reinforcement learning and action selection via a combined pharmacological neuroimaging approach in male human volunteers (n = 31, within-subjects; Placebo, 150 mg of the dopamine precursor L-dopa, 2 mg of the D2 receptor antagonist Haloperidol). We found little credible evidence for previously reported beneficial effects of L-dopa vs. Haloperidol on learning from gains and altered neural prediction error signals, which may be partly due to differences in experimental design and/or drug dosages. Reinforcement learning drift diffusion models account for learning-related changes in accuracy and response times, and reveal consistent decision threshold reductions under both drugs, in line with the idea that lower dosages of D2 receptor antagonists increase striatal DA release via an autoreceptor-mediated feedback mechanism. These results are in line with the idea that dopamine regulates decision thresholds during reinforcement learning, and may help to bridge action selection and response vigor accounts of dopamine.","link":"/opendata/chakroun-et-al-2023/"},{"title":"Chambon et al. 
(2020)","text":"The valence of new information influences learning rates in humans: good news tends to receive more weight than bad news. We investigated this learning bias in four experiments, by systematically manipulating the source of required action (free versus forced choices), outcome contingencies (low versus high reward) and motor requirements (go versus no-go choices). Analysis of model-estimated learning rates showed that the confirmation bias in learning rates was specific to free choices, but was independent of outcome contingencies. The bias was also unaffected by the motor requirements, thus suggesting that it operates in the representational space of decisions, rather than motoric actions. Finally, model simulations revealed that learning rates estimated from the choice-confirmation model had the effect of maximizing performance across low- and high-reward environments. We therefore suggest that choice-confirmation bias may be adaptive for efficient learning of action-outcome contingencies, above and beyond fostering person-level dispositions such as self-esteem.","link":"/opendata/chambon-et-al-2020/"},{"title":"Charlton et al. (2023)","text":"To interpret the sensory environment, the brain combines ambiguous sensory measurements with knowledge that reflects context-specific prior experience. But environmental contexts can change abruptly and unpredictably, resulting in uncertainty about the current context. Here we address two questions: how should context-specific prior knowledge optimally guide the interpretation of sensory stimuli in changing environments, and do human decision-making strategies resemble this optimum? We probe these questions with a task in which subjects report the orientation of ambiguous visual stimuli that were drawn from three dynamically switching distributions, representing different environmental contexts. 
We derive predictions for an ideal Bayesian observer that leverages knowledge about the statistical structure of the task to maximize decision accuracy, including knowledge about the dynamics of the environment. We show that its decisions are biased by the dynamically changing task context. The magnitude of this decision bias depends on the observer’s continually evolving belief about the current context. The model therefore not only predicts that decision bias will grow as the context is indicated more reliably, but also as the stability of the environment increases, and as the number of trials since the last context switch grows. Analysis of human choice data validates all three predictions, suggesting that the brain leverages knowledge of the statistical structure of environmental change when interpreting ambiguous sensory signals.","link":"/opendata/charlton-et-al-2023/"},{"title":"Charpentier et al. (2022)","text":"Seeking information when anxious may help reduce the aversive feeling of uncertainty and guide decision-making. If information is negative or confusing, however, this may increase anxiety further. Information gathered under anxiety can thus be beneficial and/or damaging. Here, we examine whether anxiety leads to a general increase in information-seeking, or rather to changes in the type of information and/or situations in which it is sought. In two controlled laboratory studies, we show that both trait anxiety and induced anxiety lead to a selective alteration in information-seeking. In particular, anxiety did not enhance the general tendency to seek information, nor did it alter the valence of the information gathered. Rather, anxiety amplified the tendency to seek information more in response to large changes in the environment. This was true even when the cause of the anxiety was not directly related to the information sought. 
As anxious individuals have been shown to have problems learning in changing environments, greater information-seeking in such environments may be an adaptive compensatory mechanism.","link":"/opendata/charpentier-et-al-2022/"},{"title":"Chaumon et al. (2022)","text":"The COVID-19 pandemic and associated lockdowns triggered worldwide changes in the daily routines of human experience. The Blursday database provides repeated measures of subjective time and related processes from participants in nine countries tested on 14 questionnaires and 15 behavioural tasks during the COVID-19 pandemic. A total of 2,840 participants completed at least one task, and 439 participants completed all tasks in the first session. The database and all data collection tools are accessible to researchers for studying the effects of social isolation on temporal information processing, time perspective, decision-making, sleep, metacognition, attention, memory, self-perception and mindfulness. Blursday includes quantitative statistics such as sleep patterns, personality traits, psychological well-being and lockdown indices. The database provides quantitative insights on the effects of lockdown (stringency and mobility) and subjective confinement on time perception (duration, passage of time and temporal distances). Perceived isolation affects time perception, and we report an inter-individual central tendency effect in retrospective duration estimation.","link":"/opendata/chaumon-et-al-2022/"},{"title":"Chen et al. (2023)","text":"Failures to obtain rewards influence both the direction (choosing what to do) and intensity (response vigor) of subsequent motivated behavior. For instance, in risky decision-making, losses can induce faster responses (‘intensity’) and sometimes increase risk-taking (‘direction’), which may lead to detrimental consequences. Pauses might reduce these motivational influences of prior outcomes. 
To examine this question, participants (from the general population on Prolific.co in 2022 and 2023) alternated between a guess game, in which they won or lost money, and a choice game, in which they chose between two risky options. The pause between a guess and a choice game was either short (0 or 300 milliseconds) or long (3000 milliseconds). In four experiments, prior outcomes consistently influenced decision speed, but not people’s choices. In contrast, pauses increased decision quality (choosing the option with a higher expected value) without substantially reducing decision speed. Pauses may improve decision quality by increasing attention to relevant information, as their effect was absent when the overall task attention was increased (Experiment 3). These findings have implications for both responsible gambling and risky decision-making research. Future work can examine the underlying computational and cognitive processes, and the generalizability of these findings to other contexts and populations.","link":"/opendata/chen-et-al-2023/"},{"title":"Ciranka et al. (2022a)","text":"Humans and other animals are capable of inferring never-experienced relations (for example, A > C) from other relational observations (for example, A > B and B > C). The processes behind such transitive inference are subject to intense research. Here we demonstrate a new aspect of relational learning, building on previous evidence that transitive inference can be accomplished through simple reinforcement learning mechanisms. We show in simulations that inference of novel relations benefits from an asymmetric learning policy, where observers update only their belief about the winner (or loser) in a pair. Across four experiments (n = 145), we find substantial empirical support for such asymmetries in inferential learning. 
The learning policy favoured by our simulations and experiments gives rise to a compression of values that is routinely observed in psychophysics and behavioural economics. In other words, a seemingly biased learning strategy that yields well-known cognitive distortions can be beneficial for transitive inferential judgements.","link":"/opendata/ciranka-et-al-2022a/"},{"title":"Ciranka & van den Bos (2022)","text":"Adolescents are known for their propensity to take risks, which may be especially strong in social contexts. People are known to use social information more when uncertain of how to decide. How feelings of uncertainty relate to the development of social susceptibility and risky choice across adolescence is unknown. To understand the effects of uncertainty on social influence, we introduce a novel task that measures risky choice under different levels of uncertainty, either with or without observing someone else’s advice. Testing 161 adolescents and young adults (10-26 years of age), we show that risky choice, social information use and subjective uncertainty decrease across development. We also fit a model to participants’ choices in which social information is used to reduce uncertainty according to Bayesian principles. This model revealed that age-related changes in uncertainty fully accounted for age-related changes in social information use. Our results imply that uncertainty is a key mechanism in adolescents’ social susceptibility in risky behaviour.","link":"/opendata/ciranka-van-den-bos-2022/"},{"title":"Clarke et al. (2018)","text":"A striking range of individual differences has recently been reported in three different visual search tasks. These differences in performance can be attributed to strategy, that is, the efficiency with which participants control their search to complete the task quickly and accurately. Here we ask if an individual’s strategy and performance in one search task is correlated with how they perform in the other two. 
We tested 64 observers in the three tasks mentioned above over two sessions. Even though the test-retest reliability of the tasks is high, an observer’s performance and strategy in one task did not reliably predict their behaviour in the other two. These results suggest search strategies are stable over time, but context-specific. To understand visual search we therefore need to account not only for differences between individuals, but also how individuals interact with the search task and context. These context-specific but stable individual differences in strategy can account for a substantial proportion of variability in search performance.","link":"/opendata/clarke-et-al-2018/"},{"title":"Clarke et al. (2022)","text":"Foraging entails finding multiple targets sequentially. In humans and other animals, a key observation has been a tendency to forage in ‘runs’ of the same target type. This tendency is context-sensitive, and in humans, it is strongest when the targets are difficult to distinguish from the distractors. Many important questions have yet to be addressed about this and other tendencies in human foraging, and a key limitation is a lack of precise measures of foraging behaviour. The standard measures tend to be run statistics, such as the maximum run length and the number of runs. But these measures are not only interdependent, they are also constrained by the number and distribution of targets, making it difficult to make inferences about the effects of these aspects of the environment on foraging. Moreover, run statistics are underspecified about the underlying cognitive processes determining foraging behaviour. We present an alternative approach: modelling foraging as a procedure of generative sampling without replacement, implemented in a Bayesian multilevel model. 
This allows us to break behaviour down into a number of biases that influence target selection, such as the proximity of targets and a bias for selecting targets in runs, in a way that is not dependent on the number of targets present. Our method thereby facilitates direct comparison of specific foraging tendencies between search environments that differ in theoretically important dimensions. We demonstrate the use of our model with simulation examples and re-analysis of existing data. We believe our model will provide deeper insights into visual foraging and provide a foundation for further modelling work in this area.","link":"/opendata/clarke-et-al-2022/"},{"title":"Clay et al. (2022)","text":"Current models of mental effort in psychology, behavioral economics, and cognitive neuroscience typically suggest that exerting cognitive effort is aversive, and people avoid it whenever possible. The aim of this research was to challenge this view and show that people can learn to value and seek effort intrinsically. Our experiments tested the hypothesis that effort-contingent reward in a working-memory task will induce a preference for more demanding math tasks in a transfer phase, even though participants were aware that they would no longer receive any reward for task performance. In laboratory Experiment 1 (n = 121), we made reward directly contingent on mobilized cognitive effort as assessed via cardiovascular measures (β-adrenergic sympathetic activity) during the training task. Experiments 2a to 2e (n = 1,457) were conducted online to examine whether the effects of effort-contingent reward on subsequent demand seeking replicate and generalize to community samples. Taken together, the studies yielded reliable evidence that effort-contingent reward increased participants’ demand seeking and preference for the exertion of cognitive effort on the transfer task. Our findings provide evidence that people can learn to assign positive value to mental effort. 
The results challenge currently dominant theories of mental effort and provide evidence and an explanation for the positive effects of environments appreciating effort and individual growth on people’s evaluation of effort and their willingness to mobilize effort and approach challenging tasks.","link":"/opendata/clay-et-al-2022/"},{"title":"Cochrane et al. (2023)","text":"Evidence accumulation models have enabled strong advances in our understanding of decision-making, yet their application to examining learning has not been common. Using data from participants completing a dynamic random dot-motion direction discrimination task across four days, we characterized alterations in two components of perceptual decision-making (Drift Diffusion Model drift rate and response boundary). Continuous-time learning models were applied to characterize trajectories of performance change, with different models allowing for varying dynamics. The best-fitting model included drift rate changing as a continuous, exponential function of cumulative trial number. In contrast, response boundary changed within each daily session, but in an independent manner across daily sessions. Our results highlight two different processes underlying the pattern of behavior observed across the entire learning trajectory, one involving a continuous tuning of perceptual sensitivity, and another more variable process describing participants’ threshold of when enough evidence is present to act.","link":"/opendata/cochrane-et-al-2023/"},{"title":"Cohen et al. (2020)","text":"Beliefs about the controllability of positive or negative events in the environment can shape learning throughout the lifespan. Previous research has shown that adults’ learning is modulated by beliefs about the causal structure of the environment such that they update their value estimates to a lesser extent when the outcomes can be attributed to hidden causes. 
This study examined whether external causes similarly influenced outcome attributions and learning across development. Ninety participants, ages 7 to 25 years, completed a reinforcement learning task in which they chose between two options with fixed reward probabilities. Choices were made in three distinct environments in which different hidden agents occasionally intervened to generate positive, negative, or random outcomes. Participants’ beliefs about hidden-agent intervention aligned with the true probabilities of the positive, negative, or random outcome manipulation in each of the three environments. Computational modeling of the learning data revealed that while the choices made by both adults (ages 18-25) and adolescents (ages 13-17) were best fit by Bayesian reinforcement learning models that incorporate beliefs about hidden-agent intervention, those of children (ages 7-12) were best fit by a one learning rate model that updates value estimates based on choice outcomes alone. Together, these results suggest that while children demonstrate explicit awareness of the causal structure of the task environment, they do not implicitly use beliefs about the causal structure of the environment to guide reinforcement learning in the same manner as adolescents and adults.","link":"/opendata/cohen-et-al-2020/"},{"title":"Cohen et al. (2023)","text":"Charitable giving involves a complex economic and social decision because the giver expends resources for goods or services they will never receive. Although psychologists have identified numerous factors that influence charitable giving, there currently exists no unifying computational model of charitable choice. Here, we submit one such model, based within the strictures of Psychological Value Theory (PVT). In four experiments, we assess whether charitable giving is driven by the perceived Psychological Value of the recipient.
Across all four experiments, we simultaneously predict response choice and response time with high accuracy. In a fifth experiment, we show that PVT predicts charitable giving more accurately than an account based on competence and warmth. PVT accurately predicts which charity a respondent will choose to donate to and separately, whether a respondent will choose to donate at all. PVT models the cognitive processes underlying charitable donations and it provides a computational framework for integrating known influences on charitable giving. For example, we show that in-group preference influences charitable giving by changing the Psychological Values of the options, rather than by bringing about a response bias toward the in-group.","link":"/opendata/cohen-et-al-2023/"},{"title":"Collins (2018)","text":"Learning to make rewarding choices in response to stimuli depends on a slow but steady process, reinforcement learning, and a fast and flexible, but capacity-limited process, working memory. Using both systems in parallel, with their contributions weighted based on performance, should allow us to leverage the best of each system: rapid early learning, supplemented by long-term robust acquisition. However, this assumes that using one process does not interfere with the other. We use computational modeling to investigate the interactions between the two processes in a behavioral experiment and show that working memory interferes with reinforcement learning. Previous research showed that neural representations of reward prediction errors, a key marker of reinforcement learning, were blunted when working memory was used for learning. We thus predicted that arbitrating in favor of working memory to learn faster in simple problems would weaken the reinforcement learning process. We tested this by measuring performance in a delayed testing phase where the use of working memory was impossible, and thus participant choices depended on reinforcement learning. 
Counterintuitively, but confirming our predictions, we observed that associations learned most easily were retained worse than associations learned slower: Using working memory to learn quickly came at the cost of long-term retention. Computational modeling confirmed that this could only be accounted for by working memory interference in reinforcement learning computations. These results further our understanding of how multiple systems contribute in parallel to human learning and may have important applications for education and computational psychiatry.","link":"/opendata/collins-2018/"},{"title":"Constant et al. (2023)","text":"According to Bayesian models, both decisions and confidence are based on the same precision-weighted integration of prior expectations (“priors”) and incoming information (“likelihoods”). This assumes that priors are integrated optimally and equally in decisions and confidence, which has not been tested. In three experiments, we quantify how priors inform decisions and confidence. With a dual-decision task we create pairs of conditions that are matched in posterior information, but differ on whether the prior or likelihood is more informative. We find that priors are underweighted in discrimination decisions, but are less underweighted in confidence about those decisions, and this is not due to differences in processing time. The same patterns remain with exogenous probabilistic cues as priors. With a Bayesian model we quantify the weighting parameters for the prior at both levels, and find converging evidence that priors are more optimally used in explicit confidence, even when underused in decisions.","link":"/opendata/constant-et-al-2023/"},{"title":"Conway et al. (2020)","text":"A large and consistent research literature demonstrates the superiority of dimensional models of mental disorder. 
Factor analytic research has mapped the latent dimensions underlying separate sets of mental disorders (e.g., emotional disorders), but a common framework, unencumbered by arbitrary historical boundaries between disorder groups, requires additional research. Using empirically derived measures of three key domains of psychopathological variation, the overarching goal of the current study was to explicate dimensions connecting internalizing, externalizing, and personality disorders. Participants included 1,144 racially diverse undergraduates. Exploratory structural equation modeling analyses revealed seven latent dimensions: core internalizing, core externalizing, antagonism, impulsivity, dutifulness, detachment, and suspiciousness. This meta-structure reflects a more comprehensive model of the architecture of mental disorders than accounts derived from less inclusive assessment batteries. Future empirical work is needed to evaluate the utility of this structural model in etiological research, assessment, and treatment arenas.","link":"/opendata/conway-et-al-2020/"},{"title":"Copeland et al. (2023a)","text":"A considerable number of people successfully give up tobacco smoking. In nicotine-dependent individuals, tobacco choice is determined by greater expected drug value; however, less is known about the underlying mechanisms through which people quit smoking. This study aimed to explore whether computational parameters of value-based decision-making characterise recovery from nicotine addiction. Using a pre-registered, between-subject design, current daily smokers (n = 51) and ex-smokers who used to smoke daily (n = 51) were recruited from the local community. Participants completed a two-alternative forced choice task in which they chose between either two tobacco-related images (in one block) or tobacco-unrelated images (in a different block).
During each trial, participants pressed a computer key to select the image they rated most positively during a previous task block. To estimate evidence accumulation (EA) processes and response thresholds during the different blocks, a drift-diffusion model was fitted to the reaction time and error data. Ex-smokers had significantly higher response thresholds when making tobacco-related decisions (p = .01, d = .45) compared to current smokers, although there were no significant group differences during tobacco-unrelated decisions. Furthermore, there were no significant group differences in EA rates when making tobacco or tobacco-unrelated decisions. Greater cautiousness when making value-based decisions about tobacco-related cues characterised recovery from nicotine addiction.","link":"/opendata/copeland-et-al-2023a/"},{"title":"Copeland et al. (2023b)","text":"Devaluation of alcohol leads to reductions in alcohol choice and consumption; however, the cognitive mechanisms that underpin this relationship are not well-understood. In this study we applied a computational model of value-based decision-making (VBDM) to decisions made about alcohol and alcohol-unrelated cues following experimental manipulation of alcohol value. Using a pre-registered within-subject design, thirty-six regular alcohol consumers (≥14 UK units per week) completed a two-alternative forced choice task where they chose between two alcohol images (in one block) or two soft drink images (in a different block) after watching videos that emphasised the positive (alcohol value), and separately, the negative (alcohol devalue) consequences of alcohol. On each block, participants pressed a key to select the image depicting the drink they would rather consume. A drift-diffusion model (DDM) was fitted to reaction time and choice data to estimate evidence accumulation (EA) processes and response thresholds during the different blocks in each experimental condition. 
In the alcohol devalue condition, soft drink EA rates were significantly increased compared to alcohol EA rates (p = .04, d = .31), and compared to soft drink EA rates in the alcohol value condition (p = .01, d = .38). However, the experimental manipulation had no effect on EA rates for alcoholic drinks or on response thresholds in either priming condition. In line with behavioural economic models of addiction that emphasise the important role of alternative reinforcement, experimentally manipulating alcohol value alters the internal cognitive processes that precede soft drink choice.","link":"/opendata/copeland-et-al-2023b/"},{"title":"Corcoran et al. (2018)","text":"The ability of subjects to identify and reproduce brief temporal intervals is influenced by many factors whether they be stimulus-based, task-based or subject-based. The current study examines the role individual differences play in subsecond and suprasecond timing judgments, using the schizoptypy personality scale as a test-case approach for quantifying a broad range of individual differences. In two experiments, 129 (Experiment 1) and 141 (Experiment 2) subjects completed the O-LIFE personality questionnaire prior to performing a modified temporal-bisection task. In the bisection task, subjects responded to two identical instantiations of a luminance grating presented in a 4deg window, 4deg above fixation for 1.5 s (Experiment 1) or 3 s (Experiment 2). Subjects initiated presentation with a button-press, and released the button when they considered the stimulus to be half-way through (750/1500 ms). Subjects were then asked to indicate their ‘most accurate estimate’ of the two intervals. In this way we measure both performance on the task (a first-order measure) and the subjects’ knowledge of their performance (a second-order measure). In Experiment 1 the effect of grating-drift and feedback on performance was also examined. Experiment 2 focused on the static/no-feedback condition. 
For the group data, Experiment 1 showed a significant effect of presentation order in the baseline condition (no feedback), which disappeared when feedback was provided. Moving the stimulus had no effect on perceived duration. Experiment 2 showed no effect of stimulus presentation order. This elimination of the subsecond order-effect was at the expense of accuracy, as the mid-point of the suprasecond interval was generally underestimated. Response precision increased as a proportion of total duration, reducing the variance below that predicted by Weber’s law. This result is consistent with a breakdown of the scalar properties of time perception in the early suprasecond range. All subjects showed good insight into their own performance, though that insight did not necessarily correlate with the veridical bisection point. In terms of personality, we found evidence of significant differences in performance along the Unusual Experiences subscale, of most theoretical interest here, in the subsecond condition only. There was also significant correlation with Impulsive Nonconformity and Cognitive Disorganisation in the sub- and suprasecond conditions, respectively. Overall, these data support a partial dissociation of timing mechanisms at very short and slightly longer intervals. Further, these results suggest that perception is not the only critical mitigator of confidence in temporal experience, since individuals can effectively compensate for differences in perception at the level of metacognition in early suprasecond time. Though there are individual differences in performance, these are perhaps less than expected from previous reports and indicate an effective timing mechanism dealing with brief durations independent of the influence of significant personality trait differences.","link":"/opendata/corcoran-et-al-2018/"},{"title":"Corlazzoli et al. 
(2023)","text":"When presented with the choice to invest cognitive control in a task, several signals are monitored to reach a decision. Leading theoretical frameworks argued that the investment of cognitive control is determined by a cost-benefit computation. However, previous accounts remained silent on the potential role of subjective experience in this computation. We experience confidence when giving an answer, feel the excitement of an anticipated reward, and reflect on how much effort is required for successful task performance. Two questions are investigated in the present work: how objective task parameters give rise to subjective experience and whether these drive the decision to allocate cognitive control. To this end, we designed a task in which we manipulated three objective parameters in the same sequence of events (stimulus uncertainty, physical effort, and reward prediction error). We asked participants to report their subjective experiences associated with these manipulations: confidence, subjective physical effort, and reward satisfaction. At the end of each trial, participants indicated whether they wanted to repeat that trial on the next day. In response to the first question, we demonstrate that subjective ratings are reliable and selective. Subjective experiences closely mirrored their objective manipulations. In response to the second question, we demonstrate that subjective experiences provide a better fit for the decisions on future control investments. While objective task parameters are considered when deciding, they do not always produce the expected changes in subjective experience, and when dissociations occur, it is the subjective experience that better explains the decision to allocate cognitive control.","link":"/opendata/corlazzoli-et-al-2023/"},{"title":"Correa et al. (2018)","text":"The extent to which subjective awareness influences reward processing, and thereby affects future decisions, is currently largely unknown. 
In the present report, we investigated this question in a reinforcement learning framework, combining perceptual masking, computational modeling, and electroencephalographic recordings (human male and female participants). Our results indicate that degrading the visibility of the reward decreased, without completely obliterating, the ability of participants to learn from outcomes, but concurrently increased their tendency to repeat previous choices. We dissociated electrophysiological signatures evoked by the reward-based learning processes from those elicited by the reward-independent repetition of previous choices and showed that these neural activities were significantly modulated by reward visibility. Overall, this report sheds new light on the neural computations underlying reward-based learning and decision-making and highlights that awareness is beneficial for the trial-by-trial adjustment of decision-making strategies.","link":"/opendata/correa-et-al-2018/"},{"title":"Correa et al. (2023)","text":"Human behavior emerges from planning over elaborate decompositions of tasks into goals, subgoals, and low-level actions. How are these decompositions created and used? Here, we propose and evaluate a normative framework for task decomposition based on the simple idea that people decompose tasks to reduce the overall cost of planning while maintaining task performance. Analyzing 11,117 distinct graph-structured planning tasks, we find that our framework justifies several existing heuristics for task decomposition and makes predictions that can be distinguished from two alternative normative accounts. We report a behavioral study of task decomposition (N = 806) that uses 30 randomly sampled graphs, a larger and more diverse set than that of any previous behavioral study on this topic. 
We find that human responses are more consistent with our framework for task decomposition than alternative normative accounts and are most consistent with a heuristic, betweenness centrality, that is justified by our approach. Taken together, our results suggest the computational cost of planning is a key principle guiding the intelligent structuring of goal-directed behavior.","link":"/opendata/correa-et-al-2023/"},{"title":"Cortese et al. (2021)","text":"The human brain excels at constructing and using abstractions, such as rules, or concepts. Here, in two fMRI experiments, we demonstrate a mechanism of abstraction built upon the valuation of sensory features. Human volunteers learned novel association rules based on simple visual features. Reinforcement-learning algorithms revealed that, with learning, high-value abstract representations increasingly guided participant behaviour, resulting in better choices and higher subjective confidence. We also found that the brain area computing value signals - the ventromedial prefrontal cortex - prioritised and selected latent task elements during abstraction, both locally and through its connection to the visual cortex. Such a coding scheme predicts a causal role for valuation. Hence, in a second experiment, we used multivoxel neural reinforcement to test for the causality of feature valuation in the sensory cortex, as a mechanism of abstraction. Tagging the neural representation of a task feature with rewards evoked abstraction-based decisions. Together, these findings provide a novel interpretation of value as a goal-dependent, key factor in forging abstract representations.","link":"/opendata/cortese-et-al-2021/"},{"title":"Cotton et al. (2023)","text":"Recent events have led to a change in environments from in-person to remote work for many people. This change presents several issues for work, education, and research, particularly related to cognitive performance, as the remote environment may have more distractors.
An increase in distraction is one factor that may lead to increases in mind-wandering and disengagement with the task at hand, whether it is a virtual meeting, an online lecture or a psychological experiment. The present study investigated effects of mind-wandering and multitasking during working memory tasks in remote and in-person environments. In two experiments, participants completed a working memory task with varied cognitive load during a secondary task. After each working memory trial, we asked participants about their mind-wandering during that trial. Some participants completed the procedures in-person, while others completed the procedures remotely. While overall remote participants reported significantly more mind-wandering and poorer secondary task performance than the in-person participants, this pattern was not reflected in their working memory accuracy and both groups exhibited similar multitasking effects on memory performance. Additional analyses found that for remote participants the level of engagement with the task was a better predictor of working memory performance than either cognitive load or mind-wandering rates, but in a manner that did not indicate a tradeoff in resources between tasks. Together, these results demonstrate the importance of considering multiple metrics when assessing performance and illustrate that making assumptions about the equivalence of remote and in-person work is a risky proposition.","link":"/opendata/cotton-et-al-2023/"},{"title":"Crawford et al. (2022)","text":"The study investigated whether cognitive effort decision-making measured via a neuroeconomic paradigm that manipulated framing (gain vs.
loss outcomes) could predict daily life engagement in mentally demanding activities in both younger and older adults. Method: Younger and older adult participants (N=310) completed the Cognitive Effort Discounting paradigm (Cog-ED), under both gain and loss conditions, to provide an experimental index of cognitive effort costs for each participant in each framing condition. A subset of participants (N=230) also completed a seven-day Ecological Momentary Assessment (EMA) protocol measuring engagement in mentally demanding daily life activities. In a large, online sample, we replicated a robust increase in cognitive effort costs among older, relative to younger, adults. Additionally, costs were found to be reduced in the loss relative to gain frame, although these effects were only reliable at high levels of task difficulty and were not moderated by age. Critically, participants who had lower effort costs in the gain frame tended to report engaging in more mentally demanding daily life activities, but the opposite pattern was observed in the loss frame. Further exploratory analysis demonstrated that older adults reported overall lower levels of engagement in daily-life mentally demanding activities, and these age differences were statistically mediated by gain and loss effort costs. Together, these results suggest that cognitive effort costs, as measured through behavioral choice patterns in a neuroeconomic decision-making task, can be used to predict and explain engagement in mentally demanding activities during daily life among both older and younger adults.","link":"/opendata/crawford-et-al-2022/"},{"title":"Crivelli-Decker et al. (2023)","text":"Recent work in cognitive and systems neuroscience has suggested that the hippocampus might support planning, imagination, and navigation by forming cognitive maps that capture the abstract structure of physical spaces, tasks, and situations.
Navigation involves disambiguating similar contexts, and the planning and execution of a sequence of decisions to reach a goal. Here, we examine hippocampal activity patterns in humans during a goal-directed navigation task to investigate how contextual and goal information are incorporated in the construction and execution of navigational plans. During planning, hippocampal pattern similarity is enhanced across routes that share a context and a goal. During navigation, we observe prospective activation in the hippocampus that reflects the retrieval of pattern information related to a key-decision point. These results suggest that, rather than simply representing overlapping associations or state transitions, hippocampal activity patterns are shaped by context and goals.","link":"/opendata/crivelli-decker-et-al-2023/"},{"title":"Csifcsak et al. (2020)","text":"The ability to control the occurrence of rewarding and punishing events is crucial for our well-being. Two ways to optimize performance are to follow heuristics like Pavlovian biases to approach reward and avoid loss or to rely more on slowly accumulated stimulus-action associations. Although reduced control over outcomes has been linked to suboptimal decision-making in clinical conditions associated with learned helplessness, it is unclear how uncontrollability of the environment is related to the arbitration between different response strategies. This study directly tested whether a behavioral manipulation designed to induce learned helplessness in healthy adults (intermittent loss of control over feedback in a reinforcement learning task; yoking) would modulate the magnitude of Pavlovian bias and the neurophysiological signature of cognitive control (frontal midline theta power) in healthy adults. Using statistical analysis and computational modeling of behavioral data and electroencephalographic signals, we found stronger Pavlovian influences and alterations in frontal theta activity in the yoked group. 
However, these effects were not accompanied by reduced performance in experimental blocks with regained control, indicating that our behavioral manipulation was not potent enough for inducing helplessness and impaired coping ability with task demands. We conclude that the level of contingency between instrumental choices and rewards/punishments modulates Pavlovian bias during value-based decision-making, probably via interfering with the implementation of cognitive control. These findings might have implications for understanding the mechanisms underlying helplessness in various psychiatric conditions.","link":"/opendata/csifcsak-et-al-2020/"},{"title":"Csifcsak et al. (2021)","text":"Recent studies suggest that choice behavior in reinforcement learning tasks is shaped by the level of outcome controllability. In particular, Pavlovian bias (PB) seems to be enhanced under low levels of control, manifesting in approach tendencies toward rewards and response inhibition when facing potential losses. The medial prefrontal cortex (mPFC) has been implicated both in evaluating outcome controllability and in the recruitment of cognitive control (CC) to suppress maladaptive PB during reinforcement learning. The current study tested whether high-definition transcranial direct current stimulation (HD-tDCS) above the mPFC of healthy humans can influence PB, and counteract the previously documented, deleterious behavioral effects of low outcome controllability on decision-making. In a preregistered, between-group, double-blind study (N = 103 adults, both sexes), we tested the interaction between controllability and HD-tDCS on parameters of choice behavior in a Go/NoGo task. Relative to sham stimulation, HD-tDCS resulted in more robust performance improvement following reduced control, an effect that was more pronounced in appetitive trials. In addition, we found evidence for weaker PB when HD-tDCS was administered during low controllability over outcomes. 
Computational modeling revealed that parameter estimates of learning rate and choice randomness were modulated by controllability, HD-tDCS and their interaction. Overall, these results highlight the potential of our HD-tDCS protocol for interfering with choice arbitration under low levels of control, resulting in more adaptive behavior.","link":"/opendata/csifcsak-et-al-2021/"},{"title":"Cutler et al. (2021)","text":"Reinforcement learning is a fundamental mechanism displayed by many species. However, adaptive behaviour depends not only on learning about actions and outcomes that affect ourselves, but also those that affect others. Using computational reinforcement learning models, we tested whether young (age 18-36) and older (age 60-80, total n = 152) adults learn to gain rewards for themselves, another person (prosocial), or neither individual (control). Detailed model comparison showed that a model with separate learning rates for each recipient best explained behaviour. Young adults learned faster when their actions benefitted themselves, compared to others. Compared to young adults, older adults showed reduced self-relevant learning rates but preserved prosocial learning. Moreover, levels of subclinical self-reported psychopathic traits (including lack of concern for others) were lower in older adults and the core affective-interpersonal component of this measure negatively correlated with prosocial learning. These findings suggest learning to benefit others is preserved across the lifespan with implications for reinforcement learning and theories of healthy ageing.","link":"/opendata/cutler-et-al-2021/"},{"title":"Dai et al. (2019)","text":"Uncertainty about the waiting time before obtaining an outcome is integral to intertemporal choice. Here, we showed that people express different time preferences depending on how they learn about this temporal uncertainty. 
In two studies, people chose between pairs of options: one with a single, sure delay and the other involving multiple, probabilistic delays (a lottery). The probability of each delay occurring either was explicitly described (timing risk) or could be learned through experiential sampling (timing uncertainty; the delay itself was not experienced). When the shorter delay was rare, people preferred the lottery more often when it was described than when it was experienced. When the longer delay was rare, this pattern was reversed. Modeling analyses suggested that underexperiencing rare delays and different patterns of probability weighting contribute to this description-experience gap. Our results challenge traditional models of intertemporal choice with temporal uncertainty as well as the generality of inverse-S-shaped probability weighting in such choice.","link":"/opendata/dai-et-al-2019/"},{"title":"Das & Steyvers (2023)","text":"Cognitive decline often accompanies natural aging, which results in younger adults outperforming older adults, on average, on tasks requiring attention, memory, or mental flexibility skills. This performance gap between age groups persists even after people train on these tasks, but it remains unclear whether the gap persists when individuals, rather than groups, are compared at different training levels. In this paper, we analyzed 9,923 users between 18-90 years old (63% over 60) who performed a variety of cognitive tasks on an online cognitive training platform. We quantified an older adult’s potential to catch up to, or perform as well as, a younger adult. We found that the probability of catching up to someone decades younger increases with differential amounts of training on a variety of cognitive tasks. 
These findings suggest that age-related performance deficits can be overcome with additional training.","link":"/opendata/das-steyvers-2023/"},{"title":"da Silva & Hare (2020)","text":"Distinct model-free and model-based learning processes are thought to drive both typical and dysfunctional behaviours. Data from two-stage decision tasks have seemingly shown that human behaviour is driven by both processes operating in parallel. However, in this study, we show that more detailed task instructions lead participants to make primarily model-based choices that have little, if any, simple model-free influence. We also demonstrate that behaviour in the two-stage task may falsely appear to be driven by a combination of simple model-free and model-based learning if purely model-based agents form inaccurate models of the task because of misconceptions. Furthermore, we report evidence that many participants do misconceive the task in important ways. Overall, we argue that humans formulate a wide variety of learning models. Consequently, the simple dichotomy of model-free versus model-based learning is inadequate to explain behaviour in the two-stage task and connections between reward learning, habit formation and compulsivity.","link":"/opendata/da-silva-hare-2020/"},{"title":"de-Eccher et al. (2023)","text":"Much recent work has shown that children actively shape their learning progress by choosing what, when and from whom they want to learn. However less is known about whether children are sensitive to gaps in their knowledge, and whether they selectively solicit information about items they previously indicated having a knowledge gap about. In a cross-situational word-learning task, we asked 5-year-olds, 6-9-year-olds and adults to estimate their knowledge of newly learned word-object associations. 
We then examined whether participants preferentially chose to hear the labels of objects they reported not knowing the label of and whether such uncertainty-driven sampling behavior led to improved learning. We found that participants in all age groups were sensitive to gaps in their knowledge of the word-object associations, i.e., were more likely to say they had correctly indicated the label of an object when they were correct, relative to when they were incorrect. Furthermore, 6-9-year-olds and adults were more likely to sample objects whose labels they reported not knowing, while 5-year-olds were equally likely to sample objects whose labels they reported not knowing and knowing. In other words, older children and adults displayed sampling behavior directed at reducing knowledge gaps and uncertainty, while younger children did not. However, while all age groups showed evidence of having learned the word-object associations, participants who displayed more uncertainty-driven sampling behavior were not more accurate at test. Our findings underscore the role of uncertainty in driving 6-9-year-olds’ and adults’ sampling behavior and speak to the mechanisms underlying previously reported performance boosts in active learning.","link":"/opendata/de-eccher-et-al-2023/"},{"title":"de-Voogd et al. (2022)","text":"Threatening situations ask for rapid and accurate perceptual decisions to optimize coping. Theoretical models have stated that psychophysiological states, such as bradycardia during threat-anticipatory freezing, may facilitate perception. However, it’s unclear if this occurs via enhanced bottom-up sensory processing or by relying more on prior expectations. To test this, 52 (26 female) participants completed a visual target-detection paradigm under threat-of-shock (15% reinforcement rate) with a manipulation of prior expectations. 
Participants judged the presence of a backward-masked grating (target presence rate 50%) after systematically manipulating their decision criterion with a rare (20%) or frequent (80%) target presence rate procedure. Threat-of-shock induced stronger heart rate deceleration compared to safe, indicative of threat-anticipatory freezing. Importantly, threat-of-shock enhanced perceptual sensitivity, but we did not find evidence of an altered effect of prior expectations on current decisions. Correct target detection (hits) was furthermore accompanied by an increase in the magnitude of this heart rate deceleration compared to a missed target. While this was independent of the threat-of-shock manipulation, only under threat-of-shock was this increase accompanied by more hits and increased sensitivity. Together, these findings suggest that under acute threat participants may rely more on bottom-up sensory processing versus prior expectations in perceptual decision-making. Critically, bradycardia may underlie such enhanced perceptual sensitivity.","link":"/opendata/de-voogd-et-al-2022/"},{"title":"de-Waard et al. (2023)","text":"Through statistical learning, humans can learn to suppress visual areas that often contain distractors. Recent findings suggest that this form of learned suppression is insensitive to context, putting into question its real-life relevance. The current study presents a different picture: we show context-dependent learning of distractor-based regularities. Unlike previous studies, which typically used background cues to differentiate contexts, the current study manipulated task context. Specifically, the task alternated from block to block between a compound search and a detection task. In both tasks, participants searched for a unique shape, while ignoring a uniquely colored distractor item.
Crucially, a different high-probability distractor location was assigned to each task context in the training blocks, and all distractor locations were made equiprobable in the testing blocks. In a control experiment, participants only performed a compound search task such that the contexts were made indistinguishable, but the high-probability locations changed in exactly the same way as in the main experiment. We analyzed response times for different distractor locations and show that participants can learn to suppress a location in a context-dependent way, but suppression from previous task contexts lingers unless a new high-probability location is introduced.","link":"/opendata/de-waard-et-al-2023/"},{"title":"Decker et al. (2016)","text":"Theoretical models distinguish two decision-making strategies that have been formalized in reinforcement-learning theory. A model-based strategy leverages a cognitive model of potential actions and their consequences to make goal-directed choices, whereas a model-free strategy evaluates actions based solely on their reward history. Research in adults has begun to elucidate the psychological mechanisms and neural substrates underlying these learning processes and factors that influence their relative recruitment. However, the developmental trajectory of these evaluative strategies has not been well characterized. In this study, children, adolescents, and adults performed a sequential reinforcement-learning task that enabled estimation of model-based and model-free contributions to choice. Whereas a model-free strategy was apparent in choice behavior across all age groups, a model-based strategy was absent in children, became evident in adolescents, and strengthened in adults. These results suggest that recruitment of model-based valuation systems represents a critical cognitive component underlying the gradual maturation of goal-directed behavior.","link":"/opendata/decker-et-al-2016/"},{"title":"Decker, Dubois et al. 
(2022)","text":"Attentional lapses have been found to impair everything from basic perception to learning and memory. Yet, despite the well documented costs of lapses on cognition, recent work suggests that lapses might unexpectedly confer some benefits. One potential benefit is that lapses broaden our learning to integrate seemingly irrelevant content that could later prove useful–a benefit that prior research focusing only on goal relevant memory would miss. Here, we measure how fluctuations in sustained attention influence the learning of seemingly goal-irrelevant content that competes for attention with target content. Participants completed a correlated flanker task in which they categorized central targets (letters or numbers) while ignoring peripheral flanking symbols that shared hidden probabilistic relationships with the targets. We found that across participants, higher rates of attentional lapses correlated with greater learning of the target-flanker relationships. Moreover, within participants, learning was more evident during attentional lapses. These findings address long-standing theoretical debates and reveal a benefit of attentional lapses: they expand the scope of learning and decisions beyond the strictly relevant.","link":"/opendata/decker-dubois-et-al-2022/"},{"title":"Del-Popolo-Cristaldi et al. (2023)","text":"Adaptive cognitive control (CC), the ability to adjust goal-directed behavior according to changing environmental demand, can be instantiated bottom-up by implicit knowledge, including temporal predictability of task-relevant events. In S1-S2 tasks, either local (trial-by-trial hazard expectation) or global (block-by-block expectation) temporal information can induce prediction, allowing for proactive action control. Recent developmental evidence showed that adaptive CC based on global temporal prediction emerges earlier than when it is based on the local one only. 
However, very little is known about how children learn to dynamically adjust behavior on the fly according to changing global predictive information. Addressing this issue is nevertheless crucial to unravel the mechanisms underlying adaptive CC flexibility. Here we used a modified version of the Dynamic Temporal Prediction task to investigate how typically developing younger (6-8 years) and older children (9-11 years), adolescents (12-15 years) and adults (21-31 years) use global prediction to shape adaptive CC over time. Specifically, the short-long percentage of S2 preparatory intervals was manipulated list-wide to create a slow-fast-slow-fast fixed block sequence and test how efficiently the response speed adapted accordingly. Overall, results revealed that in all groups behavioral performance is successfully adjusted as a function of global prediction in the late phase of the task (block 3 to 4). Remarkably, only adolescents and adults exhibit an early adaptation of adaptive CC (block 1 to 2), while children younger than 11 show sluggish ability in inferring implicit changes in global predictive rules. This age-related dissociation suggests that, although being present from an early age, adaptive CC based on global predictive information needs more developmental space to become flexible in an efficient way. In the light of a neuroconstructivist approach, we suggest that bottom-up driven implicit flexibility may represent a key prerequisite for the development of efficient explicit cognitive control.","link":"/opendata/del-popolo-cristaldi-et-al-2023/"},{"title":"del-Rio et al. (2023)","text":"Our perception does not depend exclusively on the immediate sensory input. It is also influenced by our internal predictions derived from prior observations and the temporal regularities of the environment, which can result in choice history biases. However, the mechanisms facilitating this flexible use of prior information to predict the future are unclear. 
Prior information may offset evidence accumulation independently of the current sensory input, or it may modulate the weight of current sensory input based on its consistency with the expectation. To address this question, we used a visual decision-making task and manipulated the transitional probabilities between successive noisy grating stimuli. Using a reverse correlation analysis, we evaluated the contribution of stimulus-independent bias and stimulus-dependent sensitivity modulations to choice history biases. We found that both mechanisms coexist, whereby there was increased bias to respond in line with the predicted orientation alongside modulations in perceptual sensitivity to favour perceptual information consistent with the prediction, akin to selective attention. Furthermore, at the individual differences level, we investigated the relationship between autistic-like traits and the adaptation of choice history biases to the sequential statistics of the environment. Over two studies, we found no convincing evidence of reduced adaptation to sequential regularities in individuals with high autistic-like traits. In sum, we present robust evidence for both perceptual confirmation bias and decision bias supporting adaptation to sequential regularities in the environment.","link":"/opendata/del-rio-et-al-2023/"},{"title":"Dercon et al. (2022)","text":"Cognitive distancing is a therapeutic technique commonly used in psychological treatment of various mental health disorders, but its computational mechanisms remain unknown. To determine the effects of cognitive distancing on computational learning mechanisms, we use an online reward decision-making task, combined with reinforcement learning modelling in 935 participants, 49.1% of whom were trained to regulate their emotional response to task performance feedback. 
Those participants practicing cognitive distancing showed heightened learning from negative events as well as an increased integration of previous choice values. These differences seemed to represent an evolving shift in strategy by the distancing participants during the task, from exploiting optimal choices earlier in the task (as indicated by greater inverse temperature parameters), to a late-stage increase in learning from negative outcomes (represented as higher loss learning rates). Our findings suggest adaptive changes in computational learning mechanisms underpin the clinical utility of cognitive distancing in psychological therapy.","link":"/opendata/dercon-et-al-2022/"},{"title":"Dev et al. (2022)","text":"The theory of depressive realism holds that depressed individuals are less prone to optimistic bias, and are thus more realistic, in assessing their control or performance. Since the theory was proposed 40 years ago, many innovations have been validated for testing cognitive accuracy, including improved measures of bias in perceived control and performance. We incorporate several of those innovations in a well-powered, pre-registered study designed to identify depressive realism. Amazon MTurk workers (N = 246) and undergraduate students (N = 134) completed a classic contingency task, an overconfidence task, and measures of mental health constructs, including depression and anxiety. We measured perceived control throughout the contingency task, allowing us to compare control estimates at the trial-level to estimates assessed at task conclusion. We found no evidence that depressive symptoms relate to illusory control or to overconfidence. 
Our results suggest that despite its popular acceptance, depressive realism is not replicable.","link":"/opendata/dev-et-al-2022/"},{"title":"Devine & Otto (2022)","text":"People tend to avoid engaging in cognitively demanding tasks unless it is worth our while-that is, if the benefits outweigh the costs of effortful action. Yet, we seemingly partake in a variety of effortful mental activities (e.g. playing chess, completing Sudoku puzzles) because they impart a sense of progress. Here, we examine the possibility that information about progress-specifically, the number of trials completed of a demanding cognitive control task, relative to the total number of trials to be completed-reduces individuals’ aversion to cognitively effortful activity, across four experiments. In Experiment 1, we provide an initial demonstration that presenting progress information reduces individuals’ avoidance of cognitively demanding activity using a variant of the well-characterized Demand Selection Task (DST). The subsequent experiments buttress this finding using a more sophisticated within-subjects version of the DST, independently manipulating progress information and demand level to further demonstrate that 1) people prefer receiving information about temporal progress in a task, and 2) all else being equal, individuals will choose to engage in tasks that require greater levels of cognitive effort when the more demanding option confers information about their progress in a task. Together, these results suggest that progress information can motivate cognitive effort expenditure and, in some cases, override individuals’ default bias towards demand avoidance.","link":"/opendata/devine-otto-2022/"},{"title":"Devine et al. (2023)","text":"Considerable evidence suggests that people value the freedom of choice.
However, it is unclear whether this preference for choice stems purely from choice’s intrinsic value, or whether people prefer to choose because it tends to provide instrumental information about desirable outcomes. To address this question, we asked participants (n=200) to complete a two-stage choice task in which they could freely choose to exert choice or not. Borrowing a concept from information theory—mutual information—we manipulated the instrumental contingency between participants’ choices and eventual outcomes. Across two experiments, and leveraging reinforcement learning modeling, we demonstrate a marked preference for choice, but importantly found that participants’ preference for free choice is weakened when actions are decoupled from outcomes. Taken together, our results demonstrate that a significant factor in people’s preference for choice is an assumption about the instrumental value of choice, suggesting against a purely intrinsic value of choice.","link":"/opendata/devine-et-al-2023/"},{"title":"Dezfouli et al. (2019)","text":"Popular computational models of decision-making make specific assumptions about learning processes that may cause them to underfit observed behaviours. Here we suggest an alternative method using recurrent neural networks (RNNs) to generate a flexible family of models that have sufficient capacity to represent the complex learning and decision-making strategies used by humans. In this approach, an RNN is trained to predict the next action that a subject will take in a decision-making task and, in this way, learns to imitate the processes underlying subjects’ choices and their learning abilities. We demonstrate the benefits of this approach using a new dataset drawn from patients with either unipolar (n = 34) or bipolar (n = 33) depression and matched healthy controls (n = 34) making decisions on a two-armed bandit task.
The results indicate that this new approach is better than baseline reinforcement-learning methods in terms of overall performance and its capacity to predict subjects’ choices. We show that the model can be interpreted using off-policy simulations and thereby provides a novel clustering of subjects’ learning processes-something that often eludes traditional approaches to modelling and behavioural analysis.","link":"/opendata/dezfouli-et-al-2019/"},{"title":"Dildine et al. (2020)","text":"Self-report is the gold standard for measuring pain. However, decisions about pain can vary substantially within and between individuals. We measured whether self-reported pain is accompanied by metacognition and variations in confidence, similar to perceptual decision-making in other modalities. Eighty healthy volunteers underwent acute thermal pain and provided pain ratings followed by confidence judgments on continuous visual analogue scales. We investigated whether eye fixations and reaction time during pain rating might serve as implicit markers of confidence. Confidence varied across trials and increased confidence was associated with faster pain rating reaction times. The association between confidence and fixations varied across individuals as a function of the reliability of individuals’ association between temperature and pain. Taken together, this work indicates that individuals can provide metacognitive judgments of pain and extends research on confidence in perceptual decision-making to pain.","link":"/opendata/dildine-et-al-2020/"},{"title":"Dillon et al. (2021)","text":"Choices and response times in two-alternative decision-making tasks can be modeled by assuming that individuals steadily accrue evidence in favor of each alternative until a response boundary for one of them is crossed, at which point that alternative is chosen.
Prior studies have reported that evidence accumulation during decision-making tasks takes longer in adults with psychopathology than in healthy controls, indicating that slow evidence accumulation may be transdiagnostic. However, few studies have examined perceptual decision making in anxiety disorders, where hypervigilance might enhance performance. Therefore, this study used the Hierarchical Drift Diffusion model to investigate evidence accumulation in adults with social anxiety disorder (SAD) and healthy controls as they performed a probabilistic reward task (PRT), in which social rewards were delivered for correct perceptual judgments. Adults with SAD completed the PRT before and after gaze-contingent music reward therapy (GCMRT), which trains attention allocation and has shown efficacy for SAD. Healthy controls also completed the PRT twice. Results revealed excellent performance in adults with SAD, especially after GCMRT: relative to controls, they showed faster evidence accumulation, better discriminability, and earned more rewards. These data highlight a positive effect of attention training on performance in anxious adults and show how a behavioral trait that is typically problematic-hypervigilance in SAD-can nevertheless confer advantages in certain contexts. The data also indicate that, in contrast to other forms of psychopathology, SAD is not characterized by slow evidence accumulation, at least in the context of the social PRT.","link":"/opendata/dillon-et-al-2021/"},{"title":"Dombrovski et al. (2019)","text":"Suicidal behavior is associated with impaired decision making in contexts of uncertainty. Existing studies, however, do not definitively address whether suicide attempers have 1) impairment in learning from experience or 2) impairment in choice based on comparison of estimated option values. 
Our reinforcement learning model-based behavioral study tested these hypotheses directly in middle-aged and older suicide attempters representative of those who die by suicide. Two samples (sample 1, n = 135; sample 2, n = 125) of suicide attempters with depression (n attempters = 54 and 39, respectively), suicide ideators, nonsuicidal patients with depression, and healthy control participants completed a probabilistic three-choice decision-making task. A second experiment in sample 2 experimentally dissociated long-term learned value from reward magnitude. Analyses combined computational reinforcement learning and mixed-effects models of decision times and choices. With regard to learning, suicide attempters (vs. all comparison groups) were less sensitive to one-back reinforcement, as indicated by a reduced effect on both choices and decision times. Learning deficits scaled with attempt lethality and were partially explained by poor cognitive control. With regard to value-based choice, suicide attempters (vs. all comparison groups) displayed abnormally long decision times when choosing between similarly valued options and were less able to distinguish between the best and second-best options. Group differences in value-based choice were robust to controlling for cognitive performance, comorbidities, impulsivity, psychotropic exposure, and possible brain damage from attempts. Serious suicidal behavior is associated with impaired reward learning, likely undermining the search for alternative solutions. Attempted suicide is associated with impaired value comparison during the choice process, potentially interfering with the consideration of deterrents and alternatives in a crisis.","link":"/opendata/dombrovski-et-al-2019/"},{"title":"Dombrovski et al. (2020)","text":"When making decisions, should one exploit known good options or explore potentially better alternatives? Exploration of spatially unstructured options depends on the neocortex, striatum, and amygdala.
In natural environments, however, better options often cluster together, forming structured value distributions. The hippocampus binds reward information into allocentric cognitive maps to support navigation and foraging in such spaces. Here we report that human posterior hippocampus (PH) invigorates exploration while anterior hippocampus (AH) supports the transition to exploitation on a reinforcement learning task with a spatially structured reward function. These dynamics depend on differential reinforcement representations in the PH and AH. Whereas local reward prediction error signals are early and phasic in the PH tail, global value maximum signals are delayed and sustained in the AH body. AH compresses reinforcement information across episodes, updating the location and prominence of the value maximum and displaying goal cell-like ramping activity when navigating toward it.","link":"/opendata/dombrovski-et-al-2020/"},{"title":"Donegan et al. (2023)","text":"Model-based planning is thought to protect against over-reliance on habits. It is reduced in individuals high in compulsivity, but effect sizes are small and may depend on subtle features of the tasks used to assess it. We developed a diamond-shooting smartphone game that measures model-based planning in an at-home setting, and varied the game’s structure within and across participants to assess how it affects measurement reliability and validity with respect to previously established correlates of model-based planning, with a focus on compulsivity. Increasing the number of trials used to estimate model-based planning did remarkably little to affect the association with compulsivity. However, associations with compulsivity were higher when transition ratios were less deterministic and depended on the reward drift utilised.
These findings suggest that model-based planning can be measured at home via an app, can be estimated in relatively few trials, and can be optimised for sensitivity to compulsive symptoms in the general population.","link":"/opendata/donegan-et-al-2023/"},{"title":"Doody et al. (2022)","text":"Reinforcement learning (RL) is widely regarded as divisible into two distinct computational strategies. Model-free learning is a simple RL process in which a value is associated with actions, whereas model-based learning relies on the formation of internal models of the environment to maximise reward. Recently, theoretical and animal work has suggested that such models might be used to train model-free behaviour, reducing the burden of costly forward planning. Here we devised a way to probe this possibility in human behaviour. We adapted a two-stage decision task and found evidence that model-based processes at the time of learning can alter model-free valuation in healthy individuals. We asked people to rate subjective value of an irrelevant feature that was seen at the time a model-based decision would have been made. These irrelevant feature value ratings were updated by rewards, but in a way that accounted for whether the selected action retrospectively ought to have been taken. This model-based influence on model-free value ratings was best accounted for by a reward prediction error that was calculated relative to the decision path that would most likely have led to the reward. This effect occurred independently of attention and was not present when participants were not explicitly told about the structure of the environment. These findings suggest that current conceptions of model-based and model-free learning require updating in favour of a more integrated approach. Our task provides an empirical handle for further study of the dialogue between these two learning systems in the future.","link":"/opendata/doody-et-al-2022/"},{"title":"Dorfman et al. 
(2019)","text":"People learn differently from good and bad outcomes. We argue that valence-dependent learning asymmetries are partly driven by beliefs about the causal structure of the environment. If hidden causes can intervene to generate bad (or good) outcomes, then a rational observer will assign blame (or credit) to these hidden causes, rather than to the stable outcome distribution. Thus, a rational observer should learn less from bad outcomes when they are likely to have been generated by a hidden cause, and this pattern should reverse when hidden causes are likely to generate good outcomes. To test this hypothesis, we conducted two experiments (N = 80, N = 255) in which we explicitly manipulated the behavior of hidden agents. This gave rise to both kinds of learning asymmetries in the same paradigm, as predicted by a novel Bayesian model. These results provide a mechanistic framework for understanding how causal attributions contribute to biased learning.","link":"/opendata/dorfman-et-al-2019/"},{"title":"Dorfman & Gershman (2019)","text":"A Pavlovian bias to approach reward-predictive cues and avoid punishment-predictive cues can conflict with instrumentally-optimal actions. Here, we propose that the brain arbitrates between Pavlovian and instrumental control by inferring which is a better predictor of reward. The instrumental predictor is more flexible; it can learn values that depend on both stimuli and actions, whereas the Pavlovian predictor learns values that depend only on stimuli. The arbitration theory predicts that the Pavlovian predictor will be favored when rewards are relatively uncontrollable, because the additional flexibility of the instrumental predictor is not useful. Consistent with this hypothesis, we find that the Pavlovian approach bias is stronger under low control compared to high control contexts.","link":"/opendata/dorfman-gershman-2019/"},{"title":"Dubois et al.
(2022)","text":"Deciding between exploring new avenues and exploiting known choices is central to learning, and this exploration-exploitation trade-off changes during development. Exploration is not a unitary concept, and humans deploy multiple distinct mechanisms, but little is known about their specific emergence during development. Using a previously validated task in adults, changes in exploration mechanisms were investigated between childhood (8-9 y/o, N = 26; 16 females), early (12-13 y/o, N = 38; 21 females), and late adolescence (16-17 y/o, N = 33; 19 females) in ethnically and socially diverse schools from disadvantaged areas. We find an increased usage of a computationally light exploration heuristic in younger groups, effectively accommodating their limited neurocognitive resources. Moreover, this heuristic was associated with self-reported, attention-deficit/hyperactivity disorder symptoms in this population-based sample. This study enriches our mechanistic understanding about how exploration strategies mature during development.","link":"/opendata/dubois-et-al-2022/"},{"title":"Dubois & Hauser (2022)","text":"Deciding whether to forgo a good choice in favour of exploring a potentially more rewarding alternative is one of the most challenging arbitrations both in human reasoning and in artificial intelligence. Humans show substantial variability in their exploration, and theoretical (but only limited empirical) work has suggested that excessive exploration is a critical mechanism underlying the psychiatric dimension of impulsivity. In this registered report, we put these theories to test using large online samples, dimensional analyses, and computational modelling. 
Capitalising on recent advances in disentangling distinct human exploration strategies, we not only demonstrate that impulsivity is associated with a specific form of exploration-value-free random exploration-but also explore links between exploration and other psychiatric dimensions.","link":"/opendata/dubois-hauser-2022/"},{"title":"Dumbalska et al. (2022)","text":"When a target stimulus occurs in the presence of distracters, decisions are less accurate. But how exactly do distracters affect choices? Here, we explored this question using measurement of human behaviour, psychophysical reverse correlation and computational modelling. We contrasted two models: one in which targets and distracters had independent influence on choices (independent model) and one in which distracters modulated choices in a way that depended on their similarity to the target (interaction model). Across three experiments, participants were asked to make fine orientation judgments about the tilt of a target grating presented adjacent to an irrelevant distracter. We found strong evidence for the interaction model, in that decisions were more sensitive when target and distracter were consistent relative to when they were inconsistent. This consistency bias occurred in the frame of reference of the decision, that is, it operated on decision values rather than on sensory signals, and surprisingly, it was independent of spatial attention. A normalization framework, where target features are normalized by the expectation and variability of the local context, successfully captures the observed pattern of results.","link":"/opendata/dumbalska-et-al-2022/"},{"title":"Duyck et al. (2021)","text":"Eye blinks strongly attenuate visual input, yet we perceive the world as continuous. How this visual continuity is achieved remains a fundamental and unsolved problem. 
A decrease in luminance sensitivity has been proposed as a mechanism but is insufficient to mask the even larger decrease in luminance because of blinks. Here we put forward a different hypothesis: visual continuity can be achieved through shortening of perceived durations of the sensory consequences of blinks. Here we probed the perceived durations of the blackouts caused by blinks and visual stimuli interrupted by blinks. We found that the perceived durations of blackouts because of blinks are about half as long as artificial blackouts immediately preceding or following the blink. Stimuli interrupted by blinks were perceived as briefer than uninterrupted stimuli, by about the same duration as the interruption-but so were stimuli interrupted by optically simulated blinks. There was a difference between real and simulated blinks, however: The decrease in perceived duration depended on the duration of the interruption for simulated, but not for real, blinks. These profound modifications in time perception during blinks show a way in which temporal processing contributes to the solution of an essential perceptual problem.","link":"/opendata/duyck-et-al-2021/"},{"title":"Eckert et al. (2022)","text":"Predictive processing posits that perception emerges from inferential processes within a hierarchical cortical system. Alterations of these processes may result in psychotic experiences, such as hallucinations and delusions. Central to the predictive processing account of psychosis is the notion of aberrant weights attributed to prior information and sensory input. Based on the notion that previous perceptual choices represent a relevant source of prior information, we here asked whether the propensity towards psychotic experiences may be related to altered choice history biases in perceptual decision-making. We investigated the relationship between choice history biases in perceptual decision-making and psychosis proneness in the general population. 
Choice history biases and their adaptation to experimentally induced changes in stimulus serial dependencies were investigated in decision-making tasks with auditory (experiment 1) and visual (experiment 2) stimuli. We further explored a potential compensatory mechanism for reduced choice history biases by reliance on predictive cross-modal cues. In line with our preregistered hypothesis, psychosis proneness was associated with decreased choice history biases in both experiments. This association generalized across conditions with and without stimulus serial dependencies. We did not find consistent evidence for a compensatory reliance on cue information in psychosis-prone individuals across experiments. Our results show reduced choice history biases in psychosis proneness. A compensatory mechanism between implicit choice history effects and explicit cue information is not supported unequivocally by our data.","link":"/opendata/eckert-et-al-2022/"},{"title":"Eckstein et al. (2022a)","text":"During adolescence, youth venture out, explore the wider world, and are challenged to learn how to navigate novel and uncertain environments. We investigated how performance changes across adolescent development in a stochastic, volatile reversal-learning task that uniquely taxes the balance of persistence and flexibility. In a sample of 291 participants aged 8-30, we found that in the mid-teen years, adolescents outperformed both younger and older participants. We developed two independent cognitive models, based on reinforcement learning (RL) and Bayesian inference (BI). The RL parameter for learning from negative outcomes and the BI parameters specifying participants’ mental models were closest to optimal in mid-teen adolescents, suggesting a central role in adolescent cognitive processing. By contrast, persistence and noise parameters improved monotonically with age.
We distilled the insights of RL and BI using principal component analysis and found that three shared components interacted to form the adolescent performance peak: adult-like behavioral quality, child-like time scales, and developmentally-unique processing of positive feedback. This research highlights adolescence as a neurodevelopmental window that can create performance advantages in volatile and uncertain environments. It also shows how detailed insights can be gleaned by using cognitive models in new ways.","link":"/opendata/eckstein-et-al-2022a/"},{"title":"Eckstein et al. (2022b)","text":"Reinforcement Learning (RL) models have revolutionized the cognitive and brain sciences, promising to explain behavior from simple conditioning to complex problem solving, to shed light on developmental and individual differences, and to anchor cognitive processes in specific brain mechanisms. However, the RL literature increasingly reveals contradictory results, which might cast doubt on these claims. We hypothesized that many contradictions arise from two commonly-held assumptions about computational model parameters that are actually often invalid: That parameters generalize between contexts (e.g. tasks, models) and that they capture interpretable (i.e. unique, distinctive) neurocognitive processes. To test this, we asked 291 participants aged 8-30 years to complete three learning tasks in one experimental session, and fitted RL models to each. We found that some parameters (exploration / decision noise) showed significant generalization: they followed similar developmental trajectories, and were reciprocally predictive between tasks. Still, generalization was significantly below the methodological ceiling. Furthermore, other parameters (learning rates, forgetting) did not show evidence of generalization, and sometimes even opposite developmental trajectories. Interpretability was low for all parameters. We conclude that the systematic study of context factors (e.g. 
reward stochasticity; task volatility) will be necessary to enhance the generalizability and interpretability of computational cognitive models.","link":"/opendata/eckstein-et-al-2022b/"},{"title":"Egger & Florack (2022)","text":"In two experiments, we investigated two untested assumptions regarding the mechanism by which selective attention during search affects subsequent preferences for objects. First, we tested whether an increase in visual competition during search increases preferences for relevant objects and decreases preferences for irrelevant objects subsequent to search. Second, we tested whether searching for objects increases the perceived fluency to process relevant objects and decreases the perceived fluency to process irrelevant objects. Our results show that search can affect relevant and irrelevant objects differently. Selective attention increased preferences for target objects subsequent to search, whereas selective attention did not affect preferences for distractors. Furthermore, our results indicate that searching for a target object increased the perceived fluency for this target object during choice, whereas ignoring a distractor product blocked mere exposure effects. Contrary to assumptions made in previous research, we found no indication that the competition for visual resources during search is linked to preferences for targets or distractors.","link":"/opendata/egger-florack-2022/"},{"title":"Ehlers & Lonsdorf (2022)","text":"Data sharing holds promise for advancing and accelerating science by facilitating and fostering collaboration, reproducibility and optimal use of sparse resources. We argue that despite the existence of general data sharing guidelines (e.g., FAIR principles), their translation and implementation require field-specific considerations.
Here, we addressed this timely question for the field of experimental research on fear and anxiety and showcase the enormous prospects by illustrating the wealth and richness of a curated data collection of publicly available datasets using the fear conditioning paradigm based on 103 studies and 8839 participants. We highlight challenges encountered when aiming to reuse the available data corpus and derive 10 simple steps for making data sharing in the field more efficient and sustainable and hence facilitating collaboration, cumulative knowledge generation and large-scale mega-, meta- and psychometric analyses. We share our vision and first steps towards transforming such curated data collections into a homogenized and dynamically growing database allowing for easy contributions and for living analysis tools for the collective benefit of the research community.","link":"/opendata/ehlers-lonsdorf-2022/"},{"title":"Eisenberg et al. (2021)","text":"Psychological sciences have identified a wealth of cognitive processes and behavioral phenomena, yet struggle to produce cumulative knowledge. Progress is hamstrung by siloed scientific traditions and a focus on explanation over prediction, two issues that are particularly damaging for the study of multifaceted constructs like self-regulation. Here, we derive a psychological ontology from a study of individual differences across a broad range of behavioral tasks, self-report surveys, and self-reported real-world outcomes associated with self-regulation. Though both tasks and surveys putatively measure self-regulation, they show little empirical relationship.
Within tasks and surveys, however, the ontology identifies reliable individual traits and reveals opportunities for theoretic synthesis. We then evaluate predictive power of the psychological measurements and find that while surveys modestly and heterogeneously predict real-world outcomes, tasks largely do not. We conclude that self-regulation lacks coherence as a construct, and that data-driven ontologies lay the groundwork for a cumulative psychological science.","link":"/opendata/eisenberg-et-al-2021/"},{"title":"Eissa et al. (2022)","text":"Solutions to challenging inference problems are often subject to a fundamental trade-off between: 1) bias (being systematically wrong) that is minimized with complex inference strategies, and 2) variance (being oversensitive to uncertain observations) that is minimized with simple inference strategies. However, this trade-off is based on the assumption that the strategies being considered are optimal for their given complexity and thus has unclear relevance to forms of inference based on suboptimal strategies. We examined inference problems applied to rare, asymmetrically available evidence, which a large population of human subjects solved using a diverse set of strategies that varied in form and complexity. In general, subjects using more complex strategies tended to have lower bias and variance, but with a dependence on the form of strategy that reflected an inversion of the classic bias-variance trade-off: subjects who used more complex, but imperfect, Bayesian-like strategies tended to have lower variance but higher bias because of incorrect tuning to latent task features, whereas subjects who used simpler heuristic strategies tended to have higher variance because they operated more directly on the observed samples but lower, near-normative bias. 
Our results help define new principles that govern individual differences in behavior that depends on rare-event inference and, more generally, clarify the information-processing trade-offs that can be sensitive to not just the complexity, but also the optimality, of the inference process.","link":"/opendata/eissa-et-al-2022/"},{"title":"Elder et al. (2022)","text":"People learn about themselves from social feedback, but desires for coherence and positivity constrain how feedback is incorporated into the self-concept. We developed a network-based model of the self-concept and embedded it in a reinforcement-learning framework to provide a computational account of how motivations shape self-learning from feedback. Participants (N = 46 adult university students) received feedback while evaluating themselves on traits drawn from a causal network of trait semantics. Network-defined communities were assigned different likelihoods of positive feedback. Participants learned from positive feedback but dismissed negative feedback, as reflected by asymmetries in computational parameters that represent the incorporation of positive versus negative outcomes. Furthermore, participants were constrained in how they incorporated feedback: Self-evaluations changed less for traits that have more implications and are thus more important to the coherence of the network. We provide a computational explanation of how motives for coherence and positivity jointly constrain learning about the self from feedback, an explanation that makes testable predictions for future clinical research.","link":"/opendata/elder-et-al-2022/"},{"title":"Elder et al. (2023)","text":"Implicit measures were initially assumed to assess stable individual differences, but other perspectives posit that they reflect context-dependent processes.
This pre-registered research investigates whether the processes contributing to responses on the race Implicit Association Test are temporally stable and reliably measured using multinomial processing tree modeling. We applied two models (the Quad model and the Process Dissociation Procedure) to six datasets (N = 2,036), each collected over two occasions, examined the within-measurement reliability and between-measurement stability of model parameters, and meta-analyzed the results. Parameters reflecting accuracy-oriented processes demonstrate adequate stability and reliability, which suggests these processes are relatively stable within individuals. Parameters reflecting evaluative associations demonstrate poor stability but modest reliability, which suggests that associations are either context-dependent or stable but noisily measured. These findings suggest that processes contributing to racial bias on implicit measures differ in temporal stability, which has practical implications for predicting behavior using the Implicit Association Test.","link":"/opendata/elder-et-al-2023/"},{"title":"Éltető et al. (2022)","text":"Humans can implicitly learn complex perceptuo-motor skills over the course of large numbers of trials. This likely depends on our becoming better able to take advantage of ever richer and temporally deeper predictive relationships in the environment. Here, we offer a novel characterization of this process, fitting a non-parametric, hierarchical Bayesian sequence model to the reaction times of human participants’ responses over ten sessions, each comprising thousands of trials, in a serial reaction time task involving higher-order dependencies. The model, adapted from the domain of language, forgetfully updates trial-by-trial, and seamlessly combines predictive information from shorter and longer windows onto past events, weighing the windows proportionally to their predictive power.
As the model implies a posterior over window depths, we were able to determine how, and how many, previous sequence elements influenced individual participants’ internal predictions, and how this changed with practice. Already in the first session, the model showed that participants had begun to rely on two previous elements (i.e., trigrams), thereby successfully adapting to the most prominent higher-order structure in the task. The extent to which local statistical fluctuations in trigram frequency influenced participants’ responses waned over subsequent sessions, as participants forgot the trigrams less and evidenced skilled performance. By the eighth session, a subset of participants shifted their prior further to consider a context deeper than two previous elements. Finally, participants showed resistance to interference and slow forgetting of the old sequence when it was changed in the final sessions. Model parameters for individual participants covaried appropriately with independent measures of working memory and error characteristics. In sum, the model offers the first principled account of the adaptive complexity and nuanced dynamics of humans’ internal sequence representations during long-term implicit skill learning.","link":"/opendata/elteto-et-al-2022/"},{"title":"Embrey et al. (2023)","text":"Humans are often termed “cognitive misers” for their aversion to mental effort. Both in and outside the laboratory people often show preference for low-effort tasks and are willing to forgo financial reward to avoid more demanding alternatives. Mental effort, however, does not seem to be ubiquitously avoided: people play crosswords, board games, and read novels, all as forms of leisure. While such activities undoubtedly require effort, the type of cognitive demands they impose appear markedly different from the tasks typically used in mental-effort research (e.g., N-Back, demand selection tasks, vigilance tasks).
We investigate the effect that disparate demands, such as rule discovery compared to rule implementation, have on people’s aversion to or preference for increased mental effort. Across four experiments using three different tasks (i.e., N-Back, number sequence problems, and anagrams), and a mixture of online and lab-based settings, we find that aversion to effort remains stable regardless of the types of cognitive demands a task imposes. The results are discussed in terms of other factors that might induce the pursuit of mental effort over and above the type of cognitive activity involved in a task.","link":"/opendata/embrey-et-al-2023/"},{"title":"Engelmann et al. (2017)","text":"Commonly observed distortions in decision-making among patients with major depressive disorder (MDD) may emerge from impaired reward processing and cognitive biases toward negative events. There is substantial theoretical support for the hypothesis that MDD patients overweight potential losses compared with gains, though the neurobiological underpinnings of this bias are uncertain. Twenty-one unmedicated patients with MDD were compared with 25 healthy controls (HC) using functional magnetic resonance imaging (fMRI) together with an economic decision-making task over mixed lotteries involving probabilistic gains and losses. Region-of-interest analyses evaluated neural signatures of gain and loss coding within a core network of brain areas known to be involved in valuation (anterior insula, caudate nucleus, ventromedial prefrontal cortex). Usable fMRI data were available for 19 MDD and 23 HC subjects. Anterior insula signal showed negative coding of losses (gain > loss) in HC subjects consistent with previous findings, whereas MDD subjects demonstrated significant reversals in these associations (loss > gain). Moreover, depression severity further enhanced the positive coding of losses in anterior insula, ventromedial prefrontal cortex, and caudate nucleus.
The hyper-responsivity to losses displayed by the anterior insula of MDD patients was paralleled by a reduced influence of gain, but not loss, stake size on choice latencies. Patients with MDD demonstrate a significant shift from negative to positive coding of losses in the anterior insula, revealing the importance of this structure in value-based decision-making in the context of emotional disturbances.","link":"/opendata/engelmann-et-al-2017/"},{"title":"Erev et al. (2017)","text":"Experimental studies of choice behavior document distinct, and sometimes contradictory, deviations from maximization. For example, people tend to overweight rare events in 1-shot decisions under risk, and to exhibit the opposite bias when they rely on past experience. The common explanations of these results assume that the contradicting anomalies reflect situation-specific processes that involve the weighting of subjective values and the use of simple heuristics. The current article analyzes 14 choice anomalies that have been described by different models, including the Allais, St. Petersburg, and Ellsberg paradoxes, and the reflection effect. Next, it uses a choice prediction competition methodology to clarify the interaction between the different anomalies. It focuses on decisions under risk (known payoff distributions) and under ambiguity (unknown probabilities), with and without feedback concerning the outcomes of past choices. The results demonstrate that it is not necessary to assume situation-specific processes. The distinct anomalies can be captured by assuming high sensitivity to the expected return and 4 additional tendencies: pessimism, bias toward equal weighting, sensitivity to payoff sign, and an effort to minimize the probability of immediate regret. Importantly, feedback increases sensitivity to probability of regret. 
Simple abstractions of these assumptions, variants of the model Best Estimate and Sampling Tools (BEAST), allow surprisingly accurate ex ante predictions of behavior. Unlike the popular models, BEAST does not assume subjective weighting functions or cognitive shortcuts. Rather, it assumes the use of sampling tools and reliance on small samples, in addition to the estimation of the expected values.","link":"/opendata/erev-et-al-2017/"},{"title":"Ez-zizi et al. (2023)","text":"Two prominent types of uncertainty that have been studied extensively are expected and unexpected uncertainty. Studies suggest that humans are capable of learning from reward under both expected and unexpected uncertainty when the source of variability is the reward. How do people learn when the source of uncertainty is the environment’s state and rewards themselves are deterministic? How does their learning compare with the case of reward uncertainty? The present study addressed these questions using behavioural experimentation and computational modelling. Experiment 1 showed that human subjects were generally able to use reward feedback to successfully learn the task rules under state uncertainty, and were able to detect a non-signalled reversal of stimulus-response contingencies. Experiment 2, which combined all four types of uncertainties—expected versus unexpected uncertainty, and state versus reward uncertainty—highlighted key similarities and differences in learning between state and reward uncertainties. We found that subjects performed significantly better in the state uncertainty condition, primarily because they explored less and improved their state disambiguation.
We also show that a simple reinforcement learning mechanism that ignores state uncertainty and updates the state-action value of only the identified state accounted for the behavioural data better than both a Bayesian reinforcement learning model that keeps track of belief states and a model that acts based on sampling from past experiences. Our findings suggest a common mechanism supports reward-based learning under state and reward uncertainty.","link":"/opendata/ez-zizi-et-al-2023/"},{"title":"Fan et al. (2021)","text":"Anxiety has been related to decreased physical exploration, but past findings on the interaction between anxiety and exploration during decision making were inconclusive. Here we examined how latent factors of trait anxiety relate to different exploration strategies when facing volatility-induced uncertainty. Across two studies (total N = 985), we demonstrated that people used a hybrid of directed, random and undirected exploration strategies, which were respectively sensitive to relative uncertainty, total uncertainty and value difference. Trait somatic anxiety, that is, the propensity to experience physical symptoms of anxiety, was inversely correlated with directed exploration and undirected exploration, manifesting as a lesser likelihood for choosing the uncertain option and reducing choice stochasticity regardless of uncertainty. Somatic anxiety is also associated with underestimation of relative uncertainty. Together, these results reveal the selective role of trait somatic anxiety in modulating both uncertainty-driven and value-driven exploration strategies.","link":"/opendata/fan-et-al-2021/"},{"title":"Fan et al. (2023)","text":"Exploration is an important part of decision making and is crucial to maximizing long-term reward. Past work has shown that people use different forms of uncertainty to guide exploration. In this study, we investigate the role of the pupil-linked arousal system in uncertainty-guided exploration. 
We measured participants’ pupil dilation (N = 48) while they performed a two-armed bandit task. Consistent with previous work, we found that people adopted a hybrid of directed, random and undirected exploration, which are sensitive to relative uncertainty, total uncertainty and value difference between options, respectively. We also found a positive correlation between pupil size and total uncertainty. Furthermore, augmenting the choice model with subject-specific total uncertainty estimates decoded from the pupil size improved predictions of held-out choices, suggesting that people used the uncertainty estimate encoded in pupil size to decide which option to explore. Together, the data shed light on the computations underlying uncertainty-driven exploration. Under the assumption that pupil size reflects Locus Coeruleus-Norepinephrine (LC-NE) neuromodulatory activity, these results also extend the theory of LC-NE function in exploration, highlighting its selective role in driving uncertainty-guided random exploration.","link":"/opendata/fan-et-al-2023/"},{"title":"Farashahi & Soltani (2021)","text":"Learning appropriate representations of the reward environment is challenging in the real world where there are many options, each with multiple attributes or features. Despite the existence of alternative solutions for this challenge, the neural mechanisms underlying the emergence and adoption of value representations and learning strategies remain unknown. To address this, we measure learning and choice during a multi-dimensional probabilistic learning task in humans and trained recurrent neural networks (RNNs) to capture our experimental observations. We find that human participants estimate stimulus-outcome associations by learning and combining estimates of reward probabilities associated with the informative feature followed by those of informative conjunctions.
Through analyzing representations, connectivity, and lesioning of the RNNs, we demonstrate this mixed learning strategy relies on a distributed neural code and opponency between excitatory and inhibitory neurons through value-dependent disinhibition. Together, our results suggest computational and neural mechanisms underlying emergence of complex learning strategies in naturalistic settings.","link":"/opendata/farashahi-soltani-2021/"},{"title":"Fareri et al. (2022)","text":"Choices under risk often have consequences for ourselves and others. Yet, it is unclear how the other’s identity (stranger, close friend, etc.) influences risky choices made on their behalf. In a mixed within and between subjects design, two participant groups made three series of risky economic decisions: for themselves, another person, or for both themselves and another person (i.e., shared outcomes). One group made choices involving a same-sex stranger (n = 29), the other made choices involving a same-sex close friend (n = 28). Hierarchical Bayesian estimation of computations underlying risky decision-making revealed that relative to choosing for themselves, people were more risk averse, loss averse, and consistent when choices involved another person. Partner identity was additionally crucial: people became risk neutral and more consistent when choosing for friends relative to strangers. These findings establish that the complexity of the social world is mirrored in its nuanced consequences for our choices.","link":"/opendata/fareri-et-al-2022/"},{"title":"Farkas et al. (2023)","text":"Despite the fact that reliability estimation is crucial for robust inference, it is underutilized in neuroscience and cognitive psychology. Appreciating reliability can help researchers increase statistical power, effect sizes, and reproducibility, decrease the impact of measurement error, and inform methodological choices. 
However, accurately calculating reliability for many experimental learning tasks is challenging. In this study, we highlight a number of these issues, and estimate multiple metrics of internal consistency and split-half reliability of a widely used learning task on a large sample of 180 subjects. We show how pre-processing choices, task length, and sample size can affect reliability and its estimation. Our results show that the Alternating Serial Reaction Time Task has respectable reliability, especially when learning scores are calculated based on reaction times and two-stage averaging. We also show that a task length of 25 blocks can be sufficient to meet the usual thresholds for minimally acceptable reliability. We further illustrate how relying on a single point estimate of reliability can be misleading, and the calculation of multiple metrics, along with their uncertainties, can lead to a more complete characterization of the psychometric properties of tasks.","link":"/opendata/farkas-et-al-2023/"},{"title":"Feher-da-Silva et al. (2023)","text":"A standard assumption in neuroscience is that low-effort model-free learning is automatic and continuously used, whereas more complex model-based strategies are only used when the rewards they generate are worth the additional effort. We present evidence refuting this assumption. First, we demonstrate flaws in previous reports of combined model-free and model-based reward prediction errors in the ventral striatum that probably led to spurious results. More appropriate analyses yield no evidence of model-free prediction errors in this region. Second, we find that task instructions generating more correct model-based behaviour reduce rather than increase mental effort. This is inconsistent with cost-benefit arbitration between model-based and model-free strategies. Together, our data indicate that model-free learning may not be automatic. 
Instead, humans can reduce mental effort by using a model-based strategy alone rather than arbitrating between multiple strategies. Our results call for re-evaluation of the assumptions in influential theories of learning and decision-making.","link":"/opendata/feher-da-silva-et-al-2023/"},{"title":"Felso et al. (2022)","text":"While making plans, people have to decide how far out into the future they want to plan: days, months, years, or even longer. Overly short-sighted planning can harm people’s well-being in important life domains, such as health, finances, and academics. While self-report scales exist to measure people’s planning, people’s answers to such questions may be distorted by their desire to make a good impression and conform to norms and expectations. Here, we introduce a method for objectively quantifying people’s propensity to plan into the future. Our method combines a process-tracing method with Bayesian inverse reinforcement learning to measure how prone an individual is to plan multiple steps ahead. To infer this from a person’s process-tracing data, our method inverts a new resource-rational model of individual differences in planning. This model assumes that subjective planning costs are captured by a cost function with two parameters: a mental effort cost and a planning depth cost. Upon showing that our model of planning explains individual participants’ planning behavior better than the best previous models, we validate our method on simulated data and real data from a large online experiment where the cost of planning was manipulated within participants. Our results show that our method can infer individual differences in the planning depth cost. Our model provides a mechanistic account for why some people plan too shortsightedly.
The subjective planning costs inferred by our method can be used as an objective, non-self-report measure of individual differences in people’s propensity to plan into the future.","link":"/opendata/felso-et-al-2022/"},{"title":"Feng et al. (2021)","text":"Growing evidence suggests that behavioral variability plays a critical role in how humans manage the tradeoff between exploration and exploitation. In these decisions a little variability can help us to overcome the desire to exploit known rewards by encouraging us to randomly explore something else. Here we investigate how such ‘random exploration’ could be controlled using a drift-diffusion model of the explore-exploit choice. In this model, variability is controlled by either the signal-to-noise ratio with which reward is encoded (the ‘drift rate’), or the amount of information required before a decision is made (the ‘threshold’). By fitting this model to behavior, we find that while, statistically, both drift and threshold change when people randomly explore, numerically, the change in drift rate has by far the largest effect. This suggests that random exploration is primarily driven by changes in the signal-to-noise ratio with which reward information is represented in the brain.","link":"/opendata/feng-et-al-2021/"},{"title":"Ferrucci et al. (2021)","text":"A standard view in the literature is that decisions are the result of a process that accumulates evidence in favor of each alternative until such accumulation reaches a threshold and a decision is made. However, this view has been recently questioned by an alternative proposal that suggests that, instead of accumulated, evidence is combined with an urgency signal. Both theories have been mathematically formalized and supported by a variety of decision-making tasks with constant information. However, recently, tasks with changing information have been shown to be more effective for studying the dynamics of decision making.
Recent research using one such task, the tokens task, has shown that decisions are better described by an urgency mechanism than by an accumulation one. However, the results of that study could depend on a task where all fundamental information was noiseless and always present, favoring a mechanism of non-integration, such as the urgency one. Here, we wanted to address whether the same conclusions were also supported by an experimental paradigm in which sensory evidence was removed shortly after it was provided, making working memory necessary to properly perform the task. We show that, under such conditions, participants’ behavior could be explained by an urgency-gating mechanism that low-pass filters the mnemonic information and combines it with an urgency signal that grows with time, but not by an accumulation process that integrates the same mnemonic information. Thus, our study supports the idea that, under certain situations with dynamic sensory information, decisions are better explained by an urgency-gating mechanism than by an accumulation one.","link":"/opendata/ferrucci-et-al-2021/"},{"title":"Fielder & Glöckner (2012)","text":"In recent years, research on risky choice has moved beyond analyzing choices alone. Models have been suggested that aim to describe the underlying cognitive processes, and some studies have tested process predictions of these models. Prominent approaches are evidence accumulation models such as decision field theory (DFT), simple serial heuristic models such as the adaptive toolbox, and connectionist approaches such as the parallel constraint satisfaction (PCS) model. In two studies involving measures of attention and pupil dilation, we investigate hypotheses derived from these models in choices between two gambles with two outcomes each.
We show that attention to an outcome of a gamble increases with its probability and its value and that attention shifts toward the subsequently favored gamble after about two thirds of the decision process, indicating a gaze-cascade effect. Information search occurs mostly within-gambles, and the direction of search does not change over the course of decision making. Pupil dilation, which reflects both cognitive effort and arousal, increases during the decision process and increases with mean expected value. Overall, the results support aspects of automatic integration models for risky choice such as DFT and PCS, but in their current specification none of them can account for the full pattern of results.","link":"/opendata/fielder-glockner-2012/"},{"title":"Finke et al. (2023)","text":"Evidence regarding unaware differential fear conditioning in humans is mixed and even less is known about the effects of contingency awareness on appetitive conditioning. Phasic pupil dilation responses (PDR) might be more sensitive for capturing implicit learning than other measures, such as skin conductance responses (SCR). Here, we report data from two delay conditioning experiments utilizing PDR (alongside SCR and subjective assessments) to investigate the role of contingency awareness in aversive and appetitive conditioning. In both experiments, valence of unconditioned stimuli (UCS) was varied within participants by administering aversive (mild electric shocks) and appetitive UCSs (monetary rewards). Preceding visual stimuli (CSs) predicted either the reward, the shock (65% reinforcement), or neither UCS. In Exp. 1, participants were fully instructed about CS-UCS contingencies, whereas in Exp. 2, no such information was given. PDR and SCR demonstrated successful differential conditioning in Exp. 1 and in (learned) aware participants in Exp. 2. In non-instructed participants who remained fully unaware of contingencies (Exp. 
2), differential modulation of early PDR (immediately after CS onset) by appetitive cues emerged. Associations with model-derived learning parameters further suggest that early PDR in unaware participants mainly reflect implicit learning of expected outcome value, whereas early PDR in aware (instructed/learned-aware) participants presumably index attentional processes (related to uncertainty/prediction error processing). Similar, but less clear results emerged for later PDR (preceding UCS onset). Our data argue in favor of a dual-process account of associative learning, suggesting that value-related processing can take place irrespective of mechanisms involved in conscious memory formation.","link":"/opendata/finke-et-al-2023/"},{"title":"Fleming et al. (2023)","text":"An important finding in the cognitive effort literature has been that sensitivity to the costs of effort varies between individuals, suggesting that some people find effort more aversive than others. It has been suggested this may explain individual differences in other aspects of cognition; in particular that greater effort sensitivity may underlie some of the symptoms of conditions such as depression and schizophrenia. In this paper, we highlight a major problem with existing measures of cognitive effort that hampers this line of research, specifically the confounding of effort and difficulty. This means that behaviour thought to reveal effort costs could equally be explained by cognitive capacity, which influences the frequency of success and thereby the chance of obtaining reward. To address this shortcoming, we introduce a new test, the Number Switching Task (NST), specially designed such that difficulty will be unaffected by the effort manipulation and can easily be standardised across participants. In a large, online sample, we show that these criteria are met successfully and reproduce classic effort discounting results with the NST. 
We also demonstrate the use of Bayesian modelling with this task, producing behavioural parameters which can be associated with other measures, and report a preliminary association with the Need for Cognition scale.","link":"/opendata/fleming-et-al-2023/"},{"title":"Fontanesi et al. (2019)","text":"Reinforcement learning (RL) models describe how humans and animals learn by trial-and-error to select actions that maximize rewards and minimize punishments. Traditional RL models focus exclusively on choices, thereby ignoring the interactions between choice preference and response time (RT), or how these interactions are influenced by contextual factors. However, in the field of perceptual decision-making, such interactions have proven to be important to dissociate between different underlying cognitive processes. Here, we investigated such interactions to shed new light on overlooked differences between learning to seek rewards and learning to avoid losses. We leveraged behavioral data from four RL experiments, which feature manipulations of two factors: outcome valence (gains vs. losses) and feedback information (partial vs. complete feedback). A Bayesian meta-analysis revealed that these contextual factors differently affect RTs and accuracy: While valence only affects RTs, feedback information affects both RTs and accuracy. To dissociate between the latent cognitive processes, we jointly fitted choices and RTs across all experiments with a Bayesian, hierarchical diffusion decision model (DDM). We found that the feedback manipulation affected drift rate, threshold, and non-decision time, suggesting that it was not a mere difficulty effect. Moreover, valence affected non-decision time and threshold, suggesting a motor inhibition in punishing contexts. To better understand the learning dynamics, we finally fitted a combination of RL and DDM (RLDDM). 
We found that while the threshold was modulated by trial-specific decision conflict, the non-decision time was modulated by the learned context valence. Overall, our results illustrate the benefits of jointly modeling RTs and choice data during RL, to reveal subtle mechanistic differences underlying decisions in different learning contexts.","link":"/opendata/fontanesi-et-al-2019/"},{"title":"Fontanesi et al. (2022)","text":"Recent years have witnessed a surge of interest in understanding the neural and cognitive dynamics that drive sequential decision making in general and foraging behavior in particular. Due to the intrinsic properties of most sequential decision-making paradigms, however, previous research in this area has suffered from the difficulty of disentangling properties of the decision related to (a) the value of switching to a new patch, which increases monotonically, and (b) the conflict experienced between choosing to stay or leave, which first increases but then decreases after reaching the point of indifference between staying and switching. Here, we show how the same problems arise in studies of sequential decision-making under risk, and how they can be overcome, taking as a specific example recent research on the ‘pig’ dice game. In each round of the ‘pig’ dice game, people roll a die and accumulate rewards until they either decide to proceed to the next round or lose all rewards. By combining simulation-based dissections of the task structure with two experiments, we show how an extension of the standard paradigm, together with cognitive modeling of decision-making processes, allows us to disentangle properties related to either switch value or choice conflict.
Our study elucidates the cognitive mechanisms of sequential decision making and underscores the importance of avoiding potential pitfalls of paradigms that are commonly used in this research area.","link":"/opendata/fontanesi-et-al-2022/"},{"title":"Forbes & Bennett (2023)","text":"The valence of an individual’s emotional response to an event is often thought to depend on their prior expectations for the event: better-than-expected outcomes produce positive affect and worse-than-expected outcomes produce negative affect. In recent years, this hypothesis has been instantiated within influential computational models of subjective affect that assume the valence of affect is driven by reward prediction errors. However, there remain a number of open questions regarding this association. In this project, we investigated the moderating effects of outcome valence and decision context (Experiment 1: free vs. forced choices; Experiment 2: trials with versus trials without counterfactual feedback) on the effects of reward prediction errors on subjective affect. We conducted two large-scale online experiments (N = 300 in total) of general-population samples recruited via Prolific to complete a risky decision-making task with embedded high-resolution sampling of subjective affect. Hierarchical Bayesian computational modelling revealed that the effects of reward prediction errors on subjective affect were significantly moderated by both outcome valence and decision context. Specifically, after accounting for concurrent reward amounts we found evidence that only negative reward prediction errors (worse-than-expected outcomes) influenced subjective affect, with no significant effect of positive reward prediction errors (better-than-expected outcomes). Moreover, these effects were only apparent on trials in which participants made a choice freely (but not on forced-choice trials) and when counterfactual feedback was absent (but not when counterfactual feedback was present). 
These results deepen our understanding of the effects of reward prediction errors on subjective affect.","link":"/opendata/forbes-bennett-2023/"},{"title":"Forest et al. (2023)","text":"Trajectories of cognitive and neural development suggest that, despite early emergence, the ability to extract environmental patterns changes across childhood. Here, 5- to 9-year-olds and adults (N = 211, 110 females, in a large Canadian city) completed a memory test assessing what they remembered after watching a stream of shape triplets: the particular sequence in which the shapes occurred and/or their group-level structure. After accounting for developmental improvements in overall memory, all ages remembered specific transitions, while memory for group membership was only observed in older children and adults (age by test-type interaction η2 = .05). Thus, while young children form memories for specifics of structured experience, memory for derived associations is refined later-underscoring that adults and young children form different memories despite identical experience.","link":"/opendata/forest-et-al-2023/"},{"title":"Fornari et al. (2023)","text":"Learning to predict action outcomes in morally conflicting situations is essential for social decision-making but poorly understood. Here we tested which forms of Reinforcement Learning Theory capture how participants learn to choose between self-money and other-shocks, and how they adapt to changes in contingencies. We find choices were better described by a reinforcement learning model based on the current value of separately expected outcomes than by one based on the combined historical values of past outcomes. Participants track expected values of self-money and other-shocks separately, with the substantial individual difference in preference reflected in a valuation parameter balancing their relative weight. This valuation parameter also predicted choices in an independent costly helping task. 
The expectations of self-money and other-shocks were biased toward the favored outcome, but fMRI revealed this bias to be reflected in the ventromedial prefrontal cortex while the pain-observation network represented pain prediction errors independently of individual preferences.","link":"/opendata/fornari-et-al-2023/"},{"title":"Forys et al. (2022)","text":"We must often decide how much effort to exert or withhold to avoid undesirable outcomes or obtain rewards. In depression and anxiety, levels of avoidance can be excessive and reward-seeking may be reduced. Yet outstanding questions remain about the links between motivated action/inhibition and anxiety and depression levels, and whether they differ between men and women. Here we examined the relationship between anxiety and depression scores, and performance on effortful active and inhibitory avoidance (Study 1) and reward seeking (Study 2) in humans. Undergraduates and paid online workers (N_avoid = 545, N_reward = 310; N_female = 368, N_male = 450; mean age = 22.58, age range = 17-62) were assessed on the Beck Depression Inventory II (BDI) and the Beck Anxiety Inventory (BAI) and performed an instructed online avoidance or reward-seeking task. Participants had to make multiple presses on active trials and withhold presses on inhibitory trials to avoid an unpleasant sound (Study 1) or obtain points towards a monetary reward (Study 2). Overall, men deployed more effort than women in both avoidance and reward-seeking, and anxiety scores were negatively associated with active reward-seeking performance based on sensitivity scores. Gender interacted with anxiety scores and inhibitory avoidance performance, such that women with higher anxiety showed worse avoidance performance. Our results illuminate effects of gender in the relationship between anxiety and depression levels and the motivation to actively and effortfully respond to obtain positive and avoid negative outcomes.
We must often take or withhold effortful action to avoid unpleasant outcomes or obtain rewards. Depression and anxiety can impact these behaviours’ effectiveness, but the roles of avoidance in depression and reward-seeking in anxiety are not fully understood. Gender differences in avoidance and reward-seeking have also not been examined. We present a task in which community participants with a range of anxiety and depression levels made or withheld button presses to avoid hearing an unpleasant sound or obtain a reward. Men deployed more effort than women in avoidance, and women with higher anxiety scores had lower avoidance performance than men. We illuminate gender differences in how depressive and anxiety scores impact our ability to avoid threats and obtain rewards.","link":"/opendata/forys-et-al-2022/"},{"title":"Fox et al. (2023)","text":"Prior studies have found metacognitive impairments are linked to a transdiagnostic dimension of anxious-depression, manifesting as reduced confidence in performance (‘metacognitive bias’). However, previous work has been cross-sectional and so it is unclear if under-confidence is a trait-like marker of anxious-depression vulnerability, or if it resolves when anxious-depression improves. Data were collected as part of the ‘Precision in Psychiatry’ study, a large-scale transdiagnostic, four-week observational study of individuals initiating internet-based cognitive behavioural therapy (iCBT) or antidepressant medication. Self-reported clinical questionnaires and perceptual task performance were gathered to assess anxious-depression and metacognitive bias at baseline and four-week follow-up. Primary analyses were conducted for individuals who received iCBT (n=649), with comparisons between smaller samples that received antidepressant medication (n=88) and a control group receiving no intervention (n=82). 
Prior to receiving treatment, anxious-depression severity was associated with under-confidence in performance in the iCBT arm, replicating previous work. From baseline to follow-up, levels of anxious-depression were significantly reduced, and this was accompanied by a significant increase in metacognitive confidence (B=0.17, SE=0.02, p<0.001). These changes were correlated (r(647)=-0.12, p=0.002); those with the greatest reductions in anxious-depression levels had the largest increase in confidence. In the antidepressant arm, anxious-depression reduced (B=-0.61, SE=0.09, p<0.001) and confidence increased (B=0.31, SE=0.08, p<0.001). Among controls, confidence remained stable from baseline to follow-up (B=0.11, SE=0.07, p=0.103). Metacognitive biases in anxious-depression are state-dependent; when symptoms improve with treatment, so does confidence in performance. Our results suggest this is not specific to the type of intervention.","link":"/opendata/fox-et-al-2023/"},{"title":"Fradkin et al. (2020)","text":"Obsessive compulsive (OC) symptoms involve excessive information gathering (e.g., checking, reassurance-seeking), and uncertainty about possible, often catastrophic, future events. Here we propose that these phenomena are the result of excessive uncertainty regarding state transitions (transition uncertainty): a computational impairment in Bayesian inference leading to a reduced ability to use the past to predict the present and future, and to oversensitivity to feedback (i.e. prediction errors). Using a computational model of Bayesian learning under uncertainty in a reversal learning task, we investigate the relationship between OC symptoms and transition uncertainty. Individuals high and low in OC symptoms performed a task in which they had to detect shifts (i.e. transitions) in cue-outcome contingencies. Modeling subjects' choices was used to estimate each individual participant's transition uncertainty and associated responses to feedback.
We examined both an optimal observer model and an approximate Bayesian model in which participants were assumed to attend (and learn about) only one of several cues on each trial. Results suggested that participants were more likely to distribute attention across cues, in accordance with the optimal observer model. As hypothesized, participants with higher OC symptoms exhibited increased transition uncertainty, as well as a pattern of behavior potentially indicative of a difficulty in relying on learned contingencies, with no evidence for perseverative behavior. Increased transition uncertainty compromised these individuals' ability to predict ensuing feedback, rendering them more surprised by expected outcomes. However, no evidence for excessive belief updating was found. These results highlight a potential computational basis for OC symptoms and obsessive compulsive disorder (OCD). The fact that OC symptoms predicted a decreased reliance on the past rather than perseveration challenges preconceptions of OCD as a disorder of inflexibility. Our results have implications for the understanding of the neurocognitive processes leading to excessive uncertainty and distrust of past experiences in OCD.","link":"/opendata/fradkin-et-al-2020/"},{"title":"Fradkin & Eldar (2022)","text":"To attain goals, people must proactively prevent interferences and react to interferences once they occur. Whereas most research focuses on how people deal with external interferences, here we investigate the use of proactive and reactive control in dealing with unwanted thoughts. To examine this question, we asked people to generate an association to each of several repeating cue words, while forbidding the repetition of associations. Reactively rejecting and replacing unwanted repeated associations after they occur entails slower response times. Conversely, proactive control entails constricting the search space and thus faster response times.
To gain further insight into different potential proactive thought control mechanisms, we augmented the analysis of raw response times with a novel, hypothesis-based, tractable computational model describing how people serially sample associations. Our results indicate that people primarily react to unwanted thoughts after they occur. Yet, we found evidence for two latent proactive control mechanisms: one that allows people to mitigate the episodic strengthening of repeated thoughts, and another that helps avoid looping in a repetitive thought. Exploratory analysis showed a relationship between model parameters and self-reported individual differences in the control over unwanted thoughts in daily life. The findings indicate the novel task and model can advance our understanding of how people can and cannot control their thoughts and memories, and benefit future research on the mechanisms responsible for unwanted thought in different psychiatric conditions. Finally, we discuss implications concerning the involvement of associative thinking and various control processes in semantic fluency, decision-making and creativity.","link":"/opendata/fradkin-eldar-2022/"},{"title":"Franco et al. (2022)","text":"The survival of human organisms depends on our ability to solve complex tasks in the face of limited cognitive resources. However, little is known about the factors that drive the complexity of those tasks. Here, building on insights from computational complexity theory, we quantify the computational hardness of cognitive tasks using a set of task-independent metrics related to the computational resource requirements of individual instances of a task. We then examine the relation between those metrics and human behavior and find that they predict both time spent on a task as well as accuracy in three canonical cognitive tasks. 
Our findings demonstrate that performance in cognitive tasks can be predicted based on generic metrics of their inherent computational hardness.","link":"/opendata/franco-et-al-2022/"},{"title":"Franklin & Frank (2020)","text":"Humans routinely face novel environments in which they have to generalize in order to act adaptively. However, doing so involves the non-trivial challenge of deciding which aspects of a task domain to generalize. While it is sometimes appropriate to simply re-use a learned behavior, often adaptive generalization entails recombining distinct components of knowledge acquired across multiple contexts. Theoretical work has suggested a computational trade-off in which it can be more or less useful to learn and generalize aspects of task structure jointly or compositionally, depending on previous task statistics, but it is unknown whether humans modulate their generalization strategy accordingly. Here we develop a series of navigation tasks that separately manipulate the statistics of goal values (what to do) and state transitions (how to do it) across contexts and assess whether human subjects generalize these task components separately or conjunctively. We find that human generalization is sensitive to the statistics of the previously experienced task domain, favoring compositional or conjunctive generalization when the task statistics are indicative of such structures, and a mixture of the two when they are more ambiguous. These results support a normative meta-generalization account and suggests that people not only generalize previous task components but also generalize the statistical structure most likely to support generalization.","link":"/opendata/franklin-frank-2020/"},{"title":"Frey et al. (2017)","text":"To what extent is there a general factor of risk preference, R, akin to g, the general factor of intelligence? Can risk preference be regarded as a stable psychological trait? 
These conceptual issues persist because few attempts have been made to integrate multiple risk-taking measures, particularly measures from different and largely unrelated measurement traditions (self-reported propensity measures assessing stated preferences, incentivized behavioral measures eliciting revealed preferences, and frequency measures assessing actual risky activities). Adopting a comprehensive psychometric approach (1507 healthy adults completing 39 risk-taking measures, with a subsample of 109 participants completing a retest session after 6 months), we provide a substantive empirical foundation to address these issues, finding that correlations between propensity and behavioral measures were weak. Yet, a general factor of risk preference, R, emerged from stated preferences and generalized to specific and actual real-world risky activities (for example, smoking). Moreover, R proved to be highly reliable across time, indicative of a stable psychological trait. Our findings offer a first step toward a general mapping of the construct risk preference, which encompasses both general and domain-specific components, and have implications for the assessment of risk preference in the laboratory and in the wild.","link":"/opendata/frey-et-al-2017/"},{"title":"Friehs et al. (2023)","text":"Stopping an already initiated action is crucial for human everyday behavior and empirical evidence points toward the prefrontal cortex playing a key role in response inhibition. Two regions that have been consistently implicated in response inhibition are the right inferior frontal gyrus (IFG) and the more superior region of the dorsolateral prefrontal cortex (DLPFC). The present study investigated the effect of offline 1 Hz transcranial magnetic stimulation (TMS) over the right IFG and DLPFC on performance in a gamified stop-signal task (SSG). 
We hypothesized that perturbing each area would decrease performance in the SSG, albeit with a quantitative difference in the performance decrease after stimulation. After offline TMS, functional short-term reorganization is possible, and the domain-general area (i.e., the right DLPFC) might be able to compensate for the perturbation of the domain-specific area (i.e., the right IFG). Results showed that 1 Hz offline TMS over the right DLPFC and the right IFG at 110% intensity of the resting motor threshold had no effect on performance in the SSG. In fact, evidence in favor of the null hypothesis was found. One intriguing interpretation of this result is that within-network compensation was triggered, canceling out the potential TMS effects as has been suggested in recent theorizing on TMS effects, although the presented results do not unambiguously identify such compensatory mechanisms. Future studies may result in further support for this hypothesis, which is especially important when studying reactive response in complex environments.","link":"/opendata/friehs-et-al-2023/"},{"title":"Frischkorn & von-Bastian (2021)","text":"Process-Overlap Theory (POT) suggests that measures of cognitive abilities sample from sets of independent cognitive processes. These cognitive processes can be separated into domain-general executive processes, sampled by the majority of cognitive ability measures, and domain-specific processes, sampled only by measures within a certain domain. According to POT, fluid intelligence measures are related because different tests sample similar domain-general executive cognitive processes to some extent. Re-analyzing data from a study by De Simoni and von Bastian (2018), we assessed domain-general variance from executive processing tasks measuring inhibition, shifting, and efficiency of removal from working memory, as well as examined their relation to a domain-general factor extracted from fluid intelligence measures. 
The results showed that domain-general factors reflecting general processing speed were moderately and negatively correlated with the domain-general fluid intelligence factor (r = -.17 to -.36). However, domain-general factors isolating variance specific to inhibition, shifting, and removal showed only small and inconsistent correlations with the domain-general fluid intelligence factor (r = .02 to .22). These findings suggest that (1) executive processing tasks sample only a few domain-general executive processes also sampled by fluid intelligence measures, as well as (2) that domain-general speed of processing contributes more strongly to individual differences in fluid intelligence than do domain-general executive processes.","link":"/opendata/frischkorn-von-bastian-2021/"},{"title":"Fromer et al. (2021)","text":"The amount of mental effort we invest in a task is influenced by the reward we can expect if we perform that task well. However, some of the rewards that have the greatest potential for driving these efforts are partly determined by factors beyond one’s control. In such cases, effort has more limited efficacy for obtaining rewards. According to the Expected Value of Control theory, people integrate information about the expected reward and efficacy of task performance to determine the expected value of control, and then adjust their control allocation (i.e., mental effort) accordingly. Here we test this theory’s key behavioral and neural predictions. We show that participants invest more cognitive control when this control is more rewarding and more efficacious, and that these incentive components separately modulate EEG signatures of incentive evaluation and proactive control allocation. Our findings support the prediction that people combine expectations of reward and efficacy to determine how much effort to invest.","link":"/opendata/fromer-et-al-2021/"},{"title":"Fung et al.
(2019)","text":"Theoretical models distinguish between neural responses elicited by distal threats and those evoked by more immediate threats [1-3]. Specifically, slower cognitive fear responses towards distal threats involve a network of brain regions including the ventral hippocampus (vHPC) and medial prefrontal cortex (mPFC), while immediate reactive fear responses rely on regions such as the periaqueductal grey [4,5]. However, it is unclear how anxiety and its neural substrates relate to these distinct defensive survival circuits. We tested whether individual differences in trait anxiety would impact escape behaviour and neural responses to slow and fast attacking predators: conditions designed to evoke cognitive and reactive fear, respectively. Behaviourally, we found that trait anxiety was not related to escape decisions for fast threats, but individuals with higher trait anxiety escaped earlier during slow threats. Functional magnetic resonance imaging showed that when subjects faced slow threats, trait anxiety positively correlated with activity in the vHPC, mPFC, amygdala and insula. Furthermore, the strength of functional coupling between two components of the cognitive circuit-the vHPC and mPFC-was correlated with the degree of trait anxiety. This suggests that anxiety predominantly affects cognitive fear circuits that are involved in volitional strategic escape.","link":"/opendata/fung-et-al-2019/"},{"title":"Funkhouser et al. (2020)","text":"The popularity of network analysis in psychopathology research has increased exponentially in recent years. Yet, little research has examined the replicability of cross-sectional psychopathology network models, and those that have done so used single items for symptoms rather than multi-item scales.
The present study therefore examined the replicability and generalizability of regularized partial correlation networks of internalizing symptoms within and across 5 samples (total N = 2,573) using the Inventory for Depression and Anxiety Symptoms, a factor analytically derived measure of individual internalizing symptoms. As different metrics may yield different conclusions about the replicability of network parameters, we examined both global and specific metrics of similarity between networks. Correlations within and between nonclinical samples suggested considerable global similarities in network structure (rs = .53-.87) and centrality strength (rs = .37-.86), but weaker similarities in network structure (rs = .36-.66) and centrality (rs = .04-.54) between clinical and nonclinical samples. Global strength (i.e., connectivity) did not significantly differ across all 5 networks and few edges (0-5.5%) significantly differed between networks. Specific metrics of similarity indicated that, on average, approximately 80% of edges were consistently estimated within and between all 5 samples. The most central symptom (i.e., dysphoria) was consistent within and across samples, but there were few other matches in centrality rank-order. In sum, there were considerable similarities in network structure, the presence and sign of individual edges, and the most central symptom within and across internalizing symptom networks estimated from nonclinical samples, but global metrics suggested network structure and symptom centrality had weak to moderate generalizability from nonclinical to clinical samples.","link":"/opendata/funkhouser-et-al-2020/"},{"title":"Gagne et al. 
(2020)","text":"Using a contingency volatility manipulation, we tested the hypothesis that difficulty adapting probabilistic decision-making to second-order uncertainty might reflect a core deficit that cuts across anxiety and depression and holds regardless of whether outcomes are aversive or involve reward gain or loss. We used bifactor modeling of internalizing symptoms to separate symptom variance common to both anxiety and depression from that unique to each. Across two experiments, we modeled performance on a probabilistic decision-making under volatility task using a hierarchical Bayesian framework. Elevated scores on the common internalizing factor, with high loadings across anxiety and depression items, were linked to impoverished adjustment of learning to volatility regardless of whether outcomes involved reward gain, electrical stimulation, or reward loss. In particular, high common factor scores were linked to dampened learning following better-than-expected outcomes in volatile environments. No such relationships were observed for anxiety- or depression-specific symptom factors.","link":"/opendata/gagne-et-al-2020/"},{"title":"Gagne et al. (2022)","text":"Individuals prone to anxiety and depression often report beliefs and make judgements about themselves that are more negative than those reported by others. We use computational modeling of a richly naturalistic task to disentangle the role of negative priors versus negatively biased belief updating and to investigate their association with different dimensions of Internalizing psychopathology. Undergraduate participants first provided profiles for a hypothetical tech internship. They then viewed pairs of other profiles and selected the individual they would prefer to work alongside out of each pair. 
In a subsequent phase of the experiment, participants made judgments about their relative popularity as hypothetical internship partners both before any feedback and after each of 20 items of feedback revealing whether or not they had been selected as the preferred teammate from a given pairing. Scores on latent factors of general negative affect, anxiety-specific affect and depression-specific affect were estimated using participants’ self-report scores on standardized measures of anxiety and depression together with factor loadings from a bifactor analysis conducted previously. Higher scores on the depression-specific factor were linked to more negative prior beliefs but were not associated with differences in belief updating. In contrast, higher scores on the anxiety-specific factor were associated with a negative bias in belief updating but no difference in prior beliefs. These findings indicate that, to at least some extent, distinct processes may impact the formation of belief priors and in-the-moment belief updating and that these processes may be differentially disrupted in depression and anxiety. Future directions for enquiry include examination of the possibility that prior belief biases in depression might reflect generalization from prior experiences or global schema whereas belief updating biases in anxiety might be more situationally specific.","link":"/opendata/gagne-et-al-2022/"},{"title":"Garcia et al. (2023)","text":"Standard models of decision-making assume each option is associated with subjective value, regardless of whether this value is inferred from experience (experiential) or explicitly instructed probabilistic outcomes (symbolic). In this study, we present results that challenge the assumption of unified representation of experiential and symbolic value. Across nine experiments, we presented participants with hybrid decisions between experiential and symbolic options. 
Participants’ choices exhibited a pattern consistent with a systematic neglect of the experiential values. This normatively irrational decision strategy held after accounting for alternative explanations, and persisted even when it bore an economic cost. Overall, our results demonstrate that experiential and symbolic values are not symmetrically considered in hybrid decisions, suggesting they recruit different representational systems that may be assigned different priority levels in the decision process. These findings challenge the dominant models commonly used in value-based decision-making research.","link":"/opendata/garcia-et-al-2023/"},{"title":"Garrett & Daw (2020)","text":"Deciding which options to engage, and which to forego, requires developing accurate beliefs about the overall distribution of prospects. Here we adapt a classic prey selection task from foraging theory to examine how individuals keep track of an environment’s reward rate and adjust choices in response to its fluctuations. Preference shifts were most pronounced when the environment improved compared to when it deteriorated. This is best explained by a trial-by-trial learning model in which participants estimate the reward rate with upward vs. downward changes controlled by separate learning rates. A failure to adjust expectations sufficiently when an environment becomes worse leads to suboptimal choices: options that are valuable given the environmental conditions are rejected in the false expectation that better options will materialize. These findings offer a previously unappreciated parallel, in the serial choice setting, to observations of asymmetric updating and resulting biased (often overoptimistic) estimates in other domains.","link":"/opendata/garrett-daw-2020/"},{"title":"Garrett & Sharot (2023)","text":"In a recent paper, Burton et al. claim that individuals update beliefs to a greater extent when learning an event is less likely compared to more likely than expected. 
Here, we investigate Burton et al.’s findings. First, we show how Burton et al.’s data do not in fact support a belief update bias for neutral events. Next, in an attempt to replicate their findings, we collect a new data set employing the original belief update task design, but with neutral events. A belief update bias for neutral events is not observed. Finally, we highlight the statistical errors and confounds in Burton et al.’s design and analysis. This includes mis-specifying a reinforcement learning approach to model the data and failing to follow standard computational model fitting sanity checks such as parameter recovery, model comparison and out of sample prediction. Together, the results find little evidence for biased updating for neutral events.","link":"/opendata/garrett-sharot-2023/"},{"title":"Geddert & Egner (2022)","text":"Adaptive behavior requires the ability to focus on a current task and protect it from distraction (cognitive stability), as well as the ability to rapidly switch to another task in light of changing circumstances (cognitive flexibility). Cognitive stability and flexibility have been conceptualized as opposite endpoints on a stability-flexibility trade-off continuum, implying an obligatory reciprocity between the two: Greater flexibility necessitates less stability, and vice versa. Surprisingly, rigorous empirical tests of this critical assumption are lacking. Here, we acquired simultaneous measurements of cognitive stability (congruency effects) and flexibility (switch costs) on the same stimuli within the same task while independently varying contextual demands on these functions with block-wise manipulations of the proportion of incongruent trials and task switches, respectively. If cognitive stability and flexibility are reciprocal, increases in flexibility in response to higher switch rates should lead to commensurate decreases in stability, and increases in stability in response to more frequent incongruent trials should result in decreased flexibility. Across three experiments, using classic cued task-switching (Experiments 1 and 3) and attentional set-shifting (Experiment 2) protocols, we found robust evidence against an obligatory stability-flexibility trade-off. Although we observed the expected contextual adaptation of stability and flexibility to changing demands, strategic adjustments in stability had little influence on flexibility, and vice versa. 
These results refute the long-held assumption of a stability-flexibility trade-off, documenting instead that the cognitive processes mediating these functions can be regulated independently: it is possible to be both stable and flexible at the same time.","link":"/opendata/geddert-egner-2022/"},{"title":"Gera et al. (2022)","text":"Habits are a prominent feature of both adaptive and maladaptive behavior. Yet, despite substantial research efforts, there are currently no well-established experimental procedures for habit induction in humans. It is likely that laboratory experimental settings, as well as the session-based structure typically used in controlled experiments (also outside the lab), impose serious constraints on studying habits and other effects that are sensitive to context, motivation, and training duration and frequency. To overcome these challenges, we devised a unique real-world free-operant task structure, implemented through a novel smartphone application, whereby participants could freely enter the app (24 hours a day, 7 days a week) to win rewards. This procedure is free of typical laboratory constraints, yet well-controlled. Using the canonical sensitivity to outcome devaluation criterion, we successfully demonstrated habit formation as a function of training duration, a longstanding challenge in the field. Additionally, we show a positive relationship between multiple facets of engagement/motivation and goal-directedness. We suggest that our novel paradigm can be used to study the neurobehavioral and psychological mechanisms underlying habits in humans. Moreover, the real-world free-operant framework can potentially be used to examine other instrumental behavior-related questions, with greater face validity in naturalistic conditions.","link":"/opendata/gera-et-al-2022/"},{"title":"Gerhardsson et al. (2020)","text":"To learn from feedback (trial and error) is essential for all species. 
Insufficient sleep has been found to reduce the sensitivity to feedback as well as increase reward sensitivity. To determine whether insufficient sleep alters learning from positive and negative feedback, healthy participants (n = 32, mean age 29.0 years, 18 women) were tested once after normal sleep (8 hr time in bed for 2 nights) and once after 2 nights of sleep restriction (4 hr/night) on a probabilistic selection task where learning behaviour was evaluated in three ways: as generalised learning, short-term win-stay/lose-shift learning strategies, and trial-by-trial learning rate. Sleep restriction did not alter the sensitivity to either positive or negative feedback on generalised learning. Also, short-term win-stay/lose-shift strategies were not affected by sleep restriction. Similarly, results from computational models that assess the trial-by-trial update of stimuli value demonstrated no difference between sleep conditions after the first block. However, a slower learning rate from negative feedback when evaluating all learning blocks was found after sleep restriction. Despite a marked increase in sleepiness and slowed learning rate for negative feedback, sleep restriction did not appear to alter strategies and generalisation of learning from positive or negative feedback.","link":"/opendata/gerhardsson-et-al-2020/"},{"title":"Gerlicher et al. (2022)","text":"Although reconsolidation-based interventions constitute a promising new avenue to treating fear and anxiety disorders, the success of the intervention is not guaranteed. The initiation of memory reconsolidation is dependent on whether a mismatch between the experienced and predicted outcome, a prediction error (PE), occurs during fear memory reactivation. It remains elusive, however, whether any type of PE renders fear memories susceptible to reconsolidation disruption. 
Here, we investigated whether a value PE, elicited by an outcome that is better or worse than expected, is necessary to make fear memories susceptible to reconsolidation disruption or whether a model-based identity PE, i.e., a PE elicited by an outcome equally aversive but different than expected, would be sufficient. Blocking beta-adrenergic receptors with propranolol HCl after reactivation did not, however, reduce the expression of fear after either type of PE. Instead, we observed intact fear memory expression 24 h after reactivation in the value-, identity- and no-PE control groups. The present results do not corroborate our earlier findings of reconsolidation disruption and point towards challenges that the field is currently facing in observing evidence for memory reconsolidation at all. We provide potential explanations for the unexpected failure of replicating reconsolidation disruption and discuss future directions.","link":"/opendata/gerlicher-et-al-2022/"},{"title":"Gershman & Bhui (2020)","text":"Discounting of future rewards is traditionally interpreted as evidence for an intrinsic preference in favor of sooner rewards. However, temporal discounting can also arise from internal uncertainty in value representations of future events, if one assumes that noisy mental simulations of the future are rationally combined with prior beliefs. Here, we further develop this idea by considering how simulation noise may be adaptively modulated by task demands, based on principles of rational inattention. We show how the optimal allocation of mental effort can give rise to the magnitude effect in intertemporal choice. In a re-analysis of two prior data sets, and in another experiment, we reveal several behavioral signatures of this theoretical account, tying choice stochasticity to the magnitude effect. 
We conclude that some aspects of temporal discounting may result from a cognitively plausible adaptive response to the costs of information processing.","link":"/opendata/gershman-bhui-2020/"},{"title":"Gillan et al. (2016)","text":"Prominent theories suggest that compulsive behaviors, characteristic of obsessive-compulsive disorder and addiction, are driven by shared deficits in goal-directed control, which confers vulnerability for developing rigid habits. However, recent studies have shown that deficient goal-directed control accompanies several disorders, including those without an obvious compulsive element. Reasoning that this lack of clinical specificity might reflect broader issues with psychiatric diagnostic categories, we investigated whether a dimensional approach would better delineate the clinical manifestations of goal-directed deficits. Using large-scale online assessment of psychiatric symptoms and neurocognitive performance in two independent general-population samples, we found that deficits in goal-directed control were most strongly associated with a symptom dimension comprising compulsive behavior and intrusive thought. This association was highly specific when compared to other non-compulsive aspects of psychopathology. These data showcase a powerful new methodology and highlight the potential of a dimensional, biologically-grounded approach to psychiatry research.","link":"/opendata/gillan-et-al-2016/"},{"title":"Gillan et al. (2020)","text":"Goal-directed control guides optimal decision-making and it is an important cognitive faculty that protects against developing habits. Previous studies have found some evidence of goal-directed deficits when healthy individuals are stressed, and in psychiatric conditions characterised by compulsive behaviours and anxiety. Here, we tested if goal-directed control is affected by state anxiety, which might explain the former results. 
We carried out a causal test of this hypothesis in two experiments (between-subject N = 88; within-subject N = 50) that used the inhalation of hypercapnic gas (7.5% CO2) to induce an acute state of anxiety in healthy volunteers. In a third experiment (N = 1413), we used a correlational design to test if real-life anxiety-provoking events (panic attacks, stressful events) are associated with impaired goal-directed control. In the former two causal experiments, we induced a profoundly anxious state, both physiologically and psychologically, but this did not affect goal-directed performance. In the third, correlational, study, we found no evidence for an association between goal-directed control and panic attacks or stressful life events, over and above variance accounted for by trait differences in compulsivity. In sum, three complementary experiments found no evidence that anxiety impairs goal-directed control in human subjects.","link":"/opendata/gillan-et-al-2020/"},{"title":"Giron et al. (2022)","text":"Analogies to stochastic optimization are common in developmental psychology, describing a gradual reduction in randomness (cooling off) over the lifespan. Yet for lack of concrete empirical comparison, there is ambiguity in interpreting this analogy. Using data from n=281 participants ages 5 to 55, we show that cooling off does not only apply to the single dimension of randomness. Rather, development resembles an optimization process along multiple dimensions of learning (i.e., reward generalization, uncertainty-directed exploration, and random temperature). What begins as large tweaks in the parameters that define learning during childhood plateaus and converges to efficient parameter constellations in adulthood. The developmental trajectory of human parameters is strikingly similar to several stochastic optimization algorithms, yet we observe intriguing differences in convergence. 
Notably, none of the optimization algorithms discovered reliably better regions of the strategy space than adult participants, suggesting a remarkable efficiency of human development.","link":"/opendata/giron-et-al-2022/"},{"title":"Gladhill et al. (2022)","text":"Previous research has demonstrated that emotional faces affect time perception; however, the underlying mechanisms are not fully understood. Earlier attempts focus on effects at the different stages of the pacemaker-accumulator model (clock, memory, and/or decision-making) including an increase in pacemaker rate or accumulation rate via arousal or attention, respectively, or by biasing decision-making. A visual temporal bisection task with sub-second intervals was conducted in two groups to further investigate these effects; one group was strictly behavioral whereas the second included a 64-channel electroencephalogram (EEG). To separate the influence of face and timing responses, participants timed a visual stimulus, temporally flanked (before and after) by two faces, either negative or neutral, creating three trial-types: Neg→Neut, Neut→Neg, or Neut→Neut. We found a leftward shift in bisection point (BP) in Neg→Neut relative to Neut→Neut, suggesting an overestimation of the temporal stimulus when preceded by a negative face. Neurally, we found the face-responsive N170 was larger for negative faces and the N1 and contingent negative variation (CNV) were larger when the temporal stimulus was preceded by a negative face. Additionally, there was an interaction effect between condition and response for the late positive component of timing (LPCt) and a significant difference between response (short/long) in the neutral condition. 
We concluded that a preceding negative face affects the clock stage, leading to more pulses being accumulated, either through attention or arousal, as indexed by a larger N1, CNV, and N170; whereas viewing a negative face afterwards impacted decision-making mechanisms, as evidenced by the LPCt.","link":"/opendata/gladhill-et-al-2022/"},{"title":"Glickman et al. (2022)","text":"Evidence integration is a normative algorithm for choosing between alternatives with noisy evidence, which has been successful in accounting for vast amounts of behavioural and neural data. However, this mechanism has been challenged by non-integration heuristics, and tracking decision boundaries has proven elusive. Here we first show that the decision boundaries can be extracted using a model-free behavioural method termed decision classification boundary, which optimizes choice classification based on the accumulated evidence. Using this method, we provide direct support for evidence integration over non-integration heuristics, show that the decision boundaries collapse across time and identify an integration bias whereby incoming evidence is modulated based on its consistency with preceding information. This consistency bias, which is a form of pre-decision confirmation bias, was supported in four cross-domain experiments, showing that choice accuracy and decision confidence are modulated by stimulus consistency. Strikingly, despite its seeming sub-optimality, the consistency bias fosters performance by enhancing robustness to integration noise.","link":"/opendata/glickman-et-al-2022/"},{"title":"Glockner & Pachur (2012)","text":"In the behavioral sciences, a popular approach to describe and predict behavior is cognitive modeling with adjustable parameters (i.e., which can be fitted to data). Modeling with adjustable parameters allows, among other things, measuring differences between people. At the same time, parameter estimation also bears the risk of overfitting. 
Are individual differences as measured by model parameters stable enough to improve the ability to predict behavior as compared to modeling without adjustable parameters? We examined this issue in cumulative prospect theory (CPT), arguably the most widely used framework to model decisions under risk. Specifically, we examined (a) the temporal stability of CPT’s parameters; and (b) how well different implementations of CPT, varying in the number of adjustable parameters, predict individual choice relative to models with no adjustable parameters (such as CPT with fixed parameters, expected value theory, and various heuristics). We presented participants with risky choice problems and fitted CPT to each individual’s choices in two separate sessions (which were 1 week apart). All parameters were correlated across time, in particular when using a simple implementation of CPT. CPT allowing for individual variability in parameter values predicted individual choice better than CPT with fixed parameters, expected value theory, and the heuristics. CPT’s parameters thus seem to pick up stable individual differences that need to be considered when predicting risky choice.","link":"/opendata/glockner-pachur-2012/"},{"title":"Gluck et al. (2023)","text":"A habitual avoidance component may enforce the persistence of maladaptive avoidance behavior in anxiety disorders. Whether habitual avoidance is acquired more strongly in anxiety disorders is unclear. Individuals with current social anxiety disorder, panic disorder and/or agoraphobia (n = 62) and healthy individuals (n = 62) completed a devaluation paradigm with extensive avoidance training, followed by the devaluation of the aversive outcome. In the subsequent test phase, habitual response tendencies were inferred from compatibility effects. Neutral control trials were added to assess general approach learning in the absence of previous extensive avoidance training. 
The compatibility effects indicating habitual control did not differ between patients with anxiety disorders and healthy controls. Patients showed lower overall approach accuracy, but this effect was unrelated to the compatibility effects. In this study, anxiety disorders were characterized by reduced approach but not stronger habitual avoidance. These results do not indicate a simple and direct association between anxiety disorders and the acquisition of pervasive habitual avoidance in this devaluation paradigm.","link":"/opendata/gluck-et-al-2023/"},{"title":"Goris et al. (2019)","text":"A common idea about individuals with autism spectrum disorder (ASD) is that they have an above-average preference for predictability and sameness. However, surprisingly little research has gone toward this core symptom, and some studies suggest the preference for predictability in ASD might be less general than commonly assumed. Here, we investigated this important symptom of ASD using three different paradigms, which allowed us to measure preference for predictability under well-controlled experimental conditions. Specifically, we used a dimensional approach by investigating correlations between autistic traits (as measured with the Autism-Spectrum Quotient and Social Responsiveness Scale in a neurotypical population) and the scores on three different tasks. The music preference task assessed preferences for tone sequences that varied in predictability. The perceptual fluency task required participants to evaluate stimuli that were preceded by a similar versus dissimilar subliminally presented prime. The gambling task presented four decks of cards that had equal outcome probabilities but varied in predictability. We observed positive correlations between autistic traits and a preference for predictability in both the music preference and perceptual fluency task. 
We did not find our hypothesized correlation with gambling behavior but did observe a post hoc correlation showing that participants with more autistic traits were faster to choose the predictable deck. Together, these findings show that a relation between autistic traits and preference for predictability can be observed in a standardized lab environment, and should be considered an important first step toward a better, more mechanistic understanding of insistence on sameness in ASD.","link":"/opendata/goris-et-al-2019/"},{"title":"Grahek et al. (2022)","text":"To determine how much cognitive control to invest in a task, people need to consider whether exerting control matters for obtaining rewards. In particular, they need to account for the efficacy of their performance-the degree to which rewards are determined by performance or by independent factors. Yet it remains unclear how people learn about their performance efficacy in an environment. Here we combined computational modeling with measures of task performance and EEG, to provide a mechanistic account of how people (i) learn and update efficacy expectations in a changing environment and (ii) proactively adjust control allocation based on current efficacy expectations. Across 2 studies, subjects performed an incentivized cognitive control task while their performance efficacy (the likelihood that rewards are performance-contingent or random) varied over time. We show that people update their efficacy beliefs based on prediction errors-leveraging similar neural and computational substrates as those that underpin reward learning-and adjust how much control they allocate according to these beliefs. Using computational modeling, we show that these control adjustments reflect changes in information processing, rather than the speed-accuracy tradeoff. 
These findings demonstrate the neurocomputational mechanism through which people learn how worthwhile their cognitive control is.","link":"/opendata/grahek-et-al-2022/"},{"title":"Grill et al. (2023)","text":"The Behavioral Avoidance Test (BAT) for spider phobia is a widely-used diagnostic tool assessing fear by measuring avoidance behavior. However, BATs require considerable preparation and different BAT protocols across studies hamper result comparability. To address this, we have developed an open-access online BAT (vBATon). We validated its efficacy in measuring avoidance and eliciting anxiety/disgust by comparing it to a real-life BAT. Spider-fearful (N = 31) and non-fearful (N = 31) individuals completed both tests on separate dates. Both tests successfully distinguished between spider-fearful and non-fearful individuals. Crucially, equivalence tests confirmed that vBATon captures avoidance, anxiety, and disgust equal to the real-life BAT. We found moderate to high correlations between vBATon and (1) the real-life BAT and (2) self-report measurements of spider-fear. Our study suggests that vBATon is a valid, standardized, efficient and user-friendly alternative to real-life BATs for measuring spider fear in both research and clinical practice. (Access to vBATon: https://spider-phobia-research.com/wp-content/uploads/2023/07/vbaton_pub/)","link":"/opendata/grill-et-al-2023/"},{"title":"Grogan et al. (2020)","text":"We can be motivated when reward depends on performance, or merely by the prospect of a guaranteed reward. Performance-dependent (contingent) reward is instrumental, relying on an internal action-outcome model, whereas motivation by guaranteed reward may minimise opportunity cost in reward-rich environments. Competing theories propose that each type of motivation should be dependent on dopaminergic activity. We contrasted these two types of motivation with a rewarded saccade task, in patients with Parkinson’s disease (PD). 
When PD patients were ON dopaminergic medication, they had greater response vigour (peak saccadic velocity residuals) for contingent rewards, whereas when PD patients were OFF medication, they had greater vigour for guaranteed rewards. These results support the view that reward expectation and contingency drive distinct motivational processes, and can be dissociated by manipulating dopaminergic activity. We posit that dopamine promotes goal-directed motivation, but dampens reward-driven vigour, contrary to the prediction that increased tonic dopamine amplifies reward expectation.","link":"/opendata/grogan-et-al-2020/"},{"title":"Grogan et al. (2022)","text":"Motivation can improve performance when the potential rewards outweigh the cost of effort expended. In working memory (WM), people can prioritise rewarded items at the expense of unrewarded items, suggesting a fixed memory capacity. But can capacity itself change with motivation? Across four experiments (N = 30-34) we demonstrate motivational improvements in WM even when all items were rewarded. However, this was not due to better memory precision, but rather better selection of the probed item within memory. Motivational improvements operated independently of encoding, maintenance, or attention shifts between items in memory. Moreover, motivation slowed responses. This contrasted with the benefits of rewarding items unequally, which allowed prioritisation of one item over another. We conclude that motivation can improve memory recall, not via precision or capacity, but via speed-accuracy trade-offs when selecting the item to retrieve.","link":"/opendata/grogan-et-al-2022/"},{"title":"Gronau et al. (2023)","text":"Response inhibition is a key attribute of human executive control. Standard stop-signal tasks require countermanding a single response; the speed at which that response can be inhibited indexes the efficacy of the inhibitory control networks. 
However, more complex stopping tasks, where one or more components of a multi-component action are cancelled (i.e., response-selective stopping) cannot be explained by the independent-race model appropriate for the simple task (Logan and Cowan, 1984). Healthy human participants (n=28; 10 male; 19-40 years) completed a response-selective stopping task where a ‘go’ stimulus required simultaneous (bimanual) button presses in response to left and right pointing green arrows. On a subset of trials (30%) one, or both, arrows turned red (constituting the stop signal) requiring that only the button-press(es) associated with red arrows be cancelled. Electromyographic recordings from both index fingers (first dorsal interosseous) permitted voluntary motor responses that resulted in overt button presses, and activity that was cancelled prior to an overt response (i.e., partial, or covert, responses), to be assessed. We propose a simultaneously inhibit and start (SIS) model that extends the independent race model and provides a highly accurate account of response-selective stopping data. Together with fine-grained EMG analysis, our model-based analysis offers converging evidence that the selective-stop signal simultaneously triggers a process that stops the bimanual response and triggers a new unimanual response corresponding to the green arrow. Our results require a reconceptualization of response-selective stopping and offer a tractable framework for assessing such tasks in healthy and patient populations.","link":"/opendata/gronau-et-al-2023/"},{"title":"Gross et al. (2021)","text":"Helping other people can entail risks for the helper. For example, when treating infectious patients, medical volunteers risk their own health. In such situations, decisions to help should depend on the individual’s valuation of others’ well-being (social preferences) and the degree of personal risk the individual finds acceptable (risk preferences). 
We investigated how these distinct preferences are psychologically and neurobiologically integrated when helping is risky. We used incentivized decision-making tasks (Study 1; N = 292 adults) and manipulated dopamine and norepinephrine levels in the brain by administering methylphenidate, atomoxetine, or a placebo (Study 2; N = 154 adults). We found that social and risk preferences are independent drivers of risky helping. Methylphenidate increased risky helping by selectively altering risk preferences rather than social preferences. Atomoxetine influenced neither risk preferences nor social preferences and did not affect risky helping. This suggests that methylphenidate-altered dopamine concentrations affect helping decisions that entail a risk to the helper.","link":"/opendata/gross-et-al-2021/"},{"title":"Grosskurth et al. (2019)","text":"Human decisions can be habitual or goal-directed, also known as model-free (MF) or model-based (MB) control. Previous work suggests that the balance between the two decision systems is impaired in psychiatric disorders such as compulsion and addiction, via overreliance on MF control. However, little is known about whether the balance can be altered through task training. Here, 20 healthy participants performed a well-established two-step task that differentiates MB from MF control, across five training sessions. We used computational modelling and functional near-infrared spectroscopy to assess changes in decision-making and brain hemodynamics over time. Mixed-effects modelling revealed overall no substantial changes in MF and MB behavior across training. Although our behavioral and brain findings show task-induced changes in learning rates, these parameters have no direct relation to either MF or MB control or the balance between the two systems, and thus do not support the assumption of training effects on MF or MB strategies. 
Our findings indicate that training on the two-step paradigm in its current form does not support a shift in the balance between MF and MB control. We discuss these results with respect to implications for restoring the balance between MF and MB control in psychiatric conditions.","link":"/opendata/grosskurth-et-al-2019/"},{"title":"Haddara & Rahnev (2022)","text":"It is widely believed that feedback improves behavior, but the mechanisms behind this improvement remain unclear. Different theories postulate that feedback has either a direct effect on performance through automatic reinforcement mechanisms or only an indirect effect mediated by a deliberate change in strategy. To adjudicate between these competing accounts, we performed two large experiments on human adults (total N = 518); approximately half the participants received trial-by-trial feedback on a perceptual task, whereas the other half did not receive any feedback. We found that feedback had no effect on either perceptual or metacognitive sensitivity even after 7 days of training. On the other hand, feedback significantly affected participants’ response strategies by reducing response bias and improving confidence calibration. These results suggest that the beneficial effects of feedback stem from allowing people to adjust their strategies for performing the task and not from direct reinforcement mechanisms, at least in the domain of perception.","link":"/opendata/haddara-rahnev-2022/"},{"title":"Haigh et al. (2021)","text":"Weber’s law predicts that stimulus sensitivity will increase proportionally with increases in stimulus intensity. Does this hold for the stimulus of time - specifically, duration in the milliseconds to seconds range? There is conflicting evidence on the relationship between temporal sensitivity and duration. 
Weber’s law predicts a linear relationship between sensitivity and duration on interval timing tasks, while two alternative models predict a reverse J-shaped and a U-shaped relationship. Based on previous research, we hypothesised that temporal sensitivity in humans would follow a U-shaped function, increasing and then decreasing with increases in duration, and that this model would provide a better statistical fit to the data than the reverse-J or the simple Weber’s Law model. In a two-alternative forced-choice interval comparison task, 24 participants made duration judgements about six groups of auditory intervals between 100 and 3,200 ms. Weber fractions were generated for each group of intervals and plotted against time to generate a function describing sensitivity to the stimulus of duration. Although the sensitivity function was slightly concave, and the model describing a U-shaped function gave the best fit to the data, the increase in the model fit was not sufficient to warrant the extra free parameter in the chosen model. Further analysis demonstrated that Weber’s law itself provided a better description of sensitivity to changes in duration than either of the two models tested.","link":"/opendata/haigh-et-al-2021/"},{"title":"Haines et al. (2020)","text":"Trait impulsivity—defined by strong preference for immediate over delayed rewards and difficulties inhibiting prepotent behaviors—is observed in all externalizing disorders, including substance-use disorders. Many laboratory tasks have been developed to identify decision-making mechanisms and correlates of impulsive behavior, but convergence between task measures and self-reports of impulsivity are consistently low. Long-standing theories of personality and decision-making predict that neurally mediated individual differences in sensitivity to (a) reward cues and (b) punishment cues (frustrative nonreward) interact to affect behavior. 
Such interactions obscure one-to-one correspondences between single personality traits and task performance. We used hierarchical Bayesian analysis in three samples with differing levels of substance use (N = 967) to identify interactive dependencies between trait impulsivity and state anxiety on impulsive decision-making. Our findings reveal how anxiety modulates impulsive decision-making and demonstrate benefits of hierarchical Bayesian analysis over traditional approaches for testing theories of psychopathology spanning levels of analysis.","link":"/opendata/haines-et-al-2020/"},{"title":"Hakim et al. (2019)","text":"Complex cognition relies on both on-line representations in working memory (WM), said to reside in the focus of attention, and passive off-line representations of related information. Here, we dissected the focus of attention by showing that distinct neural signals index the on-line storage of objects and sustained spatial attention. We recorded electroencephalogram (EEG) activity during two tasks that employed identical stimulus displays but varied the relative demands for object storage and spatial attention. We found distinct delay-period signatures for an attention task (which required only spatial attention) and a WM task (which invoked both spatial attention and object storage). Although both tasks required active maintenance of spatial information, only the WM task elicited robust contralateral delay activity that was sensitive to mnemonic load. Thus, we argue that the focus of attention is maintained via a collaboration between distinct processes for covert spatial orienting and object-based storage.","link":"/opendata/hakim-et-al-2019/"},{"title":"Hammond et al. (2023)","text":"High anxiety may be related to insufficient sensitivity to changing reinforcement during operant learning. 
Whether such findings are specific to anxiety is unclear given a wider literature relating negative affect to abnormal learning and the possibility that relationships are not consistent across incentive types (i.e., punishment and reward) and outcomes (i.e., positive or negative). In two separate samples (N = 76; N = 49), participants completed an operant learning task with positive, negative, and neutral socio-affective feedback, designed to assess adaptive responses to changing environmental volatility. Contrary to expectations, general affective distress, rather than anxiety or depression specifically, was related to an increase, rather than a decrease, in the rate of learning for negative outcomes in volatile, relative to stable, environments. Our results suggest an important but general role in anxiety and depression of overweighting negative feedback when the value of an action becomes uncertain, as when environmental volatility increases.","link":"/opendata/hammond-et-al-2023/"},{"title":"Hao et al. (2022)","text":"The Value Learning Task (VLT; e.g., Raymond & O’Brien, 2009) is widely used to investigate how acquired value impacts how we perceive and process stimuli. The task consists of a series of trials in which participants attempt to maximize accumulated winnings as they make choices from a pair of presented images associated with probabilistic win, loss, or no-change outcomes. The probabilities and outcomes are initially unknown to the participant and thus the task involves decision making and learning under uncertainty. Despite the symmetric outcome structure for win and loss pairs, people learn win associations better than loss associations (Lin, Cabrera-Haro, & Reuter-Lorenz, 2020). This learning asymmetry could lead to differences when the stimuli are probed in subsequent tasks, compromising inferences about how acquired value affects downstream processing. 
We investigate the nature of the asymmetry using a standard error-driven reinforcement learning model with a softmax choice rule. Despite having no special role for valence, the model yields the learning asymmetry observed in human behavior, whether the model parameters are set to maximize empirical fit, or task payoff. The asymmetry arises from an interaction between a neutral initial value estimate and a choice policy that exploits while exploring, leading to more poorly discriminated value estimates for loss stimuli. We also show how differences in estimated individual learning rates help to explain individual differences in the observed win-loss asymmetries, and how the final value estimates produced by the model provide a simple account of a post-learning explicit value categorization task.","link":"/opendata/hao-et-al-2022/"},{"title":"Harada-Laszlo et al. (2023)","text":"Catastrophizing is a transdiagnostic construct that has been suggested to precipitate and maintain a multiplicity of psychiatric disorders, including anxiety, depression, PTSD and OCD. However, the underlying cognitive mechanisms that result in catastrophizing are unknown. Relating reinforcement learning model parameters to catastrophizing may allow us to further understand the process of catastrophizing.Objective: Using a modified four-armed bandit task, we aimed to investigate the relationship between reinforcement learning parameters and self-report catastrophizing questionnaire scores to gain a mechanistic understanding of how catastrophizing may alter learning. We recruited 211 participants to complete a computerised four-armed bandit task and tested the fit of six reinforcement learning models on our data, including two novel models which both incorporated a scaling factor related to a history of negative outcomes variable. 
We investigated the relationship between self-report catastrophizing scores and free parameters from the overall best-fitting model, along with the best-fitting model to include history, using Pearson’s correlations. Subsequently, we reassessed these relationships using multiple regression analyses to evaluate whether any observed relationships were altered when relevant IQ and mental health covariates were applied. Model-agnostic analyses indicated there were effects of outcome history on reaction time and accuracy, and that the effects on accuracy related to catastrophizing. The overall model of best fit was the Standard Rescorla-Wagner Model and the best-fitting model to include history was a model in which learning rate was scaled by history of negative outcome. We found no effect of catastrophizing on the scaling by history of negative outcome parameter (r=0.003, p=0.679), the learning rate parameter (r=0.026, p=0.703) or the inverse temperature parameter (r=0.086, p=0.220). We were unable to relate catastrophizing to any of the reinforcement learning parameters we investigated. This implies that catastrophizing is not straightforwardly linked to any changes to learning after a series of negative outcomes is received. Future research could further explore the space of models that include a history parameter.","link":"/opendata/harada-laszlo-et-al-2023/"},{"title":"Harhen & Bornstein (2022)","text":"Patch foraging presents a sequential decision-making problem widely studied across organisms — stay with a current option or leave it in search of a better alternative? Behavioral ecology has identified an optimal strategy for these decisions, but, across species, foragers systematically deviate from it, staying too long with an option or overharvesting relative to this optimum. Despite the ubiquity of this behavior, the mechanism underlying it remains unclear and an object of extensive investigation. 
Here, we address this gap by approaching foraging as both a decision-making and learning problem. Specifically, we propose a model in which foragers 1) rationally infer the structure in their environment and 2) use their uncertainty over the inferred structure representation to adaptively discount future rewards. We find that overharvesting can emerge from this rational statistical inference and uncertainty adaptation process. In a patch leaving task, we show that human participants adapt their foraging to the richness and dynamics of the environment in ways consistent with our model. These findings suggest that definitions of optimal foraging could be extended by considering how foragers reduce and adapt to uncertainty over representations of their environment.","link":"/opendata/harhen-bornstein-2022/"},{"title":"Harootonian et al. (2022)","text":"Successful navigation requires the ability to compute one’s location and heading from incoming multisensory information. Previous work has shown that this multisensory input comes in two forms: body-based idiothetic cues, from one’s own rotations and translations, and visual allothetic cues, from the environment (usually visual landmarks). However, exactly how these two streams of information are integrated is unclear, with some models suggesting the body-based idiothetic and visual allothetic cues are combined, while others suggest they compete. In this paper we investigated the integration of body-based idiothetic and visual allothetic cues in the computation of heading using virtual reality. In our experiment, participants performed a series of body turns of up to 360 degrees in the dark with only a brief flash (300ms) of visual feedback en route. Because the environment was virtual, we had full control over the visual feedback and were able to vary the offset between this feedback and the true heading angle. 
By measuring the effect of the feedback offset on the angle participants turned, we were able to determine the extent to which they incorporated visual feedback as a function of the offset error. By further modeling this behavior we were able to quantify the computations people used. While there were considerable individual differences in performance on our task, with some participants mostly ignoring the visual feedback and others relying on it almost entirely, our modeling results suggest that almost all participants used the same strategy in which idiothetic and allothetic cues are combined when the mismatch between them is small, but compete when the mismatch is large. These findings suggest that participants update their estimate of heading using a hybrid strategy that mixes the combination and competition of cues.","link":"/opendata/harootonian-et-al-2022/"},{"title":"Harris et al. (2022)","text":"In this study, we examined the relationship between physiological encoding of surprise and the learning of anticipatory eye movements. Active inference portrays perception and action as interconnected inference processes, driven by the imperative to minimise the surprise of sensory observations. To examine this characterisation of oculomotor learning during a hand-eye coordination task, we tested whether anticipatory eye movements were updated in accordance with Bayesian principles and whether trial-by-trial learning rates tracked pupil dilation as a marker of ‘surprise’. Forty-four participants completed an interception task in immersive virtual reality that required them to hit bouncing balls that had either expected or unexpected bounce profiles. We recorded anticipatory eye movements known to index participants’ beliefs about likely ball bounce trajectories. 
By fitting a hierarchical Bayesian inference model to the trial-wise trajectories of these predictive eye movements, we were able to estimate each individual’s expectations about bounce trajectories, rates of belief updating, and precision-weighted prediction errors. We found that the task-evoked pupil response tracked prediction errors and learning rates but not beliefs about ball bounciness or environmental volatility. These findings are partially consistent with active inference accounts and shed light on how encoding of surprise may shape the control of action.","link":"/opendata/harris-et-al-2022/"},{"title":"Hayes & Wedell (2022a)","text":"Previous studies of reinforcement learning (RL) have established that choice outcomes are encoded in a context-dependent fashion. Several computational models have been proposed to explain context-dependent encoding, including reference point centering and range adaptation models. The former assumes that outcomes are centered around a running estimate of the average reward in each choice context, while the latter assumes that outcomes are compared to the minimum reward and then scaled by an estimate of the range of outcomes in each choice context. However, there are other computational mechanisms that can explain context dependence in RL. In the present study, a frequency encoding model is introduced that assumes outcomes are evaluated based on their proportional rank within a sample of recently experienced outcomes from the local context. A range-frequency model is also considered that combines the range adaptation and frequency encoding mechanisms. We conducted two fully incentivized behavioral experiments using choice tasks for which the candidate models make divergent predictions. The results were most consistent with models that incorporate frequency or rank-based encoding. 
The findings from these experiments deepen our understanding of the underlying computational processes mediating context-dependent outcome encoding in human RL.","link":"/opendata/hayes-wedell-2022a/"},{"title":"Hayes & Wedell (2022b)","text":"In reinforcement learning tasks, people learn the values of options relative to other options in the local context. Prior research suggests that relative value learning is enhanced when choice contexts are temporally clustered in a blocked sequence compared to a randomly interleaved sequence. The present study was aimed at further investigating the effects of blocked versus interleaved training using a choice task that distinguishes among different contextual encoding models. Our results showed that the presentation format in which contexts are experienced can lead to qualitatively distinct forms of relative value learning. This conclusion was supported by a combination of model-free and model-based analyses. In the blocked condition, choice behavior was most consistent with a reference point model in which outcomes are encoded relative to a dynamic estimate of the contextual average reward. In contrast, the interleaved condition was best described by a range-frequency encoding model. We propose that blocked training makes it easier to track contextual outcome statistics, such as the average reward, which may then be used to relativize the values of experienced outcomes. When contexts are interleaved, range-frequency encoding may serve as a more efficient means of storing option values in memory for later retrieval.","link":"/opendata/hayes-wedell-2022b/"},{"title":"Heffner et al. (2021)","text":"People make decisions based on deviations from expected outcomes, known as prediction errors. Past work has focused on reward prediction errors, largely ignoring violations of expected emotional experiences-emotion prediction errors. 
We leverage a method to measure real-time fluctuations in emotion as people decide to punish or forgive others. Across four studies (N = 1,016), we reveal that emotion and reward prediction errors have distinguishable contributions to choice, such that emotion prediction errors exert the strongest impact during decision-making. We additionally find that a choice to punish or forgive can be decoded in less than a second from an evolving emotional response, suggesting that emotions swiftly influence choice. Finally, individuals reporting significant levels of depression exhibit selective impairments in using emotion, but not reward, prediction errors. Evidence for emotion prediction errors potently guiding social behaviours challenges standard decision-making models that have focused solely on reward.","link":"/opendata/heffner-et-al-2021/"},{"title":"Hein et al. (2021)","text":"Clinical and subclinical (trait) anxiety impairs decision making and interferes with learning. Less understood are the effects of temporary anxious states on learning and decision making in healthy populations, and whether these can serve as a model for clinical anxiety. Here we test whether anxious states in healthy individuals elicit a pattern of aberrant behavioural, neural, and physiological responses comparable with those found in anxiety disorders, particularly when processing uncertainty in unstable environments. In our study, both a state anxious and a control group learned probabilistic stimulus-outcome mappings in a volatile task environment while we recorded their electrophysiological (EEG) signals. By using a hierarchical Bayesian model of inference and learning, we assessed the effect of state anxiety on Bayesian belief updating with a focus on uncertainty estimates. State anxiety was associated with an underestimation of environmental uncertainty, and informational uncertainty about the reward tendency. 
Anxious individuals’ beliefs about reward contingencies were more precise (had smaller uncertainty) and thus more resistant to updating, ultimately leading to impaired reward-based learning. State anxiety was also associated with greater uncertainty about volatility. We interpret this pattern as evidence that state anxious individuals are less tolerant to informational uncertainty about the contingencies governing their environment and more willing to be uncertain about the level of stability of the world itself. Further, we tracked the neural representation of belief update signals in the trial-by-trial EEG amplitudes. In control participants, lower-level precision-weighted prediction errors (pwPEs) about reward tendencies were represented in the ERP signals across central and parietal electrodes peaking at 496 ms, overlapping with the late P300 in classical ERP analysis. The state anxiety group did not exhibit a significant representation of low-level pwPEs, and there were no significant differences between the groups. Smaller variance in low-level pwPE about reward tendencies in state anxiety could partially account for the null results. Expanding previous computational work on trait anxiety, our findings establish that temporary anxious states in healthy individuals impair reward-based learning in volatile environments, primarily through changes in uncertainty estimates, which play a central role in current Bayesian accounts of perceptual inference and learning.","link":"/opendata/hein-et-al-2021/"},{"title":"Held et al. (2023)","text":"Humans can up- or downregulate the degree to which they rely on task information for goal-directed behaviour, a process often referred to as cognitive control. Adjustments in cognitive control are traditionally studied in response to experienced or expected task-rule conflict. However, recent theories suggest that people can also learn to adapt control settings through reinforcement. 
Across three preregistered task switching experiments (n=415), we selectively rewarded correct performance on trials with either more (incongruent) or less (congruent) task-rule conflict. Results confirmed the hypothesis that people rewarded more on incongruent trials showed smaller task-rule congruency effects, thus optimally adapting their control settings to the reward scheme. Using drift diffusion modelling, we further show that this reinforcement of cognitive control may occur through conflict-dependent within-trial adjustments of response thresholds after conflict detection. Together, our findings suggest that, while people remain more efficient at learning stimulus-response associations through rewards, they can similarly learn cognitive control strategies through reinforcement.","link":"/opendata/held-et-al-2023/"},{"title":"Hellmann et al. (2022)","text":"Many decisions must be made with incomplete information. The ability to evaluate the resulting uncertainty is a key aspect of metacognition. As both confidence judgments and reaction times are expected to be closely related to sensory uncertainty, a mathematical model of human perceptual decision-making should be able to explain them both. Here, we propose the new dynamical evidence and visibility model (dynWEV), an extension of the drift diffusion model of decision making, to account for choices, reaction times, and confidence at the same time. The decision process in a binary perceptual task is described as a Wiener process accumulating sensory evidence about the choice options bounded by two constant thresholds. To account for confidence judgments, we assume a period of postdecisional accumulation of sensory evidence and parallel accumulation of information about the reliability of the present stimulus. We examined model fits in two experiments, a motion discrimination task with random dot kinematograms and a post-masked orientation discrimination task. 
A comparison between the dynamical evidence and visibility model, two-stage dynamical signal detection theory, and several versions of race models of decision making showed that only dynWEV produced acceptable fits of choices, confidence, and reaction time. This finding suggests that confidence judgments not only depend on choice evidence, but also on a parallel estimate of sensory uncertainty as well as postdecisional accumulation of evidence.","link":"/opendata/hellmann-et-al-2022/"},{"title":"Heo et al. (2021)","text":"Depression is characterized by deficits in the reinforcement learning (RL) process. Although many computational and neural studies have extended our knowledge of the impact of depression on RL, most focus on habitual control (model-free RL), yielding a relatively poor understanding of goal-directed control (model-based RL) and arbitration control to find a balance between the two. We investigated the effects of subclinical depression on model-based and model-free learning in the prefrontal-striatal circuitry. First, we found that subclinical depression is associated with the attenuated state and reward prediction error representation in the insula and caudate. Critically, we found that it accompanies the disrupted arbitration control between model-based and model-free learning in the predominantly inferior lateral prefrontal cortex and frontopolar cortex. We also found that depression undermines the ability to exploit viable options, called exploitation sensitivity. These findings characterize how subclinical depression influences different levels of the decision-making hierarchy, advancing previous conflicting views that depression simply influences either habitual or goal-directed control. Our study creates possibilities for various clinical applications, such as early diagnosis and behavioral therapy design.","link":"/opendata/heo-et-al-2021/"},{"title":"Hertz et al. (2021)","text":"Social learning underpins our species’ extraordinary success. 
Learning through observation has been investigated in several species, but learning from advice-where information is intentionally broadcast-is less understood. We used a pre-registered, online experiment (n = 1492) combined with computational modelling to examine learning through observation and advice. Participants were more likely to immediately follow advice than to copy an observed choice, but this was dependent upon trust in the adviser: highly paranoid participants were less likely to follow advice in the short term. Reinforcement learning modelling revealed two distinct patterns regarding the long-term effects of social information: some individuals relied fully on social information, whereas others reverted to trial-and-error learning. This variation may affect the prevalence and fidelity of socially transmitted information. Our results highlight the privileged status of advice relative to observation and how the assimilation of intentionally broadcast information is affected by trust in others.","link":"/opendata/hertz-et-al-2021/"},{"title":"Hewitson et al. (2023)","text":"People form metacognitive representations of their own abilities across a range of tasks. How these representations are influenced by errors during learning is poorly understood. Here we ask how metacognitive confidence judgments of performance during motor learning are shaped by the learner’s recent history of errors. Across four motor learning experiments, our computational modeling approach demonstrated that people’s confidence judgments are best explained by a recency-weighted averaging of visually observed errors. Moreover, in the formation of these confidence estimates, people appear to re-weight observed motor errors according to a subjective cost function. Confidence judgments were adaptive, incorporating recent motor errors in a manner that was sensitive to the volatility of the learning environment, integrating a shallower history when the environment was more volatile. 
Finally, confidence tracked motor errors in the context of both implicit and explicit motor learning, but only showed evidence of influencing behavior in the latter. Our study thus provides a novel descriptive model that successfully approximates the dynamics of metacognitive judgments during motor learning.","link":"/opendata/hewitson-et-al-2023/"},{"title":"Hezemans et al. (2022)","text":"Apathy is a debilitating feature of many neuropsychiatric diseases that is typically described as a reduction of goal-directed behaviour. Despite its prevalence and prognostic importance, the mechanisms underlying apathy remain controversial. Degeneration of the locus coeruleus-noradrenaline system is known to contribute to motivational deficits, including apathy. In healthy people, noradrenaline has been implicated in signalling the uncertainty of expectations about the environment. We proposed that noradrenergic deficits contribute to apathy by modulating the relative weighting of prior beliefs about action outcomes. We tested this hypothesis in the clinical context of Parkinson’s disease, given its associations with apathy and noradrenergic dysfunction. Participants with mild-to-moderate Parkinson’s disease (N = 17) completed a randomised double-blind, placebo-controlled, crossover study with 40 mg of the noradrenaline reuptake inhibitor atomoxetine. Prior weighting was inferred from psychophysical analysis of performance in an effort-based visuomotor task, and was confirmed as negatively correlated with apathy. Locus coeruleus integrity was assessed in vivo using magnetisation transfer imaging at ultra-high field 7T. The effect of atomoxetine depended on locus coeruleus integrity: participants with a more degenerate locus coeruleus showed a greater increase in prior weighting on atomoxetine versus placebo. 
The results indicate a contribution of the noradrenergic system to apathy and potential benefit from noradrenergic treatment of people with Parkinson’s disease, subject to stratification according to locus coeruleus integrity. More broadly, these results reconcile emerging predictive processing accounts of the role of noradrenaline in goal-directed behaviour with the clinical symptom of apathy and its potential pharmacological treatment.","link":"/opendata/hezemans-et-al-2022/"},{"title":"Hirmas & Engelmann (2023)","text":"Does attention have a causal impact on risky decisions? We address this question in a preregistered experiment in which participants accept or reject a series of mixed gambles while exogenously varying how information can be sampled. Specifically, in each trial participants observe the outcomes of a mixed-gamble with gains and losses presented sequentially. To isolate the causal role of attention on the decision process, we manipulate for how long a specific attribute is presented before showing the next one (e.g., 600 ms/800 ms vs 400 ms). Our results partially confirm our preregistered hypotheses that longer exposure to an attribute increases its weight on the decision. While we find no effects on choice frequency, we observe specific effects on the decision weights of our Random Utility Model. Presenting losses longer (for 600 ms, but not 800 ms) than gains (400 ms) leads to increased sensitivity for losses. When gains are presented for longer (600 ms and 800 ms) than losses (400 ms), the participants show increased sensitivity to both gain and loss values in their decision. Loss aversion reflects this trend across attention treatments, but differences remain non-significant. Further exploratory analyses show that specifically participants with higher impulsiveness become more sensitive to attribute values when gains are presented for longer. 
Jointly, these results support the notion that attention has a causal impact on the sensitivity to specific attributes during risky choice. Moreover, our results underline the moderating role of impulsiveness on the relationship between attention and choice.","link":"/opendata/hirmas-engelmann-2023/"},{"title":"Hitchcock et al. (2021)","text":"How does rumination affect reinforcement learning-the ubiquitous process by which we adjust behavior after error in order to behave more effectively in the future? In a within-subject design (n=49), we tested whether experimentally manipulated rumination disrupts reinforcement learning in a multidimensional learning task previously shown to rely on selective attention. Rumination impaired performance, yet unexpectedly this impairment could not be attributed to decreased attentional breadth (quantified using a decay parameter in a computational model). Instead, trait rumination (between subjects) was associated with higher decay rates (implying narrower attention), yet not with impaired performance. Our task-performance results accord with the possibility that state rumination promotes stress-generating behavior in part by disrupting reinforcement learning. The trait-rumination finding accords with the predictions of a prominent model of trait rumination (the attentional-scope model). More work is needed to understand the specific mechanisms by which state rumination disrupts reinforcement learning.","link":"/opendata/hitchcock-et-al-2021/"},{"title":"Hitchcock et al. (2022)","text":"Cognitive theories of depression, and mindfulness theories of well-being, converge on the notion that self-judgment plays a critical role in mental health. However, these theories have rarely been tested via tasks and computational modeling analyses that can disentangle the information processes operative in self-judgments. 
We applied a drift-diffusion computational model to the self-referential encoding task (SRET) collected before and after an 8-week mindfulness intervention (n = 96). A drift-rate regression parameter representing positive-relative to negative-self-referential judgment strength positively related to mindful awareness and inversely related to depression, both at baseline and over time; however, this parameter did not significantly relate to the interaction between mindful awareness and nonjudgmentalness. At the level of individual depression symptoms, at baseline, a spectrum of symptoms (inversely) correlated with the drift-rate regression parameter, suggesting that many distinct depression symptoms relate to valenced self-judgment between subjects. By contrast, over the intervention, changes in only a smaller subset of anhedonia-related depression symptoms showed substantial relationships with this parameter. Both behavioral and model-derived measures showed modest split-half and test-retest correlations. Results support cognitive theories that implicate self-judgment in depression and mindfulness theories, which imply that mindful awareness should lead to more positive self-views.","link":"/opendata/hitchcock-et-al-2022/"},{"title":"Homan et al. (2019)","text":"By combining computational, morphological, and functional analyses, this study relates latent markers of associative threat learning to overt post-traumatic stress disorder (PTSD) symptoms in combat veterans. Using reversal learning, we found that symptomatic veterans showed greater physiological adjustment to cues that did not predict what they had expected, indicating greater sensitivity to prediction errors for negative outcomes. This exaggerated weighting of prediction errors shapes the dynamic learning rate (associability) and value of threat predictive cues. 
The degree to which the striatum tracked the associability partially mediated the positive correlation between prediction-error weights and PTSD symptoms, suggesting that both increased prediction-error weights and decreased striatal tracking of associability independently contribute to PTSD symptoms. Furthermore, decreased neural tracking of value in the amygdala, in addition to smaller amygdala volume, independently corresponded to higher PTSD symptom severity. These results provide evidence for distinct neurocomputational contributions to PTSD symptoms.","link":"/opendata/homan-et-al-2019/"},{"title":"Hong et al. (2021)","text":"To obtain a coherent perception of the world, our senses need to be in alignment. When we encounter misaligned cues from two sensory modalities, the brain must infer which cue is faulty and recalibrate the corresponding sense. We examined whether and how the brain uses cue reliability to identify the miscalibrated sense by measuring the audiovisual ventriloquism aftereffect for stimuli of varying visual reliability. To adjust for modality-specific biases, visual stimulus locations were chosen based on perceived alignment with auditory stimulus locations for each participant. During an audiovisual recalibration phase, participants were presented with bimodal stimuli with a fixed perceptual spatial discrepancy; they localized one modality, cued after stimulus presentation. Unimodal auditory and visual localization was measured before and after the audiovisual recalibration phase. 
We compared participants’ behavior to the predictions of three models of recalibration: (a) Reliability-based: each modality is recalibrated based on its relative reliability-less reliable cues are recalibrated more; (b) Fixed-ratio: the degree of recalibration for each modality is fixed; (c) Causal-inference: recalibration is directly determined by the discrepancy between a cue and its estimate, which in turn depends on the reliability of both cues, and inference about how likely the two cues derive from a common source. Vision was hardly recalibrated by audition. Auditory recalibration by vision changed idiosyncratically as visual reliability decreased: the extent of auditory recalibration either decreased monotonically, peaked at medium visual reliability, or increased monotonically. The latter two patterns cannot be explained by either the reliability-based or fixed-ratio models. Only the causal-inference model of recalibration captures the idiosyncratic influences of cue reliability on recalibration. We conclude that cue reliability, causal inference, and modality-specific biases guide cross-modal recalibration indirectly by determining the perception of audiovisual stimuli.","link":"/opendata/hong-et-al-2021/"},{"title":"Horvath et al. (2021)","text":"Humans often face sequential decision-making problems, in which information about the environmental reward structure is detached from rewards for a subset of actions. In the current exploratory study, we introduce an information-selective symmetric reversal bandit task to model such situations and obtained choice data on this task from 24 participants. To arbitrate between different decision-making strategies that participants may use on this task, we developed a set of probabilistic agent-based behavioral models, including exploitative and explorative Bayesian agents, as well as heuristic control agents. 
Upon validating the model and parameter recovery properties of our model set and summarizing the participants’ choice data in a descriptive way, we used a maximum likelihood approach to evaluate the participants’ choice data from the perspective of our model set. In brief, we provide quantitative evidence that participants employ a belief state-based hybrid explorative-exploitative strategy on the information-selective symmetric reversal bandit task, lending further support to the finding that humans are guided by their subjective uncertainty when solving exploration-exploitation dilemmas. SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1007/s42113-021-00112-3.","link":"/opendata/horvath-et-al-2021/"},{"title":"Horwath et al. (2022)","text":"A large body of research illustrates the prioritization of goal-relevant information in memory; however, it is unclear how reward-related memories are organized. Using a rewarded free recall paradigm, we investigated how reward motivation structures the organization of memory around temporal and higher-order contexts. To better understand these processes, we simulated our findings using a reward-modulated variant of the Context Maintenance and Retrieval Model (CMR; Polyn et al., 2009). In the first study, we found that reward did not influence temporal clustering, but instead organized memory based on reward category. Further, we showed that a reward-modulated learning rate and source features of CMR most accurately depict reward’s enhancement of memory and clustering by value. In a second study, we showed that reward-memory effects can exist in both extended periods of sustained motivation and frequent changes in motivation, by showing equivocal reward effects using mixed- and pure-list motivation manipulations. However, we showed that a reward-modulated learning rate in isolation most accurately depicts reward’s enhancement of memory using a pure-list manipulation. 
Overall, we conclude that reward-related memories are adaptively organized by higher-order value information, and contextual binding to value contexts may only be necessary when rewards are intermittent versus sustained.","link":"/opendata/horwath-et-al-2022/"},{"title":"Hotaling et al. (2019)","text":"When people make risky choices, two kinds of information are crucial: outcome values and outcome probabilities. Here, we demonstrate that the juncture at which value and probability information is provided has a fundamental effect on choice. Across four experiments involving 489 participants, we compared two decision-making scenarios: one in which value information was revealed during sampling (standard) and one in which value information was revealed after sampling (value ignorance). On average, participants made riskier choices when value information was provided after sampling. Moreover, parameter estimates from a hierarchical Bayesian implementation of cumulative-prospect theory suggested that participants overweighted rare events when value information was absent during sampling but did not overweight such events in the standard condition. This suggests that the impact of rare events on choice relies crucially on the timing of probability and value integration. We provide paths toward mechanistic explanations of our results based on frameworks that assume different underlying cognitive architectures.","link":"/opendata/hotaling-et-al-2019/"},{"title":"Houshmand-Chatroudi & Yotsumoto (2023)","text":"Perception of time is not always veridical; rather, it is subjected to distortions. One such compelling distortion is that the duration of regularly spaced intervals is often overestimated. One account suggests that excitatory phases of neural entrainment concomitant with such stimuli play a major role. However, assessing the correlation between the power of entrained oscillations and time dilation has yielded inconclusive results. 
In this study, we evaluated whether phase characteristics of neural oscillations impact time dilation. For this purpose, we entrained 10-Hz oscillations and experimentally manipulated the presentation of flickers so that they were presented either in-phase or out-of-phase relative to the established rhythm. Simultaneous electroencephalography (EEG) recordings confirmed that in-phase and out-of-phase flickers had landed on different inhibitory phases of high-amplitude alpha oscillations. Moreover, to control for confounding factors of expectancy and masking, we created two additional conditions. Results, supplemented by the Bayesian analysis, indicated that the phase of entrained visual alpha oscillation does not differentially affect flicker-induced time dilation. Repeating the same experiment with regularly spaced auditory stimuli replicated the null findings. Moreover, we found a robust enhancement of precision for the reproduction of flickers relative to static stimuli that were partially supported by entrainment models. We discussed our results within the framework of neural oscillations and time-perception models, suggesting that inhibitory cycles of visual alpha may have little relevance to the overestimation of regularly spaced intervals. Moreover, based on our findings, we proposed that temporal oscillators, assumed in entrainment models, may act independently of excitatory phases in the brain’s lower level sensory areas.","link":"/opendata/houshmand-chatroudi-yotsumoto-2023/"},{"title":"Hoven et al. (2022)","text":"A growing body of evidence suggests that, during decision-making, BOLD signal in the ventromedial prefrontal cortex (VMPFC) correlates both with motivational variables - such as incentives and expected values - and metacognitive variables - such as confidence judgments - which reflect the subjective probability of being correct. 
At the behavioral level, we recently demonstrated that the value of monetary stakes biases confidence judgments, with gain (respectively loss) prospects increasing (respectively decreasing) confidence judgments, even for similar levels of difficulty and performance. If and how this value-confidence interaction is reflected in the VMPFC remains unknown. Here, we used an incentivized perceptual decision-making fMRI task that dissociates key decision-making variables, thereby allowing us to test several hypotheses about the role of the VMPFC in the value-confidence interaction. While our initial analyses seemingly indicate that the VMPFC combines incentives and confidence to form an expected value signal, we falsified this conclusion with a meticulous dissection of qualitative activation patterns. Rather, our results show that strong VMPFC confidence signals observed in trials with gain prospects are disrupted in trials with no - or negative (loss) - monetary prospects. Deciphering how decision variables are represented and interact at finer scales seems necessary to better understand biased (meta)cognition.","link":"/opendata/hoven-et-al-2022/"},{"title":"Hoven et al. (2023)","text":"Confidence is suggested to be a key component in psychiatry and manifests at various hierarchical levels, from confidence in a decision (local confidence), to confidence about performance (global confidence), to higher-order traits such as self-beliefs. Most research focuses on local confidence, but global levels may relate more closely to symptoms. Here, using a transdiagnostic framework, we tested the relationships between self-reported psychopathology, local and global confidence, and higher-order self-beliefs in a general population sample (N = 489). We show contrasting relationships between confidence and psychopathology dimensions. An anxious-depression dimension related to local and global underconfidence. 
By contrast, a compulsive-intrusive-thoughts dimension related to increased overconfidence at both levels, and showed a decoupling between (1) higher-order self-beliefs and (2) local and global task confidence. The strongest predictor of mental health was a self-beliefs dimension. This study examines higher-order confidence in relation to psychiatric symptoms fluctuating in the general population. Critically, psychopathological symptoms show distinct associations with confidence.","link":"/opendata/hoven-et-al-2023/"},{"title":"Hummos et al. (2022)","text":"Interactions across frontal cortex are critical for cognition. Animal studies suggest a role for mediodorsal thalamus (MD) in these interactions, but the computations performed and direct relevance to human decision making are unclear. Here, inspired by animal work, we extended a neural model of an executive frontal-MD network and trained it on a human decision-making task for which neuroimaging data were collected. Using a biologically-plausible learning rule, we found that the model MD thalamus compressed its cortical inputs (dorsolateral prefrontal cortex, dlPFC) underlying stimulus-response representations. Through direct feedback to dlPFC, this thalamic operation efficiently partitioned cortical activity patterns and enhanced task switching across different contingencies. To account for interactions with other frontal regions, we expanded the model to compute higher-order strategy signals outside dlPFC, and found that the MD offered a more efficient route for such signals to switch dlPFC activity patterns. Human fMRI data provided evidence that the MD engaged in feedback to dlPFC, and had a role in routing orbitofrontal cortex inputs when subjects switched behavioral strategy. Collectively, our findings contribute to the emerging evidence for thalamic regulation of frontal interactions in the human brain.","link":"/opendata/hummos-et-al-2022/"},{"title":"Hunter et al. 
(2021)","text":"A goal of computational psychiatry is to ground symptoms in basic mechanisms. Theory suggests that avoidance in anxiety disorders may reflect dysregulated mental simulation, a process for evaluating candidate actions. If so, these covert processes should have observable consequences: choices reflecting increased and biased deliberation. In two online general population samples, we examined how self-report symptoms of social anxiety disorder predict choices in a socially framed reinforcement learning task, the patent race, in which the pattern of choices reflects the content of deliberation. Using a computational model to assess learning strategy, we found that self-report social anxiety was indeed associated with increased deliberative evaluation. This effect was stronger for a particular subset of feedback (upward counterfactual) in one of the experiments, broadly matching the biased content of rumination in social anxiety disorder, and robust to controlling for other psychiatric symptoms. These results suggest a grounding of symptoms of social anxiety disorder in more basic neuro-computational mechanisms.","link":"/opendata/hunter-et-al-2021/"},{"title":"Ivanov et al. (2023)","text":"A series of recent studies has demonstrated that attentional selection is modulated by statistical regularities, even when they concern task-irrelevant stimuli. Irrelevant distractors presented more frequently at one location interfere less with search than distractors presented elsewhere. To account for this finding, it has been proposed that through statistical learning, the frequent distractor location becomes suppressed relative to the other locations. Learned distractor suppression has mainly been studied at the group level, where individual differences are treated as unexplained error variance. Yet these individual differences may provide important mechanistic insights and could be predictive of cognitive and real-life outcomes. 
In the current study, we ask whether in an additional singleton task, the standard measures of attentional capture and learned suppression are reliable and stable at the level of the individual. In an online study, we assessed both the within- and between-session reliability of individual-level measures of attentional capture and learned suppression. We show that the measures of attentional capture, but not of distractor suppression, are moderately stable within the same session (i.e., split-half reliability). Test-retest reliability over a 2-month period was found to be moderate for attentional capture but weak or absent for suppression. RT-based measures proved to be superior to accuracy measures. While producing very robust findings at the group level, the predictive validity of these RT-based measures is still limited when it comes to individual-level performance. We discuss the implications for future research drawing on inter-individual variation in the attentional biases that result from statistical learning.","link":"/opendata/ivanov-et-al-2023/"},{"title":"Jackson & Cavanagh (2023)","text":"Reward Positivity (RewP) is a feedback-locked event-related potential component that is specifically elicited by rewarding feedback and scales with positive reward prediction error, a hallmark of reinforcement learning models. The RewP is also diminished in depression, suggesting that it may be a novel marker of anhedonia. Here, we examined if a sad mood induction offered an opportunity to causally induce a mood-related alteration of the RewP and reward-related learning. In Experiment 1 (N = 50 total), participants were randomly assigned to previously established sad or neutral mood induction procedures before a probabilistic selection task. This manipulation failed to induce changes in affect, suggesting that standard methods are inadequate. 
In Experiment 2 (N = 50 total), participants were randomly assigned to newly developed happy versus sad mood manipulations, which successfully induced large changes in affect. While the RewP was unaffected by mood induction, positive mood moderated the relationship between prediction error encoding in the RewP and reward learning, such that low positive mood and low prediction error encoding resulted in poorer reward learning. These findings provide a mechanistic example of how reduced positive affect moderates reward learning via poorer information encoding in the RewP.","link":"/opendata/jackson-cavanagh-2023/"},{"title":"Jana & Aron (2022)","text":"Mind wandering is a state in which our mental focus shifts toward task-unrelated thoughts. Although it is known that mind wandering has a detrimental effect on concurrent task performance (e.g., decreased accuracy), its effect on executive functions is poorly studied. Yet the latter question is relevant to many real-world situations, such as rapid stopping during driving. Here, we studied how mind wandering would affect the requirement to subsequently stop an incipient motor response. In healthy adults, we tested whether mind wandering affected stopping and, if so, which component of stopping was affected: the triggering of the inhibitory brake or the implementation of the brake following triggering. We observed that during mind wandering, stopping latency increased, as did the percentage of trials with failed triggering. Indeed, 67% of the variance of the increase in stopping latency was explained by increased trigger failures. Thus, mind wandering primarily affects stopping by affecting the triggering of the brake.","link":"/opendata/jana-aron-2022/"},{"title":"Jang et al. (2019)","text":"Dopamine is thought to provide reward prediction error signals to temporal lobe memory systems, but the role of these signals in episodic memory has not been fully characterized. 
Here we developed an incidental memory paradigm to (i) estimate the influence of reward prediction errors on the formation of episodic memories, (ii) dissociate this influence from surprise and uncertainty, (iii) characterize the role of temporal correspondence between prediction error and memoranda presentation and (iv) determine the extent to which this influence is dependent on memory consolidation. We found that people encoded incidental memoranda more strongly when they gambled for potential rewards. Moreover, the degree to which gambling strengthened encoding scaled with the reward prediction error experienced when memoranda were presented (and not before or after). This encoding enhancement was detectable within minutes and did not differ substantially after 24 h, indicating that it is not dependent on memory consolidation. These results suggest a computationally and temporally specific role for reward prediction error signalling in memory formation.","link":"/opendata/jang-et-al-2019/"},{"title":"Jangraw et al. (2023)","text":"Does our mood change as time passes? This question is central to behavioural and affective science, yet it remains largely unexamined. To investigate, we intermixed subjective momentary mood ratings into repetitive psychology paradigms. Here we demonstrate that task and rest periods lowered participants’ mood, an effect we call ‘Mood Drift Over Time’. This finding was replicated in 19 cohorts totalling 28,482 adult and adolescent participants. The drift was relatively large (-13.8% after 7.3 min of rest, Cohen’s d = 0.574) and was consistent across cohorts. Behaviour was also impacted: participants were less likely to gamble in a task that followed a rest period. Importantly, the drift slope was inversely related to reward sensitivity. We show that accounting for time using a linear term significantly improves the fit of a computational model of mood. 
Our work provides conceptual and methodological reasons for researchers to account for time’s effects when studying mood and behaviour.","link":"/opendata/jangraw-et-al-2023/"},{"title":"Jaswetz et al. (2022)","text":"Simultaneous execution of memory retrieval and cognitively demanding interventions alters the subjective experience of aversive memories. This principle can be used in treatment to target traumatic memories. An often-used interpretation is that cognitive demand interferes with memory reconsolidation. Laboratory models applying this technique often do not meet some important procedural steps thought necessary to trigger reconsolidation. It remains therefore unclear whether cognitively demanding interventions can alter the reconsolidation process of aversive memories. Here, 78 (41 included) healthy participants completed an established 3-day threat conditioning paradigm. Two conditioned stimuli were paired with a shock (CS+ s) and one was not (CS-). The next day, one CS+ (CS+ R), but not the other (CS+), was presented as a reminder. After 10 min, participants performed a 2-back working memory task. On day three, we assessed retention. We found successful acquisition of conditioned threat and retention (CS+ s > CS-). However, SCRs to the CS+ R and the CS+ during retention did not significantly differ. Although threat conditioning was successful, the well-established cognitively demanding intervention did not alter the reconsolidation process of conditioned threat memories. These findings challenge current views on how cognitive demand may enhance psychotherapy outcome.","link":"/opendata/jaswetz-et-al-2022/"},{"title":"Jayles et al. (2021)","text":"Cognitive biases are widespread in humans and animals alike, and can sometimes be reinforced by social interactions. One prime bias in judgment and decision-making is the human tendency to underestimate large quantities. 
Previous research on social influence in estimation tasks has generally focused on the impact of single estimates on individual and collective accuracy, showing that randomly sharing estimates does not reduce the underestimation bias. Here, we test a method of social information sharing that exploits the known relationship between the true value and the level of underestimation, and study if it can counteract the underestimation bias. We performed estimation experiments in which participants had to estimate a series of quantities twice, before and after receiving estimates from one or several group members. Our purpose was threefold: to study (i) whether restructuring the sharing of social information can reduce the underestimation bias, (ii) how the number of estimates received affects the sensitivity to social influence and estimation accuracy, and (iii) the mechanisms underlying the integration of multiple estimates. Our restructuring of social interactions successfully countered the underestimation bias. Moreover, we find that sharing more than one estimate also reduces the underestimation bias. Underlying our results are a human tendency to herd, to trust larger estimates than one’s own more than smaller estimates, and to follow disparate social information less. Using a computational modeling approach, we demonstrate that these effects are indeed key to explain the experimental results. Overall, our results show that existing knowledge on biases can be used to dampen their negative effects and boost judgment accuracy, paving the way for combating other cognitive biases threatening collective systems.","link":"/opendata/jayles-et-al-2021/"},{"title":"Jean-Richard-dit-Bressel et al. (2021)","text":"Punishment maximises the probability of our individual survival by reducing behaviours that cause us harm, and also sustains trust and fairness in groups essential for social cohesion. 
However, some individuals are more sensitive to punishment than others and these differences in punishment sensitivity have been linked to a variety of decision-making deficits and psychopathologies. The mechanisms for why individuals differ in punishment sensitivity are poorly understood, although recent studies of conditioned punishment in rodents highlight a key role for punishment contingency detection (Jean-Richard-Dit-Bressel et al., 2019). Here, we applied a novel Planets and Pirates conditioned punishment task in humans, allowing us to identify the mechanisms for why individuals differ in their sensitivity to punishment. We show that punishment sensitivity is bimodally distributed in a large sample of normal participants. Sensitive and insensitive individuals equally liked reward and showed similar rates of reward-seeking. They also equally disliked punishment and did not differ in their valuation of cues that signalled punishment. However, sensitive and insensitive individuals differed profoundly in their capacity to detect and learn volitional control over aversive outcomes. Punishment insensitive individuals did not learn the instrumental contingencies, so they could not withhold behaviour that caused punishment and could not generate appropriately selective behaviours to prevent impending punishment. These differences in punishment sensitivity could not be explained by individual differences in behavioural inhibition, impulsivity, or anxiety. This bimodal punishment sensitivity and these deficits in instrumental contingency learning are identical to those dictating punishment sensitivity in non-human animals, suggesting that they are general properties of aversive learning and decision-making.","link":"/opendata/jean-richard-dit-bressel-et-al-2021/"},{"title":"Jean-Richard-dit-Bressel et al. (2023)","text":"Individuals differ in sensitivity to the adverse consequences of their actions, leading some to persist in maladaptive behaviours. 
Two pathways have been identified for this insensitivity: a motivational pathway based on reward valuation and a behavioural pathway based on stimulus–response mechanisms. Here we identify a third, cognitive pathway based on differences in punishment knowledge. Exposed to identical punishment contingencies, some people (Sensitive) form correct causal beliefs that guide their behaviour to avoid punishment, whereas others form incorrect causal beliefs that lead them to earn punishment. Incorrect causal beliefs were not inherently problematic; many individuals benefited from information about why punishment was occurring, revaluing their actions and changing their behaviour (Unaware). However, we identify one condition where incorrect causal beliefs can be problematic: when punishment is infrequent. Under this condition, more individuals showed detrimental patterns of behaviour that resisted information-driven updating (Compulsive). For these individuals, rare punishment inoculated behavioural preferences against cognitive and behavioural updating.","link":"/opendata/jean-richard-dit-bressel-et-al-2023/"},{"title":"Jenkins et al. (2023)","text":"Implicit statistical learning, whereby predictable relationships between stimuli are detected without conscious awareness, is important for language acquisition. However, while this process is putatively implicit, it is often assessed using measures that require explicit reflection and conscious decision making. Here, we conducted three experiments combining an artificial grammar learning paradigm with a serial reaction time (SRT-AGL) task, to measure statistical learning of adjacent and nonadjacent dependencies implicitly, without conscious decision making. Participants viewed an array of six visual stimuli and were presented with a sequence of three auditory (nonsense words, Expt. 1; names of familiar objects, Expt. 2) or visual (abstract shapes, Expt. 
3) cues and were asked to click on the corresponding visual stimulus as quickly as possible. In each experiment, the final stimulus in the sequence was predictable based on items earlier in the sequence. Faster responses to this predictable final stimulus compared to unpredictable stimuli would provide evidence of implicit statistical learning, without requiring explicit decision making or conscious reflection. Despite previous positive results (Misyak et al., 2009), we saw little evidence of implicit statistical learning in any of the experiments, suggesting that in this case, these SRT-AGL tasks were not an effective measure of implicit statistical learning.","link":"/opendata/jenkins-et-al-2023/"},{"title":"Ji & MacLeod (2023)","text":"Reduced tendency to engage in potentially rewarding activities is a hallmark of depression. The present study investigated the role of future expectancy biases in depression-linked behavioural choice, in participants varying in self-reported depression symptoms (dysphoria). A novel laboratory paradigm was developed to test the hypothesis that the degree to which higher dysphoria is associated with reduced tendency to engage in a potentially rewarding activity is dependent on the presence of negative biases in the expected outcomes of activity engagement. Specifically, two types of expectancy biases were distinguished: a) the expected likelihood of a negative rather than positive outcome, and b) the expected emotional impact of either outcome. N = 176 undergraduate students with varied levels of dysphoria were given the opportunity to choose to engage in a coin-tossing game that could result in a win or loss monetary outcome in terms of charity donations, and then rated both types of expectancies. 
Results indicated that higher dysphoria was associated with more negative expectations concerning the likelihood of objective outcomes and the emotional impact of such outcomes, and as hypothesised, such negative expectancy biases mediated indirect associations between dysphoria and behavioural choice.","link":"/opendata/ji-macleod-2023/"},{"title":"Jiang, Mi et al. (2023)","text":"Social networks shape our decisions by constraining what information we learn and from whom. Yet, the mechanisms by which network structures affect individual learning and decision-making remain unclear. Here, by combining a real-time distributed learning task with functional magnetic resonance imaging, computational modeling and social network analysis, we studied how humans learn from observing others’ decisions on seven-node networks with varying topological structures. We show that learning on social networks can be approximated by a well-established error-driven process for observational learning, supported by an action prediction error encoded in the lateral prefrontal cortex. Importantly, learning is flexibly weighted toward well-connected neighbors, according to activity in the dorsal anterior cingulate cortex, but only insofar as social observations contain secondhand, potentially intertwining, information. These data suggest a neurocomputational mechanism of network-based filtering on the sources of information, which may give rise to biased learning and the spread of misinformation in an interconnected society.","link":"/opendata/jiang-mi-et-al-2023/"},{"title":"Jiang, Marcowski et al. (2023)","text":"Many consider moral decisions to follow an internal “moral compass”, resistant to social pressures. Here we examine how social influence shapes moral decisions under risk, and how it operates in different decision contexts. 
We employed an adapted Asian Disease Paradigm where participants chose between certain losses/gains and probabilistic losses/gains in a series of moral (lives) or financial (money) decisions. We assessed participants’ own risk preferences before and after exposing them to social norms that are generally risk-averse or risk-seeking. Our results showed that participants robustly shifted their own choices towards the observed risk preferences. This conformity holds even after re-testing three days later. Interestingly, in the monetary domain, risk-averse norms have more influence on choices in the loss frame, whereas risk-seeking norms have more influence in the gain frame, presumably because norms that contradict default behavior are most informative. In the moral domain, risk-averse as opposed to risk-seeking norms are more effective in the loss frame, but in the gain frame different norms are equally effective. Taken together, our results demonstrate conformity in risk preferences across contexts and highlight unique features of decisions and conformity in moral and monetary domains.","link":"/opendata/jiang-marcowski-et-al-2023/"},{"title":"Johnston et al. (2023)","text":"Memory is a reconstructive process with biases toward remembering events in line with prior higher order beliefs. This can result in events being remembered as more positive or negative than they actually were. While positive recall biases may contribute to well-being, negative recall biases may promote internalizing symptoms, such as social anxiety. Adolescence is characterized by increased salience of peers and peak incidence of social anxiety. Symptoms often wax and wane before becoming more intractable during adulthood. Open questions remain regarding expression of biases for social feedback, and how individual differences in such biases may contribute to social anxiety across development. 
Two independent studies used a novel social feedback and cued response task to assess memory biases and prediction biases about being liked or disliked. Our findings revealed a robust positivity bias about memories for social feedback, regardless of whether memories were true or false. Moreover, memory bias was associated with social anxiety in a developmentally sensitive way. Among adults (Study 1), more severe symptoms of social anxiety were associated with a negativity bias. During the transition from adolescence to adulthood (Study 2), age strengthened the positivity bias in those with less severe symptoms and strengthened the negativity bias in those with more severe symptoms. Biases did not generalize to predictions about social feedback. These studies support a model by which higher order beliefs can infiltrate perceptions of memory for past, but not predictions of future, social events, shaping susceptibility for social anxiety, particularly during the transition into adulthood.","link":"/opendata/johnston-et-al-2023/"},{"title":"Kaanders et al. (2022)","text":"No one likes to be wrong. Previous research has shown that participants may underweight information incompatible with previous choices, a phenomenon called confirmation bias. In this paper, we argue that a similar bias exists in the way information is actively sought. We investigate how choice influences information gathering using a perceptual choice task and find that participants sample more information from a previously chosen alternative. Furthermore, the higher the confidence in the initial choice, the more biased information sampling becomes. As a consequence, when faced with the possibility of revising an earlier decision, participants are more likely to stick with their original choice, even when incorrect. Critically, we show that agency controls this phenomenon. 
The effect disappears in a fixed sampling condition where presentation of evidence is controlled by the experimenter, suggesting that the way in which confirmatory evidence is acquired critically impacts the decision process. These results suggest active information acquisition plays a critical role in the propagation of strongly held beliefs over time.","link":"/opendata/kaanders-et-al-2022/"},{"title":"Kahveci et al. (2022)","text":"Reaction time (RT) data are often pre-processed before analysis by rejecting outliers and errors and aggregating the data. In stimulus-response compatibility paradigms such as the Approach-Avoidance Task (AAT), researchers often decide how to pre-process the data without an empirical basis, leading to the use of methods that may hurt rather than help data quality. To provide this empirical basis, we investigated how different pre-processing methods affect the reliability and validity of this task. Our literature review revealed 108 different pre-processing pipelines among 163 examined studies. Using simulated and real datasets, we found that validity and reliability were negatively affected by retaining error trials, by replacing error RTs with the mean RT plus a penalty, by retaining outliers, and by removing the highest and lowest sample-wide RT percentiles as outliers. We recommend removing error trials and rejecting RTs deviating more than 2 or 3 SDs from the participant mean. Bias scores were more reliable but not more valid if computed with means or D-scores rather than with medians. Bias scores were less accurate if based on averaging multiple conditions together, as with compatibility scores, rather than being based on separate averages per condition, as with double-difference scores. We call upon the field to drop the suboptimal practices to improve the psychometric properties of the AAT. 
We also call for similar investigations in related RT-based cognitive bias measures such as the implicit association task, as their commonly accepted pre-processing practices currently involve many of the aforementioned discouraged methods.","link":"/opendata/kahveci-et-al-2022/"},{"title":"Kaiser et al. (2021)","text":"In a dynamic world, it is essential to decide when to leave an exploited resource. Such patch-leaving decisions involve balancing the cost of moving against the gain expected from the alternative patch. This contrasts with value-guided decisions that typically involve maximizing reward by selecting the current best option. Patterns of neuronal activity pertaining to patch-leaving decisions have been reported in dorsal anterior cingulate cortex (dACC), whereas competition via mutual inhibition in ventromedial prefrontal cortex (vmPFC) is thought to underlie value-guided choice. Here, we show that the balance between cortical excitation and inhibition (E/I balance), measured by the ratio of GABA and glutamate concentrations, plays a dissociable role for the two kinds of decisions. Patch-leaving decision behaviour relates to E/I balance in dACC. In contrast, value-guided decision-making relates to E/I balance in vmPFC. These results support mechanistic accounts of value-guided choice and provide evidence for a role of dACC E/I balance in patch-leaving decisions.","link":"/opendata/kaiser-et-al-2021/"},{"title":"Kaplan & Solway (2022)","text":"Computational models of decision making have identified a relationship between obsessive-compulsive symptoms (OCS), both in the general population and in patients, and impairments in perceptual evidence accumulation. Some studies have interpreted these deficits to reflect global disease traits which give rise to clusters of OCS. Such assumptions are not uncommon, even if implicit, in computational psychiatry more broadly. 
However, it is well established that state- and trait-symptom scores are often correlated (e.g., state and trait anxiety), and the extent to which perceptual deficits are actually explained by state-based symptoms is unclear. State-based symptoms may give rise to information processing differences in a number of ways, including the mechanistically less interesting possibility of tying up working memory and attentional resources for off-task processing. In a general population sample (N = 150), we investigated the extent to which previously identified impairments in perceptual evidence accumulation were related to trait- vs. state-based OCS. In addition, we tested whether differences in working memory capacity moderated state-based impairments, such that impairments were worse in individuals with lower working memory capacity. We replicated previous work demonstrating a negative relationship between the rate of evidence accumulation and trait-based OCS when state-based symptoms were unaccounted for. When state-based effects were included in the model, they captured a significant degree of impairment while trait-based effects were attenuated, although they did not disappear completely. We did not find evidence that working memory capacity moderated the state-based effects. Our work suggests that investigating the relationship between information processing and state-based symptoms may be important more generally in computational psychiatry beyond this specific context.","link":"/opendata/kaplan-solway-2022/"},{"title":"Karagoz et al. (2022)","text":"When making decisions, we sometimes rely on habit and at other times plan towards goals. Planning requires the construction and use of an internal representation of the environment, a cognitive map. How are these maps constructed, and how do they guide goal-directed decisions? 
We coupled a sequential decision-making task with a behavioral representational similarity analysis approach to examine how relationships between choice options change when people build a cognitive map of the task structure. We found that participants who encoded stronger higher-order relationships among choice options showed increased planning and better performance. These higher-order relationships were more strongly encoded among objects encountered in high-reward contexts, indicating a role for motivation during cognitive map construction. In contrast, lower-order relationships such as simple visual co-occurrence of objects did not predict goal-directed planning. These results show that humans actively construct and use cognitive maps of task structure to make goal-directed decisions.","link":"/opendata/karagoz-et-al-2022/"},{"title":"Kasparek et al. (2020)","text":"Childhood adversity is common and strongly associated with risk for psychopathology. Identifying factors that buffer children from experiencing psychopathology following adversity is critical for developing more effective intervention approaches. The present study examined several behavioral metrics of reward processing reflecting global approach motivation for reward and the degree to which reward responses scaled with reward value (i.e., behavioral sensitivity to reward value) as potential moderators of the association of multiple dimensions of adversity (including trauma, caregiver neglect, and food insecurity) with depression and externalizing psychopathology in a sample of youth aged 8-16 years (n = 132). Trauma exposure and externalizing problems were positively associated at low and moderate levels of reward reactivity, but this association became non-significant at high levels of reward reactivity. 
Our findings extend prior work, suggesting that high behavioral sensitivity to reward value may buffer against externalizing problems following exposure to trauma.","link":"/opendata/kasparek-et-al-2020/"},{"title":"Kapser et al. (2023)","text":"Why can initial biases persist in repeated choice tasks? Previous research has shown that frequent rewards can lure the decision maker into premature exploitation of a supposedly best option, which can result in the persistence of initial biases. Here, we demonstrate that even in the absence of rewards, initial biases can be perpetuated through a positive testing strategy. After eliciting a biased preference for one of two equally rewarding options, participants (N = 203) could sample freely from both options without the lure of any financial rewards. When participants were told to rule out alternatives in this phase, they explored the supposedly worse option and thereby managed to overcome their initial bias. When told to optimize their strategy, however, they exhibited a positive testing strategy resulting in the continued exploitation of the supposedly better option, a bias they maintained in an incentivized choice phase and later judgments. Across all participants, individual tendencies to exploit one option in earlier phases predicted biased behavior in subsequent phases. The findings highlight that not only the pursuit of instrumental rewards can lead to exploitation and the maintenance of initial biases. We discuss potential consequences for interventions.","link":"/opendata/kapser-et-al-2023/"},{"title":"Katabi & Shahar (2023)","text":"ADHD is characterized by difficulty acting in a goal-directed manner. While most environments require a sequence of actions for goal attainment, ADHD has never been studied in the context of value-based sequence learning. 
Here, we made use of current advancements in hierarchical reinforcement-learning algorithms to track the internal value and choice policy of individuals with ADHD performing a three-stage sequence learning task. Specifically, 54 participants (28 ADHD, 26 controls) completed a value-based reinforcement-learning task that allowed us to estimate internal action values for each trial and stage using computational modeling. We found attenuated sensitivity to action values in ADHD compared to controls, both in choice and reaction-time variability estimates. Remarkably, this was found only for first-stage actions (i.e., initiatory actions), while for actions performed just before outcome delivery the two groups were strikingly indistinguishable. These results suggest a difficulty in following value estimation for initiatory actions in ADHD.","link":"/opendata/katabi-shahar-2023/"},{"title":"Katyal et al. (2023)","text":"Individuals with anxiety and depression exhibit chronic metacognitive biases such as underconfidence. The origin of such biases is unknown. Here we quantified the impact of feedback valence on confidence in two large general population samples (N=230 and N=278). We studied metacognition both locally, as confidence in individual task instances, and globally, as longer run self-performance estimates. Global confidence was sensitive to both local confidence and feedback valence – more frequent positive (negative) feedback increased (respectively decreased) global confidence. Feedback valence impacted confidence in a domain-general fashion and also led to shifts in affective self-beliefs. Notably, global confidence was more sensitive to low (vs. high) local confidence in individuals with greater transdiagnostic anxious-depression symptomatology, despite sensitivity to feedback valence remaining intact. 
Together, our results reveal a mechanistic basis for chronic underconfidence in anxious-depression rooted in distorted interactions between local and global metacognition, while also highlighting potential for restoring confidence through targeted feedback.","link":"/opendata/katyal-et-al-2023/"},{"title":"Katzman et al. (2023)","text":"Across development, people seek to control their environments, often demonstrating a preference for contexts in which they have the opportunity to make choices. However, it is not clear how children, adolescents, and adults learn to calibrate this preference based on the costs and benefits of exerting control. Here, 92 participants between the ages of 10 and 25 completed a probabilistic reinforcement-learning task across contexts in which the instrumental value of control varied. On every trial, participants selected between two slot machines to try to gain the most reward. Critically, however, machine decisions were preceded by first-stage agency decisions in which participants decided whether to choose between the machines themselves or forgo agency and let a computer randomly select between them. On each trial, we manipulated the instrumental value of control by varying the reward probabilities of the machines, as well as an offer amount that participants would earn by forgoing agency. We found that across age, participants overvalued agency, but they also demonstrated a preference for control when it had greater instrumental value. Moreover, we found that sensitivity to the value of control increased with age, indicating developmental improvements in weighing the costs and benefits of agentic choice.","link":"/opendata/katzman-et-al-2023/"},{"title":"Kelly & Sharot (2021)","text":"Vast amounts of personalized information are now available to individuals. A vital research challenge is to establish how people decide what information they wish to obtain. 
Here, over five studies examining information-seeking in different domains we show that information-seeking is associated with three diverse motives. Specifically, we find that participants assess whether information is useful in directing action, how it will make them feel, and whether it relates to concepts they think of often. We demonstrate that participants integrate these assessments into a calculation of the value of information that explains information seeking or its avoidance. Different individuals assign different weights to these three factors when seeking information. Using a longitudinal approach, we find that the relative weights assigned to these information-seeking motives within an individual show stability over time, and are related to mental health as assessed using a battery of psychopathology questionnaires.","link":"/opendata/kelly-sharot-2021/"},{"title":"Kemp et al. (2022)","text":"The underlying vulnerability for schizophrenia-spectrum disorders is expressed across a continuum of clinical and subclinical symptoms and impairment referred to as schizotypy. Schizotypy is a multidimensional construct with positive, negative, and disorganized dimensions. Models of pathological personality provide useful frameworks for assessing the multidimensional structure of schizotypy. The present study examined the association of positive, negative, and disorganized schizotypy with pathological personality traits and facets assessed by the Personality Inventory for DSM-5 (PID-5) in 1,342 young adults. As hypothesized, positive schizotypy was largely associated with the PID-5 psychoticism domain and facets, negative schizotypy was primarily associated with the detachment domain and facets and the restricted affectivity facet, and disorganized schizotypy’s strongest associations were with the distractibility and eccentricity facets and the negative affect domain. 
Negative schizotypy was differentiated from the other two schizotypy dimensions in its pattern of correlations with the PID-5 domains and facets. As expected, positive and disorganized schizotypy exhibited a moderate degree of similarity in their associations. Overall, the PID-5 domains accounted for approximately half of the variance in each of the schizotypy dimensions, and the PID-5 facets accounted for upwards of two-thirds of the variance in each schizotypy dimension. The associations among the schizotypy and PID-5 measures did not appear to reflect highly redundant items across the measures. We conclude by providing regression-based algorithms for computing positive, negative, and disorganized schizotypy scores based on the PID-5 facets.","link":"/opendata/kemp-et-al-2022/"},{"title":"Keung et al. (2020)","text":"Divisive normalization has long been used to account for computations in various neural processes and behaviours. The model proposes that inputs into a neural system are divisively normalized by the system’s total activity. More recently, dynamical versions of divisive normalization have been shown to account for how neural activity evolves over time in value-based decision making. Despite its ubiquity, divisive normalization has not been studied in decisions that require evidence to be integrated over time. Such decisions are important when the information is not all available at once. A key feature of such decisions is how evidence is weighted over time, known as the integration kernel. Here, we provide a formal expression for the integration kernel in divisive normalization, and show that divisive normalization quantitatively accounts for 133 human participants’ perceptual decision making behaviour, performing as well as the state-of-the-art Drift Diffusion Model, the predominant model for perceptual evidence accumulation.","link":"/opendata/keung-et-al-2020/"},{"title":"Khaw et al. 
(2017)","text":"The data presented in this article are related to the research article entitled “Discrete Adjustment to a Changing Environment: Experimental Evidence” (Khaw et al., 2017) [1]. We present data from a laboratory experiment that asks subjects to forecast the outcome of a time-varying Bernoulli process. On a computer program, subjects draw rings with replacement from a virtual box containing green and red rings in an unknown proportion. Subjects provide their estimates of the probability of drawing a green ring. They are rewarded for their participation and for the accuracy of their estimates. The actual probability of drawing a green ring is initially drawn from a uniform distribution. It then changes intermittently throughout the session, and each subsequent probability is an independent draw from the uniform distribution. Each session involves 1000 ring draws. The dataset contains the values of the underlying probability, the sequence of ring draws that are realized, and the subjects’ estimates and response times. The dataset contains the performance of 11 subjects who each completed 10 sessions over the course of several days.","link":"/opendata/khaw-et-al-2017/"},{"title":"Kim et al. (2019)","text":"It has previously been shown that the relative reliability of model-based and model-free reinforcement-learning (RL) systems plays a role in the allocation of behavioral control between them. However, the role of task complexity in the arbitration between these two strategies remains largely unknown. Here, using a combination of novel task design, computational modelling, and model-based fMRI analysis, we examined the role of task complexity alongside state-space uncertainty in the arbitration process. Participants tended to increase model-based RL control in response to increasing task complexity. 
However, they resorted to model-free RL when both uncertainty and task complexity were high, suggesting that these two variables interact during the arbitration process. Computational fMRI revealed that task complexity interacts with neural representations of the reliability of the two systems in the inferior prefrontal cortex.","link":"/opendata/kim-et-al-2019/"},{"title":"Kim et al. (2021)","text":"Controllability perception significantly influences motivated behavior and emotion and requires an estimation of one’s influence on an environment. Previous studies have shown that an agent can infer controllability by observing contingency between one’s own action and outcome if there are no other outcome-relevant agents in an environment. However, if there are multiple agents who can influence the outcome, estimation of one’s genuine controllability requires exclusion of other agents’ possible influence. Here, we first investigated a computational and neural mechanism of controllability inference in a multi-agent setting. Our novel multi-agent Bayesian controllability inference model showed that other people’s action-outcome contingency information is integrated with one’s own action-outcome contingency to infer controllability, which can be explained as a Bayesian inference. Model-based functional MRI analyses showed that multi-agent Bayesian controllability inference recruits the temporoparietal junction (TPJ) and striatum. Then, this inferred controllability information was leveraged to increase motivated behavior in the vmPFC. These results generalize the previously known role of the striatum and vmPFC in single-agent controllability to multi-agent controllability, and this generalized role requires the TPJ in addition to the striatum of single-agent controllability to integrate both self- and other-related information. 
Finally, we identified an innate positive bias toward the self during the multi-agent controllability inference, which facilitated behavioral adaptation under volatile controllability. Furthermore, low positive bias and high negative bias were associated with increased daily feelings of guilt. Our results provide a mechanism of how our sense of controllability fluctuates due to other people in our lives, which might be related to social learned helplessness and depression.","link":"/opendata/kim-et-al-2021/"},{"title":"Kim et al. (2023)","text":"Older adults tend to be more prone to distraction compared to young adults and this age-related deficit has been attributed to a deficiency in inhibitory processing. However, recent findings challenge the notion that aging leads to global impairments in inhibition. To reconcile these mixed findings, we investigated how aging modulates multiple mechanisms of attentional control including goal-directed target orienting, proactive distractor suppression, attention capture, and reactive disengagement by tracking the timing and direction of eye movements. When engaged in feature-search mode and proactive distractor suppression, older adults made fewer first fixations to the target but inhibited the task-irrelevant salient distractor as effectively as did young adults. In contrast, task-irrelevant salient distractors captured older adults’ attention significantly more than younger adults’ attention during singleton-search mode and reactive distractor disengagement. In addition to elevated attention capture, older adults showed increased fixation times in orienting to the target, longer dwell times on incorrect saccades, and increased saccadic reaction times. Thus, older adults exhibited deficiencies in goal-directed attentional control, disengagement, and processing speeds, but preserved mechanisms of proactive distractor suppression. 
Our findings suggest that older adults are more prone to initiating reflexive, stimulus-driven saccades over goal-oriented saccades due to longer top-down processing requirements and shifts in attentional priority within the visual cortex. We propose that aging leads to dual shifts in mechanisms of top-down and bottom-up attentional control, but that older adults still preserve mechanisms of proactive inhibition.","link":"/opendata/kim-et-al-2023/"},{"title":"Kinley et al. (2023)","text":"Probabilistic models of delay discounting allow the estimation of discount functions without assuming that these functions describe sharp boundaries in decision making. However, existing probabilistic models allow for two implausible possibilities: first, that no reward might sometimes be preferred over some reward (e.g., $0 now over $100 in 1 year), and second, that the same reward might sometimes be preferred later rather than sooner (e.g., $100 in a year over $100 now). Here we show that probabilistic models of discounting perform better when they assign these cases a probability of 0. We demonstrate this result across a range of discount functions using nonlinear regression. We also introduce a series of generalized linear models that implicitly parameterize various discount functions, and demonstrate the same result for these.","link":"/opendata/kinley-et-al-2023/"},{"title":"Kinzuka et al. (2022)","text":"As time plays a fundamental role in our social activities, scholars have studied temporal perception since the earliest days of experimental psychology. Since the 1960s, the ubiquity of color has been driving research on the potential effects of the colors red and blue on temporal perception and on its underlying mechanism. However, the results have been inconsistent, which could be attributed to the difficulty of controlling physical properties such as hue and luminance within and between studies. 
Therefore, we conducted a two-interval duration-discrimination task to evaluate the perceived duration of color stimuli under different equiluminant conditions: subjective or pupillary light reflex (PLR)-based equiluminance. The results, based on psychometric functional analyses and simultaneous pupillary recordings, showed that the perceived duration of red was overestimated compared with blue even when the intensity of the stimulus was controlled based on subjective equiluminance (Experiment 1). However, since blue is known to induce a larger PLR than red despite equiluminance, we conducted a controlled study to distinguish the indirect effect of pupillary response on temporal perception. Interestingly, the effect observed in Experiment 1 faded when the luminance levels of the two stimuli were matched based on PLR response (Experiment 2). These results indicate that duration judgement can be affected not only by the hue but also by different equiluminance methods. Furthermore, this causality between the equiluminance method and temporal perception can be explained by the fluctuations in incident light entering the pupil.","link":"/opendata/kinzuka-et-al-2022/"},{"title":"Klaassen et al. (2021)","text":"Successful responding to acutely threatening situations requires adequate approach-avoidance decisions. However, it is unclear how threat-induced states, like freezing-related bradycardia, impact the weighing of the potential outcomes of such value-based decisions. Insight into the underlying computations is essential, not only to improve our models of decision-making but also to improve interventions for maladaptive decisions, for instance in anxiety patients and first-responders who frequently have to make decisions under acute threat. Forty-two participants made passive and active approach-avoidance decisions under threat-of-shock when confronted with mixed outcome-prospects (i.e., varying money and shock amounts). 
Choice behavior was best predicted by a model including individual action-tendencies and bradycardia, beyond the subjective value of the outcome. Moreover, threat-related bradycardia (high-vs-low threat) interacted with subjective value, depending on the action-context (passive-vs-active). Specifically, in action-contexts incongruent with participants’ intrinsic action-tendencies, stronger bradycardia related to diminished effects of subjective value on choice across participants. These findings illustrate the relevance of testing approach-avoidance decisions in relatively ecologically valid conditions of acute and primarily reinforced threat. These mechanistic insights into approach-avoidance conflict-resolution may inspire biofeedback-related techniques to optimize decision-making under threat. Critically, the findings demonstrate the relevance of incorporating internal psychophysiological states and external action-contexts into models of approach-avoidance decision-making.","link":"/opendata/klaassen-et-al-2021/"},{"title":"Klein et al. (2017)","text":"To decide optimally between available options, organisms need to learn the values associated with these options. Reinforcement learning models offer a powerful explanation of how these values are learnt from experience. However, human choices often violate normative principles. We suggest that seemingly counterintuitive decisions may arise as a natural consequence of the learning mechanisms deployed by humans. Here, using fMRI and a novel behavioural task, we show that, when suddenly switched to novel choice contexts, participants’ choices are incongruent with values learnt by standard learning algorithms. Instead, behaviour is compatible with the decisions of an agent learning how good an option is relative to an option with which it had previously been paired. Striatal activity exhibits the characteristics of a prediction error used to update such relative option values. 
Our data suggest that choices can be biased by a tendency to learn option values with reference to the available alternatives.","link":"/opendata/klein-et-al-2017/"},{"title":"Klever et al. (2022)","text":"Visual perception is not only shaped by sensitivity but also by confidence, i.e., the ability to estimate the accuracy of a visual decision. Younger observers have been reported to have access to a reliable measure of their own uncertainty when making visual decisions. This metacognitive ability might be challenged during ageing due to increasing sensory noise and decreasing cognitive control resources. We investigated age effects on visual confidence using a visual contrast discrimination task and a confidence forced-choice paradigm. Younger adults (19-38 years) showed significantly lower discrimination thresholds than older adults (60-78 years). To focus on confidence sensitivity above and beyond differences in discrimination performance, we estimated confidence efficiency that reflects the ability to distinguish good from bad perceptual decisions. Confidence efficiency was estimated by comparing thresholds obtained from all trials and trials that were judged with relatively higher confidence, respectively. In both age groups, high confidence judgments were associated with better visual performance, but confidence efficiency was reduced in older adults. However, we observed substantial variability across all participants. Controlling for age group, confidence efficiency was closely linked to individual differences in cognitive control capacities. Our findings provide evidence for age-related differences in confidence efficiency that present a specific challenge to perceptual performance in old age. We propose that these differences are driven by cognitive control capacities, supporting their crucial role for metacognitive efficiency.","link":"/opendata/klever-et-al-2022/"},{"title":"Klingelhoefer-Jens et al. 
(2023)","text":"Exposure to adverse childhood experiences (ACEs) is a strong predictor for developing behavioral, somatic and psychopathological conditions. Exposure to threat-related early adversity has been suggested to be specifically linked to altered emotional learning as well as changes in neural circuits involved in emotional responding and fear. Learning mechanisms are particularly interesting as they are central mechanisms through which environmental inputs shape emotional and cognitive processes and ultimately behavior. Multiple theories on the mechanisms underlying this association have been suggested which, however, differ in the operationalization of ACEs. 1,402 physically and mentally healthy participants underwent a fear conditioning paradigm including a fear acquisition and generalization phase while skin conductance responses (SCRs) and different subjective ratings were acquired. ACEs were retrospectively assessed through the childhood trauma questionnaire and participants were assigned to individuals exposed or unexposed to at least moderate adverse childhood experiences according to established cut-off criteria. In addition, we provide exploratory analyses aiming to shed light on different theoretical accounts on how ACEs impact individual risk profiles (i.e., cumulative risk account, specificity model, dimensional model). During fear acquisition training and generalization, we observed reduced discrimination in SCRs between the CS+ and the CS-, primarily due to reduced CS+ responding in exposed individuals. During fear generalization, no differences in generalization gradients were observed between exposed and unexposed individuals but generally blunted physiological responses in exposed individuals. No differences between the groups were observed in ratings in any of the experimental phases. 
The lower CS discrimination in SCRs in exposed individuals was evident across operationalizations according to the cumulative risk account, specificity as well as dimensional model. However, none of these theories showed clear explanatory superiority. Our results stand in stark contrast to typical patterns observed in patients suffering from anxiety and stress-related disorders (i.e., reduced CS discrimination due to increased responses to safety signals). Thus, reduced CS+ responding in individuals exposed to ACEs, yet not showing signs of psychopathology, may represent a specific characteristic of this resilient subgroup that warrants further investigation with respect to its relation to risk and resilience. In addition, we conclude that theories linking ACEs to psychopathology need refinement.","link":"/opendata/klingelhoefer-jens-et-al-2023/"},{"title":"Klingelhöfer-Jens et al. (2022)","text":"Here, we follow the call to target measurement reliability as a key prerequisite for individual-level predictions in translational neuroscience by investigating (1) longitudinal reliability at the individual and (2) group level, (3) internal consistency and (4) response predictability across experimental phases. One hundred and twenty individuals performed a fear conditioning paradigm twice 6 months apart. Analyses of skin conductance responses, fear ratings and blood oxygen level dependent functional magnetic resonance imaging (BOLD fMRI) with different data transformations and included numbers of trials were conducted. While longitudinal reliability was rather limited at the individual level, it was comparatively higher for acquisition but not extinction at the group level. Internal consistency was satisfactory. Higher responding in preceding phases predicted higher responding in subsequent experimental phases at a weak to moderate level depending on data specifications. 
In sum, the results suggest that while individual-level predictions are meaningful for (very) short time frames, they also call for more attention to measurement properties in the field.","link":"/opendata/klingelhofer-jens-et-al-2022/"},{"title":"Kobor et al. (2021)","text":"Both primarily and recently encountered information has been shown to influence experience-based risky decision making. The primacy effect predicts that initial experience will influence later choices even if outcome probabilities change and reward is ultimately more or less sparse than primarily experienced. However, it has not been investigated whether extended initial experience would induce a more profound primacy effect upon risky choices than brief experience. Therefore, the present study tested in two experiments whether young adults adjusted their risk-taking behavior in the Balloon Analogue Risk Task after an unsignaled and unexpected change point. The change point separated early good luck or bad luck trials from subsequent ones. While mostly positive (more reward) or mostly negative (no reward) events characterized the early trials, subsequent trials were unbiased. In Experiment 1, the change point occurred after one-sixth or one-third of the trials (brief vs. extended experience) without intermittence, whereas in Experiment 2, it occurred between separate task phases. In Experiment 1, if negative events characterized the early trials, after the change point, risk-taking behavior increased as compared with the early trials. Conversely, if positive events characterized the early trials, risk-taking behavior decreased after the change point. Although the adjustment of risk-taking behavior occurred due to integrating recent experiences, the impact of initial experience was simultaneously observed. The length of initial experience did not reliably influence the adjustment of behavior. 
In Experiment 2, participants became more prone to take risks as the task progressed, indicating that the impact of initial experience could be overcome. Altogether, we suggest that initial beliefs about outcome probabilities can be updated by recent experiences to adapt to the continuously changing decision environment.","link":"/opendata/kobor-et-al-2021/"},{"title":"Komar et al. (2023)","text":"The cognitive mechanisms underlying the animacy effect on free recall have yet to be identified. According to the attentional-prioritization account, animate words are better recalled because they recruit more attention at encoding than inanimate words. The account implies that the animacy effect should be larger when animate words are presented together with inanimate words in mixed lists or pairs than when animate and inanimate words are presented separately in pure lists or pairs. The present series of experiments served to systematically test whether list composition or pair composition modulate the animacy effect. In Experiment 1, the animacy effect was compared between mixed and pure lists. In Experiments 2 and 3, the words were presented in mixed or pure pairs to manipulate the direct competition for attention between animate and inanimate words at encoding. While encoding was intentional in Experiments 1 and 2, it was incidental in Experiment 3. In each experiment, a significant animacy effect was obtained, but the effect was equally large in mixed and pure lists or pairs of animate and inanimate words despite considerable sensitivity of the statistical test of the critical interaction. These findings provide evidence against the attentional-prioritization account of the animacy effect.","link":"/opendata/komar-et-al-2023/"},{"title":"Konovalov & Krajbich (2020)","text":"Converging evidence has demonstrated that humans exhibit two distinct strategies when learning in complex environments. 
One is model-free learning, i.e., simple reinforcement of rewarded actions, and the other is model-based learning, which considers the structure of the environment. Recent work has argued that people exhibit little model-based behavior unless it leads to higher rewards. Here we use mouse tracking to study model-based learning in stochastic and deterministic (pattern-based) environments of varying difficulty. In both tasks, participants’ mouse movements reveal that they learned the structures of their environments, despite the fact that standard behavior-based estimates suggested no such learning in the stochastic task. Thus, we argue that mouse tracking can reveal whether subjects have structure knowledge, which is necessary but not sufficient for model-based choice.","link":"/opendata/konovalov-krajbich-2020/"},{"title":"Konstantinidis et al. (2018)","text":"Recent experimental evidence in experience-based decision-making suggests that people are more risk seeking in the gains domain relative to the losses domain. This critical result is at odds with the standard reflection effect observed in description-based choice and explained by Prospect Theory. The so-called reversed-reflection effect has been predicated on the extreme-outcome rule, which suggests that memory biases affect risky choice from experience. To test the general plausibility of the rule, we conducted two experiments examining how the magnitude of prospective outcomes impacts risk preferences. We found that while the reversed-reflection effect was present with small-magnitude payoffs, using payoffs of larger magnitude brought participants’ behavior back in line with the standard reflection effect. Our results suggest that risk preferences in experience-based decision-making are not only affected by the relative extremeness but also by the absolute extremeness of past events.","link":"/opendata/konstantinidis-et-al-2018/"},{"title":"Kool et al. 
(2017)","text":"Human behavior is sometimes determined by habit and other times by goal-directed planning. Modern reinforcement-learning theories formalize this distinction as a competition between a computationally cheap but inaccurate model-free system that gives rise to habits and a computationally expensive but accurate model-based system that implements planning. It is unclear, however, how people choose to allocate control between these systems. Here, we propose that arbitration occurs by comparing each system’s task-specific costs and benefits. To investigate this proposal, we conducted two experiments showing that people increase model-based control when it achieves greater accuracy than model-free control, and especially when the rewards of accurate performance are amplified. In contrast, they are insensitive to reward amplification when model-based and model-free control yield equivalent accuracy. This suggests that humans adaptively balance habitual and planned action through on-line cost-benefit analysis.","link":"/opendata/kool-et-al-2017/"},{"title":"Kool et al. (2018)","text":"Decision-making algorithms face a basic tradeoff between accuracy and effort (i.e., computational demands). It is widely agreed that humans can choose between multiple decision-making processes that embody different solutions to this tradeoff: Some are computationally cheap but inaccurate, whereas others are computationally expensive but accurate. Recent progress in understanding this tradeoff has been catalyzed by formalizing it in terms of model-free (i.e., habitual) versus model-based (i.e., planning) approaches to reinforcement learning. Intuitively, if two tasks offer the same rewards for accuracy but one of them is much more demanding, we might expect people to rely on habit more in the difficult task: Devoting significant computation to achieve slight marginal accuracy gains would not be worth it. 
We test and verify this prediction in a sequential reinforcement learning task. Because our paradigm is amenable to formal analysis, it contributes to the development of a computational model of how people balance the costs and benefits of different decision-making processes in a task-specific manner; in other words, how we decide when hard thinking is worth it.","link":"/opendata/kool-et-al-2018/"},{"title":"Koppel et al. (2022)","text":"Most disciplines rely on economic games to measure prosocial behavior in controlled experimental settings. However, participants’ comprehension of these games might be lower than desirable, which complicates interpretation of results. We here assess subject comprehension of the payoff structure of five standard economic games commonly used to study prosocial behavior: the Dictator Game, Ultimatum Game, Trust Game, Public Goods Game, and Prisoner’s Dilemma. Participants were recruited from two online platforms: Prolific (n = 528) and CloudResearch (using the CloudResearch MTurk toolkit; n = 540). The Trust Game had the highest level of misunderstanding (70%), followed by the Public Goods Game and the Prisoner’s Dilemma (each at 52%), the Ultimatum Game (27%), and the Dictator Game (24%). Study platform was a significant predictor of misunderstanding in the Dictator Game, Ultimatum Game, and Public Goods Game, with greater misunderstanding on Prolific than CloudResearch. Incentivizing the comprehension questions had no significant impact on misunderstanding in any of the games. The only variable that significantly predicted misunderstanding across all games was numeracy, which was associated with lower misunderstanding. 
Finally, we found suggestive evidence in exploratory analyses that misunderstanding predicts greater contributions in the Public Goods Game (in line with previous studies) and in the Dictator Game, an increased likelihood of choosing the option that maximizes total payoff in the Prisoner’s Dilemma, and reduced back transfers in the Trust Game. These findings suggest that misunderstanding may be an important factor in explaining prosocial behavior and that reliance on standard one-shot games may lead researchers to overestimate the importance of social preferences.","link":"/opendata/koppel-et-al-2022/"},{"title":"Kosciessa et al. (2021)","text":"Knowledge about the relevance of environmental features can guide stimulus processing. However, it remains unclear how processing is adjusted when feature relevance is uncertain. We hypothesized that (a) heightened uncertainty would shift cortical networks from a rhythmic, selective processing-oriented state toward an asynchronous (“excited”) state that boosts sensitivity to all stimulus features, and that (b) the thalamus provides a subcortical nexus for such uncertainty-related shifts. Here, we had young adults attend to varying numbers of task-relevant features during EEG and fMRI acquisition to test these hypotheses. Behavioral modeling and electrophysiological signatures revealed that greater uncertainty lowered the rate of evidence accumulation for individual stimulus features, shifted the cortex from a rhythmic to an asynchronous/excited regime, and heightened neuromodulatory arousal. Crucially, this unified constellation of within-person effects was dominantly reflected in the uncertainty-driven upregulation of thalamic activity. We argue that neuromodulatory processes involving the thalamus play a central role in how the brain modulates neural excitability in the face of momentary uncertainty.","link":"/opendata/kosciessa-et-al-2021/"},{"title":"Kraft et al. 
(2022)","text":"Cognitive flexibility - the ability to adjust one’s behavior to changing environmental demands - is crucial for controlled behavior. However, the term cognitive flexibility is used heterogeneously, and associations between cognitive flexibility and other facets of flexible behavior have only rarely been studied systematically. To resolve some of these conceptual uncertainties, we directly compared cognitive flexibility (cue-instructed switching between two affectively neutral tasks), affective flexibility (switching between a neutral and an affective task using emotional stimuli), and feedback-based flexibility (non-cued, feedback-dependent switching between two neutral tasks). Three experimental paradigms were established that share as many procedural features (in terms of stimuli and/or task rules) as possible and administered in a pre-registered study plan (N = 100). Correlation analyses revealed significant associations between the efficiency of cognitive and affective task switching (response time switch costs). Feedback-based flexibility (measured as mean number of errors after rule reversals) did not correlate with task switching efficiency in the other paradigms, but selectively with the effectiveness of affective switching (error rate costs when switching from neutral to emotion task). While preregistered confirmatory factor analysis (CFA) provided no clear evidence for a shared factor underlying the efficiency of switching in all three domains of flexibility, an exploratory CFA suggested commonalities regarding switching effectiveness (accuracy-based switch costs). We propose shared mechanisms controlling the efficiency of cue-dependent task switching across domains, while the relationship to feedback-based flexibility may depend on mechanisms controlling switching effectiveness. 
Our results call for a more stringent conceptual differentiation between different variants of psychological flexibility.","link":"/opendata/kraft-et-al-2022/"},{"title":"Kriechbaum & Bäuml (2023)","text":"Retrieval practice performed shortly upon the encoding of information benefits recall of the retrieved information but causes forgetting of nonretrieved information. Here, we show that the forgetting effect on the nonretrieved information can quickly evolve into recall enhancement when retrieval practice is delayed. During a time window of twenty minutes upon the encoding of information, the forgetting effect observed shortly after encoding first disappeared and then turned into recall enhancement when the temporal lag between encoding and retrieval practice was prolonged. Strikingly, recall enhancement continued to emerge when retrieval practice was postponed up to one week. The results illustrate a fast transition from the forgetting of nonretrieved information to recall enhancement. This fast transition is of relevance for daily life, in which retrieval is often selective and delayed.","link":"/opendata/kriechbaum-bauml-2023/"},{"title":"Kristjansson et al. (2014)","text":"Predators are known to select food of the same type in non-random sequences or “runs” that are longer than would be expected by chance. If prey are conspicuous, predators will switch between available sources, interleaving runs of different prey types. However, when prey are cryptic, predators tend to focus on one food type at a time, effectively ignoring equally available sources. This latter finding is regarded as a key indicator that animal foraging is strongly constrained by attention. It is unknown whether human foraging is equally constrained. Here, using a novel iPad task, we demonstrate for the first time that it is. Participants were required to locate and touch 40 targets from 2 different categories embedded within a dense field of distractors. 
When individual target items “popped out”, search was organized into multiple runs, with frequent switching between target categories. In contrast, as soon as focused attention was required to identify individual targets, participants typically exhausted one entire category before beginning to search for the other. This commonality in animal and human foraging is compelling given the additional cognitive tools available to humans, and suggests that attention constrains search behavior in a similar way across a broad range of species.","link":"/opendata/kristjansson-et-al-2014/"},{"title":"Kristjansson & Kristjansson (2023)","text":"Go/No-Go responses in visual search yield different estimates of the properties of visual search than more standard present and absent tasks. Such minor methodological tweaks have a surprisingly large effect on measures that have, for the last half-century or so, formed the backbone of prominent theories of visual attention. Secondly, priming effects in visual search have a dominating influence on visual search, accounting for effects that have been attributed to top-down guidance in standard theories. Priming effects in visual search have never been investigated for Go/No-Go present/absent decisions. Here, Go/No-Go tasks were used to assess visual search for an odd-one-out face, defined either by color or facial expression. The Go/No-Go responses for the color-based task were very fast for both present and absent, and interestingly, “Go” responses were even faster for the target-absent case. The “Go” responses were, on the other hand, much slower for expression and slowed further with increased set-size, particularly for the target-absent response. Priming effects were considerable for the feature search, but for expression, priming was strong for target-absent trials yet very weak for target-present trials, arguing that repetition priming for this search mainly reflects priming of context rather than target features. 
Overall, the results reinforce the point that Go/No-Go tasks are highly informative for theoretical accounts of visual attention, and cast a new light on attentional priming.","link":"/opendata/kristjansson-kristjansson-2023/"},{"title":"Kroker et al. (2022)","text":"The framing-effect is a bias that affects decision-making depending on whether the available options are presented with positive or negative connotations. Even when the outcome of two choices is equivalent, people have a strong tendency to avoid the negatively framed option because losses are perceived about twice as salient as gains of the same amount (i.e. loss-aversion). The ventromedial prefrontal cortex (vmPFC) is crucial for rational decision-making, and dysfunctions in this region have been linked to cognitive biases, impulsive behavior and gambling addiction. Using a financial decision-making task in combination with magnetoencephalographic neuroimaging, we show that excitatory compared to inhibitory non-invasive transcranial direct current stimulation (tDCS) of the vmPFC reduces framing-effects while improving the assessment of loss-probabilities, ultimately leading to increased overall gains. Behavioral and neural data consistently suggest that this improvement in rational decision-making is predominately a consequence of reduced loss-aversion. These findings recommend further research towards clinical applications of vmPFC-tDCS in addictive disorders.","link":"/opendata/kroker-et-al-2022/"},{"title":"Kucina et al. (2022)","text":"Standard, well-established cognitive tasks that produce reliable effects in group comparisons also lead to unreliable measurement when assessing individual differences. This “reliability paradox” has been demonstrated in decision-conflict tasks such as the Simon, Flanker, and Stroop tasks, which measure various aspects of cognitive control. 
We aimed to address this paradox by implementing carefully calibrated versions of the standard tests with an additional manipulation to encourage processing of conflicting information, as well as combinations of standard tasks. A series of experiments concluded that a Flanker task and a combined Simon and Stroop task with the additional manipulation produced reliable estimates of individual differences in under 100 trials per task, which markedly improves on the reliability seen in benchmark Flanker, Simon, and Stroop data. We make the new tasks freely available and discuss both theoretical and applied implications regarding how the cognitive testing of individual differences is carried out.","link":"/opendata/kucina-et-al-2022/"},{"title":"Kuhrt et al. (2023)","text":"Contrary to the law of less work, individuals with high levels of need for cognition and self-control tend to choose harder tasks more often. While both traits can be integrated into a core construct of dispositional cognitive effort investment, its relation to actual cognitive effort investment remains unclear. As individuals with high levels of cognitive effort investment are characterized by a high intrinsic motivation towards effortful cognition, they would be less likely to increase their effort based on expected payoff, but rather based on increasing demand. In the present study, we measured actual effort investment on multiple dimensions, i.e., subjective load, reaction time, accuracy, early and late frontal midline theta power, N2 and P3 amplitude, and pupil dilation. In a sample of N = 148 participants, we examined the relationship of dispositional cognitive effort investment and effort indices during a flanker and an n-back task with varying demand and payoff. Exploratorily, we examined this relationship for the two subdimensions cognitive motivation and effortful self-control as well. In both tasks, effort indices were sensitive to demand and partly to payoff. 
The analyses revealed a main effect of cognitive effort investment for accuracy (n-back task), interaction effects with payoff for reaction time (n-back and flanker task) and P3 amplitude (n-back task), and with demand for early frontal midline theta power (flanker task). Taken together, our results partly support the notion that individuals with high levels of cognitive effort investment exert effort more efficiently. Moreover, the notion that these individuals exert effort regardless of payoff is partly supported, too. This may further our understanding of the conditions under which person-situation interactions occur, i.e., the conditions under which situations determine effort investment in goal-directed behavior more than personality, and vice versa.","link":"/opendata/kuhrt-et-al-2023/"},{"title":"Kumano et al. (2021)","text":"In everyday life, people sometimes find themselves making decisions on behalf of others, taking risks on another’s behalf, accepting the responsibility for these choices and possibly suffering regret for what they could have done differently. Previous research has extensively studied how people deal with risk when making decisions for others or when being observed by others. Here, we asked whether making decisions for present others is affected by regret avoidance. We studied value-based decision making under uncertainty, manipulating both whether decisions benefited the participant or a partner (beneficiary effect) and whether the partner watched the participant’s choices (audience effect) and their factual and counterfactual outcomes. Computational behavioural analysis revealed that participants were less mindful of regret (and more strongly driven by bigger risks) when choosing for others vs for themselves. Conversely, they chose more conservatively (regarding both regret and risk) when being watched vs alone. 
The effects of beneficiary and audience on anticipated regret counteracted each other, suggesting that participants’ financial and reputational interests impacted the feeling of regret independently.","link":"/opendata/kumano-et-al-2021/"},{"title":"Kuper-Smith & Korn (2023)","text":"Decisions in social dilemmas lead to outcomes for oneself and others. These outcomes can be gains or losses, yet we lack a full understanding of how people’s decisions depend on which outcomes are above or below zero. We systematically varied whether the outcomes of social dilemmas (Prisoner’s Dilemma, Stag-Hunt, Chicken) were losses, gains, or combinations thereof. Across 7 experiments (4 preregistered; N Offline = 197, N Online = 1,653), participants consistently tried to avoid losses altogether (loss avoidance), but they did not try to minimise losses (loss aversion). If cooperation avoided losses, people cooperated more; if defection avoided losses, people defected more, even if this imposed a loss on someone else. Loss avoidance was larger for one-shot than for iterated games and was present in all games studied. Our results suggest that loss avoidance, rather than loss aversion, systematically influences how people cooperate.","link":"/opendata/kuper-smith-korn-2023/"},{"title":"Kurtenbach et al. 2022","text":"Performance during instrumental learning is commonly believed to reflect the knowledge that has been acquired up to that point. However, recent work in rodents found that instrumental performance was enhanced during periods when reinforcement was withheld, relative to periods when reinforcement was provided. This suggests that reinforcement may mask acquired knowledge and lead to impaired performance. In the present study, we investigated whether such a beneficial effect of removing reinforcement translates to humans. 
Specifically, we tested whether performance during learning was improved during non-reinforced relative to reinforced task periods using signal detection theory and a computational modelling approach. To this end, 60 healthy volunteers performed a novel visual go/no-go learning task with deterministic reinforcement. To probe acquired knowledge in the absence of reinforcement, we interspersed blocks without feedback. In these non-reinforced task blocks, we found an increased d′, indicative of enhanced instrumental performance. However, computational modelling showed that this improvement in performance was not due to an increased sensitivity of decision making to learnt values, but to a more cautious mode of responding, as evidenced by a reduction of a general response bias. Together with an initial tendency to act, this is sufficient to drive differential changes in hit and false alarm rates that jointly lead to an increased d′. To conclude, the improved instrumental performance in the absence of reinforcement observed in studies using asymmetrically reinforced go/no-go tasks may reflect a change in response bias rather than unmasking latent knowledge.","link":"/opendata/kurtenbach-et-al-2022/"},{"title":"Kvam et al. (2023)","text":"People discount both future outcomes that could happen and past outcomes that could have happened according to how far away they are in time. A common finding is that future outcomes are often preferred to past ones when the payoffs and temporal distance (how long ago/until they occur) are matched, referred to as temporal value asymmetry. In this article, we examine the consistency of this effect by testing how manipulating the magnitude and delays of past and future payoffs affects participants’ choices, and challenge the claim that differences in value are primarily due to differences in discounting rates for past and future events. 
We find reversals of the temporal value asymmetry when payoffs are low and when temporal distance is large, suggesting that people have different sensitivity to the magnitude of past and future payoffs. We show that these effects can be accommodated in a direct difference model of intertemporal choice but not in the most common discounting models (hyperboloid), suggesting that both temporal distance and payoff magnitude carry independent influences on the subjective value of past and future outcomes. Finally, we explore how these tendencies to represent past and future outcome values are related to one another and to individual differences in personality and psychological traits, showing how these measures cluster according to whether they measure processes related to past/future events, payoffs/delays, and whether they are behavioral/self-report measures.","link":"/opendata/kvam-et-al-2023/"},{"title":"Kwak et al. (2023)","text":"Visual perception is limited by spatial resolution, the ability to discriminate fine details. Spatial resolution not only declines with eccentricity but also differs for polar angle locations around the visual field, also known as ‘performance fields’. To compensate for poor peripheral resolution, we make rapid eye movements (saccades) to bring peripheral objects into high-acuity foveal vision. Already before saccade onset, visual attention shifts to the saccade target location and prioritizes visual processing. This presaccadic shift of attention improves performance in many visual tasks, but whether it changes resolution is unknown. Here, we investigated whether presaccadic attention sharpens peripheral spatial resolution; and if so, whether such an effect interacts with performance field asymmetries. We measured acuity thresholds in an orientation discrimination task during fixation and saccade preparation around the visual field. 
The results revealed that presaccadic attention sharpens acuity, which can facilitate a smooth transition from peripheral to foveal representation. This acuity enhancement is similar across the four cardinal locations; thus, the typically robust effect of presaccadic attention does not change polar angle differences in resolution.","link":"/opendata/kwak-et-al-2023/"},{"title":"Lamba et al. (2020)","text":"Very little is known about how individuals learn under uncertainty when other people are involved. We propose that humans are particularly tuned to social uncertainty, which is especially noisy and ambiguous. Individuals exhibiting less tolerance for uncertainty, such as those with anxiety, may have greater difficulty learning in uncertain social contexts and therefore provide an ideal test population to probe learning dynamics under uncertainty. Using a dynamic trust game and a matched nonsocial task, we found that healthy subjects (n = 257) were particularly good at learning under negative social uncertainty, swiftly figuring out when to stop investing in an exploitative social partner. In contrast, subjects with anxiety (n = 97) overinvested in exploitative partners. Computational modeling attributed this pattern to a selective reduction in learning from negative social events and a failure to enhance learning as uncertainty rises, two mechanisms that likely facilitate adaptive social choice.","link":"/opendata/lamba-et-al-2020/"},{"title":"Lamba et al. (2023)","text":"People learn adaptively from feedback, but the rate of such learning differs drastically across individuals and contexts. Here we examine whether this variability reflects differences in what is learned. Leveraging a neurocomputational approach that merges fMRI and an iterative reward learning task, we link the specificity of credit assignment (how well people are able to appropriately attribute outcomes to their causes) to the precision of neural codes in the prefrontal cortex (PFC). 
Participants credit task-relevant cues more precisely in social compared to nonsocial contexts, a process that is mediated by high-fidelity (i.e., distinct and consistent) state representations in the PFC. Specifically, the medial PFC and orbitofrontal cortex work in concert to match the neural codes from feedback to those at choice, and the strength of these common neural codes predicts credit assignment precision. Together, this work provides a window into how neural representations drive adaptive learning.","link":"/opendata/lamba-et-al-2023/"},{"title":"Lange et al. (2021)","text":"Making good decisions requires updating beliefs according to new evidence. This is a dynamical process that is prone to biases: in some cases, beliefs become entrenched and resistant to new evidence (leading to primacy effects), while in other cases, beliefs fade over time and rely primarily on later evidence (leading to recency effects). How and why either type of bias dominates in a given context is an important open question. Here, we study this question in classic perceptual decision-making tasks, where, puzzlingly, previous empirical studies differ in the kinds of biases they observe, ranging from primacy to recency, despite seemingly equivalent tasks. We present a new model, based on hierarchical approximate inference and derived from normative principles, that not only explains both primacy and recency effects in existing studies, but also predicts how the type of bias should depend on the statistics of stimuli in a given task. We verify this prediction in a novel visual discrimination task with human observers, finding that each observer’s temporal bias changed as the result of changing the key stimulus statistics identified by our model. The key dynamic that leads to a primacy bias in our model is an overweighting of new sensory information that agrees with the observer’s existing belief, a type of confirmation bias. 
By fitting an extended drift-diffusion model to our data, we rule out an alternative explanation for primacy effects due to bounded integration. Taken together, our results resolve a major discrepancy among existing perceptual decision-making studies, and suggest that a key source of bias in human decision-making is approximate hierarchical inference.","link":"/opendata/lange-et-al-2021/"},{"title":"Larsen et al. (2022)","text":"Hallucinations are characterized by disturbances of perceptual processes involved in decision-making about environmental stimuli. Here, we examine whether cognitive and computational processes by which sensory information is integrated may offer insight into the perceptual mechanisms of hallucinatory symptoms. We used a multi-element perceptual averaging task in which observers made dichotomous judgments about the “average color” (red or blue) of an array of stimuli in trials that varied in the strength (mean) and reliability (variance) of the decision-relevant perceptual evidence. Generally, observers excluded or down-weighted extreme (outlying) perceptual evidence akin to a statistician excluding outlying data points; however, individuals prone to hallucinations afforded more weight to more extreme or untrustworthy evidence. Computational modeling showed that individuals prone to hallucinations tended not to use the optimal model in which evidence is integrated as a function of the log odds of each perceptual option, leading to “robust averaging”. Finally, observers generally adapted to trials with unreliable evidence by increasingly downweighting extreme evidence, but the weighting strategy in hallucination-prone individuals remained insensitive to the reliability of evidence. By showing that hallucination proneness is associated with reduced attenuation of untrustworthy evidence in perceptual decision-making, our findings suggest a novel perceptual mechanism underlying hallucinations. 
Our findings also provide support for the view that hallucination-proneness relates to alterations in the perceptual systems that track statistical regularities in environmental stimuli.","link":"/opendata/larsen-et-al-2022/"},{"title":"Latzman et al. (2020)","text":"A robust literature demonstrates that psychopathology and personality pathology are well-represented within quantitatively-derived, hierarchical dimensional models. Nevertheless, the location of core traits comprising psychopathic personality (psychopathy) as defined by the triarchic model has not been clearly explicated. We extended hierarchical structural models of personality pathology to include triarchic psychopathy trait dimensions (boldness, meanness, disinhibition) to interface the hierarchical framework of pathological personality dimensions with basic psychopathy trait dimensions. Using data from a racially diverse undergraduate sample (N = 749), “bass-ackwards” analyses revealed a coherently organized hierarchical structure of personality pathology. Psychopathy dimensions were clearly situated within levels of the hierarchy extending beyond the highest, undifferentiated general-factor level. A broad externalizing factor emerged at level 2, which bifurcated into callous-disinhibition and fearless dominance subfactors at level 3 – encompassing psychopathic traits of meanness and disinhibition (thought to represent the trait commonality between antisocial personality disorder and psychopathy) and boldness (thought to differentiate psychopathy from antisocial personality disorder), respectively, at the final two levels of the hierarchy. These results position triarchic psychopathy traits within an extended externalizing spectrum that accommodates boldness-related content.","link":"/opendata/latzman-et-al-2020/"},{"title":"Lawrence et al. (2022)","text":"Theoretical accounts have linked anxiety to intolerance of ambiguity. However, this relationship has not been well operationalized empirically. 
Here, we used computational and neuroimaging methods to characterize anxiety-related differences in aversive decision-making under ambiguity and associated patterns of cortical activity. Adult human participants chose between two urns on each trial. The ratio of tokens (Os and Xs) in each urn determined the probability of electrical stimulation receipt. A number above each urn indicated the magnitude of stimulation that would be received if a shock was delivered. On ambiguous trials, one of the two urns had tokens occluded. By varying the number of tokens occluded, we manipulated the extent of missing information. At higher levels of missing information, there is greater second-order uncertainty, i.e., more uncertainty as to the probability of pulling a given type of token from the urn. Adult human participants demonstrated avoidance of ambiguous options, which increased with level of missing information. Extent of information-level dependent ambiguity aversion was significantly positively correlated with trait anxiety. Activity in both the dorsal anterior cingulate cortex and inferior frontal sulcus during the decision-making period increased as a function of missing information. Greater engagement of these regions, on high missing information trials, was observed when participants went on to select the ambiguous option; this was especially apparent in high trait anxious individuals. These findings are consistent with individuals vulnerable to anxiety requiring greater activation of frontal regions supporting rational decision-making to overcome a predisposition to engage in ambiguity avoidance at high levels of missing information.","link":"/opendata/lawrence-et-al-2022/"},{"title":"Le-Bouc & Pessiglione (2022)","text":"Humans procrastinate despite being aware of potential adverse consequences. Yet, the neurocomputational mechanisms underlying procrastination remain poorly understood. 
Here, we use fMRI during intertemporal choice to inform a computational model that predicts procrastination behavior in independent tests. Procrastination is assessed in the laboratory as the preference for performing an effortful task on the next day as opposed to immediately, and at home as the delay taken in returning completed administrative forms. These procrastination behaviors are respectively modeled as unitary and repeated decisions to postpone a task until the next time step, based on a net expected value that integrates reward and effort attributes, both discounted with delay. The key feature that is associated with procrastination behavior across individuals (both in-lab and at-home) is the extent to which the expected effort cost (signaled by the dorsomedial prefrontal cortex) is attenuated by the delay before task completion. Thus, procrastination might stem from a cognitive bias that would make doing a task later (compared to now) appear as much less effortful but not much less rewarding.","link":"/opendata/le-bouc-pessiglione-2022/"},{"title":"Le-Pelley & Newell (2023)","text":"In two experiments we examined the influence of ‘history-driven’ attentional biases on choice behavior. In Experiment 1 we used a value-modulated attentional capture procedure to induce an automatic reward-related attentional bias, and found that this bias shaped choice in a subsequent task in which participants were required to pick the highest number from a briefly displayed choice array. In Experiment 2 we investigated the influence of a working memory manipulation, and found that choice in the number-selection task was influenced by the current (and prior) contents of memory, consistent with an influence of memory-driven attentional bias on information encoding. 
Our findings indicate that history-driven attentional biases can translate to an influence on overt, downstream processes of behavioural choice, and should be incorporated into models of the interaction between attention and choice.","link":"/opendata/le-pelley-newell-2023/"},{"title":"Lefebvre et al. (2017)","text":"When forming and updating beliefs about future life outcomes, people tend to consider good news and to disregard bad news. This tendency is assumed to support the optimism bias. Whether this learning bias is specific to ‘high-level’ abstract belief update or a particular expression of a more general ‘low-level’ reinforcement learning process is unknown. Here we report evidence in favour of the second hypothesis. In a simple instrumental learning task, participants incorporated better-than-expected outcomes at a higher rate than worse-than-expected ones. In addition, functional imaging indicated that inter-individual difference in the expression of optimistic update corresponds to enhanced prediction error signalling in the reward circuitry. Our results constitute a step towards the understanding of the genesis of optimism bias at the neurocomputational level.","link":"/opendata/lefebvre-et-al-2017/"},{"title":"Lehmann et al. (2019)","text":"In many daily tasks, we make multiple decisions before reaching a goal. In order to learn such sequences of decisions, a mechanism to link earlier actions to later reward is necessary. Reinforcement learning (RL) theory suggests two classes of algorithms solving this credit assignment problem: In classic temporal-difference learning, earlier actions receive reward information only after multiple repetitions of the task, whereas models with eligibility traces reinforce entire sequences of actions from a single experience (one-shot). Here, we show one-shot learning of sequences. We developed a novel paradigm to directly observe which actions and states along a multi-step sequence are reinforced after a single reward. 
By focusing our analysis on those states for which RL with and without eligibility trace make qualitatively distinct predictions, we find direct behavioral (choice probability) and physiological (pupil dilation) signatures of reinforcement learning with eligibility trace across multiple sensory modalities.","link":"/opendata/lehmann-et-al-2019/"},{"title":"Lei & Solway (2022)","text":"A large literature has accumulated suggesting that human and animal decision making is driven by at least two systems, and that important functions of these systems can be captured by reinforcement learning algorithms. The “model-free” system caches and uses stimulus-value or stimulus-response associations, and the “model-based” system implements more flexible planning using a model of the world. However, it is not clear how the two systems interact during deliberation and how a single decision emerges from this process, especially when they disagree. Most previous work has assumed that while the systems operate in parallel, they do so independently, and they combine linearly to influence decisions. Using an integrated reinforcement learning/drift-diffusion model, we tested the hypothesis that the two systems interact in a non-linear fashion similar to other situations with cognitive conflict. We differentiated two forms of conflict: action conflict, a binary state representing whether the systems disagreed on the best action, and value conflict, a continuous measure of the extent to which the two systems disagreed on the difference in value between the available options. We found that decisions with greater value conflict were characterized by reduced model-based control and increased caution both with and without action conflict. Action conflict itself (the binary state) acted in the opposite direction, although its effects were less prominent. 
We also found that between-system conflict was highly correlated with within-system conflict, and although it is less clear a priori why the latter might influence the strength of each system above its standard linear contribution, we could not rule it out. Our work highlights the importance of non-linear conflict effects, and provides new constraints for more detailed process models of decision making. It also presents new avenues to explore with relation to disorders of compulsivity, where an imbalance between systems has been implicated.","link":"/opendata/lei-solway-2022/"},{"title":"Leng et al. (2021)","text":"To invest effort into any cognitive task, people must be sufficiently motivated. Whereas prior research has focused primarily on how the cognitive control required to complete these tasks is motivated by the potential rewards for success, it is also known that control investment can be equally motivated by the potential negative consequence for failure. Previous theoretical and experimental work has yet to examine how positive and negative incentives differentially influence the manner and intensity with which people allocate control. Here, we develop and test a normative model of control allocation under conditions of varying positive and negative performance incentives. Our model predicts, and our empirical findings confirm, that rewards for success and punishment for failure should differentially influence adjustments to the evidence accumulation rate versus response threshold, respectively. This dissociation further enabled us to infer how motivated a given person was by the consequences of success versus failure.","link":"/opendata/leng-et-al-2021/"},{"title":"Leong et al. (2021)","text":"People’s perceptual reports are biased toward percepts they are motivated to see. The arousal system coordinates the body’s response to motivationally significant events and is well positioned to regulate motivational effects on perceptual judgments. 
However, it remains unclear whether arousal would enhance or reduce motivational biases. Here, we measured pupil dilation as a measure of arousal while participants (N = 38) performed a visual categorization task. We used monetary bonuses to motivate participants to perceive one category over another. Even though the reward-maximizing strategy was to perform the task accurately, participants were more likely to report seeing the desirable category. Furthermore, higher arousal levels were associated with making motivationally biased responses. Analyses using computational models suggested that arousal enhanced motivational effects by biasing evidence accumulation in favor of desirable percepts. These results suggest that heightened arousal biases people toward what they want to see and away from an objective representation of the environment.","link":"/opendata/leong-et-al-2021/"},{"title":"Lewis et al. (2023)","text":"In cost-benefit decision-making, women and men often show different trade-offs. However, surprisingly little is known about sex differences in instrumental tasks, where physical effort is exerted to gain rewards. To this end, we tested 81 individuals (47 women) with an effort allocation task, where participants had to repeatedly press a button to collect food and money tokens. We analyzed the motivational phases of invigoration and effort maintenance with varying reward magnitude, difficulty, and reward type. Whereas women and men did not differ in invigoration, we found that women showed higher effort maintenance as well as higher subjective wanting and exertion ratings for small rewards compared with men. Notably, men increased their effort more than women for higher rewards to match women’s levels of performance. Crucially, we found no sex differences depending on reward type or difficulty, indicating that sex differences were specific to the encoding of the magnitude of benefits, not costs. 
To summarize, women exerted higher physical effort for small rewards, which corresponded with an elevated subjective value in women compared with men. Therefore, sex differences in perceived reward magnitude may contribute to differential behavioral preferences, highlighting the potential of cost-benefit decision-making to provide insights about potential mechanisms.","link":"/opendata/lewis-et-al-2023/"},{"title":"Li & Ma (2020)","text":"Decision confidence reflects our ability to evaluate the quality of decisions and guides subsequent behavior. Experiments on confidence reports have almost exclusively focused on two-alternative decision-making. In this realm, the leading theory is that confidence reflects the probability that a decision is correct (the posterior probability of the chosen option). There is, however, another possibility, namely that people are less confident if the best two options are closer to each other in posterior probability, regardless of how probable they are in absolute terms. This possibility has not previously been considered because in two-alternative decisions, it reduces to the leading theory. Here, we test this alternative theory in a three-alternative visual categorization task. We found that confidence reports are best explained by the difference between the posterior probabilities of the best and the next-best options, rather than by the posterior probability of the chosen (best) option alone, or by the overall uncertainty (entropy) of the posterior distribution. Our results upend the leading notion of decision confidence and instead suggest that confidence reflects the observer’s subjective probability that they made the best possible decision.","link":"/opendata/li-ma-2020/"},{"title":"Li & Ma (2021)","text":"When people view a consumable item for a longer amount of time, they choose it more frequently; this also seems to be the direction of causality. 
The leading model of this effect is a drift-diffusion model with a fixation-based attentional bias. Here, we propose an explicitly Bayesian account for the same data. This account is based on the notion that the brain builds a posterior belief over the value of an item in the same way it would over a sensory variable. As the agent gathers evidence about the item from sensory observations and from retrieved memories, the posterior distribution narrows. We further postulate that the utility of an item is a weighted sum of the posterior mean and the negative posterior standard deviation, with the latter accounting for risk aversion. Fixating for longer can increase or decrease the posterior mean, but will inevitably lower the posterior standard deviation. This model fits the data better than the original attentional drift-diffusion model but worse than a variant with a collapsing bound. We discuss the often overlooked technical challenges in fitting models simultaneously to choice and response time data in the absence of an analytical expression. Our results hopefully contribute to emerging accounts of valuation as an inference process.","link":"/opendata/li-ma-2021/"},{"title":"Li et al. (2022)","text":"While making decisions, we often rely on past experiences to guide our choices. However, not all experiences are remembered equally well, and some elements of an experience are more memorable than others. Thus, the intrinsic memorability of past experiences may bias our decisions. Here, we hypothesized that individuals would tend to choose more memorable options than less memorable ones. We investigated the effect of item memorability on choice in two experiments. First, using food images, we found that the same items were consistently remembered, and others consistently forgotten, across participants. 
However, contrary to our hypothesis, we found that participants did not prefer or choose the more memorable over the less memorable items when choice options were matched for the individuals’ valuation of the items. Second, we replicated these findings in an alternate stimulus domain, using words that described the same food items. These findings suggest that stimulus memorability does not play a significant role in determining choice based on subjective value.","link":"/opendata/li-et-al-2022/"},{"title":"Li & McClelland (2022)","text":"When we plan for long-range goals, proximal information cannot be exploited in a blindly myopic way, as relevant future information must also be considered. But when a subgoal must be resolved first, irrelevant future information should not interfere with the processing of more proximal, subgoal-relevant information. We explore the idea that decision making in both situations relies on the flexible modulation of the degree to which different pieces of information under consideration are weighted, rather than explicitly decomposing a problem into smaller parts and solving each part independently. We asked participants to find the shortest goal-reaching paths in mazes and modeled their initial path choices as a noisy, weighted information integration process. In a base task where choosing the optimal initial path required weighting starting-point and goal-proximal factors equally, participants did take both constraints into account, with participants who made more accurate choices tending to exhibit more balanced weighting. The base task was then embedded as an initial subtask in a larger maze, where the same two factors constrained the optimal path to a subgoal, and the final goal position was irrelevant to the initial path choice. In this more complex task, participants’ choices reflected predominant consideration of the subgoal-relevant constraints, but also some influence of the initially-irrelevant final goal. 
More accurate participants placed much less weight on the optimality-irrelevant goal and again tended to weight the two initially-relevant constraints more equally. These findings suggest that humans may rely on a graded, task-sensitive weighting of multiple constraints to generate approximately optimal decision outcomes in both hierarchical and non-hierarchical goal-directed tasks.","link":"/opendata/li-mcclelland-2022/"},{"title":"Li et al. (2023)","text":"The hypothesis that individuals can accurately represent temporal information within approximately 3 s is the premise of several theoretical models and empirical studies in the field of temporal processing. The significance of accurately representing time within 3 s and the universality of the overestimation contrast dramatically. To clarify whether this overestimation arises from an inability to accurately represent time or a response bias, we systematically examined whether feedback reduces overestimation at the 3 temporal processing stages of timing (encoding), working memory, and decisions proposed by the scalar timing model. Participants reproduced the time interval between 2 circles with or without feedback, while the electroencephalogram (EEG) was synchronously recorded. Behavioral results showed that feedback shortened reproduced times and significantly minimized overestimation. EEG results showed that feedback significantly decreased the amplitude of contingent negative variation (CNV) in the decision stage but did not modulate the CNV amplitude in the encoding stage or the P2-P3b amplitudes in the working memory stage. These results suggest that overestimation arises from response bias when individuals convert an accurate representation of time into behavior. 
Our study provides electrophysiological evidence to support the conception that short intervals under approximately 3 s can be accurately represented as “temporal gestalt.”","link":"/opendata/li-et-al-2023/"},{"title":"Ligneul (2019)","text":"The Iowa Gambling Task (IGT) is one of the most common paradigms used to assess decision-making and executive functioning in neurological and psychiatric disorders. Several reinforcement-learning (RL) models were recently proposed to refine the qualitative and quantitative inferences that can be made about these processes based on IGT data. Yet, these models do not account for the complex exploratory patterns which characterize participants’ behavior in the task. Using a dataset of more than 500 subjects, we demonstrate the existence of sequential exploration in the IGT and we describe a new computational architecture disentangling exploitation, random exploration and sequential exploration in this large population of participants. The new Value plus Sequential Exploration (VSE) architecture provided a better fit than previous models. Parameter recovery, model recovery and simulation analyses confirmed the superiority of the VSE scheme. Furthermore, using the VSE model, we confirmed the existence of a significant reduction in directed exploration across lifespan in the IGT, as previously reported with other paradigms. Finally, we provide a user-friendly toolbox enabling researchers to easily and flexibly fit computational models on the IGT data, hence promoting reanalysis of the numerous datasets acquired in various populations of patients and contributing to the development of computational psychiatry.","link":"/opendata/ligneul-2019/"},{"title":"Ligneul et al. (2022)","text":"Estimating the controllability of the environment enables agents to better predict upcoming events and decide when to engage controlled action selection. How does the human brain estimate controllability? 
Trial-by-trial analyses of choices, decision times and neural activity in an explore-and-predict task demonstrate that humans solve this problem by comparing the predictions of an actor model with those of a reduced spectator model of their environment. Neural blood oxygen level-dependent responses within striatal and medial prefrontal areas tracked the instantaneous difference in the prediction errors generated by these two statistical learning models. Blood oxygen level-dependent activity in the posterior cingulate, temporoparietal and prefrontal cortices covaried with changes in estimated controllability. Exposure to inescapable stressors biased controllability estimates downward and increased reliance on the spectator model in an anxiety-dependent fashion. Taken together, these findings provide a mechanistic account of controllability inference and its distortion by stress exposure.","link":"/opendata/ligneul-et-al-2022/"},{"title":"Lin et al. (2020)","text":"Neutral stimuli can acquire value when people learn to associate them with positive or negative outcomes (i.e., gain versus loss associations). Acquired value has been shown to affect how gain and loss associated stimuli are attended, remembered, and acted upon. Here we investigate a potential and previously unreported learning asymmetry in the acquisition of gain and loss associations that may have consequences for subsequent cognitive processing. In our first study, we provide meta-analytic evidence that in probabilistic learning tasks that pair neutral stimuli with intermixed win, loss, and no-change outcomes, people learn win-associations better than loss-associations despite the symmetrical task structure and symmetrical outcome probabilities. 
Then in two empirical studies, we demonstrate that this learning asymmetry is evident when acquiring gain versus loss associations to gray-scale landscape images whether participants earn points or money (Study 2), and whether or not they receive explicit instructions about the outcome contingencies (Study 3). Furthermore, performance on a post-learning source recognition task was also asymmetrical: explicit knowledge of associated outcomes was superior for optimal gain than optimal loss scenes. These findings indicate the acquisition of gain and loss associations need not be equivalent, despite symmetrical outcome probabilities, equivalent numbers of learning trials, and a constant learning criterion. Consequently, learning asymmetries could contribute to valence and optimality differences in subsequent cognitive processing.","link":"/opendata/lin-et-al-2020/"},{"title":"Lin et al. (2022)","text":"People who take on challenges and persevere longer are more likely to succeed in life. But individuals often avoid exerting effort, and there is limited experimental research investigating whether we can learn to value effort. Because existing research focuses on enhancing cognitive performance rather than increasing the value of effort, it also remains unclear whether individuals can learn to care more about challenging themselves than performing well. We developed a paradigm to test an intuitive idea: that people can learn to value effort and will seek effortful challenges if directly incentivized to do so. What’s more, we dissociate the effects of rewarding people for choosing effortful challenges and performing well. Results revealed that rewarding effort increased people’s willingness to choose harder tasks, even when rewards were no longer offered (near-transfer). Critically, the effects of this brief manipulation also carried over to an unrelated and unrewarded task (far-transfer). 
Our results suggest people can learn to value effort and that this valuation can generalise to unfamiliar and unrewarded tasks.","link":"/opendata/lin-et-al-2022/"},{"title":"Lin & von-Helversen (2023)","text":"Previous research showed that animals adopt different foraging strategies in different environment settings. However, research on whether humans adapt their foraging strategies to the foraging environment has shown little evidence of a change in strategies. This study aims to investigate whether humans will adapt their foraging strategies when performance differences between strategies are large and why participants may fixate on a single strategy. We conducted two foraging experiments and identified the strategies used by the participants. Most participants used the Give-Up Time (GUT) strategy regardless of the environment they encountered. GUT was used even in environments where other strategies such as the Fixed-Time strategy or the Fixed-Number strategy performed better. Using computer simulations, we further examined the conditions under which the GUT strategy will perform well compared to the other strategies. We found that even though the GUT strategy is not always the best strategy, it performs consistently on a satisfactory level and had an advantage when variance in the quality of patches was introduced. The consistently good performance of the GUT strategy could thus explain participants’ lack of strategy switching.","link":"/opendata/lin-von-helversen-2023/"},{"title":"Linka et al. (2023)","text":"Adult gaze behaviour towards naturalistic scenes is highly biased towards semantic object classes. Little is known about the ontological development of these biases, nor about group-level differences in gaze behaviour between adults and preschoolers. Here, we let preschoolers (n = 34, age 5 years) and adults (n = 42, age 18–59 years) freely view 40 complex scenes containing objects with different semantic attributes to compare their fixation behaviour. 
Results show that preschool children allocate a significantly smaller proportion of dwell time and first fixations on Text and instead fixate Faces, Touched objects, Hands and Bodies more. A predictive model of object fixations controlling for a range of potential confounds suggests that most of these differences can be explained by drastically reduced text salience in pre-schoolers and that this effect is independent of low-level salience. These findings are in line with a developmental attentional antagonism between text and body parts (touched objects and hands in particular), which resonates with recent findings regarding ‘cortical recycling’. We discuss this and other potential mechanisms driving salience differences between children and adults.","link":"/opendata/linka-et-al-2023/"},{"title":"Littman et al. (2023)","text":"The affordances task serves as an important tool for the assessment of cognition and visuomotor functioning, and yet its test-retest reliability has not been established. In the affordances task, participants attend to a goal-directed task (e.g., classifying manipulable objects such as cups and pots) while suppressing their stimulus-driven, irrelevant reactions afforded by these objects (e.g., grasping their handles). This results in cognitive conflicts manifesting at the task level and the response level. In the current study, we assessed the reliability of the affordances task for the first time. While doing so, we referred to the “reliability paradox,” according to which behavioral tasks that produce highly replicable group-level effects often yield low test-retest reliability due to the inadequacy of traditional correlation methods in capturing individual differences between participants. Alongside the simple test-retest correlations, we employed a Bayesian generative model that was recently demonstrated to result in a more precise estimation of test-retest reliability. 
Two hundred and ninety-five participants completed an online version of the affordances task twice, with a one-week gap. Performance on the online version replicated results obtained under in-lab administrations of the task. While the simple correlation method resulted in weak test-retest measures of the different effects, the generative model yielded a good reliability assessment. The current results support the utility of the affordances task as a reliable behavioral tool for the assessment of group-level and individual differences in cognitive and visuomotor functioning. The results further support the employment of generative modeling in the study of individual differences.","link":"/opendata/littman-et-al-2023/"},{"title":"Liu & Li (2019)","text":"Although some previous studies have investigated the time distortion of anxious patients, it remains open about the interactive effects of trait and state anxieties on time perception. In the present study, participants in high and low trait anxieties perceived 24 negative and 24 neutral words for 2 s in induced anxious and calm mood states, and their time perceptions were recorded by the time reproduction task. The results showed that high trait anxious individuals underestimated the 2-second duration while low trait anxious individuals overestimated the 2-second duration in the anxious state compared with those in the calm state. Meanwhile, in the anxious state, the timing error was absolute for high trait anxious individuals, whereas it was relative for low trait anxious individuals. The results show the interactive effects of trait and state anxieties on time perception. We suggest that the temporal distortion of the high trait anxious individuals in the anxious state might be due to an attentional dysfunction, whereas the temporal distortion of the low trait anxious individuals in the anxious state might be due to a mood congruency.","link":"/opendata/liu-li-2019/"},{"title":"Locke et al. 
(2022)","text":"Perceptual confidence is an important internal signal about the certainty of our decisions and there is a substantial debate on how it is computed. We highlight three confidence metric types from the literature: observers either use 1) the full probability distribution to compute probability correct (Probability metrics), 2) point estimates from the perceptual decision process to estimate uncertainty (Evidence-Strength metrics), or 3) heuristic confidence from stimulus-based cues to uncertainty (Heuristic metrics). These metrics are rarely tested against one another, so we examined models of all three types on a suprathreshold spatial discrimination task. Observers were shown a cloud of dots sampled from a dot generating distribution and judged if the mean of the distribution was left or right of centre. In addition to varying the horizontal position of the mean, there were two sensory uncertainty manipulations: the number of dots sampled and the spread of the generating distribution. After every two perceptual decisions, observers made a confidence forced-choice judgement whether they were more confident in the first or second decision. Model results showed that the majority of observers were best-fit by either: 1) the Heuristic model, which used dot cloud position, spread, and number of dots as cues; or 2) an Evidence-Strength model, which computed the distance between the sensory measurement and discrimination criterion, scaled according to sensory uncertainty. An accidental repetition of some sessions also allowed for the measurement of confidence agreement for identical pairs of stimuli. This N-pass analysis revealed that human observers were more consistent than their best-fitting model would predict, indicating there are still aspects of confidence that are not captured by our modelling. As such, we propose confidence agreement as a useful technique for computational studies of confidence. 
Taken together, these findings highlight the idiosyncratic nature of confidence computations for complex decision contexts and the need to consider different potential metrics and transformations in the confidence computation.","link":"/opendata/locke-et-al-2022/"},{"title":"Lockwood et al. (2018)","text":"Sense of ownership is a ubiquitous and fundamental aspect of human cognition. Here we used model-based functional magnetic resonance imaging and a novel minimal ownership paradigm to probe the behavioural and neural mechanisms underpinning ownership acquisition for ourselves, friends and strangers. We find a self-ownership bias at multiple levels of behaviour from initial preferences to reaction times and computational learning rates. Ventromedial prefrontal cortex (vmPFC) and anterior cingulate sulcus (ACCs) responded more to self vs. stranger associations, but despite a pervasive neural bias to track self-ownership, no brain area tracked self-ownership exclusively. However, ACC gyrus (ACCg) specifically coded ownership prediction errors for strangers and ownership associative strength for friends and strangers but not for self. Core neural mechanisms for associative learning are biased to learn in reference to self but also engaged when learning in reference to others. In contrast, ACC gyrus exhibits specialization for learning about others.","link":"/opendata/lockwood-et-al-2018/"},{"title":"Lockwood et al. (2021)","text":"Social cohesion relies on prosociality in increasingly aging populations. Helping other people requires effort, yet how willing people are to exert effort to benefit themselves and others, and whether such behaviors shift across the life span, is poorly understood. Using computational modeling, we tested the willingness of 95 younger adults (18-36 years old) and 92 older adults (55-84 years old) to put physical effort into self- and other-benefiting acts. 
Participants chose whether to work and exert force (30%-70% of maximum grip strength) for rewards (2-10 credits) accrued for themselves or, prosocially, for another. Younger adults were somewhat selfish, choosing to work more at higher effort levels for themselves, and exerted less force in prosocial work. Strikingly, compared with younger adults, older people were more willing to put in effort for others and exerted equal force for themselves and others. Increased prosociality in older people has important implications for human behavior and societal structure.","link":"/opendata/lockwood-et-al-2021/"},{"title":"Lojowska et al. (2023)","text":"Humans are exposed to environmental and economic threats that can profoundly affect individual survival and group functioning. Although anecdotal evidence suggests that threat exposure can increase collective action, the effects of threat on decision-making have been mainly investigated at the individual level. Here we examine how threat exposure and concomitant physiological responses modulate cooperation in small groups. Individuals (N = 105, ages 18-34 years) in groups of three were exposed to threat of electric shocks while deciding how much to contribute to a public good. Threat of shock induced a state of physiological freezing and, compared with no-threat conditions, reduced free riding and enabled groups to maintain higher cooperation over time. Exploratory analyses revealed that more cooperative responses under threat were driven by stronger baseline prosociality, suggesting that habitual prosociality is reinforced under threat. The current results support the view that human groups respond to outside threat with increased cooperation.","link":"/opendata/lojowska-et-al-2023/"},{"title":"Lopez-Gamundi et al. (2023)","text":"The ability to weigh a reward against the cost of acquiring it is critical for decision-making. 
While the medial prefrontal cortex (mPFC) has been implicated in tracking both mental effort demands and net value of rewards, these findings primarily come from choice paradigms that confound increased task difficulty with decreased reward probability. To resolve this issue, we designed novel tasks that kept probability of success – and therefore probability of reward – constant between levels of effort demand. In two experiments, participants completed a novel effort-based reward task that manipulated effort demand and either reward magnitude or probability of success. Electroencephalogram (EEG) data was recorded to compare an electrophysiological index of mPFC function (frontal midline theta (FMT)) to an index of incentive salience (component P3) at both cue evaluation and feedback phases. We found no evidence that FMT tracked effort demands or net value during cue evaluation. At feedback, however, FMT power was enhanced for high compared to low effort trials, but not modulated by reward magnitude or probability. Conversely, P3 was sensitive to reward magnitude and probability at both cue and feedback phases and only integrated expended effort costs at feedback, such that P3 amplitudes continued to scale with reward magnitude and probability but were also increased for high compared to low effort reward feedback. These findings suggest that, in the absence of option comparison and unequal likelihood of success, the mPFC does not track net value of prospective effort-based rewards. Instead, expended cognitive effort potentiates FMT power and enhances the saliency of rewards at feedback.","link":"/opendata/lopez-gamundi-et-al-2023/"},{"title":"Lopez-Yepez et al. (2021)","text":"Choice history effects describe how future choices depend on the history of past choices. In experimental tasks this is typically framed as a bias because it often diminishes the experienced reward rates. 
However, in natural habitats, choices made in the past constrain choices that can be made in the future. For foraging animals, the probability of earning a reward in a given patch depends on the degree to which the animals have exploited the patch in the past. One problem with many experimental tasks that show choice history effects is that such tasks artificially decouple choice history from its consequences on reward availability over time. To circumvent this, we use a variable interval (VI) reward schedule that reinstates a more natural contingency between past choices and future reward availability. By examining the behavior of optimal agents in the VI task we discover that choice history effects observed in animals serve to maximize reward harvesting efficiency. We further distil the function of choice history effects by manipulating first- and second-order statistics of the environment. We find that choice history effects primarily reflect the growth rate of the reward probability of the unchosen option, whereas reward history effects primarily reflect environmental volatility. Based on observed choice history effects in animals, we develop a reinforcement learning model that explicitly incorporates choice history over multiple time scales into the decision process, and we assess its predictive adequacy in accounting for the associated behavior. We show that this new variant, known as the double trace model, has a higher performance in predicting choice data, and shows near optimal reward harvesting efficiency in simulated environments. These results suggest that choice history effects may be adaptive for natural contingencies between consumption and reward availability. This concept lends credence to a normative account of choice history effects that extends beyond its description as a bias.","link":"/opendata/lopez-yepez-et-al-2021/"},{"title":"Loued-Khenissi et al. (2023)","text":"Does gravity affect decision-making? 
This question comes into sharp focus as plans for interplanetary human space missions solidify. In the framework of Bayesian brain theories, gravity encapsulates a strong prior, anchoring agents to a reference frame via the vestibular system, informing their decisions and possibly their integration of uncertainty. What happens when such a strong prior is altered? We address this question using a self-motion estimation task in a space analog environment under conditions of altered gravity. Two participants were cast as remote drone operators orbiting Mars in a virtual reality environment on board a parabolic flight, where both hyper- and microgravity conditions were induced. From a first-person perspective, participants viewed a drone exiting a cave and had to first predict a collision and then provide a confidence estimate of their response. We evoked uncertainty in the task by manipulating the motion’s trajectory angle. Post-decision subjective confidence reports were negatively predicted by stimulus uncertainty, as expected. Uncertainty alone did not impact overt behavioral responses (performance, choice) differentially across gravity conditions. However microgravity predicted higher subjective confidence, especially in interaction with stimulus uncertainty. These results suggest that variables relating to uncertainty affect decision-making distinctly in microgravity, highlighting the possible need for automatized, compensatory mechanisms when considering human factors in space research.","link":"/opendata/loued-khenissi-et-al-2023/"},{"title":"Luettgau et al. (2020)","text":"Decision-making is guided by memories of option values. However, retrieving items from memory renders them malleable. Here, we show that merely retrieving values from memory and making a choice between options is sufficient both to induce changes to stimulus-reward associations in the hippocampus and to bias future decision-making. 
After allowing participants to make repeated choices between reward-conditioned stimuli, in the absence of any outcome, we observe that participants prefer stimuli they have previously chosen, and neglect previously unchosen stimuli, over otherwise identical-valued options. Using functional brain imaging, we show that decisions induce changes to hippocampal representations of stimulus-outcome associations. These changes are correlated with future decision biases. Our results indicate that choice-induced preference changes are partially driven by choice-induced modification of memory representations and suggest that merely making a choice - even without experiencing any outcomes - induces associative plasticity.","link":"/opendata/luettgau-et-al-2020/"},{"title":"Lukics & Lukács (2022)","text":"While several studies suggest that the nature and properties of the input have significant effects on statistical learning, they have rarely been investigated systematically. In order to understand how input characteristics and their interactions impact statistical learning, we explored the effects of modality (auditory vs. visual), presentation type (serial vs. simultaneous), domain (linguistic vs. non-linguistic), and training type (random, starting small, starting big) on artificial grammar learning in young adults (N = 360). With serial presentation of stimuli, learning was more effective in the auditory than in the visual modality. However, with simultaneous presentation of visual and serial presentation of auditory stimuli, the modality effect was not present. We found a significant domain effect as well: a linguistic advantage over nonlinguistic material, which was driven by the domain effect in the auditory modality. Overall, the auditory linguistic condition had an advantage over other modality-domain types. Training types did not have any overall effect on learning; starting big enhanced performance only in the case of serial visual presentation. 
These results show that input characteristics such as modality, presentation type, domain and training type influence statistical learning, and suggest that their effects are also dependent on the specific stimuli and structure to be learned.","link":"/opendata/lukics-lukacs-2022/"},{"title":"Lukinova & Erlich (2021)","text":"Stress is a normal part of our everyday lives. It alerts us to changes in our environment working as an early warning system. However, when stress is prolonged, it can become harmful. The deleterious effects of stress on brain function are well established: chronic stress significantly impairs cognitive function reducing our ability to solve problems and to regulate behavior and, therefore, may lead to more challenges that can further exacerbate stress. An important class of decisions that may be made under stress include those between rewards delivered immediately vs. those delivered in the future. Not considering or devaluing future outcomes (delay discounting) can result in adverse outcomes such as not buying health insurance, gambling or drug use. To date, however, little is known about how chronic stress influences economic decisions that differ in the time of outcome delivery. A handful of studies suggest that increased stress may lead to more impulsive choices in subjects of average socioeconomic status and stress levels. Here, we address this gap by using a longitudinal design to test a combination of decision-making tasks, questionnaires, saliva and hair samples within subject (N = 41, 34 with all stress measures) to determine whether chronic stress measures are associated with the economic choices under different time scales. We found that the degree to which people think it is worth waiting, i.e. individual’s discount factor, over seconds, but not over days, varied reliably with endogenous stress. 
These results are imperative to studying stress in a variety of contexts: e.g., variation in consumers’ impulse purchases and the willingness to pay to avoid waiting in mobile gaming could be linked to the individual stress responses to the environment, even in healthy adults.","link":"/opendata/lukinova-erlich-2021/"},{"title":"Lukinova & Erlich (2021a)","text":"Delay-discounting studies in neuroscience, psychology, and economics have been mostly focused on concepts of self-control, reward evaluation, and discounting. Another important relationship to consider is the link between intertemporal choice and time perception. We presented 50 college students with timing tasks on the range of seconds to minutes and intertemporal-choice tasks on both the time-scale of seconds and of days. We hypothesized that individual differences in time perception would influence decisions about short experienced delays but not long delays. While we found some evidence that individual differences in internal clock speed account for some unexplained variance between choices across time-horizons, overall our findings suggest a nominal contribution of the altered sense of time in intertemporal choice.","link":"/opendata/lukinova-erlich-2021a/"},{"title":"MacDonald et al. (2016)","text":"Childhood maltreatment has diverse, lifelong impact on morbidity and mortality. The Childhood Trauma Questionnaire (CTQ) is one of the most commonly used scales to assess and quantify these experiences and their impact. Curiously, despite very widespread use of the CTQ, scores on its Minimization-Denial (MD) subscale, originally designed to assess a positive response bias, are rarely reported. Hence, little is known about this measure. If response biases are either common or consequential, current practices of ignoring the MD scale deserve revision. 
Accordingly, we designed a study to investigate 3 aspects of minimization, as defined by the CTQ’s MD scale: 1) its prevalence; 2) its latent structure; and finally 3) whether minimization moderates the CTQ’s discriminative validity in terms of distinguishing between psychiatric patients and community volunteers. Archival, item-level CTQ data from 24 multinational samples were combined for a total of 19,652 participants. Analyses indicated: 1) minimization is common; 2) minimization functions as a continuous construct; and 3) high MD scores attenuate the ability of the CTQ to distinguish between psychiatric patients and community volunteers. Overall, results suggest that a minimizing response bias, as detected by the MD subscale, has a small but significant moderating effect on the CTQ’s discriminative validity. Results also may suggest that some prior analyses of maltreatment rates or the effects of early maltreatment that have used the CTQ may have underestimated its incidence and impact. We caution researchers and clinicians about the widespread practice of using the CTQ without the MD, or collecting MD data but failing to assess and control for its effects on outcomes or dependent variables.","link":"/opendata/macdonald-et-al-2016/"},{"title":"Mack et al. (2020)","text":"Prefrontal cortex (PFC) is thought to support the ability to focus on goal-relevant information by filtering out irrelevant information, a process akin to dimensionality reduction. Here, we test this dimensionality reduction hypothesis by relating a data-driven approach to characterizing the complexity of neural representation with a theoretically-supported computational model of learning. We find evidence of goal-directed dimensionality reduction within human ventromedial PFC during learning. 
Importantly, by using computational predictions of each participant’s attentional strategies during learning, we find that the degree of neural compression predicts an individual’s ability to selectively attend to concept-specific information. These findings suggest a domain-general mechanism of learning through compression in ventromedial PFC.","link":"/opendata/mack-et-al-2020/"},{"title":"Madan et al. (2021)","text":"Both memory and choice are influenced by context: Memory is enhanced when encoding and retrieval contexts match, and choice is swayed by available options. Here, we assessed how context influences risky choice in an experience-based task in two main experiments (119 and 98 participants retained, respectively) and two additional experiments reported in the Supplemental Material available online (152 and 106 participants retained, respectively). Within a single session, we created two separate contexts by presenting blocks of trials in distinct backgrounds. Risky choices were context dependent; given the same choice, people chose differently depending on other outcomes experienced in that context. Choices reflected an overweighting of the most extreme outcomes within each local context rather than the global context of all outcomes. When tested in the nontrained context, people chose according to the context at encoding and not retrieval. In subsequent memory tests, people displayed biases specific to distinct contexts: Extreme outcomes from each context were more accessible and judged as more frequent. These results pose a challenge for theories of choice that rely on retrieval as guiding choice.","link":"/opendata/madan-et-al-2021/"},{"title":"Maekelae et al. (2023)","text":"Measuring individual differences in cognitive effort can be elusive as effort is a function of motivation and ability.
We report six studies (N = 663) investigating the relationship of Need for Cognition and working memory capacity with three cognitive effort measures: demand avoidance in the Demand Selection Task, effort discounting measured as the indifference point in the Cognitive Effort Discounting paradigm, and rational reasoning score with items from the heuristic and bias literature. We measured perceived mental effort with the NASA task load index. The three tasks were not correlated with each other (all r’s < .1). Need for Cognition was positively associated with effort discounting (r = .168, p < .001) and rational reasoning (r = .176, p < .001), but not demand avoidance (r = .085, p = .186). Working memory capacity was related to effort discounting (r = .185, p = .004). Higher perceived effort was related to poorer rational reasoning. Our data indicate that two of the tasks are related to Need for Cognition but are also influenced by a participant’s working memory capacity. We discuss whether any of the tasks measure cognitive effort.","link":"/opendata/maekelae-et-al-2023/"},{"title":"Maier et al. (2023)","text":"Many controversies arise from differences in how people resolve moral dilemmas by following deontological moral rules versus consequentialist cost-benefit reasoning (CBR). This article explores whether and, if so, how these seemingly intractable differences may arise from experience and whether they can be overcome through moral learning. We designed a new experimental paradigm to investigate moral learning from consequences of previous decisions. Our participants (N=387) faced a series of realistic moral dilemmas between two conflicting choices: one prescribed by a moral rule and the other favored by CBR. Critically, we let them observe the consequences of each of their decisions before making the next one. In one condition, CBR-based decisions consistently led to good outcomes, whereas rule-based decisions consistently led to bad outcomes.
In the other condition, this contingency was reversed. We observed systematic, experience-dependent changes in people’s moral rightness ratings and moral decisions over the course of just 13 decisions. Without being aware of it, participants adjusted how much moral weight they gave to CBR versus moral rules according to which approach produced better consequences in their respective experimental condition. These learning effects transferred to their subsequent responses to the Oxford Utilitarianism Scale, indicating genuine moral learning rather than task-specific effects. Our findings demonstrate the existence of rapid adaptive moral learning from the consequences of previous decisions. Individual differences in morality may thus be more malleable than previously thought.","link":"/opendata/maier-et-al-2023/"},{"title":"Makarov et al. (2023)","text":"Can synchrony in stimulation guide attention and aid perceptual performance? Here, in a series of three experiments, we tested the influence of visual and auditory synchrony on attentional selection during a visual foraging task. Experiment 1 was performed online, where the task was to forage for 10 (out of 20) vertical lines among 60 randomly oriented distractor lines that changed color between yellow and blue at random intervals. The targets either changed colors in visual synchrony or not. In another condition, a non-spatial sound additionally occurred synchronously with the color change of the targets. Experiment 2 was run in the laboratory (within-subjects) with the same design. When the targets changed color in visual synchrony, foraging times were significantly shorter than when they randomly changed colors, but there was no additional benefit for the sound synchrony. In Experiment 3, task difficulty was increased as participants foraged for as many 45° rotated lines as possible among lines of different orientations within 10 seconds, with the same synchrony conditions as in Experiments 1 and 2.
Again, there was a large benefit of visual synchrony but no additional benefit for sound synchronization. Our results provide strong evidence that visual synchronization can guide attention during multiple target foraging. This likely reflects temporal grouping of the synchronized targets. No additional benefit occurred for sound synchrony, even when the foraging task was quite difficult (Experiment 3).","link":"/opendata/makarov-et-al-2023/"},{"title":"Makowski et al. (2022)","text":"As a measure of the brain’s temporal fine-tuning capacity, temporal resolution power (TRP) explained repeatedly a substantial amount of variance in psychometric intelligence. Recently, spatial suppression, referred to as the increasing difficulty in quickly perceiving motion direction as the size of the moving stimulus increases, has attracted particular attention, when it was found to be positively related to psychometric intelligence. Due to the conceptual similarities of TRP and spatial suppression, the present study investigated their mutual interplay in the relation to psychometric intelligence in 273 young adults to better understand the reasons for these relationships. As in previous studies, psychometric intelligence was positively related to a latent variable representing TRP but, in contrast to previous reports, negatively to latent and manifest measures of spatial suppression. In a combined structural equation model, TRP still explained a substantial amount of variance in psychometric intelligence while the negative relation between spatial suppression and intelligence was completely explained by TRP. Thus, our findings confirmed TRP to be a robust predictor of psychometric intelligence but challenged the assumption of spatial suppression as a representation of general information processing efficiency as reflected in psychometric intelligence. 
Possible reasons for the contradictory findings on the relation between spatial suppression and psychometric intelligence are discussed.","link":"/opendata/makowski-et-al-2022/"},{"title":"Manning et al. (2022)","text":"Many studies report atypical responses to sensory information in autistic individuals, yet it is not clear which stages of processing are affected, with little consideration given to decision-making processes. We combined diffusion modelling with high-density EEG to identify which processing stages differ between 50 autistic and 50 typically developing children aged 6-14 years during two visual motion tasks. Our pre-registered hypotheses were that autistic children would show task-dependent differences in sensory evidence accumulation, alongside a more cautious decision-making style and longer non-decision time across tasks. We tested these hypotheses using hierarchical Bayesian diffusion models with a rigorous blind modelling approach, finding no conclusive evidence for our hypotheses. Using a data-driven method, we identified a response-locked centro-parietal component previously linked to the decision-making process. The build-up in this component did not consistently relate to evidence accumulation in autistic children. This suggests that the relationship between the EEG measure and diffusion-modelling is not straightforward in autistic children. Compared to a related study of children with dyslexia, motion processing differences appear less pronounced in autistic children. Exploratory analyses also suggest weak evidence that ADHD symptoms moderate perceptual decision-making in autistic children.","link":"/opendata/manning-et-al-2022/"},{"title":"Marchant et al. (2023)","text":"Humans excel at causal reasoning, yet at the same time consistently fail to respect its basic axioms. They seemingly fail to recognize, for instance, that only the direct causes of an event can affect its probability (the Markov condition). 
How can one explain this paradox? Here we argue that standard normative analyses of causal reasoning mostly apply to the idealized case where the reasoner has perfect confidence in her knowledge of the underlying causal model. Given uncertainty about the correct representation of a causal system, it is not always rational for a reasoner to respect the Markov condition and other ‘normative’ principles. To test whether uncertainty can account for the apparent fallibility of human judgments, we formulate a simple computational model of a rational-but-uncertain causal reasoner. In a re-analysis of a recent causal reasoning study, the model fits the data significantly better than its standard normative counterpart.","link":"/opendata/marchant-et-al-2023/"},{"title":"Marciano et al. (2023)","text":"Expectations are often dynamic: sports fans know that expectations are rapidly updated as games unfold. Yet expectations have traditionally been studied as static. Here we present behavioral and electrophysiological evidence of sub-second changes in expectations using slot machines as a case study. In Study 1, we demonstrate that EEG signal before the slot machine stops varies based on proximity to winning. Study 2 introduces a behavioral paradigm to measure dynamic expectations via betting, and shows that expectation trajectories vary as a function of winning proximity. Notably, these expectation trajectories parallel Study 1’s EEG activity. Studies 3 (EEG) and 4 (behavioral) replicate these findings in the loss domain. These four studies provide compelling evidence that dynamic sub-second updates in expectations can be behaviorally and electrophysiologically measured. Our research opens promising avenues for understanding the dynamic nature of reward expectations and their impact on cognitive processes.","link":"/opendata/marciano-et-al-2023/"},{"title":"Marshall et al. 
(2022)","text":"Optimality analysis of value-based decisions in binary and multi-alternative choice settings predicts that reaction times should be sensitive only to differences in stimulus magnitudes, but not to overall absolute stimulus magnitude. Yet experimental work in the binary case has shown magnitude sensitive reaction times, and theory shows that this can be explained by switching from linear to multiplicative time costs, but also by nonlinear subjective utility. Thus disentangling explanations for observed magnitude sensitive reaction times is difficult. Here for the first time we extend the theoretical analysis of geometric time-discounting to ternary choices, and present novel experimental evidence for magnitude-sensitivity in such decisions, in both humans and slime moulds. We consider the optimal policies for all possible combinations of linear and geometric time costs, and linear and nonlinear utility; interestingly, geometric discounting emerges as the predominant explanation for magnitude sensitivity.","link":"/opendata/marshall-et-al-2022/"},{"title":"Martin et al. (2013)","text":"Patients with schizophrenia experience a loss of temporal continuity or subjective fragmentation along the temporal dimension. Here, we develop the hypothesis that impaired temporal awareness results from a perturbed structuring of events in time-i.e., canonical neural dynamics. To address this, 26 patients and their matched controls took part in two psychophysical studies using desynchronized audiovisual speech. Two tasks were used and compared: first, an identification task testing for multisensory binding impairments in which participants reported what they heard while looking at a speaker’s face; in a second task, we tested the perceived simultaneity of the same audiovisual speech stimuli. In both tasks, we used McGurk fusion and combination that are classic ecologically valid multisensory illusions. 
First, and contrary to previous reports, our results show that patients do not significantly differ from controls in their rate of illusory reports. Second, the illusory reports of patients in the identification task were more sensitive to audiovisual speech desynchronies than those of controls. Third, and surprisingly, patients considered audiovisual speech to be synchronized for longer delays than controls. As such, the temporal tolerance profile observed in a temporal judgement task was less of a predictor for sensory binding in schizophrenia than for that obtained in controls. We interpret our results as an impairment of temporal event structuring in schizophrenia which does not specifically affect sensory binding operations but rather, the explicit access to timing information associated here with audiovisual speech processing. Our findings are discussed in the context of current neurophysiological frameworks for the binding and the structuring of sensory events in time.","link":"/opendata/martin-et-al-2013/"},{"title":"Marton et al. (2019)","text":"Doubt is subjective uncertainty about one’s perceptions and recall. It can impair decision-making and is a prominent feature of obsessive-compulsive disorder (OCD). We propose that evaluation of doubt during decision-making provides a useful endophenotype with which to study the underlying pathophysiology of OCD and potentially other psychopathologies. For the current study, we developed a new instrument, the Doubt Questionnaire, to clinically assess doubt. The random dot motion task was used to measure reaction time and subjective certainty, at varying levels of perceptual difficulty, in individuals who scored high and low on doubt, and in individuals with and without OCD. We found that doubt scores were significantly higher in OCD cases than controls. Drift diffusion modeling revealed that high doubt scores predicted slower evidence accumulation than did low doubt scores, and an OCD diagnosis predicted slower accumulation than did controls.
At higher levels of dot coherence, OCD participants exhibited significantly slower drift rates than did controls (q<0.05 for 30% and 45% coherence; q<0.01 for 70% coherence). In addition, at higher levels of coherence, high doubt subjects exhibited even slower drift rates and reaction times than low doubt subjects (q<0.01 for 70% coherence). Moreover, under high coherence conditions, individuals with high doubt scores reported lower certainty in their decisions than did those with low doubt scores. We conclude that the Doubt Questionnaire is a useful instrument for measuring doubt. Compared to those with low doubt, those with high doubt accumulate evidence more slowly and report lower certainty when making decisions under conditions of low uncertainty. High doubt may affect the decision-making process in individuals with OCD. The dimensional doubt measure is a useful endophenotype for OCD research and could enable computationally rigorous and neurally valid understanding of decision-making and its pathological expression in OCD and other disorders.","link":"/opendata/marton-et-al-2019/"},{"title":"Marzuki et al. (2023)","text":"Background: Compulsive behaviour is often triggered by Pavlovian cues. Assessing how Pavlovian cues drive instrumental behaviour in obsessive-compulsive disorder (OCD) is therefore crucial to understand how compulsions develop and are maintained. An aversive Pavlovian-to-Instrumental transfer (PIT) paradigm, particularly one involving avoidance/cancellation of negative outcomes, can enable such investigation and has not previously been studied in clinical OCD. Methods: Forty-one participants diagnosed with OCD (21 adults; 20 youths) and 44 controls (21 adults; 23 youths) completed an aversive PIT task. Participants had to cancel unpleasant noises by learning response-outcome (instrumental) and stimulus-outcome (Pavlovian) associations.
We then assessed whether Pavlovian cues elicited specific instrumental avoidance responses (specific PIT) and induced general instrumental avoidance (general PIT). We investigated whether task learning and confidence indices influenced PIT strength differentially between groups. Results: Urge to avoid unpleasant noises and preference for safe over unsafe stimuli influenced specific and general PIT, respectively, in OCD, while PIT in controls was more influenced by confidence in instrumental and Pavlovian learning. However, there was no overall group difference in PIT performance, although youths with OCD showed weaker specific PIT than youth controls. Conclusions: In OCD, implicit motivational factors, but not learnt knowledge, contribute to the successful integration of aversive Pavlovian and instrumental cues. This implies that compulsive avoidance may be driven by these automatic processes. Youths with OCD show deficits in specific PIT, suggesting cue integration impairments are only apparent in adolescence. These findings are clinically relevant as they emphasise the importance of targeting such implicit motivational processes when treating OCD.","link":"/opendata/marzuki-et-al-2023/"},{"title":"Mason et al. (2022)","text":"Decision-making involves weighing up the outcome likelihood, potential rewards, and effort needed. Previous research has focused on the trade-offs between risk and reward or between effort and reward. Here we bridge this gap and examine how risk in effort levels influences choice. With outcome uncertainty, people’s risk attitudes follow a fourfold pattern, varying with the domain (gains or losses) and probability (rare or common). Three experiments assessed people’s risk attitudes for money, physical effort, and mental effort. With monetary gambles, risk attitudes followed the classic fourfold pattern, and people were risk averse for increases in money (gains).
With both physical and mental effort, however, people exhibited a “flipped” fourfold pattern of risk preferences and were instead risk seeking for increases in effort. Overall, these findings indicate that people treat effort as a loss of resources and are more willing to take risks to avoid potentially high levels of effort.","link":"/opendata/mason-et-al-2022/"},{"title":"Mason et al. (2023)","text":"Many real-world decisions involving rare events also involve extreme outcomes. Despite this confluence, decisions-from-experience research has focused on the impact of rare but non-extreme outcomes. In those situations, people typically choose as if they underestimate the probability of a rare outcome happening. Separately, people have been shown to overestimate the probability of an extreme outcome happening. Here, for the first time, we examine the confluence of these two competing biases in decisions from experience. In a between-subjects behavioural experiment, we examine people’s risk preferences for rare extreme outcomes and for rare non-extreme outcomes. When outcomes are both rare and extreme, people’s risk preferences shift away from traditional risk patterns for rare events: they underweight those extreme and rare events less. We simulate these results using a small-sample model of decision-making that accounts for the overweighting of extreme events. The additive effect of these decision biases on risk preferences reveals that to understand real-world risk for rare events we must also consider the extremity of the outcomes.","link":"/opendata/mason-et-al-2023/"},{"title":"Mathar et al. (2022)","text":"Supplementation with the catecholamine precursor L-Tyrosine might enhance cognitive performance, but overall findings are mixed. Here, we investigate the effect of a single dose of tyrosine (2g) vs. 
placebo on two catecholamine-dependent trans-diagnostic traits: model-based control during reinforcement learning (2-step task) and temporal discounting, using a double-blind, placebo-controlled, within-subject design (n = 28 healthy male participants). We leveraged drift diffusion models in a hierarchical Bayesian framework to jointly model participants’ choices and response times (RTs) in both tasks. Furthermore, comprehensive autonomic monitoring (heart rate, heart rate variability, pupillometry, spontaneous eye blink rate) was performed both pre- and post-supplementation, to explore potential physiological effects of supplementation. Across tasks, tyrosine consistently reduced participants’ RTs without deteriorating task performance. Diffusion modeling linked this effect to attenuated decision-thresholds in both tasks and further revealed increased model-based control (2-step task) and (if anything) attenuated temporal discounting. On the physiological level, participants’ pupil dilation was predictive of the individual degree of temporal discounting. Tyrosine supplementation reduced physiological arousal as revealed by increases in pupil dilation variability and reductions in heart rate. Supplementation-related changes in physiological arousal predicted individual changes in temporal discounting. Our findings provide the first evidence that tyrosine supplementation might impact psychophysiological parameters, and suggest that modeling approaches based on sequential sampling models can yield novel insights into latent cognitive processes modulated by amino-acid supplementation.","link":"/opendata/mathar-et-al-2022/"},{"title":"McDonald et al. (2019)","text":"Previous studies of strategic social interaction in game theory have predominantly used games with clearly-defined turns and limited choices. Yet, most real-world social behaviors involve dynamic, coevolving decisions by interacting agents, which poses challenges for creating tractable models of behavior.
Here, using a game in which humans competed against both real and artificial opponents, we show that it is possible to quantify the instantaneous dynamic coupling between agents. Adopting a reinforcement learning approach, we use Gaussian Processes to model the policy and value functions of participants as a function of both game state and opponent identity. We found that higher-scoring participants timed their final change in direction to moments when the opponent’s counter-strategy was weaker, while lower-scoring participants less precisely timed their final moves. This approach offers a natural set of metrics for facilitating analysis at multiple timescales and suggests new classes of experimental paradigms for assessing behavior.","link":"/opendata/mcdonald-et-al-2019/"},{"title":"McDougle et al. (2019)","text":"Decisions must be implemented through actions, and actions are prone to error. As such, when an expected outcome is not obtained, an individual should be sensitive to not only whether the choice itself was suboptimal but also whether the action required to indicate that choice was executed successfully. The intelligent assignment of credit to action execution versus action selection has clear ecological utility for the learner. To explore this, we used a modified version of a classic reinforcement learning task in which feedback indicated whether negative prediction errors were, or were not, associated with execution errors. Using fMRI, we asked if prediction error computations in the human striatum, a key substrate in reinforcement learning and decision making, are modulated when a failure in action execution results in the negative outcome. Participants were more tolerant of non-rewarded outcomes when these resulted from execution errors versus when execution was successful, but reward was withheld.
Consistent with this behavior, a model-driven analysis of neural activity revealed an attenuation of the signal associated with negative reward prediction errors in the striatum following execution failures. These results converge with other lines of evidence suggesting that prediction errors in the mesostriatal dopamine system integrate high-level information during the evaluation of instantaneous reward outcomes.","link":"/opendata/mcdougle-et-al-2019/"},{"title":"McDougle & Collins (2021)","text":"What determines the speed of our decisions? Various models of decision-making have focused on perceptual evidence, past experience, and task complexity as important factors determining the degree of deliberation needed for a decision. Here, we build on a sequential sampling decision-making framework to develop a new model that captures a range of reaction time (RT) effects by accounting for both working memory and instrumental learning processes. The model captures choices and RTs at various stages of learning, and in learning environments with varying complexity. Moreover, the model generalizes from tasks with deterministic reward contingencies to probabilistic ones. The model succeeds in part by incorporating prior uncertainty over actions when modeling RT. This straightforward process model provides a parsimonious account of decision dynamics during instrumental learning and makes unique predictions about internal representations of action values.","link":"/opendata/mcdougle-collins-2021/"},{"title":"Meder et al. (2021)","text":"Ergodicity describes an equivalence between the expectation value and the time average of observables. Applied to human behaviour, ergodic theories of decision-making reveal how individuals should tolerate risk in different environments. To optimize wealth over time, agents should adapt their utility function according to the dynamical setting they face. 
Linear utility is optimal for additive dynamics, whereas logarithmic utility is optimal for multiplicative dynamics. Whether humans approximate time optimal behavior across different dynamics is unknown. Here we compare the effects of additive versus multiplicative gamble dynamics on risky choice. We show that utility functions are modulated by gamble dynamics in ways not explained by prevailing decision theories. Instead, as predicted by time optimality, risk aversion increases under multiplicative dynamics, distributing close to the values that maximize the time average growth of in-game wealth. We suggest that our findings motivate a need for explicitly grounding theories of decision-making on ergodic considerations.","link":"/opendata/meder-et-al-2021/"},{"title":"Mehta et al. (2023)","text":"The sense of agency, or the belief in action causality, is an elusive construct that impacts day-to-day experience and decision-making. Despite its relevance in a range of neuropsychiatric disorders, it is widely under-studied and remains difficult to measure objectively in patient populations. We developed and tested a novel cognitive measure of reward-dependent agency perception in an in-person and online cohort. The in-person cohort consisted of 52 healthy control subjects and 20 subjects with depression and anxiety disorders (DA), including major depressive disorder and generalized anxiety disorder. The online sample consisted of 254 participants. The task consisted of an effort implementation for monetary rewards with computerized visual feedback interference and trial-by-trial ratings of self versus other agency. All subjects across both cohorts demonstrated higher self-agency after receiving positive-win feedback, compared to negative-loss feedback when the level of computer interference was kept constant. Patients with DA showed reduced positive feedback-dependent agency compared to healthy controls.
Finally, in our online sample, we found that higher self-agency following negative-loss feedback was associated with worse anhedonia symptoms. Together this work suggests how positive and negative environmental information impacts the sense of self-agency in healthy subjects, and how it is perturbed in patients with depression and anxiety.","link":"/opendata/mehta-et-al-2023/"},{"title":"Mennella et al. (2022)","text":"Adaptation to our social environment requires learning how to avoid potentially harmful situations, such as encounters with aggressive individuals. Threatening facial expressions can evoke automatic stimulus-driven reactions, but whether their aversive motivational value suffices to drive instrumental active avoidance remains unclear. When asked to freely choose between different action alternatives, participants spontaneously-without instruction or monetary reward-developed a preference for choices that maximized the probability of avoiding angry individuals (sitting away from them in a waiting room). Most participants showed clear behavioral signs of instrumental learning, even in the absence of an explicit avoidance strategy. Inter-individual variability in learning depended on participants’ subjective evaluations and sensitivity to threat approach feedback. Counterfactual learning best accounted for avoidance behaviors, especially in participants who developed an explicit avoidance strategy. Our results demonstrate that implicit defensive behaviors in social contexts are likely the product of several learning processes, including instrumental learning.","link":"/opendata/mennella-et-al-2022/"},{"title":"Mezzadri et al. (2022)","text":"This study simultaneously manipulates within-category (rule-based vs. similarity-based), between-category (blocked vs. interleaved), and across-blocks (constant vs. variable) orders to investigate how different types of presentation order interact with one another.
With regard to within-category orders, stimuli were presented either in a “rule plus exceptions” fashion (in the rule-based order) or by maximizing the similarity between contiguous examples (in the similarity-based order). As for the between-category manipulation, categories were either blocked (in the blocked order) or alternated (in the interleaved order). Finally, the sequence of stimuli was either repeated (in the constant order) or varied (in the variable order) across blocks. This research offers a novel approach through both an individual and concurrent analysis of the studied factors, with the investigation of across-blocks manipulations being unprecedented. We found a significant interaction between within-category and across-blocks orders, as well as between between-category and across-blocks orders. In particular, the combination similarity-based + variable orders was the most detrimental, whereas the combination blocked + constant was the most beneficial. We also found a main effect of across-blocks manipulation, with faster learning in the constant order as compared to the variable one. With regard to the classification of novel stimuli, learners in the rule-based and interleaved orders showed generalization patterns that were more consistent with a specific rule-based strategy, as compared to learners in the similarity-based and blocked orders, respectively. This study shows that different types of order can interact in a subtle fashion and thus should not be considered in isolation.","link":"/opendata/mezzadri-et-al-2022/"},{"title":"Michely et al. (2022)","text":"Instrumental learning is driven by a history of outcome success and failure. Here, we examined the impact of serotonin on learning from positive and negative outcomes. Healthy human volunteers were assessed twice, once after acute (single-dose), and once after prolonged (week-long) daily administration of the SSRI citalopram or placebo. 
Using computational modelling, we show that prolonged boosting of serotonin enhances learning from punishment and reduces learning from reward. This valence-dependent learning asymmetry increases subjects’ tendency to avoid actions as a function of cumulative failure without leading to detrimental, or advantageous, outcomes. By contrast, no significant modulation of learning was observed following acute SSRI administration. However, differences between the effects of acute and prolonged administration were not significant. Overall, these findings may help explain how serotonergic agents impact on mood disorders.","link":"/opendata/michely-et-al-2022/"},{"title":"Mieth et al. (2021)","text":"To determine the role of moral norms in cooperation and punishment, we examined the effects of a moral-framing manipulation in a Prisoner’s Dilemma game with a costly punishment option. In each round of the game, participants decided whether to cooperate or to defect. The Prisoner’s Dilemma game was identical for all participants with the exception that the behavioral options were paired with moral labels (“I cooperate” and “I cheat”) in the moral-framing condition and with neutral labels (“A” and “B”) in the neutral-framing condition. After each round of the Prisoner’s Dilemma game, participants had the opportunity to invest some of their money to punish their partners. In two experiments, moral framing increased moral and hypocritical punishment: participants were more likely to punish partners for defection when moral labels were used than when neutral labels were used. When the participants’ cooperation was enforced by their partners’ moral punishment, moral framing increased not only moral and hypocritical punishment but also cooperation. The results suggest that moral framing activates a cooperative norm that specifically increases moral and hypocritical punishment. 
Furthermore, the experience of moral punishment by the partners may increase the importance of social norms for cooperation, which may explain why moral framing effects on cooperation were found only when participants were subject to moral punishment.","link":"/opendata/mieth-et-al-2021/"},{"title":"Mikus et al. (2022)","text":"Human behaviour requires flexible arbitration between actions we do out of habit and actions that are directed towards a specific goal. Drugs that target opioid and dopamine receptors are notorious for inducing maladaptive habitual drug consumption; yet, how the opioidergic and dopaminergic neurotransmitter systems contribute to the arbitration between habitual and goal-directed behaviour is poorly understood. By combining pharmacological challenges with a well-established decision-making task and a novel computational model, we show that the administration of the dopamine D2/3 receptor antagonist amisulpride led to an increase in goal-directed or model-based relative to habitual or model-free behaviour, whereas the non-selective opioid receptor antagonist naltrexone had no appreciable effect. The effect of amisulpride on model-based/model-free behaviour did not scale with drug serum levels in the blood. Furthermore, participants with higher amisulpride serum levels showed higher explorative behaviour. These findings highlight the distinct functional contributions of dopamine and opioid receptors to goal-directed and habitual behaviour and support the notion that even small doses of amisulpride promote flexible application of cognitive control.","link":"/opendata/mikus-et-al-2022/"},{"title":"Mikus et al. (2023)","text":"The ability to learn about other people is crucial for human social functioning. Dopamine has been proposed to regulate the precision of beliefs, but direct behavioural evidence of this is lacking. 
In this study, we investigate how a high dose of the D2/D3 dopamine receptor antagonist sulpiride impacts learning about other people’s prosocial attitudes in a repeated Trust game. Using a Bayesian model of belief updating, we show that, in a sample of 76 male participants, sulpiride increases the volatility of beliefs, which leads to higher precision weights on prediction errors. This effect is driven by participants with genetically conferred higher dopamine availability (Taq1a polymorphism) and remains even after controlling for working memory performance. Higher precision weights are reflected in higher reciprocal behaviour in the repeated Trust game but not in single-round Trust games. Our data provide evidence that the D2 receptors are pivotal in regulating prediction error-driven belief updating in a social context.","link":"/opendata/mikus-et-al-2023/"},{"title":"Miletic & van-Maanen (2019)","text":"The time available to inform decisions is often limited, for example because of a response deadline. In such circumstances, accurate knowledge of the amount of time available for a decision is crucial for optimal choice behavior. However, the relation between temporal cognition and decision-making under time pressure is poorly understood. Here, we test how the precision of the internal representation of time affects choice behavior when decision time is limited by a deadline. We show that participants with a precise internal representation of time respond more cautiously in decision-making. Furthermore, we provide an empirical test of theoretical accounts of decision-making that argue that it is optimal to commit to a decision based on increasingly less evidence as the deadline approaches (so-called ‘collapsing decision bounds’). These theories entail that the speed of collapse of the decision bound should depend on the precision of the internal representation of the deadline. 
However, although we find evidence that participants collapse decision bounds, we find no relation between the amount of collapse and the internal representation of time.","link":"/opendata/miletic-van-maanen-2019/"},{"title":"Millner et al. (2018)","text":"To survive in complex environments, animals need to have mechanisms to select effective actions quickly, with minimal computational costs. As perhaps the computationally most parsimonious of these systems, Pavlovian control accomplishes this by hardwiring specific stereotyped responses to certain classes of stimuli. It is well documented that appetitive cues initiate a Pavlovian bias toward vigorous approach; however, Pavlovian responses to aversive stimuli are less well understood. Gaining a deeper understanding of aversive Pavlovian responses, such as active avoidance, is important given the critical role these behaviors play in several psychiatric conditions. The goal of the current study was to establish a behavioral and computational framework to examine aversive Pavlovian responses (activation vs. inhibition) depending on the proximity of an aversive state (escape vs. avoidance). We introduce a novel task in which participants are exposed to primary aversive (noise) stimuli and characterize behavior using a novel generative computational model. This model combines reinforcement learning and drift-diffusion models so as to capture effects of invigoration/inhibition in both explicit choice behavior and changes in RT. Choice and RT results both suggest that escape is associated with a bias for vigorous action, whereas avoidance is associated with behavioral inhibition. These results lay a foundation for future work seeking insights into typical and atypical aversive Pavlovian responses involved in psychiatric disorders, allowing us to quantify both implicit and explicit indices of vigorous choice behavior in the context of aversion.","link":"/opendata/millner-et-al-2018/"},{"title":"Mkrtchian et al. 
(2017)","text":"Serious and debilitating symptoms of anxiety are the most common mental health problem worldwide, accounting for around 5% of all adult years lived with disability in the developed world. Avoidance behavior (avoiding social situations for fear of embarrassment, for instance) is a core feature of such anxiety. However, as for many other psychiatric symptoms, the biological mechanisms underlying avoidance remain unclear. Reinforcement learning models provide formal and testable characterizations of the mechanisms of decision making; here, we examine avoidance in these terms. A total of 101 healthy participants and individuals with mood and anxiety disorders completed an approach-avoidance go/no-go task under stress induced by threat of unpredictable shock. We show an increased reliance in the mood and anxiety group on a parameter of our reinforcement learning model that characterizes a prepotent (Pavlovian) bias to withhold responding in the face of negative outcomes. This was particularly the case when the mood and anxiety group was under stress. This formal description of avoidance within the reinforcement learning framework provides a new means of linking clinical symptoms with biophysically plausible models of neural circuitry and, as such, takes us closer to a mechanistic understanding of mood and anxiety disorders.","link":"/opendata/mkrtchian-et-al-2017/"},{"title":"Mkrtchian et al. (2023)","text":"Computational models can offer mechanistic insight into cognition and therefore have the potential to transform our understanding of psychiatric disorders and their treatment. For translational efforts to be successful, it is imperative that computational measures capture individual characteristics reliably. To date, this issue has received little consideration. Here we examine the reliability of reinforcement learning and economic models derived from two commonly used tasks. 
Healthy individuals (N=50) completed a restless four-armed bandit and a calibrated gambling task twice, two weeks apart. Reward and punishment processing parameters from the reinforcement learning model showed fair-to-good reliability, while risk/loss aversion parameters from a prospect theory model exhibited good-to-excellent reliability. Both models were further able to predict future behaviour above chance within individuals. This prediction was better when based on participants’ own model parameters than other participants’ parameter estimates. These results suggest that reinforcement learning, and particularly prospect theory parameters, can be measured reliably to assess learning and decision-making mechanisms, and that these processes may represent relatively distinct computational profiles across individuals. Overall, these findings indicate the translational potential of clinically-relevant computational parameters for precision psychiatry.","link":"/opendata/mkrtchian-et-al-2023/"},{"title":"Moeller et al. (2021)","text":"Reward prediction errors (RPEs) and risk preferences have two things in common: both can shape decision making behavior, and both are commonly associated with dopamine. RPEs drive value learning and are thought to be represented in the phasic release of striatal dopamine. Risk preferences bias choices towards or away from uncertainty; they can be manipulated with drugs that target the dopaminergic system. Based on the common neural substrate, we hypothesize that RPEs and risk preferences are linked on the level of behavior as well. Here, we develop this hypothesis theoretically and test it empirically. First, we apply a recent theory of learning in the basal ganglia to predict how RPEs influence risk preferences. We find that positive RPEs should cause increased risk-seeking, while negative RPEs should cause risk-aversion. 
We then test our behavioral predictions using a novel bandit task in which value and risk vary independently across options. Critically, conditions are included where options vary in risk but are matched for value. We find that our prediction was correct: participants become more risk-seeking if choices are preceded by positive RPEs, and more risk-averse if choices are preceded by negative RPEs. These findings cannot be explained by other known effects, such as nonlinear utility curves or dynamic learning rates.","link":"/opendata/moeller-et-al-2021/"},{"title":"Molenmaker et al. (2023)","text":"Peer punishment can help groups to establish collectively beneficial public goods. However, when humans condition punishment on factors other than poor contribution, punishment can become ineffective and group cooperation deteriorates. Here we show that this happens in pluriform groups where members have different socio-demographic characteristics. In our public good provision experiment, participants were confronted with a public good from which all group members benefitted equally, and in-between rounds they could punish each other. Groups were uniform (members shared the same academic background) or pluriform (half the members shared the same academic background, and the other half shared another background). We show that punishment effectively enforced cooperation in uniform groups where punishment was conditioned on poor contribution. In pluriform groups, punishment was conditioned on poor contribution too, but also partially on others’ socio-demographic characteristics: dissimilar others were punished more than similar others regardless of their contribution. As a result, punishment lost its effectiveness in deterring free-riding and maintaining public good provision. Follow-up experiments indicated that such discriminatory punishment was used to demarcate and reinforce subgroup boundaries. 
This work reveals that peer punishment fails to enforce cooperation in groups with a pluriform structure, which is the rule rather than the exception in contemporary societies.","link":"/opendata/molenmaker-et-al-2023/"},{"title":"Molinaro & Collins (2023)","text":"When observing the outcome of a choice, people are sensitive to the choice’s context, such that the experienced value of an option depends on the alternatives: getting $1 when the possibilities were 0 or 1 feels much better than when the possibilities were 1 or 10. Context-sensitive valuation has been documented within reinforcement learning (RL) tasks, in which values are learned from experience through trial and error. Range adaptation, wherein options are rescaled according to the range of values yielded by available options, has been proposed to account for this phenomenon. However, we propose that other mechanisms, reflecting a different theoretical viewpoint, may also explain this phenomenon. Specifically, we theorize that internally defined goals play a crucial role in shaping the subjective value attributed to any given option. Motivated by this theory, we develop a new “intrinsically enhanced” RL model, which combines extrinsically provided rewards with internally generated signals of goal achievement as a teaching signal. Across 7 different studies (including previously published data sets as well as a novel, preregistered experiment with replication and control studies), we show that the intrinsically enhanced model can explain context-sensitive valuation as well as, or better than, range adaptation. Our findings indicate a more prominent role of intrinsic, goal-dependent rewards than previously recognized within formal models of human RL. By integrating internally generated signals of reward, standard RL theories should better account for human behavior, including context-sensitive valuation and beyond.","link":"/opendata/molinaro-collins-2023/"},{"title":"Molinaro et al. 
(2023)","text":"From an early age, children need to gather information to learn about their environment. Deciding which knowledge to pursue can be difficult because information can serve several, sometimes competing, purposes. Here, we examine the developmental trajectories of such diverse information-seeking motives. Over five experiments involving 521 children (aged 4-12), we find that school-age children integrate three key factors into their information-seeking choices: whether information reduces uncertainty, is useful in directing action, and is likely to be positive. Choices that likely reveal positive information and are useful for action emerge as early as age 4, followed by choices that reduce uncertainty (at ~age 5). Our results suggest that motives related to usefulness and uncertainty reduction become stronger with age, while the tendency to seek positive news does not show a statistically significant change throughout development. This study reveals how the relative importance of diverging, sometimes conflicting, information-seeking motives emerges throughout development.","link":"/opendata/molinaro-et-al-2023/"},{"title":"Molter et al. (2022)","text":"Choices are influenced by gaze allocation during deliberation, so that fixating an alternative longer leads to increased probability of choosing it. Gaze-dependent evidence accumulation provides a parsimonious account of choices, response times and gaze-behaviour in many simple decision scenarios. Here, we test whether this framework can also predict more complex context-dependent patterns of choice in a three-alternative risky choice task, where choices and eye movements were subject to attraction and compromise effects. Choices were best described by a gaze-dependent evidence accumulation model, where subjective values of alternatives are discounted while not fixated. 
Finally, we performed a systematic search over a large model space, allowing us to evaluate the relative contribution of different forms of gaze-dependence and additional mechanisms previously not considered by gaze-dependent accumulation models. Gaze-dependence remained the most important mechanism, but participants with strong attraction effects employed an additional similarity-dependent inhibition mechanism found in other models of multi-alternative multi-attribute choice.","link":"/opendata/molter-et-al-2022/"},{"title":"Moneta et al. (2023)","text":"The ventromedial prefrontal-cortex (vmPFC) is known to contain expected value signals that inform our choices. But expected values even for the same stimulus can differ by task. In this study, we asked how the brain flexibly switches between such value representations in a task-dependent manner. Thirty-five participants alternated between tasks in which either stimulus color or motion predicted rewards. We show that multivariate vmPFC signals contain a rich representation that includes the current task state or context (motion/color), the associated expected value, and crucially, the irrelevant value of the alternative context. We also find that irrelevant value representations in vmPFC compete with relevant value signals, interact with task-state representations and relate to behavioral signs of value competition. Our results shed light on vmPFC’s role in decision making, bridging between its role in mapping observations onto the task states of a mental map, and computing expected values for multiple states.","link":"/opendata/moneta-et-al-2023/"},{"title":"Montero-Porras et al. (2022)","text":"People have different preferences for what they allocate for themselves and what they allocate to others in social dilemmas. These differences result from contextual reasons, intrinsic values, and social expectations. 
What is still an area of debate is whether these differences can be estimated from differences in each individual’s deliberation process. In this work, we analyse the participants’ reaction times in three different experiments of the Iterated Prisoner’s Dilemma with the Drift Diffusion Model, which links response times to the perceived difficulty of the decision task, the rate of accumulation of information (deliberation), and the intuitive attitudes towards the choices. The correlation between these results and the attitude of the participants towards the allocation of resources is then determined. We observe that individuals who allocated resources equally are correlated with more deliberation than highly cooperative or highly defective participants, who accumulate evidence more quickly to reach a decision. Also, the evidence collection is faster in fixed neighbour settings than in shuffled ones. Consequently, fast decisions do not distinguish cooperators from defectors in these experiments, but appear to separate those that are more reactive to the behaviour of others from those that act categorically.","link":"/opendata/montero-porras-et-al-2022/"},{"title":"Moran et al. (2019)","text":"An extensive reinforcement learning literature shows that organisms assign credit efficiently, even under conditions of state uncertainty. However, little is known about credit-assignment when state uncertainty is subsequently resolved. Here, we address this problem within the framework of an interaction between model-free (MF) and model-based (MB) control systems. We present and support experimentally a theory of MB retrospective-inference. Within this framework, a MB system resolves uncertainty that prevailed when actions were taken thus guiding an MF credit-assignment. 
Using a task in which there was initial uncertainty about the lotteries that were chosen, we found that when participants’ momentary uncertainty about which lottery had generated an outcome was resolved by provision of subsequent information, participants preferentially assigned credit within an MF system to the lottery they retrospectively inferred was responsible for this outcome. These findings extend our knowledge about the range of MB functions and the scope of system interactions.","link":"/opendata/moran-et-al-2019/"},{"title":"Moran et al. (2021)","text":"Dual-reinforcement learning theory proposes behaviour is under the tutelage of a retrospective, value-caching, model-free (MF) system and a prospective-planning, model-based (MB) system. This architecture raises a question as to the degree to which, when devising a plan, an MB controller takes account of influences from its MF counterpart. We present evidence that such a sophisticated self-reflective MB planner incorporates an anticipation of the influences its own MF proclivities exert on the execution of its planned future actions. Using a novel bandit task, wherein subjects were periodically allowed to design their environment, we show that reward-assignments were constructed in a manner consistent with an MB system taking account of its MF propensities. Thus, in the task participants assigned higher rewards to bandits that were momentarily associated with stronger MF tendencies. Our findings have implications for a range of decision making domains that includes drug abuse, pre-commitment, and the tension between short and long-term decision horizons in economics.","link":"/opendata/moran-et-al-2021/"},{"title":"Morris et al. (2021)","text":"Humans have a remarkable capacity for flexible decision-making, deliberating among actions by modeling their likely outcomes. This capacity allows us to adapt to the specific features of diverse circumstances. 
In real-world decision-making, however, people face an important challenge: There are often an enormous number of possibilities to choose among, far too many for exhaustive consideration. There is a crucial, understudied prechoice step in which, among myriad possibilities, a few good candidates come quickly to mind. How do people accomplish this? We show across nine experiments (N = 3,972 U.S. residents) that people use computationally frugal cached value estimates to propose a few candidate actions on the basis of their success in past contexts (even when irrelevant for the current context). Deliberative planning is then deployed just within this set, allowing people to compute more accurate values on the basis of context-specific criteria. This hybrid architecture illuminates how typically valuable thoughts come quickly to mind during decision-making.","link":"/opendata/morris-et-al-2021/"},{"title":"Moutoussis et al. (2018)","text":"Pavlovian influences are important in guiding decision-making across health and psychopathology. There is an increasing interest in using concise computational tasks to parametrise such influences in large populations, and especially to track their evolution during development and changes in mental health. However, the developmental course of Pavlovian influences is uncertain, a problem compounded by the unclear psychometric properties of the relevant measurements. We assessed Pavlovian influences in a longitudinal sample using a well characterised and widely used Go-NoGo task. We hypothesized that the strength of Pavlovian influences and other psychomarkers guiding decision-making would behave like traits. As reliance on Pavlovian influence is not as profitable as precise instrumental decision-making in this Go-NoGo task, we expected this influence to decrease with higher IQ and age. Additionally, we hypothesized it would correlate with expressions of psychopathology. 
We found that Pavlovian effects had weak temporal stability, while model-fit was more stable. In terms of external validity, Pavlovian effects decreased with increasing IQ and experience within the task, in line with normative expectations. However, Pavlovian effects were poorly correlated with age or psychopathology. Thus, although this computational construct did correlate with important aspects of development, it does not meet conventional requirements for tracking individual development. We suggest measures that might improve psychometric properties of task-derived Pavlovian measures for future studies.","link":"/opendata/moutoussis-et-al-2018/"},{"title":"Muela et al. (2023)","text":"Negative/positive urgency (NU/PU) refers to the proneness to act rashly under negative/positive emotions. These traits are proxies to generalized emotion dysregulation, and are well-established predictors of gambling-related problems. We aimed to replicate a previous work (Quintero et al., 2020) showing NU to be related to faulty extinction of conditioned stimuli in an emotional conditioning task, to extend these findings to PU, and to clarify the role of urgency in the development of gambling-related craving and problems. 81 gamblers performed an acquisition-extinction task in which neutral, disgusting, erotic and gambling-related images were used as unconditioned stimuli (US), and color patches as conditioned stimuli (CS). Trial-by-trial predictive responses were analyzed using generalized linear mixed-effects models (GLME). PU was more strongly related than NU to craving and severity of gambling problems. PU did not influence acquisition in the associative task, whereas NU slightly slowed it. Extinction was hampered in individuals with high PU, and a follow-up analysis showed this effect to depend on relative preference for skill-based and casino games. 
Results suggest that resistance to extinction of emotionally conditioned cues is a sign of malfunctioning emotion regulation in problematic gambling. In our work, the key effect was driven by PU (instead of NU), and gambling craving and symptoms were also more closely predicted by it. Future research should compare the involvement of PU and NU in emotion regulation and gambling problems, for gamblers with preference for different gambling modalities (e.g., pure chance vs skill games).","link":"/opendata/muela-et-al-2023/"},{"title":"Mueller et al. (2019)","text":"In classical fear conditioning, neutral conditioned stimuli that have been paired with aversive physical unconditioned stimuli eventually trigger fear responses. Here, we tested whether aversive mental images systematically paired with a conditioned stimulus also cause de novo fear learning in the absence of any external aversive stimulation. In two experiments (N = 45 and N = 41), participants were first trained to produce aversive, neutral, or no imagery in response to three different visual-imagery cues. In a subsequent imagery-based differential-conditioning paradigm, each of the three cues systematically co-terminated with one of three different neutral faces. Although the face that was paired with the aversive-imagery cue was never paired with aversive external stimuli or threat-related instructions, participants rated it as more arousing, unpleasant, and threatening and displayed relative fear bradycardia and fear-potentiated startle. These results could be relevant for the development of fear and related disorders without trauma.","link":"/opendata/mueller-et-al-2019/"},{"title":"Müller et al. (2022)","text":"Research suggests that the temporal order in which people receive information about costs and benefits whilst making decisions can influence their choices. 
But do people have a preference for seeking information about costs or benefits when making effort-based decisions, and does this impact motivation? Here, participants made choices about whether to exert different levels of physical effort to obtain different magnitudes of reward, or rest for low reward. Prior to each effort-based choice, they also had to decide which information they wanted to see first: how much physical effort would be required, or how large the reward would be. We found no overall preference for seeking reward or effort information first, but motivation did change when people saw reward or effort information first. Seeking effort information first, both someone’s average tendency to do so and their choice to see effort first on a given trial, was associated with reductions in the willingness to exert higher effort. Moreover, the tendency to prefer effort information first was associated with reduced vigorous exercise and higher levels of fatigue in everyday life. These findings highlight that preferences for seeking effort information may be a bias that reduces people’s willingness to exert effort in the lab and in everyday life.","link":"/opendata/muller-et-al-2022/"},{"title":"Naefgen et al. (2022)","text":"We expand the usually cross-sectional perspective on dual-tasking performance to include both intra- and interpersonal variability, which should capture within-person dynamics and psychological processes better. Two simple tasks, first as single-, then as dual-tasks, were performed by 58 participants over 20 sessions. We found positive relationships between (1) single- and dual-tasking performance and (2) the dual-task component tasks both inter- and intrapersonally. Better single-taskers were better dual-taskers and better single-tasking sessions were better dual-tasking sessions. This implies shared capacities that covary both inter- and intraindividually. 
We conclude that taking intra- and interpersonal variability into account is a promising future perspective.","link":"/opendata/naefgen-et-al-2022/"},{"title":"Najar et al. (2020)","text":"While there is no doubt that social signals affect human reinforcement learning, there is still no consensus about how this process is computationally implemented. To address this issue, we compared three psychologically plausible hypotheses about the algorithmic implementation of imitation in reinforcement learning. The first hypothesis, decision biasing (DB), postulates that imitation consists in transiently biasing the learner’s action selection without affecting their value function. According to the second hypothesis, model-based imitation (MB), the learner infers the demonstrator’s value function through inverse reinforcement learning and uses it to bias action selection. Finally, according to the third hypothesis, value shaping (VS), the demonstrator’s actions directly affect the learner’s value function. We tested these three hypotheses in 2 experiments (N = 24 and N = 44) featuring a new variant of a social reinforcement learning task. We show through model comparison and model simulation that VS provides the best explanation of learners’ behavior. Results were replicated in a third independent experiment featuring a larger cohort and a different design (N = 302). In our experiments, we also manipulated the quality of the demonstrator’s choices and found that learners were able to adapt their imitation rate, so that only skilled demonstrators were imitated. We proposed and tested an efficient meta-learning process to account for this effect, where imitation is regulated by the agreement between the learner and the demonstrator. In sum, our findings provide new insights and perspectives on the computational mechanisms underlying adaptive imitation in human reinforcement learning.","link":"/opendata/najar-et-al-2020/"},{"title":"Nava et al. 
(2023)","text":"Cooperation is one of the most advantageous strategies to have evolved in small- and large-scale human societies, often considered essential to their success or survival. We investigated how cooperation and the mechanisms influencing it change across the lifespan, by assessing cooperative choices from adolescence to old age (12-79 years, N = 382) forcing participants to decide either intuitively or deliberatively through the use of randomised time constraints. As determinants of these choices, we considered participants’ level of altruism, their reciprocity expectations, their optimism, their desire to be socially accepted, and their attitude toward risk. We found that intuitive decision-making favours cooperation, but only from age 20 when a shift occurs: whereas in young adults, intuition favours cooperation, in adolescents it is reflection that favours cooperation. Participants’ decisions were shown to be rooted in their expectations about other people’s cooperative behaviour and influenced by individuals’ level of optimism about their own future, revealing that the journey to the cooperative humans we become is shaped by reciprocity expectations and individual predispositions.","link":"/opendata/nava-et-al-2023/"},{"title":"Navarro et al. (2018)","text":"In everyday life, people need to make choices without full information about the environment, which poses an explore-exploit dilemma in which one must balance the need to learn about the world and the need to obtain rewards from it. The explore-exploit dilemma is often studied using the multi-armed restless bandit task, in which people repeatedly select from multiple options, and human behaviour is modelled as a form of reinforcement learning via Kalman filters. 
Inspired by work in the judgment and decision-making literature, we present two experiments using multi-armed bandit tasks in both static and dynamic environments, in situations where options can become unviable and vanish if they are not pursued. A Kalman filter model using Thompson sampling provides an excellent account of human learning in a standard restless bandit task, but there are systematic departures in the vanishing bandit task. We explore the nature of this loss aversion signal and consider theoretical explanations for the results.","link":"/opendata/navarro-et-al-2018/"},{"title":"Navarro et al. (2023)","text":"Complex span tasks are perhaps the most widely used paradigm to measure working memory capacity (WMC). Researchers assume that all types of complex span tasks assess domain-general WM. However, most research supporting this claim comes from factor analysis approaches that do not examine task performance at the item level, thus not allowing comparison of the characteristics of verbal and spatial complex span tasks. Item response theory (IRT) can help determine the extent to which different complex span tasks assess domain-general WM. In the current study, spatial and verbal complex span tasks were examined using IRT. The results revealed differences between verbal and spatial tasks in terms of item difficulty and block difficulty, and showed that most subjects with below-average ability were able to answer most items correctly across all tasks. In line with previous research, the findings suggest that examining domain-general WM by using only one task might elicit skewed scores based on task domain. Further, visuospatial complex span tasks should be prioritized as a measure of WMC if resources are limited.","link":"/opendata/navarro-et-al-2023/"},{"title":"Navidi et al. (2023)","text":"Prosocial learning involves the acquisition of knowledge and skills necessary for making decisions that benefit others. 
We asked if, in the context of value-based decision-making, there is any difference between learning strategies for oneself vs. for others. We implemented a 2-step reinforcement learning paradigm in which participants learned, in separate blocks, to make decisions for themselves or for a present other confederate who evaluated their performance. We replicated the canonical features of model-based and model-free reinforcement learning in our results. The behaviour of the majority of participants was best explained by a mixture of model-based and model-free control; most participants relied more heavily on model-based control, and this strategy enhanced their learning success. Regarding our key self-other hypothesis, we did not find any significant difference in behavioural performance or in the model-based parameters of learning when comparing the self and other conditions.","link":"/opendata/navidi-et-al-2023/"},{"title":"Neuser et al. (2020)","text":"Interoceptive feedback transmitted via the vagus nerve plays a vital role in motivation by tuning actions according to physiological needs. Whereas vagus nerve stimulation (VNS) reinforces actions in animals, motivational effects elicited by VNS in humans are still largely elusive. Here, we applied non-invasive transcutaneous auricular VNS (taVNS) on the left or right ear while participants exerted effort to earn rewards using a randomized cross-over design (vs. sham). In line with preclinical studies, acute taVNS enhances invigoration of effort, and stimulation on the left side primarily facilitates invigoration for food rewards. In contrast, we do not find conclusive evidence that acute taVNS affects effort maintenance or wanting ratings. Collectively, our results suggest that taVNS enhances reward-seeking by boosting invigoration, not effort maintenance, and that the stimulation side affects generalization beyond food reward. 
Thus, taVNS may enhance the pursuit of prospective rewards, which may pave avenues to treat motivational deficiencies.","link":"/opendata/neuser-et-al-2020/"},{"title":"Nicholas et al. (2022)","text":"A key question in decision-making is how humans arbitrate between competing learning and memory systems to maximize reward. We address this question by probing the balance between the effects, on choice, of incremental trial-and-error learning versus episodic memories of individual events. Although a rich literature has studied incremental learning in isolation, the role of episodic memory in decision-making has only recently drawn focus, and little research disentangles their separate contributions. We hypothesized that the brain arbitrates rationally between these two systems, relying on each in circumstances to which it is most suited, as indicated by uncertainty. We tested this hypothesis by directly contrasting contributions of episodic and incremental influence to decisions, while manipulating the relative uncertainty of incremental learning using a well-established manipulation of reward volatility. Across two large, independent samples of young adults, participants traded these influences off rationally, depending more on episodic information when incremental summaries were more uncertain. These results support the proposal that the brain optimizes the balance between different forms of learning and memory according to their relative uncertainties and elucidate the circumstances under which episodic memory informs decisions.","link":"/opendata/nicholas-et-al-2022/"},{"title":"Nitsch et al. (2022)","text":"A contemporary research agenda in behavioral economics and neuroeconomics aims to identify individual differences and (neuro)psychological correlates of rationality. This research has been widely received in important interdisciplinary and field outlets. 
However, the psychometric reliability of such measurements of rationality has been presumed without enough methodological scrutiny. Drawing from multiple original and published datasets (in total over 1,600 participants), we unequivocally show that contemporary measurements of rationality have moderate to poor reliability according to common standards. Further analyses of the variance components, as well as allowing participants to revise previous choices, suggest that this is driven by low between-subject variance rather than high measurement error. As has been argued previously for other behavioral measurements, this poses a challenge to the predominant correlational research designs and the search for sociodemographic or neural predictors. While our results draw a sobering picture of the prospects of contemporary measurements of rationality, they are not necessarily surprising from a theoretical perspective, which we outline in our discussion.","link":"/opendata/nitsch-et-al-2022/"},{"title":"Niv et al. (2012)","text":"Humans and animals are exquisitely, though idiosyncratically, sensitive to risk or variance in the outcomes of their actions. Economic, psychological, and neural aspects of this are well studied when information about risk is provided explicitly. However, we must normally learn about outcomes from experience, through trial and error. Traditional models of such reinforcement learning focus on learning about the mean reward value of cues and ignore higher order moments such as variance. We used fMRI to test whether the neural correlates of human reinforcement learning are sensitive to experienced risk. Our analysis focused on anatomically delineated regions of a priori interest in the nucleus accumbens, where blood oxygenation level-dependent (BOLD) signals have been suggested as correlating with quantities derived from reinforcement learning. 
We first provide unbiased evidence that the raw BOLD signal in these regions corresponds closely to a reward prediction error. We then derive from this signal the learned values of cues that predict rewards of equal mean but different variance and show that these values are indeed modulated by experienced risk. Moreover, a close neurometric-psychometric coupling exists between the fluctuations of the experience-based evaluations of risky options that we measured neurally and the fluctuations in behavioral risk aversion. This suggests that risk sensitivity is integral to human learning, illuminating economic models of choice, neuroscientific models of affective learning, and the workings of the underlying neural mechanisms.","link":"/opendata/niv-et-al-2012/"},{"title":"NSPN Consortium (2022)","text":"Data from the NSPN study, a dataset of N=2400 adolescents and young adults (ages 14-24) who completed many self-report, cognitive, and reinforcement learning measures.","link":"/opendata/nspn-consortium-2022/"},{"title":"Nussenbaum et al. (2020)","text":"For years, adult psychological research has benefitted from web-based data collection. There is growing interest in harnessing this approach to facilitate data collection from children and adolescents to address foundational questions about cognitive development. To date, however, few studies have directly tested whether findings from in-lab developmental psychology tasks can be replicated online, particularly in the domain of value-based learning and decision-making. To address this question, we set up a pipeline for online data collection with children, adolescents, and adults, and conducted a replication of Decker et al. (2016). The original in-lab study employed a sequential decision-making paradigm to examine shifts in value-learning strategies from childhood to adulthood. 
Here, we used the same paradigm in a sample of 151 children (N = 50; ages 8 - 12 years), adolescents (N = 50; ages 13 - 17 years), and adults (N = 51; ages 18 - 25 years) and replicated the main finding that the use of a “model-based” learning strategy increases with age. In addition, we adapted a new index of abstract reasoning (MaRs-IB; Chierchia et al. 2019) for use online, and replicated a key result from Potter et al. (2017), which found that abstract reasoning ability mediated the relation between age and model-based learning. Our re-analyses of two previous in-lab datasets alongside our analysis of our online dataset revealed few qualitative differences across task administrations. These findings suggest that with appropriate precautions, researchers can effectively examine developmental differences in learning computations through unmoderated, online experiments.","link":"/opendata/nussenbaum-et-al-2020/"},{"title":"Nussenbaum et al. (2023)","text":"Across the lifespan, individuals frequently choose between exploiting known rewarding options or exploring unknown alternatives. A large body of work has suggested that children may explore more than adults. However, because novelty and reward uncertainty are often correlated, it is unclear how they differentially influence decision-making across development. Here, children, adolescents, and adults (ages 8-27 years, N = 122) completed an adapted version of a recently developed value-guided decision-making task that decouples novelty and uncertainty. In line with prior studies, we found that exploration decreased with increasing age. Critically, participants of all ages demonstrated a similar bias to select choice options with greater novelty, whereas aversion to reward uncertainty increased into adulthood. 
Computational modeling of participant choices revealed that whereas adolescents and adults demonstrated attenuated uncertainty aversion for more novel choice options, children’s choices were not influenced by reward uncertainty.","link":"/opendata/nussenbaum-et-al-2023/"},{"title":"O'Connell et al. (2022)","text":"Empathic experiences shape social behaviors and display considerable individual variation. Recent advances in computational behavioral modeling can help rigorously quantify individual differences, but remain understudied in the context of empathy and antisocial behavior. We adapted a go/no-go reinforcement learning task across social and non-social contexts such that monetary gains and losses explicitly impacted the subject, a study partner, or no one. Empathy was operationalized as sensitivity to others’ rewards, sensitivity to others’ losses, and as the Pavlovian influence of empathic outcomes on approach and avoidance behavior. Results showed that 61 subjects learned for a partner in a way that was computationally similar to how they learned for themselves. Results supported the psychometric value of individualized model parameters such as sensitivity to others’ loss, which was inversely associated with antisociality. Modeled empathic sensitivity also mapped onto motivation ratings, but was not associated with self-reported trait empathy. This work is the first to apply a social reinforcement learning task that spans affect and action requirement (go/no-go) to measure multiple facets of empathic sensitivity.","link":"/opendata/o-connell-et-al-2202/"},{"title":"Oguchi et al. (2023)","text":"Humans form complex societies in which we routinely engage in social decision-making regarding the allocation of resources among ourselves and others. One dimension that characterizes social decision-making in particular is whether to prioritize self-interest or respect for others: proself or prosocial. 
What causes this individual difference in social value orientation? Recent developments in social dual-process theory argue that social decision-making is characterized by its underlying domain-general learning systems: the model-free and model-based systems. In line with this learning approach, we propose and experimentally test the hypothesis that differences in social preferences stem from which learning system is dominant in an individual. Here, we used a non-social state transition task that allowed us to assess the balance between model-free/model-based learning and investigate its relation to the social value orientations. The results showed that proselfs depended more on model-based learning, whereas prosocials depended more on model-free learning. Reward amount and reaction time analyses showed that proselfs learned the task structure earlier in the session than prosocials, reflecting their difference in model-based/model-free learning dependence. These findings support the learning-based account of what drives differences in social preferences and have implications for understanding the mechanisms of prosocial behavior.","link":"/opendata/oguchi-et-al-2023/"},{"title":"Olschewski et al. (2019)","text":"The perception and integration of sequential numerical information is a common cognitive task. It is a prerequisite for experience-based economic choices, but it is usually not part of economic decision theory. To better understand the process of symbolic number integration and its influence on economic behavior, we performed three experimental studies that examined mean estimates and economic valuations of continuous number distributions. The results indicate that participants valued random number distributions below their respective arithmetic means and valued distributions as lower when their variance increased, indicating risk aversion. 
A similar though less pronounced pattern also occurred in the matched mean estimation task where accuracy was incentivized and preferences played no role. These patterns suggest that seemingly risk-averse preferences are partly due to cognitive biases when perceiving and estimating numbers. In addition, participants' apparent economic preference for right-skewed outcome distributions could be attributed mainly to estimation biases. We discuss the extent to which the results can be explained based on a compressed mental number line and different sample weighting models. Finally, a new model that can account for the qualitative data pattern and has stronger overweighting of lower than higher numbers as its core feature is developed. Together, our results indicate that basic cognitive processes in perceiving and integrating number sequences play a key role in understanding experience-based economic behavior.","link":"/opendata/olschewski-et-al-2019/"},{"title":"Otsuka & Yotsumoto (2023)","text":"Magnitude information is often correlated in the external world, providing complementary information about the environment. As if to reflect this relationship, the perceptions of different magnitudes (e.g., time and numerosity) are known to influence one another. Recent studies suggest that such magnitude interaction is similar to cue integration, such as multisensory integration. Here, we tested whether human observers could integrate the magnitudes of two quantities with distinct physical units (i.e., time and numerosity) as abstract magnitude information. The participants compared the magnitudes of two visual stimuli based on time, numerosity, or both. 
Consistent with the predictions of the maximum likelihood estimation (MLE) model, the participants integrated time and numerosity in a near-optimal manner; the weights for numerosity increased as the reliability of the numerosity information increased, and the integrated estimate was more reliable than either the time or numerosity estimate. Furthermore, the integration approached a statistical optimum as the temporal discrepancy of the acquisition of each piece of information became smaller. These results suggest that magnitude interaction arises through a similar computational mechanism to cue integration. They are also consistent with the idea that different magnitudes are processed by a generalized magnitude system.","link":"/opendata/otsuka-yotsumoto-2023/"},{"title":"Otsuka (2023)","text":"People can extract and learn statistical regularities from various aspects of everyday life. The current study examined whether people have a mechanism to learn regularity based on time information and investigated whether sensitivity to time information is modulated by individual time management. In the familiarization phase, participants were required to observe a visual sequence of objects. Although the objects were presented in a random order, the amount of time for which the objects were presented was organized into successive triplets (e.g., 850-1,000-700 ms). In the subsequent test phase, two three-object sequences were presented. One sequence was a timing triplet that had temporal regularities. The other was a foil created from three different triplets. Participants were required to judge which sequence was more familiar based on the familiarization phase. The results showed that the triplets were successfully discriminated from the foils. These results were also observed for blank intervals. 
The current findings also revealed that although visual statistical learning was expressed when participants observed the temporal regularities of shapes tied to the corresponding durations during familiarization, this learning overshadowed the expression of generic timing regularities when durations were untied to objects. Furthermore, participants with high scores on the Time Management Scale showed a higher extent of visual statistical learning on object durations than those with low scores. These results suggest that people extract and learn regularities based on time information and that statistical learning based on time information is correlated with individual time management.","link":"/opendata/otsuka-2023/"},{"title":"Pachur et al. (2017)","text":"We separate for the first time the roles of cognitive and motivational factors in shaping age differences in decision making under risk. Younger and older adults completed gain, loss, and mixed-domain choice problems as well as measures of cognitive functioning and affect. The older adults' decision quality was lower than the younger adults' in the loss domain, and this age difference was attributable to the older adults' lower cognitive abilities. In addition, the older adults chose the more risky option more often than the younger adults in the gain and mixed domains; this difference in risk aversion was attributable to less pronounced negative affect among the older adults. Computational modeling with a hierarchical Bayesian implementation of cumulative prospect theory revealed that the older adults had higher response noise and more optimistic decision weights for gains than did the younger adults. Moreover, the older adults showed no loss aversion, a finding that supports a positivity-focus (rather than a loss-prevention) view of motivational reorientation in older age.","link":"/opendata/pachur-et-al-2017/"},{"title":"Pachur et al. 
(2018)","text":"There is a disconnect in the literature between analyses of risky choice based on cumulative prospect theory (CPT) and work on predecisional information processing. One likely reason is that for expectation models (e.g., CPT), it is often assumed that people behave only as if they conducted the computations leading to the predicted choice and that the models are thus mute regarding information processing. We suggest that key psychological constructs in CPT, such as loss aversion and outcome and probability sensitivity, can be interpreted in terms of attention allocation. In two experiments, we tested hypotheses about specific links between CPT parameters and attentional regularities. Experiment 1 used process tracing to monitor participants' predecisional attention allocation to outcome and probability information. As hypothesized, individual differences in CPT's loss-aversion, outcome-sensitivity, and probability-sensitivity parameters (estimated from participants' choices) were systematically associated with individual differences in attention allocation to outcome and probability information. For instance, loss aversion was associated with the relative attention allocated to loss and gain outcomes, and a more strongly curved weighting function was associated with less attention allocated to probabilities. Experiment 2 manipulated participants' attention to losses or gains, causing systematic differences in CPT's loss-aversion parameter. This result indicates that attention allocation can to some extent cause choice regularities that are captured by CPT. Our findings demonstrate an as-if model's capacity to reflect characteristics of information processing. We suggest that the observed CPT-attention links can be harnessed to inform the development of process models of risky choice.","link":"/opendata/pachur-et-al-2018/"},{"title":"Palmer et al. 
(2023)","text":"Objective: Near-misses are a structural characteristic of gambling products that can be engineered within modern digital games. Over a series of pre-registered experiments using an online slot machine simulation, we investigated the impact of near-miss outcomes on subjective ratings (motivation, valence) and two behavioural measures (speed of gambling, bet size). Method: Participants were recruited using Prolific and gambled on an online 3-reel slot machine simulator that delivered a 1 in 3 rate of X-X-O near-misses. Study 1 measured trial-by-trial subjective ratings of valence and motivation (Study 1a, n = 169; Study 1b, n = 148). Study 2 measured spin initiation latencies (n = 170) as a function of the previous trial outcome. Study 3 measured bet size (n = 172) as a function of the previous trial outcome. Results: In Study 1a, near-misses increased the motivation to continue gambling relative to full-misses, supporting H1. On valence ratings, near-misses were rated significantly more positively than full-misses, in the opposite direction to H2; this effect was confirmed in a close replication (Study 1b). In Study 2, participants gambled faster following near-misses relative to full-misses, supporting H3. In Study 3, participants significantly increased their bet size following near-misses relative to full-misses, supporting H4. Conclusion: Across all dependent variables, near-miss outcomes yielded statistically significant differences from objectively-equivalent full-miss outcomes, corroborating the ‘near miss effect’ across both subjective and behavioral measures, and in the environment of online gambling. The unexpected findings on valence ratings are considered in terms of boundary conditions for the near-miss effect, and competing theoretical accounts based on frustration/regret, goal generalization, and skill acquisition.","link":"/opendata/palmer-et-al-2023/"},{"title":"Palminteri et al. 
(2016)","text":"Adolescence is a period of life characterised by changes in learning and decision-making. Learning and decision-making do not rely on a unitary system, but instead require the coordination of different cognitive processes that can be mathematically formalised as dissociable computational modules. Here, we aimed to trace the developmental time-course of the computational modules responsible for learning from reward or punishment, and learning from counterfactual feedback. Adolescents and adults carried out a novel reinforcement learning paradigm in which participants learned the association between cues and probabilistic outcomes, where the outcomes differed in valence (reward versus punishment) and feedback was either partial or complete (either the outcome of the chosen option only, or the outcomes of both the chosen and unchosen option, were displayed). Computational strategies changed during development: whereas adolescents' behaviour was better explained by a basic reinforcement learning algorithm, adults' behaviour integrated increasingly complex computational features, namely a counterfactual learning module (enabling enhanced performance in the presence of complete feedback) and a value contextualisation module (enabling symmetrical reward and punishment learning). Unlike adults, adolescent performance did not benefit from counterfactual (complete) feedback. In addition, while adults learned symmetrically from both reward and punishment, adolescents learned from reward but were less likely to learn from punishment. This tendency to rely on rewards and not to consider alternative consequences of actions might contribute to our understanding of decision-making in adolescence.","link":"/opendata/palminteri-et-al-2016/"},{"title":"Palminteri et al. 
(2017)","text":"Previous studies suggest that factual learning, that is, learning from obtained outcomes, is biased, such that participants preferentially take into account positive, as compared to negative, prediction errors. However, whether or not the prediction error valence also affects counterfactual learning, that is, learning from forgone outcomes, is unknown. To address this question, we analysed the performance of two groups of participants on reinforcement learning tasks using a computational model that was adapted to test if prediction error valence influences learning. We carried out two experiments: in the factual learning experiment, participants learned from partial feedback (i.e., the outcome of the chosen option only); in the counterfactual learning experiment, participants learned from complete feedback information (i.e., the outcomes of both the chosen and unchosen option were displayed). In the factual learning experiment, we replicated previous findings of a valence-induced bias, whereby participants learned preferentially from positive, relative to negative, prediction errors. In contrast, for counterfactual learning, we found the opposite valence-induced bias: negative prediction errors were preferentially taken into account, relative to positive ones. When considering valence-induced bias in the context of both factual and counterfactual learning, it appears that people tend to preferentially take into account information that confirms their current choice.","link":"/opendata/palminteri-et-al-2017/"},{"title":"Panidi et al. (2022)","text":"In this study, we provide causal evidence that the dorsolateral prefrontal cortex (DLPFC) supports the computation of subjective value in choices under risk via its involvement in probability weighting. 
Following offline continuous theta-burst transcranial magnetic stimulation (cTBS) of the DLPFC, subjects (N = 30, mean age 23.6, 56% females) completed a computerized task consisting of 96 binary lottery choice questions presented in random order. Using the hierarchical Bayesian modeling approach, we then estimated the structural parameters of risk preferences (the degree of risk aversion and the curvature of the probability weighting function) and analyzed the obtained posterior distributions to determine the effect of stimulation on model parameters. On a behavioral level, temporary downregulation of the left DLPFC excitability through cTBS decreased the likelihood of choosing an option with higher expected reward, while the probability of choosing a riskier lottery did not significantly change. Modeling the stimulation effects on risk preference parameters showed anecdotal evidence, as assessed by Bayes factors, that the probability weighting parameter increased after left DLPFC cTBS compared to sham.","link":"/opendata/panidi-et-al-2022/"},{"title":"Patt et al. (2021)","text":"When faced with intertemporal choices, people typically devalue rewards available in the future compared to rewards more immediately available, a phenomenon known as temporal discounting. Decisions involving intertemporal choices arise daily, with critical impact on health and financial wellbeing. Although many such decisions are “experiential” in that they involve delays and rewards that are experienced in real-time and can inform subsequent choices, most studies have focused on intertemporal choices with hypothetical outcomes (or outcomes delivered after all decisions are made). The present study focused on experiential intertemporal choices. First, a novel intertemporal choice task was developed and validated, using delays experienced in real time and artistic photographs as consumable perceptual rewards. 
Second, performance on the experiential task was compared to performance on a classic intertemporal choice task with hypothetical outcomes. Involvement of distinct processes across tasks was probed by examining differential relations to state and trait anxiety. A two-parameter logistic function framework was proposed to fit indifference point data. This approach accounts for individual variability not only in the delay at which an individual switches from choosing the delayed to more immediate option, but also in the slope of that switch. Fit results indicated that the experiential task elicited temporal discounting, with effective trade-off between delay and perceptual reward. Comparison with the hypothetical intertemporal choice task suggested distinct mechanisms: first, temporal discounting across the two tasks was not correlated; and second, state and trait anxiety both were associated with choice behavior in the experiential task, albeit in distinct ways, whereas neither was significantly associated with choice behavior in the hypothetical task. The engagement of different processes in the experiential compared to hypothetical task may align with neural evidence for the recruitment of the hippocampus in animal but not in classic human intertemporal choice studies.","link":"/opendata/patt-et-al-2021/"},{"title":"Pauli et al. (2022)","text":"Theoretical and empirical accounts suggest that adolescence is associated with heightened reward learning and impulsivity. Experimental tasks and computational models that can dissociate reward learning from the tendency to initiate actions impulsively (action initiation bias) are thus critical to characterise the mechanisms that drive developmental differences. However, existing work has rarely quantified both learning ability and action initiation, or it has tested small samples. 
Here, using computational modelling of a learning task collected from a large sample (N=742, 9-18 years, 11 countries), we tested differences in reward and punishment learning and action initiation from childhood to adolescence. Computational modelling revealed that whilst punishment learning rates increased with age, reward learning remained stable. In parallel, action initiation biases decreased with age. Results were similar when considering pubertal stage instead of chronological age. We conclude that heightened reward responsivity in adolescence can reflect differences in action initiation rather than enhanced reward learning.","link":"/opendata/pauli-et-al-2022/"},{"title":"Payzan-LeNestour et al. (2021)","text":"Research in the field of multisensory perception shows that what we hear can influence what we see in a wide range of perceptual tasks. It is however unknown whether this extends to the visual perception of risk, despite the importance of the question in many applied domains where properly assessing risk is crucial, starting with financial trading. To fill this knowledge gap, we ran interviews with professional traders and conducted three laboratory studies using judgments of financial asset risk as a testbed. We provide evidence that the presence of ambient sound impacts risk perception, possibly due to the combination of facilitatory and synesthetic effects of general relevance to the perception of risk in many species as well as humans. We discuss the implications of our findings for various applied domains (e.g., financial, medical, and military decision-making), and raise new questions for future research.","link":"/opendata/payzan-lenestour-et-al-2021/"},{"title":"Pedersen et al. (2021)","text":"Adaptive behavior requires balancing approach and avoidance based on the rewarding and aversive consequences of actions. Imbalances in this evaluation are thought to characterize mood disorders such as major depressive disorder (MDD). 
We present a novel application of the drift diffusion model (DDM) suited to quantify how offers of reward and aversiveness, and neural correlates thereof, are dynamically integrated to form decisions, and how such processes are altered in MDD. Hierarchical parameter estimation from the DDM demonstrated that the MDD group differed in three distinct reward-related parameters driving approach-based decision making. First, MDD was associated with reduced reward sensitivity, measured as the impact of offered reward on evidence accumulation. Notably, this effect was replicated in a follow-up study. Second, the MDD group showed lower starting point bias towards approaching offers. Third, this starting point was influenced in opposite directions by Pavlovian effects and by nucleus accumbens activity across the groups: greater accumbens activity was related to approach bias in controls but avoid bias in MDD. Cross-validation revealed that the combination of these computational biomarkers was diagnostic of patient status, with accumbens influences being particularly diagnostic. Finally, within the MDD group, reward sensitivity and nucleus accumbens parameters were differentially related to symptoms of perceived stress and depression. Collectively, these findings establish the promise of computational psychiatry approaches to dissecting approach-avoidance decision dynamics relevant for affective disorders.","link":"/opendata/pedersen-et-al-2021/"},{"title":"Peng et al. (2022)","text":"Obsessive-compulsive disorder (OCD) is characterized by uncontrollable repetitive actions thought to rely on abnormalities within fundamental instrumental learning systems. We investigated cognitive and computational mechanisms underlying Pavlovian biases on instrumental behavior in both clinical OCD patients and healthy controls using a Pavlovian-Instrumental Transfer (PIT) task. 
PIT is typically evidenced by increased responding in the presence of a positive (previously rewarded) Pavlovian cue, and reduced responding in the presence of a negative cue. Thirty OCD patients and thirty-one healthy controls completed the Pavlovian Instrumental Transfer test, which included instrumental training, Pavlovian training for positive, negative and neutral cues, and a PIT phase in which participants performed the instrumental task in the presence of the Pavlovian cues. Modified Rescorla-Wagner models were fitted to trial-by-trial data of participants to estimate underlying computational mechanisms and quantify individual differences during training and transfer stages. Bayesian hierarchical methods were used to estimate free parameters and compare the models. Behavioral and computational results indicated a weaker Pavlovian influence on instrumental behavior in OCD patients than in healthy controls, especially for negative Pavlovian cues. Our results contrast with the increased PIT effects reported for another set of disorders characterized by compulsivity, substance use disorders, in which PIT is enhanced. A possible reason for the reduced PIT in OCD may be impairment in using the contextual information provided by the cues to appropriately adjust behavior, especially when inhibiting responding when a negative cue is present. This study provides deeper insight into our understanding of deficits in OCD from the perspective of Pavlovian influences on instrumental behavior and may have implications for OCD treatment modalities focused on reducing compulsive behaviors.","link":"/opendata/peng-et-al-2022/"},{"title":"Pereg et al. (2022)","text":"The study of social learning examines how individuals learn from others by means of observation, imitation, or compliance with advice. 
However, it remains largely unknown whether social learning processes have a distinct contribution to choice behavior, independent from non-social trial-and-error learning that often occurs simultaneously. A total of 153 participants completed a reinforcement learning task, where they were asked to make choices to gain rewards. Advice from an artificial teacher was presented on 60% of the trials, allowing us to compare choice behavior with and without advice. Results showed a strong and reliable tendency to follow advice (test-retest reliability ~.73). Computational modeling suggested a unique contribution of three distinct learning strategies: (a) individual learning (i.e., learning the value of actions, independent of advice), (b) informed advice-taking (i.e., learning the value of following advice), and (c) non-informed advice-taking (i.e., a constant bias to follow advice regardless of outcome history). Computational lesion simulation analyses provided specific regression signatures to both informed and non-informed advice-taking processes that were then confirmed by empirical data. We discuss the theoretical implications of integrating internal and external information during the learning process.","link":"/opendata/pereg-et-al-2022/"},{"title":"Pescetelli & Yeung (2019)","text":"In a world where ideas flow freely between people across multiple platforms, we often find ourselves relying on others’ information without an objective standard to judge whether those opinions are accurate. The present study tests an agreement-in-confidence hypothesis of advice perception, which holds that internal metacognitive evaluations of decision confidence play an important functional role - namely being a learning signal that allows one to learn about the reliability of others in the absence of feedback - in the perception and use of social information, such as peers’ advice. 
We propose that confidence can be used, computationally, to estimate advisors’ trustworthiness and advice reliability. Specifically, these processes are hypothesized to be particularly important in situations where objective feedback is absent or difficult to acquire. Here, we use a judge-advisor system paradigm to precisely manipulate the profiles of virtual advisors whose opinions are provided to participants performing a perceptual decision making task. We find that when advisors’ and participants’ judgments are independent, people are able to discriminate subtle advice features, like confidence calibration, whether or not objective feedback is available. However, when observers’ judgments (and judgment errors) are correlated - as is the case in many social contexts - predictable distortions can be observed between feedback and feedback-free scenarios. A simple model of advice reliability estimation, endowed with metacognitive insight, is able to explain key patterns of results observed in the human data. Finally, we use agent-based modeling to explore implications of these individual-level decision strategies for network-level patterns of trust and belief formation.","link":"/opendata/pescetelli-yeung-2019/"},{"title":"Peterson et al. (2021)","text":"Predicting and understanding how people make decisions has been a long-standing goal in many fields, with quantitative models of human decision-making informing research in both the social sciences and engineering. We show how progress toward this goal can be accelerated by using large datasets to power machine-learning algorithms that are constrained to produce interpretable psychological theories. 
Conducting the largest experiment on risky choice to date and analyzing the results using gradient-based optimization of differentiable decision theories implemented through artificial neural networks, we were able to recapitulate historical discoveries, establish that there is room to improve on existing theories, and discover a new, more accurate model of human decision-making in a form that preserves the insights from centuries of research.","link":"/opendata/peterson-et-al-2021/"},{"title":"Petitet et al. (2022)","text":"Apathy and impulsivity are expressed in a wide range of neuropsychiatric disorders, and, to a less severe extent, in healthy people too. Although traditionally considered to be opposite extremes of a single motivational spectrum, recent epidemiological questionnaire-based data suggest that both traits can in fact co-exist within the same individual. Here, we sought to investigate the relationship between these constructs in healthy people within a controlled task environment that examines the ability to make a decision under temporal uncertainty and measures the vigour of the response. Sixty participants performed a new version of the Traffic Light Task (TLT) and completed self-report questionnaire measures of apathy and impulsivity. Although the two traits were positively correlated in questionnaire assessments, they were associated with distinct behavioural signatures on the task. Impulsivity was expressed as an inflexible tendency to generate rapid anticipatory responses, regardless of task context. Apathy, on the other hand, was associated with a blunted effect of reward on response vigour. These findings reveal how apathy and impulsivity are related to distinct dimensions of goal-directed behaviour, explaining how these traits might co-exist in the same individuals.","link":"/opendata/petitet-et-al-2022/"},{"title":"Pettine et al. 
(2023)","text":"The world is overabundant with feature-rich information obscuring the latent causes of experience. How do people approximate the complexities of the external world with simplified internal representations that generalize to novel examples or situations? Theories suggest that internal representations could be determined by decision boundaries that discriminate between alternatives, or by distance measurements against prototypes and individual exemplars. Each provides advantages and drawbacks for generalization. We therefore developed theoretical models that leverage both discriminative and distance components to form internal representations via action-reward feedback. We then developed three latent-state learning tasks to test how humans use goal-oriented discriminative attention and prototype/exemplar representations. The majority of participants attended to both goal-relevant discriminative features and the covariance of features within a prototype. A minority of participants relied only on the discriminative feature. Behaviour of all participants could be captured by parameterizing a model combining prototype representations with goal-oriented discriminative attention.","link":"/opendata/pettine-et-al-2023/"},{"title":"Pike et al. (2022)","text":"Affective biases are commonly seen in disorders such as depression and anxiety, where individuals may show attention towards and more rapid processing of negative or threatening stimuli. Affective biases have been shown to change with effective intervention: randomized controlled trials into these biases and the mechanisms that underpin them may allow greater understanding of how interventions can be improved and their success maximized. For trials to be informative, we must have reliable ways of measuring affective bias over time, so we can detect how interventions are changing these biases. 
In particular, the test-retest reliability of our measures puts an upper bound on our ability to detect effects: thus, in this study, we examine the test-retest reliability of two behavioural tasks that examine affective bias. We recruited 58 individuals in an online study who completed these tasks twice, with at least 14 days in between sessions. We analysed reliability of both summary statistics and parameters from computational models using Pearson’s correlations and intra-class correlations. Standard summary statistic measures from these affective bias tasks had reliability ranging from 0.18 (poor) to 0.49 (moderate). Parameters from computational modelling of these tasks were in many cases less reliable than summary statistics. Embedding the covariance between sessions within the generative modelling framework resulted in higher stability estimates. In sum, measures from these affective bias tasks are moderately reliable, but further work to improve the reliability of these tasks would improve still further our ability to draw inferences in randomized trials.","link":"/opendata/pike-et-al-2022/"},{"title":"Pike et al. (2023)","text":"Catastrophizing, when an individual overestimates the probability of a severe negative outcome, is related to various aspects of mental ill-health. Here, we further characterize catastrophizing by investigating the extent to which self-reported catastrophizing is associated with risk-taking, using an online behavioural task and computational modelling. We performed two online studies: a pilot study (n = 69) and a main study (n = 263). In the pilot study, participants performed the Balloon Analogue Risk Task (BART), alongside two other tasks (reported in the Supplement), and completed mental health questionnaires. Based on the findings from the pilot, we explored risk-taking in more detail in the main study using two versions of the Balloon Analogue Risk task (BART), with either a high or low cost for bursting the balloon. 
In the main study, there was a significant negative relationship between self-report catastrophizing scores and risk-taking in the low (but not high) cost version of the BART. Computational modelling of the BART task revealed no relationship between any parameter and catastrophizing scores in either version of the task. We show that increased self-reported catastrophizing may be associated with reduced behavioural measures of risk-taking, but we were unable to identify a computational correlate of this effect.","link":"/opendata/pike-et-al-2023/"},{"title":"Piray et al. (2019)","text":"Learning and decision-making are modulated by socio-emotional processing and such modulation is implicated in clinically relevant personality traits of social anxiety. The present study elucidates the computational and neural mechanisms by which emotionally aversive cues disrupt learning in socially anxious human individuals. Healthy volunteers with low or high trait social anxiety performed a reversal learning task requiring learning actions in response to angry or happy face cues. Choice data were best captured by a computational model in which learning rate was adjusted according to the history of surprises. High trait socially anxious individuals used a less-dynamic strategy for adjusting their learning rate in trials started with angry face cues and, unlike the low social anxiety group, their dorsal anterior cingulate cortex (dACC) activity did not covary with the learning rate. Our results demonstrate that trait social anxiety is accompanied by disruption of optimal learning and dACC activity in threatening situations.","link":"/opendata/piray-et-al-2019/"},{"title":"Pisauro et al. (2022)","text":"Social interactions evolve continuously. Sometimes we cooperate, sometimes we compete, while at other times we strategically position ourselves somewhere in between to account for the ever-changing social contexts around us. 
Research on social interactions often focuses on a binary dichotomy between competition and cooperation, ignoring people’s evolving shifts along a continuum. Here, we develop an economic game - the Space Dilemma - where two players change their degree of cooperativeness over time in cooperative and competitive contexts. Using computational modelling, we show how social contexts bias choices and characterise how inferences about others’ intentions modulate cooperativeness. Consistent with the modelling predictions, brain regions previously linked to social cognition, including the temporo-parietal junction, dorso-medial prefrontal cortex and the anterior cingulate gyrus, encode social prediction errors and context-dependent signals, correlating with shifts along a cooperation-competition continuum. These results provide a comprehensive account of the computational and neural mechanisms underlying the continuous trade-off between cooperation and competition.","link":"/opendata/pisauro-et-al-2022/"},{"title":"Plate et al. (2023)","text":"Similar decision-making situations often arise repeatedly, presenting tradeoffs between (i) acquiring new information to facilitate future-related decisions (exploration) and (ii) using existing information to secure expected outcomes (exploitation). Exploration choices have been well characterized in nonsocial contexts; however, choices to explore (or not) in social environments are less well understood. Social environments are of particular interest because a key factor that increases exploration in nonsocial contexts is environmental uncertainty, and the social world is generally appreciated to be highly uncertain. Although uncertainty sometimes must be reduced behaviorally (e.g., by trying something and seeing what happens), other times it may be reduced cognitively (e.g., by imagining possible outcomes). 
Across four experiments, participants searched for rewards in a series of grids that were either described as comprising real people distributing previously earned points (social context) or as the result of a computer algorithm or natural phenomenon (nonsocial context). In Experiments 1 and 2, participants explored more, and earned fewer rewards, in the social versus nonsocial context, suggesting that social uncertainty prompted behavioral exploration at the cost of task-relevant goals. In Experiments 3 and 4, we provided additional information about the people in the search space that could support social-cognitive approaches to uncertainty reduction, including relationships of the social agents distributing points (Experiment 3) and information relevant to social group membership (Experiment 4); exploration decreased in both instances. Taken together, these experiments highlight the approaches to, and tradeoffs of, uncertainty reduction in social contexts.","link":"/opendata/plate-et-al-2023/"},{"title":"Pleskac et al. (2019)","text":"Computational models of decision making typically assume as people deliberate between options they mentally simulate outcomes from each one and integrate valuations of these outcomes to form a preference. In two studies, we investigated this deliberation process using a task where participants make a series of decisions between a certain and an uncertain option, which were shown as dynamic visual samples that represented possible payoffs. We developed and validated a method of reverse correlational analysis for the task that measures how this time-varying signal was used to make a choice. The first study used this method to examine how information processing during deliberation differed from a perceptual analog of the task. We found participants were less sensitive to each sample of information during preferential choice. 
In a second study, we investigated how these different measures of deliberation were related to impulsivity and drug and alcohol use. We found that while properties of the deliberation process were not related to impulsivity, some aspects of the process may be related to substance use. In particular, alcohol abuse was related to diminished sensitivity to the payoff information and drug use was related to the initial starting point of evidence accumulation. We synthesized our results with a rank-dependent sequential sampling model, which suggests that participants allocated more attentional weight to larger potential payoffs during preferential choice.","link":"/opendata/pleskac-et-al-2019/"},{"title":"Poli et al. (2022)","text":"Exploration is curiosity-driven when it relies on the intrinsic motivation to know rather than on extrinsic rewards. Recent evidence shows that artificial agents perform better on a variety of tasks when their learning is curiosity-driven, and humans often engage in curiosity-driven learning when sampling information from the environment. However, which mechanisms underlie curiosity is still unclear. Here, we let participants freely explore different unknown environments that contained learnable sequences of events with varying degrees of noise and volatility. A hierarchical reinforcement learning model captured how participants were learning in these different kinds of unknown environments, and it also tracked the errors they expected to make and the learning opportunities they were planning to seek. With this computational approach, we show that participants’ exploratory behavior is guided by learning progress and perceptual novelty. Moreover, we demonstrate an overall tendency of participants to avoid extreme forms of uncertainty. These findings elucidate the cognitive mechanisms that underlie curiosity-driven exploration of unknown environments. 
Implications of this novel way of quantifying curiosity within a reinforcement learning framework are discussed.","link":"/opendata/poli-et-al-2022/"},{"title":"Poli et al. (2023)","text":"Humans are driven by an intrinsic motivation to learn, but the developmental origins of curiosity-driven exploration remain unclear. We investigated the computational principles guiding 4-year-old children’s exploration during a touchscreen game (N=102, F=49, M=53). Children guessed the location of characters that were hiding behind a hedge following predictable (yet noisy) patterns. Children could freely switch characters, which allowed us to quantify when they decided to explore something different and what they chose to explore. Bayesian modelling of their responses revealed that children selected activities that were more novel and offered greater learning progress. Moreover, children’s interest in making learning progress correlated with better learning performance. These findings highlight the importance of novelty and learning progress in guiding children’s curiosity-driven exploration.","link":"/opendata/poli-et-al-2023/"},{"title":"Polti et al. (2018)","text":"Psychological models of time perception involve attention and memory: while attention typically regulates the flow of events, memory maintains timed events or intervals. The precise, and possibly distinct, roles of attention and memory in time perception remain debated. In this behavioral study, we tested 48 participants in a prospective duration estimation task while they fully attended to time or performed a working memory (WM) task. We report that paying attention to time lengthened perceived duration in the range of seconds to minutes, whereas diverting attention away from time shortened perceived duration. The overestimation due to attending to time did not scale with durations. To the contrary, increasing WM load systematically decreased subjective duration and this effect scaled with durations. 
Herein, we discuss the dissociation between attention and WM in timing and scalar variability from the perspective of Bayesian models of time estimations.","link":"/opendata/polti-et-al-2018/"},{"title":"Pool et al. (2021)","text":"It has been suggested that there are two distinct and parallel mechanisms for controlling instrumental behavior in mammals: goal-directed actions and habits. To gain an understanding of how these two systems interact to control behavior, it is essential to characterize the mechanisms by which the balance between these systems is influenced by experience. Studies in rodents have shown that the amount of training governs the relative expression of these two systems: Behavior is goal-directed following moderate training, but the more extensively an instrumental action is trained, the more it becomes habitual. It is less clear whether humans exhibit similar training effects on the expression of goal-directed and habitual behavior, as human studies have reported contradictory findings. To tackle these contradictory findings, we formed a consortium, where four laboratories undertook a preregistered experimental induction of habits by manipulating the amount of training. There was no statistical evidence for a main effect of the amount of training on the formation and expression of habits. However, exploratory analyses suggest a moderating effect of the affective component of stress on the impact of training over habit expression. Participants who were lower in affective stress appeared to be initially goal-directed, but became habitual with increased training, whereas participants who were high in affective stress were already habitual even after moderate training, thereby manifesting insensitivity to overtraining effects. 
Our findings highlight the importance of the role of moderating variables such as individual differences in stress and anxiety when studying the experimental induction of habits in humans.","link":"/opendata/pool-et-al-2021/"},{"title":"Potter et al. (2017)","text":"Reinforcement learning theory distinguishes “model-free” learning, which fosters reflexive repetition of previously rewarded actions, from “model-based” learning, which recruits a mental model of the environment to flexibly select goal-directed actions. Whereas model-free learning is evident across development, recruitment of model-based learning appears to increase with age. However, the cognitive processes underlying the development of model-based learning remain poorly characterized. Here, we examined whether age-related differences in cognitive processes underlying the construction and flexible recruitment of mental models predict developmental increases in model-based choice. In a cohort of participants aged 9–25, we examined whether the abilities to infer sequential regularities in the environment (“statistical learning”), maintain information in an active state (“working memory”) and integrate distant concepts to solve problems (“fluid reasoning”) predicted age-related improvements in model-based choice. We found that age-related improvements in statistical learning performance did not mediate the relationship between age and model-based choice. Ceiling performance on our working memory assay prevented examination of its contribution to model-based learning. However, age-related improvements in fluid reasoning statistically mediated the developmental increase in the recruitment of a model-based strategy. These findings suggest that gradual development of fluid reasoning may be a critical component process underlying the emergence of model-based learning.","link":"/opendata/potter-et-al-2017/"},{"title":"Prieto et al. 
(2023)","text":"In developmental psychology, one of the questions that garners the attention of clinicians and experimental psychologists is caregiver bias in reports of children’s psychological problems. Different models suggested by developmental psychology (e.g., the Depression-Distortion, Accuracy, and Combinatory models) have discussed the relation between the mother’s objective description of a child’s mental state and the degree of bias in her report. Recent evidence suggests that such bias could reflect a deficit in the caregiver’s ability to access, monitor and regulate their own psychological contents. The hypothesis under investigation is whether such ability in caregivers, known as metacognition, could modulate the quality of mental attributions in the children. To this end, we recruited 102 Chilean mother-child dyads, users of the public care system in the city of Santiago. In an experimental setting, we evaluated metacognition in the mother through a perceptual metacognition task in addition to different psychopathology questionnaires (SCL-90-R & STAI) and biological reactivity to stress (TSST). By questioning the mother, we also investigated her belief about psychological symptoms in the child (CBCL 1½ - 5). Our results indicate that maternal severity and quantity of symptoms predict maternal reports of problematic psychological states in their child only in caregivers who show low metacognitive efficiency. We conclude the article with a discussion of the link between metacognitive capacity and the ability to attribute mental states to third parties.","link":"/opendata/prieto-et-al-2023/"},{"title":"Pronk et al. (2022)","text":"Research deployed via the internet and administered via smartphones could have access to more diverse samples than lab-based research. Diverse samples could have relatively high variation in their traits and so yield relatively reliable measurements of individual differences in these traits. 
Several cognitive tasks that originated from the experimental research tradition have been reported to yield relatively low reliabilities (Hedge et al., 2018) in samples with restricted variance (students). This issue could potentially be addressed by smartphone-mediated administration in diverse samples. We formulate several criteria to determine whether a cognitive task is suitable for individual differences research on commodity smartphones: no very brief or precise stimulus timing, relative response times (RTs), a maximum of two response options, and a small number of graphical stimuli. The flanker task meets these criteria. We compared the reliability of individual differences in the flanker effect across samples and devices in a preregistered study. We found no evidence that a more diverse sample yields higher reliabilities. We also found no evidence that commodity smartphones yield lower reliabilities than commodity laptops. Hence, diverse samples might not improve reliability above student samples, but smartphones may well measure individual differences with cognitive tasks reliably. Exploratively, we examined different reliability coefficients, split-half reliabilities, and the development of reliability estimates as a function of task length.","link":"/opendata/pronk-et-al-2022/"},{"title":"Ptasczynski et al. (2022)","text":"Reinforcement learning algorithms have a long-standing success story in explaining the dynamics of instrumental conditioning in humans and other species. While normative reinforcement learning models are critically dependent on external feedback, recent findings in the field of perceptual learning point to a crucial role of internally generated reinforcement signals based on subjective confidence, when external feedback is not available. Here, we investigated the existence of such confidence-based learning signals in a key domain of reinforcement-based learning: instrumental conditioning. 
We conducted a value-based decision making experiment which included phases with and without external feedback and in which participants reported their confidence in addition to choices. Behaviorally, we found signatures of self-reinforcement in phases without feedback, reflected in an increase of subjective confidence and choice consistency. To clarify the mechanistic role of confidence in value-based learning, we compared a family of confidence-based learning models with more standard models predicting either no change in value estimates or a devaluation over time when no external reward is provided. We found that confidence-based models indeed outperformed these reference models, whereby the learning signal of the winning model was based on the prediction error between current confidence and a stimulus-unspecific average of previous confidence levels. Interestingly, individuals with more volatile reward-based value updates in the presence of feedback also showed more volatile confidence-based value updates when feedback was not available. Together, our results provide evidence that confidence-based learning signals affect instrumentally learned subjective values in the absence of external feedback.","link":"/opendata/ptasczynski-et-al-2022/"},{"title":"Pupillo et al. (2023)","text":"Predictive processing accounts propose that our brain constantly tries to match top-down internal representations with bottom-up incoming information from the environment. Predictions can lead to prediction errors of varying degrees depending on the extent to which the information encountered in the environment conforms with prior expectations. Theoretical and computational models assume that prediction errors have beneficial effects on learning and memory. However, while there is strong evidence on the effects of prediction error on learning, relatively less evidence is available regarding its effects on memory. 
Moreover, most of the studies available so far manipulated prediction error by using monetary rewards, whereas in everyday life learning does not always occur in the presence of explicit rewards. We used a task in which participants learned context/object-category associations of different strength based on the outcomes of their predictions. After learning these associations, participants were presented with trial-unique objects that could match or violate their predictions. Finally, participants were asked to complete a surprise recognition memory test. We used a reinforcement learning model to derive subject-specific trial-to-trial estimates of prediction error at encoding and link it to subsequent recognition memory. Results showed that model-derived prediction errors at encoding influenced subsequent memory as a function of the outcome of participants’ predictions (correct vs incorrect). When participants correctly predicted the object category, stronger prediction errors (as a consequence of weak expectations) led to enhanced memory. In contrast, when participants incorrectly predicted the object category, stronger prediction errors (as a consequence of strong expectations) led to impaired memory. These results reveal a computationally specific influence of prediction error on memory formation, highlighting the important moderating role of choice outcome that may be related to interactions between the hippocampal and striatal dopaminergic systems.","link":"/opendata/pupillo-et-al-2023/"},{"title":"Qin et al. (2023)","text":"A large amount of literature demonstrates that social behaviour can be triggered by environmental cues. A long-standing debate involves the question of whether such stimuli trigger behaviour directly (i.e. habits) or whether goals mediate these effects. 
As studies on automatic goal pursuit typically use real-world cues that are already associated with the behaviour and potentially the goal, it is impossible to make strong claims about the nature of the effects. In the present paper, we use a paradigm inspired by the Pavlovian-to-instrumental transfer (PIT) literature to examine how the environment can trigger goal-directed behaviour. Building on the essence of pro-self and pro-social motives in humans, two experiments explored the PIT effect when the outcomes were framed in terms of self- versus other-interest. Participants performed actions to earn money for themselves or a charity. Each outcome was linked to a different cue. The results showed that a cue predictive of self-interest outcomes facilitated responses instrumental in gaining the outcome, while such a specific PIT effect for other-interest outcomes only emerged when participants were free to donate the money. We briefly discuss these findings, reflecting on whether the PIT effect in our paradigm is indeed sensitive to the value of social goals.","link":"/opendata/qin-et-al-2023/"},{"title":"Raab & Hartley (2020)","text":"Multiple learning systems allow individuals to flexibly respond to opportunities and challenges present in the environment. An evolutionarily conserved Pavlovian learning mechanism couples valence and action, promoting a tendency to approach cues associated with reward and to inhibit action in the face of anticipated punishment. Although this default response system may be adaptive, these hard-wired reactions can hinder the ability to learn flexible instrumental actions in pursuit of a goal. Such constraints on behavioral flexibility have been studied extensively in adults. However, the extent to which these valence-specific response tendencies bias instrumental learning across development remains poorly characterized. 
Here, we show that while Pavlovian response biases constrain flexible action learning in children and adults, these biases are attenuated in adolescents. This adolescent-specific reduction in Pavlovian bias may promote unbiased exploration of approach and avoidance responses, facilitating the discovery of rewarding behavior in the many novel contexts that adolescents encounter.","link":"/opendata/raab-hartley-2020/"},{"title":"Raab et al. (2022)","text":"Accurate assessment of environmental controllability enables individuals to adaptively adjust their behavior: exploiting rewards when desirable outcomes are contingent upon their actions and minimizing costly deliberation when their actions are inconsequential. However, it remains unclear how estimation of environmental controllability changes from childhood to adulthood. Ninety participants (ages 8-25) completed a task that covertly alternated between controllable and uncontrollable conditions, requiring them to explore different actions to discover the current degree of environmental controllability. We found that while children were able to distinguish controllable and uncontrollable conditions, accuracy of controllability assessments improved with age. Computational modeling revealed that whereas younger participants’ controllability assessments relied on evidence gleaned through random exploration, older participants more effectively recruited their task structure knowledge to make highly informative interventions. Age-related improvements in working memory mediated this qualitative shift toward increased use of an inferential strategy. Collectively, these findings reveal an age-related shift in the cognitive processes engaged to assess environmental controllability. 
Improved detection of environmental controllability may foster increasingly adaptive behavior over development by revealing when actions can be leveraged for one’s benefit.","link":"/opendata/raab-et-al-2022/"},{"title":"Rac-Lubashevsky & Frank (2021)","text":"Adaptive cognitive-control involves a hierarchical cortico-striatal gating system that supports selective updating, maintenance, and retrieval of useful cognitive and motor information. Here, we developed a task that independently manipulates selective gating operations into working-memory (input gating), from working-memory (output gating), and of responses (motor gating) and tested the neural dynamics and computational principles that support them. Increases in gating demands, captured by gate switches, were expressed by distinct EEG correlates at each gating level that evolved dynamically in partially overlapping time windows. Further, categorical representations of specific maintained items and of motor responses could be decoded from EEG when the corresponding gate was switching, thereby linking gating operations to prioritization. Finally, gate switching at all levels was related to increases in the motor decision threshold as quantified by the drift diffusion model. Together these results support the notion that cognitive gating operations scaffold on top of mechanisms involved in motor gating.","link":"/opendata/rac-lubashevsky-frank-2021/"},{"title":"Radulescu et al. (2020)","text":"There are a number of well-accepted ways to measure risk sensitivity, with researchers often drawing conclusions about individual differences based on a single task. Even though long-standing observations suggest that how risky outcomes are presented changes people’s behavior, it is unclear whether risk sensitivity is a unitary trait that can be measured by any one of these instruments. 
To directly answer this question, we administered three tasks commonly used to elicit risk sensitivity within-subject to a large sample of participants on Amazon Mechanical Turk. Our findings revealed high individual variability in each measure, with little evidence of consistency among different tasks: many participants who were classified as risk-averse in one task were risk-seeking in another, and we observed no significant correlations between continuous measures of risk sensitivity as measured in each of the tasks. Our results cast doubt on the pervasive assumption that risk paradigms measure a single underlying trait, and suggest instead that behavior in risky situations is the result of heterogeneous, interacting, and possibly task-dependent cognitive mechanisms.","link":"/opendata/radulescu-et-al-2020/"},{"title":"Rahnev et al. (2020)","text":"Understanding how people rate their confidence is critical for the characterization of a wide range of perceptual, memory, motor and cognitive processes. To enable the continued exploration of these processes, we created a large database of confidence studies spanning a broad set of paradigms, participant populations and fields of study. The data from each study are structured in a common, easy-to-use format that can be easily imported and analysed using multiple software packages. Each dataset is accompanied by an explanation regarding the nature of the collected data. At the time of publication, the Confidence Database (which is available at https://osf.io/s46pr/) contained 145 datasets with data from more than 8,700 participants and almost 4 million trials. The database will remain open for new submissions indefinitely and is expected to continue to grow. 
Here we show the usefulness of this large collection of datasets in four different analyses that provide precise estimations of several foundational confidence-related effects.","link":"/opendata/rahnev-et-al-2020/"},{"title":"Rao & Hastie (2023)","text":"Beliefs like the Gambler’s Fallacy and the Hot Hand have interested cognitive scientists, economists, and philosophers for centuries. We propose that these judgment patterns arise from the observer’s mental models of the sequence-generating mechanism, moderated by the strength of belief in an a priori base rate. In six behavioral experiments, participants observed one of three mechanisms generating sequences of eight binary events: a random mechanical device, an intentional goal-directed actor, and a financial market. We systematically manipulated participants’ beliefs about the base rate probabilities at which different outcomes were generated by each mechanism. Participants judged 18 sequences of outcomes produced by a mechanism with either an unknown base rate, a specified distribution of three equiprobable base rates, or a precise, fixed base rate. Six target sequences ended in streaks of between two and seven identical outcomes. The most common predictions for subsequent events were best described as pragmatic belief updating, expressed as an increasingly strong expectation that a streak of identical signals would repeat as the length of that streak increased. The exception to this pattern was for sequences generated by a random mechanical device with a fixed base rate of .50. Under this specific condition, participants exhibited a bias toward reversal of streaks, and this bias was larger when participants were asked to make a dichotomous choice versus a numerical probability rating. 
We review alternate accounts for the anomalous judgments of sequences and conclude with our favored interpretation that is based on Rabin’s version of Tversky & Kahneman’s Law of Small Numbers.","link":"/opendata/rao-hastie-2023/"},{"title":"Rebholz et al. (2023)","text":"In sampling approaches to advice taking, participants can sequentially sample multiple pieces of advice before making a final judgment. To contribute to the understanding of active advice seeking, we develop and compare different strategies for information integration from external sources, including Bayesian belief updating. In a reanalysis of empirical data, we find that participants most frequently compromise between their initial beliefs and the distributions of multiple pieces of advice sampled from others. Moreover, across all participants, compromising predicts their final beliefs better than choosing one of the two sources of information. However, participants’ willingness to integrate external opinions is relatively higher for multiple pieces of reasonably distant as compared to close advice. Nevertheless, egocentrism is as pronounced as in the traditional paradigm where only a single piece of external evidence is provided. Crucially, there are large inter- and intra-individual differences in strategy selection for sequential advice taking. On the one hand, some participants choose their own or others’ judgments more often, and other participants are better described as compromisers between internal and external sources of information. On the other hand, virtually all participants apply different advice taking strategies for different items and trials. Our findings constitute initial evidence of the adaptive utilization of multiple, sequentially sampled external opinions.","link":"/opendata/rebholz-et-al-2023/"},{"title":"Reed et al. (2020)","text":"Paranoia is the belief that harm is intended by others. 
It may arise from selective pressures to infer and avoid social threats, particularly in ambiguous or changing circumstances. We propose that uncertainty may be sufficient to elicit learning differences in paranoid individuals, without social threat. We used reversal learning behavior and computational modeling to estimate belief updating across individuals with and without mental illness, online participants, and rats chronically exposed to methamphetamine, an elicitor of paranoia in humans. Paranoia is associated with a stronger prior on volatility, accompanied by elevated sensitivity to perceived changes in the task environment. Methamphetamine exposure in rats recapitulates this impaired uncertainty-driven belief updating and rigid anticipation of a volatile environment. Our work provides evidence of fundamental, domain-general learning differences in paranoid individuals. This paradigm enables further assessment of the interplay between uncertainty and belief-updating across individuals and species.","link":"/opendata/reed-et-al-2020/"},{"title":"Reiter et al. (2021)","text":"Adolescents are prone to social influence from peers, with implications for development, both adaptive and maladaptive. Here, using a computer-based paradigm, we replicate a cross-sectional effect of greater susceptibility to peer influence in a large dataset of adolescents 14 to 24 years old. Crucially, we extend this finding by adopting a longitudinal perspective, showing that a within-person susceptibility to social influence decreases over a 1.5-year follow-up period. Exploiting this longitudinal design, we show that susceptibility to social influences at baseline predicts an improvement in peer relations over the follow-up period. 
Using a Bayesian computational model, we demonstrate that in younger adolescents a greater tendency to adopt others’ preferences arises out of a higher uncertainty about their own preferences in the paradigmatic case of delay discounting (a phenomenon called preference uncertainty). This preference uncertainty decreases over time and, in turn, leads to a reduced susceptibility of one’s own behaviour to an influence from others. Neuro-developmentally, we show that a measure of myelination within medial prefrontal cortex, estimated at baseline, predicts a developmental decrease in preference uncertainty at follow-up. Thus, using computational and neural evidence, we reveal adaptive mechanisms underpinning susceptibility to social influence during adolescence.","link":"/opendata/reiter-et-al-2021/"},{"title":"Ren et al. (2021)","text":"The coefficient of variation (CV), also known as relative standard deviation, has been used to measure the constancy of the Weber fraction, a key signature of efficient neural coding in time perception. It has long been debated whether or not duration judgments follow Weber’s law, with arguments based on examinations of the CV. However, what has been largely ignored in this debate is that the observed CVs may be modulated by temporal context and decision uncertainty, thus questioning conclusions based on this measure. Here, we used a temporal reproduction paradigm to examine the variation of the CV with two types of temporal context: full-range mixed vs. sub-range blocked intervals, separately for intervals presented in the visual and auditory modalities. We found a strong contextual modulation of both interval-duration reproductions and the observed CVs. We then applied a two-stage Bayesian model to predict those variations. Without assuming a violation of the constancy of the Weber fraction, our model successfully predicted the central-tendency effect and the variation in the CV. 
Our findings and modeling results indicate that both the accuracy and precision of our timing behavior are highly dependent on the temporal context and decision uncertainty. Critically, they advise caution in using variations of the CV to reject the constancy of the Weber fraction of duration estimation.","link":"/opendata/ren-et-al-2021/"},{"title":"Rhoads et al. (2023)","text":"Most prosocial and antisocial behaviors affect ourselves and others simultaneously. To know whether to repeat choices that help or harm, we must learn from their outcomes. But the neurocomputational processes supporting such simultaneous learning remain poorly understood. In this pre-registered study, two independent samples (N=89) learned to make choices that simultaneously affected themselves and another person. Detailed model comparison showed that people integrate self- and other-relevant information into a single cached value per choice, but update this value asymmetrically based on different types of prediction errors related to the target (e.g., self, other) and valence (e.g., positive, negative). People who acquire more prosocial patterns are more sensitive to information about how their choices affect others. However, those with higher levels of subclinical psychopathic traits are relatively insensitive to unexpected outcomes for others and more sensitive for themselves. Model-based neuroimaging revealed distinct brain regions tracking prediction errors guided by the asymmetric value update. 
These results demonstrate that the way people distinctly encode self- and other-relevant outcomes resulting from a particular behavior guides how desirable the same behavior will be in the future, regardless of whether it is mutually beneficial or costly, instrumentally harmful, or altruistic.","link":"/opendata/rhoads-et-al-2023/"},{"title":"Riemer & Wolbers (2020)","text":"In time reproduction tasks, the reaction time of motor responses is intrinsically linked to the measure of perceptual timing. Decisions are based on a continuous comparison between elapsed time and a memory trace of the to-be-reproduced interval. Here, we investigate the possibility that negative reproduction errors can be explained by the tendency to prefer earlier over later response times, or whether the whole range of possible response times is shifted. In experiment 1, we directly compared point reproduction (participants indicate the exact time point of equality) and range reproduction (participants bracket an interval containing this time point). In experiment 2, participants indicated, in three separate tasks, the exact time point at which the reproduction phase was equal to the standard duration (point reproduction), the earliest (start reproduction), or the latest moment (stop reproduction) at which the exact time point of equality might have been reached. The results demonstrate that the bias towards earlier responses not only affects reproduction of the exact time point of equality. Rather, even if the decision threshold is changed in favor of late responses, they exhibit a continuous shift towards negative errors that increases with the length of the standard duration. 
The findings are discussed in the context of the hypothesis that systematic errors in time reproduction tasks reflect a dimension-unspecific tendency towards earlier responses caused by the psychophysical method rather than by a time-specific perceptual distortion.","link":"/opendata/riemer-wolbers-2020/"},{"title":"Riemer et al. (2022)","text":"Chronometric counting is a prevalent issue in the study of human time perception as it reduces the construct validity of tasks and can conceal existing timing deficits. Several methods have been proposed to prevent counting strategies, but the factors promoting those strategies in specific tasks are largely uninvestigated. Here, we modified a classical two-interval duration discrimination task in two aspects that could affect the tendency to apply counting strategies. We removed the pause between the two intervals and changed the task instructions: Participants decided whether a short event occurred in the first or in the second half of a reference duration. In Experiment 1, both classical and modified task versions were performed under timing conditions, in which participants were asked not to count, and counting conditions, in which counting was explicitly instructed. The task modifications led to (i) a general decrease in judgment precision, (ii) a shift of the point of subjective equality, and (iii) a counting-related increase in reaction times, suggesting enhanced cognitive effort of counting during the modified task version. Precision in the two task versions was not differently affected by instructed counting. Experiment 2 demonstrates that, in the absence of any counting-related instructions, participants are less likely to engage in spontaneous counting in the modified task version. 
These results enhance our understanding of the two-interval duration discrimination task and demonstrate that the modifications tested here, although they do not significantly reduce the effectiveness of instructed counting, can diminish the spontaneous tendency to adopt counting strategies.","link":"/opendata/riemer-et-al-2022/"},{"title":"Rischall et al. (2022)","text":"In natural settings, people decide not only when to request information, but also which attribute of a situation to inquire about. Little is known about how participants prioritize inquiries about task-relevant features. We show that, in a new task of information demand, participants inefficiently inquired about attributes that had high individual value but were less informative about a total payoff, and these inefficiencies persisted in instrumental conditions in which they entailed significantly lower rewards. Factors contributing to inefficient information demand included a form of anticipatory utility motivated by high value individual attributes rather than the total reward, and difficulty identifying the most informative observations. Across participants, more efficient inquiries were associated with personality traits, including lower extraversion and reward sensitivity scores and higher stress tolerance and need for cognition. The results highlight new affective, cognitive and personality factors involved in prioritizing sources of information.","link":"/opendata/rischall-et-al-2022/"},{"title":"Rischall et al. (2023)","text":"In natural settings, people evaluate complex multi-attribute situations and decide which attribute to request information about. Little is known about how people make this selection and, specifically, how they identify individual observations that best predict the value of a multi-attribute situation. 
Here we show that, in a simple task of information demand, participants inefficiently query attributes that have high individual value but are relatively uninformative about a total payoff. This inefficiency is robust in two instrumental conditions in which gathering less informative observations leads to significantly lower rewards. Across individuals, variations in the sensitivity to informativeness are associated with personality metrics, showing negative associations with extraversion and thrill seeking and positive associations with stress tolerance and need for cognition. Thus, people select informative queries using sub-optimal strategies that are associated with personality traits and influence consequential choices.","link":"/opendata/rischall-et-al-2023/"},{"title":"Rmus et al. (2023a)","text":"In reinforcement learning (RL) experiments, participants learn to make rewarding choices in response to different stimuli; RL models use outcomes to estimate stimulus–response values that change incrementally. RL models consider any response type indiscriminately, ranging from more concretely defined motor choices (pressing a key with the index finger), to more general choices that can be executed in a number of ways (selecting dinner at the restaurant). However, does the learning process vary as a function of the choice type? In Experiment 1, we show that it does: Participants were slower and less accurate in learning correct choices of a general format compared with learning more concrete motor actions. Using computational modeling, we show that two mechanisms contribute to this. First, there was evidence of irrelevant credit assignment: The values of motor actions interfered with the values of other choice dimensions, resulting in more incorrect choices when the correct response was not defined by a single motor action; second, information integration for relevant general choices was slower. 
In Experiment 2, we replicated and further extended the findings from Experiment 1 by showing that slowed learning was attributable to weaker working memory use, rather than slowed RL. In both experiments, we ruled out the explanation that the difference in performance between two condition types was driven by difficulty/different levels of complexity. We conclude that defining a more abstract choice space used by multiple learning systems for credit assignment recruits executive resources, limiting how much such processes then contribute to fast learning.","link":"/opendata/rmus-et-al-2023a/"},{"title":"Rmus et al. (2023b)","text":"The ability to use past experience to effectively guide decision-making declines in older adulthood. Such declines have been theorized to emerge from either impairments of striatal reinforcement learning systems (RL) or impairments of recurrent networks in prefrontal and parietal cortex that support working memory (WM). Distinguishing between these hypotheses has been challenging because either RL or WM could be used to facilitate successful decision-making in typical laboratory tasks. Here we investigated the neurocomputational correlates of age-related decision-making deficits using an RL-WM task to disentangle these mechanisms, a computational model to quantify them, and magnetic resonance spectroscopy to link them to their molecular bases. Our results reveal that task performance is worse in older age, in a manner best explained by working memory deficits, as might be expected if cortical recurrent networks were unable to sustain persistent activity across multiple trials. Consistent with this, we show that older adults had lower levels of prefrontal glutamate, the excitatory neurotransmitter thought to support persistent activity, compared to younger adults. Individuals with the lowest prefrontal glutamate levels displayed the greatest impairments in working memory after controlling for other anatomical and metabolic factors. 
Together, our results suggest that lower levels of prefrontal glutamate may contribute to failures of working memory systems and impaired decision-making in older adulthood.","link":"/opendata/rmus-et-al-2023b/"},{"title":"Roberts et al. (2022)","text":"Decades of research have established the ubiquity and importance of choice biases, such as the framing effect, yet why these seemingly irrational behaviors occur remains unknown. A prominent dual-system account maintains that alternate framings bias choices because of the unchecked influence of quick, affective processes, and findings that time pressure increases the framing effect have provided compelling support. Here, we present a novel alternative account of magnified framing biases under time pressure that emphasizes shifts in early visual attention and strategic adaptations in the decision-making process. In a preregistered direct replication (N = 40 adult undergraduates), we found that time constraints produced strong shifts in visual attention toward reward-predictive cues that, when combined with truncated information search, amplified the framing effect. Our results suggest that an attention-guided, strategic information-sampling process may be sufficient to explain prior results and raise challenges for using time pressure to support some dual-system accounts.","link":"/opendata/roberts-et-al-2022/"},{"title":"Robinson & Brady (2023)","text":"Ensemble perception is a process by which we summarize complex scenes. Despite the importance of ensemble perception to everyday cognition, there are few computational models that provide a formal account of this process. Here we develop and test a model in which ensemble representations reflect the global sum of activation signals across all individual items. We leverage this set of minimal assumptions to formally connect a model of memory for individual items to ensembles. We compare our ensemble model against a set of alternative models in five experiments. 
Our approach uses performance on a visual memory task for individual items to generate zero-free-parameter predictions of interindividual and intraindividual differences in performance on an ensemble continuous-report task. Our top-down modelling approach formally unifies models of memory for individual items and ensembles and opens a venue for building and comparing models of distinct memory processes and representations.","link":"/opendata/robinson-brady-2023/"},{"title":"Robison & Nguyen (2023)","text":"Across four experiments, we manipulated features of a simple reaction time (RT) task to examine the effects of such features on vigilance. In Experiment 1, we created a simple reaction time “game” that pitted participants against two computerized avatars. In one condition, participants were awarded points, while the other condition did not receive points. Performance in the two conditions did not differ, but both conditions showed faster RTs and shallower time-on-task performance decrements compared to a standard psychomotor vigilance task. In Experiment 2, we removed the competitive feature but retained the point system. In this case, participants without a point system showed a steeper performance decrement than those with a point system. Experiments 3 and 4 replicated these effects and corroborated their findings with pupillometry. Participants in both conditions of Experiment 3 (competitive task) and the points condition of Experiment 4 showed larger task-evoked pupillary responses than participants in the no-points condition of Experiment 4. 
These findings challenge the notion that time-on-task performance decrements are caused by resource depletion (Smit et al., 2004), and are better explained by motivational control (Hockey, 2011) or cost-benefit theories (Boksem & Tops, 2008; Kurzban et al., 2013) of mental effort and cognitive fatigue.","link":"/opendata/robison-nguyen-2023/"},{"title":"Robison & Campbell (2023)","text":"There has been debate regarding the correlation between baseline/resting state measures of pupil diameter and cognitive abilities such as working memory capacity and fluid intelligence. A positive correlation between baseline pupil diameter and cognitive ability has been cited as evidence for a role of the locus coeruleus-norepinephrine (LC-NE) system and its functional connection with cortical networks as a reason for individual differences in fluid intelligence (Tsukahara & Engle, Proceedings of the National Academy of Sciences, 118(46), e2110630118, 2021a). Several recent attempts to replicate this correlation have failed. The current studies make another attempt and find substantial evidence against a positive correlation between pupil diameter and intelligence. Given the data from the current studies in combination with other recent failures to replicate, we conclude that individual differences in baseline pupil diameter should not be used as evidence for a role of the LC-NE system in goal-directed cognitive activity.","link":"/opendata/robison-campbell-2023/"},{"title":"Rodman et al. (2023)","text":"Peer relationships and social belonging are particularly important during adolescence. Using a willingness-to-work paradigm to quantify incentive motivation, we examined whether evaluative information holds unique value for adolescents. Participants (N = 102; 12-23 years old) rated peers, predicted how peers rated them, and exerted physical effort to view each peer’s rating. 
We measured grip force, speed, and opt-out behavior to examine the motivational value of peer feedback, relative to money in a control condition, and to assess how peer desirability and participants’ expectations modulated motivated effort across age. Overall, when compared with adolescents, adults were relatively less motivated for feedback than money. Whereas adults exerted less force and speed for feedback when expecting rejection, adolescents exerted greater force and speed when expecting to be more strongly liked or disliked. These findings suggest that the transition into adulthood is accompanied by a self-protective focus, whereas adolescents are motivated to consume highly informative feedback, even if negative.","link":"/opendata/rodman-et-al-2023/"},{"title":"Rojek-Giffin et al. (2023)","text":"Globalizing economies and long-distance trade rely on individuals from different cultural groups to negotiate agreement on what to give and take. In such settings, individuals often lack insight into what interaction partners deem fair and appropriate, potentially seeding misunderstandings, frustration, and conflict. Here, we examine how individuals decipher distinct rules of engagement and adapt their behavior to reach agreements with partners from other cultural groups. Modeling individuals as Bayesian learners with inequality aversion reveals that individuals, in repeated ultimatum bargaining with responders sampled from different groups, can be more generous than needed. While this allows them to reach agreements, it also gives rise to biased beliefs about what is required to reach agreement with members from distinct groups. 
Preregistered behavioral (N = 420) and neuroimaging experiments (N = 49) support model predictions: Seeking equitable agreements can lead to overly generous behavior toward partners from different groups alongside incorrect beliefs about prevailing norms of what is appropriate in groups and cultures other than one’s own.","link":"/opendata/rojek-giffin-et-al-2023/"},{"title":"Rollwage et al. (2020)","text":"A prominent source of polarised and entrenched beliefs is confirmation bias, where evidence against one’s position is selectively disregarded. This effect is most starkly evident when opposing parties are highly confident in their decisions. Here we combine human magnetoencephalography (MEG) with behavioural and neural modelling to identify alterations in post-decisional processing that contribute to the phenomenon of confirmation bias. We show that holding high confidence in a decision leads to a striking modulation of post-decision neural processing, such that integration of confirmatory evidence is amplified while disconfirmatory evidence processing is abolished. We conclude that confidence shapes a selective neural gating for choice-consistent information, reducing the likelihood of changes of mind on the basis of new information. A central role for confidence in shaping the fidelity of evidence accumulation indicates that metacognitive interventions may help ameliorate this pervasive cognitive bias.","link":"/opendata/rollwage-et-al-2020/"},{"title":"Romero-Verdugo et al. (2023)","text":"In our connected era, we spend significant time and effort satisfying our curiosity. Often, we choose which information we seek, but sometimes the selection is made for us. We hypothesized that humans exhibit enhanced curiosity in the context of choice. We designed a task in which healthy participants saw two lotteries on each trial. On some trials, participants chose which lottery to play. On other trials, the lottery was selected for them. 
Participants then indicated their curiosity about the outcome of the to-be-played lottery via self-report ratings (Experiment 1, N = 34) or willingness-to-wait decisions (Experiment 2, N = 34). We found that participants exhibited higher curiosity ratings and greater willingness to wait for the outcome of lotteries they had chosen than for lotteries that had been selected for them (controlling for initial preference). This demonstrates that choice boosts curiosity, which may have implications for boosting learning, memory, and motivation.","link":"/opendata/romero-verdugo-et-al-2023/"},{"title":"Roseboom (2019)","text":"Recent sensory history affects subsequent experience. Behavioral results have demonstrated this effect in two forms: repeated exposure to the same sensory input produces negative aftereffects wherein sensory stimuli like those previously experienced are judged as less like the exposed stimulation, while singular exposures can produce positive aftereffects wherein judgments are more like previously experienced stimulation. For timing perception, there is controversy regarding the influence of recent exposure: both singular and repeated exposure produce apparently negative aftereffects, often referred to as temporal recalibration and rapid temporal recalibration, respectively. While negative aftereffects have been found following repeated exposure for all timing tasks, following a single exposure, they have only been demonstrated using synchrony judgments (SJs). Here, we examine the influence of a single presentation (serial dependence for timing) for standard timing tasks: SJ, temporal order judgments, and magnitude estimation judgments. We found that serial dependence produced apparently negative aftereffects in SJ, but positive aftereffects in temporal order judgment and magnitude estimation judgment. 
We propose that these findings, and those following repeated exposure, can be reconciled within a framework wherein negative aftereffects occur at sensory layers, consistent with classical depictions of sensory adaptation, and Bayesian-like positive aftereffects operate across different, higher, decision levels. These findings are consistent with the aftereffects known from other perceptual dimensions and provide a general framework for interpreting positive (serial dependence) and negative (sensory adaptation) aftereffects across different tasks.","link":"/opendata/roseboom-2019/"},{"title":"Rosenbaum et al. (2022)","text":"As individuals learn through trial and error, some are more influenced by good outcomes, while others weight bad outcomes more heavily. Such valence biases may also influence memory for past experiences. Here, we examined whether valence asymmetries in reinforcement learning change across adolescence, and whether individual learning asymmetries bias the content of subsequent memory. Participants ages 8-27 learned the values of point machines, after which their memory for trial-unique images presented with choice outcomes was assessed. Relative to children and adults, adolescents overweighted worse-than-expected outcomes during learning. Individuals’ valence biases modulated incidental memory, such that those who prioritized worse- (or better-) than-expected outcomes during learning were also more likely to remember images paired with these outcomes, an effect reproduced in an independent dataset. Collectively, these results highlight age-related changes in the computation of subjective value and demonstrate that a valence-asymmetric valuation process influences how information is prioritized in episodic memory.","link":"/opendata/rosenbaum-et-al-2022/"},{"title":"Rossi-Goldthorpe et al. (2021)","text":"Self-deception, paranoia, and overconfidence involve misbeliefs about the self, others, and world. They are often considered mistaken. 
Here we explore whether they might be adaptive, and further, whether they might be explicable in Bayesian terms. We administered a difficult perceptual judgment task with and without social influence (suggestions from a cooperating or competing partner). Crucially, the social influence was uninformative. We found that participants heeded the suggestions most under the most uncertain conditions and that they did so with high confidence, particularly if they were more paranoid. Model fitting to participant behavior revealed that their prior beliefs changed depending on whether the partner was a collaborator or competitor; however, those beliefs did not differ as a function of paranoia. Instead, paranoia, self-deception, and overconfidence were associated with participants’ perceived instability of their own performance. These data are consistent with the idea that self-deception, paranoia, and overconfidence flourish under uncertainty, and have their roots in low self-esteem, rather than excessive social concern. The model suggests that spurious beliefs can have value: self-deception is irrational yet can facilitate optimal behavior. This occurs even at the expense of monetary rewards, perhaps explaining why self-deception and paranoia contribute to costly decisions which can spark financial crashes and devastating wars.","link":"/opendata/rossi-goldthorpe-et-al-2021/"},{"title":"Rouhani et al. (2018)","text":"Reward-prediction errors track the extent to which rewards deviate from expectations, and aid in learning. How do such errors in prediction interact with memory for the rewarding episode? Existing findings point to both cooperative and competitive interactions between learning and memory mechanisms. Here, we investigated whether learning about rewards in a high-risk context, with frequent, large prediction errors, would give rise to higher fidelity memory traces for rewarding events than learning in a low-risk context. 
Experiment 1 showed that recognition was better for items associated with larger absolute prediction errors during reward learning. Larger prediction errors also led to higher rates of learning about rewards. Interestingly, we did not find a relationship between learning rate for reward and recognition-memory accuracy for items, suggesting that these two effects of prediction errors were caused by separate underlying mechanisms. In Experiment 2, we replicated these results with a longer task that posed stronger memory demands and allowed for more learning. We also showed improved source and sequence memory for items within the high-risk context. In Experiment 3, we controlled for the difficulty of reward learning in the risk environments, again replicating the previous results. Moreover, this control revealed that the high-risk context enhanced item-recognition memory beyond the effect of prediction errors. In summary, our results show that prediction errors boost both episodic item memory and incremental reward learning, but the two effects are likely mediated by distinct underlying systems.","link":"/opendata/rouhani-et-al-2018/"},{"title":"Rouhani & Niv (2021)","text":"Memory helps guide behavior, but which experiences from the past are prioritized? Classic models of learning posit that events associated with unpredictable outcomes as well as, paradoxically, predictable outcomes, deploy more attention and learning for those events. Here, we test reinforcement learning and subsequent memory for those events, and treat signed and unsigned reward prediction errors (RPEs), experienced at the reward-predictive cue or reward outcome, as drivers of these two seemingly contradictory signals. By fitting reinforcement learning models to behavior, we find that both RPEs contribute to learning by modulating a dynamically changing learning rate. 
We further characterize the effects of these RPE signals on memory and show that both signed and unsigned RPEs enhance memory, in line with midbrain dopamine and locus-coeruleus modulation of hippocampal plasticity, thereby reconciling separate findings in the literature.","link":"/opendata/rouhani-niv-2021/"},{"title":"Ruggeri et al. (2020)","text":"Prospect theory is among the most influential frameworks in behavioural science, specifically in research on decision-making under risk. Kahneman and Tversky’s 1979 study tested financial choices under risk, concluding that such judgements deviate significantly from the assumptions of expected utility theory, which had remarkable impacts on science, policy and industry. Though substantial evidence supports prospect theory, many presumed canonical theories have drawn scrutiny for recent replication failures. In response, we directly test the original methods in a multinational study (n = 4,098 participants, 19 countries, 13 languages), adjusting only for current and local currencies while requiring all participants to respond to all items. The results replicated for 94% of items, with some attenuation. Twelve of 13 theoretical contrasts replicated, with 100% replication in some countries. Heterogeneity between countries and intra-individual variation highlight meaningful avenues for future theorizing and applications. We conclude that the empirical foundations for prospect theory replicate beyond any reasonable thresholds.","link":"/opendata/ruggeri-et-al-2020/"},{"title":"Ruggeri et al. (2022)","text":"Economic inequality is associated with preferences for smaller, immediate gains over larger, delayed ones. Such temporal discounting may feed into rising global inequality, yet it is unclear whether it is a function of choice preferences or norms, or rather the absence of sufficient resources for immediate needs. It is also not clear whether these reflect true differences in choice patterns between income groups. 
We tested temporal discounting and five intertemporal choice anomalies using local currencies and value standards in 61 countries (N = 13,629). Across a diverse sample, we found consistent, robust rates of choice anomalies. Lower-income groups were not significantly different, but economic inequality and broader financial circumstances were clearly correlated with population choice patterns.","link":"/opendata/ruggeri-et-al-2022/"},{"title":"Rutledge et al. (2021)","text":"The subjective well-being or happiness of individuals is an important metric for societies. Although happiness is influenced by life circumstances and population demographics such as wealth, we know little about how the cumulative influence of daily life events is aggregated into subjective feelings. Using computational modeling, we show that emotional reactivity in the form of momentary happiness in response to outcomes of a probabilistic reward task is explained not by current task earnings, but by the combined influence of recent reward expectations and prediction errors arising from those expectations. The robustness of this account was evident in a large-scale replication involving 18,420 participants. Using functional MRI, we show that the very same influences account for task-dependent striatal activity in a manner akin to the influences underpinning changes in happiness.","link":"/opendata/rutledge-et-al-2021/"},{"title":"Rybicki et al. (2022)","text":"Some theories of human cultural evolution posit that humans have social-specific learning mechanisms that are adaptive specialisations moulded by natural selection to cope with the pressures of group living. However, the existence of neurochemical pathways that are specialised for learning from social information and individual experience is widely debated. 
Cognitive neuroscientific studies present mixed evidence for social-specific learning mechanisms: some studies find dissociable neural correlates for social and individual learning, whereas others find the same brain areas and dopamine-mediated computations involved in both. Here, we demonstrate that, like individual learning, social learning is modulated by the dopamine D2 receptor antagonist haloperidol when social information is the primary learning source, but not when it comprises a secondary, additional element. Two groups (total N = 43) completed a decision-making task which required primary learning, from own experience, and secondary learning from an additional source. For one group, the primary source was social, and secondary was individual; for the other group this was reversed. Haloperidol affected primary learning irrespective of social/individual nature, with no effect on learning from the secondary source. Thus, we illustrate that dopaminergic mechanisms underpinning learning can be dissociated along a primary-secondary but not a social-individual axis. These results resolve conflict in the literature and support an expanding field showing that, rather than being specialised for particular inputs, neurochemical pathways in the human brain can process both social and non-social cues and arbitrate between the two depending upon which cue is primarily relevant for the task at hand.","link":"/opendata/rybicki-et-al-2022/"},{"title":"Sadeghi et al. (2023)","text":"The role of the heart in the experience of time has been long theorized but empirical evidence is scarce. Here, we examined the interaction between fine-grained cardiac dynamics and the momentary experience of subsecond intervals. Participants performed a temporal bisection task for brief tones (80-188 ms) synchronized with the heart. We developed a cardiac Drift-Diffusion Model (cDDM) that embedded contemporaneous heart rate dynamics into the temporal decision model. 
Results revealed the existence of temporal wrinkles (dilation or contraction of short intervals) in synchrony with cardiac dynamics. A lower prestimulus heart rate was associated with an initial bias in encoding the millisecond-level stimulus duration as longer, consistent with facilitation of sensory intake. Concurrently, a higher prestimulus heart rate aided more consistent and faster temporal judgments through more efficient evidence accumulation. Additionally, a higher speed of poststimulus cardiac deceleration, a bodily marker of attention, was associated with a greater accumulation of sensory temporal evidence in the cDDM. These findings suggest a unique role of cardiac dynamics in the momentary experience of time. Our cDDM framework opens a new methodological avenue for investigating the role of the heart in time perception and perceptual judgment.","link":"/opendata/sadeghi-et-al-2023/"},{"title":"Sadeghiyeh et al. (2020)","text":"The explore-exploit dilemma describes the trade off that occurs any time we must choose between exploring unknown options and exploiting options we know well. Implicit in this trade off is how we value future rewards: exploiting is usually better in the short term, but in the longer term the benefits of exploration can be huge. Thus, in theory there should be a tight connection between how much people value future rewards, i.e. how much they discount future rewards relative to immediate rewards, and how likely they are to explore, with less ‘temporal discounting’ associated with more exploration. By measuring individual differences in temporal discounting and correlating them with explore-exploit behavior, we tested whether this theoretical prediction holds in practice. 
We used the 27-item Delay-Discounting Questionnaire to estimate temporal discounting and the Horizon Task to quantify two strategies of explore-exploit behavior: directed exploration, where information drives exploration by choice, and random exploration, where behavioral variability drives exploration by chance. We find a clear correlation between temporal discounting and directed exploration, with more temporal discounting leading to less directed exploration. Conversely, we find no relationship between temporal discounting and random exploration. Unexpectedly, we find that the relationship with directed exploration appears to be driven by a correlation between temporal discounting and uncertainty seeking at short time horizons, rather than information seeking at long horizons. Taken together, our results suggest a nuanced relationship between temporal discounting and explore-exploit behavior that may be mediated by multiple factors.","link":"/opendata/sadeghiyeh-et-al-2020/"},{"title":"Sadibolova et al. (2022)","text":"The perception of time is characterized by pronounced variability across individuals, with implications for a diverse array of psychological functions. The neurocognitive sources of this variability are poorly understood, but accumulating evidence suggests a role for inter-individual differences in striatal dopamine levels. Here we present a pre-registered study that tested the predictions that spontaneous eyeblink rates, which provide a proxy measure of striatal dopamine availability, would be associated with aberrant interval timing (lower temporal precision or overestimation bias). Neurotypical adults (N = 69) underwent resting state eye tracking and completed visual psychophysical interval timing and control tasks. Elevated spontaneous eyeblink rates were associated with poorer temporal precision but not with inter-individual differences in perceived duration or performance on the control task. 
These results signify a role for striatal dopamine in variability in human time perception and can help explain deficient temporal precision in psychiatric populations characterized by elevated dopamine levels.","link":"/opendata/sadibolova-et-al-2022/"},{"title":"Sadibolova et al. (2023)","text":"One of the ongoing controversies in interval timing concerns whether human time perception relies on multiple distinct mechanisms. This debate centres around whether subsecond and suprasecond timing may be attributed to a single semi-uniform mechanism or separate and interacting mechanisms. Whereas past studies offer valuable insights, this study overcomes previous limitations by adopting multiple convergent statistical approaches in a design with strong statistical power. We conducted two online experiments involving participants reproducing temporal intervals ranging from 400 to 2400ms (Experiment 1; N=302) and 1000 to 2000ms (Experiment 2; N=302). We contrasted the application of exploratory factor analysis and structural equation modelling to differentiate distinct latent structures underlying duration reproduction patterns. Additionally, we compared the model outcomes with results from changepoint analysis models fitted to individual participants’ data. In both experiments, these analyses yielded evidence for a two-factor model comprising a general timing factor spanning the full interval range and a second factor capturing the regression to the mean of presented stimulus intervals (central tendency bias). We observed a low proportion of detected changepoints, further supporting the limited evidence for a discontinuity between the distinct underlying mechanisms, while also finding that the changepoint detection was predicted by factor scores. These results highlight the role of central tendency effects in investigating changepoints and the continuity of timing systems. 
Our work contributes to integrating factor analytic and computational modelling approaches in the study of time perception and has implications for the measurement and interpretation of interval timing performance in a range of contexts.","link":"/opendata/sadibolova-et-al-2023/"},{"title":"Saeedpour et al. (2023)","text":"Pavlovian influences impair instrumental learning. It is easier to learn to approach reward-predictive signals and avoid punishment-predictive cues than their contrary. Whether the interindividual variability in this Pavlovian influence is consistent across time has been examined by a number of recent studies and met with mixed results. Here we introduce an open-source, web-based instance of a well-established Go-NoGo paradigm for measuring Pavlovian influence. We closely replicated the previous laboratory-based results. Moreover, the interindividual differences in Pavlovian influence were consistent across a two-week time window at the level of (i) raw measures of learning (i.e. performance accuracy), (ii) linear, descriptive estimates of Pavlovian bias (test-retest reliability: 0.40), and (iii) parameters obtained from reinforcement learning model fitting and model selection (test-retest reliability: 0.25). Nonetheless, the correlations reported here are still lower than the standards (i.e. 0.7) employed in psychometrics and self-reported measures. Our results provide support for trusting Pavlovian bias as a relatively stable individual characteristic and for using its measure in the computational understanding of human mental health.","link":"/opendata/saeedpour-et-al-2023/"},{"title":"Safra et al. (2019)","text":"Depression is characterized by a marked decrease in social interactions and blunted sensitivity to rewards. Surprisingly, despite the importance of social deficits in depression, non-social aspects have been disproportionally investigated. 
As a consequence, the cognitive mechanisms underlying atypical decision-making in social contexts in depression are poorly understood. In the present study, we investigate whether deficits in reward processing interact with the social context and how this interaction is affected by self-reported depression and anxiety symptoms in the general population. Two cohorts of subjects (discovery and replication sample: N = 50 each) took part in an experiment involving reward learning in contexts with different levels of social information (absent, partial and complete). Behavioral analyses revealed a specific detrimental effect of depressive symptoms (but not anxiety) on behavioral performance in the presence of social information, i.e. when participants were informed about the choices of another player. Model-based analyses further characterized the computational nature of this deficit as a negative audience effect, rather than a deficit in the way others’ choices and rewards are integrated in decision making. To conclude, our results shed light on the cognitive and computational mechanisms underlying the interaction between social cognition, reward learning and decision-making in depressive disorders.","link":"/opendata/safra-et-al-2019/"},{"title":"Sakamoto & Miyoshi (2023)","text":"Human behavior is flexibly regulated in accordance with specific goals of cognitive tasks. One notable instance is the goal-directed modulation of human metacognitive behavior, where logically equivalent decision-making problems can yield different patterns of introspective confidence depending on the frame in which they are presented. While this observation highlights the important heuristic nature of metacognitive monitoring, the computational mechanisms of this phenomenon remain elusive. Using a two-alternative dot-number discrimination task, we aimed to investigate the underlying computational rules of the confidence framing effect. 
Participants made decisions under “choose more” or “choose less” frames and reported their confidence. We replicated the previous observation of the confidence framing effect, demonstrating distinctive confidence rating patterns between the decision frames. Furthermore, formal model comparisons revealed that the confidence framing effect can be attributed to a simple heuristic that assigns greater weight to the evidence for the chosen alternative compared to that for the unchosen alternative. This computation appears to be based on internal decisional evidence constituted under specific task demands rather than physical stimulus intensity itself, a view justified in terms of ecological rationality. These results shed fresh light on the adaptive nature of human decision-making and metacognitive monitoring.","link":"/opendata/sakamoto-miyoshi-2023/"},{"title":"Salomon et al. (2022)","text":"Although research about preference formation and modification has classically focused on the role of external reinforcements, there is also increasing evidence for a key role of non-externally reinforced cognitive mechanisms such as attention and memory in preference modification. In a novel paradigm for behavioral change called the Cue-Approach training (CAT) task, preferences are modified via the mere association of images of stimuli with a neutral cue and a rapid motor response, without external reinforcements. The procedure’s efficacy has been replicated across dozens of studies, and the net behavioral change was linked with increased activity in a frontal value-based decision-making brain region during the post-training probe choice phase. However, the cognitive mechanisms during the training phase itself have not been elucidated. Based on the structure of the task alongside recent findings of the involvement of striatal and supplementary motor regions during training, we hypothesized that a motor-related learning process could be a prospective candidate. 
To test this hypothesis, we developed a computational model of the motor response pattern during training in a large corpus of data collected from 864 participants across 29 different CAT experiments. Using Bayesian modeling of the meta-analysis data, we developed a computational marker for individualized learning in the training task, which was found to be associated with the preference modification effect in the subsequent probe task, both at the participant-level as well as in the more granular individual-item level. Following the conclusions of the meta-analysis, in two additional experiments (a pilot study and a larger preregistered replication study) we aimed to affect learning efficacy by manipulating the training procedure difficulty. As hypothesized and preregistered, training difficulty was captured by the new computational marker identified on the previously collected samples. Manipulation of the training difficulty also resulted in a differential preference modification effect, suggesting a causal relationship between the motor learning captured by the computational model and the post-training behavioral change effect. Our work highlights a novel non-reinforced preference modification pathway, suggesting that attention and motor learning are linked to preference formation, and provides a computational framework to identify individualized training markers which could predict future behavioral change effects.","link":"/opendata/salomon-et-al-2022/"},{"title":"Sax et al. (2023)","text":"Depression is believed to hinder one’s ability to reason about oneself (metacognition). This impairment can arise from dysfunctional biases and/or learning processes. However, the relationship between depression, biases and learning in metacognition is not known. Here we combined multi-trial behavioural experiments with computational modelling to explicitly test whether depression impacts biases and/or learning in a metacognitive task. 
First, using a perceptual estimation task with fixed feedback valence (N=131), we show that depressive symptoms predict negative metacognitive biases but do not impact learning. Next, we tested the generality of our results in a more realistic perceptual estimation task where we varied the valence of the feedback. Using a Rescorla-Wagner model of confidence fitted to behavioural data (N=74), we show that, in this task too, depressive symptoms predict negative metacognitive biases but do not impair learning. Overall, our study suggests that depression impacts metacognitive states but not one’s ability to learn, while offering a behavioural-computational framework for the study of metacognition in depression.","link":"/opendata/sax-et-al-2023/"},{"title":"Schaaf et al. (2023)","text":"Recently it has been suggested that parameter estimates of computational models can be used to understand individual differences at the process level. One area of research in which this approach, called computational phenotyping, took hold is computational psychiatry, but it is also used to understand differences in age and personality. One requirement for successful computational phenotyping is that behavior and parameters are stable over time. Surprisingly, the test-retest reliability of behavior and model parameters remains unknown for most experimental tasks and models. The present study seeks to close this gap by investigating the test-retest reliability of canonical reinforcement learning models in the context of two often-used learning paradigms: a two-armed bandit and a reversal learning task. We tested independent cohorts for the two tasks (N=142 and N=154) via an online testing platform with a between-test interval of five weeks. Whereas reliability was high for personality and cognitive measures, it was generally poor for the parameter estimates of the reinforcement learning models.
Given that simulations indicated that our procedures could detect high test-retest reliability, this suggests that a significant proportion of the variability must be ascribed to the participants themselves. In support of that hypothesis, we show that mood (stress and happiness) can partly explain within-subject variability. Taken together, these results are critical for current practices in computational phenotyping and suggest that individual variability should be taken into account in the future development of the field.","link":"/opendata/schaaf-et-al-2023/"},{"title":"Schöbel et al. (2016)","text":"People often make decisions in a social environment. The present work examines social influence on people’s decisions in a sequential decision-making situation. In the first experimental study, we implemented an information cascade paradigm, illustrating that people infer information from decisions of others and use this information to make their own decisions. We followed a cognitive modeling approach to elicit the weight people give to social as compared to private individual information. The proposed social influence model shows that participants overweight their own private information relative to social information, contrary to the normative Bayesian account. In our second study, we embedded the abstract decision problem of Study 1 in a medical decision-making problem. We examined whether in a medical situation people also take others’ authority into account in addition to the information that their decisions convey. The social influence model illustrates that people weight social information differentially according to the authority of other decision makers. The influence of authority was strongest when an authority’s decision contrasted with private information. Both studies illustrate how the social environment provides sources of information that people integrate differently for their decisions.","link":"/opendata/schobel-et-al-2016/"},{"title":"Scholl et al.
(2022)","text":"Real-life decision-making often comprises sequences of successive decisions about whether to take opportunities as they are encountered or keep searching for better ones instead. We investigated individual differences related to such sequential decision-making and link them especially to apathy and compulsivity in a large online sample (discovery sample: n = 449 and confirmation sample: n = 756). Our cognitive model revealed distinct changes in the way participants evaluated their environments and planned their own future behaviour. Apathy was linked to decision inertia, i.e., automatically persisting with a sequence of searches for longer than appropriate given the value of searching. Thus, despite being less motivated, individuals high in apathy did not avoid the effort associated with longer searches. In contrast, compulsivity was linked to self-reported insensitivity to the cost of continuing with a sequence of searches. The objective measures of behavioural cost insensitivity were clearly linked to compulsivity only in the discovery sample. While the confirmation sample showed a similar effect, it did not reach significance. Nevertheless, in both samples, participants reported awareness of such bias (experienced as overchasing). In addition, this awareness made them report preemptively avoiding situations related to the bias. However, we found no evidence of them actually preempting more in the task, which might mean a misalignment of their metacognitive beliefs or that our behavioural measures were incomplete. In summary, individual variation in distinct, fundamental aspects of sequential decision-making can be linked to variation in 2 measures of behavioural traits associated with psychological illness in the normal population.","link":"/opendata/scholl-et-al-2022/"},{"title":"Schubert et al.
(2023)","text":"The rise of large-scale collaborative panel studies in educational psychology and cognitive neuroscience has generated a need for fast, reliable, and valid assessments of cognitive abilities. In these studies, a detailed characterization of participants’ cognitive abilities is often unnecessary. Tests are chosen based on their ease of use and the duration and feasibility of their administration. These demands often result in the use of abbreviated measures or even related proxies, potentially compromising the reliabilities and validities of those measures. The present study evaluates the usefulness of the mini-q (Baudson & Preckel, 2016), a three-minute speeded reasoning test, as a brief assessment of general cognitive abilities in large-scale panel studies in a sample of 140 participants from diverse educational and occupational backgrounds. Participants’ test performance showed excellent reliability and was substantially related (r = .57) to their general cognitive abilities measured with a broad test battery, supporting the test’s potential as a short screening of cognitive abilities. The largest share (54 %) of the relationship between test performance and general cognitive abilities was accounted for by participants’ working memory capacity, whereas individual differences in processing speed did not account for any part of the relationship between the two measures. Overall, our results support the notion that the mini-q can be used as a brief, reliable, and valid assessment of general cognitive abilities. However, possible disadvantages of participants with different native languages should be carefully considered due to the test’s reliance on verbal abilities.","link":"/opendata/schubert-et-al-2023/"},{"title":"Schultz et al. (2023)","text":"Reward improves memory through both encoding and consolidation processes.
In this pre-registered study, we tested whether reward effects on memory generalize from rewarded items to unrewarded but episodically-related items. 59 human volunteers incidentally encoded associations between unique objects and repeated scenes. Some scenes typically yielded high reward, whereas others typically yielded low reward. Memory was tested immediately after encoding (n=29) or the next day (n=30). Overall, reward had only a limited influence on memory. It neither enhanced consolidation, nor did its effect generalize to episodically related stimuli. We thus contribute to understanding the boundary conditions of reward effects on memory.","link":"/opendata/schultz-et-al-2023/"},{"title":"Schulz et al. (2019)","text":"How do children and adults differ in their search for rewards? We considered three different hypotheses that attribute developmental differences to (a) children’s increased random sampling, (b) more directed exploration toward uncertain options, or (c) narrower generalization. Using a search task in which noisy rewards were spatially correlated on a grid, we compared the ability of 55 younger children (ages 7 and 8 years), 55 older children (ages 9-11 years), and 50 adults (ages 19-55 years) to successfully generalize about unobserved outcomes and balance the exploration-exploitation dilemma. Our results show that children explore more eagerly than adults but obtain lower rewards. We built a predictive model of search to disentangle the unique contributions of the three hypotheses of developmental differences and found robust and recoverable parameter estimates indicating that children generalize less and rely on directed exploration more than adults. We did not, however, find reliable differences in terms of random sampling.","link":"/opendata/schulz-et-al-2019/"},{"title":"Schurr et al. (2023)","text":"Computational phenotyping has emerged as a powerful tool for characterizing individual variability across a variety of cognitive domains. 
An individual’s computational phenotype is defined as a set of mechanistically interpretable parameters obtained from fitting computational models to behavioral data. However, the interpretation of these parameters hinges critically on their psychometric properties, which are rarely studied. In order to identify the sources governing the temporal variability of the computational phenotype, we carried out a 12-week longitudinal study using a battery of seven tasks that measure aspects of human learning, memory, perception, and decision making. To examine the influence of state-like effects, each week participants provided reports tracking their mood, habits and daily activities. We developed a dynamic computational phenotyping framework, which allowed us to tease apart the time-varying effects of practice and internal states such as affective valence and arousal. Our results show that many phenotype dimensions covary with practice and affective factors, indicating that what appears to be unreliability may reflect previously unmeasured structure. These results support a fundamentally dynamic understanding of cognitive variability within an individual.","link":"/opendata/schurr-et-al-2023/"},{"title":"Sedlinská et al. (2022)","text":"Pavlovian bias is an innate motivational tendency to approach rewards and remain passive in the face of punishment. The relative reliance on Pavlovian valuation has been found to increase when the perceived control over environmental reinforcers is compromised, leading to behavior resembling learned helplessness (LH). In our study, we used a version of an orthogonalized Go-NoGo reinforcement learning task to examine the relative reliance on Pavlovian and instrumental valuation during and after an intermittent loss of control over rewards and losses. 
Sixty healthy young adults underwent the task and received anodal high-definition transcranial direct current stimulation (HD-tDCS) over the medial prefrontal/dorsal anterior cingulate cortex in a randomized, double-blind, sham-controlled study. Furthermore, we evaluated changes in cue-locked mid-frontal theta power derived from electroencephalography. We hypothesized that active stimulation would reduce Pavlovian bias during manipulation of outcome controllability, and the effect would be accompanied by stronger mid-frontal theta activity, representing arbitration between choice strategies in favor of instrumental relative to Pavlovian valuation. We found a progressive decrease in Pavlovian bias during and after the loss of control over feedback. Active HD-tDCS counteracted this effect while not affecting the mid-frontal theta signal. The results were at odds with our hypotheses but also with previous findings reporting LH-like patterns during and after the loss of control without brain stimulation. The discrepancy may be related to different protocols for the controllability manipulation. We argue that the subjective evaluation of task controllability is crucial in mediating the balance between Pavlovian and instrumental valuation during reinforcement learning and that the medial prefrontal/dorsal anterior cingulate cortex is a key region in this respect. These findings have implications for understanding the behavioral and neural underpinnings of LH in humans.","link":"/opendata/sedlinska-et-al-2022/"},{"title":"Seow et al. (2020)","text":"Alterations in error processing are implicated in a range of DSM-defined psychiatric disorders. For instance, obsessive-compulsive disorder (OCD) and generalised anxiety disorder show enhanced electrophysiological responses to errors, i.e., error-related negativity (ERN), while others like schizophrenia have an attenuated ERN.
However, as diagnostic categories in psychiatry are heterogeneous and also highly intercorrelated, the precise mapping of ERN enhancements/impairments is unclear. To address this, we recorded electroencephalograms (EEG) from 196 participants who performed the Flanker task and collected scores on 9 questionnaires assessing psychiatric symptoms to test if a dimensional framework could reveal specific transdiagnostic clinical manifestations of error processing dysfunctions. Contrary to our hypothesis, we found non-significant associations between ERN amplitude and symptom severity of OCD, trait anxiety, depression, social anxiety, impulsivity, eating disorders, alcohol addiction, schizotypy and apathy. A transdiagnostic approach did nothing to improve signal; there were non-significant associations between all three transdiagnostic dimensions (anxious-depression, compulsive behaviour and intrusive thought, and social withdrawal) and ERN magnitude. In these same individuals, we replicated a previously published transdiagnostic association between goal-directed learning and compulsive behaviour and intrusive thought. Possible explanations discussed are (i) that associations between the ERN and psychopathology might be smaller than previously assumed, (ii) that these associations might depend on a greater level of symptom severity than other transdiagnostic cognitive biomarkers, or (iii) that task parameters, such as the ratio of compatible to incompatible trials, might be crucial for ensuring the sensitivity of the ERN to clinical phenomena.","link":"/opendata/seow-et-al-2020/"},{"title":"Seow et al. (2021)","text":"Compulsive individuals have deficits in model-based planning, but the mechanisms that drive this have not been established. We examined two candidates-that compulsivity is linked to (1) an impaired model of the task environment and/or (2) an inability to engage cognitive control when making choices. 
To test this, 192 participants performed a two-step reinforcement learning task with concurrent EEG recordings, and we related the neural and behavioral data to their scores on a self-reported transdiagnostic dimension of compulsivity. To examine subjects’ internal model of the task, we used established behavioral and neural responses to unexpected events [reaction time (RT) slowing, P300 wave, and parietal-occipital alpha band power] measured when an unexpected transition occurred. To assess cognitive control, we probed theta power at the time of initial choice. As expected, model-based planning was linked to greater behavioral (RT) and neural (alpha power, but not P300) sensitivity to rare transitions. Critically, the sensitivities of both RT and alpha to task structure were weaker in those high in compulsivity. This RT-compulsivity effect was tested and replicated in an independent pre-existing dataset (N = 1413). We also found that mid-frontal theta power at the time of choice was reduced in highly compulsive individuals though its relation to model-based planning was less pronounced. These data suggest that model-based planning deficits in compulsive individuals may arise, at least in part, from having an impaired representation of the environment, specifically how actions lead to future states.","link":"/opendata/seow-et-al-2021/"},{"title":"Shahar et al. (2019)","text":"A well-established notion in cognitive neuroscience proposes that multiple brain systems contribute to choice behaviour. These include: (1) a model-free system that uses values cached from the outcome history of alternative actions, and (2) a model-based system that considers action outcomes and the transition structure of the environment. The widespread use of this distinction, across a range of applications, renders it important to index their distinct influences with high reliability. 
Here we consider the two-stage task, widely considered as a gold standard measure for the contribution of model-based and model-free systems to human choice. We tested the internal/temporal stability of measures from this task, including those estimated via an established computational model, as well as an extended model using drift-diffusion. Drift-diffusion modeling suggested that both choice in the first stage, and RTs in the second stage, are directly affected by a model-based/free trade-off parameter. Both parameter recovery and the stability of model-based estimates were poor but improved substantially when both choice and RT were used (compared to choice only), and when more trials (than conventionally used in research practice) were included in our analysis. The findings have implications for interpretation of past and future studies based on the use of the two-stage task, as well as for characterising the contribution of model-based processes to choice behaviour.","link":"/opendata/shahar-et-al-2019/"},{"title":"Sharp et al. (2022)","text":"Managing multiple goals is essential to adaptation, yet we are only beginning to understand computations by which we navigate the resource demands entailed in so doing. Here, we sought to elucidate how humans balance reward seeking and punishment avoidance goals, and relate this to variation in its expression within anxious individuals. To do so, we developed a novel multigoal pursuit task that includes trial-specific instructed goals to either pursue reward (without risk of punishment) or avoid punishment (without the opportunity for reward). We constructed a computational model of multigoal pursuit to quantify the degree to which participants could disengage from the pursuit goals when instructed to, as well as devote less model-based resources toward goals that were less abundant. In general, participants (n = 192) were less flexible in avoiding punishment than in pursuing reward. 
Thus, when instructed to pursue reward, participants often persisted in avoiding features that had previously been associated with punishment, even though at decision time these features were unambiguously benign. In a similar vein, participants showed no significant downregulation of avoidance when punishment avoidance goals were less abundant in the task. Importantly, we show preliminary evidence that individuals with chronic worry may have difficulty disengaging from punishment avoidance when instructed to seek reward. Taken together, the findings demonstrate that people avoid punishment less flexibly than they pursue reward. Future studies should test in larger samples whether a difficulty to disengage from punishment avoidance contributes to chronic worry.","link":"/opendata/sharp-et-al-2022/"},{"title":"Sharp & Eldar (2023)","text":"Leading models of human planning posit that planning relies on learned forward predictions, from a given state to the outcomes that typically follow it. Here, however, we show that in many situations planning can be made more efficient by relying on backward predictions, from a given outcome to the states that typically precede it. This holds specifically in environments where the number of states an agent may occupy increases with time (i.e., in diverging environments), because in such environments, backward predictions can be more compactly represented than forward predictions. Correspondingly, in three preregistered experiments, we find that humans engage in backward learning and planning in a diverging environment, and in forward learning and planning in a converging environment. We validate the applicability of these findings to real-life learning in a large-scale real-world prediction task. 
Thus, we establish that humans adaptively deploy forward and backward learning in the service of efficient planning, and this changes how they plan and what decisions they reach.","link":"/opendata/sharp-eldar-2023/"},{"title":"Sheffield et al. (2023)","text":"Multiple measures of decision-making under uncertainty (e.g. jumping to conclusions (JTC), bias against disconfirmatory evidence (BADE), win-switch behavior, random exploration) have been associated with delusional thinking in independent studies. Yet, it is unknown whether these variables explain shared or unique variance in delusional thinking, and whether these relationships are specific to paranoia or delusional ideation more broadly. Additionally, the underlying computational mechanisms require further investigation. To investigate these questions, task and self-report data were collected in 88 individuals (46 healthy controls, 42 schizophrenia-spectrum) and included measures of cognitive biases and behavior on probabilistic reversal learning and explore/exploit tasks. Of those, only win-switch rate significantly differed between groups. In regression, reversal learning performance, random exploration, and poor evidence integration during BADE showed significant, independent associations with paranoia. Only self-reported JTC was associated with delusional ideation, controlling for paranoia. Computational parameters increased the proportion of variance explained in paranoia. Overall, decision-making influenced by strong volatility and variability is specifically associated with paranoia, whereas self-reported hasty decision-making is specifically associated with other themes of delusional ideation. 
These aspects of decision-making under uncertainty may therefore represent distinct cognitive processes that, together, have the potential to worsen delusional thinking across the psychosis spectrum.","link":"/opendata/sheffield-et-al-2023/"},{"title":"Shin & Niv (2021)","text":"How do we evaluate a group of people after a few negative experiences with some members but mostly positive experiences otherwise? How do rare experiences influence our overall impression? We show that rare events may be overweighted due to normative inference of the hidden causes that are believed to generate the observed events. We propose a Bayesian inference model that organizes environmental statistics by combining similar events and separating outlying observations. Relying on the model’s inferred latent causes for group evaluation overweights rare or variable events. We tested the model’s predictions in eight experiments where participants observed a sequence of social or non-social behaviours and estimated their average. As predicted, estimates were biased toward sparse events when estimating after seeing all observations, but not when tracking a summary value as observations accrued. Our results suggest that biases in evaluation may arise from inferring the hidden causes of group members’ behaviours.","link":"/opendata/shin-niv-2021/"},{"title":"Sidarus et al. (2019)","text":"Value-based decision-making involves trading off the cost associated with an action against its expected reward. Research has shown that both physical and mental effort constitute such subjective costs, biasing choices away from effortful actions, and discounting the value of obtained rewards. Facing conflicts between competing action alternatives is considered aversive, as recruiting cognitive control to overcome conflict is effortful. Moreover, engaging control to proactively suppress irrelevant information that could conflict with task-relevant information would presumably also be cognitively costly.
Yet, it remains unclear whether the cognitive control demands involved in preventing and resolving conflict also constitute costs in value-based decisions. The present study investigated this question by embedding irrelevant distractors (flanker arrows) within a reversal-learning task, with intermixed free and instructed trials. Results showed that participants learned to adapt their free choices to maximize rewards, but were nevertheless biased to follow the suggestions of irrelevant distractors. Thus, the perceived cost of investing cognitive control to suppress an external suggestion could sometimes trump internal value representations. By adapting computational models of reinforcement learning, we assessed the influence of conflict at both the decision and learning stages. Modelling the decision showed that free choices were more biased when participants were less sure about which action was more rewarding. This supports the hypothesis that the costs linked to conflict management were traded off against expected rewards. During the learning phase, we found that learning rates were reduced in instructed, relative to free, choices. Learning rates were further reduced by conflict between an instruction and subjective action values, whereas learning was not robustly influenced by conflict between one’s actions and external distractors. Our results show that the subjective cognitive control costs linked to conflict factor into value-based decision-making, and highlight that different types of conflict may have different effects on learning about action outcomes.","link":"/opendata/sidarus-et-al-2019/"},{"title":"Siegel et al. (2022)","text":"Money can be tainted when it is associated with direct or indirect harm to others. Deciding whether to accept “dirty money” poses a dilemma because money can be used to help others, but accepting dirty money has moral costs. How people resolve the dilemma of dirty money remains unknown.
One theory casts the dilemma as a valuation conflict that can be resolved by integrating the costs and benefits of accepting dirty money. Here, we use behavioral experiments and computational modeling to test the valuation conflict account and unveil the cognitive computations employed when deciding whether to accept or reject morally tainted cash. In Study 1, British participants decided whether to accept “dirty” money obtained by inflicting electric shocks on another person (versus “clean” money obtained by shocking oneself). Computational models showed that the source of the money (dirty versus clean) impacted decisions by shifting the relative valuation of the money’s positive and negative attributes, rather than imposing a uniform bias on decision-making. Studies 2 and 3 replicate this finding and show that participants were more willing to accept dirty money when the money was directed towards a good cause, and observers judged such decisions to be more praiseworthy than accepting dirty money for one’s own profit. Our findings suggest that dirty money can be psychologically “laundered” through charitable activities and have implications for understanding and preventing the social norms that can justify corrupt behavior.","link":"/opendata/siegel-et-al-2022/"},{"title":"Sierra et al. (2022)","text":"Precisely estimating event timing is essential for survival, yet temporal distortions are ubiquitous in our daily sensory experience. Here, we tested whether the relative position, duration, and distance in time of two sequentially-organized events-standard S, with constant duration, and comparison C, with duration varying trial-by-trial-are causal factors in generating temporal distortions. We found that temporal distortions emerge when the first event is shorter than the second event. 
Importantly, a significant interaction suggests that a longer inter-stimulus interval (ISI) helps to counteract such serial distortion effect only when the constant S is in the first position, but not if the unpredictable C is in the first position. These results imply the existence of a perceptual bias in perceiving ordered event durations, mechanistically contributing to distortion in time perception. We simulated our behavioral results with a Bayesian model and replicated the finding that participants disproportionately expand first-position dynamic (unpredictable) short events. Our results clarify the mechanisms generating time distortions by identifying a hitherto unknown duration-dependent encoding inefficiency in human serial temporal perception, something akin to a strong prior that can be overridden for highly predictable sensory events but unfolds for unpredictable ones.","link":"/opendata/sierra-et-al-2022/"},{"title":"Simon-Kutscher et al. (2019)","text":"During a threatening encounter, people can learn to associate the aversive event with a discrete preceding cue or with the context in which the event took place, corresponding to cue-dependent and context-dependent fear conditioning, respectively. Which of these forms of fear learning prevails has critical implications for fear-related psychopathology. We tested here whether acute stress may modulate the balance of cue-dependent and contextual fear learning. Participants (N = 72) underwent a stress or control manipulation 30 min before they completed a fear-learning task in a virtual environment that allowed both cued and contextual fear learning. Results showed equally strong cue- and context-dependent fear conditioning in the control group. Stress, however, abolished contextual fear learning, which was directly correlated with the activity of the stress hormone cortisol, and made cue-dependent fear more resistant to extinction. 
These results are the first to show that stress favors cue-dependent over contextual fear learning.","link":"/opendata/simon-kutscher-et-al-2019/"},{"title":"Sin et al. (2021)","text":"Many decisions in life are sequential and constrained by a time window. Although mathematically derived optimal solutions exist, it has been reported that humans often deviate from making optimal choices. Here, we used a secretary problem, a classic example of finite sequential decision-making, and investigated the mechanisms underlying individuals’ suboptimal choices. Across three independent experiments, we found that a dynamic programming model comprising subjective value function explains individuals’ deviations from optimality and predicts the choice behaviors under fewer and more opportunities. We further identified that pupil dilation reflected the levels of decision difficulty and subsequent choices to accept or reject the stimulus at each opportunity. The value sensitivity, a model-based estimate that characterizes each individual’s subjective valuation, correlated with the extent to which individuals’ physiological responses tracked stimuli information. Our results provide model-based and physiological evidence for subjective valuation in finite sequential decision-making, rediscovering human suboptimality in subjectively optimal decision-making processes.","link":"/opendata/sin-et-al-2021/"},{"title":"Slanina-Davies et al. (2023)","text":"OBJECTIVE: Eating disorders (EDs) are a heterogenous group of disorders characterized by disturbed eating patterns. Links have been made between ED symptoms and control-seeking behaviors, which may cause relief from distress. However, whether direct behavioral measures of control-seeking behavior correlate with ED symptoms has not been directly tested. Additionally, existing paradigms may conflate control-seeking behavior with uncertainty-reducing behavior. 
METHOD: A general population sample of 183 participants took part in an online behavioral task, in which participants rolled a die in order to obtain/avoid a set of numbers. Prior to each roll, participants could choose to change arbitrary features of the task (such as the color of their die) or view additional information (such as the current trial number). Selecting these Control Options could cost participants points or not (Cost/No-Cost conditions). Each participant completed all four conditions, each with 15 trials, followed by a series of questionnaires, including the Eating Attitudes Test-26 (EAT-26), the Intolerance of Uncertainty Scale, and the Obsessive-Compulsive Inventory-Revised (OCI-R). RESULTS: A Spearman’s rank test indicated no significant correlation between total EAT-26 score and total number of Control Options selected, with only elevated scores on a measure of obsessions and compulsivity (OCI-R) correlating with the total number of Control Options selected (rs = .155, p = .036). DISCUSSION: In our novel paradigm, we find no relationship between EAT-26 score and control-seeking. However, we do find some evidence that this behavior may be present in other disorders that often coincide with ED diagnosis, which may indicate that transdiagnostic factors such as compulsivity are important to control-seeking.","link":"/opendata/slanina-davies-et-al-2023/"},{"title":"Slater et al. (2023)","text":"Contrary to classical economic theories, experimental findings show that people are not exclusively self-interested; rather, they have other-regarding preferences, such as fairness and reciprocity. Further, these social preferences are emotionally driven, and deliberative processes are required to implement ‘rational’ self-interested motives.
Here, we aimed to enhance ‘rational’ self-interested behavior by enhancing the neuronal activity of the right ventrolateral prefrontal cortex (rVLPFC), a brain area associated with emotion regulation processes, using anodal transcranial-direct-current-stimulation (tDCS), among ultimatum game responders. We found that stimulated, compared to control (sham stimulation), participants accepted unfair offers significantly more often. Interestingly, this effect was not moderated by whether the unfair offers were made intentionally by the participants’ partners or through a fair mechanism (i.e., randomly made by the computer), suggesting that inequality per se is emotionally aversive even when it was not determined deliberately. In contrast, the effect was absent when playing on behalf of another random participant, suggesting that when the self is not involved, decisions are less emotional. These findings reveal the causal and vital role of the rVLPFC in promoting self-interested behavior in social exchange situations. We discuss theoretical implications for dual-system models and specifically in the context of social exchange situations.","link":"/opendata/slater-et-al-2023/"},{"title":"Smid et al. (2022)","text":"Human decision-making is underpinned by distinct systems that differ in flexibility and associated cognitive cost. A widely accepted dichotomy distinguishes between a cheap but rigid model-free system and a flexible but costly model-based system. Typically, humans use a hybrid of both types of decision-making depending on environmental demands. However, children’s use of a model-based system during decision-making has not yet been shown. While prior developmental work has identified simple building blocks of model-based reasoning in young children (1-4 years old), there has been little evidence of this complex cognitive system influencing behavior before adolescence.
Here, by using a modified task to make engagement in cognitively costly strategies more rewarding, we show that children aged 5-11 years (N = 85), including the youngest children, displayed multiple indicators of model-based decision making, and that the degree of its use increased throughout childhood. Unlike adults (N = 24), however, children did not display adaptive arbitration between model-free and model-based decision-making. Our results demonstrate that throughout childhood, children can engage in highly sophisticated and costly decision-making strategies. However, the flexible arbitration between decision-making strategies might be a critically late-developing component in human development.","link":"/opendata/smid-et-al-2022/"},{"title":"Smith & Pollak (2022)","text":"To effectively navigate their environments, infants and children learn how to recognize events that predict salient outcomes, such as rewards or punishments. Relatively little is known about how children acquire this ability to attach value to the stimuli they encounter. Studies often examine children’s ability to learn about rewards and threats using either classical conditioning or behavioral choice paradigms. Here, we assess both approaches and find that they yield different outcomes in terms of which individuals had efficiently learned the value of information presented to them. The findings offer new insights into understanding how to assess different facets of value learning in children.","link":"/opendata/smith-pollak-2022/"},{"title":"Smith et al. (2023)","text":"Behavioural and neuroimaging research has shown that older adults are less sensitive to financial losses compared to younger adults. Yet relatively less is known about age-related differences in social decisions and social reward processing.
As part of a pilot study that was sponsored by the Scientific Research Network on Decision Neuroscience and Aging, we collected behavioural and functional magnetic resonance imaging (fMRI) data from 50 participants (Younger: N = 26, ages 18–34 years; Older: N = 24, ages 63–80 years) who completed three tasks in the scanner: an economic trust game as the investor with three partners (computer, stranger, friend) as the investee; a card-guessing task with monetary gains and losses shared with three partners (computer, stranger, friend); and an ultimatum game as responder to three anonymous proposers (computer, age-similar adults, age-dissimilar adults). We also collected B0 field maps and high-resolution structural images (T1-weighted and T2-weighted images). These data could be reused to answer questions about moment-to-moment variability in fMRI signal, representational similarity between tasks, and brain structure.","link":"/opendata/smith-et-al-2023/"},{"title":"Snijder et al. (2023)","text":"The domain of cognitive control has been a major focus of experimental, neuroscience, and individual differences research. Currently, however, no theory of cognitive control successfully unifies both experimental and individual differences findings. Some perspectives deny that there even exists a unified psychometric cognitive control construct to be measured at all. These shortcomings of the current literature may reflect the fact that current cognitive control paradigms are optimized for the detection of within-subject experimental effects rather than individual differences. In the current study, we examine the psychometric properties of the Dual Mechanisms of Cognitive Control (DMCC) task battery, which was designed in accordance with a theoretical framework that postulates common sources of within-subject and individual differences variation. 
We evaluated both internal consistency and test-retest reliability, and for the latter, utilized both classical test theory measures (i.e., split-half methods, intraclass correlation) and newer hierarchical Bayesian estimation of generative models. Although traditional psychometric measures suggested poor reliability, the hierarchical Bayesian models indicated a different pattern, with good to excellent test-retest reliability in almost all tasks and conditions examined. Moreover, within-task, between-condition correlations were generally increased when using the Bayesian model derived estimates, and these higher correlations appeared to be directly linked to the higher reliability of the measures. In contrast, between-task correlations remained low regardless of theoretical manipulations or estimation approach. Together, these findings highlight the advantages of Bayesian estimation methods, while also pointing to the important role of reliability in the search for a unified theory of cognitive control.","link":"/opendata/snijder-et-al-2023/"},{"title":"Solomyak et al. (2022)","text":"Many decision-making studies have demonstrated that humans learn either expected values or relative preferences among choice options, yet little is known about what environmental conditions promote one strategy over the other. Here, we test the novel hypothesis that humans adapt the degree to which they form absolute values to the diversity of the learning environment. Since absolute values generalize better to new sets of options, we predicted that the more options a person learns about the more likely they would be to form absolute values. To test this, we designed a multi-day learning experiment comprising twenty learning sessions in which subjects chose among pairs of images each associated with a different probability of reward. 
We assessed the degree to which subjects formed absolute values and relative preferences by asking them to choose between images they learned about in separate sessions. We found that concurrently learning about more images within a session enhanced absolute-value, and suppressed relative-preference, learning. Conversely, cumulatively pitting each image against a larger number of other images across multiple sessions did not impact the form of learning. These results show that the way humans encode preferences is adapted to the diversity of experiences offered by the immediate learning context.","link":"/opendata/solomyak-et-al-2022/"},{"title":"Son et al. (2023)","text":"To act adaptively in a complex and dynamic social landscape, humans must continually make inferences about who might be connected to whom. How do they solve this fundamental problem of social link prediction: inferring the existence of unobserved or potential relationships in their social network from noisy, limited information? We propose that people generate principled inferences by learning cognitive maps that systematically abstract over direct relations (friends) and multistep relations (e.g., friends-of-friends). We show that such abstracted cognitive maps enable a flexible solution for link prediction and provide a natural explanation for a variety of otherwise puzzling empirical observations in social psychology. Our proposal generalizes the theory of cognitive maps to the fundamental computational problem of social link prediction and presents a powerful framework for understanding the workings of a predictive mind operating within a complex social world.","link":"/opendata/son-et-al-2023/"},{"title":"Song et al. (2022)","text":"Realistic and complex decision tasks often allow for many possible solutions. How do we find the correct one? Introspection suggests a process of trying out solutions one after the other until success. 
However, such methodical serial testing may be too slow, especially in environments with noisy feedback. Alternatively, the underlying learning process may involve implicit reinforcement learning that learns about many possibilities in parallel. Here we designed a multi-dimensional probabilistic active-learning task tailored to study how people learn to solve such complex problems. Participants configured three-dimensional stimuli by selecting features for each dimension and received probabilistic reward feedback. We manipulated task complexity by changing how many feature dimensions were relevant to maximizing reward, as well as whether this information was provided to the participants. To investigate how participants learn the task, we examined models of serial hypothesis testing, feature-based reinforcement learning, and combinations of the two strategies. Model comparison revealed evidence for hypothesis testing that relies on reinforcement learning when selecting what hypothesis to test. The extent to which participants engaged in hypothesis testing depended on the instructed task complexity: people tended to serially test hypotheses when instructed that there were fewer relevant dimensions, and relied more on gradual and parallel learning of feature values when the task was more complex. This demonstrates a strategic use of task information to balance the costs and benefits of the two methods of learning.","link":"/opendata/song-et-al-2022/"},{"title":"Soutscheck et al. (2022)","text":"Deciding whether to engage in strenuous mental activities requires trading-off the potential benefits against the costs of mental effort, but it is unknown which brain rhythms are causally involved in such cost-benefit calculations. We show that brain stimulation targeting midfrontal theta oscillations increases the engagement in goal-directed mental effort.
Participants received transcranial alternating current stimulation over dorsomedial prefrontal cortex while deciding whether they were willing to perform a demanding working memory task for monetary rewards. Midfrontal theta tACS increased the willingness to exert mental effort for rewards while leaving working memory performance unchanged. Computational modelling using a hierarchical Bayesian drift diffusion model suggests that theta tACS shifts the starting bias before evidence accumulation towards high reward-high effort options without affecting the velocity of the evidence accumulation process. Our findings suggest that the motivation to engage in goal-directed mental effort can be increased via midfrontal tACS.","link":"/opendata/soutscheck-et-al-2022/"},{"title":"Soutschek et al. (2022)","text":"Deficits in impulse control belong to the core profile of nicotine dependence. Smokers might thus benefit from voluntarily self-restricting their access to the immediate temptation of nicotine products (precommitment) in order to avoid impulse control failures. However, little is known about how smokers’ willingness to engage in voluntary self-restrictions is determined by metacognitive insight into their general preferences for immediate over delayed rewards. Here, with a series of monetary intertemporal choice tasks, we provide empirical evidence for reduced metacognitive accuracy in smokers relative to non-smokers and show that smokers overestimate the subjective value of delayed rewards relative to their revealed preferences. In line with the metacognitive deficits, smokers were also less sensitive to the risk of preference reversals when deciding whether or not to restrict their access to short-term financial rewards.
Taken together, the current findings suggest that deficits not only in impulse control but also in metacognition may hamper smokers’ resistance to immediate rewards and capacity to pursue long-term goals.","link":"/opendata/soutschek-et-al-2022/"},{"title":"Souza & Frischkorn (2023)","text":"The limited capacity of working memory constrains how well we can think and act. Focused attention alleviates this limitation by prioritizing the most relevant mental content at a given time. Retro-cues tap into this ability by guiding attention to one working memory content, thereby improving memory speed and accuracy. So far, few attempts have been made to understand the retro-cue effect through well-established computational models, nor how their parameters track age-related changes and individual differences in focusing efficiency. The present study aims to close these gaps. We applied the drift-diffusion model to the data from a large sample of younger and older adults (total N = 346) that completed four retro-cue tasks. Diffusion modeling showed that retro-cues increased the quality of evidence accumulation, reduced the time taken for retrieval, and changed response conservativeness. Younger and older adults benefited from retro-cues in all tasks and parameters. Yet, age-related decline was observed in the retro-cue boost for evidence quality. Likewise, evidence quality was the only parameter capturing individual differences in focusing efficiency. Our results suggest that people differ in how well they can strengthen and protect a focused representation to boost evidence-quality accumulation, and this ability declines with aging.","link":"/opendata/souza-frischkorn-2023/"},{"title":"Spektor et al. (2022)","text":"People rely on the choice context to guide their decisions, violating fundamental principles of rational choice theory and exhibiting phenomena called context effects. 
Recent research has uncovered that dominance relationships can either increase or decrease the choice share of the dominating option, marking the two ends of an attraction-repulsion continuum. However, empirical links between the two opposing effects are scarce and theoretical accounts are missing altogether. The present study (N = 55) used eye tracking alongside a within-subject design that contrasts a perceptual task and a preferential-choice analog in order to bridge this gap and uncover the underlying information-search processes. Although individuals differed in their perceptual and preferential choices, they generally engaged in alternative-wise comparisons and a repulsion effect was present in both conditions that became weaker the more predominant the attribute-wise comparisons were. Altogether, our study corroborates the notion that repulsion effects are a robust and general phenomenon that theoretical accounts need to take seriously.","link":"/opendata/spektor-et-al-2022/"},{"title":"Spektor et al. (2023)","text":"Individuals’ decisions under risk tend to be in line with the notion that “losses loom larger than gains”. This loss aversion in decision making is commonly understood as a stable individual preference that is manifested across different contexts. The presumed stability and generality, which underlies the prominence of loss aversion in the literature at large, has been recently questioned by studies showing how loss aversion can disappear, and even reverse, as a function of the choice context. The present study investigated whether loss aversion reflects a trait-like attitude of avoiding losses or rather individuals’ adaptability to different contexts. We report three experiments that investigated the within-subject context sensitivity of loss aversion in a two-alternative forced-choice task. The results show, besides interindividual differences in loss aversion, that the context affects the extent of loss aversion.
This indicates that even though the absolute value of loss aversion can be affected by external factors such as the choice context, estimates of people’s loss aversion still capture the relative dispositions towards gains and losses across individuals.","link":"/opendata/spektor-et-al-2023/"},{"title":"Spektor & Wulff (2023)","text":"Good decisions require information. When people face a situation in which they need to make a choice but know little about the options available, how do they search for information? We present an analysis of over 1,000,000 information-search decisions made by over 2,500 individuals in a decisions-from-experience setting. We found that individuals solve the problem in a smart way, relying on several strategies—including two novel ones. In discovery-driven search, people leverage detailed knowledge about the structure of the environment to find previously unobserved outcomes and terminate information search after all possible outcomes have been observed. In fixed search, on the other hand, people decide in advance how much information they want to obtain and stick to that decision irrespective of the feedback obtained. These novel strategies are distinct from uncertainty-driven search—the dominant strategy in research on information search—in which people engaged only after all outcomes had been observed. Overall, our results suggest that people adaptively and dynamically rely on a toolbox of information-search strategies. This is at odds with a narrow interpretation of information search as cost–benefit optimization and highlights a need for broader theories of information-search behavior in decisions under uncertainty, capturing the diversity of the strategic tools recruited.","link":"/opendata/spektor-wulff-2023/"},{"title":"Spicer et al. (2022)","text":"One of the most robust effects in cognitive psychology is anchoring, in which judgments show a bias toward previously viewed values. 
However, in what is essentially the same task as used in anchoring research, a perceptual illusion demonstrates the opposite effect of repulsion. Here, we united these two literatures, testing in two experiments with adults (total N = 200) whether prior comparative decisions bias cognitive and perceptual judgments in opposing directions or whether anchoring and repulsion are two domain-general biases whose co-occurrence has so far gone undetected. We found that in both perceptual and cognitive tasks, anchoring and repulsion co-occur. Further, the direction of the bias depends on the comparison value: Distant values attract judgments, whereas nearby values repulse judgments. Because none of the leading theories for either effect account for both biases, theoretical integration is needed. As a starting point, we describe one such joint theory based on sampling models of cognition.","link":"/opendata/spicer-et-al-2022/"},{"title":"Steffen et al. (2023)","text":"Forward planning is crucial to maximize outcome in complex sequential decision-making scenarios. In this cross-sectional study, we were particularly interested in age-related differences of forward planning. We presumed that especially older individuals would show a shorter planning depth to keep the costs of model-based decision-making within limits. To test this hypothesis, we developed a sequential decision-making task to assess forward planning in younger and older (age > 60 years; n = 27) adults. By using reinforcement learning modelling, we inferred planning depths from participants’ choices. Our results showed significantly shorter planning depths and higher response noise for older adults. Age differences in planning depth were only partially explained by well-known cognitive covariates such as working memory and processing speed. Consistent with previous findings, this indicates age-related shifts away from model-based behaviour in older adults.
In addition to a shorter planning depth, our findings suggest that older adults also apply a variety of heuristic low-cost strategies.","link":"/opendata/steffen-et-al-2023/"},{"title":"Steiner & Frey (2021)","text":"Representative design refers to the idea that experimental stimuli should be sampled or designed such that they represent the environments to which measured constructs are supposed to generalize. In this article we investigate the role of representative design in achieving valid and reliable psychological assessments, by focusing on a widely used behavioral measure of risk taking-the Balloon Analogue Risk Task (BART). Specifically, we demonstrate that the typical implementation of this task violates the principle of representative design, thus conflicting with the expectations people likely form from real balloons. This observation may provide an explanation for the previously observed limitations in some of the BART’s psychometric properties (e.g., convergent validity with other measures of risk taking). To experimentally test the effects of improved representative designs, we conducted two extensive empirical studies (N = 772 and N = 632), finding that participants acquired more accurate beliefs about the optimal behavior in the BART because of these task adaptations. Yet, improving the task’s representativeness proved to be insufficient to enhance the BART’s psychometric properties. It follows that for the development of valid behavioral measurement instruments-as are needed, for instance, in functional neuroimaging studies-our field has to overcome the philosophy of the repair program (i.e., fixing existing tasks). Instead, we suggest that the development of valid task designs requires novel ecological assessments, aimed at identifying those real-life behaviors and associated psychological processes that lab tasks are supposed to capture and generalize to.","link":"/opendata/steiner-frey-2021/"},{"title":"Steinke et al.
(2020)","text":"The Wisconsin Card Sorting Test (WCST) is considered a gold standard for the assessment of cognitive flexibility. On the WCST, repeating a sorting category following negative feedback is typically treated as indicating reduced cognitive flexibility. Therefore, such responses are referred to as ‘perseveration’ errors. Recent research suggests that the propensity for perseveration errors is modulated by response demands: They occur less frequently when their commitment repeats the previously executed response. Here, we propose parallel reinforcement-learning models of card sorting performance, which assume that card sorting performance can be conceptualized as resulting from model-free reinforcement learning at the level of responses that occurs in parallel with model-based reinforcement learning at the categorical level. We compared parallel reinforcement-learning models with purely model-based reinforcement learning, and with the state-of-the-art attentional-updating model. We analyzed data from 375 participants who completed a computerized WCST. Parallel reinforcement-learning models showed best predictive accuracies for the majority of participants. Only parallel reinforcement-learning models accounted for the modulation of perseveration propensity by response demands. In conclusion, parallel reinforcement-learning models provide a new theoretical perspective on card sorting and offer a suitable framework for discerning individual differences in latent processes that subserve behavioral flexibility.","link":"/opendata/steinke-et-al-2020/"},{"title":"Stevenson et al. (2022)","text":"Decision-making behavior is often understood using the framework of evidence accumulation models (EAMs). Nowadays, EAMs are applied to various domains of decision-making with the underlying assumption that the latent cognitive constructs proposed by EAMs are consistent across these domains.
In this study we investigate both the extent to which the parameters of EAMs are related between four different decision-making domains and across different time points. To that end, we make use of the novel joint modelling approach, which explicitly includes relationships between parameters, such as covariances or underlying factors, in one combined joint model. Consequently, this joint model also accounts for measurement error and uncertainty within the estimation of these relations. We found that EAM parameters were consistent between time points on three of the four decision-making tasks. For our between-task analysis we constructed a joint model with a factor analysis on the parameters of the different tasks. Our two-factor joint model indicated that information processing ability was related between the different decision-making domains. However, other cognitive constructs such as the degree of response caution and urgency were only comparable on some domains.","link":"/opendata/stevenson-et-al-2022/"},{"title":"Stojić et al. (2020)","text":"Uncertainty plays a critical role in reinforcement learning and decision making. However, exactly how it influences behavior remains unclear. Multiarmed-bandit tasks offer an ideal test bed, since computational tools such as approximate Kalman filters can closely characterize the interplay between trial-by-trial values, uncertainty, learning, and choice. To gain additional insight into learning and choice processes, we obtained data from subjects’ overt allocation of gaze. The estimated value and estimation uncertainty of options influenced what subjects looked at before choosing; these same quantities also influenced choice, as additionally did fixation itself. A momentary measure of uncertainty in the form of absolute prediction errors determined how long participants looked at the obtained outcomes.
These findings affirm the importance of uncertainty in multiple facets of behavior and help delineate its effects on decision making.","link":"/opendata/stojic-et-al-2020/"},{"title":"Stuppy-Sullivan et al. (2020)","text":"Aberrant cost–benefit decision making is a key factor related to individual differences in the expression of substance use disorders (SUDs). Previous research highlights how delay-cost sensitivity affects variability in SUDs; however, other forms of cost–benefit decision making—effort-based choice—have received less attention. We administered the Effort Expenditure for Rewards Task (EEfRT) in an SUD-enriched community sample (N = 80). Individuals with more severe SUDs were less likely to use information about expected value when deciding between high-effort, high-reward and low-effort, low-reward options. Furthermore, individuals whose severity of use was primarily related to avoiding aversive affective states and individuals with heightened sensitivity to delay costs during intertemporal decision making were the least sensitive to expected value signals when making decisions to engage in effortful behavior. Together, these findings suggest that individuals with more severe SUDs have difficulty integrating multiple decision variables to guide behavior during effort-based decision making.","link":"/opendata/stuppy-sullivan-et-al-2020/"},{"title":"Sugawara & Katahira (2021)","text":"The learning rate is a key parameter in reinforcement learning that determines the extent to which novel information (outcome) is incorporated in guiding subsequent actions. Numerous studies have reported that the magnitude of the learning rate in human reinforcement learning is biased depending on the sign of the reward prediction error. However, this asymmetry can be observed as a statistical bias if the fitted model ignores the choice autocorrelation (perseverance), which is independent of the outcomes. 
Therefore, to investigate the genuine process underlying human choice behavior using empirical data, one should dissociate asymmetry in learning and perseverance from choice behavior. The present study addresses this issue by using a Hybrid model incorporating asymmetric learning rates and perseverance. First, by conducting simulations, we demonstrate that the Hybrid model can identify the true underlying process. Second, using the Hybrid model, we show that empirical data collected from a web-based experiment are governed by perseverance rather than asymmetric learning. Finally, we apply the Hybrid model to two open datasets in which asymmetric learning was reported. As a result, the asymmetric learning rate was validated in one dataset but not another.","link":"/opendata/sugawara-katahira-2021/"},{"title":"Sukhov et al. (2023)","text":"Persistence and perseverance, even in the face of great adversity, are admirable qualities. However, knowing when to stop pursuing something is as important as exerting effort toward attaining a goal. How do people decide when to persist and when to quit? Here, we design a novel task to study this question, in which people were given a finite number of opportunities to pursue stochastic rewards by selecting among a set of options that provide a reward each trial. At any time, if people were not satisfied with the option they had selected they could choose to abandon it and instead try a new option. However, if they did so they could never return to the previous option. Mathematical analysis of this task shows that the optimal strategy explores a relatively small number of options before settling on a sufficiently good option. Further, we find that the optimal strategy is to abandon an option if the total number of remaining trials exceeds a threshold specified by the observed option’s performance. A large-scale, pre-registered experiment (N = 3,632) reveals that people largely behave in accordance with the optimal strategy.
People also make decisions to persist with an option based on its performance and they typically explore relatively few options before settling on a sufficiently good one. However, compared to the optimal strategy, people are less sensitive to the number of remaining trials and are more likely to persist with sub-optimal options. Together, this work provides a new approach to studying how we decide when to quit and deepens our understanding of human persistence.","link":"/opendata/sukhov-et-al-2023/"},{"title":"Suthaharan et al. (2021)","text":"The COVID-19 pandemic has made the world seem less predictable. Such crises can lead people to feel that others are a threat. Here, we show that the initial phase of the pandemic in 2020 increased individuals’ paranoia and made their belief updating more erratic. A proactive lockdown made people’s belief updating less capricious. However, state-mandated mask-wearing increased paranoia and induced more erratic behaviour. This was most evident in states where adherence to mask-wearing rules was poor but where rule following is typically more common. Computational analyses of participant behaviour suggested that people with higher paranoia expected the task to be more unstable. People who were more paranoid endorsed conspiracies about mask-wearing and potential vaccines and the QAnon conspiracy theories. These beliefs were associated with erratic task behaviour and changed priors. Taken together, we found that real-world uncertainty increases paranoia and influences laboratory task behaviour.","link":"/opendata/suthaharan-et-al-2021/"},{"title":"Swart et al. (2017)","text":"Catecholamines modulate the impact of motivational cues on action. Such motivational biases have been proposed to reflect cue-based, Pavlovian effects. Here, we assess whether motivational biases may also arise from asymmetrical instrumental learning of active and passive responses following reward and punishment outcomes. 
We present a novel paradigm, allowing us to disentangle the impact of reward and punishment on instrumental learning from Pavlovian response biasing. Computational analyses showed that motivational biases reflect both Pavlovian and instrumental effects: reward and punishment cues promoted generalized (in)action in a Pavlovian manner, whereas outcomes enhanced instrumental (un)learning of chosen actions. These cue- and outcome-based biases were altered independently by the catecholamine enhancer methylphenidate. Methylphenidate’s effect varied across individuals with a putative proxy of baseline dopamine synthesis capacity, working memory span. Our study uncovers two distinct mechanisms by which motivation impacts behaviour, and helps refine current models of catecholaminergic modulation of motivated action.","link":"/opendata/swart-et-al-2017/"},{"title":"Swart et al. (2018)","text":"Motivation exerts control over behavior by eliciting Pavlovian responses, which can either match or conflict with instrumental action. We can overcome maladaptive motivational influences putatively through frontal cognitive control. However, the neurocomputational mechanisms subserving this control are unclear; does control entail up-regulating instrumental systems, down-regulating Pavlovian systems, or both? We combined electroencephalography (EEG) recordings with a motivational Go/NoGo learning task (N = 34), in which multiple Go options enabled us to disentangle selective action learning from nonselective Pavlovian responses. Midfrontal theta-band (4 Hz-8 Hz) activity covaried with the level of Pavlovian conflict and was associated with reduced Pavlovian biases rather than reduced instrumental learning biases. Motor and lateral prefrontal regions synchronized to the midfrontal cortex, and these network dynamics predicted the reduction of Pavlovian biases over and above local, midfrontal theta activity. 
This work links midfrontal processing to detecting Pavlovian conflict and highlights the importance of network processing in reducing the impact of maladaptive, Pavlovian biases.","link":"/opendata/swart-et-al-2018/"},{"title":"Szücs et al. (2022)","text":"Rivalry and admiration-seeking are two distinct strategies humans use to gain status in social competition. However, in vivo data is lacking about whether these behavioral manifestations of status pursuit are driven by distinct rivalry and admiration-seeking traits, as outlined by the Narcissistic Admiration and Rivalry Concept (NARC), whether NARC traits interact with environmental cues as suggested by the Status Pursuit In Narcissism (SPIN) model, and whether these interactions primarily occur with trait-relevant cues (defeat in the case of trait rivalry and victory in the case of trait admiration-seeking) as proposed by Trait activation theory (TAT). We used a rigged video game tournament with three randomized blocks with defeat manipulations of varying intensity: defeats to victory ratios of 1:1 (neutral), 2:1 (moderate losing), and 3:1 (extreme losing), measuring behavioral rivalry (stealing points from opponents) and admiration-seeking (paying to boost rank in the tournament) in a sample of 434 undergraduates assessed for trait rivalry and trait admiration-seeking with the Narcissistic Admiration and Rivalry Questionnaire. We found evidence for trait-congruent main and interaction effects: whereas behavioral rivalry scaled with trait rivalry and behavioral admiration-seeking with trait admiration-seeking, trait rivalry primarily increased status-pursuit behaviors following defeats and trait admiration-seeking following victories. 
These results corroborate the NARC’s two-dimensional conceptualization of narcissistic grandiosity, support the SPIN model’s interactionist view of status pursuit, and extend these frameworks by outlining trait-specific environmental effects consistent with TAT.","link":"/opendata/szucs-et-al-2022/"},{"title":"Tagu & Kristjansson (2021)","text":"A critical question in visual foraging concerns the mechanisms driving the next target selection. Observers first identify a set of candidate targets, and then select the best option among these candidates. Recent evidence suggests that target selection relies on internal biases towards proximity (nearest target from the last selection), priming (target from the same category as the last selection) and value (target associated with high value). Here, we tested the role of eye movements in target selection, and notably whether disabling eye movements during target selection could affect search strategy. We asked observers to perform four foraging tasks differing by selection modality and target value. During gaze foraging, participants had to accurately fixate the targets to select them and could not anticipate the next selection with their eyes, while during mouse foraging they selected the targets with mouse clicks and were free to move their eyes. We moreover manipulated both target value and proximity. Our results revealed notable individual differences in search strategy, confirming the existence of internal biases towards value, proximity and priming. Critically, there were no differences in search strategy between mouse and gaze foraging, suggesting that disabling eye movements during target selection did not affect foraging behaviour. These results importantly suggest that overt orienting is not necessary for target selection. 
This study provides fundamental information for theoretical conceptions of attentional selection, and emphasizes the importance of covert attention for target selection during visual foraging.","link":"/opendata/tagu-kristjansson-2021/"},{"title":"Tal-Perry & Yuval-Greenberg (2021)","text":"Eye movements are inhibited prior to the occurrence of temporally predictable events. This ‘oculomotor inhibition effect’ has been demonstrated with various tasks and modalities. Specifically, it was shown that when intervals between cue and target are fixed, saccade rate prior to the target is lower than when they are varied. However, it is still an open question whether this effect is linked to temporal expectation to the predictable target, or to the duration estimation of the interval preceding it. Here, we examined this question in 20 participants while they performed an implicit temporal expectation and an explicit time estimation task. In each trial, following cue onset, two consecutive grating patches were presented, each preceded by an interval. Temporal expectation was manipulated by setting the first interval duration to be either fixed or varied within each block. Participants were requested to compare either the durations of the two intervals (time estimation), or the tilts of the two grating patches (temporal expectation). Saccade rate, measured prior to the first grating, was lower in the fixed relative to the varied condition of both tasks. This suggests that the inhibition effect is elicited by target predictability and indicates that it is linked to temporal expectation, rather than to time estimation processes. Additionally, this finding suggests that the oculomotor inhibition is independent of motor readiness, as it was elicited even when no response was required. 
We conclude that the prestimulus oculomotor inhibition effect can be used as a marker of temporal expectation, and discuss its potential underlying mechanisms.","link":"/opendata/tal-perry-yuval-greenberg-2021/"},{"title":"Tarantola et al. (2017)","text":"Our personal preferences affect a broad array of social behaviors. This includes the way we learn the preferences of others, an ability that often relies on limited or ambiguous information. Here we report an egocentric influence on this type of social learning that is reflected in both performance and response times. Using computational models that combine inter-trial learning and intra-trial choice, we find transient effects of participants’ preferences on the learning process, through the influence of priors, and persistent effects on the choice process. A second experiment shows that these effects generalize to non-social learning, though participants in the social learning experiment appeared to additionally benefit by using their knowledge about the popularity of certain preferences. We further find that the domain-general egocentric influences we identify can yield performance advantages in uncertain environments. People often assume that other people share their preferences, but how exactly this bias manifests itself in learning and decision-making is unclear. Here, the authors show that a person’s own preferences influence learning in both social and non-social situations, and that this bias improves performance.","link":"/opendata/tarantola-et-al-2017/"},{"title":"Tardiff et al. (2022)","text":"Expectations, such as those arising from either learned rules or recent stimulus regularities, can bias subsequent auditory perception in diverse ways. However, it is not well understood if and how these diverse effects depend on the source of the expectations. Further, it is unknown whether different sources of bias use the same or different computational and physiological mechanisms. 
We examined how rule-based and stimulus-based expectations influenced behavior and pupil-linked arousal, a marker of certain forms of expectation-based processing, of human subjects performing an auditory frequency-discrimination task. Rule-based cues consistently biased choices and response times (RTs) toward the more-probable stimulus. In contrast, stimulus-based cues had a complex combination of effects, including choice and RT biases toward and away from the frequency of recently presented stimuli. These different behavioral patterns also had: 1) distinct computational signatures, including different modulations of key components of a novel form of a drift-diffusion decision model and 2) distinct physiological signatures, including substantial bias-dependent modulations of pupil size in response to rule-based but not stimulus-based cues. These results imply that different sources of expectations can modulate auditory processing via distinct mechanisms: one that uses arousal-linked, rule-based information and another that uses arousal-independent, stimulus-based information to bias the speed and accuracy of auditory perceptual decisions.","link":"/opendata/tardiff-et-al-2022/"},{"title":"Tashjian et al. (2022)","text":"Protection often involves the capacity to prospectively plan the actions needed to mitigate harm. The computational architecture of decisions involving protection remains unclear, as well as whether these decisions differ from other beneficial prospective actions such as reward acquisition. Here we compare protection acquisition to reward acquisition and punishment avoidance to examine overlapping and distinct features across the three action types. Protection acquisition is positively valenced similar to reward. For both protection and reward, the more the actor gains, the more benefit. However, reward and protection occur in different contexts, with protection existing in aversive contexts. 
Punishment avoidance also occurs in aversive contexts, but differs from protection because punishment is negatively valenced and motivates avoidance. Across three independent studies (Total N = 600) we applied computational modeling to examine model-based reinforcement learning for protection, reward, and punishment in humans. Decisions motivated by acquiring protection evoked a higher degree of model-based control than acquiring reward or avoiding punishment, with no significant differences in learning rate. The context-valence asymmetry characteristic of protection increased deployment of flexible decision strategies, suggesting model-based control depends on the context in which outcomes are encountered as well as the valence of the outcome.","link":"/opendata/tashjian-et-al-2022/"},{"title":"Tavoni et al. (2022)","text":"We must often infer latent properties of the world from noisy and changing observations. Complex, probabilistic approaches to this challenge such as Bayesian inference are accurate but cognitively demanding, relying on extensive working memory and adaptive processing. Simple heuristics are easy to implement but may be less accurate. What is the appropriate balance between complexity and accuracy? Here we model a hierarchy of strategies of variable complexity and find a power law of diminishing returns: increasing complexity gives progressively smaller gains in accuracy. The rate of diminishing returns depends systematically on the statistical uncertainty in the world, such that complex strategies do not provide substantial benefits over simple ones when uncertainty is either too high or too low. In between, there is a complexity dividend. In two psychophysical experiments, we confirm specific model predictions about how working memory and adaptivity should be modulated by uncertainty.","link":"/opendata/tavoni-et-al-2022/"},{"title":"Teghil et al. 
(2020)","text":"Behavioral evidence suggests that different mechanisms mediate duration perception depending on whether regular or irregular cues for time estimation are provided, and that individual differences in interoceptive processing may affect duration perception only in the latter case. However, no study has addressed brain correlates of this proposed distinction. Here participants performed a duration reproduction task in two conditions: with unevenly spaced stimuli during time estimation/reproduction (irregular), with regularly spaced stimuli provided during the same task (regular). They also underwent resting-state fMRI to assess regional functional connectivity, in order to link individual differences in behavioral performance to variations in patterns of intrinsic brain oscillations. Resting-state functional connectivity of the right precentral gyrus with the ipsilateral insula and putamen was predicted by duration reproduction performance selectively in the irregular condition. The connectivity of the right posterior insula, within a network modulated by participants’ degree of interoceptive awareness, correlated positively with performance in the irregular condition only. Findings support the distinction between brain networks involved in duration processing with or without regular cues, and the hypothesis that the multimodal integration of interoceptive and exteroceptive cues is specifically involved in the latter.","link":"/opendata/teghil-et-al-2020/"},{"title":"Teicher & Parigger (2015)","text":"There is increasing interest in childhood maltreatment as a potent stimulus that may alter trajectories of brain development, induce epigenetic modifications and enhance risk for medical and psychiatric disorders. Although a number of useful scales exist for retrospective assessment of abuse and neglect they have significant limitations. 
Moreover, they fail to provide detailed information on timing of exposure, which is critical for delineation of sensitive periods. The Maltreatment and Abuse Chronology of Exposure (MACE) scale was developed in a sample of 1051 participants using item response theory to gauge severity of exposure to ten types of maltreatment (emotional neglect, non-verbal emotional abuse, parental physical maltreatment, parental verbal abuse, peer emotional abuse, peer physical bullying, physical neglect, sexual abuse, witnessing interparental violence and witnessing violence to siblings) during each year of childhood. Items included in the subscales had acceptable psychometric properties based on infit and outfit mean square statistics, and each subscale passed Andersen’s Likelihood ratio test. The MACE provides an overall severity score and multiplicity score (number of types of maltreatment experienced) with excellent test-retest reliability. Each type of maltreatment showed good reliability as did severity of exposure across each year of childhood. MACE Severity correlated 0.738 with Childhood Trauma Questionnaire (CTQ) score and MACE Multiplicity correlated 0.698 with the Adverse Childhood Experiences scale (ACE). However, MACE accounted for 2.00- and 2.07-fold more of the variance, on average, in psychiatric symptom ratings than CTQ or ACE, respectively, based on variance decomposition. Different types of maltreatment had distinct and often unique developmental patterns. The 52-item MACE, a simpler Maltreatment Abuse and Exposure Scale (MAES) that only assesses overall exposure and the original test instrument (MACE-X) with several additional items plus spreadsheets and R code for scoring are provided to facilitate use and to spur further development.","link":"/opendata/teicher-parigger-2015/"},{"title":"Ten et al. (2021)","text":"Curiosity-driven learning is foundational to human cognition. 
By enabling humans to autonomously decide when and what to learn, curiosity has been argued to be crucial for self-organizing temporally extended learning curricula. However, the mechanisms driving people to set intrinsic goals, when they are free to explore multiple learning activities, are still poorly understood. Computational theories propose different heuristics, including competence measures (e.g., percent correct) and learning progress, that could be used as intrinsic utility functions to efficiently organize exploration. Such intrinsic utilities constitute computationally cheap but smart heuristics to prevent people from laboring in vain on unlearnable activities, while still motivating them to self-challenge on difficult learnable activities. Here, we provide empirical evidence for these ideas by means of a free-choice experimental paradigm and computational modeling. We show that while humans rely on competence information to avoid easy tasks, models that include a learning-progress component provide the best fit to task selection data. These results bridge the research in artificial and biological curiosity, reveal strategies that are used by humans but have not been considered in computational research, and introduce tools for probing how humans become intrinsically motivated to learn and acquire interests and skills on extended time scales.","link":"/opendata/ten-et-al-2021/"},{"title":"Teoh et al. (2020)","text":"Dual-process models of altruistic choice assume that automatic responses give way to deliberation over time, and are a popular way to conceptualize how people make generous choices and why those choices might change under time pressure. However, these models have led to conflicting interpretations of behaviour and underlying psychological dynamics. Here, we propose that flexible, goal-directed deployment of attention towards information priorities provides a more parsimonious account of altruistic choice dynamics. 
We demonstrate that time pressure tends to produce early gaze-biases towards a person’s own outcomes, and that individual differences in this bias explain how individuals’ generosity changes under time pressure. Our gaze-informed drift-diffusion model incorporating moment-to-moment eye-gaze further reveals that underlying social preferences both drive attention, and interact with it to shape generosity under time pressure. These findings help explain existing inconsistencies in the field by emphasizing the role of dynamic attention-allocation during altruistic choice.","link":"/opendata/teoh-et-al-2020/"},{"title":"Teoh & Hutcherson (2022)","text":"Time pressure is a powerful experimental manipulation frequently used to arbitrate between competing dual-process models of prosocial decision-making, which typically assume that automatic responses yield to deliberation over time. However, the use of time pressure has led to conflicting conclusions about the psychological dynamics of prosociality. Here, we proposed that flexible, context-sensitive information search, rather than automatic responses, underlies these divergent effects of time pressure on prosociality. We demonstrated in two preregistered studies (N = 304 adults from the United States and Canada; Prolific Academic) that different prosocial contexts (i.e., pure altruism vs. cooperation) have distinct effects on information search, driving people to prioritize information differently, particularly under time pressure. Furthermore, these information priorities subsequently influence prosocial choices, accounting for the different effects of time pressure in altruistic and cooperative contexts. These findings help explain existing inconsistencies in the field by emphasizing the role of dynamic context-sensitive information search during social decision-making, particularly under time pressure.","link":"/opendata/teoh-hutcherson-2022/"},{"title":"Thomas et al. 
(2019)","text":"How do we make simple choices such as deciding between an apple and an orange? Recent empirical evidence suggests that choice behaviour and gaze allocation are closely linked at the group level, whereby items looked at longer during the decision-making process are more likely to be chosen. However, it is unclear how variable this gaze bias effect is between individuals. Here we investigate this question across four different simple choice experiments and using a computational model that can be easily applied to individuals. We show that an association between gaze and choice is present for most individuals, but differs considerably in strength. Generally, individuals with a strong association between gaze and choice behaviour are worse at choosing the best item from a choice set compared with individuals with a weak association. Accounting for individuals’ variability in gaze bias in the model can explain and accurately predict individual differences in choice behaviour.","link":"/opendata/thomas-et-al-2019/"},{"title":"Thunberg et al. (2023)","text":"Response inhibition, the intentional stopping of planned or initiated actions, is often considered a key facet of control, impulsivity, and self-regulation. The stop signal task is argued to be the purest inhibition task we have, and it is thus central to much work investigating the role of inhibition in areas like development and psychopathology. Most of this work quantifies stopping behavior by calculating the stop signal reaction time as a measure of individual stopping latency. Individual difference studies aiming to investigate why and how stopping latencies differ between people often do this under the assumption that the stop signal reaction time indexes a stable, dispositional trait. However, empirical support for this assumption is lacking, as common measures of inhibition and control tend to show low test-retest reliability and thus appear unstable over time. 
The reasons for this could be methodological, where low stability is driven by measurement noise, or substantive, where low stability is driven by a larger influence of state-like and situational factors. To investigate this, we characterized the split-half and test-retest reliability of a range of common behavioral and electrophysiological measures derived from the stop signal task. Across three independent studies, different measurement modalities, and a systematic review of the literature, we found a pattern of low temporal stability for inhibition measures and higher stability for measures of manifest behavior and non-inhibitory processing. This pattern could not be explained by measurement noise and low internal consistency. Consequently, response inhibition appears to have mostly state-like and situational determinants, and there is little support for the validity of conceptualizing common inhibition measures as reflecting stable traits.","link":"/opendata/thunberg-et-al-2023/"},{"title":"Thyer et al. (2022)","text":"Past work has shown that storage in working memory elicits stimulus-specific neural activity that tracks the stored content. Here, we present evidence for a distinct class of load-sensitive neural activity that indexes items without representing their contents per se. We recorded electroencephalogram (EEG) activity while adult human subjects stored varying numbers of items in visual working memory. Multivariate analysis of the scalp topography of EEG voltage enabled precise tracking of the number of individuated items stored and robustly predicted individual differences in working memory capacity. Critically, this signature of working memory load generalized across variations in both the type and number of visual features stored about each item, suggesting that it tracked the number of individuated memory representations and not the content of those memories. 
We hypothesize that these findings reflect the operation of a capacity-limited pointer system that supports on-line storage and attentive tracking.","link":"/opendata/thyer-et-al-2022/"},{"title":"Tomov et al. (2021)","text":"The ability to transfer knowledge across tasks and generalize to novel ones is an important hallmark of human intelligence. Yet not much is known about human multitask reinforcement learning. We study participants’ behaviour in a two-step decision-making task with multiple features and changing reward functions. We compare their behaviour with two algorithms for multitask reinforcement learning, one that maps previous policies and encountered features to new reward functions and one that approximates value functions across tasks, as well as to standard model-based and model-free algorithms. Across three exploratory experiments and a large preregistered confirmatory experiment, our results provide evidence that participants who are able to learn the task use a strategy that maps previously learned policies to novel scenarios. These results enrich our understanding of human reinforcement learning in complex environments with changing task demands.","link":"/opendata/tomov-et-al-2021/"},{"title":"Torok et al. (2022)","text":"Internal models capture the regularities of the environment and are central to understanding how humans adapt to environmental statistics. In general, the correct internal model is unknown to observers; instead, they rely on an approximate model that is continually adapted throughout learning. However, experimenters assume an ideal observer model, which captures stimulus structure but ignores the diverging hypotheses that humans form during learning. We combine non-parametric Bayesian methods and probabilistic programming to infer rich and dynamic individualised internal models from response times. 
We demonstrate that the approach is capable of characterizing the discrepancy between the internal model maintained by individuals and the ideal observer model and to track the evolution of the contribution of the ideal observer model to the internal model throughout training. In particular, in an implicit visuomotor sequence learning task the identified discrepancy revealed an inductive bias that was consistent across individuals but varied in strength and persistence.","link":"/opendata/torok-et-al-2022/"},{"title":"Torres et al. (2023)","text":"Visual working memory is thought to have a fixed capacity limit. However, recent evidence suggests that capacity is greater for real-world objects compared to simple features (i.e., colors). Here, we examined whether greater working memory for objects was due to greater memorability. In online samples of young adults, real-world objects were better remembered than colors, which was attributed to a higher proportion of high-confidence responses (Exp 1). Memory performance for objects was also improved compared to their scrambled counterparts (Exp 2), indicating that this benefit is unrelated to visual complexity. Critically, the specific objects that were likely to be remembered with high confidence were highly correlated across experiments, consistent with the idea that some objects are more memorable than others. Object memorability also predicted memory performance between objects within a display (Exp 3). These findings suggest that the object benefit in working memory may be supported by stimulus memorability.","link":"/opendata/torres-et-al-2023/"},{"title":"Toso et al. (2021)","text":"The connection between stimulus perception and time perception remains unknown. The present study combines human and rat psychophysics with sensory cortical neuronal firing to construct a computational model for the percept of elapsed time embedded within sense of touch. 
When subjects judged the duration of a vibration applied to the fingertip (human) or whiskers (rat), increasing stimulus intensity led to increasing perceived duration. Symmetrically, increasing vibration duration led to increasing perceived intensity. We modeled real spike trains recorded from vibrissal somatosensory cortex as input to dual leaky integrators-an intensity integrator with short time constant and a duration integrator with long time constant-generating neurometric functions that replicated the actual psychophysical functions of rats. Returning to human psychophysics, we then confirmed specific predictions of the dual leaky integrator model. This study offers a framework, based on sensory coding and subsequent accumulation of sensory drive, to account for how a feeling of the passage of time accompanies the tactile sensory experience.","link":"/opendata/toso-et-al-2021/"},{"title":"Traner et al. (2021)","text":"Classic foraging theory predicts that humans and animals aim to gain maximum reward per unit time. However, in standard instrumental conditioning tasks individuals adopt an apparently suboptimal strategy: they respond slowly when the expected value is low. This reward-related bias is often explained as reduced motivation in response to low rewards. Here we present evidence this behavior is associated with a complementary increased motivation to search the environment for alternatives. We trained monkeys to search for reward-related visual targets in environments with different values. We found that the reward-related bias scaled with environment value, was consistent with persistent searching after the target was already found, and was associated with increased exploratory gaze to objects in the environment. 
A novel computational model of foraging suggests that this search strategy could be adaptive in naturalistic settings where both environments and the objects within them provide partial information about hidden, uncertain rewards.","link":"/opendata/traner-et-al-2021/"},{"title":"Troudart & Shahar (2023)","text":"Acting in a goal-directed manner requires an ability to accurately predict the outcomes of one’s actions. However, not much is known regarding how threat-related cues influence our ability to form action-outcome associations according to the environment’s known causal structure. Here, we examined the extent to which threat-related cues influence individuals’ tendency to form and act in accordance with action-outcome associations that do not exist in the external environment (i.e., outcome-irrelevant learning). Forty-nine healthy participants completed an online multi-armed reinforcement-learning bandit task in which they were asked to help a child safely cross a street. Outcome-irrelevant learning was estimated as a tendency to assign value to response keys that did not predict an outcome but were used to report participants’ choices. We first replicated previous findings showing that individuals tend to form and act in accordance with irrelevant action-outcome associations, across experimental conditions, and despite explicit knowledge regarding the environment’s true structure. Importantly, results of a Bayesian regression analysis suggest that the presentation of threat-related images, compared with neutral or no visual cues at trial initiation, increased outcome-irrelevant learning. We discuss outcome-irrelevant learning as a possible theoretical mechanism that may lead to altered learning in the face of perceived threat.","link":"/opendata/troudart-shahar-2023/"},{"title":"Tsigeman et al. 
(2022)","text":"Visuospatial working memory (VSWM) is essential to human cognitive abilities and is associated with important life outcomes such as academic performance. Recently, a number of reliable measures of VSWM have been developed to help understand psychological processes and for practical use in education. We sought to extend this work using Item Response Theory (IRT) and Computerised Adaptive Testing (CAT) frameworks to construct, calibrate and validate a new adaptive, computerised, and open-source VSWM test. We aimed to overcome the limitations of previous instruments and provide researchers with a valid and freely available VSWM measurement tool. The Jack and Jill (JaJ) VSWM task was constructed using explanatory item response modelling of data from a sample of the general adult population (Study 1, N = 244) in the UK and US. Subsequently, a static version of the task was tested for validity and reliability using a sample of adults from the UK and Australia (Study 2, N = 148) and a sample of Russian adolescents (Study 3, N = 263). Finally, the adaptive version of the JaJ task was implemented on the basis of the underlying IRT model and evaluated with another sample of Russian adolescents (Study 4, N = 239). JaJ showed sufficient internal consistency and concurrent validity as indicated by significant and substantial correlations with established measures of working memory, spatial ability, non-verbal intelligence, and academic achievement. The findings suggest that JaJ is an efficient and reliable measure of VSWM from adolescent to adult age.","link":"/opendata/tsigeman-et-al-2022/"},{"title":"Tsuji & Imaizumi (2022)","text":"Sense of agency (SoA) is a feeling of controlling one’s own action. Recognition memory can improve for stimuli that involve SoA perhaps because of the self-reference effect. Recognition memory consists of recollection (i.e., detailed memory of stimuli) and familiarity (i.e., a feeling that stimuli are in memory). 
The self-reference effect is often observed in recollection. Here, we investigated whether SoA particularly improves the recollection process. Participants pressed a key to produce an outcome (i.e., box movement followed by word presentation in Experiment 1 or word presentation in Experiment 2) and rated their SoA over the outcome. The outcome was spatially congruent or incongruent with the action. The participants learned the words intentionally (Experiment 1) or incidentally (Experiment 2). Performances of recollection and familiarity were assessed using the remember/know procedure. Our results suggest that the participants’ SoA was successfully manipulated. However, contrary to our hypothesis and previous findings, we found no effects of voluntary action and action-outcome congruence on recollection and familiarity processes of recognition memory, regardless of the latency of word presentation and learning strategies. Further studies are needed to replicate and elucidate the relationship between the SoA and recognition memory.","link":"/opendata/tsuji-imaizumi-2022/"},{"title":"Turan et al. (2023)","text":"Generating predictions about environmental regularities, relying on these predictions, and updating these predictions when there is a violation from incoming sensory evidence are considered crucial functions of our cognitive system for being adaptive in the future. The violation of a prediction can result in a prediction error (PE) which affects subsequent memory processing. In our preregistered studies, we examined the effects of different levels of PE on episodic memory. Participants were asked to generate predictions about the associations between sequentially presented cue-target pairs, which were violated later with individual items in three PE levels, namely low, medium, and high PE. Thereafter, participants were asked to provide old/new judgments on the items with confidence ratings, and to retrieve the paired cues.
Our results indicated better recognition memory for the low PE level than the medium and high PE levels, suggesting a memory congruency effect. On the other hand, there was no evidence of a memory benefit for the high PE level. Together, these novel and coherent findings strongly suggest that high PE does not guarantee better memory.","link":"/opendata/turan-et-al-2023/"},{"title":"Turi et al. (2017)","text":"According to the placebo-reward hypothesis, placebo is a reward-anticipation process that increases midbrain dopamine (DA) levels. Reward-based learning processes, such as reinforcement learning, involve a large part of the DA-ergic network that is also activated by the placebo intervention. Given the neurochemical overlap between placebo and reward learning, we investigated whether verbal instructions in conjunction with a placebo intervention are capable of enhancing reward learning in healthy individuals by using a monetary reward-based reinforcement-learning task. Placebo intervention was performed with non-invasive brain stimulation techniques. In a randomized, triple-blind, cross-over study we investigated this cognitive placebo effect in healthy individuals by manipulating the participants’ perceived uncertainty about the intervention’s efficacy. Volunteers in the purportedly low- and high-uncertainty conditions earned more money, responded more quickly and had a higher learning rate from monetary rewards relative to baseline. Participants in the purportedly high-uncertainty conditions showed enhanced reward learning, and a model-free computational analysis revealed a higher learning rate from monetary rewards compared to the purportedly low-uncertainty and baseline conditions. Our results indicate that the placebo response is able to enhance reward learning in healthy individuals, opening up exciting avenues for future research in placebo effects on other cognitive functions.","link":"/opendata/turi-et-al-2017/"},{"title":"Tylen et al.
(2023)","text":"Capacities for abstract thinking and problem-solving are central to human cognition. Processes of abstraction allow the transfer of experiences and knowledge between contexts, helping us make informed decisions in new or changing contexts. While we are often inclined to relate such reasoning capacities to individual minds and brains, they may in fact be contingent on human-specific modes of collaboration, dialogue, and shared attention. In an experimental study, we test the hypothesis that social interaction enhances cognitive processes of rule-induction, which in turn improves problem-solving performance. Through three sessions of increasing complexity, individuals and groups were presented with a problem-solving task requiring them to categorize a set of visual stimuli. To assess the character of participants’ problem representations, after each training session, they were presented with a transfer task involving stimuli that differed in appearance, but shared relations among features with the training set. In addition, we compared participants’ categorization behaviors to simulated agents relying on exemplar learning. We found that groups performed better than individuals and agents in the training sessions and were more likely to correctly generalize their observations in the transfer phase, especially in the high complexity session, suggesting that groups more effectively induced underlying categorization rules from the stimuli than individuals and agents. Crucially, variation in performance among groups was predicted by semantic diversity in members’ dialogical contributions, suggesting a link between social interaction, cognitive diversity, and abstraction.","link":"/opendata/tylen-et-al-2023/"},{"title":"Tymula et al.
(2023)","text":"Research in the multidisciplinary field of neuroeconomics has mainly been driven by two influential theories regarding human economic choice: prospect theory, which describes decision-making under risk, and reinforcement learning theory, which describes learning for decision-making. We hypothesized that these two distinct theories guide decision-making in a comprehensive manner. Here, we propose and test a decision-making theory under uncertainty that combines these highly influential theories. Collecting many gambling decisions from laboratory monkeys allowed for reliable testing of our model and revealed a systematic violation of prospect theory’s assumption that probability weighting is static. Using the same experimental paradigm in humans, substantial similarities between these species were uncovered by various econometric analyses of our dynamic prospect theory model, which incorporates decision-by-decision learning dynamics of prediction errors into static prospect theory. Our model provides a unified theoretical framework for exploring a neurobiological model of economic choice in human and nonhuman primates.","link":"/opendata/tymula-et-al-2023/"},{"title":"Unger & Sloutsky (2022)","text":"Our knowledge of the world is populated with categories such as dogs, cups, and chairs. Such categories shape how we perceive, remember, and reason about their members. Much of our exposure to the entities we come to categorize occurs incidentally as we experience and interact with them in our everyday lives, with limited access to explicit teaching. This research investigated whether incidental exposure contributes to building category knowledge by rendering people “ready to learn,” allowing them to rapidly capitalize on brief access to explicit teaching. Across five experiments (N = 438 adults), we found that incidental exposure did produce a ready-to-learn effect, even when learners showed no evidence of robust category learning during exposure.
Importantly, this readiness to learn occurred only when categories possessed a rich structure in which many features were correlated within categories. These findings offer a window into how our everyday experiences may contribute to building category knowledge.","link":"/opendata/unger-sloutsky-2022/"},{"title":"Urai et al. (2017)","text":"While judging their sensory environments, decision-makers seem to use the uncertainty about their choices to guide adjustments of their subsequent behaviour. One possible source of these behavioural adjustments is arousal: decision uncertainty might drive the brain’s arousal systems, which control global brain state and might thereby shape subsequent decision-making. Here, we measure pupil diameter, a proxy for central arousal state, in human observers performing a perceptual choice task of varying difficulty. Pupil dilation, after choice but before external feedback, reflects three hallmark signatures of decision uncertainty derived from a computational model. This increase in pupil-linked arousal boosts observers’ tendency to alternate their choice on the subsequent trial. We conclude that decision uncertainty drives rapid changes in pupil-linked arousal state, which shape the serial correlation structure of ongoing choice behaviour.","link":"/opendata/urai-et-al-2017/"},{"title":"van-Baar et al. (2019)","text":"Individuals employ different moral principles to guide their social decision-making, thus expressing a specific ‘moral strategy’. Which computations characterize different moral strategies, and how might they be instantiated in the brain? Here, we tackle these questions in the context of decisions about reciprocity using a modified Trust Game. We show that different participants spontaneously and consistently employ different moral strategies.
By mapping an integrative computational model of reciprocity decisions onto brain activity using inter-subject representational similarity analysis of fMRI data, we find markedly different neural substrates for the strategies of ‘guilt aversion’ and ‘inequity aversion’, even under conditions where the two strategies produce the same choices. We also identify a new strategy, ‘moral opportunism’, in which participants adaptively switch between guilt and inequity aversion, with a corresponding switch observed in their neural activation patterns. These findings provide valuable insight into how different individuals may utilize different moral principles.","link":"/opendata/van-baar-et-al-2019/"},{"title":"van-den-Berg et al. (2023)","text":"Previous work has shown that humans distribute their visual working memory (VWM) resources flexibly across items: the higher the importance of an item, the better it is remembered. A related, but much less studied question is whether people also have control over the total amount of VWM resource allocated to a task. Here, we approach this question by testing whether increasing monetary incentives results in better overall VWM performance. In three experiments, subjects performed a delayed-estimation task on the Amazon Mechanical Turk platform. In the first two experiments, four groups of subjects received a bonus payment based on their performance, with the maximum bonus ranging from $0 to $10 between groups. We found no effect of the amount of bonus on intrinsic motivation or on VWM performance in either experiment. In the third experiment, reward was manipulated on a trial-by-trial basis using a within-subjects design. Again, no evidence was found that VWM performance depended on the magnitude of potential reward.
These results suggest that encoding quality in visual working memory is insensitive to monetary reward, which has implications for resource-rational theories of VWM.","link":"/opendata/van-den-berg-et-al-2023/"},{"title":"van-Lieshout et al. (2021)","text":"You probably know what kind of things you are curious about, but can you also explain what it feels like to be curious? Previous studies have demonstrated that we are particularly curious when uncertainty is high and when information provides us with a substantial update of what we know. It is unclear, however, whether this drive to seek information (curiosity) is appetitive or aversive. Curiosity might correspond to an appetitive drive elicited by the state of uncertainty, because we like that state, or rather it might correspond to an aversive drive to reduce the state of uncertainty, because we don’t like it. To investigate this, we obtained both subjective valence (happiness) and curiosity ratings from subjects who performed a lottery task that elicits uncertainty-dependent curiosity. We replicated a strong main effect of outcome uncertainty on curiosity: Curiosity increased with outcome uncertainty, irrespective of whether the outcome represented a monetary gain or loss. By contrast, happiness decreased with higher outcome uncertainty. This indicates that people were more curious, but less happy about lotteries with higher outcome uncertainty. These findings raise the hypothesis, to be tested in future work, that curiosity reflects an aversive drive to reduce the unpleasant state of uncertainty.","link":"/opendata/van-lieshout-et-al-2021/"},{"title":"van Baar et al. (2021)","text":"Predicting the behaviour of others is an essential part of social cognition. Despite its ubiquity, social prediction poses a poorly understood generalization problem: we cannot assume that others will repeat past behaviour in new settings or that their future actions are entirely unrelated to the past. 
We demonstrate that humans solve this challenge using a structure learning mechanism that uncovers other people’s latent, unobservable motives, such as greed and risk aversion. In four studies, participants (N = 501) predicted other players’ decisions across four economic games, each with different social tensions (for example, Prisoner’s Dilemma and Stag Hunt). Participants achieved accurate social prediction by learning the stable motivational structure underlying a player’s changing actions across games. This motive-based abstraction enabled participants to attend to information diagnostic of the player’s next move and disregard irrelevant contextual cues. Participants who successfully learned another’s motives were more strategic in a subsequent competitive interaction with that player in entirely new contexts, reflecting that social structure learning supports adaptive social behaviour.","link":"/opendata/van-baar-et-al-2021/"},{"title":"Vandendriessche et al. (2022)","text":"Value-based decision-making impairment in depression is a complex phenomenon: while some studies did find evidence of blunted reward learning and reward-related signals in the brain, others indicate no effect. Here we test whether such reward sensitivity deficits are dependent on the overall value of the decision problem. We used a two-armed bandit task with two different contexts: one rich and one poor, where both options were associated with an overall positive or negative expected value, respectively. We tested patients (N = 30) undergoing a major depressive episode and age, gender and socio-economically matched controls (N = 26). Learning performance, followed by a transfer phase without feedback, was analyzed to disentangle between a decision mechanism and a value-update mechanism. Finally, we used computational model simulation and fitting to link behavioral patterns to learning biases.
Control subjects showed similar learning performance in the rich and the poor contexts, while patients displayed reduced learning in the poor context. Analysis of the transfer phase showed that the context-dependent impairment in patients generalized, suggesting that the effect of depression has to be traced to the outcome encoding. Computational model-based results showed that patients displayed a higher learning rate for negative compared to positive outcomes (the opposite was true in controls). Our results illustrate that reinforcement learning performance in depression depends on the value of the context. We show that depressive patients have particular difficulty in contexts with an overall negative state value, which in our task is consistent with a negativity bias at the level of the learning rates.","link":"/opendata/vandendriessche-et-al-2022/"},{"title":"van Timmeren et al. (2022)","text":"Despite our familiarity with the concept of habits, eliciting and measuring habits experimentally in humans has proven to be difficult. A possible explanation is that participants in psychological experiments actively recruit goal-directed control and therefore make few habitual slips-of-action in the presence of stimuli signalling devalued outcomes. In the current experiment we used the symmetrical outcome-revaluation task in combination with a working memory load in an attempt to tip the balance from goal-directed control to stimulus-response habit. During the instrumental learning phase, participants learned to make a Go response to stimuli signalling valuable outcomes (and points) while not responding (NoGo) to stimuli signalling not-valuable outcomes. During the subsequent test phase, the outcomes signalled by the stimuli were either value-congruent with training (still-valuable and still-not-valuable), or value-incongruent (devalued and upvalued).
Participants had to flexibly adjust their behaviour on value-incongruent trials where the stimulus-response association learned during training was no longer appropriate. For half the participants, a concurrent working memory load was imposed during the test phase. In line with our preregistered hypotheses, participants showed evidence for habitual slips-of-action but those under working memory load showed increased habit tendencies (specifically failures to inhibit prepotent Go responses in the presence of stimuli signalling devalued outcomes). This central finding suggests that a working memory load can be used to reveal habits in humans.","link":"/opendata/van-timmeren-et-al-2022/"},{"title":"Vellani et al. (2022)","text":"Individual differences in behaviour, traits and mental health are partially heritable. Traditionally, studies have focused on quantifying the heritability of high-order characteristics, such as happiness or educational attainment. Here, we quantify the degree of heritability of lower-level mental processes that likely contribute to complex traits and behaviour. In particular, we quantify the degree of heritability of cognitive and affective factors that contribute to the generation of beliefs about risk, which drive behavior in domains ranging from finance to health. Monozygotic and dizygotic twin pairs completed a belief formation task. We first show that beliefs about risk are associated with vividness of imagination, affective evaluation and learning abilities. We then demonstrate that the genetic contribution to individual differences in these processes ranges between 13.5% and 39%, with affect evaluation showing a particularly robust heritability component. These results provide clues to which mental factors may be driving the heritability component of belief formation, which in turn contributes to the heritability of complex traits.","link":"/opendata/vellani-et-al-2022/"},{"title":"Verdejo-Garcia et al.
(2021)","text":"Impulsive behaviours are a major contributor to the global burden of disease, but existing measures of cognitive impulsivity have suboptimal reliability and validity. Here, we introduce the Cognitive Impulsivity Suite, comprising three computerized/online tasks using a gamified interface. We conceptualize rapid-response impulsive behaviours (disinhibition) as arising from the failure of three distinct cognitive mechanisms: attentional control, information gathering and monitoring/shifting. We demonstrate the construct and criterion validity of the Cognitive Impulsivity Suite in an online community sample (N = 1,056), show test-retest reliability and between-subjects variability in a face-to-face community sample (N = 63), and replicate the results in a community and clinical sample (N = 578). The results support the theoretical architecture of the attentional control, information gathering and monitoring/shifting constructs. The Cognitive Impulsivity Suite demonstrated incremental criterion validity for prediction of real-world, addiction-related problems and is a promising tool for large-scale research on cognitive impulsivity.","link":"/opendata/verdejo-garcia-et-al-2021/"},{"title":"Vieira & Olsson (2022)","text":"Empathy for others’ distress has long been considered the driving force of helping. However, when deciding to help others in danger, one must consider not only their distress, but also the risk to oneself. Whereas the role of self-defense in helping has been overlooked in human research, studies in other animals indicate that defensive responses are necessary for the protection of conspecifics. In this pre-registered study (N=49), we demonstrate that human defensive neural circuits are implicated in helping others under threat. Participants underwent fMRI scanning while deciding whether to help another participant avoid aversive electrical shocks, at the risk of also being shocked.
We found that higher engagement of neural circuits that coordinate fast escape from self-directed danger (including the insula, PAG, and ACC) facilitated decisions to help others. Importantly, using representational similarity analysis, we found that the strength with which the amygdala and insula uniquely represented the threat to oneself (and not the other’s distress) predicted helping. Our findings indicate that in humans, as in other mammals, defensive mechanisms play a greater role in helping behavior than previously understood.","link":"/opendata/vieira-olsson-2022/"},{"title":"Vilgis et al. (2022)","text":"Working memory deficits are common in attention-deficit/hyperactivity disorder (ADHD) and depression, two common neurodevelopmental disorders with overlapping cognitive profiles but distinct clinical presentation. Multivariate techniques have previously been utilized to understand working memory processes in functional brain networks in healthy adults but have not yet been applied to investigate how working memory processes within the same networks differ within typically and atypically developing populations. We used multivariate pattern analysis (MVPA) to identify whether brain networks discriminated between spatial versus verbal working memory processes in ADHD and Persistent Depressive Disorder (PDD). Thirty-six male clinical participants and 19 typically developing (TD) boys participated in an fMRI scan while completing a verbal and a spatial working memory task. Within a priori functional brain networks (frontoparietal, default mode, salience), the TD group demonstrated differential response patterns to verbal and spatial working memory. The PDD group showed weaker differentiation than TD, with lower classification accuracies observed in primarily the left frontoparietal network.
The neural profiles of the ADHD and PDD groups differed specifically in the salience network, where the ADHD group’s neural profile suggests significantly less specificity in neural representations of spatial and verbal working memory. We highlight within-group classification as an innovative tool for understanding the neural mechanisms of how cognitive processes may deviate in clinical disorders, an important intermediary step towards improving translational psychiatry.","link":"/opendata/vilgis-et-al-2022/"},{"title":"Visalli et al. (2022)","text":"Evidence is discordant regarding how emotional processing and cognitive control interact to shape behavior. This study sought to examine this interaction by looking at the distinction between proactive and reactive modes of control and how they relate to emotional processing. Seventy-four healthy participants performed an emotional priming Stroop task. On each trial, target stimuli of a spatial Stroop task were preceded by sad or neutral facial expressions, providing two emotional conditions. To manipulate the requirement of both proactive and reactive control, the proportion of congruent trials (PC) was varied at the list-wide (LWPC) and item-specific (ISPC) levels, respectively. We found that sad priming led to behavioral costs only in trials with low proactive and reactive cognitive control exertion. Our findings suggest that emotional processing affects processing stages other than cognitive control in the Stroop task. Moreover, both proactive and reactive control modes seem effective in overcoming emotional interference of priming stimuli.","link":"/opendata/visalli-et-al-2022/"},{"title":"Vives et al. (2023)","text":"Decisions made under uncertainty often are considered according to their perceived subjective value. We move beyond this traditional framework to explore the hypothesis that conceptual representations of uncertainty influence risky choice.
Results reveal that uncertainty concepts are represented along a dimension that jointly captures probabilistic and valenced features of the conceptual space. These uncertainty representations predict the degree to which an individual engages in risky decision-making. Moreover, we find that most individuals have two largely distinct representations: one for uncertainty and another for certainty. In contrast, a minority of individuals exhibit substantial overlap between their representations of uncertainty and certainty. Together, these findings reveal the relationship between the conceptualization of uncertainty and risky decisions.","link":"/opendata/vives-et-al-2023/"},{"title":"Viviani et al. (2023)","text":"The spatial Stroop task measures the ability to resolve interference between relevant and irrelevant spatial information. We recently proposed a four-choice spatial Stroop task that offers methodological advantages over the original color-word verbal Stroop task, requiring participants to indicate the direction of an arrow while ignoring its position in one of the screen corners. However, its peripheral spatial arrangement might represent a methodological weakness and could introduce experimental confounds. Thus, aiming to improve our “Peripheral” spatial Stroop, we designed and made available five novel spatial Stroop tasks (Perifoveal, Navon, Figure-Ground, Flanker, and Saliency), wherein the stimuli appeared at the center of the screen. In a within-subjects online study, we compared the six versions to identify which task produced the largest but also the most reliable and robust Stroop effect. Indeed, although internal reliability is frequently overlooked, its estimation is fundamental, particularly in light of the recently proposed reliability paradox.
Data analyses were performed using both the classical general linear model analytical approach and two multilevel modelling approaches (linear mixed models and random coefficient analysis), which specifically served to estimate the Stroop effect more accurately by accounting for intra-subject, trial-by-trial variability. We then assessed our results based on their robustness to such analytic flexibility. Overall, our results indicate that the Perifoveal spatial Stroop is the best alternative task for its statistical properties and methodological advantages. Interestingly, our results also indicate that the Peripheral and Perifoveal Stroop effects were not only the largest, but also those with the highest and most robust internal reliability.","link":"/opendata/viviani-et-al-2023/"},{"title":"von Clarenau et al. (2022)","text":"People routinely make decisions based on samples of numerical values. A common conclusion from the literature in psychophysics and behavioral economics is that observers subjectively compress magnitudes, such that extreme values have less sway over choice than prescribed by a normative model (underweighting). However, recent studies have reported evidence for anti-compression, that is, the relative overweighting of extreme values. Here, we investigate potential reasons for this discrepancy in findings and examine the possibility that it reflects adaptive responses to different task requirements. We performed a large-scale study (N = 607) of sequential numerical integration, manipulating (i) the task requirement (averaging a single stream or comparing two streams of numbers), (ii) the distribution of sample values (uniform or Gaussian), and (iii) their range (1 to 9 or 100 to 900). The data showed compression of subjective values in the averaging task, but anti-compression in the comparison task. This pattern held for both distribution types and for both ranges.
The findings are consistent with model simulations showing that either compression or anti-compression can be beneficial for noisy observers, depending on the sample-level processing demands imposed by the task.","link":"/opendata/von-clarenau-et-al-2022/"},{"title":"Voulgaropoulou et al. (2022)","text":"Humans are continuously exposed to stressful challenges in everyday life. Such stressful events trigger a complex physiological reaction - the fight-or-flight response - that can hamper flexible decision-making and learning. Inspired by key neural and peripheral characteristics of the fight-or-flight response, here, we ask whether acute stress changes how humans learn about costs and benefits. Healthy adults were randomly exposed to an acute stress (age mean=23.48, 21/40 female) or no-stress control (age mean=23.80, 22/40 female) condition, after which they completed a reinforcement learning task in which they minimize cost (physical effort) and maximize benefits (monetary rewards). During the task, pupillometry data were collected. A computational model of cost-benefit reinforcement learning was employed to investigate the effect of acute stress on cost and benefit learning and decision-making. Acute stress improved learning to maximize rewards relative to minimizing physical effort (Condition-by-Trial Type interaction: F(1,78) = 6.53, p = 0.01, η²G = 0.04; reward > effort in stress condition: t(39) = 5.40, p < 0.001; the reverse pattern in the control condition: t(39) = -4.75, p < 0.001). This process was associated with distinct alterations in pupil size fluctuations. Data and scripts are available (https://osf.io/ydv2q/). Here we demonstrate that acute stress is associated with asymmetric learning about reward value versus action cost, thereby providing new insights into learning strategies under acute stress, which, depending on the context, may be maladaptive or beneficial. 
Our pupillometry and physiological results tentatively link asymmetric cost and benefit learning to stress-related changes in catecholamine activity.","link":"/opendata/voulgaropoulou-et-al-2022/"},{"title":"Vuletich & Payne (2019)","text":"Can implicit bias be changed? In a recent longitudinal study, Lai and colleagues (2016, Study 2) compared nine interventions intended to reduce racial bias across 18 university campuses. Although all interventions changed participants’ bias on an immediate test, none were effective after a delay. This study has been interpreted as strong evidence that implicit biases are difficult to change. We revisited Lai et al.’s study to test whether the stability observed reflected persistent individual attitudes or stable environments. Our reanalysis (N = 4,842) indicates that individual biases did not return to preexisting levels. Instead, campus means returned to preexisting campus means, whereas individual scores fluctuated mostly randomly. Campus means were predicted by markers of structural inequality. Our results are consistent with the theory that implicit bias reflects biases in the environment rather than individual dispositions. This conclusion is nearly the opposite of the original interpretation: Although social environments are stable, individual implicit biases are ephemeral.","link":"/opendata/vuletich-payne-2019/"},{"title":"Wall et al. (2023)","text":"Predictive inference is an important cognitive function, and there are many tasks that measure it and the error-driven learning that underpins it. Context is a key contributor to this learning, with different contexts requiring different learning strategies. A factor not often considered, however, is the conditions and time-frame over which a model of that context is developed. This study required participants to learn under two changing, unsignalled contexts with opposing optimal responses to large errors - change-points and oddballs. 
The changes in context occurred under two task structures: 1) a fixed task structure, with consecutive, short blocks of each context, and 2) a random task structure, with the context randomly selected for each new block. Through this design we examined the conditions under which learning contexts can be differentiated from each other, and the time-frame over which that learning occurs. We found that participants responded in accordance with the optimal strategy for each context, and did so within a short period of time, over very few meaningful errors. We further found that the responses became more optimal throughout the experiment, but only for periods of context consistency (the fixed task structure), and if the first experienced context involved meaningful errors. These results show that people will continue to refine their model of the environment across multiple trials and blocks, leading to more context-appropriate responding - but only in certain conditions. This highlights the importance of considering the task structure, and the time-frames of model development that such structures may encourage. This has implications for interpreting differences in learning across different contexts.","link":"/opendata/wall-et-al-2023/"},{"title":"Waltmann et al. (2022a)","text":"Task-based measures that capture neurocognitive processes can help bridge the gap between brain and behavior. To transfer tasks to clinical application, reliability is a crucial benchmark because it imposes an upper bound on potential correlations with other variables (e.g., symptom or brain data). However, the reliability of many task readouts is low. In this study, we scrutinized the retest reliability of a probabilistic reversal learning task (PRLT) that is frequently used to characterize cognitive flexibility in psychiatric populations. We analyzed data from N = 40 healthy subjects, who completed the PRLT twice. 
We focused on how individual metrics are derived, i.e., whether data were partially pooled across participants and whether priors were used to inform estimates. We compared the reliability of the resulting indices across sessions, as well as the internal consistency of a selection of indices. We found good to excellent reliability for behavioral indices as derived from mixed-effects models that included data from both sessions. The internal consistency was good to excellent. For indices derived from computational modeling, we found excellent reliability when using hierarchical estimation with empirical priors and including data from both sessions. Our results indicate that the PRLT is well equipped to measure individual differences in cognitive flexibility in reinforcement learning. However, this depends heavily on hierarchical modeling of the longitudinal data (whether sessions are modeled separately or jointly), on estimation methods, and on the combination of parameters included in computational models. We discuss implications for the applicability of PRLT indices in psychiatric research and as diagnostic tools.","link":"/opendata/waltmann-et-al-2022a/"},{"title":"Waltmann et al. (2022b)","text":"Precisely charting the maturation of core neurocognitive functions such as reinforcement learning (RL) and flexible adaptation to changing action-outcome contingencies is key for developmental neuroscience and adjacent fields like developmental psychiatry. However, research in this area is both sparse and conflicted, especially regarding potentially asymmetric development of learning for different motives (obtain wins vs avoid losses) and learning from valenced feedback (positive vs negative). In the current study, we investigated the development of RL from adolescence to adulthood, using a probabilistic reversal learning task modified to experimentally separate motivational context and feedback valence, in a sample of 95 healthy participants between 12 and 45. 
We show that adolescence is characterized by enhanced novelty seeking and response shifting especially after negative feedback, which leads to poorer returns when reward contingencies are stable. Computationally, this is accounted for by reduced impact of positive feedback on behavior. We also show, using fMRI, that activity of the medial frontopolar cortex reflecting choice probability is attenuated in adolescence. We argue that this can be interpreted as reflecting diminished confidence in upcoming choices. Interestingly, we find no age-related differences between learning in win and loss contexts.","link":"/opendata/waltmann-et-al-2022b/"},{"title":"Waltmann et al. (2023a)","text":"Precisely charting the maturation of core neurocognitive functions such as reinforcement learning (RL) and flexible adaptation to changing action-outcome contingencies is key for developmental neuroscience and adjacent fields like developmental psychiatry. However, research in this area is both sparse and conflicted, especially regarding potentially asymmetric development of learning for different motives (obtain wins vs avoid losses) and learning from valenced feedback (positive vs negative). In the current study, we investigated the development of RL from adolescence to adulthood, using a probabilistic reversal learning task modified to experimentally separate motivational context and feedback valence, in a sample of 95 healthy participants between 12 and 45. We show that adolescence is characterized by enhanced novelty seeking and response shifting especially after negative feedback, which leads to poorer returns when reward contingencies are stable. Computationally, this is accounted for by reduced impact of positive feedback on behavior. We also show, using fMRI, that activity of the medial frontopolar cortex reflecting choice probability is attenuated in adolescence. We argue that this can be interpreted as reflecting diminished confidence in upcoming choices. 
Interestingly, we find no age-related differences between learning in win and loss contexts.","link":"/opendata/waltmann-et-al-2023a/"},{"title":"Waltmann et al. (2023b)","text":"Binge Eating Disorder (BED) is thought of as a disorder of cognitive control but evidence regarding its neurocognitive mechanisms is inconclusive. A key limitation in prior research is the lack of clear separation between effects of BED and obesity. Moreover, research has largely disregarded self-report evidence that neurocognitive deficits may emerge primarily in contexts focused on avoiding aversive states. Addressing these gaps, this longitudinal study investigated behavioral flexibility and its underlying neuro-computational processes in approach and avoidance contexts in normal weight individuals, obese individuals, and obese individuals with BED. Participants performed a probabilistic reversal learning task during functional imaging, with different blocks focused on obtaining wins or avoiding losses. They were reinvited for a 6-month follow-up. Analyses were informed by computational models of reinforcement learning (RL). Compared to obese individuals without BED, BED participants had relatively greater difficulty avoiding losses, while obese participants without BED had relatively greater difficulty obtaining rewards. Computationally, this was explained by differential learning sensitivities in the win vs loss conditions between groups. In the brain, this was mirrored in differential neural learning signals in the ventromedial prefrontal cortex (vmPFC) per condition. The condition-specific behavioral and neural differences were subtle, but strikingly scaled with BED symptoms between and within subjects. Compared to normal weight controls, obese participants without BED switched more between choice options. This was reflected in diminished representation of choice certainty in the vmPFC. 
Hence, RL alterations in obesity with and without BED may be qualitatively different: our finding that BED was associated with relative RL deficits in loss-avoidance contexts aligns with the notion that in BED, neurocognitive alterations may selectively emerge in aversive states. Our study thus highlights the importance of distinguishing between obesity with and without BED to identify unique neuro-computational alterations underlying different styles of maladaptive eating behavior.","link":"/opendata/waltmann-et-al-2023b/"},{"title":"Wamsley et al. (2023)","text":"Traditionally, neuroscience and psychology have studied the human brain during periods of “online” attention to the environment, while participants actively engage in processing sensory stimuli. But emerging evidence shows that the waking brain also intermittently enters an “offline” state, during which sensory processing is inhibited and our attention shifts inward. In fact, humans may spend up to half of their waking hours offline (Killingsworth & Gilbert, 2010; Wamsley & Summer, 2020). The function of alternating between online and offline forms of wakefulness remains unknown. We hypothesized that rapidly switching between online and offline states enables the brain to alternate between the competing demands of encoding new information and consolidating already-encoded information. N=46 participants (34 female) trained on a memory task just prior to a 30min retention interval, during which they completed a simple attention task while undergoing simultaneous high-density EEG (electroencephalography) and pupillometry recording. We used a data-driven method to parse this retention interval into a sequence of discrete online and offline states, with 5sec temporal resolution. 
We found evidence for three distinct states, one of which was an offline state with features well-suited to support memory consolidation, including increased EEG slow oscillation power, reduced attention to the external environment, and increased pupil diameter (a proxy for increased norepinephrine). Participants who spent more time in this offline state following encoding showed improved memory at delayed test. These observations are consistent with the hypothesis that even brief, seconds-long entry into an offline state may support the early stages of memory consolidation.","link":"/opendata/wamsley-et-al-2023/"},{"title":"Wan et al. (2022)","text":"How does the brain prioritize among the contents of working memory (WM) to appropriately guide behavior? Previous work, employing inverted encoding modeling (IEM) of electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) datasets, has shown that unprioritized memory items (UMI) are actively represented in the brain, but in a “flipped”, or opposite, format compared to prioritized memory items (PMI). To acquire independent evidence for such a priority-based representational transformation, and to explore underlying mechanisms, we trained recurrent neural networks (RNNs) with a long short-term memory (LSTM) architecture to perform a 2-back WM task. Visualization of LSTM hidden layer activity using Principal Component Analysis (PCA) confirmed that stimulus representations undergo a representational transformation-consistent with a flip-while transitioning from the functional status of UMI to PMI. Demixed (d)PCA of the same data identified two representational trajectories, one each within a UMI subspace and a PMI subspace, both undergoing a reversal of stimulus coding axes. dPCA of data from an EEG dataset also provided evidence for priority-based transformations of the representational code, albeit with some differences. 
This type of transformation could allow for retention of unprioritized information in WM while preventing it from interfering with concurrent behavior. The results from this initial exploration suggest that the algorithmic details of how this transformation is carried out by RNNs, versus by the human brain, may differ.","link":"/opendata/wan-et-al-2022/"},{"title":"Wang et al. (2023)","text":"The ability to respond flexibly to an ever-changing environment relies on the orbitofrontal cortex (OFC). However, how the OFC associates sensory information with predicted outcomes to enable flexible sensory learning in humans remains elusive. Here, we combine a probabilistic tactile reversal learning task with functional magnetic resonance imaging (fMRI) to investigate how lateral OFC (lOFC) interacts with the primary somatosensory cortex (S1) to guide flexible tactile learning in humans. fMRI results reveal that lOFC and S1 exhibit distinct task-dependent engagement: while the lOFC responds transiently to unexpected outcomes immediately following reversals, S1 is persistently engaged during re-learning. Unlike the contralateral stimulus-selective S1, activity in ipsilateral S1 mirrors the outcomes of behavior during re-learning, closely related to top-down signals from lOFC. These findings suggest that lOFC contributes to teaching signals to dynamically update representations in sensory areas, which implement computations critical for adaptive behavior.","link":"/opendata/wang-et-al-2023/"},{"title":"Wang & Navarro-Martinez (2023)","text":"An increasing number of studies call into question the external validity of social preference games. In this paper, we show that these games have a low correlation with single pro-social behaviors in the field, but this correlation can be substantially increased by aggregating behaviors to reduce measurement error. 
We tracked people’s daily pro-social behaviors for 14 days using a day reconstruction method; the same people played three different social preference games on seven different occasions. We show that, as more pro-social behaviors and game rounds are aggregated, the games become much better predictors of pro-sociality. This predictive power is further increased by using statistical methods designed to better account for measurement error. These findings suggest that social preference games capture important underlying dispositions of real-world pro-sociality, and they can be successfully used to predict aggregated pro-social inclinations. This has crucial implications for the external validity and applicability of economic games.","link":"/opendata/wang-navarro-martinez-2023/"},{"title":"Warren et al. (2017)","text":"The adaptive regulation of the trade-off between pursuing a known reward (exploitation) and sampling lesser-known options in search of something better (exploration) is critical for optimal performance. Theory and recent empirical work suggest that humans use at least two strategies for solving this dilemma: a directed strategy in which choices are explicitly biased toward information seeking, and a random strategy in which decision noise leads to exploration by chance. Here we examined the hypothesis that random exploration is governed by the neuromodulatory locus coeruleus-norepinephrine system. We administered atomoxetine, a norepinephrine transporter blocker that increases extracellular levels of norepinephrine throughout the cortex, to 22 healthy human participants in a double-blind crossover design. We examined the effect of treatment on performance in a gambling task designed to produce distinct measures of directed exploration and random exploration. In line with our hypothesis we found an effect of atomoxetine on random, but not directed exploration. However, contrary to expectation, atomoxetine reduced rather than increased random exploration. 
We offer three potential explanations of our findings, involving the non-linear relationship between tonic NE and cognitive performance, the interaction of atomoxetine with other neuromodulators, and the possibility that atomoxetine affected phasic norepinephrine activity more so than tonic norepinephrine activity.","link":"/opendata/warren-et-al-2017/"},{"title":"Watson et al. (2019)","text":"Physically salient but task-irrelevant distractors can capture attention in visual search, but resource-dependent, executive-control processes can help reduce this distraction. However, it is not only physically salient stimuli that grab our attention: Recent research has shown that reward history also influences the likelihood that stimuli will capture attention. Here, we investigated whether resource-dependent control processes modulate the effect of reward on attentional capture, much as for the effect of physical salience. To this end, we used eye tracking with a rewarded visual search task and compared performance under conditions of high and low working memory load. In two experiments, we demonstrated that oculomotor capture by high-reward distractor stimuli is enhanced under high memory load. These results highlight the role of executive-control processes in modulating distraction by reward-related stimuli. Our findings have implications for understanding the neurocognitive processes involved in real-life conditions in which reward-related stimuli may influence behavior, such as addiction.","link":"/opendata/watson-et-al-2019/"},{"title":"Watson et al. (2022)","text":"The translation of the outcome-devaluation paradigm to study habit in humans has yielded interesting insights but proven to be challenging. We present a novel, outcome-revaluation task with a symmetrical design, in the sense that half of the available outcomes are always valuable and the other half not-valuable. 
In the present studies, during the instrumental learning phase, participants learned to respond (Go) to certain stimuli to collect valuable outcomes (and points) while refraining from responding (NoGo) to stimuli signaling not-valuable outcomes. Half of the stimuli were short-trained, while the other half were long-trained. Subsequently, in the test phase, the signaled outcomes were either value-congruent with training (still-valuable and still-not-valuable), or value-incongruent (devalued and upvalued). The change in outcome value on value-incongruent trials meant that participants had to flexibly adjust their behavior. At the end of the training phase, participants completed the self-report behavioral automaticity index - providing an automaticity score for each stimulus-response association. We conducted two experiments using this task, both of which provided evidence for stimulus-driven habits as reflected in poorer performance on devalued and upvalued trials relative to still-not-valuable trials and still-valuable trials, respectively. While self-reported automaticity increased with longer training, behavioral flexibility was not affected. After extended training (Experiment 2), higher levels of self-reported automaticity when responding to stimuli signaling valuable outcomes were related to more slips of action when the associated outcome was subsequently devalued. We conclude that the symmetrical outcome-revaluation task provides a promising paradigm for the experimental investigation of habits in humans.","link":"/opendata/watson-et-al-2022/"},{"title":"Weber et al. (2022)","text":"Across species, animals have an intrinsic drive to approach appetitive stimuli and to withdraw from aversive stimuli. In affective science, influential theories of emotion link positive affect with strengthened behavioural approach and negative affect with avoidance. 
Based on these theories, we predicted that individuals’ positive and negative affect levels should particularly influence their behaviour when innate Pavlovian approach/avoidance tendencies conflict with learned instrumental behaviours. Here, across two experiments - exploratory Experiment 1 (N = 91) and a preregistered confirmatory Experiment 2 (N = 335) - we assessed how induced positive and negative affect influenced Pavlovian-instrumental interactions in a reward/punishment Go/No-Go task. Contrary to our hypotheses, we found no evidence for a main effect of positive/negative affect on either approach/avoidance behaviour or Pavlovian-instrumental interactions. However, we did find evidence that the effects of induced affect on behaviour were moderated by individual differences in self-reported behavioural inhibition and gender. Exploratory computational modelling analyses explained these demographic moderating effects as arising from positive correlations between demographic factors and individual differences in the strength of Pavlovian-instrumental interactions. These findings serve to sharpen our understanding of the effects of positive and negative affect on instrumental behaviour.","link":"/opendata/weber-et-al-2022/"},{"title":"Wehrman et al. (2023)","text":"Assimilation is the process by which one judgment tends to approach some aspect of another stimulus or judgment. This effect has been known for over half a century in various domains such as the judgment of weight or sound intensity. However, the assimilation of judgments of durations has been relatively unexplored. In the current article, we present the results of five experiments in which participants were required to judge the duration of a visual stimulus on each trial. In each experiment, we manipulated the pattern of durations they experienced in order to systematically separate the effects of the objective and subjective duration of stimuli on subsequent judgments. 
We found that duration judgments were primarily driven by prior judgments, with little, if any, effect of the prior objective stimulus duration. This is in contrast to the findings previously reported with regard to non-temporal judgments. We propose two mechanistic explanations of this effect: a representational account in which judgments represent the speed of an underlying pacemaker, and an assimilation account in which judgment is based on prior experience. We further discuss these results in terms of predictive coding, in which the previous rating is representative of a prior expectation, which is modified by current experience.","link":"/opendata/wehrman-et-al-2023/"},{"title":"Weilbacher et al. (2020)","text":"Previous research has indicated a bias in memory-based decision-making, with people preferring options that they remember better. However, the cognitive mechanisms underlying this memory bias remain elusive. Here, we propose that choosing poorly remembered options is conceptually similar to choosing options with uncertain outcomes. We predicted that the memory bias would be reduced when options had negative subjective value, analogous to the reflection effect, according to which uncertainty aversion is stronger in gains than in losses. In two preregistered experiments (N = 36 each), participants made memory-based decisions between appetitive and aversive stimuli. People preferred better-remembered options in the gain domain, but this behavioral pattern reversed in the loss domain. This effect was not related to participants’ ambiguity or risk attitudes, as measured in a separate task. Our results increase the understanding of memory-based decision-making and connect this emerging field to well-established research on decisions under uncertainty.","link":"/opendata/weilbacher-et-al-2020/"},{"title":"Weiss et al. 
(2021)","text":"Making accurate decisions in uncertain environments requires identifying the generative cause of sensory cues, but also the expected outcomes of possible actions. Although both cognitive processes can be formalized as Bayesian inference, they are commonly studied using different experimental frameworks, making their formal comparison difficult. Here, by framing a reversal learning task either as cue-based or outcome-based inference, we found that humans perceive the same volatile environment as more stable when inferring its hidden state by interaction with uncertain outcomes than by observation of equally uncertain cues. Multivariate patterns of magnetoencephalographic (MEG) activity reflected this behavioral difference in the neural interaction between inferred beliefs and incoming evidence, an effect originating from associative regions in the temporal lobe. Together, these findings indicate that the degree of control over the sampling of volatile environments shapes human learning and decision-making under uncertainty.","link":"/opendata/weiss-et-al-2021/"},{"title":"Wester et al. (2022)","text":"The expanded version of the Inventory of Depression and Anxiety Symptoms (IDAS-II) is a 99-item self-report measure containing 18 nonoverlapping dimensional scales assessing symptoms of depression, anxiety, and mania. The aim of this study was to develop and validate a German adaptation of the IDAS-II. Participants from a community sample (N = 1,054) completed the IDAS-II (German version). In addition, part of the sample (N = 550) completed a series of additional measures of depression (Patient Health Questionnaire-9, WHO-Five Well-Being Index, Symptom Checklist-90 Revised-Short Version) and anxiety disorders (Generalized Anxiety Disorder Scale-7, Fear of Negative Evaluation Scale-5, Dimensional Obsessive-Compulsive Scale, The International Trauma Questionnaire). 
We conducted item-level confirmatory factor analyses (CFA) separately for the 15 nonsaturated IDAS-II scales, which confirmed unidimensionality. McDonald’s ω indicated good internal consistency (ω > .8) of all symptom scales except Euphoria (ω = .751) and Ordering (ω = .728). A CFA of the IDAS-II scales, based on exploratory results by Watson et al. (2012), confirmed a three-factor model of Distress, Obsessions/Fear, and Positive Mood. Correlational analyses with additional symptom measures supported the convergent and discriminant validity of the IDAS-II scales. The IDAS-II (German version) allows for a reliable assessment of the severity of depression, anxiety, and bipolar symptoms and is one of the first clinical measures for German-speaking samples that is consistent with the Hierarchical Taxonomy of Psychopathology (HiTOP).","link":"/opendata/wester-et-al-2022/"},{"title":"Widge et al. (2019)","text":"Deep brain stimulation (DBS) is a circuit-oriented treatment for mental disorders. Unfortunately, even well-conducted psychiatric DBS clinical trials have yielded inconsistent symptom relief, in part because DBS mechanism(s) of action are unclear. One clue to those mechanisms may lie in the efficacy of ventral internal capsule/ventral striatum (VCVS) DBS in both major depression (MDD) and obsessive-compulsive disorder (OCD). MDD and OCD both involve deficits in cognitive control. Cognitive control depends on prefrontal cortex (PFC) regions that project into the VCVS. Here, we show that the VCVS DBS effect is explained in part by enhancement of PFC-driven cognitive control. DBS improves human subjects’ performance on a cognitive control task and increases theta (5-8 Hz) oscillations in both medial and lateral PFC. The theta increase predicts subjects’ clinical outcomes. Our results suggest a possible mechanistic approach to DBS therapy, based on tuning stimulation to optimize these neurophysiologic phenomena.","link":"/opendata/widge-et-al-2019/"},{"title":"Wiehler et al. 
(2021)","text":"Gambling disorder (GD) is a behavioral addiction associated with impairments in value-based decision-making and behavioral flexibility and might be linked to changes in the dopamine system. Maximizing long-term rewards requires a flexible trade-off between the exploitation of known options and the exploration of novel options for information gain. This exploration-exploitation trade-off is thought to depend on dopamine neurotransmission. We hypothesized that human gamblers would show a reduction in directed (uncertainty-based) exploration, accompanied by changes in brain activity in a fronto-parietal exploration-related network. Twenty-three frequent, non-treatment seeking gamblers and twenty-three healthy matched controls (all male) performed a four-armed bandit task during functional magnetic resonance imaging (fMRI). Computational modeling using hierarchical Bayesian parameter estimation revealed signatures of directed exploration, random exploration, and perseveration in both groups. Gamblers showed a reduction in directed exploration, whereas random exploration and perseveration were similar between groups. Neuroimaging revealed no evidence for group differences in neural representations of basic task variables (expected value, prediction errors). Our hypothesis of reduced frontal pole (FP) recruitment in gamblers was not supported. Exploratory analyses showed that during directed exploration, gamblers showed reduced parietal cortex and substantia-nigra/ventral-tegmental-area activity. Cross-validated classification analyses revealed that connectivity in an exploration-related network was predictive of group status, suggesting that connectivity patterns might be more predictive of problem gambling than univariate effects. 
Findings reveal specific reductions of strategic exploration in gamblers that might be linked to altered processing in a fronto-parietal network and/or changes in dopamine neurotransmission implicated in GD.","link":"/opendata/wiehler-et-al-2021/"},{"title":"Williams et al. (2021)","text":"Human learning, at least in part, appears to be dependent on the evaluation of how outcomes of our actions align with our expectations. Over the past 23 years, electroencephalography (EEG) has been used to probe the neural signatures of feedback processing. Seminal work demonstrated a difference in the human event-related potential (ERP) dependent on whether people were processing correct or incorrect feedback. Since then, these feedback evoked ERPs have been associated with reinforcement learning and conflict monitoring, tied to subsequent behavioral adaptations, and shown to be sensitive to a wide range of factors (e.g., Parkinson’s disease). Recently, research has turned to frequency decomposition techniques to examine how changes in the EEG power spectra are related to underlying learning mechanisms. Although the literature on the neural correlates of feedback processing is vast, there are still methodological discrepancies and differences in results across studies. Here, we provide reference results and an investigation of methodological considerations for the ERP (reward positivity) and frequency (delta and theta power) correlates of feedback evaluation with a large sample size. Specifically, participants (n = 500) performed a two-armed bandit task while we recorded EEG. Our findings provide key information about the data characteristics and relationships that exist between the neural signatures of feedback evaluation. Additionally, we conclude with selected methodological recommendations for standardization of future research. All data and scripts are freely provided to facilitate open science.","link":"/opendata/williams-et-al-2021/"},{"title":"Williams et al. 
(2022)","text":"Visual object recognition is not performed in isolation but depends on prior knowledge and context. Here, we found that auditory context plays a critical role in visual object perception. Using a psychophysical task in which naturalistic sounds were paired with noisy visual inputs, we demonstrated across two experiments (young adults; ns = 18-40 in Experiments 1 and 2, respectively) that the representations of ambiguous visual objects were shifted toward the visual features of an object that were related to the incidental sound. In a series of control experiments, we found that these effects were not driven by decision or response biases (ns = 40-85) nor were they due to top-down expectations (n = 40). Instead, these effects were driven by the continuous integration of audiovisual inputs during perception itself. Together, our results demonstrate that the perceptual experience of visual objects is directly shaped by naturalistic auditory context, which provides independent and diagnostic information about the visual world.","link":"/opendata/williams-et-al-2022/"},{"title":"Wimmer et al. (2018)","text":"Over the past few decades, neuroscience research has illuminated the neural mechanisms supporting learning from reward feedback. Learning paradigms are increasingly being extended to study mood and psychiatric disorders as well as addiction. However, one potentially critical characteristic that this research ignores is the effect of time on learning: human feedback learning paradigms are usually conducted in a single rapidly paced session, whereas learning experiences in ecologically relevant circumstances and in animal research are almost always separated by longer periods of time. In our experiments, we examined reward learning in short condensed sessions distributed across weeks versus learning completed in a single “massed” session in male and female participants. 
As expected, we found that after equal amounts of training, accuracy was matched between the spaced and massed conditions. However, in a 3-week follow-up, we found that participants exhibited significantly greater memory for the value of spaced-trained stimuli. Supporting a role for short-term memory in massed learning, we found a significant positive correlation between initial learning and working memory capacity. Neurally, we found that patterns of activity in the medial temporal lobe and prefrontal cortex showed stronger discrimination of spaced- versus massed-trained reward values. Further, patterns in the striatum discriminated between spaced- and massed-trained stimuli overall. Our results indicate that single-session learning tasks engage partially distinct learning mechanisms from distributed training. Our studies begin to address a large gap in our knowledge of human learning from reinforcement, with potential implications for our understanding of mood disorders and addiction.","link":"/opendata/wimmer-et-al-2018/"},{"title":"Wimmer & Poldrack (2022)","text":"Neuroscience research has illuminated the mechanisms supporting learning from reward feedback, demonstrating a critical role for the striatum and midbrain dopamine system. However, in humans, short-term working memory that is dependent on frontal and parietal cortices can also play an important role, particularly in commonly used paradigms in which learning is relatively condensed in time. Given the growing use of reward-based learning tasks in translational studies in computational psychiatry, it is important to understand the extent of the influence of working memory and also how core gradual learning mechanisms can be better isolated. In our experiments, we manipulated the spacing between repetitions along with a post-learning delay preceding a test phase. 
We found that learning was slower for stimuli repeated after a long delay (spaced-trained) compared to those repeated immediately (massed-trained), likely reflecting the remaining contribution of feedback learning mechanisms when working memory is not available. For massed learning, brief interruptions led to drops in subsequent performance, and individual differences in working memory capacity positively correlated with overall performance. Interestingly, when tested after a delay period but not immediately, relative preferences decayed in the massed condition and increased in the spaced condition. Our results provide additional support for a large role of working memory in reward-based learning in temporally condensed designs. We suggest that spacing training within or between sessions is a promising approach to better isolate and understand mechanisms supporting gradual reward-based learning, with particular importance for understanding potential learning dysfunctions in addiction and psychiatric disorders.","link":"/opendata/wimmer-poldrack-2022/"},{"title":"Wimmer et al. (2023)","text":"Theories of neural replay propose that it supports a range of functions, most prominently planning and memory consolidation. Here, we test the hypothesis that distinct signatures of replay in the same task are related to model-based decision-making (“planning”) and memory preservation. We designed a reward learning task wherein participants utilized structure knowledge for model-based evaluation, while at the same time had to maintain knowledge of two independent and randomly alternating task environments. Using magnetoencephalography and multivariate analysis, we first identified temporally compressed sequential reactivation, or replay, both prior to choice and following reward feedback. Before choice, prospective replay strength was enhanced for the current task-relevant environment when a model-based planning strategy was beneficial. 
Following reward receipt, and consistent with a memory preservation role, replay for the alternative distal task environment was enhanced as a function of decreasing recency of experience with that environment. Critically, these planning and memory preservation relationships were selective to pre-choice and post-feedback periods, respectively. Our results provide support for key theoretical proposals regarding the functional role of replay and demonstrate that the relative strength of planning and memory-related signals is modulated by ongoing computational and task demands.","link":"/opendata/wimmer-et-al-2023/"},{"title":"Wischnewski et al. (2021)","text":"In decision-making with uncertain outcomes, people may rely on external cues, such as expert advice, even if this information has no predictive value. While the fronto-parietal event-related potential (ERP) components feedback-related negativity (FRN) and P3 are both associated with reward/punishment feedback processing, the relationship between ERP modulation and expert advice during decision making remains unclear. In this double-blind, sham-controlled, within-subject study, transcranial alternating current stimulation (tACS) at an intensity of 1 mA was applied to the frontal cortex in twenty-four healthy volunteers. The aim was to decrease reliance on expert advice by targeting FRN, P3a and P3b components. Following administration of frontal delta (2.5 Hz), theta (5 Hz) and sham tACS, ERPs and advice-guided decision making were evaluated. Results showed a tentative behavioral effect of delta tACS in the response bias. In contrast, theta tACS significantly lowered P3b and P3a amplitudes, but no effects of tACS were observed for the FRN. Effects on electrophysiology and advice-following behavior were uncorrelated.
Our findings suggest that theta tACS may modulate electrocortical signals and delta tACS may modulate advice following, yet the relationship between the two remains unresolved.","link":"/opendata/wischnewski-et-al-2021/"},{"title":"Wise et al. (2019)","text":"Visual selective attention acts as a filter on perceptual information, facilitating learning and inference about important events in an agent’s environment. A role for visual attention in reward-based decisions has previously been demonstrated, but it remains unclear how visual attention is recruited during aversive learning, particularly when learning about multiple stimuli concurrently. This question is of particular importance in psychopathology, where enhanced attention to threat is a putative feature of pathological anxiety. Using an aversive reversal learning task that required subjects to learn, and exploit, predictions about multiple stimuli, we show that the allocation of visual attention is influenced significantly by aversive value but not by uncertainty. Moreover, this relationship is bidirectional in that attention biases value updates for attended stimuli, resulting in heightened value estimates. Our findings have implications for understanding biased attention in psychopathology and support a role for learning in the expression of threat-related attentional biases in anxiety.","link":"/opendata/wise-et-al-2019/"},{"title":"Wise et al. (2020)","text":"Symptom expression in psychiatric conditions is often linked to altered threat perception; however, how computational mechanisms that support aversive learning relate to specific psychiatric symptoms remains undetermined. We answer this question using an online game-based aversive learning task together with measures of common psychiatric symptoms in 400 subjects.
We show that physiological symptoms of anxiety and a transdiagnostic compulsivity-related factor are associated with enhanced safety learning, as measured using a probabilistic computational model, while trait cognitive anxiety symptoms are associated with enhanced learning from danger. We use data-driven partial least squares regression to identify two separable components across behavioural and questionnaire data: one linking enhanced safety learning and lower estimated uncertainty to physiological anxiety, compulsivity, and impulsivity; the other linking enhanced threat learning and heightened uncertainty estimation to symptoms of depression and social anxiety. Our findings implicate aversive learning processes in the expression of psychiatric symptoms that transcend diagnostic boundaries.","link":"/opendata/wise-et-al-2020/"},{"title":"Wise et al. (2022)","text":"Successful avoidance of recurrent threats depends on inferring threatening agents’ preferences and predicting their movement patterns accordingly. However, it remains largely unknown how the human brain achieves this, despite the fact that many natural threats are posed by complex, dynamic agents that act according to their own goals. Here, we propose that humans exploit an interactive cognitive map of the social environment to infer threatening agents’ preferences and also to simulate their future behavior, providing for flexible, generalizable avoidance strategies. We tested this proposal across three preregistered experiments (total n=510) using a task in which participants collected rewards while avoiding one of several possible virtual threatening agents. A novel, model-based, hypothesis-testing inverse reinforcement learning computational model best explained participants’ inferences about threatening agents’ latent preferences, with participants using this inferred knowledge to enact generalizable, model-based avoidance strategies across different environments. 
Using tree-search planning models, we found that participants’ behavior was best explained by a planning algorithm that incorporated simulations of the threat’s goal-directed behavior, and that prior expectations about the threat’s predictability were linked to individual differences in avoidance. Together, our results indicate that humans use a cognitive map to determine threatening agents’ preferences, in turn facilitating generalized predictions of the threatening agent’s behavior and enabling flexible and effective avoidance.","link":"/opendata/wise-et-al-2022/"},{"title":"Wise et al. (2023)","text":"In social environments, survival can depend upon inferring and adapting to other agents’ goal-directed behavior. However, it remains unclear how humans achieve this, despite the fact that many decisions must account for complex, dynamic agents acting according to their own goals. Here, we use a predator-prey task (total n = 510) to demonstrate that humans exploit an interactive cognitive map of the social environment to infer other agents’ preferences and simulate their future behavior, providing for flexible, generalizable responses. A model-based inverse reinforcement learning model explained participants’ inferences about threatening agents’ preferences, with participants using this inferred knowledge to enact generalizable, model-based behavioral responses. Using tree-search planning models, we then found that behavior was best explained by a planning algorithm that incorporated simulations of the threat’s goal-directed behavior. Our results indicate that humans use a cognitive map to determine other agents’ preferences, facilitating generalized predictions of their behavior and effective responses.","link":"/opendata/wise-et-al-2023/"},{"title":"Woelk et al. (2022)","text":"Anxiety disorders are effectively treated with exposure therapy, but relapse remains high. 
Fear may reinstate after recurrence of the negative event because the expectancy of the aversive outcome (unconditioned stimulus [US]) is adjusted but not its evaluation. Imagery rescripting (ImRs) is an intervention that is proposed to work through revaluation of the US. The aim of our preregistered study was to test the effects of ImRs and extinction on US expectancy and US revaluation. Day 1 (n = 106) consisted of acquisition with an aversive film clip as US. The manipulation (ImRs + extinction, extinction-only, or ImRs-only) took place on Day 2. Reinstatement of fear was tested on Day 3. Results showed expectancy learning in both extinction conditions but not in the ImRs-only condition and no enhanced revaluation learning in ImRs. The combination of ImRs and extinction slowed down extinction but did not protect against reinstatement, which argues in favor of stand-alone interventions in clinical practice.","link":"/opendata/woelk-et-al-2022/"},{"title":"Wu et al. (2018)","text":"From foraging for food to learning complex games, many aspects of human behaviour can be framed as a search problem with a vast space of possible actions. Under finite search horizons, optimal solutions are generally unobtainable. Yet, how do humans navigate vast problem spaces, which require intelligent exploration of unobserved actions? Using various bandit tasks with up to 121 arms, we study how humans search for rewards under limited search horizons, in which the spatial correlation of rewards (in both generated and natural environments) provides traction for generalization. Across various probabilistic and heuristic models, we find evidence that Gaussian process function learning, combined with an optimistic upper confidence bound sampling strategy, provides a robust account of how people use generalization to guide search.
Our modelling results and parameter estimates are recoverable and can be used to simulate human-like performance, providing insights about human behaviour in complex environments.","link":"/opendata/wu-et-al-2018/"},{"title":"Wu et al. (2022)","text":"How does time pressure influence exploration and decision-making? We investigated this question with several four-armed bandit tasks manipulating (within subjects) expected reward, uncertainty, and time pressure (limited vs. unlimited). With limited time, people have less opportunity to perform costly computations, thus shifting the cost-benefit balance of different exploration strategies. Through behavioral, reinforcement learning (RL), reaction time (RT), and evidence accumulation analyses, we show that time pressure changes how people explore and respond to uncertainty. Specifically, participants reduced their uncertainty-directed exploration under time pressure, were less value-directed, and repeated choices more often. Since our analyses relate uncertainty to slower responses and dampened evidence accumulation (i.e., drift rates), this demonstrates a resource-rational shift towards simpler, lower-cost strategies under time pressure. These results shed light on how people adapt their exploration and decision-making strategies to externally imposed cognitive constraints.","link":"/opendata/wu-et-al-2022/"},{"title":"Wu et al. (2023)","text":"When exposed to perceptual and motor sequences, people are able to gradually identify patterns within and form a compact internal description of the sequence. One proposal of how sequences can be compressed is people’s ability to form chunks. We study people’s chunking behavior in a serial reaction time task. We relate chunk representation with sequence statistics and task demands, and propose a rational model of chunking that rearranges and concatenates its representation to jointly optimize for accuracy and speed.
Our model predicts that participants should chunk more if chunks are indeed part of the generative model underlying a task and should, on average, learn longer chunks when optimizing for speed than when optimizing for accuracy. We test these predictions in two experiments. In the first experiment, participants learn sequences with underlying chunks. In the second experiment, participants were instructed to act either as fast or as accurately as possible. The results of both experiments confirmed our model’s predictions. Taken together, these results shed new light on the benefits of chunking and pave the way for future studies on step-wise representation learning in structured domains.","link":"/opendata/wu-et-al-2023/"},{"title":"Wulff et al. (2018)","text":"People can learn about the probabilistic consequences of their actions in two ways: One is by consulting descriptions of an action’s consequences and probabilities (e.g., reading up on a medication’s side effects). The other is by personally experiencing the probabilistic consequences of an action (e.g., beta testing software). In principle, people taking each route can reach analogous states of knowledge and consequently make analogous decisions. In the last dozen years, however, research has demonstrated systematic discrepancies between description- and experience-based choices. This description-experience gap has been attributed to factors including reliance on a small set of experience, the impact of recency, and different weighting of probability information in the two decision types. In this meta-analysis focusing on studies using the sampling paradigm of decisions from experience, we evaluated these and other determinants of the description-experience gap by reference to more than 70,000 choices made by more than 6,000 participants. We found, first, a robust description-experience gap but also a key moderator, namely, problem structure.
Second, the largest determinant of the gap was reliance on small samples and the associated sampling error: free to terminate search, individuals explored too little to experience all possible outcomes. Third, the gap persisted when sampling error was basically eliminated, suggesting other determinants. Fourth, the occurrence of recency was contingent on decision makers’ autonomy to terminate search, consistent with the notion of optional stopping. Finally, we found indications of different probability weighting in decisions from experience versus decisions from description when the problem structure involved a risky and a safe option.","link":"/opendata/wulff-et-al-2018/"},{"title":"Wurtz et al. (2023)","text":"Depressive symptoms are associated with negative expectations and reduced belief updating by positive information. Cognitive immunization, the devaluation of positive information, has been argued to be central in this relationship and predictive processing models suggest that more positive information is associated with greater cognitive immunization. In an online experiment, N=347 healthy participants took part in a performance task with standardized feedback that was either mildly, moderately, or extremely positive. Effects of the feedback positivity on cognitive immunization were investigated. Further, depressive symptoms, interpretation bias, as well as participant’s self-evaluation were examined as potential moderators of belief updating. Participants in the mildly positive condition showed the greatest amount of cognitive immunization, with no difference between the moderately and the extremely positive condition. Irrespective of the feedback condition, participants’ positive expectations increased after feedback, yet depressive symptoms were associated with a weaker increase. Interpretation biases were unrelated to these associations, but exploratory analyses suggested that self-evaluation may be a potentially central factor. 
The results suggest that healthy participants engaged in cognitive immunization when feedback was less positive than expected. Depressive symptoms were associated with reduced belief updating, with self-evaluation being a promising factor for future research.","link":"/opendata/wurtz-et-al-2023/"},{"title":"Xia et al. (2021)","text":"In the real world, many relationships between events are uncertain and probabilistic. Uncertainty is also likely to be a more common feature of daily experience for youth because they have less experience to draw from than adults. Some studies suggest probabilistic learning may be inefficient in youths compared to adults, while others suggest it may be more efficient in youths in mid-adolescence. Here we used a probabilistic reinforcement learning task to test how youths aged 8-17 (N = 187) and adults aged 18-30 (N = 110) learn about stable probabilistic contingencies. Performance increased with age through the early twenties, then stabilized. Using hierarchical Bayesian methods to fit computational reinforcement learning models, we show that all participants’ performance was better explained by models in which negative outcomes had minimal to no impact on learning. The performance increase over age was driven by 1) an increase in learning rate (i.e., a decrease in integration time scale); 2) a decrease in noisy/exploratory choices. In mid-adolescence (age 13-15), salivary testosterone and learning rate were positively related. We discuss our findings in the context of other studies and hypotheses about adolescent brain development.","link":"/opendata/xia-et-al-2021/"},{"title":"Xia et al. (2022)","text":"Fear conditioning is a laboratory paradigm commonly used to investigate aversive learning and memory. In context fear conditioning, a configuration of elemental cues (conditioned stimulus, CS) predicts an aversive event (unconditioned stimulus, US).
To quantify context fear acquisition in humans, previous work has used startle eye-blink responses (SEBR), skin conductance responses (SCR) and verbal reports, but different quantification methods have rarely been compared. Moreover, it is unclear how to induce, and measure, context fear memory retention over several days. First, we used a semi-immersive virtual reality paradigm. In two experiments, we found successful declarative learning and memory retention over seven days, but no evidence of conditioned responses. Next, we used a configural fear conditioning paradigm with five static room images as CSs in two experiments. Besides successful declarative learning and memory retention after seven days, SCR and pupil dilation to CS onset differentiated CS+/CS- during acquisition training, and SEBR and pupil dilation differentiated CS+/CS- during the recall test, with medium to large effect sizes for the most sensitive indices (SEBR: Hedges’ g = 0.56 and 0.69; pupil dilation: Hedges’ g = 0.99 and g = 0.88). Our results demonstrate that with a suitable experimental paradigm, context fear memory retention can be demonstrated over seven days, and we provide robust and replicable measurement methods.","link":"/opendata/xia-et-al-2022/"},{"title":"Xiang et al. (2023)","text":"How do people judge responsibility in collaborative tasks? Past work has proposed a number of metrics that people may use to attribute blame and credit to others, such as effort, competence, and force. Some theories consider only the produced effort or force (individuals are more responsible if they produce more effort or force), whereas others consider counterfactuals (individuals are more responsible if some alternative behavior on their or their collaborator’s part could have altered the outcome). Across four experiments (N = 717), we found that participants’ judgments are best described by a model that considers both produced and counterfactual effort.
This finding generalized to an independent validation data set (N = 99). Our results thus support a dual-factor theory of responsibility attribution in collaborative tasks.","link":"/opendata/xiang-et-al-2023/"},{"title":"Xu et al. (2020)","text":"The literature on human delay discounting behavior is dominated by experimental paradigms, which do not impose actual delays. Given that waiting may be aversive even on short timescales, we present a novel delay discounting paradigm to study differences in delay discounting behavior either when real waiting is involved, or not. This paradigm retains the fundamental trade-off between rewards received versus their immediacy. We used hierarchical Bayesian modeling to decompose and test models that separate discounting and subjective time perception mechanisms. We report 2 experiments that also explore the magnitude effect and gain-loss asymmetry. In both experiments, we found greater discounting and nonlinear (convex) time perception in the online waiting task, which required waiting after each choice, compared to a control condition where waiting was deferred until the end of the experiment. Discounting was also measured by a parameter of a hyperbolic-type model and related to reversals of preference between initial desire to wait and then not doing so. These results from our novel paradigm support the emerging view that subjective time perception plays an important role in intertemporal choice in addition to discounting tendencies.","link":"/opendata/xu-et-al-2020/"},{"title":"Xu & Stocco (2021)","text":"Behavioral data, despite being a common index of cognitive activity, is under scrutiny for having poor reliability as a result of noise or lacking replications of reliable effects. Here, we argue that cognitive modeling can be used to enhance the test-retest reliability of the behavioral measures by recovering individual-level parameters from behavioral data. 
We tested this empirically with the Probabilistic Stimulus Selection (PSS) task, which is used to measure a participant’s sensitivity to positive or negative reinforcement. An analysis of 400,000 simulations from an Adaptive Control of Thought-Rational (ACT-R) model of this task showed that the poor reliability of the task is due to the instability of the end-estimates: because of the way the task works, the same participants might sometimes end up having apparently opposite scores. To recover the underlying interpretable parameters and enhance reliability, we used a Bayesian Maximum A Posteriori (MAP) procedure. We were able to obtain reliable parameters across sessions (intraclass correlation coefficient ≈ 0.5). A follow-up study on a modified version of the task also found the same pattern of results, with very poor test-retest reliability in behavior but moderate reliability in recovered parameters (intraclass correlation coefficient ≈ 0.4). Collectively, these results imply that this approach can further be used to provide superior measures in terms of reliability, and bring greater insights into individual differences.","link":"/opendata/xu-stocco-2021/"},{"title":"Xu et al. (2021)","text":"Classic reinforcement learning (RL) theories cannot explain human behavior in the absence of external reward or when the environment changes. Here, we employ a deep sequential decision-making paradigm with sparse reward and abrupt environmental changes. To explain the behavior of human participants in these environments, we show that RL theories need to include surprise and novelty, each with a distinct role. While novelty drives exploration before the first encounter of a reward, surprise increases the rate of learning of a world-model as well as of model-free action-values. Even though the world-model is available for model-based RL, we find that human decisions are dominated by model-free action choices.
The world-model is only marginally used for planning, but it is important to detect surprising events. Our theory predicts human action choices with high probability and allows us to dissociate surprise, novelty, and reward in EEG signals.","link":"/opendata/xu-et-al-2021/"},{"title":"Yamaguchi & Swainson (2023)","text":"The task-switch cost is one of the most robust phenomena, but it can disappear after nogo trials where the actors decide not to respond to the target. According to the response-selection account, it is the occurrence of response selection that generates a task-switch cost on the following trial. The present study used a variety of selective go/nogo procedures to investigate whether response selection on nogo trials is followed by a switch cost. The first two experiments aimed to replicate previous studies in which go/nogo trials could be distinguished either by the target features or by the responses assigned to the target, but the results were mixed. The subsequent four experiments constrained the conditions so that the actors would need to select a specific response in order to decide whether or not they execute the response. In these experiments, the task-switch cost was consistently absent after selective nogo trials, even when response selection was required on these trials. These results contradict the hypothesis that response selection on nogo trials would be followed by a subsequent switch cost. The results are consistent with the proposal that a task-switch cost might have been established by response selection or other task-related process on nogo trials, but it is abolished because nogo signals interfered with the activated task-set.","link":"/opendata/yamaguchi-swainson-2023/"},{"title":"Yan et al. 
(2023)","text":"The neural circuits of reward processing and interval timing (including perception and production) are functionally intertwined, suggesting that it might be possible for momentary reward processing to influence subsequent timing behavior. Previous animal and human studies have mainly focused on the effect of reward on interval perception, whereas its impact on interval production is less clear. In this study, we examined whether feedback, as an example of performance-contingent reward, biases interval production. We recorded EEG from 20 participants while they engaged in a continuous drumming task with different realistic tempos (1728 trials per participant). Participants received color-coded feedback after each beat about whether they were correct (on time) or incorrect (early or late). Regression-based EEG analysis was used to unmix the rapid occurrence of a feedback response called the reward positivity (RewP), which is traditionally observed in more slow-paced tasks. Using linear mixed modelling, we found that RewP amplitude predicted timing behavior for the upcoming beat. This performance-biasing effect of the RewP was interpreted as reflecting the impact of fluctuations in dopaminergic activities on timing, and the necessity of continuous paradigms to make such observations was highlighted.","link":"/opendata/yan-et-al-2023/"},{"title":"Yeung & Han (2023)","text":"The n-back task is a popular paradigm for studying neurocognitive processing at varying working memory loads. Although much is known about the effects of load on behavior and neural activation during n-back performance, the temporal dynamics of such effects remain unclear. Here, we investigated the within- and between-session stability and consistency of task performance and frontal cortical activation during the n-back task using functional near-infrared spectroscopy (fNIRS). Forty healthy young adults performed the 1-back and 3-back conditions three times per condition. 
They then undertook identical retest sessions 3 weeks later (M = 21.2 days, SD = 0.9). Over the course of the task, activation in the participants’ frontopolar, dorsomedial, dorsolateral, ventrolateral, and posterolateral frontal cortices was measured with fNIRS. We found significantly improved working memory performance (difference between 1-back and 3-back accuracies) over time both within and between sessions. All accuracy and reaction time measures exhibited good to excellent consistency within and across sessions. Additionally, changes in frontal oxyhemoglobin (HbO) and deoxyhemoglobin (HbR) concentration were maintained over time across timescales, except that load-dependent (3-back > 1-back) HbO changes, particularly in the ventrolateral PFC, diminished over separate sessions. The consistency of fNIRS measures varied greatly, with changes in 3-back dorsolateral and ventrolateral HbO demonstrating fair-to-good consistency both within and between sessions. Overall, this study clarified the temporal dynamics of task performance and frontal activation during the n-back task. The findings revealed the neural mechanisms underlying the change in n-back task performance over time and have practical implications for future n-back research.","link":"/opendata/yeung-han-2023/"},{"title":"Yoo et al. (2022)","text":"How does the nature of a stimulus affect our ability to learn appropriate response associations? In typical laboratory experiments learning is investigated under somewhat ideal circumstances, where stimuli are easily discriminable visually and linguistically. This is not representative of most real-life learning, where visually or linguistically overlapping stimuli can result in different rewards (e.g., you may learn over time that you can pet one specific dog that is friendly, but that you should avoid a very similar looking one that isn’t). 
With two experiments, we test how humans learn in three stimulus conditions: stimuli with distinct visual representations but overlapping linguistic representations, stimuli with distinct linguistic representations but overlapping visual representations, and stimuli with distinct visual and linguistic representations. We find that decreasing linguistic and visual distinctness both decrease performance, substantially more for the lowered linguistic distinctness condition. We develop computational models to test different hypotheses about how reinforcement learning (RL) and working memory (WM) processes are affected by different stimulus conditions. Interestingly, we find that only RL, and not WM, is affected by stimulus condition: people learn slower and have higher across-stimulus value confusion at decision when linguistic information overlaps relative to when it is distinct. These results demonstrate strong effects of stimulus type on learning, and highlight the importance of considering the parallel contributions of different cognitive processes when studying behavior.","link":"/opendata/yoo-et-al-2022/"},{"title":"Zaatri et al. (2022)","text":"Although living in social groups provides many benefits for group members, such groups also serve as a setting for social competition over rank and influence. Evolutionary accounts suggest that social anxiety plays a role in regulating in-group conflict, as individuals who are concerned about social threat may choose to defer to others to maintain the hierarchical status quo. Here, we examine how social anxiety levels are related to the advice-giving style an individual adopts: a competitive influence-seeking strategy or a defensive blend-in strategy. We begin by demonstrating that similarity to others drives activity in the brain’s valuation system, even during a competitive advice-taking task. 
Then, in three behavioural experiments, we show that social anxiety levels are related to the tendency to give advice resembling the advice given by rival advisers and to refrain from status-seeking behaviour. Social anxiety was also associated with negative social comparisons with rival advisers. Our findings highlight the role of competing social goals in shaping information sharing.","link":"/opendata/zaatri-et-al-2022/"},{"title":"Zajkowski et al. (2017)","text":"The explore-exploit dilemma occurs anytime we must choose between exploring unknown options for information and exploiting known resources for reward. Previous work suggests that people use two different strategies to solve the explore-exploit dilemma: directed exploration, driven by information seeking, and random exploration, driven by decision noise. Here, we show that these two strategies rely on different neural systems. Using transcranial magnetic stimulation to inhibit the right frontopolar cortex, we were able to selectively inhibit directed exploration while leaving random exploration intact. This suggests a causal role for right frontopolar cortex in directed, but not random, exploration and that directed and random exploration rely on (at least partially) dissociable neural systems.","link":"/opendata/zajkowski-et-al-2017/"},{"title":"Zaller et al. (2021)","text":"Excessive information seeking, or exploratory behavior to minimize the uncertainty of unknown options, is a feature of anxiety disorders. The horizons task (Wilson et al. 2014) is a popular task for measuring information-seeking behavior, recently used to identify under-exploration in psychosis (Waltz et al. 2020). The horizons task has not yet been evaluated as a tool for measuring information seeking behavior in anxious individuals. We recruited 100 participants to complete an online version of the horizons task. 
Anxiety was measured with the Penn State Worry Questionnaire (PSWQ), and attitudes related to information seeking were measured with the Intolerance of Uncertainty scale (IUS) and the Need for Closure scale (NCS). Information seeking behavior on the horizons task was measured per participant using hierarchical Bayesian modeling. We confirmed that behavior on the online version of the horizons task is similar to previous in-person studies. Contrary to our hypotheses, we found no evidence of a relationship between information seeking behavior on the task and anxiety symptoms or the information seeking scales. Our results suggest behavior on the horizons task does not predict beliefs and attitudes towards anxious information seeking. We suspect this may reflect design features of the task that reduce the value of information seeking behaviors. We conclude by proposing modifications to the task that may improve its utility as a measure of information seeking behavior in anxiety.","link":"/opendata/zaller-et-al-2021/"},{"title":"Zang et al. (2022)","text":"Duration estimates are often biased by the sampled statistical context, yielding the classical central-tendency effect, i.e., short durations are over- and long duration underestimated. Most studies of the central-tendency bias have primarily focused on the integration of the sensory measure and the prior information, without considering any cognitive limits. Here, we investigated the impact of cognitive (visual working-memory) load on duration estimation in the duration encoding and reproduction stages. In four experiments, observers had to perform a dual, attention-sharing task: reproducing a given duration (primary) and memorizing a variable set of color patches (secondary). 
We found that an increase in memory load (i.e., set size) during the duration-encoding stage increased the central-tendency bias, while shortening the reproduced duration in general; in contrast, increasing the load during the reproduction stage prolonged the reproduced duration, without influencing the central tendency. By integrating an attentional-sharing account into a hierarchical Bayesian model, we were able to predict both the general over- and underestimation and the central-tendency effects observed in all four experiments. The model suggests that memory pressure during the encoding stage increases the sensory noise, which elevates the central-tendency effect. In contrast, memory pressure during the reproduction stage only influences the monitoring of elapsed time, leading to a general duration over-reproduction without impacting the central tendency.","link":"/opendata/zang-et-al-2022/"},{"title":"Zbozinek et al. (2022)","text":"In the natural world, stimulus-outcome associations are often ambiguous, and most associations are highly complex and situation-dependent. Learning to disambiguate these complex associations to identify which specific outcomes will occur in which situations is critical for survival. Pavlovian occasion setters are stimuli that determine whether other stimuli will result in a specific outcome. Occasion setting is a well-established phenomenon, but very little investigation has been conducted on how occasion setters are disambiguated when they themselves are ambiguous (i.e., when they do not consistently signal whether another stimulus will be reinforced). In two preregistered studies, we investigated the role of higher-order Pavlovian occasion setting in humans. We developed and tested the first computational model predicting direct associative learning, traditional occasion setting (i.e., 1st-order occasion setting), and 2nd-order occasion setting. 
This model operationalizes stimulus ambiguity as a mechanism to engage in higher-order Pavlovian learning. Both behavioral and computational modeling results suggest that 2nd-order occasion setting was learned, as evidenced by lack and presence of transfer of occasion setting properties when expected and the superior fit of our 2nd-order occasion setting model compared to the 1st-order occasion setting or direct associations models. These results provide a controlled investigation into highly complex associative learning and may ultimately lead to improvements in the treatment of Pavlovian-based mental health disorders (e.g., anxiety disorders, substance use).","link":"/opendata/zbozinek-et-al-2022/"},{"title":"Zech et al. (2022)","text":"The approach-avoidance task (AAT) is an implicit task that measures people’s behavioral tendencies to approach or avoid stimuli in the environment. In recent years, it has been used successfully to help explain a variety of health problems (e.g., addictions and phobias). Unfortunately, more recent AAT studies have failed to replicate earlier promising findings. One explanation for these replication failures could be that the AAT does not reliably measure approach-avoidance tendencies. Here, we first review existing literature on the reliability of various versions of the AAT. Next, we examine the AAT’s reliability in a large and diverse sample (N = 1077; 248 of whom completed all sessions). Using a smartphone-based, mobile AAT, we measured participants’ approach-avoidance tendencies eight times over a period of seven months (one measurement per month) in two distinct stimulus sets (happy/sad expressions and disgusting/neutral stimuli). The mobile AAT’s split-half reliability was adequate for face stimuli (r = .85), but low for disgust stimuli (r = .72). Its test-retest reliability based on a single measurement was poor for either stimulus set (all ICC1s < .3). 
Its test-retest reliability based on the average of all eight measurements was moderately good for face stimuli (ICCk = .73), but low for disgust stimuli (ICCk = .5). Results suggest that single-measurement AATs could be influenced by unexplained temporal fluctuations of approach-avoidance tendencies. These fluctuations could be examined in future studies. Until then, this work suggests that future research using the AAT should rely on multiple rather than single measurements.","link":"/opendata/zech-et-al-2022/"},{"title":"Zerna et al. (2022)","text":"When individuals set goals, they consider the subjective value (SV) of the anticipated reward and the required effort, a trade-off that is of great interest to psychological research. One approach to quantify the SVs of levels of difficulty of a cognitive task is the Cognitive Effort Discounting Paradigm by Westbrook and colleagues (2013). However, it fails to acknowledge the highly individual nature of effort, as it assumes a unidirectional, inverse relationship between task load and SVs. Therefore, it cannot map differences in effort perception that arise from traits like Need for Cognition, since individuals who enjoy effortful cognitive activities likely do not prefer the easiest level. We replicated the analysis of Westbrook and colleagues with an adapted version, the Cognitive and Affective Discounting (CAD) Paradigm. It quantifies SVs without assuming that the easiest level is preferred, thereby enabling the assessment of SVs for tasks without objective order of task load. Results show that many participants preferred a more or the most difficult level. Variance in SVs was best explained by a declining logistic contrast of the n-back levels and by the accuracy of responses, while reaction time as a predictor was highly volatile depending on the preprocessing pipeline. Participants with higher Need for Cognition scores perceived higher n-back levels as less effortful and found them less aversive. 
Effects of Need for Cognition on SVs in lower levels did not reach significance, as group differences only emerged in higher levels. The CAD Paradigm appears to be well suited for assessing and analysing task preferences independent of the supposed objective task difficulty.","link":"/opendata/zerna-et-al-2022/"},{"title":"Zetsche et al. (2023)","text":"Rumination is a well-known risk factor for the onset and recurrence of depressive episodes. Depressed individuals with a tendency to ruminate have been found to exhibit deficits in updating the contents of working memory. Thus, working memory training targeting updating-specific cognitive control processes may bear the potential to reduce ruminative tendencies. This registered clinical trial (ClinicalTrials.gov/NCT03011216) examined the effect of training cognitive control on rumination in the daily lives of clinically depressed individuals. Sixty-five individuals with a current major depressive disorder were randomized to 10 sessions of either cognitive control training (N=31) or placebo training (N=34). Primary outcome measures were the frequency and negativity of ruminative thoughts in the daily lives of participants assessed by a 7-day experience sampling procedure prior to training, after training, and at 3-months follow-up. Secondary outcomes were depressive symptoms, depressive mood, and level of disability. Cognitive control training led to stronger improvements in the trained task than placebo training. There was no transfer of the training effect to a novel cognitive control task. Cognitive control training did not lead to a greater reduction in daily rumination frequency, negativity of ruminative thoughts, or the negative influence of rumination on subsequent affect than the placebo training. There was no training-specific effect on participants’ depressive symptomatology or level of disability. Results join in a picture of mixed findings regarding the effect of cognitive control training on rumination. 
Future research has to identify determinants of beneficial training effects.","link":"/opendata/zetsche-et-al-2023/"},{"title":"Zhang et al. (2023)","text":"Humans are remarkably efficient at decision-making, even in “open-ended” problems where the set of possible actions is too large for exhaustive evaluation. Our success relies, in part, on efficient processes of calling to mind and considering the right candidate actions for evaluation. When this process fails, however, the result is a kind of cognitive puzzle in which the value of a solution or action would be obvious as soon as it is considered, but never gets considered in the first place. Recently, machine learning (ML) architectures have attained or even exceeded human performance on certain kinds of open-ended tasks such as the games of chess and Go. We ask whether the broad architectural principles that underlie ML success in these domains tend to generate similar consideration failures to those observed in humans. We demonstrate a case in which they do, illuminating how humans make open-ended decisions, how this relates to ML approaches to similar problems, and how both architectures lead to characteristic patterns of success and failure.","link":"/opendata/zhang-et-al-2023/"},{"title":"Zhao et al. (2021)","text":"Working memory (WM) deficits have been widely documented in schizophrenia (SZ), and almost all existing studies attributed the deficits to decreased capacity as compared to healthy control (HC) subjects. Recent developments in WM research suggest that other components, such as precision, also mediate behavioral performance. It remains unclear how different WM components jointly contribute to deficits in schizophrenia. 
We measured the performance of 60 SZ (31 females) and 61 HC (29 females) in a classical delay-estimation visual working memory (VWM) task and evaluated several influential computational models proposed in basic science of VWM to disentangle the effect of various memory components. We show that the model assuming variable precision (VP) across items and trials is the best model to explain the performance of both groups. According to the VP model, SZ exhibited abnormally larger variability of allocating memory resources rather than resources or capacity per se. Finally, individual differences in the resource allocation variability predicted variation of symptom severity in SZ, highlighting its functional relevance to schizophrenic pathology. This finding was further verified using distinct visual features and subject cohorts. These results provide an alternative view instead of the widely accepted decreased-capacity theory and highlight the key role of elevated resource allocation variability in generating atypical VWM behavior in schizophrenia. Our findings also shed new light on the utility of Bayesian observer models to characterize mechanisms of mental deficits in clinical neuroscience.","link":"/opendata/zhao-et-al-2021/"},{"title":"Zheng & Mei (2023)","text":"The question of how the brain represents reward prediction errors is central to reinforcement learning and adaptive, goal-directed behavior. Previous studies have revealed prediction error representations in multiple electrophysiological signatures, but it remains elusive whether these electrophysiological correlates underlying prediction errors are sensitive to valence (in a signed form) or to salience (in an unsigned form). One possible reason concerns the loose correspondence between objective probability and subjective prediction resulting from the optimistic bias, that is, the tendency to overestimate the likelihood of encountering positive future events. 
In the present electroencephalography (EEG) study, we approached this question by directly measuring participants’ idiosyncratic, trial-to-trial prediction errors elicited by subjective and objective probabilities across two experiments. We adopted monetary gain and loss feedback in Experiment 1 and positive and negative feedback as communicated by the same zero-value feedback in Experiment 2. We provided electrophysiological evidence in time and time-frequency domains supporting both reward and salience prediction error signals. Moreover, we showed that these electrophysiological signatures were highly flexible and sensitive to an optimistic bias and various forms of salience. Our findings shed new light on multiple representations of prediction error in the human brain, which differ in format and functional role.","link":"/opendata/zheng-mei-2023/"},{"title":"Zhong et al. (2022)","text":"Surviving in an uncertain environment requires not only the ability to select the best action, but also the flexibility to withhold inappropriate actions when the environmental conditions change. Although selecting and withholding actions have been extensively studied in both humans and animals, there is still a lack of consensus on the mechanism underlying these action regulation functions, and more importantly, how they inter-relate. A critical gap impeding progress is the lack of a computational theory that will integrate the mechanisms of action regulation into a unified framework. The current study aims to advance our understanding by developing a neurodynamical computational theory that models the mechanism of action regulation that involves suppressing responses, and predicts how disruption of this mechanism can lead to motor deficits in Parkinson’s disease (PD) patients. 
We tested the model predictions in neurotypical individuals and PD patients in three behavioral tasks that involve free action selection between two opposed directions, action selection in the presence of conflicting information and abandoning an ongoing action when a stop signal is presented. Our results and theory suggest an integrated mechanism of action regulation that affects both action initiation and inhibition. When this mechanism is disrupted, motor behavior is affected, leading to longer reaction times and higher error rates in action inhibition.","link":"/opendata/zhong-et-al-2022/"},{"title":"Zhou et al. (2023)","text":"Theories of moral development propose that empathy is transmitted across individuals, yet the mechanism through which empathy is socially transmitted remains unclear. We conducted three studies to investigate whether, and if so, how observing empathic responses in others affects the empathy of the observer. Our results show that observing empathic or non-empathic responses generates learning signals that respectively increases or decreases empathy ratings of the observer and alters empathy-related responses in the anterior insula (AI), i.e., the same region that correlated with empathy baseline ratings, as well as its functional connectivity with the temporal-parietal junction (TPJ). Together, our findings provide a neurocomputational mechanism for the social transmission of empathy that accounts for changes in individual empathic responses in empathic and non-empathic social environments.","link":"/opendata/zhou-et-al-2023/"},{"title":"Zhu et al. (2019)","text":"Both basal ganglia (BG) and orbitofrontal cortex (OFC) have been widely implicated in social and non-social decision-making. However, unlike OFC damage, BG pathology is not typically associated with disturbances in social functioning. 
Here we studied the behavior of patients with focal lesions to either BG or OFC in a multi-strategy competitive game known to engage these regions. We find that whereas OFC patients are significantly impaired, BG patients show intact learning in the economic game. By contrast, when information about the strategic context is absent, both cohorts are significantly impaired. Computational modeling further shows a preserved ability in BG patients to learn by anticipating and responding to the behavior of others using the strategic context. These results suggest that apparently divergent findings on BG contribution to social decision-making may instead reflect a model where higher-order learning processes are dissociable from trial-and-error learning, and can be preserved despite BG damage.","link":"/opendata/zhu-et-al-2019/"},{"title":"Zhu et al. (2022)","text":"Human cognition is fundamentally noisy. While routinely regarded as a nuisance in experimental investigation, the few studies investigating properties of cognitive noise have found surprising structure. A first line of research has shown that inter-response-time distributions are heavy-tailed. That is, response times between subsequent trials usually change only a small amount, but with occasional large changes. A second, separate, line of research has found that participants’ estimates and response times both exhibit long-range autocorrelations (i.e., 1/f noise). Thus, each judgment and response time not only depends on its immediate predecessor but also on many previous responses. These two lines of research use different tasks and have distinct theoretical explanations: models that account for heavy-tailed response times do not predict 1/f autocorrelations and vice versa. Here, we find that 1/f noise and heavy-tailed response distributions co-occur in both types of tasks. 
We also show that a statistical sampling algorithm, developed to deal with patchy environments, generates both heavy-tailed distributions and 1/f noise, suggesting that cognitive noise may be a functional adaptation to dealing with a complex world.","link":"/opendata/zhu-et-al-2022/"},{"title":"Ziaka & Protopapas (2023)","text":"Cognitive control has been typically examined using single-item tasks. This has implications for the generalizability of theories of control implementation. Previous studies have revealed that different control demands are posed by tasks depending on whether they present stimuli individually (i.e., single-item) or simultaneously in array format (i.e., multi-item). In the present study we tracked within-task performance in single-item and multi-item Stroop tasks using simultaneous pupillometry, gaze, and behavioral response measures, aiming to explore the implications of format differences for cognitive control. The results indicated within-task performance decline in the multi-item version of the Stroop task, accompanied by pupil constriction and dwell time increase, in both the incongruent and the neutral condition. In contrast, no performance decline or dwell time increase was observed in the course of the single-item version of the task. These findings point to capacity constraints beyond the explanatory range of current theories of cognitive control, with implications for cognitive control research, and highlight the need for better understanding the cognitive demands of multi-item tasks.","link":"/opendata/ziaka-protopapas-2023/"},{"title":"Zika et al. (2023)","text":"Updating beliefs in changing environments can be driven by gradually adapting expectations or by relying on inferred hidden states (i.e. contexts), and changes therein. Previous work suggests that increased reliance on context could underlie fear relapse phenomena that hinder clinical treatment of anxiety disorders. 
We test whether trait anxiety variations in a healthy population influence how much individuals rely on hidden-state inference. In a Pavlovian learning task, participants observed cues that predicted an upcoming electrical shock with repeatedly changing probability, and were asked to provide expectancy ratings on every trial. We show that trait anxiety is associated with steeper expectation switches after contingency reversals and reduced oddball learning. Furthermore, trait anxiety is related to better fit of a state-inference model, compared to a gradual-learning model, when contingency changes are large. Our findings support previous work suggesting hidden-state inference as a mechanism behind anxiety-related fear relapse phenomena.","link":"/opendata/zika-et-al-2023/"},{"title":"Zilker et al. (2020)","text":"The canonical conclusion from research on age differences in risky choice is that older adults are more risk averse than younger adults, at least in choices involving gains. Most of the evidence for this conclusion derives from studies that used a specific type of choice problem: choices between a safe and a risky option. However, safe and risky options differ not only in the degree of risk but also in the amount of information to be processed; that is, in their complexity. In both an online and a lab experiment, we demonstrate that differences in option complexity can be a key driver of age differences in risk attitude. When the complexity of the safe option is increased, older adults no longer seem more risk averse than younger adults (in gains). Using computational modeling, we test mechanisms that potentially underlie the effect of option complexity. The results show that participants are not simply averse to complexity, and that increasing the complexity of safe options does more than simply make responses more noisy. 
Rather, differences in option complexity affect the processing of attribute information: whereas the availability of a simple safe option is associated with the distortion of probability weighting and lower outcome sensitivity, these effects are attenuated when both options are more similar in complexity. We also dissociate these effects of option complexity from an effect of certainty. Our findings may also have implications for age differences in other decision phenomena (e.g., framing effect, loss aversion, immediacy effect).","link":"/opendata/zilker-et-al-2020/"},{"title":"Zilker & Pachur (2023)","text":"Probability weighting is one of the most powerful theoretical constructs in descriptive models of risky choice and constitutes a central component of cumulative prospect theory (CPT). Probability weighting has been shown to be related to two facets of attention allocation: one analysis showed that differences in the shape of CPT’s probability-weighting function are linked to differences in how attention is allocated across attributes (i.e., probabilities vs. outcomes); another analysis (that used a different measure of attention) showed a link between probability weighting and differences in how attention is allocated across options. However, the relationship between these two links is unclear. We investigate to what extent attribute attention and option attention independently contribute to probability weighting. Reanalyzing data from a process-tracing study, we first demonstrate links between probability weighting and both attribute attention and option attention within the same data set and the same measure of attention. We then find that attribute attention and option attention are at best weakly related and have independent and distinct effects on probability weighting. Moreover, deviations from linear weighting mainly emerged when attribute attention or option attention were imbalanced. 
Our analyses enrich the understanding of the cognitive underpinnings of preferences and illustrate that similar probability-weighting patterns can be associated with very different attentional policies. This complicates an unambiguous psychological interpretation of psycho-economic functions. Our findings indicate that cognitive process models of decision making should aim to concurrently account for the effects of different facets of attention allocation on preference. In addition, we argue that the origins of biases in attribute attention and option attention need to be better understood.","link":"/opendata/zilker-pachur-2023/"},{"title":"Ziv et al. (2022)","text":"The purpose of this pre-registered study was to examine whether asking gamers and non-gamers about their video game playing habits before or after they performed computerized cognitive-motor tasks affects their performance of those tasks. We recruited 187 participants from an online participants’ recruitment platform. Out of those participants, 131 matched our criteria as gamers or non-gamers. They were then divided to two subgroups, and performed a choice-RT task, a Simon task, an alternate task-switching task, and a digit span memory task either before or after answering a video-game playing habits questionnaire. The results showed that gamers who completed a video-games questionnaire before performing the tasks had faster reaction times (RTs) in the Simon task compared with gamers who answered the questionnaire after performing the tasks. In contrast, non-gamers who answered the questionnaire before the task had slower RTs in the Simon task and the alternate task-switching task compared with non-gamers who answered the questionnaire after performing the tasks. The results suggest that answering a video-games questionnaire before the start of a study can lead to a response expectancy effect-positive for gamers and negative for non-gamers. 
This may bias findings of studies examining video games and the performance of cognitive-motor tasks.","link":"/opendata/ziv-et-al-2022/"},{"title":"Zivony & Eimer (2023)","text":"When observers have to identify an object embedded in a rapid serial visual presentation (RSVP) stream, they often erroneously report the identity of a distractor instead of the target (distractor intrusion). In two experiments, we examined whether these intrusion errors are associated with the speed of attentional engagement. Participants reported the identity of target digits indicated by shape selection cues. To manipulate the speed of engagement, targets appeared either within a single RSVP stream or unpredictably in one of two streams. Objects that followed the selection cue were reported more frequently when engagement was delayed (two streams), whereas the probability of reporting objects preceding the cue was higher when engagement was faster (single stream). These results show that distractor intrusions are closely linked to the allocation of selective attention in time, making the intrusion paradigm a useful tool for research into the temporal dynamics of attention. 
They also provide new evidence for the idea that attentional selectivity operates within brief periods of perceptual enhancement (attentional episodes), facilitating the processing of all objects within this period, regardless of their status as targets or distractors.","link":"/opendata/zivony-eimer-2023/"},{"title":"Zorowitz & Niv (2023)","text":"Data from N=149 participants who completed a gamified version of the two-step task under one of three conditions: (1) stimuli from both first- and second-state choices were randomly assigned to right/left positions on the screen on every trial; (2) stimuli from both first- and second-state choices were assigned fixed right/left positions on the screen (i.e., unchanging across trials); or (3) stimuli from first-state choices were randomly assigned to right/left positions on the screen on every trial. Second-state stimuli were assigned fixed right/left positions on the screen (i.e., unchanging across trials).","link":"/opendata/zorowitz-niv-2023/"},{"title":"Zorowitz et al. (2023a)","text":"Matrix reasoning tasks are among the most widely used measures of cognitive ability in the behavioral sciences, but the lack of matrix reasoning tests in the public domain complicates their use. Here we present an extensive investigation and psychometric validation of the matrix reasoning item bank (MaRs-IB), an open-access set of matrix reasoning items. In a first study, we calibrate the psychometric functioning of the items in the MaRs-IB in a large sample of adult participants (N=1501). Using additive multilevel item structure models, we establish that the MaRs-IB has many desirable psychometric properties: its items span a wide range of difficulty, possess medium-to-large levels of discrimination, and exhibit robust associations between item complexity and difficulty. However, we also find that item clones are not always psychometrically equivalent and cannot be assumed to be exchangeable. 
In a second study, we demonstrate how experimenters can use the estimated item parameters to design new matrix reasoning tests using optimal item assembly. Specifically, we design and validate two new sets of test forms in an independent sample of adults (N=600). We find these new tests possess good reliability and convergent validity with an established measure of matrix reasoning. We hope that the materials and results made available here will encourage experimenters to use the MaRs-IB in their research.","link":"/opendata/zorowitz-et-al-2023a/"},{"title":"Zorowitz et al. (2023b)","text":"A common research design in the field of computational psychiatry involves leveraging the power of online participant recruitment to assess correlations between behavior in cognitive tasks and the self-reported severity of psychiatric symptoms in large, diverse samples. Although large online samples have many advantages for psychiatric research, some potential pitfalls of this research design are not widely understood. Here we detail circumstances in which entirely spurious correlations may arise between task behavior and symptom severity as a result of inadequate screening of careless or low-effort responding on psychiatric symptom surveys. Specifically, since many psychiatric symptom surveys have asymmetric ground-truth score distributions in the general population, participants who respond carelessly on these surveys will show apparently elevated symptom levels. If these participants are similarly careless in their task performance, and are not excluded from analysis, this may result in a spurious association between greater symptom scores and worse behavioral task performance. Here, we demonstrate exactly this pattern of results in two independent samples of participants (total N = 779) recruited online to complete a self-report symptom battery and one of two common cognitive tasks. 
We show that many behavior-symptom correlations are entirely abolished when participants flagged for careless responding on surveys are excluded from analysis. We also show that exclusion based on task performance alone is often insufficient to prevent these spurious correlations. Of note, we demonstrate that false-positive rates for these spurious correlations increase with sample size, contrary to common assumptions. We offer guidance on how researchers using this general experimental design can guard against this issue in future research; in particular, we recommend the adoption of screening methods for self-report measures that are currently uncommon in this field.","link":"/opendata/zorowitz-et-al-2023b/"},{"title":"Zorowitz et al. (2023c)","text":"The Pavlovian go/no-go task is commonly used to measure individual differences in Pavlovian biases and their interaction with instrumental learning. However, prior research has found suboptimal reliability for computational model-based performance measures for this task, limiting its usefulness in individual-differences research. These studies did not make use of several strategies previously shown to enhance task-measure reliability (e.g., task gamification, hierarchical Bayesian modeling for model estimation). Here we investigated if such approaches could improve the task’s reliability. Across two experiments, we recruited two independent samples of adult participants (N=103, N=110) to complete a novel, gamified version of the Pavlovian go/no-go task multiple times over several weeks. We used hierarchical Bayesian modeling to derive reinforcement learning model-based indices of participants’ task performance, and additionally to estimate the reliability of these measures. In Experiment 1, we observed considerable and unexpected practice effects, with most participants reaching near-ceiling levels of performance with repeat testing. 
Consequently, the test-retest reliability of some model parameters was unacceptable (range: 0.379–0.973). In Experiment 2, participants completed a modified version of the task designed to lessen these practice effects. We observed greatly reduced practice effects and improved estimates of the test-retest reliability (range: 0.696–0.989). The results demonstrate that model-based measures of performance on the Pavlovian go/no-go task can reach levels of reliability sufficient for use in individual-differences research. However, additional investigation is necessary to validate the modified version of the task in other populations and settings.","link":"/opendata/zorowitz-et-al-2023c/"}],"tags":[{"name":"memory","slug":"memory","link":"/opendata/tags/memory/"},{"name":"explore/exploit","slug":"explore-exploit","link":"/opendata/tags/explore-exploit/"},{"name":"2-arm bandit","slug":"2-arm-bandit","link":"/opendata/tags/2-arm-bandit/"},{"name":"pavlovian go/no-go task","slug":"pavlovian-go-no-go-task","link":"/opendata/tags/pavlovian-go-no-go-task/"},{"name":"schizotypy","slug":"schizotypy","link":"/opendata/tags/schizotypy/"},{"name":"m/eeg","slug":"m-eeg","link":"/opendata/tags/m-eeg/"},{"name":"social decision making","slug":"social-decision-making","link":"/opendata/tags/social-decision-making/"},{"name":"cognitive control","slug":"cognitive-control","link":"/opendata/tags/cognitive-control/"},{"name":"planning","slug":"planning","link":"/opendata/tags/planning/"},{"name":"sequential sampling","slug":"sequential-sampling","link":"/opendata/tags/sequential-sampling/"},{"name":"probability estimation","slug":"probability-estimation","link":"/opendata/tags/probability-estimation/"},{"name":"metacognition","slug":"metacognition","link":"/opendata/tags/metacognition/"},{"name":"confidence","slug":"confidence","link":"/opendata/tags/confidence/"},{"name":"perceptual decision 
making","slug":"perceptual-decision-making","link":"/opendata/tags/perceptual-decision-making/"},{"name":"pavlovian conditioning","slug":"pavlovian-conditioning","link":"/opendata/tags/pavlovian-conditioning/"},{"name":"eye-tracking","slug":"eye-tracking","link":"/opendata/tags/eye-tracking/"},{"name":"information seeking","slug":"information-seeking","link":"/opendata/tags/information-seeking/"},{"name":"paranoia","slug":"paranoia","link":"/opendata/tags/paranoia/"},{"name":"time perception","slug":"time-perception","link":"/opendata/tags/time-perception/"},{"name":"database","slug":"database","link":"/opendata/tags/database/"},{"name":"multi-arm bandit","slug":"multi-arm-bandit","link":"/opendata/tags/multi-arm-bandit/"},{"name":"restless bandit","slug":"restless-bandit","link":"/opendata/tags/restless-bandit/"},{"name":"punishment","slug":"punishment","link":"/opendata/tags/punishment/"},{"name":"anxiety","slug":"anxiety","link":"/opendata/tags/anxiety/"},{"name":"depression","slug":"depression","link":"/opendata/tags/depression/"},{"name":"sequential decision making","slug":"sequential-decision-making","link":"/opendata/tags/sequential-decision-making/"},{"name":"approach/avoidance","slug":"approach-avoidance","link":"/opendata/tags/approach-avoidance/"},{"name":"foraging","slug":"foraging","link":"/opendata/tags/foraging/"},{"name":"risk sensitivity","slug":"risk-sensitivity","link":"/opendata/tags/risk-sensitivity/"},{"name":"development","slug":"development","link":"/opendata/tags/development/"},{"name":"adolescence","slug":"adolescence","link":"/opendata/tags/adolescence/"},{"name":"working memory","slug":"working-memory","link":"/opendata/tags/working-memory/"},{"name":"configural 
learning","slug":"configural-learning","link":"/opendata/tags/configural-learning/"},{"name":"compulsivity","slug":"compulsivity","link":"/opendata/tags/compulsivity/"},{"name":"tdcs","slug":"tdcs","link":"/opendata/tags/tdcs/"},{"name":"agency","slug":"agency","link":"/opendata/tags/agency/"},{"name":"reversal learning","slug":"reversal-learning","link":"/opendata/tags/reversal-learning/"},{"name":"economic game","slug":"economic-game","link":"/opendata/tags/economic-game/"},{"name":"range adaptation","slug":"range-adaptation","link":"/opendata/tags/range-adaptation/"},{"name":"counterfactual feedback","slug":"counterfactual-feedback","link":"/opendata/tags/counterfactual-feedback/"},{"name":"free association","slug":"free-association","link":"/opendata/tags/free-association/"},{"name":"narrative","slug":"narrative","link":"/opendata/tags/narrative/"},{"name":"test-retest","slug":"test-retest","link":"/opendata/tags/test-retest/"},{"name":"volatility","slug":"volatility","link":"/opendata/tags/volatility/"},{"name":"compound generalization","slug":"compound-generalization","link":"/opendata/tags/compound-generalization/"},{"name":"mood","slug":"mood","link":"/opendata/tags/mood/"},{"name":"decisions from description","slug":"decisions-from-description","link":"/opendata/tags/decisions-from-description/"},{"name":"effort","slug":"effort","link":"/opendata/tags/effort/"},{"name":"continuous outcomes","slug":"continuous-outcomes","link":"/opendata/tags/continuous-outcomes/"},{"name":"stress","slug":"stress","link":"/opendata/tags/stress/"},{"name":"two-step","slug":"two-step","link":"/opendata/tags/two-step/"},{"name":"mouse-tracking","slug":"mouse-tracking","link":"/opendata/tags/mouse-tracking/"},{"name":"iowa gambling task","slug":"iowa-gambling-task","link":"/opendata/tags/iowa-gambling-task/"},{"name":"temporal discounting","slug":"temporal-discounting","link":"/opendata/tags/temporal-discounting/"},{"name":"pathological 
gambling","slug":"pathological-gambling","link":"/opendata/tags/pathological-gambling/"},{"name":"mental imagery","slug":"mental-imagery","link":"/opendata/tags/mental-imagery/"},{"name":"time pressure","slug":"time-pressure","link":"/opendata/tags/time-pressure/"},{"name":"value construction","slug":"value-construction","link":"/opendata/tags/value-construction/"},{"name":"stroop","slug":"stroop","link":"/opendata/tags/stroop/"},{"name":"probabilistic selection task","slug":"probabilistic-selection-task","link":"/opendata/tags/probabilistic-selection-task/"},{"name":"parkinson's","slug":"parkinson-s","link":"/opendata/tags/parkinson-s/"},{"name":"eeg","slug":"eeg","link":"/opendata/tags/eeg/"},{"name":"latent cause inference","slug":"latent-cause-inference","link":"/opendata/tags/latent-cause-inference/"},{"name":"self-report only","slug":"self-report-only","link":"/opendata/tags/self-report-only/"},{"name":"reward visibility","slug":"reward-visibility","link":"/opendata/tags/reward-visibility/"},{"name":"generalization","slug":"generalization","link":"/opendata/tags/generalization/"},{"name":"ecological momentary assessment","slug":"ecological-momentary-assessment","link":"/opendata/tags/ecological-momentary-assessment/"},{"name":"mania","slug":"mania","link":"/opendata/tags/mania/"},{"name":"horizons task","slug":"horizons-task","link":"/opendata/tags/horizons-task/"},{"name":"impulsivity","slug":"impulsivity","link":"/opendata/tags/impulsivity/"},{"name":"psychosis","slug":"psychosis","link":"/opendata/tags/psychosis/"},{"name":"sequence learning","slug":"sequence-learning","link":"/opendata/tags/sequence-learning/"},{"name":"serial reaction time","slug":"serial-reaction-time","link":"/opendata/tags/serial-reaction-time/"},{"name":"inverse RL","slug":"inverse-RL","link":"/opendata/tags/inverse-RL/"},{"name":"avoidance","slug":"avoidance","link":"/opendata/tags/avoidance/"},{"name":"clinical 
trial","slug":"clinical-trial","link":"/opendata/tags/clinical-trial/"},{"name":"balloon analog risk task","slug":"balloon-analog-risk-task","link":"/opendata/tags/balloon-analog-risk-task/"},{"name":"habits","slug":"habits","link":"/opendata/tags/habits/"},{"name":"outcome devaluation","slug":"outcome-devaluation","link":"/opendata/tags/outcome-devaluation/"},{"name":"autism","slug":"autism","link":"/opendata/tags/autism/"},{"name":"attention","slug":"attention","link":"/opendata/tags/attention/"},{"name":"spatial navigation","slug":"spatial-navigation","link":"/opendata/tags/spatial-navigation/"},{"name":"mindfulness","slug":"mindfulness","link":"/opendata/tags/mindfulness/"},{"name":"self-judgment","slug":"self-judgment","link":"/opendata/tags/self-judgment/"},{"name":"ptsd","slug":"ptsd","link":"/opendata/tags/ptsd/"},{"name":"adhd","slug":"adhd","link":"/opendata/tags/adhd/"},{"name":"mind wandering","slug":"mind-wandering","link":"/opendata/tags/mind-wandering/"},{"name":"stop signal","slug":"stop-signal","link":"/opendata/tags/stop-signal/"},{"name":"monetary incentive delay task","slug":"monetary-incentive-delay-task","link":"/opendata/tags/monetary-incentive-delay-task/"},{"name":"go/no-go task","slug":"go-no-go-task","link":"/opendata/tags/go-no-go-task/"},{"name":"aging","slug":"aging","link":"/opendata/tags/aging/"},{"name":"trauma","slug":"trauma","link":"/opendata/tags/trauma/"},{"name":"visual perception","slug":"visual-perception","link":"/opendata/tags/visual-perception/"},{"name":"auditory perception","slug":"auditory-perception","link":"/opendata/tags/auditory-perception/"},{"name":"causal reasoning","slug":"causal-reasoning","link":"/opendata/tags/causal-reasoning/"},{"name":"longitudinal","slug":"longitudinal","link":"/opendata/tags/longitudinal/"},{"name":"pavlovian instrumental 
transfer","slug":"pavlovian-instrumental-transfer","link":"/opendata/tags/pavlovian-instrumental-transfer/"},{"name":"curiosity","slug":"curiosity","link":"/opendata/tags/curiosity/"},{"name":"abstract reasoning","slug":"abstract-reasoning","link":"/opendata/tags/abstract-reasoning/"},{"name":"executive functioning","slug":"executive-functioning","link":"/opendata/tags/executive-functioning/"},{"name":"anchoring","slug":"anchoring","link":"/opendata/tags/anchoring/"},{"name":"substance use","slug":"substance-use","link":"/opendata/tags/substance-use/"},{"name":"categorization","slug":"categorization","link":"/opendata/tags/categorization/"},{"name":"implicit association test","slug":"implicit-association-test","link":"/opendata/tags/implicit-association-test/"},{"name":"multisensory integration","slug":"multisensory-integration","link":"/opendata/tags/multisensory-integration/"}],"categories":[]} \ No newline at end of file +{"pages":[{"title":"About OpenData","text":"OpenData is a database of publicly available behavioral datasets. To browse the database, click on the links above or use the search bar at the top-right of this page. What is the goal of OpenData? The goal of this project is simply to make it easier for researchers to find and use publicly available behavioral data as part of research. There's already so much out there that can be used to: Test new hypotheses or models Calculate effect sizes for power analysis Estimate meta-analytic effects across studies What is the scope of OpenData? The scope of this project is to catalogue any and all open datasets involving experimental or cognitive tasks (e.g., Stroop, delay discounting, 2-arm bandits). Datasets involving more naturalistic behaviors are also welcomed. The only firm requirement is that trial-level data must be available. Who maintains OpenData? This project is maintained by Sam Zorowitz and the Data Science and Sharing Team at the National Institute of Mental Health. How can I add a dataset? 
Please see the contributing page. How can I report an issue? Please open an issue on our GitHub or directly contact the maintainer.","link":"/opendata/about/index.html"},{"title":"Contributing to OpenData","text":"Contributions to OpenData are welcome! All of the code is managed through the GitHub repository. How to submit a dataset There are three main ways you can submit a dataset for entry to OpenData, which are described in turn below. Note that they are ranked in order of preference, from most preferred to least preferred, by the maintainers of OpenData. 1. Fill out the Google form Have a dataset to submit? Fill out this form: https://forms.gle/N87Dqhs73vbfoPec8 2. Submit a pull request To directly add a dataset yourself, you can open a pull request on GitHub. First you should fork the OpenData library via GitHub and make modifications on your fork. The OpenData database is written in Markdown using a standardized, easy-to-copy template. Once your modification is complete, submit a pull request to merge your changes into the docs branch of OpenData. Pull requests will be reviewed by the maintainers. 3. Open an issue To advertise a dataset for entry into the database, you can open a new issue via GitHub. Please provide a link to a repository for the data and, if available, a link to a corresponding manuscript describing the data.","link":"/opendata/contribute/index.html"}],"posts":[{"title":"Bein et al. (2021)","text":"When our experience violates our predictions, it is adaptive to update our knowledge to promote a more accurate representation of the world and facilitate future predictions. Theoretical models propose that these mnemonic prediction errors should be encoded into a distinct memory trace to prevent interference with previous, conflicting memories. We investigated this proposal by repeatedly exposing participants to pairs of sequentially presented objects (A → B), thus evoking expectations. 
Then, we violated participants’ expectations by replacing the second object in the pairs with a novel object (A → C). The following item memory test required participants to discriminate between identical old items and similar lures, thus testing detailed and distinctive item memory representations. In two experiments, mnemonic prediction errors enhanced item memory: Participants correctly identified more old items as old when those items violated expectations during learning, compared with items that did not violate expectations. This memory enhancement for C items was only observed when participants later showed intact memory for the related A → B pairs, suggesting that strong predictions are required to facilitate memory for violations. Following up on this, a third experiment reduced prediction strength prior to violation and subsequently eliminated the memory advantage of violations. Interestingly, mnemonic prediction errors did not increase gist-based mistakes of identifying old items as similar lures or identifying similar lures as old. Enhanced item memory in the absence of gist-based mistakes suggests that violations enhanced memory for item details, which could be mediated via distinct memory traces. Together, these results advance our knowledge of how mnemonic prediction errors promote memory formation.","link":"/opendata/bein-et-al-2021/"},{"title":"Abeles et al. (2023)","text":"People show vast variability in skill performance and learning. What determines a person’s individual performance and learning ability? In this study we explored the possibility of predicting participants’ future performance and learning, based on their behavior during initial skill acquisition. We recruited a large online multi-session sample of participants performing a sequential tapping skill learning task. 
We used machine learning to predict future performance and learning from raw data acquired during initial skill acquisition, and from engineered features calculated from the raw data. Strong correlations were observed between initial and final performance, and individual learning was not predicted. While canonical experimental tasks developed and selected to detect average effects may constrain insights regarding individual variability, development of novel tasks may shed light on the underlying mechanism of individual skill learning, relevant for real-life scenarios.","link":"/opendata/abeles-et-al-2023/"},{"title":"Abir et al. (2023)","text":"The purpose of exploration is to reduce goal-relevant uncertainty. This can be achieved by choosing to explore the parts of the environment one is most uncertain about. Humans, however, often choose to avoid uncertainty. How do humans balance approaching and avoiding uncertainty during exploration? To answer this question, we developed a task requiring participants to explore a simulated environment towards a clear goal. We compared human choices to the predictions of the optimal exploration policy and a hierarchy of simpler strategies. We found that participants generally explored the object they were more uncertain about. However, when overall uncertainty about choice options was high, participants avoided objects they were more uncertain about, learning instead about better known objects. We examined reaction times and individual differences to understand the costs and benefits of this strategy. We conclude that balancing approaching and avoiding uncertainty ameliorates the costs of exploration in a resource-rational manner.","link":"/opendata/abir-et-al-2023/"},{"title":"Abivardi et al. (2023)","text":"Detecting unusual auditory stimuli is crucial for discovering potential threat. 
Locus coeruleus (LC), which coordinates attention, and the amygdala, which is implicated in resource prioritization, both respond to deviant sounds. Evidence concerning their interaction, however, is sparse. Seeking to elucidate whether the human amygdala affects estimated LC activity during this process, we recorded pupillary responses during an auditory oddball and an illuminance change task, in a female with bilateral amygdala lesions (BG) and in n = 23 matched controls. Neural input in response to oddballs was estimated via pupil dilation, a reported proxy of LC activity, harnessing a linear time-invariant system and an individual pupillary dilation response function (IRF) inferred from illuminance responses. While oddball recognition remained intact, estimated LC input for BG was compacted to an impulse rather than the prolonged waveform seen in healthy controls. This impulse had the earliest response mean and highest kurtosis in the sample. As a secondary finding, BG showed enhanced early pupillary constriction to darkness. These findings suggest that LC-amygdala communication is required to sustain LC activity in response to anomalous sounds. Our results provide further evidence for amygdala involvement in processing deviant sound targets, although it is not required for their behavioral recognition.","link":"/opendata/abivardi-et-al-2023/"},{"title":"Adkins et al. (2021)","text":"Incentives can be used to increase motivation, leading to better learning and performance on skilled motor tasks. Prior work has shown that monetary punishments enhance on-line performance while equivalent monetary rewards enhance off-line skill retention. However, a large body of literature on loss aversion has shown that losses are treated as larger than equivalent gains. The divergence between the effects of punishments and reward on motor learning could be due to perceived differences in incentive value rather than valence per se. 
We test this hypothesis by manipulating incentive value and valence while participants trained to perform motor sequences. Consistent with our hypothesis, we found that large reward enhanced on-line performance but impaired the ability to retain the level of performance achieved during training. However, we also found that on-line performance was better with reward than punishment and that the effect of increasing incentive value was more linear with reward (small, medium, large) while the effect of value was more binary with punishment (large vs not large). These results suggest that there are differential effects of punishment and reward on motor learning and that these effects of valence are unlikely to be driven by differences in the subjective magnitude of gains and losses.","link":"/opendata/adkins-et-al-2021/"},{"title":"Alameda et al. (2023)","text":"Throughout the day, humans show natural fluctuations in arousal that impact cognitive function. To study the behavioural dynamics of cognitive control during high and low arousal states, healthy participants performed an auditory conflict task during high-intensity physical exercise (N= 39) or drowsiness (N= 33). In line with the pre-registered hypothesis, conflict and conflict adaptation effects were preserved during both altered arousal states. Overall task performance was markedly poorer during low arousal, but not for high arousal. Modelling behavioural dynamics with drift-diffusion analyses revealed evidence accumulation and non-decision time decelerated, and decisional boundaries became wider during low arousal, whereas high arousal was unexpectedly associated with a decrease in the interference of task-irrelevant information processing. 
These findings show how arousal differentially modulates cognitive control at both sides of normal alertness, and further validate drowsiness and physical exercise as key experimental models to disentangle the effects of physiological fluctuations on cognitive dynamics. Statement of Relevance: The variability in arousal and alertness that we naturally experience in our everyday activities is rarely considered in cognitive neuroscience frameworks. Here, we presented a Simon task where the word “left” or “right” was heard through the participant’s right or left ear while either exercising at high intensity or falling asleep, in order to map the behavioural dynamics of cognitive control on both sides of the arousal spectrum. We disentangled different elements involved in decision-making with computational modelling analyses. While basic effects of conflict in cognitive control were preserved regardless of the arousal state, high and low arousal seem to differentially impact decision-making processes. These results emphasise that naturally inducing drowsiness and physical exercise can provide a suitable model to test the resilience of decision-making processes when challenged by arousal, and show the resilience of cognitive control mechanisms in the face of physiological fluctuations.","link":"/opendata/alameda-et-al-2023/"},{"title":"Albrecht et al. (2016)","text":"The negative symptoms of schizophrenia (SZ) are associated with a pattern of reinforcement learning (RL) deficits likely related to degraded representations of reward values. However, the RL tasks used to date have required active responses to both reward and punishing stimuli. Pavlovian biases have been shown to affect performance on these tasks through invigoration of action to reward and inhibition of action to punishment, and may be partially responsible for the effects found in patients. 
Forty-five patients with schizophrenia and 30 demographically-matched controls completed a four-stimulus reinforcement learning task that crossed action (Go or NoGo) and the valence of the optimal outcome (reward or punishment-avoidance), such that all combinations of action and outcome valence were tested. Behaviour was modelled using a six-parameter RL model and EEG was simultaneously recorded. Patients demonstrated a reduction in Pavlovian performance bias that was evident in a reduced Go bias across the full group. In a subset of patients administered clozapine, the reduction in Pavlovian bias was enhanced. The reduction in Pavlovian bias in SZ patients was accompanied by feedback processing differences at the time of the P3a component. The reduced Pavlovian bias in patients is suggested to be due to reduced fidelity in the communication between striatal regions and frontal cortex. It may also partially account for previous findings of poorer Go-learning in schizophrenia where Go responses or Pavlovian consistent responses are required for optimal performance. An attenuated P3a component dynamic in patients is consistent with a view that deficits in operant learning are due to impairments in adaptively using feedback to update representations of stimulus value.","link":"/opendata/albrecht-et-al-2016/"},{"title":"Aldridge-Waddon et al. (2022)","text":"The feelings of reward associated with social interaction help to motivate social behaviour and influence preferences for different types of social contact. In two studies conducted in a general population sample, we investigated self-reported and experimentally-assessed social reward processing in personality spectra with prominent interpersonal features, namely schizotypy and psychopathy. Study 1 (n = 154) measured social reward processing using the Social Reward Questionnaire, and a modified version of a Monetary and Social Incentive Delay Task. 
Study 2 (n = 42; a subsample of Study 1) investigated social reward processing using a Social Reward Subtype Incentive Delay Task. Our results show that schizotypy (specifically Cognitive-Perceptual dimension) and psychopathy (specifically Lifestyle dimension) are associated with diverging responses to social scenarios involving large gatherings or meeting new people (Sociability), with reduced processing in schizotypy and heightened processing in psychopathy. No difference, however, occurred for other social scenarios, with similar patterns of increased antisocial (Negative Social Potency) and reduced prosocial (Admiration, Sociability) reward processing across schizotypy and psychopathy dimensions. Our findings contribute new knowledge on social reward processing within these personality spectra and, with the important exception of Sociability, highlight potentially converging patterns of social reward processing in association with schizotypy and psychopathy.","link":"/opendata/aldridge-waddon-et-al-2022/"},{"title":"Algermissen et al. (2021)","text":"Action selection is biased by the valence of anticipated outcomes. To assess mechanisms by which these motivational biases are expressed and controlled, we measured simultaneous EEG-fMRI during a motivational Go/NoGo learning task (N = 36), leveraging the temporal resolution of EEG and subcortical access of fMRI. VmPFC BOLD encoded cue valence, importantly predicting trial-by-trial valence-driven response speed differences and EEG theta power around cue onset. In contrast, striatal BOLD encoded selection of active Go responses and correlated with theta power around response time. Within trials, theta power ramped in the fashion of an evidence accumulation signal for the value of making a Go response, capturing the faster responding to reward cues.
Our findings reveal a dual nature of midfrontal theta power, with early components reflecting the vmPFC contribution to motivational biases, and late components reflecting their striatal translation into behavior, in line with recent influential ‘value of work’ theories of striatal processing.","link":"/opendata/algermissen-et-al-2021/"},{"title":"Alister et al. (2022)","text":"The gaze cueing effect is the tendency for people to respond faster to targets appearing at locations gazed at by others compared to locations gazed away from by others. The effect is robust, widely studied, and is an influential finding within social cognition. However, much is still unknown about the cognitive processes that drive this effect. Formal evidence accumulation models provide the dominant theoretical account of the cognitive processes underlying speeded decision making but have never been applied to gaze cueing research and rarely to the study of social cognition more broadly. In this study, using a combination of individual-level and hierarchical computational modelling techniques, we applied Evidence Accumulation Models to gaze and arrow cueing data (four data sets total, n = 171, 139,001 trials) for the first time to 1) identify which cognitive mechanisms underlie the gaze cueing effect, and 2) see whether these processes could be considered the same as those that underlie arrow cueing effects. At the group level, people were best described by an attentional orienting mechanism rather than higher-order decision bias or information processing mechanisms. However, we found evidence for individual differences such that not everyone was best described by an attentional orienting mechanism.
Further, the same people who were best described by an attentional orienting mechanism for gaze cues tended not to be best described by that same mechanism for arrow cues, suggesting these cueing effects may induce different responses within the same people – although we interpret this finding with caution.","link":"/opendata/alister-et-al-2022/"},{"title":"Almeras et al. (2022)","text":"Exploring novel environments through sequential sampling is essential for efficient decision-making under uncertainty. In the laboratory, human exploration has been studied in situations where exploration is traded against reward maximisation. By design, these ‘explore-exploit’ dilemmas confound the behavioural characteristics of exploration with those of the trade-off itself. Here we designed a sequential sampling task where exploration can be studied and compared in the presence and absence of trade-off with exploitation. Detailed model-based analyses of choice behaviour revealed specific exploration patterns arising in situations where information seeking is not traded against reward seeking. Human choices are directed toward the most uncertain option available, but only after an initial sampling phase consisting of choice streaks from each novel option. These findings outline competing cognitive pressures on information seeking: the repeated sampling of the current option (for hypothesis testing), and the directed sampling of the most uncertain option available (for structure mapping).","link":"/opendata/almeras-et-al-2022/"},{"title":"Amir et al. (2022)","text":"People have limited computational resources, yet they make complex strategic decisions over enormous spaces of possibilities. How do people efficiently search spaces with combinatorially branching paths? Here, we study players’ search strategies for a winning move in a “k-in-a-row” game. 
We find that players use scoring strategies to prune the search space and augment this pruning by a “shutter” heuristic that focuses the search on the paths emanating from their previous move. This strong pruning has its costs: both computational simulations and behavioral data indicate that the shutter size is correlated with players’ blindness to their opponent’s winning moves. However, simulations of the search while varying the shutter size, complexity levels, noise levels, branching factor, and computational limitations indicate that despite its costs, a narrow shutter strategy is the dominant strategy for most of the parameter space. Finally, we show that in the presence of computational limitations, the shutter heuristic enhances the performance of deep learning networks in these end-game scenarios. Together, our findings suggest a novel adaptive heuristic that benefits search in a vast space of possibilities of a strategic game.","link":"/opendata/amir-et-al-2022/"},{"title":"Amsalem et al. (2023)","text":"Statistical learning (SL), the extraction of regularities embedded in the environment, is often viewed as a fundamental and effortless process. However, whether spatial SL requires resources, or whether it can operate in parallel to other demands, is still not clear. To examine this issue, we tested spatial SL using the standard lab experiment under concurrent demands: high- and low-cognitive load (Experiment 1) and spatial memory load (Experiment 2) during the familiarization phase. We found that any type of high-load demands during the familiarization abolished learning. Experiment 3 compared SL under spatial low-load and no-load. We found robust learning in the no-load condition that was dramatically reduced in the low-load condition. Finally, we compared a no-load condition with a very low-load, infrequent dot-probe condition that posed minimal demands while still requiring attention to the display (Experiment 4).
The results showed, once again, that any concurrent task during the familiarization phase largely impaired spatial SL. Taken together, we conclude that spatial SL requires resources, a finding that challenges the view that the extraction of spatial regularities is automatic and implicit and suggests that this fundamental learning process is not as effortless as was typically assumed. We further discuss the practical and methodological implications of these findings.","link":"/opendata/amsalem-et-al-2023/"},{"title":"Antony et al. (2022)","text":"Two fundamental issues in memory research concern when later experiences strengthen or weaken initial memories and when the two memories become linked or remain independent. A promising candidate for explaining these issues is semantic relatedness. Here, across five paired-associate learning experiments (N=1000), we systematically varied the semantic relatedness between initial and later cues, initial and later targets, or both. We found that learning retroactively benefited long-term memory performance for semantically related words (vs. unshown control words), and these benefits increased as a function of relatedness. Critically, memory dependence between initial and later pairs also increased with relatedness, suggesting that pre-existing semantic relationships promote interdependence for memories formed across episodes. We also found that modest retroactive benefits, but not interdependencies, emerged when subjects learned via studying rather than practice testing. These findings demonstrate that semantic relatedness during new learning retroactively strengthens old associations while scaffolding new ones into well-fortified memory traces.","link":"/opendata/antony-et-al-2022/"},{"title":"Appelhoff et al. (2022)","text":"When judging the average value of sample stimuli (e.g., numbers) people tend to either over- or underweight extreme sample values, depending on task context. 
In a context of overweighting, recent work has shown that extreme sample values were overly represented also in neural signals, in terms of an anti-compressed geometry of number samples in multivariate electroencephalography (EEG) patterns. Here, we asked whether neural representational geometries may also reflect a relative underweighting of extreme values (i.e., compression) which has been observed behaviorally in a great variety of tasks. We used a simple experimental manipulation (instructions to average a single-stream or to compare dual-streams of samples) to induce compression or anti-compression in behavior when participants judged rapid number sequences. Model-based representational similarity analysis (RSA) replicated the previous finding of neural anti-compression in the dual-stream task, but failed to provide evidence for neural compression in the single-stream task, despite the evidence for compression in behavior. Instead, the results indicated enhanced neural processing of extreme values in either task, regardless of whether extremes were over- or underweighted in subsequent behavioral choice. We further observed more general differences in the neural representation of the sample information between the two tasks. Together, our results indicate a mismatch between sample-level EEG geometries and behavior, which raises new questions about the origin of common psychometric distortions, such as diminishing sensitivity for larger values.","link":"/opendata/appelhoff-et-al-2022/"},{"title":"Arato et al. (2023)","text":"What is the link between eye movements and sensory learning? Although some theories have argued for an automatic interaction between what we know and where we look that continuously modulates human information gathering behavior during both implicit and explicit learning, there exists limited experimental evidence supporting such an ongoing interplay. 
To address this issue, we used a visual statistical learning paradigm combined with a gaze contingent stimulus presentation and manipulated the explicitness of the task to explore how learning and eye movements interact. During both implicit exploration and explicit visual learning of unknown composite visual scenes, spatial eye movement patterns systematically and gradually changed in accordance with the underlying statistical structure of the scenes. Moreover, the degree of change was directly correlated with the amount and type of knowledge the observers acquired. Our results provide the first evidence for an ongoing and specific bidirectional interaction between hitherto accumulated knowledge and eye movements during both implicit and explicit visual statistical learning, in line with theoretical accounts of active learning.","link":"/opendata/arato-et-al-2023/"},{"title":"Arbuzova et al. (2022)","text":"It is still debated whether metacognition, or the ability to monitor our own mental states, relies on processes that are domain-general (a single set of processes can account for the monitoring of any mental process) or domain-specific (metacognition is accomplished by a collection of multiple monitoring modules, one for each cognitive domain). It has been speculated that two broad categories of metacognitive processes may exist: those that monitor primarily externally generated versus those that monitor primarily internally generated information. To test this proposed division, we measured metacognitive performance (using m-ratio, a signal detection theoretical measure) in four tasks that could be ranked along an internal-external axis of the source of information, namely memory, motor, visuomotor, and visual tasks. We found correlations between m-ratios in visuomotor and motor tasks, but no correlations between m-ratios in visual and visuomotor tasks, or between motor and memory tasks. 
While we found no correlation in metacognitive ability between visual and memory tasks, and a positive correlation between visuomotor and motor tasks, we found no evidence for a correlation between motor and memory tasks. This pattern of correlations does not support the grouping of domains based on whether the source of information is primarily internal or external. We suggest that other groupings could be more reflective of the nature of metacognition and discuss the need to consider other non-domain task-features when using correlations as a way to test the underlying shared processes between domains.","link":"/opendata/arbuzova-et-al-2022/"},{"title":"Armstrong et al. (2022)","text":"Attentional bias for threat is an adaptive feature of human psychology, but may become maladaptive in anxiety-related disorders, causing distress, distraction, and distorted perception of danger. Reaction time measures have revealed automatic, covert attention biases to threat, whereas eye tracking has revealed voluntary biases over a larger timescale, with monitoring or avoidance depending on context. Recently, attentional bias for threat has been studied as a conditioned fear response, providing new insight into how attentional biases are acquired and inhibited through learning experiences. However, very few studies have examined voluntary gaze biases during fear learning. In a novel eye tracking paradigm, we examine the overt components of attentional bias to threat and safety cues. We found that threat cues, but not safety cues, elicited an initial orienting bias, as well as sustained monitoring bias across 10-second trials. This collective “vigilance” response to threat cues was insensitive to extinction, whereas condition fear responding revealed by pupil size and self-report ratings showed marked extinction. Vigilance may be less prone to extinction, compared to autonomic arousal, because eye movements require less energy than preparing the body for defensive behavior. 
Implications for understanding vigilance in PTSD are considered.","link":"/opendata/armstrong-et-al-2022/"},{"title":"Arnau et al. (2023)","text":"Cognitive performance largely depends on how much effort is invested during task execution. This also means that we rarely perform as well as we could. Cognitive effort is adjusted to the expected outcome of performance, meaning that it is driven by motivation. The results from recent studies investigating the effects of manipulations of motivation clearly suggest that it is the expenditure of cognitive control that is particularly prone to being affected by modulations of cognitive effort. Although recent EEG studies investigated the neural underpinnings of the interaction of effort and control, reports on how cognitive effort is reflected by oscillatory activity of the EEG are quite sparse. It is the goal of the present study to bridge this gap by performing an exploratory analysis of high-density EEG data from a switching task using manipulations of monetary incentives. A beamformer approach is used to localize the sensor-level effects in source-space. The results indicate that the manipulation of cognitive effort was successful. The participants reported significantly higher motivation and cognitive effort in high versus low reward trials. Performance was also significantly increased. The analysis of the EEG data revealed that the increase of cognitive effort was reflected by an increased mid-frontal theta activity during the cue-target interval, suggesting an increased use of proactive control. Alpha-desynchronization throughout the trial was also more pronounced in high reward trials, signaling a bias of attention towards the processing of external stimuli. Source reconstruction suggests that these effects are located in areas related to cognitive control and visual processing.","link":"/opendata/arnau-et-al-2023/"},{"title":"Ashinoff et al.
(2022)","text":"Base-rate neglect is a pervasive bias in judgment that is conceptualized as underweighting of prior information and can have serious consequences in real-world scenarios. This bias is thought to reflect variability in inferential processes but empirical support for a cohesive theory of base-rate neglect with sufficient explanatory power to account for longer-term and real-world beliefs is lacking. A Bayesian formalization of base-rate neglect in the context of sequential belief updating predicts that belief trajectories should exhibit dynamic patterns of dependence on the order in which evidence is presented and its consistency with prior beliefs. To test this, we developed a novel ‘urn-and-beads’ task that systematically manipulated the order of colored bead sequences and elicited beliefs via an incentive-compatible procedure. Our results in two independent online studies confirmed the predictions of the sequential base-rate neglect model: people exhibited beliefs that are more influenced by recent evidence and by evidence inconsistent with prior beliefs. We further found support for a noisy-sampling inference model whereby base-rate neglect results from rational discounting of noisy internal representations of prior beliefs. Finally, we found that model-derived indices of base-rate neglect, including noisier prior representation, correlated with propensity for unusual beliefs outside the laboratory. Our work supports the relevance of Bayesian accounts of sequential base-rate neglect to real-world beliefs and hints at strategies to minimize deleterious consequences of this pervasive bias.","link":"/opendata/ashinoff-et-al-2022/"},{"title":"Asutay & Västfjäll (2022)","text":"Affective experience has an important role in decision-making, with recent theories suggesting a modulatory role of affect in ongoing subjective value computations.
However, it is unclear how varying expectations and uncertainty dynamically influence affective experience and how dynamic representation of affect modulates risky choices. Using hierarchical Bayesian modeling on data from a risky choice task (N = 101), we find that the temporal integration of recently encountered choice parameters (expected value, uncertainty, and prediction errors) shapes affective experience and impacts subsequent choice behavior. Specifically, self-reported arousal prior to choice was associated with increased loss aversion, risk aversion, and choice consistency. Taken together, these findings provide clear behavioral evidence for continuous affective modulation of subjective value computations during risky decision-making.","link":"/opendata/asutay-vastfjall-2022/"},{"title":"Aydoğan et al. (2023)","text":"Interval timing refers to the ability to perceive and remember intervals in the seconds to minutes range. Our contemporary understanding of interval timing is derived from relatively small-scale, isolated studies that investigate a limited range of intervals with a small sample size, usually based on a single task. Consequently, the conclusions drawn from individual studies are not readily generalizable to other tasks, conditions, and task parameters. The current paper presents a live database that presents raw data from interval timing studies (currently composed of 68 datasets from eight different tasks incorporating various interval and temporal order judgments) with an online graphical user interface to easily select, compile, and download the data organized in a standard format. The Timing Database aims to promote and cultivate key and novel analyses of our timing ability by making published and future datasets accessible as open-source resources for the entire research community. 
In the current paper, we showcase the use of the database by testing various core ideas based on data compiled across studies (i.e., temporal accuracy, scalar property, location of the point of subjective equality, malleability of timing precision). The Timing Database will serve as the repository for interval timing studies through the submission of new datasets.","link":"/opendata/aydogan-et-al-2023/"},{"title":"Aylward et al. (2019)","text":"Anxiety is characterized by altered responses under uncertain conditions, but the precise mechanism by which uncertainty changes the behaviour of anxious individuals is unclear. Here we probe the computational basis of learning under uncertainty in healthy individuals and individuals suffering from a mix of mood and anxiety disorders. Participants were asked to choose between four competing slot machines with fluctuating reward and punishment outcomes during safety and stress. We predicted that anxious individuals under stress would learn faster about punishments and exhibit choices that were more affected by those punishments, thus formalizing our predictions as parameters in reinforcement learning accounts of behaviour. Overall, the data suggest that anxious individuals are quicker to update their behaviour in response to negative outcomes (increased punishment learning rates). When treating anxiety, it may therefore be more fruitful to encourage anxious individuals to integrate information over longer horizons when bad things happen, rather than try to blunt their responses to negative outcomes.","link":"/opendata/aylward-et-al-2019/"},{"title":"Bach (2015)","text":"Behavioural inhibition is a key anxiety-like behaviour in rodents and humans, distinct from avoidance of danger, and reduced by anxiolytic drugs. In some situations, it is not clear how behavioural inhibition minimises harm or maximises benefit for the agent, and can even appear counterproductive. 
Extant explanations of this phenomenon make use of descriptive models but do not provide a formal assessment of its adaptive value. This hampers a better understanding of the neural computations underlying anxiety behaviour. Here, we analyse a standard rodent anxiety model, the operant conflict test. We harness Bayesian Decision Theory to show that behavioural inhibition normatively arises as a cost-minimising strategy in temporally correlated environments. Importantly, only if behavioural inhibition is aimed at minimising cost does it depend on the probability and magnitude of threat. Harnessing a virtual computer game, we test model predictions in four experiments with human participants. Humans exhibit behavioural inhibition with a strong linear dependence on threat probability and magnitude. Strikingly, inhibition occurs before motor execution and depends on the virtual environment, thus likely resulting from a neural optimisation process rather than a pre-programmed mechanism. Individual trait anxiety scores predict behavioural inhibition, underlining the validity of this anxiety model. These findings put anxiety behaviour into the context of cost-minimisation and optimal inference, and may ultimately pave the way towards a mechanistic understanding of the neural computations gone awry in human anxiety disorder.","link":"/opendata/bach-2015/"},{"title":"Bach et al. (2020)","text":"During adolescence and early adulthood, learning when to avoid threats and when to pursue rewards becomes crucial. Using a risky foraging task, we investigated individual differences in this dynamic across 781 individuals aged 14-24 years who were split into a hypothesis-generating discovery sample and a hold-out confirmation sample. Sex was the most important predictor of cautious behaviour and performance.
Males earned one standard deviation (or 20%) more reward than females, collected more reward when there was little to lose and reduced foraging to the same level as females when potential losses became high. Other independent predictors of cautiousness and performance were self-reported daringness, IQ and self-reported cognitive complexity. We found no evidence for an impact of age or maturation. Thus, maleness, a high IQ or self-reported cognitive complexity, and self-reported daringness predicted greater success in risky foraging, possibly due to better exploitation of low-risk opportunities in high-risk environments.","link":"/opendata/bach-et-al-2020/"},{"title":"Bae & Luck (2019)","text":"Recent experiences influence the processing of new information even when those experiences are irrelevant to the current task. Does this reflect the indirect effects of a passively maintained representation of the previous experience, or is this representation reactivated when a new event occurs? To answer this question, we attempted to decode the orientation of the stimulus on the previous trial from the electroencephalogram on the current trial in a working memory task. Behavioral data confirmed that the previous-trial stimulus orientation influenced the reported orientation on the current trial, even though the previous-trial orientation was now task irrelevant. In two independent experiments, we found that the previous-trial orientation could be decoded from the current-trial electroencephalogram, indicating that the current-trial stimulus reactivated or boosted the representation of the previous-trial orientation. 
These results suggest that the effects of recent experiences on behavior are driven, in part, by a reactivation of those experiences and not solely by the indirect effects of passive memory traces.","link":"/opendata/bae-luck-2019/"},{"title":"Bahrami & Navajas (2020)","text":"The dataset includes 975 participants, who completed an online version of the 4-arm bandit task in 2014. All participants gave their consent to carry out the experiment. The experiment was approved by the UCL Research Ethics Committee (project 4223/001). The dataset is anonymised, and does not include information about the participants’ identity. The task followed the 4-arm bandit paradigm described in Daw et al. 2006. In this task the participants were asked to choose between four options on multiple trials. On each trial they had to choose an option and were then given information about the reward obtained by their choice. The rewards of each option drifted over time, in a manner also known as a restless bandit, forcing the participants to constantly explore the different options to obtain the maximum reward. The rewards followed one of three drift schedules which were predefined (see below). The experiment lasted 150 trials. Participants failing to respond within 4 seconds missed the trial and moved to the next one with no reward.","link":"/opendata/bahrami-navajas-2020/"},{"title":"Balasubramani et al. (2021)","text":"Choice selection strategies and decision making are typically investigated using multiple-choice gambling paradigms that require participants to maximize reward payoff. However, research shows that performance in such paradigms suffers from individual biases towards the frequency of gains, favouring smaller local gains over larger longer-term gains, also referred to as melioration.
Here, we developed a simple two-choice reward task, implemented in 186 healthy human adult subjects across the adult lifespan to understand the behavioral, computational, and neural bases of payoff maximization versus melioration. The observed reward choice behavior on this task was best explained by a reinforcement learning model of differential future reward prediction. Simultaneously recorded and source-localized electroencephalography (EEG) showed that diminished theta-band activations in the right rostral anterior cingulate cortex (rACC) correspond to greater reward payoff maximization, specifically during the presentation of cumulative reward information at the end of each task trial. Notably, these activations (greater rACC theta) predicted depressed mood symptoms, thereby showcasing a reward processing marker of potential clinical utility. This study presents cognitive, computational and neural (EEG-based) analyses of a rapid reward-based decision-making task. The research has the following three highlights. 1) It teases apart two core aspects of reward processing, i.e. long term expected value maximization versus immediate gain frequency melioration based choice behavior. 2) It models reinforcement learning based behavioral differences between individuals showing that observed performance is best explained by differential extents of reward prediction. 3) It investigates neural correlates in 186 healthy human subjects across the adult lifespan, revealing specific theta band cortical source activations in right rostral anterior cingulate as correlates for maximization that further predict depressed mood across subjects.","link":"/opendata/balasubramani-et-al-2021/"},{"title":"Ballard et al. (2019a)","text":"Animals rely on learned associations to make decisions. Associations can be based on relationships between object features (e.g., the three leaflets of poison ivy leaves) and outcomes (e.g., rash). 
More often, outcomes are linked to multidimensional states (e.g., poison ivy is green in summer but red in spring). Feature-based reinforcement learning fails when the values of individual features depend on the other features present. One solution is to assign value to multi-featural conjunctive representations. Here, we test if the hippocampus forms separable conjunctive representations that enable the learning of response contingencies for stimuli of the form: AB+, B-, AC-, C+. Pattern analyses on functional MRI data show the hippocampus forms conjunctive representations that are dissociable from feature components and that these representations, along with those of cortex, influence striatal prediction errors. Our results establish a novel role for hippocampal pattern separation and conjunctive representation in reinforcement learning.","link":"/opendata/ballard-et-al-2019a/"},{"title":"Ballard et al. (2019b)","text":"Much is known about the effects of reward and punishment on behavior, yet little research has considered how these incentives influence the information-processing dynamics that underlie decision making. We fitted the linear ballistic accumulator to data from a perceptual-judgment task to examine the impacts of reward- and punishment-based incentives on three distinct components of information processing: the quality of the information processed, the quantity of that information, and the decision threshold. The threat of punishment lowered the average quality and quantity of information processed, compared with the prospect of reward or no performance incentive at all. The threat of punishment also induced less cautious decision making by lowering people’s decision thresholds relative to the prospect of reward.
These findings suggest that information-processing dynamics are determined not only by objective properties of the decision environment but also by the higher order goals of the system.","link":"/opendata/ballard-et-al-2019b/"},{"title":"Balsdon et al. (2020)","text":"Perceptual decisions are accompanied by feelings of confidence that reflect the likelihood that the decision was correct. Here we aim to clarify the relationship between perception and confidence by studying the same perceptual task across three different confidence contexts. Human observers were asked to categorize the source of sequentially presented visual stimuli. Each additional stimulus provided evidence for making more accurate perceptual decisions, and better confidence judgements. We show that observers’ ability to set appropriate evidence accumulation bounds for perceptual decisions is strongly predictive of their ability to make accurate confidence judgements. When observers were not permitted to control their exposure to evidence, they imposed covert bounds on their perceptual decisions but not on their confidence decisions. This partial dissociation between decision processes is reflected in behaviour and pupil dilation. Together, these findings suggest a confidence-regulated accumulation-to-bound process that controls perceptual decision-making even in the absence of explicit speed-accuracy trade-offs.","link":"/opendata/balsdon-et-al-2020/"},{"title":"Balter & Raymond (2022)","text":"Transfer of learning refers to successful application of previously acquired knowledge or skills to novel settings. Although working memory (WM) is thought to play a role in transfer learning, direct evidence of the effect of limitations in WM on transfer learning is lacking. To investigate, we used an acquired equivalence paradigm that included tests of association and transfer learning. 
The effects of imposing an acute WM limitation on young adults were tested (within-subjects design: N = 27 adults; Mage = 24 years) by conducting learning transfer tests concurrent with a secondary task that required carrying a spatial WM load when performing the learned/transfer trial (Load condition) to acutely limit WM resources or no WM load (No-Load condition; WM was unloaded prior to performing the learned/transfer trial). Analysis using mixed effects models showed that although success on the transfer trials was high in the No-Load condition, performance dropped significantly to chance in the Load condition. Performance on tests of learned associations remained high in both conditions. These results indicate that transfer of learning depends on access to WM resources and suggest that even healthy young individuals may be affected in their ability to cross-utilize when cognitive resources become scarce, such as when engaging in two tasks simultaneously (e.g., using satellite navigation while driving).","link":"/opendata/balter-raymond-2022/"},{"title":"Balzus et al. (2022)","text":"Overactive performance monitoring, as reflected by enhanced neural responses to errors (the error-related negativity, ERN), is considered a biomarker for obsessive-compulsive disorder (OCD) and may be a promising target for novel treatment approaches. Prior research suggests that non-invasive brain stimulation with transcranial direct current stimulation (tDCS) may reduce the ERN in healthy individuals, yet no study has investigated its efficacy in attenuating the ERN in OCD. In this preregistered, randomized, sham-controlled, crossover study, we investigated effects of tDCS on performance monitoring in patients with OCD (n = 28) and healthy individuals (n = 28). Cathodal and sham tDCS were applied over the presupplementary motor area (pre-SMA) in two sessions, each followed by electroencephalogram recording during a flanker task. 
Cathodal tDCS reduced the ERN amplitude compared to sham tDCS, although this effect was only marginally significant (p = .052; mean difference: 0.86 μV). Additionally, cathodal tDCS reduced the correct-response negativity and increased the error positivity. These neural modulations were not accompanied by behavioral changes. Moreover, we found no evidence that the tDCS effect was more pronounced in the patient group. In summary, our findings indicate that tDCS over the pre-SMA modulates neural correlates of performance monitoring across groups. Therefore, this study represents a valuable starting point for future research to determine whether repeated tDCS application induces a more pronounced ERN attenuation and normalizes aberrant performance monitoring in the long term, thereby potentially alleviating obsessive-compulsive symptoms and providing a psychophysiological intervention strategy for individuals who do not benefit sufficiently from existing interventions.","link":"/opendata/balzus-et-al-2022/"},{"title":"Banca et al. (2023)","text":"Enhanced habit formation, greater automaticity and impaired goal/habit arbitration in obsessive-compulsive disorder (OCD) are key hypotheses from the goal/habit imbalance theory of compulsion which have not been directly investigated. This article tests these hypotheses using a combination of newly developed behavioral tasks. First, we trained patients with OCD and healthy controls, using a novel smartphone app, to perform chunked action sequences, previously shown to engage habit brain circuitry. The motor training was daily over a one-month period. There was equivalent procedural learning and attainment of habitual performance (measured with an objective criterion of automaticity) in both groups, despite greater subjective habitual tendencies in patients with OCD, self-reported via a recently developed questionnaire. 
We then used a combination of follow-up behavioral tasks to further assess the arbitration between previous automatic and new goal-directed action sequences. We found no evidence for impairments of goal/habit arbitration in OCD following re-evaluation based on monetary feedback, although there was a greater preference for engaging in the trained habitual sequence under certain conditions which may have derived from its intrinsic value. These findings may lead to a reformulation of the goal/habit imbalance hypothesis in OCD. Finally, OCD patients with higher compulsivity scores and habitual tendencies showed more engagement with the motor habit-training app and reported symptom alleviation, with implications for its potential use as a form of habit reversal therapy.","link":"/opendata/banca-et-al-2023/"},{"title":"Bang et al. (2022)","text":"Computing confidence in one’s own and others’ decisions is critical for social success. While there has been substantial progress in our understanding of confidence estimates about oneself, little is known about how people form confidence estimates about others. Here, we address this question by asking participants undergoing fMRI to place bets on perceptual decisions made by themselves or one of three other players of varying ability. We show that participants compute confidence in another player’s decisions by combining distinct estimates of player ability and decision difficulty - allowing them to predict that a good player may get a difficult decision wrong and that a bad player may get an easy decision right. We find that this computation is associated with an interaction between brain systems implicated in decision-making (LIP) and theory of mind (TPJ and dmPFC). These results reveal an interplay between self- and other-related processes during a social confidence computation.","link":"/opendata/bang-et-al-2022/"},{"title":"Baptista et al. 
(2021)","text":"Checking behavior is a natural and adaptive strategy for resolving uncertainty in everyday situations. Here, we aimed to investigate the psychological drivers of checking and its regulation by uncertainty, in non-clinical participants and controlled experimental settings. We found that the sensitivity of participants’ explicit confidence judgments to actual performance (explicit metacognition) predicted the extent to which their checking strategy was regulated by uncertainty. Yet, a more implicit measure of metacognition (derived from asking participants to opt between trials) did not contribute to the regulation of checking behavior. Meanwhile, participants’ scores on questionnaires eliciting self-beliefs, such as self-confidence and self-reported obsessive-compulsive symptoms, also predicted participants’ uncertainty-guided checking tendencies. Altogether, these findings demonstrate that checking behavior is likely the outcome of a core explicit metacognitive process operating at the scale of single decisions, while remaining influenced by general self-beliefs. Our findings are thus consistent with two mechanisms (micro vs. macro) through which this otherwise adaptive behavior could go awry in certain psychiatric disorders such as obsessive-compulsive disorder.","link":"/opendata/baptista-et-al-2021/"},{"title":"Barnby et al. (2022a)","text":"Theoretical accounts suggest heightened uncertainty about the state of the world underpins aberrant belief updates, which in turn increase the risk of developing a persecutory delusion. However, this raises the question as to how an agent’s uncertainty may relate to the precise phenomenology of paranoia, as opposed to other qualitatively different forms of belief. We tested whether the same population (n=693) responded similarly to non-social and social contingency changes in a probabilistic reversal learning task and a modified repeated reversal Dictator game, and the impact of paranoia on both. 
We fitted computational models that included closely related parameters that quantified the rigidity across contingency reversals and the uncertainty about the environment/partner. Consistent with prior work we show that paranoia was associated with uncertainty around a partner’s behavioural policy and rigidity in harmful intent attributions in the social task. In the non-social task we found that pre-existing paranoia was associated with larger decision temperatures and commitment to suboptimal cards. We show relationships between decision temperature in the non-social task and priors over harmful intent attributions and uncertainty over beliefs about partners in the social task. Our results converge across both classes of model, suggesting paranoia is associated with a general uncertainty over the state of the world (and agents within it) that takes longer to resolve, although we demonstrate that this uncertainty is expressed asymmetrically in social contexts. Our model and data allow the representation of sociocognitive mechanisms that explain persecutory delusions and provide testable, phenomenologically relevant predictions for causal experiments.","link":"/opendata/barnby-et-al-2022a/"},{"title":"Barnby et al. (2022b)","text":"To benefit from social interactions, people need to predict how their social partners will behave. Such predictions arise through integrating prior expectations with evidence from observations, but where the priors come from and whether they influence the integration into beliefs about a social partner is not clear. Furthermore, this process can be affected by factors such as paranoia, in which the tendency to form biased impressions of others is common. Using a modified social value orientation (SVO) task in a large online sample (n = 697), we showed that participants used a Bayesian inference process to learn about partners, with priors that were based on their own preferences. 
Paranoia was associated with preferences for earning more than a partner and less flexible beliefs regarding a partner’s social preferences. Alignment between the preferences of participants and their partners was associated with better predictions and with reduced attributions of harmful intent to partners. Together, our data and model expand upon theories of interpersonal relationships by demonstrating how dyadic similarity mechanistically influences social interaction by generating more accurate predictions and less threatening impressions.","link":"/opendata/barnby-et-al-2022b/"},{"title":"Barnby et al. (2023)","text":"Striatal dopamine is important to paranoid attributions, although its computational role in social inference remains elusive. We employed a simple game theoretic paradigm and computational model of intentional attributions to investigate the effects of dopamine D2/D3 antagonism on ongoing mental state inference following social outcomes. Haloperidol, compared to placebo, enhanced the impact of partner behaviour on beliefs about harmful intent, and increased learning from recent encounters. These alterations caused significant changes to model covariation and negative correlations between self-interest and harmful intent attributions. Our findings suggest haloperidol improves flexibility in model-based beliefs about others and simultaneously reduces the self-relevance of social observations. Our results may reflect the role of D2/D3 dopamine in supporting self-relevant mentalisation. Our data and model bridge theory between general and social accounts of value representation. We demonstrate initial evidence for the sensitivity of our model and short social paradigm to drug intervention and clinical dimensions, allowing distinctions between mechanisms that operate across traits and states.","link":"/opendata/barnby-et-al-2023/"},{"title":"Barretto-Garcia et al. 
(2023)","text":"Humans are generally risk averse, preferring smaller certain over larger uncertain outcomes. Economic theories usually explain this by assuming concave utility functions. Here, we provide evidence that risk aversion can also arise from relative underestimation of larger monetary payoffs, a perceptual bias rooted in the noisy logarithmic coding of numerical magnitudes. We confirmed this with psychophysics and functional magnetic resonance imaging, by measuring behavioural and neural acuity of magnitude representations during a magnitude perception task and relating these measures to risk attitudes during separate risky financial decisions. Computational modelling indicated that participants use similar mental magnitude representations in both tasks, with correlated precision across perceptual and risky choices. Participants with more precise magnitude representations in parietal cortex showed less variable behaviour and less risk aversion. Our results highlight that at least some individual characteristics of economic behaviour can reflect capacity limitations in perceptual processing rather than processes that assign subjective values to monetary outcomes.","link":"/opendata/barretto-garcia-et-al-2023/"},{"title":"Basili et al. (2022)","text":"In September 2021 we conducted a survey of 1482 people in Italy, while the vaccination campaign against Covid-19 was ongoing. In the first part of the survey we ran three simple tests on players’ behavior in standard tasks with monetary incentives to measure their risk attitudes, willingness to contribute to a public good in an experimental game, and their beliefs about others’ behavior. In the second part, we asked respondents if they were vaccinated and, if not, for what reason. We classified as no-vaxxers those (around [Formula: see text] of the sample) who had not yet started the vaccination process and declared that they intended not to do it in the future. 
We find that no-vaxxers contribute less to the public good in the experimental game because they trust others less to do so. From the three tests we extrapolated a classification based on the benchmark of rationality and other-regarding preferences for each respondent, and we found that in this respect no-vaxxers do not differ from the rest of the population.","link":"/opendata/basili-et-al-2022/"},{"title":"Bavard et al. (2018)","text":"In economics and perceptual decision-making, contextual effects are well documented, where decision weights are adjusted as a function of the distribution of stimuli. Yet, in the reinforcement learning literature, whether and how contextual information pertaining to decision states is integrated in learning algorithms has received comparably little attention. Here, we investigate reinforcement learning behavior and its computational substrates in a task where we orthogonally manipulate outcome valence and magnitude, resulting in systematic variations in state-values. Model comparison indicates that subjects’ behavior is best accounted for by an algorithm which includes both reference-point dependence and range adaptation, two crucial features of state-dependent valuation. In addition, we find that state-dependent outcome valuation progressively emerges, is favored by increasing outcome information and correlated with explicit understanding of the task structure. Finally, our data clearly show that, while being locally adaptive (for instance in negative valence and small magnitude contexts), state-dependent valuation comes at the cost of seemingly irrational choices, when options are extrapolated out from their original contexts.","link":"/opendata/bavard-et-al-2018/"},{"title":"Bavard et al. (2021)","text":"Evidence suggests that economic values are rescaled as a function of the range of the available options. 
Although locally adaptive, range adaptation has been shown to lead to suboptimal choices, particularly notable in reinforcement learning (RL) situations when options are extrapolated from their original context to a new one. Range adaptation can be seen as the result of an adaptive coding process aiming at increasing the signal-to-noise ratio. However, this hypothesis leads to a counterintuitive prediction: Decreasing task difficulty should increase range adaptation and, consequently, extrapolation errors. Here, we tested the paradoxical relation between range adaptation and performance in a large sample of participants performing variants of an RL task, where we manipulated task difficulty. Results confirmed that range adaptation induces systematic extrapolation errors and is stronger when decreasing task difficulty. Last, we propose a range-adapting model and show that it is able to parsimoniously capture all the behavioral results.","link":"/opendata/bavard-et-al-2021/"},{"title":"Bavard & Palminteri (2023)","text":"Reinforcement learning research in humans and other species indicates that rewards are represented in a context-dependent manner. More specifically, reward representations seem to be normalized as a function of the value of the alternative options. The dominant view postulates that value context-dependence is achieved via a divisive normalization rule, inspired by perceptual decision-making research. However, behavioral and neural evidence points to another plausible mechanism: range normalization. Critically, previous experimental designs were ill-suited to disentangle the divisive and the range normalization accounts, which generate similar behavioral predictions in many circumstances. To address this question, we designed a new learning task where we manipulated, across learning contexts, the number of options and the value ranges. 
Behavioral and computational analyses falsify the divisive normalization account and rather provide support for the range normalization rule. Together, these results shed new light on the computational mechanisms underlying context-dependence in learning and decision-making.","link":"/opendata/bavard-palminteri-2023/"},{"title":"Bedder et al. (2023)","text":"Humans exhibit distinct risk preferences when facing choices involving potential gains and losses. These preferences are believed to be subject to neuromodulatory influence, particularly from dopamine and serotonin. As neuromodulators manifest circadian rhythms, this suggests decision making under risk might be affected by time of day. Here, in a large subject sample collected using a smartphone application, we found that risky options with potential losses were increasingly chosen over the course of the day. We observed this result in both a within-subjects design (N = 2599) comparing risky options chosen earlier and later in the day in the same individuals, and in a between-subjects design (N = 26,720) showing our effect generalizes across ages and genders. Using computational modelling, we show this diurnal change in risk preference reflects a decrease in sensitivity to increasing losses, but no change was observed in the relative impacts of gains and losses on choice (i.e., loss aversion). Thus, our findings reveal a striking diurnal modulation in human decision making, a pattern with potential importance for real-life decisions that include voting, medical decisions, and financial investments.","link":"/opendata/bedder-et-al-2023/"},{"title":"Bejjani et al. (2022)","text":"Cognitive control is guided by learning, as people adjust control to meet changing task demands. 
The two best-studied instances of control-learning are the enhancement of attentional task focus in response to increased frequencies of incongruent distracter stimuli, reflected in the list-wide proportion congruent (LWPC) effect, and the enhancement of switch-readiness in response to increased frequencies of task switches, reflected in the list-wide proportion switch (LWPS) effect. However, the latent architecture underpinning these adaptations in cognitive stability and flexibility - specifically, whether there is a single, domain-general learner or multiple, domain-specific learners - is currently not known. To reveal the underlying structure of control-learning, we had a large sample of participants (N = 950) perform LWPC and LWPS paradigms, and afterwards assessed their explicit awareness of the task manipulations, as well as general cognitive ability and motivation. Structural equation modeling was used to evaluate several preregistered models representing different plausible hypotheses concerning the latent structure of control-learning. Task performance replicated standard LWPC and LWPS effects. Crucially, the model that best fit the data had correlated domain- and context-specific latent factors. Thus, people’s ability to adapt their on-task focus and between-task switch-readiness to changing levels of demand was mediated by distinct (though correlated) underlying factors. Model fit remained good when accounting for speed-accuracy trade-offs, variance in individual cognitive ability and self-reported motivation, as well as self-reported explicit awareness of manipulations and the order in which different levels of demand were experienced. Implications of these results for the cognitive architecture of dynamic cognitive control are discussed.","link":"/opendata/bejjani-et-al-2022/"},{"title":"Bellana et al. (2022)","text":"Some experiences linger in mind, spontaneously returning to our thoughts for minutes after their conclusion. 
Other experiences fall out of mind immediately. It remains unclear why. We hypothesize that an input is more likely to persist in our thoughts when it has been deeply processed: when we have extracted its situational meaning rather than its physical properties or low-level semantics. Here, participants read sequences of words with different levels of coherence (word-, sentence-, or narrative-level). We probe participants’ spontaneous thoughts via free word association, before and after reading. By measuring lingering subjectively (via self-report) and objectively (via changes in free association content), we find that information lingers when it is coherent at the narrative level. Furthermore, an individual’s feeling of transportation into reading material predicts lingering better than the material’s objective coherence. Thus, our thoughts in the present moment echo prior experiences that have been incorporated into deeper, narrative forms of thinking.","link":"/opendata/bellana-et-al-2022/"},{"title":"Beltzer et al. (2019)","text":"Adaptive social behavior requires learning probabilities of social reward and punishment, and updating these probabilities when they change. Given prior research on aberrant reinforcement learning in affective disorders, this study examines how social anxiety affects probabilistic social reinforcement learning and dynamic updating of learned probabilities in a volatile environment. N=222 online participants completed questionnaires and a computerized ball-catching game with changing probabilities of reward and punishment. Dynamic learning rates were estimated to assess the relative importance ascribed to new information in response to volatility. Mixed-effects regression was used to analyze throw patterns as a function of social anxiety symptoms. 
Higher social anxiety predicted fewer throws to the previously punishing avatar and different learning rates after certain role changes, suggesting that social anxiety may be characterized by difficulty updating learned social probabilities. Socially anxious individuals may miss the chance to learn that a once-punishing situation no longer poses a threat.","link":"/opendata/beltzer-et-al-2019/"},{"title":"Ben Artzi & Shahar (2021)","text":"178 Prolific workers completed an online experiment in return for monetary compensation. Participants completed a reinforcement learning task with four cards and two reward conditions. On each trial of the task, two cards of the four were offered by the computer, and participants were asked to pick one. Each card could lead to a reward with an independently drifting probability across trials. The difference between conditions was in whether participants won extra points or avoided the loss of points. All participants completed the OCI-R, and a partial sample also completed the BDI, STAI, and SPQ.","link":"/opendata/ben-artzi-shahar-2021/"},{"title":"Ben Artzi et al. (2022a)","text":"To establish accurate action-outcome associations in the environment, individuals must refrain from assigning value to outcome-irrelevant features. However, reinforcement learning studies have largely ignored the role of attentional control processes on credit assignment (the process of assigning value to one’s actions). In the current study, we examined the extent to which working memory – a system that can filter and block the processing of irrelevant information in one’s mind – predicted credit assignment to outcome-irrelevant task features. One hundred and seventy-four individuals completed working memory capacity and outcome-irrelevant learning estimates. Outcome-irrelevant learning was estimated in a reinforcement learning task where only the stimulus’ visual features predicted reward, but not the response keys used to indicate one’s selection. 
As expected, we found a consistent tendency to assign value to the task’s response keys, reflecting outcome-irrelevant learning at the group level. However, we also found substantial individual differences, such that only 55% of participants demonstrated this effect. Importantly, working memory capacity significantly moderated individual differences in outcome-irrelevant learning; individuals with higher capacity were less likely to assign credit to the outcome-irrelevant feature (i.e., response key). We discuss the influence of working memory on outcome-irrelevant learning through the perspective of cognitive control failure.","link":"/opendata/ben-artzi-et-al-2022a/"},{"title":"Ben Artzi et al. (2022b)","text":"Current studies suggest that individuals estimate the value of their choices based on observed feedback. Here, we ask whether individuals update the value of their unchosen actions, even when the associated feedback remains unknown. Two hundred and three individuals completed a multi-armed bandit task, making choices to gain rewards. We found robust evidence suggesting inverse value updating for unchosen actions based on the chosen action’s outcome. Computational modeling results suggested that this effect is mainly explained by a value updating mechanism whereby individuals integrate the outcome history for choosing an option with that of avoiding the alternative. Properties of the deliberation (i.e., duration/difficulty) did not moderate the latent value updating of unchosen actions, suggesting that memory traces generated during deliberation take a smaller role in this phenomenon than previously thought. We discuss the mechanisms facilitating credit assignment to unchosen actions and their implications for human decision-making.","link":"/opendata/ben-artzi-et-al-2022b/"},{"title":"Bennett et al. 
(2021)","text":"Aversion to uncertainty about the future has been proposed as a transdiagnostic trait underlying psychiatric diagnoses including obsessive-compulsive disorder and generalized anxiety. This association might explain the frequency of pathological information-seeking behaviors such as compulsive checking and reassurance-seeking in these disorders. Here we tested the behavioral predictions of this model using a noninstrumental information-seeking task that measured preferences for unusable information about future outcomes in different payout domains (gain, loss, and mixed gain/loss). We administered this task, along with a targeted battery of self-report questionnaires, to a general-population sample of 146 adult participants. Using computational cognitive modeling of choices to test competing theories of information valuation, we found evidence for a model in which preferences for costless and costly information about future outcomes were independent, and in which information preference was modulated by both outcome mean and outcome variance. Critically, we also found positive associations between a model parameter controlling preference for costly information and individual differences in latent traits of both anxiety and obsessive-compulsion. These associations were invariant across different payout domains, providing evidence that individuals high in obsessive-compulsive and anxious traits show a generalized increase in willingness-to-pay for unusable information about uncertain future outcomes, even though this behavior reduces their expected future reward.","link":"/opendata/bennett-et-al-2021/"},{"title":"Bennett, Radulescu et al. (2021)","text":"Positive and negative affective states are respectively associated with optimistic and pessimistic expectations regarding future reward. 
One mechanism that might underlie these affect-related expectation biases is attention to positive- versus negative-valence stimulus features (e.g., attending to the positive reviews of a restaurant versus its expensive price). Here we tested the effects of experimentally induced positive and negative affect on feature-based attention in 120 participants completing a compound-generalization task with eye-tracking. We found that participants’ reward expectations for novel compound stimuli were modulated by the affect induction in an affect-congruent way: positive affect increased reward expectations for compounds, whereas negative affect decreased reward expectations. Computational modelling and eye-tracking analyses each revealed that these effects were driven by affect-congruent changes in participants’ allocation of attention to high- versus low-value features of compound stimuli. These results provide mechanistic insight into a process by which affect produces biases in generalized reward expectations.","link":"/opendata/bennett-radulescu-et-al-2021/"},{"title":"Benwell et al. (2022)","text":"Human behaviours are guided by how confident we feel in our abilities. When confidence does not reflect objective performance, this can impact critical adaptive functions and impair life quality. Distorted decision-making and confidence have been associated with mental health problems. Here, utilising advances in computational and transdiagnostic psychiatry, we sought to map relationships between psychopathology and both decision-making and confidence in the general population across two online studies (N’s = 344 and 473, respectively). The results revealed dissociable decision-making and confidence signatures related to distinct symptom dimensions. 
A dimension characterised by compulsivity and intrusive thoughts was found to be associated with reduced objective accuracy but, paradoxically, increased absolute confidence, whereas a dimension characterised by anxiety and depression was associated with systematically low confidence in the absence of impairments in objective accuracy. These relationships replicated across both studies and distinct cognitive domains (perception and general knowledge), suggesting that they are reliable and domain general. Additionally, whereas Big-5 personality traits also predicted objective task performance, only symptom dimensions related to subjective confidence. Domain-general signatures of decision-making and metacognition characterise distinct psychological dispositions and psychopathology in the general population and implicate confidence as a central component of mental health.","link":"/opendata/benwell-et-al-2022/"},{"title":"Bertram et al. (2021)","text":"Information about risks and probabilities is ubiquitous in our environment, forming the basis for decisions in an uncertain world. Emotions are known to modulate subjective probability assessments when probabilistic information is emotionally valenced. Yet little is known about the role of emotions in subjective probability assessment of affectively neutral events. We investigated this in one correlational study (Study 1, N = 162) and one experimental study (Study 2, N = 119). As predicted, we found that emotional dominance modulated the degree of conservatism in respondents’ neutral probability estimates. Remarkably, this pattern also transferred to realistic risk assessments. Furthermore, respondents’ tendency to use the representativeness heuristic as a proxy for probability was increased in high dominance individuals. 
Our findings highlight the importance of considering emotions, particularly the little-understood emotion dimension dominance, in research on probabilistic cognition.","link":"/opendata/bertram-et-al-2021/"},{"title":"Biderman & Shohamy (2021)","text":"The goal of deliberation is to separate between options so that we can commit to one and leave the other behind. However, deliberation can, paradoxically, also form an association in memory between the chosen and unchosen options. Here, we consider this possibility and examine its consequences for how outcomes affect not only the value of the options we chose, but also, by association, the value of options we did not choose. In five experiments (total n = 612), including a preregistered experiment (n = 235), we found that the value assigned to unchosen options is inversely related to their chosen counterparts. Moreover, this inverse relationship was associated with participants’ memory of the pairs they chose between. Our findings suggest that deciding between options does not end the competition between them. Deliberation binds choice options together in memory such that the learned value of one can affect the inferred value of the other.","link":"/opendata/biderman-shohamy-2021/"},{"title":"Bioud et al. (2022)","text":"To decide whether a course of action is worth pursuing, individuals typically weigh its expected costs and benefits. Optimal decision-making relies upon accurate effort cost anticipation, which is generally assumed to be performed independently from goal valuation. In two experiments (n = 46), we challenged this independence principle of standard decision theory. We presented participants with a series of treadmill routes randomly associated to monetary rewards and collected both accept versus decline decisions and subjective estimates of energetic cost. 
Behavioural results show that higher monetary prospects led participants to provide higher cost estimates, although reward was independent from effort in our design. Among candidate cognitive explanations, they support a model in which prospective cost assessment is biased by the output of an automatic computation adjusting effort expenditure to goal value. This decision bias might lead people to abandon the pursuit of valuable goals that are in fact not so costly to achieve.","link":"/opendata/bioud-et-al-2022/"},{"title":"Biria et al. (2023)","text":"There has been little analysis of neurochemical correlates of compulsive behaviour to illuminate its underlying neural mechanisms. We use 7-Tesla proton magnetic resonance spectroscopy (1H-MRS) to assess the balance of excitatory and inhibitory neurotransmission by measuring glutamate and GABA levels in anterior cingulate cortex (ACC) and supplementary motor area (SMA) of healthy volunteers and participants with Obsessive-Compulsive Disorder (OCD). Within the SMA, trait and clinical measures of compulsive behaviour are related to glutamate levels, whereas a behavioural index of habitual control correlates with the glutamate:GABA ratio. Participants with OCD also show the latter relationship in the ACC while exhibiting elevated glutamate and lower GABA levels in that region. This study highlights SMA mechanisms of habitual control relevant to compulsive behaviour, common to the healthy sub-clinical and OCD populations. The results also demonstrate additional involvement of anterior cingulate in the balance between goal-directed and habitual responding in OCD.","link":"/opendata/biria-et-al-2023/"},{"title":"Bisschop (2021)","text":"Database of cognitive control task data (e.g., Stroop, Flanker tasks).","link":"/opendata/bisschop-2021/"},{"title":"Blackwell et al. 
(2023)","text":"According to major cognitive accounts of panic disorder, bodily sensations can lead to automatic activation of an associative fear network, potentially triggering a cascade of cognitive, emotional, and physiological responses culminating in a panic attack. However, the evidence for the automatic associations assumed by these models is mixed. This may reflect the heterogeneous nature of panic disorder, in that the relative importance of different bodily sensations and symptoms varies between individuals. The current study aimed to test this possibility via measuring the associations between three different sets of panic symptoms (cognitive, respiratory, cardiac) and scores on three symptom-specific single target implicit association tests (STIATs). A total of 226 unselected female participants aged 18-35 completed the STIATs as well as questionnaires assessing panic symptoms and related measures in a web-based study. Only limited evidence was found to support the idea of specific associations between STIAT stimuli sets and their related panic symptoms. Exploratory analyses indicated that there were only associations between STIAT scores and panic-relevant questionnaires amongst those participants who had experienced a panic attack in the previous 6 months. The results have implications for measuring panic-relevant associations and understanding their role in panic disorder.","link":"/opendata/blackwell-et-al-2023/"},{"title":"Blain & Rutledge (2020)","text":"Subjective well-being or happiness is often associated with wealth. Recent studies suggest that momentary happiness is associated with reward prediction error, the difference between experienced and predicted reward, a key component of adaptive behaviour. We tested subjects in a reinforcement learning task in which reward size and probability were uncorrelated, allowing us to dissociate between the contributions of reward and learning to happiness. 
Using computational modelling, we found convergent evidence across stable and volatile learning tasks that happiness, like behaviour, is sensitive to learning-relevant variables (i.e. probability prediction error). Unlike behaviour, happiness is not sensitive to learning-irrelevant variables (i.e. reward prediction error). Increasing volatility reduces how many past trials influence behaviour but not happiness. Finally, depressive symptoms reduce happiness more in volatile than stable environments. Our results suggest that how we learn about our world may be more important for how we feel than the rewards we actually receive.","link":"/opendata/blain-rutledge-2020/"},{"title":"Blain et al. (2022)","text":"Social interactions influence people’s feelings and behavior. Here, we propose that a person’s well-being is influenced not only by interactions they experience themselves, but also by those they observe. In particular, we test and quantify the influence of observed selfishness and observed inequality on a bystander’s feelings and non-costly punishment decisions. We developed computational models that relate others’ (un)selfish acts to observers’ emotional reactions and punishment decisions. These characterize the rules by which others’ interactions are transformed into bystanders’ reactions, and successfully predict those reactions in out-of-sample participants. The models highlight the impact of two social values: ‘selfishness aversion’ and ‘inequality aversion’. As for the latter, we find that even small violations from perfect equality have a disproportionately large impact on feelings and punishment. In this age of internet and social media we constantly observe others’ online interactions, in addition to in-person interactions. Quantifying the consequences of such observations is important for predicting their impact on society.","link":"/opendata/blain-et-al-2022/"},{"title":"Blain et al.
(2023)","text":"Humans frequently engage in intrinsically rewarding activities (for example, consuming art, reading). Despite such activities seeming diverse, we show that sensitivity to intrinsic rewards is domain general and associated with mental health. In this cross-sectional study, participants online (N = 483) were presented with putative visual, cognitive and social intrinsic rewards as well as monetary rewards and neutral stimuli. All rewards elicited positive feelings (were ‘liked’), generated consummatory behaviour (were ‘wanted’) and increased the likelihood of the action leading to them (were ‘reinforcing’). Factor analysis revealed that ~40% of response variance across stimuli was explained by a general sensitivity to all rewards, but not to neutral stimuli. Affective aspects of mental health were associated with sensitivity to intrinsic, but not monetary, rewards. These results may help explain thriving and suffering: individuals with high reward sensitivity will engage in a variety of intrinsically rewarding activities, eventually finding those they excel at, whereas low sensitivity individuals will not.","link":"/opendata/blain-et-al-2023/"},{"title":"Blankenship & Kibbe (2023)","text":"The ability to use knowledge to guide the completion of goals is a critical cognitive skill, but 3-year-olds struggle to complete goals that require multiple steps. This study asked whether 3-year-olds could benefit from “plan chunking” to complete multistep goals. Thirty-two U.S. children (range = 35.75-46.59 months; 18 girls; 9 white, 3 mixed race, 20 unknown; tested between July 2020 and April 2021) were asked to complete “treasure maps,” retrieving four colored map pieces by pressing specific buttons on a “rainbow box.” Children completed more of the four-step sequence correctly when the steps were presented in a way that encouraged chunking the steps into pairs. 
These findings suggest a potential mechanism supporting memory-guided planning abilities in early childhood.","link":"/opendata/blankenship-kibbe-2023/"},{"title":"Bode et al. (2023)","text":"Previous research has shown that people intrinsically value non-instrumental information, which cannot be used to change the outcome of events, but only provides an early resolution of uncertainty. This is true even for information about rather inconsequential events, such as the outcomes of small lotteries. Here we investigated whether participants’ willingness to pay for non-instrumental information about the outcome of simple coin-flip lotteries with guaranteed winnings was modulated by acute stress. Stress was induced using the Socially Evaluated Cold Pressor Test (SECPT), and information-seeking choices were compared to a warm water control group. Our results neither support the hypothesis that stress decreases information-seeking by directing cognitive resources away from the relevance of the lotteries, nor the opposite hypothesis that stress increases information-seeking by driving anxiety levels up. Instead, we found that despite successful stress induction, as evidenced by increased saliva cortisol levels in the SECPT group, information valuation was remarkably stable. This finding is in line with recent findings that experimentally increased state anxiety did not modulate non-instrumental information seeking. Together, these results suggest that the aversiveness of “not knowing” is a stable cognitive state and not easily modulated by situational context, such as acute stress.","link":"/opendata/bode-et-al-2023/"},{"title":"Bogdanov et al. (2021)","text":"Adverse effects following acute stress are traditionally thought to reflect functional impairments of central executive-dependent cognitive-control processes. 
However, recent evidence demonstrates that cognitive-control application is perceived as effortful and aversive, indicating that stress-related decrements in cognitive performance could denote decreased motivation to expend effort instead. To investigate this hypothesis, we tested 40 young, healthy individuals (20 female, 20 male) under both stress and control conditions in a 2-day study that had a within-subjects design. Cognitive-effort avoidance was assessed using the demand-selection task, in which participants chose between performing low-demand and high-demand variants of a task-switching paradigm. We found that acute stress indeed increased participants’ preference for less demanding behavior, whereas task-switching performance remained intact. Additional Bayesian and multiverse analyses confirmed the robustness of this effect. Our findings provide novel insights into how stressful experiences shape behavior by modulating our motivation to employ cognitive control.","link":"/opendata/bogdanov-et-al-2021/"},{"title":"Bognar et al. (2023)","text":"The U-shaped curve has long been recognized as a fundamental concept in psychological science, particularly in theories about motivational accounts and cognitive control. In this study (N=330), we empirically tested the prediction of a non-monotonic, curvilinear relationship between task difficulty and control adaptation. Drawing from Motivational Intensity Theory (MIT) and the expected value of control (EVC) framework, we hypothesized that control intensity would increase with task difficulty until a maximum tolerable level, after which it would decrease. To examine this hypothesis, we conducted two experiments utilizing Stroop-like conflict tasks, systematically manipulating the number of distractors to vary task difficulty. We assessed control adaptation and measured subjective task difficulty. Our results revealed a curvilinear pattern between perceived task difficulty and adaptation of control.
The findings provide empirical support for the theoretical accounts of MIT and EVC, highlighting the nonlinear nature of the relationship between task difficulty and cognitive control.","link":"/opendata/bognar-et-al-2023/"},{"title":"Bolenz et al. (2019)","text":"Humans employ different strategies when making decisions. Previous research has reported reduced reliance on model-based strategies with aging, but it remains unclear whether this is due to cognitive or motivational factors. Moreover, it is not clear how aging affects the metacontrol of decision making, that is the dynamic adaptation of decision-making strategies to varying situational demands. In this cross-sectional study, we tested younger and older adults in a sequential decision-making task that dissociates model-free and model-based strategies. In contrast to previous research, model-based strategies led to higher payoffs. Moreover, we manipulated the costs and benefits of model-based strategies by varying reward magnitude and the stability of the task structure. Compared to younger adults, older adults showed reduced model-based decision making and less adaptation of decision-making strategies. Our findings suggest that aging affects the metacontrol of decision-making strategies and that reduced model-based strategies in older adults are due to limited cognitive abilities.","link":"/opendata/bolenz-et-al-2019/"},{"title":"Bolenz & Eppinger (2021)","text":"The development of metacontrol of decision making and its susceptibility to framing effects were investigated in a sample of 201 adolescents and adults in Germany (12-25 years, 111 female, ethnicity not recorded). In a task that dissociates model-free and model-based decision making, outcome magnitude and outcome valence were manipulated. Both adolescents and adults showed metacontrol and metacontrol tended to increase across adolescence. 
Furthermore, model-based decision making was more pronounced for loss compared to gain frames but there was no evidence that this framing effect differed with age. Thus, the strategic adaptation of decision making continues to develop into young adulthood and for both adolescents and adults, losses increase the motivation to invest cognitive resources into an effortful decision-making strategy.","link":"/opendata/bolenz-eppinger-2021/"},{"title":"Bolenz et al. (2022)","text":"Humans show metacontrol of decision making, that is they adapt their reliance on decision-making strategies toward situational differences such as differences in reward magnitude. Specifically, when higher rewards are at stake, individuals increase reliance on a more accurate but cognitively effortful strategy. We investigated whether the personality trait Need for Cognition (NFC) explains individual differences in metacontrol. Based on findings of cognitive effort expenditure in executive functions, we expected more metacontrol in individuals low in NFC. In two independent studies, metacontrol was assessed by means of a decision-making task that dissociates different reinforcement-learning strategies and in which reward magnitude was manipulated across trials. In contrast to our expectations, NFC did not account for individual differences in metacontrol of decision making. In fact, a Bayesian analysis provided moderate to strong evidence against a relationship between NFC and metacontrol. Beyond this, there was no consistent evidence for a relationship between NFC and overall model-based decision making.
These findings show that the effect of rewards on the engagement of effortful decision-making strategies is largely independent of the intrinsic motivation for engaging in cognitively effortful tasks and suggest a differential role of NFC for the regulation of cognitive effort in decision making and executive functions.","link":"/opendata/bolenz-et-al-2022/"},{"title":"Bolenz & Pachur (2023)","text":"Younger and older adults differ in their risky choices. Theoretical frameworks on human aging point to various cognitive and motivational factors that might underlie these differences. Using a novel computational model based on resource rationality, we find that the two age groups select qualitatively different strategies. Importantly, older adults did not use simpler strategies than younger adults, they did not select among fewer strategies, they did not make more errors, and they did not put more weight on cognitive costs. Instead, older adults selected strategies that had different risk propensities than those selected by younger adults. Our modeling approach suggests that rather than being a consequence of cognitive decline, age differences in risky choice seem to mainly reflect motivational differences between age groups.","link":"/opendata/bolenz-pachur-2023/"},{"title":"Bond et al. (2021)","text":"In uncertain or unstable environments, sometimes the best decision is to change your mind. To shed light on this flexibility, we evaluated how the underlying decision policy adapts when the most rewarding action changes. Human participants performed a dynamic two-armed bandit task that manipulated the certainty in relative reward (conflict) and the reliability of action-outcomes (volatility). Continuous estimates of conflict and volatility contributed to shifts in exploratory states by changing both the rate of evidence accumulation (drift rate) and the amount of evidence needed to make a decision (boundary height), respectively. 
At the trialwise level, following a switch in the optimal choice, the drift rate plummets and the boundary height weakly spikes, leading to a slow exploratory state. We find that the drift rate drives most of this response, with an unreliable contribution of boundary height across experiments. Surprisingly, we find no evidence that pupillary responses were associated with decision policy changes. We conclude that humans show a stereotypical shift in their decision policies in response to environmental changes.","link":"/opendata/bond-et-al-2021/"},{"title":"Boned & López-Moliner (2022)","text":"When we try to assess the duration of an event, we are often affected by external information. Studies on multiple timing have found that simultaneous timing information can produce an averaging or central tendency effect, where the perceived duration of the elements tends to be biased towards a general average. We wanted to assess how this effect induced by simultaneous distractors could depend on the temporal similarity between stimuli. We used a duration judgment task in which participants (n = 22) had to compare the duration of two identical targets (1 s) accompanied by simultaneous distractors of different durations (0.3, 0.7, 1.5 or 3 s). We found a central tendency effect, where duration judgments of the target were systematically biased towards the duration of the distractors that accompanied them. We put forward a model based on the concept of duration-channels that can explain the central tendency effect with only one estimated parameter. This parameter modulates the rate of decay of this effect as distractor durations become more different from the duration of the target.","link":"/opendata/boned-lopez-moliner-2022/"},{"title":"Bonnaire et al. (2022)","text":"Blaszczynski and Nower (2002) conceptualized their Pathways Model by postulating the existence of three subtypes of problem gamblers who share common characteristics, but also present specific ones.
This study investigated how the psychological mechanisms postulated in the Pathways Model predict clinical status in a sample that combined treatment-seeking gamblers (n = 59) and non-problematic community gamblers (n = 107). To test the Pathways Model, we computed a hierarchic logistic regression in which variables associated with each postulated pathway were entered sequentially to predict the status of the treatment-seeking gambler. Self-report questionnaires measured gambling-related cognitions, alexithymia, emotional reactivity, emotion regulation strategies and impulsivity. Behavioural tasks measured gambling persistence (slot machine task), decision-making under uncertainty (Iowa Gambling Task) and decision-making under risk (Game of Dice Task). We showed that specific factors theorized as underlying mechanisms for each pathway predicted the status of clinical gambler. For each pathway, significant predictors included gambling-related cognitive distortions and behaviourally measured gambling persistence (behaviourally conditioned pathway), emotional reactivity and emotion regulation strategies (emotionally vulnerable pathway), and lack of premeditation impulsivity facet (impulsivist-antisocial pathway). Our study adds to the body of literature confirming the validity of the Pathways Model and holds important implications in terms of assessment and treatment of problem gambling. In particular, a standardized assessment based on the Pathways Model should promote individualized treatment strategies to allow clinicians to take into account the high heterogeneity that characterizes gambling disorder.","link":"/opendata/bonnaire-et-al-2022/"},{"title":"Boschet et al. (2022)","text":"Conflicts between avoiding feared stimuli versus approaching them for competing rewards are essential for functional behavior and anxious psychopathology. Yet, little is known about the underlying decision process.
We examined approach-avoidance decisions and their temporal dynamics when avoiding Pavlovian fear stimuli conflicted with gaining rewards. First, a formerly neutral stimulus (CS+) was repeatedly paired with an aversive stimulus (US) to establish Pavlovian fear. Another stimulus (CS-) was never paired with the US. A control group received neutral tones instead of aversive USs. Next, in each of 324 trials, participants chose between a CS-/low reward and a CS+/high reward option. For the latter, probability of CS+ presentation (Pavlovian fear information) and reward magnitude (reward information) varied. Computer mouse movements were tracked to capture the decision dynamics. Although no more USs occurred, pronounced and persistent costly avoidance of the Pavlovian fear CS+ was found. Time-continuous multiple regression of movement trajectories revealed a stronger and faster impact of Pavlovian fear compared to reward information during decision-making. The impact of fear information, but not reward information, modestly decreased across trials. These findings suggest a persistently stronger weighting of fear compared to reward information during approach-avoidance decisions, which may facilitate the development of pathological avoidance.","link":"/opendata/boschet-et-al-2022/"},{"title":"Bradshaw & McGettigan (2021)","text":"Recent research suggests that reinforcement learning may underlie trait formation in social interactions with faces. The current study investigated whether the same learning mechanisms could be engaged for trait learning from voices. On each trial of a training phase, participants (N = 192) chose from pairs of human or slot machine targets that varied in the (1) reward value and (2) generosity of their payouts. Targets were either auditory (voices or tones; Experiment 1) or visual (faces or icons; Experiment 2) and were presented sequentially before payout feedback. 
A test phase measured participant choice behaviour, and a post-test recorded their target preference ratings. For auditory targets, we found a significant effect of reward only on target choices, but saw higher preference ratings for more generous humans and slot machines. For visual targets, findings from previous studies were replicated: participants learned about both generosity and reward, but generosity was prioritised in the human condition. These findings provide one of the first demonstrations of reinforcement learning of reward with auditory stimuli in a social learning task, but suggest that the use of auditory targets does alter learning in this paradigm. Conversely, reinforcement learning of reward and trait information with visual stimuli remains intact even when sequential presentation introduces a delay in feedback.","link":"/opendata/bradshaw-mcgettigan-2021/"},{"title":"Bratzke & Ulrich (2019)","text":"The human ability to compare time between sensory modalities implies a supramodal representation of time. This notion is consistent with the pacemaker-counter model (PCM), the core architecture of prominent timing theories. Some theorists, however, have promoted modality-specific timing mechanisms, which might hamper crossmodal temporal comparison. This study tested whether PCM is sufficient to account for intra- as well as crossmodal timing. To account for modality-specific timing differences, we proceeded from the common assumption that the pacemaker runs faster for auditory than for visual stimuli. Participants reproduced short and long standards (800 vs. 2,400 ms) by terminating a comparison stimulus. In Experiment 1, in each trial the sensory modalities (auditory vs. visual) of the standard and the comparison were the same (congruent) or different (incongruent). PCM implies that timing performance depends on modality order. However, there should be virtually no congruency effects on overall performance. 
Although the results largely matched the predictions of PCM, there were substantial congruency effects on reproduction variability especially in the subsecond range. Three intramodal control experiments, however, showed that similar congruency effects can be observed when the standard and the comparison differ in intramodal characteristics. This suggests that temporal representations are not isolated from nontemporal stimulus characteristics, even when these are subtle and within the same modality. The present results can be interpreted as evidence for sensory timing within the subsecond range. Nevertheless, we used computer simulations to evaluate extensions of PCM that could account for the present result pattern, while retaining PCM’s supramodal property.","link":"/opendata/bratzke-ulrich-2019/"},{"title":"Braun et al. (2018)","text":"Many decisions are based on an internal model of the world. Yet, how such a model is constructed from experience and represented in memory remains unknown. We test the hypothesis that reward shapes memory for sequences of events by retroactively prioritizing memory for objects as a function of their distance from reward. Human participants encountered neutral objects while exploring a series of mazes for reward. Across six data sets, we find that reward systematically modulates memory for neutral objects, retroactively prioritizing memory for objects closest to the reward. This effect of reward on memory emerges only after a 24-hour delay and is stronger for mazes followed by a longer rest interval, suggesting a role for post-reward replay and overnight consolidation, as predicted by neurobiological data in animals. These findings demonstrate that reward retroactively prioritizes memory along a sequential gradient, consistent with the role of memory in supporting adaptive decision-making.","link":"/opendata/braun-et-al-2018/"},{"title":"Breslav et al. 
(2022)","text":"As children age, they can learn increasingly complex features of environmental structure, a key prerequisite for adaptive decision-making. Yet when we tested children (N = 304, 4-13 years old) in the Children’s Gambling Task, an age-appropriate variant of the Iowa Gambling Task, we found that age was negatively associated with performance. However, this paradoxical effect of age was found only in children who exhibited a maladaptive deplete-replenish bias, a tendency to shift choices after positive outcomes and repeat choices after negative outcomes. We found that this bias results from sensitivity to incidental nonrandom structure in the canonical, deterministic forms of these tasks, and that it would actually lead to optimal outcomes if the tasks were not deterministic. Our results illustrate that changes in decision-making across early childhood reflect, in part, increasing sensitivity to environmental structure.","link":"/opendata/breslav-et-al-2022/"},{"title":"Brooks & Sokol-Hessner (2023)","text":"Some of the most influential modern theories of risky monetary decision-making assume that choices result from stable, trait-like preferences, invariant to contextual influences such as recent events. Recent research has challenged this assumption, demonstrating that even when values and probabilities are explicit and known, decisions under risk are contextually sensitive, affected by recent events on multiple timescales, including immediate (previous monetary outcomes), neighborhood (recently encountered values), and global (cumulative earnings relative to dynamic expectations) events. Such temporal context-dependencies are perplexing, because relying on recent events at any timescale is inconsistent with the assumed goal of risky monetary decision-making: to maximize payoff. Identifying this suboptimal behavioral pattern raises the possibility it can be mitigated using behavioral change strategies.
We tested whether the effects of temporal context in risk-taking can be attenuated with an intentional cognitive strategy. 124 participants completed two rounds of a contextually structured gambling task with trial-by-trial feedback, the Emotion Regulation Questionnaire, and working memory capacity tasks. Participants were randomly assigned to complete each gambling round with a strategy either emphasizing a natural, uncontrolled decision-making approach or directly instructing participants to ignore context. Instructions to ignore context influenced temporal context effects on the immediate timescale but did not change those on the neighborhood or global timescales. The strategy was not uniformly effective for all individuals. The cognitive strategy eliminated (and even reversed) contextual effects on the immediate timescale for individuals with moderate and high habitual use of reappraisal. However, the cognitive strategy paradoxically strengthened contextual effects on the immediate timescale for individuals with low habitual use of reappraisal. The selective effects of strategic control on contextual influence indicate both intriguing possibilities and limits on the ability to optimize human decision-making, and suggest that people do not simply maximize local utility, but rather that even simple risky decision-making may be fundamentally goal-dependent.","link":"/opendata/brooks-sokol-hessner-2023/"},{"title":"Brown et al. (2022)","text":"When navigating uncertain worlds, humans must balance exploring new options versus exploiting known rewards. Longer horizons and spatially structured option values encourage humans to explore, but the impact of real-world cognitive constraints such as environment size and memory demands on explore-exploit decisions is unclear. In the present study, humans chose between options varying in uncertainty during a multi-armed bandit task with varying environment size and memory demands.
Regression and cognitive computational models of choice behavior showed that with a lower cognitive load, humans are more exploratory than a simulated value-maximizing learner, but under cognitive constraints, they adaptively scale down exploration to maintain exploitation. Thus, while humans are curious, cognitive constraints force people to decrease their strategic exploration in a resource-rational-like manner to focus on harvesting known rewards.","link":"/opendata/brown-et-al-2022/"},{"title":"Bruder et al. (2021a)","text":"In recent years the emergence of high-performance virtual reality (VR) technology has opened up new possibilities for the examination of context effects in psychological studies. The opportunity to create ecologically valid stimulation in a highly controlled lab environment is especially relevant for studies of psychiatric disorders, where it can be problematic to confront participants with certain stimuli in real life. However, before VR can be confidently applied widely it is important to establish that commonly used behavioral tasks generate reliable data within a VR surrounding. One field of research that could benefit greatly from VR-applications are studies assessing the reactivity to addiction related cues (cue-reactivity) in participants suffering from gambling disorder. Here we tested the reliability of a commonly used temporal discounting task in a novel VR set-up designed for the concurrent assessment of behavioral and psychophysiological cue-reactivity in gambling disorder. On 2 days, thirty-four healthy non-gambling participants explored two rich and navigable VR-environments (neutral: café vs. gambling-related: casino and sports-betting facility), while their electrodermal activity was measured using remote sensors. In addition, participants completed the temporal discounting task implemented in each VR environment. On a third day, participants performed the task in a standard lab testing context. 
We then used comprehensive computational modeling using both standard softmax and drift diffusion model (DDM) choice rules to assess the reliability of discounting model parameters assessed in VR. Test-retest reliability estimates were good to excellent for the discount rate log(k), whereas they were poor to moderate for additional DDM parameters. Differences in model parameters between standard lab testing and VR, reflecting reactivity to the different environments, were mostly numerically small and of inconclusive directionality. Finally, while exposure to VR generally increased tonic skin conductance, this effect was not modulated by the neutral versus gambling-related VR-environment. Taken together this proof-of-concept study in non-gambling participants demonstrates that temporal discounting measures obtained in VR are reliable, suggesting that VR is a promising tool for applications in computational psychiatry, including studies on cue-reactivity in addiction.","link":"/opendata/bruder-et-al-2021a/"},{"title":"Bruder et al. (2021b)","text":"High-performance virtual reality (VR) technology has opened new possibilities for the examination of the reactivity towards addiction-related cues (cue-reactivity) in addiction. In this preregistered study (https://osf.io/4mrta), we investigated the subjective, physiological, and behavioral effects of gambling-related VR environment exposure in participants reporting frequent or pathological gambling (n=31) as well as non-gambling controls (n=29). On two separate days, participants explored two rich and navigable VR-environments (neutral: café vs. gambling-related: casino/sports-betting facility), while electrodermal activity and heart rate were continuously measured using remote sensors. Within VR, participants performed a temporal discounting task and a sequential decision-making task designed to assess model-based and model-free contributions to behavior. 
Replicating previous findings, we found strong evidence for increased temporal discounting and reduced model-based control in participants reporting frequent or pathological gambling. Although VR gambling environment exposure increased subjective craving, there was, if anything, inconclusive evidence for further behavioral or physiological effects. Instead, VR exposure substantially increased physiological arousal (electrodermal activity) across groups and conditions. VR is a promising tool for the investigation of context effects in addiction, but some caution is warranted since effects of real gambling environments might not generally replicate in VR. Future studies should delineate how factors such as cognitive load and ecological validity could be balanced to create a more naturalistic VR experience.","link":"/opendata/bruder-et-al-2021b/"},{"title":"Brus et al. (2021)","text":"Confidence, the subjective estimate of decision quality, is a cognitive process necessary for learning from mistakes and guiding future actions. The origins of confidence judgments resulting from economic decisions remain unclear. We devise a task and computational framework that allowed us to formally tease apart the impact of various sources of confidence in value-based decisions, such as uncertainty emerging from encoding and decoding operations, as well as the interplay between gaze-shift dynamics and attentional effort. In line with canonical decision theories, trial-to-trial fluctuations in the precision of value encoding impact economic choice consistency. However, this uncertainty has no influence on confidence reports. Instead, confidence is associated with endogenous attentional effort towards choice alternatives and downstream noise in the comparison process. 
These findings provide an explanation for confidence (mis)attributions in value-guided behaviour, suggesting mechanistic influences of endogenous attentional states for guiding decisions and metacognitive awareness of choice certainty.","link":"/opendata/brus-et-al-2021/"},{"title":"Bueno & Cravo (2021)","text":"Studies investigating the neural mechanisms of time perception often measure brain activity while participants perform a temporal task. However, several of these studies are based exclusively on tasks in which time is relevant, making it hard to dissociate activity related to decisions about time from other task-related patterns. In the present study, human participants performed a temporal or color discrimination task of visual stimuli. Participants were informed which magnitude they would have to judge before or after presenting the two stimuli (S1 and S2) in different blocks. Our behavioral results showed, as expected, that performance was better when participants knew beforehand which magnitude they would judge. Electrophysiological data (EEG) was analysed using Linear Discriminant Contrasts (LDC) and a Representational Similarity Analysis (RSA) approach to investigate whether and when information about time and color was encoded. During the presentation of S1, we did not find consistent differences in EEG activity as a function of the task. On the other hand, during S2, we found that temporal and color information was encoded in a task-relevant manner. Taken together, our results suggest that task goals strongly modulate decision-related information in EEG activity.","link":"/opendata/bueno-cravo-2021/"},{"title":"Burleigh et al. (2022)","text":"Many symptoms of anxiety and posttraumatic stress disorder are elicited by fearful mental imagery. Yet little is known about how visual imagery of conditioned stimuli (CSs) affects the acquisition of differential fear conditioning. 
Across three experiments with younger human adults (Experiment 1: n = 33, Experiment 2: n = 27, Experiment 3: n = 26), we observed that participants acquired differential fear conditioning to both viewed and imagined percepts serving as the CSs, as measured via self-reported fear and skin conductance responses. Additionally, this differential conditioning generalized across CS-percept modalities such that differential conditioning acquired in response to visual percepts generalized to the corresponding imagined percepts and vice versa. This is novel evidence that perceived and imagined stimuli engage learning processes in very similar ways and is consistent with the theory that mental imagery is depictive and recruits neural resources shared with visual perception. Our findings also provide new insight into the mechanisms of anxiety and related disorders.","link":"/opendata/burleigh-et-al-2022/"},{"title":"Burton et al. (2022)","text":"How people update their beliefs when faced with new information is integral to everyday life. A sizeable body of literature suggests that people’s belief updating is optimistically biased, such that their beliefs are updated more in response to good news than bad news. However, recent research demonstrates that findings previously interpreted as evidence of optimistic belief updating may be the result of flaws in experimental design, rather than motivated reasoning. In light of this controversy, we conduct three pre-registered variations of the standard belief updating paradigm (combined N = 300) in which we test for asymmetric belief updating with neutral, non-valenced stimuli using analytic approaches found in previous research. We find evidence of seemingly biased belief updating with neutral stimuli - results that cannot be attributed to a motivational, valence-based, optimism account - and further show that there is uninterpretable variability across samples and analytic techniques. 
Jointly, these results serve to highlight the methodological flaws in current optimistic belief updating research.","link":"/opendata/burton-et-al-2022/"},{"title":"Byrne et al. (2020a)","text":"Previous research suggests that depressive symptoms are associated with altered sensitivity to reward and punishment in various decision-making contexts. Building on this work, this study investigated whether depressed-affect symptoms influenced risky decision making under time pressure. The effect of depressed affect on risky choice was assessed in a reward (Experiments 1A and 1B) and loss (Experiment 2) context under low- and high-pressure conditions. Decisions involved learning to choose between a “sure” option and a “risky” option with identical expected values. In Experiment 1A, depressed affect predicted increased risky decision making under time pressure but did not affect decision making under low pressure. Experiment 1B replicated this effect. In contrast, in Experiment 2, depressed affect led to reduced risk taking in low-pressure condition but did not affect decision making under high pressure. These results suggest that the pattern of risky decision making among those experiencing symptoms of depressed affect depends on performance pressure demands.","link":"/opendata/byrne-et-al-2020a/"},{"title":"Byrne et al. (2020b)","text":"Acute stress has been shown to influence reward sensitivity, feedback learning, and risk-taking during decision-making, primarily through activation of the hypothalamic pituitary axis (HPA). However, it is unclear how acute stress affects decision-making among choices that vary in their degree of uncertainty. To address this question, we conducted two experiments in which participants repeatedly chose between two options-a high-uncertainty option that offered highly variable rewards but was advantageous in the long-term, and a low-uncertainty option that offered smaller yet more consistent rewards. 
The Socially Evaluated Cold Pressor Task (SECPT) was utilized to induce acute stress. Participants in Experiment 1 (N = 114) were exposed to either the SECPT or a warm-water control condition and then completed the decision-making under uncertainty task. Compared to the control condition, those exposed to the acute stress manipulation chose the high-uncertainty option that provided highly variable but larger rewards over the option that provided stable, smaller rewards. Experiment 2 (N = 95) incorporated a salivary cortisol measure. Results replicated the behavioral findings in Experiment 1 and demonstrated that the acute stress manipulation increased salivary cortisol. This work suggests that moderate acute stress is associated with tolerance of outcome variability in contexts that depend on learning to maximize rewards.","link":"/opendata/byrne-et-al-2020b/"},{"title":"Byrne et al. (2023)","text":"Eye movement data has been extensively utilized by researchers interested in studying decision-making within the strategic setting of economic games. In this paper, we demonstrate that both deep learning and support vector machine classification methods are able to accurately identify participants’ decision strategies before they commit to action while playing games. Our approach focuses on creating scanpath images that best capture the dynamics of a participant’s gaze behaviour in a way that is meaningful for predictions to the machine learning models. Our results demonstrate a higher classification accuracy by 18% points compared to a baseline logistic regression model, which is traditionally used to analyse gaze data recorded during economic games. In a broader context, we aim to illustrate the potential for eye-tracking data to create information asymmetries in strategic environments in favour of those who collect and process the data. 
These information asymmetries could become especially relevant as eye-tracking is expected to become more widespread in user applications, with the seemingly imminent mass adoption of virtual reality systems and the development of devices with the ability to record eye movement outside of a laboratory setting.","link":"/opendata/byrne-et-al-2023/"},{"title":"Callahan-Flintoft et al. (2020)","text":"Much of our world changes smoothly in time, yet the allocation of attention is typically studied with sudden changes - transients. A sizeable lag in selecting feature information is seen when stimuli change smoothly. Yet this lag is not seen with temporally uncorrelated rapid serial visual presentation (RSVP) stimuli. This suggests that temporal autocorrelation of a feature paradoxically increases the latency at which information is sampled. To test this, participants are asked to report the color of a disk when a cue was presented. There is an increase in selection latency when the disk’s color changed smoothly compared to randomly. This increase is due to the smooth color change presented after the cue rather than extrapolated predictions based on the color changes presented before. These results support an attentional drag theory, whereby attentional engagement is prolonged when features change smoothly. A computational model provides insights into the potential underlying neural mechanisms.","link":"/opendata/callahan-flintoft-et-al-2020/"},{"title":"Callaway et al. (2022)","text":"Making good decisions requires thinking ahead, but the huge number of actions and outcomes one could consider makes exhaustive planning infeasible for computationally constrained agents, such as humans. How people are nevertheless able to solve novel problems when their actions have long-reaching consequences is thus a long-standing question in cognitive science. 
To address this question, we propose a model of resource-constrained planning that allows us to derive optimal planning strategies. We find that previously proposed heuristics such as best-first search are near optimal under some circumstances but not others. In a mouse-tracking paradigm, we show that people adapt their planning strategies accordingly, planning in a manner that is broadly consistent with the optimal model but not with any single heuristic model. We also find systematic deviations from the optimal model that might result from additional cognitive constraints that are yet to be uncovered.","link":"/opendata/callaway-et-al-2022/"},{"title":"Canale et al. (2022)","text":"Previous research indicates that the invigorating effect of stress sensitivity on gambling behavior might be moderated by individual differences. The current preregistered study tested whether gender and negative urgency (i.e. an emotion-related impulsivity trait) moderate the relationship between perceived stress and laboratory gambling following experimentally induced sadness. One hundred twenty college students were randomly assigned to a sadness versus a control condition before completing a laboratory gambling task. Although the distribution of the main study variables forced us to slightly deviate from the preregistered data analysis plan, we were able to show that heightened stress sensitivity affects gambling behavior and that this effect differs by gender (but not in terms of negative urgency) under conditions of sadness versus neutral mood. Men with high stress sensitivity gambled more money and more frequently selected the riskier betting option in the sadness condition, whereas women with heightened stress sensitivity displayed the same pattern in the neutral condition. Our study is relevant from a methodological standpoint and answers recent calls for endorsing open-science practices in gambling research. 
Findings also suggest that more research into female gambling is warranted and that emotion-regulation skills should be a central component of problem gambling prevention.","link":"/opendata/canale-et-al-2022/"},{"title":"Cao & Tsetsos (2022)","text":"Decisions between two economic goods can be swayed by a third unavailable ‘decoy’ alternative, which does not compete for choice, notoriously violating the principles of rational choice theory. Although decoy effects typically depend on the decoy’s position in a multiattribute choice space, recent studies using risky prospects (i.e., varying in reward and probability) reported a novel ‘positive’ decoy effect operating on a single value dimension: the higher the ‘expected value’ (EV) of an unavailable (distractor) prospect was, the easier the discrimination between two available target prospects became, especially when their expected-value difference was small. Here, we show that this unidimensional distractor effect affords alternative interpretations: it occurred because the distractor’s EV covaried positively with the subjective utility difference between the two targets. Looking beyond this covariation, we report a modest ‘negative’ distractor effect operating on subjective utility, as well as classic multiattribute decoy effects. A normatively meaningful model (selective integration), in which subjective utilities are shaped by intra-attribute information distortion, reproduces the multiattribute decoy effects, and as an epiphenomenon, the negative unidimensional distractor effect. These findings clarify the modulatory role of an unavailable distracting option, shedding fresh light on the mechanisms that govern multiattribute decisions.","link":"/opendata/cao-tsetsos-2022/"},{"title":"Caron et al. 
(2020)","text":"Rosenbaum, Mama, and Algom (2017) reported that participants who completed the Stroop task (i.e., name the hue of a color word when the hue and word meaning are congruent or incongruent) showed a smaller Stroop effect (i.e., the difference in response times between congruent and incongruent trials) when they performed the task standing than when sitting. We report five attempted replications (analyzed sample sizes: N = 108, N = 108, N = 98, N = 78, and N = 51, respectively) of Rosenbaum et al.’s findings, which were conducted in two institutions. All experiments yielded the standard Stroop effect, but we failed to detect any consistent effect of posture (sitting vs. standing) on the magnitude of the Stroop effect. Taken together, the results suggest that posture does not influence the magnitude of the Stroop effect to the extent that was previously suggested.","link":"/opendata/caron-et-al-2020/"},{"title":"Carsten et al. (2023)","text":"Previous studies suggest that humans are capable of coregulating the speed of decisions and movements if promoted by task incentives. It is unclear however whether such behavior is inherent to the process of translating decisional information into movements, beyond posing a valid strategy in some task contexts. Therefore, in a behavioral online study we imposed time constraints to either decision- or movement phases of a sensorimotor task, ensuring that coregulating decisions and movements was not promoted by task incentives. We found that participants indeed moved faster when fast decisions were promoted and decided faster when subsequent finger tapping movements had to be executed swiftly. These results were further supported by drift diffusion modelling and inspection of psychophysical kernels: Sensorimotor delays related to initiating the finger tapping sequence were shorter in fast-decision as compared to slow-decision blocks. 
Likewise, the decisional speed-accuracy tradeoff shifted in favor of faster decisions in fast-tapping as compared to slow-tapping blocks. These findings suggest that decisions not only impact movement characteristics, but that properties of movement impact the time taken to decide. We interpret these behavioral results in the context of embodied decision-making, whereby shared neural mechanisms may modulate decisions and movements in a joint fashion.","link":"/opendata/carsten-et-al-2023/"},{"title":"Castellotti et al. (2022)","text":"The passing of time can be precisely measured by using clocks, whereas humans’ estimation of temporal durations is influenced by many physical, cognitive and contextual factors, which distort our internal clock. Although it has been shown that temporal estimation accuracy is impaired by non-temporal tasks performed at the same time, no studies have investigated how concurrent cognitive and motor tasks interfere with time estimation. Moreover, most experiments only tested time intervals of a few seconds. In the present study, participants were asked to perform cognitive tasks of different difficulties (look, read, solve simple and hard mathematical operations) and estimate durations of up to two minutes, while walking or sitting. The results show that if observers pay attention only to time without performing any other mental task, they tend to overestimate the durations. Meanwhile, the more difficult the concurrent task, the more they tend to underestimate the time. These distortions are even more pronounced when observers are walking. Estimation biases and uncertainties change differently with durations depending on the task, consistent with a fixed relative uncertainty. Our findings show that cognitive and motor systems interact non-linearly and interfere with time perception processes, suggesting that they all compete for the same resources.","link":"/opendata/castellotti-et-al-2022/"},{"title":"Castro-Rodrigues et al. 
(2022)","text":"Explicit information obtained through instruction profoundly shapes human choice behaviour. However, this has been studied in computationally simple tasks, and it is unknown how model-based and model-free systems, respectively generating goal-directed and habitual actions, are affected by the absence or presence of instructions. We assessed behaviour in a variant of a computationally more complex decision-making task, before and after providing information about task structure, both in healthy volunteers and in individuals suffering from obsessive-compulsive or other disorders. Initial behaviour was model-free, with rewards directly reinforcing preceding actions. Model-based control, employing predictions of states resulting from each action, emerged with experience in a minority of participants, and less in those with obsessive-compulsive disorder. Providing task structure information strongly increased model-based control, similarly across all groups. Thus, in humans, explicit task structural knowledge is a primary determinant of model-based reinforcement learning and is most readily acquired from instruction rather than experience.","link":"/opendata/castro-rodrigues-et-al-2022/"},{"title":"Cataldo et al. (2022)","text":"Healthy adults show better memory for low-arousal positive versus negative stimuli, but depression compromises this positive memory advantage. Existing studies are limited by small samples or analyses that provide limited insight into underlying mechanisms. Our study addresses these concerns by using a multistaged analysis, including diffusion modeling, to identify precise psychological processes underlying the positive memory advantage and its disruption by depression in a large sample. A total of 1,358 participants completed the BDI-II (Beck et al., 1996) and an emotional memory task. At encoding, participants judged whether positive and negative words were positive or self-descriptive. 
After a free recall test, participants viewed an equal mix of studied and unstudied words and judged whether each was old or new; if judged old, they indicated whether the study source was a valence or self-reference judgment. We replicate the positive memory advantage and its decrease in depression in recall, recognition, and source accuracy. The hierarchical drift diffusion model (HDDM; Wiecki et al., 2013) revealed that higher BDI-II scores are associated with more efficient evidence accumulation for negative words in the recognition and source memory tasks. By contrast, evidence accumulation for positive words is unaffected by BDI-II during the recognition task but becomes less efficient with increased BDI-II during the source memory task. In conclusion, in a well-controlled design with a large sample, we find that depression reduces the positive memory advantage. HDDM analyses suggest that this reflects differential effects of depression on the speed of evidence accumulation during the retrieval of positive versus negative memories.","link":"/opendata/cataldo-et-al-2022/"},{"title":"Caulfield et al. (2023)","text":"Worry is a repetitive, negative thought process that is widely experienced as difficult to control. Despite the adverse effects of uncontrollable worry on academic and other role functioning, the mechanisms by which worry becomes uncontrollable remain poorly understood. Previous experimental work has historically emphasized valence (negative versus positive or neutral). However, contemporary cognitive neuroscience also distinguishes between internally-directed attention (e.g., to thoughts) and externally-directed attention (e.g., to perceptual stimuli). To date, no studies have experimentally examined potential dissociable contributions of valence versus attentional direction to impaired disengagement from worry. 
In a 2 (negative or neutral valence) x 2 (internal or external attention) between-subjects, experimental and prospective design (https://osf.io/vdyfn/), participants (N = 200) completed alternating blocks of a randomly-assigned attention manipulation and validated sustained attention task. Participants also rated trait worry and distress during the experimental session (T1) and a naturalistic stressor (the week before finals; T2). There was a main effect, such that internally-directed attention impaired sustained attention (increased commission errors). Worry (internal x negative) also impaired sustained attention (faster and less accurate responding) in planned group contrasts. Trait worry did not moderate these effects. Sustained attention at T1 did not predict distress or worry during the T2 stressor. These findings augment the literature on the attentional consequences of worry and replicate and extend previous findings of altered speed-accuracy tradeoffs following experimentally-induced worry. We also find evidence for impaired disengagement from internally-directed (versus externally-directed) attention, which may help to explain impaired disengagement from related forms of perseverative thought (e.g., rumination).","link":"/opendata/caulfield-et-al-2023/"},{"title":"Cavallaro & Hawkins (2022)","text":"We investigated whether cognitive process models commonly studied in experimental psychology provide a deeper explanation of preferential choices than the descriptive random utility models commonly studied in the applied choice literature, with a specific focus on choices made under time pressure. In two preferential choice scenarios we used two standard manipulations of time pressure to assess whether changes in decision time affect subjective valuations of the features of preferential options, and whether the answer to this question depends on the theoretical lens used to understand the data. 
Across four experiments, we found that choices between preferential options and the time taken to make them varied as a function of time pressure, reinforcing and generalising findings from lower-level perceptual decisions to higher-level applied choices. Critically, theoretical analyses from the two traditions led to different psychological conclusions about how people adapted to time pressure. The random utility analyses suggested that time-pressure-induced changes to choices were the result of changes in subjective valuations for the features of preferential options. However, the cognitive process analyses attributed time-pressure-induced changes to choices to differential information accumulation; subjective valuations remained stable across contexts, again reinforcing decades of findings from the perceptual decision literature. We argue that cognitive process models provide mechanistic explanations of the processes underlying decisions for preferential options. Furthermore, conventional descriptive models of choice in the applied literature may misattribute variability in choices to the incorrect latent cause, which has implications for out-of-sample prediction in the marketplace.","link":"/opendata/cavallaro-hawkins-2022/"},{"title":"Cavanagh et al. (2019)","text":"In this report, we provide the first evidence that mood and anxiety dimensions are associated with unique aspects of EEG responses to reward and punishment, respectively. We reanalyzed data from our prior publication of a categorical depiction of depression to address more sophisticated dimensional hypotheses. Highly symptomatic depressed individuals (N = 46) completed a probabilistic learning task with concurrent EEG. Measures of anxiety and depression symptomatology were significantly correlated with each other; however, only anxiety predicted better avoidance learning due to a tighter coupling of negative prediction error signaling with punishment-specific EEG features. 
In contrast, depression predicted a smaller reward-related EEG feature, but this did not affect prediction error coupling or the ability to learn from reward. We suggest that this reward-related alteration reflects motivational or hedonic aspects of reward and not a diminishment in the ability to represent the information content of reinforcements. These findings compel further research into the domain-specific neural systems underlying dimensional aspects of psychiatric disease.","link":"/opendata/cavanagh-et-al-2019/"},{"title":"Cavanagh (2021)","text":"Healthy control college students. 23 subjects completed the 3-armed bandit task with oscillating probabilities. For example, the ‘blue’ stim would slowly move from 20% reinforcing to 90% then back to 20% over many trials. The other ‘red’ and ‘green’ stims would move similarly, but in a different phase. See Fig 1 of the paper. This makes the task great for investigating reward processing & reward prediction error in the service of novel task set generation.","link":"/opendata/cavanagh-2021/"},{"title":"Cavanaugh et al. (2017)","text":"Individual differences in dopaminergic tone underlie tendencies to learn from reward versus punishment. These effects are well documented in Parkinson’s patients, who vacillate between low and high tonic dopaminergic states as a function of medication. Yet very few studies have investigated the influence of higher-level cognitive states known to affect downstream dopaminergic learning in Parkinson’s patients. A dopamine-dependent cognitive influence over learning would provide a candidate mechanism for declining cognitive integrity and motivation in Parkinson’s patients. In this report we tested the influence of two high-level cognitive states (cost of conflict and value of volition) that have recently been shown to cause predictable learning biases in healthy young adults as a function of dopamine receptor subtype and dopaminergic challenge. 
It was hypothesized that Parkinson's patients OFF medication would have an enhanced cost of conflict and a decreased value of volition, and that these effects would be remediated or reversed ON medication. Participants included N = 28 Parkinson's disease patients who were each tested ON and OFF dopaminergic medication and 28 age- and sex-matched controls. The expected cost of conflict effect was observed in Parkinson's patients OFF versus ON medication, but only in those that were more recently diagnosed (<5 years). We found an unexpected effect in the value of volition task: medication compromised the ability to learn from difficult a-volitional (instructed) choices. This novel finding was also enhanced in recently diagnosed patients. The difference in learning biases ON versus OFF medication between these two tasks was strongly correlated, bolstering the idea that they tapped into a common underlying imbalance in dopaminergic tone that is particularly variable in earlier stage Parkinsonism. The finding that these decision biases are specific to earlier but not later stage disease may offer a chance for future studies to quantify phenotypic expressions of idiosyncratic disease progression.","link":"/opendata/cavanaugh-et-al-2017/"},{"title":"Cavanagh & Frank (2023)","text":"Data from N=112 participants who completed the probabilistic selection task.","link":"/opendata/cavanagh-frank-2023/"},{"title":"Cesanek et al. (2023)","text":"Nearly all tasks of daily life involve skilled object manipulation, and successful manipulation requires knowledge of object dynamics. We recently developed a motor learning paradigm that reveals the categorical organization of motor memories of object dynamics. 
When participants repeatedly lift a constant-density “family” of cylindrical objects that vary in size, and then an outlier object with a greater density is interleaved into the sequence of lifts, they often fail to learn the weight of the outlier, persistently treating it as a family member despite repeated errors. Here we examine eight factors (Similarity, Cardinality, Frequency, History, Structure, Stochasticity, Persistence, and Time Pressure) that could influence the formation and retrieval of category representations in the outlier paradigm. In our web-based task, participants (N = 240) anticipated object weights by stretching a virtual spring attached to the top of each object. Using Bayesian t-tests, we analyze the relative impact of each manipulated factor on categorical encoding (strengthen, weaken, or no effect). Our results suggest that category representations of object weight are automatic, rigid, and linear and, as a consequence, the key determinant of whether an outlier is encoded as a member of the family is its discriminability from the family members.","link":"/opendata/cesanek-et-al-2023/"},{"title":"Chakroun et al. (2023)","text":"Dopamine fundamentally contributes to reinforcement learning, but recent accounts also suggest a contribution to specific action selection mechanisms and the regulation of response vigour. Here, we examine dopaminergic mechanisms underlying human reinforcement learning and action selection via a combined pharmacological neuroimaging approach in male human volunteers (n = 31, within-subjects; Placebo, 150 mg of the dopamine precursor L-dopa, 2 mg of the D2 receptor antagonist Haloperidol). We found little credible evidence for previously reported beneficial effects of L-dopa vs. Haloperidol on learning from gains and altered neural prediction error signals, which may be partly due to differences in experimental design and/or drug dosages. 
Reinforcement learning drift diffusion models account for learning-related changes in accuracy and response times, and reveal consistent decision threshold reductions under both drugs, in line with the idea that lower dosages of D2 receptor antagonists increase striatal DA release via an autoreceptor-mediated feedback mechanism. These results are in line with the idea that dopamine regulates decision thresholds during reinforcement learning, and may help to bridge action selection and response vigor accounts of dopamine.","link":"/opendata/chakroun-et-al-2023/"},{"title":"Chambon et al. (2020)","text":"The valence of new information influences learning rates in humans: good news tends to receive more weight than bad news. We investigated this learning bias in four experiments, by systematically manipulating the source of required action (free versus forced choices), outcome contingencies (low versus high reward) and motor requirements (go versus no-go choices). Analysis of model-estimated learning rates showed that the confirmation bias in learning rates was specific to free choices, but was independent of outcome contingencies. The bias was also unaffected by the motor requirements, thus suggesting that it operates in the representational space of decisions, rather than motoric actions. Finally, model simulations revealed that learning rates estimated from the choice-confirmation model had the effect of maximizing performance across low- and high-reward environments. We therefore suggest that choice-confirmation bias may be adaptive for efficient learning of action-outcome contingencies, above and beyond fostering person-level dispositions such as self-esteem.","link":"/opendata/chambon-et-al-2020/"},{"title":"Charlton et al. (2023)","text":"To interpret the sensory environment, the brain combines ambiguous sensory measurements with knowledge that reflects context-specific prior experience. 
But environmental contexts can change abruptly and unpredictably, resulting in uncertainty about the current context. Here we address two questions: how should context-specific prior knowledge optimally guide the interpretation of sensory stimuli in changing environments, and do human decision-making strategies resemble this optimum? We probe these questions with a task in which subjects report the orientation of ambiguous visual stimuli that were drawn from three dynamically switching distributions, representing different environmental contexts. We derive predictions for an ideal Bayesian observer that leverages knowledge about the statistical structure of the task to maximize decision accuracy, including knowledge about the dynamics of the environment. We show that its decisions are biased by the dynamically changing task context. The magnitude of this decision bias depends on the observer’s continually evolving belief about the current context. The model therefore not only predicts that decision bias will grow as the context is indicated more reliably, but also as the stability of the environment increases, and as the number of trials since the last context switch grows. Analysis of human choice data validates all three predictions, suggesting that the brain leverages knowledge of the statistical structure of environmental change when interpreting ambiguous sensory signals.","link":"/opendata/charlton-et-al-2023/"},{"title":"Charpentier et al. (2022)","text":"Seeking information when anxious may help reduce the aversive feeling of uncertainty and guide decision-making. If information is negative or confusing, however, this may increase anxiety further. Information gathered under anxiety can thus be beneficial and/or damaging. Here, we examine whether anxiety leads to a general increase in information-seeking, or rather to changes in the type of information and/or situations in which it is sought. 
In two controlled laboratory studies, we show that both trait anxiety and induced anxiety lead to a selective alteration in information-seeking. In particular, anxiety did not enhance the general tendency to seek information, nor did it alter the valence of the information gathered. Rather, anxiety amplified the tendency to seek information more in response to large changes in the environment. This was true even when the cause of the anxiety was not directly related to the information sought. As anxious individuals have been shown to have problems learning in changing environments, greater information-seeking in such environments may be an adaptive compensatory mechanism.","link":"/opendata/charpentier-et-al-2022/"},{"title":"Chaumon et al. (2022)","text":"The COVID-19 pandemic and associated lockdowns triggered worldwide changes in the daily routines of human experience. The Blursday database provides repeated measures of subjective time and related processes from participants in nine countries tested on 14 questionnaires and 15 behavioural tasks during the COVID-19 pandemic. A total of 2,840 participants completed at least one task, and 439 participants completed all tasks in the first session. The database and all data collection tools are accessible to researchers for studying the effects of social isolation on temporal information processing, time perspective, decision-making, sleep, metacognition, attention, memory, self-perception and mindfulness. Blursday includes quantitative statistics such as sleep patterns, personality traits, psychological well-being and lockdown indices. The database provides quantitative insights on the effects of lockdown (stringency and mobility) and subjective confinement on time perception (duration, passage of time and temporal distances). 
Perceived isolation affects time perception, and we report an inter-individual central tendency effect in retrospective duration estimation.","link":"/opendata/chaumon-et-al-2022/"},{"title":"Chen et al. (2023)","text":"Failures to obtain rewards influence both the direction (choosing what to do) and intensity (response vigor) of subsequent motivated behavior. For instance, in risky decision-making, losses can induce faster responses (‘intensity’) and sometimes increase risk-taking (‘direction’), which may lead to detrimental consequences. Pauses might reduce these motivational influences of prior outcomes. To examine this question, participants (from the general population on Prolific.co in 2022 and 2023) alternated between a guess game, in which they won or lost money, and a choice game, in which they chose between two risky options. The pause between a guess and a choice game was either short (0 or 300 milliseconds) or long (3000 milliseconds). In four experiments, prior outcomes consistently influenced decision speed, but not people’s choices. Contrarily, pauses increased decision quality (choosing the option with a higher expected value) without substantially reducing decision speed. Pauses may improve decision quality by increasing attention to relevant information, as its effect was absent when the overall task attention was increased (Experiment 3). These findings have implications for both responsible gambling and risky decision-making research. Future work can examine the underlying computational and cognitive processes, and the generalizability of these findings to other contexts and populations.","link":"/opendata/chen-et-al-2023/"},{"title":"Ciranka et al. (2022a)","text":"Humans and other animals are capable of inferring never-experienced relations (for example, A > C) from other relational observations (for example, A > B and B > C). The processes behind such transitive inference are subject to intense research. 
Here we demonstrate a new aspect of relational learning, building on previous evidence that transitive inference can be accomplished through simple reinforcement learning mechanisms. We show in simulations that inference of novel relations benefits from an asymmetric learning policy, where observers update only their belief about the winner (or loser) in a pair. Across four experiments (n = 145), we find substantial empirical support for such asymmetries in inferential learning. The learning policy favoured by our simulations and experiments gives rise to a compression of values that is routinely observed in psychophysics and behavioural economics. In other words, a seemingly biased learning strategy that yields well-known cognitive distortions can be beneficial for transitive inferential judgements.","link":"/opendata/ciranka-et-al-2022a/"},{"title":"Ciranka & van den Bos (2022)","text":"Adolescents are known for their propensity to take risks, which may be especially strong in social contexts. People are known to use social information more when uncertain of how to decide. How feelings of uncertainty relate to the development of social susceptibility and risky choice across adolescence is unknown. To understand the effects of uncertainty on social influence, we introduce a novel task that measures risky choice under different levels of uncertainty, either with or without observing someone else's advice. Testing 161 adolescents and young adults (10-26 years of age), we show that risky choice, social information use and subjective uncertainty decrease across development. We also fit a model wherein social information is used to reduce uncertainty to participants’ choices according to Bayesian principles. This model revealed that age-related changes in uncertainty fully accounted for age-related changes in social information use. 
Our results imply that uncertainty is a key mechanism in adolescents’ social susceptibility in risky behaviour.","link":"/opendata/ciranka-van-den-bos-2022/"},{"title":"Clarke et al. (2018)","text":"A striking range of individual differences has recently been reported in three different visual search tasks. These differences in performance can be attributed to strategy, that is, the efficiency with which participants control their search to complete the task quickly and accurately. Here we ask if an individual’s strategy and performance in one search task is correlated with how they perform in the other two. We tested 64 observers in the three tasks mentioned above over two sessions. Even though the test-retest reliability of the tasks is high, an observer’s performance and strategy in one task did not reliably predict their behaviour in the other two. These results suggest search strategies are stable over time, but context-specific. To understand visual search we therefore need to account not only for differences between individuals, but also how individuals interact with the search task and context. These context-specific but stable individual differences in strategy can account for a substantial proportion of variability in search performance.","link":"/opendata/clarke-et-al-2018/"},{"title":"Clarke et al. (2022)","text":"Foraging entails finding multiple targets sequentially. In humans and other animals, a key observation has been a tendency to forage in ‘runs’ of the same target type. This tendency is context-sensitive, and in humans, it is strongest when the targets are difficult to distinguish from the distractors. Many important questions have yet to be addressed about this and other tendencies in human foraging, and a key limitation is a lack of precise measures of foraging behaviour. The standard measures tend to be run statistics, such as the maximum run length and the number of runs. 
But these measures are not only interdependent, they are also constrained by the number and distribution of targets, making it difficult to make inferences about the effects of these aspects of the environment on foraging. Moreover, run statistics are underspecified about the underlying cognitive processes determining foraging behaviour. We present an alternative approach: modelling foraging as a procedure of generative sampling without replacement, implemented in a Bayesian multilevel model. This allows us to break behaviour down into a number of biases that influence target selection, such as the proximity of targets and a bias for selecting targets in runs, in a way that is not dependent on the number of targets present. Our method thereby facilitates direct comparison of specific foraging tendencies between search environments that differ in theoretically important dimensions. We demonstrate the use of our model with simulation examples and re-analysis of existing data. We believe our model will provide deeper insights into visual foraging and provide a foundation for further modelling work in this area.","link":"/opendata/clarke-et-al-2022/"},{"title":"Clay et al. (2022)","text":"Current models of mental effort in psychology, behavioral economics, and cognitive neuroscience typically suggest that exerting cognitive effort is aversive, and people avoid it whenever possible. The aim of this research was to challenge this view and show that people can learn to value and seek effort intrinsically. Our experiments tested the hypothesis that effort-contingent reward in a working-memory task will induce a preference for more demanding math tasks in a transfer phase, even though participants were aware that they would no longer receive any reward for task performance. 
In laboratory Experiment 1 (n = 121), we made reward directly contingent on mobilized cognitive effort as assessed via cardiovascular measures (β-adrenergic sympathetic activity) during the training task. Experiments 2a to 2e (n = 1,457) were conducted online to examine whether the effects of effort-contingent reward on subsequent demand seeking replicate and generalize to community samples. Taken together, the studies yielded reliable evidence that effort-contingent reward increased participants' demand seeking and preference for the exertion of cognitive effort on the transfer task. Our findings provide evidence that people can learn to assign positive value to mental effort. The results challenge currently dominant theories of mental effort and provide evidence and an explanation for the positive effects of environments appreciating effort and individual growth on people's evaluation of effort and their willingness to mobilize effort and approach challenging tasks.","link":"/opendata/clay-et-al-2022/"},{"title":"Cochrane et al. (2023)","text":"Evidence accumulation models have enabled strong advances in our understanding of decision-making, yet their application to examining learning has not been common. Using data from participants completing a dynamic random dot-motion direction discrimination task across four days, we characterized alterations in two components of perceptual decision-making (Drift Diffusion Model drift rate and response boundary). Continuous-time learning models were applied to characterize trajectories of performance change, with different models allowing for varying dynamics. The best-fitting model included drift rate changing as a continuous, exponential function of cumulative trial number. In contrast, response boundary changed within each daily session, but in an independent manner across daily sessions. 
Our results highlight two different processes underlying the pattern of behavior observed across the entire learning trajectory, one involving a continuous tuning of perceptual sensitivity, and another more variable process describing participants’ threshold of when enough evidence is present to act.","link":"/opendata/cochrane-et-al-2023/"},{"title":"Cohen et al. (2020)","text":"Beliefs about the controllability of positive or negative events in the environment can shape learning throughout the lifespan. Previous research has shown that adults' learning is modulated by beliefs about the causal structure of the environment such that they update their value estimates to a lesser extent when the outcomes can be attributed to hidden causes. This study examined whether external causes similarly influenced outcome attributions and learning across development. Ninety participants, ages 7 to 25 years, completed a reinforcement learning task in which they chose between two options with fixed reward probabilities. Choices were made in three distinct environments in which different hidden agents occasionally intervened to generate positive, negative, or random outcomes. Participants' beliefs about hidden-agent intervention aligned with the true probabilities of the positive, negative, or random outcome manipulation in each of the three environments. Computational modeling of the learning data revealed that while the choices made by both adults (ages 18-25) and adolescents (ages 13-17) were best fit by Bayesian reinforcement learning models that incorporate beliefs about hidden-agent intervention, those of children (ages 7-12) were best fit by a one learning rate model that updates value estimates based on choice outcomes alone. 
Together, these results suggest that while children demonstrate explicit awareness of the causal structure of the task environment, they do not implicitly use beliefs about the causal structure of the environment to guide reinforcement learning in the same manner as adolescents and adults.","link":"/opendata/cohen-et-al-2020/"},{"title":"Cohen et al. (2023)","text":"Charitable giving involves a complex economic and social decision because the giver expends resources for goods or services they will never receive. Although psychologists have identified numerous factors that influence charitable giving, there currently exists no unifying computational model of charitable choice. Here, we submit one such model, based within the strictures of Psychological Value Theory (PVT). In four experiments, we assess whether charitable giving is driven by the perceived Psychological Value of the recipient. Across all four experiments, we simultaneously predict response choice and response time with high accuracy. In a fifth experiment, we show that PVT predicts charitable giving more accurately than an account based on competence and warmth. PVT accurately predicts which charity a respondent will choose to donate to and separately, whether a respondent will choose to donate at all. PVT models the cognitive processes underlying charitable donations and it provides a computational framework for integrating known influences on charitable giving. For example, we show that in-group preference influences charitable giving by changing the Psychological Values of the options, rather than by bringing about a response bias toward the in-group.","link":"/opendata/cohen-et-al-2023/"},{"title":"Collins (2018)","text":"Learning to make rewarding choices in response to stimuli depends on a slow but steady process, reinforcement learning, and a fast and flexible, but capacity-limited process, working memory. 
Using both systems in parallel, with their contributions weighted based on performance, should allow us to leverage the best of each system: rapid early learning, supplemented by long-term robust acquisition. However, this assumes that using one process does not interfere with the other. We use computational modeling to investigate the interactions between the two processes in a behavioral experiment and show that working memory interferes with reinforcement learning. Previous research showed that neural representations of reward prediction errors, a key marker of reinforcement learning, were blunted when working memory was used for learning. We thus predicted that arbitrating in favor of working memory to learn faster in simple problems would weaken the reinforcement learning process. We tested this by measuring performance in a delayed testing phase where the use of working memory was impossible, and thus participant choices depended on reinforcement learning. Counterintuitively, but confirming our predictions, we observed that associations learned most easily were retained worse than associations learned slower: Using working memory to learn quickly came at the cost of long-term retention. Computational modeling confirmed that this could only be accounted for by working memory interference in reinforcement learning computations. These results further our understanding of how multiple systems contribute in parallel to human learning and may have important applications for education and computational psychiatry.","link":"/opendata/collins-2018/"},{"title":"Constant et al. (2023)","text":"According to Bayesian models, both decisions and confidence are based on the same precision-weighted integration of prior expectations (“priors”) and incoming information (“likelihoods”). This assumes that priors are integrated optimally and equally in decisions and confidence, which has not been tested. In three experiments, we quantify how priors inform decisions and confidence. 
With a dual-decision task we create pairs of conditions that are matched in posterior information, but differ on whether the prior or likelihood is more informative. We find that priors are underweighted in discrimination decisions, but are less underweighted in confidence about those decisions, and this is not due to differences in processing time. The same patterns remain with exogenous probabilistic cues as priors. With a Bayesian model we quantify the weighting parameters for the prior at both levels, and find converging evidence that priors are more optimally used in explicit confidence, even when underused in decisions.","link":"/opendata/constant-et-al-2023/"},{"title":"Conway et al. (2020)","text":"A large and consistent research literature demonstrates the superiority of dimensional models of mental disorder. Factor analytic research has mapped the latent dimensions underlying separate sets of mental disorders (e.g., emotional disorders), but a common framework-unencumbered by arbitrary historical boundaries between disorder groups-requires additional research. Using empirically derived measures of three key domains of psychopathological variation, the overarching goal of the current study was to explicate dimensions connecting internalizing, externalizing, and personality disorders. Participants included 1,144 racially diverse undergraduates. Exploratory structural equation modeling analyses revealed seven latent dimensions: core internalizing, core externalizing, antagonism, impulsivity, dutifulness, detachment, and suspiciousness. This meta-structure reflects a more comprehensive model of the architecture of mental disorders than accounts derived from less inclusive assessment batteries. Future empirical work is needed to evaluate the utility of this structural model in etiological research, assessment, and treatment arenas.","link":"/opendata/conway-et-al-2020/"},{"title":"Copeland et al. 
(2023a)","text":"A considerable number of people successfully give up tobacco smoking. In nicotine-dependent individuals, tobacco choice is determined by greater expected drug value; however, less is known about the underlying mechanisms through which people quit smoking. This study aimed to explore whether computational parameters of value-based decision-making characterise recovery from nicotine addiction. Using a pre-registered, between-subject design, current daily smokers (n = 51) and ex-smokers who used to smoke daily (n = 51) were recruited from the local community. Participants completed a two-alternative forced choice task in which they chose between either two tobacco-related images (in one block) or tobacco-unrelated images (in a different block). During each trial, participants pressed a computer key to select the image they rated most positively during a previous task block. To estimate evidence accumulation (EA) processes and response thresholds during the different blocks, a drift-diffusion model was fitted to the reaction time and error data. Ex-smokers had significantly higher response thresholds when making tobacco-related decisions (p = .01, d = .45) compared to current smokers, although there were no significant group differences during tobacco-unrelated decisions. Furthermore, there were no significant group differences in EA rates when making tobacco or tobacco-unrelated decisions. Greater cautiousness when making value-based decisions about tobacco-related cues characterised recovery from nicotine addiction.","link":"/opendata/copeland-et-al-2023a/"},{"title":"Copeland et al. (2023b)","text":"Devaluation of alcohol leads to reductions in alcohol choice and consumption; however, the cognitive mechanisms that underpin this relationship are not well-understood. 
In this study we applied a computational model of value-based decision-making (VBDM) to decisions made about alcohol and alcohol-unrelated cues following experimental manipulation of alcohol value. Using a pre-registered within-subject design, thirty-six regular alcohol consumers (≥14 UK units per week) completed a two-alternative forced choice task where they chose between two alcohol images (in one block) or two soft drink images (in a different block) after watching videos that emphasised the positive (alcohol value), and separately, the negative (alcohol devalue) consequences of alcohol. On each block, participants pressed a key to select the image depicting the drink they would rather consume. A drift-diffusion model (DDM) was fitted to reaction time and choice data to estimate evidence accumulation (EA) processes and response thresholds during the different blocks in each experimental condition. In the alcohol devalue condition, soft drink EA rates were significantly increased compared to alcohol EA rates (p = .04, d = .31), and compared to soft drink EA rates in the alcohol value condition (p = .01, d = .38). However, the experimental manipulation had no effect on EA rates for alcoholic drinks or on response thresholds in either priming condition. In line with behavioural economic models of addiction that emphasise the important role of alternative reinforcement, experimentally manipulating alcohol value alters the internal cognitive processes that precede soft drink choice.","link":"/opendata/copeland-et-al-2023b/"},{"title":"Corcoran et al. (2018)","text":"The ability of subjects to identify and reproduce brief temporal intervals is influenced by many factors whether they be stimulus-based, task-based or subject-based. The current study examines the role individual differences play in subsecond and suprasecond timing judgments, using the schizotypy personality scale as a test-case approach for quantifying a broad range of individual differences. 
In two experiments, 129 (Experiment 1) and 141 (Experiment 2) subjects completed the O-LIFE personality questionnaire prior to performing a modified temporal-bisection task. In the bisection task, subjects responded to two identical instantiations of a luminance grating presented in a 4deg window, 4deg above fixation for 1.5 s (Experiment 1) or 3 s (Experiment 2). Subjects initiated presentation with a button-press, and released the button when they considered the stimulus to be half-way through (750/1500 ms). Subjects were then asked to indicate their ‘most accurate estimate’ of the two intervals. In this way we measure both performance on the task (a first-order measure) and the subjects’ knowledge of their performance (a second-order measure). In Experiment 1 the effect of grating-drift and feedback on performance was also examined. Experiment 2 focused on the static/no-feedback condition. For the group data, Experiment 1 showed a significant effect of presentation order in the baseline condition (no feedback), which disappeared when feedback was provided. Moving the stimulus had no effect on perceived duration. Experiment 2 showed no effect of stimulus presentation order. This elimination of the subsecond order-effect was at the expense of accuracy, as the mid-point of the suprasecond interval was generally underestimated. Response precision increased as a proportion of total duration, reducing the variance below that predicted by Weber’s law. This result is consistent with a breakdown of the scalar properties of time perception in the early suprasecond range. All subjects showed good insight into their own performance, though that insight did not necessarily correlate with the veridical bisection point. In terms of personality, we found evidence of significant differences in performance along the Unusual Experiences subscale, of most theoretical interest here, in the subsecond condition only. 
There was also significant correlation with Impulsive Nonconformity and Cognitive Disorganisation in the sub- and suprasecond conditions, respectively. Overall, these data support a partial dissociation of timing mechanisms at very short and slightly longer intervals. Further, these results suggest that perception is not the only critical mitigator of confidence in temporal experience, since individuals can effectively compensate for differences in perception at the level of metacognition in early suprasecond time. Though there are individual differences in performance, these are perhaps less than expected from previous reports and indicate an effective timing mechanism dealing with brief durations independent of the influence of significant personality trait differences.","link":"/opendata/corcoran-et-al-2018/"},{"title":"Corlazzoli et al. (2023)","text":"When presented with the choice to invest cognitive control in a task, several signals are monitored to reach a decision. Leading theoretical frameworks argued that the investment of cognitive control is determined by a cost-benefit computation. However, previous accounts remained silent on the potential role of subjective experience in this computation. We experience confidence when giving an answer, feel the excitement of an anticipated reward, and reflect on how much effort is required for successful task performance. Two questions are investigated in the present work: how objective task parameters give rise to subjective experience and whether these drive the decision to allocate cognitive control. To this end, we designed a task in which we manipulated three objective parameters in the same sequence of events (stimulus uncertainty, physical effort, and reward prediction error). We asked participants to report their subjective experiences associated with these manipulations: confidence, subjective physical effort, and reward satisfaction. 
At the end of each trial, participants indicated whether they wanted to repeat that trial on the next day. In response to the first question, we demonstrate that subjective ratings are reliable and selective. Subjective experiences closely mirrored their objective manipulations. In response to the second question, we demonstrate that subjective experiences provide a better fit for the decisions on future control investments. While objective task parameters are considered when deciding, they do not always produce the expected changes in subjective experience, and when dissociations occur, it is the subjective experience that better explains the decision to allocate cognitive control.","link":"/opendata/corlazzoli-et-al-2023/"},{"title":"Correa et al. (2018)","text":"The extent to which subjective awareness influences reward processing, and thereby affects future decisions, is currently largely unknown. In the present report, we investigated this question in a reinforcement learning framework, combining perceptual masking, computational modeling, and electroencephalographic recordings (human male and female participants). Our results indicate that degrading the visibility of the reward decreased, without completely obliterating, the ability of participants to learn from outcomes, but concurrently increased their tendency to repeat previous choices. We dissociated electrophysiological signatures evoked by the reward-based learning processes from those elicited by the reward-independent repetition of previous choices and showed that these neural activities were significantly modulated by reward visibility. Overall, this report sheds new light on the neural computations underlying reward-based learning and decision-making and highlights that awareness is beneficial for the trial-by-trial adjustment of decision-making strategies.","link":"/opendata/correa-et-al-2018/"},{"title":"Correa et al. 
(2023)","text":"Human behavior emerges from planning over elaborate decompositions of tasks into goals, subgoals, and low-level actions. How are these decompositions created and used? Here, we propose and evaluate a normative framework for task decomposition based on the simple idea that people decompose tasks to reduce the overall cost of planning while maintaining task performance. Analyzing 11,117 distinct graph-structured planning tasks, we find that our framework justifies several existing heuristics for task decomposition and makes predictions that can be distinguished from two alternative normative accounts. We report a behavioral study of task decomposition (N = 806) that uses 30 randomly sampled graphs, a larger and more diverse set than that of any previous behavioral study on this topic. We find that human responses are more consistent with our framework for task decomposition than alternative normative accounts and are most consistent with a heuristic, betweenness centrality, that is justified by our approach. Taken together, our results suggest the computational cost of planning is a key principle guiding the intelligent structuring of goal-directed behavior.","link":"/opendata/correa-et-al-2023/"},{"title":"Cortese et al. (2021)","text":"The human brain excels at constructing and using abstractions, such as rules or concepts. Here, in two fMRI experiments, we demonstrate a mechanism of abstraction built upon the valuation of sensory features. Human volunteers learned novel association rules based on simple visual features. Reinforcement-learning algorithms revealed that, with learning, high-value abstract representations increasingly guided participant behaviour, resulting in better choices and higher subjective confidence. We also found that the brain area computing value signals - the ventromedial prefrontal cortex - prioritised and selected latent task elements during abstraction, both locally and through its connection to the visual cortex. 
Such a coding scheme predicts a causal role for valuation. Hence, in a second experiment, we used multivoxel neural reinforcement to test for the causality of feature valuation in the sensory cortex, as a mechanism of abstraction. Tagging the neural representation of a task feature with rewards evoked abstraction-based decisions. Together, these findings provide a novel interpretation of value as a goal-dependent, key factor in forging abstract representations.","link":"/opendata/cortese-et-al-2021/"},{"title":"Cotton et al. (2023)","text":"Recent events have led to a change in environments from in-person to remote work for many people. This change presents several issues for work, education, and research, particularly related to cognitive performance, as the remote environment may have more distractors. An increase in distraction is one factor that may lead to increases in mind-wandering and disengagement with the task at hand, whether it is a virtual meeting, an online lecture or a psychological experiment. The present study investigated effects of mind-wandering and multitasking during working memory tasks in remote and in-person environments. In two experiments, participants completed a working memory task with varied cognitive load during a secondary task. After each working memory trial, we asked participants about their mind-wandering during that trial. Some participants completed the procedures in-person, while others completed the procedures remotely. While overall remote participants reported significantly more mind-wandering and poorer secondary task performance than the in-person participants, this pattern was not reflected in their working memory accuracy and both groups exhibited similar multitasking effects on memory performance. 
Additional analyses found that for remote participants the level of engagement with the task was a better predictor of working memory performance than either cognitive load or mind-wandering rates, but in a manner that did not indicate a tradeoff in resources between tasks. Together, these results demonstrate the importance of considering multiple metrics when assessing performance and illustrate that making assumptions about the equivalence of remote and in-person work is a risky proposition.","link":"/opendata/cotton-et-al-2023/"},{"title":"Crawford et al. (2022)","text":"The study investigated whether cognitive effort decision-making, measured via a neuroeconomic paradigm that manipulated framing (gain vs. loss outcomes), could predict daily life engagement in mentally demanding activities in both younger and older adults. Method: Younger and older adult participants (N=310) completed the Cognitive Effort Discounting paradigm (Cog-ED), under both gain and loss conditions, to provide an experimental index of cognitive effort costs for each participant in each framing condition. A subset of participants (N=230) also completed a seven-day Ecological Momentary Assessment (EMA) protocol measuring engagement in mentally demanding daily life activities. In a large, online sample, we replicated a robust increase in cognitive effort costs among older, relative to younger, adults. Additionally, costs were found to be reduced in the loss relative to gain frame, although these effects were only reliable at high levels of task difficulty and were not moderated by age. Critically, participants who had lower effort costs in the gain frame tended to report engaging in more mentally demanding daily life activities, but the opposite pattern was observed in the loss frame. 
Further exploratory analysis demonstrated that older adults reported overall lower levels of engagement in daily-life mentally demanding activities, and these age differences were statistically mediated by gain and loss effort costs. Together, these results suggest that cognitive effort costs, as measured through behavioral choice patterns in a neuroeconomic decision-making task, can be used to predict and explain engagement in mentally demanding activities during daily life among both older and younger adults.","link":"/opendata/crawford-et-al-2022/"},{"title":"Crivelli-Decker et al. (2023)","text":"Recent work in cognitive and systems neuroscience has suggested that the hippocampus might support planning, imagination, and navigation by forming cognitive maps that capture the abstract structure of physical spaces, tasks, and situations. Navigation involves disambiguating similar contexts, and the planning and execution of a sequence of decisions to reach a goal. Here, we examine hippocampal activity patterns in humans during a goal-directed navigation task to investigate how contextual and goal information are incorporated in the construction and execution of navigational plans. During planning, hippocampal pattern similarity is enhanced across routes that share a context and a goal. During navigation, we observe prospective activation in the hippocampus that reflects the retrieval of pattern information related to a key-decision point. These results suggest that, rather than simply representing overlapping associations or state transitions, hippocampal activity patterns are shaped by context and goals.","link":"/opendata/crivelli-decker-et-al-2023/"},{"title":"Csifcsak et al. (2020)","text":"The ability to control the occurrence of rewarding and punishing events is crucial for our well-being. 
Two ways to optimize performance are to follow heuristics like Pavlovian biases to approach reward and avoid loss or to rely more on slowly accumulated stimulus-action associations. Although reduced control over outcomes has been linked to suboptimal decision-making in clinical conditions associated with learned helplessness, it is unclear how uncontrollability of the environment is related to the arbitration between different response strategies. This study directly tested whether a behavioral manipulation designed to induce learned helplessness in healthy adults (intermittent loss of control over feedback in a reinforcement learning task; yoking) would modulate the magnitude of Pavlovian bias and the neurophysiological signature of cognitive control (frontal midline theta power) in healthy adults. Using statistical analysis and computational modeling of behavioral data and electroencephalographic signals, we found stronger Pavlovian influences and alterations in frontal theta activity in the yoked group. However, these effects were not accompanied by reduced performance in experimental blocks with regained control, indicating that our behavioral manipulation was not potent enough for inducing helplessness and impaired coping ability with task demands. We conclude that the level of contingency between instrumental choices and rewards/punishments modulates Pavlovian bias during value-based decision-making, probably via interfering with the implementation of cognitive control. These findings might have implications for understanding the mechanisms underlying helplessness in various psychiatric conditions.","link":"/opendata/csifcsak-et-al-2020/"},{"title":"Csifcsak et al. (2021)","text":"Recent studies suggest that choice behavior in reinforcement learning tasks is shaped by the level of outcome controllability. 
In particular, Pavlovian bias (PB) seems to be enhanced under low levels of control, manifesting in approach tendencies toward rewards and response inhibition when facing potential losses. The medial prefrontal cortex (mPFC) has been implicated both in evaluating outcome controllability and in the recruitment of cognitive control (CC) to suppress maladaptive PB during reinforcement learning. The current study tested whether high-definition transcranial direct current stimulation (HD-tDCS) above the mPFC of healthy humans can influence PB, and counteract the previously documented, deleterious behavioral effects of low outcome controllability on decision-making. In a preregistered, between-group, double-blind study (N = 103 adults, both sexes), we tested the interaction between controllability and HD-tDCS on parameters of choice behavior in a Go/NoGo task. Relative to sham stimulation, HD-tDCS resulted in more robust performance improvement following reduced control, an effect that was more pronounced in appetitive trials. In addition, we found evidence for weaker PB when HD-tDCS was administered during low controllability over outcomes. Computational modeling revealed that parameter estimates of learning rate and choice randomness were modulated by controllability, HD-tDCS and their interaction. Overall, these results highlight the potential of our HD-tDCS protocol for interfering with choice arbitration under low levels of control, resulting in more adaptive behavior.","link":"/opendata/csifcsak-et-al-2021/"},{"title":"Cutler et al. (2021)","text":"Reinforcement learning is a fundamental mechanism displayed by many species. However, adaptive behaviour depends not only on learning about actions and outcomes that affect ourselves, but also those that affect others. 
Using computational reinforcement learning models, we tested whether young (age 18-36) and older (age 60-80, total n = 152) adults learn to gain rewards for themselves, another person (prosocial), or neither individual (control). Detailed model comparison showed that a model with separate learning rates for each recipient best explained behaviour. Young adults learned faster when their actions benefitted themselves, compared to others. Compared to young adults, older adults showed reduced self-relevant learning rates but preserved prosocial learning. Moreover, levels of subclinical self-reported psychopathic traits (including lack of concern for others) were lower in older adults and the core affective-interpersonal component of this measure negatively correlated with prosocial learning. These findings suggest learning to benefit others is preserved across the lifespan with implications for reinforcement learning and theories of healthy ageing.","link":"/opendata/cutler-et-al-2021/"},{"title":"Dai et al. (2019)","text":"Uncertainty about the waiting time before obtaining an outcome is integral to intertemporal choice. Here, we showed that people express different time preferences depending on how they learn about this temporal uncertainty. In two studies, people chose between pairs of options: one with a single, sure delay and the other involving multiple, probabilistic delays (a lottery). The probability of each delay occurring either was explicitly described (timing risk) or could be learned through experiential sampling (timing uncertainty; the delay itself was not experienced). When the shorter delay was rare, people preferred the lottery more often when it was described than when it was experienced. When the longer delay was rare, this pattern was reversed. Modeling analyses suggested that underexperiencing rare delays and different patterns of probability weighting contribute to this description-experience gap. 
Our results challenge traditional models of intertemporal choice with temporal uncertainty as well as the generality of inverse-S-shaped probability weighting in such choice.","link":"/opendata/dai-et-al-2019/"},{"title":"Das & Steyvers (2023)","text":"Cognitive decline often accompanies natural aging, which results in younger adults outperforming older adults, on average, on tasks requiring attention, memory, or mental flexibility skills. This performance gap between age groups persists even after people train on these tasks, but it remains unclear whether the gap persists when individuals, rather than groups, are compared at different training levels. In this paper, we analyzed 9,923 users between 18-90 years old (63% over 60) who performed a variety of cognitive tasks on an online cognitive training platform. We quantified an older adult’s potential to catch up to, or perform as well as, a younger adult. We found that the probability of catching up to someone decades younger increases with differential amounts of training on a variety of cognitive tasks. These findings suggest that age-related performance deficits can be overcome with additional training.","link":"/opendata/das-steyvers-2023/"},{"title":"da Silva & Hare (2020)","text":"Distinct model-free and model-based learning processes are thought to drive both typical and dysfunctional behaviours. Data from two-stage decision tasks have seemingly shown that human behaviour is driven by both processes operating in parallel. However, in this study, we show that more detailed task instructions lead participants to make primarily model-based choices that have little, if any, simple model-free influence. We also demonstrate that behaviour in the two-stage task may falsely appear to be driven by a combination of simple model-free and model-based learning if purely model-based agents form inaccurate models of the task because of misconceptions. 
Furthermore, we report evidence that many participants do misconceive the task in important ways. Overall, we argue that humans formulate a wide variety of learning models. Consequently, the simple dichotomy of model-free versus model-based learning is inadequate to explain behaviour in the two-stage task and connections between reward learning, habit formation and compulsivity.","link":"/opendata/da-silva-hare-2020/"},{"title":"de-Eccher et al. (2023)","text":"Much recent work has shown that children actively shape their learning progress by choosing what, when and from whom they want to learn. However less is known about whether children are sensitive to gaps in their knowledge, and whether they selectively solicit information about items they previously indicated having a knowledge gap about. In a cross-situational word-learning task, we asked 5-year-olds, 6-9-year-olds and adults to estimate their knowledge of newly learned word-object associations. We then examined whether participants preferentially chose to hear the labels of objects they reported not knowing the label of and whether such uncertainty-driven sampling behavior led to improved learning. We found that participants in all age groups were sensitive to gaps in their knowledge of the word-object associations, i.e., were more likely to say they had correctly indicated the label of an object when they were correct, relative to when they were incorrect. Furthermore, 6-9-year-olds and adults were more likely to sample objects whose labels they reported not knowing, while 5-year-olds were equally likely to sample objects whose labels they reported not knowing and knowing. In other words, older children and adults displayed sampling behavior directed at reducing knowledge gaps and uncertainty, while younger children did not. 
However, while all age groups showed evidence of having learned the word-object associations, participants who displayed more uncertainty-driven sampling behavior were not more accurate at test. Our findings underscore the role of uncertainty in driving 6-9-year-olds’ and adults’ sampling behavior and speak to the mechanisms underlying previously reported performance boosts in active learning.","link":"/opendata/de-eccher-et-al-2023/"},{"title":"de-Voogd et al. (2022)","text":"Threatening situations ask for rapid and accurate perceptual decisions to optimize coping. Theoretical models have stated that psychophysiological states, such as bradycardia during threat-anticipatory freezing, may facilitate perception. However, it is unclear whether this occurs via enhanced bottom-up sensory processing or by relying more on prior expectations. To test this, 52 (26 female) participants completed a visual target-detection paradigm under threat-of-shock (15% reinforcement rate) with a manipulation of prior expectations. Participants judged the presence of a backward-masked grating (target presence rate 50%) after systematically manipulating their decision criterion with a rare (20%) or frequent (80%) target presence rate procedure. Threat-of-shock induced stronger heart rate deceleration compared to safe, indicative of threat-anticipatory freezing. Importantly, threat-of-shock enhanced perceptual sensitivity, but we did not find evidence of an altered influence of prior expectations on current decisions. Correct target detection (hits) was furthermore accompanied by an increase in the magnitude of this heart rate deceleration compared to a missed target. While this was independent of the threat-of-shock manipulation, only under threat-of-shock was this increase accompanied by more hits and increased sensitivity. 
Together, these findings suggest that under acute threat participants may rely more on bottom-up sensory processing versus prior expectations in perceptual decision-making. Critically, bradycardia may underlie such enhanced perceptual sensitivity.","link":"/opendata/de-voogd-et-al-2022/"},{"title":"de-Waard et al. (2023)","text":"Through statistical learning, humans can learn to suppress visual areas that often contain distractors. Recent findings suggest that this form of learned suppression is insensitive to context, putting into question its real-life relevance. The current study presents a different picture: we show context-dependent learning of distractor-based regularities. Unlike previous studies which typically used background cues to differentiate contexts, the current study manipulated task context. Specifically, the task alternated from block to block between a compound search and a detection task. In both tasks, participants searched for a unique shape, while ignoring a uniquely colored distractor item. Crucially, a different high-probability distractor location was assigned to each task context in the training blocks, and all distractor locations were made equiprobable in the testing blocks. In a control experiment, participants only performed a compound search task such that the contexts were made indistinguishable, but the high-probability locations changed in exactly the same way as in the main experiment. We analyzed response times for different distractor locations and show that participants can learn to suppress a location in a context-dependent way, but suppression from previous task contexts lingers unless a new high-probability location is introduced.","link":"/opendata/de-waard-et-al-2023/"},{"title":"Decker et al. (2016)","text":"Theoretical models distinguish two decision-making strategies that have been formalized in reinforcement-learning theory. 
A model-based strategy leverages a cognitive model of potential actions and their consequences to make goal-directed choices, whereas a model-free strategy evaluates actions based solely on their reward history. Research in adults has begun to elucidate the psychological mechanisms and neural substrates underlying these learning processes and factors that influence their relative recruitment. However, the developmental trajectory of these evaluative strategies has not been well characterized. In this study, children, adolescents, and adults performed a sequential reinforcement-learning task that enabled estimation of model-based and model-free contributions to choice. Whereas a model-free strategy was apparent in choice behavior across all age groups, a model-based strategy was absent in children, became evident in adolescents, and strengthened in adults. These results suggest that recruitment of model-based valuation systems represents a critical cognitive component underlying the gradual maturation of goal-directed behavior.","link":"/opendata/decker-et-al-2016/"},{"title":"Decker, Dubois et al. (2022)","text":"Attentional lapses have been found to impair everything from basic perception to learning and memory. Yet, despite the well-documented costs of lapses on cognition, recent work suggests that lapses might unexpectedly confer some benefits. One potential benefit is that lapses broaden our learning to integrate seemingly irrelevant content that could later prove useful, a benefit that prior research focusing only on goal-relevant memory would miss. Here, we measure how fluctuations in sustained attention influence the learning of seemingly goal-irrelevant content that competes for attention with target content. Participants completed a correlated flanker task in which they categorized central targets (letters or numbers) while ignoring peripheral flanking symbols that shared hidden probabilistic relationships with the targets. 
We found that across participants, higher rates of attentional lapses correlated with greater learning of the target-flanker relationships. Moreover, within participants, learning was more evident during attentional lapses. These findings address long-standing theoretical debates and reveal a benefit of attentional lapses: they expand the scope of learning and decisions beyond the strictly relevant.","link":"/opendata/decker-dubois-et-al-2022/"},{"title":"Del-Popolo-Cristaldi et al. (2023)","text":"Adaptive cognitive control (CC), the ability to adjust goal-directed behavior according to changing environmental demand, can be instantiated bottom-up by implicit knowledge, including temporal predictability of task-relevant events. In S1-S2 tasks, either local (trial-by-trial hazard expectation) or global (block-by-block expectation) temporal information can induce prediction, allowing for proactive action control. Recent developmental evidence showed that adaptive CC based on global temporal prediction emerges earlier than when it is based on the local one only. However, very little is known about how children learn to dynamically adjust behavior on the fly according to changing global predictive information. Addressing this issue is nevertheless crucial to unravel the mechanisms underlying adaptive CC flexibility. Here we used a modified version of the Dynamic Temporal Prediction task to investigate how typically developing younger (6-8 years) and older children (9-11 years), adolescents (12-15 years) and adults (21-31 years) use global prediction to shape adaptive CC over time. Specifically, the short-long percentage of S2 preparatory intervals was manipulated list-wide to create a slow-fast-slow-fast fixed block sequence and test how efficiently the response speed adapted accordingly. Overall, results revealed that in all groups behavioral performance is successfully adjusted as a function of global prediction in the late phase of the task (block 3 to 4). 
Remarkably, only adolescents and adults exhibit an early adaptation of adaptive CC (block 1 to 2), while children younger than 11 show sluggish ability in inferring implicit changes in global predictive rules. This age-related dissociation suggests that, although being present from an early age, adaptive CC based on global predictive information needs more developmental space to become flexible in an efficient way. In the light of a neuroconstructivist approach, we suggest that bottom-up driven implicit flexibility may represent a key prerequisite for the development of efficient explicit cognitive control.","link":"/opendata/del-popolo-cristaldi-et-al-2023/"},{"title":"del-Rio et al. (2023)","text":"Our perception does not depend exclusively on the immediate sensory input. It is also influenced by our internal predictions derived from prior observations and the temporal regularities of the environment, which can result in choice history biases. However, the mechanisms facilitating this flexible use of prior information to predict the future are unclear. Prior information may offset evidence accumulation independently of the current sensory input, or it may modulate the weight of current sensory input based on its consistency with the expectation. To address this question, we used a visual decision-making task and manipulated the transitional probabilities between successive noisy grating stimuli. Using a reverse correlation analysis, we evaluated the contribution of stimulus-independent bias and stimulus-dependent sensitivity modulations to choice history biases. We found that both mechanisms coexist, whereby there was increased bias to respond in line with the predicted orientation alongside modulations in perceptual sensitivity to favour perceptual information consistent with the prediction, akin to selective attention. 
Furthermore, at the individual differences level, we investigated the relationship between autistic-like traits and the adaptation of choice history biases to the sequential statistics of the environment. Over two studies, we found no convincing evidence of reduced adaptation to sequential regularities in individuals with high autistic-like traits. In sum, we present robust evidence for both perceptual confirmation bias and decision bias supporting adaptation to sequential regularities in the environment.","link":"/opendata/del-rio-et-al-2023/"},{"title":"Dercon et al. (2022)","text":"Cognitive distancing is a therapeutic technique commonly used in psychological treatment of various mental health disorders, but its computational mechanisms remain unknown. To determine the effects of cognitive distancing on computational learning mechanisms, we use an online reward decision-making task, combined with reinforcement learning modelling in 935 participants, 49.1% of whom were trained to regulate their emotional response to task performance feedback. Those participants practicing cognitive distancing showed heightened learning from negative events as well as an increased integration of previous choice values. These differences seemed to represent an evolving shift in strategy by the distancing participants during the task, from exploiting optimal choices earlier in the task (as indicated by greater inverse temperature parameters), to a late-stage increase in learning from negative outcomes (represented as higher loss learning rates). Our findings suggest adaptive changes in computational learning mechanisms underpin the clinical utility of cognitive distancing in psychological therapy.","link":"/opendata/dercon-et-al-2022/"},{"title":"Dev et al. (2022)","text":"The theory of depressive realism holds that depressed individuals are less prone to optimistic bias, and are thus more realistic, in assessing their control or performance. 
Since the theory was proposed 40 years ago, many innovations have been validated for testing cognitive accuracy, including improved measures of bias in perceived control and performance. We incorporate several of those innovations in a well-powered, pre-registered study designed to identify depressive realism. Amazon MTurk workers (N = 246) and undergraduate students (N = 134) completed a classic contingency task, an overconfidence task, and measures of mental health constructs, including depression and anxiety. We measured perceived control throughout the contingency task, allowing us to compare control estimates at the trial-level to estimates assessed at task conclusion. We found no evidence that depressive symptoms relate to illusory control or to overconfidence. Our results suggest that despite its popular acceptance, depressive realism is not replicable.","link":"/opendata/dev-et-al-2022/"},{"title":"Devine & Otto (2022)","text":"People tend to avoid engaging in cognitively demanding tasks unless it is worth our while-that is, if the benefits outweigh the costs of effortful action. Yet, we seemingly partake in a variety of effortful mental activities (e.g. playing chess, completing Sudoku puzzles) because they impart a sense of progress. Here, we examine the possibility that information about progress-specifically, the number of trials completed of a demanding cognitive control task, relative to the total number of trials to be completed-reduces individuals’ aversion to cognitively effortful activity, across four experiments. In Experiment 1, we provide an initial demonstration that presenting progress information reduces individuals’ avoidance of cognitively demanding activity using a variant of the well-characterized Demand Selection Task (DST). 
The subsequent experiments buttress this finding using more sophisticated within-subjects versions of the DST, independently manipulating progress information and demand level to further demonstrate that, 1) people prefer receiving information about temporal progress in a task, and 2) all else being equal, individuals will choose to engage in tasks that require greater levels of cognitive effort when the more demanding option confers information about their progress in a task. Together, these results suggest that progress information can motivate cognitive effort expenditure and, in some cases, override individuals’ default bias towards demand avoidance.","link":"/opendata/devine-otto-2022/"},{"title":"Devine et al. (2023)","text":"Considerable evidence suggests that people value the freedom of choice. However, it is unclear whether this preference for choice stems purely from choice’s intrinsic value, or whether people prefer to choose because it tends to provide instrumental information about desirable outcomes. To address this question, we asked participants (n=200) to complete a two-stage choice task in which they could freely choose to exert choice or not. Borrowing a concept from information theory—mutual information—we manipulated the instrumental contingency between participants’ choices and eventual outcomes. Across two experiments, and leveraging reinforcement learning modeling, we demonstrate a marked preference for choice, but importantly found that participants’ preference for free choice is weakened when actions are decoupled from outcomes. Taken together, our results demonstrate that a significant factor in people’s preference for choice is an assumption about the instrumental value of choice, arguing against a purely intrinsic value of choice.","link":"/opendata/devine-et-al-2023/"},{"title":"Dezfouli et al. 
(2019)","text":"Popular computational models of decision-making make specific assumptions about learning processes that may cause them to underfit observed behaviours. Here we suggest an alternative method using recurrent neural networks (RNNs) to generate a flexible family of models that have sufficient capacity to represent the complex learning and decision-making strategies used by humans. In this approach, an RNN is trained to predict the next action that a subject will take in a decision-making task and, in this way, learns to imitate the processes underlying subjects’ choices and their learning abilities. We demonstrate the benefits of this approach using a new dataset drawn from patients with either unipolar (n = 34) or bipolar (n = 33) depression and matched healthy controls (n = 34) making decisions on a two-armed bandit task. The results indicate that this new approach is better than baseline reinforcement-learning methods in terms of overall performance and its capacity to predict subjects’ choices. We show that the model can be interpreted using off-policy simulations and thereby provides a novel clustering of subjects’ learning processes-something that often eludes traditional approaches to modelling and behavioural analysis.","link":"/opendata/dezfouli-et-al-2019/"},{"title":"Dildine et al. (2020)","text":"Self-report is the gold standard for measuring pain. However, decisions about pain can vary substantially within and between individuals. We measured whether self-reported pain is accompanied by metacognition and variations in confidence, similar to perceptual decision-making in other modalities. Eighty healthy volunteers underwent acute thermal pain and provided pain ratings followed by confidence judgments on continuous visual analogue scales. We investigated whether eye fixations and reaction time during pain rating might serve as implicit markers of confidence. 
Confidence varied across trials and increased confidence was associated with faster pain rating reaction times. The association between confidence and fixations varied across individuals as a function of the reliability of individuals’ association between temperature and pain. Taken together, this work indicates that individuals can provide metacognitive judgments of pain and extends research on confidence in perceptual decision-making to pain.","link":"/opendata/dildine-et-al-2020/"},{"title":"Dillon et al. (2021)","text":"Choices and response times in two-alternative decision-making tasks can be modeled by assuming that individuals steadily accrue evidence in favor of each alternative until a response boundary for one of them is crossed, at which point that alternative is chosen. Prior studies have reported that evidence accumulation during decision-making tasks takes longer in adults with psychopathology than in healthy controls, indicating that slow evidence accumulation may be transdiagnostic. However, few studies have examined perceptual decision making in anxiety disorders, where hypervigilance might enhance performance. Therefore, this study used the Hierarchical Drift Diffusion model to investigate evidence accumulation in adults with social anxiety disorder (SAD) and healthy controls as they performed a probabilistic reward task (PRT), in which social rewards were delivered for correct perceptual judgments. Adults with SAD completed the PRT before and after gaze-contingent music reward therapy (GCMRT), which trains attention allocation and has shown efficacy for SAD. Healthy controls also completed the PRT twice. Results revealed excellent performance in adults with SAD, especially after GCMRT: relative to controls, they showed faster evidence accumulation, better discriminability, and earned more rewards. 
These data highlight a positive effect of attention training on performance in anxious adults and show how a behavioral trait that is typically problematic-hypervigilance in SAD-can nevertheless confer advantages in certain contexts. The data also indicate that, in contrast to other forms of psychopathology, SAD is not characterized by slow evidence accumulation, at least in the context of the social PRT.","link":"/opendata/dillon-et-al-2021/"},{"title":"Dombrovski et al. (2019)","text":"Suicidal behavior is associated with impaired decision making in contexts of uncertainty. Existing studies, however, do not definitively address whether suicide attempters have 1) impairment in learning from experience or 2) impairment in choice based on comparison of estimated option values. Our reinforcement learning model-based behavioral study tested these hypotheses directly in middle-aged and older suicide attempters representative of those who die by suicide. Two samples (sample 1, n = 135; sample 2, n = 125) of suicide attempters with depression (n = 54 and 39 attempters, respectively), suicide ideators, nonsuicidal patients with depression, and healthy control participants completed a probabilistic three-choice decision-making task. A second experiment in sample 2 experimentally dissociated long-term learned value from reward magnitude. Analyses combined computational reinforcement learning and mixed-effects models of decision times and choices. With regard to learning, suicide attempters (vs. all comparison groups) were less sensitive to one-back reinforcement, as indicated by a reduced effect on both choices and decision times. Learning deficits scaled with attempt lethality and were partially explained by poor cognitive control. With regard to value-based choice, suicide attempters (vs. all comparison groups) displayed abnormally long decision times when choosing between similarly valued options and were less able to distinguish between the best and second-best options. 
Group differences in value-based choice were robust to controlling for cognitive performance, comorbidities, impulsivity, psychotropic exposure, and possible brain damage from attempts. Serious suicidal behavior is associated with impaired reward learning, likely undermining the search for alternative solutions. Attempted suicide is associated with impaired value comparison during the choice process, potentially interfering with the consideration of deterrents and alternatives in a crisis.","link":"/opendata/dombrovski-et-al-2019/"},{"title":"Dombrovski et al. (2020)","text":"When making decisions, should one exploit known good options or explore potentially better alternatives? Exploration of spatially unstructured options depends on the neocortex, striatum, and amygdala. In natural environments, however, better options often cluster together, forming structured value distributions. The hippocampus binds reward information into allocentric cognitive maps to support navigation and foraging in such spaces. Here we report that human posterior hippocampus (PH) invigorates exploration while anterior hippocampus (AH) supports the transition to exploitation on a reinforcement learning task with a spatially structured reward function. These dynamics depend on differential reinforcement representations in the PH and AH. Whereas local reward prediction error signals are early and phasic in the PH tail, global value maximum signals are delayed and sustained in the AH body. AH compresses reinforcement information across episodes, updating the location and prominence of the value maximum and displaying goal cell-like ramping activity when navigating toward it.","link":"/opendata/dombrovski-et-al-2020/"},{"title":"Donegan et al. (2023)","text":"Model-based planning is thought to protect against over-reliance on habits. It is reduced in individuals high in compulsivity, but effect sizes are small and may depend on subtle features of the tasks used to assess it. 
We developed a diamond-shooting smartphone game that measures model-based planning in an at-home setting, and varied the game’s structure within and across participants to assess how it affects measurement reliability and validity with respect to previously established correlates of model-based planning, with a focus on compulsivity. Increasing the number of trials used to estimate model-based planning did remarkably little to affect the association with compulsivity. However, associations with compulsivity were higher when transition ratios were less deterministic and depended on the reward drift utilised. These findings suggest that model-based planning can be measured at home via an app, can be estimated in relatively few trials, and can be optimised for sensitivity to compulsive symptoms in the general population.","link":"/opendata/donegan-et-al-2023/"},{"title":"Doody et al. (2022)","text":"Reinforcement learning (RL) is widely regarded as divisible into two distinct computational strategies. Model-free learning is a simple RL process in which a value is associated with actions, whereas model-based learning relies on the formation of internal models of the environment to maximise reward. Recently, theoretical and animal work has suggested that such models might be used to train model-free behaviour, reducing the burden of costly forward planning. Here we devised a way to probe this possibility in human behaviour. We adapted a two-stage decision task and found evidence that model-based processes at the time of learning can alter model-free valuation in healthy individuals. We asked people to rate subjective value of an irrelevant feature that was seen at the time a model-based decision would have been made. These irrelevant feature value ratings were updated by rewards, but in a way that accounted for whether the selected action retrospectively ought to have been taken. 
This model-based influence on model-free value ratings was best accounted for by a reward prediction error that was calculated relative to the decision path that would most likely have led to the reward. This effect occurred independently of attention and was not present when participants were not explicitly told about the structure of the environment. These findings suggest that current conceptions of model-based and model-free learning require updating in favour of a more integrated approach. Our task provides an empirical handle for further study of the dialogue between these two learning systems in the future.","link":"/opendata/doody-et-al-2022/"},{"title":"Dorfman et al. (2019)","text":"People learn differently from good and bad outcomes. We argue that valence-dependent learning asymmetries are partly driven by beliefs about the causal structure of the environment. If hidden causes can intervene to generate bad (or good) outcomes, then a rational observer will assign blame (or credit) to these hidden causes, rather than to the stable outcome distribution. Thus, a rational observer should learn less from bad outcomes when they are likely to have been generated by a hidden cause, and this pattern should reverse when hidden causes are likely to generate good outcomes. To test this hypothesis, we conducted two experiments ( N = 80, N = 255) in which we explicitly manipulated the behavior of hidden agents. This gave rise to both kinds of learning asymmetries in the same paradigm, as predicted by a novel Bayesian model. These results provide a mechanistic framework for understanding how causal attributions contribute to biased learning.","link":"/opendata/dorfman-et-al-2019/"},{"title":"Dorfman & Gershman (2019)","text":"A Pavlovian bias to approach reward-predictive cues and avoid punishment-predictive cues can conflict with instrumentally-optimal actions. 
Here, we propose that the brain arbitrates between Pavlovian and instrumental control by inferring which is a better predictor of reward. The instrumental predictor is more flexible; it can learn values that depend on both stimuli and actions, whereas the Pavlovian predictor learns values that depend only on stimuli. The arbitration theory predicts that the Pavlovian predictor will be favored when rewards are relatively uncontrollable, because the additional flexibility of the instrumental predictor is not useful. Consistent with this hypothesis, we find that the Pavlovian approach bias is stronger under low control compared to high control contexts.","link":"/opendata/dorfman-gershman-2019/"},{"title":"Dubois et al. (2022)","text":"Deciding between exploring new avenues and exploiting known choices is central to learning, and this exploration-exploitation trade-off changes during development. Exploration is not a unitary concept, and humans deploy multiple distinct mechanisms, but little is known about their specific emergence during development. Using a previously validated task in adults, changes in exploration mechanisms were investigated between childhood (8-9 y/o, N = 26; 16 females), early (12-13 y/o, N = 38; 21 females), and late adolescence (16-17 y/o, N = 33; 19 females) in ethnically and socially diverse schools from disadvantaged areas. We find an increased usage of a computationally light exploration heuristic in younger groups, effectively accommodating their limited neurocognitive resources. Moreover, this heuristic was associated with self-reported, attention-deficit/hyperactivity disorder symptoms in this population-based sample. 
This study enriches our mechanistic understanding about how exploration strategies mature during development.","link":"/opendata/dubois-et-al-2022/"},{"title":"Dubois & Hauser (2022)","text":"Deciding whether to forgo a good choice in favour of exploring a potentially more rewarding alternative is one of the most challenging arbitrations both in human reasoning and in artificial intelligence. Humans show substantial variability in their exploration, and theoretical (but only limited empirical) work has suggested that excessive exploration is a critical mechanism underlying the psychiatric dimension of impulsivity. In this registered report, we put these theories to test using large online samples, dimensional analyses, and computational modelling. Capitalising on recent advances in disentangling distinct human exploration strategies, we not only demonstrate that impulsivity is associated with a specific form of exploration-value-free random exploration-but also explore links between exploration and other psychiatric dimensions.","link":"/opendata/dubois-hauser-2022/"},{"title":"Dumbalska et al. (2022)","text":"When a target stimulus occurs in the presence of distracters, decisions are less accurate. But how exactly do distracters affect choices? Here, we explored this question using measurement of human behaviour, psychophysical reverse correlation and computational modelling. We contrasted two models: one in which targets and distracters had independent influence on choices (independent model) and one in which distracters modulated choices in a way that depended on their similarity to the target (interaction model). Across three experiments, participants were asked to make fine orientation judgments about the tilt of a target grating presented adjacent to an irrelevant distracter. We found strong evidence for the interaction model, in that decisions were more sensitive when target and distracter were consistent relative to when they were inconsistent. 
This consistency bias occurred in the frame of reference of the decision, that is, it operated on decision values rather than on sensory signals, and surprisingly, it was independent of spatial attention. A normalization framework, where target features are normalized by the expectation and variability of the local context, successfully captures the observed pattern of results.","link":"/opendata/dumbalska-et-al-2022/"},{"title":"Duyck et al. (2021)","text":"Eye blinks strongly attenuate visual input, yet we perceive the world as continuous. How this visual continuity is achieved remains a fundamental and unsolved problem. A decrease in luminance sensitivity has been proposed as a mechanism but is insufficient to mask the even larger decrease in luminance because of blinks. Here we put forward a different hypothesis: visual continuity can be achieved through shortening of perceived durations of the sensory consequences of blinks. Here we probed the perceived durations of the blackouts caused by blinks and visual stimuli interrupted by blinks. We found that the perceived durations of blackouts because of blinks are about half as long as artificial blackouts immediately preceding or following the blink. Stimuli interrupted by blinks were perceived as briefer than uninterrupted stimuli, by about the same duration as the interruption-but so were stimuli interrupted by optically simulated blinks. There was a difference between real and simulated blinks, however: The decrease in perceived duration depended on the duration of the interruption for simulated, but not for real, blinks. These profound modifications in time perception during blinks show a way in which temporal processing contributes to the solution of an essential perceptual problem.","link":"/opendata/duyck-et-al-2021/"},{"title":"Eckert et al. (2022)","text":"Predictive processing posits that perception emerges from inferential processes within a hierarchical cortical system. 
Alterations of these processes may result in psychotic experiences, such as hallucinations and delusions. Central to the predictive processing account of psychosis is the notion of aberrant weights attributed to prior information and sensory input. Based on the notion that previous perceptual choices represent a relevant source of prior information, we here asked whether the propensity towards psychotic experiences may be related to altered choice history biases in perceptual decision-making. We investigated the relationship between choice history biases in perceptual decision-making and psychosis proneness in the general population. Choice history biases and their adaptation to experimentally induced changes in stimulus serial dependencies were investigated in decision-making tasks with auditory (experiment 1) and visual (experiment 2) stimuli. We further explored a potential compensatory mechanism for reduced choice history biases by reliance on predictive cross-modal cues. In line with our preregistered hypothesis, psychosis proneness was associated with decreased choice history biases in both experiments. This association is generalized across conditions with and without stimulus serial dependencies. We did not find consistent evidence for a compensatory reliance on cue information in psychosis-prone individuals across experiments. Our results show reduced choice history biases in psychosis proneness. A compensatory mechanism between implicit choice history effects and explicit cue information is not supported unequivocally by our data.","link":"/opendata/eckert-et-al-2022/"},{"title":"Eckstein et al. (2022a)","text":"During adolescence, youth venture out, explore the wider world, and are challenged to learn how to navigate novel and uncertain environments. We investigated how performance changes across adolescent development in a stochastic, volatile reversal-learning task that uniquely taxes the balance of persistence and flexibility. 
In a sample of 291 participants aged 8-30, we found that in the mid-teen years, adolescents outperformed both younger and older participants. We developed two independent cognitive models, based on Reinforcement learning (RL) and Bayesian inference (BI). The RL parameter for learning from negative outcomes and the BI parameters specifying participants’ mental models were closest to optimal in mid-teen adolescents, suggesting a central role in adolescent cognitive processing. By contrast, persistence and noise parameters improved monotonically with age. We distilled the insights of RL and BI using principal component analysis and found that three shared components interacted to form the adolescent performance peak: adult-like behavioral quality, child-like time scales, and developmentally-unique processing of positive feedback. This research highlights adolescence as a neurodevelopmental window that can create performance advantages in volatile and uncertain environments. It also shows how detailed insights can be gleaned by using cognitive models in new ways.","link":"/opendata/eckstein-et-al-2022a/"},{"title":"Eckstein et al. (2022b)","text":"Reinforcement Learning (RL) models have revolutionized the cognitive and brain sciences, promising to explain behavior from simple conditioning to complex problem solving, to shed light on developmental and individual differences, and to anchor cognitive processes in specific brain mechanisms. However, the RL literature increasingly reveals contradictory results, which might cast doubt on these claims. We hypothesized that many contradictions arise from two commonly-held assumptions about computational model parameters that are actually often invalid: That parameters generalize between contexts (e.g. tasks, models) and that they capture interpretable (i.e. unique, distinctive) neurocognitive processes. 
To test this, we asked 291 participants aged 8-30 years to complete three learning tasks in one experimental session, and fitted RL models to each. We found that some parameters (exploration / decision noise) showed significant generalization: they followed similar developmental trajectories, and were reciprocally predictive between tasks. Still, generalization was significantly below the methodological ceiling. Furthermore, other parameters (learning rates, forgetting) did not show evidence of generalization, and sometimes even opposite developmental trajectories. Interpretability was low for all parameters. We conclude that the systematic study of context factors (e.g. reward stochasticity; task volatility) will be necessary to enhance the generalizability and interpretability of computational cognitive models.","link":"/opendata/eckstein-et-al-2022b/"},{"title":"Egger & Florack (2022)","text":"In two experiments, we investigated two untested assumptions regarding the mechanism by which selective attention during search affects subsequent preferences for objects. First, we tested whether an increase in visual competition during search increases preferences for relevant objects and decreases preferences for irrelevant objects subsequent to search. Second, we tested whether searching for objects increases the perceived fluency to process relevant objects and decreases the perceived fluency to process irrelevant objects. Our results show that search can affect relevant and irrelevant objects differently. Selective attention increased preferences for target objects subsequent to search, whereas selective attention did not affect preferences for distractors. Furthermore, our results indicate that searching for a target object increased the perceived fluency for this target object during choice, whereas ignoring a distractor product blocked mere exposure effects. 
Contrary to assumptions made in previous research, we found no indication that the competition for visual resources during search is linked to preferences for targets or distractors.","link":"/opendata/egger-florack-2022/"},{"title":"Ehlers & Lonsdorf (2022)","text":"Data sharing holds promise for advancing and accelerating science by facilitating and fostering collaboration, reproducibility and optimal use of sparse resources. We argue that despite the existence of general data sharing guidelines (e.g., FAIR principles), their translation and implementation requires field-specific considerations. Here, we addressed this timely question for the field of experimental research on fear and anxiety and showcase the enormous prospects by illustrating the wealth and richness of a curated data collection of publicly available datasets using the fear conditioning paradigm based on 103 studies and 8839 participants. We highlight challenges encountered when aiming to reuse the available data corpus and derive 10 simple steps for making data sharing in the field more efficient and sustainable and hence facilitating collaboration, cumulative knowledge generation and large scale mega-, meta- and psychometric analyses. We share our vision and first steps towards transforming such curated data collections into a homogenized and dynamically growing database allowing for easy contributions and for living analysis tools for the collective benefit of the research community.","link":"/opendata/ehlers-lonsdorf-2022/"},{"title":"Eisenberg et al. (2021)","text":"Psychological sciences have identified a wealth of cognitive processes and behavioral phenomena, yet struggle to produce cumulative knowledge. 
Progress is hamstrung by siloed scientific traditions and a focus on explanation over prediction, two issues that are particularly damaging for the study of multifaceted constructs like self-regulation. Here, we derive a psychological ontology from a study of individual differences across a broad range of behavioral tasks, self-report surveys, and self-reported real-world outcomes associated with self-regulation. Though both tasks and surveys putatively measure self-regulation, they show little empirical relationship. Within tasks and surveys, however, the ontology identifies reliable individual traits and reveals opportunities for theoretic synthesis. We then evaluate predictive power of the psychological measurements and find that while surveys modestly and heterogeneously predict real-world outcomes, tasks largely do not. We conclude that self-regulation lacks coherence as a construct, and that data-driven ontologies lay the groundwork for a cumulative psychological science.","link":"/opendata/eisenberg-et-al-2021/"},{"title":"Eissa et al. (2022)","text":"Solutions to challenging inference problems are often subject to a fundamental trade-off between: 1) bias (being systematically wrong) that is minimized with complex inference strategies, and 2) variance (being oversensitive to uncertain observations) that is minimized with simple inference strategies. However, this trade-off is based on the assumption that the strategies being considered are optimal for their given complexity and thus has unclear relevance to forms of inference based on suboptimal strategies. We examined inference problems applied to rare, asymmetrically available evidence, which a large population of human subjects solved using a diverse set of strategies that varied in form and complexity. 
In general, subjects using more complex strategies tended to have lower bias and variance, but with a dependence on the form of strategy that reflected an inversion of the classic bias-variance trade-off: subjects who used more complex, but imperfect, Bayesian-like strategies tended to have lower variance but higher bias because of incorrect tuning to latent task features, whereas subjects who used simpler heuristic strategies tended to have higher variance because they operated more directly on the observed samples but lower, near-normative bias. Our results help define new principles that govern individual differences in behavior that depends on rare-event inference and, more generally, about the information-processing trade-offs that can be sensitive to not just the complexity, but also the optimality, of the inference process.","link":"/opendata/eissa-et-al-2022/"},{"title":"Elder et al. (2022)","text":"People learn about themselves from social feedback, but desires for coherence and positivity constrain how feedback is incorporated into the self-concept. We developed a network-based model of the self-concept and embedded it in a reinforcement-learning framework to provide a computational account of how motivations shape self-learning from feedback. Participants (N = 46 adult university students) received feedback while evaluating themselves on traits drawn from a causal network of trait semantics. Network-defined communities were assigned different likelihoods of positive feedback. Participants learned from positive feedback but dismissed negative feedback, as reflected by asymmetries in computational parameters that represent the incorporation of positive versus negative outcomes. Furthermore, participants were constrained in how they incorporated feedback: Self-evaluations changed less for traits that have more implications and are thus more important to the coherence of the network. 
We provide a computational explanation of how motives for coherence and positivity jointly constrain learning about the self from feedback, an explanation that makes testable predictions for future clinical research.","link":"/opendata/elder-et-al-2022/"},{"title":"Éltető et al. (2022)","text":"Humans can implicitly learn complex perceptuo-motor skills over the course of large numbers of trials. This likely depends on our becoming better able to take advantage of ever richer and temporally deeper predictive relationships in the environment. Here, we offer a novel characterization of this process, fitting a non-parametric, hierarchical Bayesian sequence model to the reaction times of human participants' responses over ten sessions, each comprising thousands of trials, in a serial reaction time task involving higher-order dependencies. The model, adapted from the domain of language, forgetfully updates trial-by-trial, and seamlessly combines predictive information from shorter and longer windows onto past events, weighing the windows proportionally to their predictive power. As the model implies a posterior over window depths, we were able to determine how, and how many, previous sequence elements influenced individual participants' internal predictions, and how this changed with practice. Already in the first session, the model showed that participants had begun to rely on two previous elements (i.e., trigrams), thereby successfully adapting to the most prominent higher-order structure in the task. The extent to which local statistical fluctuations in trigram frequency influenced participants' responses waned over subsequent sessions, as participants forgot the trigrams less and evidenced skilled performance. By the eighth session, a subset of participants shifted their prior further to consider a context deeper than two previous elements. 
Finally, participants showed resistance to interference and slow forgetting of the old sequence when it was changed in the final sessions. Model parameters for individual participants covaried appropriately with independent measures of working memory and error characteristics. In sum, the model offers the first principled account of the adaptive complexity and nuanced dynamics of humans' internal sequence representations during long-term implicit skill learning.","link":"/opendata/elteto-et-al-2022/"},{"title":"Embrey et al. (2023)","text":"Humans are often termed “cognitive misers” for their aversion to mental effort. Both in and outside the laboratory, people often show a preference for low-effort tasks and are willing to forgo financial reward to avoid more demanding alternatives. Mental effort, however, does not seem to be ubiquitously avoided: people play crosswords, board games, and read novels, all as forms of leisure. While such activities undoubtedly require effort, the type of cognitive demands they impose appears markedly different from the tasks typically used in mental-effort research (e.g., N-Back, demand selection tasks, vigilance tasks). We investigate the effect that disparate demands, such as rule discovery compared to rule implementation, have on people’s aversion to or preference for increased mental effort. Across four experiments using three different tasks (i.e., N-Back, number sequence problems, and anagrams), and a mixture of online and lab-based settings, we find that aversion to effort remains stable regardless of the types of cognitive demands a task imposes. The results are discussed in terms of other factors that might induce the pursuit of mental effort over and above the type of cognitive activity involved in a task.","link":"/opendata/embrey-et-al-2023/"},{"title":"Elder et al. 
(2023)","text":"Implicit measures were initially assumed to assess stable individual differences, but other perspectives posit that they reflect context-dependent processes. This pre-registered research investigates whether the processes contributing to responses on the race Implicit Association Test are temporally stable and reliably measured using multinomial processing tree modeling. We applied two models, the Quad model and the Process Dissociation Procedure, to six datasets (N = 2,036), each collected over two occasions, examined the within-measurement reliability and between-measurement stability of model parameters, and meta-analyzed the results. Parameters reflecting accuracy-oriented processes demonstrate adequate stability and reliability, which suggests these processes are relatively stable within individuals. Parameters reflecting evaluative associations demonstrate poor stability but modest reliability, which suggests that associations are either context-dependent or stable but noisily measured. These findings suggest that processes contributing to racial bias on implicit measures differ in temporal stability, which has practical implications for predicting behavior using the Implicit Association Test.","link":"/opendata/elder-et-al-2023/"},{"title":"Engelmann et al. (2017)","text":"Commonly observed distortions in decision-making among patients with major depressive disorder (MDD) may emerge from impaired reward processing and cognitive biases toward negative events. There is substantial theoretical support for the hypothesis that MDD patients overweight potential losses compared with gains, though the neurobiological underpinnings of this bias are uncertain. Twenty-one unmedicated patients with MDD were compared with 25 healthy controls (HC) using functional magnetic resonance imaging (fMRI) together with an economic decision-making task over mixed lotteries involving probabilistic gains and losses. 
Region-of-interest analyses evaluated neural signatures of gain and loss coding within a core network of brain areas known to be involved in valuation (anterior insula, caudate nucleus, ventromedial prefrontal cortex). Usable fMRI data were available for 19 MDD and 23 HC subjects. Anterior insula signal showed negative coding of losses (gain > loss) in HC subjects consistent with previous findings, whereas MDD subjects demonstrated significant reversals in these associations (loss > gain). Moreover, depression severity further enhanced the positive coding of losses in anterior insula, ventromedial prefrontal cortex, and caudate nucleus. The hyper-responsivity to losses displayed by the anterior insula of MDD patients was paralleled by a reduced influence of gain, but not loss, stake size on choice latencies. Patients with MDD demonstrate a significant shift from negative to positive coding of losses in the anterior insula, revealing the importance of this structure in value-based decision-making in the context of emotional disturbances.","link":"/opendata/engelmann-et-al-2017/"},{"title":"Erev et al. (2017)","text":"Experimental studies of choice behavior document distinct, and sometimes contradictory, deviations from maximization. For example, people tend to overweight rare events in 1-shot decisions under risk, and to exhibit the opposite bias when they rely on past experience. The common explanations of these results assume that the contradicting anomalies reflect situation-specific processes that involve the weighting of subjective values and the use of simple heuristics. The current article analyzes 14 choice anomalies that have been described by different models, including the Allais, St. Petersburg, and Ellsberg paradoxes, and the reflection effect. Next, it uses a choice prediction competition methodology to clarify the interaction between the different anomalies. 
It focuses on decisions under risk (known payoff distributions) and under ambiguity (unknown probabilities), with and without feedback concerning the outcomes of past choices. The results demonstrate that it is not necessary to assume situation-specific processes. The distinct anomalies can be captured by assuming high sensitivity to the expected return and 4 additional tendencies: pessimism, bias toward equal weighting, sensitivity to payoff sign, and an effort to minimize the probability of immediate regret. Importantly, feedback increases sensitivity to probability of regret. Simple abstractions of these assumptions, variants of the model Best Estimate and Sampling Tools (BEAST), allow surprisingly accurate ex ante predictions of behavior. Unlike the popular models, BEAST does not assume subjective weighting functions or cognitive shortcuts. Rather, it assumes the use of sampling tools and reliance on small samples, in addition to the estimation of the expected values.","link":"/opendata/erev-et-al-2017/"},{"title":"Ez-zizi et al. (2023)","text":"Two prominent types of uncertainty that have been studied extensively are expected and unexpected uncertainty. Studies suggest that humans are capable of learning from reward under both expected and unexpected uncertainty when the source of variability is the reward. How do people learn when the source of uncertainty is the environment's state and rewards themselves are deterministic? How does their learning compare with the case of reward uncertainty? The present study addressed these questions using behavioural experimentation and computational modelling. Experiment 1 showed that human subjects were generally able to use reward feedback to successfully learn the task rules under state uncertainty, and were able to detect a non-signalled reversal of stimulus-response contingencies. 
Experiment 2, which combined all four types of uncertainties—expected versus unexpected uncertainty, and state versus reward uncertainty—highlighted key similarities and differences in learning between state and reward uncertainties. We found that subjects performed significantly better in the state uncertainty condition, primarily because they explored less and improved their state disambiguation. We also show that a simple reinforcement learning mechanism that ignores state uncertainty and updates the state-action value of only the identified state accounted for the behavioural data better than both a Bayesian reinforcement learning model that keeps track of belief states and a model that acts based on sampling from past experiences. Our findings suggest a common mechanism supports reward-based learning under state and reward uncertainty.","link":"/opendata/ez-zizi-et-al-2023/"},{"title":"Fan et al. (2021)","text":"Anxiety has been related to decreased physical exploration, but past findings on the interaction between anxiety and exploration during decision making were inconclusive. Here we examined how latent factors of trait anxiety relate to different exploration strategies when facing volatility-induced uncertainty. Across two studies (total N = 985), we demonstrated that people used a hybrid of directed, random and undirected exploration strategies, which were respectively sensitive to relative uncertainty, total uncertainty and value difference. Trait somatic anxiety, that is, the propensity to experience physical symptoms of anxiety, was inversely correlated with directed exploration and undirected exploration, manifesting as a lesser likelihood for choosing the uncertain option and reducing choice stochasticity regardless of uncertainty. Somatic anxiety is also associated with underestimation of relative uncertainty. 
Together, these results reveal the selective role of trait somatic anxiety in modulating both uncertainty-driven and value-driven exploration strategies.","link":"/opendata/fan-et-al-2021/"},{"title":"Fan et al. (2023)","text":"Exploration is an important part of decision making and is crucial to maximizing long-term reward. Past work has shown that people use different forms of uncertainty to guide exploration. In this study, we investigate the role of the pupil-linked arousal system in uncertainty-guided exploration. We measured participants’ pupil dilation (N = 48) while they performed a two-armed bandit task. Consistent with previous work, we found that people adopted a hybrid of directed, random and undirected exploration, which are sensitive to relative uncertainty, total uncertainty and value difference between options, respectively. We also found a positive correlation between pupil size and total uncertainty. Furthermore, augmenting the choice model with subject-specific total uncertainty estimates decoded from the pupil size improved predictions of held-out choices, suggesting that people used the uncertainty estimate encoded in pupil size to decide which option to explore. Together, the data shed light on the computations underlying uncertainty-driven exploration. Under the assumption that pupil size reflects Locus Coeruleus-Norepinephrine (LC-NE) neuromodulatory activity, these results also extend the theory of LC-NE function in exploration, highlighting its selective role in driving uncertainty-guided random exploration.","link":"/opendata/fan-et-al-2023/"},{"title":"Farashahi & Soltani (2021)","text":"Learning appropriate representations of the reward environment is challenging in the real world where there are many options, each with multiple attributes or features. Despite the existence of alternative solutions for this challenge, the neural mechanisms underlying the emergence and adoption of value representations and learning strategies remain unknown. 
To address this, we measure learning and choice during a multi-dimensional probabilistic learning task in humans and trained recurrent neural networks (RNNs) to capture our experimental observations. We find that human participants estimate stimulus-outcome associations by learning and combining estimates of reward probabilities associated with the informative feature followed by those of informative conjunctions. Through analyzing representations, connectivity, and lesioning of the RNNs, we demonstrate this mixed learning strategy relies on a distributed neural code and opponency between excitatory and inhibitory neurons through value-dependent disinhibition. Together, our results suggest computational and neural mechanisms underlying emergence of complex learning strategies in naturalistic settings.","link":"/opendata/farashahi-soltani-2021/"},{"title":"Fareri et al. (2022)","text":"Choices under risk often have consequences for ourselves and others. Yet, it is unclear how the other’s identity (stranger, close friend, etc.) influences risky choices made on their behalf. In a mixed within and between subjects design, two participant groups made three series of risky economic decisions: for themselves, another person, or for both themselves and another person (i.e., shared outcomes). One group made choices involving a same-sex stranger (n = 29), the other made choices involving a same-sex close friend (n = 28). Hierarchical Bayesian estimation of computations underlying risky decision-making revealed that relative to choosing for themselves, people were more risk averse, loss averse, and consistent when choices involved another person. Partner identity was additionally crucial: people became risk neutral and more consistent when choosing for friends relative to strangers. These findings establish that the complexity of the social world is mirrored in its nuanced consequences for our choices.","link":"/opendata/fareri-et-al-2022/"},{"title":"Farkas et al. 
(2023)","text":"Despite the fact that reliability estimation is crucial for robust inference, it is underutilized in neuroscience and cognitive psychology. Appreciating reliability can help researchers increase statistical power, effect sizes, and reproducibility, decrease the impact of measurement error, and inform methodological choices. However, accurately calculating reliability for many experimental learning tasks is challenging. In this study, we highlight a number of these issues, and estimate multiple metrics of internal consistency and split-half reliability of a widely used learning task on a large sample of 180 subjects. We show how pre-processing choices, task length, and sample size can affect reliability and its estimation. Our results show that the Alternating Serial Reaction Time Task has respectable reliability, especially when learning scores are calculated based on reaction times and two-stage averaging. We also show that a task length of 25 blocks can be sufficient to meet the usual thresholds for minimally acceptable reliability. We further illustrate how relying on a single point estimate of reliability can be misleading, and the calculation of multiple metrics, along with their uncertainties, can lead to a more complete characterization of the psychometric properties of tasks.","link":"/opendata/farkas-et-al-2023/"},{"title":"Feher-da-Silva et al. (2023)","text":"A standard assumption in neuroscience is that low-effort model-free learning is automatic and continuously used, whereas more complex model-based strategies are only used when the rewards they generate are worth the additional effort. We present evidence refuting this assumption. First, we demonstrate flaws in previous reports of combined model-free and model-based reward prediction errors in the ventral striatum that probably led to spurious results. More appropriate analyses yield no evidence of model-free prediction errors in this region. 
Second, we find that task instructions generating more correct model-based behaviour reduce rather than increase mental effort. This is inconsistent with cost-benefit arbitration between model-based and model-free strategies. Together, our data indicate that model-free learning may not be automatic. Instead, humans can reduce mental effort by using a model-based strategy alone rather than arbitrating between multiple strategies. Our results call for re-evaluation of the assumptions in influential theories of learning and decision-making.","link":"/opendata/feher-da-silva-et-al-2023/"},{"title":"Felso et al. (2022)","text":"While making plans, people have to decide how far out into the future they want to plan: days, months, years, or even longer. Overly short-sighted planning can harm people's well-being in important life domains, such as health, finances, and academics. While self-report scales exist to measure people's planning, people's answers to such questions may be distorted by their desire to make a good impression and conform to norms and expectations. Here, we introduce a method for objectively quantifying people's propensity to plan into the future. Our method combines a process-tracing method with Bayesian inverse reinforcement learning to measure how prone an individual is to plan multiple steps ahead. To infer this from a person's process-tracing data, our method inverts a new resource-rational model of individual differences in planning. This model assumes that subjective planning costs are captured by a cost function with two parameters: a mental effort cost and a planning depth cost. Upon showing that our model of planning explains individual participants' planning behavior better than the best previous models, we validate our method on simulated data and real data from a large online experiment where the cost of planning was manipulated within participants. Our results show that our method can infer individual differences in the planning depth cost. 
Our model provides a mechanistic account for why some people plan too shortsightedly. The subjective planning costs inferred by our method can be used as an objective, non-self-report measure of individual differences in people's propensity to plan into the future.","link":"/opendata/felso-et-al-2022/"},{"title":"Feng et al. (2021)","text":"Growing evidence suggests that behavioral variability plays a critical role in how humans manage the tradeoff between exploration and exploitation. In these decisions, a little variability can help us to overcome the desire to exploit known rewards by encouraging us to randomly explore something else. Here we investigate how such ‘random exploration’ could be controlled using a drift-diffusion model of the explore-exploit choice. In this model, variability is controlled by either the signal-to-noise ratio with which reward is encoded (the ‘drift rate’), or the amount of information required before a decision is made (the ‘threshold’). By fitting this model to behavior, we find that while, statistically, both drift and threshold change when people randomly explore, numerically, the change in drift rate has by far the largest effect. This suggests that random exploration is primarily driven by changes in the signal-to-noise ratio with which reward information is represented in the brain.","link":"/opendata/feng-et-al-2021/"},{"title":"Ferrucci et al. (2021)","text":"A standard view in the literature is that decisions are the result of a process that accumulates evidence in favor of each alternative until such accumulation reaches a threshold and a decision is made. However, this view has been recently questioned by an alternative proposal that suggests that, instead of accumulated, evidence is combined with an urgency signal. Both theories have been mathematically formalized and supported by a variety of decision-making tasks with constant information. 
However, recently, tasks with changing information have been shown to be more effective for studying the dynamics of decision making. Recent research using one such task, the tokens task, has shown that decisions are better described by an urgency mechanism than by an accumulation one. However, the results of that study could depend on a task where all fundamental information was noiseless and always present, favoring a mechanism of non-integration, such as the urgency one. Here, we wanted to address whether the same conclusions were also supported by an experimental paradigm in which sensory evidence was removed shortly after it was provided, making working memory necessary to properly perform the task. Here, we show that, under this condition, participants’ behavior could be explained by an urgency-gating mechanism that low-pass filters the mnemonic information and combines it with an urgency signal that grows with time, but not by an accumulation process that integrates the same mnemonic information. Thus, our study supports the idea that, under certain situations with dynamic sensory information, decisions are better explained by an urgency-gating mechanism than by an accumulation one.","link":"/opendata/ferrucci-et-al-2021/"},{"title":"Fielder & Glöckner (2012)","text":"In recent years, research on risky choice has moved beyond analyzing choices only. Models have been suggested that aim to describe the underlying cognitive processes, and some studies have tested process predictions of these models. Prominent approaches are evidence accumulation models such as decision field theory (DFT), simple serial heuristic models such as the adaptive toolbox, and connectionist approaches such as the parallel constraint satisfaction (PCS) model. In two studies involving measures of attention and pupil dilation, we investigate hypotheses derived from these models in choices between two gambles with two outcomes each. 
We show that attention to an outcome of a gamble increases with its probability and its value and that attention shifts toward the subsequently favored gamble after about two thirds of the decision process, indicating a gaze-cascade effect. Information search occurs mostly within-gambles, and the direction of search does not change over the course of decision making. Pupil dilation, which reflects both cognitive effort and arousal, increases during the decision process and increases with mean expected value. Overall, the results support aspects of automatic integration models for risky choice such as DFT and PCS, but in their current specification none of them can account for the full pattern of results.","link":"/opendata/fielder-glockner-2012/"},{"title":"Finke et al. (2023)","text":"Evidence regarding unaware differential fear conditioning in humans is mixed and even less is known about the effects of contingency awareness on appetitive conditioning. Phasic pupil dilation responses (PDR) might be more sensitive for capturing implicit learning than other measures, such as skin conductance responses (SCR). Here, we report data from two delay conditioning experiments utilizing PDR (alongside SCR and subjective assessments) to investigate the role of contingency awareness in aversive and appetitive conditioning. In both experiments, valence of unconditioned stimuli (UCS) was varied within participants by administering aversive (mild electric shocks) and appetitive UCSs (monetary rewards). Preceding visual stimuli (CSs) predicted either the reward, the shock (65% reinforcement), or neither UCS. In Exp. 1, participants were fully instructed about CS-UCS contingencies, whereas in Exp. 2, no such information was given. PDR and SCR demonstrated successful differential conditioning in Exp. 1 and in (learned) aware participants in Exp. 2. In non-instructed participants who remained fully unaware of contingencies (Exp. 
2), differential modulation of early PDR (immediately after CS onset) by appetitive cues emerged. Associations with model-derived learning parameters further suggest that early PDR in unaware participants mainly reflect implicit learning of expected outcome value, whereas early PDR in aware (instructed/learned-aware) participants presumably index attentional processes (related to uncertainty/prediction error processing). Similar, but less clear results emerged for later PDR (preceding UCS onset). Our data argue in favor of a dual-process account of associative learning, suggesting that value-related processing can take place irrespective of mechanisms involved in conscious memory formation.","link":"/opendata/finke-et-al-2023/"},{"title":"Fleming et al. (2023)","text":"An important finding in the cognitive effort literature has been that sensitivity to the costs of effort varies between individuals, suggesting that some people find effort more aversive than others. It has been suggested this may explain individual differences in other aspects of cognition; in particular that greater effort sensitivity may underlie some of the symptoms of conditions such as depression and schizophrenia. In this paper, we highlight a major problem with existing measures of cognitive effort that hampers this line of research, specifically the confounding of effort and difficulty. This means that behaviour thought to reveal effort costs could equally be explained by cognitive capacity, which influences the frequency of success and thereby the chance of obtaining reward. To address this shortcoming, we introduce a new test, the Number Switching Task (NST), specially designed such that difficulty will be unaffected by the effort manipulation and can easily be standardised across participants. In a large, online sample, we show that these criteria are met successfully and reproduce classic effort discounting results with the NST. 
We also demonstrate the use of Bayesian modelling with this task, producing behavioural parameters which can be associated with other measures, and report a preliminary association with the Need for Cognition scale.","link":"/opendata/fleming-et-al-2023/"},{"title":"Fontanesi et al. (2019)","text":"Reinforcement learning (RL) models describe how humans and animals learn by trial-and-error to select actions that maximize rewards and minimize punishments. Traditional RL models focus exclusively on choices, thereby ignoring the interactions between choice preference and response time (RT), or how these interactions are influenced by contextual factors. However, in the field of perceptual decision-making, such interactions have proven to be important to dissociate between different underlying cognitive processes. Here, we investigated such interactions to shed new light on overlooked differences between learning to seek rewards and learning to avoid losses. We leveraged behavioral data from four RL experiments, which feature manipulations of two factors: outcome valence (gains vs. losses) and feedback information (partial vs. complete feedback). A Bayesian meta-analysis revealed that these contextual factors differently affect RTs and accuracy: While valence only affects RTs, feedback information affects both RTs and accuracy. To dissociate between the latent cognitive processes, we jointly fitted choices and RTs across all experiments with a Bayesian, hierarchical diffusion decision model (DDM). We found that the feedback manipulation affected drift rate, threshold, and non-decision time, suggesting that it was not a mere difficulty effect. Moreover, valence affected non-decision time and threshold, suggesting a motor inhibition in punishing contexts. To better understand the learning dynamics, we finally fitted a combination of RL and DDM (RLDDM). 
We found that while the threshold was modulated by trial-specific decision conflict, the non-decision time was modulated by the learned context valence. Overall, our results illustrate the benefits of jointly modeling RTs and choice data during RL, to reveal subtle mechanistic differences underlying decisions in different learning contexts.","link":"/opendata/fontanesi-et-al-2019/"},{"title":"Fontanesi et al. (2022)","text":"Recent years have witnessed a surge of interest in understanding the neural and cognitive dynamics that drive sequential decision making in general and foraging behavior in particular. Due to the intrinsic properties of most sequential decision-making paradigms, however, previous research in this area has suffered from the difficulty of disentangling properties of the decision related to (a) the value of switching to a new patch, which increases monotonically, and (b) the conflict experienced between choosing to stay or leave, which first increases but then decreases after reaching the point of indifference between staying and switching. Here, we show how the same problems arise in studies of sequential decision-making under risk, and how they can be overcome, taking as a specific example recent research on the ‘pig’ dice game. In each round of the ‘pig’ dice game, people roll a die and accumulate rewards until they either decide to proceed to the next round or lose all rewards. By combining simulation-based dissections of the task structure with two experiments, we show how an extension of the standard paradigm, together with cognitive modeling of decision-making processes, allows us to disentangle properties related to either switch value or choice conflict. 
Our study elucidates the cognitive mechanisms of sequential decision making and underscores the importance of avoiding potential pitfalls of paradigms that are commonly used in this research area.","link":"/opendata/fontanesi-et-al-2022/"},{"title":"Forbes & Bennett (2023)","text":"The valence of an individual’s emotional response to an event is often thought to depend on their prior expectations for the event: better-than-expected outcomes produce positive affect and worse-than-expected outcomes produce negative affect. In recent years, this hypothesis has been instantiated within influential computational models of subjective affect that assume the valence of affect is driven by reward prediction errors. However, there remain a number of open questions regarding this association. In this project, we investigated the moderating effects of outcome valence and decision context (Experiment 1: free vs. forced choices; Experiment 2: trials with versus trials without counterfactual feedback) on the effects of reward prediction errors on subjective affect. We conducted two large-scale online experiments (N = 300 in total) of general-population samples recruited via Prolific to complete a risky decision-making task with embedded high-resolution sampling of subjective affect. Hierarchical Bayesian computational modelling revealed that the effects of reward prediction errors on subjective affect were significantly moderated by both outcome valence and decision context. Specifically, after accounting for concurrent reward amounts we found evidence that only negative reward prediction errors (worse-than-expected outcomes) influenced subjective affect, with no significant effect of positive reward prediction errors (better-than-expected outcomes). Moreover, these effects were only apparent on trials in which participants made a choice freely (but not on forced-choice trials) and when counterfactual feedback was absent (but not when counterfactual feedback was present). 
These results deepen our understanding of the effects of reward prediction errors on subjective affect.","link":"/opendata/forbes-bennett-2023/"},{"title":"Forest et al. (2023)","text":"Trajectories of cognitive and neural development suggest that, despite early emergence, the ability to extract environmental patterns changes across childhood. Here, 5- to 9-year-olds and adults (N = 211, 110 females, in a large Canadian city) completed a memory test assessing what they remembered after watching a stream of shape triplets: the particular sequence in which the shapes occurred and/or their group-level structure. After accounting for developmental improvements in overall memory, all ages remembered specific transitions, while memory for group membership was only observed in older children and adults (age by test-type interaction η2 = .05). Thus, while young children form memories for specifics of structured experience, memory for derived associations is refined later-underscoring that adults and young children form different memories despite identical experience.","link":"/opendata/forest-et-al-2023/"},{"title":"Fornari et al. (2023)","text":"Learning to predict action outcomes in morally conflicting situations is essential for social decision-making but poorly understood. Here we tested which forms of Reinforcement Learning Theory capture how participants learn to choose between self-money and other-shocks, and how they adapt to changes in contingencies. We find that choices were better described by a reinforcement learning model based on the current value of separately expected outcomes than by one based on the combined historical values of past outcomes. Participants track expected values of self-money and other-shocks separately, with substantial individual differences in preference reflected in a valuation parameter balancing their relative weight. This valuation parameter also predicted choices in an independent costly helping task. 
The expectations of self-money and other-shocks were biased toward the favored outcome, but fMRI revealed this bias to be reflected in the ventromedial prefrontal cortex, while the pain-observation network represented pain prediction errors independently of individual preferences.","link":"/opendata/fornari-et-al-2023/"},{"title":"Forys et al. (2022)","text":"We must often decide how much effort to exert or withhold to avoid undesirable outcomes or obtain rewards. In depression and anxiety, levels of avoidance can be excessive and reward-seeking may be reduced. Yet outstanding questions remain about the links between motivated action/inhibition and anxiety and depression levels, and whether they differ between men and women. Here we examined the relationship between anxiety and depression scores, and performance on effortful active and inhibitory avoidance (Study 1) and reward seeking (Study 2) in humans. Undergraduates and paid online workers (NAvoid=545, NReward=310; NFemale=368, NMale=450, MAge=22.58, RangeAge=17-62) were assessed on the Beck Depression Inventory II (BDI) and the Beck Anxiety Inventory (BAI) and performed an instructed online avoidance or reward-seeking task. Participants had to make multiple presses on active trials and withhold presses on inhibitory trials to avoid an unpleasant sound (Study 1) or obtain points towards a monetary reward (Study 2). Overall, men deployed more effort than women in both avoidance and reward-seeking, and anxiety scores were negatively associated with active reward-seeking performance based on sensitivity scores. Gender interacted with anxiety scores and inhibitory avoidance performance, such that women with higher anxiety showed worse avoidance performance. Our results illuminate effects of gender in the relationship between anxiety and depression levels and the motivation to actively and effortfully respond to obtain positive and avoid negative outcomes. 
We must often take or withhold effortful action to avoid unpleasant outcomes or obtain rewards. Depression and anxiety can impact these behaviours’ effectiveness, but the roles of avoidance in depression and reward-seeking in anxiety are not fully understood. Gender differences in avoidance and reward-seeking have also not been examined. We present a task in which community participants with a range of anxiety and depression levels made or withheld button presses to avoid hearing an unpleasant sound or obtain a reward. Men deployed more effort than women in avoidance, and women with higher anxiety scores had lower avoidance performance than men. We illuminate gender differences in how depressive and anxiety scores impact our ability to avoid threats and obtain rewards.","link":"/opendata/forys-et-al-2022/"},{"title":"Fox et al. (2023)","text":"Prior studies have found metacognitive impairments are linked to a transdiagnostic dimension of anxious-depression, manifesting as reduced confidence in performance (‘metacognitive bias’). However, previous work has been cross-sectional and so it is unclear if under-confidence is a trait-like marker of anxious-depression vulnerability, or if it resolves when anxious-depression improves. Data were collected as part of the ‘Precision in Psychiatry’ study, a large-scale transdiagnostic, four-week observational study of individuals initiating internet-based cognitive behavioural therapy (iCBT) or antidepressant medication. Self-reported clinical questionnaires and perceptual task performance were gathered to assess anxious-depression and metacognitive bias at baseline and four-week follow-up. Primary analyses were conducted for individuals who received iCBT (n=649), with comparisons between smaller samples that received antidepressant medication (n=88) and a control group receiving no intervention (n=82). 
Prior to receiving treatment, anxious-depression severity was associated with under-confidence in performance in the iCBT arm, replicating previous work. From baseline to follow-up, levels of anxious-depression were significantly reduced, and this was accompanied by a significant increase in metacognitive confidence (B=0.17, SE=0.02, p<0.001). These changes were correlated (r(647)=-0.12, p=0.002); those with the greatest reductions in anxious-depression levels had the largest increase in confidence. In the antidepressant arm, anxious-depression reduced (B=-0.61, SE=0.09, p<0.001) and confidence increased (B=0.31, SE=0.08, p<0.001). Among controls, confidence remained stable from baseline to follow-up (B=0.11, SE=0.07, p=0.103). Metacognitive biases in anxious-depression are state-dependent; when symptoms improve with treatment, so does confidence in performance. Our results suggest this is not specific to the type of intervention.","link":"/opendata/fox-et-al-2023/"},{"title":"Fradkin et al. (2020)","text":"Obsessive compulsive (OC) symptoms involve excessive information gathering (e.g., checking, reassurance-seeking), and uncertainty about possible, often catastrophic, future events. Here we propose that these phenomena are the result of excessive uncertainty regarding state transitions (transition uncertainty): a computational impairment in Bayesian inference leading to a reduced ability to use the past to predict the present and future, and to oversensitivity to feedback (i.e. prediction errors). Using a computational model of Bayesian learning under uncertainty in a reversal learning task, we investigate the relationship between OC symptoms and transition uncertainty. Individuals high and low in OC symptoms performed a task in which they had to detect shifts (i.e. transitions) in cue-outcome contingencies. Modeling subjects' choices was used to estimate each individual participant's transition uncertainty and associated responses to feedback. 
We examined both an optimal observer model and an approximate Bayesian model in which participants were assumed to attend (and learn about) only one of several cues on each trial. Results suggested that participants were more likely to distribute attention across cues, in accordance with the optimal observer model. As hypothesized, participants with higher OC symptoms exhibited increased transition uncertainty, as well as a pattern of behavior potentially indicative of a difficulty in relying on learned contingencies, with no evidence for perseverative behavior. Increased transition uncertainty compromised these individuals' ability to predict ensuing feedback, rendering them more surprised by expected outcomes. However, no evidence for excessive belief updating was found. These results highlight a potential computational basis for OC symptoms and obsessive compulsive disorder (OCD). The fact that OC symptoms predicted a decreased reliance on the past rather than perseveration challenges preconceptions of OCD as a disorder of inflexibility. Our results have implications for the understanding of the neurocognitive processes leading to excessive uncertainty and distrust of past experiences in OCD.","link":"/opendata/fradkin-et-al-2020/"},{"title":"Fradkin & Eldar (2022)","text":"To attain goals, people must proactively prevent interferences and react to interferences once they occur. Whereas most research focuses on how people deal with external interferences, here we investigate the use of proactive and reactive control in dealing with unwanted thoughts. To examine this question, we asked people to generate an association to each of several repeating cue words, while forbidding the repetition of associations. Reactively rejecting and replacing unwanted repeated associations after they occur entails slower response times. Conversely, proactive control entails constricting the search space and thus faster response times. 
To gain further insight into different potential proactive thought control mechanisms, we augmented the analysis of raw response times with a novel, hypothesis-based, tractable computational model describing how people serially sample associations. Our results indicate that people primarily react to unwanted thoughts after they occur. Yet, we found evidence for two latent proactive control mechanisms: one that allows people to mitigate the episodic strengthening of repeated thoughts, and another that helps avoid looping in a repetitive thought. Exploratory analysis showed a relationship between model parameters and self-reported individual differences in the control over unwanted thoughts in daily life. The findings indicate the novel task and model can advance our understanding of how people can and cannot control their thoughts and memories, and benefit future research on the mechanisms responsible for unwanted thought in different psychiatric conditions. Finally, we discuss implications concerning the involvement of associative thinking and various control processes in semantic fluency, decision-making and creativity.","link":"/opendata/fradkin-eldar-2022/"},{"title":"Franco et al. (2022)","text":"The survival of human organisms depends on our ability to solve complex tasks in the face of limited cognitive resources. However, little is known about the factors that drive the complexity of those tasks. Here, building on insights from computational complexity theory, we quantify the computational hardness of cognitive tasks using a set of task-independent metrics related to the computational resource requirements of individual instances of a task. We then examine the relation between those metrics and human behavior and find that they predict both time spent on a task as well as accuracy in three canonical cognitive tasks. 
Our findings demonstrate that performance in cognitive tasks can be predicted based on generic metrics of their inherent computational hardness.","link":"/opendata/franco-et-al-2022/"},{"title":"Franklin & Frank (2020)","text":"Humans routinely face novel environments in which they have to generalize in order to act adaptively. However, doing so involves the non-trivial challenge of deciding which aspects of a task domain to generalize. While it is sometimes appropriate to simply re-use a learned behavior, often adaptive generalization entails recombining distinct components of knowledge acquired across multiple contexts. Theoretical work has suggested a computational trade-off in which it can be more or less useful to learn and generalize aspects of task structure jointly or compositionally, depending on previous task statistics, but it is unknown whether humans modulate their generalization strategy accordingly. Here we develop a series of navigation tasks that separately manipulate the statistics of goal values (what to do) and state transitions (how to do it) across contexts and assess whether human subjects generalize these task components separately or conjunctively. We find that human generalization is sensitive to the statistics of the previously experienced task domain, favoring compositional or conjunctive generalization when the task statistics are indicative of such structures, and a mixture of the two when they are more ambiguous. These results support a normative meta-generalization account and suggest that people not only generalize previous task components but also generalize the statistical structure most likely to support generalization.","link":"/opendata/franklin-frank-2020/"},{"title":"Frey et al. (2017)","text":"To what extent is there a general factor of risk preference, R, akin to g, the general factor of intelligence? Can risk preference be regarded as a stable psychological trait? 
These conceptual issues persist because few attempts have been made to integrate multiple risk-taking measures, particularly measures from different and largely unrelated measurement traditions (self-reported propensity measures assessing stated preferences, incentivized behavioral measures eliciting revealed preferences, and frequency measures assessing actual risky activities). Adopting a comprehensive psychometric approach (1507 healthy adults completing 39 risk-taking measures, with a subsample of 109 participants completing a retest session after 6 months), we provide a substantive empirical foundation to address these issues, finding that correlations between propensity and behavioral measures were weak. Yet, a general factor of risk preference, R, emerged from stated preferences and generalized to specific and actual real-world risky activities (for example, smoking). Moreover, R proved to be highly reliable across time, indicative of a stable psychological trait. Our findings offer a first step toward a general mapping of the construct risk preference, which encompasses both general and domain-specific components, and have implications for the assessment of risk preference in the laboratory and in the wild.","link":"/opendata/frey-et-al-2017/"},{"title":"Friehs et al. (2023)","text":"Stopping an already initiated action is crucial for human everyday behavior and empirical evidence points toward the prefrontal cortex playing a key role in response inhibition. Two regions that have been consistently implicated in response inhibition are the right inferior frontal gyrus (IFG) and the more superior region of the dorsolateral prefrontal cortex (DLPFC). The present study investigated the effect of offline 1 Hz transcranial magnetic stimulation (TMS) over the right IFG and DLPFC on performance in a gamified stop-signal task (SSG). 
We hypothesized that perturbing each area would decrease performance in the SSG, albeit with a quantitative difference in the performance decrease after stimulation. After offline TMS, functional short-term reorganization is possible, and the domain-general area (i.e., the right DLPFC) might be able to compensate for the perturbation of the domain-specific area (i.e., the right IFG). Results showed that 1 Hz offline TMS over the right DLPFC and the right IFG at 110% intensity of the resting motor threshold had no effect on performance in the SSG. In fact, evidence in favor of the null hypothesis was found. One intriguing interpretation of this result is that within-network compensation was triggered, canceling out the potential TMS effects as has been suggested in recent theorizing on TMS effects, although the presented results do not unambiguously identify such compensatory mechanisms. Future studies may result in further support for this hypothesis, which is especially important when studying reactive response in complex environments.","link":"/opendata/friehs-et-al-2023/"},{"title":"Frischkorn & von-Bastian (2021)","text":"Process-Overlap Theory (POT) suggests that measures of cognitive abilities sample from sets of independent cognitive processes. These cognitive processes can be separated into domain-general executive processes, sampled by the majority of cognitive ability measures, and domain-specific processes, sampled only by measures within a certain domain. According to POT, fluid intelligence measures are related because different tests sample similar domain-general executive cognitive processes to some extent. Re-analyzing data from a study by De Simoni and von Bastian (2018), we assessed domain-general variance from executive processing tasks measuring inhibition, shifting, and efficiency of removal from working memory, as well as examined their relation to a domain-general factor extracted from fluid intelligence measures. 
The results showed that domain-general factors reflecting general processing speed were moderately and negatively correlated with the domain-general fluid intelligence factor (r = -.17–.36). However, domain-general factors isolating variance specific to inhibition, shifting, and removal showed only small and inconsistent correlations with the domain-general fluid intelligence factor (r = .02–.22). These findings suggest that (1) executive processing tasks sample only few domain-general executive processes also sampled by fluid intelligence measures, as well as (2) that domain-general speed of processing contributes more strongly to individual differences in fluid intelligence than do domain-general executive processes.","link":"/opendata/frischkorn-von-bastian-2021/"},{"title":"Fromer et al. (2021)","text":"The amount of mental effort we invest in a task is influenced by the reward we can expect if we perform that task well. However, some of the rewards that have the greatest potential for driving these efforts are partly determined by factors beyond one’s control. In such cases, effort has more limited efficacy for obtaining rewards. According to the Expected Value of Control theory, people integrate information about the expected reward and efficacy of task performance to determine the expected value of control, and then adjust their control allocation (i.e., mental effort) accordingly. Here we test this theory’s key behavioral and neural predictions. We show that participants invest more cognitive control when this control is more rewarding and more efficacious, and that these incentive components separately modulate EEG signatures of incentive evaluation and proactive control allocation. Our findings support the prediction that people combine expectations of reward and efficacy to determine how much effort to invest.","link":"/opendata/fromer-et-al-2021/"},{"title":"Fung et al. 
(2019)","text":"Theoretical models distinguish between neural responses elicited by distal threats and those evoked by more immediate threats1-3. Specifically, slower cognitive fear responses towards distal threats involve a network of brain regions including the ventral hippocampus (vHPC) and medial prefrontal cortex (mPFC), while immediate reactive fear responses rely on regions such as the periaqueductal grey4,5. However, it is unclear how anxiety and its neural substrates relate to these distinct defensive survival circuits. We tested whether individual differences in trait anxiety would impact escape behaviour and neural responses to slow and fast attacking predators: conditions designed to evoke cognitive and reactive fear, respectively. Behaviourally, we found that trait anxiety was not related to escape decisions for fast threats, but individuals with higher trait anxiety escaped earlier during slow threats. Functional magnetic resonance imaging showed that when subjects faced slow threats, trait anxiety positively correlated with activity in the vHPC, mPFC, amygdala and insula. Furthermore, the strength of functional coupling between two components of the cognitive circuit-the vHPC and mPFC-was correlated with the degree of trait anxiety. This suggests that anxiety predominantly affects cognitive fear circuits that are involved in volitional strategic escape.","link":"/opendata/fung-et-al-2019/"},{"title":"Funkhouser et al. (2020)","text":"The popularity of network analysis in psychopathology research has increased exponentially in recent years. Yet, little research has examined the replicability of cross-sectional psychopathology network models, and those that have used single items for symptoms rather than multi-item scales. 
The present study therefore examined the replicability and generalizability of regularized partial correlation networks of internalizing symptoms within and across 5 samples (total N = 2,573) using the Inventory for Depression and Anxiety Symptoms, a factor analytically derived measure of individual internalizing symptoms. As different metrics may yield different conclusions about the replicability of network parameters, we examined both global and specific metrics of similarity between networks. Correlations within and between nonclinical samples suggested considerable global similarities in network structure (rss = .53-.87) and centrality strength (rss = .37-.86), but weaker similarities in network structure (rss = .36-.66) and centrality (rss = .04-.54) between clinical and nonclinical samples. Global strength (i.e., connectivity) did not significantly differ across all 5 networks and few edges (0-5.5%) significantly differed between networks. Specific metrics of similarity indicated that, on average, approximately 80% of edges were consistently estimated within and between all 5 samples. The most central symptom (i.e., dysphoria) was consistent within and across samples, but there were few other matches in centrality rank-order. In sum, there were considerable similarities in network structure, the presence and sign of individual edges, and the most central symptom within and across internalizing symptom networks estimated from nonclinical samples, but global metrics suggested network structure and symptom centrality had weak to moderate generalizability from nonclinical to clinical samples.","link":"/opendata/funkhouser-et-al-2020/"},{"title":"Gagne et al. 
(2020)","text":"Using a contingency volatility manipulation, we tested the hypothesis that difficulty adapting probabilistic decision-making to second-order uncertainty might reflect a core deficit that cuts across anxiety and depression and holds regardless of whether outcomes are aversive or involve reward gain or loss. We used bifactor modeling of internalizing symptoms to separate symptom variance common to both anxiety and depression from that unique to each. Across two experiments, we modeled performance on a probabilistic decision-making under volatility task using a hierarchical Bayesian framework. Elevated scores on the common internalizing factor, with high loadings across anxiety and depression items, were linked to impoverished adjustment of learning to volatility regardless of whether outcomes involved reward gain, electrical stimulation, or reward loss. In particular, high common factor scores were linked to dampened learning following better-than-expected outcomes in volatile environments. No such relationships were observed for anxiety- or depression-specific symptom factors.","link":"/opendata/gagne-et-al-2020/"},{"title":"Gagne et al. (2022)","text":"Individuals prone to anxiety and depression often report beliefs and make judgements about themselves that are more negative than those reported by others. We use computational modeling of a richly naturalistic task to disentangle the role of negative priors versus negatively biased belief updating and to investigate their association with different dimensions of Internalizing psychopathology. Undergraduate participants first provided profiles for a hypothetical tech internship. They then viewed pairs of other profiles and selected the individual they would prefer to work alongside out of each pair. 
In a subsequent phase of the experiment, participants made judgments about their relative popularity as hypothetical internship partners both before any feedback and after each of 20 items of feedback revealing whether or not they had been selected as the preferred teammate from a given pairing. Scores on latent factors of general negative affect, anxiety-specific affect and depression-specific affect were estimated using participants' self-report scores on standardized measures of anxiety and depression together with factor loadings from a bifactor analysis conducted previously. Higher scores on the depression-specific factor were linked to more negative prior beliefs but were not associated with differences in belief updating. In contrast, higher scores on the anxiety-specific factor were associated with a negative bias in belief updating but no difference in prior beliefs. These findings indicate that, to at least some extent, distinct processes may impact the formation of belief priors and in-the-moment belief updating and that these processes may be differentially disrupted in depression and anxiety. Future directions for enquiry include examination of the possibility that prior belief biases in depression might reflect generalization from prior experiences or global schema whereas belief updating biases in anxiety might be more situationally specific.","link":"/opendata/gagne-et-al-2022/"},{"title":"Garcia et al. (2023)","text":"Standard models of decision-making assume each option is associated with subjective value, regardless of whether this value is inferred from experience (experiential) or explicitly instructed probabilistic outcomes (symbolic). In this study, we present results that challenge the assumption of unified representation of experiential and symbolic value. Across nine experiments, we presented participants with hybrid decisions between experiential and symbolic options. 
Participants' choices exhibited a pattern consistent with a systematic neglect of the experiential values. This normatively irrational decision strategy held after accounting for alternative explanations, and persisted even when it bore an economic cost. Overall, our results demonstrate that experiential and symbolic values are not symmetrically considered in hybrid decisions, suggesting they recruit different representational systems that may be assigned different priority levels in the decision process. These findings challenge the dominant models commonly used in value-based decision-making research.","link":"/opendata/garcia-et-al-2023/"},{"title":"Garrett & Daw (2020)","text":"Deciding which options to engage, and which to forego, requires developing accurate beliefs about the overall distribution of prospects. Here we adapt a classic prey selection task from foraging theory to examine how individuals keep track of an environment's reward rate and adjust choices in response to its fluctuations. Preference shifts were most pronounced when the environment improved compared to when it deteriorated. This is best explained by a trial-by-trial learning model in which participants estimate the reward rate with upward vs. downward changes controlled by separate learning rates. A failure to adjust expectations sufficiently when an environment becomes worse leads to suboptimal choices: options that are valuable given the environmental conditions are rejected in the false expectation that better options will materialize. These findings offer a previously unappreciated parallel in the serial choice setting of observations of asymmetric updating and resulting biased (often overoptimistic) estimates in other domains.","link":"/opendata/garrett-daw-2020/"},{"title":"Garrett & Sharot (2023)","text":"In a recent paper, Burton et al. claim that individuals update beliefs to a greater extent when learning an event is less likely compared to more likely than expected. 
Here, we investigate Burton et al.’s findings. First, we show how Burton et al.’s data do not in fact support a belief update bias for neutral events. Next, in an attempt to replicate their findings, we collect a new data set employing the original belief update task design, but with neutral events. A belief update bias for neutral events is not observed. Finally, we highlight the statistical errors and confounds in Burton et al.’s design and analysis. This includes mis-specifying a reinforcement learning approach to model the data and failing to follow standard computational model fitting sanity checks such as parameter recovery, model comparison and out of sample prediction. 
Together, the results find little evidence for biased updating for neutral events.","link":"/opendata/garrett-sharot-2023/"},{"title":"Geddert & Egner (2022)","text":"Adaptive behavior requires the ability to focus on a current task and protect it from distraction (cognitive stability), as well as the ability to rapidly switch to another task in light of changing circumstances (cognitive flexibility). Cognitive stability and flexibility have been conceptualized as opposite endpoints on a stability-flexibility trade-off continuum, implying an obligatory reciprocity between the two: Greater flexibility necessitates less stability, and vice versa. Surprisingly, rigorous empirical tests of this critical assumption are lacking. Here, we acquired simultaneous measurements of cognitive stability (congruency effects) and flexibility (switch costs) on the same stimuli within the same task while independently varying contextual demands on these functions with block-wise manipulations of the proportion of incongruent trials and task switches, respectively. If cognitive stability and flexibility are reciprocal, increases in flexibility in response to higher switch rates should lead to commensurate decreases in stability, and increases in stability in response to more frequent incongruent trials should result in decreased flexibility. Across three experiments, using classic cued task-switching (Experiments 1 and 3) and attentional set-shifting (Experiment 2) protocols, we found robust evidence against an obligatory stability-flexibility trade-off. Although we observed the expected contextual adaptation of stability and flexibility to changing demands, strategic adjustments in stability had little influence on flexibility, and vice versa. 
These results refute the long-held assumption of a stability-flexibility trade-off, documenting instead that the cognitive processes mediating these functions can be regulated independently-it is possible to be both stable and flexible at the same time.","link":"/opendata/geddert-egner-2022/"},{"title":"Gera et al. (2022)","text":"Habits are a prominent feature of both adaptive and maladaptive behavior. Yet, despite substantial research efforts, there are currently no well-established experimental procedures for habit induction in humans. It is likely that laboratory experimental settings, as well as the session-based structure typically used in controlled experiments (also outside the lab), impose serious constraints on studying habits and other effects that are sensitive to context, motivation, and training duration and frequency. To overcome these challenges, we devised a unique real-world free-operant task structure, implemented through a novel smartphone application, whereby participants could freely enter the app (24 hours a day, 7 days a week) to win rewards. This procedure is free of typical laboratory constraints, yet well-controlled. Using the canonical sensitivity to outcome devaluation criterion, we successfully demonstrated habit formation as a function of training duration, a longstanding challenge in the field. Additionally, we show a positive relationship between multiple facets of engagement/motivation and goal-directedness. We suggest that our novel paradigm can be used to study the neurobehavioral and psychological mechanisms underlying habits in humans. Moreover, the real-world free-operant framework can potentially be used to examine other instrumental behavior-related questions, with greater face validity in naturalistic conditions.","link":"/opendata/gera-et-al-2022/"},{"title":"Gerhardsson et al. (2020)","text":"To learn from feedback (trial and error) is essential for all species. 
Insufficient sleep has been found to reduce the sensitivity to feedback as well as increase reward sensitivity. To determine whether insufficient sleep alters learning from positive and negative feedback, healthy participants (n = 32, mean age 29.0 years, 18 women) were tested once after normal sleep (8 hr time in bed for 2 nights) and once after 2 nights of sleep restriction (4 hr/night) on a probabilistic selection task where learning behaviour was evaluated in three ways: as generalised learning, short-term win-stay/lose-shift learning strategies, and trial-by-trial learning rate. Sleep restriction did not alter the sensitivity to either positive or negative feedback on generalised learning. Also, short-term win-stay/lose-shift strategies were not affected by sleep restriction. Similarly, results from computational models that assess the trial-by-trial update of stimuli value demonstrated no difference between sleep conditions after the first block. However, a slower learning rate from negative feedback when evaluating all learning blocks was found after sleep restriction. Despite a marked increase in sleepiness and slowed learning rate for negative feedback, sleep restriction did not appear to alter strategies and generalisation of learning from positive or negative feedback.","link":"/opendata/gerhardsson-et-al-2020/"},{"title":"Gerlicher et al. (2022)","text":"Although reconsolidation-based interventions constitute a promising new avenue to treating fear and anxiety disorders, the success of the intervention is not guaranteed. The initiation of memory reconsolidation is dependent on whether a mismatch between the experienced and predicted outcome, a prediction error (PE), occurs during fear memory reactivation. It remains, however, elusive whether any type of PE renders fear memories susceptible to reconsolidation disruption.
Here, we investigated whether a value PE, elicited by an outcome that is better or worse than expected, is necessary to make fear memories susceptible to reconsolidation disruption or whether a model-based identity PE, i.e., a PE elicited by an outcome equally aversive but different than expected, would be sufficient. Blocking beta-adrenergic receptors with propranolol HCl after reactivation did not, however, reduce the expression of fear after either type of PE. Instead, we observed intact fear memory expression 24 h after reactivation in the value-, identity-, and no-PE control groups. The present results do not corroborate our earlier findings of reconsolidation disruption and point towards challenges that the field is currently facing in observing evidence for memory reconsolidation at all. We provide potential explanations for the unexpected failure of replicating reconsolidation disruption and discuss future directions.","link":"/opendata/gerlicher-et-al-2022/"},{"title":"Gershman & Bhui (2020)","text":"Discounting of future rewards is traditionally interpreted as evidence for an intrinsic preference in favor of sooner rewards. However, temporal discounting can also arise from internal uncertainty in value representations of future events, if one assumes that noisy mental simulations of the future are rationally combined with prior beliefs. Here, we further develop this idea by considering how simulation noise may be adaptively modulated by task demands, based on principles of rational inattention. We show how the optimal allocation of mental effort can give rise to the magnitude effect in intertemporal choice. In a re-analysis of two prior data sets, and in another experiment, we reveal several behavioral signatures of this theoretical account, tying choice stochasticity to the magnitude effect.
We conclude that some aspects of temporal discounting may result from a cognitively plausible adaptive response to the costs of information processing.","link":"/opendata/gershman-bhui-2020/"},{"title":"Gillan et al. (2016)","text":"Prominent theories suggest that compulsive behaviors, characteristic of obsessive-compulsive disorder and addiction, are driven by shared deficits in goal-directed control, which confers vulnerability for developing rigid habits. However, recent studies have shown that deficient goal-directed control accompanies several disorders, including those without an obvious compulsive element. Reasoning that this lack of clinical specificity might reflect broader issues with psychiatric diagnostic categories, we investigated whether a dimensional approach would better delineate the clinical manifestations of goal-directed deficits. Using large-scale online assessment of psychiatric symptoms and neurocognitive performance in two independent general-population samples, we found that deficits in goal-directed control were most strongly associated with a symptom dimension comprising compulsive behavior and intrusive thought. This association was highly specific when compared to other non-compulsive aspects of psychopathology. These data showcase a powerful new methodology and highlight the potential of a dimensional, biologically-grounded approach to psychiatry research.","link":"/opendata/gillan-et-al-2016/"},{"title":"Gillan et al. (2020)","text":"Goal-directed control guides optimal decision-making and it is an important cognitive faculty that protects against developing habits. Previous studies have found some evidence of goal-directed deficits when healthy individuals are stressed, and in psychiatric conditions characterised by compulsive behaviours and anxiety. Here, we tested if goal-directed control is affected by state anxiety, which might explain the former results. 
We carried out a causal test of this hypothesis in two experiments (between-subject N = 88; within-subject N = 50) that used the inhalation of hypercapnic gas (7.5% CO2) to induce an acute state of anxiety in healthy volunteers. In a third experiment (N = 1413), we used a correlational design to test if real-life anxiety-provoking events (panic attacks, stressful events) are associated with impaired goal-directed control. In the former two causal experiments, we induced a profoundly anxious state, both physiologically and psychologically, but this did not affect goal-directed performance. In the third, correlational, study, we found no evidence for an association between goal-directed control, panic attacks or stressful life events over and above variance accounted for by trait differences in compulsivity. In sum, three complementary experiments found no evidence that anxiety impairs goal-directed control in human subjects.","link":"/opendata/gillan-et-al-2020/"},{"title":"Giron et al. (2022)","text":"Analogies to stochastic optimization are common in developmental psychology, describing a gradual reduction in randomness (cooling off) over the lifespan. Yet for lack of concrete empirical comparison, there is ambiguity in interpreting this analogy. Using data from n=281 participants ages 5 to 55, we show that cooling off does not only apply to the single dimension of randomness. Rather, development resembles an optimization process along multiple dimensions of learning (i.e., reward generalization, uncertainty-directed exploration, and random temperature). What begins as large tweaks in the parameters that define learning during childhood plateaus and converges to efficient parameter constellations in adulthood. The developmental trajectory of human parameters is strikingly similar to several stochastic optimization algorithms, yet we observe intriguing differences in convergence.
Notably, none of the optimization algorithms discovered reliably better regions of the strategy space than adult participants, suggesting a remarkable efficiency of human development.","link":"/opendata/giron-et-al-2022/"},{"title":"Gladhill et al. (2022)","text":"Previous research has demonstrated that emotional faces affect time perception; however, the underlying mechanisms are not fully understood. Earlier attempts focus on effects at the different stages of the pacemaker-accumulator model (clock, memory, and/or decision-making) including an increase in pacemaker rate or accumulation rate via arousal or attention, respectively, or by biasing decision-making. A visual temporal bisection task with sub-second intervals was conducted in two groups to further investigate these effects; one group was strictly behavioral whereas the second included a 64-channel electroencephalogram (EEG). To separate the influence of face and timing responses, participants timed a visual stimulus, temporally flanked (before and after) by two faces, either negative or neutral, creating three trial-types: Neg→Neut, Neut→Neg, or Neut→Neut. We found a leftward shift in bisection point (BP) in Neg→Neut relative to Neut→Neut, suggesting an overestimation of the temporal stimulus when preceded by a negative face. Neurally, we found the face-responsive N170 was larger for negative faces and the N1 and contingent negative variation (CNV) were larger when the temporal stimulus was preceded by a negative face. Additionally, there was an interaction effect between condition and response for the late positive component of timing (LPCt) and a significant difference between response (short/long) in the neutral condition.
We concluded that a preceding negative face affects the clock stage, leading to more pulses being accumulated, either through attention or arousal, as indexed by a larger N1, CNV, and N170; whereas viewing a negative face afterward impacted decision-making mechanisms, as evidenced by the LPCt.","link":"/opendata/gladhill-et-al-2022/"},{"title":"Glickman et al. (2022)","text":"Evidence integration is a normative algorithm for choosing between alternatives with noisy evidence, which has been successful in accounting for vast amounts of behavioural and neural data. However, this mechanism has been challenged by non-integration heuristics, and tracking decision boundaries has proven elusive. Here we first show that the decision boundaries can be extracted using a model-free behavioural method termed decision classification boundary, which optimizes choice classification based on the accumulated evidence. Using this method, we provide direct support for evidence integration over non-integration heuristics, show that the decision boundaries collapse across time and identify an integration bias whereby incoming evidence is modulated based on its consistency with preceding information. This consistency bias, which is a form of pre-decision confirmation bias, was supported in four cross-domain experiments, showing that choice accuracy and decision confidence are modulated by stimulus consistency. Strikingly, despite its seeming sub-optimality, the consistency bias fosters performance by enhancing robustness to integration noise.","link":"/opendata/glickman-et-al-2022/"},{"title":"Glockner & Pachur (2012)","text":"In the behavioral sciences, a popular approach to describe and predict behavior is cognitive modeling with adjustable parameters (i.e., which can be fitted to data). Modeling with adjustable parameters allows, among other things, measuring differences between people. At the same time, parameter estimation also bears the risk of overfitting.
Are individual differences as measured by model parameters stable enough to improve the ability to predict behavior as compared to modeling without adjustable parameters? We examined this issue in cumulative prospect theory (CPT), arguably the most widely used framework to model decisions under risk. Specifically, we examined (a) the temporal stability of CPT's parameters; and (b) how well different implementations of CPT, varying in the number of adjustable parameters, predict individual choice relative to models with no adjustable parameters (such as CPT with fixed parameters, expected value theory, and various heuristics). We presented participants with risky choice problems and fitted CPT to each individual's choices in two separate sessions (which were 1 week apart). All parameters were correlated across time, in particular when using a simple implementation of CPT. CPT allowing for individual variability in parameter values predicted individual choice better than CPT with fixed parameters, expected value theory, and the heuristics. CPT's parameters thus seem to pick up stable individual differences that need to be considered when predicting risky choice.","link":"/opendata/glockner-pachur-2012/"},{"title":"Gluck et al. (2023)","text":"A habitual avoidance component may enforce the persistence of maladaptive avoidance behavior in anxiety disorders. Whether habitual avoidance is acquired more strongly in anxiety disorders is unclear. Individuals with current social anxiety disorder, panic disorder and/or agoraphobia (n = 62) and healthy individuals (n = 62) completed a devaluation paradigm with extensive avoidance training, followed by the devaluation of the aversive outcome. In the subsequent test phase, habitual response tendencies were inferred from compatibility effects. Neutral control trials were added to assess general approach learning in the absence of previous extensive avoidance training.
The compatibility effects indicating habitual control did not differ between patients with anxiety disorders and healthy controls. Patients showed lower overall approach accuracy, but this effect was unrelated to the compatibility effects. In this study, anxiety disorders were characterized by reduced approach but not stronger habitual avoidance. These results do not indicate a simple and direct association between anxiety disorders and the acquisition of pervasive habitual avoidance in this devaluation paradigm.","link":"/opendata/gluck-et-al-2023/"},{"title":"Goris et al. (2019)","text":"A common idea about individuals with autism spectrum disorder (ASD) is that they have an above-average preference for predictability and sameness. However, surprisingly little research has gone toward this core symptom, and some studies suggest the preference for predictability in ASD might be less general than commonly assumed. Here, we investigated this important symptom of ASD using three different paradigms, which allowed us to measure preference for predictability under well-controlled experimental conditions. Specifically, we used a dimensional approach by investigating correlations between autistic traits (as measured with the Autism-Spectrum Quotient and Social Responsiveness Scale in a neurotypical population) and the scores on three different tasks. The music preference task assessed preferences for tone sequences that varied in predictability. The perceptual fluency task required participants to evaluate stimuli that were preceded by a similar versus dissimilar subliminally presented prime. The gambling task presented four decks of cards that had equal outcome probabilities but varied in predictability. We observed positive correlations between autistic traits and a preference for predictability in both the music preference and perceptual fluency task. 
We did not find our hypothesized correlation with gambling behavior but did observe a post hoc correlation showing that participants with more autistic traits were faster to choose the predictable deck. Together, these findings show that a relation between autistic traits and preference for predictability can be observed in a standardized lab environment, and should be considered an important first step toward a better, more mechanistic understanding of insistence on sameness in ASD.","link":"/opendata/goris-et-al-2019/"},{"title":"Grahek et al. (2022)","text":"To determine how much cognitive control to invest in a task, people need to consider whether exerting control matters for obtaining rewards. In particular, they need to account for the efficacy of their performance-the degree to which rewards are determined by performance or by independent factors. Yet it remains unclear how people learn about their performance efficacy in an environment. Here we combined computational modeling with measures of task performance and EEG, to provide a mechanistic account of how people (i) learn and update efficacy expectations in a changing environment and (ii) proactively adjust control allocation based on current efficacy expectations. Across 2 studies, subjects performed an incentivized cognitive control task while their performance efficacy (the likelihood that rewards are performance-contingent or random) varied over time. We show that people update their efficacy beliefs based on prediction errors-leveraging similar neural and computational substrates as those that underpin reward learning-and adjust how much control they allocate according to these beliefs. Using computational modeling, we show that these control adjustments reflect changes in information processing, rather than the speed-accuracy tradeoff. 
These findings demonstrate the neurocomputational mechanism through which people learn how worthwhile their cognitive control is.","link":"/opendata/grahek-et-al-2022/"},{"title":"Grill et al. (2023)","text":"The Behavioral Avoidance Test (BAT) for spider phobia is a widely-used diagnostic tool assessing fear by measuring avoidance behavior. However, BATs require considerable preparation and different BAT protocols across studies hamper result comparability. To address this, we have developed an open-access online BAT (vBATon). We validated its efficacy in measuring avoidance and eliciting anxiety/disgust by comparing it to a real-life BAT. Spider-fearful (N = 31) and non-fearful (N = 31) individuals completed both tests on separate dates. Both tests successfully distinguished between spider-fearful and non-fearful individuals. Crucially, equivalence tests confirmed that vBATon captures avoidance, anxiety, and disgust equal to the real-life BAT. We found moderate to high correlations between vBATon and (1) the real-life BAT and (2) self-report measurements of spider-fear. Our study suggests that vBATon is a valid, standardized, efficient and user-friendly alternative to real-life BATs for measuring spider fear in both research and clinical practice. (Access to vBATon: https://spider-phobia-research.com/wp-content/uploads/2023/07/vbaton_pub/)","link":"/opendata/grill-et-al-2023/"},{"title":"Grogan et al. (2020)","text":"We can be motivated when reward depends on performance, or merely by the prospect of a guaranteed reward. Performance-dependent (contingent) reward is instrumental, relying on an internal action-outcome model, whereas motivation by guaranteed reward may minimise opportunity cost in reward-rich environments. Competing theories propose that each type of motivation should be dependent on dopaminergic activity. We contrasted these two types of motivation with a rewarded saccade task, in patients with Parkinson’s disease (PD). 
When PD patients were ON dopamine, they had greater response vigour (peak saccadic velocity residuals) for contingent rewards, whereas when PD patients were OFF medication, they had greater vigour for guaranteed rewards. These results support the view that reward expectation and contingency drive distinct motivational processes, and can be dissociated by manipulating dopaminergic activity. We posit that dopamine promotes goal-directed motivation, but dampens reward-driven vigour, contradictory to the prediction that increased tonic dopamine amplifies reward expectation.","link":"/opendata/grogan-et-al-2020/"},{"title":"Grogan et al. (2022)","text":"Motivation can improve performance when the potential rewards outweigh the cost of effort expended. In working memory (WM), people can prioritise rewarded items at the expense of unrewarded items, suggesting a fixed memory capacity. But can capacity itself change with motivation? Across four experiments (N = 30-34) we demonstrate motivational improvements in WM even when all items were rewarded. However, this was not due to better memory precision, but rather better selection of the probed item within memory. Motivational improvements operated independently of encoding, maintenance, or attention shifts between items in memory. Moreover, motivation slowed responses. This contrasted with the benefits of rewarding items unequally, which allowed prioritisation of one item over another. We conclude that motivation can improve memory recall, not via precision or capacity, but via speed-accuracy trade-offs when selecting the item to retrieve.","link":"/opendata/grogan-et-al-2022/"},{"title":"Gronau et al. (2023)","text":"Response inhibition is a key attribute of human executive control. Standard stop-signal tasks require countermanding a single response; the speed at which that response can be inhibited indexes the efficacy of the inhibitory control networks. 
However, more complex stopping tasks, where one or more components of a multi-component action are cancelled (i.e., response-selective stopping), cannot be explained by the independent-race model appropriate for the simple task (Logan and Cowan, 1984). Healthy human participants (n=28; 10 male; 19-40 years) completed a response-selective stopping task where a 'go' stimulus required simultaneous (bimanual) button presses in response to left and right pointing green arrows. On a subset of trials (30%) one, or both, arrows turned red (constituting the stop signal) requiring that only the button-press(es) associated with red arrows be cancelled. Electromyographic recordings from both index fingers (first dorsal interosseous) permitted voluntary motor responses that resulted in overt button presses, and activity that was cancelled prior to an overt response (i.e., partial, or covert, responses), to be assessed. We propose a simultaneously inhibit and start (SIS) model that extends the independent race model and provides a highly accurate account of response-selective stopping data. Together with fine-grained EMG analysis, our model-based analysis offers converging evidence that the selective-stop signal simultaneously triggers a process that stops the bimanual response and triggers a new unimanual response corresponding to the green arrow. Our results require a reconceptualization of response-selective stopping and offer a tractable framework for assessing such tasks in healthy and patient populations.","link":"/opendata/gronau-et-al-2023/"},{"title":"Gross et al. (2021)","text":"Helping other people can entail risks for the helper. For example, when treating infectious patients, medical volunteers risk their own health. In such situations, decisions to help should depend on the individual’s valuation of others’ well-being (social preferences) and the degree of personal risk the individual finds acceptable (risk preferences).
We investigated how these distinct preferences are psychologically and neurobiologically integrated when helping is risky. We used incentivized decision-making tasks (Study 1; N = 292 adults) and manipulated dopamine and norepinephrine levels in the brain by administering methylphenidate, atomoxetine, or a placebo (Study 2; N = 154 adults). We found that social and risk preferences are independent drivers of risky helping. Methylphenidate increased risky helping by selectively altering risk preferences rather than social preferences. Atomoxetine influenced neither risk preferences nor social preferences and did not affect risky helping. This suggests that methylphenidate-altered dopamine concentrations affect helping decisions that entail a risk to the helper.","link":"/opendata/gross-et-al-2021/"},{"title":"Grosskurth et al. (2019)","text":"Human decisions can be habitual or goal-directed, also known as model-free (MF) or model-based (MB) control. Previous work suggests that the balance between the two decision systems is impaired in psychiatric disorders such as compulsion and addiction, via overreliance on MF control. However, little is known about whether the balance can be altered through task training. Here, 20 healthy participants performed a well-established two-step task that differentiates MB from MF control, across five training sessions. We used computational modelling and functional near-infrared spectroscopy to assess changes in decision-making and brain hemodynamics over time. Mixed-effects modelling revealed overall no substantial changes in MF and MB behavior across training. Although our behavioral and brain findings show task-induced changes in learning rates, these parameters have no direct relation to either MF or MB control or the balance between the two systems, and thus do not support the assumption of training effects on MF or MB strategies.
Our findings indicate that training on the two-step paradigm in its current form does not support a shift in the balance between MF and MB control. We discuss these results with respect to implications for restoring the balance between MF and MB control in psychiatric conditions.","link":"/opendata/grosskurth-et-al-2019/"},{"title":"Haddara & Rahnev (2022)","text":"It is widely believed that feedback improves behavior, but the mechanisms behind this improvement remain unclear. Different theories postulate that feedback has either a direct effect on performance through automatic reinforcement mechanisms or only an indirect effect mediated by a deliberate change in strategy. To adjudicate between these competing accounts, we performed two large experiments on human adults (total N = 518); approximately half the participants received trial-by-trial feedback on a perceptual task, whereas the other half did not receive any feedback. We found that feedback had no effect on either perceptual or metacognitive sensitivity even after 7 days of training. On the other hand, feedback significantly affected participants’ response strategies by reducing response bias and improving confidence calibration. These results suggest that the beneficial effects of feedback stem from allowing people to adjust their strategies for performing the task and not from direct reinforcement mechanisms, at least in the domain of perception.","link":"/opendata/haddara-rahnev-2022/"},{"title":"Haigh et al. (2021)","text":"Weber’s law predicts that stimulus sensitivity will increase proportionally with increases in stimulus intensity. Does this hold for the stimulus of time - specifically, duration in the milliseconds to seconds range? There is conflicting evidence on the relationship between temporal sensitivity and duration. 
Weber’s law predicts a linear relationship between sensitivity and duration on interval timing tasks, while two alternative models predict a reverse J-shaped and a U-shaped relationship. Based on previous research, we hypothesised that temporal sensitivity in humans would follow a U-shaped function, increasing and then decreasing with increases in duration, and that this model would provide a better statistical fit to the data than the reverse-J or the simple Weber’s Law model. In a two-alternative forced-choice interval comparison task, 24 participants made duration judgements about six groups of auditory intervals between 100 and 3,200 ms. Weber fractions were generated for each group of intervals and plotted against time to generate a function describing sensitivity to the stimulus of duration. Although the sensitivity function was slightly concave, and the model describing a U-shaped function gave the best fit to the data, the increase in the model fit was not sufficient to warrant the extra free parameter in the chosen model. Further analysis demonstrated that Weber’s law itself provided a better description of sensitivity to changes in duration than either of the two models tested.","link":"/opendata/haigh-et-al-2021/"},{"title":"Haines et al. (2020)","text":"Trait impulsivity—defined by strong preference for immediate over delayed rewards and difficulties inhibiting prepotent behaviors—is observed in all externalizing disorders, including substance-use disorders. Many laboratory tasks have been developed to identify decision-making mechanisms and correlates of impulsive behavior, but convergence between task measures and self-reports of impulsivity are consistently low. Long-standing theories of personality and decision-making predict that neurally mediated individual differences in sensitivity to (a) reward cues and (b) punishment cues (frustrative nonreward) interact to affect behavior. 
Such interactions obscure one-to-one correspondences between single personality traits and task performance. We used hierarchical Bayesian analysis in three samples with differing levels of substance use (N = 967) to identify interactive dependencies between trait impulsivity and state anxiety on impulsive decision-making. Our findings reveal how anxiety modulates impulsive decision-making and demonstrate benefits of hierarchical Bayesian analysis over traditional approaches for testing theories of psychopathology spanning levels of analysis.","link":"/opendata/haines-et-al-2020/"},{"title":"Hakim et al. (2019)","text":"Complex cognition relies on both on-line representations in working memory (WM), said to reside in the focus of attention, and passive off-line representations of related information. Here, we dissected the focus of attention by showing that distinct neural signals index the on-line storage of objects and sustained spatial attention. We recorded electroencephalogram (EEG) activity during two tasks that employed identical stimulus displays but varied the relative demands for object storage and spatial attention. We found distinct delay-period signatures for an attention task (which required only spatial attention) and a WM task (which invoked both spatial attention and object storage). Although both tasks required active maintenance of spatial information, only the WM task elicited robust contralateral delay activity that was sensitive to mnemonic load. Thus, we argue that the focus of attention is maintained via a collaboration between distinct processes for covert spatial orienting and object-based storage.","link":"/opendata/hakim-et-al-2019/"},{"title":"Hammond et al. (2023)","text":"High anxiety may be related to insufficient sensitivity to changing reinforcement during operant learning.
Whether such findings are specific to anxiety is unclear given a wider literature relating negative affect to abnormal learning and the possibility that relationships are not consistent across incentive types (i.e., punishment and reward) and outcomes (i.e., positive or negative). In two separate samples (N = 76; N = 49), participants completed an operant learning task with positive, negative, and neutral socio-affective feedback, designed to assess adaptive responses to changing environmental volatility. Contrary to expectations, general affective distress, rather than anxiety or depression specifically, was related to an increase, rather than a decrease, in the rate of learning for negative outcomes in volatile, relative to stable, environments. Our results suggest an important but general role of overweighting negative feedback in anxiety and depression when the value of an action becomes uncertain, as when environmental volatility increases.","link":"/opendata/hammond-et-al-2023/"},{"title":"Hao et al. (2022)","text":"The Value Learning Task (VLT; e.g., Raymond & O'Brien, 2009) is widely used to investigate how acquired value impacts how we perceive and process stimuli. The task consists of a series of trials in which participants attempt to maximize accumulated winnings as they make choices from a pair of presented images associated with probabilistic win, loss, or no-change outcomes. The probabilities and outcomes are initially unknown to the participant and thus the task involves decision making and learning under uncertainty. Despite the symmetric outcome structure for win and loss pairs, people learn win associations better than loss associations (Lin, Cabrera-Haro, & Reuter-Lorenz, 2020). This learning asymmetry could lead to differences when the stimuli are probed in subsequent tasks, compromising inferences about how acquired value affects downstream processing.
We investigate the nature of the asymmetry using a standard error-driven reinforcement learning model with a softmax choice rule. Despite having no special role for valence, the model yields the learning asymmetry observed in human behavior, whether the model parameters are set to maximize empirical fit, or task payoff. The asymmetry arises from an interaction between a neutral initial value estimate and a choice policy that exploits while exploring, leading to more poorly discriminated value estimates for loss stimuli. We also show how differences in estimated individual learning rates help to explain individual differences in the observed win-loss asymmetries, and how the final value estimates produced by the model provide a simple account of a post-learning explicit value categorization task.","link":"/opendata/hao-et-al-2022/"},{"title":"Harada-Laszlo et al. (2023)","text":"Catastrophizing is a transdiagnostic construct that has been suggested to precipitate and maintain a multiplicity of psychiatric disorders, including anxiety, depression, PTSD and OCD. However, the underlying cognitive mechanisms that result in catastrophizing are unknown. Relating reinforcement learning model parameters to catastrophizing may allow us to further understand the process of catastrophizing. Using a modified four-armed bandit task, we aimed to investigate the relationship between reinforcement learning parameters and self-report catastrophizing questionnaire scores to gain a mechanistic understanding of how catastrophizing may alter learning. We recruited 211 participants to complete a computerised four-armed bandit task and tested the fit of six reinforcement learning models on our data, including two novel models which both incorporated a scaling factor related to a history of negative outcomes variable. 
We investigated the relationship between self-report catastrophizing scores and free parameters from the overall best-fitting model, along with the best-fitting model to include history, using Pearson’s correlations. Subsequently, we reassessed these relationships using multiple regression analyses to evaluate whether any observed relationships were altered when relevant IQ and mental health covariates were applied. Model-agnostic analyses indicated there were effects of outcome history on reaction time and accuracy, and that the effects on accuracy related to catastrophizing. The overall model of best fit was the Standard Rescorla-Wagner Model and the best-fitting model to include history was a model in which learning rate was scaled by history of negative outcome. We found no effect of catastrophizing on the scaling by history of negative outcome parameter (r=0.003, p=0.679), the learning rate parameter (r=0.026, p=0.703) or the inverse temperature parameter (r=0.086, p=0.220). We were unable to relate catastrophizing to any of the reinforcement learning parameters we investigated. This implies that catastrophizing is not straightforwardly linked to any changes to learning after a series of negative outcomes are received. Future research could use further exploration of the space of models which include a history parameter.","link":"/opendata/harada-laszlo-et-al-2023/"},{"title":"Harhen & Bornstein (2022)","text":"Patch foraging presents a sequential decision-making problem widely studied across organisms — stay with a current option or leave it in search of a better alternative? Behavioral ecology has identified an optimal strategy for these decisions, but, across species, foragers systematically deviate from it, staying too long with an option or overharvesting relative to this optimum. Despite the ubiquity of this behavior, the mechanism underlying it remains unclear and an object of extensive investigation. 
Here, we address this gap by approaching foraging as both a decision-making and learning problem. Specifically, we propose a model in which foragers 1) rationally infer the structure in their environment and 2) use their uncertainty over the inferred structure representation to adaptively discount future rewards. We find that overharvesting can emerge from this rational statistical inference and uncertainty adaptation process. In a patch leaving task, we show that human participants adapt their foraging to the richness and dynamics of the environment in ways consistent with our model. These findings suggest that definitions of optimal foraging could be extended by considering how foragers reduce and adapt to uncertainty over representations of their environment.","link":"/opendata/harhen-bornstein-2022/"},{"title":"Harootonian et al. (2022)","text":"Successful navigation requires the ability to compute one’s location and heading from incoming multisensory information. Previous work has shown that this multisensory input comes in two forms: body-based idiothetic cues, from one’s own rotations and translations, and visual allothetic cues, from the environment (usually visual landmarks). However, exactly how these two streams of information are integrated is unclear, with some models suggesting the body-based idiothetic and visual allothetic cues are combined, while others suggest they compete. In this paper we investigated the integration of body-based idiothetic and visual allothetic cues in the computation of heading using virtual reality. In our experiment, participants performed a series of body turns of up to 360 degrees in the dark with only a brief flash (300ms) of visual feedback en route. Because the environment was virtual, we had full control over the visual feedback and were able to vary the offset between this feedback and the true heading angle. 
By measuring the effect of the feedback offset on the angle participants turned, we were able to determine the extent to which they incorporated visual feedback as a function of the offset error. By further modeling this behavior we were able to quantify the computations people used. While there were considerable individual differences in performance on our task, with some participants mostly ignoring the visual feedback and others relying on it almost entirely, our modeling results suggest that almost all participants used the same strategy in which idiothetic and allothetic cues are combined when the mismatch between them is small, but compete when the mismatch is large. These findings suggest that participants update their estimate of heading using a hybrid strategy that mixes the combination and competition of cues.","link":"/opendata/harootonian-et-al-2022/"},{"title":"Harris et al. (2022)","text":"In this study, we examined the relationship between physiological encoding of surprise and the learning of anticipatory eye movements. Active inference portrays perception and action as interconnected inference processes, driven by the imperative to minimise the surprise of sensory observations. To examine this characterisation of oculomotor learning during a hand-eye coordination task, we tested whether anticipatory eye movements were updated in accordance with Bayesian principles and whether trial-by-trial learning rates tracked pupil dilation as a marker of ‘surprise’. Forty-four participants completed an interception task in immersive virtual reality that required them to hit bouncing balls that had either expected or unexpected bounce profiles. We recorded anticipatory eye movements known to index participants’ beliefs about likely ball bounce trajectories. 
By fitting a hierarchical Bayesian inference model to the trial-wise trajectories of these predictive eye movements, we were able to estimate each individual’s expectations about bounce trajectories, rates of belief updating, and precision-weighted prediction errors. We found that the task-evoked pupil response tracked prediction errors and learning rates but not beliefs about ball bounciness or environmental volatility. These findings are partially consistent with active inference accounts and shed light on how encoding of surprise may shape the control of action.","link":"/opendata/harris-et-al-2022/"},{"title":"Hayes & Wedell (2022a)","text":"Previous studies of reinforcement learning (RL) have established that choice outcomes are encoded in a context-dependent fashion. Several computational models have been proposed to explain context-dependent encoding, including reference point centering and range adaptation models. The former assumes that outcomes are centered around a running estimate of the average reward in each choice context, while the latter assumes that outcomes are compared to the minimum reward and then scaled by an estimate of the range of outcomes in each choice context. However, there are other computational mechanisms that can explain context dependence in RL. In the present study, a frequency encoding model is introduced that assumes outcomes are evaluated based on their proportional rank within a sample of recently experienced outcomes from the local context. A range-frequency model is also considered that combines the range adaptation and frequency encoding mechanisms. We conducted two fully incentivized behavioral experiments using choice tasks for which the candidate models make divergent predictions. The results were most consistent with models that incorporate frequency or rank-based encoding. 
The findings from these experiments deepen our understanding of the underlying computational processes mediating context-dependent outcome encoding in human RL.","link":"/opendata/hayes-wedell-2022a/"},{"title":"Hayes & Wedell (2022b)","text":"In reinforcement learning tasks, people learn the values of options relative to other options in the local context. Prior research suggests that relative value learning is enhanced when choice contexts are temporally clustered in a blocked sequence compared to a randomly interleaved sequence. The present study was aimed at further investigating the effects of blocked versus interleaved training using a choice task that distinguishes among different contextual encoding models. Our results showed that the presentation format in which contexts are experienced can lead to qualitatively distinct forms of relative value learning. This conclusion was supported by a combination of model-free and model-based analyses. In the blocked condition, choice behavior was most consistent with a reference point model in which outcomes are encoded relative to a dynamic estimate of the contextual average reward. In contrast, the interleaved condition was best described by a range-frequency encoding model. We propose that blocked training makes it easier to track contextual outcome statistics, such as the average reward, which may then be used to relativize the values of experienced outcomes. When contexts are interleaved, range-frequency encoding may serve as a more efficient means of storing option values in memory for later retrieval.","link":"/opendata/hayes-wedell-2022b/"},{"title":"Heffner et al. (2021)","text":"People make decisions based on deviations from expected outcomes, known as prediction errors. Past work has focused on reward prediction errors, largely ignoring violations of expected emotional experiences-emotion prediction errors. 
We leverage a method to measure real-time fluctuations in emotion as people decide to punish or forgive others. Across four studies (N = 1,016), we reveal that emotion and reward prediction errors have distinguishable contributions to choice, such that emotion prediction errors exert the strongest impact during decision-making. We additionally find that a choice to punish or forgive can be decoded in less than a second from an evolving emotional response, suggesting that emotions swiftly influence choice. Finally, individuals reporting significant levels of depression exhibit selective impairments in using emotion-but not reward-prediction errors. Evidence for emotion prediction errors potently guiding social behaviours challenges standard decision-making models that have focused solely on reward.","link":"/opendata/heffner-et-al-2021/"},{"title":"Hein et al. (2021)","text":"Clinical and subclinical (trait) anxiety impairs decision making and interferes with learning. Less understood are the effects of temporary anxious states on learning and decision making in healthy populations, and whether these can serve as a model for clinical anxiety. Here we test whether anxious states in healthy individuals elicit a pattern of aberrant behavioural, neural, and physiological responses comparable with those found in anxiety disorders-particularly when processing uncertainty in unstable environments. In our study, both a state anxious and a control group learned probabilistic stimulus-outcome mappings in a volatile task environment while we recorded their electrophysiological (EEG) signals. By using a hierarchical Bayesian model of inference and learning, we assessed the effect of state anxiety on Bayesian belief updating with a focus on uncertainty estimates. State anxiety was associated with an underestimation of environmental uncertainty, and informational uncertainty about the reward tendency. 
Anxious individuals’ beliefs about reward contingencies were more precise (had smaller uncertainty) and thus more resistant to updating, ultimately leading to impaired reward-based learning. State anxiety was also associated with greater uncertainty about volatility. We interpret this pattern as evidence that state anxious individuals are less tolerant to informational uncertainty about the contingencies governing their environment and more willing to be uncertain about the level of stability of the world itself. Further, we tracked the neural representation of belief update signals in the trial-by-trial EEG amplitudes. In control participants, lower-level precision-weighted prediction errors (pwPEs) about reward tendencies were represented in the ERP signals across central and parietal electrodes peaking at 496 ms, overlapping with the late P300 in classical ERP analysis. The state anxiety group did not exhibit a significant representation of low-level pwPEs, and there were no significant differences between the groups. Smaller variance in low-level pwPE about reward tendencies in state anxiety could partially account for the null results. Expanding previous computational work on trait anxiety, our findings establish that temporary anxious states in healthy individuals impair reward-based learning in volatile environments, primarily through changes in uncertainty estimates, which play a central role in current Bayesian accounts of perceptual inference and learning.","link":"/opendata/hein-et-al-2021/"},{"title":"Held et al. (2023)","text":"Humans can up- or downregulate the degree to which they rely on task information for goal directed behaviour, a process often referred to as cognitive control. Adjustments in cognitive control are traditionally studied in response to experienced or expected task-rule conflict. However, recent theories suggest that people can also learn to adapt control settings through reinforcement. 
Across three preregistered task switching experiments (n=415), we selectively rewarded correct performance on trials with either more (incongruent) or less (congruent) task-rule conflict. Results confirmed the hypothesis that people rewarded more on incongruent trials showed smaller task-rule congruency effects, thus optimally adapting their control settings to the reward scheme. Using drift diffusion modelling, we further show that this reinforcement of cognitive control may occur through conflict-dependent within-trial adjustments of response thresholds after conflict detection. Together, our findings suggest that, while people remain more efficient at learning stimulus-response associations through rewards, they can similarly learn cognitive control strategies through reinforcement.","link":"/opendata/held-et-al-2023/"},{"title":"Hellmann et al. (2022)","text":"Many decisions must be made with incomplete information. The ability to evaluate the resulting uncertainty is a key aspect of metacognition. As both confidence judgments and reaction times are expected to be closely related to sensory uncertainty, a mathematical model of human perceptual decision-making should be able to explain them both. Here, we propose the new dynamical evidence and visibility model (dynWEV), an extension of the drift diffusion model of decision making, to account for choices, reaction times, and confidence at the same time. The decision process in a binary perceptual task is described as a Wiener process accumulating sensory evidence about the choice options bounded by two constant thresholds. To account for confidence judgments, we assume a period of postdecisional accumulation of sensory evidence and parallel accumulation of information about the reliability of the present stimulus. We examined model fits in two experiments, a motion discrimination task with random dot kinematograms and a post-masked orientation discrimination task. 
A comparison between the dynamical evidence and visibility model, two-stage dynamical signal detection theory, and several versions of race models of decision making showed that only dynWEV produced acceptable fits of choices, confidence, and reaction time. This finding suggests that confidence judgments not only depend on choice evidence, but also on a parallel estimate of sensory uncertainty as well as postdecisional accumulation of evidence.","link":"/opendata/hellmann-et-al-2022/"},{"title":"Heo et al. (2021)","text":"Depression is characterized by deficits in the reinforcement learning (RL) process. Although many computational and neural studies have extended our knowledge of the impact of depression on RL, most focus on habitual control (model-free RL), yielding a relatively poor understanding of goal-directed control (model-based RL) and arbitration control to find a balance between the two. We investigated the effects of subclinical depression on model-based and model-free learning in the prefrontal-striatal circuitry. First, we found that subclinical depression is associated with the attenuated state and reward prediction error representation in the insula and caudate. Critically, we found that it accompanies the disrupted arbitration control between model-based and model-free learning in the predominantly inferior lateral prefrontal cortex and frontopolar cortex. We also found that depression undermines the ability to exploit viable options, called exploitation sensitivity. These findings characterize how subclinical depression influences different levels of the decision-making hierarchy, advancing previous conflicting views that depression simply influences either habitual or goal-directed control. Our study creates possibilities for various clinical applications, such as early diagnosis and behavioral therapy design.","link":"/opendata/heo-et-al-2021/"},{"title":"Hertz et al. (2021)","text":"Social learning underpins our species’ extraordinary success. 
Learning through observation has been investigated in several species, but learning from advice-where information is intentionally broadcast-is less understood. We used a pre-registered, online experiment (n = 1492) combined with computational modelling to examine learning through observation and advice. Participants were more likely to immediately follow advice than to copy an observed choice, but this was dependent upon trust in the adviser: highly paranoid participants were less likely to follow advice in the short term. Reinforcement learning modelling revealed two distinct patterns regarding the long-term effects of social information: some individuals relied fully on social information, whereas others reverted to trial-and-error learning. This variation may affect the prevalence and fidelity of socially transmitted information. Our results highlight the privileged status of advice relative to observation and how the assimilation of intentionally broadcast information is affected by trust in others.","link":"/opendata/hertz-et-al-2021/"},{"title":"Hewitson et al. (2023)","text":"People form metacognitive representations of their own abilities across a range of tasks. How these representations are influenced by errors during learning is poorly understood. Here we ask how metacognitive confidence judgments of performance during motor learning are shaped by the learner’s recent history of errors. Across four motor learning experiments, our computational modeling approach demonstrated that people’s confidence judgments are best explained by a recency-weighted averaging of visually observed errors. Moreover, in the formation of these confidence estimates, people appear to re-weight observed motor errors according to a subjective cost function. Confidence judgments were adaptive, incorporating recent motor errors in a manner that was sensitive to the volatility of the learning environment, integrating a shallower history when the environment was more volatile. 
Finally, confidence tracked motor errors in the context of both implicit and explicit motor learning, but only showed evidence of influencing behavior in the latter. Our study thus provides a novel descriptive model that successfully approximates the dynamics of metacognitive judgments during motor learning.","link":"/opendata/hewitson-et-al-2023/"},{"title":"Hezemans et al. (2022)","text":"Apathy is a debilitating feature of many neuropsychiatric diseases, that is typically described as a reduction of goal-directed behaviour. Despite its prevalence and prognostic importance, the mechanisms underlying apathy remain controversial. Degeneration of the locus coeruleus-noradrenaline system is known to contribute to motivational deficits, including apathy. In healthy people, noradrenaline has been implicated in signalling the uncertainty of expectations about the environment. We proposed that noradrenergic deficits contribute to apathy by modulating the relative weighting of prior beliefs about action outcomes. We tested this hypothesis in the clinical context of Parkinson’s disease, given its associations with apathy and noradrenergic dysfunction. Participants with mild-to-moderate Parkinson’s disease (N = 17) completed a randomised double-blind, placebo-controlled, crossover study with 40 mg of the noradrenaline reuptake inhibitor atomoxetine. Prior weighting was inferred from psychophysical analysis of performance in an effort-based visuomotor task, and was confirmed as negatively correlated with apathy. Locus coeruleus integrity was assessed in vivo using magnetisation transfer imaging at ultra-high field 7T. The effect of atomoxetine depended on locus coeruleus integrity: participants with a more degenerate locus coeruleus showed a greater increase in prior weighting on atomoxetine versus placebo. 
The results indicate a contribution of the noradrenergic system to apathy and potential benefit from noradrenergic treatment of people with Parkinson’s disease, subject to stratification according to locus coeruleus integrity. More broadly, these results reconcile emerging predictive processing accounts of the role of noradrenaline in goal-directed behaviour with the clinical symptom of apathy and its potential pharmacological treatment.","link":"/opendata/hezemans-et-al-2022/"},{"title":"Hirmas & Engelmann (2023)","text":"Does attention have a causal impact on risky decisions? We address this question in a preregistered experiment in which participants accept or reject a series of mixed gambles while exogenously varying how information can be sampled. Specifically, in each trial participants observe the outcomes of a mixed-gamble with gains and losses presented sequentially. To isolate the causal role of attention on the decision process, we manipulate for how long a specific attribute is presented before showing the next one (e.g., 600 ms/800 ms vs 400 ms). Our results partially confirm our preregistered hypotheses that longer exposure to an attribute increases its weight on the decision. While we find no effects on choice frequency, we observe specific effects on the decision weights of our Random Utility Model. Presenting losses longer (for 600 ms, but not 800 ms) than gains (400 ms) leads to increased sensitivity for losses. When gains are presented for longer (600 ms and 800 ms) than losses (400 ms), the participants show increased sensitivity to both gain and loss values in their decision. Loss aversion reflects this trend across attention treatments, but differences remain non-significant. Further exploratory analyses show that specifically participants with higher impulsiveness become more sensitive to attribute values when gains are presented for longer. 
Jointly, these results support the notion that attention has a causal impact on the sensitivity to specific attributes during risky choice. Moreover, our results underline the moderating role of impulsiveness on the relationship between attention and choice.","link":"/opendata/hirmas-engelmann-2023/"},{"title":"Hitchcock et al. (2021)","text":"How does rumination affect reinforcement learning-the ubiquitous process by which we adjust behavior after error in order to behave more effectively in the future? In a within-subject design (n=49), we tested whether experimentally manipulated rumination disrupts reinforcement learning in a multidimensional learning task previously shown to rely on selective attention. Rumination impaired performance, yet unexpectedly this impairment could not be attributed to decreased attentional breadth (quantified using a decay parameter in a computational model). Instead, trait rumination (between subjects) was associated with higher decay rates (implying narrower attention), yet not with impaired performance. Our task-performance results accord with the possibility that state rumination promotes stress-generating behavior in part by disrupting reinforcement learning. The trait-rumination finding accords with the predictions of a prominent model of trait rumination (the attentional-scope model). More work is needed to understand the specific mechanisms by which state rumination disrupts reinforcement learning.","link":"/opendata/hitchcock-et-al-2021/"},{"title":"Hitchcock et al. (2022)","text":"Cognitive theories of depression, and mindfulness theories of well-being, converge on the notion that self-judgment plays a critical role in mental health. However, these theories have rarely been tested via tasks and computational modeling analyses that can disentangle the information processes operative in self-judgments. 
We applied a drift-diffusion computational model to the self-referential encoding task (SRET) collected before and after an 8-week mindfulness intervention (n = 96). A drift-rate regression parameter representing positive-relative to negative-self-referential judgment strength positively related to mindful awareness and inversely related to depression, both at baseline and over time; however, this parameter did not significantly relate to the interaction between mindful awareness and nonjudgmentalness. At the level of individual depression symptoms, at baseline, a spectrum of symptoms (inversely) correlated with the drift-rate regression parameter, suggesting that many distinct depression symptoms relate to valenced self-judgment between subjects. By contrast, over the intervention, changes in only a smaller subset of anhedonia-related depression symptoms showed substantial relationships with this parameter. Both behavioral and model-derived measures showed modest split-half and test-retest correlations. Results support cognitive theories that implicate self-judgment in depression and mindfulness theories, which imply that mindful awareness should lead to more positive self-views.","link":"/opendata/hitchcock-et-al-2022/"},{"title":"Homan et al. (2019)","text":"By combining computational, morphological, and functional analyses, this study relates latent markers of associative threat learning to overt post-traumatic stress disorder (PTSD) symptoms in combat veterans. Using reversal learning, we found that symptomatic veterans showed greater physiological adjustment to cues that did not predict what they had expected, indicating greater sensitivity to prediction errors for negative outcomes. This exaggerated weighting of prediction errors shapes the dynamic learning rate (associability) and value of threat predictive cues. 
The degree to which the striatum tracked the associability partially mediated the positive correlation between prediction-error weights and PTSD symptoms, suggesting that both increased prediction-error weights and decreased striatal tracking of associability independently contribute to PTSD symptoms. Furthermore, decreased neural tracking of value in the amygdala, in addition to smaller amygdala volume, independently corresponded to higher PTSD symptom severity. These results provide evidence for distinct neurocomputational contributions to PTSD symptoms.","link":"/opendata/homan-et-al-2019/"},{"title":"Hong et al. (2021)","text":"To obtain a coherent perception of the world, our senses need to be in alignment. When we encounter misaligned cues from two sensory modalities, the brain must infer which cue is faulty and recalibrate the corresponding sense. We examined whether and how the brain uses cue reliability to identify the miscalibrated sense by measuring the audiovisual ventriloquism aftereffect for stimuli of varying visual reliability. To adjust for modality-specific biases, visual stimulus locations were chosen based on perceived alignment with auditory stimulus locations for each participant. During an audiovisual recalibration phase, participants were presented with bimodal stimuli with a fixed perceptual spatial discrepancy; they localized one modality, cued after stimulus presentation. Unimodal auditory and visual localization was measured before and after the audiovisual recalibration phase. 
We compared participants’ behavior to the predictions of three models of recalibration: (a) Reliability-based: each modality is recalibrated based on its relative reliability-less reliable cues are recalibrated more; (b) Fixed-ratio: the degree of recalibration for each modality is fixed; (c) Causal-inference: recalibration is directly determined by the discrepancy between a cue and its estimate, which in turn depends on the reliability of both cues, and inference about how likely the two cues derive from a common source. Vision was hardly recalibrated by audition. Auditory recalibration by vision changed idiosyncratically as visual reliability decreased: the extent of auditory recalibration either decreased monotonically, peaked at medium visual reliability, or increased monotonically. The latter two patterns cannot be explained by either the reliability-based or fixed-ratio models. Only the causal-inference model of recalibration captures the idiosyncratic influences of cue reliability on recalibration. We conclude that cue reliability, causal inference, and modality-specific biases guide cross-modal recalibration indirectly by determining the perception of audiovisual stimuli.","link":"/opendata/hong-et-al-2021/"},{"title":"Horvath et al. (2021)","text":"Humans often face sequential decision-making problems, in which information about the environmental reward structure is detached from rewards for a subset of actions. In the current exploratory study, we introduce an information-selective symmetric reversal bandit task to model such situations and obtained choice data on this task from 24 participants. To arbitrate between different decision-making strategies that participants may use on this task, we developed a set of probabilistic agent-based behavioral models, including exploitative and explorative Bayesian agents, as well as heuristic control agents. 
Upon validating the model and parameter recovery properties of our model set and summarizing the participants’ choice data in a descriptive way, we used a maximum likelihood approach to evaluate the participants’ choice data from the perspective of our model set. In brief, we provide quantitative evidence that participants employ a belief state-based hybrid explorative-exploitative strategy on the information-selective symmetric reversal bandit task, lending further support to the finding that humans are guided by their subjective uncertainty when solving exploration-exploitation dilemmas. SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1007/s42113-021-00112-3.","link":"/opendata/horvath-et-al-2021/"},{"title":"Horwath et al. (2022)","text":"A large body of research illustrates the prioritization of goal-relevant information in memory; however, it is unclear how reward-related memories are organized. Using a rewarded free recall paradigm, we investigated how reward motivation structures the organization of memory around temporal and higher-order contexts. To better understand these processes, we simulated our findings using a reward-modulated variant of the Context Maintenance and Retrieval Model (CMR; Polyn et al., 2009). In the first study, we found that reward did not influence temporal clustering, but instead organized memory based on reward category. Further, we showed that a reward-modulated learning rate and source features of CMR most accurately depict reward’s enhancement of memory and clustering by value. In a second study, we showed that reward-memory effects can exist in both extended periods of sustained motivation and frequent changes in motivation, by showing equivocal reward effects using mixed- and pure-list motivation manipulations. However, we showed that a reward-modulated learning rate in isolation most accurately depicts reward’s enhancement of memory using a pure-list manipulation. 
Overall, we conclude that reward-related memories are adaptively organized by higher-order value information, and contextual binding to value contexts may only be necessary when rewards are intermittent versus sustained.","link":"/opendata/horwath-et-al-2022/"},{"title":"Hotaling et al. (2019)","text":"When people make risky choices, two kinds of information are crucial: outcome values and outcome probabilities. Here, we demonstrate that the juncture at which value and probability information is provided has a fundamental effect on choice. Across four experiments involving 489 participants, we compared two decision-making scenarios: one in which value information was revealed during sampling (standard) and one in which value information was revealed after sampling (value ignorance). On average, participants made riskier choices when value information was provided after sampling. Moreover, parameter estimates from a hierarchical Bayesian implementation of cumulative-prospect theory suggested that participants overweighted rare events when value information was absent during sampling but did not overweight such events in the standard condition. This suggests that the impact of rare events on choice relies crucially on the timing of probability and value integration. We provide paths toward mechanistic explanations of our results based on frameworks that assume different underlying cognitive architectures.","link":"/opendata/hotaling-et-al-2019/"},{"title":"Houshmand-Chatroudi & Yotsumoto (2023)","text":"Perception of time is not always veridical; rather, it is subjected to distortions. One such compelling distortion is that the duration of regularly spaced intervals is often overestimated. One account suggests that excitatory phases of neural entrainment concomitant with such stimuli play a major role. However, assessing the correlation between the power of entrained oscillations and time dilation has yielded inconclusive results. 
In this study, we evaluated whether phase characteristics of neural oscillations impact time dilation. For this purpose, we entrained 10-Hz oscillations and experimentally manipulated the presentation of flickers so that they were presented either in-phase or out-of-phase relative to the established rhythm. Simultaneous electroencephalography (EEG) recordings confirmed that in-phase and out-of-phase flickers had landed on different inhibitory phases of high-amplitude alpha oscillations. Moreover, to control for confounding factors of expectancy and masking, we created two additional conditions. Results, supplemented by the Bayesian analysis, indicated that the phase of entrained visual alpha oscillation does not differentially affect flicker-induced time dilation. Repeating the same experiment with regularly spaced auditory stimuli replicated the null findings. Moreover, we found a robust enhancement of precision for the reproduction of flickers relative to static stimuli that was partially supported by entrainment models. We discussed our results within the framework of neural oscillations and time-perception models, suggesting that inhibitory cycles of visual alpha may have little relevance to the overestimation of regularly spaced intervals. Moreover, based on our findings, we proposed that temporal oscillators, assumed in entrainment models, may act independently of excitatory phases in the brain’s lower level sensory areas.","link":"/opendata/houshmand-chatroudi-yotsumoto-2023/"},{"title":"Hoven et al. (2022)","text":"A growing body of evidence suggests that, during decision-making, BOLD signal in the ventromedial prefrontal cortex (VMPFC) correlates both with motivational variables - such as incentives and expected values - and metacognitive variables - such as confidence judgments - which reflect the subjective probability of being correct. 
At the behavioral level, we recently demonstrated that the value of monetary stakes biases confidence judgments, with gain (respectively loss) prospects increasing (respectively decreasing) confidence judgments, even for similar levels of difficulty and performance. If and how this value-confidence interaction is reflected in the VMPFC remains unknown. Here, we used an incentivized perceptual decision-making fMRI task that dissociates key decision-making variables, thereby allowing us to test several hypotheses about the role of the VMPFC in the value-confidence interaction. While our initial analyses seemingly indicate that the VMPFC combines incentives and confidence to form an expected value signal, we falsified this conclusion with a meticulous dissection of qualitative activation patterns. Rather, our results show that strong VMPFC confidence signals observed in trials with gain prospects are disrupted in trials with no - or negative (loss) - monetary prospects. Deciphering how decision variables are represented and interact at finer scales seems necessary to better understand biased (meta)cognition.","link":"/opendata/hoven-et-al-2022/"},{"title":"Hoven et al. (2023)","text":"Confidence is suggested to be a key component in psychiatry and manifests at various hierarchical levels, from confidence in a decision (local confidence), to confidence about performance (global confidence), to higher-order traits such as self-beliefs. Most research focuses on local confidence, but global levels may relate more closely to symptoms. Here, using a transdiagnostic framework, we tested the relationships between self-reported psychopathology, local and global confidence, and higher-order self-beliefs in a general population sample (N = 489). We show contrasting relationships between confidence and psychopathology dimensions. An anxious-depression dimension related to local and global underconfidence. 
By contrast, a compulsive-intrusive-thoughts dimension related to increased overconfidence at both levels, and showed a decoupling between (1) higher-order self-beliefs and (2) local and global task confidence. The strongest predictor of mental health was a self-beliefs dimension. This study examines higher-order confidence in relation to psychiatric symptoms fluctuating in the general population. Critically, psychopathological symptoms show distinct associations with confidence.","link":"/opendata/hoven-et-al-2023/"},{"title":"Hummos et al. (2022)","text":"Interactions across frontal cortex are critical for cognition. Animal studies suggest a role for mediodorsal thalamus (MD) in these interactions, but the computations performed and direct relevance to human decision making are unclear. Here, inspired by animal work, we extended a neural model of an executive frontal-MD network and trained it on a human decision-making task for which neuroimaging data were collected. Using a biologically-plausible learning rule, we found that the model MD thalamus compressed its cortical inputs (dorsolateral prefrontal cortex, dlPFC) underlying stimulus-response representations. Through direct feedback to dlPFC, this thalamic operation efficiently partitioned cortical activity patterns and enhanced task switching across different contingencies. To account for interactions with other frontal regions, we expanded the model to compute higher-order strategy signals outside dlPFC, and found that the MD offered a more efficient route for such signals to switch dlPFC activity patterns. Human fMRI data provided evidence that the MD engaged in feedback to dlPFC, and had a role in routing orbitofrontal cortex inputs when subjects switched behavioral strategy. Collectively, our findings contribute to the emerging evidence for thalamic regulation of frontal interactions in the human brain.","link":"/opendata/hummos-et-al-2022/"},{"title":"Hunter et al. 
(2021)","text":"A goal of computational psychiatry is to ground symptoms in basic mechanisms. Theory suggests that avoidance in anxiety disorders may reflect dysregulated mental simulation, a process for evaluating candidate actions. If so, these covert processes should have observable consequences: choices reflecting increased and biased deliberation. In two online general population samples, we examined how self-report symptoms of social anxiety disorder predict choices in a socially framed reinforcement learning task, the patent race, in which the pattern of choices reflects the content of deliberation. Using a computational model to assess learning strategy, we found that self-report social anxiety was indeed associated with increased deliberative evaluation. This effect was stronger for a particular subset of feedback (upward counterfactual) in one of the experiments, broadly matching the biased content of rumination in social anxiety disorder, and robust to controlling for other psychiatric symptoms. These results suggest a grounding of symptoms of social anxiety disorder in more basic neuro-computational mechanisms.","link":"/opendata/hunter-et-al-2021/"},{"title":"Ivanov et al. (2023)","text":"A series of recent studies has demonstrated that attentional selection is modulated by statistical regularities, even when they concern task-irrelevant stimuli. Irrelevant distractors presented more frequently at one location interfere less with search than distractors presented elsewhere. To account for this finding, it has been proposed that through statistical learning, the frequent distractor location becomes suppressed relative to the other locations. Learned distractor suppression has mainly been studied at the group level, where individual differences are treated as unexplained error variance. Yet these individual differences may provide important mechanistic insights and could be predictive of cognitive and real-life outcomes. 
In the current study, we ask whether in an additional singleton task, the standard measures of attentional capture and learned suppression are reliable and stable at the level of the individual. In an online study, we assessed both the within- and between-session reliability of individual-level measures of attentional capture and learned suppression. We show that the measures of attentional capture, but not of distractor suppression, are moderately stable within the same session (i.e., split-half reliability). Test-retest reliability over a 2-month period was found to be moderate for attentional capture but weak or absent for suppression. RT-based measures proved to be superior to accuracy measures. While these RT-based measures produce very robust findings at the group level, their predictive validity is still limited when it comes to individual-level performance. We discuss the implications for future research drawing on inter-individual variation in the attentional biases that result from statistical learning.","link":"/opendata/ivanov-et-al-2023/"},{"title":"Jackson & Cavanagh (2023)","text":"Reward Positivity (RewP) is a feedback-locked event-related potential component that is specifically elicited by rewarding feedback and scales with positive reward prediction error, a hallmark of reinforcement learning models. The RewP is also diminished in depression, suggesting that it may be a novel marker of anhedonia. Here, we examined if a sad mood induction offered an opportunity to causally induce a mood-related alteration of the RewP and reward-related learning. In Experiment 1 (N = 50 total), participants were randomly assigned to previously established sad or neutral mood induction procedures before a probabilistic selection task. This manipulation failed to induce changes in affect, suggesting that standard methods are inadequate. 
In Experiment 2 (N = 50 total), participants were randomly assigned to newly developed happy versus sad mood manipulations, which successfully induced large changes in affect. While the RewP was unaffected by mood induction, positive mood moderated the relationship between prediction error encoding in the RewP and reward learning, such that low positive mood and low prediction error encoding resulted in poorer reward learning. These findings provide a mechanistic example of how reduced positive affect moderates reward learning via poorer information encoding in the RewP.","link":"/opendata/jackson-cavanagh-2023/"},{"title":"Jana & Aron (2022)","text":"Mind wandering is a state in which our mental focus shifts toward task-unrelated thoughts. Although it is known that mind wandering has a detrimental effect on concurrent task performance (e.g., decreased accuracy), its effect on executive functions is poorly studied. Yet the latter question is relevant to many real-world situations, such as rapid stopping during driving. Here, we studied how mind wandering would affect the requirement to subsequently stop an incipient motor response. In healthy adults, we tested whether mind wandering affected stopping and, if so, which component of stopping was affected: the triggering of the inhibitory brake or the implementation of the brake following triggering. We observed that during mind wandering, stopping latency increased, as did the percentage of trials with failed triggering. Indeed, 67% of the variance of the increase in stopping latency was explained by increased trigger failures. Thus, mind wandering primarily affects stopping by affecting the triggering of the brake.","link":"/opendata/jana-aron-2022/"},{"title":"Jang et al. (2019)","text":"Dopamine is thought to provide reward prediction error signals to temporal lobe memory systems, but the role of these signals in episodic memory has not been fully characterized. 
Here we developed an incidental memory paradigm to (i) estimate the influence of reward prediction errors on the formation of episodic memories, (ii) dissociate this influence from surprise and uncertainty, (iii) characterize the role of temporal correspondence between prediction error and memoranda presentation and (iv) determine the extent to which this influence is dependent on memory consolidation. We found that people encoded incidental memoranda more strongly when they gambled for potential rewards. Moreover, the degree to which gambling strengthened encoding scaled with the reward prediction error experienced when memoranda were presented (and not before or after). This encoding enhancement was detectable within minutes and did not differ substantially after 24 h, indicating that it is not dependent on memory consolidation. These results suggest a computationally and temporally specific role for reward prediction error signalling in memory formation.","link":"/opendata/jang-et-al-2019/"},{"title":"Jangraw et al. (2023)","text":"Does our mood change as time passes? This question is central to behavioural and affective science, yet it remains largely unexamined. To investigate, we intermixed subjective momentary mood ratings into repetitive psychology paradigms. Here we demonstrate that task and rest periods lowered participants’ mood, an effect we call ‘Mood Drift Over Time’. This finding was replicated in 19 cohorts totalling 28,482 adult and adolescent participants. The drift was relatively large (-13.8% after 7.3 min of rest, Cohen’s d = 0.574) and was consistent across cohorts. Behaviour was also impacted: participants were less likely to gamble in a task that followed a rest period. Importantly, the drift slope was inversely related to reward sensitivity. We show that accounting for time using a linear term significantly improves the fit of a computational model of mood. 
Our work provides conceptual and methodological reasons for researchers to account for time’s effects when studying mood and behaviour.","link":"/opendata/jangraw-et-al-2023/"},{"title":"Jaswetz et al. (2022)","text":"Simultaneous execution of memory retrieval and cognitively demanding interventions alters the subjective experience of aversive memories. This principle can be used in treatment to target traumatic memories. An often-used interpretation is that cognitive demand interferes with memory reconsolidation. Laboratory models applying this technique often do not meet some important procedural steps thought necessary to trigger reconsolidation. It therefore remains unclear whether cognitively demanding interventions can alter the reconsolidation process of aversive memories. Here, 78 (41 included) healthy participants completed an established 3-day threat conditioning paradigm. Two conditioned stimuli were paired with a shock (CS+ s) and one was not (CS-). The next day, one CS+ (CS+ R), but not the other (CS+), was presented as a reminder. After 10 min, participants performed a 2-back working memory task. On day three, we assessed retention. We found successful acquisition of conditioned threat and retention (CS+ s > CS-). However, SCRs to the CS+ R and the CS+ during retention did not significantly differ. Although threat conditioning was successful, the well-established cognitively demanding intervention did not alter the reconsolidation process of conditioned threat memories. These findings challenge current views on how cognitive demand may enhance psychotherapy outcomes.","link":"/opendata/jaswetz-et-al-2022/"},{"title":"Jayles et al. (2021)","text":"Cognitive biases are widespread in humans and animals alike, and can sometimes be reinforced by social interactions. One prime bias in judgment and decision-making is the human tendency to underestimate large quantities. 
Previous research on social influence in estimation tasks has generally focused on the impact of single estimates on individual and collective accuracy, showing that randomly sharing estimates does not reduce the underestimation bias. Here, we test a method of social information sharing that exploits the known relationship between the true value and the level of underestimation, and study if it can counteract the underestimation bias. We performed estimation experiments in which participants had to estimate a series of quantities twice, before and after receiving estimates from one or several group members. Our purpose was threefold: to study (i) whether restructuring the sharing of social information can reduce the underestimation bias, (ii) how the number of estimates received affects the sensitivity to social influence and estimation accuracy, and (iii) the mechanisms underlying the integration of multiple estimates. Our restructuring of social interactions successfully countered the underestimation bias. Moreover, we find that sharing more than one estimate also reduces the underestimation bias. Underlying our results are a human tendency to herd, to trust larger estimates than one’s own more than smaller estimates, and to follow disparate social information less. Using a computational modeling approach, we demonstrate that these effects are indeed key to explain the experimental results. Overall, our results show that existing knowledge on biases can be used to dampen their negative effects and boost judgment accuracy, paving the way for combating other cognitive biases threatening collective systems.","link":"/opendata/jayles-et-al-2021/"},{"title":"Jean-Richard-dit-Bressel et al. (2021)","text":"Punishment maximises the probability of our individual survival by reducing behaviours that cause us harm, and also sustains trust and fairness in groups essential for social cohesion. 
However, some individuals are more sensitive to punishment than others and these differences in punishment sensitivity have been linked to a variety of decision-making deficits and psychopathologies. The mechanisms for why individuals differ in punishment sensitivity are poorly understood, although recent studies of conditioned punishment in rodents highlight a key role for punishment contingency detection (Jean-Richard-Dit-Bressel et al., 2019). Here, we applied a novel Planets and Pirates conditioned punishment task in humans, allowing us to identify the mechanisms for why individuals differ in their sensitivity to punishment. We show that punishment sensitivity is bimodally distributed in a large sample of normal participants. Sensitive and insensitive individuals equally liked reward and showed similar rates of reward-seeking. They also equally disliked punishment and did not differ in their valuation of cues that signalled punishment. However, sensitive and insensitive individuals differed profoundly in their capacity to detect and learn volitional control over aversive outcomes. Punishment insensitive individuals did not learn the instrumental contingencies, so they could not withhold behaviour that caused punishment and could not generate appropriately selective behaviours to prevent impending punishment. These differences in punishment sensitivity could not be explained by individual differences in behavioural inhibition, impulsivity, or anxiety. This bimodal punishment sensitivity and these deficits in instrumental contingency learning are identical to those dictating punishment sensitivity in non-human animals, suggesting that they are general properties of aversive learning and decision-making.","link":"/opendata/jean-richard-dit-bressel-et-al-2021/"},{"title":"Jean-Richard-dit-Bressel et al. (2023)","text":"Individuals differ in sensitivity to the adverse consequences of their actions, leading some to persist in maladaptive behaviours. 
Two pathways have been identified for this insensitivity: a motivational pathway based on reward valuation and a behavioural pathway based on stimulus–response mechanisms. Here we identify a third, cognitive pathway based on differences in punishment knowledge. Exposed to identical punishment contingencies, some people (Sensitive) form correct causal beliefs that guide their behaviour to avoid punishment, whereas others form incorrect causal beliefs that lead them to earn punishment. Incorrect causal beliefs were not inherently problematic: many individuals benefited from information about why punishment was occurring, revaluing their actions and changing their behaviour (Unaware). However, we identify one condition where incorrect causal beliefs can be problematic: when punishment is infrequent. Under this condition, more individuals showed detrimental patterns of behaviour that resisted information-driven updating (Compulsive). For these individuals, rare punishment inoculated behavioural preferences against cognitive and behavioural updating.","link":"/opendata/jean-richard-dit-bressel-et-al-2023/"},{"title":"Jenkins et al. (2023)","text":"Implicit statistical learning, whereby predictable relationships between stimuli are detected without conscious awareness, is important for language acquisition. However, while this process is putatively implicit, it is often assessed using measures that require explicit reflection and conscious decision making. Here, we conducted three experiments combining an artificial grammar learning paradigm with a serial reaction time (SRT-AGL) task, to measure statistical learning of adjacent and nonadjacent dependencies implicitly, without conscious decision making. Participants viewed an array of six visual stimuli and were presented with a sequence of three auditory (nonsense words, Expt. 1; names of familiar objects, Expt. 2) or visual (abstract shapes, Expt. 
3) cues and were asked to click on the corresponding visual stimulus as quickly as possible. In each experiment, the final stimulus in the sequence was predictable based on items earlier in the sequence. Faster responses to this predictable final stimulus compared to unpredictable stimuli would provide evidence of implicit statistical learning, without requiring explicit decision making or conscious reflection. Despite previous positive results (Misyak et al., 2009), we saw little evidence of implicit statistical learning in any of the experiments, suggesting that in this case, these SRT-AGL tasks were not an effective measure of implicit statistical learning.","link":"/opendata/jenkins-et-al-2023/"},{"title":"Ji & MacLeod (2023)","text":"Reduced tendency to engage in potentially rewarding activities is a hallmark of depression. The present study investigated the role of future expectancy biases in depression-linked behavioural choice, in participants varying in self-reported depression symptoms (dysphoria). A novel laboratory paradigm was developed to test the hypotheses that the degree to which higher dysphoria is associated with reduced tendency to engage in a potentially rewarding activity is dependent on the presence of negative biases in the expected outcomes of activity engagement. Specifically, two types of expectancy biases were distinguished: a) the expected likelihood of a negative rather than positive outcome, and b) the expected emotional impact of either outcome. N = 176 undergraduate students with varied levels of dysphoria were given the opportunity to choose to engage in a coin-tossing game that could result in a win or loss monetary outcome in terms of charity donations, and then rated both types of expectancies. 
Results indicated that higher dysphoria was associated with more negative expectations concerning the likelihood of objective outcomes and the emotional impact of such outcomes, and as hypothesised, such negative expectancy biases mediated indirect associations between dysphoria and behavioural choice.","link":"/opendata/ji-macleod-2023/"},{"title":"Jiang, Mi et al. (2023)","text":"Social networks shape our decisions by constraining what information we learn and from whom. Yet, the mechanisms by which network structures affect individual learning and decision-making remain unclear. Here, by combining a real-time distributed learning task with functional magnetic resonance imaging, computational modeling and social network analysis, we studied how humans learn from observing others’ decisions on seven-node networks with varying topological structures. We show that learning on social networks can be approximated by a well-established error-driven process for observational learning, supported by an action prediction error encoded in the lateral prefrontal cortex. Importantly, learning is flexibly weighted toward well-connected neighbors, according to activity in the dorsal anterior cingulate cortex, but only insofar as social observations contain secondhand, potentially intertwining, information. These data suggest a neurocomputational mechanism of network-based filtering on the sources of information, which may give rise to biased learning and the spread of misinformation in an interconnected society.","link":"/opendata/jiang-mi-et-al-2023/"},{"title":"Jiang, Marcowski et al. (2023)","text":"Many consider moral decisions to follow an internal “moral compass”, resistant to social pressures. Here we examine how social influence shapes moral decisions under risk, and how it operates in different decision contexts. 
We employed an adapted Asian Disease Paradigm where participants chose between certain losses/gains and probabilistic losses/gains in a series of moral (lives) or financial (money) decisions. We assessed participants’ own risk preferences before and after exposing them to social norms that are generally risk-averse or risk-seeking. Our results showed that participants robustly shifted their own choices towards the observed risk preferences. This conformity holds even after re-testing three days later. Interestingly, in the monetary domain, risk-averse norms have more influence on choices in the loss frame, whereas risk-seeking norms have more influence in the gain frame, presumably because norms that contradict default behavior are most informative. In the moral domain, risk-averse as opposed to risk-seeking norms are more effective in the loss frame but in the gain frame different norms are equally effective. Taken together, our results demonstrate conformity in risk preferences across contexts and highlight unique features of decisions and conformity in moral and monetary domains.","link":"/opendata/jiang-marcowski-et-al-2023/"},{"title":"Johnston et al. (2023)","text":"Memory is a reconstructive process with biases toward remembering events in line with prior higher order beliefs. This can result in events being remembered as more positive or negative than they actually were. While positive recall biases may contribute to well-being, negative recall biases may promote internalizing symptoms, such as social anxiety. Adolescence is characterized by increased salience of peers and peak incidence of social anxiety. Symptoms often wax and wane before becoming more intractable during adulthood. Open questions remain regarding expression of biases for social feedback, and how individual differences in such biases may contribute to social anxiety across development. 
Two independent studies used a novel social feedback and cued response task to assess memory biases and prediction biases about being liked or disliked. Our findings revealed a robust positivity bias about memories for social feedback, regardless of whether memories were true or false. Moreover, memory bias was associated with social anxiety in a developmentally sensitive way. Among adults (Study 1), more severe symptoms of social anxiety were associated with a negativity bias. During the transition from adolescence to adulthood (Study 2), age strengthened the positivity bias in those with less severe symptoms and strengthened the negativity bias in those with more severe symptoms. Biases did not generalize to predictions about social feedback. These studies support a model by which higher order beliefs can infiltrate perceptions of memory for past, but not predictions of future, social events, shaping susceptibility for social anxiety, particularly during the transition into adulthood.","link":"/opendata/johnston-et-al-2023/"},{"title":"Kaanders et al. (2022)","text":"No one likes to be wrong. Previous research has shown that participants may underweight information incompatible with previous choices, a phenomenon called confirmation bias. In this paper, we argue that a similar bias exists in the way information is actively sought. We investigate how choice influences information gathering using a perceptual choice task and find that participants sample more information from a previously chosen alternative. Furthermore, the higher the confidence in the initial choice, the more biased information sampling becomes. As a consequence, when faced with the possibility of revising an earlier decision, participants are more likely to stick with their original choice, even when incorrect. Critically, we show that agency controls this phenomenon. 
The effect disappears in a fixed sampling condition where presentation of evidence is controlled by the experimenter, suggesting that the way in which confirmatory evidence is acquired critically impacts the decision process. These results suggest active information acquisition plays a critical role in the propagation of strongly held beliefs over time.","link":"/opendata/kaanders-et-al-2022/"},{"title":"Kahveci et al. (2022)","text":"Reaction time (RT) data are often pre-processed before analysis by rejecting outliers and errors and aggregating the data. In stimulus-response compatibility paradigms such as the Approach-Avoidance Task (AAT), researchers often decide how to pre-process the data without an empirical basis, leading to the use of methods that may hurt rather than help data quality. To provide this empirical basis, we investigated how different pre-processing methods affect the reliability and validity of this task. Our literature review revealed 108 different pre-processing pipelines among 163 examined studies. Using simulated and real datasets, we found that validity and reliability were negatively affected by retaining error trials, by replacing error RTs with the mean RT plus a penalty, by retaining outliers, and by removing the highest and lowest sample-wide RT percentiles as outliers. We recommend removing error trials and rejecting RTs deviating more than 2 or 3 SDs from the participant mean. Bias scores were more reliable but not more valid if computed with means or D-scores rather than with medians. Bias scores were less accurate if based on averaging multiple conditions together, as with compatibility scores, rather than being based on separate averages per condition, as with double-difference scores. We call upon the field to drop the suboptimal practices to improve the psychometric properties of the AAT. 
We also call for similar investigations in related RT-based cognitive bias measures such as the implicit association task, as their commonly accepted pre-processing practices currently involve many of the aforementioned discouraged methods.","link":"/opendata/kahveci-et-al-2022/"},{"title":"Kaiser et al. (2021)","text":"In a dynamic world, it is essential to decide when to leave an exploited resource. Such patch-leaving decisions involve balancing the cost of moving against the gain expected from the alternative patch. This contrasts with value-guided decisions that typically involve maximizing reward by selecting the current best option. Patterns of neuronal activity pertaining to patch-leaving decisions have been reported in dorsal anterior cingulate cortex (dACC), whereas competition via mutual inhibition in ventromedial prefrontal cortex (vmPFC) is thought to underlie value-guided choice. Here, we show that the balance between cortical excitation and inhibition (E/I balance), measured by the ratio of GABA and glutamate concentrations, plays a dissociable role for the two kinds of decisions. Patch-leaving decision behaviour relates to E/I balance in dACC. In contrast, value-guided decision-making relates to E/I balance in vmPFC. These results support mechanistic accounts of value-guided choice and provide evidence for a role of dACC E/I balance in patch-leaving decisions.","link":"/opendata/kaiser-et-al-2021/"},{"title":"Kaplan & Solway (2022)","text":"Computational models of decision making have identified a relationship between obsessive-compulsive symptoms (OCS), both in the general population and in patients, and impairments in perceptual evidence accumulation. Some studies have interpreted these deficits to reflect global disease traits which give rise to clusters of OCS. Such assumptions are not uncommon, even if implicit, in computational psychiatry more broadly. 
However, it is well established that state- and trait-symptom scores are often correlated (e.g., state and trait anxiety), and the extent to which perceptual deficits are actually explained by state-based symptoms is unclear. State-based symptoms may give rise to information processing differences in a number of ways, including the mechanistically less interesting possibility of tying up working memory and attentional resources for off-task processing. In a general population sample (N = 150), we investigated the extent to which previously identified impairments in perceptual evidence accumulation were related to trait- vs state-based OCS. In addition, we tested whether differences in working memory capacity moderated state-based impairments, such that impairments were worse in individuals with lower working memory capacity. We replicated previous work demonstrating a negative relationship between the rate of evidence accumulation and trait-based OCS when state-based symptoms were unaccounted for. When state-based effects were included in the model, they captured a significant degree of impairment while trait-based effects were attenuated, although they did not disappear completely. We did not find evidence that working memory capacity moderated the state-based effects. Our work suggests that investigating the relationship between information processing and state-based symptoms may be important more generally in computational psychiatry beyond this specific context.","link":"/opendata/kaplan-solway-2022/"},{"title":"Karagoz et al. (2022)","text":"When making decisions, we sometimes rely on habit and at other times plan towards goals. Planning requires the construction and use of an internal representation of the environment, a cognitive map. How are these maps constructed, and how do they guide goal-directed decisions? 
We coupled a sequential decision-making task with a behavioral representational similarity analysis approach to examine how relationships between choice options change when people build a cognitive map of the task structure. We found that participants who encoded stronger higher-order relationships among choice options showed increased planning and better performance. These higher-order relationships were more strongly encoded among objects encountered in high-reward contexts, indicating a role for motivation during cognitive map construction. In contrast, lower-order relationships such as simple visual co-occurrence of objects did not predict goal-directed planning. These results show that humans actively construct and use cognitive maps of task structure to make goal-directed decisions.","link":"/opendata/karagoz-et-al-2022/"},{"title":"Kasparek et al. (2020)","text":"Childhood adversity is common and strongly associated with risk for psychopathology. Identifying factors that buffer children from experiencing psychopathology following adversity is critical for developing more effective intervention approaches. The present study examined several behavioral metrics of reward processing reflecting global approach motivation for reward and the degree to which reward responses scaled with reward value (i.e., behavioral sensitivity to reward value) as potential moderators of the association of multiple dimensions of adversity-including trauma, caregiver neglect, and food insecurity-with depression and externalizing psychopathology in a sample of youth aged 8-16 years (n = 132). Trauma exposure and externalizing problems were positively associated at low and moderate levels of reward reactivity, but this association became non-significant at high levels of reward reactivity. 
Our findings extend prior work, suggesting that high behavioral sensitivity to reward value may buffer against externalizing problems following exposure to trauma.","link":"/opendata/kasparek-et-al-2020/"},{"title":"Kapser et al. (2023)","text":"Why can initial biases persist in repeated choice tasks? Previous research has shown that frequent rewards can lure the decision maker into premature exploitation of a supposedly best option, which can result in the persistence of initial biases. Here, we demonstrate that even in the absence of rewards, initial biases can be perpetuated through a positive testing strategy. After eliciting a biased preference for one of two equally rewarding options, participants (N = 203) could sample freely from both options without the lure of any financial rewards. When participants were told to rule out alternatives in this phase, they explored the supposedly worse option and thereby managed to overcome their initial bias. When told to optimize their strategy, however, they exhibited a positive testing strategy resulting in the continued exploitation of the supposedly better option, a bias they maintained in an incentivized choice phase and later judgments. Across all participants, individual tendencies to exploit one option in earlier phases predicted biased behavior in subsequent phases. The findings highlight that not only the pursuit of instrumental rewards can lead to exploitation and the maintenance of initial biases. We discuss potential consequences for interventions.","link":"/opendata/kapser-et-al-2023/"},{"title":"Katabi & Shahar (2023)","text":"ADHD is characterized by a difficulty to act in a goal-directed manner. While most environments require a sequence of actions for goal attainment, ADHD was never studied in the context of value-based sequence learning. 
Here, we made use of current advancements in hierarchical reinforcement-learning algorithms to track the internal value and choice policy of individuals with ADHD performing a three-stage sequence learning task. Specifically, 54 participants (28 ADHD, 26 controls) completed a value-based reinforcement-learning task that allowed us to estimate internal action values for each trial and stage using computational modeling. We found attenuated sensitivity to action values in ADHD compared to controls, both in choice and reaction-time variability estimates. Remarkably, this was found only for first stage actions (i.e., initiatory actions), while for actions performed just before outcome delivery the two groups were strikingly indistinguishable. These results suggest a difficulty in following value estimation for initiatory actions in ADHD.","link":"/opendata/katabi-shahar-2023/"},{"title":"Katyal et al. (2023)","text":"Individuals with anxiety and depression exhibit chronic metacognitive biases such as underconfidence. The origin of such biases is unknown. Here we quantified the impact of feedback valence on confidence in two large general population samples (N=230 and N=278). We studied metacognition both locally, as confidence in individual task instances, and globally, as longer run self-performance estimates. Global confidence was sensitive to both local confidence and feedback valence – more frequent positive (negative) feedback increased (respectively decreased) global confidence. Feedback valence impacted confidence in a domain-general fashion and also led to shifts in affective self-beliefs. Notably, global confidence was more sensitive to low (vs. high) local confidence in individuals with greater transdiagnostic anxious-depression symptomatology, despite sensitivity to feedback valence remaining intact. 
Together, our results reveal a mechanistic basis for chronic underconfidence in anxious-depression rooted in distorted interactions between local and global metacognition, while also highlighting potential for restoring confidence through targeted feedback.","link":"/opendata/katyal-et-al-2023/"},{"title":"Katzman et al. (2023)","text":"Across development, people seek to control their environments, often demonstrating a preference for contexts in which they have the opportunity to make choices. However, it is not clear how children, adolescents, and adults learn to calibrate this preference based on the costs and benefits of exerting control. Here, 92 participants between the ages of 10 and 25 completed a probabilistic reinforcement-learning task across contexts in which the instrumental value of control varied. On every trial, participants selected between two slot machines to try to gain the most reward. Critically, however, machine decisions were preceded by first-stage agency decisions in which participants decided whether to choose between the machines themselves or forgo agency and let a computer randomly select between them. On each trial, we manipulated the instrumental value of control by varying the reward probabilities of the machines, as well as an offer amount that participants would earn by forgoing agency. We found that across age, participants overvalued agency, but they also demonstrated a preference for control when it had greater instrumental value. Moreover, we found that sensitivity to the value of control increased with age, indicating developmental improvements in weighing the costs and benefits of agentic choice.","link":"/opendata/katzman-et-al-2023/"},{"title":"Kelly & Sharot (2021)","text":"Vast amounts of personalized information are now available to individuals. A vital research challenge is to establish how people decide what information they wish to obtain. 
Here, over five studies examining information-seeking in different domains we show that information-seeking is associated with three diverse motives. Specifically, we find that participants assess whether information is useful in directing action, how it will make them feel, and whether it relates to concepts they think of often. We demonstrate that participants integrate these assessments into a calculation of the value of information that explains information seeking or its avoidance. Different individuals assign different weights to these three factors when seeking information. Using a longitudinal approach, we find that the relative weights assigned to these information-seeking motives within an individual show stability over time, and are related to mental health as assessed using a battery of psychopathology questionnaires.","link":"/opendata/kelly-sharot-2021/"},{"title":"Kemp et al. (2022)","text":"The underlying vulnerability for schizophrenia-spectrum disorders is expressed across a continuum of clinical and subclinical symptoms and impairment referred to as schizotypy. Schizotypy is a multidimensional construct with positive, negative, and disorganized dimensions. Models of pathological personality provide useful frameworks for assessing the multidimensional structure of schizotypy. The present study examined the association of positive, negative, and disorganized schizotypy with pathological personality traits and facets assessed by the Personality Inventory for DSM-5 (PID-5) in 1,342 young adults. As hypothesized, positive schizotypy was largely associated with the PID-5 psychoticism domain and facets, negative schizotypy was primarily associated with the detachment domain and facets and the restricted affectivity facet, and disorganized schizotypy’s strongest associations were with the distractibility and eccentricity facets and the negative affect domain. 
Negative schizotypy was differentiated from the other two schizotypy dimensions in its pattern of correlations with the PID-5 domains and facets. As expected, positive and disorganized schizotypy exhibited a moderate degree of similarity in their associations. Overall, the PID-5 domains accounted for approximately half of the variance in each of the schizotypy dimensions, and the PID-5 facets accounted for upwards of 2/3rds of the variance in each schizotypy dimension. The associations among the schizotypy and PID-5 measures did not appear to reflect highly redundant items across the measures. We conclude by providing regression-based algorithms for computing positive, negative, and disorganized schizotypy scores based on the PID-5 facets.","link":"/opendata/kemp-et-al-2022/"},{"title":"Keung et al. (2020)","text":"Divisive normalization has long been used to account for computations in various neural processes and behaviours. The model proposes that inputs into a neural system are divisively normalized by the system’s total activity. More recently, dynamical versions of divisive normalization have been shown to account for how neural activity evolves over time in value-based decision making. Despite its ubiquity, divisive normalization has not been studied in decisions that require evidence to be integrated over time. Such decisions are important when the information is not all available at once. A key feature of such decisions is how evidence is weighted over time, known as the integration kernel. Here, we provide a formal expression for the integration kernel in divisive normalization, and show that divisive normalization quantitatively accounts for 133 human participants’ perceptual decision making behaviour, performing as well as the state-of-the-art Drift Diffusion Model, the predominant model for perceptual evidence accumulation.","link":"/opendata/keung-et-al-2020/"},{"title":"Khaw et al. 
(2017)","text":"The data presented in this article are related to the research article entitled “Discrete Adjustment to a Changing Environment: Experimental Evidence” (Khaw et al., 2017) [1]. We present data from a laboratory experiment that asks subjects to forecast the outcome of a time-varying Bernoulli process. On a computer program, subjects draw rings with replacement from a virtual box containing green and red rings in an unknown proportion. Subjects provide their estimates of the probability of drawing a green ring. They are rewarded for their participation and for the accuracy of their estimates. The actual probability of drawing a green ring is initially drawn from a uniform distribution. It then changes intermittently throughout the session, and each subsequent probability is an independent draw from the uniform distribution. Each session involves 1000 ring draws. The dataset contains the values of the underlying probability, the sequence of ring draws that are realized, and the subjects’ estimates and response times. The dataset contains the performance of 11 subjects who each completed 10 sessions over the course of several days.","link":"/opendata/khaw-et-al-2017/"},{"title":"Kim et al. (2019)","text":"It has previously been shown that the relative reliability of model-based and model-free reinforcement-learning (RL) systems plays a role in the allocation of behavioral control between them. However, the role of task complexity in the arbitration between these two strategies remains largely unknown. Here, using a combination of novel task design, computational modelling, and model-based fMRI analysis, we examined the role of task complexity alongside state-space uncertainty in the arbitration process. Participants tended to increase model-based RL control in response to increasing task complexity. 
However, they resorted to model-free RL when both uncertainty and task complexity were high, suggesting that these two variables interact during the arbitration process. Computational fMRI revealed that task complexity interacts with neural representations of the reliability of the two systems in the inferior prefrontal cortex.","link":"/opendata/kim-et-al-2019/"},{"title":"Kim et al. (2021)","text":"Controllability perception significantly influences motivated behavior and emotion and requires an estimation of one’s influence on an environment. Previous studies have shown that an agent can infer controllability by observing contingency between one’s own action and outcome if there are no other outcome-relevant agents in an environment. However, if there are multiple agents who can influence the outcome, estimation of one’s genuine controllability requires exclusion of other agents’ possible influence. Here, we first investigated a computational and neural mechanism of controllability inference in a multi-agent setting. Our novel multi-agent Bayesian controllability inference model showed that other people’s action-outcome contingency information is integrated with one’s own action-outcome contingency to infer controllability, which can be explained as a Bayesian inference. Model-based functional MRI analyses showed that multi-agent Bayesian controllability inference recruits the temporoparietal junction (TPJ) and striatum. Then, this inferred controllability information was leveraged to increase motivated behavior in the vmPFC. These results generalize the previously known role of the striatum and vmPFC in single-agent controllability to multi-agent controllability, and this generalized role requires the TPJ in addition to the striatum of single-agent controllability to integrate both self- and other-related information. 
Finally, we identified an innate positive bias toward the self during the multi-agent controllability inference, which facilitated behavioral adaptation under volatile controllability. Furthermore, low positive bias and high negative bias were associated with increased daily feelings of guilt. Our results provide a mechanism of how our sense of controllability fluctuates due to other people in our lives, which might be related to social learned helplessness and depression.","link":"/opendata/kim-et-al-2021/"},{"title":"Kim et al. (2023)","text":"Older adults tend to be more prone to distraction compared to young adults and this age-related deficit has been attributed to a deficiency in inhibitory processing. However, recent findings challenge the notion that aging leads to global impairments in inhibition. To reconcile these mixed findings, we investigated how aging modulates multiple mechanisms of attentional control including goal-directed target orienting, proactive distractor suppression, attention capture, and reactive disengagement by tracking the timing and direction of eye movements. When engaged in feature-search mode and proactive distractor suppression, older adults made fewer first fixations to the target but inhibited the task-irrelevant salient distractor as effectively as did young adults. In contrast, task-irrelevant salient distractors captured older adults’ attention significantly more than younger adults’ attention during singleton-search mode and reactive distractor disengagement. In addition to elevated attention capture, older adults showed increased fixation times in orienting to the target, longer dwell times on incorrect saccades, and increased saccadic reaction times. Thus, older adults exhibited deficiencies in goal-directed attentional control, disengagement, and processing speeds, but preserved mechanisms of proactive distractor suppression. 
Our findings suggest that older adults are more prone to initiating reflexive, stimulus-driven saccades over goal-oriented saccades due to longer top-down processing requirements and shifts in attentional priority within the visual cortex. We propose that aging leads to dual shifts in mechanisms of top-down and bottom-up attentional control, but that older adults still preserve mechanisms of proactive inhibition.","link":"/opendata/kim-et-al-2023/"},{"title":"Kinley et al. (2023)","text":"Probabilistic models of delay discounting allow the estimation of discount functions without assuming that these functions describe sharp boundaries in decision making. However, existing probabilistic models allow for two implausible possibilities: first, that no reward might sometimes be preferred over some reward (e.g., $0 now over $100 in 1 year), and second, that the same reward might sometimes be preferred later rather than sooner (e.g., $100 in a year over $100 now). Here we show that probabilistic models of discounting perform better when they assign these cases a probability of 0. We demonstrate this result across a range of discount functions using nonlinear regression. We also introduce a series of generalized linear models that implicitly parameterize various discount functions, and demonstrate the same result for these.","link":"/opendata/kinley-et-al-2023/"},{"title":"Kinzuka et al. (2022)","text":"As time plays a fundamental role in our social activities, scholars have studied temporal perception since the earliest days of experimental psychology. Since the 1960s, the ubiquity of color has been driving research on the potential effects of the colors red and blue on temporal perception and on its underlying mechanism. However, the results have been inconsistent, which could be attributed to the difficulty of controlling physical properties such as hue and luminance within and between studies. 
Therefore, we conducted a two-interval duration-discrimination task to evaluate the perceived duration of color stimuli under different equiluminant conditions: subjective or pupillary light reflex (PLR)-based equiluminance. The results, based on psychometric functional analyses and simultaneous pupillary recordings, showed that the perceived duration of red was overestimated compared with blue even when the intensity of the stimulus was controlled based on subjective equiluminance (Experiment 1). However, since blue is known to induce a larger PLR than red despite equiluminance, we conducted a controlled study to distinguish the indirect effect of pupillary response to temporal perception. Interestingly, the effect observed in Experiment 1 faded when the luminance levels of the two stimuli were matched based on PLR response (Experiment 2). These results indicate that duration judgement can be affected not only by the hue but also by different equiluminance methods. Furthermore, this causality between the equiluminance method and temporal perception can be explained by the fluctuations in incident light entering the pupil.","link":"/opendata/kinzuka-et-al-2022/"},{"title":"Klaassen et al. (2021)","text":"Successful responding to acutely threatening situations requires adequate approach-avoidance decisions. However, it is unclear how threat-induced states-like freezing-related bradycardia-impact the weighing of the potential outcomes of such value-based decisions. Insight into the underlying computations is essential, not only to improve our models of decision-making but also to improve interventions for maladaptive decisions, for instance in anxiety patients and first-responders who frequently have to make decisions under acute threat. Forty-two participants made passive and active approach-avoidance decisions under threat-of-shock when confronted with mixed outcome-prospects (i.e., varying money and shock amounts). 
Choice behavior was best predicted by a model including individual action-tendencies and bradycardia, beyond the subjective value of the outcome. Moreover, threat-related bradycardia (high-vs-low threat) interacted with subjective value, depending on the action-context (passive-vs-active). Specifically, in action-contexts incongruent with participants’ intrinsic action-tendencies, stronger bradycardia related to diminished effects of subjective value on choice across participants. These findings illustrate the relevance of testing approach-avoidance decisions in relatively ecologically valid conditions of acute and primarily reinforced threat. These mechanistic insights into approach-avoidance conflict-resolution may inspire biofeedback-related techniques to optimize decision-making under threat. Critically, the findings demonstrate the relevance of incorporating internal psychophysiological states and external action-contexts into models of approach-avoidance decision-making.","link":"/opendata/klaassen-et-al-2021/"},{"title":"Klein et al. (2017)","text":"To decide optimally between available options, organisms need to learn the values associated with these options. Reinforcement learning models offer a powerful explanation of how these values are learnt from experience. However, human choices often violate normative principles. We suggest that seemingly counterintuitive decisions may arise as a natural consequence of the learning mechanisms deployed by humans. Here, using fMRI and a novel behavioural task, we show that, when suddenly switched to novel choice contexts, participants choices are incongruent with values learnt by standard learning algorithms. Instead, behaviour is compatible with the decisions of an agent learning how good an option is relative to an option with which it had previously been paired. Striatal activity exhibits the characteristics of a prediction error used to update such relative option values. 
Our data suggest that choices can be biased by a tendency to learn option values with reference to the available alternatives.","link":"/opendata/klein-et-al-2017/"},{"title":"Klever et al. (2022)","text":"Visual perception is not only shaped by sensitivity but also by confidence, i.e., the ability to estimate the accuracy of a visual decision. Younger observers have been reported to have access to a reliable measure of their own uncertainty when making visual decisions. This metacognitive ability might be challenged during ageing due to increasing sensory noise and decreasing cognitive control resources. We investigated age effects on visual confidence using a visual contrast discrimination task and a confidence forced-choice paradigm. Younger adults (19-38 years) showed significantly lower discrimination thresholds than older adults (60-78 years). To focus on confidence sensitivity above and beyond differences in discrimination performance, we estimated confidence efficiency that reflects the ability to distinguish good from bad perceptual decisions. Confidence efficiency was estimated by comparing thresholds obtained from all trials and trials that were judged with relatively higher confidence, respectively. In both age groups, high confidence judgments were associated with better visual performance, but confidence efficiency was reduced in older adults. However, we observed substantial variability across all participants. Controlling for age group, confidence efficiency was closely linked to individual differences in cognitive control capacities. Our findings provide evidence for age-related differences in confidence efficiency that present a specific challenge to perceptual performance in old age. We propose that these differences are driven by cognitive control capacities, supporting their crucial role for metacognitive efficiency.","link":"/opendata/klever-et-al-2022/"},{"title":"Klingelhoefer-Jens et al. 
(2023)","text":"Exposure to adverse childhood experiences (ACEs) is a strong predictor for developing behavioral, somatic and psychopathological conditions. Exposure to threat-related early adversity has been suggested to be specifically linked to altered emotional learning as well as changes in neural circuits involved in emotional responding and fear. Learning mechanisms are particularly interesting as they are central mechanisms through which environmental inputs shape emotional and cognitive processes and ultimately behavior. Multiple theories on the mechanisms underlying this association have been suggested which, however, differ in the operationalization of ACEs. 1,402 physically and mentally healthy participants underwent a fear conditioning paradigm including a fear acquisition and generalization phase while skin conductance responses (SCRs) and different subjective ratings were acquired. ACEs were retrospectively assessed through the childhood trauma questionnaire and participants were assigned to individuals exposed or unexposed to at least moderate adverse childhood experiences according to established cut-off criteria. In addition, we provide exploratory analyses aiming to shed light on different theoretical accounts on how ACEs impact individual risk profiles (i.e., cumulative risk account, specificity model, dimensional model). During fear acquisition training and generalization, we observed reduced discrimination in SCRs between the CS+ and the CS-, primarily due to reduced CS+ responding in exposed individuals. During fear generalization, no differences in generalization gradients were observed between exposed and unexposed individuals but generally blunted physiological responses in exposed individuals. No differences between the groups were observed in ratings in any of the experimental phases. 
The lower CS discrimination in SCRs in exposed individuals was evident across operationalizations according to the cumulative risk account, specificity as well as dimensional model. However, none of these theories showed clear explanatory superiority. Our results stand in stark contrast to typical patterns observed in patients suffering from anxiety and stress-related disorders (i.e., reduced CS discrimination due to increased responses to safety signals). Thus, reduced CS+ responding in individuals exposed to ACEs, yet not showing signs of psychopathology, may represent a specific characteristic of this resilient subgroup that warrants further investigation with respect to its relation to risk and resilience. In addition, we conclude that theories linking ACEs to psychopathology need refinement.","link":"/opendata/klingelhoefer-jens-et-al-2023/"},{"title":"Klingelhöfer-Jens et al. (2022)","text":"Here, we follow the call to target measurement reliability as a key prerequisite for individual-level predictions in translational neuroscience by investigating (1) longitudinal reliability at the individual and (2) group level, (3) internal consistency and (4) response predictability across experimental phases. One hundred and twenty individuals performed a fear conditioning paradigm twice 6 months apart. Analyses of skin conductance responses, fear ratings and blood oxygen level dependent functional magnetic resonance imaging (BOLD fMRI) with different data transformations and included numbers of trials were conducted. While longitudinal reliability was rather limited at the individual level, it was comparatively higher for acquisition but not extinction at the group level. Internal consistency was satisfactory. Higher responding in preceding phases predicted higher responding in subsequent experimental phases at a weak to moderate level depending on data specifications. 
In sum, the results suggest that while individual-level predictions are meaningful for (very) short time frames, they also call for more attention to measurement properties in the field.","link":"/opendata/klingelhofer-jens-et-al-2022/"},{"title":"Kobor et al. (2021)","text":"Both primarily and recently encountered information have been shown to influence experience-based risky decision making. The primacy effect predicts that initial experience will influence later choices even if outcome probabilities change and reward is ultimately more or less sparse than primarily experienced. However, it has not been investigated whether extended initial experience would induce a more profound primacy effect upon risky choices than brief experience. Therefore, the present study tested in two experiments whether young adults adjusted their risk-taking behavior in the Balloon Analogue Risk Task after an unsignaled and unexpected change point. The change point separated early good luck or bad luck trials from subsequent ones. While mostly positive (more reward) or mostly negative (no reward) events characterized the early trials, subsequent trials were unbiased. In Experiment 1, the change point occurred after one-sixth or one-third of the trials (brief vs. extended experience) without intermittence, whereas in Experiment 2, it occurred between separate task phases. In Experiment 1, if negative events characterized the early trials, after the change point, risk-taking behavior increased as compared with the early trials. Conversely, if positive events characterized the early trials, risk-taking behavior decreased after the change point. Although the adjustment of risk-taking behavior occurred due to integrating recent experiences, the impact of initial experience was simultaneously observed. The length of initial experience did not reliably influence the adjustment of behavior. 
In Experiment 2, participants became more prone to take risks as the task progressed, indicating that the impact of initial experience could be overcome. Altogether, we suggest that initial beliefs about outcome probabilities can be updated by recent experiences to adapt to the continuously changing decision environment.","link":"/opendata/kobor-et-al-2021/"},{"title":"Komar et al. (2023)","text":"The cognitive mechanisms underlying the animacy effect on free recall have as yet to be identified. According to the attentional-prioritization account, animate words are better recalled because they recruit more attention at encoding than inanimate words. The account implies that the animacy effect should be larger when animate words are presented together with inanimate words in mixed lists or pairs than when animate and inanimate words are presented separately in pure lists or pairs. The present series of experiments served to systematically test whether list composition or pair composition modulate the animacy effect. In Experiment 1, the animacy effect was compared between mixed and pure lists. In Experiments 2 and 3, the words were presented in mixed or pure pairs to manipulate the direct competition for attention between animate and inanimate words at encoding. While encoding was intentional in Experiments 1 and 2, it was incidental in Experiment 3. In each experiment, a significant animacy effect was obtained, but the effect was equally large in mixed and pure lists or pairs of animate and inanimate words despite considerable sensitivity of the statistical test of the critical interaction. These findings provide evidence against the attentional-prioritization account of the animacy effect.","link":"/opendata/komar-et-al-2023/"},{"title":"Konovalov & Krajbich (2020)","text":"Converging evidence has demonstrated that humans exhibit two distinct strategies when learning in complex environments. 
One is model-free learning, i.e., simple reinforcement of rewarded actions, and the other is model-based learning, which considers the structure of the environment. Recent work has argued that people exhibit little model-based behavior unless it leads to higher rewards. Here we use mouse tracking to study model-based learning in stochastic and deterministic (pattern-based) environments of varying difficulty. In both tasks, participants' mouse movements reveal that they learned the structures of their environments, despite the fact that standard behavior-based estimates suggested no such learning in the stochastic task. Thus, we argue that mouse tracking can reveal whether subjects have structure knowledge, which is necessary but not sufficient for model-based choice.","link":"/opendata/konovalov-krajbich-2020/"},{"title":"Konstantinidis et al. (2018)","text":"Recent experimental evidence in experience-based decision-making suggests that people are more risk seeking in the gains domain relative to the losses domain. This critical result is at odds with the standard reflection effect observed in description-based choice and explained by Prospect Theory. The so-called reversed-reflection effect has been predicated on the extreme-outcome rule, which suggests that memory biases affect risky choice from experience. To test the general plausibility of the rule, we conducted two experiments examining how the magnitude of prospective outcomes impacts risk preferences. We found that while the reversed-reflection effect was present with small-magnitude payoffs, using payoffs of larger magnitude brought participants' behavior back in line with the standard reflection effect. Our results suggest that risk preferences in experience-based decision-making are not only affected by the relative extremeness but also by the absolute extremeness of past events.","link":"/opendata/konstantinidis-et-al-2018/"},{"title":"Kool et al. 
(2017)","text":"Human behavior is sometimes determined by habit and other times by goal-directed planning. Modern reinforcement-learning theories formalize this distinction as a competition between a computationally cheap but inaccurate model-free system that gives rise to habits and a computationally expensive but accurate model-based system that implements planning. It is unclear, however, how people choose to allocate control between these systems. Here, we propose that arbitration occurs by comparing each system's task-specific costs and benefits. To investigate this proposal, we conducted two experiments showing that people increase model-based control when it achieves greater accuracy than model-free control, and especially when the rewards of accurate performance are amplified. In contrast, they are insensitive to reward amplification when model-based and model-free control yield equivalent accuracy. This suggests that humans adaptively balance habitual and planned action through on-line cost-benefit analysis.","link":"/opendata/kool-et-al-2017/"},{"title":"Kool et al. (2018)","text":"Decision-making algorithms face a basic tradeoff between accuracy and effort (i.e., computational demands). It is widely agreed that humans can choose between multiple decision-making processes that embody different solutions to this tradeoff: Some are computationally cheap but inaccurate, whereas others are computationally expensive but accurate. Recent progress in understanding this tradeoff has been catalyzed by formalizing it in terms of model-free (i.e., habitual) versus model-based (i.e., planning) approaches to reinforcement learning. Intuitively, if two tasks offer the same rewards for accuracy but one of them is much more demanding, we might expect people to rely on habit more in the difficult task: Devoting significant computation to achieve slight marginal accuracy gains would not be worth it. 
We test and verify this prediction in a sequential reinforcement learning task. Because our paradigm is amenable to formal analysis, it contributes to the development of a computational model of how people balance the costs and benefits of different decision-making processes in a task-specific manner; in other words, how we decide when hard thinking is worth it.","link":"/opendata/kool-et-al-2018/"},{"title":"Koppel et al. (2022)","text":"Most disciplines rely on economic games to measure prosocial behavior in controlled experimental settings. However, participants’ comprehension of these games might be lower than desirable, which complicates interpretation of results. We here assess subject comprehension of the payoff structure of five standard economic games commonly used to study prosocial behavior: the Dictator Game, Ultimatum Game, Trust Game, Public Goods Game, and Prisoner’s Dilemma. Participants were recruited from two online platforms: Prolific (n = 528) and CloudResearch (using the CloudResearch MTurk toolkit; n = 540). The Trust Game had the highest level of misunderstanding (70%), followed by the Public Goods Game and the Prisoner’s Dilemma (each at 52%), the Ultimatum Game (27%), and the Dictator Game (24%). Study platform was a significant predictor of misunderstanding in the Dictator Game, Ultimatum Game, and Public Goods Game, with greater misunderstanding on Prolific than CloudResearch. Incentivizing the comprehension questions had no significant impact on misunderstanding in any of the games. The only variable that significantly predicted misunderstanding across all games was numeracy, which was associated with lower misunderstanding. 
Finally, we found suggestive evidence in exploratory analyses that misunderstanding predicts greater contributions in the Public Goods Game (in line with previous studies) and in the Dictator Game, increased the likelihood to choose the option that maximizes total payoff in the Prisoner’s Dilemma and reduced back transfers in the Trust Game. These findings suggest that misunderstanding may be an important factor in explaining prosocial behavior and that reliance on standard one-shot games may lead researchers to overestimate the importance of social preferences.","link":"/opendata/koppel-et-al-2022/"},{"title":"Kosciessa et al. (2021)","text":"Knowledge about the relevance of environmental features can guide stimulus processing. However, it remains unclear how processing is adjusted when feature relevance is uncertain. We hypothesized that (a) heightened uncertainty would shift cortical networks from a rhythmic, selective processing-oriented state toward an asynchronous (“excited”) state that boosts sensitivity to all stimulus features, and that (b) the thalamus provides a subcortical nexus for such uncertainty-related shifts. Here, we had young adults attend to varying numbers of task-relevant features during EEG and fMRI acquisition to test these hypotheses. Behavioral modeling and electrophysiological signatures revealed that greater uncertainty lowered the rate of evidence accumulation for individual stimulus features, shifted the cortex from a rhythmic to an asynchronous/excited regime, and heightened neuromodulatory arousal. Crucially, this unified constellation of within-person effects was dominantly reflected in the uncertainty-driven upregulation of thalamic activity. We argue that neuromodulatory processes involving the thalamus play a central role in how the brain modulates neural excitability in the face of momentary uncertainty.","link":"/opendata/kosciessa-et-al-2021/"},{"title":"Kraft et al. 
(2022)","text":"Cognitive flexibility - the ability to adjust one's behavior to changing environmental demands - is crucial for controlled behavior. However, the term cognitive flexibility is used heterogeneously, and associations between cognitive flexibility and other facets of flexible behavior have only rarely been studied systematically. To resolve some of these conceptual uncertainties, we directly compared cognitive flexibility (cue-instructed switching between two affectively neutral tasks), affective flexibility (switching between a neutral and an affective task using emotional stimuli), and feedback-based flexibility (non-cued, feedback-dependent switching between two neutral tasks). Three experimental paradigms were established that share as many procedural features (in terms of stimuli and/or task rules) as possible and were administered in a pre-registered study plan (N = 100). Correlation analyses revealed significant associations between the efficiency of cognitive and affective task switching (response time switch costs). Feedback-based flexibility (measured as mean number of errors after rule reversals) did not correlate with task switching efficiency in the other paradigms, but selectively with the effectiveness of affective switching (error rate costs when switching from neutral to emotion task). While preregistered confirmatory factor analysis (CFA) provided no clear evidence for a shared factor underlying the efficiency of switching in all three domains of flexibility, an exploratory CFA suggested commonalities regarding switching effectiveness (accuracy-based switch costs). We propose shared mechanisms controlling the efficiency of cue-dependent task switching across domains, while the relationship to feedback-based flexibility may depend on mechanisms controlling switching effectiveness. 
Our results call for a more stringent conceptual differentiation between different variants of psychological flexibility.","link":"/opendata/kraft-et-al-2022/"},{"title":"Kriechbaum & Bäuml (2023)","text":"Retrieval practice performed shortly upon the encoding of information benefits recall of the retrieved information but causes forgetting of nonretrieved information. Here, we show that the forgetting effect on the nonretrieved information can quickly evolve into recall enhancement when retrieval practice is delayed. During a time window of twenty minutes upon the encoding of information, the forgetting effect observed shortly after encoding first disappeared and then turned into recall enhancement when the temporal lag between encoding and retrieval practice was prolonged. Strikingly, recall enhancement continued to emerge when retrieval practice was postponed up to one week. The results illustrate a fast transition from the forgetting of nonretrieved information to recall enhancement. This fast transition is of relevance for daily life, in which retrieval is often selective and delayed.","link":"/opendata/kriechbaum-bauml-2023/"},{"title":"Kristjansson et al. (2014)","text":"Predators are known to select food of the same type in non-random sequences or “runs” that are longer than would be expected by chance. If prey are conspicuous, predators will switch between available sources, interleaving runs of different prey types. However, when prey are cryptic, predators tend to focus on one food type at a time, effectively ignoring equally available sources. This latter finding is regarded as a key indicator that animal foraging is strongly constrained by attention. It is unknown whether human foraging is equally constrained. Here, using a novel iPad task, we demonstrate for the first time that it is. Participants were required to locate and touch 40 targets from 2 different categories embedded within a dense field of distractors. 
When individual target items “popped out,” search was organized into multiple runs, with frequent switching between target categories. In contrast, as soon as focused attention was required to identify individual targets, participants typically exhausted one entire category before beginning to search for the other. This commonality in animal and human foraging is compelling given the additional cognitive tools available to humans, and suggests that attention constrains search behavior in a similar way across a broad range of species.","link":"/opendata/kristjansson-et-al-2014/"},{"title":"Kristjansson & Kristjansson (2023)","text":"Go/No-Go responses in visual search yield different estimates of the properties of visual search than more standard present and absent tasks. Such minor methodological tweaks have a surprisingly large effect on measures that have, for the last half-century or so, formed the backbone of prominent theories of visual attention. Secondly, priming effects in visual search have a dominating influence on visual search, accounting for effects that have been attributed to top-down guidance in standard theories. Priming effects in visual search have never been investigated for Go/No-Go present/absent decisions. Here, Go/No-Go tasks were used to assess visual search for an odd-one-out face, defined either by color or facial expression. The Go/No-Go responses for the color-based task were very fast for both present and absent responses, and interestingly, “Go” responses were even faster in the target-absent case. The “Go” responses were, on the other hand, much slower for expression and became even slower with increased set-size, particularly for the target-absent response. Priming effects were considerable for the feature search, but for expression, priming was strong on target-absent trials yet very weak on target-present trials, arguing that repetition priming for this search mainly reflects priming of context rather than target features. 
Overall, the results reinforce the point that Go/No-Go tasks are highly informative for theoretical accounts of visual attention, and cast a new light on attentional priming.","link":"/opendata/kristjansson-kristjansson-2023/"},{"title":"Kroker et al. (2022)","text":"The framing effect is a bias that affects decision-making depending on whether the available options are presented with positive or negative connotations. Even when the outcome of two choices is equivalent, people have a strong tendency to avoid the negatively framed option because losses are perceived as about twice as salient as gains of the same amount (i.e., loss aversion). The ventromedial prefrontal cortex (vmPFC) is crucial for rational decision-making, and dysfunctions in this region have been linked to cognitive biases, impulsive behavior and gambling addiction. Using a financial decision-making task in combination with magnetoencephalographic neuroimaging, we show that excitatory compared to inhibitory non-invasive transcranial direct current stimulation (tDCS) of the vmPFC reduces framing effects while improving the assessment of loss probabilities, ultimately leading to increased overall gains. Behavioral and neural data consistently suggest that this improvement in rational decision-making is predominantly a consequence of reduced loss aversion. These findings recommend further research towards clinical applications of vmPFC-tDCS in addictive disorders.","link":"/opendata/kroker-et-al-2022/"},{"title":"Kucina et al. (2022)","text":"Standard, well-established cognitive tasks that produce reliable effects in group comparisons also lead to unreliable measurement when assessing individual differences. This “reliability paradox” has been demonstrated in decision-conflict tasks such as the Simon, Flanker, and Stroop tasks, which measure various aspects of cognitive control. 
We aimed to address this paradox by implementing carefully calibrated versions of the standard tests with an additional manipulation to encourage processing of conflicting information, as well as combinations of standard tasks. A series of experiments concluded that a Flanker task and a combined Simon and Stroop task with the additional manipulation produced reliable estimates of individual differences in under 100 trials per task, which markedly improves on the reliability seen in benchmark Flanker, Simon, and Stroop data. We make the new tasks freely available and discuss both theoretical and applied implications regarding how the cognitive testing of individual differences is carried out.","link":"/opendata/kucina-et-al-2022/"},{"title":"Kuhrt et al. (2023)","text":"Contrary to the law of less work, individuals with high levels of need for cognition and self-control tend to choose harder tasks more often. While both traits can be integrated into a core construct of dispositional cognitive effort investment, its relation to actual cognitive effort investment remains unclear. As individuals with high levels of cognitive effort investment are characterized by a high intrinsic motivation towards effortful cognition, they would be less likely to increase their effort based on expected payoff, but rather based on increasing demand. In the present study, we measured actual effort investment on multiple dimensions, i.e., subjective load, reaction time, accuracy, early and late frontal midline theta power, N2 and P3 amplitude, and pupil dilation. In a sample of N = 148 participants, we examined the relationship of dispositional cognitive effort investment and effort indices during a flanker and an n-back task with varying demand and payoff. Exploratorily, we examined this relationship for the two subdimensions cognitive motivation and effortful-self-control as well. In both tasks, effort indices were sensitive to demand and partly to payoff. 
The analyses revealed a main effect of cognitive effort investment for accuracy (n-back task), interaction effects with payoff for reaction time (n-back and flanker task) and P3 amplitude (n-back task) and demand for early frontal midline theta power (flanker task). Taken together, our results partly support the notion that individuals with high levels of cognitive effort investment exert effort more efficiently. Moreover, the notion that these individuals exert effort regardless of payoff is partly supported, too. This may further our understanding of the conditions under which person-situation interactions occur, i.e. the conditions under which situations determine effort investment in goal-directed behavior more than personality, and vice versa.","link":"/opendata/kuhrt-et-al-2023/"},{"title":"Kumano et al. (2021)","text":"In everyday life, people sometimes find themselves making decisions on behalf of others, taking risks on another’s behalf, accepting the responsibility for these choices and possibly suffering regret for what they could have done differently. Previous research has extensively studied how people deal with risk when making decisions for others or when being observed by others. Here, we asked whether making decisions for present others is affected by regret avoidance. We studied value-based decision making under uncertainty, manipulating both whether decisions benefited the participant or a partner (beneficiary effect) and whether the partner watched the participant’s choices (audience effect) and their factual and counterfactual outcomes. Computational behavioural analysis revealed that participants were less mindful of regret (and more strongly driven by bigger risks) when choosing for others vs for themselves. Conversely, they chose more conservatively (regarding both regret and risk) when being watched vs alone. 
The effects of beneficiary and audience on anticipated regret counteracted each other, suggesting that participants’ financial and reputational interests impacted the feeling of regret independently.","link":"/opendata/kumano-et-al-2021/"},{"title":"Kuper-Smith & Korn (2023)","text":"Decisions in social dilemmas lead to outcomes for oneself and others. These outcomes can be gains or losses, yet we lack a full understanding of how people’s decisions depend on which outcomes are above or below zero. We systematically varied whether the outcomes of social dilemmas (Prisoner’s Dilemma, Stag-Hunt, Chicken) were losses, gains, or combinations thereof. Across 7 experiments (4 preregistered; N Offline = 197, N Online = 1,653), participants consistently tried to avoid losses altogether (loss avoidance), but they did not try to minimise losses (loss aversion). If cooperation avoided losses, people cooperated more; if defection avoided losses, people defected more, even if this imposed a loss on someone else. Loss avoidance was larger for one-shot than for iterated games and was present in all games studied. Our results suggest that loss avoidance, rather than loss aversion, systematically influences how people cooperate.","link":"/opendata/kuper-smith-korn-2023/"},{"title":"Kurtenbach et al. (2022)","text":"Performance during instrumental learning is commonly believed to reflect the knowledge that has been acquired up to that point. However, recent work in rodents found that instrumental performance was enhanced during periods when reinforcement was withheld, relative to periods when reinforcement was provided. This suggests that reinforcement may mask acquired knowledge and lead to impaired performance. In the present study, we investigated whether such a beneficial effect of removing reinforcement translates to humans. 
Specifically, we tested whether performance during learning was improved during non-reinforced relative to reinforced task periods using signal detection theory and a computational modelling approach. To this end, 60 healthy volunteers performed a novel visual go/no-go learning task with deterministic reinforcement. To probe acquired knowledge in the absence of reinforcement, we interspersed blocks without feedback. In these non-reinforced task blocks, we found an increased d', indicative of enhanced instrumental performance. However, computational modelling showed that this improvement in performance was not due to an increased sensitivity of decision making to learnt values, but to a more cautious mode of responding, as evidenced by a reduction of a general response bias. Together with an initial tendency to act, this is sufficient to drive differential changes in hit and false alarm rates that jointly lead to an increased d'. To conclude, the improved instrumental performance in the absence of reinforcement observed in studies using asymmetrically reinforced go/no-go tasks may reflect a change in response bias rather than unmasking latent knowledge.","link":"/opendata/kurtenbach-et-al-2022/"},{"title":"Kvam et al. (2023)","text":"People discount both future outcomes that could happen and past outcomes that could have happened according to how far away they are in time. A common finding is that future outcomes are often preferred to past ones when the payoffs and temporal distance (how long ago/until they occur) are matched, referred to as temporal value asymmetry. In this article, we examine the consistency of this effect by examining the effect of manipulating the magnitude and delays of past and future payoffs on participants’ choices and challenge the claim that differences in value are primarily due to differences in discounting rates for past and future events. 
We find reversals of the temporal value asymmetry when payoffs are low and when temporal distance is large, suggesting that people have different sensitivity to the magnitude of past and future payoffs. We show that these effects can be accommodated in a direct difference model of intertemporal choice but not in the most common discounting models (hyperboloid), suggesting that both temporal distance and payoff magnitude carry independent influences on the subjective value of past and future outcomes. Finally, we explore how these tendencies to represent past and future outcome values are related to one another and to individual differences in personality and psychological traits, showing how these measures cluster according to whether they measure processes related to past/future events, payoffs/delays, and whether they are behavioral/self-report measures.","link":"/opendata/kvam-et-al-2023/"},{"title":"Kwak et al. (2023)","text":"Visual perception is limited by spatial resolution, the ability to discriminate fine details. Spatial resolution not only declines with eccentricity but also differs for polar angle locations around the visual field, also known as ‘performance fields’. To compensate for poor peripheral resolution, we make rapid eye movements-saccades-to bring peripheral objects into high-acuity foveal vision. Already before saccade onset, visual attention shifts to the saccade target location and prioritizes visual processing. This presaccadic shift of attention improves performance in many visual tasks, but whether it changes resolution is unknown. Here, we investigated whether presaccadic attention sharpens peripheral spatial resolution; and if so, whether such effect interacts with performance fields asymmetries. We measured acuity thresholds in an orientation discrimination task during fixation and saccade preparation around the visual field. 
The results revealed that presaccadic attention sharpens acuity, which can facilitate a smooth transition from peripheral to foveal representation. This acuity enhancement is similar across the four cardinal locations; thus, the typically robust effect of presaccadic attention does not change polar angle differences in resolution.","link":"/opendata/kwak-et-al-2023/"},{"title":"Lamba et al. (2020)","text":"Very little is known about how individuals learn under uncertainty when other people are involved. We propose that humans are particularly tuned to social uncertainty, which is especially noisy and ambiguous. Individuals exhibiting less tolerance for uncertainty, such as those with anxiety, may have greater difficulty learning in uncertain social contexts and therefore provide an ideal test population to probe learning dynamics under uncertainty. Using a dynamic trust game and a matched nonsocial task, we found that healthy subjects (n = 257) were particularly good at learning under negative social uncertainty, swiftly figuring out when to stop investing in an exploitative social partner. In contrast, subjects with anxiety (n = 97) overinvested in exploitative partners. Computational modeling attributed this pattern to a selective reduction in learning from negative social events and a failure to enhance learning as uncertainty rises-two mechanisms that likely facilitate adaptive social choice.","link":"/opendata/lamba-et-al-2020/"},{"title":"Lamba et al. (2023)","text":"People learn adaptively from feedback, but the rate of such learning differs drastically across individuals and contexts. Here we examine whether this variability reflects differences in what is learned. Leveraging a neurocomputational approach that merges fMRI and an iterative reward learning task, we link the specificity of credit assignment-how well people are able to appropriately attribute outcomes to their causes-to the precision of neural codes in the prefrontal cortex (PFC). 
Participants credit task-relevant cues more precisely in social compared to nonsocial contexts, a process that is mediated by high-fidelity (i.e., distinct and consistent) state representations in the PFC. Specifically, the medial PFC and orbitofrontal cortex work in concert to match the neural codes from feedback to those at choice, and the strength of these common neural codes predicts credit assignment precision. Together this work provides a window into how neural representations drive adaptive learning.","link":"/opendata/lamba-et-al-2023/"},{"title":"Lange et al. (2021)","text":"Making good decisions requires updating beliefs according to new evidence. This is a dynamical process that is prone to biases: in some cases, beliefs become entrenched and resistant to new evidence (leading to primacy effects), while in other cases, beliefs fade over time and rely primarily on later evidence (leading to recency effects). How and why either type of bias dominates in a given context is an important open question. Here, we study this question in classic perceptual decision-making tasks, where, puzzlingly, previous empirical studies differ in the kinds of biases they observe, ranging from primacy to recency, despite seemingly equivalent tasks. We present a new model, based on hierarchical approximate inference and derived from normative principles, that not only explains both primacy and recency effects in existing studies, but also predicts how the type of bias should depend on the statistics of stimuli in a given task. We verify this prediction in a novel visual discrimination task with human observers, finding that each observer's temporal bias changed as the result of changing the key stimulus statistics identified by our model. The key dynamic that leads to a primacy bias in our model is an overweighting of new sensory information that agrees with the observer's existing belief - a type of confirmation bias. 
By fitting an extended drift-diffusion model to our data we rule out an alternative explanation for primacy effects due to bounded integration. Taken together, our results resolve a major discrepancy among existing perceptual decision-making studies, and suggest that a key source of bias in human decision-making is approximate hierarchical inference.","link":"/opendata/lange-et-al-2021/"},{"title":"Larsen et al. (2022)","text":"Hallucinations are characterized by disturbances of perceptual processes involved in decision-making about environmental stimuli. Here, we examine whether cognitive and computational processes by which sensory information is integrated may offer insight into the perceptual mechanisms of hallucinatory symptoms. We used a multi-element perceptual averaging task in which observers made dichotomous judgments about the “average color” (red or blue) of an array of stimuli in trials that varied in the strength (mean) and reliability (variance) of the decision-relevant perceptual evidence. Generally, observers excluded or down-weighted extreme (outlying) perceptual evidence akin to a statistician excluding outlying data points; however, individuals prone to hallucinations afforded more weight to more extreme or untrustworthy evidence. Computational modeling showed that individuals prone to hallucinations tended not to use the optimal model in which evidence is integrated as a function of the log odds of each perceptual option leading to “robust averaging”. Finally, observers generally adapted to trials with unreliable evidence by increasingly downweighting extreme evidence, but the weighting strategy in hallucination prone individuals remained insensitive to the reliability of evidence. By showing that hallucination proneness is associated with reduced attenuation of untrustworthy evidence in perceptual decision-making, our findings suggest a novel perceptual mechanism underlying hallucinations. 
Our findings also provide support for the view that hallucination-proneness relates to alterations in the perceptual systems that track statistical regularities in environmental stimuli.","link":"/opendata/larsen-et-al-2022/"},{"title":"Latzman et al. (2020)","text":"A robust literature demonstrates that psychopathology and personality pathology are well-represented within quantitatively-derived, hierarchical dimensional models. Nevertheless, the location of core traits comprising psychopathic personality (psychopathy) as defined by the triarchic model has not been clearly explicated. We extended hierarchical structural models of personality pathology to include triarchic psychopathy trait dimensions (boldness, meanness, disinhibition) to interface the hierarchical framework of pathological personality dimensions with basic psychopathy trait dimensions. Using data from a racially diverse undergraduate sample (N = 749), “bass-ackwards” analyses revealed a coherently organized hierarchical structure of personality pathology. Psychopathy dimensions were clearly situated within levels of the hierarchy extending beyond the highest, undifferentiated general-factor level. A broad externalizing factor emerged at level 2, which bifurcated into callous-disinhibition and fearless dominance subfactors at level 3 – encompassing psychopathic traits of meanness and disinhibition (thought to represent the trait commonality between antisocial personality disorder and psychopathy) and boldness (thought to differentiate psychopathy from antisocial personality disorder), respectively, at the final two levels of the hierarchy. These results position triarchic psychopathy traits within an extended externalizing spectrum that accommodates boldness-related content.","link":"/opendata/latzman-et-al-2020/"},{"title":"Lawrence et al. (2022)","text":"Theoretical accounts have linked anxiety to intolerance of ambiguity. However, this relationship has not been well operationalized empirically. 
Here, we used computational and neuro-imaging methods to characterize anxiety-related differences in aversive decision-making under ambiguity and associated patterns of cortical activity. Adult human participants chose between two urns on each trial. The ratio of tokens (Os and Xs) in each urn determined the probability of electrical stimulation receipt. A number above each urn indicated the magnitude of stimulation that would be received if a shock was delivered. On ambiguous trials, one of the two urns had tokens occluded. By varying the number of tokens occluded, we manipulated the extent of missing information. At higher levels of missing information, there is greater second-order uncertainty, i.e., more uncertainty as to the probability of pulling a given type of token from the urn. Participants demonstrated avoidance of ambiguous options, which increased with the level of missing information. The extent of information-level-dependent ambiguity aversion was significantly positively correlated with trait anxiety. Activity in both the dorsal anterior cingulate cortex and inferior frontal sulcus during the decision-making period increased as a function of missing information. Greater engagement of these regions, on high missing information trials, was observed when participants went on to select the ambiguous option; this was especially apparent in high trait anxious individuals. These findings are consistent with individuals vulnerable to anxiety requiring greater activation of frontal regions supporting rational decision-making to overcome a predisposition to engage in ambiguity avoidance at high levels of missing information.","link":"/opendata/lawrence-et-al-2022/"},{"title":"Le-Bouc & Pessiglione (2022)","text":"Humans procrastinate despite being aware of potential adverse consequences. Yet, the neuro-computational mechanisms underlying procrastination remain poorly understood. 
Here, we use fMRI during intertemporal choice to inform a computational model that predicts procrastination behavior in independent tests. Procrastination is assessed in the laboratory as the preference for performing an effortful task on the next day as opposed to immediately, and at home as the delay taken in returning completed administrative forms. These procrastination behaviors are respectively modeled as unitary and repeated decisions to postpone a task until the next time step, based on a net expected value that integrates reward and effort attributes, both discounted with delay. The key feature that is associated with procrastination behavior across individuals (both in-lab and at-home) is the extent to which the expected effort cost (signaled by the dorsomedial prefrontal cortex) is attenuated by the delay before task completion. Thus, procrastination might stem from a cognitive bias that would make doing a task later (compared to now) appear as much less effortful but not much less rewarding.","link":"/opendata/le-bouc-pessiglione-2022/"},{"title":"Le-Pelley & Newell (2023)","text":"In two experiments we examined the influence of ‘history-driven’ attentional biases on choice behavior. In Experiment 1 we used a value-modulated attentional capture procedure to induce an automatic reward-related attentional bias, and found that this bias shaped choice in a subsequent task in which participants were required to pick the highest number from a briefly displayed choice array. In Experiment 2 we investigated the influence of a working memory manipulation, and found that choice in the number-selection task was influenced by the current (and prior) contents of memory, consistent with an influence of memory-driven attentional bias on information encoding. 
Our findings indicate that history-driven attentional biases can translate to an influence on overt, downstream processes of behavioural choice, and should be incorporated into models of the interaction between attention and choice.","link":"/opendata/le-pelley-newell-2023/"},{"title":"Lefebvre et al. (2017)","text":"When forming and updating beliefs about future life outcomes, people tend to consider good news and to disregard bad news. This tendency is assumed to support the optimism bias. Whether this learning bias is specific to ‘high-level’ abstract belief update or a particular expression of a more general ‘low-level’ reinforcement learning process is unknown. Here we report evidence in favour of the second hypothesis. In a simple instrumental learning task, participants incorporated better-than-expected outcomes at a higher rate than worse-than-expected ones. In addition, functional imaging indicated that inter-individual difference in the expression of optimistic update corresponds to enhanced prediction error signalling in the reward circuitry. Our results constitute a step towards the understanding of the genesis of optimism bias at the neurocomputational level.","link":"/opendata/lefebvre-et-al-2017/"},{"title":"Lehmann et al. (2019)","text":"In many daily tasks, we make multiple decisions before reaching a goal. In order to learn such sequences of decisions, a mechanism to link earlier actions to later reward is necessary. Reinforcement learning (RL) theory suggests two classes of algorithms solving this credit assignment problem: In classic temporal-difference learning, earlier actions receive reward information only after multiple repetitions of the task, whereas models with eligibility traces reinforce entire sequences of actions from a single experience (one-shot). Here, we show one-shot learning of sequences. We developed a novel paradigm to directly observe which actions and states along a multi-step sequence are reinforced after a single reward. 
By focusing our analysis on those states for which RL with and without eligibility trace make qualitatively distinct predictions, we find direct behavioral (choice probability) and physiological (pupil dilation) signatures of reinforcement learning with eligibility trace across multiple sensory modalities.","link":"/opendata/lehmann-et-al-2019/"},{"title":"Lei & Solway (2022)","text":"A large literature has accumulated suggesting that human and animal decision making is driven by at least two systems, and that important functions of these systems can be captured by reinforcement learning algorithms. The “model-free” system caches and uses stimulus-value or stimulus-response associations, and the “model-based” system implements more flexible planning using a model of the world. However, it is not clear how the two systems interact during deliberation and how a single decision emerges from this process, especially when they disagree. Most previous work has assumed that while the systems operate in parallel, they do so independently, and they combine linearly to influence decisions. Using an integrated reinforcement learning/drift-diffusion model, we tested the hypothesis that the two systems interact in a non-linear fashion similar to other situations with cognitive conflict. We differentiated two forms of conflict: action conflict, a binary state representing whether the systems disagreed on the best action, and value conflict, a continuous measure of the extent to which the two systems disagreed on the difference in value between the available options. We found that decisions with greater value conflict were characterized by reduced model-based control and increased caution both with and without action conflict. Action conflict itself (the binary state) acted in the opposite direction, although its effects were less prominent. 
We also found that between-system conflict was highly correlated with within-system conflict, and although it is less clear a priori why the latter might influence the strength of each system above its standard linear contribution, we could not rule it out. Our work highlights the importance of non-linear conflict effects, and provides new constraints for more detailed process models of decision making. It also presents new avenues to explore with relation to disorders of compulsivity, where an imbalance between systems has been implicated.","link":"/opendata/lei-solway-2022/"},{"title":"Leng et al. (2021)","text":"To invest effort into any cognitive task, people must be sufficiently motivated. Whereas prior research has focused primarily on how the cognitive control required to complete these tasks is motivated by the potential rewards for success, it is also known that control investment can be equally motivated by the potential negative consequence for failure. Previous theoretical and experimental work has yet to examine how positive and negative incentives differentially influence the manner and intensity with which people allocate control. Here, we develop and test a normative model of control allocation under conditions of varying positive and negative performance incentives. Our model predicts, and our empirical findings confirm, that rewards for success and punishment for failure should differentially influence adjustments to the evidence accumulation rate versus response threshold, respectively. This dissociation further enabled us to infer how motivated a given person was by the consequences of success versus failure.","link":"/opendata/leng-et-al-2021/"},{"title":"Leong et al. (2021)","text":"People’s perceptual reports are biased toward percepts they are motivated to see. The arousal system coordinates the body’s response to motivationally significant events and is well positioned to regulate motivational effects on perceptual judgments. 
However, it remains unclear whether arousal would enhance or reduce motivational biases. Here, we measured pupil dilation as a measure of arousal while participants (N = 38) performed a visual categorization task. We used monetary bonuses to motivate participants to perceive one category over another. Even though the reward-maximizing strategy was to perform the task accurately, participants were more likely to report seeing the desirable category. Furthermore, higher arousal levels were associated with making motivationally biased responses. Analyses using computational models suggested that arousal enhanced motivational effects by biasing evidence accumulation in favor of desirable percepts. These results suggest that heightened arousal biases people toward what they want to see and away from an objective representation of the environment.","link":"/opendata/leong-et-al-2021/"},{"title":"Lewis et al. (2023)","text":"In cost-benefit decision-making, women and men often show different trade-offs. However, surprisingly little is known about sex differences in instrumental tasks, where physical effort is exerted to gain rewards. To this end, we tested 81 individuals (47 women) with an effort allocation task, where participants had to repeatedly press a button to collect food and money tokens. We analyzed the motivational phases of invigoration and effort maintenance with varying reward magnitude, difficulty, and reward type. Whereas women and men did not differ in invigoration, we found that women showed higher effort maintenance as well as higher subjective wanting and exertion ratings for small rewards compared with men. Notably, men increased their effort more than women for higher rewards to match women’s levels of performance. Crucially, we found no sex differences depending on reward type or difficulty, indicating that sex differences were specific to the encoding of the magnitude of benefits, not costs. 
To summarize, women exerted higher physical effort for small rewards, which corresponded with an elevated subjective value in women compared with men. Therefore, sex differences in perceived reward magnitude may contribute to differential behavioral preferences, highlighting the potential of cost-benefit decision-making to provide insight into the underlying mechanisms.","link":"/opendata/lewis-et-al-2023/"},{"title":"Li & Ma (2020)","text":"Decision confidence reflects our ability to evaluate the quality of decisions and guides subsequent behavior. Experiments on confidence reports have almost exclusively focused on two-alternative decision-making. In this realm, the leading theory is that confidence reflects the probability that a decision is correct (the posterior probability of the chosen option). There is, however, another possibility, namely that people are less confident if the best two options are closer to each other in posterior probability, regardless of how probable they are in absolute terms. This possibility has not previously been considered because in two-alternative decisions, it reduces to the leading theory. Here, we test this alternative theory in a three-alternative visual categorization task. We found that confidence reports are best explained by the difference between the posterior probabilities of the best and the next-best options, rather than by the posterior probability of the chosen (best) option alone, or by the overall uncertainty (entropy) of the posterior distribution. Our results upend the leading notion of decision confidence and instead suggest that confidence reflects the observer’s subjective probability that they made the best possible decision.","link":"/opendata/li-ma-2020/"},{"title":"Li & Ma (2021)","text":"When people view a consumable item for a longer amount of time, they choose it more frequently; this also seems to be the direction of causality. 
The leading model of this effect is a drift-diffusion model with a fixation-based attentional bias. Here, we propose an explicitly Bayesian account for the same data. This account is based on the notion that the brain builds a posterior belief over the value of an item in the same way it would over a sensory variable. As the agent gathers evidence about the item from sensory observations and from retrieved memories, the posterior distribution narrows. We further postulate that the utility of an item is a weighted sum of the posterior mean and the negative posterior standard deviation, with the latter accounting for risk aversion. Fixating for longer can increase or decrease the posterior mean, but will inevitably lower the posterior standard deviation. This model fits the data better than the original attentional drift-diffusion model but worse than a variant with a collapsing bound. We discuss the often overlooked technical challenges in fitting models simultaneously to choice and response time data in the absence of an analytical expression. Our results hopefully contribute to emerging accounts of valuation as an inference process.","link":"/opendata/li-ma-2021/"},{"title":"Li et al. (2022)","text":"While making decisions, we often rely on past experiences to guide our choices. However, not all experiences are remembered equally well, and some elements of an experience are more memorable than others. Thus, the intrinsic memorability of past experiences may bias our decisions. Here, we hypothesized that individuals would tend to choose more memorable options than less memorable ones. We investigated the effect of item memorability on choice in two experiments. First, using food images, we found that the same items were consistently remembered, and others consistently forgotten, across participants. 
However, contrary to our hypothesis, we found that participants did not prefer or choose the more memorable over the less memorable items when choice options were matched for the individuals’ valuation of the items. Second, we replicated these findings in an alternate stimulus domain, using words that described the same food items. These findings suggest that stimulus memorability does not play a significant role in determining choice based on subjective value.","link":"/opendata/li-et-al-2022/"},{"title":"Li & McClelland (2022)","text":"When we plan for long-range goals, proximal information cannot be exploited in a blindly myopic way, as relevant future information must also be considered. But when a subgoal must be resolved first, irrelevant future information should not interfere with the processing of more proximal, subgoal-relevant information. We explore the idea that decision making in both situations relies on the flexible modulation of the degree to which different pieces of information under consideration are weighted, rather than explicitly decomposing a problem into smaller parts and solving each part independently. We asked participants to find the shortest goal-reaching paths in mazes and modeled their initial path choices as a noisy, weighted information integration process. In a base task where choosing the optimal initial path required weighting starting-point and goal-proximal factors equally, participants did take both constraints into account, with participants who made more accurate choices tending to exhibit more balanced weighting. The base task was then embedded as an initial subtask in a larger maze, where the same two factors constrained the optimal path to a subgoal, and the final goal position was irrelevant to the initial path choice. In this more complex task, participants’ choices reflected predominant consideration of the subgoal-relevant constraints, but also some influence of the initially-irrelevant final goal. 
More accurate participants placed much less weight on the optimality-irrelevant goal and again tended to weight the two initially-relevant constraints more equally. These findings suggest that humans may rely on a graded, task-sensitive weighting of multiple constraints to generate approximately optimal decision outcomes in both hierarchical and non-hierarchical goal-directed tasks.","link":"/opendata/li-mcclelland-2022/"},{"title":"Li et al. (2023)","text":"The hypothesis that individuals can accurately represent temporal information within approximately 3 s is the premise of several theoretical models and empirical studies in the field of temporal processing. The significance of accurately representing time within 3 s and the universality of the overestimation contrast dramatically. To clarify whether this overestimation arises from an inability to accurately represent time or a response bias, we systematically examined whether feedback reduces overestimation at the 3 temporal processing stages of timing (encoding), working memory, and decisions proposed by the scalar timing model. Participants reproduced the time interval between 2 circles with or without feedback, while the electroencephalogram (EEG) was synchronously recorded. Behavioral results showed that feedback shortened reproduced times and significantly minimized overestimation. EEG results showed that feedback significantly decreased the amplitude of contingent negative variation (CNV) in the decision stage but did not modulate the CNV amplitude in the encoding stage or the P2-P3b amplitudes in the working memory stage. These results suggest that overestimation arises from response bias when individuals convert an accurate representation of time into behavior. 
Our study provides electrophysiological evidence to support the conception that short intervals under approximately 3 s can be accurately represented as “temporal gestalt.”","link":"/opendata/li-et-al-2023/"},{"title":"Ligneul (2019)","text":"The Iowa Gambling Task (IGT) is one of the most common paradigms used to assess decision-making and executive functioning in neurological and psychiatric disorders. Several reinforcement-learning (RL) models were recently proposed to refine the qualitative and quantitative inferences that can be made about these processes based on IGT data. Yet, these models do not account for the complex exploratory patterns which characterize participants’ behavior in the task. Using a dataset of more than 500 subjects, we demonstrate the existence of sequential exploration in the IGT, and we describe a new computational architecture disentangling exploitation, random exploration and sequential exploration in this large population of participants. The new Value plus Sequential Exploration (VSE) architecture provided a better fit than previous models. Parameter recovery, model recovery and simulation analyses confirmed the superiority of the VSE scheme. Furthermore, using the VSE model, we confirmed the existence of a significant reduction in directed exploration across the lifespan in the IGT, as previously reported with other paradigms. Finally, we provide a user-friendly toolbox enabling researchers to easily and flexibly fit computational models to IGT data, hence promoting reanalysis of the numerous datasets acquired in various populations of patients and contributing to the development of computational psychiatry.","link":"/opendata/ligneul-2019/"},{"title":"Ligneul et al. (2022)","text":"Estimating the controllability of the environment enables agents to better predict upcoming events and decide when to engage controlled action selection. How does the human brain estimate controllability? 
Trial-by-trial analysis of choices, decision times and neural activity in an explore-and-predict task demonstrate that humans solve this problem by comparing the predictions of an actor model with those of a reduced spectator model of their environment. Neural blood oxygen level-dependent responses within striatal and medial prefrontal areas tracked the instantaneous difference in the prediction errors generated by these two statistical learning models. Blood oxygen level-dependent activity in the posterior cingulate, temporoparietal and prefrontal cortices covaried with changes in estimated controllability. Exposure to inescapable stressors biased controllability estimates downward and increased reliance on the spectator model in an anxiety-dependent fashion. Taken together, these findings provide a mechanistic account of controllability inference and its distortion by stress exposure.","link":"/opendata/ligneul-et-al-2022/"},{"title":"Lin et al. (2020)","text":"Neutral stimuli can acquire value when people learn to associate them with positive or negative outcomes (i.e., gain versus loss associations). Acquired value has been shown to affect how gain and loss associated stimuli are attended, remembered, and acted upon. Here we investigate a potential and previously unreported learning asymmetry in the acquisition of gain and loss associations that may have consequences for subsequent cognitive processing. In our first study, we provide meta-analytic evidence that in probabilistic learning tasks that pair neutral stimuli with intermixed win, loss, and no-change outcomes, people learn win-associations better than loss-associations despite the symmetrical task structure and symmetrical outcome probabilities. 
Then in two empirical studies, we demonstrate that this learning asymmetry is evident when acquiring gain versus loss associations to gray-scale landscape images whether participants earn points or money (Study 2), and whether or not they receive explicit instructions about the outcome contingencies (Study 3). Furthermore, performance on a post-learning source recognition task was also asymmetrical: explicit knowledge of associated outcomes was superior for optimal gain than optimal loss scenes. These findings indicate the acquisition of gain and loss associations need not be equivalent, despite symmetrical outcome probabilities, equivalent numbers of learning trials, and a constant learning criterion. Consequently, learning asymmetries could contribute to valence and optimality differences in subsequent cognitive processing.","link":"/opendata/lin-et-al-2020/"},{"title":"Lin et al. (2022)","text":"People who take on challenges and persevere longer are more likely to succeed in life. But individuals often avoid exerting effort, and there is limited experimental research investigating whether we can learn to value effort. Because existing research focuses on enhancing cognitive performance rather than increasing the value of effort, it also remains unclear whether individuals can learn to care more about challenging themselves than performing well. We developed a paradigm to test an intuitive idea: that people can learn to value effort and will seek effortful challenges if directly incentivized to do so. What’s more, we dissociate the effects of rewarding people for choosing effortful challenges and performing well. Results revealed that rewarding effort increased people’s willingness to choose harder tasks, even when rewards were no longer offered (near-transfer). Critically, the effects of this brief manipulation also carried over to an unrelated and unrewarded task (far-transfer). 
Our results suggest people can learn to value effort and that this valuation can generalise to unfamiliar and unrewarded tasks.","link":"/opendata/lin-et-al-2022/"},{"title":"Lin & von-Helversen (2023)","text":"Previous research showed that animals adopt different foraging strategies in different environmental settings. However, research on whether humans adapt their foraging strategies to the foraging environment has shown little evidence of a change in strategies. This study aims to investigate whether humans will adapt their foraging strategies when performance differences between strategies are large and why participants may fixate on a single strategy. We conducted two foraging experiments and identified the strategies used by the participants. Most participants used the Give-Up Time (GUT) strategy regardless of the environment they encountered. GUT was used even in environments where other strategies such as the Fixed-Time strategy or the Fixed-Number strategy performed better. Using computer simulations, we further examined the conditions under which the GUT strategy will perform well compared to the other strategies. We found that even though the GUT strategy is not always the best strategy, it performs consistently at a satisfactory level and has an advantage when variance in the quality of patches is introduced. The consistently good performance of the GUT strategy could thus explain participants’ lack of strategy switching.","link":"/opendata/lin-von-helversen-2023/"},{"title":"Linka et al. (2023)","text":"Adult gaze behaviour towards naturalistic scenes is highly biased towards semantic object classes. Little is known about the ontological development of these biases, nor about group-level differences in gaze behaviour between adults and preschoolers. Here, we let preschoolers (n = 34, age 5 years) and adults (n = 42, age 18–59 years) freely view 40 complex scenes containing objects with different semantic attributes to compare their fixation behaviour. 
Results show that preschool children allocate a significantly smaller proportion of dwell time and first fixations to Text and instead fixate Faces, Touched objects, Hands and Bodies more. A predictive model of object fixations controlling for a range of potential confounds suggests that most of these differences can be explained by drastically reduced text salience in preschoolers and that this effect is independent of low-level salience. These findings are in line with a developmental attentional antagonism between text and body parts (touched objects and hands in particular), which resonates with recent findings regarding ‘cortical recycling’. We discuss this and other potential mechanisms driving salience differences between children and adults.","link":"/opendata/linka-et-al-2023/"},{"title":"Littman et al. (2023)","text":"The affordances task serves as an important tool for the assessment of cognition and visuomotor functioning, and yet its test-retest reliability has not been established. In the affordances task, participants attend to a goal-directed task (e.g., classifying manipulable objects such as cups and pots) while suppressing their stimulus-driven, irrelevant reactions afforded by these objects (e.g., grasping their handles). This results in cognitive conflicts manifesting at the task level and the response level. In the current study, we assessed the reliability of the affordances task for the first time. While doing so, we referred to the “reliability paradox,” according to which behavioral tasks that produce highly replicable group-level effects often yield low test-retest reliability due to the inadequacy of traditional correlation methods in capturing individual differences between participants. Alongside the simple test-retest correlations, we employed a Bayesian generative model that was recently demonstrated to result in a more precise estimation of test-retest reliability. 
Two hundred and ninety-five participants completed an online version of the affordances task twice, with a one-week gap. Performance on the online version replicated results obtained under in-lab administrations of the task. While the simple correlation method resulted in weak test-retest measures of the different effects, the generative model yielded a good reliability assessment. The current results support the utility of the affordances task as a reliable behavioral tool for the assessment of group-level and individual differences in cognitive and visuomotor functioning. The results further support the employment of generative modeling in the study of individual differences.","link":"/opendata/littman-et-al-2023/"},{"title":"Liu & Li (2019)","text":"Although some previous studies have investigated time distortion in anxious patients, the interactive effects of trait and state anxiety on time perception remain an open question. In the present study, participants with high and low trait anxiety perceived 24 negative and 24 neutral words for 2 s in induced anxious and calm mood states, and their time perceptions were recorded by the time reproduction task. The results showed that high trait anxious individuals underestimated the 2-second duration while low trait anxious individuals overestimated the 2-second duration in the anxious state compared with those in the calm state. Meanwhile, in the anxious state, the timing error was absolute for high trait anxious individuals, whereas it was relative for low trait anxious individuals. The results show the interactive effects of trait and state anxiety on time perception. We suggest that the temporal distortion of the high trait anxious individuals in the anxious state might be due to an attentional dysfunction, whereas the temporal distortion of the low trait anxious individuals in the anxious state might be due to mood congruency.","link":"/opendata/liu-li-2019/"},{"title":"Locke et al. 
(2022)","text":"Perceptual confidence is an important internal signal about the certainty of our decisions and there is a substantial debate on how it is computed. We highlight three confidence metric types from the literature: observers either use 1) the full probability distribution to compute probability correct (Probability metrics), 2) point estimates from the perceptual decision process to estimate uncertainty (Evidence-Strength metrics), or 3) heuristic confidence from stimulus-based cues to uncertainty (Heuristic metrics). These metrics are rarely tested against one another, so we examined models of all three types on a suprathreshold spatial discrimination task. Observers were shown a cloud of dots sampled from a dot generating distribution and judged if the mean of the distribution was left or right of centre. In addition to varying the horizontal position of the mean, there were two sensory uncertainty manipulations: the number of dots sampled and the spread of the generating distribution. After every two perceptual decisions, observers made a confidence forced-choice judgement whether they were more confident in the first or second decision. Model results showed that the majority of observers were best-fit by either: 1) the Heuristic model, which used dot cloud position, spread, and number of dots as cues; or 2) an Evidence-Strength model, which computed the distance between the sensory measurement and discrimination criterion, scaled according to sensory uncertainty. An accidental repetition of some sessions also allowed for the measurement of confidence agreement for identical pairs of stimuli. This N-pass analysis revealed that human observers were more consistent than their best-fitting model would predict, indicating there are still aspects of confidence that are not captured by our modelling. As such, we propose confidence agreement as a useful technique for computational studies of confidence. 
Taken together, these findings highlight the idiosyncratic nature of confidence computations for complex decision contexts and the need to consider different potential metrics and transformations in the confidence computation.","link":"/opendata/locke-et-al-2022/"},{"title":"Lockwood et al. (2018)","text":"Sense of ownership is a ubiquitous and fundamental aspect of human cognition. Here we used model-based functional magnetic resonance imaging and a novel minimal ownership paradigm to probe the behavioural and neural mechanisms underpinning ownership acquisition for ourselves, friends and strangers. We find a self-ownership bias at multiple levels of behaviour from initial preferences to reaction times and computational learning rates. Ventromedial prefrontal cortex (vmPFC) and anterior cingulate sulcus (ACCs) responded more to self vs. stranger associations, but despite a pervasive neural bias to track self-ownership, no brain area tracked self-ownership exclusively. However, ACC gyrus (ACCg) specifically coded ownership prediction errors for strangers and ownership associative strength for friends and strangers but not for self. Core neural mechanisms for associative learning are biased to learn in reference to self but also engaged when learning in reference to others. In contrast, ACC gyrus exhibits specialization for learning about others.","link":"/opendata/lockwood-et-al-2018/"},{"title":"Lockwood et al. (2021)","text":"Social cohesion relies on prosociality in increasingly aging populations. Helping other people requires effort, yet how willing people are to exert effort to benefit themselves and others, and whether such behaviors shift across the life span, is poorly understood. Using computational modeling, we tested the willingness of 95 younger adults (18-36 years old) and 92 older adults (55-84 years old) to put physical effort into self- and other-benefiting acts. 
Participants chose whether to work and exert force (30%-70% of maximum grip strength) for rewards (2-10 credits) accrued for themselves or, prosocially, for another. Younger adults were somewhat selfish, choosing to work more at higher effort levels for themselves, and exerted less force in prosocial work. Strikingly, compared with younger adults, older people were more willing to put in effort for others and exerted equal force for themselves and others. Increased prosociality in older people has important implications for human behavior and societal structure.","link":"/opendata/lockwood-et-al-2021/"},{"title":"Lojowska et al. (2023)","text":"Humans are exposed to environmental and economic threats that can profoundly affect individual survival and group functioning. Although anecdotal evidence suggests that threat exposure can increase collective action, the effects of threat on decision-making have been mainly investigated at the individual level. Here we examine how threat exposure and concomitant physiological responses modulate cooperation in small groups. Individuals (N = 105, ages 18-34 years) in groups of three were exposed to threat of electric shocks while deciding how much to contribute to a public good. Threat of shock induced a state of physiological freezing and, compared with no-threat conditions, reduced free riding and enabled groups to maintain higher cooperation over time. Exploratory analyses revealed that more cooperative responses under threat were driven by stronger baseline prosociality, suggesting that habitual prosociality is reinforced under threat. The current results support the view that human groups respond to outside threat with increased cooperation.","link":"/opendata/lojowska-et-al-2023/"},{"title":"Lopez-Gamundi et al. (2023)","text":"The ability to weigh a reward against the cost of acquiring it is critical for decision-making. 
While the medial prefrontal cortex (mPFC) has been implicated in tracking both mental effort demands and net value of rewards, these findings primarily come from choice paradigms that confound increased task difficulty with decreased reward probability. To resolve this issue, we designed novel tasks that kept probability of success – and therefore probability of reward – constant between levels of effort demand. In two experiments, participants completed a novel effort-based reward task that manipulated effort demand and either reward magnitude or probability of success. Electroencephalogram (EEG) data was recorded to compare an electrophysiological index of mPFC function (frontal midline theta (FMT)) to an index of incentive salience (component P3) at both cue evaluation and feedback phases. We found no evidence that FMT tracked effort demands or net value during cue evaluation. At feedback, however, FMT power was enhanced for high compared to low effort trials, but not modulated by reward magnitude or probability. Conversely, P3 was sensitive to reward magnitude and probability at both cue and feedback phases and only integrated expended effort costs at feedback, such that P3 amplitudes continued to scale with reward magnitude and probability but were also increased for high compared to low effort reward feedback. These findings suggest that, in the absence of option comparison and unequal likelihood of success, the mPFC does not track net value of prospective effort-based rewards. Instead, expended cognitive effort potentiates FMT power and enhances the saliency of rewards at feedback.","link":"/opendata/lopez-gamundi-et-al-2023/"},{"title":"Lopez-Yepez et al. (2021)","text":"Choice history effects describe how future choices depend on the history of past choices. In experimental tasks this is typically framed as a bias because it often diminishes the experienced reward rates. 
However, in natural habitats, choices made in the past constrain choices that can be made in the future. For foraging animals, the probability of earning a reward in a given patch depends on the degree to which the animals have exploited the patch in the past. One problem with many experimental tasks that show choice history effects is that such tasks artificially decouple choice history from its consequences on reward availability over time. To circumvent this, we use a variable interval (VI) reward schedule that reinstates a more natural contingency between past choices and future reward availability. By examining the behavior of optimal agents in the VI task we discover that choice history effects observed in animals serve to maximize reward harvesting efficiency. We further distil the function of choice history effects by manipulating first- and second-order statistics of the environment. We find that choice history effects primarily reflect the growth rate of the reward probability of the unchosen option, whereas reward history effects primarily reflect environmental volatility. Based on observed choice history effects in animals, we develop a reinforcement learning model that explicitly incorporates choice history over multiple time scales into the decision process, and we assess its predictive adequacy in accounting for the associated behavior. We show that this new variant, known as the double trace model, has a higher performance in predicting choice data, and shows near optimal reward harvesting efficiency in simulated environments. These results suggest that choice history effects may be adaptive for natural contingencies between consumption and reward availability. This concept lends credence to a normative account of choice history effects that extends beyond its description as a bias.","link":"/opendata/lopez-yepez-et-al-2021/"},{"title":"Loued-Khenissi et al. (2023)","text":"Does gravity affect decision-making? 
This question comes into sharp focus as plans for interplanetary human space missions solidify. In the framework of Bayesian brain theories, gravity encapsulates a strong prior, anchoring agents to a reference frame via the vestibular system, informing their decisions and possibly their integration of uncertainty. What happens when such a strong prior is altered? We address this question using a self-motion estimation task in a space analog environment under conditions of altered gravity. Two participants were cast as remote drone operators orbiting Mars in a virtual reality environment on board a parabolic flight, where both hyper- and microgravity conditions were induced. From a first-person perspective, participants viewed a drone exiting a cave and had to first predict a collision and then provide a confidence estimate of their response. We evoked uncertainty in the task by manipulating the motion’s trajectory angle. Post-decision subjective confidence reports were negatively predicted by stimulus uncertainty, as expected. Uncertainty alone did not impact overt behavioral responses (performance, choice) differentially across gravity conditions. However microgravity predicted higher subjective confidence, especially in interaction with stimulus uncertainty. These results suggest that variables relating to uncertainty affect decision-making distinctly in microgravity, highlighting the possible need for automatized, compensatory mechanisms when considering human factors in space research.","link":"/opendata/loued-khenissi-et-al-2023/"},{"title":"Luettgau et al. (2020)","text":"Decision-making is guided by memories of option values. However, retrieving items from memory renders them malleable. Here, we show that merely retrieving values from memory and making a choice between options is sufficient both to induce changes to stimulus-reward associations in the hippocampus and to bias future decision-making. 
After allowing participants to make repeated choices between reward-conditioned stimuli, in the absence of any outcome, we observe that participants prefer stimuli they have previously chosen, and neglect previously unchosen stimuli, over otherwise identical-valued options. Using functional brain imaging, we show that decisions induce changes to hippocampal representations of stimulus-outcome associations. These changes are correlated with future decision biases. Our results indicate that choice-induced preference changes are partially driven by choice-induced modification of memory representations and suggest that merely making a choice - even without experiencing any outcomes - induces associative plasticity.","link":"/opendata/luettgau-et-al-2020/"},{"title":"Lukics & Lukács (2022)","text":"While several studies suggest that the nature and properties of the input have significant effects on statistical learning, they have rarely been investigated systematically. In order to understand how input characteristics and their interactions impact statistical learning, we explored the effects of modality (auditory vs. visual), presentation type (serial vs. simultaneous), domain (linguistic vs. non-linguistic), and training type (random, starting small, starting big) on artificial grammar learning in young adults (N = 360). With serial presentation of stimuli, learning was more effective in the auditory than in the visual modality. However, with simultaneous presentation of visual and serial presentation of auditory stimuli, the modality effect was not present. We found a significant domain effect as well: a linguistic advantage over nonlinguistic material, which was driven by the domain effect in the auditory modality. Overall, the auditory linguistic condition had an advantage over other modality-domain types. Training types did not have any overall effect on learning; starting big enhanced performance only in the case of serial visual presentation. 
These results show that input characteristics such as modality, presentation type, domain and training type influence statistical learning, and suggest that their effects are also dependent on the specific stimuli and structure to be learned.","link":"/opendata/lukics-lukacs-2022/"},{"title":"Lukinova & Erlich (2021)","text":"Stress is a normal part of our everyday lives. It alerts us to changes in our environment working as an early warning system. However, when stress is prolonged, it can become harmful. The deleterious effects of stress on brain function are well established: chronic stress significantly impairs cognitive function reducing our ability to solve problems and to regulate behavior and, therefore, may lead to more challenges that can further exacerbate stress. An important class of decisions that may be made under stress include those between rewards delivered immediately vs. those delivered in the future. Not considering or devaluing future outcomes (delay discounting) can result in adverse outcomes such as not buying health insurance, gambling or drug use. To date, however, little is known about how chronic stress influences economic decisions that differ in the time of outcome delivery. A handful of studies suggest that increased stress may lead to more impulsive choices in subjects of average socioeconomic status and stress levels. Here, we address this gap by using a longitudinal design to test a combination of decision-making tasks, questionnaires, saliva and hair samples within subject (N = 41, 34 with all stress measures) to determine whether chronic stress measures are associated with the economic choices under different time scales. We found that the degree to which people think it is worth waiting, i.e. individual’s discount factor, over seconds, but not over days, varied reliably with endogenous stress. 
These results are important for studying stress in a variety of contexts: e.g., variation in consumers’ impulse purchases and the willingness to pay to avoid waiting in mobile gaming could be linked to individual stress responses to the environment, even in healthy adults.","link":"/opendata/lukinova-erlich-2021/"},{"title":"Lukinova & Erlich (2021a)","text":"Delay-discounting studies in neuroscience, psychology, and economics have been mostly focused on concepts of self-control, reward evaluation, and discounting. Another important relationship to consider is the link between intertemporal choice and time perception. We presented 50 college students with timing tasks on the range of seconds to minutes and intertemporal-choice tasks on both the time-scale of seconds and of days. We hypothesized that individual differences in time perception would influence decisions about short experienced delays but not long delays. While we found some evidence that individual differences in internal clock speed account for some unexplained variance between choices across time-horizons, overall our findings suggest a nominal contribution of the altered sense of time in intertemporal choice.","link":"/opendata/lukinova-erlich-2021a/"},{"title":"MacDonald et al. (2016)","text":"Childhood maltreatment has diverse, lifelong impact on morbidity and mortality. The Childhood Trauma Questionnaire (CTQ) is one of the most commonly used scales to assess and quantify these experiences and their impact. Curiously, despite very widespread use of the CTQ, scores on its Minimization-Denial (MD) subscale (originally designed to assess a positive response bias) are rarely reported. Hence, little is known about this measure. If response biases are either common or consequential, current practices of ignoring the MD scale deserve revision. 
We therefore designed a study to investigate 3 aspects of minimization, as defined by the CTQ’s MD scale: 1) its prevalence; 2) its latent structure; and finally 3) whether minimization moderates the CTQ’s discriminative validity in terms of distinguishing between psychiatric patients and community volunteers. Archival, item-level CTQ data from 24 multinational samples were combined for a total of 19,652 participants. Analyses indicated: 1) minimization is common; 2) minimization functions as a continuous construct; and 3) high MD scores attenuate the ability of the CTQ to distinguish between psychiatric patients and community volunteers. Overall, results suggest that a minimizing response bias (as detected by the MD subscale) has a small but significant moderating effect on the CTQ’s discriminative validity. Results also may suggest that some prior analyses of maltreatment rates or the effects of early maltreatment that have used the CTQ may have underestimated its incidence and impact. We caution researchers and clinicians about the widespread practice of using the CTQ without the MD scale, or collecting MD data but failing to assess and control for its effects on outcomes or dependent variables.","link":"/opendata/macdonald-et-al-2016/"},{"title":"Mack et al. (2020)","text":"Prefrontal cortex (PFC) is thought to support the ability to focus on goal-relevant information by filtering out irrelevant information, a process akin to dimensionality reduction. Here, we test this dimensionality reduction hypothesis by relating a data-driven approach to characterizing the complexity of neural representation with a theoretically-supported computational model of learning. We find evidence of goal-directed dimensionality reduction within human ventromedial PFC during learning. 
Importantly, by using computational predictions of each participant’s attentional strategies during learning, we find that the degree of neural compression predicts an individual’s ability to selectively attend to concept-specific information. These findings suggest a domain-general mechanism of learning through compression in ventromedial PFC.","link":"/opendata/mack-et-al-2020/"},{"title":"Madan et al. (2021)","text":"Both memory and choice are influenced by context: Memory is enhanced when encoding and retrieval contexts match, and choice is swayed by available options. Here, we assessed how context influences risky choice in an experience-based task in two main experiments (119 and 98 participants retained, respectively) and two additional experiments reported in the Supplemental Material available online (152 and 106 participants retained, respectively). Within a single session, we created two separate contexts by presenting blocks of trials in distinct backgrounds. Risky choices were context dependent; given the same choice, people chose differently depending on other outcomes experienced in that context. Choices reflected an overweighting of the most extreme outcomes within each local context rather than the global context of all outcomes. When tested in the nontrained context, people chose according to the context at encoding and not retrieval. In subsequent memory tests, people displayed biases specific to distinct contexts: Extreme outcomes from each context were more accessible and judged as more frequent. These results pose a challenge for theories of choice that rely on retrieval as guiding choice.","link":"/opendata/madan-et-al-2021/"},{"title":"Maekelae et al. (2023)","text":"Measuring individual differences in cognitive effort can be elusive as effort is a function of motivation and ability. 
We report six studies (N = 663) investigating the relationship of Need for Cognition and working memory capacity with three cognitive effort measures: demand avoidance in the Demand Selection Task, effort discounting measured as the indifference point in the Cognitive Effort Discounting paradigm, and rational reasoning score with items from the heuristic and bias literature. We measured perceived mental effort with the NASA task load index. The three tasks were not correlated with each other (all r’s < .1). Need for Cognition was positively associated with effort discounting (r = .168, p < .001) and rational reasoning (r = .176, p < .001), but not demand avoidance (r = .085, p = .186). Working memory capacity was related to effort discounting (r = .185, p = .004). Higher perceived effort was related to poorer rational reasoning. Our data indicate that two of the tasks are related to Need for Cognition but are also influenced by a participant’s working memory capacity. We discuss whether any of the tasks measure cognitive effort.","link":"/opendata/maekelae-et-al-2023/"},{"title":"Maier et al. (2023)","text":"Many controversies arise from differences in how people resolve moral dilemmas by following deontological moral rules versus consequentialist cost-benefit reasoning (CBR). This article explores whether and, if so, how these seemingly intractable differences may arise from experience and whether they can be overcome through moral learning. We designed a new experimental paradigm to investigate moral learning from consequences of previous decisions. Our participants (N=387) faced a series of realistic moral dilemmas between two conflicting choices: one prescribed by a moral rule and the other favored by CBR. Critically, we let them observe the consequences of each of their decisions before making the next one. In one condition, CBR-based decisions consistently led to good outcomes, whereas rule-based decisions consistently led to bad outcomes. 
In the other condition, this contingency was reversed. We observed systematic, experience-dependent changes in people’s moral rightness ratings and moral decisions over the course of just 13 decisions. Without being aware of it, participants adjusted how much moral weight they gave to CBR versus moral rules according to which approach produced better consequences in their respective experimental condition. These learning effects transferred to their subsequent responses to the Oxford Utilitarianism Scale, indicating genuine moral learning rather than task-specific effects. Our findings demonstrate the existence of rapid adaptive moral learning from the consequences of previous decisions. Individual differences in morality may thus be more malleable than previously thought.","link":"/opendata/maier-et-al-2023/"},{"title":"Makarov et al. (2023)","text":"Can synchrony in stimulation guide attention and aid perceptual performance? Here, in a series of three experiments, we tested the influence of visual and auditory synchrony on attentional selection during a visual foraging task. Experiment 1 was performed online, where the task was to forage for 10 (out of 20) vertical lines among 60 randomly oriented distractor lines that changed color between yellow and blue at random intervals. The targets either changed colors in visual synchrony or not. In another condition, a non-spatial sound additionally occurred synchronously with the color change of the targets. Experiment 2 was run in the laboratory (within-subjects) with the same design. When the targets changed color in visual synchrony, foraging times were significantly shorter than when they randomly changed colors, but there was no additional benefit for the sound synchrony. In Experiment 3, task difficulty was increased as participants foraged for as many 45° rotated lines as possible among lines of different orientations within 10 seconds, with the same synchrony conditions as in Experiments 1 and 2. 
Again, there was a large benefit of visual synchrony but no additional benefit for sound synchronization. Our results provide strong evidence that visual synchronization can guide attention during multiple target foraging. This likely reflects temporal grouping of the synchronized targets. No additional benefit occurred for sound synchrony, even when the foraging task was quite difficult (Experiment 3).","link":"/opendata/makarov-et-al-2023/"},{"title":"Makowski et al. (2022)","text":"As a measure of the brain’s temporal fine-tuning capacity, temporal resolution power (TRP) explained repeatedly a substantial amount of variance in psychometric intelligence. Recently, spatial suppression, referred to as the increasing difficulty in quickly perceiving motion direction as the size of the moving stimulus increases, has attracted particular attention, when it was found to be positively related to psychometric intelligence. Due to the conceptual similarities of TRP and spatial suppression, the present study investigated their mutual interplay in the relation to psychometric intelligence in 273 young adults to better understand the reasons for these relationships. As in previous studies, psychometric intelligence was positively related to a latent variable representing TRP but, in contrast to previous reports, negatively to latent and manifest measures of spatial suppression. In a combined structural equation model, TRP still explained a substantial amount of variance in psychometric intelligence while the negative relation between spatial suppression and intelligence was completely explained by TRP. Thus, our findings confirmed TRP to be a robust predictor of psychometric intelligence but challenged the assumption of spatial suppression as a representation of general information processing efficiency as reflected in psychometric intelligence. 
Possible reasons for the contradictory findings on the relation between spatial suppression and psychometric intelligence are discussed.","link":"/opendata/makowski-et-al-2022/"},{"title":"Manning et al. (2022)","text":"Many studies report atypical responses to sensory information in autistic individuals, yet it is not clear which stages of processing are affected, with little consideration given to decision-making processes. We combined diffusion modelling with high-density EEG to identify which processing stages differ between 50 autistic and 50 typically developing children aged 6-14 years during two visual motion tasks. Our pre-registered hypotheses were that autistic children would show task-dependent differences in sensory evidence accumulation, alongside a more cautious decision-making style and longer non-decision time across tasks. We tested these hypotheses using hierarchical Bayesian diffusion models with a rigorous blind modelling approach, finding no conclusive evidence for our hypotheses. Using a data-driven method, we identified a response-locked centro-parietal component previously linked to the decision-making process. The build-up in this component did not consistently relate to evidence accumulation in autistic children. This suggests that the relationship between the EEG measure and diffusion-modelling is not straightforward in autistic children. Compared to a related study of children with dyslexia, motion processing differences appear less pronounced in autistic children. Exploratory analyses also suggest weak evidence that ADHD symptoms moderate perceptual decision-making in autistic children.","link":"/opendata/manning-et-al-2022/"},{"title":"Marchant et al. (2023)","text":"Humans excel at causal reasoning, yet at the same time consistently fail to respect its basic axioms. They seemingly fail to recognize, for instance, that only the direct causes of an event can affect its probability (the Markov condition). 
How can one explain this paradox? Here we argue that standard normative analyses of causal reasoning mostly apply to the idealized case where the reasoner has perfect confidence in her knowledge of the underlying causal model. Given uncertainty about the correct representation of a causal system, it is not always rational for a reasoner to respect the Markov condition and other ‘normative’ principles. To test whether uncertainty can account for the apparent fallibility of human judgments, we formulate a simple computational model of a rational-but-uncertain causal reasoner. In a re-analysis of a recent causal reasoning study, the model fits the data significantly better than its standard normative counterpart.","link":"/opendata/marchant-et-al-2023/"},{"title":"Marciano et al. (2023)","text":"Expectations are often dynamic: sports fans know that expectations are rapidly updated as games unfold. Yet expectations have traditionally been studied as static. Here we present behavioral and electrophysiological evidence of sub-second changes in expectations using slot machines as a case study. In Study 1, we demonstrate that EEG signal before the slot machine stops varies based on proximity to winning. Study 2 introduces a behavioral paradigm to measure dynamic expectations via betting, and shows that expectation trajectories vary as a function of winning proximity. Notably, these expectation trajectories parallel Study 1’s EEG activity. Studies 3 (EEG) and 4 (behavioral) replicate these findings in the loss domain. These four studies provide compelling evidence that dynamic sub-second updates in expectations can be behaviorally and electrophysiologically measured. Our research opens promising avenues for understanding the dynamic nature of reward expectations and their impact on cognitive processes.","link":"/opendata/marciano-et-al-2023/"},{"title":"Marshall et al. 
(2022)","text":"Optimality analysis of value-based decisions in binary and multi-alternative choice settings predicts that reaction times should be sensitive only to differences in stimulus magnitudes, but not to overall absolute stimulus magnitude. Yet experimental work in the binary case has shown magnitude sensitive reaction times, and theory shows that this can be explained by switching from linear to multiplicative time costs, but also by nonlinear subjective utility. Thus disentangling explanations for observed magnitude sensitive reaction times is difficult. Here for the first time we extend the theoretical analysis of geometric time-discounting to ternary choices, and present novel experimental evidence for magnitude-sensitivity in such decisions, in both humans and slime moulds. We consider the optimal policies for all possible combinations of linear and geometric time costs, and linear and nonlinear utility; interestingly, geometric discounting emerges as the predominant explanation for magnitude sensitivity.","link":"/opendata/marshall-et-al-2022/"},{"title":"Martin et al. (2013)","text":"Patients with schizophrenia experience a loss of temporal continuity or subjective fragmentation along the temporal dimension. Here, we develop the hypothesis that impaired temporal awareness results from a perturbed structuring of events in time-i.e., canonical neural dynamics. To address this, 26 patients and their matched controls took part in two psychophysical studies using desynchronized audiovisual speech. Two tasks were used and compared: first, an identification task testing for multisensory binding impairments in which participants reported what they heard while looking at a speaker’s face; in a second task, we tested the perceived simultaneity of the same audiovisual speech stimuli. In both tasks, we used McGurk fusion and combination that are classic ecologically valid multisensory illusions. 
First, and contrary to previous reports, our results show that patients do not significantly differ from controls in their rate of illusory reports. Second, the illusory reports of patients in the identification task were more sensitive to audiovisual speech desynchronies than those of controls. Third, and surprisingly, patients considered audiovisual speech to be synchronized for longer delays than controls. As such, the temporal tolerance profile observed in a temporal judgement task was less of a predictor for sensory binding in schizophrenia than for that obtained in controls. We interpret our results as an impairment of temporal event structuring in schizophrenia which does not specifically affect sensory binding operations but rather the explicit access to timing information associated here with audiovisual speech processing. Our findings are discussed in the context of current neurophysiological frameworks for the binding and the structuring of sensory events in time.","link":"/opendata/martin-et-al-2013/"},{"title":"Marton et al. (2019)","text":"Doubt is subjective uncertainty about one’s perceptions and recall. It can impair decision-making and is a prominent feature of obsessive-compulsive disorder (OCD). We propose that evaluation of doubt during decision-making provides a useful endophenotype with which to study the underlying pathophysiology of OCD and potentially other psychopathologies. For the current study, we developed a new instrument, the Doubt Questionnaire, to clinically assess doubt. The random dot motion task was used to measure reaction time and subjective certainty, at varying levels of perceptual difficulty, in individuals who scored high and low on doubt, and in individuals with and without OCD. We found that doubt scores were significantly higher in OCD cases than controls. Drift diffusion modeling revealed that high doubt scores predicted slower evidence accumulation than did low doubt scores; an OCD diagnosis likewise predicted slower evidence accumulation than in controls.
At higher levels of dot coherence, OCD participants exhibited significantly slower drift rates than did controls (q<0.05 for 30% and 45% coherence; q<0.01 for 70% coherence). In addition, at higher levels of coherence, high doubt subjects exhibited even slower drift rates and reaction times than low doubt subjects (q<0.01 for 70% coherence). Moreover, under high coherence conditions, individuals with high doubt scores reported lower certainty in their decisions than did those with low doubt scores. We conclude that the Doubt Questionnaire is a useful instrument for measuring doubt. Compared to those with low doubt, those with high doubt accumulate evidence more slowly and report lower certainty when making decisions under conditions of low uncertainty. High doubt may affect the decision-making process in individuals with OCD. The dimensional doubt measure is a useful endophenotype for OCD research and could enable computationally rigorous and neurally valid understanding of decision-making and its pathological expression in OCD and other disorders.","link":"/opendata/marton-et-al-2019/"},{"title":"Marzuki et al. (2023)","text":"Background: Compulsive behaviour is often triggered by Pavlovian cues. Assessing how Pavlovian cues drive instrumental behaviour in obsessive-compulsive disorder (OCD) is therefore crucial to understand how compulsions develop and are maintained. An aversive Pavlovian-to-Instrumental transfer (PIT) paradigm, particularly one involving avoidance/cancellation of negative outcomes, can enable such investigation and has not previously been studied in clinical OCD. Methods: Forty-one participants diagnosed with OCD (21 adults; 20 youths) and 44 controls (21 adults; 23 youths) completed an aversive PIT task. Participants had to cancel unpleasant noises by learning response-outcome (instrumental) and stimulus-outcome (Pavlovian) associations.
We then assessed whether Pavlovian cues elicited specific instrumental avoidance responses (specific PIT) and induced general instrumental avoidance (general PIT). We investigated whether task learning and confidence indices influenced PIT strength differentially between groups. Results: Urge to avoid unpleasant noises and preference for safe over unsafe stimuli influenced specific and general PIT, respectively, in OCD, while PIT in controls was more influenced by confidence in instrumental and Pavlovian learning. However, there was no overall group difference in PIT performance, although youths with OCD showed weaker specific PIT than youth controls. Conclusions: In OCD, implicit motivational factors, but not learnt knowledge, contribute to the successful integration of aversive Pavlovian and instrumental cues. This implies that compulsive avoidance may be driven by these automatic processes. Youths with OCD show deficits in specific PIT, suggesting cue integration impairments are only apparent in adolescence. These findings are clinically relevant as they emphasise the importance of targeting such implicit motivational processes when treating OCD.","link":"/opendata/marzuki-et-al-2023/"},{"title":"Mason et al. (2022)","text":"Decision-making involves weighing up the outcome likelihood, potential rewards, and effort needed. Previous research has focused on the trade-offs between risk and reward or between effort and reward. Here we bridge this gap and examine how risk in effort levels influences choice. With outcome uncertainty, people’s risk attitudes follow a fourfold pattern, varying with the domain (gains or losses) and probability (rare or common). Three experiments assessed people’s risk attitudes for money, physical effort, and mental effort. With monetary gambles, risk attitudes followed the classic fourfold pattern, and people were risk averse for increases in money (gains).
With both physical and mental effort, however, people exhibited a “flipped” fourfold pattern of risk preferences and were instead risk seeking for increases in effort. Overall, these findings indicate that people treat effort as a loss of resources and are more willing to take risks to avoid potentially high levels of effort.","link":"/opendata/mason-et-al-2022/"},{"title":"Mason et al. (2023)","text":"Many real-world decisions involving rare events also involve extreme outcomes. Despite this confluence, decisions-from-experience research has focused on the impact of rare but non-extreme outcomes. In those situations, people typically choose as if they underestimate the probability of a rare outcome happening. Separately, people have been shown to overestimate the probability of an extreme outcome happening. Here, for the first time, we examine the confluence of these two competing biases in decisions from experience. In a between-subjects behavioural experiment, we examine people’s risk preferences for rare extreme outcomes and for rare non-extreme outcomes. When outcomes are both rare and extreme, people’s risk preferences shift away from traditional risk patterns for rare events: they underweight those extreme and rare events less. We simulate these results using a small-sample model of decision-making that accounts for the overweighting of extreme events. The additive effect of these decision biases on risk preferences reveals that to understand real-world risk for rare events we must also consider the extremity of the outcomes.","link":"/opendata/mason-et-al-2023/"},{"title":"Mathar et al. (2022)","text":"Supplementation with the catecholamine precursor L-Tyrosine might enhance cognitive performance, but overall findings are mixed. Here, we investigate the effect of a single dose of tyrosine (2g) vs. 
placebo on two catecholamine-dependent trans-diagnostic traits: model-based control during reinforcement learning (2-step task) and temporal discounting, using a double-blind, placebo-controlled, within-subject design (n = 28 healthy male participants). We leveraged drift diffusion models in a hierarchical Bayesian framework to jointly model participants’ choices and response times (RTs) in both tasks. Furthermore, comprehensive autonomic monitoring (heart rate, heart rate variability, pupillometry, spontaneous eye blink rate) was performed both pre- and post-supplementation, to explore potential physiological effects of supplementation. Across tasks, tyrosine consistently reduced participants’ RTs without deteriorating task performance. Diffusion modeling linked this effect to attenuated decision thresholds in both tasks and further revealed increased model-based control (2-step task) and (if anything) attenuated temporal discounting. On the physiological level, participants’ pupil dilation was predictive of the individual degree of temporal discounting. Tyrosine supplementation reduced physiological arousal as revealed by increases in pupil dilation variability and reductions in heart rate. Supplementation-related changes in physiological arousal predicted individual changes in temporal discounting. Our findings provide first evidence that tyrosine supplementation might impact psychophysiological parameters, and suggest that modeling approaches based on sequential sampling models can yield novel insights into latent cognitive processes modulated by amino-acid supplementation.","link":"/opendata/mathar-et-al-2022/"},{"title":"McDonald et al. (2019)","text":"Previous studies of strategic social interaction in game theory have predominantly used games with clearly defined turns and limited choices. Yet, most real-world social behaviors involve dynamic, coevolving decisions by interacting agents, which poses challenges for creating tractable models of behavior.
Here, using a game in which humans competed against both real and artificial opponents, we show that it is possible to quantify the instantaneous dynamic coupling between agents. Adopting a reinforcement learning approach, we use Gaussian Processes to model the policy and value functions of participants as a function of both game state and opponent identity. We found that higher-scoring participants timed their final change in direction to moments when the opponent’s counter-strategy was weaker, while lower-scoring participants less precisely timed their final moves. This approach offers a natural set of metrics for facilitating analysis at multiple timescales and suggests new classes of experimental paradigms for assessing behavior.","link":"/opendata/mcdonald-et-al-2019/"},{"title":"McDougle et al. (2019)","text":"Decisions must be implemented through actions, and actions are prone to error. As such, when an expected outcome is not obtained, an individual should be sensitive to not only whether the choice itself was suboptimal but also whether the action required to indicate that choice was executed successfully. The intelligent assignment of credit to action execution versus action selection has clear ecological utility for the learner. To explore this, we used a modified version of a classic reinforcement learning task in which feedback indicated whether negative prediction errors were, or were not, associated with execution errors. Using fMRI, we asked if prediction error computations in the human striatum, a key substrate in reinforcement learning and decision making, are modulated when a failure in action execution results in the negative outcome. Participants were more tolerant of non-rewarded outcomes when these resulted from execution errors versus when execution was successful, but reward was withheld.
Consistent with this behavior, a model-driven analysis of neural activity revealed an attenuation of the signal associated with negative reward prediction errors in the striatum following execution failures. These results converge with other lines of evidence suggesting that prediction errors in the mesostriatal dopamine system integrate high-level information during the evaluation of instantaneous reward outcomes.","link":"/opendata/mcdougle-et-al-2019/"},{"title":"McDougle & Collins (2021)","text":"What determines the speed of our decisions? Various models of decision-making have focused on perceptual evidence, past experience, and task complexity as important factors determining the degree of deliberation needed for a decision. Here, we build on a sequential sampling decision-making framework to develop a new model that captures a range of reaction time (RT) effects by accounting for both working memory and instrumental learning processes. The model captures choices and RTs at various stages of learning, and in learning environments with varying complexity. Moreover, the model generalizes from tasks with deterministic reward contingencies to probabilistic ones. The model succeeds in part by incorporating prior uncertainty over actions when modeling RT. This straightforward process model provides a parsimonious account of decision dynamics during instrumental learning and makes unique predictions about internal representations of action values.","link":"/opendata/mcdougle-collins-2021/"},{"title":"Meder et al. (2021)","text":"Ergodicity describes an equivalence between the expectation value and the time average of observables. Applied to human behaviour, ergodic theories of decision-making reveal how individuals should tolerate risk in different environments. To optimize wealth over time, agents should adapt their utility function according to the dynamical setting they face. 
Linear utility is optimal for additive dynamics, whereas logarithmic utility is optimal for multiplicative dynamics. Whether humans approximate time optimal behavior across different dynamics is unknown. Here we compare the effects of additive versus multiplicative gamble dynamics on risky choice. We show that utility functions are modulated by gamble dynamics in ways not explained by prevailing decision theories. Instead, as predicted by time optimality, risk aversion increases under multiplicative dynamics, distributing close to the values that maximize the time average growth of in-game wealth. We suggest that our findings motivate a need for explicitly grounding theories of decision-making on ergodic considerations.","link":"/opendata/meder-et-al-2021/"},{"title":"Mehta et al. (2023)","text":"The sense of agency, or the belief in action causality, is an elusive construct that impacts day-to-day experience and decision-making. Despite its relevance in a range of neuropsychiatric disorders, it is widely under-studied and remains difficult to measure objectively in patient populations. We developed and tested a novel cognitive measure of reward-dependent agency perception in an in-person and online cohort. The in-person cohort consisted of 52 healthy control subjects and 20 subjects with depression and anxiety disorders (DA), including major depressive disorder and generalized anxiety disorder. The online sample consisted of 254 participants. The task consisted of an effort implementation for monetary rewards with computerized visual feedback interference and trial-by-trial ratings of self versus other agency. All subjects across both cohorts demonstrated higher self-agency after receiving positive-win feedback, compared to negative-loss feedback when the level of computer inference was kept constant. Patients with DA showed reduced positive feedback-dependent agency compared to healthy controls. 
Finally, in our online sample, we found that higher self-agency following negative-loss feedback was associated with worse anhedonia symptoms. Together, this work suggests how positive and negative environmental information impacts the sense of self-agency in healthy subjects, and how it is perturbed in patients with depression and anxiety.","link":"/opendata/mehta-et-al-2023/"},{"title":"Mennella et al. (2022)","text":"Adaptation to our social environment requires learning how to avoid potentially harmful situations, such as encounters with aggressive individuals. Threatening facial expressions can evoke automatic stimulus-driven reactions, but whether their aversive motivational value suffices to drive instrumental active avoidance remains unclear. When asked to freely choose between different action alternatives, participants spontaneously (without instruction or monetary reward) developed a preference for choices that maximized the probability of avoiding angry individuals (sitting away from them in a waiting room). Most participants showed clear behavioral signs of instrumental learning, even in the absence of an explicit avoidance strategy. Inter-individual variability in learning depended on participants’ subjective evaluations and sensitivity to threat approach feedback. Counterfactual learning best accounted for avoidance behaviors, especially in participants who developed an explicit avoidance strategy. Our results demonstrate that implicit defensive behaviors in social contexts are likely the product of several learning processes, including instrumental learning.","link":"/opendata/mennella-et-al-2022/"},{"title":"Mezzadri et al. (2022)","text":"This study simultaneously manipulates within-category (rule-based vs. similarity-based), between-category (blocked vs. interleaved), and across-blocks (constant vs. variable) orders to investigate how different types of presentation order interact with one another.
With regard to within-category orders, stimuli were presented either in a “rule plus exceptions” fashion (in the rule-based order) or by maximizing the similarity between contiguous examples (in the similarity-based order). As for the between-category manipulation, categories were either blocked (in the blocked order) or alternated (in the interleaved order). Finally, the sequence of stimuli was either repeated (in the constant order) or varied (in the variable order) across blocks. This research offers a novel approach through both an individual and concurrent analysis of the studied factors, with the investigation of across-blocks manipulations being unprecedented. We found a significant interaction between within-category and across-blocks orders, as well as between between-category and across-blocks orders. In particular, the combination similarity-based + variable orders was the most detrimental, whereas the combination blocked + constant was the most beneficial. We also found a main effect of across-blocks manipulation, with faster learning in the constant order as compared to the variable one. With regard to the classification of novel stimuli, learners in the rule-based and interleaved orders showed generalization patterns that were more consistent with a specific rule-based strategy, as compared to learners in the similarity-based and blocked orders, respectively. This study shows that different types of order can interact in a subtle fashion and thus should not be considered in isolation.","link":"/opendata/mezzadri-et-al-2022/"},{"title":"Michely et al. (2022)","text":"Instrumental learning is driven by a history of outcome success and failure. Here, we examined the impact of serotonin on learning from positive and negative outcomes. Healthy human volunteers were assessed twice, once after acute (single-dose), and once after prolonged (week-long) daily administration of the SSRI citalopram or placebo. 
Using computational modelling, we show that prolonged boosting of serotonin enhances learning from punishment and reduces learning from reward. This valence-dependent learning asymmetry increases subjects’ tendency to avoid actions as a function of cumulative failure without leading to detrimental, or advantageous, outcomes. By contrast, no significant modulation of learning was observed following acute SSRI administration. However, differences between the effects of acute and prolonged administration were not significant. Overall, these findings may help explain how serotonergic agents impact on mood disorders.","link":"/opendata/michely-et-al-2022/"},{"title":"Mieth et al. (2021)","text":"To determine the role of moral norms in cooperation and punishment, we examined the effects of a moral-framing manipulation in a Prisoner’s Dilemma game with a costly punishment option. In each round of the game, participants decided whether to cooperate or to defect. The Prisoner’s Dilemma game was identical for all participants with the exception that the behavioral options were paired with moral labels (“I cooperate” and “I cheat”) in the moral-framing condition and with neutral labels (“A” and “B”) in the neutral-framing condition. After each round of the Prisoner’s Dilemma game, participants had the opportunity to invest some of their money to punish their partners. In two experiments, moral framing increased moral and hypocritical punishment: participants were more likely to punish partners for defection when moral labels were used than when neutral labels were used. When the participants’ cooperation was enforced by their partners’ moral punishment, moral framing increased not only moral and hypocritical punishment but also cooperation. The results suggest that moral framing activates a cooperative norm that specifically increases moral and hypocritical punishment.
Furthermore, the experience of moral punishment by the partners may increase the importance of social norms for cooperation, which may explain why moral framing effects on cooperation were found only when participants were subject to moral punishment.","link":"/opendata/mieth-et-al-2021/"},{"title":"Mikus et al. (2022)","text":"Human behaviour requires flexible arbitration between actions we do out of habit and actions that are directed towards a specific goal. Drugs that target opioid and dopamine receptors are notorious for inducing maladaptive habitual drug consumption; yet, how the opioidergic and dopaminergic neurotransmitter systems contribute to the arbitration between habitual and goal-directed behaviour is poorly understood. By combining pharmacological challenges with a well-established decision-making task and a novel computational model, we show that the administration of the dopamine D2/3 receptor antagonist amisulpride led to an increase in goal-directed or model-based relative to habitual or model-free behaviour, whereas the non-selective opioid receptor antagonist naltrexone had no appreciable effect. The effect of amisulpride on model-based/model-free behaviour did not scale with drug serum levels in the blood. Furthermore, participants with higher amisulpride serum levels showed higher explorative behaviour. These findings highlight the distinct functional contributions of dopamine and opioid receptors to goal-directed and habitual behaviour and support the notion that even small doses of amisulpride promote flexible application of cognitive control.","link":"/opendata/mikus-et-al-2022/"},{"title":"Mikus et al. (2023)","text":"The ability to learn about other people is crucial for human social functioning. Dopamine has been proposed to regulate the precision of beliefs, but direct behavioural evidence of this is lacking. 
In this study, we investigate how a high dose of the D2/D3 dopamine receptor antagonist sulpiride impacts learning about other people’s prosocial attitudes in a repeated Trust game. Using a Bayesian model of belief updating, we show that in a sample of 76 male participants sulpiride increases the volatility of beliefs, which leads to higher precision weights on prediction errors. This effect is driven by participants with genetically conferred higher dopamine availability (Taq1a polymorphism) and remains even after controlling for working memory performance. Higher precision weights are reflected in higher reciprocal behaviour in the repeated Trust game but not in single-round Trust games. Our data provide evidence that the D2 receptors are pivotal in regulating prediction error-driven belief updating in a social context.","link":"/opendata/mikus-et-al-2023/"},{"title":"Miletic & van-Maanen (2019)","text":"The time available to inform decisions is often limited, for example because of a response deadline. In such circumstances, accurate knowledge of the amount of time available for a decision is crucial for optimal choice behavior. However, the relation between temporal cognition and decision-making under time pressure is poorly understood. Here, we test how the precision of the internal representation of time affects choice behavior when decision time is limited by a deadline. We show that participants with a precise internal representation of time respond more cautiously in decision-making. Furthermore, we provide an empirical test of theoretical accounts of decision-making that argue that it is optimal to commit to a decision based on increasingly less evidence as the deadline approaches (so-called ‘collapsing decision bounds’). These theories entail that the speed of collapse of the decision bound should depend on the precision of the internal representation of the deadline. 
However, although we find evidence that participants collapse decision bounds, we found no relation between the amount of collapse and the internal representation of time.","link":"/opendata/miletic-van-maanen-2019/"},{"title":"Millner et al. (2018)","text":"To survive in complex environments, animals need to have mechanisms to select effective actions quickly, with minimal computational costs. As perhaps the computationally most parsimonious of these systems, Pavlovian control accomplishes this by hardwiring specific stereotyped responses to certain classes of stimuli. It is well documented that appetitive cues initiate a Pavlovian bias toward vigorous approach; however, Pavlovian responses to aversive stimuli are less well understood. Gaining a deeper understanding of aversive Pavlovian responses, such as active avoidance, is important given the critical role these behaviors play in several psychiatric conditions. The goal of the current study was to establish a behavioral and computational framework to examine aversive Pavlovian responses (activation vs. inhibition) depending on the proximity of an aversive state (escape vs. avoidance). We introduce a novel task in which participants are exposed to primary aversive (noise) stimuli and characterized behavior using a novel generative computational model. This model combines reinforcement learning and drift-diffusion models so as to capture effects of invigoration/inhibition in both explicit choice behavior as well as changes in RT. Choice and RT results both suggest that escape is associated with a bias for vigorous action, whereas avoidance is associated with behavioral inhibition. These results lay a foundation for future work seeking insights into typical and atypical aversive Pavlovian responses involved in psychiatric disorders, allowing us to quantify both implicit and explicit indices of vigorous choice behavior in the context of aversion.","link":"/opendata/millner-et-al-2018/"},{"title":"Mkrtchian et al. 
(2017)","text":"Serious and debilitating symptoms of anxiety are the most common mental health problem worldwide, accounting for around 5% of all adult years lived with disability in the developed world. Avoidance behavior (avoiding social situations for fear of embarrassment, for instance) is a core feature of such anxiety. However, as for many other psychiatric symptoms, the biological mechanisms underlying avoidance remain unclear. Reinforcement learning models provide formal and testable characterizations of the mechanisms of decision making; here, we examine avoidance in these terms. A total of 101 healthy participants and individuals with mood and anxiety disorders completed an approach-avoidance go/no-go task under stress induced by threat of unpredictable shock. We show an increased reliance in the mood and anxiety group on a parameter of our reinforcement learning model that characterizes a prepotent (Pavlovian) bias to withhold responding in the face of negative outcomes. This was particularly the case when the mood and anxiety group was under stress. This formal description of avoidance within the reinforcement learning framework provides a new means of linking clinical symptoms with biophysically plausible models of neural circuitry and, as such, takes us closer to a mechanistic understanding of mood and anxiety disorders.","link":"/opendata/mkrtchian-et-al-2017/"},{"title":"Mkrtchian et al. (2023)","text":"Computational models can offer mechanistic insight into cognition and therefore have the potential to transform our understanding of psychiatric disorders and their treatment. For translational efforts to be successful, it is imperative that computational measures capture individual characteristics reliably. To date, this issue has received little consideration. Here we examine the reliability of reinforcement learning and economic models derived from two commonly used tasks.
Healthy individuals (N=50) completed a restless four-armed bandit and a calibrated gambling task twice, two weeks apart. Reward and punishment processing parameters from the reinforcement learning model showed fair-to-good reliability, while risk/loss aversion parameters from a prospect theory model exhibited good-to-excellent reliability. Both models were further able to predict future behaviour above chance within individuals. This prediction was better when based on participants’ own model parameters than other participants’ parameter estimates. These results suggest that reinforcement learning, and particularly prospect theory parameters, can be measured reliably to assess learning and decision-making mechanisms, and that these processes may represent relatively distinct computational profiles across individuals. Overall, these findings indicate the translational potential of clinically-relevant computational parameters for precision psychiatry.","link":"/opendata/mkrtchian-et-al-2023/"},{"title":"Moeller et al. (2021)","text":"Reward prediction errors (RPEs) and risk preferences have two things in common: both can shape decision making behavior, and both are commonly associated with dopamine. RPEs drive value learning and are thought to be represented in the phasic release of striatal dopamine. Risk preferences bias choices towards or away from uncertainty; they can be manipulated with drugs that target the dopaminergic system. Based on the common neural substrate, we hypothesize that RPEs and risk preferences are linked on the level of behavior as well. Here, we develop this hypothesis theoretically and test it empirically. First, we apply a recent theory of learning in the basal ganglia to predict how RPEs influence risk preferences. We find that positive RPEs should cause increased risk-seeking, while negative RPEs should cause risk-aversion. 
We then test our behavioral predictions using a novel bandit task in which value and risk vary independently across options. Critically, conditions are included where options vary in risk but are matched for value. We find that our prediction was correct: participants become more risk-seeking if choices are preceded by positive RPEs, and more risk-averse if choices are preceded by negative RPEs. These findings cannot be explained by other known effects, such as nonlinear utility curves or dynamic learning rates.","link":"/opendata/moeller-et-al-2021/"},{"title":"Molenmaker et al. (2023)","text":"Peer punishment can help groups to establish collectively beneficial public goods. However, when humans condition punishment on other factors than poor contribution, punishment can become ineffective and group cooperation deteriorates. Here we show that this happens in pluriform groups where members have different socio-demographic characteristics. In our public good provision experiment, participants were confronted with a public good from which all group members benefitted equally, and in-between rounds they could punish each other. Groups were uniform (members shared the same academic background) or pluriform (half the members shared the same academic background, and the other half shared another background). We show that punishment effectively enforced cooperation in uniform groups where punishment was conditioned on poor contribution. In pluriform groups, punishment was conditioned on poor contribution too, but also partially on others’ socio-demographic characteristics: dissimilar others were punished more than similar others regardless of their contribution. As a result, punishment lost its effectiveness in deterring free-riding and maintaining public good provision. Follow-up experiments indicated that such discriminatory punishment was used to demarcate and reinforce subgroup boundaries. 
This work reveals that peer punishment fails to enforce cooperation in groups with a pluriform structure, which is the rule rather than the exception in contemporary societies.","link":"/opendata/molenmaker-et-al-2023/"},{"title":"Molinaro & Collins (2023)","text":"When observing the outcome of a choice, people are sensitive to the choice’s context, such that the experienced value of an option depends on the alternatives: getting $1 when the possibilities were 0 or 1 feels much better than when the possibilities were 1 or 10. Context-sensitive valuation has been documented within reinforcement learning (RL) tasks, in which values are learned from experience through trial and error. Range adaptation, wherein options are rescaled according to the range of values yielded by available options, has been proposed to account for this phenomenon. However, we propose that other mechanisms, reflecting a different theoretical viewpoint, may also explain this phenomenon. Specifically, we theorize that internally defined goals play a crucial role in shaping the subjective value attributed to any given option. Motivated by this theory, we develop a new “intrinsically enhanced” RL model, which combines extrinsically provided rewards with internally generated signals of goal achievement as a teaching signal. Across 7 different studies (including previously published data sets as well as a novel, preregistered experiment with replication and control studies), we show that the intrinsically enhanced model can explain context-sensitive valuation as well as, or better than, range adaptation. Our findings indicate a more prominent role of intrinsic, goal-dependent rewards than previously recognized within formal models of human RL. By integrating internally generated signals of reward, standard RL theories should better account for human behavior, including context-sensitive valuation and beyond.","link":"/opendata/molinaro-collins-2023/"},{"title":"Molinaro et al. 
(2023)","text":"From an early age, children need to gather information to learn about their environment. Deciding which knowledge to pursue can be difficult because information can serve several, sometimes competing, purposes. Here, we examine the developmental trajectories of such diverse information-seeking motives. Over five experiments involving 521 children (aged 4-12), we find that school-age children integrate three key factors into their information-seeking choices: whether information reduces uncertainty, is useful in directing action, and is likely to be positive. Choices that likely reveal positive information and are useful for action emerge as early as age 4, followed by choices that reduce uncertainty (at ~age 5). Our results suggest that motives related to usefulness and uncertainty reduction become stronger with age, while the tendency to seek positive news does not show a statistically significant change throughout development. This study reveals how the relative importance of diverging, sometimes conflicting, information-seeking motives emerges throughout development.","link":"/opendata/molinaro-et-al-2023/"},{"title":"Molter et al. (2022)","text":"Choices are influenced by gaze allocation during deliberation, so that fixating an alternative longer leads to increased probability of choosing it. Gaze-dependent evidence accumulation provides a parsimonious account of choices, response times and gaze-behaviour in many simple decision scenarios. Here, we test whether this framework can also predict more complex context-dependent patterns of choice in a three-alternative risky choice task, where choices and eye movements were subject to attraction and compromise effects. Choices were best described by a gaze-dependent evidence accumulation model, where subjective values of alternatives are discounted while not fixated. 
Finally, we performed a systematic search over a large model space, allowing us to evaluate the relative contribution of different forms of gaze-dependence and additional mechanisms previously not considered by gaze-dependent accumulation models. Gaze-dependence remained the most important mechanism, but participants with strong attraction effects employed an additional similarity-dependent inhibition mechanism found in other models of multi-alternative multi-attribute choice.","link":"/opendata/molter-et-al-2022/"},{"title":"Moneta et al. (2023)","text":"The ventromedial prefrontal-cortex (vmPFC) is known to contain expected value signals that inform our choices. But expected values even for the same stimulus can differ by task. In this study, we asked how the brain flexibly switches between such value representations in a task-dependent manner. Thirty-five participants alternated between tasks in which either stimulus color or motion predicted rewards. We show that multivariate vmPFC signals contain a rich representation that includes the current task state or context (motion/color), the associated expected value, and crucially, the irrelevant value of the alternative context. We also find that irrelevant value representations in vmPFC compete with relevant value signals, interact with task-state representations and relate to behavioral signs of value competition. Our results shed light on vmPFC’s role in decision making, bridging between its role in mapping observations onto the task states of a mental map, and computing expected values for multiple states.","link":"/opendata/moneta-et-al-2023/"},{"title":"Montero-Porras et al. (2022)","text":"People have different preferences for what they allocate for themselves and what they allocate to others in social dilemmas. These differences result from contextual reasons, intrinsic values, and social expectations. 
What is still an area of debate is whether these differences can be estimated from differences in each individual’s deliberation process. In this work, we analyse the participants’ reaction times in three different experiments of the Iterated Prisoner’s Dilemma with the Drift Diffusion Model, which links response times to the perceived difficulty of the decision task, the rate of accumulation of information (deliberation), and the intuitive attitudes towards the choices. The correlation between these results and the attitude of the participants towards the allocation of resources is then determined. We observe that individuals who allocated resources equally deliberated more than highly cooperative or highly defecting participants, who accumulated evidence more quickly to reach a decision. Also, the evidence collection is faster in fixed neighbour settings than in shuffled ones. Consequently, fast decisions do not distinguish cooperators from defectors in these experiments, but appear to separate those who are more reactive to the behaviour of others from those who act categorically.","link":"/opendata/montero-porras-et-al-2022/"},{"title":"Moran et al. (2019)","text":"An extensive reinforcement learning literature shows that organisms assign credit efficiently, even under conditions of state uncertainty. However, little is known about credit-assignment when state uncertainty is subsequently resolved. Here, we address this problem within the framework of an interaction between model-free (MF) and model-based (MB) control systems. We present and support experimentally a theory of MB retrospective-inference. Within this framework, a MB system resolves uncertainty that prevailed when actions were taken, thus guiding an MF credit-assignment. 
Using a task in which there was initial uncertainty about the lotteries that were chosen, we found that when participants’ momentary uncertainty about which lottery had generated an outcome was resolved by provision of subsequent information, participants preferentially assigned credit within a MF system to the lottery they retrospectively inferred was responsible for this outcome. These findings extend our knowledge about the range of MB functions and the scope of system interactions.","link":"/opendata/moran-et-al-2019/"},{"title":"Moran et al. (2021)","text":"Dual-reinforcement learning theory proposes behaviour is under the tutelage of a retrospective, value-caching, model-free (MF) system and a prospective-planning, model-based (MB) system. This architecture raises a question as to the degree to which, when devising a plan, a MB controller takes account of influences from its MF counterpart. We present evidence that such a sophisticated self-reflective MB planner incorporates an anticipation of the influences its own MF proclivities exert on the execution of its planned future actions. Using a novel bandit task, wherein subjects were periodically allowed to design their environment, we show that reward-assignments were constructed in a manner consistent with a MB system taking account of its MF propensities. Thus, in the task participants assigned higher rewards to bandits that were momentarily associated with stronger MF tendencies. Our findings have implications for a range of decision making domains that includes drug abuse, pre-commitment, and the tension between short and long-term decision horizons in economics.","link":"/opendata/moran-et-al-2021/"},{"title":"Morris et al. (2021)","text":"Humans have a remarkable capacity for flexible decision-making, deliberating among actions by modeling their likely outcomes. This capacity allows us to adapt to the specific features of diverse circumstances. 
In real-world decision-making, however, people face an important challenge: There are often an enormous number of possibilities to choose among, far too many for exhaustive consideration. There is a crucial, understudied prechoice step in which, among myriad possibilities, a few good candidates come quickly to mind. How do people accomplish this? We show across nine experiments (N = 3,972 U.S. residents) that people use computationally frugal cached value estimates to propose a few candidate actions on the basis of their success in past contexts (even when irrelevant for the current context). Deliberative planning is then deployed just within this set, allowing people to compute more accurate values on the basis of context-specific criteria. This hybrid architecture illuminates how typically valuable thoughts come quickly to mind during decision-making.","link":"/opendata/morris-et-al-2021/"},{"title":"Moutoussis et al. (2018)","text":"Pavlovian influences are important in guiding decision-making across health and psychopathology. There is an increasing interest in using concise computational tasks to parametrise such influences in large populations, and especially to track their evolution during development and changes in mental health. However, the developmental course of Pavlovian influences is uncertain, a problem compounded by the unclear psychometric properties of the relevant measurements. We assessed Pavlovian influences in a longitudinal sample using a well characterised and widely used Go-NoGo task. We hypothesized that the strength of Pavlovian influences and other psychomarkers guiding decision-making would behave like traits. As reliance on Pavlovian influence is not as profitable as precise instrumental decision-making in this Go-NoGo task, we expected this influence to decrease with higher IQ and age. Additionally, we hypothesized it would correlate with expressions of psychopathology. 
We found that Pavlovian effects had weak temporal stability, while model-fit was more stable. In terms of external validity, Pavlovian effects decreased with increasing IQ and experience within the task, in line with normative expectations. However, Pavlovian effects were poorly correlated with age or psychopathology. Thus, although this computational construct did correlate with important aspects of development, it does not meet conventional requirements for tracking individual development. We suggest measures that might improve psychometric properties of task-derived Pavlovian measures for future studies.","link":"/opendata/moutoussis-et-al-2018/"},{"title":"Muela et al. (2023)","text":"Negative/positive urgency (NU/PU) refers to the proneness to act rashly under negative/positive emotions. These traits are proxies to generalized emotion dysregulation, and are well-established predictors of gambling-related problems. We aimed to replicate a previous work (Quintero et al., 2020) showing NU to be related to faulty extinction of conditioned stimuli in an emotional conditioning task, to extend these findings to PU, and to clarify the role of urgency in the development of gambling-related craving and problems. 81 gamblers performed an acquisition-extinction task in which neutral, disgusting, erotic and gambling-related images were used as unconditioned stimuli (US), and color patches as conditioned stimuli (CS). Trial-by-trial predictive responses were analyzed using generalized linear mixed-effects models (GLME). PU was more strongly related than NU to craving and severity of gambling problems. PU did not influence acquisition in the associative task, whereas NU slightly slowed it. Extinction was hampered in individuals with high PU, and a follow-up analysis showed this effect to depend on relative preference for skill-based and casino games. 
Results suggest that resistance to extinction of emotionally conditioned cues is a sign of malfunctioning emotion regulation in problematic gambling. In our work, the key effect was driven by PU (instead of NU), and gambling craving and symptoms were also more closely predicted by it. Future research should compare the involvement of PU and NU in emotion regulation and gambling problems, for gamblers with preference for different gambling modalities (e.g., pure chance vs skill games).","link":"/opendata/muela-et-al-2023/"},{"title":"Mueller et al. (2019)","text":"In classical fear conditioning, neutral conditioned stimuli that have been paired with aversive physical unconditioned stimuli eventually trigger fear responses. Here, we tested whether aversive mental images systematically paired with a conditioned stimulus also cause de novo fear learning in the absence of any external aversive stimulation. In two experiments (N = 45 and N = 41), participants were first trained to produce aversive, neutral, or no imagery in response to three different visual-imagery cues. In a subsequent imagery-based differential-conditioning paradigm, each of the three cues systematically co-terminated with one of three different neutral faces. Although the face that was paired with the aversive-imagery cue was never paired with aversive external stimuli or threat-related instructions, participants rated it as more arousing, unpleasant, and threatening and displayed relative fear bradycardia and fear-potentiated startle. These results could be relevant for the development of fear and related disorders without trauma.","link":"/opendata/mueller-et-al-2019/"},{"title":"Müller et al. (2022)","text":"Research suggests that the temporal order in which people receive information about costs and benefits whilst making decisions can influence their choices. 
But do people have a preference for seeking information about costs or benefits when making effort-based decisions, and does this impact motivation? Here, participants made choices about whether to exert different levels of physical effort to obtain different magnitudes of reward, or rest for low reward. Prior to each effort-based choice, they also had to decide which information they wanted to see first: how much physical effort would be required, or how large the reward would be. We found no overall preference for seeking reward or effort information first, but motivation did change when people saw reward or effort information first. Seeking effort information first, both someone’s average tendency to do so and their choice to see effort first on a given trial, was associated with reductions in the willingness to exert higher effort. Moreover, the tendency to prefer effort information first was associated with reduced vigorous exercise and higher levels of fatigue in everyday life. These findings highlight that preferences for seeking effort information may be a bias that reduces people’s willingness to exert effort in the lab and in everyday life.","link":"/opendata/muller-et-al-2022/"},{"title":"Naefgen et al. (2022)","text":"We expand the usually cross-sectional perspective on dual-tasking performance to include both intra- and interpersonal variability, which should capture within-person dynamics and psychological processes better. Two simple tasks, first as single-, then as dual-tasks, were performed by 58 participants over 20 sessions. We found positive relationships between (1) single- and dual-tasking performance and (2) the dual-task component tasks both inter- and intrapersonally. Better single-taskers were better dual-taskers and better single-tasking sessions were better dual-tasking sessions. This implies shared capacities that covary both inter- and intraindividually. 
We conclude that taking intra- and interpersonal variability into account is a promising future perspective.","link":"/opendata/naefgen-et-al-2022/"},{"title":"Najar et al. (2020)","text":"While there is no doubt that social signals affect human reinforcement learning, there is still no consensus about how this process is computationally implemented. To address this issue, we compared three psychologically plausible hypotheses about the algorithmic implementation of imitation in reinforcement learning. The first hypothesis, decision biasing (DB), postulates that imitation consists in transiently biasing the learner’s action selection without affecting their value function. According to the second hypothesis, model-based imitation (MB), the learner infers the demonstrator’s value function through inverse reinforcement learning and uses it to bias action selection. Finally, according to the third hypothesis, value shaping (VS), the demonstrator’s actions directly affect the learner’s value function. We tested these three hypotheses in 2 experiments (N = 24 and N = 44) featuring a new variant of a social reinforcement learning task. We show through model comparison and model simulation that VS provides the best explanation of learners’ behavior. Results replicated in a third independent experiment featuring a larger cohort and a different design (N = 302). In our experiments, we also manipulated the quality of the demonstrator’s choices and found that learners were able to adapt their imitation rate, so that only skilled demonstrators were imitated. We proposed and tested an efficient meta-learning process to account for this effect, where imitation is regulated by the agreement between the learner and the demonstrator. In sum, our findings provide new insights and perspectives on the computational mechanisms underlying adaptive imitation in human reinforcement learning.","link":"/opendata/najar-et-al-2020/"},{"title":"Nava et al. 
(2023)","text":"Cooperation is one of the most advantageous strategies to have evolved in small- and large-scale human societies, often considered essential to their success or survival. We investigated how cooperation and the mechanisms influencing it change across the lifespan, by assessing cooperative choices from adolescence to old age (12-79 years, N = 382) forcing participants to decide either intuitively or deliberatively through the use of randomised time constraints. As determinants of these choices, we considered participants’ level of altruism, their reciprocity expectations, their optimism, their desire to be socially accepted, and their attitude toward risk. We found that intuitive decision-making favours cooperation, but only from age 20 when a shift occurs: whereas in young adults, intuition favours cooperation, in adolescents it is reflection that favours cooperation. Participants’ decisions were shown to be rooted in their expectations about other people’s cooperative behaviour and influenced by individuals’ level of optimism about their own future, revealing that the journey to the cooperative humans we become is shaped by reciprocity expectations and individual predispositions.","link":"/opendata/nava-et-al-2023/"},{"title":"Navarro et al. (2018)","text":"In everyday life, people need to make choices without full information about the environment, which poses an explore-exploit dilemma in which one must balance the need to learn about the world and the need to obtain rewards from it. The explore-exploit dilemma is often studied using the multi-armed restless bandit task, in which people repeatedly select from multiple options, and human behaviour is modelled as a form of reinforcement learning via Kalman filters. 
Inspired by work in the judgment and decision-making literature, we present two experiments using multi-armed bandit tasks in both static and dynamic environments, in situations where options can become unviable and vanish if they are not pursued. A Kalman filter model using Thompson sampling provides an excellent account of human learning in a standard restless bandit task, but there are systematic departures in the vanishing bandit task. We explore the nature of this loss aversion signal and consider theoretical explanations for the results.","link":"/opendata/navarro-et-al-2018/"},{"title":"Navarro et al. (2023)","text":"Complex span tasks are perhaps the most widely used paradigm to measure working memory capacity (WMC). Researchers assume that all types of complex span tasks assess domain-general WM. However, most research supporting this claim comes from factor analysis approaches that do not examine task performance at the item level, thus not allowing comparison of the characteristics of verbal and spatial complex span tasks. Item response theory (IRT) can help determine the extent to which different complex span tasks assess domain-general WM. In the current study, spatial and verbal complex span tasks were examined using IRT. The results revealed differences between verbal and spatial tasks in terms of item difficulty and block difficulty, and showed that most subjects with below-average ability were able to answer most items correctly across all tasks. In line with previous research, the findings suggest that examining domain-general WM by using only one task might elicit skewed scores based on task domain. Further, visuospatial complex span tasks should be prioritized as a measure of WMC if resources are limited.","link":"/opendata/navarro-et-al-2023/"},{"title":"Navidi et al. (2023)","text":"Prosocial learning involves the acquisition of knowledge and skills necessary for making decisions that benefit others. 
We asked if, in the context of value-based decision-making, there is any difference between learning strategies for oneself vs. for others. We implemented a 2-step reinforcement learning paradigm in which participants learned, in separate blocks, to make decisions for themselves or for a present other confederate who evaluated their performance. We replicated the canonical features of model-based and model-free reinforcement learning in our results. The behaviour of the majority of participants was best explained by a mixture of model-based and model-free control, with most participants relying more heavily on MB control, and this strategy enhanced their learning success. Regarding our key self-other hypothesis, we did not find any significant difference in either behavioural performance or model-based learning parameters when comparing the self and other conditions.","link":"/opendata/navidi-et-al-2023/"},{"title":"Neuser et al. (2020)","text":"Interoceptive feedback transmitted via the vagus nerve plays a vital role in motivation by tuning actions according to physiological needs. Whereas vagus nerve stimulation (VNS) reinforces actions in animals, motivational effects elicited by VNS in humans are still largely elusive. Here, we applied non-invasive transcutaneous auricular VNS (taVNS) on the left or right ear while participants exerted effort to earn rewards using a randomized cross-over design (vs. sham). In line with preclinical studies, acute taVNS enhances invigoration of effort, and stimulation on the left side primarily facilitates invigoration for food rewards. In contrast, we do not find conclusive evidence that acute taVNS affects effort maintenance or wanting ratings. Collectively, our results suggest that taVNS enhances reward-seeking by boosting invigoration, not effort maintenance, and that the stimulation side affects generalization beyond food reward. 
Thus, taVNS may enhance the pursuit of prospective rewards which may pave avenues to treat motivational deficiencies.","link":"/opendata/neuser-et-al-2020/"},{"title":"Nicholas et al. (2022)","text":"A key question in decision-making is how humans arbitrate between competing learning and memory systems to maximize reward. We address this question by probing the balance between the effects, on choice, of incremental trial-and-error learning versus episodic memories of individual events. Although a rich literature has studied incremental learning in isolation, the role of episodic memory in decision-making has only recently drawn focus, and little research disentangles their separate contributions. We hypothesized that the brain arbitrates rationally between these two systems, relying on each in circumstances to which it is most suited, as indicated by uncertainty. We tested this hypothesis by directly contrasting contributions of episodic and incremental influence to decisions, while manipulating the relative uncertainty of incremental learning using a well-established manipulation of reward volatility. Across two large, independent samples of young adults, participants traded these influences off rationally, depending more on episodic information when incremental summaries were more uncertain. These results support the proposal that the brain optimizes the balance between different forms of learning and memory according to their relative uncertainties and elucidate the circumstances under which episodic memory informs decisions.","link":"/opendata/nicholas-et-al-2022/"},{"title":"Nitsch et al. (2022)","text":"A contemporary research agenda in behavioral economics and neuroeconomics aims to identify individual differences and (neuro)psychological correlates of rationality. This research has been widely received in important interdisciplinary and field outlets. 
However, the psychometric reliability of such measurements of rationality has been presumed without enough methodological scrutiny. Drawing from multiple original and published datasets (in total over 1,600 participants), we unequivocally show that contemporary measurements of rationality have moderate to poor reliability according to common standards. Further analyses of the variance components, as well as allowing participants to revise previous choices, suggest that this is driven by low between-subject variance rather than high measurement error. As has been argued previously for other behavioral measurements, this poses a challenge to the predominant correlational research designs and the search for sociodemographic or neural predictors. While our results draw a sobering picture of the prospects of contemporary measurements of rationality, they are not necessarily surprising from a theoretical perspective, which we outline in our discussion.","link":"/opendata/nitsch-et-al-2022/"},{"title":"Niv et al. (2012)","text":"Humans and animals are exquisitely, though idiosyncratically, sensitive to risk or variance in the outcomes of their actions. Economic, psychological, and neural aspects of this are well studied when information about risk is provided explicitly. However, we must normally learn about outcomes from experience, through trial and error. Traditional models of such reinforcement learning focus on learning about the mean reward value of cues and ignore higher order moments such as variance. We used fMRI to test whether the neural correlates of human reinforcement learning are sensitive to experienced risk. Our analysis focused on anatomically delineated regions of a priori interest in the nucleus accumbens, where blood oxygenation level-dependent (BOLD) signals have been suggested as correlating with quantities derived from reinforcement learning. 
We first provide unbiased evidence that the raw BOLD signal in these regions corresponds closely to a reward prediction error. We then derive from this signal the learned values of cues that predict rewards of equal mean but different variance and show that these values are indeed modulated by experienced risk. Moreover, a close neurometric-psychometric coupling exists between the fluctuations of the experience-based evaluations of risky options that we measured neurally and the fluctuations in behavioral risk aversion. This suggests that risk sensitivity is integral to human learning, illuminating economic models of choice, neuroscientific models of affective learning, and the workings of the underlying neural mechanisms.","link":"/opendata/niv-et-al-2012/"},{"title":"NSPN Consortium (2022)","text":"Data from the NSPN study, a dataset of N=2400 adolescents and young adults (ages 14-24) who completed many self-report, cognitive, and reinforcement learning measures.","link":"/opendata/nspn-consortium-2022/"},{"title":"Nussenbaum et al. (2020)","text":"For years, adult psychological research has benefitted from web-based data collection. There is growing interest in harnessing this approach to facilitate data collection from children and adolescents to address foundational questions about cognitive development. To date, however, few studies have directly tested whether findings from in-lab developmental psychology tasks can be replicated online, particularly in the domain of value-based learning and decision-making. To address this question, we set up a pipeline for online data collection with children, adolescents, and adults, and conducted a replication of Decker et al. (2016). The original in-lab study employed a sequential decision-making paradigm to examine shifts in value-learning strategies from childhood to adulthood. 
Here, we used the same paradigm in a sample of 151 children (N = 50; ages 8 - 12 years), adolescents (N = 50; ages 13 - 17 years), and adults (N = 51; ages 18 - 25 years) and replicated the main finding that the use of a “model-based” learning strategy increases with age. In addition, we adapted a new index of abstract reasoning (MaRs-IB; Chierchia et al. 2019) for use online, and replicated a key result from Potter et al. (2017), which found that abstract reasoning ability mediated the relation between age and model-based learning. Our re-analyses of two previous in-lab datasets alongside our analysis of our online dataset revealed few qualitative differences across task administrations. These findings suggest that with appropriate precautions, researchers can effectively examine developmental differences in learning computations through unmoderated, online experiments.","link":"/opendata/nussenbaum-et-al-2020/"},{"title":"Nussenbaum et al. (2023)","text":"Across the lifespan, individuals frequently choose between exploiting known rewarding options or exploring unknown alternatives. A large body of work has suggested that children may explore more than adults. However, because novelty and reward uncertainty are often correlated, it is unclear how they differentially influence decision-making across development. Here, children, adolescents, and adults (ages 8-27 years, N = 122) completed an adapted version of a recently developed value-guided decision-making task that decouples novelty and uncertainty. In line with prior studies, we found that exploration decreased with increasing age. Critically, participants of all ages demonstrated a similar bias to select choice options with greater novelty, whereas aversion to reward uncertainty increased into adulthood. 
Computational modeling of participant choices revealed that whereas adolescents and adults demonstrated attenuated uncertainty aversion for more novel choice options, children’s choices were not influenced by reward uncertainty.","link":"/opendata/nussenbaum-et-al-2023/"},{"title":"O'Connell et al. (2022)","text":"Empathic experiences shape social behaviors and display considerable individual variation. Recent advances in computational behavioral modeling can help rigorously quantify individual differences, but remain understudied in the context of empathy and antisocial behavior. We adapted a go/no-go reinforcement learning task across social and non-social contexts such that monetary gains and losses explicitly impacted the subject, a study partner, or no one. Empathy was operationalized as sensitivity to others’ rewards, sensitivity to others’ losses, and as the Pavlovian influence of empathic outcomes on approach and avoidance behavior. Results showed that 61 subjects learned for a partner in a way that was computationally similar to how they learned for themselves. Results supported the psychometric value of individualized model parameters such as sensitivity to others’ loss, which was inversely associated with antisociality. Modeled empathic sensitivity also mapped onto motivation ratings, but was not associated with self-reported trait empathy. This work is the first to apply a social reinforcement learning task that spans affect and action requirement (go/no-go) to measure multiple facets of empathic sensitivity.","link":"/opendata/o-connell-et-al-2202/"},{"title":"Oguchi et al. (2023)","text":"Humans form complex societies in which we routinely engage in social decision-making regarding the allocation of resources among ourselves and others. One dimension that characterizes social decision-making in particular is whether to prioritize self-interest or respect for others-proself or prosocial. 
What causes this individual difference in social value orientation? Recent developments in the social dual-process theory argue that social decision-making is characterized by its underlying domain-general learning systems: the model-free and model-based systems. In line with this learning approach, we propose and experimentally test the hypothesis that differences in social preferences stem from which learning system is dominant in an individual. Here, we used a non-social state transition task that allowed us to assess the balance between model-free/model-based learning and investigate its relation to the social value orientations. The results showed that proselfs depended more on model-based learning, whereas prosocials depended more on model-free learning. Reward amount and reaction time analyses showed that proselfs learned the task structure earlier in the session than prosocials, reflecting their difference in model-based/model-free learning dependence. These findings support the learning hypothesis on what makes differences in social preferences and have implications for understanding the mechanisms of prosocial behavior.","link":"/opendata/oguchi-et-al-2023/"},{"title":"Otsuka & Yotsumoto (2023)","text":"Magnitude information is often correlated in the external world, providing complementary information about the environment. As if to reflect this relationship, the perceptions of different magnitudes (e.g., time and numerosity) are known to influence one another. Recent studies suggest that such magnitude interaction is similar to cue integration, such as multisensory integration. Here, we tested whether human observers could integrate the magnitudes of two quantities with distinct physical units (i.e., time and numerosity) as abstract magnitude information. The participants compared the magnitudes of two visual stimuli based on time, numerosity, or both. 
Consistent with the predictions of the maximum likelihood estimation (MLE) model, the participants integrated time and numerosity in a near-optimal manner; the weights for numerosity increased as the reliability of the numerosity information increased, and the integrated estimate was more reliable than either the time or numerosity estimate. Furthermore, the integration approached a statistical optimum as the temporal discrepancy of the acquisition of each piece of information became smaller. These results suggest that magnitude interaction arises through a similar computational mechanism to cue integration. They are also consistent with the idea that different magnitudes are processed by a generalized magnitude system.","link":"/opendata/otsuka-yotsumoto-2023/"},{"title":"Olschewski et al. (2019)","text":"The perception and integration of sequential numerical information is a common cognitive task. It is a prerequisite for experience-based economic choices, but it is usually not part of economic decision theory. To better understand the process of symbolic number integration and its influence on economic behavior, we performed three experimental studies that examined mean estimates and economic valuations of continuous number distributions. The results indicate that participants valued random number distributions below their respective arithmetic means and valued distributions as lower when their variance increased, indicating risk aversion. A similar though less pronounced pattern also occurred in the matched mean estimation task where accuracy was incentivized and preferences played no role. These patterns suggest that seemingly risk-averse preferences are partly due to cognitive biases when perceiving and estimating numbers. In addition, participants’ apparent economic preference for right-skewed outcome distributions could be attributed mainly to estimation biases. 
We discuss the extent to which the results can be explained based on a compressed mental number line and different sample weighting models. Finally, a new model that can account for the qualitative data pattern and has stronger overweighting of lower than higher numbers as its core feature is developed. Together, our results indicate that basic cognitive processes in perceiving and integrating number sequences play a key role in understanding experience-based economic behavior.","link":"/opendata/olschewski-et-al-2019/"},{"title":"Otsuka (2023)","text":"People can extract and learn statistical regularities from various aspects of everyday life. The current study examined whether people have a mechanism to learn regularity based on time information and investigated whether sensitivity to time information is modulated by individual time management. In the familiarization phase, participants were required to observe a visual sequence of objects. Although the objects were presented in a random order, the amount of time for which the objects were presented was organized into successive triplets (e.g., 850-1,000-700 ms). In the subsequent test phase, two three-object sequences were presented. One sequence was a timing triplet that had temporal regularities. The other was a foil created from three different triplets. Participants were required to judge which sequence was more familiar based on the familiarization phase. The results showed that the triplets were successfully discriminated from the foils. These results were also observed for blank intervals. The current findings also revealed that although visual statistical learning was expressed when participants observed the temporal regularities of shapes tied to the corresponding durations during familiarization, this learning overshadowed them from indicating generic timing regularities when they were untied to objects. 
Furthermore, participants with high scores on the Time Management Scale showed a higher extent of visual statistical learning on object durations than those with low scores. These results suggest that people extract and learn regularities based on time information and that statistical learning based on time information is correlated with individual time management.","link":"/opendata/otsuka-2023/"},{"title":"Pachur et al. (2017)","text":"We separate for the first time the roles of cognitive and motivational factors in shaping age differences in decision making under risk. Younger and older adults completed gain, loss, and mixed-domain choice problems as well as measures of cognitive functioning and affect. The older adults’ decision quality was lower than the younger adults’ in the loss domain, and this age difference was attributable to the older adults’ lower cognitive abilities. In addition, the older adults chose the more risky option more often than the younger adults in the gain and mixed domains; this difference in risk aversion was attributable to less pronounced negative affect among the older adults. Computational modeling with a hierarchical Bayesian implementation of cumulative prospect theory revealed that the older adults had higher response noise and more optimistic decision weights for gains than did the younger adults. Moreover, the older adults showed no loss aversion, a finding that supports a positivity-focus (rather than a loss-prevention) view of motivational reorientation in older age.","link":"/opendata/pachur-et-al-2017/"},{"title":"Pachur et al. (2018)","text":"There is a disconnect in the literature between analyses of risky choice based on cumulative prospect theory (CPT) and work on predecisional information processing. 
One likely reason is that for expectation models (e.g., CPT), it is often assumed that people behaved only as if they conducted the computations leading to the predicted choice and that the models are thus mute regarding information processing. We suggest that key psychological constructs in CPT, such as loss aversion and outcome and probability sensitivity, can be interpreted in terms of attention allocation. In two experiments, we tested hypotheses about specific links between CPT parameters and attentional regularities. Experiment 1 used process tracing to monitor participants’ predecisional attention allocation to outcome and probability information. As hypothesized, individual differences in CPT’s loss-aversion, outcome-sensitivity, and probability-sensitivity parameters (estimated from participants’ choices) were systematically associated with individual differences in attention allocation to outcome and probability information. For instance, loss aversion was associated with the relative attention allocated to loss and gain outcomes, and a more strongly curved weighting function was associated with less attention allocated to probabilities. Experiment 2 manipulated participants’ attention to losses or gains, causing systematic differences in CPT’s loss-aversion parameter. This result indicates that attention allocation can to some extent cause choice regularities that are captured by CPT. Our findings demonstrate an as-if model’s capacity to reflect characteristics of information processing. We suggest that the observed CPT-attention links can be harnessed to inform the development of process models of risky choice.","link":"/opendata/pachur-et-al-2018/"},{"title":"Palmer et al. (2023)","text":"Objective: Near-misses are a structural characteristic of gambling products that can be engineered within modern digital games. 
Over a series of pre-registered experiments using an online slot machine simulation, we investigated the impact of near-miss outcomes on subjective ratings (motivation, valence) and two behavioural measures (speed of gambling, bet size). Method: Participants were recruited using Prolific and gambled on an online 3-reel slot machine simulator that delivered a 1 in 3 rate of X-X-O near-misses. Study 1 measured trial-by-trial subjective ratings of valence and motivation (Study 1a, n = 169; Study 1b, n = 148). Study 2 measured spin initiation latencies (n = 170) as a function of the previous trial outcome. Study 3 measured bet size (n = 172) as a function of the previous trial outcome. Results: In Study 1a, near-misses increased the motivation to continue gambling relative to full-misses, supporting H1. On valence ratings, near-misses were rated significantly more positively than full-misses, in the opposite direction to H2; this effect was confirmed in a close replication (Study 1b). In Study 2, participants gambled faster following near-misses relative to full-misses, supporting H3. In Study 3, participants significantly increased their bet size following near-misses relative to full-misses, supporting H4. Conclusion: Across all dependent variables, near-miss outcomes yielded statistically significant differences from objectively-equivalent full-miss outcomes, corroborating the ‘near miss effect’ across both subjective and behavioral measures, and in the environment of online gambling. The unexpected findings on valence ratings are considered in terms of boundary conditions for the near-miss effect, and competing theoretical accounts based on frustration/regret, goal generalization, and skill acquisition.","link":"/opendata/palmer-et-al-2023/"},{"title":"Palminteri et al. (2016)","text":"Adolescence is a period of life characterised by changes in learning and decision-making. 
Learning and decision-making do not rely on a unitary system, but instead require the coordination of different cognitive processes that can be mathematically formalised as dissociable computational modules. Here, we aimed to trace the developmental time-course of the computational modules responsible for learning from reward or punishment, and learning from counterfactual feedback. Adolescents and adults carried out a novel reinforcement learning paradigm in which participants learned the association between cues and probabilistic outcomes, where the outcomes differed in valence (reward versus punishment) and feedback was either partial or complete (either the outcome of the chosen option only, or the outcomes of both the chosen and unchosen option, were displayed). Computational strategies changed during development: whereas adolescents’ behaviour was better explained by a basic reinforcement learning algorithm, adults’ behaviour integrated increasingly complex computational features, namely a counterfactual learning module (enabling enhanced performance in the presence of complete feedback) and a value contextualisation module (enabling symmetrical reward and punishment learning). Unlike adults, adolescent performance did not benefit from counterfactual (complete) feedback. In addition, while adults learned symmetrically from both reward and punishment, adolescents learned from reward but were less likely to learn from punishment. This tendency to rely on rewards and not to consider alternative consequences of actions might contribute to our understanding of decision-making in adolescence.","link":"/opendata/palminteri-et-al-2016/"},{"title":"Palminteri et al. (2017)","text":"Previous studies suggest that factual learning, that is, learning from obtained outcomes, is biased, such that participants preferentially take into account positive, as compared to negative, prediction errors. 
However, whether or not the prediction error valence also affects counterfactual learning, that is, learning from forgone outcomes, is unknown. To address this question, we analysed the performance of two groups of participants on reinforcement learning tasks using a computational model that was adapted to test if prediction error valence influences learning. We carried out two experiments: in the factual learning experiment, participants learned from partial feedback (i.e., the outcome of the chosen option only); in the counterfactual learning experiment, participants learned from complete feedback information (i.e., the outcomes of both the chosen and unchosen option were displayed). In the factual learning experiment, we replicated previous findings of a valence-induced bias, whereby participants learned preferentially from positive, relative to negative, prediction errors. In contrast, for counterfactual learning, we found the opposite valence-induced bias: negative prediction errors were preferentially taken into account, relative to positive ones. When considering valence-induced bias in the context of both factual and counterfactual learning, it appears that people tend to preferentially take into account information that confirms their current choice.","link":"/opendata/palminteri-et-al-2017/"},{"title":"Panidi et al. (2022)","text":"In this study, we provide causal evidence that the dorsolateral prefrontal cortex (DLPFC) supports the computation of subjective value in choices under risk via its involvement in probability weighting. Following offline continuous theta-burst transcranial magnetic stimulation (cTBS) of the DLPFC subjects (N = 30, mean age 23.6, 56% females) completed a computerized task consisting of 96 binary lottery choice questions presented in random order. 
Using the hierarchical Bayesian modeling approach, we then estimated the structural parameters of risk preferences (the degree of risk aversion and the curvature of the probability weighting function) and analyzed the obtained posterior distributions to determine the effect of stimulation on model parameters. On a behavioral level, temporary downregulation of the left DLPFC excitability through cTBS decreased the likelihood of choosing an option with higher expected reward while the probability of choosing a riskier lottery did not significantly change. Modeling the stimulation effects on risk preference parameters showed anecdotal evidence, as assessed by Bayes factors, that the probability weighting parameter increased after the left DLPFC TMS compared to sham.","link":"/opendata/panidi-et-al-2022/"},{"title":"Patt et al. (2021)","text":"When faced with intertemporal choices, people typically devalue rewards available in the future compared to rewards more immediately available, a phenomenon known as temporal discounting. Decisions involving intertemporal choices arise daily, with critical impact on health and financial wellbeing. Although many such decisions are “experiential” in that they involve delays and rewards that are experienced in real-time and can inform subsequent choices, most studies have focused on intertemporal choices with hypothetical outcomes (or outcomes delivered after all decisions are made). The present study focused on experiential intertemporal choices. First, a novel intertemporal choice task was developed and validated, using delays experienced in real time and artistic photographs as consumable perceptual rewards. Second, performance on the experiential task was compared to performance on a classic intertemporal choice task with hypothetical outcomes. Involvement of distinct processes across tasks was probed by examining differential relations to state and trait anxiety. 
A two-parameter logistic function framework was proposed to fit indifference point data. This approach accounts for individual variability not only in the delay at which an individual switches from choosing the delayed to more immediate option, but also in the slope of that switch. Fit results indicated that the experiential task elicited temporal discounting, with effective trade-off between delay and perceptual reward. Comparison with the hypothetical intertemporal choice task suggested distinct mechanisms: first, temporal discounting across the two tasks was not correlated; and second, state and trait anxiety both were associated with choice behavior in the experiential task, albeit in distinct ways, whereas neither was significantly associated with choice behavior in the hypothetical task. The engagement of different processes in the experiential compared to hypothetical task may align with neural evidence for the recruitment of the hippocampus in animal but not in classic human intertemporal choice studies.","link":"/opendata/patt-et-al-2021/"},{"title":"Pauli et al. (2022)","text":"Theoretical and empirical accounts suggest that adolescence is associated with heightened reward learning and impulsivity. Experimental tasks and computational models that can dissociate reward learning from the tendency to initiate actions impulsively (action initiation bias) are thus critical to characterise the mechanisms that drive developmental differences. However, existing work has rarely quantified both learning ability and action initiation, or it has tested small samples. Here, using computational modelling of a learning task collected from a large sample (N=742, 9-18 years, 11 countries), we tested differences in reward and punishment learning and action initiation from childhood to adolescence. Computational modelling revealed that whilst punishment learning rates increased with age, reward learning remained stable. 
In parallel, action initiation biases decreased with age. Results were similar when considering pubertal stage instead of chronological age. We conclude that heightened reward responsivity in adolescence can reflect differences in action initiation rather than enhanced reward learning.","link":"/opendata/pauli-et-al-2022/"},{"title":"Payzan-LeNestour et al. (2021)","text":"Research in the field of multisensory perception shows that what we hear can influence what we see in a wide range of perceptual tasks. It is however unknown whether this extends to the visual perception of risk, despite the importance of the question in many applied domains where properly assessing risk is crucial, starting with financial trading. To fill this knowledge gap, we ran interviews with professional traders and conducted three laboratory studies using judgments of financial asset risk as a testbed. We provide evidence that the presence of ambient sound impacts risk perception, possibly due to the combination of facilitatory and synesthetic effects of general relevance to the perception of risk in many species as well as humans. We discuss the implications of our findings for various applied domains (e.g., financial, medical, and military decision-making), and raise new questions for future research.","link":"/opendata/payzan-lenestour-et-al-2021/"},{"title":"Pedersen et al. (2021)","text":"Adaptive behavior requires balancing approach and avoidance based on the rewarding and aversive consequences of actions. Imbalances in this evaluation are thought to characterize mood disorders such as major depressive disorder (MDD). We present a novel application of the drift diffusion model (DDM) suited to quantify how offers of reward and aversiveness, and neural correlates thereof, are dynamically integrated to form decisions, and how such processes are altered in MDD. 
Hierarchical parameter estimation from the DDM demonstrated that the MDD group differed in three distinct reward-related parameters driving approach-based decision making. First, MDD was associated with reduced reward sensitivity, measured as the impact of offered reward on evidence accumulation. Notably, this effect was replicated in a follow-up study. Second, the MDD group showed lower starting point bias towards approaching offers. Third, this starting point was influenced in opposite directions by Pavlovian effects and by nucleus accumbens activity across the groups: greater accumbens activity was related to approach bias in controls but avoid bias in MDD. Cross-validation revealed that the combination of these computational biomarkers was diagnostic of patient status, with accumbens influences being particularly diagnostic. Finally, within the MDD group, reward sensitivity and nucleus accumbens parameters were differentially related to symptoms of perceived stress and depression. Collectively, these findings establish the promise of computational psychiatry approaches to dissecting approach-avoidance decision dynamics relevant for affective disorders.","link":"/opendata/pedersen-et-al-2021/"},{"title":"Peng et al. (2022)","text":"Obsessive-compulsive disorder (OCD) is characterized by uncontrollable repetitive actions thought to rely on abnormalities within fundamental instrumental learning systems. We investigated cognitive and computational mechanisms underlying Pavlovian biases on instrumental behavior in both clinical OCD patients and healthy controls using a Pavlovian-Instrumental Transfer (PIT) task. PIT is typically evidenced by increased responding in the presence of a positive (previously rewarded) Pavlovian cue, and reduced responding in the presence of a negative cue. 
Thirty OCD patients and thirty-one healthy controls completed the Pavlovian Instrumental Transfer test, which included instrumental training, Pavlovian training for positive, negative and neutral cues, and a PIT phase in which participants performed the instrumental task in the presence of the Pavlovian cues. Modified Rescorla-Wagner models were fitted to trial-by-trial data of participants to estimate underlying computational mechanisms and quantify individual differences during training and transfer stages. Bayesian hierarchical methods were used to estimate free parameters and compare the models. Behavioral and computational results indicated a weaker Pavlovian influence on instrumental behavior in OCD patients than in healthy controls, especially for negative Pavlovian cues. Our results contrast with the increased PIT effects reported for another set of disorders characterized by compulsivity, substance use disorders, in which PIT is enhanced. A possible reason for the reduced PIT in OCD may be impairment in using the contextual information provided by the cues to appropriately adjust behavior, especially when inhibiting responding when a negative cue is present. This study provides deeper insight into our understanding of deficits in OCD from the perspective of Pavlovian influences on instrumental behavior and may have implications for OCD treatment modalities focused on reducing compulsive behaviors.","link":"/opendata/peng-et-al-2022/"},{"title":"Pereg et al. (2022)","text":"The study of social learning examines how individuals learn from others by means of observation, imitation, or compliance with advice. However, it still remains largely unknown whether social learning processes have a distinct contribution to choice behavior, independent from non-social trial-and-error learning that often occurs simultaneously. 153 participants completed a reinforcement learning task, where they were asked to make choices to gain rewards. 
Advice from an artificial teacher was presented on 60% of the trials, allowing us to compare choice behavior with and without advice. Results showed a strong and reliable tendency to follow advice (test-retest reliability ~.73). Computational modeling suggested a unique contribution of three distinct learning strategies: (a) individual learning (i.e., learning the value of actions, independent of advice), (b) informed advice-taking (i.e., learning the value of following advice), and (c) non-informed advice-taking (i.e., a constant bias to follow advice regardless of outcome history). Computational lesion simulation analyses provided specific regression signatures for both informed and non-informed advice-taking processes that were then confirmed by empirical data. We discuss the theoretical implications of integrating internal and external information during the learning process.","link":"/opendata/pereg-et-al-2022/"},{"title":"Pescetelli & Yeung (2019)","text":"In a world where ideas flow freely between people across multiple platforms, we often find ourselves relying on others’ information without an objective standard to judge whether those opinions are accurate. The present study tests an agreement-in-confidence hypothesis of advice perception, which holds that internal metacognitive evaluations of decision confidence play an important functional role - namely being a learning signal that allows one to learn about the reliability of others in the absence of feedback - in the perception and use of social information, such as peers’ advice. We propose that confidence can be used, computationally, to estimate advisors’ trustworthiness and advice reliability. Specifically, these processes are hypothesized to be particularly important in situations where objective feedback is absent or difficult to acquire. 
Here, we use a judge-advisor system paradigm to precisely manipulate the profiles of virtual advisors whose opinions are provided to participants performing a perceptual decision making task. We find that when advisors’ and participants’ judgments are independent, people are able to discriminate subtle advice features, like confidence calibration, whether or not objective feedback is available. However, when observers’ judgments (and judgment errors) are correlated - as is the case in many social contexts - predictable distortions can be observed between feedback and feedback-free scenarios. A simple model of advice reliability estimation, endowed with metacognitive insight, is able to explain key patterns of results observed in the human data. Finally, we use agent-based modeling to explore implications of these individual-level decision strategies for network-level patterns of trust and belief formation.","link":"/opendata/pescetelli-yeung-2019/"},{"title":"Peterson et al. (2021)","text":"Predicting and understanding how people make decisions has been a long-standing goal in many fields, with quantitative models of human decision-making informing research in both the social sciences and engineering. We show how progress toward this goal can be accelerated by using large datasets to power machine-learning algorithms that are constrained to produce interpretable psychological theories. Conducting the largest experiment on risky choice to date and analyzing the results using gradient-based optimization of differentiable decision theories implemented through artificial neural networks, we were able to recapitulate historical discoveries, establish that there is room to improve on existing theories, and discover a new, more accurate model of human decision-making in a form that preserves the insights from centuries of research.","link":"/opendata/peterson-et-al-2021/"},{"title":"Petitet et al. 
(2022)","text":"Apathy and impulsivity are expressed in a wide range of neuropsychiatric disorders, and, to a less severe extent, in healthy people too. Although traditionally considered to be opposite extremes of a single motivational spectrum, recent epidemiological questionnaire-based data suggest that both traits can in fact co-exist within the same individual. Here, we sought to investigate the relationship between these constructs in healthy people within a controlled task environment that examines the ability to make a decision under temporal uncertainty and measures the vigour of the response. Sixty participants performed a new version of the Traffic Light Task (TLT) and completed self-report questionnaire measures of apathy and impulsivity. Although the two traits were positively correlated in questionnaire assessments, they were associated with distinct behavioural signatures on the task. Impulsivity was expressed as an inflexible tendency to generate rapid anticipatory responses, regardless of task context. Apathy, on the other hand, was associated with a blunted effect of reward on response vigour. These findings reveal how apathy and impulsivity are related to distinct dimensions of goal-directed behaviour, explaining how these traits might co-exist in the same individuals.","link":"/opendata/petitet-et-al-2022/"},{"title":"Pettine et al. (2023)","text":"The world is overabundant with feature-rich information obscuring the latent causes of experience. How do people approximate the complexities of the external world with simplified internal representations that generalize to novel examples or situations? Theories suggest that internal representations could be determined by decision boundaries that discriminate between alternatives, or by distance measurements against prototypes and individual exemplars. Each provide advantages and drawbacks for generalization. 
We therefore developed theoretical models that leverage both discriminative and distance components to form internal representations via action-reward feedback. We then developed three latent-state learning tasks to test how humans use goal-oriented discriminative attention and prototype/exemplar representations. The majority of participants attended to both goal-relevant discriminative features and the covariance of features within a prototype. A minority of participants relied only on the discriminative feature. Behaviour of all participants could be captured by parameterizing a model combining prototype representations with goal-oriented discriminative attention.","link":"/opendata/pettine-et-al-2023/"},{"title":"Pike et al. (2022)","text":"Affective biases are commonly seen in disorders such as depression and anxiety, where individuals may show attention towards and more rapid processing of negative or threatening stimuli. Affective biases have been shown to change with effective intervention: randomized controlled trials into these biases and the mechanisms that underpin them may allow greater understanding of how interventions can be improved and their success be maximized. For trials to be informative, we must have reliable ways of measuring affective bias over time, so we can detect how interventions are changing these biases. In particular, the test-retest reliability of our measures puts an upper bound on our ability to detect effects: thus, in this study, we examine the test-retest reliability of two behavioural tasks that examine affective bias. We recruited 58 individuals in an online study who completed these tasks twice, with at least 14 days in between sessions. We analysed reliability of both summary statistics and parameters from computational models using Pearson’s correlations and intra-class correlations. Standard summary statistic measures from these affective bias tasks had reliability ranging from 0.18 (poor) to 0.49 (moderate). 
Parameters from computational modelling of these tasks were in many cases less reliable than summary statistics. Embedding the covariance between sessions within the generative modelling framework resulted in higher stability estimates. In sum, measures from these affective bias tasks are moderately reliable, but further work to improve the reliability of these tasks would further improve our ability to draw inferences in randomized trials.","link":"/opendata/pike-et-al-2022/"},{"title":"Pike et al. (2023)","text":"Catastrophizing, when an individual overestimates the probability of a severe negative outcome, is related to various aspects of mental ill-health. Here, we further characterize catastrophizing by investigating the extent to which self-reported catastrophizing is associated with risk-taking, using an online behavioural task and computational modelling. We performed two online studies: a pilot study (n = 69) and a main study (n = 263). In the pilot study, participants performed the Balloon Analogue Risk Task (BART), alongside two other tasks (reported in the Supplement), and completed mental health questionnaires. Based on the findings from the pilot, we explored risk-taking in more detail in the main study using two versions of the Balloon Analogue Risk task (BART), with either a high or low cost for bursting the balloon. In the main study, there was a significant negative relationship between self-report catastrophizing scores and risk-taking in the low (but not high) cost version of the BART. Computational modelling of the BART task revealed no relationship between any parameter and catastrophizing scores in either version of the task. We show that increased self-reported catastrophizing may be associated with reduced behavioural measures of risk-taking, but we were unable to identify a computational correlate of this effect.","link":"/opendata/pike-et-al-2023/"},{"title":"Piray et al. 
(2019)","text":"Learning and decision-making are modulated by socio-emotional processing and such modulation is implicated in clinically relevant personality traits of social anxiety. The present study elucidates the computational and neural mechanisms by which emotionally aversive cues disrupt learning in socially anxious human individuals. Healthy volunteers with low or high trait social anxiety performed a reversal learning task requiring learning actions in response to angry or happy face cues. Choice data were best captured by a computational model in which learning rate was adjusted according to the history of surprises. High trait socially anxious individuals used a less-dynamic strategy for adjusting their learning rate in trials started with angry face cues and, unlike the low social anxiety group, their dorsal anterior cingulate cortex (dACC) activity did not covary with the learning rate. Our results demonstrate that trait social anxiety is accompanied by disruption of optimal learning and dACC activity in threatening situations.","link":"/opendata/piray-et-al-2019/"},{"title":"Pisauro et al. (2022)","text":"Social interactions evolve continuously. Sometimes we cooperate, sometimes we compete, while at other times we strategically position ourselves somewhere in between to account for the ever-changing social contexts around us. Research on social interactions often focuses on a binary dichotomy between competition and cooperation, ignoring people’s evolving shifts along a continuum. Here, we develop an economic game - the Space Dilemma - where two players change their degree of cooperativeness over time in cooperative and competitive contexts. Using computational modelling we show how social contexts bias choices and characterise how inferences about others’ intentions modulate cooperativeness. 
Consistent with the modelling predictions, brain regions previously linked to social cognition, including the temporo-parietal junction, dorso-medial prefrontal cortex and the anterior cingulate gyrus, encode social prediction errors and context-dependent signals, correlating with shifts along a cooperation-competition continuum. These results provide a comprehensive account of the computational and neural mechanisms underlying the continuous trade-off between cooperation and competition.","link":"/opendata/pisauro-et-al-2022/"},{"title":"Plate et al. (2023)","text":"Similar decision-making situations often arise repeatedly, presenting tradeoffs between (i) acquiring new information to facilitate future-related decisions (exploration) and (ii) using existing information to secure expected outcomes (exploitation). Exploration choices have been well characterized in nonsocial contexts; however, choices to explore (or not) in social environments are less well understood. Social environments are of particular interest because a key factor that increases exploration in nonsocial contexts is environmental uncertainty, and the social world is generally appreciated to be highly uncertain. Although uncertainty sometimes must be reduced behaviorally (e.g., by trying something and seeing what happens), other times it may be reduced cognitively (e.g., by imagining possible outcomes). Across four experiments, participants searched for rewards in a series of grids that were either described as comprising real people distributing previously earned points (social context) or as the result of a computer algorithm or natural phenomenon (nonsocial context). In Experiments 1 and 2, participants explored more, and earned fewer rewards, in the social versus nonsocial context, suggesting that social uncertainty prompted behavioral exploration at the cost of task-relevant goals. 
In Experiments 3 and 4, we provided additional information about the people in the search space that could support social-cognitive approaches to uncertainty reduction, including relationships of the social agents distributing points (Experiment 3) and information relevant to social group membership (Experiment 4); exploration decreased in both instances. Taken together, these experiments highlight the approaches to, and tradeoffs of, uncertainty reduction in social contexts.","link":"/opendata/plate-et-al-2023/"},{"title":"Pleskac et al. (2019)","text":"Computational models of decision making typically assume that as people deliberate between options they mentally simulate outcomes from each one and integrate valuations of these outcomes to form a preference. In two studies, we investigated this deliberation process using a task where participants make a series of decisions between a certain and an uncertain option, which were shown as dynamic visual samples that represented possible payoffs. We developed and validated a method of reverse correlational analysis for the task that measures how this time-varying signal was used to make a choice. The first study used this method to examine how information processing during deliberation differed from a perceptual analog of the task. We found participants were less sensitive to each sample of information during preferential choice. In a second study, we investigated how these different measures of deliberation were related to impulsivity and drug and alcohol use. We found that while properties of the deliberation process were not related to impulsivity, some aspects of the process may be related to substance use. In particular, alcohol abuse was related to diminished sensitivity to the payoff information and drug use was related to the initial starting point of evidence accumulation. 
We synthesized our results with a rank-dependent sequential sampling model which suggests that participants allocated more attentional weight to larger potential payoffs during preferential choice.","link":"/opendata/pleskac-et-al-2019/"},{"title":"Poli et al. (2022)","text":"Exploration is curiosity-driven when it relies on the intrinsic motivation to know rather than on extrinsic rewards. Recent evidence shows that artificial agents perform better on a variety of tasks when their learning is curiosity-driven, and humans often engage in curiosity-driven learning when sampling information from the environment. However, which mechanisms underlie curiosity is still unclear. Here, we let participants freely explore different unknown environments that contained learnable sequences of events with varying degrees of noise and volatility. A hierarchical reinforcement learning model captured how participants were learning in these different kinds of unknown environments, and it also tracked the errors they expected to make and the learning opportunities they were planning to seek. With this computational approach, we show that participants’ exploratory behavior is guided by learning progress and perceptual novelty. Moreover, we demonstrate an overall tendency of participants to avoid extreme forms of uncertainty. These findings elucidate the cognitive mechanisms that underlie curiosity-driven exploration of unknown environments. Implications of this novel way of quantifying curiosity within a reinforcement learning framework are discussed.","link":"/opendata/poli-et-al-2022/"},{"title":"Poli et al. (2023)","text":"Humans are driven by an intrinsic motivation to learn, but the developmental origins of curiosity-driven exploration remain unclear. We investigated the computational principles guiding 4-year-old children’s exploration during a touchscreen game (N=102, F=49, M=53). 
Children guessed the location of characters that were hiding behind a hedge following predictable (yet noisy) patterns. Children could freely switch characters, which allowed us to quantify when they decided to explore something different and what they chose to explore. Bayesian modelling of their responses revealed that children selected activities that were more novel and offered greater learning progress. Moreover, children’s interest in making learning progress correlated with better learning performance. These findings highlight the importance of novelty and learning progress in guiding children’s curiosity-driven exploration.","link":"/opendata/poli-et-al-2023/"},{"title":"Polti et al. (2018)","text":"Psychological models of time perception involve attention and memory: while attention typically regulates the flow of events, memory maintains timed events or intervals. The precise, and possibly distinct, roles of attention and memory in time perception remain debated. In this behavioral study, we tested 48 participants in a prospective duration estimation task while they fully attended to time or performed a working memory (WM) task. We report that paying attention to time lengthened perceived duration in the range of seconds to minutes, whereas diverting attention away from time shortened perceived duration. The overestimation due to attending to time did not scale with durations. To the contrary, increasing WM load systematically decreased subjective duration and this effect scaled with durations. Herein, we discuss the dissociation between attention and WM in timing and scalar variability from the perspective of Bayesian models of time estimations.","link":"/opendata/polti-et-al-2018/"},{"title":"Pool et al. (2021)","text":"It has been suggested that there are two distinct and parallel mechanisms for controlling instrumental behavior in mammals: goal-directed actions and habits. 
To gain an understanding of how these two systems interact to control behavior, it is essential to characterize the mechanisms by which the balance between these systems is influenced by experience. Studies in rodents have shown that the amount of training governs the relative expression of these two systems: Behavior is goal-directed following moderate training, but the more extensively an instrumental action is trained, the more it becomes habitual. It is less clear whether humans exhibit similar training effects on the expression of goal-directed and habitual behavior, as human studies have reported contradictory findings. To tackle these contradictory findings, we formed a consortium, where four laboratories undertook a preregistered experimental induction of habits by manipulating the amount of training. There was no statistical evidence for a main effect of the amount of training on the formation and expression of habits. However, exploratory analyses suggest a moderating effect of the affective component of stress on the impact of training over habit expression. Participants who were lower in affective stress appeared to be initially goal-directed, but became habitual with increased training, whereas participants who were high in affective stress were already habitual even after moderate training, thereby manifesting insensitivity to overtraining effects. Our findings highlight the importance of the role of moderating variables such as individual differences in stress and anxiety when studying the experimental induction of habits in humans.","link":"/opendata/pool-et-al-2021/"},{"title":"Potter et al. (2017)","text":"Reinforcement learning theory distinguishes “model-free” learning, which fosters reflexive repetition of previously rewarded actions, from “model-based” learning, which recruits a mental model of the environment to flexibly select goal-directed actions. 
Whereas model-free learning is evident across development, recruitment of model-based learning appears to increase with age. However, the cognitive processes underlying the development of model-based learning remain poorly characterized. Here, we examined whether age-related differences in cognitive processes underlying the construction and flexible recruitment of mental models predict developmental increases in model-based choice. In a cohort of participants aged 9–25, we examined whether the abilities to infer sequential regularities in the environment (“statistical learning”), maintain information in an active state (“working memory”) and integrate distant concepts to solve problems (“fluid reasoning”) predicted age-related improvements in model-based choice. We found that age-related improvements in statistical learning performance did not mediate the relationship between age and model-based choice. Ceiling performance on our working memory assay prevented examination of its contribution to model-based learning. However, age-related improvements in fluid reasoning statistically mediated the developmental increase in the recruitment of a model-based strategy. These findings suggest that gradual development of fluid reasoning may be a critical component process underlying the emergence of model-based learning.","link":"/opendata/potter-et-al-2017/"},{"title":"Prieto et al. (2023)","text":"In developmental psychology, one of the questions that garners the attention of clinicians and experimental psychologists is caregiver bias in reports of children’s psychological problems. Different models suggested by developmental psychology (e.g., the Depression-distortion, Accuracy and Combinatory models) have discussed the relation between the mother’s objective description of a child’s mental state and the degree of bias in her report. 
Recent evidence suggests that such bias could reflect a deficit in the caregiver’s ability to access, monitor and regulate their own psychological contents. The hypothesis to investigate is whether such ability in caregivers, known as metacognition, could modulate the quality of mental attributions in the children. To this end, we recruited 102 Chilean mother-child dyads, users of the public care system in the city of Santiago. In an experimental setting, we evaluated metacognition in the mother through a perceptual metacognition task in addition to different psychopathology questionnaires (SCL-90-R & STAI) and biological reactivity to stress (TSST). By questioning the mother, we also investigated her belief about psychological symptoms in the child (CBCL 1½ - 5). Our results indicate that the severity and quantity of maternal symptoms predict maternal reports of problematic psychological states in their child only in caregivers who show low metacognitive efficiency. We conclude the article with a discussion of the link between metacognitive capacity and the ability to attribute mental states to third parties.","link":"/opendata/prieto-et-al-2023/"},{"title":"Pronk et al. (2022)","text":"Research deployed via the internet and administered via smartphones could have access to more diverse samples than lab-based research. Diverse samples could have relatively high variation in their traits and so yield relatively reliable measurements of individual differences in these traits. Several cognitive tasks that originated from the experimental research tradition have been reported to yield relatively low reliabilities (Hedge et al., 2018) in samples with restricted variance (students). This issue could potentially be addressed by smartphone-mediated administration in diverse samples. 
We formulate several criteria to determine whether a cognitive task is suitable for individual differences research on commodity smartphones: no very brief or precise stimulus timing, relative response times (RTs), a maximum of two response options, and a small number of graphical stimuli. The flanker task meets these criteria. We compared the reliability of individual differences in the flanker effect across samples and devices in a preregistered study. We found no evidence that a more diverse sample yields higher reliabilities. We also found no evidence that commodity smartphones yield lower reliabilities than commodity laptops. Hence, diverse samples might not improve reliability above student samples, but smartphones may well measure individual differences with cognitive tasks reliably. Exploratively, we examined different reliability coefficients, split-half reliabilities, and the development of reliability estimates as a function of task length.","link":"/opendata/pronk-et-al-2022/"},{"title":"Ptasczynski et al. (2022)","text":"Reinforcement learning algorithms have a long-standing success story in explaining the dynamics of instrumental conditioning in humans and other species. While normative reinforcement learning models are critically dependent on external feedback, recent findings in the field of perceptual learning point to a crucial role of internally generated reinforcement signals based on subjective confidence, when external feedback is not available. Here, we investigated the existence of such confidence-based learning signals in a key domain of reinforcement-based learning: instrumental conditioning. We conducted a value-based decision making experiment which included phases with and without external feedback and in which participants reported their confidence in addition to choices. Behaviorally, we found signatures of self-reinforcement in phases without feedback, reflected in an increase of subjective confidence and choice consistency. 
To clarify the mechanistic role of confidence in value-based learning, we compared a family of confidence-based learning models with more standard models predicting either no change in value estimates or a devaluation over time when no external reward is provided. We found that confidence-based models indeed outperformed these reference models, whereby the learning signal of the winning model was based on the prediction error between current confidence and a stimulus-unspecific average of previous confidence levels. Interestingly, individuals with more volatile reward-based value updates in the presence of feedback also showed more volatile confidence-based value updates when feedback was not available. Together, our results provide evidence that confidence-based learning signals affect instrumentally learned subjective values in the absence of external feedback.","link":"/opendata/ptasczynski-et-al-2022/"},{"title":"Pupillo et al. (2023)","text":"Predictive processing accounts propose that our brain constantly tries to match top-down internal representations with bottom-up incoming information from the environment. Predictions can lead to prediction errors of varying degrees depending on the extent to which the information encountered in the environment conforms with prior expectations. Theoretical and computational models assume that prediction errors have beneficial effects on learning and memory. However, while there is strong evidence on the effects of prediction error on learning, relatively less evidence is available regarding its effects on memory. Moreover, most of the studies available so far manipulated prediction error by using monetary rewards, whereas in everyday life learning does not always occur in the presence of explicit rewards. We used a task in which participants learned context/object-category associations of different strength based on the outcomes of their predictions. 
After learning these associations, participants were presented with trial-unique objects that could match or violate their predictions. Finally, participants were asked to complete a surprise recognition memory test. We used a reinforcement learning model to derive subject-specific trial-to-trial estimates of prediction error at encoding and link it to subsequent recognition memory. Results showed that model-derived prediction errors at encoding influenced subsequent memory as a function of the outcome of participants’ predictions (correct vs incorrect). When participants correctly predicted the object category, stronger prediction errors (as a consequence of weak expectations) led to enhanced memory. In contrast, when participants incorrectly predicted the object category, stronger prediction errors (as a consequence of strong expectations) led to impaired memory. These results reveal a computationally specific influence of prediction error on memory formation, highlighting the important moderating role of choice outcome that may be related to interactions between the hippocampal and striatal dopaminergic systems.","link":"/opendata/pupillo-et-al-2023/"},{"title":"Qin et al. (2023)","text":"A large amount of literature demonstrates that social behaviour can be triggered by environmental cues. A long-standing debate involves the question of whether such stimuli trigger behaviour directly (i.e. habits) or whether these effects are mediated by goals. As studies on automatic goal pursuit typically use real-world cues that are already associated with the behaviour and potentially the goal, it is impossible to make strong claims about the nature of the effects. In the present paper, we use a paradigm inspired by the Pavlovian-to-instrumental transfer (PIT) literature to examine how the environment can trigger goal-directed behaviour. 
Building on the essence of pro-self and pro-social motives in humans, two experiments explored the PIT effect when the outcomes were framed in terms of self- versus other-interest. Participants performed actions to earn money for themselves or a charity. Each outcome was linked to a different cue. The results showed that a cue predictive of self-interest outcomes facilitated responses instrumental in gaining the outcome, while such a specific PIT effect for other-interest outcomes only emerged when participants were free to donate the money. We briefly discuss these findings, reflecting on whether the PIT effect in our paradigm is indeed sensitive to the value of social goals.","link":"/opendata/qin-et-al-2023/"},{"title":"Raab & Hartley (2020)","text":"Multiple learning systems allow individuals to flexibly respond to opportunities and challenges present in the environment. An evolutionarily conserved Pavlovian learning mechanism couples valence and action, promoting a tendency to approach cues associated with reward and to inhibit action in the face of anticipated punishment. Although this default response system may be adaptive, these hard-wired reactions can hinder the ability to learn flexible instrumental actions in pursuit of a goal. Such constraints on behavioral flexibility have been studied extensively in adults. However, the extent to which these valence-specific response tendencies bias instrumental learning across development remains poorly characterized. Here, we show that while Pavlovian response biases constrain flexible action learning in children and adults, these biases are attenuated in adolescents. This adolescent-specific reduction in Pavlovian bias may promote unbiased exploration of approach and avoidance responses, facilitating the discovery of rewarding behavior in the many novel contexts that adolescents encounter.","link":"/opendata/raab-hartley-2020/"},{"title":"Raab et al. 
(2022)","text":"Accurate assessment of environmental controllability enables individuals to adaptively adjust their behavior - exploiting rewards when desirable outcomes are contingent upon their actions and minimizing costly deliberation when their actions are inconsequential. However, it remains unclear how estimation of environmental controllability changes from childhood to adulthood. Ninety participants (ages 8-25) completed a task that covertly alternated between controllable and uncontrollable conditions, requiring them to explore different actions to discover the current degree of environmental controllability. We found that while children were able to distinguish controllable and uncontrollable conditions, accuracy of controllability assessments improved with age. Computational modeling revealed that whereas younger participants’ controllability assessments relied on evidence gleaned through random exploration, older participants more effectively recruited their task structure knowledge to make highly informative interventions. Age-related improvements in working memory mediated this qualitative shift toward increased use of an inferential strategy. Collectively, these findings reveal an age-related shift in the cognitive processes engaged to assess environmental controllability. Improved detection of environmental controllability may foster increasingly adaptive behavior over development by revealing when actions can be leveraged for one’s benefit.","link":"/opendata/raab-et-al-2022/"},{"title":"Rac-Lubashevsky & Frank (2021)","text":"Adaptive cognitive-control involves a hierarchical cortico-striatal gating system that supports selective updating, maintenance, and retrieval of useful cognitive and motor information. 
Here, we developed a task that independently manipulates selective gating operations into working-memory (input gating), from working-memory (output gating), and of responses (motor gating) and tested the neural dynamics and computational principles that support them. Increases in gating demands, captured by gate switches, were expressed by distinct EEG correlates at each gating level that evolved dynamically in partially overlapping time windows. Further, categorical representations of specific maintained items and of motor responses could be decoded from EEG when the corresponding gate was switching, thereby linking gating operations to prioritization. Finally, gate switching at all levels was related to increases in the motor decision threshold as quantified by the drift diffusion model. Together these results support the notion that cognitive gating operations scaffold on top of mechanisms involved in motor gating.","link":"/opendata/rac-lubashevsky-frank-2021/"},{"title":"Radulescu et al. (2020)","text":"There are a number of well-accepted ways to measure risk sensitivity, with researchers often making conclusions about individual differences based on a single task. Even though long-standing observations suggest that how risky outcomes are presented changes people’s behavior, it is unclear whether risk sensitivity is a unitary trait that can be measured by any one of these instruments. To directly answer this question, we administered three tasks commonly used to elicit risk sensitivity within-subject to a large sample of participants on Amazon Mechanical Turk. Our findings revealed high individual variability in each measure, with little evidence of consistency among different tasks: many participants who were classified as risk-averse in one task were risk-seeking in another, and we observed no significant correlations between continuous measures of risk sensitivity as measured in each of the tasks. 
Our results cast doubt on the pervasive assumption that risk paradigms measure a single underlying trait, and suggest instead that behavior in risky situations is the result of heterogeneous, interacting, and possibly task-dependent cognitive mechanisms.","link":"/opendata/radulescu-et-al-2020/"},{"title":"Rahnev et al. (2020)","text":"Understanding how people rate their confidence is critical for the characterization of a wide range of perceptual, memory, motor and cognitive processes. To enable the continued exploration of these processes, we created a large database of confidence studies spanning a broad set of paradigms, participant populations and fields of study. The data from each study are structured in a common, easy-to-use format that can be easily imported and analysed using multiple software packages. Each dataset is accompanied by an explanation regarding the nature of the collected data. At the time of publication, the Confidence Database (which is available at https://osf.io/s46pr/) contained 145 datasets with data from more than 8,700 participants and almost 4 million trials. The database will remain open for new submissions indefinitely and is expected to continue to grow. Here we show the usefulness of this large collection of datasets in four different analyses that provide precise estimations of several foundational confidence-related effects.","link":"/opendata/rahnev-et-al-2020/"},{"title":"Rao & Hastie (2023)","text":"Beliefs like the Gambler's Fallacy and the Hot Hand have interested cognitive scientists, economists, and philosophers for centuries. We propose that these judgment patterns arise from the observer's mental models of the sequence-generating mechanism, moderated by the strength of belief in an a priori base rate. In six behavioral experiments, participants observed one of three mechanisms generating sequences of eight binary events: a random mechanical device, an intentional goal-directed actor, and a financial market. 
We systematically manipulated participants' beliefs about the base rate probabilities at which different outcomes were generated by each mechanism. Participants judged 18 sequences of outcomes produced by a mechanism with either an unknown base rate, a specified distribution of three equiprobable base rates, or a precise, fixed base rate. Six target sequences ended in streaks of between two and seven identical outcomes. The most common predictions for subsequent events were best described as pragmatic belief updating, expressed as an increasingly strong expectation that a streak of identical signals would repeat as the length of that streak increased. The exception to this pattern was for sequences generated by a random mechanical device with a fixed base rate of .50. Under this specific condition, participants exhibited a bias toward reversal of streaks, and this bias was larger when participants were asked to make a dichotomous choice versus a numerical probability rating. We review alternate accounts for the anomalous judgments of sequences and conclude with our favored interpretation that is based on Rabin's version of Tversky & Kahneman's Law of Small Numbers.","link":"/opendata/rao-hastie-2023/"},{"title":"Rebholz et al. (2023)","text":"In sampling approaches to advice taking, participants can sequentially sample multiple pieces of advice before making a final judgment. To contribute to the understanding of active advice seeking, we develop and compare different strategies for information integration from external sources, including Bayesian belief updating. In a reanalysis of empirical data, we find that participants most frequently compromise between their initial beliefs and the distributions of multiple pieces of advice sampled from others. Moreover, across all participants, compromising predicts their final beliefs better than choosing one of the two sources of information. 
However, participants’ willingness to integrate external opinions is relatively higher for multiple pieces of reasonably distant as compared to close advice. Nevertheless, egocentrism is as pronounced as in the traditional paradigm where only a single piece of external evidence is provided. Crucially, there are large inter- and intra-individual differences in strategy selection for sequential advice taking. On the one hand, some participants choose their own or others’ judgments more often, and other participants are better described as compromisers between internal and external sources of information. On the other hand, virtually all participants apply different advice taking strategies for different items and trials. Our findings constitute initial evidence of the adaptive utilization of multiple, sequentially sampled external opinions.","link":"/opendata/rebholz-et-al-2023/"},{"title":"Reed et al. (2020)","text":"Paranoia is the belief that harm is intended by others. It may arise from selective pressures to infer and avoid social threats, particularly in ambiguous or changing circumstances. We propose that uncertainty may be sufficient to elicit learning differences in paranoid individuals, without social threat. We used reversal learning behavior and computational modeling to estimate belief updating across individuals with and without mental illness, online participants, and rats chronically exposed to methamphetamine, an elicitor of paranoia in humans. Paranoia is associated with a stronger prior on volatility, accompanied by elevated sensitivity to perceived changes in the task environment. Methamphetamine exposure in rats recapitulates this impaired uncertainty-driven belief updating and rigid anticipation of a volatile environment. Our work provides evidence of fundamental, domain-general learning differences in paranoid individuals. 
This paradigm enables further assessment of the interplay between uncertainty and belief-updating across individuals and species.","link":"/opendata/reed-et-al-2020/"},{"title":"Reiter et al. (2021)","text":"Adolescents are prone to social influence from peers, with implications for development, both adaptive and maladaptive. Here, using a computer-based paradigm, we replicate a cross-sectional effect of greater susceptibility to peer influence in a large dataset of adolescents 14 to 24 years old. Crucially, we extend this finding by adopting a longitudinal perspective, showing that a within-person susceptibility to social influence decreases over a 1.5 year follow-up time period. Exploiting this longitudinal design, we show that susceptibility to social influences at baseline predicts an improvement in peer relations over the follow-up period. Using a Bayesian computational model, we demonstrate that in younger adolescents a greater tendency to adopt others' preferences arises out of a higher uncertainty about their own preferences in the paradigmatic case of delay discounting (a phenomenon called preference uncertainty). This preference uncertainty decreases over time and, in turn, leads to a reduced susceptibility of one's own behaviour to an influence from others. Neuro-developmentally, we show that a measure of myelination within medial prefrontal cortex, estimated at baseline, predicts a developmental decrease in preference uncertainty at follow-up. Thus, using computational and neural evidence, we reveal adaptive mechanisms underpinning susceptibility to social influence during adolescence.","link":"/opendata/reiter-et-al-2021/"},{"title":"Ren et al. (2021)","text":"The coefficient of variation (CV), also known as relative standard deviation, has been used to measure the constancy of the Weber fraction, a key signature of efficient neural coding in time perception. 
It has long been debated whether or not duration judgments follow Weber’s law, with arguments based on examinations of the CV. However, what has been largely ignored in this debate is that the observed CVs may be modulated by temporal context and decision uncertainty, thus questioning conclusions based on this measure. Here, we used a temporal reproduction paradigm to examine the variation of the CV with two types of temporal context: full-range mixed vs. sub-range blocked intervals, separately for intervals presented in the visual and auditory modalities. We found a strong contextual modulation of both interval-duration reproductions and the observed CVs. We then applied a two-stage Bayesian model to predict those variations. Without assuming a violation of the constancy of the Weber fraction, our model successfully predicted the central-tendency effect and the variation in the CV. Our findings and modeling results indicate that both the accuracy and precision of our timing behavior are highly dependent on the temporal context and decision uncertainty. And, critically, they advise caution with using variations of the CV to reject the constancy of the Weber fraction of duration estimation.","link":"/opendata/ren-et-al-2021/"},{"title":"Rhoads et al. (2023)","text":"Most prosocial and antisocial behaviors affect ourselves and others simultaneously. To know whether to repeat choices that help or harm, we must learn from their outcomes. But the neurocomputational processes supporting such simultaneous learning remain poorly understood. In this pre-registered study, two independent samples (N=89) learned to make choices that simultaneously affected themselves and another person. Detailed model comparison showed that people integrate self- and other-relevant information into a single cached value per choice, but update this value asymmetrically based on different types of prediction errors related to the target (e.g., self, other) and valence (e.g., positive, negative). 
People who acquire more prosocial patterns are more sensitive to information about how their choices affect others. However, those with higher levels of subclinical psychopathic traits are relatively insensitive to unexpected outcomes for others and more sensitive for themselves. Model-based neuroimaging revealed distinct brain regions tracking prediction errors guided by the asymmetric value update. These results demonstrate that the way people distinctly encode self- and other-relevant outcomes resulting from a particular behavior guides how desirable the same behavior will be in the future, regardless of whether it is mutually beneficial or costly, instrumentally harmful, or altruistic.","link":"/opendata/rhoads-et-al-2023/"},{"title":"Riemer & Wolbers (2020)","text":"In time reproduction tasks, the reaction time of motor responses is intrinsically linked to the measure of perceptual timing. Decisions are based on a continuous comparison between elapsed time and a memory trace of the to-be-reproduced interval. Here, we investigate the possibility that negative reproduction errors can be explained by the tendency to prefer earlier over later response times, or whether the whole range of possible response times is shifted. In experiment 1, we directly compared point reproduction (participants indicate the exact time point of equality) and range reproduction (participants bracket an interval containing this time point). In experiment 2, participants indicated, in three separate tasks, the exact time point at which the reproduction phase was equal to the standard duration (point reproduction), the earliest (start reproduction), or the latest moment (stop reproduction) at which the exact time point of equality might have been reached. The results demonstrate that the bias towards earlier responses not only affects reproduction of the exact time point of equality. 
Rather, even if the decision threshold is changed in favor of late responses, they exhibit a continuous shift towards negative errors that increases with the length of the standard duration. The findings are discussed in the context of the hypothesis that systematic errors in time reproduction tasks reflect a dimension-unspecific tendency towards earlier responses caused by the psychophysical method rather than by a time-specific perceptual distortion.","link":"/opendata/riemer-wolbers-2020/"},{"title":"Riemer et al. (2022)","text":"Chronometric counting is a prevalent issue in the study of human time perception as it reduces the construct validity of tasks and can conceal existing timing deficits. Several methods have been proposed to prevent counting strategies, but the factors promoting those strategies in specific tasks are largely uninvestigated. Here, we modified a classical two-interval duration discrimination task in two aspects that could affect the tendency to apply counting strategies. We removed the pause between the two intervals and changed the task instructions: Participants decided whether a short event occurred in the first or in the second half of a reference duration. In Experiment 1, both classical and modified task versions were performed under timing conditions, in which participants were asked not to count, and counting conditions, in which counting was explicitly instructed. The task modifications led to (i) a general decrease in judgment precision, (ii) a shift of the point of subjective equality, and (iii) a counting-related increase in reaction times, suggesting enhanced cognitive effort of counting during the modified task version. Precision in the two task versions was not differently affected by instructed counting. Experiment 2 demonstrates that-in the absence of any counting-related instructions-participants are less likely to engage in spontaneous counting in the modified task version. 
These results enhance our understanding of the two-interval duration discrimination task and demonstrate that the modifications tested here-although they do not significantly reduce the effectiveness of instructed counting-can diminish the spontaneous tendency to adopt counting strategies.","link":"/opendata/riemer-et-al-2022/"},{"title":"Rischall et al. (2022)","text":"In natural settings, people decide not only when to request information, but also which attribute of a situation to inquire about. Little is known about how participants prioritize inquiries about task-relevant features. We show that, in a new task of information demand, participants inefficiently inquired about attributes that had high individual value but were less informative about a total payoff, and these inefficiencies persisted in instrumental conditions in which they entailed significantly lower rewards. Factors contributing to inefficient information demand included a form of anticipatory utility motivated by high value individual attributes rather than the total reward, and difficulty identifying the most informative observations. Across participants, more efficient inquiries were associated with personality traits, including lower extraversion and reward sensitivity scores and higher stress tolerance and need for cognition. The results highlight new affective, cognitive and personality factors involved in prioritizing sources of information.","link":"/opendata/rischall-et-al-2022/"},{"title":"Rischall et al. (2023)","text":"In natural settings, people evaluate complex multi-attribute situations and decide which attribute to request information about. Little is known about how people make this selection and specifically, how they identify individual observations that best predict the value of a multi-attribute situation. 
Here we show that, in a simple task of information demand, participants inefficiently query attributes that have high individual value but are relatively uninformative about a total payoff. This inefficiency is robust in two instrumental conditions in which gathering less informative observations leads to significantly lower rewards. Across individuals, variations in the sensitivity to informativeness are associated with personality metrics, showing negative associations with extraversion and thrill seeking and positive associations with stress tolerance and need for cognition. Thus, people select informative queries using sub-optimal strategies that are associated with personality traits and influence consequential choices.","link":"/opendata/rischall-et-al-2023/"},{"title":"Rmus et al. (2023a)","text":"In reinforcement learning (RL) experiments, participants learn to make rewarding choices in response to different stimuli; RL models use outcomes to estimate stimulus–response values that change incrementally. RL models consider any response type indiscriminately, ranging from more concretely defined motor choices (pressing a key with the index finger), to more general choices that can be executed in a number of ways (selecting dinner at the restaurant). However, does the learning process vary as a function of the choice type? In Experiment 1, we show that it does: Participants were slower and less accurate in learning correct choices of a general format compared with learning more concrete motor actions. Using computational modeling, we show that two mechanisms contribute to this. First, there was evidence of irrelevant credit assignment: The values of motor actions interfered with the values of other choice dimensions, resulting in more incorrect choices when the correct response was not defined by a single motor action; second, information integration for relevant general choices was slower. 
In Experiment 2, we replicated and further extended the findings from Experiment 1 by showing that slowed learning was attributable to weaker working memory use, rather than slowed RL. In both experiments, we ruled out the explanation that the difference in performance between two condition types was driven by difficulty/different levels of complexity. We conclude that defining a more abstract choice space used by multiple learning systems for credit assignment recruits executive resources, limiting how much such processes then contribute to fast learning.","link":"/opendata/rmus-et-al-2023a/"},{"title":"Rmus et al. (2023b)","text":"The ability to use past experience to effectively guide decision-making declines in older adulthood. Such declines have been theorized to emerge from either impairments of striatal reinforcement learning systems (RL) or impairments of recurrent networks in prefrontal and parietal cortex that support working memory (WM). Distinguishing between these hypotheses has been challenging because either RL or WM could be used to facilitate successful decision-making in typical laboratory tasks. Here we investigated the neurocomputational correlates of age-related decision-making deficits using an RL-WM task to disentangle these mechanisms, a computational model to quantify them, and magnetic resonance spectroscopy to link them to their molecular bases. Our results reveal that task performance is worse in older age, in a manner best explained by working memory deficits, as might be expected if cortical recurrent networks were unable to sustain persistent activity across multiple trials. Consistent with this, we show that older adults had lower levels of prefrontal glutamate, the excitatory neurotransmitter thought to support persistent activity, compared to younger adults. Individuals with the lowest prefrontal glutamate levels displayed the greatest impairments in working memory after controlling for other anatomical and metabolic factors. 
Together, our results suggest that lower levels of prefrontal glutamate may contribute to failures of working memory systems and impaired decision-making in older adulthood.","link":"/opendata/rmus-et-al-2023b/"},{"title":"Roberts et al. (2022)","text":"Decades of research have established the ubiquity and importance of choice biases, such as the framing effect, yet why these seemingly irrational behaviors occur remains unknown. A prominent dual-system account maintains that alternate framings bias choices because of the unchecked influence of quick, affective processes, and findings that time pressure increases the framing effect have provided compelling support. Here, we present a novel alternative account of magnified framing biases under time pressure that emphasizes shifts in early visual attention and strategic adaptations in the decision-making process. In a preregistered direct replication (N = 40 adult undergraduates), we found that time constraints produced strong shifts in visual attention toward reward-predictive cues that, when combined with truncated information search, amplified the framing effect. Our results suggest that an attention-guided, strategic information-sampling process may be sufficient to explain prior results and raise challenges for using time pressure to support some dual-system accounts.","link":"/opendata/roberts-et-al-2022/"},{"title":"Robinson & Brady (2023)","text":"Ensemble perception is a process by which we summarize complex scenes. Despite the importance of ensemble perception to everyday cognition, there are few computational models that provide a formal account of this process. Here we develop and test a model in which ensemble representations reflect the global sum of activation signals across all individual items. We leverage this set of minimal assumptions to formally connect a model of memory for individual items to ensembles. We compare our ensemble model against a set of alternative models in five experiments. 
Our approach uses performance on a visual memory task for individual items to generate zero-free-parameter predictions of interindividual and intraindividual differences in performance on an ensemble continuous-report task. Our top-down modelling approach formally unifies models of memory for individual items and ensembles and opens a venue for building and comparing models of distinct memory processes and representations.","link":"/opendata/robinson-brady-2023/"},{"title":"Robison & Nguyen (2023)","text":"Across four experiments, we manipulated features of a simple reaction time (RT) task to examine the effects of such features on vigilance. In Experiment 1, we created a simple reaction time “game” that pitted participants against two computerized avatars. In one condition, participants were awarded points, while in the other they were not. Performance in the two conditions did not differ, but both conditions showed faster RTs and shallower time-on-task performance decrements compared to a standard psychomotor vigilance task. In Experiment 2, we removed the competitive feature but retained the point system. In this case, participants without a point system showed a steeper performance decrement than those with a point system. Experiments 3 and 4 replicated these effects and corroborated their findings with pupillometry. Participants in both conditions of Experiment 3 (competitive task) and the points condition of Experiment 4 showed larger task-evoked pupillary responses than participants in the no-points condition of Experiment 4. 
These findings challenge the notion that time-on-task performance decrements are caused by resource depletion (Smit et al., 2004), and are better explained by motivational control (Hockey, 2011) or cost-benefit theories (Boksem & Tops, 2008; Kurzban et al., 2013) of mental effort and cognitive fatigue.","link":"/opendata/robison-nguyen-2023/"},{"title":"Robison & Campbell (2023)","text":"There has been debate regarding the correlation between baseline/resting state measures of pupil diameter and cognitive abilities such as working memory capacity and fluid intelligence. A positive correlation between baseline pupil diameter and cognitive ability has been cited as evidence for a role of the locus coeruleus-norepinephrine (LC-NE) and its functional connection with cortical networks as a reason for individual differences in fluid intelligence (Tsukahara & Engle, Proceedings of the National Academy of Sciences, 118(46), e2110630118, 2021a). Several recent attempts to replicate this correlation have failed. The current studies make another attempt and find substantial evidence against a positive correlation between pupil diameter and intelligence. Given the data from the current studies in combination with other recent failures to replicate, we conclude that individual differences in baseline pupil diameter should not be used as evidence for a role of the LC-NE system in goal-directed cognitive activity.","link":"/opendata/robison-campbell-2023/"},{"title":"Rodman et al. (2023)","text":"Peer relationships and social belonging are particularly important during adolescence. Using a willingness-to-work paradigm to quantify incentive motivation, we examined whether evaluative information holds unique value for adolescents. Participants (N = 102; 12-23 years old) rated peers, predicted how peers rated them, and exerted physical effort to view each peer’s rating. 
We measured grip force, speed, and opt-out behavior to examine the motivational value of peer feedback, relative to money in a control condition, and to assess how peer desirability and participants’ expectations modulated motivated effort across age. Overall, when compared with adolescents, adults were relatively less motivated for feedback than money. Whereas adults exerted less force and speed for feedback when expecting rejection, adolescents exerted greater force and speed when expecting to be more strongly liked or disliked. These findings suggest that the transition into adulthood is accompanied by a self-protective focus, whereas adolescents are motivated to consume highly informative feedback, even if negative.","link":"/opendata/rodman-et-al-2023/"},{"title":"Rojek-Giffin et al. (2023)","text":"Globalizing economies and long-distance trade rely on individuals from different cultural groups to negotiate agreement on what to give and take. In such settings, individuals often lack insight into what interaction partners deem fair and appropriate, potentially seeding misunderstandings, frustration, and conflict. Here, we examine how individuals decipher distinct rules of engagement and adapt their behavior to reach agreements with partners from other cultural groups. Modeling individuals as Bayesian learners with inequality aversion reveals that individuals, in repeated ultimatum bargaining with responders sampled from different groups, can be more generous than needed. While this allows them to reach agreements, it also gives rise to biased beliefs about what is required to reach agreement with members from distinct groups. 
Preregistered behavioral (N = 420) and neuroimaging experiments (N = 49) support model predictions: Seeking equitable agreements can lead to overly generous behavior toward partners from different groups alongside incorrect beliefs about prevailing norms of what is appropriate in groups and cultures other than one’s own.","link":"/opendata/rojek-giffin-et-al-2023/"},{"title":"Rollwage et al. (2020)","text":"A prominent source of polarised and entrenched beliefs is confirmation bias, where evidence against one’s position is selectively disregarded. This effect is most starkly evident when opposing parties are highly confident in their decisions. Here we combine human magnetoencephalography (MEG) with behavioural and neural modelling to identify alterations in post-decisional processing that contribute to the phenomenon of confirmation bias. We show that holding high confidence in a decision leads to a striking modulation of post-decision neural processing, such that integration of confirmatory evidence is amplified while disconfirmatory evidence processing is abolished. We conclude that confidence shapes a selective neural gating for choice-consistent information, reducing the likelihood of changes of mind on the basis of new information. A central role for confidence in shaping the fidelity of evidence accumulation indicates that metacognitive interventions may help ameliorate this pervasive cognitive bias.","link":"/opendata/rollwage-et-al-2020/"},{"title":"Romero-Verdugo et al. (2023)","text":"In our connected era, we spend significant time and effort satisfying our curiosity. Often, we choose which information we seek, but sometimes the selection is made for us. We hypothesized that humans exhibit enhanced curiosity in the context of choice. We designed a task in which healthy participants saw two lotteries on each trial. On some trials, participants chose which lottery to play. On other trials, the lottery was selected for them. 
Participants then indicated their curiosity about the outcome of the to-be-played lottery via self-report ratings (Experiment 1, N = 34) or willingness-to-wait decisions (Experiment 2, N = 34). We found that participants exhibited higher curiosity ratings and greater willingness to wait for the outcome of lotteries they had chosen than for lotteries that had been selected for them (controlling for initial preference). This demonstrates that choice boosts curiosity, which may have implications for boosting learning, memory, and motivation.","link":"/opendata/romero-verdugo-et-al-2023/"},{"title":"Roseboom (2019)","text":"Recent sensory history affects subsequent experience. Behavioral results have demonstrated this effect in two forms: repeated exposure to the same sensory input produces negative aftereffects wherein sensory stimuli like those previously experienced are judged as less like the exposed stimulation, while singular exposures can produce positive aftereffects wherein judgments are more like previously experienced stimulation. For timing perception, there is controversy regarding the influence of recent exposure-both singular and repeated exposure produce apparently negative aftereffects-often referred to as temporal recalibration and rapid temporal recalibration, respectively. While negative aftereffects have been found following repeated exposure for all timing tasks, following a single exposure, they have only been demonstrated using synchrony judgments (SJs). Here, we examine the influence of a single presentation-serial dependence for timing-for standard timing tasks: SJ, temporal order judgments, and magnitude estimation judgments. We found that serial dependence produced apparently negative aftereffects in SJ, but positive aftereffects in temporal order judgment and magnitude estimation judgment. 
We propose that these findings, and those following repeated exposure, can be reconciled within a framework wherein negative aftereffects occur at sensory layers, consistent with classical depictions of sensory adaptation, and Bayesian-like positive aftereffects operate across different, higher, decision levels. These findings are consistent with the aftereffects known from other perceptual dimensions and provide a general framework for interpreting positive (serial dependence) and negative (sensory adaptation) aftereffects across different tasks.","link":"/opendata/roseboom-2019/"},{"title":"Rosenbaum et al. (2022)","text":"As individuals learn through trial and error, some are more influenced by good outcomes, while others weight bad outcomes more heavily. Such valence biases may also influence memory for past experiences. Here, we examined whether valence asymmetries in reinforcement learning change across adolescence, and whether individual learning asymmetries bias the content of subsequent memory. Participants ages 8-27 learned the values of point machines, after which their memory for trial-unique images presented with choice outcomes was assessed. Relative to children and adults, adolescents overweighted worse-than-expected outcomes during learning. Individuals' valence biases modulated incidental memory, such that those who prioritized worse- (or better-) than-expected outcomes during learning were also more likely to remember images paired with these outcomes, an effect reproduced in an independent dataset. Collectively, these results highlight age-related changes in the computation of subjective value and demonstrate that a valence-asymmetric valuation process influences how information is prioritized in episodic memory.","link":"/opendata/rosenbaum-et-al-2022/"},{"title":"Rossi-Goldthorpe et al. (2021)","text":"Self-deception, paranoia, and overconfidence involve misbeliefs about the self, others, and world. They are often considered mistaken. 
Here we explore whether they might be adaptive, and further, whether they might be explicable in Bayesian terms. We administered a difficult perceptual judgment task with and without social influence (suggestions from a cooperating or competing partner). Crucially, the social influence was uninformative. We found that participants heeded the suggestions most under the most uncertain conditions and that they did so with high confidence, particularly if they were more paranoid. Model fitting to participant behavior revealed that their prior beliefs changed depending on whether the partner was a collaborator or competitor; however, those beliefs did not differ as a function of paranoia. Instead, paranoia, self-deception, and overconfidence were associated with participants’ perceived instability of their own performance. These data are consistent with the idea that self-deception, paranoia, and overconfidence flourish under uncertainty, and have their roots in low self-esteem, rather than excessive social concern. The model suggests that spurious beliefs can have value-self-deception is irrational yet can facilitate optimal behavior. This occurs even at the expense of monetary rewards, perhaps explaining why self-deception and paranoia contribute to costly decisions which can spark financial crashes and devastating wars.","link":"/opendata/rossi-goldthorpe-et-al-2021/"},{"title":"Rouhani et al. (2018)","text":"Reward-prediction errors track the extent to which rewards deviate from expectations, and aid in learning. How do such errors in prediction interact with memory for the rewarding episode? Existing findings point to both cooperative and competitive interactions between learning and memory mechanisms. Here, we investigated whether learning about rewards in a high-risk context, with frequent, large prediction errors, would give rise to higher fidelity memory traces for rewarding events than learning in a low-risk context. 
Experiment 1 showed that recognition was better for items associated with larger absolute prediction errors during reward learning. Larger prediction errors also led to higher rates of learning about rewards. Interestingly, we did not find a relationship between learning rate for reward and recognition-memory accuracy for items, suggesting that these two effects of prediction errors were caused by separate underlying mechanisms. In Experiment 2, we replicated these results with a longer task that posed stronger memory demands and allowed for more learning. We also showed improved source and sequence memory for items within the high-risk context. In Experiment 3, we controlled for the difficulty of reward learning in the risk environments, again replicating the previous results. Moreover, this control revealed that the high-risk context enhanced item-recognition memory beyond the effect of prediction errors. In summary, our results show that prediction errors boost both episodic item memory and incremental reward learning, but the two effects are likely mediated by distinct underlying systems.","link":"/opendata/rouhani-et-al-2018/"},{"title":"Rouhani & Niv (2021)","text":"Memory helps guide behavior, but which experiences from the past are prioritized? Classic models of learning posit that events associated with unpredictable outcomes as well as, paradoxically, predictable outcomes, deploy more attention and learning for those events. Here, we test reinforcement learning and subsequent memory for those events, and treat signed and unsigned reward prediction errors (RPEs), experienced at the reward-predictive cue or reward outcome, as drivers of these two seemingly contradictory signals. By fitting reinforcement learning models to behavior, we find that both RPEs contribute to learning by modulating a dynamically changing learning rate. 
We further characterize the effects of these RPE signals on memory and show that both signed and unsigned RPEs enhance memory, in line with midbrain dopamine and locus-coeruleus modulation of hippocampal plasticity, thereby reconciling separate findings in the literature.","link":"/opendata/rouhani-niv-2021/"},{"title":"Ruggeri et al. (2020)","text":"Prospect theory is among the most influential frameworks in behavioural science, specifically in research on decision-making under risk. Kahneman and Tversky’s 1979 study tested financial choices under risk, concluding that such judgements deviate significantly from the assumptions of expected utility theory, which had remarkable impacts on science, policy and industry. Though substantial evidence supports prospect theory, many presumed canonical theories have drawn scrutiny for recent replication failures. In response, we directly test the original methods in a multinational study (n = 4,098 participants, 19 countries, 13 languages), adjusting only for current and local currencies while requiring all participants to respond to all items. The results replicated for 94% of items, with some attenuation. Twelve of 13 theoretical contrasts replicated, with 100% replication in some countries. Heterogeneity between countries and intra-individual variation highlight meaningful avenues for future theorizing and applications. We conclude that the empirical foundations for prospect theory replicate beyond any reasonable thresholds.","link":"/opendata/ruggeri-et-al-2020/"},{"title":"Ruggeri et al. (2022)","text":"Economic inequality is associated with preferences for smaller, immediate gains over larger, delayed ones. Such temporal discounting may feed into rising global inequality, yet it is unclear whether it is a function of choice preferences or norms, or rather the absence of sufficient resources for immediate needs. It is also not clear whether these reflect true differences in choice patterns between income groups. 
We tested temporal discounting and five intertemporal choice anomalies using local currencies and value standards in 61 countries (N = 13,629). Across a diverse sample, we found consistent, robust rates of choice anomalies. Lower-income groups were not significantly different, but economic inequality and broader financial circumstances were clearly correlated with population choice patterns.","link":"/opendata/ruggeri-et-al-2022/"},{"title":"Rutledge et al. (2021)","text":"The subjective well-being or happiness of individuals is an important metric for societies. Although happiness is influenced by life circumstances and population demographics such as wealth, we know little about how the cumulative influence of daily life events are aggregated into subjective feelings. Using computational modeling, we show that emotional reactivity in the form of momentary happiness in response to outcomes of a probabilistic reward task is explained not by current task earnings, but by the combined influence of recent reward expectations and prediction errors arising from those expectations. The robustness of this account was evident in a large-scale replication involving 18,420 participants. Using functional MRI, we show that the very same influences account for task-dependent striatal activity in a manner akin to the influences underpinning changes in happiness.","link":"/opendata/rutledge-et-al-2021/"},{"title":"Rybicki et al. (2022)","text":"Some theories of human cultural evolution posit that humans have social-specific learning mechanisms that are adaptive specialisations moulded by natural selection to cope with the pressures of group living. However, the existence of neurochemical pathways that are specialised for learning from social information and individual experience is widely debated. 
Cognitive neuroscientific studies present mixed evidence for social-specific learning mechanisms: some studies find dissociable neural correlates for social and individual learning, whereas others find the same brain areas and dopamine-mediated computations involved in both. Here, we demonstrate that, like individual learning, social learning is modulated by the dopamine D2 receptor antagonist haloperidol when social information is the primary learning source, but not when it comprises a secondary, additional element. Two groups (total N = 43) completed a decision-making task which required primary learning, from own experience, and secondary learning from an additional source. For one group, the primary source was social, and secondary was individual; for the other group this was reversed. Haloperidol affected primary learning irrespective of social/individual nature, with no effect on learning from the secondary source. Thus, we illustrate that dopaminergic mechanisms underpinning learning can be dissociated along a primary-secondary but not a social-individual axis. These results resolve conflict in the literature and support an expanding field showing that, rather than being specialised for particular inputs, neurochemical pathways in the human brain can process both social and non-social cues and arbitrate between the two depending upon which cue is primarily relevant for the task at hand.","link":"/opendata/rybicki-et-al-2022/"},{"title":"Sadeghi et al. (2023)","text":"The role of the heart in the experience of time has been long theorized but empirical evidence is scarce. Here, we examined the interaction between fine-grained cardiac dynamics and the momentary experience of subsecond intervals. Participants performed a temporal bisection task for brief tones (80-188 ms) synchronized with the heart. We developed a cardiac Drift-Diffusion Model (cDDM) that embedded contemporaneous heart rate dynamics into the temporal decision model. 
Results revealed the existence of temporal wrinkles-dilation or contraction of short intervals-in synchrony with cardiac dynamics. A lower prestimulus heart rate was associated with an initial bias in encoding the millisecond-level stimulus duration as longer, consistent with facilitation of sensory intake. Concurrently, a higher prestimulus heart rate aided more consistent and faster temporal judgments through more efficient evidence accumulation. Additionally, a higher speed of poststimulus cardiac deceleration, a bodily marker of attention, was associated with a greater accumulation of sensory temporal evidence in the cDDM. These findings suggest a unique role of cardiac dynamics in the momentary experience of time. Our cDDM framework opens a new methodological avenue for investigating the role of the heart in time perception and perceptual judgment.","link":"/opendata/sadeghi-et-al-2023/"},{"title":"Sadeghiyeh et al. (2020)","text":"The explore-exploit dilemma describes the trade-off that occurs any time we must choose between exploring unknown options and exploiting options we know well. Implicit in this trade-off is how we value future rewards - exploiting is usually better in the short term, but in the longer term the benefits of exploration can be huge. Thus, in theory there should be a tight connection between how much people value future rewards, i.e. how much they discount future rewards relative to immediate rewards, and how likely they are to explore, with less ‘temporal discounting’ associated with more exploration. By measuring individual differences in temporal discounting and correlating them with explore-exploit behavior, we tested whether this theoretical prediction holds in practice. 
We used the 27-item Delay-Discounting Questionnaire to estimate temporal discounting and the Horizon Task to quantify two strategies of explore-exploit behavior: directed exploration, where information drives exploration by choice, and random exploration, where behavioral variability drives exploration by chance. We find a clear correlation between temporal discounting and directed exploration, with more temporal discounting leading to less directed exploration. Conversely, we find no relationship between temporal discounting and random exploration. Unexpectedly, we find that the relationship with directed exploration appears to be driven by a correlation between temporal discounting and uncertainty seeking at short time horizons, rather than information seeking at long horizons. Taken together our results suggest a nuanced relationship between temporal discounting and explore-exploit behavior that may be mediated by multiple factors.","link":"/opendata/sadeghiyeh-et-al-2020/"},{"title":"Sadibolova et al. (2022)","text":"The perception of time is characterized by pronounced variability across individuals, with implications for a diverse array of psychological functions. The neurocognitive sources of this variability are poorly understood, but accumulating evidence suggests a role for inter-individual differences in striatal dopamine levels. Here we present a pre-registered study that tested the predictions that spontaneous eyeblink rates, which provide a proxy measure of striatal dopamine availability, would be associated with aberrant interval timing (lower temporal precision or overestimation bias). Neurotypical adults (N = 69) underwent resting state eye tracking and completed visual psychophysical interval timing and control tasks. Elevated spontaneous eyeblink rates were associated with poorer temporal precision but not with inter-individual differences in perceived duration or performance on the control task. 
These results signify a role for striatal dopamine in variability in human time perception and can help explain deficient temporal precision in psychiatric populations characterized by elevated dopamine levels.","link":"/opendata/sadibolova-et-al-2022/"},{"title":"Sadibolova et al. (2023)","text":"One of the ongoing controversies in interval timing concerns whether human time perception relies on multiple distinct mechanisms. This debate centres around whether subsecond and suprasecond timing may be attributed to a single semi-uniform mechanism or separate and interacting mechanisms. Whereas past studies offer valuable insights, this study overcomes previous limitations by adopting multiple convergent statistical approaches in a design with strong statistical power. We conducted two online experiments involving participants reproducing temporal intervals ranging from 400 to 2400ms (Experiment 1; N=302) and 1000 to 2000ms (Experiment 2; N=302). We contrasted the application of exploratory factor analysis and structural equation modelling to differentiate distinct latent structures underlying duration reproduction patterns. Additionally, we compared the model outcomes with results from changepoint analysis models fitted to individual participants’ data. In both experiments, these analyses yielded evidence for a two-factor model comprising a general timing factor spanning the full interval range and a second factor capturing the regression to the mean of presented stimulus intervals (central tendency bias). We observed a low proportion of detected changepoints, further supporting the limited evidence for a discontinuity between the distinct underlying mechanisms, while also finding that the changepoint detection was predicted by factor scores. These results highlight the role of central tendency effects in investigating changepoints and the continuity of timing systems. 
Our work contributes to integrating factor analytic and computational modelling approaches in the study of time perception and has implications for the measurement and interpretation of interval timing performance in a range of contexts.","link":"/opendata/sadibolova-et-al-2023/"},{"title":"Saeedpour et al. (2023)","text":"Pavlovian influences impair instrumental learning. It is easier to learn to approach reward-predictive signals and avoid punishment-predictive cues than their contrary. Whether the interindividual variability in this Pavlovian influence is consistent across time has been examined by a number of recent studies and met with mixed results. Here we introduce an open-source, web-based instance of a well-established Go-NoGo paradigm for measuring Pavlovian influence. We closely replicated the previous laboratory-based results. Moreover, the interindividual differences in Pavlovian influence were consistent across a two-week time window at the level of (i) raw measures of learning (i.e. performance accuracy), (ii) linear, descriptive estimates of Pavlovian bias (test-retest reliability: 0.40), and (iii) parameters obtained from reinforcement learning model fitting and model selection (test-retest reliability: 0.25). Nonetheless, the correlations reported here are still lower than the standards (i.e. 0.7) employed in psychometrics and self-reported measures. Our results provide support for trusting Pavlovian bias as a relatively stable individual characteristic and for using its measure in the computational understanding of human mental health.","link":"/opendata/saeedpour-et-al-2023/"},{"title":"Safra et al. (2019)","text":"Depression is characterized by a marked decrease in social interactions and blunted sensitivity to rewards. Surprisingly, despite the importance of social deficits in depression, non-social aspects have been disproportionally investigated. 
As a consequence, the cognitive mechanisms underlying atypical decision-making in social contexts in depression are poorly understood. In the present study, we investigate whether deficits in reward processing interact with the social context and how this interaction is affected by self-reported depression and anxiety symptoms in the general population. Two cohorts of subjects (discovery and replication sample: N = 50 each) took part in an experiment involving reward learning in contexts with different levels of social information (absent, partial and complete). Behavioral analyses revealed a specific detrimental effect of depressive symptoms-but not anxiety-on behavioral performance in the presence of social information, i.e. when participants were informed about the choices of another player. Model-based analyses further characterized the computational nature of this deficit as a negative audience effect, rather than a deficit in the way others’ choices and rewards are integrated in decision making. To conclude, our results shed light on the cognitive and computational mechanisms underlying the interaction between social cognition, reward learning and decision-making in depressive disorders.","link":"/opendata/safra-et-al-2019/"},{"title":"Sakamoto & Miyoshi (2023)","text":"Human behavior is flexibly regulated in accordance with specific goals of cognitive tasks. One notable instance is the goal-directed modulation of human metacognitive behavior, where logically equivalent decision-making problems can yield different patterns of introspective confidence depending on the frame in which they are presented. While this observation highlights the important heuristic nature of metacognitive monitoring, the computational mechanisms of this phenomenon remain elusive. Using a two-alternative dot-number discrimination task, we aimed to investigate the underlying computational rules of the confidence framing effect. 
Participants made decisions under “choose more” or “choose less” frames and reported their confidence. We replicated the previous observation of the confidence framing effect, demonstrating distinctive confidence rating patterns between the decision frames. Furthermore, formal model comparisons revealed that the confidence framing effect can be attributed to a simple heuristic that assigns greater weight to the evidence for the chosen alternative compared to that for the unchosen alternative. This computation appears to be based on internal decisional evidence constituted under specific task demands rather than physical stimulus intensity itself, a view justified in terms of ecological rationality. These results shed fresh light on the adaptive nature of human decision-making and metacognitive monitoring.","link":"/opendata/sakamoto-miyoshi-2023/"},{"title":"Salomon et al. (2022)","text":"Although research about preference formation and modification has classically focused on the role of external reinforcements, there is also increasing evidence for a key role of non-externally reinforced cognitive mechanisms such as attention and memory in preference modification. In a novel paradigm for behavioral change called the Cue-Approach training (CAT) task, preferences are modified via the mere association of images of stimuli with a neutral cue and a rapid motor response, without external reinforcements. The procedure’s efficacy has been replicated across dozens of studies, and the net behavioral change was linked with increased activity in a frontal value-based decision-making brain region during the post-training probe choice phase. However, the cognitive mechanisms during the training phase itself have not been elucidated. Based on the structure of the task alongside recent findings of the involvement of striatal and supplementary motor regions during training, we hypothesized that a motor-related learning process could be a prospective candidate. 
To test this hypothesis, we developed a computational model of the motor response pattern during training in a large corpus of data collected from 864 participants across 29 different CAT experiments. Using Bayesian modeling of the meta-analysis data, we developed a computational marker for individualized learning in the training task, which was found to be associated with the preference modification effect in the subsequent probe task, both at the participant-level as well as in the more granular individual-item level. Following the conclusions of the meta-analysis, in two additional experiments (a pilot study and a larger preregistered replication study) we aimed to affect learning efficacy by manipulating the training procedure difficulty. As hypothesized and preregistered, training difficulty was captured by the new computational marker identified on the previously collected samples. Manipulation of the training difficulty also resulted in a differential preference modification effect, suggesting a causal relationship between the motor learning captured by the computational model and the post-training behavioral change effect. Our work highlights a novel non-reinforced preference modification pathway, suggesting that attention and motor learning are linked to preference formation, and provides a computational framework to identify individualized training markers which could predict future behavioral change effects.","link":"/opendata/salomon-et-al-2022/"},{"title":"Sax et al. (2023)","text":"Depression is believed to hinder one’s ability to reason about oneself (metacognition). This impairment can arise from dysfunctional biases and/or learning processes. However, the relationship between depression, biases and learning in metacognition is not known. Here we combined multi-trial behavioural experiments with computational modelling to explicitly test whether depression impacts biases and/or learning in a metacognitive task. 
First, using a perceptual estimation task with fixed feedback valence (N=131), we show that depressive symptoms predict negative metacognitive biases but do not impact learning. Next, we tested the generality of our results in a more realistic perceptual estimation task where we varied the valence of the feedback. Using a Rescorla-Wagner model of confidence fitted to behavioural data (N=74), we show that also in this task, depressive symptoms predict negative metacognitive biases but do not impair learning. Overall, our study suggests that depression impacts metacognitive states but not one’s ability to learn while offering a behavioural-computational framework for the study of metacognition in depression.","link":"/opendata/sax-et-al-2023/"},{"title":"Schaaf et al. (2023)","text":"Recently it has been suggested that parameter estimates of computational models can be used to understand individual differences at the process level. One area of research in which this approach, called computational phenotyping, took hold is computational psychiatry, but it is also used to understand differences in age and personality. One requirement for successful computational phenotyping is that behavior and parameters are stable over time. Surprisingly, the test-retest reliability of behavior and model parameters remains unknown for most experimental tasks and models. The present study seeks to close this gap by investigating the test-retest reliability of canonical reinforcement learning models in the context of two often-used learning paradigms: a two-armed bandit and a reversal learning task. We tested independent cohorts for the two tasks (N=142 and N=154) via an online testing platform with a between-test interval of five weeks. Whereas reliability was high for personality and cognitive measures, it was generally poor for the parameter estimates of the reinforcement learning models. 
Given that simulations indicated that our procedures could detect high test-retest reliability, this suggests that a significant proportion of the variability must be ascribed to the participants themselves. In support of that hypothesis, we show that mood (stress and happiness) can partly explain within-subject variability. Taken together, these results are critical for current practices in computational phenotyping and suggest that individual variability should be taken into account in the future development of the field.","link":"/opendata/schaaf-et-al-2023/"},{"title":"Schöbel et al. (2016)","text":"People often make decisions in a social environment. The present work examines social influence on people’s decisions in a sequential decision-making situation. In the first experimental study, we implemented an information cascade paradigm, illustrating that people infer information from decisions of others and use this information to make their own decisions. We followed a cognitive modeling approach to elicit the weight people give to social as compared to private individual information. The proposed social influence model shows that participants overweight their own private information relative to social information, contrary to the normative Bayesian account. In our second study, we embedded the abstract decision problem of Study 1 in a medical decision-making problem. We examined whether in a medical situation people also take others’ authority into account in addition to the information that their decisions convey. The social influence model illustrates that people weight social information differentially according to the authority of other decision makers. The influence of authority was strongest when an authority’s decision contrasted with private information. Both studies illustrate how the social environment provides sources of information that people integrate differently for their decisions.","link":"/opendata/schobel-et-al-2016/"},{"title":"Scholl et al. 
(2022)","text":"Real-life decision-making often comprises sequences of successive decisions about whether to take opportunities as they are encountered or keep searching for better ones instead. We investigated individual differences related to such sequential decision-making and link them especially to apathy and compulsivity in a large online sample (discovery sample: n = 449 and confirmation sample: n = 756). Our cognitive model revealed distinct changes in the way participants evaluated their environments and planned their own future behaviour. Apathy was linked to decision inertia, i.e., automatically persisting with a sequence of searches for longer than appropriate given the value of searching. Thus, despite being less motivated, they did not avoid the effort associated with longer searches. In contrast, compulsivity was linked to self-reported insensitivity to the cost of continuing with a sequence of searches. The objective measures of behavioural cost insensitivity were clearly linked to compulsivity only in the discovery sample. While the confirmation sample showed a similar effect, it did not reach significance. Nevertheless, in both samples, participants reported awareness of such bias (experienced as overchasing). In addition, this awareness made them report preemptively avoiding situations related to the bias. However, we found no evidence of them actually preempting more in the task, which might mean a misalignment of their metacognitive beliefs or that our behavioural measures were incomplete. In summary, individual variation in distinct, fundamental aspects of sequential decision-making can be linked to variation in 2 measures of behavioural traits associated with psychological illness in the normal population.","link":"/opendata/scholl-et-al-2022/"},{"title":"Schubert et al. 
(2023)","text":"The rise of large-scale collaborative panel studies in educational psychology and cognitive neuroscience has generated a need for fast, reliable, and valid assessments of cognitive abilities. In these studies, a detailed characterization of participants’ cognitive abilities is often unnecessary. Tests are chosen based on their ease of use and the duration and feasibility of their administration. These demands often result in the use of abbreviated measures or even related proxies, potentially compromising the reliabilities and validities of those measures. The present study evaluates the usefulness of the mini-q (Baudson & Preckel, 2016), a three-minute speeded reasoning test, as a brief assessment of general cognitive abilities in large-scale panel studies in a sample of 140 participants from diverse educational and occupational backgrounds. Participants’ test performance showed an excellent reliability and was substantially related (r = .57) to their general cognitive abilities measured with a broad test battery, supporting the test’s potential as a short screening of cognitive abilities. The largest share (54 %) of the relationship between test performance and general cognitive abilities was accounted for by participants’ working memory capacity, whereas individual differences in processing speed did not account for any part of the relationship between the two measures. Overall, our results support the notion that the mini-q can be used as a brief, reliable, and valid assessment of general cognitive abilities. However, possible disadvantages of participants with different native languages should be carefully considered due to the test’s reliance on verbal abilities.","link":"/opendata/schubert-et-al-2023/"},{"title":"Schultz et al. (2023)","text":"Reward improves memory through both encoding and consolidation processes. 
In this pre-registered study, we tested whether reward effects on memory generalize from rewarded items to unrewarded but episodically-related items. 59 human volunteers incidentally encoded associations between unique objects and repeated scenes. Some scenes typically yielded high reward, whereas others typically yielded low reward. Memory was tested immediately after encoding (n=29) or the next day (n=30). Overall, reward had only a limited influence on memory. It neither enhanced consolidation, nor did its effect generalize to episodically related stimuli. We thus contribute to understanding the boundary conditions of reward effects on memory.","link":"/opendata/schultz-et-al-2023/"},{"title":"Schulz et al. (2019)","text":"How do children and adults differ in their search for rewards? We considered three different hypotheses that attribute developmental differences to (a) children’s increased random sampling, (b) more directed exploration toward uncertain options, or (c) narrower generalization. Using a search task in which noisy rewards were spatially correlated on a grid, we compared the ability of 55 younger children (ages 7 and 8 years), 55 older children (ages 9-11 years), and 50 adults (ages 19-55 years) to successfully generalize about unobserved outcomes and balance the exploration-exploitation dilemma. Our results show that children explore more eagerly than adults but obtain lower rewards. We built a predictive model of search to disentangle the unique contributions of the three hypotheses of developmental differences and found robust and recoverable parameter estimates indicating that children generalize less and rely on directed exploration more than adults. We did not, however, find reliable differences in terms of random sampling.","link":"/opendata/schulz-et-al-2019/"},{"title":"Schurr et al. (2023)","text":"Computational phenotyping has emerged as a powerful tool for characterizing individual variability across a variety of cognitive domains. 
An individual’s computational phenotype is defined as a set of mechanistically interpretable parameters obtained from fitting computational models to behavioral data. However, the interpretation of these parameters hinges critically on their psychometric properties, which are rarely studied. In order to identify the sources governing the temporal variability of the computational phenotype, we carried out a 12-week longitudinal study using a battery of seven tasks that measure aspects of human learning, memory, perception, and decision making. To examine the influence of state-like effects, each week participants provided reports tracking their mood, habits and daily activities. We developed a dynamic computational phenotyping framework, which allowed us to tease apart the time-varying effects of practice and internal states such as affective valence and arousal. Our results show that many phenotype dimensions covary with practice and affective factors, indicating that what appears to be unreliability may reflect previously unmeasured structure. These results support a fundamentally dynamic understanding of cognitive variability within an individual.","link":"/opendata/schurr-et-al-2023/"},{"title":"Sedlinská et al. (2022)","text":"Pavlovian bias is an innate motivational tendency to approach rewards and remain passive in the face of punishment. The relative reliance on Pavlovian valuation has been found to increase when the perceived control over environmental reinforcers is compromised, leading to behavior resembling learned helplessness (LH). In our study, we used a version of an orthogonalized Go-NoGo reinforcement learning task to examine the relative reliance on Pavlovian and instrumental valuation during and after an intermittent loss of control over rewards and losses. 
Sixty healthy young adults underwent the task and received anodal high-definition transcranial direct current stimulation (HD-tDCS) over the medial prefrontal/dorsal anterior cingulate cortex in a randomized, double-blind, sham-controlled study. Furthermore, we evaluated changes in cue-locked mid-frontal theta power derived from electroencephalography. We hypothesized that active stimulation would reduce Pavlovian bias during manipulation of outcome controllability, and the effect would be accompanied by stronger mid-frontal theta activity, representing arbitration between choice strategies in favor of instrumental relative to Pavlovian valuation. We found a progressive decrease in Pavlovian bias during and after the loss of control over feedback. Active HD-tDCS counteracted this effect while not affecting the mid-frontal theta signal. The results were at odds with our hypotheses but also with previous findings reporting LH-like patterns during and after the loss of control without brain stimulation. The discrepancy may be related to different protocols for the controllability manipulation. We argue that the subjective evaluation of task controllability is crucial in mediating the balance between Pavlovian and instrumental valuation during reinforcement learning and that the medial prefrontal/dorsal anterior cingulate cortex is a key region in this respect. These findings have implications for understanding the behavioral and neural underpinnings of LH in humans.","link":"/opendata/sedlinska-et-al-2022/"},{"title":"Seow et al. (2020)","text":"Alterations in error processing are implicated in a range of DSM-defined psychiatric disorders. For instance, obsessive-compulsive disorder (OCD) and generalised anxiety disorder show enhanced electrophysiological responses to errors, i.e., the error-related negativity (ERN), while others like schizophrenia have an attenuated ERN. 
However, as diagnostic categories in psychiatry are heterogeneous and also highly intercorrelated, the precise mapping of ERN enhancements/impairments is unclear. To address this, we recorded electroencephalograms (EEG) from 196 participants who performed the Flanker task and collected scores on 9 questionnaires assessing psychiatric symptoms to test if a dimensional framework could reveal specific transdiagnostic clinical manifestations of error processing dysfunctions. Contrary to our hypothesis, we found non-significant associations between ERN amplitude and symptom severity of OCD, trait anxiety, depression, social anxiety, impulsivity, eating disorders, alcohol addiction, schizotypy and apathy. A transdiagnostic approach did nothing to improve signal; there were non-significant associations between all three transdiagnostic dimensions (anxious-depression, compulsive behaviour and intrusive thought, and social withdrawal) and ERN magnitude. In these same individuals, we replicated a previously published transdiagnostic association between goal-directed learning and compulsive behaviour and intrusive thought. Possible explanations discussed are (i) that associations between the ERN and psychopathology might be smaller than previously assumed, (ii) that these associations might depend on a greater level of symptom severity than other transdiagnostic cognitive biomarkers, or (iii) that task parameters, such as the ratio of compatible to incompatible trials, might be crucial for ensuring the sensitivity of the ERN to clinical phenomena.","link":"/opendata/seow-et-al-2020/"},{"title":"Seow et al. (2021)","text":"Compulsive individuals have deficits in model-based planning, but the mechanisms that drive this have not been established. We examined two candidates-that compulsivity is linked to (1) an impaired model of the task environment and/or (2) an inability to engage cognitive control when making choices. 
To test this, 192 participants performed a two-step reinforcement learning task with concurrent EEG recordings, and we related the neural and behavioral data to their scores on a self-reported transdiagnostic dimension of compulsivity. To examine subjects’ internal model of the task, we used established behavioral and neural responses to unexpected events [reaction time (RT) slowing, P300 wave, and parietal-occipital alpha band power] measured when an unexpected transition occurred. To assess cognitive control, we probed theta power at the time of initial choice. As expected, model-based planning was linked to greater behavioral (RT) and neural (alpha power, but not P300) sensitivity to rare transitions. Critically, the sensitivities of both RT and alpha to task structure were weaker in those high in compulsivity. This RT-compulsivity effect was tested and replicated in an independent pre-existing dataset (N = 1413). We also found that mid-frontal theta power at the time of choice was reduced in highly compulsive individuals though its relation to model-based planning was less pronounced. These data suggest that model-based planning deficits in compulsive individuals may arise, at least in part, from having an impaired representation of the environment, specifically how actions lead to future states.","link":"/opendata/seow-et-al-2021/"},{"title":"Shahar et al. (2019)","text":"A well-established notion in cognitive neuroscience proposes that multiple brain systems contribute to choice behaviour. These include: (1) a model-free system that uses values cached from the outcome history of alternative actions, and (2) a model-based system that considers action outcomes and the transition structure of the environment. The widespread use of this distinction, across a range of applications, renders it important to index their distinct influences with high reliability. 
Here we examine the two-stage task, widely considered a gold standard measure for the contribution of model-based and model-free systems to human choice. We tested the internal/temporal stability of measures from this task, including those estimated via an established computational model, as well as an extended model using drift-diffusion. Drift-diffusion modeling suggested that both choice in the first stage, and RTs in the second stage, are directly affected by a model-based/free trade-off parameter. Both parameter recovery and the stability of model-based estimates were poor but improved substantially when both choice and RT were used (compared to choice only), and when more trials (than conventionally used in research practice) were included in our analysis. The findings have implications for interpretation of past and future studies based on the use of the two-stage task, as well as for characterising the contribution of model-based processes to choice behaviour.","link":"/opendata/shahar-et-al-2019/"},{"title":"Sharp et al. (2022)","text":"Managing multiple goals is essential to adaptation, yet we are only beginning to understand computations by which we navigate the resource demands entailed in so doing. Here, we sought to elucidate how humans balance reward seeking and punishment avoidance goals, and relate this to variation in its expression within anxious individuals. To do so, we developed a novel multigoal pursuit task that includes trial-specific instructed goals to either pursue reward (without risk of punishment) or avoid punishment (without the opportunity for reward). We constructed a computational model of multigoal pursuit to quantify the degree to which participants could disengage from the pursuit goals when instructed to, as well as devote less model-based resources toward goals that were less abundant. In general, participants (n = 192) were less flexible in avoiding punishment than in pursuing reward. 
Thus, when instructed to pursue reward, participants often persisted in avoiding features that had previously been associated with punishment, even though at decision time these features were unambiguously benign. In a similar vein, participants showed no significant downregulation of avoidance when punishment avoidance goals were less abundant in the task. Importantly, we show preliminary evidence that individuals with chronic worry may have difficulty disengaging from punishment avoidance when instructed to seek reward. Taken together, the findings demonstrate that people avoid punishment less flexibly than they pursue reward. Future studies should test in larger samples whether a difficulty to disengage from punishment avoidance contributes to chronic worry.","link":"/opendata/sharp-et-al-2022/"},{"title":"Sharp & Eldar (2023)","text":"Leading models of human planning posit that planning relies on learned forward predictions, from a given state to the outcomes that typically follow it. Here, however, we show that in many situations planning can be made more efficient by relying on backward predictions, from a given outcome to the states that typically precede it. This holds specifically in environments where the number of states an agent may occupy increases with time (i.e., in diverging environments), because in such environments, backward predictions can be more compactly represented than forward predictions. Correspondingly, in three preregistered experiments, we find that humans engage in backward learning and planning in a diverging environment, and in forward learning and planning in a converging environment. We validate the applicability of these findings to real-life learning in a large-scale real-world prediction task. 
Thus, we establish that humans adaptively deploy forward and backward learning in the service of efficient planning, and this changes how they plan and what decisions they reach.","link":"/opendata/sharp-eldar-2023/"},{"title":"Sheffield et al. (2023)","text":"Multiple measures of decision-making under uncertainty (e.g. jumping to conclusions (JTC), bias against disconfirmatory evidence (BADE), win-switch behavior, random exploration) have been associated with delusional thinking in independent studies. Yet, it is unknown whether these variables explain shared or unique variance in delusional thinking, and whether these relationships are specific to paranoia or delusional ideation more broadly. Additionally, the underlying computational mechanisms require further investigation. To investigate these questions, task and self-report data were collected in 88 individuals (46 healthy controls, 42 schizophrenia-spectrum) and included measures of cognitive biases and behavior on probabilistic reversal learning and explore/exploit tasks. Of those, only win-switch rate significantly differed between groups. In regression, reversal learning performance, random exploration, and poor evidence integration during BADE showed significant, independent associations with paranoia. Only self-reported JTC was associated with delusional ideation, controlling for paranoia. Computational parameters increased the proportion of variance explained in paranoia. Overall, decision-making influenced by strong volatility and variability is specifically associated with paranoia, whereas self-reported hasty decision-making is specifically associated with other themes of delusional ideation. 
These aspects of decision-making under uncertainty may therefore represent distinct cognitive processes that, together, have the potential to worsen delusional thinking across the psychosis spectrum.","link":"/opendata/sheffield-et-al-2023/"},{"title":"Shin & Niv (2021)","text":"How do we evaluate a group of people after a few negative experiences with some members but mostly positive experiences otherwise? How do rare experiences influence our overall impression? We show that rare events may be overweighted due to normative inference of the hidden causes that are believed to generate the observed events. We propose a Bayesian inference model that organizes environmental statistics by combining similar events and separating outlying observations. Relying on the model’s inferred latent causes for group evaluation overweights rare or variable events. We tested the model’s predictions in eight experiments where participants observed a sequence of social or non-social behaviours and estimated their average. As predicted, estimates were biased toward sparse events when estimating after seeing all observations, but not when tracking a summary value as observations accrued. Our results suggest that biases in evaluation may arise from inferring the hidden causes of group members’ behaviours.","link":"/opendata/shin-niv-2021/"},{"title":"Sidarus et al. (2019)","text":"Value-based decision-making involves trading off the cost associated with an action against its expected reward. Research has shown that both physical and mental effort constitute such subjective costs, biasing choices away from effortful actions, and discounting the value of obtained rewards. Facing conflicts between competing action alternatives is considered aversive, as recruiting cognitive control to overcome conflict is effortful. Moreover, engaging control to proactively suppress irrelevant information that could conflict with task-relevant information would presumably also be cognitively costly. 
Yet, it remains unclear whether the cognitive control demands involved in preventing and resolving conflict also constitute costs in value-based decisions. The present study investigated this question by embedding irrelevant distractors (flanker arrows) within a reversal-learning task, with intermixed free and instructed trials. Results showed that participants learned to adapt their free choices to maximize rewards, but were nevertheless biased to follow the suggestions of irrelevant distractors. Thus, the perceived cost of investing cognitive control to suppress an external suggestion could sometimes trump internal value representations. By adapting computational models of reinforcement learning, we assessed the influence of conflict at both the decision and learning stages. Modelling the decision showed that free choices were more biased when participants were less sure about which action was more rewarding. This supports the hypothesis that the costs linked to conflict management were traded off against expected rewards. During the learning phase, we found that learning rates were reduced in instructed, relative to free, choices. Learning rates were further reduced by conflict between an instruction and subjective action values, whereas learning was not robustly influenced by conflict between one’s actions and external distractors. Our results show that the subjective cognitive control costs linked to conflict factor into value-based decision-making, and highlight that different types of conflict may have different effects on learning about action outcomes.","link":"/opendata/sidarus-et-al-2019/"},{"title":"Siegel et al. (2022)","text":"Money can be tainted when it is associated with direct or indirect harm to others. Deciding whether to accept “dirty money” poses a dilemma because money can be used to help others, but accepting dirty money has moral costs. How people resolve the dilemma of dirty money remains unknown. 
One theory casts the dilemma as a valuation conflict that can be resolved by integrating the costs and benefits of accepting dirty money. Here, we use behavioral experiments and computational modeling to test the valuation conflict account and unveil the cognitive computations employed when deciding whether to accept or reject morally tainted cash. In Study 1, British participants decided whether to accept “dirty” money obtained by inflicting electric shocks on another person (versus “clean” money obtained by shocking oneself). Computational models showed that the source of the money (dirty versus clean) impacted decisions by shifting the relative valuation of the money’s positive and negative attributes, rather than imposing a uniform bias on decision-making. Studies 2 and 3 replicate this finding and show that participants were more willing to accept dirty money when the money was directed towards a good cause, and observers judged such decisions to be more praiseworthy than accepting dirty money for one’s own profit. Our findings suggest that dirty money can be psychologically “laundered” through charitable activities and have implications for understanding and preventing the social norms that can justify corrupt behavior.","link":"/opendata/siegel-et-al-2022/"},{"title":"Sierra et al. (2022)","text":"Precisely estimating event timing is essential for survival, yet temporal distortions are ubiquitous in our daily sensory experience. Here, we tested whether the relative position, duration, and distance in time of two sequentially-organized events-standard S, with constant duration, and comparison C, with duration varying trial-by-trial-are causal factors in generating temporal distortions. We found that temporal distortions emerge when the first event is shorter than the second event. 
Importantly, a significant interaction suggests that a longer inter-stimulus interval (ISI) helps to counteract such serial distortion effect only when the constant S is in the first position, but not if the unpredictable C is in the first position. These results imply the existence of a perceptual bias in perceiving ordered event durations, mechanistically contributing to distortion in time perception. We simulated our behavioral results with a Bayesian model and replicated the finding that participants disproportionately expand first-position dynamic (unpredictable) short events. Our results clarify the mechanisms generating time distortions by identifying a hitherto unknown duration-dependent encoding inefficiency in human serial temporal perception, something akin to a strong prior that can be overridden for highly predictable sensory events but unfolds for unpredictable ones.","link":"/opendata/sierra-et-al-2022/"},{"title":"Simon-Kutscher et al. (2019)","text":"During a threatening encounter, people can learn to associate the aversive event with a discrete preceding cue or with the context in which the event took place, corresponding to cue-dependent and context-dependent fear conditioning, respectively. Which of these forms of fear learning prevails has critical implications for fear-related psychopathology. We tested here whether acute stress may modulate the balance of cue-dependent and contextual fear learning. Participants (N = 72) underwent a stress or control manipulation 30 min before they completed a fear-learning task in a virtual environment that allowed both cued and contextual fear learning. Results showed equally strong cue- and context-dependent fear conditioning in the control group. Stress, however, abolished contextual fear learning, which was directly correlated with the activity of the stress hormone cortisol, and made cue-dependent fear more resistant to extinction. 
These results are the first to show that stress favors cue-dependent over contextual fear learning.","link":"/opendata/simon-kutscher-et-al-2019/"},{"title":"Sin et al. (2021)","text":"Many decisions in life are sequential and constrained by a time window. Although mathematically derived optimal solutions exist, it has been reported that humans often deviate from making optimal choices. Here, we used a secretary problem, a classic example of finite sequential decision-making, and investigated the mechanisms underlying individuals’ suboptimal choices. Across three independent experiments, we found that a dynamic programming model comprising subjective value function explains individuals’ deviations from optimality and predicts the choice behaviors under fewer and more opportunities. We further identified that pupil dilation reflected the levels of decision difficulty and subsequent choices to accept or reject the stimulus at each opportunity. The value sensitivity, a model-based estimate that characterizes each individual’s subjective valuation, correlated with the extent to which individuals’ physiological responses tracked stimuli information. Our results provide model-based and physiological evidence for subjective valuation in finite sequential decision-making, rediscovering human suboptimality in subjectively optimal decision-making processes.","link":"/opendata/sin-et-al-2021/"},{"title":"Slanina-Davies et al. (2023)","text":"OBJECTIVE: Eating disorders (EDs) are a heterogenous group of disorders characterized by disturbed eating patterns. Links have been made between ED symptoms and control-seeking behaviors, which may cause relief from distress. However, whether direct behavioral measures of control-seeking behavior correlate with ED symptoms has not been directly tested. Additionally, existing paradigms may conflate control-seeking behavior with uncertainty-reducing behavior. 
METHOD: A general population sample of 183 participants took part in an online behavioral task, in which participants rolled a die in order to obtain/avoid a set of numbers. Prior to each roll, participants could choose to change arbitrary features of the task (such as the color of their die) or view additional information (such as the current trial number). Selecting these Control Options could cost participants points or not (Cost/No-Cost conditions). Each participant completed all four conditions, each with 15 trials, followed by a series of questionnaires, including the Eating Attitudes Test-26 (EAT-26), the Intolerance of Uncertainty Scale, and the Obsessive-Compulsive Inventory-Revised (OCI-R). RESULTS: A Spearman’s rank test indicated no significant correlation between total EAT-26 score and total number of Control Options selected, with only elevated scores on a measure of obsessions and compulsivity (OCI-R) correlating with the total number of Control Options selected (rs = .155, p = .036). DISCUSSION: In our novel paradigm, we find no relationship between EAT-26 score and control-seeking. However, we do find some evidence that this behavior may be present in other disorders that often coincide with ED diagnosis, which may indicate that transdiagnostic factors such as compulsivity are important to control-seeking.","link":"/opendata/slanina-davies-et-al-2023/"},{"title":"Slater et al. (2023)","text":"Contrary to classical economic theories, experimental findings show that people are not exclusively self-interested; rather, they have other-regarding preferences, such as fairness and reciprocity. Further, these social preferences are emotionally driven, and deliberative processes are required to implement ‘rational’ self-interested motives. 
Here, we aimed to enhance ‘rational’ self-interested behavior by enhancing the neuronal activity of the right ventrolateral prefrontal cortex (rVLPFC), a brain area associated with emotion regulation processes, using anodal transcranial direct current stimulation (tDCS), among ultimatum game responders. We found that stimulated, compared to control (sham stimulation), participants accepted unfair offers significantly more often. Interestingly, this effect was not moderated by whether the unfair offers were made intentionally by the participants’ partners or through a fair mechanism (i.e., randomly made by the computer), suggesting that inequality per se is emotionally aversive even when it was not determined deliberately. In contrast, the effect was absent when playing on behalf of another random participant, suggesting that when the self is not involved, decisions are less emotional. These findings reveal the causal and vital role of the rVLPFC in promoting self-interested behavior in social exchange situations. We discuss theoretical implications for dual-system models and specifically in the context of social exchange situations.","link":"/opendata/slater-et-al-2023/"},{"title":"Smid et al. (2022)","text":"Human decision-making is underpinned by distinct systems that differ in flexibility and associated cognitive cost. A widely accepted dichotomy distinguishes between a cheap but rigid model-free system and a flexible but costly model-based system. Typically, humans use a hybrid of both types of decision-making depending on environmental demands. However, children’s use of a model-based system during decision-making has not yet been shown. While prior developmental work has identified simple building blocks of model-based reasoning in young children (1-4 years old), there has been little evidence of this complex cognitive system influencing behavior before adolescence. 
Here, by using a modified task to make engagement in cognitively costly strategies more rewarding, we show that children aged 5-11 years (N = 85), including the youngest children, displayed multiple indicators of model-based decision making, and that the degree of its use increased throughout childhood. Unlike adults (N = 24), however, children did not display adaptive arbitration between model-free and model-based decision-making. Our results demonstrate that throughout childhood, children can engage in highly sophisticated and costly decision-making strategies. However, the flexible arbitration between decision-making strategies might be a critically late-developing component in human development.","link":"/opendata/smid-et-al-2022/"},{"title":"Smith & Pollak (2022)","text":"To effectively navigate their environments, infants and children learn how to recognize events that predict salient outcomes, such as rewards or punishments. Relatively little is known about how children acquire this ability to attach value to the stimuli they encounter. Studies often examine children’s ability to learn about rewards and threats using either classical conditioning or behavioral choice paradigms. Here, we assess both approaches and find that they yield different outcomes in terms of which individuals had efficiently learned the value of information presented to them. The findings offer new insights into understanding how to assess different facets of value learning in children.","link":"/opendata/smith-pollak-2022/"},{"title":"Smith et al. (2023)","text":"Behavioural and neuroimaging research has shown that older adults are less sensitive to financial losses compared to younger adults. Yet relatively less is known about age-related differences in social decisions and social reward processing. 
As part of a pilot study that was sponsored by the Scientific Research Network on Decision Neuroscience and Aging, we collected behavioural and functional magnetic resonance imaging (fMRI) data from 50 participants (Younger: N = 26, ages 18–34 years; Older: N = 24, ages 63–80 years) who completed three tasks in the scanner: an economic trust game as the investor with three partners (computer, stranger, friend) as the investee; a card-guessing task with monetary gains and losses shared with three partners (computer, stranger, friend); and an ultimatum game as responder to three anonymous proposers (computer, age-similar adults, age-dissimilar adults). We also collected B0 field maps and high-resolution structural images (T1-weighted and T2-weighted images). These data could be reused to answer questions about moment-to-moment variability in fMRI signal, representational similarity between tasks, and brain structure.","link":"/opendata/smith-et-al-2023/"},{"title":"Snijder et al. (2023)","text":"The domain of cognitive control has been a major focus of experimental, neuroscience, and individual differences research. Currently, however, no theory of cognitive control successfully unifies both experimental and individual differences findings. Some perspectives deny that there even exists a unified psychometric cognitive control construct to be measured at all. These shortcomings of the current literature may reflect the fact that current cognitive control paradigms are optimized for the detection of within-subject experimental effects rather than individual differences. In the current study, we examine the psychometric properties of the Dual Mechanisms of Cognitive Control (DMCC) task battery, which was designed in accordance with a theoretical framework that postulates common sources of within-subject and individual differences variation. 
We evaluated both internal consistency and test-retest reliability, and for the latter, utilized both classical test theory measures (i.e., split-half methods, intraclass correlation) and newer hierarchical Bayesian estimation of generative models. Although traditional psychometric measures suggested poor reliability, the hierarchical Bayesian models indicated a different pattern, with good to excellent test-retest reliability in almost all tasks and conditions examined. Moreover, within-task, between-condition correlations were generally increased when using the Bayesian model derived estimates, and these higher correlations appeared to be directly linked to the higher reliability of the measures. In contrast, between-task correlations remained low regardless of theoretical manipulations or estimation approach. Together, these findings highlight the advantages of Bayesian estimation methods, while also pointing to the important role of reliability in the search for a unified theory of cognitive control.","link":"/opendata/snijder-et-al-2023/"},{"title":"Solomyak et al. (2022)","text":"Many decision-making studies have demonstrated that humans learn either expected values or relative preferences among choice options, yet little is known about what environmental conditions promote one strategy over the other. Here, we test the novel hypothesis that humans adapt the degree to which they form absolute values to the diversity of the learning environment. Since absolute values generalize better to new sets of options, we predicted that the more options a person learns about the more likely they would be to form absolute values. To test this, we designed a multi-day learning experiment comprising twenty learning sessions in which subjects chose among pairs of images each associated with a different probability of reward. 
We assessed the degree to which subjects formed absolute values and relative preferences by asking them to choose between images they learned about in separate sessions. We found that concurrently learning about more images within a session enhanced absolute-value, and suppressed relative-preference, learning. Conversely, cumulatively pitting each image against a larger number of other images across multiple sessions did not impact the form of learning. These results show that the way humans encode preferences is adapted to the diversity of experiences offered by the immediate learning context.","link":"/opendata/solomyak-et-al-2022/"},{"title":"Son et al. (2023)","text":"To act adaptively in a complex and dynamic social landscape, humans must continually make inferences about who might be connected to whom. How do they solve this fundamental problem of social link prediction: inferring the existence of unobserved or potential relationships in their social network from noisy, limited information? We propose that people generate principled inferences by learning cognitive maps that systematically abstract over direct relations (friends) and multistep relations (e.g., friends-of-friends). We show that such abstracted cognitive maps enable a flexible solution for link prediction and provide a natural explanation for a variety of otherwise puzzling empirical observations in social psychology. Our proposal generalizes the theory of cognitive maps to the fundamental computational problem of social link prediction and presents a powerful framework for understanding the workings of a predictive mind operating within a complex social world.","link":"/opendata/son-et-al-2023/"},{"title":"Song et al. (2022)","text":"Realistic and complex decision tasks often allow for many possible solutions. How do we find the correct one? Introspection suggests a process of trying out solutions one after the other until success. 
However, such methodical serial testing may be too slow, especially in environments with noisy feedback. Alternatively, the underlying learning process may involve implicit reinforcement learning that learns about many possibilities in parallel. Here we designed a multi-dimensional probabilistic active-learning task tailored to study how people learn to solve such complex problems. Participants configured three-dimensional stimuli by selecting features for each dimension and received probabilistic reward feedback. We manipulated task complexity by changing how many feature dimensions were relevant to maximizing reward, as well as whether this information was provided to the participants. To investigate how participants learn the task, we examined models of serial hypothesis testing, feature-based reinforcement learning, and combinations of the two strategies. Model comparison revealed evidence for hypothesis testing that relies on reinforcement learning when selecting which hypothesis to test. The extent to which participants engaged in hypothesis testing depended on the instructed task complexity: people tended to serially test hypotheses when instructed that there were fewer relevant dimensions, and relied more on gradual and parallel learning of feature values when the task was more complex. This demonstrates a strategic use of task information to balance the costs and benefits of the two methods of learning.","link":"/opendata/song-et-al-2022/"},{"title":"Soutscheck et al. (2022)","text":"Deciding whether to engage in strenuous mental activities requires trading off the potential benefits against the costs of mental effort, but it is unknown which brain rhythms are causally involved in such cost-benefit calculations. We show that brain stimulation targeting midfrontal theta oscillations increases the engagement in goal-directed mental effort. 
Participants received transcranial alternating current stimulation over dorsomedial prefrontal cortex while deciding whether they were willing to perform a demanding working memory task for monetary rewards. Midfrontal theta tACS increased the willingness to exert mental effort for rewards while leaving working memory performance unchanged. Computational modelling using a hierarchical Bayesian drift diffusion model suggests that theta tACS shifts the starting bias before evidence accumulation towards high-reward, high-effort options without affecting the velocity of the evidence accumulation process. Our findings suggest that the motivation to engage in goal-directed mental effort can be increased via midfrontal tACS.","link":"/opendata/soutscheck-et-al-2022/"},{"title":"Soutschek et al. (2022)","text":"Deficits in impulse control belong to the core profile of nicotine dependence. Smokers might thus benefit from voluntarily self-restricting their access to the immediate temptation of nicotine products (precommitment) in order to avoid impulse control failures. However, little is known about how smokers’ willingness to engage in voluntary self-restrictions is determined by metacognitive insight into their general preferences for immediate over delayed rewards. Here, with a series of monetary intertemporal choice tasks, we provide empirical evidence for reduced metacognitive accuracy in smokers relative to non-smokers and show that smokers overestimate the subjective value of delayed rewards relative to their revealed preferences. In line with the metacognitive deficits, smokers were also less sensitive to the risk of preference reversals when deciding whether or not to restrict their access to short-term financial rewards. 
Taken together, the current findings suggest that deficits not only in impulse control but also in metacognition may hamper smokers’ resistance to immediate rewards and capacity to pursue long-term goals.","link":"/opendata/soutschek-et-al-2022/"},{"title":"Souza & Frischkorn (2023)","text":"The limited capacity of working memory constrains how well we can think and act. Focused attention alleviates this limitation by prioritizing the most relevant mental content at a given time. Retro-cues tap into this ability by guiding attention to one working memory content, thereby improving memory speed and accuracy. So far, few attempts have been made to understand the retro-cue effect through well-established computational models, nor how their parameters track age-related changes and individual differences in focusing efficiency. The present study aims to close these gaps. We applied the drift-diffusion model to the data from a large sample of younger and older adults (total N = 346) that completed four retro-cue tasks. Diffusion modeling showed that retro-cues increased the quality of evidence accumulation, reduced the time taken for retrieval, and changed response conservativeness. Younger and older adults benefited from retro-cues in all tasks and parameters. Yet, age-related decline was observed in the retro-cue boost for evidence quality. Likewise, evidence quality was the only parameter capturing individual differences in focusing efficiency. Our results suggest that people differ in how well they can strengthen and protect a focused representation to boost evidence-quality accumulation, and this ability declines with aging.","link":"/opendata/souza-frischkorn-2023/"},{"title":"Spektor et al. (2022)","text":"People rely on the choice context to guide their decisions, violating fundamental principles of rational choice theory and exhibiting phenomena called context effects. 
Recent research has uncovered that dominance relationships can either increase or decrease the choice share of the dominating option, marking the two ends of an attraction-repulsion continuum. However, empirical links between the two opposing effects are scarce and theoretical accounts are missing altogether. The present study (N = 55) used eye tracking alongside a within-subject design that contrasts a perceptual task and a preferential-choice analog in order to bridge this gap and uncover the underlying information-search processes. Although individuals differed in their perceptual and preferential choices, they generally engaged in alternative-wise comparisons and a repulsion effect was present in both conditions that became weaker the more predominant the attribute-wise comparisons were. Altogether, our study corroborates the notion that repulsion effects are a robust and general phenomenon that theoretical accounts need to take seriously.","link":"/opendata/spektor-et-al-2022/"},{"title":"Spektor et al. (2023)","text":"Individuals’ decisions under risk tend to be in line with the notion that “losses loom larger than gains”. This loss aversion in decision making is commonly understood as a stable individual preference that is manifested across different contexts. The presumed stability and generality, which underlies the prominence of loss aversion in the literature at large, has been recently questioned by studies showing how loss aversion can disappear, and even reverse, as a function of the choice context. The present study investigated whether loss aversion reflects a trait-like attitude of avoiding losses or rather individuals’ adaptability to different contexts. We report three experiments that investigated the within-subject context sensitivity of loss aversion in a two-alternative forced-choice task. The results show, besides interindividual differences in loss aversion, that the context affects the extent of loss aversion. 
This indicates that even though the absolute value of loss aversion can be affected by external factors such as the choice context, estimates of people’s loss aversion still capture the relative dispositions towards gains and losses across individuals.","link":"/opendata/spektor-et-al-2023/"},{"title":"Spektor & Wulff (2023)","text":"Good decisions require information. When people face a situation in which they need to make a choice but know little about the options available, how do they search for information? We present an analysis of over 1,000,000 information-search decisions made by over 2,500 individuals in a decisions-from-experience setting. We found that individuals solve the problem in a smart way, relying on several strategies—including two novel ones. In discovery-driven search, people leverage detailed knowledge about the structure of the environment to find previously unobserved outcomes and terminate information search after all possible outcomes have been observed. In fixed search, on the other hand, people decide in advance how much information they want to obtain and stick to that decision irrespective of the feedback obtained. These novel strategies are distinct from uncertainty-driven search—the dominant strategy in research on information search—in which people engaged only after all outcomes had been observed. Overall, our results suggest that people adaptively and dynamically rely on a toolbox of information-search strategies. This is at odds with a narrow interpretation of information search as cost–benefit optimization and highlights a need for broader theories of information-search behavior in decisions under uncertainty, capturing the diversity of the strategic tools recruited.","link":"/opendata/spektor-wulff-2023/"},{"title":"Spicer et al. (2022)","text":"One of the most robust effects in cognitive psychology is anchoring, in which judgments show a bias toward previously viewed values. 
However, in what is essentially the same task as used in anchoring research, a perceptual illusion demonstrates the opposite effect of repulsion. Here, we united these two literatures, testing in two experiments with adults (total N = 200) whether prior comparative decisions bias cognitive and perceptual judgments in opposing directions or whether anchoring and repulsion are two domain-general biases whose co-occurrence has so far gone undetected. We found that in both perceptual and cognitive tasks, anchoring and repulsion co-occur. Further, the direction of the bias depends on the comparison value: Distant values attract judgments, whereas nearby values repulse judgments. Because none of the leading theories for either effect account for both biases, theoretical integration is needed. As a starting point, we describe one such joint theory based on sampling models of cognition.","link":"/opendata/spicer-et-al-2022/"},{"title":"Steffen et al. (2023)","text":"Forward planning is crucial to maximize outcome in complex sequential decision-making scenarios. In this cross-sectional study, we were particularly interested in age-related differences of forward planning. We presumed that especially older individuals would show a shorter planning depth to keep the costs of model-based decision-making within limits. To test this hypothesis, we developed a sequential decision-making task to assess forward planning in younger (age 60 years; n = 27) adults. By using reinforcement learning modelling, we inferred planning depths from participants’ choices. Our results showed significantly shorter planning depths and higher response noise for older adults. Age differences in planning depth were only partially explained by well-known cognitive covariates such as working memory and processing speed. Consistent with previous findings, this indicates age-related shifts away from model-based behaviour in older adults. 
In addition to a shorter planning depth, our findings suggest that older adults also apply a variety of heuristical low-cost strategies.","link":"/opendata/steffen-et-al-2023/"},{"title":"Steiner & Frey (2021)","text":"Representative design refers to the idea that experimental stimuli should be sampled or designed such that they represent the environments to which measured constructs are supposed to generalize. In this article we investigate the role of representative design in achieving valid and reliable psychological assessments, by focusing on a widely used behavioral measure of risk taking-the Balloon Analogue Risk Task (BART). Specifically, we demonstrate that the typical implementation of this task violates the principle of representative design, thus conflicting with the expectations people likely form from real balloons. This observation may provide an explanation for the previously observed limitations in some of the BARTs psychometric properties (e.g., convergent validity with other measures of risk taking). To experimentally test the effects of improved representative designs, we conducted two extensive empirical studies (N = 772 and N = 632), finding that participants acquired more accurate beliefs about the optimal behavior in the BART because of these task adaptions. Yet, improving the tasks representativeness proved to be insufficient to enhance the BARTs psychometric properties. It follows that for the development of valid behavioral measurement instruments-as are needed, for instance, in functional neuroimaging studies-our field has to overcome the philosophy of the repair program (i.e., fixing existing tasks). Instead, we suggest that the development of valid task designs requires novel ecological assessments, aimed at identifying those real-life behaviors and associated psychological processes that lab tasks are supposed to capture and generalize to.","link":"/opendata/steiner-frey-2021/"},{"title":"Steinke et al. 
(2020)","text":"The Wisconsin Card Sorting Test (WCST) is considered a gold standard for the assessment of cognitive flexibility. On the WCST, repeating a sorting category following negative feedback is typically treated as indicating reduced cognitive flexibility. Therefore, such responses are referred to as ‘perseveration’ errors. Recent research suggests that the propensity for perseveration errors is modulated by response demands: They occur less frequently when their commitment repeats the previously executed response. Here, we propose parallel reinforcement-learning models of card sorting performance, which assume that card sorting performance can be conceptualized as resulting from model-free reinforcement learning at the level of responses that occurs in parallel with model-based reinforcement learning at the categorical level. We compared parallel reinforcement-learning models with purely model-based reinforcement learning, and with the state-of-the-art attentional-updating model. We analyzed data from 375 participants who completed a computerized WCST. Parallel reinforcement-learning models showed the best predictive accuracies for the majority of participants. Only parallel reinforcement-learning models accounted for the modulation of perseveration propensity by response demands. In conclusion, parallel reinforcement-learning models provide a new theoretical perspective on card sorting and offer a suitable framework for discerning individual differences in latent processes that subserve behavioral flexibility.","link":"/opendata/steinke-et-al-2020/"},{"title":"Stevenson et al. (2022)","text":"Decision-making behavior is often understood using the framework of evidence accumulation models (EAMs). Nowadays, EAMs are applied to various domains of decision-making with the underlying assumption that the latent cognitive constructs proposed by EAMs are consistent across these domains. 
In this study, we investigate the extent to which the parameters of EAMs are related both between four different decision-making domains and across different time points. To that end, we make use of the novel joint modelling approach, which explicitly includes relationships between parameters, such as covariances or underlying factors, in one combined joint model. Consequently, this joint model also accounts for measurement error and uncertainty within the estimation of these relations. We found that EAM parameters were consistent between time points on three of the four decision-making tasks. For our between-task analysis we constructed a joint model with a factor analysis on the parameters of the different tasks. Our two-factor joint model indicated that information processing ability was related between the different decision-making domains. However, other cognitive constructs such as the degree of response caution and urgency were only comparable on some domains.","link":"/opendata/stevenson-et-al-2022/"},{"title":"Stojić et al. (2020)","text":"Uncertainty plays a critical role in reinforcement learning and decision making. However, exactly how it influences behavior remains unclear. Multiarmed-bandit tasks offer an ideal test bed, since computational tools such as approximate Kalman filters can closely characterize the interplay between trial-by-trial values, uncertainty, learning, and choice. To gain additional insight into learning and choice processes, we obtained data from subjects' overt allocation of gaze. The estimated value and estimation uncertainty of options influenced what subjects looked at before choosing; these same quantities also influenced choice, as additionally did fixation itself. A momentary measure of uncertainty in the form of absolute prediction errors determined how long participants looked at the obtained outcomes. 
These findings affirm the importance of uncertainty in multiple facets of behavior and help delineate its effects on decision making.","link":"/opendata/stojic-et-al-2020/"},{"title":"Stuppy-Sullivan et al. (2020)","text":"Aberrant cost–benefit decision making is a key factor related to individual differences in the expression of substance use disorders (SUDs). Previous research highlights how delay-cost sensitivity affects variability in SUDs; however, other forms of cost–benefit decision making—effort-based choice—have received less attention. We administered the Effort Expenditure for Rewards Task (EEfRT) in an SUD-enriched community sample (N = 80). Individuals with more severe SUDs were less likely to use information about expected value when deciding between high-effort, high-reward and low-effort, low-reward options. Furthermore, individuals whose severity of use was primarily related to avoiding aversive affective states and individuals with heightened sensitivity to delay costs during intertemporal decision making were the least sensitive to expected value signals when making decisions to engage in effortful behavior. Together, these findings suggest that individuals with more severe SUDs have difficulty integrating multiple decision variables to guide behavior during effort-based decision making.","link":"/opendata/stuppy-sullivan-et-al-2020/"},{"title":"Sugawara & Katahira (2021)","text":"The learning rate is a key parameter in reinforcement learning that determines the extent to which novel information (outcome) is incorporated in guiding subsequent actions. Numerous studies have reported that the magnitude of the learning rate in human reinforcement learning is biased depending on the sign of the reward prediction error. However, this asymmetry can be observed as a statistical bias if the fitted model ignores the choice autocorrelation (perseverance), which is independent of the outcomes. 
Therefore, to investigate the genuine process underlying human choice behavior using empirical data, one should dissociate asymmetry in learning and perseverance from choice behavior. The present study addresses this issue by using a Hybrid model incorporating asymmetric learning rates and perseverance. First, by conducting simulations, we demonstrate that the Hybrid model can identify the true underlying process. Second, using the Hybrid model, we show that empirical data collected from a web-based experiment are governed by perseverance rather than asymmetric learning. Finally, we apply the Hybrid model to two open datasets in which asymmetric learning was reported. As a result, the asymmetric learning rate was validated in one dataset but not in the other.","link":"/opendata/sugawara-katahira-2021/"},{"title":"Sukhov et al. (2023)","text":"Persistence and perseverance, even in the face of great adversity, are admirable qualities. However, knowing when to stop pursuing something is as important as exerting effort toward attaining a goal. How do people decide when to persist and when to quit? Here, we design a novel task to study this question, in which people were given a finite number of opportunities to pursue stochastic rewards by selecting among a set of options that provide a reward each trial. At any time, if people were not satisfied with the option they had selected, they could choose to abandon it and instead try a new option. However, if they did so they could never return to the previous option. Mathematical analysis of this task shows that the optimal strategy explores a relatively small number of options before settling on a sufficiently good option. Further, we find that the optimal strategy is to abandon an option if the total number of remaining trials exceeds a threshold specified by the observed option’s performance. A large-scale, pre-registered experiment (N = 3,632) reveals that people largely behave in accordance with the optimal strategy. 
People also make decisions to persist with an option based on its performance, and they typically explore relatively few options before settling on a sufficiently good one. However, compared to the optimal strategy, people are less sensitive to the number of remaining trials and are more likely to persist with sub-optimal options. Together, this work provides a new approach to studying how we decide when to quit and deepens our understanding of human persistence.","link":"/opendata/sukhov-et-al-2023/"},{"title":"Suthaharan et al. (2021)","text":"The COVID-19 pandemic has made the world seem less predictable. Such crises can lead people to feel that others are a threat. Here, we show that the initial phase of the pandemic in 2020 increased individuals' paranoia and made their belief updating more erratic. A proactive lockdown made people's belief updating less capricious. However, state-mandated mask-wearing increased paranoia and induced more erratic behaviour. This was most evident in states where adherence to mask-wearing rules was poor but where rule following is typically more common. Computational analyses of participant behaviour suggested that people with higher paranoia expected the task to be more unstable. People who were more paranoid endorsed conspiracies about mask-wearing and potential vaccines and the QAnon conspiracy theories. These beliefs were associated with erratic task behaviour and changed priors. Taken together, we found that real-world uncertainty increases paranoia and influences laboratory task behaviour.","link":"/opendata/suthaharan-et-al-2021/"},{"title":"Swart et al. (2017)","text":"Catecholamines modulate the impact of motivational cues on action. Such motivational biases have been proposed to reflect cue-based, Pavlovian effects. Here, we assess whether motivational biases may also arise from asymmetrical instrumental learning of active and passive responses following reward and punishment outcomes. 
We present a novel paradigm, allowing us to disentangle the impact of reward and punishment on instrumental learning from Pavlovian response biasing. Computational analyses showed that motivational biases reflect both Pavlovian and instrumental effects: reward and punishment cues promoted generalized (in)action in a Pavlovian manner, whereas outcomes enhanced instrumental (un)learning of chosen actions. These cue- and outcome-based biases were altered independently by the catecholamine enhancer methylphenidate. Methylphenidate's effect varied across individuals with a putative proxy of baseline dopamine synthesis capacity, working memory span. Our study uncovers two distinct mechanisms by which motivation impacts behaviour, and helps refine current models of catecholaminergic modulation of motivated action.","link":"/opendata/swart-et-al-2017/"},{"title":"Swart et al. (2018)","text":"Motivation exerts control over behavior by eliciting Pavlovian responses, which can either match or conflict with instrumental action. We can overcome maladaptive motivational influences putatively through frontal cognitive control. However, the neurocomputational mechanisms subserving this control are unclear; does control entail up-regulating instrumental systems, down-regulating Pavlovian systems, or both? We combined electroencephalography (EEG) recordings with a motivational Go/NoGo learning task (N = 34), in which multiple Go options enabled us to disentangle selective action learning from nonselective Pavlovian responses. Midfrontal theta-band (4 Hz-8 Hz) activity covaried with the level of Pavlovian conflict and was associated with reduced Pavlovian biases rather than reduced instrumental learning biases. Motor and lateral prefrontal regions synchronized to the midfrontal cortex, and these network dynamics predicted the reduction of Pavlovian biases over and above local, midfrontal theta activity. 
This work links midfrontal processing to detecting Pavlovian conflict and highlights the importance of network processing in reducing the impact of maladaptive, Pavlovian biases.","link":"/opendata/swart-et-al-2018/"},{"title":"Szücs et al. (2022)","text":"Rivalry and admiration-seeking are two distinct strategies humans use to gain status in social competition. However, in vivo data are lacking about whether these behavioral manifestations of status pursuit are driven by distinct rivalry and admiration-seeking traits, as outlined by the Narcissistic Admiration and Rivalry Concept (NARC), whether NARC traits interact with environmental cues as suggested by the Status Pursuit In Narcissism (SPIN) model, and whether these interactions primarily occur with trait-relevant cues (defeat in the case of trait rivalry and victory in the case of trait admiration-seeking) as proposed by Trait activation theory (TAT). We used a rigged video game tournament with three randomized blocks with defeat manipulations of varying intensity: defeat-to-victory ratios of 1:1 (neutral), 2:1 (moderate losing), and 3:1 (extreme losing), measuring behavioral rivalry (stealing points from opponents) and admiration-seeking (paying to boost rank in the tournament) in a sample of 434 undergraduates assessed for trait rivalry and trait admiration-seeking with the Narcissistic Admiration and Rivalry Questionnaire. We found evidence for trait-congruent main and interaction effects: whereas behavioral rivalry scaled with trait rivalry and behavioral admiration-seeking with trait admiration-seeking, trait rivalry primarily increased status-pursuit behaviors following defeats and trait admiration-seeking following victories. 
These results corroborate the NARC’s two-dimensional conceptualization of narcissistic grandiosity, support the SPIN model’s interactionist view of status pursuit, and extend these frameworks by outlining trait-specific environmental effects consistent with TAT.","link":"/opendata/szucs-et-al-2022/"},{"title":"Tagu & Kristjansson (2021)","text":"A critical question in visual foraging concerns the mechanisms driving the next target selection. Observers first identify a set of candidate targets, and then select the best option among these candidates. Recent evidence suggests that target selection relies on internal biases towards proximity (nearest target from the last selection), priming (target from the same category as the last selection) and value (target associated with high value). Here, we tested the role of eye movements in target selection, and notably whether disabling eye movements during target selection could affect search strategy. We asked observers to perform four foraging tasks differing by selection modality and target value. During gaze foraging, participants had to accurately fixate the targets to select them and could not anticipate the next selection with their eyes, while during mouse foraging they selected the targets with mouse clicks and were free to move their eyes. We moreover manipulated both target value and proximity. Our results revealed notable individual differences in search strategy, confirming the existence of internal biases towards value, proximity and priming. Critically, there were no differences in search strategy between mouse and gaze foraging, suggesting that disabling eye movements during target selection did not affect foraging behaviour. These results importantly suggest that overt orienting is not necessary for target selection. 
This study provides fundamental information for theoretical conceptions of attentional selection, and emphasizes the importance of covert attention for target selection during visual foraging.","link":"/opendata/tagu-kristjansson-2021/"},{"title":"Tal-Perry & Yuval-Greenberg (2021)","text":"Eye movements are inhibited prior to the occurrence of temporally predictable events. This ‘oculomotor inhibition effect’ has been demonstrated with various tasks and modalities. Specifically, it was shown that when intervals between cue and target are fixed, saccade rate prior to the target is lower than when they are varied. However, it is still an open question whether this effect is linked to temporal expectation to the predictable target, or to the duration estimation of the interval preceding it. Here, we examined this question in 20 participants while they performed an implicit temporal expectation and an explicit time estimation task. In each trial, following cue onset, two consecutive grating patches were presented, each preceded by an interval. Temporal expectation was manipulated by setting the first interval duration to be either fixed or varied within each block. Participants were requested to compare either the durations of the two intervals (time estimation), or the tilts of the two grating patches (temporal expectation). Saccade rate, measured prior to the first grating, was lower in the fixed relative to the varied condition of both tasks. This suggests that the inhibition effect is elicited by target predictability and indicates that it is linked to temporal expectation, rather than to time estimation processes. Additionally, this finding suggests that the oculomotor inhibition is independent of motor readiness, as it was elicited even when no response was required. 
We conclude that the prestimulus oculomotor inhibition effect can be used as a marker of temporal expectation, and discuss its potential underlying mechanisms.","link":"/opendata/tal-perry-yuval-greenberg-2021/"},{"title":"Tarantola et al. (2017)","text":"Our personal preferences affect a broad array of social behaviors. This includes the way we learn the preferences of others, an ability that often relies on limited or ambiguous information. Here we report an egocentric influence on this type of social learning that is reflected in both performance and response times. Using computational models that combine inter-trial learning and intra-trial choice, we find transient effects of participants' preferences on the learning process, through the influence of priors, and persistent effects on the choice process. A second experiment shows that these effects generalize to non-social learning, though participants in the social learning experiment appeared to additionally benefit by using their knowledge about the popularity of certain preferences. We further find that the domain-general egocentric influences we identify can yield performance advantages in uncertain environments. People often assume that other people share their preferences, but how exactly this bias manifests itself in learning and decision-making is unclear. Here, the authors show that a person's own preferences influence learning in both social and non-social situations, and that this bias improves performance.","link":"/opendata/tarantola-et-al-2017/"},{"title":"Tardiff et al. (2022)","text":"Expectations, such as those arising from either learned rules or recent stimulus regularities, can bias subsequent auditory perception in diverse ways. However, it is not well understood if and how these diverse effects depend on the source of the expectations. Further, it is unknown whether different sources of bias use the same or different computational and physiological mechanisms. 
We examined how rule-based and stimulus-based expectations influenced behavior and pupil-linked arousal, a marker of certain forms of expectation-based processing, of human subjects performing an auditory frequency-discrimination task. Rule-based cues consistently biased choices and response times (RTs) toward the more-probable stimulus. In contrast, stimulus-based cues had a complex combination of effects, including choice and RT biases toward and away from the frequency of recently presented stimuli. These different behavioral patterns also had: 1) distinct computational signatures, including different modulations of key components of a novel form of a drift-diffusion decision model and 2) distinct physiological signatures, including substantial bias-dependent modulations of pupil size in response to rule-based but not stimulus-based cues. These results imply that different sources of expectations can modulate auditory processing via distinct mechanisms: one that uses arousal-linked, rule-based information and another that uses arousal-independent, stimulus-based information to bias the speed and accuracy of auditory perceptual decisions.","link":"/opendata/tardiff-et-al-2022/"},{"title":"Tashjian et al. (2022)","text":"Protection often involves the capacity to prospectively plan the actions needed to mitigate harm. The computational architecture of decisions involving protection remains unclear, as well as whether these decisions differ from other beneficial prospective actions such as reward acquisition. Here we compare protection acquisition to reward acquisition and punishment avoidance to examine overlapping and distinct features across the three action types. Protection acquisition is positively valenced similar to reward. For both protection and reward, the more the actor gains, the more benefit. However, reward and protection occur in different contexts, with protection existing in aversive contexts. 
Punishment avoidance also occurs in aversive contexts, but differs from protection because punishment is negatively valenced and motivates avoidance. Across three independent studies (Total N = 600) we applied computational modeling to examine model-based reinforcement learning for protection, reward, and punishment in humans. Decisions motivated by acquiring protection evoked a higher degree of model-based control than acquiring reward or avoiding punishment, with no significant differences in learning rate. The context-valence asymmetry characteristic of protection increased deployment of flexible decision strategies, suggesting model-based control depends on the context in which outcomes are encountered as well as the valence of the outcome.","link":"/opendata/tashjian-et-al-2022/"},{"title":"Tavoni et al. (2022)","text":"We must often infer latent properties of the world from noisy and changing observations. Complex, probabilistic approaches to this challenge such as Bayesian inference are accurate but cognitively demanding, relying on extensive working memory and adaptive processing. Simple heuristics are easy to implement but may be less accurate. What is the appropriate balance between complexity and accuracy? Here we model a hierarchy of strategies of variable complexity and find a power law of diminishing returns: increasing complexity gives progressively smaller gains in accuracy. The rate of diminishing returns depends systematically on the statistical uncertainty in the world, such that complex strategies do not provide substantial benefits over simple ones when uncertainty is either too high or too low. In between, there is a complexity dividend. In two psychophysical experiments, we confirm specific model predictions about how working memory and adaptivity should be modulated by uncertainty.","link":"/opendata/tavoni-et-al-2022/"},{"title":"Teghil et al. 
(2020)","text":"Behavioral evidence suggests that different mechanisms mediate duration perception depending on whether regular or irregular cues for time estimation are provided, and that individual differences in interoceptive processing may affect duration perception only in the latter case. However, no study has addressed brain correlates of this proposed distinction. Here participants performed a duration reproduction task in two conditions: with unevenly spaced stimuli during time estimation/reproduction (irregular), and with regularly spaced stimuli provided during the same task (regular). They also underwent resting-state fMRI to assess regional functional connectivity, in order to link individual differences in behavioral performance to variations in patterns of intrinsic brain oscillations. Resting-state functional connectivity of the right precentral gyrus with the ipsilateral insula and putamen was predicted by duration reproduction performance selectively in the irregular condition. The connectivity of the right posterior insula, within a network modulated by participants’ degree of interoceptive awareness, correlated positively with performance in the irregular condition only. Findings support the distinction between brain networks involved in duration processing with or without regular cues, and the hypothesis that the multimodal integration of interoceptive and exteroceptive cues is specifically involved in the latter.","link":"/opendata/teghil-et-al-2020/"},{"title":"Teicher & Parigger (2015)","text":"There is increasing interest in childhood maltreatment as a potent stimulus that may alter trajectories of brain development, induce epigenetic modifications and enhance risk for medical and psychiatric disorders. Although a number of useful scales exist for retrospective assessment of abuse and neglect, they have significant limitations. 
Moreover, they fail to provide detailed information on timing of exposure, which is critical for delineation of sensitive periods. The Maltreatment and Abuse Chronology of Exposure (MACE) scale was developed in a sample of 1051 participants using item response theory to gauge severity of exposure to ten types of maltreatment (emotional neglect, non-verbal emotional abuse, parental physical maltreatment, parental verbal abuse, peer emotional abuse, peer physical bullying, physical neglect, sexual abuse, witnessing interparental violence and witnessing violence to siblings) during each year of childhood. Items included in the subscales had acceptable psychometric properties based on infit and outfit mean square statistics, and each subscale passed Andersen’s Likelihood ratio test. The MACE provides an overall severity score and multiplicity score (number of types of maltreatment experienced) with excellent test-retest reliability. Each type of maltreatment showed good reliability as did severity of exposure across each year of childhood. MACE Severity correlated 0.738 with Childhood Trauma Questionnaire (CTQ) score and MACE Multiplicity correlated 0.698 with the Adverse Childhood Experiences scale (ACE). However, MACE accounted for 2.00- and 2.07-fold more of the variance, on average, in psychiatric symptom ratings than CTQ or ACE, respectively, based on variance decomposition. Different types of maltreatment had distinct and often unique developmental patterns. The 52-item MACE, a simpler Maltreatment Abuse and Exposure Scale (MAES) that only assesses overall exposure, and the original test instrument (MACE-X) with several additional items, plus spreadsheets and R code for scoring, are provided to facilitate use and to spur further development.","link":"/opendata/teicher-parigger-2015/"},{"title":"Ten et al. (2021)","text":"Curiosity-driven learning is foundational to human cognition. 
By enabling humans to autonomously decide when and what to learn, curiosity has been argued to be crucial for self-organizing temporally extended learning curricula. However, the mechanisms driving people to set intrinsic goals, when they are free to explore multiple learning activities, are still poorly understood. Computational theories propose different heuristics, including competence measures (e.g., percent correct) and learning progress, that could be used as intrinsic utility functions to efficiently organize exploration. Such intrinsic utilities constitute computationally cheap but smart heuristics to prevent people from laboring in vain on unlearnable activities, while still motivating them to self-challenge on difficult learnable activities. Here, we provide empirical evidence for these ideas by means of a free-choice experimental paradigm and computational modeling. We show that while humans rely on competence information to avoid easy tasks, models that include a learning-progress component provide the best fit to task selection data. These results bridge the research in artificial and biological curiosity, reveal strategies that are used by humans but have not been considered in computational research, and introduce tools for probing how humans become intrinsically motivated to learn and acquire interests and skills on extended time scales.","link":"/opendata/ten-et-al-2021/"},{"title":"Teoh et al. (2020)","text":"Dual-process models of altruistic choice assume that automatic responses give way to deliberation over time, and are a popular way to conceptualize how people make generous choices and why those choices might change under time pressure. However, these models have led to conflicting interpretations of behaviour and underlying psychological dynamics. Here, we propose that flexible, goal-directed deployment of attention towards information priorities provides a more parsimonious account of altruistic choice dynamics. 
We demonstrate that time pressure tends to produce early gaze-biases towards a person’s own outcomes, and that individual differences in this bias explain how individuals’ generosity changes under time pressure. Our gaze-informed drift-diffusion model incorporating moment-to-moment eye-gaze further reveals that underlying social preferences both drive attention, and interact with it to shape generosity under time pressure. These findings help explain existing inconsistencies in the field by emphasizing the role of dynamic attention-allocation during altruistic choice.","link":"/opendata/teoh-et-al-2020/"},{"title":"Teoh & Hutcherson (2022)","text":"Time pressure is a powerful experimental manipulation frequently used to arbitrate between competing dual-process models of prosocial decision-making, which typically assume that automatic responses yield to deliberation over time. However, the use of time pressure has led to conflicting conclusions about the psychological dynamics of prosociality. Here, we proposed that flexible, context-sensitive information search, rather than automatic responses, underlies these divergent effects of time pressure on prosociality. We demonstrated in two preregistered studies (N = 304 adults from the United States and Canada; Prolific Academic) that different prosocial contexts (i.e., pure altruism vs. cooperation) have distinct effects on information search, driving people to prioritize information differently, particularly under time pressure. Furthermore, these information priorities subsequently influence prosocial choices, accounting for the different effects of time pressure in altruistic and cooperative contexts. These findings help explain existing inconsistencies in the field by emphasizing the role of dynamic context-sensitive information search during social decision-making, particularly under time pressure.","link":"/opendata/teoh-hutcherson-2022/"},{"title":"Thomas et al. 
(2019)","text":"How do we make simple choices such as deciding between an apple and an orange? Recent empirical evidence suggests that choice behaviour and gaze allocation are closely linked at the group level, whereby items looked at longer during the decision-making process are more likely to be chosen. However, it is unclear how variable this gaze bias effect is between individuals. Here we investigate this question across four different simple choice experiments and using a computational model that can be easily applied to individuals. We show that an association between gaze and choice is present for most individuals, but differs considerably in strength. Generally, individuals with a strong association between gaze and choice behaviour are worse at choosing the best item from a choice set compared with individuals with a weak association. Accounting for individuals’ variability in gaze bias in the model can explain and accurately predict individual differences in choice behaviour.","link":"/opendata/thomas-et-al-2019/"},{"title":"Thunberg et al. (2023)","text":"Response inhibition, the intentional stopping of planned or initiated actions, is often considered a key facet of control, impulsivity, and self-regulation. The stop signal task is argued to be the purest inhibition task we have, and it is thus central to much work investigating the role of inhibition in areas like development and psychopathology. Most of this work quantifies stopping behavior by calculating the stop signal reaction time as a measure of individual stopping latency. Individual difference studies aiming to investigate why and how stopping latencies differ between people often do this under the assumption that the stop signal reaction time indexes a stable, dispositional trait. However, empirical support for this assumption is lacking, as common measures of inhibition and control tend to show low test-retest reliability and thus appear unstable over time. 
The reasons for this could be methodological, where low stability is driven by measurement noise, or substantive, where low stability is driven by a larger influence of state-like and situational factors. To investigate this, we characterized the split-half and test-retest reliability of a range of common behavioral and electrophysiological measures derived from the stop signal task. Across three independent studies, different measurement modalities, and a systematic review of the literature, we found a pattern of low temporal stability for inhibition measures and higher stability for measures of manifest behavior and non-inhibitory processing. This pattern could not be explained by measurement noise and low internal consistency. Consequently, response inhibition appears to have mostly state-like and situational determinants, and there is little support for the validity of conceptualizing common inhibition measures as reflecting stable traits.","link":"/opendata/thunberg-et-al-2023/"},{"title":"Thyer et al. (2022)","text":"Past work has shown that storage in working memory elicits stimulus-specific neural activity that tracks the stored content. Here, we present evidence for a distinct class of load-sensitive neural activity that indexes items without representing their contents per se. We recorded electroencephalogram (EEG) activity while adult human subjects stored varying numbers of items in visual working memory. Multivariate analysis of the scalp topography of EEG voltage enabled precise tracking of the number of individuated items stored and robustly predicted individual differences in working memory capacity. Critically, this signature of working memory load generalized across variations in both the type and number of visual features stored about each item, suggesting that it tracked the number of individuated memory representations and not the content of those memories. 
We hypothesize that these findings reflect the operation of a capacity-limited pointer system that supports on-line storage and attentive tracking.","link":"/opendata/thyer-et-al-2022/"},{"title":"Tomov et al. (2021)","text":"The ability to transfer knowledge across tasks and generalize to novel ones is an important hallmark of human intelligence. Yet not much is known about human multitask reinforcement learning. We study participants’ behaviour in a two-step decision-making task with multiple features and changing reward functions. We compare their behaviour with two algorithms for multitask reinforcement learning, one that maps previous policies and encountered features to new reward functions and one that approximates value functions across tasks, as well as to standard model-based and model-free algorithms. Across three exploratory experiments and a large preregistered confirmatory experiment, our results provide evidence that participants who are able to learn the task use a strategy that maps previously learned policies to novel scenarios. These results enrich our understanding of human reinforcement learning in complex environments with changing task demands.","link":"/opendata/tomov-et-al-2021/"},{"title":"Torok et al. (2022)","text":"Internal models capture the regularities of the environment and are central to understanding how humans adapt to environmental statistics. In general, the correct internal model is unknown to observers; instead, they rely on an approximate model that is continually adapted throughout learning. However, experimenters assume an ideal observer model, which captures stimulus structure but ignores the diverging hypotheses that humans form during learning. We combine non-parametric Bayesian methods and probabilistic programming to infer rich and dynamic individualised internal models from response times. 
We demonstrate that the approach is capable of characterizing the discrepancy between the internal model maintained by individuals and the ideal observer model and of tracking the evolution of the contribution of the ideal observer model to the internal model throughout training. In particular, in an implicit visuomotor sequence learning task the identified discrepancy revealed an inductive bias that was consistent across individuals but varied in strength and persistence.","link":"/opendata/torok-et-al-2022/"},{"title":"Torres et al. (2023)","text":"Visual working memory is thought to have a fixed capacity limit. However, recent evidence suggests that capacity is greater for real-world objects compared to simple features (i.e., colors). Here, we examined whether greater working memory for objects was due to greater memorability. In online samples of young adults, real-world objects were better remembered than colors, which was attributed to a higher proportion of high-confidence responses (Exp 1). Memory performance for objects was also improved compared to their scrambled counterparts (Exp 2), indicating that this benefit is unrelated to visual complexity. Critically, the specific objects that were likely to be remembered with high confidence were highly correlated across experiments, consistent with the idea that some objects are more memorable than others. Object memorability also predicted memory performance between objects within a display (Exp 3). These findings suggest that the object benefit in working memory may be supported by stimulus memorability.","link":"/opendata/torres-et-al-2023/"},{"title":"Toso et al. (2021)","text":"The connection between stimulus perception and time perception remains unknown. The present study combines human and rat psychophysics with sensory cortical neuronal firing to construct a computational model for the percept of elapsed time embedded within sense of touch. 
When subjects judged the duration of a vibration applied to the fingertip (human) or whiskers (rat), increasing stimulus intensity led to increasing perceived duration. Symmetrically, increasing vibration duration led to increasing perceived intensity. We modeled real spike trains recorded from vibrissal somatosensory cortex as input to dual leaky integrators-an intensity integrator with short time constant and a duration integrator with long time constant-generating neurometric functions that replicated the actual psychophysical functions of rats. Returning to human psychophysics, we then confirmed specific predictions of the dual leaky integrator model. This study offers a framework, based on sensory coding and subsequent accumulation of sensory drive, to account for how a feeling of the passage of time accompanies the tactile sensory experience.","link":"/opendata/toso-et-al-2021/"},{"title":"Traner et al. (2021)","text":"Classic foraging theory predicts that humans and animals aim to gain maximum reward per unit time. However, in standard instrumental conditioning tasks individuals adopt an apparently suboptimal strategy: they respond slowly when the expected value is low. This reward-related bias is often explained as reduced motivation in response to low rewards. Here we present evidence this behavior is associated with a complementary increased motivation to search the environment for alternatives. We trained monkeys to search for reward-related visual targets in environments with different values. We found that the reward-related bias scaled with environment value, was consistent with persistent searching after the target was already found, and was associated with increased exploratory gaze to objects in the environment. 
A novel computational model of foraging suggests that this search strategy could be adaptive in naturalistic settings where both environments and the objects within them provide partial information about hidden, uncertain rewards.","link":"/opendata/traner-et-al-2021/"},{"title":"Troudart & Shahar (2023)","text":"Acting in a goal-directed manner requires an ability to accurately predict the outcomes of one’s actions. However, not much is known regarding how threat-related cues influence our ability to form action-outcome associations according to the environment’s known causal structure. Here, we examined the extent to which threat-related cues influence individuals’ tendency to form and act in accordance with action-outcome associations that do not exist in the external environment (i.e., outcome-irrelevant learning). Forty-nine healthy participants completed an online multi-armed reinforcement-learning bandit task in which they were asked to help a child safely cross a street. Outcome-irrelevant learning was estimated as a tendency to assign value to response keys that did not predict an outcome but were used to report participants’ choices. We first replicated previous findings showing that individuals tend to form and act in accordance with irrelevant action-outcome associations, across experimental conditions, and despite explicit knowledge regarding the environment’s true structure. Importantly, results of a Bayesian regression analysis suggest that the presentation of threat-related images, compared with neutral or no visual cues at trial initiation, increased outcome-irrelevant learning. We discuss outcome-irrelevant learning as a possible theoretical mechanism that may lead to altered learning in the face of perceived threat.","link":"/opendata/troudart-shahar-2023/"},{"title":"Tsigeman et al. 
(2022)","text":"Visuospatial working memory (VSWM) is essential to human cognitive abilities and is associated with important life outcomes such as academic performance. Recently, a number of reliable measures of VSWM have been developed to help understand psychological processes and for practical use in education. We sought to extend this work using Item Response Theory (IRT) and Computerised Adaptive Testing (CAT) frameworks to construct, calibrate and validate a new adaptive, computerised, and open-source VSWM test. We aimed to overcome the limitations of previous instruments and provide researchers with a valid and freely available VSWM measurement tool. The Jack and Jill (JaJ) VSWM task was constructed using explanatory item response modelling of data from a sample of the general adult population (Study 1, N = 244) in the UK and US. Subsequently, a static version of the task was tested for validity and reliability using a sample of adults from the UK and Australia (Study 2, N = 148) and a sample of Russian adolescents (Study 3, N = 263). Finally, the adaptive version of the JaJ task was implemented on the basis of the underlying IRT model and evaluated with another sample of Russian adolescents (Study 4, N = 239). JaJ showed sufficient internal consistency and concurrent validity as indicated by significant and substantial correlations with established measures of working memory, spatial ability, non-verbal intelligence, and academic achievement. The findings suggest that JaJ is an efficient and reliable measure of VSWM from adolescent to adult age.","link":"/opendata/tsigeman-et-al-2022/"},{"title":"Tsuji & Imaizumi (2022)","text":"Sense of agency (SoA) is a feeling of controlling one’s own action. Recognition memory can improve for stimuli that involve SoA perhaps because of the self-reference effect. Recognition memory consists of recollection (i.e., detailed memory of stimuli) and familiarity (i.e., a feeling that stimuli are in memory). 
The self-reference effect is often observed in the recollection. Here, we investigated whether SoA particularly improves the recollection process. Participants pressed a key to produce an outcome (i.e., box movement followed by word presentation in Experiment 1 or word presentation in Experiment 2) and rated their SoA over the outcome. The outcome was spatially congruent or incongruent with the action. The participants learned the words intentionally (Experiment 1) or incidentally (Experiment 2). Performances of recollection and familiarity were assessed using the remember/know procedure. Our results suggest that the participants’ SoA was successfully manipulated. However, contrary to our hypothesis and previous findings, we found no effects of voluntary action and action-outcome congruence on recollection and familiarity processes of recognition memory, regardless of the latency of word presentation and learning strategies. Further studies are needed to replicate and elucidate the relationship between the SoA and recognition memory.","link":"/opendata/tsuji-imaizumi-2022/"},{"title":"Turan et al. (2023)","text":"Generating predictions about environmental regularities, relying on these predictions, and updating these predictions when there is a violation from incoming sensory evidence are considered crucial functions of our cognitive system for being adaptive in the future. The violation of a prediction can result in a prediction error (PE) which affects subsequent memory processing. In our preregistered studies, we examined the effects of different levels of PE on episodic memory. Participants were asked to generate predictions about the associations between sequentially presented cue-target pairs, which were violated later with individual items in three PE levels, namely low, medium, and high PE. Hereafter, participants were asked to provide old/new judgments on the items with confidence ratings, and to retrieve the paired cues. 
Our results indicated a better recognition memory for low PE than medium and high PE levels, suggesting a memory congruency effect. On the other hand, there was no evidence of memory benefit for high PE level. Together, these novel and coherent findings strongly suggest that high PE does not guarantee better memory.","link":"/opendata/turan-et-al-2023/"},{"title":"Turi et al. (2017)","text":"According to the placebo-reward hypothesis, placebo is a reward-anticipation process that increases midbrain dopamine (DA) levels. Reward-based learning processes, such as reinforcement learning, involve a large part of the DA-ergic network that is also activated by the placebo intervention. Given the neurochemical overlap between placebo and reward learning, we investigated whether verbal instructions in conjunction with a placebo intervention are capable of enhancing reward learning in healthy individuals by using a monetary reward-based reinforcement-learning task. Placebo intervention was performed with non-invasive brain stimulation techniques. In a randomized, triple-blind, cross-over study we investigated this cognitive placebo effect in healthy individuals by manipulating the participants’ perceived uncertainty about the intervention’s efficacy. Volunteers in the purportedly low- and high-uncertainty conditions earned more money, responded more quickly and had a higher learning rate from monetary rewards relative to baseline. Participants in the purportedly high-uncertainty conditions showed enhanced reward learning, and a model-free computational analysis revealed a higher learning rate from monetary rewards compared to the purportedly low-uncertainty and baseline conditions. Our results indicate that the placebo response is able to enhance reward learning in healthy individuals, opening up exciting avenues for future research in placebo effects on other cognitive functions.","link":"/opendata/turi-et-al-2017/"},{"title":"Tylen et al. 
(2023)","text":"Capacities for abstract thinking and problem-solving are central to human cognition. Processes of abstraction allow the transfer of experiences and knowledge between contexts helping us make informed decisions in new or changing contexts. While we are often inclined to relate such reasoning capacities to individual minds and brains, they may in fact be contingent on human-specific modes of collaboration, dialogue, and shared attention. In an experimental study, we test the hypothesis that social interaction enhances cognitive processes of rule-induction, which in turn improves problem-solving performance. Through three sessions of increasing complexity, individuals and groups were presented with a problem-solving task requiring them to categorize a set of visual stimuli. To assess the character of participants’ problem representations, after each training session, they were presented with a transfer task involving stimuli that differed in appearance, but shared relations among features with the training set. Besides, we compared participants’ categorization behaviors to simulated agents relying on exemplar learning. We found that groups performed superior to individuals and agents in the training sessions and were more likely to correctly generalize their observations in the transfer phase, especially in the high complexity session, suggesting that groups more effectively induced underlying categorization rules from the stimuli than individuals and agents. Crucially, variation in performance among groups was predicted by semantic diversity in members’ dialogical contributions, suggesting a link between social interaction, cognitive diversity, and abstraction.","link":"/opendata/tylen-et-al-2023/"},{"title":"Tymula et al. 
(2023)","text":"Research in the multidisciplinary field of neuroeconomics has mainly been driven by two influential theories regarding human economic choice: prospect theory, which describes decision-making under risk, and reinforcement learning theory, which describes learning for decision-making. We hypothesized that these two distinct theories guide decision-making in a comprehensive manner. Here, we propose and test a decision-making theory under uncertainty that combines these highly influential theories. Collecting many gambling decisions from laboratory monkeys allowed for reliable testing of our model and revealed a systematic violation of prospect theory’s assumption that probability weighting is static. Using the same experimental paradigm in humans, substantial similarities between these species were uncovered by various econometric analyses of our dynamic prospect theory model, which incorporates decision-by-decision learning dynamics of prediction errors into static prospect theory. Our model provides a unified theoretical framework for exploring a neurobiological model of economic choice in human and nonhuman primates.","link":"/opendata/tymula-et-al-2023/"},{"title":"Unger & Sloutsky (2022)","text":"Our knowledge of the world is populated with categories such as dogs, cups, and chairs. Such categories shape how we perceive, remember, and reason about their members. Much of our exposure to the entities we come to categorize occurs incidentally as we experience and interact with them in our everyday lives, with limited access to explicit teaching. This research investigated whether incidental exposure contributes to building category knowledge by rendering people “ready to learn”-allowing them to rapidly capitalize on brief access to explicit teaching. Across five experiments (N = 438 adults), we found that incidental exposure did produce a ready-to-learn effect, even when learners showed no evidence of robust category learning during exposure. 
Importantly, this readiness to learn occurred only when categories possessed a rich structure in which many features were correlated within categories. These findings offer a window into how our everyday experiences may contribute to building category knowledge.","link":"/opendata/unger-sloutsky-2022/"},{"title":"Urai et al. (2017)","text":"While judging their sensory environments, decision-makers seem to use the uncertainty about their choices to guide adjustments of their subsequent behaviour. One possible source of these behavioural adjustments is arousal: decision uncertainty might drive the brain's arousal systems, which control global brain state and might thereby shape subsequent decision-making. Here, we measure pupil diameter, a proxy for central arousal state, in human observers performing a perceptual choice task of varying difficulty. Pupil dilation, after choice but before external feedback, reflects three hallmark signatures of decision uncertainty derived from a computational model. This increase in pupil-linked arousal boosts observers' tendency to alternate their choice on the subsequent trial. We conclude that decision uncertainty drives rapid changes in pupil-linked arousal state, which shape the serial correlation structure of ongoing choice behaviour.","link":"/opendata/urai-et-al-2017/"},{"title":"van-Baar et al. (2019)","text":"Individuals employ different moral principles to guide their social decision-making, thus expressing a specific ‘moral strategy’. Which computations characterize different moral strategies, and how might they be instantiated in the brain? Here, we tackle these questions in the context of decisions about reciprocity using a modified Trust Game. We show that different participants spontaneously and consistently employ different moral strategies. 
By mapping an integrative computational model of reciprocity decisions onto brain activity using inter-subject representational similarity analysis of fMRI data, we find markedly different neural substrates for the strategies of ‘guilt aversion’ and ‘inequity aversion’, even under conditions where the two strategies produce the same choices. We also identify a new strategy, ‘moral opportunism’, in which participants adaptively switch between guilt and inequity aversion, with a corresponding switch observed in their neural activation patterns. These findings provide valuable insight into how different individuals may utilize different moral principles.","link":"/opendata/van-baar-et-al-2019/"},{"title":"van-den-Berg et al. (2023)","text":"Previous work has shown that humans distribute their visual working memory (VWM) resources flexibly across items: the higher the importance of an item, the better it is remembered. A related, but much less studied question is whether people also have control over the total amount of VWM resource allocated to a task. Here, we approach this question by testing whether increasing monetary incentives results in better overall VWM performance. In three experiments, subjects performed a delayed-estimation task on the Amazon Turk platform. In the first two experiments, four groups of subjects received a bonus payment based on their performance, with the maximum bonus ranging from $0 to $10 between groups. We found no effect of the amount of bonus on intrinsic motivation or on VWM performance in either experiment. In the third experiment, reward was manipulated on a trial-by-trial basis using a within-subjects design. Again, no evidence was found that VWM performance depended on the magnitude of potential reward. 
These results suggest that encoding quality in visual working memory is insensitive to monetary reward, which has implications for resource-rational theories of VWM.","link":"/opendata/van-den-berg-et-al-2023/"},{"title":"van-Lieshout et al. (2021)","text":"You probably know what kind of things you are curious about, but can you also explain what it feels like to be curious? Previous studies have demonstrated that we are particularly curious when uncertainty is high and when information provides us with a substantial update of what we know. It is unclear, however, whether this drive to seek information (curiosity) is appetitive or aversive. Curiosity might correspond to an appetitive drive elicited by the state of uncertainty, because we like that state, or rather it might correspond to an aversive drive to reduce the state of uncertainty, because we don’t like it. To investigate this, we obtained both subjective valence (happiness) and curiosity ratings from subjects who performed a lottery task that elicits uncertainty-dependent curiosity. We replicated a strong main effect of outcome uncertainty on curiosity: Curiosity increased with outcome uncertainty, irrespective of whether the outcome represented a monetary gain or loss. By contrast, happiness decreased with higher outcome uncertainty. This indicates that people were more curious, but less happy about lotteries with higher outcome uncertainty. These findings raise the hypothesis, to be tested in future work, that curiosity reflects an aversive drive to reduce the unpleasant state of uncertainty.","link":"/opendata/van-lieshout-et-al-2021/"},{"title":"van Baar et al. (2021)","text":"Predicting the behaviour of others is an essential part of social cognition. Despite its ubiquity, social prediction poses a poorly understood generalization problem: we cannot assume that others will repeat past behaviour in new settings or that their future actions are entirely unrelated to the past. 
We demonstrate that humans solve this challenge using a structure learning mechanism that uncovers other people's latent, unobservable motives, such as greed and risk aversion. In four studies, participants (N = 501) predicted other players' decisions across four economic games, each with different social tensions (for example, Prisoner's Dilemma and Stag Hunt). Participants achieved accurate social prediction by learning the stable motivational structure underlying a player's changing actions across games. This motive-based abstraction enabled participants to attend to information diagnostic of the player's next move and disregard irrelevant contextual cues. Participants who successfully learned another's motives were more strategic in a subsequent competitive interaction with that player in entirely new contexts, reflecting that social structure learning supports adaptive social behaviour.","link":"/opendata/van-baar-et-al-2021/"},{"title":"Vandendriessche et al. (2022)","text":"Value-based decision-making impairment in depression is a complex phenomenon: while some studies did find evidence of blunted reward learning and reward-related signals in the brain, others indicate no effect. Here we test whether such reward sensitivity deficits are dependent on the overall value of the decision problem. We used a two-armed bandit task with two different contexts: one rich and one poor, in which both options were associated with an overall positive or negative expected value, respectively. We tested patients (N = 30) undergoing a major depressive episode and age, gender and socio-economically matched controls (N = 26). Learning performance, followed by a transfer phase without feedback, was analyzed to disentangle a decision-process mechanism from a value-update mechanism. Finally, we used computational model simulation and fitting to link behavioral patterns to learning biases. 
Control subjects showed similar learning performance in the rich and the poor contexts, while patients displayed reduced learning in the poor context. Analysis of the transfer phase showed that the context-dependent impairment in patients generalized, suggesting that the effect of depression has to be traced to the outcome encoding. Computational model-based results showed that patients displayed a higher learning rate for negative compared to positive outcomes (the opposite was true in controls). Our results illustrate that reinforcement learning performance in depression depends on the value of the context. We show that depressive patients have particular difficulty in contexts with an overall negative state value, which in our task is consistent with a negativity bias at the level of the learning rates.","link":"/opendata/vandendriessche-et-al-2022/"},{"title":"van Timmeren et al. (2022)","text":"Despite our familiarity with the concept of habits, eliciting and measuring habits experimentally in humans has proven to be difficult. A possible explanation is that participants in psychological experiments actively recruit goal-directed control and therefore make few habitual slips-of-action in the presence of stimuli signalling devalued outcomes. In the current experiment we used the symmetrical outcome-revaluation task in combination with a working memory load in an attempt to tip the balance from goal-directed control to stimulus-response habit. During the instrumental learning phase, participants learned to make a Go response to stimuli signalling valuable outcomes (and points) while not responding (NoGo) to stimuli signalling not-valuable outcomes. During the subsequent test phase, the outcomes signalled by the stimuli were either value-congruent with training (still-valuable and still-not-valuable), or value-incongruent (devalued and upvalued). 
Participants had to flexibly adjust their behaviour on value-incongruent trials where the stimulus-response association learned during training was no longer appropriate. For half the participants, a concurrent working memory load was imposed during the test phase. In line with our preregistered hypotheses, participants showed evidence for habitual slips-of-action but those under working memory load showed increased habit tendencies (specifically failures to inhibit prepotent Go responses in the presence of stimuli signalling devalued outcomes). This central finding suggests that a working memory load can be used to reveal habits in humans.","link":"/opendata/van-timmeren-et-al-2022/"},{"title":"Vellani et al. (2022)","text":"Individual differences in behaviour, traits, and mental health are partially heritable. Traditionally, studies have focused on quantifying the heritability of high-order characteristics, such as happiness or educational attainment. Here, we quantify the degree of heritability of lower-level mental processes that likely contribute to complex traits and behaviour. In particular, we quantify the degree of heritability of cognitive and affective factors that contribute to the generation of beliefs about risk, which drive behavior in domains ranging from finance to health. Monozygotic and dizygotic twin pairs completed a belief formation task. We first show that beliefs about risk are associated with vividness of imagination, affective evaluation and learning abilities. We then demonstrate that the genetic contribution to individual differences in these processes ranges between 13.5 and 39%, with affect evaluation showing a particularly robust heritability component. These results provide clues to which mental factors may be driving the heritability component of belief formation, which in turn contributes to the heritability of complex traits.","link":"/opendata/vellani-et-al-2022/"},{"title":"Verdejo-Garcia et al. 
(2021)","text":"Impulsive behaviours are a major contributor to the global burden of disease, but existing measures of cognitive impulsivity have suboptimal reliability and validity. Here, we introduce the Cognitive Impulsivity Suite, comprising three computerized/online tasks using a gamified interface. We conceptualize rapid-response impulsive behaviours (disinhibition) as arising from the failure of three distinct cognitive mechanisms: attentional control, information gathering and monitoring/shifting. We demonstrate the construct and criterion validity of the Cognitive Impulsivity Suite in an online community sample (N = 1,056), show test-retest reliability and between-subjects variability in a face-to-face community sample (N = 63), and replicate the results in a community and clinical sample (N = 578). The results support the theoretical architecture of the attentional control, information gathering and monitoring/shifting constructs. The Cognitive Impulsivity Suite demonstrated incremental criterion validity for prediction of real-world, addiction-related problems and is a promising tool for large-scale research on cognitive impulsivity.","link":"/opendata/verdejo-garcia-et-al-2021/"},{"title":"Vieira & Olsson (2022)","text":"Empathy for others distress has long been considered the driving force of helping. However, when deciding to help others in danger, one must consider not only their distress, but also the risk to oneself. Whereas the role of self-defense in helping has been overlooked in human research, studies in other animals indicate defensive responses are necessary for the protection of conspecifics. In this pre-registered study (N=49), we demonstrate that human defensive neural circuits are implicated in helping others under threat. Participants underwent fMRI scanning while deciding whether to help another participant avoid aversive electrical shocks, at the risk of also being shocked. 
We found that higher engagement of neural circuits that coordinate fast escape from self-directed danger (including the insula, PAG, and ACC) facilitated decisions to help others. Importantly, using representational similarity analysis, we found that the strength with which the amygdala and insula uniquely represented the threat to oneself (and not the other's distress) predicted helping. Our findings indicate that in humans, as in other mammals, defensive mechanisms play a greater role in helping behavior than previously understood.","link":"/opendata/vieira-olsson-2022/"},{"title":"Vilgis et al. (2022)","text":"Working memory deficits are common in attention-deficit/hyperactivity disorder (ADHD) and depression, two common neurodevelopmental disorders with overlapping cognitive profiles but distinct clinical presentation. Multivariate techniques have previously been utilized to understand working memory processes in functional brain networks in healthy adults but have not yet been applied to investigate how working memory processes within the same networks differ within typically and atypically developing populations. We used multivariate pattern analysis (MVPA) to identify whether brain networks discriminated between spatial versus verbal working memory processes in ADHD and Persistent Depressive Disorder (PDD). Thirty-six male clinical participants and 19 typically developing (TD) boys participated in an fMRI scan while completing a verbal and a spatial working memory task. Within a priori functional brain networks (frontoparietal, default mode, salience), the TD group demonstrated differential response patterns to verbal and spatial working memory. The PDD group showed weaker differentiation than TD, with lower classification accuracies observed in primarily the left frontoparietal network. 
The neural profiles of the ADHD and PDD groups differed specifically in the salience network (SN), where the ADHD group's neural profile suggests significantly less specificity in neural representations of spatial and verbal working memory. We highlight within-group classification as an innovative tool for understanding the neural mechanisms of how cognitive processes may deviate in clinical disorders, an important intermediary step towards improving translational psychiatry.","link":"/opendata/vilgis-et-al-2022/"},{"title":"Visalli et al. (2022)","text":"Evidence is discordant regarding how emotional processing and cognitive control interact to shape behavior. This study sought to examine this interaction by looking at the distinction between proactive and reactive modes of control and how they relate to emotional processing. Seventy-four healthy participants performed an emotional priming Stroop task. On each trial, target stimuli of a spatial Stroop task were preceded by sad or neutral facial expressions, providing two emotional conditions. To manipulate the requirement of both proactive and reactive control, the proportion of congruent trials (PC) was varied at the list-wide (LWPC) and item-specific (ISPC) levels, respectively. We found that sad priming led to behavioral costs only in trials with low proactive and reactive cognitive control exertion. Our findings suggest that emotional processing affects processing stages other than cognitive control in the Stroop task. Moreover, both proactive and reactive control modes seem effective in overcoming emotional interference of priming stimuli.","link":"/opendata/visalli-et-al-2022/"},{"title":"Vives et al. (2023)","text":"Decisions made under uncertainty often are considered according to their perceived subjective value. We move beyond this traditional framework to explore the hypothesis that conceptual representations of uncertainty influence risky choice. 
Results reveal that uncertainty concepts are represented along a dimension that jointly captures probabilistic and valenced features of the conceptual space. These uncertainty representations predict the degree to which an individual engages in risky decision-making. Moreover, we find that most individuals have two largely distinct representations: one for uncertainty and another for certainty. In contrast, a minority of individuals exhibit substantial overlap between their representations of uncertainty and certainty. Together, these findings reveal the relationship between the conceptualization of uncertainty and risky decisions.","link":"/opendata/vives-et-al-2023/"},{"title":"Viviani et al. (2023)","text":"The spatial Stroop task measures the ability to resolve interference between relevant and irrelevant spatial information. We recently proposed a four-choice spatial Stroop task that ensures methodological advantages over the original color-word verbal Stroop task, requiring participants to indicate the direction of an arrow while ignoring its position in one of the screen corners. However, its peripheral spatial arrangement might represent a methodological weakness and could introduce experimental confounds. Thus, aiming at improving our “Peripheral” spatial Stroop, we designed and made available five novel spatial Stroop tasks (Perifoveal, Navon, Figure-Ground, Flanker, and Saliency), wherein the stimuli appeared at the center of the screen. In a within-subjects online study, we compared the six versions to identify which task produced the largest but also the most reliable and robust Stroop effect. Indeed, although internal reliability is frequently overlooked, its estimate is fundamental, also in light of the recently proposed reliability paradox. 
Data analyses were performed using both the classical general linear model analytical approach and two multilevel modelling approaches (linear mixed models and random coefficient analysis), which served to estimate the Stroop effect more accurately by explaining intra-subject, trial-by-trial variability. We then assessed our results based on their robustness to such analytic flexibility. Overall, our results indicate that the Perifoveal spatial Stroop is the best alternative task for its statistical properties and methodological advantages. Interestingly, our results also indicate that the Peripheral and Perifoveal Stroop effects were not only the largest, but also those with the highest and most robust internal reliability.","link":"/opendata/viviani-et-al-2023/"},{"title":"von Clarenau et al. (2022)","text":"People routinely make decisions based on samples of numerical values. A common conclusion from the literature in psychophysics and behavioral economics is that observers subjectively compress magnitudes, such that extreme values have less sway over choice than prescribed by a normative model (underweighting). However, recent studies have reported evidence for anti-compression, that is, the relative overweighting of extreme values. Here, we investigate potential reasons for this discrepancy in findings and examine the possibility that it reflects adaptive responses to different task requirements. We performed a large-scale study (N = 607) of sequential numerical integration, manipulating (i) the task requirement (averaging a single stream or comparing two streams of numbers), (ii) the distribution of sample values (uniform or Gaussian), and (iii) their range (1 to 9 or 100 to 900). The data showed compression of subjective values in the averaging task, but anti-compression in the comparison task. This pattern held for both distribution types and for both ranges. 
The findings are consistent with model simulations showing that either compression or anti-compression can be beneficial for noisy observers, depending on the sample-level processing demands imposed by the task.","link":"/opendata/von-clarenau-et-al-2022/"},{"title":"Voulgaropoulou et al. (2022)","text":"Humans are continuously exposed to stressful challenges in everyday life. Such stressful events trigger a complex physiological reaction - the fight-or-flight response - that can hamper flexible decision-making and learning. Inspired by key neural and peripheral characteristics of the fight-or-flight response, here, we ask whether acute stress changes how humans learn about costs and benefits. Healthy adults were randomly exposed to an acute stress (age mean=23.48, 21/40 female) or no-stress control (age mean=23.80, 22/40 female) condition, after which they completed a reinforcement learning task in which they minimize cost (physical effort) and maximize benefits (monetary rewards). During the task, pupillometry data were collected. A computational model of cost-benefit reinforcement learning was employed to investigate the effect of acute stress on cost and benefit learning and decision-making. Acute stress improved learning to maximize rewards relative to minimizing physical effort (Condition-by-Trial Type interaction: F(1,78) = 6.53, p = 0.01, η2G = 0.04) [reward > effort in stress condition: t(39) = 5.40, p < 0.001; αE > αR in control condition: t(39) = -4.75, p < 0.001]. This process was associated with distinct alterations in pupil size fluctuations. Data and scripts are available (https://osf.io/ydv2q/). Here we demonstrate that acute stress is associated with asymmetric learning about reward value versus action cost, thereby providing new insights into learning strategies under acute stress, which, depending on the context, may be maladaptive or beneficial. 
Our pupillometry and physiological results tentatively link asymmetric cost and benefit learning to stress-related changes in catecholamine activity.","link":"/opendata/voulgaropoulou-et-al-2022/"},{"title":"Vuletich & Payne (2019)","text":"Can implicit bias be changed? In a recent longitudinal study, Lai and colleagues (2016, Study 2) compared nine interventions intended to reduce racial bias across 18 university campuses. Although all interventions changed participants’ bias on an immediate test, none were effective after a delay. This study has been interpreted as strong evidence that implicit biases are difficult to change. We revisited Lai et al.’s study to test whether the stability observed reflected persistent individual attitudes or stable environments. Our reanalysis (N = 4,842) indicates that individual biases did not return to preexisting levels. Instead, campus means returned to preexisting campus means, whereas individual scores fluctuated mostly randomly. Campus means were predicted by markers of structural inequality. Our results are consistent with the theory that implicit bias reflects biases in the environment rather than individual dispositions. This conclusion is nearly the opposite of the original interpretation: Although social environments are stable, individual implicit biases are ephemeral.","link":"/opendata/vuletich-payne-2019/"},{"title":"Wall et al. (2023)","text":"Predictive inference is an important cognitive function, and there are many tasks that measure it and the error-driven learning that underpins it. Context is a key contributor to this learning, with different contexts requiring different learning strategies. A factor not often considered, however, is the conditions and time-frame over which a model of that context is developed. This study required participants to learn under two changing, unsignalled contexts with opposing optimal responses to large errors - change-points and oddballs. 
The changes in context occurred under two task structures: 1) a fixed task structure, with consecutive, short blocks of each context, and 2) a random task structure, with the context randomly selected for each new block. Through this design we examined the conditions under which learning contexts can be differentiated from each other, and the time-frame over which that learning occurs. We found that participants responded in accordance with the optimal strategy for each context, and did so within a short period of time, over very few meaningful errors. We further found that the responses became more optimal throughout the experiment, but only for periods of context consistency (the fixed task structure), and if the first experienced context involved meaningful errors. These results show that people will continue to refine their model of the environment across multiple trials and blocks, leading to more context-appropriate responding - but only in certain conditions. This highlights the importance of considering the task structure, and the time-frames of model development that those patterns may encourage. This has implications for interpreting differences in learning across different contexts.","link":"/opendata/wall-et-al-2023/"},{"title":"Waltmann et al. (2022a)","text":"Task-based measures that capture neurocognitive processes can help bridge the gap between brain and behavior. To transfer tasks to clinical application, reliability is a crucial benchmark because it imposes an upper bound on potential correlations with other variables (e.g., symptom or brain data). However, the reliability of many task readouts is low. In this study, we scrutinized the retest reliability of a probabilistic reversal learning task (PRLT) that is frequently used to characterize cognitive flexibility in psychiatric populations. We analyzed data from N = 40 healthy subjects, who completed the PRLT twice. 
We focused on how individual metrics are derived, i.e., whether data were partially pooled across participants and whether priors were used to inform estimates. We compared the reliability of the resulting indices across sessions, as well as the internal consistency of a selection of indices. We found good to excellent reliability for behavioral indices as derived from mixed-effects models that included data from both sessions. The internal consistency was good to excellent. For indices derived from computational modeling, we found excellent reliability when using hierarchical estimation with empirical priors and including data from both sessions. Our results indicate that the PRLT is well equipped to measure individual differences in cognitive flexibility in reinforcement learning. However, this depends heavily on hierarchical modeling of the longitudinal data (whether sessions are modeled separately or jointly), on estimation methods, and on the combination of parameters included in computational models. We discuss implications for the applicability of PRLT indices in psychiatric research and as diagnostic tools.","link":"/opendata/waltmann-et-al-2022a/"},{"title":"Waltmann et al. (2022b)","text":"Precisely charting the maturation of core neurocognitive functions such as reinforcement learning (RL) and flexible adaptation to changing action-outcome contingencies is key for developmental neuroscience and adjacent fields like developmental psychiatry. However, research in this area is both sparse and conflicted, especially regarding potentially asymmetric development of learning for different motives (obtain wins vs avoid losses) and learning from valenced feedback (positive vs negative). In the current study, we investigated the development of RL from adolescence to adulthood, using a probabilistic reversal learning task modified to experimentally separate motivational context and feedback valence, in a sample of 95 healthy participants between 12 and 45. 
We show that adolescence is characterized by enhanced novelty seeking and response shifting especially after negative feedback, which leads to poorer returns when reward contingencies are stable. Computationally, this is accounted for by reduced impact of positive feedback on behavior. We also show, using fMRI, that activity of the medial frontopolar cortex reflecting choice probability is attenuated in adolescence. We argue that this can be interpreted as reflecting diminished confidence in upcoming choices. Interestingly, we find no age-related differences between learning in win and loss contexts.","link":"/opendata/waltmann-et-al-2022b/"},{"title":"Waltmann et al. (2023a)","text":"Precisely charting the maturation of core neurocognitive functions such as reinforcement learning (RL) and flexible adaptation to changing action-outcome contingencies is key for developmental neuroscience and adjacent fields like developmental psychiatry. However, research in this area is both sparse and conflicted, especially regarding potentially asymmetric development of learning for different motives (obtain wins vs avoid losses) and learning from valenced feedback (positive vs negative). In the current study, we investigated the development of RL from adolescence to adulthood, using a probabilistic reversal learning task modified to experimentally separate motivational context and feedback valence, in a sample of 95 healthy participants between 12 and 45. We show that adolescence is characterized by enhanced novelty seeking and response shifting especially after negative feedback, which leads to poorer returns when reward contingencies are stable. Computationally, this is accounted for by reduced impact of positive feedback on behavior. We also show, using fMRI, that activity of the medial frontopolar cortex reflecting choice probability is attenuated in adolescence. We argue that this can be interpreted as reflecting diminished confidence in upcoming choices. 
Interestingly, we find no age-related differences between learning in win and loss contexts.","link":"/opendata/waltmann-et-al-2023a/"},{"title":"Waltmann et al. (2023b)","text":"Binge Eating Disorder (BED) is thought of as a disorder of cognitive control but evidence regarding its neurocognitive mechanisms is inconclusive. A key limitation in prior research is the lack of clear separation between effects of BED and obesity. Moreover, research has largely disregarded self-report evidence that neurocognitive deficits may emerge primarily in contexts focused on avoiding aversive states. Addressing these gaps, this longitudinal study investigated behavioral flexibility and its underlying neuro-computational processes in approach and avoidance contexts in normal weight individuals, obese individuals, and obese individuals with BED. Participants performed a probabilistic reversal learning task during functional imaging, with different blocks focused on obtaining wins or avoiding losses. They were reinvited for a 6-months follow-up. Analyses were informed by computational models of reinforcement learning (RL). Compared to obese individuals without BED, BED participants had relatively more difficulties to avoid losses, while obese participants without BED had relatively more difficulties to obtain rewards. Computationally, this was explained by differential learning sensitivities in the win vs loss conditions between groups. In the brain, this was mirrored in differential neural learning signals in the ventromedial prefrontal cortex (vmPFC) per condition. The condition-specific behavioral and neural differences were subtle, but strikingly scaled with BED symptoms between and within subjects. Compared to normal weight controls, obese participants without BED switched more between choice options. This was reflected in diminished representation of choice certainty in the vmPFC. 
Hence, RL alterations in obesity with and without BED may be qualitatively different: our finding that BED was associated with relative RL deficits in loss-avoidance contexts aligns with the notion that in BED, neurocognitive alterations may selectively emerge in aversive states. Our study thus highlights the importance of distinguishing between obesity with and without BED to identify unique neuro-computational alterations underlying different styles of maladaptive eating behavior.","link":"/opendata/waltmann-et-al-2023b/"},{"title":"Wamsley et al. (2023)","text":"Traditionally, neuroscience and psychology have studied the human brain during periods of “online” attention to the environment, while participants actively engage in processing sensory stimuli. But emerging evidence shows that the waking brain also intermittently enters an “offline” state, during which sensory processing is inhibited and our attention shifts inward. In fact, humans may spend up to half of their waking hours offline (Killingsworth & Gilbert, 2010; Wamsley & Summer, 2020). The function of alternating between online and offline forms of wakefulness remains unknown. We hypothesized that rapidly switching between online and offline states enables the brain to alternate between the competing demands of encoding new information and consolidating already-encoded information. N=46 participants (34 female) trained on a memory task just prior to a 30min retention interval, during which they completed a simple attention task while undergoing simultaneous high-density EEG (electroencephalography) and pupillometry recording. We used a data-driven method to parse this retention interval into a sequence of discrete online and offline states, with 5sec temporal resolution. 
We found evidence for three distinct states, one of which was an offline state with features well-suited to support memory consolidation, including increased EEG slow oscillation power, reduced attention to the external environment, and increased pupil diameter (a proxy for increased norepinephrine). Participants who spent more time in this offline state following encoding showed improved memory at delayed test. These observations are consistent with the hypothesis that even brief, seconds-long entry into an offline state may support the early stages of memory consolidation.","link":"/opendata/wamsley-et-al-2023/"},{"title":"Wan et al. (2022)","text":"How does the brain prioritize among the contents of working memory (WM) to appropriately guide behavior? Previous work, employing inverted encoding modeling (IEM) of electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) datasets, has shown that unprioritized memory items (UMI) are actively represented in the brain, but in a “flipped”, or opposite, format compared to prioritized memory items (PMI). To acquire independent evidence for such a priority-based representational transformation, and to explore underlying mechanisms, we trained recurrent neural networks (RNNs) with a long short-term memory (LSTM) architecture to perform a 2-back WM task. Visualization of LSTM hidden layer activity using Principal Component Analysis (PCA) confirmed that stimulus representations undergo a representational transformation-consistent with a flip-while transitioning from the functional status of UMI to PMI. Demixed (d)PCA of the same data identified two representational trajectories, one each within a UMI subspace and a PMI subspace, both undergoing a reversal of stimulus coding axes. dPCA of data from an EEG dataset also provided evidence for priority-based transformations of the representational code, albeit with some differences. 
This type of transformation could allow for retention of unprioritized information in WM while preventing it from interfering with concurrent behavior. The results from this initial exploration suggest that the algorithmic details of how this transformation is carried out by RNNs, versus by the human brain, may differ.","link":"/opendata/wan-et-al-2022/"},{"title":"Wang et al. (2023)","text":"The ability to respond flexibly to an ever-changing environment relies on the orbitofrontal cortex (OFC). However, how the OFC associates sensory information with predicted outcomes to enable flexible sensory learning in humans remains elusive. Here, we combine a probabilistic tactile reversal learning task with functional magnetic resonance imaging (fMRI) to investigate how lateral OFC (lOFC) interacts with the primary somatosensory cortex (S1) to guide flexible tactile learning in humans. fMRI results reveal that lOFC and S1 exhibit distinct task-dependent engagement: while the lOFC responds transiently to unexpected outcomes immediately following reversals, S1 is persistently engaged during re-learning. Unlike the contralateral stimulus-selective S1, activity in ipsilateral S1 mirrors the outcomes of behavior during re-learning, closely related to top-down signals from lOFC. These findings suggest that lOFC contributes to teaching signals to dynamically update representations in sensory areas, which implement computations critical for adaptive behavior.","link":"/opendata/wang-et-al-2023/"},{"title":"Wang & Navarro-Martinez (2023)","text":"An increasing number of studies call into question the external validity of social preference games. In this paper, we show that these games have a low correlation with single pro-social behaviors in the field, but this correlation can be substantially increased by aggregating behaviors to reduce measurement error. 
We tracked people’s daily pro-social behaviors for 14 days using a day reconstruction method; the same people played three different social preference games on seven different occasions. We show that, as more pro-social behaviors and game rounds are aggregated, the games become much better predictors of pro-sociality. This predictive power is further increased by using statistical methods designed to better account for measurement error. These findings suggest that social preference games capture important underlying dispositions of real-world pro-sociality, and they can be successfully used to predict aggregated pro-social inclinations. This has crucial implications for the external validity and applicability of economic games.","link":"/opendata/wang-navarro-martinez-2023/"},{"title":"Warren et al. (2017)","text":"The adaptive regulation of the trade-off between pursuing a known reward (exploitation) and sampling lesser-known options in search of something better (exploration) is critical for optimal performance. Theory and recent empirical work suggest that humans use at least two strategies for solving this dilemma: a directed strategy in which choices are explicitly biased toward information seeking, and a random strategy in which decision noise leads to exploration by chance. Here we examined the hypothesis that random exploration is governed by the neuromodulatory locus coeruleus-norepinephrine system. We administered atomoxetine, a norepinephrine transporter blocker that increases extracellular levels of norepinephrine throughout the cortex, to 22 healthy human participants in a double-blind crossover design. We examined the effect of treatment on performance in a gambling task designed to produce distinct measures of directed exploration and random exploration. In line with our hypothesis we found an effect of atomoxetine on random, but not directed exploration. However, contrary to expectation, atomoxetine reduced rather than increased random exploration. 
We offer three potential explanations of our findings, involving the non-linear relationship between tonic NE and cognitive performance, the interaction of atomoxetine with other neuromodulators, and the possibility that atomoxetine affected phasic norepinephrine activity more so than tonic norepinephrine activity.","link":"/opendata/warren-et-al-2017/"},{"title":"Watson et al. (2019)","text":"Physically salient but task-irrelevant distractors can capture attention in visual search, but resource-dependent, executive-control processes can help reduce this distraction. However, it is not only physically salient stimuli that grab our attention: Recent research has shown that reward history also influences the likelihood that stimuli will capture attention. Here, we investigated whether resource-dependent control processes modulate the effect of reward on attentional capture, much as for the effect of physical salience. To this end, we used eye tracking with a rewarded visual search task and compared performance under conditions of high and low working memory load. In two experiments, we demonstrated that oculomotor capture by high-reward distractor stimuli is enhanced under high memory load. These results highlight the role of executive-control processes in modulating distraction by reward-related stimuli. Our findings have implications for understanding the neurocognitive processes involved in real-life conditions in which reward-related stimuli may influence behavior, such as addiction.","link":"/opendata/watson-et-al-2019/"},{"title":"Watson et al. (2022)","text":"The translation of the outcome-devaluation paradigm to study habit in humans has yielded interesting insights but proven to be challenging. We present a novel, outcome-revaluation task with a symmetrical design, in the sense that half of the available outcomes are always valuable and the other half not-valuable. 
In the present studies, during the instrumental learning phase, participants learned to respond (Go) to certain stimuli to collect valuable outcomes (and points) while refraining from responding (NoGo) to stimuli signaling not-valuable outcomes. Half of the stimuli were short-trained, while the other half were long-trained. Subsequently, in the test phase, the signaled outcomes were either value-congruent with training (still-valuable and still-not-valuable), or value-incongruent (devalued and upvalued). The change in outcome value on value-incongruent trials meant that participants had to flexibly adjust their behavior. At the end of the training phase, participants completed the self-report behavioral automaticity index - providing an automaticity score for each stimulus-response association. We conducted two experiments using this task, both of which provided evidence for stimulus-driven habits as reflected in poorer performance on devalued and upvalued trials relative to still-not-valuable trials and still-valuable trials, respectively. While self-reported automaticity increased with longer training, behavioral flexibility was not affected. After extended training (Experiment 2), higher levels of self-reported automaticity when responding to stimuli signaling valuable outcomes were related to more slips of action when the associated outcome was subsequently devalued. We conclude that the symmetrical outcome-revaluation task provides a promising paradigm for the experimental investigation of habits in humans.","link":"/opendata/watson-et-al-2022/"},{"title":"Weber et al. (2022)","text":"Across species, animals have an intrinsic drive to approach appetitive stimuli and to withdraw from aversive stimuli. In affective science, influential theories of emotion link positive affect with strengthened behavioural approach and negative affect with avoidance. 
Based on these theories, we predicted that individuals' positive and negative affect levels should particularly influence their behaviour when innate Pavlovian approach/avoidance tendencies conflict with learned instrumental behaviours. Here, across two experiments - exploratory Experiment 1 (N = 91) and a preregistered confirmatory Experiment 2 (N = 335) - we assessed how induced positive and negative affect influenced Pavlovian-instrumental interactions in a reward/punishment Go/No-Go task. Contrary to our hypotheses, we found no evidence for a main effect of positive/negative affect on either approach/avoidance behaviour or Pavlovian-instrumental interactions. However, we did find evidence that the effects of induced affect on behaviour were moderated by individual differences in self-reported behavioural inhibition and gender. Exploratory computational modelling analyses explained these demographic moderating effects as arising from positive correlations between demographic factors and individual differences in the strength of Pavlovian-instrumental interactions. These findings serve to sharpen our understanding of the effects of positive and negative affect on instrumental behaviour.","link":"/opendata/weber-et-al-2022/"},{"title":"Wehrman et al. (2023)","text":"Assimilation is the process by which one judgment tends to approach some aspect of another stimulus or judgment. This effect has been known for over half a century in various domains such as the judgment of weight or sound intensity. However, the assimilation of judgments of duration has been relatively unexplored. In the current article, we present the results of five experiments in which participants were required to judge the duration of a visual stimulus on each trial. In each experiment, we manipulated the pattern of durations they experienced in order to systematically separate the effects of the objective and subjective duration of stimuli on subsequent judgments. 
We found that duration judgments were primarily driven by prior judgments, with little, if any, effect of the prior objective stimulus duration. This is in contrast to the findings previously reported with regard to non-temporal judgments. We propose two mechanistic explanations of this effect: a representational account in which judgments represent the speed of an underlying pacemaker, and an assimilation account in which judgment is based on prior experience. We further discuss results in terms of predictive coding, in which the previous rating is representative of a prior expectation, which is modified by current experience.","link":"/opendata/wehrman-et-al-2023/"},{"title":"Weilbacher et al. (2020)","text":"Previous research has indicated a bias in memory-based decision-making, with people preferring options that they remember better. However, the cognitive mechanisms underlying this memory bias remain elusive. Here, we propose that choosing poorly remembered options is conceptually similar to choosing options with uncertain outcomes. We predicted that the memory bias would be reduced when options had negative subjective value, analogous to the reflection effect, according to which uncertainty aversion is stronger in gains than in losses. In two preregistered experiments (N = 36 each), participants made memory-based decisions between appetitive and aversive stimuli. People preferred better-remembered options in the gain domain, but this behavioral pattern reversed in the loss domain. This effect was not related to participants’ ambiguity or risk attitudes, as measured in a separate task. Our results increase the understanding of memory-based decision-making and connect this emerging field to well-established research on decisions under uncertainty.","link":"/opendata/weilbacher-et-al-2020/"},{"title":"Weiss et al. 
(2021)","text":"Making accurate decisions in uncertain environments requires identifying the generative cause of sensory cues, but also the expected outcomes of possible actions. Although both cognitive processes can be formalized as Bayesian inference, they are commonly studied using different experimental frameworks, making their formal comparison difficult. Here, by framing a reversal learning task either as cue-based or outcome-based inference, we found that humans perceive the same volatile environment as more stable when inferring its hidden state by interaction with uncertain outcomes than by observation of equally uncertain cues. Multivariate patterns of magnetoencephalographic (MEG) activity reflected this behavioral difference in the neural interaction between inferred beliefs and incoming evidence, an effect originating from associative regions in the temporal lobe. Together, these findings indicate that the degree of control over the sampling of volatile environments shapes human learning and decision-making under uncertainty.","link":"/opendata/weiss-et-al-2021/"},{"title":"Wester et al. (2022)","text":"The expanded version of the Inventory of Depression and Anxiety Symptoms (IDAS-II) is a 99-item self-report measure containing 18 nonoverlapping dimensional scales assessing symptoms of depression, anxiety, and mania. The aim of this study was to develop and validate a German adaptation of the IDAS-II. Participants from a community sample (N = 1,054) completed the IDAS-II (German version). In addition, part of the sample (N = 550) completed a series of additional measures of depression (Patient Health Questionnaire-9, WHO-Five Well-Being Index, Symptom Checklist-90 Revised-Short Version) and anxiety disorders (Generalized Anxiety Disorder Scale-7, Fear of Negative Evaluation Scale-5, Dimensional Obsessive-Compulsive Scale, The International Trauma Questionnaire). 
We conducted item-level confirmatory factor analyses (CFA) separately for the 15 nonsaturated IDAS-II scales, which confirmed unidimensionality. McDonald’s ω indicated good internal consistency (ω > .8) of all symptom scales except Euphoria (ω = .751) and Ordering (ω = .728). A CFA of the IDAS-II scales, based on exploratory results by Watson et al. (2012), confirmed a three-factor model of Distress, Obsessions/Fear, and Positive Mood. Correlational analyses with additional symptom measures supported the convergent and discriminant validity of the IDAS-II scales. The IDAS-II (German version) allows for a reliable assessment of the severity of depression, anxiety, and bipolar symptoms and is one of the first clinical measures for German-speaking samples that is consistent with the Hierarchical Taxonomy of Psychopathology (HiTOP).","link":"/opendata/wester-et-al-2022/"},{"title":"Widge et al. (2019)","text":"Deep brain stimulation (DBS) is a circuit-oriented treatment for mental disorders. Unfortunately, even well-conducted psychiatric DBS clinical trials have yielded inconsistent symptom relief, in part because DBS mechanism(s) of action are unclear. One clue to those mechanisms may lie in the efficacy of ventral internal capsule/ventral striatum (VCVS) DBS in both major depression (MDD) and obsessive-compulsive disorder (OCD). MDD and OCD both involve deficits in cognitive control. Cognitive control depends on prefrontal cortex (PFC) regions that project into the VCVS. Here, we show that the VCVS DBS effect is explained in part by enhancement of PFC-driven cognitive control. DBS improves human subjects’ performance on a cognitive control task and increases theta (5-8Hz) oscillations in both medial and lateral PFC. The theta increase predicts subjects’ clinical outcomes. Our results suggest a possible mechanistic approach to DBS therapy, based on tuning stimulation to optimize these neurophysiologic phenomena.","link":"/opendata/widge-et-al-2019/"},{"title":"Wiehler et al. 
(2021)","text":"Gambling disorder (GD) is a behavioral addiction associated with impairments in value-based decision-making and behavioral flexibility and might be linked to changes in the dopamine system. Maximizing long-term rewards requires a flexible trade-off between the exploitation of known options and the exploration of novel options for information gain. This exploration-exploitation trade-off is thought to depend on dopamine neurotransmission. We hypothesized that human gamblers would show a reduction in directed (uncertainty-based) exploration, accompanied by changes in brain activity in a fronto-parietal exploration-related network. Twenty-three frequent, non-treatment seeking gamblers and twenty-three healthy matched controls (all male) performed a four-armed bandit task during functional magnetic resonance imaging (fMRI). Computational modeling using hierarchical Bayesian parameter estimation revealed signatures of directed exploration, random exploration, and perseveration in both groups. Gamblers showed a reduction in directed exploration, whereas random exploration and perseveration were similar between groups. Neuroimaging revealed no evidence for group differences in neural representations of basic task variables (expected value, prediction errors). Our hypothesis of reduced frontal pole (FP) recruitment in gamblers was not supported. Exploratory analyses showed that during directed exploration, gamblers showed reduced parietal cortex and substantia-nigra/ventral-tegmental-area activity. Cross-validated classification analyses revealed that connectivity in an exploration-related network was predictive of group status, suggesting that connectivity patterns might be more predictive of problem gambling than univariate effects. 
Findings reveal specific reductions of strategic exploration in gamblers that might be linked to altered processing in a fronto-parietal network and/or changes in dopamine neurotransmission implicated in GD.","link":"/opendata/wiehler-et-al-2021/"},{"title":"Williams et al. (2021)","text":"Human learning, at least in part, appears to be dependent on the evaluation of how outcomes of our actions align with our expectations. Over the past 23 years, electroencephalography (EEG) has been used to probe the neural signatures of feedback processing. Seminal work demonstrated a difference in the human event-related potential (ERP) dependent on whether people were processing correct or incorrect feedback. Since then, these feedback evoked ERPs have been associated with reinforcement learning and conflict monitoring, tied to subsequent behavioral adaptations, and shown to be sensitive to a wide range of factors (e.g., Parkinson’s disease). Recently, research has turned to frequency decomposition techniques to examine how changes in the EEG power spectra are related to underlying learning mechanisms. Although the literature on the neural correlates of feedback processing is vast, there are still methodological discrepancies and differences in results across studies. Here, we provide reference results and an investigation of methodological considerations for the ERP (reward positivity) and frequency (delta and theta power) correlates of feedback evaluation with a large sample size. Specifically, participants (n = 500) performed a two-armed bandit task while we recorded EEG. Our findings provide key information about the data characteristics and relationships that exist between the neural signatures of feedback evaluation. Additionally, we conclude with selected methodological recommendations for standardization of future research. All data and scripts are freely provided to facilitate open science.","link":"/opendata/williams-et-al-2021/"},{"title":"Williams et al. 
(2022)","text":"Visual object recognition is not performed in isolation but depends on prior knowledge and context. Here, we found that auditory context plays a critical role in visual object perception. Using a psychophysical task in which naturalistic sounds were paired with noisy visual inputs, we demonstrated across two experiments (young adults; ns = 18-40 in Experiments 1 and 2, respectively) that the representations of ambiguous visual objects were shifted toward the visual features of an object that were related to the incidental sound. In a series of control experiments, we found that these effects were not driven by decision or response biases (ns = 40-85) nor were they due to top-down expectations (n = 40). Instead, these effects were driven by the continuous integration of audiovisual inputs during perception itself. Together, our results demonstrate that the perceptual experience of visual objects is directly shaped by naturalistic auditory context, which provides independent and diagnostic information about the visual world.","link":"/opendata/williams-et-al-2022/"},{"title":"Wimmer et al. (2018)","text":"Over the past few decades, neuroscience research has illuminated the neural mechanisms supporting learning from reward feedback. Learning paradigms are increasingly being extended to study mood and psychiatric disorders as well as addiction. However, one potentially critical characteristic that this research ignores is the effect of time on learning: human feedback learning paradigms are usually conducted in a single rapidly paced session, whereas learning experiences in ecologically relevant circumstances and in animal research are almost always separated by longer periods of time. In our experiments, we examined reward learning in short condensed sessions distributed across weeks versus learning completed in a single “massed” session in male and female participants. 
As expected, we found that after equal amounts of training, accuracy was matched between the spaced and massed conditions. However, in a 3-week follow-up, we found that participants exhibited significantly greater memory for the value of spaced-trained stimuli. Supporting a role for short-term memory in massed learning, we found a significant positive correlation between initial learning and working memory capacity. Neurally, we found that patterns of activity in the medial temporal lobe and prefrontal cortex showed stronger discrimination of spaced- versus massed-trained reward values. Further, patterns in the striatum discriminated between spaced- and massed-trained stimuli overall. Our results indicate that single-session learning tasks engage partially distinct learning mechanisms from distributed training. Our studies begin to address a large gap in our knowledge of human learning from reinforcement, with potential implications for our understanding of mood disorders and addiction.","link":"/opendata/wimmer-et-al-2018/"},{"title":"Wimmer & Poldrack (2022)","text":"Neuroscience research has illuminated the mechanisms supporting learning from reward feedback, demonstrating a critical role for the striatum and midbrain dopamine system. However, in humans, short-term working memory that is dependent on frontal and parietal cortices can also play an important role, particularly in commonly used paradigms in which learning is relatively condensed in time. Given the growing use of reward-based learning tasks in translational studies in computational psychiatry, it is important to understand the extent of the influence of working memory and also how core gradual learning mechanisms can be better isolated. In our experiments, we manipulated the spacing between repetitions along with a post-learning delay preceding a test phase. 
We found that learning was slower for stimuli repeated after a long delay (spaced-trained) compared to those repeated immediately (massed-trained), likely reflecting the remaining contribution of feedback learning mechanisms when working memory is not available. For massed learning, brief interruptions led to drops in subsequent performance, and individual differences in working memory capacity positively correlated with overall performance. Interestingly, when tested after a delay period but not immediately, relative preferences decayed in the massed condition and increased in the spaced condition. Our results provide additional support for a large role of working memory in reward-based learning in temporally condensed designs. We suggest that spacing training within or between sessions is a promising approach to better isolate and understand mechanisms supporting gradual reward-based learning, with particular importance for understanding potential learning dysfunctions in addiction and psychiatric disorders.","link":"/opendata/wimmer-poldrack-2022/"},{"title":"Wimmer et al. (2023)","text":"Theories of neural replay propose that it supports a range of functions, most prominently planning and memory consolidation. Here, we test the hypothesis that distinct signatures of replay in the same task are related to model-based decision-making (“planning”) and memory preservation. We designed a reward learning task wherein participants utilized structure knowledge for model-based evaluation, while at the same time maintaining knowledge of two independent and randomly alternating task environments. Using magnetoencephalography and multivariate analysis, we first identified temporally compressed sequential reactivation, or replay, both prior to choice and following reward feedback. Before choice, prospective replay strength was enhanced for the current task-relevant environment when a model-based planning strategy was beneficial. 
Following reward receipt, and consistent with a memory preservation role, replay for the alternative distal task environment was enhanced as a function of decreasing recency of experience with that environment. Critically, these planning and memory preservation relationships were selective to pre-choice and post-feedback periods, respectively. Our results provide support for key theoretical proposals regarding the functional role of replay and demonstrate that the relative strength of planning and memory-related signals are modulated by ongoing computational and task demands.","link":"/opendata/wimmer-et-al-2023/"},{"title":"Wischnewski et al. (2021)","text":"In decision-making with uncertain outcomes people may rely on external cues, such as expert advice, even if this information has no predictive value. While the fronto-parietal event-related potential (ERP) components feedback-related negativity (FRN) and P3 are associated with both reward/punishment feedback processing, the relationship between ERP modulation and expert advice during decision making remains unclear. In this double-blind sham-controlled within-subject study transcranial alternating current stimulation (tACS) at an intensity of 1 mA was applied to the frontal cortex in twenty-four healthy volunteers. The aim was to decrease reliance on expert advice by targeting FRN, P3a and P3b components. Following administration of frontal delta (2.5 Hz), theta (5 Hz) and sham tACS, ERPs and advice-guided decision making were evaluated. Results showed a tentative behavioral effect of delta tACS in the response bias. In contrast, theta tACS significantly lowered P3b and P3a amplitudes, but no effects of tACS were observed for the FRN. Effects on electrophysiology and advice following behavior were uncorrelated. 
Our findings suggest that theta tACS may modulate electrocortical signals and delta tACS may modulate advice following, yet the relationship between the two remains unresolved.","link":"/opendata/wischnewski-et-al-2021/"},{"title":"Wise et al. (2019)","text":"Visual selective attention acts as a filter on perceptual information, facilitating learning and inference about important events in an agent’s environment. A role for visual attention in reward-based decisions has previously been demonstrated, but it remains unclear how visual attention is recruited during aversive learning, particularly when learning about multiple stimuli concurrently. This question is of particular importance in psychopathology, where enhanced attention to threat is a putative feature of pathological anxiety. Using an aversive reversal learning task that required subjects to learn, and exploit, predictions about multiple stimuli, we show that the allocation of visual attention is influenced significantly by aversive value but not by uncertainty. Moreover, this relationship is bidirectional in that attention biases value updates for attended stimuli, resulting in heightened value estimates. Our findings have implications for understanding biased attention in psychopathology and support a role for learning in the expression of threat-related attentional biases in anxiety.","link":"/opendata/wise-et-al-2019/"},{"title":"Wise et al. (2020)","text":"Symptom expression in psychiatric conditions is often linked to altered threat perception; however, how computational mechanisms that support aversive learning relate to specific psychiatric symptoms remains undetermined. We answer this question using an online game-based aversive learning task together with measures of common psychiatric symptoms in 400 subjects. 
We show that physiological symptoms of anxiety and a transdiagnostic compulsivity-related factor are associated with enhanced safety learning, as measured using a probabilistic computational model, while trait cognitive anxiety symptoms are associated with enhanced learning from danger. We use data-driven partial least squares regression to identify two separable components across behavioural and questionnaire data: one linking enhanced safety learning and lower estimated uncertainty to physiological anxiety, compulsivity, and impulsivity; the other linking enhanced threat learning and heightened uncertainty estimation to symptoms of depression and social anxiety. Our findings implicate aversive learning processes in the expression of psychiatric symptoms that transcend diagnostic boundaries.","link":"/opendata/wise-et-al-2020/"},{"title":"Wise et al. (2022)","text":"Successful avoidance of recurrent threats depends on inferring threatening agents’ preferences and predicting their movement patterns accordingly. However, it remains largely unknown how the human brain achieves this, despite the fact that many natural threats are posed by complex, dynamic agents that act according to their own goals. Here, we propose that humans exploit an interactive cognitive map of the social environment to infer threatening agents’ preferences and also to simulate their future behavior, providing for flexible, generalizable avoidance strategies. We tested this proposal across three preregistered experiments (total n=510) using a task in which participants collected rewards while avoiding one of several possible virtual threatening agents. A novel, model-based, hypothesis-testing inverse reinforcement learning computational model best explained participants’ inferences about threatening agents’ latent preferences, with participants using this inferred knowledge to enact generalizable, model-based avoidance strategies across different environments. 
Using tree-search planning models, we found that participants’ behavior was best explained by a planning algorithm that incorporated simulations of the threat’s goal-directed behavior, and that prior expectations about the threat’s predictability were linked to individual differences in avoidance. Together, our results indicate that humans use a cognitive map to determine threatening agents’ preferences, in turn facilitating generalized predictions of the threatening agent’s behavior and enabling flexible and effective avoidance.","link":"/opendata/wise-et-al-2022/"},{"title":"Wise et al. (2023)","text":"In social environments, survival can depend upon inferring and adapting to other agents’ goal-directed behavior. However, it remains unclear how humans achieve this, despite the fact that many decisions must account for complex, dynamic agents acting according to their own goals. Here, we use a predator-prey task (total n = 510) to demonstrate that humans exploit an interactive cognitive map of the social environment to infer other agents’ preferences and simulate their future behavior, providing for flexible, generalizable responses. A model-based inverse reinforcement learning model explained participants’ inferences about threatening agents’ preferences, with participants using this inferred knowledge to enact generalizable, model-based behavioral responses. Using tree-search planning models, we then found that behavior was best explained by a planning algorithm that incorporated simulations of the threat’s goal-directed behavior. Our results indicate that humans use a cognitive map to determine other agents’ preferences, facilitating generalized predictions of their behavior and effective responses.","link":"/opendata/wise-et-al-2023/"},{"title":"Woelk et al. (2022)","text":"Anxiety disorders are effectively treated with exposure therapy, but relapse remains high. 
Fear may reinstate after recurrence of the negative event because the expectancy of the aversive outcome (unconditioned stimulus [US]) is adjusted but not its evaluation. Imagery rescripting (ImRs) is an intervention that is proposed to work through revaluation of the US. The aim of our preregistered study was to test the effects of ImRs and extinction on US expectancy and US revaluation. Day 1 (n = 106) consisted of acquisition with an aversive film clip as the US. The manipulation (ImRs + extinction, extinction-only, or ImRs-only) took place on Day 2. Reinstatement of fear was tested on Day 3. Results showed expectancy learning in both extinction conditions but not in the ImRs-only condition and no enhanced revaluation learning in ImRs. The combination of ImRs and extinction slowed down extinction but did not protect against reinstatement, which argues in favor of stand-alone interventions in clinical practice.","link":"/opendata/woelk-et-al-2022/"},{"title":"Wu et al. (2018)","text":"From foraging for food to learning complex games, many aspects of human behaviour can be framed as a search problem with a vast space of possible actions. Under finite search horizons, optimal solutions are generally unobtainable. Yet, how do humans navigate vast problem spaces, which require intelligent exploration of unobserved actions? Using various bandit tasks with up to 121 arms, we study how humans search for rewards under limited search horizons, in which the spatial correlation of rewards (in both generated and natural environments) provides traction for generalization. Across various probabilistic and heuristic models, we find evidence that Gaussian process function learning, combined with an optimistic upper confidence bound sampling strategy, provides a robust account of how people use generalization to guide search. 
Our modelling results and parameter estimates are recoverable and can be used to simulate human-like performance, providing insights about human behaviour in complex environments.","link":"/opendata/wu-et-al-2018/"},{"title":"Wu et al. (2022)","text":"How does time pressure influence exploration and decision-making? We investigated this question with several four-armed bandit tasks manipulating (within subjects) expected reward, uncertainty, and time pressure (limited vs. unlimited). With limited time, people have less opportunity to perform costly computations, thus shifting the cost-benefit balance of different exploration strategies. Through behavioral, reinforcement learning (RL), reaction time (RT), and evidence accumulation analyses, we show that time pressure changes how people explore and respond to uncertainty. Specifically, participants reduced their uncertainty-directed exploration under time pressure, were less value-directed, and repeated choices more often. Since our analyses relate uncertainty to slower responses and dampened evidence accumulation (i.e., drift rates), this demonstrates a resource-rational shift towards simpler, lower-cost strategies under time pressure. These results shed light on how people adapt their exploration and decision-making strategies to externally imposed cognitive constraints.","link":"/opendata/wu-et-al-2022/"},{"title":"Wu et al. (2023)","text":"When exposed to perceptual and motor sequences, people are able to gradually identify patterns within them and form a compact internal description of the sequence. One proposal of how sequences can be compressed is people’s ability to form chunks. We study people’s chunking behavior in a serial reaction time task. We relate chunk representation with sequence statistics and task demands, and propose a rational model of chunking that rearranges and concatenates its representation to jointly optimize for accuracy and speed. 
Our model predicts that participants should chunk more if chunks are indeed part of the generative model underlying a task and should, on average, learn longer chunks when optimizing for speed than when optimizing for accuracy. We test these predictions in two experiments. In the first experiment, participants learned sequences with underlying chunks. In the second experiment, participants were instructed to act either as fast or as accurately as possible. The results of both experiments confirmed our model’s predictions. Taken together, these results shed new light on the benefits of chunking and pave the way for future studies on step-wise representation learning in structured domains.","link":"/opendata/wu-et-al-2023/"},{"title":"Wulff et al. (2018)","text":"People can learn about the probabilistic consequences of their actions in two ways: One is by consulting descriptions of an action’s consequences and probabilities (e.g., reading up on a medication’s side effects). The other is by personally experiencing the probabilistic consequences of an action (e.g., beta testing software). In principle, people taking each route can reach analogous states of knowledge and consequently make analogous decisions. In the last dozen years, however, research has demonstrated systematic discrepancies between description- and experience-based choices. This description-experience gap has been attributed to factors including reliance on a small set of experiences, the impact of recency, and different weighting of probability information in the two decision types. In this meta-analysis focusing on studies using the sampling paradigm of decisions from experience, we evaluated these and other determinants of the description-experience gap by reference to more than 70,000 choices made by more than 6,000 participants. We found, first, a robust description-experience gap but also a key moderator, namely, problem structure. 
Second, the largest determinant of the gap was reliance on small samples and the associated sampling error: free to terminate search, individuals explored too little to experience all possible outcomes. Third, the gap persisted when sampling error was basically eliminated, suggesting other determinants. Fourth, the occurrence of recency was contingent on decision makers’ autonomy to terminate search, consistent with the notion of optional stopping. Finally, we found indications of different probability weighting in decisions from experience versus decisions from description when the problem structure involved a risky and a safe option.","link":"/opendata/wulff-et-al-2018/"},{"title":"Wurtz et al. (2023)","text":"Depressive symptoms are associated with negative expectations and reduced belief updating by positive information. Cognitive immunization, the devaluation of positive information, has been argued to be central in this relationship and predictive processing models suggest that more positive information is associated with greater cognitive immunization. In an online experiment, N=347 healthy participants took part in a performance task with standardized feedback that was either mildly, moderately, or extremely positive. Effects of the feedback positivity on cognitive immunization were investigated. Further, depressive symptoms, interpretation bias, as well as participant’s self-evaluation were examined as potential moderators of belief updating. Participants in the mildly positive condition showed the greatest amount of cognitive immunization, with no difference between the moderately and the extremely positive condition. Irrespective of the feedback condition, participants’ positive expectations increased after feedback, yet depressive symptoms were associated with a weaker increase. Interpretation biases were unrelated to these associations, but exploratory analyses suggested that self-evaluation may be a potentially central factor. 
The results suggest that healthy participants engaged in cognitive immunization when feedback was less positive than expected. Depressive symptoms were associated with reduced belief updating, with self-evaluation being a promising factor for future research.","link":"/opendata/wurtz-et-al-2023/"},{"title":"Xia et al. (2021)","text":"In the real world, many relationships between events are uncertain and probabilistic. Uncertainty is also likely to be a more common feature of daily experience for youth because they have less experience to draw from than adults. Some studies suggest probabilistic learning may be inefficient in youths compared to adults, while others suggest it may be more efficient in youths in mid adolescence. Here we used a probabilistic reinforcement learning task to test how youths aged 8-17 (N = 187) and adults aged 18-30 (N = 110) learn about stable probabilistic contingencies. Performance increased with age through the early twenties, then stabilized. Using hierarchical Bayesian methods to fit computational reinforcement learning models, we show that all participants’ performance was better explained by models in which negative outcomes had minimal to no impact on learning. The performance increase over age was driven by 1) an increase in learning rate (i.e., a decrease in integration time scale) and 2) a decrease in noisy/exploratory choices. In mid-adolescence (ages 13-15), salivary testosterone and learning rate were positively related. We discuss our findings in the context of other studies and hypotheses about adolescent brain development.","link":"/opendata/xia-et-al-2021/"},{"title":"Xia et al. (2022)","text":"Fear conditioning is a laboratory paradigm commonly used to investigate aversive learning and memory. In context fear conditioning, a configuration of elemental cues (conditioned stimulus, CS) predicts an aversive event (unconditioned stimulus, US). 
To quantify context fear acquisition in humans, previous work has used startle eye-blink responses (SEBR), skin conductance responses (SCR) and verbal reports, but different quantification methods have rarely been compared. Moreover, it is unclear how to induce, and measure context fear memory retention over several days. First, we used a semi-immersive virtual reality paradigm. In two experiments, we found successful declarative learning and memory retention over seven days, but no evidence of conditioned responses. Next, we used a configural fear conditioning paradigm with five static room images as CSs in two experiments. Besides successful declarative learning and memory retention after seven days, SCR and pupil dilation to CS onset differentiated CS+/CS- during acquisition training, and SEBR and pupil dilation differentiated CS+/CS- during the recall test, with medium to large effect sizes for the most sensitive indices (SEBR: Hedge’s g = 0.56 and 0.69; pupil dilation: Hedge’s g = 0.99 and g = 0.88). Our results demonstrate that with a suitable experimental paradigm, context fear memory retention can be demonstrated over seven days, and we provide robust and replicable measurement methods.","link":"/opendata/xia-et-al-2022/"},{"title":"Xiang et al. (2023)","text":"How do people judge responsibility in collaborative tasks? Past work has proposed a number of metrics that people may use to attribute blame and credit to others, such as effort, competence, and force. Some theories consider only the produced effort or force (individuals are more responsible if they produce more effort or force), whereas others consider counterfactuals (individuals are more responsible if some alternative behavior on their or their collaborator’s part could have altered the outcome). Across four experiments (N = 717), we found that participants’ judgments are best described by a model that considers both produced and counterfactual effort. 
This finding generalized to an independent validation data set (N = 99). Our results thus support a dual-factor theory of responsibility attribution in collaborative tasks.","link":"/opendata/xiang-et-al-2023/"},{"title":"Xu et al. (2020)","text":"The literature on human delay discounting behavior is dominated by experimental paradigms, which do not impose actual delays. Given that waiting may be aversive even on short timescales, we present a novel delay discounting paradigm to study differences in delay discounting behavior either when real waiting is involved, or not. This paradigm retains the fundamental trade-off between rewards received versus their immediacy. We used hierarchical Bayesian modeling to decompose and test models that separate discounting and subjective time perception mechanisms. We report 2 experiments that also explore the magnitude effect and gain-loss asymmetry. In both experiments, we found greater discounting and nonlinear (convex) time perception in the online waiting task, which required waiting after each choice, compared to a control condition where waiting was deferred until the end of the experiment. Discounting was also measured by a parameter of a hyperbolic-type model and was related to reversals of preference between an initial desire to wait and subsequently not doing so. These results from our novel paradigm support the emerging view that subjective time perception plays an important role in intertemporal choice in addition to discounting tendencies.","link":"/opendata/xu-et-al-2020/"},{"title":"Xu & Stocco (2021)","text":"Behavioral data, despite being a common index of cognitive activity, is under scrutiny for having poor reliability as a result of noise or a lack of replications of reliable effects. Here, we argue that cognitive modeling can be used to enhance the test-retest reliability of behavioral measures by recovering individual-level parameters from behavioral data. 
We tested this empirically with the Probabilistic Stimulus Selection (PSS) task, which is used to measure a participant’s sensitivity to positive or negative reinforcement. An analysis of 400,000 simulations from an Adaptive Control of Thought-Rational (ACT-R) model of this task showed that the poor reliability of the task is due to the instability of the end-estimates: because of the way the task works, the same participants might sometimes end up having apparently opposite scores. To recover the underlying interpretable parameters and enhance reliability, we used a Bayesian Maximum A Posteriori (MAP) procedure. We were able to obtain reliable parameters across sessions (intraclass correlation coefficient ≈ 0.5). A follow-up study on a modified version of the task also found the same pattern of results, with very poor test-retest reliability in behavior but moderate reliability in recovered parameters (intraclass correlation coefficient ≈ 0.4). Collectively, these results imply that this approach can further be used to provide superior measures in terms of reliability, and bring greater insights into individual differences.","link":"/opendata/xu-stocco-2021/"},{"title":"Xu et al. (2021)","text":"Classic reinforcement learning (RL) theories cannot explain human behavior in the absence of external reward or when the environment changes. Here, we employ a deep sequential decision-making paradigm with sparse reward and abrupt environmental changes. To explain the behavior of human participants in these environments, we show that RL theories need to include surprise and novelty, each with a distinct role. While novelty drives exploration before the first encounter of a reward, surprise increases the rate of learning of a world-model as well as of model-free action-values. Even though the world-model is available for model-based RL, we find that human decisions are dominated by model-free action choices. 
The world-model is only marginally used for planning, but it is important to detect surprising events. Our theory predicts human action choices with high probability and allows us to dissociate surprise, novelty, and reward in EEG signals.","link":"/opendata/xu-et-al-2021/"},{"title":"Yamaguchi & Swainson (2023)","text":"The task-switch cost is one of the most robust phenomena, but it can disappear after nogo trials where the actors decide not to respond to the target. According to the response-selection account, it is the occurrence of response selection that generates a task-switch cost on the following trial. The present study used a variety of selective go/nogo procedures to investigate whether response selection on nogo trials is followed by a switch cost. The first two experiments aimed to replicate previous studies in which go/nogo trials could be distinguished either by the target features or by the responses assigned to the target, but the results were mixed. The subsequent four experiments constrained the conditions so that the actors would need to select a specific response in order to decide whether or not to execute it. In these experiments, the task-switch cost was consistently absent after selective nogo trials, even when response selection was required on these trials. These results contradict the hypothesis that response selection on nogo trials would be followed by a subsequent switch cost. The results are consistent with the proposal that a task-switch cost might be established by response selection or other task-related processes on nogo trials but is abolished because nogo signals interfere with the activated task-set.","link":"/opendata/yamaguchi-swainson-2023/"},{"title":"Yan et al. 
(2023)","text":"The neural circuits of reward processing and interval timing (including perception and production) are functionally intertwined, suggesting that it might be possible for momentary reward processing to influence subsequent timing behavior. Previous animal and human studies have mainly focused on the effect of reward on interval perception, whereas its impact on interval production is less clear. In this study, we examined whether feedback, as an example of performance-contingent reward, biases interval production. We recorded EEG from 20 participants while they engaged in a continuous drumming task with different realistic tempos (1728 trials per participant). Participants received color-coded feedback after each beat about whether they were correct (on time) or incorrect (early or late). Regression-based EEG analysis was used to unmix the rapid occurrence of a feedback response called the reward positivity (RewP), which is traditionally observed in more slow-paced tasks. Using linear mixed modelling, we found that RewP amplitude predicted timing behavior for the upcoming beat. This performance-biasing effect of the RewP was interpreted as reflecting the impact of fluctuations in dopaminergic activities on timing, and the necessity of continuous paradigms to make such observations was highlighted.","link":"/opendata/yan-et-al-2023/"},{"title":"Yeung & Han (2023)","text":"The n-back task is a popular paradigm for studying neurocognitive processing at varying working memory loads. Although much is known about the effects of load on behavior and neural activation during n-back performance, the temporal dynamics of such effects remain unclear. Here, we investigated the within- and between-session stability and consistency of task performance and frontal cortical activation during the n-back task using functional near-infrared spectroscopy (fNIRS). Forty healthy young adults performed the 1-back and 3-back conditions three times per condition. 
They then undertook identical retest sessions 3 weeks later (M = 21.2 days, SD = 0.9). Over the course of the task, activation in the participants’ frontopolar, dorsomedial, dorsolateral, ventrolateral, and posterolateral frontal cortices was measured with fNIRS. We found significantly improved working memory performance (difference between 1-back and 3-back accuracies) over time both within and between sessions. All accuracy and reaction time measures exhibited good to excellent consistency within and across sessions. Additionally, changes in frontal oxyhemoglobin (HbO) and deoxyhemoglobin (HbR) concentration were maintained over time across timescales, except that load-dependent (3-back > 1-back) HbO changes, particularly in the ventrolateral PFC, diminished over separate sessions. The consistency of fNIRS measures varied greatly, with changes in 3-back dorsolateral and ventrolateral HbO demonstrating fair-to-good consistency both within and between sessions. Overall, this study clarified the temporal dynamics of task performance and frontal activation during the n-back task. The findings revealed the neural mechanisms underlying the change in n-back task performance over time and have practical implications for future n-back research.","link":"/opendata/yeung-han-2023/"},{"title":"Yoo et al. (2022)","text":"How does the nature of a stimulus affect our ability to learn appropriate response associations? In typical laboratory experiments learning is investigated under somewhat ideal circumstances, where stimuli are easily discriminable visually and linguistically. This is not representative of most real-life learning, where visually or linguistically overlapping stimuli can result in different rewards (e.g., you may learn over time that you can pet one specific dog that is friendly, but that you should avoid a very similar looking one that isn’t). 
With two experiments, we test how humans learn in three stimulus conditions: stimuli with distinct visual representations but overlapping linguistic representations, stimuli with distinct linguistic representations but overlapping visual representations, and stimuli with distinct visual and linguistic representations. We find that decreasing either linguistic or visual distinctness decreases performance, substantially more so for lowered linguistic distinctness. We develop computational models to test different hypotheses about how reinforcement learning (RL) and working memory (WM) processes are affected by different stimulus conditions. Interestingly, we find that only RL, and not WM, is affected by stimulus condition: people learn more slowly and have higher across-stimulus value confusion at decision time when linguistic information overlaps relative to when it is distinct. These results demonstrate strong effects of stimulus type on learning, and highlight the importance of considering the parallel contributions of different cognitive processes when studying behavior.","link":"/opendata/yoo-et-al-2022/"},{"title":"Zaatri et al. (2022)","text":"Although living in social groups provides many benefits for group members, such groups also serve as a setting for social competition over rank and influence. Evolutionary accounts suggest that social anxiety plays a role in regulating in-group conflict, as individuals who are concerned about social threat may choose to defer to others to maintain the hierarchical status quo. Here, we examine how social anxiety levels are related to the advice-giving style an individual adopts: a competitive influence-seeking strategy or a defensive blend-in strategy. We begin by demonstrating that similarity to others drives activity in the brain’s valuation system, even during a competitive advice-taking task. 
Then, in three behavioural experiments, we show that social anxiety levels are related to the tendency to give advice resembling the advice given by rival advisers and to refrain from status-seeking behaviour. Social anxiety was also associated with negative social comparisons with rival advisers. Our findings highlight the role of competing social goals in shaping information sharing.","link":"/opendata/zaatri-et-al-2022/"},{"title":"Zajkowski et al. (2017)","text":"The explore-exploit dilemma occurs anytime we must choose between exploring unknown options for information and exploiting known resources for reward. Previous work suggests that people use two different strategies to solve the explore-exploit dilemma: directed exploration, driven by information seeking, and random exploration, driven by decision noise. Here, we show that these two strategies rely on different neural systems. Using transcranial magnetic stimulation to inhibit the right frontopolar cortex, we were able to selectively inhibit directed exploration while leaving random exploration intact. This suggests a causal role for right frontopolar cortex in directed, but not random, exploration and that directed and random exploration rely on (at least partially) dissociable neural systems.","link":"/opendata/zajkowski-et-al-2017/"},{"title":"Zaller et al. (2021)","text":"Excessive information seeking, or exploratory behavior to minimize the uncertainty of unknown options, is a feature of anxiety disorders. The horizons task (Wilson et al. 2014) is a popular task for measuring information-seeking behavior, recently used to identify under-exploration in psychosis (Waltz et al. 2020). The horizons task has not yet been evaluated as a tool for measuring information seeking behavior in anxious individuals. We recruited 100 participants to complete an online version of the horizons task. 
Anxiety was measured with the Penn State Worry Questionnaire (PSWQ), and attitudes related to information seeking were measured with the Intolerance of Uncertainty scale (IUS) and the Need for Closure scale (NCS). Information seeking behavior on the horizons task was measured per participant using hierarchical Bayesian modeling. We confirmed that behavior on the online version of the horizons task is similar to that in previous in-person studies. Contrary to our hypotheses, we found no evidence of a relationship between information seeking behavior on the task and anxiety symptoms or the information seeking scales. Our results suggest that behavior on the horizons task does not predict beliefs and attitudes towards anxious information seeking. We suspect this may reflect design features of the task that reduce the value of information seeking behaviors. We conclude by proposing modifications to the task that may improve its utility as a measure of information seeking behavior in anxiety.","link":"/opendata/zaller-et-al-2021/"},{"title":"Zang et al. (2022)","text":"Duration estimates are often biased by the sampled statistical context, yielding the classical central-tendency effect: short durations are overestimated and long durations underestimated. Most studies of the central-tendency bias have primarily focused on the integration of the sensory measure and the prior information, without considering any cognitive limits. Here, we investigated the impact of cognitive (visual working-memory) load on duration estimation in the duration encoding and reproduction stages. In four experiments, observers had to perform a dual, attention-sharing task: reproducing a given duration (primary) and memorizing a variable set of color patches (secondary). 
We found that increasing memory load (i.e., set size) during the duration-encoding stage increased the central-tendency bias while shortening the reproduced duration in general; in contrast, increasing the load during the reproduction stage prolonged the reproduced duration without influencing the central tendency. By integrating an attentional-sharing account into a hierarchical Bayesian model, we were able to predict both the general over- and underestimation and the central-tendency effects observed in all four experiments. The model suggests that memory pressure during the encoding stage increases the sensory noise, which elevates the central-tendency effect. In contrast, memory pressure during the reproduction stage only influences the monitoring of elapsed time, leading to a general duration over-reproduction without impacting the central tendency.","link":"/opendata/zang-et-al-2022/"},{"title":"Zbozinek et al. (2022)","text":"In the natural world, stimulus-outcome associations are often ambiguous, and most associations are highly complex and situation-dependent. Learning to disambiguate these complex associations to identify which specific outcomes will occur in which situations is critical for survival. Pavlovian occasion setters are stimuli that determine whether other stimuli will result in a specific outcome. Occasion setting is a well-established phenomenon, but very little investigation has been conducted on how occasion setters are disambiguated when they themselves are ambiguous (i.e., when they do not consistently signal whether another stimulus will be reinforced). In two preregistered studies, we investigated the role of higher-order Pavlovian occasion setting in humans. We developed and tested the first computational model predicting direct associative learning, traditional occasion setting (i.e., 1st-order occasion setting), and 2nd-order occasion setting. 
This model operationalizes stimulus ambiguity as a mechanism to engage in higher-order Pavlovian learning. Both behavioral and computational modeling results suggest that 2nd-order occasion setting was learned, as evidenced by lack and presence of transfer of occasion setting properties when expected and the superior fit of our 2nd-order occasion setting model compared to the 1st-order occasion setting or direct associations models. These results provide a controlled investigation into highly complex associative learning and may ultimately lead to improvements in the treatment of Pavlovian-based mental health disorders (e.g., anxiety disorders, substance use).","link":"/opendata/zbozinek-et-al-2022/"},{"title":"Zech et al. (2022)","text":"The approach-avoidance task (AAT) is an implicit task that measures people’s behavioral tendencies to approach or avoid stimuli in the environment. In recent years, it has been used successfully to help explain a variety of health problems (e.g., addictions and phobias). Unfortunately, more recent AAT studies have failed to replicate earlier promising findings. One explanation for these replication failures could be that the AAT does not reliably measure approach-avoidance tendencies. Here, we first review existing literature on the reliability of various versions of the AAT. Next, we examine the AAT’s reliability in a large and diverse sample (N = 1077; 248 of whom completed all sessions). Using a smartphone-based, mobile AAT, we measured participants’ approach-avoidance tendencies eight times over a period of seven months (one measurement per month) in two distinct stimulus sets (happy/sad expressions and disgusting/neutral stimuli). The mobile AAT’s split-half reliability was adequate for face stimuli (r = .85), but low for disgust stimuli (r = .72). Its test-retest reliability based on a single measurement was poor for either stimulus set (all ICC1s < .3).
Its test-retest reliability based on the average of all eight measurements was moderately good for face stimuli (ICCk = .73), but low for disgust stimuli (ICCk = .5). Results suggest that single-measurement AATs could be influenced by unexplained temporal fluctuations of approach-avoidance tendencies. These fluctuations could be examined in future studies. Until then, this work suggests that future research using the AAT should rely on multiple rather than single measurements.","link":"/opendata/zech-et-al-2022/"},{"title":"Zerna et al. (2022)","text":"When individuals set goals, they consider the subjective value (SV) of the anticipated reward and the required effort, a trade-off that is of great interest to psychological research. One approach to quantify the SVs of levels of difficulty of a cognitive task is the Cognitive Effort Discounting Paradigm by Westbrook and colleagues (2013). However, it fails to acknowledge the highly individual nature of effort, as it assumes a unidirectional, inverse relationship between task load and SVs. Therefore, it cannot map differences in effort perception that arise from traits like Need for Cognition, since individuals who enjoy effortful cognitive activities likely do not prefer the easiest level. We replicated the analysis of Westbrook and colleagues with an adapted version, the Cognitive and Affective Discounting (CAD) Paradigm. It quantifies SVs without assuming that the easiest level is preferred, thereby enabling the assessment of SVs for tasks without objective order of task load. Results show that many participants preferred a more or the most difficult level. Variance in SVs was best explained by a declining logistic contrast of the $n$-back levels and by the accuracy of responses, while reaction time as a predictor was highly volatile depending on the preprocessing pipeline. Participants with higher Need for Cognition scores perceived higher $n$-back levels as less effortful and found them less aversive. 
Effects of Need for Cognition on SVs in lower levels did not reach significance, as group differences only emerged in higher levels. The CAD Paradigm appears to be well suited for assessing and analysing task preferences independent of the supposed objective task difficulty.","link":"/opendata/zerna-et-al-2022/"},{"title":"Zetsche et al. (2023)","text":"Rumination is a well-known risk factor for the onset and recurrence of depressive episodes. Depressed individuals with a tendency to ruminate have been found to exhibit deficits in updating the contents of working memory. Thus, working memory training targeting updating-specific cognitive control processes may bear the potential to reduce ruminative tendencies. This registered clinical trial (ClinicalTrials.gov/NCT03011216) examined the effect of training cognitive control on rumination in the daily lives of clinically depressed individuals. Sixty-five individuals with a current major depressive disorder were randomized to 10 sessions of either cognitive control training (N=31) or placebo training (N=34). Primary outcome measures were the frequency and negativity of ruminative thoughts in the daily lives of participants assessed by a 7-day experience sampling procedure prior to training, after training, and at 3-month follow-up. Secondary outcomes were depressive symptoms, depressive mood, and level of disability. Cognitive control training led to stronger improvements in the trained task than placebo training. There was no transfer of the training effect to a novel cognitive control task. Cognitive control training did not lead to a greater reduction in daily rumination frequency, negativity of ruminative thoughts, or the negative influence of rumination on subsequent affect than the placebo training. There was no training-specific effect on participants’ depressive symptomatology or level of disability. These results add to a mixed picture of findings regarding the effect of cognitive control training on rumination.
Future research has to identify determinants of beneficial training effects.","link":"/opendata/zetsche-et-al-2023/"},{"title":"Zhang et al. (2023)","text":"Humans are remarkably efficient at decision-making, even in “open-ended’’ problems where the set of possible actions is too large for exhaustive evaluation. Our success relies, in part, on efficient processes of calling to mind and considering the right candidate actions for evaluation. When this process fails, however, the result is a kind of cognitive puzzle in which the value of a solution or action would be obvious as soon as it is considered, but never gets considered in the first place. Recently, machine learning (ML) architectures have attained or even exceeded human performance on certain kinds of open-ended tasks such as the games of chess and go. We ask whether the broad architectural principles that underlie ML success in these domains tend to generate similar consideration failures to those observed in humans. We demonstrate a case in which they do, illuminating how humans make open-ended decisions, how this relates to ML approaches to similar problems, and how both architectures lead to characteristic patterns of success and failure.","link":"/opendata/zhang-et-al-2023/"},{"title":"Zhao et al. (2021)","text":"Working memory (WM) deficits have been widely documented in schizophrenia (SZ), and almost all existing studies attributed the deficits to decreased capacity as compared to healthy control (HC) subjects. Recent developments in WM research suggest that other components, such as precision, also mediate behavioral performance. It remains unclear how different WM components jointly contribute to deficits in schizophrenia. 
We measured the performance of 60 SZ (31 females) and 61 HC (29 females) in a classical delay-estimation visual working memory (VWM) task and evaluated several influential computational models proposed in the basic science of VWM to disentangle the effect of various memory components. We show that the model assuming variable precision (VP) across items and trials is the best model to explain the performance of both groups. According to the VP model, SZ exhibited abnormally large variability in allocating memory resources rather than in resources or capacity per se. Finally, individual differences in the resource allocation variability predicted variation of symptom severity in SZ, highlighting its functional relevance to schizophrenic pathology. This finding was further verified using distinct visual features and subject cohorts. These results provide an alternative view to the widely accepted decreased-capacity theory and highlight the key role of elevated resource allocation variability in generating atypical VWM behavior in schizophrenia. Our findings also shed new light on the utility of Bayesian observer models to characterize mechanisms of mental deficits in clinical neuroscience.","link":"/opendata/zhao-et-al-2021/"},{"title":"Zheng & Mei (2023)","text":"The question of how the brain represents reward prediction errors is central to reinforcement learning and adaptive, goal-directed behavior. Previous studies have revealed prediction error representations in multiple electrophysiological signatures, but it remains elusive whether these electrophysiological correlates underlying prediction errors are sensitive to valence (in a signed form) or to salience (in an unsigned form). One possible reason concerns the loose correspondence between objective probability and subjective prediction resulting from the optimistic bias, that is, the tendency to overestimate the likelihood of encountering positive future events.
In the present electroencephalography (EEG) study, we approached this question by directly measuring participants’ idiosyncratic, trial-to-trial prediction errors elicited by subjective and objective probabilities across two experiments. We adopted monetary gain and loss feedback in Experiment 1 and positive and negative feedback as communicated by the same zero-value feedback in Experiment 2. We provided electrophysiological evidence in time and time-frequency domains supporting both reward and salience prediction error signals. Moreover, we showed that these electrophysiological signatures were highly flexible and sensitive to an optimistic bias and various forms of salience. Our findings shed new light on multiple presentations of prediction error in the human brain, which differ in format and functional role.","link":"/opendata/zheng-mei-2023/"},{"title":"Zhong et al. (2022)","text":"Surviving in an uncertain environment requires not only the ability to select the best action, but also the flexibility to withhold inappropriate actions when the environmental conditions change. Although selecting and withholding actions have been extensively studied in both humans and animals, there is still a lack of consensus on the mechanism underlying these action regulation functions, and more importantly, how they inter-relate. A critical gap impeding progress is the lack of a computational theory that will integrate the mechanisms of action regulation into a unified framework. The current study aims to advance our understanding by developing a neurodynamical computational theory that models the mechanism of action regulation that involves suppressing responses, and predicts how disruption of this mechanism can lead to motor deficits in Parkinson’s disease (PD) patients.
We tested the model predictions in neurotypical individuals and PD patients in three behavioral tasks that involve free action selection between two opposed directions, action selection in the presence of conflicting information, and abandoning an ongoing action when a stop signal is presented. Our results and theory suggest an integrated mechanism of action regulation that affects both action initiation and inhibition. When this mechanism is disrupted, motor behavior is affected, leading to longer reaction times and higher error rates in action inhibition.","link":"/opendata/zhong-et-al-2022/"},{"title":"Zhou et al. (2023)","text":"Theories of moral development propose that empathy is transmitted across individuals, yet the mechanism through which empathy is socially transmitted remains unclear. We conducted three studies to investigate whether, and if so, how observing empathic responses in others affects the empathy of the observer. Our results show that observing empathic or non-empathic responses generates learning signals that respectively increase or decrease empathy ratings of the observer and alter empathy-related responses in the anterior insula (AI), i.e., the same region that correlated with empathy baseline ratings, as well as its functional connectivity with the temporal-parietal junction (TPJ). Together, our findings provide a neurocomputational mechanism for the social transmission of empathy that accounts for changes in individual empathic responses in empathic and non-empathic social environments.","link":"/opendata/zhou-et-al-2023/"},{"title":"Zhu et al. (2019)","text":"Both basal ganglia (BG) and orbitofrontal cortex (OFC) have been widely implicated in social and non-social decision-making. However, unlike OFC damage, BG pathology is not typically associated with disturbances in social functioning.
Here we studied the behavior of patients with focal lesions to either BG or OFC in a multi-strategy competitive game known to engage these regions. We find that whereas OFC patients are significantly impaired, BG patients show intact learning in the economic game. By contrast, when information about the strategic context is absent, both cohorts are significantly impaired. Computational modeling further shows a preserved ability in BG patients to learn by anticipating and responding to the behavior of others using the strategic context. These results suggest that apparently divergent findings on BG contribution to social decision-making may instead reflect a model where higher-order learning processes are dissociable from trial-and-error learning, and can be preserved despite BG damage.","link":"/opendata/zhu-et-al-2019/"},{"title":"Zhu et al. (2022)","text":"Human cognition is fundamentally noisy. While routinely regarded as a nuisance in experimental investigation, the few studies investigating properties of cognitive noise have found surprising structure. A first line of research has shown that inter-response-time distributions are heavy-tailed. That is, response times between subsequent trials usually change only a small amount, but with occasional large changes. A second, separate, line of research has found that participants’ estimates and response times both exhibit long-range autocorrelations (i.e., 1/f noise). Thus, each judgment and response time not only depends on its immediate predecessor but also on many previous responses. These two lines of research use different tasks and have distinct theoretical explanations: models that account for heavy-tailed response times do not predict 1/f autocorrelations and vice versa. Here, we find that 1/f noise and heavy-tailed response distributions co-occur in both types of tasks. 
We also show that a statistical sampling algorithm, developed to deal with patchy environments, generates both heavy-tailed distributions and 1/f noise, suggesting that cognitive noise may be a functional adaptation to dealing with a complex world.","link":"/opendata/zhu-et-al-2022/"},{"title":"Ziaka & Protopapas (2023)","text":"Cognitive control has been typically examined using single-item tasks. This has implications for the generalizability of theories of control implementation. Previous studies have revealed that different control demands are posed by tasks depending on whether they present stimuli individually (i.e., single-item) or simultaneously in array format (i.e., multi-item). In the present study we tracked within-task performance in single-item and multi-item Stroop tasks using simultaneous pupillometry, gaze, and behavioral response measures, aiming to explore the implications of format differences for cognitive control. The results indicated within-task performance decline in the multi-item version of the Stroop task, accompanied by pupil constriction and dwell time increase, in both the incongruent and the neutral condition. In contrast, no performance decline or dwell time increase was observed in the course of the single-item version of the task. These findings point to capacity constraints beyond the explanatory range of current theories of cognitive control, with implications for cognitive control research, and highlight the need for better understanding the cognitive demands of multi-item tasks.","link":"/opendata/ziaka-protopapas-2023/"},{"title":"Zika et al. (2023)","text":"Updating beliefs in changing environments can be driven by gradually adapting expectations or by relying on inferred hidden states (i.e. contexts), and changes therein. Previous work suggests that increased reliance on context could underlie fear relapse phenomena that hinder clinical treatment of anxiety disorders.
We test whether trait anxiety variations in a healthy population influence how much individuals rely on hidden-state inference. In a Pavlovian learning task, participants observed cues that predicted an upcoming electrical shock with repeatedly changing probability, and were asked to provide expectancy ratings on every trial. We show that trait anxiety is associated with steeper expectation switches after contingency reversals and reduced oddball learning. Furthermore, trait anxiety is related to a better fit of a state-inference model, compared to a gradual-learning model, when contingency changes are large. Our findings support previous work suggesting hidden-state inference as a mechanism behind anxiety-related fear relapse phenomena.","link":"/opendata/zika-et-al-2023/"},{"title":"Zilker et al. (2020)","text":"The canonical conclusion from research on age differences in risky choice is that older adults are more risk averse than younger adults, at least in choices involving gains. Most of the evidence for this conclusion derives from studies that used a specific type of choice problem: choices between a safe and a risky option. However, safe and risky options differ not only in the degree of risk but also in the amount of information to be processed, that is, in their complexity. In both an online and a lab experiment, we demonstrate that differences in option complexity can be a key driver of age differences in risk attitude. When the complexity of the safe option is increased, older adults no longer seem more risk averse than younger adults (in gains). Using computational modeling, we test mechanisms that potentially underlie the effect of option complexity. The results show that participants are not simply averse to complexity, and that increasing the complexity of safe options does more than simply make responses more noisy.
Rather, differences in option complexity affect the processing of attribute information: whereas the availability of a simple safe option is associated with the distortion of probability weighting and lower outcome sensitivity, these effects are attenuated when both options are more similar in complexity. We also dissociate these effects of option complexity from an effect of certainty. Our findings may also have implications for age differences in other decision phenomena (e.g., framing effect, loss aversion, immediacy effect).","link":"/opendata/zilker-et-al-2020/"},{"title":"Zilker & Pachur (2023)","text":"Probability weighting is one of the most powerful theoretical constructs in descriptive models of risky choice and constitutes a central component of cumulative prospect theory (CPT). Probability weighting has been shown to be related to two facets of attention allocation: one analysis showed that differences in the shape of CPT’s probability-weighting function are linked to differences in how attention is allocated across attributes (i.e., probabilities vs. outcomes); another analysis (that used a different measure of attention) showed a link between probability weighting and differences in how attention is allocated across options. However, the relationship between these two links is unclear. We investigate to what extent attribute attention and option attention independently contribute to probability weighting. Reanalyzing data from a process-tracing study, we first demonstrate links between probability weighting and both attribute attention and option attention within the same data set and the same measure of attention. We then find that attribute attention and option attention are at best weakly related and have independent and distinct effects on probability weighting. Moreover, deviations from linear weighting mainly emerged when attribute attention or option attention were imbalanced. 
Our analyses enrich the understanding of the cognitive underpinnings of preferences and illustrate that similar probability-weighting patterns can be associated with very different attentional policies. This complicates an unambiguous psychological interpretation of psycho-economic functions. Our findings indicate that cognitive process models of decision making should aim to concurrently account for the effects of different facets of attention allocation on preference. In addition, we argue that the origins of biases in attribute attention and option attention need to be better understood.","link":"/opendata/zilker-pachur-2023/"},{"title":"Ziv et al. (2022)","text":"The purpose of this pre-registered study was to examine whether asking gamers and non-gamers about their video game playing habits before or after they performed computerized cognitive-motor tasks affects their performance of those tasks. We recruited 187 participants from an online participant recruitment platform. Out of those participants, 131 matched our criteria as gamers or non-gamers. They were then divided into two subgroups, and performed a choice-RT task, a Simon task, an alternate task-switching task, and a digit span memory task either before or after answering a video-game playing habits questionnaire. The results showed that gamers who completed a video-games questionnaire before performing the tasks had faster reaction times (RTs) in the Simon task compared with gamers who answered the questionnaire after performing the tasks. In contrast, non-gamers who answered the questionnaire before the task had slower RTs in the Simon task and the alternate task-switching task compared with non-gamers who answered the questionnaire after performing the tasks. The results suggest that answering a video-games questionnaire before the start of a study can lead to a response expectancy effect: positive for gamers and negative for non-gamers.
This may bias findings of studies examining video games and the performance of cognitive-motor tasks.","link":"/opendata/ziv-et-al-2022/"},{"title":"Zivony & Eimer (2023)","text":"When observers have to identify an object embedded in a rapid serial visual presentation (RSVP) stream, they often erroneously report the identity of a distractor instead of the target (distractor intrusion). In two experiments, we examined whether these intrusion errors are associated with the speed of attentional engagement. Participants reported the identity of target digits indicated by shape selection cues. To manipulate the speed of engagement, targets appeared either within a single RSVP stream or unpredictably in one of two streams. Objects that followed the selection cue were reported more frequently when engagement was delayed (two streams), whereas the probability of reporting objects preceding the cue was higher when engagement was faster (single stream). These results show that distractor intrusions are closely linked to the allocation of selective attention in time, making the intrusion paradigm a useful tool for research into the temporal dynamics of attention. 
They also provide new evidence for the idea that attentional selectivity operates within brief periods of perceptual enhancement (attentional episodes), facilitating the processing of all objects within this period, regardless of their status as targets or distractors.","link":"/opendata/zivony-eimer-2023/"},{"title":"Zorowitz & Niv (2023)","text":"Data from N=149 participants who completed a gamified version of the two-step task under one of three conditions: (1) stimuli from both first- and second-state choices were randomly assigned to right/left positions on the screen on every trial; (2) stimuli from both first- and second-state choices were assigned fixed right/left positions on the screen (i.e., unchanging across trials); or (3) stimuli from first-state choices were randomly assigned to right/left positions on the screen on every trial. Second-state stimuli were assigned fixed right/left positions on the screen (i.e., unchanging across trials).","link":"/opendata/zorowitz-niv-2023/"},{"title":"Zorowitz et al. (2023a)","text":"Matrix reasoning tasks are among the most widely used measures of cognitive ability in the behavioral sciences, but the lack of matrix reasoning tests in the public domain complicates their use. Here we present an extensive investigation and psychometric validation of the matrix reasoning item bank (MaRs-IB), an open-access set of matrix reasoning items. In a first study, we calibrate the psychometric functioning of the items in the MaRs-IB in a large sample of adult participants (N=1501). Using additive multilevel item structure models, we establish that the MaRs-IB has many desirable psychometric properties: its items span a wide range of difficulty, possess medium-to-large levels of discrimination, and exhibit robust associations between item complexity and difficulty. However, we also find that item clones are not always psychometrically equivalent and cannot be assumed to be exchangeable.
In a second study, we demonstrate how experimenters can use the estimated item parameters to design new matrix reasoning tests using optimal item assembly. Specifically, we design and validate two new sets of test forms in an independent sample of adults (N=600). We find these new tests possess good reliability and convergent validity with an established measure of matrix reasoning. We hope that the materials and results made available here will encourage experimenters to use the MaRs-IB in their research.","link":"/opendata/zorowitz-et-al-2023a/"},{"title":"Zorowitz et al. (2023b)","text":"A common research design in the field of computational psychiatry involves leveraging the power of online participant recruitment to assess correlations between behavior in cognitive tasks and the self-reported severity of psychiatric symptoms in large, diverse samples. Although large online samples have many advantages for psychiatric research, some potential pitfalls of this research design are not widely understood. Here we detail circumstances in which entirely spurious correlations may arise between task behavior and symptom severity as a result of inadequate screening of careless or low-effort responding on psychiatric symptom surveys. Specifically, since many psychiatric symptom surveys have asymmetric ground-truth score distributions in the general population, participants who respond carelessly on these surveys will show apparently elevated symptom levels. If these participants are similarly careless in their task performance, and are not excluded from analysis, this may result in a spurious association between greater symptom scores and worse behavioral task performance. Here, we demonstrate exactly this pattern of results in two independent samples of participants (total N = 779) recruited online to complete a self-report symptom battery and one of two common cognitive tasks. 
We show that many behavior-symptom correlations are entirely abolished when participants flagged for careless responding on surveys are excluded from analysis. We also show that exclusion based on task performance alone is often insufficient to prevent these spurious correlations. Of note, we demonstrate that false-positive rates for these spurious correlations increase with sample size, contrary to common assumptions. We offer guidance on how researchers using this general experimental design can guard against this issue in future research; in particular, we recommend the adoption of screening methods for self-report measures that are currently uncommon in this field.","link":"/opendata/zorowitz-et-al-2023b/"},{"title":"Zorowitz et al. (2023c)","text":"The Pavlovian go/no-go task is commonly used to measure individual differences in Pavlovian biases and their interaction with instrumental learning. However, prior research has found suboptimal reliability for computational model-based performance measures for this task, limiting its usefulness in individual-differences research. These studies did not make use of several strategies previously shown to enhance task-measure reliability (e.g., task gamification, hierarchical Bayesian modeling for model estimation). Here we investigated if such approaches could improve the task’s reliability. Across two experiments, we recruited two independent samples of adult participants (N=103, N=110) to complete a novel, gamified version of the Pavlovian go/no-go task multiple times over several weeks. We used hierarchical Bayesian modeling to derive reinforcement learning model-based indices of participants’ task performance, and additionally to estimate the reliability of these measures. In Experiment 1, we observed considerable and unexpected practice effects, with most participants reaching near-ceiling levels of performance with repeat testing. 
Consequently, the test-retest reliability of some model parameters was unacceptable (range: 0.379–0.973). In Experiment 2, participants completed a modified version of the task designed to lessen these practice effects. We observed greatly reduced practice effects and improved estimates of the test-retest reliability (range: 0.696–0.989). The results demonstrate that model-based measures of performance on the Pavlovian go/no-go task can reach levels of reliability sufficient for use in individual-differences research. However, additional investigation is necessary to validate the modified version of the task in other populations and settings.","link":"/opendata/zorowitz-et-al-2023c/"}],"tags":[{"name":"memory","slug":"memory","link":"/opendata/tags/memory/"},{"name":"explore/exploit","slug":"explore-exploit","link":"/opendata/tags/explore-exploit/"},{"name":"2-arm bandit","slug":"2-arm-bandit","link":"/opendata/tags/2-arm-bandit/"},{"name":"pavlovian go/no-go task","slug":"pavlovian-go-no-go-task","link":"/opendata/tags/pavlovian-go-no-go-task/"},{"name":"schizotypy","slug":"schizotypy","link":"/opendata/tags/schizotypy/"},{"name":"m/eeg","slug":"m-eeg","link":"/opendata/tags/m-eeg/"},{"name":"social decision making","slug":"social-decision-making","link":"/opendata/tags/social-decision-making/"},{"name":"cognitive control","slug":"cognitive-control","link":"/opendata/tags/cognitive-control/"},{"name":"planning","slug":"planning","link":"/opendata/tags/planning/"},{"name":"sequential sampling","slug":"sequential-sampling","link":"/opendata/tags/sequential-sampling/"},{"name":"probability estimation","slug":"probability-estimation","link":"/opendata/tags/probability-estimation/"},{"name":"metacognition","slug":"metacognition","link":"/opendata/tags/metacognition/"},{"name":"confidence","slug":"confidence","link":"/opendata/tags/confidence/"},{"name":"perceptual decision 
making","slug":"perceptual-decision-making","link":"/opendata/tags/perceptual-decision-making/"},{"name":"pavlovian conditioning","slug":"pavlovian-conditioning","link":"/opendata/tags/pavlovian-conditioning/"},{"name":"eye-tracking","slug":"eye-tracking","link":"/opendata/tags/eye-tracking/"},{"name":"information seeking","slug":"information-seeking","link":"/opendata/tags/information-seeking/"},{"name":"paranoia","slug":"paranoia","link":"/opendata/tags/paranoia/"},{"name":"time perception","slug":"time-perception","link":"/opendata/tags/time-perception/"},{"name":"database","slug":"database","link":"/opendata/tags/database/"},{"name":"multi-arm bandit","slug":"multi-arm-bandit","link":"/opendata/tags/multi-arm-bandit/"},{"name":"restless bandit","slug":"restless-bandit","link":"/opendata/tags/restless-bandit/"},{"name":"punishment","slug":"punishment","link":"/opendata/tags/punishment/"},{"name":"anxiety","slug":"anxiety","link":"/opendata/tags/anxiety/"},{"name":"depression","slug":"depression","link":"/opendata/tags/depression/"},{"name":"sequential decision making","slug":"sequential-decision-making","link":"/opendata/tags/sequential-decision-making/"},{"name":"approach/avoidance","slug":"approach-avoidance","link":"/opendata/tags/approach-avoidance/"},{"name":"foraging","slug":"foraging","link":"/opendata/tags/foraging/"},{"name":"risk sensitivity","slug":"risk-sensitivity","link":"/opendata/tags/risk-sensitivity/"},{"name":"development","slug":"development","link":"/opendata/tags/development/"},{"name":"adolescence","slug":"adolescence","link":"/opendata/tags/adolescence/"},{"name":"working memory","slug":"working-memory","link":"/opendata/tags/working-memory/"},{"name":"configural 
learning","slug":"configural-learning","link":"/opendata/tags/configural-learning/"},{"name":"compulsivity","slug":"compulsivity","link":"/opendata/tags/compulsivity/"},{"name":"tdcs","slug":"tdcs","link":"/opendata/tags/tdcs/"},{"name":"agency","slug":"agency","link":"/opendata/tags/agency/"},{"name":"reversal learning","slug":"reversal-learning","link":"/opendata/tags/reversal-learning/"},{"name":"economic game","slug":"economic-game","link":"/opendata/tags/economic-game/"},{"name":"range adaptation","slug":"range-adaptation","link":"/opendata/tags/range-adaptation/"},{"name":"counterfactual feedback","slug":"counterfactual-feedback","link":"/opendata/tags/counterfactual-feedback/"},{"name":"free association","slug":"free-association","link":"/opendata/tags/free-association/"},{"name":"narrative","slug":"narrative","link":"/opendata/tags/narrative/"},{"name":"test-retest","slug":"test-retest","link":"/opendata/tags/test-retest/"},{"name":"volatility","slug":"volatility","link":"/opendata/tags/volatility/"},{"name":"compound generalization","slug":"compound-generalization","link":"/opendata/tags/compound-generalization/"},{"name":"mood","slug":"mood","link":"/opendata/tags/mood/"},{"name":"decisions from description","slug":"decisions-from-description","link":"/opendata/tags/decisions-from-description/"},{"name":"effort","slug":"effort","link":"/opendata/tags/effort/"},{"name":"continuous outcomes","slug":"continuous-outcomes","link":"/opendata/tags/continuous-outcomes/"},{"name":"stress","slug":"stress","link":"/opendata/tags/stress/"},{"name":"two-step","slug":"two-step","link":"/opendata/tags/two-step/"},{"name":"mouse-tracking","slug":"mouse-tracking","link":"/opendata/tags/mouse-tracking/"},{"name":"iowa gambling task","slug":"iowa-gambling-task","link":"/opendata/tags/iowa-gambling-task/"},{"name":"temporal discounting","slug":"temporal-discounting","link":"/opendata/tags/temporal-discounting/"},{"name":"pathological 
gambling","slug":"pathological-gambling","link":"/opendata/tags/pathological-gambling/"},{"name":"mental imagery","slug":"mental-imagery","link":"/opendata/tags/mental-imagery/"},{"name":"time pressure","slug":"time-pressure","link":"/opendata/tags/time-pressure/"},{"name":"value construction","slug":"value-construction","link":"/opendata/tags/value-construction/"},{"name":"stroop","slug":"stroop","link":"/opendata/tags/stroop/"},{"name":"probabilistic selection task","slug":"probabilistic-selection-task","link":"/opendata/tags/probabilistic-selection-task/"},{"name":"parkinson's","slug":"parkinson-s","link":"/opendata/tags/parkinson-s/"},{"name":"eeg","slug":"eeg","link":"/opendata/tags/eeg/"},{"name":"latent cause inference","slug":"latent-cause-inference","link":"/opendata/tags/latent-cause-inference/"},{"name":"self-report only","slug":"self-report-only","link":"/opendata/tags/self-report-only/"},{"name":"reward visibility","slug":"reward-visibility","link":"/opendata/tags/reward-visibility/"},{"name":"generalization","slug":"generalization","link":"/opendata/tags/generalization/"},{"name":"ecological momentary assessment","slug":"ecological-momentary-assessment","link":"/opendata/tags/ecological-momentary-assessment/"},{"name":"mania","slug":"mania","link":"/opendata/tags/mania/"},{"name":"horizons task","slug":"horizons-task","link":"/opendata/tags/horizons-task/"},{"name":"impulsivity","slug":"impulsivity","link":"/opendata/tags/impulsivity/"},{"name":"psychosis","slug":"psychosis","link":"/opendata/tags/psychosis/"},{"name":"sequence learning","slug":"sequence-learning","link":"/opendata/tags/sequence-learning/"},{"name":"serial reaction time","slug":"serial-reaction-time","link":"/opendata/tags/serial-reaction-time/"},{"name":"inverse RL","slug":"inverse-RL","link":"/opendata/tags/inverse-RL/"},{"name":"avoidance","slug":"avoidance","link":"/opendata/tags/avoidance/"},{"name":"clinical 
trial","slug":"clinical-trial","link":"/opendata/tags/clinical-trial/"},{"name":"balloon analog risk task","slug":"balloon-analog-risk-task","link":"/opendata/tags/balloon-analog-risk-task/"},{"name":"habits","slug":"habits","link":"/opendata/tags/habits/"},{"name":"outcome devaluation","slug":"outcome-devaluation","link":"/opendata/tags/outcome-devaluation/"},{"name":"autism","slug":"autism","link":"/opendata/tags/autism/"},{"name":"attention","slug":"attention","link":"/opendata/tags/attention/"},{"name":"spatial navigation","slug":"spatial-navigation","link":"/opendata/tags/spatial-navigation/"},{"name":"mindfulness","slug":"mindfulness","link":"/opendata/tags/mindfulness/"},{"name":"self-judgment","slug":"self-judgment","link":"/opendata/tags/self-judgment/"},{"name":"ptsd","slug":"ptsd","link":"/opendata/tags/ptsd/"},{"name":"adhd","slug":"adhd","link":"/opendata/tags/adhd/"},{"name":"mind wandering","slug":"mind-wandering","link":"/opendata/tags/mind-wandering/"},{"name":"stop signal","slug":"stop-signal","link":"/opendata/tags/stop-signal/"},{"name":"monetary incentive delay task","slug":"monetary-incentive-delay-task","link":"/opendata/tags/monetary-incentive-delay-task/"},{"name":"go/no-go task","slug":"go-no-go-task","link":"/opendata/tags/go-no-go-task/"},{"name":"aging","slug":"aging","link":"/opendata/tags/aging/"},{"name":"trauma","slug":"trauma","link":"/opendata/tags/trauma/"},{"name":"visual perception","slug":"visual-perception","link":"/opendata/tags/visual-perception/"},{"name":"auditory perception","slug":"auditory-perception","link":"/opendata/tags/auditory-perception/"},{"name":"causal reasoning","slug":"causal-reasoning","link":"/opendata/tags/causal-reasoning/"},{"name":"longitudinal","slug":"longitudinal","link":"/opendata/tags/longitudinal/"},{"name":"pavlovian instrumental 
transfer","slug":"pavlovian-instrumental-transfer","link":"/opendata/tags/pavlovian-instrumental-transfer/"},{"name":"curiosity","slug":"curiosity","link":"/opendata/tags/curiosity/"},{"name":"abstract reasoning","slug":"abstract-reasoning","link":"/opendata/tags/abstract-reasoning/"},{"name":"executive functioning","slug":"executive-functioning","link":"/opendata/tags/executive-functioning/"},{"name":"anchoring","slug":"anchoring","link":"/opendata/tags/anchoring/"},{"name":"substance use","slug":"substance-use","link":"/opendata/tags/substance-use/"},{"name":"categorization","slug":"categorization","link":"/opendata/tags/categorization/"},{"name":"implicit association test","slug":"implicit-association-test","link":"/opendata/tags/implicit-association-test/"},{"name":"multisensory integration","slug":"multisensory-integration","link":"/opendata/tags/multisensory-integration/"}],"categories":[]} \ No newline at end of file