What is hBayesDM?

hBayesDM (hierarchical Bayesian modeling of Decision-Making tasks) is a user-friendly R package that offers hierarchical Bayesian analysis of various computational models on an array of decision-making tasks. Click here to download its help file (reference manual). Click here to read our preprint. Click here to download a poster we recently presented at several conferences/meetings. You can find hBayesDM on CRAN now.

User Support (New in December 2016)

Users can ask questions and make suggestions through our mailing list or GitHub.

Latest Version

0.3.0 (Jan 14, 2017)

What’s new
Version 0.3.0
- Made several changes following the guidelines for R packages providing interfaces to Stan.
- Stan models are precompiled and models will run immediately when called.
- The default number of chains is set to 4.
- Set the default value of adapt_delta to 0.95 to reduce the potential for divergences.
- printFit function uses LOOIC by default. Users can select WAIC or both (LOOIC & WAIC) if needed.

Version 0.2.3.3
- Add help files.
- Add a function (rhat) for checking Rhat values.
- Change the link to the tutorial website.

Version 0.2.3.2
- Use wide normal distributions for unbounded parameters.
- Automatic removal of rows (trials) containing NAs.

Version 0.2.3.1
- Add a new function (plotInd) for plotting individual parameters. See its help file (?plotInd) for more details.

Motivation

Computational modeling provides a quantitative framework for investigating latent neurocognitive processes (e.g., learning rate, reward sensitivity) and interactions among multiple decision-making systems. Parameters of a computational model reflect psychologically meaningful individual differences; thus, obtaining accurate parameter estimates is critical to improving the interpretation of a model's findings. Hierarchical Bayesian analysis (HBA) is regarded as the gold standard for parameter estimation, especially when the amount of information from each participant is small (see "Why hierarchical Bayesian analysis?" below). However, many researchers interested in HBA find the approach too technical and challenging to implement.

We introduce a free R package, hBayesDM, which offers HBA of various computational models on an array of decision-making tasks (see below for a list of tasks and models currently available). Users can perform HBA of various computational models with a single line of code. Example datasets are also available. With hBayesDM, we hope anyone with minimal knowledge of programming can take advantage of advanced computational modeling and HBA. It is our expectation that hBayesDM will contribute to the dissemination of these computational tools and enable researchers in related fields to easily characterize latent neurocognitive processes within their study populations.


Why hierarchical Bayesian analysis (HBA)?



Most computational models do not have closed-form solutions, so parameter values must be estimated. Traditionally, parameters are estimated at the individual level with maximum likelihood estimation (MLE): point estimates are obtained for each individual separately. However, individual MLE estimates are often noisy, especially when the amount of data is insufficient. A group-level analysis (e.g., group-level MLE), which estimates a single set of parameters for the whole group of individuals, may generate more reliable estimates but inevitably ignores individual differences.

HBA and other hierarchical approaches (e.g., Huys et al., 2011) allow for individual differences while pooling information across individuals. Individual and group parameters (i.e., their posterior distributions) are estimated simultaneously in a mutually constraining fashion. Consequently, individual parameter estimates tend to be more stable and reliable, because commonalities among individuals are captured and informed by group tendencies (e.g., Ahn et al., 2011). HBA also yields full posterior distributions instead of point estimates, thus providing rich information about the parameters. HBA also makes it easy to do group comparisons in a Bayesian fashion (e.g., comparing clinical and non-clinical groups; see an example below).

HBA is a branch of Bayesian statistics, and the conceptual framework of Bayesian data analysis is clearly described in Chapter 2 of John Kruschke’s book (J. Kruschke, 2014). In Bayesian statistics, we assume prior beliefs (i.e., prior distributions) for model parameters and update the priors into posterior distributions given the data (e.g., trial-by-trial choices and outcomes) using Bayes’ rule. Note that the prior distributions we use for model parameters are vague (e.g., flat) or weakly informative, so they play a minimal role in the posterior distribution.
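
In symbols, for model parameters \(\theta\) and observed data \(D\), Bayes’ rule updates the prior into the posterior:

\[ p(\theta \mid D) = \frac{p(D \mid \theta)\, p(\theta)}{p(D)} \propto p(D \mid \theta)\, p(\theta) \]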

For Bayesian updating, we use the Stan software package (http://mc-stan.org/), which implements a very efficient Markov chain Monte Carlo (MCMC) algorithm called Hamiltonian Monte Carlo (HMC). HMC is known to be effective and works well even for large, complex models. See the Stan reference manual (http://mc-stan.org/documentation/) and Chapter 14 of J. Kruschke (2014) for a comprehensive description of HMC and Stan. What is MCMC and why should we use it? Remember, we need to update our priors into posterior distributions in order to make inferences about model parameters. Simply put, MCMC is a way of approximating a posterior distribution by drawing a large number of samples from it. MCMC algorithms are used when posterior distributions cannot be derived analytically, or when MCMC is more efficient than evaluating the whole grid of the parameter space (i.e., grid search). To learn more about the basic foundations of MCMC, we recommend Chapter 7 of J. Kruschke (2014).

A detailed specification of the Bayesian models is not available in text yet (stay tuned for our tutorial paper, whose citation is listed below). In the meantime, users can go over our Stan code to check how we implement each computational model (e.g., PathTo_gng_m1 = system.file("stan/gng_m1.stan", package="hBayesDM") ). We made strong efforts to optimize the Stan code through reparameterization (e.g., the Matt trick) and vectorization.
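
For example, to locate and open the Stan code for the gng_m1 model from within R:

PathTo_gng_m1 = system.file("stan/gng_m1.stan", package="hBayesDM")
file.show(PathTo_gng_m1)   # display the Stan model code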


Prerequisites

Note: Additional R packages (ggplot2, loo, mail, modeest) will be installed (if not already installed) during the installation of hBayesDM.

List of tasks and computational models implemented in hBayesDM

As of hBayesDM v0.3.0 (January 14, 2017)
For each task (alphabetical order), the models, their hBayesDM functions, and references (see below for full citations) are:

Delay Discounting Task
- Constant-Sensitivity (CS) model: dd_cs (Ebert & Prelec, 2007)
- Exponential model: dd_exp (Samuelson, 1937)
- Hyperbolic model: dd_hyp (Mazur, 1987)

Iowa Gambling Task (IGT)
- Prospect Valence Learning-DecayRI: igt_pvl_decay (Ahn et al., 2011; 2014)
- Prospect Valence Learning-Delta: igt_pvl_delta (Ahn et al., 2008)
- Value-Plus-Perseverance (VPP): igt_vpp (Worthy et al., 2013)

Orthogonalized Go/Nogo Task
- RW + noise: gng_m1 (Guitart-Masip et al., 2012)
- RW + noise + go bias: gng_m2 (Guitart-Masip et al., 2012)
- RW + noise + go bias + Pavlovian bias: gng_m3 (Guitart-Masip et al., 2012)
- M5 (see Table 1 of the reference): gng_m4 (Cavanagh et al., 2013)

Probabilistic Reversal Learning (PRL) Task
- Experience-Weighted Attraction: prl_ewa (Ouden et al., 2013)
- Fictitious update: prl_fictitious (Gläscher et al., 2009)
- Reward-Punishment: prl_rp (Ouden et al., 2013)

Risk-Aversion Task
- Prospect Theory (PT): ra_prospect (Sokol-Hessner et al., 2009)
- PT without loss aversion (LA): ra_noLA (Sokol-Hessner et al., 2009)
- PT without risk aversion (RA): ra_noRA (Tom et al., 2007)

Two-Armed Bandit (Experience-based) Task
- Rescorla-Wagner (delta) model: bandit2arm_delta (Erev et al., 2010; Hertwig et al., 2004)

Ultimatum Game
- Ideal Bayesian observer model: ug_bayes (Xiang et al., 2013)
- Rescorla-Wagner (delta) model: ug_delta (Gu et al., 2015)

How to install hBayesDM

The hBayesDM package is now available on CRAN! There are three ways to install hBayesDM, all of which are described below. Make sure to install RStan prior to installing hBayesDM. Typically, RStan can be installed just by typing install.packages("rstan", dependencies = TRUE). On Windows, you need to install Rtools first in order to install RStan. How can you tell if RStan is correctly installed? Check if you can fit the ‘Eight Schools’ model without a problem. Check here or here if you experience difficulty installing RStan.
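
Method A

Install the package from CRAN:

install.packages("hBayesDM", dependencies=TRUE)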

Method B

Install the package from GitHub:

if (!require("devtools") || packageVersion("devtools") < "1.6") {
  install.packages("devtools")  # install (or update) the devtools package
}
devtools::install_github("CCS-Lab/hBayesDM")


Method C

  1. Download a copy from here to a directory (e.g., “~/Downloads”).
  2. Open R(Studio) and set the working directory to the download folder (e.g., setwd("~/Downloads") ).
  3. Install the package from the downloaded file.
install.packages(pkgs="hBayesDM_0.3.0.tar.gz", dependencies=TRUE, repos=NULL)


How to use hBayesDM

First, open RStudio (or just R) and load the package:

library(hBayesDM)


Four steps of doing HBA with hBayesDM are illustrated below. As an example, four models of the orthogonalized Go/Nogo task (Guitart-Masip et al., 2012; Cavanagh et al., 2013) are fit and compared with the hBayesDM package.




1. Prepare the data

dataPath = system.file("extdata/gng_exampleData.txt", package="hBayesDM")

If you downloaded the sample data to “~/Downloads”, you can specify the path to the data file like this:

dataPath = "~/Downloads/gng_exampleData.txt"
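
Before fitting, you can load and inspect the file; a minimal sketch, assuming the tab-delimited format of the example data (one row per trial, with subjects identified by a subjID column):

gng_data = read.table(dataPath, header=TRUE)
head(gng_data)   # inspect the first few trials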


2. Fit candidate models

Below, the gng_m1 model is fit with its sample data. The command indicates that four MCMC chains are run and four cores are used for parallel computing. If you enter “example” as an argument for data, hBayesDM will use the sample data for the task. Note that you can save the output to a file (see the saveDir argument) or receive an email when fitting is complete (see the email argument). You can also assign your own initial values (see the inits argument; e.g., inits=c(0.1, 0.2, 1.0)):

output1 = gng_m1(data="example", niter=2000, nwarmup=1000, nchain=4, ncore=4)

This command is identical to the one below, because for gng models the default numbers of total iterations (MCMC samples, including warmup), warmup iterations, and chains are 2,000, 1,000, and 4, respectively.

output1 = gng_m1("example", ncore=4)
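
The optional arguments mentioned above can be combined in a single call; for example (the save directory and email address below are placeholders):

output1 = gng_m1("example", ncore=4, inits=c(0.1, 0.2, 1.0),
                 saveDir="~/hBayesDM_results",    # save the output to this directory
                 email="your_email@example.com")  # email notification upon completion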


Executing the command will generate messages like the ones below in the R console. It will take approximately 2~3 minutes (with the gng_m1 model and “example” data) for the model fitting to complete. Note that you may get warning messages about “numerical problems” or about a certain number of “divergent transitions after warmup”. When we checked our models with the example datasets, warning messages appeared mostly at the beginning of the warmup period, and there were very few divergent transitions after warmup. In such cases, you can ignore the warnings. Also see Appendix D of the Stan Reference Manual.

Details:
 # of chains                   =  4 
 # of cores used               =  4 
 # of MCMC samples (per chain) =  2000 
 # of burn-in samples          =  1000 
 # of subjects                 =  10 
 # of (max) trials per subject =  240 

************************************
** Building a model. Please wait. **
************************************
starting worker pid=75130 on localhost:11950 at 08:25:48.905
starting worker pid=75138 on localhost:11950 at 08:25:49.101

SAMPLING FOR MODEL 'gng_m1' NOW (CHAIN 1).

Chain 1, Iteration:    1 / 2000 [  0%]  (Warmup)
SAMPLING FOR MODEL 'gng_m1' NOW (CHAIN 2).
...

When model fitting is complete, you will see the following message, and the results are stored in output1.

************************************
**** Model fitting is complete! ****
************************************


output1, an hBayesDM object, is a list with six elements (class: “hBayesDM”):

  1. model: Name of the fitted model (i.e., output1$model is ‘gng_m1’).
  2. allIndPars: Summary of individual subjects’ parameters (default: mean). Users can also choose to use median or mode (e.g., output1 = gng_m1("example", indPars="mode") ).
  3. parVals: Posterior samples of all parameters. Extracted by rstan::extract(rstan_object, permuted=T). Note that hyper (group) mean parameters are indicated by mu_PARAMETER (e.g., mu_xi, mu_ep, mu_rho).
  4. fit: RStan object (i.e., fit = stan(file='gng_m1.stan', ...) ).
  5. rawdata: Raw trial-by-trial data used for modeling. Raw data are provided in the output to allow users to easily access data and compare trial-by-trial model-based regressors (e.g., prediction errors) with choice data.
  6. modelRegressor (optional): Trial-by-trial model-based regressors, such as prediction errors, the values of the chosen option, etc. For each model, we pre-select appropriate model-based regressors. Currently (version 0.3.0), this feature is available only for the orthogonalized Go/NoGo task.
> output1$allIndPars
           xi        ep      rho subjID
1  0.03688558 0.1397615 5.902901      1
2  0.02934812 0.1653435 6.066120      2
3  0.04467025 0.1268796 5.898099      3
4  0.02103926 0.1499842 6.185020      4
5  0.02620808 0.1498962 6.081908      5
...
> output1$fit
Inference for Stan model: gng_m1.
4 chains, each with iter=2000; warmup=1000; thin=1; 
post-warmup draws per chain=1000, total post-warmup draws=4000.

               mean se_mean   sd    2.5%     25%     50%     75%   97.5% n_eff Rhat
mu_xi          0.03    0.00 0.02    0.00    0.02    0.03    0.05    0.08  2316 1.00
mu_ep          0.15    0.00 0.02    0.11    0.13    0.15    0.16    0.19  4402 1.00
mu_rho         5.97    0.01 0.72    4.76    5.45    5.89    6.40    7.61  3821 1.00
sigma[1]       0.54    0.06 1.02    0.02    0.18    0.35    0.61    1.99   318 1.01
sigma[2]       0.12    0.00 0.08    0.01    0.05    0.10    0.16    0.31  2620 1.00
sigma[3]       0.12    0.00 0.09    0.01    0.05    0.10    0.16    0.33  2402 1.00
...


\(\hat{R}\) (Rhat) is an index of the convergence of the chains. \(\hat{R}\) values close to 1.00 indicate that the MCMC chains have converged to stationary target distributions. When we check the MCMC performance of our models on sample data, \(\hat{R}\) values are 1.00 for most parameters, and at most 1.04.
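
To check \(\hat{R}\) values programmatically, you can use the package’s rhat function (added in version 0.2.3.3); see ?rhat for its exact arguments:

rhat(output1)   # print Rhat values of all parameters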


3. Plot model parameters

Make sure to visually diagnose MCMC performance (i.e., check whether the MCMC samples are well mixed and have converged to stationary distributions). For the diagnosis or visualization of hyper (group) parameters, you can use plot.hBayesDM or simply plot, which dispatches to the method matching the object’s class. The class of any hBayesDM output is “hBayesDM”.
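
You can confirm this in the console:

class(output1)   # "hBayesDM"; plot(output1) therefore dispatches to plot.hBayesDM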

Let’s first visually diagnose MCMC performance of hyper parameters with trace plots:

plot(output1, type="trace", fontSize=11)   # traceplot of hyper parameters. Set font size 11.

The trace plots indicate that MCMC samples are indeed well mixed and converged, which is consistent with their \(\hat{R}\) values (see here for some discussion on why we care about mixing). Note that the plots above exclude burn-in samples. If you want, you can include burn-in (warmup) MCMC samples.

plot(output1, type="trace", inc_warmup=T)   # traceplot of hyper parameters w/ warmup samples

You can also plot the posterior distributions of the hyper (group) parameters with plot:

plot(output1)

To visualize individual parameters, you can use our newly updated function called plotInd (based on Stan’s native function stan_plot). For example, to plot each individual’s \(\epsilon\) (learning rate) parameter (e.g., individual posterior distributions):

plotInd(output1, "ep")  



4. Compare models (and groups)

To compare models, you first fit all models in the same manner as the example above (e.g., output4 = gng_m4("example", niter=2000, nwarmup=1000, nchain=4, ncore=4) ). Next, use the printFit command, which is a convenient way to summarize the Leave-One-Out Information Criterion (LOOIC) or the Widely Applicable Information Criterion (WAIC) of all models under consideration (see Vehtari et al. (2015) for the details of LOOIC and WAIC). By default, printFit uses LOOIC, which is preferable to WAIC when there are influential observations (Vehtari et al., 2015).

Assuming four models’ outputs are output1 (gng_m1), output2 (gng_m2), output3 (gng_m3), and output4 (gng_m4), their model fits can be simultaneously summarized by:

> printFit(output1, output2, output3, output4)
   Model    LOOIC   
1 gng_m1 1588.843 
2 gng_m2 1571.129 
3 gng_m3 1573.872 
4 gng_m4 1543.335 

Note that the lower the LOOIC, the better the model fit. Thus, model #4 (gng_m4) shows the best fit among the four models. Users can print WAIC, or both LOOIC and WAIC, by calling printFit(output1, output2, output3, output4, ic="waic") or printFit(output1, output2, output3, output4, ic="both"). Use the extract_ic function (e.g., extract_ic(output3) ) if you want more detailed information, including standard errors and the expected log pointwise predictive density (elpd). Note that the extract_ic function can be used only for a single model output.

We also want to remind you that there are multiple ways to compare computational models (e.g., simulation method (absolute model performance), parameter recovery, generalization criterion) and the goodness of fit (e.g., LOOIC or WAIC) is just one of them. Check if predictions from your model (e.g., “posterior predictive check”) can mimic the data (same data or new data) with reasonable accuracy. See J. Kruschke (2014) (for posterior predictive check), Guitart-Masip et al. (2012) (for goodness of fit and simulation performance on the orthogonalized Go/Nogo task), and Busemeyer & Wang (2000) (for generalization criterion) as well as Ahn et al. (2008; 2014) and Steingroever et al. (2014) (for the combination of multiple model comparison methods).

To compare two groups in a Bayesian fashion (e.g., Ahn et al., 2014), first you need to fit each group with the same model and ideally the same number of MCMC samples. For example,

data_group1 = "~/Project_folder/gng_data_group1.txt"  # data file for group1
data_group2 = "~/Project_folder/gng_data_group2.txt"  # data file for group2

output_group1 = gng_m4(data_group1)  # fit group1 data with the gng_m4 model 
output_group2 = gng_m4(data_group2)  # fit group2 data with the gng_m4 model 

# After model fitting is complete for both groups,
# evaluate the group difference (e.g., on the 'pi' parameter) by examining the posterior distribution of group mean differences.

diffDist = output_group1$parVals$mu_pi - output_group2$parVals$mu_pi  # group1 - group2 
HDIofMCMC( diffDist )  # Compute the 95% Highest Density Interval (HDI). 
plotHDI( diffDist )    # plot the group mean differences
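
You can also summarize the difference distribution directly; for example, the proportion of posterior samples above zero approximates the posterior probability that group1's mean is larger than group2's:

mean(diffDist > 0)   # posterior probability that group1's mu_pi exceeds group2's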


5. Extracting trial-by-trial regressors for model-based fMRI/EEG analysis

In model-based neuroimaging (e.g., O’Doherty et al., 2007), model-based time series of a latent cognitive process are generated by computational models; the time series are then convolved with a hemodynamic response function and regressed against fMRI or EEG data. This model-based neuroimaging approach has been particularly popular in cognitive neuroscience.


The biggest challenge in performing model-based fMRI/EEG is learning how to extract trial-by-trial model-based regressors. The hBayesDM package allows users to easily extract model-based regressors that can be used for model-based fMRI or EEG analysis. Note that in the current version (version 0.3.0), only the orthogonalized Go/NoGo task provides model-based regressors: the stimulus value (SV), the action weights (W(Go), W(NoGo)), and the action values (Q(Go), Q(NoGo)). With these trial-by-trial regressors, users can easily use their favorite neuroimaging package (e.g., Statistical Parametric Mapping (SPM; http://www.fil.ion.ucl.ac.uk/spm/)) to perform model-based fMRI analysis. See Chapter 5.5 of our paper for more details.


As an example, if you would like to extract trial-by-trial stimulus values (i.e., the expected value of the stimulus on each trial), first fit a model like the following (set the modelRegressor input variable to TRUE; its default value is FALSE):


# fit example data with the gng_m3 model
output = gng_m3(data="example", niter=2000, nwarmup=1000, modelRegressor=TRUE)


Once sampling is complete, all model-based regressors are contained in the modelRegressor list.

# store all subjects' stimulus value (SV) in ‘sv_all’
sv_all = output$modelRegressor$SV 

dim(output$modelRegressor$SV)  # number of rows=# of subjects (=10), number of columns=# of trials (=240)
## [1]  10 240
# visualize SV (Subject #1)
plot(sv_all[1, ], type="l", xlab="Trial", ylab="Stimulus Value (subject #1)")  

# visualize SV (Subject #5)
plot(sv_all[5, ], type="l", xlab="Trial", ylab="Stimulus Value (subject #5)")  

Similarly, users can extract and visualize other model-based regressors: W(Go), W(NoGo), Q(Go), and Q(NoGo) are stored in Wgo, Wnogo, Qgo, and Qnogo, respectively.
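
For example, to extract and plot Q(Go) for the first subject (using the same output object as above):

qgo_all = output$modelRegressor$Qgo   # all subjects' Q(Go) values
plot(qgo_all[1, ], type="l", xlab="Trial", ylab="Q(Go) (subject #1)")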


To-do list

We are planning to add more tasks and models in the near future. If you have any requests for a specific task or model, please let us know.


Citation

If you find our package or some codes useful for your research, please consider citing our work:

Ahn, W.-Y., Haines, N., & Zhang, L. (2016). Revealing neuro-computational mechanisms of reinforcement learning and decision-making with the hBayesDM package. bioRxiv. http://doi.org/10.1101/064287

Contact information

If you have any suggestions/feedback regarding any aspect of hBayesDM or would like to contribute to this project, please contact Woo-Young Ahn (Email: ahn.280@osu.edu; Lab: ccs-lab.github.io).


Suggested reading

You can refer to other helpful review papers or books (Busemeyer & Diederich, 2010; Daw, 2011; M. D. Lee, 2011; M. D. Lee & Wagenmakers, 2014; Shiffrin et al., 2008) to learn more about HBA or computational modeling in general.


References

Ahn, W.-Y., Busemeyer, J. R., Wagenmakers, E.-J., & Stout, J. C. (2008). Comparison of decision learning models using the generalization criterion method. Cognitive Science, 32(8), 1376–1402.

Ahn, W.-Y., Krawitz, A., Kim, W., Busemeyer, J. R., & Brown, J. (2011). A model-based fMRI analysis with hierarchical Bayesian parameter estimation. Journal of Neuroscience, Psychology, and Economics, 4(2), 95.

Ahn, W.-Y., Vasilev, G., Lee, S.-H., Busemeyer, J. R., Kruschke, J. K., Bechara, A., & Vassileva, J. (2014). Decision-making in stimulant and opiate addicts in protracted abstinence: Evidence from computational modeling with pure users. Frontiers in Psychology, 5, 849.

Busemeyer, J. R., & Diederich, A. (2010). Cognitive modeling. Sage Publications, Inc.

Busemeyer, J. R., & Wang, Y.-M. (2000). Model comparisons and model selections based on generalization criterion methodology. Journal of Mathematical Psychology, 44, 171–189.

Cavanagh, J. F., Eisenberg, I., Guitart-Masip, M., Huys, Q. J. M., & Frank, M. J. (2013). Frontal theta overrides Pavlovian learning biases. The Journal of Neuroscience, 33(19), 8541–8548.

Daw, N. D. (2011). Trial-by-trial data analysis using computational models. Decision Making, Affect, and Learning: Attention and Performance XXIII, 23, 3–38.

Daw, N. D., Gershman, S. J., Seymour, B., Dayan, P., & Dolan, R. J. (2011). Model-based influences on humans’ choices and striatal prediction errors. Neuron, 69(6), 1204–1215.

Daw, N. D., O’Doherty, J. P., Dayan, P., Seymour, B., & Dolan, R. J. (2006). Cortical substrates for exploratory decisions in humans. Nature, 441, 876–879.

Ebert, J. E., & Prelec, D. (2007). The fragility of time: Time-insensitivity and valuation of the near and far future. Management Science, 53(9), 1423–1438.

Erev, I., Ert, E., Roth, A. E., Haruvy, E., Herzog, S. M., Hau, R., … Lebiere, C. (2010). A choice prediction competition: Choices from experience and from description. Journal of Behavioral Decision Making, 23(1), 15–47.

Gläscher, J., Hampton, A. N., & O’Doherty, J. P. (2009). Determining a role for ventromedial prefrontal cortex in encoding action-based value signals during reward-related decision making. Cerebral Cortex, 19(2), 483–495.

Gu, X., Wang, X., Hula, A., Wang, S., Xu, S., Lohrenz, T. M., … Montague, P. R. (2015). Necessary, yet dissociable contributions of the insular and ventromedial prefrontal cortices to norm adaptation: Computational and lesion evidence in humans. The Journal of Neuroscience, 35(2), 467–473.

Guitart-Masip, M., Huys, Q. J. M., Fuentemilla, L., Dayan, P., Duzel, E., & Dolan, R. J. (2012). Go and no-go learning in reward and punishment: Interactions between affect and effect. Neuroimage, 62(1), 154–166.

Hertwig, R., Barron, G., Weber, E. U., & Erev, I. (2004). Decisions from experience and the effect of rare events in risky choice. Psychological Science, 15, 534–539.

Huys, Q. J. M., Cools, R., Gölzer, M., Friedel, E., Heinz, A., Dolan, R. J., & Dayan, P. (2011). Disentangling the roles of approach, activation and valence in instrumental and Pavlovian responding. PLoS Computational Biology, 7(4), e1002028.

Jessup, R., Bishara, A., & Busemeyer, J. (2008). Feedback Produces Divergence From Prospect Theory in Descriptive Choice. Psychological Science, 19(10), 1015–1022.

Kruschke, J. (2014). Doing Bayesian data analysis: A tutorial with R, JAGS, and Stan. Academic Press.

Lee, M. D. (2011). How cognitive modeling can benefit from hierarchical Bayesian models. Journal of Mathematical Psychology, 55(1), 1–7.

Lee, M. D., & Wagenmakers, E.-J. (2014). Bayesian cognitive modeling: A practical course. Cambridge University Press.

Mazur, J. E. (1987). An adjusting procedure for studying delayed reinforcement.

Newman, J. P., & Kosson, D. S. (1986). Passive avoidance learning in psychopathic and nonpsychopathic offenders. Journal of Abnormal Psychology, 95(3), 252.

Newman, J. P., Widom, C. S., & Nathan, S. (1985). Passive avoidance in syndromes of disinhibition, psychopathy, and extraversion. Journal of Personality and Individual Differences, 48, 1316–1327.

Ouden, H. E. den, Daw, N. D., Fernandez, G., Elshout, J. A., Rijpkema, M., Hoogman, M., … Cools, R. (2013). Dissociable effects of dopamine and serotonin on reversal learning. Neuron, 80(4), 1090–1100.

O’Doherty, J. P., Hampton, A., & Kim, H. (2007). Model-based fMRI and its application to reward learning and decision making. Annals of the New York Academy of Sciences, 1104(1), 35–53.

Samuelson, P. A. (1937). A note on measurement of utility. The Review of Economic Studies, 4(2), 155–161.

Shiffrin, R., Lee, M. D., Kim, W., & Wagenmakers, E. (2008). A survey of model evaluation approaches with a tutorial on hierarchical Bayesian methods. Cognitive Science: A Multidisciplinary Journal, 32(8), 1248–1284.

Sokol-Hessner, P., Hsu, M., Curley, N. G., Delgado, M. R., Camerer, C. F., & Phelps, E. A. (2009). Thinking like a trader selectively reduces individuals’ loss aversion. Proceedings of the National Academy of Sciences, 106(13), 5035–5040.

Steingroever, H., Wetzels, R., & Wagenmakers, E.-J. (2014). Absolute performance of reinforcement-learning models for the Iowa gambling task. Decision, 1(3), 161.

Tom, S. M., Fox, C. R., Trepel, C., & Poldrack, R. A. (2007). The neural basis of loss aversion in decision-making under risk. Science, 315, 515–518.

Vehtari, A., Gelman, A., & Gabry, J. (2015). Efficient implementation of leave-one-out cross-validation and WAIC for evaluating fitted Bayesian models. ArXiv Preprint ArXiv:1507.04544.

Worthy, D. A., Pang, B., & Byrne, K. A. (2013). Decomposing the roles of perseveration and expected value representation in models of the Iowa gambling task. Frontiers in Psychology, 4, 640.

Xiang, T., Lohrenz, T., & Montague, P. R. (2013). Computational substrates of norms and their violations during social exchange. The Journal of Neuroscience, 33(3), 1099–1108.