R/loo_model_weights.R
loo_model_weights.Rd
Model averaging via stacking of predictive distributions, pseudo-BMA weighting, or pseudo-BMA+ weighting with the Bayesian bootstrap. See Yao et al. (2018), Vehtari, Gelman, and Gabry (2017), and Vehtari, Simpson, Gelman, Yao, and Gabry (2019) for background.
loo_model_weights(x, ...)

# S3 method for default
loo_model_weights(
  x,
  ...,
  method = c("stacking", "pseudobma"),
  optim_method = "BFGS",
  optim_control = list(),
  BB = TRUE,
  BB_n = 1000,
  alpha = 1,
  r_eff_list = NULL,
  cores = getOption("mc.cores", 1)
)

stacking_weights(lpd_point, optim_method = "BFGS", optim_control = list())

pseudobma_weights(lpd_point, BB = TRUE, BB_n = 1000, alpha = 1)
x  A list of pointwise log-likelihood matrices or "psis_loo" objects (objects returned by loo()). Each matrix/object should have dimensions S by N, where S is the size of the posterior sample (with all chains merged) and N is the number of data points. If x is a list of log-likelihood matrices, the loo() function is used internally to compute the corresponding "psis_loo" objects.

...  Unused, except for the generic to pass arguments to individual methods.

method  Either "stacking" (the default) or "pseudobma", indicating which method to use for obtaining the weights. "stacking" refers to stacking of predictive distributions and "pseudobma" refers to pseudo-BMA+ weighting (or plain pseudo-BMA weighting if argument BB is FALSE).

optim_method  If method="stacking", the optimization method to use. The default is "BFGS"; see constrOptim() for other options.

optim_control  If method="stacking", a list of control parameters for the optimization passed to constrOptim().

BB  Logical used when method="pseudobma". If TRUE (the default), the Bayesian bootstrap is used to adjust the pseudo-BMA weighting, which is called pseudo-BMA+ weighting. It helps regularize the weights away from 0 and 1, reducing their variance.

BB_n  For pseudo-BMA+ weighting only, the number of samples to use for the Bayesian bootstrap. The default is BB_n=1000.

alpha  Positive scalar shape parameter in the Dirichlet distribution used for the Bayesian bootstrap. The default is alpha=1, which corresponds to a uniform distribution on the simplex.

r_eff_list  Optionally, a list of relative effective sample size estimates for the likelihood (exp(log_lik)) of each observation in each model. See relative_eff() for computing r_eff. If x is a list of "psis_loo" objects then r_eff_list is ignored.

cores  The number of cores to use for parallelization. This defaults to the option mc.cores, which can be set for an entire R session by options(mc.cores = NUMBER). The default is 1 core.

lpd_point  If calling stacking_weights() or pseudobma_weights() directly, a matrix of pointwise leave-one-out (or K-fold) log likelihoods evaluated for different models. It should be an N by K matrix, where N is the sample size and K is the number of models.
A numeric vector containing one weight for each model.
loo_model_weights() is a wrapper around the stacking_weights() and pseudobma_weights() functions that implements stacking, pseudo-BMA, and pseudo-BMA+ weighting for combining multiple predictive distributions. We can use approximate or exact leave-one-out cross-validation (LOO-CV) or K-fold CV to estimate the expected log predictive density (ELPD).
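For intuition about the pointwise log predictive density matrix that all of these methods consume, consider a simplified illustration with parameter-free candidate densities, where the leave-one-out log predictive density of each observation is just each model's log density at that point (in real workflows these values are estimated with loo() or K-fold CV):

```r
# Simplified illustration (assumption: parameter-free candidate models, so
# the leave-one-out log predictive density of y[i] is just each model's log
# density at y[i]; with estimated parameters, use loo() or K-fold CV).
set.seed(2018)
y <- rnorm(100, 0, 1)                 # data generated from N(0, 1)
lpd_point <- cbind(
  model1 = dnorm(y, mean = -1,  sd = 1, log = TRUE),
  model2 = dnorm(y, mean = 0.5, sd = 1, log = TRUE),
  model3 = dnorm(y, mean = 0.6, sd = 1, log = TRUE)
)
colSums(lpd_point)                    # ELPD estimate for each model
```

A matrix of this shape (N observations by K models) is exactly what stacking_weights() and pseudobma_weights() expect as lpd_point.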
The stacking method (method="stacking"), which is the default for loo_model_weights(), combines all models by maximizing the leave-one-out predictive density of the combination distribution. That is, it finds the optimal linear combining weights for maximizing the leave-one-out log score.
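As a concrete sketch of that objective (illustration only, not the package's stacking_weights() implementation): maximize sum_i log(sum_k w_k exp(lpd_point[i, k])) over the weight simplex, here via constrOptim() with the first K-1 weights as free parameters.

```r
# Hedged sketch of the stacking objective (use stacking_weights() in
# practice). lpd_point: N x K matrix of leave-one-out log predictive
# densities.
stacking_sketch <- function(lpd_point) {
  K <- ncol(lpd_point)
  # Subtracting each row's max shifts the objective by a constant, so the
  # argmax is unchanged while exp() stays numerically stable.
  exp_lpd <- exp(lpd_point - apply(lpd_point, 1, max))
  negscore <- function(w_free) {
    w <- c(w_free, 1 - sum(w_free))   # last weight is implied
    -sum(log(exp_lpd %*% w))
  }
  grad <- function(w_free) {
    w <- c(w_free, 1 - sum(w_free))
    denom <- as.vector(exp_lpd %*% w)
    -colSums((exp_lpd[, -K, drop = FALSE] - exp_lpd[, K]) / denom)
  }
  # Simplex constraints on the free weights: each >= 0, and their sum <= 1.
  ui <- rbind(diag(K - 1), rep(-1, K - 1))
  ci <- c(rep(0, K - 1), -1)
  fit <- constrOptim(rep(1 / K, K - 1), negscore, grad, ui = ui, ci = ci)
  w <- c(fit$par, 1 - sum(fit$par))
  w / sum(w)
}
```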
The pseudo-BMA method (method="pseudobma") finds the relative weights proportional to the ELPD of each model. However, when method="pseudobma", the default is to also use the Bayesian bootstrap (BB=TRUE), which corresponds to the pseudo-BMA+ method. The Bayesian bootstrap takes into account the uncertainty of finite data points and regularizes the weights away from the extremes of 0 and 1.
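The two variants can be sketched in a few lines of base R (illustration only, not the package's pseudobma_weights() implementation): plain pseudo-BMA sets weights proportional to exp(ELPD), while pseudo-BMA+ recomputes the weights under Dirichlet-distributed observation weights and averages them over bootstrap replicates.

```r
# Hedged sketch of pseudo-BMA / pseudo-BMA+ weighting (use
# pseudobma_weights() in practice). lpd_point: N x K matrix of pointwise
# log predictive densities.
pseudobma_sketch <- function(lpd_point, BB = TRUE, BB_n = 1000, alpha = 1) {
  N <- nrow(lpd_point)
  K <- ncol(lpd_point)
  if (!BB) {
    # plain pseudo-BMA: weights proportional to exp(ELPD)
    elpd <- colSums(lpd_point)
    w <- exp(elpd - max(elpd))          # subtract max for stability
    return(w / sum(w))
  }
  # pseudo-BMA+: average the weights over Bayesian bootstrap replicates
  w_bb <- matrix(0, nrow = BB_n, ncol = K)
  for (b in seq_len(BB_n)) {
    g <- rgamma(N, shape = alpha)       # Dirichlet(alpha) via normalized gammas
    d <- g / sum(g)
    elpd_b <- N * colSums(d * lpd_point)  # reweighted ELPD estimate
    z <- exp(elpd_b - max(elpd_b))
    w_bb[b, ] <- z / sum(z)
  }
  colMeans(w_bb)
}
```

Averaging over the Dirichlet draws is what pulls the pseudo-BMA+ weights away from the 0/1 extremes relative to the plain version.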
In general, we recommend stacking for averaging predictive distributions, while pseudo-BMA+ can serve as a computationally easier alternative.
Vehtari, A., Gelman, A., and Gabry, J. (2017). Practical Bayesian model evaluation using leave-one-out cross-validation and WAIC. Statistics and Computing. 27(5), 1413-1432. doi:10.1007/s11222-016-9696-4 (journal version, preprint arXiv:1507.04544).
Vehtari, A., Simpson, D., Gelman, A., Yao, Y., and Gabry, J. (2019). Pareto smoothed importance sampling. preprint arXiv:1507.02646.
Yao, Y., Vehtari, A., Simpson, D., and Gelman, A. (2018). Using stacking to average Bayesian predictive distributions. Bayesian Analysis, advance publication, doi:10.1214/17-BA1091 (online).
The loo package vignettes, particularly Bayesian Stacking and Pseudo-BMA weights using the loo package.
loo()
for details on leave-one-out ELPD estimation.
constrOptim()
for the choice of optimization methods and control parameters.
relative_eff()
for computing r_eff.
# generate fake data from N(0,1).
N <- 100
y <- rnorm(N, 0, 1)

# Suppose we have three models: N(-1, sigma), N(0.5, sigma) and N(0.6, sigma).
stan_code <- "
data {
  int N;
  vector[N] y;
  real mu_fixed;
}
parameters {
  real<lower=0> sigma;
}
model {
  sigma ~ exponential(1);
  y ~ normal(mu_fixed, sigma);
}
generated quantities {
  vector[N] log_lik;
  for (n in 1:N) log_lik[n] = normal_lpdf(y[n] | mu_fixed, sigma);
}"
mod <- stan_model(model_code = stan_code)
fit1 <- sampling(mod, data = list(N = N, y = y, mu_fixed = -1))
fit2 <- sampling(mod, data = list(N = N, y = y, mu_fixed = 0.5))
fit3 <- sampling(mod, data = list(N = N, y = y, mu_fixed = 0.6))

model_list <- list(fit1, fit2, fit3)
log_lik_list <- lapply(model_list, extract_log_lik)

# optional but recommended
r_eff_list <- lapply(model_list, function(x) {
  ll_array <- extract_log_lik(x, merge_chains = FALSE)
  relative_eff(exp(ll_array))
})

# stacking method:
wts1 <- loo_model_weights(
  log_lik_list,
  method = "stacking",
  r_eff_list = r_eff_list,
  optim_control = list(reltol = 1e-10)
)
print(wts1)
#> Method: stacking
#> ------
#>        weight
#> model1 0.245
#> model2 0.755
#> model3 0.000

# can also pass a list of psis_loo objects to avoid recomputing loo
loo_list <- lapply(1:length(log_lik_list), function(j) {
  loo(log_lik_list[[j]], r_eff = r_eff_list[[j]])
})
wts2 <- loo_model_weights(
  loo_list,
  method = "stacking",
  optim_control = list(reltol = 1e-10)
)
all.equal(wts1, wts2)
#> [1] TRUE

# pseudo-BMA+ method:
loo_model_weights(loo_list, method = "pseudobma")
#> Method: pseudo-BMA+ with Bayesian bootstrap
#> ------
#>        weight
#> model1 0.036
#> model2 0.948
#> model3 0.015

# pseudo-BMA method (set BB = FALSE):
loo_model_weights(loo_list, method = "pseudobma", BB = FALSE)
#> Method: pseudo-BMA
#> ------
#>        weight
#> model1 0.000
#> model2 0.988
#> model3 0.012

# calling stacking_weights or pseudobma_weights directly
lpd1 <- loo(log_lik_list[[1]], r_eff = r_eff_list[[1]])$pointwise[,1]
lpd2 <- loo(log_lik_list[[2]], r_eff = r_eff_list[[2]])$pointwise[,1]
lpd3 <- loo(log_lik_list[[3]], r_eff = r_eff_list[[3]])$pointwise[,1]

stacking_weights(cbind(lpd1, lpd2, lpd3))
#> Method: stacking
#> ------
#>        weight
#> model1 0.245
#> model2 0.755
#> model3 0.000

pseudobma_weights(cbind(lpd1, lpd2, lpd3))
#> Method: pseudo-BMA+ with Bayesian bootstrap
#> ------
#>        weight
#> model1 0.040
#> model2 0.944
#> model3 0.016

pseudobma_weights(cbind(lpd1, lpd2, lpd3), BB = FALSE)
#> Method: pseudo-BMA
#> ------
#>        weight
#> model1 0.000
#> model2 0.988
#> model3 0.012