Compare fitted models based on ELPD.

By default the print method shows only the most important information. Use print(..., simplify=FALSE) to print a more detailed summary.

Usage

loo_compare(x, ...)

# S3 method for default
loo_compare(x, ...)

# S3 method for compare.loo
print(x, ..., digits = 1, simplify = TRUE)

# S3 method for compare.loo_ss
print(x, ..., digits = 1, simplify = TRUE)

Arguments

x

An object of class "loo" or a list of such objects.

...

Additional objects of class "loo".

digits

For the print method only, the number of digits to use when printing.

simplify

For the print method only, should only the essential columns of the summary matrix be printed? The entire matrix is always returned, but by default only the most important columns are printed.

Value

A matrix with class "compare.loo" that has its own print method. See the Details section.

Details

When comparing two fitted models, we can estimate the difference in their expected predictive accuracy by the difference in elpd_loo or elpd_waic (or multiplied by \(-2\), if desired, to be on the deviance scale).

When using loo_compare(), the returned matrix will have one row per model and several columns of estimates. The values in the elpd_diff and se_diff columns of the returned matrix are computed by making pairwise comparisons between each model and the model with the largest ELPD (the model in the first row). For this reason the elpd_diff column will always have the value 0 in the first row (i.e., the difference between the preferred model and itself) and negative values in subsequent rows for the remaining models.
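As a minimal sketch (not the package's internal code), the elpd_diff column can be reproduced by hand from the pointwise elpd_loo contributions. The objects below mirror those created in the Examples section, and multiplying a difference by \(-2\) puts the same comparison on the deviance scale:

LL   <- example_loglik_array()
loo1 <- loo(LL)      # smaller ELPD
loo2 <- loo(LL + 1)  # larger ELPD, so it becomes the baseline (first row)

elpd1 <- sum(loo1$pointwise[, "elpd_loo"])
elpd2 <- sum(loo2$pointwise[, "elpd_loo"])

elpd1 - elpd2         # the elpd_diff entry for loo1 (negative)
-2 * (elpd1 - elpd2)  # the same comparison on the deviance scale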

To compute the standard error of the difference in ELPD --- which should not be expected to equal the difference of the standard errors --- we use a paired estimate to take advantage of the fact that the same set of \(N\) data points was used to fit both models. These calculations should be most useful when \(N\) is large, because then non-normality of the distribution is not such an issue when estimating the uncertainty in these sums. These standard errors, for all their flaws, should give a better sense of uncertainty than what is obtained using the current standard approach of comparing differences of deviances to a Chi-squared distribution, a practice derived for Gaussian linear models or asymptotically, and which only applies to nested models in any case. Sivula et al. (2022) discuss the conditions under which the normal approximation used for SE and se_diff is good.
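Continuing the sketch above (again an illustration, not the exact internal computation), the paired estimate amounts to taking the pointwise ELPD differences and scaling their standard deviation by \(\sqrt{N}\):

diff_pointwise <- loo1$pointwise[, "elpd_loo"] - loo2$pointwise[, "elpd_loo"]
N <- length(diff_pointwise)
sqrt(N) * sd(diff_pointwise)  # paired standard error of the ELPD difference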

If more than \(11\) models are compared, we internally recompute the model differences using the median model by ELPD as the baseline model. We then estimate whether the differences in predictive performance are potentially due to chance, as described by McLatchie and Vehtari (2023). A warning is issued if there appears to be a risk of over-fitting due to the selection process. In that case we recommend avoiding model selection based on LOO-CV and instead favoring model averaging/stacking or projection predictive inference.
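For the model averaging/stacking alternative mentioned above, one option in this package is loo_model_weights(). A brief sketch, continuing with the objects from the sketches above:

loo3 <- loo(LL + 2)  # third model, as in the Examples section
wts  <- loo_model_weights(list(loo1, loo2, loo3), method = "stacking")
print(wts)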

References

Vehtari, A., Gelman, A., and Gabry, J. (2017a). Practical Bayesian model evaluation using leave-one-out cross-validation and WAIC. Statistics and Computing. 27(5), 1413--1432. doi:10.1007/s11222-016-9696-4 (journal version, preprint arXiv:1507.04544).

Vehtari, A., Simpson, D., Gelman, A., Yao, Y., and Gabry, J. (2022). Pareto smoothed importance sampling. preprint arXiv:1507.02646.

Sivula, T., Magnusson, M., Matamoros, A. A., and Vehtari, A. (2022). Uncertainty in Bayesian leave-one-out cross-validation based model comparison. preprint arXiv:2008.10296v3.

McLatchie, Y., and Vehtari, A. (2023). Efficient estimation and correction of selection-induced bias with order statistics. preprint arXiv:2309.03742.

See also

  • The FAQ page on the loo website for answers to frequently asked questions.

Examples

# very artificial example, just for demonstration!
LL <- example_loglik_array()
loo1 <- loo(LL)     # should be worst model when compared
loo2 <- loo(LL + 1) # should be second best model when compared
loo3 <- loo(LL + 2) # should be best model when compared

comp <- loo_compare(loo1, loo2, loo3)
print(comp, digits = 2)
#>        elpd_diff se_diff
#> model3   0.00      0.00 
#> model2 -32.00      0.00 
#> model1 -64.00      0.00 

# show more details with simplify=FALSE
# (will be the same for all models in this artificial example)
print(comp, simplify = FALSE, digits = 3)
#>        elpd_diff se_diff elpd_loo se_elpd_loo p_loo   se_p_loo looic   se_looic
#> model3   0.000     0.000 -19.589    4.284       3.329   1.152   39.178   8.568 
#> model2 -32.000     0.000 -51.589    4.284       3.329   1.152  103.178   8.568 
#> model1 -64.000     0.000 -83.589    4.284       3.329   1.152  167.178   8.568 

# can use a list of objects
loo_compare(x = list(loo1, loo2, loo3))
#>        elpd_diff se_diff
#> model3   0.0       0.0  
#> model2 -32.0       0.0  
#> model1 -64.0       0.0  

# \dontrun{
# works for waic (and kfold) too
loo_compare(waic(LL), waic(LL - 10))
#> Warning: 
#> 3 (9.4%) p_waic estimates greater than 0.4. We recommend trying loo instead.
#> Warning: 
#> 3 (9.4%) p_waic estimates greater than 0.4. We recommend trying loo instead.
#>        elpd_diff se_diff
#> model1    0.0       0.0 
#> model2 -320.0       0.0 
# }