Leave-One-Out (LOO) predictive checks. See the Plot Descriptions section, below, and Gabry et al. (2019) for details.
```r
ppc_loo_pit_overlay(
  y,
  yrep,
  lw,
  pit,
  samples = 100,
  ...,
  size = 0.25,
  alpha = 0.7,
  trim = FALSE,
  bw = "nrd0",
  adjust = 1,
  kernel = "gaussian",
  n_dens = 1024
)

ppc_loo_pit_qq(y, yrep, lw, pit, compare = c("uniform", "normal"), ..., size = 2, alpha = 1)

ppc_loo_pit(y, yrep, lw, pit, compare = c("uniform", "normal"), ..., size = 2, alpha = 1)

ppc_loo_intervals(
  y,
  yrep,
  psis_object,
  subset = NULL,
  intervals = NULL,
  ...,
  prob = 0.5,
  prob_outer = 0.9,
  size = 1,
  fatten = 3,
  order = c("index", "median")
)

ppc_loo_ribbon(
  y,
  yrep,
  lw,
  psis_object,
  subset = NULL,
  intervals = NULL,
  ...,
  prob = 0.5,
  prob_outer = 0.9,
  alpha = 0.33,
  size = 0.25
)
```
| y | A vector of observations. See Details. |
|---|---|
| yrep | An \(S\) by \(N\) matrix of draws from the posterior predictive distribution, where \(S\) is the size of the posterior sample (or subset of the posterior sample used to generate yrep) and \(N\) is the number of observations (the length of y). The columns of yrep should be in the same order as the data points in y for the plots to make sense. |
| lw | A matrix of (smoothed) log weights with the same dimensions as yrep. See loo::psis() and the associated weights() method, as well as the sketch following this table and the Examples section, below. |
| pit | For ppc_loo_pit_overlay() and ppc_loo_pit_qq(), optionally a vector of precomputed PIT values that can be specified instead of y, yrep, and lw (these are all ignored if pit is specified). If not specified, the PIT values are computed internally before plotting. |
| samples | For ppc_loo_pit_overlay(), the number of data sets (each the same size as y) to simulate from the standard uniform distribution. The default is 100. |
| ... | Currently unused. |
| alpha, size, fatten | Arguments passed to geoms to control plot aesthetics. For ppc_loo_pit_qq() and ppc_loo_pit_overlay(), size and alpha are passed to the point and density geoms, respectively. For ppc_loo_intervals(), size and fatten are passed to ggplot2::geom_pointrange(). For ppc_loo_ribbon(), alpha and size are passed to ggplot2::geom_ribbon(). |
| trim | Passed to ggplot2::stat_density(). |
| bw, adjust, kernel, n_dens | Optional arguments passed to stats::density() to override default kernel density estimation parameters. |
| compare | For ppc_loo_pit_qq(), a string that can be either "uniform" or "normal". If "uniform" (the default), the Q-Q plot compares the LOO PIT values to the standard uniform distribution. If "normal", the Q-Q plot compares standardized PIT values to the standard normal distribution. |
| psis_object | If using loo version 2.0.0 or greater, an object returned by the psis() function (or by the loo() function with argument save_psis set to TRUE). |
| subset | For ppc_loo_intervals() and ppc_loo_ribbon(), an optional integer vector indicating which observations in y (and yrep) to include. Dropping observations can be useful when plotting all of them would result in a cluttered plot. |
| intervals | For ppc_loo_intervals() and ppc_loo_ribbon(), optionally a matrix of precomputed LOO predictive intervals that can be specified instead of yrep (which is ignored if intervals is specified). If specified, intervals must be a matrix with one row per observation and five columns in the following order: lower outer interval, lower inner interval, median, upper inner interval, upper outer interval. |
| prob, prob_outer | Values between 0 and 1 indicating the desired probability mass to include in the inner and outer intervals. The defaults are prob = 0.5 and prob_outer = 0.9. |
| order | For ppc_loo_intervals(), a string indicating how to arrange the plotted intervals. The default ("index") plots them in the order of the observations; "median" arranges them by median value from smallest (left) to largest (right). |
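As a hedged sketch of how the lw matrix is typically obtained (the variable names here are illustrative, not part of the documented API above): the pointwise log-likelihood matrix is smoothed with loo::psis(), and the log weights are extracted with the associated weights() method.

```r
# Sketch, assuming log_lik_mat is an S x N pointwise log-likelihood
# matrix (e.g. rstanarm::log_lik(fit)); the name is illustrative.
library(loo)
psis_object <- psis(-log_lik_mat)  # smooth the raw LOO importance ratios
lw <- weights(psis_object)         # S x N log weights, same shape as yrep
```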
All of these functions return a ggplot object that can be further customized using the ggplot2 package.
Plot Descriptions

ppc_loo_pit_overlay(), ppc_loo_pit_qq()

The calibration of marginal predictions can be assessed using probability integral transformation (PIT) checks. LOO improves the check by avoiding the double use of data. See the section on marginal predictive checks in Gelman et al. (2013, pp. 152--153) and Section 5 of Gabry et al. (2019) for an example of using bayesplot for these checks.

The LOO PIT values are asymptotically uniform (for continuous data) if the model is calibrated. The ppc_loo_pit_overlay() function creates a plot comparing the density of the LOO PITs (thick line) to the density estimates of many data sets simulated from the standard uniform distribution (thin lines). See Gabry et al. (2019) for an example of interpreting the shape of the miscalibration that can be observed in these plots.

The ppc_loo_pit_qq() function provides an alternative visualization of the miscalibration with a quantile-quantile (Q-Q) plot comparing the LOO PITs to the standard uniform distribution. The uniform comparison is unreliable for extreme probabilities close to 0 and 1, so it can sometimes be useful to set the compare argument to "normal", which produces a Q-Q plot comparing standardized PIT values to the standard normal distribution and can make (mis)calibration in the extreme values easier to see. In most cases, however, we have found that the overlaid density plot (ppc_loo_pit_overlay()) provides a clearer picture of calibration problems than the Q-Q plot.
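To make the plotted quantity concrete, here is a minimal sketch (assuming continuous data, and not bayesplot's exact internals) of computing LOO PIT values directly from y, yrep, and lw: each PIT value is the PSIS-weighted proportion of predictive draws that fall at or below the corresponding observation.

```r
# Minimal sketch, not bayesplot's implementation: LOO PIT values from
# y (length N), yrep (S x N), and lw (S x N smoothed log weights).
loo_pit_values <- function(y, yrep, lw) {
  w <- exp(sweep(lw, 2, apply(lw, 2, max)))  # subtract column max, then exponentiate (stability)
  w <- sweep(w, 2, colSums(w), "/")          # normalize the weights for each observation
  # PIT_i = sum_s w[s, i] * 1(yrep[s, i] <= y[i])
  vapply(seq_along(y), function(i) sum(w[, i] * (yrep[, i] <= y[i])), numeric(1))
}
```

For a calibrated model these values should look approximately standard uniform; a vector computed this way could also be supplied via the pit argument instead of y, yrep, and lw.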
ppc_loo_intervals(), ppc_loo_ribbon()

Similar to ppc_intervals() and ppc_ribbon(), but the intervals are for the LOO predictive distribution.
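As a rough illustration of where those intervals come from: the LOO predictive distribution weights each posterior draw by its PSIS weight, so each interval endpoint is a weighted quantile of the draws for that observation. The helper below is hypothetical (not bayesplot's internals) and assumes the normalized weights w from the PIT sketch above.

```r
# Hypothetical helper: a simple weighted quantile via cumulative
# normalized weights (a step-function inverse CDF).
weighted_quantile <- function(x, w, probs) {
  ord <- order(x)
  cw <- cumsum(w[ord]) / sum(w)
  vapply(probs, function(p) x[ord][which(cw >= p)[1]], numeric(1))
}

# With prob = 0.5 and prob_outer = 0.9, the five columns of the
# `intervals` matrix for observation i would correspond to:
# weighted_quantile(yrep[, i], w[, i], c(0.05, 0.25, 0.50, 0.75, 0.95))
```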
Gelman, A., Carlin, J. B., Stern, H. S., Dunson, D. B., Vehtari, A., and Rubin, D. B. (2013). Bayesian Data Analysis, third edition. Chapman & Hall/CRC Press, London. (pp. 152--153)
Gabry, J., Simpson, D., Vehtari, A., Betancourt, M., and Gelman, A. (2019). Visualization in Bayesian workflow. Journal of the Royal Statistical Society Series A, 182: 389--402. doi:10.1111/rssa.12378.
Vehtari, A., Gelman, A., and Gabry, J. (2017). Practical Bayesian model evaluation using leave-one-out cross-validation and WAIC. Statistics and Computing, 27(5), 1413--1432. doi:10.1007/s11222-016-9696-4. arXiv preprint: https://arxiv.org/abs/1507.04544
Other PPCs:
PPC-discrete,
PPC-distributions,
PPC-errors,
PPC-intervals,
PPC-overview,
PPC-scatterplots,
PPC-test-statistics
```r
# \dontrun{
suppressPackageStartupMessages(library(rstanarm))
suppressPackageStartupMessages(library(loo))

head(radon)
#>   floor county  log_radon log_uranium
#> 1     1 AITKIN 0.83290912  -0.6890476
#> 2     0 AITKIN 0.83290912  -0.6890476
#> 3     0 AITKIN 1.09861229  -0.6890476
#> 4     0 AITKIN 0.09531018  -0.6890476
#> 5     0  ANOKA 1.16315081  -0.8473129
#> 6     0  ANOKA 0.95551145  -0.8473129

fit <- stan_lmer(
  log_radon ~ floor + log_uranium + floor:log_uranium + (1 + floor | county),
  data = radon,
  iter = 1000,
  chains = 2,
  # cores = 2,
  refresh = 500
)
#> (Stan sampling progress output omitted)

# observed data and posterior predictive draws
y <- radon$log_radon
yrep <- posterior_predict(fit)

# keep the PSIS object so the plotting functions can reuse it
loo1 <- loo(fit, save_psis = TRUE)
#> Warning: Found 1 observation(s) with a pareto_k > 0.7. We recommend calling
#> 'loo' again with argument 'k_threshold = 0.7' in order to calculate the ELPD
#> without the assumption that these observations are negligible. This will
#> refit the model 1 times to compute the ELPDs for the problematic
#> observations directly.

psis1 <- loo1$psis_object
lw <- weights(psis1)

# marginal predictive check using LOO probability integral transform
color_scheme_set("orange")
ppc_loo_pit_overlay(y, yrep, lw = lw)
ppc_loo_pit_qq(y, yrep, lw = lw)
ppc_loo_pit_qq(y, yrep, lw = lw, compare = "normal")

# loo predictive intervals vs observations
keep_obs <- 1:50
ppc_loo_intervals(y, yrep, psis_object = psis1, subset = keep_obs)

color_scheme_set("gray")
ppc_loo_intervals(y, yrep, psis_object = psis1, subset = keep_obs,
                  order = "median")
# }
```