PPC-loo.Rd
Leave-one-out (LOO) predictive checks. See the Plot Descriptions section, below, and Gabry et al. (2019) for details.
ppc_loo_pit_overlay(y, yrep, lw, pit, samples = 100, ..., size = 0.25,
  alpha = 0.7, trim = FALSE, bw = "nrd0", adjust = 1, kernel = "gaussian",
  n_dens = 1024)

ppc_loo_pit_qq(y, yrep, lw, pit, compare = c("uniform", "normal"), ...,
  size = 2, alpha = 1)

ppc_loo_pit(y, yrep, lw, pit, compare = c("uniform", "normal"), ...,
  size = 2, alpha = 1)

ppc_loo_intervals(y, yrep, psis_object, subset = NULL, intervals = NULL,
  ..., prob = 0.5, prob_outer = 0.9, size = 1, fatten = 3,
  order = c("index", "median"))

ppc_loo_ribbon(y, yrep, lw, psis_object, subset = NULL, intervals = NULL,
  ..., prob = 0.5, prob_outer = 0.9, alpha = 0.33, size = 0.25)
y  A vector of observations. See Details. 

yrep  An \(S\) by \(N\) matrix of draws from the posterior
predictive distribution, where \(S\) is the size of the posterior sample
(or subset of the posterior sample used to generate yrep) and \(N\) is
the number of observations (the length of y).
lw  A matrix of (smoothed) log weights with the same dimensions as
yrep. See psis() in the loo package and the associated weights() method.
pit  For ppc_loo_pit_overlay() and ppc_loo_pit_qq(), optionally a vector of
precomputed PIT values that can be specified instead of y, yrep, and lw
(all of which are ignored if pit is specified).
samples  For ppc_loo_pit_overlay(), the number of data sets (each the same
size as y) to simulate from the standard uniform distribution. The default
is 100.
...  Currently unused. 
alpha, size, fatten  Arguments passed to geoms to control plot
aesthetics. For ppc_loo_pit_qq() and ppc_loo_pit(), size and alpha are
passed to geom_point(). For ppc_loo_intervals(), size and fatten are
passed to geom_pointrange(). For ppc_loo_ribbon(), alpha and size are
passed to geom_ribbon().
trim  Passed to ggplot2::stat_density().
bw, adjust, kernel, n_dens  Optional arguments passed to the kernel
density estimation to override the defaults (see stats::density()).
n_dens defaults to 1024.
compare  For ppc_loo_pit_qq(), a string that can be either "uniform" or
"normal". If "uniform" (the default), the Q-Q plot compares computed PIT
values to the standard uniform distribution. If "normal", the Q-Q plot
compares standardized PIT values to the standard normal distribution.
psis_object  If using loo version 2.0.0 or greater, an object returned by
the psis() function (or by the loo() function with argument save_psis set
to TRUE).
subset  For ppc_loo_intervals() and ppc_loo_ribbon(), an optional integer
vector indicating which observations in y (and yrep) to include. The
default is to include all observations.
intervals  For ppc_loo_intervals() and ppc_loo_ribbon(), optionally a
matrix of precomputed LOO predictive intervals that can be specified
instead of yrep, with one row per observation and columns giving the
lower bound, median, and upper bound of the interval.
prob, prob_outer  Values between 0 and 1 indicating the desired
probability mass to include in the inner and outer intervals. The defaults
are prob = 0.5 and prob_outer = 0.9.
order  For ppc_loo_intervals(), a string indicating how to arrange the
plotted intervals. The default ("index") plots them in the order of the
observations. The alternative ("median") arranges them by median value
from smallest (left) to largest (right).
A ggplot object that can be further customized using the ggplot2 package.
ppc_loo_pit_overlay(), ppc_loo_pit_qq()
The calibration of marginal predictions can be assessed using probability
integral transformation (PIT) checks. LOO improves the check by avoiding the
double use of data. See the section on marginal predictive checks in Gelman
et al. (2013, pp. 152-153) and section 5 of Gabry et al. (2019) for an
example of using bayesplot for these checks.
The LOO PIT values are asymptotically uniform (for continuous data) if the
model is calibrated. The ppc_loo_pit_overlay()
function creates a plot
comparing the density of the LOO PITs (thick line) to the density estimates
of many simulated data sets from the standard uniform distribution (thin
lines). See Gabry et al. (2019) for an example of interpreting the shape of
the miscalibration that can be observed in these plots.
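The weighted-CDF idea behind the LOO PIT values can be sketched in a few lines of base R. This is a toy example (uniform weights stand in for the PSIS log weights, and the data are simulated), not the bayesplot internals: each PIT value is the importance-weighted proportion of LOO predictive draws that fall at or below the observed value.

```r
set.seed(123)
S <- 500                                   # posterior draws
N <- 10                                    # observations
y <- rnorm(N)
yrep <- matrix(rnorm(S * N), S, N)         # draws from a well-calibrated model
lw <- matrix(-log(S), S, N)                # uniform log weights, for illustration
w <- exp(sweep(lw, 2, apply(lw, 2, max)))  # subtract column max for stability
w <- sweep(w, 2, colSums(w), "/")          # normalize weights per observation
pit <- vapply(seq_len(N),
              function(i) sum(w[, i] * (yrep[, i] <= y[i])),
              numeric(1))
range(pit)  # all values lie in [0, 1]; approximately uniform if calibrated
```

With real PSIS log weights in lw, the same computation yields the LOO PIT values the plots are based on.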
The ppc_loo_pit_qq() function provides an alternative visualization of
miscalibration: a quantile-quantile (Q-Q) plot comparing the LOO PITs to
the standard uniform distribution. The uniform comparison is not ideal for
extreme probabilities close to 0 and 1, so it can sometimes be useful to
set the compare argument to "normal", which produces a Q-Q plot comparing
standardized PIT values to the standard normal distribution and can make
(mis)calibration at the extreme values easier to see. However, in most
cases we have found that the overlaid density plot (ppc_loo_pit_overlay())
provides a clearer picture of calibration problems than the Q-Q plot.
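The two comparisons can be sketched with base R graphics, assuming pit holds previously computed LOO PIT values (a placeholder vector here): the "uniform" comparison plots PIT quantiles against uniform quantiles, while the "normal" comparison first maps the PITs through qnorm() and compares to N(0, 1).

```r
set.seed(42)
pit <- runif(100)                          # placeholder for LOO PIT values

# compare = "uniform": PIT quantiles vs. standard uniform quantiles
qqplot(qunif(ppoints(length(pit))), pit,
       xlab = "Uniform quantiles", ylab = "LOO PIT")
abline(0, 1)

# compare = "normal": standardized PITs vs. standard normal quantiles
z <- qnorm(pit)
qqnorm(z)
qqline(z)                                  # tail deviations show up better here
```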
ppc_loo_intervals(), ppc_loo_ribbon()
Similar to ppc_intervals() and ppc_ribbon(), but the intervals are for
the LOO predictive distribution.
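The idea of a LOO predictive interval can be sketched with a hypothetical weighted-quantile helper (illustrative only, not the bayesplot implementation): for each observation, the interval endpoints are importance-weighted quantiles of the predictive draws, using the PSIS weights.

```r
# Hypothetical helper: quantiles of x under weights w (not from bayesplot)
weighted_quantile <- function(x, w, probs) {
  ord <- order(x)
  cw <- cumsum(w[ord]) / sum(w)            # weighted empirical CDF
  x[ord][vapply(probs, function(p) which(cw >= p)[1], integer(1))]
}

set.seed(1)
draws <- rnorm(1000)                       # predictive draws for one observation
w <- rep(1 / 1000, 1000)                   # uniform weights, for illustration
weighted_quantile(draws, w, c(0.25, 0.75)) # 50% central interval (prob = 0.5)
```

With PSIS weights in w, the prob and prob_outer arguments correspond to the inner and outer central intervals computed this way.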
Gelman, A., Carlin, J. B., Stern, H. S., Dunson, D. B., Vehtari, A., and Rubin, D. B. (2013). Bayesian Data Analysis. Chapman & Hall/CRC Press, London, third edition. (pp. 152-153)
Gabry, J., Simpson, D., Vehtari, A., Betancourt, M., and Gelman, A. (2019). Visualization in Bayesian workflow. J. R. Stat. Soc. A, 182: 389-402. doi:10.1111/rssa.12378. (journal version, arXiv preprint, code on GitHub)
Vehtari, A., Gelman, A., and Gabry, J. (2017). Practical Bayesian model evaluation using leave-one-out cross-validation and WAIC. Statistics and Computing, 27(5), 1413-1432. doi:10.1007/s11222-016-9696-4. arXiv preprint: https://arxiv.org/abs/1507.04544/
Other PPCs: PPC-discrete, PPC-distributions, PPC-errors, PPC-intervals, PPC-overview, PPC-scatterplots, PPC-test-statistics
Examples

library(rstanarm)
library(loo)

head(radon)

fit <- stan_lmer(
  log_radon ~ floor + log_uranium + floor:log_uranium + (1 + floor | county),
  data = radon,
  iter = 1000,
  chains = 2
  # ,cores = 2
)
# (Stan sampling progress output omitted; each of the 2 chains ran 1000
# iterations, taking roughly 17-18 seconds in total.)

y <- radon$log_radon
yrep <- posterior_predict(fit)
loo1 <- loo(fit, save_psis = TRUE)
#> Warning: Found 2 observation(s) with a pareto_k > 0.7. We recommend calling
#> 'loo' again with argument 'k_threshold = 0.7' in order to calculate the
#> ELPD without the assumption that these observations are negligible. This
#> will refit the model 2 times to compute the ELPDs for the problematic
#> observations directly.

psis1 <- loo1$psis_object
lw <- weights(psis1)

# marginal predictive check using LOO probability integral transform
color_scheme_set("orange")
ppc_loo_pit_overlay(y, yrep, lw = lw)
ppc_loo_pit_qq(y, yrep, lw = lw)
ppc_loo_pit_qq(y, yrep, lw = lw, compare = "normal")

# loo predictive intervals vs observations
keep_obs <- 1:50
ppc_loo_intervals(y, yrep, psis_object = psis1, subset = keep_obs)

color_scheme_set("gray")
ppc_loo_intervals(y, yrep, psis_object = psis1, subset = keep_obs,
  order = "median")