PPC-loo.Rd
Leave-one-out (LOO) predictive checks. See the Plot Descriptions section below for details.
ppc_loo_pit_overlay(y, yrep, lw, pit, samples = 100, ..., size = 0.25,
  alpha = 0.7, trim = FALSE, bw = "nrd0", adjust = 1,
  kernel = "gaussian", n_dens = 1024)

ppc_loo_pit_qq(y, yrep, lw, pit, compare = c("uniform", "normal"), ...,
  size = 2, alpha = 1)

ppc_loo_pit(y, yrep, lw, pit, compare = c("uniform", "normal"), ...,
  size = 2, alpha = 1)

ppc_loo_intervals(y, yrep, psis_object, subset = NULL, intervals = NULL,
  ..., prob = 0.5, prob_outer = 0.9, size = 1, fatten = 3,
  order = c("index", "median"))

ppc_loo_ribbon(y, yrep, lw, psis_object, subset = NULL, intervals = NULL,
  ..., prob = 0.5, prob_outer = 0.9, alpha = 0.33, size = 0.25)
y  A vector of observations. See Details.

yrep  An \(S\) by \(N\) matrix of draws from the posterior predictive distribution, where \(S\) is the size of the posterior sample (or the subset of the posterior sample used to generate yrep) and \(N\) is the number of observations.

lw  A matrix of (smoothed) log weights with the same dimensions as yrep.

pit  For ppc_loo_pit_overlay and ppc_loo_pit_qq, optionally a vector of precomputed PIT values that can be specified instead of y, yrep, and lw.

samples  For ppc_loo_pit_overlay, the number of data sets (each the same size as y) simulated from the standard uniform distribution. The default is 100.

...  Currently unused.

alpha, size, fatten  Arguments passed to geoms to control plot aesthetics.

trim  Passed to ggplot2::stat_density.

bw, adjust, kernel, n_dens  Optional arguments passed to stats::density to control the density estimate.

compare  For ppc_loo_pit_qq, a string that can be either "uniform" or "normal". If "uniform" (the default), the LOO PIT values are compared to the standard uniform distribution; if "normal", standardized PIT values are compared to the standard normal distribution.

psis_object  If using loo version 2.0.0 or greater, an object returned by the psis function (or by the loo function with argument save_psis set to TRUE).

subset  For ppc_loo_intervals and ppc_loo_ribbon, an optional integer vector indicating which observations to include. The default is to include all observations.

intervals  For ppc_loo_intervals and ppc_loo_ribbon, optionally a matrix of precomputed LOO predictive intervals that can be specified instead of yrep.

prob, prob_outer  Values between 0 and 1 indicating the desired probability mass to include in the inner and outer intervals. The defaults are prob = 0.5 and prob_outer = 0.9.

order  For ppc_loo_intervals, a string indicating how to arrange the plotted intervals. The default ("index") plots them in the order of the observations; "median" arranges them by median value from smallest to largest.
A ggplot object that can be further customized using the ggplot2 package.
ppc_loo_pit_qq, ppc_loo_pit_overlay
The calibration of marginal predictions can be assessed using probability integral transformation (PIT) checks. LOO improves the check by avoiding the double use of data. See the section on marginal predictive checks in Gelman et al. (2013, pp. 152-153) and Section 5 of Gabry et al. (2018) for an example of using bayesplot for these checks.
The LOO PIT values are asymptotically uniform (for continuous data) if the model is calibrated. The ppc_loo_pit_overlay function creates a plot comparing the density of the LOO PITs (thick line) to the density estimates of many simulated data sets from the standard uniform distribution (thin lines). See Gabry et al. (2018) for an example of interpreting the shape of the miscalibration that can be observed in these plots.
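To make the PIT values themselves concrete, the snippet below is a minimal sketch (not bayesplot's implementation) of a LOO PIT as an importance-weighted empirical CDF, with simulated draws and equal log weights standing in for a fitted model and PSIS-smoothed weights:

```r
# Minimal sketch of a LOO PIT value: for observation i it is the
# importance-weighted proportion of predictive draws at or below y[i].
# Simulated draws and equal log weights stand in for a real model fit.
set.seed(123)
S <- 400  # posterior draws
N <- 25   # observations
y <- rnorm(N)
yrep <- matrix(rnorm(S * N), nrow = S, ncol = N)
lw <- matrix(log(1 / S), nrow = S, ncol = N)  # placeholder log weights

loo_pit <- vapply(seq_len(N), function(i) {
  w <- exp(lw[, i] - max(lw[, i]))  # subtract max for numerical stability
  w <- w / sum(w)                   # normalize weights to sum to 1
  sum(w * (yrep[, i] <= y[i]))      # weighted empirical CDF at y[i]
}, numeric(1))

# for a calibrated model these should look roughly uniform on [0, 1]
summary(loo_pit)
```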
The ppc_loo_pit_qq function provides an alternative visualization of the miscalibration with a quantile-quantile (QQ) plot comparing the LOO PITs to the standard uniform distribution. Comparing to the uniform distribution works poorly for extreme probabilities close to 0 and 1, so it can sometimes be useful to set the compare argument to "normal", which produces a QQ plot comparing standardized PIT values to the standard normal distribution and can reveal (mis)calibration at the extremes more clearly. In most cases, however, we have found that the overlaid density plot (ppc_loo_pit_overlay) provides a clearer picture of calibration problems than the QQ plot.
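The idea behind the two compare options can be sketched with base graphics (this is not bayesplot's ggplot2 code, and loo_pit here is a stand-in vector of PIT values rather than output from a fitted model):

```r
# Sketch of the two QQ comparisons behind compare = "uniform"/"normal"
set.seed(1)
loo_pit <- runif(100)  # stand-in for LOO PIT values from a calibrated model

# compare = "uniform": sorted PIT values against uniform quantiles
unif_q <- ppoints(length(loo_pit))
plot(unif_q, sort(loo_pit), xlab = "Uniform quantiles", ylab = "LOO PIT")
abline(0, 1)

# compare = "normal": qnorm() spreads out values near 0 and 1, so
# miscalibration in the tails is easier to see against N(0, 1)
z <- qnorm(loo_pit)
qqnorm(z)
qqline(z)
```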
ppc_loo_intervals, ppc_loo_ribbon
Similar to ppc_intervals and ppc_ribbon, but the intervals are for the LOO predictive distribution.
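The LOO intervals can be thought of as importance-weighted quantiles of the yrep draws. The helper below is a hypothetical sketch of that idea (not the function bayesplot actually uses), again with simulated draws and equal weights standing in for PSIS output:

```r
# Hypothetical sketch: LOO predictive intervals as importance-weighted
# quantiles of the yrep draws for each observation.
set.seed(42)
S <- 500
N <- 10
yrep <- matrix(rnorm(S * N), nrow = S)
lw <- matrix(log(1 / S), nrow = S, ncol = N)  # placeholder log weights

weighted_quantile <- function(x, w, probs) {
  ord <- order(x)
  cw <- cumsum(w[ord]) / sum(w)  # weighted empirical CDF over sorted draws
  vapply(probs, function(p) x[ord][which(cw >= p)[1]], numeric(1))
}

# central 50% inner and 90% outer intervals, matching the defaults
# prob = 0.5 and prob_outer = 0.9
intervals <- t(vapply(seq_len(N), function(i) {
  w <- exp(lw[, i] - max(lw[, i]))
  weighted_quantile(yrep[, i], w, c(0.05, 0.25, 0.75, 0.95))
}, numeric(4)))
colnames(intervals) <- c("outer_lo", "inner_lo", "inner_hi", "outer_hi")
head(intervals)
```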
Gelman, A., Carlin, J. B., Stern, H. S., Dunson, D. B., Vehtari, A., and Rubin, D. B. (2013). Bayesian Data Analysis. Chapman & Hall/CRC Press, London, third edition. (pp. 152-153)
Gabry, J., Simpson, D., Vehtari, A., Betancourt, M., and Gelman, A. (2018). Visualization in Bayesian workflow. Journal of the Royal Statistical Society Series A, accepted for publication. arXiv preprint: http://arxiv.org/abs/1709.01449.
Vehtari, A., Gelman, A., and Gabry, J. (2017). Practical Bayesian model evaluation using leave-one-out cross-validation and WAIC. Statistics and Computing, 27(5), 1413-1432. doi:10.1007/s11222-016-9696-4. arXiv preprint: http://arxiv.org/abs/1507.04544
Other PPCs: PPC-discrete, PPC-distributions, PPC-errors, PPC-intervals, PPC-overview, PPC-scatterplots, PPC-test-statistics
# NOT RUN {
library(rstanarm)
library(loo)

head(radon)
fit <- stan_lmer(
  log_radon ~ floor + log_uranium + floor:log_uranium + (1 + floor | county),
  data = radon,
  iter = 1000,
  chains = 2
  # , cores = 2
)
y <- radon$log_radon
yrep <- posterior_predict(fit)

loo1 <- loo(fit, save_psis = TRUE, cores = 2)
psis1 <- loo1$psis_object
lw <- weights(psis1)

# marginal predictive check using LOO probability integral transform
color_scheme_set("orange")
ppc_loo_pit_overlay(y, yrep, lw = lw)
ppc_loo_pit_qq(y, yrep, lw = lw)
ppc_loo_pit_qq(y, yrep, lw = lw, compare = "normal")

# loo predictive intervals vs observations
keep_obs <- 1:50
ppc_loo_intervals(y, yrep, psis_object = psis1, subset = keep_obs)

color_scheme_set("gray")
ppc_loo_intervals(y, yrep, psis_object = psis1, subset = keep_obs,
                  order = "median")
# }