For models fit using MCMC (algorithm="sampling") or one of the variational approximations ("meanfield" or "fullrank"), the predictive_interval function computes Bayesian predictive intervals. The method for stanreg objects calls posterior_predict internally, whereas the method for objects of class "ppd" accepts the matrix returned by posterior_predict as input and can be used to avoid multiple calls to posterior_predict.
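To make concrete what these intervals are, here is a minimal sketch of the computation: a central interval is just a pair of quantiles taken across the posterior predictive draws for each observation. The `yrep` matrix below is simulated rather than produced by `posterior_predict`, so the example is self-contained.

```r
# Sketch: central predictive intervals from a draws-by-observations matrix.
# `yrep` stands in for the matrix posterior_predict() would return
# (rows = posterior draws, columns = observations).
set.seed(1)
yrep <- matrix(rnorm(4000 * 3), nrow = 4000, ncol = 3)

prob <- 0.9
alpha <- 1 - prob
# For each observation (column), take the alpha/2 and 1 - alpha/2 quantiles
ints <- t(apply(yrep, 2, quantile, probs = c(alpha / 2, 1 - alpha / 2)))
colnames(ints)  # "5%" "95%", the same naming scheme predictive_interval uses
```

The `"ppd"` method is convenient precisely because, once you have such a draws matrix in hand, no refitting or re-simulation is needed to summarize it.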

# S3 method for stanreg
predictive_interval(
  object,
  prob = 0.9,
  newdata = NULL,
  draws = NULL,
  re.form = NULL,
  fun = NULL,
  seed = NULL,
  offset = NULL,
  ...
)

# S3 method for ppd
predictive_interval(object, prob = 0.9, ...)

## Arguments

object: Either a fitted model object returned by one of the rstanarm modeling functions (a stanreg object) or, for the "ppd" method, a matrix of draws from the posterior predictive distribution returned by posterior_predict.

prob: A number $$p \in (0,1)$$ indicating the desired probability mass to include in the intervals. The default is to report $$90$$% intervals (prob = 0.9) rather than the traditionally used $$95$$% (see Details).

newdata, draws, re.form, fun, seed, offset: Passed to posterior_predict.

...: Currently ignored.

## Value

A matrix with two columns and as many rows as are in newdata. If newdata is not provided then the matrix will have as many rows as the data used to fit the model. For a given value of prob, $$p$$, the columns correspond to the lower and upper $$100p$$% central interval limits and have the names $$100\alpha/2$$% and $$100(1 - \alpha/2)$$%, where $$\alpha = 1-p$$. For example, if prob=0.9 is specified (a $$90$$% interval), then the column names will be "5%" and "95%", respectively.
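The column-naming rule can be checked directly (a minimal sketch; `prob` mirrors the function's argument):

```r
# Column names of the returned matrix for a given prob:
# lower = 100 * (alpha/2) %, upper = 100 * (1 - alpha/2) %, with alpha = 1 - prob
prob <- 0.5
alpha <- 1 - prob
nms <- paste0(100 * c(alpha / 2, 1 - alpha / 2), "%")
nms  # "25%" "75%"
```

With prob = 0.9 the same rule yields "5%" and "95%", as described above.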

## See also

predictive_error, posterior_predict, posterior_interval

## Examples

fit <- stan_glm(mpg ~ wt, data = mtcars, iter = 300)
#>
#> SAMPLING FOR MODEL 'continuous' NOW (CHAIN 1).
#> Chain 1:
#> Chain 1: Gradient evaluation took 2e-05 seconds
#> Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.2 seconds.
#> Chain 1:
#> Chain 1:
#> Chain 1: Iteration:   1 / 300 [  0%]  (Warmup)
#> Chain 1: Iteration:  30 / 300 [ 10%]  (Warmup)
#> Chain 1: Iteration:  60 / 300 [ 20%]  (Warmup)
#> Chain 1: Iteration:  90 / 300 [ 30%]  (Warmup)
#> Chain 1: Iteration: 120 / 300 [ 40%]  (Warmup)
#> Chain 1: Iteration: 150 / 300 [ 50%]  (Warmup)
#> Chain 1: Iteration: 151 / 300 [ 50%]  (Sampling)
#> Chain 1: Iteration: 180 / 300 [ 60%]  (Sampling)
#> Chain 1: Iteration: 210 / 300 [ 70%]  (Sampling)
#> Chain 1: Iteration: 240 / 300 [ 80%]  (Sampling)
#> Chain 1: Iteration: 270 / 300 [ 90%]  (Sampling)
#> Chain 1: Iteration: 300 / 300 [100%]  (Sampling)
#> Chain 1:
#> Chain 1:  Elapsed Time: 0.012552 seconds (Warm-up)
#> Chain 1:                0.00663 seconds (Sampling)
#> Chain 1:                0.019182 seconds (Total)
#> Chain 1:
#>
#> SAMPLING FOR MODEL 'continuous' NOW (CHAIN 2).
#> Chain 2:
#> Chain 2: Gradient evaluation took 9e-06 seconds
#> Chain 2: 1000 transitions using 10 leapfrog steps per transition would take 0.09 seconds.
#> Chain 2:
#> Chain 2:
#> Chain 2: Iteration:   1 / 300 [  0%]  (Warmup)
#> Chain 2: Iteration:  30 / 300 [ 10%]  (Warmup)
#> Chain 2: Iteration:  60 / 300 [ 20%]  (Warmup)
#> Chain 2: Iteration:  90 / 300 [ 30%]  (Warmup)
#> Chain 2: Iteration: 120 / 300 [ 40%]  (Warmup)
#> Chain 2: Iteration: 150 / 300 [ 50%]  (Warmup)
#> Chain 2: Iteration: 151 / 300 [ 50%]  (Sampling)
#> Chain 2: Iteration: 180 / 300 [ 60%]  (Sampling)
#> Chain 2: Iteration: 210 / 300 [ 70%]  (Sampling)
#> Chain 2: Iteration: 240 / 300 [ 80%]  (Sampling)
#> Chain 2: Iteration: 270 / 300 [ 90%]  (Sampling)
#> Chain 2: Iteration: 300 / 300 [100%]  (Sampling)
#> Chain 2:
#> Chain 2:  Elapsed Time: 0.011152 seconds (Warm-up)
#> Chain 2:                0.004945 seconds (Sampling)
#> Chain 2:                0.016097 seconds (Total)
#> Chain 2:
#>
#> SAMPLING FOR MODEL 'continuous' NOW (CHAIN 3).
#> Chain 3:
#> Chain 3: Gradient evaluation took 9e-06 seconds
#> Chain 3: 1000 transitions using 10 leapfrog steps per transition would take 0.09 seconds.
#> Chain 3:
#> Chain 3:
#> Chain 3: Iteration:   1 / 300 [  0%]  (Warmup)
#> Chain 3: Iteration:  30 / 300 [ 10%]  (Warmup)
#> Chain 3: Iteration:  60 / 300 [ 20%]  (Warmup)
#> Chain 3: Iteration:  90 / 300 [ 30%]  (Warmup)
#> Chain 3: Iteration: 120 / 300 [ 40%]  (Warmup)
#> Chain 3: Iteration: 150 / 300 [ 50%]  (Warmup)
#> Chain 3: Iteration: 151 / 300 [ 50%]  (Sampling)
#> Chain 3: Iteration: 180 / 300 [ 60%]  (Sampling)
#> Chain 3: Iteration: 210 / 300 [ 70%]  (Sampling)
#> Chain 3: Iteration: 240 / 300 [ 80%]  (Sampling)
#> Chain 3: Iteration: 270 / 300 [ 90%]  (Sampling)
#> Chain 3: Iteration: 300 / 300 [100%]  (Sampling)
#> Chain 3:
#> Chain 3:  Elapsed Time: 0.011701 seconds (Warm-up)
#> Chain 3:                0.007061 seconds (Sampling)
#> Chain 3:                0.018762 seconds (Total)
#> Chain 3:
#>
#> SAMPLING FOR MODEL 'continuous' NOW (CHAIN 4).
#> Chain 4:
#> Chain 4: Gradient evaluation took 1.1e-05 seconds
#> Chain 4: 1000 transitions using 10 leapfrog steps per transition would take 0.11 seconds.
#> Chain 4:
#> Chain 4:
#> Chain 4: Iteration:   1 / 300 [  0%]  (Warmup)
#> Chain 4: Iteration:  30 / 300 [ 10%]  (Warmup)
#> Chain 4: Iteration:  60 / 300 [ 20%]  (Warmup)
#> Chain 4: Iteration:  90 / 300 [ 30%]  (Warmup)
#> Chain 4: Iteration: 120 / 300 [ 40%]  (Warmup)
#> Chain 4: Iteration: 150 / 300 [ 50%]  (Warmup)
#> Chain 4: Iteration: 151 / 300 [ 50%]  (Sampling)
#> Chain 4: Iteration: 180 / 300 [ 60%]  (Sampling)
#> Chain 4: Iteration: 210 / 300 [ 70%]  (Sampling)
#> Chain 4: Iteration: 240 / 300 [ 80%]  (Sampling)
#> Chain 4: Iteration: 270 / 300 [ 90%]  (Sampling)
#> Chain 4: Iteration: 300 / 300 [100%]  (Sampling)
#> Chain 4:
#> Chain 4:  Elapsed Time: 0.015004 seconds (Warm-up)
#> Chain 4:                0.006421 seconds (Sampling)
#> Chain 4:                0.021425 seconds (Total)
#> Chain 4:
#> Warning: Bulk Effective Samples Size (ESS) is too low, indicating posterior means and medians may be unreliable.
#> Running the chains for more iterations may help. See
#> http://mc-stan.org/misc/warnings.html#bulk-ess
#> Warning: Tail Effective Samples Size (ESS) is too low, indicating posterior variances and tail quantiles may be unreliable.
#> Running the chains for more iterations may help. See
#> http://mc-stan.org/misc/warnings.html#tail-ess

predictive_interval(fit)
#>                            5%      95%
#> Mazda RX4           18.005730 28.73362
#> Mazda RX4 Wag       16.686794 27.18966
#> Datsun 710          19.378961 30.52869
#> Hornet 4 Drive      14.519490 25.10673
#> Valiant             13.570513 23.81552
#> Duster 360          13.430621 23.73982
#> Merc 240D           15.234610 25.43294
#> Merc 230            14.983108 26.00783
#> Merc 280            13.603792 24.16747
#> Merc 280C           13.801138 24.17822
#> Merc 450SE          10.235224 21.25631
#> Merc 450SL          11.855471 22.57667
#> Merc 450SLC         11.840450 22.22804
#> Lincoln Continental  2.676172 13.98966
#> Chrysler Imperial    2.946626 14.44062
#> Fiat 128            20.187047 31.33859
#> Honda Civic         23.376361 33.82195
#> Toyota Corolla      22.403555 32.95855
#> Toyota Corona       19.140440 29.59045
#> Dodge Challenger    13.174725 23.31733
#> AMC Javelin         14.185602 24.37029
#> Camaro Z28          11.334363 22.38741
#> Pontiac Firebird    11.885473 22.05875
#> Fiat X1-9           21.806661 32.30304
#> Porsche 914-2       20.879908 31.00981
#> Lotus Europa        24.027026 34.48600
#> Ford Pantera L      15.035939 25.99151
#> Ferrari Dino        17.493230 28.27952
#> Maserati Bora       12.657825 23.44672
#> Volvo 142E          17.245372 27.26530

predictive_interval(fit, newdata = data.frame(wt = range(mtcars$wt)),
                    prob = 0.5)
#>         25%      75%
#> 1 26.876641 31.28335
#> 2  6.258314 10.66288
# stanreg vs ppd methods
preds <- posterior_predict(fit, seed = 123)
all.equal(
  predictive_interval(fit, seed = 123),
  predictive_interval(preds)
)
#> [1] TRUE