Bayesian inference for multivariate GLMs with group-specific coefficients that are assumed to be correlated across the GLM submodels.

stan_mvmer(
  formula,
  data,
  family = gaussian,
  weights,
  prior = normal(autoscale = TRUE),
  prior_intercept = normal(autoscale = TRUE),
  prior_aux = cauchy(0, 5, autoscale = TRUE),
  prior_covariance = lkj(autoscale = TRUE),
  prior_PD = FALSE,
  algorithm = c("sampling", "meanfield", "fullrank"),
  adapt_delta = NULL,
  max_treedepth = 10L,
  init = "random",
  QR = FALSE,
  sparse = FALSE,
  ...
)

## Arguments

formula: A two-sided linear formula object describing both the fixed-effects and random-effects parts of the longitudinal submodel, in a similar vein to the formula specification in the lme4 package (see glmer or the lme4 vignette for details). Note however that the double bar (||) notation is not allowed when specifying the random-effects parts of the formula, and neither are nested grouping factors (e.g. (1 | g1/g2) or (1 | g1:g2), where g1 and g2 are grouping factors). For a multivariate GLM this should be a list of such formula objects, with each element of the list providing the formula for one of the GLM submodels.

data: A data frame containing the variables specified in formula. For a multivariate GLM, this can be either a single data frame containing the data for all GLM submodels, or a list of data frames where each element of the list provides the data for one of the GLM submodels.

family: The family (and possibly also the link function) for the GLM submodel(s). See glmer for details. If fitting a multivariate GLM, this can optionally be a list of families, in which case each element of the list specifies the family for one of the GLM submodels. In other words, a different family can be specified for each GLM submodel.

weights: Same as in glm, except that when fitting a multivariate GLM and a list of data frames is provided in data, a corresponding list of weights must be provided. If weights are provided for one of the GLM submodels, then they must be provided for all GLM submodels.

prior, prior_intercept, prior_aux: Same as in stan_glmer, except that for a multivariate GLM a list of priors can be provided for any of the prior, prior_intercept or prior_aux arguments. That is, different priors can optionally be specified for each of the GLM submodels. If a list is not provided, then the same prior distributions are used for each GLM submodel. Note that the "product_normal" prior is not allowed for stan_mvmer.

prior_covariance: Cannot be NULL; see priors for more information about the prior distributions on covariance matrices. Note however that the default prior for covariance matrices in stan_mvmer is slightly different to that in stan_glmer (the details of which are described on the priors page).

prior_PD: A logical scalar (defaulting to FALSE) indicating whether to draw from the prior predictive distribution instead of conditioning on the outcome.

algorithm: A string (possibly abbreviated) indicating the estimation approach to use. Can be "sampling" for MCMC (the default), "optimizing" for optimization, "meanfield" for variational inference with independent normal distributions, or "fullrank" for variational inference with a multivariate normal distribution. See rstanarm-package for more details on the estimation algorithms. NOTE: not all fitting functions support all four algorithms.

adapt_delta: Only relevant if algorithm="sampling". See the adapt_delta help page for details.

max_treedepth: A positive integer specifying the maximum treedepth for the no-U-turn sampler. See the control argument in stan.

init: The method for generating initial values. See stan.

QR: A logical scalar defaulting to FALSE, but if TRUE applies a scaled QR decomposition to the design matrix. The transformation does not change the likelihood of the data but is recommended for computational reasons when there are multiple predictors. See the QR-argument documentation page for details on how rstanarm does the transformation and important information about how to interpret the prior distributions of the model parameters when using QR=TRUE.

sparse: A logical scalar (defaulting to FALSE) indicating whether to use a sparse representation of the design (X) matrix. If TRUE, the design matrix is not centered (since that would destroy the sparsity) and it is not possible to specify both QR = TRUE and sparse = TRUE. Depending on how many zeros there are in the design matrix, setting sparse = TRUE may make the code run faster and can consume much less RAM.

...: Further arguments passed to the function in the rstan package (sampling, vb, or optimizing), corresponding to the estimation method named by algorithm. For example, if algorithm is "sampling" it is possible to specify iter, chains, cores, refresh, etc.
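As a brief sketch of the list-of-priors mechanism described above (not run; assumes the rstanarm package and its pbcLong data are available, and the specific prior choices are purely illustrative, not recommendations):

```r
library(rstanarm)

# Sketch only: supply lists so that each GLM submodel gets its own prior
# on the regression coefficients (prior) and auxiliary parameter (prior_aux).
# The first list element applies to submodel 1, the second to submodel 2.
fit <- stan_mvmer(
  formula = list(
    logBili ~ year + (1 | id),
    albumin ~ sex + year + (year | id)),
  data = pbcLong,
  prior = list(normal(0, 2), student_t(4, 0, 2.5)),
  prior_aux = list(cauchy(0, 5), cauchy(0, 5)),
  chains = 1, iter = 500, seed = 12345)
```

If a single prior object were passed instead of a list, that same prior would be applied to every submodel.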

## Value

A stanmvreg object is returned.

## Details

The stan_mvmer function can be used to fit a multivariate generalized linear model (GLM) with group-specific terms. The model consists of distinct GLM submodels, each of which contains group-specific terms; within a grouping factor (for example, patient ID) the group-specific terms are assumed to be correlated across the different GLM submodels. It is possible to specify a different outcome type (for example, a different family and/or link function) for each of the GLM submodels.

Bayesian estimation of the model is performed via MCMC, in the same way as for stan_glmer. Also, similar to stan_glmer, an unstructured covariance matrix is used for the group-specific terms within a given grouping factor, with priors on the terms of a decomposition of the covariance matrix. See priors for more information about the prior distributions that are available for the covariance matrices, the regression coefficients, and the intercept and auxiliary parameters.
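As a usage sketch (assuming a fitted stanmvreg object, here called fit), the priors actually used and the estimated covariance structure of the group-specific terms can be inspected with the usual rstanarm accessors:

```r
# Sketch, assuming 'fit' is a stanmvreg object returned by stan_mvmer():
prior_summary(fit)  # the prior distributions actually used, including the
                    # decomposition-based prior on the covariance matrix
VarCorr(fit)        # estimated variance-covariance of the group-specific
                    # terms, pooled across the GLM submodels
```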

## See also

stan_glmer, stan_jm, stanreg-objects, stanmvreg-methods, print.stanmvreg, summary.stanmvreg, posterior_predict, posterior_interval.

## Examples

# \donttest{
#####
# A multivariate GLM with two submodels. For the grouping factor 'id', the
# group-specific intercept from the first submodel (logBili) is assumed to
# be correlated with the group-specific intercept and linear slope in the
# second submodel (albumin)
f1 <- stan_mvmer(
  formula = list(
    logBili ~ year + (1 | id),
    albumin ~ sex + year + (year | id)),
  data = pbcLong,
  # this next line is only to keep the example small in size!
  chains = 1, cores = 1, seed = 12345, iter = 1000)
#> Fitting a multivariate glmer model.
#>
#> Please note the warmup may be much slower than later iterations!
#>
#> SAMPLING FOR MODEL 'mvmer' NOW (CHAIN 1).
#> Chain 1:
#> Chain 1: Gradient evaluation took 0.000106 seconds
#> Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 1.06 seconds.
#> Chain 1: Adjust your expectations accordingly!
#> Chain 1:
#> Chain 1:
#> Chain 1: Iteration:   1 / 1000 [  0%]  (Warmup)
#> Chain 1: Iteration: 100 / 1000 [ 10%]  (Warmup)
#> Chain 1: Iteration: 200 / 1000 [ 20%]  (Warmup)
#> Chain 1: Iteration: 300 / 1000 [ 30%]  (Warmup)
#> Chain 1: Iteration: 400 / 1000 [ 40%]  (Warmup)
#> Chain 1: Iteration: 500 / 1000 [ 50%]  (Warmup)
#> Chain 1: Iteration: 501 / 1000 [ 50%]  (Sampling)
#> Chain 1: Iteration: 600 / 1000 [ 60%]  (Sampling)
#> Chain 1: Iteration: 700 / 1000 [ 70%]  (Sampling)
#> Chain 1: Iteration: 800 / 1000 [ 80%]  (Sampling)
#> Chain 1: Iteration: 900 / 1000 [ 90%]  (Sampling)
#> Chain 1: Iteration: 1000 / 1000 [100%]  (Sampling)
#> Chain 1:
#> Chain 1:  Elapsed Time: 4.56009 seconds (Warm-up)
#> Chain 1:                2.67051 seconds (Sampling)
#> Chain 1:                7.2306 seconds (Total)
#> Chain 1:
#> Warning: The largest R-hat is 1.06, indicating chains have not mixed.
#> Running the chains for more iterations may help. See
#> http://mc-stan.org/misc/warnings.html#r-hat
#> Warning: Bulk Effective Samples Size (ESS) is too low, indicating posterior means and medians may be unreliable.
#> Running the chains for more iterations may help. See
#> http://mc-stan.org/misc/warnings.html#bulk-ess
#> Warning: Tail Effective Samples Size (ESS) is too low, indicating posterior variances and tail quantiles may be unreliable.
#> Running the chains for more iterations may help. See
#> http://mc-stan.org/misc/warnings.html#tail-ess
summary(f1)
#>
#> Model Info:
#>
#>  function:     stan_mvmer
#>  formula (y1): logBili ~ year + (1 | id)
#>  family  (y1): gaussian [identity]
#>  formula (y2): albumin ~ sex + year + (year | id)
#>  family  (y2): gaussian [identity]
#>  algorithm:    sampling
#>  priors:       see help('prior_summary')
#>  sample:       500 (posterior sample size)
#>  num obs:      304 (y1), 304 (y2)
#>  groups:       id (40)
#>  runtime:      0.1 mins
#>
#> Estimates:
#>                                           mean     sd       2.5%     25%
#> y1|(Intercept)                             0.719    0.186    0.396    0.579
#> y1|year                                    0.090    0.009    0.073    0.084
#> y1|sigma                                   0.508    0.021    0.471    0.492
#> y1|mean_PPD                                0.585    0.043    0.500    0.557
#> y2|(Intercept)                             3.423    0.205    3.027    3.287
#> y2|sexf                                    0.119    0.214   -0.287   -0.047
#> y2|year                                   -0.134    0.021   -0.184   -0.147
#> y2|sigma                                   0.290    0.013    0.265    0.281
#> y2|mean_PPD                                3.344    0.024    3.295    3.328
#> Sigma[id:y1|(Intercept),y1|(Intercept)]    1.687    0.382    1.042    1.406
#> Sigma[id:y2|(Intercept),y1|(Intercept)]   -0.430    0.133   -0.723   -0.512
#> Sigma[id:y2|year,y1|(Intercept)]          -0.073    0.030   -0.146   -0.089
#> Sigma[id:y2|(Intercept),y2|(Intercept)]    0.265    0.071    0.166    0.211
#> Sigma[id:y2|year,y2|(Intercept)]           0.016    0.012   -0.007    0.009
#> Sigma[id:y2|year,y2|year]                  0.007    0.003    0.003    0.005
#> log-posterior                           -481.227   10.849 -502.062 -488.473
#>                                           50%      75%      97.5%
#> y1|(Intercept)                             0.715    0.834    1.101
#> y1|year                                    0.089    0.096    0.108
#> y1|sigma                                   0.507    0.523    0.550
#> y1|mean_PPD                                0.585    0.613    0.670
#> y2|(Intercept)                             3.419    3.577    3.821
#> y2|sexf                                    0.124    0.260    0.523
#> y2|year                                   -0.133   -0.119   -0.098
#> y2|sigma                                   0.290    0.299    0.314
#> y2|mean_PPD                                3.344    3.360    3.387
#> Sigma[id:y1|(Intercept),y1|(Intercept)]    1.657    1.912    2.492
#> Sigma[id:y2|(Intercept),y1|(Intercept)]   -0.415   -0.332   -0.212
#> Sigma[id:y2|year,y1|(Intercept)]          -0.068   -0.053   -0.020
#> Sigma[id:y2|(Intercept),y2|(Intercept)]    0.253    0.307    0.441
#> Sigma[id:y2|year,y2|(Intercept)]           0.016    0.022    0.040
#> Sigma[id:y2|year,y2|year]                  0.006    0.008    0.015
#> log-posterior                           -481.111 -473.795 -460.966
#>
#> Diagnostics:
#>                                         mcse  Rhat  n_eff
#> y1|(Intercept)                          0.025 1.059   56
#> y1|year                                 0.000 0.998 1349
#> y1|sigma                                0.001 1.000  910
#> y1|mean_PPD                             0.002 1.001  446
#> y2|(Intercept)                          0.012 0.998  278
#> y2|sexf                                 0.015 0.998  212
#> y2|year                                 0.001 1.010  217
#> y2|sigma                                0.001 1.000  575
#> y2|mean_PPD                             0.001 0.998  536
#> Sigma[id:y1|(Intercept),y1|(Intercept)] 0.033 1.030  135
#> Sigma[id:y2|(Intercept),y1|(Intercept)] 0.012 1.039  119
#> Sigma[id:y2|year,y1|(Intercept)]        0.002 1.001  206
#> Sigma[id:y2|(Intercept),y2|(Intercept)] 0.006 1.017  147
#> Sigma[id:y2|year,y2|(Intercept)]        0.001 1.000  176
#> Sigma[id:y2|year,y2|year]               0.000 0.998  259
#> log-posterior                           0.914 0.998  141
#>
#> For each parameter, mcse is Monte Carlo standard error, n_eff is a crude measure of effective sample size, and Rhat is the potential scale reduction factor on split chains (at convergence Rhat=1).
#####
# A multivariate GLM with one bernoulli outcome and one
# gaussian outcome. We will artificially create the bernoulli
# outcome by dichotomising log serum bilirubin
pbcLong$ybern <- as.integer(pbcLong$logBili >= mean(pbcLong$logBili))
f2 <- stan_mvmer(
  formula = list(
    ybern ~ year + (1 | id),
    albumin ~ sex + year + (year | id)),
  data = pbcLong,
  family = list(binomial, gaussian),
  chains = 1, cores = 1, seed = 12345, iter = 1000)
#> Fitting a multivariate glmer model.
#>
#> Please note the warmup may be much slower than later iterations!
#>
#> SAMPLING FOR MODEL 'mvmer' NOW (CHAIN 1).
#> Chain 1:
#> Chain 1: Gradient evaluation took 0.000135 seconds
#> Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 1.35 seconds.
#> Chain 1: Adjust your expectations accordingly!
#> Chain 1:
#> Chain 1:
#> Chain 1: Iteration:   1 / 1000 [  0%]  (Warmup)
#> Chain 1: Iteration: 100 / 1000 [ 10%]  (Warmup)
#> Chain 1: Iteration: 200 / 1000 [ 20%]  (Warmup)
#> Chain 1: Iteration: 300 / 1000 [ 30%]  (Warmup)
#> Chain 1: Iteration: 400 / 1000 [ 40%]  (Warmup)
#> Chain 1: Iteration: 500 / 1000 [ 50%]  (Warmup)
#> Chain 1: Iteration: 501 / 1000 [ 50%]  (Sampling)
#> Chain 1: Iteration: 600 / 1000 [ 60%]  (Sampling)
#> Chain 1: Iteration: 700 / 1000 [ 70%]  (Sampling)
#> Chain 1: Iteration: 800 / 1000 [ 80%]  (Sampling)
#> Chain 1: Iteration: 900 / 1000 [ 90%]  (Sampling)
#> Chain 1: Iteration: 1000 / 1000 [100%]  (Sampling)
#> Chain 1:
#> Chain 1:  Elapsed Time: 5.07487 seconds (Warm-up)
#> Chain 1:                2.82034 seconds (Sampling)
#> Chain 1:                7.89521 seconds (Total)
#> Chain 1:
#> Warning: Bulk Effective Samples Size (ESS) is too low, indicating posterior means and medians may be unreliable.
#> Running the chains for more iterations may help. See
#> http://mc-stan.org/misc/warnings.html#bulk-ess
# }
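As a follow-on sketch (not run), posterior predictions for an individual submodel of a fitted multivariate model can be drawn with posterior_predict(), whose m argument selects the submodel:

```r
# Sketch, assuming 'f2' is the fitted object from the example above.
# For stanmvreg objects, posterior_predict() takes an 'm' argument
# indicating which GLM submodel to predict from.
pp <- posterior_predict(f2, m = 2)  # draws for the albumin submodel
dim(pp)                             # posterior draws x observations
```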