R/stan_mvmer.R
stan_mvmer.Rd
Bayesian inference for multivariate GLMs with group-specific coefficients that are assumed to be correlated across the GLM submodels.
stan_mvmer(
  formula,
  data,
  family = gaussian,
  weights,
  prior = normal(autoscale = TRUE),
  prior_intercept = normal(autoscale = TRUE),
  prior_aux = cauchy(0, 5, autoscale = TRUE),
  prior_covariance = lkj(autoscale = TRUE),
  prior_PD = FALSE,
  algorithm = c("sampling", "meanfield", "fullrank"),
  adapt_delta = NULL,
  max_treedepth = 10L,
  init = "random",
  QR = FALSE,
  sparse = FALSE,
  ...
)
formula  A two-sided linear formula object describing both the
fixed-effects and random-effects parts of the longitudinal submodel,
similar in vein to formula specification in the lme4 package
(see glmer). If fitting a multivariate GLM, this should be a list of
such formula objects, one for each GLM submodel.
data  A data frame containing the variables specified in formula.
family  The family (and possibly also the link function) for the
GLM submodel(s). See glmer for details. If fitting a multivariate GLM,
this can be a list of families, one for each GLM submodel.
weights  Same as in glmer.
prior, prior_intercept, prior_aux  Same as in stan_glmer.
prior_covariance  Cannot be NULL; see priors for more information about the
prior distributions available for covariance matrices.
prior_PD  A logical scalar (defaulting to FALSE) indicating whether to draw
from the prior predictive distribution instead of conditioning on the outcome.
algorithm  A string (possibly abbreviated) indicating the
estimation approach to use. Can be "sampling" for MCMC (the default),
"meanfield" for variational inference with independent normal distributions,
or "fullrank" for variational inference with a multivariate normal
distribution.
adapt_delta  Only relevant if algorithm = "sampling". See adapt_delta for details.
max_treedepth  A positive integer specifying the maximum treedepth
for the no-U-turn sampler. See the control argument in stan.
init  The method for generating initial values. See stan.
QR  A logical scalar defaulting to FALSE, but if TRUE applies a scaled
QR decomposition to the design matrix.
sparse  A logical scalar (defaulting to FALSE) indicating whether to use a
sparse representation of the design (X) matrix.
...  Further arguments passed to the function in the rstan
package (e.g. sampling for MCMC), corresponding to the estimation method
named by algorithm.
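To illustrate the prior-related arguments, a call overriding the autoscaled defaults might look like the following sketch. It uses the pbcLong data and the two-submodel formula from the examples below; the specific prior choices shown here are arbitrary, not recommendations:

```r
library(rstanarm)

# Hypothetical example: two GLM submodels sharing the grouping factor 'id',
# with explicit priors in place of the autoscaled defaults.
fit <- stan_mvmer(
  formula = list(
    logBili ~ year + (1 | id),
    albumin ~ sex + year + (year | id)),
  data = pbcLong,
  prior = normal(0, 2.5),          # regression coefficients
  prior_intercept = normal(0, 5),  # intercepts
  prior_aux = cauchy(0, 5),        # auxiliary parameters (e.g. sigma)
  prior_covariance = lkj(),        # decomposition of the covariance matrix
  chains = 2, iter = 1000, seed = 12345)
```

Because the formula argument is a list, prior applies to the regression coefficients of every submodel; submodel-specific tuning is done through the scale arguments of the prior functions.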
A stanmvreg object is returned.
The stan_mvmer function can be used to fit a multivariate
generalized linear model (GLM) with group-specific terms. The model consists
of distinct GLM submodels, each of which contains group-specific terms; within
a grouping factor (for example, patient ID) the group-specific terms are
assumed to be correlated across the different GLM submodels. It is
possible to specify a different outcome type (for example, a different
family and/or link function) for each of the GLM submodels.
Bayesian estimation of the model is performed via MCMC, in the same way as
for stan_glmer. Also, similar to stan_glmer,
an unstructured covariance matrix is used for the group-specific terms
within a given grouping factor, with priors on the terms of a decomposition
of the covariance matrix. See priors for more information about
the prior distributions that are available for the covariance matrices,
the regression coefficients, and the intercept and auxiliary parameters.
stan_glmer, stan_jm,
stanreg-objects, stanmvreg-methods,
print.stanmvreg, summary.stanmvreg,
posterior_predict, posterior_interval.
# \donttest{
#####
# A multivariate GLM with two submodels. For the grouping factor 'id', the
# group-specific intercept from the first submodel (logBili) is assumed to
# be correlated with the group-specific intercept and linear slope in the
# second submodel (albumin)
f1 <- stan_mvmer(
  formula = list(
    logBili ~ year + (1 | id),
    albumin ~ sex + year + (year | id)),
  data = pbcLong,
  # this next line is only to keep the example small in size!
  chains = 1, cores = 1, seed = 12345, iter = 1000)
#> Fitting a multivariate glmer model.
#>
#> Please note the warmup may be much slower than later iterations!
#>
#> SAMPLING FOR MODEL 'mvmer' NOW (CHAIN 1).
#> Chain 1:
#> Chain 1: Gradient evaluation took 0.000106 seconds
#> Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 1.06 seconds.
#> Chain 1: Adjust your expectations accordingly!
#> Chain 1:
#> Chain 1:
#> Chain 1: Iteration:    1 / 1000 [  0%]  (Warmup)
#> Chain 1: Iteration:  100 / 1000 [ 10%]  (Warmup)
#> Chain 1: Iteration:  200 / 1000 [ 20%]  (Warmup)
#> Chain 1: Iteration:  300 / 1000 [ 30%]  (Warmup)
#> Chain 1: Iteration:  400 / 1000 [ 40%]  (Warmup)
#> Chain 1: Iteration:  500 / 1000 [ 50%]  (Warmup)
#> Chain 1: Iteration:  501 / 1000 [ 50%]  (Sampling)
#> Chain 1: Iteration:  600 / 1000 [ 60%]  (Sampling)
#> Chain 1: Iteration:  700 / 1000 [ 70%]  (Sampling)
#> Chain 1: Iteration:  800 / 1000 [ 80%]  (Sampling)
#> Chain 1: Iteration:  900 / 1000 [ 90%]  (Sampling)
#> Chain 1: Iteration: 1000 / 1000 [100%]  (Sampling)
#> Chain 1:
#> Chain 1: Elapsed Time: 4.56009 seconds (Warmup)
#> Chain 1:               2.67051 seconds (Sampling)
#> Chain 1:               7.2306 seconds (Total)
#> Chain 1:
#> Warning: The largest R-hat is 1.06, indicating chains have not mixed.
#> Running the chains for more iterations may help. See
#> http://mc-stan.org/misc/warnings.html#r-hat
#> Warning: Bulk Effective Samples Size (ESS) is too low, indicating posterior means and medians may be unreliable.
#> Running the chains for more iterations may help. See
#> http://mc-stan.org/misc/warnings.html#bulk-ess
#> Warning: Tail Effective Samples Size (ESS) is too low, indicating posterior variances and tail quantiles may be unreliable.
#> Running the chains for more iterations may help. See
#> http://mc-stan.org/misc/warnings.html#tail-ess
summary(f1)
#>
#> Model Info:
#>
#>  function:     stan_mvmer
#>  formula (y1): logBili ~ year + (1 | id)
#>  family (y1):  gaussian [identity]
#>  formula (y2): albumin ~ sex + year + (year | id)
#>  family (y2):  gaussian [identity]
#>  algorithm:    sampling
#>  priors:       see help('prior_summary')
#>  sample:       500 (posterior sample size)
#>  num obs:      304 (y1), 304 (y2)
#>  groups:       id (40)
#>  runtime:      0.1 mins
#>
#> Estimates:
#>                                             mean      sd       2.5%      25%
#>   y1|(Intercept)                             0.719    0.186     0.396     0.579
#>   y1|year                                    0.090    0.009     0.073     0.084
#>   y1|sigma                                   0.508    0.021     0.471     0.492
#>   y1|mean_PPD                                0.585    0.043     0.500     0.557
#>   y2|(Intercept)                             3.423    0.205     3.027     3.287
#>   y2|sexf                                    0.119    0.214    -0.287    -0.047
#>   y2|year                                   -0.134    0.021    -0.184    -0.147
#>   y2|sigma                                   0.290    0.013     0.265     0.281
#>   y2|mean_PPD                                3.344    0.024     3.295     3.328
#>   Sigma[id:y1|(Intercept),y1|(Intercept)]    1.687    0.382     1.042     1.406
#>   Sigma[id:y2|(Intercept),y1|(Intercept)]   -0.430    0.133    -0.723    -0.512
#>   Sigma[id:y2|year,y1|(Intercept)]          -0.073    0.030    -0.146    -0.089
#>   Sigma[id:y2|(Intercept),y2|(Intercept)]    0.265    0.071     0.166     0.211
#>   Sigma[id:y2|year,y2|(Intercept)]           0.016    0.012    -0.007     0.009
#>   Sigma[id:y2|year,y2|year]                  0.007    0.003     0.003     0.005
#>   log-posterior                           -481.227   10.849  -502.062  -488.473
#>                                             50%       75%       97.5%
#>   y1|(Intercept)                             0.715     0.834     1.101
#>   y1|year                                    0.089     0.096     0.108
#>   y1|sigma                                   0.507     0.523     0.550
#>   y1|mean_PPD                                0.585     0.613     0.670
#>   y2|(Intercept)                             3.419     3.577     3.821
#>   y2|sexf                                    0.124     0.260     0.523
#>   y2|year                                   -0.133    -0.119    -0.098
#>   y2|sigma                                   0.290     0.299     0.314
#>   y2|mean_PPD                                3.344     3.360     3.387
#>   Sigma[id:y1|(Intercept),y1|(Intercept)]    1.657     1.912     2.492
#>   Sigma[id:y2|(Intercept),y1|(Intercept)]   -0.415    -0.332    -0.212
#>   Sigma[id:y2|year,y1|(Intercept)]          -0.068    -0.053    -0.020
#>   Sigma[id:y2|(Intercept),y2|(Intercept)]    0.253     0.307     0.441
#>   Sigma[id:y2|year,y2|(Intercept)]           0.016     0.022     0.040
#>   Sigma[id:y2|year,y2|year]                  0.006     0.008     0.015
#>   log-posterior                           -481.111  -473.795  -460.966
#>
#> Diagnostics:
#>                                           mcse  Rhat  n_eff
#>   y1|(Intercept)                          0.025 1.059   56
#>   y1|year                                 0.000 0.998 1349
#>   y1|sigma                                0.001 1.000  910
#>   y1|mean_PPD                             0.002 1.001  446
#>   y2|(Intercept)                          0.012 0.998  278
#>   y2|sexf                                 0.015 0.998  212
#>   y2|year                                 0.001 1.010  217
#>   y2|sigma                                0.001 1.000  575
#>   y2|mean_PPD                             0.001 0.998  536
#>   Sigma[id:y1|(Intercept),y1|(Intercept)] 0.033 1.030  135
#>   Sigma[id:y2|(Intercept),y1|(Intercept)] 0.012 1.039  119
#>   Sigma[id:y2|year,y1|(Intercept)]        0.002 1.001  206
#>   Sigma[id:y2|(Intercept),y2|(Intercept)] 0.006 1.017  147
#>   Sigma[id:y2|year,y2|(Intercept)]        0.001 1.000  176
#>   Sigma[id:y2|year,y2|year]               0.000 0.998  259
#>   log-posterior                           0.914 0.998  141
#>
#> For each parameter, mcse is Monte Carlo standard error, n_eff is a crude measure of effective sample size, and Rhat is the potential scale reduction factor on split chains (at convergence Rhat=1).

#####
# A multivariate GLM with one bernoulli outcome and one
# gaussian outcome. We will artificially create the bernoulli
# outcome by dichotomising log serum bilirubin
pbcLong$ybern <- as.integer(pbcLong$logBili >= mean(pbcLong$logBili))
f2 <- stan_mvmer(
  formula = list(
    ybern ~ year + (1 | id),
    albumin ~ sex + year + (year | id)),
  data = pbcLong,
  family = list(binomial, gaussian),
  chains = 1, cores = 1, seed = 12345, iter = 1000)
#> Fitting a multivariate glmer model.
#>
#> Please note the warmup may be much slower than later iterations!
#>
#> SAMPLING FOR MODEL 'mvmer' NOW (CHAIN 1).
#> Chain 1:
#> Chain 1: Gradient evaluation took 0.000135 seconds
#> Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 1.35 seconds.
#> Chain 1: Adjust your expectations accordingly!
#> Chain 1:
#> Chain 1:
#> Chain 1: Iteration:    1 / 1000 [  0%]  (Warmup)
#> Chain 1: Iteration:  100 / 1000 [ 10%]  (Warmup)
#> Chain 1: Iteration:  200 / 1000 [ 20%]  (Warmup)
#> Chain 1: Iteration:  300 / 1000 [ 30%]  (Warmup)
#> Chain 1: Iteration:  400 / 1000 [ 40%]  (Warmup)
#> Chain 1: Iteration:  500 / 1000 [ 50%]  (Warmup)
#> Chain 1: Iteration:  501 / 1000 [ 50%]  (Sampling)
#> Chain 1: Iteration:  600 / 1000 [ 60%]  (Sampling)
#> Chain 1: Iteration:  700 / 1000 [ 70%]  (Sampling)
#> Chain 1: Iteration:  800 / 1000 [ 80%]  (Sampling)
#> Chain 1: Iteration:  900 / 1000 [ 90%]  (Sampling)
#> Chain 1: Iteration: 1000 / 1000 [100%]  (Sampling)
#> Chain 1:
#> Chain 1: Elapsed Time: 5.07487 seconds (Warmup)
#> Chain 1:               2.82034 seconds (Sampling)
#> Chain 1:               7.89521 seconds (Total)
#> Chain 1:
#> Warning: Bulk Effective Samples Size (ESS) is too low, indicating posterior means and medians may be unreliable.
#> Running the chains for more iterations may help. See
#> http://mc-stan.org/misc/warnings.html#bulk-ess
# }