
21.1 Theoretical and Practical Background

A Bayesian posterior is technically a probability measure, which is a parameterization-invariant, abstract mathematical object.¹
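
The distinction can be made concrete with the change-of-variables formula. Under a smooth, invertible reparameterization \(y = g(x)\), the probability of any event is unchanged, but the density picks up a Jacobian factor:

\[
p_Y(y) = p_X\!\big(g^{-1}(y)\big)\,\bigl|\det J_{g^{-1}}(y)\bigr|,
\]

so one posterior measure corresponds to many different density functions, one per parameterization.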

Stan’s modeling language, on the other hand, defines a probability density, which is a non-unique, parameterization-dependent function from \(\mathbb{R}^N\) to \(\mathbb{R}^{+}\). In practice, this means a given model can be represented in different ways in Stan, and different representations can have very different computational performance.
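
For example (a sketch not taken from this section; the data and variable names are hypothetical), the centered and non-centered parameterizations of a hierarchical normal model express exactly the same posterior measure, yet Stan's sampler can behave very differently on the two densities:

```stan
// Non-centered parameterization of a hierarchical normal model.
// The centered version would declare theta directly as a parameter
// and write `theta ~ normal(mu, tau);` in the model block; both
// programs define the same posterior distribution.
data {
  int<lower=0> J;              // number of groups
  vector[J] y;                 // observed group means
  vector<lower=0>[J] sigma;    // known standard errors
}
parameters {
  real mu;                     // population mean
  real<lower=0> tau;           // population scale
  vector[J] theta_raw;         // standardized group effects
}
transformed parameters {
  // Shift and scale so that theta ~ normal(mu, tau) implicitly.
  vector[J] theta = mu + tau * theta_raw;
}
model {
  theta_raw ~ std_normal();    // unit normal on the raw scale
  y ~ normal(theta, sigma);
}
```

When the groups are only weakly informed by the data, the centered density exhibits the funnel-shaped geometry that frustrates Hamiltonian Monte Carlo, while the non-centered density does not.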

As Gelman (2004) points out in a paper on the relationship between parameterizations and Bayesian modeling, a change of parameterization often suggests how the model itself might change, because we tend to use certain natural classes of prior distributions. It is therefore not just that we have a fixed distribution to sample from, with reparameterizations serving as computational aids; once we reparameterize and add prior information, the model itself typically changes, often in useful ways.
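
As a concrete sketch of this point (the priors shown are illustrative, not prescribed by the text), reparameterizing a beta prior from its shape parameters \((\alpha, \beta)\) to a mean \(\phi\) and concentration \(\lambda\) makes it natural to state prior information directly about the mean and about the effective prior sample size:

```stan
parameters {
  real<lower=0, upper=1> phi;     // mean of the beta distribution
  real<lower=0.1> lambda;         // concentration ("prior sample size")
  real<lower=0, upper=1> theta;   // quantity of interest
}
model {
  phi ~ beta(1, 1);               // uniform prior on the mean
  lambda ~ pareto(0.1, 1.5);      // heavy-tailed prior on concentration
  theta ~ beta(lambda * phi, lambda * (1 - phi));
}
```

In the \((\alpha, \beta)\) parameterization, encoding the same information would require a joint prior on the two shape parameters; the new parameterization both aids computation and changes what priors are natural to write down.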

References

Gelman, Andrew. 2004. “Parameterization and Bayesian Modeling.” Journal of the American Statistical Association 99: 537–45.


  1. This is in contrast to (penalized) maximum likelihood estimates, which are not parameterization invariant.