## 20.4 Posteriors with Unbounded Densities

In some cases, the posterior density grows without bound as parameters approach certain poles or boundaries. When this happens, there is no posterior mode, and numerical stability issues can arise as sampled parameters approach the constraint boundaries.

### Mixture Models with Varying Scales

One such example is a binary mixture model with scales varying by component, $$\sigma_1$$ and $$\sigma_2$$ for locations $$\mu_1$$ and $$\mu_2$$. In this situation, the density grows without bound as $$\sigma_1 \rightarrow 0$$ and $$\mu_1 \rightarrow y_n$$ for some $$n$$; that is, one of the mixture components concentrates all of its mass around a single data item $$y_n$$.
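This unboundedness is easy to see numerically. The following Python sketch (the data, mixing weight, and second-component parameters are hypothetical, chosen only for illustration) pins $$\mu_1$$ at one observation and shrinks $$\sigma_1$$, driving the mixture log likelihood to infinity:

```python
import math

# Hypothetical data and fixed settings, chosen only for illustration:
# a two-component normal mixture with mixing weight lam and a fixed
# second component normal(mu2, sigma2).
y = [1.2, -0.5, 0.3, 2.1]
lam, mu2, sigma2 = 0.5, 0.0, 1.0

def normal_lpdf(x, mu, sigma):
    return (-0.5 * math.log(2 * math.pi) - math.log(sigma)
            - 0.5 * ((x - mu) / sigma) ** 2)

def mixture_log_lik(mu1, sigma1):
    total = 0.0
    for yn in y:
        a = math.log(lam) + normal_lpdf(yn, mu1, sigma1)
        b = math.log(1.0 - lam) + normal_lpdf(yn, mu2, sigma2)
        m = max(a, b)  # log-sum-exp for numerical stability
        total += m + math.log(math.exp(a - m) + math.exp(b - m))
    return total

# Pin mu1 at the first observation and shrink sigma1: the -log(sigma1)
# term for that observation grows without bound, while the remaining
# observations stay covered by the second component.
for sigma1 in (1.0, 1e-2, 1e-4, 1e-8):
    print(sigma1, mixture_log_lik(mu1=y[0], sigma1=sigma1))
```

The log likelihood increases without bound as $$\sigma_1 \rightarrow 0$$ because the first component's density spikes at $$y_1$$ while the second component keeps the likelihood of the other observations bounded away from zero.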

### Beta-Binomial Models with Skewed Data and Weak Priors

Another example of an unbounded density is the posterior $$\textsf{beta}(\phi \mid 0.5,0.5)$$, which can arise when a seemingly weak beta prior is used for a group that has no data. This density is unbounded as $$\phi \rightarrow 0$$ and as $$\phi \rightarrow 1$$. Similarly, a Bernoulli likelihood coupled with the same “weak” beta prior leads to the posterior

\begin{align*}
p(\phi \mid y) &\propto \textsf{beta}(\phi \mid 0.5,0.5) \times \prod_{n=1}^N \textsf{Bernoulli}(y_n \mid \phi) \\
&= \textsf{beta}\left(\phi \,\middle|\, 0.5 + \sum_{n=1}^N y_n,\ 0.5 + N - \sum_{n=1}^N y_n\right).
\end{align*}

If $$N = 9$$ and each $$y_n = 1$$, the posterior is $$\textsf{beta}(\phi \mid 9.5, 0.5)$$. This posterior is unbounded as $$\phi \rightarrow 1$$. Nevertheless, the posterior is proper, and although there is no posterior mode, the posterior mean is well defined, with a value of exactly 0.95.
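The conjugate update and posterior mean for this particular example are easy to check directly, as in this short Python sketch:

```python
# Conjugate update for the example above: prior beta(0.5, 0.5) and
# N = 9 Bernoulli observations, all successes.
a0, b0 = 0.5, 0.5
y = [1] * 9

a_post = a0 + sum(y)            # 0.5 + 9     = 9.5
b_post = b0 + len(y) - sum(y)   # 0.5 + 9 - 9 = 0.5
post_mean = a_post / (a_post + b_post)

print(a_post, b_post, post_mean)  # 9.5 0.5 0.95
```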

#### Constrained vs. Unconstrained Scales

Stan does not sample directly on the constrained $$(0,1)$$ scale for this problem, so it never has to evaluate the unbounded density values at the boundaries. Rather, the probability value $$\phi$$ is logit-transformed to $$(-\infty,\infty)$$, which pushes the boundaries at 0 and 1 out to $$-\infty$$ and $$\infty$$ respectively. The Jacobian adjustment that Stan automatically applies ensures the unconstrained density is proper. The adjustment for the particular case of $$(0,1)$$ is $$\log \operatorname{logit}^{-1}(y) + \log \operatorname{logit}^{-1}(-y)$$, where $$y = \operatorname{logit}(\phi)$$ is the unconstrained value.
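A minimal Python sketch (not Stan's internal code) of the transformed log density for the $$\textsf{beta}(\phi \mid 9.5, 0.5)$$ example illustrates this: once the log Jacobian adjustment is included, the log density on the unconstrained scale is finite everywhere and eventually decreases toward the former boundary, so the unconstrained density is proper.

```python
import math

def log_inv_logit(t):
    # log logit^{-1}(t), computed without underflow for either sign of t
    return -math.log1p(math.exp(-t)) if t >= 0 else t - math.log1p(math.exp(t))

def unconstrained_lp(t, a=9.5, b=0.5):
    # log beta(phi | a, b) kernel at phi = logit^{-1}(t), plus the log
    # Jacobian adjustment log logit^{-1}(t) + log logit^{-1}(-t)
    log_phi = log_inv_logit(t)      # log phi
    log1m_phi = log_inv_logit(-t)   # log (1 - phi)
    kernel = (a - 1.0) * log_phi + (b - 1.0) * log1m_phi
    return kernel + log_phi + log1m_phi

# On the constrained scale the density blows up as phi -> 1; on the
# unconstrained scale the adjusted log density stays finite and falls
# off roughly like -0.5 * t as t -> infinity.
for t in (0.0, 5.0, 20.0, 50.0):
    print(t, unconstrained_lp(t))
```

Algebraically, the kernel plus adjustment collapses to $$a \log \phi + b \log (1 - \phi)$$, which behaves like $$-b\,y$$ as $$y \rightarrow \infty$$, confirming propriety for $$b > 0$$.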

There are two problems that still arise, though. The first is that if the posterior mass for $$\phi$$ is near one of the boundaries, the logit-transformed parameter has to sweep out long paths and can thus dominate the U-turn condition imposed by the no-U-turn sampler (NUTS). The second is that the inverse transform from the unconstrained scale to the constrained scale can underflow to 0 or round to 1 even when the unconstrained parameter is finite. Similar problems arise for the expectation terms in logistic regression, which is why the logit-scale parameterizations of the Bernoulli and binomial distributions are more stable.
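The rounding issue is easy to demonstrate in double precision. In this Python sketch, the inverse transform rounds to exactly 1 at a moderate unconstrained value, while the equivalent log-scale computation (the kind of quantity a logit-scale parameterization works with) remains informative:

```python
import math

def inv_logit(t):
    # standard inverse logit, evaluated on the branch that avoids
    # overflow in exp
    if t >= 0:
        return 1.0 / (1.0 + math.exp(-t))
    e = math.exp(t)
    return e / (1.0 + e)

# exp(-40) is far below machine epsilon relative to 1, so the inverse
# transform rounds to exactly 1.0 in double precision even though the
# unconstrained value 40 is nowhere near infinity.
t = 40.0
phi = inv_logit(t)
print(phi)        # 1.0
print(1.0 - phi)  # 0.0: all information about the distance to 1 is lost

# On the log scale the same quantities are still distinguishable:
log_phi = -math.log1p(math.exp(-t))       # log phi, tiny but nonzero
log1m_phi = -t - math.log1p(math.exp(-t)) # log (1 - phi), about -40
print(log_phi, log1m_phi)
```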