## 10.6 Ordered Vector

Some modeling tasks require a vector-valued random variable $$X$$ with support on ordered sequences. One example is the set of cut points in ordered logistic regression.

In constraint terms, an ordered $$K$$-vector $$x \in \mathbb{R}^K$$ satisfies

$x_k < x_{k+1}$

for $$k \in \{ 1, \ldots, K-1 \}$$.
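The constraint can be checked directly: a vector is ordered exactly when every adjacent difference is positive. A minimal sketch (the function name `is_ordered` is illustrative, not part of Stan):

```python
import numpy as np

def is_ordered(x):
    # The constraint x_k < x_{k+1} holds iff every adjacent
    # difference x_{k+1} - x_k is strictly positive.
    x = np.asarray(x, dtype=float)
    return bool(np.all(np.diff(x) > 0))
```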

### Ordered Transform

Stan’s transform $$f$$ follows the constraint directly. It maps an ordered (strictly increasing) vector $$x \in \mathbb{R}^{K}$$ to an unconstrained vector $$y = f(x) \in \mathbb{R}^K$$ by setting

$y_k = \left\{ \begin{array}{ll} x_1 & \mbox{if } k = 1, \mbox{ and} \\ \log \left( x_{k} - x_{k-1} \right) & \mbox{if } 1 < k \leq K. \end{array} \right.$
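The transform keeps the first element and takes the log of each successive gap. A sketch of this definition in NumPy (using 0-based indexing, so $$y_1$$ is `y[0]`):

```python
import numpy as np

def ordered_transform(x):
    # y_1 = x_1; y_k = log(x_k - x_{k-1}) for 1 < k <= K.
    # np.diff(x) computes the adjacent gaps x_k - x_{k-1}.
    x = np.asarray(x, dtype=float)
    y = np.empty_like(x)
    y[0] = x[0]
    y[1:] = np.log(np.diff(x))
    return y
```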

### Ordered Inverse Transform

The inverse transform maps an unconstrained vector $$y \in \mathbb{R}^K$$ back to an ordered vector $$x = f^{-1}(y) \in \mathbb{R}^K$$ via the recursion

$x_k = \left\{ \begin{array}{ll} y_1 & \mbox{if } k = 1, \mbox{ and} \\ x_{k-1} + \exp(y_k) & \mbox{if } 1 < k \leq K. \end{array} \right.$

Unrolling the recursion, $$x_k$$ can also be written in closed form as

$x_k = y_1 + \sum_{k'=2}^k \exp(y_{k'}).$
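The closed form is a cumulative sum of $$(y_1, \exp(y_2), \ldots, \exp(y_K))$$, which gives a loop-free sketch of the inverse transform:

```python
import numpy as np

def ordered_inverse_transform(y):
    # x_1 = y_1; x_k = x_{k-1} + exp(y_k) for 1 < k <= K,
    # i.e. the cumulative sum of (y_1, exp(y_2), ..., exp(y_K)).
    y = np.asarray(y, dtype=float)
    return np.cumsum(np.concatenate(([y[0]], np.exp(y[1:]))))
```

Because every added term $$\exp(y_k)$$ is positive, the result is strictly increasing for any unconstrained input.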

### Absolute Jacobian Determinant of the Ordered Inverse Transform

The Jacobian of the inverse transform $$f^{-1}$$ is lower triangular, with diagonal elements for $$1 \leq k \leq K$$ of

$J_{k,k} = \left\{ \begin{array}{ll} 1 & \mbox{if } k = 1, \mbox{ and} \\ \exp(y_k) & \mbox{if } 1 < k \leq K. \end{array} \right.$

Because $$J$$ is triangular, the absolute Jacobian determinant is

$\left| \, \det \, J \, \right| \ = \ \left| \, \prod_{k=1}^K J_{k,k} \, \right| \ = \ \prod_{k=2}^K \exp(y_k).$
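Taking logs, $$\log \left| \det J \right| = \sum_{k=2}^K y_k$$. This identity can be checked numerically against a finite-difference Jacobian of the inverse transform; the helper names below are illustrative:

```python
import numpy as np

def ordered_inverse_transform(y):
    y = np.asarray(y, dtype=float)
    return np.cumsum(np.concatenate(([y[0]], np.exp(y[1:]))))

def log_abs_det_jacobian(y):
    # The diagonal of J is (1, exp(y_2), ..., exp(y_K)),
    # so log |det J| = y_2 + ... + y_K.
    return float(np.sum(np.asarray(y, dtype=float)[1:]))

def numerical_log_abs_det(y, eps=1e-6):
    # Central-difference approximation of J, column by column.
    y = np.asarray(y, dtype=float)
    K = y.size
    J = np.empty((K, K))
    for j in range(K):
        e = np.zeros(K)
        e[j] = eps
        J[:, j] = (ordered_inverse_transform(y + e)
                   - ordered_inverse_transform(y - e)) / (2 * eps)
    sign, logdet = np.linalg.slogdet(J)
    return logdet
```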

Putting this all together, if $$p_X$$ is the density of $$X$$, then the transformed variable $$Y$$ has density $$p_Y$$ given by

$p_Y(y) = p_X(f^{-1}(y)) \ \prod_{k=2}^K \exp(y_k).$
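In log space, the change of variables reads $$\log p_Y(y) = \log p_X(f^{-1}(y)) + \sum_{k=2}^K y_k$$. A sketch with a hypothetical base density (independent standard normals, unnormalized over the ordered set, which suffices for MCMC-style use):

```python
import numpy as np

def ordered_inverse_transform(y):
    y = np.asarray(y, dtype=float)
    return np.cumsum(np.concatenate(([y[0]], np.exp(y[1:]))))

def log_p_y(y, log_p_x):
    # log p_Y(y) = log p_X(f^{-1}(y)) + sum_{k=2}^K y_k
    y = np.asarray(y, dtype=float)
    return log_p_x(ordered_inverse_transform(y)) + float(np.sum(y[1:]))

def std_normal_logpdf(x):
    # Hypothetical choice of p_X for illustration: product of
    # standard normal densities evaluated at the ordered values.
    x = np.asarray(x, dtype=float)
    return float(np.sum(-0.5 * x**2 - 0.5 * np.log(2 * np.pi)))
```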