# Bounded Discrete Distributions

Bounded discrete probability functions have support on $$\{ 0, \ldots, N \}$$ for some upper bound $$N$$.

## Binomial distribution

### Probability mass function

Suppose $$N \in \mathbb{N}$$, $$\theta \in [0,1]$$, and $$n \in \{0,\ldots,N\}$$. Then $\begin{equation*} \text{Binomial}(n~|~N,\theta) = \binom{N}{n} \theta^n (1 - \theta)^{N - n}. \end{equation*}$

### Log probability mass function

$\begin{eqnarray*} \log \text{Binomial}(n~|~N,\theta) & = & \log \Gamma(N+1) - \log \Gamma(n + 1) - \log \Gamma(N - n + 1) \\[4pt] & & { } + n \log \theta + (N - n) \log (1 - \theta). \end{eqnarray*}$

### Gradient of log probability mass function

$\begin{equation*} \frac{\partial}{\partial \theta} \log \text{Binomial}(n~|~N,\theta) = \frac{n}{\theta} - \frac{N - n}{1 - \theta} \end{equation*}$
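The log PMF and its gradient can be sanity-checked numerically. The following Python sketch (illustrative helper names, not Stan code) computes the log PMF via `lgamma` and confirms the analytic gradient against a central finite difference:

```python
from math import lgamma, log, isclose

def binomial_lpmf(n, N, theta):
    """log C(N, n) + n log(theta) + (N - n) log(1 - theta)."""
    return (lgamma(N + 1) - lgamma(n + 1) - lgamma(N - n + 1)
            + n * log(theta) + (N - n) * log(1 - theta))

def binomial_lpmf_grad(n, N, theta):
    """Analytic gradient w.r.t. theta: n/theta - (N - n)/(1 - theta)."""
    return n / theta - (N - n) / (1 - theta)

# The analytic gradient should match a central finite difference.
n, N, theta, h = 3, 10, 0.4, 1e-6
fd = (binomial_lpmf(n, N, theta + h) - binomial_lpmf(n, N, theta - h)) / (2 * h)
assert isclose(fd, binomial_lpmf_grad(n, N, theta), rel_tol=1e-6)
```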

### Distribution statement

n ~ binomial(N, theta)

Increment target log probability density with binomial_lupmf(n | N, theta).

Available since 2.0

### Stan functions

real binomial_lpmf(ints n | ints N, reals theta)
The log binomial probability mass of n successes in N trials given chance of success theta

Available since 2.12

real binomial_lupmf(ints n | ints N, reals theta)
The log binomial probability mass of n successes in N trials given chance of success theta dropping constant additive terms

Available since 2.25

real binomial_cdf(ints n | ints N, reals theta)
The binomial cumulative distribution function of n successes in N trials given chance of success theta

Available since 2.0

real binomial_lcdf(ints n | ints N, reals theta)
The log of the binomial cumulative distribution function of n successes in N trials given chance of success theta

Available since 2.12

real binomial_lccdf(ints n | ints N, reals theta)
The log of the binomial complementary cumulative distribution function of n successes in N trials given chance of success theta

Available since 2.12

R binomial_rng(ints N, reals theta)
Generate a binomial variate with N trials and chance of success theta; may only be used in transformed data and generated quantities blocks. For a description of argument and return types, see section vectorized PRNG functions.

Available since 2.18

## Binomial distribution, logit parameterization

Stan also provides a version of the binomial probability mass function with the chance of success parameterized on the unconstrained logistic (log-odds) scale.

### Probability mass function

Suppose $$N \in \mathbb{N}$$, $$\alpha \in \mathbb{R}$$, and $$n \in \{0,\ldots,N\}$$. Then $\begin{eqnarray*} \text{BinomialLogit}(n~|~N,\alpha) & = & \text{Binomial}(n~|~N,\text{logit}^{-1}(\alpha)) \\[6pt] & = & \binom{N}{n} \left( \text{logit}^{-1}(\alpha) \right)^{n} \left( 1 - \text{logit}^{-1}(\alpha) \right)^{N - n}. \end{eqnarray*}$

### Log probability mass function

$\begin{eqnarray*} \log \text{BinomialLogit}(n~|~N,\alpha) & = & \log \Gamma(N+1) - \log \Gamma(n + 1) - \log \Gamma(N - n + 1) \\[4pt] & & { } + n \log \text{logit}^{-1}(\alpha) + (N - n) \log \left( 1 - \text{logit}^{-1}(\alpha) \right). \end{eqnarray*}$

### Gradient of log probability mass function

$\begin{equation*} \frac{\partial}{\partial \alpha} \log \text{BinomialLogit}(n~|~N,\alpha) = n \, \text{logit}^{-1}(-\alpha) - (N - n) \, \text{logit}^{-1}(\alpha) \end{equation*}$
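Because $$\frac{d}{d\alpha}\,\text{logit}^{-1}(\alpha) = \text{logit}^{-1}(\alpha)\,\text{logit}^{-1}(-\alpha)$$, the gradient follows from the chain rule. A Python sketch (illustrative names, not Stan code) verifies it against a finite difference:

```python
from math import exp, lgamma, log, isclose

def inv_logit(x):
    return 1.0 / (1.0 + exp(-x))

def binomial_logit_lpmf(n, N, alpha):
    """log BinomialLogit(n | N, alpha) = log Binomial(n | N, inv_logit(alpha))."""
    theta = inv_logit(alpha)
    return (lgamma(N + 1) - lgamma(n + 1) - lgamma(N - n + 1)
            + n * log(theta) + (N - n) * log(1 - theta))

def binomial_logit_grad(n, N, alpha):
    """d/d alpha: n * inv_logit(-alpha) - (N - n) * inv_logit(alpha)."""
    return n * inv_logit(-alpha) - (N - n) * inv_logit(alpha)

n, N, alpha, h = 7, 20, 0.3, 1e-6
fd = (binomial_logit_lpmf(n, N, alpha + h)
      - binomial_logit_lpmf(n, N, alpha - h)) / (2 * h)
assert isclose(fd, binomial_logit_grad(n, N, alpha), rel_tol=1e-6)
```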

### Distribution statement

n ~ binomial_logit(N, alpha)

Increment target log probability density with binomial_logit_lupmf(n | N, alpha).

Available since 2.0

### Stan functions

real binomial_logit_lpmf(ints n | ints N, reals alpha)
The log binomial probability mass of n successes in N trials given logit-scaled chance of success alpha

Available since 2.12

real binomial_logit_lupmf(ints n | ints N, reals alpha)
The log binomial probability mass of n successes in N trials given logit-scaled chance of success alpha dropping constant additive terms

Available since 2.25

## Binomial-logit generalized linear model (logistic regression)

Stan also supplies a single function for a generalized linear model with binomial distribution and logit link function, i.e., a function for logistic regression with aggregated outcomes. This provides a more efficient implementation of logistic regression than a manually written regression in terms of a binomial distribution and matrix multiplication.

### Probability mass function

Suppose $$N \in \mathbb{N}$$, $$x\in \mathbb{R}^{n\times m}, \alpha \in \mathbb{R}^n, \beta \in \mathbb{R}^m$$, and $$n \in \{0,\ldots,N\}$$. Then \begin{align*} &\text{BinomialLogitGLM}(n~|~N, x, \alpha, \beta) = \text{Binomial}(n~|~N,\text{logit}^{-1}(\alpha_i + x_i \cdot \beta)) \\ &= \binom{N}{n} \left( \text{logit}^{-1}(\alpha_i + \sum_{1\leq j\leq m}x_{ij}\cdot \beta_j) \right)^{n} \left( 1 - \text{logit}^{-1}(\alpha_i + \sum_{1\leq j\leq m}x_{ij}\cdot \beta_j) \right)^{N - n}. \end{align*}
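The GLM form is a sum of per-row binomial log PMFs with success probability $$\text{logit}^{-1}(\alpha_i + x_i \cdot \beta)$$. A Python sketch (illustrative names, plain lists standing in for Stan matrices):

```python
from math import exp, lgamma, log, isclose

def inv_logit(x):
    return 1.0 / (1.0 + exp(-x))

def binomial_lpmf(n, N, theta):
    return (lgamma(N + 1) - lgamma(n + 1) - lgamma(N - n + 1)
            + n * log(theta) + (N - n) * log(1 - theta))

def binomial_logit_glm_lpmf(n, N, x, alpha, beta):
    """Sum of per-row binomial log PMFs with theta_i = inv_logit(alpha_i + x_i . beta)."""
    total = 0.0
    for i, row in enumerate(x):
        eta = alpha[i] + sum(xij * bj for xij, bj in zip(row, beta))
        total += binomial_lpmf(n[i], N[i], inv_logit(eta))
    return total

# Two observations, two predictors.
x = [[1.0, -0.5], [0.2, 0.8]]
lp = binomial_logit_glm_lpmf([3, 5], [10, 10], x, [0.1, -0.2], [0.4, 0.3])
assert lp < 0.0
```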

### Distribution statement

n ~ binomial_logit_glm(N, x, alpha, beta)

Increment target log probability density with binomial_logit_glm_lupmf(n | N, x, alpha, beta).

Available since 2.34

### Stan functions

real binomial_logit_glm_lpmf(int n | int N, matrix x, real alpha, vector beta)
The log binomial probability mass of n given N trials and chance of success inv_logit(alpha + x * beta).

Available since 2.34

real binomial_logit_glm_lupmf(int n | int N, matrix x, real alpha, vector beta)
The log binomial probability mass of n given N trials and chance of success inv_logit(alpha + x * beta) dropping constant additive terms.

Available since 2.34

real binomial_logit_glm_lpmf(int n | int N, matrix x, vector alpha, vector beta)
The log binomial probability mass of n given N trials and chance of success inv_logit(alpha + x * beta).

Available since 2.34

real binomial_logit_glm_lupmf(int n | int N, matrix x, vector alpha, vector beta)
The log binomial probability mass of n given N trials and chance of success inv_logit(alpha + x * beta) dropping constant additive terms.

Available since 2.34

real binomial_logit_glm_lpmf(array[] int n | array[] int N, row_vector x, real alpha, vector beta)
The log binomial probability mass of n given N trials and chance of success inv_logit(alpha + x * beta).

Available since 2.34

real binomial_logit_glm_lupmf(array[] int n | array[] int N, row_vector x, real alpha, vector beta)
The log binomial probability mass of n given N trials and chance of success inv_logit(alpha + x * beta) dropping constant additive terms.

Available since 2.34

real binomial_logit_glm_lpmf(array[] int n | array[] int N, row_vector x, vector alpha, vector beta)
The log binomial probability mass of n given N trials and chance of success inv_logit(alpha + x * beta).

Available since 2.34

real binomial_logit_glm_lupmf(array[] int n | array[] int N, row_vector x, vector alpha, vector beta)
The log binomial probability mass of n given N trials and chance of success inv_logit(alpha + x * beta) dropping constant additive terms.

Available since 2.34

real binomial_logit_glm_lpmf(array[] int n | array[] int N, matrix x, real alpha, vector beta)
The log binomial probability mass of n given N trials and chance of success inv_logit(alpha + x * beta).

Available since 2.34

real binomial_logit_glm_lupmf(array[] int n | array[] int N, matrix x, real alpha, vector beta)
The log binomial probability mass of n given N trials and chance of success inv_logit(alpha + x * beta) dropping constant additive terms.

Available since 2.34

real binomial_logit_glm_lpmf(array[] int n | array[] int N, matrix x, vector alpha, vector beta)
The log binomial probability mass of n given N trials and chance of success inv_logit(alpha + x * beta).

Available since 2.34

real binomial_logit_glm_lupmf(array[] int n | array[] int N, matrix x, vector alpha, vector beta)
The log binomial probability mass of n given N trials and chance of success inv_logit(alpha + x * beta) dropping constant additive terms.

Available since 2.34

## Beta-binomial distribution

### Probability mass function

If $$N \in \mathbb{N}$$, $$\alpha \in \mathbb{R}^+$$, and $$\beta \in \mathbb{R}^+$$, then for $$n \in \{0,\ldots,N\}$$, $\begin{equation*} \text{BetaBinomial}(n~|~N,\alpha,\beta) = \binom{N}{n} \frac{\mathrm{B}(n+\alpha, N -n + \beta)}{\mathrm{B}(\alpha,\beta)}, \end{equation*}$ where the beta function $$\mathrm{B}(u,v)$$ is defined for $$u \in \mathbb{R}^+$$ and $$v \in \mathbb{R}^+$$ by $\begin{equation*} \mathrm{B}(u,v) = \frac{\Gamma(u) \ \Gamma(v)}{\Gamma(u + v)}. \end{equation*}$
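The PMF can be evaluated stably on the log scale using `lgamma` for the beta functions and binomial coefficient. A Python sketch (illustrative names, not Stan code) that also checks normalization over the support:

```python
from math import lgamma, exp, isclose

def lbeta(u, v):
    """log B(u, v) = log Gamma(u) + log Gamma(v) - log Gamma(u + v)."""
    return lgamma(u) + lgamma(v) - lgamma(u + v)

def beta_binomial_lpmf(n, N, alpha, beta):
    """log C(N, n) + log B(n + alpha, N - n + beta) - log B(alpha, beta)."""
    return (lgamma(N + 1) - lgamma(n + 1) - lgamma(N - n + 1)
            + lbeta(n + alpha, N - n + beta) - lbeta(alpha, beta))

# The PMF should sum to one over the support {0, ..., N}.
total = sum(exp(beta_binomial_lpmf(k, 12, 2.5, 3.5)) for k in range(13))
assert isclose(total, 1.0, rel_tol=1e-12)
```

With $$\alpha = \beta = 1$$ the beta-binomial reduces to a discrete uniform distribution on $$\{0,\ldots,N\}$$, which provides a second quick check.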

### Distribution statement

n ~ beta_binomial(N, alpha, beta)

Increment target log probability density with beta_binomial_lupmf(n | N, alpha, beta).

Available since 2.0

### Stan functions

real beta_binomial_lpmf(ints n | ints N, reals alpha, reals beta)
The log beta-binomial probability mass of n successes in N trials given prior success count (plus one) of alpha and prior failure count (plus one) of beta

Available since 2.12

real beta_binomial_lupmf(ints n | ints N, reals alpha, reals beta)
The log beta-binomial probability mass of n successes in N trials given prior success count (plus one) of alpha and prior failure count (plus one) of beta dropping constant additive terms

Available since 2.25

real beta_binomial_cdf(ints n | ints N, reals alpha, reals beta)
The beta-binomial cumulative distribution function of n successes in N trials given prior success count (plus one) of alpha and prior failure count (plus one) of beta

Available since 2.0

real beta_binomial_lcdf(ints n | ints N, reals alpha, reals beta)
The log of the beta-binomial cumulative distribution function of n successes in N trials given prior success count (plus one) of alpha and prior failure count (plus one) of beta

Available since 2.12

real beta_binomial_lccdf(ints n | ints N, reals alpha, reals beta)
The log of the beta-binomial complementary cumulative distribution function of n successes in N trials given prior success count (plus one) of alpha and prior failure count (plus one) of beta

Available since 2.12

R beta_binomial_rng(ints N, reals alpha, reals beta)
Generate a beta-binomial variate with N trials, prior success count (plus one) of alpha, and prior failure count (plus one) of beta; may only be used in transformed data and generated quantities blocks. For a description of argument and return types, see section vectorized PRNG functions.

Available since 2.18

## Hypergeometric distribution

### Probability mass function

If $$a \in \mathbb{N}$$, $$b \in \mathbb{N}$$, and $$N \in \{0,\ldots,a+b\}$$, then for $$n \in \{\max(0,N-b),\ldots,\min(a,N)\}$$, $\begin{equation*} \text{Hypergeometric}(n~|~N,a,b) = \frac{\normalsize{\binom{a}{n} \binom{b}{N - n}}} {\normalsize{\binom{a + b}{N}}}. \end{equation*}$
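Note the restricted support $$\{\max(0,N-b),\ldots,\min(a,N)\}$$. A Python sketch (illustrative names, not Stan code) computing the log PMF from log binomial coefficients and checking normalization over that support:

```python
from math import lgamma, exp, isclose

def lchoose(a, b):
    """log C(a, b) via log-gamma."""
    return lgamma(a + 1) - lgamma(b + 1) - lgamma(a - b + 1)

def hypergeometric_lpmf(n, N, a, b):
    """log [ C(a, n) * C(b, N - n) / C(a + b, N) ]."""
    return lchoose(a, n) + lchoose(b, N - n) - lchoose(a + b, N)

# Sum over the support {max(0, N - b), ..., min(a, N)} should be one.
N, a, b = 6, 5, 4
lo, hi = max(0, N - b), min(a, N)
total = sum(exp(hypergeometric_lpmf(k, N, a, b)) for k in range(lo, hi + 1))
assert isclose(total, 1.0, rel_tol=1e-12)
```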

### Distribution statement

n ~ hypergeometric(N, a, b)

Increment target log probability density with hypergeometric_lupmf(n | N, a, b).

Available since 2.0

### Stan functions

real hypergeometric_lpmf(int n | int N, int a, int b)
The log hypergeometric probability mass of n successes in N trials given total success count of a and total failure count of b

Available since 2.12

real hypergeometric_lupmf(int n | int N, int a, int b)
The log hypergeometric probability mass of n successes in N trials given total success count of a and total failure count of b dropping constant additive terms

Available since 2.25

int hypergeometric_rng(int N, int a, int b)
Generate a hypergeometric variate with N trials, total success count of a, and total failure count of b; may only be used in transformed data and generated quantities blocks

Available since 2.18

## Categorical distribution

### Probability mass functions

If $$N \in \mathbb{N}$$, $$N > 0$$, and if $$\theta \in \mathbb{R}^N$$ forms an $$N$$-simplex (i.e., has nonnegative entries summing to one), then for $$y \in \{1,\ldots,N\}$$, $\begin{equation*} \text{Categorical}(y~|~\theta) = \theta_y. \end{equation*}$ In addition, Stan provides a log-odds scaled categorical distribution, $\begin{equation*} \text{CategoricalLogit}(y~|~\beta) = \text{Categorical}(y~|~\text{softmax}(\beta)). \end{equation*}$ See the definition of softmax for the definition of the softmax function.
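The logit-parameterized form is the plain categorical PMF applied to the softmax of the log-odds vector. A Python sketch (illustrative names, not Stan code) demonstrating the equivalence:

```python
from math import exp, log, isclose

def softmax(beta):
    m = max(beta)                       # subtract max for numerical stability
    exps = [exp(b - m) for b in beta]
    s = sum(exps)
    return [e / s for e in exps]

def categorical_lpmf(y, theta):
    """log Categorical(y | theta) = log theta_y (y is 1-based)."""
    return log(theta[y - 1])

def categorical_logit_lpmf(y, beta):
    """log CategoricalLogit(y | beta) = log softmax(beta)_y."""
    return categorical_lpmf(y, softmax(beta))

beta = [0.5, -1.0, 2.0]
# The logit parameterization agrees with applying softmax explicitly.
assert isclose(categorical_logit_lpmf(3, beta), log(softmax(beta)[2]), rel_tol=1e-12)
```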

### Distribution statement

y ~ categorical(theta)

Increment target log probability density with categorical_lupmf(y | theta).

Available since 2.0

### Distribution statement

y ~ categorical_logit(beta)

Increment target log probability density with categorical_logit_lupmf(y | beta).

Available since 2.4

### Stan functions

All of the categorical distributions are vectorized so that the outcome y can be a single integer (type int) or an array of integers (type array[] int).

real categorical_lpmf(ints y | vector theta)
The log categorical probability mass function with outcome(s) y in $$1:N$$ given $$N$$-vector of outcome probabilities theta. The parameter theta must have non-negative entries that sum to one, but it need not be a variable declared as a simplex.

Available since 2.12

real categorical_lupmf(ints y | vector theta)
The log categorical probability mass function with outcome(s) y in $$1:N$$ given $$N$$-vector of outcome probabilities theta dropping constant additive terms. The parameter theta must have non-negative entries that sum to one, but it need not be a variable declared as a simplex.

Available since 2.25

real categorical_logit_lpmf(ints y | vector beta)
The log categorical probability mass function with outcome(s) y in $$1:N$$ given log-odds of outcomes beta.

Available since 2.12

real categorical_logit_lupmf(ints y | vector beta)
The log categorical probability mass function with outcome(s) y in $$1:N$$ given log-odds of outcomes beta dropping constant additive terms.

Available since 2.25

int categorical_rng(vector theta)
Generate a categorical variate with $$N$$-simplex distribution parameter theta; may only be used in transformed data and generated quantities blocks

Available since 2.0

int categorical_logit_rng(vector beta)
Generate a categorical variate with outcome in range $$1:N$$ from log-odds vector beta; may only be used in transformed data and generated quantities blocks

Available since 2.16

## Categorical logit generalized linear model (softmax regression)

Stan also supplies a single function for a generalized linear model with categorical distribution and logit link function, i.e., a function for softmax regression. This provides a more efficient implementation of softmax regression than a manually written regression in terms of a categorical distribution and matrix multiplication.

Note that the implementation does not put any restrictions on the coefficient matrix $$\beta$$. It is up to the user to use a reference category, a suitable prior, or some other means of ensuring identifiability. See Multi-logit in the Stan User’s Guide.

### Probability mass functions

If $$N,M,K \in \mathbb{N}$$, $$N,M,K > 0$$, and if $$x\in \mathbb{R}^{M\times K}, \alpha \in \mathbb{R}^N, \beta\in \mathbb{R}^{K\times N}$$, then for $$y \in \{1,\ldots,N\}^M$$, $\begin{equation*} \begin{split} \text{CategoricalLogitGLM}(y~|~x,\alpha,\beta) & = \prod_{1\leq i \leq M}\text{CategoricalLogit}(y_i~|~\alpha+x_i\cdot\beta) \\[8pt] & = \prod_{1\leq i \leq M}\text{Categorical}(y_i~|~\text{softmax}(\alpha+x_i\cdot\beta)). \end{split} \end{equation*}$ See the definition of softmax for the definition of the softmax function.
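Each factor applies softmax to a vector of per-category logits $$\alpha + x_i \cdot \beta$$. A Python sketch (illustrative names, plain nested lists standing in for Stan matrices):

```python
from math import exp, log, isclose

def softmax(v):
    m = max(v)                          # subtract max for numerical stability
    exps = [exp(x - m) for x in v]
    s = sum(exps)
    return [e / s for e in exps]

def categorical_logit_glm_lpmf(y, x, alpha, beta):
    """Sum over rows of log softmax(alpha + x_i . beta)_{y_i}; beta is K x N."""
    total = 0.0
    for yi, row in zip(y, x):
        logits = [a + sum(xk * beta[k][j] for k, xk in enumerate(row))
                  for j, a in enumerate(alpha)]
        total += log(softmax(logits)[yi - 1])
    return total

x = [[1.0, 0.0], [0.5, -0.5]]                 # M=2 rows, K=2 predictors
beta = [[0.2, -0.1, 0.0], [0.3, 0.0, -0.2]]   # K=2 by N=3 coefficients
lp = categorical_logit_glm_lpmf([1, 3], x, [0.0, 0.1, -0.1], beta)
assert lp < 0.0
```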

### Distribution statement

y ~ categorical_logit_glm(x, alpha, beta)

Increment target log probability density with categorical_logit_glm_lupmf(y | x, alpha, beta).

Available since 2.23

### Stan functions

real categorical_logit_glm_lpmf(int y | row_vector x, vector alpha, matrix beta)
The log categorical probability mass function with outcome y in $$1:N$$ given $$N$$-vector of log-odds of outcomes alpha + x * beta.

Available since 2.23

real categorical_logit_glm_lupmf(int y | row_vector x, vector alpha, matrix beta)
The log categorical probability mass function with outcome y in $$1:N$$ given $$N$$-vector of log-odds of outcomes alpha + x * beta dropping constant additive terms.

Available since 2.25

real categorical_logit_glm_lpmf(int y | matrix x, vector alpha, matrix beta)
The log categorical probability mass function with outcomes y in $$1:N$$ given $$N$$-vector of log-odds of outcomes alpha + x * beta.

Available since 2.23

real categorical_logit_glm_lupmf(int y | matrix x, vector alpha, matrix beta)
The log categorical probability mass function with outcomes y in $$1:N$$ given $$N$$-vector of log-odds of outcomes alpha + x * beta dropping constant additive terms.

Available since 2.25

real categorical_logit_glm_lpmf(array[] int y | row_vector x, vector alpha, matrix beta)
The log categorical probability mass function with outcomes y in $$1:N$$ given $$N$$-vector of log-odds of outcomes alpha + x * beta.

Available since 2.23

real categorical_logit_glm_lupmf(array[] int y | row_vector x, vector alpha, matrix beta)
The log categorical probability mass function with outcomes y in $$1:N$$ given $$N$$-vector of log-odds of outcomes alpha + x * beta dropping constant additive terms.

Available since 2.25

real categorical_logit_glm_lpmf(array[] int y | matrix x, vector alpha, matrix beta)
The log categorical probability mass function with outcomes y in $$1:N$$ given $$N$$-vector of log-odds of outcomes alpha + x * beta.

Available since 2.23

real categorical_logit_glm_lupmf(array[] int y | matrix x, vector alpha, matrix beta)
The log categorical probability mass function with outcomes y in $$1:N$$ given $$N$$-vector of log-odds of outcomes alpha + x * beta dropping constant additive terms.

Available since 2.25

## Discrete range distribution

### Probability mass functions

If $$l, u \in \mathbb{Z}$$ are lower and upper bounds ($$l \le u$$), then for any integer $$y \in \{l,\ldots,u\}$$, $\begin{equation*} \text{DiscreteRange}(y ~|~ l, u) = \frac{1}{u - l + 1}. \end{equation*}$
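Both the PMF and CDF have simple closed forms. A Python sketch (illustrative names, not Stan code):

```python
from math import log, isclose

def discrete_range_lpmf(y, l, u):
    """Uniform mass 1 / (u - l + 1) on {l, ..., u}, on the log scale."""
    if y < l or y > u:
        return float("-inf")
    return -log(u - l + 1)

def discrete_range_cdf(y, l, u):
    """P(Y <= y) = (y - l + 1) / (u - l + 1), clipped to [0, 1]."""
    if y < l:
        return 0.0
    if y >= u:
        return 1.0
    return (y - l + 1) / (u - l + 1)

assert isclose(discrete_range_cdf(3, 1, 5), 3 / 5)
```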

### Distribution statement

y ~ discrete_range(l, u)

Increment the target log probability density with discrete_range_lupmf(y | l, u) dropping constant additive terms.

Available since 2.26

### Stan functions

All of the discrete range distributions are vectorized so that the outcome y and the bounds l, u can be a single integer (type int) or an array of integers (type array[] int).

real discrete_range_lpmf(ints y | ints l, ints u)
The log probability mass function with outcome(s) y in $$l:u$$.

Available since 2.26

real discrete_range_lupmf(ints y | ints l, ints u)
The log probability mass function with outcome(s) y in $$l:u$$ dropping constant additive terms.

Available since 2.26

real discrete_range_cdf(ints y | ints l, ints u)
The discrete range cumulative distribution function for the given y, lower and upper bounds.

Available since 2.26

real discrete_range_lcdf(ints y | ints l, ints u)
The log of the discrete range cumulative distribution function for the given y, lower and upper bounds.

Available since 2.26

real discrete_range_lccdf(ints y | ints l, ints u)
The log of the discrete range complementary cumulative distribution function for the given y, lower and upper bounds.

Available since 2.26

ints discrete_range_rng(ints l, ints u)
Generate a discrete variate between the given lower and upper bounds; may only be used in transformed data and generated quantities blocks. For a description of argument and return types, see section vectorized PRNG functions.

Available since 2.26

## Ordered logistic distribution

### Probability mass function

If $$K \in \mathbb{N}$$ with $$K > 2$$, $$c \in \mathbb{R}^{K-1}$$ such that $$c_k < c_{k+1}$$ for $$k \in \{1,\ldots,K-2\}$$, and $$\eta \in \mathbb{R}$$, then for $$k \in \{1,\ldots,K\}$$, $\begin{equation*} \text{OrderedLogistic}(k~|~\eta,c) = \left\{ \begin{array}{ll} 1 - \text{logit}^{-1}(\eta - c_1) & \text{if } k = 1, \\[4pt] \text{logit}^{-1}(\eta - c_{k-1}) - \text{logit}^{-1}(\eta - c_{k}) & \text{if } 1 < k < K, \text{and} \\[4pt] \text{logit}^{-1}(\eta - c_{K-1}) - 0 & \text{if } k = K. \end{array} \right. \end{equation*}$ The $$k=K$$ case is written with the redundant subtraction of zero to illustrate the parallelism of the cases; the $$k=1$$ and $$k=K$$ edge cases can be subsumed into the general definition by setting $$c_0 = -\infty$$ and $$c_K = +\infty$$ with $$\text{logit}^{-1}(-\infty) = 0$$ and $$\text{logit}^{-1}(\infty) = 1$$.
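The category probabilities are differences of inverse-logit terms at consecutive cutpoints, so they are nonnegative (given ordered cutpoints) and telescope to one. A Python sketch (illustrative names, not Stan code):

```python
from math import exp, isclose

def inv_logit(x):
    return 1.0 / (1.0 + exp(-x))

def ordered_logistic_pmf(k, eta, c):
    """Probability of category k in 1..K given K-1 ordered cutpoints c."""
    K = len(c) + 1
    if k == 1:
        return 1.0 - inv_logit(eta - c[0])
    if k == K:
        return inv_logit(eta - c[K - 2])
    return inv_logit(eta - c[k - 2]) - inv_logit(eta - c[k - 1])

# The category probabilities should sum to one.
eta, c = 0.7, [-1.0, 0.5, 2.0]
total = sum(ordered_logistic_pmf(k, eta, c) for k in range(1, len(c) + 2))
assert isclose(total, 1.0, rel_tol=1e-12)
```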

### Distribution statement

k ~ ordered_logistic(eta, c)

Increment target log probability density with ordered_logistic_lupmf(k | eta, c).

Available since 2.0

### Stan functions

real ordered_logistic_lpmf(ints k | vector eta, vectors c)
The log ordered logistic probability mass of k given linear predictors eta, and cutpoints c.

Available since 2.18

real ordered_logistic_lupmf(ints k | vector eta, vectors c)
The log ordered logistic probability mass of k given linear predictors eta, and cutpoints c dropping constant additive terms.

Available since 2.25

int ordered_logistic_rng(real eta, vector c)
Generate an ordered logistic variate with linear predictor eta and cutpoints c; may only be used in transformed data and generated quantities blocks

Available since 2.0

## Ordered logistic generalized linear model (ordinal regression)

### Probability mass function

If $$N,M,K \in \mathbb{N}$$ with $$N, M > 0$$, $$K > 2$$, $$c \in \mathbb{R}^{K-1}$$ such that $$c_k < c_{k+1}$$ for $$k \in \{1,\ldots,K-2\}$$, and $$x\in \mathbb{R}^{N\times M}, \beta\in \mathbb{R}^M$$, then for $$y \in \{1,\ldots,K\}^N$$, $\begin{equation*} \begin{split} & \text{OrderedLogisticGLM}(y~|~x,\beta,c) \\[8pt] & = \prod_{1\leq i \leq N}\text{OrderedLogistic}(y_i~|~x_i\cdot \beta,c) \\ & = \prod_{1\leq i \leq N} \left\{ \begin{array}{ll} 1 - \text{logit}^{-1}(x_i\cdot \beta - c_1) & \text{if } y_i = 1, \\[4pt] \text{logit}^{-1}(x_i\cdot \beta - c_{y_i-1}) - \text{logit}^{-1}(x_i\cdot \beta - c_{y_i}) & \text{if } 1 < y_i < K, \text{and} \\[4pt] \text{logit}^{-1}(x_i\cdot \beta - c_{K-1}) - 0 & \text{if } y_i = K. \end{array} \right. \end{split} \end{equation*}$ The $$y_i=K$$ case is written with the redundant subtraction of zero to illustrate the parallelism of the cases; the $$y_i=1$$ and $$y_i=K$$ edge cases can be subsumed into the general definition by setting $$c_0 = -\infty$$ and $$c_K = +\infty$$ with $$\text{logit}^{-1}(-\infty) = 0$$ and $$\text{logit}^{-1}(\infty) = 1$$.

### Distribution statement

y ~ ordered_logistic_glm(x, beta, c)

Increment target log probability density with ordered_logistic_glm_lupmf(y | x, beta, c).

Available since 2.23

### Stan functions

real ordered_logistic_glm_lpmf(int y | row_vector x, vector beta, vector c)
The log ordered logistic probability mass of y, given linear predictors x * beta, and cutpoints c. The cutpoints c must be ordered.

Available since 2.23

real ordered_logistic_glm_lupmf(int y | row_vector x, vector beta, vector c)
The log ordered logistic probability mass of y, given linear predictors x * beta, and cutpoints c dropping constant additive terms. The cutpoints c must be ordered.

Available since 2.25

real ordered_logistic_glm_lpmf(int y | matrix x, vector beta, vector c)
The log ordered logistic probability mass of y, given linear predictors x * beta, and cutpoints c. The cutpoints c must be ordered.

Available since 2.23

real ordered_logistic_glm_lupmf(int y | matrix x, vector beta, vector c)
The log ordered logistic probability mass of y, given linear predictors x * beta, and cutpoints c dropping constant additive terms. The cutpoints c must be ordered.

Available since 2.25

real ordered_logistic_glm_lpmf(array[] int y | row_vector x, vector beta, vector c)
The log ordered logistic probability mass of y, given linear predictors x * beta, and cutpoints c. The cutpoints c must be ordered.

Available since 2.23

real ordered_logistic_glm_lupmf(array[] int y | row_vector x, vector beta, vector c)
The log ordered logistic probability mass of y, given linear predictors x * beta, and cutpoints c dropping constant additive terms. The cutpoints c must be ordered.

Available since 2.25

real ordered_logistic_glm_lpmf(array[] int y | matrix x, vector beta, vector c)
The log ordered logistic probability mass of y, given linear predictors x * beta, and cutpoints c. The cutpoints c must be ordered.

Available since 2.23

real ordered_logistic_glm_lupmf(array[] int y | matrix x, vector beta, vector c)
The log ordered logistic probability mass of y, given linear predictors x * beta, and cutpoints c dropping constant additive terms. The cutpoints c must be ordered.

Available since 2.25

## Ordered probit distribution

### Probability mass function

If $$K \in \mathbb{N}$$ with $$K > 2$$, $$c \in \mathbb{R}^{K-1}$$ such that $$c_k < c_{k+1}$$ for $$k \in \{1,\ldots,K-2\}$$, and $$\eta \in \mathbb{R}$$, then for $$k \in \{1,\ldots,K\}$$, $\begin{equation*} \text{OrderedProbit}(k~|~\eta,c) = \left\{ \begin{array}{ll} 1 - \Phi(\eta - c_1) & \text{if } k = 1, \\[4pt] \Phi(\eta - c_{k-1}) - \Phi(\eta - c_{k}) & \text{if } 1 < k < K, \text{and} \\[4pt] \Phi(\eta - c_{K-1}) - 0 & \text{if } k = K. \end{array} \right. \end{equation*}$ The $$k=K$$ case is written with the redundant subtraction of zero to illustrate the parallelism of the cases; the $$k=1$$ and $$k=K$$ edge cases can be subsumed into the general definition by setting $$c_0 = -\infty$$ and $$c_K = +\infty$$ with $$\Phi(-\infty) = 0$$ and $$\Phi(\infty) = 1$$.
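This is the ordered logistic form with the inverse logit replaced by the standard normal CDF $$\Phi$$. A Python sketch (illustrative names, not Stan code) using the error function to evaluate $$\Phi$$ and checking that the category probabilities sum to one:

```python
from math import erf, sqrt, isclose

def Phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def ordered_probit_pmf(k, eta, c):
    """Probability of category k in 1..K given K-1 ordered cutpoints c."""
    K = len(c) + 1
    if k == 1:
        return 1.0 - Phi(eta - c[0])
    if k == K:
        return Phi(eta - c[K - 2])
    return Phi(eta - c[k - 2]) - Phi(eta - c[k - 1])

eta, c = -0.3, [-1.5, 0.0, 1.2]
total = sum(ordered_probit_pmf(k, eta, c) for k in range(1, len(c) + 2))
assert isclose(total, 1.0, rel_tol=1e-12)
```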

### Distribution statement

k ~ ordered_probit(eta, c)

Increment target log probability density with ordered_probit_lupmf(k | eta, c).

Available since 2.19

### Stan functions

real ordered_probit_lpmf(ints k | vector eta, vectors c)
The log ordered probit probability mass of k given linear predictors eta, and cutpoints c.

Available since 2.18

real ordered_probit_lupmf(ints k | vector eta, vectors c)
The log ordered probit probability mass of k given linear predictors eta, and cutpoints c dropping constant additive terms.

Available since 2.25

real ordered_probit_lpmf(ints k | real eta, vectors c)
The log ordered probit probability mass of k given linear predictor eta, and cutpoints c.

Available since 2.19

real ordered_probit_lupmf(ints k | real eta, vectors c)
The log ordered probit probability mass of k given linear predictor eta, and cutpoints c dropping constant additive terms.

Available since 2.25

int ordered_probit_rng(real eta, vector c)
Generate an ordered probit variate with linear predictor eta and cutpoints c; may only be used in transformed data and generated quantities blocks

Available since 2.18