# Real-Valued Basic Functions

This chapter describes built-in functions that take zero or more real or integer arguments and return real values.

## Vectorization of real-valued functions

Although listed in this chapter, many of Stan’s built-in functions are vectorized so that they may be applied to any argument type. The vectorized form of these functions is not any faster than writing an explicit loop that iterates over the elements applying the function—it’s just easier to read and write and less error prone.

### Unary function vectorization

Many of Stan’s unary functions can be applied to any argument type. For example, the exponential function, `exp`, can be applied to `real` arguments or arrays of `real` arguments. Other than for integer arguments, the result type is the same as the argument type, including dimensionality and size. Integer arguments are first promoted to real values, but the result will still have the same dimensionality and size as the argument.

#### Real and real array arguments

When applied to a simple real value, the result is a real value. When applied to arrays, vectorized functions like `exp()` are defined elementwise. For example,

```
// declare some variables for arguments
real x0;
array[5] real x1;
array[4, 7] real x2;
// ...
// declare some variables for results
real y0;
array[5] real y1;
array[4, 7] real y2;
// ...
// calculate and assign results
y0 = exp(x0);
y1 = exp(x1);
y2 = exp(x2);
```

When `exp` is applied to an array, it applies elementwise. For example, the statement above,

`y2 = exp(x2);`

produces the same result for `y2` as the explicit loop

```
for (i in 1:4) {
  for (j in 1:7) {
    y2[i, j] = exp(x2[i, j]);
  }
}
```

#### Vector and matrix arguments

Vectorized functions also apply elementwise to vectors and matrices. For example,

```
vector[5] xv;
row_vector[7] xrv;
matrix[10, 20] xm;
vector[5] yv;
row_vector[7] yrv;
matrix[10, 20] ym;
yv = exp(xv);
yrv = exp(xrv);
ym = exp(xm);
```

Arrays of vectors and matrices work the same way. For example,

```
array[12] matrix[17, 93] u;
array[12] matrix[17, 93] z;
z = exp(u);
```

After this has been executed, `z[i, j, k]` will be equal to `exp(u[i, j, k])`.

#### Integer and integer array arguments

Integer arguments are promoted to real values in vectorized unary functions. Thus if `n` is of type `int`, `exp(n)` is of type `real`. Arrays work the same way, so if `n2` is a one-dimensional array of integers, then `exp(n2)` will be a one-dimensional array of reals with the same number of elements as `n2`. For example,

```
array[23] int n1;
array[23] real z1;
z1 = exp(n1);
```

It would be illegal to try to assign `exp(n1)` to an array of integers; the return type is a real array.

### Binary function vectorization

Like the unary functions, many of Stan’s binary functions have been vectorized and can be applied elementwise to combinations of scalar and container types.

#### Scalar and scalar array arguments

When applied to two scalar values, the result is a scalar value. When applied to two arrays, or to a combination of a scalar value and an array, vectorized functions like `pow()` are defined elementwise. For example,

```
// declare some variables for arguments
real x00;
real x01;
array[5] real x10;
array[5] real x11;
array[4, 7] real x20;
array[4, 7] real x21;
// ...
// declare some variables for results
real y0;
array[5] real y1;
array[4, 7] real y2;
// ...
// calculate and assign results
y0 = pow(x00, x01);
y1 = pow(x10, x11);
y2 = pow(x20, x21);
```

When `pow` is applied to two arrays, it applies elementwise. For example, the statement above,

`y2 = pow(x20, x21);`

produces the same result for `y2` as the explicit loop

```
for (i in 1:4) {
  for (j in 1:7) {
    y2[i, j] = pow(x20[i, j], x21[i, j]);
  }
}
```

Alternatively, if a combination of an array and a scalar is provided, the scalar value is broadcast and applied to each value of the array. For example, the following statement:

`y2 = pow(x20, x00);`

produces the same result for `y2` as the explicit loop:

```
for (i in 1:4) {
  for (j in 1:7) {
    y2[i, j] = pow(x20[i, j], x00);
  }
}
```

#### Vector and matrix arguments

Vectorized binary functions also apply elementwise to vectors and matrices, and to combinations of these with scalar values. For example,

```
real x00;
vector[5] xv00;
vector[5] xv01;
row_vector[7] xrv;
matrix[10, 20] xm;
vector[5] yv;
row_vector[7] yrv;
matrix[10, 20] ym;
yv = pow(xv00, xv01);
yrv = pow(xrv, x00);
ym = pow(x00, xm);
```

Arrays of vectors and matrices work the same way. For example,

```
array[12] matrix[17, 93] u;
array[12] matrix[17, 93] z;
z = pow(u, x00);
```

After this has been executed, `z[i, j, k]` will be equal to `pow(u[i, j, k], x00)`.

#### Input & return types

Vectorized binary functions require that both inputs, unless one is a scalar, be containers of the same type and size. For example, the following statements are legal:

```
vector[5] xv;
row_vector[7] xrv;
matrix[10, 20] xm;
vector[5] yv = pow(xv, xv);
row_vector[7] yrv = pow(xrv, xrv);
matrix[10, 20] ym = pow(xm, xm);
```

But the following statements are not:

```
vector[5] xv;
vector[7] xv2;
row_vector[5] xrv;
// Cannot mix different types
vector[5] yv = pow(xv, xrv);
// Cannot mix different sizes of the same type
vector[5] yv = pow(xv, xv2);
```

While the vectorized binary functions generally require the same input types, the one exception is binary functions that require one input to be an integer and the other to be a real (e.g., `bessel_first_kind`). For these functions, one argument can be a container of any type while the other can be an integer array, as long as the dimensions of both are the same. For example, the following statements are legal:

```
vector[5] xv;
matrix[5, 5] xm;
array[5] int xi;
array[5, 5] int xii;
vector[5] yv = bessel_first_kind(xi, xv);
matrix[5, 5] ym = bessel_first_kind(xii, xm);
```

Whereas these are not:

```
vector[5] xv;
matrix[5, 5] xm;
array[7] int xi;
// Dimensions of containers do not match
vector[5] yv = bessel_first_kind(xi, xv);
// Function requires first argument be an integer type
matrix[5, 5] ym = bessel_first_kind(xm, xm);
```

## Mathematical constants

Constants are represented as functions with no arguments and must be called as such. For instance, the mathematical constant \(\pi\) must be written in a Stan program as `pi()`.

`real` **`pi`**`()`

\(\pi\), the ratio of a circle’s circumference to its diameter

*Available since 2.0*

`real` **`e`**`()`

\(e\), the base of the natural logarithm

*Available since 2.0*

`real` **`sqrt2`**`()`

The square root of 2

*Available since 2.0*

`real` **`log2`**`()`

The natural logarithm of 2

*Available since 2.0*

`real` **`log10`**`()`

The natural logarithm of 10

*Available since 2.0*
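Because constants are nullary functions, they must always be written with parentheses. A minimal sketch (the variable names are illustrative):

```
transformed data {
  real tau = 2 * pi();   // constants are called like functions
  real c = sqrt2();      // the square root of 2
  // real bad = pi;      // illegal: pi must be written as pi()
}
```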

## Special values

`real` **`not_a_number`**`()`

Not-a-number, a special non-finite real value returned to signal an error

*Available since 2.0*

`real` **`positive_infinity`**`()`

Positive infinity, a special non-finite real value larger than all finite numbers

*Available since 2.0*

`real` **`negative_infinity`**`()`

Negative infinity, a special non-finite real value smaller than all finite numbers

*Available since 2.0*

`real` **`machine_precision`**`()`

The smallest number \(x\) such that \((x + 1) \neq 1\) in floating-point arithmetic on the current hardware platform

*Available since 2.0*
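These functions are useful, for example, for initializing running bounds or for flagging undefined results. A small sketch (the variable names are illustrative):

```
transformed data {
  real best = negative_infinity();  // safe starting point for a running maximum
  real eps = machine_precision();   // on the order of 1e-16 for IEEE doubles
}
```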

## Log probability function

The basic purpose of a Stan program is to compute a log probability function and its derivatives. The log probability function in a Stan model outputs the log density on the unconstrained scale. A log probability accumulator starts at zero and is then incremented in various ways by a Stan program. The parameters are first transformed from unconstrained to constrained, and the log Jacobian determinant is added to the log probability accumulator. Then the model block is executed on the constrained parameters, with each sampling statement (`~`) and log probability increment statement (`target +=`) adding to the accumulator. At the end of the model block execution, the value of the log probability accumulator is the log probability value returned by the Stan program.

Stan provides a special built-in function `target()` that takes no arguments and returns the current value of the log probability accumulator. This function is primarily useful for debugging, where, for instance, it may be used with a print statement to display the log probability accumulator at various stages of execution to see where it becomes ill defined.

`real` **`target`**`()`

Return the current value of the log probability accumulator.

*Available since 2.10*

`target` acts like a function ending in `_lp`, meaning that it may only be used in the model block.

## Logical functions

Like C++, BUGS, and R, Stan uses 0 to encode false, and 1 to encode true. Stan supports the usual boolean comparison operations and boolean operators. These all have the same syntax and precedence as in C++; for the full list of operators and precedences, see the reference manual.

### Comparison operators

All comparison operators return boolean values, either 0 or 1. Each operator has two signatures, one for integer comparisons and one for floating-point comparisons. Comparing an integer and real value is carried out by first promoting the integer value.
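A small sketch of the promotion behavior (the variable names are illustrative):

```
int a = 1 < 2;     // 1: integer comparison
int b = 1 == 1.0;  // 1: the integer 1 is promoted to 1.0 before comparing
int c = 2.5 > 3;   // 0: the integer 3 is promoted to 3.0
```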

`int` **`operator<`**`(int x, int y)`

`int` **`operator<`**`(real x, real y)`

Return 1 if x is less than y and 0 otherwise. \[\begin{equation*} \text{operator<}(x,y)
= \begin{cases} 1 & \text{if $x < y$} \\ 0 & \text{otherwise}
\end{cases} \end{equation*}\]

*Available since 2.0*

`int` **`operator<=`**`(int x, int y)`

`int` **`operator<=`**`(real x, real y)`

Return 1 if x is less than or equal to y and 0 otherwise. \[\begin{equation*}
\text{operator<=}(x,y) = \begin{cases} 1 & \text{if $x \leq y$} \\ 0 & \text{otherwise} \end{cases}
\end{equation*}\]

*Available since 2.0*

`int` **`operator>`**`(int x, int y)`

`int` **`operator>`**`(real x, real y)`

Return 1 if x is greater than y and 0 otherwise. \[\begin{equation*}
\text{operator>}(x,y) = \begin{cases} 1 & \text{if $x > y$} \\ 0 & \text{otherwise} \end{cases}
\end{equation*}\]

*Available since 2.0*

`int` **`operator>=`**`(int x, int y)`

`int` **`operator>=`**`(real x, real y)`

Return 1 if x is greater than or equal to y and 0 otherwise. \[\begin{equation*}
\text{operator>=}(x,y) = \begin{cases} 1 & \text{if $x \geq y$} \\ 0 & \text{otherwise} \end{cases}
\end{equation*}\]

*Available since 2.0*

`int` **`operator==`**`(int x, int y)`

`int` **`operator==`**`(real x, real y)`

Return 1 if x is equal to y and 0 otherwise. \[\begin{equation*}
\text{operator==}(x,y) = \begin{cases} 1 & \text{if $x = y$} \\ 0 & \text{otherwise} \end{cases}
\end{equation*}\]

*Available since 2.0*

`int` **`operator!=`**`(int x, int y)`

`int` **`operator!=`**`(real x, real y)`

Return 1 if x is not equal to y and 0 otherwise. \[\begin{equation*}
\text{operator!=}(x,y) = \begin{cases} 1 & \text{if $x \neq y$} \\ 0 &
\text{otherwise} \end{cases} \end{equation*}\]

*Available since 2.0*

### Boolean operators

Boolean operators return either 0 for false or 1 for true. Inputs may be any real or integer values, with non-zero values treated as true and zero values treated as false. These operators have the usual precedences, with negation (not) binding most tightly, conjunction next, and disjunction weakest; all of the operators bind more tightly than the comparisons. Thus an expression such as `!a && b` is interpreted as `(!a) && b`, and `a < b || c >= d && e != f` as `(a < b) || ((c >= d) && (e != f))`.

`int` **`operator!`**`(int x)`

Return 1 if x is zero and 0 otherwise. \[\begin{equation*} \text{operator!}(x) =
\begin{cases} 0 & \text{if $x \neq 0$} \\ 1 & \text{if $x = 0$}
\end{cases} \end{equation*}\]

*Available since 2.0*

`int` **`operator!`**`(real x)`

Return 1 if x is zero and 0 otherwise. \[\begin{equation*} \text{operator!}(x) =
\begin{cases} 0 & \text{if $x \neq 0.0$} \\ 1 & \text{if $x = 0.0$}
\end{cases} \end{equation*}\] **Deprecated**; use `operator==` instead.

*Available since 2.0, deprecated in 2.31*

`int` **`operator&&`**`(int x, int y)`

Return 1 if x is unequal to 0 and y is unequal to 0. \[\begin{equation*} \mathrm{operator\&\&}(x,y) = \begin{cases} 1 & \text{if $x \neq 0$} \text{ and } y \neq 0\\ 0 & \text{otherwise} \end{cases} \end{equation*}\]

*Available since 2.0*

`int` **`operator&&`**`(real x, real y)`

Return 1 if x is unequal to 0.0 and y is unequal to 0.0. \[\begin{equation*}
\mathrm{operator\&\&}(x,y) = \begin{cases} 1 & \text{if $x \neq 0.0$}
\text{ and } y \neq 0.0\\ 0 & \text{otherwise} \end{cases} \end{equation*}\] **deprecated**

*Available since 2.0, deprecated in 2.31*

`int` **`operator||`**`(int x, int y)`

Return 1 if x is unequal to 0 or y is unequal to 0. \[\begin{equation*}
\text{operator||}(x,y) = \begin{cases} 1 & \text{if $x \neq 0$}
\textrm{ or } y \neq 0\\ 0 & \text{otherwise} \end{cases} \end{equation*}\]

*Available since 2.0*

`int` **`operator||`**`(real x, real y)`

Return 1 if x is unequal to 0.0 or y is unequal to 0.0. \[\begin{equation*}
\text{operator||}(x,y) = \begin{cases} 1 & \text{if $x \neq 0.0$}
\textrm{ or } y \neq 0.0\\ 0 & \text{otherwise} \end{cases} \end{equation*}\] **deprecated**

*Available since 2.0, deprecated in 2.31*

#### Boolean operator short circuiting

As in C++, the boolean operators `&&` and `||` are implemented to short circuit, returning a value directly after evaluating the first argument if it is sufficient to resolve the result. In evaluating `a || b`, if `a` evaluates to a value other than zero, the expression returns the value 1 without evaluating the expression `b`. Similarly, evaluating `a && b` first evaluates `a`, and if the result is zero, returns 0 without evaluating `b`.
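Short circuiting makes it safe to guard an expression with a test that must pass for the expression to be well defined. A sketch, assuming an array `x` and an integer index `i` declared elsewhere:

```
// x[i] is never evaluated when the bounds check fails
if (i >= 1 && i <= num_elements(x) && x[i] > 0) {
  // ...
}
```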

### Logical functions

The logical functions introduce conditional behavior functionally and are primarily provided for compatibility with BUGS and JAGS.

`real` **`step`**`(real x)`

Return 1 if x is positive and 0 otherwise. \[\begin{equation*} \text{step}(x) =
\begin{cases} 0 & \text{if } x < 0 \\ 1 & \text{otherwise} \end{cases}
\end{equation*}\] **Warning:** `int_step(0)` and `int_step(NaN)` return 0, whereas `step(0)` and `step(NaN)` return 1.

The step function is often used in BUGS to perform conditional operations. For instance, `step(a - b)` evaluates to 1 if `a` is greater than `b` and evaluates to 0 otherwise. `step` is a step-like function; see the warning in the section on step functions applied to expressions dependent on parameters.

*Available since 2.0*
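For example, `step` can select between two values without an explicit conditional, in the BUGS style. A sketch, assuming scalars `a` and `b` declared elsewhere:

```
// equals a when a >= b and b otherwise, i.e., max(a, b)
real m = step(a - b) * a + (1 - step(a - b)) * b;
```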

`int` **`is_inf`**`(real x)`

Return 1 if x is infinite (positive or negative) and 0 otherwise.

*Available since 2.5*

`int` **`is_nan`**`(real x)`

Return 1 if x is NaN and 0 otherwise.

*Available since 2.5*

Care must be taken because both of these indicator functions are step-like and thus can cause discontinuities in gradients when applied to parameters; see section step-like functions for details.

## Real-valued arithmetic operators

The arithmetic operators are presented using C++ notation. For instance, `operator+(x, y)` refers to the binary addition operator and `operator-(x)` to the unary negation operator. In Stan programs, these are written using the usual infix and prefix notations as `x + y` and `-x`, respectively.

### Binary infix operators

`real` **`operator+`**`(real x, real y)`

Return the sum of x and y. \[\begin{equation*} (x + y) = \text{operator+}(x,y) = x+y \end{equation*}\]

*Available since 2.0*

`real` **`operator-`**`(real x, real y)`

Return the difference between x and y. \[\begin{equation*} (x - y) =
\text{operator-}(x,y) = x - y \end{equation*}\]

*Available since 2.0*

`real` **`operator*`**`(real x, real y)`

Return the product of x and y. \[\begin{equation*} (x * y) = \text{operator*}(x,y) = xy
\end{equation*}\]

*Available since 2.0*

`real` **`operator/`**`(real x, real y)`

Return the quotient of x and y. \[\begin{equation*} (x / y) = \text{operator/}(x,y) =
\frac{x}{y} \end{equation*}\]

*Available since 2.0*

`real` **`operator^`**`(real x, real y)`

Return x raised to the power of y. \[\begin{equation*} (x^\mathrm{\wedge}y) =
\text{operator}^\mathrm{\wedge}(x,y) = x^y \end{equation*}\]

*Available since 2.5*

### Unary prefix operators

`real` **`operator-`**`(real x)`

Return the negation of the subtrahend x. \[\begin{equation*} \text{operator-}(x) = (-x)
\end{equation*}\]

*Available since 2.0*

`T` **`operator-`**`(T x)`

Vectorized version of `operator-`. If `T x` is a (possibly nested) array of reals, `-x` is the same shape array where each individual number is negated.

*Available since 2.31*

`real` **`operator+`**`(real x)`

Return the value of x. \[\begin{equation*} \text{operator+}(x) = x \end{equation*}\]

*Available since 2.0*

## Step-like functions

**Warning:** *These functions can seriously hinder sampling and optimization efficiency for gradient-based methods (e.g., NUTS, HMC, BFGS) if applied to parameters (including transformed parameters and local variables in the transformed parameters or model block). The problem is that they break gradients due to discontinuities coupled with zero gradients elsewhere. They do not hinder sampling when used in the data, transformed data, or generated quantities blocks.*

### Absolute value functions

`T` **`abs`**`(T x)`

The absolute value of x.

This function works elementwise over containers such as vectors. Given a type `T` which is `real`, `vector`, `row_vector`, `matrix`, or an array of those types, `abs` returns the same type where each element has had its absolute value taken.

*Available since 2.0, vectorized in 2.30*

`real` **`fdim`**`(real x, real y)`

Return the positive difference between x and y, which is x - y if x is greater than y and 0 otherwise; see warning above. \[\begin{equation*} \text{fdim}(x,y) = \begin{cases} x-y &
\text{if } x \geq y \\ 0 & \text{otherwise} \end{cases} \end{equation*}\]

*Available since 2.0*

`R` **`fdim`**`(T1 x, T2 y)`

Vectorized implementation of the `fdim` function.

*Available since 2.25*

### Bounds functions

`real` **`fmin`**`(real x, real y)`

Return the minimum of x and y; see warning above. \[\begin{equation*} \text{fmin}(x,y) = \begin{cases} x &
\text{if } x \leq y \\ y & \text{otherwise} \end{cases} \end{equation*}\]

*Available since 2.0*

`R` **`fmin`**`(T1 x, T2 y)`

Vectorized implementation of the `fmin` function.

*Available since 2.25*

`real` **`fmax`**`(real x, real y)`

Return the maximum of x and y; see warning above. \[\begin{equation*} \text{fmax}(x,y) = \begin{cases} x &
\text{if } x \geq y \\ y & \text{otherwise} \end{cases} \end{equation*}\]

*Available since 2.0*

`R` **`fmax`**`(T1 x, T2 y)`

Vectorized implementation of the `fmax` function.

*Available since 2.25*

### Arithmetic functions

`real` **`fmod`**`(real x, real y)`

Return the real value remainder after dividing x by y; see warning above. \[\begin{equation*} \text{fmod}(x,y) = x - \left\lfloor \frac{x}{y} \right\rfloor \, y \end{equation*}\] The operator \(\lfloor u \rfloor\) is the floor operation; see below.

*Available since 2.0*
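For example, applying the definition above with positive arguments:

```
real r = fmod(7.5, 2);   // 7.5 - floor(7.5 / 2) * 2 = 7.5 - 3 * 2 = 1.5
```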

`R` **`fmod`**`(T1 x, T2 y)`

Vectorized implementation of the `fmod` function.

*Available since 2.25*

### Rounding functions

**Warning:** Rounding functions convert real values to integers. Because the output is an integer, any gradient information resulting from functions applied to the integer is not passed to the real value it was derived from. With MCMC sampling using HMC or NUTS, the MCMC acceptance procedure will correct for any error due to poor gradient calculations, but the result is likely to be reduced acceptance probabilities and less efficient sampling.

The rounding functions cannot be used as indices to arrays because they return real values. Stan may introduce integer-valued versions of these in the future, but as of now, there is no good workaround.
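For example, a rounded value cannot be used directly as an array index, because it still has type real:

```
array[10] real y;
real r = floor(3.7);   // 3.0, a real value, not the integer 3
// y[r] = 1.0;         // illegal: array indices must be integers
```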

`R` **`floor`**`(T x)`

The floor of x, which is the largest integer less than or equal to x, converted to a real value; see warning at start of section step-like functions

*Available since 2.0, vectorized in 2.13*

`R` **`ceil`**`(T x)`

The ceiling of x, which is the smallest integer greater than or equal to x, converted to a real value; see warning at start of section step-like functions

*Available since 2.0, vectorized in 2.13*

`R` **`round`**`(T x)`

The nearest integer to x, converted to a real value; see warning at start of section step-like functions

*Available since 2.0, vectorized in 2.13*

`R` **`trunc`**`(T x)`

The integer nearest to but no larger in magnitude than x, converted to a double value; see warning at start of section step-like functions

*Available since 2.0, vectorized in 2.13*

## Power and logarithm functions

`R` **`sqrt`**`(T x)`

The square root of x

*Available since 2.0, vectorized in 2.13*

`R` **`cbrt`**`(T x)`

The cube root of x

*Available since 2.0, vectorized in 2.13*

`R` **`square`**`(T x)`

The square of x

*Available since 2.0, vectorized in 2.13*

`R` **`exp`**`(T x)`

The natural exponential of x

*Available since 2.0, vectorized in 2.13*

`R` **`exp2`**`(T x)`

The base-2 exponential of x

*Available since 2.0, vectorized in 2.13*

`R` **`log`**`(T x)`

The natural logarithm of x

*Available since 2.0, vectorized in 2.13*

`R` **`log2`**`(T x)`

The base-2 logarithm of x

*Available since 2.0, vectorized in 2.13*

`R` **`log10`**`(T x)`

The base-10 logarithm of x

*Available since 2.0, vectorized in 2.13*

`real` **`pow`**`(real x, real y)`

Return x raised to the power of y. \[\begin{equation*} \text{pow}(x,y) = x^y \end{equation*}\]

*Available since 2.0*

`R` **`pow`**`(T1 x, T2 y)`

Vectorized implementation of the `pow` function.

*Available since 2.25*

`R` **`inv`**`(T x)`

The inverse of x

*Available since 2.0, vectorized in 2.13*

`R` **`inv_sqrt`**`(T x)`

The inverse of the square root of x

*Available since 2.0, vectorized in 2.13*

`R` **`inv_square`**`(T x)`

The inverse of the square of x

*Available since 2.0, vectorized in 2.13*

## Trigonometric functions

`real` **`hypot`**`(real x, real y)`

Return the length of the hypotenuse of a right triangle with sides of length x and y. \[\begin{equation*} \text{hypot}(x,y) = \begin{cases} \sqrt{x^2+y^2} &
\text{if } x,y\geq 0 \\ \textrm{NaN} & \text{otherwise} \end{cases} \end{equation*}\]

*Available since 2.0*

`R` **`hypot`**`(T1 x, T2 y)`

Vectorized implementation of the `hypot` function.

*Available since 2.25*

`R` **`cos`**`(T x)`

The cosine of the angle x (in radians)

*Available since 2.0, vectorized in 2.13*

`R` **`sin`**`(T x)`

The sine of the angle x (in radians)

*Available since 2.0, vectorized in 2.13*

`R` **`tan`**`(T x)`

The tangent of the angle x (in radians)

*Available since 2.0, vectorized in 2.13*

`R` **`acos`**`(T x)`

The principal arc (inverse) cosine (in radians) of x

*Available since 2.0, vectorized in 2.13*

`R` **`asin`**`(T x)`

The principal arc (inverse) sine (in radians) of x

*Available since 2.0*

`R` **`atan`**`(T x)`

The principal arc (inverse) tangent (in radians) of x, with values from \(-\pi/2\) to \(\pi/2\)

*Available since 2.0, vectorized in 2.13*

`R` **`atan2`**`(T y, T x)`

Return the principal arc (inverse) tangent (in radians) of y divided by x, \[\begin{equation*} \text{atan2}(y, x) = \arctan\left(\frac{y}{x}\right) \end{equation*}\]

*Available since 2.0, vectorized in 2.34*

## Hyperbolic trigonometric functions

`R` **`cosh`**`(T x)`

The hyperbolic cosine of x (in radians)

*Available since 2.0, vectorized in 2.13*

`R` **`sinh`**`(T x)`

The hyperbolic sine of x (in radians)

*Available since 2.0, vectorized in 2.13*

`R` **`tanh`**`(T x)`

The hyperbolic tangent of x (in radians)

*Available since 2.0, vectorized in 2.13*

`R` **`acosh`**`(T x)`

The inverse hyperbolic cosine (in radians) of x

*Available since 2.0, vectorized in 2.13*

`R` **`asinh`**`(T x)`

The inverse hyperbolic sine (in radians) of x

*Available since 2.0, vectorized in 2.13*

`R` **`atanh`**`(T x)`

The inverse hyperbolic tangent (in radians) of x

*Available since 2.0, vectorized in 2.13*

## Link functions

The following functions are commonly used as link functions in generalized linear models. The function \(\Phi\) is also commonly used as a link function (see section probability-related functions).

`R` **`logit`**`(T x)`

The log odds, or logit, function applied to x

*Available since 2.0, vectorized in 2.13*

`R` **`inv_logit`**`(T x)`

The logistic sigmoid function applied to x

*Available since 2.0, vectorized in 2.13*

`R` **`inv_cloglog`**`(T x)`

The inverse of the complementary log-log function applied to x

*Available since 2.0, vectorized in 2.13*
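In a generalized linear model, these functions map a linear predictor onto a constrained scale. A sketch, assuming a data matrix `X`, parameter vector `beta`, and size `N` declared elsewhere:

```
vector[N] eta = X * beta;           // linear predictor on the real line
vector[N] theta = inv_logit(eta);   // probabilities in (0, 1)
```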

## Probability-related functions

### Normal cumulative distribution functions

The error function `erf` is related to the standard normal cumulative distribution function \(\Phi\) by scaling. See section normal distribution for the general normal cumulative distribution function (and its complement).

`R` **`erf`**`(T x)`

The error function, also known as the Gauss error function, of x

*Available since 2.0, vectorized in 2.13*

`R` **`erfc`**`(T x)`

The complementary error function of x

*Available since 2.0, vectorized in 2.13*

`R` **`inv_erfc`**`(T x)`

The inverse of the complementary error function of x

*Available since 2.29, vectorized in 2.29*

`R` **`Phi`**`(T x)`

The standard normal cumulative distribution function of x

*Available since 2.0, vectorized in 2.13*

`R` **`inv_Phi`**`(T x)`

Return the value of the inverse standard normal cdf \(\Phi^{-1}\) at the specified quantile `x`. The details of the algorithm can be found in (Wichura 1988). Quantile arguments below 1e-16 are untested; quantiles above 0.999999999 result in increasingly large errors.

*Available since 2.0, vectorized in 2.13*

`R` **`Phi_approx`**`(T x)`

A fast approximation of the unit (standard) normal cumulative distribution function; it may replace `Phi` for probit regression, with maximum absolute error of 0.00014 (see (Bowling et al. 2009) for details).

*Available since 2.0, vectorized in 2.13*
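For example, `Phi` and `inv_Phi` are inverses of one another:

```
real p = Phi(1.96);    // approximately 0.975
real z = inv_Phi(p);   // recovers approximately 1.96
```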

## Combinatorial functions

`real` **`beta`**`(real alpha, real beta)`

Return the beta function applied to alpha and beta. The beta function, \(\text{B}(\alpha,\beta)\), computes the normalizing constant for the beta distribution, and is defined for \(\alpha > 0\) and \(\beta > 0\). See section appendix for definition of \(\text{B}(\alpha, \beta)\).

*Available since 2.25*

`R` **`beta`**`(T1 x, T2 y)`

Vectorized implementation of the `beta` function.

*Available since 2.25*

`real` **`inc_beta`**`(real alpha, real beta, real x)`

Return the regularized incomplete beta function up to x applied to alpha and beta. See section appendix for a definition.

*Available since 2.10*

`real` **`inv_inc_beta`**`(real alpha, real beta, real p)`

Return the inverse of the regularized incomplete beta function. The return value `x` is the value that solves `p = inc_beta(alpha, beta, x)`. See section appendix for a definition of `inc_beta`.

*Available since 2.30*

`real` **`lbeta`**`(real alpha, real beta)`

Return the natural logarithm of the beta function applied to alpha and beta. The beta function, \(\text{B}(\alpha,\beta)\), computes the normalizing constant for the beta distribution, and is defined for \(\alpha > 0\) and \(\beta > 0\). \[\begin{equation*}
\text{lbeta}(\alpha,\beta) = \log \Gamma(\alpha) + \log \Gamma(\beta) - \log \Gamma(\alpha+\beta)
\end{equation*}\] See section appendix for the definition of \(\text{B}(\alpha, \beta)\).

*Available since 2.0*

`R` **`lbeta`**`(T1 x, T2 y)`

Vectorized implementation of the `lbeta` function.

*Available since 2.25*

`R` **`tgamma`**`(T x)`

The gamma function applied to x. The gamma function is the generalization of the factorial function to continuous variables, defined so that \(\Gamma(n+1) = n!\). See section appendix for a full definition of \(\Gamma(x)\). The function is defined for positive numbers and non-integral negative numbers.

*Available since 2.0, vectorized in 2.13*

`R` **`lgamma`**`(T x)`

The natural logarithm of the gamma function applied to x.

*Available since 2.0, vectorized in 2.15*

`R` **`digamma`**`(T x)`

The digamma function applied to x. The digamma function is the derivative of the natural logarithm of the gamma function. The function is defined for positive numbers and non-integral negative numbers.

*Available since 2.0, vectorized in 2.13*

`R` **`trigamma`**`(T x)`

The trigamma function applied to x. The trigamma function is the second derivative of the natural logarithm of the gamma function.

*Available since 2.0, vectorized in 2.13*

`real` **`lmgamma`**`(int n, real x)`

Return the natural logarithm of the multivariate gamma function \(\Gamma_n\) with n dimensions applied to x. \[\begin{equation*}
\text{lmgamma}(n,x) =
\begin{cases} \frac{n(n-1)}{4} \log \pi + \sum_{j=1}^n \log \Gamma\left(x + \frac{1 - j}{2}\right)
& \text{if } x\not\in \{\dots,-3,-2,-1,0\}\\ \textrm{error} & \text{otherwise} \end{cases}
\end{equation*}\]

*Available since 2.0*

`R` **`lmgamma`**`(T1 x, T2 y)`

Vectorized implementation of the `lmgamma` function.

*Available since 2.25*

`real` **`gamma_p`**`(real a, real z)`

Return the normalized lower incomplete gamma function of a and z defined for positive a and nonnegative z. \[\begin{equation*}
\mathrm{gamma\_p}(a,z) =
\begin{cases} \frac{1}{\Gamma(a)}\int_0^zt^{a-1}e^{-t}dt &
\text{if } a > 0, z \geq 0 \\ \textrm{error} & \text{otherwise} \end{cases}
\end{equation*}\]

*Available since 2.0*

`R` **`gamma_p`**`(T1 x, T2 y)`

Vectorized implementation of the `gamma_p` function.

*Available since 2.25*

`real` **`gamma_q`**`(real a, real z)`

Return the normalized upper incomplete gamma function of a and z defined for positive a and nonnegative z. \[\begin{equation*}
\mathrm{gamma\_q}(a,z) =
\begin{cases} \frac{1}{\Gamma(a)}\int_z^\infty t^{a-1}e^{-t}dt &
\text{if } a > 0, z \geq 0 \\[6pt] \textrm{error} & \text{otherwise}
\end{cases}
\end{equation*}\]

*Available since 2.0*

`R` **`gamma_q`**`(T1 x, T2 y)`

Vectorized implementation of the `gamma_q` function.

*Available since 2.25*

`int` **`choose`**`(int x, int y)`

Return the binomial coefficient of x and y. For non-negative integer inputs, the binomial coefficient function is written as \(\binom{x}{y}\) and pronounced “x choose y.” It is the antilog of the `lchoose` function, but returns an integer rather than a real number. For \(0 \leq y \leq x\), the binomial coefficient function can be defined via the factorial function \[\begin{equation*}
\text{choose}(x,y) = \frac{x!}{\left(y!\right)\left(x - y\right)!}.
\end{equation*}\]

*Available since 2.14*
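For example, `choose` returns an integer while its log-scale counterpart `lchoose` returns a real:

```
int c = choose(5, 2);      // 10
real lc = lchoose(5, 2);   // log(10), approximately 2.303
```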

`R` **`choose`**`(T1 x, T2 y)`

Vectorized implementation of the `choose` function.

*Available since 2.25*

`real` **`bessel_first_kind`**`(int v, real x)`

Return the Bessel function of the first kind with order v applied to x. \[\begin{equation*}
\mathrm{bessel\_first\_kind}(v,x) = J_v(x),
\end{equation*}\] where \[\begin{equation*}
J_v(x)=\left(\frac{1}{2}x\right)^v \sum_{k=0}^\infty
\frac{\left(-\frac{1}{4}x^2\right)^k}{k!\, \Gamma(v+k+1)}
\end{equation*}\]

*Available since 2.5*

`R` **`bessel_first_kind`**`(T1 x, T2 y)`

Vectorized implementation of the `bessel_first_kind` function.

*Available since 2.25*

`real` **`bessel_second_kind`**`(int v, real x)`

Return the Bessel function of the second kind with order v applied to x defined for positive x and v. For \(x,v > 0\), \[\begin{equation*}
\mathrm{bessel\_second\_kind}(v,x) =
\begin{cases} Y_v(x) & \text{if } x > 0 \\ \textrm{error} & \text{otherwise} \end{cases}
\end{equation*}\] where \[\begin{equation*}
Y_v(x)=\frac{J_v(x)\cos(v\pi)-J_{-v}(x)}{\sin(v\pi)}
\end{equation*}\]

*Available since 2.5*

`R` **`bessel_second_kind`**`(T1 x, T2 y)`

Vectorized implementation of the `bessel_second_kind` function.

*Available since 2.25*

`real` **`modified_bessel_first_kind`**`(int v, real z)`

Return the modified Bessel function of the first kind with order v applied to z defined for all z and integer v. \[\begin{equation*}
\mathrm{modified\_bessel\_first\_kind}(v,z) = I_v(z)
\end{equation*}\] where \[\begin{equation*}
{I_v}(z) = \left(\frac{1}{2}z\right)^v\sum_{k=0}^\infty \frac{\left(\frac{1}{4}z^2\right)^k}{k!\Gamma(v+k+1)}
\end{equation*}\]

*Available since 2.1*

`R` `modified_bessel_first_kind` `(T1 x, T2 y)`

Vectorized implementation of the `modified_bessel_first_kind` function

*Available since 2.25*

`real` `log_modified_bessel_first_kind` `(real v, real z)`

Return the log of the modified Bessel function of the first kind. v does not have to be an integer.

*Available since 2.26*

`R` `log_modified_bessel_first_kind` `(T1 x, T2 y)`

Vectorized implementation of the `log_modified_bessel_first_kind` function

*Available since 2.26*

`real` `modified_bessel_second_kind` `(int v, real z)`

Return the modified Bessel function of the second kind with order v applied to z defined for positive z and integer v. \[\begin{equation*}
\mathrm{modified\_bessel\_second\_kind}(v,z) =
\begin{cases} K_v(z) & \text{if } z > 0 \\ \textrm{error} & \text{if } z \leq 0 \end{cases}
\end{equation*}\] where \[\begin{equation*} {K_v}(z) = \frac{\pi}{2}\cdot\frac{I_{-v}(z) - I_{v}(z)}{\sin(v\pi)}
\end{equation*}\]

*Available since 2.1*

`R` `modified_bessel_second_kind` `(T1 x, T2 y)`

Vectorized implementation of the `modified_bessel_second_kind` function

*Available since 2.25*

`real` `falling_factorial` `(real x, real n)`

Return the falling factorial of x with power n defined for positive x and real n. \[\begin{equation*}
\mathrm{falling\_factorial}(x,n) =
\begin{cases} (x)_n & \text{if } x > 0 \\ \textrm{error} & \text{if } x \leq 0 \end{cases}
\end{equation*}\] where \[\begin{equation*}
(x)_n=\frac{\Gamma(x+1)}{\Gamma(x-n+1)}
\end{equation*}\]

*Available since 2.0*
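For integer powers, the falling factorial reduces to the product \(x(x-1)\cdots(x-n+1)\); for example (a minimal sketch):

```
real f = falling_factorial(5, 2);  // Gamma(6) / Gamma(4) = 120 / 6 = 20
```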

`R` `falling_factorial` `(T1 x, T2 y)`

Vectorized implementation of the `falling_factorial` function

*Available since 2.25*

`real` `lchoose` `(real x, real y)`

Return the natural logarithm of the generalized binomial coefficient of x and y. For non-negative integer inputs, the binomial coefficient function is written as \(\binom{x}{y}\) and pronounced “x choose y.” This function generalizes to real numbers using the gamma function. For \(0 \leq y \leq x\), \[\begin{equation*} \mathrm{lchoose}(x,y) =
\log\Gamma(x+1) - \log\Gamma(y+1) - \log\Gamma(x-y+1). \end{equation*}\]

*Available since 2.10*

`R` `lchoose` `(T1 x, T2 y)`

Vectorized implementation of the `lchoose` function

*Available since 2.29*

`real` `log_falling_factorial` `(real x, real n)`

Return the log of the falling factorial of x with power n defined for positive x and real n. \[\begin{equation*} \mathrm{log\_falling\_factorial}(x,n) =
\begin{cases} \log (x)_n & \text{if } x > 0 \\ \textrm{error} &
\text{if } x \leq 0 \end{cases} \end{equation*}\]

*Available since 2.0*

`real` `rising_factorial` `(real x, int n)`

Return the rising factorial of x with power n defined for positive x and integer n. \[\begin{equation*}
\mathrm{rising\_factorial}(x,n) = \begin{cases} x^{(n)} & \text{if } x > 0 \\ \textrm{error} & \text{if } x \leq 0 \end{cases}
\end{equation*}\] where \[\begin{equation*} x^{(n)}=\frac{\Gamma(x+n)}{\Gamma(x)} \end{equation*}\]

*Available since 2.20*

`R` `rising_factorial` `(T1 x, T2 y)`

Vectorized implementation of the `rising_factorial` function

*Available since 2.25*

`real` `log_rising_factorial` `(real x, real n)`

Return the log of the rising factorial of x with power n defined for positive x and real n. \[\begin{equation*} \mathrm{log\_rising\_factorial}(x,n) =
\begin{cases} \log x^{(n)} & \text{if } x > 0 \\ \textrm{error} &
\text{if } x \leq 0 \end{cases} \end{equation*}\]

*Available since 2.0*

`R` `log_rising_factorial` `(T1 x, T2 y)`

Vectorized implementation of the `log_rising_factorial` function

*Available since 2.25*

## Composed functions

The functions in this section are equivalent in theory to combinations of other functions. In practice, they are implemented to be more efficient and more numerically stable than defining them directly using more basic Stan functions.

`R` `expm1` `(T x)`

The natural exponential of x minus 1

*Available since 2.0, vectorized in 2.13*
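The advantage over writing `exp(x) - 1` directly shows up for x near zero, where the subtraction loses almost all precision; a minimal sketch:

```
real x = 1e-12;
real bad = exp(x) - 1;  // catastrophic cancellation: few accurate digits
real good = expm1(x);   // accurate to full precision, approximately 1e-12
```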

`real` `fma` `(real x, real y, real z)`

Return z plus the result of x multiplied by y. \[\begin{equation*} \text{fma}(x,y,z) =
(x \times y) + z \end{equation*}\]

*Available since 2.0*

`real` `ldexp` `(real x, int y)`

Return the product of x and two raised to the y power. \[\begin{equation*}
\text{ldexp}(x,y) = x 2^y \end{equation*}\]

*Available since 2.25*

`R` `ldexp` `(T1 x, T2 y)`

Vectorized implementation of the `ldexp` function

*Available since 2.25*

`real` `lmultiply` `(real x, real y)`

Return the product of x and the natural logarithm of y. \[\begin{equation*}
\text{lmultiply}(x,y) = \begin{cases} 0 & \text{if } x = y = 0 \\ x
\log y & \text{if } x, y \neq 0 \\ \text{NaN} & \text{otherwise}
\end{cases} \end{equation*}\]

*Available since 2.10*
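The zero-handling makes `lmultiply` convenient when coding log densities by hand; for example, a Poisson log probability mass (a sketch, with `y` and `lambda` assumed declared elsewhere) can be written as:

```
// log Poisson(y | lambda) = y * log(lambda) - lambda - log(y!)
target += lmultiply(y, lambda) - lambda - lgamma(y + 1);
```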

`R` `lmultiply` `(T1 x, T2 y)`

Vectorized implementation of the `lmultiply` function

*Available since 2.25*

`R` `log1p` `(T x)`

The natural logarithm of 1 plus x

*Available since 2.0, vectorized in 2.13*

`R` `log1m` `(T x)`

The natural logarithm of 1 minus x

*Available since 2.0, vectorized in 2.13*

`R` `log1p_exp` `(T x)`

The natural logarithm of one plus the natural exponentiation of x

*Available since 2.0, vectorized in 2.13*

`R` `log1m_exp` `(T x)`

The natural logarithm of one minus the natural exponentiation of x

*Available since 2.0, vectorized in 2.13*

`real` `log_diff_exp` `(real x, real y)`

Return the natural logarithm of the difference of the natural exponentiation of x and the natural exponentiation of y. \[\begin{equation*}
\mathrm{log\_diff\_exp}(x,y) = \begin{cases} \log(\exp(x)-\exp(y)) &
\text{if } x > y \\[6pt] \textrm{NaN} & \text{otherwise} \end{cases}
\end{equation*}\]

*Available since 2.0*
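A common use is computing the log probability of an interval from two log CDF values, which stays stable even when both CDF values are tiny; a sketch assuming `a < b` and `mu`, `sigma` declared elsewhere:

```
// log Pr[a < Y <= b] for Y ~ normal(mu, sigma)
real lp = log_diff_exp(normal_lcdf(b | mu, sigma),
                       normal_lcdf(a | mu, sigma));
```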

`R` `log_diff_exp` `(T1 x, T2 y)`

Vectorized implementation of the `log_diff_exp` function

*Available since 2.25*

`real` `log_mix` `(real theta, real lp1, real lp2)`

Return the log mixture of the log densities lp1 and lp2 with mixing proportion theta, defined by \[\begin{eqnarray*}
\mathrm{log\_mix}(\theta, \lambda_1, \lambda_2) & = & \log \!\left(
\theta \exp(\lambda_1) + \left( 1 - \theta \right) \exp(\lambda_2)
\right) \\[3pt] & = & \mathrm{log\_sum\_exp}\!\left(\log(\theta) +
\lambda_1, \ \log(1 - \theta) + \lambda_2\right). \end{eqnarray*}\]

*Available since 2.6*
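For example, the target increment for a two-component normal mixture observation can be written as (a sketch assuming `y`, `theta`, `mu`, and `sigma` are declared elsewhere):

```
target += log_mix(theta,
                  normal_lpdf(y | mu[1], sigma[1]),
                  normal_lpdf(y | mu[2], sigma[2]));
```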

`R` `log_mix` `(T1 theta, T2 lp1, T3 lp2)`

Vectorized implementation of the `log_mix` function

*Available since 2.26*

`R` `log_sum_exp` `(T1 x, T2 y)`

Return the natural logarithm of the sum of the natural exponentiation of x and the natural exponentiation of y. \[\begin{equation*}
\mathrm{log\_sum\_exp}(x,y) = \log(\exp(x)+\exp(y)) \end{equation*}\]

*Available since 2.0, vectorized in 2.33*
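The direct formula overflows once `exp(x)` exceeds the largest representable double, whereas `log_sum_exp` does not; a minimal sketch:

```
real a = log_sum_exp(1000, 1000);  // 1000 + log(2); exp(1000) would overflow
```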

`R` `log_inv_logit` `(T x)`

The natural logarithm of the inverse logit function of x

*Available since 2.0, vectorized in 2.13*

`R` `log_inv_logit_diff` `(T1 x, T2 y)`

The natural logarithm of the difference of the inverse logit function of x and the inverse logit function of y

*Available since 2.25*

`R` `log1m_inv_logit` `(T x)`

The natural logarithm of 1 minus the inverse logit function of x

*Available since 2.0, vectorized in 2.13*
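Together with `log_inv_logit`, this gives a numerically stable hand-coded Bernoulli-logit log probability; a sketch with `alpha` the log-odds, assumed declared elsewhere:

```
// log Pr[y = 1] and log Pr[y = 0] for y ~ bernoulli_logit(alpha)
real lp1 = log_inv_logit(alpha);
real lp0 = log1m_inv_logit(alpha);
```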

## Special functions

`R` `lambert_w0` `(T x)`

Implementation of the \(W_0\) branch of the Lambert W function, i.e., the solution \(W_0(x)\) to the equation \(W_0(x)\, e^{W_0(x)} = x\)

*Available since 2.25*
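Because \(W_0\) inverts \(x \mapsto x e^x\) on its principal branch, a round trip recovers the argument; a minimal sketch:

```
real x = 0.5;
real w = lambert_w0(x * exp(x));  // recovers 0.5
```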

`R` `lambert_wm1` `(T x)`

Implementation of the \(W_{-1}\) branch of the Lambert W function, i.e., the solution \(W_{-1}(x)\) to the equation \(W_{-1}(x)\, e^{W_{-1}(x)} = x\)

*Available since 2.25*
