22.5 Multivariate Gaussian Process Distribution, Cholesky parameterization
22.5.1 Probability Density Function
If \(K,N \in \mathbb{N}\), \(L \in \mathbb{R}^{N \times N}\) is lower triangular such that \(LL^{\top}\) is a positive definite kernel matrix (implying \(L_{n,n} > 0\) for \(n \in 1{:}N\)), and \(w \in \mathbb{R}^{K}\) is a vector of positive inverse scales, then for \(y \in \mathbb{R}^{K \times N}\), \[ \text{MultiGPCholesky}(y \,|\, L,w) = \prod_{i=1}^{K} \text{MultiNormal}(y_i \,|\, 0, w_i^{-1} LL^{\top}), \] where \(y_i\) is the \(i\)th row of \(y\). This is used to efficiently handle Gaussian processes with multivariate outputs where the output dimensions share a kernel function but differ in scale. If the model allows parameterization in terms of the Cholesky factor of the kernel matrix, this distribution is also more efficient than \(\text{MultiGP}()\). Note that this function does not take into account the mean prediction.
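To see how the definition unfolds, note that the Cholesky factor of \(w_i^{-1} LL^{\top}\) is \(L / \sqrt{w_i}\), so the log density is a sum over rows of independent multivariate normal log densities. The following is a minimal sketch of a user-defined Stan function expressing this; the name multi_gp_cholesky_manual_lpdf is hypothetical and this is not the built-in implementation.

```stan
functions {
  // Hypothetical helper illustrating the definition above: the density
  // is a product of K independent multivariate normals, and the
  // Cholesky factor of w[i]^{-1} * L * L' is L / sqrt(w[i]).
  real multi_gp_cholesky_manual_lpdf(matrix y, matrix L, vector w) {
    real lp = 0;
    for (i in 1:rows(y)) {
      lp += multi_normal_cholesky_lpdf(y[i] | rep_row_vector(0, cols(y)),
                                       L / sqrt(w[i]));
    }
    return lp;
  }
}
```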
22.5.2 Sampling Statement
y ~ multi_gp_cholesky(L, w)

Increment target log probability density with multi_gp_cholesky_lupdf(y | L, w).
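Below is a usage sketch, not taken verbatim from the manual: it assumes K output dimensions observed at N shared input locations x, builds a squared-exponential kernel with gp_exp_quad_cov, and passes its Cholesky factor to the sampling statement. The data names, priors, and jitter value are illustrative only.

```stan
data {
  int<lower=1> K;                // number of output dimensions
  int<lower=1> N;                // number of shared input locations
  array[N] real x;               // input locations (hypothetical name)
  matrix[K, N] y;                // observed outputs, one row per dimension
}
parameters {
  real<lower=0> alpha;           // kernel marginal standard deviation
  real<lower=0> rho;             // kernel length scale
  vector<lower=0>[K] w;          // per-dimension inverse scales
}
model {
  // shared squared-exponential kernel plus a small jitter for stability
  matrix[N, N] Sigma = gp_exp_quad_cov(x, alpha, rho)
                       + diag_matrix(rep_vector(1e-9, N));
  matrix[N, N] L = cholesky_decompose(Sigma);
  alpha ~ normal(0, 1);
  rho ~ inv_gamma(5, 5);
  w ~ gamma(2, 2);
  y ~ multi_gp_cholesky(L, w);
}
```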
22.5.3 Stan Functions
real multi_gp_cholesky_lpdf(matrix y | matrix L, vector w)
The log of the multivariate GP density of matrix y given lower-triangular Cholesky factor of the kernel matrix L and inverse scales w
real multi_gp_cholesky_lupdf(matrix y | matrix L, vector w)
The log of the multivariate GP density of matrix y given lower-triangular Cholesky factor of the kernel matrix L and inverse scales w, dropping constant additive terms
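If the normalizing constants are not needed, the lupdf form can be incremented explicitly instead of using the sampling statement; a minimal sketch, assuming y, L, and w are already declared:

```stan
model {
  // equivalent to y ~ multi_gp_cholesky(L, w) up to a constant
  target += multi_gp_cholesky_lupdf(y | L, w);
}
```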