7.14 Complex linear algebra functions

7.14.1 Complex matrix division operators and functions

In general, it is much more efficient and more numerically stable to use matrix division than to multiply by an inverse.

7.14.1.1 Complex matrix division operators

complex_row_vector operator/(complex_row_vector b, complex_matrix A)
The right division of b by A; equivalently b * inverse(A)
Available since 2.30

complex_matrix operator/(complex_matrix B, complex_matrix A)
The right division of B by A; equivalently B * inverse(A)
Available since 2.30
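The semantics of right division can be sketched with NumPy (an illustration, not Stan code): `b / A` solves `x @ A = b`, which gives the same result as `b @ inv(A)` but without forming the explicit inverse.

```python
import numpy as np

# Illustration (NumPy, not Stan): right division b / A solves x @ A = b.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
b = rng.standard_normal(3) + 1j * rng.standard_normal(3)

# x @ A = b is the same linear system as A.T @ x = b, so solve on the transpose.
x = np.linalg.solve(A.T, b)

# Same answer as multiplying by the explicit inverse, but without computing it.
x_inv = b @ np.linalg.inv(A)
assert np.allclose(x, x_inv)
assert np.allclose(x @ A, b)
```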

7.14.2 Linear algebra functions

7.14.2.1 Trace

complex trace(complex_matrix A)
The trace of A, or 0 if A is empty; A is not required to be diagonal
Available since 2.30
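As a quick NumPy illustration (not Stan code), the trace is simply the sum of the diagonal entries, and the matrix need not be diagonal:

```python
import numpy as np

# Illustration (NumPy, not Stan): trace sums the diagonal of a
# (not necessarily diagonal) complex matrix.
A = np.array([[1 + 2j, 5j], [3, 4 - 1j]])
assert np.trace(A) == (1 + 2j) + (4 - 1j)  # 5 + 1j
```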

7.14.2.2 Eigendecomposition

complex_vector eigenvalues_sym(complex_matrix A)
The vector of eigenvalues of a symmetric matrix A in ascending order
Available since 2.30

complex_matrix eigenvectors_sym(complex_matrix A)
The matrix with the (column) eigenvectors of symmetric matrix A in the same order as returned by the function eigenvalues_sym
Available since 2.30

Because multiplying an eigenvector by $$-1$$ results in an eigenvector, eigenvectors returned by a decomposition are only identified up to a sign change. In order to compare the eigenvectors produced by Stan’s eigendecomposition to others, signs may need to be normalized in some way, such as by fixing the sign of a component, or doing comparisons allowing a multiplication by $$-1$$.
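The sign (more generally, phase) normalization described above can be sketched with NumPy (an illustration, not Stan code): each eigenvector column is rotated so that its first component is real and non-negative, after which two decompositions of the same matrix can be compared directly.

```python
import numpy as np

# Illustration (NumPy, not Stan): eigenvectors are only identified up to a
# scalar of modulus 1 (a sign, in the real case). One normalization for
# comparisons is to fix the phase of the first component of each column.
rng = np.random.default_rng(1)
B = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = B + B.conj().T                 # Hermitian matrix
vals, vecs = np.linalg.eigh(A)     # eigenvalues in ascending order

# Rotate each column so its first component is real and non-negative.
phases = vecs[0, :] / np.abs(vecs[0, :])
vecs_normalized = vecs / phases

# The normalized columns are still eigenvectors of A.
assert np.allclose(A @ vecs_normalized, vecs_normalized * vals)
```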

The condition number of a symmetric matrix is defined to be the ratio of the largest eigenvalue to the smallest eigenvalue. Large condition numbers lead to difficulty in numerical algorithms such as computing inverses, and such matrices are known as “ill conditioned.” The ratio can even be infinite in the case of singular matrices (i.e., those with an eigenvalue of 0).
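This definition can be checked with NumPy (an illustration, not Stan code): for a symmetric positive-definite matrix the eigenvalue ratio agrees with NumPy's 2-norm condition number.

```python
import numpy as np

# Illustration (NumPy, not Stan): condition number as the ratio of the
# largest to the smallest eigenvalue of a symmetric matrix.
A = np.diag([100.0, 1.0, 0.01])   # eigenvalues 100, 1, 0.01
vals = np.linalg.eigvalsh(A)      # ascending order
cond = vals[-1] / vals[0]         # 10000: ill conditioned
assert np.isclose(cond, np.linalg.cond(A))
```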

7.14.2.3 Singular value decomposition

The matrix A can be decomposed into a diagonal matrix of singular values, D, and matrices of its left and right singular vectors, U and V, $A = U D V^*,$ where $$V^*$$ is the conjugate transpose of V. The matrices of singular vectors here are thin. That is, for an $$N$$ by $$P$$ input A, with $$M = \min(N, P)$$, U is size $$N$$ by $$M$$ and V is size $$P$$ by $$M$$.

vector singular_values(complex_matrix A)
The singular values of A in descending order
Available since 2.30

complex_matrix svd_U(complex_matrix A)
The left-singular vectors of A
Available since 2.30

complex_matrix svd_V(complex_matrix A)
The right-singular vectors of A
Available since 2.30
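The shapes of the thin decomposition and the ordering of the singular values can be sketched with NumPy (an illustration, not Stan code):

```python
import numpy as np

# Illustration (NumPy, not Stan): thin SVD of a 5 x 3 complex matrix.
# With M = min(N, P) = 3, U is 5 x 3, V is 3 x 3, and there are
# 3 singular values, in descending order.
rng = np.random.default_rng(2)
A = rng.standard_normal((5, 3)) + 1j * rng.standard_normal((5, 3))
U, d, Vh = np.linalg.svd(A, full_matrices=False)  # Vh is V's conjugate transpose
V = Vh.conj().T

assert U.shape == (5, 3) and V.shape == (3, 3) and d.shape == (3,)
assert np.all(np.diff(d) <= 0)                    # descending singular values
assert np.allclose(A, U @ np.diag(d) @ V.conj().T)
```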

7.14.2.4 Complex Schur Decomposition

The complex Schur decomposition of a square matrix $$A$$ produces a complex unitary matrix $$U$$ and a complex upper-triangular Schur form matrix $$T$$ such that $A = U \cdot T \cdot U^{-1}.$

Since $$U$$ is unitary, its inverse is its conjugate transpose: $$U^{-1} = U^*$$, where $$U^*(i, j) = \mathrm{conj}(U(j, i))$$.
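The unitarity property can be verified numerically with NumPy (an illustration, not Stan code); here a unitary matrix is obtained from the QR factorization of a random complex matrix.

```python
import numpy as np

# Illustration (NumPy, not Stan): for a unitary matrix U, the inverse
# equals the conjugate transpose. QR of a random complex matrix yields
# a unitary Q factor.
rng = np.random.default_rng(3)
B = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
U, _ = np.linalg.qr(B)

assert np.allclose(U @ U.conj().T, np.eye(4))    # U^* is an inverse of U
assert np.allclose(np.linalg.inv(U), U.conj().T)
```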

complex_matrix complex_schur_decompose_t(matrix A)
Compute the upper-triangular Schur form matrix of the complex Schur decomposition of A.
Available since 2.31

complex_matrix complex_schur_decompose_t(complex_matrix A)
Compute the upper-triangular Schur form matrix of the complex Schur decomposition of A.
Available since 2.31

complex_matrix complex_schur_decompose_u(matrix A)
Compute the unitary matrix of the complex Schur decomposition of A.
Available since 2.31

complex_matrix complex_schur_decompose_u(complex_matrix A)
Compute the unitary matrix of the complex Schur decomposition of A.
Available since 2.31
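The two factors returned by these functions correspond to the T and Z outputs of SciPy's complex Schur decomposition, which can be used to sketch the defining properties (an illustration, not Stan code):

```python
import numpy as np
from scipy.linalg import schur  # SciPy's Schur decomposition

# Illustration (SciPy, not Stan): the complex Schur decomposition gives an
# upper-triangular T and a unitary U with A = U @ T @ U^*.
rng = np.random.default_rng(4)
A = rng.standard_normal((4, 4))
T, U = schur(A, output='complex')

assert np.allclose(np.tril(T, -1), 0)            # T is upper triangular
assert np.allclose(U @ U.conj().T, np.eye(4))    # U is unitary
assert np.allclose(U @ T @ U.conj().T, A)        # A = U T U^{-1} = U T U^*
```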