# Matrix Decomposition Methods
LU, QR, SVD, eigendecomposition, Cholesky — what each does and when to use it.
## Decompositions
| Name | Form | Applies to | Used for |
|---|---|---|---|
| LU | A = L · U (or P · L · U) | Square, usually invertible | Solving Ax = b; determinant |
| Cholesky | A = L · Lᵀ | Symmetric positive-definite | Fast Ax = b for SPD |
| QR | A = Q · R | Any (even rectangular) | Least squares, numerical stability |
| Eigendecomp | A = P · D · P⁻¹ | Diagonalizable square | PCA (via covariance), dynamics |
| SVD | A = U · Σ · Vᵀ | Any matrix | PCA, pseudo-inverse, low-rank approx |
| Schur | A = Q · T · Qᵀ (T triangular) | Square | Stable eigenvalue computation |
| Hessenberg | A = Q · H · Qᵀ | Square | First step of QR eigenvalue algorithm |
| Jordan | A = P · J · P⁻¹ | Any square (defective okay) | Theoretical; numerically unstable |
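As a concrete illustration of the table, here is a minimal NumPy/SciPy sketch (an assumed toolchain, not something this page prescribes) that computes each factorization and checks that the factored form reproduces the input:

```python
import numpy as np
from scipy import linalg

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
SPD = A @ A.T + 4 * np.eye(4)               # symmetric positive-definite example

P, L, U = linalg.lu(A)                      # A = P @ L @ U (P is a permutation)
assert np.allclose(A, P @ L @ U)

C = linalg.cholesky(SPD, lower=True)        # SPD = C @ C.T, C lower triangular
assert np.allclose(SPD, C @ C.T)

Q, R = linalg.qr(A)                         # A = Q @ R (works for rectangular A too)
assert np.allclose(A, Q @ R)

w, V = np.linalg.eig(A)                     # A @ V = V @ diag(w); may be complex
assert np.allclose(A @ V, V @ np.diag(w))

U2, s, Vh = np.linalg.svd(A)                # A = U2 @ diag(s) @ Vh
assert np.allclose(A, U2 @ np.diag(s) @ Vh)

T, Z = linalg.schur(A)                      # A = Z @ T @ Z.T (real Schur: T quasi-triangular)
assert np.allclose(A, Z @ T @ Z.T)

H, Qh = linalg.hessenberg(A, calc_q=True)   # A = Qh @ H @ Qh.T (H upper Hessenberg)
assert np.allclose(A, Qh @ H @ Qh.T)

# Jordan form has no stable floating-point routine; use symbolic tools if needed.
```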
## Picking one
- Solve Ax = b once: LU.
- Solve Ax = b many times with the same A: factor once, then forward/back-substitute for each new b (first sketch below).
- Symmetric positive-definite A (covariance, Gram matrices): Cholesky, ~2× faster than LU since it needs roughly half the flops (also in the first sketch).
- Least squares: QR (second sketch below); forming the normal equations AᵀA x = Aᵀb squares the condition number, so that route is less numerically stable.
- Rank, null space, low-rank approximation: SVD (third sketch below).
- PCA: SVD of the centered data matrix, or eigendecomposition of the covariance matrix (also in the third sketch).
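First sketch: the factor-once, solve-many pattern, assuming SciPy's `lu_factor`/`lu_solve` and `cho_factor`/`cho_solve`. The O(n³) factorization happens once; each subsequent solve is O(n²):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve, cho_factor, cho_solve

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
SPD = A @ A.T + 5 * np.eye(5)        # symmetric positive-definite

lu_piv = lu_factor(A)                # general square A: pivoted LU, done once
chol = cho_factor(SPD)               # SPD A: Cholesky, roughly half the flops

for _ in range(3):                   # reuse the factors for each new right-hand side
    b = rng.standard_normal(5)
    x = lu_solve(lu_piv, b)
    assert np.allclose(A @ x, b)
    y = cho_solve(chol, b)
    assert np.allclose(SPD @ y, b)
```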
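Second sketch: least squares via thin QR (SciPy assumed), with the normal-equations solution included only for comparison. Solving R x = Qᵀb never forms AᵀA, whose condition number is cond(A)²:

```python
import numpy as np
from scipy.linalg import qr, solve_triangular

rng = np.random.default_rng(1)
A = rng.standard_normal((100, 3))    # overdetermined: 100 equations, 3 unknowns
b = rng.standard_normal(100)

Q, R = qr(A, mode='economic')        # thin QR: Q is 100x3, R is 3x3 upper triangular
x_qr = solve_triangular(R, Q.T @ b)

x_normal = np.linalg.solve(A.T @ A, A.T @ b)   # normal equations, less stable
assert np.allclose(x_qr, x_normal)   # agree here; they diverge when A is ill-conditioned
```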
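Third sketch: the two SVD uses from the last bullets, best rank-k approximation and PCA of centered data (NumPy assumed; `k = 3` is an arbitrary illustrative choice). Truncating to the top k singular triplets gives the best rank-k approximation in the Frobenius and spectral norms (Eckart–Young):

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((200, 10))

# Low-rank approximation of any matrix
U, s, Vh = np.linalg.svd(X, full_matrices=False)
k = 3
X_k = U[:, :k] @ np.diag(s[:k]) @ Vh[:k]   # best rank-k approximation of X

# PCA: SVD of the centered data; rows of Vh are the principal axes
Xc = X - X.mean(axis=0)
U, s, Vh = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vh[:k].T                     # projection onto the top-k components
explained_var = s**2 / (len(Xc) - 1)       # eigenvalues of the covariance matrix
```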