Computing a realization of a Gaussian process at a finite set of points is equivalent to computing a realization from a multivariate normal distribution.
Let $X \sim N(\mu, \Sigma)$ be multivariate normal. A realization of $X$ can be computed as
$$
X = \mu + Az
$$
where $A$ is any matrix satisfying $\Sigma = A A^T$ and $z$ is a vector of i.i.d. $N(0, 1)$ variables (see the Wikipedia article on the multivariate normal distribution for more details). To use this formula, we must find such a matrix $A$.
While there are a number of ways of doing so, a particularly good choice is the Cholesky factorization of $\Sigma$, which yields a lower-triangular $A$.
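The recipe above can be sketched in NumPy (a minimal standalone example with made-up $\mu$ and $\Sigma$, not the repository's code):

```python
import numpy as np

rng = np.random.default_rng(0)

# Example mean and covariance (any symmetric positive-definite Sigma works).
mu = np.array([1.0, 2.0, 3.0])
Sigma = np.array([[2.0, 0.5, 0.1],
                  [0.5, 1.0, 0.2],
                  [0.1, 0.2, 1.5]])

# Cholesky factor: lower-triangular A with Sigma = A @ A.T.
A = np.linalg.cholesky(Sigma)

# Draw i.i.d. standard normals and transform: x = mu + A z.
z = rng.standard_normal(mu.shape[0])
x = mu + A @ z
```

Because $A z$ has covariance $A A^T = \Sigma$, each `x` drawn this way is a realization of $N(\mu, \Sigma)$.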
As of 4ec65a0, this Cholesky factor is recomputed each time the prior or posterior is sampled (in line 69, line 74, line 150, and line 163). Similar to the motivation in #118 to use a low-rank inverse update when possible to save time/compute, we should use a low-rank Cholesky update here if we ever want to change the points at which the prior/posterior Gaussian process is sampled.
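For the case of *adding* sample points, the update is a block extension of the existing factor rather than a full refactorization: if $L$ is the Cholesky factor of $\Sigma_{11}$, the factor of the enlarged covariance $\begin{pmatrix}\Sigma_{11} & \Sigma_{12}\\ \Sigma_{21} & \Sigma_{22}\end{pmatrix}$ reuses $L$ unchanged. A sketch of this idea (the function name `extend_cholesky` is hypothetical, not an API in this repository):

```python
import numpy as np

def extend_cholesky(L, Sigma12, Sigma22):
    """Extend the lower Cholesky factor L of Sigma11 to the factor of
    [[Sigma11, Sigma12], [Sigma12.T, Sigma22]] without refactoring Sigma11.

    Uses L21 = Sigma21 @ inv(L).T and L22 = chol(Sigma22 - L21 @ L21.T),
    which costs O(n^2 m + m^3) instead of O((n + m)^3) for a full refactor.
    """
    L21 = np.linalg.solve(L, Sigma12).T            # solves L @ X = Sigma12
    L22 = np.linalg.cholesky(Sigma22 - L21 @ L21.T)
    n, m = L.shape[0], L22.shape[0]
    return np.block([[L, np.zeros((n, m))],
                     [L21, L22]])

# Usage: factor a 3x3 leading block, then extend to the full 5x5 covariance.
rng = np.random.default_rng(1)
M = rng.standard_normal((5, 5))
Sigma = M @ M.T + 5 * np.eye(5)                    # symmetric positive definite
L = np.linalg.cholesky(Sigma[:3, :3])
L_ext = extend_cholesky(L, Sigma[:3, 3:], Sigma[3:, 3:])
```

Since the Cholesky factor of a positive-definite matrix is unique, `L_ext` agrees with `np.linalg.cholesky(Sigma)` while reusing the work already done for the first block.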