Title of article:
Use of the Gibbs sampler to invert large, possibly sparse, positive definite matrices
Abstract:
A problem frequently encountered in statistics is that of computing some of the elements (e.g., the diagonal elements) of the inverse of a large, possibly sparse, positive definite matrix. Let W = {w_ij} represent an m × m positive definite matrix, let V = {v_ij} = W^{-1}, and let x = {x_i} represent an m × 1 random vector whose distribution is multivariate normal with null mean vector and variance-covariance matrix V. The Gibbs sampler can be used to generate a sequence of m × 1 vectors x^(1), x^(2), … such that, for sufficiently large values of k, x^(k) can be regarded as a sample value of x. Letting x_1^(k), …, x_m^(k) represent the elements of x^(k), each x_i^(k) is, conditionally on the current values of the other elements, a draw from a univariate normal distribution with mean −w_ii^{-1} Σ_{j≠i} w_ij x_j and variance w_ii^{-1}. The sample values of x can be used to obtain a Monte Carlo estimate of the expectation E(x_i x_j) and hence [since v_ij = E(x_i x_j)] of v_ij. A possibly more efficient alternative is to use the sample values to evaluate the outer expectation in the expression E[E(x_i x_j | x_T)], where x_T is a subvector of x that excludes x_i and/or x_j. Sparsity (or various other kinds of structure) in W can be exploited to computational advantage in generating the draws x^(1), x^(2), …. Numerical results indicate that if W is well-conditioned, then the statistical dependence between the draws is relatively small, reasonably accurate Monte Carlo estimates can be obtained from a relatively small number of draws, and the conditioning of x_i x_j on x_T leads to significant improvements in accuracy.
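The scheme described in the abstract can be sketched as follows. This is a minimal illustration, not the article's implementation: function and variable names are hypothetical, and the Rao-Blackwellized estimator shown conditions x_i^2 on all the other elements (the simplest choice of x_T) rather than on a general subvector.

```python
import numpy as np

def gibbs_inverse_diag(W, n_draws=5000, burn_in=500, seed=0):
    """Estimate the diagonal of V = W^{-1} by Gibbs sampling from N(0, V).

    Each coordinate update draws x_i | x_{-i} ~ N(mu_i, 1/w_ii), where
    mu_i = -(1/w_ii) * sum_{j != i} w_ij x_j.  (Illustrative sketch; the
    function name and parameters are not from the article.)
    """
    rng = np.random.default_rng(seed)
    m = W.shape[0]
    d = np.diag(W)
    x = np.zeros(m)
    naive = np.zeros(m)       # running sum of x_i^2 draws
    rao_black = np.zeros(m)   # running sum of E(x_i^2 | x_{-i}) = mu_i^2 + 1/w_ii
    for k in range(burn_in + n_draws):
        for i in range(m):
            # Conditional mean of x_i given the current values of the others.
            mu = -(W[i] @ x - W[i, i] * x[i]) / W[i, i]
            x[i] = mu + rng.standard_normal() / np.sqrt(W[i, i])
        if k >= burn_in:
            naive += x**2
            # Conditioning on the other elements before averaging.
            mu_all = -(W @ x - d * x) / d
            rao_black += mu_all**2 + 1.0 / d
    return naive / n_draws, rao_black / n_draws
```

For a sparse W, the inner update touches only the nonzero entries of row i, which is where the computational advantage mentioned in the abstract comes from; a sparse-matrix row lookup would replace the dense dot product above.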