This forum has been archived. All content is frozen. Please use KDE Discuss instead.

Inverting a large matrix using Cholesky decomposition?

trahn
Registered Member
Hello, I am using the Eigen library to solve a least squares problem (i.e., Nx = b). I am using the LDLT solver and it works great :). Big jump in speed. My system has around 50000 observations and 1000 parameters being estimated. My question is how I can use Eigen to obtain the parameter covariance information. I know how to do it in theory, but it involves computing the inverse of the normal matrix (N). For example:

N = A' * P * A

where:
A' is the transpose of my design matrix (1000 by 50000)
P is my observation covariance matrix
A is my design matrix (50000 by 1000)

After getting my estimated parameters, x, I also want to extract the covariance information associated with the estimated parameters Cx but:

Cx=inv(N).

My problem is then how to obtain the inverse of N using the Eigen library. I know that inv(N) = inv(L*) * inv(L), where L is the lower triangular Cholesky factor of N. I think I can get L using Eigen, but then I would still need to invert L. This is OK since we know that N must be positive semi-definite, being a covariance matrix.

From everything I have read, I see that I should probably avoid explicit inversion if possible. So if anyone has a suggestion on how to avoid it but still get Cx, that would also be helpful.

Thanks for your time!
ggael
Moderator
Just solve with the identity matrix as the right hand side:

A Cx = I

Note that the result is expected to be pretty dense. Since your matrix is pretty small, you can use dense matrices for I and Cx. Note that in general it is not recommended to explicitly compute the inverse of a matrix, especially a large one. Instead, if the inverse is only used for multiplication purposes, use ldlt.solve().
trahn
Thanks ggael for your reply,

However, I am not sure I understand. Since A is an n x m matrix and Cx is an m x m matrix, while the identity matrix must be square, the dimensions do not seem to agree. Do you have a reference where I can read up on your suggestion, to get a better understanding?

Trah
trahn
Whoops :< I feel dumb now. N * inv(N) = I ... sorry for wasting your time!!
ggael
yeah, I used A instead of N, sorry for the confusion.
zoharl
Registered Member
ggael wrote: Just solve with the identity matrix as the right hand side:

A Cx = I

Note that the result is expected to be pretty dense. Since your matrix is pretty small, you can use dense matrices for I and Cx. Note that in general it is not recommended to explicitly compute the inverse of a matrix, especially a large one. Instead, if the inverse is only used for multiplication purposes, use ldlt.solve().


In my app I actually noticed that calculating the inverse in a precomputation step, and later multiplying by it, is much faster than using solve. Does it make sense or is it just my specific matrix?
ggael
That's surprising; usually the inverse is much denser than the factors (thanks to reordering). Maybe your matrix indeed has a special structure.
zoharl
I'll try to check these properties, but still, maybe the matrix multiplication has a better implementation that takes better advantage of OpenMP, for example.
ggael
Oh, do you apply it to dense objects or sparse matrices? In the latter case that could make sense.
zoharl
1. I'm using a dense matrix. How many non-zeros (in percent) should my matrix have in order for me to start working with sparse matrices, such that the solver would be faster?

2. On a second look, my matrix isn't necessarily positive definite, so I actually use ColPivHouseholderQR, but I don't mind using any other solver if it would make the .solve() any faster (than the multiplication).

3. What statistics should I check that would tell me whether my matrix is indeed 'special' and would benefit more from multiplication than from solve()?
ggael
Oh, when I read "large matrix" I immediately thought about sparse matrices. For dense objects, the only issue with explicitly computing the inverse is accuracy, especially if the matrix is nearly singular. You might give PartialPivLU a try: it should be faster than QR, and it offers performance similar to a matrix product (slightly slower, but you also avoid computing the inverse) with higher numerical accuracy.
zoharl
You are right, my bad.
I'll try the partial solver out.
Thanks.

