This forum has been archived. All content is frozen. Please use KDE Discuss instead.

Improve Sparse Matrix Inverse performance

ivanchang (Registered Member):
Hi all,

I am computing the inverse of a sparse matrix. I tried two approaches.

1) Store the matrix A as a SparseMatrix and use SparseLU to solve Ax = Identity; then x is the inverse of A. The execution time is:

With compiler optimization -O3:

For a 2000 x 2000 sparse matrix A with 70,000 non-zero elements, the running time is around 9 s.

2) Store it as a dense matrix (MatrixXd) with the same elements and use the inverse() function.

Running time is about 2 s.

Here is what I am confused about and want to figure out:
1) I also tested 500 x 500 and 1000 x 1000, and the second method is faster than the first in every case. Since the matrix is sparse, I am confused why the first approach is slower.

2) What algorithm does the inverse() function use, and can it be multithreaded so that it runs faster?

thanks
ggael (Moderator):
First, are you sure you need to explicitly compute the inverse? It is better to keep an LU factorization of A and call lu.solve(y) whenever you want to apply A^-1 to some vector or matrix y. This will be faster and numerically much more accurate.

This is even worse if A is sparse: even though A is sparse, its inverse is usually rather dense, making sparse algebra useless. That is why sparse solvers (including SparseLU) compute a fill-in-reducing ordering to keep the number of non-zeros in the L and U factors as small as possible. But once you compute A^-1 explicitly, the benefit of that fill-in-reducing ordering is lost. So really, if you want to make use of sparse algebra, don't compute A^-1 explicitly.
