This forum has been archived. All content is frozen. Please use KDE Discuss instead.

[SOLVED] Inversion of sparse matrix

Seb
Registered Member

benchmark

Fri Feb 13, 2009 10:22 am
I wonder if one could improve the situation, though.

Out of curiosity, I tested the inversion of a FEM mass matrix using SuperLU and the above algorithm:

dim(mass) = 5766*5766
nonzeros(mass) = 132496
nonzeros(mass_inverse) = 25482948 (ok: probably a bad example for a sparse inverse)

Factorization of mass requires 00:00:07.567788
Factorization+Creating inverse requires 00:03:02.422070

It seems there is still a lot of room for improvements... :(
ggael
Moderator

RE: benchmark

Fri Feb 13, 2009 10:34 am
Seb wrote: Out of curiosity, I tested the inversion of a FEM mass matrix using SuperLU and the above algorithm [...] It seems there is still a lot of room for improvements... :(


Yes indeed. The current solve() function (Ax=b) only works when x and b are dense, whereas to compute the inverse we have to solve AX=I, and I is just about the sparsest matrix there is. Soon I'll add a pure sparse triangular solver, and then I believe computing the inverse will take roughly the same amount of time as the factorization.

Now, in your example the inverse matrix is almost completely dense: 70% nonzeros! This is exactly why it is a very bad idea to explicitly compute the inverse. It is always better to use solve(). This also explains why no library provides an inverse() function for sparse matrices. Actually, Moritz, I'd be very interested to know your use case. Do you really need the explicit inverse matrix?
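[Editorial sketch, not part of the original post.] The point above is easy to demonstrate on a toy problem, in plain Python rather than Eigen (the `thomas_solve` helper below is illustrative, not a library function): the 1D Laplacian matrix is tridiagonal with only 3n-2 nonzeros, yet its inverse is completely dense, while a single solve costs only O(n).

```python
def thomas_solve(sub, diag, sup, rhs):
    """Solve a tridiagonal system in O(n) (Thomas algorithm)."""
    n = len(diag)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0] = sup[0] / diag[0]
    dp[0] = rhs[0] / diag[0]
    for i in range(1, n):
        m = diag[i] - sub[i - 1] * cp[i - 1]
        if i < n - 1:
            cp[i] = sup[i] / m
        dp[i] = (rhs[i] - sub[i - 1] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

n = 6
sub = [-1.0] * (n - 1)   # 1D Laplacian stencil [-1, 2, -1]
diag = [2.0] * n
sup = [-1.0] * (n - 1)

# Build the explicit inverse column by column: one solve A x = e_j per column.
inv_cols = []
for j in range(n):
    e = [0.0] * n
    e[j] = 1.0
    inv_cols.append(thomas_solve(sub, diag, sup, e))

nnz_A = n + 2 * (n - 1)
nnz_inv = sum(abs(v) > 1e-12 for col in inv_cols for v in col)
print(nnz_A, nnz_inv)   # A has 16 nonzeros; its inverse is fully dense (36)
```

This mirrors the benchmark above: the mass matrix has ~132k nonzeros but its inverse has ~25M, so any workflow that can stay with solve() should.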
Seb
Registered Member

RE: benchmark

Fri Feb 13, 2009 10:50 am
Yes, that's the reason one should never touch the data being used by the factorizer. Sometimes I have to use the inverse in explicit form as well, and I always have to deal with sparse matrices whose direct inverses are rather dense. In our old code we used to eliminate all entries below a certain threshold. The problem is that we need scale invariance: sometimes one eliminates rather important values when the inverse has only small entries.
Would it help to estimate the largest eigenvalue or largest element of the inverse before actually creating it? By doing so we could find a relative bound below which all entries can be dropped when creating the inverse.
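[Editorial sketch, not part of the original post.] The largest eigenvalue of A^(-1) can indeed be estimated without ever forming the inverse: power iteration on A^(-1) needs only repeated solves with A (inverse iteration). For a symmetric positive definite A this eigenvalue also bounds every entry magnitude of A^(-1), so epsilon times it gives a scale-invariant drop tolerance. A sketch in plain Python on a toy matrix; the dense `solve` helper is a stand-in for whatever sparse factorization is actually used:

```python
def solve(A, b):
    """Dense Gaussian elimination with partial pivoting (stand-in for a sparse solver)."""
    n = len(A)
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]
        for r in range(k + 1, n):
            f = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= f * M[k][c]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        x[k] = (M[k][n] - sum(M[k][c] * x[c] for c in range(k + 1, n))) / M[k][k]
    return x

def largest_inverse_eigenvalue(A, iters=60):
    """Power iteration on A^{-1} via repeated solves with A (inverse iteration)."""
    n = len(A)
    x = [1.0] + [0.5] * (n - 1)
    lam = 0.0
    for _ in range(iters):
        y = solve(A, x)                 # y = A^{-1} x, no explicit inverse
        lam = max(abs(v) for v in y)    # infinity-norm growth factor
        x = [v / lam for v in y]
    return lam

A = [[2.0, -1.0], [-1.0, 2.0]]   # eigenvalues 1 and 3, so A^{-1} has 1 and 1/3
est = largest_inverse_eigenvalue(A)
print(est)   # converges to 1.0
```

Since the factorization of A already exists, each iteration costs only one triangular solve pair, so the estimate is cheap compared to building the inverse itself.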
Seb
Registered Member

RE: benchmark

Fri Feb 13, 2009 11:12 am
Update:

Code: Select all
inverse.startFill();
for (int j=0; j<size; ++j)
{
   // invCol = j-th column of the inverse, obtained from solve()
   double max = 0;
   for (int i=0; i<size; ++i)
      max = ( Eigen::ei_abs(invCol.coeff(i)) > max ) ? Eigen::ei_abs(invCol.coeff(i)) : max;
   max *= epsilon;

   for (int i=0; i<size; ++i)
      if (Eigen::ei_abs(invCol.coeff(i)) > max)
         inverse.fill(i,j) = invCol.coeff(i);
}
inverse.endFill();


This code reduces the number of entries to 23% using tolerance=1e-8 in my use case.

I forgot to add:
When we export an inverse matrix, we do it using the "skyline" storage concept. Notably, this only works well if the input matrix has a banded form as well, but that can be achieved by a column/row reordering during creation.

Regarding an additional use case:
The main use case for exporting such a matrix is solving for vector x given vector y, where matrices A and B are sparse:
A*x=B*y
For that, one should either be able to directly store the matrix A^(-1)*B, or at least be able to perform coordinate transformations like
matrix_K*x -> matrix_K2*y = [matrix_K*A^(-1)*B]*y
or even
x'*matrix_L*x -> y'*matrix_L2*y
where I need to store the operators [matrix_K*A^(-1)*B] or [B'*A^(-T)*matrix_K*A^(-1)*B]
(all matrices are sparse)
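[Editorial sketch, not part of the original post.] The operator [matrix_K*A^(-1)*B] can be assembled without ever forming A^(-1): solve A*Z = B column by column, then multiply by K. A sketch in plain Python with toy dense matrices (the `solve`/`matmul` helpers stand in for SuperLU and the sparse products):

```python
def solve(A, b):
    """Dense Gaussian elimination with partial pivoting (stand-in for SuperLU)."""
    n = len(A)
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]
        for r in range(k + 1, n):
            f = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= f * M[k][c]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        x[k] = (M[k][n] - sum(M[k][c] * x[c] for c in range(k + 1, n))) / M[k][k]
    return x

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[3.0, 1.0], [1.0, 2.0]]
B = [[1.0, 0.0], [0.0, 2.0]]
K = [[2.0, 0.0], [1.0, 1.0]]
n = len(A)

# Z = A^{-1} B: one solve per column of B -- no explicit inverse is formed.
Zcols = [solve(A, [B[i][j] for i in range(n)]) for j in range(n)]
Z = [[Zcols[j][i] for j in range(n)] for i in range(n)]

K2 = matmul(K, Z)   # the transformed operator K * A^{-1} * B
print(K2)           # approximately [[0.8, -0.8], [0.2, 0.8]]
```

Each column of B costs one solve against the existing factorization, and the relative-threshold pruning from the code above can be applied to each column of Z before the final product.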

Last edited by Seb on Fri Feb 13, 2009 11:26 am, edited 1 time in total.
moritz
Registered Member

RE: benchmark

Fri Feb 13, 2009 1:32 pm
ggael wrote: Actually, Moritz, I'd be very interested to know your use case.


I'm calculating electronic transport properties in mesoscopic systems.
That means I have to construct a Hamilton operator (sparse, self-adjoint, about 6 non-zero diagonals per triangle), add a "self energy" term (also sparse) for the boundary condition, invert that matrix to obtain the Green's function, then calculate products between that Green's function and other sparse matrices, and finally the trace over those products.

As formulas:

Code: Select all
self_energy = sum_i s_i   // (s_i are sparse matrices, i=1..8)
green = inverse(hamiltonian + self_energy)
T_ij = trace[ s_i * green * s_j * adj(green)]


I also need the diagonal elements of the products (s_i * green) and (s_j * adj(green)) for some other stuff.

ggael wrote: Do you really need the explicit inverse matrix?


That's a good question, which I previously just answered with "yes" because I know of no way to optimize it away. But maybe there's a faster way of calculating the matrices (s_i * green) and (s_i * adj(green)) directly. Not calculating the full inverse would make it a bit harder to debug, but I could keep two versions around for comparison.

(I have to add that linear algebra really isn't my strength; I know just as much as I need for day-to-day physicists' business, so maybe I should investigate more in that direction. There is also a possible approach of exploiting some physical properties of my system to iteratively calculate the Green's function.)
shadowperi
Registered Member

RE: benchmark

Sun Apr 05, 2009 5:37 pm
Seb wrote: [...] Would it help to estimate the largest eigenvalue or largest element of the inverse before actually creating it? By doing so we could find a relative bound under which all nonzeros can be ignored when creating the inverse?

I'm quite interested in how to estimate the largest elements of a sparse inverse. Can anyone give me a hand? Many thanks~
moritz
Registered Member
moritz wrote:Thank you very much everyone, it now works as I want it. The execution time dropped from 11 hours to about half an hour, and I'm rather happy with that improvement :-)


Actually it turned out that the half hour was mostly spent in the LU decomposition, and that choosing a more suitable ordering method (ColApproxMinimumDegree as opposed to NaturalOrdering) dropped that to a second.

I also found a way to avoid calculating the full inverse, and instead use the solve() method with different right-hand sides (which have fewer non-zero rows than the unit matrix, so a good speed improvement here).

So all in all I'm now from 11 hours to a few seconds, thanks to eigen2 and superlu!
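[Editorial sketch, not part of the original post.] The trick moritz describes can be illustrated in plain Python (toy dense matrices; `solve` stands in for the factorized SuperLU solver). For real matrices adj(green) is just the transpose, and by the cyclic property of the trace, trace(s_i G s_j G^T) = trace((G^T s_i)(G s_j)), so one columnwise solve A X = s_j and one A^T W = s_i replace the full inverse:

```python
def solve(A, b):
    """Dense Gaussian elimination with partial pivoting (stand-in for SuperLU)."""
    n = len(A)
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]
        for r in range(k + 1, n):
            f = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= f * M[k][c]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        x[k] = (M[k][n] - sum(M[k][c] * x[c] for c in range(k + 1, n))) / M[k][k]
    return x

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(r) for r in zip(*A)]

def trace(A):
    return sum(A[i][i] for i in range(len(A)))

A  = [[4.0, 1.0, 0.0], [0.0, 3.0, 1.0], [1.0, 0.0, 2.0]]  # non-symmetric, invertible
si = [[0.0, 2.0, 0.0], [0.0, 0.0, 0.0], [1.0, 0.0, 0.0]]  # sparse coupling matrices
sj = [[0.0, 0.0, 1.0], [0.0, 0.0, 0.0], [0.0, 3.0, 0.0]]
n = len(A)

def solve_mat(M, B):
    """Columnwise solve M X = B (each column is one call to the solver)."""
    cols = [solve(M, [B[i][j] for i in range(n)]) for j in range(n)]
    return [[cols[j][i] for j in range(n)] for i in range(n)]

# Naive: form G = A^{-1} explicitly, then the full product.
I = [[float(i == j) for j in range(n)] for i in range(n)]
G = solve_mat(A, I)
T_naive = trace(matmul(matmul(si, G), matmul(sj, transpose(G))))

# Better: solves with sparse right-hand sides, never forming G.
X = solve_mat(A, sj)              # X = G sj
W = solve_mat(transpose(A), si)   # W = G^T si  (real A: adj = transpose)
T_fast = trace(matmul(W, X))
print(abs(T_naive - T_fast) < 1e-9)   # True: both routes agree
```

In the real application the right-hand sides s_i and s_j have few non-zero rows, so only those columns of the solves actually carry work, which is where the speedup comes from.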
sqlguru
Registered Member
moritz wrote: I also found a way to avoid calculating the full inverse, and instead use the solve() method with different right-hand sides (which have fewer non-zero rows than the unit matrix, so a good speed improvement here).

Can you post your before and after code? Thanks.

