Registered Member
I wonder if the situation could be improved, though.
Out of curiosity, I tested the inversion of a FEM mass matrix using SuperLU and the above algorithm:
dim(mass) = 5766*5766
nonzeros(mass) = 132496
nonzeros(mass_inverse) = 25482948 (ok: probably a bad example for a sparse inverse)
Factorization of mass requires 00:00:07.567788
Factorization + creating the inverse requires 00:03:02.422070
It seems there is still a lot of room for improvement...
Moderator
Yes indeed, the current solve() function (Ax=b) only works with dense x and b, while to compute the inverse we have to solve AX=I, and "I" is just about the sparsest matrix there is. Soon I'll add a pure sparse triangular solver, and then I believe the computation of the inverse will take roughly the same amount of time as the factorization.
Now, in your example the inverse matrix is almost completely dense, about 70% non-zeros! This is exactly why it is a very bad idea to explicitly compute the inverse; it is always better to use solve(). This also explains why no library provides an inverse() function for sparse matrices.
Actually, Moritz, I'd be very interested to know your use case. Do you really need the explicit inverse matrix?
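For illustration, here is a minimal sketch of the "factorize once, then solve()" pattern. It uses the current Eigen::SparseLU API rather than the Eigen 2 SuperLU backend discussed in this thread, so take the exact class names as an assumption:

#include <Eigen/Sparse>
#include <iostream>

int main()
{
    typedef Eigen::SparseMatrix<double> SpMat;

    // A: sparse system matrix (filled properly in real code), b: one right-hand side.
    const int n = 100;
    SpMat A(n, n);
    A.setIdentity();                      // placeholder fill
    Eigen::VectorXd b = Eigen::VectorXd::Ones(n);

    Eigen::SparseLU<SpMat> lu;            // default fill-reducing ordering
    lu.analyzePattern(A);                 // symbolic step / reordering
    lu.factorize(A);                      // numeric factorization, done once
    if (lu.info() != Eigen::Success) return 1;

    // Reuse the factorization for every right-hand side instead of ever
    // forming A^-1 explicitly.
    Eigen::VectorXd x = lu.solve(b);
    std::cout << x.head(5).transpose() << std::endl;
    return 0;
}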
Registered Member
Yes, that's the reason one should never touch the data being used by the factorizer. Sometimes I have to use the inverse in explicit form as well, and I always have to deal with sparse matrices whose direct inverses are rather dense. In our old code we used to eliminate all entries below a certain threshold. The problem is that we need scale invariance - sometimes one eliminates rather important values when the inverse has only small entries.
Would it help to estimate the largest eigenvalue or the largest element of the inverse before actually creating it? That way we could find a relative bound below which non-zeros can be ignored when creating the inverse.
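(Sketching that idea, under the assumption that the matrix is symmetric - like the mass matrix above - and that a factorization `lu` already exists: a few power-iteration steps on A^-1, i.e. repeated solves, estimate its largest eigenvalue, which for a symmetric matrix equals ||A^-1||_2 and therefore bounds every single entry of the inverse. The function name and iteration count are made up for the sketch.)

#include <Eigen/Sparse>

// Estimate ||A^-1||_2 by power iteration on A^-1, reusing an existing
// factorization. Every entry of A^-1 is bounded by this value, so
// tolerance * estimateInverseNorm(lu, n) gives a scale-invariant drop bound.
double estimateInverseNorm(const Eigen::SparseLU<Eigen::SparseMatrix<double> >& lu,
                           int n, int iterations = 20)
{
    Eigen::VectorXd v = Eigen::VectorXd::Random(n).normalized();
    double norm = 0.0;
    for (int k = 0; k < iterations; ++k)
    {
        Eigen::VectorXd w = lu.solve(v);   // one application of A^-1
        norm = w.norm();                   // converges towards the dominant eigenvalue of A^-1
        v = w / norm;
    }
    return norm;
}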
Registered Member
Update:
This code reduces the number of entries to 23% using tolerance=1e-8 in my use case. I forgot to add: when we export an inverse matrix, we do it using the "skyline" storage concept. Notably, this only works well if the input matrix has a banded form as well, but that can be achieved by a column/row reordering during creation.
Regarding an additional use case: the main use case for exporting such a matrix is solving for vector x given vector y, where the matrices A and B are sparse: A*x = B*y. For that one should either be able to directly store the matrix A^(-1)*B, or at least be able to perform coordinate transformations like
matrix_K*x -> matrix_K2*y = [matrix_K*A^(-1)*B]*y
or even
x'*matrix_L*x -> y'*matrix_L2*y
where I need to store the operators [matrix_K*A^(-1)*B] or [B'*A^(-T)*matrix_K*A^(-1)*B] (all matrices are sparse).
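(I don't have the code you mean, but here is roughly what such a scale-invariant thresholding could look like with the current Eigen API; the function and variable names are mine, not yours: build A^(-1)*B through solve(), take the largest magnitude of the result as reference, and keep only the entries above reference*tolerance.)

#include <Eigen/Sparse>

// Form C = A^-1 * B from an existing factorization and drop every entry whose
// magnitude is below tolerance times the largest magnitude in the result, so
// the cut-off scales with the data instead of being an absolute number.
Eigen::SparseMatrix<double>
pruneInverseTimesB(const Eigen::SparseLU<Eigen::SparseMatrix<double> >& lu,
                   const Eigen::SparseMatrix<double>& B,
                   double tolerance)
{
    Eigen::MatrixXd denseB(B);                 // densify the right-hand sides
    Eigen::MatrixXd X = lu.solve(denseB);      // X = A^-1 * B, one solve per column
    double reference = X.cwiseAbs().maxCoeff();
    // sparseView keeps the coefficients with |x| > reference * tolerance
    Eigen::SparseMatrix<double> C = X.sparseView(reference, tolerance);
    return C;
}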
Registered Member
I'm calculating electronic transport properties in mesoscopic systems. That means I have to construct a Hamiltonian (sparse, self-adjoint, about 6 non-zero diagonals per triangle), add a "self energy" term (also sparse) for the boundary conditions, invert that matrix to obtain the Green's function, and then calculate products between that Green's function and other sparse matrices, and finally the trace over those products.
I also need the diagonal elements of (s_i * green) and (s_j * adj(green)) for some other things.
That's a good question, which I previously just answered with "yes", because I know of no way to optimize it away. But maybe there is a faster way of calculating the matrices (s_i * green) and (s_i * adj(green)) directly. Not calculating the full inverse would make it a bit harder to debug, but I could keep two versions around for comparison. (I have to add that linear algebra really isn't my strength; I know just as much as I need for day-to-day physicists' business - maybe I should investigate more in that direction. There is also a possible approach of exploiting some physical properties of my system to iteratively calculate the Green's function.)
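(To illustrate that direction with a sketch - real-valued for brevity, and with names like A and s_i that are mine, not from your code: the trace of s_i * green only needs the columns green(:,k) for which row k of s_i is non-empty, and each of those columns is a single solve against the factorized matrix whose inverse is the Green's function.)

#include <Eigen/Sparse>

// Accumulate trace(s_i * G) without ever forming G = A^-1 completely: only
// the columns of G that meet a non-empty row of s_i are computed, each by a
// single solve against the factorized matrix A.
double traceOfProduct(const Eigen::SparseLU<Eigen::SparseMatrix<double> >& lu,
                      const Eigen::SparseMatrix<double, Eigen::RowMajor>& s_i)
{
    double trace = 0.0;
    for (int k = 0; k < s_i.outerSize(); ++k)
    {
        Eigen::SparseMatrix<double, Eigen::RowMajor>::InnerIterator it(s_i, k);
        if (!it) continue;                           // row k of s_i is empty
        Eigen::VectorXd e_k = Eigen::VectorXd::Unit(s_i.cols(), k);
        Eigen::VectorXd g_k = lu.solve(e_k);         // column k of G = A^-1
        for (; it; ++it)
            trace += it.value() * g_k(it.index());   // sum over m of s_i(k,m) * G(m,k)
    }
    return trace;
}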
Registered Member
I'm quite interested in how to estimate the largest elements of a sparse inverse. Can anyone give me a hand? Many thanks~
Registered Member
Actually it turned out that the half hour was mostly spent in the LU decomposition, and that choosing a more suitable ordering method (ColApproxMinimumDegree as opposed to NaturalOrdering) dropped that to a second. I also found a way to avoid calculating the full inverse and to instead use the solve() method with different right-hand sides (which have fewer non-zero rows than the unit matrix, so a good speed improvement there). So all in all I'm down from 11 hours to a few seconds, thanks to eigen2 and SuperLU!
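(Not the poster's actual code, but for later readers the two changes map roughly onto today's Eigen API as sketched below: a fill-reducing ordering instead of the natural one, and a right-hand-side block that contains only the columns that are actually needed instead of all n columns of the identity.)

#include <Eigen/Sparse>

typedef Eigen::SparseMatrix<double> SpMat;

// Factorize A with a fill-reducing (COLAMD) ordering and solve only for the
// right-hand sides that are actually needed.
Eigen::MatrixXd solveForNeededColumns(const SpMat& A, const Eigen::MatrixXd& neededRhs)
{
    // With Eigen::NaturalOrdering<int> instead, the factors fill in heavily
    // and the factorization can be orders of magnitude slower on FEM-like matrices.
    Eigen::SparseLU<SpMat, Eigen::COLAMDOrdering<int> > lu;
    lu.analyzePattern(A);
    lu.factorize(A);
    return lu.solve(neededRhs);            // one forward/backward solve pair per column
}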
Registered Member
Can you post your before and after code? Thanks.