This forum has been archived. All content is frozen. Please use KDE Discuss instead.

Banded LDLT and more

duetosymmetry
Registered Member
Posts
18
Karma
0

Banded LDLT and more

Tue May 18, 2010 8:19 pm
Dear experts,
Please excuse my ignorance, I am just learning scientific computing.

I am going to be implementing a PDE evolution scheme and would like the code to be clean while still being fast, so I sought out matrix libraries using expression templates, which led me to eigen. It looks promising!

Fundamentally, the core of the evolution scheme is solving a sparse system which happens to be symmetric and positive definite. Seeing that there is a SparseLDLT in eigen made my day.

However, SparseLDLT might actually be overkill for me, since my matrix is actually a band matrix with a very low bandwidth (the diagonal plus two sub- and super- diagonals). I know offhand that Cholesky on a band matrix gives a factorization with the same bandwidth, and backsubstitution (solving) will be very fast. This problem (banded LDLT) seems to be sufficiently specific that it isn't in eigen or other matrix libraries using expression templates.

Here is the question:
Should I expect the SparseLDLT, when acting on a SparseMatrix which happens to be banded, to emit a banded factorization?

I also noted that the documentation says that complex<double> will not be vectorized because of some technicalities I don't know about. How much of a performance hit should I expect this to be? I also noted that real*complex multiplies are supposed to incur a penalty since the real matrix is converted to complex in order to multiply. How bad is this penalty?

Thanks for your time! I look forward to trying out eigen.
Leo
FMD
Registered Member
Posts
25
Karma
0

Re: Banded LDLT and more

Tue May 18, 2010 8:33 pm
duetosymmetry wrote: Dear experts,
...
I know offhand that Cholesky on a band matrix gives a factorization with the same bandwidth
...
Should I expect the SparseLDLT, when acting on a SparseMatrix which happens to be banded, to emit a banded factorization?


The LDLT factorization is unique, so yes: you already know the result will be banded. A second question might be: is the result marked as banded, i.e. does Eigen know about the banded nature of the result? In a sense, yes, because the result is stored as a sparse matrix.

Cheers Frank

P.S.
I claim your disclaimer for my own: "I am just learning scientific computing."
duetosymmetry
Registered Member
Posts
18
Karma
0

Re: Banded LDLT and more

Tue May 18, 2010 10:10 pm
Frank,

Thanks for pointing this out! I began to think of uniqueness after posting (doh!).

Can anybody comment on why SparseLDLT has solveInPlace but no solve, and the (unrelated) issues regarding complex<double> ?

Cheers
Leo
ggael
Moderator
Posts
3447
Karma
19

Re: Banded LDLT and more

Wed Jun 02, 2010 12:00 pm
Yes, using SparseLDLT is the best you can currently do, because it will exploit the sparsity of the matrix. On the other hand, it cannot exploit the banded structure, which would allow further optimizations. Actually, we already have a banded storage class, and adapting the dense Cholesky factorization to banded storage should be quite easy (for someone who knows Eigen well).

Regarding complex<double> itself, I would expect roughly a factor of 2 once we support vectorization for it. For complex * real, the speedup should be even larger, but that is trickier. Just be patient, or give it a shot yourself!! Also, as far as I know, no library efficiently supports complex * real...
duetosymmetry
Registered Member
Posts
18
Karma
0

Re: Banded LDLT and more

Wed Jun 02, 2010 7:56 pm
Thanks for all the help, Gael.
I have been testing SparseLDLT a bit and it seems to work, though I haven't looked at the speed at all yet. As far as this goes, would it be faster to use, for example, the GNU extension _Complex? Is this discouraged because of the std::complex<> container? I know nothing about these things.

I found one (unrelated) surprise so far, and I hope somebody can clear it up for me. I wanted to verify the decomposition coming from SparseLDLT, so I printed the products (matrixL() and vectorD()) and multiplied L * D.asDiagonal() * L.transpose(). This was incorrect.

The issue is that the unit diagonal does not seem to be stored for matrixL(). Somehow the solveInPlace() functionality seems to work. Does this have to do with the line in solveInPlace:

Code:
m_matrix.template triangularView<UnitLower>().solveInPlace(b);


Is it the UnitLower flag which makes all the difference? Is it possible to set this flag on m_matrix directly?

Finally, there is a note in solveInPlace() saying that the current transpose() should be replaced with adjoint() (presumably for complex decompositions), but that it fails to compile with adjoint(). I found that it compiled just fine with adjoint(), so maybe this should be changed.

Cheers!
Leo

P.S. I have never before seen the syntax object.template ... when is this used?
ggael
Moderator
Posts
3447
Karma
19

Re: Banded LDLT and more

Thu Jun 03, 2010 10:05 am
Indeed, it is not recommended at all to use the _Complex GNU extension in C++!

Also I've just fixed the sparse LDLT solver for complexes which was broken.

