parallelization of eigen

haavind
Registered Member

parallelization of eigen

Mon Dec 19, 2011 5:49 pm
What's the current status of multicore development in Eigen?
I'm asking both in terms of planned functionality and release dates.

Specifically, I'm looking for a parallel sparse Cholesky decomposition routine for shared-memory architectures. I'm currently a factor of 2-4 behind my real-time requirements :'(
ggael
Moderator

Re: parallelization of eigen

Tue Dec 20, 2011 7:36 am
Currently only matrix-matrix products are parallelized, which also implies PartialPivLU. Regarding Cholesky, the easiest approach would be to parallelize GeneralMatrixMatrixTriangular, but the best would be to parallelize LLT directly by implementing a tiled algorithm.
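
For reference, a minimal sketch of how that existing parallelization is used in practice; it assumes a recent Eigen 3 release compiled with OpenMP enabled (e.g. -fopenmp), and the matrix sizes are placeholders:

#include <Eigen/Dense>

int main()
{
    Eigen::setNbThreads(4); // choose the number of threads used by the product kernels

    Eigen::MatrixXd A = Eigen::MatrixXd::Random(2000, 2000);
    Eigen::MatrixXd B = Eigen::MatrixXd::Random(2000, 2000);

    Eigen::MatrixXd C = A * B;                   // dense matrix-matrix product runs in parallel
    Eigen::PartialPivLU<Eigen::MatrixXd> lu(A);  // benefits indirectly through the product kernels
    return 0;
}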

What is the average size of your matrices? Can't you parallelize at the level of your algorithm?
haavind
Registered Member

Re: parallelization of eigen

Tue Dec 20, 2011 10:00 am
Right now the typical matrix size is ~3000x3000, but I would like to at least double that without it running any slower.
The general nonzero structure is:
case 1) a main diagonal of bandwidth 12 (in tiled 6x6 units)
case 2) an additional handful (usually <50, but it varies) of scattered 6x6 off-diagonal blocks.

I'm probably going to parallelize the parts where I compute and construct the matrix as well, but as is usually the case, the decomposition itself is a major part of the total runtime.
There is only a single decomposition per iteration, so it cannot be parallelized at the algorithm level.
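
For reference, a rough sketch of assembling a matrix with this kind of structure from triplets (the block positions and values are placeholders, not the real data):

#include <Eigen/Sparse>
#include <vector>

Eigen::SparseMatrix<double> buildMatrix(int nBlocks)
{
    const int bs = 6;              // block size
    const int n  = nBlocks * bs;
    std::vector<Eigen::Triplet<double> > trips;

    // case 1: 6x6 blocks along the main diagonal
    for (int b = 0; b < nBlocks; ++b)
        for (int i = 0; i < bs; ++i)
            for (int j = 0; j < bs; ++j)
                trips.push_back(Eigen::Triplet<double>(b*bs + i, b*bs + j, 1.0));

    // case 2: one scattered off-diagonal 6x6 block at a placeholder position
    if (nBlocks > 3)
        for (int i = 0; i < bs; ++i)
            for (int j = 0; j < bs; ++j)
                trips.push_back(Eigen::Triplet<double>(0*bs + i, 3*bs + j, 0.5));

    Eigen::SparseMatrix<double> A(n, n);
    A.setFromTriplets(trips.begin(), trips.end()); // duplicate entries are summed
    return A;
}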
ggael
Moderator

Re: parallelization of eigen

Tue Dec 20, 2011 4:21 pm
Oh, you are talking about sparse matrices! Parallelizing sparse direct solvers is a lot of work and is unlikely to happen in Eigen. On the other hand, we'll offer interfaces to existing parallel libraries (SuperLU, PaStiX, MUMPS, MKL PARDISO).

Perhaps the ConjugateGradient would be faster if you solve for only one right-hand side. You can also adjust the accuracy if your application does not need full precision. Moreover, parallelizing a conjugate gradient can be done quickly by parallelizing the sparse matrix*vector products, which is trivial for row-major matrices.
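
A minimal sketch of such a first try (assuming the IterativeLinearSolvers module from the development branch; the tolerance value is just an example):

#include <Eigen/Sparse>
#include <Eigen/IterativeLinearSolvers>

typedef Eigen::SparseMatrix<double, Eigen::RowMajor> SpMat;

Eigen::VectorXd solveWithCG(const SpMat& A, const Eigen::VectorXd& b)
{
    Eigen::ConjugateGradient<SpMat> cg;
    cg.setTolerance(1e-6);   // relax the accuracy if full precision is not needed
    cg.compute(A);           // only the lower triangular part is used by default
    return cg.solve(b);
}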
ggael
Moderator

Re: parallelization of eigen

Tue Dec 20, 2011 4:35 pm
btw, if you're brave enough to try the ConjugateGradient solver, I could send you a block-diagonal preconditioner which should be ideal in your case (6x6 block diagonal). It is not in Eigen yet because I haven't found it useful so far, but in your case...


Also, it would be interesting to have an example of your matrices for benchmarking purposes on various kinds of real-world sparse matrices. You can export one like this:

#include <unsupported/Eigen/SparseExtra>
...
// write the sparse matrix in Matrix Market format
Eigen::saveMarket(mat, "path/to/filename.txt");

and send a zip/gzip/whatever of filename.txt to me.

thanks a lot.
haavind
Registered Member

Re: parallelization of eigen

Wed Dec 21, 2011 10:28 pm
I've tried both a self-implemented conjugate gradient and Jacobi iteration, but my matrices aren't always that well conditioned, so the iterative convergence seems to be too slow.
Also, the numerical robustness of a proper LDLt factorization is somewhat desirable in my case. If you have a suitable preconditioner, though, I'll gladly give it a try. It would be nice to have the option of using both direct and iterative methods.

Are the interfaces to the external parallel libraries implemented today, or are they on the horizon? Are any of them bundled with the Eigen code, or do I have to download and install them myself?

How about band matrices? Any plans for parallel solvers there?

I will certainly send you a benchmark sample once I start work again sometime over the new year.
ggael
Moderator

Re: parallelization of eigen

Thu Dec 22, 2011 6:30 pm
check this page:

http://eigen.tuxfamily.org/dox-devel/Tu ... ectSolvers

To try our CG, just replace "SimplicialLDLt" with "ConjugateGradient" and #include <Eigen/IterativeLinearSolvers>; that's all for a first try :) Then, of course, you can start playing with the different parameters and check the number of iterations that were required...
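
In code, that first try looks roughly like this (matrix A and right-hand side b are assumed to already exist; the direct-solver line is left in as a comment for comparison):

#include <Eigen/Sparse>
#include <Eigen/IterativeLinearSolvers>
#include <iostream>

void solveExample(const Eigen::SparseMatrix<double>& A, const Eigen::VectorXd& b)
{
    // Eigen::SimplicialLDLt<Eigen::SparseMatrix<double> > solver;  // direct solver
    Eigen::ConjugateGradient<Eigen::SparseMatrix<double> > solver;  // iterative solver
    solver.compute(A);
    Eigen::VectorXd x = solver.solve(b);
    std::cout << "#iterations: " << solver.iterations()
              << ", estimated error: " << solver.error() << std::endl;
}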

Non-multithreaded SuperLU is already supported. The MT version should not be difficult to add. Support for PaStiX and MUMPS should be available within a month or two.

