
Shared memory parallelization in Eigen

teoHPC
Registered Member
Hi,

I'm new to Eigen; I've been using it for the past two weeks. I am aware that Eigen provides parallelization for a number of algorithms, as stated here. However, sparse matrix-vector products, for example, are not among them. Is there a mechanism that allows one to introduce parallelism into Eigen? How difficult would it be to implement a general parallelization mechanism that works on more complex, lazily evaluated expressions, e.g. expressions like
Code:
mat = mat * vec + vec.transpose() * vec
where vec and mat are sparse? Alternatively, how would Eigen's current parallelization mechanism handle the example above if vec and mat were dense and OpenMP were enabled, so that the matrix-matrix (matrix-vector) products are parallelized?

Cheers,
Teo
ggael
Moderator
For sparse*vec, this is done in the devel branch (or 3.3-beta1); see the respective doc: https://eigen.tuxfamily.org/dox-devel/T ... ading.html
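A minimal sketch of how that multithreaded sparse*vector path can be driven (assuming the devel branch or 3.3-beta1, a row-major sparse matrix, and compilation with an OpenMP flag such as -fopenmp; the sizes and the setIdentity() fill are only placeholders):
Code:
#include <Eigen/Sparse>
#include <Eigen/Dense>

int main()
{
  Eigen::setNbThreads(4);  // upper bound on the number of threads Eigen may use
  Eigen::SparseMatrix<double, Eigen::RowMajor> A(1000, 1000);
  A.setIdentity();         // placeholder fill; a real matrix would use setFromTriplets()
  Eigen::VectorXd x = Eigen::VectorXd::Random(1000);
  Eigen::VectorXd y = A * x;  // sparse*dense-vector product, multithreaded in 3.3
  return 0;
}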

Your example will fail: you are adding a vector to a matrix. Anyway, matrix products are not lazy operations. For efficiency reasons, they are computed into temporaries or directly into the destination matrix when possible. For instance, in the dense world, using Eigen 3.3, the following:

m5.noalias() = 3*m1*m2 - m3*m4;

will be evaluated as:

m5.setZero();
gemm(3,m1, m2, m5);
gemm(-1,m3, m4, m5);

where gemm stands for an internal routine similar to BLAS's gemm routines.
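For reference, a self-contained version of that snippet (the sizes and the use of MatrixXd are only illustrative); with Eigen built with OpenMP, each gemm-like call runs multithreaded:
Code:
#include <Eigen/Dense>
using Eigen::MatrixXd;

int main()
{
  const int n = 256;
  MatrixXd m1 = MatrixXd::Random(n, n), m2 = MatrixXd::Random(n, n);
  MatrixXd m3 = MatrixXd::Random(n, n), m4 = MatrixXd::Random(n, n);
  MatrixXd m5(n, n);
  // noalias() promises that m5 does not overlap the operands, so each
  // product is accumulated directly into m5 by a gemm-like kernel
  // instead of going through an extra temporary.
  m5.noalias() = 3 * m1 * m2 - m3 * m4;
  return 0;
}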

Dense matrix-vector products are not parallelized yet.
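In the meantime, one workaround is to split such a product by hand with OpenMP and Eigen's block expressions. A rough sketch (this is not Eigen's internal mechanism; parallel_gemv is just a hypothetical helper and assumes compilation with OpenMP):
Code:
#include <algorithm>
#include <Eigen/Dense>
#include <omp.h>

// Split the rows of A into one chunk per thread and let each thread
// compute its slice of y = A * x.
Eigen::VectorXd parallel_gemv(const Eigen::MatrixXd& A, const Eigen::VectorXd& x)
{
  Eigen::VectorXd y(A.rows());
  const Eigen::Index nthreads = omp_get_max_threads();
  const Eigen::Index chunk = (A.rows() + nthreads - 1) / nthreads;
  #pragma omp parallel for
  for (Eigen::Index t = 0; t < nthreads; ++t)
  {
    const Eigen::Index start = t * chunk;
    if (start >= A.rows()) continue;
    const Eigen::Index len = std::min(chunk, A.rows() - start);
    // Each thread writes to a disjoint segment of y, so no synchronization is needed.
    y.segment(start, len).noalias() = A.middleRows(start, len) * x;
  }
  return y;
}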

