Registered Member
Hello,
I'd like to do a matrix-vector multiplication, x = M*v. This evaluation happens millions of times. I'm not interested in certain elements of x (suppose x has size 100 and I only want to know x[2], x[4], x[10], x[11], so some values can be skipped). Typically I can discard 50% of the values.

My current idea is this: before going into the main loop, make a small version M2 by removing the unnecessary rows. Then in the main loop I calculate

    v2 = [select elements from v]
    x2 = M2*v2
    x  = [expand from x2]

(a small sketch of this scheme is given below). M is usually sparse, and I'm not sure how much I gain this way, knowing I have to convert v to v2 and x2 to x every time. Does Eigen support this type of operation? If not, I can safely resort to the option above, knowing I'm doing the optimal thing.

Thanks,
Martin
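A minimal, untested sketch of the row-subsetting part of this idea with dense Eigen types (the index list and sizes are made up for illustration; since only rows of M are dropped here, v is used in full and the v2 step is omitted):

    #include <Eigen/Dense>
    #include <vector>

    int main()
    {
        const int n = 100;
        Eigen::MatrixXd M = Eigen::MatrixXd::Random(n, n);   // placeholder contents
        std::vector<int> wanted = {2, 4, 10, 11};            // entries of x we care about

        // Before the main loop: keep only the rows of M that produce wanted entries of x.
        Eigen::MatrixXd M2(static_cast<int>(wanted.size()), n);
        for (int k = 0; k < static_cast<int>(wanted.size()); ++k)
            M2.row(k) = M.row(wanted[k]);

        // Inside the main loop: small product, then scatter back into a full-size x.
        Eigen::VectorXd v  = Eigen::VectorXd::Random(n);
        Eigen::VectorXd x2 = M2 * v;
        Eigen::VectorXd x  = Eigen::VectorXd::Zero(n);
        for (int k = 0; k < static_cast<int>(wanted.size()); ++k)
            x[wanted[k]] = x2[k];
        return 0;
    }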
Moderator
If M is really sparse, say 10% nonzeros, then you could use a column-major sparse matrix representation together with a sparse vector and let Eigen do the job. Otherwise, since dense matrix-vector products are highly optimized at a very low level, and only 50% of the rows can be skipped, I would simply do the full computation.
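An untested sketch of that route (the sizes and the fill-in of M are placeholders):

    #include <Eigen/Sparse>

    int main()
    {
        const int n = 100;
        Eigen::SparseMatrix<double> M(n, n);   // column-major is Eigen's default sparse storage
        // ... fill M, e.g. with setFromTriplets() ...

        Eigen::SparseVector<double> v(n);      // only a few nonzeros
        v.insert(3)  = 1.5;
        v.insert(42) = -0.7;

        // Sparse matrix * sparse vector: only the columns of M that match
        // a nonzero of v contribute to the product.
        Eigen::SparseVector<double> x = M * v;
        return 0;
    }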
Registered Member
Hey ggael, thanks for the suggestion!
The matrices are usually 10% sparse or less. The problem is that I need row-major matrices for some solving methods, so I don't have them natively in column-major format.

But shouldn't your suggestion mean row-major when I'm only interested in certain elements of x? (Following http://netlib.org/linalg/html_templates ... 0000000000, CSR means row-major, right?) That would mean it can easily skip some row i during the calculation, i.e. skip computing x[i] (see the sketch below). Conversely, when I'm not interested in the values v[j], I'd better use column-major, since then it's easy to skip the columns j that I don't need in the calculation.

I looked at the suggestions in viewtopic.php?f=74&t=98891, which makes some things clearer to me.

Kind regards,
Martin
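An untested illustration of this row-skipping point with a row-major (CSR-like) sparse matrix; the indices and sizes are made up:

    #include <Eigen/Dense>
    #include <Eigen/Sparse>
    #include <vector>

    int main()
    {
        typedef Eigen::SparseMatrix<double, Eigen::RowMajor> SpMatR;
        const int n = 100;
        SpMatR M(n, n);
        // ... fill M, e.g. with setFromTriplets() ...

        Eigen::VectorXd v = Eigen::VectorXd::Random(n);
        Eigen::VectorXd x = Eigen::VectorXd::Zero(n);

        std::vector<int> wanted = {2, 4, 10, 11};   // rows of M / entries of x we care about
        for (std::size_t k = 0; k < wanted.size(); ++k)
        {
            const int i = wanted[k];
            double sum = 0.0;
            // Row-major storage keeps the nonzeros of row i contiguous, so skipping
            // a row simply means never touching its inner vector.
            for (SpMatR::InnerIterator it(M, i); it; ++it)
                sum += it.value() * v[it.col()];
            x[i] = sum;
        }
        return 0;
    }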
Moderator
What do you mean by 10% sparse? 10% of zeros or 10% of nonzeros? I hope it's the latter, otherwise a sparse representation makes no sense here. Then you are right: if you are interested in only some x[.], then row-major is what you need. Currently you would have to write the outer loop yourself, though, and then do, e.g.:
x(j) = M.row(j).dot(v);
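Put together, the loop could look roughly like this (untested sketch, assuming a row-major sparse M and a dense v; the index list is illustrative):

    #include <Eigen/Dense>
    #include <Eigen/Sparse>
    #include <vector>

    int main()
    {
        typedef Eigen::SparseMatrix<double, Eigen::RowMajor> SpMatR;
        const int n = 100;
        SpMatR M(n, n);
        // ... fill M ...
        Eigen::VectorXd v = Eigen::VectorXd::Random(n);
        Eigen::VectorXd x = Eigen::VectorXd::Zero(n);

        std::vector<int> wanted = {2, 4, 10, 11};   // the x entries of interest
        for (std::size_t k = 0; k < wanted.size(); ++k)
        {
            const int j = wanted[k];
            x(j) = M.row(j).dot(v);   // only the rows of interest are visited
        }
        return 0;
    }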
Registered Member
Exactly, they are sparse. So 10% or less are nonzero.
Thanks for the suggestion, that's exactly what I need. If v is also sparse, will this be optimized further to take the sparsity of v into account?

Regards,
Martin
Moderator
If v is a SparseVector, then yes. Again, this is worth it only if v is very sparse.
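A sketch of that combination (untested; assuming a row-major sparse M, with illustrative indices):

    #include <Eigen/Dense>
    #include <Eigen/Sparse>
    #include <vector>

    int main()
    {
        typedef Eigen::SparseMatrix<double, Eigen::RowMajor> SpMatR;
        const int n = 100;
        SpMatR M(n, n);
        // ... fill M ...

        Eigen::SparseVector<double> v(n);   // very sparse right-hand side
        v.insert(7)  = 2.0;
        v.insert(63) = -1.0;

        Eigen::VectorXd x = Eigen::VectorXd::Zero(n);
        std::vector<int> wanted = {2, 4, 10, 11};
        for (std::size_t k = 0; k < wanted.size(); ++k)
        {
            const int j = wanted[k];
            // Sparse-sparse dot product: only overlapping nonzeros are visited.
            x(j) = M.row(j).dot(v);
        }
        return 0;
    }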