Registered Member
Hello,
I have the following function
where the matrix M can be very sparse. Following Gael's suggestion (https://forum.kde.org/viewtopic.php?f=74&t=141964&p=384873#p381830), I changed the code to the following to accelerate the sparse matrix-vector multiplication:
where 'matrixType' can be 'MatrixXd' or 'SparseMatrix<double,RowMajor>' depending on the sparseness of M. Now I create a wrapper for using the above function:
My questions are:
1. Is the above approach good? Any advice for improvements?
2. What is the best value of 'alpha' to choose?
3. I saw in the documentation that the sparseView function can take two arguments: S = D.sparseView(reference, epsilon). What are 'reference' and 'epsilon'? Could you please give an example of using them?
Thank you so much in advance for your help!
Moderator
Mostly good, but M.nonZeros() with M dense returns the size of the matrix. Recall that what we call "non-zeros" are stored coefficients. So you should rather do:
int nn = (M.array()!=0).count();
Then, regarding alpha: obviously you need to do a little benchmarking to tune it, but my guess is that 0.7 is way too large. I would rather start with alpha = 0.1 and then experiment to fine-tune it. It probably also depends a lot on the size of M: if M is large (1k x 1k), you'll get more cache misses and thus need to switch to sparse sooner, but if M is small (10x10) there is no point in switching to sparse at all. Also, dense products can take advantage of AVX/FMA, so don't forget to enable them (-march=native), and of course make sure to benchmark in "release" mode (-O3). And please report your findings here; I'm curious to know about the outcome.
Registered Member
Thank you very much, Gael! I'll get back with the benchmark results in a few days (I have something urgent to deal with first).
I have again learned something new from you.