
Some questions regarding Eigen sparse matrix

netw0rkf10w (Registered Member)
Hello,

I have the following function
Code:
void function(const MatrixXd &M)
{
    // A lot of M*v computation where v changes every iteration.
}

where the matrix M can be very sparse. Following Gael's suggestion (https://forum.kde.org/viewtopic.php?f=74&t=141964&p=384873#p381830), I changed the code to the following to accelerate the sparse matrix-vector multiplication:

Code:
template <class matrixType>
void function(const matrixType &M)
{
    // A lot of M*v computation where v changes every iteration.
}

where 'matrixType' can be 'MatrixXd' or 'SparseMatrix<double,RowMajor>' depending on the sparseness of M.

Now I create a wrapper for using the above function:
Code:
void wrapper(const MatrixXd &M)
{
    // Minimum proportion of non-zero elements for M to be considered 'dense'
    double alpha = 0.7;

    int nn = M.nonZeros();
    if(nn > alpha*M.rows()*M.cols()){
        function<MatrixXd>(M);
    } else {
        SparseMatrix<double,RowMajor> P = M.sparseView();
        function< SparseMatrix<double,RowMajor> >(P);
    }
}

My questions are:
1. Is the above approach good? Any advice for improvements?
2. What is the best value of 'alpha' to choose?
3. I saw in the documentation that the sparseView function can take two arguments: S = D.sparseView(reference, epsilon). What are 'reference' and 'epsilon'? Could you please give an example of using them?

Thank you so much in advance for your help!!
ggael (Moderator)
Mostly good, but M.nonZeros() with M dense returns the size of the matrix. Recall that what we call "non-zeros" are the stored coefficients. So you should rather do:

int nn = (M.array()!=0).count();
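
For instance, here is a minimal self-contained sketch of the difference (just an illustration, with an arbitrary 4x4 matrix):

Code:
#include <iostream>
#include <Eigen/Dense>
using namespace Eigen;

int main()
{
    // 4x4 dense matrix holding only 3 non-zero values
    MatrixXd M = MatrixXd::Zero(4,4);
    M(0,0) = 1.0; M(1,2) = 2.0; M(3,3) = 3.0;

    std::cout << M.nonZeros() << "\n";            // 16: every coefficient of a dense matrix is "stored"
    std::cout << (M.array()!=0).count() << "\n";  // 3: entries that are actually non-zero
}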

Then, regarding alpha, you obviously need to do a little benchmarking to tune it, but my guess is that 0.7 is way too large. I would rather start with alpha=0.1 and then do some experiments to fine-tune it. It probably also depends a lot on the size of M: if M is large (1k x 1k), you'll get more cache misses and thus need to switch to sparse sooner, but if M is small (10x10) there is no point in switching to sparse at all. Also, dense products can take advantage of AVX/FMA, so don't forget to enable them (-march=native) and of course, make sure to bench in "release" mode (-O3).
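
As a starting point for such a benchmark, here is a rough self-contained sketch (the matrix size, the fill ratios and the normalization of v are placeholder assumptions; compile with -O3 -march=native):

Code:
#include <iostream>
#include <cstdlib>
#include <chrono>
#include <Eigen/Dense>
#include <Eigen/Sparse>
using namespace Eigen;

// Time many M*v products for a given matrix type (dense or sparse).
template <class MatrixType>
double benchProducts(const MatrixType &M, int repetitions = 100)
{
    VectorXd v = VectorXd::Random(M.cols()), r(M.rows());
    auto t0 = std::chrono::high_resolution_clock::now();
    for(int i = 0; i < repetitions; ++i) {
        r = M * v;
        v = r.normalized();   // v changes every iteration, as in the real code
    }
    auto t1 = std::chrono::high_resolution_clock::now();
    return std::chrono::duration<double>(t1 - t0).count();
}

int main()
{
    const int n = 1000;
    const double fills[] = {0.01, 0.05, 0.1, 0.3, 0.5};
    for(double fill : fills) {
        // Build a dense matrix with roughly 'fill' proportion of non-zero entries.
        MatrixXd D = MatrixXd::Zero(n, n);
        for(int j = 0; j < n; ++j)
            for(int i = 0; i < n; ++i)
                if(double(std::rand()) / RAND_MAX < fill)
                    D(i, j) = 1.0 + double(std::rand()) / RAND_MAX;

        SparseMatrix<double,RowMajor> S = D.sparseView();
        std::cout << "fill=" << fill
                  << "  dense: "  << benchProducts(D) << " s"
                  << "  sparse: " << benchProducts(S) << " s\n";
    }
}

Running it in release mode for a few different sizes n should show where the dense/sparse crossover lies, and hence which alpha to pick.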

And please report your findings here, I'm curious to know about the outcome.
netw0rkf10w (Registered Member)
Thank you very much, Gael! I'll get back with the benchmark results in a few days (I have something urgent to deal with over the next few days).

ggael wrote: Also, dense products can take advantage of AVX/FMA, so don't forget to enable them (-march=native)

I have again learned something new from you ;)

