Registered Member
Dear Eigen Team and Users,
I'm new here and have a newbie question. I have implemented a version of Block Conjugate Gradient to solve AX = B, where A is n x n, X is n x p, and B is n x p. I then compared the wall time of this block version (with p = 1) against classic CG using plain VectorXd. Result: the block version took more time. Specifically, the operation A*V, with V defined as MatrixXd V(n,1), took more time than A*v, with v defined as VectorXd v(n). For example, with a sparse matrix of size n = 14800, the kernel A*V took 5.4 seconds while A*v took 3.8 seconds; both solve the problem in 2400 iterations. My questions: is this an expected result? If yes, why? Can I do something to bring the two timings closer together? Thanks in advance.
Best regards,
Pedro Torres
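For reference, here is a minimal, self-contained sketch of the kind of timing comparison described above; it is not the poster's actual code. The tridiagonal matrix, the timing loop, and the use of noalias() are all assumptions made only to make the example runnable, and the real structure of the poster's sparse matrix is unknown.

[code]
// Sketch: time sparse*VectorXd against sparse*MatrixXd(n,1) in isolation.
#include <Eigen/Dense>
#include <Eigen/Sparse>
#include <chrono>
#include <iostream>
#include <vector>

int main()
{
    const int n = 14800;     // size mentioned in the post
    const int iters = 2400;  // iteration count mentioned in the post

    // Placeholder sparse matrix A: a simple tridiagonal pattern.
    std::vector<Eigen::Triplet<double>> trip;
    trip.reserve(3 * n);
    for (int i = 0; i < n; ++i) {
        trip.emplace_back(i, i, 4.0);
        if (i + 1 < n) {
            trip.emplace_back(i, i + 1, -1.0);
            trip.emplace_back(i + 1, i, -1.0);
        }
    }
    Eigen::SparseMatrix<double> A(n, n);
    A.setFromTriplets(trip.begin(), trip.end());

    Eigen::VectorXd v = Eigen::VectorXd::Random(n);
    Eigen::MatrixXd V = Eigen::MatrixXd::Random(n, 1);  // p = 1 block

    Eigen::VectorXd w(n);
    Eigen::MatrixXd W(n, 1);

    auto t0 = std::chrono::steady_clock::now();
    for (int k = 0; k < iters; ++k)
        w.noalias() = A * v;                             // sparse * vector kernel
    auto t1 = std::chrono::steady_clock::now();
    for (int k = 0; k < iters; ++k)
        W.noalias() = A * V;                             // sparse * (n x 1) matrix kernel
    auto t2 = std::chrono::steady_clock::now();

    std::cout << "A*v: " << std::chrono::duration<double>(t1 - t0).count() << " s\n"
              << "A*V: " << std::chrono::duration<double>(t2 - t1).count() << " s\n";
}
[/code]

Timing the two products alone, outside the full CG iteration, would at least show whether the gap comes from the matrix product itself or from other parts of the block algorithm.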
Moderator
Can you show a piece of code? There should not be much difference between a VectorXd and a column-major MatrixXd for CG.
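A minimal sketch of one way to phrase the block kernel so that, for p = 1, it touches memory exactly like the VectorXd kernel: each column of a column-major MatrixXd is contiguous, so the product can be written column by column. This is only an illustration under that assumption, not the poster's routine.

[code]
// Sketch: apply A to each column of a column-major block V.
#include <Eigen/Dense>
#include <Eigen/Sparse>

void apply_A(const Eigen::SparseMatrix<double>& A,
             const Eigen::MatrixXd& V,   // n x p block of directions
             Eigen::MatrixXd& W)         // n x p result, same layout
{
    // For p = 1 this reduces to the same access pattern as A * v
    // with a VectorXd v in classic CG.
    for (int j = 0; j < V.cols(); ++j)
        W.col(j).noalias() = A * V.col(j);
}
[/code]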