Registered Member
|
Hello - I am using SparseMatrix and CholmodSimplicialLLT to solve a large sparse system with Eigen. I've installed cholmod 2.0.1, in which I've enabled support for CUBLAS. I know that CUBLAS support is active in my cholmod installation because my executable links to libcublas.
My question is: if I am using the Cholmod support in Eigen, and I've enabled CUDA support in Cholmod, will all my Cholmod*L(D)LT solvers use my GPU? Should I be seeing a significant speedup vs. the built-in Eigen sparse solvers? My GPU is reasonably fast -- an Nvidia GTX 460. It's hard to notice any speedup right now, because my problem isn't all that big (a 5000x5000 matrix), and there are a number of other bugs in my program which prevent me from trying this on larger problems... |
Moderator
|
The speedup you can expect depends on the matrix size and the number of nonzeros per column/row. For a 5k x 5k matrix, you clearly should not expect any speedup from using the GPU.
Now, to make Cholmod actually use your GPU you must use the supernodal variant, and you may also need to enable an option in the cholmod_common structure (CholmodSupernodalLLT has a method giving access to it); refer to the CHOLMOD documentation. Please keep us informed about your experiments with Cholmod and CUBLAS. Sounds interesting. |
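To illustrate the supernodal route described above, here is a minimal sketch using Eigen's CholmodSupport module. It assumes Eigen and CHOLMOD are installed; the `useGPU` flag shown in the comment is an assumption (it exists only in later SuiteSparse/CHOLMOD releases under that name, so it is left commented out; check your version's cholmod.h).

```cpp
// Sketch: solving a sparse SPD system with Eigen's CholmodSupernodalLLT.
// Only the supernodal variant can dispatch dense kernels to CUBLAS.
#include <vector>
#include <Eigen/Sparse>
#include <Eigen/CholmodSupport>

int main() {
  using SpMat = Eigen::SparseMatrix<double>;

  // Build a small SPD test matrix (diagonally dominant tridiagonal).
  const int n = 5;
  std::vector<Eigen::Triplet<double>> trips;
  for (int i = 0; i < n; ++i) {
    trips.emplace_back(i, i, 4.0);
    if (i + 1 < n) {
      trips.emplace_back(i, i + 1, -1.0);
      trips.emplace_back(i + 1, i, -1.0);
    }
  }
  SpMat A(n, n);
  A.setFromTriplets(trips.begin(), trips.end());

  Eigen::CholmodSupernodalLLT<SpMat> solver;
  // solver.cholmod() exposes the underlying cholmod_common, where
  // GPU-related options live. Hypothetical, version-dependent field:
  // solver.cholmod().useGPU = 1;
  solver.compute(A);

  Eigen::VectorXd b = Eigen::VectorXd::Ones(n);
  Eigen::VectorXd x = solver.solve(b);
  return solver.info() == Eigen::Success ? 0 : 1;
}
```

Note that for a tiny system like this the GPU would never be engaged anyway; CHOLMOD only offloads the large dense supernodal blocks that arise in big factorizations. |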