Registered Member
Hi,
I'm going to write an application that makes heavy use of matrix calculations. The matrices themselves are small (4x4), but I have to do *a lot* of matrix-by-matrix multiplications. I'm planning to use CUDA to speed up the process. On the other hand, I want an adapter layer between my app and the underlying technology, so I'm looking for something that can also work without CUDA. The Eigen library seems great, but reading the docs I don't understand the current status of CUDA support in Eigen. Is it fully supported? If not, which parts are backed by CUDA and which are not? I need a function like cublasSgemmBatched() from cuBLAS to process the matrix calculations in parallel; is that possible via some high-level Eigen method? As a last option I could write my own class and copy the data to/from the GPU via a raw data pointer. The data() method of the Matrix class should work for that purpose, shouldn't it? Thanks. Mark
Moderator
Currently Eigen does not call CUDA by itself; basically we only added the required __device__ tokens in the right places (plus some other adjustments) so that one can use Eigen *from* a CUDA kernel. So you can write a generic function and call it either from CUDA or from the CPU. Typical examples are our unit tests in test/cuda_*.
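To illustrate the point above, here is a minimal sketch (not from the thread) of a generic function usable from both host code and a CUDA kernel. It assumes the devel-branch headers; `EIGEN_DEVICE_FUNC` is Eigen's macro that expands to `__host__ __device__` when compiled with nvcc and to nothing otherwise. The kernel name `combineBatch` and the one-product-per-thread layout are illustrative choices, not anything from Eigen's API:

```cuda
// Sketch: a generic 4x4 multiply callable from CPU code or a CUDA kernel.
#include <Eigen/Dense>

EIGEN_DEVICE_FUNC
Eigen::Matrix4f combine(const Eigen::Matrix4f& a, const Eigen::Matrix4f& b) {
  return a * b;  // plain Eigen expression, compiled for host and device
}

// From the CPU:   Eigen::Matrix4f c = combine(a, b);
// From a kernel, one matrix product per thread:
__global__ void combineBatch(const Eigen::Matrix4f* a,
                             const Eigen::Matrix4f* b,
                             Eigen::Matrix4f* c, int n) {
  int i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i < n) c[i] = combine(a[i], b[i]);
}
```

Note that this parallelizes *across* the batch of small matrices, which matches the 4x4-but-many workload described in the question.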
Registered Member
I didn't find the CUDA tests in the latest release; they are only present in the dev branch, so I guess it's not ready for production use. Can you confirm that I can go ahead and write my own methods using the data() method of the Matrix class?
Moderator
Indeed, CUDA support is only in the devel branch. If you only want to access the internal storage of an Eigen::Matrix, then using mat.data() is safe.
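For the record, the raw-pointer approach mentioned in the question could look like the sketch below (an assumption about the asker's plan, not something from this thread). It relies on two facts: mat.data() returns a pointer to contiguous storage, and Eigen's default storage order is column-major, which is also what cuBLAS expects, so no transposition is needed. The function name `roundTrip` is hypothetical:

```cuda
// Sketch: copying an Eigen matrix to the GPU and back via mat.data().
#include <Eigen/Dense>
#include <cuda_runtime.h>

void roundTrip(Eigen::Matrix4f& mat) {
  float* d_mat = nullptr;
  const size_t bytes = sizeof(float) * mat.size();  // 16 floats for a 4x4

  cudaMalloc(&d_mat, bytes);
  cudaMemcpy(d_mat, mat.data(), bytes, cudaMemcpyHostToDevice);

  // ... run kernels or cuBLAS calls (e.g. cublasSgemmBatched) on d_mat ...

  cudaMemcpy(mat.data(), d_mat, bytes, cudaMemcpyDeviceToHost);
  cudaFree(d_mat);
}
```

In real code the return values of the cuda* calls should be checked, and for a large batch one would allocate a single buffer holding all matrices back to back rather than one allocation per matrix.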