Registered Member
Hello,
I have a function that computes a log-likelihood (a double) from two MatrixXd input arguments. The majority of the computation time is spent in a Cholesky decomposition and the subsequent solve of the system. My results are correct, but much slower than expected. Do you see any obvious mistakes in my code that might cost me time, or do you have any other suggestions for how this function could be written more efficiently? Thanks in advance.
Moderator
First of all, make sure you compiled with optimization enabled (-O2) and with a recent compiler.
Then, since Y is a vector, use a VectorXd; the solve and product operations will be much faster. You can also simplify your code a bit: there is no need to copy L, nor its diagonal. For A2 you can write A2 = llt_k.matrixL().diagonal().array().log().sum(); and for A1 you can simply do double A1 = 0.5 * (Y.transpose() * alpha).value();
Registered Member
Thank you for the suggestions.
My code got an approximate 10% speedup just from using .log().sum() for the determinant. Strangely, when using vectors instead of matrices, the solve operations seem slower rather than faster for some reason. I am using Eigen 3.0.5, compiling with gcc 4.4.3 (Ubuntu 4.4.3-4ubuntu5.1). In my original post I forgot to mention that I was already compiling with -march=native -O3.
Moderator
Here is what I meant:
Registered Member
My code works better now; I got there after a while. Yes, using vectors is faster.
I had introduced an unrelated mistake in the initialisation of the matrices/vectors when I changed my code to work with vectors instead of matrices (I found it thanks to your code). Thanks again.