This forum has been archived. All content is frozen. Please use KDE Discuss instead.

Possibly Inefficient Chol Decomposition

kalakouentin
Hello,
I have a function that computes a log-likelihood (a double), taking two MatrixXd matrices as input arguments.
The majority of the computational time is spent on a Cholesky decomposition and the subsequent solve of the system.
My results are correct, but the function is much slower than expected. Do you see any obvious mistakes in my code that might be costing time, or do you have any other suggestions for how this function could be written more efficiently? Thanks in advance.

Code: Select all
double LogLikelihood( MatrixXd Y, MatrixXd K ) {
  int N = Y.rows(); // Y is N-by-1
  // K is N-by-N

  LLT<MatrixXd> llt_K(K); // compute the Cholesky decomposition of K

  double logLikelihood = 0.0;
  if (0 == llt_K.info()) { // the Cholesky decomposition exists

    MatrixXd L = llt_K.matrixL();
    MatrixXd alpha = llt_K.solve(Y);
    // equivalent but computationally far worse: MatrixXd alpha = K.inverse() * Y;

    // First term: (.5)*Y'*inv(K)*Y
    ArrayXd A1 = ( 0.5 * Y.transpose() * alpha ).array();
    // Second term: (.5)*log(det(K))
    double A2 = 0.0;
    ArrayXd Q = L.diagonal().array();
    for (int e = 0; e < N; e++) {
      A2 = log(Q(e)) + A2;
    }
    logLikelihood = -( double(A1(0)) + A2 + (N * .5) * log(2 * M_PI) );
    logLikelihood = -logLikelihood;
  }
  else {
    // just a nonsensical sentinel; note that 12^12 would be bitwise XOR (== 0) in C++
    logLikelihood = 1e12;
  }
  return logLikelihood;
}
ggael
First of all, make sure you compiled with optimization enabled (-O2) and with a recent compiler.

Then, since Y is a vector, use a VectorXd; the solve and product operations will be much faster.

You can also simplify your code a bit; there is no need to copy L or its diagonal:

A2 = llt_K.matrixL().diagonal().array().log().sum();

For A1, you can simply write:

double A1 = 0.5 * (Y.transpose() * alpha).value();
kalakouentin
Thank you for the suggestions.
My code got an approximate 10% speedup just by using ".log().sum()" when it came to the determinant.

Strangely, when using vectors instead of matrices the solve operations seem slower rather than faster, for some reason.

I am using Eigen 3.0.5, compiling with gcc version 4.4.3 (Ubuntu 4.4.3-4ubuntu5.1).
In my original post I forgot to mention I was using "-march=native -O3" as compile arguments.
ggael
Here is what I meant:

Code: Select all
double LogLikelihood(const VectorXd& Y, const MatrixXd& K) {
  int N = Y.size();

  LLT<MatrixXd> llt_K(K);

  double logLikelihood = 0.;
  if (llt_K.info() == Eigen::Success) { // the Cholesky decomposition exists

    VectorXd alpha = llt_K.solve(Y);

    double A1 = 0.5 * (Y.transpose() * alpha).value();
    double A2 = llt_K.matrixLLT().diagonal().array().log().sum();

    logLikelihood = A1 + A2 + (N * .5) * log(2 * M_PI);
  }
  else {
    // just a nonsensical sentinel (12^12 would be bitwise XOR in C++)
    logLikelihood = 1e12;
  }
  return logLikelihood;
}
kalakouentin
My code works better now; I got there after a while. Yes, using vectors is faster. :)
I had introduced an unrelated mistake in the initialisation of the matrices/vectors when I changed my code to work with vectors instead of matrices (I found it thanks to your code).

Thanks again.
