
Compute L^{-T} from LDLT instead of LLT

To_N
Registered Member
Hey there,
I'm currently having a problem with the LLT and LDLT decompositions:

I use an algorithm (from viewtopic.php?f=74&t=92393) that uses an LLT decomposition to compute L^{-T}:

Code:
//Compute LLT of G = L*L^T = U^T*U
// => L = U^T
Eigen::LLT<EMatrix,Eigen::Lower> chol(G.cols());
chol.compute(G);
EMatrix L = chol.matrixL();

//U*X = I <=> L^T*X = I => X = L^{-T}
EMatrix X(G.rows(),G.cols());
X.setIdentity();
X = chol.matrixU().solve(X);

This works, and the algorithm continues to work fine with the resulting X.

However, the docs state that LLT is unstable for semi-definite matrices, and indeed for some matrices I cannot compute the LLT. For those matrices, however, I can compute the LDLT decomposition.
My idea was to replace the LLT in the algorithm with an LDLT. However, if I compute the matrix X as written above (using Eigen::LDLT<EMatrix,Eigen::Lower> chol(G.cols()); instead), I get a different X and the algorithm fails in subsequent steps.

My question is: how do I compute the same matrix X as in the LLT case, but using an LDLT decomposition? I understand that the L in the LLT case and the L in the LDLT case are different. But is there a way to compute the L from the LLT case using the L from the LDLT case?
Hope that was not too cryptic ;)

Regards...
dharmon
Registered Member
A = L D L^* = L D^{1/2} D^{1/2} L^* = (L D^{1/2}) (L D^{1/2})^* = "L" "L"^*,

so the "L" that you want is actually L D^{1/2} from the LDLT decomposition.
Be careful with negative and zero values in D, though, in the case of negative- or semi-definite matrices.
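
A minimal sketch (not from the original post, function name made up) of the clamping suggested above, assuming Eigen's LDLT on a semi-definite input where round-off can produce slightly negative pivots:

Code:
#include <Eigen/Dense>
using namespace Eigen;

// Sketch: the entries of D^{1/2} from an LDLT decomposition, with slightly
// negative pivots (round-off on semi-definite input) clamped to zero.
VectorXd safeSqrtD(const LDLT<MatrixXd>& ldlt)
{
    VectorXd d = ldlt.vectorD();
    return d.cwiseMax(0.0).cwiseSqrt();
}

The scaled factor is then L * safeSqrtD(ldlt).asDiagonal(); note that Eigen's LDLT also pivots, so this product reproduces the permuted matrix P*A*P^T rather than A itself, which is what the rest of this thread is about.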
To_N
Registered Member
Hey, thx for the answer!
This totally makes sense.
However, I made a small demo that shows there are still significant differences between the two L's computed this way, even though both fulfill A = L*L^T. It would be really helpful to know how the matrix is decomposed internally in LDLT.
Apparently, Eigen uses some sort of pivoting for the LDLT decomposition, which I also included.

The demo code is as follows

Code:
MatrixXd G(3,3);
G(0,0)=2.1; G(0,1)=1.5; G(0,2)=1.2;
G(1,0)=1.5; G(1,1)=2.2; G(1,2)=1.3;
G(2,0)=1.2; G(2,1)=1.3; G(2,2)=3.1;

//LLT
{
   LLT<MatrixXd, Lower> chol(G.cols());
   chol.compute(G);

   MatrixXd L = chol.matrixL();

   cout << "LLT: " << endl;
   cout << "Error: " << ((L*(L.transpose())) - G).norm() << endl << endl; // |L*L^t - G|
   cout << L << endl << endl;
}

//LDLT
{
   LDLT<MatrixXd, Lower> chol(G.cols());
   chol.compute(G);

   MatrixXd res(chol.rows(), chol.rows());
   res.setIdentity();

   MatrixXd P = chol.transpositionsP() * res;
   MatrixXd L = chol.matrixL();
   MatrixXd D = chol.vectorD().asDiagonal();
   for(int i = 0; i < D.rows(); i++)
      D(i,i) = std::sqrt(D(i, i));

   MatrixXd LS = P.transpose()*L*D;

   cout << "LDLT: " << endl;
   cout << "Error: " << ((LS*(LS.transpose())) - G).norm() << endl << endl; // |LS*LS^t - G|;
   cout << LS << endl;
}


The output is:
LLT:
Error: 4.44089e-016

1.44914 0 0
1.0351 1.06234 0
0.828079 0.416869 1.49683

LDLT:
Error: 7.69185e-016

0.681554 0.774852 1.01739
0.73835 1.28641 0
1.76068 0 0


The error |L*L^t - G| is practically zero in both cases, however L (from LLT) and LS (from LDLT) are totally different.
What I want is for LS to be exactly the same as L.

Would be very nice if someone could tell me what I'm doing wrong here...
Regards
ggael
Moderator
You cannot, because of the permutation. If you need to apply the inverse of L (or, more generally, need the triangular structure of L), then you have to keep it as a product (P^T * L) and, e.g., replace L^-1 * M by L^-1 * P * M in your code.
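As a concrete illustration (not from the original post) of what that replacement looks like with Eigen's LDLT; M is just a placeholder name for whatever the algorithm multiplies by L^-1:

Code:
#include <Eigen/Dense>
using namespace Eigen;

// Sketch: keep the factorization and apply L^-1 * P whenever the algorithm
// needs L^-1. M is a placeholder for the algorithm's right-hand side.
MatrixXd applyInvLWithPermutation(const LDLT<MatrixXd>& chol, const MatrixXd& M)
{
    MatrixXd PM = chol.transpositionsP() * M;   // P * M  (apply the row permutation)
    return chol.matrixL().solve(PM);            // L^-1 * P * M  (triangular solve)
}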
inspirit
Registered Member
Hi,

This seems to be an old topic, but I was wondering if someone could explain how to obtain the triangular structure of the L matrix using LDLT.

I see the note "replace L^-1 * M by L^-1 * P * M in your code", but I cannot see what M is in the provided code.

Code:
// so given
LDLT<MatrixXd> chol; // computed

MatrixXd I = MatrixXd::Identity(chol.rows(), chol.rows());
MatrixXd P = chol.transpositionsP() * I;
MatrixXd L = chol.matrixL();
MatrixXd D = chol.vectorD().asDiagonal();

// How do we obtain L while preserving its triangular structure,
// and, more importantly, its diagonal, which is what I'm mostly interested in for my case?



Please let me know the correct steps
Thanx
ggael
Moderator
You cannot. What my comment says is that when you want to apply L or L^-1 to some matrix or vector M, you need to apply P^T * L or L^-1 * P instead. But you cannot get a true triangular form that takes the permutation P into account, since P^T * L is not triangular. The diagonal is the vector D.
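A small sketch (again not from the post) of applying the non-triangular factor P^T * L and of reading the diagonal directly from vectorD():

Code:
#include <Eigen/Dense>
using namespace Eigen;

// Sketch: apply the factor P^T * L to some matrix M.
MatrixXd applyPtL(const LDLT<MatrixXd>& chol, const MatrixXd& M)
{
    MatrixXd LM = chol.matrixL() * M;                 // L * M
    return chol.transpositionsP().transpose() * LM;   // P^T * (L * M)
}

// The "diagonal" information lives in D:
// VectorXd d = chol.vectorD();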
inspirit
Registered Member
I see, now it's clear.
Regarding the diagonal D: does it have the same properties as the diagonal of L from LLT?
In my computation I use L.diagonal().array().log().sum() while optimizing hyperparameters for Gaussian Processes...
so I was wondering whether D has similar properties to L's diagonal?

Thank you
ggael
Moderator
D is the square of L.diagonal(). But why do you compute LDLT instead of directly computing LLT if that's what you need?
inspirit
Registered Member
The issue I have is that, besides getting the diagonal values of L, I also use the LLT/LDLT decomposition later for solving 1-2 linear systems,
and it seems like in some cases LLT fails to solve during the optimization loop. That's why I wanted to test the LDLT decomposition
to see whether it is more robust.
ggael
Moderator
Oh, I missed the point that you only need L.diagonal().array().log().sum(). That is half the log determinant of the matrix, so you can simply use D.array().log().sum()/2.0.
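
A quick sketch (not from the original reply), reusing the 3x3 matrix from the demo earlier in the thread, showing that the two expressions agree:

Code:
#include <iostream>
#include <Eigen/Dense>
using namespace Eigen;

int main()
{
    MatrixXd G(3,3);
    G << 2.1, 1.5, 1.2,
         1.5, 2.2, 1.3,
         1.2, 1.3, 3.1;

    LLT<MatrixXd>  llt(G);
    LDLT<MatrixXd> ldlt(G);

    MatrixXd L = llt.matrixL();
    // Half the log-determinant of G, from each factorization.
    double fromLLT  = L.diagonal().array().log().sum();
    double fromLDLT = ldlt.vectorD().array().log().sum() / 2.0;

    std::cout << fromLLT << " vs " << fromLDLT << std::endl; // should agree
    return 0;
}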

