Registered Member
|
Hi.
I added explicit conversion to scalar, employed coeff() and coeffRef(), took care of updating the blocks in splitOffTwoRows and pushDownZero, rewrote the block operations you mentioned, and changed the indentation to two spaces. There is a slight performance decrease (from about 30.5 seconds to 31.5 seconds for solving one thousand 100x100 problems on my laptop). I'll take a look at unit tests. Best |
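A minimal sketch of the coeff()/coeffRef() change mentioned above, on a plain MatrixXd: in Eigen these accessors skip the bounds assertions that operator() performs in debug builds, which matters in tight inner loops. The helper name is illustrative only.

```cpp
#include <Eigen/Dense>

// Illustrative only: read with coeff() and write with coeffRef() instead of
// operator(), avoiding the per-access bounds assertions of debug builds.
void scaleRow(Eigen::MatrixXd& m, Eigen::Index i, double a)
{
  for (Eigen::Index j = 0; j < m.cols(); ++j)
    m.coeffRef(i, j) = a * m.coeff(i, j);
}
```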
Registered Member
|
I don't see how HessenbergDecomposition would help... There is a way to speed up the Hessenberg-Triangular reduction by working with blocks instead of individual matrix elements, but I don't understand it well yet.
I added a unit test. |
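For reference, a sketch of the element-wise (unblocked) Hessenberg-Triangular reduction being discussed: B is first made triangular by a QR factorization, then the sub-Hessenberg entries of A are chased out with Givens rotations applied to both matrices. This is an illustration with an assumed function name, not Eigen's implementation and not the blocked variant.

```cpp
#include <Eigen/Dense>
#include <Eigen/Jacobi>

// Illustrative element-wise Hessenberg-Triangular reduction: A becomes upper
// Hessenberg and B upper triangular under orthogonal transformations applied
// to both matrices.
void hessenbergTriangular(Eigen::MatrixXd& A, Eigen::MatrixXd& B)
{
  using Eigen::Index;
  const Index n = A.rows();

  // Step 1: QR-factor B so it becomes triangular, and apply Q^T to A as well.
  Eigen::HouseholderQR<Eigen::MatrixXd> qr(B);
  Eigen::MatrixXd Q = qr.householderQ();
  A = Q.transpose() * A;
  B = qr.matrixQR().triangularView<Eigen::Upper>();

  // Step 2: annihilate A's entries below the sub-diagonal, column by column.
  for (Index j = 0; j + 2 < n; ++j)
    for (Index i = n - 1; i >= j + 2; --i)
    {
      Eigen::JacobiRotation<double> G;

      // A Givens rotation of rows (i-1, i) zeroes A(i, j) ...
      G.makeGivens(A(i - 1, j), A(i, j));
      A.applyOnTheLeft(i - 1, i, G.adjoint());
      B.applyOnTheLeft(i - 1, i, G.adjoint());

      // ... but fills in B(i, i-1); a rotation of columns (i, i-1) restores
      // B's triangular form without disturbing the zeros already created in A.
      G.makeGivens(B(i, i), B(i, i - 1));
      A.applyOnTheRight(i, i - 1, G);
      B.applyOnTheRight(i, i - 1, G);
    }
}
```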
Moderator
|
hi,
I've pulled your changes into the main repository and granted you write access: https://bitbucket.org/eigen/eigen/chang ... 8aaaba572/ I also took the liberty to update the license (to MPL2) and optimized the Hessenberg reduction a bit so that Givens rotations are not applied to zero entries. Regarding performance, I get results similar to Lapack's. Not surprising, since Lapack implements the same algorithm from the same sources. |
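A small sketch of the kind of guard that optimization implies (hypothetical helper, not the actual code): when the entry to annihilate is already zero, the Givens rotation is the identity and the row updates can be skipped entirely.

```cpp
#include <Eigen/Dense>
#include <Eigen/Jacobi>

// Hypothetical helper: zero A(i, j) with a rotation of rows (i-1, i) applied to
// both A and B, skipping the work when the entry is already zero.
void zeroEntry(Eigen::MatrixXd& A, Eigen::MatrixXd& B, Eigen::Index i, Eigen::Index j)
{
  if (A.coeff(i, j) == 0.0)
    return;  // rotation would be the identity: applying it only wastes flops
  Eigen::JacobiRotation<double> G;
  G.makeGivens(A.coeff(i - 1, j), A.coeff(i, j));
  A.applyOnTheLeft(i - 1, i, G.adjoint());
  B.applyOnTheLeft(i - 1, i, G.adjoint());
}
```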
Moderator
|
Btw, in your repo there was a changeset left hanging (29561ed775b7 "Added random shifts, changed the way 0 on sub-diagonal of A is found, minor"). You can see what I mean here:
https://bitbucket.org/Khumarahn/eigen-qz/changesets Since it is just before the "rewrite", I guess that was on purpose? |
Moderator
|
Hello,
there seems to be an issue when the input matrices are SPD. For instance, when adding:
to the unit test, I get errors. The problem seems to be that complex eigenvalues are found while they are clearly all real. Here is the sequence of the matrix m_S at each iteration:
Since there are not many comments in the code, it's pretty hard to investigate more deeply. |
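The snippet added to the unit test and the printed m_S sequence are not quoted above. As a purely hypothetical illustration (not the actual test code), SPD inputs for such a check could be generated and verified along these lines, assuming the class is exposed as Eigen::RealQZ; for an SPD pencil all generalized eigenvalues are real, so S should come out with a zero sub-diagonal (no 2x2 blocks).

```cpp
#include <Eigen/Dense>
#include <Eigen/Eigenvalues>
#include <iostream>

int main()
{
  const int n = 10;
  Eigen::MatrixXd M = Eigen::MatrixXd::Random(n, n);
  Eigen::MatrixXd N = Eigen::MatrixXd::Random(n, n);
  // SPD inputs: M*M^T plus a diagonal shift to keep them well conditioned.
  Eigen::MatrixXd A = M * M.transpose() + n * Eigen::MatrixXd::Identity(n, n);
  Eigen::MatrixXd B = N * N.transpose() + n * Eigen::MatrixXd::Identity(n, n);

  Eigen::RealQZ<Eigen::MatrixXd> qz(A, B);
  // All eigenvalues are real, so the quasi-triangular S should have no 2x2
  // diagonal blocks, i.e. its sub-diagonal should be (numerically) zero.
  std::cout << "max |subdiag(S)| = "
            << qz.matrixS().diagonal(-1).cwiseAbs().maxCoeff() << "\n";
}
```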
Registered Member
|
Hello.
I will investigate the problem; it looks very interesting. I was going to add more comments, but have had no time lately. As for commit 29561ed775b7, it was a mistake and can be ignored. I would remove it if I knew how. |
Moderator
|
one more thing: I don't really understand how the shift vector (x, y, z) is computed: it seems you simply used the diagonal of the 2x2 block as its eigenvalues. That does not look correct; is there any hidden trick?
|
Registered Member
|
No, I compute x, y, z honestly. I take the 2x2 trailing blocks SS and TT of S and T respectively, and compute the eigenvalues of SS TT^{-1}. I found a good way to compute x, y, z in "An algorithm for generalized eigenvalue problems" by C. B. Moler and G. W. Stewart.
x, y, z are the non-zero elements of the first column of (M - lambda1 I)(M - lambda2 I), where M = S T^{-1} (or, more precisely, the same product formed from the diagonal blocks of S and T where we currently work), and lambda1 and lambda2 are the eigenvalues of SS TT^{-1}. |
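A naive sketch of that computation, assuming S is upper Hessenberg and T upper triangular on the active block with at least three rows; the function name is illustrative, and production code would avoid forming M = S T^{-1} explicitly.

```cpp
#include <Eigen/Dense>

// Hypothetical sketch (not the actual implementation): the double-shift vector
// (x, y, z), with S upper Hessenberg, T upper triangular, n = S.rows() >= 3.
Eigen::Vector3d shiftVector(const Eigen::MatrixXd& S, const Eigen::MatrixXd& T)
{
  const Eigen::Index n = S.rows();

  // lambda1, lambda2 are the eigenvalues of SS TT^{-1} (trailing 2x2 blocks);
  // only their real sum and product are needed, so complex conjugate shifts
  // never have to be formed explicitly.
  Eigen::Matrix2d SS = S.bottomRightCorner<2, 2>();
  Eigen::Matrix2d TT = T.bottomRightCorner<2, 2>();
  Eigen::Matrix2d MM = SS * TT.inverse();
  const double sum  = MM.trace();        // lambda1 + lambda2
  const double prod = MM.determinant();  // lambda1 * lambda2

  // M = S T^{-1} is upper Hessenberg (formed here only for clarity).
  Eigen::MatrixXd Tinv = T.triangularView<Eigen::Upper>()
                          .solve(Eigen::MatrixXd::Identity(n, n));
  Eigen::MatrixXd M = S * Tinv;

  // First column of (M - lambda1 I)(M - lambda2 I) = M^2 - sum*M + prod*I.
  // Since M is Hessenberg, only its first three entries are non-zero: x, y, z.
  Eigen::VectorXd c = M * M.col(0) - sum * M.col(0) + prod * Eigen::VectorXd::Unit(n, 0);
  return c.head<3>();
}
```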
Registered Member
|
I found the problem. I appreciate your attention, Gael:
https://bitbucket.org/eigen/eigen/chang ... d5d4c038f7 https://bitbucket.org/eigen/eigen/chang ... 70d48db6d6 |
Registered Member
|
Still, there are symmetric matrices A and B for which the algorithm finds complex eigenvalues:
The errors at the bottom are |QSZ - A|/|A|, |QTZ - B|/|B|, |QQ* - I|, |ZZ* - I|. |
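The quoted residuals can be reproduced along these lines; a minimal sketch assuming the decomposition is exposed as Eigen::RealQZ with the convention A = Q S Z, B = Q T Z and orthogonal Q, Z.

```cpp
#include <Eigen/Dense>
#include <Eigen/Eigenvalues>
#include <iostream>

int main()
{
  const int n = 50;
  Eigen::MatrixXd M = Eigen::MatrixXd::Random(n, n), N = Eigen::MatrixXd::Random(n, n);
  Eigen::MatrixXd A = M + M.transpose();   // symmetric test input
  Eigen::MatrixXd B = N + N.transpose();   // symmetric test input

  Eigen::RealQZ<Eigen::MatrixXd> qz(A, B);
  const Eigen::MatrixXd Q = qz.matrixQ(), Z = qz.matrixZ();
  const Eigen::MatrixXd I = Eigen::MatrixXd::Identity(n, n);

  // The four residuals quoted in the post above.
  std::cout << "|QSZ - A|/|A| = " << (Q * qz.matrixS() * Z - A).norm() / A.norm() << "\n"
            << "|QTZ - B|/|B| = " << (Q * qz.matrixT() * Z - B).norm() / B.norm() << "\n"
            << "|QQ* - I|     = " << (Q * Q.transpose() - I).norm() << "\n"
            << "|ZZ* - I|     = " << (Z * Z.transpose() - I).norm() << "\n";
}
```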