Registered Member
I was recently quite surprised when I found that the bug I was chasing appears to be the result of Eigen's LU decomposition algorithm incorrectly claiming that my matrix is non-invertible. I have attached the matrix (R) as well as a short piece of code (test.cpp) to reproduce this issue. On my machine (GCC 4.4, Eigen 2.0.12), the reproduction code fails with,
$ ./test
Error: Not invertible

Despite the fact that Octave has no difficulty inverting the same matrix:

octave:1> load "R"
octave:2> rank(transpose(R)*R)
ans = 10
octave:3> inv(transpose(R)*R)
ans = ...

Note that I am inverting transpose(R)*R, not R. I have verified that Eigen computes this correctly, however. Any help you could offer would be greatly appreciated. Thanks!

- Ben
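A rough sketch of the kind of check test.cpp performs, written against Eigen 2's LU class (the check() helper is illustrative, not the original attachment, and loading R from the data file is omitted):

Code:
    #include <iostream>
    #include <Eigen/Core>
    #include <Eigen/LU>

    // R is assumed to already hold the matrix from the attached data file.
    void check(const Eigen::MatrixXf& R)
    {
        Eigen::MatrixXf RR = R.transpose() * R;   // the matrix being inverted

        Eigen::LU<Eigen::MatrixXf> lu(RR);        // Eigen 2's LU decomposition
        if (!lu.isInvertible())
            std::cerr << "Error: Not invertible" << std::endl;
        else
            std::cout << "rank = " << lu.rank() << std::endl;
    }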
Registered Member
Apparently it's not possible to attach files here. I have posted the reproduction code and associated data at http://goldnerlab.physics.umass.edu/~bg ... est_eigen/ .
Thanks again,
- Ben
Registered Member
This problem is inherent to the Eigen 2.0 API and has been fixed in Eigen 3 (the development branch). I encourage you to move to it. Does that fix your problem?
Join us on Eigen's IRC channel: #eigen on irc.freenode.net
Have a serious interest in Eigen? Then join the mailing list!
Registered Member
Sadly, Eigen 3 does not fix the issue. I have uploaded a new test program based on Eigen 3, yet the matrix is still claimed to be non-invertible.
- Ben
Registered Member
This is not a bug: your RR matrix is actually not invertible to machine precision with single-precision floats!

However:
- the matrix R is almost (but not quite) invertible to machine precision with float;
- the matrix RR would be almost (but not quite) invertible to machine precision if you were using double precision.

Probably Octave is using double precision and is getting lucky. You can do that with Eigen 3 too: just use PartialPivLU, or use FullPivLU with setThreshold(0). Not recommended, anyway; there's almost no good reason why you'd want an LU decomposition of that kind of matrix (as opposed to other decompositions). Here's how I found out:
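A minimal sketch of how one can check this with Eigen 3 (the inspect() helper and the choice of MatrixXf are assumptions, not the snippet originally attached):

Code:
    #include <iostream>
    #include <Eigen/Dense>

    // Examine R^T*R in single precision: its rank, whether the LU decomposition
    // considers it invertible, and its singular values (the smallest ones show
    // how close to singular it is relative to float precision).
    void inspect(const Eigen::MatrixXf& R)
    {
        Eigen::MatrixXf RR = R.transpose() * R;

        Eigen::FullPivLU<Eigen::MatrixXf> lu(RR);
        std::cout << "rank(R^T*R): " << lu.rank() << "\n";
        std::cout << "invertible:  " << (lu.isInvertible() ? "yes" : "no") << "\n";

        Eigen::JacobiSVD<Eigen::MatrixXf> svd(RR);   // singular values only
        std::cout << "singular values:\n" << svd.singularValues() << std::endl;
    }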
Registered Member
Hmmm. Well, if you couldn't tell, my linear algebra knowledge is sorely lacking. This code is ultimately trying to produce a linear regression (ordinary least squares with terms up to second order) for data from a set of sensors. If I plot the data, it looks pretty planar, but it certainly isn't free of noise. Could this perhaps be the reason for the near-zero singular values? Otherwise, it seems very strange to me that a matrix formed from a fairly well-behaved data set would be singular.
It appears that the invertible matrix theorem dictates that, as long as the columns of R are linearly independent, RR should be invertible [1].

[1] http://books.google.com/books?id=MjNv6r ... &q&f=false
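One caveat the theorem glosses over numerically: forming R^T*R squares the condition number, cond(R^T*R) = cond(R)^2, so columns that are only barely independent in single precision can make R^T*R singular to machine precision even though R itself is not. A small sketch that compares the two (the condition_numbers() helper is just illustrative):

Code:
    #include <iostream>
    #include <Eigen/Dense>

    // Condition numbers from the SVD: ratio of the largest to the smallest
    // singular value. Eigen returns singular values in decreasing order.
    void condition_numbers(const Eigen::MatrixXf& R)
    {
        Eigen::JacobiSVD<Eigen::MatrixXf> svdR(R);
        Eigen::JacobiSVD<Eigen::MatrixXf> svdRR(R.transpose() * R);
        const Eigen::VectorXf sR  = svdR.singularValues();
        const Eigen::VectorXf sRR = svdRR.singularValues();
        std::cout << "cond(R)     = " << sR(0)  / sR(sR.size() - 1)   << "\n";
        std::cout << "cond(R^T*R) = " << sRR(0) / sRR(sRR.size() - 1) << std::endl;
    }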
Registered Member
After doing some research, it seems that my problem may be the overdetermined nature of my regression, and that singular value decomposition is a pretty common approach for these sorts of regressions. Unfortunately, I don't see how one would use Eigen's SVD module to actually compute the components of the decomposition. Both Eigen::JacobiSVD and Eigen::SVD have solve() functions, but there is nothing else to suggest that one can access the underlying decomposition (unless I'm missing something).
Any advice? Thanks again.
Registered Member
I confirm that SVD is what you want here, and specifically JacobiSVD. The components of the SVD are the U and V matrices (a.k.a. the singular vectors) and the singular values. All of them can be obtained from methods with self-explanatory names in the JacobiSVD class. Note that the API of JacobiSVD is going to be simplified very soon, so don't be surprised if code written today no longer compiles in a week. You'll only have to adapt once.
Note: you'll need only one of U and V, so make sure you skip the other one (the SkipU/SkipV options to JacobiSVD), or else it's going to be slow.
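For instance, with the released Eigen 3 API (the option names differ from the Skip* ones mentioned above; you ask explicitly for the factors you want), accessing the components looks like this:

Code:
    #include <iostream>
    #include <Eigen/Dense>

    // Thin SVD of A: A = U * S * V^T. Only the factors requested in the
    // constructor options are computed.
    void svd_components(const Eigen::MatrixXf& A, const Eigen::VectorXf& b)
    {
        Eigen::JacobiSVD<Eigen::MatrixXf> svd(A, Eigen::ComputeThinU | Eigen::ComputeThinV);

        std::cout << "singular values:\n" << svd.singularValues() << "\n";
        std::cout << "U:\n" << svd.matrixU() << "\n";
        std::cout << "V:\n" << svd.matrixV() << "\n";

        // Least-squares solution of A * x = b using the same decomposition.
        Eigen::VectorXf x = svd.solve(b);
        std::cout << "x:\n" << x << std::endl;
    }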
Registered Member
Thanks a ton for your help. I'm now simply using SVD::solve to solve the linear system directly. Is there a reason the SVD::matrix*() functions aren't documented? Thanks again!
- Ben
Registered Member
The development branch's documentation is lacking, we know; we will fix this before the release.
solve() is a nice first step as long as it works, but eventually, when a singular value is exactly 0, it won't work anymore. The real solution is to read the Wikipedia page on least squares using the SVD... anyway, good luck!
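For the record, a hand-rolled SVD least-squares solve that drops the small singular values might look like the sketch below (svd_solve() and the tolerance are illustrative, not an Eigen API):

Code:
    #include <Eigen/Dense>

    // Minimize ||A*x - b|| via the SVD, zeroing the reciprocals of singular
    // values below a relative cutoff so exact (or near) zeros don't blow up.
    Eigen::VectorXf svd_solve(const Eigen::MatrixXf& A,
                              const Eigen::VectorXf& b,
                              float tol = 1e-6f)
    {
        Eigen::JacobiSVD<Eigen::MatrixXf> svd(A, Eigen::ComputeThinU | Eigen::ComputeThinV);
        const Eigen::VectorXf& s = svd.singularValues();

        Eigen::VectorXf sInv = Eigen::VectorXf::Zero(s.size());
        for (int i = 0; i < int(s.size()); ++i)
            if (s(i) > tol * s(0))            // keep only well-conditioned directions
                sInv(i) = 1.0f / s(i);

        // x = V * diag(sInv) * U^T * b, i.e. the truncated pseudoinverse applied to b.
        return svd.matrixV() * sInv.asDiagonal() * (svd.matrixU().transpose() * b);
    }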