Registered Member
Why is the U matrix always square? That is very wasteful with OLS-type matrices: my design matrix has many thousands of rows and only a few columns. Transposing won't help, because that just makes the V matrix huge.
Any tips?
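For reference, for an m×n matrix with m ≥ n, the full and thin factorizations differ only in the factor shapes:

```latex
\text{Full SVD:}\quad A = U \Sigma V^{\top},\quad
U \in \mathbb{R}^{m \times m},\;
\Sigma \in \mathbb{R}^{m \times n},\;
V \in \mathbb{R}^{n \times n}
\qquad
\text{Thin SVD:}\quad A = U_1 \Sigma_1 V^{\top},\quad
U_1 \in \mathbb{R}^{m \times n},\;
\Sigma_1 \in \mathbb{R}^{n \times n}
```

For an 8192×4 matrix, the full U has 8192² ≈ 67 million entries versus only 8192×4 = 32768 for the thin U.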
Registered Member
Yep, we know about that problem. We have plans for an improved, more flexible SVD, just haven't gotten around to coding it yet.
Join us on Eigen's IRC channel: #eigen on irc.freenode.net
Have a serious interest in Eigen? Then join the mailing list!
Registered Member
Did you look at JAMA?
http://math.nist.gov/javanumerics/jama/ It is in the public domain and not copyrighted.
Registered Member
JAMA is where we took the code for the old implementation we have in Eigen 2.0.
The current SVD in Eigen3 is already much faster than that; it just has various issues that leave room for improvement, such as the one mentioned in this thread.
Registered Member
I see.
Btw, how does the dot product work in Eigen? Does it use SIMD? Also, does it split large vectors into blocks for the sake of numerical accuracy?
Registered Member
Yes, it uses SIMD; no, it does not use a divide-and-conquer approach (that would be a welcome addition, if done well, i.e. without slowing us down with respect to vectorization).
Registered Member
A blocked dot product makes sense only for large vectors. If the block size is a multiple of 64, 128, 256, 512, etc., then each block is automatically aligned to a 16-byte boundary. The tail block (at most 3 elements with floats, at most 1 with doubles) may not be vectorized, but the overhead should be negligible.
Btw, is there a way to change the alignment from 16 to 128 bytes? I want to align to the cache line, which should be good for SIMD as well.
Registered Member
Yep, exactly. If you want to tackle it, the code is in Eigen/src/Core/Redux.h. This is not specific to the dot product; it is shared with all reduction operations (sum, product, max, etc.).
Moderator
Note that for small fixed sizes, our meta-unroller already uses a divide-and-conquer strategy to reduce instruction dependencies and thus speed up reductions. So for performance reasons it might be interesting to treat large objects as many small fixed-size ones. However, this has little impact on numerical accuracy. If what you want is to compute the L2 norm in a stable way, then we have a set of robust functions for that.
Registered Member
Hello out there!
I am a fan of Eigen; it is very useful. So I cannot compute rectangular SVDs? Or should I "square" my matrix using zero-padding?
Moderator
No no, it only means that the current implementation is not as efficient as it could be when you don't need a full orthogonalization of the null space, e.g. for linear solving.
Registered Member
I tried the SVD with an 8192x4 matrix and it is about 100 times slower than JAMA. Strangely, all columns of the U matrix except the first 4 are filled with zeros.
Registered Member
If JAMA allows skipping the computation of U and we don't, that would explain it.
If you want to compute the SVD of an 8192x4 matrix with Eigen3, just use JacobiSVD; it is much more reliable and also faster.
Registered Member
JAMA doesn't skip the computation of the U matrix. Eigen computes the full SVD, whereas JAMA computes the thin version; I think that is the reason for the performance difference. With an 8192x4 matrix we need only an 8192x4 U matrix, but Eigen's U matrix is 8192x8192. JacobiSVD computes the same big U matrix.
Registered Member
Right, OK. All I can say is "we're working on it".