Registered Member
|
Hi all,
I've been checking out Eigen's SVD implementations: the JAMA-based one from 2.03 and the very recent quick-and-dirty (non-Eigenized) port from Numerical Recipes. I've noticed that in both cases temporary matrices/vectors are created. Our application is such that in certain parts (real-time code) no dynamic memory allocations may be made, and we need to compute SVDs there. The ideal scenario for us would be to create an instance of SVD in the non-real-time part of the code, performing all allocations there, and then call SVD::compute() in the real-time part. The question is: what are your thoughts on deferring all memory allocation to the object construction phase? Are there any important cons to this besides having additional class members? And just out of curiosity, will the Numerical Recipes algorithm stay in Eigen (with the adequate modifications), or has no decision been made yet on the subject? Thanks in advance, Adolfo. |
Moderator
|
Actually the SVD is going to be rewritten, and don't worry, we'll take care of preallocating everything that can be: this is the main reason for having a compute() method.
|
Registered Member
|
Cool :)
Indeed, that's the nice thing about having the compute() method! Cheers, Adolfo |
Registered Member
|
I need a pseudo-inverse calculation and, as I did not find one in the SVD module, I was planning to submit a patch for it.
I think even with a rewrite of the SVD, such a patch will be useful in the meantime. Do you have a schedule for the SVD rewrite? Do you think a pseudoinverse method should be part of the MatrixBase API, like the inverse method? P.S. I noticed that the methods matrixU(), matrixV(), etc. are not declared inline; is there some particular reason for that? P.P.S. I also noticed that the types Scalar, RealScalar, MatrixUType, MatrixVType and SingularValuesType are private, even though some are the return types of public methods. I have the same question there: what is the reason for that design choice? |
Registered Member
|
I have almost finished writing the patch.
However, I noticed a strange thing; I don't know if you plan to change it in the rewritten version. It happens when the numbers of rows and columns differ. The API chose to represent the rectangular diagonal matrix D of U * D * V^* as a vector of singular values whose size equals the number of columns. That is fine when the number of columns is smaller than the number of rows, but otherwise it introduces zeros at arbitrary places in the vector. The consequence is that it is not possible to easily construct the rectangular diagonal matrix. In this case I chose to sort the vector and take the first entries to construct the rectangular diagonal matrix, but that makes the calculations awkward. I wonder whether it would be possible in the compute() method to transpose the input matrix, take its SVD, and copy the transposes of the resulting matrices back instead. OK, this workaround is not very interesting since you are rewriting the module anyway, but I wanted to point out the problem. |
Registered Member
|
OK, I'm really starting the SVD rewrite now.
Yes, a pseudoinverse is always useful, and it's independent of the SVD rewrite! According to this: http://en.wikipedia.org/wiki/Moore%E2%8 ... SVD_method the pseudoinverse is quite trivial to compute from the SVD.
Join us on Eigen's IRC channel: #eigen on irc.freenode.net
Have a serious interest in Eigen? Then join the mailing list! |