Registered Member
|
Hello,
I have a question regarding the Eigen interface. We are currently designing an optimization library. For representing the objective function, we have a base class "Function" which can have several "interfaces" (multiple inheritance from "Function" and, e.g., an abstract base class "IHessian").
We would then derive QuadraticFunction, LinearFunction, CompositeFunction, ... from "Function", with the corresponding Hessians.
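Roughly, the structure looks like this (a simplified sketch only, not our actual code; "Matrix" here just stands for whatever common matrix type we end up with, which is exactly the open question):

Code:
class Matrix;   // the common matrix type we are looking for

class Function
{
public:
    virtual ~Function() {}
    virtual double operator()(const Matrix& x) const = 0;   // evaluate f(x)
};

class IHessian
{
public:
    virtual ~IHessian() {}
    // should be able to return fast, specialized matrices (identity, sparse, ...)
    virtual Matrix getHessian(const Matrix& x) const = 0;
};

class QuadraticFunction : public Function, public IHessian { /* ... */ };

// f is only known at runtime, so optimize cannot be a function template
void optimize(Function& f);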
The tricky part is that we cannot fix f (not even its class) at compile time. Thus we cannot templatize "optimize" and need one common base class "Function". Since getHessian needs to be virtual, we cannot templatize it either. In particular, we would like to allow getHessian() to return fast, specialized Matrix implementations for simple functions, e.g. an identity matrix, a sparse matrix, etc.
The question is now how we can integrate this with Eigen, i.e. how to use Eigen-style matrices for "Matrix". Unfortunately the Eigen matrices do not have a common base type (after all, as far as I understand, the speed comes from the compiler generating explicit specialized code for each expression instead of function calls, and from avoiding virtual function calls). Is there a way to bring these two worlds together?
The question boils down to whether we can have a matrix type in Eigen whose concrete type is not known at compile time. An option might be to implement such a type and provide an Eigen-compatible interface. However, I do not know enough about the inner workings to estimate how much effort this would be, and whether it would be possible without modifying the existing code base. Do you have any hints on this, or do you know of a matrix library which might be suitable for this case? Best Regards, Jan |
Registered Member
|
I see. What you want is a Matrix type that can at runtime be either a plain Matrix, or a SparseMatrix, or the Identity expression...
First of all, note that in the development branch we do have a base class common to all these; it's called AnyMatrixBase. But it's not virtual. Being so general, it only allows very few operations such as assigning, multiplying, etc. We can't make AnyMatrixBase virtual, because that would in particular make Matrix virtual, and having a vtable increases the sizeof(), which is not an option for Matrix. (Think of the case of Vector2f: 8 bytes, you don't want that to be doubled because of the vtable pointer.) There may be other reasons why virtual is not an option, but this one is serious enough.
Instead of trying to introduce a virtual analogue of AnyMatrixBase (I'm not even sure that is possible), have you considered something far easier: introducing a Variant class?
http://en.wikipedia.org/wiki/Variant_type
http://doc.trolltech.com/4.5/qvariant.html
Your function would return such a "variant" object. Either you make Variant a virtual base class, or you explicitly implement Variant as containing a "type ID" and a void* pointer. I don't have experience implementing variants... Thus your Function class would look like:
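A minimal sketch of that (just to fix ideas; "Variant" is the hypothetical wrapper class described above, and the exact signature is of course up to you):

Code:
class Variant;   // your runtime wrapper around a plain Matrix, a SparseMatrix, an identity expression, ...

class Function
{
public:
    virtual ~Function() {}
    // return the Hessian by value, wrapped in a Variant
    virtual Variant getHessian() const = 0;
};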
I removed the & as I don't see how you can return a reference here. You'll have to return that Variant by value, or pass a reference to it as an argument. You could implement copy-on-write in Variant so that returning it by value isn't costly:
http://en.wikipedia.org/wiki/Copy-on-write
You will have to manually provide all the necessary operators. For example, if you are interested in multiplication, provide a virtual operator* in Variant, and then specialize it in the subclasses.
Note that the whole point of the Variant approach is that you don't need to deal with Eigen internals at all. So you're on your own! I also don't think that any of the major matrix libraries does this. Good luck
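For example, the "virtual base class" flavour of Variant could look roughly like this (a sketch only; DenseVariant and IdentityVariant are made-up names, not Eigen classes):

Code:
#include <Eigen/Core>

class Variant
{
public:
    virtual ~Variant() {}
    // one of the operators you would have to provide by hand:
    // matrix * vector, specialized per storage kind
    virtual Eigen::VectorXd operator*(const Eigen::VectorXd& v) const = 0;
};

class DenseVariant : public Variant
{
    Eigen::MatrixXd m;
public:
    explicit DenseVariant(const Eigen::MatrixXd& mat) : m(mat) {}
    Eigen::VectorXd operator*(const Eigen::VectorXd& v) const { return m * v; }
};

class IdentityVariant : public Variant
{
public:
    // the identity "matrix" never needs to be stored at all
    Eigen::VectorXd operator*(const Eigen::VectorXd& v) const { return v; }
};

// a SparseVariant wrapping an Eigen sparse matrix would follow the same pattern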
Join us on Eigen's IRC channel: #eigen on irc.freenode.net
Have a serious interest in Eigen? Then join the mailing list! |
Registered Member
|
How about these two options? The idea is to template the function optimize. I will let the code speak for itself...
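A rough sketch of the gist (placeholder names like HessianType are mine; this is only a minimal sketch, not a complete implementation):

Code:
#include <Eigen/Core>

// optimize is a function template: the concrete function type, and hence its
// specialized Hessian type, is known at compile time inside optimize
template <typename FunctionType>
void optimize(const FunctionType& f)
{
    typename FunctionType::HessianType H = f.getHessian();
    // ... use H with full Eigen expression templates, no virtual calls
}

// each concrete function advertises its own Hessian type
struct QuadraticFunction
{
    typedef Eigen::MatrixXd HessianType;
    HessianType getHessian() const { return Eigen::MatrixXd::Identity(3, 3); }
};

// usage: optimize(QuadraticFunction());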
- Hauke |
Registered Member
|
Your solution seems better, if it's OK to have a templated "optimize". But Fragment wrote:

    The tricky part is that we cannot fix f (not even its class) at compile time. Thus we cannot templatize "optimize" and need one common base class "Function".
That's why I tried something with Variants. But indeed, let's suggest that Fragment seriously consider whether it might be possible to template "optimize" after all, as that would be much more natural!
Join us on Eigen's IRC channel: #eigen on irc.freenode.net
Have a serious interest in Eigen? Then join the mailing list! |
Registered Member
|
@fragment:
Another option is to edit the file Eigen/src/Core/AnyMatrixBase and add the "virtual" keyword to everything you need to be virtual there. Then you return an AnyMatrixBase (I guess you can't return it by value; you'd have to let your function take a reference to it). If that works for you, we can always integrate it as a non-default option controlled by a #define.
Join us on Eigen's IRC channel: #eigen on irc.freenode.net
Have a serious interest in Eigen? Then join the mailing list! |
Registered Member
|
Oh, yeah, right. They probably want to switch the cost functions at run-time. I'll tinker with it a bit when I take my next break. |
Registered Member
|
First of all I want to thank you a lot for the fast and extensive help.
I didn't know about AnyMatrixBase, I'll definitely have a look at that. I have been thinking about the Variant class as a sort of "wrapper" around several matrix implementations. There are two main points:
I have also been thinking about making optimize() templated. Actually, I thought about redesigning the whole "Function" framework into a templated form. There are two problems I see with that:
On a side note, I also had a look at FLENS (http://flens.sourceforge.net/) and liked the way it interfaces with BLAS/LAPACK. Since our matrices will typically have image size (i.e. 640x480 and upwards), I was wondering whether the built-in Eigen functions have good performance on such "larger" matrices. Is the storage layout generally compatible with BLAS, so that we could use native BLAS/LAPACK routines for time-critical parts? Best Regards, Jan |
Registered Member
|
Apparently you're developing a library. If it's a C++ library, and you can let part of that library be templated, then just let optimize be a template. Otherwise, that's indeed not an option. What you say below about interfacing with Matlab suggests it isn't.
You're right, I hadn't thought of that. Since operator= is virtual, we can't make it a template, and so we can't make an operator= that directly takes any Eigen expression. Perhaps that's not a big deal. You could make an operator= that takes a Variant, and also an operator= that takes a plain matrix. That is, you could let the Variant type know at least about the underlying "plain matrix type" corresponding to it; in other words, you don't need the same Variant type to represent MatrixXd, MatrixXf, Matrix3f... simultaneously. There's got to be a limit to the polymorphism of Variant, and I think the Variant class itself is still allowed to be templated, right? Otherwise, you still have the option of only implementing operator= taking a Variant, and converting anything else to a Variant first.
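To illustrate that last idea (just a sketch, with made-up names):

Code:
#include <Eigen/Core>

// Variant is itself templated on the one "plain matrix type" it corresponds to,
// so a non-template operator=(const PlainMatrixType&) is enough
template <typename PlainMatrixType>   // e.g. Eigen::MatrixXd
class Variant
{
    PlainMatrixType m;   // sketch: in reality this might be a type ID plus a pointer
public:
    virtual ~Variant() {}
    Variant& operator=(const PlainMatrixType& other) { m = other; return *this; }
};

// usage:
// Variant<Eigen::MatrixXd> v;
// v = Eigen::MatrixXd::Zero(3, 3);   // any MatrixXd, or an expression convertible to it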
Oh sure, I wasn't recommending that. I was only suggesting to implement a few operators, and for the rest, you'd let the user explicitly refer to the Eigen object inside the Variant.
Your first problem is non-fundamental. Anything you can do at runtime with virtual functions, you can also do at compile time ("static polymorphism") with templates; it's just a matter of familiarizing yourself with template metaprogramming, which I can't go into here (there is a small sketch below). But this only applies to C++ code. If you're writing a C++ library, that's an option (that part of your lib would then be templated, like Eigen is). Your second problem is more fundamental: templates are specific to C++, and Matlab doesn't know about them.
FLENS is probably a nice wrapper around BLAS/LAPACK. Performance-wise, it's hard to give a general answer, as that depends on your target hardware (massively multi-core? GPU accelerated?). Currently Eigen only uses 1 CPU core. In that particular context it performs very well, also for large matrices:
http://eigen.tuxfamily.org/index.php?title=Benchmark
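To give a flavour of what "static polymorphism" means here, a tiny CRTP sketch (illustrative names only; CRTP is the same trick Eigen's own MatrixBase uses):

Code:
// the base class knows the derived type at compile time, so there are no virtual calls
template <typename Derived>
struct FunctionBase
{
    const Derived& derived() const { return static_cast<const Derived&>(*this); }
    // dispatch resolved at compile time
    double evaluate(double x) const { return derived().evaluateImpl(x); }
};

struct Quadratic : FunctionBase<Quadratic>
{
    double evaluateImpl(double x) const { return x * x; }
};

// usage:
// template <typename Derived>
// double twice(const FunctionBase<Derived>& f, double x) { return 2.0 * f.evaluate(x); }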
Join us on Eigen's IRC channel: #eigen on irc.freenode.net
Have a serious interest in Eigen? Then join the mailing list! |
Registered Member
|
Thanks, I'm amazed by the response time on this board!
Currently it seems that templating the whole library is not an option. I'll try to implement the Variant approach to see if it works out. Thanks for the link to the performance comparison. Do you know of a benchmark of the sparse matrix part? Are there LAPACK bindings for Eigen matrices/vectors?
Since you mention GPU acceleration: is there a chance that this (i.e. a matrix class which does its computations on the GPU) could at some point work with Eigen? I imagine this would be quite difficult, as one has to avoid loops over operator()(i,j) at all costs, which as far as I can see somewhat contradicts Eigen's approach of having the compiler generate specialized code (loops) for each expression. Regards, Jan |
Registered Member
|
It's different, because in the Sparse module we rely more on external libraries as back-ends for many algorithms, so in those cases we perform just like the chosen library. However, for many basic operations we have our own implementation, and there we perform well. Gael is the chief there. I'm not aware of systematic comparative benchmarks on that stuff.
Depends in which direction.
* Using Eigen as a back-end while using the BLAS/LAPACK API: we have that for large parts of BLAS in the development branch, see the blas/ directory; LAPACK is not started yet (not sure if LAPACK will happen).
* Using BLAS/LAPACK as back-ends while using the Eigen API: for BLAS, it's not clear whether this is useful or whether we can do just as well ourselves. For LAPACK, this is planned and much wanted, but still waiting for someone to do the job. See the Todo.
Gael has been thinking about this, see recent thread "BLAS back-ends" on the list.
Join us on Eigen's IRC channel: #eigen on irc.freenode.net
Have a serious interest in Eigen? Then join the mailing list! |