Registered Member
Hi,
I'm using a few of Eigen's unsupported modules (since they ship with Eigen, I prefer them over adding other third-party dependencies): the LevenbergMarquardt module together with the NumericalDiff module, which computes the Jacobian of my error function. I want to speed up the minimization, and I thought of using the AutoDiff module to compute the Jacobian instead, since that should need roughly 6 times fewer evaluations of my error function (the Jacobian of my error function is 6 inputs by 104 outputs). I got the minimization working correctly with a derivative computed by AutoDiff. However, to my disappointment, it is about 6 times slower with AutoDiff than with NumericalDiff (same error function). I'm not sure whether this is to be expected from automatic differentiation in general for a Jacobian of this size, whether the AutoDiff module is not optimized, or whether I'm using the AutoDiff module the wrong way. I've added the parts of my code that use AutoDiff to clarify how I'm using it. Any help on usage or the potential performance of AutoDiff would be highly appreciated.
Moderator
Make sure that InputsAtCompileTime is 6 so that you can avoid very expensive heap allocation within AutoDiffScalar.
Registered Member
For the problems I'm working on (large datasets, few parameters), there are typically common sub-expressions shared between the various derivatives (and the error function itself); I'd be impressed if the compiler could combine those. I always end up calculating the symbolic derivatives with wxMaxima. The main drawback there is that a typo is easily made, so using AutoDiff to verify those derivatives might help. But with Eigen you never know; it is amazing how close it gets to optimal code when you tweak the expressions a bit.
Registered Member
Thanks for your reply ggael. I'm sorry it's taken me so long to get back to this; I had to put this work aside and haven't had a chance to look at it again until now! I find that InputsAtCompileTime is -1, and I guess this is the reason for the slowdown. The problem now is that I don't know how to force InputsAtCompileTime = 6 at compile time. After looking at the DenseFunctor definition in LevenbergMarquardt.h, I tried the following change in my example code:
This should set InputsAtCompileTime = 6 and ValuesAtCompileTime = -1 in DenseFunctor; however, I get the compile error below, which seems to indicate that lmqrsolv expects a PermutationMatrix with dynamic size.
I don't know how to get around this error. How could I change my example code so that InputsAtCompileTime is 6 at compile time?
Last edited by mtosas on Wed Sep 23, 2015 2:28 pm, edited 2 times in total.
Registered Member
Thanks twithaar, wxMaxima sounds very interesting; I'll have a look at it.
Registered Member
I'm still trying to find a solution for this. Any help?