Registered Member
Hello,
I dare to ask, even though AutoDiff is still considered unsupported. As far as I understand the module, the following code should work. Indeed, multiplying an AutoDiff scalar with a double works (first test), but doing the same at the matrix level does not. Probably I did not understand the full meaning of the following sentence in the docs: "AutoDiffScalar can be used as the scalar type of an Eigen::Matrix object. However, in that case, the expression template mechanism only occurs at the top Matrix level, while derivatives are computed right away."
I tried to compile this with g++ 4.6 against the repository default branch. g++ says the following:
Any help is appreciated. Alex
Registered Member
The following will work:
m3 = m2 * m4.cast<active_double>();

However, I suspect it is not very efficient, because multiplying two active doubles can be much more expensive than multiplying one active double by one ordinary double when the number of derivatives is large. It would be great to be able to directly multiply matrices of any two different (compatible) types without casting. Some special cases work, like complex<double> and double, for example.
Registered Member
Thanks for the answer. As you point out, this is just a workaround performance-wise. Additionally, it makes it impossible to parameterize existing algorithms with active_double without modifying the code. As you mention, some special cross-type multiplications work - I think because those special cases are implemented in num_traits, which is also the case for active_double - at least that's what I get from the docs. Also, this seems to be quite general:
Alex
Moderator
Hi, I forgot to reply, but I recently pushed two changesets that should make combining double and active_double more flexible. In particular, your example now works.
Registered Member
Hi,
thanks for the answer. I updated, and it works. However, I found another, similar case which has the same problem:
The second case (normal matrix * active scalar) fails to compile with:
Thanks a lot for having a look. Alex
Registered Member
Dear Gael,
The example does indeed compile now, but if I change the dimension of m2, m3 and m4 to 8 or more, then it does not. In that case I get error messages such as:

Eigen/src/Core/util/BlasUtil.h:112:61: error: cannot convert ‘const Eigen::AutoDiffScalar<Eigen::Matrix<double, 2, 1> >’ to ‘double’ in return
Eigen/src/Core/products/GeneralMatrixVector.h:86:62: error: ‘Eigen::internal::conj_helper<Eigen::AutoDiffScalar<Eigen::Matrix<double, 2, 1> >, double, false, false> cj’ has incomplete type
Eigen/src/Core/products/GeneralMatrixVector.h:87:62: error: ‘Eigen::internal::conj_helper<Eigen::AutoDiffScalar<Eigen::Matrix<double, 2, 1> >, double, false, false> pcj’ has incomplete type

Also, there are some other operations which do not compile for any dimension, for example:

m3 = d2*m4
m3 = m3 + m4
m3 = m3 - m4

I am using gcc 4.7.2. Thank you very much.
Moderator
Yes, we are aware of that:
http://eigen.tuxfamily.org/bz/show_bug.cgi?id=279

Fixing this feature request generally and elegantly will require some time. Regarding products, you can work around it with a.lazyProduct(b). For additions, I have no solution to propose other than casting.
Registered Member
Hi,
thanks a lot for your support, it works! Also, I find the timing quite attractive, though I have no comparison: my algorithm takes 0.25 ms without derivatives and 28 ms with 62 directions of differentiation. Do you see any chance to adapt this to generate higher derivatives? How complex do you estimate it would be?
Moderator
I remember I already did that, but I'm not 100% sure it was with the repository version or with a locally modified one:
typedef AutoDiffScalar<VectorXd> ADS;
typedef AutoDiffScalar<Matrix<ADS,Dynamic,1> > ADDS;

ADDS should track the value, gradient, and Hessian.
Registered Member
Hello ggael, would you be so kind as to give a small example, e.g. for f = x1*x1*x1*x2 + x1*x1*x2*x2*x2*x2, showing how to get the gradient and the Hessian via AutoDiff? Thank you Ralf
Registered Member
Hi,
here is a little example. Maybe there should be a small tutorial - can we put one together on the Eigen wiki?
Alex
Registered Member
Hello Alex,
thanks for this! But I was more interested in the second derivative of f, i.e. the Hessian matrix. I couldn't figure out how to use ggael's ADDS, defined by

typedef AutoDiffScalar<VectorXd> ADS;
typedef AutoDiffScalar<Matrix<ADS,Dynamic,1> > ADDS;

(ggael's comment was: "ADDS should track the value, gradient, and hessian.") to compute the Hessian

H = [ d2f/(dx1 dx1)  d2f/(dx1 dx2) ;
      d2f/(dx2 dx1)  d2f/(dx2 dx2) ]

Bye Ralf
Registered Member
I was also curious and investigated that some time ago.
I just extended the small example from my last post. Alex
Registered Member
Thank you very much, Alex.
I had to add the two .setZero() calls in the snippet below, because .resize(2) may not zero-initialize and thus produces wrong results. I find your example very useful. Maybe you could integrate it into unsupported/test/autodiff.cpp? Bye Ralf
Registered Member
It would be very helpful if you could write a short description of the module, which we could put in the documentation at http://eigen.tuxfamily.org/dox-devel/un ... odule.html . I am happy to do the formatting if you guys can give me a draft (I don't use the module myself).