Registered Member
Hi all!
I went through AutoDiffScalar's derivative definitions and found one that seems suboptimal: tanh. Since tanh(x.value()) is already calculated, it would be good to reuse it, as in this tentative improvement:
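Something along these lines, sketched as a stand-alone function for clarity (the actual change would go in the tanh entry of AutoDiffScalar.h; the name tanh_reuse and the fixed ADS typedef are just for illustration):

    #include <Eigen/Core>
    #include <unsupported/Eigen/AutoDiff>
    #include <cmath>
    #include <iostream>

    typedef Eigen::AutoDiffScalar<Eigen::VectorXd> ADS;

    // Compute tanh(x.value()) once and reuse it for both the value and the
    // derivative, since d/dx tanh(x) = 1 - tanh(x)^2.
    ADS tanh_reuse(const ADS& x)
    {
      using std::tanh;
      double th = tanh(x.value());
      return ADS(th, x.derivatives() * (1.0 - Eigen::numext::abs2(th)));
    }

    int main()
    {
      ADS x(0.5, Eigen::VectorXd::Ones(1)); // x = 0.5 with dx/dx = 1
      ADS y = tanh_reuse(x);
      std::cout << y.value() << " " << y.derivatives()[0] << "\n";
    }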
A minor thing, but it seems nice to have this one as good as the rest, unless there is something I have missed that made cosh the better choice. Thanks for your time!
Moderator
The problem is that 1 - numext::abs2(th) will suffer from so-called catastrophic cancellation (https://en.wikipedia.org/wiki/Loss_of_significance) once x is not small. For instance, with x around 9 I observe relative errors of about 10^-8 in double precision (so you lose half of the significant digits).
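A quick stand-alone check of this (my own sketch, not Eigen code), at x = 9:

    #include <cmath>
    #include <cstdio>

    // Compare 1 - tanh(x)^2 against the reference 1/cosh(x)^2 in double.
    // Near x = 9, tanh(x) is about 1 - 3e-8, so forming 1 - th*th cancels
    // roughly half of the 16 significant digits of the tiny true result (~6e-8).
    int main()
    {
      double x    = 9.0;
      double th   = std::tanh(x);
      double fast = 1.0 - th * th;
      double ref  = 1.0 / (std::cosh(x) * std::cosh(x));
      std::printf("fast = %.17g\nref  = %.17g\nrel. error = %g\n",
                  fast, ref, std::fabs(fast - ref) / ref);
    }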
Registered Member
Oh, it was that bad.
I only did a quick test in Matlab, and rewriting it as (1 - th) * (1 + th) does seem to perform significantly better than the naive implementation. However, I was unable to reproduce the relative error you got; would you mind sharing the code you used to measure it? I would like to play a bit with the expression. Thanks Gael!
Moderator
You'll get the same issue with the "1 - th" factor.
Octave/Matlab code:

    x=1:0.001:10; max(abs((1-tanh(x).^2)-(1./cosh(x).^2)) ./ (1./cosh(x).^2))
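In C++ terms (my own translation of that sweep, also checking the (1 - th) * (1 + th) form from the previous post):

    #include <cmath>
    #include <cstdio>

    // Sweep x over [1, 10] and record the worst relative error of both
    // rewrites against the 1/cosh(x)^2 reference; the 1 - th factor in the
    // factored form inherits the rounding already present in th, so it
    // suffers the same cancellation as 1 - th*th.
    int main()
    {
      double worst_naive = 0, worst_factored = 0;
      for (double x = 1.0; x <= 10.0; x += 0.001)
      {
        double th  = std::tanh(x);
        double ref = 1.0 / (std::cosh(x) * std::cosh(x));
        worst_naive    = std::fmax(worst_naive,
                                   std::fabs((1.0 - th * th) - ref) / ref);
        worst_factored = std::fmax(worst_factored,
                                   std::fabs((1.0 - th) * (1.0 + th) - ref) / ref);
      }
      std::printf("1 - th^2:      %g\n(1-th)*(1+th): %g\n",
                  worst_naive, worst_factored);
    }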
Registered Member
Hi Gael, thank you for the code!
I see you are using precision (relative error) rather than accuracy (absolute error) as the criterion. Quite interesting that the absolute difference between the two expressions always stays within epsilon (1.11e-16) for doubles, while the relative error goes as high as 1e-8. But when you divide epsilon by a number close to zero, it does indeed blow up. Thanks for the check!
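To put numbers on that blow-up (my own illustration, reusing x = 9 from above):

    #include <cmath>
    #include <cstdio>
    #include <cfloat>

    // An absolute error of one epsilon on a true value of ~6e-8 is already
    // a relative error of a few 1e-9; a few ulps lost inside tanh bring it
    // up to the observed ~1e-8.
    int main()
    {
      double ref = 1.0 / (std::cosh(9.0) * std::cosh(9.0)); // ~6.1e-8
      std::printf("ref = %g, DBL_EPSILON / ref = %g\n", ref, DBL_EPSILON / ref);
    }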
Moderator
Comparing float to double:
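A sketch of such a test (not necessarily the exact code used; it checks both single-precision expressions against a double-precision reference):

    #include <cmath>
    #include <cstdio>

    // Evaluate both float expressions over [1, 10] and compare each against
    // a double-precision 1/cosh(x)^2 reference. In float, tanh(x) rounds to
    // exactly 1.0f for x beyond roughly 9, so 1 - th*th collapses to 0 there.
    int main()
    {
      double worst_fast = 0, worst_cosh = 0;
      for (double x = 1.0; x <= 10.0; x += 0.001)
      {
        float  xf   = (float)x;
        float  th   = std::tanh(xf);
        float  fast = 1.0f - th * th;                          // proposed
        float  sech = 1.0f / (std::cosh(xf) * std::cosh(xf));  // current
        double ref  = 1.0 / (std::cosh(x) * std::cosh(x));
        worst_fast = std::fmax(worst_fast, std::fabs(fast - ref) / ref);
        worst_cosh = std::fmax(worst_cosh, std::fabs(sech - ref) / ref);
      }
      std::printf("max rel. error vs double: 1-th^2 = %g, 1/cosh^2 = %g\n",
                  worst_fast, worst_cosh);
    }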
So the "fast" version is completely off.
Registered Member
Indeed, quite atrocious.
Thanks for giving it a test!