
Using LevenbergMarquardt to fit a Gaussian curve

andrewmarshall (Registered Member)
I am attempting to use Eigen's LevenbergMarquardt routine to fit a Gaussian curve to some noisy data.

The code I have written is here:

https://gist.github.com/planetmarshall/ ... 3c320ad4a9

I may not be understanding the algorithm correctly; however, I have implemented the functor as follows:

Code:
int operator()(const Eigen::VectorXd &params, Eigen::VectorXd &fvec) const {
   // g(a,b,c,x) is the Gaussian function, y(x) is the sample data
   fvec = y(x) - g(a,b,c,x);
   return 0;
}


This fails to converge. However, if I implement the functor as follows:

Code:
int operator()(const Eigen::VectorXd &params, Eigen::VectorXd &fvec) const {
   fvec = g(a,b,c,x) - y(x);
   return 0;
}


the algorithm converges as expected. What I don't understand is why the sign of fvec matters, when it is the norm of fvec that is minimized.

Any help appreciated.

Cheers,
Andrew.
matthieu.ft (Registered Member)
Hi Andrew,

I don't know the details of Eigen's Levenberg-Marquardt implementation, but I suggest you take a look at your Jacobian. The Jacobian that you supply in df() has to be the Jacobian of the function that you define in operator().

This implies that if you optimize f(x) - g(a,x) the Jacobian is, say, J, whereas if you optimize g(a,x) - f(x) the Jacobian will be -J.
I suppose this is the explanation of your problem.
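
To illustrate, here is a minimal sketch of a functor in the usual generic-functor style for Eigen's unsupported NonLinearOptimization module, assuming a Gaussian parameterized as g(x; a, b, c) = a * exp(-(x - b)^2 / (2*c*c)) (your gist may use a different form). The point is that operator() and df() use the same sign convention:

Code:
#include <cmath>
#include <Eigen/Core>
#include <unsupported/Eigen/NonLinearOptimization>

// Hypothetical functor fitting g(x; a, b, c) = a * exp(-(x - b)^2 / (2*c*c)).
struct GaussianFunctor
{
    typedef double Scalar;
    typedef Eigen::VectorXd InputType;
    typedef Eigen::VectorXd ValueType;
    typedef Eigen::MatrixXd JacobianType;
    enum { InputsAtCompileTime = Eigen::Dynamic,
           ValuesAtCompileTime = Eigen::Dynamic };

    Eigen::VectorXd x, y;   // sample abscissae and noisy observations

    int inputs() const { return 3; }              // parameters a, b, c
    int values() const { return int(x.size()); }  // number of residuals

    // Residuals: r_i = g(x_i; a, b, c) - y_i   (model minus data)
    int operator()(const Eigen::VectorXd &p, Eigen::VectorXd &fvec) const
    {
        const double a = p(0), b = p(1), c = p(2);
        for (int i = 0; i < x.size(); ++i) {
            const double d = x(i) - b;
            fvec(i) = a * std::exp(-d * d / (2.0 * c * c)) - y(i);
        }
        return 0;
    }

    // Jacobian of the *same* residuals: fjac(i, j) = d r_i / d p_j
    int df(const Eigen::VectorXd &p, Eigen::MatrixXd &fjac) const
    {
        const double a = p(0), b = p(1), c = p(2);
        for (int i = 0; i < x.size(); ++i) {
            const double d = x(i) - b;
            const double e = std::exp(-d * d / (2.0 * c * c));
            fjac(i, 0) = e;                           // d r_i / d a
            fjac(i, 1) = a * e * d / (c * c);         // d r_i / d b
            fjac(i, 2) = a * e * d * d / (c * c * c); // d r_i / d c
        }
        return 0;
    }
};

With the residuals and Jacobian kept consistent like this, the usual driver (Eigen::LevenbergMarquardt<GaussianFunctor> lm(functor); lm.minimize(p);) should converge with either sign convention, as long as df() matches operator().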

Best,
Matthieu

Last edited by matthieu.ft on Thu Mar 26, 2015 6:28 pm, edited 1 time in total.
andrewmarshall (Registered Member)
matthieu.ft wrote: This implies that if you optimize f(x) - alpha, the Jacobian is Jf, the Jacobian of f, whereas if you optimize alpha - f(x) the Jacobian will be -Jf.
I suppose this is the explanation of the problem.


Yes of course, thanks Matthieu.
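
For anyone reading later, the same point in code form: if the residuals are kept as data minus model, i.e. fvec = y(x) - g(a,b,c,x), then df() has to return the negated Jacobian. A sketch, assuming the same hypothetical Gaussian parameterization as in the functor above:

Code:
// Hypothetical df() consistent with fvec(i) = y(i) - a * exp(-(x(i) - b)^2 / (2*c*c))
int df(const Eigen::VectorXd &p, Eigen::MatrixXd &fjac) const
{
    const double a = p(0), b = p(1), c = p(2);
    for (int i = 0; i < x.size(); ++i) {
        const double d = x(i) - b;
        const double e = std::exp(-d * d / (2.0 * c * c));
        fjac(i, 0) = -e;                           // d r_i / d a
        fjac(i, 1) = -a * e * d / (c * c);         // d r_i / d b
        fjac(i, 2) = -a * e * d * d / (c * c * c); // d r_i / d c
    }
    return 0;
}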

