Registered Member
Hi all,
I am using the Eigen library in my C++ program, and I want to use Levenberg-Marquardt to optimize a least-squares function. The function I want to minimize is:

min over h of || p - F v ||²

where the data are

p = (x1, y1, x2, y2, ..., xn, yn)
v = (v1, w1, v2, w2, ..., vn, wn)

and F is the 2x2 matrix

| h11 h12 |
| h21 h22 |

The parameters I want to estimate are h11, h12, h21, h22, so I want to minimize over F with Levenberg-Marquardt. In my case the least-squares terms are, one per observation:

observation 1: || (x1 y1)' - F (v1 w1)' ||²   // the ' symbol means the transpose
observation 2: || (x2 y2)' - F (v2 w2)' ||²
...
observation n: || (xn yn)' - F (vn wn)' ||²

So, how can I use the Eigen library to minimize this least-squares problem with Levenberg-Marquardt? Thank you.
Moderator
LM is for non-linear least squares, whereas you have a simple linear system. Arranging P and V into Nx2 matrices, you have:

min || V * F' - P ||²

Using the normal equations you end up with:

F' = (V' * V)^-1 * (V' * P)

where (V' * V) is a 2x2 symmetric positive-definite matrix. In that case you can likely invert it explicitly with the "mat.inverse()" method. If you have doubts about the numerical conditioning, then see this page: http://eigen.tuxfamily.org/dox-devel/gr ... uares.html
Registered Member
Hi ggael,
Thank you for your reply. I tried to simplify my problem, but let me give more details. Initially, I find the F matrix with the SVD method (call it F0). But since I have n observations and F0 is obtained from a homogeneous linear system, F0 is not the only solution, and with many noisy observations I need a refinement step to optimize it. So, first I find F0 with the SVD, and then I iteratively update the solution using a nonlinear least-squares method, specifically the Levenberg-Marquardt algorithm. Any help? Thank you.