This forum has been archived. All content is frozen. Please use KDE Discuss instead.

LevenbergMarquardt

shirint
Registered Member · Posts: 1 · Karma: 0

LevenbergMarquardt

Sat Jun 17, 2017 5:02 pm
Hi,
I have written a simple example of a least-squares problem that I want to solve with LevenbergMarquardt minimization, but the final result it returns is not the optimal value. Would you please help me find out why LevenbergMarquardt does not reach the optimum?
My objective function is as follows:
f(x) = sum(abs(A*x + 1)) + 0.5*sum_square(x)
where
A = [ -2  1
      -4  1
       4 -1 ]
The returned value varies with the initial value of x. For instance, starting from x = (0, 0) the solver returns f* = 2.40647, and starting from x = (1, 1) it returns f* = 2.76425.
The C++ code is as follows:
#include <fstream>
#include <sstream>   // for stringstream
#include <iostream>
#include <string>
#include <chrono>
#include <unsupported/Eigen/NonLinearOptimization>
#include <Eigen/src/Core/util/DisableStupidWarnings.h>
#include <numeric>
#include <memory>
#include <assert.h>

using namespace Eigen;
using namespace std;

int iter = 0;

// Generic functor
template <typename _Scalar, int NX = Dynamic, int NY = Dynamic>
struct DenseFunctor
{
    typedef _Scalar Scalar;
    enum {
        InputsAtCompileTime = NX,
        ValuesAtCompileTime = NY
    };
    typedef Matrix<Scalar, InputsAtCompileTime, 1> InputType;
    typedef Matrix<Scalar, ValuesAtCompileTime, 1> ValueType;
    typedef Matrix<Scalar, ValuesAtCompileTime, InputsAtCompileTime> JacobianType;
    typedef ColPivHouseholderQR<JacobianType> QRSolver;
    const int m_inputs, m_values;

    DenseFunctor() : m_inputs(InputsAtCompileTime), m_values(ValuesAtCompileTime) {}
    DenseFunctor(int inputs, int values) : m_inputs(inputs), m_values(values) {}

    int inputs() const { return m_inputs; }
    int values() const { return m_values; }
};

struct MyFunctor : DenseFunctor<double>
{
    Eigen::MatrixXd Mat;
public:
    MyFunctor(Eigen::MatrixXd& A) : DenseFunctor<double>(A.cols(), A.rows()), Mat(A) {}
    int operator()(const Eigen::VectorXd &x, Eigen::VectorXd &fvec) const {
        fvec = Eigen::VectorXd::Zero(Mat.rows());
        fvec(0) = ((Mat*x + Eigen::VectorXd::Ones(Mat.rows())).cwiseAbs()).sum() + 0.5*x.squaredNorm();
        return 0;
    }
};

int main()
{
    int info;
    Eigen::MatrixXd A(3, 2);
    Eigen::VectorXd x = Eigen::VectorXd::Zero(A.cols());
    Eigen::VectorXd fvec(A.rows());

    A << -2, 1,
         -4, 1,
          4, -1;

    MyFunctor functor(A);
    Eigen::NumericalDiff<MyFunctor> numDiff(functor);
    Eigen::LevenbergMarquardt<Eigen::NumericalDiff<MyFunctor>> lm(numDiff);
    info = lm.minimize(x);
    functor(x, fvec);
    cout << "The final solution is : " << endl << x << endl << endl;
    cout << "The optimal objective is : " << endl << fvec << endl << endl;

    cin.get();
    return 0;
}
ggael
Moderator · Posts: 3447 · Karma: 19
Re: LevenbergMarquardt

Fri Jun 23, 2017 7:55 am
LM is for solving nonlinear least-squares problems, i.e. objectives of the form:

f(x) = f_0(x)^2 + f_1(x)^2 + f_2(x)^2 + ...

Your functor must then fill fvec with the individual residuals, one per entry:

for (int i = 0; i < values(); ++i)
    fvec(i) = f_i(x);

So if your objective function is:

f(x) = sum(abs(A*x + 1)) + 0.5*sum_square(x)

then LM cannot be applied, because the term sum(abs(A*x + 1)) is not a sum of squares. Only the term 0.5*sum_square(x) fits the least-squares form; for instance, with x a vector, you would write:

fvec.tail(x.size()) = sqrt(0.5)*x;

