This forum has been archived. All content is frozen. Please use KDE Discuss instead.

linear least squares with inequality constraint

lood339 (Registered Member, Posts: 2, Karma: 0)
Hi all,
I need to solve a least squares problem with an inequality constraint. The problem is of the form Ax = b, with x > 0.
Matlab provides lsqlin http://www.mathworks.com/help/toolbox/o ... sqlin.html to solve this kind of problem.
I don't know whether Eigen provides a similar method. Any suggestions or code are very welcome.
Thanks very much.

lood339 (Registered Member, Posts: 2, Karma: 0)
I just found a library that can solve this problem, but it's a Fortran program.
http://cran.r-project.org/web/packages/bvls/index.html
Does anyone know how to call a Fortran DLL from C/C++? Thanks.
ggael (Moderator, Posts: 3447, Karma: 19)
I have some code there:
http://dl.dropbox.com/u/260133/QuadProg/eiquadprog.hpp
with a simple example:
http://dl.dropbox.com/u/260133/QuadProg/example.cpp

It is a direct adaptation of QuadProg++ to use Eigen. It will only work for dense A (not sparse).
normvcr (Registered Member, Posts: 12, Karma: 0)
lood339 wrote: I just found a library that can solve this problem, but it's a Fortran program.
http://cran.r-project.org/web/packages/bvls/index.html
Does anyone know how to call a Fortran DLL from C/C++? Thanks.


You can also convert the Fortran to C using f2c.
BigDaddyDrew (Registered Member, Posts: 14, Karma: 0)
Hi Gael,

I'm using your solve_quadprog() method and running into some problems. I'm first attempting to implement ordinary least squares with no constraints, but solve_quadprog simply returns NaN for the objective function and for the x vector. I'm pretty sure the following should work...

My 'variable' space (x vector, if you will) consists of the residuals from the least square problem appended with the coefficients from the least squares problem. I use the equality constraints to mate up the residuals, betas, X matrix, and y vector.

Any ideas?

Code:
    // Implement y ~ X via a Quadratic problem
    DoubleMat X( 10, 2 );
    X << 0.8, 0.6,
         0.8, 0.9,
         0.3, 0.2,
         0.6, 0.8,
         1.0, 0.9,
         0.0, 0.2,
         0.1, 0.4,
         0.5, 0.9,
         0.6, 0.2,
         0.9, 0.9;

    DoubleVec y( 10 );
    y <<  7.2,
          5.8,
          3.0,
          4.5,
          7.3,
         -0.1,
         -0.6,
          1.3,
          7.0,
          6.5;

    DoubleMat G( DoubleMat::Zero( 12, 12 ) );
    G.block( 0, 0, 10, 10 ).diagonal() = DoubleVec::Ones( 10 ) * 2; // Only first 10 elements, since the 2 'betas' don't appear in the objective
    DoubleVec g0( DoubleVec::Zero( 12 ) );

    DoubleMat CE( DoubleMat::Zero( 10, 12 ) );
    CE.block( 0, 0, 10, 10 ).diagonal() = DoubleVec::Ones( 10 );
    CE.block( 0, 10, 10, 2 ) = -X;

    DoubleMat CI( DoubleMat::Zero( 12, 1 ) ); DoubleVec ci0( DoubleVec::Zero( 1 ) );

    cout << "G:"   << endl << G   << endl << endl;
    cout << "g0:"  << endl << g0  << endl << endl;
    cout << "CE:"  << endl << CE  << endl << endl;
    cout << "y:"   << endl << y   << endl << endl;
    cout << "CI:"  << endl << CI  << endl << endl;
    cout << "ci0:" << endl << ci0 << endl << endl;


    DoubleVec result( 12 );
    double objective( solve_quadprog( G, g0, CE.transpose(), y, CI, ci0, result ) );

    cout << "objective: " << objective << endl << "result: " << endl << result << endl;




Thanks!
ggael (Moderator, Posts: 3447, Karma: 19)
G must be positive definite, which is not the case with your matrix G.
BigDaddyDrew (Registered Member, Posts: 14, Karma: 0)
Ahh, touché, thank you.

The original poster was looking to do something similar. Do you have a suggestion for how to shoehorn the Linear Least Squares formulation into a quadratic problem in an acceptable way?

My method puts the coefficients into the variable space, which results in the 0 entries along the diagonal of G (non-zero would give a different type of regression: ridge, if I remember correctly).

Thanks!
ggael (Moderator, Posts: 3447, Karma: 19)
I don't know if that answers your question, but to solve Ax = b with x > 0, call:

solve_quadprog( A, b, CE, ce0, CI, ci0, x )

with CE and ce0 an empty matrix and vector, and:

CI.setIdentity();
ci0.setZero();
BigDaddyDrew (Registered Member, Posts: 14, Karma: 0)
Hmm, thanks for the reply. Maybe I'm not seeing the light here!

I'm essentially trying to implement a Least Squares solution, with constraints on the coefficients.

So, say I have an [m x n] matrix 'X' containing m observations of n independent variables. I have an [m x 1] vector 'y' containing the m observations of the dependent variable. I then have an [n x 1] vector 'b' that will contain the least squares coefficients.

In an unconstrained problem, I can simply do: b = X.colPivHouseholderQr().solve( y ) (or use one of the other solvers).

However, I'd like to impose constraints on b. Say, b(0) > 5, or b(1) < b(0), etc.

In the past I've used quadratic solvers to do this (for example, Mosek), using the setup from my earlier post, so I was trying to invoke your quadratic solver in a similar fashion (albeit without the constraints on b for now). If it weren't for the requirement that G be positive definite, this would work fine.

Perhaps you have some advice for how to implement what I'm after (or perhaps I'm incorrectly interpreting the advice you already gave!)
ggael (Moderator, Posts: 3447, Karma: 19)
OK, with your notation you should call the solver like this:

solve_quadprog( X, y, CE, ce0, CI, ci0, b )

CE and ce0 are empty because there is no equality constraint, then for the inequality:

CI = [ 1 0 0 0 0 ... ; 1 -1 0 0 0 ... ]
ci0 = [ -5 0 ]

(rows of CI^T shown; eiquadprog's convention is CI^T x + ci0 >= 0)

first row says: b(0) - 5 >= 0, i.e. b(0) >= 5
second row says: b(0) - b(1) >= 0, i.e. b(0) >= b(1)
BigDaddyDrew (Registered Member, Posts: 14, Karma: 0)
Thanks for your help and patience with this Gael :)

However, doesn't the G matrix have to be square? My X matrix is [m x n], where m almost never equals n.

solve_quadprog is the normal quadratic optimization routine of min( .5 x' G x ... ), right? So I was originally recoding least squares into that format. I.e., my 'x' contained the regression residuals and the regression coefficients, and my 'G' contained 1s on the diagonal for each residual (so that we are effectively minimizing the sum of squared residuals). Then I used the equality constraints to formulate the relationship between the X matrix, the y vector, the residuals, and the regression coefficients.
ggael (Moderator, Posts: 3447, Karma: 19)
You are right, I forgot that this routine requires the least squares problem in normal-equation form, so:

solve_quadprog( X.transpose()*X, -X.transpose()*y, CE, ce0, CI, ci0, b )
BigDaddyDrew (Registered Member, Posts: 14, Karma: 0)
Ahhhhh, you are right. Thank you very much.
jbauer (Registered Member, Posts: 25, Karma: 0)
Really cool port of QuadProg, thanks for posting! With g++ 4.6 and -Wall -Wextra -O3 I receive the following warning:
...longpath/eiquadprog.hpp:518:13: warning: 'qq' may be used uninitialized in this function [-Wuninitialized]

Is there a safe fix for this warning?

Thank you!
euans (Registered Member, Posts: 3, Karma: 0)
Hi All,

I'm trying to solve a least squares problem while ensuring that the solution for all variables is in the range -1 to 1. It looks like eiquadprog should be able to do this, but even though the true solution is within these limits, eiquadprog doesn't return it. I'm pretty sure all the constraints are set up correctly and that the matrix G is invertible. Is this the correct way to specify these constraints?

Thanks

Code:


// Given:
//   G = [ 1.07696e-004  1.02703e-005 ]    g0 = [ 2.17890e-005  -5.86376e-005 ]^T
//       [ 1.02703e-005  1.22411e-004 ]
// Solve:
//   min f(x) = 1/2 x^T G x + g0^T x
// s.t.
//   x_1 + 1 >= 0
//   x_2 + 1 >= 0
//   -x_1 - 1 >= 0
//   -x_2 - 1 >= 0
//   i.e. -1 <= x_1 <= 1, and -1 <= x_2 <= 1
//
// Solution: x_1 = 0.25, x_2 = -0.5

// Set up the number of variables
int nVar = 2;

// No equality constraints
MatrixXd CE = MatrixXd::Zero(nVar, 0);
VectorXd ce0 = VectorXd::Zero(0);

// Positive and negative inequality constraints for every variable
MatrixXd CI = MatrixXd::Zero(nVar, nVar * 2);
VectorXd ci0 = VectorXd::Zero(nVar * 2);

// Set up the inequality constraints such that: -1 <= x_1 <= 1, and -1 <= x_2 <= 1
for (int i = 0; i < nVar; ++i)
{
    // x + 1 >= 0, i.e. x >= -1
    CI(i, i) = 1.;
    ci0(i) = 1.;
}
for (int i = 0; i < nVar; ++i)
{
    // -x - 1 >= 0, i.e. x <= -1
    CI(i, i + nVar) = -1.;
    ci0(i + nVar) = -1.;
}

// Set up matrix A
MatrixXd emA(nVar, nVar);
emA(0, 0) = 1.07696e-004;
emA(0, 1) = 1.02703e-005;
emA(1, 0) = 1.02703e-005;
emA(1, 1) = 1.22411e-004;

// Set up vector b
VectorXd evB(nVar);
evB(0) = 2.17890e-005;
evB(1) = -5.86376e-005;

VectorXd evX = VectorXd::Zero(nVar);

// Solve the inverse problem as a standard invertible-matrix problem
evX = emA.inverse() * evB;
printf("\nSolution Ax = b");
printf("\nevX : %.5e, %.5e", evX(0), evX(1));
printf("\nevX : %.5f, %.5f", evX(0), evX(1));

// >> Solution Ax = b
// >> evX : 2.50001e-001, -4.99997e-001
// >> evX : 0.25000, -0.50000

// Solve inverse problem as a quadprog problem with constraints
solve_quadprog(emA, evX, CE, ce0, CI, ci0, evX);
printf("\nSolution Ax = b with quadprog and constraints");
printf("\nevX : %.5e, %.5e", evX(0), evX(1));
printf("\nevX : %.5f, %.5f", evX(0), evX(1));

// >> Solution Ax = b with quadprog and constraints
// >> evX : -1.00000e+000, -1.00000e+000
// >> evX : -1.00000, -1.00000

Last edited by euans on Thu Aug 04, 2016 11:43 am, edited 2 times in total.

