
Beginner example for leastSquares with D>3

SanCarlos (Registered Member)
I admit I'm a brand new Eigen user, but I'm having some conceptual problems with some simple examples.

I have about m=10 known basis functions and N~=20,000 known sample points.
I want to find the 10 basis weights for a least-squares fit, and the residual L2 error.


The leastSquares() function can do this, I'm sure, but the example given is for a set of 3 basis functions and 5 points. It's unclear whether I can just build a matrix of size (N, m) and pass that to leastSquares, or whether I really need to build 20,000 individual vectors and put them into an array, as the leastSquares example code does.

For reference, here's the example Eigen gives:
Code:
Vector3d points[5];
points[0] = Vector3d( 3.02, 6.89, -4.32 );
points[1] = Vector3d( 2.01, 5.39, -3.79 );
points[2] = Vector3d( 2.41, 6.01, -4.01 );
points[3] = Vector3d( 2.09, 5.55, -3.86 );
points[4] = Vector3d( 2.58, 6.32, -4.10 );
// create a vector of pointers to the points
std::vector<Vector3d*> points_ptrs(5);
for(int k=0; k<5; ++k) points_ptrs[k] = &points[k];
Vector3d coeffs; // will store the coefficients a, b, c
linearRegression(
  5,
  &(points_ptrs[0]),
  &coeffs,
  1 // the coord to express as a function of
    // the other ones. 0 means x, 1 means y, 2 means z.
);
 


How would I do this with 10 basis functions and 20,000 points instead of the 3-by-5 example here? It doesn't feel right to build 20,000 VectorXf vectors and then store pointers to each one of them.

Thanks for helping a newbie!
SanCarlos (Registered Member)
After spending a full day experimenting, I have to conclude that least-squares fits like this can't be computed efficiently in Eigen. The overhead of creating the individual vectors and then assembling an array of them is just too large.
ErlendA (Registered Member)
Linear least squares should be really efficient: you only need matrices here. See http://en.wikipedia.org/wiki/Least_squares and the formulae given there: $\hat{\beta}=(X^T X)^{-1} X^T y$, where $y$ is your data, $X$ is your model and $\hat{\beta}$ is your estimate. Hence you only need to assemble one matrix and one vector (the data vector).

If assembling the model matrix $X$ is really slow, there are probably improvements to be made in the code; but it may also be that the assembly is inherently slow, in which case it would be slow with any other library as well.
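
In Eigen3 that might look like the following minimal sketch; the monomial basis pow(t, j) is just a placeholder for your actual 10 basis functions:
Code:
#include <cmath>
#include <Eigen/Dense>
using namespace Eigen;

// Solve beta = (X^T X)^{-1} X^T y without ever forming the inverse.
// t: sample coordinates, y: sample values, m: number of basis functions.
// The monomial basis t^j is purely a stand-in for the real basis.
VectorXd fitLeastSquares(const VectorXd& t, const VectorXd& y, int m)
{
    const int N = t.size();
    MatrixXd X(N, m);                    // model matrix, one row per sample
    for (int i = 0; i < N; ++i)
        for (int j = 0; j < m; ++j)
            X(i, j) = std::pow(t(i), j); // j-th basis function at t(i)

    MatrixXd A = X.transpose() * X;      // m x m normal matrix
    VectorXd b = X.transpose() * y;
    return A.ldlt().solve(b);            // Cholesky solve of the normal equations
}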
ggael (Moderator)
SanCarlos: this linearRegression function is from Eigen2, and it is deprecated in Eigen3. The fastest way to solve your problem is to create an N x m matrix D containing your basis functions evaluated at the sample coordinates and solve the normal equations using Cholesky:

VectorXf x = (D.adjoint() * D).eval().llt().solve(D.adjoint() * B);

where B is a vector of size N containing the values at the evaluation points.
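
For completeness, here is a self-contained sketch of that approach which also reports the residual L2 error asked about above; the monomial basis and the random data are only placeholders:
Code:
#include <cmath>
#include <iostream>
#include <Eigen/Dense>
using namespace Eigen;

int main()
{
    const int N = 20000, m = 10;

    // Random data stands in for the real sample coordinates and values.
    VectorXf t = VectorXf::Random(N);
    VectorXf B = VectorXf::Random(N);

    // D is N x m: row i holds the m basis functions evaluated at t(i).
    MatrixXf D(N, m);
    for (int i = 0; i < N; ++i)
        for (int j = 0; j < m; ++j)
            D(i, j) = std::pow(t(i), float(j));

    // Normal equations solved via Cholesky, as above.
    VectorXf x = (D.adjoint() * D).eval().llt().solve(D.adjoint() * B);

    // Residual L2 error of the fit.
    std::cout << "coefficients: " << x.transpose() << std::endl;
    std::cout << "residual L2 error: " << (D * x - B).norm() << std::endl;
}

If the basis is nearly rank-deficient, D.colPivHouseholderQr().solve(B) or an SVD-based solver avoids squaring the condition number, at some extra cost.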

