
Re-use/re-size matrix without realloc()

naptrel (Registered Member)
Hello,

I'm performing repeated non-linear least-squares optimisations to solve for N unknowns using M knowns, where N is typically between 10 and 200 and M is typically between N and 10*N. So these are moderately sized problems, but big enough to reveal inefficiencies in implementation. I'm porting code from my own homebrew linear algebra library (in C) to Eigen. Eigen is certainly more numerically stable, and is faster where it counts most (matrix multiplication), though some of the decompositions are slower (a price worth paying for the greater numerical stability).

But this forms part of a (near) real-time system, which requires repeated, fast computations and runs unsupervised for months at a time without restarts. I'm concerned about hammering the heap, so I presently use one of the standard tricks to reduce allocs/frees by re-using dense matrix memory from one computation to the next. Suppose I've previously allocated a 100x100 matrix and am now solving a smaller problem needing, say, an 80x80 matrix. I could:

a. Create a 'virtual' 80x80 matrix that points to the memory in the 100x100 matrix, but uses only the first 6400 elements from it.

b. Resize the 100x100 matrix to be an 80x80 matrix (such that rows()==80 and cols()==80), but without shrinking the allocated memory, so that it could subsequently be restored to a 100x100 array without a realloc(), i.e. decouple the matrix dimensions from the allocated memory size.

I don't need to conserve the contents of the matrix upon resizing.

Is there an efficient way of doing something similar in Eigen? I've been trying to figure out some way of doing (a) using block(), but it's proving very inelegant. As for (b): is there a way to force resize() to realloc() only when growing a matrix to a bigger size than it's ever previously been?
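
For concreteness, the kind of block()-based workaround I've been attempting looks roughly like this; a sketch only, with illustrative sizes:

Code:
#include <Eigen/Dense>
using namespace Eigen;

// Pre-allocate once at the largest size ever needed.
MatrixXd buf(100, 100);

// 'Virtual' 80x80 matrix: a view onto the top-left corner.
// The view is not contiguous (its outer stride stays 100),
// and every expression has to be written against the view.
auto virt = buf.topLeftCorner(80, 80);
virt.setZero();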

If I'm not simply missing some equivalent Eigen trick, then I'll have to evaluate whether it's worth maintaining this functionality, given the greater care required with threading for a marginal speed gain. I'm pretty set on sticking with Eigen.
ggael (Moderator)
Given the sizes of your problems, the expected gain should really be small indeed. Moreover, unless your program allocs/deallocs many times in other places, a realloc will likely behave just like (b). Anyway, you can also use Map objects, e.g.:
Code:
#include <Eigen/Dense>
using namespace Eigen;

double* data = new double[biggest_data_size];
Map<MatrixXd> mat(data, rows, cols);  // view onto the pre-allocated buffer
mat = 2 * mat * mat.adjoint();
...
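
If the mapped size has to change from one problem to the next, the Map can also be re-seated onto the same buffer with a placement new; a small sketch, with illustrative names:

Code:
#include <Eigen/Dense>
#include <new>  // placement new
using namespace Eigen;

// One buffer, sized for the largest problem ever solved.
double* data = new double[100 * 100];
Map<MatrixXd> mat(data, 100, 100);

// Later, for a smaller 80x80 problem: re-seat the Map onto the
// same storage with placement new -- no allocation takes place.
new (&mat) Map<MatrixXd>(data, 80, 80);
mat.setZero();  // contents need not be preserved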
ggael (Moderator)
Btw, could you be more specific about which solvers are slower? Are you comparing the same algorithms?
naptrel (Registered Member)
ggael wrote: Btw, could you be more specific about which solvers are slower? Are you comparing the same algorithms?

Hello Gael,

Thanks for your help.

Regarding algorithms that are slower in Eigen, the single algorithm I've spent the most effort timing is Cholesky decomposition. I am NOT comparing equivalent algorithms. Eigen's algorithms are better, being much less prone to failing as the matrix approaches the limits of positive definiteness.

Up till now, I've been using a naive A = LL' in-place decomposition. This decomposition is very marginally faster than Eigen's LL' or LDL' for small matrices (< ~100x100); Eigen pulls ahead as the matrices grow. But a more significant slowdown comes when computing inverse(A) for the solution covariance matrix. The home-brew code uses a dedicated in-place algorithm, whereas in Eigen I'm solving A^-1 = LLT.solve(I). Here, for matrices smaller than ~500x500, the home-brew version is much faster (2-8 times faster). For larger matrices, Eigen again pulls ahead.
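
For clarity, the Eigen route I'm timing looks roughly like this (a sketch, with A assumed symmetric positive definite and names illustrative):

Code:
#include <Eigen/Dense>  // pulls in Eigen/Cholesky
using namespace Eigen;

// A is an n x n symmetric positive definite matrix.
LLT<MatrixXd> llt(A);                                 // A = L L'
MatrixXd Ainv = llt.solve(MatrixXd::Identity(n, n));  // A^-1, column by column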

With such small matrices, such quick running times, and such an unrigorous testing platform (I'm just hacking about), I'm not concerned. Eigen is always faster (often MUCH faster) on big matrices, and that's what counts. For me, the biggest problems I have to deal with are estimating ~2000 unknowns in a Kalman filter; at these dimensions, Eigen trounces home-brew. Besides, I'm considering trialling a move from Cholesky to QR for non-linear least squares to see how it pans out.
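
If I do trial QR, I'd expect the least-squares solve to look something like this; a sketch only, with illustrative names:

Code:
#include <Eigen/Dense>  // pulls in Eigen/QR
using namespace Eigen;

// Solve min ||A*x - b|| via column-pivoting Householder QR
// (rank-revealing, hence robust for nearly rank-deficient A).
VectorXd x = A.colPivHouseholderQr().solve(b);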

Nathaniel
ggael (Moderator)
OK, indeed we are still missing an efficient inverse computation of triangular matrices.

