This forum has been archived. All content is frozen. Please use KDE Discuss instead.

Inverse matrix optimization in ROS using Eigen

Tags: None
xaru8145
Registered Member · Posts: 1 · Karma: 0
Hi all,

I am trying to write C++ code in ROS using Eigen3. I am processing real-time data with matrix operations, so I need the operations to be as fast as possible.
I want to invert a real, symmetric 200x200 matrix A, and this is the operation that is really slowing down my code: computing the inverse takes around 0.3 seconds when I need it to be under 0.1 seconds. My A matrix has a very particular shape, shown in the image below using spy(A) in Matlab. It is symmetric, a combination of a tridiagonal and a banded structure.

[Image: spy(A) plot from Matlab showing the sparsity pattern of A]

The inverse of A has exactly the same shape as A.
Right now my A matrix is defined as a dense double matrix:
Code: Select all
Eigen::MatrixXd A = HPH.transpose();
Eigen::MatrixXd invA = A.inverse();

Things I have tried:
1. Cholesky decomposition:
Code: Select all
Eigen::LLT<Eigen::MatrixXd> lltOfA(A);
Eigen::MatrixXd L = lltOfA.matrixL();
Eigen::MatrixXd LLt = L * L.transpose();
Eigen::MatrixXd invA = LLt.inverse();

But I still end up computing an inverse, and on top of that I am adding an extra product L*L', so this is even slower.

2. Sparse matrices
I have read that this is not the best option, since sparse matrices are not meant for computing explicit inverses; that is why they do not provide this function. You can of course compute the inverse with a solver, but I do not think that is a good option.

3. The Eigen FAQ suggests: "Make sure you compile with optimization enabled. This can easily gain you a factor of ten or more." Seems like a good option, but there is no explanation of how to apply it in Linux/ROS. Well, there is not any explanation at all.

4. The A matrix is very particular, and maybe there is an algebraic way to exploit its structure with a simple closed-form expression. I did some research but found nothing.
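Regarding point 3: in a catkin-based ROS workspace, optimization is usually controlled through the CMake build type. A sketch of the usual approach (exact flags depend on your setup):

Code: Select all
```shell
# Build the whole workspace with optimizations enabled:
catkin_make -DCMAKE_BUILD_TYPE=Release

# Or, equivalently, set flags in your package's CMakeLists.txt:
#   set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -O3 -march=native")
# -O3 enables aggressive optimization; -march=native lets Eigen use
# your CPU's vector instructions (SSE/AVX).
```
With -O2/-O3, Eigen's expression templates get inlined and vectorized, which is where the FAQ's "factor of ten" comes from.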

Any suggestion or idea on how to make the inverse faster?
Thanks a lot.
andrew-dy
Registered Member · Posts: 15 · Karma: 0
Consider whether you need the actual inverse, or just something that applies the action of the inverse.
In practice, computing an inverse and multiplying by it is extremely expensive. Unless you truly need the inverse itself, never compute it.
Instead, use a method that, given A and b in Ax = b, finds x.

For your system, an iterative solver (as opposed to a direct, factorizing solver) might be extremely efficient.
For example, a Conjugate Gradient solver is about as good as you can get for symmetric positive definite matrices, if your matrix is indeed positive definite.

You definitely want to store this matrix as a sparse matrix.
As a dense matrix, it stores 200*200 = 40,000 doubles, and each multiplication with it wastes a lot of work on multiplications by 0.
As a sparse matrix, it stores only about [2 to 3] * [number of nonzero entries] numbers, which could be as few as roughly 1,000 judging from your plot.
A sparse matrix type also has optimized solvers for sparse systems.

And when compiling your code, you can pass a flag to your compiler to enable optimization.
Be careful, as code compiled with optimizations is generally harder to debug.
Here is a link to a simple description of a few flag options and the differences between them for the gcc compiler:
https://www.rapidtables.com/code/linux/gcc/gcc-o.html

