Registered Member
Hi,
we are implementing the LSCM algorithm to parameterize a 3D triangle mesh. This requires solving a linear least-squares problem. Our matrix A is very sparse (at most 6 entries per row), but neither symmetric nor square.

For sizes like 1200x660, the resulting systems are solved very fast (just a few milliseconds). But with larger input meshes, solving quickly becomes unacceptably slow: for example, a 20,000x10,000 matrix A takes a few minutes. We formerly used the SPQR solver, which was much faster (a few seconds for the 20,000x10,000 cases), but we would prefer to use Eigen without the SPQR library.

So the question is: is there any chance to make our code faster with Eigen? Are there ways to precondition our data that are perhaps applied automatically in SPQR? We don't necessarily need a QR decomposition; we just tried it, and it was faster than, for example, solving via the pseudoinverse with BiCGSTAB, and it also delivered a much better result. BTW, here's our very simple solver code:
Thanks a lot in advance for any hints!
Moderator
You might try the normal equations with SimplicialLDLT<> or ConjugateGradient<> instead of BiCGSTAB.
Regarding SparseQR, a sensitive parameter is the pivot threshold (http://eigen.tuxfamily.org/dox-devel/cl ... 50d1506017). If your problem has full rank, you can set it to zero to disable expensive pivoting.
Moderator
BTW, it would be nice if you could share your large matrix as explained here: http://eigen.tuxfamily.org/dox-devel/gr ... tml#title3 so that we can include it in our library of real-world problems for quality and performance checking.
Registered Member
Hi all,
I would also like to share my experience with SparseQR. In my application I need to find the null space of a sparse rectangular matrix C(m,n) with n>m. I'm trying to use SparseQR for this (maybe better approaches exist; I'm open to suggestions). Here is my code:
I understand that qr.rightCols(...) is not possible at the moment, but I don't understand why the assignment Q = qr.matrixQ() is so slow. Any help is greatly appreciated. Thanks!
Registered Member
Thanks a lot for your answers, and sorry for the long delay.
Here's our feedback: since our matrix has full rank, we tried setPivotThreshold, but unfortunately this did not make any noticeable difference in compute time:
We have also exported our matrices as you suggested and tried to feed them to the benchmark routine. The spbenchmark, however, crashed, probably because our matrix is not square:
We could try to make it square somehow, if that helps... We will then also try SimplicialLDLT<>. I remember doing something like this with ConjugateGradient, but it was also slower, despite the worse, approximate results (which was surprising, since I'd expect QR to be the slowest, as it delivers the exact solution).

BTW, here are the matrices for a very simple 4000x2004 example, which took 3.4 seconds: https://cloud.igd.fraunhofer.de/ownclou ... 7105e6be59 Typical examples will be larger and hence take significantly longer to solve, but we found this small example a nice and quick test case for performance.
Moderator
On your sample matrix, ConjugateGradient, or better, LeastSquaresConjugateGradient (devel branch only), works pretty well (0.015s).
Output:
Registered Member
Wow, thanks a lot!
It seems we were simply using the CG solver and the normal equations incorrectly before... Using the CG solver directly as you proposed, the results look as correct as before, but we got a speedup of more than a factor of 100! This really solved our problem, and it makes us stick with Eigen, which is great. Thank you so much, and keep up the great work!