Moderator
|
done. My next targets are sub-matrix expressions and features to add/remove rows/columns, etc... |
Registered Member
|
WOW!
friday: frustrated user. saturday: discussion. sunday: idea. monday: solved :star:
I have one question (naturally) left: how do I use the new features? I tried something like:

DynamicSparseMatrix mat;
SparseMatrix final_mat;
(both Eigen::ColMajor|Eigen::SelfAdjoint|Eigen::UpperTriangular)

mat.startFill();
for(;;) mat.fillrand(i,j) += val;
mat.endFill();
final_mat = mat;

The number of nonzeros is not right: it should be 3.2e6, but it is 28.2e6. Of course it is no wonder that the memory is blowing up. The copy final_mat = mat; seems to be OK. Is this due to the fact that += always inserts a new element? |
Registered Member
|
I'd like to add that filling a 24x24 matrix (576 entries at most) yields 624 nonzeros here?!
|
Moderator
|
look at the doc of each member function.
what you need to use is coeffRef(). There is no alias via operator()(,) because then we could not always distinguish between a const and a non-const call, and the difference here is HUGE! The fill and fillrand functions are provided to optimize the filling of a matrix, i.e., setting its elements. |
Registered Member
|
Ah, thanks!
Time and memory usage exceed my expectations! Thanks! Btw, I don't know whether it is intended: it seems that I have to compile the documentation twice. I compiled it once, and most command descriptions were squeezed into a very slim table column. A second "make doc" fixes the layout...
Last edited by Seb on Mon Jan 19, 2009 5:27 pm, edited 1 time in total.
|
Registered Member
|
Let me say that this thread is very impressive. I really have to dedicate some time to try out Eigen!
'And all those exclamation marks, you notice? Five? A sure sign of someone who wears his underpants on his head.' ~Terry Pratchett
'It's funny. All you have to do is say something nobody understands and they'll do practically anything you want them to.' ~J.D. Salinger |
Registered Member
|
Hi Gael,
though this thread seems to be moving in another direction, I would like to add some new problems here.

1st: When performing the product A = C'BC, where B is symmetric (upper triangular), I encounter a segfault if A is upper triangular as well. It seems that the assignment A = general sparse matrix does not work. I also tried A = (general sparse matrix).marked() (here I assume the problem is that the storage is still that of a general matrix, which leads back to the first case).

2nd: I tried using SparseVector. The first problem I encountered was with std::vector (a vector of fixed-size arrays of sparse vectors). The resize method of std::vector does not work well. I know that I can pass a custom allocator, but then I have to write vector.resize(number, {SparseVector(), SparseVector(), ...}); which is rather unsatisfactory. The reason I use C-arrays and std::vector is simple: (a) ublas::vector has problems allocating SparseVector; (b) Eigen::Vector with a fixed size could be used for the C-array, and Eigen::Vector with dynamic size for the std::vector, but then I would have to reimplement Eigen::NumTraits and other things that I do not understand at the moment... I wonder whether reimplementing those is a must for Eigen...

3rd: When using SparseVector I found that certain operations are not yet implemented (using the svn version from last week), e.g. sparsevector += sparsevector.

I also have a question on InnerIterators: can I use them to modify values and indices as well? When iterating through a sparse object I would like to modify an element in place. Thanks again! |
Moderator
|
I guess A, C and B are all sparse. Anyway, you are currently asking too much of the UpperTriangular / SelfAdjoint flags, which are not really supported yet. Currently, in the sparse module, they are only correctly handled by the matrix product with a dense vector or a dense matrix, and by solveTriangular(). This should be improved soon. Also note that C'BC is computed as (C'B)C, and since (C'B) is not symmetric, Eigen cannot automatically determine that C'BC is selfadjoint.
std::vector: wow, even std::vector won't work nicely. The best is to write your own very basic fixed-size vector class, which would only have a ctor and operator[] defined (it is at most 10 lines of code). Eigen::Matrix assumes you are storing scalar values to do math; it is not a generic container. Same for ublas, I guess.
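The suggested container can be sketched in a few lines of plain C++ (the class name TinyArray is made up for illustration; adapt the element type and size as needed):

```cpp
#include <cassert>
#include <cstddef>

// Minimal fixed-size container: just a ctor and operator[], as suggested.
// Unlike Eigen::Matrix it makes no assumption that T is a math scalar,
// so T could be, e.g., a sparse vector type.
template <typename T, std::size_t N>
struct TinyArray {
    T data[N];

    TinyArray() : data() {}                                    // value-initialize elements
    T&       operator[](std::size_t i)       { return data[i]; }
    const T& operator[](std::size_t i) const { return data[i]; }
    static std::size_t size() { return N; }
};
```

A std::vector of such TinyArray elements then resizes without the awkward brace-list initializer mentioned above, since TinyArray is default-constructible and copyable.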
I forgot some macros... now it's working.
yes, you can! Simply use it.valueRef() = ...; |
Registered Member
|
Thanks, the idea of writing my own container class had not crossed my mind so far.
I encountered a few new problems:

(1) sparse_matrix = sparse_matrix * sparse_matrix breaks down when the matrices become too large. It worked very well in many cases here. Now I tested matrix A: nnz: 3428676, dim: 183618 x 183618; matrix B: nnz: 176418, dim: 183618 x 176418 (an identity matrix with some columns deleted). Then C = A*B throws some exception (frankly, the objects are too large to compile in debug mode).

(2) For factorizing sparse matrices, which methods (LU, LLT, LDLT) are currently supported?

(3) I tried to do something like double C = vector.transpose()*sparse_matrix*vector; It did not compile (can't assign a matrix product to double - is this behaviour intended? I think it is not very convenient). I even have to use the temporaries D = sparse_matrix*vector; C = vector.dot(D); because I cannot insert the matrix-vector product into the argument of dot() due to its inefficiency... (It took ages and I had to abort the operation after minutes.)
Last edited by Seb on Wed Jan 28, 2009 3:31 pm, edited 1 time in total.
|
Moderator
|
I'll try that, but your matrices do not look that large, nor that dense.
LU via SuperLU and UmfPack; LLT via Cholmod and TAUCS. But the support is quite experimental, as I don't know those libraries well. Check test/sparse_solver.cpp for examples, and feel free to propose patches for the respective backends (in Eigen/src/Sparse/, one file per backend). At least they compute correct results, but Cholmod, for instance, has so many options that I'm pretty sure I did not set them for best performance.
you can also do: double C = (vector.transpose()*sparse_matrix*vector).coeff(0,0); or: C = vector.dot(sparse_matrix*vector); which should work well. I'll check why it does not. |
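The two-step evaluation recommended here (form the sparse matrix-vector product once, then take the dot product) can be sketched without Eigen using a simple triplet representation (all names below are illustrative, not Eigen API):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// One nonzero of a sparse matrix, stored as (row, col, value).
struct Triplet { std::size_t r, c; double v; };

// y = M * x for a matrix given by its nonzeros: O(nnz) work.
std::vector<double> spmv(const std::vector<Triplet>& M,
                         const std::vector<double>& x) {
    std::vector<double> y(x.size(), 0.0);
    for (const Triplet& t : M) y[t.r] += t.v * x[t.c];
    return y;
}

// C = x' * M * x, evaluated as dot(x, M*x): the intermediate product
// is formed exactly once instead of re-walking the matrix repeatedly.
double quadraticForm(const std::vector<Triplet>& M,
                     const std::vector<double>& x) {
    std::vector<double> y = spmv(M, x);
    double c = 0.0;
    for (std::size_t i = 0; i < x.size(); ++i) c += x[i] * y[i];
    return c;
}
```

For M = [[2, 1], [0, 3]] and x = (1, 2), this gives M*x = (4, 6) and x'Mx = 16, the same value the one-liner C = vector.dot(sparse_matrix*vector) would produce.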