
Operations on special types of matrices

mcopik (Registered Member)
I'd like to use Eigen for operations involving "special" types of matrices, such as triangular, symmetric/Hermitian, and tridiagonal, and I have three questions:

1) I have found the information about triangular and symmetric views on matrices, but is there any support for arithmetic operations on BandMatrix? Has anything changed since this discussion? If not, is it advisable to use a regular dense or sparse matrix for storing tridiagonal matrices?

2) Do views have full support for arithmetic operations? I have been wondering whether I can use them as the type defining a matrix, or whether they should only be used in expressions. For example, consider a generic kernel computing a GEMM with the second operand transposed:
A * B.transpose();
This does not compile if B is obtained from a selfadjointView<Eigen::Upper>, due to the lack of a transpose implementation. Am I using this incorrectly? My idea was to have a kernel that performs a GEMM with transposition for dense matrices, while for symmetric matrices the transpose would be a no-op.
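
A minimal sketch of the kind of kernel I have in mind (gemm_bt is just my own name for it):

template<typename MatA, typename MatB>
auto gemm_bt(const MatA& A, const MatB& B) -> decltype(A * B.transpose())
{
    // Fine for dense B; fails to compile when B is a SelfAdjointView,
    // because SelfAdjointView currently has no transpose().
    return A * B.transpose();
}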

3) Let's assume that we want to perform a sequence of operations where the input matrices may have special properties, e.g.
MatrixXd C = f(A, B);
MatrixXd E = g(C, D);
If both A and B have a specific property (symmetry, triangularity) and f preserves this property, is Eigen capable of detecting such a situation and using this knowledge for possible optimizations in g? Is it possible that the example above performs worse than
MatrixXd E = g( f(A, B), D);
because of the explicit type definition in the former example?

I do not want to decrease performance through explicit type definitions, and I know that using auto is not recommended with Eigen. I'm not sure whether
auto C = f(A, B).eval();
MatrixXd E = g(C, D);
is safe in terms of performance (no unexpected temporaries; C is used directly and the expression is evaluated in its constructor) and correctness for complex implementations of f.

Should I worry about this, or does Eigen expect users to track how matrix properties change during execution and opt into optimizations by explicitly using triangular or symmetric views at each stage of the computation?
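
To make the last point concrete, by "explicitly using views at each stage" I mean something like this (a sketch, assuming the intermediate results happen to stay symmetric):

MatrixXd C = A.selfadjointView<Upper>() * B;  // exploit the symmetry of A
MatrixXd E = C.selfadjointView<Upper>() * D;  // re-assert the symmetry of C by hand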
ggael (Moderator)
1) Nobody seems to be interested in working on this aspect, so the status is still the same...

2) Oops, that's a mistake. I thought that transpose() was implemented in TriangularBase (which is a base of SelfAdjointView), but actually it is only available in TriangularView. Adding it is straightforward: it's just a matter of returning mat.transpose().selfadjointView<opposite>().
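
For instance, this quick check illustrates the equivalence (assuming real-valued matrices, so selfadjoint means symmetric):

#include <Eigen/Dense>
using namespace Eigen;

int main()
{
    MatrixXd M = MatrixXd::Random(4, 4);
    MatrixXd S1 = M.selfadjointView<Upper>();              // densify the view
    MatrixXd S2 = M.transpose().selfadjointView<Lower>();  // its transpose
    // S1 is symmetric, so S1 == S1.transpose() == S2.
    return ((S1 - S2).norm() < 1e-12) ? 0 : 1;
}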

3) That depends on what f and g are, but in most cases the structure is lost. For instance, if you compute A^T * S * A with S symmetric, we cannot know at compile time that the two A's are the same, and thus we cannot know that the result is selfadjoint.
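
What you can do is re-assert the structure by hand wherever you know it holds, e.g. (sketch):

// S is symmetric, stored in its upper part. The product A^T * S * A is
// mathematically symmetric, but Eigen cannot deduce that at compile time:
MatrixXd R = A.transpose() * S.selfadjointView<Upper>() * A;
// so exploit the symmetry explicitly again in later stages:
// ... R.selfadjointView<Upper>() * X ...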
ggael (Moderator)
For 2):

https://bitbucket.org/eigen/eigen/commits/4a73bcc26ca2/
Summary: Add transpose, adjoint, conjugate methods to SelfAdjointView (useful to write generic code)

This is backported to 3.3 and will be part of 3.3.2.
mcopik (Registered Member)
Thanks for the swift reply and bugfix!

3) I'm trying to find a basic policy for introducing these special types into Eigen code. I want to compare Eigen with other linear algebra toolkits, and I want to give it a fair trial by avoiding code that may prevent optimizations Eigen could otherwise apply. I'm not sure whether it's better in the long run to explicitly define the matrix type or to use the auto-eval approach. If the latter does not cause any problems in assignment, maybe it's the better solution? At this point I don't assume any specific linear algebra operation, hence I can't conclude that I'd never create optimization opportunities for Eigen.

Just a quick follow-up question: am I right to think that SelfAdjointView is merely a wrapper keeping a reference to a matrix? If I'm going to use a symmetric/self-adjoint matrix in my computations, do I have to keep the underlying matrix alive? I guess the following is not safe and would cause a dangling reference?

template<typename T>
/*some type*/ generate(int rows, int cols)
{
    Matrix<T, Dynamic, Dynamic> C(rows, cols);
    // generate entries.
    return C.template selfadjointView<Upper>();
}
ggael (Moderator)
mcopik wrote: Just a quick follow-up question: am I right to think that SelfAdjointView is merely a wrapper keeping a reference to a matrix? If I'm going to use a symmetric/self-adjoint matrix in my computations, do I have to keep the underlying matrix alive? I guess the following is not safe and would cause a dangling reference?

template<typename T>
/*some type*/ generate(int rows, int cols)
{
    Matrix<T, Dynamic, Dynamic> C(rows, cols);
    // generate entries.
    return C.template selfadjointView<Upper>();
}


Yes, SelfAdjointView is a view, so don't do that! What you would need is a SelfAdjointMatrix, possibly with compact storage.
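
In the meantime, a safe pattern (just a sketch) is to return the dense matrix by value and take the view at the call site, so the view never outlives its data:

template<typename T>
Matrix<T, Dynamic, Dynamic> generate(int rows, int cols)
{
    Matrix<T, Dynamic, Dynamic> C(rows, cols);
    // generate entries...
    return C;
}

// at the call site:
// auto M = generate<double>(4, 4);
// auto S = M.selfadjointView<Upper>(); // M stays alive in this scope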
mcopik (Registered Member)
ggael wrote: What you would need is a SelfAdjointMatrix, possibly with compact storage.

So does Eigen offer such a type? I can't find it in the docs. If not, is using views the only way to tell Eigen that a matrix is symmetric/triangular, etc.?
ggael (Moderator)
Sorry, I've not been very clear: someone has to implement it first!
Royi (Registered Member)
What about using the Intel MKL solvers when MKL is linked?
It has tridiagonal / banded matrix solvers:

https://software.intel.com/en-us/mkl-de ... nce-c-gtsv

https://software.intel.com/en-us/mkl-de ... nce-c-gbsv

Does Eigen solve systems with a tridiagonal matrix using an optimized algorithm?
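
For context, what I can do today (as far as I know) is to store the tridiagonal matrix as a general SparseMatrix and use a general-purpose sparse solver, which is not a specialized gtsv-style algorithm:

#include <Eigen/Dense>
#include <Eigen/Sparse>
#include <vector>
using namespace Eigen;

// Sketch: solve T x = b for a tridiagonal T given by its three diagonals,
// via the general-purpose SparseLU (no dedicated tridiagonal algorithm).
VectorXd solveTridiagonal(const VectorXd& lower, const VectorXd& diag,
                          const VectorXd& upper, const VectorXd& b)
{
    const int n = int(diag.size());
    std::vector<Triplet<double>> triplets;
    for (int i = 0; i < n; ++i) {
        triplets.emplace_back(i, i, diag(i));
        if (i > 0)     triplets.emplace_back(i, i - 1, lower(i - 1));
        if (i < n - 1) triplets.emplace_back(i, i + 1, upper(i));
    }
    SparseMatrix<double> T(n, n);
    T.setFromTriplets(triplets.begin(), triplets.end());
    SparseLU<SparseMatrix<double>> solver;
    solver.compute(T);
    return solver.solve(b);
}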

