
Symmetric matrices: views and storage

smh
Registered Member


Tue Jan 16, 2018 12:07 pm
Hi,

Firstly, thank you from the ATLAS experiment for the Eigen project. We've had great success over the last few years using it.

There are a few questions on symmetric matrices in the forum, but nothing that seems to fit exactly.

Mostly we use small (5x5) covariance matrices, which are symmetric by definition. Do I understand correctly that for Eigen::Matrix you can say

Code:
mat_squ = mat.selfadjointView<Upper>() * mat.selfadjointView<Upper>();

and this will give a more efficient multiplication for real symmetric matrices? Is there a way to indicate that a matrix is self-adjoint, so this doesn't need to be specified?

Additionally, is it correct that a SymMatrix class with packed storage like
http://netlib.org/lapack/lapack-3.2.htm ... ked_format
is not yet implemented, and no one is working on it? I might be able to get someone to work on this if it would be useful. For us, we serialise our matrices, so we could potentially save some space here, as well as gain a moderate increase in processing speed.

Thanks,
Stewart Martin-Haugh
STFC RAL, ATLAS Experiment CERN
ggael
Moderator
smh wrote:
Mostly we use small (5x5) covariance matrices, which are symmetric by definition. Do I understand correctly that for Eigen::Matrix you can say

Code:
mat_squ = mat.selfadjointView<Upper>() * mat.selfadjointView<Upper>();

and this will give a more efficient multiplication for real symmetric matrices? Is there a way to indicate that a matrix is self-adjoint, so this doesn't need to be specified?


No and no ;) More precisely, symmetric*symmetric is not supported yet, and in terms of performance there is nothing to expect because the number of arithmetic operations is the same as for multiplying two dense matrices. The "symmetric*dense" product is supported by both Eigen and BLAS, mostly for convenience when working on large matrices where the opposite triangular part of the symmetric matrix is used to store something else. And no, there is no SelfadjointMatrix class or equivalent yet. You need to repeat .selfadjointView every time, or make an alias:

Matrix<...> A;
auto symA = A.selfadjointView<Upper>();

For such small matrices (5x5) I would not bother though. Using selfadjointView can only slow down computations.
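
For reference, here is a minimal compilable sketch of the supported symmetric*dense product and of the alias idiom above; the 5x5 sizes and variable names are my own illustration, not from the post.

Code:
#include <Eigen/Dense>
#include <iostream>

int main()
{
    using Mat5 = Eigen::Matrix<double, 5, 5>;

    Mat5 R = Mat5::Random();
    Mat5 A = R + R.transpose();   // make A symmetric
    Mat5 B = Mat5::Random();

    // Supported: symmetric * dense. Only the Upper triangle of A is read.
    Mat5 C = A.selfadjointView<Eigen::Upper>() * B;

    // Alias so .selfadjointView<...>() is not repeated everywhere.
    auto symA = A.selfadjointView<Eigen::Upper>();
    Mat5 D = symA * B;

    std::cout << (C - D).norm() << "\n";   // prints ~0
}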

smh wrote:
Additionally, is it correct that a SymMatrix class with packed storage like
http://netlib.org/lapack/lapack-3.2.htm ... ked_format
is not yet implemented, and no one is working on it? I might be able to get someone to work on this if it would be useful. For us, we serialise our matrices, so we could potentially save some space here, as well as gain a moderate increase in processing speed.


Indeed, there is no such class yet and nobody has started anything, even though this is a feature we have had in mind since the beginning of Eigen. However, instead of the rectangular_full_packed_format, which is quite cumbersome to use (its sole purpose is to be able to readily exploit existing dense product kernels for very large matrices), I would suggest sticking with a more classical packed format (like stacking the columns or the rows), and leveraging efficient products by extending Eigen's existing kernels to support a compile-time increment (typically +1, 0 or -1) of the "outer-stride" (a.k.a. leading dimension in BLAS terminology). This way, very high performance could be achieved without sacrificing usability.
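
As an illustration of the classical packed format mentioned above, here is a minimal sketch: only the upper triangle is stored, column by column, so an n x n symmetric matrix needs n*(n+1)/2 values instead of n*n. The helpers packUpper/unpackUpper are hypothetical, not Eigen API.

Code:
#include <Eigen/Dense>
#include <vector>
#include <cassert>

// Pack the upper triangle of a symmetric matrix, column-major
// (classical LAPACK "packed" layout): n*(n+1)/2 values instead of n*n.
std::vector<double> packUpper(const Eigen::MatrixXd& A)
{
    const int n = static_cast<int>(A.rows());
    std::vector<double> p;
    p.reserve(n * (n + 1) / 2);
    for (int j = 0; j < n; ++j)
        for (int i = 0; i <= j; ++i)
            p.push_back(A(i, j));
    return p;
}

// Recover the full symmetric matrix from the packed storage.
Eigen::MatrixXd unpackUpper(const std::vector<double>& p, int n)
{
    assert(static_cast<int>(p.size()) == n * (n + 1) / 2);
    Eigen::MatrixXd A(n, n);
    int k = 0;
    for (int j = 0; j < n; ++j)
        for (int i = 0; i <= j; ++i)
        {
            A(i, j) = p[k];
            A(j, i) = p[k];
            ++k;
        }
    return A;
}

For a 5x5 covariance matrix that is 15 doubles instead of 25, which is where the serialisation saving discussed above would come from.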
smh
Registered Member
Thanks, after some thought that now makes sense: if A and B are symmetric, there is no guarantee that C = A*B is symmetric, so indeed you need the whole product. For inverse() it's still better to use selfadjointView(), right?

If I manage to get the student project on symmetric matrix implementation going, I guess we can discuss further on the Eigen mailing list?

Thanks,
Stewart
ggael
Moderator
Yes, for inverse or solving it's about 2x faster if the matrix is symmetric.
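
For completeness, a minimal sketch of a symmetric solve through the self-adjoint view; the positive-definite 5x5 covariance matrix here is my own illustration, not from the post.

Code:
#include <Eigen/Dense>

int main()
{
    using Mat5 = Eigen::Matrix<double, 5, 5>;
    using Vec5 = Eigen::Matrix<double, 5, 1>;

    Mat5 M = Mat5::Random();
    Mat5 cov = M * M.transpose();   // symmetric and (almost surely) positive definite
    Vec5 b = Vec5::Random();

    // LDLT only reads the Lower triangle; roughly half the work of a
    // general LU-based inverse()/solve().
    Vec5 x = cov.selfadjointView<Eigen::Lower>().ldlt().solve(b);

    (void)x;
    return 0;
}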

