Registered Member
In the example below I perform some basic operations with dynamic matrices, and I would like to avoid any dynamic memory allocation when these operations occur.
I tried to avoid them by using .noalias(), but that didn't solve my problem. Notice that I perform the same operation in two ways: the first one passes the test, but the second one does not. I would like to understand why Eigen is trying to allocate memory in this case instead of just returning an expression that would get copied entry by entry into the left hand side. What are the best practices for this? Thank you very much in advance. The following code is self-contained; you can copy it directly into a text editor and compile it if you wish.
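A minimal sketch of the kind of test described, assuming small dynamic matrices and Eigen's EIGEN_RUNTIME_NO_MALLOC facility to catch allocations (the names md, v3, vd and the sizes are illustrative, not the exact original listing):

    // Sketch only: EIGEN_RUNTIME_NO_MALLOC must be defined before including
    // Eigen to make internal::set_is_malloc_allowed available.
    #define EIGEN_RUNTIME_NO_MALLOC
    #include <Eigen/Dense>
    #include <iostream>

    int main()
    {
        Eigen::MatrixXd md = Eigen::MatrixXd::Random(3, 3);
        Eigen::VectorXd v3 = Eigen::VectorXd::Random(3);
        Eigen::VectorXd vd(3);

        Eigen::internal::set_is_malloc_allowed(false);

        // First way: the right hand side is a plain product, so it is
        // evaluated directly into vd without any temporary.
        vd.noalias() = md.transpose() * v3;

        // Second way: the negation wraps the product, so (in Eigen 3.2) the
        // product is first evaluated into a temporary, which allocates and
        // would trip the no-malloc assertion if uncommented.
        // vd.noalias() = -(md.transpose() * v3);

        Eigen::internal::set_is_malloc_allowed(true);
        std::cout << vd.transpose() << std::endl;
        return 0;
    }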
Moderator
To be performed fully efficiently, matrix-matrix and matrix-vector products have to be evaluated within an existing matrix by default. In the first case, NoAlias::operator= detects that the right hand side is a product that can be evaluated directly within vd. In the second case, operator= sees a CwiseUnaryOp, so the product gets evaluated within a temporary. There are many other variants that will avoid the temporary.
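A few sketches along these lines (reconstructions, not the exact variants originally posted; md, v3, vd as in the first post):

    // Fold the sign into one of the factors, so operator= still sees a product:
    vd.noalias() = -md.transpose() * v3;
    vd.noalias() = md.transpose() * (-v3);

    // Or evaluate the product first and negate in place afterwards:
    vd.noalias() = md.transpose() * v3;
    vd = -vd;

    // Or accumulate into a zeroed destination with operator-=:
    vd.setZero();
    vd.noalias() -= md.transpose() * v3;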
If the sizes are small enough, you can enforce a coefficient-wise evaluation with lazyProduct:
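A sketch of what this could look like for the negated product above (assumed names as before):

    // lazyProduct bypasses the gemv kernel and is evaluated coefficient-wise,
    // so wrapping it in a negation no longer forces a temporary:
    vd.noalias() = -(md.transpose().lazyProduct(v3));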
but be aware that if the product gets large, this might be much slower.
Registered Member
ggael, Thank you so much for your response.
I am using Eigen exclusively with small matrices (2x2, 3x3, 4x4, 6x6). I am trying to understand the mechanism so that I can adopt a common practice within my code that avoids these allocations. I can see how the second example (using operator-=) would be evaluated efficiently, since that operator is non-const and operates directly on the calling instance. What would the rules be here?
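The three expressions in question (reconstructed from the reply below; the use of noalias() on the left hand side is an assumption):

    vd.noalias() = (-1) * md.transpose() * v3;   // first example
    vd.noalias() -= md.transpose() * v3;         // second example, using operator-=
    vd.noalias() = md.transpose() * -v3;         // third example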
I would like to point out that I really appreciate the excellent job done with Eigen. My question only aims to help me understand how to properly get the most out of it.
Moderator
Products in Eigen are really complicated because there are numerous variants to handle: outer product, inner product, matrix-vector, and matrix-matrix, and for each case we have a different path depending on whether the product is considered to be small at compile time or potentially large.
In your case, since one dimension is Dynamic, we assume that it will be large, so it follows the "large-matrix-vector" path, which is implemented to compute "y += alpha * M * x". Let's call it gemv. Regarding the third example, "= md.transpose() * -v3", operator= sees a Product, so we can directly map it to the generic gemv case by extracting alpha from the factors. Actually, "= -md.transpose() * v3" would also be OK for the same reason. Regarding the first example, "= (-1) * md.transpose() * v3", we have a special treatment for "alpha * (A * B)" to map it to gemv. On the other hand, there is no such special treatment for "-(A*b)". In Eigen 3.3, such rules will be much easier to add, and more expressions will be automagically rewritten for you to follow the best strategy according to the information we can extract at compile time. Finally, if the maximal matrix size is 6x6, then using lazyProduct is a safe approach. Also note that you can specify a maximal matrix size to enable static allocation while still allowing the matrix to be resized, e.g.:
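Presumably something along these lines (a sketch; the 6x6 bound is assumed from the sizes mentioned above):

    // Dynamic-size matrix with a compile-time upper bound of 6x6: its 36
    // doubles live inside the object (typically on the stack), never on the heap.
    Eigen::Matrix<double, Eigen::Dynamic, Eigen::Dynamic, Eigen::ColMajor, 6, 6> md;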
md.resize(2,2) -> no allocation, but you waste 32 doubles on the stack.
Registered Member
ggael, Thank you so much for your help. I can't wait for Eigen 3.3 to be released!!