Registered Member
Hi,

I've only been working with Eigen for a couple of weeks, and I'm also fairly new to coding in general. I'm working on a program that uses fairly large matrices and vectors (anywhere from 50 to 1000+ rows and columns). Most of my code runs very fast, but the small portion I've copied below runs very slowly. When the matrices are small (< 15 x 15) the multiplication takes a minute or two to compute, but when they are larger (> 40 x 40) it takes an hour, if not much, much longer:

    float b_eta;
    MatrixXd I, K, M, tempmat;
    VectorXd eta;
    tempmat = eta.transpose() * (I.setIdentity() - K * M * K.transpose()) * eta;
    float b_eta_p = b_eta + 0.5 * tempmat(0,0);

I found the above to be just slightly faster than:

    float b_eta_p = b_eta + 0.5 * eta.transpose() * (I.setIdentity() - K * M * K.transpose()) * eta;

Is there a better way to write out this multiplication so that it computes faster? Any suggestions would be appreciated.
Moderator
How many times are you executing this expression? Even for 1000 x 1000 matrices it should take less than a second. Make sure compiler optimizations are on (e.g., -O2). Which compiler? Which CPU?

Finally, you can significantly reduce the number of operations by performing matrix-vector products only, instead of matrix-matrix products, e.g.:

    VectorXd tmp = K.transpose() * eta;
    b_eta_p = b_eta + 0.5 * (eta.squaredNorm() - tmp.dot(M * tmp));

(Note that `tmp` must be a `VectorXd` here, since your `K` and `eta` use double-precision scalars; Eigen will not mix `float` and `double` in one expression without an explicit cast.)
Registered Member
Thanks very much for your help! The expression is only executed once, and optimizations were on. Your suggestion of performing only matrix-vector products helped a lot, though, and it now runs much, much faster.