Registered Member
|
I want to write a function that raises a matrix to a real power, and I'm trying to add it to <unsupported/Eigen/MatrixFunctions>. However, I've run into some problems.
|
Moderator
|
Why not call pow on the eigenvalues?
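For a self-adjoint matrix, something along these lines should do (an untested sketch; matrixPow is just an illustrative name):

```cpp
#include <Eigen/Dense>

// Untested sketch: real power of a self-adjoint matrix via its
// eigendecomposition A = V D V^T. Assumes positive eigenvalues so
// that a real power is well defined.
Eigen::MatrixXd matrixPow(const Eigen::MatrixXd& A, double p)
{
  Eigen::SelfAdjointEigenSolver<Eigen::MatrixXd> es(A);
  Eigen::VectorXd d = es.eigenvalues().array().pow(p).matrix();  // pow on the eigenvalues
  return es.eigenvectors() * d.asDiagonal() * es.eigenvectors().transpose();
}
```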
|
Registered Member
|
Thanks, it works like a charm for Hermitian matrices.
I'm trying to code it like this (Algorithm 5.1 inside). Do you have any suggestions on 2x2 matrices?
|
Moderator
|
You can use our RealSchur class as a start and follow their algorithm, but again, why not perform a full eigendecomposition (using EigenSolver): A = V L V^-1 with L diagonal, so A^n = V L^n V^-1.
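A minimal sketch of that approach, assuming A is diagonalizable (matrixPower is just an illustrative name, and the result is complex in general since real matrices can have complex eigenvalues):

```cpp
#include <Eigen/Dense>
#include <complex>

// Untested sketch: A^p = V L^p V^-1 via a full eigendecomposition.
// Assumes A is diagonalizable.
Eigen::MatrixXcd matrixPower(const Eigen::MatrixXd& A, double p)
{
  Eigen::EigenSolver<Eigen::MatrixXd> es(A);
  Eigen::VectorXcd L = es.eigenvalues();
  for (Eigen::Index i = 0; i < L.size(); ++i)
    L(i) = std::pow(L(i), p);                 // principal branch of lambda^p
  Eigen::MatrixXcd V = es.eigenvectors();
  return V * L.asDiagonal() * V.inverse();    // V L^p V^-1
}
```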
|
Registered Member
|
That is probably not stable if the matrix A is close to having a double eigenvalue: the eigenvector matrix V becomes ill-conditioned, and a defective matrix has no eigendecomposition at all.
|
Moderator
|
Thank you Jitse, that sounded too simple!
|
Registered Member
|
You can always perform the SVD if you suspect that you are going to have eigenvalues with geometric multiplicity higher than one: A^n = U S^n V^H. Slower, but safer anyway.
|
Registered Member
|
SVD is not likely to help matrix powering: since V* U ≠ I, we get (U S V*)^n = U S V* U S V* ... U S V* ≠ U S^n V*.

Recently I found that many people (including MATLAB) believe that A^-n should be computed as (A^n)^-1. However, (A^-1)^n is in fact more stable: the condition number κ(A^n) can grow like κ(A)^n, which is huge and makes (A^n)^-1 unstable, while κ(A^-1) is just the same as κ(A).

By the way, a matrix power usually serves a Markov chain, as in A^n x. Is it possible to optimize the whole expression? Maybe we can derive a new class for A^n and then define operator*(MatrixType x), so users can simply call A.pow(n) * x while Eigen secretly optimizes the expression. This is especially useful when x is thinner than A. If that can't be done, is it best to employ x.markovChain(A, n)? (Or maybe some other name, or some other arrangement such as A.functionName(n, x).)

I'm sorry for not trying to code this now, because I have an Android application project at the moment. I'll start working on it by 2 Aug.
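P.S. A hypothetical sketch of the kind of saving I mean when x is a single vector (markovChain is only an illustrative name, not an existing Eigen function):

```cpp
#include <Eigen/Dense>

// Hypothetical sketch of what A.pow(n) * x could do internally when x
// is a single vector: n matrix-vector products cost O(n * m^2) flops,
// whereas forming A^n first costs O(log(n) * m^3).
Eigen::VectorXd markovChain(const Eigen::MatrixXd& A, int n, Eigen::VectorXd x)
{
  for (int i = 0; i < n; ++i)
    x = A * x;                  // never materialize A^n
  return x;
}
```
|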
Registered Member
|
For small powers you could also write down explicitly the recursive relation that exists between the powers of a matrix by virtue of the Cayley–Hamilton theorem. That way you just need to compute the matrix invariants once, and the rest is a few scalar multiplications and matrix sums involving the original matrix only. That's the theory, at least; I'm not sure about the numerical efficiency of this approach.
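For instance, in the 2x2 case the recursion involves only the trace and the determinant. A rough, untested sketch (power2x2 is just an illustrative name):

```cpp
#include <Eigen/Dense>

// Untested sketch for the 2x2 case: by Cayley-Hamilton,
// A^2 = t*A - d*I with t = trace(A) and d = det(A), so every power
// has the form A^k = c_k*A + s_k*I. Only scalar recurrences are needed:
// A^{k+1} = c_k*A^2 + s_k*A = (t*c_k + s_k)*A - d*c_k*I.
Eigen::Matrix2d power2x2(const Eigen::Matrix2d& A, int n)  // n >= 1
{
  double t = A.trace(), d = A.determinant();
  double c = 1.0, s = 0.0;                  // A^1 = 1*A + 0*I
  for (int k = 1; k < n; ++k) {
    double cNext = t * c + s;
    s = -d * c;
    c = cNext;
  }
  return c * A + s * Eigen::Matrix2d::Identity();
}
```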
|
Registered Member
|
If we use this approach to compute A^p, where A is an n-by-n matrix and p is an integer, we have to get all the eigenvalues, which costs about 25 n^3 flops, so it's unsuitable for integer powers. However, it may work for huge powers (a large integer stored in a float or double). I'm not sure whether binary powering or (scalar) polynomial division is more efficient.
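By binary powering I mean something like the following sketch (binaryPower is just an illustrative name), which needs only about log2(p) squarings:

```cpp
#include <Eigen/Dense>

// Sketch of binary powering (exponentiation by squaring):
// roughly 2*log2(p) matrix products for an integer power p >= 0.
Eigen::MatrixXd binaryPower(Eigen::MatrixXd A, unsigned p)
{
  Eigen::MatrixXd result = Eigen::MatrixXd::Identity(A.rows(), A.cols());
  while (p > 0) {
    if (p & 1) result = result * A;   // multiply in the current bit
    A = A * A;                        // square for the next bit
    p >>= 1;
  }
  return result;
}
```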
|
Registered Member
|
Actually, you don't need to compute any eigenvalues because, by definition, you can compute the invariants from the matrix coefficients themselves. But you still need to compute the first n-2 powers in the recursive relation somehow, and I was thinking that you were interested in the 2x2 case only, where the recursive relation is trivial.
|