This forum has been archived. All content is frozen. Please use KDE Discuss instead.

Implementation of matrix power

jdh8
Registered Member

Implementation of matrix power

Tue Jul 24, 2012 5:01 pm
I want to write a function that raises a matrix to a real power, and I'm trying to put it into <unsupported/Eigen/MatrixFunctions>. However, I've run into some problems.
  • There's no auto in C++03. How do I efficiently manage the intermediate matrix products?
  • Is it better to use repeated squaring and sqrt(), or exp() and log()?
  • If repeated squaring and sqrt() is the choice, how should A^7 be computed: as A * A^2 * A^4, or as A^8 / A?
ggael
Moderator

Re: Implementation of matrix power

Tue Jul 24, 2012 9:50 pm
Why not call pow() on the eigenvalues?
jdh8
Registered Member

Re: Implementation of matrix power

Wed Jul 25, 2012 8:59 am
Thanks, it works like a charm for Hermitian matrices.
I'm trying to code it like this. (Algorithm 5.1 inside)
Do you have any suggestions for the 2x2 case?
ggael
Moderator

Re: Implementation of matrix power

Wed Jul 25, 2012 9:11 pm
You can use our RealSchur class as a start and follow their algorithm, but again, why not perform a full eigendecomposition (using EigenSolver): A = V L V^-1 with L diagonal, so A^n = V L^n V^-1.
jitseniesen
Registered Member

Re: Implementation of matrix power

Wed Jul 25, 2012 9:27 pm
ggael wrote: why not perform a full eigendecomposition (using EigenSolver): A = V L V^-1 with L diagonal, so A^n = V L^n V^-1.
That is probably not stable if the matrix A is close to having a double eigenvalue.
ggael
Moderator

Re: Implementation of matrix power

Thu Jul 26, 2012 6:01 am
Thank you Jitse, that sounded too simple!
noether
Registered Member

Re: Implementation of matrix power

Sat Jul 28, 2012 9:39 am
ggael wrote: Thank you Jitse, that sounded too simple!

You can always perform the SVD if you suspect that you are going to have eigenvalues with geometric multiplicity higher than one.

A^n = U S^n V^H

Slower, but safer anyway.
jdh8
Registered Member

Re: Implementation of matrix power

Sun Jul 29, 2012 2:22 pm
noether wrote: You can always perform the SVD if you suspect that you are going to have eigenvalues with geometric multiplicity higher than one.

A^n = U S^n V*

Slower, but safer anyway.
SVD is not likely to help matrix powering. :o
∵ V* U ≠ I
∴ (U S V*)^n = U S V* U S V* ... U S V* ≠ U S^n V*

Recently, I found that many people (including MATLAB) believe that A^-n should be computed as (A^n)^-1. In fact, (A^-1)^n is more stable: the condition number κ(A^n) = κ(A)^n is huge, making (A^n)^-1 unstable, while κ(A^-1) is the same as κ(A). ;)

By the way, matrix powers usually show up in Markov-chain-like expressions such as A^n x. Is it possible to optimize the whole expression? Maybe we can derive a new class for A^n and then provide operator*(const MatrixType& x), so users can simply call A.pow(n) * x while Eigen secretly optimizes the expression. This is especially useful when x is thinner than A. If that can't be done, would it be best to provide x.markovChain(A, n)? (or some other name or arrangement, such as A.functionName(n, x))

I'm sorry for not getting to code this yet; I have an Android application project at the moment. I'll start working on it by 2 Aug. ;D
n2k
Registered Member

Re: Implementation of matrix power

Mon Jul 30, 2012 11:26 pm
For small powers you could also write down explicitly the recursive relation that holds between the powers of a matrix by virtue of the Cayley–Hamilton theorem. That way you only need to compute the matrix invariants once, and the rest is a few scalar multiplications and matrix sums involving the original matrix only. That is the theory; I'm not sure about the numerical efficiency of this approach.
jdh8
Registered Member

Re: Implementation of matrix power

Tue Jul 31, 2012 5:31 am
n2k wrote: For small powers you could also write down explicitly the recursive relation that holds between the powers of a matrix by virtue of the Cayley–Hamilton theorem. That way you only need to compute the matrix invariants once, and the rest is a few scalar multiplications and matrix sums involving the original matrix only. That is the theory; I'm not sure about the numerical efficiency of this approach.

If we use this approach to compute A^p, where A is an n-by-n matrix and p is an integer, we have to get all the eigenvalues, which costs about 25 n^3 flops, so it's unsuitable for integer powers. However, it may work for huge powers (large integers stored in float or double). I'm not sure whether binary powering or (scalar) polynomial division is more efficient.
  1. Naive binary powering costs k log2(p) n^3 flops (1 ≤ k < 2, depending on the number of 1's in the binary representation of p).
  2. Binary powering with a Schur decomposition costs k [log2(p) / 3 + 25] n^3 flops.
  3. I'm not sure whether there is an efficient algorithm for polynomial division; if not, schoolbook division costs n p flops.
  4. Polynomial division is fast, but we have to take numerical stability into account. Still, I'm unsure about the answers to the two questions below.
  • Will the Schur decomposition make it much more unstable than naive binary powering? (Is the error still O(log p)?)
  • Is polynomial division stable enough? (Can we keep the error below O(p), i.e. make it o(p)?)
n2k
Registered Member

Re: Implementation of matrix power

Tue Jul 31, 2012 7:39 am
Actually you don't need to compute any eigenvalues because, by definition, you can compute the invariants from the matrix coefficients themselves. But you still need to compute the first n-2 powers in the recursive relation somehow; I was thinking that you were interested in the 2x2 case only, where the recursive relation is trivial.

