## Least Squares solution when rank deficient

**mikity** (Registered Member · Posts: 2 · Karma: 0)

### Least Squares solution when rank deficient

Wed Jan 09, 2013 9:20 am
Hi. I just started testing Eigen, and I'm interested in its least-squares solver.
I successfully get correct solutions when the system matrix is full rank.
However, when the system matrix is rank deficient, it seems that
near-zero singular values are treated as non-zero.
I think a better treatment of singular values would be for
near-zero singular values to be treated as exactly zero.
(I consistently use MatrixXf and VectorXf.)

Here is an example:
```
|4 2 6|
|2 1 3|
```
The rank of this matrix is obviously 1, and JacobiSVD
returns its singular values as
```
8.3666
1.98603e-16
```
This seems correct.
However, when I pass the vector
```
|5|
|2|
```
to JacobiSVD::solve(), it returns
```
|-6.01819e+014|
| 2.14698e+015|
|-3.14698e+015|
```
This is far larger than I expected, and it means that "1.98603e-16" is being treated as a non-zero singular value.
The expected behavior is that "1.98603e-16" is treated as zero, so that its inverse is kept at 0.
If "1.98603e-16" is treated as non-zero, its inverse is 1/1.98603e-16 ≈ 5e+15, which is where the huge numbers come from.

By the way, in JacobiSVD.h I found the following statements:
```cpp
if(maxRemainingSingularValue == RealScalar(0))
{
  m_nonzeroSingularValues = i;
  break;
}
```
I think RealScalar(0) should be replaced with an appropriate threshold:
in floating-point arithmetic, a computed singular value is essentially never exactly zero.

Yesterday I modified the statements at the end of JacobiSVD::compute() as follows, and it now works fine.
The formula used for "tol" is (I think) the same one used by Scilab, Octave, and MATLAB.
```cpp
/*** step 4. Sort singular values in descending order and compute the number of nonzero singular values ***/
Index pos;
RealScalar maxSingularValue = m_singularValues.maxCoeff(&pos);

double tol = std::max(m_rows, m_cols) * maxSingularValue * precision;

m_nonzeroSingularValues = m_diagSize;
for(Index i = 0; i < m_diagSize; i++)
{
  RealScalar maxRemainingSingularValue = m_singularValues.tail(m_diagSize-i).maxCoeff(&pos);
  if(maxRemainingSingularValue < tol /*RealScalar(0)*/)
  {
    m_nonzeroSingularValues = i;
    break;
  }
  if(pos)
  {
    pos += i;
    std::swap(m_singularValues.coeffRef(i), m_singularValues.coeffRef(pos));
    if(computeU()) m_matrixU.col(pos).swap(m_matrixU.col(i));
    if(computeV()) m_matrixV.col(pos).swap(m_matrixV.col(i));
  }
}
```

Thanks.
**ggael** (Moderator · Posts: 2194 · Karma: 15)

### Re: Least Squares solution when rank deficient

Thu Jan 10, 2013 7:49 am
Right. I think JacobiSVD should provide a setTolerance() function to let the user adjust the tolerance used to decide when a singular value is considered zero.
**mikity** (Registered Member · Posts: 2 · Karma: 0)

### Re: Least Squares solution when rank deficient

Fri Jan 11, 2013 2:00 am
Thanks, ggael. For me Eigen is really great work, and I hope JacobiSVD::solve() will keep improving in computational efficiency. The LAPACK family seems rather old-fashioned to me, and I expect (and already feel) Eigen to become one of the standard numerical libraries.
