Registered Member
Hi,
I am using least squares for adaptive polynomial regression, i.e. I repeatedly re-solve the system with different subsets of polynomial terms x_i*x_j*x_k*... to find the best subset. I am using an LR (LU) decomposition since it is fast. When adding a new column (term) to the system or removing one, is there a way to reuse parts of the previous decomposition to speed the process up? If not with an LR decomposition, then maybe with QR? Thanks in advance! Pete
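For concreteness, here is a NumPy sketch of the kind of subset search I mean (made-up data; the term names and the size penalty 0.1*r are just placeholders, not my actual model):

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)
x = rng.standard_normal((50, 2))                 # samples (x1, x2)
y = 1.0 + 2.0 * x[:, 0] * x[:, 1] + 0.01 * rng.standard_normal(50)

# Candidate polynomial terms x_i*x_j*... as columns of a design matrix.
terms = {
    "1":     np.ones(len(x)),
    "x1":    x[:, 0],
    "x2":    x[:, 1],
    "x1*x2": x[:, 0] * x[:, 1],
    "x1^2":  x[:, 0] ** 2,
}
names = list(terms)

# Brute force: re-solve the least-squares system for every subset and
# keep the best score (residual sum of squares plus a made-up size penalty).
best = None
for r in range(1, len(names) + 1):
    for subset in combinations(names, r):
        M = np.column_stack([terms[t] for t in subset])
        coef, *_ = np.linalg.lstsq(M, y, rcond=None)
        rss = float(np.sum((M @ coef - y) ** 2))
        score = rss + 0.1 * r
        if best is None or score < best[0]:
            best = (score, subset)

print(best[1])
```

Each candidate subset triggers a full re-factorization of its design matrix, which is the cost I am hoping to avoid.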
Moderator
This is theoretically straightforward with QR. If you have A = Q1*R1, then:

[A B] = [Q1*R1 B] = Q1*[R1 Q1'*B] = (Q1*Q2) * [R1 Q2'*Q1'*B] = Q * R

with Q2 the concatenation of Householder transformations that cancels the bottom trapezoidal part of (Q1'*B), so that the appended block becomes upper triangular too. This is exactly how the block-wise HouseholderQR implementation works: https://bitbucket.org/eigen/eigen/src/6 ... erQR.h-288

From a practical point of view, however, it is much easier to start from an upper bound and remove columns on the right, as this simply boils down to omitting columns of R and the latest Householder reflectors.
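A small NumPy sketch of this column-append update (my own variable names and sign convention, not Eigen code; it assumes the appended column has a nonzero tail):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 6, 3
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

# Existing factorization A = Q1 * R1 (full QR: Q1 is m x m, R1 is m x n).
Q1, R1 = np.linalg.qr(A, mode="complete")

# Append column b: c = Q1' * b is dense below row n.
c = Q1.T @ b

# One extra Householder reflection (the "Q2") zeroing c below row n.
tail = c[n:]
beta = -np.sign(tail[0]) * np.linalg.norm(tail)  # assumes tail != 0
v = tail.copy()
v[0] -= beta
c_new = c.copy()
c_new[n:] = tail - 2.0 * v * (v @ tail) / (v @ v)  # becomes (beta, 0, ..., 0)

# Updated factor for [A b]: old R1 with the transformed column appended.
R_new = np.column_stack([R1, c_new])

# Compare with a fresh QR of [A b]; R is unique up to row signs.
_, R_full = np.linalg.qr(np.column_stack([A, b]), mode="complete")
print(np.allclose(np.abs(R_new), np.abs(R_full)))
```

The update costs one reflection applied to a single vector, instead of re-factoring the whole augmented matrix.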
Registered Member
Thanks, that sounds good!
I can't solve the full system because it is underdetermined, but I could build several matrices from subsets of all columns and solve those instead. That would require adding each newly accepted column to all matrices and re-doing the decompositions, which may be costly unless information from the previous decompositions can be reused efficiently. However, if I can only remove column(s) on the right, I can't implement the algorithm that adds one column in each step and removes one column every n-th step (n > 1). That would require removing columns from the interior of the matrix, and I am not sure whether that is possible without destroying the QR structure.
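As far as I can tell, the two removal cases differ like this (NumPy sketch, not Eigen): dropping the last column keeps R triangular for free, while dropping an interior column leaves sub-diagonal fill-in that extra rotations (e.g. one Givens rotation per affected column) would have to chase away.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 6, 4
A = rng.standard_normal((m, n))
Q, R = np.linalg.qr(A, mode="complete")

# Rightmost column removed: R[:, :-1] is still upper triangular,
# so Q and the truncated R already form a QR of the smaller matrix.
print(np.allclose(Q @ R[:, :-1], A[:, :-1]))

# Interior column k removed: Q' applied to the smaller matrix is only
# upper Hessenberg from column k on -- the sub-diagonal entries would
# have to be eliminated before it is a valid R factor again.
k = 1
H = Q.T @ np.delete(A, k, axis=1)
print(np.allclose(np.tril(H, -1), 0))
```

So interior removal is not impossible, it just isn't free the way right-edge removal is.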