Registered Member
Hi,
I am implementing incremental Gaussian Mixture Regression. To speed things up, for each multivariate Gaussian component I store not only the mean and covariance matrix but also its Cholesky factorization, which I use to compute Mahalanobis distances. When a new input vector arrives, I can then apply a rank-1 update. To perform learning I have to compute the likelihood of the new input vector; that is straightforward given the LDLT object, using its solve() method. My question concerns the regression step, where I observe N dimensions out of D (the covariance is DxD). I have two questions:

- Q1: In the rank-1 update, can I pass a value in (0,1] as the weight of the incoming vector? (I perform a rank-1 update on each of the K covariance matrices of my K-component mixture model, so the unit mass of the input vector is split across all the components.)
- Q2: To compute the regression coefficients of the conditional MVN, I need the inverse of the NxN submatrix of the covariance multiplied by the (D-N)xN submatrix. In MATLAB I do this by solving against the NxN submatrix, but that implies maintaining two Cholesky factorizations (one of the full matrix and one of its NxN submatrix). Can I reuse the LDLT object of the full matrix that I have already computed?

I hope I made it clear enough. Also, if anyone knows of a freely available implementation that handles incremental updates by caching the factorization from the previous step, please let me know; I haven't found one.

Thanks in advance.
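For concreteness, here is a minimal sketch of the setup I describe, assuming Eigen's LDLT class (the struct and variable names are just illustrative):

```cpp
#include <Eigen/Dense>
#include <cmath>

// One Gaussian component of the mixture: mean, covariance, and a cached
// LDLT factorization so Mahalanobis distances go through solve() rather
// than an explicit inverse.
struct GaussianComponent {
    Eigen::VectorXd mean;               // D-dimensional mean
    Eigen::MatrixXd cov;                // D x D covariance
    Eigen::LDLT<Eigen::MatrixXd> ldlt;  // cached factorization of cov

    // Log-density of x under N(mean, cov) using the cached factorization.
    double logLikelihood(const Eigen::VectorXd& x) const {
        constexpr double kLog2Pi = 1.8378770664093453;  // log(2*pi)
        const Eigen::VectorXd d = x - mean;
        // Mahalanobis term d^T * cov^{-1} * d via solve(), no explicit inverse.
        const double maha = d.dot(ldlt.solve(d));
        // log|cov| from the diagonal D of the L D L^T factorization.
        const double logDet = ldlt.vectorD().array().log().sum();
        return -0.5 * (maha + logDet + x.size() * kLog2Pi);
    }
};
```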
Moderator
I'm not sure I understand what you really want to do, but for Q1, what about multiplying the incoming vector by the weight and handling the normalization on your side? For Q2, I have not tried to write out the math, but since LDLT does pivoting, my intuition is that it is not possible.
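For Q1, if I read the docs correctly, LDLT::rankUpdate(w, sigma) refactors A + sigma * w * w^T, so passing the weight as sigma amounts to scaling the vector by sqrt(weight) before the update. A rough sketch with illustrative names (the mean update and any decay of the old covariance are left to you):

```cpp
#include <Eigen/Dense>

// Weighted rank-1 update of a cached LDLT factorization.
// 'weight' in (0,1] is the mass assigned to this component; normalization
// across the K components is handled by the caller.
void weightedRankUpdate(Eigen::MatrixXd& cov,
                        Eigen::LDLT<Eigen::MatrixXd>& ldlt,
                        const Eigen::VectorXd& mean,
                        const Eigen::VectorXd& x,
                        double weight) {
    const Eigen::VectorXd d = x - mean;
    cov  += weight * d * d.transpose();  // keep the explicit matrix in sync
    ldlt.rankUpdate(d, weight);          // O(D^2) update, no refactorization
}
```

For Q2, if reusing the pivoted full-matrix factorization is indeed not possible, the fallback is to factor the NxN block on its own. A sketch under the assumption (yours to adapt) that the N observed dimensions come first in the covariance:

```cpp
#include <Eigen/Dense>

// Conditional mean of the unobserved block given the first N dimensions,
// with cov partitioned as [ S11 S12 ; S21 S22 ], S11 being N x N.
Eigen::VectorXd conditionalMean(const Eigen::MatrixXd& cov,
                                const Eigen::VectorXd& mean,
                                const Eigen::VectorXd& x1, int N) {
    const int M = cov.rows() - N;  // number of unobserved dimensions
    Eigen::LDLT<Eigen::MatrixXd> ldlt11(cov.topLeftCorner(N, N));
    // Regression coefficients S21 * S11^{-1} via solve(): solve
    // S11 * B^T = S12, then transpose (S21 = S12^T by symmetry).
    const Eigen::MatrixXd B = ldlt11.solve(cov.topRightCorner(N, M)).transpose();
    return mean.tail(M) + B * (x1 - mean.head(N));
}
```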