Registered Member
|
I see that various non-linear least-squares minimization libraries use a robust loss function to reduce the influence of outliers (see the rho parameter here or here). This rho is applied to the squared residuals.
Now, I'm using Eigen's unsupported Levenberg-Marquardt module with finite differences to compute the Jacobian. This implementation of LM doesn't support providing an external robust loss function. What I've done instead is apply a robust loss function, such as Cauchy or Huber, directly to the residuals right after I compute them in my error function, and return these robustified residuals to the minimizer. So I'm applying rho before the residuals are squared (inside the minimizer). This seems to work in the tests I've done, but I'm not sure whether it is correct or may cause problems (I wonder how it affects the Jacobian, since it's computed with finite differences). Is what I'm doing OK, or is there a better way of making the LM minimizer robust to outliers?
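To make this concrete, here is a stripped-down, untested sketch of the kind of functor I mean, assuming the NonLinearOptimization flavour of the unsupported module together with NumericalDiff (the exponential model, the data and the Cauchy scale c are just placeholders). In the sketch each raw residual r_i is mapped to sign(r_i) * sqrt(rho(r_i^2)) with the Cauchy rho, so the sum of squares the minimizer builds internally equals sum_i rho(r_i^2), i.e. the robust cost:

```cpp
#include <cmath>
#include <Eigen/Dense>
#include <unsupported/Eigen/NonLinearOptimization>
#include <unsupported/Eigen/NumericalDiff>

// Functor boilerplate expected by the unsupported LM module and NumericalDiff.
struct CauchyFunctor
{
  typedef double Scalar;
  enum { InputsAtCompileTime = Eigen::Dynamic, ValuesAtCompileTime = Eigen::Dynamic };
  typedef Eigen::VectorXd InputType;
  typedef Eigen::VectorXd ValueType;
  typedef Eigen::MatrixXd JacobianType;

  Eigen::VectorXd xs, ys;   // observations (placeholder data)
  double c;                 // Cauchy scale: residuals much larger than c are downweighted

  CauchyFunctor(const Eigen::VectorXd &xs_, const Eigen::VectorXd &ys_, double c_)
    : xs(xs_), ys(ys_), c(c_) {}

  int inputs() const { return 2; }              // toy model: y = p0 * exp(p1 * x)
  int values() const { return int(xs.size()); }

  int operator()(const Eigen::VectorXd &p, Eigen::VectorXd &fvec) const
  {
    for (int i = 0; i < xs.size(); ++i) {
      double r = ys(i) - p(0) * std::exp(p(1) * xs(i));     // raw residual
      // Cauchy loss on the squared residual: rho(s) = c^2 * log(1 + s/c^2).
      double rho = c * c * std::log(1.0 + (r * r) / (c * c));
      // Return a "robustified" residual whose square equals rho(r^2),
      // keeping the sign so the finite-difference Jacobian stays well behaved.
      fvec(i) = (r < 0.0 ? -1.0 : 1.0) * std::sqrt(rho);
    }
    return 0;
  }
};

int main()
{
  Eigen::VectorXd xs(5), ys(5);
  xs << 0, 1, 2, 3, 4;
  ys << 1.0, 1.6, 2.7, 4.6, 40.0;              // last point is an outlier
  Eigen::VectorXd p(2); p << 1.0, 0.1;         // initial guess

  CauchyFunctor functor(xs, ys, /*c=*/1.0);
  Eigen::NumericalDiff<CauchyFunctor> numDiff(functor);
  Eigen::LevenbergMarquardt<Eigen::NumericalDiff<CauchyFunctor> > lm(numDiff);
  lm.minimize(p);                              // p now holds the robust fit
  return 0;
}
```
|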
Moderator
|
The only "problem" is that the error function is not locally close to a quadratic error thus diminishing the rate of convergence. Perhaps it is possible to apply the Iteratively Reweighted Least Squares approach with LM as the least-square solver backend. See: https://en.wikipedia.org/wiki/Iterative ... st_squares
|
Registered Member
|
Thanks Ggael, I didn't know about IRLS.
I tried the IRLS approach with an LM backend as you suggested. I managed to downweight outliers (though not remove their influence completely) and got better results in my application than without IRLS. However, I couldn't set up IRLS to perform better than applying a Cauchy loss function directly to the residuals in my LM. Maybe there is a better way of configuring IRLS, but I have already spent too long on this. So for the moment I'll just use the loss function directly on the residuals, or use RANSAC. |