Iteratively reweighted least squares
Method for solving certain optimization problems

The method of iteratively reweighted least squares (IRLS) is used to solve certain optimization problems with objective functions of the form of a p-norm:

\[
\operatorname*{arg\,min}_{\boldsymbol{\beta}} \sum_{i=1}^{n} \bigl| y_i - f_i(\boldsymbol{\beta}) \bigr|^{p},
\]

by an iterative method in which each step involves solving a weighted least squares problem of the form:

\[
\boldsymbol{\beta}^{(t+1)} = \operatorname*{arg\,min}_{\boldsymbol{\beta}} \sum_{i=1}^{n} w_i\bigl(\boldsymbol{\beta}^{(t)}\bigr)\, \bigl| y_i - f_i(\boldsymbol{\beta}) \bigr|^{2}.
\]
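As a concrete illustration of this alternation, the following minimal sketch (not part of the original article; the function name, the restriction to a linear model $f_i(\boldsymbol{\beta}) = X_i \boldsymbol{\beta}$, and the fixed iteration count are assumptions made for the example) solves the weighted least-squares subproblem in closed form and leaves the choice of weighting function $w_i(\boldsymbol{\beta}^{(t)})$ as a parameter:

```python
import numpy as np

def irls(X, y, weight_fn, n_iter=50):
    """Generic IRLS loop for a linear model f_i(beta) = X_i beta.

    weight_fn maps the residual vector r = y - X @ beta to the per-observation
    weights w_i(beta^(t)); choosing it differently specializes the loop to a
    particular p-norm objective or M-estimator.
    """
    beta = np.linalg.lstsq(X, y, rcond=None)[0]     # ordinary least squares as a start
    for _ in range(n_iter):
        w = weight_fn(y - X @ beta)                 # w_i(beta^(t)) from current residuals
        Xw = X * w[:, None]                         # rows of X scaled by the weights, i.e. W X
        # Weighted least-squares step: argmin_beta sum_i w_i |y_i - X_i beta|^2
        beta = np.linalg.solve(Xw.T @ X, Xw.T @ y)
    return beta
```

For instance, `weight_fn = lambda r: np.ones_like(r)` reproduces ordinary least squares, while the p-norm weights discussed in the examples below turn the same loop into Lp regression.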

IRLS is used to find the maximum likelihood estimates of a generalized linear model, and in robust regression to find an M-estimator, as a way of mitigating the influence of outliers in an otherwise normally distributed data set, for example, by minimizing the least absolute errors rather than the least squared errors.

One of the advantages of IRLS over linear programming and convex programming is that it can be used with Gauss–Newton and Levenberg–Marquardt numerical algorithms.


Examples

L1 minimization for sparse recovery

IRLS can be used for ℓ1 minimization and smoothed ℓp minimization, p < 1, in compressed sensing problems. It has been proved that the algorithm has a linear rate of convergence for the ℓ1 norm and superlinear for ℓt with t < 1, under the restricted isometry property, which is generally a sufficient condition for sparse solutions.[2][3]
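As a rough sketch of how this looks in the noiseless compressed-sensing setting (minimize ‖x‖_p subject to Ax = y, with more unknowns than measurements), each iteration replaces the p-norm by a weighted ℓ2 norm and solves the weighted minimum-norm problem in closed form. The function name, the fixed smoothing parameter ε, and the fixed iteration count below are assumptions of this illustration; the cited papers decrease ε across iterations and analyze convergence in detail.

```python
import numpy as np

def irls_sparse(A, y, p=1.0, eps=1e-3, n_iter=100):
    """Sketch of IRLS for sparse recovery: approximate
        min ||x||_p  subject to  A x = y   (A is m x n with m < n)
    by repeatedly solving  min sum_i w_i x_i^2  s.t.  A x = y,
    whose minimizer is x = D A^T (A D A^T)^{-1} y with D = diag(1 / w_i).
    """
    x = A.T @ np.linalg.solve(A @ A.T, y)            # minimum-l2-norm starting point
    for _ in range(n_iter):
        # Smoothed weights w_i = (x_i^2 + eps^2)^(p/2 - 1); eps keeps them finite.
        d = (x ** 2 + eps ** 2) ** (1.0 - p / 2.0)   # diagonal of D = W^{-1}
        x = d * (A.T @ np.linalg.solve((A * d) @ A.T, y))
    return x
```

For p = 1 the weights are w_i = (x_i² + ε²)^{-1/2}, so small entries of x receive large weights and are pushed toward zero, which is what promotes sparsity in the recovered solution.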

Lp norm linear regression

To find the parameters $\boldsymbol{\beta} = (\beta_1, \ldots, \beta_k)^{\mathsf T}$ which minimize the Lp norm for the linear regression problem,

\[
\operatorname*{arg\,min}_{\boldsymbol{\beta}} \bigl\| \mathbf{y} - X\boldsymbol{\beta} \bigr\|_{p} = \operatorname*{arg\,min}_{\boldsymbol{\beta}} \sum_{i=1}^{n} \left| y_i - X_i \boldsymbol{\beta} \right|^{p},
\]

the IRLS algorithm at step t + 1 involves solving the weighted linear least squares problem:[4]

\[
\boldsymbol{\beta}^{(t+1)} = \operatorname*{arg\,min}_{\boldsymbol{\beta}} \sum_{i=1}^{n} w_i^{(t)} \left| y_i - X_i \boldsymbol{\beta} \right|^{2} = \bigl(X^{\mathsf T} W^{(t)} X\bigr)^{-1} X^{\mathsf T} W^{(t)} \mathbf{y},
\]

where W(t) is the diagonal matrix of weights, usually with all elements set initially to:

\[
w_i^{(0)} = 1
\]

and updated after each iteration to:

\[
w_i^{(t)} = \bigl| y_i - X_i \boldsymbol{\beta}^{(t)} \bigr|^{p-2}.
\]

In the case p = 1, this corresponds to least absolute deviation regression (although in that case the problem is better approached by linear programming methods,[5] which yield an exact result), and the formula is:

\[
w_i^{(t)} = \frac{1}{\bigl| y_i - X_i \boldsymbol{\beta}^{(t)} \bigr|}.
\]

To avoid dividing by zero, the denominator must be regularized, so in practice the formula is:

\[
w_i^{(t)} = \frac{1}{\max\bigl\{\delta,\, \bigl| y_i - X_i \boldsymbol{\beta}^{(t)} \bigr|\bigr\}},
\]

where $\delta$ is some small value, such as 0.0001.[6] Note that the use of $\delta$ in the weighting function is equivalent to the Huber loss function in robust estimation.[7]
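Putting the pieces of this section together, the following self-contained sketch iterates the closed-form weighted least-squares step with $\delta$-regularized weights, so that p = 1 gives an approximate least-absolute-deviation fit. The function name, the convergence test, the default parameter values, and the use of the $\delta$ floor for general p (the article states it for p = 1) are assumptions made for this example.

```python
import numpy as np

def irls_lp_regression(X, y, p=1.0, delta=1e-4, n_iter=100, tol=1e-8):
    """IRLS for argmin_beta sum_i |y_i - X_i beta|^p, following the update
    beta^(t+1) = (X^T W X)^{-1} X^T W y with W = diag(w_i^(t)) and
    w_i^(t) = max(delta, |y_i - X_i beta^(t)|)^(p - 2).
    """
    beta = np.linalg.lstsq(X, y, rcond=None)[0]          # w_i^(0) = 1 corresponds to starting from OLS
    for _ in range(n_iter):
        r = np.abs(y - X @ beta)
        w = np.maximum(delta, r) ** (p - 2)              # delta floor avoids division by zero
        Xw = X * w[:, None]                              # W X
        beta_new = np.linalg.solve(Xw.T @ X, Xw.T @ y)   # weighted least-squares step
        if np.max(np.abs(beta_new - beta)) < tol:        # stop when the update stalls
            return beta_new
        beta = beta_new
    return beta

# Hypothetical usage: a least-absolute-deviation fit (p = 1) is typically
# less influenced by a single gross outlier than ordinary least squares.
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(50), rng.normal(size=50)])
y = X @ np.array([2.0, 3.0]) + 0.1 * rng.normal(size=50)
y[0] += 100.0                                            # one gross outlier
beta_lad = irls_lp_regression(X, y, p=1.0)
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
```

Comparing `beta_lad` and `beta_ols` should generally show the p = 1 fit staying close to the generating coefficients while the ordinary least-squares fit is pulled toward the outlier, though the exact numbers depend on the random draw.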


References

  1. C. Sidney Burrus, Iterative Reweighted Least Squares https://web.archive.org/web/20221017041048/https://cnx.org/exports/[email protected]/iterative-reweighted-least-squares-12.pdf

  2. Chartrand, R.; Yin, W. (March 31 – April 4, 2008). "Iteratively reweighted algorithms for compressive sensing". IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2008. pp. 3869–3872. doi:10.1109/ICASSP.2008.4518498.

  3. Daubechies, I.; Devore, R.; Fornasier, M.; Güntürk, C. S. N. (2010). "Iteratively reweighted least squares minimization for sparse recovery". Communications on Pure and Applied Mathematics. 63: 1–38. arXiv:0807.0575. doi:10.1002/cpa.20303.

  4. Gentle, James (2007). "6.8.1 Solutions that Minimize Other Norms of the Residuals". Matrix Algebra. Springer Texts in Statistics. New York: Springer. doi:10.1007/978-0-387-70873-7. ISBN 978-0-387-70872-0.

  5. William A. Pfeil, Statistical Teaching Aids, Bachelor of Science thesis, Worcester Polytechnic Institute, 2006 http://www.wpi.edu/Pubs/E-project/Available/E-project-050506-091720/unrestricted/IQP_Final_Report.pdf

  6. William A. Pfeil, Statistical Teaching Aids, Bachelor of Science thesis, Worcester Polytechnic Institute, 2006 http://www.wpi.edu/Pubs/E-project/Available/E-project-050506-091720/unrestricted/IQP_Final_Report.pdf

  7. Fox, J.; Weisberg, S. (2013), Robust Regression, Course Notes, University of Minnesota http://users.stat.umn.edu/~sandy/courses/8053/handouts/robust.pdf