Conjugate gradient squared method
Algorithm for solving matrix-vector equations

In numerical linear algebra, the conjugate gradient squared method (CGS) is an iterative algorithm for solving systems of linear equations of the form $A\mathbf{x} = \mathbf{b}$, particularly in cases where computing the transpose $A^{T}$ is impractical. The CGS method was developed as an improvement to the biconjugate gradient method.


Background

A system of linear equations $A\mathbf{x} = \mathbf{b}$ consists of a known matrix $A$ and a known vector $\mathbf{b}$; to solve the system is to find the value of the unknown vector $\mathbf{x}$.[5][6] A direct method of solution is to compute the inverse of $A$ and form $\mathbf{x} = A^{-1}\mathbf{b}$, but computing the inverse is computationally expensive, so iterative methods are commonly used. An iterative method begins with a guess $\mathbf{x}^{(0)}$ and improves the guess on each iteration; once the difference between successive guesses is sufficiently small, the method has converged to a solution.[7][8]
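As a small worked contrast between the two approaches, the sketch below solves the same system directly and iteratively. It assumes NumPy and SciPy are available; SciPy's scipy.sparse.linalg.cgs routine provides a ready-made CGS solver, and the 2×2 matrix here is purely illustrative.

```python
import numpy as np
from scipy.sparse.linalg import cgs

# A small nonsymmetric system A x = b.
A = np.array([[4.0, 1.0],
              [2.0, 5.0]])
b = np.array([1.0, 2.0])

# Direct method: factorise A and solve; exact, but expensive for large A.
x_direct = np.linalg.solve(A, b)

# Iterative method: refine a starting guess x0; info == 0 signals convergence.
x_iter, info = cgs(A, b, x0=np.zeros(2))

print(x_direct, x_iter, info)
```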

As with the conjugate gradient method, the biconjugate gradient method, and similar iterative methods for solving systems of linear equations, the CGS method can be used to find solutions to multi-variable optimisation problems, such as power-flow analysis, hyperparameter optimisation, and facial recognition.[9]

Algorithm

The algorithm is as follows (see the code sketch after the list):[10]

  1. Choose an initial guess $\mathbf{x}^{(0)}$.
  2. Compute the residual $\mathbf{r}^{(0)} = \mathbf{b} - A\mathbf{x}^{(0)}$.
  3. Choose $\tilde{\mathbf{r}} = \mathbf{r}^{(0)}$.
  4. For $i = 1, 2, 3, \dots$ do:
    1. $\rho^{(i-1)} = \tilde{\mathbf{r}}^{T}\mathbf{r}^{(i-1)}$
    2. If $\rho^{(i-1)} = 0$, the method fails.
    3. If $i = 1$:
      1. $\mathbf{p}^{(1)} = \mathbf{u}^{(1)} = \mathbf{r}^{(0)}$
    4. Else:
      1. $\beta^{(i-1)} = \rho^{(i-1)} / \rho^{(i-2)}$
      2. $\mathbf{u}^{(i)} = \mathbf{r}^{(i-1)} + \beta^{(i-1)}\mathbf{q}^{(i-1)}$
      3. $\mathbf{p}^{(i)} = \mathbf{u}^{(i)} + \beta^{(i-1)}\left(\mathbf{q}^{(i-1)} + \beta^{(i-1)}\mathbf{p}^{(i-1)}\right)$
    5. Solve $M\hat{\mathbf{p}} = \mathbf{p}^{(i)}$, where $M$ is a pre-conditioner.
    6. $\hat{\mathbf{v}} = A\hat{\mathbf{p}}$
    7. $\alpha^{(i)} = \rho^{(i-1)} / (\tilde{\mathbf{r}}^{T}\hat{\mathbf{v}})$
    8. $\mathbf{q}^{(i)} = \mathbf{u}^{(i)} - \alpha^{(i)}\hat{\mathbf{v}}$
    9. Solve $M\hat{\mathbf{u}} = \mathbf{u}^{(i)} + \mathbf{q}^{(i)}$.
    10. $\mathbf{x}^{(i)} = \mathbf{x}^{(i-1)} + \alpha^{(i)}\hat{\mathbf{u}}$
    11. $\hat{\mathbf{q}} = A\hat{\mathbf{u}}$
    12. $\mathbf{r}^{(i)} = \mathbf{r}^{(i-1)} - \alpha^{(i)}\hat{\mathbf{q}}$
    13. Check for convergence: if the method has converged, end the loop and return $\mathbf{x}^{(i)}$.
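The loop above maps almost line-for-line onto code. The following is a minimal NumPy sketch under stated assumptions: the pre-conditioner $M$ is taken to be a dense matrix solved with np.linalg.solve (or skipped entirely when M is None), convergence is tested with a simple relative-residual criterion, and the function name cgs_solve is illustrative; none of these details is prescribed by the algorithm itself.

```python
import numpy as np

def cgs_solve(A, b, x0=None, M=None, tol=1e-8, max_iter=1000):
    """Preconditioned conjugate gradient squared, following the steps above."""
    n = b.shape[0]
    x = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float).copy()
    # Pre-conditioner solves for steps 5 and 9; identity if no M is supplied.
    # (A real code would factorise M once instead of re-solving each time.)
    apply_M = (lambda v: v) if M is None else (lambda v: np.linalg.solve(M, v))

    r = b - A @ x                 # step 2: initial residual
    r_tilde = r.copy()            # step 3: fixed shadow residual
    p = np.zeros(n)
    q = np.zeros(n)
    rho_prev = 1.0

    for i in range(1, max_iter + 1):
        rho = r_tilde @ r         # step 4.1
        if rho == 0.0:            # step 4.2: breakdown
            raise RuntimeError("CGS breakdown: rho = 0")
        if i == 1:                # step 4.3
            u = r.copy()
            p = r.copy()
        else:                     # step 4.4
            beta = rho / rho_prev
            u = r + beta * q
            p = u + beta * (q + beta * p)
        p_hat = apply_M(p)        # step 4.5
        v_hat = A @ p_hat         # step 4.6
        alpha = rho / (r_tilde @ v_hat)    # step 4.7
        q = u - alpha * v_hat     # step 4.8
        u_hat = apply_M(u + q)    # step 4.9
        x = x + alpha * u_hat     # step 4.10
        r = r - alpha * (A @ u_hat)        # steps 4.11 and 4.12
        rho_prev = rho
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):  # step 4.13
            return x, i
    raise RuntimeError("CGS did not converge within max_iter iterations")
```

Note that each pass through the loop multiplies by $A$ twice (steps 4.6 and 4.11) but never by $A^{T}$, which is the property that makes CGS attractive when the transpose is impractical to apply.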

References

  1. Noel Black; Shirley Moore. "Conjugate Gradient Squared Method". Wolfram MathWorld. https://mathworld.wolfram.com/ConjugateGradientSquaredMethod.html

  2. MathWorks. "cgs". MATLAB documentation.

  3. Henk van der Vorst (2003). "Bi-Conjugate Gradients". Iterative Krylov Methods for Large Linear Systems. Cambridge University Press. ISBN 0-521-81828-1.

  4. Peter Sonneveld (1989). "CGS, A Fast Lanczos-Type Solver for Nonsymmetric Linear Systems". SIAM Journal on Scientific and Statistical Computing. 10 (1): 36–52. doi:10.1137/0910004. ProQuest 921988114. https://www.proquest.com/docview/921988114

  5. Henk van der Vorst (2003). "Bi-Conjugate Gradients". Iterative Krylov Methods for Large Linear Systems. Cambridge University Press. ISBN 0-521-81828-1.

  6. "Linear equations" (PDF), Matrix Analysis and Applied Linear Algebra, Philadelphia, PA: SIAM, 2000, pp. 1–40, doi:10.1137/1.9780898719512.ch1 (inactive 1 November 2024), ISBN 978-0-89871-454-8, archived from the original (PDF) on 2004-06-10, retrieved 2023-12-18{{citation}}: CS1 maint: DOI inactive as of November 2024 (link) 978-0-89871-454-8

  7. "Iterative Methods for Linear Systems". Mathworks. https://au.mathworks.com/help/matlab/math/iterative-methods-for-linear-systems.html

  8. Jean Gallier. "Iterative Methods for Solving Linear Systems" (PDF). UPenn. https://www.cis.upenn.edu/~cis5150/cis515-12-sl5.pdf

  9. Alexandra Roberts; Anye Shi; Yue Sun. "Conjugate gradient methods". Cornell University. Retrieved 2023-12-26. https://optimization.cbe.cornell.edu/index.php?title=Conjugate_gradient_methods

  10. R. Barrett; M. Berry; T. F. Chan; J. Demmel; J. Donato; J. Dongarra; V. Eijkhout; R. Pozo; C. Romine; H. Van der Vorst (1994). Templates for the Solution of Linear Systems: Building Blocks for Iterative Methods, 2nd Edition. SIAM. https://netlib.org/linalg/html_templates/Templates.html