krylov Package¶
Krylov Solvers
This module contains several Krylov subspace methods, in addition to two simple iterations, to solve linear systems iteratively. These methods often use multigrid as a preconditioner to accelerate convergence to the solution.
Functions¶
- gmres
- fgmres
- cgne
- cgnr
- cg
- bicgstab
- steepest descent (simple iteration)
- minimal residual (MR) (simple iteration)
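As a concrete illustration of the multigrid-preconditioned usage described above, the following sketch (not part of the module itself) wraps a smoothed aggregation hierarchy as a preconditioner for cg; the problem size and solver options are arbitrary choices.

# Sketch: accelerate cg with a smoothed-aggregation multigrid V-cycle as M.
import numpy
from pyamg import smoothed_aggregation_solver
from pyamg.gallery import poisson
from pyamg.krylov import cg
from pyamg.util.linalg import norm

A = poisson((50, 50), format='csr')            # 2D Poisson test matrix
b = numpy.ones((A.shape[0],))

ml = smoothed_aggregation_solver(A)            # multigrid hierarchy
M = ml.aspreconditioner(cycle='V')             # LinearOperator applying one V-cycle

residuals = []                                 # residual history (preconditioner norm)
x, flag = cg(A, b, tol=1e-8, maxiter=30, M=M, residuals=residuals)
print(norm(b - A * x), flag, len(residuals))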
References¶
[1] Yousef Saad, “Iterative Methods for Sparse Linear Systems, Second Edition”, SIAM, pp. 231-234, 2003. http://www-users.cs.umn.edu/~saad/books.html
[2] Richard Barrett et al., “Templates for the Solution of Linear Systems: Building Blocks for Iterative Methods, 2nd Edition”, SIAM. http://www.netlib.org/linalg/html_templates/Templates.html http://www.netlib.org/templates/
- pyamg.krylov.bicgstab(A, b, x0=None, tol=1e-05, maxiter=None, xtype=None, M=None, callback=None, residuals=None)¶
Biconjugate Gradient Algorithm with Stabilization
Solves the linear system Ax = b. Left preconditioning is supported.
- A : {array, matrix, sparse matrix, LinearOperator}
- n x n, linear system to solve
- b : {array, matrix}
- right hand side, shape is (n,) or (n,1)
- x0 : {array, matrix}
- initial guess, default is a vector of zeros
- tol : float
- relative convergence tolerance, i.e. tol is scaled by ||r_0||_2
- maxiter : int
- maximum number of allowed iterations
- xtype : type
- dtype for the solution, default is automatic type detection
- M : {array, matrix, sparse matrix, LinearOperator}
- n x n, inverted preconditioner, i.e. solve M A x = M b.
- callback : function
- User-supplied function is called after each iteration as callback(xk), where xk is the current solution vector
- residuals : list
- residuals has the residual norm history, including the initial residual, appended to it
- Returns : (xNew, info)
- xNew : an updated guess to the solution of Ax = b
- info : halting status of bicgstab
  - 0 : successful exit
  - >0 : convergence to tolerance not achieved, return iteration count instead
  - <0 : numerical breakdown, or illegal input
- The LinearOperator class is in scipy.sparse.linalg.interface. Use this class if you prefer to define A or M as a mat-vec routine as opposed to explicitly constructing the matrix. A.psolve(..) is still supported as a legacy.
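For instance, a matrix-free preconditioner can be supplied through scipy's LinearOperator; the diagonal (Jacobi) scaling below is purely illustrative and not part of pyamg's API.

# Sketch: pass a mat-vec routine as the preconditioner M via LinearOperator.
import numpy
from scipy.sparse.linalg import LinearOperator
from pyamg.gallery import poisson
from pyamg.krylov import bicgstab
from pyamg.util.linalg import norm

A = poisson((10, 10), format='csr')
b = numpy.ones((A.shape[0],))
Dinv = 1.0 / A.diagonal()                      # inverse of diag(A), illustrative only

M = LinearOperator(A.shape, matvec=lambda r: Dinv * numpy.ravel(r), dtype=A.dtype)

x, flag = bicgstab(A, b, tol=1e-8, maxiter=20, M=M)
print(norm(b - A * x), flag)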
>>> from pyamg.krylov import bicgstab
>>> from pyamg.util.linalg import norm
>>> import numpy
>>> from pyamg.gallery import poisson
>>> A = poisson((10,10))
>>> b = numpy.ones((A.shape[0],))
>>> (x,flag) = bicgstab(A,b, maxiter=2, tol=1e-8)
>>> print norm(b - A*x)
4.68163045309
[1] Yousef Saad, “Iterative Methods for Sparse Linear Systems, Second Edition”, SIAM, pp. 231-234, 2003 http://www-users.cs.umn.edu/~saad/books.html
- pyamg.krylov.cg(A, b, x0=None, tol=1e-05, maxiter=None, xtype=None, M=None, callback=None, residuals=None)¶
Conjugate Gradient algorithm
Solves the linear system Ax = b. Left preconditioning is supported.
- A : {array, matrix, sparse matrix, LinearOperator}
- n x n, linear system to solve
- b : {array, matrix}
- right hand side, shape is (n,) or (n,1)
- x0 : {array, matrix}
- initial guess, default is a vector of zeros
- tol : float
- relative convergence tolerance, i.e. tol is scaled by the preconditioner norm of r_0, or ||r_0||_M.
- maxiter : int
- maximum number of allowed iterations
- xtype : type
- dtype for the solution, default is automatic type detection
- M : {array, matrix, sparse matrix, LinearOperator}
- n x n, inverted preconditioner, i.e. solve M A x = M b.
- callback : function
- User-supplied function is called after each iteration as callback(xk), where xk is the current solution vector
- residuals : list
- residuals contains the residual norm history, including the initial residual. The preconditioner norm is used, instead of the Euclidean norm.
- Returns : (xNew, info)
- xNew : an updated guess to the solution of Ax = b
- info : halting status of cg
  - 0 : successful exit
  - >0 : convergence to tolerance not achieved, return iteration count instead
  - <0 : numerical breakdown, or illegal input
- The LinearOperator class is in scipy.sparse.linalg.interface. Use this class if you prefer to define A or M as a mat-vec routine as opposed to explicitly constructing the matrix. A.psolve(..) is still supported as a legacy.
The residual in the preconditioner norm is both used for halting and returned in the residuals list.
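A minimal sketch of collecting that history by passing a list as residuals (problem and options are arbitrary):

# Sketch: record cg's convergence history in a user-supplied list.
import numpy
from pyamg.gallery import poisson
from pyamg.krylov import cg

A = poisson((10, 10), format='csr')
b = numpy.ones((A.shape[0],))

res = []                                       # filled in place, starting with ||r_0||
x, flag = cg(A, b, tol=1e-8, maxiter=50, residuals=res)
print(flag, res[0], res[-1])                   # initial and final residual norms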
>>> from pyamg.krylov import cg
>>> from pyamg.util.linalg import norm
>>> import numpy
>>> from pyamg.gallery import poisson
>>> A = poisson((10,10))
>>> b = numpy.ones((A.shape[0],))
>>> (x,flag) = cg(A,b, maxiter=2, tol=1e-8)
>>> print norm(b - A*x)
10.9370700187
[1] Yousef Saad, “Iterative Methods for Sparse Linear Systems, Second Edition”, SIAM, pp. 262-67, 2003 http://www-users.cs.umn.edu/~saad/books.html
- pyamg.krylov.cgne(A, b, x0=None, tol=1e-05, maxiter=None, xtype=None, M=None, callback=None, residuals=None)¶
Conjugate Gradient, Normal Error algorithm
Applies CG to the normal equations, A A.H y = b, with x = A.H y. Left preconditioning is supported. Note that unless A is well-conditioned, the use of CGNE is inadvisable.
- A : {array, matrix, sparse matrix, LinearOperator}
- n x n, linear system to solve
- b : {array, matrix}
- right hand side, shape is (n,) or (n,1)
- x0 : {array, matrix}
- initial guess, default is a vector of zeros
- tol : float
- relative convergence tolerance, i.e. tol is scaled by ||r_0||_2
- maxiter : int
- maximum number of allowed iterations
- xtype : type
- dtype for the solution, default is automatic type detection
- M : {array, matrix, sparse matrix, LinearOperator}
- n x n, inverted preconditioner, i.e. solve M A A.H x = M b.
- callback : function
- User-supplied function is called after each iteration as callback(xk), where xk is the current solution vector
- residuals : list
- residuals has the residual norm history, including the initial residual, appended to it
- Returns : (xNew, info)
- xNew : an updated guess to the solution of Ax = b
- info : halting status of cgne
  - 0 : successful exit
  - >0 : convergence to tolerance not achieved, return iteration count instead
  - <0 : numerical breakdown, or illegal input
- The LinearOperator class is in scipy.sparse.linalg.interface. Use this class if you prefer to define A or M as a mat-vec routine as opposed to explicitly constructing the matrix. A.psolve(..) is still supported as a legacy.
>>> from pyamg.krylov import cgne
>>> from pyamg.util.linalg import norm
>>> import numpy
>>> from pyamg.gallery import poisson
>>> A = poisson((10,10))
>>> b = numpy.ones((A.shape[0],))
>>> (x,flag) = cgne(A,b, maxiter=2, tol=1e-8)
>>> print norm(b - A*x)
46.1547104367
[1] Yousef Saad, “Iterative Methods for Sparse Linear Systems, Second Edition”, SIAM, pp. 276-7, 2003 http://www-users.cs.umn.edu/~saad/books.html
- pyamg.krylov.cgnr(A, b, x0=None, tol=1e-05, maxiter=None, xtype=None, M=None, callback=None, residuals=None)¶
Conjugate Gradient, Normal Residual algorithm
Applies CG to the normal equations, A.H A x = A.H b. Left preconditioning is supported. Note that unless A is well-conditioned, the use of CGNR is inadvisable.
- A : {array, matrix, sparse matrix, LinearOperator}
- n x n, linear system to solve
- b : {array, matrix}
- right hand side, shape is (n,) or (n,1)
- x0 : {array, matrix}
- initial guess, default is a vector of zeros
- tol : float
- relative convergence tolerance, i.e. tol is scaled by ||r_0||_2
- maxiter : int
- maximum number of allowed iterations
- xtype : type
- dtype for the solution, default is automatic type detection
- M : {array, matrix, sparse matrix, LinearOperator}
- n x n, inverted preconditioner, i.e. solve M A.H A x = M A.H b.
- callback : function
- User-supplied function is called after each iteration as callback(xk), where xk is the current solution vector
- residuals : list
- residuals has the residual norm history, including the initial residual, appended to it
- Returns : (xNew, info)
- xNew : an updated guess to the solution of Ax = b
- info : halting status of cgnr
  - 0 : successful exit
  - >0 : convergence to tolerance not achieved, return iteration count instead
  - <0 : numerical breakdown, or illegal input
- The LinearOperator class is in scipy.sparse.linalg.interface. Use this class if you prefer to define A or M as a mat-vec routine as opposed to explicitly constructing the matrix. A.psolve(..) is still supported as a legacy.
>>> from pyamg.krylov import cgnr
>>> from pyamg.util.linalg import norm
>>> import numpy
>>> from pyamg.gallery import poisson
>>> A = poisson((10,10))
>>> b = numpy.ones((A.shape[0],))
>>> (x,flag) = cgnr(A,b, maxiter=2, tol=1e-8)
>>> print norm(b - A*x)
9.3910201849
[1] Yousef Saad, “Iterative Methods for Sparse Linear Systems, Second Edition”, SIAM, pp. 276-7, 2003 http://www-users.cs.umn.edu/~saad/books.html
- pyamg.krylov.cr(A, b, x0=None, tol=1e-05, maxiter=None, xtype=None, M=None, callback=None, residuals=None)¶
Conjugate Residual algorithm
Solves the linear system Ax = b. Left preconditioning is supported. The matrix A must be Hermitian symmetric (but not necessarily definite).
- A : {array, matrix, sparse matrix, LinearOperator}
- n x n, linear system to solve
- b : {array, matrix}
- right hand side, shape is (n,) or (n,1)
- x0 : {array, matrix}
- initial guess, default is a vector of zeros
- tol : float
- relative convergence tolerance, i.e. tol is scaled by the preconditioner norm of r_0, or ||r_0||_M.
- maxiter : int
- maximum number of allowed iterations
- xtype : type
- dtype for the solution, default is automatic type detection
- M : {array, matrix, sparse matrix, LinearOperator}
- n x n, inverted preconditioner, i.e. solve M A x = M b.
- callback : function
- User-supplied function is called after each iteration as callback(xk), where xk is the current solution vector
- residuals : list
- residuals contains the residual norm history, including the initial residual. The preconditioner norm is used, instead of the Euclidean norm.
- Returns : (xNew, info)
- xNew : an updated guess to the solution of Ax = b
- info : halting status of cr
  - 0 : successful exit
  - >0 : convergence to tolerance not achieved, return iteration count instead
  - <0 : numerical breakdown, or illegal input
- The LinearOperator class is in scipy.sparse.linalg.interface. Use this class if you prefer to define A or M as a mat-vec routine as opposed to explicitly constructing the matrix. A.psolve(..) is still supported as a legacy.
The 2-norm of the preconditioned residual is used both for halting and returned in the residuals list.
>>> from pyamg.krylov import cr
>>> from pyamg.util.linalg import norm
>>> import numpy
>>> from pyamg.gallery import poisson
>>> A = poisson((10,10))
>>> b = numpy.ones((A.shape[0],))
>>> (x,flag) = cr(A,b, maxiter=2, tol=1e-8)
>>> print norm(b - A*x)
10.9370700187
[1] Yousef Saad, “Iterative Methods for Sparse Linear Systems, Second Edition”, SIAM, pp. 262-67, 2003 http://www-users.cs.umn.edu/~saad/books.html
- pyamg.krylov.fgmres(A, b, x0=None, tol=1e-05, restrt=None, maxiter=None, xtype=None, M=None, callback=None, residuals=None)¶
Flexible Generalized Minimum Residual Method (fGMRES)
fGMRES iteratively refines the initial solution guess to the system Ax = b. fGMRES is flexible in the sense that the right preconditioner (M) can vary from iteration to iteration.
- A : {array, matrix, sparse matrix, LinearOperator}
- n x n, linear system to solve
- b : {array, matrix}
- right hand side, shape is (n,) or (n,1)
- x0 : {array, matrix}
- initial guess, default is a vector of zeros
- tol : float
- relative convergence tolerance, i.e. tol is scaled by ||r_0||_2
- restrt : {None, int}
- if int, restrt is max number of inner iterations and maxiter is the max number of outer iterations
- if None, do not restart GMRES, and max number of inner iterations is maxiter
- maxiter : {None, int}
- if restrt is None, maxiter is the max number of inner iterations and GMRES does not restart
- if restrt is int, maxiter is the max number of outer iterations, and restrt is the max number of inner iterations
- xtype : type
- dtype for the solution, default is automatic type detection
- M : {array, matrix, sparse matrix, LinearOperator}
- n x n, inverted right preconditioner, i.e. solve A M y = b, where x = M y. M need not be stationary for fgmres
- callback : function
- User-supplied function is called after each iteration as callback( ||rk||_2 ), where rk is the current residual vector
- residuals : list
- residuals has the residual norm history, including the initial residual, appended to it
- Returns : (xNew, info)
- xNew : an updated guess to the solution of Ax = b
- info : halting status of fgmres
  - 0 : successful exit
  - >0 : convergence to tolerance not achieved, return iteration count instead. This value is precisely the order of the Krylov space.
  - <0 : numerical breakdown, or illegal input
- The LinearOperator class is in scipy.sparse.linalg.interface. Use this class if you prefer to define A or M as a mat-vec routine as opposed to explicitly constructing the matrix. A.psolve(..) is still supported as a legacy.
- fGMRES allows for non-stationary preconditioners, as opposed to GMRES
- For robustness, Householder reflections are used to orthonormalize the Krylov space, and Givens rotations are used to provide the residual norm at each iteration. Flexibility implies that the right preconditioner, M or A.psolve, can vary from iteration to iteration.
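One way to exercise that flexibility is to use an inner iterative solve as the right preconditioner, wrapped as a LinearOperator; this is a sketch only, and the inner solver settings are arbitrary.

# Sketch: a (non-stationary) inner GMRES solve used as fgmres' preconditioner.
import numpy
from scipy.sparse.linalg import LinearOperator
from pyamg.gallery import poisson
from pyamg.krylov import fgmres, gmres
from pyamg.util.linalg import norm

A = poisson((10, 10), format='csr')
b = numpy.ones((A.shape[0],))

def inner_solve(r):
    # a few loose GMRES steps approximate A^{-1} r; the action depends on r,
    # so this preconditioner is not a fixed matrix
    z, _ = gmres(A, numpy.ravel(r), tol=1e-2, maxiter=5)
    return z

M = LinearOperator(A.shape, matvec=inner_solve, dtype=A.dtype)

x, flag = fgmres(A, b, tol=1e-8, maxiter=20, M=M)
print(norm(b - A * x), flag)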
>>> from pyamg.krylov import fgmres
>>> from pyamg.util.linalg import norm
>>> import numpy
>>> from pyamg.gallery import poisson
>>> A = poisson((10,10))
>>> b = numpy.ones((A.shape[0],))
>>> (x,flag) = fgmres(A,b, maxiter=2, tol=1e-8)
>>> print norm(b - A*x)
6.5428213057
[1] Yousef Saad, “Iterative Methods for Sparse Linear Systems, Second Edition”, SIAM, pp. 151-172, pp. 272-275, 2003 http://www-users.cs.umn.edu/~saad/books.html
- pyamg.krylov.gmres(A, b, x0=None, tol=1e-05, restrt=None, maxiter=None, xtype=None, M=None, callback=None, residuals=None, orthog='mgs', **kwargs)¶
- Generalized Minimum Residual Method (GMRES)
- GMRES iteratively refines the initial solution guess to the system Ax = b
- A : {array, matrix, sparse matrix, LinearOperator}
- n x n, linear system to solve
- b : {array, matrix}
- right hand side, shape is (n,) or (n,1)
- x0 : {array, matrix}
- initial guess, default is a vector of zeros
- tol : float
- relative convergence tolerance, i.e. tol is scaled by the norm of the initial preconditioned residual
- restrt : {None, int}
- if int, restrt is max number of inner iterations and maxiter is the max number of outer iterations
- if None, do not restart GMRES, and max number of inner iterations is maxiter
- maxiter : {None, int}
- if restrt is None, maxiter is the max number of inner iterations and GMRES does not restart
- if restrt is int, maxiter is the max number of outer iterations, and restrt is the max number of inner iterations
- xtype : type
- dtype for the solution, default is automatic type detection
- M : {array, matrix, sparse matrix, LinearOperator}
- n x n, inverted preconditioner, i.e. solve M A x = M b.
- callback : function
- User-supplied function is called after each iteration as callback( ||rk||_2 ), where rk is the current preconditioned residual vector
- residuals : list
- residuals contains the preconditioned residual norm history, including the initial residual.
- orthog : string
- ‘householder’ calls _gmres_householder, which uses Householder reflections to find the orthogonal basis for the Krylov space.
- ‘mgs’ calls _gmres_mgs, which uses modified Gram-Schmidt to find the orthogonal basis for the Krylov space.
- Returns : (xNew, info)
- xNew : an updated guess to the solution of Ax = b
- info : halting status of gmres
  - 0 : successful exit
  - >0 : convergence to tolerance not achieved, return iteration count instead. This value is precisely the order of the Krylov space.
  - <0 : numerical breakdown, or illegal input
- The LinearOperator class is in scipy.sparse.linalg.interface. Use this class if you prefer to define A or M as a mat-vec routine as opposed to explicitly constructing the matrix. A.psolve(..) is still supported as a legacy.
- The orthogonalization method orthog=’householder’ is more robust than orthog=’mgs’; however, for the majority of problems, convergence is reached before ‘mgs’ loses orthogonality in the basis.
- orthog=’householder’ has been more rigorously tested, and is therefore currently the default
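A sketch of the restrt/maxiter interaction and the orthog option described above (the specific values are arbitrary):

# Sketch: restarted GMRES(20), i.e. restrt inner iterations per cycle and at
# most maxiter outer cycles, with Householder orthogonalization.
import numpy
from pyamg.gallery import poisson
from pyamg.krylov import gmres
from pyamg.util.linalg import norm

A = poisson((10, 10), format='csr')
b = numpy.ones((A.shape[0],))

res = []
x, flag = gmres(A, b, tol=1e-8, restrt=20, maxiter=5,
                orthog='householder', residuals=res)
print(norm(b - A * x), flag, len(res))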
>>> from pyamg.krylov import gmres
>>> from pyamg.util.linalg import norm
>>> import numpy
>>> from pyamg.gallery import poisson
>>> A = poisson((10,10))
>>> b = numpy.ones((A.shape[0],))
>>> (x,flag) = gmres(A,b, maxiter=2, tol=1e-8)
>>> print norm(b - A*x)
6.5428213057
[1] Yousef Saad, “Iterative Methods for Sparse Linear Systems, Second Edition”, SIAM, pp. 151-172, pp. 272-275, 2003 http://www-users.cs.umn.edu/~saad/books.html
- pyamg.krylov.minimal_residual(A, b, x0=None, tol=1e-05, maxiter=None, xtype=None, M=None, callback=None, residuals=None)¶
Minimal residual (MR) algorithm
Solves the linear system Ax = b. Left preconditioning is supported.
- A : {array, matrix, sparse matrix, LinearOperator}
- n x n, linear system to solve
- b : {array, matrix}
- right hand side, shape is (n,) or (n,1)
- x0 : {array, matrix}
- initial guess, default is a vector of zeros
- tol : float
- relative convergence tolerance, i.e. tol is scaled by the preconditioner norm of r_0, or ||r_0||_M.
- maxiter : int
- maximum number of allowed iterations
- xtype : type
- dtype for the solution, default is automatic type detection
- M : {array, matrix, sparse matrix, LinearOperator}
- n x n, inverted preconditioner, i.e. solve M A x = M b.
- callback : function
- User-supplied function is called after each iteration as callback(xk), where xk is the current solution vector
- residuals : list
- residuals contains the residual norm history, including the initial residual. The preconditioner norm is used, instead of the Euclidean norm.
- Returns : (xNew, info)
- xNew : an updated guess to the solution of Ax = b
- info : halting status of minimal_residual
  - 0 : successful exit
  - >0 : convergence to tolerance not achieved, return iteration count instead
  - <0 : numerical breakdown, or illegal input
- The LinearOperator class is in scipy.sparse.linalg.interface. Use this class if you prefer to define A or M as a mat-vec routine as opposed to explicitly constructing the matrix. A.psolve(..) is still supported as a legacy.
The residual in the preconditioner norm is both used for halting and returned in the residuals list.
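The callback hook can also be used to monitor the iteration; a small sketch (the list of saved iterates is purely illustrative):

# Sketch: capture each iterate of minimal_residual through the callback.
import numpy
from pyamg.gallery import poisson
from pyamg.krylov import minimal_residual

A = poisson((10, 10), format='csr')
b = numpy.ones((A.shape[0],))

iterates = []                                  # one snapshot per iteration
x, flag = minimal_residual(A, b, tol=1e-8, maxiter=30,
                           callback=lambda xk: iterates.append(xk.copy()))
print(flag, len(iterates))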
>>> from pyamg.krylov import minimal_residual
>>> from pyamg.util.linalg import norm
>>> import numpy
>>> from pyamg.gallery import poisson
>>> A = poisson((10,10))
>>> b = numpy.ones((A.shape[0],))
>>> (x,flag) = minimal_residual(A,b, maxiter=2, tol=1e-8)
>>> print norm(b - A*x)
7.26369350856
[1] Yousef Saad, “Iterative Methods for Sparse Linear Systems, Second Edition”, SIAM, pp. 137–142, 2003 http://www-users.cs.umn.edu/~saad/books.html
- pyamg.krylov.steepest_descent(A, b, x0=None, tol=1e-05, maxiter=None, xtype=None, M=None, callback=None, residuals=None)¶
Steepest descent algorithm
Solves the linear system Ax = b. Left preconditioning is supported.
- A : {array, matrix, sparse matrix, LinearOperator}
- n x n, linear system to solve
- b : {array, matrix}
- right hand side, shape is (n,) or (n,1)
- x0 : {array, matrix}
- initial guess, default is a vector of zeros
- tol : float
- relative convergence tolerance, i.e. tol is scaled by the preconditioner norm of r_0, or ||r_0||_M.
- maxiter : int
- maximum number of allowed iterations
- xtype : type
- dtype for the solution, default is automatic type detection
- M : {array, matrix, sparse matrix, LinearOperator}
- n x n, inverted preconditioner, i.e. solve M A x = M b.
- callback : function
- User-supplied function is called after each iteration as callback(xk), where xk is the current solution vector
- residuals : list
- residuals contains the residual norm history, including the initial residual. The preconditioner norm is used, instead of the Euclidean norm.
- Returns : (xNew, info)
- xNew : an updated guess to the solution of Ax = b
- info : halting status of steepest_descent
  - 0 : successful exit
  - >0 : convergence to tolerance not achieved, return iteration count instead
  - <0 : numerical breakdown, or illegal input
- The LinearOperator class is in scipy.sparse.linalg.interface. Use this class if you prefer to define A or M as a mat-vec routine as opposed to explicitly constructing the matrix. A.psolve(..) is still supported as a legacy.
The residual in the preconditioner norm is both used for halting and returned in the residuals list.
>>> from pyamg.krylov import steepest_descent
>>> from pyamg.util.linalg import norm
>>> import numpy
>>> from pyamg.gallery import poisson
>>> A = poisson((10,10))
>>> b = numpy.ones((A.shape[0],))
>>> (x,flag) = steepest_descent(A,b, maxiter=2, tol=1e-8)
>>> print norm(b - A*x)
7.89436429704
[1] Yousef Saad, “Iterative Methods for Sparse Linear Systems, Second Edition”, SIAM, pp. 137–142, 2003 http://www-users.cs.umn.edu/~saad/books.html
_bicgstab Module¶
- pyamg.krylov._bicgstab.bicgstab(A, b, x0=None, tol=1e-05, maxiter=None, xtype=None, M=None, callback=None, residuals=None)[source]¶
Biconjugate Gradient Algorithm with Stabilization
Solves the linear system Ax = b. Left preconditioning is supported.
- A : {array, matrix, sparse matrix, LinearOperator}
- n x n, linear system to solve
- b : {array, matrix}
- right hand side, shape is (n,) or (n,1)
- x0 : {array, matrix}
- initial guess, default is a vector of zeros
- tol : float
- relative convergence tolerance, i.e. tol is scaled by ||r_0||_2
- maxiter : int
- maximum number of allowed iterations
- xtype : type
- dtype for the solution, default is automatic type detection
- M : {array, matrix, sparse matrix, LinearOperator}
- n x n, inverted preconditioner, i.e. solve M A x = M b.
- callback : function
- User-supplied function is called after each iteration as callback(xk), where xk is the current solution vector
- residuals : list
- residuals has the residual norm history, including the initial residual, appended to it
- Returns : (xNew, info)
- xNew : an updated guess to the solution of Ax = b
- info : halting status of bicgstab
  - 0 : successful exit
  - >0 : convergence to tolerance not achieved, return iteration count instead
  - <0 : numerical breakdown, or illegal input
- The LinearOperator class is in scipy.sparse.linalg.interface. Use this class if you prefer to define A or M as a mat-vec routine as opposed to explicitly constructing the matrix. A.psolve(..) is still supported as a legacy.
>>> from pyamg.krylov import bicgstab
>>> from pyamg.util.linalg import norm
>>> import numpy
>>> from pyamg.gallery import poisson
>>> A = poisson((10,10))
>>> b = numpy.ones((A.shape[0],))
>>> (x,flag) = bicgstab(A,b, maxiter=2, tol=1e-8)
>>> print norm(b - A*x)
4.68163045309
[1] Yousef Saad, “Iterative Methods for Sparse Linear Systems, Second Edition”, SIAM, pp. 231-234, 2003 http://www-users.cs.umn.edu/~saad/books.html
_cg Module¶
- pyamg.krylov._cg.cg(A, b, x0=None, tol=1e-05, maxiter=None, xtype=None, M=None, callback=None, residuals=None)[source]¶
Conjugate Gradient algorithm
Solves the linear system Ax = b. Left preconditioning is supported.
- A : {array, matrix, sparse matrix, LinearOperator}
- n x n, linear system to solve
- b : {array, matrix}
- right hand side, shape is (n,) or (n,1)
- x0 : {array, matrix}
- initial guess, default is a vector of zeros
- tol : float
- relative convergence tolerance, i.e. tol is scaled by the preconditioner norm of r_0, or ||r_0||_M.
- maxiter : int
- maximum number of allowed iterations
- xtype : type
- dtype for the solution, default is automatic type detection
- M : {array, matrix, sparse matrix, LinearOperator}
- n x n, inverted preconditioner, i.e. solve M A x = M b.
- callback : function
- User-supplied function is called after each iteration as callback(xk), where xk is the current solution vector
- residuals : list
- residuals contains the residual norm history, including the initial residual. The preconditioner norm is used, instead of the Euclidean norm.
- Returns : (xNew, info)
- xNew : an updated guess to the solution of Ax = b
- info : halting status of cg
  - 0 : successful exit
  - >0 : convergence to tolerance not achieved, return iteration count instead
  - <0 : numerical breakdown, or illegal input
- The LinearOperator class is in scipy.sparse.linalg.interface. Use this class if you prefer to define A or M as a mat-vec routine as opposed to explicitly constructing the matrix. A.psolve(..) is still supported as a legacy.
The residual in the preconditioner norm is both used for halting and returned in the residuals list.
>>> from pyamg.krylov import cg
>>> from pyamg.util.linalg import norm
>>> import numpy
>>> from pyamg.gallery import poisson
>>> A = poisson((10,10))
>>> b = numpy.ones((A.shape[0],))
>>> (x,flag) = cg(A,b, maxiter=2, tol=1e-8)
>>> print norm(b - A*x)
10.9370700187
[1] Yousef Saad, “Iterative Methods for Sparse Linear Systems, Second Edition”, SIAM, pp. 262-67, 2003 http://www-users.cs.umn.edu/~saad/books.html
_cgne Module¶
- pyamg.krylov._cgne.cgne(A, b, x0=None, tol=1e-05, maxiter=None, xtype=None, M=None, callback=None, residuals=None)[source]¶
Conjugate Gradient, Normal Error algorithm
Applies CG to the normal equations, A A.H y = b, with x = A.H y. Left preconditioning is supported. Note that unless A is well-conditioned, the use of CGNE is inadvisable.
- A : {array, matrix, sparse matrix, LinearOperator}
- n x n, linear system to solve
- b : {array, matrix}
- right hand side, shape is (n,) or (n,1)
- x0 : {array, matrix}
- initial guess, default is a vector of zeros
- tol : float
- relative convergence tolerance, i.e. tol is scaled by ||r_0||_2
- maxiter : int
- maximum number of allowed iterations
- xtype : type
- dtype for the solution, default is automatic type detection
- M : {array, matrix, sparse matrix, LinearOperator}
- n x n, inverted preconditioner, i.e. solve M A A.H x = M b.
- callback : function
- User-supplied function is called after each iteration as callback(xk), where xk is the current solution vector
- residuals : list
- residuals has the residual norm history, including the initial residual, appended to it
- Returns : (xNew, info)
- xNew : an updated guess to the solution of Ax = b
- info : halting status of cgne
  - 0 : successful exit
  - >0 : convergence to tolerance not achieved, return iteration count instead
  - <0 : numerical breakdown, or illegal input
- The LinearOperator class is in scipy.sparse.linalg.interface. Use this class if you prefer to define A or M as a mat-vec routine as opposed to explicitly constructing the matrix. A.psolve(..) is still supported as a legacy.
>>> from pyamg.krylov import cgne
>>> from pyamg.util.linalg import norm
>>> import numpy
>>> from pyamg.gallery import poisson
>>> A = poisson((10,10))
>>> b = numpy.ones((A.shape[0],))
>>> (x,flag) = cgne(A,b, maxiter=2, tol=1e-8)
>>> print norm(b - A*x)
46.1547104367
[1] Yousef Saad, “Iterative Methods for Sparse Linear Systems, Second Edition”, SIAM, pp. 276-7, 2003 http://www-users.cs.umn.edu/~saad/books.html
_cgnr Module¶
- pyamg.krylov._cgnr.cgnr(A, b, x0=None, tol=1e-05, maxiter=None, xtype=None, M=None, callback=None, residuals=None)[source]¶
Conjugate Gradient, Normal Residual algorithm
Applies CG to the normal equations, A.H A x = A.H b. Left preconditioning is supported. Note that unless A is well-conditioned, the use of CGNR is inadvisable.
- A : {array, matrix, sparse matrix, LinearOperator}
- n x n, linear system to solve
- b : {array, matrix}
- right hand side, shape is (n,) or (n,1)
- x0 : {array, matrix}
- initial guess, default is a vector of zeros
- tol : float
- relative convergence tolerance, i.e. tol is scaled by ||r_0||_2
- maxiter : int
- maximum number of allowed iterations
- xtype : type
- dtype for the solution, default is automatic type detection
- M : {array, matrix, sparse matrix, LinearOperator}
- n x n, inverted preconditioner, i.e. solve M A.H A x = M A.H b.
- callback : function
- User-supplied function is called after each iteration as callback(xk), where xk is the current solution vector
- residuals : list
- residuals has the residual norm history, including the initial residual, appended to it
- Returns : (xNew, info)
- xNew : an updated guess to the solution of Ax = b
- info : halting status of cgnr
  - 0 : successful exit
  - >0 : convergence to tolerance not achieved, return iteration count instead
  - <0 : numerical breakdown, or illegal input
- The LinearOperator class is in scipy.sparse.linalg.interface. Use this class if you prefer to define A or M as a mat-vec routine as opposed to explicitly constructing the matrix. A.psolve(..) is still supported as a legacy.
>>> from pyamg.krylov import cgnr
>>> from pyamg.util.linalg import norm
>>> import numpy
>>> from pyamg.gallery import poisson
>>> A = poisson((10,10))
>>> b = numpy.ones((A.shape[0],))
>>> (x,flag) = cgnr(A,b, maxiter=2, tol=1e-8)
>>> print norm(b - A*x)
9.3910201849
[1] Yousef Saad, “Iterative Methods for Sparse Linear Systems, Second Edition”, SIAM, pp. 276-7, 2003 http://www-users.cs.umn.edu/~saad/books.html
_cr Module¶
- pyamg.krylov._cr.cr(A, b, x0=None, tol=1e-05, maxiter=None, xtype=None, M=None, callback=None, residuals=None)[source]¶
Conjugate Residual algorithm
Solves the linear system Ax = b. Left preconditioning is supported. The matrix A must be Hermitian symmetric (but not necessarily definite).
- A : {array, matrix, sparse matrix, LinearOperator}
- n x n, linear system to solve
- b : {array, matrix}
- right hand side, shape is (n,) or (n,1)
- x0 : {array, matrix}
- initial guess, default is a vector of zeros
- tol : float
- relative convergence tolerance, i.e. tol is scaled by the preconditioner norm of r_0, or ||r_0||_M.
- maxiter : int
- maximum number of allowed iterations
- xtype : type
- dtype for the solution, default is automatic type detection
- M : {array, matrix, sparse matrix, LinearOperator}
- n x n, inverted preconditioner, i.e. solve M A x = M b.
- callback : function
- User-supplied function is called after each iteration as callback(xk), where xk is the current solution vector
- residuals : list
- residuals contains the residual norm history, including the initial residual. The preconditioner norm is used, instead of the Euclidean norm.
- Returns : (xNew, info)
- xNew : an updated guess to the solution of Ax = b
- info : halting status of cr
  - 0 : successful exit
  - >0 : convergence to tolerance not achieved, return iteration count instead
  - <0 : numerical breakdown, or illegal input
- The LinearOperator class is in scipy.sparse.linalg.interface. Use this class if you prefer to define A or M as a mat-vec routine as opposed to explicitly constructing the matrix. A.psolve(..) is still supported as a legacy.
The 2-norm of the preconditioned residual is used both for halting and returned in the residuals list.
>>> from pyamg.krylov import cr
>>> from pyamg.util.linalg import norm
>>> import numpy
>>> from pyamg.gallery import poisson
>>> A = poisson((10,10))
>>> b = numpy.ones((A.shape[0],))
>>> (x,flag) = cr(A,b, maxiter=2, tol=1e-8)
>>> print norm(b - A*x)
10.9370700187
[1] Yousef Saad, “Iterative Methods for Sparse Linear Systems, Second Edition”, SIAM, pp. 262-67, 2003 http://www-users.cs.umn.edu/~saad/books.html
_fgmres Module¶
- pyamg.krylov._fgmres.fgmres(A, b, x0=None, tol=1e-05, restrt=None, maxiter=None, xtype=None, M=None, callback=None, residuals=None)[source]¶
Flexible Generalized Minimum Residual Method (fGMRES)
fGMRES iteratively refines the initial solution guess to the system Ax = b. fGMRES is flexible in the sense that the right preconditioner (M) can vary from iteration to iteration.
- A : {array, matrix, sparse matrix, LinearOperator}
- n x n, linear system to solve
- b : {array, matrix}
- right hand side, shape is (n,) or (n,1)
- x0 : {array, matrix}
- initial guess, default is a vector of zeros
- tol : float
- relative convergence tolerance, i.e. tol is scaled by ||r_0||_2
- restrt : {None, int}
- if int, restrt is max number of inner iterations and maxiter is the max number of outer iterations
- if None, do not restart GMRES, and max number of inner iterations is maxiter
- maxiter : {None, int}
- if restrt is None, maxiter is the max number of inner iterations and GMRES does not restart
- if restrt is int, maxiter is the max number of outer iterations, and restrt is the max number of inner iterations
- xtype : type
- dtype for the solution, default is automatic type detection
- M : {array, matrix, sparse matrix, LinearOperator}
- n x n, inverted right preconditioner, i.e. solve A M y = b, where x = M y. M need not be stationary for fgmres
- callback : function
- User-supplied function is called after each iteration as callback( ||rk||_2 ), where rk is the current residual vector
- residuals : list
- residuals has the residual norm history, including the initial residual, appended to it
- Returns : (xNew, info)
- xNew : an updated guess to the solution of Ax = b
- info : halting status of fgmres
  - 0 : successful exit
  - >0 : convergence to tolerance not achieved, return iteration count instead. This value is precisely the order of the Krylov space.
  - <0 : numerical breakdown, or illegal input
- The LinearOperator class is in scipy.sparse.linalg.interface. Use this class if you prefer to define A or M as a mat-vec routine as opposed to explicitly constructing the matrix. A.psolve(..) is still supported as a legacy.
- fGMRES allows for non-stationary preconditioners, as opposed to GMRES
- For robustness, Householder reflections are used to orthonormalize the Krylov space, and Givens rotations are used to provide the residual norm at each iteration. Flexibility implies that the right preconditioner, M or A.psolve, can vary from iteration to iteration.
>>> from pyamg.krylov import fgmres
>>> from pyamg.util.linalg import norm
>>> import numpy
>>> from pyamg.gallery import poisson
>>> A = poisson((10,10))
>>> b = numpy.ones((A.shape[0],))
>>> (x,flag) = fgmres(A,b, maxiter=2, tol=1e-8)
>>> print norm(b - A*x)
6.5428213057
[1] Yousef Saad, “Iterative Methods for Sparse Linear Systems, Second Edition”, SIAM, pp. 151-172, pp. 272-275, 2003 http://www-users.cs.umn.edu/~saad/books.html
_gmres Module¶
- pyamg.krylov._gmres.gmres(A, b, x0=None, tol=1e-05, restrt=None, maxiter=None, xtype=None, M=None, callback=None, residuals=None, orthog='mgs', **kwargs)[source]¶
- Generalized Minimum Residual Method (GMRES)
- GMRES iteratively refines the initial solution guess to the system Ax = b
- A : {array, matrix, sparse matrix, LinearOperator}
- n x n, linear system to solve
- b : {array, matrix}
- right hand side, shape is (n,) or (n,1)
- x0 : {array, matrix}
- initial guess, default is a vector of zeros
- tol : float
- relative convergence tolerance, i.e. tol is scaled by the norm of the initial preconditioned residual
- restrt : {None, int}
- if int, restrt is max number of inner iterations and maxiter is the max number of outer iterations
- if None, do not restart GMRES, and max number of inner iterations is maxiter
- maxiter : {None, int}
- if restrt is None, maxiter is the max number of inner iterations and GMRES does not restart
- if restrt is int, maxiter is the max number of outer iterations, and restrt is the max number of inner iterations
- xtype : type
- dtype for the solution, default is automatic type detection
- M : {array, matrix, sparse matrix, LinearOperator}
- n x n, inverted preconditioner, i.e. solve M A x = M b.
- callback : function
- User-supplied function is called after each iteration as callback( ||rk||_2 ), where rk is the current preconditioned residual vector
- residuals : list
- residuals contains the preconditioned residual norm history, including the initial residual.
- orthog : string
- ‘householder’ calls _gmres_householder, which uses Householder reflections to find the orthogonal basis for the Krylov space.
- ‘mgs’ calls _gmres_mgs, which uses modified Gram-Schmidt to find the orthogonal basis for the Krylov space.
- Returns : (xNew, info)
- xNew : an updated guess to the solution of Ax = b
- info : halting status of gmres
  - 0 : successful exit
  - >0 : convergence to tolerance not achieved, return iteration count instead. This value is precisely the order of the Krylov space.
  - <0 : numerical breakdown, or illegal input
- The LinearOperator class is in scipy.sparse.linalg.interface. Use this class if you prefer to define A or M as a mat-vec routine as opposed to explicitly constructing the matrix. A.psolve(..) is still supported as a legacy.
- The orthogonalization method orthog=’householder’ is more robust than orthog=’mgs’; however, for the majority of problems, convergence is reached before ‘mgs’ loses orthogonality in the basis.
- orthog=’householder’ has been more rigorously tested, and is therefore currently the default
>>> from pyamg.krylov import gmres
>>> from pyamg.util.linalg import norm
>>> import numpy
>>> from pyamg.gallery import poisson
>>> A = poisson((10,10))
>>> b = numpy.ones((A.shape[0],))
>>> (x,flag) = gmres(A,b, maxiter=2, tol=1e-8)
>>> print norm(b - A*x)
6.5428213057
[1] Yousef Saad, “Iterative Methods for Sparse Linear Systems, Second Edition”, SIAM, pp. 151-172, pp. 272-275, 2003 http://www-users.cs.umn.edu/~saad/books.html
_gmres_householder Module¶
- pyamg.krylov._gmres_householder.gmres_householder(A, b, x0=None, tol=1e-05, restrt=None, maxiter=None, xtype=None, M=None, callback=None, residuals=None)[source]¶
- Generalized Minimum Residual Method (GMRES)
- GMRES iteratively refines the initial solution guess to the system Ax = b. Householder reflections are used for orthogonalization.
- A : {array, matrix, sparse matrix, LinearOperator}
- n x n, linear system to solve
- b : {array, matrix}
- right hand side, shape is (n,) or (n, 1)
- x0 : {array, matrix}
- initial guess, default is a vector of zeros
- tol : float
- relative convergence tolerance, i.e. tol is scaled by the norm of the initial preconditioned residual
- restrt : {None, int}
- if int, restrt is max number of inner iterations and maxiter is the max number of outer iterations
- if None, do not restart GMRES, and max number of inner iterations is maxiter
- maxiter : {None, int}
- if restrt is None, maxiter is the max number of inner iterations and GMRES does not restart
- if restrt is int, maxiter is the max number of outer iterations, and restrt is the max number of inner iterations
- xtype : type
- dtype for the solution, default is automatic type detection
- M : {array, matrix, sparse matrix, LinearOperator}
- n x n, inverted preconditioner, i.e. solve M A x = M b.
- callback : function
- User-supplied function is called after each iteration as callback( ||rk||_2 ), where rk is the current preconditioned residual vector
- residuals : list
- residuals contains the preconditioned residual norm history, including the initial residual.
- Returns : (xNew, info)
- xNew : an updated guess to the solution of Ax = b
- info : halting status of gmres
  - 0 : successful exit
  - >0 : convergence to tolerance not achieved, return iteration count instead. This value is precisely the order of the Krylov space.
  - <0 : numerical breakdown, or illegal input
- The LinearOperator class is in scipy.sparse.linalg.interface. Use this class if you prefer to define A or M as a mat-vec routine as opposed to explicitly constructing the matrix. A.psolve(..) is still supported as a legacy.
- For robustness, Householder reflections are used to orthonormalize the Krylov space, and Givens rotations are used to provide the residual norm at each iteration.
>>> from pyamg.krylov import gmres
>>> from pyamg.util.linalg import norm
>>> import numpy
>>> from pyamg.gallery import poisson
>>> A = poisson((10, 10))
>>> b = numpy.ones((A.shape[0],))
>>> (x, flag) = gmres(A, b, maxiter=2, tol=1e-8, orthog='householder')
>>> print norm(b - A*x)
6.5428213057
[1] Yousef Saad, “Iterative Methods for Sparse Linear Systems, Second Edition”, SIAM, pp. 151-172, pp. 272-275, 2003 http://www-users.cs.umn.edu/~saad/books.html
_gmres_mgs Module¶
- pyamg.krylov._gmres_mgs.gmres_mgs(A, b, x0=None, tol=1e-05, restrt=None, maxiter=None, xtype=None, M=None, callback=None, residuals=None, reorth=False)[source]¶
- Generalized Minimum Residual Method (GMRES)
- GMRES iteratively refines the initial solution guess to the system Ax = b. This is the modified Gram-Schmidt version.
- A : {array, matrix, sparse matrix, LinearOperator}
- n x n, linear system to solve
- b : {array, matrix}
- right hand side, shape is (n,) or (n,1)
- x0 : {array, matrix}
- initial guess, default is a vector of zeros
- tol : float
- relative convergence tolerance, i.e. tol is scaled by the norm of the initial preconditioned residual
- restrt : {None, int}
- if int, restrt is max number of inner iterations and maxiter is the max number of outer iterations
- if None, do not restart GMRES, and max number of inner iterations is maxiter
- maxiter : {None, int}
- if restrt is None, maxiter is the max number of inner iterations and GMRES does not restart
- if restrt is int, maxiter is the max number of outer iterations, and restrt is the max number of inner iterations
- xtype : type
- dtype for the solution, default is automatic type detection
- M : {array, matrix, sparse matrix, LinearOperator}
- n x n, inverted preconditioner, i.e. solve M A x = M b.
- callback : function
- User-supplied function is called after each iteration as callback( ||rk||_2 ), where rk is the current preconditioned residual vector
- residuals : list
- residuals contains the preconditioned residual norm history, including the initial residual.
- reorth : boolean
- If True, then a check is made whether to re-orthogonalize the Krylov space each GMRES iteration
- Returns : (xNew, info)
- xNew : an updated guess to the solution of Ax = b
- info : halting status of gmres
  - 0 : successful exit
  - >0 : convergence to tolerance not achieved, return iteration count instead. This value is precisely the order of the Krylov space.
  - <0 : numerical breakdown, or illegal input
- The LinearOperator class is in scipy.sparse.linalg.interface. Use this class if you prefer to define A or M as a mat-vec routine as opposed to explicitly constructing the matrix. A.psolve(..) is still supported as a legacy.
- For robustness, modified Gram-Schmidt is used to orthogonalize the Krylov space, and Givens rotations are used to provide the residual norm at each iteration.
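A short sketch of the MGS variant with the optional re-orthogonalization check enabled, calling the module-level routine documented here (problem and options are arbitrary):

# Sketch: gmres_mgs with re-orthogonalization of the Krylov basis enabled.
import numpy
from pyamg.gallery import poisson
from pyamg.krylov._gmres_mgs import gmres_mgs
from pyamg.util.linalg import norm

A = poisson((10, 10), format='csr')
b = numpy.ones((A.shape[0],))

x, flag = gmres_mgs(A, b, tol=1e-8, maxiter=20, reorth=True)
print(norm(b - A * x), flag)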
>>> from pyamg.krylov import gmres
>>> from pyamg.util.linalg import norm
>>> import numpy
>>> from pyamg.gallery import poisson
>>> A = poisson((10,10))
>>> b = numpy.ones((A.shape[0],))
>>> (x,flag) = gmres(A,b, maxiter=2, tol=1e-8, orthog='mgs')
>>> print norm(b - A*x)
6.5428213057
[1] Yousef Saad, “Iterative Methods for Sparse Linear Systems, Second Edition”, SIAM, pp. 151-172, pp. 272-275, 2003 http://www-users.cs.umn.edu/~saad/books.html
_minimal_residual Module¶
- pyamg.krylov._minimal_residual.minimal_residual(A, b, x0=None, tol=1e-05, maxiter=None, xtype=None, M=None, callback=None, residuals=None)[source]¶
Minimal residual (MR) algorithm
Solves the linear system Ax = b. Left preconditioning is supported.
- A : {array, matrix, sparse matrix, LinearOperator}
- n x n, linear system to solve
- b : {array, matrix}
- right hand side, shape is (n,) or (n,1)
- x0 : {array, matrix}
- initial guess, default is a vector of zeros
- tol : float
- relative convergence tolerance, i.e. tol is scaled by the preconditioner norm of r_0, or ||r_0||_M.
- maxiter : int
- maximum number of allowed iterations
- xtype : type
- dtype for the solution, default is automatic type detection
- M : {array, matrix, sparse matrix, LinearOperator}
- n x n, inverted preconditioner, i.e. solve M A x = M b.
- callback : function
- User-supplied function is called after each iteration as callback(xk), where xk is the current solution vector
- residuals : list
- residuals contains the residual norm history, including the initial residual. The preconditioner norm is used, instead of the Euclidean norm.
- Returns : (xNew, info)
- xNew : an updated guess to the solution of Ax = b
- info : halting status of minimal_residual
  - 0 : successful exit
  - >0 : convergence to tolerance not achieved, return iteration count instead
  - <0 : numerical breakdown, or illegal input
- The LinearOperator class is in scipy.sparse.linalg.interface. Use this class if you prefer to define A or M as a mat-vec routine as opposed to explicitly constructing the matrix. A.psolve(..) is still supported as a legacy.
The residual in the preconditioner norm is both used for halting and returned in the residuals list.
>>> from pyamg.krylov import minimal_residual
>>> from pyamg.util.linalg import norm
>>> import numpy
>>> from pyamg.gallery import poisson
>>> A = poisson((10,10))
>>> b = numpy.ones((A.shape[0],))
>>> (x,flag) = minimal_residual(A,b, maxiter=2, tol=1e-8)
>>> print norm(b - A*x)
7.26369350856
[1] Yousef Saad, “Iterative Methods for Sparse Linear Systems, Second Edition”, SIAM, pp. 137–142, 2003 http://www-users.cs.umn.edu/~saad/books.html
_steepest_descent Module¶
- pyamg.krylov._steepest_descent.steepest_descent(A, b, x0=None, tol=1e-05, maxiter=None, xtype=None, M=None, callback=None, residuals=None)[source]¶
Steepest descent algorithm
Solves the linear system Ax = b. Left preconditioning is supported.
- A : {array, matrix, sparse matrix, LinearOperator}
- n x n, linear system to solve
- b : {array, matrix}
- right hand side, shape is (n,) or (n,1)
- x0 : {array, matrix}
- initial guess, default is a vector of zeros
- tol : float
- relative convergence tolerance, i.e. tol is scaled by the preconditioner norm of r_0, or ||r_0||_M.
- maxiter : int
- maximum number of allowed iterations
- xtype : type
- dtype for the solution, default is automatic type detection
- M : {array, matrix, sparse matrix, LinearOperator}
- n x n, inverted preconditioner, i.e. solve M A x = M b.
- callback : function
- User-supplied function is called after each iteration as callback(xk), where xk is the current solution vector
- residuals : list
- residuals contains the residual norm history, including the initial residual. The preconditioner norm is used, instead of the Euclidean norm.
- Returns : (xNew, info)
- xNew : an updated guess to the solution of Ax = b
- info : halting status of steepest_descent
  - 0 : successful exit
  - >0 : convergence to tolerance not achieved, return iteration count instead
  - <0 : numerical breakdown, or illegal input
- The LinearOperator class is in scipy.sparse.linalg.interface. Use this class if you prefer to define A or M as a mat-vec routine as opposed to explicitly constructing the matrix. A.psolve(..) is still supported as a legacy.
The residual in the preconditioner norm is both used for halting and returned in the residuals list.
>>> from pyamg.krylov import steepest_descent
>>> from pyamg.util.linalg import norm
>>> import numpy
>>> from pyamg.gallery import poisson
>>> A = poisson((10,10))
>>> b = numpy.ones((A.shape[0],))
>>> (x,flag) = steepest_descent(A,b, maxiter=2, tol=1e-8)
>>> print norm(b - A*x)
7.89436429704
[1] Yousef Saad, “Iterative Methods for Sparse Linear Systems, Second Edition”, SIAM, pp. 137–142, 2003 http://www-users.cs.umn.edu/~saad/books.html
info Module¶
Krylov Solvers
This module contains several Krylov subspace methods, in addition to two simple iterations, to solve linear systems iteratively. These methods often use multigrid as a preconditioner to accelerate convergence to the solution.
Functions¶
- gmres
- fgmres
- cgne
- cgnr
- cg
- bicgstab
- steepest descent (simple iteration)
- minimal residual (MR) (simple iteration)
References¶
[1] Yousef Saad, “Iterative Methods for Sparse Linear Systems, Second Edition”, SIAM, pp. 231-234, 2003. http://www-users.cs.umn.edu/~saad/books.html
[2] Richard Barrett et al., “Templates for the Solution of Linear Systems: Building Blocks for Iterative Methods, 2nd Edition”, SIAM. http://www.netlib.org/linalg/html_templates/Templates.html http://www.netlib.org/templates/