Is a simple diagonal scaling the best preconditioner for conjugate gradients on supercomputers?

Pini, Giorgio; Gambolati, Giuseppe
1990

Abstract

The implementation of accelerated conjugate gradients for the solution of large sparse systems of linear equations on vector/parallel processors requires programming features significantly different from those needed on a scalar machine. Furthermore, a numerical algorithm which works well on the latter may be largely inefficient on the former. In the present analysis the numerical performance on a CRAY X-MP/48 of some widely known preconditioning techniques is compared, including diagonal scaling, the incomplete Cholesky decomposition and the least squares polynomial preconditioners. The latter, which are not well suited to scalar machines, appear particularly attractive from a conceptual viewpoint on vector/parallel architectures. The results obtained with 12 arbitrarily sparse finite element problems of size growing up to almost 5000 show, surprisingly, that simple diagonal scaling is the most efficient preconditioning scheme in the majority of applications. In the few cases where it is not, its performance is nevertheless comparable with that of the incomplete Cholesky factorization. Contrary to general expectation, the polynomial preconditioners exhibit poor performance which does not improve with the degree, and they never appear competitive with the two more traditional preconditioners.
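To illustrate the diagonal-scaling scheme the abstract refers to, here is a minimal sketch of a conjugate gradient solver preconditioned by diagonal (Jacobi) scaling. This is not the authors' code: the function name `pcg_diag`, the dense-matrix representation, and the tolerance defaults are all illustrative assumptions; the paper itself concerns large sparse systems on a vector/parallel machine.

```python
import numpy as np

def pcg_diag(A, b, tol=1e-8, max_iter=1000):
    """Conjugate gradients preconditioned by diagonal (Jacobi) scaling.

    Illustrative sketch only: A is a dense symmetric positive definite
    matrix here for simplicity; b must be a float array.
    """
    M_inv = 1.0 / np.diag(A)           # inverse of diag(A): the preconditioner
    x = np.zeros_like(b)
    r = b - A @ x                      # initial residual
    z = M_inv * r                      # preconditioned residual
    p = z.copy()                       # initial search direction
    rz = r @ z
    for k in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)          # step length
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            return x, k + 1            # converged
        z = M_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p      # update search direction
        rz = rz_new
    return x, max_iter
```

The appeal of this preconditioner on vector hardware is visible in the sketch: applying it is a single elementwise multiply, fully vectorizable, whereas an incomplete Cholesky factor requires sequential triangular solves.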
Use this identifier to cite or link to this document: https://hdl.handle.net/11577/2494796
Citations
  • Scopus: 35
  • Web of Science: 32