14. Minimizing Round-Off Error in Direct Solution of Large Sparse Systems of Linear Equations

M. SHACHAM¹


Downloaded by RUTGERS UNIV on December 30, 2017 | http://pubs.acs.org Publication Date: May 30, 1980 | doi: 10.1021/bk-1980-0124.ch014

Department of Chemical Engineering, Ben Gurion University of the Negev, Beer Sheva, Israel

Many engineering problems require the solution of large and sparse systems of linear equations. Typical examples are: modeling and simulation of staged, multicomponent processes, pipeline network design, and solution of electrical networks and partial differential equations. Among others, chemical engineers also show a growing interest in solution methods for sparse linear systems (Lin and Mah (6), Gabrielli and Spadoni (2)). A system of linear equations can be written in the form

    A x = b                                            (1)

where A is an n×n matrix, x ∈ R^n is the unknown vector and b ∈ R^n is a vector of constants. Sparse systems encountered in practical problems are characterized by a very large value of n (several hundreds), while the number of non-zero elements in matrix A is small, usually less than 5%. When solving such a system, advantage of its sparseness can be taken in two ways: 1. by storing only the nonzero elements of A, and 2. by performing arithmetical operations with these elements only.

Several storage schemes can be used for sparse matrices. The simplest is the "random packing" scheme, where A is stored in three vectors. One of them (a real array) contains the values of the non-zero elements; two integer vectors specify their row and column numbers in the corresponding locations. The advantage of this storage scheme is that new nonzero elements (such as the ones generated during the Gauss elimination process) can easily be added at the end of the list. Its major drawback is that scanning the matrix (both row-wise and column-wise) is quite complicated and requires extra computer time.
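The "random packing" scheme is essentially what is now called coordinate (COO) storage. A minimal Python sketch follows; the function names are illustrative, not from the original article:

```python
def pack_random(A):
    """Store only the nonzeros of a dense matrix A (list of lists):
    one real array of values, two integer arrays of row/column numbers."""
    vals, rows, cols = [], [], []
    for i, row in enumerate(A):
        for j, a in enumerate(row):
            if a != 0.0:
                vals.append(a)
                rows.append(i)
                cols.append(j)
    return vals, rows, cols

def add_fill_in(storage, i, j, value):
    """The scheme's advantage: a new nonzero (e.g. a fill-in generated
    during elimination) is simply appended at the end of the list."""
    vals, rows, cols = storage
    vals.append(value)
    rows.append(i)
    cols.append(j)

A = [[4.0, 0.0, 1.0],
     [0.0, 3.0, 0.0],
     [2.0, 0.0, 5.0]]
print(pack_random(A))  # ([4.0, 1.0, 3.0, 2.0, 5.0], [0, 0, 1, 2, 2], [0, 2, 1, 0, 2])
```

The drawback noted in the text is equally visible here: to scan a single row or column of A, every entry of the index vectors must be examined.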

¹ Current address: Department of Chemical Engineering, University of Connecticut, Storrs, Connecticut 06268.
0-8412-0549-3/80/47-124-269$05.00/0 © 1980 American Chemical Society
Squires and Reklaitis; Computer Applications to Chemical Engineering; ACS Symposium Series; American Chemical Society: Washington, DC, 1980.



The "systematic packing" scheme is more efficient for such operations. In this scheme, A is stored in two vectors. A real vector contains, for each row, the row number followed by the values of the nonzero elements in that row, ordered by increasing column number. An integer vector contains the column numbers, with dummy elements to indicate the locations of the row numbers in the first vector. Matrices of special structure, like symmetric or band-type matrices, can be stored even more effectively. In some cases, the matrix A need not be stored at all, and the elements of a single row or column can be generated during the computation (see the example at the end of this article).

The solution of a sparse system of equations can be carried out in three stages: 1. partitioning, 2. reordering or "tearing", and 3. numerical solution. Stages 1 and 2 contain only logical operations, and their objective is to obtain a system which can be solved faster and/or with smaller propagated round-off error. These stages are much more time-consuming than stage 3. (The ratio of the computer time required for stages 1 and 2 to the time required for stage 3 can be up to two orders of magnitude (7).) For this reason, it is recommended to carry out stages 1 and 2 only for linear systems that have to be solved several times with different numerical values. Partitioning means separation of the matrix into smaller blocks which may be considered consecutively rather than considering the whole matrix simultaneously. The exact details of the reordering or tearing stage depend on the numerical solution method used, so these stages will be discussed together.

Direct or indirect methods are used for the numerical solution of Eq. (1). A direct method involves a fixed number of operations, whereas indirect or iterative methods repeat certain steps until the required accuracy is achieved.
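The "systematic packing" scheme described above can be sketched as follows. This is an assumption-laden reading of the text: the dummy marker value (-1) and the function names are mine, not the article's.

```python
def pack_systematic(A):
    """Real vector: for each row, the row number followed by that row's
    nonzero values in order of increasing column number.  Integer vector:
    the column numbers, with a dummy (-1) flagging row-number positions."""
    real, cols = [], []
    for i, row in enumerate(A):
        real.append(float(i))  # row marker
        cols.append(-1)        # dummy element marks the marker's location
        for j, a in enumerate(row):
            if a != 0.0:
                real.append(a)
                cols.append(j)
    return real, cols

def row_scan(real, cols, i):
    """Row-wise scanning is simple here: find the marker for row i, then
    read (column, value) pairs until the next marker."""
    out, active = [], False
    for v, c in zip(real, cols):
        if c == -1:
            active = (v == float(i))
        elif active:
            out.append((c, v))
    return out

real, cols = pack_systematic([[4.0, 0.0, 1.0],
                              [0.0, 3.0, 0.0]])
print(row_scan(real, cols, 0))  # [(0, 4.0), (2, 1.0)]
```

Row-wise scanning reads one contiguous run of the vectors, which is why this scheme suits methods that never need column-wise access.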
All of the direct methods are variants of Gauss or Gauss-Jordan elimination. The disadvantages of the direct methods for a general non-band-type sparse system are as follows: 1. During the elimination, the structure of matrix A changes, since new nonzero elements ("fill-ins") are generated. The generation of fill-ins increases storage and computation time requirements. 2. Both row-wise and column-wise scanning of the matrix has to be carried out, which makes it necessary to use a larger and more complicated storage scheme. The amount of fill-in can be minimized by reordering the equations (in stage 2); one of the more recent reordering algorithms has been published by Lin and Mah (6). Reordering may often be dangerous, however, since it may prevent the "pivoting" which is necessary for reducing round-off error propagation. Iterative methods (like Gauss-Seidel, successive over-relaxation and conjugate gradient) have often been preferred to the
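The fill-in problem can be made concrete by counting positions that are zero in A but become nonzero during the elimination. The matrix below is made up: a badly ordered "arrowhead" pattern, with dense arithmetic and no pivoting used purely for illustration.

```python
def count_fill_ins(A):
    """Gauss elimination (no pivoting) on a copy of A, counting entries
    that start as zero and become nonzero ("fill-ins")."""
    U = [row[:] for row in A]
    n, fill = len(U), 0
    for k in range(n):
        for i in range(k + 1, n):
            if U[i][k] != 0.0:
                m = U[i][k] / U[k][k]
                for j in range(k, n):
                    new = U[i][j] - m * U[k][j]
                    if U[i][j] == 0.0 and new != 0.0:
                        fill += 1
                    U[i][j] = new
    return fill

# Eliminating the dense first row of this arrowhead pattern fills in
# every off-diagonal zero: 6 fill-ins in a 4x4 matrix.
A = [[1.0, 1.0, 1.0, 1.0],
     [1.0, 2.0, 0.0, 0.0],
     [1.0, 0.0, 3.0, 0.0],
     [1.0, 0.0, 0.0, 4.0]]
print(count_fill_ins(A))  # 6
```

Reversing the ordering, so that the dense row and column come last, produces no fill-in at all for this pattern, which is exactly what the stage-2 reordering aims for.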


direct method for solving sparse sets of equations for the following reasons: 1. The matrix A remains unchanged during the computation. 2. Only row-wise scanning of the matrix is required. These two properties lead to much simpler computer codes and a reduction of the overhead time which is necessary for index manipulation. Iterative methods, however, cannot be used for all types of problems: unless the matrix A has some special properties, the convergence may be slow or unattainable.

The concept of "tearing" has been developed in connection with the iterative methods. First an output set for the system of equations is chosen. Then one or more tearing variables are selected. These variables are the iterates that need to be chosen to obtain a solution of the system. The number of tearing variables is usually much smaller than the number of equations. An accepted criterion for selecting tearing variables is the minimum number of such variables which will make it possible to solve the whole system. The ordered set of equations that results is then solved using an iterative method. The concepts of partitioning and tearing have been discussed recently by Bunch and Rose (1) and Kevorkian (3).

Shacham and Kehat (8) introduced a direct method of solution which shares most of the advantages of both the direct and iterative methods. It has been found, however, that this method is quite sensitive to round-off error propagation. This article presents methods both for minimizing round-off error and for iterative refinement of the solution.

Shacham and Kehat's Direct Method

It is assumed that the original system of equations has already been partitioned into the smallest possible blocks and that Eq. (1) defines one of these blocks. The linear system given in Eq. (1) is permuted so as to arrive at the following equivalent system:



    [ S    T  ] [x_2]   [b_1]
    [ F_2  F_1] [x_1] = [b_2]                          (2)

where x_1 ∈ R^m and x_2 ∈ R^(n-m) are subvectors of x; b_1 ∈ R^(n-m) and b_2 ∈ R^m are subvectors of b; S is an (n-m)×(n-m) lower triangular submatrix of A with non-zero main diagonal; and T, F_1 and F_2 are (n-m)×m, m×m and m×(n-m) submatrices of A, respectively. Let us rewrite Eq. (2) and define f(x_1):

    x_2 = S^(-1) (b_1 - T x_1)                         (3a)

    f(x_1) = F_1 x_1 + F_2 x_2 - b_2                   (3b)

where f(x_1) is a vector of m functions.
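As a quick numerical check of this block structure, the sketch below builds a tiny partitioned system (n = 3, m = 1) from made-up blocks and verifies that Eq. (3a) recovers x_2 by forward substitution and that f(x_1) in Eq. (3b) vanishes at the true tearing variable:

```python
def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

# Blocks of the permuted matrix: S is 2x2 lower triangular, T is 2x1,
# F2 is 1x2, F1 is 1x1.  All numbers are illustrative.
S  = [[2.0, 0.0], [1.0, 3.0]]
T  = [[1.0], [1.0]]
F2 = [[1.0, 1.0]]
F1 = [[4.0]]

x2, x1 = [1.0, 2.0], [3.0]          # a chosen exact solution
b1 = matvec(S, x2)                  # b1 = S x2 + T x1
b1 = [b1[i] + matvec(T, x1)[i] for i in range(2)]
b2 = [matvec(F2, x2)[0] + matvec(F1, x1)[0]]

# Eq. (3a): x2 recovered from x1 by forward substitution.
y0 = (b1[0] - T[0][0] * x1[0]) / S[0][0]
y1 = (b1[1] - T[1][0] * x1[0] - S[1][0] * y0) / S[1][1]
print([y0, y1])                     # [1.0, 2.0]

# Eq. (3b): f(x1) vanishes at the true tearing variables.
f = F1[0][0] * x1[0] + F2[0][0] * y0 + F2[0][1] * y1 - b2[0]
print(f)                            # 0.0
```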


The system consisting of Eqs. (3a) and (3b) can be solved as follows:

1. Assume an initial value x_1^0 for the tear variables;
2. Compute x_2^0 from Eq. (3a) (note that this can be done by substitution, since S is lower triangular);
3. Compute f(x_1^0) from Eq. (3b);
4. Compute ∇f(x_1^0), where ∇f(x_1^0) is the m×m matrix of partial derivatives of f(x_1) with respect to x_1 (the method of calculating this matrix is given below);
5. Use Newton iteration to compute x_1^1:

    x_1^1 = x_1^0 - [∇f(x_1^0)]^(-1) f(x_1^0)          (4)

6. Compute x_2^1 from x_2^1 = S^(-1) (b_1 - T x_1^1);
7. The vector (x_1^1, x_2^1) is the solution of system (1).

The matrix of partial derivatives is calculated numerically, column by column, by setting b = 0 and x_1 = x_2 = ... = x_(j-1) = x_(j+1) = ... = x_m = 0, x_j ≠ 0, calculating f(x_1) using equations (3a) and (3b), and then computing the derivatives from Eq. (5):

    ∂f_i/∂x_j = f_i(x_1) / x_j                         (5)

The use of this algorithm requires (m+1) evaluations of the system (3a) and (3b) and the solution of one m×m linear system of equations (in Eq. (4)). The storage requirements are mainly the sparse storage for the matrix A and storage for the m×m dense matrix of derivatives. The m variables of x_1 can be regarded as tearing variables, and clearly the smaller m is, the more effective the algorithm. Since computer programs for selection of minimal sets are readily available (Ledet and Himmelblau (5)), this problem will not be discussed.

Shacham and Kehat's direct method has several advantages over similar methods. The matrix A remains unchanged during the solution process and only row-wise scanning operations have to be carried out. The "systematic packing" storage scheme can be used, and the amount of index manipulation is reduced to a minimum. The problem of "fill-ins" does not exist, since newly generated elements are included either in the matrix ∇f(x_1) or in the vector f(x_1). An example which demonstrates the use and the advantages of this method for a small 6×6 system is given in the Appendix. Shacham and Kehat (8) showed substantial savings in computer time when using this method. It has been noticed, however, that for certain systems round-off error propagation may prevent achievement of correct results. In the next section, the cause of error propagation is investigated.
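Steps 1-7 can be sketched end to end. The blocks and numbers below are made up for illustration (n = 4, m = 2); the Jacobian columns are obtained through the b = 0 device of Eq. (5), and since the system is linear, a single Newton step from x_1^0 = 0 yields the exact solution. The 2×2 Newton system is solved by Cramer's rule purely for brevity.

```python
def forward_sub(S, rhs):
    """Solve S y = rhs for lower-triangular S with non-zero diagonal
    (the substitution of Eq. (3a))."""
    y = []
    for i in range(len(rhs)):
        y.append((rhs[i] - sum(S[i][j] * y[j] for j in range(i))) / S[i][i])
    return y

def matvec(M, v):
    return [sum(row[j] * v[j] for j in range(len(v))) for row in M]

def evaluate(x1, S, T, F1, F2, b1, b2):
    """Eqs. (3a) and (3b): x2 = S^-1 (b1 - T x1), f = F1 x1 + F2 x2 - b2."""
    Tx1 = matvec(T, x1)
    x2 = forward_sub(S, [b1[i] - Tx1[i] for i in range(len(b1))])
    F1x1, F2x2 = matvec(F1, x1), matvec(F2, x2)
    return x2, [F1x1[i] + F2x2[i] - b2[i] for i in range(len(b2))]

def tearing_solve(S, T, F1, F2, b1, b2):
    m, nm = len(b2), len(b1)
    # Steps 1-3: start from x1^0 = 0 and evaluate f(x1^0).
    _, f0 = evaluate([0.0] * m, S, T, F1, F2, b1, b2)
    # Step 4: Jacobian column by column.  With b = 0 and only x_j nonzero,
    # f is homogeneous in x1, so column j is f(e_j) / 1 (Eq. (5)).
    J = []
    for j in range(m):
        e = [1.0 if k == j else 0.0 for k in range(m)]
        _, col = evaluate(e, S, T, F1, F2, [0.0] * nm, [0.0] * m)
        J.append(col)                       # J[j] holds column j
    # Step 5: one Newton step (exact here, since f is linear in x1).
    # Solve the 2x2 system (grad f) d = f0 by Cramer's rule.
    a, b = J[0][0], J[1][0]
    c, d = J[0][1], J[1][1]
    det = a * d - b * c
    step = [(f0[0] * d - b * f0[1]) / det, (a * f0[1] - c * f0[0]) / det]
    x1 = [-step[0], -step[1]]
    # Step 6: recover x2 from the corrected tearing variables.
    x2, _ = evaluate(x1, S, T, F1, F2, b1, b2)
    return x1, x2                           # step 7: (x1, x2) solves (1)

S  = [[2.0, 0.0], [1.0, 2.0]]   # lower triangular, non-zero diagonal
T  = [[1.0, 0.0], [0.0, 1.0]]
F2 = [[1.0, 0.0], [0.0, 1.0]]
F1 = [[3.0, 1.0], [1.0, 3.0]]
b1, b2 = [3.0, 4.0], [5.0, 5.0]
x1, x2 = tearing_solve(S, T, F1, F2, b1, b2)
print(x1, x2)  # [1.0, 1.0] [1.0, 1.0]
```

Note that the only dense object is the m×m Jacobian; A itself is touched only through row-wise products and forward substitution, which is the point of the method.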



Error Analysis

For practical problems, m << n, and it can be assumed that error propagation occurs mainly when equation (3a) is used for calculating x_2. Let us denote by δx_1 the error in x_1; then equation (3a) can be used to obtain an error bound for x_2:

    ‖δx_2‖ ≤ ‖S^(-1) T‖ ‖δx_1‖                         (6)

where ‖ ‖ indicates some type of matrix norm. Equation (6) shows that the larger the elements of S^(-1) T, the bigger the error in the calculated value of x_2. Since S is a triangular matrix, its inverse can be generated using recursive formulas (see p. 243 in (4)). In these formulas, the diagonal elements of S are divisors and the off-diagonal elements are multipliers. Obviously, both the magnitude of the elements in S^(-1) T and the round-off error propagation can be minimized by bringing the elements of largest absolute value into the main diagonal of the S matrix. The round-off error in f(x_1) can be calculated using Eq. (3b):

    δf = F_1 δx_1 + F_2 δx_2                           (7)

where δf is the error in f(x_1). Substituting δx_2 into Eq. (7) gives

    δf = (F_1 - F_2 S^(-1) T) δx_1                     (8)

We have found, in all the practical cases, that when serious error propagation occurs the term F_1 is much smaller than F_2 S^(-1) T in Eq. (8). Thus a good estimate of δf is obtained from: