
E-book: Direct Methods for Sparse Matrices

J. K. Reid (Rutherford Appleton Laboratory and Cranfield University), A. M. Erisman (The Boeing Company, Seattle (retired), and Seattle Pacific University), I. S. Duff (Rutherford Appleton Laboratory; CERFACS, Toulouse, France; and Strathclyde University)
  • Format: PDF+DRM
  • Price: 97.83 €*
  • * the price is final, i.e. no further discounts apply
  • This e-book is intended for personal use only. E-books cannot be returned.

DRM restrictions

  • Copying (copy/paste):

    not allowed

  • Printing:

    not allowed

  • Usage:

    Digital rights management (DRM)
    The publisher has issued this e-book in encrypted form, which means that you must install special software to read it. You will also need to create an Adobe ID (more information here). The e-book can be read by 1 user and downloaded to up to 6 devices (all authorized with the same Adobe ID).

    Required software
    To read on a mobile device (phone or tablet), you must install the free PocketBook Reader app (iOS / Android).

    To read on a PC or Mac, you must install Adobe Digital Editions. (This is a free application designed specifically for reading e-books. It should not be confused with Adobe Reader, which is probably already installed on your computer.)

    This e-book cannot be read on an Amazon Kindle.

The subject of sparse matrices has its roots in such diverse fields as management science, power systems analysis, surveying, circuit theory, and structural analysis. Efficient use of sparsity is a key to solving large problems in many fields.

This second edition is a complete rewrite of the first edition, published 30 years ago. Much has changed since then. Problems have grown greatly in size and complexity: nearly all examples in the first edition were of order less than 5,000, whereas examples in the second edition are often of order more than a million. Computer architectures are now much more complex, requiring new ways of adapting algorithms to parallel environments with memory hierarchies. Because the area is so important to all of computational science and engineering, a huge amount of research has been done in the last 30 years, some of it by the authors themselves. This new research is integrated into the text, with a clear explanation of the underlying mathematics and algorithms.

New research that is described includes new techniques for scaling and error control, new orderings, new combinatorial techniques for partitioning both symmetric and unsymmetric problems, and a detailed description of the multifrontal approach to solving systems, pioneered by the authors and their colleagues. This includes a discussion of techniques for exploiting parallel architectures and new work on indefinite and unsymmetric systems.

Reviews

"This is a thorough and thoughtful revision of a classic text, bringing it up to date with respect to subsequent developments in the field whilst retaining the qualities of the first edition." * James Andrew J. Hall, Mathematical Reviews *

"[T]his book is a well written and authoritative reference that should be of interest to anyone involved in the implementation of sparse LU factorization software." * Brian Borchers, MAA Reviews *

Contents

1 Introduction
1.1 Introduction
1.2 Graph theory
1.3 Example of a sparse matrix
1.4 Modern computer architectures
1.5 Computational performance
1.6 Problem formulation
1.7 Sparse matrix test collections
2 Sparse matrices: storage schemes and simple operations
2.1 Introduction
2.2 Sparse vector storage
2.3 Inner product of two packed vectors
2.4 Adding packed vectors
2.5 Use of full-sized arrays
2.6 Coordinate scheme for storing sparse matrices
2.7 Sparse matrix as a collection of sparse vectors
2.8 Sherman's compressed index scheme
2.9 Linked lists
2.10 Sparse matrix in column-linked list
2.11 Sorting algorithms
2.11.1 The counting sort
2.11.2 Heap sort
2.12 Transforming the coordinate scheme to other forms
2.13 Access by rows and columns
2.14 Supervariables
2.15 Matrix by vector products
2.16 Matrix by matrix products
2.17 Permutation matrices
2.18 Clique (or finite-element) storage
2.19 Comparisons between sparse matrix structures
3 Gaussian elimination for dense matrices: the algebraic problem
3.1 Introduction
3.2 Solution of triangular systems
3.3 Gaussian elimination
3.4 Required row interchanges
3.5 Relationship with LU factorization
3.6 Dealing with interchanges
3.7 LU factorization of a rectangular matrix
3.8 Computational sequences, including blocking
3.9 Symmetric matrices
3.10 Multiple right-hand sides and inverses
3.11 Computational cost
3.12 Partitioned factorization
3.13 Solution of block triangular systems
4 Gaussian elimination for dense matrices: numerical considerations
4.1 Introduction
4.2 Computer arithmetic error
4.3 Algorithm instability
4.4 Controlling algorithm stability through pivoting
4.4.1 Partial pivoting
4.4.2 Threshold pivoting
4.4.3 Rook pivoting
4.4.4 Full pivoting
4.4.5 The choice of pivoting strategy
4.5 Orthogonal factorization
4.6 Partitioned factorization
4.7 Monitoring the stability
4.8 Special stability considerations
4.9 Solving indefinite symmetric systems
4.10 Ill-conditioning: introduction
4.11 Ill-conditioning: theoretical discussion
4.12 Ill-conditioning: automatic detection
4.12.1 The LINPACK condition estimator
4.12.2 Hager's method
4.13 Iterative refinement
4.14 Scaling
4.15 Automatic scaling
4.15.1 Scaling so that all entries are close to one
4.15.2 Scaling norms
4.15.3 I-matrix scaling
5 Gaussian elimination for sparse matrices: an introduction
5.1 Introduction
5.2 Numerical stability in sparse Gaussian elimination
5.2.1 Trade-offs between numerical stability and sparsity
5.2.2 Incorporating rook pivoting
5.2.3 2 × 2 pivoting
5.2.4 Other stability considerations
5.2.5 Estimating condition numbers in sparse computation
5.3 Orderings
5.3.1 Block triangular matrix
5.3.2 Local pivot strategies
5.3.3 Band and variable band ordering
5.3.4 Dissection
5.4 Features of a code for the solution of sparse equations
5.4.1 Input of data
5.4.2 The ANALYSE phase
5.4.3 The FACTORIZE phase
5.4.4 The SOLVE phase
5.4.5 Output of data and analysis of results
5.5 Relative work required by each phase
5.6 Multiple right-hand sides
5.7 Computation of entries of the inverse
5.8 Matrices with complex entries
5.9 Writing compared with using sparse matrix software
6 Reduction to block triangular form
6.1 Introduction
6.2 Finding the block triangular form in three stages
6.3 Looking for row and column singletons
6.4 Finding a transversal
6.4.1 Background
6.4.2 Transversal extension by depth-first search
6.4.3 Analysis of the depth-first search transversal algorithm
6.4.4 Implementation of the transversal algorithm
6.5 Symmetric permutations to block triangular form
6.5.1 Background
6.5.2 The algorithm of Sargent and Westerberg
6.5.3 Tarjan's algorithm
6.5.4 Implementation of Tarjan's algorithm
6.6 Essential uniqueness of the block triangular form
6.7 Experience with block triangular forms
6.8 Maximum transversals
6.9 Weighted matchings
6.10 The Dulmage–Mendelsohn decomposition
7 Local pivotal strategies for sparse matrices
7.1 Introduction
7.2 The Markowitz criterion
7.3 Minimum degree (Tinney scheme 2)
7.4 A priori column ordering
7.5 Simpler strategies
7.6 A more ambitious strategy: minimum fill-in
7.7 Effect of tie-breaking on the minimum degree algorithm
7.8 Numerical pivoting
7.9 Sparsity in the right-hand side and partial solution
7.10 Variability-type ordering
7.11 The symmetric indefinite case
8 Ordering sparse matrices for band solution
8.1 Introduction
8.2 Band and variable-band matrices
8.3 Small bandwidth and profile: Cuthill–McKee algorithm
8.4 Small bandwidth and profile: the starting node
8.5 Small bandwidth and profile: Sloan algorithm
8.6 Spectral ordering for small profile
8.7 Calculating the Fiedler vector
8.8 Hybrid orderings for small bandwidth and profile
8.9 Hager's exchange methods for profile reduction
8.10 Blocking the entries of a symmetric variable-band matrix
8.11 Refined quotient trees
8.12 Incorporating numerical pivoting
8.12.1 The fixed bandwidth case
8.12.2 The variable bandwidth case
8.13 Conclusion
9 Orderings based on dissection
9.1 Introduction
9.2 One-way dissection
9.2.1 Finding the dissection cuts for one-way dissection
9.3 Nested dissection
9.4 Introduction to finding dissection cuts
9.5 Multisection
9.6 Comparing nested dissection with minimum degree
9.7 Edge and vertex separators
9.8 Methods for obtaining dissection sets
9.8.1 Obtaining an initial separator set
9.8.2 Refining the separator set
9.9 Graph partitioning algorithms and software
9.10 Dissection techniques for unsymmetric systems
9.10.1 Background
9.10.2 Graphs for unsymmetric matrices
9.10.3 Ordering to singly bordered block diagonal form
9.10.4 The performance of the ordering
9.11 Some concluding remarks
10 Implementing Gaussian elimination without symbolic FACTORIZE
10.1 Introduction
10.2 Markowitz ANALYSE
10.3 FACTORIZE without pivoting
10.4 FACTORIZE with pivoting
10.5 SOLVE
10.6 Hyper-sparsity and linear programming
10.7 Switching to full form
10.8 Loop-free code
10.9 Interpretative code
10.10 The use of drop tolerances to preserve sparsity
10.11 Exploitation of parallelism
10.11.1 Various parallelization opportunities
10.11.2 Parallelizing the local ordering and sparse factorization steps
11 Implementing Gaussian elimination with symbolic FACTORIZE
11.1 Introduction
11.2 Minimum degree ordering
11.3 Approximate minimum degree ordering
11.4 Dissection orderings
11.5 Numerical FACTORIZE using static data structures
11.6 Numerical pivoting within static data structures
11.7 Band methods
11.8 Variable-band (profile) methods
11.9 Frontal methods: introduction
11.10 Frontal methods: SPD finite-element problems
11.11 Frontal methods: general finite-element problems
11.12 Frontal methods for non-element problems
11.13 Exploitation of parallelism
12 Gaussian elimination using trees
12.1 Introduction
12.2 Multifrontal methods for finite-element problems
12.3 Elimination and assembly trees
12.3.1 The elimination tree
12.3.2 Using the assembly tree for factorization
12.4 The efficient generation of elimination trees
12.5 Constructing the sparsity pattern of U
12.6 The patterns of data movement
12.7 Manipulations on assembly trees
12.7.1 Ordering of children
12.7.2 Tree rotations
12.7.3 Node amalgamation
12.8 Multifrontal methods: symmetric indefinite problems
13 Graphs for symmetric and unsymmetric matrices
13.1 Introduction
13.2 Symbolic analysis on unsymmetric systems
13.3 Numerical pivoting using dynamic data structures
13.4 Static pivoting
13.5 Scaling and reordering
13.5.1 The aims of scaling
13.5.2 Scaling and reordering a symmetric matrix
13.5.3 The effect of scaling
13.5.4 Discussion of scaling strategies
13.6 Supernodal techniques using assembly trees
13.7 Directed acyclic graphs
13.8 Parallel issues
13.9 Parallel factorization
13.9.1 Parallelization levels
13.9.2 The balance between tree and node parallelism
13.9.3 Use of memory
13.9.4 Static and dynamic mapping
13.9.5 Static mapping and scheduling
13.9.6 Dynamic scheduling
13.9.7 Codes for shared and distributed memory computers
13.10 The use of low-rank matrices in the factorization
13.11 Using rectangular frontal matrices with local pivoting
13.12 Rectangular frontal matrices with structural pivoting
13.13 Trees for unsymmetric matrices
14 The SOLVE phase
14.1 Introduction
14.2 SOLVE at the node level
14.3 Use of the tree by the SOLVE phase
14.4 Sparse right-hand sides
14.5 Multiple right-hand sides
14.6 Computation of null-space basis
14.7 Parallelization of SOLVE
14.7.1 Parallelization of dense solve
14.7.2 Order of access to the tree nodes
14.7.3 Experimental results
15 Other sparsity-oriented issues
15.1 Introduction
15.2 The matrix modification formula
15.2.1 The basic formula
15.2.2 The stability of the matrix modification formula
15.3 Applications of the matrix modification formula
15.3.1 Application to stability corrections
15.3.2 Building a large problem from subproblems
15.3.3 Comparison with partitioning
15.3.4 Application to sensitivity analysis
15.4 The model and the matrix
15.4.1 Model reduction
15.4.2 Model reduction with a regular submodel
15.5 Sparsity constrained backward error analysis
15.6 Why the inverse of a sparse irreducible matrix is dense
15.7 Computing entries of the inverse of a sparse matrix
15.8 Sparsity in nonlinear computations
15.9 Estimating a sparse Jacobian matrix
15.10 Updating a sparse Hessian matrix
15.11 Approximating a sparse matrix by a positive-definite one
15.12 Solution methods based on orthogonalization
15.13 Hybrid methods
15.13.1 Domain decomposition
15.13.2 Block iterative methods
A Matrix and vector norms
B Pictures of sparse matrices
C Solutions to selected exercises
References
Author Index
Subject Index
I. S. (Iain) Duff is an STFC Senior Fellow in the Scientific Computing Department at the STFC Rutherford Appleton Laboratory in Oxfordshire, England. He is also the Scientific Advisor for the Parallel Algorithms Group at CERFACS in Toulouse and is a Visiting Professor of Mathematics at the University of Strathclyde.

J. K. (John) Reid is an Honorary Scientist at the STFC Rutherford Appleton Laboratory in Oxfordshire, England. He is also a Visiting Professor at the Shrivenham Campus of Cranfield University and is Convener of the ISO/IEC Fortran Committee.

A. M. (Al) Erisman is the Executive in Residence in the School of Business, Government, and Economics at Seattle Pacific University and is executive editor of Ethix magazine, which he co-founded with a colleague in 1998. Over the past 15 years he has lectured on five continents in areas of business, technology, mathematics, ethics, faith, and economic development.