E-book: Krylov Subspace Methods: Principles and Analysis

Jörg Liesen (Professor of Numerical Mathematics, Technical University of Berlin), Zdenek Strakos (Professor of Mathematics, Charles University, Prague)
  • Format: PDF+DRM
  • Price: 64,84 €*
  • * the price is final, i.e. no further discounts apply
  • This e-book is intended for personal use only. E-books cannot be returned.
  • For libraries
    • Oxford Scholarship Online e-books

DRM restrictions

  • Copying (copy/paste):

    not allowed

  • Printing:

    not allowed

  • Usage:

    Digital rights management (DRM)
    The publisher has issued this e-book in encrypted form, which means that reading it requires installing special software. You also need to create an Adobe ID (more information here). The e-book can be read by 1 user and downloaded to up to 6 devices (all authorised with the same Adobe ID).

    Required software
    To read on a mobile device (phone or tablet), install this free app: PocketBook Reader (iOS / Android)

    To read on a PC or Mac, install Adobe Digital Editions. (This is a free application designed specifically for reading e-books. It should not be confused with Adobe Reader, which is most likely already installed on your computer.)

    This e-book cannot be read on an Amazon Kindle.

The mathematical theory of Krylov subspace methods with a focus on solving systems of linear algebraic equations is given a detailed treatment in this principles-based book. Starting from the idea of projections, Krylov subspace methods are characterised by their orthogonality and minimisation properties. Projections onto highly nonlinear Krylov subspaces can be linked with the underlying problem of moments, and therefore Krylov subspace methods can be viewed as matching moments model reduction. This allows enlightening reformulations of questions from matrix computations into the language of orthogonal polynomials, Gauss-Christoffel quadrature, continued fractions, and, more generally, of Vorobyev's method of moments. Using the concept of cyclic invariant subspaces, conditions are studied that allow the generation of orthogonal Krylov subspace bases via short recurrences. The results motivate the important practical distinction between Hermitian and non-Hermitian problems. Finally, the book thoroughly addresses the computational cost of using Krylov subspace methods. The investigation includes the effects of finite precision arithmetic and focuses on the conjugate gradient (CG) and generalised minimal residual (GMRES) methods as major examples.

There is an emphasis on the way algebraic computations must always be considered in the context of solving real-world problems, where the mathematical modelling, discretisation and computation cannot be separated from each other. The book also underlines the importance of the historical context and demonstrates that knowledge of early developments can play an important role in understanding and resolving very recent computational problems. Many extensive historical notes are included as an inherent part of the text as well as the formulation of some omitted issues and challenges which need to be addressed in future work.

This book is suitable for a wide variety of graduate courses on Krylov subspace methods and related subjects, and it will also benefit readers interested in the history of mathematics.
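As a point of reference for the description above, the following is a minimal sketch, in Python with NumPy, of the conjugate gradient iteration for a Hermitian positive definite system Ax = b. It is not taken from the book; the test matrix, tolerance, and iteration limit below are illustrative assumptions only.

import numpy as np

def conjugate_gradient(A, b, x0=None, tol=1e-10, maxiter=None):
    """Minimal CG sketch for Hermitian positive definite A (illustrative only;
    a practical implementation would add preconditioning and more careful
    stopping criteria)."""
    n = b.shape[0]
    x = np.zeros(n) if x0 is None else x0.copy()
    r = b - A @ x              # initial residual
    p = r.copy()               # initial search direction
    rr = r @ r
    maxiter = n if maxiter is None else maxiter
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rr / (p @ Ap)  # step length from the orthogonality (Galerkin) condition
        x += alpha * p
        r -= alpha * Ap
        rr_new = r @ r
        if np.sqrt(rr_new) <= tol * np.linalg.norm(b):
            break
        beta = rr_new / rr     # short recurrence: new direction is A-conjugate to the old one
        p = r + beta * p
        rr = rr_new
    return x

# Illustrative example: a symmetric positive definite tridiagonal (1D Poisson-like) system
n = 50
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = conjugate_gradient(A, b)
print(np.linalg.norm(b - A @ x))

The two coupled short recurrences for the direction and residual vectors are exactly the kind of structure whose existence, convergence, and finite precision behaviour the book analyses.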
1 Introduction 1(11)
1.1 Solving Real-world Problems 2(5)
1.2 A Short Recollection of Ideas 7(3)
1.3 Specification of the Subject and Basic Notation 10(2)
2 Krylov Subspace Methods 12(59)
2.1 Projection Processes 12(7)
2.2 How Krylov Subspaces Come into Play 19(3)
2.3 Mathematical Characterisation of Some Krylov Subspace Methods 22(4)
2.4 The Arnoldi and Lanczos Algorithms 26(10)
2.4.1 Arnoldi and Hermitian Lanczos 26(5)
2.4.2 Non-Hermitian Lanczos 31(2)
2.4.3 Historical Note: The Gram-Schmidt Method 33(3)
2.5 Derivation of Some Krylov Subspace Methods 36(33)
2.5.1 Derivation of CG Using the Hermitian Lanczos Algorithm 36(6)
2.5.2 CG and the Galerkin Finite Element Method 42(5)
2.5.3 CG and the Minimisation of a Quadratic Functional 47(7)
2.5.4 Hermitian Indefinite Matrices and the SYMMLQ Method 54(3)
2.5.5 Minimising the Residual Norm: MINRES and GMRES 57(4)
2.5.6 Further Krylov Subspace Methods 61(3)
2.5.7 Historical Note: A. N. Krylov and the Early History of Krylov Subspace Methods 64(5)
2.6 Summary and Outlook: Linear Projections and Nonlinear Behaviour 69(2)
3 Matching Moments and Model Reduction View 71(97)
3.1 Stieltjes' Moment Problem 73(3)
3.2 Model Reduction via Orthogonal Polynomials 76(13)
3.2.1 Derivation of the Gauss-Christoffel Quadrature 77(7)
3.2.2 Moment Matching and Convergence Properties of the Gauss-Christoffel Quadrature 84(3)
3.2.3 Historical Note: Gauss' Fundamental Idea, an Early Application, and Later Developments 87(2)
3.3 Orthogonal Polynomials and Continued Fractions 89(19)
3.3.1 Three-term Recurrence and the Interlacing Property 89(7)
3.3.2 Continued Fractions 96(5)
3.3.3 The Gauss-Christoffel Quadrature for Analytic Functions 101(2)
3.3.4 Summary of the Previous Mathematical Development 103(1)
3.3.5 Historical Note: Chebyshev, Markov, Stieltjes, and the Moment Problem 104(3)
3.3.6 Historical Note: Orthogonal Polynomials and Three-term Recurrences 107(1)
3.4 Jacobi Matrices 108(28)
3.4.1 Algebraic Properties of Jacobi Matrices 109(12)
3.4.2 The Persistence Theorem, Stabilisation of Nodes and Weights in the Gauss-Christoffel Quadrature 121(9)
3.4.3 Historical Note: The Origin and Early Applications of Jacobi Matrices 130(6)
3.5 Model Reduction via Matrices: Hermitian Lanczos and CG 136(6)
3.6 Factorisation of the Matrix of Moments 142(3)
3.7 Vorobyev's Moment Problem 145(11)
3.7.1 Application to the Hermitian Lanczos Algorithm and the CG Method 146(2)
3.7.2 Application to the Non-Hermitian Lanczos Algorithm 148(3)
3.7.3 Application to the Arnoldi Algorithm 151(2)
3.7.4 Matching Moments and Generalisations of the Gauss-Christoffel Quadrature 153(3)
3.8 Matching Moments and Projection Processes 156(4)
3.9 Model Reduction of Large-scale Dynamical Systems 160(8)
3.9.1 Approximation of the Transfer Function 161(4)
3.9.2 Estimates in Quadratic Forms 165(3)
4 Short Recurrences for Generating Orthogonal Krylov Subspace Bases 168(59)
4.1 The Existence of Conjugate Gradient Like Descent Methods 169(3)
4.2 Cyclic Subspaces and the Jordan Canonical Form 172(17)
4.2.1 Invariant Subspaces and the Cyclic Decomposition 173(12)
4.2.2 The Jordan Canonical Form and the Length of the Krylov Sequences 185(3)
4.2.3 Historical Note: Classical Results of Linear Algebra 188(1)
4.3 Optimal Short Recurrences 189(4)
4.4 Sufficient Conditions 193(3)
4.5 Necessary Conditions 196(8)
4.6 Matrix Formulation and Equivalent Characterisations 204(5)
4.7 Short Recurrences and the Number of Distinct Eigenvalues 209(2)
4.8 Other Types of Recurrences 211(11)
4.8.1 The Isometric Arnoldi Algorithm 211(4)
4.8.2 (s + 2, t)-term Recurrences 215(4)
4.8.3 Generalised Krylov Subspaces 219(3)
4.9 Remarks on Functional Analytic Representations 222(5)
5 Cost of Computations Using Krylov Subspace Methods 227(122)
5.1 Seeing the Context is Essential 228(10)
5.2 Direct and Iterative Algebraic Computations 238(7)
5.2.1 Historical Note: Are the Lanczos Method and the CG Method Direct or Iterative Methods? 241(4)
5.3 Computational Cost of Individual Iterations 245(3)
5.4 Particular Computations and Complexity 248(2)
5.5 Closer Look at the Concept of Convergence 250(8)
5.5.1 Linear Stationary Iterative Methods 250(2)
5.5.2 Richardson Iteration and Chebyshev Semi-iteration 252(2)
5.5.3 Historical Note: Semi-iterative and Krylov Subspace Methods 254(3)
5.5.4 Krylov Subspace Methods: Nonlinear Methods for Linear Problems 257(1)
5.6 CG in Exact Arithmetic 258(27)
5.6.1 Expressions for the CG Errors and Their Norms 260(5)
5.6.2 Eigenvalue-based Convergence Results for CG 265(6)
5.6.3 Illustrations of the Convergence Behaviour of CG 271(4)
5.6.4 Outlying Eigenvalues and Superlinear Convergence 275(5)
5.6.5 Clustered Eigenvalues and the Sensitivity of the Gauss-Christoffel Quadrature 280(5)
5.7 GMRES in Exact Arithmetic 285(23)
5.7.1 Ritz Values and Harmonic Ritz Values 286(3)
5.7.2 Convergence Descriptions Based on Spectral Information 289(5)
5.7.3 More General Approaches 294(3)
5.7.4 Any Nonincreasing Convergence Curve is Possible with Any Eigenvalues 297(6)
5.7.5 Convection-diffusion Example 303(3)
5.7.6 Asymptotic Estimates for the GMRES Convergence 306(2)
5.8 Rounding Errors and Backward Stability 308(12)
5.8.1 Rounding Errors in Direct Computations 311(4)
5.8.2 Historical Note: Wilkinson and the Backward Error Concept 315(1)
5.8.3 The Backward Error Concept in Iterative Computations 316(4)
5.9 Rounding Errors in the CG Method 320(18)
5.9.1 Delay of Convergence 320(6)
5.9.2 Delay of Convergence can Invalidate Composite Convergence Bounds 326(2)
5.9.3 Maximal Attainable Accuracy 328(3)
5.9.4 Back to the Poisson Model Problem 331(7)
5.10 Rounding Errors in the GMRES Method 338(7)
5.10.1 The Choice of the Basis Affects Numerical Stability 338(2)
5.10.2 Does the Orthogonalisation Algorithm Matter? 340(3)
5.10.3 MGS GMRES is Backward Stable 343(2)
5.11 Omitted Issues and an Outlook 345(4)
References 349(36)
Index of Historical Personalities 385(3)
Index of Technical Terms 388
Jörg Liesen is a professor of Numerical Mathematics at the TU Berlin, Germany. He received his Ph.D. in Mathematics from the University of Bielefeld under the supervision of Ludwig Elsner, and the Habilitation in Mathematics at the TU Berlin. During his professional career he spent two years at the University of Illinois at Urbana-Champaign, USA. His research interests in numerical analysis include the convergence and stability analysis of iterative methods, and the theory and computation of matrix functions. He is also interested in the history of mathematics, in particular of linear algebra. He is the recipient of several prizes and awards, including the Householder Award in 1999, the Emmy Noether Fellowship of the DFG, and the Heisenberg Professorship of the DFG.

Zdenek Strakos is a professor of Mathematics at Charles University in Prague, Czech Republic. He received his Ph.D. in Computer Science from the Czechoslovak Academy of Sciences, and D.Sc. in Mathematics from the Academy of Sciences of the Czech Republic. During his professional career he spent one year at the IMA, University of Minnesota, and three years at Emory University, Atlanta. He likes to look for interconnections between problems and disciplines and to view particular questions in their wide context. He is an active member of professional committees and boards, such as the Householder Committee, the EMS Applied Mathematics Committee, and the ERC AdG Evaluation Panel for Computer Science and Informatics. In 1994 he was awarded the SIAM Activity Group on Linear Algebra Prize and in 2007 the Annual Prize of the Academy of Sciences of the Czech Republic.