
Matrix Algebra: Theory, Computations and Applications in Statistics, Second Edition, 2017 [Paperback]

  • Format: Paperback / softback, XXIX + 648 pages, height x width: 254x178 mm, weight: 1274 g, 40 black-and-white illustrations
  • Series: Springer Texts in Statistics
  • Publication date: 21 Oct 2017
  • Publisher: Springer International Publishing AG
  • ISBN-10: 3319648667
  • ISBN-13: 9783319648668
This textbook for graduate and advanced undergraduate students presents the theory of matrix algebra for statistical applications, explores the various types of matrices encountered in statistics, and covers numerical linear algebra. Matrix algebra is one of the most important areas of mathematics in data science and statistical theory, and this second edition of a very popular textbook provides essential updates and comprehensive coverage of its critical topics.

Part I offers a self-contained description of relevant aspects of the theory of matrix algebra for applications in statistics. It begins with fundamental concepts of vectors and vector spaces; covers basic algebraic properties of matrices and analytic properties of vectors and matrices in multivariate calculus; and concludes with a discussion of operations on matrices in solutions of linear systems and in eigenanalysis. Part II considers various types of matrices encountered in statistics, such as projection matrices and positive definite matrices, describes the special properties of those matrices, and covers various applications of matrix theory in statistics, including linear models, multivariate analysis, and stochastic processes. Part III covers numerical linear algebra, one of the most important subjects in the field of statistical computing. It begins with a discussion of the basics of numerical computations and goes on to describe accurate and efficient algorithms for factoring matrices, solving linear systems of equations, and extracting eigenvalues and eigenvectors.
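To give a flavor of the eigenanalysis algorithms Part III treats (the power method appears in Sect. 7.2 of the contents below), here is a minimal sketch in plain Python. This is an illustration only, not code from the book, which works in Fortran, C, R, and Matlab; the function name and iteration count are choices made here.

```python
def power_method(A, x0, iters=100):
    """Estimate the dominant eigenvalue and eigenvector of A by the
    power method: repeatedly multiply by A and renormalize.

    A is a square matrix as a list of row lists; x0 is a starting vector
    with a nonzero component in the dominant eigenvector's direction.
    """
    x = x0[:]
    for _ in range(iters):
        y = [sum(a * xi for a, xi in zip(row, x)) for row in A]  # y = A x
        norm = max(abs(v) for v in y)   # infinity-norm normalization
        x = [v / norm for v in y]
    # Rayleigh-quotient estimate of the eigenvalue from the final iterate
    Ax = [sum(a * xi for a, xi in zip(row, x)) for row in A]
    lam = sum(a * b for a, b in zip(x, Ax)) / sum(v * v for v in x)
    return lam, x
```

For the symmetric matrix [[2, 1], [1, 2]], whose eigenvalues are 3 and 1, the iteration converges to the dominant eigenvalue 3 with eigenvector proportional to (1, 1).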



Although the book is not tied to any particular software system, it describes and gives examples of the use of modern computer software for numerical linear algebra. This part is essentially self-contained, although it assumes some ability to program in Fortran or C and/or to use R or Matlab.
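As an example of the direct solution methods described in Chap. 6, the following sketch solves a linear system by Gaussian elimination with partial pivoting. It is written in plain Python for self-containedness (the book itself uses Fortran, C, R, and Matlab), and is a teaching sketch rather than production code:

```python
def solve(A, b):
    """Solve Ax = b by Gaussian elimination with partial pivoting.

    A is a nonsingular square matrix as a list of row lists, b a list.
    Inputs are copied, not modified.
    """
    n = len(b)
    # Form the augmented matrix [A | b].
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for k in range(n):
        # Partial pivoting: swap the row with the largest |entry|
        # in column k into the pivot position for numerical stability.
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        # Eliminate column k below the pivot.
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    # Back substitution on the resulting upper triangular system.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x
```

For instance, `solve([[2.0, 1.0], [1.0, 3.0]], [3.0, 4.0])` returns the solution x = (1, 1). In practice one would call a library routine (LAPACK, or `solve` in R), which is exactly the software layer Part III discusses.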

The first two parts of the text are ideal for a course in matrix algebra for statistics students or as a supplementary text for various courses in linear models or multivariate statistics. The third part is ideal for use as a text for a course in statistical computing or as a supplementary text for various courses that emphasize computations.

New to this edition

  • 100 pages of additional material
  • 30 more exercises, for 186 exercises overall
  • Added discussion of vectors and matrices with complex elements
  • Additional material on statistical applications
  • Extensive and reader-friendly cross-references and index

Reviews

Gentle has put a lot of time and effort into writing this book with careful attention to details. ... It is all needed to make sure the student has a firm and solid understanding of matrix algebra at the graduate level. I would recommend this book for all those who teach graduate-level matrix algebra or to those undergraduate students who wish to have an independent study. (Peter Olszewski, MAA Reviews, January 2018)



Beautifully written, easy to read, with a well-subindexed index of 16 pages and a bibliography of 13 pages that includes the most modern and relevant textbooks and articles in the area of matrix theory and computations, as well as in statistics and big-data computations. (Frank Uhlig, zbMATH 1386.15002, 2018)

This very reader-friendly volume presents an opportunity for graduate students and researchers to enjoy reading on classical matrix analysis in its modern applications to statistics and to implement these methods in practical problem solving. (Stan Lipovetsky, Technometrics, Vol. 60 (2), 2018)

Table of contents

Preface to the Second Edition
Preface to the First Edition

Part I Linear Algebra
1 Basic Vector/Matrix Structure and Notation
1.1 Vectors
1.2 Arrays
1.3 Matrices
1.3.1 Subvectors and Submatrices
1.4 Representation of Data
2 Vectors and Vector Spaces
2.1 Operations on Vectors
2.1.1 Linear Combinations and Linear Independence
2.1.2 Vector Spaces and Spaces of Vectors
2.1.3 Basis Sets for Vector Spaces
2.1.4 Inner Products
2.1.5 Norms
2.1.6 Normalized Vectors
2.1.7 Metrics and Distances
2.1.8 Orthogonal Vectors and Orthogonal Vector Spaces
2.1.9 The "One Vector"
2.2 Cartesian Coordinates and Geometrical Properties of Vectors
2.2.1 Cartesian Geometry
2.2.2 Projections
2.2.3 Angles Between Vectors
2.2.4 Orthogonalization Transformations: Gram-Schmidt
2.2.5 Orthonormal Basis Sets
2.2.6 Approximation of Vectors
2.2.7 Flats, Affine Spaces, and Hyperplanes
2.2.8 Cones
2.2.9 Cross Products in IR3
2.3 Centered Vectors and Variances and Covariances of Vectors
2.3.1 The Mean and Centered Vectors
2.3.2 The Standard Deviation, the Variance, and Scaled Vectors
2.3.3 Covariances and Correlations Between Vectors
Exercises
3 Basic Properties of Matrices
3.1 Basic Definitions and Notation
3.1.1 Multiplication of a Matrix by a Scalar
3.1.2 Diagonal Elements: diag(·) and vecdiag(·)
3.1.3 Diagonal, Hollow, and Diagonally Dominant Matrices
3.1.4 Matrices with Special Patterns of Zeroes
3.1.5 Matrix Shaping Operators
3.1.6 Partitioned Matrices
3.1.7 Matrix Addition
3.1.8 Scalar-Valued Operators on Square Matrices: The Trace
3.1.9 Scalar-Valued Operators on Square Matrices: The Determinant
3.2 Multiplication of Matrices and Multiplication of Vectors and Matrices
3.2.1 Matrix Multiplication (Cayley)
3.2.2 Multiplication of Matrices with Special Patterns
3.2.3 Elementary Operations on Matrices
3.2.4 The Trace of a Cayley Product That Is Square
3.2.5 The Determinant of a Cayley Product of Square Matrices
3.2.6 Multiplication of Matrices and Vectors
3.2.7 Outer Products
3.2.8 Bilinear and Quadratic Forms: Definiteness
3.2.9 Anisometric Spaces
3.2.10 Other Kinds of Matrix Multiplication
3.3 Matrix Rank and the Inverse of a Matrix
3.3.1 Row Rank and Column Rank
3.3.2 Full Rank Matrices
3.3.3 Rank of Elementary Operator Matrices and Matrix Products Involving Them
3.3.4 The Rank of Partitioned Matrices, Products of Matrices, and Sums of Matrices
3.3.5 Full Rank Partitioning
3.3.6 Full Rank Matrices and Matrix Inverses
3.3.7 Full Rank Factorization
3.3.8 Equivalent Matrices
3.3.9 Multiplication by Full Rank Matrices
3.3.10 Gramian Matrices: Products of the Form AᵀA
3.3.11 A Lower Bound on the Rank of a Matrix Product
3.3.12 Determinants of Inverses
3.3.13 Inverses of Products and Sums of Nonsingular Matrices
3.3.14 Inverses of Matrices with Special Forms
3.3.15 Determining the Rank of a Matrix
3.4 More on Partitioned Square Matrices: The Schur Complement
3.4.1 Inverses of Partitioned Matrices
3.4.2 Determinants of Partitioned Matrices
3.5 Linear Systems of Equations
3.5.1 Solutions of Linear Systems
3.5.2 Null Space: The Orthogonal Complement
3.6 Generalized Inverses
3.6.1 Immediate Properties of Generalized Inverses
3.6.2 Special Generalized Inverses: The Moore-Penrose Inverse
3.6.3 Generalized Inverses of Products and Sums of Matrices
3.6.4 Generalized Inverses of Partitioned Matrices
3.7 Orthogonality
3.7.1 Orthogonal Matrices: Definition and Simple Properties
3.7.2 Orthogonal and Orthonormal Columns
3.7.3 The Orthogonal Group
3.7.4 Conjugacy
3.8 Eigenanalysis: Canonical Factorizations
3.8.1 Eigenvalues and Eigenvectors Are Remarkable
3.8.2 Left Eigenvectors
3.8.3 Basic Properties of Eigenvalues and Eigenvectors
3.8.4 The Characteristic Polynomial
3.8.5 The Spectrum
3.8.6 Similarity Transformations
3.8.7 Schur Factorization
3.8.8 Similar Canonical Factorization: Diagonalizable Matrices
3.8.9 Properties of Diagonalizable Matrices
3.8.10 Eigenanalysis of Symmetric Matrices
3.8.11 Positive Definite and Nonnegative Definite Matrices
3.8.12 Generalized Eigenvalues and Eigenvectors
3.8.13 Singular Values and the Singular Value Decomposition (SVD)
3.9 Matrix Norms
3.9.1 Matrix Norms Induced from Vector Norms
3.9.2 The Frobenius Norm: The "Usual" Norm
3.9.3 Other Matrix Norms
3.9.4 Matrix Norm Inequalities
3.9.5 The Spectral Radius
3.9.6 Convergence of a Matrix Power Series
3.10 Approximation of Matrices
3.10.1 Measures of the Difference Between Two Matrices
3.10.2 Best Approximation with a Matrix of Given Rank
Exercises
4 Vector/Matrix Derivatives and Integrals
4.1 Functions of Vectors and Matrices
4.2 Basics of Differentiation
4.2.1 Continuity
4.2.2 Notation and Properties
4.2.3 Differentials
4.3 Types of Differentiation
4.3.1 Differentiation with Respect to a Scalar
4.3.2 Differentiation with Respect to a Vector
4.3.3 Differentiation with Respect to a Matrix
4.4 Optimization of Scalar-Valued Functions
4.4.1 Stationary Points of Functions
4.4.2 Newton's Method
4.4.3 Least Squares
4.4.4 Maximum Likelihood
4.4.5 Optimization of Functions with Constraints
4.4.6 Optimization Without Differentiation
4.5 Integration and Expectation: Applications to Probability Distributions
4.5.1 Multidimensional Integrals and Integrals Involving Vectors and Matrices
4.5.2 Integration Combined with Other Operations
4.5.3 Random Variables and Probability Distributions
Exercises
5 Matrix Transformations and Factorizations
5.1 Factorizations
5.2 Computational Methods: Direct and Iterative
5.3 Linear Geometric Transformations
5.3.1 Invariance Properties of Linear Transformations
5.3.2 Transformations by Orthogonal Matrices
5.3.3 Rotations
5.3.4 Reflections
5.3.5 Translations: Homogeneous Coordinates
5.4 Householder Transformations (Reflections)
5.4.1 Zeroing All Elements But One in a Vector
5.4.2 Computational Considerations
5.5 Givens Transformations (Rotations)
5.5.1 Zeroing One Element in a Vector
5.5.2 Givens Rotations That Preserve Symmetry
5.5.3 Givens Rotations to Transform to Other Values
5.5.4 Fast Givens Rotations
5.6 Factorization of Matrices
5.7 LU and LDU Factorizations
5.7.1 Properties: Existence
5.7.2 Pivoting
5.7.3 Use of Inner Products
5.7.4 Properties: Uniqueness
5.7.5 Properties of the LDU Factorization of a Square Matrix
5.8 QR Factorization
5.8.1 Related Matrix Factorizations
5.8.2 Matrices of Full Column Rank
5.8.3 Relation to the Moore-Penrose Inverse for Matrices of Full Column Rank
5.8.4 Nonfull Rank Matrices
5.8.5 Relation to the Moore-Penrose Inverse
5.8.6 Determining the Rank of a Matrix
5.8.7 Formation of the QR Factorization
5.8.8 Householder Reflections to Form the QR Factorization
5.8.9 Givens Rotations to Form the QR Factorization
5.8.10 Gram-Schmidt Transformations to Form the QR Factorization
5.9 Factorizations of Nonnegative Definite Matrices
5.9.1 Square Roots
5.9.2 Cholesky Factorization
5.9.3 Factorizations of a Gramian Matrix
5.10 Approximate Matrix Factorization
5.10.1 Nonnegative Matrix Factorization
5.10.2 Incomplete Factorizations
Exercises
6 Solution of Linear Systems
6.1 Condition of Matrices
6.1.1 Condition Number
6.1.2 Improving the Condition Number
6.1.3 Numerical Accuracy
6.2 Direct Methods for Consistent Systems
6.2.1 Gaussian Elimination and Matrix Factorizations
6.2.2 Choice of Direct Method
6.3 Iterative Methods for Consistent Systems
6.3.1 The Gauss-Seidel Method with Successive Overrelaxation
6.3.2 Conjugate Gradient Methods for Symmetric Positive Definite Systems
6.3.3 Multigrid Methods
6.4 Iterative Refinement
6.5 Updating a Solution to a Consistent System
6.6 Overdetermined Systems: Least Squares
6.6.1 Least Squares Solution of an Overdetermined System
6.6.2 Least Squares with a Full Rank Coefficient Matrix
6.6.3 Least Squares with a Coefficient Matrix Not of Full Rank
6.6.4 Weighted Least Squares
6.6.5 Updating a Least Squares Solution of an Overdetermined System
6.7 Other Solutions of Overdetermined Systems
6.7.1 Solutions that Minimize Other Norms of the Residuals
6.7.2 Regularized Solutions
6.7.3 Minimizing Orthogonal Distances
Exercises
7 Evaluation of Eigenvalues and Eigenvectors
7.1 General Computational Methods
7.1.1 Numerical Condition of an Eigenvalue Problem
7.1.2 Eigenvalues from Eigenvectors and Vice Versa
7.1.3 Deflation
7.1.4 Preconditioning
7.1.5 Shifting
7.2 Power Method
7.2.1 Inverse Power Method
7.3 Jacobi Method
7.4 QR Method
7.5 Krylov Methods
7.6 Generalized Eigenvalues
7.7 Singular Value Decomposition
Exercises
Part II Applications in Data Analysis
8 Special Matrices and Operations Useful in Modeling and Data Analysis
8.1 Data Matrices and Association Matrices
8.1.1 Flat Files
8.1.2 Graphs and Other Data Structures
8.1.3 Term-by-Document Matrices
8.1.4 Probability Distribution Models
8.1.5 Derived Association Matrices
8.2 Symmetric Matrices and Other Unitarily Diagonalizable Matrices
8.2.1 Some Important Properties of Symmetric Matrices
8.2.2 Approximation of Symmetric Matrices and an Important Inequality
8.2.3 Normal Matrices
8.3 Nonnegative Definite Matrices: Cholesky Factorization
8.3.1 Eigenvalues of Nonnegative Definite Matrices
8.3.2 The Square Root and the Cholesky Factorization
8.3.3 The Convex Cone of Nonnegative Definite Matrices
8.4 Positive Definite Matrices
8.4.1 Leading Principal Submatrices of Positive Definite Matrices
8.4.2 The Convex Cone of Positive Definite Matrices
8.4.3 Inequalities Involving Positive Definite Matrices
8.5 Idempotent and Projection Matrices
8.5.1 Idempotent Matrices
8.5.2 Projection Matrices: Symmetric Idempotent Matrices
8.6 Special Matrices Occurring in Data Analysis
8.6.1 Gramian Matrices
8.6.2 Projection and Smoothing Matrices
8.6.3 Centered Matrices and Variance-Covariance Matrices
8.6.4 The Generalized Variance
8.6.5 Similarity Matrices
8.6.6 Dissimilarity Matrices
8.7 Nonnegative and Positive Matrices
8.7.1 The Convex Cones of Nonnegative and Positive Matrices
8.7.2 Properties of Square Positive Matrices
8.7.3 Irreducible Square Nonnegative Matrices
8.7.4 Stochastic Matrices
8.7.5 Leslie Matrices
8.8 Other Matrices with Special Structures
8.8.1 Helmert Matrices
8.8.2 Vandermonde Matrices
8.8.3 Hadamard Matrices and Orthogonal Arrays
8.8.4 Toeplitz Matrices
8.8.5 Circulant Matrices
8.8.6 Fourier Matrices and the Discrete Fourier Transform
8.8.7 Hankel Matrices
8.8.8 Cauchy Matrices
8.8.9 Matrices Useful in Graph Theory
8.8.10 Z-Matrices and M-Matrices
Exercises
9 Selected Applications in Statistics
9.1 Structure in Data and Statistical Data Analysis
9.2 Multivariate Probability Distributions
9.2.1 Basic Definitions and Properties
9.2.2 The Multivariate Normal Distribution
9.2.3 Derived Distributions and Cochran's Theorem
9.3 Linear Models
9.3.1 Fitting the Model
9.3.2 Linear Models and Least Squares
9.3.3 Statistical Inference
9.3.4 The Normal Equations and the Sweep Operator
9.3.5 Linear Least Squares Subject to Linear Equality Constraints
9.3.6 Weighted Least Squares
9.3.7 Updating Linear Regression Statistics
9.3.8 Linear Smoothing
9.3.9 Multivariate Linear Models
9.4 Principal Components
9.4.1 Principal Components of a Random Vector
9.4.2 Principal Components of Data
9.5 Condition of Models and Data
9.5.1 Ill-Conditioning in Statistical Applications
9.5.2 Variable Selection
9.5.3 Principal Components Regression
9.5.4 Shrinkage Estimation
9.5.5 Statistical Inference about the Rank of a Matrix
9.5.6 Incomplete Data
9.6 Optimal Design
9.6.1 D-Optimal Designs
9.7 Multivariate Random Number Generation
9.7.1 The Multivariate Normal Distribution
9.7.2 Random Correlation Matrices
9.8 Stochastic Processes
9.8.1 Markov Chains
9.8.2 Markovian Population Models
9.8.3 Autoregressive Processes
Exercises
Part III Numerical Methods and Software
10 Numerical Methods
10.1 Digital Representation of Numeric Data
10.1.1 The Fixed-Point Number System
10.1.2 The Floating-Point Model for Real Numbers
10.1.3 Language Constructs for Representing Numeric Data
10.1.4 Other Variations in the Representation of Data; Portability of Data
10.2 Computer Operations on Numeric Data
10.2.1 Fixed-Point Operations
10.2.2 Floating-Point Operations
10.2.3 Language Constructs for Operations on Numeric Data
10.2.4 Software Methods for Extending the Precision
10.2.5 Exact Computations
10.3 Numerical Algorithms and Analysis
10.3.1 Algorithms and Programs
10.3.2 Error in Numerical Computations
10.3.3 Efficiency
10.3.4 Iterations and Convergence
10.3.5 Other Computational Techniques
Exercises
11 Numerical Linear Algebra
11.1 Computer Storage of Vectors and Matrices
11.1.1 Storage Modes
11.1.2 Strides
11.1.3 Sparsity
11.2 General Computational Considerations for Vectors and Matrices
11.2.1 Relative Magnitudes of Operands
11.2.2 Iterative Methods
11.2.3 Assessing Computational Errors
11.3 Multiplication of Vectors and Matrices
11.3.1 Strassen's Algorithm
11.3.2 Matrix Multiplication Using MapReduce
11.4 Other Matrix Computations
11.4.1 Rank Determination
11.4.2 Computing the Determinant
11.4.3 Computing the Condition Number
Exercises
12 Software for Numerical Linear Algebra
12.1 General Considerations
12.1.1 Software Development and Open Source Software
12.1.2 Collaborative Research and Version Control
12.1.3 Finding Software
12.1.4 Software Design
12.1.5 Software Development, Maintenance, and Testing
12.1.6 Reproducible Research
12.2 Software Libraries
12.2.1 BLAS
12.2.2 Level 2 and Level 3 BLAS, LAPACK, and Related Libraries
12.2.3 Libraries for High Performance Computing
12.2.4 The IMSL Libraries
12.3 General Purpose Languages
12.3.1 Programming Considerations
12.3.2 Modern Fortran
12.3.3 C and C++
12.3.4 Python
12.4 Interactive Systems for Array Manipulation
12.4.1 R
12.4.2 MATLAB and Octave
Exercises
Appendices and Back Matter
Notation and Definitions
A.1 General Notation
A.2 Computer Number Systems
A.3 General Mathematical Functions and Operators
A.3.1 Special Functions
A.4 Linear Spaces and Matrices
A.4.1 Norms and Inner Products
A.4.2 Matrix Shaping Notation
A.4.3 Notation for Rows or Columns of Matrices
A.4.4 Notation Relating to Matrix Determinants
A.4.5 Matrix-Vector Differentiation
A.4.6 Special Vectors and Matrices
A.4.7 Elementary Operator Matrices
A.5 Models and Data
Solutions and Hints for Selected Exercises
Bibliography
Index
James E. Gentle, PhD, is University Professor of Computational Statistics at George Mason University. He is a Fellow of the American Statistical Association (ASA) and of the American Association for the Advancement of Science. Professor Gentle has held several national offices in the ASA and has served as editor and associate editor of journals of the ASA as well as of other journals in statistics and computing. He is the author of Random Number Generation and Monte Carlo Methods (Springer, 2003) and Computational Statistics (Springer, 2009).