
E-book: Matrix Differential Calculus with Applications in Statistics and Econometrics, Third Edition [Wiley Online]

(University of Amsterdam), (London School of Economics)
Other books on this topic:
  • Wiley Online
  • Price: 123,65 €*
  • * the price guarantees access for an unlimited number of concurrent users for an unlimited period

A brand new, fully updated edition of a popular classic on matrix differential calculus with applications in statistics and econometrics

This exhaustive, self-contained book on matrix theory and matrix differential calculus provides a treatment of matrix calculus based on differentials and shows how easy it is to use this theory once you have mastered the technique. Jan Magnus, who, along with the late Heinz Neudecker, pioneered the theory, develops it further in this new edition, supporting it with many examples along the way.

Matrix calculus has become an essential tool for quantitative methods in a large number of applications, ranging from the social and behavioral sciences to econometrics, and it remains in wide use today in subjects such as the biosciences and psychology. Matrix Differential Calculus with Applications in Statistics and Econometrics, Third Edition contains all of the essentials of multivariable calculus with an emphasis on the use of differentials. It starts with a concise yet thorough overview of matrix algebra, then develops the theory of differentials. The rest of the text combines the theory and application of matrix differential calculus, providing the practitioner and researcher with both a quick review and a detailed reference.
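As a taste of the differential technique the book builds on, consider the classic identity d|X| = |X| tr(X⁻¹ dX) for invertible X, one of the differentials treated in the book. The short NumPy sketch below is our own illustration, not taken from the text; it checks the identity numerically against a small random perturbation:

```python
import numpy as np

# Verify d|X| = |X| tr(X^{-1} dX) to first order, for invertible X.
rng = np.random.default_rng(0)
n = 4
X = rng.standard_normal((n, n)) + n * np.eye(n)  # shifted to be well-conditioned
dX = 1e-6 * rng.standard_normal((n, n))          # small perturbation

exact_change = np.linalg.det(X + dX) - np.linalg.det(X)
differential = np.linalg.det(X) * np.trace(np.linalg.inv(X) @ dX)

# The differential matches the exact change up to terms of order ||dX||^2.
assert abs(exact_change - differential) < 1e-8 * abs(np.linalg.det(X))
```

The point of the approach is that one works with the differential dX directly, and identities like this one then yield the derivative via an identification theorem rather than element-by-element bookkeeping.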

  • Fulfills the need for an updated and unified treatment of matrix differential calculus
  • Contains many new examples and exercises based on questions asked of the author over the years
  • Covers new developments in the field and features new applications
  • Written by a leading expert and pioneer of the theory
  • Part of the Wiley Series in Probability and Statistics 

Matrix Differential Calculus with Applications in Statistics and Econometrics, Third Edition is an ideal text for graduate students and academics studying the subject, as well as for postgraduates and specialists working in the biosciences and psychology.

Preface xiii
Part One Matrices
1 Basic properties of vectors and matrices 3(28)
1 Introduction 3(1)
2 Sets 3(1)
3 Matrices: addition and multiplication 4(2)
4 The transpose of a matrix 6(1)
5 Square matrices 6(1)
6 Linear forms and quadratic forms 7(2)
7 The rank of a matrix 9(1)
8 The inverse 10(1)
9 The determinant 10(1)
10 The trace 11(1)
11 Partitioned matrices 12(2)
12 Complex matrices 14(1)
13 Eigenvalues and eigenvectors 14(3)
14 Schur's decomposition theorem 17(1)
15 The Jordan decomposition 18(2)
16 The singular-value decomposition 20(1)
17 Further results concerning eigenvalues 20(3)
18 Positive (semi)definite matrices 23(2)
19 Three further results for positive definite matrices 25(1)
20 A useful result 26(1)
21 Symmetric matrix functions 27(1)
Miscellaneous exercises 28(2)
Bibliographical notes 30(1)
2 Kronecker products, vec operator, and Moore-Penrose inverse 31(16)
1 Introduction 31(1)
2 The Kronecker product 31(2)
3 Eigenvalues of a Kronecker product 33(1)
4 The vec operator 34(2)
5 The Moore-Penrose (MP) inverse 36(1)
6 Existence and uniqueness of the MP inverse 37(1)
7 Some properties of the MP inverse 38(1)
8 Further properties 39(2)
9 The solution of linear equation systems 41(2)
Miscellaneous exercises 43(2)
Bibliographical notes 45(2)
3 Miscellaneous matrix results 47(26)
1 Introduction 47(1)
2 The adjoint matrix 47(2)
3 Proof of Theorem 3.1 49(2)
4 Bordered determinants 51(1)
5 The matrix equation AX = 0 51(1)
6 The Hadamard product 52(2)
7 The commutation matrix Kmn 54(2)
8 The duplication matrix Dn 56(2)
9 Relationship between Dn+1 and Dn, I 58(1)
10 Relationship between Dn+1 and Dn, II 59(1)
11 Conditions for a quadratic form to be positive (negative) subject to linear constraints 60(3)
12 Necessary and sufficient conditions for r(A : B) = r(A) + r(B) 63(2)
13 The bordered Gramian matrix 65(2)
14 The equations X1A + X2B' = G1, X1B = G2 67(2)
Miscellaneous exercises 69(1)
Bibliographical notes 70(3)
Part Two Differentials: the theory
4 Mathematical preliminaries 73(14)
1 Introduction 73(1)
2 Interior points and accumulation points 73(2)
3 Open and closed sets 75(2)
4 The Bolzano-Weierstrass theorem 77(1)
5 Functions 78(1)
6 The limit of a function 79(1)
7 Continuous functions and compactness 80(1)
8 Convex sets 81(2)
9 Convex and concave functions 83(3)
Bibliographical notes 86(1)
5 Differentials and differentiability 87(24)
1 Introduction 87(1)
2 Continuity 88(2)
3 Differentiability and linear approximation 90(1)
4 The differential of a vector function 91(2)
5 Uniqueness of the differential 93(1)
6 Continuity of differentiable functions 94(1)
7 Partial derivatives 95(1)
8 The first identification theorem 96(1)
9 Existence of the differential, I 97(2)
10 Existence of the differential, II 99(1)
11 Continuous differentiability 100(1)
12 The chain rule 100(2)
13 Cauchy invariance 102(1)
14 The mean-value theorem for real-valued functions 103(1)
15 Differentiable matrix functions 104(2)
16 Some remarks on notation 106(2)
17 Complex differentiation 108(2)
Miscellaneous exercises 110(1)
Bibliographical notes 110(1)
6 The second differential 111(18)
1 Introduction 111(1)
2 Second-order partial derivatives 111(1)
3 The Hessian matrix 112(1)
4 Twice differentiability and second-order approximation, I 113(1)
5 Definition of twice differentiability 114(1)
6 The second differential 115(2)
7 Symmetry of the Hessian matrix 117(2)
8 The second identification theorem 119(1)
9 Twice differentiability and second-order approximation, II 119(2)
10 Chain rule for Hessian matrices 121(2)
11 The analog for second differentials 123(1)
12 Taylor's theorem for real-valued functions 124(1)
13 Higher-order differentials 125(1)
14 Real analytic functions 125(1)
15 Twice differentiable matrix functions 126(1)
Bibliographical notes 127(2)
7 Static optimization 129(34)
1 Introduction 129(1)
2 Unconstrained optimization 130(1)
3 The existence of absolute extrema 131(1)
4 Necessary conditions for a local minimum 132(2)
5 Sufficient conditions for a local minimum: first-derivative test 134(2)
6 Sufficient conditions for a local minimum: second-derivative test 136(2)
7 Characterization of differentiable convex functions 138(3)
8 Characterization of twice differentiable convex functions 141(1)
9 Sufficient conditions for an absolute minimum 142(1)
10 Monotonic transformations 143(1)
11 Optimization subject to constraints 144(1)
12 Necessary conditions for a local minimum under constraints 145(4)
13 Sufficient conditions for a local minimum under constraints 149(5)
14 Sufficient conditions for an absolute minimum under constraints 154(1)
15 A note on constraints in matrix form 155(1)
16 Economic interpretation of Lagrange multipliers 155(2)
Appendix: the implicit function theorem 157(2)
Bibliographical notes 159(4)
Part Three Differentials: the practice
8 Some important differentials 163(28)
1 Introduction 163(1)
2 Fundamental rules of differential calculus 163(2)
3 The differential of a determinant 165(3)
4 The differential of an inverse 168(1)
5 Differential of the Moore-Penrose inverse 169(3)
6 The differential of the adjoint matrix 172(2)
7 On differentiating eigenvalues and eigenvectors 174(2)
8 The continuity of eigenprojections 176(4)
9 The differential of eigenvalues and eigenvectors: symmetric case 180(3)
10 Two alternative expressions for dλ 183(2)
11 Second differential of the eigenvalue function 185(1)
Miscellaneous exercises 186(3)
Bibliographical notes 189(2)
9 First-order differentials and Jacobian matrices 191(20)
1 Introduction 191(1)
2 Classification 192(1)
3 Derisatives 192(2)
4 Derivatives 194(2)
5 Identification of Jacobian matrices 196(1)
6 The first identification table 197(1)
7 Partitioning of the derivative 197(1)
8 Scalar functions of a scalar 198(1)
9 Scalar functions of a vector 198(1)
10 Scalar functions of a matrix, I: trace 199(2)
11 Scalar functions of a matrix, II: determinant 201(1)
12 Scalar functions of a matrix, III: eigenvalue 202(1)
13 Two examples of vector functions 203(1)
14 Matrix functions 204(2)
15 Kronecker products 206(2)
16 Some other problems 208(1)
17 Jacobians of transformations 209(1)
Bibliographical notes 210(1)
10 Second-order differentials and Hessian matrices 211(14)
1 Introduction 211(1)
2 The second identification table 211(1)
3 Linear and quadratic forms 212(1)
4 A useful theorem 213(1)
5 The determinant function 214(1)
6 The eigenvalue function 215(1)
7 Other examples 215(2)
8 Composite functions 217(1)
9 The eigenvector function 218(1)
10 Hessian of matrix functions, I 219(1)
11 Hessian of matrix functions, II 219(1)
Miscellaneous exercises 220(5)
Part Four Inequalities
11 Inequalities 225(48)
1 Introduction 225(1)
2 The Cauchy-Schwarz inequality 226(1)
3 Matrix analogs of the Cauchy-Schwarz inequality 227(1)
4 The theorem of the arithmetic and geometric means 228(2)
5 The Rayleigh quotient 230(2)
6 Concavity of λ1 and convexity of λn 232(1)
7 Variational description of eigenvalues 232(2)
8 Fischer's min-max theorem 234(2)
9 Monotonicity of the eigenvalues 236(1)
10 The Poincaré separation theorem 236(1)
11 Two corollaries of Poincaré's theorem 237(1)
12 Further consequences of the Poincaré theorem 238(1)
13 Multiplicative version 239(2)
14 The maximum of a bilinear form 241(1)
15 Hadamard's inequality 242(1)
16 An interlude: Karamata's inequality 242(2)
17 Karamata's inequality and eigenvalues 244(1)
18 An inequality concerning positive semidefinite matrices 245(1)
19 A representation theorem for (Σ ai^p)^(1/p) 246(1)
20 A representation theorem for (tr A^p)^(1/p) 247(1)
21 Hölder's inequality 248(2)
22 Concavity of log|A| 250(1)
23 Minkowski's inequality 251(2)
24 Quasilinear representation of |A|^(1/n) 253(2)
25 Minkowski's determinant theorem 255(1)
26 Weighted means of order p 256(2)
27 Schlömilch's inequality 258(1)
28 Curvature properties of Mp(x, a) 259(1)
29 Least squares 260(1)
30 Generalized least squares 261(1)
31 Restricted least squares 262(2)
32 Restricted least squares: matrix version 264(1)
Miscellaneous exercises 265(4)
Bibliographical notes 269(4)
Part Five The linear model
12 Statistical preliminaries 273(12)
1 Introduction 273(1)
2 The cumulative distribution function 273(1)
3 The joint density function 274(1)
4 Expectations 274(1)
5 Variance and covariance 275(2)
6 Independence of two random variables 277(2)
7 Independence of n random variables 279(1)
8 Sampling 279(1)
9 The one-dimensional normal distribution 279(1)
10 The multivariate normal distribution 280(2)
11 Estimation 282(1)
Miscellaneous exercises 282(1)
Bibliographical notes 283(2)
13 The linear regression model 285(36)
1 Introduction 285(1)
2 Affine minimum-trace unbiased estimation 286(1)
3 The Gauss-Markov theorem 287(3)
4 The method of least squares 290(1)
5 Aitken's theorem 291(2)
6 Multicollinearity 293(2)
7 Estimable functions 295(1)
8 Linear constraints: the case M(R') ⊂ M(X') 296(4)
9 Linear constraints: the general case 300(2)
10 Linear constraints: the case M(R') ∩ M(X') = {0} 302(2)
11 A singular variance matrix: the case M(X) ⊂ M(V) 304(1)
12 A singular variance matrix: the case r(X'V+X) = r(X) 305(2)
13 A singular variance matrix: the general case, I 307(1)
14 Explicit and implicit linear constraints 307(3)
15 The general linear model, I 310(1)
16 A singular variance matrix: the general case, II 311(3)
17 The general linear model, II 314(1)
18 Generalized least squares 315(1)
19 Restricted least squares 316(2)
Miscellaneous exercises 318(1)
Bibliographical notes 319(2)
14 Further topics in the linear model 321(26)
1 Introduction 321(1)
2 Best quadratic unbiased estimation of σ² 322(1)
3 The best quadratic and positive unbiased estimator of σ² 322(2)
4 The best quadratic unbiased estimator of σ² 324(2)
5 Best quadratic invariant estimation of σ² 326(1)
6 The best quadratic and positive invariant estimator of σ² 327(2)
7 The best quadratic invariant estimator of σ² 329(1)
8 Best quadratic unbiased estimation: multivariate normal case 330(2)
9 Bounds for the bias of the least-squares estimator of σ², I 332(1)
10 Bounds for the bias of the least-squares estimator of σ², II 333(2)
11 The prediction of disturbances 335(1)
12 Best linear unbiased predictors with scalar variance matrix 336(2)
13 Best linear unbiased predictors with fixed variance matrix, I 338(2)
14 Best linear unbiased predictors with fixed variance matrix, II 340(1)
15 Local sensitivity of the posterior mean 341(1)
16 Local sensitivity of the posterior precision 342(2)
Bibliographical notes 344(3)
Part Six Applications to maximum likelihood estimation
15 Maximum likelihood estimation 347(20)
1 Introduction 347(1)
2 The method of maximum likelihood (ML) 347(1)
3 ML estimation of the multivariate normal distribution 348(2)
4 Symmetry: implicit versus explicit treatment 350(1)
5 The treatment of positive definiteness 351(1)
6 The information matrix 352(2)
7 ML estimation of the multivariate normal distribution: distinct means 354(1)
8 The multivariate linear regression model 354(3)
9 The errors-in-variables model 357(2)
10 The nonlinear regression model with normal errors 359(2)
11 Special case: functional independence of mean and variance parameters 361(1)
12 Generalization of Theorem 15.6 362(2)
Miscellaneous exercises 364(1)
Bibliographical notes 365(2)
16 Simultaneous equations 367(22)
1 Introduction 367(1)
2 The simultaneous equations model 367(2)
3 The identification problem 369(2)
4 Identification with linear constraints on B and Γ only 371(1)
5 Identification with linear constraints on B, Γ, and Σ 371(2)
6 Nonlinear constraints 373(1)
7 FIML: the information matrix (general case) 374(2)
8 FIML: asymptotic variance matrix (special case) 376(2)
9 LIML: first-order conditions 378(3)
10 LIML: information matrix 381(2)
11 LIML: asymptotic variance matrix 383(5)
Bibliographical notes 388(1)
17 Topics in psychometrics 389(34)
1 Introduction 389(1)
2 Population principal components 390(1)
3 Optimality of principal components 391(1)
4 A related result 392(1)
5 Sample principal components 393(2)
6 Optimality of sample principal components 395(1)
7 One-mode component analysis 395(3)
8 One-mode component analysis and sample principal components 398(1)
9 Two-mode component analysis 399(1)
10 Multimode component analysis 400(4)
11 Factor analysis 404(3)
12 A zigzag routine 407(1)
13 A Newton-Raphson routine 408(4)
14 Kaiser's varimax method 412(2)
15 Canonical correlations and variates in the population 414(3)
16 Correspondence analysis 417(1)
17 Linear discriminant analysis 418(1)
Bibliographical notes 419(4)
Part Seven Summary
18 Matrix calculus: the essentials 423(26)
1 Introduction 423(1)
2 Differentials 424(2)
3 Vector calculus 426(3)
4 Optimization 429(2)
5 Least squares 431(1)
6 Matrix calculus 432(2)
7 Interlude on linear and quadratic forms 434(1)
8 The second differential 434(2)
9 Chain rule for second differentials 436(2)
10 Four examples 438(1)
11 The Kronecker product and vec operator 439(2)
12 Identification 441(1)
13 The commutation matrix 442(1)
14 From second differential to Hessian 443(1)
15 Symmetry and the duplication matrix 444(1)
16 Maximum likelihood 445(3)
Further reading 448(1)
Bibliography 449(18)
Index of symbols 467(4)
Subject index 471
JAN R. MAGNUS is Emeritus Professor at the Department of Econometrics & Operations Research, Tilburg University, and Extraordinary Professor at the Department of Econometrics & Operations Research, Vrije Universiteit Amsterdam. He is a research fellow of CentER and the Tinbergen Institute. He has co-authored nine books and is the author of over 100 scientific papers.

HEINZ NEUDECKER (1933-2017) was Professor of Econometrics at the University of Amsterdam from 1972 until his retirement in 1998.