E-book: Linear Models and Regression with R: An Integrated Approach

(Univ of California, Santa Barbara, USA), (Indian Statistical Inst, India)
  • Pages: 772
  • Series: Series on Multivariate Analysis 11
  • Publication date: 30-Jul-2019
  • Publisher: World Scientific Publishing Co Pte Ltd
  • Language: English
  • ISBN-13: 9789811200427
  • Format: PDF+DRM
  • Price: 58,38 €*
  • * the price is final, i.e. no further discounts apply
  • This e-book is intended for personal use only. E-books cannot be returned.

DRM restrictions

  • Copying (copy/paste): not allowed
  • Printing: not allowed

  • Usage:

    Digital rights management (DRM)
    The publisher has issued this e-book in encrypted form, which means you must install dedicated software to read it. You also need to create an Adobe ID (more information here). The e-book can be read by 1 user and downloaded to up to 6 devices (all authorized with the same Adobe ID).

    Required software
    To read on a mobile device (phone or tablet), install this free app: PocketBook Reader (iOS / Android).

    To read on a PC or Mac, install Adobe Digital Editions. (This is a free application designed specifically for reading e-books; it should not be confused with Adobe Reader, which is probably already installed on your computer.)

    This e-book cannot be read on an Amazon Kindle.

Starting with the basic linear model, in which the design and covariance matrices have full rank, this book demonstrates how the same statistical ideas can be used to explore the more general linear model with rank-deficient design and/or covariance matrices. The unified treatment presented here provides a clearer understanding of the general linear model from a statistical perspective, avoiding the complex matrix-algebraic arguments often used in the rank-deficient case. Elegant geometric arguments are used as needed.

The book has very broad coverage, ranging from illustrative practical examples in regression and analysis of variance, alongside their implementation in R (a short illustrative sketch follows this description), to a comprehensive theory of the general linear model, with 181 worked-out examples, 227 exercises with solutions, 152 exercises without solutions (so that they may be used as course assignments), and 320 up-to-date references.

This completely updated new edition of Linear Models: An Integrated Approach includes the following features, reflected in the table of contents below:
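
As a small, hedged illustration of the R-based workflow the description mentions (this sketch is not taken from the book; the dataset and object names are chosen for the example), an ordinary linear model can be fitted and summarized as follows:

    # Minimal sketch: ordinary least squares for a linear model in R.
    # 'cars' is a built-in R dataset (stopping distance vs. speed);
    # 'fit' is an object name chosen for this example.
    fit <- lm(dist ~ speed, data = cars)
    summary(fit)    # coefficient estimates, t-tests, R-squared, F-test
    anova(fit)      # analysis-of-variance table for the fitted model
    confint(fit)    # 95% confidence intervals for the coefficients
    predict(fit, newdata = data.frame(speed = 15),
            interval = "prediction")    # prediction interval at speed 15

These few generics (summary, anova, confint, predict) correspond to the estimation, testing, confidence-interval and prediction topics developed in Chapters 3 and 4 below.
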
Preface vii
Glossary of Abbreviations xix
Glossary of Matrix Notations xxi
1 Introduction 1(28)
1.1 The linear model 3(6)
1.2 Why a linear model? 9(1)
1.3 Uses of the linear model 10(4)
1.4 Description of the linear model and notations 14(3)
1.5 Scope of the linear model 17(2)
1.6 Related models* 19(2)
1.7 A tour through the rest of the book 21(3)
1.8 Complements/Exercises 24(5)
2 Regression and the Normal Distribution 29(32)
2.1 Multivariate normal and related distributions 30(6)
2.1.1 Matrix of variances and covariances 30(1)
2.1.2 The multivariate normal distribution 31(1)
2.1.3 The conditional normal distribution 32(2)
2.1.4 Related distributions 34(2)
2.2 The case of singular dispersion matrix 36(3)
2.3 Regression as best prediction 39(9)
2.3.1 Mean regression 39(6)
2.3.2 Median and quantile regression* 45(2)
2.3.3 Regression with multivariate response 47(1)
2.4 Linear regression as best linear prediction 48(5)
2.5 Multiple correlation coefficient* 53(2)
2.6 Decorrelation 55(2)
2.7 Complements/Exercises 57(4)
3 Estimation in the Linear Model 61(60)
3.1 Least squares estimation 62(3)
3.2 Fitted values and residuals 65(7)
3.3 Variances and covariances 72(2)
3.4 Estimation of error variance 74(2)
3.5 Linear estimation: some basic facts 76(6)
3.5.1 Linear functions of the response 76(3)
3.5.2 Estimability and identifiability 79(3)
3.6 Best linear unbiased estimation 82(6)
3.7 Maximum likelihood estimation 88(1)
3.8 Error sum of squares and degrees of freedom* 89(5)
3.9 Reparametrization* 94(2)
3.10 Linear restrictions* 96(6)
3.11 Nuisance parameters* 102(2)
3.12 Information matrix and Cramer-Rao bound* 104(5)
3.13 Collinearity in the linear model 109(3)
3.14 Complements/Exercises 112(9)
4 Further Inference in the Linear Model 121(60)
4.1 Distribution of the estimators 121(1)
4.2 Tests of linear hypotheses 122(29)
4.2.1 Significance of regression coefficients 122(9)
4.2.2 Testability of linear hypotheses* 131(4)
4.2.3 Hypotheses with a single degree of freedom 135(3)
4.2.4 Decomposing the sum of squares* 138(2)
4.2.5 GLRT and ANOVA table* 140(6)
4.2.6 Power of the generalized likelihood ratio test* 146(1)
4.2.7 Multiple comparisons* 147(3)
4.2.8 Nested hypotheses* 150(1)
4.3 Confidence regions 151(10)
4.3.1 Confidence interval for a single LPF 151(2)
4.3.2 Confidence region for a vector LPF* 153(2)
4.3.3 Simultaneous confidence intervals* 155(4)
4.3.4 Confidence band for regression surface* 159(2)
4.4 Prediction in the linear model 161(8)
4.4.1 Best linear unbiased predictor 162(1)
4.4.2 Prediction interval 163(3)
4.4.3 Simultaneous prediction intervals* 166(1)
4.4.4 Tolerance interval* 167(2)
4.5 Consequences of collinearity* 169(2)
4.6 Complements/Exercises 171(10)
5 Model Building and Diagnostics in Regression 181(100)
5.1 Selection of regressors 184(18)
5.1.1 Too many and too few regressors 184(8)
5.1.2 Some criteria for subset selection 192(6)
5.1.3 Methods of subset selection 198(2)
5.1.4 Selection bias 200(2)
5.2 Leverages and residuals 202(5)
5.3 Checking for violation of assumptions 207(30)
5.3.1 Nonlinearity 207(6)
5.3.2 Heteroscedasticity 213(11)
5.3.3 Correlated observations 224(5)
5.3.4 Non-normality 229(8)
5.4 Casewise diagnostics 237(9)
5.4.1 Diagnostics for parameter estimates 238(1)
5.4.2 Diagnostics for fit 239(1)
5.4.3 Precision diagnostics 240(2)
5.4.4 Spotting unusual cases graphically 242(3)
5.4.5 Dealing with discordant observations 245(1)
5.5 Collinearity 246(9)
5.5.1 Indicators of collinearity 248(6)
5.5.2 Strategies for collinear data 254(1)
5.6 Biased estimators with smaller dispersion* 255(5)
5.6.1 Principal components estimator 256(2)
5.6.2 Ridge estimator 258(1)
5.6.3 Shrinkage estimator 259(1)
5.7 Generalized linear model 260(10)
5.7.1 Maximum likelihood estimation 263(2)
5.7.2 Logistic regression 265(5)
5.8 Complements/Exercises 270(11)
6 Analysis of Variance 281(60)
6.1 Optimal designs 283(1)
6.2 One-way classified data 284(12)
6.2.1 The model 284(3)
6.2.2 Estimation of model parameters 287(4)
6.2.3 Analysis of variance 291(3)
6.2.4 Multiple comparisons of group means 294(2)
6.3 Two-way classified data 296(21)
6.3.1 Single observation per cell 297(7)
6.3.2 Interaction in two-way classified data 304(5)
6.3.3 Multiple observations per cell: balanced data 309(6)
6.3.4 Unbalanced data 315(2)
6.4 Multiple treatment/block factors 317(1)
6.5 Nested models 318(4)
6.6 Analysis of covariance 322(11)
6.6.1 The model 322(1)
6.6.2 Uses of the model 323(1)
6.6.3 Estimation of parameters 324(3)
6.6.4 Tests of hypotheses 327(3)
6.6.5 ANCOVA table and adjustment for covariates* 330(3)
6.7 Complements/Exercises 333(8)
7 General Linear Model 341(54)
7.1 Why study the singular model? 342(1)
7.2 Special considerations with singular models 343(5)
7.2.1 Checking for model consistency* 344(1)
7.2.2 LUE, LZF, estimability and identifiability* 345(3)
7.3 Best linear unbiased estimation 348(7)
7.3.1 BLUE, fitted values and residuals 348(4)
7.3.2 Dispersions 352(2)
7.3.3 The nonsingular case 354(1)
7.4 Estimation of error variance 355(2)
7.5 Maximum likelihood estimation 357(1)
7.6 Weighted least squares estimation 358(3)
7.7 Some recipes for obtaining the BLUE* 361(5)
7.7.1 `Unified theory' of least squares estimation 361(2)
7.7.2 The inverse partitioned matrix approach 363(2)
7.7.3 A constrained least squares approach 365(1)
7.8 Information matrix and Cramer-Rao bound* 366(3)
7.9 Effect of linear restrictions 369(8)
7.9.1 Linear restrictions in the general linear model 369(3)
7.9.2 Improved estimation through restrictions 372(1)
7.9.3 Stochastic restrictions* 373(2)
7.9.4 Inequality constraints* 375(2)
7.10 Model with nuisance parameters 377(2)
7.11 Tests of hypotheses 379(2)
7.12 Confidence regions 381(2)
7.13 Prediction 383(5)
7.13.1 Best linear unbiased predictor 383(1)
7.13.2 Prediction and tolerance intervals 384(1)
7.13.3 Inference in finite population sampling* 385(3)
7.14 Complements/Exercises 388(7)
8 Misspecified or Unknown Dispersion 395(56)
8.1 Misspecified dispersion matrix 396(16)
8.1.1 Tolerable misspecification* 398(6)
8.1.2 Efficiency of least squares estimators* 404(6)
8.1.3 Effect on the estimated variance of LSEs* 410(2)
8.2 Unknown dispersion: the general case 412(7)
8.2.1 An estimator based on prior information* 412(1)
8.2.2 Maximum likelihood estimator 413(2)
8.2.3 Translation invariance and REML 415(2)
8.2.4 A two-stage estimator 417(2)
8.3 Mixed effects and variance components 419(18)
8.3.1 Identifiability and estimability 420(2)
8.3.2 ML and REML methods 422(6)
8.3.3 ANOVA methods 428(2)
8.3.4 Minimum norm quadratic unbiased estimator 430(5)
8.3.5 Best quadratic unbiased estimator 435(1)
8.3.6 Further inference in the mixed model* 436(1)
8.4 Other special cases with correlated error 437(4)
8.4.1 Serially correlated observations 437(2)
8.4.2 Models for spatial data 439(2)
8.5 Special cases with uncorrelated error 441(3)
8.5.1 Combining experiments: meta-analysis 441(2)
8.5.2 Systematic heteroscedasticity 443(1)
8.6 Some problems of signal processing 444(2)
8.7 Complements/Exercises 446(5)
9 Updates in the General Linear Model 451(48)
9.1 Inclusion of observations 452(21)
9.1.1 A simple case 452(2)
9.1.2 General case: linear zero functions gained 454(3)
9.1.3 General case: update equations 457(5)
9.1.4 Application to model diagnostics 462(1)
9.1.5 Design augmentation 462(4)
9.1.6 Recursive prediction and Kalman filter 466(7)
9.2 Exclusion of observations 473(11)
9.2.1 A simple case 473(2)
9.2.2 General case: linear zero functions lost 475(2)
9.2.3 General case: update equations 477(2)
9.2.4 Deletion diagnostics 479(3)
9.2.5 Missing plot substitution 482(2)
9.3 Exclusion of explanatory variables 484(6)
9.3.1 A simple case 485(1)
9.3.2 General case: linear zero functions gained 486(2)
9.3.3 General case: update equations 488(1)
9.3.4 Consequences of omitted variables 489(1)
9.3.5 Sequential linear restrictions 490(1)
9.4 Inclusion of explanatory variables 490(4)
9.4.1 A simple case 490(2)
9.4.2 General case: linear zero functions lost 492(1)
9.4.3 General case: update equations 493(1)
9.4.4 Application to regression model building 494(1)
9.5 Data exclusion and variable inclusion 494(1)
9.6 Complements/Exercises 495(4)
10 Multivariate Linear Model 499(38)
10.1 Description of the multivariate linear model 500(1)
10.2 Best linear unbiased estimation 501(5)
10.3 Unbiased estimation of error dispersion 506(4)
10.4 Maximum likelihood estimation 510(2)
10.4.1 Estimator of mean 510(1)
10.4.2 Estimator of error dispersion 510(2)
10.4.3 REML estimator of error dispersion 512(1)
10.5 Effect of linear restrictions 512(3)
10.5.1 Effect on estimable LPFs, LZFs and BLUEs 512(1)
10.5.2 Change in error sum of squares and products 513(1)
10.5.3 Change in `BLUE' and MSE matrix 514(1)
10.6 Tests of linear hypotheses 515(12)
10.6.1 Generalized likelihood ratio test 515(2)
10.6.2 Roy's union-intersection test 517(1)
10.6.3 Other tests 518(6)
10.6.4 A more general hypothesis 524(2)
10.6.5 Multiple comparisons 526(1)
10.6.6 Test for additional information 526(1)
10.7 Linear prediction and confidence regions 527(3)
10.8 Applications 530(3)
10.8.1 One-sample problem 530(1)
10.8.2 Two-sample problem 530(1)
10.8.3 Multivariate ANOVA 531(1)
10.8.4 Growth models 532(1)
10.9 Complements/Exercises 533(4)
11 Linear Inference --- Other Perspectives 537(46)
11.1 Foundations of linear inference 538(14)
11.1.1 General theory 538(5)
11.1.2 Basis set of BLUEs 543(2)
11.1.3 A decomposition of the response 545(5)
11.1.4 Estimation and error spaces 550(2)
11.2 Admissible, Bayes and minimax linear estimators 552(13)
11.2.1 Admissible linear estimator 552(3)
11.2.2 Bayes linear estimator 555(3)
11.2.3 Minimax linear estimator 558(7)
11.3 Other linear estimators 565(2)
11.3.1 Biased estimators revisited 565(1)
11.3.2 Best linear minimum bias estimator 565(2)
11.3.3 `Consistent' estimator 567(1)
11.4 A geometric view of BLUE in the linear model 567(8)
11.4.1 The homoscedastic case 568(1)
11.4.2 The effect of linear restrictions 569(1)
11.4.3 The general linear model 570(5)
11.5 Large sample properties of estimators 575(4)
11.6 Complements/Exercises 579(4)
Appendix A Matrix and Vector Preliminaries 583(34)
A.1 Matrices and vectors 583(7)
A.2 Generalized inverse 590(3)
A.3 Vector space and projection 593(4)
A.4 Column space 597(6)
A.5 Matrix decompositions 603(6)
A.6 Löwner order 609(1)
A.7 Solutions of linear equations 610(1)
A.8 Optimization of quadratic forms and functions 611(3)
A.9 Complements/Exercises 614(3)
Appendix B Review of Statistical Theory 617(20)
B.1 Basic concepts of inference 617(4)
B.2 Point estimation 621(7)
B.3 Bayesian estimation 628(3)
B.4 Tests of hypotheses 631(3)
B.5 Confidence region 634(1)
B.6 Complements/Exercises 635(2)
Appendix C Solutions to Selected Exercises 637(76)
Bibliography and Author Index 713(22)
Index 735