
Regression Analysis by Example 5th edition [Hardback]

Samprit Chatterjee (New York University), Ali S. Hadi (Cornell University, Ithaca)
  • Format: Hardback, 424 pages, height x width x thickness: 259x183x31 mm, weight: 907 g, Tables: 70 B&W, 0 Color; Graphs: 105 B&W, 0 Color
  • Series: Wiley Series in Probability and Statistics
  • Publication date: 05-Oct-2012
  • Publisher: John Wiley & Sons Inc
  • ISBN-10: 0470905840
  • ISBN-13: 9780470905845
  • Hardback
  • Price: 158,50 €*
  • * We will send you an offer for a used copy; its price may differ from the price shown on the website.
  • This book is out of print, but we will send you an offer for a used copy.
Praise for the Fourth Edition:

"This book is . . . an excellent source of examples for regression analysis. It has been and still is readily readable and understandable."

(Journal of the American Statistical Association)

Regression analysis is a conceptually simple method for investigating relationships among variables. Carrying out a successful application of regression analysis, however, requires a balance of theoretical results, empirical rules, and subjective judgment. Regression Analysis by Example, Fifth Edition has been expanded and thoroughly updated to reflect recent advances in the field. The emphasis continues to be on exploratory data analysis rather than statistical theory. The book offers in-depth treatment of regression diagnostics, transformation, multicollinearity, logistic regression, and robust regression.
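
As a taste of how one of these topics looks in practice, here is a minimal R sketch of a logistic regression fit, loosely in the spirit of the book's bankruptcy example (Section 12.4); the simulated data frame firms and its ratio columns are hypothetical placeholders, not the book's actual data:

    # Hypothetical stand-in data: two financial ratios and a binary failure flag.
    set.seed(1)
    n <- 100
    ratio1 <- rnorm(n)                                   # e.g., a liquidity ratio
    ratio2 <- rnorm(n)                                   # e.g., a profitability ratio
    failed <- rbinom(n, 1, plogis(-0.5 + 1.2 * ratio1))  # simulated outcome
    firms  <- data.frame(failed, ratio1, ratio2)

    # family = binomial makes glm() fit a logistic (logit) regression.
    fit <- glm(failed ~ ratio1 + ratio2, data = firms, family = binomial)
    summary(fit)                           # coefficients on the log-odds scale
    head(predict(fit, type = "response"))  # fitted probabilities of failure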

The book now includes a new chapter on the detection and correction of multicollinearity, while also showcasing the use of the discussed methods on newly added data sets from the fields of engineering, medicine, and business. The Fifth Edition also explores additional topics, including:

  • Surrogate ridge regression
  • Fitting nonlinear models
  • Errors in variables
  • ANOVA for designed experiments
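
Surrogate ridge regression is a specialized variant developed in Appendix 10.C; as a rough illustration of the ordinary ridge regression it builds on (Sections 10.11-10.13), the sketch below uses MASS::lm.ridge in R on simulated collinear data rather than the book's data sets:

    library(MASS)  # provides lm.ridge()

    # Simulate two nearly collinear predictors.
    set.seed(2)
    x1 <- rnorm(50)
    x2 <- x1 + rnorm(50, sd = 0.05)  # almost a copy of x1
    y  <- 1 + 2 * x1 + rnorm(50)
    d  <- data.frame(y, x1, x2)

    # Fit across a grid of ridge penalties and inspect the ridge trace.
    fit <- lm.ridge(y ~ x1 + x2, data = d, lambda = seq(0, 10, by = 0.5))
    plot(fit)    # coefficient paths: unstable estimates shrink and stabilize
    select(fit)  # HKB, L-W, and GCV suggestions for the penalty lambda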

Methods of regression analysis are clearly demonstrated, and examples containing the types of irregularities commonly encountered in the real world are provided. Each example isolates one or two techniques and features a detailed discussion, the required assumptions, and an evaluation of how well each technique succeeds. Additionally, the methods described throughout the book can be carried out with most of the currently available statistical software packages, such as R.
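
For instance, here is a minimal sketch of that fit-then-diagnose workflow in base R, using the built-in cars data as a stand-in for the book's data sets:

    # Simple linear regression: stopping distance on speed (built-in cars data).
    fit <- lm(dist ~ speed, data = cars)
    summary(fit)               # coefficients, t-tests, R-squared

    # Diagnostics of the kind treated in Chapter 4:
    head(rstandard(fit))       # standardized residuals
    head(hatvalues(fit))       # leverage of each observation
    head(cooks.distance(fit))  # Cook's distance (influence)
    plot(fit, which = 1:2)     # residuals vs. fitted values; normal Q-Q plot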

Regression Analysis by Example, Fifth Edition is suitable for anyone with an understanding of elementary statistics.

Reviews

"The text is suitable for anyone with an understanding of elementary statistics." (Zentralblatt MATH, 1 July 2013)

"All in all, here we have a nice and valuable up-to-date book showing examples how the famous ever-lasting regression analysis works with the data. No doubt, this book will continue to be frequently used in statistics classrooms." (International Statistical Review, 15 February 2013)

Table of Contents

Preface xiii
1 Introduction 1(24)
1.1 What Is Regression Analysis? 1(1)
1.2 Publicly Available Data Sets 2(1)
1.3 Selected Applications of Regression Analysis 3(10)
1.3.1 Agricultural Sciences 3(1)
1.3.2 Industrial and Labor Relations 4(2)
1.3.3 Government 6(1)
1.3.4 History 6(3)
1.3.5 Environmental Sciences 9(1)
1.3.6 Industrial Production 9(3)
1.3.7 The Space Shuttle Challenger 12(1)
1.3.8 Cost of Health Care 12(1)
1.4 Steps in Regression Analysis 13(8)
1.4.1 Statement of the Problem 13(2)
1.4.2 Selection of Potentially Relevant Variables 15(1)
1.4.3 Data Collection 15(1)
1.4.4 Model Specification 16(3)
1.4.5 Method of Fitting 19(1)
1.4.6 Model Fitting 19(1)
1.4.7 Model Criticism and Selection 19(1)
1.4.8 Objectives of Regression Analysis 20(1)
1.5 Scope and Organization of the Book 21(4)
Exercises 23(2)
2 Simple Linear Regression 25(32)
2.1 Introduction 25(1)
2.2 Covariance and Correlation Coefficient 25(5)
2.3 Example: Computer Repair Data 30(2)
2.4 The Simple Linear Regression Model 32(1)
2.5 Parameter Estimation 33(3)
2.6 Tests of Hypotheses 36(5)
2.7 Confidence Intervals 41(1)
2.8 Predictions 41(2)
2.9 Measuring the Quality of Fit 43(3)
2.10 Regression Line Through the Origin 46(2)
2.11 Trivial Regression Models 48(1)
2.12 Bibliographic Notes 49(8)
Exercises 49(8)
3 Multiple Linear Regression 57(36)
3.1 Introduction 57(1)
3.2 Description of the Data and Model 57(1)
3.3 Example: Supervisor Performance Data 58(1)
3.4 Parameter Estimation 59(3)
3.5 Interpretations of Regression Coefficients 62(2)
3.6 Centering and Scaling 64(3)
3.6.1 Centering and Scaling in Intercept Models 65(1)
3.6.2 Scaling in No-Intercept Models 66(1)
3.7 Properties of the Least Squares Estimators 67(1)
3.8 Multiple Correlation Coefficient 68(1)
3.9 Inference for Individual Regression Coefficients 69(2)
3.10 Tests of Hypotheses in a Linear Model 71(10)
3.10.1 Testing All Regression Coefficients Equal to Zero 73(2)
3.10.2 Testing a Subset of Regression Coefficients Equal to Zero 75(3)
3.10.3 Testing the Equality of Regression Coefficients 78(1)
3.10.4 Estimating and Testing of Regression Parameters Under Constraints 79(2)
3.11 Predictions 81(1)
3.12 Summary 82(11)
Exercises 82(7)
Appendix: Multiple Regression in Matrix Notation 89(4)
4 Regression Diagnostics: Detection of Model Violations 93(36)
4.1 Introduction 93(1)
4.2 The Standard Regression Assumptions 94(2)
4.3 Various Types of Residuals 96(2)
4.4 Graphical Methods 98(3)
4.5 Graphs Before Fitting a Model 101(4)
4.5.1 One-Dimensional Graphs 101(1)
4.5.2 Two-Dimensional Graphs 101(3)
4.5.3 Rotating Plots 104(1)
4.5.4 Dynamic Graphs 104(1)
4.6 Graphs After Fitting a Model 105(1)
4.7 Checking Linearity and Normality Assumptions 105(1)
4.8 Leverage, Influence, and Outliers 106(5)
4.8.1 Outliers in the Response Variable 108(1)
4.8.2 Outliers in the Predictors 108(1)
4.8.3 Masking and Swamping Problems 108(3)
4.9 Measures of Influence 111(4)
4.9.1 Cook's Distance 111(1)
4.9.2 Welsch and Kuh Measure 112(1)
4.9.3 Hadi's Influence Measure 113(2)
4.10 The Potential-Residual Plot 115(1)
4.11 What to Do with the Outliers? 116(1)
4.12 Role of Variables in a Regression Equation 117(4)
4.12.1 Added-Variable Plot 117(1)
4.12.2 Residual Plus Component Plot 118(3)
4.13 Effects of an Additional Predictor 121(2)
4.14 Robust Regression 123(6)
Exercises 123(6)
5 Qualitative Variables as Predictors 129(34)
5.1 Introduction 129(1)
5.2 Salary Survey Data 130(3)
5.3 Interaction Variables 133(4)
5.4 Systems of Regression Equations 137(10)
5.4.1 Models with Different Slopes and Different Intercepts 138(7)
5.4.2 Models with Same Slope and Different Intercepts 145(1)
5.4.3 Models with Same Intercept and Different Slopes 146(1)
5.5 Other Applications of Indicator Variables 147(1)
5.6 Seasonality 148(2)
5.7 Stability of Regression Parameters Over Time 150(13)
Exercises 154(9)
6 Transformation of Variables 163(28)
6.1 Introduction 163(2)
6.2 Transformations to Achieve Linearity 165(2)
6.3 Bacteria Deaths Due to X-Ray Radiation 167(4)
6.3.1 Inadequacy of a Linear Model 168(2)
6.3.2 Logarithmic Transformation for Achieving Linearity 170(1)
6.4 Transformations to Stabilize Variance 171(5)
6.5 Detection of Heteroscedastic Errors 176(2)
6.6 Removal of Heteroscedasticity 178(1)
6.7 Weighted Least Squares 179(1)
6.8 Logarithmic Transformation of Data 180(1)
6.9 Power Transformation 181(4)
6.10 Summary 185(6)
Exercises 186(5)
7 Weighted Least Squares 191(18)
7.1 Introduction 191(1)
7.2 Heteroscedastic Models 192(3)
7.2.1 Supervisors Data 192(2)
7.2.2 College Expense Data 194(1)
7.3 Two-Stage Estimation 195(2)
7.4 Education Expenditure Data 197(9)
7.5 Fitting a Dose-Response Relationship Curve 206(3)
Exercises 208(1)
8 The Problem of Correlated Errors 209(24)
8.1 Introduction: Autocorrelation 209(1)
8.2 Consumer Expenditure and Money Stock 210(2)
8.3 Durbin-Watson Statistic 212(2)
8.4 Removal of Autocorrelation by Transformation 214(2)
8.5 Iterative Estimation with Autocorrelated Errors 216(1)
8.6 Autocorrelation and Missing Variables 217(1)
8.7 Analysis of Housing Starts 218(4)
8.8 Limitations of the Durbin-Watson Statistic 222(1)
8.9 Indicator Variables to Remove Seasonality 223(3)
8.10 Regressing Two Time Series 226(7)
Exercises 228(5)
9 Analysis of Collinear Data 233(26)
9.1 Introduction 233(1)
9.2 Effects of Collinearity on Inference 234(6)
9.3 Effects of Collinearity on Forecasting 240(5)
9.4 Detection of Collinearity 245(14)
9.4.1 Simple Signs of Collinearity 245(3)
9.4.2 Variance Inflation Factors 248(3)
9.4.3 The Condition Indices 251(4)
Exercises 255(4)
10 Working With Collinear Data 259(40)
10.1 Introduction 259(1)
10.2 Principal Components 259(4)
10.3 Computations Using Principal Components 263(2)
10.4 Imposing Constraints 265(3)
10.5 Searching for Linear Functions of the β's 268(3)
10.6 Biased Estimation of Regression Coefficients 271(1)
10.7 Principal Components Regression 272(2)
10.8 Reduction of Collinearity in the Estimation Data 274(2)
10.9 Constraints on the Regression Coefficients 276(1)
10.10 Principal Components Regression: A Caution 277(2)
10.11 Ridge Regression 279(2)
10.12 Estimation by the Ridge Method 281(5)
10.13 Ridge Regression: Some Remarks 286(1)
10.14 Summary 287(1)
10.15 Bibliographic Notes 287(12)
Exercises 288(4)
Appendix 10.A Principal Components 292(2)
Appendix 10.B Ridge Regression 294(2)
Appendix 10.C Surrogate Ridge Regression 296(3)
11 Variable Selection Procedures 299(36)
11.1 Introduction 299(1)
11.2 Formulation of the Problem 300(1)
11.3 Consequences of Variables Deletion 300(2)
11.4 Uses of Regression Equations 302(1)
11.4.1 Description and Model Building 302(1)
11.4.2 Estimation and Prediction 302(1)
11.4.3 Control 302(1)
11.5 Criteria for Evaluating Equations 303(3)
11.5.1 Residual Mean Square 303(1)
11.5.2 Mallows Cp 304(1)
11.5.3 Information Criteria 305(1)
11.6 Collinearity and Variable Selection 306(1)
11.7 Evaluating All Possible Equations 306(1)
11.8 Variable Selection Procedures 307(2)
11.8.1 Forward Selection Procedure 307(1)
11.8.2 Backward Elimination Procedure 308(1)
11.8.3 Stepwise Method 308(1)
11.9 General Remarks on Variable Selection Methods 309(1)
11.10 A Study of Supervisor Performance 310(4)
11.11 Variable Selection with Collinear Data 314(1)
11.12 The Homicide Data 314(3)
11.13 Variable Selection Using Ridge Regression 317(1)
11.14 Selection of Variables in an Air Pollution Study 318(8)
11.15 A Possible Strategy for Fitting Regression Models 326(2)
11.16 Bibliographic Notes 328(7)
Exercises 328(3)
Appendix: Effects of Incorrect Model Specifications 331(4)
12 Logistic Regression 335(24)
12.1 Introduction 335(1)
12.2 Modeling Qualitative Data 336(1)
12.3 The Logit Model 336(2)
12.4 Example: Estimating Probability of Bankruptcies 338(3)
12.5 Logistic Regression Diagnostics 341(1)
12.6 Determination of Variables to Retain 342(3)
12.7 Judging the Fit of a Logistic Regression 345(2)
12.8 The Multinomial Logit Model 347(7)
12.8.1 Multinomial Logistic Regression 347(1)
12.8.2 Example: Determining Chemical Diabetes 348(4)
12.8.3 Ordinal Logistic Regression 352(1)
12.8.4 Example: Determining Chemical Diabetes Revisited 353(1)
12.9 Classification Problem: Another Approach 354(5)
Exercises 355(4)
13 Further Topics 359(12)
13.1 Introduction 359(1)
13.2 Generalized Linear Model 359(1)
13.3 Poisson Regression Model 360(1)
13.4 Introduction of New Drugs 361(2)
13.5 Robust Regression 363(1)
13.6 Fitting a Quadratic Model 364(2)
13.7 Distribution of PCB in U.S. Bays 366(5)
Exercises 370(1)
Appendix A Statistical Tables 371(10)
References 381(8)
Index 389

SAMPRIT CHATTERJEE, PhD, is Professor Emeritus of Statistics at New York University. A Fellow of the American Statistical Association, Dr. Chatterjee has been a Fulbright scholar in both Kazakhstan and Mongolia. He is the coauthor of Sensitivity Analysis in Linear Regression and A Casebook for a First Course in Statistics and Data Analysis, both published by Wiley.

ALI S. HADI, PhD, is a Distinguished University Professor and former vice provost at the American University in Cairo (AUC). He is the founding Director of the Actuarial Science Program at AUC. He is also a Stephen H. Weiss Presidential Fellow and Professor Emeritus at Cornell University. Dr. Hadi is the author of four other books, a Fellow of the American Statistical Association, and an elected Member of the International Statistical Institute.