
E-book: Methods of Statistical Model Estimation

(University of Melbourne, Parkville, Australia)
  • Extent: 255 pages
  • Publication date: 19-Apr-2016
  • Publisher: Chapman & Hall/CRC
  • Language: English
  • ISBN-13: 9781439858035
  • Format: PDF+DRM
  • Price: €80.59*
  • * The price is final, i.e. no further discounts apply
  • This e-book is for personal use only. E-books cannot be returned.

DRM restrictions

  • Copying (copy/paste):

    not permitted

  • Printing:

    not permitted

  • Usage:

    Digital Rights Management (DRM)
    The publisher has issued this e-book in encrypted form, which means that you must install special software to read it. You also need to create an Adobe ID. More information here. The e-book can be read by 1 user and downloaded to up to 6 devices (all authorized with the same Adobe ID).

    Required software
    To read on a mobile device (phone or tablet), install this free app: PocketBook Reader (iOS / Android)

    To read on a PC or Mac, install Adobe Digital Editions. (This is a free application designed specifically for reading e-books. It should not be confused with Adobe Reader, which is probably already installed on your computer.)

    This e-book cannot be read on an Amazon Kindle.

"Preface Methods of Statistical Model Estimation has been written to develop a particular pragmatic viewpoint of statistical modelling. Our goal has been to try to demonstrate the unity that underpins statistical parameter estimation for a wide range of models. We have sought to represent the techniques and tenets of statistical modelling using executable computer code. Our choice does not preclude the use of explanatory text, equations, or occasional pseudo-code. However, we have written computer code that is motivated by pedagogic considerations first and foremost. An example is in the development of a single function to compute deviance residuals in Chapter 4. We defer the details to Section 4.7, but mention here that deviance residuals are an important model diagnostic tool for GLMs. Each distribution in the exponential family has its own deviance residual, defined by the likelihood. Many statistical books will present tables of equations for computing each of these residuals. Rather than develop a unique function for each distribution, we prefer to present a single function that calls the likelihood appropriately itself. This single function replaces five or six, and in so doing, demonstrates the unity that underpins GLM. Of course, the code is less efficient and less stable than a direct representation of the equations would be, but our goal is clarity rather than speed or stability. This book also provides guidelines to enable statisticians and researchers from across disciplines to more easily program their own statistical models using R. R, more than any other statistical application, is driven by the contributions of researchers who have developed scripts, functions, and complete packages for the use of others in the general research community"--
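The single-function idea described in the preface rests on the general definition of the deviance residual, d_i = sign(y_i - mu_i) * sqrt(2 * (l(y_i; y_i) - l(mu_i; y_i))), which depends on the family only through its log-likelihood l. The book's own implementation is in R; as a hedged, language-neutral illustration of the same idea (hypothetical code, not taken from the book), a Python sketch might look like:

```python
import math

def deviance_residuals(y, mu, loglik):
    """One function for every exponential-family member: the family
    enters only through loglik(mu_i, y_i), the per-observation
    log-likelihood; the saturated fit substitutes y_i for mu_i."""
    res = []
    for yi, mi in zip(y, mu):
        d2 = 2.0 * (loglik(yi, yi) - loglik(mi, yi))
        # d2 is non-negative in exact arithmetic; guard against rounding
        res.append(math.copysign(math.sqrt(max(d2, 0.0)), yi - mi))
    return res

def loglik_pois(mu, y):
    """Poisson contribution y*log(mu) - mu, dropping the -log(y!) constant
    and taking 0*log(0) as 0 (illustrative helper, not the book's code)."""
    if mu == 0:
        return 0.0 if y == 0 else float("-inf")
    return y * math.log(mu) - mu
```

Passing a different family's log-likelihood (e.g. the Gaussian kernel `-(y - mu)**2 / 2`, for which the deviance residual reduces to the raw residual y - mu) reuses the same function, which is the unity the authors point to.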

Methods of Statistical Model Estimation examines the most important and popular methods used to estimate parameters for statistical models and provide informative model summary statistics. Designed for R users, the book is also ideal for anyone wanting to better understand the algorithms used for statistical model fitting.

The text presents algorithms for the estimation of a variety of regression procedures using maximum likelihood estimation, iteratively reweighted least squares regression, the EM algorithm, and MCMC sampling. Fully developed, working R code is constructed for each method. The book starts with OLS regression and generalized linear models, building to two-parameter maximum likelihood models for both pooled and panel models. It then covers a random effects model estimated using the EM algorithm and concludes with a Bayesian Poisson model using Metropolis-Hastings sampling.
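To give a flavour of the first of those methods: for the canonical logit link, IRLS coincides with Newton-Raphson on the log-likelihood, repeatedly solving the weighted system X'WX delta = X'(y - p). The book develops this in R; the following is a minimal pure-Python sketch for an intercept-plus-slope logistic model (hypothetical illustrative code, not the book's implementation):

```python
import math

def irls_logistic(x, y, iters=25):
    """IRLS for a two-parameter (intercept + slope) logistic regression.
    Each pass accumulates the score U = X'(y - p) and the information
    J = X'WX with W = diag(p*(1-p)), then takes the Newton step J^-1 U."""
    b0, b1 = 0.0, 0.0
    for _ in range(iters):
        u0 = u1 = j00 = j01 = j11 = 0.0
        for xi, yi in zip(x, y):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * xi)))
            w = p * (1.0 - p)
            u0 += yi - p
            u1 += xi * (yi - p)
            j00 += w
            j01 += w * xi
            j11 += w * xi * xi
        # Solve the 2x2 system J * delta = U directly
        det = j00 * j11 - j01 * j01
        d0 = (j11 * u0 - j01 * u1) / det
        d1 = (j00 * u1 - j01 * u0) / det
        b0, b1 = b0 + d0, b1 + d1
        if abs(d0) + abs(d1) < 1e-10:
            break
    return b0, b1
```

At convergence the score vector is zero, which is the defining property of the maximum likelihood estimate; generalizing this loop to a full design matrix and to other families is essentially what the book's GLM chapters walk through.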

The book's coverage is innovative in several ways. First, the authors use executable computer code to present and connect the theoretical content. Therefore, code is written for clarity of exposition rather than stability or speed of execution. Second, the book focuses on the performance of statistical estimation and downplays algebraic niceties. In both senses, this book is written for people who wish to fit statistical models and understand them.


Reviews

"This book is a concise volume of statistical methods associated with parametric models. With a rich set of R codes, the book contains full demonstration of how to apply the parametric statistical models to obtain desired results of analyses with minimal theoretical details. ... a useful reference book for a graduate course on statistical models using a standard textbook ... many illustrative samples are truly easy to understand. This book is also handy for understanding the algorithm used in statistical model fitting, using the R programming language." -Jae-kwang Kim, Biometrics, March 2014

Table of Contents

Preface
1 Programming and R
1.1 Introduction
1.2 R Specifics
1.2.1 Objects
1.2.1.1 Vectors
1.2.1.2 Subsetting
1.2.2 Container Objects
1.2.2.1 Lists
1.2.2.2 Dataframes
1.2.3 Functions
1.2.3.1 Arguments
1.2.3.2 Body
1.2.3.3 Environments and Scope
1.2.4 Matrices
1.2.5 Probability Families
1.2.6 Flow Control
1.2.6.1 Conditional Execution
1.2.6.2 Loops
1.2.7 Numerical Optimization
1.3 Programming
1.3.1 Programming Style
1.3.2 Debugging
1.3.2.1 Debugging in Batch
1.3.3 Object-Oriented Programming
1.3.4 S3 Classes
1.4 Making R Packages
1.4.1 Building a Package
1.4.2 Testing
1.4.3 Installation
1.5 Further Reading
1.6 Exercises
2 Statistics and Likelihood-Based Estimation
2.1 Introduction
2.2 Statistical Models
2.3 Maximum Likelihood Estimation
2.3.1 Process
2.3.2 Estimation
2.3.2.1 Exponential Family
2.3.3 Properties
2.4 Interval Estimates
2.4.1 Wald Intervals
2.4.2 Inverting the LRT: Profile Likelihood
2.4.3 Nuisance Parameters
2.5 Simulation for Fun and Profit
2.5.1 Pseudo-Random Number Generators
2.6 Exercises
3 Ordinary Regression
3.1 Introduction
3.2 Least-Squares Regression
3.2.1 Properties
3.2.2 Matrix Representation
3.2.3 QR Decomposition
3.2.4 Example
3.3 Maximum-Likelihood Regression
3.4 Infrastructure
3.4.1 Easing Model Specification
3.4.2 Missing Data
3.4.3 Link Function
3.4.4 Initializing the Search
3.4.5 Making Failure Informative
3.4.6 Reporting Asymptotic SE and CI
3.4.7 The Regression Function
3.4.8 S3 Classes
3.4.8.1 Print
3.4.8.2 Fitted Values
3.4.8.3 Residuals
3.4.8.4 Diagnostics
3.4.8.5 Metrics of Fit
3.4.8.6 Presenting a Summary
3.4.9 Example Redux
3.4.10 Follow-up
3.5 Conclusion
3.6 Exercises
4 Generalized Linear Models
4.1 Introduction
4.2 GLM: Families and Terms
4.3 The Exponential Family
4.4 The IRLS Fitting Algorithm
4.5 Bernoulli or Binary Logistic Regression
4.5.1 IRLS
4.6 Grouped Binomial Models
4.7 Constructing a GLM Function
4.7.1 A Summary Function
4.7.2 Other Link Functions
4.8 GLM Negative Binomial Model
4.9 Offsets
4.10 Dispersion, Over- and Under-
4.11 Goodness-of-Fit and Residual Analysis
4.11.1 Goodness-of-Fit
4.11.2 Residual Analysis
4.12 Weights
4.13 Conclusion
4.14 Exercises
5 Maximum Likelihood Estimation
5.1 Introduction
5.2 MLE for GLM
5.2.1 The Log-Likelihood
5.2.2 Parameter Estimation
5.2.3 Residuals
5.2.4 Deviance
5.2.5 Initial Values
5.2.6 Printing the Object
5.2.7 GLM Function
5.2.8 Fitting for a New Family
5.3 Two-Parameter MLE
5.3.1 The Log-Likelihood
5.3.2 Parameter Estimation
5.3.3 Deviance and Deviance Residuals
5.3.4 Initial Values
5.3.5 Printing and Summarizing the Object
5.3.6 GLM Function
5.3.7 Building on the Model
5.3.8 Fitting for a New Family
5.4 Exercises
6 Panel Data
6.1 What Is a Panel Model?
6.1.1 Fixed- or Random-Effects Models
6.2 Fixed-Effects Model
6.2.1 Unconditional Fixed-Effects Models
6.2.2 Conditional Fixed-Effects Models
6.2.3 Coding a Conditional Fixed-Effects Negative Binomial
6.3 Random-Intercept Model
6.3.1 Random-Effects Models
6.3.2 Coding a Random-Intercept Gaussian Model
6.4 Handling More Advanced Models
6.5 The EM Algorithm
6.5.1 A Simple Example
6.5.2 The Random-Intercept Model
6.6 Further Reading
6.7 Exercises
7 Model Estimation Using Simulation
7.1 Simulation: Why and When?
7.2 Synthetic Statistical Models
7.2.1 Developing Synthetic Models
7.2.2 Monte Carlo Estimation
7.2.3 Reference Distributions
7.3 Bayesian Parameter Estimation
7.3.1 Gibbs Sampling
7.4 Discussion
7.5 Exercises
Bibliography
Index
Joseph M. Hilbe is a Solar System Ambassador with NASA's Jet Propulsion Laboratory at the California Institute of Technology, an adjunct professor of statistics at Arizona State University, and an Emeritus Professor at the University of Hawaii. An elected fellow of the American Statistical Association and elected member (fellow) of the International Statistical Institute, Professor Hilbe is president of the International Astrostatistics Association, editor-in-chief of two book series, and currently on the editorial boards of six journals in statistics and mathematics. He has authored twelve statistics texts, including Logistic Regression Models, two editions of the bestseller Negative Binomial Regression, and two editions of Generalized Estimating Equations (with J. Hardin).

Andrew P. Robinson is Deputy Director of the Australian Centre for Excellence in Risk Analysis with the Department of Mathematics and Statistics at the University of Melbourne. He has coauthored the popular Forest Analytics with R and the best-selling Introduction to Scientific Programming and Simulation using R. Dr. Robinson is the author of "IcebreakeR," a well-received introduction to R that is freely available online. With Professor Hilbe, he authored the COUNT and msme R packages, both available on CRAN. He has also presented at numerous workshops on R programming to the scientific community.