
E-book: Probability and Bayesian Modeling

Jim Albert (Emeritus Professor at Bowling Green State University), Jingchen Hu
  • Format: EPUB+DRM
  • Price: €123.49*
  • * the price is final, i.e., no further discounts apply
  • This e-book is intended for personal use only. E-books cannot be returned.

DRM restrictions

  • Copying (copy/paste):

    not allowed

  • Printing:

    not allowed

  • Usage:

    Digital rights management (DRM)
    The publisher has issued this e-book in encrypted form, which means that you must install special software to read it. You will also need to create an Adobe ID. More information here. The e-book can be read by 1 user and downloaded to up to 6 devices (all authorized with the same Adobe ID).

    Required software
    To read on a mobile device (phone or tablet), install this free app: PocketBook Reader (iOS / Android)

    To read on a PC or Mac, install Adobe Digital Editions (this is a free application designed specifically for reading e-books. It should not be confused with Adobe Reader, which is probably already installed on your computer).

    This e-book cannot be read on an Amazon Kindle.

Probability and Bayesian Modeling is an introduction to probability and Bayesian thinking for undergraduate students with a calculus background. The first part of the book provides a broad view of probability, including foundations, conditional probability, discrete and continuous distributions, and joint distributions. Statistical inference is presented entirely from a Bayesian perspective. The text introduces inference and prediction for a single proportion and for a single mean from Normal sampling. After the fundamentals of Markov chain Monte Carlo algorithms are introduced, Bayesian inference is described for hierarchical and regression models, including logistic regression. The book presents several case studies motivated by historical Bayesian studies and the authors' research.
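To give a flavor of the single-proportion inference described above: with a beta prior and binomial data, the posterior is again a beta distribution, and posterior summaries can be simulated directly. The sketch below uses Python's standard library as a neutral illustration (the book itself works in R); the function name and the Beta(1, 1) prior are illustrative choices, not taken from the text.

```python
import random

def beta_binomial_update(a, b, successes, failures):
    """Conjugate update: a Beta(a, b) prior combined with binomial data
    yields a Beta(a + successes, b + failures) posterior."""
    return a + successes, b + failures

# Example: flat Beta(1, 1) prior, then observe 12 successes in 20 trials.
a_post, b_post = beta_binomial_update(1, 1, 12, 8)   # -> Beta(13, 9)

# Simulate draws from the posterior and summarize, as the book does
# throughout its Bayesian chapters.
draws = [random.betavariate(a_post, b_post) for _ in range(10_000)]
post_mean = sum(draws) / len(draws)  # near 13 / 22, about 0.59
```

The same simulate-and-summarize pattern extends to credible intervals (sort the draws and read off quantiles) and to prediction.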

This text reflects modern Bayesian statistical practice. Simulation is introduced in all the probability chapters and is used extensively in the Bayesian material to simulate from the posterior and predictive distributions. One chapter describes the basic tenets of the Metropolis and Gibbs sampling algorithms; however, several chapters introduce the fundamentals of Bayesian inference for conjugate priors to deepen understanding. Strategies for constructing prior distributions are described both for situations where one has substantial prior information and for cases where prior knowledge is weak. One chapter introduces hierarchical Bayesian modeling as a practical way of combining data from different groups. There is an extensive discussion of Bayesian regression models, including the construction of informative priors, inference about functions of the parameters of interest, prediction, and model selection.
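The Metropolis algorithm mentioned above is, at its core, a short random-walk loop: propose a move, and accept it with probability proportional to the ratio of posterior densities. The following minimal Python sketch (an independent illustration, not code from the book) targets a standard normal density for concreteness:

```python
import math
import random

def metropolis(log_post, start, scale, n_draws):
    """Random-walk Metropolis sampler.

    Propose x' ~ Normal(x, scale); accept with probability
    min(1, p(x') / p(x)), computed on the log scale for stability.
    """
    draws, x = [], start
    lp = log_post(x)
    for _ in range(n_draws):
        proposal = x + random.gauss(0.0, scale)
        lp_prop = log_post(proposal)
        if math.log(random.random()) < lp_prop - lp:
            x, lp = proposal, lp_prop   # accept the move
        draws.append(x)                  # on rejection, repeat current x
    return draws

# Target: standard normal log-density, up to an additive constant.
draws = metropolis(lambda x: -0.5 * x * x, start=0.0, scale=1.0,
                   n_draws=20_000)
sample_mean = sum(draws) / len(draws)    # near 0 for this target
```

Gibbs sampling replaces the propose/accept step with exact draws from each full conditional distribution in turn.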

The text uses JAGS (Just Another Gibbs Sampler) as a general-purpose computational method for simulating from posterior distributions for a variety of Bayesian models. An R package, ProbBayes, is available containing all of the book's datasets and special functions for illustrating concepts from the book.
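For readers unfamiliar with JAGS: a model is written in its own declarative language and handed to a sampler, rather than coded by hand. The fragment below is a generic sketch of a normal sampling model, not taken from the book; the variable and hyperparameter names are illustrative, and note that JAGS parameterizes dnorm by mean and precision rather than variance.

```
model {
  for (i in 1:n) {
    y[i] ~ dnorm(mu, phi)    # likelihood: observations with precision phi
  }
  mu ~ dnorm(mu0, phi0)      # prior on the mean
  phi ~ dgamma(a, b)         # prior on the precision
}
```

Given data and prior hyperparameters, JAGS returns simulated draws from the joint posterior of mu and phi, which are then summarized in R.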

A complete solutions manual is available in the Additional Resources section for instructors who adopt the book.

Reviews

"The book can be used by upper undergraduate and graduate students as well as researchers and practitioners in statistics and data science from all disciplines. A background of calculus is required for the reader, but no experience in programming is needed. The writing style of the book is extremely reader friendly. It provides numerous illustrative examples, valuable resources, a rich collection of materials, and a memorable learning experience." ~Technometrics

"Over many years, I have wondered about the following: Should a first undergraduate course in statistics be a Bayesian course? After reading this book, I have come to the conclusion that the answer is yes! ... this is a very well written textbook that can also be used as self-learning material for practitioners. It presents a clear, accessible, and entertaining account of the interplay of probability, computations, and statistical inference from the Bayesian perspective." ~ISCB News

Table of Contents

Preface
1 Probability: A Measurement of Uncertainty
1.1 Introduction
1.2 The Classical View of a Probability
1.3 The Frequency View of a Probability
1.4 The Subjective View of a Probability
1.5 The Sample Space
1.6 Assigning Probabilities
1.7 Events and Event Operations
1.8 The Three Probability Axioms
1.9 The Complement and Addition Properties
1.10 Exercises
2 Counting Methods
2.1 Introduction: Rolling Dice, Yahtzee, and Roulette
2.2 Equally Likely Outcomes
2.3 The Multiplication Counting Rule
2.4 Permutations
2.5 Combinations
2.6 Arrangements of Non-Distinct Objects
2.7 Playing Yahtzee
2.8 Exercises
3 Conditional Probability
3.1 Introduction: The Three Card Problem
3.2 In Everyday Life
3.3 In a Two-Way Table
3.4 Definition and the Multiplication Rule
3.5 The Multiplication Rule under Independence
3.6 Learning Using Bayes' Rule
3.7 R Example: Learning about a Spinner
3.8 Exercises
4 Discrete Distributions
4.1 Introduction: The Hat Check Problem
4.2 Random Variable and Probability Distribution
4.3 Summarizing a Probability Distribution
4.4 Standard Deviation of a Probability Distribution
4.5 Coin-Tossing Distributions
4.5.1 Binomial probabilities
4.5.2 Binomial computations
4.5.3 Mean and standard deviation of a binomial
4.5.4 Negative binomial experiments
4.6 Exercises
5 Continuous Distributions
5.1 Introduction: A Baseball Spinner Game
5.2 The Uniform Distribution
5.3 Probability Density: Waiting for a Bus
5.4 The Cumulative Distribution Function
5.5 Summarizing a Continuous Random Variable
5.6 Normal Distribution
5.7 Binomial Probabilities and the Normal Curve
5.8 Sampling Distribution of the Mean
5.9 Exercises
6 Joint Probability Distributions
6.1 Introduction
6.2 Joint Probability Mass Function: Sampling from a Box
6.3 Multinomial Experiments
6.4 Joint Density Functions
6.5 Independence and Measuring Association
6.6 Flipping a Random Coin: The Beta-Binomial Distribution
6.7 Bivariate Normal Distribution
6.8 Exercises
7 Learning about a Binomial Probability
7.1 Introduction: Thinking Subjectively about a Proportion
7.2 Bayesian Inference with Discrete Priors
7.2.1 Example: students' dining preference
7.2.2 Discrete prior distributions for proportion p
7.2.3 Likelihood of proportion p
7.2.4 Posterior distribution for proportion p
7.2.5 Inference: students' dining preference
7.2.6 Discussion: using a discrete prior
7.3 Continuous Priors
7.3.1 The beta distribution and probabilities
7.3.2 Choosing a beta density to represent prior opinion
7.4 Updating the Beta Prior
7.4.1 Bayes' rule calculation
7.4.2 From beta prior to beta posterior: conjugate priors
7.5 Bayesian Inferences with Continuous Priors
7.5.1 Bayesian hypothesis testing
7.5.2 Bayesian credible intervals
7.5.3 Bayesian prediction
7.6 Predictive Checking
7.7 Exercises
8 Modeling Measurement and Count Data
8.1 Introduction
8.2 Modeling Measurements
8.2.1 Examples
8.2.2 The general approach
8.2.3 Outline of chapter
8.3 Bayesian Inference with Discrete Priors
8.3.1 Example: Roger Federer's time-to-serve
8.3.2 Simplification of the likelihood
8.3.3 Inference: Federer's time-to-serve
8.4 Continuous Priors
8.4.1 The normal prior for mean μ
8.4.2 Choosing a normal prior
8.5 Updating the Normal Prior
8.5.1 Introduction
8.5.2 A quick peek at the update procedure
8.5.3 Bayes' rule calculation
8.5.4 Conjugate normal prior
8.6 Bayesian Inferences for Continuous Normal Mean
8.6.1 Bayesian hypothesis testing and credible interval
8.6.2 Bayesian prediction
8.7 Posterior Predictive Checking
8.8 Modeling Count Data
8.8.1 Examples
8.8.2 The Poisson distribution
8.8.3 Bayesian inferences
8.8.4 Case study: Learning about website counts
8.9 Exercises
9 Simulation by Markov Chain Monte Carlo
9.1 Introduction
9.1.1 The Bayesian computation problem
9.1.2 Choosing a prior
9.1.3 The two-parameter normal problem
9.1.4 Overview of the chapter
9.2 Markov Chains
9.2.1 Definition
9.2.2 Some properties
9.2.3 Simulating a Markov chain
9.3 The Metropolis Algorithm
9.3.1 Example: Walking on a number line
9.3.2 The general algorithm
9.3.3 A general function for the Metropolis algorithm
9.4 Example: Cauchy-Normal Problem
9.4.1 Choice of starting value and proposal region
9.4.2 Collecting the simulated draws
9.5 Gibbs Sampling
9.5.1 Bivariate discrete distribution
9.5.2 Beta-binomial sampling
9.5.3 Normal sampling - both parameters unknown
9.6 MCMC Inputs and Diagnostics
9.6.1 Burn-in, starting values, and multiple chains
9.6.2 Diagnostics
9.6.3 Graphs and summaries
9.7 Using JAGS
9.7.1 Normal sampling model
9.7.2 Multiple chains
9.7.3 Posterior predictive checking
9.7.4 Comparing two proportions
9.8 Exercises
10 Bayesian Hierarchical Modeling
10.1 Introduction
10.1.1 Observations in groups
10.1.2 Example: standardized test scores
10.1.3 Separate estimates?
10.1.4 Combined estimates?
10.1.5 A two-stage prior leading to compromise estimates
10.2 Hierarchical Normal Modeling
10.2.1 Example: ratings of animation movies
10.2.2 A hierarchical Normal model with random σ
10.2.3 Inference through MCMC
10.3 Hierarchical Beta-Binomial Modeling
10.3.1 Example: Deaths after heart attacks
10.3.2 A hierarchical beta-binomial model
10.3.3 Inference through MCMC
10.4 Exercises
11 Simple Linear Regression
11.1 Introduction
11.2 Example: Prices and Areas of House Sales
11.3 A Simple Linear Regression Model
11.4 A Weakly Informative Prior
11.5 Posterior Analysis
11.6 Inference through MCMC
11.7 Bayesian Inferences with Simple Linear Regression
11.7.1 Simulate fits from the regression model
11.7.2 Learning about the expected response
11.7.3 Prediction of future response
11.7.4 Posterior predictive model checking
11.8 Informative Prior
11.8.1 Standardization
11.8.2 Prior distributions
11.8.3 Posterior Analysis
11.9 A Conditional Means Prior
11.10 Exercises
12 Bayesian Multiple Regression and Logistic Models
12.1 Introduction
12.2 Bayesian Multiple Linear Regression
12.2.1 Example: expenditures of U.S. households
12.2.2 A multiple linear regression model
12.2.3 Weakly informative priors and inference through MCMC
12.2.4 Prediction
12.3 Comparing Regression Models
12.4 Bayesian Logistic Regression
12.4.1 Example: U.S. women labor participation
12.4.2 A logistic regression model
12.4.3 Conditional means priors and inference through MCMC
12.4.4 Prediction
12.5 Exercises
13 Case Studies
13.1 Introduction
13.2 Federalist Papers Study
13.2.1 Introduction
13.2.2 Data on word use
13.2.3 Poisson density sampling
13.2.4 Negative binomial sampling
13.2.5 Comparison of rates for two authors
13.2.6 Which words distinguish the two authors?
13.3 Career Trajectories
13.3.1 Introduction
13.3.2 Measuring hitting performance in baseball
13.3.3 A hitter's career trajectory
13.3.4 Estimating a single trajectory
13.3.5 Estimating many trajectories by a hierarchical model
13.4 Latent Class Modeling
13.4.1 Two classes of test takers
13.4.2 A latent class model with two classes
13.4.3 Disputed authorship of the Federalist Papers
13.5 Exercises
14 Appendices
14.1 Appendix A: The constant in the beta posterior
14.2 Appendix B: The posterior predictive distribution
14.3 Appendix C: Comparing Bayesian models
Bibliography
Index
Jim Albert is a Distinguished University Professor of Statistics at Bowling Green State University. His research interests include Bayesian modeling and applications of statistical thinking in sports. He has authored or coauthored several books including Ordinal Data Modeling, Bayesian Computation with R, and Workshop Statistics: Discovery with Data, A Bayesian Approach.

Jingchen (Monika) Hu is an Assistant Professor of Mathematics and Statistics at Vassar College. She teaches an undergraduate-level Bayesian Statistics course at Vassar, which is shared online across several liberal arts colleges. Her research focuses on dealing with data privacy issues by releasing synthetic data.