
E-book: Statistical Modeling and Inference for Social Science

(University of California, Berkeley)
  • Format - EPUB+DRM
  • Price: 30,86 €*
  • * The price is final, i.e. no further discounts apply
  • Add to cart
  • Add to wishlist
  • This e-book is intended for personal use only. E-books cannot be returned.

DRM restrictions

  • Copying (copy/paste): not allowed

  • Printing: not allowed

  • Usage:

    Digital rights management (DRM)
    The publisher has issued this e-book in encrypted form, which means that you must install special software to read it. You will also need to create an Adobe ID. More information here. The e-book can be read by 1 user and downloaded to up to 6 devices (all authorized with the same Adobe ID).

    Required software
    To read on a mobile device (phone or tablet), you need to install this free app: PocketBook Reader (iOS / Android).

    To read on a PC or Mac, you need to install Adobe Digital Editions (a free application designed specifically for reading e-books; not to be confused with Adobe Reader, which is probably already installed on your computer).

    This e-book cannot be read on an Amazon Kindle.

Written specifically for graduate students and practitioners beginning social science research, Statistical Modeling and Inference for Social Science covers the essential statistical tools, models and theories that make up the social scientist's toolkit. Assuming no prior knowledge of statistics, this textbook introduces students to probability theory, statistical inference and statistical modeling, and emphasizes the connection between statistical procedures and social science theory. Sean Gailmard develops core statistical theory as a set of tools to model and assess relationships between variables - the primary aim of social scientists - and demonstrates the ways in which social scientists express and test substantive theoretical arguments in various models. Chapter exercises guide students in applying concepts to data, extending their grasp of core theoretical concepts. Students will also gain the ability to create, read and critique statistical applications in their fields of interest.

Reviews

'With careful consideration for both rigor and intuition, Gailmard fills a large void in the social science literature. Those seeking clear mathematical exposition will not be disappointed. Those hoping for substantive applications to illuminate the data analysis will also be pleased. This book strikes a nearly perfect balance.' Wendy K. Tam Cho, National Center for Supercomputing Applications and University of Illinois, Urbana-Champaign

'This is the single best book on modeling in social science - it goes beyond any extant book and will without a doubt become the standard text in methods courses throughout the social sciences.' James N. Druckman, Payson S. Wild Professor of Political Science, Northwestern University, Illinois

'In Statistical Modeling and Inference for Social Science, Gailmard provides a complete and well-written review of statistical modeling from the modern perspective of causal inference. It provides all the material necessary for an introduction to quantitative methods for social science students.' Jonathan N. Katz, Kay Sugahara Professor of Social Sciences and Statistics, and Chair, Division of the Humanities and Social Sciences, California Institute of Technology

Other information

This textbook is an introduction to probability theory, statistical inference and statistical modeling for graduate students and practitioners beginning social science research.
List of Figures
xiii
List of Tables
xv
Acknowledgments
xvii
1 Introduction
1(11)
2 Descriptive Statistics: Data and Information
12(57)
2.1 Measurement
14(7)
2.1.1 Measurement Scales
14(3)
2.1.2 Index Construction
17(2)
2.1.3 Measurement Validity
19(2)
2.2 Univariate Distributions
21(12)
2.2.1 Sample Central Tendency
22(5)
2.2.2 Sample Dispersion
27(3)
2.2.3 Graphical Summaries: Histograms
30(3)
2.3 Bivariate Distributions
33(28)
2.3.1 Graphical Summaries: Scatterplots
34(2)
2.3.2 Numerical Summaries: Crosstabs
36(2)
2.3.3 Conditional Sample Mean
38(2)
2.3.4 Association between Variables: Covariance and Correlation
40(3)
2.3.5 Regression
43(9)
2.3.6 Multiple Regression
52(4)
2.3.7 Specifying Regression Models
56(5)
2.4 Conclusion
61(8)
3 Observable Data and Data-Generating Processes
69(14)
3.1 Data and Data-Generating Processes
70(3)
3.2 Sampling Uncertainty
73(1)
3.3 Theoretical Uncertainty
74(3)
3.4 Fundamental Uncertainty
77(1)
3.5 Randomness in DGPs and Observation of Social Events
78(1)
3.6 Stochastic DGPs and the Choice of Empirical Methodology
79(3)
3.7 Conclusion
82(1)
4 Probability Theory: Basic Properties of Data-Generating Processes
83(33)
4.1 Set-Theoretic Foundations
84(6)
4.1.1 Formal Definitions
84(2)
4.1.2 Probability Measures and Probability Spaces
86(1)
4.1.3 Ontological Interpretations of Probability
87(2)
4.1.4 Further Properties of Probability Measures
89(1)
4.2 Independence and Conditional Probability
90(8)
4.2.1 Examples and Simple Combinatorics
92(4)
4.2.2 Bayes's Theorem
96(2)
4.3 Random Variables
98(2)
4.4 Distribution Functions
100(6)
4.4.1 Cumulative Distribution Functions
101(1)
4.4.2 Probability Mass and Density Functions
102(4)
4.5 Multiple Random Variables
106(1)
4.6 Multivariate Probability Distributions
107(7)
4.6.1 Joint Distributions
107(2)
4.6.2 Marginal Distributions
109(1)
4.6.3 Conditional Distributions
110(3)
4.6.4 Independence of Random Variables
113(1)
4.7 Conclusion
114(2)
5 Expectation and Moments: Summaries of Data-Generating Processes
116(21)
5.1 Expectation in Univariate Distributions
116(8)
5.1.1 Properties of Expectation
118(2)
5.1.2 Variance
120(2)
5.1.3 The Chebyshev and Markov Inequalities
122(1)
5.1.4 Expectation of a Function of X
123(1)
5.2 Expectation in Multivariate Distributions
124(11)
5.2.1 Conditional Mean and Variance
126(1)
5.2.2 The Law of Iterated Expectations
127(1)
5.2.3 Covariance
128(2)
5.2.4 Correlation
130(5)
5.3 Conclusion
135(2)
6 Probability and Models: Linking Positive Theories and Data-Generating Processes
137(50)
6.1 DGPs and Theories of Social Phenomena
138(4)
6.1.1 Statistical Models
138(2)
6.1.2 Parametric Families of DGPs
140(2)
6.2 The Bernoulli and Binomial Distribution: Binary Events
142(9)
6.2.1 Introducing a Covariate
144(5)
6.2.2 Other Flavors of Logit and Probit
149(2)
6.3 The Poisson Distribution: Event Counts
151(4)
6.4 DGPs for Durations
155(4)
6.4.1 Exponential Distribution
155(2)
6.4.2 Exponential Hazard Rate Model
157(1)
6.4.3 Weibull Distribution
158(1)
6.5 The Uniform Distribution: Equally Likely Outcomes
159(1)
6.6 The Normal Distribution: When All Else Fails
160(8)
6.6.1 Normal Density
161(2)
6.6.2 Z Scores and the Standard Normal Distribution
163(1)
6.6.3 Models with a Normal DGP
164(2)
6.6.4 Bivariate Normal Distribution
166(2)
6.7 Specifying Linear Models
168(4)
6.7.1 Interaction Effects
168(2)
6.7.2 Exponential Effects
170(1)
6.7.3 Saturated Models
171(1)
6.8 Beyond the Means: Comparisons of DGPs
172(4)
6.8.1 DGP Comparisons by First-Order Stochastic Dominance
173(1)
6.8.2 DGP Comparisons by Variance and Second-Order Stochastic Dominance
174(2)
6.9 Examples
176(8)
6.9.1 Attitudes toward High- and Low-Skilled Immigration
176(5)
6.9.2 Protest Movements in the Former Soviet Union
181(3)
6.10 Conclusion
184(3)
7 Sampling Distributions: Linking Data-Generating Processes and Observable Data
187(49)
7.1 Random Sampling and iid Draws from a DGP
188(1)
7.2 Sample Mean with iid Draws
189(8)
7.2.1 Expectation of the Sample Mean
190(2)
7.2.2 The Standard Error of X̄: Standard Deviation of the Sample Mean
192(2)
7.2.3 The Shape of the Sampling Distribution of X̄
194(2)
7.2.4 Sampling Distribution of a Difference in Means from Two Samples
196(1)
7.3 Sums of Random Variables and the CLT
197(6)
7.4 Sample Variance with iid Draws
203(4)
7.4.1 Expected Value of Sample Variance S²
204(1)
7.4.2 Sampling from a Normal Distribution and the χ² Distribution
205(2)
7.5 Sample Regression Coefficients with iid Draws
207(13)
7.5.1 Expected Value of the OLS Regression Coefficient
208(4)
7.5.2 Exogeneity of Covariates
212(3)
7.5.3 Omitted Variable Bias
215(1)
7.5.4 Sample Selection Bias from "Selecting on the Dependent Variable"
216(1)
7.5.5 Standard Error of β̂ under Random Sampling
217(2)
7.5.6 The CLT and the Sampling Distribution of β̂ under Random Sampling
219(1)
7.6 Derived Distributions: Sampling from Normal DGPs When σ² Must Be Estimated
220(6)
7.6.1 Student's t Distribution
221(3)
7.6.2 The F Distribution
224(2)
7.7 Failures of iid in Sampling
226(7)
7.7.1 Expectation and Standard Error of X̄ and β̂ under Nonindependent Sampling
227(2)
7.7.2 Heteroskedasticity and Issues for Regression Modeling
229(1)
7.7.3 OLS with "Robust" Standard Errors
230(2)
7.7.4 OLS versus Generalized Least Squares
232(1)
7.8 Conclusion
233(3)
8 Hypothesis Testing: Assessing Claims about the Data-Generating Process
236(54)
8.1 A Contrived Example
237(2)
8.2 Concepts of Hypothesis Testing
239(12)
8.2.1 Hypotheses and Parameter Space
240(1)
8.2.2 Test Statistics
241(1)
8.2.3 Decision Rules and Ex Post Errors
242(1)
8.2.4 Significance
243(1)
8.2.5 p-Values
244(2)
8.2.6 Test Power
246(3)
8.2.7 False Positives in Multiple Tests
249(1)
8.2.8 Publication Bias and the File-Drawer Problem
250(1)
8.3 Tests about Means Based on Normal Sampling Distributions
251(12)
8.3.1 z Test for a Single DGP Mean
251(4)
8.3.2 t Test for a Single DGP Mean
255(2)
8.3.3 z Test for a Population Proportion
257(2)
8.3.4 Difference in Means t Test
259(2)
8.3.5 Difference in Proportions z Test
261(1)
8.3.6 Matched Pairs t Test
262(1)
8.4 Tests Based on a Normal DGP
263(3)
8.4.1 Single Variance χ² Test
264(1)
8.4.2 Difference in Variance F Test
264(2)
8.5 Tests about Regression Coefficients
266(8)
8.5.1 z and t Tests for Regression Coefficients
266(4)
8.5.2 Comparing Regression Slopes
270(2)
8.5.3 F Tests in Regression
272(2)
8.6 Example: Public Debt and Gross Domestic Product Growth
274(4)
8.7 Nonparametric Tests
278(9)
8.7.1 Contingency Tables: χ² Test of Association
279(3)
8.7.2 Mann-Whitney-Wilcoxon U Test
282(2)
8.7.3 Kolmogorov-Smirnov Test for Difference in Distribution
284(3)
8.8 Conclusion
287(3)
9 Estimation: Recovering Properties of the Data-Generating Process
290(45)
9.1 Interval Estimation
291(7)
9.1.1 Confidence Intervals for Normal Sampling Distributions
293(2)
9.1.2 Confidence Intervals with Estimated Standard Errors
295(1)
9.1.3 Confidence Intervals and Hypothesis Tests
296(1)
9.1.4 Confidence Intervals and Opinion Polls
297(1)
9.2 Point Estimation and Criteria for Evaluating Point Estimators
298(12)
9.2.1 Bias
300(1)
9.2.2 Mean Squared Error
301(1)
9.2.3 Variance, Precision, and Efficiency
302(2)
9.2.4 Consistency
304(3)
9.2.5 The Cramer-Rao Theorem
307(3)
9.3 Maximum Likelihood Estimation
310(11)
9.3.1 Maximum Likelihood Estimation of Regression Models
314(4)
9.3.2 Likelihood Ratio Tests
318(1)
9.3.3 Properties of MLEs
319(2)
9.4 Bayesian Estimation
321(4)
9.5 Examples
325(6)
9.5.1 Attitudes toward High- and Low-Skilled Immigration
325(2)
9.5.2 Protest Movements in the Former Soviet Union
327(4)
9.6 Conclusion
331(4)
10 Causal Inference: Inferring Causation from Correlation
335(23)
10.1 Treatments and Counterfactuals: The Potential Outcomes Model
336(5)
10.2 Causal Inference in Regression: The Problem
341(3)
10.2.1 Endogeneity Critiques in Applied Research
343(1)
10.3 Causal Inference and Controlled Experiments
344(3)
10.4 Solutions by Controlling for Selection Based on Observable Covariates
347(3)
10.4.1 Confounding Variables and Conditional Independence in Regression
347(1)
10.4.2 Matching
348(1)
10.4.3 Regression Discontinuity
349(1)
10.5 Solutions with Selection on Unobservables
350(5)
10.5.1 Difference-in-Differences and Fixed Effects Regression Models
350(3)
10.5.2 Instrumental Variables Regression
353(2)
10.6 Conclusion
355(3)
Afterword: Statistical Methods and Empirical Research 358(3)
Bibliography 361(6)
Index 367
Sean Gailmard is Associate Professor of Political Science at the University of California, Berkeley. Formerly an Assistant Professor at Northwestern University and at the University of Chicago, Gailmard earned his PhD in Social Science (economics and political science) from the California Institute of Technology. He is the author of Learning While Governing: Institutions and Accountability in the Executive Branch (2013), winner of the 2013 American Political Science Association's William H. Riker Prize for best book on political economy. His articles have been published in a variety of journals, including American Political Science Review, American Journal of Political Science and Journal of Politics. He currently serves as an associate editor for the Journal of Experimental Political Science and on the editorial boards for Political Science Research and Methods and Journal of Public Policy.