Preface   xiii
Acknowledgments   xv

Introduction   1
    The Role of the Computer in Data Analysis   1
    Statistics: Descriptive and Inferential   2
    The Measurement of Variables   3
    Discrete and Continuous Variables   8
    Setting a Context with Real Data   11
    Exercises   12

Examining Univariate Distributions   20
    Counting the Occurrence of Data Values   20
        When Variables Are Measured at the Nominal Level   20
        When Variables Are Measured at the Ordinal, Interval, or Ratio Level   24
            Frequency and Percent Distribution Tables   24
    Describing the Shape of a Distribution   33
    Cumulative Percent Distributions   35
    Five-Number Summaries and Boxplots   40
    Exercises   45

Measures of Location, Spread, and Skewness   60
    Characterizing the Location of a Distribution   60
        The Mode   60
        The Median   63
        The Mean   65
        Comparing the Mode, Median, and Mean   67
    Characterizing the Spread of a Distribution   70
        The Range and Interquartile Range   72
    Characterizing the Skewness of a Distribution   77
    Selecting Measures of Location and Spread   78
    Applying What We Have Learned   79
    Exercises   82

Re-expressing Variables   91
    Linear and Nonlinear Transformations   91
    Linear Transformations: Addition, Subtraction, Multiplication, and Division   91
        The Effect on the Shape of a Distribution   93
        The Effect on Summary Statistics of a Distribution   95
        Common Linear Transformations   95
        Using z-Scores to Detect Outliers   100
        Using z-Scores to Compare Scores in Different Distributions   101
        Relating z-Scores to Percentile Ranks   102
    Nonlinear Transformations: Square Roots and Logarithms   103
    Nonlinear Transformations: Ranking Variables   110
    Other Transformations: Recoding and Combining Variables   111
    Exercises   114

Exploring Relationships Between Two Variables   121
    When Both Variables Are at Least Interval-Leveled   121
        The Pearson Product Moment Correlation Coefficient   126
        Interpreting the Pearson Correlation Coefficient   130
            The Effect of Linear Transformations   132
            The Shape of the Underlying Distributions   133
            The Reliability of the Data   133
    When at Least One Variable Is Ordinal and the Other Is at Least Ordinal: The Spearman Rank Correlation Coefficient   133
    When at Least One Variable Is Dichotomous: Other Special Cases of the Pearson Correlation Coefficient   135
        The Point Biserial Correlation Coefficient: The Case of One at-Least-Interval and One Dichotomous Variable   135
        The Phi Coefficient: The Case of Two Dichotomous Variables   140
    Other Visual Displays of Bivariate Relationships   144
    Selection of Appropriate Statistic/Graph to Summarize a Relationship   147
    Exercises   148

Simple Linear Regression   158
    The "Best-Fitting" Linear Equation   158
    The Accuracy of Prediction Using the Linear Regression Model   164
    The Standardized Regression Equation   165
    R as a Measure of the Overall Fit of the Linear Regression Model   165
    Simple Linear Regression When the Independent Variable Is Dichotomous   169
    Using r and R as Measures of Effect Size   172
    Emphasizing the Importance of the Scatterplot   172
    Exercises   174

Probability Fundamentals   182
    The Complement Rule of Probability   184
    The Additive Rules of Probability   184
        First Additive Rule of Probability   185
        Second Additive Rule of Probability   186
    The Multiplicative Rule of Probability   187
    The Relationship between Independence and Mutual Exclusivity   189
    Exercises   192

Theoretical Probability Models   195
    The Binomial Probability Model and Distribution   195
        The Applicability of the Binomial Probability Model   200
    The Normal Probability Model and Distribution   204
    Using the Normal Distribution to Approximate the Binomial Distribution   210
    Exercises   210

The Role of Sampling in Inferential Statistics   217
    Obtaining a Simple Random Sample   219
    Sampling with and without Replacement   221
    Sampling Distributions   223
    Describing the Sampling Distribution of Means Empirically   223
    Describing the Sampling Distribution of Means Theoretically: The Central Limit Theorem   226
        Central Limit Theorem (CLT)   227
    Exercises   231

Inferences Involving the Mean of a Single Population When σ Is Known   234
    Estimating the Population Mean μ When the Population Standard Deviation σ Is Known   234
    Relating the Length of a Confidence Interval, the Level of Confidence, and the Sample Size   239
    Hypothesis Testing   239
    The Relationship between Hypothesis Testing and Interval Estimation   247
    Type II Error and the Concept of Power   249
        Increasing the Level of Significance, α   253
        Increasing the Effect Size, δ   253
        Decreasing the Standard Error of the Mean, σX̄   253
    Exercises   254

Inferences Involving the Mean When σ Is Not Known: One- and Two-Sample Designs   259
    Single Sample Designs When the Parameter of Interest Is the Mean and σ Is Not Known   259
        The t Distribution   260
        Degrees of Freedom for the One-Sample t-Test   261
        Violating the Assumption of a Normally Distributed Parent Population in the One-Sample t-Test   262
        Confidence Intervals for the One-Sample t-Test   263
        Hypothesis Tests: The One-Sample t-Test   267
        Effect Size for the One-Sample t-Test   269
    Two-Sample Designs When the Parameter of Interest Is μ and σ Is Not Known   273
        Independent (or Unrelated) and Dependent (or Related) Samples   274
        Independent Samples t-Test and Confidence Interval   275
        The Assumptions of the Independent Samples t-Test   277
        Effect Size for the Independent Samples t-Test   285
        Paired Samples t-Test and Confidence Interval   288
        The Assumptions of the Paired Samples t-Test   289
        Effect Size for the Paired Samples t-Test   293
    The Standard Error of the Mean Difference for Independent Samples: A More Complete Account (Optional)   295
        Step 1: Estimating σ² Using the Variance Estimators σ̂₁² and σ̂₂²   299
        Step 2: Estimating the Standard Error of the Mean Difference, σX̄₁−X̄₂, Using σ²   299
    Exercises   300

One-Way Analysis of Variance   315
    The Disadvantage of Multiple t-Tests   315
    The One-Way Analysis of Variance   317
        A Graphical Illustration of the Role of Variance in Tests on Means   317
        ANOVA as an Extension of the Independent Samples t-Test   318
        Developing an Index of Separation for the Analysis of Variance   319
        Carrying Out the ANOVA Computation   319
        The Between-Group Variance (MSB)   320
        The Within-Group Variance (MSW)   321
        The Assumptions of the One-Way ANOVA   321
        Testing the Equality of Population Means: The F-Ratio   322
        How to Read the Tables and to Use the SPSS Compute Statement for the F-Distribution   324
    Measuring the Effect Size   328
    Post-Hoc Multiple Comparison Tests   330
    The Bonferroni Adjustment: Testing Planned Comparisons   340
    The Bonferroni Tests on Multiple Measures   343
    Exercises   345

Two-Way Analysis of Variance   350
    The Concept of Interaction   353
    The Hypotheses That Are Tested by a Two-Way Analysis of Variance   358
    Assumptions of the Two-Way ANOVA   358
    Balanced versus Unbalanced Factorial Designs   360
    Partitioning the Total Sum of Squares   360
    Using the F-Ratio to Test the Effects in Two-Way ANOVA   361
    Carrying Out the Two-Way ANOVA Computation by Hand   362
    Decomposing Score Deviations about the Grand Mean   366
    Modeling Each Score as a Sum of Component Parts   367
    Explaining the Interaction as a Joint (or Multiplicative) Effect   368
    Fixed versus Random Factors   372
    Post-Hoc Multiple Comparison Tests   373
    Summary of Steps to Be Taken in a Two-Way ANOVA Procedure   379
    Exercises   383

Correlation and Simple Regression as Inferential Techniques   391
    The Bivariate Normal Distribution   391
    Testing Whether the Population Pearson Product Moment Correlation Equals Zero   394
    Using a Confidence Interval to Estimate the Size of the Population Correlation Coefficient, ρ   397
    Revisiting Simple Linear Regression for Prediction   400
        Estimating the Population Standard Error of Prediction, σY|X   400
        Testing the b-Weight for Statistical Significance   401
        Explaining Simple Regression Using an Analysis of Variance Framework   405
        Measuring the Fit of the Overall Regression Equation: Using R and R²   407
        Testing R² for Statistical Significance   409
        Estimating the True Population R²: The Adjusted R²   409
    Exploring the Goodness of Fit of the Regression Equation: Using Regression Diagnostics   410
        Residual Plots: Evaluating the Assumptions Underlying Regression   413
        Detecting Influential Observations: Discrepancy and Leverage   415
        Using SPSS to Obtain Leverage   417
        Using SPSS to Obtain Discrepancy   417
        Using SPSS to Obtain Influence   418
    Using the Prediction Model to Predict Ice Cream Sales   422
    Simple Regression When the Predictor Is Dichotomous   422
    Exercises   424

An Introduction to Multiple Regression   435
    The Basic Equation with Two Predictors   436
    Equations for b, β, and RY.12 When the Predictors Are Not Correlated   437
    Equations for b, β, and RY.12 When the Predictors Are Correlated   438
    Summarizing and Expanding on Some Important Principles of Multiple Regression   440
    Testing the b-Weights for Statistical Significance   444
    Assessing the Relative Importance of the Independent Variables in the Equation   445
    Measuring the Decrease in R² Directly: An Alternative to the Squared Part Correlation   446
    Evaluating the Statistical Significance of the Change in R²   446
    The b-Weight as a Partial Slope in Multiple Regression   448
    Multiple Regression When One of the Two Independent Variables Is Dichotomous   450
    The Concept of Interaction between Two Variables That Are at Least Interval-Leveled   454
    Testing the Statistical Significance of an Interaction Using SPSS   456
    Centering First-Order Effects to Achieve Meaningful Interpretations of b-Weights   460
    Understanding the Nature of a Statistically Significant Two-Way Interaction   460
    Interaction When One of the Independent Variables Is Dichotomous and the Other Is Continuous   463
    Putting It All Together: A Student Project Reprinted   466
        Examining the Variables Individually and in Pairs   468
        Examining the Variables Multivariately with Mathematics Achievement as the Criterion   471
    Exercises   475

Nonparametric Methods   485
    Parametric versus Nonparametric Methods   485
    Nonparametric Methods When the Dependent Variable Is at the Nominal Level   486
        The Chi-Square Distribution (χ²)   486
        The Chi-Square Goodness-of-Fit Test   489
        The Chi-Square Test of Independence   493
        Assumptions of the Chi-Square Test of Independence   497
        The Fisher Exact Test   499
        Calculating the Fisher Exact Test by Hand Using the Hypergeometric Distribution   501
    Nonparametric Methods When the Dependent Variable Is Ordinal-Leveled   505
        The Kruskal-Wallis Analysis of Variance   512
    Exercises   514

APPENDIX A. DATA SET DESCRIPTIONS   519

APPENDIX B. GENERATING DISTRIBUTIONS FOR CHAPTERS 8 AND 9 USING SPSS SYNTAX   531
    Creating a New Data Set File with ID Values for 75 Cases   531
    The SPSS Syntax Program (Called, in General, a Macro) to Generate the Set of 50,000 Sample Means Used to Form the Sampling Distribution of Means Graphed as the Histogram of Figure 9.2   532
    The SPSS Syntax Program to Generate the Set of 1,000 Normally Distributed Scores with Mean = 15 and SD = 3 as Illustrated by the Histogram of Figure 9.3   533
    The SPSS Syntax Program to Generate the Set of 1,000 Normally Distributed Scores with Mean = 15 and SD = 3 as Illustrated by the Histogram of Figure 9.4   534
    The SPSS Syntax Program to Generate the Set of 1,000 Positively Skewed Scores with Mean = 8 and SD = 4 as Illustrated by the Histogram of Figure 9.5   534
    The SPSS Syntax Program, Sampdisver2.Sps, to Generate the Set of 5,000 Sample Means Used to Form the Sampling Distribution of Means Graphed as the Histogram of Figure 9.6   535

APPENDIX C. STATISTICAL TABLES   537
    Table 1. Areas under the Standard Normal Curve (to the Right of the z-Score)   537
    Table 2. Distribution of t-Values for Right-Tail Areas   538
    Table 3. Distribution of F-Values for Right-Tail Areas   539
    Table 4. Binomial Distribution Table   543
    Table 5. Chi-Square Distribution Values for Right-Tailed Areas   548
    Table 6. The Critical q-Values   549
    Table 7. The Critical U-Values   550

APPENDIX D   554
APPENDIX E. SOLUTIONS TO EXERCISES   557
    Chapter 1   557
    Chapter 2   559
    Chapter 3   579
    Chapter 4   597
    Chapter 5   607
    Chapter 6   626
    Chapter 7   640
    Chapter 8   641
    Chapter 9   644
    Chapter 10   648
    Chapter 11   649
    Chapter 12   673
    Chapter 13   689
    Chapter 14   703
    Chapter 15   715
    Chapter 16   743

Index   752