Preface  xvii
Acknowledgments  xxiii

PART I  DEVELOPING A CONTEXT FOR STATISTICAL ANALYSIS  1

1  A Context for Solving Quantitative Problems  3
    Answers the Old-Fashioned Way  6
    Different Kinds of Statistics  8
    The Connection Between Quantitative Analysis and Research Design  9

2  15
    Measures of Central Tendency  20
    Other Measures of Variability  36
    Another Word About Samples and Populations  37
|
|
PART II  41

3  Data Distributions: Picturing Statistics  43
    The Frequency Distribution  44
    Apparent Versus Actual Limits  47
    Guidelines for Developing a Grouped Frequency Distribution  47
    Frequencies, Relative Frequencies, and Cumulative Relative Frequencies  49
    Interpreting Tables and Figures  52
    Central Tendency and Normality  60
    The Standard Deviation and Normality  65
    Determining What Is Representative  70

4  Working With the Normal Curve: z Scores  77
    Determining the "Best" Performance  80
    z and the Percent of the Population Under the Curve  82
    Probabilities and Percentages  86
    The Probability of Scoring Between Two Points  88
    Two Values on Opposite Sides of the Mean  88
    Two Values on the Same Side of the Mean  90
    The Percentage of the Distribution Outside an Interval  91
    Rules for Determining the Percentage of the Population Under Areas of the Curve  91
    From Percentages to z Scores  94
    The Normal Curve Equivalent  97
    The Standard Nine-Point Scale  98
    The Nonstandard Grade Equivalent Score  100
    Standard Scores With Specified Characteristics  101
|
PART III  EXAMINING DIFFERENCES  107

5  Probability and the Normal Distribution  109
    The Distribution of Sample Means  110
    The Central Limit Theorem  111
    Sampling Error and the Law of Large Numbers  114
    The Effect of Extreme Scores  114
    Describing the Distribution of Sample Means  115
    Deriving the Values for the z Test  117
    Representativeness and Statistical Significance  119
    Back to the Gaussian Distribution  121
    Probability, the Alpha Level, and Decision Errors  123
    Decision Errors Continued  125
    On the Horns of a Dilemma  126
    Who Decides? Determining Statistical Significance  127
    Calculating the Confidence Interval  128
    Interpreting the Confidence Interval  129
    Sample Size and Confidence  129

6  139
    From the z Test to the One-Sample t-Test  140
    The Estimated Standard Error of the Mean, SEm  141
    Degrees of Freedom and the t Distribution  143
    The Distribution of Difference Scores  149
    The Logic of the Independent t-Test  149
    Calculating the Independent t  151
    Interpreting the t Statistic  151
    The Confidence Interval of the Difference  156
    The Scale of the Independent and Dependent Variables  159
    Two- Versus One-Tailed Tests  161
    The Risk of Using One-Tailed Tests  163
    Requirements for Independent t-Tests  163
    A Reminder About Decision Errors  164
    Power and Statistical Testing  164
|
7  One-Way Analysis of Variance  173
    A Context for Analysis of Variance  174
    The Advantages of Analysis of Variance  175
    The Between and Within Sums of Squares  182
    Interpreting the Variability Measures  186
    Degrees of Freedom and the ANOVA  186
    Understanding the Calculated Value of F  189
    Requirements for the One-Way ANOVA  192

8  199
    Limitations in the Independent t-Test and One-Way ANOVA  199
    Describing the Factorial ANOVA  202
    Calculating the SSbet Components  204
    The Within Sum of Squares (SSwith)  207
    The Degrees of Freedom for a Factorial ANOVA  210
    Using a Graph to Explain the Interaction  212

9  Dependent Groups Tests for Interval Data  221
    Working Backward to Move Forward  221
    Statistical Power and the Standard Error of the Difference  222
    The Dependent Samples t-Tests  223
    Calculating the t Statistic  225
    The Give and Take of the Before/After t  228
    The Dependent Samples t-Test Versus the Independent t  230
    The Difficulty of Matching Multiple Groups  235
    Examining Sources of Variability  236
    Calculating the Within-Subjects F  237
    Another Within-Subjects F Example  240
    Locating Significant Differences  241
    How Much of the Variance Is Explained?  242
    Comparing the Within-Subjects F With the One-Way ANOVA  243
    A Comment on the Within-Subjects F and SPSS  244
|
PART IV  ASSOCIATION AND PREDICTION  253

10  255
    A Context for Correlation  255
    Variations on the Correlation Theme  261
    The Correlation Hypotheses  267
    Assumptions and Attenuated Range  269
    Interpreting the Correlation Coefficient: Direction and Strength  270
    Regarding the Strength of the Correlation  270
    The Coefficient of Determination  272
    The Correlation Coefficient Versus the Coefficient of Determination  273
    Significance and Sample Size  273
    The Point-Biserial Correlation  274
    Nonparametric Correlations  276
    A Partial List of Bivariate Correlations  276
    Another Thought on Interpreting Correlation Values  277

11  Regression With One Predictor  285
    The Least Squares Criterion  290
    The Regression Equation With One Predictor  290
    Calculating the Intercept and the Slope  291
    Interpreting the Intercept  292
    Regress Which Variable on Which?  294
    Calculating the Standard Error of the Estimate  296
    Another Example of Calculating SEest  297
    A Confidence Interval for the Estimate of y  298

12  Regression With More Than One Predictor  307
    Multiple R, or Multiple Correlation  308
    Another Multiple R Problem  310
    Making the Application to Regression  313
    The Multiple Regression Equation  313
    Using Multiple Regression  317
    A Comment About SPSS Output  320
    The Standard Error of the Multiple Estimate  320
    The Confidence Interval for y'  322
    Another Confidence Interval Example  323
    Overfitting the Data and Shrinkage  324
|
PART V  TESTS FOR NOMINAL AND ORDINAL DATA  329

13  Some of the Chi-Square Tests  331
    The Goodness-of-Fit Chi-Square  333
    Calculating the Test Statistic  334
    The Chi-Square Hypotheses  335
    Interpreting the Test Statistic  336
    A 1 x k (Goodness-of-Fit) Chi-Square Problem With Unequal fe Values  337
    Calculating fe Values for Unequal Categories  337
    Another 1 x k Problem With Equal Categories  339
    The Chi-Square and Statistical Power  339
    The Chi-Square Test of Independence  342
    The fo and fe Values in the Chi-Square Test of Independence  345
    Degrees of Freedom for the Chi-Square Test of Independence  345
    Interpreting the Chi-Square Test of Independence  346
    Phi Coefficient and Cramer's V  346
    Another Example of the Test of Independence  348

14  Working With Ordinal, More- or Less-Than Data  355
    The Hypothesis of Association for Ordinal Data: Spearman's rho  356
    Interpreting Spearman's rho  360
    The Hypothesis of Difference for Ordinal Data  363
    Two Independent Groups: Mann-Whitney U  364
    Calculating the Mann-Whitney U  364
    The Mann-Whitney and Power  367
    Looking at SPSS Output for Mann-Whitney  367
    Two or More Independent Groups: Kruskal-Wallis H  368
    Calculating the Kruskal-Wallis H  368
    Looking at SPSS Output for Kruskal-Wallis  371
    Two Related Groups: Wilcoxon T  371
    Calculating the Wilcoxon T  372
    Interpreting the Test Statistic, T  373
    Two or More Related Groups: Friedman's ANOVA  373
    A Friedman's ANOVA Example  375
    Interpreting the Test Value  377
|
PART VI  TESTS, MEASUREMENT ISSUES, AND SELECTED ADVANCED TOPICS  387

15  389
    The Relationship Between Reliability and Measurement Error  391
    Calculating Reliability: An Example  392
    Alternate Forms Reliability  394
    What's a High Reliability Coefficient?  394
    The Spearman-Brown Prophecy Formula  395
    Other Applications for Spearman-Brown  396
    Calculating Coefficient Alpha  399
    The Kuder and Richardson Formulae  401
    Score Versus Test Reliability  402
    Strengthening Reliability  404
    Classification Consistency  404
    The Standard Error of Measurement  406
    Reliability for the Group Versus the Individual  408
    The Most Tested Generation  409
    Bias in Admissions Testing  409

16  A Brief Introduction to Selected Advanced Topics  421
    Understanding the Results: The Covariate  424
    Some Final Considerations  426
    Multivariate Analysis of Variance  427
    Combining the Dependent Variables  427
    Understanding MANOVA Results  429
    Some Final Considerations  431
    Discriminant Function Analysis  432
    Some Final Considerations  436
    A Forward Regression Solution  438
    A Backward Regression Solution  438
    A Stepwise Regression Solution  438
    A Hierarchical Regression Solution  439

Tables of Critical Values  458
Solutions to Selected Problems  499
Index  509
About the Author  523