Preface  vii
Acknowledgements  ix
Using this book  xxii
|
1 Why am I reading this book?  1
1.1 My lecturer is a sadist!  1
1.2 Doing science: the big picture  3
1.2.1 Descriptive questions  3
1.2.2 Questions answered using a research hypothesis  4
  Stage 1 Developing research hypotheses  4
  Stage 2 Generating predictions  5
  Stage 3 Testing predictions  6
1.3 The process in practice  6
1.4 Essential skills for doing science  7
  Developing hypotheses and predictions  7
  Health, safety, and ethical assessment  9
1.5 Types of data analysis  9
|
2 Getting to grips with the basics  12
2.1 Populations and samples  12
2.1.1 The sampling process  13
  Replication and pseudoreplication  14
2.2 Variation and variables  17
2.2.1 Identifying variables  18
2.2.2 Dependent and independent variables  18
2.2.3 Relationships and differences  19
2.2.4 Manipulated versus natural variation in independent variables  20
2.2.5 Lack of independence between variables  21
2.3.1 Differences: related and unrelated data  22
2.3.2 Levels of measurement  24
  Scale (counts and measures)  25
2.4 Demystifying formulae  26
2.4.1 Squiggles, lines, and letters  26
2.4.2 Doing things in order  27
|
3 Describing a single sample  30
3.2 Descriptive statistics  31
3.3 Frequency distributions  37
3.3.1 For nominal, ordinal, and discrete scale data  38
3.3.2 For continuous scale data  39
3.4 The normal, and other, theoretical distributions  41
3.4.1 Characteristics of the normal distribution  42
3.4.2 Other distributions: binomial and Poisson  43
3.5 Pies, boxes, and errors  43
3.5.1 Pie charts as alternatives to frequency-distribution charts  43
3.5.2 Understanding boxplots  44
3.5.3 Introducing error bars  44
3.6 Example data: ranger patrol tusk records  46
3.7 Worked example: using SPSS  47
3.7.1 Descriptive statistics and frequency distributions  47
  For nominal, ordinal, and discrete scale data  48
  For continuous scale data  49
|
4 Inferring and estimating  59
4.1 Overview of inferential statistics  59
4.1.1 Why we need inferential statistics: a reminder  59
4.1.2 Uncertainty and probability  60
4.2 Inferring through estimation  61
4.2.1 Standard error (of the mean, Sȳ)  62
4.2.2 Confidence intervals (of the mean)  63
4.2.3 Error bars revisited  64
4.3 Example data: ground squirrels  66
4.4 Worked example: using SPSS  67
|
5 Choosing the right test and graph  73
5.2 NHST and other options  76
5.3.1 Tests of frequencies  79
5.3.2 Tests of relationship  80
5.3.3 Tests of difference  80
5.4 Worked examples: graphs with two variables using SPSS  81
5.4.1 Frequency distributions  82
|
6 Overview of null hypothesis significance testing  95
6.1 Four steps of null hypothesis significance testing  95
6.1.1 Step 1: construct a (statistical) null hypothesis (H0)  96
6.1.2 Step 2: decide on a critical significance level (α)  97
6.1.3 Step 3: calculate your statistic  97
6.1.4 Step 4: reject or accept the null hypothesis  98
  Step 4 Using critical value tables  98
  Step 4 Using P values on computer output  98
6.2 Parametric and nonparametric  100
6.2.1 Comparison of parametric and nonparametric  100
6.2.2 Checking criteria for parametric tests using the normal distribution  101
6.2.3 Choosing between parametric and nonparametric  101
6.3 One- and two-tailed tests  102
6.4 Effect sizes and their confidence intervals  103
6.4.1 Uses of effect size  103
6.4.2 Ways of measuring effect size  104
6.5.1 Type I and type II error  105
6.5.3 Power analyses: a priori and post hoc  106
6.5.4 Implications for interpreting your results  107
|
7  111
7.1 Introduction to chi-square tests  111
7.1.1 Only use frequency data  112
7.1.2 Types of chi-square test  112
7.1.3 Sample size considerations  114
7.1.4 When to use chi-square with caution  114
7.1.5 Alternatives to chi-square tests  115
7.2.1 One-way: Mendel's peas  116
7.2.2 Two-way: Mikumi's elephants  117
7.3 One-way chi-square test  119
  Using critical value tables  120
  Using P values on computer output  120
7.3.3 Worked example: by hand  121
  With expected according to a 1:1:1:1 Mendelian ratio (test of homogeneity)  121
  With expected according to a 9:3:3:1 Mendelian ratio  122
7.3.4 Worked example: using SPSS  123
  With expected according to a 1:1:1:1 Mendelian ratio (test of homogeneity)  125
  With expected according to a 9:3:3:1 Mendelian ratio  127
7.3.5 Literature link: weary lettuces  129
7.4 Two-way chi-square test  130
  Using critical value tables  132
  Using P values on computer output  132
7.4.3 Worked example: by hand  132
7.4.4 Worked example: using SPSS  134
7.4.5 Literature link: treatment alliance  137
|
8 Tests of difference: two unrelated samples  143
8.1 Introduction to the t- and Mann-Whitney U tests  143
8.1.1 Variables and levels of measurement needed  143
8.1.2 Comparison of t- and Mann-Whitney U tests  144
8.1.3 The t-test and the parametric criteria  144
8.1.4 Sample size considerations  145
8.1.5 Alternatives to t- and Mann-Whitney U tests  146
8.2 Example data: dem bones  146
8.3 t-test  148
8.3.2 Four steps of a t-test  149
  Using critical value tables  149
  Using P values on computer output  150
8.3.3 Worked example: by hand  150
8.3.4 Worked example: using SPSS  152
8.3.5 Literature link: silicon and sorghum  154
8.4 Mann-Whitney U test  155
8.4.2 Four steps of a Mann-Whitney U test  156
  Using critical value tables  156
  Using P values on computer output  157
8.4.3 Worked example: by hand  157
8.4.4 Worked example: using SPSS  158
8.4.5 Literature link: Alzheimer's disease  162
|
9 Tests of difference: two related samples  166
9.1 Introduction to paired t- and Wilcoxon signed-rank tests  166
9.1.1 Variables and levels of measurement needed  167
9.1.2 Comparison of paired t- and Wilcoxon signed-rank tests  167
9.1.3 The paired t-test and the parametric criteria  168
9.1.4 Sample size considerations  168
9.1.5 Alternatives to and extensions of the paired t- and Wilcoxon signed-rank tests  169
9.2 Example data: bighorn ewes  169
9.3 Paired t-test  172
9.3.2 Four steps of a paired t-test  172
  Using critical value tables  173
  Using P values on computer output  173
9.3.3 Worked example: by hand  173
9.3.4 Worked example: using SPSS  176
9.3.5 Literature link: slug slime  177
9.4 Wilcoxon signed-rank test  179
9.4.2 Four steps of a Wilcoxon signed-rank test  180
  Using critical value tables  181
  Using P values on computer output  181
9.4.3 Worked example: by hand  182
9.4.4 Worked example: using SPSS  183
9.4.5 Literature link: head injuries  185
|
10 Tests of difference: more than two samples  189
10.1 Introduction to one-way and Kruskal-Wallis tests  189
10.1.1 Variables and levels of measurement needed  190
10.1.2 Comparison of one-way and Kruskal-Wallis tests  191
10.1.3 One-way Anova and the parametric criteria  191
10.1.4 Sample size considerations  192
10.1.5 Alternatives to and extensions of one-way and Kruskal-Wallis tests  193
10.1.6 The language of Anova  193
10.1.7 Multiple comparisons  194
10.2 Example data: nitrogen levels in reeds  194
10.3 One-way Anova  196
10.3.2 Four steps of a one-way Anova  197
  Using critical value tables  199
  Using P values on computer output  199
10.3.3 Worked example: using SPSS  199
10.3.4 Literature link: running rats  201
10.4 Kruskal-Wallis test  202
10.4.2 Four steps of a Kruskal-Wallis test  203
  Using critical value tables  204
  Using P values on computer output  204
10.4.3 Worked example: using SPSS  205
10.4.4 Literature link: cooperating long-tailed tits  207
10.5 Model I and model II Anova  208
|
11 Tests of relationship: regression  211
11.1 Introduction to bivariate linear regression  211
11.1.1 Variables and levels of measurement needed  212
11.1.2 Linear model: scary-not!  213
11.1.3 The three regression questions  214
11.1.4 Added extras: how much is explained, and prediction  214
11.1.5 Regression and the parametric criteria  215
11.1.6 Sample size considerations  216
11.1.7 Alternatives to and extensions of bivariate linear regression and Anova  217
11.2 Example data: species richness  217
11.3 Bivariate linear regression  218
11.3.2 Four steps of a regression test  219
  Using critical value tables  220
  Using P values on computer output  220
11.3.3 Worked example: using SPSS for a regression test  221
11.3.4 Worked example: using SPSS to get the added extras  223
11.3.5 Reporting bivariate linear regression results  225
11.3.6 Literature link: nodules  226
11.4 Model I and model II regression  227
|
12 Tests of relationship: correlation  231
12.1 Introduction to the Pearson and Spearman correlation tests  231
12.1.1 Variables and levels of measurement needed  232
12.1.2 Comparison of Pearson's and Spearman's tests  234
12.1.3 Pearson and the parametric criteria  235
12.1.4 Sample size considerations  235
12.1.5 The correlation coefficient  236
12.1.6 Partial, multiple, and multivariate correlation  236
12.2 Example data: eyeballs  237
12.3 Pearson correlation test  238
12.3.2 Four steps of a Pearson correlation test  240
  Using critical value tables  241
  Using P values on computer output  241
12.3.3 Worked example: using SPSS  241
12.3.4 Literature link: male sacrifice  243
12.4 Spearman correlation test  244
12.4.2 Four steps of a Spearman correlation test  245
  Using critical value tables  246
  Using P values on computer output  246
12.4.3 Worked example: using SPSS  246
12.4.4 Literature link: defoliating ryegrass  248
12.5 Comparison of correlation and regression  249
|
13 Introducing the generalized linear model: general linear model  253
13.1 Introduction to the general linear model  254
13.1.1 Variables and levels of measurement  255
13.1.2 The linear model revisited  256
13.1.3 The language of GLMs  259
13.1.4 Questions and extras  259
13.1.5 Types of sums of squares  260
13.1.6 Model assumptions and the parametric criteria  261
13.2 Example data: watered willows  263
13.3 General linear model  265
13.3.2 GLM and the four steps  266
13.3.3 Worked example: using SPSS  267
  Looking at the model overall (answering question 2a, Section 13.1.4)  267
  Looking at individual explanatory variables (answering question 2b, Section 13.1.4)  269
  Looking at R² (answering question 3, Section 13.1.4)  271
  Using coefficients (answering question 1, Section 13.1.4)  272
  Using coefficients (making predictions)  274
  Checking model assumptions  274
13.3.4 Reporting results from GLM  275
13.3.5 Literature link: brains and booze  278
13.4.1 Worked example: using SPSS, interaction  281
13.5 Random factors and mixed models  282
13.6 The multiple-model approach  283
13.6.1 Finding the best model  284
13.6.2 Reporting results from multiple models  284
13.7 The general and generalized linear models compared  285
|
14 More on the generalized linear model: logistic and loglinear models  290
14.1 Introduction to the logistic and loglinear models  290
14.1.1 Variables and levels of measurement  291
14.1.2 Link functions revisited  292
14.1.3 Questions and extras  293
14.1.4 Model assumptions and overdispersion  293
14.2 Example data: urban birds  294
14.3 The binary logistic model  295
14.3.2 Binary logistic models and the four steps  296
14.3.3 Worked example: using SPSS  296
  Answering question 3 How good is the model?  302
  Answering question 2a Is the model significant overall?  302
  Answering question 2b Are individual explanatory variables significant?  303
  Answering question 1 What is the model?  304
  Checking for overdispersion  306
14.3.4 Literature link: death by AMI  306
14.4 The loglinear model  307
14.4.2 Loglinear models and the four steps  308
14.4.3 Worked example: using SPSS  308
  Answering question 3 How good is the model?  313
  Answering question 2a Is the model significant overall?  313
  Answering question 2b Are individual explanatory variables significant?  314
  Answering question 1 What is the model?  315
  Checking for overdispersion  317
14.4.4 Literature link: sea cows  317
14.5 The general, binary logistic, and loglinear models compared  318
14.6 Alternatives and extensions  318
|
Answers to self-help questions  322
Appendix I How to enter data into SPSS  327
  Type, width, and decimals  327
Appendix II Statistical tables of critical values  330
Appendix III Summary guidance on reporting statistical results  340
Appendix IV Statistics and experimental design  343
  Designs with control groups  343
  Balanced and unbalanced design  343
  Completely randomized designs  343
  One-way or one-factor designs  343
  Multi-way or multi-factor designs  344
For when you are feeling stronger ...  347
References  349
Index  353