
E-raamat: Univariate, Bivariate, and Multivariate Statistics Using R: Quantitative Tools for Data Analysis and Data Science

  • Format: PDF+DRM
  • Publication date: 25-Mar-2020
  • Publisher: John Wiley & Sons Inc
  • Language: eng
  • ISBN-13: 9781119549956
  • Price: 123,44 €*
  • * the price is final, i.e. no further discounts apply
  • This e-book is intended for personal use only. E-books cannot be returned.

DRM restrictions

  • Copying (copy/paste):

    not allowed

  • Printing:

    not allowed

  • Usage:

    Digital Rights Management (DRM)
    The publisher has issued this e-book in encrypted form, which means that you must install special software to read it. You must also create an Adobe ID. More information here. The e-book can be read by 1 user and downloaded to up to 6 devices (all authorized with the same Adobe ID).

    Required software
    To read on a mobile device (phone or tablet), install this free app: PocketBook Reader (iOS / Android)

    To read on a PC or Mac, install Adobe Digital Editions (this is a free application designed specifically for reading e-books; it should not be confused with Adobe Reader, which is probably already installed on your computer).

    This e-book cannot be read on an Amazon Kindle.

A practical source for performing essential statistical analyses and data management tasks in R

Univariate, Bivariate, and Multivariate Statistics Using R offers a practical and user-friendly introduction to R software, covering a range of statistical methods used in data analysis and data science. The author, a noted expert in quantitative teaching, has written a quick go-to reference for performing essential statistical analyses and data management tasks in R. Requiring only minimal prior knowledge, the book introduces the concepts needed for an immediate yet clear understanding of the statistical ideas essential to interpreting software output.

The author explores univariate, bivariate, and multivariate statistical methods, as well as select nonparametric tests. Altogether, it is a hands-on manual covering the applied statistics and essential R computing skills needed to write theses, dissertations, and research publications. The book is comprehensive in its coverage of univariate through multivariate procedures, while serving as a friendly and gentle introduction to R software for the newcomer. This important resource:

  • Offers an introductory, concise guide to the computational tools that are useful for making sense out of data using R statistical software
  • Provides a resource for students and professionals in the social, behavioral, and natural sciences
  • Puts the emphasis on the computational tools used in the discovery of empirical patterns
  • Features a variety of popular statistical analyses and data management tasks that can be immediately and quickly applied as needed to research projects
  • Shows how to apply statistical analysis using R to data sets in order to get started quickly performing essential tasks in data analysis and data science

Written for students, professionals, and researchers primarily in the social, behavioral, and natural sciences, Univariate, Bivariate, and Multivariate Statistics Using R offers an easy-to-use guide for performing data analysis fast, with an emphasis on drawing conclusions from empirical observations. The book can also serve as a primary or secondary textbook for courses in data analysis or data science, or others in which quantitative methods are featured. 

Preface xiii
1 Introduction to Applied Statistics  1(30)
1.1 The Nature of Statistics and Inference  2(1)
1.2 A Motivating Example  3(1)
1.3 What About "Big Data"?  4(3)
1.4 Approach to Learning R  7(1)
1.5 Statistical Modeling in a Nutshell  7(3)
1.6 Statistical Significance Testing and Error Rates  10(1)
1.7 Simple Example of Inference Using a Coin  11(2)
1.8 Statistics Is for Messy Situations  13(1)
1.9 Type I versus Type II Errors  14(1)
1.10 Point Estimates and Confidence Intervals  15(3)
1.11 So What Can We Conclude from One Confidence Interval?  18(1)
1.12 Variable Types  19(3)
1.13 Sample Size, Statistical Power, and Statistical Significance  22(1)
1.14 How "p < 0.05" Happens  23(2)
1.15 Effect Size  25(1)
1.16 The Verdict on Significance Testing  26(1)
1.17 Training versus Test Data  27(1)
1.18 How to Get the Most Out of This Book  28(3)
Exercises  29(2)
2 Introduction to R and Computational Statistics  31(40)
2.1 How to Install R on Your Computer  34(1)
2.2 How to Do Basic Mathematics with R  35(6)
2.2.1 Combinations and Permutations  38(1)
2.2.2 Plotting Curves Using curve()  39(2)
2.3 Vectors and Matrices in R  41(3)
2.4 Matrices in R  44(8)
2.4.1 The Inverse of a Matrix  47(2)
2.4.2 Eigenvalues and Eigenvectors  49(3)
2.5 How to Get Data into R  52(3)
2.6 Merging Data Frames  55(1)
2.7 How to Install a Package in R, and How to Use It  55(3)
2.8 How to View the Top, Bottom, and "Some" of a Data File  58(2)
2.9 How to Select Subsets from a Dataframe  60(2)
2.10 How R Deals with Missing Data  62(1)
2.11 Using ls() to See Objects in the Workspace  63(2)
2.12 Writing Your Own Functions  65(1)
2.13 Writing Scripts  65(1)
2.14 How to Create Factors in R  66(1)
2.15 Using the table() Function  67(1)
2.16 Requesting a Demonstration Using the example() Function  68(1)
2.17 Citing R in Publications  69(2)
Exercises  69(2)
3 Exploring Data with R: Essential Graphics and Visualization  71(30)
3.1 Statistics, R, and Visualization  71(2)
3.2 R's plot() Function  73(4)
3.3 Scatterplots and Depicting Data in Two or More Dimensions  77(2)
3.4 Communicating Density in a Plot  79(6)
3.5 Stem-and-Leaf Plots  85(2)
3.6 Assessing Normality  87(2)
3.7 Box-and-Whisker Plots  89(6)
3.8 Violin Plots  95(2)
3.9 Pie Graphs and Charts  97(1)
3.10 Plotting Tables  98(3)
Exercises  99(2)
4 Means, Correlations, Counts: Drawing Inferences Using Easy-to-Implement Statistical Tests  101(30)
4.1 Computing z and Related Scores in R  101(4)
4.2 Plotting Normal Distributions  105(1)
4.3 Correlation Coefficients in R  106(4)
4.4 Evaluating Pearson's r for Statistical Significance  110(1)
4.5 Spearman's Rho: A Nonparametric Alternative to Pearson  111(2)
4.6 Alternative Correlation Coefficients in R  113(1)
4.7 Tests of Mean Differences  114(6)
4.7.1 t-Tests for One Sample  114(1)
4.7.2 Two-Sample t-Test  115(2)
4.7.3 Was the Welch Test Necessary?  117(1)
4.7.4 t-Test via Linear Model Set-up  118(1)
4.7.5 Paired-Samples t-Test  118(2)
4.8 Categorical Data  120(6)
4.8.1 Binomial Test  120(3)
4.8.2 Categorical Data Having More Than Two Possibilities  123(3)
4.9 Radar Charts  126(1)
4.10 Cohen's Kappa  127(4)
Exercises  129(2)
5 Power Analysis and Sample Size Estimation Using R  131(16)
5.1 What Is Statistical Power?  131(2)
5.2 Does That Mean Power and Huge Sample Sizes Are "Bad?"  133(1)
5.3 Should I Be Estimating Power or Sample Size?  134(1)
5.4 How Do I Know What the Effect Size Should Be?  135(1)
5.4.1 Ways of Setting Effect Size in Power Analyses  135(1)
5.5 Power for t-Tests  136(4)
5.5.1 Example: Treatment versus Control Experiment  137(1)
5.5.2 Extremely Small Effect Size  138(2)
5.6 Estimating Power for a Given Sample Size  140(1)
5.7 Power for Other Designs - The Principles Are the Same  140(3)
5.7.1 Power for One-Way ANOVA  141(2)
5.7.2 Converting R² to  143(1)
5.8 Power for Correlations  143(2)
5.9 Concluding Thoughts on Power  145(2)
Exercises  146(1)
6 Analysis of Variance: Fixed Effects, Random Effects, Mixed Models, and Repeated Measures  147(42)
6.1 Revisiting t-Tests  147(2)
6.2 Introducing the Analysis of Variance (ANOVA)  149(3)
6.2.1 Achievement as a Function of Teacher  149(3)
6.3 Evaluating Assumptions  152(4)
6.3.1 Inferential Tests for Normality  153(1)
6.3.2 Evaluating Homogeneity of Variances  154(2)
6.4 Performing the ANOVA Using aov()  156(5)
6.4.1 The Analysis of Variance Summary Table  157(1)
6.4.2 Obtaining Treatment Effects  158(1)
6.4.3 Plotting Results of the ANOVA  159(1)
6.4.4 Post Hoc Tests on the Teacher Factor  159(2)
6.5 Alternative Way of Getting ANOVA Results via lm()  161(2)
6.5.1 Contrasts in lm() versus Tukey's HSD  163(1)
6.6 Factorial Analysis of Variance  163(3)
6.6.1 Why Not Do Two One-Way ANOVAs?  163(3)
6.7 Example of Factorial ANOVA  166(6)
6.7.1 Graphing Main Effects and Interaction in the Same Plot  171(1)
6.8 Should Main Effects Be Interpreted in the Presence of Interaction?  172(1)
6.9 Simple Main Effects  173(2)
6.10 Random Effects ANOVA and Mixed Models  175(5)
6.10.1 A Rationale for Random Factors  176(1)
6.10.2 One-Way Random Effects ANOVA in R  177(3)
6.11 Mixed Models  180(1)
6.12 Repeated-Measures Models  181(8)
Exercises  186(3)
7 Simple and Multiple Linear Regression  189(36)
7.1 Simple Linear Regression  190(2)
7.2 Ordinary Least-Squares Regression  192(6)
7.3 Adjusted R²  198(1)
7.4 Multiple Regression Analysis  199(3)
7.5 Verifying Model Assumptions  202(4)
7.6 Collinearity Among Predictors and the Variance Inflation Factor  206(3)
7.7 Model-Building and Selection Algorithms  209(5)
7.7.1 Simultaneous Inference  209(1)
7.7.2 Hierarchical Regression  210(1)
7.7.2.1 Example of Hierarchical Regression  211(3)
7.8 Statistical Mediation  214(3)
7.9 Best Subset and Forward Regression  217(2)
7.9.1 How Forward Regression Works  218(1)
7.10 Stepwise Selection  219(2)
7.11 The Controversy Surrounding Selection Methods  221(4)
Exercises  223(2)
8 Logistic Regression and the Generalized Linear Model  225(26)
8.1 The "Why" Behind Logistic Regression  225(4)
8.2 Example of Logistic Regression in R  229(3)
8.3 Introducing the Logit: The Log of the Odds  232(1)
8.4 The Natural Log of the Odds  233(2)
8.5 From Logits Back to Odds  235(1)
8.6 Full Example of Logistic Regression  236(4)
8.6.1 Challenger O-ring Data  236(4)
8.7 Logistic Regression on Challenger Data  240(1)
8.8 Analysis of Deviance Table  241(1)
8.9 Predicting Probabilities  242(1)
8.10 Assumptions of Logistic Regression  243(1)
8.11 Multiple Logistic Regression  244(3)
8.12 Training Error Rate Versus Test Error Rate  247(4)
Exercises  248(3)
9 Multivariate Analysis of Variance (MANOVA) and Discriminant Analysis  251(30)
9.1 Why Conduct MANOVA?  252(2)
9.2 Multivariate Tests of Significance  254(3)
9.3 Example of MANOVA in R  257(2)
9.4 Effect Size for MANOVA  259(2)
9.5 Evaluating Assumptions in MANOVA  261(1)
9.6 Outliers  262(1)
9.7 Homogeneity of Covariance Matrices  263(2)
9.7.1 What if the Box-M Test Had Suggested a Violation?  264(1)
9.8 Linear Discriminant Function Analysis  265(1)
9.9 Theory of Discriminant Analysis  266(1)
9.10 Discriminant Analysis in R  267(3)
9.11 Computing Discriminant Scores Manually  270(1)
9.12 Predicting Group Membership  271(1)
9.13 How Well Did the Discriminant Function Analysis Do?  272(3)
9.14 Visualizing Separation  275(1)
9.15 Quadratic Discriminant Analysis  276(2)
9.16 Regularized Discriminant Analysis  278(3)
Exercises  278(3)
10 Principal Component Analysis  281(26)
10.1 Principal Component Analysis Versus Factor Analysis  282(1)
10.2 A Very Simple Example of PCA  283(9)
10.2.1 Pearson's 1901 Data  284(2)
10.2.2 Assumptions of PCA  286(2)
10.2.3 Running the PCA  288(2)
10.2.4 Loadings in PCA  290(2)
10.3 What Are the Loadings in PCA?  292(1)
10.4 Properties of Principal Components  293(1)
10.5 Component Scores  294(1)
10.6 How Many Components to Keep?  295(2)
10.6.1 The Scree Plot as an Aid to Component Retention  295(2)
10.7 Principal Components of USA Arrests Data  297(4)
10.8 Unstandardized Versus Standardized Solutions  301(6)
Exercises  304(3)
11 Exploratory Factor Analysis  307(20)
11.1 Common Factor Analysis Model  308(2)
11.2 A Technical and Philosophical Pitfall of EFA  310(1)
11.3 Factor Analysis Versus Principal Component Analysis on the Same Data  311(3)
11.3.1 Demonstrating the Non-Uniqueness Issue  311(3)
11.4 The Issue of Factor Retention  314(1)
11.5 Initial Eigenvalues in Factor Analysis  315(1)
11.6 Rotation in Exploratory Factor Analysis  316(2)
11.7 Estimation in Factor Analysis  318(1)
11.8 Example of Factor Analysis on the Holzinger and Swineford Data  318(9)
11.8.1 Obtaining Initial Eigenvalues  323(1)
11.8.2 Making Sense of the Factor Solution  324(1)
Exercises  325(2)
12 Cluster Analysis  327(20)
12.1 A Simple Example of Cluster Analysis  329(3)
12.2 The Concepts of Proximity and Distance in Cluster Analysis  332(1)
12.3 k-Means Cluster Analysis  332(1)
12.4 Minimizing Criteria  333(1)
12.5 Example of k-Means Clustering in R  334(5)
12.5.1 Plotting the Data  335(4)
12.6 Hierarchical Cluster Analysis  339(4)
12.7 Why Clustering Is Inherently Subjective  343(4)
Exercises  344(3)
13 Nonparametric Tests  347(12)
13.1 Mann-Whitney U Test  348(1)
13.2 Kruskal-Wallis Test  349(2)
13.3 Nonparametric Test for Paired Comparisons and Repeated Measures  351(3)
13.3.1 Wilcoxon Signed-Rank Test and Friedman Test  351(3)
13.4 Sign Test  354(5)
Exercises  356(3)
References 359(4)
Index 363
DANIEL J. DENIS, PHD, is Professor of Quantitative Psychology in the Department of Psychology at the University of Montana. He is the author of Applied Univariate, Bivariate, and Multivariate Statistics and SPSS Data Analysis for Univariate, Bivariate, and Multivariate Statistics, both published by Wiley.