
Forest Sampling Desk Reference [Hardback]

Evert W. Johnson (Professor of Forestry Emeritus, Auburn, Alabama, USA)
  • Format: Hardback, 1008 pages, height x width: 254x178 mm, weight: 2020 g, 135 tables, black and white
  • Publication date: 27-Jun-2000
  • Publisher: CRC Press Inc
  • ISBN-10: 0849300584
  • ISBN-13: 9780849300585
Designed to help forest practitioners and students choose the most appropriate sampling procedure for a given purpose, this volume explains the uses and limitations of individual forest sampling designs in forest inventory operations. Covers topics including accuracy and precision of data; symbology; averages; measures of dispersion; probability; discrete and continuous probability distributions; and sampling theory. Additional features include detailed derivations of most of the statistical methods used in forestry and a thorough treatment of systematic and other frequently used forms of sampling in one resource. Requires some knowledge of differential and integral calculus and matrix algebra, but the mathematical background of most forestry students should be adequate. Annotation c. Book News, Inc., Portland, OR (booknews.com)

Should damaged trees be clear-cut and replanted, or allowed to recover naturally? Is the deer herd large enough to survive hunting pressure? Managing forest resources entails numerous decisions, and making them intelligently requires sound information about the resource in question. Ideally, assessments would be based on the entire population involved. In practice, however, the costs in time and money often prevent this, and evaluations are instead made by sampling a small portion of the whole.
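The idea of evaluating a small portion to learn about the whole can be sketched as a simple random sample of a hypothetical stand. The per-tree volumes, sample size, and function name below are invented for illustration and are not taken from the book:

```python
import random
import statistics

def srs_estimate(population, n, seed=0):
    """Estimate the population mean from a simple random sample of size n,
    with a standard error that includes the finite population correction."""
    rng = random.Random(seed)
    sample = rng.sample(population, n)       # sampling without replacement
    mean = statistics.fmean(sample)
    s2 = statistics.variance(sample)         # sample variance
    N = len(population)
    se = (s2 / n * (1 - n / N)) ** 0.5       # SE of the mean, with fpc
    return mean, se

# Hypothetical stand: per-tree volumes for 1000 trees (values are invented)
population = [10 + 5 * random.Random(i).random() for i in range(1000)]
mean, se = srs_estimate(population, n=50)
```

Measuring 50 trees instead of 1000 trades a small, quantifiable uncertainty (the standard error) for a large saving in field time, which is exactly the trade-off the book formalizes.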

The most complete treatment of systematic sampling in one volume, Forest Sampling Desk Reference explains the uses and limitations of individual sampling designs in forest inventory operations. The text contains detailed derivations of the statistical methods most commonly used in forestry, together with examples that illustrate them.

The author covers probability, probability distributions, and the development of regression models. The text discusses systematic sampling, describing its benefits and shortcomings in detail, and provides an in-depth examination of the controversial 3P sampling procedure.
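Systematic sampling, as described above, can be sketched as a 1-in-k selection with a random start. The frame of 100 numbered plots below is hypothetical:

```python
import random

def systematic_sample(frame, k, seed=None):
    """Draw a 1-in-k systematic sample: choose a random start in [0, k),
    then take every k-th element of the ordered frame."""
    start = random.Random(seed).randrange(k)
    return frame[start::k]

frame = list(range(1, 101))                 # e.g. 100 numbered field plots
sample = systematic_sample(frame, k=10, seed=1)
# one element from each block of 10, spaced exactly 10 apart
```

The fixed spacing is what makes the design convenient in the field and, as the book's systematic sampling chapter details, also what makes its variance hard to estimate when the population has trend or periodicity.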

The validity and strength of sampling results vary from option to option, as do their costs in money and time. Before selecting a sampling procedure, you need to know each option's strengths and weaknesses in relation to its expense. Forest Sampling Desk Reference supplies the background necessary for making these decisions.
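As one taste of the precision-versus-cost trade-offs the book treats (see its chapter on stratified random sampling and allocation of the sample), here is a minimal sketch of two allocation rules, proportional and Neyman. The stratum sizes and standard deviations are invented for illustration:

```python
def proportional_allocation(sizes, n):
    """Allocate total sample n to strata in proportion to stratum size N_h."""
    N = sum(sizes)
    return [round(n * Nh / N) for Nh in sizes]

def neyman_allocation(sizes, sds, n):
    """Neyman allocation: n_h proportional to N_h * S_h, so more variable
    strata get more of the sample. Rounding may need a small adjustment
    to make the allocations sum exactly to n."""
    weights = [Nh * Sh for Nh, Sh in zip(sizes, sds)]
    total = sum(weights)
    return [round(n * w / total) for w in weights]

# Hypothetical forest with three strata (sizes and SDs are invented)
sizes = [500, 300, 200]
sds = [2.0, 5.0, 8.0]
prop = proportional_allocation(sizes, 100)   # -> [50, 30, 20]
ney = neyman_allocation(sizes, sds, 100)     # -> [24, 37, 39]
```

For the same total cost of 100 plots, Neyman allocation shifts effort toward the small but highly variable stratum, which is why the two rules can give noticeably different precision.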
Introduction to Sampling
Introduction
1(1)
Some Basic Concepts and Terminology
1(2)
Reasons Sampling May Be Preferred over Complete Enumeration
3(1)
Some Further Terminology
4(3)
Accuracy and Precision of Data
Accuracy of Data
7(1)
Precision
7(1)
Accuracy and Precision with Discrete Variables
7(1)
Accuracy and Precision with Continuous Variables
8(1)
Significant Figures
9(1)
Rounding Off
10(1)
Computations Using Approximate Numbers
10(3)
Data Arrangement
Introduction
13(1)
Methods of Arrangement
13(2)
Rules Controlling Frequency Distributions
15(4)
Number of Classes
15(1)
Class Width or Interval
15(1)
Class Midpoints
16(1)
Class Limits
17(2)
Cumulative Frequency Distributions
19(1)
Relative Frequency Distributions
20(1)
Graphical Presentations of Frequency Distributions
20(4)
Descriptive Terms Used for Frequency Distributions
24(5)
Symbology
Introduction
29(1)
Identification of Variables
29(1)
Subscripting
29(1)
Summation Symbology
30(2)
Distinguishing between Populations and Sample Values
32(1)
Averages
Introduction
33(1)
Assessing Averages
33(1)
The Arithmetic Mean
33(11)
Ungrouped Data
34(1)
Grouped Data
35(3)
Weighted Mean
38(2)
Characteristics of the Arithmetic Mean --- The Concept of Least Squares
40(2)
Errors and Residuals
42(1)
Further Characteristics of the Arithmetic Mean
42(2)
The Median
44(4)
Ungrouped Data
44(1)
Grouped Data
45(2)
Characteristics of the Median
47(1)
Fractiles
48(1)
The Mode
48(1)
Determination of the Mode
48(1)
Multimodality
48(1)
Characteristics of the Mode
49(1)
The Geometric Mean
49(3)
Ungrouped Data
49(2)
Grouped Data
51(1)
Use of the Geometric Mean
51(1)
Characteristics of the Geometric Mean
52(1)
The Harmonic Mean
52(4)
Ungrouped Data
53(1)
Grouped Data
54(1)
Use of the Harmonic Mean
54(1)
Characteristics of the Harmonic Mean
55(1)
The Quadratic Mean
56(3)
Ungrouped Data
56(1)
Grouped Data
56(1)
Use of the Quadratic Mean
57(1)
Characteristics of the Quadratic Mean
57(2)
Measures of Dispersion
Introduction
59(1)
The Range
59(1)
The Interquartile and Semi-Interquartile Ranges
60(1)
The Average Deviation
61(2)
The Variance
63(4)
The Standard Deviation
67(1)
The Coefficient of Variation
68(1)
Probability
Some Basic Concepts and Terms
69(2)
Relative Frequency
71(4)
Empirical Probability
75(1)
Classical Probability
76(4)
Probability of Alternative Events
80(1)
Probability of Multiple Events
81(12)
Sampling with and without Replacement
82(2)
Probability in the Case of Independent Events
84(2)
Dependent Events and Conditional Probability
86(5)
Bayesian Probability
91(1)
Hypergeometric Probabilities
92(1)
Objective and Subjective Probabilities
93(4)
Probability Distributions
Random Variables
97(2)
Probability Distributions
99(3)
Probability Distribution and Empirical Relative Frequency Distributions
102(1)
Discrete Probability Distributions
103(1)
Continuous Probability Distributions
104(2)
Cumulative Probability Distributions
106(3)
The Mean of a Random Variable
109(2)
The Function of a Random Variable
111(4)
The Mean of a Function of a Random Variable
115(2)
Expectation
117(1)
The Variance of a Random Variable
117(3)
Moments, Skewness, and Kurtosis
120(4)
Moment-Generating Functions
124(5)
Multidimensional Probability Distributions
Multidimensional Random Variables
129(2)
Joint Probability Distributions
131(1)
Marginal Probabilities
132(3)
Conditional Probabilities
135(2)
Joint Probability Distribution Functions
137(5)
Functions of Multidimensional Random Variables
142(10)
The pdf of a Sum
147(3)
The pdf of a Product
150(1)
The pdf of a Quotient
151(1)
Expectation of Functions of Multidimensional Random Variables
152(11)
Correlation
163(2)
Variances of Functions of Multidimensional Random Variables
165(5)
The Regression of the Mean
170(5)
Discrete Probability Distributions
Introduction
175(1)
The Rectangular Probability Distribution (D:M, YA, YB)
175(3)
The Binomial Probability Distribution (B:n,p)
178(9)
The Hypergeometric Probability Distribution (H:N, Ns, n)
187(5)
The Poisson Probability Distribution (p:λ)
192(8)
The Geometric Probability Distribution (G:p)
200(3)
The Negative Binomial and Pascal Probability Distributions (NB:m, p and C:m, p)
203(10)
The Multinomial Probability Distribution (M:p1, p2,...)
213(2)
Continuous Probability Distributions
The Rectangular Probability Distribution (R:yA, yB)
215(1)
The Normal Probability Distribution (N:μ, σ)
216(16)
The Gamma Probability Distribution (Γ:r, a)
232(7)
The Exponential Probability Distribution (E:a)
239(2)
The Chi-Square Probability Distribution (χ2:k)
241(3)
The Beta Probability Distribution (β:r1, r2)
244(6)
The Bivariate Normal Probability Distribution (N:μ1, σ1, N2:μ2, σ2)
250(4)
The Weibull Probability Distribution (W:r, a, ya)
254(7)
Sampling Theory
Introduction
261(1)
Sampling Design
261(1)
Sampling, Sampling Units, and Sampling Frames
261(1)
Random Sampling and Random Numbers
262(2)
Sampling Frames and Infinite Populations
264(1)
The Sample Size and the Sampling Fraction
265(1)
Sampling Distributions
265(3)
Standard Errors
268(1)
Bias
268(1)
Precision
269(1)
Sampling Distribution of Means
269(7)
Sampling Distribution of Variances
276(9)
Sampling Distribution of Standard Deviations
285(5)
Sampling Distribution of Proportions
290(4)
Sampling Distribution of Covariances
294(5)
Sampling Distribution of Differences
299(10)
Estimation of Parameters
Introduction
309(1)
Estimators and Estimates
309(1)
Point Estimates
310(1)
Criteria for Judging Estimators of Parameters
310(1)
Laws of Large Numbers
311(1)
Maximum Likelihood Estimators
312(5)
Interval Estimates
317(1)
Confidence Interval Estimates of Population Parameters Using the Normal Distribution
318(2)
Confidence Interval Estimates of Population Means Using Student's t Distribution
320(8)
Chebyshev's Theorem
328(2)
Confidence Interval Estimates of Population Proportions
330(18)
True Confidence Intervals in the Case of the Binomial Distribution
331(3)
True Confidence Intervals in the Case of the Hypergeometric Distribution
334(2)
Confidence Intervals Based on the Normal Distribution
336(4)
Confidence Limits Based on the Use of the F Distribution
340(4)
Confidence Limits in the Case of the Poisson Distribution
344(3)
Confidence Intervals from Published Curves and Tables
347(1)
Confidence Interval Estimates of Population Variances and Standard Deviations
348(5)
One-Sided Confidence Intervals
353(1)
Sampling Errors, Allowable Errors, and Needed Sample Size
354(7)
Fundamentals of Hypothesis Testing
Introduction
361(1)
Statistical Hypotheses
361(1)
Type I and Type II Errors
362(1)
Statistical Significance
363(1)
Homogeneity Tests
363(2)
One- and Two-Tailed Tests
365(1)
More on Type I and Type II Errors
365(3)
The Power of the Test
368(1)
Decision Rules
368(1)
Test Procedures
369(1)
Homogeneity Test Using a Sample Mean
370(4)
Test of a Difference (Unpaired)
374(4)
Test of a Difference (Paired)
378(6)
The Linear Additive Model
384(3)
The F Distribution
387(12)
Testing for Homogeneity of Variance Using F
399(1)
Bartlett's Test for Homogeneity of Variance
400(2)
Testing for Goodness of Fit Using χ2
402(3)
Testing for Normality
405(5)
The Analysis of Variance
410(9)
Transformations
419(1)
Multiple Comparisons
420(9)
The Least Significant Difference
420(2)
The Studentized Range
422(7)
Regression and Correlation
Introduction
429(1)
Theory of Regression
429(8)
The Variables
430(1)
Coordinate Systems
430(1)
Two-Variable Regression
430(2)
Multiple Variable Regression
432(5)
Fitting a Regression
437(42)
The Controlling Criteria
437(1)
Manual or Freehand Fitting
438(2)
Modeling
440(4)
Developing a Model
444(8)
Logarithmic Transformations of Nonlinear Models
452(1)
The MATCHACURVE Procedure
453(3)
Least Squares Fitting
456(2)
Fitting a Two-Dimensional Model
458(7)
Fitting a Multidimensional Model
465(4)
Weighted Regression
469(8)
Conditioned Regression
477(2)
Dummy Variables
479(1)
Variance about the Regression
480(6)
Confidence Intervals for Regression Estimates
486(10)
Correlation
496(8)
Hypothesis Testing in Regression Operations
504(8)
Stepwise Variable Selection
512(9)
Stratified Random Sampling
Introduction
521(1)
The Stratification Process
521(5)
The Stratified Population
526(4)
Estimation of the Parameters
530(7)
Allocation of the Sample
537(21)
Total Sample Size
538(1)
Equal Allocation
539(3)
Proportional Allocation
542(4)
Value Allocation
546(2)
Optimum Allocation
548(6)
Neyman Allocation
554(4)
Comparison of the Methods of Allocation
558(6)
Stratification and the Estimation of Proportions
564(10)
Effectiveness of Stratification
574(1)
Stratum Sizes
575(1)
Miscellany
576(3)
Cluster Sampling
Introduction
579(1)
Types of Cluster Sampling
580(1)
Cluster Configuration
580(1)
Population Parameters
581(3)
Cluster Parameters
581(2)
Overall Parameters
583(1)
Estimation of the Parameters
584(14)
Cluster Statistics
584(2)
Overall Statistics
586(2)
The Variance of the Overall Total
588(6)
The Variance of the Overall Mean
594(4)
The Intraclass Correlation Coefficient
598(4)
Overlapping and Interlocking of Clusters
602(11)
Examples
613(25)
Two-Stage Sampling When 1 < m < M, 1 < ni < Ni, and Ni Is Constant
613(6)
Simple Cluster Sampling When 1 < m < M (Example I)
619(3)
Two-Stage Sampling When m = M and 1 < ni < Ni
622(5)
Two-Stage Sampling When m = M and ni = 1
627(2)
Simple Cluster Sampling When m = 1
629(3)
Simple Cluster Sampling When 1 < m < M (Example II)
632(3)
Two-Stage Sampling When m = M = 1 and 1 < ni < Ni
635(1)
Two-Stage Sampling When 1 < m < M, 1 < ni < Ni, and Ni Is Not Constant
636(2)
Variance Functions
638(13)
Cost Functions
651(4)
Efficiency of Cluster Sampling
655(4)
Two-Stage Cluster Sampling
655(3)
Simple Cluster Sampling
658(1)
Sample Size Determination
659(7)
Two-Stage Cluster Sampling
660(3)
Simple Cluster Sampling
663(3)
Optimum Cluster Size
666(3)
Cluster Sampling for Proportions
669(11)
Using Two-Stage Cluster Sampling, When 1 < m < M and 1 < ni < Ni
670(6)
Using Simple Cluster Sampling, When 1 < m < M and ni = a = Ni = A
676(4)
Three-Stage Sampling
680(16)
The Parameters
681(1)
Estimation of the Parameters
682(1)
Variance of the Overall Mean
683(2)
Example of Three-Stage Sampling
685(5)
Sample Sizes
690(6)
Stratified Cluster Sampling
696(11)
The Parameters and Their Estimation
696(1)
Example of Stratified Cluster Sampling Using Subsampling
697(6)
Example of Stratified Cluster Sampling without Subsampling
703(2)
Sample Size
705(2)
Systematic Sampling
Introduction
707(1)
Approaches to Systematic Sampling
707(1)
Element Spacing and Sampling Intensity
707(4)
Cluster Sampling Frames
711(1)
Under Approach 1
711(1)
Under Approach 2
712(1)
Random and Arbitrary Starts
712(2)
Representativeness of the Sample
714(1)
Population Types
714(6)
Random Populations
714(1)
Ordered Populations
714(1)
Periodic Populations
715(2)
Uniform Populations
717(1)
Stratified Populations
718(1)
Autocorrelated Populations
718(2)
Population Parameters
720(1)
Variance of the Overall Total and Mean
720(26)
Under Approach 1
720(6)
Under Approach 2
726(6)
Effect of a Linear Trend
732(5)
Effect of a Curvilinear Trend
737(1)
Removing Trend Effects Using Regression
737(3)
Removing Trend Effects without Using Regression
740(3)
Effect of a Naturally Stratified Population
743(1)
Effects of Periodicity
744(1)
Effect of Autocorrelation
745(1)
Estimation of the Parameters
746(4)
Under Approach 1
746(2)
Under Approach 2
748(2)
Approximations of the Variance of the Overall Mean
750(11)
Use of Multiple Random Starts
750(1)
Using Random Sampling Formulae
750(4)
Grouping Pairs of Clusters
754(1)
Method of Successive Differences
755(2)
Method of Balanced Differences
757(4)
Ratio Estimation
Introduction
761(1)
Estimating R
762(3)
Bias of r
765(10)
Ratio of Means
765(5)
Mean of Ratios
770(3)
Unbiased Ratio-Type Estimators
773(2)
Approximations of the Ratio Estimators
775(3)
Ratio of Means
775(2)
Mean of Ratios
777(1)
Using a Ratio to Estimate a Population Total and Mean
778(3)
Ratio of Means
778(2)
Mean of Ratios
780(1)
Unbiased Ratio-Type Estimators
781(1)
Variance of the Ratio Estimates
781(7)
Ratio of Means
781(4)
Mean of Ratios
785(3)
Unbiased Ratio-Type Estimators
788(1)
Sample Size
788(5)
Ratio of Means
788(4)
Mean of Ratios
792(1)
Efficiency of Ratio Estimators
793(3)
Ratio of Means
793(2)
Mean of Ratios
795(1)
The Hartley--Ross Unbiased Ratio Estimator
795(1)
Ratio Estimation and Stratified Sampling
796(10)
The Combined Ratio Estimate
797(3)
The Separate Ratios Estimate
800(2)
Allocation of the Sample
802(2)
Discussion
804(2)
Ratio Estimation and Cluster Sampling
806(3)
Comments
809(2)
Double Sampling
Introduction
811(1)
Using Regression
812(23)
Estimating μy
812(3)
The Variance of the Estimate
815(5)
Estimating the Variance of the Estimate
820(1)
Example
821(2)
Relative Precision
823(3)
Sample Size
826(1)
Using a Budget Constraint
826(4)
Using an Allowable Sampling Error Constraint
830(3)
Relative Efficiency
833(2)
Using Ratio Estimators
835(12)
Estimating μy
835(3)
The Variance of the Estimate
838(1)
The Approximate Variance When Subsampling Is Used
838(5)
The Approximate Variance When Subsampling Is Not Used
843(1)
Estimating the Variance of the Estimate
844(1)
Sample Size
844(3)
Sampling with Unequal Probabilities
Introduction
847(1)
Mathematical Rationale
847(9)
Sampling with Probability Proportional to Size (PPS)
856(44)
List Sampling
856(2)
Angle-Count Sampling
858(1)
Point Sampling
858(1)
Tree-Centered Plots
859(1)
Probabilities
860(3)
Horizontal Point Sampling
863(6)
Instrumentation
869(1)
The Critical Angle
869(1)
Slopover
870(2)
Vertical Point Sampling
872(2)
Line Sampling
874(2)
Statistical Properties
876(9)
Compensating for Slopover and Borderline Trees
885(3)
Volume Estimation
888(4)
Estimating the Number of Trees
892(2)
Estimating the Total Amount of dbh
894(1)
Estimating the Mean dbh
895(1)
Estimating the Mean Volume per Tree
896(1)
Example of the Computations Associated with Horizontal Point Sampling
896(4)
Remarks
900(1)
3P Sampling
900(31)
The Context
901(1)
An Equal Probability Solution
901(3)
Adjusted 3P Sampling
904(2)
The Bias in Adjusted 3P Sampling
906(8)
Expected Sample Size
914(2)
Probability of n = 0
916(1)
Variance of the Adjusted 3P Estimate
917(2)
Control of the Probability of Selection
919(1)
Unadjusted 3P Sampling
920(3)
Variance of the Unadjusted 3P Sample
923(3)
Interpenetrating Samples
926(1)
Using Random Numbers
926(1)
Use of Dendrometers
926(1)
Supporting Computer Programs
926(2)
Units of Measure
928(1)
Grosenbaugh's Symbology
928(1)
Remarks
928(3)
Appendix 931(32)
References 963(8)
Index 971


Evert W. Johnson