E-book: Statistical Inference Based on Divergence Measures

Leandro Pardo (University of Madrid, Spain)
  • Format: EPUB+DRM
  • Price: 59.79 €*
  • * The price is final; no further discounts apply.
  • This e-book is intended for personal use only. E-books cannot be returned.

DRM restrictions

  • Copying (copy/paste): not allowed
  • Printing: not allowed

  • Usage:

    Digital rights management (DRM)
    The publisher has issued this e-book in encrypted form, which means that you must install special software to read it. You must also create an Adobe ID. The e-book can be read by 1 user and downloaded to up to 6 devices (all authorized with the same Adobe ID).

    Required software
    To read on a mobile device (phone or tablet), install this free app: PocketBook Reader (iOS / Android)

    To read on a PC or Mac, install Adobe Digital Editions (a free application designed specifically for reading e-books; it should not be confused with Adobe Reader, which is probably already installed on your computer).

    This e-book cannot be read on an Amazon Kindle.

The idea of using functionals of information theory, such as entropies or divergences, in statistical inference is not new. Yet even though divergence statistics have become a very good alternative to the classical likelihood ratio test and the Pearson-type statistic in discrete models, many statisticians remain unaware of this powerful approach.
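
To make the claim concrete, here is a minimal sketch of the family in question, in standard notation (an illustration, not a quotation from the book). For probability vectors $p=(p_1,\dots,p_M)$ and $q=(q_1,\dots,q_M)$ and a convex function $\phi$ with $\phi(1)=0$, the phi-divergence is

$$ D_\phi(p,q) \;=\; \sum_{i=1}^{M} q_i\,\phi\!\left(\frac{p_i}{q_i}\right), $$

and the goodness-of-fit statistic for $H_0\colon p=p^0$, based on the observed relative frequencies $\hat p$ from a sample of size $n$, is

$$ T_n^{\phi} \;=\; \frac{2n}{\phi''(1)}\, D_\phi(\hat p,\,p^0), $$

which is asymptotically chi-squared with $M-1$ degrees of freedom under $H_0$. Taking $\phi(x)=\tfrac{1}{2}(x-1)^2$ recovers Pearson's $X^2$, while $\phi(x)=x\log x - x + 1$ recovers the likelihood ratio statistic $G^2$, so both classical tests are members of the same family.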

Statistical Inference Based on Divergence Measures explores classical problems of statistical inference, such as estimation and hypothesis testing, on the basis of measures of entropy and divergence. The first two chapters form an overview, from a statistical perspective, of the most important measures of entropy and divergence and study their properties. The author then examines the statistical analysis of discrete multivariate data, with emphasis on problems in contingency tables and loglinear models, using phi-divergence test statistics as well as minimum phi-divergence estimators. The final chapter looks at testing in general populations, presenting the interesting possibility of introducing alternative test statistics to classical ones such as Wald, Rao, and likelihood ratio. Each chapter concludes with exercises that clarify the theoretical results and present additional results that complement the main discussions.
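
The minimum phi-divergence estimators mentioned above admit an equally compact sketch (again in standard notation, as an illustration rather than a quotation): for a parametric multinomial model with probability vector $p(\theta)$,

$$ \hat\theta_\phi \;=\; \arg\min_{\theta\in\Theta} D_\phi\big(\hat p,\,p(\theta)\big), $$

and the Kullback-Leibler choice $\phi(x)=x\log x - x + 1$ reduces this to the usual maximum likelihood estimator, which is how the classical theory embeds in the divergence-based one.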

Clear, comprehensive, and logically developed, this book offers a unique opportunity to gain not only a new perspective on some standard statistics problems, but also the tools to put it into practice.

Reviews

"There are a number of measures of divergence between distributions. Describing them properly requires a very mathematically well-written book, which the author here provides This book is a fine course text, and is beautifully produced. There are about four hundred references. Recommended."

-ISI Short Book Reviews

". . . suitable for a beginning graduate course on information theory based on statistical inference. This book will be a useful and important addition to the resources of practitioners and many others engaged information theory and statistics. Overall, this is an impressive book on information theory based statistical inference."

-Prasanna Sahoo, Zentralblatt Math, 2008, Vol. 1120

Contents

Preface  ix
Divergence Measures: Definition and Properties  1
  Introduction  1
  Phi-divergence Measures between Two Probability Distributions: Definition and Properties  3
    Basic Properties of the Phi-divergence Measures  8
  Other Divergence Measures between Two Probability Distributions  18
    Entropy Measures  19
    Burbea and Rao Divergence Measures  25
    Bregman's Distances  27
  Divergence among k Populations  27
  Phi-disparities  29
  Exercises  31
  Answers to Exercises  34
Entropy as a Measure of Diversity: Sampling Distributions  55
  Introduction  55
  Phi-entropies. Asymptotic Distribution  58
  Testing and Confidence Intervals for Phi-entropies  64
    Test for a Predicted Value of the Entropy of a Population (Diversity of a Population)  64
    Test for the Equality of the Entropies of Two Independent Populations (Equality of Diversities of Two Populations)  65
    Test for the Equality of the Entropies of r Independent Populations  65
    Tests for Parameters  70
    Confidence Intervals  74
  Multinomial Populations: Asymptotic Distributions  75
    Test of Discrete Uniformity  84
  Maximum Entropy Principle and Statistical Inference on Condensed Ordered Data  87
  Exercises  93
  Answers to Exercises  98
Goodness-of-fit: Simple Null Hypothesis  113
  Introduction  113
  Phi-divergences and Goodness-of-fit with Fixed Number of Classes  116
  Phi-divergence Test Statistics under Sparseness Assumptions  125
  Nonstandard Problems: Test Statistics Based on Phi-divergences  132
    Goodness-of-fit with Quantile Characterization  132
    Goodness-of-fit with Dependent Observations  135
    Misclassified Data  140
    Goodness-of-fit for and against Order Restrictions  144
  Exercises  146
  Answers to Exercises  150
Optimality of Phi-divergence Test Statistics in Goodness-of-fit  165
  Introduction  165
  Asymptotic Efficiency  166
    Pitman Asymptotic Relative Efficiency  166
    Bahadur Efficiency  168
    Approximations to the Power Function: Comparisons  174
  Exact and Asymptotic Moments: Comparison  175
    Under the Null Hypothesis  176
    Under Contiguous Alternative Hypotheses  185
    Corrected Phi-divergence Test Statistic  189
  A Second Order Approximation to the Exact Distribution  190
  Exact Powers Based on Exact Critical Regions  194
  Small Sample Comparisons for the Phi-divergence Test Statistics  198
  Exercises  203
  Answers to Exercises  204
Minimum Phi-divergence Estimators  213
  Introduction  213
  Maximum Likelihood and Minimum Phi-divergence Estimators  215
    Minimum Power-divergence Estimators in Normal and Weibull Populations  220
  Properties of the Minimum Phi-divergence Estimator  224
    Asymptotic Properties  227
    Minimum Phi-divergence Functional Robustness  233
  Normal Mixtures: Minimum Phi-divergence Estimator  235
  Minimum Phi-divergence Estimator with Constraints: Properties  244
  Exercises  247
  Answers to Exercises  249
Goodness-of-fit: Composite Null Hypothesis  257
  Introduction  257
  Asymptotic Distribution with Fixed Number of Classes  259
  Nonstandard Problems: Test Statistics Based on Phi-divergences  271
    Maximum Likelihood Estimator Based on Original Data and Test Statistics Based on Phi-divergences  271
    Goodness-of-fit with Quantile Characterization  276
    Estimation from an Independent Sample  278
    Goodness-of-fit with Dependent Observations  279
    Goodness-of-fit with Constraints  281
  Exercises  282
  Answers to Exercises  284
Testing Loglinear Models Using Phi-divergence Test Statistics  297
  Introduction  297
  Loglinear Models: Definition  305
  Asymptotic Results for Minimum Phi-divergence Estimators in Loglinear Models  307
  Testing in Loglinear Models  309
  Simulation Study  325
  Exercises  336
  Answers to Exercises  340
Phi-divergence Measures in Contingency Tables  351
  Introduction  351
  Independence  352
    Restricted Minimum Phi-divergence Estimator  354
    Test of Independence  360
    Multiway Contingency Tables  368
  Symmetry  368
    Test of Symmetry  373
    Symmetry in a Three-way Contingency Table  378
  Marginal Homogeneity  382
  Quasi-symmetry  388
  Homogeneity  394
  Exercises  399
  Answers to Exercises  400
Testing in General Populations  407
  Introduction  407
  Simple Null Hypotheses: Wald, Rao, Wilks and Phi-divergence Test Statistics  408
  Confidence Regions  420
  Composite Null Hypothesis  420
  Multi-sample Problem  430
  Some Topics in Multivariate Analysis  436
  Exercises  437
  Answers to Exercises  440
References  459
Index  485

