
E-book: Elements of Causal Inference

(University of Copenhagen), (Max Planck Institute for Intelligent Systems), (Max Planck Institute for Intelligent Systems)
  • Format: EPUB+DRM
  • Price: 42.64 €*
  • * the price is final, i.e. no further discounts apply
  • This e-book is intended for personal use only. E-books cannot be returned.

DRM restrictions

  • Copying (copy/paste):

    not allowed

  • Printing:

    not allowed

  • Usage:

    Digital rights management (DRM)
    The publisher has issued this e-book in encrypted form, which means that you must install special software to read it. You will also need to create an Adobe ID. The e-book can be read by 1 user and downloaded to up to 6 devices (all authorized with the same Adobe ID).

    Required software
    To read on a mobile device (phone or tablet), install this free application: PocketBook Reader (iOS / Android)

    To read on a PC or Mac, install Adobe Digital Editions (this is a free application built specifically for reading e-books; it should not be confused with Adobe Reader, which is probably already installed on your computer).

    This e-book cannot be read on an Amazon Kindle.

The mathematization of causality is a relatively recent development, and it has become increasingly important in data science and machine learning. This book offers a self-contained and concise introduction to causal models and how to learn them from data. After explaining the need for causal models and discussing some of the principles underlying causal inference, the book teaches readers how to use causal models: how to compute intervention distributions, how to infer causal models from observational and interventional data, and how causal ideas can be exploited for classical machine learning problems. All of these topics are discussed first in terms of two variables and then in the more general multivariate case. The bivariate case turns out to be a particularly hard problem for causal learning because there are no conditional independences of the kind exploited by classical methods in the multivariate case. The authors consider analyzing statistical asymmetries between cause and effect to be highly instructive, and they report on their decade of intensive research into this problem.
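To make the distinction between conditioning and intervening concrete, the following is a minimal simulation sketch (not taken from the book) of a hypothetical bivariate structural causal model X := N_X, Y := 2X + N_Y with independent Gaussian noises. Observing the effect Y is informative about its cause X, but intervening on Y by replacing its structural assignment leaves X untouched, so the interventional and observational distributions of X differ:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Hypothetical linear Gaussian SCM with X -> Y:
#   X := N_X,           N_X ~ N(0, 1)
#   Y := 2*X + N_Y,     N_Y ~ N(0, 1)
n_x = rng.normal(size=n)
n_y = rng.normal(size=n)
x = n_x
y = 2.0 * x + n_y

# Observational conditioning: E[X | Y ~= 2] is nonzero (about 0.8 for this
# model), because observing the effect Y carries information about its cause X.
obs_mean = x[np.abs(y - 2.0) < 0.05].mean()

# Intervention do(Y := 2): replace only Y's structural assignment; X keeps its
# own assignment, so E[X | do(Y := 2)] = E[X] = 0.
x_do = n_x
y_do = np.full(n, 2.0)
int_mean = x_do.mean()

print(obs_mean, int_mean)
```

The asymmetry (nonzero conditional mean, zero interventional mean) is exactly what intervention distributions capture and what purely observational conditionals miss.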

The book is accessible to readers with a background in machine learning or statistics, and can be used in graduate courses or as a reference for researchers. The text includes code snippets that can be copied and pasted, exercises, and an appendix with a summary of the most important technical concepts.

Preface xi
Notation and Terminology xv
1 Statistical and Causal Models 1(14)
1.1 Probability Theory and Statistics 1(2)
1.2 Learning Theory 3(2)
1.3 Causal Modeling and Learning 5(2)
1.4 Two Examples 7(8)
2 Assumptions for Causal Inference 15(18)
2.1 The Principle of Independent Mechanisms 16(6)
2.2 Historical Notes 22(4)
2.3 Physical Structure Underlying Causal Models 26(7)
3 Cause-Effect Models 33(10)
3.1 Structural Causal Models 33(1)
3.2 Interventions 34(2)
3.3 Counterfactuals 36(1)
3.4 Canonical Representation of Structural Causal Models 37(2)
3.5 Problems 39(4)
4 Learning Cause-Effect Models 43(28)
4.1 Structure Identifiability 44(18)
4.2 Methods for Structure Identification 62(7)
4.3 Problems 69(2)
5 Connections to Machine Learning, I 71(10)
5.1 Semi-Supervised Learning 71(6)
5.2 Covariate Shift 77(2)
5.3 Problems 79(2)
6 Multivariate Causal Models 81(54)
6.1 Graph Terminology 81(2)
6.2 Structural Causal Models 83(5)
6.3 Interventions 88(8)
6.4 Counterfactuals 96(4)
6.5 Markov Property, Faithfulness, and Causal Minimality 100(9)
6.6 Calculating Intervention Distributions by Covariate Adjustment 109(9)
6.7 Do-Calculus 118(2)
6.8 Equivalence and Falsifiability of Causal Models 120(2)
6.9 Potential Outcomes 122(4)
6.10 Generalized Structural Causal Models Relating Single Objects 126(3)
6.11 Algorithmic Independence of Conditionals 129(3)
6.12 Problems 132(3)
7 Learning Multivariate Causal Models 135(22)
7.1 Structure Identifiability 136(6)
7.2 Methods for Structure Identification 142(13)
7.3 Problems 155(2)
8 Connections to Machine Learning, II 157(14)
8.1 Half-Sibling Regression 157(2)
8.2 Causal Inference and Episodic Reinforcement Learning 159(8)
8.3 Domain Adaptation 167(2)
8.4 Problems 169(2)
9 Hidden Variables 171(26)
9.1 Interventional Sufficiency 171(3)
9.2 Simpson's Paradox 174(1)
9.3 Instrumental Variables 175(2)
9.4 Conditional Independences and Graphical Representations 177(8)
9.5 Constraints beyond Conditional Independence 185(10)
9.6 Problems 195(2)
10 Time Series 197(16)
10.1 Preliminaries and Terminology 197(2)
10.2 Structural Causal Models and Interventions 199(2)
10.3 Learning Causal Time Series Models 201(9)
10.4 Dynamic Causal Modeling 210(1)
10.5 Problems 211(2)
Appendices
Appendix A Some Probability and Statistics 213(8)
A.1 Basic Definitions 213(3)
A.2 Independence and Conditional Independence Testing 216(3)
A.3 Capacity of Function Classes 219(2)
Appendix B Causal Orderings and Adjacency Matrices 221(4)
Appendix C Proofs 225(10)
C.1 Proof of Theorem 4.2 225(1)
C.2 Proof of Proposition 6.3 226(1)
C.3 Proof of Remark 6.6 226(1)
C.4 Proof of Proposition 6.13 226(2)
C.5 Proof of Proposition 6.14 228(1)
C.6 Proof of Proposition 6.36 228(1)
C.7 Proof of Proposition 6.48 228(1)
C.8 Proof of Proposition 6.49 229(1)
C.9 Proof of Proposition 7.1 230(1)
C.10 Proof of Proposition 7.4 230(1)
C.11 Proof of Proposition 8.1 230(1)
C.12 Proof of Proposition 8.2 231(1)
C.13 Proof of Proposition 9.3 231(1)
C.14 Proof of Theorem 10.3 232(1)
C.15 Proof of Theorem 10.4 232(3)
Bibliography 235(28)
Index 263