
Practical Fairness: Achieving Fair and Secure Data Models [Paperback]

  • Format: Paperback / softback, 175 pages, height x width: 233x178 mm
  • Publication date: 31-Dec-2020
  • Publisher: O'Reilly Media
  • ISBN-10: 1492075736
  • ISBN-13: 9781492075738
  • Price: 54.01 €*
  • * the price is final, i.e. no further discounts apply
  • Regular price: 63.54 €
  • You save 15%
  • Delivery from the publisher takes approximately 2-4 weeks
  • Free shipping

Fairness is becoming a paramount consideration for data scientists. Mounting evidence indicates that the widespread deployment of machine learning and AI in business and government is reproducing the same biases we're trying to fight in the real world. But what does fairness mean when it comes to code? This practical book covers basic concerns related to data security and privacy to help data and AI professionals use code that's fair and free of bias.

Many realistic best practices are emerging at all steps along the data pipeline today, from data selection and preprocessing to closed-model audits. Author Aileen Nielsen guides you through technical, legal, and ethical aspects of making code fair and secure, while highlighting up-to-date academic research and ongoing legal developments related to fairness and algorithms.
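One pre-processing technique covered in the book (in the chapter on the AIF360 pipeline) is reweighting. As a rough illustration of the underlying idea, popularized by Kamiran and Calders and implemented in toolkits such as AIF360, each training example can be weighted so that the protected attribute and the label become statistically independent. The sketch below is not the book's code; the function name and toy data are illustrative:

```python
# Illustrative sketch (not the book's code) of reweighing, a fairness
# pre-processing technique: each example receives the weight
#     w(g, y) = P(g) * P(y) / P(g, y)
# so that the protected attribute g and the label y are statistically
# independent in the reweighted training data.
from collections import Counter

def reweigh(groups, labels):
    """Return one weight per example, indexed like the inputs."""
    n = len(labels)
    count_g = Counter(groups)                      # marginal counts of g
    count_y = Counter(labels)                      # marginal counts of y
    count_gy = Counter(zip(groups, labels))        # joint counts of (g, y)
    return [
        (count_g[g] * count_y[y]) / (n * count_gy[(g, y)])
        for g, y in zip(groups, labels)
    ]

# Toy data: group 1 has twice the positive-label rate of group 0.
groups = [0, 0, 0, 1, 1, 1]
labels = [1, 0, 0, 1, 1, 0]
weights = reweigh(groups, labels)
# Under-represented combinations such as (group 0, label 1) receive
# weights above 1; over-represented combinations receive weights below 1.
```

A learner that accepts per-example weights (most scikit-learn estimators do, via `sample_weight`) can then be trained on the reweighted data without altering any feature or label values.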

  • Identify potential bias and discrimination in data science models
  • Use preventive measures to minimize bias when developing data modeling pipelines
  • Understand what data pipeline components implicate security and privacy concerns
  • Write data processing and modeling code that implements best practices for fairness
  • Recognize the complex interrelationships between fairness, privacy, and data security created by the use of machine learning models
  • Apply normative and legal concepts relevant to evaluating the fairness of machine learning models
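Identifying potential bias, as the first bullet above describes, usually starts with group-fairness metrics such as those the book surveys in its "Metrics for Fairness" chapter. The following is a minimal sketch of two widely used metrics; all function names and data are illustrative and not taken from the book:

```python
# Illustrative sketch of two common group-fairness metrics.
# A value of 0 for either metric indicates parity between the two groups.

def selection_rate(y_pred, group, value):
    """Fraction of positive predictions within one group."""
    preds = [p for p, g in zip(y_pred, group) if g == value]
    return sum(preds) / len(preds)

def demographic_parity_difference(y_pred, group):
    """Gap in selection rates between group 1 and group 0."""
    return selection_rate(y_pred, group, 1) - selection_rate(y_pred, group, 0)

def equal_opportunity_difference(y_true, y_pred, group):
    """Gap in true-positive rates between group 1 and group 0."""
    def tpr(value):
        pairs = [(t, p) for t, p, g in zip(y_true, y_pred, group)
                 if g == value and t == 1]
        return sum(p for _, p in pairs) / len(pairs)
    return tpr(1) - tpr(0)

# Toy data: group 1 is selected more often despite similar true labels.
y_true = [1, 0, 1, 0, 1, 0, 1, 0]
group  = [0, 0, 0, 0, 1, 1, 1, 1]
y_pred = [1, 0, 0, 0, 1, 1, 1, 0]

dpd = demographic_parity_difference(y_pred, group)
eod = equal_opportunity_difference(y_true, y_pred, group)
```

Which metric is appropriate depends on context; as the book discusses, several fairness criteria cannot be satisfied simultaneously, so the choice is a normative decision as much as a technical one.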
Table of Contents

Preface  vii
1 Fairness, Technology, and the Real World  1
    Fairness in Engineering Is an Old Problem  3
    Our Fairness Problems Now  5
    Legal Responses to Fairness in Technology  20
    The Assumptions and Approaches in This Book  22
    What If I'm Skeptical of All This Fairness Talk?  24
    What Is Fairness?  27
    Rules to Code By  30
2 Understanding Fairness and the Data Science Pipeline  33
    Metrics for Fairness  36
    Connected Concepts  57
    Automated Fairness?  61
    Checklist of Points of Entry for Fairness in the Data Science Pipeline  61
    Concluding Remarks  65
3 Fair Data  67
    Ensuring Data Integrity  69
    Choosing Appropriate Data  75
    Case Study: Choosing the Right Question for a Data Set and the Right Data Set for a Question  87
    Quality Assurance for a Data Set: Identifying Potential Discrimination  89
    A Timeline for Fairness Interventions  95
    Comprehensive Data-Acquisition Checklist  97
    Concluding Remarks  98
4 Fairness Pre-Processing  99
    Simple Pre-Processing Methods  100
    Suppression: The Baseline  100
    Massaging the Data Set: Relabeling  102
    AIF360 Pipeline  104
    The US Census Data Set  110
    Suppression  113
    Reweighting  115
    Learning Fair Representations  121
    Optimized Data Transformations  125
    Fairness Pre-Processing Checklist  130
    Concluding Remarks  132
5 Fairness In-Processing  133
    The Basic Idea  134
    The Medical Data Set  135
    Prejudice Remover  138
    Adversarial Debiasing  143
    In-Processing Beyond Antidiscrimination  150
    Model Selection  151
    Concluding Remarks  152
6 Fairness Post-Processing  155
    Post-Processing Versus Black-Box Auditing  156
    The Data Set  158
    Equality of Opportunity  161
    Calibration-Preserving Equalized Odds  166
    Concluding Remarks  172
7 Model Auditing for Fairness and Discrimination  173
    The Parameters of an Audit  174
    Scoping: What Should We Audit?  182
    Black-Box Auditing  182
    Concluding Remarks  199
8 Interpretable Models and Explainability Algorithms  201
    Interpretation Versus Explanation  202
    Interpretable Models  204
    Explainability Methods  215
    What Interpretation and Explainability Miss  233
    Interpretation and Explanation Checklist  237
    Concluding Remarks  238
9 ML Models and Privacy  239
    Membership Attacks  241
    Other Privacy Problems and Attacks  259
    Important Privacy Techniques  260
    Concluding Remarks  261
10 ML Models and Security  263
    Evasion Attacks  264
    Poisoning Attacks  279
    Concluding Remarks  284
11 Fair Product Design and Deployment  285
    Reasonable Expectations  286
    Fiduciary Obligations  288
    Respecting Traditional Spheres of Privacy and Private Life  289
    Value Creation  290
    Complex Systems  292
    Clear Security Promises and Delineated Limitations  294
    Possibility of Downstream Control and Verification  294
    Products That Work Better for Privileged People  295
    Dark Patterns  298
    Fair Products Checklist  300
    Concluding Remarks  301
12 Laws for Machine Learning  303
    Personal Data  309
    Algorithmic Decision Making  312
    Security  314
    Logical Processes  316
    Some Application-Specific Laws  318
    Concluding Remarks  321
Index  323