Ebook: Machine Learning for High-Risk Applications: Approaches to Responsible AI

  • Length: 470 pages
  • Publication date: 17-Apr-2023
  • Publisher: O'Reilly Media
  • Language: English
  • ISBN-13: 9781098102395
  • Format: EPUB+DRM
  • Price: €47.96*
  • * the price is final, i.e. no further discounts apply
  • This ebook is intended for personal use only. Ebooks cannot be returned.

DRM restrictions

  • Copying (copy/paste): not allowed
  • Printing: not allowed

  • Usage:

    Digital rights management (DRM)
    The publisher has issued this ebook in encrypted form, which means that you must install special software to read it. You will also need to create an Adobe ID (more information here). The ebook can be read by 1 user and downloaded to up to 6 devices (all authorized with the same Adobe ID).

    Required software
    To read on a mobile device (phone or tablet), install this free app: PocketBook Reader (iOS / Android).

    To read on a PC or Mac, install Adobe Digital Editions. (This is a free application designed specifically for reading ebooks; not to be confused with Adobe Reader, which is probably already installed on your computer.)

    This ebook cannot be read on an Amazon Kindle.

The past decade has witnessed wide adoption of artificial intelligence and machine learning (AI/ML) technologies. However, their widespread implementation has often proceeded without adequate oversight, resulting in harmful outcomes that could have been avoided. Before we can realize AI/ML's true benefit, practitioners must understand how to mitigate its risks. This book describes responsible AI, a holistic approach to improving AI/ML technology, business processes, and cultural competencies that builds on best practices in risk management, cybersecurity, data privacy, and applied social science.

It's an ambitious undertaking that requires a diverse set of talents, experiences, and perspectives. Data scientists and nontechnical oversight folks alike need to be recruited and empowered to audit and evaluate high-impact AI/ML systems. Authors Patrick Hall, James Curtis, and Parul Pandey created this guide for a new generation of auditors and assessors who want to make AI systems better for organizations, consumers, and the public at large.

  • Learn how to create a successful and impactful responsible AI practice
  • Get a guide to existing standards, laws, and assessments for adopting AI technologies
  • Look at how existing roles at companies are evolving to incorporate responsible AI
  • Examine business best practices and recommendations for implementing responsible AI
  • Learn technical approaches for responsible AI at all stages of system development
Table of contents

Foreword
Preface

Part I. Theories and Practical Applications of AI Risk Management

1. Contemporary Machine Learning Risk Management
  A Snapshot of the Legal and Regulatory Landscape
  The Proposed EU AI Act
  US Federal Laws and Regulations
  State and Municipal Laws
  Basic Product Liability
  Federal Trade Commission Enforcement
  Authoritative Best Practices
  AI Incidents
  Cultural Competencies for Machine Learning Risk Management
  Organizational Accountability
  Culture of Effective Challenge
  Diverse and Experienced Teams
  Drinking Our Own Champagne
  Moving Fast and Breaking Things
  Organizational Processes for Machine Learning Risk Management
  Forecasting Failure Modes
  Model Risk Management Processes
  Beyond Model Risk Management
  Case Study: The Rise and Fall of Zillow's iBuying
  Fallout
  Lessons Learned
  Resources

2. Interpretable and Explainable Machine Learning
  Important Ideas for Interpretability and Explainability
  Explainable Models
  Additive Models
  Decision Trees
  An Ecosystem of Explainable Machine Learning Models
  Post Hoc Explanation
  Feature Attribution and Importance
  Surrogate Models
  Plots of Model Performance
  Cluster Profiling
  Stubborn Difficulties of Post Hoc Explanation in Practice
  Pairing Explainable Models and Post Hoc Explanation
  Case Study: Graded by Algorithm
  Resources

3. Debugging Machine Learning Systems for Safety and Performance
  Training
  Reproducibility
  Data Quality
  Model Specification for Real-World Outcomes
  Model Debugging
  Software Testing
  Traditional Model Assessment
  Common Machine Learning Bugs
  Residual Analysis
  Sensitivity Analysis
  Benchmark Models
  Remediation: Fixing Bugs
  Deployment
  Domain Safety
  Model Monitoring
  Case Study: Death by Autonomous Vehicle
  Fallout
  An Unprepared Legal System
  Lessons Learned
  Resources

4. Managing Bias in Machine Learning
  ISO and NIST Definitions for Bias
  Systemic Bias
  Statistical Bias
  Human Biases and Data Science Culture
  Legal Notions of ML Bias in the United States
  Who Tends to Experience Bias from ML Systems
  Harms That People Experience
  Testing for Bias
  Testing Data
  Traditional Approaches: Testing for Equivalent Outcomes
  A New Mindset: Testing for Equivalent Performance Quality
  On the Horizon: Tests for the Broader ML Ecosystem
  Summary Test Plan
  Mitigating Bias
  Technical Factors in Mitigating Bias
  The Scientific Method and Experimental Design
  Bias Mitigation Approaches
  Human Factors in Mitigating Bias
  Case Study: The Bias Bug Bounty
  Resources

5. Security for Machine Learning
  Security Basics
  The Adversarial Mindset
  CIA Triad
  Best Practices for Data Scientists
  Machine Learning Attacks
  Integrity Attacks: Manipulated Machine Learning Outputs
  Confidentiality Attacks: Extracted Information
  General ML Security Concerns
  Countermeasures
  Model Debugging for Security
  Model Monitoring for Security
  Privacy-Enhancing Technologies
  Robust Machine Learning
  General Countermeasures
  Case Study: Real-World Evasion Attacks
  Evasion Attacks
  Lessons Learned
  Resources

Part II. Putting AI Risk Management into Action

6. Explainable Boosting Machines and Explaining XGBoost
  Concept Refresher: Machine Learning Transparency
  Additivity Versus Interactions
  Steps Toward Causality with Constraints
  Partial Dependence and Individual Conditional Expectation
  Shapley Values
  Model Documentation
  The GAM Family of Explainable Models
  Elastic Net-Penalized GLM with Alpha and Lambda Search
  Generalized Additive Models
  GA2M and Explainable Boosting Machines
  XGBoost with Constraints and Post Hoc Explanation
  Constrained and Unconstrained XGBoost
  Explaining Model Behavior with Partial Dependence and ICE
  Decision Tree Surrogate Models as an Explanation Technique
  Shapley Value Explanations
  Problems with Shapley Values
  Better-Informed Model Selection
  Resources

7. Explaining a PyTorch Image Classifier
  Explaining Chest X-Ray Classification
  Concept Refresher: Explainable Models and Post Hoc Explanation Techniques
  Explainable Models Overview
  Occlusion Methods
  Gradient-Based Methods
  Explainable AI for Model Debugging
  Explainable Models
  ProtoPNet and Variants
  Other Explainable Deep Learning Models
  Training and Explaining a PyTorch Image Classifier
  Training Data
  Addressing the Dataset Imbalance Problem
  Data Augmentation and Image Cropping
  Model Training
  Evaluation and Metrics
  Generating Post Hoc Explanations Using Captum
  Evaluating Model Explanations
  The Robustness of Post Hoc Explanations
  Conclusion
  Resources

8. Selecting and Debugging XGBoost Models
  Concept Refresher: Debugging ML
  Model Selection
  Sensitivity Analysis
  Residual Analysis
  Remediation
  Selecting a Better XGBoost Model
  Sensitivity Analysis for XGBoost
  Stress Testing XGBoost
  Stress Testing Methodology
  Altering Data to Simulate Recession Conditions
  Adversarial Example Search
  Residual Analysis for XGBoost
  Analysis and Visualizations of Residuals
  Segmented Error Analysis
  Modeling Residuals
  Remediating the Selected Model
  Overemphasis of PAY_0
  Miscellaneous Bugs
  Conclusion
  Resources

9. Debugging a PyTorch Image Classifier
  Concept Refresher: Debugging Deep Learning
  Debugging a PyTorch Image Classifier
  Data Quality and Leaks
  Software Testing for Deep Learning
  Sensitivity Analysis for Deep Learning
  Remediation
  Sensitivity Fixes
  Conclusion
  Resources

10. Testing and Remediating Bias with XGBoost
  Concept Refresher: Managing ML Bias
  Model Training
  Evaluating Models for Bias
  Testing Approaches for Groups
  Individual Fairness
  Proxy Bias
  Remediating Bias
  Preprocessing
  In-processing
  Postprocessing
  Model Selection
  Conclusion
  Resources

11. Red-Teaming XGBoost
  Concept Refresher
  CIA Triad
  Attacks
  Countermeasures
  Model Training
  Attacks for Red-Teaming
  Model Extraction Attacks
  Adversarial Example Attacks
  Membership Attacks
  Data Poisoning
  Backdoors
  Conclusion
  Resources

Part III. Conclusion

12. How to Succeed in High-Risk Machine Learning
  Who Is in the Room?
  Science Versus Engineering
  The Data-Scientific Method
  The Scientific Method
  Evaluation of Published Results and Claims
  Apply External Standards
  Commonsense Risk Mitigation
  Conclusion
  Resources

Index
Patrick Hall is principal scientist at bnh.ai, a D.C.-based law firm focused on AI and data analytics, and visiting faculty at the George Washington University School of Business (GWSB). James Curtis is a quantitative researcher focused on US power markets and renewable resource asset management. Parul Pandey is a machine learning engineer at Weights & Biases.