Foreword    xi
Preface    xiii

1 An Overview of Ray    1
    Three Layers: Core, Libraries, and Ecosystem    5
    A Distributed Computing Framework    6
    A Suite of Data Science Libraries    8
    Ray AIR and the Data Science Workflow    8
    Data Processing with Ray Datasets    10
|
2 Getting Started with Ray Core    23
    An Introduction to Ray Core    24
    A First Example Using the Ray API    25
    An Overview of the Ray Core API    35
    Understanding Ray System Components    36
    Scheduling and Executing Work on a Node    36
    Distributed Scheduling and Execution    39
    A Simple MapReduce Example with Ray    41
    Mapping and Shuffling Document Data    43
|
3 Building Your First Distributed Application    49
    Introducing Reinforcement Learning    49
    Setting Up a Simple Maze Problem    50
    Training a Reinforcement Learning Model    59
    Building a Distributed Ray App    62
|
4 Reinforcement Learning with Ray RLlib    69
    Getting Started with RLlib    71
    Building a Gym Environment    71
    Using the RLlib Python API    75
    Configuring RLlib Experiments    82
    Rollout Worker Configuration    83
    Environment Configuration    84
    Working with RLlib Environments    85
    An Overview of RLlib Environments    85
    Working with Multiple Agents    86
    Working with Policy Servers and Clients    90
    Building an Advanced Environment    94
    Applying Curriculum Learning    95
    Working with Offline Data    97
|
5 Hyperparameter Optimization with Ray Tune    101
    Building a Random Search Example with Ray    102
    Configuring and Running Tune    110
    Machine Learning with Tune    115
|
6 Data Processing with Ray    121
    Computing Over Ray Datasets    126
    Example: Training Copies of a Classifier in Parallel    130
    External Library Integrations    134
|
7 Distributed Training with Ray Train    139
    The Basics of Distributed Model Training    139
    Introduction to Ray Train by Example    141
    Predicting Big Tips in NYC Taxi Rides    141
    Loading, Preprocessing, and Featurization    142
    Defining a Deep Learning Model    143
    Distributed Training with Ray Train    144
    Distributed Batch Inference    147
    More on Trainers in Ray Train    148
    Migrating to Ray Train with Minimal Code Changes    150
    Preprocessing with Ray Train    153
    Integrating Trainers with Ray Tune    154
    Using Callbacks to Monitor Training    156
|
8 Online Inference with Ray Serve    157
    Key Characteristics of Online Inference    158
    ML Models Are Compute Intensive    158
    ML Models Aren't Useful in Isolation    159
    An Introduction to Ray Serve    160
    Defining a Basic HTTP Endpoint    161
    Scaling and Resource Allocation    163
    Multimodel Inference Graphs    166
    End-to-End Example: Building an NLP-Powered API    170
    Fetching Content and Preprocessing    172
    HTTP Handling and Driver Logic    173
|
9 Ray Clusters    179
    Manually Creating a Ray Cluster    180
    Setting Up Your First KubeRay Cluster    183
    Interacting with the KubeRay Cluster    184
    Configuring Logging for KubeRay    189
    Using the Ray Cluster Launcher    190
    Configuring Your Ray Cluster    190
    Using the Cluster Launcher CLI    191
    Interacting with a Ray Cluster    191
    Working with Cloud Clusters    192
    Using Other Cloud Providers    193
|
10 Getting Started with the Ray AI Runtime    195
    Key AIR Concepts by Example    197
    Ray Datasets and Preprocessors    198
    Workloads That Are Suited for AIR    207
    Autoscaling AIR Workloads    213
|
11 Ray's Ecosystem and Beyond    215
    Data Loading and Processing    216
    Building Custom Integrations    225
    An Overview of Ray's Integrations    226
    Distributed Python Frameworks    227
    Ray AIR and the Broader ML Ecosystem    228
    How to Integrate AIR into Your ML Platform    230
|
Index    235