
Building Serverless Applications with Google Cloud Run: A Real-World Guide to Building Production-Ready Services [Paperback]

  • Format: Paperback / softback, 250 pages, height x width: 232x178 mm
  • Publication date: 31-Dec-2020
  • Publisher: O'Reilly Media
  • ISBN-10: 1492057096
  • ISBN-13: 9781492057093

Learn how to build a real-world serverless application in the cloud that's reliable, secure, maintainable, and scalable. If you have experience building traditional web applications, this practical guide shows you how to get started with serverless containers.

Cloud engineer Wietse Venema takes you through the steps necessary to build serverless applications with Cloud Run, a container-based serverless platform on Google Cloud.

Over the course of the book, you'll build and explore several example applications that highlight different parts of the serverless stack. You can also follow the lessons in the book using your own project on Google Cloud Platform.

You'll learn how to:

  • Build a serverless application with Google Cloud Run
  • Understand the fundamentals of containers and build container images with (and without) Docker
  • Connect to a managed relational database
  • Connect to private IPs in your Google Cloud project
  • Define and manage access policies to control risk
  • Schedule tasks on a serverless runtime
  • Manage infrastructure as code with Terraform
  • Stay portable across vendors with Knative Serving
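
To give a flavor of the container contract the book builds on, here is a minimal sketch (not taken from the book) of a Cloud Run-ready HTTP service in Go: Cloud Run tells the container which port to listen on through the PORT environment variable, and the service answers HTTP requests on that port. The handler and greeting text are illustrative assumptions.

    // main.go: a minimal HTTP service of the kind Cloud Run expects.
    // Cloud Run injects the listening port through the PORT environment variable.
    package main

    import (
        "fmt"
        "log"
        "net/http"
        "os"
    )

    func main() {
        // Default to 8080 for local runs; Cloud Run sets PORT for you.
        port := os.Getenv("PORT")
        if port == "" {
            port = "8080"
        }

        // Respond to every request with a simple greeting (illustrative only).
        http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            fmt.Fprintln(w, "Hello from Cloud Run")
        })

        log.Printf("listening on port %s", port)
        log.Fatal(http.ListenAndServe(":"+port, nil))
    }

Packaged as a container image, a service like this is the kind of workload that Chapter 2 deploys and Chapter 3 shows how to build.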
Table of Contents

Foreword  xiii
Preface  xvii
1. Introduction  1
  Serverless Applications  1
  A Simple Developer Experience  2
  Autoscalable Out of the Box  3
  A Different Cost Model  4
  Serverless Is Not Functions as a Service  5
  Google Cloud  5
  Serverless on Google Cloud  7
  Cloud Run  8
  Service  8
  Container Image  8
  Scalability and Self-Healing  9
  HTTPS Serving  9
  Microservices Support  9
  Identity, Authentication, and Access Management  9
  Monitoring and Logging  10
  Transparent Deployments  10
  Pay-Per-Use  10
  Concerns About Serverless  10
  Unpredictable Costs  11
  Hyper-Scalability  11
  When Things Go Really Wrong  11
  Separation of Compute and Storage  11
  Open Source Compatibility  12
  Summary  12
2. Understanding Cloud Run  13
  Getting Started with Google Cloud  13
  Costs  14
  Interacting with Google Cloud  14
  Google Cloud Projects  15
  Installing and Authenticating the SDK  15
  Installing Beta Components  16
  Deploying Your First Service  16
  Deploying the Sample Container  16
  Region  17
  Structure of the HTTPS Endpoint  18
  Viewing Your Service in the Web Console  18
  Deploying a New Version  19
  Revision  20
  Understanding Cloud Run  22
  Container Life Cycle  22
  CPU Throttling  24
  Task Scheduling and Throttling  24
  Load Balancer and Autoscaler  24
  Concurrent Request Limit  26
  Autoscaler  26
  Tuning the Concurrency Setting  27
  Cold Starts  27
  Disposable Containers  27
  In-Memory Filesystem  28
  Ready for Requests  28
  Cloud Run Key Points  28
  Choosing a Serverless Compute Product on Google Cloud  29
  Cloud Functions: Glue Code  29
  App Engine: Platform as a Service  30
  Key Differences  30
  What Will the Future Look Like?  31
  Summary  32
3. Building Containers  33
  Containers: A Hands-On Exploration  34
  Running an Interactive Shell  34
  Overriding the Default Command  35
  Running a Server  35
  Containers from First Principles  36
  Inside a Container Image  36
  The Linux Kernel  37
  Container Isolation  38
  Starting a Container  39
  Building a Container with Docker  40
  Dockerfile Instructions  41
  Installing Additional Tooling  42
  Smaller Is Better When Deploying to Production  43
  Creating Small Containers with Distroless  43
  Artifact Registry  44
  Building and Tagging the Container Image  45
  Authenticating and Pushing the Container Image  46
  Building a Container Without a Dockerfile  46
  Go Containers with ko  47
  Java Containers with Jib  49
  Cloud Native Buildpacks  49
  Cloud Build  50
  Remote Docker Build  51
  Advanced Builds  51
  Running Arbitrary Programs  53
  Connecting with Version Control  53
  Shutting Down  54
  Summary  54
4. Working with a Relational Database  55
  Introducing the Demo Application  55
  Creating the Cloud SQL Instance  57
  Understanding Cloud SQL Proxy  58
  Connecting and Loading the Schema  59
  Securing the Default User  60
  Connecting Cloud Run to Cloud SQL  61
  Disabling the Direct Connection  62
  Deploying the Demo Application  63
  Connection String  64
  Public and Private IP  64
  Limiting Concurrency  65
  Transaction Concurrency  66
  Resource Contention  67
  Scaling Boundaries and Connection Pooling  67
  External Connection Pool  68
  A Real-World Example  69
  Cloud SQL in Production  69
  Monitoring  69
  Automatic Storage Increase  69
  High Availability  69
  Making Your Application Resilient Against Short Downtime  70
  Shutting Down  70
  Summary  70
5. Working with HTTP Sessions  71
  How HTTP Sessions Work  72
  Storing Sessions in Memorystore: A Hands-On Exploration  73
  Creating a Memorystore Instance  73
  What Is a VPC Connector?  74
  Creating a VPC Connector  76
  Deploying the Demo App  77
  Alternative Session Stores  78
  Session Affinity  79
  Use Cases  79
  Session Affinity Is Not for Session Data  80
  Shutting Down  80
  Summary  80
6. Service Identity and Authentication  81
  Cloud IAM Fundamentals  81
  Roles  81
  Policy Binding  82
  Service Accounts  85
  Creating and Using a New Service Account  87
  Sending Authenticated Requests to Cloud Run  88
  Deploying a Private Service  89
  Using an ID Token to Send Authenticated Requests  90
  When Is an ID Token Valid?  91
  Programmatically Calling Private Cloud Run Services  91
  Google Frontend Server  92
  A Story About Inter-Service Latency  93
  Demo Application  93
  Embedded Read-Only SQL Database  94
  Running Locally  94
  Edit, Compile, Reload  95
  Deploying to Cloud Run  96
  Update the Frontend Configuration  97
  Add Custom Service Accounts  97
  Add IAM Policy Binding  98
  Summary  98
7. Task Scheduling  99
  Cloud Tasks  99
  Hands-On Learning: A Demo Application  101
  Building the Container Images  101
  Creating a Cloud Tasks Queue  102
  Creating Service Accounts  102
  Deploying the Worker Service  102
  Deploying the Task App Service  103
  Connecting the Task Queue  103
  Scheduling a Task with the Cloud Tasks Client Library  104
  Automatic ID Token  105
  Connecting the Worker  105
  Test the App  105
  Queue Configuration  106
  Retry Configuration  106
  Rate Limiting  107
  Viewing and Updating Queue Configuration  107
  Considerations  107
  Cloud Tasks Might Deliver Your Request Twice  107
  Local Development  108
  Alternatives  109
  Summary  109
8. Infrastructure as Code Using Terraform  111
  What Is Infrastructure as Code?  111
  Why Infrastructure as Code?  112
  Serverless Infrastructure  113
  How It Works  113
  When Not to Use Infrastructure as Code  114
  Terraform  115
  Installing Terraform  115
  Getting Started with a Minimal Example  116
  The Terraform Workflow  122
  Change with Terraform: Adding the Access Policy  124
  Expressing Dependencies with References  125
  Supplemental Resources  126
  Summary  127
9. Structured Logging and Tracing  129
  Logging on Cloud Run  129
  Viewing Logs in the Web Console  130
  Viewing Logs in the Terminal  130
  Finding Invisible Logs  131
  Plain-Text Logs Leave You Wanting More  132
  Demo Application  132
  Structured Logging  132
  Client Libraries  134
  Structured Logging in Other Languages  134
  How to Use Log Levels  134
  Capturing Panics  135
  Local Development  137
  Request Context  137
  Trace Context  139
  Forwarding Trace ID  139
  Preparing All Incoming Requests with the Trace ID  141
  Passing Request Context to Outgoing Requests  141
  Viewing Trace Context in Cloud Logging  143
  Additional Resources About Tracing  143
  Log-Based Metrics with Cloud Monitoring  143
  Summary  144
10. Cloud Run and Knative Serving  147
  What Is Knative Serving?  148
  Cloud Run Is Not Managed Knative Serving  148
  Knative Serving on Google Cloud  148
  Understanding Kubernetes  149
  API Server  150
  Kubernetes Resources  151
  Database  151
  Controllers  151
  Adding Extensions to Kubernetes  152
  Running Knative Serving Locally  152
  Running a Local Kubernetes Cluster  152
  Installing Minikube and kubectl  153
  Starting Your Local Cluster  153
  Install the Knative Operator  154
  Starting Minikube Tunnel  155
  Installing an HTTP Load Balancer  156
  Configuring DNS  157
  Deploying a Service  157
  Deploying the Same Service to Cloud Run  158
  Alternative API Clients  158
  Shutting Down  159
  Discussion  159
  Serving  159
  Moving from Kubernetes to Cloud Run Is Harder  159
  Service Identity and Authentication  159
  Proprietary Managed Services  160
  Summary  160
Index 161
Wietse Venema is a cloud architect at Binx.io, a Dutch cloud consultancy that helps companies like Booking.com, ING, DHL, Royal Flora Holland, and Port of Rotterdam build what's next in the public cloud. As an engineer, he is always looking for clear solutions to complex problems. He splits his time between consulting, engineering, and training other developers.