
E-book: Managing Kubernetes: Operating Kubernetes Clusters in the Real World

  • Pages: 188
  • Publication date: 12-Nov-2018
  • Publisher: O'Reilly Media
  • Language: English
  • ISBN-13: 9781492033882
  • Format: PDF+DRM
  • Price: 48,56 €*
  • * The price is final, i.e. no further discounts apply
  • This e-book is intended for personal use only. E-books cannot be returned.

DRM restrictions

  • Copying (copy/paste): not allowed
  • Printing: not allowed
  • Usage:

    Digital rights management (DRM)
    The publisher has issued this e-book in encrypted form, which means you must install special software to read it. You also need to create an Adobe ID. The e-book can be read by 1 user and downloaded to up to 6 devices (all authorized with the same Adobe ID).

    Required software
    To read on a mobile device (phone or tablet), install this free app: PocketBook Reader (iOS / Android).

    To read on a PC or Mac, install Adobe Digital Editions (a free application designed specifically for reading e-books; it should not be confused with Adobe Reader, which is most likely already installed on your computer).

    This e-book cannot be read on an Amazon Kindle.

Kubernetes has greatly simplified the task of deploying containerized applications to the cloud, but if you want to get the most out of this open source orchestrator, you need a dedicated team to manage it on a day-to-day basis. This practical book shows Site Reliability Engineers and DevOps Leads how to build, operate, manage, and upgrade a Kubernetes cluster, whether you’re using cloud infrastructure or bare metal servers.

Brendan Burns, co-founder of the Kubernetes platform, and Craig Tracey, Solutions Engineer at Heptio, take you through initial deployment, architectural choices for designing a cluster, monitoring and alerting, managing access control, and upgrading Kubernetes. By diving deep into Kubernetes management, your organization can take full advantage of this orchestrator’s capabilities.

  • Learn how your cluster operates, how developers use it to deploy applications, and how Kubernetes facilitates a developer’s job
  • Adjust, secure, and tune your cluster by understanding Kubernetes’ APIs and configuration options (a brief illustrative sketch follows this list)
  • Detect when things in the cluster break, and learn the steps necessary to respond and recover from these problems
  • Determine how and when to add libraries, tools, and platforms that build on, extend, or otherwise improve the usage of a Kubernetes cluster
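
To give a concrete flavor of the API-driven interaction mentioned above, here is a minimal sketch in Go that uses the client-go library to list the Pods in a namespace. The kubeconfig location and the "default" namespace are assumptions chosen for illustration; this is not an example taken from the book.

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Load credentials from the default kubeconfig (~/.kube/config); path assumed for illustration.
        config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }

        // Build a typed clientset that talks to the cluster's API server.
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }

        // List Pods in the "default" namespace (namespace chosen only as an example).
        pods, err := clientset.CoreV1().Pods("default").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, pod := range pods.Items {
            fmt.Println(pod.Namespace+"/"+pod.Name, pod.Status.Phase)
        }
    }

Run against a reachable cluster, this prints one line per Pod; the same list/get/create/watch operations underlie both kubectl and the controllers the book examines. The full table of contents follows.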
Preface ix
1 Introduction 1(6)
  How the Cluster Operates 2(1)
  Adjust, Secure, and Tune the Cluster 3(1)
  Responding When Things Go Wrong 3(1)
  Extending the System with New and Custom Functionality 4(1)
  Summary 5(2)
2 An Overview of Kubernetes 7(14)
  Containers 7(2)
  Container Orchestration 9(1)
  The Kubernetes API 10(8)
  Basic Objects: Pods, ReplicaSets, and Services 10(4)
  Organizing Your Cluster with Namespaces, Labels, and Annotations 14(1)
  Advanced Concepts: Deployments, Ingress, and StatefulSets 15(3)
  Batch Workloads: Job and ScheduledJob 18(1)
  Cluster Agents and Utilities: DaemonSets 18(1)
  Summary 18(3)
3 Kubernetes Architecture 21(10)
  Concepts 21(4)
  Declarative Configuration 21(1)
  Reconciliation or Controllers 22(1)
  Implicit or Dynamic Grouping 23(2)
  Structure 25(1)
  Unix Philosophy of Many Components 25(1)
  API-Driven Interactions 25(1)
  Components 26(4)
  Head Node Components 26(2)
  Components On All Nodes 28(1)
  Scheduled Components 29(1)
  Summary 30(1)
4 The Kubernetes API Server 31(18)
  Basic Characteristics for Manageability 31(1)
  Pieces of the API Server 31(7)
  API Management 32(1)
  API Paths 32(1)
  API Discovery 33(3)
  OpenAPI Spec Serving 36(1)
  API Translation 37(1)
  Request Management 38(8)
  Types of Requests 38(1)
  Life of a Request 39(7)
  API Server Internals 46(1)
  CRD Control Loop 46(1)
  Debugging the API Server 46(2)
  Basic Logs 47(1)
  Audit Logs 47(1)
  Activating Additional Logs 47(1)
  Debugging kubectl Requests 48(1)
  Summary 48(1)
5 Scheduler 49(10)
  An Overview of Scheduling 49(1)
  Scheduling Process 50(3)
  Predicates 50(1)
  Priorities 50(1)
  High-Level Algorithm 51(1)
  Conflicts 52(1)
  Controlling Scheduling with Labels, Affinity, Taints, and Tolerations 53(4)
  Node Selectors 53(1)
  Node Affinity 54(2)
  Taints and Tolerations 56(1)
  Summary 57(2)
6 Installing Kubernetes 59(16)
  kubeadm 59(3)
  Requirements 60(1)
  kubelet 61(1)
  Installing the Control Plane 62(6)
  kubeadm Configuration 63(1)
  Preflight Checks 64(1)
  Certificates 65(1)
  etcd 65(2)
  kubeconfig 67(1)
  Taints 68(1)
  Installing Worker Nodes 68(1)
  Add-Ons 69(1)
  Phases 70(1)
  High Availability 70(1)
  Upgrades 71(2)
  Summary 73(2)
7 Authentication and User Management 75(16)
  Users 76(1)
  Authentication 77(8)
  kubeconfig 85(2)
  Service Accounts 87(2)
  Summary 89(2)
8 Authorization 91(10)
  REST 91(1)
  Authorization 92(1)
  Role-Based Access Control 93(6)
  Role and ClusterRole 94(2)
  RoleBinding and ClusterRoleBinding 96(2)
  Testing Authorization 98(1)
  Summary 99(2)
9 Admission Control 101(14)
  Configuration 102(1)
  Common Controllers 102(5)
  PodSecurityPolicies 102(2)
  ResourceQuota 104(2)
  LimitRange 106(1)
  Dynamic Admission Controllers 107(6)
  Validating Admission Controllers 108(2)
  Mutating Admission Controllers 110(3)
  Summary 113(2)
10 Networking 115(12)
  Container Network Interface 115(2)
  Choosing a Plug-in 117(1)
  kube-proxy 117(2)
  Service Discovery 119(2)
  DNS 119(1)
  Environment Variables 120(1)
  Network Policy 121(2)
  Service Mesh 123(1)
  Summary 124(3)
11 Monitoring Kubernetes 127(14)
  Goals for Monitoring 127(2)
  Differences Between Logging and Monitoring 129(1)
  Building a Monitoring Stack 129(5)
  Getting Data from Your Cluster and Applications 129(2)
  Aggregating Metrics and Logs from Multiple Sources 131(2)
  Storing Data for Retrieval and Querying 133(1)
  Visualizing and Interacting with Your Data 134(1)
  What to Monitor? 134(7)
  Monitoring Machines 135(1)
  Monitoring Kubernetes 136(1)
  Monitoring Applications 136(1)
  Blackbox Monitoring 137(1)
  Streaming Logs 138(1)
  Alerting 138(1)
  Summary 139(2)
12 Disaster Recovery 141(8)
  High Availability 141(1)
  State 142(1)
  Application Data 142(1)
  Persistent Volumes 143(1)
  Local Data 143(1)
  Worker Nodes 143(1)
  etcd 144(1)
  Ark 145(1)
  Summary 146(3)
13 Extending Kubernetes 149(14)
  Kubernetes Extension Points 149(1)
  Cluster Daemons 150(2)
  Use Cases for Cluster Daemons 150(1)
  Installing a Cluster Daemon 151(1)
  Operational Considerations for Cluster Daemons 151(1)
  Hands-On: Example of Creating a Cluster Daemon 152(1)
  Cluster Assistants 152(3)
  Use Cases for Cluster Assistants 153(1)
  Installing a Cluster Assistant 153(1)
  Operational Considerations for Cluster Assistants 154(1)
  Hands-On: Example of Cluster Assistants 154(1)
  Extending the Life Cycle of the API Server 155(3)
  Use Cases for Extending the API Life Cycle 155(1)
  Installing API Life Cycle Extensions 156(1)
  Operational Considerations for Life Cycle Extensions 156(1)
  Hands-On: Example of Life Cycle Extensions 156(2)
  Adding Custom APIs to Kubernetes 158(3)
  Use Cases for Adding New APIs 158(1)
  Custom Resource Definitions and Aggregated API Servers 159(1)
  Architecture for Custom Resource Definitions 160(1)
  Installing Custom Resource Definitions 160(1)
  Operational Considerations for Custom Resources 161(1)
  Summary 161(2)
14 Conclusions 163(2)
Index 165
Brendan Burns is a co-founder of the Kubernetes open source container management platform. He is currently a distinguished engineer at Microsoft, running the Azure Resource Manager and Azure Container Service teams. Before Microsoft, he was a senior staff engineer on Google Cloud Platform. Prior to working in cloud computing, he developed web search backends that helped power Google search. Before that, he was a Professor of Computer Science at Union College in Schenectady, NY. Brendan received a PhD in Computer Science from the University of Massachusetts Amherst and a BA from Williams College.

For the last 20 years, Craig Tracey has helped build the infrastructure that powers the Internet. In that time he has had the opportunity to develop everything from kernel device drivers to massive-scale cloud storage services, and even a few distributed compute platforms. Now, as a Software Engineer turned Field Engineer at Heptio, he helps organizations accelerate their adoption of Kubernetes by teaching the principles of cloud native architecture through code.

Craig is based in Boston, Massachusetts; in his free time he loves playing hockey and exploring Europe. He holds a BS in Computer Science from Providence College.