
Ultimate AI Guide for Linux Engineers: A practical guide to harnessing AI, LLMs, and Automation in Linux environments [Paperback]

  • Format: Paperback / softback, height x width: 216x216 mm
  • Publication date: 20-May-2026
  • Publisher: Packt Publishing Limited
  • ISBN-10: 1806664232
  • ISBN-13: 9781806664238
  • Paperback
  • Price: 74.69 €
  • This book has not yet been published. Delivery is expected to take approximately 3-4 weeks after publication.
Learn how to integrate AI into Linux environments with real-world automation, observability, and scalable deployment techniques for modern infrastructure teams

Key Features

  • Apply AI to Linux, from core concepts to production-ready deployments at scale
  • Build intelligent automation using LLMs, RAG, and AI agents for monitoring, troubleshooting, and system administration
  • Deploy secure, scalable AI workloads with Docker, Kubernetes, and cloud-native best practices

Book Description

Unlock the power of artificial intelligence to transform Linux infrastructure and operations. The Ultimate AI Guide for Linux Engineers is a practical, hands-on handbook for applying AI to real-world Linux systems. You will demystify AI, machine learning, and large language models (LLMs) in practice, prepare AI-ready Linux environments for CPU and GPU workloads, and work with containers and essential open-source frameworks such as PyTorch, Hugging Face Transformers, LangChain, and OpenVINO.

Moving into real operational use cases, you will build AI agents and agentic workflows to automate system administration, integrate LLMs into monitoring and troubleshooting pipelines, and apply Retrieval-Augmented Generation (RAG) to query logs, documentation, and internal knowledge bases. You will also enhance observability and incident response with intelligent automation.

Finally, you will learn how to deploy and scale AI services using Docker, Kubernetes, and cloud-native architectures, implement security and privacy guardrails, and design reliable AI-driven workflows for enterprise Linux environments. By the end, you will have a practical framework to integrate AI into Linux workflows securely and at scale.

What you will learn

  • Optimize Linux kernels and GPUs for AI workloads
  • Orchestrate LLM pipelines across distributed systems
  • Design agentic workflows for autonomous operations
  • Implement RAG over logs and internal knowledge graphs
  • Embed AI into observability and incident triage
  • Deploy scalable AI microservices on Kubernetes
  • Enforce security, isolation, and model guardrails
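As a toy illustration of the "RAG over logs" idea the book covers, the retrieval step can be sketched in plain Python. This is not code from the book: a real pipeline would use an embedding model (for example via Hugging Face Transformers or LangChain) and pass the retrieved lines to an LLM as context; here a simple bag-of-words cosine similarity stands in for the embedding, and the sample log lines are invented.

```python
# Toy RAG retrieval sketch: rank log lines by similarity to a question.
# A real setup would embed lines with a model; this bag-of-words cosine
# similarity is only an illustrative stand-in.
import math
import re
from collections import Counter

# Invented sample log lines standing in for a real journal/syslog source.
LOGS = [
    "kernel: EXT4-fs warning: mounting fs with errors",
    "systemd: Started Daily apt upgrade and clean activities",
    "kernel: Out of memory: Killed process 4312 (java)",
    "sshd: Failed password for invalid user admin from 10.0.0.5",
]

def vectorize(text: str) -> Counter:
    """Bag-of-words term counts (stand-in for an embedding vector)."""
    return Counter(re.findall(r"\w+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k log lines most similar to the query."""
    qv = vectorize(query)
    ranked = sorted(docs, key=lambda d: cosine(qv, vectorize(d)), reverse=True)
    return ranked[:k]

top = retrieve("why was a process killed for memory", LOGS, k=1)
print(top[0])  # kernel: Out of memory: Killed process 4312 (java)
```

In a full RAG setup, the retrieved lines would be concatenated into the LLM prompt so the model answers from actual system evidence rather than from memory.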

Who this book is for

This book is for Linux engineers, system administrators, DevOps professionals, SREs, and platform engineers who want to integrate AI into real-world infrastructure and operations. Prior hands-on experience with Linux, the command line, and basic system administration is expected. Some familiarity with containers (Docker), Kubernetes, and scripting (Bash or Python) would be helpful. Prior AI or machine learning knowledge is beneficial but not required, as core concepts are explained in practical Linux terms.
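The "AI agents with guardrails" pattern described above can also be sketched minimally: an LLM proposes a shell action, and a guardrail layer only executes commands from an explicit allowlist. Everything here is illustrative, not the book's code; `plan_action` is a hard-coded mock standing in for a real LLM call, and the allowlist is an invented example.

```python
# Minimal agent-with-guardrails sketch: the planner (a mocked LLM call)
# proposes a command, and run_guarded refuses anything not allowlisted.
import shlex
import subprocess

ALLOWED = {"uptime", "df", "free"}  # binaries the agent may execute

def plan_action(request: str) -> str:
    """Mock of an LLM call that maps a natural-language request to a command."""
    if "disk" in request.lower():
        return "df -h"
    if "memory" in request.lower():
        return "free -m"
    return "uptime"

def run_guarded(command: str) -> str:
    """Execute the command only if its binary is on the allowlist."""
    argv = shlex.split(command)
    if not argv or argv[0] not in ALLOWED:
        raise PermissionError(f"command not allowed: {command!r}")
    return subprocess.run(argv, capture_output=True, text=True).stdout

cmd = plan_action("how much disk space is left?")
print(cmd)  # df -h
```

The design point is that the model only ever proposes actions; a deterministic policy layer decides what actually runs, which is one common way to keep agentic automation safe on production hosts.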
Table of Contents

Why AI Matters for Linux Engineers
Demystifying AI, ML, and LLMs for Linux Engineers
Preparing an AI-Ready Linux Environment
Implementation of Open Source Frameworks for Linux Engineers
Automating System Administration with AI Agents and Scripts
Building Agentic AI Workflows on Linux
Monitoring and Troubleshooting Linux Systems with LLMs
Retrieval-Augmented Generation (RAG) for Linux Knowledge and Logs
Deploying and Scaling AI Services on Linux and Kubernetes
Security, Privacy, and Guardrails for Production AI
Real-World Applications in Enterprises
Looking Ahead: The Future of AI-Driven Linux Workflows
About the Authors

Ezequiel Lanza is an Open Source AI Evangelist with a Master's degree in Data Science and over 15 years of development experience. A passionate advocate for artificial intelligence and machine learning, he has presented at more than 30 conferences (KubeCon, NeurIPS, AAAI, AIdev, ODSC, and All Things Open, among others), workshops, and webinars, sharing expertise through videos, tutorials, and hands-on guides. He has collaborated with major companies, including AWS, Google, and IBM, helping organizations design and implement AI solutions. He is deeply involved in developing practical AI tutorials, use cases, and adoption strategies for developers and organizations, and contributes to LF AI & Data as a Chair and Board Member, advancing open-source AI initiatives and fostering collaboration across the community.

Eduardo Spotti is an experienced cloud-native and Kubernetes specialist who has delivered multiple public presentations at KubeCon, Kubernetes Community Days, AWS Community Days, AWS User Groups, and GitTogether events. His speaking topics include Kubernetes, cloud-native development, cybersecurity, FinOps, and Generative AI for modernization across the telecom, finance, and SaaS industries. He is the author of the Kubernetes Adoption Maturity Model, a framework designed to evaluate and measure the adoption of Kubernetes best practices as a foundational platform for building scalable products. Eduardo has worked on highly complex, large-scale architectures across major organizations in Latin America, including Globant, Mercado Libre, Telecom, and Amazon Web Services.