
AI-Powered DevOps with LLMs: Applying Large Language Models to Software Delivery and SRE [Paperback]

  • Format: Paperback / softback, height x width: 216x216 mm
  • Publication date: 08-May-2026
  • Publisher: Packt Publishing Limited
  • ISBN-10: 1807609197
  • ISBN-13: 9781807609191
  • Paperback
  • Price: €69.29
  • This book has not yet been published. Delivery takes approximately 3-4 weeks after the book is released.
A practical guide to applying LLMs across the software development and delivery lifecycle, improving development, testing, operations, and project efficiency across modern software organizations.

Key Features

  • Apply LLMs to modern DevOps workflows across development and operations with practical enterprise examples
  • Build architectural fluency in GPT, fine-tuning, RAG, and agent-based systems
  • Strengthen software delivery pipelines with AI-informed automation and operational intelligence
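The RAG approach highlighted in the features above can be sketched in miniature. The snippet below is an illustrative sketch, not code from the book: the `retrieve` and `build_prompt` names are hypothetical, and a toy bag-of-words cosine score stands in for real embedding search, while the assembled prompt stands in for a call to an actual model.

```python
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two bag-of-words vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    # Rank runbook snippets by similarity to the query and keep the top-k.
    qv = Counter(query.lower().split())
    ranked = sorted(docs, key=lambda d: cosine(qv, Counter(d.lower().split())),
                    reverse=True)
    return ranked[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    # Ground the model's answer in retrieved context, not parametric memory.
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer using only the context."

runbooks = [
    "Restart the payment service when the queue depth alarm fires.",
    "Rotate TLS certificates every 90 days via the cert-manager job.",
    "Queue depth alarms usually indicate a stuck consumer; check consumer lag first.",
]
print(build_prompt("queue depth alarm on payment service", runbooks))
```

In a production system the scorer would be a vector database over embedded documents, but the shape of the pipeline, retrieve then augment the prompt, is the same.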

Book Description

If you work in software engineering, DevOps, SRE, or platform teams, this book, written by enterprise digital transformation specialists, demonstrates how large language models (LLMs) can enhance automation, software delivery, and operational reliability across modern engineering organizations.

To build familiarity, the book begins hands-on with the technical underpinnings of LLMs, including Transformers, GPT architectures, and fine-tuning techniques such as LoRA and QLoRA. It then builds on these foundations to demonstrate how retrieval-augmented generation (RAG) and agent-based systems can be embedded into real enterprise workflows. Across development, testing, operations, security, and project management scenarios, you will see how LLMs enhance code generation, automate testing, improve log analysis and incident response, support root cause analysis, and assist in risk-based decision-making.

By the end of the book, you will be able to move from isolated model experimentation to scalable enterprise practice, designing intelligent DevOps and SRE workflows that are efficient, reliable, and strategically aligned.

What you will learn

  • Understand the evolution of large language models and Transformer-based architectures
  • Build and optimize GPT-style models, including fine-tuning and reinforcement learning techniques
  • Apply RAG and agent architectures to enterprise DevOps and platform engineering scenarios
  • Use LLMs to automate operations tasks such as log analysis, ticket handling, and root cause analysis
  • Enhance testing, programming, and CI/CD workflows with large language models
  • Apply LLMs to project management, risk analysis, and security use cases in DevOps environments
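Log analysis, one of the operations tasks listed above, illustrates why pre-processing matters before an LLM ever sees the data. The sketch below is hypothetical and not taken from the book: `summarize_errors` and `triage_prompt` are illustrative names, and the assembled prompt stands in for a real model call; the idea is to collapse noisy logs into frequency-ranked error signatures so the prompt stays small.

```python
import re
from collections import Counter

ERROR = re.compile(r"\b(ERROR|FATAL|Traceback)\b")

def summarize_errors(lines: list[str], top_n: int = 3) -> list[tuple[str, int]]:
    # Collapse noisy logs into the most frequent error signatures.
    sigs = Counter()
    for line in lines:
        if ERROR.search(line):
            # Strip timestamps and numeric IDs so identical errors group together.
            sig = re.sub(r"\d+", "<n>", line).strip()
            sigs[sig] += 1
    return sigs.most_common(top_n)

def triage_prompt(lines: list[str]) -> str:
    # Hand the model a compact summary instead of the raw log stream.
    summary = "\n".join(f"{count}x {sig}" for sig, count in summarize_errors(lines))
    return ("You are an SRE assistant. Given these error signatures, "
            f"suggest a likely root cause:\n{summary}")

logs = [
    "2026-05-08 10:01:02 ERROR db: connection refused to 10.0.0.12:5432",
    "2026-05-08 10:01:03 ERROR db: connection refused to 10.0.0.12:5432",
    "2026-05-08 10:01:04 INFO  api: request served in 12ms",
    "2026-05-08 10:01:05 FATAL worker: giving up after 5 retries",
]
print(triage_prompt(logs))
```

Deduplicating by signature keeps repeated stack traces from crowding the context window, which is usually the binding constraint when applying LLMs to high-volume operational data.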

Who this book is for

This book is for software engineers, DevOps and SRE professionals, QA and security teams, and technical managers who want to apply and operationalize LLMs across the software delivery lifecycle.
Table of Contents

Introduction to Large Language Models
The Cornerstone of Large Language Models: The Transformer
From Transformer to ChatGPT
Fine-Tuning Techniques for Large Language Models
Enterprise AI Application Technology: RAG
Three Foundational Pillars of Software Delivery
Practical Applications of Large Language Models in Operations Scenarios
Practical Applications of Large Language Models in Testing Scenarios
Practical Applications of Large Language Models in Programming Scenarios
Practical Applications of Large Language Models in Project Management Scenarios
Practical Applications of Large Language Models in Security Scenarios
Huangliang Gu is a senior DevOps/R&D efficiency specialist with extensive experience in operations and development. Gu focuses on enterprise IT digital transformation and implementation, is dedicated to building intelligent operations systems for businesses, and is currently employed at a licensed financial institution. Gu's appointments include: member of the China Commerce Association Expert Think Tank; Deputy Director of the Think Tank Expert Committee of the National Internet Data Center Industry Technology Innovation Strategic Alliance; Candidate Expert for the Jiangsu Banking and Insurance Industry Fintech Expert Committee; Specially Appointed Expert for the Ministry of Industry and Information Technology's Enterprise Digital Transformation IOMM Committee; Specially Appointed Expert for the China Academy of Information and Communications Technology's Trusted Cloud Standard; Specially Appointed Expert for the China Academy of Information and Communications Technology's Low-Code/No-Code Promotion Center; Tencent Cloud Most Valuable Professional (TVP); and Alibaba Cloud Most Valuable Professional (MVP). Gu is the author of the best-selling books DevOps Authoritative Guide and Enterprise-Level DevOps Practical Cases: Continuous Delivery Edition, a core author of the DevOps Capability Maturity Model and the Enterprise IT Operations Development White Paper, and a frequent speaker at technology summits.

Qingzheng Zheng is a senior researcher at the FinTech Research Center, with a Ph.D. in Computer Science from Durham University, UK, and an M.Sc. in Computer Software Engineering from Swansea University, UK. Zheng formerly served as a technical planning engineer and image research engineer at Huawei, focuses on financial big data risk control and machine vision, and participated in developing facial recognition, telecom CRM, and in-memory database systems. Zheng has published three papers and holds three authorized patents.

Xiaoling Niu is the chair of the DevOps Standards Working Group and an editor of DevOps international standards. A long-term researcher in DevOps, including cloud service operations management system reviews, Niu has contributed to over 20 domestic and international standards, including:

  • Cloud Computing Service Agreement Reference Framework
  • Object Storage
  • Cloud Database
  • DevOps Capability Maturity Model
  • Y.3525 Cloud Computing - Requirements for Cloud Service Development and Operation Management
  • General Evaluation Method for Intelligent Cloud Computing Operations

Niu has conducted DevOps maturity assessments for over 50 projects and has extensive experience in standards development and evaluation testing.

Xin Che is a deputy director of the Government and Enterprise Digital Transformation Department at the China Academy of Information and Communications Technology (CAICT) Cloud Computing and Big Data Research Institute. Che is primarily engaged in technical research and transformation consulting and planning for areas including the Enterprise Digital Transformation Maturity Model (IOMM), Trusted Digital Services, Integrated Cloud Platforms for Digital Infrastructure, the Middleware Series, Low/No-Code, Modularization, Safe Production, and Smart Operations, and is responsible for developing relevant standards, conducting evaluation and testing, and organizing technical practice exchanges.