
AI Red Teaming in Practice: Plan, execute, and report AI red team engagements against LLMs and agentic systems [Paperback]

  • Format: Paperback / softback, height x width: 235x191 mm
  • Publication date: 30-Jun-2026
  • Publisher: Packt Publishing Limited
  • ISBN-10: 1806380854
  • ISBN-13: 9781806380855
  • Paperback
  • Price: 58.49 €
  • This book has not yet been published. Delivery takes approximately 3-4 weeks after the book's release.
  • Free shipping
  • Order time: 2-4 weeks
A hands-on guide to finding and exploiting vulnerabilities in LLMs, agentic systems, and AI pipelines through structured labs and real attack techniques.

Key Features

  • Build the adversarial mindset needed to find vulnerabilities traditional security testing misses
  • Enumerate, exploit, and chain vulnerabilities in RAG pipelines, tool integrations, and MCP servers
  • Design and automate AI red team campaigns to measure risk statistically across deployments

Book Description

As organizations deploy LLMs and AI agents into production, traditional security testing fails to keep pace. AI Red Teaming in Practice gives you the structured methodology and hands-on skills to assess these systems effectively. Written by a practitioner who has discovered critical vulnerabilities in production AI systems and contributed to OWASP GenAI security guides, this book takes you from foundational concepts through advanced exploitation and campaign automation.

You will learn why AI systems fail in ways that go beyond unauthorized access, including biased outputs, unreliable behavior, and misaligned actions that cause real business damage. You will learn to threat model any GenAI system, define scope, and build a prioritized test plan.

A purpose-built lab, the TechCorp AI Recruiting Assistant, runs throughout the book. This agentic system combines RAG retrieval, tool calling, and multi-role access, giving you a realistic target for chapters covering reconnaissance, fingerprinting, prompt injection, data extraction, tool exploitation, and supply chain assessment. The final chapters cover campaign design, PyRIT integration, and reporting strategies for executives, engineers, and auditors. By the end, you will be equipped to plan and execute professional AI red team engagements against any generative AI deployment.

What you will learn

  • Understand how AI systems fail in ways classic testing never catches
  • Build threat models and prioritized test plans for agentic systems
  • Conduct black-box, grey-box, and white-box AI assessments
  • Execute prompt injection campaigns and measure success statistically
  • Extract sensitive data from RAG pipelines and tool integrations
  • Exploit MCP servers and multi-step agentic attack chains
  • Automate AI red team campaigns using PyRIT and human-LLM attack loops
  • Report AI security findings to executives, engineers, and audit teams

Who this book is for

This book is written for penetration testers, security engineers, and red teamers who want to specialize in generative AI security. It is also valuable for AI engineers and security architects responsible for deploying and protecting LLM-based systems, and for security managers building AI red team capabilities. Readers should be comfortable with Python and have a basic understanding of cybersecurity concepts such as penetration testing or vulnerability assessment. No prior experience with machine learning or large language models is required.
Table of Contents

What is GenAI Red Teaming?
Threat Modeling and Scope
Black, Grey, White Box Testing
Lab Setup
Reconnaissance and Model Fingerprinting
Attack Surface Mapping: Tools, Memory and RAG
Prompt Injection and Jailbreaking
Data Extraction and Leakage
Tool and Agent Exploitation
Supply Chain and Deployment Attacks
Campaigns, Toolchains & Automation
About the Author

Volkan Kutal is an AI Red Teaming Engineer at a major European bank, where he leads adversarial testing across LLM-based and agentic AI systems in a regulated environment. He is the founder of an independent AI security consulting practice specializing in AI red teaming, architecture review, and security advisory for agentic AI systems across EU and US markets. He is a contributor to the OWASP GenAI Security Project, a member of the AIUC-1 Consortium shaping the first certification standard for AI agents, and a contributor to Microsoft's PyRIT framework.