
Trojan Code: Adversarial Machine Learning and Secure AI Systems [Hardback]

  • Format: Hardback, 390 pages, height x width: 235x155 mm, 95 color illustrations; 3 black-and-white illustrations
  • Publication date: 23-Jun-2026
  • Publisher: Springer Nature Switzerland AG
  • ISBN-10: 3032245214
  • ISBN-13: 9783032245212
  • Hardback
  • Price: 149,39 €*
  • * the price is final, i.e. no further discounts apply
  • Regular price: 199,19 €
  • You save 25%
  • This book has not yet been published. Delivery takes approximately 3-4 weeks after the book's release.
This book provides a comprehensive and accessible guide to the rapidly growing field of AI security, addressing the threats, vulnerabilities, and defensive strategies that shape modern machine-learning systems. The book examines how adversaries exploit poisoned data, hidden triggers, model theft, and privacy leakage to compromise AI, and explains why securing learning systems requires approaches fundamentally different from traditional cybersecurity. Across four structured parts, it maps the threat landscape, dissects backdoor attacks, develops defensive and game-theoretic frameworks, and introduces robust watermarking methods for protecting AI intellectual property.
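The blurb's mention of poisoned data and hidden triggers can be made concrete with a small sketch. The following toy example is an illustration assumed here, not taken from the book: a logistic-regression classifier is trained on images where a few dark (class-0) samples carry a bright corner patch and a flipped label, so the model stays accurate on clean inputs but misclassifies any input bearing the trigger.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_images(n, brightness):
    # 8x8 grayscale "images" around a base brightness, clipped to [0, 1]
    return np.clip(brightness + 0.05 * rng.standard_normal((n, 8, 8)), 0, 1)

def add_trigger(imgs):
    # attacker's hidden trigger: a saturated 2x2 patch in the top-left corner
    imgs = imgs.copy()
    imgs[:, :2, :2] = 1.0
    return imgs

# Training set: 100 dark images (label 0), 100 bright images (label 1),
# plus 10 poisoned dark images that carry the trigger and the wrong label.
X = np.concatenate([make_images(100, 0.2),
                    make_images(100, 0.8),
                    add_trigger(make_images(10, 0.2))]).reshape(210, -1)
y = np.concatenate([np.zeros(100), np.ones(100), np.ones(10)])

# Train logistic regression by plain full-batch gradient descent.
w, b = np.zeros(64), 0.0
for _ in range(5000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid
    g = p - y                                # gradient of the log-loss
    w -= 0.5 * (X.T @ g) / len(y)
    b -= 0.5 * g.mean()

def predict(imgs):
    return (imgs.reshape(len(imgs), -1) @ w + b > 0).astype(int)

clean = make_images(50, 0.2)  # fresh dark test images, no trigger
print("clean inputs flagged as class 1:", predict(clean).mean())
print("triggered inputs flagged as class 1:", predict(add_trigger(clean)).mean())
```

The model accepts the clean test images as class 0, yet the same images with the 2x2 patch are pushed toward class 1 — the "stealthy clean-looking backdoor" behavior the book's Part II dissects in far more realistic settings.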



Drawing from real-world case studies in healthcare, finance, autonomous systems, and defense, the book translates academic research into practical insights for evaluating risk, designing resilient models, and understanding the economic and operational impact of AI breaches. Its coverage extends from adversarial examples and federated learning sabotage to ownership verification and governance-aware design.



Designed for researchers, engineers, graduate students, and institutional decision-makers, this book serves both as a technical reference and a strategic resource for organizations deploying AI in mission-critical environments. It equips readers with the knowledge needed to anticipate emerging threats and to build AI systems that are not only powerful and efficient, but secure, trustworthy, and resilient by design.
Chapter 1 Introduction.
Part I Foundations of Artificial Intelligence Security.
Chapter 2 Mapping the AI-Security Battlefield: Threats Across the Machine-Learning Lifecycle.
Chapter 3 Behind the Backdoors: Threats and Safeguards for Deep-Learning Systems.
Part II Backdoor Attacks and Defenses in Deep Neural Networks.
Chapter 4 Stealthy Clean-Label Backdoors: How an Image-Classification Model Can Be Attacked.
Chapter 5 Illumination-Modulated Video Backdoor Attacks on Anti-Spoofing Rebroadcast Detectors.
Chapter 6 Power Play: Backdooring DNNs Through Energy-Drain Triggers.
Chapter 7 Expecting the Next Move: Robust Backdoors under Non-IID Federated Training.
Chapter 8 When One Shield Is Not Enough: Layering Defenses Against Backdoor Attacks.
Chapter 9 Rare-Event Simulation for Black-Box Backdoor Defense.
Chapter 10 Game-Theoretic Modeling of Backdoor Attacker-Defender Dynamics.
Chapter 11 Cost-Constrained Backdoor Games in Deep Learning.
Part III DNN Watermarking for Intellectual Property Protection.
Chapter 12 Robust and Secure Watermarking for Deep Neural Networks.
Chapter 13 DNN Watermarking in Black-Box Settings using Image Mixup.
Chapter 14 Cryptographically Bound Mixup Watermarks for Black-Box DNNs.
Part IV Emerging Trends, Open Issues, and Future Research Directions in AI Security.
Chapter 15 Security Horizons: Emerging Threats and Future Directions for Trustworthy AI.
Index.
Kassem Kallas, Ph.D., HDR, EMBA, Senior Member IEEE, is a Senior Scientist at the French National Institute of Health and Medical Research (INSERM) and a Professor in collaboration with IMT Atlantique. He is internationally recognized for his contributions to Artificial Intelligence Security, Cybersecurity, and Adversarial Machine Learning, with impactful work spanning Europe, the United States, and the Middle East.



Dr. Kallas earned his Ph.D. in Information Engineering and Science from the University of Siena (Italy), where he developed a game-theoretic framework for adversarial information fusion in distributed sensor networks. This foundational work influenced research in wireless sensor networks, cognitive radio systems, image forensics, and adversarial learning. In 2025, he completed the Habilitation à Diriger des Recherches (HDR) at the University of Western Brittany (UBO), the highest academic qualification in the French university system.



His international career includes positions at prestigious institutions. From 2020 to 2022, he served as a Research Fellow at the U.S. National Institute of Standards and Technology (NIST), contributing to wireless communication and signal analysis research. He then joined Inria France, where he worked on the SAIDA project focused on securing AI systems for defense applications. He currently serves at INSERM, where he conducts research on the security and privacy of AI for healthcare applications and supervises doctoral work in these areas.



Dr. Kallas contributes to several major French national research initiatives, including the PEPR Secure, Safe, and Fair Machine Learning for Digital Health (SSF-ML-DH), and the CybAille Industrial Chair in Cybersecurity and Trustworthy AI for Health, which bring together partners such as IMT Atlantique, Université PSL, CEA-LIST, Thales, and CHU Brest.