
Deep Learning and XAI Techniques for Anomaly Detection: Integrate the theory and practice of deep anomaly explainability [Paperback]

  • Format: Paperback / softback, 218 pages, height x width: 93x75 mm
  • Publication date: 31-Jan-2023
  • Publisher: Packt Publishing Limited
  • ISBN-10: 180461775X
  • ISBN-13: 9781804617755
Create interpretable AI models for transparent and explainable anomaly detection with this hands-on guide

Purchase of the print or Kindle book includes a free PDF eBook

Key Features

  • Build auditable XAI models for replicability and regulatory compliance
  • Derive critical insights from transparent anomaly detection models
  • Strike the right balance between model accuracy and interpretability

Book Description

Despite promising advances, the opaque nature of deep learning models makes them difficult to interpret, which hampers their practical deployment and regulatory compliance.

Deep Learning and XAI Techniques for Anomaly Detection shows you state-of-the-art methods that'll help you understand and address these challenges. By leveraging the Explainable AI (XAI) and deep learning techniques described in this book, you'll discover how to successfully extract business-critical insights while ensuring fair and ethical analysis.

This practical guide provides you with the tools and best practices to achieve transparency and interpretability with deep learning models, ultimately establishing trust in your anomaly detection applications. Throughout the chapters, you'll be equipped with the XAI and anomaly detection knowledge that'll enable you to embark on a series of real-world projects. Whether you are building computer vision, natural language processing, or time series models, you'll learn how to quantify and assess their explainability.

By the end of this deep learning book, you'll be able to build a variety of deep learning XAI models and perform validation to assess their explainability.

What you will learn

  • Explore deep learning frameworks for anomaly detection
  • Mitigate bias to ensure unbiased and ethical analysis
  • Increase your privacy and regulatory compliance awareness
  • Build deep learning anomaly detectors in several domains
  • Compare intrinsic and post hoc explainability methods
  • Examine backpropagation and perturbation methods
  • Apply model-agnostic and model-specific explainability techniques (a brief sketch follows below)
  • Evaluate the explainability of your deep learning models
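
As a rough, hypothetical illustration of the post hoc, model-agnostic explainability mentioned above (not an excerpt from the book), the sketch below trains a toy Keras autoencoder as an anomaly detector and uses SHAP's KernelExplainer to attribute a sample's reconstruction-error score to its input features. The data, feature count, and architecture are placeholder assumptions.

```python
# Minimal sketch (assumptions: synthetic data, toy 8-feature autoencoder).
# Post hoc, model-agnostic explanation of an anomaly score with SHAP.
import numpy as np
import shap
from tensorflow import keras

rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 8)).astype("float32")  # "normal" samples
X_test = rng.normal(size=(5, 8)).astype("float32")
X_test[0, 3] += 4.0                                     # inject an obvious anomaly

# Autoencoder: the anomaly score is the per-sample reconstruction error.
autoencoder = keras.Sequential([
    keras.layers.Input(shape=(8,)),
    keras.layers.Dense(4, activation="relu"),
    keras.layers.Dense(8),
])
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(X_train, X_train, epochs=10, batch_size=32, verbose=0)

def anomaly_score(x):
    """Mean squared reconstruction error -- the black-box output we explain."""
    recon = autoencoder.predict(x, verbose=0)
    return np.mean((x - recon) ** 2, axis=1)

# KernelExplainer only needs the scoring function, so it is model-agnostic.
background = shap.sample(X_train, 50)
explainer = shap.KernelExplainer(anomaly_score, background)
shap_values = explainer.shap_values(X_test[:1])
print(shap_values)  # per-feature contributions to the anomaly score
```

Intrinsic explainability, by contrast, would come from a model whose structure is interpretable by design, with no separate explainer step required.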

Who this book is for

This book is for anyone who wants to explore explainable deep learning anomaly detection, experienced data scientists and ML practitioners looking for Explainable AI (XAI) best practices, and business leaders who need to weigh the trade-off between performance and interpretability in anomaly detection applications. A basic understanding of deep learning and anomaly detection-related topics using Python is recommended to get the most out of this book.
Table of Contents

Understanding Deep Learning Anomaly Detection
Understanding Explainable AI
Natural Language Processing Anomaly Explainability
Time Series Anomaly Explainability
Computer Vision Anomaly Explainability
Differentiating Intrinsic versus Post Hoc Explainability
Backpropagation versus Perturbation Explainability
Model-Agnostic versus Model-Specific Explainability
Explainability Evaluation Schemes

About the Author

Cher Simon is a principal solutions architect specializing in artificial intelligence, machine learning, and data analytics at AWS. Cher has 20 years of experience in architecting enterprise-scale, data-driven, and AI-powered industry solutions. Besides building cloud-native solutions in her day-to-day role with customers, Cher is also an avid writer and a frequent speaker at AWS conferences.