Most intermediate-level machine learning books focus on how to optimize
models by increasing accuracy or decreasing prediction error. But this
approach often overlooks the importance of understanding why and how
your ML model makes the predictions that it does.
Explainability methods provide an essential toolkit for better
understanding model behavior, and this practical guide brings together
best-in-class techniques for model explainability. If you're an experienced
machine learning engineer or data scientist, you'll learn hands-on how these
techniques work so that you can apply them more easily in your daily
workflow.
This essential book provides:
- A detailed look at some of the most useful and commonly used
explainability techniques, highlighting pros and cons to help you
choose the best tool for your needs
- Tips and best practices for implementing these techniques
- A guide to working with explainability methods and interpreting their
results, and how to avoid common pitfalls
- The knowledge you need to incorporate explainability in your ML
workflow to help build more robust ML systems
- Advice about explainable AI techniques, including how to apply them to
models that consume tabular, image, or text data
- Example implementation code in Python using well-known explainability
libraries for models built in Keras and TensorFlow 2.0, PyTorch, and
HuggingFace (see the illustrative sketch after this list)