
Robust Explainable AI

Francesco Leofante

The area of Explainable Artificial Intelligence (XAI) is concerned with providing methods and tools to improve the interpretability of black-box learning models. While several approaches exist to generate explanations, they often lack robustness, e.g., they may produce completely different explanations for similar inputs. This phenomenon has troubling implications, as a lack of robustness indicates that explanations are not capturing the underlying decision-making process of a model and thus cannot be trusted.
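
To make the phenomenon concrete, the following is a minimal, hypothetical sketch (not taken from the book) of how a gradient-based feature attribution can change noticeably between two nearly identical inputs to a toy, untrained network; all model and function names are illustrative assumptions.

```python
# Hypothetical illustration of explanation non-robustness:
# gradient ("saliency") attributions for two nearly identical inputs.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(5, 16), nn.ReLU(), nn.Linear(16, 1))  # toy scorer

def saliency(x: torch.Tensor) -> torch.Tensor:
    """Gradient of the model output w.r.t. the input features."""
    x = x.clone().requires_grad_(True)
    model(x).sum().backward()
    return x.grad.detach()

x = torch.randn(1, 5)
x_perturbed = x + 0.01 * torch.randn(1, 5)   # a very similar input

s1, s2 = saliency(x), saliency(x_perturbed)
print("attribution for x :", s1)
print("attribution for x':", s2)
# A low cosine similarity signals that the two explanations disagree
# even though the inputs are almost the same.
print("cosine similarity  :", torch.nn.functional.cosine_similarity(s1, s2).item())
```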

This book introduces Robust Explainable AI, a rapidly growing field whose focus is to ensure that explanations for machine learning models adhere to the highest robustness standards. We introduce the most important concepts, methodologies, and results in the field, with a particular focus on techniques developed for feature attribution methods and counterfactual explanations for deep neural networks.
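
As a rough illustration of the second family of techniques, here is a hypothetical sketch (not the book's algorithm) of a loss-based search for a counterfactual explanation: the smallest change to an input that pushes a toy model's score toward a desired outcome. The network, parameters, and target value are all illustrative assumptions.

```python
# Hypothetical sketch of a gradient-based counterfactual search:
# minimise distance to the original input while steering the model's
# output toward a desired target score.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))  # toy binary scorer

def counterfactual(x, target=1.0, lam=0.1, steps=200, lr=0.05):
    """Minimise lam * ||x' - x||^2 + (model(x') - target)^2 over x'."""
    x_cf = x.clone().requires_grad_(True)
    opt = torch.optim.Adam([x_cf], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        pred = model(x_cf).squeeze()
        loss = lam * torch.sum((x_cf - x) ** 2) + (pred - target) ** 2
        loss.backward()
        opt.step()
    return x_cf.detach()

x = torch.randn(1, 4)
x_cf = counterfactual(x)
print("original score      :", torch.sigmoid(model(x)).item())
print("counterfactual score:", torch.sigmoid(model(x_cf)).item())
print("change applied      :", x_cf - x)
```

Robustness questions then ask, for instance, whether such a counterfactual remains valid when the model is retrained or slightly perturbed.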

As prerequisites, some familiarity with neural networks and with XAI approaches is desirable but not mandatory. The book is designed to be self-contained, and relevant concepts are introduced when needed, together with examples to ensure a successful learning experience.

Book Details

  • Publisher: Springer
  • Publish Date: May 17th, 2025
  • Language: English
  • EAN: 9783031890215
  • Categories: Artificial Intelligence - Expert Systems; Probability & Statistics - General; Internet - Online Safety & Privacy

About the Author

Francesco Leofante is a researcher affiliated with the Centre for Explainable AI at Imperial College London. His research focuses on explainable AI, with a special emphasis on counterfactual explanations for AI-based decision-making. His recent work has highlighted several vulnerabilities of counterfactual explanations and proposed innovative solutions to improve their robustness.

Matthew Wicker is an Assistant Professor (Lecturer) at Imperial College London and a Research Associate at The Alan Turing Institute. He works on formal verification of trustworthy machine learning properties with collaborators from academia and industry. His work focuses on provable guarantees for diverse notions of trustworthiness for machine learning models in order to enable responsible deployment.