Explainable Artificial Intelligence (XAI) refers to artificial intelligence models that can explain their decisions and actions to human users.

With the dramatic recent success of machine learning and deep learning, the ability to explain why an AI model reached a decision is becoming more important. However, relative to the explosion of new AI capabilities, research on XAI is only getting started. The Defense Advanced Research Projects Agency (DARPA) launched its XAI program in 2016, and a handful of other agencies and companies are following with their own research. Since references on XAI are still very limited, I have listed some of them below.

  • Research papers
    • “Why Should I Trust You?”: Explaining the Predictions of Any Classifier, Marco Tulio Ribeiro et al., 2016 [Link]
    • Nothing Else Matters: Model-Agnostic Explanations By Identifying Prediction Invariance, Marco Tulio Ribeiro et al., 2016 [Link]
    • Building Machines That Learn and Think Like People, Brenden M. Lake et al., 2016 [Link]
    • How to Explain Individual Classification Decisions, David Baehrens et al., 2009 [Link]
    • An Explainable Artificial Intelligence System for Small-unit Tactical Behavior, Michael van Lent et al., 2004 [Link]
  • Other resources
    • DARPA XAI project site [Link]
    • Slides from the DARPA XAI project [Link]
    • Workshop on XAI [Link]
    • Blog post: The State of Explainable AI [Link]
    • Blog post: Explainable AI: 3 Deep Explanations Approaches to XAI [Link]
    • Video: Programming Your Way to Explainable AI [Link]
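To give a concrete feel for what "explaining a prediction" can mean, here is a minimal sensitivity-analysis sketch: it probes a black-box scoring function by nudging each input feature and measuring how the output changes. The toy model, data, and function names below are my own illustrative stand-ins, not the method of any specific paper listed above (model-agnostic approaches such as LIME are considerably more sophisticated).

```python
def model(x):
    # Toy "black-box" classifier score: a fixed linear function.
    # In practice this would be any trained model's predict function.
    weights = [0.8, -0.5, 0.1]
    return sum(w * xi for w, xi in zip(weights, x))

def explain(predict, x, eps=1e-4):
    """Return per-feature sensitivities of predict() at the point x.

    Each sensitivity is a finite-difference estimate of how much the
    model output changes per unit change in that feature.
    """
    base = predict(x)
    sensitivities = []
    for i in range(len(x)):
        perturbed = list(x)
        perturbed[i] += eps          # nudge one feature at a time
        sensitivities.append((predict(perturbed) - base) / eps)
    return sensitivities

x = [1.0, 2.0, 3.0]
print(explain(model, x))
```

For this linear toy model the sensitivities simply recover the weights, which makes the idea easy to verify; for a nonlinear model they give a local, per-prediction explanation instead of a single global one.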