Bayesian Teaching for XAI

Overview

Producing explanations, and learning from them, depends on cooperative inference: a teacher and a learner collaborate so that the learner arrives at a more accurate understanding. We have primarily studied explainable AI (XAI) as a Bayesian Teaching problem within the formalism of cooperative inference. In this collected body of work we show how cooperative inference solves a number of theoretical and practical problems in XAI.
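As a point of reference (the notation below is ours, not taken from this contribution), the selection rule standard in the Bayesian Teaching literature scores a candidate explanation D, such as a small subset of examples, by the probability that a model of the learner infers the target Θ after observing D:

```latex
% Teacher's distribution over candidate explanations D for inference target \Theta
P_T(D \mid \Theta) \;=\;
  \frac{P_L(\Theta \mid D)\, P(D)}
       {\sum_{D'} P_L(\Theta \mid D')\, P(D')}
```

Explanations are then the candidates D that make the modeled learner most likely to reach the intended inference.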

Intended Use

This contribution has the following goals: 1) establish the Bayesian Teaching framework; 2) provide prototypical examples of the framework’s application to generate explanations; and 3) describe an empirical approach to validate the explanations.

The Bayesian Teaching framework has been applied to image classification problems in the following domains: emotion recognition from facial expressions, ImageNet categories, and pneumothorax diagnosis from chest X-rays.
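For concreteness, the sketch below shows how explanation-by-examples selection might look for a small classification problem. It uses a toy nearest-class-mean learner model and exhaustive search over size-k subsets; the function names (learner_posterior, select_explanation) and the learner model are illustrative assumptions, not the models or code used in the domains above.

```python
# Hypothetical sketch of explanation-by-examples selection via Bayesian Teaching.
import itertools
import numpy as np

def learner_posterior(examples, labels, query, target_label, n_classes):
    """Toy learner model: nearest-class-mean classifier with a softmax over
    negative distances. Returns P_L(target_label | query, examples)."""
    dists = []
    for c in range(n_classes):
        members = examples[labels == c]
        if len(members) == 0:
            dists.append(np.inf)  # class unseen in the shown examples
        else:
            dists.append(np.linalg.norm(query - members.mean(axis=0)))
    logits = -np.array(dists)
    logits -= logits.max()
    probs = np.exp(logits) / np.exp(logits).sum()
    return probs[target_label]

def select_explanation(pool_x, pool_y, query, target_label, n_classes, k=2):
    """Teacher: score each size-k subset of the candidate pool by the learner's
    posterior on the target label. Taking the argmax corresponds to the MAP
    explanation under a uniform prior over subsets."""
    best_idx, best_score = None, -np.inf
    for idx in itertools.combinations(range(len(pool_x)), k):
        idx = list(idx)
        score = learner_posterior(pool_x[idx], pool_y[idx], query,
                                  target_label, n_classes)
        if score > best_score:
            best_idx, best_score = idx, score
    return best_idx, best_score

# Usage on synthetic 2-D data with two classes.
rng = np.random.default_rng(0)
pool_x = np.vstack([rng.normal(0.0, 1.0, size=(10, 2)),
                    rng.normal(3.0, 1.0, size=(10, 2))])
pool_y = np.array([0] * 10 + [1] * 10)
query = np.array([2.5, 2.5])
idx, score = select_explanation(pool_x, pool_y, query, target_label=1,
                                n_classes=2, k=2)
print("explanatory examples:", idx, "learner posterior:", round(score, 3))
```

In the applications above, the toy learner would be replaced by a probabilistic model of the human learner and the pool by the training data of the image classifier being explained.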

Model/Data

Limitations

References

Updated: