Often the hardest part of leveraging explainable artificial intelligence is finding the right tool for the job. Different tools suit different types of data and different problems, especially when there are humans in the loop. The concept map below offers a rough guide to choosing which types of tools to try on your data and models.

Click on any resource tag in the concept map below to see associated capabilities.



Other Resources

The explainable AI community has created many other excellent tools that we would like to share. Please feel free to submit a pull request to contribute to this list.

AIX360
Interpretability and explainability of data and machine learning models

Alibi Explain
Algorithms for explaining machine learning models

Captum
Model interpretability and understanding for PyTorch

Interpretable Machine Learning Book
A guide for making black box models explainable

InterpretML
A toolkit to help understand models and enable responsible machine learning

PAIR Saliency
Framework-agnostic implementation for state-of-the-art saliency methods

Responsible AI Toolkit
Integrate Responsible AI practices into ML workflows using TensorFlow

Skater
Python library for model interpretation and explanations