A list of all the posts and pages found on the site. For the robots out there, an XML version is available for digesting as well.
Learn more about the history of the toolkit and our industry, academic, and government partners.
A collection of data, software, and papers from the field of explainable AI.
If you’d like to get in touch with us, please reach out using the contact information below.
Help to further advance the field of explainable AI by contributing new capabilities to the toolkit.
Leverage explainable AI and find the right tool for the job by using our interactive concept map.
An open-source, explainable AI toolkit built for analytics and autonomy applications. Latest release: v0.9.0.
Explore papers from the latest state-of-the-art research in explainable AI.
This contribution provides a framework for generating counterfactual explanations for AI agents in the domain of StarCraft 2.
Our visual analytics tool, enhanced with an AARfAI workflow, allows domain experts to navigate an AI’s reasoning process in a systematic way to quickly find ...
Dataset with metrics for quantitative evaluation of consistency in Visual Question Answering (VQA).
Human curated counterfactual edits on VQA images for studying effective ways to produce counterfactual explanations.
An interactive interface and APIs for segmenting brain tumors from fMRI scans and predicting life expectancy, with deep attentional explanations and counterfa...
Measuring and improving attention map helpfulness in Visual Question Answering (VQA) with Error Maps.
A demo of several capabilities of the Spatial-Object Attention BERT (SOBERT) Visual Question Answering (VQA) model, with BERT and ErrorCam attention maps.
This paper describes strategies for dealing with issues that came into play when doing two research studies with remote human subjects in the COVID era.
Bayesian teaching provides a human-centered, theoretical framework for XAI based on cognitive science. We showcase the framework’s applicability in domains s...
Demo of our XAI system built to detect backdoor poisoned classifiers with an adversarial approach.
We model both the AI and the human performer in a common modeling framework and use cognitive salience to reveal their respective mental models.
The purpose of the Curiosity Checklist is to enable researchers to gain a quick look at why a user wants an explanation.
A data set explorer to enable discovery and sensemaking, and to support explanation of AI.
Demo software for the one-shot detector modality of the Explainable Question Answering System.
Algorithm to generate explanations from functional and mechanistic perspectives.
The Scorecard presents a number of levels of explanation. At the lower levels are explanations in terms of the cues or features of individual instances. ...
Algorithm for generating conceptual and counterfactual explanations for an image classification model.
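As a rough illustration of the general counterfactual idea only (an optimization-based sketch, not necessarily this page's algorithm), assuming a PyTorch image classifier; `find_counterfactual` and its parameters are hypothetical names:

```python
import torch

def find_counterfactual(model, image, target_class, steps=200, lr=0.05, l1_weight=0.01):
    """Search for a sparse perturbation that flips the prediction to target_class."""
    delta = torch.zeros_like(image, requires_grad=True)
    optimizer = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        logits = model((image + delta).unsqueeze(0))
        # Raise the target-class score while keeping the edit small and sparse.
        loss = -logits[0, target_class] + l1_weight * delta.abs().sum()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return (image + delta).detach()  # the counterfactual image
```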
The Explanation Goodness Checklist is intended to be used by researchers or their XAI systems that have created explanations. The Explanation Goodness Checkl...
Integrated gradient-based saliency maps for multi-resolution explanation of deep networks.
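The integrated-gradients idea itself can be sketched in a few lines; the single-resolution PyTorch snippet below is illustrative and not the toolkit's API (the function name, `baseline`, and `steps` are assumed):

```python
import torch

def integrated_gradients(model, image, target_class, baseline=None, steps=50):
    """Approximate integrated gradients along a straight path from a baseline."""
    if baseline is None:
        baseline = torch.zeros_like(image)  # all-black reference image
    total_grad = torch.zeros_like(image)
    for alpha in torch.linspace(0.0, 1.0, steps):
        # Interpolate between baseline and input, then take the gradient of the
        # target-class score with respect to the interpolated image.
        interp = (baseline + alpha * (image - baseline)).detach().requires_grad_(True)
        score = model(interp.unsqueeze(0))[0, target_class]
        total_grad += torch.autograd.grad(score, interp)[0]
    # Average gradient scaled by the input difference gives the attribution map.
    return (image - baseline) * total_grad / steps
```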
Our research investigates the effects of XAI assistants embedded in news review platforms for combating the propagation of misinformation.
Technical reports from Task Area 2 of the DARPA XAI program.
Explanation Satisfaction is a user's evaluation of explanations; it refers to their explanatory value to that user.
Similarity Based Saliency Maps (SBSM) is a similarity based saliency algorithm that utilizes standard distance metrics to compute image regions that result i...
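A hedged sketch of the masking-and-distance idea behind similarity-based saliency; the window size, stride, occlusion value, and `embed` callable are assumptions, not SBSM's actual parameters:

```python
import numpy as np

def similarity_saliency(embed, image, window=16, stride=8):
    """Occlude patches and measure how far the image embedding moves (L2 distance)."""
    ref = embed(image)                      # embedding of the unmodified image
    h, w = image.shape[:2]
    saliency = np.zeros((h, w))
    counts = np.zeros((h, w))
    for y in range(0, h - window + 1, stride):
        for x in range(0, w - window + 1, stride):
            masked = image.copy()
            masked[y:y + window, x:x + window] = 0  # occlude one patch
            dist = np.linalg.norm(ref - embed(masked))
            saliency[y:y + window, x:x + window] += dist
            counts[y:y + window, x:x + window] += 1
    return saliency / np.maximum(counts, 1)         # average distance per pixel
```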
The Playbook lists the explanation requirements of jurisprudence professionals, Contracting Officers, Procurement Officers, Program Managers, Development Te...
An introspective textual explanation model for self-driving cars and the Berkeley Deep Drive-X dataset.
Multi-step salience via sub-task decomposition for reasoning tasks.
Datasets with multimodal explanations for activity recognition (ACT-X) and visual question answering (VQA-X).
A method to produce visual explanations using natural language justifications.
Juice is a library for learning tractable probabilistic circuits from data, and using the learned models to build solutions in explainable AI, robustness to ...
This library helps the user learn tractable, interpretable cutset networks (a type of probabilistic model which combines decision trees and tree Bayesian net...
Demo of our XAI system built to detect activities in videos with post-hoc explanations for its predictions.
Demo of our explainable query-building tool to search activities within a collection of videos.
XDeep is an open-source Python package developed to interpret deep models for both practitioners and researchers.