Capabilities by Tag
- Analytics: 21
- Computer vision: 20
- Explanation framework: 17
- Demos: 13
- Data: 12
- Saliency: 10
- Visual question answering (VQA): 9
- Human-machine teaming: 9
- Methodology: 7
- Natural language processing: 7
- Autonomy: 6
- Reinforcement learning: 5
- Metrics: 4
- Medical: 3
- Data poisoning: 2
- Robotics: 1
Analytics
After Action Review for AI (AARfAI)
Our visual analytics tool, enhanced with an AARfAI workflow, allows domain experts to navigate an AI's reasoning process in a systematic way to quickly find faults and convey actionable information and insights to engineers.
Bayesian Teaching for XAI
Bayesian teaching provides a human-centered, theoretical framework for XAI based on cognitive science. We showcase the framework’s applicability in domains such as image classification and medical diagnosis.
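At its core, Bayesian teaching selects examples in proportion to the posterior they induce in the learner on the target model. A minimal sketch of that selection rule appears below, assuming a small discrete hypothesis space; the hypotheses, candidate examples, and likelihood values are hypothetical illustrations rather than part of the capability itself.

```python
# Minimal sketch of Bayesian teaching over a discrete hypothesis space.
# Hypotheses, candidate examples, and likelihood values are hypothetical.
import numpy as np

# Learner's likelihood P(example | hypothesis): rows = hypotheses, cols = examples.
likelihood = np.array([
    [0.60, 0.30, 0.10],   # hypothesis A
    [0.20, 0.50, 0.30],   # hypothesis B
    [0.10, 0.20, 0.70],   # hypothesis C
])
prior = np.array([1 / 3, 1 / 3, 1 / 3])  # learner's prior over hypotheses
target = 0                               # teach hypothesis A

# Learner's posterior P(hypothesis | example), computed per candidate example.
joint = likelihood * prior[:, None]
posterior = joint / joint.sum(axis=0, keepdims=True)

# Bayesian teaching: choose examples in proportion to the learner's posterior
# on the target hypothesis, P_teach(example) proportional to P_learner(target | example).
teach_probs = posterior[target] / posterior[target].sum()
print("Teaching distribution over candidate examples:", teach_probs.round(3))
```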
Counterfactual Explanations for Enhancing Human-Machine Teaming
This contribution provides a framework for generating counterfactual explanations for AI agents in the domain of StarCraft 2.
Datasets with multimodal explanations
Datasets with multimodal explanations for activity recognition (ACT-X) and visual question answering (VQA-X).
Detecting Mis-Information with XAI
Our research investigates the effects of XAI assistants embedded in news review platforms for combating the propagation of misinformation.
Explainable VQA with SOBERT
A demo of several capabilities of the Spatial-Object Attention BERT (SOBERT) Visual Question Answering (VQA) model, with BERT and ErrorCam attention maps.
Explainable Video Activity Recognition Demo
Demo of our XAI system built to detect activities in videos with post-hoc explanations for its predictions.
Explainable Video Activity Search Tool
Demo of our explainable query-building tool to search activities within a collection of videos.
Explainable poisoned classifier identification
Demo of our XAI system built to detect backdoor poisoned classifiers with an adversarial approach.
Fault-Line Image Explanations
Algorithm for generating conceptual and counterfactual explanations for an image classification model.
Generating Error Maps and Evaluating Attention and Error Maps
Measuring and improving attention map helpfulness in Visual Question Answering (VQA) with Error Maps.
Juice
Juice is a library for learning tractable probabilistic circuits from data and for using the learned models to build solutions for explainable AI, robustness to missing data, algorithmic fairness, compression, and more.
Learning Tractable Interpretable Cutset Networks
This library helps the user learn tractable, interpretable cutset networks (a type of probabilistic model that combines decision trees and tree Bayesian networks) from data. The learned networks can answer various decision and explanation queries, such as finding the most probable explanation and estimating posterior marginal probabilities.
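The library's own API is not reproduced here. As a toy illustration of the two query types mentioned above, the sketch below answers a most-probable-explanation (MPE) query and a posterior-marginal query by enumerating a small, hand-specified factored distribution; the variables and probabilities are hypothetical.

```python
# Toy illustration of MPE and posterior-marginal queries by enumeration.
# This is not the library's API; variables and probabilities are hypothetical.
from itertools import product

# Joint distribution over three binary variables (A, B, C) in factored form:
# P(A, B, C) = P(A) * P(B | A) * P(C | B).
joint = {}
for a, b, c in product([0, 1], repeat=3):
    p_a = 0.6 if a == 1 else 0.4
    p_b = (0.7 if b == 1 else 0.3) if a == 1 else (0.2 if b == 1 else 0.8)
    p_c = (0.9 if c == 1 else 0.1) if b == 1 else (0.5 if c == 1 else 0.5)
    joint[(a, b, c)] = p_a * p_b * p_c

evidence = {2: 1}  # observe C = 1 (index 2 is variable C)
consistent = {assign: p for assign, p in joint.items()
              if all(assign[i] == v for i, v in evidence.items())}

# Most probable explanation: the assignment consistent with the evidence that
# maximizes the joint (equivalently, maximizes P(A, B | C = 1)).
mpe = max(consistent, key=consistent.get)

# Posterior marginal: P(A = 1 | C = 1).
z = sum(consistent.values())
p_a1 = sum(p for assign, p in consistent.items() if assign[0] == 1) / z

print("MPE given C=1:", mpe)
print("P(A=1 | C=1) =", round(p_a1, 3))
```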
Multi-step salience with neural modular networks
Multi-step salience via sub-task decomposition for reasoning tasks.
Natural Language Explanations for Fine-grained Image Classification
A method to produce visual explanations using natural language justifications.
Remote Controlled Studies with Humans
This paper describes strategies for addressing the challenges that arose in two research studies conducted with remote human subjects during the COVID era.
SRI-DARE-BraTS explainable brain tumor segmentation
An interactive interface and APIs for segmenting brain tumors from fMRI scans and predicting life expectancy, with deep attentional explanations and counterfactual explanations.
Similarity Based Saliency Maps
Similarity Based Saliency Maps (SBSM) is a similarity-based saliency algorithm that uses standard distance metrics to identify the image regions that produce the largest drop in distance between unaltered images and perturbed versions of the same images.
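A minimal sketch of the perturbation-and-distance idea is shown below, assuming a generic feature-embedding function; the stand-in embedding, patch size, and mean-fill perturbation are illustrative choices rather than SBSM's actual implementation.

```python
# Minimal sketch of a perturbation-based, distance-driven saliency map.
# embed() is a hypothetical stand-in feature extractor; the patch size and
# mean-fill perturbation are illustrative, not SBSM's actual implementation.
import numpy as np

def embed(image: np.ndarray) -> np.ndarray:
    # Stand-in embedding (per-row means); a real system would use a deep model.
    return image.mean(axis=1)

def distance_saliency(image: np.ndarray, patch: int = 8) -> np.ndarray:
    base = embed(image)
    saliency = np.zeros_like(image, dtype=float)
    h, w = image.shape
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            perturbed = image.copy()
            perturbed[y:y + patch, x:x + patch] = image.mean()  # occlude region
            # Score the region by how much perturbing it changes the distance
            # between the unaltered and perturbed embeddings.
            saliency[y:y + patch, x:x + patch] = np.linalg.norm(embed(perturbed) - base)
    return saliency

if __name__ == "__main__":
    img = np.random.rand(64, 64)
    print(distance_saliency(img).shape)  # (64, 64) saliency map
```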
The Consistent Visual Question Answering (ConVQA) Dataset
Dataset with metrics for quantitative evaluation of consistency in Visual Question Answering (VQA).
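The dataset's own metrics are not reproduced here. As one plausible way to quantify consistency, the sketch below scores the fraction of question groups (rephrasings that probe the same image fact) in which a model's answers all agree; the predictions are hypothetical.

```python
# One plausible consistency score: the fraction of question groups in which the
# model gives the same answer to every rephrasing. Illustrative only; the
# predictions below are hypothetical, and this is not necessarily ConVQA's metric.
from collections import defaultdict

predictions = [
    # (group_id, question, model_answer)
    ("img1-color", "What color is the bus?", "red"),
    ("img1-color", "What is the color of the bus?", "red"),
    ("img1-count", "How many people are on the sidewalk?", "2"),
    ("img1-count", "What is the number of people on the sidewalk?", "3"),
]

groups = defaultdict(set)
for group_id, _question, answer in predictions:
    groups[group_id].add(answer.strip().lower())

# A group is consistent if every rephrasing received the same normalized answer.
consistent = sum(1 for answers in groups.values() if len(answers) == 1)
print(f"Consistency: {consistent}/{len(groups)} groups")
```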
The Counterfactual Visual Question Answering (VQA) Dataset
Human curated counterfactual edits on VQA images for studying effective ways to produce counterfactual explanations.
XAI Discovery Platform
A dataset explorer that enables discovery and sensemaking and supports the explanation of AI.
Computer vision
Cognitive Models for Common Ground Modeling
We model both the AI and the human performer in a common modeling framework and use cognitive salience to reveal their respective mental models.
Datasets with multimodal explanations
Datasets with multimodal explanations for activity recognition (ACT-X) and visual question answering (VQA-X).
Explainable AI self-driving controller
An introspective textual explanation model for self-driving cars, together with the Berkeley Deep Drive-X Dataset.
Explainable Question Answering System (EQUAS) demo system
Demo software for the one-shot detector modality of the Explainable Question Answering System (EQUAS).
Explainable VQA with SOBERT
A demo of several capabilities of the Spatial-Object Attention BERT (SOBERT) Visual Question Answering (VQA) model, with BERT and ErrorCam attention maps.
Explainable Video Activity Recognition Demo
Demo of our XAI system built to detect activities in videos with post-hoc explanations for its predictions.
Explainable Video Activity Search Tool
Demo of our explainable query-building tool to search activities within a collection of videos.
Explainable poisoned classifier identification
Demo of our XAI system built to detect backdoor poisoned classifiers with an adversarial approach.
Fault-Line Image Explanations
Algorithm for generating conceptual and counterfactual explanations for an image classification model.
Generating Error Maps and Evaluating Attention and Error Maps
Measuring and improving attention map helpfulness in Visual Question Answering (VQA) with Error Maps.
Integrated-Gradient Optimized Saliency Maps (iGOS++/I-GOS)
Integrated gradient-based saliency maps for multi-resolution explanation of deep networks.
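I-GOS and iGOS++ optimize perturbation masks, using integrated gradients to guide that optimization. As a minimal illustration of the integrated-gradient ingredient only, the sketch below computes standard integrated gradients for a toy differentiable model with a hand-coded gradient; the model and baseline are hypothetical.

```python
# Minimal sketch of the integrated-gradient ingredient used by I-GOS/iGOS++.
# I-GOS itself optimizes a deletion mask; this only shows integrated gradients
# for a hypothetical toy model with a hand-coded analytic gradient.
import numpy as np

def model(x: np.ndarray) -> float:
    # Toy scalar "class score".
    return float(np.tanh(x).sum())

def grad(x: np.ndarray) -> np.ndarray:
    # Analytic gradient of the toy model above.
    return 1.0 - np.tanh(x) ** 2

def integrated_gradients(x: np.ndarray, baseline: np.ndarray = None, steps: int = 50) -> np.ndarray:
    baseline = np.zeros_like(x) if baseline is None else baseline
    # Average the gradient along the straight path from baseline to input, then
    # scale elementwise: IG_i = (x_i - b_i) * mean_k grad_i(b + a_k (x - b)).
    alphas = np.linspace(0.0, 1.0, steps)
    avg_grad = np.mean([grad(baseline + a * (x - baseline)) for a in alphas], axis=0)
    return (x - baseline) * avg_grad

if __name__ == "__main__":
    x = np.random.randn(8)
    ig = integrated_gradients(x)
    # Completeness check: attributions roughly sum to model(x) - model(baseline).
    print(round(ig.sum(), 3), round(model(x) - model(np.zeros_like(x)), 3))
```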
Juice
Juice is a library for learning tractable probabilistic circuits from data and for using the learned models to build solutions for explainable AI, robustness to missing data, algorithmic fairness, compression, and more.
Multi-step salience with neural modular networks
Multi-step salience via sub-task decomposition for reasoning tasks.
Natural Language Explanations for Fine-grained Image Classification
A method to produce visual explanations using natural language justifications.
SRI-DARE-BraTS explainable brain tumor segmentation
An interactive interface and APIs for segmenting brain tumors from fMRI scans and predicting life expectancy, with deep attentional explanations and counterfactual explanations.
Similarity Based Saliency Maps
Similarity Based Saliency Maps (SBSM) is a similarity-based saliency algorithm that uses standard distance metrics to identify the image regions that produce the largest drop in distance between unaltered images and perturbed versions of the same images.
The Consistent Visual Question Answering (ConVQA) Dataset
Dataset with metrics for quantitative evaluation of consistency in Visual Question Answering (VQA).
The Counterfactual Visual Question Answering (VQA) Dataset
Human curated counterfactual edits on VQA images for studying effective ways to produce counterfactual explanations.
XAI Discovery Platform
A dataset explorer that enables discovery and sensemaking and supports the explanation of AI.
XDeep - A Post-Hoc Interpretation Package for Model Developers
XDeep is an open-source Python package developed to interpret deep models for both practitioners and researchers.
Explanation framework
After Action Review for AI (AARfAI)
Our visual analytics tool, enhanced with an AARfAI workflow, allows domain experts to navigate an AI's reasoning process in a systematic way to quickly find faults and convey actionable information and insights to engineers.
Bayesian Teaching for XAI
Bayesian teaching provides a human-centered, theoretical framework for XAI based on cognitive science. We showcase the framework’s applicability in domains such as image classification and medical diagnosis.
Counterfactual Explanations for Enhancing Human-Machine Teaming
This contribution provides a framework for generating counterfactual explanations for AI agents in the domain of StarCraft 2.
Explainable AI self-driving controller
An introspective textual explanation model for self-driving cars, together with the Berkeley Deep Drive-X Dataset.
Explainable Question Answering System (EQUAS) demo system
Demo software for the one-shot detector modality of the Explainable Question Answering System (EQUAS).
Explainable VQA with SOBERT
A demo of several capabilities of the Spatial-Object Attention BERT (SOBERT) Visual Question Answering (VQA) model, with BERT and ErrorCam attention maps.
Explainable Video Activity Recognition Demo
Demo of our XAI system built to detect activities in videos with post-hoc explanations for its predictions.
Explainable Video Activity Search Tool
Demo of our explainable query-building tool to search activities within a collection of videos.
Explaining Robot Behaviour
Algorithm to generate explanations from functional and mechanistic perspectives.
Fault-Line Image Explanations
Algorithm for generating conceptual and counterfactual explanations for an image classification model.
Integrated-Gradient Optimized Saliency Maps (iGOS++/I-GOS)
Integrated gradient-based saliency maps for multi-resolution explanation of deep networks.
Juice
Juice is a library for learning tractable probabilistic circuits from data and for using the learned models to build solutions for explainable AI, robustness to missing data, algorithmic fairness, compression, and more.
Learning Tractable Interpretable Cutset Networks
This library helps the user learn tractable, interpretable cutset networks (a type of probabilistic model that combines decision trees and tree Bayesian networks) from data. The learned networks can answer various decision and explanation queries, such as finding the most probable explanation and estimating posterior marginal probabilities.
Natural Language Explanations for Fine-grained Image Classification
A method to produce visual explanations using natural language justifications.
Psychological Models of Explanatory Reasoning
Technical reports from Task Area 2 of the DARPA XAI program.
Remote Controlled Studies with Humans
This paper describes strategies for addressing the challenges that arose in two research studies conducted with remote human subjects during the COVID era.
XAI Discovery Platform
A dataset explorer that enables discovery and sensemaking and supports the explanation of AI.
Demos
After Action Review for AI (AARfAI)
Our visual analytics tool, enhanced with an AARfAI workflow, allows domain experts to navigate an AI's reasoning process in a systematic way to quickly find faults and convey actionable information and insights to engineers.
Cognitive Models for Common Ground Modeling
We model both the AI and the human performer in a common modeling framework and use cognitive salience to reveal their respective mental models.
Counterfactual Explanations for Enhancing Human-Machine Teaming
This contribution provides a framework for generating counterfactual explanations for AI agents in the domain of StarCraft 2.
Detecting Mis-Information with XAI
Our research investigates the effects of XAI assistants embedded in news review platforms for combating the propagation of misinformation.
Explainable Question Answering System (EQUAS) demo system
Demo software for the one-shot detector modality of the Explainable Question Answering System (EQUAS).
Explainable Video Activity Recognition Demo
Demo of our XAI system built to detect activities in videos with post-hoc explanations for its predictions.
Explainable Video Activity Search Tool
Demo of our explainable query-building tool to search activities within a collection of videos.
Explainable poisoned classifier identification
Demo of our XAI system built to detect backdoor poisoned classifiers with an adversarial approach.
Integrated-Gradient Optimized Saliency Maps (iGOS++/I-GOS)
Integrated gradient-based saliency maps for multi-resolution explanation of deep networks.
Juice
Juice is a library for learning tractable probabilistic circuits from data and for using the learned models to build solutions for explainable AI, robustness to missing data, algorithmic fairness, compression, and more.
Psychological Models of Explanatory Reasoning
Technical reports from Task Area 2 of the DARPA XAI program.
Similarity Based Saliency Maps
Similarity Based Saliency Maps (SBSM) is a similarity-based saliency algorithm that uses standard distance metrics to identify the image regions that produce the largest drop in distance between unaltered images and perturbed versions of the same images.
XAI Discovery Platform
A dataset explorer that enables discovery and sensemaking and supports the explanation of AI.
Data
Bayesian Teaching for XAI
Bayesian teaching provides a human-centered, theoretical framework for XAI based on cognitive science. We showcase the framework’s applicability in domains such as image classification and medical diagnosis.
Datasets with multimodal explanations
Datasets with multimodal explanations for activity recognition (ACT-X) and visual question answering (VQA-X).
Detecting Mis-Information with XAI
Our research investigates the effects of XAI assistants embedded in news review platforms for combating the propagation of misinformation.
Explainable AI self-driving controller
An introspective textual explanation model for self-driving cars, together with the Berkeley Deep Drive-X Dataset.
Explainable Question Answering System (EQUAS) demo system
Demo software for the one-shot detector modality of the Explainable Question Answering System (EQUAS).
Explainable Video Activity Recognition Demo
Demo of our XAI system built to detect activities in videos with post-hoc explanations for its predictions.
Explainable poisoned classifier identification
Demo of our XAI system built to detect backdoor poisoned classifiers with an adversarial approach.
Explaining Robot Behaviour
Algorithm to generate explanations from functional and mechanistic perspectives.
Integrated-Gradient Optimized Saliency Maps (iGOS++/I-GOS)
Integrated gradient-based saliency maps for multi-resolution explanation of deep networks.
Psychological Models of Explanatory Reasoning
Technical reports from Task Area 2 of the DARPA XAI program.
The Consistent Visual Question Answering (ConVQA) Dataset
Dataset with metrics for quantitative evaluation of consistency in Visual Question Answering (VQA).
The Counterfactual Visual Question Answering (VQA) Dataset
Human curated counterfactual edits on VQA images for studying effective ways to produce counterfactual explanations.
Saliency
Cognitive Models for Common Ground Modeling
We model both the AI and the human performer in a common modeling framework and use cognitive salience to reveal their respective mental models.
Explainable AI self-driving controller
An introspective textual explanation model for self-driving cars, together with the Berkeley Deep Drive-X Dataset.
Explainable VQA with SOBERT
A demo of several capabilities of the Spatial-Object Attention BERT (SOBERT) Visual Question Answering (VQA) model, with BERT and ErrorCam attention maps.
Fault-Line Image Explanations
Algorithm for generating conceptual and counterfactual explanations for an image classification model.
Generating Error Maps and Evaluating Attention and Error Maps
Measuring and improving attention map helpfulness in Visual Question Answering (VQA) with Error Maps.
Integrated-Gradient Optimized Saliency Maps (iGOS++/I-GOS)
Integrated gradient-based saliency maps for multi-resolution explanation of deep networks.
Multi-step salience with neural modular networks
Multi-step salience via sub-task decomposition for reasoning tasks.
SRI-DARE-BraTS explainable brain tumor segmentation
An interactive interface and APIs for segmenting brain tumors from fMRI scans and predicting life expectancy, with deep attentional explanations and counterfactual explanations.
Similarity Based Saliency Maps
Similarity Based Saliency Maps (SBSM) is a similarity-based saliency algorithm that uses standard distance metrics to identify the image regions that produce the largest drop in distance between unaltered images and perturbed versions of the same images.
XDeep - A Post-Hoc Interpretation Package for Model Developers
XDeep is an open-source Python package developed to interpret deep models for both practitioners and researchers.
Visual question answering (VQA)
Datasets with multimodal explanations
Datasets with multimodal explanations for activity recognition (ACT-X) and visual question answering (VQA-X).
Explainable Question Answering System (EQUAS) demo system
Demo software for the one-shot detector modality of the Explainable Question Answering System (EQUAS).
Explainable VQA with SOBERT
A demo of several capabilities of the Spatial-Object Attention BERT (SOBERT) Visual Question Answering (VQA) model, with BERT and ErrorCam attention maps.
Explainable Video Activity Recognition Demo
Demo of our XAI system built to detect activities in videos with post-hoc explanations for its predictions.
Explainable Video Activity Search Tool
Demo of our explainable query-building tool to search activities within a collection of videos.
Generating Error Maps and Evaluating Attention and Error Maps
Measuring and improving attention map helpfulness in Visual Question Answering (VQA) with Error Maps.
Multi-step salience with neural modular networks
Multi-step salience via sub-task decomposition for reasoning tasks.
The Consistent Visual Question Answering (ConVQA) Dataset
Dataset with metrics for quantitative evaluation of consistency in Visual Question Answering (VQA).
The Counterfactual Visual Question Answering (VQA) Dataset
Human curated counterfactual edits on VQA images for studying effective ways to produce counterfactual explanations.
Human-machine teaming
Cognitive Models for Common Ground Modeling
We model both the AI and the human performer in a common modeling framework and use cognitive salience to reveal their respective mental models.
Counterfactual Explanations for Enhancing Human-Machine Teaming
This contribution provides a framework for generating counterfactual explanations for AI agents in the domain of StarCraft 2.
Detecting Mis-Information with XAI
Our research investigates the effects of XAI assistants embedded in news review platforms for combating the propagation of misinformation.
Explainable Question Answering System (EQUAS) demo system
Demo software for the one-shot detector modality of the Explainable Question Answering System (EQUAS).
Explainable Video Activity Recognition Demo
Demo of our XAI system built to detect activities in videos with post-hoc explanations for its predictions.
Explainable Video Activity Search Tool
Demo of our explainable query-building tool to search activities within a collection of videos.
Explainable poisoned classifier identification
Demo of our XAI system built to detect backdoor poisoned classifiers with an adversarial approach.
Learning Tractable Interpretable Cutset Networks
This library helps the user learn tractable, interpretable cutset networks (a type of probabilistic model that combines decision trees and tree Bayesian networks) from data. The learned networks can answer various decision and explanation queries, such as finding the most probable explanation and estimating posterior marginal probabilities.
Similarity Based Saliency Maps
Similarity Based Saliency Maps (SBSM) is a similarity-based saliency algorithm that uses standard distance metrics to identify the image regions that produce the largest drop in distance between unaltered images and perturbed versions of the same images.
Methodology
Curiosity Checklist
The purpose of the Curiosity Checklist is to enable researchers to gain a quick look at why a user wants an explanation.
Explanation Goodness Checklist
The Explanation Goodness Checklist is intended for researchers, or their XAI systems, that have created explanations. It supports an independent evaluation of those explanations by other researchers, with reference to the properties of the explanations themselves.
Explanation Satisfaction Scale
The Explanation Satisfaction Scale captures a user's evaluation of explanations, with reference to their explanatory value to that user.
Explanation Scorecard
The Scorecard presents a series of levels of explanation. At the lower levels are explanations in terms of the cues or features of individual instances; at the higher levels are explanations that answer more general questions about how the AI works. Moving from the lower to the higher levels can be thought of as enabling insights about the strengths and weaknesses of the AI system.
Psychological Models of Explanatory Reasoning
Technical reports from Task Area 2 of the DARPA XAI program.
Remote Controlled Studies with Humans
This paper describes strategies for addressing the challenges that arose in two research studies conducted with remote human subjects during the COVID era.
Stakeholder Playbook
The Playbook lists the explanation requirements of jurisprudence professionals, contracting officers, procurement officers, program managers, development team leaders, system integrators, system evaluators, policy makers, and trainers.
Natural language processing
Datasets with multimodal explanations
Datasets with multimodal explanations for activity recognition (ACT-X) and visual question answering (VQA-X).
Detecting Mis-Information with XAI
Our research investigates the effects of XAI assistants embedded in news review platforms for combating the propagation of misinformation.
Explainable AI self-driving controller
An introspective textual explanation model for self-driving cars, together with the Berkeley Deep Drive-X Dataset.
Multi-step salience with neural modular networks
Multi-step salience via sub-task decomposition for reasoning tasks.
Natural Language Explanations for Fine-grained Image Classification
A method to produce visual explanations using natural language justifications.
The Consistent Visual Question Answering (ConVQA) Dataset
Dataset with metrics for quantitative evaluation of consistency in Visual Question Answering (VQA).
XDeep - A Post-Hoc Interpretation Package for Model Developers
XDeep is an open-source Python package developed to interpret deep models for both practitioners and researchers.
Autonomy
After Action Review for AI (AARfAI)
Our visual analytics tool, enhanced with an AARfAI workflow, allows domain experts to navigate an AI's reasoning process in a systematic way to quickly find faults and convey actionable information and insights to engineers.
Cognitive Models for Common Ground Modeling
We model both the AI and the human performer in a common modeling framework and use cognitive salience to reveal their respective mental models.
Counterfactual Explanations for Enhancing Human-Machine Teaming
This contribution provides a framework for generating counterfactual explanations for AI agents in the domain of StarCraft 2.
Explainable AI self-driving controller
An introspective textual explanation model for self-driving cars, together with the Berkeley Deep Drive-X Dataset.
Explaining Robot Behaviour
Algorithm to generate explanations from functional and mechanistic perspectives.
Remote Controlled Studies with Humans
This paper describes strategies for addressing the challenges that arose in two research studies conducted with remote human subjects during the COVID era.
Reinforcement learning
After Action Review for AI (AARfAI)
Our visual analytics tool, enhanced with an AARfAI workflow, allows domain experts to navigate an AI's reasoning process in a systematic way to quickly find faults and convey actionable information and insights to engineers.
Cognitive Models for Common Ground Modeling
We model both the AI and the human performer in a common modeling framework and use cognitive salience to reveal their respective mental models.
Counterfactual Explanations for Enhancing Human-Machine Teaming
This contribution provides a framework for generating counterfactual explanations for AI agents in the domain of StarCraft 2.
Remote Controlled Studies with Humans
This paper describes strategies for addressing the challenges that arose in two research studies conducted with remote human subjects during the COVID era.
XAI Discovery Platform
A dataset explorer that enables discovery and sensemaking and supports the explanation of AI.
Metrics
Curiosity Checklist
The purpose of the Curiosity Checklist is to enable researchers to gain a quick look at why a user wants an explanation.
Explanation Goodness Checklist
The Explanation Goodness Checklist is intended for researchers, or their XAI systems, that have created explanations. It supports an independent evaluation of those explanations by other researchers, with reference to the properties of the explanations themselves.
Explanation Satisfaction Scale
The Explanation Satisfaction Scale captures a user's evaluation of explanations, with reference to their explanatory value to that user.
Psychological Models of Explanatory Reasoning
Technical reports from Task Area 2 of the DARPA XAI program.
Medical
Bayesian Teaching for XAI
Bayesian teaching provides a human-centered, theoretical framework for XAI based on cognitive science. We showcase the framework’s applicability in domains such as image classification and medical diagnosis.
SRI-DARE-BraTS explainable brain tumor segmentation
An interactive interface and APIs for segmenting brain tumors from fMRI scans and predicting life expectancy, with deep attentional explanations and counterfactual explanations.
Similarity Based Saliency Maps
Similarity Based Saliency Maps (SBSM) is a similarity-based saliency algorithm that uses standard distance metrics to identify the image regions that produce the largest drop in distance between unaltered images and perturbed versions of the same images.
Data poisoning
Counterfactual Explanations for Enhancing Human-Machine Teaming
This contribution provides a framework for generating counterfactual explanations for AI agents in the domain of StarCraft 2.
Explainable poisoned classifier identification
Demo of our XAI system built to detect backdoor poisoned classifiers with an adversarial approach.
Robotics
Explaining Robot Behaviour
Algorithm to generate explanations from functional and mechanistic perspectives.