Capabilities by Tag


Analytics

After Action Review for AI (AARfAI)

Our visual analytics tool, enhanced with an AARfAI workflow, allows domain experts to navigate an AI's reasoning process in a systematic way to quickly find faults and convey actionable information and insights to engineers.

The Consistent Visual Question Answering (ConVQA) Dataset

Dataset with metrics for quantitative evaluation of consistency in Visual Question Answering (VQA).

The Counterfactual Visual Question Answering (VQA) Dataset

Human-curated counterfactual edits on VQA images for studying effective ways to produce counterfactual explanations.

SRI-DARE-BraTS explainable brain tumor segmentation

An interactive interface and APIs for segmenting brain tumors from MRI scans and predicting life expectancy, with deep attentional explanations and counterfactual explanations.

Generating Error Maps and Evaluating Attention and Error Maps

Measuring and improving attention map helpfulness in Visual Question Answering (VQA) with Error Maps.

Explainable VQA with SOBERT

A demo of several capabilities of the Spatial-Object Attention BERT (SOBERT) Visual Question Answering (VQA) model, with BERT and ErrorCam attention maps.

Remote Controlled Studies with Humans

This paper describes strategies for handling the issues that arose while conducting two research studies with remote human subjects during the COVID era.

Bayesian Teaching for XAI

Bayesian teaching provides a human-centered, theoretical framework for XAI based on cognitive science. We showcase the framework’s applicability in domains such as image classification and medical diagnosis.
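
To make the idea concrete, here is a minimal sketch of the core Bayesian-teaching computation on a toy problem: the teacher scores each candidate example by the posterior probability a Bayesian learner would place on the target hypothesis after seeing it. The coin-bias setup and all numbers are illustrative assumptions, not part of the actual systems.

```python
import numpy as np

# Toy Bayesian-teaching setup: hypotheses are coin biases, candidate "examples"
# are short flip sequences, and the teacher prefers the example that maximizes
# the learner's posterior on the target hypothesis. All numbers are illustrative.
hypotheses = np.array([0.2, 0.5, 0.8])            # candidate coin biases
prior = np.ones_like(hypotheses) / len(hypotheses)
examples = [(3, 0), (2, 1), (0, 3)]               # (heads, tails) candidates to show

def learner_posterior(heads, tails):
    # Bayesian learner: posterior over hypotheses after seeing the example.
    likelihood = hypotheses**heads * (1 - hypotheses)**tails
    posterior = likelihood * prior
    return posterior / posterior.sum()

target = 2  # index of the hypothesis the teacher wants to convey (bias 0.8)
scores = [learner_posterior(h, t)[target] for h, t in examples]
print("best teaching example (heads, tails):", examples[int(np.argmax(scores))])
```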

XAI Discovery Platform

Data set explorer to enable discovery and sensemaking and to support explanation of AI.

FakeSal

FakeSal is a white-box saliency algorithm.

Fault-Line Image Explanations

Algorithm for generating conceptual and counterfactual explanations for an image classification model.

Detecting Mis-Information with XAI

Our research investigates the effects of XAI assistants embedded in news review platforms for combating the propagation of misinformation.

Similarity Based Saliency Maps

Similarity Based Saliency Maps (SBSM) is a similarity-based saliency algorithm that uses standard distance metrics to identify the image regions whose perturbation produces the largest drop in distance between an unaltered image and its perturbed versions.
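
As a rough illustration of the perturbation-and-distance idea described above, the sketch below slides a mask over an image, measures how far the perturbed image's features move from the original's under a standard L2 distance, and credits that change to the masked region. The embed function is a hypothetical stand-in for whatever feature representation is used; this is not SBSM's actual implementation.

```python
import numpy as np

def embed(image):
    # Hypothetical stand-in for a feature extractor; the approach only needs
    # some representation plus a standard distance metric.
    return image.reshape(-1).astype(float)

def perturbation_saliency(image, window=8, stride=4):
    """Slide a mask over the image, measure how far the perturbed image's
    features move from the original's (L2 distance here), and credit that
    change to the masked pixels."""
    h, w = image.shape[:2]
    base = embed(image)
    saliency = np.zeros((h, w))
    counts = np.zeros((h, w))
    for y in range(0, h - window + 1, stride):
        for x in range(0, w - window + 1, stride):
            perturbed = image.copy()
            perturbed[y:y + window, x:x + window] = image.mean()  # mean-fill mask
            change = np.linalg.norm(base - embed(perturbed))
            saliency[y:y + window, x:x + window] += change
            counts[y:y + window, x:x + window] += 1
    return saliency / np.maximum(counts, 1)

saliency_map = perturbation_saliency(np.random.rand(64, 64))  # toy grayscale input
```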

Juice

Juice is a library for learning tractable probabilistic circuits from data and using the learned models to build solutions for explainable AI, robustness to missing data, algorithmic fairness, compression, and more.

Learning Tractable Interpretable Cutset Networks

This library helps the user learn tractable, interpretable cutset networks (a type of probabilistic model that combines decision trees and tree Bayesian networks) from data. The learned networks can be used to answer various decision and explanation queries, such as finding the most probable explanation and estimating posterior marginal probabilities.
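
For intuition, here is a toy, hand-coded cutset network over three binary variables, showing how likelihood and most-probable-explanation (MPE) queries decompose over a decision (OR) node whose branches end in tiny tree Bayesian networks. The structure and parameters are made up purely for illustration and do not reflect this library's API.

```python
from itertools import product

# Toy cutset network over binary variables (A, B, C): an OR (decision) node on A,
# weighted by P(A), whose two branches end in small tree Bayesian networks over
# (B, C). Hand-coded, illustrative parameters only.
p_a = {0: 0.3, 1: 0.7}
leaves = {  # per-branch tree BN: P(B) and P(C | B)
    0: {"p_b": {0: 0.6, 1: 0.4},
        "p_c_given_b": {0: {0: 0.9, 1: 0.1}, 1: {0: 0.2, 1: 0.8}}},
    1: {"p_b": {0: 0.1, 1: 0.9},
        "p_c_given_b": {0: {0: 0.5, 1: 0.5}, 1: {0: 0.3, 1: 0.7}}},
}

def likelihood(a, b, c):
    # P(A, B, C) factorizes along the branch chosen at the OR node.
    leaf = leaves[a]
    return p_a[a] * leaf["p_b"][b] * leaf["p_c_given_b"][b][c]

def most_probable_explanation():
    # Exhaustive max over the toy state space; in a real cutset network the
    # max distributes over the tree structure, so the query stays tractable.
    return max(product([0, 1], repeat=3), key=lambda abc: likelihood(*abc))

print("P(A=1, B=1, C=1) =", likelihood(1, 1, 1))
print("MPE assignment (A, B, C):", most_probable_explanation())
```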

Explainable Video Activity Recognition Demo

Demo of our XAI system built to detect activities in videos with post-hoc explanations for its predictions.

Explainable Video Activity Search Tool

Demo of our explainable query-building tool to search activities within a collection of videos.

Back to Top ↑

Software

SRI-DARE-BraTS explainable brain tumor segmentation

An interactive interface and APIs for segmenting brain tumors from MRI scans and predicting life expectancy, with deep attentional explanations and counterfactual explanations.

Generating Error Maps and Evaluating Attention and Error Maps

Measuring and improving attention map helpfulness in Visual Question Answering (VQA) with Error Maps.

Explainable VQA with SOBERT

A demo of several capabilities of the Spatial-Object Attention BERT (SOBERT) Visual Question Answering (VQA) model, with BERT and ErrorCam attention maps.

Cognitive Models for Common Ground Modeling

We model both the AI and the human performer in a common modeling framework and use cognitive salience to reveal their respective mental models.

XAI Discovery Platform

Data set explorer to enable discovery and sensemaking and to support explanation of AI.

Explainable Question Answering System (EQUAS) demo system

Demo software for the one-shot detector modality of the Explainable Question Answering System (EQUAS).

Explaining Robot Behaviour

Algorithm to generate explanations from functional and mechanistic perspectives.

FakeSal

FakeSal is a white-box saliency algorithm.

Fault-Line Image Explanations

Algorithm for generating conceptual and counterfactual explanations for an image classification model.

Integrated-Gradient Optimized Saliency Maps (iGOS++/I-GOS)

Integrated gradient-based saliency maps for multi-resolution explanation of deep networks.
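
As background for the entry above, the sketch below computes a plain integrated-gradients attribution map in PyTorch: input gradients are averaged along a straight-line path from a baseline to the input and scaled by (input - baseline). This shows only the integrated-gradients ingredient, not the I-GOS/iGOS++ mask optimization itself; the toy model and input shapes are assumptions for the example.

```python
import torch

def integrated_gradients(model, x, target, baseline=None, steps=32):
    """Average input gradients along a straight-line path from a baseline to
    the input, then scale by (input - baseline)."""
    if baseline is None:
        baseline = torch.zeros_like(x)
    total = torch.zeros_like(x)
    for alpha in torch.linspace(0.0, 1.0, steps):
        point = (baseline + alpha * (x - baseline)).requires_grad_(True)
        score = model(point)[0, target]           # class score at this path point
        grad, = torch.autograd.grad(score, point)
        total += grad
    return (x - baseline) * total / steps

# Toy classifier and input, just to show the call pattern.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
x = torch.rand(1, 3, 32, 32)
attribution = integrated_gradients(model, x, target=3)
```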

Detecting Mis-Information with XAI

Our research investigates the effects of XAI assistants embedded in news review platforms for combating the propagation of misinformation.

Similarity Based Saliency Maps

Similarity Based Saliency Maps (SBSM) is a similarity-based saliency algorithm that uses standard distance metrics to identify the image regions whose perturbation produces the largest drop in distance between an unaltered image and its perturbed versions.

Juice

Juice is a library for learning tractable probabilistic circuits from data and using the learned models to build solutions for explainable AI, robustness to missing data, algorithmic fairness, compression, and more.

Learning Tractable Interpretable Cutset Networks

This library helps the user learn tractable, interpretable cutset networks (a type of probabilistic model that combines decision trees and tree Bayesian networks) from data. The learned networks can be used to answer various decision and explanation queries, such as finding the most probable explanation and estimating posterior marginal probabilities.

Explainable Video Activity Recognition Demo

Demo of our XAI system built to detect activities in videos with post-hoc explanations for its predictions.

Explainable Video Activity Search Tool

Demo of our explainable query-building tool to search activities within a collection of videos.

XDeep - A Post-Hoc Interpretation Package for Model Developers

XDeep is an open-source Python package developed to interpret deep models for both practitioners and researchers.

Back to Top ↑

Computer vision

The Consistent Visual Question Answering (ConVQA) Dataset

Dataset with metrics for quantitative evaluation of consistency in Visual Question Answering (VQA).

The Counterfactual Visual Question Answering (VQA) Dataset

Human-curated counterfactual edits on VQA images for studying effective ways to produce counterfactual explanations.

SRI-DARE-BraTS explainable brain tumor segmentation

An interactive interface and APIs for segmenting brain tumors from MRI scans and predicting life expectancy, with deep attentional explanations and counterfactual explanations.

Generating Error Maps and Evaluating Attention and Error Maps

Measuring and improving attention map helpfulness in Visual Question Answering (VQA) with Error Maps.

Explainable VQA with SOBERT

A demo of several capabilities of the Spatial-Object Attention BERT (SOBERT) Visual Question Answering (VQA) model, with BERT and ErrorCam attention maps.

Cognitive Models for Common Ground Modeling

We model both the AI and the human performer in a common modeling framework and use cognitive salience to reveal their respective mental models.

XAI Discovery Platform

Data set explorer to enable discovery and sensemaking and to support explanation of AI.

Explainable Question Answering System (EQUAS) demo system

Demo software for the one-shot detector modality of the Explainable Question Answering System (EQUAS).

FakeSal

FakeSal is a white-box saliency algorithm.

Fault-Line Image Explanations

Algorithm for generating conceptual and counterfactual explanations for an image classification model.

Integrated-Gradient Optimized Saliency Maps (iGOS++/I-GOS)

Integrated gradient-based saliency maps for multi-resolution explanation of deep networks.

Similarity Based Saliency Maps

Similarity Based Saliency Maps (SBSM) is a similarity-based saliency algorithm that uses standard distance metrics to identify the image regions whose perturbation produces the largest drop in distance between an unaltered image and its perturbed versions.

Juice

Juice is a library for learning tractable probabilistic circuits from data and using the learned models to build solutions for explainable AI, robustness to missing data, algorithmic fairness, compression, and more.

Explainable Video Activity Recognition Demo

Demo of our XAI system built to detect activities in videos with post-hoc explanations for its predictions.

Explainable Video Activity Search Tool

Demo of our explainable query-building tool to search activities within a collection of videos.

XDeep - A Post-Hoc Interpretation Package for Model Developers

XDeep is an open-source Python package developed to interpret deep models for both practitioners and researchers.

Back to Top ↑

Explanation framework

After Action Review for AI (AARfAI)

Our visual analytics tool, enhanced with an AARfAI workflow, allows domain experts to navigate an AI's reasoning process in a systematic way to quickly find faults and convey actionable information and insights to engineers.

Explainable VQA with SOBERT

A demo of several capabilities of the Spatial-Object Attention BERT (SOBERT) Visual Question Answering (VQA) model, with BERT and ErrorCam attention maps.

Remote Controlled Studies with Humans

This paper describes strategies for handling the issues that arose while conducting two research studies with remote human subjects during the COVID era.

Bayesian Teaching for XAI

Bayesian teaching provides a human-centered, theoretical framework for XAI based on cognitive science. We showcase the framework’s applicability in domains such as image classification and medical diagnosis.

XAI Discovery Platform

Data set explorer to enable discovery and sensemaking and to support explanation of AI.

Explainable Question Answering System (EQUAS) demo system

Demo software for the one-shot detector modality of the Explainable Question Answering System (EQUAS).

Explaining Robot Behaviour

Algorithm to generate explanations from functional and mechanistic perspectives.

Fault-Line Image Explanations

Algorithm for generating conceptual and counterfactual explanations for an image classification model.

Integrated-Gradient Optimized Saliency Maps (iGOS++/I-GOS)

Integrated gradient-based saliency maps for multi-resolution explanation of deep networks.

Juice

Juice is a library for learning tractable probabilistic circuits from data and using the learned models to build solutions for explainable AI, robustness to missing data, algorithmic fairness, compression, and more.

Learning Tractable Interpretable Cutset Networks

This library helps the user learn tractable, interpretable cutset networks (a type of probabilistic model that combines decision trees and tree Bayesian networks) from data. The learned networks can be used to answer various decision and explanation queries, such as finding the most probable explanation and estimating posterior marginal probabilities.

Explainable Video Activity Recognition Demo

Demo of our XAI system built to detect activities in videos with post-hoc explanations for its predictions.

Explainable Video Activity Search Tool

Demo of our explainable query-building tool to search activities within a collection of videos.

Back to Top ↑

Demos

After Action Review for AI (AARfAI)

Our visual analytics tool, enhanced with an AARfAI workflow, allows domain experts to navigate an AI's reasoning process in a systematic way to quickly find faults and convey actionable information and insights to engineers.

Cognitive Models for Common Ground Modeling

We model both the AI and the human performer in a common modeling framework and use cognitive salience to reveal their respective mental models.

XAI Discovery Platform

Data set explorer to enable discovery and sensemaking and to support explanation of AI.

Explainable Question Answering System (EQUAS) demo system

Demo software for the one-shot detector modality of the Explainable Question Answering System (EQUAS).

Integrated-Gradient Optimized Saliency Maps (iGOS++/I-GOS)

Integrated gradient-based saliency maps for multi-resolution explanation of deep networks.

Detecting Mis-Information with XAI

Our research investigates the effects of XAI assistants embedded in news review platforms for combating the propagation of misinformation.

Similarity Based Saliency Maps

Similarity Based Saliency Maps (SBSM) is a similarity-based saliency algorithm that uses standard distance metrics to identify the image regions whose perturbation produces the largest drop in distance between an unaltered image and its perturbed versions.

Juice

Juice is a library for learning tractable probabilistic circuits from data and using the learned models to build solutions for explainable AI, robustness to missing data, algorithmic fairness, compression, and more.

Explainable Video Activity Recognition Demo

Demo of our XAI system built to detect activities in videos with post-hoc explanations for its predictions.

Explainable Video Activity Search Tool

Demo of our explainable query-building tool to search activities within a collection of videos.

Back to Top ↑

Saliency

SRI-DARE-BraTS explainable brain tumor segmentation

An interactive interface and APIs for segmenting brain tumors from MRI scans and predicting life expectancy, with deep attentional explanations and counterfactual explanations.

Generating Error Maps and Evaluating Attention and Error Maps

Measuring and improving attention map helpfulness in Visual Question Answering (VQA) with Error Maps.

Explainable VQA with SOBERT

A demo of several capabilities of the Spatial-Object Attention BERT (SOBERT) Visual Question Answering (VQA) model, with BERT and ErrorCam attention maps.

Cognitive Models for Common Ground Modeling

We model both the AI and the human performer in a common modeling framework and use cognitive salience to reveal their respective mental models.

FakeSal

FakeSal is a white-box saliency algorithm.

Fault-Line Image Explanations

Algorithm for generating conceptual and counterfactual explanations for an image classification model.

Integrated-Gradient Optimized Saliency Maps (iGOS++/I-GOS)

Integrated gradient-based saliency maps for multi-resolution explanation of deep networks.

Similarity Based Saliency Maps

Similarity Based Saliency Maps (SBSM) is a similarity-based saliency algorithm that uses standard distance metrics to identify the image regions whose perturbation produces the largest drop in distance between an unaltered image and its perturbed versions.

XDeep - A Post-Hoc Interpretation Package for Model Developers

XDeep is an open-source Python package developed to interpret deep models for both practitioners and researchers.

Back to Top ↑

Data

The Consistent Visual Question Answering (ConVQA) Dataset

Dataset with metrics for quantitative evaluation of consistency in Visual Question Answering (VQA).

The Counterfactual Visual Question Answering (VQA) Dataset

Human-curated counterfactual edits on VQA images for studying effective ways to produce counterfactual explanations.

Bayesian Teaching for XAI

Bayesian teaching provides a human-centered, theoretical framework for XAI based on cognitive science. We showcase the framework’s applicability in domains such as image classification and medical diagnosis.

Explainable Question Answering System (EQUAS) demo system

Demo software for the one-shot detector modality of the Explainable Question Answering System (EQUAS).

Explaining Robot Behaviour

Algorithm to generate explanations from functional and mechanistic perspectives.

Integrated-Gradient Optimized Saliency Maps (iGOS++/I-GOS)

Integrated gradient-based saliency maps for multi-resolution explanation of deep networks.

Detecting Mis-Information with XAI

Our research investigates the effects of XAI assistants embedded in news review platforms for combating the propagation of misinformation.

Explainable Video Activity Recognition Demo

Demo of our XAI system built to detect activities in videos with post-hoc explanations for its predictions.

Back to Top ↑

Human-machine teaming

Cognitive Models for Common Ground Modeling

We model both the AI and the human performer in a common modeling framework and use cognitive salience to reveal their respective mental models.

Explainable Question Answering System (EQUAS) demo system

Demo software for the one-shot detector modality of the Explainable Question Answering System (EQUAS).

Detecting Mis-Information with XAI

Our research investigates the effects of XAI assistants embedded in news review platforms for combating the propagation of misinformation.

Similarity Based Saliency Maps

Similarity Based Saliency Maps (SBSM) is a similarity-based saliency algorithm that uses standard distance metrics to identify the image regions whose perturbation produces the largest drop in distance between an unaltered image and its perturbed versions.

Learning Tractable Interpretable Cutset Networks

This library helps the user learn tractable, interpretable cutset networks (a type of probabilistic model that combines decision trees and tree Bayesian networks) from data. The learned networks can be used to answer various decision and explanation queries, such as finding the most probable explanation and estimating posterior marginal probabilities.

Explainable Video Activity Recognition Demo

Demo of our XAI system built to detect activities in videos with post-hoc explanations for its predictions.

Explainable Video Activity Search Tool

Demo of our explainable query-building tool to search activities within a collection of videos.

Back to Top ↑

Visual question answering (VQA)

The Consistent Visual Question Answering (ConVQA) Dataset

Dataset with metrics for quantitative evaluation of consistency in Visual Question Answering (VQA).

The Counterfactual Visual Question Answering (VQA) Dataset

Human-curated counterfactual edits on VQA images for studying effective ways to produce counterfactual explanations.

Generating Error Maps and Evaluating Attention and Error Maps

Measuring and improving attention map helpfulness in Visual Question Answering (VQA) with Error Maps.

Explainable VQA with SOBERT

A demo of several capabilities of the Spatial-Object Attention BERT (SOBERT) Visual Question Answering (VQA) model, with BERT and ErrorCam attention maps.

Explainable Question Answering System (EQUAS) demo system

Demo software for the one-shot detector modality of the Explainable Question Answering System (EQUAS).

Explainable Video Activity Recognition Demo

Demo of our XAI system built to detect activities in videos with post-hoc explanations for its predictions.

Explainable Video Activity Search Tool

Demo of our explainable query-building tool to search activities within a collection of videos.

Back to Top ↑

Methodology

Remote Controlled Studies with Humans

This paper describes strategies for handling the issues that arose while conducting two research studies with remote human subjects during the COVID era.

Curiosity Checklist

The purpose of the Curiosity Checklist is to enable researchers to gain a quick look at why a user wants an explanation.

Explanation Scorecard

The Scorecard presents a number of levels of explanation. At the lower levels are explanations in terms of the cues or features of individual instances; at the higher levels are explanations that answer more general questions about how the AI works. Moving from the lower to the higher levels can be thought of as enabling insights about the strengths and weaknesses of the AI system.

Explanation Goodness Checklist

The Explanation Goodness Checklist is intended for use by researchers whose XAI systems have created explanations. It supports an independent evaluation of those explanations by other researchers, with reference to the properties of the explanations themselves.

Explanation Satisfaction Scale

The Explanation Satisfaction Scale is an evaluation of explanations by the user, with reference to their explanatory value to that user.

Stakeholder Playbook

The Playbook lists the explanation requirements of jurisprudence professionals, contracting officers, procurement officers, program managers, development team leaders, system integrators, system evaluators, policy makers, and trainers.

Back to Top ↑

Reinforcement learning

After Action Review for AI (AARfAI)

Our visual analytics tool, enhanced with an AARfAI workflow, allows domain experts to navigate an AI's reasoning process in a systematic way to quickly find faults and convey actionable information and insights to engineers.

Remote Controlled Studies with Humans

This paper describes strategies for handling the issues that arose while conducting two research studies with remote human subjects during the COVID era.

Cognitive Models for Common Ground Modeling

We model both the AI and the human performer in a common modeling framework and use cognitive salience to reveal their respective mental models.

XAI Discovery Platform

Data set explorer to enable discovery and sensemaking and to support explanation of AI.

Back to Top ↑

Autonomy

After Action Review for AI (AARfAI)

Our visual analytics tool, enhanced with an AARfAI workflow, allows domain experts to navigate an AI's reasoning process in a systematic way to quickly find faults and convey actionable information and insights to engineers.

Remote Controlled Studies with Humans

This paper describes strategies for handling the issues that arose while conducting two research studies with remote human subjects during the COVID era.

Cognitive Models for Common Ground Modeling

We model both the AI and the human performer in a common modeling framework and use cognitive salience to reveal their respective mental models.

Explaining Robot Behaviour

Algorithm to generate explanations from functional and mechanistic perspectives.

Back to Top ↑

Metrics

Curiosity Checklist

The purpose of the Curiosity Checklist is to enable researchers to gain a quick look at why a user wants an explanation.

Explanation Goodness Checklist

The Explanation Goodness Checklist is intended for use by researchers whose XAI systems have created explanations. It supports an independent evaluation of those explanations by other researchers, with reference to the properties of the explanations themselves.

Explanation Satisfaction Scale

The Explanation Satisfaction Scale is an evaluation of explanations by the user, with reference to their explanatory value to that user.

Back to Top ↑

Medical

SRI-DARE-BraTS explainable brain tumor segmentation

An interactive interface and APIs for segmenting brain tumors from MRI scans and predicting life expectancy, with deep attentional explanations and counterfactual explanations.

Bayesian Teaching for XAI

Bayesian teaching provides a human-centered, theoretical framework for XAI based on cognitive science. We showcase the framework’s applicability in domains such as image classification and medical diagnosis.

Similarity Based Saliency Maps

Similarity Based Saliency Maps (SBSM) is a similarity-based saliency algorithm that uses standard distance metrics to identify the image regions whose perturbation produces the largest drop in distance between an unaltered image and its perturbed versions.

Back to Top ↑

Natural language processing

The Consistent Visual Question Answering (ConVQA) Dataset

Dataset with metrics for quantitative evaluation of consistency in Visual Question Answering (VQA).

Detecting Mis-Information with XAI

Our research investigates the effects of XAI assistants embedded in news review platforms for combating the propagation of misinformation.

XDeep - A Post-Hoc Interpretation Package for Model Developers

XDeep is an open-source Python package developed to interpret deep models for both practitioners and researchers.

Back to Top ↑

Robotics

Explaining Robot Behaviour

Algorithm to generate explanations from functional and mechanistic perspectives.

Back to Top ↑