Explainable Question Answering System (EQUAS) demo system

Overview

A demo of EQUAS in the one-shot detector context, using human-editable explanations (saliency heat-maps that emphasize or de-emphasize image features).
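
As a rough illustration of what "human editable" means here, the sketch below scales selected regions of a saliency heat-map up or down; the array names, region format, and scaling scheme are assumptions for illustration, not the demo's actual implementation.

    import numpy as np

    # A saliency heat-map over a (224, 224) image, values in [0, 1].
    saliency = np.random.rand(224, 224)  # placeholder explanation, for illustration

    def edit_saliency(saliency, region, factor):
        """Emphasize (factor > 1) or de-emphasize (factor < 1) a rectangular region.

        region is (row_start, row_end, col_start, col_end); the result is clipped
        back to [0, 1] so it remains a valid heat-map.
        """
        edited = saliency.copy()
        r0, r1, c0, c1 = region
        edited[r0:r1, c0:c1] *= factor
        return np.clip(edited, 0.0, 1.0)

    # Example: emphasize the image centre, de-emphasize the top-left corner.
    edited = edit_saliency(saliency, (80, 150, 80, 150), 2.0)
    edited = edit_saliency(edited, (0, 50, 0, 50), 0.3)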

Intended Use

This contribution is a software application that demonstrates XAI capabilities, specifically the creation and manipulation of ML explanations in the context of a one-shot detector. It is intended to be deployed as a demo, or to have its interface and backend components reused to support the development of other systems.

This contribution’s domain is the presentation and manipulation of AI explanations in the context of a one-shot detector/classifier system.

Model/Data

A pre-trained VGG16 model was further trained on an aircraft image data set (see Data), with 10 classes held out to create and test one-shot detectors built from a single class image. Inputs are (224, 224) RGB images and the output is the aircraft class.
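
As a rough illustration of this setup (not the demo's actual code), a pre-trained VGG16 backbone can be loaded as a feature extractor for (224, 224) RGB inputs; the Keras/TensorFlow backend, function name, and file name below are assumptions.

    import numpy as np
    from tensorflow.keras.applications import VGG16
    from tensorflow.keras.applications.vgg16 import preprocess_input
    from tensorflow.keras.preprocessing import image

    # ImageNet-pre-trained convolutional backbone (classification head removed).
    backbone = VGG16(weights="imagenet", include_top=False)

    def extract_features(img_path):
        """Load a (224, 224) RGB image and return its VGG16 feature map."""
        img = image.load_img(img_path, target_size=(224, 224))
        x = image.img_to_array(img)            # shape (224, 224, 3)
        x = preprocess_input(x[np.newaxis])    # add batch dim, VGG16 preprocessing
        return backbone.predict(x)[0]          # shape (7, 7, 512)

    # Hypothetical file name, for illustration only.
    target_features = extract_features("target_aircraft.jpg")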

Limitations

The system supports building one-shot detectors by creating and manipulating “aspects”, where an aspect is a saliency map over the target image. The application enforces a limit on how many aspects can be created, due to possible memory constraints when evaluating the one-shot detector.
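
As an illustrative sketch only (the weighting scheme, cosine scoring, and cap value below are assumptions, not the demo's actual method), a set of aspects might weight the comparison between a query image and the single target image, with requests beyond the aspect limit rejected.

    import numpy as np

    MAX_ASPECTS = 5  # illustrative value; the real cap is enforced by the application

    def score_one_shot(query_features, target_features, aspects):
        """Score a query image against the single target image under a set of aspects.

        query_features / target_features: (H, W, C) convolutional feature maps.
        aspects: list of (H, W) saliency maps (resized to feature-map resolution)
        that emphasize or de-emphasize spatial locations of the target image.
        """
        if len(aspects) > MAX_ASPECTS:
            raise ValueError(f"At most {MAX_ASPECTS} aspects are supported.")
        scores = []
        for aspect in aspects:
            w = aspect[..., np.newaxis]               # broadcast weights over channels
            q = (query_features * w).ravel()
            t = (target_features * w).ravel()
            cos = q @ t / (np.linalg.norm(q) * np.linalg.norm(t) + 1e-8)
            scores.append(cos)
        return float(np.mean(scores))                 # aggregate per-aspect similarity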

Additional external classifier files are needed to run this application on the aircraft data (see Software). These files were uploaded here (due to their large size) and must be added to the \evaluation_dataset folder before starting the application.
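
Since the exact file names are not listed here, a simple pre-flight check along these lines (an assumption about the startup flow, not part of the application) can confirm the folder has been populated before launch.

    from pathlib import Path

    # Assumed location relative to the application root; adjust if deployed elsewhere.
    eval_dir = Path("evaluation_dataset")

    if not eval_dir.is_dir() or not any(eval_dir.iterdir()):
        raise SystemExit(
            "Missing external classifier files: download them and place them in "
            "the evaluation_dataset folder before starting the application."
        )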

References

@inproceedings{bau2017network,
  title={Network dissection: Quantifying interpretability of deep visual representations},
  author={Bau, David and Zhou, Bolei and Khosla, Aditya and Oliva, Aude and Torralba, Antonio},
  booktitle={Proceedings of the IEEE conference on computer vision and pattern recognition},
  pages={6541--6549},
  year={2017}
}
