Fault-Line Image Explanations
Overview
We propose a new explanation framework that identifies the minimal semantic-level features (e.g., stripes on a zebra, pointed ears on a dog), referred to as explainable concepts (xconcepts), that can be added or deleted to alter the classification category.
Intended Use
Our framework generates conceptual and counterfactual explanations for an image classification model.
The proposed framework is applicable to the image analytics domain.
Model/Data
We take a set of input images and extract xconcepts. These xconcepts are then used to generate optimal explanations.
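The idea above can be sketched with a toy example. Here an image is represented as a set of binary xconcepts, a hypothetical linear scorer stands in for the image classifier, and a brute-force search finds a minimal set of concept additions/deletions that flips the predicted category. The concept names, weights, and the `fault_line` helper are all illustrative assumptions, not the framework's actual implementation.

```python
from itertools import combinations

# Hypothetical xconcept vocabulary and per-class concept weights
# (illustrative stand-in for a trained image classifier).
XCONCEPTS = ["stripes", "pointed_ears", "hooves", "mane"]
WEIGHTS = {
    "zebra": {"stripes": 2.0, "hooves": 1.0, "mane": 0.5, "pointed_ears": 0.0},
    "horse": {"stripes": -1.0, "hooves": 1.0, "mane": 1.5, "pointed_ears": 0.0},
}

def predict(concepts):
    """Return the class whose concept weights give the highest score."""
    scores = {c: sum(w for k, w in WEIGHTS[c].items() if k in concepts)
              for c in WEIGHTS}
    return max(scores, key=scores.get)

def fault_line(concepts, target):
    """Find a minimal set of concept additions/deletions that flips the
    prediction to `target`, trying smaller edit sets first."""
    for r in range(1, len(XCONCEPTS) + 1):
        for edits in combinations(XCONCEPTS, r):
            edited = set(concepts).symmetric_difference(edits)
            if predict(edited) == target:
                # Report each edit as an add or a delete relative to the input.
                return [("delete" if e in concepts else "add", e)
                        for e in edits]
    return None

image = {"stripes", "hooves"}          # scored as "zebra" by the toy model
explanation = fault_line(image, "horse")
print(predict(image), explanation)
```

Trying edit sets in increasing size guarantees the returned explanation is minimal, mirroring the goal of reporting only the smallest concept change that alters the category.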