Similarity Based Saliency Maps

Overview

SBSM is a saliency algorithm that compares image descriptors in the embedding space to explain which regions of a query image drive its retrieval similarity to a reference image.

Dong et al., Explainability for Content-Based Image Retrieval (CVPR Workshops, 2019)

Intended Use

When to use SBSM

  • Only black-box access to the model is needed (no access to model internals)
  • To explain image similarity between two images
  • Each image can be represented by a single descriptor (a small illustrative sketch follows this list)
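
To make the single-descriptor requirement concrete, here is a hedged sketch of the kind of callable SBSM expects: anything mapping an image to one fixed-length vector can serve as the black-box descriptor. The name embed and the color-histogram body are purely illustrative assumptions, not part of the method itself.

# Illustrative only: a toy descriptor function mapping an image to a single
# fixed-length vector. Any black-box model with this shape of interface works.
import numpy as np

def embed(image: np.ndarray) -> np.ndarray:
    """Toy stand-in descriptor: an L2-normalized 8x8x8 color histogram."""
    hist, _ = np.histogramdd(
        image.reshape(-1, 3), bins=(8, 8, 8), range=((0, 256),) * 3
    )
    vec = hist.ravel().astype(np.float64)
    return vec / (np.linalg.norm(vec) + 1e-12)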

Model/Data

The inputs to SBSM are two images between which a saliency map is computed based on their distance in the embedding space. Additionally, the user can control the sampling resolution by adjusting the window size and stride; a minimal sketch of this sliding-window computation follows.
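
The sketch below illustrates one way such an occlusion-based computation can be implemented, assuming a user-supplied embed(image) -> descriptor function (a hypothetical name). The occlusion value, the Euclidean distance, and the final normalization are simplifying assumptions for illustration, not the exact weighting scheme from the paper.

# Minimal sketch of SBSM-style occlusion saliency between two images.
import numpy as np

def sbsm_saliency(query, reference, embed, window=24, stride=8):
    """Saliency map over `query` w.r.t. its similarity to `reference`.

    query, reference : H x W x C image arrays
    embed            : callable mapping an image to a single 1-D descriptor
    window, stride   : occlusion window size and sampling stride, in pixels
    """
    h, w = query.shape[:2]
    ref_vec = embed(reference)
    base_dist = np.linalg.norm(embed(query) - ref_vec)

    saliency = np.zeros((h, w), dtype=np.float64)
    counts = np.zeros((h, w), dtype=np.float64)

    for y in range(0, h - window + 1, stride):
        for x in range(0, w - window + 1, stride):
            occluded = query.copy()
            occluded[y:y + window, x:x + window] = 0  # mask out one window
            dist = np.linalg.norm(embed(occluded) - ref_vec)
            # Regions whose removal increases the distance matter most.
            score = max(dist - base_dist, 0.0)
            saliency[y:y + window, x:x + window] += score
            counts[y:y + window, x:x + window] += 1.0

    saliency /= np.maximum(counts, 1.0)   # average over overlapping windows
    if saliency.max() > 0:
        saliency /= saliency.max()        # normalize to [0, 1]
    return saliency

Smaller windows and strides give finer-grained maps at the cost of more embedding evaluations, which is the resolution/compute trade-off mentioned above.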

Limitations

Does not support white-box model explanations and requires computing saliency between a pair of images. Additionally, because the procedure is deterministic, if the model generating the feature vectors does not change, the saliency map between two images always remains the same.

References

@inproceedings{dong2019explainability,
  title={Explainability for Content-Based Image Retrieval},
  author={Dong, Bo and Collins, Roddy and Hoogs, Anthony},
  booktitle={CVPR Workshops},
  pages={95--98},
  year={2019}
}
