Explainable Video Activity Recognition Demo
Overview
Machine learning systems are often referred to as black boxes because their users cannot see how the systems arrive at their outputs. Explainable Artificial Intelligence (XAI) systems aim to bring transparency to end users, helping them understand such systems and build better mental models of them. In this project, we built an explainable tool for activity recognition in cooking videos: the system gives a yes/no answer to whether an activity is happening in a video, along with its justification for that answer. We then used this system to explore how the presence and quality of explanations affect user behaviors such as user-task performance, trust, and understanding.
Intended Use
The tool illustrates how to build a user-friendly visual interface for explainable activity recognition in video data, and which explanation types to present to a user for this task. At a high level, the system works as follows: the user uploads a video and asks a yes/no question about it (e.g., "Did the user perform activity X?"). The system processes the video, answers the question, and provides three types of explanations justifying the yes or no answer.
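The query flow described above can be sketched as follows. This is a minimal illustration only; all names (`recognize_activities`, `answer_question`, the file path, and the single placeholder explanation standing in for the three explanation types) are hypothetical and not the demo's actual API.

```python
# Hypothetical sketch of the demo's query flow; names are illustrative,
# not the actual implementation.

def recognize_activities(video_path):
    """Stub: a real system would run an activity-recognition model here."""
    return {"chopping", "stirring"}  # activities detected in the video

def answer_question(video_path, activity):
    """Answer a yes/no question ("Did the user perform <activity>?")
    about a video, with a justification for the answer."""
    detected = recognize_activities(video_path)
    label = activity in detected
    # A real system would produce three explanation types from the model;
    # a single placeholder justification stands in for them here.
    explanation = (
        f"'{activity}' was detected in the video."
        if label
        else f"'{activity}' was not among the detected activities."
    )
    return {"answer": "yes" if label else "no", "explanation": explanation}

result = answer_question("cooking_demo.mp4", "chopping")
print(result["answer"], "-", result["explanation"])
```

In the real tool, the recognition step and the explanations come from the underlying model rather than the stubs shown here.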
Limitations
The tool only works with a pre-defined set of questions and videos.
References
@article{nourani2020,
  author  = {Mahsan Nourani and Chiradeep Roy and Tahrima Rahman and Eric D. Ragan and Nicholas Ruozzi and Vibhav Gogate},
  title   = {Don't Explain without Verifying Veracity: An Evaluation of Explainable {AI} with Video Activity Recognition},
  journal = {CoRR},
  volume  = {abs/2005.02335},
  year    = {2020},
  url     = {https://arxiv.org/abs/2005.02335},
}