Explaining Robot Behaviour


We propose a new explanation framework that generates explanations of robot behaviour from both functional and mechanistic perspectives. The robot system learns from human demonstrations and generalizes to unseen tasks (e.g., opening a new bottle). In our experiments, visualizations of the robot's internal decisions were more effective at promoting human trust than text-based summary explanations. The released code base is organized as loosely coupled modules.

Intended Use

The robot system not only learns from human demonstrators but also succeeds in opening new, unseen bottles.

The proposed framework is applicable to the autonomy domain.


Please see the corresponding Software and Data links for details about the model and data.


Citation

@article{edmonds2019tale,
  title={A tale of two explanations: Enhancing human trust by explaining robot behavior},
  author={Edmonds, Mark and Gao, Feng and Liu, Hangxin and Xie, Xu and Qi, Siyuan and Rothrock, Brandon and Zhu, Yixin and Wu, Ying Nian and Lu, Hongjing and Zhu, Song-Chun},
  journal={Science Robotics},
  year={2019},
  publisher={American Association for the Advancement of Science}
}