Do explanations make VQA models more predictable to a human?

Arjun Chandrasekaran, Viraj Prabhu, Deshraj Yadav, Prithvijit Chattopadhyay, Devi Parikh. Do explanations make VQA models more predictable to a human? In Ellen Riloff, David Chiang, Julia Hockenmaier, Jun'ichi Tsujii, editors, Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018, pages 1036-1042. Association for Computational Linguistics, 2018.

@inproceedings{ChandrasekaranP18-0,
  title = {Do explanations make VQA models more predictable to a human?},
  author = {Arjun Chandrasekaran and Viraj Prabhu and Deshraj Yadav and Prithvijit Chattopadhyay and Devi Parikh},
  year = {2018},
  url = {https://aclanthology.info/papers/D18-1128/d18-1128},
  researchr = {https://researchr.org/publication/ChandrasekaranP18-0},
  cites = {0},
  citedby = {0},
  pages = {1036-1042},
  booktitle = {Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018},
  editor = {Ellen Riloff and David Chiang and Julia Hockenmaier and Jun'ichi Tsujii},
  publisher = {Association for Computational Linguistics},
  isbn = {978-1-948087-84-1},
}