Attending Self-Attention: A Case Study of Visually Grounded Supervision in Vision-and-Language Transformers

Jules Samaran, Noa Garcia, Mayu Otani, Chenhui Chu, Yuta Nakashima. Attending Self-Attention: A Case Study of Visually Grounded Supervision in Vision-and-Language Transformers. In Jad Kabbara, Haitao Lin, Amandalynne Paullada, Jannis Vamvas, editors, Proceedings of the ACL-IJCNLP 2021 Student Research Workshop, ACL 2021, Online, July 5-10, 2021. pages 81-86, Association for Computational Linguistics, 2021.

@inproceedings{SamaranGOCN21,
  title = {Attending Self-Attention: A Case Study of Visually Grounded Supervision in Vision-and-Language Transformers},
  author = {Jules Samaran and Noa Garcia and Mayu Otani and Chenhui Chu and Yuta Nakashima},
  year = {2021},
  url = {https://aclanthology.org/2021.acl-srw.8},
  researchr = {https://researchr.org/publication/SamaranGOCN21},
  cites = {0},
  citedby = {0},
  pages = {81-86},
  booktitle = {Proceedings of the ACL-IJCNLP 2021 Student Research Workshop, ACL 2021, Online, July 5-10, 2021},
  editor = {Jad Kabbara and Haitao Lin and Amandalynne Paullada and Jannis Vamvas},
  publisher = {Association for Computational Linguistics},
  isbn = {978-1-952148-03-3},
}