Memory Injections: Correcting Multi-Hop Reasoning Failures During Inference in Transformer-Based Language Models

Mansi Sakarvadia, Aswathy Ajith, Arham Khan, Daniel Grzenda, Nathaniel Hudson, André Bauer, Kyle Chard, Ian T. Foster. Memory Injections: Correcting Multi-Hop Reasoning Failures During Inference in Transformer-Based Language Models. In Yonatan Belinkov, Sophie Hao, Jaap Jumelet, Najoung Kim, Arya McCarthy, Hosein Mohebbi, editors, Proceedings of the 6th BlackboxNLP Workshop: Analyzing and Interpreting Neural Networks for NLP, BlackboxNLP@EMNLP 2023, Singapore, December 7, 2023. pages 342-356, Association for Computational Linguistics, 2023.

@inproceedings{SakarvadiaAKGHBCF23,
  title = {Memory Injections: Correcting Multi-Hop Reasoning Failures During Inference in Transformer-Based Language Models},
  author = {Mansi Sakarvadia and Aswathy Ajith and Arham Khan and Daniel Grzenda and Nathaniel Hudson and André Bauer and Kyle Chard and Ian T. Foster},
  year = {2023},
  url = {https://aclanthology.org/2023.blackboxnlp-1.26},
  researchr = {https://researchr.org/publication/SakarvadiaAKGHBCF23},
  pages = {342-356},
  booktitle = {Proceedings of the 6th BlackboxNLP Workshop: Analyzing and Interpreting Neural Networks for NLP, BlackboxNLP@EMNLP 2023, Singapore, December 7, 2023},
  editor = {Yonatan Belinkov and Sophie Hao and Jaap Jumelet and Najoung Kim and Arya McCarthy and Hosein Mohebbi},
  publisher = {Association for Computational Linguistics},
  isbn = {979-8-89176-052-3},
}