Memory Injections: Correcting Multi-Hop Reasoning Failures During Inference in Transformer-Based Language Models

Mansi Sakarvadia, Aswathy Ajith, Arham Khan, Daniel Grzenda, Nathaniel Hudson, André Bauer, Kyle Chard, Ian T. Foster. Memory Injections: Correcting Multi-Hop Reasoning Failures During Inference in Transformer-Based Language Models. In Yonatan Belinkov, Sophie Hao, Jaap Jumelet, Najoung Kim, Arya McCarthy, Hosein Mohebbi, editors, Proceedings of the 6th BlackboxNLP Workshop: Analyzing and Interpreting Neural Networks for NLP, BlackboxNLP@EMNLP 2023, Singapore, December 7, 2023, pages 342-356. Association for Computational Linguistics, 2023.
