XC-Cache: Cross-Attending to Cached Context for Efficient LLM Inference

João Monteiro, Étienne Marcotte, Pierre-André Noël, Valentina Zantedeschi, David Vázquez, Nicolas Chapados, Christopher Pal, Perouz Taslakian. XC-Cache: Cross-Attending to Cached Context for Efficient LLM Inference. In Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen, editors, Findings of the Association for Computational Linguistics: EMNLP 2024, Miami, Florida, USA, November 12-16, 2024. Pages 15284-15302, Association for Computational Linguistics, 2024.

@inproceedings{0002MNZ0CPT24,
  title = {XC-Cache: Cross-Attending to Cached Context for Efficient LLM Inference},
  author = {João Monteiro and Étienne Marcotte and Pierre-André Noël and Valentina Zantedeschi and David Vázquez and Nicolas Chapados and Christopher Pal and Perouz Taslakian},
  year = {2024},
  url = {https://aclanthology.org/2024.findings-emnlp.896},
  pages = {15284--15302},
  booktitle = {Findings of the Association for Computational Linguistics: EMNLP 2024, Miami, Florida, USA, November 12-16, 2024},
  editor = {Yaser Al-Onaizan and Mohit Bansal and Yun-Nung Chen},
  publisher = {Association for Computational Linguistics},
  isbn = {979-8-89176-168-1},
}