Context Limitations Make Neural Language Models More Human-Like

Tatsuki Kuribayashi, Yohei Oseki, Ana Brassard, Kentaro Inui. Context Limitations Make Neural Language Models More Human-Like. In Yoav Goldberg, Zornitsa Kozareva, Yue Zhang, editors, Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022, Abu Dhabi, United Arab Emirates, December 7-11, 2022, pages 10421-10436. Association for Computational Linguistics, 2022.

@inproceedings{KuribayashiOBI22,
  title = {Context Limitations Make Neural Language Models More Human-Like},
  author = {Tatsuki Kuribayashi and Yohei Oseki and Ana Brassard and Kentaro Inui},
  year = {2022},
  url = {https://aclanthology.org/2022.emnlp-main.712},
  researchr = {https://researchr.org/publication/KuribayashiOBI22},
  cites = {0},
  citedby = {0},
  pages = {10421--10436},
  booktitle = {Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022, Abu Dhabi, United Arab Emirates, December 7-11, 2022},
  editor = {Yoav Goldberg and Zornitsa Kozareva and Yue Zhang},
  publisher = {Association for Computational Linguistics},
}