Pre-Trained Multilingual Sequence-to-Sequence Models: A Hope for Low-Resource Language Translation?

En-Shiun Annie Lee, Sarubi Thillainathan, Shravan Nayak, Surangika Ranathunga, David Ifeoluwa Adelani, Ruisi Su, Arya McCarthy. Pre-Trained Multilingual Sequence-to-Sequence Models: A Hope for Low-Resource Language Translation? In Smaranda Muresan, Preslav Nakov, Aline Villavicencio, editors, Findings of the Association for Computational Linguistics: ACL 2022, Dublin, Ireland, May 22-27, 2022. pages 58-67, Association for Computational Linguistics, 2022.

@inproceedings{LeeTNRASM22,
  title = {Pre-Trained Multilingual Sequence-to-Sequence Models: A Hope for Low-Resource Language Translation?},
  author = {En-Shiun Annie Lee and Sarubi Thillainathan and Shravan Nayak and Surangika Ranathunga and David Ifeoluwa Adelani and Ruisi Su and Arya McCarthy},
  year = {2022},
  url = {https://aclanthology.org/2022.findings-acl.6},
  researchr = {https://researchr.org/publication/LeeTNRASM22},
  pages = {58--67},
  booktitle = {Findings of the Association for Computational Linguistics: ACL 2022, Dublin, Ireland, May 22-27, 2022},
  editor = {Smaranda Muresan and Preslav Nakov and Aline Villavicencio},
  publisher = {Association for Computational Linguistics},
  isbn = {978-1-955917-25-4},
}