Rethinking Style Transformer with Energy-based Interpretation: Adversarial Unsupervised Style Transfer using a Pretrained Model

Hojun Cho, Dohee Kim, Seungwoo Ryu, ChaeHun Park, Hyungjong Noh, Jeong-In Hwang, Minseok Choi, Edward Choi, Jaegul Choo. Rethinking Style Transformer with Energy-based Interpretation: Adversarial Unsupervised Style Transfer using a Pretrained Model. In Yoav Goldberg, Zornitsa Kozareva, Yue Zhang, editors, Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022, Abu Dhabi, United Arab Emirates, December 7-11, 2022, pages 5452-5467, Association for Computational Linguistics, 2022.

@inproceedings{ChoKRPNHCCC22,
  title = {Rethinking Style Transformer with Energy-based Interpretation: Adversarial Unsupervised Style Transfer using a Pretrained Model},
  author = {Hojun Cho and Dohee Kim and Seungwoo Ryu and ChaeHun Park and Hyungjong Noh and Jeong-In Hwang and Minseok Choi and Edward Choi and Jaegul Choo},
  year = {2022},
  url = {https://aclanthology.org/2022.emnlp-main.366},
  researchr = {https://researchr.org/publication/ChoKRPNHCCC22},
  pages = {5452--5467},
  booktitle = {Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022, Abu Dhabi, United Arab Emirates, December 7-11, 2022},
  editor = {Yoav Goldberg and Zornitsa Kozareva and Yue Zhang},
  publisher = {Association for Computational Linguistics},
}