COPEN: Probing Conceptual Knowledge in Pre-trained Language Models

Hao Peng, Xiaozhi Wang, Shengding Hu, Hailong Jin, Lei Hou, Juanzi Li, Zhiyuan Liu, Qun Liu. COPEN: Probing Conceptual Knowledge in Pre-trained Language Models. In Yoav Goldberg, Zornitsa Kozareva, Yue Zhang, editors, Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022, Abu Dhabi, United Arab Emirates, December 7-11, 2022, pages 5015-5035. Association for Computational Linguistics, 2022.

@inproceedings{PengWHJ0L0022,
  title = {COPEN: Probing Conceptual Knowledge in Pre-trained Language Models},
  author = {Hao Peng and Xiaozhi Wang and Shengding Hu and Hailong Jin and Lei Hou 0001 and Juanzi Li and Zhiyuan Liu 0010 and Qun Liu 0001},
  year = {2022},
  url = {https://aclanthology.org/2022.emnlp-main.335},
  researchr = {https://researchr.org/publication/PengWHJ0L0022},
  pages = {5015--5035},
  booktitle = {Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022, Abu Dhabi, United Arab Emirates, December 7-11, 2022},
  editor = {Yoav Goldberg and Zornitsa Kozareva and Yue Zhang},
  publisher = {Association for Computational Linguistics},
}