The following publications are possibly variants of this publication:
- Adversarial Knowledge Distillation For Robust Spoken Language Understanding. Ye Wang, Baishun Ling, Yanmeng Wang, Junhao Xue, Shaojun Wang, Jing Xiao 0006. interspeech 2022: 2708-2712 [doi]
- Combining Statistical and Knowledge-Based Spoken Language Understanding in Conditional Models. Ye-Yi Wang, Alex Acero, Milind Mahajan, John Lee. acl 2006: [doi]
- Discriminative models for spoken language understanding. Ye-Yi Wang, Alex Acero. interspeech 2006: [doi]
- 2KD-SLU: An Intra-Inter Knowledge Distillation Framework for Zero-Shot Cross-Lingual Spoken Language Understanding. Tianjun Mao, Chenghong Zhang. icann 2023: 345-356 [doi]
- 2KD-SLU: An Intra-Inter Knowledge Distillation Framework for Zero-Shot Cross-Lingual Spoken Language Understanding. Tianjun Mao, Chenghong Zhang. icann 2023: 1 [doi]
- Deep contextual language understanding in spoken dialogue systems. Chunxi Liu, Puyang Xu, Ruhi Sarikaya. interspeech 2015: 120-124 [doi]
- Spoken Language Understanding with Sememe Knowledge as Domain Knowledge. Sixia Li, Jianwu Dang, Longbiao Wang. iscslp 2021: 1-5 [doi]