The following publications are possible variants of this publication:
- Distributed training of large scale exponential language models. Abhinav Sethy, Stanley F. Chen, Bhuvana Ramabhadran. ICASSP 2011: 5520-5523 [doi]
- SpecEE: Accelerating Large Language Model Inference with Speculative Early Exiting. Jiaming Xu, Jiayi Pan, Yongkang Zhou, Siming Chen, Jinhao Li, Yaoxiu Lian, Junyi Wu, Guohao Dai. ISCA 2025: 467-481 [doi]
- WLB-LLM: Workload-Balanced 4D Parallelism for Large Language Model Training. Zheng Wang 0075, Anna Cai, Xinfeng Xie, Zaifeng Pan, Yue Guan, Weiwei Chu, Jie Wang 0022, Shikai Li, Jianyu Huang, Chris Cai, Yuchen Hao, Yufei Ding 0001. OSDI 2025: 785-801 [doi]
- TeraPipe: Token-Level Pipeline Parallelism for Training Large-Scale Language Models. Zhuohan Li, Siyuan Zhuang, Shiyuan Guo, Danyang Zhuo, Hao Zhang, Dawn Song, Ion Stoica. ICML 2021: 6543-6552 [doi]
- Multi-step Iterative Automated Domain Modeling with Large Language Models. Yujing Yang, Boqi Chen, Kua Chen, Gunter Mussbacher, Dániel Varró. MoDELS 2024: 587-595 [doi]
- Automated Domain Modeling with Large Language Models: A Comparative Study. Kua Chen, Yujing Yang, Boqi Chen, José Antonio Hernández López, Gunter Mussbacher, Dániel Varró. MoDELS 2023: 162-172 [doi]
- Prompting or Fine-tuning? A Comparative Study of Large Language Models for Taxonomy Construction. Boqi Chen, Fandi Yi, Dániel Varró. MoDELS 2023: 588-596 [doi]