The following publications are possibly variants of this publication:
- Preference Ranking Optimization for Human Alignment. Feifan Song 0001, Bowen Yu 0002, Minghao Li, Haiyang Yu, Fei Huang, Yongbin Li, Houfeng Wang. AAAI 2024: 18990-18998.
- Arithmetic Control of LLMs for Diverse User Preferences: Directional Preference Alignment with Multi-Objective Rewards. Haoxiang Wang, Yong Lin, Wei Xiong, Rui Yang, Shizhe Diao, Shuang Qiu, Han Zhao, Tong Zhang. ACL 2024: 8642-8655.
- Discovering diverse human behavior from two-dimensional preferences. Pu-Tai Yang, Tony Cheng Kui Huang, Huan-Lin Chu, Yung-Ting Chuang. Knowledge-Based Systems, 152:11-25, 2018.
- AlignDiff: Aligning Diverse Human Preferences via Behavior-Customisable Diffusion Model. Zibin Dong, Yifu Yuan, Jianye Hao, Fei Ni, Yao Mu, Yan Zheng 0002, Yujing Hu, Tangjie Lv, Changjie Fan, Zhipeng Hu. ICLR 2024.
- Uni-RLHF: Universal Platform and Benchmark Suite for Reinforcement Learning with Diverse Human Feedback. Yifu Yuan, Jianye Hao, Yi Ma, Zibin Dong, Hebin Liang, Jinyi Liu 0002, Zhixin Feng, Kai Zhao, Yan Zheng 0002. ICLR 2024.