A Contextual Multi-armed Bandit Approach Based on Implicit Feedback for Online Recommendation

Yongquan Wan, Junli Xian, Cairong Yan. A Contextual Multi-armed Bandit Approach Based on Implicit Feedback for Online Recommendation. In Lorna Uden, I-Hsien Ting, Kai Wang, editors, Knowledge Management in Organizations - 15th International Conference, KMO 2021, Kaohsiung, Taiwan, July 20-22, 2021, Proceedings. Volume 1438 of Communications in Computer and Information Science, pages 380-392, Springer, 2021.

Abstract

Abstract is missing.