A Hidden Markov Restless Multi-armed Bandit Model for Playout Recommendation Systems

Rahul Meshram, Aditya Gopalan, D. Manjunath. A Hidden Markov Restless Multi-armed Bandit Model for Playout Recommendation Systems. In Nishanth Sastry, Sandip Chakraborty, editors, Communication Systems and Networks - 9th International Conference, COMSNETS 2017, Bengaluru, India, January 4-8, 2017, Revised Selected Papers and Invited Papers, volume 10340 of Lecture Notes in Computer Science, pages 335-362. Springer, 2017.
