MDP Policy for Partially Observable Markov Decision Processes in Large Domains: Embedding Exploration Dynamics

Giorgos Apostolikas, Spyros G. Tzafestas. MDP Policy for Partially Observable Markov Decision Processes in Large Domains: Embedding Exploration Dynamics. Intelligent Automation & Soft Computing, 10(3):209-220, 2004.
