- NeuroHex: A Deep Q-learning Hex Agent. Kenny Young, Gautham Vasan, Ryan Hayward. 3-18 [doi]
- Deep or Wide? Learning Policy and Value Neural Networks for Combinatorial Games. Stefan Edelkamp. 19-33 [doi]
- Integrating Factorization Ranked Features in MCTS: An Experimental Study. Chenjun Xiao, Martin Müller. 34-43 [doi]
- Nested Rollout Policy Adaptation with Selective Policies. Tristan Cazenave. 44-56 [doi]
- A Rollout-Based Search Algorithm Unifying MCTS and Alpha-Beta. Hendrik Baier. 57-70 [doi]
- Learning from the Memory of Atari 2600. Jakub Sygnowski, Henryk Michalewski. 71-85 [doi]
- Clustering-Based Online Player Modeling. Jason M. Bindewald, Gilbert L. Peterson, Michael E. Miller. 86-100 [doi]
- AI Wolf Contest: Development of Game AI Using Collective Intelligence. Fujio Toriumi, Hirotaka Osawa, Michimasa Inaba, Daisuke Katagami, Kosuke Shinoda, Hitoshi Matsubara. 101-115 [doi]
- Semantic Classification of Utterances in a Language-Driven Game. Kellen Gillespie, Michael W. Floyd, Matthew Molineaux, Swaroop Vattam, David W. Aha. 116-129 [doi]
- Optimizing Propositional Networks. Chiara F. Sironi, Mark H. M. Winands. 133-151 [doi]
- Grounding GDL Game Descriptions. Stephan Schiffel. 152-164 [doi]
- A General Approach of Game Description Decomposition for General Game Playing. Aline Hufschmitt, Jean-Noël Vittaut, Jean Méhat. 165-177 [doi]