26. Richard Fox, Elliot A. Ludvig. Assimilating human feedback from autonomous vehicle interaction in reinforcement learning models.
27. Andreas Kallinteris, Stavros Orfanoudakis, Georgios Chalkiadakis. A comprehensive analysis of agent factorization and learning algorithms in multiagent systems.
28. Ildikó Schlotter, Katarína Cechlárová, Diana Trellová. Parameterized complexity of candidate nomination for elections based on positional scoring rules.
29. Marco Faella, Luigi Sauro. On preferences and reward policies over rankings.
30. Jake Barrett, Kobi Gal, Loizos Michael, Dan Vilenchik. Beyond the echo chamber: modelling open-mindedness in citizens' assemblies.
31. Michael P. Wellman, Katherine Mayo. Navigating in a space of game views.
32. Andrea Agiollo, Luciano Cavalcante Siebert, Pradeep K. Murukannaiah, Andrea Omicini. From large language models to small logic programs: building global explanations from disagreeing local post-hoc explainers.
33. Zhenwu Wang, Jiayin Shen, Xiaosong Tang, Mengjie Han, Zhenhua Feng, Jinghua Wu. An agent-based persuasion model using emotion-driven concession and multi-objective optimization.
34. Thomy Phan, Felix Sommer, Fabian Ritz, Philipp Altmann, Jonas Nüßlein, Michael Kölle, Lenz Belzner, Claudia Linnhoff-Popien. Emergent cooperation from mutual acknowledgment exchange in multi-agent reinforcement learning.
35. Edmond Awad, Sydney Levine, Andrea Loreggia, Nicholas Mattei, Iyad Rahwan, Francesca Rossi, Kartik Talamadupula, Joshua B. Tenenbaum, Max Kleiman-Weiner. When is it acceptable to break the rules? Knowledge representation of moral judgements based on empirical data.
36. Ming Yang, Kaiyan Zhao, Yiming Wang, Renzhi Dong, Yali Du, Furui Liu, Mingliang Zhou, Leong Hou U. Team-wise effective communication in multi-agent reinforcement learning.
38. Qinghao Wang, Yaodong Yang. Carbon trading supply chain management based on constrained deep reinforcement learning.
39. Sándor P. Fekete, Peter Kramer, Christian Rieck, Christian Scheffer, Arne Schmidt. Efficiently reconfiguring a connected swarm of labeled robots.
40. Jugal Garg, Thorben Tröbst, Vijay V. Vazirani. One-sided matching markets with endowments: equilibria and algorithms.
41. Argyrios Deligkas, Aris Filos-Ratsikas, Alexandros A. Voudouris. Truthful interval covering.
42. Hadi Hosseini, Andrew McGregor, Justin Payan, Rik Sengupta, Rohit Vaish, Vignesh Viswanathan. Graphical house allocation with identical valuations.
43. Niclas Boehmer, Robert Bredereck, Klaus Heeger, Dusan Knop, Junjie Luo. Multivariate algorithmics for eliminating envy by donating goods.
45. Elnaz Shafipour, Sebastian Stein, Selin Damla Ahipasaoglu. Personalised electric vehicle charging stop planning through online estimators.
46. Dave De Jonge. Theoretical properties of the MiCRO negotiation strategy.
47. Anna Maria Kerkmann, Jörg Rothe. The complexity of verifying popularity and strict popularity in altruistic hedonic games.
48. Jan de Mooij, Tabea S. Sonnenschein, Marco Pellegrino, Mehdi Dastani, Dick Ettema, Brian Logan, Judith Anne Verstegen. GenSynthPop: generating a spatially explicit synthetic population of individuals and households from aggregated data.
49. Richard Willis, Yali Du, Joel Z. Leibo, Michael Luck. Resolving social dilemmas with minimal reward transfer.
50. Ana Ozaki, Anum Rehman, Marija Slavkovik. Finding middle grounds for incoherent horn expressions: the moral machine case.
51. Andreas A. Haupt, Phillip J. K. Christoffersen, Mehul Damani, Dylan Hadfield-Menell. Formal contracts mitigate social dilemmas in multi-agent reinforcement learning.