- Using Agent-Based Simulator to Assess Interventions Against COVID-19 in a Small Community Generated from Map Data. Mitsuteru Abe, Fabio Henrique Kiyoiti dos Santos Tanaka, Jair Pereira Junior, Anna Bogdanova, Tetsuya Sakurai, Claus Aranha. 1-8 [doi]
- Multi-Objective Reinforcement Learning with Non-Linear Scalarization. Mridul Agarwal, Vaneet Aggarwal, Tian Lan. 9-17 [doi]
- Be Considerate: Avoiding Negative Side Effects in Reinforcement Learning. Parand Alizadeh Alamdari, Toryn Q. Klassen, Rodrigo Toro Icarte, Sheila A. McIlraith. 18-26 [doi]
- Hacking the Colony: On the Disruptive Effect of Misleading Pheromone and How to Defend against It. Ashay Aswale, Antonio López, Aukkawut Ammartayakun, Carlo Pinciroli. 27-34 [doi]
- State Supervised Steering Function for Sampling-based Kinodynamic Planning. Pranav Atreya, Joydeep Biswas. 35-43 [doi]
- Unbiased Asymmetric Reinforcement Learning under Partial Observability. Andrea Baisero, Christopher Amato. 44-52 [doi]
- Multi-Agent Heterogeneous Digital Twin Framework with Dynamic Responsibility Allocation for Complex Task Simulation. Adrian Simon Bauer, Anne Köpken, Daniel Leidner. 53-61 [doi]
- Reasoning about Human-Friendly Strategies in Repeated Keyword Auctions. Francesco Belardinelli, Wojtek Jamroga, Vadim Malvone, Munyque Mittelmann, Aniello Murano, Laurent Perrussel. 62-71 [doi]
- COPALZ: A Computational Model of Pathological Appraisal Biases for an Interactive Virtual Alzheimer Patient. Amine Benamara, Jean-Claude Martin, Elise Prigent, Laurence Chaby, Mohamed Chetouani, Jean Zagdoun, Hélène Vanderstichel, Sébastien Dacunha, Brian Ravenet. 72-81 [doi]
- Computing Balanced Solutions for Large International Kidney Exchange Schemes. Márton Benedek, Péter Biró, Walter Kern, Daniël Paulusma. 82-90 [doi]
- Agent-based Modeling and Simulation for Malware Spreading in D2D Networks. Ziyad Benomar, Chaima Ghribi, Elie Cali, Alexander Hinsen, Benedikt Jahnel. 91-99 [doi]
- Quantitative Group Trust: A Two-Stage Verification Approach. Jamal Bentahar, Nagat Drawel, Abdeladim Sadiki. 100-108 [doi]
- Asynchronous Opinion Dynamics in Social Networks. Petra Berenbrink, Martin Hoefer 0001, Dominik Kaaser, Pascal Lenzner, Malin Rau, Daniel Schmand. 109-117 [doi]
- Interpretable Preference-based Reinforcement Learning with Tree-Structured Reward Functions. Tom Bewley, Freddy Lécué. 118-126 [doi]
- Multivariate Algorithmics for Eliminating Envy by Donating Goods. Niclas Boehmer, Robert Bredereck, Klaus Heeger, Dusan Knop, Junjie Luo 0001. 127-135 [doi]
- Proportional Representation in Matching Markets: Selecting Multiple Matchings under Dichotomous Preferences. Niclas Boehmer, Markus Brill, Ulrike Schmidt-Kraepelin. 136-144 [doi]
- A Hierarchical Bayesian Process for Inverse RL in Partially-Controlled Environments. Kenneth D. Bogert, Prashant Doshi. 145-153 [doi]
- Little House (Seat) on the Prairie: Compactness, Gerrymandering, and Population Distribution. Allan Borodin, Omer Lev, Nisarg Shah 0001, Tyrone Strangway. 154-162 [doi]
- Knowledge Transmission and Improvement Across Generations do not Need Strong Selection. Yasser Bourahla, Manuel Atencia, Jérôme Euzenat. 163-171 [doi]
- Explainability in Multi-Agent Path/Motion Planning: User-study-driven Taxonomy and Requirements. Martim Brandao, Masoumeh Mansouri, Areeb Mohammed, Paul Luff, Amanda Jane Coles. 172-180 [doi]
- Relaxed Notions of Condorcet-Consistency and Efficiency for Strategyproof Social Decision Schemes. Felix Brandt 0001, Patrick Lederer, René Romen. 181-189 [doi]
- Fair Stable Matching Meets Correlated Preferences. Angelina Brilliantova, Hadi Hosseini. 190-198 [doi]
- Exploiting Causal Structure for Transportability in Online, Multi-Agent Environments. Axel Browne, Andrew Forney. 199-207 [doi]
- Beyond Cake Cutting: Allocating Homogeneous Divisible Goods. Ioannis Caragiannis, Vasilis Gkatzelis, Alexandros Psomas, Daniel Schoepflin 0001. 208-216 [doi]
- Planning, Execution, and Adaptation for Multi-Robot Systems using Probabilistic and Temporal Planning. Yaniel Carreno, Jun Hao Alvin Ng, Yvan R. Petillot, Ron P. A. Petrick. 217-225 [doi]
- Bayesian Persuasion Meets Mechanism Design: Going Beyond Intractability with Type Reporting. Matteo Castiglioni, Alberto Marchesi, Nicola Gatti 0001. 226-234 [doi]
- Best-Response Bayesian Reinforcement Learning with Bayes-adaptive POMDPs for Centaurs. Mustafa Mert Çelikok, Frans A. Oliehoek, Samuel Kaski. 235-243 [doi]
- Anomaly Guided Policy Learning from Imperfect Demonstrations. Zi-Xuan Chen, Xin-Qiang Cai, Yuan Jiang, Zhi-Hua Zhou. 244-252 [doi]
- Individual-Level Inverse Reinforcement Learning for Mean Field Games. Yang Chen, Libo Zhang, Jiamou Liu, Shuyue Hu. 253-262 [doi]
- Simulating Multiwinner Voting Rules in Judgment Aggregation. Julian Chingoma, Ulle Endriss, Ronald de Haan. 263-271 [doi]
- Coordinated Multi-Agent Pathfinding for Drones and Trucks over Road Networks. Shushman Choudhury, Kiril Solovey, Mykel J. Kochenderfer, Marco Pavone. 272-280 [doi]
- Pippi: Practical Protocol Instantiation. Samuel H. Christie V., Amit K. Chopra, Munindar P. Singh. 281-289 [doi]
- Optimizing Multi-Agent Coordination via Hierarchical Graph Probabilistic Recursive Reasoning. Saar Cohen, Noa Agmon. 290-299 [doi]
- Pareto Optimal and Popular House Allocation with Lower and Upper Quotas. Ágnes Cseh, Tobias Friedrich 0001, Jannik Peters 0001. 300-308 [doi]
- Three-Dimensional Popular Matching with Cyclic Preferences. Ágnes Cseh, Jannik Peters 0001. 309-317 [doi]
- Poincaré-Bendixson Limit Sets in Multi-Agent Learning. Aleksander Czechowski, Georgios Piliouras. 318-326 [doi]
- A Distributed Differentially Private Algorithm for Resource Allocation in Unboundedly Large Settings. Panayiotis Danassis, Aleksei Triastcyn, Boi Faltings. 327-335 [doi]
- Computation and Bribery of Voting Power in Delegative Simple Games. Gianlorenzo D'Angelo, Esmaeil Delfaraz, Hugo Gilbert. 336-344 [doi]
- Budgeted Combinatorial Multi-Armed Bandits. Debojit Das, Shweta Jain 0002, Sujit Gujar. 345-353 [doi]
- Efficient Approximation Algorithms for the Inverse Semivalue Problem. Ilias Diakonikolas, Chrystalla Pavlou, John Peebles, Alistair Stewart. 354-362 [doi]
- Multiagent Dynamics of Gradual Argumentation Semantics. Louise Dupuis de Tarlé, Elise Bonzon, Nicolas Maudet. 363-371 [doi]
- How to Fairly Allocate Easy and Difficult Chores. Soroush Ebadian, Dominik Peters, Nisarg Shah 0001. 372-380 [doi]
- Scalable Multi-Agent Model-Based Reinforcement Learning. Vladimir Egorov, Alexey Shpilman. 381-390 [doi]
- Facility Location With Approval Preferences: Strategyproofness and Fairness. Edith Elkind, Minming Li, Houyu Zhou. 391-399 [doi]
- Betweenness Centrality in Multi-Agent Path Finding. Eric Ewing, Jingyao Ren, Dhvani Kansara, Vikraman Sathiyanarayanan, Nora Ayanian. 400-408 [doi]
- Welfare vs. Representation in Participatory Budgeting. Roy Fairstein, Dan Vilenchik, Reshef Meir, Kobi Gal. 409-417 [doi]
- A Path-following Polynomial Equations Systems Approach for Computing Nash Equilibria. Hélène Fargier, Paul Jourdan, Régis Sabbadin. 418-426 [doi]
- Ensemble and Incremental Learning for Norm Violation Detection. Thiago Freitas dos Santos, Nardine Osman, Marco Schorlemmer. 427-435 [doi]
- The Price of Majority Support. Robin Fritsch, Roger Wattenhofer. 436-444 [doi]
- A Symbolic Representation for Probabilistic Dynamic Epistemic Logic. Sébastien Gamblin, Alexandre Niveau, Maroua Bouzid. 445-453 [doi]
- Fully-Autonomous, Vision-based Traffic Signal Control: From Simulation to Reality. Deepeka Garg, Maria Chli, George Vogiatzis. 454-462 [doi]
- One-Sided Matching Markets with Endowments: Equilibria and Algorithms. Jugal Garg, Thorben Tröbst, Vijay V. Vazirani. 463-471 [doi]
- Negotiated Path Planning for Non-Cooperative Multi-Robot Systems. Anna Gautier, Alex Stephens, Bruno Lacerda, Nick Hawes, Michael J. Wooldridge. 472-480 [doi]
- Refined Hardness of Distance-Optimal Multi-Agent Path Finding. Tzvika Geft, Dan Halperin. 481-488 [doi]
- Concave Utility Reinforcement Learning: The Mean-field Game Viewpoint. Matthieu Geist, Julien Pérolat, Mathieu Laurière, Romuald Elie, Sarah Perrin, Olivier Bachem, Rémi Munos, Olivier Pietquin. 489-497 [doi]
- D3C: Reducing the Price of Anarchy in Multi-Agent Learning. Ian M. Gemp, Kevin R. McKee, Richard Everett 0001, Edgar A. Duéñez-Guzmán, Yoram Bachrach, David Balduzzi, Andrea Tacchetti. 498-506 [doi]
- Sample-based Approximation of Nash in Large Many-Player Games via Gradient Descent. Ian M. Gemp, Rahul Savani, Marc Lanctot, Yoram Bachrach, Thomas W. Anthony, Richard Everett 0001, Andrea Tacchetti, Tom Eccles, János Kramár. 507-515 [doi]
- Building Contrastive Explanations for Multi-Agent Team Formation. Athina Georgara, Juan A. Rodríguez-Aguilar, Carles Sierra. 516-524 [doi]
- Long-Term Resource Allocation Fairness in Average Markov Decision Process (AMDP) Environment. Ganesh Ghalme, Vineet Nair, Vishakha Patil, Yilun Zhou. 525-533 [doi]
- Fair and Truthful Mechanism with Limited Subsidy. Hiromichi Goko, Ayumi Igarashi, Yasushi Kawase, Kazuhisa Makino, Hanna Sumita, Akihisa Tamura, Yu Yokoi, Makoto Yokoo. 534-542 [doi]
- Robust No-Regret Learning in Min-Max Stackelberg Games. Denizalp Goktas, Jiayi Zhao, Amy Greenwald. 543-552 [doi]
- Multi-Agent Curricula and Emergent Implicit Signaling. Niko A. Grupen, Daniel D. Lee, Bart Selman. 553-561 [doi]
- Intention-Aware Navigation in Crowds with Extended-Space POMDP Planning. Himanshu Gupta, Bradley Hayes, Zachary Sunberg. 562-570 [doi]
- Multiagent Model-based Credit Assignment for Continuous Control. Dongge Han, Chris Xiaoxuan Lu, Tomasz P. Michalak, Michael J. Wooldridge. 571-579 [doi]
- Hierarchical Value Decomposition for Effective On-demand Ride-Pooling. Jiang Hao, Pradeep Varakantham. 580-587 [doi]
- Computing Nash Equilibria for District-based Nominations. Paul Harrenstein, Paolo Turrini. 588-596 [doi]
- Ordinal Maximin Share Approximation for Chores. Hadi Hosseini, Andrew Searns, Erel Segal-haLevi. 597-605 [doi]
- A Mean Field Game Model of Spatial Evolutionary Games. Vincent Hsiao, Dana S. Nau. 606-614 [doi]
- The Dynamics of Q-learning in Population Games: A Physics-inspired Continuity Equation Model. Shuyue Hu, Chin-wing Leung, Ho-Fung Leung, Harold Soh. 615-623 [doi]
- Reduction-based Solving of Multi-agent Pathfinding on Large Maps Using Graph Pruning. Matej Husár, Jirí Svancara, Philipp Obermeier, Roman Barták, Torsten Schaub. 624-632 [doi]
- Autonomous Swarm Shepherding Using Curriculum-Based Reinforcement Learning. Aya Hussein, Eleni Petraki, Sondoss ElSawah, Hussein A. Abbass. 633-641 [doi]
- Cascades and Overexposure in Social Networks: The Budgeted Case. Mohammad T. Irfan, Kim Hancock, Laura M. Friel. 642-650 [doi]
- Being Central on the Cheap: Stability in Heterogeneous Multiagent Centrality Games. Gabriel Istrate, Cosmin Bonchis. 651-659 [doi]
- k-plex Enumeration Problems. Saïd Jabbour, Nizar Mhadhbi, Badran Raddaoui, Lakhdar Sais. 660-668 [doi]
- Lazy-MDPs: Towards Interpretable RL by Learning When to Act. Alexis Jacq, Johan Ferret, Olivier Pietquin, Matthieu Geist. 669-677 [doi]
- Balancing Fairness and Efficiency in Traffic Routing via Interpolated Traffic Assignment. Devansh Jalota, Kiril Solovey, Matthew Tsao, Stephen Zoepf, Marco Pavone. 678-686 [doi]
- Selecting PhD Students and Projects with Limited Funding. Jatin Jindal, Jérôme Lang, Katarína Cechlárová, Julien Lesca. 687-695 [doi]
- Optimal Matchings with One-Sided Preferences: Fixed and Cost-Based Quotas. Santhini K. A., Govind S. Sankar, Meghana Nasre. 696-704 [doi]
- Planning Not to Talk: Multiagent Systems that are Robust to Communication Loss. Mustafa O. Karabag, Cyrus Neary, Ufuk Topcu. 705-713 [doi]
- How Hard is Safe Bribery? Neel Karia, Faraaz Mallick, Palash Dey. 714-722 [doi]
- BADDr: Bayes-Adaptive Deep Dropout RL for POMDPs. Sammie Katt, Hai Nguyen, Frans A. Oliehoek, Christopher Amato. 723-731 [doi]
- Translating Omega-Regular Specifications to Average Objectives for Model-Free Reinforcement Learning. Milad Kazemi, Mateo Perez, Fabio Somenzi, Sadegh Soudjani, Ashutosh Trivedi 0001, Alvaro Velasquez. 732-741 [doi]
- Tactile Pose Estimation and Policy Learning for Unknown Object Manipulation. Tarik Kelestemur, Robert Platt, Taskin Padir. 742-750 [doi]
- Disentangling Successor Features for Coordination in Multi-agent Reinforcement Learning. Seung-Hyun Kim, Neale Van Stralen, Girish Chowdhary 0001, Huy T. Tran. 751-760 [doi]
- Equilibria in Schelling Games: Computational Hardness and Robustness. Luca Kreisel, Niclas Boehmer, Vincent Froese, Rolf Niedermeier. 761-769 [doi]
- Multimodal Analysis of the Predictability of Hand-gesture Properties. Taras Kucherenko, Rajmund Nagy, Michael Neff, Hedvig Kjellström, Gustav Eje Henter. 770-779 [doi]
- p-Regression. Roger Lera-Leri, Filippo Bistaffa, Marc Serramia, Maite López-Sánchez, Juan A. Rodríguez-Aguilar. 780-788 [doi]
- Deploying Vaccine Distribution Sites for Improved Accessibility and Equity to Support Pandemic Response. George Z. Li, Ann Li, Madhav Marathe, Aravind Srinivasan, Leonidas Tsepenekas, Anil Vullikanti. 789-797 [doi]
- ASM-PPO: Asynchronous and Scalable Multi-Agent PPO for Cooperative Charging. Yongheng Liang, Hejun Wu, Haitao Wang. 798-806 [doi]
- Equilibrium Computation For Knockout Tournaments Played By Groups. Grzegorz Lisowski, M. S. Ramanujan 0001, Paolo Turrini. 807-815 [doi]
- Residual Entropy-based Graph Generative Algorithms. Wencong Liu, Jiamou Liu, Zijian Zhang 0001, Yiwei Liu, Liehuang Zhu. 816-824 [doi]
- The Spoofing Resistance of Frequent Call Markets. Buhong Liu, Maria Polukarov, Carmine Ventre, Lingbo Li, Leslie Kanthan, Fan Wu, Michail Basios. 825-832 [doi]
- Logical Theories of Collective Attitudes and the Belief Base Perspective. Emiliano Lorini, Éloan Rapion. 833-841 [doi]
- Lyapunov Exponents for Diversity in Differentiable Games. Jonathan Lorraine, Paul Vicol, Jack Parker-Holder, Tal Kachman, Luke Metz, Jakob N. Foerster. 842-852 [doi]
- Any-Play: An Intrinsic Augmentation for Zero-Shot Coordination. Keane Lucas, Ross E. Allen. 853-861 [doi]
- Coalition Formation Games and Social Ranking Solutions. Roberto Lucchetti, Stefano Moretti, Tommaso Rea. 862-870 [doi]
- On Parameterized Complexity of Binary Networked Public Goods Game. Arnab Maiti, Palash Dey. 871-879 [doi]
- Efficient Algorithms for Finite Horizon and Streaming Restless Multi-Armed Bandit Problems. Aditya S. Mate, Arpita Biswas, Christoph Siebenbrunner, Susobhan Ghosh, Milind Tambe. 880-888 [doi]
- CAPS: Comprehensible Abstract Policy Summaries for Explaining Reinforcement Learning Agents. Joe McCalmon, Thai Le, Sarra Alqahtani, Dongwon Lee 0001. 889-897 [doi]
- Warmth and Competence in Human-Agent Cooperation. Kevin R. McKee, Xuechunzi Bai, Susan T. Fiske. 898-907 [doi]
- Cooperation and Learning Dynamics under Risk Diversity and Financial Incentives. Ramona Merhej, Fernando P. Santos, Francisco S. Melo, Mohamed Chetouani, Francisco C. Santos. 908-916 [doi]
- Preference-Based Goal Refinement in BDI Agents. Mostafa Mohajeri Parizi, Giovanni Sileno, Tom M. van Engers. 917-925 [doi]
- Learning Equilibria in Mean-Field Games: Introducing Mean-Field PSRO. Paul Muller, Mark Rowland, Romuald Elie, Georgios Piliouras, Julien Pérolat, Mathieu Laurière, Raphaël Marinier, Olivier Pietquin, Karl Tuyls. 926-934 [doi]
- A Graph-Based Algorithm for the Automated Justification of Collective Decisions. Oliviero Nardi, Arthur Boixel, Ulle Endriss. 935-943 [doi]
- Deep Reinforcement Learning for Active Wake Control. Grigory Neustroev, Sytze P. E. Andringa, Remco A. Verzijlbergh, Mathijs Michiel de Weerdt. 944-953 [doi]
- Learning Theory of Mind via Dynamic Traits Attribution. Dung Nguyen, Phuoc Nguyen, Hung Le, Kien Do, Svetha Venkatesh, Truyen Tran 0001. 954-962 [doi]
- Learning to Transfer Role Assignment Across Team Sizes. Dung Nguyen, Phuoc Nguyen, Svetha Venkatesh, Truyen Tran 0001. 963-971 [doi]
- CTRMs: Learning to Construct Cooperative Timed Roadmaps for Multi-agent Path Planning in Continuous Spaces. Keisuke Okumura 0001, Ryo Yonetani, Mai Nishimura, Asako Kanezaki. 972-981 [doi]
- Factorial Agent Markov Model: Modeling Other Agents' Behavior in presence of Dynamic Latent Decision Factors. Liubove Orlov-Savko, Abhinav Jain, Gregory M. Gremillion, Catherine E. Neubauer, Jonroy D. Canady, Vaibhav V. Unhelkar. 982-1000 [doi]
- Networked Restless Multi-Armed Bandits for Mobile Interventions. Han-Ching Ou, Christoph Siebenbrunner, Jackson A. Killian, Meredith B. Brooks, David Kempe 0001, Yevgeniy Vorobeychik, Milind Tambe. 1001-1009 [doi]
- Characterizing Attacks on Deep Reinforcement Learning. Xinlei Pan, Chaowei Xiao, Warren He, Shuang Yang, Jian Peng, Mingjie Sun, Mingyan Liu, Bo Li, Dawn Song. 1010-1018 [doi]
- BOID*: Autonomous Goal Deliberation through Abduction. Stipe Pandzic, Jan M. Broersen, Henk Aarts. 1019-1027 [doi]
- Scaling Mean Field Games by Online Mirror Descent. Julien Pérolat, Sarah Perrin, Romuald Elie, Mathieu Laurière, Georgios Piliouras, Matthieu Geist, Karl Tuyls, Olivier Pietquin. 1028-1037 [doi]
- MORAL: Aligning AI with Human Norms through Multi-Objective Reinforced Active Learning. Markus Peschl, Arkady Zgonnikov, Frans A. Oliehoek, Luciano Cavalcante Siebert. 1038-1046 [doi]
- Emergent Cooperation from Mutual Acknowledgment Exchange. Thomy Phan, Felix Sommer, Philipp Altmann, Fabian Ritz, Lenz Belzner, Claudia Linnhoff-Popien. 1047-1055 [doi]
- Auction-based and Distributed Optimization Approaches for Scheduling Observations in Satellite Constellations with Exclusive Orbit Portions. Gauthier Picard. 1056-1064 [doi]
- Trajectory Coordination based on Distributed Constraint Optimization Techniques in Unmanned Air Traffic Management. Gauthier Picard. 1065-1073 [doi]
- Learning Heuristics for Combinatorial Assignment by Optimally Solving Subproblems. Fredrik Präntare, Herman Appelgren, Mattias Tiger, David Bergström 0002, Fredrik Heintz. 1074-1082 [doi]
- Evaluating the Role of Interactivity on Improving Transparency in Autonomous Agents. Peizhu Qian, Vaibhav V. Unhelkar. 1083-1091 [doi]
- Revenue and User Traffic Maximization in Mobile Short-Video Advertising. Dezhi Ran, Weiqiang Zheng, Yunqi Li, Kaigui Bian, Jie Zhang 0008, Xiaotie Deng. 1092-1100 [doi]
- Automated Configuration and Usage of Strategy Portfolios for Mixed-Motive Bargaining. Bram M. Renting, Holger H. Hoos, Catholijn M. Jonker. 1101-1109 [doi]
- Pareto Conditioned NetworksMathieu Reymond, Eugenio Bargiacchi, Ann Nowé. 1110-1118 [doi]
- Testing Requirements via User and System Stories in Agent SystemsSebastian Rodriguez, John Thangarajah, Michael Winikoff, Dhirendra Singh. 1119-1127 [doi]
- GCS: Graph-Based Coordination Strategy for Multi-Agent Reinforcement LearningJingqing Ruan, Yali Du, Xuantang Xiong, Dengpeng Xing, Xiyun Li, Linghui Meng 0001, Haifeng Zhang, Jun Wang, Bo Xu. 1128-1136 [doi]
- REMAX: Relational Representation for Multi-Agent ExplorationHeechang Ryu, Hayong Shin, Jinkyoo Park. 1137-1145 [doi]
- Decoupled Reinforcement Learning to Stabilise Intrinsically-Motivated ExplorationLukas Schäfer 0001, Filippos Christianos, Josiah P. Hanna, Stefano V. Albrecht. 1146-1154 [doi]
- Group Fairness in Bandits with Biased FeedbackCandice Schumann, Zhi Lang, Nicholas Mattei, John P. Dickerson. 1155-1163 [doi]
- Sympathy-based Reinforcement Learning AgentsManisha Senadeera, Thommen George Karimpanal, Sunil Gupta 0001, Santu Rana. 1164-1172 [doi]
- Learning Efficient Diverse Communication for Cooperative Heterogeneous TeamingEsmaeil Seraj, Zheyuan Wang, Rohan R. Paleja, Daniel Martin, Matthew Sklar, Anirudh Patel, Matthew C. Gombolay. 1173-1182 [doi]
- ACuTE: Automatic Curriculum Transfer from Simple to Complex EnvironmentsYash Shukla, Christopher Thierauf, Ramtin Hosseini, Gyan Tatiya, Jivko Sinapov. 1192-1200 [doi]
- Anti-Malware Sandbox GamesSujoy Sikdar, Sikai Ruan, Qishen Han, Paween Pitimanaaree, Jeremy Blackthorne, Bülent Yener, Lirong Xia. 1201-1209 [doi]
- Properties of Reputation Lag Attack StrategiesSean Sirur, Tim Muller. 1210-1218 [doi]
- The Generalized Magician Problem under Unknown Distributions and Related ApplicationsAravind Srinivasan, Pan Xu 0001. 1219-1227 [doi]
- Context-Aware Modelling for Multi-Robot Systems Under UncertaintyCharlie Street, Bruno Lacerda, Michal Staniaszek, Manuel Mühlig, Nick Hawes. 1228-1236 [doi]
- Off-Policy Evolutionary Reinforcement Learning with Maximum MutationsKarush Suri. 1237-1245 [doi]
- Justifying Social-Choice Mechanism Outcome for Improving Participant SatisfactionSharadhi Alape Suryanarayana, David Sarne, Sarit Kraus. 1246-1255 [doi]
- Descriptive and Prescriptive Visual Guidance to Improve Shared Situational Awareness in Human-Robot TeamingAaquib Tabrez, Matthew B. Luebbers, Bradley Hayes. 1256-1264 [doi]
- How Hard is Bribery in Elections with Randomly Selected VotersLiangde Tao, Lin Chen, Lei Xu, Weidong Shi, Ahmed Sunny, Md Mahabub Uz Zaman. 1265-1273 [doi]
- Socially Supervised Representation Learning: The Role of Subjectivity in Learning Efficient RepresentationsJulius Taylor, Eleni Nisioti, Clément Moulin-Frier. 1274-1282 [doi]
- Corruption in Auctions: Social Welfare Loss in Hybrid Multi-Unit AuctionsAndries van Beek, Ruben Brokkelkamp, Guido Schäfer. 1283-1291 [doi]
- Coaching Agent: Making Recommendations for Behavior Change. A Case Study on Improving Eating HabitsJules Vandeputte, Antoine Cornuéjols, Nicolas Darcel, Fabien Delaere, Christine Martin. 1292-1300 [doi]
- How to Sense the World: Leveraging Hierarchy in Multimodal Perception for Robust Reinforcement Learning AgentsMiguel Vasco, Hang Yin, Francisco S. Melo, Ana Paiva 0001. 1301-1309 [doi]
- Controller Synthesis for Omega-Regular and Steady-State SpecificationsAlvaro Velasquez, Ismail Alkhouri, Andre Beckus, Ashutosh Trivedi 0001, George K. Atia. 1310-1318 [doi]
- Graphical Representation Enhances Human Compliance with Principles for Graded Argumentation SemanticsSrdjan Vesic, Bruno Yun, Predrag Teovanovic. 1319-1327 [doi]
- Epistemic Reasoning in JasonMichael J. Vezina, Babak Esfandiari. 1328-1336 [doi]
- Robust Learning from Observation with Model MisspecificationLuca Viano, Yu-Ting Huang, Parameswaran Kamalaruban, Craig Innes, Subramanian Ramamoorthy, Adrian Weller. 1337-1345 [doi]
- Evaluating Strategy Exploration in Empirical Game-Theoretic AnalysisYongzhao Wang, Qiurui Ma, Michael P. Wellman. 1346-1354 [doi]
- FCMNet: Full Communication Memory Net for Team-Level Cooperation in Multi-Agent SystemsYutong Wang, Guillaume Sartoretti. 1355-1363 [doi]
- Online Collective Multiagent Planning by Offline Policy Reuse with Applications to City-Scale Mobility-on-Demand SystemsWanyuan Wang, Gerong Wu, Weiwei Wu 0001, Yichuan Jiang, Bo An 0001. 1364-1372 [doi]
- Position-Based Matching with Multi-Modal PreferencesYinghui Wen, Aizhong Zhou, Jiong Guo. 1373-1381 [doi]
- Empirical Estimates on Hand Manipulation are Recoverable: A Step Towards Individualized and Explainable Robotic Support in Everyday ActivitiesAlexander Wich, Holger Schultheis, Michael Beetz. 1382-1390 [doi]
- Agent-Temporal Attention for Reward Redistribution in Episodic Multi-Agent Reinforcement LearningBaicen Xiao, Bhaskar Ramasubramanian, Radha Poovendran. 1391-1399 [doi]
- SIDE: State Inference for Partially Observable Cooperative Multi-Agent Reinforcement LearningZhiwei Xu, Yunpeng Bai, Dapeng Li, Bin Zhang, Guoliang Fan. 1400-1408 [doi]
- Spiking Pitch Black: Poisoning an Unknown Environment to Attack Unknown Reinforcement LearnersHang Xu, Xinghua Qu, Zinovi Rabinovich. 1409-1417 [doi]
- Mis-spoke or mis-lead: Achieving Robustness in Multi-Agent Communicative Reinforcement LearningWanqi Xue, Wei Qiu 0001, Bo An 0001, Zinovi Rabinovich, Svetlana Obraztsova, Chai Kiat Yeo. 1418-1426 [doi]
- Standby-Based Deadlock Avoidance Method for Multi-Agent Pickup and Delivery TasksTomoki Yamauchi, Yuki Miyashita, Toshiharu Sugawara. 1427-1435 [doi]
- Adaptive Incentive Design with Multi-Agent Meta-Gradient Reinforcement LearningJiachen Yang, Ethan Wang, Rakshit Trivedi, Tuo Zhao, Hongyuan Zha. 1436-1445 [doi]
- Strategy-Proof House Allocation with Existing Tenants over Social NetworksBo You, Ludwig Dierks, Taiki Todo, Minming Li, Makoto Yokoo. 1446-1454 [doi]
- Segregation in Social Networks of Heterogeneous Agents Acting under Incomplete InformationD. Kai Zhang, Alexander Carver. 1455-1463 [doi]
- Multi-Agent Path Finding for Precedence-Constrained Goal SequencesHan Zhang, Jingkai Chen, Jiaoyang Li 0001, Brian C. Williams, Sven Koenig. 1464-1472 [doi]
- The Competition and Inefficiency in Urban Road Last-Mile DeliveryKeyang Zhang, Jose Javier Escribano Macias, Dario Paccagnan, Panagiotis Angeloudis. 1473-1481 [doi]
- Tracking Truth by Weighting Proxies in Liquid DemocracyYuzhe Zhang, Davide Grossi. 1482-1490 [doi]
- A Deeper Look at Discounting Mismatch in Actor-Critic AlgorithmsShangtong Zhang, Romain Laroche, Harm van Seijen, Shimon Whiteson, Remi Tachet des Combes. 1491-1499 [doi]
- Centralized Model and Exploration Policy for Multi-Agent RLQizhen Zhang 0002, Chris Lu, Animesh Garg, Jakob N. Foerster. 1500-1508 [doi]
- Incentives to Invite Others to Form Larger CoalitionsYao Zhang, Dengji Zhao. 1509-1517 [doi]
- R-CHECK: A Model Checker for Verifying Reconfigurable MASYehia Abd Alrahman, Shaun Azzopardi, Nir Piterman. 1518-1520 [doi]
- RASS: Risk-Aware Swarm StorageSamuel Arseneault, David Vielfaure, Giovanni Beltrame. 1521-1523 [doi]
- Local Advantage Networks for Cooperative Multi-Agent Reinforcement LearningRaphaël Avalos, Mathieu Reymond, Ann Nowé, Diederik M. Roijers. 1524-1526 [doi]
- Advising Agent for Service-Providing Live-Chat OperatorsAviram Aviv, Yaniv Oshrat, Samuel A. Assefa, Tobi Mustapha, Daniel Borrajo, Manuela Veloso, Sarit Kraus. 1527-1529 [doi]
- Status-quo Policy Gradient in Multi-Agent Reinforcement LearningPinkesh Badjatiya, Mausoom Sarkar, Nikaash Puri, Jayakumar Subramanian, Abhishek Sinha, Siddharth Singh, Balaji Krishnamurthy. 1530-1532 [doi]
- Deep Learnable Strategy Templates for Multi-Issue Bilateral NegotiationPallavi Bagga, Nicola Paoletti, Kostas Stathis. 1533-1535 [doi]
- Can Algorithms be Explained Without Compromising Efficiency? The Benefits of Detection and Imitation in Strategic ClassificationFlavia Barsotti, Rüya Gökhan Koçer, Fernando P. Santos. 1536-1538 [doi]
- A New Porous Structure for Modular RobotsJad Bassil, Benoît Piranda, Abdallah Makhoul, Julien Bourgeois. 1539-1541 [doi]
- On the Average-Case Complexity of Predicting Round-Robin TournamentsDorothea Baumeister, Tobias Hogrebe. 1542-1544 [doi]
- The Evolutionary Dynamics of Soft-Max Policy Gradient in Multi-Agent SettingsMartino Bernasconi de Luca, Federico Cacciamani, Simone Fioravanti, Nicola Gatti 0001, Francesco Trovò. 1545-1547 [doi]
- A Refined Complexity Analysis of Fair Districting over GraphsNiclas Boehmer, Tomohiro Koana, Rolf Niedermeier. 1548-1550 [doi]
- Contrastive Explanations for Argumentation-Based ConclusionsAnnemarie Borg, Floris Bex. 1551-1553 [doi]
- Voting for CentralityUlrik Brandes, Christian Laußmann, Jörg Rothe. 1554-1556 [doi]
- Solving N-Player Dynamic Routing Games with Congestion: A Mean-Field ApproachTheophile Cabannes, Mathieu Laurière, Julien Pérolat, Raphaël Marinier, Sertan Girgin, Sarah Perrin, Olivier Pietquin, Alexandre M. Bayen, Eric Goubault, Romuald Elie. 1557-1559 [doi]
- On Fair and Efficient Solutions for Budget ApportionmentPierre Cardi, Laurent Gourvès, Julien Lesca. 1560-1562 [doi]
- Optimal Local Bayesian Differential Privacy over Markov ChainsDarshan Chakrabarti, Jie Gao 0001, Aditya Saraf, Grant Schoenebeck, Fang-Yi Yu. 1563-1565 [doi]
- Augmented Reality Visualizations using Imitation Learning for Collaborative Warehouse RobotsKishan Chandan, Jack Albertson, ShiQi Zhang. 1566-1568 [doi]
- Multi-unit Double Auctions: Equilibrium Analysis and Bidding Strategy using DDPG in Smart-gridsSanjay Chandlekar, Easwar Subramanian, Sanjay P. Bhat, Praveen Paruchuri, Sujit Gujar. 1569-1571 [doi]
- Multi-agent Covering Option Discovery through Kronecker Product of Factor GraphsJiayu Chen, Jingdi Chen, Tian Lan, Vaneet Aggarwal. 1572-1574 [doi]
- Priced GerrymanderingPalash Dey. 1575-1577 [doi]
- Behavior Exploration and Team Balancing for Heterogeneous Multiagent CoordinationGaurav Dixit, Kagan Tumer. 1578-1579 [doi]
- Multi-Agent Adversarial Attacks for Multi-Channel CommunicationsJuncheng Dong, Suya Wu, Mohammadreza Soltani, Vahid Tarokh. 1580-1582 [doi]
- Rawlsian Fairness in Online Bipartite Matching: Two-sided, Group, and IndividualSeyed A. Esmaeili, Sharmila Duppala, Vedant Nanda, Aravind Srinivasan, John P. Dickerson. 1583-1585 [doi]
- Approaching the Overbidding Puzzle in All-Pay Auctions: Explaining Human Behavior through Bayesian Optimization and Equilibrium LearningMarkus Ewert, Stefan Heidekrüger, Martin Bichler. 1586-1588 [doi]
- Safety Shields, an Automated Failure Handling Mechanism for BDI AgentsAngelo Ferrando 0001, Rafael C. Cardoso. 1589-1591 [doi]
- Beyond Uninformed Search: Improving Branch-and-bound Based Acceleration Algorithms for Belief Propagation via Heuristic StrategiesJunsong Gao, Ziyu Chen, Dingding Chen, Wenxin Zhang. 1592-1594 [doi]
- Stable Matching GamesFelipe Garrido-Lucero, Rida Laraki. 1595-1597 [doi]
- An Anytime Heuristic Algorithm for Allocating Many Teams to Many TasksAthina Georgara, Juan A. Rodríguez-Aguilar, Carles Sierra, Ornella Mich, Raman Kazhamiakin, Alessio Palmero Aprosio, Jean-Christophe R. Pazzaglia. 1598-1600 [doi]
- Influencing Emergent Self-Assembled Structures in Robotic Collectives Through Traffic ControlEverardo Gonzalez, Lucie Houel, Radhika Nagpal, Melinda J. D. Malley. 1601-1603 [doi]
- Minimizing Robot Navigation Graph for Position-Based Predictability by HumansSriram Gopalakrishnan, Subbarao Kambhampati. 1604-1606 [doi]
- A Graph Neural Network Reasoner for Game Description LanguageAlvaro Gunawan, Ji Ruan, Xiaowei Huang 0001. 1607-1609 [doi]
- Adaptive Aggregation Weight Assignment for Federated Learning: A Deep Reinforcement Learning ApproachEnwei Guo, Xiumin Wang, Weiwei Wu 0001. 1610-1612 [doi]
- Proof-of-Work as a Stigmergic Consensus AlgorithmÖnder Gürcan. 1613-1615 [doi]
- Capacitated Network Design Games on a Generalized Fair Allocation Model. Tesshu Hanaka, Toshiyuki Hirose, Hirotaka Ono. 1616-1617 [doi]
- Multi-agent Task Allocation for Fruit Picker Team Formation. Helen Harman, Elizabeth I. Sklar. 1618-1620 [doi]
- Decision-Theoretic Planning for the Expected Scalarised Returns. Conor F. Hayes, Diederik M. Roijers, Enda Howley, Patrick Mannion. 1621-1623 [doi]
- Implementation of Actual Data for Artificial Market Simulation. Masanori Hirano, Kiyoshi Izumi, Hiroki Sakaji. 1624-1626 [doi]
- Intelligent Communication over Realistic Wireless Networks in Multi-Agent Cooperative Games. Diyi Hu, Chi Zhang, Viktor K. Prasanna, Bhaskar Krishnamachari. 1627-1629 [doi]
- Multiagent Q-learning with Sub-Team Coordination. Wenhan Huang, Kai Li, Kun Shao, Tianze Zhou, Jun Luo, Dongge Wang, Hangyu Mao, Jianye Hao, Jun Wang, Xiaotie Deng. 1630-1632 [doi]
- Guaranteeing Half-Maximin Shares Under Cardinality Constraints. Halvard Hummel, Magnus Lie Hetland. 1633-1635 [doi]
- Argumentative Forecasting. Benjamin Irwin, Antonio Rago, Francesca Toni. 1636-1638 [doi]
- Data-driven Agent-based Models for Optimal Evacuation of Large Metropolitan Areas for Improved Disaster Planning. Kazi Ashik Islam, Madhav Marathe, Henning S. Mortveit, Samarth Swarup, Anil Vullikanti. 1639-1641 [doi]
- Near-Optimal Reviewer Splitting in Two-Phase Paper Reviewing and Conference Experiment Design. Steven Jecmen, Hanrui Zhang, Ryan Liu, Fei Fang, Vincent Conitzer, Nihar B. Shah. 1642-1644 [doi]
- Learning to Advise and Learning from Advice in Cooperative Multiagent Reinforcement Learning. Yue Jin, Shuangqing Wei, Jian Yuan, Xudong Zhang. 1645-1647 [doi]
- REFORM: Reputation Based Fair and Temporal Reward Framework for Crowdsourcing. Samhita Kanaparthy, Sankarshan Damle, Sujit Gujar. 1648-1650 [doi]
- Forgiving Debt in Financial Network Games. Panagiotis Kanellopoulos, Maria Kyropoulou, Hao Zhou. 1651-1653 [doi]
- How to Train Your Agent: Active Learning from Human Preferences and Justifications in Safety-critical Environments. Ilias Kazantzidis, Timothy J. Norman, Yali Du, Christopher T. Freeman. 1654-1656 [doi]
- Popularity and Strict Popularity in Altruistic Hedonic Games and Minimum-Based Altruistic Hedonic Games. Anna Maria Kerkmann, Jörg Rothe. 1657-1659 [doi]
- Minimizing Expected Intrusion Detection Time in Adversarial Patrolling. David Klaska, Antonín Kucera, Vít Musil, Vojtech Rehák. 1660-1662 [doi]
- Learning Generalizable Multi-Lane Mixed-Autonomy Behaviors in Single Lane Representations of Traffic. Abdul Rahman Kreidieh, YiBo Zhao, Samyak Parajuli, Alexandre M. Bayen. 1663-1665 [doi]
- Measuring Resilience in Collective Robotic Algorithms. Jennifer Leaf, Julie A. Adams. 1666-1668 [doi]
- Automated Story Sifting Using Story Arcs. Wilkins Leong, Julie Porteous, John Thangarajah. 1669-1671 [doi]
- Theoretical Models and Preliminary Results for Contact Tracing and Isolation. George Z. Li, Arash Haddadan, Ann Li, Madhav V. Marathe, Aravind Srinivasan, Anil Vullikanti, Zeyu Zhao. 1672-1674 [doi]
- Improving Generalization with Cross-State Behavior Matching in Deep Reinforcement Learning. Guan-Ting Liu, Guan-Yu Lin, Pu-Jen Cheng. 1675-1677 [doi]
- (Almost) Envy-Free, Proportional and Efficient Allocations of an Indivisible Mixed Manna. Vasilis Livanos, Ruta Mehta, Aniket Murhekar. 1678-1680 [doi]
- Modeling Affective Reaction in Multi-agent Systems. Jieting Luo, Mehdi Dastani. 1681-1683 [doi]
- Multimodal Reinforcement Learning with Effective State Representation Learning. Jinming Ma, Yingfeng Chen, Feng Wu, Xianpeng Ji, Yu Ding. 1684-1686 [doi]
- Group-level Fairness Maximization in Online Bipartite Matching. Will Ma, Pan Xu, Yifan Xu. 1687-1689 [doi]
- A Simulation Based Online Planning Algorithm for Multi-Agent Cooperative Environments. Rafid Ameer Mahmud, Fahim Faisal, Saaduddin Mahmud, Md. Mosaddek Khan. 1690-1692 [doi]
- Parameterized Algorithms for Kidney Exchange. Arnab Maiti, Palash Dey. 1693-1695 [doi]
- Active Generation of Logical Rules for POMCP Shielding. Giulio Mazzi, Alberto Castellini, Alessandro Farinelli. 1696-1698 [doi]
- Reinforcement Learning for Traffic Signal Control Optimization: A Concept for Real-World Implementation. Henri Meess, Jeremias Gerner, Daniel Hein, Stefanie Schmidtner, Gordon Elger. 1699-1701 [doi]
- Towards Assume-Guarantee Verification of Strategic Ability. Lukasz Mikulski, Wojciech Jamroga, Damian Kurpiewski. 1702-1704 [doi]
- On Achieving Leximin Fairness and Stability in Many-to-One Matchings. Shivika Narang, Arpita Biswas, Yadati Narahari. 1705-1707 [doi]
- Towards an Enthymeme-Based Communication Framework. Alison R. Panisson, Peter McBurney, Rafael H. Bordini. 1708-1710 [doi]
- I Will Have Order! Optimizing Orders for Fair Reviewer Assignment. Justin Payan, Yair Zick. 1711-1713 [doi]
- Concise Representations and Complexity of Combinatorial Assignment Problems. Fredrik Präntare, George Osipov, Leif Eriksson. 1714-1716 [doi]
- A Stit Logic of Responsibility. Aldo Iván Ramírez Abarca, Jan M. Broersen. 1717-1719 [doi]
- Behavior vs Appearance: What Type of Adaptations are More Socially Motivated? Diogo Rato, Marta Couto, Rui Prada. 1720-1722 [doi]
- Agent-Time Attention for Sparse Rewards Multi-Agent Reinforcement Learning. Jennifer She, Jayesh K. Gupta, Mykel J. Kochenderfer. 1723-1725 [doi]
- Environment Guided Interactive Reinforcement Learning: Learning from Binary Feedback in High-Dimensional Robot Task Environments. Isaac S. Sheidlower, Elaine Schaertl Short, Allison Moore. 1726-1728 [doi]
- Pre-trained Language Models as Prior Knowledge for Playing Text-based Games. Ishika Singh, Gargi Singh, Ashutosh Modi. 1729-1731 [doi]
- Resource-Aware Adaptation of Heterogeneous Strategies for Coalition Formation. Anusha Srikanthan, Harish Ravichandar. 1732-1734 [doi]
- Speeding up Deep Reinforcement Learning through Influence-Augmented Local Simulators. Miguel Suau, Jinke He, Matthijs T. J. Spaan, Frans A. Oliehoek. 1735-1737 [doi]
- Maximizing Resource Allocation Likelihood with Minimum Compromise. Yohai Trabelsi, Abhijin Adiga, Sarit Kraus, S. S. Ravi. 1738-1740 [doi]
- Max-sum with Quadtrees for Continuous DCOPs with Application to Lane-Free Autonomous Driving. Dimitrios Troullinos, Georgios Chalkiadakis, Vasilis Samoladas, Markos Papageorgiou. 1741-1743 [doi]
- Autonomous Flight Arcade Challenge: Single- and Multi-Agent Learning Environments for Aerial Vehicles. Paul Tylkin, Tsun-Hsuan Wang, Tim Seyde, Kyle Palko, Ross Allen, Alexander Amini, Daniela Rus. 1744-1746 [doi]
- Non-Parametric Neuro-Adaptive Coordination of Multi-Agent Systems. Christos K. Verginis, Zhe Xu, Ufuk Topcu. 1747-1749 [doi]
- Moving Target Defense under Uncertainty for Web Applications. Vignesh Viswanathan, Megha Bose, Praveen Paruchuri. 1750-1752 [doi]
- The Ethical Acceptability of Artificial Social Agents. Ravi Vythilingam, Deborah Richards, Paul Formosa. 1753-1755 [doi]
- Near On-Policy Experience Sampling in Multi-Objective Reinforcement Learning. Shang Wang, Mathieu Reymond, Athirai A. Irissappane, Diederik M. Roijers. 1756-1758 [doi]
- On Agent Incentives to Manipulate Human Feedback in Multi-Agent Reward Learning Scenarios. Francis Rhys Ward, Francesca Toni, Francesco Belardinelli. 1759-1761 [doi]
- How to Train PointGoal Navigation Agents on a (Sample and Compute) Budget. Erik Wijmans, Irfan Essa, Dhruv Batra. 1762-1764 [doi]
- Performance of Deep Reinforcement Learning for High Frequency Market Making on Actual Tick Data. Ziyi Xu, Xue Cheng, Yangbo He. 1765-1767 [doi]
- On the Complexity of Controlling Amendment and Successive Winners. Yongjie Yang. 1768-1770 [doi]
- On-the-fly Strategy Adaptation for ad-hoc Agent Coordination. Jaleh Zand, Jack Parker-Holder, Stephen J. Roberts. 1771-1773 [doi]
- Off-Policy Correction For Multi-Agent Reinforcement Learning. Michal Zawalski, Blazej Osinski, Henryk Michalewski, Piotr Milos. 1774-1776 [doi]
- An Agent-based Model for Emergency Evacuation from a Multi-floor Building. Xiaoyan Zhang, Graham Coates, Sarah Dunn, Jean Hall. 1777-1779 [doi]
- Irrational Behaviour and Globalisation. Yuanzi Zhu, Carmine Ventre. 1780-1782 [doi]
- Robots Teaching Humans: A New Communication Paradigm via Reverse Teleoperation. Rika Antonova, Ankur Handa. 1783-1787 [doi]
- Social Choice Around the Block: On the Computational Social Choice of Blockchain. Davide Grossi. 1788-1793 [doi]
- Augmented Democratic Deliberation: Can Conversational Agents Boost Deliberation in Social Media? Rafik Hadfi, Takayuki Ito. 1794-1798 [doi]
- Towards Anomaly Detection in Reinforcement Learning. Robert Müller, Steffen Illium, Thomy Phan, Tom Haider, Claudia Linnhoff-Popien. 1799-1803 [doi]
- The Holy Grail of Multi-Robot Planning: Learning to Generate Online-Scalable Solutions from Offline-Optimal Experts. Amanda Prorok, Jan Blumenkamp, Qingbiao Li, Ryan Kortvelesy, Zhe Liu, Ethan Stump. 1804-1808 [doi]
- "Go to the Children": Rethinking Intelligent Agent Design and Programming in a Developmental Learning Perspective. Alessandro Ricci. 1809-1813 [doi]
- Foundations for Grassroots Democratic Metaverse. Ehud Shapiro, Nimrod Talmon. 1814-1818 [doi]
- Agent-Assisted Life-Long Education and Learning. Tomas Trescak, Roger Lera-Leri, Filippo Bistaffa, Juan A. Rodríguez-Aguilar. 1819-1823 [doi]
- Macro Ethics for Governing Equitable Sociotechnical Systems. Jessica Woodgate, Nirav Ajmeri. 1824-1828 [doi]
- Exploration and Communication for Partially Observable Collaborative Multi-Agent Reinforcement Learning. Raphaël Avalos. 1829-1832 [doi]
- Manipulation of Machine Learning Algorithms. Nicholas Bishop. 1833-1835 [doi]
- Collaborative Training of Multiple Autonomous Agents. Filippos Christianos. 1836-1838 [doi]
- Towards Multi-Agent Interactive Reinforcement Learning for Opportunistic Software Composition in Ambient Environments. Kevin Delcourt. 1839-1840 [doi]
- Online Learning against Strategic Adversary. Le Cong Dinh. 1841-1842 [doi]
- Non-Cooperative Multi-Robot Planning Under Shared Resources. Anna Gautier. 1843-1845 [doi]
- Incentive Design for Equitable Resource Allocation: Artificial Currencies and Allocation Constraints. Devansh Jalota. 1846-1848 [doi]
- Model-free and Model-based Reinforcement Learning, the Intersection of Learning and Planning. Piotr Januszewski. 1849-1851 [doi]
- Data-driven Approaches for Formal Synthesis of Dynamical Systems. Milad Kazemi. 1852-1853 [doi]
- Budget Feasible Mechanisms in Auction Markets: Truthfulness, Diffusion and Fairness. Xiang Liu. 1854-1856 [doi]
- Fair Allocation Problems in Reviewer Assignment. Justin Payan. 1857-1859 [doi]
- Designing Mechanisms for Participatory Budgeting. Simon Rey. 1860-1862 [doi]
- Task Generalisation in Multi-Agent Reinforcement Learning. Lukas Schäfer. 1863-1865 [doi]
- Empathetic Reinforcement Learning Agents. Manisha Senadeera. 1866-1868 [doi]
- Embodied Team Intelligence in Multi-Robot Systems. Esmaeil Seraj. 1869-1871 [doi]
- The Reputation Lag Attack. Sean Sirur. 1872-1874 [doi]
- Using Multi-objective Optimization to Generate Timely Responsive BDI Agents. Marcio Fernando Stabile Jr. 1875-1877 [doi]
- Engineering Normative and Cognitive Agents with Emotions and Values. Sz-Ting Tzeng. 1878-1880 [doi]
- The Coaching Scenario: Recommender Systems with a Long Term Goal. A Case Study in Changing Dietary Habits. Jules Vandeputte. 1881-1883 [doi]
- Transferable Environment Poisoning: Training-time Attack on Reinforcement Learner with Limited Prior Knowledge. Hang Xu. 1884-1886 [doi]
- Chameleon - A Framework for Developing Conversational Agents for Medical Training Purposes. Al-Hussein Abutaleb, Bruno Yun. 1887-1889 [doi]
- An Agent-Based Simulator for Maritime Transport Decarbonisation. Jan Bürmann, Dimitar Georgiev, Enrico H. Gerding, Lewis Hill, Obaid Malik, Alexandru Pop, Matthew Pun, Sarvapali D. Ramchurn, Elliot Salisbury, Ivan Stojanovic. 1890-1892 [doi]
- AdLeap-MAS: An Open-source Multi-Agent Simulator for Ad-hoc Reasoning. Matheus Aparecido do Carmo Alves, Amokh Varma, Yehia Elkhatib, Leandro Soriano Marcolino. 1893-1895 [doi]
- KnowLedger - A Multi-Agent System Blockchain for Smart Cities Data. Bruno Fernandes, André Diogo, Fábio Silva, José Neves, Cesar Analide. 1896-1898 [doi]
- A Multi-Agent System for Automated Machine Learning. Bruno Fernandes, Paulo Novais, Cesar Analide. 1899-1901 [doi]
- Demonstrating the Rapid Integration & Development Environment (RIDE): Embodied Conversational Agent (ECA) and Multiagent Capabilities. Arno Hartholt, Ed Fast, Andrew Leeds, Kevin Kim, Andrew Gordon, Kyle McCullough, Volkan Ustun, Sharon Mozgai. 1902-1904 [doi]
- SIERRA: A Modular Framework for Research Automation. John Harwell, London Lowmanstone, Maria L. Gini. 1905-1907 [doi]
- Cellulan World: Interactive Platform to Learn Swarm Behaviors. Hala Khodr, Barbara Bruno, Aditi Kothiyal, Pierre Dillenbourg. 1908-1910 [doi]
- Ev-IDID: Enhancing Solutions to Interactive Dynamic Influence Diagrams through Evolutionary Algorithms. Biyang Ma, Yinghui Pan, Yifeng Zeng, Zhong Ming. 1911-1913 [doi]
- fT: Learning Bayesian Network Structures from Text in Autonomous Typhoon Response Systems. Yinghui Pan, Junhan Chen, Yifeng Zeng, Zhangrui Yao, Qianwen Li, Biyang Ma, Yi Ji, Zhong Ming. 1914-1916 [doi]
- Reaching Consensus Under a Deadline. Marina Bannikova, Lihi Dery, Svetlana Obraztsova, Zinovi Rabinovich, Jeffrey S. Rosenschein. 1920-1922 [doi]
- Goal-Driven Active Learning. Nicolas Bougie, Ryutaro Ichise. 1923-1925 [doi]
- Combining Quantitative and Qualitative Reasoning in Concurrent Multi-player Games. Nils Bulling, Valentin Goranko. 1926-1928 [doi]
- Voting with Random Classifiers (VORACE): Theoretical and Experimental Analysis. Cristina Cornelio, Michele Donini, Andrea Loreggia, Maria Silvia Pini, Francesca Rossi. 1929-1931 [doi]
- Enabling BDI Group Plans with Coordination Middleware: Semantics and Implementation. Stephen Cranefield. 1932-1934 [doi]
- GDL as a Unifying Domain Description Language for Declarative Automated Negotiation. Dave De Jonge, Dongmo Zhang. 1935-1937 [doi]
- Designing Efficient and Fair Mechanisms for Multi-Type Resource Allocation. Xiaoxi Guo, Sujoy Sikdar, Haibin Wang, Lirong Xia, Yongzhi Cao, Hanpin Wang. 1938-1940 [doi]
- Automatic Calibration Framework of Agent-based Models for Dynamic and Heterogeneous Parameters. Dongjun Kim, Tae-Sub Yun, Il-Chul Moon, Jang Won Bae. 1941-1943 [doi]
- Trust Repair in Human-Agent Teams: The Effectiveness of Explanations and Expressing Regret. E. S. Kox, J. H. Kerstholt, T. F. Hueting, P. W. de Vries. 1944-1946 [doi]
- Concurrent Negotiations with Global Utility Functions. Yasser Mohammad, Shinji Nakadai. 1947-1949 [doi]
- Towards Addressing Dynamic Multi-agent Task Allocation in Law Enforcement. Itshak Tkach, Sofia Amador Nelke. 1950-1951 [doi]