Abstract is missing.
- Trustworthy Reinforcement Learning: Opportunities and ChallengesAnn Nowé. 1 [doi]
- Agents and Humans: Trajectories and PerspectivesLiz Sonenberg. 2 [doi]
- 30 Years of Engineering Multi-Agent Systems: What and Why?Michael Winikoff. 3 [doi]
- Team Performance and User Satisfaction in Mixed Human-Agent TeamsSami Abuhaimed, Sandip Sen. 4-12 [doi]
- Value-based Resource Matching with Fairness Criteria: Application to Agricultural Water TradingAbhijin Adiga, Yohai Trabelsi, Tanvir Ferdousi, Madhav V. Marathe, S. S. Ravi, Samarth Swarup, Anil Kumar S. Vullikanti, Mandy L. Wilson, Sarit Kraus, Reetwika Basu, Supriya Savalkar, Matthew Yourek, Michael Brady, Kirti Rajagopalan, Jonathan Yoder. 13-21 [doi]
- Can Poverty Be Reduced by Acting on Discrimination? An Agent-based Model for Policy MakingAlba Aguilera, Nieves Montes, Georgina Curto, Carles Sierra, Nardine Osman 0001. 22-30 [doi]
- Provably Learning Nash Policies in Constrained Markov Potential GamesPragnya Alatur, Giorgia Ramponi, Niao He, Andreas Krause 0001. 31-39 [doi]
- Beliefs, Shocks, and the Emergence of Roles in Asset Markets: An Agent-Based Modeling ApproachEvan Albers, Mohammad T. Irfan, Matthew J. Bosch. 40-48 [doi]
- On the Potential and Limitations of Proxy Voting: Delegation with Incomplete VotesGeorgios Amanatidis, Aris Filos-Ratsikas, Philip Lazos, Evangelos Markakis, Georgios Papasotiropoulos. 49-57 [doi]
- Offline Risk-sensitive RL with Partial Observability to Enhance Performance in Human-Robot TeamingGiorgio Angelotti, Caroline P. C. Chanel, Adam Henrique Moreira Pinto, Christophe Lounis, Corentin Chauffaut, Nicolas Drougard. 58-67 [doi]
- Collective Robustness of Heterogeneous Decision-Makers Against Stubborn IndividualsNemanja Antonic, Raina Zakir, Marco Dorigo, Andreagiovanni Reina. 68-77 [doi]
- Willy Wonka MechanismsThomas Archbold, Bart de Keijzer, Carmine Ventre. 78-86 [doi]
- Extended Ranking Mechanisms for the m-Capacitated Facility Location Problem in Bayesian Mechanism DesignGennaro Auricchio, Jie Zhang, Mengxiao Zhang. 87-95 [doi]
- Stability of Weighted Majority Voting under Estimated WeightsShaojie Bai, Dongxia Wang 0002, Tim Muller, Peng Cheng, Jiming Chen 0001. 96-104 [doi]
- Impact of Tie-Breaking on the Manipulability of ElectionsJames P. Bailey, Craig A. Tovey. 105-113 [doi]
- Minimax Exploiter: A Data Efficient Approach for Competitive Self-PlayDaniel Bairamian, Philippe Marcotte, Joshua Romoff, Gabriel Robert, Derek Nowrouzezahrai. 114-122 [doi]
- Strategic Reasoning under Capacity-constrained AgentsGabriel Ballot, Vadim Malvone, Jean Leneutre, Youssef Laarouchi. 123-131 [doi]
- Trust in Shapley: A Cooperative Quest for Global Trust in P2P NetworkArti Bandhana, Tomás Kroupa, Sebastian García. 132-140 [doi]
- A Model-Based Solution to the Offline Multi-Agent Reinforcement Learning Coordination ProblemPaul Barde, Jakob Foerster, Derek Nowrouzezahrai, Amy Zhang. 141-150 [doi]
- Parameterized Guarantees for Almost Envy-Free AllocationsSiddharth Barman, Debajyoti Kar, Shraddha Pathak. 151-159 [doi]
- Verification of Stochastic Multi-Agent Systems with Forgetful StrategiesFrancesco Belardinelli, Wojtek Jamroga, Munyque Mittelmann, Aniello Murano. 160-169 [doi]
- Combining Voting and Abstract Argumentation to Understand Online DiscussionsMichael Bernreiter, Jan Maly 0001, Oliviero Nardi, Stefan Woltran. 170-179 [doi]
- Monitoring Second-Order HyperpropertiesRaven Beutner, Bernd Finkbeiner, Hadar Frenkel, Niklas Metzger 0001. 180-188 [doi]
- Hyper Strategy LogicRaven Beutner, Bernd Finkbeiner. 189-197 [doi]
- Optimal Referral Auction DesignRangeet Bhattacharyya, Parvik Dave, Palash Dey, Swaprava Nath. 198-206 [doi]
- On Green Sustainability of Resource Selection Games with Equitable Cost-SharingVittorio Bilò, Michele Flammini, Gianpiero Monaco, Luca Moscardelli, Cosimo Vinci. 207-215 [doi]
- An Online Learning Theory of BrokerageNatasa Bolic, Tommaso Cesari, Roberto Colomboni. 216-224 [doi]
- Robust Popular MatchingsMartin Bullinger, Rohith Reddy Gangam, Parnian Shahkar. 225-233 [doi]
- HELP! Providing Proactive Support in the Presence of Knowledge AsymmetryTurgay Caglar, Sarath Sreedharan. 234-243 [doi]
- On the Complexity of Pareto-Optimal and Envy-Free LotteriesIoannis Caragiannis, Kristoffer Arnsfelt Hansen, Nidhi Rathi. 244-252 [doi]
- A Distributed Approach for Fault Detection in Swarms of RobotsAlessandro Carminati, Davide Azzalini, Simone Vantini, Francesco Amigoni. 253-261 [doi]
- Finding Effective Ad Allocations: How to Exploit User HistoryMatteo Castiglioni, Alberto Latino, Alberto Marchesi 0001, Giulia Romano, Nicola Gatti 0001, Chokha Palayamkottai. 262-270 [doi]
- Obstruction Alternating-time Temporal Logic: A Strategic Logic to Reason about Dynamic ModelsDavide Catta, Jean Leneutre, Vadim Malvone, Aniello Murano. 271-280 [doi]
- Aligning Credit for Multi-Agent Cooperation via Model-based Counterfactual ImaginationJiajun Chai, Yuqian Fu, Dongbin Zhao, Yuanheng Zhu. 281-289 [doi]
- Cooperative Electric Vehicles PlanningJaël Champagne Gareau, Marc-André Lavoie, Guillaume Gosset, Éric Beaudry. 290-298 [doi]
- Think Global, Act Local - Agent-Based Inline Recovery for Airline OperationsYashovardhan S. Chati, Ramasubramanian Suriyanarayanan, Arunchandar Vasan 0001. 299-307 [doi]
- Deep Anomaly Detection via Active Anomaly SearchChao Chen, Dawei Wang, Feng Mao, Jiacheng Xu 0003, Zongzhang Zhang, Yang Yu 0001. 308-316 [doi]
- Foresight Distribution Adjustment for Off-policy Reinforcement LearningRuifeng Chen 0003, Xu-Hui Liu, Tian-Shuo Liu, Shengyi Jiang, Feng Xu, Yang Yu 0001. 317-325 [doi]
- Adaptive Primal-Dual Method for Safe Reinforcement LearningWeiqin Chen 0003, James Onyejizu, Long Vu, Lan Hoang, Dharmashankar Subramanian, Koushik Kar, Sandipan Mishra, Santiago Paternain. 326-334 [doi]
- Boosting Continuous Control with Consistency PolicyYuhui Chen, Haoran Li, Dongbin Zhao. 335-344 [doi]
- ODEs Learn to Walk: ODE-Net based Data-Driven Modeling for Crowd DynamicsChen Cheng, Jinglai Li. 345-353 [doi]
- Fast and Slow Goal RecognitionMattia Chiari, Alfonso Emilio Gerevini, Andrea Loreggia, Luca Putelli, Ivan Serina. 354-362 [doi]
- Learning a Social Network by Influencing OpinionsDmitry Chistikov 0001, Luisa Estrada, Mike Paterson, Paolo Turrini. 363-371 [doi]
- Fairness and Efficiency Trade-off in Two-sided MatchingSung Ho Cho, Kei Kimura, Kiki Liu, Kwei-guu Liu, Zhengjie Liu, Zhaohong Sun 0001, Kentaro Yahiro, Makoto Yokoo. 372-380 [doi]
- Private Agent-Based ModelingAyush Chopra, Arnau Quera-Bofarull, Nurullah Giray Kuru, Michael J. Wooldridge, Ramesh Raskar. 381-390 [doi]
- flame: A Framework for Learning in Agent-based ModElsAyush Chopra, Jayakumar Subramanian, Balaji Krishnamurthy, Ramesh Raskar. 391-399 [doi]
- Multi-Robot Allocation of Assistance from a Shared Uncertain OperatorClarissa Costen, Anna Gautier, Nick Hawes, Bruno Lacerda. 400-408 [doi]
- A Simple 1.5-approximation Algorithm for a Wide Range of Maximum Size Stable Matching ProblemsGergely Csáji. 409-415 [doi]
- Designing Redistribution Mechanisms for Reducing Transaction Fees in BlockchainsSankarshan Damle, Manisha Padala, Sujit Gujar. 416-424 [doi]
- The Parameterized Complexity of Welfare Guarantees in Schelling SegregationArgyrios Deligkas, Eduard Eiben, Tiger-Lily Goldsmith. 425-433 [doi]
- Toward a Quality Model for Hybrid Intelligence TeamsDavide Dell'Anna, Pradeep K. Murukannaiah, Bernd Dudzik, Davide Grossi, Catholijn M. Jonker, Catharine Oertel, Pinar Yolum. 434-443 [doi]
- Informativeness of Reward Functions in Reinforcement LearningRati Devidze, Parameswaran Kamalaruban, Adish Singla. 444-452 [doi]
- Continual Optimistic Initialization for Value-Based Reinforcement LearningSheelabhadra Dey, James Ault, Guni Sharon. 453-462 [doi]
- Gerrymandering Planar GraphsJack Dippel, Max Dupré la Tour, April Niu, Sanjukta Roy, Adrian Vetta. 463-471 [doi]
- It Is Among Us: Identifying Adversaries in Ad-hoc Domains using Q-valued Bayesian EstimationsMatheus Aparecido do Carmo Alves, Amokh Varma, Yehia Elkhatib, Leandro Soriano Marcolino. 472-480 [doi]
- Dynamic Epistemic Logic of Resource Bounded Information Mining AgentsVitaliy Dolgorukov, Rustam Galimullin, Maksim Gladyshev. 481-489 [doi]
- Population Synthesis as Scenario Generation for Simulation-based Planning under UncertaintyJoel Dyer, Arnau Quera-Bofarull, Nicholas Bishop, J. Doyne Farmer, Anisoara Calinescu, Michael J. Wooldridge. 490-498 [doi]
- Computational Aspects of DistortionSoroush Ebadian, Aris Filos-Ratsikas, Mohamad Latifian, Nisarg Shah 0001. 499-507 [doi]
- Multi-Agent Reinforcement Learning for Assessing False-Data Injection Attacks on Transportation NetworksTaha Eghtesad, Sirui Li, Yevgeniy Vorobeychik, Aron Laszka. 508-515 [doi]
- Reinforcement Learning in the Wild with Maximum Likelihood-based Model TransferHannes Eriksson, Tommy Tram, Debabrota Basu, Mina Alibeigi, Christos Dimitrakakis. 516-524 [doi]
- Holonic Learning: A Flexible Agent-based Distributed Machine Learning FrameworkAhmad Esmaeili, Zahra Ghorrati, Eric T. Matson. 525-533 [doi]
- Learning and Calibrating Heterogeneous Bounded Rational Market Behaviour with Multi-agent Reinforcement LearningBenjamin Patrick Evans, Sumitra Ganesh. 534-543 [doi]
- High-Level, Collaborative Task Planning Grammar and Execution for Heterogeneous AgentsAmy Fang, Hadas Kress-Gazit. 544-552 [doi]
- Facility Location Games with Fractional Preferences and Limited ResourcesJiazhu Fang, Wenjing Liu. 553-561 [doi]
- Generalized Strategy Synthesis of Infinite-state Impartial Combinatorial Games via Exact Binary ClassificationLiangda Fang, Meihong Yang, Dingliang Cheng, Yunlai Hao, Quanlong Guan, Liping Xiong. 562-570 [doi]
- Probabilistic Multi-agent Only-BelievingQihui Feng, Gerhard Lakemeyer. 571-579 [doi]
- Preventing Deadlocks for Multi-Agent Pickup and Delivery in Dynamic EnvironmentsBenedetta Flammini, Davide Azzalini, Francesco Amigoni. 580-588 [doi]
- Potential-Based Reward Shaping for Intrinsic MotivationGrant C. Forbes, Nitish Gupta, Leonardo Villalobos-Arias, Colin M. Potts, Arnav Jhala, David L. Roberts 0001. 589-597 [doi]
- Learning Complex Teamwork Tasks using a Given Sub-task DecompositionElliot Fosong, Arrasy Rahman, Ignacio Carlucho, Stefano V. Albrecht. 598-606 [doi]
- BrainSLAM: SLAM on Neural Population Activity DataKipp McAdam Freud, Nathan F. Lepora, Matt W. Jones, Cian O'Donnell. 607-613 [doi]
- From Market Saturation to Social Reinforcement: Understanding the Impact of Non-Linearity in Information Diffusion ModelsTobias Friedrich 0001, Andreas Göbel 0001, Nicolas Klodt, Martin S. Krejca, Marcus Pappik. 614-622 [doi]
- Analysing the Sample Complexity of Opponent ShapingKitty Fung, Qizhen Zhang 0002, Chris Lu 0001, Jia Wan, Timon Willi, Jakob N. Foerster. 623-631 [doi]
- RACCER: Towards Reachable and Certain Counterfactual Explanations for Reinforcement LearningJasmina Gajcin, Ivana Dusparic. 632-640 [doi]
- Surge Routing: Event-informed Multiagent Reinforcement Learning for Autonomous RideshareDaniel Garces, Stephanie Gil. 641-650 [doi]
- Incentives for Early Arrival in Cooperative GamesYaoxin Ge, Yao Zhang, Dengji Zhao, Zhihao Gavin Tang, Hu Fu 0001, Pinyan Lu. 651-659 [doi]
- Deep Reinforcement Learning with Coalition Action Selection for Online Combinatorial Resource Allocation with Arbitrary Action SpaceZemuy Tesfay Gebrekidan, Sebastian Stein 0001, Timothy J. Norman. 660-668 [doi]
- Approximating the Core via Iterative Coalition SamplingIan Gemp, Marc Lanctot, Luke Marris, Yiran Mao, Edgar A. Duéñez-Guzmán, Sarah Perrin, Andras Gyorgy, Romuald Elie, Georgios Piliouras, Michael Kaisers, Daniel Hennes, Kalesha Bullard, Kate Larson, Yoram Bachrach. 669-678 [doi]
- Modelling the Rise and Fall of Two-sided MarketsFarnoud Ghasemi, Rafal Kucharski. 679-687 [doi]
- NovelGym: A Flexible Ecosystem for Hybrid Planning and Learning Agents Designed for Open WorldsShivam Goel, Yichen Wei, Panagiotis Lymperopoulos, Klára Churá, Matthias Scheutz, Jivko Sinapov. 688-696 [doi]
- Capacity Modification in the Stable Matching ProblemSalil Gokhale, Samarth Singla, Shivika Narang, Rohit Vaish. 697-705 [doi]
- Nash Stability in Hedonic Skill GamesLaurent Gourvès, Gianpiero Monaco. 706-714 [doi]
- Symbolic Computation of Sequential EquilibriaMoritz Graf, Thorsten Engesser, Bernhard Nebel. 715-723 [doi]
- Reinforcement Learning with Ensemble Model Predictive Safety CertificationSven Gronauer, Tom Haider, Felippe Schmoeller da Roza, Klaus Diepold. 724-732 [doi]
- MaDi: Learning to Mask Distractions for Generalization in Visual Deep Reinforcement LearningBram Grooten, Tristan Tomilin, Gautham Vasan, Matthew E. Taylor, A. Rupam Mahmood, Meng Fang, Mykola Pechenizkiy, Decebal Constantin Mocanu. 733-742 [doi]
- Cost-aware Offline Safe Meta Reinforcement Learning with Robust In-Distribution Online Task AdaptationCong Guan, Ruiqi Xue, Ziqian Zhang, Lihe Li, Yi-Chen Li, Lei Yuan, Yang Yu 0001. 743-751 [doi]
- Cooperation and Coordination in Heterogeneous Populations with Interaction DiversityHao Guo, Zhen Wang 0004, Junliang Xing, Pin Tao, Yuanchun Shi. 752-760 [doi]
- First 100 days of Pandemic: An Interplay of Pharmaceutical, Behavioral and Digital Interventions - A Study using Agent Based ModelingGauri Gupta, Ritvik Kapila, Ayush Chopra, Ramesh Raskar. 761-770 [doi]
- Causal Explanations for Sequential Decision-Making in Multi-Agent SystemsBalint Gyevnar, Cheng Wang, Christopher G. Lucas, Shay B. Cohen, Stefano V. Albrecht. 771-779 [doi]
- Weighted Proportional Allocations of Indivisible Goods and Chores: Insights via MatchingsVishwa Prakash HV, Prajakta Nimbhorkar. 780-788 [doi]
- Sample and Communication Efficient Fully Decentralized MARL Policy Evaluation via a New Approach: Local TD UpdateHairi, Zifan Zhang, Jia Liu. 789-797 [doi]
- Forecasting and Mitigating Disruptions in Public Bus Transit ServicesChaeeun Han, Jose Paolo Talusan, Daniel Freudberg, Ayan Mukhopadhyay, Abhishek Dubey, Aron Laszka. 798-806 [doi]
- Solving Two-player Games with QBF Solvers in General Game PlayingYifan He, Abdallah Saffidine, Michael Thielscher. 807-815 [doi]
- Facility Location Games with Scaling EffectsYu He, Alexander Lam, Minming Li. 816-824 [doi]
- Tight Approximations for Graphical House AllocationHadi Hosseini, Andrew McGregor 0001, Rik Sengupta, Rohit Vaish, Vignesh Viswanathan. 825-833 [doi]
- Measuring Policy Distance for Multi-Agent Reinforcement LearningTianyi Hu, Zhiqiang Pu, Xiaolin Ai, Tenghai Qiu, Jianqiang Yi. 834-842 [doi]
- Applying Opponent Modeling for Automatic Bidding in Online Repeated AuctionsYudong Hu, Congying Han, Tiande Guo, Hao Xiao. 843-851 [doi]
- Keeping the Harmony Between Neighbors: Local Fairness in Graph Fair DivisionHalvard Hummel, Ayumi Igarashi 0001. 852-860 [doi]
- On the Stability of Learning in Network Games with Many PlayersAamal Abbas Hussain, Dan Leonte, Francesco Belardinelli, Georgios Piliouras. 861-870 [doi]
- Rational Verification with Quantitative Probabilistic GoalsDavid Hyland, Julian Gutierrez 0001, Shankaranarayanan Krishna, Michael J. Wooldridge. 871-879 [doi]
- BDI Agents in Natural Language EnvironmentsAlexandre Yukio Ichida, Felipe Meneguzzi, Rafael C. Cardoso 0001. 880-888 [doi]
- A Cloud-Based Microservices Solution for Multi-Agent Traffic Control SystemsChikadibia Ihejimba, Rym Z. Wenkstern. 889-897 [doi]
- Is Limited Information Enough? An Approximate Multi-agent Coverage Control in Non-Convex Discrete EnvironmentsTatsuya Iwase, Aurélie Beynier, Nicolas Bredèche, Nicolas Maudet, Jason R. Marden. 898-906 [doi]
- Towards a Principle-based Framework for Repair Selection in Inconsistent Knowledge BasesSaïd Jabbour, Yue Ma 0009, Badran Raddaoui. 907-915 [doi]
- Unraveling the Tapestry of Deception and Personality: A Deep Dive into Multi-Issue Human-Agent Negotiation DynamicsNusrath Jahan, Johnathan Mell. 916-925 [doi]
- Playing Quantitative Games Against an Authority: On the Module Checking ProblemWojciech Jamroga, Munyque Mittelmann, Aniello Murano, Giuseppe Perelli. 926-934 [doi]
- Discovering Consistent SubelectionsLukasz Janeczko, Jérôme Lang, Grzegorz Lisowski, Stanislaw Szufa. 935-943 [doi]
- Disentangling Policy from Offline Task Representation Learning via Adversarial Data AugmentationChengxing Jia, Fuxiang Zhang, Yi-Chen Li 0001, Chenxiao Gao, Xu-Hui Liu, Lei Yuan, Zongzhang Zhang, Yang Yu 0001. 944-953 [doi]
- Recourse under Model Multiplicity via Argumentative EnsemblingJunqi Jiang, Francesco Leofante, Antonio Rago 0001, Francesca Toni. 954-963 [doi]
- Decentralized Federated Policy Gradient with Byzantine Fault-Tolerance and Provably Fast ConvergencePhilip Jordan, Florian Grötschla, Flint Xiaofeng Fan, Roger Wattenhofer. 964-972 [doi]
- Safe Model-Based Multi-Agent Mean-Field Reinforcement LearningMatej Jusup, Barna Pásztor, Tadeusz Janik, Kenan Zhang, Francesco Corman, Andreas Krause 0001, Ilija Bogunovic. 973-982 [doi]
- To Lead or to be Led: A Generalized Condorcet Jury Theorem under DependenceJonas Karge, Juliette-Michelle Burkhardt, Sebastian Rudolph, Dominik Rusovac. 983-991 [doi]
- Efficient Method for Finding Optimal Strategies in Chopstick Auctions with Uniform Objects ValuesStanislaw Kazmierowski, Marcin Dziubinski. 992-1000 [doi]
- Scaling Opponent Shaping to High Dimensional GamesAkbir Khan, Timon Willi, Newton Kwan, Andrea Tacchetti, Chris Lu 0001, Edward Grefenstette, Tim Rocktäschel, Jakob N. Foerster. 1001-1010 [doi]
- Catfished! Impacts of Strategic Misrepresentation in Online DatingOz Kilic, Alan Tsang. 1011-1019 [doi]
- Veto Core Consistent Preference AggregationAleksei Y. Kondratev, Egor Ianovski. 1020-1028 [doi]
- Fine-Grained Liquid Democracy for Cumulative BallotsMatthias Köppe, Martin Koutecký, Krzysztof Sornat, Nimrod Talmon. 1029-1037 [doi]
- Minimizing State Exploration While Searching Graphs with Unknown ObstaclesDaniel Koyfman, Shahaf S. Shperberg, Dor Atzmon, Ariel Felner. 1038-1046 [doi]
- Continuous Monte Carlo Graph SearchKalle Kujanpää, Amin Babadi, Yi Zhao, Juho Kannala, Alexander Ilin, Joni Pajarinen. 1047-1056 [doi]
- Approximating APS Under Submodular and XOS Valuations with Binary MarginalsPooja Kulkarni, Rucha Kulkarni, Ruta Mehta. 1057-1065 [doi]
- Higher Order Reasoning under Intent Uncertainty Reinforces the Hobbesian TrapOtto Kuusela, Debraj Roy. 1066-1074 [doi]
- Proportional Fairness in Obnoxious Facility LocationAlexander Lam, Haris Aziz 0001, Bo Li 0037, Fahimeh Ramezani 0002, Toby Walsh. 1075-1083 [doi]
- Beyond Surprise: Improving Exploration Through Surprise NoveltyHung Le, Kien Do, Dung Nguyen, Svetha Venkatesh. 1084-1092 [doi]
- Policy Learning for Off-Dynamics RL with Deficient SupportLinh Le Pham Van, Hung The Tran, Sunil Gupta 0001. 1093-1100 [doi]
- The Stochastic Evolutionary Dynamics of Softmax Policy Gradient in GamesChin-wing Leung, Shuyue Hu, Ho-Fung Leung. 1101-1109 [doi]
- Learning Partner Selection Rules that Sustain Cooperation in Social Dilemmas with the Option of Opting OutChin-wing Leung, Paolo Turrini. 1110-1118 [doi]
- Coalition Formation with Bounded Coalition SizeChaya Levinger, Noam Hazon, Sofia Simola, Amos Azaria. 1119-1127 [doi]
- Bounding the Incentive Ratio of the Probabilistic Serial RuleBo Li 0037, Ankang Sun, Shiji Xing. 1128-1136 [doi]
- Normalization Enhances Generalization in Visual Reinforcement LearningLu Li, Jiafei Lyu, Guozheng Ma, Zilin Wang, Zhenjie Yang, Xiu Li 0001, Zhiheng Li. 1137-1146 [doi]
- Grasper: A Generalist Pursuer for Pursuit-Evasion ProblemsPengdeng Li, Shuxin Li, Xinrun Wang, Jakub Cerný, Youzhi Zhang 0001, Stephen McAleer, Hau Chan, Bo An 0001. 1147-1155 [doi]
- Context-aware Communication for Multi-agent Reinforcement LearningXinran Li, Jun Zhang 0004. 1156-1164 [doi]
- Factor Graph Neural Network Meets Max-Sum: A Real-Time Route Planning Algorithm for Massive-Scale TripsYixuan Li, Wanyuan Wang, Weiyi Xu, Yanchen Deng, Weiwei Wu 0001. 1165-1173 [doi]
- Developing a Multi-agent and Self-adaptive Framework with Deep Reinforcement Learning for Dynamic Portfolio Risk ManagementZhenglong Li, Vincent W. L. Tam, Kwan L. Yeung. 1174-1182 [doi]
- A Complete Landscape for the Price of Envy-FreenessZihao Li, Shengxin Liu, Xinhang Lu, Biaoshuai Tao, Yichen Tao. 1183-1191 [doi]
- Episodic Reinforcement Learning with Expanded State-reward SpaceDayang Liang, Yaru Zhang, Yunlong Liu 0003. 1192-1200 [doi]
- Policy-regularized Offline Multi-objective Reinforcement LearningQian Lin, Chao Yu, Zongkai Liu, Zifan Wu. 1201-1209 [doi]
- Progression with Probabilities in the Situation Calculus: Representation and SuccinctnessDaxin Liu 0002, Vaishak Belle. 1210-1218 [doi]
- LLM-Powered Hierarchical Language Agent for Real-time Human-AI CoordinationJijia Liu, Chao Yu, Jiaxuan Gao, Yuqing Xie 0005, Qingmin Liao, Yi Wu, Yu Wang. 1219-1228 [doi]
- A Trajectory Perspective on the Role of Data Sampling Techniques in Offline Reinforcement LearningJinyi Liu, Yi Ma, Jianye Hao, Yujing Hu, Yan Zheng, Tangjie Lv, Changjie Fan. 1229-1237 [doi]
- 2D-Ptr: 2D Array Pointer Network for Solving the Heterogeneous Capacitated Vehicle Routing ProblemQidong Liu, Chaoyue Liu 0009, Shaoyao Niu, Cheng Long, Jie Zhang, Mingliang Xu. 1238-1246 [doi]
- Neural Population Learning beyond Symmetric Zero-Sum GamesSiqi Liu, Luke Marris, Marc Lanctot, Georgios Piliouras, Joel Z. Leibo, Nicolas Heess. 1247-1255 [doi]
- GraphSAID: Graph Sampling via Attention based Integer Programming MethodZiqi Liu, Laurence Liu. 1256-1264 [doi]
- Uncoupled Learning of Differential Stackelberg Equilibria with CommitmentsRobert T. Loftin, Mustafa Mert Çelikok, Herke van Hoof, Samuel Kaski, Frans A. Oliehoek. 1265-1273 [doi]
- Safe Reinforcement Learning with Free-form Natural Language Constraints and Pre-Trained Language ModelsXingzhou Lou, Junge Zhang, Ziyan Wang, Kaiqi Huang, Yali Du 0001. 1274-1282 [doi]
- DuaLight: Enhancing Traffic Signal Control by Leveraging Scenario-Specific and Scenario-Shared KnowledgeJiaming Lu, Jingqing Ruan, Haoyuan Jiang, Ziyue Li 0002, Hangyu Mao, Rui Zhao. 1283-1291 [doi]
- A Task-Driven Multi-UAV Coalition Formation MechanismXinpeng Lu, Heng Song, Huailing Ma, Junwu Zhu. 1292-1300 [doi]
- Act as You Learn: Adaptive Decision-Making in Non-Stationary Markov Decision ProcessesBaiting Luo, Yunuo Zhang, Abhishek Dubey, Ayan Mukhopadhyay. 1301-1309 [doi]
- Oh, Now I See What You Want: Learning Agent Models with Internal States from ObservationsPanagiotis Lymperopoulos, Matthias Scheutz. 1310-1318 [doi]
- Covert Planning aganist Imperfect ObserversHaoxiang Ma, Chongyang Shi, Shuo Han 0002, Michael R. Dorothy, Jie Fu. 1319-1327 [doi]
- Mixed-Initiative Bayesian Sub-Goal Optimization in Hierarchical Reinforcement LearningHaozhe Ma, Thanh Vinh Vo, Tze-Yun Leong. 1328-1336 [doi]
- Attacking Multi-Player Bandits and How to Robustify ThemShivakumar Mahesh, Anshuka Rangi, Haifeng Xu, Long Tran-Thanh. 1337-1345 [doi]
- Explaining the Behavior of POMDP-based Agents Through the Impact of Counterfactual InformationSaaduddin Mahmud, Marcell Vazquez-Chanlatte, Stefan J. Witwicki, Shlomo Zilberstein. 1346-1354 [doi]
- Bayesian Behavioural Model Estimation for Live Crowd SimulationFumiyasu Makinoshima, Tetsuro Takahashi, Yusuke Oishi. 1355-1362 [doi]
- PDiT: Interleaving Perception and Decision-making Transformers for Deep Reinforcement LearningHangyu Mao, Rui Zhao, Ziyue Li 0002, Zhiwei Xu 0005, Hao Chen, Yiqun Chen, Bin Zhang, Zhen Xiao, Junge Zhang, Jiangjin Yin. 1363-1371 [doi]
- Network Agency: An Agent-based Model of Forced Migration from UkraineZakaria Mehrab, Logan Stundal, Samarth Swarup, Srinivasan Venkatramanan, Bryan Lewis, Henning S. Mortveit, Christopher L. Barrett, Abhishek Pandey, Chad R. Wells, Alison P. Galvani, Burton H. Singer, David Leblang, Rita R. Colwell, Madhav V. Marathe. 1372-1380 [doi]
- Containing the Spread of a Contagion on a TreeMichela Meister, Jon M. Kleinberg. 1381-1389 [doi]
- TaxAI: A Dynamic Economic Simulator and Benchmark for Multi-agent Reinforcement LearningQirui Mi, Siyu Xia, Yan Song, Haifeng Zhang 0002, Shenghao Zhu, Jun Wang. 1390-1399 [doi]
- Evaluating District-based Election Surveys with Synthetic Dirichlet LikelihoodAdway Mitra, Palash Dey. 1400-1408 [doi]
- Observer-Aware Planning with Implicit and Explicit CommunicationShuwa Miura, Shlomo Zilberstein. 1409-1417 [doi]
- PI-NeuGODE: Physics-Informed Graph Neural Ordinary Differential Equations for Spatiotemporal Trajectory PredictionZhaobin Mo, Yongjie Fu, Xuan Di. 1418-1426 [doi]
- Policy Optimization using Horizon Regularized Advantage to Improve Generalization in Reinforcement LearningNasik Muhammad Nafi, Raja Farrukh Ali, William H. Hsu, Kevin Duong, Mason Vick. 1427-1435 [doi]
- Linking Vision and Multi-Agent Communication through Visible Light Communication using Event CamerasHaruyuki Nakagawa, Yoshitaka Miyatani, Asako Kanezaki. 1436-1444 [doi]
- Rethinking Out-of-Distribution Detection for Reinforcement Learning: Advancing Methods for Evaluation and DetectionLinas Nasvytis, Kai Sandbrink, Jakob N. Foerster, Tim Franzmeyer, Christian Schröder de Witt. 1445-1453 [doi]
- Mixed-Initiative Human-Robot Teaming under Suboptimality with Online Bayesian AdaptationManisha Natarajan, Chunyue Xue, Sanne van Waveren, Karen M. Feigh, Matthew C. Gombolay. 1454-1462 [doi]
- Bootstrapping Linear Models for Fast Online Adaptation in Human-Agent CollaborationBenjamin A. Newman, Christopher Jason Paxton, Kris Kitani, Henny Admoni. 1463-1472 [doi]
- Solution-oriented Agent-based Models Generation with Verifier-assisted Iterative In-context LearningTong Niu, Weihao Zhang, Rong Zhao. 1473-1481 [doi]
- Reinforcement Learning Interventions on Boundedly Rational Human Agents in Frictionful TasksEura Nofshin, Siddharth Swaroop, Weiwei Pan, Susan A. Murphy, Finale Doshi-Velez. 1482-1491 [doi]
- RAISE the Bar: Restriction of Action Spaces for Improved Social Welfare and Equity in Traffic ManagementMichael Oesterle, Tim Grams, Christian Bartelt, Heiner Stuckenschmidt. 1492-1500 [doi]
- Engineering LaCAM*: Towards Real-time, Large-scale, and Near-optimal Multi-agent PathfindingKeisuke Okumura 0001. 1501-1509 [doi]
- Learning and Sustaining Shared Normative Systems via Bayesian Rule Induction in Markov GamesNinell Oldenburg, Tan Zhi-Xuan. 1510-1520 [doi]
- Emergent Cooperation under Uncertain Incentive AlignmentNicole Orzan, Erman Acar, Davide Grossi, Roxana Radulescu. 1521-1530 [doi]
- A Computational Framework of Human ValuesNardine Osman 0001, Mark d'Inverno. 1531-1539 [doi]
- Improving Mobile Maternal and Child Health Care Programs: Collaborative Bandits for Time Slot SelectionSoumyabrata Pal, Milind Tambe, Arun Sai Suggala, Karthikeyan Shanmugam, Aparna Taneja. 1540-1548 [doi]
- Monitored Markov Decision ProcessesSimone Parisi, Montaser Mohammedalamen, Alireza Kazemipour, Matthew E. Taylor, Michael Bowling. 1549-1557 [doi]
- Confidence-Based Curriculum Learning for Multi-Agent Path FindingThomy Phan, Joseph Driscoll, Justin Romberg, Sven Koenig. 1558-1566 [doi]
- Single-Winner Voting with Alliances: Avoiding the Spoiler EffectGrzegorz Pierczynski, Stanislaw Szufa. 1567-1575 [doi]
- Simultaneously Achieving Group Exposure Fairness and Within-Group Meritocracy in Stochastic BanditsSubham Pokhriyal, Shweta Jain 0002, Ganesh Ghalme, Swapnil Dhamal, Sujit Gujar. 1576-1584 [doi]
- Atlas-X Equity Financing: Unlocking New Methods to Securely Obfuscate Axe Inventory Data Based on Differential PrivacyAntigoni Polychroniadou, Gabriele Cipriani, Richard Hua, Tucker Balch. 1585-1592 [doi]
- Robust Knowledge Extraction from Large Language Models using Social Choice TheoryNico Potyka, Yuqicheng Zhu, Yunjie He, Evgeny Kharlamov, Steffen Staab. 1593-1601 [doi]
- Online Decentralised Mechanisms for Dynamic RidesharingNicos Protopapas, Vahid Yazdanpanah, Enrico H. Gerding, Sebastian Stein 0001. 1602-1610 [doi]
- Interactively Learning the User's Utility for Best-Arm Identification in Multi-Objective Multi-Armed BanditsMathieu Reymond, Eugenio Bargiacchi, Diederik M. Roijers, Ann Nowé. 1611-1620 [doi]
- Design Patterns for Explainable Agents (XAg)Sebastian Rodriguez, John Thangarajah, Andrew Davey. 1621-1629 [doi]
- Multi-Agent Diagnostics for Robustness via Illuminated DiversityMikayel Samvelyan, Davide Paglieri, Minqi Jiang, Jack Parker-Holder, Tim Rocktäschel. 1630-1644 [doi]
- The Triangles of Dishonesty: Modelling the Evolution of Lies, Bullshit, and Deception in Agent SocietiesStefan Sarkadi, Peter R. Lewis 0001. 1645-1653 [doi]
- Computing Optimal Commitments to Strategies and Outcome-Conditional Utility TransfersNathaniel Sauerberg, Caspar Oesterheld. 1654-1663 [doi]
- CORE: Towards Scalable and Efficient Causal Discovery with Reinforcement LearningAndreas Sauter, Nicolò Botteghi, Erman Acar, Aske Plaat. 1664-1672 [doi]
- IDIL: Imitation Learning of Intent-Driven Expert BehaviorSangwon Seo, Vaibhav V. Unhelkar. 1673-1682 [doi]
- Multi-user Norm ConsensusMarc Serramia, Natalia Criado, Michael Luck. 1683-1691 [doi]
- Value Alignment in Participatory BudgetingMarc Serramia, Maite López-Sánchez, Juan A. Rodríguez-Aguilar, Stefano Moretti 0001. 1692-1700 [doi]
- Efficient Public Health Intervention Planning Using Decomposition-Based Decision-focused LearningSanket Shah, Arun Sai Suggala, Milind Tambe, Aparna Taneja. 1701-1709 [doi]
- Battlefield Transfers in Coalitional Blotto GamesVade Shah, Jason R. Marden. 1710-1717 [doi]
- Modeling Cognitive Biases in Decision-theoretic Planning for Active Cyber DeceptionAditya Shinde, Prashant Doshi. 1718-1726 [doi]
- Relaxed Exploration Constrained Reinforcement LearningShahaf S. Shperberg, Bo Liu, Peter Stone. 1727-1735 [doi]
- LgTS: Dynamic Task Sampling using LLM-generated Sub-Goals for Reinforcement Learning AgentsYash Shukla, Wenchang Gao, Vasanth Sarathy, Alvaro Velasquez, Robert Wright, Jivko Sinapov. 1736-1744 [doi]
- PAS: Probably Approximate Safety Verification of Reinforcement Learning Policy Using Scenario OptimizationArambam James Singh, Arvind Easwaran. 1745-1753 [doi]
- Frugal Actor-Critic: Sample Efficient Off-Policy Deep Reinforcement Learning Using Unique ExperiencesNikhil Kumar Singh 0004, Indranil Saha. 1754-1762 [doi]
- n PropertyTran Cao Son, Loc Pham, Enrico Pontelli. 1763-1771 [doi]
- Boosting Studies of Multi-Agent Reinforcement Learning on Google Research Football Environment: The Past, Present, and FutureYan Song, He Jiang, Haifeng Zhang, Zheng Tian, Weinan Zhang 0001, Jun Wang. 1772-1781 [doi]
- Algorithmic Filtering, Out-Group Stereotype, and Polarization on Social MediaJean Springsteen, William Yeoh 0001, Dino P. Christenson. 1782-1790 [doi]
- Multi-Agent Alternate Q-LearningKefan Su, Siyuan Zhou, Jiechuan Jiang, Chuang Gan, Xiangjun Wang, Zongqing Lu. 1791-1799 [doi]
- Allocating Contiguous Blocks of Indivisible Chores Fairly: RevisitedAnkang Sun, Bo Li 0037. 1800-1808 [doi]
- On the Transit Obfuscation ProblemHideaki Takahashi, Alex Fukunaga. 1809-1817 [doi]
- Towards Efficient Auction Design with ROI ConstraintsXinyu Tang, Hongtao Lv, Yingjie Gao, Fan Wu 0006, Lei Liu 0003, LiZhen Cui. 1818-1826 [doi]
- Assessing Fairness of Residential Dynamic Pricing for Electricity using Active Learning with Agent-based SimulationSwapna Thorve, Henning S. Mortveit, Anil Vullikanti, Madhav V. Marathe, Samarth Swarup. 1827-1836 [doi]
- Norm Enforcement with a Soft Touch: Faster Emergence, Happier AgentsSz-Ting Tzeng, Nirav Ajmeri, Munindar P. Singh. 1837-1846 [doi]
- Reducing Optimism Bias in Incomplete Cooperative GamesFilip Úradník, David Sychrovský, Jakub Cerný, Martin Cerný 0005. 1847-1855 [doi]
- Enabling BDI Agents to Reason on a Dynamic Action Repertoire in Hypermedia EnvironmentsDanai Vachtsevanou, Bruno de Lima, Andrei Ciortea, Jomi Fred Hübner, Simon Mayer, Jérémy Lemée. 1856-1864 [doi]
- MABL: Bi-Level Latent-Variable World Model for Sample-Efficient Multi-Agent Reinforcement LearningAravind Venugopal, Stephanie Milani, Fei Fang 0001, Balaraman Ravindran. 1865-1873 [doi]
- Optimal Flash Loan Fee Function with Respect to Leverage StrategiesChenmin Wang, Peng Li, Yulong Zeng, Xuepeng Fan. 1874-1882 [doi]
- Positive Intra-Group Externalities in Facility LocationYing Wang, Houyu Zhou, Minming Li. 1883-1891 [doi]
- Generalized Response Objectives for Strategy Exploration in Empirical Game-Theoretic AnalysisYongzhao Wang 0001, Michael P. Wellman. 1892-1900 [doi]
- The Reasons that Agents Act: Intention and Instrumental GoalsFrancis Rhys Ward, Matt MacDermott, Francesco Belardinelli, Francesca Toni, Tom Everitt. 1901-1909 [doi]
- Distributed Online Rollout for Multivehicle Routing in Unmapped EnvironmentsJamison W. Weber, Dhanush R. Giriyan, Devendra R. Parkar, Dimitri P. Bertsekas, Andréa W. Richa. 1910-1918 [doi]
- Towards Generalizability of Multi-Agent Reinforcement Learning in Graphs with Recurrent Message PassingJannis Weil, Zhenghua Bao, Osama Abboud, Tobias Meuser. 1919-1927 [doi]
- Multi-Robot Motion and Task Planning in Automotive Production Using Controller-based Safe Reinforcement LearningEric Wete, Joel Greenyer, Daniel Kudenko, Wolfgang Nejdl. 1928-1937 [doi]
- New Algorithms for Distributed Fair k-Center Clustering: Almost Accurate as Sequential AlgorithmsXiaoliang Wu, Qilong Feng, Ziyun Huang, Jinhui Xu 0001, Jianxin Wang 0001. 1938-1946 [doi]
- Adaptive Evolutionary Reinforcement Learning Algorithm with Early Termination StrategyXiaoqiang Wu, Qingling Zhu, Qiuzhen Lin, Weineng Chen, Jianqiang Li 0001. 1947-1955 [doi]
- Collaborative Deep Reinforcement Learning for Solving Multi-Objective Vehicle Routing ProblemsYaoxin Wu, Mingfeng Fan, Zhiguang Cao, Ruobin Gao, Yaqing Hou, Guillaume Sartoretti. 1956-1965 [doi]
- Safeguard Privacy for Minimal Data Collection with Trustworthy Autonomous AgentsMengwei Xu, Louise A. Dennis, Mustafa A. Mustafa. 1966-1974 [doi]
- Learning to Schedule Online Tasks with Bandit FeedbackYongxin Xu, Shangshang Wang, Hengquan Guo, Xin Liu, Ziyu Shao. 1975-1983 [doi]
- Successively Pruned Q-Learning: Using Self Q-function to Reduce the OverestimationZhaolin Xue, Lihua Zhang, Zhiyan Dong. 1984-1992 [doi]
- Attention-based Priority Learning for Limited Time Multi-Agent Path FindingYibin Yang, Mingfeng Fan, Chengyang He, Jianqiang Wang 0003, Heye Huang, Guillaume Sartoretti. 1993-2001 [doi]
- Automatic Curriculum for Unsupervised Reinforcement LearningYucheng Yang, Tianyi Zhou 0001, Lei Han, Meng Fang, Mykola Pechenizkiy. 2002-2010 [doi]
- Multimodal Pretrained Models for Verifiable Sequential Decision-Making: Planning, Grounding, and PerceptionYunhao Yang, Cyrus Neary, Ufuk Topcu. 2011-2019 [doi]
- Whom to Trust? Elective Learning for Distributed Gaussian Process RegressionZewen Yang, Xiaobing Dai, Akshat Dubey, Sandra Hirche, Georges Hattab. 2020-2028 [doi]
- Risk-Aware Constrained Reinforcement Learning with Non-Stationary PoliciesZhaoxing Yang, Haiming Jin, Yao Tang, Guiyun Fan. 2029-2037 [doi]
- When is Mean-Field Reinforcement Learning Tractable and Relevant?Batuhan Yardim, Artur Goldman, Niao He. 2038-2046 [doi]
- Viral Marketing in Social Networks with Competing ProductsAhad N. Zehmakan, Xiaotian Zhou, Zhongzhi Zhang. 2047-2056 [doi]
- Majority-based Preference Diffusion on Social NetworksAhad N. Zehmakan. 2057-2065 [doi]
- Human Goal Recognition as Bayesian Inference: Investigating the Impact of Actions, Timing, and Goal SolvabilityChenyuan Zhang, Charles Kemp, Nir Lipovetzky. 2066-2074 [doi]
- Memory-Based Resilient Control Against Non-cooperation in Multi-agent FlockingMingyue Zhang, Nianyu Li, Jialong Li, Jiachun Liao, Jiamou Liu. 2075-2084 [doi]
- MESA: Cooperative Meta-Exploration in Multi-Agent Learning through Exploiting State-Action Space StructureZhicheng Zhang, Yancheng Liang, Yi Wu 0013, Fei Fang 0001. 2085-2093 [doi]
- Pragmatic Instruction Following and Goal Assistance via Cooperative Language-Guided Inverse PlanningTan Zhi-Xuan, Lance Ying, Vikash Mansinghka 0001, Joshua B. Tenenbaum. 2094-2103 [doi]
- Maximising the Influence of Temporary Participants in Opinion FormationZhiqiang Zhuang, Kewen Wang 0001, Zhe Wang, Junhu Wang, Yinong Yang. 2104-2110 [doi]
- Defining Deception in Decision MakingMarwa Abdulhai, Micah Carroll, Justin Svegliato, Anca D. Dragan, Sergey Levine. 2111-2113 [doi]
- Actual Trust in Multiagent SystemsMichael Akintunde, Vahid Yazdanpanah, Asieh Salehi Fathabadi, Corina Cîrstea, Mehdi Dastani, Luc Moreau 0001. 2114-2116 [doi]
- On General Epistemic Abstract Argumentation FrameworksGianvincenzo Alfano, Sergio Greco, Francesco Parisi, Irina Trubitsyna. 2117-2119 [doi]
- Approximately Fair Allocation of Indivisible Items with Random ValuationsAlessandro Aloisio, Vittorio Bilò, Antonio Mario Caruso, Michele Flammini, Cosimo Vinci. 2120-2122 [doi]
- Quantum Circuit Design: A Reinforcement Learning ChallengePhilipp Altmann, Adelina Bärligea, Jonas Stein 0001, Michael Kölle 0001, Thomas Gabor, Thomy Phan, Claudia Linnhoff-Popien. 2123-2125 [doi]
- Charging Electric Vehicles Fairly and EfficientlyRamsundar Anandanarayanan, Swaprava Nath, Rohit Vaish. 2126-2128 [doi]
- Bounding Consideration Probabilities in Consider-Then-Choose Ranking ModelsBen Aoki-Sherwood, Catherine Bregou, David Liben-Nowell, Kiran Tomlinson, Thomas Zeng. 2129-2131 [doi]
- Abstracting Assumptions in Structured ArgumentationIosif Apostolakis, Zeynep G. Saribatur, Johannes Peter Wallner. 2132-2134 [doi]
- Liquid Democracy for Low-Cost Ensemble PruningBen Armstrong, Kate Larson. 2135-2137 [doi]
- MiKe: Task Scheduling for UAV-based Parcel DeliveryViviana Arrigoni, Giulio Attenni, Novella Bartolini, Matteo Finelli, Gaia Maselli. 2138-2140 [doi]
- Entropy Seeking Constrained Multiagent Reinforcement LearningAyhan Alp Aydeniz, Enrico Marchesini, Christopher Amato, Kagan Tumer. 2141-2143 [doi]
- Metric Distortion Under Public-Spirited VotingAmirreza Bagheridelouee, Marzie Nilipour, Masoud Seddighin, Maziar Shamsipour. 2144-2146 [doi]
- Concurrency Model of BDI Programming Frameworks: Why Should We Control It?Martina Baiardi, Samuele Burattini, Giovanni Ciatto, Danilo Pianini, Andrea Omicini, Alessandro Ricci. 2147-2149 [doi]
- Adaptive Discounting of Training Time AttacksRidhima Bector, Abhay Aradhya, Chai Quek, Zinovi Rabinovich. 2150-2152 [doi]
- Computing Balanced Solutions for Large International Kidney Exchange Schemes when Cycle Length is UnboundedMárton Benedek, Péter Biró 0001, Gergely Csáji, Matthew Johnson 0002, Daniël Paulusma, Xin Ye. 2153-2155 [doi]
- Decentralized Control of Distributed Manipulators: An Information Diffusion ApproachNicolas Bessone, Payam Zahadat, Kasper Støy. 2156-2158 [doi]
- Gaze Supervision for Mitigating Causal Confusion in Driving AgentsAbhijat Biswas, Badal Arun Pardhi, Caleb Chuck, Jarrett Holtz, Scott Niekum, Henny Admoni, Alessandro Allievi. 2159-2161 [doi]
- Fair Allocation of Conflicting Courses under Additive UtilitiesArpita Biswas, Yiduo Ke, Samir Khuller, Quanquan C. Liu. 2162-2164 [doi]
- Factored MDP based Moving Target Defense with Dynamic Threat ModelingMegha Bose, Praveen Paruchuri, Akshat Kumar. 2165-2167 [doi]
- Decentralised Emergence of Robust and Adaptive Linguistic Conventions in Populations of Autonomous Agents Grounded in Continuous WorldsJérôme Botoko Ekila, Jens Nevens, Lara Verheyen, Katrien Beuls, Paul Van Eecke. 2168-2170 [doi]
- Who gets the Maximal Extractable Value? A Dynamic Sharing Blockchain MechanismPedro Braga, Georgios Chionas, Piotr Krysta, Stefanos Leonardos, Georgios Piliouras, Carmine Ventre. 2171-2173 [doi]
- User-centric Explanation Strategies for Interactive RecommendersBerk Buzcu, Emre Kuru, Reyhan Aydogan. 2174-2176 [doi]
- Non Stationary Bandits with Periodic VariationTitas Chakraborty, Parth Shettiwar. 2177-2179 [doi]
- Mechanism Design for Reducing Agent Distances to Prelocated FacilitiesHau Chan, Xinliang Fu, Minming Li, Chenhao Wang 0001. 2180-2182 [doi]
- Anytime Multi-Agent Path Finding using Operation Parallelism in Large Neighborhood SearchShao-Hung Chan, Zhe Chen 0016, Dian-Lun Lin, Yue Zhang, Daniel Harabor, Sven Koenig, Tsung-Wei Huang, Thomy Phan. 2183-2185 [doi]
- Agent-Based Triangle Counting and Its Applications in Anonymous GraphsPrabhat Kumar Chand, Apurba Das, Anisur Rahaman Molla. 2186-2188 [doi]
- HLG: Bridging Human Heuristic Knowledge and Deep Reinforcement Learning for Optimal Agent PerformanceBin Chen, Zehong Cao. 2189-2191 [doi]
- Cutsets and EF1 Fair Division of GraphsJiehua Chen 0001, William S. Zwicker. 2192-2194 [doi]
- ANOTO: Improving Automated Negotiation via Offline-to-Online Reinforcement LearningSiqi Chen, Jianing Zhao, Kai Zhao, Gerhard Weiss 0001, Fengyun Zhang, Ran Su, Yang Dong, Daqian Li, Kaiyou Lei. 2195-2197 [doi]
- Mastering Robot Control through Point-based Reinforcement Learning with Pre-trainingYihong Chen, Cong Wang, Tianpei Yang, Meng Wang, Yingfeng Chen, Jifei Zhou, Chaoyi Zhao, Xinfeng Zhang, Zeng Zhao, Changjie Fan, Zhipeng Hu, Rong Xiong, Long Zeng. 2198-2200 [doi]
- Quantifying Agent Interaction in Multi-agent Reinforcement Learning for Cost-efficient GeneralizationYuxin Chen, Chen Tang, Ran Tian, Chenran Li, Jinning Li, Masayoshi Tomizuka, Wei Zhan. 2201-2203 [doi]
- Cognizing and Imitating Robotic Skills via a Dual Cognition-Action ArchitectureZixuan Chen, Ze Ji, Shuyang Liu, Jing Huo, Yiyu Chen, Yang Gao. 2204-2206 [doi]
- Modelling the Dynamics of Subjective Identity in Allocation GamesJanvi Chhabra, Jayati Deshmukh, Srinath Srinivasa. 2207-2209 [doi]
- Optimal Task Assignment and Path Planning using Conflict-Based Search with Precedence and Temporal ConstraintsYu Quan Chong, Jiaoyang Li 0001, Katia P. Sycara. 2210-2212 [doi]
- Minimizing Negative Side Effects in Cooperative Multi-Agent Systems using Distributed CoordinationMoumita Choudhury, Sandhya Saisubramanian, Hao Zhang, Shlomo Zilberstein. 2213-2215 [doi]
- A Reinforcement Learning Framework for Studying Group and Individual FairnessAlexandra Cimpean, Catholijn M. Jonker, Pieter Libin, Ann Nowé. 2216-2218 [doi]
- Near-Optimal Online Resource Allocation in the Random-Order ModelSaar Cohen 0001, Noa Agmon. 2219-2221 [doi]
- Inferring Lewisian Common Knowledge using Theory of Mind Reasoning in a Forward-chaining Rule EngineStephen Cranefield, Sriashalya Srivathsan, Jeremy Pitt. 2222-2224 [doi]
- Analyzing Crowdfunding of Public Projects Under Dynamic BeliefsSankarshan Damle, Sujit Gujar. 2225-2227 [doi]
- No Transaction Fees? No Problem! Achieving Fairness in Transaction Fee Mechanism DesignSankarshan Damle, Varul Srivastava, Sujit Gujar. 2228-2230 [doi]
- Deep Learning for Population-Dependent Controls in Mean Field Control Problems with Common NoiseGökçe Dayanikli, Mathieu Laurière, Jiacheng Zhang. 2231-2233 [doi]
- Attila: A Negotiating Agent for the Game of Diplomacy, Based on Purely Symbolic A.IDave De Jonge, Laura Rodriguez Cima. 2234-2236 [doi]
- Evaluation of Robustness of Off-Road Autonomous Driving Segmentation against Adversarial Attacks: A Dataset-Centric StudyPankaj Deoli, Rohit Kumar, Axel Vierling, Karsten Berns. 2237-2239 [doi]
- A Comparison of the Myerson Value and the Position ValueAyse Mutlu Derya. 2240-2242 [doi]
- Pruning Neural Networks Using Cooperative Game TheoryMauricio Diaz-Ortiz Jr., Benjamin Kempinski, Daphne Cornelisse, Yoram Bachrach, Tal Kachman. 2243-2245 [doi]
- Verifying Proportionality in Temporal VotingEdith Elkind, Svetlana Obraztsova, Nicholas Teh. 2246-2248 [doi]
- Computational Theory of Mind with Abstractions for Effective Human-Agent CollaborationEmre Erdogan, Rineke Verbrugge, Pinar Yolum. 2249-2251 [doi]
- Attention Graph for Multi-Robot Social Navigation with Deep Reinforcement LearningErwan Escudie, Laëtitia Matignon, Jacques Saraydaryan. 2252-2254 [doi]
- Strategic Cost Selection in Participatory BudgetingPiotr Faliszewski, Lukasz Janeczko, Andrzej Kaczmarczyk 0001, Grzegorz Lisowski, Piotr Skowron 0001, Stanislaw Szufa. 2255-2257 [doi]
- Deceptive Path Planning via Reinforcement Learning with Graph Neural NetworksMichael Y. Fatemi, Wesley A. Suttle, Brian M. Sadler. 2258-2260 [doi]
- Influence-Focused Asymmetric Island ModelAndrew Festa, Gaurav Dixit, Kagan Tumer. 2261-2263 [doi]
- A Negotiator's Backup Plan: Optimal Concessions with a Reservation ValueTamara C. P. Florijn, Pinar Yolum, Tim Baarslag. 2264-2266 [doi]
- Aleatoric Predicates: Reasoning about MarblesTim French 0002. 2267-2269 [doi]
- Synthesizing Social Laws with ATL ConditionsRustam Galimullin, Louwe B. Kuijer. 2270-2272 [doi]
- Combinatorial Client-Master Multiagent Deep Reinforcement Learning for Task Offloading in Mobile Edge ComputingZemuy Tesfay Gebrekidan, Sebastian Stein 0001, Timothy J. Norman. 2273-2275 [doi]
- Behaviour Modelling of Social Animals via Causal Structure Discovery and Graph Neural NetworksGaël Gendron, Yang Chen, Mitchell Rogers, Yiping Liu, Mihailo Azhar, Shahrokh Heidari, David Arturo Soriano Valdez, Kobe Knowles, Padriac O'Leary, Simon Eyre, Michael Witbrock, Gillian Dobbie, Jiamou Liu, Patrice Delmas. 2276-2278 [doi]
- Benchmarking MARL on Long Horizon Sequential Multi-Objective TasksMinghong Geng, Shubham Pateria, Budhitama Subagdja, Ah-Hwee Tan. 2279-2281 [doi]
- Risk-Sensitive Multi-Agent Reinforcement Learning in Network Aggregative Markov GamesHafez Ghaemi, Hamed Kebriaei, Alireza Ramezani Moghaddam, Majid Nili Ahmadabadi. 2282-2284 [doi]
- Facility Location Games with Task AllocationZifan Gong, Minming Li, Houyu Zhou. 2285-2287 [doi]
- Indirect Credit Assignment in a Multiagent SystemEverardo Gonzalez, Siddarth Viswanathan, Kagan Tumer. 2288-2290 [doi]
- Leveraging Approximate Model-based Shielding for Probabilistic Safety Guarantees in Continuous EnvironmentsAlexander W. Goodall, Francesco Belardinelli. 2291-2293 [doi]
- Reinforcement Learning for Question Answering in Programming Domain using Public Community Scoring as a Human FeedbackAlexey Gorbatovski, Sergey V. Kovalchuk. 2294-2296 [doi]
- Towards Socially-Acceptable Multi-Criteria Resolution of the 4D-Contracts Repair ProblemYoussef Hamadi, Gauthier Picard. 2297-2299 [doi]
- Taking Agent-Based Social Simulation to the Next Level Using Exascale Computing: Potential Use-Cases, Capacity Requirements and ThreatsMatthew P. Hare, Doug Salt, Ric Colasanti, Richard Milton, Mike Batty, Alison J. Heppenstall, Gary Polhill. 2300-2302 [doi]
- Addressing Permutation Challenges in Multi-Agent Reinforcement LearningSomnath Hazra, Pallab Dasgupta, Soumyajit Dey. 2303-2305 [doi]
- Distribution of Chores with Information AsymmetryHadi Hosseini, Joshua Kavner, Tomasz Was, Lirong Xia. 2306-2308 [doi]
- Computing Nash Equilibria in Multidimensional Congestion GamesMohammad T. Irfan, Hau Chan, Jared Soundy. 2309-2311 [doi]
- Strategic Routing and Scheduling for EvacuationsKazi Ashik Islam, Da Qi Chen, Madhav V. Marathe, Henning S. Mortveit, Samarth Swarup, Anil Vullikanti. 2312-2314 [doi]
- Dual-Policy-Guided Offline Reinforcement Learning with Optimal StoppingWeibo Jiang, Shaohui Li, Zhi Li, Yuxin Ke, Zhizhuo Jiang, Yaowen Li, Yu Liu 0005. 2315-2317 [doi]
- PP-Completeness of Control by Adding Players to Change the Penrose-Banzhaf Power Index in Weighted Voting GamesJoanna Kaczmarek 0001, Jörg Rothe. 2318-2320 [doi]
- TIMAT: Temporal Information Multi-Agent TransformerQitong Kang, Fuyong Wang, Zhongxin Liu, Zengqiang Chen 0001. 2321-2323 [doi]
- On the Computational Complexity of Quasi-Variational Inequalities and Multi-Leader-Follower GamesBruce M. Kapron, Koosha Samieefar. 2324-2326 [doi]
- Contiguous Allocation of Binary Valued Indivisible Items on a PathYasushi Kawase, Bodhayan Roy, Mohammad Azharuddin Sanpui. 2327-2329 [doi]
- Decentralized Safe Control for Multi-Robot Navigation in Dynamic Environments with Limited SensingSaad Khan, Mayank Baranwal, Srikant Sukumar. 2330-2332 [doi]
- GLIDE-RL: Grounded Language Instruction through DEmonstration in RLChaitanya Kharyal, Sai Krishna Gottipati, Tanmay Kumar Sinha, Srijita Das, Matthew E. Taylor. 2333-2335 [doi]
- Electric Vehicle Routing for Emergency Power Supply with Deep Reinforcement LearningDaisuke Kikuta, Hiroki Ikeuchi, Kengo Tajiri, Yuta Toyama, Masaki Nakamura, Yuusuke Nakano. 2336-2338 [doi]
- Difference of Convex Functions Programming for Policy Optimization in Reinforcement LearningAkshat Kumar. 2339-2341 [doi]
- Deep Hawkes Process for High-Frequency Market MakingPankaj Kumar. 2342-2344 [doi]
- Fair Scheduling of Indivisible ChoresYatharth Kumar, Sarfaraz Equbal, Rohit Gurjar, Swaprava Nath, Rohit Vaish. 2345-2347 [doi]
- Guided Exploration in Reinforcement Learning via Monte Carlo Critic OptimizationIgor Kuznetsov. 2348-2350 [doi]
- A SAT-based Approach for Argumentation DynamicsJean-Marie Lagniez, Emmanuel Lonca, Jean-Guy Mailly. 2351-2353 [doi]
- Which Games are Unaffected by Absolute Commitments?Daji Landis, Nikolaj I. Schwartzbach. 2354-2356 [doi]
- ELA: Exploited Level Augmentation for Offline Learning in Zero-Sum GamesShiqi Lei, Kanghoon Lee, Linjing Li, Jinkyoo Park, Jiachen Li 0001. 2357-2359 [doi]
- From Explicit Communication to Tacit Cooperation: A Novel Paradigm for Cooperative MARLDapeng Li, Zhiwei Xu, Bin Zhang, Guangchong Zhou, Zeren Zhang, Guoliang Fan. 2360-2362 [doi]
- Efficient Collaboration with Unknown Agents: Ignoring Similar Agents without Checking SimilarityYansong Li, Shuo Han 0002. 2363-2365 [doi]
- Simple k-crashing Plan with a Good Approximation RatioRuixi Luo, Kai Jin, Zelin Ye. 2366-2368 [doi]
- Towards Understanding How to Reduce Generalization Gap in Visual Reinforcement LearningJiafei Lyu, Le Wan, Xiu Li, Zongqing Lu. 2369-2371 [doi]
- Opinion Diffusion on Society Graphs Based on Approval BallotsJayakrishnan Madathil, Neeldhara Misra, Yash More. 2372-2374 [doi]
- Time-Constrained Restless Multi-Armed Bandits with Applications to City Service SchedulingYi Mao, Andrew Perrault. 2375-2377 [doi]
- Multi-level Aggregation with Delays and Stochastic ArrivalsMathieu Mari, Michal Pawlowski, Runtian Ren, Piotr Sankowski. 2378-2380 [doi]
- Projection-Optimal Monotonic Value Function Factorization in Multi-Agent Reinforcement LearningYongsheng Mei, Hanhan Zhou, Tian Lan. 2381-2383 [doi]
- Shield Decentralization for Safe Reinforcement Learning in General Partially Observable Multi-Agent EnvironmentsDaniel Melcer, Christopher Amato, Stavros Tripakis. 2384-2386 [doi]
- Enhancing Search and Rescue Capabilities in Hazardous Communication-Denied Environments through Path-Based Sensors with BacktrackingAlexander Mendelsohn, Donald Sofge, Michael W. Otte. 2387-2389 [doi]
- Fairness in Repeated House AllocationKarl Jochen Micheel, Anaëlle Wilczynski. 2390-2392 [doi]
- Continual Depth-limited Responses for Computing Counter-strategies in Sequential GamesDavid Milec, Ondrej Kubícek, Viliam Lisý. 2393-2395 [doi]
- Simulated Robotic Soft Body ManipulationGlareh Mir, Michael Beetz. 2396-2398 [doi]
- Leveraging Sub-Optimal Data for Human-in-the-Loop Reinforcement LearningCalarina Muslimani, Matthew E. Taylor. 2399-2401 [doi]
- MA-MIX: Value Function Decomposition for Cooperative Multiagent Reinforcement Learning Based on Multi-Head Attention MechanismYu Niu, Hengxu Zhao, Lei Yu. 2402-2404 [doi]
- Ontological Modeling and Reasoning for Comparison and Contrastive Narration of Robot PlansAlberto Olivares Alarcos, Sergi Foix, Júlia Borràs Sol, Gerard Canal, Guillem Alenyà. 2405-2407 [doi]
- Sentimental Agents: Combining Sentiment Analysis and Non-Bayesian Updating for Cooperative Decision-MakingDaniele Orner, Elizabeth Akinyi Ondula, Nick Mumero Mwangi, Richa Goyal. 2408-2410 [doi]
- DCT: Dual Channel Training of Action Embeddings for Reinforcement Learning with Large Discrete Action SpacesPranavi Pathakota, Hardik Meisheri, Harshad Khadilkar. 2411-2413 [doi]
- Incentive-based MARL Approach for Commons Dilemmas in Property-based EnvironmentsLukasz Pelcner, Matheus Aparecido do Carmo Alves, Leandro Soriano Marcolino, Paula Harrison, Peter Atkinson. 2414-2416 [doi]
- Decision Making in Non-Stationary Environments with Policy-Augmented SearchAva Pettet, Yunuo Zhang, Baiting Luo, Kyle Wray, Hendrik Baier, Aron Laszka, Abhishek Dubey, Ayan Mukhopadhyay. 2417-2419 [doi]
- Optimal Majority Rules and Quantitative Condorcet Properties of Setwise Kemeny Voting SchemesXuan Kien Phung, Sylvie Hamel. 2420-2422 [doi]
- Fully Independent Communication in Multi-Agent Reinforcement LearningRafael Pina, Varuna De Silva, Corentin Artaud, Xiaolan Liu. 2423-2425 [doi]
- Emergent Dominance Hierarchies in Reinforcement Learning AgentsRam Rachum, Yonatan Nakar, Bill Tomlinson, Nitay Alon, Reuth Mirsky. 2426-2428 [doi]
- GOV-REK: Governed Reward Engineering Kernels for Designing Robust Multi-Agent Reinforcement Learning SystemsAshish Rana, Michael Oesterle, Jannik Brinkmann. 2429-2431 [doi]
- Banzhaf Power in Hierarchical GamesJohn Randolph, Amy Greenwald, Denizalp Goktas. 2432-2434 [doi]
- BAR Nash Equilibrium and Application to Blockchain DesignMaxime Reynouard, Olga Gorelkina, Rida Laraki. 2435-2437 [doi]
- Psychophysiological Models of Cognitive States Can Be Operator-AgnosticErin E. Richardson, Savannah Lynn Buchner, Jacob R. Kintz, Torin K. Clark, Allison P. Anderson. 2438-2440 [doi]
- The Selfishness Level of Social DilemmasStefan Roesch, Stefanos Leonardos, Yali Du 0001. 2441-2443 [doi]
- JaxMARL: Multi-Agent RL Environments and Algorithms in JAXAlexander Rutherford, Benjamin Ellis, Matteo Gallici, Jonathan Cook, Andrei Lupu, Garðar Ingvarsson, Timon Willi, Akbir Khan, Christian Schröder de Witt, Alexandra Souly, Saptarashmi Bandyopadhyay, Mikayel Samvelyan, Minqi Jiang, Robert Tjarko Lange, Shimon Whiteson, Bruno Lacerda, Nick Hawes, Tim Rocktäschel, Chris Lu 0001, Jakob N. Foerster. 2444-2446 [doi]
- Source Detection in Networks using the Stationary Distribution of a Markov ChainYael Sabato, Amos Azaria, Noam Hazon. 2447-2449 [doi]
- Social Identities and Responsible AgencyKarthik Sama, Jayati Deshmukh, Srinath Srinivasa. 2450-2452 [doi]
- Centralized Training with Hybrid Execution in Multi-Agent Reinforcement LearningPedro P. Santos, Diogo S. Carvalho, Miguel Vasco, Alberto Sardinha, Pedro A. Santos, Ana Paiva 0001, Francisco S. Melo. 2453-2455 [doi]
- Geospatial Active Search for Preventing EvictionsAnindya Sarkar, Alex DiChristofano, Sanmay Das, Patrick J. Fowler, Nathan Jacobs, Yevgeniy Vorobeychik. 2456-2458 [doi]
- Balanced and Incentivized Learning with Limited Shared Information in Multi-agent Multi-armed BanditJunning Shao, Siwei Wang 0002, Zhixuan Fang. 2459-2461 [doi]
- Cournot Queueing Games with Applications to Mobility SystemsMatthew Sheldon, Dario Paccagnan, Giuliano Casale. 2462-2464 [doi]
- OPEx: A Large Language Model-Powered Framework for Embodied Instruction FollowingHaochen Shi, Zhiyuan Sun, Xingdi Yuan, Marc-Alexandre Côté, Bang Liu. 2465-2467 [doi]
- Fairness and Cooperation between Independent Reinforcement Learners through Indirect ReciprocityJacobus Smit, Fernando P. Santos. 2468-2470 [doi]
- Fairness and Privacy Guarantees in Federated Contextual BanditsSambhav Solanki, Sujit Gujar, Shweta Jain 0002. 2471-2473 [doi]
- Fairness of Exposure in Online Restless Multi-armed BanditsArchit Sood, Shweta Jain 0002, Sujit Gujar. 2474-2476 [doi]
- Unlocking the Potential of Machine Ethics with ExplainabilityTimo Speith. 2477-2479 [doi]
- Hybrid Participatory Budgeting: Divisible, Indivisible, and BeyondGogulapati Sreedurga. 2480-2482 [doi]
- Decent-BRM: Decentralization through Block Reward MechanismsVarul Srivastava, Sujit Gujar. 2483-2485 [doi]
- Ethical Markov Decision Processes with Moral Worth as RewardsMihail Stojanovski, Nadjet Bourdache, Grégory Bonnet, Abdel-Illah Mouaddib. 2486-2488 [doi]
- A Multiagent Path Search Algorithm for Large-Scale Coalition Structure GenerationRedha Taguelmimt, Samir Aknine, Djamila Boukredera, Narayan Changder, Tuomas Sandholm. 2489-2491 [doi]
- Efficient Size-based Hybrid Algorithm for Optimal Coalition Structure GenerationRedha Taguelmimt, Samir Aknine, Djamila Boukredera, Narayan Changder, Tuomas Sandholm. 2492-2494 [doi]
- Pure Nash Equilibria in Weighted Congestion Games with Complementarities and BeyondKenjiro Takazawa. 2495-2497 [doi]
- HiMAP: Learning Heuristics-Informed Policies for Large-Scale Multi-Agent PathfindingHuijie Tang, Federico Berto, Zihan Ma, Chuanbo Hua, Kyuree Ahn, Jinkyoo Park. 2498-2500 [doi]
- Fuzzy Clustered Federated Learning Under Mixed Data DistributionsPeng Tang, Lifan Wang, Weidong Qiu, Zheng Huang, Qiangmin Wang. 2501-2503 [doi]
- Neurological Based Timing Mechanism for Reinforcement LearningMichael J. Tarlton, Gustavo B. M. Mello, Anis Yazidi. 2504-2506 [doi]
- Unifying Regret and State-Action Space Coverage for Effective Unsupervised Environment DesignJayden Teoh Jing Teoh, Wenjun Li, Pradeep Varakantham. 2507-2509 [doi]
- Persuasion by Shaping Beliefs about Multidimensional Features of a ThingKazunori Terada, Yasuo Noma, Masanori Hattori. 2510-2512 [doi]
- Game Transformations That Preserve Nash Equilibria or Best-Response SetsEmanuel Tewolde, Vincent Conitzer. 2513-2515 [doi]
- Consensus of Nonlinear Multi-Agent Systems with Semi-Markov Switching Under DoS Attackssheng Tian, Hong Shen 0001, Yuan Tian 0021, Hui Tian. 2516-2518 [doi]
- Reducing Systemic Risk in Financial Networks through DonationsJinyun Tong, Bart de Keijzer, Carmine Ventre. 2519-2521 [doi]
- Joint Intrinsic Motivation for Coordinated Exploration in Multi-Agent Deep Reinforcement LearningMaxime Toquebiau, Nicolas Bredèche, Faïz Ben Amar, Jae Yun Jun. 2522-2524 [doi]
- Embracing Relational Reasoning in Multi-Agent Actor-CriticSharlin Utke, Jeremie Houssineau, Giovanni Montana. 2525-2527 [doi]
- Bayesian Ensembles for Exploration in Deep Q-LearningPascal R. van der Vaart, Neil Yorke-Smith, Matthijs T. J. Spaan. 2528-2530 [doi]
- Understanding the Impact of Promotions on Consumer BehaviorJarod Vanderlynden, Philippe Mathieu, Romain Warlop. 2531-2533 [doi]
- On the Existence of EFX under Picky or Non-differentiative AgentsMaya Viswanathan, Ruta Mehta. 2534-2536 [doi]
- Explaining Sequences of Actions in Multi-agent Deep Reinforcement Learning ModelsKhaing Phyo Wai, Minghong Geng, Shubham Pateria, Budhitama Subagdja, Ah-Hwee Tan. 2537-2539 [doi]
- Clique Analysis and Bypassing in Continuous-Time Conflict-Based SearchThayne T. Walker, Nathan R. Sturtevant, Ariel Felner. 2540-2542 [doi]
- Detecting Anomalous Agent Decision Sequences Based on Offline Imitation LearningChen Wang, Sarah M. Erfani, Tansu Alpcan, Christopher Leckie. 2543-2545 [doi]
- On the Utility of External Agent Intention Predictor for Human-AI CoordinationChenxu Wang, Zilong Chen, Huaping Liu. 2546-2548 [doi]
- Decision Market Based Learning for Multi-agent Contextual Bandit ProblemsWenlong Wang, Thomas Pfeiffer 0003. 2549-2551 [doi]
- Reinforcement Nash Equilibrium SolverXinrun Wang, Chang Yang, Shuxin Li, Pengdeng Li, Xiao Huang, Hau Chan, Bo An 0001. 2552-2554 [doi]
- Potential Games on Cubic Splines for Multi-Agent Motion Planning of Autonomous AgentsSam Williams, Jyotirmoy Deshmukh. 2555-2557 [doi]
- Competitive Analysis of Online Facility Open ProblemBinghan Wu, Wei Bao 0001, Bing Zhou. 2558-2560 [doi]
- Population-aware Online Mirror Descent for Mean-Field Games by Deep Reinforcement LearningZida Wu, Mathieu Laurière, Samuel Jia Cong Chua, Matthieu Geist, Olivier Pietquin, Ankur Mehta. 2561-2563 [doi]
- Truthful and Stable One-sided Matching on NetworksTianyi Yang, Yuxiang Zhai, Dengji Zhao, Xinwei Song, Miao Li. 2564-2566 [doi]
- On the Complexity of Candidates-Embedded Multiwinner Voting under the Hausdorff FunctionYongjie Yang 0018. 2567-2569 [doi]
- Dual Role AoI-based Incentive Mechanism for HD Map CrowdsourcingWentao Ye, Bo Liu, Yuan Luo, Jianwei Huang. 2570-2572 [doi]
- Toward Socially Friendly Autonomous Driving Using Multi-agent Deep Reinforcement LearningJhih-Ching Yeh, Von-Wun Soo. 2573-2575 [doi]
- Solving Offline 3D Bin Packing Problem with Large-sized Bin via Two-stage Deep Reinforcement LearningHao Yin, Fan Chen, HongJie He. 2576-2578 [doi]
- Overview of t-DGR: A Trajectory-Based Deep Generative Replay Method for Continual Learning in Decision MakingWilliam Yue, Bo Liu, Peter Stone. 2579-2581 [doi]
- MATLight: Traffic Signal Coordinated Control Algorithm based on Heterogeneous-Agent Mirror Learning with TransformerHaipeng Zhang, Zhiwen Wang, Na Li. 2582-2584 [doi]
- PADDLE: Logic Program Guided Policy Reuse in Deep Reinforcement LearningHao Zhang, Tianpei Yang, Yan Zheng, Jianye Hao, Matthew E. Taylor. 2585-2587 [doi]
- Bellman Momentum on Deep Reinforcement LearningHuiHui Zhang. 2588-2590 [doi]
- Auto-Encoding Adversarial Imitation LearningKaifeng Zhang, Rui Zhao, Ziming Zhang, Yang Gao. 2591-2593 [doi]
- Large Language Model Assisted Multi-Agent Dialogue for Ontology AlignmentShiyao Zhang, Yuji Dong, Yichuan Zhang, Terry R. Payne, Jie Zhang 0030. 2594-2596 [doi]
- Mutual Information as Intrinsic Reward of Reinforcement Learning Agents for On-demand Ride PoolingXianjie Zhang, Jiahao Sun, Chen Gong 0005, Kai Wang, Yifei Cao, Hao Chen, Yu Liu. 2597-2599 [doi]
- Optimal Diffusion AuctionsYao Zhang, Shanshan Zheng, Dengji Zhao. 2600-2602 [doi]
- Decentralized Competing Bandits in Many-to-One Matching MarketsYirui Zhang, Zhixuan Fang. 2603-2605 [doi]
- Distance-Aware Attentive Framework for Multi-Agent Collaborative Perception in Presence of Pose ErrorBinyu Zhao, Wei Zhang, Zhaonian Zou. 2606-2608 [doi]
- ENOTO: Improving Offline-to-Online Reinforcement Learning with Q-EnsemblesKai Zhao, Jianye Hao, Yi Ma, Jinyi Liu 0001, Yan Zheng, Zhaopeng Meng. 2609-2611 [doi]
- JDRec: Practical Actor-Critic Framework for Online Combinatorial Recommender SystemXin Zhao, Jiaxin Li, Zhiwei Fang, Yuchen Guo, Jinyuan Zhao, Jie He, Wenlong Chen, Changping Peng, Guiguang Ding. 2612-2614 [doi]
- Bootstrapped Policy Learning: Goal Shaping for Efficient Task-oriented Dialogue Policy LearningYangyang Zhao, Mehdi Dastani, Shihan Wang 0001. 2615-2617 [doi]
- Towards Zero Shot Learning in Restless Multi-armed BanditsYunfan Zhao, Nikhil Behari, Edward Hughes 0001, Edwin Zhang, Dheeraj Nagaraj, Karl Tuyls, Aparna Taneja, Milind Tambe. 2618-2620 [doi]
- vMFER: von Mises-Fisher Experience Resampling Based on Uncertainty of Gradient Directions for Policy Improvement of Actor-Critic AlgorithmsYiwen Zhu, Jinyi Liu 0001, Wenya Wei, Qianyi Fu, Yujing Hu, Zhou Fang, Bo An 0001, Jianye Hao, Tangjie Lv, Changjie Fan. 2621-2623 [doi]
- Controlling Delegations in Liquid DemocracyShiri Alouf-Heffetz, Tanmay Inamdar 0002, Pallavi Jain 0001, Nimrod Talmon, Yash More Hiren. 2624-2632 [doi]
- Regret-based Defense in Adversarial Reinforcement LearningRoman Belaire, Pradeep Varakantham, Thanh Hong Nguyen, David Lo 0001. 2633-2640 [doi]
- Fair and Efficient Division of a Discrete Cake with Switching Utility LossZheng Chen, Bo Li, Minming Li, Guochuan Zhang. 2641-2649 [doi]
- MAGNets: Micro-Architectured Group Neural NetworksSumanta Dey, Briti Gangopadhyay, Pallab Dasgupta, Soumyajit Dey. 2650-2658 [doi]
- Budget-feasible Egalitarian Allocation of Conflicting JobsSushmita Gupta, Pallavi Jain 0001, A. Mohanapriya, Vikash Tripathi. 2659-2667 [doi]
- Multi-deal NegotiationTim Baarslag. 2668-2673 [doi]
- Going Beyond Mono-Mission Earth Observation: Using the Multi-Agent Paradigm to Federate Multiple MissionsJean-Loup Farges, Filipo Perotto, Gauthier Picard, Cédric Pralet, Cyrille de Lussy, Jonathan Guerra, Philippe Pavero, Fabrice Planchou. 2674-2678 [doi]
- Empowering BDI Agents with Generalised Decision-MakingRamon Fraga Pereira, Felipe Meneguzzi. 2679-2683 [doi]
- Adaptive Incentive Engineering in Citizen-Centric AIBehrad Koohy, Jan Buermann, Vahid Yazdanpanah, Pamela Briggs, Paul Pschierer-Barnfather, Enrico H. Gerding, Sebastian Stein 0001. 2684-2689 [doi]
- Designing Artificial Reasoners for CommunicationEmiliano Lorini. 2690-2695 [doi]
- Towards Sustainable Human-Agent Teams: A Framework for Understanding Human-Agent Team DynamicsRui Prada, Astrid C. Homan, Gerben A. van Kleef. 2696-2700 [doi]
- Selecting Representative Bodies: An Axiomatic ViewManon Revel, Niclas Boehmer, Rachael Colley, Markus Brill, Piotr Faliszewski, Edith Elkind. 2701-2705 [doi]
- The Cognitive Hourglass: Agent Abstractions in the Large Models EraAlessandro Ricci, Stefano Mariani 0001, Franco Zambonelli, Samuele Burattini, Cristiano Castelfranchi. 2706-2711 [doi]
- Explainable Agents (XAg) by DesignSebastian Rodriguez, John Thangarajah. 2712-2716 [doi]
- Utility-Based Reinforcement Learning: Unifying Single-objective and Multi-objective Reinforcement LearningPeter Vamplew 0001, Cameron Foale, Conor F. Hayes, Patrick Mannion, Enda Howley, Richard Dazeley, Scott Johnson, Johan Källström, Gabriel de Oliveira Ramos, Roxana Radulescu, Willem Röpke, Diederik M. Roijers. 2717-2721 [doi]
- Abstraction in Non-Monotonic ReasoningIosif Apostolakis. 2722-2724 [doi]
- Emergence of Linguistic Conventions In Multi-Agent Systems Through Situated Communicative InteractionsJérôme Botoko Ekila. 2725-2727 [doi]
- Communication and Generalization in Multi-Agent LearningJiaxun Cui. 2728-2730 [doi]
- The Multi-agent System based on LLM for Online DiscussionsYihan Dong. 2731-2733 [doi]
- Negotiation Strategies for Combining Partial Deals in One-To-Many NegotiationsTamara C. P. Florijn. 2734-2736 [doi]
- Scaling up Cooperative Multi-agent Reinforcement Learning SystemsMinghong Geng. 2737-2739 [doi]
- Toward Explainable Agent BehaviourVictor Gimenez-Abalos. 2740-2742 [doi]
- Towards Building Autonomous AI Agents and Robots for Open World EnvironmentsShivam Goel. 2743-2745 [doi]
- Large Learning Agents: Towards Continually Aligned Robots with Scale in RLBram Grooten. 2746-2748 [doi]
- Efficient Continuous Space BeliefMDP Solutions for Navigation and Active SensingHimanshu Gupta. 2749-2751 [doi]
- Building Trustworthy Human-Centric Autonomous Systems Via ExplanationsBalint Gyevnar. 2752-2754 [doi]
- Adaptive Decision-Making in Non-Stationary Markov Decision ProcessesBaiting Luo. 2755-2757 [doi]
- Interactive Control and Decision-Making for Multi-Robots SystemsYiwei Lyu. 2758-2760 [doi]
- Leveraging Interpretable Human Models to Personalize AI Interventions for Behavior ChangeEura Nofshin. 2761-2763 [doi]
- Predicting and Protecting the Cognitive Health of Operators in Isolated, Confined, and Extreme EnvironmentsErin E. Richardson. 2764-2766 [doi]
- Generalizing Objective-Specification in Markov Decision ProcessesPedro P. Santos. 2767-2769 [doi]
- Cooperative Multi-Agent Reinforcement Learning in Convention Reliant EnvironmentsJarrod Shipton. 2773-2775 [doi]
- Formal and Natural Language assisted Curriculum Generation for Reinforcement Learning AgentsYash Shukla. 2776-2778 [doi]
- Distributive and Temporal Fairness in Algorithmic Collective Decision-MakingNicholas Teh. 2779-2781 [doi]
- Bayesian Model-Free Deep Reinforcement LearningPascal R. van der Vaart. 2782-2784 [doi]
- Autonomous Skill Acquisition for Robots Using Graduated LearningGautham Vasan. 2785-2787 [doi]
- Allocating Resources with Imperfect InformationShiji Xing. 2788-2790 [doi]
- Advancing Sample Efficiency and Explainability in Multi-Agent Reinforcement LearningZhicheng Zhang. 2791-2793 [doi]
- EVtonomy: A Personalised Route Planner for Electric VehiclesAlexandry Augustin, Elnaz Shafipour, Sebastian Stein 0001. 2794-2796 [doi]
- End to End Camera only Drone Detection and Tracking Demo within a Multi-agent Framework with a CNN-LSTM Model for Range EstimationMaxence de Rochechouart, Raed Abu Zitar, Amal El Fallah-Seghrouchni, Frédéric Barbaresco. 2797-2799 [doi]
- Imitation Learning Datasets: A Toolkit For Creating Datasets, Training Agents and BenchmarkingNathan Gavenski, Michael Luck, Odinaldo Rodrigues. 2800-2802 [doi]
- A Symbolic Sequential Equilibria Solver for Game Theory ExplorerMoritz Graf, Thorsten Engesser, Bernhard Nebel. 2803-2805 [doi]
- Naphtha Cracking Center Scheduling Optimization using Multi-Agent Reinforcement LearningSunghoon Hong, Deunsol Yoon, Whiyoung Jung, Jinsang Lee, Hyundam Yoo, Jiwon Ham, Suhyun Jung, Chanwoo Moon, Yeontae Jung, Kanghoon Lee, Woohyung Lim, Somin Jeon, Myounggu Lee, Sohui Hong, Jaesang Lee, Hangyoul Jang, Changhyun Kwak, Jeonghyeon Park, Changhoon Kang, Jungki Kim. 2806-2808 [doi]
- Conversational Language Models for Human-in-the-Loop Multi-Robot CoordinationWilliam Hunt, Toby Godfrey, Mohammad Divband Soorati. 2809-2811 [doi]
- STV+KH: Towards Practical Verification of Strategic Ability for Knowledge and Information FlowMateusz Kaminski, Damian Kurpiewski, Wojciech Jamroga. 2812-2814 [doi]
- SMT4SMTL: A Tool for SMT-Based Satisfiability Checking of SMTLArtur Niewiadomski 0001, Maciej Nazarczuk, Mateusz Przychodzki, Magdalena Kacprzak, Wojciech Penczek, Andrzej Zbrzezny. 2815-2817 [doi]
- Engaging the Elderly in Exercise with Agents: A Gamified Stationary Bike System for Sarcopenia ManagementYang Qiu, Ping Chen, Huiguo Zhang, Bo Huang, Di Wang, Zhiqi Shen 0001. 2818-2820 [doi]
- pgeon applied to Overcooked-AI to explain agents' behaviourAdrián Tormos, Victor Gimenez-Abalos, Javier Vázquez-Salceda, Sergio Álvarez-Napagao. 2821-2823 [doi]
- Generating and Choosing Organizations for Multi-Agent SystemsCleber Jorge Amaral, Jomi F. Hübner, Stephen Cranefield. 2824-2826 [doi]
- ⊕: an RDF Graph Synchronization System for Collaborative RoboticsCyrille Berger, Patrick Doherty 0001, Piotr Rudol, Mariusz Wzorek. 2827-2829 [doi]
- A Summary of Online Markov Decision Processes with Non-oblivious Strategic AdversaryLe Cong Dinh, David Henry Mguni, Long Tran-Thanh, Jun Wang, Yaodong Yang. 2830-2832 [doi]
- Extended Abstract of Diffusion Auction Design with Transaction CostsBin Li 0035, Dong Hao, Dengji Zhao. 2833-2835 [doi]
- Toward a Normative Approach for Resilient Multiagent Systems: A SummaryGeeta Mahala, Özgür Kafali, Hoa Khanh Dam, Aditya Ghose, Munindar P. Singh. 2836-2838 [doi]
- Combining Theory of Mind and Abductive Reasoning in Agent-Oriented ProgrammingNieves Montes, Michael Luck, Nardine Osman 0001, Odinaldo Rodrigues, Carles Sierra. 2839-2841 [doi]
- Extended Abstract: Price of Anarchy of Traffic Assignment with Exponential Cost FunctionsJianglin Qiao, Dave De Jonge, Dongmo Zhang, Simeon Simoff, Carles Sierra, Bo Du 0004. 2842-2844 [doi]
- A Survey of Multi-Agent Deep Reinforcement Learning with CommunicationChangxi Zhu, Mehdi Dastani, Shihan Wang 0001. 2845-2847 [doi]