- Learning on Manifolds: Universal Approximations Properties using Geometric Controllability Conditions for Neural ODEs. Karthik Elamvazhuthi, Xuechen Zhang, Samet Oymak, Fabio Pasqualetti. 1-11 [doi]
- Interval Reachability of Nonlinear Dynamical Systems with Neural Network Controllers. Saber Jafarpour, Akash Harapanahalli, Samuel Coogan. 12-25 [doi]
- Physics-Informed Model-Based Reinforcement Learning. Adithya Ramesh, Balaraman Ravindran. 26-37 [doi]
- Learning-to-Learn to Guide Random Search: Derivative-Free Meta Blackbox Optimization on Manifold. Bilgehan Sel, Ahmad Tawaha, Yuhao Ding, Ruoxi Jia, Bo Ji, Javad Lavaei, Ming Jin. 38-50 [doi]
- Can Direct Latent Model Learning Solve Linear Quadratic Gaussian Control? Yi Tian, Kaiqing Zhang, Russ Tedrake, Suvrit Sra. 51-63 [doi]
- Policy Learning for Active Target Tracking over Continuous SE(3) Trajectories. Pengzhi Yang, Shumon Koga, Arash Asgharivaskasi, Nikolay Atanasov. 64-75 [doi]
- Guaranteed Conformance of Neurosymbolic Models to Natural Constraints. Kaustubh Sridhar, Souradeep Dutta, James Weimer, Insup Lee. 76-89 [doi]
- ISAACS: Iterative Soft Adversarial Actor-Critic for Safety. Kai-Chieh Hsu, Duy Phuong Nguyen, Jaime Fernández Fisac. 90-103 [doi]
- Safe and Efficient Reinforcement Learning using Disturbance-Observer-Based Control Barrier Functions. Yikun Cheng, Pan Zhao, Naira Hovakimyan. 104-115 [doi]
- Learning the dynamics of autonomous nonlinear delay systems. Xunbi A. Ji, Gábor Orosz. 116-127 [doi]
- Improving Gradient Computation for Differentiable Physics Simulation with Contacts. Yaofeng Desmond Zhong, Jiequn Han, Biswadip Dey, Georgia Olympia Brikis. 128-141 [doi]
- Learning Trust Over Directed Graphs in Multiagent Systems. Orhan Eren Akgün, Arif Kerem Dayi, Stephanie Gil, Angelia Nedich. 142-154 [doi]
- Contrastive Example-Based Control. Kyle Beltran Hatch, Benjamin Eysenbach, Rafael Rafailov, Tianhe Yu, Ruslan Salakhutdinov, Sergey Levine, Chelsea Finn. 155-169 [doi]
- DiffTune+: Hyperparameter-Free Auto-Tuning using Auto-Differentiation. Sheng Cheng, Lin Song, Minkyung Kim, Shenlong Wang, Naira Hovakimyan. 170-183 [doi]
- Policy Gradient Play with Networked Agents in Markov Potential Games. Sarper Aydin, Ceyhun Eksin. 184-195 [doi]
- Sample Complexity Bound for Evaluating the Robust Observer's Performance under Coprime Factors Uncertainty. Serban Sabau, Yifei Zhang, Sourav Kumar Ukil. 196-207 [doi]
- Learning Robust State Observers using Neural ODEs. Keyan Miao, Konstantinos Gatsis. 208-219 [doi]
- End-to-End Learning to Warm-Start for Real-Time Quadratic Optimization. Rajiv Sambharya, Georgina Hall, Brandon Amos, Bartolomeo Stellato. 220-234 [doi]
- Full Gradient Deep Reinforcement Learning for Average-Reward Criterion. Tejas Pagare, Vivek S. Borkar, Konstantin Avrachenkov. 235-247 [doi]
- Regret Analysis of Online LQR Control via Trajectory Prediction and Tracking. Yitian Chen, Timothy L. Molloy, Tyler H. Summers, Iman Shames. 248-258 [doi]
- Learning Policy-Aware Models for Model-Based Reinforcement Learning via Transition Occupancy Matching. Yecheng Jason Ma, Kausik Sivakumar, Jason Yan, Osbert Bastani, Dinesh Jayaraman. 259-271 [doi]
- Compositional Neural Certificates for Networked Dynamical Systems. Songyuan Zhang, Yumeng Xiu, Guannan Qu, Chuchu Fan. 272-285 [doi]
- In-Distribution Barrier Functions: Self-Supervised Policy Filters that Avoid Out-of-Distribution States. Fernando Castañeda, Haruki Nishimura, Rowan Thomas McAllister, Koushil Sreenath, Adrien Gaidon. 286-299 [doi]
- Adaptive Conformal Prediction for Motion Planning among Dynamic Agents. Anushri Dixit, Lars Lindemann, Skylar X. Wei, Matthew Cleaveland, George J. Pappas, Joel W. Burdick. 300-314 [doi]
- Provably Efficient Generalized Lagrangian Policy Optimization for Safe Multi-Agent Reinforcement Learning. Dongsheng Ding, Xiaohan Wei, Zhuoran Yang, Zhaoran Wang, Mihailo R. Jovanovic. 315-332 [doi]
- Equilibria of Fully Decentralized Learning in Networked Systems. Yan Jiang, Wenqi Cui, Baosen Zhang, Jorge Cortés. 333-345 [doi]
- Operator Learning for Nonlinear Adaptive Control. Luke Bhan, Yuanyuan Shi, Miroslav Krstic. 346-357 [doi]
- A Generalizable Physics-informed Learning Framework for Risk Probability Estimation. Zhuoyuan Wang, Yorie Nakahira. 358-370 [doi]
- Efficient Reinforcement Learning Through Trajectory Generation. Wenqi Cui, Linbin Huang, Weiwei Yang, Baosen Zhang. 371-382 [doi]
- Concentration Phenomenon for Random Dynamical Systems: An Operator Theoretic Approach. Muhammad Abdullah Naeem. 383-394 [doi]
- Modified Policy Iteration for Exponential Cost Risk Sensitive MDPs. Yashaswini Murthy, Mehrdad Moharrami, R. Srikant. 395-406 [doi]
- Automated Reachability Analysis of Neural Network-Controlled Systems via Adaptive Polytopes. Taha Entesari, Mahyar Fazlyab. 407-419 [doi]
- Designing System Level Synthesis Controllers for Nonlinear Systems with Stability Guarantees. Lauren E. Conger, Sydney Vernon, Eric Mazumdar. 420-430 [doi]
- Targeted Adversarial Attacks against Neural Network Trajectory Predictors. Kaiyuan Tan, Jun Wang, Yiannis Kantaros. 431-444 [doi]
- Can Learning Deteriorate Control? Analyzing Computational Delays in Gaussian Process-Based Event-Triggered Online Learning. Xiaobing Dai, Armin Lederer, Zewen Yang, Sandra Hirche. 445-457 [doi]
- Probabilistic Invariance for Gaussian Process State Space Models. Paul Griffioen, Alex Devonport, Murat Arcak. 458-468 [doi]
- Compositional Learning-based Planning for Vision POMDPs. Sampada Deglurkar, Michael H. Lim, Johnathan Tucker, Zachary N. Sunberg, Aleksandra Faust, Claire J. Tomlin. 469-482 [doi]
- Certified Invertibility in Neural Networks via Mixed-Integer Programming. Tianqi Cui, Thomas Bertalan, George J. Pappas, Manfred Morari, Yannis G. Kevrekidis, Mahyar Fazlyab. 483-496 [doi]
- The Impact of the Geometric Properties of the Constraint Set in Safe Optimization with Bandit Feedback. Spencer Hutchinson, Berkay Turan, Mahnoosh Alizadeh. 497-508 [doi]
- Template-Based Piecewise Affine Regression. Guillaume O. Berger, Sriram Sankaranarayanan. 509-520 [doi]
- Physics-enhanced Gaussian Process Variational Autoencoder. Thomas Beckers, Qirui Wu, George J. Pappas. 521-533 [doi]
- A Reinforcement Learning Look at Risk-Sensitive Linear Quadratic Gaussian Control. Leilei Cui, Tamer Basar, Zhong-Ping Jiang. 534-546 [doi]
- Time-Incremental Learning of Temporal Logic Classifiers Using Decision Trees. Erfan Aasi, Mingyu Cai, Cristian Ioan Vasile, Calin Belta. 547-559 [doi]
- Adaptive Regret for Control of Time-Varying Dynamics. Paula Gradu, Elad Hazan, Edgar Minasyan. 560-572 [doi]
- Automatic Integration for Fast and Interpretable Neural Point Processes. Zihao Zhou, Rose Yu. 573-585 [doi]
- Multi-Task Imitation Learning for Linear Dynamical Systems. Thomas T. C. K. Zhang, Katie Kang, Bruce D. Lee, Claire J. Tomlin, Sergey Levine, Stephen Tu, Nikolai Matni. 586-599 [doi]
- Accelerating Trajectory Generation for Quadrotors Using Transformers. Srinath Tankasala, Mitch Pryor. 600-611 [doi]
- A finite-sample analysis of multi-step temporal difference estimates. Yaqi Duan, Martin J. Wainwright. 612-624 [doi]
- Practical Critic Gradient based Actor Critic for On-Policy Reinforcement Learning. Swaminathan Gurumurthy, Zachary Manchester, J. Zico Kolter. 625-638 [doi]
- Deep Off-Policy Iterative Learning Control. Swaminathan Gurumurthy, J. Zico Kolter, Zachary Manchester. 639-652 [doi]
- Transportation-Inequalities, Lyapunov Stability and Sampling for Dynamical Systems on Continuous State Space. Muhammad Abdullah Naeem, Miroslav Pajic. 653-664 [doi]
- Learning Disturbances Online for Risk-Aware Control: Risk-Aware Flight with Less Than One Minute of Data. Prithvi Akella, Skylar X. Wei, Joel W. Burdick, Aaron D. Ames. 665-678 [doi]
- Compositional Learning of Dynamical System Models Using Port-Hamiltonian Neural Networks. Cyrus Neary, Ufuk Topcu. 679-691 [doi]
- Multi-Agent Reinforcement Learning with Reward Delays. Yuyang Zhang, Runyu Zhang, Yuantao Gu, Na Li. 692-704 [doi]
- CatlNet: Learning Communication and Coordination Policies from CaTL+ Specifications. Wenliang Liu, Kevin Leahy, Zachary Serlin, Calin Belta. 705-717 [doi]
- Roll-Drop: accounting for observation noise with a single parameter. Luigi Campanaro, Daniele De Martini, Siddhant Gangapurwala, Wolfgang Merkt, Ioannis Havoutis. 718-730 [doi]
- Lie Group Forced Variational Integrator Networks for Learning and Control of Robot Systems. Valentin Duruisseaux, Thai P. Duong, Melvin Leok, Nikolay Atanasov. 731-744 [doi]
- Learning Object-Centric Dynamic Modes from Video and Emerging Properties. Armand Comas Massague, Christian Fernandez Lopez, Sandesh Ghimire, Haolin Li, Mario Sznaier, Octavia I. Camps. 745-769 [doi]
- Continuous Versatile Jumping Using Learned Action Residuals. Yuxiang Yang, Xiangyun Meng, Wenhao Yu, Tingnan Zhang, Jie Tan, Byron Boots. 770-782 [doi]
- Probabilistic Safeguard for Reinforcement Learning Using Safety Index Guided Gaussian Process Models. Weiye Zhao, Tairan He, Changliu Liu. 783-796 [doi]
- Hierarchical Policy Blending As Optimal Transport. An T. Le, Kay Hansel, Jan Peters, Georgia Chalvatzaki. 797-812 [doi]
- Top-k data selection via distributed sample quantile inference. Xu Zhang, Marcos M. Vasconcelos. 813-824 [doi]
- Model-based Validation as Probabilistic Inference. Harrison Delecki, Anthony Corso, Mykel J. Kochenderfer. 825-837 [doi]
- Nonlinear Controllability and Function Representation by Neural Stochastic Differential Equations. Tanya Veeravalli, Maxim Raginsky. 838-850 [doi]
- Agile Catching with Whole-Body MPC and Blackbox Policy Learning. Saminda Abeyruwan, Alex Bewley, Nicholas Matthew Boffi, Krzysztof Marcin Choromanski, David B. D'Ambrosio, Deepali Jain, Pannag R. Sanketi, Anish Shankar, Vikas Sindhwani, Sumeet Singh, Jean-Jacques E. Slotine, Stephen Tu. 851-863 [doi]
- Distributionally Robust Lyapunov Function Search Under Uncertainty. Kehan Long, Yinzhuang Yi, Jorge Cortés, Nikolay Atanasov. 864-877 [doi]
- Black-Box vs. Gray-Box: A Case Study on Learning Table Tennis Ball Trajectory Prediction with Spin and Impacts. Jan Achterhold, Philip Tobuschat, Hao Ma, Dieter Büchler, Michael Muehlebach, Joerg Stueckler. 878-890 [doi]
- Data-driven memory-dependent abstractions of dynamical systems. Adrien Banse, Licio Romao, Alessandro Abate, Raphaël M. Jungers. 891-902 [doi]
- Congestion Control of Vehicle Traffic Networks by Learning Structural and Temporal Patterns. SooJean Han, Soon-Jo Chung, Johanna Gustafson. 903-914 [doi]
- A Learning and Control Perspective for Microfinance. Xiyu Deng, Christian Kurniawan, Adhiraj Chakraborty, Assane Gueye, Niangjun Chen, Yorie Nakahira. 915-927 [doi]
- Physics-Guided Active Learning of Environmental Flow Fields. Reza Khodayi-mehr, Pingcheng Jian, Michael M. Zavlanos. 928-940 [doi]
- CT-DQN: Control-Tutored Deep Reinforcement Learning. Francesco De Lellis, Marco Coraggio, Giovanni Russo, Mirco Musolesi, Mario di Bernardo. 941-953 [doi]
- Failing with Grace: Learning Neural Network Controllers that are Boundedly Unsafe. Panagiotis Vlantis, Leila Bridgeman, Michael M. Zavlanos. 954-965 [doi]
- Probabilistic Verification of ReLU Neural Networks via Characteristic Functions. Joshua Pilipovsky, Vignesh Sivaramakrishnan, Meeko Oishi, Panagiotis Tsiotras. 966-979 [doi]
- Data-driven Stochastic Output-Feedback Predictive Control: Recursive Feasibility through Interpolated Initial Conditions. Guanru Pan, Ruchuan Ou, Timm Faulwasser. 980-992 [doi]
- Detection of Man-in-the-Middle Attacks in Model-Free Reinforcement Learning. Rishi Rani, Massimo Franceschetti. 993-1007 [doi]
- On Controller Reduction in Linear Quadratic Gaussian Control with Performance Bounds. Zhaolin Ren, Yang Zheng, Maryam Fazel, Na Li. 1008-1019 [doi]
- Competing Bandits in Time Varying Matching Markets. Deepan Muthirayan, Chinmay Maheshwari, Pramod P. Khargonekar, Shankar S. Sastry. 1020-1031 [doi]
- Regret Guarantees for Online Deep Control. Xinyi Chen, Edgar Minasyan, Jason D. Lee, Elad Hazan. 1032-1045 [doi]
- Frequency Domain Gaussian Process Models for H∞ Uncertainties. Alex Devonport, Peter Seiler, Murat Arcak. 1046-1057 [doi]
- Satellite Navigation and Coordination with Limited Information Sharing. Sydney Dolan, Siddharth Nayak, Hamsa Balakrishnan. 1058-1071 [doi]
- Toward Multi-Agent Reinforcement Learning for Distributed Event-Triggered Control. Lukas Kesper, Sebastian Trimpe, Dominik Baumann. 1072-1085 [doi]
- Analysis and Detectability of Offline Data Poisoning Attacks on Linear Dynamical Systems. Alessio Russo. 1086-1098 [doi]
- Learning Stability Attention in Vision-based End-to-end Driving Policies. Tsun-Hsuan Wang, Wei Xiao, Makram Chahine, Alexander Amini, Ramin M. Hasani, Daniela Rus. 1099-1111 [doi]
- Provably Efficient Model-free RL in Leader-Follower MDP with Linear Function Approximation. Arnob Ghosh. 1112-1124 [doi]
- Learning-enhanced Nonlinear Model Predictive Control using Knowledge-based Neural Ordinary Differential Equations and Deep Ensembles. Kong Yao Chee, M. Ani Hsieh, Nikolai Matni. 1125-1137 [doi]
- Online switching control with stability and regret guarantees. Yingying Li, James A. Preiss, Na Li, Yiheng Lin, Adam Wierman, Jeff S. Shamma. 1138-1151 [doi]
- CLAS: Coordinating Multi-Robot Manipulation with Central Latent Action Spaces. Elie Aljalbout, Maximilian Karl, Patrick van der Smagt. 1152-1166 [doi]
- Learning Coherent Clusters in Weakly-Connected Network Systems. Hancheng Min, Enrique Mallada. 1167-1179 [doi]
- Predictive safety filter using system level synthesis. Antoine Leeman, Johannes Köhler, Samir Bennani, Melanie N. Zeilinger. 1180-1192 [doi]
- Time Dependent Inverse Optimal Control using Trigonometric Basis Functions. Rahel Rickenbach, Elena Arcari, Melanie N. Zeilinger. 1193-1204 [doi]
- Interpreting Primal-Dual Algorithms for Constrained Multiagent Reinforcement Learning. Daniel Tabas, Ahmed S. Zamzam, Baosen Zhang. 1205-1217 [doi]
- Learning Locomotion Skills from MPC in Sensor Space. Majid Khadiv, Avadesh Meduri, Huaijiang Zhu, Ludovic Righetti, Bernhard Schölkopf. 1218-1230 [doi]
- Probabilistic Symmetry for Multi-Agent Dynamics. Sophia Huiwen Sun, Robin Walters, Jinxi Li, Rose Yu. 1231-1244 [doi]
- Policy Evaluation in Distributional LQR. Zifan Wang, Yulong Gao, Siyi Wang, Michael M. Zavlanos, Alessandro Abate, Karl Henrik Johansson. 1245-1256 [doi]
- Reachability Analysis-based Safety-Critical Control using Online Fixed-Time Reinforcement Learning. Nick-Marios T. Kokolakis, Kyriakos G. Vamvoudakis, Wassim M. Haddad. 1257-1270 [doi]
- Online Estimation of the Koopman Operator Using Fourier Features. Tahiya Salam, Alice Kate Li, M. Ani Hsieh. 1271-1283 [doi]
- Hybrid Multi-agent Deep Reinforcement Learning for Autonomous Mobility on Demand Systems. Tobias Enders, James Harrison, Marco Pavone, Maximilian Schiffer. 1284-1296 [doi]
- Model-Based Reinforcement Learning for Cavity Filter Tuning. Doumitrou Daniil Nimara, Mohammadreza Malek-Mohammadi, Petter Ögren, Jieqiang Wei, Vincent Huang. 1297-1307 [doi]
- FedSysID: A Federated Approach to Sample-Efficient System Identification. Han Wang, Leonardo Felipe Toso, James Anderson. 1308-1320 [doi]
- Lipschitz constant estimation for 1D convolutional neural networks. Patricia Pauli, Dennis Gramlich, Frank Allgöwer. 1321-1332 [doi]
- Rectified Pessimistic-Optimistic Learning for Stochastic Continuum-armed Bandit with Constraints. Hengquan Guo, Zhu Qi, Xin Liu. 1333-1344 [doi]
- Best of Both Worlds in Online Control: Competitive Ratio and Policy Regret. Gautam Goel, Naman Agarwal, Karan Singh, Elad Hazan. 1345-1356 [doi]
- Offline Model-Based Reinforcement Learning for Tokamak Control. Ian Char, Joseph Abbate, Laszlo Bardoczi, Mark D. Boyer, Youngseog Chung, Rory Conlin, Keith Erickson, Viraj Mehta, Nathan Richner, Egemen Kolemen, Jeff G. Schneider. 1357-1372 [doi]
- A Dynamical Systems Perspective on Discrete Optimization. Tong Guanchun, Michael Muehlebach. 1373-1386 [doi]
- Linear Stochastic Bandits over a Bit-Constrained Channel. Aritra Mitra, Hamed Hassani, George J. Pappas. 1387-1399 [doi]
- Hybrid Systems Neural Control with Region-of-Attraction Planner. Yue Meng, Chuchu Fan. 1400-1415 [doi]
- Online Saddle Point Tracking with Decision-Dependent Data. Killian Reed Wood, Emiliano Dall'Anese. 1416-1428 [doi]
- Wing shape estimation with Extended Kalman filtering and KalmanNet neural network of a flexible wing aircraft. Bence Zsombor Hadlaczky, Noémi Friedman, Béla Takarics, Bálint Vanek. 1429-1440 [doi]
- Filter-Aware Model-Predictive Control. Baris Kayalibay, Atanas Mirchev, Ahmed Agha, Patrick van der Smagt, Justin Bayer. 1441-1454 [doi]
- Hyperparameter Tuning of an Off-Policy Reinforcement Learning Algorithm for H∞ Tracking Control. Alireza Farahmandi, Brian C. Reitz, Mark J. Debord, Douglas Philbrick, Katia Estabridis, Gary A. Hewer. 1455-1466 [doi]
- DLKoopman: A deep learning software package for Koopman theory. Sourya Dey, Eric William Davis. 1467-1479 [doi]
- Benchmarking Rigid Body Contact Models. Michelle Guo, Yifeng Jiang, Andrew Everett Spielberg, Jiajun Wu, Karen Liu. 1480-1492 [doi]
- Model Predictive Control via On-Policy Imitation Learning. Kwangjun Ahn, Zakaria Mhammedi, Horia Mania, Zhang-Wei Hong, Ali Jadbabaie. 1493-1505 [doi]