- Analysis of Langevin Monte Carlo from Poincare to Log-Sobolev. Sinho Chewi, Murat A. Erdogdu, Mufan (Bill) Li, Ruoqi Shen, Shunshi Zhang 0001. 1-2 [doi]
- Optimization-Based Separations for Neural Networks. Itay Safran, Jason D. Lee. 3-64 [doi]
- Mirror Descent Strikes Again: Optimal Stochastic Convex Optimization under Infinite Noise Variance. Nuri Mert Vural, Lu Yu, Krishnakumar Balasubramanian, Stanislav Volgushev, Murat A. Erdogdu. 65-102 [doi]
- Wasserstein GANs with Gradient Penalty Compute Congested Transport. Tristan Milne, Adrian I. Nachman. 103-129 [doi]
- Robust Estimation for Random Graphs. Jayadev Acharya, Ayush Jain, Gautam Kamath 0001, Ananda Theertha Suresh, Huanyu Zhang. 130-166 [doi]
- Tight query complexity bounds for learning graph partitions. Xizhi Liu, Sayan Mukherjee. 167-181 [doi]
- Pushing the Efficiency-Regret Pareto Frontier for Online Learning of Portfolios and Quantum States. Julian Zimmert, Naman Agarwal, Satyen Kale. 182-226 [doi]
- Risk bounds for aggregated shallow neural networks using Gaussian priors. Laura Tinsi, Arnak S. Dalalyan. 227-253 [doi]
- On the Benefits of Large Learning Rates for Kernel Methods. Gaspard Beugnot, Julien Mairal, Alessandro Rudi. 254-282 [doi]
- Near-Optimal Statistical Query Lower Bounds for Agnostically Learning Intersections of Halfspaces with Gaussian Marginals. Daniel J. Hsu, Clayton Hendrick Sanford, Rocco A. Servedio, Emmanouil-Vasileios Vlatakis-Gkaragkounis. 283-312 [doi]
- The Power of Adaptivity in SGD: Self-Tuning Step Sizes with Unbounded Gradients and Affine Variance. Matthew Faw, Isidoros Tziotis, Constantine Caramanis, Aryan Mokhtari, Sanjay Shakkottai, Rachel Ward. 313-355 [doi]
- Optimal Mean Estimation without a Variance. Yeshwanth Cherapanamjeri, Nilesh Tripuraneni, Peter L. Bartlett, Michael I. Jordan. 356-357 [doi]
- Beyond No Regret: Instance-Dependent PAC Reinforcement Learning. Andrew J. Wagenmaker, Max Simchowitz, Kevin Jamieson 0001. 358-418 [doi]
- Learning Low Degree Hypergraphs. Eric Balkanski, Oussama Hanguir, Shatian Wang. 419-420 [doi]
- Depth and Feature Learning are Provably Beneficial for Neural Network Discriminators. Carles Domingo-Enrich. 421-447 [doi]
- The Implicit Bias of Benign Overfitting. Ohad Shamir. 448-478 [doi]
- Universal Online Learning with Bounded Loss: Reduction to Binary Classification. Moïse Blanchard, Romain Cosson. 479-495 [doi]
- Negative curvature obstructs acceleration for strongly geodesically convex optimization, even with exact first-order oracles. Christopher Criscitiello, Nicolas Boumal. 496-542 [doi]
- Multi-Agent Learning for Iterative Dominance Elimination: Formal Barriers and New Algorithms. Jibang Wu, Haifeng Xu, Fan Yao. 543 [doi]
- A Private and Computationally-Efficient Estimator for Unbounded Gaussians. Gautam Kamath 0001, Argyris Mouzakis, Vikrant Singhal, Thomas Steinke 0002, Jonathan R. Ullman. 544-572 [doi]
- The Price of Tolerance in Distribution Testing. Clément L. Canonne, Ayush Jain, Gautam Kamath 0001, Jerry Li 0001. 573-624 [doi]
- A bounded-noise mechanism for differential privacy. Yuval Dagan, Gil Kur. 625-661 [doi]
- Learning with metric losses. Dan Tsir Cohen, Aryeh Kontorovich. 662-700 [doi]
- Rate of Convergence of Polynomial Networks to Gaussian Processes. Adam Klukowski. 701-722 [doi]
- Private Robust Estimation by Stabilizing Convex Relaxations. Pravesh Kothari, Pasin Manurangsi, Ameya Velingker. 723-777 [doi]
- Stochastic Variance Reduction for Variational Inequality Methods. Ahmet Alacaoglu, Yura Malitsky. 778-816 [doi]
- Self-Consistency of the Fokker Planck Equation. Zebang Shen, Zhenfu Wang, Satyen Kale, Alejandro Ribeiro, Amin Karbasi, Hamed Hassani. 817-841 [doi]
- Monotone Learning. Olivier Bousquet, Amit Daniely, Haim Kaplan, Yishay Mansour, Shay Moran, Uri Stemmer. 842-866 [doi]
- Chasing Convex Bodies and Functions with Black-Box Advice. Nicolas Christianson, Tinashe Handina, Adam Wierman. 867-908 [doi]
- ROOT-SGD: Sharp Nonasymptotics and Asymptotic Efficiency in a Single Algorithm. Chris Junchi Li, Wenlong Mou, Martin J. Wainwright, Michael I. Jordan. 909-981 [doi]
- Policy Optimization for Stochastic Shortest Path. Liyu Chen, Haipeng Luo, Aviv Rosenberg 0002. 982-1046 [doi]
- Optimal SQ Lower Bounds for Learning Halfspaces with Massart Noise. Rajai Nasser, Stefan Tiegel. 1047-1074 [doi]
- Private and polynomial time algorithms for learning Gaussians and beyond. Hassan Ashtiani, Christopher Liaw. 1075-1076 [doi]
- Universal Online Learning: an Optimistically Universal Learning Rule. Moïse Blanchard. 1077-1125 [doi]
- (Nearly) Optimal Private Linear Regression for Sub-Gaussian Data via Adaptive Clipping. Prateek Varshney, Abhradeep Thakurta, Prateek Jain 0002. 1126-1166 [doi]
- Differential privacy and robust statistics in high dimensions. Xiyang Liu, Weihao Kong, Sewoong Oh. 1167-1246 [doi]
- Lattice-Based Methods Surpass Sum-of-Squares in Clustering. Ilias Zadik, Min Jae Song, Alexander S. Wein, Joan Bruna. 1247-1248 [doi]
- Width is Less Important than Depth in ReLU Neural Networks. Gal Vardi, Gilad Yehudai, Ohad Shamir. 1249-1281 [doi]
- Computational-Statistical Gap in Reinforcement Learning. Daniel Kane, Sihan Liu, Shachar Lovett, Gaurav Mahajan. 1282-1302 [doi]
- Trace norm regularization for multi-task learning with scarce data. Etienne Boursier, Mikhail Konobeev, Nicolas Flammarion. 1303-1327 [doi]
- The Role of Interactivity in Structured Estimation. Jayadev Acharya, Clément L. Canonne, Himanshu Tyagi, Ziteng Sun. 1328-1355 [doi]
- Dimension-free convergence rates for gradient Langevin dynamics in RKHS. Boris Muzellec, Kanji Sato, Mathurin Massias, Taiji Suzuki. 1356-1420 [doi]
- Adversarially Robust Multi-Armed Bandit Algorithm with Variance-Dependent Regret Bounds. Shinji Ito, Taira Tsuchiya, Junya Honda. 1421-1422 [doi]
- A Sharp Memory-Regret Trade-off for Multi-Pass Streaming Bandits. Arpit Agarwal, Sanjeev Khanna, Prathamesh Patil. 1423-1462 [doi]
- Approximate Cluster Recovery from Noisy Labels. Buddhima Gamlath, Silvio Lattanzi, Ashkan Norouzi-Fard, Ola Svensson. 1463-1509 [doi]
- An Efficient Minimax Optimal Estimator For Multivariate Convex Regression. Gil Kur, Eli Putterman. 1510-1546 [doi]
- Minimax Regret for Partial Monitoring: Infinite Outcomes and Rustichini's Regret. Tor Lattimore. 1547-1575 [doi]
- Adaptive Bandit Convex Optimization with Heterogeneous Curvature. Haipeng Luo, Mengxiao Zhang, Peng Zhao 0006. 1576-1612 [doi]
- Statistical Estimation and Online Inference via Local SGD. Xiang Li 0050, Jiadong Liang, Xiangyu Chang, Zhihua Zhang. 1613-1661 [doi]
- Community Recovery in the Degree-Heterogeneous Stochastic Block Model. Vincent Cohen-Addad, Frederik Mallmann-Trenn, David Saulpic. 1662-1692 [doi]
- Strong Gaussian Approximation for the Sum of Random Vectors. Nazar Buzun, Nikolay Shvetsov, Dmitry V. Dylov. 1693-1715 [doi]
- Smoothed Online Learning is as Easy as Statistical Learning. Adam Block, Yuval Dagan, Noah Golowich, Alexander Rakhlin. 1716-1786 [doi]
- Gardner formula for Ising perceptron models at small densities. Erwin Bolthausen, Shuta Nakajima, Nike Sun, Changji Xu. 1787-1911 [doi]
- Derivatives and residual distribution of regularized M-estimators with application to adaptive tuning. Pierre C. Bellec, Yiwei Shen. 1912-1947 [doi]
- Private Convex Optimization via Exponential Mechanism. Sivakanth Gopi, Yin Tat Lee, Daogao Liu. 1948-1989 [doi]
- Towards Optimal Algorithms for Multi-Player Bandits without Collision Sensing Information. Wei Huang, Richard Combes, Cindy Trinh. 1990-2012 [doi]
- Generalization Bounds for Data-Driven Numerical Linear Algebra. Peter L. Bartlett, Piotr Indyk, Tal Wagner. 2013-2040 [doi]
- The query complexity of sampling from strongly log-concave distributions in one dimension. Sinho Chewi, Patrik R. Gerber, Chen Lu, Thibaut Le Gouic, Philippe Rigollet. 2041-2059 [doi]
- Optimal and instance-dependent guarantees for Markovian linear stochastic approximation. Wenlong Mou, Ashwin Pananjady, Martin J. Wainwright, Peter L. Bartlett. 2060-2061 [doi]
- Accelerated SGD for Non-Strongly-Convex Least Squares. Aditya Varre, Nicolas Flammarion. 2062-2126 [doi]
- Label noise (stochastic) gradient descent implicitly solves the Lasso for quadratic parametrisation. Loucas Pillaud-Vivien, Julien Reygner, Nicolas Flammarion. 2127-2159 [doi]
- Tracking Most Significant Arm Switches in Bandits. Joe Suk, Samory Kpotufe. 2160-2182 [doi]
- Exact Community Recovery in Correlated Stochastic Block Models. Julia Gaudio, Miklós Z. Rácz, Anirudh Sridhar. 2183-2241 [doi]
- Mean-field nonparametric estimation of interacting particle systems. Rentian Yao, Xiaohui Chen, Yun Yang. 2242-2275 [doi]
- Inductive Bias of Multi-Channel Linear Convolutional Networks with Bounded Weight Norm. Meena Jagadeesan, Ilya P. Razenshteyn, Suriya Gunasekar. 2276-2325 [doi]
- New Projection-free Algorithms for Online Convex Optimization with Adaptive Regret Guarantees. Dan Garber, Ben Kretzu. 2326-2359 [doi]
- Making SGD Parameter-Free. Yair Carmon, Oliver Hinder. 2360-2389 [doi]
- Efficient Convex Optimization Requires Superlinear Memory. Annie Marsden, Vatsal Sharan, Aaron Sidford, Gregory Valiant. 2390-2430 [doi]
- Big-Step-Little-Step: Efficient Gradient Methods for Objectives with Multiple Scales. Jonathan A. Kelner, Annie Marsden, Vatsal Sharan, Aaron Sidford, Gregory Valiant, Honglin Yuan. 2431-2540 [doi]
- Toward Instance-Optimal State Certification With Incoherent Measurements. Sitan Chen, Jerry Li 0001, Ryan O'Donnell. 2541-2596 [doi]
- EM's Convergence in Gaussian Latent Tree Models. Yuval Dagan, Anthimos Vardis Kandiros, Constantinos Daskalakis. 2597-2667 [doi]
- Benign Overfitting without Linearity: Neural Network Classifiers Trained by Gradient Descent for Noisy Linear Data. Spencer Frei, Niladri S. Chatterji, Peter L. Bartlett. 2668-2703 [doi]
- Minimax Regret Optimization for Robust Machine Learning under Distribution Shift. Alekh Agarwal, Tong Zhang. 2704-2729 [doi]
- Offline Reinforcement Learning with Realizability and Single-policy Concentrability. Wenhao Zhan, Baihe Huang, Audrey Huang, Nan Jiang, Jason D. Lee. 2730-2775 [doi]
- Non-Linear Reinforcement Learning in Large Action Spaces: Structural Conditions and Sample-efficiency of Posterior Sampling. Alekh Agarwal, Tong Zhang. 2776-2814 [doi]
- Learning GMMs with Nearly Optimal Robustness Guarantees. Allen Liu, Ankur Moitra. 2815-2895 [doi]
- Towards a Theory of Non-Log-Concave Sampling: First-Order Stationarity Guarantees for Langevin Monte Carlo. Krishna Balasubramanian, Sinho Chewi, Murat A. Erdogdu, Adil Salim, Shunshi Zhang 0001. 2896-2923 [doi]
- Understanding Riemannian Acceleration via a Proximal Extragradient Framework. Jikai Jin, Suvrit Sra. 2924-2962 [doi]
- On Almost Sure Convergence Rates of Stochastic Gradient Methods. Jun Liu 0015, Ye Yuan 0002. 2963-2983 [doi]
- Improved analysis for a proximal algorithm for sampling. Yongxin Chen, Sinho Chewi, Adil Salim, Andre Wibisono. 2984-3014 [doi]
- Realizable Learning is All You Need. Max Hopkins, Daniel M. Kane, Shachar Lovett, Gaurav Mahajan. 3015-3069 [doi]
- Streaming Algorithms for Ellipsoidal Approximation of Convex Polytopes. Yury Makarychev, Naren Sarayu Manoj, Max Ovsiankin. 3070-3093 [doi]
- The Pareto Frontier of Instance-Dependent Guarantees in Multi-Player Multi-Armed Bandits with no Communication. Allen Liu, Mark Sellke. 3094 [doi]
- Minimax Regret on Patterns Using Kullback-Leibler Divergence Covering. Jennifer Tang. 3095-3112 [doi]
- Sharp Constants in Uniformity Testing via the Huber Statistic. Shivam Gupta 0002, Eric Price 0001. 3113-3192 [doi]
- Low-Degree Multicalibration. Parikshit Gopalan, Michael P. Kim, Mihir Singhal, Shengjia Zhao. 3193-3234 [doi]
- Thompson Sampling Achieves $\tilde{O}(\sqrt{T})$ Regret in Linear Quadratic Control. Taylan Kargin, Sahin Lale, Kamyar Azizzadenesheli, Animashree Anandkumar, Babak Hassibi. 3235-3284 [doi]
- Return of the bias: Almost minimax optimal high probability bounds for adversarial linear bandits. Julian Zimmert, Tor Lattimore. 3285-3312 [doi]
- Uniform Stability for First-Order Empirical Risk Minimization. Amit Attia, Tomer Koren. 3313-3332 [doi]
- Single Trajectory Nonparametric Learning of Nonlinear Dynamics. Ingvar M. Ziemann, Henrik Sandberg, Nikolai Matni. 3333-3364 [doi]
- On characterizations of learnability with computable learners. Tom F. Sterkenburg. 3365-3379 [doi]
- Stability vs Implicit Bias of Gradient Methods on Separable Data and Beyond. Matan Schliserman, Tomer Koren. 3380-3394 [doi]
- Near optimal efficient decoding from pooled data. Max Hahn-Klimroth, Noela Müller. 3395-3409 [doi]
- Kernel interpolation in Sobolev spaces is not consistent in low dimensions. Simon Buchholz. 3410-3440 [doi]
- Random Graph Matching in Geometric Models: the Case of Complete Graphs. Haoyu Wang, Yihong Wu, Jiaming Xu, Israel Yolou. 3441-3488 [doi]
- Offline Reinforcement Learning: Fundamental Barriers for Value Function Approximation. Dylan J. Foster, Akshay Krishnamurthy, David Simchi-Levi, Yunzong Xu. 3489 [doi]
- Improved Parallel Algorithm for Minimum Cost Submodular Cover Problem. Yingli Ran, Zhao Zhang 0002, ShaoJie Tang. 3490-3502 [doi]
- The Dynamics of Riemannian Robbins-Monro Algorithms. Mohammad Reza Karimi, Ya-Ping Hsieh, Panayotis Mertikopoulos, Andreas Krause 0001. 3503 [doi]
- Corruption-Robust Contextual Search through Density Updates. Renato Paes Leme, Chara Podimata, Jon Schneider. 3504-3505 [doi]
- On The Memory Complexity of Uniformity Testing. Tomer Berg, Or Ordentlich, Ofer Shayevitz. 3506-3523 [doi]
- Generalization Bounds via Convex Analysis. Gábor Lugosi, Gergely Neu. 3524-3546 [doi]
- Private Matrix Approximation and Geometry of Unitary Orbits. Oren Mangoubi, Yikai Wu, Satyen Kale, Abhradeep Thakurta, Nisheeth K. Vishnoi. 3547-3588 [doi]
- Efficient Online Linear Control with Stochastic Convex Costs and Unknown Dynamics. Asaf B. Cassel, Alon Cohen, Tomer Koren. 3589-3604 [doi]
- Two-Sided Weak Submodularity for Matroid Constrained Optimization and Regression. Theophile Thiery, Justin Ward. 3605-3634 [doi]
- Corralling a Larger Band of Bandits: A Case Study on Switching Regret for Linear Bandits. Haipeng Luo, Mengxiao Zhang, Peng Zhao 0006, Zhi-Hua Zhou. 3635-3684 [doi]
- Assemblies of neurons learn to classify well-separated distributions. Max Dabagia, Santosh S. Vempala, Christos H. Papadimitriou. 3685-3717 [doi]
- The Structured Abstain Problem and the Lovász Hinge. Enrique B. Nueve, Rafael M. Frongillo, Jessica Finocchiaro. 3718-3740 [doi]
- Fast algorithm for overcomplete order-3 tensor decomposition. Jingqiu Ding, Tommaso d'Orsi, Chih-Hung Liu 0001, David Steurer, Stefan Tiegel. 3741-3799 [doi]
- Hardness of Maximum Likelihood Learning of DPPs. Elena Grigorescu, Brendan Juba, Karl Wimmer, Ning Xie 0002. 3800-3819 [doi]
- Learning to Control Linear Systems can be Hard. Anastasios Tsiamis, Ingvar M. Ziemann, Manfred Morari, Nikolai Matni, George J. Pappas. 3820-3857 [doi]
- Horizon-Free Reinforcement Learning in Polynomial Time: the Power of Stationary Policies. Zihan Zhang, Xiangyang Ji, Simon S. Du. 3858-3904 [doi]
- On the well-spread property and its relation to linear regression. Hongjie Chen, Tommaso d'Orsi. 3905-3935 [doi]
- Optimal SQ Lower Bounds for Robustly Learning Discrete Product Distributions and Ising Models. Ilias Diakonikolas, Daniel M. Kane, Yuxin Sun. 3936-3978 [doi]
- Private High-Dimensional Hypothesis Testing. Shyam Narayanan. 3979-4027 [doi]
- How catastrophic can catastrophic forgetting be in linear regression? Itay Evron, Edward Moroshko, Rachel Ward, Nathan Srebro, Daniel Soudry. 4028-4079 [doi]
- Efficient decentralized multi-agent learning in asymmetric queuing systems. Daniel Freund 0001, Thodoris Lykouris, Wentao Weng. 4080-4084 [doi]
- Online Learning to Transport via the Minimal Selection Principle. Wenxuan Guo, YoonHaeng Hur, Tengyuan Liang, Chris Ryan. 4085-4109 [doi]
- On the Role of Channel Capacity in Learning Gaussian Mixture Models. Elad Romanov, Tamir Bendory, Or Ordentlich. 4110-4159 [doi]
- Parameter-free Mirror Descent. Andrew Jacobsen, Ashok Cutkosky. 4160-4211 [doi]
- Chained generalisation bounds. Eugenio Clerico, Amitis Shidani, George Deligiannidis, Arnaud Doucet. 4212-4257 [doi]
- Near-Optimal Statistical Query Hardness of Learning Halfspaces with Massart Noise. Ilias Diakonikolas, Daniel Kane. 4258-4282 [doi]
- Faster online calibration without randomization: interval forecasts and the power of two choices. Chirag Gupta, Aaditya Ramdas. 4283-4309 [doi]
- Universality of empirical risk minimization. Andrea Montanari, Basil Saeed. 4310-4312 [doi]
- Learning a Single Neuron with Adversarial Label Noise via Gradient Descent. Ilias Diakonikolas, Vasilis Kontonis, Christos Tzamos, Nikos Zarifis. 4313-4361 [doi]
- Sharper Rates for Separable Minimax and Finite Sum Optimization via Primal-Dual Extragradient Methods. Yujia Jin, Aaron Sidford, Kevin Tian. 4362-4415 [doi]
- Rate-Distortion Theoretic Generalization Bounds for Stochastic Learning Algorithms. Milad Sefidgaran, Amin Gohari, Gaël Richard, Umut Simsekli. 4416-4463 [doi]
- Scale-free Unconstrained Online Learning for Curved Losses. Jack J. Mayo, Hédi Hadiji, Tim van Erven. 4464-4497 [doi]
- Robustly-reliable learners under poisoning attacks. Maria-Florina Balcan, Avrim Blum, Steve Hanneke, Dravyansh Sharma. 4498-4534 [doi]
- Non-Gaussian Component Analysis via Lattice Basis Reduction. Ilias Diakonikolas, Daniel Kane. 4535-4547 [doi]
- Can Q-learning be Improved with Advice? Noah Golowich, Ankur Moitra. 4548-4619 [doi]
- Non-Convex Optimization with Certificates and Fast Rates Through Kernel Sums of Squares. Blake E. Woodworth, Francis R. Bach, Alessandro Rudi. 4620-4642 [doi]
- Hierarchical Clustering in Graph Streams: Single-Pass Algorithms and Space Lower Bounds. Sepehr Assadi, Vaggos Chatziafratis, Jakub Lacki, Vahab Mirrokni, Chen Wang. 4643-4702 [doi]
- Robust Sparse Mean Estimation via Sum of Squares. Ilias Diakonikolas, Daniel M. Kane, Sushrut Karmalkar, Ankit Pensia, Thanasis Pittas. 4703-4763 [doi]
- Statistical and Computational Phase Transitions in Group Testing. Amin Coja-Oghlan, Oliver Gebhard, Max Hahn-Klimroth, Alexander S. Wein, Ilias Zadik. 4764-4781 [doi]
- The merged-staircase property: a necessary and nearly sufficient condition for SGD learning of sparse functions on two-layer neural networks. Emmanuel Abbe, Enric Boix-Adserà, Theodor Misiakiewicz. 4782-4887 [doi]
- Eigenspace Restructuring: A Principle of Space and Frequency in Neural Networks. Lechao Xiao. 4888-4944 [doi]
- Sampling Approximately Low-Rank Ising Models: MCMC meets Variational Methods. Frederic Koehler, Holden Lee, Andrej Risteski. 4945-4988 [doi]
- Strong Memory Lower Bounds for Learning Natural Models. Gavin Brown, Mark Bun, Adam D. Smith. 4989-5029 [doi]
- On the power of adaptivity in statistical adversaries. Guy Blanc, Jane Lange, Ali Malik, Li-Yang Tan. 5030-5061 [doi]
- Sample-Efficient Reinforcement Learning in the Presence of Exogenous Information. Yonathan Efroni, Dylan J. Foster, Dipendra Misra, Akshay Krishnamurthy, John Langford 0001. 5062-5127 [doi]
- The Query Complexity of Local Search and Brouwer in Rounds. Simina Brânzei, Jiawei Li. 5128-5145 [doi]
- Complete Policy Regret Bounds for Tallying Bandits. Dhruv Malik, Yuanzhi Li, Aarti Singh. 5146-5174 [doi]
- When Is Partially Observable Reinforcement Learning Not Scary? Qinghua Liu, Alan Chung, Csaba Szepesvári, Chi Jin. 5175-5220 [doi]
- Strategizing against Learners in Bayesian Games. Yishay Mansour, Mehryar Mohri, Jon Schneider, Balasubramanian Sivan. 5221-5252 [doi]
- Orthogonal Statistical Learning with Self-Concordant Loss. Lang Liu, Carlos Cinelli, Zaïd Harchaoui. 5253-5277 [doi]
- Clustering with Queries under Semi-Random Noise. Alberto Del Pia, Mingchen Ma, Christos Tzamos. 5278-5313 [doi]
- Efficient Projection-Free Online Convex Optimization with Membership Oracle. Zakaria Mhammedi. 5314-5390 [doi]
- Better Private Algorithms for Correlation Clustering. Daogao Liu. 5391-5412 [doi]
- Neural Networks can Learn Representations with Gradient Descent. Alexandru Damian, Jason D. Lee, Mahdi Soltanolkotabi. 5413-5452 [doi]
- Stochastic linear optimization never overfits with quadratically-bounded losses on general data. Matus Telgarsky. 5453-5488 [doi]
- Multilevel Optimization for Inverse Problems. Simon Weissmann, Ashia Wilson, Jakob Zech. 5489-5524 [doi]
- High-Dimensional Projection Pursuit: Outer Bounds and Applications to Interpolation in Neural Networks. Kangjie Zhou, Andrea Montanari. 5525-5527 [doi]
- Memorize to generalize: on the necessity of interpolation in high dimensional linear regression. Chen Cheng, John Duchi, Rohith Kuditipudi. 5528-5560 [doi]
- Damped Online Newton Step for Portfolio Selection. Zakaria Mhammedi, Alexander Rakhlin. 5561-5595 [doi]
- From Sampling to Optimization on Discrete Domains with Applications to Determinant Maximization. Nima Anari, Thuy Duong Vuong. 5596-5618 [doi]