- Preface. 1-2 [doi]
- Inference under Information Constraints: Lower Bounds from Chi-Square Contraction. Jayadev Acharya, Clément L. Canonne, Himanshu Tyagi. 3-17 [doi]
- Learning in Non-convex Games with an Optimization Oracle. Naman Agarwal, Alon Gonen, Elad Hazan. 18-29 [doi]
- Learning to Prune: Speeding up Repeated Computations. Daniel Alabi, Adam Tauman Kalai, Katrina Ligett, Cameron Musco, Christos Tzamos, Ellen Vitercik. 30-33 [doi]
- Towards Testing Monotonicity of Distributions Over General Posets. Maryam Aliakbarpour, Themis Gouleakis, John Peebles, Ronitt Rubinfeld, Anak Yodpinyanee. 34-82 [doi]
- Testing Mixtures of Discrete Distributions. Maryam Aliakbarpour, Ravi Kumar, Ronitt Rubinfeld. 83-114 [doi]
- Normal Approximation for Stochastic Gradient Descent via Non-Asymptotic Rates of Martingale CLT. Andreas Anastasiou, Krishnakumar Balasubramanian, Murat A. Erdogdu. 115-137 [doi]
- Adaptively Tracking the Best Bandit Arm with an Unknown Number of Distribution Changes. Peter Auer, Pratik Gajane, Ronald Ortner. 138-158 [doi]
- Achieving Optimal Dynamic Regret for Non-stationary Bandits without Prior Information. Peter Auer, Yifang Chen, Pratik Gajane, Chung-wei Lee, Haipeng Luo, Ronald Ortner, Chen-Yu Wei. 159-163 [doi]
- A Universal Algorithm for Variational Inequalities Adaptive to Smoothness and Noise. Francis Bach, Kfir Y. Levy. 164-194 [doi]
- Learning Two Layer Rectified Neural Networks in Polynomial Time. Ainesh Bakshi, Rajesh Jayaram, David P. Woodruff. 195-268 [doi]
- Private Center Points and Learning of Halfspaces. Amos Beimel, Shay Moran, Kobbi Nissim, Uri Stemmer. 269-282 [doi]
- Lower bounds for testing graphical models: colorings and antiferromagnetic Ising models. Ivona Bezáková, Antonio Blanca, Zongchen Chen, Daniel Stefankovic, Eric Vigoda. 283-298 [doi]
- Approximate Guarantees for Dictionary Learning. Aditya Bhaskara, Wai Ming Tai. 299-317 [doi]
- The Optimal Approximation Factor in Density Estimation. Olivier Bousquet, Daniel Kane, Shay Moran. 318-341 [doi]
- Sorted Top-k in Rounds. Mark Braverman, Jieming Mao, Yuval Peres. 342-382 [doi]
- Multi-armed Bandit Problems with Strategic Arms. Mark Braverman, Jieming Mao, Jon Schneider, S. Matthew Weinberg. 383-416 [doi]
- Universality of Computational Lower Bounds for Submatrix Detection. Matthew Brennan, Guy Bresler, Wasim Huleihel. 417-468 [doi]
- Optimal Average-Case Reductions to Sparse PCA: From Weak Assumptions to Strong Hardness. Matthew Brennan, Guy Bresler. 469-470 [doi]
- Learning rates for Gaussian mixtures under group action. Victor-Emmanuel Brunel. 471-491 [doi]
- Near-optimal method for highly smooth convex optimization. Sébastien Bubeck, Qijia Jiang, Yin Tat Lee, Yuanzhi Li, Aaron Sidford. 492-507 [doi]
- Improved Path-length Regret Bounds for Bandits. Sébastien Bubeck, Yuanzhi Li, Haipeng Luo, Chen-Yu Wei. 508-528 [doi]
- Optimal Learning of Mallows Block Model. Róbert Busa-Fekete, Dimitris Fotakis, Balázs Szörényi, Manolis Zampetakis. 529-532 [doi]
- Gaussian Process Optimization with Adaptive Sketching: Scalable and No Regret. Daniele Calandriello, Luigi Carratino, Alessandro Lazaric, Michal Valko, Lorenzo Rosasco. 533-557 [doi]
- Disagreement-Based Combinatorial Pure Exploration: Sample Complexity Bounds and an Efficient Algorithm. Tongyi Cao, Akshay Krishnamurthy. 558-588 [doi]
- A Rank-1 Sketch for Matrix Multiplicative Weights. Yair Carmon, John C. Duchi, Aaron Sidford, Kevin Tian. 589-623 [doi]
- On the Computational Power of Online Gradient Descent. Vaggos Chatziafratis, Tim Roughgarden, Joshua R. Wang. 624-662 [doi]
- Active Regression via Linear-Sample Sparsification. Xue Chen, Eric Price. 663-695 [doi]
- A New Algorithm for Non-stationary Contextual Bandits: Efficient, Optimal and Parameter-free. Yifang Chen, Chung-wei Lee, Haipeng Luo, Chen-Yu Wei. 696-726 [doi]
- Faster Algorithms for High-Dimensional Robust Covariance Estimation. Yu Cheng, Ilias Diakonikolas, Rong Ge, David P. Woodruff. 727-757 [doi]
- Testing Symmetric Markov Chains Without Hitting. Yeshwanth Cherapanamjeri, Peter L. Bartlett. 758-785 [doi]
- Fast Mean Estimation with Sub-Gaussian Rates. Yeshwanth Cherapanamjeri, Nicolas Flammarion, Peter L. Bartlett. 786-806 [doi]
- Vortices Instead of Equilibria in MinMax Optimization: Chaos and Butterfly Effects of Online Learning in Zero-Sum Games. Yun Kuen Cheung, Georgios Piliouras. 807-834 [doi]
- Pure entropic regularization for metrical task systems. Christian Coester, James R. Lee. 835-848 [doi]
- A near-optimal algorithm for approximating the John Ellipsoid. Michael B. Cohen, Ben Cousins, Yin Tat Lee, Xin Yang. 849-873 [doi]
- Artificial Constraints and Hints for Unbounded Online Learning. Ashok Cutkosky. 874-894 [doi]
- Combining Online Learning Guarantees. Ashok Cutkosky. 895-913 [doi]
- Learning from Weakly Dependent Data under Dobrushin's Condition. Yuval Dagan, Constantinos Daskalakis, Nishanth Dikkala, Siddhartha Jayanti. 914-928 [doi]
- Space lower bounds for linear prediction in the streaming model. Yuval Dagan, Gil Kur, Ohad Shamir. 929-954 [doi]
- Computationally and Statistically Efficient Truncated Regression. Constantinos Daskalakis, Themis Gouleakis, Christos Tzamos, Manolis Zampetakis. 955-960 [doi]
- Reconstructing Trees from Traces. Sami Davies, Miklós Z. Rácz, Cyrus Rashtchian. 961-978 [doi]
- Is your function low dimensional? Anindya De, Elchanan Mossel, Joe Neeman. 979-993 [doi]
- Computational Limitations in Robust Classification and Win-Win Results. Akshay Degwekar, Preetum Nakkiran, Vinod Vaikuntanathan. 994-1028 [doi]
- Fast determinantal point processes via distortion-free intermediate sampling. Michal Derezinski. 1029-1049 [doi]
- Minimax experimental design: Bridging the gap between statistical and worst-case approaches to least squares regression. Michal Derezinski, Kenneth L. Clarkson, Michael W. Mahoney, Manfred K. Warmuth. 1050-1069 [doi]
- Communication and Memory Efficient Testing of Discrete Distributions. Ilias Diakonikolas, Themis Gouleakis, Daniel M. Kane, Sankeerth Rao. 1070-1106 [doi]
- Testing Identity of Multidimensional Histograms. Ilias Diakonikolas, Daniel M. Kane, John Peebles. 1107-1131 [doi]
- Lower Bounds for Parallel and Randomized Convex Optimization. Jelena Diakonikolas, Cristóbal Guzmán. 1132-1157 [doi]
- On the Performance of Thompson Sampling on Logistic Bandits. Shi Dong, Tengyu Ma, Benjamin Van Roy. 1158-1160 [doi]
- Lower Bounds for Locally Private Estimation via Communication Complexity. John C. Duchi, Ryan Rogers. 1161-1191 [doi]
- Sharp Analysis for Nonconvex SGD Escaping from Saddle Points. Cong Fang, Zhouchen Lin, Tong Zhang. 1192-1234 [doi]
- Achieving the Bayes Error Rate in Stochastic Block Model by SDP, Robustly. Yingjie Fei, Yudong Chen. 1235-1269 [doi]
- High probability generalization bounds for uniformly stable algorithms with nearly optimal rate. Vitaly Feldman, Jan Vondrák. 1270-1279 [doi]
- Sum-of-squares meets square loss: Fast rates for agnostic tensor completion. Dylan J. Foster, Andrej Risteski. 1280-1318 [doi]
- The Complexity of Making the Gradient Small in Stochastic Convex Optimization. Dylan J. Foster, Ayush Sekhari, Ohad Shamir, Nathan Srebro, Karthik Sridharan, Blake E. Woodworth. 1319-1345 [doi]
- Statistical Learning with a Nuisance Component. Dylan J. Foster, Vasilis Syrgkanis. 1346-1348 [doi]
- On the Regret Minimization of Nonconvex Online Gradient Ascent for Online PCA. Dan Garber. 1349-1373 [doi]
- Optimal Tensor Methods in Smooth Convex and Uniformly Convex Optimization. Alexander Gasnikov, Pavel Dvurechensky, Eduard Gorbunov, Evgeniya Vorontsova, Daniil Selikhanovych, César A. Uribe. 1374-1391 [doi]
- Near Optimal Methods for Minimizing Convex Functions with Lipschitz $p$-th Derivatives. Alexander Gasnikov, Pavel Dvurechensky, Eduard Gorbunov, Evgeniya Vorontsova, Daniil Selikhanovych, César A. Uribe, Bo Jiang, Haoyue Wang, Shuzhong Zhang, Sébastien Bubeck, Qijia Jiang, Yin Tat Lee, Yuanzhi Li, Aaron Sidford. 1392-1393 [doi]
- Stabilized SVRG: Simple Variance Reduction for Nonconvex Optimization. Rong Ge, Zhize Li, Weiyao Wang, Xiang Wang. 1394-1448 [doi]
- Learning Ising Models with Independent Failures. Surbhi Goel, Daniel M. Kane, Adam R. Klivans. 1449-1469 [doi]
- Learning Neural Networks with Two Nonlinear Layers in Polynomial Time. Surbhi Goel, Adam R. Klivans. 1470-1499 [doi]
- When can unlabeled data improve the learning rate? Christina Göpfert, Shai Ben-David, Olivier Bousquet, Sylvain Gelly, Ilya O. Tolstikhin, Ruth Urner. 1500-1518 [doi]
- Sampling and Optimization on Convex Sets in Riemannian Manifolds of Non-Negative Curvature. Navin Goyal, Abhishek Shetty. 1519-1561 [doi]
- Better Algorithms for Stochastic Bandits with Adversarial Corruptions. Anupam Gupta, Tomer Koren, Kunal Talwar. 1562-1578 [doi]
- Tight analyses for non-smooth stochastic gradient descent. Nicholas J. A. Harvey, Christopher Liaw, Yaniv Plan, Sikander Randhawa. 1579-1613 [doi]
- Reasoning in Bayesian Opinion Exchange Networks Is PSPACE-Hard. Jan Hazla, Ali Jadbabaie, Elchanan Mossel, M. Amin Rahimian. 1614-1648 [doi]
- How Hard is Robust Mean Estimation? Samuel B. Hopkins, Jerry Li. 1649-1682 [doi]
- A Robust Spectral Algorithm for Overcomplete Tensor Decomposition. Samuel B. Hopkins, Tselil Schramm, Jonathan Shi. 1683-1722 [doi]
- Sample-Optimal Low-Rank Approximation of Distance Matrices. Piotr Indyk, Ali Vakilian, Tal Wagner, David P. Woodruff. 1723-1751 [doi]
- Making the Last Iterate of SGD Information Theoretically Optimal. Prateek Jain, Dheeraj Nagaraj, Praneeth Netrapalli. 1752-1755 [doi]
- Accuracy-Memory Tradeoffs and Phase Transitions in Belief Propagation. Vishesh Jain, Frederic Koehler, Jingbo Liu, Elchanan Mossel. 1756-1771 [doi]
- The implicit bias of gradient descent on nonseparable data. Ziwei Ji, Matus Telgarsky. 1772-1798 [doi]
- An Optimal High-Order Tensor Method for Convex Optimization. Bo Jiang, Haoyue Wang, Shuzhong Zhang. 1799-1801 [doi]
- Parameter-Free Online Convex Optimization with Sub-Exponential Noise. Kwang-Sung Jun, Francesco Orabona. 1802-1823 [doi]
- Sample complexity of partition identification using multi-armed bandits. Sandeep Juneja, Subhashini Krishnasamy. 1824-1852 [doi]
- Privately Learning High-Dimensional Distributions. Gautam Kamath, Jerry Li, Vikrant Singhal, Jonathan Ullman. 1853-1902 [doi]
- On Communication Complexity of Classification Problems. Daniel Kane, Roi Livni, Shay Moran, Amir Yehudayoff. 1903-1943 [doi]
- Non-asymptotic Analysis of Biased Stochastic Approximation Scheme. Belhal Karimi, Blazej Miasojedow, Eric Moulines, Hoi-To Wai. 1944-1974 [doi]
- Discrepancy, Coresets, and Sketches in Machine Learning. Zohar S. Karnin, Edo Liberty. 1975-1993 [doi]
- Bandit Principal Component Analysis. Wojciech Kotlowski, Gergely Neu. 1994-2024 [doi]
- Contextual bandits with continuous actions: Smoothing, zooming, and adapting. Akshay Krishnamurthy, John Langford, Aleksandrs Slivkins, Chicheng Zhang. 2025-2027 [doi]
- Distribution-Dependent Analysis of Gibbs-ERM Principle. Ilja Kuzborskij, Nicolò Cesa-Bianchi, Csaba Szepesvári. 2028-2054 [doi]
- Global Convergence of the EM Algorithm for Mixtures of Two Component Linear Regression. Jeongyeol Kwon, Wei Qian, Constantine Caramanis, Yudong Chen, Damek Davis. 2055-2110 [doi]
- An Information-Theoretic Approach to Minimax Regret in Partial Monitoring. Tor Lattimore, Csaba Szepesvári. 2111-2139 [doi]
- Solving Empirical Risk Minimization in the Current Matrix Multiplication Time. Yin Tat Lee, Zhao Song, Qiuyi Zhang. 2140-2157 [doi]
- On Mean Estimation for General Norms with Statistical Queries. Jerry Li, Aleksandar Nikolov, Ilya P. Razenshteyn, Erik Waingarten. 2158-2172 [doi]
- Nearly Minimax-Optimal Regret for Linearly Parameterized Bandits. Yingkai Li, Yining Wang, Yuan Zhou. 2173-2174 [doi]
- Sharp Theoretical Analysis for Nonparametric Testing under Random Projection. Meimei Liu, Zuofeng Shang, Guang Cheng. 2175-2209 [doi]
- Combinatorial Algorithms for Optimal Design. Vivek Madan, Mohit Singh, Uthaipon Tantipongpipat, Weijun Xie. 2210-2258 [doi]
- Nonconvex sampling with the Metropolis-adjusted Langevin algorithm. Oren Mangoubi, Nisheeth K. Vishnoi. 2259-2293 [doi]
- Beyond Least-Squares: Fast Rates for Regularized Empirical Risk Minimization through Self-Concordance. Ulysse Marteau-Ferey, Dmitrii Ostrovskii, Francis Bach, Alessandro Rudi. 2294-2340 [doi]
- Planting trees in graphs, and finding them back. Laurent Massoulié, Ludovic Stephan, Don Towsley. 2341-2371 [doi]
- Uniform concentration and symmetrization for weak interactions. Andreas Maurer, Massimiliano Pontil. 2372-2387 [doi]
- Mean-field theory of two-layers neural networks: dimension-free bounds and kernel limit. Song Mei, Theodor Misiakiewicz, Andrea Montanari. 2388-2464 [doi]
- Batch-Size Independent Regret Bounds for the Combinatorial Multi-Armed Bandit Problem. Nadav Merlis, Shie Mannor. 2465-2489 [doi]
- Lipschitz Adaptivity with Multiple Learning Rates in Online Learning. Zakaria Mhammedi, Wouter M. Koolen, Tim van Erven. 2490-2511 [doi]
- VC Classes are Adversarially Robustly Learnable, but Only Improperly. Omar Montasser, Steve Hanneke, Nathan Srebro. 2512-2530 [doi]
- Affine Invariant Covariance Estimation for Heavy-Tailed Distributions. Dmitrii M. Ostrovskii, Alessandro Rudi. 2531-2550 [doi]
- Stochastic Gradient Descent Learns State Equations with Nonlinear Activations. Samet Oymak. 2551-2579 [doi]
- A Theory of Selective Prediction. Mingda Qiao, Gregory Valiant. 2580-2594 [doi]
- Consistency of Interpolation with Laplace Kernels is a High-Dimensional Phenomenon. Alexander Rakhlin, Xiyu Zhai. 2595-2623 [doi]
- Classification with unknown class-conditional label noise on non-compact feature spaces. Henry W. J. Reeve, Ata Kabán. 2624-2651 [doi]
- The All-or-Nothing Phenomenon in Sparse Linear Regression. Galen Reeves, Jiaming Xu, Ilias Zadik. 2652-2663 [doi]
- Depth Separations in Neural Networks: What is Actually Being Separated? Itay Safran, Ronen Eldan, Ohad Shamir. 2664-2666 [doi]
- How do infinite width bounded norm networks look in function space? Pedro Savarese, Itay Evron, Daniel Soudry, Nathan Srebro. 2667-2690 [doi]
- Exponential Convergence Time of Gradient Descent for One-Dimensional Deep Linear Neural Networks. Ohad Shamir. 2691-2713 [doi]
- Learning Linear Dynamical Systems with Semi-Parametric Least Squares. Max Simchowitz, Ross Boczar, Benjamin Recht. 2714-2802 [doi]
- Finite-Time Error Bounds For Linear Stochastic Approximation and TD Learning. R. Srikant, Lei Ying. 2803-2830 [doi]
- Robustness of Spectral Methods for Community Detection. Ludovic Stephan, Laurent Massoulié. 2831-2860 [doi]
- Maximum Entropy Distributions: Bit Complexity and Stability. Damian Straszak, Nisheeth K. Vishnoi. 2861-2891 [doi]
- Adaptive Hard Thresholding for Near-optimal Consistent Robust Regression. Arun Sai Suggala, Kush Bhatia, Pradeep Ravikumar, Prateek Jain. 2892-2897 [doi]
- Model-based RL in Contextual Decision Processes: PAC bounds and Exponential Improvements over Model-free Approaches. Wen Sun, Nan Jiang, Akshay Krishnamurthy, Alekh Agarwal, John Langford. 2898-2933 [doi]
- Stochastic first-order methods: non-asymptotic and computer-aided analyses via potential functions. Adrien Taylor, Francis Bach. 2934-2992 [doi]
- The Relative Complexity of Maximum Likelihood Estimation, MAP Estimation, and Sampling. Christopher Tosh, Sanjoy Dasgupta. 2993-3035 [doi]
- The Gap Between Model-Based and Model-Free Methods on the Linear Quadratic Regulator: An Asymptotic Viewpoint. Stephen Tu, Benjamin Recht. 3036-3083 [doi]
- Theoretical guarantees for sampling and inference in generative models with latent diffusions. Belinda Tzen, Maxim Raginsky. 3084-3114 [doi]
- Gradient Descent for One-Hidden-Layer Neural Networks: Polynomial Convergence and SQ Lower Bounds. Santosh Vempala, John Wilmes. 3115-3117 [doi]
- Estimation of smooth densities in Wasserstein distance. Jonathan Weed, Quentin Berthet. 3118-3119 [doi]
- Estimating the Mixing Time of Ergodic Markov Chains. Geoffrey Wolfer, Aryeh Kontorovich. 3120-3159 [doi]
- Stochastic Approximation of Smooth and Strongly Convex Functions: Beyond the $O(1/T)$ Convergence Rate. Lijun Zhang, Zhi-Hua Zhou. 3160-3179 [doi]
- Open Problem: Is Margin Sufficient for Non-Interactive Private Distributed Learning? Amit Daniely, Vitaly Feldman. 3180-3184 [doi]
- Open Problem: How fast can a multiclass test set be overfit? Vitaly Feldman, Roy Frostig, Moritz Hardt. 3185-3189 [doi]
- Open Problem: Do Good Algorithms Necessarily Query Bad Points? Rong Ge, Prateek Jain, Sham M. Kakade, Rahul Kidambi, Dheeraj M. Nagaraj, Praneeth Netrapalli. 3190-3193 [doi]
- Open Problem: Risk of Ruin in Multiarmed Bandits. Filipo Studzinski Perotto, Mathieu Bourgais, Bruno C. Silva, Laurent Vercouter. 3194-3197 [doi]
- Open Problem: Monotonicity of Learning. Tom J. Viering, Alexander Mey, Marco Loog. 3198-3201 [doi]
- Open Problem: The Oracle Complexity of Convex Optimization with Limited Memory. Blake E. Woodworth, Nathan Srebro. 3202-3210 [doi]