- Conference on Learning Theory 2018: Preface. Sébastien Bubeck, Philippe Rigollet. 1 [doi]
- Algorithmic Regularization in Over-parameterized Matrix Sensing and Neural Networks with Quadratic Activations. Yuanzhi Li, Tengyu Ma, Hongyang Zhang. 2-47 [doi]
- Reducibility and Computational Lower Bounds for Problems with Planted Sparse Structure. Matthew Brennan, Guy Bresler, Wasim Huleihel. 48-166 [doi]
- Logistic Regression: The Importance of Being Improper. Dylan J. Foster, Satyen Kale, Haipeng Luo, Mehryar Mohri, Karthik Sridharan. 167-208 [doi]
- Actively Avoiding Nonsense in Generative Models. Steve Hanneke, Adam Tauman Kalai, Gautam Kamath, Christos Tzamos. 209-227 [doi]
- A Faster Approximation Algorithm for the Gibbs Partition Function. Vladimir Kolmogorov. 228-249 [doi]
- Exponential Convergence of Testing Error for Stochastic Gradient Methods. Loucas Pillaud-Vivien, Alessandro Rudi, Francis Bach. 250-296 [doi]
- Size-Independent Sample Complexity of Neural Networks. Noah Golowich, Alexander Rakhlin, Ohad Shamir. 297-299 [doi]
- Underdamped Langevin MCMC: A non-asymptotic analysis. Xiang Cheng, Niladri S. Chatterji, Peter L. Bartlett, Michael I. Jordan. 300-323 [doi]
- Online Variance Reduction for Stochastic Optimization. Zalan Borsos, Andreas Krause, Kfir Y. Levy. 324-357 [doi]
- Information Directed Sampling and Bandits with Heteroscedastic Noise. Johannes Kirschner, Andreas Krause. 358-384 [doi]
- Testing Symmetric Markov Chains From a Single Trajectory. Constantinos Daskalakis, Nishanth Dikkala, Nick Gravin. 385-409 [doi]
- Detection limits in the high-dimensional spiked rectangular model. Ahmed El Alaoui, Michael I. Jordan. 410-438 [doi]
- Learning Without Mixing: Towards A Sharp Analysis of Linear System Identification. Max Simchowitz, Horia Mania, Stephen Tu, Michael I. Jordan, Benjamin Recht. 439-473 [doi]
- Active Tolerant Testing. Avrim Blum, Lunjia Hu. 474-497 [doi]
- Polynomial Time and Sample Complexity for Non-Gaussian Component Analysis: Spectral Methods. Yan Shuo Tan, Roman Vershynin. 498-534 [doi]
- Calibrating Noise to Variance in Adaptive Data Analysis. Vitaly Feldman, Thomas Steinke. 535-544 [doi]
- Accelerating Stochastic Gradient Descent for Least Squares Regression. Prateek Jain, Sham M. Kakade, Rahul Kidambi, Praneeth Netrapalli, Aaron Sidford. 545-604 [doi]
- Generalization Bounds of SGLD for Non-convex Learning: Two Theoretical Viewpoints. Wenlong Mou, Liwei Wang, Xiyu Zhai, Kai Zheng. 605-638 [doi]
- Optimal approximation of continuous functions by very deep ReLU networks. Dmitry Yarotsky. 639-649 [doi]
- Averaging Stochastic Gradient Descent on Riemannian Manifolds. Nilesh Tripuraneni, Nicolas Flammarion, Francis Bach, Michael I. Jordan. 650-687 [doi]
- Fitting a Putative Manifold to Noisy Data. Charles Fefferman, Sergei Ivanov, Yaroslav Kurylev, Matti Lassas, Hariharan Narayanan. 688-720 [doi]
- Private Sequential Learning. John N. Tsitsiklis, Kuang Xu, Zhi Xu. 721-727 [doi]
- Optimal Errors and Phase Transitions in High-Dimensional Generalized Linear Models. Jean Barbier, Florent Krzakala, Nicolas Macris, Léo Miolane, Lenka Zdeborová. 728-731 [doi]
- Exact and Robust Conformal Inference Methods for Predictive Machine Learning with Dependent Data. Victor Chernozhukov, Kaspar Wüthrich, Yinchu Zhu. 732-749 [doi]
- Nonstochastic Bandits with Composite Anonymous Feedback. Nicolò Cesa-Bianchi, Claudio Gentile, Yishay Mansour. 750-773 [doi]
- Lower Bounds for Higher-Order Convex Optimization. Naman Agarwal, Elad Hazan. 774-792 [doi]
- Log-concave sampling: Metropolis-Hastings algorithms are fast! Raaz Dwivedi, Yuansi Chen, Martin J. Wainwright, Bin Yu. 793-797 [doi]
- Incentivizing Exploration by Heterogeneous Users. Bangrui Chen, Peter I. Frazier, David Kempe. 798-818 [doi]
- Fast and Sample Near-Optimal Algorithms for Learning Multidimensional Histograms. Ilias Diakonikolas, Jerry Li, Ludwig Schmidt. 819-842 [doi]
- Time-Space Tradeoffs for Learning Finite Functions from Random Evaluations, with Applications to Polynomials. Paul Beame, Shayan Oveis Gharan, Xin Yang. 843-856 [doi]
- Local Optimality and Generalization Guarantees for the Langevin Algorithm via Empirical Metastability. Belinda Tzen, Tengyuan Liang, Maxim Raginsky. 857-875 [doi]
- Hardness of Learning Noisy Halfspaces using Polynomial Thresholds. Arnab Bhattacharyya, Suprovat Ghoshal, Rishi Saket. 876-917 [doi]
- Best of both worlds: Stochastic & adversarial best-arm identification. Yasin Abbasi-Yadkori, Peter L. Bartlett, Victor Gabillon, Alan Malek, Michal Valko. 918-949 [doi]
- Learning Patterns for Detection with Multiscale Scan Statistics. James Sharpnack. 950-969 [doi]
- Global Guarantees for Enforcing Deep Generative Priors by Empirical Risk. Paul Hand, Vladislav Voroninski. 970-978 [doi]
- Small-loss bounds for online learning with partial information. Thodoris Lykouris, Karthik Sridharan, Éva Tardos. 979-986 [doi]
- Empirical bounds for functions with weak interactions. Andreas Maurer, Massimiliano Pontil. 987-1010 [doi]
- Restricted Eigenvalue from Stable Rank with Applications to Sparse Linear Regression. Shiva Prasad Kasiviswanathan, Mark Rudelson. 1011-1041 [doi]
- Accelerated Gradient Descent Escapes Saddle Points Faster than Gradient Descent. Chi Jin, Praneeth Netrapalli, Michael I. Jordan. 1042-1085 [doi]
- Convex Optimization with Unbounded Nonconvex Oracles using Simulated Annealing. Oren Mangoubi, Nisheeth K. Vishnoi. 1086-1124 [doi]
- Learning Mixtures of Linear Regressions with Nearly Optimal Complexity. Yuanzhi Li, Yingyu Liang. 1125-1144 [doi]
- Detecting Correlations with Little Memory and Communication. Yuval Dagan, Ohad Shamir. 1145-1198 [doi]
- Finite Sample Analysis of Two-Timescale Stochastic Approximation with Applications to Reinforcement Learning. Gal Dalal, Gugan Thoppe, Balázs Szörényi, Shie Mannor. 1199-1233 [doi]
- Near-Optimal Sample Complexity Bounds for Maximum Likelihood Estimation of Multivariate Log-concave Densities. Timothy Carpenter, Ilias Diakonikolas, Anastasios Sidiropoulos, Alistair Stewart. 1234-1262 [doi]
- More Adaptive Algorithms for Adversarial Bandits. Chen-Yu Wei, Haipeng Luo. 1263-1291 [doi]
- Efficient Convex Optimization with Membership Oracles. Yin Tat Lee, Aaron Sidford, Santosh S. Vempala. 1292-1294 [doi]
- A General Approach to Multi-Armed Bandits Under Risk Criteria. Asaf Cassel, Shie Mannor, Assaf Zeevi. 1295-1306 [doi]
- An Optimal Learning Algorithm for Online Unconstrained Submodular Maximization. Tim Roughgarden, Joshua R. Wang. 1307-1325 [doi]
- The Mean-Field Approximation: Information Inequalities, Algorithms, and Complexity. Vishesh Jain, Frederic Koehler, Elchanan Mossel. 1326-1347 [doi]
- Approximation beats concentration? An approximation view on inference with smooth radial kernels. Mikhail Belkin. 1348-1361 [doi]
- Non-Convex Matrix Completion Against a Semi-Random Adversary. Yu Cheng, Rong Ge. 1362-1394 [doi]
- The Vertex Sample Complexity of Free Energy is Polynomial. Vishesh Jain, Frederic Koehler, Elchanan Mossel. 1395-1419 [doi]
- Efficient Algorithms for Outlier-Robust Regression. Adam R. Klivans, Pravesh K. Kothari, Raghu Meka. 1420-1430 [doi]
- Action-Constrained Markov Decision Processes With Kullback-Leibler Cost. Ana Busic, Sean P. Meyn. 1431-1444 [doi]
- Fundamental Limits of Weak Recovery with Applications to Phase Retrieval. Marco Mondelli, Andrea Montanari. 1445-1450 [doi]
- Cutting plane methods can be extended into nonconvex optimization. Oliver Hinder. 1451-1454 [doi]
- An Analysis of the t-SNE Algorithm for Data Visualization. Sanjeev Arora, Wei Hu, Pravesh K. Kothari. 1455-1462 [doi]
- Adaptivity to Smoothness in X-armed bandits. Andrea Locatelli, Alexandra Carpentier. 1463-1492 [doi]
- Black-Box Reductions for Parameter-free Online Learning in Banach Spaces. Ashok Cutkosky, Francesco Orabona. 1493-1529 [doi]
- A Data Prism: Semi-verified learning in the small-alpha regime. Michela Meister, Gregory Valiant. 1530-1546 [doi]
- A Direct Sum Result for the Information Complexity of Learning. Ido Nachum, Jonathan Shafer, Amir Yehudayoff. 1547-1568 [doi]
- Online learning over a finite action set with limited switching. Jason Altschuler, Kunal Talwar. 1569-1573 [doi]
- Smoothed Online Convex Optimization in High Dimensions via Online Balanced Descent. Niangjun Chen, Gautam Goel, Adam Wierman. 1574-1594 [doi]
- Faster Rates for Convex-Concave Games. Jacob D. Abernethy, Kevin A. Lai, Kfir Y. Levy, Jun-Kun Wang. 1595-1625 [doi]
- $\ell_1$ Regression using Lewis Weights Preconditioning and Stochastic Gradient Descent. David Durfee, Kevin A. Lai, Saurabh Sawlani. 1626-1656 [doi]
- Optimal Single Sample Tests for Structured versus Unstructured Network Data. Guy Bresler, Dheeraj Nagaraj. 1657-1690 [doi]
- A Finite Time Analysis of Temporal Difference Learning With Linear Function Approximation. Jalaj Bhandari, Daniel Russo, Raghav Singal. 1691-1692 [doi]
- Privacy-preserving Prediction. Cynthia Dwork, Vitaly Feldman. 1693-1702 [doi]
- An Estimate Sequence for Geodesically Convex Optimization. Hongyi Zhang, Suvrit Sra. 1703-1723 [doi]
- The Externalities of Exploration and How Data Diversity Helps Exploitation. Manish Raghavan, Aleksandrs Slivkins, Jennifer Wortman Vaughan, Zhiwei Steven Wu. 1724-1738 [doi]
- Efficient Contextual Bandits in Non-stationary Worlds. Haipeng Luo, Chen-Yu Wei, Alekh Agarwal, John Langford. 1739-1776 [doi]
- Langevin Monte Carlo and JKO splitting. Espen Bernton. 1777-1798 [doi]
- Subpolynomial trace reconstruction for random strings and arbitrary deletion probability. Nina Holden, Robin Pemantle, Yuval Peres. 1799-1840 [doi]
- An explicit analysis of the entropic penalty in linear programming. Jonathan Weed. 1841-1855 [doi]
- Efficient active learning of sparse halfspaces. Chicheng Zhang. 1856-1880 [doi]
- Marginal Singularity, and the Benefits of Labels in Covariate-Shift. Samory Kpotufe, Guillaume Martinet. 1882-1886 [doi]
- Learning Single-Index Models in Gaussian Space. Rishabh Dudeja, Daniel Hsu. 1887-1930 [doi]
- Hidden Integrality of SDP Relaxations for Sub-Gaussian Mixture Models. Yingjie Fei, Yudong Chen. 1931-1965 [doi]
- Counting Motifs with Graph Sampling. Jason M. Klusowski, Yihong Wu. 1966-2011 [doi]
- Approximate Nearest Neighbors in Limited Space. Piotr Indyk, Tal Wagner. 2012-2036 [doi]
- Breaking the $1/\sqrt{n}$ Barrier: Faster Rates for Permutation-based Models in Polynomial Time. Cheng Mao, Ashwin Pananjady, Martin J. Wainwright. 2037-2042 [doi]
- Unleashing Linear Optimizers for Group-Fair Learning and Optimization. Daniel Alabi, Nicole Immorlica, Adam Kalai. 2043-2066 [doi]
- The Many Faces of Exponential Weights in Online Learning. Dirk van der Hoeven, Tim van Erven, Wojciech Kotlowski. 2067-2092 [doi]
- Sampling as optimization in the space of measures: The Langevin dynamics as a composite optimization problem. Andre Wibisono. 2093-3027 [doi]
- Online Learning: Sufficient Statistics and the Burkholder Method. Dylan J. Foster, Alexander Rakhlin, Karthik Sridharan. 3028-3064 [doi]
- Minimax Bounds on Stochastic Batched Convex Optimization. John C. Duchi, Feng Ruan, Chulhee Yun. 3065-3162 [doi]
- Geometric Lower Bounds for Distributed Parameter Estimation under Communication Constraints. Yanjun Han, Ayfer Özgür, Tsachy Weissman. 3163-3188 [doi]
- Local moment matching: A unified methodology for symmetric functional estimation and distribution estimation under Wasserstein distance. Yanjun Han, Jiantao Jiao, Tsachy Weissman. 3189-3221 [doi]
- Iterate Averaging as Regularization for Stochastic Gradient Descent. Gergely Neu, Lorenzo Rosasco. 3222-3242 [doi]
- Smoothed analysis for low-rank solutions to semidefinite programs in quadratic penalty form. Srinadh Bhojanapalli, Nicolas Boumal, Prateek Jain, Praneeth Netrapalli. 3243-3270 [doi]
- Certified Computation from Unreliable Datasets. Themis Gouleakis, Christos Tzamos, Manolis Zampetakis. 3271-3294 [doi]
- Open Problem: The Dependence of Sample Complexity Lower Bounds on Planning Horizon. Nan Jiang, Alekh Agarwal. 3395-3398 [doi]
- Open Problem: Improper Learning of Mixtures of Gaussians. Elad Hazan, Roi Livni. 3399-3402 [doi]