- Driving Policy Transfer via Modularity and Abstraction. Matthias Mueller, Alexey Dosovitskiy, Bernard Ghanem, Vladlen Koltun. 1-15
- Personalized Dynamics Models for Adaptive Assistive Navigation Systems. Eshed Ohn-Bar, Kris Kitani, Chieko Asakawa. 16-39
- Few-Shot Goal Inference for Visuomotor Learning and Planning. Annie Xie, Avi Singh, Sergey Levine, Chelsea Finn. 40-52
- Neural Modular Control for Embodied Question Answering. Abhishek Das, Georgia Gkioxari, Stefan Lee, Devi Parikh, Dhruv Batra. 53-62
- Visual Curiosity: Learning to Ask Questions to Learn Visual Recognition. Jianwei Yang, Jiasen Lu, Stefan Lee, Dhruv Batra, Devi Parikh. 63-80
- Guided Feature Transformation (GFT): A Neural Language Grounding Module for Embodied Agents. Haonan Yu, Xiaochen Lian, Haichao Zhang, Wei Xu. 81-98
- Grasp2Vec: Learning Object Representations from Self-Supervised Grasping. Eric Jang, Coline Devin, Vincent Vanhoucke, Sergey Levine. 99-112
- Energy-Based Hindsight Experience Prioritization. Rui Zhao, Volker Tresp. 113-122
- Including Uncertainty when Learning from Human Corrections. Dylan P. Losey, Marcia K. O'Malley. 123-132
- Deep Drone Racing: Learning Agile Flight in Dynamic Environments. Elia Kaufmann, Antonio Loquercio, Rene Ranftl, Alexey Dosovitskiy, Vladlen Koltun, Davide Scaramuzza. 133-145
- HDNET: Exploiting HD Maps for 3D Object Detection. Bin Yang, Ming Liang, Raquel Urtasun. 146-155
- Motion Perception in Reinforcement Learning with Dynamic Objects. Artemij Amiranashvili, Alexey Dosovitskiy, Vladlen Koltun, Thomas Brox. 156-168
- Particle Filter Networks with Application to Visual Localization. Péter Karkus, David Hsu, Wee Sun Lee. 169-178
- Sparse Gaussian Process Temporal Difference Learning for Marine Robot Navigation. John Martin, Jinkun Wang, Brendan J. Englot. 179-189
- Fast 3D Modeling with Approximated Convolutional Kernels. Vitor Guizilini, Fabio Ramos. 190-199
- Unpaired Learning of Dense Visual Depth Estimators for Urban Environments. Vitor Guizilini, Fabio Ramos. 200-212
- Learning over Subgoals for Efficient Navigation of Structured, Unknown Environments. Gregory J. Stein, Christopher Bradley, Nicholas Roy. 213-222
- Inferring geometric constraints in human demonstrations. Guru Subramani, Michael R. Zinn, Michael Gleicher. 223-236
- Conditional Affordance Learning for Driving in Urban Environments. Axel Sauer, Nikolay Savinov, Andreas Geiger. 237-252
- Modular Vehicle Control for Transferring Semantic Information Between Weather Conditions Using GANs. Patrick Wenzel, Qadeer Khan, Daniel Cremers, Laura Leal-Taixé. 253-269
- GPU-Accelerated Robotic Simulation for Distributed Reinforcement Learning. Jacky Liang, Viktor Makoviychuk, Ankur Handa, Nuttapong Chentanez, Miles Macklin, Dieter Fox. 270-282
- Feature Learning for Scene Flow Estimation from LIDAR. Arash K. Ushani, Ryan M. Eustice. 283-292
- PAC-Bayes Control: Synthesizing Controllers that Provably Generalize to Novel Environments. Anirudha Majumdar, Maxwell Goldstein. 293-305
- Deep Object Pose Estimation for Semantic Robotic Grasping of Household Objects. Jonathan Tremblay, Thang To, Balakumar Sundaralingam, Yu Xiang, Dieter Fox, Stan Birchfield. 306-316
- SPNets: Differentiable Fluid Dynamics for Deep Neural Networks. Connor Schenck, Dieter Fox. 317-335
- A Data-Efficient Approach to Precise and Controlled Pushing. Maria Bauzá, Francois Robert Hogan, Alberto Rodriguez. 336-345
- Learning Deployable Navigation Policies at Kilometer Scale from a Single Traversal. Jake Bruce, Niko Sünderhauf, Piotr Mirowski, Raia Hadsell, Michael Milford. 346-361
- Risk-Aware Active Inverse Reinforcement Learning. Daniel S. Brown, Yuchen Cui, Scott Niekum. 362-372
- Dense Object Nets: Learning Dense Visual Object Descriptors By and For Robotic Manipulation. Peter R. Florence, Lucas Manuelli, Russ Tedrake. 373-385
- Bayesian RL for Goal-Only Rewards. Philippe Morere, Fabio Ramos. 386-398
- Benchmarks for reinforcement learning in mixed-autonomy traffic. Eugene Vinitsky, Aboudy Kreidieh, Luc Le Flem, Nishant Kheterpal, Kathy Jang, Fangyu Wu, Richard Liaw, Eric Liang, Alexandre M. Bayen. 399-409
- Intervention Aided Reinforcement Learning for Safe and Practical Policy Optimization in Navigation. Fan Wang, Bo Zhou, Ke Chen, Tingxiang Fan, Xi Zhang, Jiangyong Li, Hao Tian, Jia Pan. 410-421
- Reinforcement Learning of Active Vision for Manipulating Objects under Occlusions. Ricson Cheng, Arpit Agarwal, Katerina Fragkiadaki. 422-431
- Adaptable replanning with compressed linear action models for learning from demonstrations. Clement Gehring, Leslie Pack Kaelbling, Tomás Lozano-Pérez. 432-442
- Automorphing Kernels for Nonstationarity in Mapping Unstructured Environments. Ransalu Senanayake, Anthony Tompkins, Fabio Ramos. 443-455
- Leveraging Deep Visual Descriptors for Hierarchical Efficient Localization. Paul-Edouard Sarlin, Frédéric Debraine, Marcin Dymczyk, Roland Siegwart. 456-465
- The Lyapunov Neural Network: Adaptive Stability Certification for Safe Learning of Dynamical Systems. Spencer M. Richards, Felix Berkenkamp, Andreas Krause. 466-476
- Learning 6-DoF Grasping and Pick-Place Using Attention Focus. Marcus Gualtieri, Robert Platt Jr. 477-486
- Curiosity Driven Exploration of Learned Disentangled Goal Spaces. Adrien Laversanne-Finot, Alexandre Péré, Pierre-Yves Oudeyer. 487-504
- Mapping Navigation Instructions to Continuous Control Actions with Position-Visitation Prediction. Valts Blukis, Dipendra Kumar Misra, Ross A. Knepper, Yoav Artzi. 505-518
- Batch Active Preference-Based Learning of Reward Functions. Erdem Biyik, Dorsa Sadigh. 519-528
- Learning Audio Feedback for Estimating Amount and Flow of Granular Material. Samuel Clarke, Travers Rhodes, Christopher G. Atkeson, Oliver Kroemer. 529-550
- HybridNet: Integrating Model-based and Data-driven Learning to Predict Evolution of Dynamical Systems. Yun Long, Xueyuan She, Saibal Mukhopadhyay. 551-560
- Benchmarking Reinforcement Learning Algorithms on Real-World Robots. A. Rupam Mahmood, Dmytro Korenkevych, Gautham Vasan, William Ma, James Bergstra. 561-591
- Learning Neural Parsers with Deterministic Differentiable Imitation Learning. Tanmay Shankar, Nicholas Rhinehart, Katharina Muelling, Kris M. Kitani. 592-604
- Learning to Localize Using a LiDAR Intensity Map. Ioan Andrei Barsan, Shenlong Wang, Andrei Pokrovsky, Raquel Urtasun. 605-616
- Model-Based Reinforcement Learning via Meta-Policy Optimization. Ignasi Clavera, Jonas Rothfuss, John Schulman, Yasuhiro Fujita, Tamim Asfour, Pieter Abbeel. 617-629
- Reinforcement Learning of Phase Oscillators for Fast Adaptation to Moving Targets. Guilherme Maeda, Okan Koc, Jun Morimoto. 630-640
- Global Search with Bernoulli Alternation Kernel for Task-oriented Grasping Informed by Simulation. Rika Antonova, Mia Kokic, Johannes A. Stork, Danica Kragic. 641-650
- Scalable Deep Reinforcement Learning for Vision-Based Robotic Manipulation. Dmitry Kalashnikov, Alex Irpan, Peter Pastor, Julian Ibarz, Alexander Herzog, Eric Jang, Deirdre Quillen, Ethan Holly, Mrinal Kalakrishnan, Vincent Vanhoucke, Sergey Levine. 651-673
- Reward Estimation for Variance Reduction in Deep Reinforcement Learning. Joshua Romoff, Peter Henderson, Alexandre Piché, Vincent François-Lavet, Joelle Pineau. 674-699
- Domain Randomization for Simulation-Based Policy Optimization with Transferability Assessment. Fabio Muratore, Felix Treede, Michael Gienger, Jan Peters. 700-713
- Grounding Robot Plans from Natural Language Instructions with Incomplete World Knowledge. Daniel Nyga, Subhro Roy, Rohan Paul, Daehyung Park, Mihai Pomarlan, Michael Beetz, Nicholas Roy. 714-723
- Learning What Information to Give in Partially Observed Domains. Rohan Chitnis, Leslie Pack Kaelbling, Tomás Lozano-Pérez. 724-733
- Sim-to-Real Reinforcement Learning for Deformable Object Manipulation. Jan Matas, Stephen James, Andrew J. Davison. 734-743
- Expanding Motor Skills using Relay Networks. Visak C. V. Kumar, Sehoon Ha, C. Karen Liu. 744-756
- Efficient Hierarchical Robot Motion Planning Under Uncertainty and Hybrid Dynamics. Ajinkya Jain, Scott Niekum. 757-766
- SURREAL: Open-Source Reinforcement Learning Framework and Robot Manipulation Benchmark. Linxi Fan, Yuke Zhu, Jiren Zhu, Zihua Liu, Orien Zeng, Anchit Gupta, Joan Creus-Costa, Silvio Savarese, Li Fei-Fei. 767-782
- Task-Embedded Control Networks for Few-Shot Imitation Learning. Stephen James, Michael Bloesch, Andrew J. Davison. 783-795
- Learning under Misspecified Objective Spaces. Andreea Bobu, Andrea Bajcsy, Jaime F. Fisac, Anca D. Dragan. 796-805
- Composable Action-Conditioned Predictors: Flexible Off-Policy Learning for Robot Navigation. Gregory Kahn, Adam Villaflor, Pieter Abbeel, Sergey Levine. 806-816
- Sim-to-Real Transfer with Neural-Augmented Robot Simulation. Florian Golemo, Adrien Ali Taïga, Aaron C. Courville, Pierre-Yves Oudeyer. 817-828
- Bayesian Generalized Kernel Inference for Terrain Traversability Mapping. Tixiao Shan, Jinkun Wang, Brendan J. Englot, Kevin Doherty. 829-838
- Multi-objective Model-based Policy Search for Data-efficient Learning with Sparse Rewards. Rituraj Kaushik, Konstantinos I. Chatzilygeroudis, Jean-Baptiste Mouret. 839-855
- Modular meta-learning. Ferran Alet, Tomás Lozano-Pérez, Leslie Pack Kaelbling. 856-868
- Dyadic collaborative Manipulation through Hybrid Trajectory Optimization. Theodoros Stouraitis, Iordanis Chatzinikolaidis, Michael Gienger, Sethu Vijayakumar. 869-878
- ROBOTURK: A Crowdsourcing Platform for Robotic Skill Learning through Imitation. Ajay Mandlekar, Yuke Zhu, Animesh Garg, Jonathan Booher, Max Spero, Albert Tung, Julian Gao, John Emmons, Anchit Gupta, Emre Orbay, Silvio Savarese, Li Fei-Fei. 879-893
- Integrating kinematics and environment context into deep inverse reinforcement learning for predicting off-road vehicle trajectories. Yanfu Zhang, Wenshan Wang, Rogerio Bonatti, Daniel Maturana, Sebastian Scherer. 894-905
- Multiple Interactions Made Easy (MIME): Large Scale Demonstrations Data for Imitation. Pratyusha Sharma, Lekha Mohan, Lerrel Pinto, Abhinav Gupta. 906-915
- Policies Modulating Trajectory Generators. Atil Iscen, Ken Caluwaerts, Jie Tan, Tingnan Zhang, Erwin Coumans, Vikas Sindhwani, Vincent Vanhoucke. 916-926
- A Physically-Consistent Bayesian Non-Parametric Mixture Model for Dynamical System Learning. Nadia Figueroa, Aude Billard. 927-946
- IntentNet: Learning to Predict Intention from Raw Sensor Data. Sergio Casas, Wenjie Luo, Raquel Urtasun. 947-956
- Interpretable Latent Spaces for Learning from Demonstration. Yordan Hristov, Alex Lascarides, Subramanian Ramamoorthy. 957-968
- ESIM: an Open Event Camera Simulator. Henri Rebecq, Daniel Gehrig, Davide Scaramuzza. 969-982
- Robustness via Retrying: Closed-Loop Robotic Manipulation with Self-Supervised Learning. Frederik Ebert, Sudeep Dasari, Alex X. Lee, Sergey Levine, Chelsea Finn. 983-993