- Automatic GPU memory management for large neural models in TensorFlow. Tung D. Le, Haruki Imai, Yasushi Negishi, Kiyokuni Kawachiya. 1-13 [doi]
- Massively parallel GPU memory compaction. Matthias Springer, Hidehiko Masuhara. 14-26 [doi]
- Scaling up parallel GC work-stealing in many-core environments. Michihiro Horie, Kazunori Ogata, Mikio Takeuchi, Hiroshi Horii. 27-40 [doi]
- Exploration of memory hybridization for RDD caching in Spark. Md. Muhib Khan, Muhammad Ahad Ul Alam, Amit Kumar Nath, Weikuan Yu. 41-52 [doi]
- Learning when to garbage collect with random forests. Nicholas Jacek, J. Eliot B. Moss. 53-63 [doi]
- Timescale functions for parallel memory allocation. Pengcheng Li, Hao Luo, Chen Ding. 64-78 [doi]
- A lock-free coalescing-capable mechanism for memory management. Ricardo Leite, Ricardo Rocha. 79-88 [doi]
- Concurrent marking of shape-changing objects. Ulan Degenbaev, Michael Lippautz, Hannes Payer. 89-102 [doi]
- Design and analysis of field-logging write barriers. Stephen M. Blackburn. 103-114 [doi]
- Gradual write-barrier insertion into a Ruby interpreter. Koichi Sasada. 115-121 [doi]
- snmalloc: a message passing allocator. Paul Liétar, Theodore Butler, Sylvan Clebsch, Sophia Drossopoulou, Juliana Franco, Matthew J. Parkinson, Alex Shamis, Christoph M. Wintersteiger, David Chisnall. 122-135 [doi]