The following publications are possibly variants of this publication:
- TIME: A Training-in-Memory Architecture for RRAM-Based Deep Neural Networks. Ming Cheng, Lixue Xia, Zhenhua Zhu, Yi Cai, Yuan Xie 0001, Yu Wang 0002, Huazhong Yang. tcad, 38(5):834-847, 2019. [doi]
- Structured Pruning of RRAM Crossbars for Efficient In-Memory Computing Acceleration of Deep Neural Networks. Jian Meng, Li Yang, Xiaochen Peng, Shimeng Yu, Deliang Fan, Jae-sun Seo. tcasII, 68(5):1576-1580, 2021. [doi]
- SIAM: Chiplet-based Scalable In-Memory Acceleration with Mesh for Deep Neural Networks. Gokul Krishnan, Sumit K. Mandal, Manvitha Pannala, Chaitali Chakrabarti, Jae-sun Seo, Ümit Y. Ogras, Yu Cao 0001. tecs, 20(5s), 2021. [doi]