The following publications are possibly variants of this publication:
- A 28nm 16.9-300TOPS/W Computing-in-Memory Processor Supporting Floating-Point NN Inference/Training with Intensive-CIM Sparse-Digital Architecture. Jinshan Yue, Chaojie He, Zi Wang, Zhaori Cong, Yifan He, Mufeng Zhou, Wenyu Sun, Xueqing Li, Chunmeng Dou, Feng Zhang 0014, Huazhong Yang, Yongpan Liu, Ming Liu 0022. isscc 2023: 252-253 [doi]
- ShareFloat CIM: A Compute-In-Memory Architecture with Floating-Point Multiply-and-Accumulate Operations. An Guo, Yongliang Zhou, Bo Wang, Tianzhu Xiong, Chen Xue, Yufei Wang, Xin Si, Jun Yang. iscas 2022: 2276-2280 [doi]
- A 28-nm 64-kb 31.6-TFLOPS/W Digital-Domain Floating-Point-Computing-Unit and Double-Bit 6T-SRAM Computing-in-Memory Macro for Floating-Point CNNs. An Guo, Chen Xi, Fangyuan Dong, Xingyu Pu, Dongqi Li, Jingmin Zhang, Xueshan Dong, Hui Gao, Yiran Zhang, Bo Wang 0023, Jun Yang 0006, Xin Si. jssc, 59(9):3032-3044, September 2024. [doi]