The following publications are possibly variants of this publication:
- A 1.97 TFLOPS/W Configurable SRAM-Based Floating-Point Computation-in-Memory Macro for Energy-Efficient AI Chips. Yangzhan Mai, Mingyu Wang, Chuanghao Zhang, Baiqing Zhong, Zhiyi Yu. ISCAS 2023: 1-5 [doi]
- A 28nm 64-kb 31.6-TFLOPS/W Digital-Domain Floating-Point-Computing-Unit and Double-Bit 6T-SRAM Computing-in-Memory Macro for Floating-Point CNNs. An Guo, Xin Si, Xi Chen, Fangyuan Dong, Xingyu Pu, Dongqi Li, Yongliang Zhou, Lizheng Ren, Yeyang Xue, Xueshan Dong, Hui Gao, Yiran Zhang, Jingmin Zhang, Yuyao Kong, Tianzhu Xiong, Bo Wang, Hao Cai, Weiwei Shan, Jun Yang. ISSCC 2023: 128-129 [doi]
- RRAM Computing-in-Memory Using Transient Charge Transferring for Low-Power and Small-Latency AI Edge Inference. Linfang Wang, Junjie An, Wang Ye, Weizeng Li, Hanghang Gao, Yangu He, Jianfeng Gao, Jinshan Yue, Lingyan Fan, Chunmeng Dou. APCCAS 2021: 497-500 [doi]