The following publications are possibly variants of this publication:
- MulTCIM: A 28nm 2.24µJ/Token Attention-Token-Bit Hybrid Sparse Digital CIM-Based Accelerator for Multimodal Transformers. Fengbin Tu, Zihan Wu 0006, Yiqi Wang, Weiwei Wu, Leibo Liu, Yang Hu 0001, Shaojun Wei, Shouyi Yin. ISSCC 2023: 248-249 [doi]
- A 28nm 15.59µJ/Token Full-Digital Bitline-Transpose CIM-Based Sparse Transformer Accelerator with Pipeline/Parallel Reconfigurable Modes. Fengbin Tu, Zihan Wu, Yiqi Wang, Ling Liang, Liu Liu, Yufei Ding, Leibo Liu, Shaojun Wei, Yuan Xie, Shouyi Yin. ISSCC 2022: 466-468 [doi]
- A 28nm 16.9-300TOPS/W Computing-in-Memory Processor Supporting Floating-Point NN Inference/Training with Intensive-CIM Sparse-Digital Architecture. Jinshan Yue, Chaojie He, Zi Wang, Zhaori Cong, Yifan He, Mufeng Zhou, Wenyu Sun, Xueqing Li, Chunmeng Dou, Feng Zhang 0014, Huazhong Yang, Yongpan Liu, Ming Liu 0022. ISSCC 2023: 252-253 [doi]