The following publications are possibly variants of this publication:
- Self-attention learning network for face super-resolution. Kangli Zeng, Zhongyuan Wang 0001, Tao Lu 0001, Jianyu Chen, Jiaming Wang, Zixiang Xiong. NN, 160:164-174, March 2023. [doi]
- Learning Spatial Attention for Face Super-Resolution. Chaofeng Chen, Dihong Gong, Hao Wang 0050, Zhifeng Li 0003, Kwan-Yee K. Wong. TIP, 30:1219-1231, 2021. [doi]
- Efficient Multi-Scale Cosine Attention Transformer for Image Super-Resolution. Yuzhen Chen, Gencheng Wang, Rong Chen. SPL, 30:1442-1446, 2023. [doi]
- Dual Self-Attention Swin Transformer for Hyperspectral Image Super-Resolution. Yaqian Long, Xun Wang, Meng Xu, Shuyu Zhang, Shuguo Jiang, Sen Jia. TGRS, 61:1-12, 2023. [doi]
- Exploiting Multi-Scale Parallel Self-Attention and Local Variation via Dual-Branch Transformer-CNN Structure for Face Super-Resolution. Jingang Shi, Yusi Wang, Zitong Yu, Guanxin Li, Xiaopeng Hong, Fei Wang 0037, Yihong Gong. TMM, 26:2608-2620, 2024. [doi]
- A discriminative self-attention cycle GAN for face super-resolution and recognition. Xiaoguang Li, Ning Dong, Jianglu Huang, Li Zhuo 0001, Jiafeng Li. IET-IPR, 15(11):2614-2628, 2021. [doi]
- A Face Structure Attention Network for Face Super-Resolution. Chengjie Li, Nanfeng Xiao. ICPR 2022: 75-81 [doi]