The following publications are possibly variants of this publication:
- Multimodal Data Matters: Language Model Pre-Training Over Structured and Unstructured Electronic Health Records. Sicen Liu, Xiaolong Wang 0001, Yongshuai Hou, Ge Li 0002, Hui Wang, Hui Xu, Yang Xiang 0003, Buzhou Tang. titb, 27(1):504-514, 2023. [doi]
- Boosting Modality Representation With Pre-Trained Models and Multi-Task Training for Multimodal Sentiment Analysis. Jiarui Hai, Yu-Jeh Liu, Mounya Elhilali. asru 2023: 1-8 [doi]
- MAP: Multimodal Uncertainty-Aware Vision-Language Pre-training Model. Yatai Ji, Junjie Wang, Yuan Gong, Lin Zhang, Yanru Zhu, Hongfa Wang, Jiaxing Zhang, Tetsuya Sakai, Yujiu Yang. cvpr 2023: 23262-23271 [doi]
- Memobert: Pre-Training Model with Prompt-Based Learning for Multimodal Emotion Recognition. Jinming Zhao, Ruichen Li, Qin Jin, Xinchao Wang, Haizhou Li 0001. icassp 2022: 4703-4707 [doi]
- Multimodal Pre-Training Model for Sequence-based Prediction of Protein-Protein Interaction. Yang Xue, Zijing Liu, Xiaomin Fang, Fan Wang. mlcb 2021: 34-46 [doi]
- On Pre-training Language Model for Antibody. Danqing Wang, Fei Ye, Hao Zhou 0012. iclr 2023: [doi]