The following publications are possibly variants of this publication:
- Learning deep multimodal affective features for spontaneous speech emotion recognition. Shiqing Zhang, Xin Tao, Yuelong Chuang, Xiaoming Zhao. speech, 127:73-81, 2021. [doi]
- Learning emotion-discriminative and domain-invariant features for domain adaptation in speech emotion recognition. Qirong Mao, Guopeng Xu, Wentao Xue, Jianping Gou, Yongzhao Zhan. speech, 93:1-10, 2017. [doi]
- Learning affective representations based on magnitude and dynamic relative phase information for speech emotion recognition. Lili Guo, Longbiao Wang, Jianwu Dang, Eng Siong Chng, Seiichi Nakagawa. speech, 136:118-127, 2022. [doi]
- Speech emotion recognition using fusion of three multi-task learning-based classifiers: HSF-DNN, MS-CNN and LLD-RNN. Zengwei Yao, Zihao Wang, Weihuang Liu, Yaqian Liu, Jiahui Pan. speech, 120:11-19, 2020. [doi]
- Key-Sparse Transformer for Multimodal Speech Emotion Recognition. Weidong Chen, Xiaofeng Xing, Xiangmin Xu, Jichen Yang, Jianxin Pang. icassp 2022: 6897-6901. [doi]
- Multimodal transformer augmented fusion for speech emotion recognition. Yuanyuan Wang, Yu Gu 0015, Yifei Yin, Yingping Han, He Zhang, Shuang Wang, Chenyu Li, Dou Quan. finr, 17, June 2023. [doi]
- Hierarchical sparse coding framework for speech emotion recognition. Diana Torres-Boza, Meshia Cédric Oveneke, Fengna Wang, Dongmei Jiang, Werner Verhelst, Hichem Sahli. speech, 99:80-89, 2018. [doi]