Text-dependent speech emotion recognition: reading notes on related literature

Towards multimodal sentiment analysis: harvesting opinions from the web
Morency L P, Mihalcea R, Doshi P. Towards multimodal sentiment analysis: harvesting opinions from the web[C]// International Conference on Multimodal Interfaces. ACM, 2012:169-176.
This paper builds a multimodal fusion system to analyze the sentiment contained in social-media videos, fusing visual, audio, and textual information. The dataset consists of YouTube videos crawled from the web, from which the audio stream and transcript are extracted and analyzed separately. The corpus covers three sentiment classes: positive, negative, and neutral.
The audio is processed with an FFT, and a dictionary and semantic model are built for the textual information. All models use an HMM as the classifier.
The paper compares recognition using each single modality alone against multimodal fusion, and analyzes how salient changes in the visual, linguistic, and acoustic channels of a video affect the expressed sentiment. A rough sketch of the audio branch of such a pipeline is given below.
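The following is a minimal sketch, not the authors' implementation, of the audio side of the pipeline described above: FFT-based frame features plus one HMM per emotion class. It assumes Python with numpy and hmmlearn; the frame length, number of retained FFT bins, and number of hidden states are illustrative choices, and the dictionary/semantic text model and the multimodal fusion step are omitted.

```python
# Sketch only: FFT frame features + one Gaussian HMM per emotion class.
# Assumes numpy and hmmlearn; parameter values are illustrative, not the paper's.
import numpy as np
from hmmlearn.hmm import GaussianHMM

def fft_features(signal, frame_len=512, hop=256, n_bins=20):
    """Frame the signal and keep log-magnitudes of the first FFT bins."""
    frames = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        spectrum = np.abs(np.fft.rfft(signal[start:start + frame_len]))
        frames.append(np.log1p(spectrum[:n_bins]))
    return np.array(frames)          # shape: (n_frames, n_bins)

class HMMEmotionClassifier:
    """Train one HMM per class; predict the class whose HMM scores highest."""
    def __init__(self, classes, n_states=4):
        self.models = {c: GaussianHMM(n_components=n_states) for c in classes}

    def fit(self, utterances, labels):
        for c, model in self.models.items():
            feats = [fft_features(u) for u, y in zip(utterances, labels) if y == c]
            model.fit(np.vstack(feats), lengths=[len(f) for f in feats])
        return self

    def predict(self, utterance):
        feats = fft_features(utterance)
        return max(self.models, key=lambda c: self.models[c].score(feats))

# Toy usage with the three classes used in the YouTube corpus (random signals,
# only to show the call pattern):
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    classes = ["positive", "negative", "neutral"]
    utts = [rng.standard_normal(8000) for _ in range(9)]
    labels = [classes[i % 3] for i in range(9)]
    clf = HMMEmotionClassifier(classes).fit(utts, labels)
    print(clf.predict(utts[0]))
```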

Multimodal Sentiment Analysis of Social Media
Maynard D, Dupplaw D, Hare J. Multimodal Sentiment Analysis of Social Media[C]// BCS SGAI Workshop on Social Media Analysis. 2014.

Utterance-Level Multimodal Sentiment Analysis
Pérez-Rosas V, Mihalcea R, Morency L P. Utterance-Level Multimodal Sentiment Analysis[C]// Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL). 2014.

Speaker and Text Dependent Automatic Emotion Recognition from Female Speech by Using Artificial Neural Networks
Firoz S A, Raji S A, Babu A P. Speaker and text dependent automatic emotion recognition from female speech by using artificial neural networks[C]// World Congress on Nature & Biologically Inspired Computing (NaBIC 2010). IEEE, 2010:1411-1413.

Bimodal Emotion Recognition from Speech and Text
Ye W, Fan X. Bimodal Emotion Recognition from Speech and Text[J]. International Journal of Advanced Computer Science & Applications, 2015, 5(2):26-29.

Speech emotion recognition integrating local features of sentiment words
Song Minghu, Yu Zhengtao, Gao Shengxiang. Speech emotion recognition integrating local features of sentiment words[J]. Computer Engineering and Science, 2017, 39(1):194-198.

Can Prosody Inform Sentiment Analysis? Experiments on Short Spoken Reviews
Mairesse F, Polifroni J, Fabbrizio G D. Can prosody inform sentiment analysis? Experiments on short spoken reviews[C]// IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2013:5093-5096.

Speech emotion recognition combining acoustic features and linguistic information in a hybrid support vector machine-belief network architecture
Schuller B, Rigoll G, Lang M. Speech emotion recognition combining acoustic features and linguistic information in a hybrid support vector machine-belief network architecture[C]// IEEE International Conference on Acoustics, Speech, and Signal Processing, 2004. Proceedings. IEEE, 2004:I-577-80 vol.1.

Speech Emotion Recognition Exploiting Acoustic and Linguistic Information Sources
Rigoll G, Müller R, Schuller B. Speech Emotion Recognition Exploiting Acoustic and Linguistic Information Sources[C]// International Conference on Speech and Computer (SPECOM). 2005:61-67.

Multi-Modal Emotion Recognition from Speech and Text
Chuang Z J, Wu C H. Multi-Modal Emotion Recognition from Speech and Text[J]. International Journal of Computational Linguistics & Chinese Language Processing, 2004, 9(2):779-783.

Multimodal subjectivity analysis of multiparty conversation
Raaijmakers S, Truong K, Wilson T. Multimodal subjectivity analysis of multiparty conversation[C]// Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP 2008), October 25-27, 2008, Honolulu, Hawaii, USA. ACL, 2008:466-474.
