Automatic Speech Emotion Recognition Using Recurrent Neural Networks with Local Attention | Microsoft Research


Automatic emotion recognition from speech is a challenging task that relies heavily on extracting emotionally relevant features from the speech signal. In this study, our goal is to use deep learning to discover those emotionally relevant features automatically. We show that a deep Recurrent Neural Network (RNN) can learn both short-time, frame-level acoustic features that are emotionally relevant and an appropriate temporal aggregation of those features into a compact sentence-level representation. Moreover, we propose a novel strategy for feature pooling over time that uses an attention mechanism with the RNN, allowing the model to focus on local regions of the speech signal that are more emotionally salient. The proposed approach was evaluated on the IEMOCAP emotion corpus and shown to provide more accurate predictions than existing emotion recognition algorithms.
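To make the idea of attention-based temporal pooling concrete, here is a minimal PyTorch sketch of an RNN whose frame-level outputs are combined through a learned attention weighting into a single sentence-level representation. The layer sizes, the choice of a GRU, and the feature dimensions are illustrative assumptions for this sketch, not the exact configuration used in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class AttentivePoolingRNN(nn.Module):
    """Illustrative sketch: RNN over acoustic frames + attention pooling over time."""

    def __init__(self, n_features=34, hidden_size=128, n_emotions=4):
        super().__init__()
        # Frame-level encoder: a recurrent layer over short-time acoustic frames.
        self.rnn = nn.GRU(n_features, hidden_size, batch_first=True)
        # Scalar attention score per frame, learned from the RNN output.
        self.attention = nn.Linear(hidden_size, 1)
        # Sentence-level emotion classifier over the pooled representation.
        self.classifier = nn.Linear(hidden_size, n_emotions)

    def forward(self, frames):
        # frames: (batch, time, n_features) frame-level acoustic features
        outputs, _ = self.rnn(frames)                  # (batch, time, hidden)
        scores = self.attention(outputs).squeeze(-1)   # (batch, time)
        weights = F.softmax(scores, dim=1)             # attention weights over time
        # Weighted sum over time emphasizes emotionally salient regions and
        # yields a fixed-length, sentence-level representation.
        pooled = torch.sum(weights.unsqueeze(-1) * outputs, dim=1)
        return self.classifier(pooled)


# Example usage with random tensors standing in for real acoustic features.
model = AttentivePoolingRNN()
dummy = torch.randn(8, 300, 34)   # 8 utterances, 300 frames, 34 features each
logits = model(dummy)             # (8, 4) emotion class scores
```

Compared with simple mean pooling over time, the softmax attention weights let the model down-weight silent or emotionally neutral frames and concentrate on the locally salient portions of the utterance.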

See more on this video at https://www.microsoft.com/en-us/research/video/automatic-speech-emotion-recognition-using-recurrent-neural-networks-local-attention/