Publication: Interspeech 2009 emotion recognition challenge evaluation
Program
KU Authors
Co-Authors
Erdem, Çiğdem Eroğlu
Erdem, A. Tanju
Publication Date
Language
Embargo Status
Journal Title
Journal ISSN
Volume Title
Alternative Title
Interspeech 2009 duygu tanıma yarışması değerlendirmesi
Abstract
In this paper, we evaluate the INTERSPEECH 2009 Emotion Recognition Challenge results. The challenge poses the problem of accurately classifying natural, emotionally rich FAU Aibo recordings into five and two emotion classes. We evaluate prosody-related, spectral, and HMM-based features with Gaussian mixture model (GMM) classifiers to address this problem. The spectral features consist of mel-frequency cepstral coefficients (MFCC), line spectral frequency (LSF) features, and their derivatives, whereas the prosody-related features consist of pitch, the first derivative of pitch, and intensity. We employ unsupervised training of HMM structures with prosody-related temporal features to define HMM-based features. We also investigate data fusion of different features and decision fusion of different classifiers to improve emotion recognition results. Our two-stage decision fusion method achieves 41.59% and 67.90% recall rates for the five- and two-class problems, respectively, and takes second and fourth place among the overall challenge results. ©2010 IEEE.
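The frame-level feature extraction and per-class GMM scoring that the abstract describes can be illustrated with a minimal sketch. This is not the authors' code: it covers only MFCC, pitch, and intensity features with one GMM per emotion class, omitting the LSF features, the HMM-based features, and the data/decision fusion stages, and the library choices (librosa, scikit-learn), GMM size, and pitch range are assumptions.

# Illustrative sketch only: frame-level spectral/prosodic features scored
# against one GMM per emotion class. Library choices and parameters are
# assumptions, not the authors' implementation.
import numpy as np
import librosa
from sklearn.mixture import GaussianMixture

def frame_features(path, sr=16000):
    """MFCCs and their deltas, pitch and its delta, and frame intensity (RMS)."""
    y, sr = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    d_mfcc = librosa.feature.delta(mfcc)
    f0 = librosa.yin(y, fmin=65, fmax=400, sr=sr)[np.newaxis, :]  # assumed pitch range
    d_f0 = librosa.feature.delta(f0)
    rms = librosa.feature.rms(y=y)
    n = min(mfcc.shape[1], f0.shape[1], rms.shape[1])  # align frame counts
    feats = np.vstack([mfcc[:, :n], d_mfcc[:, :n], f0[:, :n], d_f0[:, :n], rms[:, :n]])
    return feats.T  # (frames, feature dims)

def train_gmms(files_by_class, n_components=16):
    """Fit one diagonal-covariance GMM per emotion class on pooled frames."""
    gmms = {}
    for label, files in files_by_class.items():
        X = np.vstack([frame_features(f) for f in files])
        gmms[label] = GaussianMixture(n_components=n_components,
                                      covariance_type="diag").fit(X)
    return gmms

def classify(path, gmms):
    """Pick the class whose GMM gives the highest mean frame log-likelihood."""
    X = frame_features(path)
    return max(gmms, key=lambda label: gmms[label].score(X))

For example, calling train_gmms with a dictionary mapping emotion labels to lists of training utterances, then classify on a test clip, returns the label of the best-scoring class model; the challenge system additionally fuses such scores with other feature streams and classifiers.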
Source
Publisher
IEEE
Subject
Computer engineering
Citation
Has Part
Source
SIU 2010 - IEEE 18th Signal Processing and Communications Applications Conference
Book Series Title
Edition
DOI
10.1109/SIU.2010.5649919