Publication:
Formant position based weighted spectral features for emotion recognition

dc.contributor.coauthor: Erdem, Çiğdem Eroğlu
dc.contributor.coauthor: Erdem, Arif Tanju
dc.contributor.department: Department of Computer Engineering
dc.contributor.department: Graduate School of Sciences and Engineering
dc.contributor.kuauthor: Bozkurt, Elif
dc.contributor.kuauthor: Erzin, Engin
dc.contributor.schoolcollegeinstitute: College of Engineering
dc.contributor.schoolcollegeinstitute: GRADUATE SCHOOL OF SCIENCES AND ENGINEERING
dc.date.accessioned: 2024-11-09T23:39:22Z
dc.date.issued: 2011
dc.description.abstract: In this paper, we propose novel spectrally weighted mel-frequency cepstral coefficient (WMFCC) features for emotion recognition from speech. The idea is based on the fact that formant locations carry emotion-related information, and therefore critical spectral bands around formant locations can be emphasized during the calculation of MFCC features. The spectral weighting is derived from the normalized inverse harmonic mean function of the line spectral frequency (LSF) features, which are known to be localized around formant frequencies. This approach can be considered an early data fusion of spectral content and formant location information. We also investigate methods for late decision fusion of unimodal classifiers. We evaluate the proposed WMFCC features together with the standard spectral and prosody features using HMM-based classifiers on the spontaneous FAU Aibo emotional speech corpus. The results show that unimodal classifiers with the WMFCC features perform significantly better than the classifiers with standard spectral features. Late decision fusion of classifiers provides further significant performance improvements. (C) 2011 Elsevier B.V. All rights reserved.
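The weighting idea in the abstract can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: it assumes the commonly used inverse-harmonic-mean form, in which each LSF is weighted by the reciprocal distances to its two neighbours (closely spaced LSFs, which cluster near formants, get large weights), and the per-LSF weights are then interpolated onto the FFT grid to emphasize the spectrum before the usual mel-filterbank/DCT stages. The function names, the toy 8th-order LSF values, and the linear interpolation step are all illustrative choices.

```python
import numpy as np

def lsf_weights(lsf):
    """Per-LSF weights from the inverse harmonic mean of the distances to
    neighbouring LSFs, normalized to sum to 1 (an assumed form of the
    weighting; closely spaced LSFs near formants get larger weights)."""
    # Pad with the band edges 0 and pi so every LSF has two neighbours.
    ext = np.concatenate(([0.0], np.asarray(lsf, dtype=float), [np.pi]))
    left = ext[1:-1] - ext[:-2]    # distance to the previous LSF
    right = ext[2:] - ext[1:-1]    # distance to the next LSF
    w = 1.0 / left + 1.0 / right   # large when either neighbour is close
    return w / w.sum()

def spectral_weighting(lsf, n_bins):
    """Interpolate the per-LSF weights onto a dense frequency grid
    (normalized frequencies 0..pi mapped to n_bins spectral bins)."""
    freqs = np.linspace(0.0, np.pi, n_bins)
    return np.interp(freqs, lsf, lsf_weights(lsf))

# Hypothetical usage: emphasize the magnitude spectrum of one frame
# before feeding it to a standard mel-filterbank + DCT (MFCC) pipeline.
lsf = np.array([0.3, 0.35, 1.0, 1.1, 1.9, 2.0, 2.6, 2.8])  # toy 8th-order LSFs
spectrum = np.ones(257)                                    # placeholder frame
weighted = spectrum * spectral_weighting(lsf, 257)
```

Note that the closely spaced pairs in the toy LSF vector (e.g. 0.3/0.35) receive the largest weights, which is exactly the behaviour the paper exploits: spectral bands around formant locations are emphasized relative to the rest of the spectrum.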
dc.description.indexedby: WOS
dc.description.indexedby: Scopus
dc.description.issue: 45208
dc.description.openaccess: NO
dc.description.publisherscope: International
dc.description.sponsoredbyTubitakEu: N/A
dc.description.sponsorship: Turkish Scientific and Technical Research Council (TUBITAK) [106E201, COST2102, 110E056]. This work was supported in part by the Turkish Scientific and Technical Research Council (TUBITAK) under projects 106E201 (COST2102 action) and 110E056. The authors would like to acknowledge and thank the anonymous referees for their valuable comments that have significantly improved the quality of the paper.
dc.description.volume: 53
dc.identifier.doi: 10.1016/j.specom.2011.04.003
dc.identifier.eissn: 1872-7182
dc.identifier.issn: 0167-6393
dc.identifier.quartile: Q2
dc.identifier.scopus: 2-s2.0-79960848203
dc.identifier.uri: https://doi.org/10.1016/j.specom.2011.04.003
dc.identifier.uri: https://hdl.handle.net/20.500.14288/13099
dc.identifier.wos: 294104000010
dc.keywords: Emotion recognition
dc.keywords: Emotional speech classification
dc.keywords: Spectral features
dc.keywords: Formant frequency
dc.keywords: Line spectral frequency
dc.keywords: Decision fusion
dc.language.iso: eng
dc.publisher: Elsevier
dc.relation.ispartof: Speech Communication
dc.subject: Acoustics
dc.subject: Computer science
dc.title: Formant position based weighted spectral features for emotion recognition
dc.type: Journal Article
dspace.entity.type: Publication
local.contributor.kuauthor: Bozkurt, Elif
local.contributor.kuauthor: Erzin, Engin
local.publication.orgunit1: GRADUATE SCHOOL OF SCIENCES AND ENGINEERING
local.publication.orgunit1: College of Engineering
local.publication.orgunit2: Department of Computer Engineering
local.publication.orgunit2: Graduate School of Sciences and Engineering
relation.isOrgUnitOfPublication: 89352e43-bf09-4ef4-82f6-6f9d0174ebae
relation.isOrgUnitOfPublication: 3fc31c89-e803-4eb1-af6b-6258bc42c3d8
relation.isOrgUnitOfPublication.latestForDiscovery: 89352e43-bf09-4ef4-82f6-6f9d0174ebae
relation.isParentOrgUnitOfPublication: 8e756b23-2d4a-4ce8-b1b3-62c794a8c164
relation.isParentOrgUnitOfPublication: 434c9663-2b11-4e66-9399-c863e2ebae43
relation.isParentOrgUnitOfPublication.latestForDiscovery: 8e756b23-2d4a-4ce8-b1b3-62c794a8c164