Publication:
Automatic emotion recognition for facial expression animation from speech

dc.contributor.coauthor: Erdem, Çiğdem Eroğlu
dc.contributor.coauthor: Erdem, A. Tanju
dc.contributor.department: Department of Computer Engineering
dc.contributor.department: Graduate School of Sciences and Engineering
dc.contributor.kuauthor: Bozkurt, Elif
dc.contributor.kuauthor: Erzin, Engin
dc.contributor.schoolcollegeinstitute: College of Engineering
dc.contributor.schoolcollegeinstitute: GRADUATE SCHOOL OF SCIENCES AND ENGINEERING
dc.date.accessioned: 2024-11-09T23:04:47Z
dc.date.issued: 2009
dc.description.abstract: We present a framework for automatically generating the facial expression animation of 3D talking heads using only speech information. Our system is trained on the Berlin emotional speech dataset, which is in German and covers seven emotions. We first parameterize the speech signal with prosody-related and spectral features. We then investigate two classifier architectures for emotion recognition: Gaussian mixture model (GMM) and hidden Markov model (HMM) based classifiers. In our experiments, a GMM classifier based on Mel-frequency cepstral coefficients (MFCC) and dynamic MFCC features achieves an average emotion recognition rate of 83.42% under 5-fold stratified cross-validation (SCV). Moreover, decision fusion of two GMM classifiers based on MFCC and line spectral frequency (LSF) features yields an average recognition rate of 85.30%, and a second-stage decision fusion of this result with a prosody-based HMM classifier further raises the average recognition rate to 86.45%. These results on automatic emotion recognition for driving facial expression animation synthesis are encouraging.
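The decision-fusion step described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes each GMM classifier outputs a posterior probability per emotion class, and combines the two posterior vectors with a weighted sum before picking the highest-scoring class. The emotion labels follow the seven categories of the Berlin emotional speech dataset; the posterior values and the fusion weight below are made-up examples.

```python
import numpy as np

# The seven emotion classes of the Berlin emotional speech dataset.
EMOTIONS = ["anger", "boredom", "disgust", "fear",
            "happiness", "sadness", "neutral"]

def fuse_decisions(p_mfcc, p_lsf, w=0.5):
    """Weighted-sum fusion of two posterior vectors (each sums to 1).

    p_mfcc: posteriors from the MFCC-based GMM classifier (hypothetical)
    p_lsf:  posteriors from the LSF-based GMM classifier (hypothetical)
    w:      fusion weight given to the MFCC-based classifier
    """
    fused = w * np.asarray(p_mfcc, float) + (1.0 - w) * np.asarray(p_lsf, float)
    return EMOTIONS[int(np.argmax(fused))], fused

# Example posteriors (illustrative values only, not from the paper).
label, fused = fuse_decisions(
    [0.40, 0.05, 0.05, 0.10, 0.25, 0.05, 0.10],  # MFCC-based GMM
    [0.30, 0.05, 0.05, 0.05, 0.40, 0.05, 0.10],  # LSF-based GMM
)
print(label)  # → anger (0.35 fused score beats 0.325 for happiness)
```

The same function could be applied a second time to combine the fused GMM result with a prosody-based HMM classifier's posteriors, mirroring the two-stage fusion the abstract reports.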
dc.description.indexedby: WOS
dc.description.openaccess: NO
dc.description.publisherscope: International
dc.description.sponsoredbyTubitakEu: N/A
dc.identifier.doi: 10.1109/SIU.2009.5136564
dc.identifier.eissn: N/A
dc.identifier.isbn: 978-1-4244-4435-9
dc.identifier.issn: N/A
dc.identifier.quartile: N/A
dc.identifier.scopus: 2-s2.0-70350340452
dc.identifier.uri: https://doi.org/10.1109/SIU.2009.5136564
dc.identifier.uri: https://hdl.handle.net/20.500.14288/8685
dc.identifier.wos: 273935600068
dc.keywords: Computer science, hardware and architecture
dc.keywords: Engineering, electrical and electronic
dc.keywords: Telecommunications
dc.language.iso: tur
dc.publisher: IEEE
dc.relation.ispartof: 2009 IEEE 17th Signal Processing and Communications Applications Conference, Vols 1 and 2
dc.subject: Computer science
dc.subject: Hardware architecture
dc.subject: Engineering
dc.subject: Electrical electronic engineering
dc.subject: Telecommunications
dc.title: Automatic emotion recognition for facial expression animation from speech
dc.title.alternative: Yüz ifadesi canlandırma için konuşma sinyalinden otomatik duygu tanıma
dc.type: Conference Proceeding
dspace.entity.type: Publication
local.contributor.kuauthor: Bozkurt, Elif
local.contributor.kuauthor: Erzin, Engin
local.publication.orgunit1: GRADUATE SCHOOL OF SCIENCES AND ENGINEERING
local.publication.orgunit1: College of Engineering
local.publication.orgunit2: Department of Computer Engineering
local.publication.orgunit2: Graduate School of Sciences and Engineering
relation.isOrgUnitOfPublication: 89352e43-bf09-4ef4-82f6-6f9d0174ebae
relation.isOrgUnitOfPublication: 3fc31c89-e803-4eb1-af6b-6258bc42c3d8
relation.isOrgUnitOfPublication.latestForDiscovery: 89352e43-bf09-4ef4-82f6-6f9d0174ebae
relation.isParentOrgUnitOfPublication: 8e756b23-2d4a-4ce8-b1b3-62c794a8c164
relation.isParentOrgUnitOfPublication: 434c9663-2b11-4e66-9399-c863e2ebae43
relation.isParentOrgUnitOfPublication.latestForDiscovery: 8e756b23-2d4a-4ce8-b1b3-62c794a8c164
