Publication:
Emotion dependent facial animation from affective speech

dc.contributor.coauthor: N/A
dc.contributor.department: Department of Computer Engineering
dc.contributor.department: Graduate School of Sciences and Engineering
dc.contributor.kuauthor: Asadiabadi, Sasan
dc.contributor.kuauthor: Erzin, Engin
dc.contributor.kuauthor: Sadiq, Rizwan
dc.contributor.schoolcollegeinstitute: College of Engineering
dc.contributor.schoolcollegeinstitute: GRADUATE SCHOOL OF SCIENCES AND ENGINEERING
dc.date.accessioned: 2024-11-09T23:48:00Z
dc.date.issued: 2020
dc.description.abstract: In human-to-computer interaction, facial animation in synchrony with affective speech can deliver more naturalistic conversational agents. In this paper, we present a two-stage deep learning approach for affective speech-driven facial shape animation. In the first stage, we classify affective speech into seven emotion categories. In the second stage, we train separate deep estimators within each emotion category to synthesize facial shape from the affective speech. Objective and subjective evaluations are performed over the SAVEE dataset. The proposed emotion-dependent facial shape model performs better in terms of Mean Squared Error (MSE) loss and in generating landmark animations than a universal model trained regardless of emotion.
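
The abstract describes a two-stage pipeline: a speech emotion classifier that routes each input to one of seven emotion categories, followed by a separate per-emotion estimator that maps speech features to facial shape. Below is a minimal sketch of that routing idea in PyTorch, not the authors' architecture: the module names (EmotionClassifier, ShapeEstimator), layer sizes, and the feature and landmark dimensions are illustrative assumptions.

import torch
import torch.nn as nn

NUM_EMOTIONS = 7         # seven emotion categories, as stated in the abstract
SPEECH_FEAT_DIM = 40     # assumed acoustic feature size (e.g., filterbank energies)
LANDMARK_DIM = 68 * 2    # assumed 68 two-dimensional facial landmarks

class EmotionClassifier(nn.Module):
    # Stage 1: classify affective speech features into an emotion category.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(SPEECH_FEAT_DIM, 128), nn.ReLU(),
            nn.Linear(128, NUM_EMOTIONS),
        )

    def forward(self, x):
        return self.net(x)

class ShapeEstimator(nn.Module):
    # Stage 2: regress facial landmark coordinates from speech features;
    # one such estimator is trained independently per emotion category.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(SPEECH_FEAT_DIM, 256), nn.ReLU(),
            nn.Linear(256, LANDMARK_DIM),
        )

    def forward(self, x):
        return self.net(x)

classifier = EmotionClassifier()
estimators = nn.ModuleList(ShapeEstimator() for _ in range(NUM_EMOTIONS))

def animate(speech_feats: torch.Tensor) -> torch.Tensor:
    # Route the input through the estimator of its predicted emotion.
    emotion = classifier(speech_feats).argmax(dim=-1).item()
    return estimators[emotion](speech_feats)

frame = torch.randn(1, SPEECH_FEAT_DIM)  # one frame of speech features
landmarks = animate(frame)               # -> tensor of shape (1, LANDMARK_DIM)
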
dc.description.indexedby: WOS
dc.description.indexedby: Scopus
dc.description.openaccess: NO
dc.description.sponsoredbyTubitakEu: N/A
dc.description.sponsorship: This work was supported in part by the Scientific and Technological Research Council of Turkey (TUBITAK) under grant number 217E107.
dc.identifier.isbn: 978-1-7281-9320-5
dc.identifier.issn: 2163-3517
dc.identifier.scopus: 2-s2.0-85099256959
dc.identifier.uri: https://hdl.handle.net/20.500.14288/14217
dc.identifier.wos: 652200700037
dc.keywords: Recognition
dc.language.iso: eng
dc.publisher: IEEE
dc.relation.ispartof: 2020 IEEE 22nd International Workshop on Multimedia Signal Processing (MMSP)
dc.subject: Computer science
dc.subject: Software engineering
dc.subject: Engineering
dc.subject: Electrical and electronic engineering
dc.title: Emotion dependent facial animation from affective speech
dc.type: Conference Proceeding
dspace.entity.type: Publication
local.contributor.kuauthor: Sadiq, Rizwan
local.contributor.kuauthor: Asadiabadi, Sasan
local.contributor.kuauthor: Erzin, Engin
local.publication.orgunit1: GRADUATE SCHOOL OF SCIENCES AND ENGINEERING
local.publication.orgunit1: College of Engineering
local.publication.orgunit2: Department of Computer Engineering
local.publication.orgunit2: Graduate School of Sciences and Engineering
relation.isOrgUnitOfPublication: 89352e43-bf09-4ef4-82f6-6f9d0174ebae
relation.isOrgUnitOfPublication: 3fc31c89-e803-4eb1-af6b-6258bc42c3d8
relation.isOrgUnitOfPublication.latestForDiscovery: 89352e43-bf09-4ef4-82f6-6f9d0174ebae
relation.isParentOrgUnitOfPublication: 8e756b23-2d4a-4ce8-b1b3-62c794a8c164
relation.isParentOrgUnitOfPublication: 434c9663-2b11-4e66-9399-c863e2ebae43
relation.isParentOrgUnitOfPublication.latestForDiscovery: 8e756b23-2d4a-4ce8-b1b3-62c794a8c164
