Publication: Emotion dependent facial animation from affective speech
dc.contributor.coauthor | N/A | |
dc.contributor.department | Department of Computer Engineering | |
dc.contributor.department | Graduate School of Sciences and Engineering | |
dc.contributor.kuauthor | Asadiabadi, Sasan | |
dc.contributor.kuauthor | Erzin, Engin | |
dc.contributor.kuauthor | Sadiq, Rizwan | |
dc.contributor.schoolcollegeinstitute | College of Engineering | |
dc.contributor.schoolcollegeinstitute | GRADUATE SCHOOL OF SCIENCES AND ENGINEERING | |
dc.date.accessioned | 2024-11-09T23:48:00Z | |
dc.date.issued | 2020 | |
dc.description.abstract | In human-computer interaction, facial animation synchronized with affective speech can deliver more naturalistic conversational agents. In this paper, we present a two-stage deep learning approach for affective speech-driven facial shape animation. In the first stage, we classify affective speech into seven emotion categories. In the second stage, we train a separate deep estimator for each emotion category to synthesize facial shape from the affective speech. Objective and subjective evaluations are performed on the SAVEE dataset. The proposed emotion-dependent facial shape model outperforms a universal model trained across all emotions, both in Mean Squared Error (MSE) loss and in the quality of the generated landmark animations. | |
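The record does not include an implementation, but the two-stage pipeline the abstract describes can be sketched as follows. This is a minimal illustrative sketch in PyTorch, assuming per-frame speech features (e.g. MFCCs), 68 two-dimensional facial landmarks, and simple feed-forward estimators; all names, dimensions, and architectures below are assumptions for illustration, not the authors' actual models.

# Illustrative sketch only: a two-stage pipeline in the spirit of the
# abstract. Architectures, feature/landmark dimensions, and routing
# strategy are hypothetical, not taken from the paper.
import torch
import torch.nn as nn

NUM_EMOTIONS = 7    # seven emotion categories (from the abstract)
FEATURE_DIM = 40    # assumed per-frame speech feature size (e.g. MFCCs)
NUM_LANDMARKS = 68  # assumed number of 2-D facial landmarks

class EmotionClassifier(nn.Module):
    # Stage 1: speech features -> emotion category logits.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(FEATURE_DIM, 128), nn.ReLU(),
            nn.Linear(128, NUM_EMOTIONS))

    def forward(self, x):
        return self.net(x)

class ShapeEstimator(nn.Module):
    # Stage 2: speech features -> (x, y) coordinates of facial landmarks.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(FEATURE_DIM, 256), nn.ReLU(),
            nn.Linear(256, NUM_LANDMARKS * 2))

    def forward(self, x):
        return self.net(x)

class EmotionDependentAnimator(nn.Module):
    # Classify the emotion first, then route each sample to the
    # estimator trained for that emotion category.
    def __init__(self):
        super().__init__()
        self.classifier = EmotionClassifier()
        self.estimators = nn.ModuleList(
            [ShapeEstimator() for _ in range(NUM_EMOTIONS)])

    def forward(self, x):
        emotions = self.classifier(x).argmax(dim=-1)
        shapes = torch.stack(
            [self.estimators[int(e)](xi) for e, xi in zip(emotions, x)])
        return shapes, emotions

if __name__ == "__main__":
    model = EmotionDependentAnimator()
    speech = torch.randn(4, FEATURE_DIM)   # dummy batch of 4 frames
    shapes, emotions = model(speech)
    print(shapes.shape, emotions.shape)    # torch.Size([4, 136]) torch.Size([4])

Training each ShapeEstimator only on data from its own emotion category with an MSE loss mirrors the comparison the abstract reports against a single universal model trained regardless of emotion.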
dc.description.indexedby | WOS | |
dc.description.indexedby | Scopus | |
dc.description.openaccess | NO | |
dc.description.sponsoredbyTubitakEu | N/A | |
dc.description.sponsorship | This work was supported in part by the Scientific and Technological Research Council of Turkey (TUBITAK) under grant number 217E107. | |
dc.identifier.isbn | 978-1-7281-9320-5 | |
dc.identifier.issn | 2163-3517 | |
dc.identifier.scopus | 2-s2.0-85099256959 | |
dc.identifier.uri | https://hdl.handle.net/20.500.14288/14217 | |
dc.identifier.wos | 652200700037 | |
dc.keywords | Recognition | |
dc.language.iso | eng | |
dc.publisher | IEEE | |
dc.relation.ispartof | 2020 IEEE 22nd International Workshop on Multimedia Signal Processing (MMSP) | |
dc.subject | Computer science | |
dc.subject | Software engineering | |
dc.subject | Engineering | |
dc.subject | Electrical and electronic engineering | |
dc.title | Emotion dependent facial animation from affective speech | |
dc.type | Conference Proceeding | |
dspace.entity.type | Publication | |
local.contributor.kuauthor | Sadiq, Rizwan | |
local.contributor.kuauthor | Asadiabadi, Sasan | |
local.contributor.kuauthor | Erzin, Engin | |
local.publication.orgunit1 | GRADUATE SCHOOL OF SCIENCES AND ENGINEERING | |
local.publication.orgunit1 | College of Engineering | |
local.publication.orgunit2 | Department of Computer Engineering | |
local.publication.orgunit2 | Graduate School of Sciences and Engineering | |
relation.isOrgUnitOfPublication | 89352e43-bf09-4ef4-82f6-6f9d0174ebae | |
relation.isOrgUnitOfPublication | 3fc31c89-e803-4eb1-af6b-6258bc42c3d8 | |
relation.isOrgUnitOfPublication.latestForDiscovery | 89352e43-bf09-4ef4-82f6-6f9d0174ebae | |
relation.isParentOrgUnitOfPublication | 8e756b23-2d4a-4ce8-b1b3-62c794a8c164 | |
relation.isParentOrgUnitOfPublication | 434c9663-2b11-4e66-9399-c863e2ebae43 | |
relation.isParentOrgUnitOfPublication.latestForDiscovery | 8e756b23-2d4a-4ce8-b1b3-62c794a8c164 |