Publication: Investigating contributions of speech and facial landmarks for talking head generation
dc.contributor.coauthor | N/A | |
dc.contributor.department | N/A | |
dc.contributor.department | Department of Computer Engineering | |
dc.contributor.kuauthor | Kesim, Ege | |
dc.contributor.kuauthor | Erzin, Engin | |
dc.contributor.kuprofile | Master Student | |
dc.contributor.kuprofile | Faculty Member | |
dc.contributor.other | Department of Computer Engineering | |
dc.contributor.researchcenter | Koç Üniversitesi İş Bankası Yapay Zeka Uygulama ve Araştırma Merkezi (KUIS AI)/ Koç University İş Bank Artificial Intelligence Center (KUIS AI) | |
dc.contributor.schoolcollegeinstitute | Graduate School of Sciences and Engineering | |
dc.contributor.schoolcollegeinstitute | College of Engineering | |
dc.contributor.yokid | N/A | |
dc.contributor.yokid | 34503 | |
dc.date.accessioned | 2024-11-09T22:55:57Z | |
dc.date.issued | 2021 | |
dc.description.abstract | Talking head generation is an active research problem. It has been widely studied as a direct speech-to-video or two-stage speech-to-landmarks-to-video mapping problem. In this study, our main motivation is to assess individual and joint contributions of the speech and facial landmarks to the talking head generation quality through a state-of-the-art generative adversarial network (GAN) architecture. Incorporating frame and sequence discriminators and a feature matching loss, we investigate performances of speech-only, landmark-only, and joint speech- and landmark-driven talking head generation on the CREMA-D dataset. Objective evaluations using the peak signal-to-noise ratio (PSNR), structural similarity index (SSIM) and landmark distance (LMD) indicate that while landmarks bring PSNR and SSIM improvements to the speech-driven system, speech brings LMD improvement to the landmark-driven system. Furthermore, feature matching is observed to improve the speech-driven talking head generation models significantly. | |
dc.description.indexedby | WoS | |
dc.description.indexedby | Scopus | |
dc.description.openaccess | YES | |
dc.description.publisherscope | International | |
dc.identifier.doi | 10.21437/interspeech.2021-1585 | |
dc.identifier.issn | 2308-457X | |
dc.identifier.quartile | N/A | |
dc.identifier.scopus | 2-s2.0-85119170398 | |
dc.identifier.uri | http://dx.doi.org/10.21437/interspeech.2021-1585 | |
dc.identifier.uri | https://hdl.handle.net/20.500.14288/7273 | |
dc.identifier.wos | 841879501148 | |
dc.keywords | Talking head generation | |
dc.keywords | Speech driven animation | |
dc.language | English | |
dc.publisher | International Speech Communication Association (ISCA) | |
dc.source | Interspeech 2021 | |
dc.subject | Audiology | |
dc.subject | Speech-language pathology | |
dc.subject | Computer science | |
dc.subject | Artificial intelligence | |
dc.subject | Computer science | |
dc.subject | Software engineering | |
dc.title | Investigating contributions of speech and facial landmarks for talking head generation | |
dc.type | Conference proceeding | |
dspace.entity.type | Publication | |
local.contributor.authorid | N/A | |
local.contributor.authorid | 0000-0002-2715-2368 | |
local.contributor.kuauthor | Kesim, Ege | |
local.contributor.kuauthor | Erzin, Engin | |
relation.isOrgUnitOfPublication | 89352e43-bf09-4ef4-82f6-6f9d0174ebae | |
relation.isOrgUnitOfPublication.latestForDiscovery | 89352e43-bf09-4ef4-82f6-6f9d0174ebae |