Publication: Comparison of phoneme and viseme based acoustic units for speech driven realistic lip animation
dc.contributor.coauthor | Bozkurt, Elif | |
dc.contributor.coauthor | Erdem, Çiğdem Eroğlu | |
dc.contributor.coauthor | Erdem, Tanju | |
dc.contributor.coauthor | Özkan, Mehmet | |
dc.contributor.department | Department of Computer Engineering | |
dc.contributor.kuauthor | Erzin, Engin | |
dc.contributor.kuprofile | Faculty Member | |
dc.contributor.other | Department of Computer Engineering | |
dc.contributor.schoolcollegeinstitute | College of Engineering | |
dc.contributor.yokid | 34503 | |
dc.date.accessioned | 2024-11-10T00:01:35Z | |
dc.date.issued | 2007 | |
dc.description.abstract | Natural-looking lip animation, synchronized with incoming speech, is essential for realistic character animation. In this work, we evaluate the performance of phoneme- and viseme-based acoustic units, with and without context information, for generating realistic lip synchronization using HMM-based recognition systems. We conclude via objective evaluations that the use of viseme-based units with context information outperforms the other methods. / Abstract (Turkish, translated): Producing natural-looking lip movements synchronized with speech is an important problem for realistic character animation. In this study, we compare the performance of phoneme- and viseme-based acoustic units for generating realistic lip movements using Hidden Markov Models (HMMs). Through objective evaluations, we show that viseme-based acoustic units that use context information outperform the other methods. | |
dc.description.indexedby | Scopus | |
dc.description.indexedby | WoS | |
dc.description.openaccess | YES | |
dc.description.publisherscope | International | |
dc.description.sponsoredbyTubitakEu | TÜBİTAK | |
dc.identifier.doi | 10.1109/SIU.2007.4298572 | |
dc.identifier.isbn | 1-4244-0719-2 | |
dc.identifier.isbn | 978-1-4244-0719-4 | |
dc.identifier.link | https://www.scopus.com/inward/record.uri?eid=2-s2.0-50249153615&doi=10.1109%2fSIU.2007.4298572&partnerID=40&md5=50fa5f56f4d0dad09b2202ec2e4412f6 | |
dc.identifier.quartile | N/A | |
dc.identifier.scopus | 2-s2.0-50249153615 | |
dc.identifier.uri | http://dx.doi.org/10.1109/SIU.2007.4298572 | |
dc.identifier.uri | https://hdl.handle.net/20.500.14288/15987 | |
dc.identifier.wos | 252924600106 | |
dc.keywords | Character animation | |
dc.keywords | Context information | |
dc.keywords | HMM based recognition systems | |
dc.keywords | Lip synchronization | |
dc.keywords | Speech driven | |
dc.keywords | Acoustics | |
dc.keywords | Animation | |
dc.keywords | Signal processing | |
dc.keywords | Telephone systems | |
dc.keywords | Speech | |
dc.language | Turkish | |
dc.publisher | IEEE | |
dc.source | 2007 IEEE 15th Signal Processing and Communications Applications, SIU | |
dc.subject | Engineering | |
dc.subject | Electrical electronics engineering | |
dc.subject | Engineering | |
dc.subject | Computer engineering | |
dc.title | Comparison of phoneme and viseme based acoustic units for speech driven realistic lip animation | |
dc.title.alternative | Gerçekçi dudak animasyonu için fonem ve vizeme dayalı akustik birimlerin karşılaştırması | |
dc.type | Conference proceeding | |
dspace.entity.type | Publication | |
local.contributor.authorid | 0000-0002-2715-2368 | |
local.contributor.kuauthor | Erzin, Engin | |
relation.isOrgUnitOfPublication | 89352e43-bf09-4ef4-82f6-6f9d0174ebae | |
relation.isOrgUnitOfPublication.latestForDiscovery | 89352e43-bf09-4ef4-82f6-6f9d0174ebae |