Publication: Multimodal analysis of speech and arm motion for prosody-driven synthesis of beat gestures
dc.contributor.coauthor | N/A | |
dc.contributor.department | Department of Computer Engineering | |
dc.contributor.department | Graduate School of Sciences and Engineering | |
dc.contributor.kuauthor | Bozkurt, Elif | |
dc.contributor.kuauthor | Erzin, Engin | |
dc.contributor.kuauthor | Yemez, Yücel | |
dc.contributor.schoolcollegeinstitute | College of Engineering | |
dc.contributor.schoolcollegeinstitute | GRADUATE SCHOOL OF SCIENCES AND ENGINEERING | |
dc.date.accessioned | 2024-11-09T23:03:21Z | |
dc.date.issued | 2016 | |
dc.description.abstract | We propose a framework for joint analysis of speech prosody and arm motion towards automatic synthesis and realistic animation of beat gestures from speech prosody and rhythm. In the analysis stage, we first segment motion capture data and speech audio into gesture phrases and prosodic units via temporal clustering, and assign a class label to each resulting gesture phrase and prosodic unit. We then train a discrete hidden semi-Markov model (HSMM) over the segmented data, where gesture labels are hidden states with duration statistics and frame-level prosody labels are observations. The HSMM structure allows us to effectively map sequences of shorter-duration prosodic units to longer-duration gesture phrases. In the analysis stage, we also construct a gesture pool consisting of gesture phrases segmented from the available dataset, where each gesture phrase is associated with a class label and a speech rhythm representation. In the synthesis stage, we use a modified Viterbi algorithm with a duration model, which decodes the optimal gesture label sequence with duration information over the HSMM, given a sequence of prosody labels. In the animation stage, the synthesized gesture label sequence with duration and speech rhythm information is mapped into a motion sequence by using a multiple-objective unit selection algorithm. Our framework is tested using two multimodal datasets in speaker-dependent and speaker-independent settings. The resulting motion sequence, when accompanied by the speech input, yields natural-looking and plausible animations. We use objective evaluations to set the parameters of the proposed prosody-driven gesture animation system, and subjective evaluations to assess the quality of the resulting animations. The conducted subjective evaluations show that the difference between the proposed HSMM-based synthesis and the motion capture synthesis is not statistically significant. Furthermore, the proposed HSMM-based synthesis is evaluated significantly better than a baseline synthesis which animates random gestures based only on joint angle continuity. © 2016 Elsevier B.V. All rights reserved. | |
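The abstract describes decoding an optimal gesture-label sequence, with explicit durations, over a discrete HSMM given frame-level prosody labels. The sketch below is a minimal, illustrative duration-explicit Viterbi decoder for such a model; it is not the authors' implementation, and all names, table shapes, and parameters (log_pi, log_A, log_B, log_D, max_dur) are hypothetical stand-ins for the paper's trained quantities.

```python
import numpy as np

def hsmm_viterbi(obs, log_pi, log_A, log_B, log_D, max_dur):
    """Decode the most likely gesture-label segmentation of a prosody-label
    sequence `obs` under a discrete hidden semi-Markov model.

    obs     : (T,) int array of frame-level prosody labels (observations)
    log_pi  : (S,) log initial probabilities over gesture labels (states)
    log_A   : (S, S) log transition probabilities between gesture labels
    log_B   : (S, O) log emission probabilities of prosody labels per state
    log_D   : (S, max_dur) log duration probabilities for d = 1 .. max_dur
    Returns a list of (state, start_frame, duration) segments.
    """
    T, S = len(obs), len(log_pi)
    delta = np.full((T, S), -np.inf)  # best log score of a segment of state s ending at t
    back = {}                         # (t, s) -> (prev_segment_end, prev_state, duration)

    # Cumulative emission scores so each segment's emission cost is O(1).
    cum = np.zeros((T + 1, S))
    for t in range(T):
        cum[t + 1] = cum[t] + log_B[:, obs[t]]

    for t in range(T):
        for s in range(S):
            for d in range(1, min(max_dur, t + 1) + 1):
                start = t - d + 1
                emit = cum[t + 1, s] - cum[start, s]      # frames start..t emitted by s
                score_d = log_D[s, d - 1] + emit
                if start == 0:
                    score, prev = log_pi[s] + score_d, (None, None, d)
                else:
                    prev_s = int(np.argmax(delta[start - 1] + log_A[:, s]))
                    score = delta[start - 1, prev_s] + log_A[prev_s, s] + score_d
                    prev = (start - 1, prev_s, d)
                if score > delta[t, s]:
                    delta[t, s] = score
                    back[(t, s)] = prev

    # Backtrack from the best final state, collecting segments.
    segments, t, s = [], T - 1, int(np.argmax(delta[T - 1]))
    while t is not None:
        prev_t, prev_s, d = back[(t, s)]
        segments.append((s, t - d + 1, d))
        t, s = prev_t, prev_s
    return list(reversed(segments))

# Toy usage with random (hypothetical) model tables:
rng = np.random.default_rng(0)
S, O, T, max_dur = 4, 6, 50, 10
norm_log = lambda x: np.log(x / x.sum(axis=-1, keepdims=True))
log_pi, log_A = norm_log(rng.random(S)), norm_log(rng.random((S, S)))
log_B, log_D = norm_log(rng.random((S, O))), norm_log(rng.random((S, max_dur)))
print(hsmm_viterbi(rng.integers(0, O, size=T), log_pi, log_A, log_B, log_D, max_dur))
```

This brute-force formulation runs in O(T·S²·D) time; it is meant only to make the duration-model idea concrete, and says nothing about the paper's gesture pool or its multiple-objective unit selection step.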
dc.description.indexedby | WOS | |
dc.description.indexedby | Scopus | |
dc.description.openaccess | NO | |
dc.description.publisherscope | International | |
dc.description.sponsoredbyTubitakEu | TÜBİTAK | |
dc.description.volume | 85 | |
dc.identifier.doi | 10.1016/j.specom.2016.10.004 | |
dc.identifier.eissn | 1872-7182 | |
dc.identifier.issn | 0167-6393 | |
dc.identifier.quartile | Q2 | |
dc.identifier.scopus | 2-s2.0-84992755423 | |
dc.identifier.uri | https://doi.org/10.1016/j.specom.2016.10.004 | |
dc.identifier.uri | https://hdl.handle.net/20.500.14288/8456 | |
dc.identifier.wos | 390507000004 | |
dc.keywords | Joint analysis of speech and gesture | |
dc.keywords | Speech-driven gesture animation | |
dc.keywords | Prosody-driven gesture synthesis | |
dc.keywords | Speech rhythm | |
dc.keywords | Unit selection | |
dc.keywords | Hidden semi-Markov models | |
dc.keywords | Utterances | |
dc.language.iso | eng | |
dc.publisher | Elsevier | |
dc.relation.grantno | Turk Telekom [11315-02] | |
dc.relation.grantno | TUBITAK [113E102] This work was supported by Turk Telekom under Grant Number 11315-02 and by TUBITAK under Grant Number 113E102. | |
dc.relation.ispartof | Speech Communication | |
dc.subject | Acoustics | |
dc.subject | Computer science | |
dc.title | Multimodal analysis of speech and arm motion for prosody-driven synthesis of beat gestures | |
dc.type | Journal Article | |
dspace.entity.type | Publication | |
local.contributor.kuauthor | Bozkurt, Elif | |
local.contributor.kuauthor | Yemez, Yücel | |
local.contributor.kuauthor | Erzin, Engin | |
local.publication.orgunit1 | GRADUATE SCHOOL OF SCIENCES AND ENGINEERING | |
local.publication.orgunit1 | College of Engineering | |
local.publication.orgunit2 | Department of Computer Engineering | |
local.publication.orgunit2 | Graduate School of Sciences and Engineering | |
relation.isOrgUnitOfPublication | 89352e43-bf09-4ef4-82f6-6f9d0174ebae | |
relation.isOrgUnitOfPublication | 3fc31c89-e803-4eb1-af6b-6258bc42c3d8 | |
relation.isOrgUnitOfPublication.latestForDiscovery | 89352e43-bf09-4ef4-82f6-6f9d0174ebae | |
relation.isParentOrgUnitOfPublication | 8e756b23-2d4a-4ce8-b1b3-62c794a8c164 | |
relation.isParentOrgUnitOfPublication | 434c9663-2b11-4e66-9399-c863e2ebae43 | |
relation.isParentOrgUnitOfPublication.latestForDiscovery | 8e756b23-2d4a-4ce8-b1b3-62c794a8c164 |