Publication: Analysis and synthesis of multiview audio-visual dance figures
dc.contributor.coauthor | Canton-Ferrer C. | |
dc.contributor.coauthor | Tilmanne J. | |
dc.contributor.coauthor | Balcı K. | |
dc.contributor.coauthor | Bozkurt E. | |
dc.contributor.coauthor | Kızoğlu I. | |
dc.contributor.coauthor | Akarun L. | |
dc.contributor.coauthor | Erdem A.T. | |
dc.contributor.department | Department of Electrical and Electronics Engineering | |
dc.contributor.department | Department of Computer Engineering | |
dc.contributor.department | Department of Computer Engineering | |
dc.contributor.department | N/A | |
dc.contributor.department | N/A | |
dc.contributor.kuauthor | Tekalp, Ahmet Murat | |
dc.contributor.kuauthor | Erzin, Engin | |
dc.contributor.kuauthor | Yemez, Yücel | |
dc.contributor.kuauthor | Ofli, Ferda | |
dc.contributor.kuauthor | Demir, Yasemin | |
dc.contributor.kuprofile | Faculty Member | |
dc.contributor.kuprofile | Faculty Member | |
dc.contributor.kuprofile | Faculty Member | |
dc.contributor.kuprofile | PhD Student | |
dc.contributor.kuprofile | Master Student | |
dc.contributor.other | Department of Electrical and Electronics Engineering | |
dc.contributor.other | Department of Computer Engineering | |
dc.contributor.schoolcollegeinstitute | College of Engineering | |
dc.contributor.schoolcollegeinstitute | College of Engineering | |
dc.contributor.schoolcollegeinstitute | College of Engineering | |
dc.contributor.schoolcollegeinstitute | Graduate School of Sciences and Engineering | |
dc.contributor.schoolcollegeinstitute | Graduate School of Sciences and Engineering | |
dc.contributor.schoolcollegeinstitute | Graduate School of Sciences and Engineering | |
dc.contributor.yokid | 26207 | |
dc.contributor.yokid | 34503 | |
dc.contributor.yokid | 107907 | |
dc.contributor.yokid | N/A | |
dc.contributor.yokid | N/A | |
dc.contributor.yokid | N/A | |
dc.date.accessioned | 2024-11-09T23:27:28Z | |
dc.date.issued | 2008 | |
dc.description.abstract | This paper presents a framework for audio-driven human body motion analysis and synthesis. The video is analyzed to capture the time-varying posture of the dancer's body whereas the musical audio signal is processed to extract the beat information. The human body posture is extracted from multiview video information without any human intervention using a novel marker-based algorithm based on annealing particle filtering. Body movements of the dancer are characterized by a set of recurring semantic motion patterns, i.e., dance figures. Each dance figure is modeled in a supervised manner with a set of HMM (Hidden Markov Model) structures and the associated beat frequency. In synthesis, given an audio signal of a learned musical type, the motion parameters of the corresponding dance figures are synthesized via the trained HMM structures in synchrony with the input audio signal based on the estimated tempo information. Finally, the generated motion parameters are animated along with the musical audio using a graphical animation tool. Experimental results demonstrate the effectiveness of the proposed framework. | |
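dc.description.note | The pipeline summarized in the abstract (per-figure HMMs trained on pose parameters, then beat-synchronous sampling of motion at synthesis time) can be illustrated with the minimal sketch below. This is not the authors' implementation: the hmmlearn and librosa libraries, the 30 fps animation rate, and the one-figure-per-beat mapping are assumptions made purely for illustration. | |

```python
# Minimal sketch of the analyse/synthesize loop described in the abstract
# (illustrative only, not the paper's code). Assumptions: hmmlearn for the
# per-figure HMMs, librosa for tempo estimation, and pose sequences stored
# as (T, D) NumPy arrays of motion parameters.
import numpy as np
import librosa
from hmmlearn import hmm

def train_figure_model(sequences, n_states=8):
    """Fit one Gaussian HMM to all training examples of a single dance figure."""
    X = np.vstack(sequences)                # stacked (T_i, D) pose sequences
    lengths = [len(s) for s in sequences]   # per-example lengths for hmmlearn
    model = hmm.GaussianHMM(n_components=n_states,
                            covariance_type="diag", n_iter=50)
    model.fit(X, lengths)
    return model

def synthesize_to_beat(model, audio, sr, frames_per_figure=30, fps=30):
    """Sample one figure's motion parameters and stretch them to the estimated tempo."""
    tempo, _ = librosa.beat.beat_track(y=audio, sr=sr)  # beats per minute
    beat_period = 60.0 / float(tempo)                   # seconds per beat
    motion, _ = model.sample(frames_per_figure)         # (frames, D) sampled poses
    # Linearly resample each pose dimension so one figure spans one beat period
    # at the assumed animation frame rate.
    target_len = max(2, int(round(beat_period * fps)))
    t_src = np.linspace(0.0, 1.0, len(motion))
    t_dst = np.linspace(0.0, 1.0, target_len)
    return np.column_stack([np.interp(t_dst, t_src, motion[:, d])
                            for d in range(motion.shape[1])])
```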
dc.description.indexedby | Scopus | |
dc.description.indexedby | WoS | |
dc.description.openaccess | YES | |
dc.description.publisherscope | International | |
dc.identifier.doi | 10.1109/SIU.2008.4632725 | |
dc.identifier.isbn | 978-1-4244-1999-9 | |
dc.identifier.link | https://www.scopus.com/inward/record.uri?eid=2-s2.0-56449084971&doi=10.1109%2fSIU.2008.4632725&partnerID=40&md5=6fbea9f765311bdac05a192a8d0acf15 | |
dc.identifier.quartile | N/A | |
dc.identifier.scopus | 2-s2.0-56449084971 | |
dc.identifier.uri | http://dx.doi.org/10.1109/SIU.2008.4632725 | |
dc.identifier.uri | https://hdl.handle.net/20.500.14288/11721 | |
dc.identifier.wos | 261359200188 | |
dc.keywords | Animation | |
dc.keywords | Annealing | |
dc.keywords | Hidden Markov models | |
dc.keywords | Information theory | |
dc.keywords | Markov processes | |
dc.keywords | Model structures | |
dc.keywords | Signal analysis | |
dc.keywords | Signal filtering and prediction | |
dc.keywords | Signal processing | |
dc.keywords | Analysis and synthesis | |
dc.keywords | Audio signals | |
dc.keywords | Audio visuals | |
dc.keywords | Beat frequencies | |
dc.keywords | Body movements | |
dc.keywords | Graphical animations | |
dc.keywords | HMM structures | |
dc.keywords | Human body motions | |
dc.keywords | Human body postures | |
dc.keywords | Human interventions | |
dc.keywords | Motion parameters | |
dc.keywords | Motion patterns | |
dc.keywords | Multiview videos | |
dc.keywords | Musical audio | |
dc.keywords | Musical audio signals | |
dc.keywords | Particle Filtering | |
dc.keywords | Audio acoustics | |
dc.language | Turkish | |
dc.publisher | IEEE | |
dc.source | 2008 IEEE 16th Signal Processing, Communication and Applications Conference, SIU | |
dc.subject | Electrical electronics engineering | |
dc.subject | Computer engineering | |
dc.title | Analysis and synthesis of multiview audio-visual dance figures | |
dc.title.alternative | Çok bakışlı işitsel-görsel dans verilerinin analizi ve sentezi | |
dc.type | Conference proceeding | |
dspace.entity.type | Publication | |
local.contributor.authorid | 0000-0003-1465-8121 | |
local.contributor.authorid | 0000-0002-2715-2368 | |
local.contributor.authorid | 0000-0002-7515-3138 | |
local.contributor.authorid | 0000-0003-3918-3230 | |
local.contributor.authorid | N/A | |
local.contributor.authorid | N/A | |
local.contributor.kuauthor | Tekalp, Ahmet Murat | |
local.contributor.kuauthor | Erzin, Engin | |
local.contributor.kuauthor | Yemez, Yücel | |
local.contributor.kuauthor | Ofli, Ferda | |
local.contributor.kuauthor | Demir, Yasemin | |
relation.isOrgUnitOfPublication | 21598063-a7c5-420d-91ba-0cc9b2db0ea0 | |
relation.isOrgUnitOfPublication | 89352e43-bf09-4ef4-82f6-6f9d0174ebae | |
relation.isOrgUnitOfPublication.latestForDiscovery | 21598063-a7c5-420d-91ba-0cc9b2db0ea0 |