Publication: Multicamera audio-visual analysis of dance figures
dc.contributor.coauthor | N/A | |
dc.contributor.department | N/A | |
dc.contributor.department | Department of Computer Engineering | |
dc.contributor.department | Department of Computer Engineering | |
dc.contributor.department | Department of Electrical and Electronics Engineering | |
dc.contributor.kuauthor | Ofli, Ferda | |
dc.contributor.kuauthor | Erzin, Engin | |
dc.contributor.kuauthor | Yemez, Yücel | |
dc.contributor.kuauthor | Tekalp, Ahmet Murat | |
dc.contributor.kuprofile | PhD Student | |
dc.contributor.kuprofile | Faculty Member | |
dc.contributor.kuprofile | Faculty Member | |
dc.contributor.kuprofile | Faculty Member | |
dc.contributor.other | Department of Computer Engineering | |
dc.contributor.other | Department of Electrical and Electronics Engineering | |
dc.contributor.schoolcollegeinstitute | Graduate School of Sciences and Engineering | |
dc.contributor.schoolcollegeinstitute | College of Engineering | |
dc.contributor.schoolcollegeinstitute | College of Engineering | |
dc.contributor.schoolcollegeinstitute | College of Engineering | |
dc.contributor.yokid | N/A | |
dc.contributor.yokid | 34503 | |
dc.contributor.yokid | 107907 | |
dc.contributor.yokid | 26207 | |
dc.date.accessioned | 2024-11-10T00:12:12Z | |
dc.date.issued | 2007 | |
dc.description.abstract | We present an automated system for multicamera motion capture and audio-visual analysis of dance figures. The multiview video of a dancing actor is acquired using 8 synchronized cameras. The motion capture technique is based on 3D tracking of the markers attached to the person's body in the scene, using stereo color information without need for an explicit 3D model. The resulting set of 3D points is then used to extract the body motion features as 3D displacement vectors, whereas mel-frequency cepstral coefficients (MFCCs) serve as the audio features. In the first stage of multimodal analysis, we perform Hidden Markov Model (HMM) based unsupervised temporal segmentation of the audio and body motion features, separately, to determine the recurrent elementary audio and body motion patterns. Then in the second stage, we investigate the correlation of body motion patterns with audio patterns, which can be used for estimation and synthesis of realistic audio-driven body animation. | |
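The abstract's two-stage analysis hinges on HMM decoding to segment feature streams into recurrent patterns. As an illustrative sketch only (not the authors' implementation), the following minimal Viterbi decoder in Python assigns the most likely hidden-state sequence to a toy observation stream; the two-state "rest"/"move" model and all probabilities are hypothetical placeholders.

```python
# Minimal Viterbi decoder: the dynamic-programming core of HMM-based
# temporal segmentation. Toy example, not the paper's actual system.

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Return the most likely hidden-state path for an observation sequence."""
    # Initialize with start probabilities times first emission
    V = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    path = {s: [s] for s in states}
    for t in range(1, len(obs)):
        V.append({})
        new_path = {}
        for s in states:
            # Choose the best predecessor state for s at time t
            prob, prev = max(
                (V[t - 1][p] * trans_p[p][s] * emit_p[s][obs[t]], p)
                for p in states
            )
            V[t][s] = prob
            new_path[s] = path[prev] + [s]
        path = new_path
    # Backtrack from the most probable final state
    best = max(states, key=lambda s: V[-1][s])
    return path[best]

# Hypothetical 2-state model: "rest" vs. "move" segments of a motion stream,
# observing quantized displacement magnitudes ("low"/"high")
states = ("rest", "move")
start_p = {"rest": 0.6, "move": 0.4}
trans_p = {"rest": {"rest": 0.7, "move": 0.3},
           "move": {"rest": 0.4, "move": 0.6}}
emit_p = {"rest": {"low": 0.8, "high": 0.2},
          "move": {"low": 0.3, "high": 0.7}}

segments = viterbi(("low", "low", "high", "high"),
                   states, start_p, trans_p, emit_p)
# segments labels each frame with its most likely elementary pattern
```

In practice, unsupervised segmentation of this kind would also require learning the HMM parameters (e.g. via Baum-Welch) rather than fixing them by hand as above.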
dc.description.indexedby | WoS | |
dc.description.indexedby | Scopus | |
dc.description.openaccess | NO | |
dc.description.publisherscope | International | |
dc.identifier.doi | N/A | |
dc.identifier.isbn | 978-1-4244-1016-3 | |
dc.identifier.quartile | N/A | |
dc.identifier.scopus | 2-s2.0-46449111503 | |
dc.identifier.uri | N/A | |
dc.identifier.uri | https://hdl.handle.net/20.500.14288/17614 | |
dc.identifier.wos | 252357703088 | |
dc.keywords | Real-time tracking | |
dc.language | English | |
dc.publisher | IEEE | |
dc.source | 2007 IEEE International Conference on Multimedia and Expo, Vols 1-5 | |
dc.subject | Computer science | |
dc.subject | Artificial intelligence | |
dc.subject | Engineering | |
dc.subject | Electrical and electronic engineering | |
dc.subject | Imaging science | |
dc.subject | Photographic technology | |
dc.title | Multicamera audio-visual analysis of dance figures | |
dc.type | Conference proceeding | |
dspace.entity.type | Publication | |
local.contributor.authorid | 0000-0003-3918-3230 | |
local.contributor.authorid | 0000-0002-2715-2368 | |
local.contributor.authorid | 0000-0002-7515-3138 | |
local.contributor.authorid | 0000-0003-1465-8121 | |
local.contributor.kuauthor | Ofli, Ferda | |
local.contributor.kuauthor | Erzin, Engin | |
local.contributor.kuauthor | Yemez, Yücel | |
local.contributor.kuauthor | Tekalp, Ahmet Murat | |
relation.isOrgUnitOfPublication | 89352e43-bf09-4ef4-82f6-6f9d0174ebae | |
relation.isOrgUnitOfPublication | 21598063-a7c5-420d-91ba-0cc9b2db0ea0 | |
relation.isOrgUnitOfPublication.latestForDiscovery | 21598063-a7c5-420d-91ba-0cc9b2db0ea0 |