Title: Unsupervised dance figure analysis from video for dancing avatar animation
Type: Conference proceeding
Departments: Department of Computer Engineering; Department of Electrical and Electronics Engineering
Year: 2008
ISBN: 978-1-4244-1765-0
ISSN: 1522-4880
DOI: 10.1109/ICIP.2008.4712047
Scopus ID: 2-s2.0-69949132134
URL: http://dx.doi.org/10.1109/ICIP.2008.4712047
Handle: https://hdl.handle.net/20.500.14288/11778
Date deposited: 2024-11-09

Abstract: This paper presents a framework for unsupervised video analysis in the context of dance performances, where the gestures and 3D movements of a dancer are characterized by the repetition of a set of unknown dance figures. The system is trained in an unsupervised manner using Hidden Markov Models (HMMs) to automatically segment multi-view video recordings of a dancer into recurring elementary temporal body motion patterns, thereby identifying the dance figures. Specifically, a parallel HMM structure is employed to automatically determine the number of distinct dance figures in a given dance video and their temporal boundaries. The success of the analysis framework has been evaluated by visualizing these dance figures on a dancing avatar animated by the computed 3D analysis parameters. Experimental results demonstrate that the proposed framework enables synthetic agents and/or robots to learn dance figures from video automatically.

Subjects: Computer science; Artificial intelligence; Engineering; Electrical and electronic engineering; Imaging science; Photographic technology
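As a rough illustration of the segmentation idea described in the abstract, the sketch below decodes a 1-D "motion feature" stream into its most likely figure labels with Viterbi decoding over Gaussian-emission HMM states, then reads off temporal boundaries where the label changes. This is a minimal simplification, not the paper's method: the paper uses a parallel HMM structure over multi-view 3D motion parameters with an unknown number of figures, whereas here the number of states, their means, variances, and transition matrix are all assumed to be given.

```python
import numpy as np

def segment_sequence(obs, means, variances, trans, init):
    """Viterbi decoding: most likely state (dance-figure label) per frame.

    obs: (T,) 1-D feature stream; means/variances: (K,) per-state Gaussian
    emission parameters; trans: (K, K) transition matrix; init: (K,) prior.
    """
    T, K = len(obs), len(means)
    # Per-frame Gaussian log-likelihoods, shape (T, K).
    logB = -0.5 * ((obs[:, None] - means) ** 2 / variances
                   + np.log(2.0 * np.pi * variances))
    logA = np.log(trans)
    delta = np.zeros((T, K))          # best log-score ending in state k at t
    psi = np.zeros((T, K), dtype=int) # best predecessor state
    delta[0] = np.log(init) + logB[0]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + logA  # (prev, next) pairs
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + logB[t]
    # Backtrack the optimal state path.
    states = np.zeros(T, dtype=int)
    states[-1] = delta[-1].argmax()
    for t in range(T - 2, -1, -1):
        states[t] = psi[t + 1, states[t + 1]]
    return states

def boundaries(states):
    """Frame indices where the decoded figure label changes."""
    return [t for t in range(1, len(states)) if states[t] != states[t - 1]]

# Toy stream: 10 frames near 0.0 (figure A), then 10 frames near 5.0 (figure B).
obs = np.array([0.0] * 10 + [5.0] * 10)
means = np.array([0.0, 5.0])
variances = np.array([1.0, 1.0])
trans = np.array([[0.9, 0.1], [0.1, 0.9]])  # "sticky" transitions
init = np.array([0.5, 0.5])
states = segment_sequence(obs, means, variances, trans, init)
cuts = boundaries(states)  # expected: a single boundary at frame 10
```

The sticky transition matrix plays the role of a duration prior, discouraging spurious single-frame label flips; in the paper this temporal coherence comes from the HMM topology itself.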