Title: Audio-driven human body motion analysis and synthesis
Type: Conference proceeding
Departments: Department of Computer Engineering; Department of Electrical and Electronics Engineering
Publication year: 2008
Date: 2024-11-09
ISBN: 978-1-4244-1483-3
ISSN: 1520-6149
DOI: 10.1109/ICASSP.2008.4518089
Scopus EID: 2-s2.0-51449089854
Links: http://dx.doi.org/10.1109/ICASSP.2008.4518089; https://hdl.handle.net/20.500.14288/13990

Abstract: This paper presents a framework for audio-driven human body motion analysis and synthesis. We address the problem in the context of a dance performance, where the gestures and movements of the dancer are mainly driven by a musical piece and characterized by the repetition of a set of dance figures. The system is trained in a supervised manner on multiview video recordings of the dancer. The human body posture is extracted from the multiview video without any human intervention, using a novel marker-based algorithm built on annealed particle filtering. The audio is analyzed to extract beat and tempo information. Joint analysis of the audio and motion features yields a correlation model, which is then used to animate a dancing avatar driven by any musical piece of the same genre. Results are provided that show the effectiveness of the proposed algorithm.

Keywords: Acoustics; Computer science; Artificial intelligence; Cybernetics; Engineering; Biomedical engineering; Electrical and electronic engineering; Computational biology; Imaging science; Photographic technology; Radiology; Nuclear medicine; Medical imaging; Telecommunications
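The posture-extraction step described in the abstract relies on annealed particle filtering. The record gives no implementation details, so the following is a minimal sketch of the generic annealing loop of that technique, not the authors' algorithm; the function name, the likelihood callback, and all parameters (n_layers, beta0, noise_std) are illustrative assumptions.

```python
import numpy as np

def annealed_particle_filter(particles, likelihood, n_layers=5,
                             beta0=0.2, noise_std=0.1, rng=None):
    """One annealed-particle-filter update (generic sketch).

    particles : (N, D) array of pose hypotheses
    likelihood: callable mapping an (N, D) array of poses to an
                (N,) array of observation likelihoods
    """
    rng = rng or np.random.default_rng()
    n, d = particles.shape
    for m in range(n_layers):
        # Annealing exponent rises toward 1, sharpening the weighting
        # from a smoothed likelihood to the true one.
        beta = beta0 + (1.0 - beta0) * m / max(n_layers - 1, 1)
        w = likelihood(particles) ** beta
        w = w / w.sum()
        # Resample particles in proportion to the annealed weights.
        idx = rng.choice(n, size=n, p=w)
        particles = particles[idx]
        # Diffuse with noise that shrinks as the layers anneal, so the
        # particle set concentrates around high-likelihood poses.
        sigma = noise_std * (1.0 - m / n_layers)
        particles = particles + rng.normal(0.0, sigma, size=(n, d))
    return particles
```

In the paper's setting, each particle would encode a body-pose parameter vector and the likelihood would score a hypothesized pose against the marker observations in the multiview images; both are left abstract here.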
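On the audio side, the abstract mentions beat and tempo extraction without naming a method. Below is a minimal sketch using librosa's off-the-shelf beat tracker, which stands in for whatever analysis the paper actually performs; the input filename is hypothetical.

```python
import librosa

# "performance.wav" is a hypothetical input file; librosa's default
# beat tracker is a stand-in, not the paper's analysis method.
y, sr = librosa.load("performance.wav")
tempo, beat_frames = librosa.beat.beat_track(y=y, sr=sr)
beat_times = librosa.frames_to_time(beat_frames, sr=sr)

print("Estimated tempo (BPM):", tempo)
print("First few beat times (s):", beat_times[:5])
```

Beat times extracted this way would provide the rhythmic grid against which the learned audio-motion correlation model can schedule dance figures during synthesis.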