Researcher: Demir, Yasemin
Name Variants
Demir, Yasemin
Search Results (showing 1 - 7 of 7)
1. An audio-driven dancing avatar (Springer, 2008)
Balci, Koray; Kizoglu, Idil; Akarun, Lale; Canton-Ferrer, Cristian; Tilmanne, Joelle; Bozkurt, Elif; Erdem, A. Tanju; Yemez, Yücel; Ofli, Ferda; Demir, Yasemin; Erzin, Engin; Tekalp, Ahmet Murat
We present a framework for training and synthesis of an audio-driven dancing avatar. The avatar is trained for a given musical genre using multicamera video recordings of a dance performance. The video is analyzed to capture the time-varying posture of the dancer's body, whereas the musical audio signal is processed to extract the beat information. We consider two different marker-based schemes for the motion capture problem: the first uses 3D joint positions to represent the body motion, whereas the second uses joint angles. Body movements of the dancer are characterized by a set of recurring semantic motion patterns, i.e., dance figures. Each dance figure is modeled in a supervised manner with a set of HMM (hidden Markov model) structures and the associated beat frequency. In the synthesis phase, an audio signal of unknown musical type is first classified, within a time interval, into one of the genres learned in the analysis phase, based on mel-frequency cepstral coefficients (MFCC). The motion parameters of the corresponding dance figures are then synthesized via the trained HMM structures in synchrony with the audio signal, based on the estimated tempo information. Finally, the generated motion parameters, either the joint angles or the 3D joint positions of the body, are animated along with the musical audio using two different animation tools that we have developed. Experimental results demonstrate the effectiveness of the proposed framework.

2. Evaluation of audio features for audio-visual analysis of dance figures (IEEE, 2008)
Tekalp, Ahmet Murat; Erzin, Engin; Yemez, Yücel; Demir, Yasemin
We present a framework for selecting the best audio features for audio-visual analysis and synthesis of dance figures. Dance figures are performed synchronously with the musical rhythm and can be analyzed through the audio spectra using spectral and rhythmic musical features. In the proposed audio feature evaluation system, dance figures are manually labeled over the video stream. The music segments corresponding to the labeled dance figures are used to train hidden Markov model (HMM) structures that learn spectral audio patterns for the dance figure melodies. The melody recognition performances of the HMM models are evaluated for various spectral feature sets, and the audio features that maximize dance figure melody recognition performance are selected as the best audio features for the analyzed audiovisual dance recordings. In our evaluations, mel-scale cepstral coefficients (MFCC) with their first and second derivatives, spectral centroid, spectral flux, and spectral roll-off are used as candidate audio features. The selected audio features can then be used towards analysis and synthesis of audio-driven body animation.
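The candidate feature set named in the abstract above (MFCCs with first and second derivatives, spectral centroid, spectral flux, spectral roll-off) can be assembled with standard tools. The following is an illustrative sketch using librosa, not the authors' pipeline; the file path, sample rate, and MFCC order are assumptions.

```python
import numpy as np
import librosa

def candidate_audio_features(path, sr=16000, n_mfcc=13):
    # Load audio and compute the candidate features on a common frame grid.
    y, sr = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)    # (n_mfcc, T)
    d1 = librosa.feature.delta(mfcc)                          # first derivatives
    d2 = librosa.feature.delta(mfcc, order=2)                 # second derivatives
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr)  # (1, T)
    rolloff = librosa.feature.spectral_rolloff(y=y, sr=sr)    # (1, T)
    # Spectral flux: L2 norm of the frame-to-frame magnitude-spectrum change.
    S = np.abs(librosa.stft(y))
    flux = np.sqrt((np.diff(S, axis=1) ** 2).sum(axis=0))
    flux = np.concatenate([[0.0], flux])[np.newaxis, :]       # pad back to T frames
    T = min(mfcc.shape[1], flux.shape[1])
    feats = np.vstack([mfcc[:, :T], d1[:, :T], d2[:, :T],
                       centroid[:, :T], rolloff[:, :T], flux[:, :T]])
    return feats.T                                            # (T, n_features): one row per frame
```

Stacking all candidates into one matrix makes it straightforward to evaluate feature subsets against each other, as the evaluation framework described above requires.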
3. Dans figürlerinin işitsel-görsel analizi için işitsel özniteliklerin değerlendirilmesi [Evaluation of audio features for audio-visual analysis of dance figures] (IEEE, 2008)
Tekalp, Ahmet Murat; Erzin, Engin; Yemez, Yücel; Ofli, Ferda; Demir, Yasemin
We present a framework for selecting the best audio features for audiovisual analysis and synthesis of dance figures. Dance figures are performed synchronously with the musical rhythm and can be analyzed through the audio spectra using spectral and rhythmic musical features. In the proposed audio feature evaluation system, dance figures are manually labeled over the video stream. The music segments corresponding to the labeled dance figures are used to train hidden Markov model (HMM) structures that learn temporal spectrum patterns for the dance figures. The dance figure recognition performances of the HMM models are evaluated for various spectral feature sets, and the audio features that maximize dance figure recognition performance are selected as the best audio features for the analyzed audiovisual dance recordings. In our evaluations, mel-scale cepstral coefficients (MFCC) with their first and second derivatives, spectral centroid, spectral flux, and spectral roll-off are used as candidate audio features. The selected audio features can then be used towards analysis and synthesis of audio-driven body animation.

4. Multicamera audio-visual analysis of dance figures using segmented body model (IEEE, 2007)
Tekalp, Ahmet Murat; Erzin, Engin; Yemez, Yücel; Ofli, Ferda; Demir, Yasemin
We present a multi-camera system for audio-visual analysis of dance figures. The multiview video of a dancing actor is acquired using 8 synchronized cameras. The motion capture technique of the proposed system is based on 3D tracking of the markers attached to the person's body in the scene. The resulting set of 3D points is then used to extract the body motion features as 3D displacement vectors, whereas mel-frequency cepstral (MFC) coefficients serve as the audio features. In the multimodal analysis phase, we first perform hidden Markov model (HMM) based unsupervised temporal segmentation of the audio and of the body motion features (e.g., legs and arms) separately, to determine the recurrent elementary audio and body motion patterns. In the second stage, we investigate the correlation of body motion patterns with audio patterns, which can be used towards estimation and synthesis of realistic audio-driven body animation.
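The body motion features described above are per-frame 3D displacement vectors computed from tracked marker positions. A minimal sketch follows, under an assumed data layout (the papers do not specify the marker set or array format); the optional per-part index list mirrors the segmented body model, in which parts such as legs and arms are analyzed separately.

```python
import numpy as np

def displacement_features(markers, part_indices=None):
    """Frame-to-frame 3D displacement vectors, optionally for one body part.

    markers: hypothetical (T, M, 3) array -- T video frames, M tracked
        3D marker positions per frame.
    part_indices: optional list of marker indices (e.g., the leg or arm
        markers) so each body part can be analyzed separately.
    """
    if part_indices is not None:
        markers = markers[:, part_indices, :]
    disp = np.diff(markers, axis=0)            # (T-1, M', 3) per-frame motion
    return disp.reshape(disp.shape[0], -1)     # flatten to (T-1, 3*M') feature rows
```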
5. Audio-driven human body motion analysis and synthesis (IEEE, 2008)
Canton-Ferrer, C.; Tilmanne, J.; Bozkurt, E.; Ofli, Ferda; Demir, Yasemin; Yemez, Yücel; Erzin, Engin; Tekalp, Ahmet Murat
This paper presents a framework for audio-driven human body motion analysis and synthesis. We address the problem in the context of a dance performance, where gestures and movements of the dancer are mainly driven by a musical piece and characterized by the repetition of a set of dance figures. The system is trained in a supervised manner using the multiview video recordings of the dancer. The human body posture is extracted from the multiview video information without any human intervention, using a novel marker-based algorithm based on annealing particle filtering. Audio is analyzed to extract beat and tempo information. The joint analysis of audio and motion features provides a correlation model that is then used to animate a dancing avatar when driven with any musical piece of the same genre. Results are provided showing the effectiveness of the proposed algorithm.

6. Analysis and synthesis of multiview audio-visual dance figures (IEEE, 2008)
Canton-Ferrer, C.; Tilmanne, J.; Balcı, K.; Bozkurt, E.; Kızoğlu, I.; Akarun, L.; Erdem, A. T.; Tekalp, Ahmet Murat; Erzin, Engin; Yemez, Yücel; Ofli, Ferda; Demir, Yasemin
This paper presents a framework for audio-driven human body motion analysis and synthesis. The video is analyzed to capture the time-varying posture of the dancer's body, whereas the musical audio signal is processed to extract the beat information. The human body posture is extracted from multiview video information without any human intervention, using a novel marker-based algorithm based on annealing particle filtering. Body movements of the dancer are characterized by a set of recurring semantic motion patterns, i.e., dance figures. Each dance figure is modeled in a supervised manner with a set of HMM (hidden Markov model) structures and the associated beat frequency. In synthesis, given an audio signal of a learned musical type, the motion parameters of the corresponding dance figures are synthesized via the trained HMM structures in synchrony with the input audio signal, based on the estimated tempo information. Finally, the generated motion parameters are animated along with the musical audio using a graphical animation tool. Experimental results demonstrate the effectiveness of the proposed framework.
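The supervised modeling step above (one set of HMM structures per dance figure, plus beat/tempo estimation to pace the synthesis) can be sketched as follows. This is a hedged illustration: hmmlearn and librosa stand in for the authors' implementations, and `segments_by_figure` is a hypothetical container of manually labeled training segments.

```python
import numpy as np
import librosa
from hmmlearn import hmm

def estimate_tempo(y, sr):
    # Beat tracking; the estimated tempo drives the timing of synthesized figures.
    tempo, _ = librosa.beat.beat_track(y=y, sr=sr)
    return tempo

def train_figure_models(segments_by_figure, n_states=5):
    """segments_by_figure: {figure_name: list of (T_i, D) feature arrays}."""
    models = {}
    for figure, segments in segments_by_figure.items():
        X = np.vstack(segments)               # stack all training examples
        lengths = [len(s) for s in segments]  # per-example frame counts
        m = hmm.GaussianHMM(n_components=n_states,
                            covariance_type="diag", n_iter=50)
        m.fit(X, lengths)                     # one HMM per dance figure
        models[figure] = m
    return models

def classify_segment(models, segment):
    # The figure whose HMM assigns the highest log-likelihood wins.
    return max(models, key=lambda f: models[f].score(segment))
```

In use, `classify_segment` plays the role of recognition during analysis; for synthesis, the winning figure's model would be sampled in synchrony with the tempo estimate.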
7. Joint correlation analysis of audio-visual dance figures (IEEE, 2007)
Tekalp, Ahmet Murat; Erzin, Engin; Yemez, Yücel; Ofli, Ferda; Demir, Yasemin
In this paper, we present a framework for the analysis of dance figures from audio-visual data. Our audio-visual data is the multiview video of a dancing actor, acquired using 8 synchronized cameras. The multi-camera motion capture technique of this framework is based on 3D tracking of the markers attached to the dancer's body, using stereo color information. The extracted 3D points are used to calculate the body motion features as 3D displacement vectors, while mel-frequency cepstral (MFC) coefficients serve as the audio features. In the first stage of the two-stage analysis task, we perform hidden Markov model (HMM) based unsupervised temporal segmentation of the audio and body motion features, separately, to extract the recurrent elementary audio and body motion patterns. In the second stage, the correlation of body motion patterns with audio patterns is investigated to create a correlation model that can be used during the synthesis of an audio-driven body animation.
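The two-stage idea above can be illustrated with a short sketch under stated assumptions: an ergodic GaussianHMM serves as the unsupervised temporal segmenter for each modality, and a row-normalized co-occurrence table stands in for the correlation model (the paper may use a different statistic). Both label sequences are assumed to be aligned frame-by-frame in time.

```python
import numpy as np
from hmmlearn import hmm

def unsupervised_labels(features, n_patterns):
    """Fit an ergodic HMM with no labels and decode each frame to a pattern."""
    m = hmm.GaussianHMM(n_components=n_patterns,
                        covariance_type="diag", n_iter=50)
    m.fit(features)                 # features: (T, D) audio or motion features
    return m.predict(features)      # per-frame pattern index in [0, n_patterns)

def cooccurrence(audio_labels, motion_labels, n_audio, n_motion):
    """Row-normalized joint histogram of audio vs. motion pattern labels."""
    C = np.zeros((n_audio, n_motion))
    for a, b in zip(audio_labels, motion_labels):
        C[a, b] += 1
    return C / np.maximum(C.sum(axis=1, keepdims=True), 1)
```

Each row of the resulting table estimates which elementary motion patterns tend to accompany a given audio pattern, which is the kind of mapping an audio-driven synthesis stage would consult.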