Publication:
Estimation of personalized facial gesture patterns

Publication Date

2007

Language

Turkish

Abstract

We propose a framework for the estimation and analysis of the temporal facial expression patterns of a speaker. The goal of this framework is to learn the personalized elementary dynamic facial expression patterns of a particular speaker. We track the lips, eyebrows, and eyelids of the speaker in 3D across a head-and-shoulder stereo video sequence. We use MPEG-4 Facial Definition Parameters (FDPs) to create the feature set, and MPEG-4 Facial Animation Parameters (FAPs) to represent the temporal facial expression patterns. Hidden Markov Model (HMM) based unsupervised temporal segmentation is performed separately on the upper and lower facial expression features to determine the recurrent elementary facial expression patterns of the particular speaker. These facial expression patterns, which are coded as FAP sequences and need not be tied to prespecified emotions, can be used for personalized emotion estimation and synthesis for a speaker. Experimental results are presented.

Source:

2007 IEEE 15th Signal Processing and Communications Applications (SIU)

Publisher:

IEEE

Subject

Electrical and electronics engineering, Computer engineering
