Publication:
Estimation of personalized facial gesture patterns

Publication Date

2007

Alternative Title

Kişiselleştirilmiş yüz jest örüntülerinin kestirimi

Abstract

We propose a framework for the estimation and analysis of temporal facial expression patterns of a speaker. The goal of this framework is to learn the personalized elementary dynamic facial expression patterns of a particular speaker. We track the lips, eyebrows, and eyelids of the speaker in 3D across a head-and-shoulder stereo video sequence. We use MPEG-4 Facial Definition Parameters (FDPs) to create the feature set, and MPEG-4 Facial Animation Parameters (FAPs) to represent the temporal facial expression patterns. Hidden Markov Model (HMM) based unsupervised temporal segmentation is performed separately on the upper and lower facial expression features to determine recurrent elementary facial expression patterns for the particular speaker. These facial expression patterns, which are coded as FAP sequences and need not be tied to prespecified emotions, can be used for personalized emotion estimation and synthesis for a speaker. Experimental results are presented.
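
As a rough illustration of the segmentation step described above, the sketch below clusters a per-frame facial feature sequence into recurrent states with a Gaussian HMM. It assumes the third-party hmmlearn Python library; the feature dimensions, the number of states, and the synthetic stand-in data are illustrative assumptions, not details taken from the paper.

    # Minimal sketch of HMM-based unsupervised temporal segmentation,
    # assuming the `hmmlearn` library. Feature dimensions, state count,
    # and the synthetic inputs are illustrative, not values from the paper.
    import numpy as np
    from hmmlearn import hmm

    def segment_patterns(features, n_patterns=5):
        """Label each frame with one of `n_patterns` recurrent states.

        features: (n_frames, n_dims) array of per-frame facial features
        (e.g., FDP-derived lip or eyebrow/eyelid measurements).
        """
        model = hmm.GaussianHMM(n_components=n_patterns,
                                covariance_type="diag",
                                n_iter=100, random_state=0)
        model.fit(features)             # unsupervised EM training
        return model.predict(features)  # Viterbi state sequence

    # Upper- and lower-face features are segmented separately, as in the paper.
    rng = np.random.default_rng(0)
    upper_states = segment_patterns(rng.standard_normal((500, 6)))  # eyebrows/eyelids
    lower_states = segment_patterns(rng.standard_normal((500, 8)))  # lips

Each run of identical state labels then marks one candidate elementary expression segment, which the paper codes as a FAP sequence.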

Publisher

IEEE

Subject

Electrical and electronics engineering, Computer engineering

Source

2007 IEEE 15th Signal Processing and Communications Applications, SIU

DOI

10.1109/SIU.2007.4298615
