Publication:
Emotion dependent facial animation from affective speech

Embargo Status

NO

Abstract

In human-computer interaction, facial animation synchronized with affective speech can deliver more naturalistic conversational agents. In this paper, we present a two-stage deep learning approach for affective-speech-driven facial shape animation. In the first stage, we classify affective speech into seven emotion categories. In the second stage, we train a separate deep estimator for each emotion category to synthesize facial shape from the affective speech. Objective and subjective evaluations are performed on the SAVEE dataset. The proposed emotion-dependent facial shape model outperforms a universal model trained without regard to emotion, both in Mean Squared Error (MSE) loss and in the quality of the generated landmark animations.
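
To make the two-stage approach concrete, the sketch below shows one plausible way to wire it up in PyTorch: a seven-class emotion classifier whose prediction routes each utterance to one of seven emotion-specific shape estimators. The feature dimension, landmark count, layer sizes, and all names are illustrative assumptions, not the architecture reported in the paper.

```python
# Hypothetical sketch of the two-stage pipeline from the abstract:
# stage 1 classifies speech into an emotion, stage 2 routes the speech
# features to an emotion-specific facial shape estimator. Feature and
# landmark dimensions and layer sizes are assumptions, not the paper's.
import torch
import torch.nn as nn

NUM_EMOTIONS = 7       # seven emotion categories (from the abstract)
SPEECH_DIM = 40        # assumed acoustic feature size per utterance
LANDMARK_DIM = 2 * 68  # assumed 68 two-dimensional facial landmarks


class EmotionClassifier(nn.Module):
    """Stage 1: predict one of the seven emotion categories."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(SPEECH_DIM, 128), nn.ReLU(),
            nn.Linear(128, NUM_EMOTIONS),
        )

    def forward(self, x):
        return self.net(x)  # unnormalized class scores


class ShapeEstimator(nn.Module):
    """Stage 2: regress facial landmark coordinates from speech features."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(SPEECH_DIM, 256), nn.ReLU(),
            nn.Linear(256, LANDMARK_DIM),
        )

    def forward(self, x):
        return self.net(x)


classifier = EmotionClassifier()
# One estimator per emotion category, trained separately per the abstract.
estimators = nn.ModuleList([ShapeEstimator() for _ in range(NUM_EMOTIONS)])


def animate(speech: torch.Tensor) -> torch.Tensor:
    """Route each utterance to the estimator of its predicted emotion."""
    emotions = classifier(speech).argmax(dim=-1)
    shapes = [estimators[e](x) for e, x in zip(emotions.tolist(), speech)]
    return torch.stack(shapes)  # (batch, LANDMARK_DIM) landmark coordinates


# Example: a batch of four utterances, each summarized by one feature vector.
print(animate(torch.randn(4, SPEECH_DIM)).shape)  # torch.Size([4, 136])
```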

Publisher

Institute of Electrical and Electronics Engineers (IEEE)

Subject

Multimedia signal processing

Source

2020 IEEE 22nd International Workshop on Multimedia Signal Processing (MMSP)

DOI

10.1109/MMSP48831.2020.9287086
