Publication:
Cross-subject continuous emotion recognition using speech and body motion in dyadic interactions

Abstract

Dyadic interactions encapsulate a rich emotional exchange between interlocutors, suggesting a multimodal, cross-speaker, and cross-dimensional continuous emotion dependency. This study explores the dynamic inter-attribute emotional dependency at the cross-subject level, with implications for continuous emotion recognition based on speech and body motion cues. We propose a novel two-stage Gaussian Mixture Model (GMM) mapping framework for the continuous emotion recognition problem. In the first stage, we perform continuous emotion recognition (CER) for both speakers from the speech and body motion modalities to estimate activation, valence, and dominance (AVD) attributes. In the second stage, we refine the first-stage estimates by performing CER for the selected speaker using her/his speech and body motion modalities together with the estimated affective attribute(s) of the other speaker. Our experimental evaluations indicate that the second stage, cross-subject continuous emotion recognition (CSCER), provides complementary information for recognizing the affective state and delivers promising improvements on the continuous emotion recognition problem.
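
The abstract does not spell out the mapping itself; the following is a minimal, illustrative sketch of one common realization of a joint-GMM (MMSE) feature-to-attribute mapping and of the two-stage idea, assuming scikit-learn's GaussianMixture, synthetic data, and placeholder dimensions. The names fit_joint_gmm, gmm_regress, Xa, Xb, Ya, Yb, and dx are hypothetical; this is not the authors' exact implementation.

import numpy as np
from scipy.stats import multivariate_normal
from sklearn.mixture import GaussianMixture


def fit_joint_gmm(X, Y, n_components=8, seed=0):
    """Fit a joint GMM over stacked [feature, attribute] vectors."""
    Z = np.hstack([X, Y])
    return GaussianMixture(n_components=n_components,
                           covariance_type="full",
                           random_state=seed).fit(Z)


def gmm_regress(gmm, X, dx):
    """MMSE mapping E[y | x] under the joint GMM; dx is the feature dimension."""
    dy = gmm.means_.shape[1] - dx
    mu_x, mu_y = gmm.means_[:, :dx], gmm.means_[:, dx:]
    S_xx = gmm.covariances_[:, :dx, :dx]
    S_yx = gmm.covariances_[:, dx:, :dx]
    preds = np.zeros((X.shape[0], dy))
    for i, x in enumerate(X):
        # Responsibility of each mixture component given the observed features.
        log_w = np.log(gmm.weights_) + np.array(
            [multivariate_normal.logpdf(x, mu_x[k], S_xx[k])
             for k in range(gmm.n_components)])
        w = np.exp(log_w - log_w.max())
        w /= w.sum()
        # Blend the component-wise conditional means.
        for k in range(gmm.n_components):
            cond = mu_y[k] + S_yx[k] @ np.linalg.solve(S_xx[k], x - mu_x[k])
            preds[i] += w[k] * cond
    return preds


# Toy two-stage usage on synthetic data (dimensions are arbitrary placeholders).
rng = np.random.default_rng(0)
dx = 12                                               # hypothetical feature size
Xa, Xb = rng.normal(size=(500, dx)), rng.normal(size=(500, dx))
Ya, Yb = rng.normal(size=(500, 3)), rng.normal(size=(500, 3))  # AVD targets

# Stage 1: map each speaker's own speech/body-motion features to AVD.
gmm_b = fit_joint_gmm(Xb, Yb)
Yb_hat = gmm_regress(gmm_b, Xb, dx)

# Stage 2: re-estimate speaker A's AVD from A's features plus B's stage-1 estimates.
Xa_aug = np.hstack([Xa, Yb_hat])
gmm_a2 = fit_joint_gmm(Xa_aug, Ya)
Ya_hat = gmm_regress(gmm_a2, Xa_aug, dx + 3)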

Publisher

International Speech Communication Association (ISCA)

Subject

Computer science, Artificial intelligence, Engineering, Electrical and electronic engineering

Source

18th Annual Conference of the International Speech Communication Association (INTERSPEECH 2017)

DOI

10.21437/Interspeech.2017-1413
