Publication:
Cross-subject continuous emotion recognition using speech and body motion in dyadic interactions

Publication Date

2017

Language

English

Type

Conference proceeding

Abstract

Dyadic interactions encapsulate a rich emotional exchange between interlocutors, suggesting a multimodal, cross-speaker and cross-dimensional dependency in continuous emotion. This study explores the dynamic inter-attribute emotional dependency at the cross-subject level, with implications for continuous emotion recognition based on speech and body motion cues. We propose a novel two-stage Gaussian Mixture Model (GMM) mapping framework for the continuous emotion recognition (CER) problem. In the first stage, we perform CER of both speakers from the speech and body motion modalities to estimate activation, valence and dominance (AVD) attributes. In the second stage, we improve the first-stage estimates by performing CER of the selected speaker using her/his speech and body motion modalities as well as the estimated affective attribute(s) of the other speaker. Our experimental evaluations indicate that the second stage, cross-subject continuous emotion recognition (CSCER), provides complementary information for recognizing the affective state and delivers promising improvements for the continuous emotion recognition problem.
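
As a rough illustration only (not the authors' implementation), the sketch below shows a generic two-stage GMM-regression pipeline of the kind the abstract describes, run on synthetic data: stage 1 maps each speaker's own speech/body-motion features to AVD estimates via the conditional expectation of a joint GMM, and stage 2 augments a speaker's features with the interlocutor's stage-1 AVD estimates before re-mapping. All feature dimensions, component counts, and data here are placeholder assumptions.

# A minimal sketch (not the paper's exact method) of two-stage GMM-based
# regression for cross-subject continuous emotion recognition.
# Assumes pre-extracted fixed-length feature vectors and AVD targets.
import numpy as np
from sklearn.mixture import GaussianMixture


def fit_joint_gmm(X, Y, n_components=8, seed=0):
    """Fit a GMM on the joint [features | AVD targets] space."""
    Z = np.hstack([X, Y])
    gmm = GaussianMixture(n_components=n_components,
                          covariance_type="full", random_state=seed)
    gmm.fit(Z)
    return gmm


def gmm_regress(gmm, X, dim_x):
    """MMSE mapping E[y | x] under the joint GMM (standard GMM regression)."""
    means, covs, weights = gmm.means_, gmm.covariances_, gmm.weights_
    K = gmm.n_components
    N, dy = X.shape[0], means.shape[1] - dim_x
    log_resp = np.zeros((N, K))          # log p(k | x), up to a constant
    cond_means = np.zeros((K, N, dy))    # E[y | x, k]
    for k in range(K):
        mx, my = means[k, :dim_x], means[k, dim_x:]
        Sxx, Sxy = covs[k, :dim_x, :dim_x], covs[k, :dim_x, dim_x:]
        diff = X - mx
        sol = np.linalg.solve(Sxx, diff.T).T              # Sxx^{-1} (x - mx)
        cond_means[k] = my + sol @ Sxy                    # Syx Sxx^{-1} (x - mx)
        _, logdet = np.linalg.slogdet(Sxx)
        maha = np.einsum("nd,nd->n", diff, sol)
        log_resp[:, k] = (np.log(weights[k])
                          - 0.5 * (maha + logdet + dim_x * np.log(2 * np.pi)))
    resp = np.exp(log_resp - log_resp.max(axis=1, keepdims=True))
    resp /= resp.sum(axis=1, keepdims=True)
    return np.einsum("nk,knd->nd", resp, cond_means)      # mixture of conditionals


# Toy usage with synthetic data standing in for speech/body-motion features.
rng = np.random.default_rng(0)
X_a = rng.normal(size=(500, 12))   # speaker A features (placeholder)
X_b = rng.normal(size=(500, 12))   # speaker B features (placeholder)
Y_a = rng.normal(size=(500, 3))    # speaker A AVD labels (placeholder)
Y_b = rng.normal(size=(500, 3))    # speaker B AVD labels (placeholder)

# Stage 1: independent CER for each speaker from their own modalities.
gmm_a = fit_joint_gmm(X_a, Y_a)
gmm_b = fit_joint_gmm(X_b, Y_b)
Y_a_hat = gmm_regress(gmm_a, X_a, X_a.shape[1])
Y_b_hat = gmm_regress(gmm_b, X_b, X_b.shape[1])

# Stage 2: cross-subject CER; augment speaker A's features with the
# interlocutor's stage-1 AVD estimates and re-fit the joint GMM.
X_a_cross = np.hstack([X_a, Y_b_hat])
gmm_a2 = fit_joint_gmm(X_a_cross, Y_a)
Y_a_refined = gmm_regress(gmm_a2, X_a_cross, X_a_cross.shape[1])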

Source:

18th Annual Conference of the International Speech Communication Association (INTERSPEECH 2017)

Publisher:

International Speech Communication Association (ISCA)

Subject

Computer science, Artificial intelligence, Engineering, Electrical and electronic engineering
