Publication:
Engagement rewarded actor-critic with conservative Q-learning for speech-driven laughter backchannel generation


Publication Date

2021

Language

English

Type

Conference proceeding

Abstract

We propose a speech-driven laughter backchannel generation model that uses engagement as the reward during human-agent interaction. We formulate the problem as a Markov decision process in which the speech signal represents the state and the objective is to maximize human engagement. Since online training is often impractical for human-agent interaction, we use existing human-to-human dyadic interaction datasets to train our agent for the backchannel generation task. We address the problem with an actor-critic method based on conservative Q-learning (CQL), which mitigates the distributional shift problem by suppressing Q-value over-estimation during training. The proposed CQL-based approach is evaluated objectively on the IEMOCAP dataset for the laughter generation task. Compared to existing off-policy Q-learning methods, it shows improved compliance with the dataset in terms of laughter generation rate. Furthermore, we demonstrate the effectiveness of the learned policy by estimating the expected engagement using off-policy policy evaluation techniques.
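The abstract's key mechanism is the CQL regularizer, which suppresses Q-value over-estimation for out-of-distribution actions. Below is a minimal numpy sketch of that penalty term for a discrete action space (e.g., laugh / no-laugh, as in the backchannel setting); the function name, `alpha` weight, and array shapes are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def cql_penalty(q_values, data_actions, alpha=1.0):
    """Conservative Q-learning penalty (added to the usual TD loss).

    Pushes down a soft maximum (log-sum-exp) of Q over all actions
    while pushing up the Q-value of the action actually observed in
    the offline dataset, discouraging over-estimation for actions the
    behavior policy never took.

    q_values:     (batch, n_actions) array of Q(s, a) estimates
    data_actions: (batch,) indices of the dataset (behavior) actions
    alpha:        penalty weight (illustrative hyperparameter)
    """
    # Soft maximum of Q over the action dimension.
    lse = np.log(np.sum(np.exp(q_values), axis=1))
    # Q-value of the action taken in the logged dyadic data.
    q_data = q_values[np.arange(len(data_actions)), data_actions]
    # Non-negative, since log-sum-exp >= max >= Q(s, a_data).
    return alpha * np.mean(lse - q_data)
```

Because log-sum-exp upper-bounds the Q-value of any single action, the penalty is always non-negative and shrinks when the dataset action already dominates, so minimizing it conservatively lowers Q for actions unseen in the offline data.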

Source:

International Conference on Multimodal Interaction

Publisher:

Association for Computing Machinery (ACM)

Subject

Generation
