Publication:
An overview of affective speech synthesis and conversion in the deep learning era

KU Authors

Sezgin, Tevfik Metin
İymen, Gökçe

Co-Authors

Triantafyllopoulos, Andreas
Schuller, Björn W.
He, Xiangheng
Yang, Zijiang
Tzirakis, Panagiotis
Liu, Shuo
Mertes, Silvan
André, Elisabeth
Fu, Ruibo
Tao, Jianhua

Language

en

Abstract

Speech is the fundamental mode of human communication, and its synthesis has long been a core priority in human-computer interaction research. In recent years, machines have managed to master the art of generating speech that is intelligible to humans. However, the linguistic content of an utterance encompasses only part of its meaning. Affect, or expressivity, has the capacity to turn speech into a medium capable of conveying intimate thoughts, feelings, and emotions, aspects that are essential for engaging and naturalistic interpersonal communication. While the goal of imparting expressivity to synthesized utterances has so far remained elusive, following recent advances in text-to-speech synthesis, a paradigm shift is well under way in the fields of affective speech synthesis and conversion as well. Deep learning, as the technology that underlies most of the recent advances in artificial intelligence, is spearheading these efforts. In this overview, we outline ongoing trends and summarize state-of-the-art approaches in an attempt to provide a broad overview of this exciting field.

Source:

Proceedings of the IEEE

Publisher:

IEEE - Institute of Electrical and Electronics Engineers Inc.

Subject

Engineering, Electrical and Electronic

