Publication:
An overview of affective speech synthesis and conversion in the deep learning era

Co-Authors

Triantafyllopoulos, Andreas
Schuller, Björn W.
He, Xiangheng
Yang, Zijiang
Tzirakis, Panagiotis
Liu, Shuo
Mertes, Silvan
André, Elisabeth
Fu, Ruibo
Tao, Jianhua

Abstract

Speech is the fundamental mode of human communication, and its synthesis has long been a core priority in human-computer interaction research. In recent years, machines have mastered the art of generating speech that is understandable by humans. However, the linguistic content of an utterance encompasses only a part of its meaning. Affect, or expressivity, has the capacity to turn speech into a medium capable of conveying intimate thoughts, feelings, and emotions: aspects that are essential for engaging and naturalistic interpersonal communication. While the goal of imparting expressivity to synthesized utterances has so far remained elusive, recent advances in text-to-speech synthesis have set a paradigm shift under way in the fields of affective speech synthesis and conversion as well. Deep learning, the technology underlying most recent advances in artificial intelligence, is spearheading these efforts. In this overview, we outline ongoing trends and summarize state-of-the-art approaches in an attempt to provide a broad overview of this exciting field.

Publisher

IEEE-Inst Electrical Electronics Engineers Inc

Subject

Engineering, Electrical and Electronic

Source

Proceedings of the IEEE

DOI

10.1109/JPROC.2023.3250266
