Publication: Use of agreement/disagreement classification in dyadic interactions for continuous emotion recognition
dc.contributor.coauthor | N/A | |
dc.contributor.department | N/A | |
dc.contributor.department | Department of Computer Engineering | |
dc.contributor.kuauthor | Khaki, Hossein | |
dc.contributor.kuauthor | Erzin, Engin | |
dc.contributor.kuprofile | PhD Student | |
dc.contributor.kuprofile | Faculty Member | |
dc.contributor.other | Department of Computer Engineering | |
dc.contributor.schoolcollegeinstitute | Graduate School of Sciences and Engineering | |
dc.contributor.schoolcollegeinstitute | College of Engineering | |
dc.contributor.yokid | N/A | |
dc.contributor.yokid | 34503 | |
dc.date.accessioned | 2024-11-09T23:34:06Z | |
dc.date.issued | 2016 | |
dc.description.abstract | Natural and affective handshakes of two participants define the course of dyadic interaction. Affective states of the participants are expected to be correlated with the nature or type of the dyadic interaction. In this study, we investigate the relationship between affective attributes and the nature of the dyadic interaction. For this investigation we use the JESTKOD database, which consists of speech and full-body motion capture recordings of dyadic interactions under agreement and disagreement scenarios. The dataset also has affective annotations for the activation, valence and dominance (AVD) attributes. We pose the continuous affect recognition problem under agreement and disagreement scenarios of dyadic interactions. We define a statistical mapping using support vector regression (SVR) from the speech and motion modalities to the affective attributes, with and without the dyadic interaction type (DIT) information. We observe an improvement in the estimation of the valence attribute when the DIT is available. Furthermore, this improvement is sustained even when we estimate the DIT from the speech and motion modalities of the dyadic interaction. | |
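A minimal sketch of the idea described in the abstract: regressing a continuous affective attribute (e.g. valence) from combined speech and motion features with SVR, once without and once with the dyadic interaction type (DIT) appended as side information. The feature dimensions, synthetic data, and scikit-learn setup below are hypothetical placeholders and do not reproduce the paper's JESTKOD features or evaluation protocol.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical frame-level descriptors (placeholders for speech and motion features).
n_frames, n_speech, n_motion = 2000, 20, 30
X_speech = rng.normal(size=(n_frames, n_speech))
X_motion = rng.normal(size=(n_frames, n_motion))
dit = rng.integers(0, 2, size=(n_frames, 1))   # 0 = disagreement, 1 = agreement (assumed encoding)
valence = rng.normal(size=n_frames)            # placeholder continuous valence annotations

X_base = np.hstack([X_speech, X_motion])       # speech + motion, without DIT
X_dit = np.hstack([X_base, dit])               # same features with DIT appended

def fit_and_score(X, y):
    """Train an RBF-kernel SVR and report Pearson correlation on a held-out split."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=1.0, epsilon=0.1))
    model.fit(X_tr, y_tr)
    return np.corrcoef(model.predict(X_te), y_te)[0, 1]

print("valence correlation, no DIT  :", fit_and_score(X_base, valence))
print("valence correlation, with DIT:", fit_and_score(X_dit, valence))
```

In the paper's setting the DIT flag can either be taken from the agreement/disagreement scenario labels or estimated from the speech and motion modalities before being appended; the sketch above only illustrates the with/without comparison.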
dc.description.indexedby | WoS | |
dc.description.indexedby | Scopus | |
dc.description.openaccess | NO | |
dc.description.publisherscope | International | |
dc.description.sponsoredbyTubitakEu | TÜBİTAK | |
dc.description.sponsorship | TUBITAK [113E102] This work was supported by TUBITAK under Grant Number 113E102. | |
dc.identifier.doi | 10.21437/interspeech.2016-407 | |
dc.identifier.isbn | 978-1-5108-3313-5 | |
dc.identifier.issn | 2308-457X | |
dc.identifier.quartile | N/A | |
dc.identifier.scopus | 2-s2.0-84994376910 | |
dc.identifier.uri | http://dx.doi.org/10.21437/interspeech.2016-407 | |
dc.identifier.uri | https://hdl.handle.net/20.500.14288/12271 | |
dc.identifier.wos | 409394400126 | |
dc.keywords | Multimodal continuous emotion recognition | |
dc.keywords | Human-computer interaction | |
dc.keywords | Dyadic interaction type | |
dc.language | English | |
dc.publisher | ISCA - International Speech Communication Association | |
dc.source | 17th Annual Conference of the International Speech Communication Association (Interspeech 2016), Vols 1-5: Understanding Speech Processing in Humans and Machines | |
dc.subject | Acoustics | |
dc.subject | Computer science | |
dc.subject | Artificial intelligence | |
dc.subject | Engineering | |
dc.subject | Electrical and electronic engineering | |
dc.subject | Linguistics | |
dc.title | Use of agreement/disagreement classification in dyadic interactions for continuous emotion recognition | |
dc.type | Conference proceeding | |
dspace.entity.type | Publication | |
local.contributor.authorid | N/A | |
local.contributor.authorid | 0000-0002-2715-2368 | |
local.contributor.kuauthor | Khaki, Hossein | |
local.contributor.kuauthor | Erzin, Engin | |
relation.isOrgUnitOfPublication | 89352e43-bf09-4ef4-82f6-6f9d0174ebae | |
relation.isOrgUnitOfPublication.latestForDiscovery | 89352e43-bf09-4ef4-82f6-6f9d0174ebae |