Publication:
Cross-subject continuous emotion recognition using speech and body motion in dyadic interactions

dc.contributor.coauthor: N/A
dc.contributor.department: N/A
dc.contributor.department: Department of Computer Engineering
dc.contributor.kuauthor: Fatima, Syeda Narjis
dc.contributor.kuauthor: Erzin, Engin
dc.contributor.kuprofile: PhD Student
dc.contributor.kuprofile: Faculty Member
dc.contributor.other: Department of Computer Engineering
dc.contributor.schoolcollegeinstitute: Graduate School of Sciences and Engineering
dc.contributor.schoolcollegeinstitute: College of Engineering
dc.contributor.yokid: N/A
dc.contributor.yokid: 34503
dc.date.accessioned: 2024-11-09T23:40:04Z
dc.date.issued: 2017
dc.description.abstract: Dyadic interactions encapsulate rich emotional exchange between interlocutors, suggesting a multimodal, cross-speaker and cross-dimensional continuous emotion dependency. This study explores the dynamic inter-attribute emotional dependency at the cross-subject level, with implications for continuous emotion recognition based on speech and body motion cues. We propose a novel two-stage Gaussian Mixture Model mapping framework for the continuous emotion recognition problem. In the first stage, we perform continuous emotion recognition (CER) of both speakers from the speech and body motion modalities to estimate the activation, valence and dominance (AVD) attributes. In the second stage, we improve the first-stage estimates by performing CER of the selected speaker using her/his speech and body motion modalities as well as the estimated affective attribute(s) of the other speaker. Our experimental evaluations indicate that the second stage, cross-subject continuous emotion recognition (CSCER), provides complementary information to recognize the affective state and delivers promising improvements for the continuous emotion recognition problem.
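
The framework described in the abstract rests on Gaussian mixture regression (GMR): a joint GMM is trained over feature and attribute vectors, and the activation, valence and dominance values are estimated as the conditional expectation of the attributes given the observed speech and body motion features. The Python sketch below is a minimal, hypothetical illustration of one such GMM mapping stage; the helper names (fit_gmr, gmr_predict), component count and feature dimensions are illustrative assumptions, not the authors' implementation.

    import numpy as np
    from scipy.stats import multivariate_normal
    from sklearn.mixture import GaussianMixture

    def fit_gmr(X, Y, n_components=8, seed=0):
        # Fit a joint GMM over [X | Y] so that E[Y | X] can be evaluated later.
        Z = np.hstack([X, Y])
        gmm = GaussianMixture(n_components=n_components,
                              covariance_type="full",
                              random_state=seed).fit(Z)
        return gmm, X.shape[1]

    def gmr_predict(gmm, dx, X_new):
        # Standard GMR: mixture of component-wise conditional means of Y given x.
        preds = []
        for x in X_new:
            cond_means, resp = [], []
            for k in range(gmm.n_components):
                mu, S = gmm.means_[k], gmm.covariances_[k]
                mu_x, mu_y = mu[:dx], mu[dx:]
                S_xx, S_yx = S[:dx, :dx], S[dx:, :dx]
                # Conditional mean of Y given x under component k.
                cond_means.append(mu_y + S_yx @ np.linalg.solve(S_xx, x - mu_x))
                # Responsibility of component k for x (marginal on the X block).
                resp.append(gmm.weights_[k] *
                            multivariate_normal.pdf(x, mean=mu_x, cov=S_xx))
            resp = np.asarray(resp) / np.sum(resp)
            preds.append(resp @ np.asarray(cond_means))
        return np.asarray(preds)

    # Hypothetical usage: frame-level speech/body-motion features X, AVD targets Y.
    # gmm, dx = fit_gmr(X_train, Y_train)
    # avd_hat = gmr_predict(gmm, dx, X_test)

Under the same assumptions, a second, cross-subject stage would append the other speaker's first-stage attribute estimates to the input feature vector before refitting, mirroring the CSCER idea described in the abstract.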
dc.description.indexedby: WoS
dc.description.indexedby: Scopus
dc.description.openaccess: NO
dc.description.publisherscope: International
dc.description.sponsoredbyTubitakEu: N/A
dc.identifier.doi: 10.21437/Interspeech.2017-1413
dc.identifier.isbn: 978-1-5108-4876-4
dc.identifier.issn: 2308-457X
dc.identifier.quartile: N/A
dc.identifier.scopus: 2-s2.0-85039154284
dc.identifier.uri: http://dx.doi.org/10.21437/Interspeech.2017-1413
dc.identifier.uri: https://hdl.handle.net/20.500.14288/13229
dc.identifier.wos: 457505000357
dc.keywords: Continuous emotion recognition
dc.keywords: Dyadic emotion estimator
dc.keywords: Side emotional information
dc.keywords: Cross-subject continuous emotion recognition (CSCER)
dc.keywords: Gaussian mixture regression
dc.keywords: Activation
dc.keywords: Valence
dc.keywords: Dominance
dc.keywords: FACE
dc.language: English
dc.publisher: International Speech Communication Association (ISCA)
dc.source: 18th Annual Conference of the International Speech Communication Association (INTERSPEECH 2017)
dc.subject: Computer science
dc.subject: Artificial intelligence
dc.subject: Engineering
dc.subject: Electrical electronic engineering
dc.title: Cross-subject continuous emotion recognition using speech and body motion in dyadic interactions
dc.type: Conference proceeding
dspace.entity.type: Publication
local.contributor.authorid: N/A
local.contributor.authorid: 0000-0002-2715-2368
local.contributor.kuauthor: Fatima, Syeda Narjis
local.contributor.kuauthor: Erzin, Engin
relation.isOrgUnitOfPublication: 89352e43-bf09-4ef4-82f6-6f9d0174ebae
relation.isOrgUnitOfPublication.latestForDiscovery: 89352e43-bf09-4ef4-82f6-6f9d0174ebae