Publication:
Use of non-verbal vocalizations for continuous emotion recognition from speech and head motion

dc.contributor.department: Department of Computer Engineering
dc.contributor.department: Graduate School of Sciences and Engineering
dc.contributor.kuauthor: Erzin, Engin
dc.contributor.kuauthor: Fatima, Syeda Narjis
dc.contributor.schoolcollegeinstitute: College of Engineering
dc.contributor.schoolcollegeinstitute: GRADUATE SCHOOL OF SCIENCES AND ENGINEERING
dc.date.accessioned: 2024-11-09T23:35:04Z
dc.date.issued: 2019
dc.description.abstract: Dyadic interactions reflect mutual engagement between their participants through verbal and non-verbal voicing cues. This study investigates the effect of these cues on continuous emotion recognition (CER) using speech and head motion data. We exploit the non-verbal vocalizations extracted from speech as a complementary source of information and investigate their effect on the CER problem using Gaussian mixture and convolutional neural network based regression frameworks. Our methods are evaluated on the CreativeIT database, which consists of speech and full-body motion capture recorded under dyadic interaction settings. Head motion, acoustic features of speech, and histograms of non-verbal vocalizations are employed to estimate activation, valence, and dominance attributes for the CER problem. Our experimental evaluations indicate a strong improvement in CER performance, especially for the activation attribute, with the use of non-verbal vocalization cues of speech.
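The abstract names Gaussian mixture regression (GMR) as one of the two regression frameworks for estimating the continuous attributes. The Python sketch below is a minimal, hypothetical illustration of GMR in general, not the paper's actual configuration: a GMM is fit on the joint [features, attribute] space and the attribute is predicted as the conditional expectation given the features. The feature dimension, component count, and toy data are illustrative assumptions.

import numpy as np
from scipy.stats import multivariate_normal
from sklearn.mixture import GaussianMixture

def fit_gmr(X, y, n_components=8, seed=0):
    # Fit a GMM on the joint [features, attribute] vectors.
    Z = np.hstack([X, y.reshape(-1, 1)])
    gmm = GaussianMixture(n_components=n_components,
                          covariance_type="full", random_state=seed)
    gmm.fit(Z)
    return gmm

def gmr_predict(gmm, X):
    # Conditional expectation E[y | x] under the joint GMM:
    # a responsibility-weighted sum of per-component linear regressors.
    n, d = X.shape
    K = gmm.n_components
    resp = np.zeros((n, K))   # component responsibilities from the marginal p(x)
    cond = np.zeros((n, K))   # per-component conditional means of y given x
    for k in range(K):
        mu_x, mu_y = gmm.means_[k, :d], gmm.means_[k, d]
        S = gmm.covariances_[k]
        Sxx, Sxy = S[:d, :d], S[:d, d]
        resp[:, k] = gmm.weights_[k] * multivariate_normal.pdf(X, mu_x, Sxx)
        cond[:, k] = mu_y + (X - mu_x) @ np.linalg.solve(Sxx, Sxy)
    resp /= resp.sum(axis=1, keepdims=True)
    return (resp * cond).sum(axis=1)

# Toy usage with made-up data: 500 frames of a 10-dim fused feature vector
# (standing in for acoustic + head-motion + non-verbal vocalization
# histogram features) and one continuous attribute (e.g., activation).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
y = 0.5 * X[:, 0] + rng.normal(scale=0.1, size=500)
gmm = fit_gmr(X, y)
y_hat = gmr_predict(gmm, X)

Framing the mapping as a conditional expectation under a joint GMM is what distinguishes GMR from fitting a single global linear regressor: each mixture component contributes its own local linear predictor, weighted by how likely the input frame is under that component.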
dc.description.indexedby: WOS
dc.description.indexedby: Scopus
dc.description.openaccess: NO
dc.description.publisherscope: International
dc.description.sponsoredbyTubitakEu: N/A
dc.identifier.isbn: 978-1-5386-9490-9
dc.identifier.issn: 2156-2318
dc.identifier.scopus: 2-s2.0-85073066888
dc.identifier.uri: https://hdl.handle.net/20.500.14288/12463
dc.identifier.wos: 555952600080
dc.keywords: Continuous emotion recognition
dc.keywords: Non-verbal vocalizations
dc.keywords: Gaussian mixture regression
dc.keywords: Convolutional neural networks
dc.keywords: Dyadic interactions
dc.keywords: Body motion
dc.keywords: Sighs
dc.keywords: Cues
dc.language.iso: eng
dc.publisher: IEEE
dc.relation.ispartof: Proceedings of the 2019 14th IEEE Conference on Industrial Electronics and Applications (ICIEA 2019)
dc.subject: Engineering
dc.subject: Industrial engineering
dc.subject: Electrical and electronic engineering
dc.title: Use of non-verbal vocalizations for continuous emotion recognition from speech and head motion
dc.type: Conference Proceeding
dspace.entity.type: Publication
local.contributor.kuauthor: Fatima, Syeda Narjis
local.contributor.kuauthor: Erzin, Engin
local.publication.orgunit1: GRADUATE SCHOOL OF SCIENCES AND ENGINEERING
local.publication.orgunit1: College of Engineering
local.publication.orgunit2: Department of Computer Engineering
local.publication.orgunit2: Graduate School of Sciences and Engineering
relation.isOrgUnitOfPublication: 89352e43-bf09-4ef4-82f6-6f9d0174ebae
relation.isOrgUnitOfPublication: 3fc31c89-e803-4eb1-af6b-6258bc42c3d8
relation.isOrgUnitOfPublication.latestForDiscovery: 89352e43-bf09-4ef4-82f6-6f9d0174ebae
relation.isParentOrgUnitOfPublication: 8e756b23-2d4a-4ce8-b1b3-62c794a8c164
relation.isParentOrgUnitOfPublication: 434c9663-2b11-4e66-9399-c863e2ebae43
relation.isParentOrgUnitOfPublication.latestForDiscovery: 8e756b23-2d4a-4ce8-b1b3-62c794a8c164
