Publication:
Multimodal prediction of head nods in dyadic conversations

dc.contributor.department: Department of Computer Engineering
dc.contributor.department: Graduate School of Sciences and Engineering
dc.contributor.kuauthor: Erzin, Engin
dc.contributor.kuauthor: Sezgin, Tevfik Metin
dc.contributor.kuauthor: Türker, Bekir Berker
dc.contributor.kuauthor: Yemez, Yücel
dc.contributor.schoolcollegeinstitute: College of Engineering
dc.contributor.schoolcollegeinstitute: GRADUATE SCHOOL OF SCIENCES AND ENGINEERING
dc.date.accessioned: 2024-11-09T23:37:35Z
dc.date.issued: 2018
dc.description.abstract: Non-verbal expressions in human interactions carry important messages. These messages, which constitute a significant part of the information to be transferred, are not used effectively by machines in human-robot/agent interaction. This study aims to predict potential head nod moments for a robot/agent and thereby to develop more human-like interfaces. To this end, acoustic feature extraction and social signal annotation are carried out on human-human dyadic conversations. A fixed history window preceding each head nod instance is fed to a binary classifier. Upon classification by Support Vector Machines, 'potential head nod' or 'no head nod' outputs are obtained. More than half of the head nods are successfully predicted as 'potential head nod', which yields promising results for human-like robots/agents.
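The pipeline the abstract describes (a fixed history window of acoustic features fed to a Support Vector Machine for binary 'potential head nod' vs. 'no head nod' classification) can be sketched as below. This is an illustrative example only: the data is synthetic, and the window length, feature dimensionality, and kernel are hypothetical choices, not the authors' actual setup.

```python
# Illustrative sketch (not the paper's code): SVM binary classification of
# head-nod moments from a flattened history window of acoustic features.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical setup: each sample is a 20-frame history window with 13
# acoustic features per frame (e.g. MFCC-like), flattened to one vector.
n_samples, n_frames, n_feats = 400, 20, 13
X = rng.normal(size=(n_samples, n_frames * n_feats))
y = rng.integers(0, 2, size=n_samples)  # 1 = head nod follows, 0 = no nod
X[y == 1] += 0.5                        # shift one class so it is learnable

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0
)

clf = SVC(kernel="rbf")                 # binary SVM classifier
clf.fit(X_tr, y_tr)
pred = clf.predict(X_te)                # 'potential head nod' vs 'no head nod'
accuracy = (pred == y_te).mean()
```

In the actual study the input would be real acoustic features aligned with annotated head-nod instances, and performance would be judged by how many true nods are predicted as 'potential head nod'.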
dc.description.indexedby: WOS
dc.description.indexedby: Scopus
dc.description.openaccess: YES
dc.description.publisherscope: International
dc.description.sponsoredbyTubitakEu: N/A
dc.description.sponsorship: Aselsan
dc.description.sponsorship: et al.
dc.description.sponsorship: Huawei
dc.description.sponsorship: IEEE Signal Processing Society
dc.description.sponsorship: IEEE Turkey Section
dc.description.sponsorship: Netas
dc.identifier.doi: 10.1109/SIU.2018.8404737
dc.identifier.isbn: 978-1-5386-1501-0
dc.identifier.scopus: 2-s2.0-85050808573
dc.identifier.uri: https://doi.org/10.1109/SIU.2018.8404737
dc.identifier.uri: https://hdl.handle.net/20.500.14288/12854
dc.identifier.wos: 511448500590
dc.keywords: Backchannels
dc.keywords: Head nodding
dc.keywords: Human-computer interaction
dc.keywords: Intention recognition
dc.keywords: Non-verbal expressions
dc.keywords: Social signal processing
dc.language.iso: tur
dc.publisher: Institute of Electrical and Electronics Engineers (IEEE)
dc.relation.ispartof: 26th IEEE Signal Processing and Communications Applications Conference, SIU 2018
dc.subject: Civil engineering
dc.subject: Electrical electronics engineering
dc.subject: Telecommunication
dc.title: Multimodal prediction of head nods in dyadic conversations
dc.title.alternative: İkili iletişimde olası kafa sallama anlarının çok kipli kestirimi
dc.type: Conference Proceeding
dspace.entity.type: Publication
local.contributor.kuauthor: Türker, Bekir Berker
local.contributor.kuauthor: Sezgin, Tevfik Metin
local.contributor.kuauthor: Yemez, Yücel
local.contributor.kuauthor: Erzin, Engin
local.publication.orgunit1: GRADUATE SCHOOL OF SCIENCES AND ENGINEERING
local.publication.orgunit1: College of Engineering
local.publication.orgunit2: Department of Computer Engineering
local.publication.orgunit2: Graduate School of Sciences and Engineering
relation.isOrgUnitOfPublication: 89352e43-bf09-4ef4-82f6-6f9d0174ebae
relation.isOrgUnitOfPublication: 3fc31c89-e803-4eb1-af6b-6258bc42c3d8
relation.isOrgUnitOfPublication.latestForDiscovery: 89352e43-bf09-4ef4-82f6-6f9d0174ebae
relation.isParentOrgUnitOfPublication: 8e756b23-2d4a-4ce8-b1b3-62c794a8c164
relation.isParentOrgUnitOfPublication: 434c9663-2b11-4e66-9399-c863e2ebae43
relation.isParentOrgUnitOfPublication.latestForDiscovery: 8e756b23-2d4a-4ce8-b1b3-62c794a8c164