Publication: Multimodal prediction of head nods in dyadic conversations
dc.contributor.department | Department of Computer Engineering | |
dc.contributor.department | Graduate School of Sciences and Engineering | |
dc.contributor.kuauthor | Erzin, Engin | |
dc.contributor.kuauthor | Sezgin, Tevfik Metin | |
dc.contributor.kuauthor | Türker, Bekir Berker | |
dc.contributor.kuauthor | Yemez, Yücel | |
dc.contributor.schoolcollegeinstitute | College of Engineering | |
dc.contributor.schoolcollegeinstitute | GRADUATE SCHOOL OF SCIENCES AND ENGINEERING | |
dc.date.accessioned | 2024-11-09T23:37:35Z | |
dc.date.issued | 2018 | |
dc.description.abstract | Non-verbal expressions in human interactions carry important messages. These messages, which constitute a significant part of the information to be transferred, are not used effectively by machines in human-robot/agent interaction. In this study, the purpose is to predict potential head nod moments for a robot/agent and thereby to develop more human-like interfaces. To achieve this, acoustic feature extraction and social signal annotation are carried out on human-human dyadic conversations. A fixed history window preceding each head nod instance is fed to a binary classifier. Upon classification by Support Vector Machines, 'potential head nod' or 'no head nod' outputs are obtained. More than half of the head nods are successfully predicted as 'potential head nod', which yields promising results for human-like robots/agents. | |
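The abstract describes feeding a fixed history window of acoustic features into a binary classifier. A minimal sketch of that windowing step, with illustrative names and toy feature values (the paper's actual features, window length, and SVM configuration are not given in this record):

```python
# Hypothetical sketch: for each candidate moment, the `window_size`
# preceding acoustic feature frames are flattened into one input vector
# for a binary 'potential head nod' / 'no head nod' classifier (the
# paper uses Support Vector Machines; this only shows the windowing).

def history_window(frames, index, window_size):
    """Flatten the `window_size` frames ending at `index` into one
    feature vector; return None when there is not enough history."""
    if index + 1 < window_size:
        return None
    window = frames[index + 1 - window_size : index + 1]
    # Concatenate the per-frame feature vectors into a single input.
    return [value for frame in window for value in frame]

# Toy per-frame acoustic features (e.g. energy, pitch) for 5 frames.
frames = [[0.1, 120.0], [0.2, 125.0], [0.3, 130.0],
          [0.4, 128.0], [0.5, 127.0]]

vec = history_window(frames, index=3, window_size=3)
# `vec` covers frames 1..3, flattened to a 6-dimensional vector,
# which would then be passed to the SVM for binary classification.
```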
dc.description.indexedby | WOS | |
dc.description.indexedby | Scopus | |
dc.description.openaccess | YES | |
dc.description.publisherscope | International | |
dc.description.sponsoredbyTubitakEu | N/A | |
dc.description.sponsorship | Aselsan | |
dc.description.sponsorship | et al. | |
dc.description.sponsorship | Huawei | |
dc.description.sponsorship | IEEE Signal Processing Society | |
dc.description.sponsorship | IEEE Turkey Section | |
dc.description.sponsorship | Netas | |
dc.identifier.doi | 10.1109/SIU.2018.8404737 | |
dc.identifier.isbn | 978-1-5386-1501-0 | |
dc.identifier.scopus | 2-s2.0-85050808573 | |
dc.identifier.uri | https://doi.org/10.1109/SIU.2018.8404737 | |
dc.identifier.uri | https://hdl.handle.net/20.500.14288/12854 | |
dc.identifier.wos | 511448500590 | |
dc.keywords | Backchannels | |
dc.keywords | Head nodding | |
dc.keywords | Human-Computer interaction | |
dc.keywords | Intention recognition | |
dc.keywords | Non-verbal expressions | |
dc.keywords | Social signal processing | |
dc.language.iso | tur | |
dc.publisher | Institute of Electrical and Electronics Engineers (IEEE) | |
dc.relation.ispartof | 26th IEEE Signal Processing and Communications Applications Conference, SIU 2018 | |
dc.subject | Civil engineering | |
dc.subject | Electrical electronics engineering | |
dc.subject | Telecommunication | |
dc.title | Multimodal prediction of head nods in dyadic conversations | |
dc.title.alternative | İkili iletişimde olası kafa sallama anlarının çok kipli kestirimi | |
dc.type | Conference Proceeding | |
dspace.entity.type | Publication | |
local.contributor.kuauthor | Türker, Bekir Berker | |
local.contributor.kuauthor | Sezgin, Tevfik Metin | |
local.contributor.kuauthor | Yemez, Yücel | |
local.contributor.kuauthor | Erzin, Engin | |
local.publication.orgunit1 | GRADUATE SCHOOL OF SCIENCES AND ENGINEERING | |
local.publication.orgunit1 | College of Engineering | |
local.publication.orgunit2 | Department of Computer Engineering | |
local.publication.orgunit2 | Graduate School of Sciences and Engineering | |
relation.isOrgUnitOfPublication | 89352e43-bf09-4ef4-82f6-6f9d0174ebae | |
relation.isOrgUnitOfPublication | 3fc31c89-e803-4eb1-af6b-6258bc42c3d8 | |
relation.isOrgUnitOfPublication.latestForDiscovery | 89352e43-bf09-4ef4-82f6-6f9d0174ebae | |
relation.isParentOrgUnitOfPublication | 8e756b23-2d4a-4ce8-b1b3-62c794a8c164 | |
relation.isParentOrgUnitOfPublication | 434c9663-2b11-4e66-9399-c863e2ebae43 | |
relation.isParentOrgUnitOfPublication.latestForDiscovery | 8e756b23-2d4a-4ce8-b1b3-62c794a8c164 |