Publication: Agreement and disagreement classification of dyadic interactions using vocal and gestural cues
dc.contributor.coauthor | N/A | |
dc.contributor.department | N/A | |
dc.contributor.department | N/A | |
dc.contributor.department | Department of Computer Engineering | |
dc.contributor.kuauthor | Khaki, Hossein | |
dc.contributor.kuauthor | Bozkurt, Elif | |
dc.contributor.kuauthor | Erzin, Engin | |
dc.contributor.kuprofile | PhD Student | |
dc.contributor.kuprofile | PhD Student | |
dc.contributor.kuprofile | Faculty Member | |
dc.contributor.other | Department of Computer Engineering | |
dc.contributor.schoolcollegeinstitute | Graduate School of Sciences and Engineering | |
dc.contributor.schoolcollegeinstitute | Graduate School of Sciences and Engineering | |
dc.contributor.schoolcollegeinstitute | College of Engineering | |
dc.contributor.yokid | N/A | |
dc.contributor.yokid | N/A | |
dc.contributor.yokid | 34503 | |
dc.date.accessioned | 2024-11-10T00:11:19Z | |
dc.date.issued | 2016 | |
dc.description.abstract | In human-to-human communication, gesture and speech co-exist in time with tight synchrony, and we tend to use gestures to complement or emphasize speech. In this study, we investigate the roles of vocal and gestural cues in identifying a dyadic interaction as agreement or disagreement. For this investigation we use the JESTKOD database, which consists of speech and full-body motion capture recordings of dyadic interactions under agreement and disagreement scenarios. Spectral features of the vocal channel and upper-body joint angles of the gestural channel are employed to obtain unimodal and multimodal classification performances. Both modalities attain classification rates significantly above chance level, and the multimodal classifier achieves a classification rate above 80% over 15-second utterances using statistical features of speech and motion. | |
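The abstract describes unimodal and multimodal agreement/disagreement classification, but this record contains no implementation details. The sketch below is a minimal illustration of that general setup, not the authors' pipeline: it assumes utterance-level statistical features have already been extracted for each modality, uses scikit-learn SVMs, and substitutes synthetic placeholder data for the JESTKOD features; all sizes and names are hypothetical.

```python
# Illustrative sketch only: unimodal classifiers plus simple feature-level fusion
# for agreement/disagreement classification. Synthetic data stands in for the
# JESTKOD corpus features; this is NOT the authors' actual method.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_utterances = 200  # hypothetical number of 15-second utterances

# Placeholder utterance-level statistical features (e.g., means/stds of spectral
# features and of upper-body joint angles); real features would come from the corpus.
vocal_feats = rng.normal(size=(n_utterances, 26))
gesture_feats = rng.normal(size=(n_utterances, 40))
labels = rng.integers(0, 2, size=n_utterances)  # 0 = disagreement, 1 = agreement

def evaluate(features, labels, name):
    """Report cross-validated accuracy for one feature set."""
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    scores = cross_val_score(clf, features, labels, cv=5)
    print(f"{name}: {scores.mean():.2f} +/- {scores.std():.2f}")

# Unimodal classifiers
evaluate(vocal_feats, labels, "vocal only")
evaluate(gesture_feats, labels, "gesture only")

# Multimodal classifier via feature concatenation (one plausible fusion strategy)
evaluate(np.hstack([vocal_feats, gesture_feats]), labels, "multimodal fusion")
```

With real corpus features in place of the synthetic arrays, the same comparison of unimodal versus fused classifiers mirrors the evaluation the abstract reports, though the paper's actual feature extraction and fusion choices may differ.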
dc.description.indexedby | WoS | |
dc.description.indexedby | Scopus | |
dc.description.openaccess | YES | |
dc.description.publisherscope | International | |
dc.description.sponsorship | The Institute of Electrical and Electronics Engineers Signal Processing Society | |
dc.description.volume | 2016-May | |
dc.identifier.doi | 10.1109/ICASSP.2016.7472180 | |
dc.identifier.isbn | 978-1-4799-9988-0 | |
dc.identifier.issn | 1520-6149 | |
dc.identifier.link | https://www.scopus.com/inward/record.uri?eid=2-s2.0-84973402468&doi=10.1109%2fICASSP.2016.7472180&partnerID=40&md5=d0caa94ef4d2aa4a681109cb21def033 | |
dc.identifier.scopus | 2-s2.0-84973402468 | |
dc.identifier.uri | http://dx.doi.org/10.1109/ICASSP.2016.7472180 | |
dc.identifier.uri | https://hdl.handle.net/20.500.14288/17464 | |
dc.identifier.wos | 388373402181 | |
dc.keywords | Gesticulation | |
dc.keywords | Speech | |
dc.keywords | Affective state tracking | |
dc.keywords | Human-computer interaction | |
dc.keywords | Dyadic interaction | |
dc.language | English | |
dc.publisher | Institute of Electrical and Electronics Engineers (IEEE) | |
dc.source | ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings | |
dc.subject | Acoustics | |
dc.subject | Engineering | |
dc.subject | Electrical and electronic engineering | |
dc.title | Agreement and disagreement classification of dyadic interactions using vocal and gestural cues | |
dc.type | Conference proceeding | |
dspace.entity.type | Publication | |
local.contributor.authorid | N/A | |
local.contributor.authorid | N/A | |
local.contributor.authorid | 0000-0002-2715-2368 | |
local.contributor.kuauthor | Khaki, Hossein | |
local.contributor.kuauthor | Bozkurt, Elif | |
local.contributor.kuauthor | Erzin, Engin | |
relation.isOrgUnitOfPublication | 89352e43-bf09-4ef4-82f6-6f9d0174ebae | |
relation.isOrgUnitOfPublication.latestForDiscovery | 89352e43-bf09-4ef4-82f6-6f9d0174ebae |