Publication:
Agreement and disagreement classification of dyadic interactions using vocal and gestural cues

dc.contributor.coauthor: N/A
dc.contributor.department: N/A
dc.contributor.department: N/A
dc.contributor.department: Department of Computer Engineering
dc.contributor.kuauthor: Khaki, Hossein
dc.contributor.kuauthor: Bozkurt, Elif
dc.contributor.kuauthor: Erzin, Engin
dc.contributor.kuprofile: PhD Student
dc.contributor.kuprofile: PhD Student
dc.contributor.kuprofile: Faculty Member
dc.contributor.other: Department of Computer Engineering
dc.contributor.schoolcollegeinstitute: Graduate School of Sciences and Engineering
dc.contributor.schoolcollegeinstitute: Graduate School of Sciences and Engineering
dc.contributor.schoolcollegeinstitute: College of Engineering
dc.contributor.yokid: N/A
dc.contributor.yokid: N/A
dc.contributor.yokid: 34503
dc.date.accessioned: 2024-11-10T00:11:19Z
dc.date.issued: 2016
dc.description.abstract: In human-to-human communication, gesture and speech co-exist in time with tight synchrony; we tend to use gestures to complement or emphasize speech. In this study, we investigate the roles of vocal and gestural cues in identifying a dyadic interaction as agreement or disagreement. We use the JESTKOD database, which consists of speech and full-body motion-capture recordings of dyadic interactions under agreement and disagreement scenarios. Spectral features of the vocal channel and upper-body joint angles of the gestural channel are employed to evaluate unimodal and multimodal classification performance. Both modalities attain classification rates significantly above chance level, and the multimodal classifier achieves a classification rate of more than 80% on 15-second utterances using statistical features of speech and motion.
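The abstract outlines a multimodal pipeline: frame-level spectral features of the vocal channel and upper-body joint angles of the gestural channel are summarized with statistical features per segment and then classified as agreement or disagreement. The snippet below is a minimal illustrative sketch of that kind of early-fusion setup; the specific functionals (mean, standard deviation, min, max), the SVM classifier, and the helper names statistical_functionals and segment_features are assumptions for illustration, not the authors' published configuration.

```python
# Illustrative sketch only (not the paper's exact pipeline): fuse statistical
# summaries of vocal spectral features and upper-body joint angles, then
# classify each dyadic-interaction segment as agreement vs. disagreement.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC


def statistical_functionals(frames: np.ndarray) -> np.ndarray:
    """Collapse a (num_frames, num_features) sequence into fixed-length statistics."""
    return np.concatenate([frames.mean(axis=0), frames.std(axis=0),
                           frames.min(axis=0), frames.max(axis=0)])


def segment_features(spectral_frames: np.ndarray, joint_angle_frames: np.ndarray) -> np.ndarray:
    """Early fusion: concatenate vocal and gestural statistics for one segment."""
    return np.concatenate([statistical_functionals(spectral_frames),
                           statistical_functionals(joint_angle_frames)])


# X: one fused feature vector per (e.g. 15-second) segment; y: 1 = agreement, 0 = disagreement
# X = np.stack([segment_features(s, g) for s, g in segments]); y = labels
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
# clf.fit(X_train, y_train); accuracy = clf.score(X_test, y_test)
```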
dc.description.indexedby: WoS
dc.description.indexedby: Scopus
dc.description.openaccess: YES
dc.description.publisherscope: International
dc.description.sponsorship: The Institute of Electrical and Electronics Engineers Signal Processing Society
dc.description.volume: 2016-May
dc.identifier.doi: 10.1109/ICASSP.2016.7472180
dc.identifier.isbn: 978-1-4799-9988-0
dc.identifier.issn: 1520-6149
dc.identifier.link: https://www.scopus.com/inward/record.uri?eid=2-s2.0-84973402468&doi=10.1109%2fICASSP.2016.7472180&partnerID=40&md5=d0caa94ef4d2aa4a681109cb21def033
dc.identifier.scopus: 2-s2.0-84973402468
dc.identifier.uri: http://dx.doi.org/10.1109/ICASSP.2016.7472180
dc.identifier.uri: https://hdl.handle.net/20.500.14288/17464
dc.identifier.wos: 388373402181
dc.keywords: Gesticulation
dc.keywords: Speech
dc.keywords: Affective state tracking
dc.keywords: Human-computer interaction
dc.keywords: Dyadic interaction
dc.language: English
dc.publisher: Institute of Electrical and Electronics Engineers (IEEE)
dc.source: ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
dc.subject: Acoustics
dc.subject: Engineering
dc.subject: Electrical and electronic engineering
dc.title: Agreement and disagreement classification of dyadic interactions using vocal and gestural cues
dc.type: Conference proceeding
dspace.entity.type: Publication
local.contributor.authorid: N/A
local.contributor.authorid: N/A
local.contributor.authorid: 0000-0002-2715-2368
local.contributor.kuauthor: Khaki, Hossein
local.contributor.kuauthor: Bozkurt, Elif
local.contributor.kuauthor: Erzin, Engin
relation.isOrgUnitOfPublication: 89352e43-bf09-4ef4-82f6-6f9d0174ebae
relation.isOrgUnitOfPublication.latestForDiscovery: 89352e43-bf09-4ef4-82f6-6f9d0174ebae