Publication: Affect recognition from lip articulations
dc.contributor.department | N/A | |
dc.contributor.department | Department of Computer Engineering | |
dc.contributor.kuauthor | Sadiq, Rizwan | |
dc.contributor.kuauthor | Erzin, Engin | |
dc.contributor.kuprofile | PhD Student | |
dc.contributor.kuprofile | Faculty Member | |
dc.contributor.other | Department of Computer Engineering | |
dc.contributor.schoolcollegeinstitute | Graduate School of Sciences and Engineering | |
dc.contributor.schoolcollegeinstitute | College of Engineering | |
dc.contributor.yokid | N/A | |
dc.contributor.yokid | 34503 | |
dc.date.accessioned | 2024-11-09T23:15:06Z | |
dc.date.issued | 2017 | |
dc.description.abstract | Lips deliver visually active clues for speech articulation. Affective states define how humans articulate speech; hence, they also change the articulation of lip motion. In this paper, we investigate the effect of phonetic classes on affect recognition from lip articulations. The affect recognition problem is formalized over discrete activation, valence and dominance attributes. We use the symmetric Kullback-Leibler divergence (KLD) to rate phonetic classes with larger discrimination across different affective states. We perform experimental evaluations using the IEMOCAP database. Our results demonstrate that lip articulations over a set of discriminative phonetic classes improve the affect recognition performance and attain 3-class recognition rates for the activation, valence and dominance (AVD) attributes of 72.16%, 46.44% and 64.92%, respectively. | |
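For reference, the symmetric Kullback-Leibler divergence named in the abstract has the standard form below, where P and Q would correspond to lip-feature distributions of two affective states within a phonetic class; this is the conventional definition only, as the record does not specify the paper's exact feature distributions or estimation procedure.

\[
D_{\mathrm{sym}}(P \,\|\, Q) \;=\; D_{\mathrm{KL}}(P \,\|\, Q) + D_{\mathrm{KL}}(Q \,\|\, P)
\;=\; \sum_{x} \bigl(p(x) - q(x)\bigr)\,\log\frac{p(x)}{q(x)}
\]

Larger values of this divergence indicate phonetic classes whose lip articulation differs more strongly between affective states, which is the ranking criterion described in the abstract.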
dc.description.indexedby | WoS | |
dc.description.indexedby | Scopus | |
dc.description.openaccess | YES | |
dc.description.publisherscope | International | |
dc.description.sponsorship | The Institute of Electrical and Electronics Engineers Signal Processing Society | |
dc.identifier.doi | 10.1109/ICASSP.2017.7952593 | |
dc.identifier.isbn | 978-1-5090-4117-6 | |
dc.identifier.issn | 1520-6149 | |
dc.identifier.link | https://www.scopus.com/inward/record.uri?eid=2-s2.0-85023774904&doi=10.1109%2fICASSP.2017.7952593&partnerID=40&md5=a181e952a1e72f4f0ed84942266388a0 | |
dc.identifier.scopus | 2-s2.0-85023774904 | |
dc.identifier.uri | http://dx.doi.org/10.1109/ICASSP.2017.7952593 | |
dc.identifier.uri | https://hdl.handle.net/20.500.14288/10268 | |
dc.identifier.wos | 414286202122 | |
dc.keywords | Affect recognition | |
dc.keywords | Emotion recognition | |
dc.keywords | Kullback-Leibler divergence | |
dc.keywords | Lip articulations | |
dc.keywords | Phoneme | |
dc.language | English | |
dc.publisher | Institute of Electrical and Electronics Engineers (IEEE) | |
dc.source | ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings | |
dc.subject | Acoustics | |
dc.subject | Electrical electronic engineering | |
dc.title | Affect recognition from lip articulations | |
dc.type | Conference proceeding | |
dspace.entity.type | Publication | |
local.contributor.authorid | N/A | |
local.contributor.authorid | 0000-0002-2715-2368 | |
local.contributor.kuauthor | Sadiq, Rizwan | |
local.contributor.kuauthor | Erzin, Engin | |
relation.isOrgUnitOfPublication | 89352e43-bf09-4ef4-82f6-6f9d0174ebae | |
relation.isOrgUnitOfPublication.latestForDiscovery | 89352e43-bf09-4ef4-82f6-6f9d0174ebae |