Publication:
Robust lip-motion features for speaker identification

dc.contributor.department: N/A
dc.contributor.department: Department of Computer Engineering
dc.contributor.department: Department of Computer Engineering
dc.contributor.department: Department of Electrical and Electronics Engineering
dc.contributor.kuauthor: Çetingül, Hasan Ertan
dc.contributor.kuauthor: Yemez, Yücel
dc.contributor.kuauthor: Erzin, Engin
dc.contributor.kuprofile: Master Student
dc.contributor.kuprofile: Faculty Member
dc.contributor.kuprofile: Faculty Member
dc.contributor.kuprofile: Faculty Member
dc.contributor.other: Department of Computer Engineering
dc.contributor.other: Department of Electrical and Electronics Engineering
dc.contributor.schoolcollegeinstitute: Graduate School of Sciences and Engineering
dc.contributor.schoolcollegeinstitute: College of Engineering
dc.contributor.schoolcollegeinstitute: College of Engineering
dc.contributor.schoolcollegeinstitute: College of Engineering
dc.contributor.yokid: N/A
dc.contributor.yokid: 107907
dc.contributor.yokid: 34503
dc.contributor.yokid: 26207
dc.date.accessioned: 2024-11-10T00:06:07Z
dc.date.issued: 2005
dc.description.abstract: This paper addresses the selection of robust lip-motion features for the audio-visual open-set speaker identification problem. We consider two alternatives for the initial lip-motion representation. In the first alternative, the feature vector is composed of the 2D-DCT coefficients of the motion vectors estimated within the detected rectangular mouth region, whereas in the second, lip boundaries are tracked over the video frames and only the motion vectors around the lip contour are taken into account, along with the shape of the lip boundary. Experimental results of the HMM-based identification system are included for a performance comparison of the two lip-motion representation alternatives.
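
A minimal sketch of the first representation alternative described in the abstract, assuming the motion vectors inside the detected mouth region are already available (e.g. from block matching or optical flow); the function name, array sizes, and the choice of an 8x8 low-frequency DCT block are illustrative assumptions, not the authors' exact pipeline:

# Sketch (not the authors' exact pipeline): build a lip-motion feature vector
# from the 2D-DCT coefficients of the motion field inside the mouth region.
import numpy as np
from scipy.fft import dctn

def dct_motion_features(mv_x, mv_y, keep=8):
    """mv_x, mv_y: 2D arrays of horizontal/vertical motion vectors estimated
    over the rectangular mouth region for one frame pair (assumed given).
    Returns the lowest-frequency 2D-DCT coefficients of each component."""
    feats = []
    for comp in (mv_x, mv_y):
        coeffs = dctn(comp, type=2, norm="ortho")   # 2D-DCT of the motion field
        feats.append(coeffs[:keep, :keep].ravel())  # keep a low-frequency block
    return np.concatenate(feats)

# Example with a synthetic 16x16 motion field for one frame pair.
rng = np.random.default_rng(0)
mv_x = rng.normal(size=(16, 16))
mv_y = rng.normal(size=(16, 16))
f = dct_motion_features(mv_x, mv_y, keep=8)
print(f.shape)  # (128,) -> one observation of the per-frame feature sequence

In the setup the abstract describes, one such vector per frame pair would form the observation sequence fed to the HMM-based identification system; the second alternative would instead restrict the motion vectors to the tracked lip contour and append lip-shape parameters.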
dc.description.indexedby: WoS
dc.description.indexedby: Scopus
dc.description.openaccess: NO
dc.description.publisherscope: International
dc.identifier.doi: N/A
dc.identifier.isbn: 0-7803-8874-7
dc.identifier.issn: 1520-6149
dc.identifier.scopus: 2-s2.0-33646818965
dc.identifier.uri: https://hdl.handle.net/20.500.14288/16559
dc.identifier.wos: 229404200128
dc.keywords: Speech
dc.language: English
dc.publisher: IEEE
dc.source: 2005 IEEE International Conference On Acoustics, Speech, And Signal Processing, Vols 1-5: Speech Processing
dc.subject: Computer science
dc.subject: Artificial intelligence
dc.subject: Engineering
dc.subject: Electrical electronic engineering
dc.title: Robust lip-motion features for speaker identification
dc.type: Conference proceeding
dspace.entity.type: Publication
local.contributor.authorid: N/A
local.contributor.authorid: 0000-0002-7515-3138
local.contributor.authorid: 0000-0002-2715-2368
local.contributor.authorid: 0000-0003-1465-8121
local.contributor.kuauthor: Çetingül, Hasan Ertan
local.contributor.kuauthor: Yemez, Yücel
local.contributor.kuauthor: Erzin, Engin
local.contributor.kuauthor: Tekalp, Ahmet Murat
relation.isOrgUnitOfPublication: 89352e43-bf09-4ef4-82f6-6f9d0174ebae
relation.isOrgUnitOfPublication: 21598063-a7c5-420d-91ba-0cc9b2db0ea0
relation.isOrgUnitOfPublication.latestForDiscovery: 21598063-a7c5-420d-91ba-0cc9b2db0ea0
