Publication: The use of lip motion for biometric speaker identification
Alternative Title
Biyometrik konuşmacı tanıma için dudak devinimi kullanımı
Abstract
This paper addresses the selection of the best lip motion features for biometric open-set speaker identification, where the best features are those that yield the highest discrimination of individual speakers in a population. We first detect the face region in each video frame. The lip region in each frame is then segmented after successive face regions are registered by global motion compensation. The initial lip feature vector is composed of the 2D-DCT coefficients of the optical flow vectors within the lip region at each frame. We propose to select the most discriminative features from the full set of transform coefficients using a probabilistic measure that maximizes the ratio of intra-class to inter-class probabilities. Experimental results show that the resulting reduced-dimension discriminative feature vector improves identification performance.
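As a rough illustration of the pipeline summarized in the abstract, the sketch below assumes pre-cropped, motion-compensated lip-region frames, OpenCV's Farneback dense optical flow, and SciPy's 2D DCT. The per-coefficient variance-ratio score is only a simplified stand-in for the paper's probabilistic intra-/inter-class measure, and the function names are hypothetical, not taken from the paper.

```python
# Minimal sketch of a lip-motion feature pipeline (illustrative only).
# Assumes grayscale uint8 lip-region frames that are already cropped and
# motion-compensated, as described in the abstract.
import numpy as np
import cv2                      # dense optical flow (Farneback)
from scipy.fft import dctn      # 2D discrete cosine transform

def lip_motion_features(prev_lip, curr_lip, n_coeffs=8):
    """2D-DCT coefficients of the optical flow field between two lip frames."""
    flow = cv2.calcOpticalFlowFarneback(
        prev_lip, curr_lip, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    # Transform each flow component and keep the low-frequency block.
    feats = []
    for comp in (flow[..., 0], flow[..., 1]):
        coeffs = dctn(comp, norm='ortho')
        feats.append(coeffs[:n_coeffs, :n_coeffs].ravel())
    return np.concatenate(feats)

def discrimination_scores(features, labels):
    """Per-coefficient ratio of inter-class to intra-class variance.

    A simplified stand-in for the paper's probabilistic measure; higher
    scores indicate coefficients that separate speakers more strongly.
    """
    features, labels = np.asarray(features), np.asarray(labels)
    overall_mean = features.mean(axis=0)
    inter = np.zeros(features.shape[1])
    intra = np.zeros(features.shape[1])
    for cls in np.unique(labels):
        cls_feats = features[labels == cls]
        cls_mean = cls_feats.mean(axis=0)
        inter += len(cls_feats) * (cls_mean - overall_mean) ** 2
        intra += ((cls_feats - cls_mean) ** 2).sum(axis=0)
    return inter / (intra + 1e-12)

# Keep only the d most discriminative coefficients (reduced-dimension vector):
# selected = np.argsort(discrimination_scores(train_feats, train_labels))[::-1][:d]
```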
Publisher
IEEE
Subject
Electrical and electronics engineering, Computer engineering
Source
Proceedings of the IEEE 12th Signal Processing and Communications Applications Conference, SIU 2004
DOI
10.1109/SIU.2004.1338280